Patrick Cannon

I'm a machine learning researcher with a background in Monte Carlo methods. I think the hardest remaining problems in AI are fundamentally problems of sampling and inference: how to search, when to stop, and how to know what you don't know.

I have a PhD in statistics from the University of Bristol, where I worked on particle MCMC for population genetics with Christophe Andrieu and Mark Beaumont. That time could be summarised as three years of finding tricks to sample from distributions that strongly preferred the idea of being left undisturbed.

From there I spent three years at Improbable building methods for calibrating complex simulators against real-world data. This worked beautifully right up until we asked what happens when the simulator is wrong, which, for anything interesting, it invariably is. That question turned into papers at NeurIPS, UAI, and AISTATS, and inspired a growing line of work on robust simulation-based inference. I then co-founded a computer vision startup pushing NeRFs and Gaussian splats to their limits, and now I'm at Amazon AGI, working on post-training for large multimodal models.

get in touch →


In domains with verifiable rewards — mathematics, code, and formal reasoning — we are witnessing extraordinary progress. Here, sampling and inference have clear traction. Accelerated scientific discovery is within reach, provided we continue to scale inference-time compute, search, and our ability to trust the verification process itself.

But I'm also motivated by a longer-term conviction. Current AI captures a kind of consensus, a melange, a blob: the natural consequence of producing the output that least offends. In some domains that's been fantastically useful, particularly where there are clear correctness criteria. Yet the richest aspects of human judgment are not well preserved under aggregation: taste, context-sensitivity, and irreducible perspective. These are not noise around a mean; they are a structured landscape of meaning. I believe this space is tractable, and that calibrated uncertainty will be crucial to navigating it. That's where I'm focusing my research.

read the blog →


Approximate Bayesian Computation with Path Signatures

Dyer, Cannon, Schmon · UAI 2024 Spotlight ✦ Outstanding Paper Award

Black-box Bayesian inference for agent-based models

Dyer, Cannon, Farmer, Schmon · JEDC 2024

Robust Neural Posterior Estimation and Statistical Model Criticism

Ward, Cannon, Beaumont, Fasiolo, Schmon · NeurIPS 2022

Investigating the Impact of Model Misspecification in Simulation-Based Inference

Cannon, Ward, Schmon · arXiv 2022

Amortised Inference for Expensive Time-Series Simulators with Signatured Ratio Estimation

Dyer, Cannon, Schmon · AISTATS 2022

Calibrating Agent-Based Models to Microdata with Graph Neural Networks

Dyer, Cannon, Farmer, Schmon · ICML 2022 AI4ABM Workshop Spotlight

High Performance Simulation for Scalable Multi-Agent Reinforcement Learning

Langham-Lopez, Cannon, Schmon · ICML 2022 AI4ABM Workshop Spotlight

Generalized Posteriors in Approximate Bayesian Computation

Schmon*, Cannon*, Knoblauch · AABI 2021

Deep Signature Statistics for Likelihood-free Time-series Models

Dyer, Cannon, Schmon · ICML 2021 INNF+ Workshop

Full list on Google Scholar →