Patrick Cannon

I'm a machine learning researcher with a background in Monte Carlo methods. I think the hardest remaining problems in AI are fundamentally problems of sampling and inference: how to search, when to stop, and how to know what you don't know.

I have a PhD in statistics from the University of Bristol, where I worked on particle MCMC for population genetics with Christophe Andrieu and Mark Beaumont. Those years could be fairly summarised as three years of finding tricks to sample from distributions that strongly preferred to be left undisturbed.

From there I moved to Improbable, working on simulation-based inference and normalising flows, then co-founded a computer vision startup where I pushed NeRFs and Gaussian splats to their limits. Now I'm at Amazon AGI, working on post-training for large multimodal models.

get in touch →


In domains with verifiable rewards — mathematics, code, formal reasoning — we're seeing extraordinary progress, and sampling and inference have clear traction. The potential to accelerate science is real and imminent if we continue to improve inference-time compute, search, and verification.

But I'm also motivated by a longer-term conviction. Current AI captures a kind of consensus, a mélange, a blob: the natural consequence of producing the output that least offends. In some domains that has been fantastically useful, particularly where there are clear correctness criteria. Yet the richest aspects of human judgment are not well preserved under aggregation: taste, context-sensitivity, and the irreducible differences between perspectives. These are not noise around a mean, but part of a structured landscape of meaning, closer to what Hegel called Geist: not a collection of viewpoints to be averaged over, but an evolving space of thought in which contradiction is not a problem to be solved but a feature to be modelled. I believe that space is tractable, and I suspect the mathematics of sampling and uncertainty will be central to navigating it. I think we'll know when we're getting close; it will be blinding.

read the blog →


Approximate Bayesian Computation with Path Signatures

Dyer, Cannon, Schmon · UAI 2024 Spotlight ✦ Outstanding Paper Award

Black-box Bayesian inference for agent-based models

Dyer, Cannon, Farmer, Schmon · JEDC 2024

Robust Neural Posterior Estimation and Statistical Model Criticism

Ward, Cannon, Beaumont, Fasiolo, Schmon · NeurIPS 2022

Investigating the Impact of Model Misspecification in Simulation-Based Inference

Cannon, Ward, Schmon · arXiv 2022

Amortised Inference for Expensive Time-Series Simulators with Signatured Ratio Estimation

Dyer, Cannon, Schmon · AISTATS 2022

Calibrating Agent-Based Models to Microdata with Graph Neural Networks

Dyer, Cannon, Farmer, Schmon · ICML 2022 AI4ABM Workshop Spotlight

High Performance Simulation for Scalable Multi-Agent Reinforcement Learning

Langham-Lopez, Cannon, Schmon · ICML 2022 AI4ABM Workshop Spotlight

Generalized Posteriors in Approximate Bayesian Computation

Schmon*, Cannon*, Knoblauch · AABI 2021

Deep Signature Statistics for Likelihood-free Time-series Models

Dyer, Cannon, Schmon · ICML 2021 INNF+ Workshop

Full list on Google Scholar →