I'm a machine learning researcher with a background in Monte Carlo methods. I think the hardest remaining problems in AI are fundamentally problems of sampling and inference: how to search, when to stop, and how to know what you don't know.
I have a PhD in statistics from the University of Bristol, where I worked on particle MCMC for population genetics with Christophe Andrieu and Mark Beaumont. Those three years could be summarised as finding tricks to sample from distributions that strongly preferred to be left undisturbed.
From there I tackled simulation-based inference and normalising flows at Improbable, then co-founded a computer vision startup, where I tried to push NeRFs and Gaussian splats to their limits. Now I'm at Amazon AGI, working on post-training for large multimodal models.
Direction
In domains with verifiable rewards — mathematics, code, formal reasoning — we're seeing extraordinary progress, and sampling and inference have clear traction here. The potential to accelerate science is real and imminent if we continue to improve inference-time compute, search, and verification.
But I'm also motivated by a longer-term conviction. Current AI captures a kind of consensus, a melange, a blob: the natural consequence of optimising for the output that least offends. In some domains that's been fantastically useful, particularly where there are clear correctness criteria. Yet the richest aspects of human judgment are not well preserved under aggregation: taste, context-sensitivity, and the irreducible differences between perspectives. These are not noise around a mean, but part of a structured landscape of meaning. It is close to what Hegel called Geist: not a collection of viewpoints to be averaged over, but an evolving space of thought in which contradiction is not a problem to be solved but a feature to be modelled. I believe that space is tractable, and I suspect the mathematics of sampling and uncertainty will be central to it. I think we'll know when we're getting close; it will be blinding.
Publications