I’m currently a PhD student in the Department of Statistics at the University of Oxford, as part of the StatML CDT. I’m supervised by Arnaud Doucet and George Deligiannidis, and am working on generative modelling theory with a particular focus on diffusion models. I’ve previously studied generalizations of diffusion models to arbitrary state spaces, and am currently exploring performance guarantees for diffusion methods and related techniques.
I’m also interested in interpretability of machine learning systems (especially from a safety perspective) and continue to think about feature representation and automatic detection of sparse features in model activations. At Redwood Research, I worked on applications of interpretability techniques for mechanistic anomaly detection.
Measuring Feature Sparsity in Language Models. Mingyang Deng, Lucas Tao, Joe Benton. NeurIPS 2023 Workshop on Socially Responsible Language Modelling Research.
Linear Convergence Bounds for Diffusion Models via Stochastic Localization. Joe Benton, Valentin De Bortoli, Arnaud Doucet, George Deligiannidis. arXiv preprint, arXiv:2308.03686.
Error Bounds for Flow Matching Methods. Joe Benton, George Deligiannidis, Arnaud Doucet. arXiv preprint, arXiv:2305.16860.
From Denoising Diffusions to Denoising Markov Models. Joe Benton, Yuyang Shi, Valentin De Bortoli, George Deligiannidis, Arnaud Doucet. Journal of the Royal Statistical Society, Series B, 2023. To appear.
Alpha-divergence Variational Inference Meets Importance Weighted Auto-Encoders: Methodology and Asymptotics. Kamélia Daudel, Joe Benton*, Yuyang Shi*, Arnaud Doucet. Journal of Machine Learning Research, 24(243):1−83, 2023.
Polysemanticity and Capacity in Neural Networks. Adam Scherlis, Kshitij Sachan, Adam S. Jermyn, Joe Benton, Buck Shlegeris. arXiv preprint, arXiv:2210.01892.
A Continuous Time Framework for Discrete Denoising Models. Andrew Campbell, Joe Benton, Valentin De Bortoli, Tom Rainforth, George Deligiannidis, Arnaud Doucet. Advances in Neural Information Processing Systems, 2022.