05 May 2026, Tangier, Morocco
The aim of probabilistic machine learning is to find accurate representations of our uncertain beliefs about the world and use them to make better informed decisions. This workshop brings together post-Bayesian approaches to inference and optimisation-based perspectives on uncertainty and decision-making. Post-Bayesian methods address the limitations of classical Bayesian inference by developing alternative inferential principles that remain robust in modern machine-learning settings, where standard modelling assumptions may be violated. Complementing this view, optimisation-based approaches treat inference and decision-making as problems of optimising functionals of probability distributions, providing a unifying framework for both learning probabilistic representations and acting upon them. This workshop welcomes all theoretical and methodological work on how best to represent, find and use probabilistic beliefs about the world.
We invite short paper submissions on all theoretical and methodological work on how best to represent, find and use probabilistic beliefs about the world.
Please submit your paper through OpenReview.
Submissions should be formatted using the AISTATS LaTeX style. Papers are limited to 4 pages (excluding references). The review process will be double-blind. Accepted contributions will be presented as posters, and selected works will also be highlighted as contributed talks.
The important deadlines are listed below.
| Date | Milestone |
|---|---|
| Fri, Jan 30, 2026 | Submissions open |
| Fri, Mar 6, 2026 | Submissions close |
| Fri, Mar 27, 2026 | Decisions announced |
| Tue, May 5, 2026 | Workshop begins |
All presenters are listed in alphabetical order.
Chris Oates (Newcastle University)
Detecting Model Misspecification in Bayesian Inverse Problems via Variational Gradient Descent
Clémentine Chazal (CREST / ENSAE, Paris)
A Computable Measure of Suboptimality for Entropy-Regularised Variational Objectives
Geoff Pleiss (University of British Columbia)
Perfect Bayesian Optimization is Hard; “Good Enough” Bayesian Optimization is Easy
Pierre Alquier (ESSEC Asia-Pacific)
Empirical PAC-Bayes Bound for Markov Chains
| Time | Activity |
|---|---|
| 09:00–09:10 | Opening Remarks |
| 09:10–09:40 | Invited Talk - Pierre Alquier |
| 09:40–10:00 | Contributed Talk: *Towards E-Value Based Stopping Rules for Bayesian Deep Ensembles* (Emanuel Sommer, Rickmer Schulte, Sarah Deubner, Julius Kobialka, David Rügamer) |
| 10:00–10:30 | Coffee |
| 10:30–11:00 | Invited Talk - Clémentine Chazal |
| 11:00–12:00 | Contributed Talks: *A Distributional Optimisation Perspective on Ensemble Methods* (Yan Lin, Congye Wang, Zheyang Shen, Matthew A Fisher, Chris J. Oates)<br>*Sequential Updating of Predictively Oriented Posteriors* (Zheyang Shen, Gerardo Duran-Martin, Chris J. Oates)<br>*Even More Guarantees for Variational Inference in the Presence of Symmetries* (Lena Zellinger, Antonio Vergari) |
| 12:00–12:30 | Invited Talk - Chris Oates |
| 12:30–14:00 | Lunch |
| 14:00–14:30 | Invited Talk - Geoff Pleiss |
| 14:30–15:30 | Panel Discussion |
| 15:30–18:00 | Poster Session |
A Predictive View on Streaming Hidden Markov Models
Gerardo Duran-Martin
Addressing Stochastic Rising Bandits with Thompson Sampling
Marco Fiandri, Francesco Trovò, Alberto Maria Metelli
Axiomatizing Tempered Bayesian Updating via Local Likelihood Transformations
Yutong Zhang, Yaoran Yang
Bayesian inference with sources of uncertainty: a confidence-weighted approach to sparsity
Rafael Mouallem Rosa, Julyan Arbel, Hien Duy Nguyen
Closed-Form Reward Centroids for Inverse Reinforcement Learning
Filippo Lazzati, Alberto Maria Metelli
Encoding Inductive Biases in Simulation-based inference
Ben Riegler, Vincent Fortuin
Guiding Posterior Exploration with Optimizer-Derived Geometry
Moritz Schlager, Emanuel Sommer, Thomas Möllenhoff, David Rügamer
Implied Likelihoods in Linear Amortised Bayesian Methods
Samuel Power
Indirect Query Bayesian Optimization with Integrated Feedback
Mengyan Zhang, Shahine Bouabid, Cheng Soon Ong, Seth Flaxman, Dino Sejdinovic
Learning with Embedded Linear Equality Constraints via Variational Bayesian Inference
Matthew Marsh, Benoit Chachuat, Antonio del Rio Chanona
Occam’s Razor is Only as Sharp as Your ELBO
Ethan Harvey, Michael C Hughes
Optimal information deletion and Bayes’ theorem
Hans Montcho, Håvard Rue
Pandora’s Regret: Decision-Aligned Evaluation for Sequential Search
Gerardo Flores, Ashia C. Wilson
Regularization Effects in Variational Training of Transformers
Yi Han, Jonathan Wenger, John Patrick Cunningham
Rethinking Probabilistic Circuit Parameter Learning
Anji Liu, Zilei Shao, Guy Van den Broeck
Rethinking Trust Region Bayesian Optimization in High Dimensions
Wei-Ting Tang, Joel Paulson
Robust and Adaptive Bayesian Contextual Bandits in Heavy-Tailed and Piecewise-Stationary Environments
Gianluca Palmari, Alvaro Cartea, Fayçal Drissi, Gerardo Duran-Martin
Robust Bayesian Experimental Design under Misspecification
Hany Abdulsamad, Sahel Iqbal, Christian A. Naesseth, Takuo Matsubara, Adrien Corenflos
Robust Obedience in Information Design for Bayesian Congestion Games
Yuwei Hu, Bryce Ferguson
Scalable Uncertainty Quantification for Black-Box Density-Based Clustering
Nicola Bariletto, Stephen Walker
Teacher Forcing as Generalized Bayes: Optimization Geometry Mismatch in Switching Surrogates for Chaotic Dynamics
Andre Herz, Daniel Durstewitz, Georgia Koppe
The Benefits of Sampling in Unregularized Variational Training of Deep Neural Networks
Juraj Marusic, Jonathan Wenger, Beau Coker, John Patrick Cunningham
Unlocking the Secrets of Perturbation Methods for End-to-End Prediction and Optimization
Kyle Heuton, Michael C Hughes
When Does Feel-Good Thompson Sampling Fail Under Approximate Posteriors?
Emile Timothy Anand, Sarah Liaw