December
Learned Reference-based Diffusion Sampler for multi-modal distributions
Louis Grenioux CMAP, École polytechnique
Over the past few years, several approaches utilizing score-based diffusion have been proposed to sample from probability distributions, that is, without access to exact samples and relying solely on evaluations of unnormalized densities. The resulting samplers approximate the time-reversal of a noising diffusion process that bridges the target distribution to an easy-to-sample base distribution. In practice, the performance of these methods depends heavily on key hyperparameters that require ground-truth samples to be tuned accurately. Our work aims to highlight and address this fundamental issue, focusing in particular on multi-modal distributions, which pose significant challenges for existing sampling methods. Building on existing approaches, we introduce the Learned Reference-based Diffusion Sampler (LRDS), a methodology specifically designed to leverage prior knowledge of the location of the target modes in order to bypass the obstacle of hyperparameter tuning. LRDS proceeds in two steps: (i) learning a reference diffusion model, tailored for multimodality, on samples located in high-density regions of the space, and (ii) using this reference model to guide the training of a diffusion-based sampler.
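As a rough illustration of step (i), the sketch below fits a small diffusion (noise-prediction) model by denoising score matching on samples drawn around assumed mode locations. This is not the authors' implementation: the mode centres, network, and noise schedule are illustrative assumptions standing in for the "prior knowledge on the location of the target modes" mentioned in the abstract.

```python
# Toy sketch (not the LRDS code): learn a reference diffusion model on samples
# placed around assumed mode locations of a 2-D multi-modal target.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Assumed prior knowledge: rough locations of the target modes (hypothetical).
mode_centres = torch.tensor([[-4.0, 0.0], [4.0, 0.0], [0.0, 4.0]])

def sample_reference(n):
    """Reference dataset: Gaussian blobs centred on the assumed modes."""
    idx = torch.randint(len(mode_centres), (n,))
    return mode_centres[idx] + 0.5 * torch.randn(n, 2)

# Small noise-prediction network taking (x_t, t) and returning the predicted noise.
net = nn.Sequential(nn.Linear(3, 64), nn.SiLU(),
                    nn.Linear(64, 64), nn.SiLU(),
                    nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def forward_noise(x0, t):
    """VP-style forward noising x_t = alpha(t) x_0 + sigma(t) eps (toy schedule)."""
    alpha = torch.exp(-t)
    sigma = torch.sqrt(1.0 - alpha ** 2)
    eps = torch.randn_like(x0)
    return alpha * x0 + sigma * eps, eps

for step in range(2000):
    x0 = sample_reference(256)
    t = 1e-3 + torch.rand(256, 1) * 3.0
    xt, eps = forward_noise(x0, t)
    # Denoising score matching in its noise-prediction form.
    loss = ((net(torch.cat([xt, t], dim=1)) - eps) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The fitted network defines the reference diffusion model. In step (ii) of the
# abstract, such a reference would then be used to guide the training of the
# actual diffusion-based sampler against the unnormalized target density.
```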
Graph-informed importance sampling for piecewise deterministic Markov processes. Application in reliability assessment.
Guillaume Chennetier CERMICS, École nationale des ponts et chaussées
Piecewise Deterministic Markov Processes (PDMPs) describe deterministic dynamical systems whose parameters are subject to random jumps, making them versatile tools for modeling complex phenomena. However, generating their trajectories can be computationally expensive. For a large class of inference problems, an optimal sampling strategy exists and relies on a generalized version of the “committor function” of the process. We propose a novel adaptive importance sampling method to efficiently generate rare trajectories of PDMPs. The approach proceeds in two phases. In a first, offline phase, the PDMP is approximated by a simpler process on a graph, enabling explicit computation of key quantities used to build a low-cost approximation of the committor function. In a second, online phase, PDMP trajectories are generated using a proposal distribution informed by this approximation and iteratively refined via cross-entropy minimization. We provide asymptotic guarantees and demonstrate the method’s effectiveness by estimating the failure probability of a complex industrial system.
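To make the cross-entropy refinement of the online phase concrete, here is a generic toy sketch on a scalar rare-event problem (estimating P(X &gt; 4) for X ~ N(0, 1)) with a Gaussian proposal family. It illustrates only the cross-entropy update loop, not the PDMP simulation, the graph approximation, or the committor surrogate described in the abstract; the thresholds, proposal family, and quantile level are illustrative assumptions.

```python
# Toy illustration (not the paper's method): multi-level cross-entropy adaptive
# importance sampling for a scalar rare event P(X > 4), X ~ N(0, 1).
import numpy as np

rng = np.random.default_rng(0)
threshold, n = 4.0, 10_000

def log_target(x):
    """Unnormalized log-density of the nominal N(0, 1) model."""
    return -0.5 * x ** 2

# Initial proposal mean; in the paper this role is played by the low-cost
# committor approximation built offline on the graph.
mu = 0.0
for _ in range(10):
    x = rng.normal(mu, 1.0, n)                        # sample current proposal N(mu, 1)
    w = np.exp(log_target(x) + 0.5 * (x - mu) ** 2)   # importance weights N(0,1)/N(mu,1)
    # Intermediate level: 90% sample quantile, capped at the target threshold.
    level = min(np.quantile(x, 0.9), threshold)
    h = (x >= level).astype(float)
    mu = (h * w * x).sum() / (h * w).sum()            # cross-entropy update of the mean
    if level >= threshold:
        break

# Final importance-sampling estimate with the adapted proposal.
x = rng.normal(mu, 1.0, n)
w = np.exp(log_target(x) + 0.5 * (x - mu) ** 2)
p_hat = np.mean((x > threshold) * w)
print(f"adapted mu = {mu:.2f}, estimated probability = {p_hat:.2e}")
```

In the PDMP setting, the same iterative principle applies, but the proposal acts on trajectory laws (jump times and post-jump parameters) and the initial proposal comes from the committor approximation rather than from a fixed starting guess.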