Discussion Lead: Siddharth Mitra (Yale)
Topic: On the Convergence of Phi-Mutual Information Along Langevin Markov Chains
Link: TBA
Abstract: Consider the goal of sampling from a target probability distribution \nu \propto \exp(-f) on R^d. The continuous-time Langevin dynamics, along with associated discrete-time algorithms, are popular Markov chains for this task, and we have a rich understanding of their mixing times in a variety of metrics and under various assumptions on the target distribution \nu. In this talk, we will discuss the convergence of mutual information and Phi-mutual information along these Markov chains for strongly log-concave target distributions. The data processing inequality from information theory states that these quantities cannot increase along any Markov chain; our focus will be to quantify the rate of decrease for the specific Langevin-based Markov chains of interest. We will then discuss applications of our results, which allow us to study when we obtain approximately independent samples along these Markov chains, and to analyze the pairwise Hellinger mutual information between samples, a quantity that arises when approximating a distribution by finitely many (dependent but identically distributed) samples.
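
For context on the discrete-time Langevin algorithms mentioned in the abstract, below is a minimal sketch of the unadjusted Langevin algorithm (ULA), one standard such Markov chain targeting \nu \propto \exp(-f). The function and parameter names here are illustrative, not taken from the talk.

import numpy as np

def ula_chain(grad_f, x0, step, n_steps, rng=None):
    """Unadjusted Langevin algorithm (ULA): a discrete-time Markov chain
    approximately targeting nu proportional to exp(-f) on R^d.

    Update rule: x_{k+1} = x_k - step * grad_f(x_k) + sqrt(2 * step) * N(0, I_d).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = [x.copy()]
    for _ in range(n_steps):
        # Gradient step on f plus Gaussian noise (Euler-Maruyama discretization
        # of the continuous-time Langevin dynamics).
        x = x - step * grad_f(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples)

# Example: a strongly log-concave target, the standard Gaussian on R^2,
# i.e. f(x) = |x|^2 / 2, so grad_f(x) = x.
if __name__ == "__main__":
    chain = ula_chain(grad_f=lambda x: x, x0=np.zeros(2), step=0.1, n_steps=1000)
    print(chain[-1])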