Learning Ising Models with Latent Variables
Graphical models are a rich language for describing high-dimensional distributions in terms of their dependence structure. While there are algorithms with provable guarantees for learning undirected graphical models in a variety of settings, there has been much less progress in the important scenario where there are latent (i.e., unobserved) variables. Here we study sparse Ising models with latent variables, focusing on a prototypical example: the Restricted Boltzmann Machine (RBM). We discuss negative and positive results for this problem. In particular, we give two provable algorithms for learning such models: one builds upon the concavity of magnetization estimates from statistical physics, and the other is inspired by connections between RBMs and their historical relative, feedforward neural networks. Based on joint works with Guy Bresler, Ankur Moitra, Surbhi Goel, and Adam Klivans.
Presence at TTIC requires being fully vaccinated for COVID-19 or having a TTIC- or UChicago-approved exemption. Masks are required in all common areas. Full visitor guidance is available at ttic.edu/visitors.
Contact Denise Howard (email@example.com) for Zoom information.
Host: Machine Learning Seminar Series
Frederic is currently a fellow at UC Berkeley's Simons Institute and will be a Motwani postdoctoral fellow at Stanford starting in Spring 2022. He recently graduated from MIT, where he received his PhD in Mathematics and Statistics, co-advised by Ankur Moitra and Elchanan Mossel. His interests include computational learning theory, high-dimensional statistics, applied probability, and algorithms for sampling and inference.