Deep Probabilistic Graphical Modeling
Deep learning (DL) is a powerful approach to modeling complex, large-scale data. However, DL models lack interpretable quantities and calibrated uncertainty. In contrast, probabilistic graphical modeling (PGM) provides a framework for formulating an interpretable generative process of data and a way to express uncertainty about what we do not know. How can we develop machine learning methods that bring together the expressivity of DL with the interpretability and calibration of PGM, yielding flexible models endowed with an interpretable latent structure that can be fit efficiently? I call this line of research deep probabilistic graphical modeling (DPGM). In this talk, I will discuss my work on developing DPGM for text data. In particular, I will show how DPGM enables flexible and interpretable topic modeling at large scale, overcoming several known challenges. Furthermore, I will describe how to account for both local and long-range context under the DPGM framework to build a flexible sequential document model that achieves state-of-the-art performance on a downstream document classification task.
Host: Michael Maire
Adji Bousso Dieng
Adji Bousso Dieng is a PhD candidate at Columbia University, where she is jointly advised by David Blei and John Paisley. Her research is in artificial intelligence and statistics, bridging probabilistic graphical models and deep learning. Dieng is supported by a Dean's Fellowship from Columbia University. She has won a Microsoft Azure Research Award and a Google PhD Fellowship in Machine Learning, and was recognized as a rising star in machine learning by the University of Maryland.
Prior to Columbia, Dieng worked as a Junior Professional Associate at the World Bank. She completed her undergraduate studies in France, attending Lycée Henri IV and Télécom ParisTech, part of France's Grandes Écoles system. She spent the third year of Télécom ParisTech's curriculum at Cornell University, where she earned a Master's in Statistics.