Variational Perspectives on Machine Learning: Algorithms, Inference, and Fairness
Machine learning plays a key role in shaping the decisions made by a growing number of institutions. This talk will share variational perspectives on aspects of algorithms, inference, and fairness. On the topic of algorithms, I will present a variational framework for a classical family of convex optimization algorithms, called accelerated gradient algorithms, and demonstrate how it leads to simpler, faster gradient-based algorithms and generalizations of existing acceleration frameworks. On the topic of inference, I will present a variational framework for developing computationally efficient approximations of cross-validation and show how it provides fast and reliable estimates of out-of-sample performance for many machine learning models. On the topic of fairness, I will present a variational model for reasoning about the long-term impacts of using machine learning models to allocate scarce resources and opportunities to people, such as in employment and educational decisions.
Host: Rebecca Willett
Ashia Wilson is a postdoctoral researcher in the Machine Learning Group at Microsoft Research, New England. She received undergraduate degrees in Applied Mathematics and Philosophy from Harvard University in 2011, and her doctorate in Statistics from the University of California, Berkeley in 2018, advised by Benjamin Recht and Michael I. Jordan. Her research interests are in providing rigorous guarantees for algorithmic performance and in developing frameworks for studying issues of fairness and governance in machine learning.