Vidya Muthukumar (Berkeley) - Fundamental Perspectives on Machine Learning: Strategic Agents and Contemporary Models
- March 26, 2020 at 10:30am - 11:30am
- Zoom or Live Stream (see below)
Speaker: Vidya Muthukumar, PhD Student, University of California, Berkeley
Vidya Muthukumar is a final-year graduate student in the EECS department at the University of California, Berkeley, advised by Anant Sahai. Her broad interests are in game theory and online and statistical learning. Recently, she has been particularly interested in designing learning algorithms that provably adapt in strategic environments, fundamental properties of overparameterized models, and fairness, accountability, and transparency in machine learning. Her honors include the IBM Research Science for Social Good Fellowship, the SanDisk Fellowship, and the UC Berkeley EECS Outstanding Course Development and Teaching Award. She served as co-president of UC Berkeley Women in Computer Science and Engineering (WICSE) in the 2016-2017 academic year.
Abstract: Fundamental Perspectives on Machine Learning: Strategic Agents and Contemporary Models
Through recent advances in machine learning (ML) technology, we are getting closer to realizing the broadly stated goal of “artificially intelligent”, autonomous agents. In many cases — like cognitive radio, swarm robotics, and e-commerce — these agents will not be acting in isolation, and it is critical for them to directly interact with other agents who themselves behave strategically. The ensuing questions of how agents should learn from strategically generated data, and how such strategic behavior will manifest, are well-posed even when simple ML algorithms are used. On the other hand, most of the recent empirical success in single-agent AI is driven by the construction of overparameterized neural networks that would traditionally be considered too complex for reliable performance. Foundational mechanisms for understanding their state-of-the-art empirical performance remain elusive.
In this talk, I present two vignettes of my research that engage separately with the central difficulties in strategic learning and contemporary models from a fundamental perspective. First, I present a scheme by which an agent can provably learn from an unknown environment, by adapting online to the model that seems to best describe the data while remaining robust to strategically generated data. I also briefly touch upon credible approximations to how strategic agents will behave in the presence of such adaptive learning. Next, I present a signal-processing perspective on the overparameterized (high-dimensional) linear model, and ramifications for generalization in least-squares regression and classification. In addition to the commonly discussed pitfall of noise overfitting, I show that a phenomenon of signal “bleed”, observed classically in statistical signal processing and under-sampling theory, is equally dangerous for generalization. I use these phenomena to characterize special situations in which overparameterization is actually beneficial. I conclude with future directions that I plan to address for a more complete foundational understanding of multi-agent learning.
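The overparameterized linear regime the abstract refers to can be illustrated with a small sketch (this is an assumed toy example, not the speaker's actual analysis): with more features than samples, the minimum-norm least-squares solution interpolates the training data exactly, yet part of the true signal is attenuated and noise is absorbed into the fit, which is what degrades generalization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overparameterized setting: d features, n < d samples.
n, d = 20, 100
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[0] = 1.0                                  # a single "signal" direction
y = X @ w_true + 0.1 * rng.standard_normal(n)    # noisy observations

# Minimum-l2-norm interpolator: w_hat = X^+ y via the pseudoinverse.
w_hat = np.linalg.pinv(X) @ y

# Training error is (numerically) zero: the model interpolates.
train_err = np.mean((X @ w_hat - y) ** 2)

# Test error on fresh data stays bounded away from zero, reflecting both
# attenuation of the true signal and overfitting of the noise.
X_test = rng.standard_normal((1000, d))
y_test = X_test @ w_true
test_err = np.mean((X_test @ w_hat - y_test) ** 2)

print(f"train MSE: {train_err:.2e}, test MSE: {test_err:.3f}")
```

The gap between zero training error and nonzero test error is the basic tension the talk examines; the signal-"bleed" and noise-overfitting phenomena mentioned in the abstract give a finer-grained account of when this gap is small or large.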
If you are affiliated with UChicago CS and would like to attend this talk remotely, contact firstname.lastname@example.org for links.
Host: Rebecca Willett