Swabha Swayamdipta (Allen Institute) - Addressing Biases for Robust, Generalizable AI
Artificial Intelligence has made unprecedented progress in the past decade. However, a large gap remains between the decision-making capabilities of humans and machines. In this talk, I will investigate two factors that help explain why. First, I will discuss the presence of undesirable biases in datasets, which ultimately hurt generalization. I will then present bias-mitigation algorithms that boost the ability of AI models to generalize to unseen data. Second, I will explore task-specific prior knowledge, which aids robust generalization but is often ignored when training modern AI architectures. Throughout this discussion, I will focus on language applications and show how certain underlying structures can provide useful inductive biases for inferring meaning in natural language. I will conclude by discussing how the broader framework of dataset and model biases will play a critical role in the societal impact of AI going forward.
Host: Michael Maire
Swabha Swayamdipta is a postdoctoral investigator at the Allen Institute for AI, working with Yejin Choi. Her research focuses on natural language processing, where she explores dataset and linguistic structural biases, as well as model interpretability. Swabha received her Ph.D. from Carnegie Mellon University under the supervision of Noah A. Smith and Chris Dyer. During most of her Ph.D., she was a visiting student at the University of Washington. She holds a Master's degree from Columbia University, where she was advised by Owen Rambow. Her research has been published at leading NLP and machine learning conferences and received an honorable mention for Best Paper at ACL 2020.