Olga Russakovsky (Princeton) - Fairness in Visual Recognition
Computer vision models trained on unprecedented amounts of data hold promise for making impartial, well-informed decisions in a variety of applications. However, more and more historical societal biases are making their way into these seemingly innocuous systems. We focus our attention on bias in the form of inappropriate correlations between visual protected attributes (age, gender expression, skin color, …) and the predictions of visual recognition models, as well as any unintended discrepancy in the error rates of vision systems across different social, demographic, or cultural groups. In this talk, we'll dive deeper into both the technical causes of and potential solutions to bias in computer vision. I'll highlight our recent work addressing bias in visual datasets (FAT* 2020; ECCV 2020), in visual models (CVPR 2020; under review), as well as in the makeup of AI leadership (AI4All).
Host: Center for Data and Computing
Olga Russakovsky
Dr. Olga Russakovsky is an Assistant Professor in the Computer Science Department at Princeton University. Her research is in computer vision, closely integrated with the fields of machine learning; human-computer interaction; and fairness, accountability, and transparency. She has been awarded AnitaB.org's Emerging Leader Abie Award in Honor of Denice Denton in 2020, the CRA-WP Anita Borg Early Career Award in 2020, the MIT Technology Review 35 Innovators Under 35 award in 2017, the PAMI Everingham Prize in 2016, and Foreign Policy Magazine's 100 Leading Global Thinkers award in 2015. In addition to her research, she co-founded and continues to serve on the Board of Directors of the AI4ALL foundation, dedicated to increasing diversity and inclusion in artificial intelligence (AI). She completed her PhD at Stanford University in 2015 and her postdoctoral fellowship at Carnegie Mellon University in 2017.