Date & Time:
March 9, 2023 2:00 pm – 3:00 pm
Location:
Crerar 298, 5730 S. Ellis Ave., Chicago, IL

Algorithms make predictions about people constantly. The spread of such prediction systems has raised concerns that machine learning algorithms may exhibit problematic behavior, especially against individuals from marginalized groups. This talk will provide an overview of my research building a theory of “responsible” machine learning. I will highlight a notion of fairness in prediction, called Multicalibration (ICML’18), which requires predictions to be well-calibrated, not simply overall, but on every group that can be meaningfully identified from data. This “multi-group” approach strengthens the guarantees of group fairness definitions, without incurring the costs (statistical and computational) associated with individual-level protections. Additionally, I will present a new paradigm for learning, Outcome Indistinguishability (STOC’21), which provides a broad framework for learning predictors satisfying formal guarantees of responsibility. Finally, I will discuss the threat of Undetectable Backdoors (FOCS’22), which represent a serious challenge for building trust in machine learning models.
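To make the multicalibration idea concrete, here is a minimal sketch on synthetic data: a predictor can be well-calibrated overall yet badly miscalibrated on an identifiable subgroup, which is exactly the failure multicalibration rules out. The function names, the binning scheme, and the toy distribution below are illustrative assumptions, not the construction from the ICML'18 paper.

```python
import numpy as np

def calibration_error(preds, outcomes, n_bins=10):
    """Bin-weighted mean absolute gap between predicted probability
    and empirical outcome rate (an ECE-style calibration measure)."""
    bins = np.minimum((preds * n_bins).astype(int), n_bins - 1)
    err, total = 0.0, len(preds)
    for b in range(n_bins):
        mask = bins == b
        if not mask.any():
            continue
        gap = abs(preds[mask].mean() - outcomes[mask].mean())
        err += (mask.sum() / total) * gap
    return err

def multicalibration_violation(preds, outcomes, groups, n_bins=10):
    """Worst-case calibration error over a collection of (possibly
    overlapping) subgroups, each given as a boolean mask."""
    return max(
        calibration_error(preds[g], outcomes[g], n_bins)
        for g in groups if g.any()
    )

# Toy data: constant prediction of 0.5 is calibrated overall,
# because the two hypothetical subgroups' base rates average to 0.5.
rng = np.random.default_rng(0)
n = 10_000
group_a = rng.random(n) < 0.5                  # hypothetical subgroup
preds = np.full(n, 0.5)
outcomes = (rng.random(n) < np.where(group_a, 0.7, 0.3)).astype(float)

overall = calibration_error(preds, outcomes)
worst = multicalibration_violation(
    preds, outcomes, [np.ones(n, bool), group_a, ~group_a]
)
# `overall` is near 0, but `worst` is near 0.2: the predictor fails
# the multicalibration requirement on group_a and its complement.
```

Auditing a predictor this way over every group "meaningfully identifiable from data" is what separates multicalibration from ordinary group-fairness checks on a few fixed attributes.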

Speakers

Michael P. Kim

Miller Postdoctoral Fellow, UC Berkeley

Michael P. Kim is a Postdoctoral Research Fellow at the Miller Institute for Basic Research in Science at UC Berkeley, hosted by Shafi Goldwasser. Before this, Kim completed his PhD in Computer Science at Stanford University, advised by Omer Reingold. Kim’s research addresses basic questions about the appropriate use of machine learning algorithms that make predictions about people. More generally, Kim is interested in how the computational lens (i.e., algorithms and complexity theory) can provide insights into emerging societal and scientific challenges.

Related News & Events

NeurIPS 2023 Award-winning paper by DSI Faculty Bo Li, DecodingTrust, provides a comprehensive framework for assessing trustworthiness of GPT models

Feb 01, 2024
Video

“Machine Learning Foundations Accelerate Innovation and Promote Trustworthiness” by Rebecca Willett

Jan 26, 2024
Video

Nightshade: Data Poisoning to Fight Generative AI with Ben Zhao

Jan 23, 2024

UChicago Undergrad Analyzes Machine Learning Models Used By CPD, Uncovers Lack of Transparency About Data Usage

Oct 31, 2023

In The News: U.N. Officials Urge Regulation of Artificial Intelligence

"Security Council members said they feared that a new technology might prove a major threat to world peace."
Jul 27, 2023

UChicago Computer Scientists Bring in Generative Neural Networks to Stop Real-Time Video From Lagging

Jun 29, 2023

UChicago Assistant Professor Raul Castro Fernandez Receives 2023 ACM SIGMOD Test-of-Time Award

Jun 27, 2023
Michael Franklin

Mike Franklin, Dan Nicolae Receive 2023 Arthur L. Kelly Faculty Prize

Jun 02, 2023

PhD Student Kevin Bryson Receives NSF Graduate Research Fellowship to Create Equitable Algorithmic Data Tools

Apr 14, 2023

Computer Science Displays Catch Attention at MSI’s Annual Robot Block Party

Apr 07, 2023

UChicago, Stanford Researchers Explore How Robots and Computers Can Help Strangers Have Meaningful In-Person Conversations

Mar 29, 2023

Postdoc Alum John Paparrizos Named ICDE Rising Star

Mar 15, 2023