Date & Time:
January 27, 2022 2:00 pm – 3:00 pm

Interactive AI Model Debugging and Correction

Watch Via Live Stream

Research in Artificial Intelligence (AI) has advanced at an incredible pace, to the point where it is making its way into our everyday lives, explicitly and behind the scenes. However, beneath their impressive progress, many AI models hide deficiencies that amplify social biases or even cause fatal accidents. How do we identify, improve, and cope with imperfect models, while still benefiting from their use? I will discuss my work empowering humans to interact with AI models in order to debug and correct them. I will describe both (1) how I help experts run scalable and testable analyses on models in development, and (2) how I help end users collaborate with deployed AI in a transparent and controllable way. In my final remarks, I will discuss my future research perspectives on building human-centered AI through data-centric approaches.

Host: Chenhao Tan

Speakers

Sherry Tongshuang Wu

Ph.D. Student, University of Washington

Sherry Tongshuang Wu is a final-year Ph.D. candidate in Computer Science & Engineering at the University of Washington, advised by Jeffrey Heer and Dan Weld. She received her B.Eng. in CSE from the Hong Kong University of Science and Technology. Her research lies at the intersection of Human-Computer Interaction (HCI) and Natural Language Processing (NLP) and aims to empower humans to debug and correct AI models interactively, both while a model is under active development and after it is deployed to end users. Sherry has authored 19 papers in top-tier NLP, HCI, and visualization conferences and journals such as ACL, CHI, TOCHI, and TVCG, including a best paper award (top-1) and an honorable mention (top-3). You can find out more about her at the link below.

Related News & Events


NeurIPS 2023 Award-winning paper by DSI Faculty Bo Li, DecodingTrust, provides a comprehensive framework for assessing trustworthiness of GPT models

Feb 01, 2024
Video

“Machine Learning Foundations Accelerate Innovation and Promote Trustworthiness” by Rebecca Willett

Jan 26, 2024
Video

Nightshade: Data Poisoning to Fight Generative AI with Ben Zhao

Jan 23, 2024

UChicago Undergrad Analyzes Machine Learning Models Used By CPD, Uncovers Lack of Transparency About Data Usage

Oct 31, 2023

Five UChicago CS students named to Siebel Scholars Class of 2024

Oct 02, 2023

UChicago Computer Scientists Design Small Backpack That Mimics Big Sensations

Sep 11, 2023

In The News: U.N. Officials Urge Regulation of Artificial Intelligence

"Security Council members said they feared that a new technology might prove a major threat to world peace."
Jul 27, 2023

UChicago Computer Scientists Bring in Generative Neural Networks to Stop Real-Time Video From Lagging

Jun 29, 2023

UChicago Assistant Professor Raul Castro Fernandez Receives 2023 ACM SIGMOD Test-of-Time Award

Jun 27, 2023

Mike Franklin, Dan Nicolae Receive 2023 Arthur L. Kelly Faculty Prize

Jun 02, 2023

Computer Science Class Shows Students How To Successfully Create Circuit Boards Without Engineering Experience

May 17, 2023

UChicago CS Researchers Shine at CHI 2023 with 12 Papers and Multiple Awards

Apr 19, 2023