Date & Time:
February 23, 2022 3:00 pm – 4:00 pm
Crerar 390, 5730 S. Ellis Ave., Chicago, IL,

Watch Via Live Stream

Automated decision-making systems are increasingly deployed in areas with personal and societal impact, leading to growing interest in and need for trustworthy AI and ML systems; that is, models that are robust, explainable, fair, and so on. It is important to note that these guarantees only hold with respect to a certain model of the world, with inherent uncertainties. In this talk, I will present how probabilistic modeling and reasoning, by explicitly incorporating a distribution, offer a principled way to handle different kinds of uncertainty when learning and deploying trustworthy AI systems. For example, when learning classifiers, the labels in the training data may be biased; I will show that probabilistic circuits, a class of tractable probabilistic models, can be effective in enforcing and auditing fairness properties by explicitly modeling a latent unbiased label. In addition, I will discuss recent breakthroughs in tractable inference of more complex queries, such as information-theoretic quantities, to demonstrate the potential of probabilistic reasoning for trustworthy AI. Finally, I will conclude with future work toward a framework to flexibly reason about and enforce trustworthy AI/ML system behaviors.
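For readers unfamiliar with the model class the abstract mentions, the following is a minimal, self-contained sketch of a probabilistic circuit over Boolean variables (illustrative only, not the speaker's implementation). The key "tractability" property is that, for circuits with suitable structure, marginal probabilities are computed exactly in a single bottom-up pass: unobserved leaves evaluate to 1, and sums and products combine child values.

```python
# Illustrative probabilistic circuit: leaf, weighted-sum, and product nodes.
# Marginals are computed in one bottom-up pass over the circuit.

class Leaf:
    def __init__(self, var, value):
        self.var, self.value = var, value

    def eval(self, evidence):
        # Unobserved variables marginalize out to 1.
        if self.var not in evidence:
            return 1.0
        return 1.0 if evidence[self.var] == self.value else 0.0

class Product:
    def __init__(self, children):
        self.children = children

    def eval(self, evidence):
        p = 1.0
        for c in self.children:
            p *= c.eval(evidence)
        return p

class Sum:
    def __init__(self, weighted_children):
        # List of (weight, child) pairs; weights sum to 1.
        self.weighted_children = weighted_children

    def eval(self, evidence):
        return sum(w * c.eval(evidence) for w, c in self.weighted_children)

# Toy mixture over variables A and B:
# P(A, B) = 0.6 * P1(A) P1(B) + 0.4 * P2(A) P2(B)
circuit = Sum([
    (0.6, Product([Sum([(0.9, Leaf("A", 1)), (0.1, Leaf("A", 0))]),
                   Sum([(0.3, Leaf("B", 1)), (0.7, Leaf("B", 0))])])),
    (0.4, Product([Sum([(0.2, Leaf("A", 1)), (0.8, Leaf("A", 0))]),
                   Sum([(0.6, Leaf("B", 1)), (0.4, Leaf("B", 0))])])),
])

# Marginal P(A=1) = 0.6*0.9 + 0.4*0.2 = 0.62
print(circuit.eval({"A": 1}))
# Joint P(A=1, B=1) = 0.6*0.9*0.3 + 0.4*0.2*0.6 = 0.21
print(circuit.eval({"A": 1, "B": 1}))
```

Because any marginal query is answered by the same linear-time pass, such circuits can support the kinds of exact fairness audits and more complex (e.g. information-theoretic) queries the talk describes, whereas general graphical models would require intractable summation.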


YooJung Choi

PhD Student, UCLA

YooJung Choi is a PhD student in the Computer Science department at UCLA, advised by Guy Van den Broeck. She is part of the Statistical and Relational Artificial Intelligence (StarAI) lab. Her research interests are in tractable probabilistic models, graphical models, knowledge compilation, and trustworthy AI/ML (robustness, fairness, explainability, and more).

Related News & Events

UChicago CS News

NeurIPS 2023 Award-winning paper by DSI Faculty Bo Li, DecodingTrust, provides a comprehensive framework for assessing trustworthiness of GPT models

Feb 01, 2024

“Machine Learning Foundations Accelerate Innovation and Promote Trustworthiness” by Rebecca Willett

Jan 26, 2024

Nightshade: Data Poisoning to Fight Generative AI with Ben Zhao

Jan 23, 2024
UChicago CS News

UChicago Undergrad Analyzes Machine Learning Models Used By CPD, Uncovers Lack of Transparency About Data Usage

Oct 31, 2023
In the News

In The News: U.N. Officials Urge Regulation of Artificial Intelligence

"Security Council members said they feared that a new technology might prove a major threat to world peace."
Jul 27, 2023
UChicago CS News

UChicago Computer Scientists Bring in Generative Neural Networks to Stop Real-Time Video From Lagging

Jun 29, 2023
UChicago CS News

UChicago Assistant Professor Raul Castro Fernandez Receives 2023 ACM SIGMOD Test-of-Time Award

Jun 27, 2023
Michael Franklin
UChicago CS News

Mike Franklin, Dan Nicolae Receive 2023 Arthur L. Kelly Faculty Prize

Jun 02, 2023
UChicago CS News

PhD Student Kevin Bryson Receives NSF Graduate Research Fellowship to Create Equitable Algorithmic Data Tools

Apr 14, 2023
UChicago CS News

Computer Science Displays Catch Attention at MSI’s Annual Robot Block Party

Apr 07, 2023
UChicago CS News

UChicago, Stanford Researchers Explore How Robots and Computers Can Help Strangers Have Meaningful In-Person Conversations

Mar 29, 2023
UChicago CS News

Postdoc Alum John Paparrizos Named ICDE Rising Star

Mar 15, 2023