Date & Time:
February 14, 2023 2:00 pm – 3:00 pm
Location:
Crerar 390, 5730 S. Ellis Ave., Chicago, IL

Generative language models have recently exploded in popularity, with services such as ChatGPT deployed to millions of users. These neural models are fascinating, useful, and incredibly mysterious: rather than designing what we want them to do, we nudge them in the right direction and must discover what they are capable of. But how can we rely on such inscrutable systems?

This talk will describe a number of key characteristics we want from generative models of text, such as coherence and correctness, and show how we can design algorithms to more reliably generate text with these properties. We will also highlight some of the challenges of using such models, including the need to discover and name new, often unexpected emergent behaviors. Finally, we will discuss the implications this has for the grand challenge of understanding models at a level where we can safely control their behavior.

Speakers

Ari Holtzman

PhD Student, University of Washington

Ari Holtzman is a PhD student at the University of Washington. His research has focused broadly on generative models of text: how we can use them and how we can understand them better. His research interests have spanned everything from dialogue, including winning the first Amazon Alexa Prize in 2017, to fundamental research on text generation, such as proposing Nucleus Sampling, a decoding algorithm used broadly both in deployed systems such as the GPT-3 API and in academic research. Ari completed an interdisciplinary degree at NYU combining Computer Science and the Philosophy of Language.
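The core idea behind Nucleus Sampling (often called top-p decoding) is to sample only from the smallest set of tokens whose cumulative probability exceeds a threshold p, renormalized, rather than from the model's full next-token distribution. Below is a minimal sketch assuming only NumPy; the function and parameter names are illustrative and not taken from the original implementation or any particular library.

import numpy as np

def nucleus_sample(logits, p=0.9, rng=None):
    """Sample a token id from the smallest set of tokens (the "nucleus")
    whose cumulative probability exceeds p, renormalized within that set."""
    rng = rng or np.random.default_rng()

    # Convert logits to a probability distribution (softmax).
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Sort tokens by probability, highest first.
    order = np.argsort(probs)[::-1]
    sorted_probs = probs[order]

    # Keep the smallest prefix whose cumulative mass reaches p.
    cutoff = np.searchsorted(np.cumsum(sorted_probs), p) + 1
    nucleus = order[:cutoff]

    # Renormalize within the nucleus and sample.
    nucleus_probs = probs[nucleus] / probs[nucleus].sum()
    return int(rng.choice(nucleus, p=nucleus_probs))

Deployed systems typically expose this threshold as a top_p parameter alongside temperature; truncating the low-probability tail in this way avoids much of the degenerate, repetitive text produced by greedy or beam-search decoding while remaining more coherent than sampling from the full distribution.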

Related News & Events

In the News

Data Ecology: A Socio-Technical Approach to Controlling Dataflows

Sep 18, 2024
UChicago CS News

NeurIPS 2023 Award-winning paper by DSI Faculty Bo Li, DecodingTrust, provides a comprehensive framework for assessing trustworthiness of GPT models

Feb 01, 2024
Video

“Machine Learning Foundations Accelerate Innovation and Promote Trustworthiness” by Rebecca Willett

Jan 26, 2024
Video

Nightshade: Data Poisoning to Fight Generative AI with Ben Zhao

Jan 23, 2024
UChicago CS News

UChicago Undergrad Analyzes Machine Learning Models Used By CPD, Uncovers Lack of Transparency About Data Usage

Oct 31, 2023
In the News

In The News: U.N. Officials Urge Regulation of Artificial Intelligence

"Security Council members said they feared that a new technology might prove a major threat to world peace."
Jul 27, 2023
UChicago CS News

UChicago Computer Scientists Bring in Generative Neural Networks to Stop Real-Time Video From Lagging

Jun 29, 2023
UChicago CS News

UChicago Assistant Professor Raul Castro Fernandez Receives 2023 ACM SIGMOD Test-of-Time Award

Jun 27, 2023
UChicago CS News

Mike Franklin, Dan Nicolae Receive 2023 Arthur L. Kelly Faculty Prize

Jun 02, 2023
UChicago CS News

PhD Student Kevin Bryson Receives NSF Graduate Research Fellowship to Create Equitable Algorithmic Data Tools

Apr 14, 2023
UChicago CS News

Computer Science Displays Catch Attention at MSI’s Annual Robot Block Party

Apr 07, 2023
UChicago CS News

UChicago, Stanford Researchers Explore How Robots and Computers Can Help Strangers Have Meaningful In-Person Conversations

Mar 29, 2023