Abstract: Human-centered AI advocates a shift from emulating humans to empowering people so that AI can benefit humanity. A useful metaphor is to view the human as a puzzle piece: we must understand the shape of this piece so that we can build AI as its complement. In this talk, I focus on the case of AI-assisted decision making, where explanations of predictions are offered to the decision maker, to illustrate key principles of human-centered AI. Ideally, explanations of AI predictions enhance human decisions by improving the transparency of AI models, but my work reveals that current approaches fall short of this goal. I then develop a theoretical framework showing that the missing link lies in the neglect of human interpretation. Building on this insight, I design algorithms that align AI explanations with human intuitions and demonstrate substantial improvements in human performance. To conclude, I will compare my perspective with reinforcement learning from human feedback and discuss future directions for human-centered AI.
Chenhao Tan is an assistant professor of computer science and data science at the University of Chicago, with an affiliated appointment at the Harris School of Public Policy. He obtained his PhD in computer science from Cornell University and bachelor's degrees in computer science and in economics from Tsinghua University. Before joining the University of Chicago, he was an assistant professor at the University of Colorado Boulder and a postdoc at the University of Washington. His research interests include human-centered AI, natural language processing, and computational social science. His work has been covered by many news media outlets, such as the New York Times and the Washington Post. He has received a Sloan Research Fellowship, an NSF CAREER award, an NSF CRII award, a Google Research Scholar award, research awards from Amazon, IBM, JP Morgan, and Salesforce, a Facebook Fellowship, and a Yahoo! Key Scientific Challenges award.