Peter Hase (Anthropic) - AI Safety Through Interpretable and Controllable Language Models
Abstract: The AI research community has become increasingly concerned about risks arising from capable AI systems, ranging from misuse of generative models to misalignment of agents. My research aims to address problems in AI safety by tackling key issues with the interpretability and controllability of large language models (LLMs). In this talk, I present research showing that we are well beyond the point of thinking of AI systems as “black boxes.” AI models, and LLMs especially, are more interpretable than ever. Advances in interpretability have enabled us to control model reasoning and update knowledge in LLMs, among other promising applications. My work has also highlighted challenges that must be solved for interpretability to continue progressing. Building on this point, I argue that we can explain LLM behavior in terms of “beliefs,” meaning that core knowledge about the world determines the downstream behavior of models. Furthermore, model editing techniques provide a toolkit for intervening on beliefs in LLMs in order to test theories about their behavior. By better understanding beliefs in LLMs and developing robust methods for controlling their behavior, we will create a scientific foundation for building powerful and safe AI systems.
Speakers

Peter Hase
Peter Hase is an AI Resident at Anthropic. He recently completed his PhD at the University of North Carolina at Chapel Hill, advised by Mohit Bansal. His research focuses on NLP and AI safety, with the goal of explaining and controlling the behavior of machine learning models. He is a recipient of a Google PhD Fellowship and, before that, a Royster PhD Fellowship. While at UNC, he also worked at Meta, Google, and the Allen Institute for AI.