Neuro-symbolic Representations for Commonsense Knowledge and Reasoning
Situations described in natural language are richer than what humans explicitly communicate. For example, the sentence “She pumped her fist” connotes many potentially auspicious causes. For machines to understand natural language, they must be able to reason about the commonsense inferences that underlie explicitly stated information. In this talk, I will present work on combining traditional symbolic knowledge and reasoning techniques with modern neural representations to endow machines with these capacities.
First, I will describe COMET, an approach for learning commonsense knowledge about an unbounded set of situations and concepts using transfer learning from language to knowledge. Second, I will demonstrate how these neural knowledge representations can dynamically construct symbolic graphs of contextual commonsense knowledge, and how these graphs can be used for interpretable, generalizable reasoning. Finally, I will discuss current and future research directions on conceptualizing NLP as commonsense simulation, and the impact of this framing on challenging open-ended tasks such as story generation.
If you are affiliated with UChicago CS and would like to attend this talk remotely, contact firstname.lastname@example.org for links.
Host: Ben Zhao
Antoine Bosselut is a PhD student at the University of Washington, advised by Professor Yejin Choi, and a student researcher at the Allen Institute for Artificial Intelligence. His research focuses on building systems for commonsense knowledge representation and reasoning that combine the strengths of modern neural and traditional symbolic methods. He was also a student researcher on the Deep Learning team at Microsoft Research from 2017 to 2018. He is supported by an AI2 Key Scientific Challenges award.