Rachel Rudinger (Johns Hopkins) - Natural Language Understanding for Events and Participants in Text
Consider the difference between the two sentences “Pat didn’t remember to water the plants” and “Pat didn’t remember that she had watered the plants.” Fluent English speakers recognize that the former sentence implies that Pat did not water the plants, while the latter sentence implies she did. This distinction is crucial to understanding the meaning of these sentences, yet it is one that automated natural language processing (NLP) systems struggle to make. In this talk, I will discuss my work on developing state-of-the-art NLP models that make essential inferences about events (e.g., a “watering” event) and participants (e.g., “Pat” and “the plants”) in natural language sentences. In particular, I will focus on two supervised NLP tasks that serve as core tests of language understanding: Event Factuality Prediction and Semantic Proto-Role Labeling. I will also discuss my work on unsupervised acquisition of common-sense knowledge from large natural language text corpora, and the concomitant challenge of detecting problematic social biases in NLP models trained on such data.
Host: Ben Zhao
Rachel Rudinger
Rachel Rudinger is a Ph.D. candidate at Johns Hopkins University in Computer Science and the Center for Language and Speech Processing. Her work focuses on problems in natural language understanding, including event factuality prediction, semantic proto-role labeling, and acquisition of common-sense knowledge from text. During her Ph.D., Rachel has interned at the Allen Institute for Artificial Intelligence in Seattle and at Saarland University in Saarbrücken, Germany. Previously, she received her B.S. in Computer Science (cum laude) from Yale University. Rachel is an NSF Graduate Research Fellow and a member of MIT's 2018 Rising Stars in EECS, and has been interviewed about her work on the popular NLP Highlights podcast.