Assistant professors Bill Fefferman and Chenhao Tan each received grants from the Google Research Scholar Program, which funds world-class research by early-career computer scientists. Fefferman received one of five awards in the quantum computing area, while Tan was one of seven researchers funded for work in natural language processing.

Both researchers said that the grants would help accelerate progress toward their research goals: finding practical applications for today’s quantum computers and building new, explainable AI models for understanding and generating language.

Fefferman’s proposal, “Understanding the feasibility of practical applications from quantum supremacy experiments,” will explore the theoretical foundations of how current and near-future quantum computers generate certified random numbers. If that theory is confirmed, these algorithms would provide a valuable use case for quantum computing in cryptography, cryptocurrency, and other applications that require truly random keys for security.

“We’re in this really exciting era where we’re seeing quantum computers that for the first time can solve problems that are at the border of what can be computed classically,” Fefferman said. “The next step is to show that we can channel this power to solve something useful. Quantum computers, by their nature, are non-deterministic, and so they can produce random numbers. But the really interesting question is, how do you trust the quantum computer?”

Fefferman will explore that question by probing the theory underlying the random circuit sampling experiment that Google used to claim “quantum supremacy” – a task performed by a quantum computer that would be all but impossible for a classical computer – in 2019. A proposal by Scott Aaronson described a protocol for using that experiment to generate certified random numbers, but the theory requires more vetting to confirm its veracity.

“What Scott was saying is that today, with the power that Google already has with this device, you could in principle generate certified random numbers using the existing experiment, assuming a very non-standard conjecture from complexity theory,” Fefferman said. “My proposal is about giving strong evidence that this conjecture is true, which will increase our confidence that Google’s experiment can produce certified random numbers.”
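To give a sense of the structure of such a certified-randomness scheme, the toy sketch below follows the client/server shape Fefferman describes: a client issues random challenge circuits, an untrusted server returns samples, and the client accepts those samples only if a cross-entropy-style score looks consistent with honest quantum behavior. The circuit generator, the "server" (here just a classical simulator), the score, and the acceptance threshold are all illustrative assumptions for exposition; this is not Google's experiment or Aaronson's actual protocol.

```python
# Toy sketch of a certified-randomness protocol built on random circuit sampling.
# Everything here is an illustrative stand-in, simulated classically at tiny scale.
import numpy as np

N_QUBITS = 5          # toy size; real experiments use ~50+ qubits
DIM = 2 ** N_QUBITS

def random_circuit_unitary(rng):
    """Stand-in for a random challenge circuit: a Haar-random unitary via QR."""
    z = rng.normal(size=(DIM, DIM)) + 1j * rng.normal(size=(DIM, DIM))
    q, r = np.linalg.qr(z)
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases

def untrusted_server_sample(u, rng):
    """An honest 'quantum server': sample one bitstring from the circuit's output distribution."""
    probs = np.abs(u[:, 0]) ** 2          # output distribution on input |0...0>
    probs = probs / probs.sum()
    return int(rng.choice(DIM, p=probs))

def xeb_score(u, samples):
    """Linear cross-entropy-style check: ~1 for faithful quantum samples, ~0 for uniform guessing."""
    probs = np.abs(u[:, 0]) ** 2
    return DIM * float(np.mean(probs[samples])) - 1.0

rng = np.random.default_rng(0)
accepted_bits = []
for _ in range(20):
    u = random_circuit_unitary(rng)                          # client's fresh challenge
    samples = [untrusted_server_sample(u, rng) for _ in range(50)]
    if xeb_score(u, np.array(samples)) > 0.5:                # illustrative acceptance threshold
        accepted_bits.extend(b for s in samples for b in format(s, f"0{N_QUBITS}b"))

# A real protocol would feed accepted samples through a randomness extractor;
# here we only report how many raw bits survived the spot-check.
print(f"collected {len(accepted_bits)} candidate random bits (pre-extraction)")
```

The key point the sketch tries to convey is the trust question in Fefferman's quote: the client never trusts the server directly, but only the statistical check, and the strength of that guarantee rests on the complexity-theoretic conjecture his proposal aims to support.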

Tan’s proposal, “Robustifying NLP models via Learning from Explanations,” will help protect these increasingly used AI systems against vulnerabilities and common mistakes by making their decisions more transparent. Though chatbots, digital assistants, and similar technologies have rapidly advanced in their ability to understand commands and questions and return accurate and realistic responses, they can still be tripped up by how requests are worded or by contradictory information that a human would easily recognize.

“The overarching goal is about how to make the model more robust and less brittle against small changes in the input,” Tan said. “A lot of findings have shown that these models may not actually understand how to perform the tasks that they are asked to perform, but use some shortcuts or spurious correlations to give you the right answer, with the wrong reason.”

For his Google-funded research, Tan will explore new ways of training these models. Instead of the current practice of providing raw information and letting neural networks form their own associations, Tan’s approach would include explanatory information that guides models towards using logic to arrive at an answer. For example, the training data may identify the important sentences in a body of text, or include reasoning processes that contain more information about how the input should be used to answer queries.
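As a concrete, hypothetical illustration of what explanation-augmented training data might look like, the sketch below pairs each label with a human-marked rationale and adds a simple penalty when a model's attention falls outside the marked sentences. The field names, the attention-based penalty, and the weighting term are assumptions made for exposition, not a description of Tan's actual method.

```python
# Minimal sketch of "learning from explanations" training data and a toy
# explanation-regularized loss. All names and numbers are illustrative assumptions.
train_example = {
    "text": [
        "The restaurant was nearly empty on a Friday night.",    # sentence 0
        "Our server was attentive and the pasta was excellent.", # sentence 1
        "Parking nearby is difficult.",                           # sentence 2
    ],
    "label": "positive",
    "rationale": [1],   # the sentence(s) that actually justify the label
}

def explanation_regularized_loss(task_loss, attention_weights, rationale, alpha=0.5):
    """Combine the usual task loss with a penalty for attention mass
    that falls outside the human-marked rationale sentences."""
    mass_on_rationale = sum(attention_weights[i] for i in rationale)
    return task_loss + alpha * (1.0 - mass_on_rationale)

# Toy usage: a model that concentrates on the rationale sentence is penalized less.
print(explanation_regularized_loss(0.3, [0.1, 0.8, 0.1], rationale=[1]))  # 0.4
print(explanation_regularized_loss(0.3, [0.7, 0.1, 0.2], rationale=[1]))  # 0.75
```

The intent mirrors Tan's description: the supervision tells the model not just what the right answer is, but which parts of the input should drive it, discouraging shortcuts and spurious correlations.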

If successful, these more robust models may not only be better at day-to-day tasks, but also harder to “game” by malicious users looking to break the system or artificially trigger a desired output, such as a loan approval or a bypassed misinformation filter. Tan’s group will also apply these approaches to popular NLP models such as GPT-3, which generates realistic text.

“We think explanation can help the AI actually learn the underlying reasoning and avoid these critical mistakes against attacks,” Tan said. “You not only have the labels, but explain the labels, and then see how you can teach the model to learn from such explanations.”

In previous years, additional UChicago CS faculty have received Google Research Awards, including Andrew Chien for the study of green cloud computing, Junchen Jiang for research on streaming video analytics, and Yanjing Li for a project on “Resilient Accelerators for Deep Learning Training Tasks.”
