Assistant professors Bill Fefferman and Chenhao Tan each received grants from the Google Research Scholar Program, which funds world-class research by early-career computer scientists. Fefferman received one of five awards in the quantum computing area, while Tan was one of seven researchers funded for work in natural language processing.

Both researchers said the grants would help accelerate progress toward their research goals: finding practical applications for today’s quantum computers and building new, explainable AI models for understanding and generating language.

Fefferman’s proposal, “Understanding the feasibility of practical applications from quantum supremacy experiments,” will explore the theoretical foundations of how current and near-future quantum computers could generate certified random numbers. If the theory holds, such protocols would provide a valuable use case for quantum computing in cryptography, cryptocurrency, and other applications that require truly random keys for security.

“We’re in this really exciting era where we’re seeing quantum computers that for the first time can solve problems that are at the border of what can be computed classically,” Fefferman said. “The next step is to show that we can channel this power to solve something useful. Quantum computers, by their nature, are non-deterministic, and so they can produce random numbers. But the really interesting question is, how do you trust the quantum computer?”

Fefferman will explore that question by probing the theory behind random circuit sampling, the experiment Google used in 2019 to claim “quantum supremacy,” a task performed by a quantum computer that would be all but impossible for a classical computer. A proposal by Scott Aaronson described a protocol for using that experiment to generate certified random numbers, but the underlying theory requires further vetting.

“What Scott was saying is that today, with the power that Google already has with this device, you could in principle generate certified random numbers using the existing experiment, assuming a very non-standard conjecture from complexity theory,” Fefferman said. “My proposal is about giving strong evidence that this conjecture is true, which will increase our confidence that Google’s experiment can produce certified random numbers.”
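The verification idea at the heart of both Google’s supremacy claim and Aaronson’s certification proposal is the linear cross-entropy benchmark: score the device’s output bitstrings against the ideal output distribution of the challenge circuit. The toy below is purely illustrative (it is not Aaronson’s protocol); an exponentially distributed `probs` array stands in for a random circuit’s output distribution, and the score cleanly separates an honest sampler from a classical “spoofer” that returns uniform bitstrings.

```python
# Illustrative toy only: linear cross-entropy benchmark (XEB) scoring.
# A Porter-Thomas-like distribution stands in for the ideal output
# distribution of one random quantum circuit.
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 12
dim = 2 ** n_qubits

# Stand-in for the ideal output probabilities of a random circuit
# (exponential weights, normalized), approximating Porter-Thomas statistics.
probs = rng.exponential(size=dim)
probs /= probs.sum()

def linear_xeb(samples: np.ndarray) -> float:
    """F = 2^n * (mean ideal probability of the returned bitstrings) - 1."""
    return dim * probs[samples].mean() - 1.0

n_samples = 5000
honest = rng.choice(dim, size=n_samples, p=probs)  # device sampling the ideal distribution
spoof = rng.integers(dim, size=n_samples)          # classical spoofer sampling uniformly

print(f"honest device XEB ~ {linear_xeb(honest):.3f}")   # close to 1
print(f"uniform spoofer XEB ~ {linear_xeb(spoof):.3f}")  # close to 0
```

In the certified-randomness setting, a high score on fresh, client-chosen circuits is what (under the complexity-theoretic conjecture Fefferman aims to support) guarantees that the returned bitstrings contain genuine entropy rather than precomputed or classically simulated output.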

Tan’s proposal, “Robustifying NLP models via Learning from Explanations,” aims to protect these increasingly common AI systems against vulnerabilities and common mistakes by making their decisions more transparent. Though chatbots, digital assistants, and similar technologies have rapidly advanced in their ability to understand commands and questions and return accurate, realistic responses, they can still be tripped up by how a request is worded or by contradictory information that a human would easily recognize.

“The overarching goal is about how to make the model more robust and less brittle against small changes in the input,” Tan said. “A lot of findings have shown that these models may not actually understand how to perform the tasks that they are asked to perform, but use some shortcuts or spurious correlations to give you the right answer, with the wrong reason.”

For his Google-funded research, Tan will explore new ways of training these models. Instead of the current practice of providing raw information and letting neural networks form their own associations, Tan’s approach would include explanatory information that guides models towards using logic to arrive at an answer. For example, the training data may identify the important sentences in a body of text, or include reasoning processes that contain more information about how the input should be used to answer queries.
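As a concrete, purely illustrative sketch of what such explanation-augmented data might look like, the example below pairs a label with a marked supporting sentence and a toy auxiliary loss that penalizes a model for resting its answer on the wrong evidence. The field names and the loss are our own assumptions for illustration, not Tan’s actual formulation.

```python
# Illustrative only: an explanation-augmented training example, where the data
# marks which sentence actually supports the answer so a model can be
# penalized for leaning on spurious shortcuts.
training_example = {
    "question": "Was the applicant employed at the time of application?",
    "passage": [
        "The applicant lives in Chicago.",                        # irrelevant
        "She has worked full-time at the same firm since 2019.",  # supporting evidence
        "Her previous application was denied in 2018.",           # tempting spurious shortcut
    ],
    "label": "yes",
    "explanation": {"supporting_sentences": [1]},  # index of the sentence the answer should rest on
}

def evidence_loss(sentence_weights, supporting_sentences):
    """Toy auxiliary loss: push the model's attention mass onto the marked evidence."""
    mass_on_evidence = sum(sentence_weights[i] for i in supporting_sentences)
    return 1.0 - mass_on_evidence  # zero when all weight sits on the supporting sentence(s)

# A model that mostly attends to the spurious third sentence is penalized.
print(evidence_loss([0.1, 0.2, 0.7],
                    training_example["explanation"]["supporting_sentences"]))
```

Training on signals like these, alongside the usual label loss, is one way to nudge a model toward the intended reasoning rather than whatever correlation happens to predict the label.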

If successful, these more robust models may not only perform better at day-to-day tasks, but also be harder for malicious users to “game” when trying to break the system or artificially trigger a desired output, such as a loan approval or bypassing a misinformation filter. Tan’s group will also test these approaches on popular NLP models such as GPT-3, which generates realistic text.

“We think explanation can help the AI actually learn the underlying reasoning and avoid these critical mistakes against attacks,” Tan said. “You not only have the labels, but explain the labels, and then see how you can teach the model to learn from such explanations.”

In previous years, additional UChicago CS faculty have received Google Research Awards, including Andrew Chien for the study of green cloud computing, Junchen Jiang for research on streaming video analytics, and Yanjing Li for a project on “Resilient Accelerators for Deep Learning Training Tasks.”
