Assistant professors Bill Fefferman and Chenhao Tan each received grants from the Google Research Scholar Program, which funds world-class research by early-career computer scientists. Fefferman received one of five awards in the quantum computing area, while Tan was one of seven researchers funded for work in natural language processing.

Both researchers said the grants would help accelerate their research goals: finding practical applications for today’s quantum computers and building new, explainable AI models for understanding and generating language.

Fefferman’s proposal, “Understanding the feasibility of practical applications from quantum supremacy experiments,” will explore the theoretical foundations of how current and near-future quantum computers generate certified random numbers. If confirmed, these algorithms would provide a valuable use case for quantum computing in cryptography, cryptocurrency, and other applications that require truly random keys for security.

“We’re in this really exciting era where we’re seeing quantum computers that for the first time can solve problems that are at the border of what can be computed classically,” Fefferman said. “The next step is to show that we can channel this power to solve something useful. Quantum computers, by their nature, are non-deterministic, and so they can produce random numbers. But the really interesting question is, how do you trust the quantum computer?”

Fefferman will explore that question by probing the theory underlying the random circuit sampling experiment that Google used in 2019 to claim “quantum supremacy” (the performance of a task by a quantum computer that would be all but impossible for a classical computer). A proposal by Scott Aaronson described a protocol that uses that experiment to generate certified random numbers, but the theory requires more vetting before it can be confirmed.

“What Scott was saying is that today, with the power that Google already has with this device, you could in principle generate certified random numbers using the existing experiment, assuming a very non-standard conjecture from complexity theory,” Fefferman said. “My proposal is about giving strong evidence that this conjecture is true, which will increase our confidence that Google’s experiment can produce certified random numbers.”
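The statistical test at the heart of such experiments, the linear cross-entropy benchmark, can be sketched in a few lines of Python. The snippet below is a toy illustration under stated assumptions: a small Haar-random unitary stands in for the random circuit, and a uniform sampler plays the role of a classical impostor. It is not Google’s actual protocol or Aaronson’s certification scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 8
dim = 2 ** n_qubits

# Stand-in for a random circuit: a Haar-random unitary from the QR
# decomposition of a complex Gaussian matrix, with phases corrected.
z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
q, r = np.linalg.qr(z)
u = q * (np.diag(r) / np.abs(np.diag(r)))

# Output distribution of the "circuit" applied to the all-zeros state
probs = np.abs(u[:, 0]) ** 2

# Honest quantum samples vs. a classical cheater sampling uniformly
quantum = rng.choice(dim, size=20000, p=probs)
uniform = rng.choice(dim, size=20000)

def linear_xeb(samples):
    """Linear cross-entropy benchmark: near 1 for faithful sampling of
    the circuit's output distribution, near 0 for uniform guessing."""
    return dim * probs[samples].mean() - 1

print(linear_xeb(quantum))  # well above 0, near 1
print(linear_xeb(uniform))  # near 0
```

A verifier who knows the circuit can run this check on a batch of claimed samples; a score near 1 is evidence that the samples really came from the circuit’s hard-to-simulate output distribution, which is what makes the resulting randomness certifiable in principle.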

Tan’s proposal, “Robustifying NLP models via Learning from Explanations,” will help protect these increasingly used AI systems against vulnerabilities and common mistakes by making their decisions more transparent. Though chatbots, digital assistants, and similar technologies have rapidly advanced in their ability to understand commands and questions and return accurate and realistic responses, they can still be tripped up by how requests are worded or by contradictory information that a human would easily recognize.

“The overarching goal is about how to make the model more robust and less brittle against small changes in the input,” Tan said. “A lot of findings have shown that these models may not actually understand how to perform the tasks that they are asked to perform, but use some shortcuts or spurious correlations to give you the right answer, with the wrong reason.”

For his Google-funded research, Tan will explore new ways of training these models. Instead of the current practice of providing raw information and letting neural networks form their own associations, Tan’s approach would include explanatory information that guides models towards using logic to arrive at an answer. For example, the training data may identify the important sentences in a body of text, or include reasoning processes that contain more information about how the input should be used to answer queries.
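As a toy illustration of the idea (an assumption for exposition, not Tan’s actual method), one can penalize a linear classifier for placing weight on features that a human rationale did not mark as relevant. Here a spurious shortcut feature correlates with the label just as well as the genuine cue; the explanation penalty steers the model toward the annotated feature:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Two binary features: x0 is the genuine cue, x1 is a spurious
# shortcut that happens to correlate with the label in training data.
y = rng.integers(0, 2, n)
x0 = np.where(rng.random(n) < 0.9, y, 1 - y)
x1 = np.where(rng.random(n) < 0.9, y, 1 - y)
X = np.column_stack([x0, x1]).astype(float)

rationale = np.array([1.0, 0.0])  # human explanation: only x0 matters

def train(lam, steps=2000, lr=0.1):
    """Logistic regression by gradient descent; lam scales an L2
    penalty on weights of features outside the human rationale."""
    w = np.zeros(2)
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        grad = X.T @ (p - y) / n
        grad += lam * (1 - rationale) * w  # explanation penalty
        w -= lr * grad
    return w

w_plain = train(lam=0.0)  # leans on both features, shortcut included
w_expl = train(lam=1.0)   # weight concentrates on the rationale feature
print(w_plain, w_expl)
```

Without the penalty, the model exploits the shortcut just as heavily as the real signal; with it, the shortcut weight collapses, so a test-time input where the spurious feature flips no longer sways the prediction. Real explanation-supervised training operates on richer signals, such as highlighted sentences or free-text reasoning, but the mechanism of constraining the model toward human-marked evidence is the same.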

If successful, these more robust models may not only be better at day-to-day tasks, but also harder to “game” by malicious users looking to break the system or artificially trigger a desired output, such as a loan approval or bypassing a misinformation filter. Tan’s group will also explore these new approaches with popular NLP models such as GPT-3, which generates realistic text.

“We think explanation can help the AI actually learn the underlying reasoning and avoid these critical mistakes against attacks,” Tan said. “You not only have the labels, but explain the labels, and then see how you can teach the model to learn from such explanations.”

In previous years, additional UChicago CS faculty have received Google Research Awards, including Andrew Chien for the study of green cloud computing, Junchen Jiang for research on streaming video analytics, and Yanjing Li for a project on “Resilient Accelerators for Deep Learning Training Tasks.”
