Three projects led by UChicago CS faculty were part of a $6.5 million round of research funding on cybersecurity and secure critical infrastructure from the Digital Transformation Institute.

Research proposals from Professors Nick Feamster, Ben Zhao, and Heather Zheng were among the 24 projects chosen by the institute in its third funding round. The projects tackle important challenges and create new tools around security vulnerabilities on the internet and in machine learning systems.

“Cybersecurity is an immediate existential issue,” said Thomas M. Siebel, chairman and CEO of C3 AI, a leading enterprise AI software provider. “We are equipping top scientists with the means to advance technology to help secure critical infrastructure.”

The University of Chicago is one of 10 member institutions in the Digital Transformation Institute, which was formed to accelerate the benefits of artificial intelligence for business, government, and society. Previous funding rounds have supported research on COVID-19 and climate and energy.

Read about the projects below and the full cohort of funded research here.

Continuously and Automatically Discovering and Remediating Internet-Facing Security Vulnerabilities

Nick Feamster (UChicago), Zakir Durumeric (Stanford), Prateek Mittal (Princeton)

The project has two themes: (1) developing and applying fingerprinting tools and techniques to automatically generate fingerprints for known vulnerabilities and other security weaknesses; and (2) designing, implementing, and deploying large-scale scanning techniques to uncover these vulnerabilities in a broad array of settings (such as industrial control and other cyber-physical settings). The approaches we propose to develop extend a rich body of previous work in supervised machine learning (to detect, fingerprint, and inventory vulnerable infrastructure), unsupervised machine learning (to detect anomalous device behavior), and large-scale Internet scanning.
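To illustrate the first theme, a vulnerability fingerprint can be as simple as a pattern matched against a service banner collected by a scanner. The sketch below is a minimal, hypothetical example; the fingerprint rules and banners are invented for illustration and are not drawn from the project:

```python
import re

# Hypothetical fingerprint rules: regexes over service banners mapped to
# known-vulnerable software versions (illustrative entries, not a real database).
FINGERPRINTS = {
    "OpenSSH pre-7.4": re.compile(r"SSH-2\.0-OpenSSH_([0-6]\.\d+|7\.[0-3])"),
    "vsftpd 2.3.4 backdoor": re.compile(r"vsftpd 2\.3\.4"),
}

def match_fingerprints(banner: str) -> list[str]:
    """Return the labels of all fingerprint rules the banner matches."""
    return [label for label, rx in FINGERPRINTS.items() if rx.search(banner)]

# Banners as a large-scale scanner might collect them.
banners = [
    "SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.8",
    "220 (vsftpd 2.3.4)",
    "SSH-2.0-OpenSSH_8.9p1",
]
for b in banners:
    print(b, "->", match_fingerprints(b))
```

Real deployments automate both halves: the fingerprint rules would be generated from labeled examples rather than written by hand, and the banners would come from Internet-wide scans rather than a fixed list.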

Fundamental Limits on the Robustness of Supervised Machine Learning Algorithms

Ben Zhao (UChicago), Daniel Cullina (Penn State), Arjun Nitin Bhagoji (UChicago)

Determining fundamental bounds on the robustness of machine learning algorithms is of critical importance for securing cyberinfrastructure. Machine learning is ubiquitous but prone to severe vulnerabilities, particularly at deployment: adversarial modifications of inputs can induce misclassification, with catastrophic consequences in safety-critical systems. This team will develop a framework to obtain lower bounds on robustness for any supervised learning algorithm (classifier) when the data distribution and adversary are specified. The framework will work with a general class of distributions and adversaries, encompassing most of those proposed in prior work, and can be extended to obtain lower bounds on robustness for any pre-trained feature extractor or family of classifiers, and for multiple attackers operating in tandem.

The framework's implications for training and deploying robust models are numerous and consequential. Perhaps the most important is enabling algorithm designers to compute a robustness score for either a specific classifier or a family of classifiers: for any adversary, the score is the gap to the optimal achievable performance, defined by the equilibrium of a classification game between the adversary and the classifiers. Robustness scores can also be determined for pre-trained feature extractors, widely used in transfer learning, enabling designers to pick robust ones. Robust training can also be improved via byproducts of the framework, which enables the identification of hard points, provides optimal soft labels for use during training, and enables better architecture search for robustness by identifying model layers and hyperparameters that affect robustness.
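The adversarial misclassification the project studies is easy to see in a minimal case: for a linear classifier, an L-infinity-bounded perturbation of budget eps flips the prediction whenever eps times the L1 norm of the weights exceeds the input's margin. The sketch below uses toy weights and inputs invented for illustration, not anything from the project:

```python
import numpy as np

# Toy linear classifier: predict sign(w . x). Weights are illustrative.
w = np.array([1.0, -2.0, 0.5])

def predict(x):
    return 1 if float(w @ x) >= 0 else -1

# A clean input the classifier labels +1, with margin |w . x| = 0.3.
x = np.array([0.4, 0.1, 0.2])

# An L-infinity adversary with budget eps moves every coordinate against
# the weights; for a linear model, -eps * sign(w) is the worst case.
# Here eps * ||w||_1 = 0.2 * 3.5 = 0.7 exceeds the margin, so the label flips.
eps = 0.2
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # 1 -1
```

The proposed framework generalizes exactly this kind of worst-case analysis from a single linear model to lower bounds over whole families of classifiers and adversaries.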

Robust and Scalable Forensics for Deep Neural Networks

Ben Zhao (UChicago), Heather Zheng (UChicago), Bo Li (University of Illinois at Urbana-Champaign)

For external-facing systems in real-world settings, few if any security measures offer full protection against all attacks. In practice, digital forensics and incident response (DFIR) provides a complementary security tool that focuses on using post-attack evidence to trace a successful attack back to its root cause. Not only can forensic tools help identify (and patch) the points of vulnerability responsible for successful attacks (e.g., breached servers, unreliable data-labeling services), but they also provide a strong deterrent against future attackers through the threat of post-attack identification. This is particularly attractive for machine learning systems, where defenses are routinely broken soon after release by more powerful attacks.

This team plans to build forensic tools that boost the security of deployed ML systems by using post-attack analysis to identify the key factors leading to a successful attack. We consider two broad types of attacks: “poison” attacks, where corrupted training data embeds misbehaviors into a model during training, and “inference-time” attacks, where an input is augmented with a model-specific adversarial perturbation. For poison attacks, we propose two complementary methods to identify the training data responsible for the misbehavior, one using selective unlearning and one using computation of the Shapley value from game theory. For inference-time attacks, we will explore the use of hidden labels to shift feature representations, making it possible to identify the source model of an adversarial example. Given promising early results, our goal is both a principled understanding of these approaches and a suite of usable software tools.
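The Shapley-value idea behind poison attribution can be sketched generically (this is an illustration of the game-theoretic concept, not the team's implementation): treat validation accuracy as the value of a coalition of training points, and score each point by its average marginal contribution over all orderings. A poisoned point tends to receive the lowest, often negative, score. All data below are invented for illustration:

```python
import itertools
import math
import numpy as np

# Toy training set: 1-D points with labels. The last point is "poisoned":
# it sits in class 1's region at x = 1.1 but carries the corrupted label 0.
X = np.array([0.0, 0.1, 1.0, 1.1])
y = np.array([0, 0, 1, 0])            # y[3] is the corrupted label
Xv = np.array([0.05, 0.9, 1.2])       # clean validation points
yv = np.array([0, 1, 1])

def value(subset):
    """Validation accuracy of a 1-nearest-neighbor rule fit on `subset`."""
    if not subset:
        return 0.0
    idx = np.array(subset)
    preds = [y[idx[np.argmin(np.abs(X[idx] - xq))]] for xq in Xv]
    return float(np.mean(np.array(preds) == yv))

# Exact Shapley values: average marginal contribution of each training
# point over all orderings (tractable only for tiny sets like this one).
n = len(X)
shapley = np.zeros(n)
for perm in itertools.permutations(range(n)):
    seen = []
    for i in perm:
        before = value(seen)
        seen.append(i)
        shapley[i] += value(seen) - before
shapley /= math.factorial(n)

print(shapley)  # the poisoned point, index 3, receives the lowest score
```

At realistic training-set sizes the exhaustive enumeration above is infeasible, which is why practical Shapley-based attribution relies on sampling or other approximations.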
