Date & Time:
October 24, 2019 2:30 pm – 3:30 pm
Location:
Crerar 298, 5730 S. Ellis Ave., Chicago, IL

Compromising Cyber-Resilience via Spatial and Temporal Dynamics

When reasoning about cyber-resilience, security analysts typically rely on simple abstractions of the system to make the analysis tractable. In this talk, I will highlight a key limitation of this approach: commonly used abstractions do not explicitly model the ability of an adversary to maliciously induce temporal or spatial changes in the system, which can then be used to compromise user security or privacy. I will illustrate this compromise of system security and privacy via two case studies of critical systems: public key infrastructure and machine learning-based systems.

First, I will showcase how an adversary can exploit temporal dynamics in networked systems to compromise our public key infrastructure, particularly the Internet domain validation protocol. The domain validation protocol is widely used by web services to obtain digital certificates, which provide an authentic binding between a domain name and its public key, serving as a root of trust for the privacy of our online communications. However, an adversary can exploit vulnerabilities in Internet routing to intercept communications in the domain validation protocol and maliciously obtain digital certificates. I will demonstrate the feasibility of this approach using real-world BGP attacks conducted in an ethical manner. This work shows that the core foundations of Internet encryption are at risk, and it is impacting the real-world deployment of secure countermeasures at Let's Encrypt, the world's largest certificate authority.
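To make the countermeasure concrete, here is a minimal sketch of the idea behind multi-perspective domain validation: a localized BGP hijack can fool a single vantage point, so the certificate authority only accepts a validation if a quorum of geographically and topologically distinct vantage points agree. All function names, tokens, and the quorum threshold below are illustrative assumptions, not Let's Encrypt's actual implementation.

```python
# Sketch of quorum-based (multi-perspective) domain validation.
# A single vantage point can be deceived by a localized BGP hijack;
# requiring agreement among several vantage points forces the attacker
# to intercept traffic on many paths at once.

def validate_from_vantage(domain: str, expected_token: str,
                          observed_tokens: dict) -> bool:
    """Return True if this vantage point retrieved the expected challenge token."""
    return observed_tokens.get(domain) == expected_token

def multi_perspective_validate(domain: str, expected_token: str,
                               vantage_observations: list,
                               quorum: float = 0.75) -> bool:
    """Issue the certificate only if a quorum of vantage points agree."""
    votes = [validate_from_vantage(domain, expected_token, obs)
             for obs in vantage_observations]
    return sum(votes) / len(votes) >= quorum

# A localized hijack poisons one of four vantage points: validation still succeeds.
honest = {"example.com": "token-abc"}
hijacked = {"example.com": "token-evil"}
print(multi_perspective_validate("example.com", "token-abc",
                                 [honest, honest, honest, hijacked]))   # True
# A broader hijack poisoning three vantage points is rejected.
print(multi_perspective_validate("example.com", "token-abc",
                                 [honest, hijacked, hijacked, hijacked]))  # False
```

The design choice here is the quorum threshold: raising it increases resistance to partial hijacks at the cost of more false rejections from benign network failures.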

Second, I will showcase how an adversary can exploit spatial perturbations (adversarial examples) in machine learning-based systems to induce misclassifications. Given the ubiquity of machine learning applications, they are increasingly being deployed in adversarial scenarios, where an attacker stands to gain from the failure of a system to classify inputs correctly. I will introduce a new class of adversarial examples that fool existing machine learning classifiers trained on benign data by adding strategic perturbations to "out-of-distribution" test data. These attacks are robust in physically realizable contexts and are effective in black-box scenarios. Finally, I will introduce a new framework for characterizing the fundamental limits of what can be learned in the presence of adversarial examples. This framework is the first approach to characterize lower bounds on the optimal loss that can be achieved by any classifier in the presence of an adversary.
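For intuition, here is a toy sketch of how a small, strategically chosen perturbation can flip a classifier's decision, in the style of the fast gradient sign method. The linear model, weights, and inputs below are toy assumptions for illustration only; they are not the classifiers or attacks studied in the talk.

```python
import numpy as np

# Toy linear classifier: label 1 if w.x + b > 0, else 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x: np.ndarray) -> int:
    """Binary decision of the toy linear model."""
    return int(w @ x + b > 0)

x = np.array([0.3, -0.2, 0.4])   # benign input, classified as 1
assert predict(x) == 1

# For a linear model the gradient of the score w.r.t. the input is just w.
# Nudging each coordinate against the sign of the gradient (a bounded
# L-infinity perturbation) flips the decision.
eps = 0.6
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # prints "1 0": the perturbation flips the label
```

The key point the sketch illustrates is that the perturbation is bounded (each coordinate moves by at most eps) yet chosen in the worst-case direction, which is why adversarial inputs can look nearly identical to benign ones while changing the output.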

Host: Nick Feamster

Prateek Mittal

Associate Professor of Electrical Engineering, Princeton University

Prateek Mittal is an Associate Professor of Electrical Engineering at Princeton University, where he is also affiliated with Computer Science and the Center for Information Technology Policy. He is interested in the design and development of privacy-preserving and secure systems. A unifying theme in Prateek's work is to manipulate and exploit structural properties of data and networked systems to solve privacy and security challenges facing our society. His research has applied this distinct approach to widely-used operational systems, and has used the resulting insights to influence system design and operation, including that of the Tor network and the Let's Encrypt certificate authority, directly impacting hundreds of millions of users. He is the recipient of Princeton University's E. Lawrence Keyes, Jr. award for outstanding research and teaching, the NSF CAREER award, the ONR YIP award, the ARO YIP award, faculty research awards from IBM, Intel, Google, and Cisco, and multiple award-winning publications.
