Prateek Mittal (Princeton) - Compromising Cyber-Resilience via Spatial and Temporal Dynamics
When reasoning about cyber-resilience, security analysts typically rely on simple abstractions of a system to make the analysis tractable. In this talk, I will highlight a key limitation of this approach: commonly used abstractions do not explicitly model an adversary's ability to maliciously induce temporal or spatial changes in the system, changes that can then be leveraged to compromise user security or privacy. I will illustrate this compromise of system security and privacy via two case studies of critical systems: public key infrastructure and machine learning-based systems.
First, I will showcase how an adversary can exploit temporal dynamics in networked systems to compromise our public key infrastructure, particularly the Internet domain validation protocol. The domain validation protocol is widely used by web services to obtain digital certificates, which provide an authentic binding between a domain name and its public key and serve as a root of trust for the privacy of our online communications. However, an adversary can exploit vulnerabilities in Internet routing to intercept communications in the domain validation protocol and maliciously obtain digital certificates. I will demonstrate the feasibility of this approach using real-world BGP attacks conducted in an ethical manner. This work shows that the core foundations of Internet encryption are at risk, and it is shaping the real-world deployment of secure countermeasures at Let’s Encrypt, the world’s largest certificate authority.
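To make the countermeasure concrete, below is a minimal Python sketch of multi-vantage-point domain validation, the style of defense this line of work motivated at Let’s Encrypt. The function names, quorum policy, and the single local fetcher are illustrative assumptions, not the production ACME implementation.

```python
# Sketch: multi-vantage-point validation of an ACME HTTP-01 challenge.
import urllib.request
from typing import Callable, Optional

def fetch_challenge(url: str, timeout: float = 5.0) -> Optional[str]:
    """Fetch a challenge response over plain HTTP from one vantage point."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read().decode("ascii", "replace").strip()
    except OSError:
        return None

def validate_domain(domain: str, token: str, expected: str,
                    vantage_fetchers: list[Callable[[str], Optional[str]]],
                    quorum: int = 2) -> bool:
    """Approve issuance only if a quorum of vantage points, each routing
    from a different network location, observes the expected challenge
    value. A localized BGP hijack that diverts traffic near one vantage
    point is then no longer sufficient to obtain a certificate."""
    url = f"http://{domain}/.well-known/acme-challenge/{token}"
    agreeing = sum(1 for fetch in vantage_fetchers if fetch(url) == expected)
    return agreeing >= quorum

# In a real deployment each fetcher would issue its request from a
# topologically distinct vantage point; here one local fetcher stands
# in for all of them, purely for illustration.
if __name__ == "__main__":
    fetchers = [fetch_challenge] * 3
    print(validate_domain("example.com", "token123", "expected-value", fetchers))
```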
Second, I will showcase how an adversary can exploit spatial perturbations (adversarial examples) in machine learning-based systems to induce misclassifications. Given the ubiquity of machine learning applications, they are increasingly deployed in adversarial scenarios, where an attacker stands to gain from a system's failure to classify inputs correctly. I will introduce a new class of adversarial examples that fool existing machine learning classifiers trained on benign data by adding strategic perturbations to “out-of-distribution” test data. These attacks are robust in physically realizable contexts and are effective in black-box scenarios. Finally, I will introduce a new framework for characterizing the fundamental limits of what can be learned in the presence of adversarial examples. This framework is the first approach to characterize lower bounds on the optimal loss achievable by any classifier in the presence of an adversary.
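For intuition, here is a minimal PyTorch sketch of crafting an adversarial perturbation with the classic fast gradient sign method (FGSM). It illustrates the general phenomenon of small, strategically chosen input perturbations flipping a classifier's prediction; it is not the out-of-distribution attack or the lower-bound framework from the talk, and `model` and `loss_fn` are placeholders for any differentiable classifier and loss.

```python
import torch

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.03):
    """Return x plus an L-infinity-bounded perturbation that increases
    the classification loss (the fast gradient sign method)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        # Step in the direction of the sign of the loss gradient,
        # then clip back to the valid input range [0, 1].
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```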
Host: Nick Feamster
Prateek Mittal
Prateek Mittal is an Associate Professor of Electrical Engineering at Princeton University, where he is also affiliated with Computer Science and the Center for Information Technology Policy. He is interested in the design and development of privacy-preserving and secure systems. A unifying theme in Prateek’s work is to manipulate and exploit structural properties of data and networked systems to solve privacy and security challenges facing our society. His research has applied this distinct approach to widely-used operational systems, and has used the resulting insights to influence system design and operation, including that of the Tor network and the Let’s Encrypt certificate authority, directly impacting hundreds of millions of users. He is the recipient of Princeton University’s E. Lawrence Keyes, Jr. award for outstanding research and teaching, the NSF CAREER award, the ONR YIP award, the ARO YIP award, faculty research awards from IBM, Intel, Google, and Cisco, and multiple award-winning publications.