Date & Time:
November 22, 2019 12:30 pm – 1:30 pm
Location:
Crerar 354, 5730 S. Ellis Ave., Chicago, IL

Latent Backdoor Attacks on Deep Neural Networks

Backdoor attacks on deep neural networks (DNNs) embed hidden malicious behaviors into otherwise normal models: misclassification rules that lie dormant until triggered by very specific inputs. When models are compromised, the consequences can be severe, since DNNs are widely deployed in safety- and security-critical areas like self-driving cars. However, these “traditional” backdoors assume a context where users train their own models from scratch, which rarely occurs in practice. Instead, users typically customize “Teacher” models pretrained by model providers like Google, through a process called transfer learning. This customization introduces significant changes to the model and disrupts hidden backdoors, greatly reducing their actual impact in practice. In this study, we describe latent backdoors, a more powerful and stealthy variant of backdoor attack that functions under transfer learning. Latent backdoors are incomplete backdoors embedded into a “Teacher” model and automatically inherited by multiple “Student” models through transfer learning. If any Student model includes the label targeted by the backdoor, its customization process completes the backdoor and activates it. We show that latent backdoors can be quite effective in a variety of application contexts, and validate their practicality through real-world attacks against traffic sign recognition, iris identification of volunteers, and facial recognition of public figures (politicians). Finally, we evaluate four potential defenses and find that only one is effective in disrupting latent backdoors, but at a potential cost in classification accuracy.
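
To make the inheritance mechanism concrete, below is a minimal PyTorch sketch (illustrative only, not the authors' actual attack code) of the transfer-learning step the abstract describes: the Teacher's feature-extraction layers are frozen and reused, so any trigger-to-feature association planted in them is carried into every Student model, while only the new classification head is trained. Layer sizes, the class count, and hyperparameters here are arbitrary assumptions.

import torch
import torch.nn as nn

# Hypothetical pretrained Teacher: its feature extractor is where a
# latent backdoor would live (layer sizes are arbitrary assumptions).
teacher_features = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
)

# Standard transfer learning: freeze the inherited Teacher layers.
for param in teacher_features.parameters():
    param.requires_grad = False

# The Student reuses the frozen features and adds a new task head.
num_student_classes = 10  # assumed; includes the attacker's target label
student = nn.Sequential(
    teacher_features,
    nn.Linear(128, num_student_classes),
)

# Only the new head is updated during customization, so poisoned
# associations in the frozen layers survive into the Student model.
optimizer = torch.optim.SGD(
    (p for p in student.parameters() if p.requires_grad), lr=0.01,
)

Because customization never touches the frozen weights, a trigger association planted in the Teacher's features remains intact; per the abstract, the attack completes once a Student assigns the targeted label to that representation.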

Huiying Li

M.S. Candidate, University of Chicago

Huiying's advisors are Prof. Ben Zhao and Prof. Heather Zheng

Related News & Events

Nightshade: Data Poisoning to Fight Generative AI with Ben Zhao (Video, Jan 23, 2024)

Research Suggests That Privacy and Security Protection Fell To The Wayside During Remote Learning (UChicago CS News, Oct 18, 2023)
A qualitative research study conducted by faculty and students at the University of Chicago and University of Maryland revealed key...

UChicago Researchers Win Internet Defense Prize and Distinguished Paper Awards at USENIX Security (UChicago CS News, Sep 05, 2023)

Chicago Public Schools Student Chris Deng Pursues Internet Equity with University of Chicago Faculty (UChicago CS News, May 16, 2023)

Computer Science Displays Catch Attention at MSI’s Annual Robot Block Party (UChicago CS News, Apr 07, 2023)

UChicago / School of the Art Institute Class Uses Art to Highlight Data Privacy Dangers (UChicago CS News, Apr 03, 2023)

UChicago and NYU Research Team Finds Edtech Tools Could Pose Privacy Risks For Students (UChicago CS News, Feb 21, 2023)

UChicago Scientists Develop New Tool to Protect Artists from AI Mimicry (UChicago CS News, Feb 13, 2023)

Professors Rebecca Willett and Ben Zhao Discuss the Future of AI on Public Radio (In the News, Jan 26, 2023)

Professor Heather Zheng Named ACM Fellow (UChicago CS News, Jan 18, 2023)

Professor Fred Chong Named IEEE Fellow (UChicago CS News, Dec 09, 2022)

The Computing Pipeline: A Foundation for Diversifying Computer Science (UChicago CS News, Nov 28, 2022)