PHOTO FROM FORBES.COM / CREDIT: GERALT/PIXABAY

Results of a recent study from the SAND Lab, led by Professors Ben Zhao and Heather Zheng, are making waves in the popular press. In a paper to appear at the ACM Conference on Computer and Communications Security (CCS 2017), students from the SAND Lab successfully used deep learning models to mimic online product reviews written by human users. Fake online reviews today are often generated by crowdturfing campaigns, in which real users write personalized content for pay (a "dark" version of crowdsourcing services like Amazon's Mechanical Turk). While realistic, today's crowdturfing campaigns are costly and easily identified by the "bursty" timing of the reviews they produce. The SAND Lab work demonstrates a new attack in which malicious parties can use software to generate large volumes of realistic online reviews at no cost, and control their timing to evade even today's most advanced detection tools. The study also showed that real users not only fail to distinguish these fake reviews from those written by humans, but actually rate the fake reviews as "useful," underscoring the potential impact of these new attacks. The paper, led by UChicago PhD student Yuanshun Yao and postdoc Bimal Viswanath, also identifies new mechanisms for detecting these fake reviews by looking for properties of natural writing that are lost in the review modeling and generation process.

The results of this project have resonated with the popular press around the world. After an initial interview and article with Business Insider UK, news of the work spread to numerous newspapers, technology news sites, financial news services, and blogs across the US, UK, India, China, and Australia. Here are a few of these articles:

Related News

More UChicago CS stories from this research area.
Video

“Machine Learning Foundations Accelerate Innovation and Promote Trustworthiness” by Rebecca Willett

Jan 26, 2024
Video

Nightshade: Data Poisoning to Fight Generative AI with Ben Zhao

Jan 23, 2024

Research Suggests That Privacy and Security Protection Fell To The Wayside During Remote Learning

A qualitative research study conducted by faculty and students at the University of Chicago and University of Maryland revealed key...
Oct 18, 2023

UChicago Researchers Win Internet Defense Prize and Distinguished Paper Awards at USENIX Security

Sep 05, 2023

In The News: U.N. Officials Urge Regulation of Artificial Intelligence

"Security Council members said they feared that a new technology might prove a major threat to world peace."
Jul 27, 2023

UChicago Computer Scientists Bring in Generative Neural Networks to Stop Real-Time Video From Lagging

Jun 29, 2023

Chicago Public Schools Student Chris Deng Pursues Internet Equity with University of Chicago Faculty

May 16, 2023

Computer Science Displays Catch Attention at MSI’s Annual Robot Block Party

Apr 07, 2023

UChicago / School of the Art Institute Class Uses Art to Highlight Data Privacy Dangers

Apr 03, 2023

UChicago, Stanford Researchers Explore How Robots and Computers Can Help Strangers Have Meaningful In-Person Conversations

Mar 29, 2023

Postdoc Alum John Paparrizos Named ICDE Rising Star

Mar 15, 2023

New EAGER Grant to Asst. Prof. Eric Jonas Will Explore ML for Quantum Spectrometry

Mar 03, 2023