Cloud providers, data centers, computer clusters, and other large-scale computer systems share a common boogeyman: the fail-slow. Unlike its more dramatic cousin the fail-stop, which simply shuts down a software program or hardware component, the fail-slow can be a far more subtle and nefarious culprit, throttling performance in mysterious ways. Tracking down the source of a fail-slow fault can consume hundreds of valuable hours, and the primary cause can be the last thing you’d expect: a single faulty cooling fan, high altitude, or even a poorly placed desk chair.

UCARE, a UChicago Computer Science systems research group led by Haryadi Gunawi, specializes in just these types of problems — faults that might be negligibly rare on a single machine but become a major nuisance at larger scale. Fail-slow hardware is a perfect example, yet it remains largely unacknowledged and understudied by the CS community.

“If we want to build robust software we need to understand the failure mode of our hardware,” said Gunawi, a Neubauer Family Assistant Professor in the Department of Computer Science. “To me, this is a new failure mode that many people should think about when they want to build large-scale systems.”

So in a recent paper for the USENIX Conference on File and Storage Technologies (FAST '18), titled “Fail-Slow at Scale: Evidence of Hardware Performance Faults in Large Production Systems,” Gunawi and graduate student Riza Suminto collected more than 100 fail-slow horror stories from operators of large-scale systems at universities, national laboratories, and private companies. The author list for the final paper represented an impressive cross-section of these categories, from Twitter, Huawei, and Pure Storage to the University of Utah, the University of Chicago Research Computing Center, and the Argonne and Los Alamos National Laboratories.

The survey found little overlap among the root causes of fail-slow events, reinforcing the unpredictability of these faults. Respondents reported failures at every step of the hardware chain, from storage to CPU to memory to network, that degraded performance across the full system. Most frustrating were the “cascading root causes,” where an original event as seemingly innocuous as a broken fan or a clogged air filter set off a Rube Goldberg-like chain of events that could cripple an entire cluster.

For example, in one report, the failure of a single fan caused the other fans in a cooling system to run at maximum power, creating excess noise and vibration that slowed disk performance, which in turn slowed software processes. One center located at 7,500 feet of altitude saw a cooling defect that affected its CPUs — a bug that wasn’t noticeable during the manufacturer’s sea-level tests. And in a particularly low-tech example, a technician rocking in an office chair loosened disk drives in a stack, creating a system-wide ripple effect that was impossible to diagnose through error logs.

What many of these stories had in common were very rare events or minute variations in hardware manufacturing, magnified in a large-scale system into serious issues. The authors recommend more frequent fault tests for common fail-slow causes and improved transparency and data collection around both hardware and software performance, so that limping components can be spotted before they drag down a cluster; one simple approach is sketched below.
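To make that data-collection recommendation concrete, here is a minimal sketch of peer-comparison detection: a node whose latency drifts far above the fleet median is flagged as potentially fail-slow. This is a hypothetical Python illustration, not code from the paper; the metric, the `SLOWDOWN_FACTOR` threshold, and the sample data are all assumptions.

```python
import statistics

# Hypothetical peer-comparison detector for fail-slow hardware.
# Assumption: each node periodically reports a recent average disk
# write latency in milliseconds; none of this comes from the paper.

SLOWDOWN_FACTOR = 3.0  # flag nodes ~3x slower than their peers (assumed)

def find_fail_slow(latencies_ms: dict[str, float]) -> list[str]:
    """Return node IDs whose latency far exceeds the fleet median.

    A fail-slow part often passes health checks (it still works),
    so comparing against peers doing identical work is one way to
    surface a component that is limping rather than dead.
    """
    if len(latencies_ms) < 3:
        return []  # too few peers for a meaningful baseline
    baseline = statistics.median(latencies_ms.values())
    return [node for node, ms in latencies_ms.items()
            if ms > SLOWDOWN_FACTOR * baseline]

# Example: node "n07" is degraded but not dead -- a classic fail-slow.
sample = {"n01": 2.1, "n02": 1.9, "n03": 2.4, "n07": 11.8, "n09": 2.0}
print(find_fail_slow(sample))  # ['n07']
```

Since the paper reports that fail-slow symptoms can be transient or partial rather than permanent, a real monitor would need repeated samples over time rather than a single snapshot, but the peer-comparison idea carries over.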

“The complexity of our software and hardware ecosystems in the cloud scale is outpacing our efforts in debugging, verification, testing of these systems,” Gunawi said. “When we talk about a scale of thousands of machines, the probability that you see one or a few hardware components start limping is actually not that small.”

The paper’s validation of this “unspoken truth” has so far been reinforced by coverage in ZDNet and on CS blogs, as well as by conversations Gunawi has had at conferences with engineers from some of the largest tech companies. By sharing these stories publicly, Gunawi hopes the paper will draw more attention to these faults and energize computer scientists to find new ways of preventing a common cause of large-scale system nightmares.

“Many people still don't believe this problem, only large-scale operators believe this problem, and the goal is just to say, ‘believe us.’ That's why all these authors signed on to this paper,” Gunawi said. “If we can convince the community that fail-slow hardware is real, I bet the community will be able to deal with this problem, because we are all smart people.”
