Computers are built to follow instructions. For better or worse, hardware executes a programmer’s code precisely as it is written, without editorializing or improvising. But under certain constraints, this obedience can become a liability. Anyone who has tried to encode a large video or run a neural network prediction on an underpowered device knows that just carrying out orders can create a laggy experience or unreasonably long waits.
In 2011, a team of MIT researchers including UChicago associate professor Hank Hoffmann (then a graduate student) proposed an alternative. Instead of loyally following the code, their “loop perforation” algorithm gave computers a generalizable option to go off-script and sacrifice accuracy in favor of performance. Though the paper was controversial when originally presented at FSE (The ACM Symposium on the Foundations of Software Engineering), its tradeoff principles have since become widespread in computer science.
To celebrate this forward-thinking research, FSE recently awarded Hoffmann and his co-authors Stelios Sidiroglou, Sasa Misailovic, and Martin Rinard an honorable mention in its annual Test of Time Award. The honor was something of a surprise to Hoffmann, who remembers the divisive approach receiving rejections and skepticism.
“We recognized that there are a bunch of emerging computations where it is either infeasible to compute an exact answer or we don’t even know what the exact answer is,” Hoffmann said. “For those computations, if you don’t do exactly what was specified, it might make the answer a little bit worse, but it can also greatly reduce resource use and improve performance. Now, I think everybody accepts that and I think that’s why it won this award. But at the time, the idea that we’re just going to change the output that the software produces without permission from the programmer, that was controversial.”
Hoffmann and his team called their approach “loop perforation,” an analogy to how you can punch holes in a physical structure to reduce its weight without sacrificing the strength it can bear. Given a program, the algorithm looks for loops in the code that can be safely “perforated”: run for only a subset of the iterations the programmer specified. For example, instead of executing a loop 100 times, Hoffmann said, the program might run it only 50 times, skipping every other iteration.
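The idea can be sketched in a few lines. The example below is a hypothetical illustration (not the paper's actual code or tooling): a loop that averages a list of samples is perforated by stepping through every other iteration, doing roughly half the work while the result drifts only slightly from the exact answer.

```python
def mean_exact(samples):
    """Exact version: the loop runs once per sample, as written."""
    total = 0.0
    for i in range(len(samples)):
        total += samples[i]
    return total / len(samples)

def mean_perforated(samples, stride=2):
    """Perforated version: the loop skips iterations by stepping
    through the index range with a stride, visiting only a subset
    of the iterations the exact loop would run."""
    total = 0.0
    count = 0
    for i in range(0, len(samples), stride):
        total += samples[i]
        count += 1
    return total / count  # scale to the subset actually visited

data = [float(x % 17) for x in range(1000)]
exact = mean_exact(data)         # 1000 loop iterations
approx = mean_perforated(data)   # ~500 loop iterations
```

On this data the perforated mean lands within a fraction of a percent of the exact one for about half the loop iterations; whether a given loop tolerates perforation this gracefully is exactly what the team's methodology was built to determine.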
The original paper tested this approach on a variety of software, in applications such as video encoding, financial analysis, and computer vision. In most cases, a small sacrifice of accuracy — typically less than 5 percent — could double the performance of the program, running it in half the time.
“I think the key insight was recognizing that this could be done, and the way we did it was so simple,” Hoffmann said. “There can be hundreds or thousands of loops in a program, so we built a methodology for figuring out which loops were amenable to this transformation. And we built tools that could show you the trade-off space, that provided an evaluation of the program, the accuracy, and the performance, so you could decide how much accuracy to sacrifice.”
Since that 2011 paper, Hoffmann has continued to research accuracy/performance trade-offs, which have become ever more important with the rise of resource-constrained technologies such as phones and other mobile devices, sensors, and autonomous vehicles. Projects such as his self-aware computing systems (SEEC) and the CALOREE framework extend the philosophy of creating flexible hardware and software that work together to find the optimal performance and accuracy for each given situation.
“I enjoy working on trade-offs and how you manage them in some principled way. I think it’s fundamental to our field,” Hoffmann said. “For a long time, physics was on our side, and we were the only people worried about those constraints. But now everybody has to manage trade-offs all the time, and [the early start] gave me a lot of experience in how to deal with that challenge.”