This spring, Argonne National Laboratory plans to activate Aurora, one of the world’s first exascale supercomputers, capable of a billion billion calculations per second. The landmark computational resource is nearly two decades in the making, and its jaw-dropping power will boost research in fields such as climate science, physics, astronomy, and engineering.
For Chicago Magazine, reporter Tal Rosenberg visited Argonne and spoke to scientists including Rick Stevens, Professor of Computer Science at UChicago and Associate Laboratory Director for Computing, Environment and Life Sciences at Argonne. Stevens explains the significance of a computer running a billion times faster than a consumer phone or laptop, and the challenges in energy consumption, cooling, and software development.
The biggest problem of all was scale, though not in the way you might guess. Sure, supercomputers are physically huge, but it's what's inside the machine that proved most difficult to figure out.
“On the hardware side, no big deal, because you can just add more hardware,” Stevens says. “But on the software side, a big deal. Think of it this way: You’re making Thanksgiving dinner, and you’re cooking all these dishes in parallel. And the amount of time it’s going to take to cook the meal is determined by the slowest step [the turkey], even if the pies only take one hour, or if the vegetables could cook in 30 minutes. So think of that same problem in these computations, except instead of having 20 things in Thanksgiving, we’ve got a million things, and now I’ve gotta have a billion things going on in parallel. And how fast this is, is going to be determined by the longest step, which means I have to make that longest step as fast as possible.”
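Stevens's Thanksgiving analogy can be sketched in a few lines of code. This is a minimal illustration (not from the article, with hypothetical dish durations): when tasks run one after another, total time is the sum of all steps, but when they run in parallel, total time is set by the slowest step alone.

```python
# Hypothetical dish durations in hours, mirroring Stevens's example:
# the turkey is the slowest step and so determines the parallel total.
durations = {"turkey": 4.0, "pies": 1.0, "vegetables": 0.5}

# One cook doing one dish at a time: times add up.
sequential_time = sum(durations.values())

# Everything cooking at once: only the longest step matters.
parallel_time = max(durations.values())

print(sequential_time)  # 5.5
print(parallel_time)    # 4.0
```

This is why, as Stevens says, speeding up a massively parallel computation means finding and shortening its longest step; shaving time off the quick steps does nothing for the total.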
Read the full article at Chicago Magazine.