Fire up the GPUs: UW-Madison, Morgridge project sparks next-level computing


A form of computing machinery that was once the province of hardcore video gamers — the graphics processing unit, or GPU — has recently taken the world of scientific research by storm.

Originally designed in the late 1990s to render 3D graphics, GPUs have been essential over the years to creating increasingly sophisticated and realistic visual effects.

While most of the research world has thought in terms of CPUs — or central processing units — as the lingua franca of computing power, GPUs are now emerging at the top of the rack for fields such as machine learning and scientific computing.

Morgridge Investigator Anthony Gitter, a UW-Madison associate professor of biostatistics and medical informatics, recognized the need early on in his machine learning projects related to protein engineering and drug discovery — projects that generate millions of data points. There were GPU-related tools available that could complete his team’s modeling experiments in days that would have taken months or years — if accomplished at all — with standard CPU-based computing.

But he also noticed, around 2018, a groundswell of DIY efforts across the UW-Madison campus related to GPUs.

“I saw a lot of my peers were trying to set up their own systems,” he recalls. “People were buying workstations that would have one GPU and sticking it under a desk for a grad student to run, then trying to figure out what hardware to buy, how to keep it maintained and what software to install.”

Gitter spotted an opportunity. Why not create a centralized resource and user community that could help support hundreds of varied GPU experiments, much like his Morgridge and UW-Madison colleagues have accomplished through the Center for High-Throughput Computing (CHTC)? That center successfully manages more than 300 unique projects a year, generating hundreds of millions of hours of computing time.
