Even Jack Dongarra has a hard time wrapping his head around the number used to represent “exascale” computing: 10^18.
But Dongarra, an expert in linear algebra algorithms and distinguished professor of computer science in the Electrical Engineering and Computer Science Department at the University of Tennessee, is sure of one thing: “That's a staggering amount of computing power.”
Indeed. Ten to the eighteenth power represents 1,000,000,000,000,000,000 operations per second, the amount of computing power packed into just one exascale supercomputer. In supercomputing parlance, it’s called an exaflop.
Dongarra, in his early 70s, has spent the last 50 years helping advance the numerical algorithms and software, parallel programming and performance benchmarking necessary to create exascale supercomputers, each the gargantuan size of two tennis courts.
Now, in recognition of that work, Dongarra has won this year’s prestigious Turing Award, often referred to as the “Nobel Prize of Computing,” from the Association for Computing Machinery. Along with a fancy silver bowl, the award comes with a $1 million prize, funded entirely by Google, that goes directly to Dongarra.
Right now, Dongarra and his group at the University of Tennessee are busy contributing to the software and applications needed to operate three exascale supercomputers that the Department of Energy is having built to enable scientific research on things like wind energy, nuclear physics and weapons security, earthquake studies, cancer cures and more.
Protocol spoke on Monday with Dongarra — also a researcher with the University of Manchester and the Oak Ridge National Laboratory, home to one of those new exascale supercomputers — about why scientific supercomputing could use help from the big cloud providers, the environmental impact of AI’s energy use, how the computing research community could spur more Turing Award wins for women and why he hopes research collaborations with China will continue.
This interview was condensed and edited for clarity.
What do people get wrong when they talk about supercomputers?
I don't think people really have a good picture of a supercomputer. It's super, so it's pretty big. They have tremendous requirements in terms of power.
[The current supercomputer at Oak Ridge National Laboratory] has a power budget of about 20 megawatts — 20 megawatts is the power consumption. If you at your home used one megawatt for one year, you'd get a bill from the electric company for $1 million. So that computer at Oak Ridge, just to turn it on, costs $20 million a year in power consumption.
And the hardware is just part of it. There's the power that's needed, of course, on top of that, and people to run it, and applications to design, and software to build, and all those other things.
When you started this work in the early 1970s, did you ever envision that AI would advance to the point where it's so accessible that it's almost this mainstream thing, where businesspeople are incorporating AI tools into what they do every day?
So back in the 70s, AI was a novelty. I have to say that I didn't really see how it would be practical to use that in any kind of real application setting. Computing is the thing, of course, that fueled the AI and brought it to a point where you can do the kinds of things that were only talked about in theory; you can now practically realize that in a short amount of time. So the hardware, the computing hardware, was there to carry out all of the associated computations, so that AI could actually do the things that [we dreamed of it doing].
Today, it plays an important role in how science is moving forward. It's part of a tool kit, helping us get a better understanding and come to a point where we can get an approximation to a solution much, much quicker than we can with traditional modeling and simulation.
Your work has helped make computational processing and supercomputing more efficient. But there are serious concerns about the climate impact resulting from the massive amounts of energy required to enable the compute necessary for deep learning and developing things like large language AI models. Are you worried about the carbon footprint of your work or of supercomputing in general?
Of course, we're concerned about that. That's a serious issue. We have machines that are consuming 20 megawatts today. The next generation of machines is going to consume 40 megawatts of power.
And you take a look at the big data centers that are in place that Microsoft, Google, Amazon have, and those dwarf the computers that are used in the scientific area. To sort of help with that on the commercial side, Google, Amazon and Microsoft are developing their own hardware, independent from commodity stuff.
In the scientific area, we sort of embrace commodity technology because of its cost. And we take that and build supercomputers. And that may not be the most efficient way of doing things.
So what Amazon, what Google and what Microsoft have gone and engaged in — and Apple — they're building their own processors. By avoiding the commodity processor, and developing specialized hardware that they can use to solve their problem, they can reduce the energy costs, so they can minimize the impact that they’re going to have in terms of that climate effect.
Jack Dongarra uses a Tektronix 4081 Workstation in 1980 at the Argonne National Lab. Photo courtesy of Jack Dongarra
In building Oak Ridge or the other two new exascale supercomputers, why isn’t the DOE going to an Amazon or Google Cloud and saying, “You guys do this a lot more efficiently. Can we use your hardware instead of the less-efficient commodity parts?”
A paper that we just wrote with my colleagues talks about just this point: cloud computing and the impact it's having on where we go in the future in scientific areas. It’s an inflection point we’re at, one which has us either going the traditional way of building our own equipment and using it just as it is, or going to cloud-based computing, and using cloud-based systems to satisfy our needs.
The big systems that [the government has] today are in place for their lifetime. And then we basically get rid of that system and replace it with yet another monolithic kind of computer architecture, and use it to drive forward.
It presents a situation where the companies that we're talking about — the Amazons, and the Microsofts and the Googles — are exothermic in terms of the amount of cash they have that they can invest, where the government [is] endothermic; it needs resources, and those resources are becoming harder to really get. So the right model may be the cloud services and using them to go forward.
We hear a lot about the U.S. competing directly with China to “win” or “lead” AI. And obviously, the kind of work you do really plays a role in how we advance and use artificial intelligence. What do you think about the idea that the United States is in an AI competition with China?
We try to understand what the Chinese are doing and how they're using their computers and what their computers are capable of. That's part of the game that we have.
I mentioned already companies like Amazon and Facebook and Google. In China, there's Baidu and Alibaba and Tencent, of course, which have [their own] roles that compare with those companies, and they are using and deploying high-performance computing.
If you take a look at the Chinese supercomputers and look at the way in which they're being used, it's a very similar list to what we have in the U.S. The research that they're planning to do goes along the same lines as the research that's being investigated here in the States.
And I would almost hope that we can collaborate and understand how we move forward with these things in a way that leverages the resources that we have, rather than be in a position of head-to-head competition, where we can't really benefit from each other's products in that way.
The Turing Award has only gone to three women over the years. Do you know any women who it would make sense to consider for the award in future years?
Yes, of course, there are many women who are eligible and who could qualify for the Turing Award. The Turing Award is determined by a committee; they vet the nominations that are submitted. The onus is on the community to put together the nominations such that they can be evaluated and judged on the merits of their research. But I have a strong feeling that there are many women who are qualified and should be nominated for the award.