Meet this year’s Turing Award winner
Hello and welcome to Protocol Enterprise! Today: Why Jack Dongarra thinks the U.S. government should run more of its supercomputing work in the cloud, Docker co-founder Solomon Hykes’ second act, and how Microsoft thinks it can help blind kids make friends.
Spin up
President Biden recently signed legislation that requires companies to report cybersecurity incidents within 72 hours of discovering them, in hopes of responding faster to ransomware outbreaks, but it’s not clear how much of an effect that requirement will actually have. New research from BitSight found that the average company takes 46 days to discover a cybersecurity incident, and big companies aren’t much faster, discovering incidents an average of 39 days after they occur.
Living at exascale
Even Jack Dongarra has a hard time wrapping his head around the number used to represent “exascale” computing: 10¹⁸.
But Dongarra, an expert in linear algebra algorithms and distinguished professor of computer science in the Electrical Engineering and Computer Science Department at the University of Tennessee, is sure of one thing. “That's a staggering amount of computing power.”
- Ten to the eighteenth power is 1,000,000,000,000,000,000 operations per second, the computing power a single exascale supercomputer can deliver. In supercomputing parlance, that’s an exaflop. (A quick sketch after this list puts the number in perspective.)
- Dongarra, in his early 70s, has spent the last 50 years helping advance the numerical algorithms and software, parallel programming techniques and performance benchmarking needed to build exascale supercomputers, each a gargantuan machine the size of two tennis courts.
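For a sense of scale, here’s a rough back-of-envelope comparison in Python; the laptop throughput of roughly 100 billion operations per second is an assumed ballpark for illustration, not a figure from Dongarra or the ACM:

```python
# Rough scale comparison: one second of exascale computing vs. an ordinary laptop.
# The laptop throughput below is an assumed ballpark, not a benchmark result.
EXASCALE_OPS_PER_SEC = 10**18    # one exaflop: 10^18 operations per second
LAPTOP_OPS_PER_SEC = 10**11      # assumed ~100 gigaflops sustained on a laptop

laptop_seconds = EXASCALE_OPS_PER_SEC / LAPTOP_OPS_PER_SEC
laptop_days = laptop_seconds / (60 * 60 * 24)

print(f"Matching one second of exascale work takes {laptop_seconds:,.0f} laptop-seconds,")
print(f"or about {laptop_days:.0f} days of nonstop computing on the laptop.")
```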
Now, in recognition of that work, Dongarra has won this year’s prestigious Turing Award, often referred to as the “Nobel Prize of Computing,” from the Association for Computing Machinery.
- Along with a fancy silver bowl, the award comes with a $1 million prize funded entirely by Google, which goes directly to Dongarra.
- Right now, Dongarra and his group at the University of Tennessee are busy contributing to the software and applications needed to operate three exascale supercomputers that the Department of Energy is having built to enable scientific research into areas like wind energy, nuclear physics and weapons security, earthquake studies, cancer cures and more.
Protocol Enterprise talked to Dongarra this week about his work and the future of supercomputing.
What do people get wrong when they talk about supercomputers?
I don't think people really have a good picture of a supercomputer. It's super, so it's pretty big. They have tremendous requirements in terms of power.
[The current supercomputer at Oak Ridge National Laboratory] has a power budget of about 20 megawatts — 20 megawatts is the power consumption. If you used one megawatt at your home for one year, you’d get a bill from the electric company for $1 million. So that computer at Oak Ridge, just to turn it on, costs $20 million in power consumption.
And that’s just the hardware. Then there's power that's needed, of course, on top of that, and people to run it, and applications to design, and software to build, and all those other things.
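For a quick check of the arithmetic in that answer: the 20-megawatt budget and the roughly $1 million per megawatt-year rate are Dongarra’s figures, and the rest is multiplication. A minimal sketch:

```python
# Back-of-envelope annual power cost for the Oak Ridge machine,
# using the rough figures Dongarra cites in the interview.
POWER_BUDGET_MW = 20                # the machine's power budget in megawatts
COST_PER_MW_YEAR = 1_000_000        # ~$1 million to draw 1 MW for a year

annual_power_cost = POWER_BUDGET_MW * COST_PER_MW_YEAR
print(f"Estimated annual electricity cost: ${annual_power_cost:,}")  # $20,000,000
```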
In building the Oak Ridge machine or the other two new exascale supercomputers, why isn’t the DOE going to an Amazon or a Google Cloud and saying: You do this a lot more efficiently, so can we use your hardware instead of less-efficient commodity parts?
A paper that we just wrote, done with my colleagues, talks about just this point: cloud computing and the impact it’s having on where we go in the future in scientific areas. We’re at an inflection point, one which has us either going the traditional way, building our own equipment and using it just as it is, or going to cloud-based computing and using cloud-based systems to satisfy our needs.
The big systems that [the government has] today are in place for their lifetime. And then we basically get rid of that system and replace it with yet another monolithic kind of computer architecture, and use it to drive forward.
It presents a situation where the companies that we're talking about — the Amazons, and the Microsofts and the Googles — are exothermic in terms of the amount of cash they have that they can invest, where the government [is] endothermic, they need resources. And those resources are becoming harder to really get. So the right model may be the cloud services and using them to go forward.
We hear a lot about the U.S. competing directly with China to “win” or “lead” AI. And obviously, the kind of work you do really plays a role in how we advance and use artificial intelligence. What do you think about the idea that the United States is in an AI competition with China?
We try to understand what the Chinese are doing and how they're using their computers and what their computers are capable of. That's part of the game that we have.
If you take a look at the Chinese supercomputers and look at the way in which they're being used, it's a very similar list to what we have in the U.S. The research that they're planning to do goes along the same lines as the research that's being investigated here in the States.
And I would almost hope that we can collaborate and understand how we move forward with these things in a way that leverages the resources that we have, rather than be in a position of head-to-head competition, where we can't really benefit from each other's products in that way.
The Turing Award has gone to only three women over the years. Are there women you think should be considered for the award in future years?
Yes, of course, there are many women who are eligible and who could qualify for the Turing Award. The Turing Award is determined by a committee; they vet the nominations that are submitted. The onus is on the community to put together the nominations so that they can be evaluated and judged on the merits of their research. But I have a strong feeling that there are many women who are qualified and should be nominated for the award.
A MESSAGE FROM UPWORK

Seeking to triple its employee base, Whisk, a fully remote team, sought diverse talent from a wide variety of regions through Upwork, a work marketplace that connects businesses with independent professionals and agencies around the globe.
First Docker, now Dagger
Scaling a buzzy enterprise startup around a breakthrough cloud computing concept is a formidable task for any technical entrepreneur. Docker founder Solomon Hykes is ready to reveal a few more details behind his next adventure.
Dagger emerged from stealth mode Wednesday with $20 million in Series A funding led by Redpoint Ventures and former GitHub COO Erica Brescia. The company is trying to steal a page from Docker by making existing technology easier for developers to put into practice, which is far harder to pull off than it might sound.
The idea is to allow DevOps engineers to configure continuous integration and delivery pipelines using an open-source language called CUE rather than YAML, a popular configuration language associated with containers and Kubernetes that nobody actually seems to like. Developers like working with code they understand to configure how their applications are deployed to the world, which theoretically allows the process to operate much more smoothly.
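To illustrate the config-as-code idea in the abstract, here’s a hypothetical Python sketch; this is not Dagger’s API or CUE syntax, and every name in it is invented. The point is simply that a pipeline can be described with ordinary typed objects that a tool then serializes and validates, rather than with hand-written YAML:

```python
# Hypothetical illustration of "pipelines as code" instead of hand-written YAML.
# This is not Dagger's API or CUE syntax; names and structure are invented here.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class Step:
    name: str
    run: str                      # shell command the step executes

@dataclass
class Pipeline:
    name: str
    steps: list = field(default_factory=list)

    def to_config(self) -> str:
        """Serialize the pipeline to a machine-readable config document."""
        return json.dumps(asdict(self), indent=2)

pipeline = Pipeline(
    name="build-and-test",
    steps=[
        Step(name="build", run="docker build -t app ."),
        Step(name="test", run="pytest -q"),
    ],
)
print(pipeline.to_config())
```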
Like Docker, Dagger will be an open-source project first with a commercial version to follow. But unlike Docker — one of the companies most responsible for making containers a mainstream technology with arguably the least to show for it — Hykes told TechCrunch that Dagger will move much more deliberately.
“I think with commercialization, at Docker, we felt like there was a playbook that we were obligated to follow and we didn’t really listen to our community enough,” Hykes said. Dagger is likely to follow its own playbook.
— Tom Krazit (email | twitter)
Vision for a better life
It’s hard enough growing up blind, and it’s even harder to engage with other kids, teachers and other people when you can’t tell exactly where they are as you try to talk in a group. Microsoft researchers think they’ve found a way to use virtual reality technology to help kids build a “people map” of those around them in order to communicate more effectively.
The “PeopleLens” is a headset that uses AI and facial-recognition technology to detect the position of familiar people, such as teachers or peers, and the relative distance from the wearer. It reads that information aloud to the user, who can then look directly at the person they want to talk to and speak at an appropriate volume for the distances involved. It also lets those people understand when the wearer has detected them, as good a replacement for eye contact as any.
The idea is to help blind children develop social communication skills, which are harder to explain to kids who have no concept of the visual communication cues we all take for granted. And while this is an important project on its own, a scaled-down version of this that helps people put names to faces could be really useful at enterprise tech conferences.
— Tom Krazit (email | twitter)
Around the enterprise
Tech consulting firm Globant acknowledged that it was the latest victim of the Lapsus$ hacking group after about 70GB of its data was dumped on the internet.
AWS customers can now use a new firewall service from Palo Alto Networks to protect their cloud workloads.
Fastly acquired Fanout for an undisclosed amount to boost its edge-computing capabilities.
Yandex might run out of server processors within a year if sanctions against Russia aren’t lifted, according to Bloomberg.
A MESSAGE FROM UPWORK

Whisk isn’t alone in unlocking the global marketplace to find the right types of employees to support its business goals. More than three-quarters of U.S. companies have used remote freelancers, according to research from Upwork, and more than a quarter of businesses plan to go fully remote in the next five years.
Thanks for reading — see you tomorrow!