Enterprise

Google Cloud’s top AI executive is caught between academic ideals and corporate reality

Talking with Andrew Moore, Google Cloud’s VP and GM for AI, reveals a disconnect between Google’s bold plan for a government AI data cloud and an academic’s goals for collaborative, global AI research.

One of Andrew Moore’s most important and visible tasks involves defining what a national AI research cloud might look like. Photo: CMU

It’s nothing new to hear Google executives talk about happenings inside different divisions of the company as though they’re separate entities. But when the head of Google Cloud AI talks about the company’s detailed proposal for a government AI project that he is closely involved in as though it were conjured up in some distant universe, it’s confusing, to say the least.

In this case, that executive is Andrew Moore, Google Cloud’s vice president and general manager for AI and Industry Solutions. An academic at heart, Moore is reluctant to speak as a Googler when it comes to the company’s plan for a federally funded AI research cloud. But his arm’s-length separation from the company’s proposal reveals that Google views it as a political gambit as much as a cloud-services sales call to the government.

Moore was a computer science professor at Carnegie Mellon, the world-renowned AI research university, before first joining Google in 2006 to head up its Pittsburgh engineering office. A fish out of water, perhaps, he returned to the university in 2014 as dean of its School of Computer Science.

Now, Moore is back at Google; he still works from his digs in Pittsburgh, the city he calls “the center of the world for the most advanced forms of robotics and mathematical AI.” And although Moore was born in England and has a Ph.D. from the University of Cambridge, some say his accent has morphed over the years into something that sounds a bit less British and a bit more mid-Atlantic.

The ecosystem Moore is immersed in now that he’s back corporate-side — Moore has had no affiliation with Carnegie Mellon since he rejoined Google in 2019 — is one very familiar to an AI researcher: the cloud. At Google, Moore helps customers coax their machine-learning prototypes into active algorithmic systems operating in the wild. Those projects, like the academic research that gave birth to today’s productized cloud-based AI tools, require massive amounts of computing power and data, and that’s what the cloud is all about.

Right now, one of Moore’s most important and visible tasks involves defining what a possible national AI research cloud might look like. That involves deciding who will provide cloud services to AI researchers in the U.S., and how.

Google has particularly bold ideas about how this would-be National AI Research Resource, or NAIRR, might be built and how the company should be involved. The thing is, even though Moore is helping determine what the resource could be, he said he had nothing to do with the company’s proposal for it.

“My role is as a commissioner on this panel ... The panel did a request for information from a whole bunch of companies. And Google was one of the companies that responded, but I was not the author of that,” he said regarding the proposal for the controversial project. “My role within the commission was watching what many stakeholders — not just the cloud companies — but also other smaller and [medium-sized] companies are saying.”

A Google exec at arm’s length from Google

There’s little question that the modern world of data and AI driven by cloud computing requires a new approach to advanced research.

“There is an old-fashioned way of thinking of doing these very data-intensive bits of research, which is, you put some servers in your laboratory, your lab at the university, and you download a whole bunch of data. And you do your work there, and you've got your grad students and yourself sitting there warmed by your servers, sitting next to you humming away,” he said.

That on-campus server scenario has become more and more rare, of course. “So as a dean, I would have new faculty members coming to me. When they're talking about their startup packages, they would talk about, 'Well, I need such and such a number of servers with this much disk space to operate.' But as time has gone by, that's turned into folks saying, ‘I need cloud credits.’”

A federal AI cloud and data repository could be a game-changer for academic researchers, but Google seems to recognize the political implications, too. Indeed, according to Moore, it was the company’s public policy team, its government affairs division, that put together Google’s NAIRR proposal. A Google Cloud spokesperson clarified that the document was drafted by “many teams at Google that care about the success of the task force,” but did not provide the names of those teams. Google Cloud does have a team dedicated to managing work for public-sector clients, for example.

Moore is Google Cloud’s top AI executive, but he distanced himself from that role when answering Protocol’s questions about Google’s approach to the NAIRR initiative. For instance, rather than explain why Google wants to partner in the project with AWS, Microsoft and other cloud providers, companies it competes with in the cutthroat cloud industry, he said, “I'm not going to talk about Google specifically, like it's particularly different from the others.”

Instead, the on-again, off-again university educator emphasized the lack of computing power and data resources available to academic AI researchers, and lamented the lure of corporate work. “Many of those academics, they look around, and they see opportunities for themselves in industry, doing perhaps closed research, where they would be able to get more done, but it's less useful for the country as a whole. So it's incredibly important — and this is something where you will see agreement throughout the commercial world in the United States — it's incredibly important that we support academic researchers,” he said.

Legitimate concerns and momentous decisions

Although Moore does not consider himself a Google mouthpiece in his position as a NAIRR task force member, he and the company have highlighted some of the very same issues around data security, quality and access.

“This is a very tough thing,” said Moore during an October NAIRR meeting. He was talking about the risk of data exfiltration, when an authorized person extracts data from a secured system and shares it with unauthorized third parties or moves it to an insecure place.

“I think this committee has a really big decision,” he continued, discussing the types of data and level of data granularity that could be available in the national research hub.

“If we’re going to do controlled data, which does not allow exfiltration, that means a lot of work by us, like tens of millions of dollars possibly of software engineering or contracts to someone to set that up. I’m on the fence between them. I could say, 'Let’s just go with safe old geology and weather and outdoor natural scenes with faces blurred and keep our life simple.' Or I could say, 'No, obviously if you’re going to work on diseases or something like that, you need patient data protected against exfiltration.' We’ve got a big, and, I think, momentous decision here, because I don’t think this exfiltration limitation thing is a small side issue for us.”

Google’s public policy team had something similar on its mind in its proposal to the task force. “Before making Google datasets publicly available for the open-source community, we spend hundreds of hours standardizing data and validating quality,” the company wrote in its response to a request for information by the task force. “This expensive error prone process, which is repeated for each analysis, not only becomes a barrier to the use of data, but also leads to problems of reproducibility in research questions,” it said, adding that “the success of a research initiative potentially involving sensitive data depends upon the ability to reliably credential users and provide granular access management.”

No worries, Google seemed to imply in its submission. Not only would the company take on the arduous task of preparing raw data flowing into the research cloud, but it would do it for free. Some have questioned Google’s motivations, arguing that getting first dibs on the raw data could grant the company privileged access to valuable information others wouldn’t see.

In his interview with Protocol, Moore acknowledged that Google’s proposal raises “a very legitimate concern that there's an attempt to sort of control data like this.” However, he said, “There would be a bit more concern if we were in some world without any data rights, or discussion of data security: [if] a commercial company was licensed to take all this data, do what it wants with it and then put up its own interpretation of that data publicly.”

On the benefits of working with China

The NAIRR task force was established on the recommendation of the National Security Commission on AI, which warned that without a full-fledged national effort to advance AI research in the U.S., the country could lose its leadership position in AI to China, becoming more vulnerable to AI-enabled threats. However, Moore downplayed the notion of an “AI race” between the U.S. and China.

“There's many notions of winning AI. Just as with the aerospace industry, or the cybersecurity industry, or quantum, or in the early days of engine manufacture, you're going to see international competition,” he said. “And it absolutely is the case that the U.S. funding agencies are inspired by helping keep the United States at the forefront in most areas of science and technology.”

Being at the forefront of science is one thing, but some — including people in the military, cyber and data security, intellectual property law, and civil and human rights arenas — believe the risk of China dominating AI advancements poses a grave threat that ought to limit collaboration in AI research and business activity between the two countries.

Moore, a scientist at his core — his Twitter bio reads, “I love Algorithms” — doesn’t see it that way.

“Among academia, for example, you will see plenty of cases where there are mutually supportive, friendly, creative bits of joint work going on between different countries. And so, I want to be clear that this national security commission was not indicating that there should be no joint AI research between continents,” he said. “I think it's actually considered to be a strong benefit and a chance to sort of bring people together if there is collaboration between different researchers in this area.”
