
The FTC’s AI advisers are fighting a White House plan that could boost cloud giants

Critics of a national AI research project say it pushes a “myopic” large-scale AI approach that will entrench the dominance of the very cloud giants the Biden administration has pushed the FTC to rein in.

An effort to develop a national data and AI resource hub is meeting criticism.

Illustration: U.S. National Artificial Intelligence Initiative, Protocol

A talk led by Google Cloud’s Andrew Moore at an October meeting hosted by the U.S. National Science Foundation was exactly the sort of conversation that critics of a government push to facilitate large-scale AI projects are worried about.

Moore, vice president and general manager for AI and industry solutions, talked about AI testbeds, the environments used to simulate how AI and deep-learning systems will operate once they are deployed in the real world. During the discussion, academics and government agency chairs riffed on the role of federally funded testing facilities, why and how they might be maintained, how much computing capacity they would need to operate and, of course, the cost.
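
Neither Moore nor the task force has published testbed code, but the basic idea reduces to running a candidate system against a battery of simulated scenarios and recording how it behaves before any real-world deployment. The Python below is a minimal, purely illustrative sketch; the scenarios, the toy model and the pass/fail criteria are invented for the example and are not drawn from the NSF discussion.

```python
# Minimal sketch of an AI "testbed": run a candidate model against a set of
# simulated scenarios and tally how it behaves before real-world deployment.
# All data and the model interface below are hypothetical.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Scenario:
    name: str
    inputs: List[float]   # simulated sensor or feature values
    expected: int         # the outcome a correct system should produce

def run_testbed(model: Callable[[List[float]], int],
                scenarios: List[Scenario]) -> dict:
    """Apply the model to each simulated scenario and record pass/fail results."""
    results = {"passed": 0, "failed": []}
    for s in scenarios:
        if model(s.inputs) == s.expected:
            results["passed"] += 1
        else:
            results["failed"].append(s.name)
    return results

if __name__ == "__main__":
    # A toy "model" and two toy scenarios, standing in for the large-scale
    # systems and simulation environments discussed at the meeting.
    toy_model = lambda xs: int(sum(xs) > 1.0)
    scenarios = [
        Scenario("clear-road", [0.2, 0.3], 0),
        Scenario("obstacle-ahead", [0.9, 0.8], 1),
    ]
    print(run_testbed(toy_model, scenarios))
```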

The talk Moore led was part of a meeting of a task force that began convening earlier this year, a preliminary step in an effort to develop a national data and AI resource hub. But despite its buzzworthy promise to “democratize” AI by giving a more diverse array of researchers access to its raw materials, critics of the project — including three members of the Federal Trade Commission’s new AI advisory team — want the initiative in its current form stopped altogether.

They say it will not only exacerbate the development of harmful algorithmic technologies but also further entrench the dominance of the very cloud tech giants the Biden administration has pushed the FTC to rein in.

The National Science Foundation and the White House Office of Science and Technology Policy are leading the task force, which was launched in June to plot out the National AI Research Resource, or NAIRR. They say it will give more U.S.-based researchers the chance to build big AI projects by accessing immense data sets, such as anonymized government transportation or medical data, along with the cloud infrastructure necessary to use that data to train machine-learning models or other AI systems.

However, some critics of the effort wonder if there’s a disconnect between the NAIRR and the Biden administration’s other efforts to protect people from the harmful effects of AI and the data collection driving it. They point to the FTC’s tough stance on Big Tech, as well as a proposal from the OSTP to establish a “Bill of Rights” for AI, which would require AI systems to respect democratic values and treat people fairly.

“I think they’re trying to have it both ways in terms of supporting business and protecting the average person,” said Ben Winters, a counsel for the Electronic Privacy Information Center, regarding the Biden administration’s approach to AI policy. “There’s not really a recognition that deployment, deployment, deployment at all costs is at odds with protecting people’s privacy and civil liberties with respect to use of these tools." Over time, said Winters, “throughout the effort of OSTP and what they might come out with, it might end up being in conflict with the results of setting up the NAIRR.”

But the OSTP’s would-be AI Bill of Rights is not in conflict with the NAIRR project, said Daniel Ho, associate director of the Stanford Institute for Human-Centered Artificial Intelligence, who has pushed for a national research cloud. “OSTP’s AI Bill of Rights and the NAIRR are both animated by the critical role that AI plays in modern society with a particular focus on the mandate for AI to be more human-centered, developed with human impact, risks and benefits in mind,” he said.

The OSTP did not respond to a request for comment.

Cloud giants raise their hands

All three of the biggest cloud behemoths — Amazon, Google and Microsoft — have indicated they’d like to play a role in the NAIRR project.

Google said the resource should involve a multicloud approach that would rely on more than one cloud provider. Amazon and Microsoft also said the NAIRR initiative should involve partnerships with commercial cloud providers. They argued that working with the private-sector cloud companies would ensure the resource stays up-to-date as data and cloud-related technology continue their rapid pace of evolution. The three companies also mentioned the importance of promoting access to more diverse researchers as well as preserving data privacy and protecting against unfair or biased AI.

All signs seem to point to NAIRR partnerships with the top dogs of the cloud. Yet after just three meetings of the task force, its co-chair, Manish Parashar, director of NSF’s Office of Advanced Cyberinfrastructure, said it is too early to define how the research data hub will be constructed and managed, or in what ways private-sector cloud providers might be involved.

“Our charge from Congress is to consider the full range of data and compute resources, including, for example, high-performance computing, cloud computing, hybrid platforms and future computing paradigms,” he told Protocol in an email.

A specific recommendation made by the National Security Commission on Artificial Intelligence in a 2020 memo propelled Congress to create the task force. “Federal funding for AI has not kept pace with the growth or potential of the field,” wrote the commission, which called for billions in non-defense funding to strengthen AI development at academic and government-affiliated labs in order to bolster the nation’s future security and “build vital public-private partnerships.”

An NSF spokesperson said the NSCAI is not actively involved in the task force creating the NAIRR. But the two groups share a member: Google’s Moore. In addition to his role on the NAIRR task force, Moore serves as an NSCAI commissioner alongside former Google CEO Eric Schmidt, who chairs the commission, which was established during the Trump administration in 2018. Another big name in tech and cloud circles, Amazon CEO Andy Jassy, who formerly led its AWS cloud division, sits on the commission, too. So do representatives from Microsoft and Oracle.

Like many in corporate AI, the security commission views AI research and development as a proxy cold war between American civil liberties and China’s surveillance-based authoritarianism. “We take seriously China’s ambition to surpass the United States as the world’s AI leader within a decade,” wrote the commission in a report published in March. “The AI competition is also a values competition.”

But threats of the U.S. “losing” a race for AI dominance against China are being used as a pretense for significant government AI investment for national security as well as for commercial contexts, said EPIC’s Winters. He said the concept comes with an “absurd assumption that because this tech is coming from and created in the United States, it has some virtuousness to it.”

AI advisers at the FTC push back

“We believe that the Task Force has, to date, over-emphasized the role of large-scale compute in its considerations. This leads to the myopic view that building bigger and more expansive compute is necessary for positive advances in the AI field,” wrote the American Civil Liberties Union in comments to the task force.

Indeed, some critics of the NAIRR’s current competitive, large-scale AI focus not only hope it will shift gears away from that narrow view of what AI research is, but also recognize that a huge hub for applying massive data sets to machine-learning computations inherently requires the world’s largest cloud providers. Considering the project’s ties to private-sector cloud executives, some, including Meredith Whittaker, senior adviser on AI at the FTC, worry that it amounts to a Big Tech subsidy sham.

“Why would a conflicted government body populated by tech executives recommend ‘democratizing’ access to the infrastructures at the heart of their concentrated power? Because this proposal wouldn't actually reduce that power,” Whittaker wrote in an article — published after her November appointment to the FTC — in ACM Interactions, an academic journal published by the Association for Computing Machinery. “These companies control the tooling, development environments, languages, and software that define the AI research process — they make the water in which AI research swims,” she continued, adding that “there is no plausible scenario in which a national research infrastructure could be meaningfully constructed outside of the current tech-industry ecosystem.”

Whittaker’s article reiterated what she and two other researchers named to the FTC’s AI advisory group, Amba Kak and Sarah Myers West, wrote in a letter they sent to the task force before they joined the agency. All three have been colleagues at the AI Now Institute, a group that has been critical of Big Tech’s influence on AI and of AI’s negative impacts on minority communities.

In their letter, they encouraged the task force to reconsider its anticipated plan to involve large cloud providers in the NAIRR. The alternative? Rather than building shared research infrastructure with big cloud companies, NAIRR should “instead explore alternative ideas for expanding research into AI, increasing government investment in critical work on AI, and meaningfully democratizing decision making on the development and deployment of AI and related technologies.”

Because the NAIRR will make AI research more accessible for communities that have been historically underrepresented or underserved, NSF’s Parashar said it could facilitate efforts to mitigate the harms of AI. “You can imagine how such accessibility could open opportunities for more work in critical areas such as auditing, testing and evaluation, bias mitigation, security and more. Likewise, expanding access would strengthen our national ability to build AI technologies that are more equitable and meet the needs of all Americans,” he said.
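
Parashar did not point to any particular methodology, but one small illustration of the kind of auditing work broader access could support is a demographic parity check, which compares a model’s positive-outcome rate across groups. The sketch below is hypothetical; the group labels and model outputs are invented solely to show the arithmetic and do not reflect any NAIRR plan.

```python
# Toy bias-audit sketch: compare a model's positive-outcome rate across two
# demographic groups. Real audits are far more involved; the figures here
# are invented purely to illustrate the calculation.
from collections import defaultdict
from typing import List, Tuple

def positive_rates(predictions: List[Tuple[str, int]]) -> dict:
    """predictions: (group_label, model_output) pairs, where output is 0 or 1."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, output in predictions:
        totals[group] += 1
        positives[group] += output
    return {g: positives[g] / totals[g] for g in totals}

if __name__ == "__main__":
    sample = [("group_a", 1), ("group_a", 1), ("group_a", 0),
              ("group_b", 1), ("group_b", 0), ("group_b", 0)]
    rates = positive_rates(sample)
    gap = max(rates.values()) - min(rates.values())
    print(rates, f"demographic parity gap: {gap:.2f}")
```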

Not all critics of the current NAIRR focus want it stopped. Nandita Sampath, a policy analyst with Consumer Reports focused on algorithmic bias and accountability issues, said the NAIRR should serve as a hub for a variety of AI-related research, including the sort of interdisciplinary social science research that is not the sole domain of Ph.D.s and engineers. However, she questioned the idea of killing the project altogether. “I don’t know that scrapping this idea is what I think should happen,” she said, adding, “This needs to be designed in a way that does not entrench the power of these big tech companies.”

In its own comments submitted to the task force, EPIC argued that private corporate partners “should play an extremely limited role in the NAIRR,” and if they must be involved, “NAIRR should set rigorous restrictions on the influence of companies involved and on the access those companies have to government data sets.”

For now, there are no signs that the task force is considering any major shift away from planning a national research hub that will closely involve the giants of data and the cloud.

The group will hold its fourth public meeting on Dec. 13. Google’s Moore will be there, sharing his recommendation for what the NAIRR’s testbed environment for AI should look like.
