The FTC’s AI advisers are fighting a White House plan that could boost cloud giants

Critics of a national AI research project say it pushes a “myopic” large-scale AI approach that will entrench dominance of the very cloud giants the Biden administration has pushed the FTC to rein in.

U.S. National Artificial Intelligence Initiative

An effort to develop a national data and AI resource hub is meeting criticism.

Illustration: U.S. National Artificial Intelligence Initiative, Protocol

A talk led by Google Cloud’s Andrew Moore at an October meeting hosted by the U.S. National Science Foundation was exactly the sort of conversation that critics of a government push to facilitate large-scale AI projects are worried about.

Moore, Google Cloud’s vice president and general manager for AI and industry solutions, talked about AI testbeds, the environments used to simulate how AI and deep-learning systems will behave once they are deployed in the real world. During the discussion, academics and government agency chairs riffed on the role of federally funded testing facilities, why and how they might be maintained, how much computing capacity they’d need to operate and, of course, the cost.
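To make the idea concrete, the kind of testbed Moore describes can be sketched as a harness that runs a model against a simulated environment and scores it before any real-world deployment. This is a minimal illustrative sketch: the names (`SimulatedEnvironment`, `rule_based_model`) are hypothetical and do not correspond to any NSF or Google Cloud system.

```python
import random


class SimulatedEnvironment:
    """Feeds the model synthetic inputs and knows the expected answers."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)  # seeded for reproducible runs

    def generate_case(self):
        # A synthetic test case: an input value and its expected label.
        x = self.rng.uniform(-1.0, 1.0)
        return x, x >= 0


def rule_based_model(x):
    # Stand-in for a trained model under evaluation;
    # deliberately imperfect near the decision boundary.
    return x > 0.1


def run_testbed(model, env, n_cases=1000):
    """Run the model on simulated cases and report its accuracy."""
    correct = 0
    for _ in range(n_cases):
        x, expected = env.generate_case()
        correct += (model(x) == expected)
    return correct / n_cases


accuracy = run_testbed(rule_based_model, SimulatedEnvironment(), 1000)
print(f"simulated accuracy: {accuracy:.2%}")
```

The point of running such a harness at scale is exactly the cost question the task force debated: evaluating a large system across many simulated scenarios is compute-intensive, which is where the call for federally funded testing facilities comes from.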

The talk Moore led was part of a meeting of a task force that began convening earlier this year, a preliminary step in an effort to develop a national data and AI resource hub. But despite its buzzworthy promise to “democratize” AI by giving a more diverse array of researchers access to its raw materials, critics of the project — including three members of the Federal Trade Commission’s new AI advisory team — want the initiative in its current form stopped altogether.

They say it will not only exacerbate development of harmful algorithmic technologies but will further entrench the dominance of the very cloud tech giants the Biden administration has pushed the FTC to rein in.

The National Science Foundation and the White House Office of Science and Technology Policy are leading the task force, which was launched in June to plot out the National AI Research Resource. They say it will give more U.S.-based researchers the chance to build big AI projects by accessing immense data sets such as anonymized government transportation or medical data, along with the cloud infrastructure necessary to use that data to train machine-learning models or other AI systems.

However, some critics of the effort wonder if there’s a disconnect between the NAIRR and the Biden administration’s other efforts to protect people from the harmful effects of AI and the data collection driving it. They point to the FTC’s tough stance on Big Tech, as well as a proposal from the OSTP to establish a “Bill of Rights” for AI which would require AI to respect democratic values and treat people fairly.

“I think they’re trying to have it both ways in terms of supporting business and protecting the average person,” said Ben Winters, a counsel for the Electronic Privacy Information Center, regarding the Biden administration’s approach to AI policy. “There’s not really a recognition that deployment, deployment, deployment at all costs is at odds with protecting people’s privacy and civil liberties with respect to use of these tools." Over time, said Winters, “throughout the effort of OSTP and what they might come out with, it might end up being in conflict with the results of setting up the NAIRR.”

But the OSTP’s would-be AI Bill of Rights is not in conflict with the NAIRR project, said Daniel Ho, associate director of the Stanford Institute for Human-Centered Artificial Intelligence, who has pushed for a national research cloud. “OSTP’s AI Bill of Rights and the NAIRR are both animated by the critical role that AI plays in modern society with a particular focus on the mandate for AI to be more human-centered, developed with human impact, risks and benefits in mind,” he said.

The OSTP did not respond to a request for comment.

Cloud giants raise their hands

All three of the biggest cloud behemoths — Amazon, Google and Microsoft — have indicated they’d like to play a role in the NAIRR project.

Google said the resource should involve a multicloud approach that would rely on more than one cloud provider. Amazon and Microsoft also said the NAIRR initiative should involve partnerships with commercial cloud providers. They argued that working with the private-sector cloud companies would ensure the resource stays up-to-date as data and cloud-related technology continue their rapid pace of evolution. The three companies also mentioned the importance of promoting access to more diverse researchers as well as preserving data privacy and protecting against unfair or biased AI.

All signs seem to point to NAIRR partnerships with the top dogs of the cloud. Yet after just three meetings of the task force, its co-chair, Manish Parashar, director of NSF’s Office of Advanced Cyberinfrastructure, said it is too early to define how the research data hub will be constructed and managed, or in what ways private-sector cloud providers might be involved.

“Our charge from Congress is to consider the full range of data and compute resources, including, for example, high-performance computing, cloud computing, hybrid platforms and future computing paradigms,” he told Protocol in an email.

A specific recommendation made by the National Security Commission on Artificial Intelligence in a 2020 memo propelled Congress to create the task force. “Federal funding for AI has not kept pace with the growth or potential of the field,” wrote the commission, which called for billions in non-defense funding to strengthen AI development at academic and government-affiliated labs in order to bolster the nation’s future security and “build vital public-private partnerships.”

An NSF spokesperson said the NSCAI is not actively involved in the task force creating the NAIRR. But the two groups share a member – Google’s Moore. In addition to his role on the NAIRR task force, Moore serves as an NSCAI commissioner alongside former Google CEO Eric Schmidt, who chairs that group, which was established in 2018 during the Trump administration. Another big name in tech and cloud circles, Amazon CEO Andy Jassy, formerly the head of its AWS cloud division, sits on the commission, too. So do representatives from Microsoft and Oracle.

Like many in corporate AI, the security commission views AI research and development as a proxy cold war between American civil liberties and China’s surveillance-based authoritarianism. “We take seriously China’s ambition to surpass the United States as the world’s AI leader within a decade,” wrote the commission in a report published in March. “The AI competition is also a values competition.”

But threats of the U.S. “losing” a race for AI dominance against China are being used as a pretense for significant government AI investment for national security as well as for commercial contexts, said EPIC’s Winters. He said the concept comes with an “absurd assumption that because this tech is coming from and created in the United States, it has some virtuousness to it.”

AI advisers at the FTC push back

“We believe that the Task Force has, to date, over-emphasized the role of large-scale compute in its considerations. This leads to the myopic view that building bigger and more expansive compute is necessary for positive advances in the AI field,” wrote the American Civil Liberties Union in comments to the task force.

Indeed, while some critics of the NAIRR’s current competitive, large-scale AI focus hope the project will shift away from that narrow view of what AI research is, others point out that a huge hub for running machine-learning computations over massive data sets all but requires the world’s largest cloud providers. Considering the project’s ties to private-sector cloud executives, some, including Meredith Whittaker, senior adviser on AI at the FTC, worry that it amounts to a Big Tech subsidy sham.

“Why would a conflicted government body populated by tech executives recommend ‘democratizing’ access to the infrastructures at the heart of their concentrated power? Because this proposal wouldn't actually reduce that power,” Whittaker wrote in an article — published after her November appointment to the FTC — in ACM Interactions, an academic journal published by the Association for Computing Machinery. “These companies control the tooling, development environments, languages, and software that define the AI research process — they make the water in which AI research swims,” she continued, adding that “there is no plausible scenario in which a national research infrastructure could be meaningfully constructed outside of the current tech-industry ecosystem.”

Whittaker’s article reiterated what she and two other researchers named to the FTC’s AI advisory group, Amba Kak and Sarah Myers West, wrote in a letter they sent to the task force before joining the agency. All three have been colleagues at the AI Now Institute, a group that has been critical of Big Tech’s influence on AI and of AI’s negative impacts on minority communities.

In their letter, they encouraged the task force to reconsider its anticipated plan to involve large cloud providers in the NAIRR. The alternative? Rather than building shared research infrastructure with big cloud companies, NAIRR should “instead explore alternative ideas for expanding research into AI, increasing government investment in critical work on AI, and meaningfully democratizing decision making on the development and deployment of AI and related technologies.”

Because the NAIRR will make AI research more accessible for communities that have been historically underrepresented or underserved, NSF’s Parashar said it could facilitate efforts to mitigate the harms of AI. “You can imagine how such accessibility could open opportunities for more work in critical areas such as auditing, testing and evaluation, bias mitigation, security and more. Likewise, expanding access would strengthen our national ability to build AI technologies that are more equitable and meet the needs of all Americans,” he said.

Not all critics of the current NAIRR focus want it stopped. Nandita Sampath, a policy analyst with Consumer Reports focused on algorithmic bias and accountability issues, said the NAIRR should serve as a hub for a variety of AI-related research, including the sort of interdisciplinary social science research that is not the sole domain of Ph.D.s and engineers. But she questioned the idea of killing the project altogether. “I don’t know that scrapping this idea is what I think should happen,” she said, adding, “This needs to be designed in a way that does not entrench the power of these big tech companies.”

In its own comments submitted to the task force, EPIC argued that private corporate partners “should play an extremely limited role in the NAIRR,” and if they must be involved, “NAIRR should set rigorous restrictions on the influence of companies involved and on the access those companies have to government data sets.”

For now, there are no signs that the task force is considering any major shift away from planning a national research hub that will closely involve the giants of data and the cloud.

The group will hold its fourth public meeting on Dec. 13. Google’s Moore will be there, sharing his recommendation for what the NAIRR’s testbed environment for AI should look like.

