Climate

Data centers aren’t prepared for the climate crisis

Even as climate change increases the risks of floods, fires, and droughts, there are steps that data centers large and small can take to minimize their future vulnerability.

Data centers are struggling to operate amid record high temperatures.
Image: Google

Increasingly extreme weather threatens data centers and one of the things cloud computing customers prioritize most: reliability.

Data center operators have long planned for some climate risks, but climate change is increasing the odds of extreme events and throwing new ones into the mix. That’s creating a reckoning for operators, who may have to reevaluate everything from where they site new data centers to how they physically harden infrastructure and spread workloads across multiple regions.

A 2019 survey by the Uptime Institute, which advises business infrastructure companies on reliability, shows that a significant share of the cloud computing sector is merely reacting to the threats climate change poses or, worse, doing nothing at all. Nearly a third of the roughly 500 data center operators that responded said they had not recently reviewed their risks and had no plans to do so. Meanwhile, just 22% said they are “preparing for increased severe weather events.”

Jay Dietrich, the Uptime Institute’s sustainability research director, said that large data center companies generally have the resources to undertake more regular risk assessments and prepare for how climate change will impact operations, from storms that could increase the risk of outages to drought that could complicate access to water for cooling. Meanwhile, smaller companies tend to be more reactive, though they stand to lose the most.

“If I’m a smaller company that doesn’t have a big data center infrastructure, but it’s integral to my operation,” Dietrich said, “I’d better be proactive because if that goes down, it’s my business that goes down with it.”

Amazon Web Services, Google, and Microsoft — dubbed the Big Three in the data center world — have the world’s biggest cloud computing footprints, and all three have robust risk assessment processes that take into account potential disasters.

AWS says it selects center locations to minimize the risks posed by flooding and extreme weather and relies on technology like automatic sensors, responsive equipment, and both water- and fire-detecting devices to protect them once they’re built. Similarly, Microsoft uses a complex threat assessment process, and Google assures customers that it automatically moves workloads between data centers in different regions in the event of a fire or other disaster.
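
None of the providers detail how those protections work, but the underlying pattern is familiar: poll environmental sensors and act when readings leave a safe operating envelope. Below is a minimal sketch of that idea in Python; the sensor names, thresholds, and alert handling are illustrative assumptions, not any provider’s actual system.

```python
# Hypothetical sensor-driven monitoring loop. All IDs, sensor kinds, and
# thresholds are illustrative, not any provider's real configuration.
from dataclasses import dataclass

@dataclass
class SensorReading:
    sensor_id: str
    kind: str      # "temperature", "humidity", "water", or "smoke"
    value: float

# Illustrative safe-operating limits; real facilities tune these to
# equipment specs (e.g., thermal guidelines for server inlet temperature).
THRESHOLDS = {
    "temperature": 27.0,  # degrees Celsius at server inlet
    "humidity": 60.0,     # percent relative humidity
    "water": 0.0,         # any leak detection is actionable
    "smoke": 0.0,         # any smoke detection is actionable
}

def evaluate(readings: list[SensorReading]) -> list[str]:
    """Return an alert message for every reading outside its threshold."""
    alerts = []
    for r in readings:
        limit = THRESHOLDS.get(r.kind)
        if limit is not None and r.value > limit:
            alerts.append(f"{r.sensor_id}: {r.kind}={r.value} exceeds {limit}")
    return alerts

if __name__ == "__main__":
    sample = [
        SensorReading("hall-3-rack-12", "temperature", 31.5),
        SensorReading("hall-3-floor", "water", 1.0),
    ]
    for alert in evaluate(sample):
        # In production this would page staff or trigger an automated
        # response, not just print.
        print("ALERT:", alert)
```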

However, none of the Big Three explicitly call out climate change in their public-facing risk assessment processes, much less the mounting threat it poses. (None of the three responded to Protocol’s specific questions and instead provided links to previous statements and webpages.)

A 2020 Uptime report warns that data center operators may have become complacent in their climate risk assessments, even though all evidence points to the fact that “the past is no longer a predictor of the future.” For instance, sea-level rise could overwhelm cables and other data transmission infrastructure, while the rise in large wildfires could directly threaten dozens of centers located in the West.

Meanwhile, storms are expected to intensify as well. A recent assessment found that roughly 3.3 gigawatts of data center capacity sits in the federally recognized hurricane risk zone, and another 6 gigawatts of planned or under-construction capacity falls within the zone as well. And even when a data center itself is out of harm’s way, climate impacts have made power outages more likely, forcing centers to rely more heavily on backup systems.

Given that data centers are designed to operate for 20 years but generally remain in use for much longer, the need to plan for how climate change is shifting baseline conditions is vital to ensuring operators aren’t caught off guard. Nor is this only a future problem. In 2017, wildfires came within three blocks of Sonoma County’s data center and scattered the team responsible for operating it across Northern California. And just this year, Google’s and Oracle’s data centers experienced cooling system failures amid record heat in the U.K.

To account for these risks, Uptime encourages companies to spread workloads across data centers and regions: if a storm hits Florida, a provider should have out-of-state infrastructure so service can continue without pause, as happened during Hurricane Ian last month. While this redundancy is easier for a large company with widespread data centers, even smaller companies can benefit from using secondary, out-of-region sites for backup and recovery in case a climate-related disaster causes data loss at the primary site.
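
Here is a minimal sketch of that failover pattern in Python: serve from the primary region while it passes health checks, and fall back to an out-of-region secondary otherwise. The endpoints and health-check path are hypothetical placeholders; real deployments typically rely on managed DNS or global load balancers rather than hand-rolled polling.

```python
# Sketch of priority-ordered regional failover. URLs are placeholders.
import urllib.request

REGIONS = [
    "https://us-east.example.com",  # primary region
    "https://us-west.example.com",  # out-of-region backup
]

def healthy(base_url: str, timeout: float = 2.0) -> bool:
    """Treat an HTTP 200 from the health endpoint as 'region is up'."""
    try:
        with urllib.request.urlopen(f"{base_url}/healthz", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection errors and timeouts both subclass OSError
        return False

def active_region() -> str:
    """Return the first healthy region, in priority order."""
    for region in REGIONS:
        if healthy(region):
            return region
    raise RuntimeError("no healthy region available")
```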

Smaller fixes could have a big climate resiliency payoff as well. Uptime recommends investing in disaster prediction resources, such as those developed by insurance companies, to pinpoint the likelihood of disaster at any given site, and then using that information to prepare, from moving generators and pumps to higher ground to installing flood barriers. These steps can improve a center’s reliability when disaster strikes. At least some companies are already taking them, including Equinix, which has a global footprint of more than 240 data centers.
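
To illustrate how such prediction resources might feed a prioritization decision, here is a small Python sketch that combines per-hazard likelihoods into a single score used to rank sites for hardening work. The sites, probabilities, and weights are made-up assumptions, not real insurer data.

```python
# Hypothetical per-site hazard screening. All numbers are illustrative.
SITES = {
    # site: assumed annual likelihood (0-1) of each hazard
    "site-a": {"flood": 0.20, "wind": 0.30, "wildfire": 0.01, "heat": 0.10},
    "site-b": {"flood": 0.02, "wind": 0.05, "wildfire": 0.25, "heat": 0.15},
}

# Assumed business-impact weights; an insurer's catastrophe model would
# supply far richer inputs than this.
WEIGHTS = {"flood": 0.4, "wind": 0.3, "wildfire": 0.2, "heat": 0.1}

def risk_score(hazards: dict[str, float]) -> float:
    """Weighted sum of hazard likelihoods; higher means harden sooner."""
    return sum(WEIGHTS[h] * p for h, p in hazards.items())

# Rank sites so the riskiest gets hardening investment first.
for site, hazards in sorted(SITES.items(), key=lambda kv: -risk_score(kv[1])):
    print(f"{site}: {risk_score(hazards):.3f}")
```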

“We have undertaken a climate risk and resilience review of all our sites with our insurers,” Stephen Donohoe, the company’s vice president of global data center design, and Andrew Higgins, director of engineering development and master planning, told Protocol in a joint statement. “Climate risks are an integral part of our due diligence process during site selection, with flood risk, wind risk, water stress and extreme temperatures considered prior to acquiring the site. Mitigation measures are considered during the design process.”

Major enterprise operations may have no choice but to take some of these steps, given policy changes underway in Europe and the U.S.

The EU’s Corporate Sustainability Reporting Directive, which comes into effect in 2023, requires large companies operating on the continent to disclose their exposure to various risks, including climate change. In the U.S., the Securities and Exchange Commission is considering a similar set of rules that would require companies to disclose climate risk information, though a final rule is still months away.

If the rule, which is still in flux, comes into force, even the most reactive data center companies will have to change their ways.

“In our publications and discussions with clients and members, we’ve been really emphasizing that this is coming,” said Dietrich. “You’re better off being in front of it than behind it.”

Fintech

Judge Zia Faruqui is trying to teach you crypto, one ‘SNL’ reference at a time

His decisions on major cryptocurrency cases have quoted "The Big Lebowski," "SNL," and "Dr. Strangelove." That’s because he wants you — yes, you — to read them.

The way Zia Faruqui (right) has weighed in on cases that have come before him can give lawyers clues as to what legal frameworks will pass muster.

Photo: Carolyn Van Houten/The Washington Post via Getty Images

“Cryptocurrency and related software analytics tools are ‘The wave of the future, Dude. One hundred percent electronic.’”

That’s not a quote from "The Big Lebowski" — at least, not directly. It’s a quote from a Washington, D.C., district court memorandum opinion on the role cryptocurrency analytics tools can play in government investigations. The author is Magistrate Judge Zia Faruqui.

Enterprise

AWS CEO: The cloud isn’t just about technology

As AWS preps for its annual re:Invent conference, Adam Selipsky talks product strategy, support for hybrid environments, and the value of the cloud in uncertain economic times.

Photo: Noah Berger/Getty Images for Amazon Web Services

AWS is gearing up for re:Invent, its annual cloud computing conference, where this year’s announcements are expected to focus on its end-to-end data strategy and new industry-specific services.

It will be the second re:Invent with CEO Adam Selipsky as leader of the industry’s largest cloud provider after his return last year to AWS from data visualization company Tableau Software.

Image: Protocol

We launched Protocol in February 2020 to cover the evolving power center of tech. It is with deep sadness that just under three years later, we are winding down the publication.

As of today, we will not publish any more stories. All of our newsletters, apart from our flagship, Source Code, will no longer be sent. Source Code will be published and sent for the next few weeks, but it will also close down in December.

Enterprise

Why large enterprises struggle to find suitable platforms for MLops

As companies expand their use of AI beyond running just a few machine learning models, and as larger enterprises go from deploying hundreds of models to thousands and even millions of models, ML practitioners say that they have yet to find what they need from prepackaged MLops systems.

Photo: artpartner-images via Getty Images

On any given day, Lily AI runs hundreds of machine learning models using computer vision and natural language processing that are customized for its retail and ecommerce clients to make website product recommendations, forecast demand, and plan merchandising. But this spring when the company was in the market for a machine learning operations platform to manage its expanding model roster, it wasn’t easy to find a suitable off-the-shelf system that could handle such a large number of models in deployment while also meeting other criteria.

Some MLops platforms are not well suited to maintaining even 10 machine learning models when it comes to keeping track of data, navigating their user interfaces, or reporting capabilities, Matthew Nokleby, machine learning manager for Lily AI’s product intelligence team, told Protocol earlier this year. “The duct tape starts to show,” he said.
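
The bookkeeping problem Nokleby describes can be made concrete with a small sketch: once a company tracks hundreds or thousands of client-specific models, it needs structured metadata rather than ad hoc records. The fields and statuses below are illustrative assumptions, not any particular platform’s schema.

```python
# Minimal model-registry sketch; fields and statuses are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelRecord:
    client: str
    task: str               # e.g., "recommendation" or "demand_forecast"
    version: int
    data_snapshot: str      # identifier of the training data used
    status: str = "staged"  # "staged", "deployed", or "retired"

class Registry:
    def __init__(self) -> None:
        self._records: list[ModelRecord] = []

    def register(self, record: ModelRecord) -> None:
        self._records.append(record)

    def deployed(self, client: Optional[str] = None) -> list[ModelRecord]:
        """Report deployed models, optionally filtered by client."""
        return [
            r for r in self._records
            if r.status == "deployed" and (client is None or r.client == client)
        ]
```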
