CrowdStrike co-founder on what's right (and wrong) with US cybersecurity policy

"What we have now is not working," said Dmitri Alperovitch on the U.S.'s efforts to stop hacks in recent years.

Dmitri Alperovitch

The Cyberspace Solarium Commission led by a bipartisan group of lawmakers made many promising recommendations, according to expert Dmitri Alperovitch

Photo: Patrick T. Fallon/Bloomberg via Getty Images

Lawmakers every year pass modest legislation intended to improve cybersecurity, and hackers every year ignore it.

"Over the last four or five years, we've had cybersecurity bills passed every year. Most of them have not been very impactful or modified things at the margins," said Dmitri Alperovitch, who co-founded the cybersecurity firm CrowdStrike in 2011. But that's about to change, he said.

In March, the Cyberspace Solarium Commission, led by a bipartisan group of lawmakers, issued a 122-page report calling for an overhaul of the U.S.'s cybersecurity operations. The report makes over 75 recommendations that touch on everything from paper-based voting systems to reorganizing government agencies to better defend against hacks. Many of the recommendations are valuable, said Alperovitch, but what sets the commission apart from past efforts is that lawmakers are committed to turning many of the suggestions into actual policy.

Alperovitch is no stranger to politics. CrowdStrike played a key role in investigating the 2016 Democratic National Committee breach, and became entangled in the impeachment of Donald Trump when the president mentioned the company in a call with Ukrainian President Volodymyr Zelensky and said he believed it was owned by a Ukrainian.

Alperovitch, who was born in Moscow and moved to the U.S. as a child, announced in February that he was leaving CrowdStrike, where he held the role of chief technology officer, to launch a nonpartisan policy accelerator. He talked about what recommendations from the commission he thought were the most and least important, and the hurdles to getting good cybersecurity policy passed.

This conversation has been edited for length and clarity.

What do you think will come of the commission in terms of actual policy?

There have been a million commissions and documents written on this topic, and most of them have been collecting dust for many years. They might have had very good recommendations, but they went absolutely nowhere because the constituency that could do something about it was not involved.

The thing that makes this commission different is that it was initiated by Congress and had members of the House and Senate as commissioners and chairs: Mike Gallagher from the House side and Angus King on the Senate side. In D.C., you get things done by involving the stakeholders, and that's what was brilliant about the setup of the Solarium Commission: from the get-go it was congressionally authorized, members of Congress were heavily involved, and now those people will be able to get something done.

The other thing to understand is that while the report is a laundry list of 75 or more recommendations, the important ones are really the ones that Mike and Angus care about. You're not going to see the majority of them go into legislation, but the important ones will be taken up by their offices and hopefully pushed through into law. I know Mike Gallagher and have spoken to him a few times on this topic, and he's passionate about moving the ball forward.

I'm optimistic, particularly because they're planning to do some of this through the National Defense Authorization Act, which has to pass every year. It funds the military, so there's no option but to pass it, which makes it a great vehicle to get some of these important provisions into law.

What recommendations in the report do you think are the most important?

Certainly every time you have a laundry list of recommendations, not everything is going to be great, but several of the recommendations I think are really helpful in moving the ball forward. First, very little attention has been paid to what I think is the number one responsibility of the government in this area: to protect itself. That's the area that hasn't been getting a lot of attention and focus, despite the fact that the government has been the worst of the worst in terms of cybersecurity when you look at all the breaches it has experienced, from the Office of Personnel Management to the Defense Information Systems Agency and beyond.

One recommendation that goes a long way is empowering CISA [the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency] to continuously hunt across government networks. That's critical because most of these networks are already being infiltrated by adversaries — nation states and criminals — on a daily basis. And before you talk about shoring up any security, you have to figure out who is in there, who's trying to get in there, and kick them out before they do any damage.

Getting CISA to be more of an operational CISO for the federal government is where we need to end up. So it's good to see steps in that direction — giving them the ability to have visibility across those networks and start hunting across those networks to help detect and mitigate intrusions on federal government networks.

There was also a recommendation to enable Cyber Command to do this on the Department of Defense's side. That may be a shock to most people, but the reality is the DOD does not have continuous hunting capability across its own networks. Various services and commands do that on their own in isolation, and since we have Cyber Command with thousands of people, it makes sense to give them the power to do this across the entire DOD network.

What recommendations for private sector cybersecurity do you think are the most important?

On the private sector side, I have a view that we're in a fundamentally different place now than we were 10 years ago, when everything seemed hopeless and no company understood how to defend itself against breaches from nation states and other sophisticated adversaries. We certainly have the knowledge now for how to defend ourselves, and you don't hear about some of the largest companies having breaches.

The fundamental issue now is how we incentivize more companies to do the right thing, because we know which security strategies have proven effective. There's recommendation 4.4.4 [amending the Sarbanes-Oxley Act to include cybersecurity reporting requirements] that I thought was great because it recognizes we need regulatory reforms to make these companies, meaning boards of directors and CEOs, care sufficiently about cybersecurity, and the only way to do that is through lightweight legislation.

One of the things I would like to see is a set of outcome-driven metrics that every public company, and critical infrastructure companies even if they're not public, would track internally and report to the board on a quarterly basis, just like other company metrics such as sales and expenses. That would do two things: 1) give the board visibility into the right things their security team should focus on, and 2) when there's a big failure or breach, litigation gets started almost immediately, and this would give the opposing side the ability to subpoena those metrics to see what the board knew and what goals it was setting.

If you can show that quarter after quarter the board was ignoring the dire state of cybersecurity within an organization, then you have a negligence claim against the company, the board and the CEO, and hopefully the threat of that will incentivize companies to pay closer attention to this and start doing the right things.

What recommendations did you not like?

I didn't like some of the recommendations that involve creating additional layers of bureaucracy, like the [Bureau of Cyber Statistics]. I don't think we're going to solve our way out of this problem through more bureaucracy, so I don't find those things particularly helpful or persuasive. There's a bunch of stuff in there that's fine but that I don't think is going to move the needle. But the three things I highlighted are really important: what we are going to do about civilian federal government networks, military networks, and the private sector.

Were there any recommendations that you were hoping to see in the report but were absent?

They did a pretty good job of covering lots of topics — probably too many topics, to be honest, more topics than Congress can digest, so some prioritization would have been nice versus the laundry list.

One other thing I really liked that I didn't mention before is the recommendation to give funding to the Election Assistance Commission to support paper-based voting systems across the states, which is key. Obviously they couldn't have predicted the coronavirus pandemic, and they didn't recommend mail-based voting, but nowadays we might consider adding that, too.

What are the political hurdles to getting good cybersecurity policy instituted?

This is not a partisan issue, but anytime you try to change something, people stand up. Anytime you talk about regulatory reform, there are people in the private sector who won't be happy about it, and when you talk about CISA getting a bigger role in the federal government, other agencies will be unhappy. Every time you try to do something, you're going to break some eggs, but I think everyone realizes that what we have now is not working, and we need some major changes.

I think highly prescriptive regulation is the wrong way to go. I'm completely opposed to telling companies you need to patch or deploy two-factor authentication, because it completely depends on the organization. I'll give you an example: If you're in critical infrastructure, patching is almost always the worst thing you can do. Patches have taken down more operational critical infrastructure than any piece of malware ever. You need to be thinking about your risk, your operational needs, and what makes sense to you. Sometimes implementing two-factor authentication is the best thing for you, and sometimes it's not.

I think a prescriptive approach of "do X, Y and Z" without understanding the challenges each organization or network faces is the wrong way to go about this. But holding people accountable for outcomes, and showing that they knew things were bad and were negligent about it, like Sarbanes-Oxley did, is going to incentivize boards to be much more involved and not to treat this as an issue for the CISO to solve.

The reality is that cybersecurity isn't something that's just the responsibility of the security team. It has to be done at the business level, because sometimes the best thing for a company's cybersecurity is not to do something — not to engage in a particular business line, not to take certain risks that are well beyond the control of the CISO.

The government has already taken a number of steps to deter attacks — it has indicted hackers, issued sanctions and negotiated big agreements — but those don't seem to have worked. Why would things change now?

It's unfair to say we haven't accomplished anything. Over the last five years, I think the government has come a long way on public attribution and on indicting intelligence operatives in other countries. One of the things I've seen is that indictments of Chinese operatives in particular have had an effect in getting them to basically shut down their operations or retool, and either of those is a win. It's important to recognize that some major changes have occurred. We've got a long way to go, but we're not just spinning our wheels.

The commission recommends that the government should take a more active stance in "defending forward" and increasing the consequences of launching cyberattacks against the U.S. Is there a risk that those measures could create new problems if we get attribution wrong?

There's always a risk in getting attribution wrong, but let's be clear: The U.S. government over the last 30 years has done attribution on thousands if not tens of thousands of cases on the nation state level or the criminal level. I'm not aware of one case where they've gotten it wrong — maybe initially they thought it was someone else, but once they went through the motions, they got the right person. And our ability to do attribution now is light years ahead of where it was years ago.

A lot of people think attribution is still done by tracing attacks back through the chain of computers they passed through. That's not how it's done. In many cases, the intelligence community will have human sources and signals intelligence sources catching the other side communicating, devising plans and claiming credit for attacks. In a lot of these cases you have incredibly precise attribution from hearing directly from the actors. So I'm not concerned about that. But I do think the jury's out on whether we can actually deter our adversaries purely through cyberspace.

The challenge with cyber, and this goes for attacks against us as well as the attacks we launch against others, is that it falls into a gray space between peace and war: bad enough that we can't ignore it, but not bad enough to go to war over. We struggle with how to respond to activity in that space.

The adversaries are going to have the same situation; what we do to them is not bad enough to get them to stop, and in some cases it can escalate activities. We have not yet figured out how to deter attackers effectively, and I would argue that cyber is not necessarily the answer for deterrence. We need to think about other modes in which we can apply pressure. The right way on Chinese economic espionage activities, for example, is through trade, raising the level of pain. We don't know if it will ultimately be successful, but that's the best chance we have.

What parts of the U.S.' current cybersecurity strategy are misguided?

Some of the actions we've taken against individual operatives are not very productive. If you asked senior government officials what Russia or China could do to them personally in cyberspace to get them to disobey the president, they would say "nothing." Why would it be any different for the other side? In countries like North Korea and China, the consequences of disobeying an order from leadership can be a matter of life and death, whereas here you might just lose your job. Why would you think you could do anything in cyberspace to get them to disobey an order like that? What we need to focus on is how to target the leadership and its thinking, not just individual operatives.

Wouldn't that mean things like indictments aren't going to solve anything?

You may have some opportunities when it comes to contractors, who may have a choice, and that's what we've seen in China. Some of the indictments have been against firms, not government employees, and those firms decided to stop that line of business because it was too risky, or because they said to themselves, "We're doing this for money, and we don't want to be blacklisted for the rest of our lives." When it comes to military and intelligence officers, you're less likely to achieve that result. That's why I think some of the indictments we've issued against government personnel in Russia, North Korea and Iran have not had the same effect; those operatives have continued their operations.

What nation state cyberthreat worries you the most right now?

In terms of the most serious cyberthreat, I actually don't think it's nation states. I think it's ransomware. Those kinds of attacks could come from nation states in the future — we have seen North Korea engage in some ransomware attacks in the past and you could very well see other countries do the same, not for monetary gain but other purposes, like coercing organizations and even countries to bend to their will. But right now criminal ransomware attacks are my number one worry. Ransomware attacks against hospitals would be devastating from a public health perspective. In election times, those attacks against our election infrastructure could create a constitutional crisis.

I worry a lot less about the grid going down or some of these dire cyber 9/11 or cyber Pearl Harbor predictions. Those systems are pretty resilient, the attacks are pretty hard to do, and the fact that we haven't seen anyone do it in 30 years is a fairly good indication that the countries with the ability to do it are deterred. An attack like that would very likely trigger a kinetic response on our part, and that's something other countries are taking into consideration.

