In the surreal past few months, P.W. Singer has watched the concept for his new science fiction thriller come to life.
Singer is an author, an expert on 21st-century warfare at New America and, lately more than ever, a consultant helping companies in tech and other industries grapple with the repercussions of global unrest. His soon-to-be-released novel about social upheaval, automation and artificial intelligence, "Burn-in," is named after the practice of pushing a new technology to the breaking point. But Singer didn't expect the future of technology to arrive as fast as it has amid the fallout from COVID-19, forcing tech companies, governments and people everywhere to adapt on the fly.
"Has Pandora ever been put back in a box?" Singer asked during a recent phone interview from his home near Washington, D.C. "Roles and applications that would have previously seen a more gradual transition over the course of years have been pushed forward in a matter of weeks."
The result is a new urgency for answering major questions about AI, automation, cybersecurity, the rise of "open-source intelligence," and how social media companies are handling life-or-death misinformation. We asked Singer about all of these things, plus the way savvy companies have been running "war games" for economic shocks like this one, and how pandemic-induced turmoil will shape the next generation of technologists.
This interview has been lightly edited and condensed for clarity.
What has the evolution of COVID-19 revealed about our collective ability to withstand major economic and social shocks?
This was a catastrophe like no other in that no one can claim we had not gamed it out. Scenarios had been run generically on pandemics for well over 20 years. They had been run specifically on this type of outbreak as recently as a 2019 Trump administration report. This was not a situation where you could do as the 9/11 commission did and say it was a failure of imagination.
Sometimes people will say, "Oh but China lied to us." Of course they lied to us. We could prove that they were lying using what's called open-source intelligence. Other countries took the same information that was available and prepared. It is such a graphic illustration of the importance of competence and leadership.
There's a similar parallel not just in terms of nations and how governments handled it, but how businesses handled it. The grocery chain Meijer looked at what was playing out, gamed it out and prepared for it. Certain restaurants did this too: They used their supply chains to stockpile various goods and pivoted from in-person to delivery faster, but also started to operate as a kind of general store.
What kind of proactive measures have proved most beneficial for tech companies?
One of the things that good companies and organizations did as the trends became clear is they ran the equivalent of a fire drill. OK, we have the plan of what everyone is supposed to do, but when you actually run the drill, that's when you learn. One of the most important was, how do you shift to remote work on scale? What are all of the new questions that come out of that in terms of business continuity, security of your networks, the battle rhythm of how much time leadership is focused on coronavirus response, versus all the other demands?
There are some small but important questions, like what happens in the office when everyone is supposed to be working remotely? If you're doing regular cleaning but your workforce is not there, at a certain point the cleaning moves from being good for health to being bad for health, because you get the retransmission risk with janitorial staff.
The macro is, how are you dealing with access to systems? Most corporations, when they thought about working remotely, it was in terms of business travel-type situations. Some of the assumptions that were baked in were, "Oh, well, people would be working out of the coffee shop." Security at the level needed was not gamed out by everyone from the Defense Department to Zoom, which I'm not blaming, but which did not plan for this level of scale. While certain scenarios you could prep for, with others it's just something that you learn.
What about staying informed as all of this unfolded? You wrote a book about the weaponization of social media several years ago. Have we made any headway in combating misinformation?
Social media is not just a communication space and a marketplace. It's also a battle space. You have sides that go back and forth. They use tactics and strategies to achieve their goals. We've seen its weaponization to target elections, to target military units. We've seen it used to target corporations to try to sabotage their share price, to harm the rollout of a new product. We've also seen it have a real and very sad impact on public health.
This is now a matter of life and death. The deliberate spread of misinformation on coronavirus didn't just shape a laggard Trump administration response, but also shaped individual-level decisions that were irresponsible and dangerous. It cost lives.
There's no one silver bullet. You need a response at the government level. You need a response at the business and corporate level. You need a response at the individual level. The nations that weathered the storm, they have a national strategy for this. The U.S. does not.
What role should social media companies play?
Years and years back, the platform companies essentially had a very laissez-faire attitude: Whatever is said on the platform, it will be countered by others. Very quickly that turned out to be an unsustainable position, even though it aligned with a very classic Silicon Valley libertarian mindset.
Interventions are motivated by a mix of bad publicity, customers saying, "Whoa I don't want this here," and the fear — not the actuality — of government intervention. The very first instance was child porn. Then it moves into post-9/11 terrorism and things like beheading videos. Then it grows a little bit more difficult, where it's not just actual violent imagery, but people calling for it. We have Charlottesville, and it gets a little tougher, because we're starting to see an overlap with violent extremism, but these are "very fine people."
I remember having these conversations with some of the senior leaders at these companies. After the mass killings in Montreal and Pittsburgh and Charleston, they would say, "We can't police this kind of stuff." After the mass killing in Christchurch, it's, "Now we can." You had a similar soft intervention on public health related to anti-vaxxer conspiracy theories: "We're not going to ban it, but we're going to lower its rank."
Then we had the coronavirus outbreak, and all of them again implemented things that were unthinkable, impossible for them to do just a few months earlier. They should be applauded for doing it, but as they take on more and more of a political role, they are forced to play politics. For example, when someone posts information about a medical treatment that is not effective and maybe even dangerous, they've knocked certain individuals offline for doing that, but not others, because they're a little bit too prominent, and the thinking is, "If we do, then it will look like we're playing politics."
Given that minefield, where do these companies go from here?
I'm incredibly empathetic toward these companies, because they're being forced to play this role in the U.S. essentially because we have not updated our election rules. In other nations, the companies have more guidance. Where I have less empathy for them is when they don't war-game it out before they announce a policy and try to figure out how bad-faith actors are going to game their policies.
Another challenge for them is how they do this in a period of time when, oh by the way, they're dealing with all the effects of coronavirus on their own enterprise. A few weeks ago, there was a blast of kind-of-weird content moderation happening. It was because the platform companies had to send many of their people home, and they were using more and more AI that was understandably squirrelly. People were looking for conspiracy, when it was just AI doing its thing.
For AI and other technologies you've written about, like automation, there was already a lot of anxiety about potential complications, including lost jobs. What could the COVID-19 crisis mean for our trajectory in grappling with those issues?
The trends toward greater automation, in everything from AI to the hardware side of robotics, were already in place before coronavirus. All of the challenges they were going to bring were also going to happen before all of this — from what they do to the workforce to how they play out in cybersecurity. All indications are that these trends have been and will be drastically accelerated by the pandemic.
Roles and applications that would have previously seen a more gradual transition over the course of years have been pushed forward in a matter of weeks. An entire generation of kids has been rapidly thrown into distance learning. Remote work is happening on a scale that was never expected. Medicine is being conducted remotely in a manner and at a level not anticipated for another decade. AI and robotics have been put into roles that range from outbreak and policing surveillance to replacing human cleaning crews to using bots for grocery delivery.
After the outbreak is over, I wouldn't just say it's unlikely, I would say it's unthinkable that 100% of these roles and modes simply go back to the way they were before. What this means is that all the related questions — social questions, security questions, legal and ethical questions — that we would have seen debated over the course of those years are going to be with us. Many concerns were understandably set aside during a crisis. The introduction happened in an emergency. But you've got to figure out, "OK, how do we deal with this?"
What are some of the areas where this tension with automation and AI is most likely to come to a head?
We see this [tension] in everything from policing to its use in health care to education. You can see this on cybersecurity for telework, or what it means for facilities that have moved to greater automation, which opens up a whole new kind of hacking where you could have more of a physical effect. What does it do for everything from business culture to American culture to your family life?
It's going to be challenging, but it's incredibly important. It's one of the big questions coming out of all of this that we're going to be wrestling with for the next generation, at least. That's what we played with in "Burn-in." A burn-in is when you take a technology to the breaking point. An engineer will do a burn-in of a new stereo: play it as loud as possible to see how long it takes before it breaks. What we've just done is a burn-in for America.
So in reckoning with this, is it realistic to think about rolling any of these technologies back?
Has Pandora ever been put back in a box? No. And again, many of these rollouts happen for very good reasons. They were pushed out with good intent. And there's a parallel with this. Think about the internet itself: It was rolled out to help computer scientists figure out how to share computing time. It led to everything from entire new industries to, what is it, one-third of marriages? But it also brought all of these challenges, from how I think about the security of networks to what online information means for everything from democracy to my share price. Same thing with AI.
Now the challenge of AI is that AI is a double black box. Scientists often refer to it that way because the very value of it is that it operates in ways that humans increasingly can't wrap their heads around, and it provides insights and recommendations that humans would not have come up with on their own. But it's also a black box to leaders and the public. While every single industry is being reshaped by AI and automation — from high tech to agriculture and retail — surveys show that only 17% of business executives have even a passing familiarity with AI.
This is a great parallel to the internet 15 years back. But it's also not just business understanding. It's government. The secretary of the Treasury said that AI and automation is not going to be an issue for roughly 50 to 100 years. That's why it's not even on his "radar screen." It's already here right now.
So where does that leave us?
You have an irony. At the very moment that the technologies of science fiction are coming true, the way we think about them because of science fiction is not all that useful. We are at the 100th anniversary of the word robot. It was created for a play. From that point in 1920 all the way until today, the way we always think of robots is a mechanical servant that wises up and then rises up — from that first play to the "Terminator" movies to all the Silicon Valley billionaires now giving millions of dollars to address the existential threats posed by AI.
At least for our lifetime, the technology will be more like the steam engine, electricity or the computer. It's a technology that will change our world in ways that will be evident, but also lie in the background and change it even further. Think about how computers went from being a big machine used for code-breaking to the point that my kitchen has more than 25 computers in it. It's changed how I cook and how I interact with my kids at the breakfast table.
Looking ahead, there's a lot of speculation about reopening society and using antibody tests and navigating a period of uncertainty until a vaccine is developed. What would you advise companies to do to prepare for this?
Oh, goodness. The first is to do something that, frustratingly enough, too few in government have been doing, which is listen to the experts. The great thing about our increasingly online and transparent world is you can access the various forms of data out there on outbreak numbers, on models that project where we might be a week, a month from now. They are certainly not in agreement, but you can get that data. And good companies were doing that — looking not at what was said in a Twitter account, but at what research universities and institutions are doing.
Same thing with best practices in how to keep your employees safe. Don't use what your golf buddy posted on his Facebook account. You can get the best information from Mass General or the CDC, whatever.
If you're planning for the future, run the scenarios on the spectrum. Best case, worst case. Given what we're seeing in these different models, what's my plan for "transitioning back" after all this is through? What will I put into place to smooth that transition? What's my plan for a rollback if we get news to the contrary? Different people have different worst-case scenarios. For some it might be high numbers of their workforce getting sick. For others, an extended shutdown. If the worst thing that happens is you got ready for the worst-case scenario and it never came, you've done a good job as a leader.
Lauren Hepler ( @lahepler) is a former reporter for Protocol covering how people live and work in Silicon Valley. She previously covered development, energy, and tech for The New York Times, The Guardian, the LA Times, the Silicon Valley Business Journal, and others. Lauren can be reached at firstname.lastname@example.org (just ask for Signal), and you can share information with her anonymously via Protocol's SecureDrop. She grew up in Ohio and lives in Oakland.