In the last week of February, John Kelly III, IBM's executive vice president and director of its Watson Health unit, visited the Vatican's Pontifical Academy for Life in Rome. He was there to sign the Pope's "Call for AI Ethics," a project that plans to support the development of industry-wide ethical standards.
In March, Kelly sat down with Protocol to discuss why IBM joined Microsoft, the Italian government and others in signing the Pope's pledge, and what it means for the future of AI, including the company's own efforts with Watson.
Then the pandemic hit. The Pontifical Academy for Life notified attendees that someone at the AI summit had been exposed to the virus. Kelly went into two weeks of self-isolation and began working on IBM's response to the crisis. So far, this has included an optional tool to track the whereabouts of employees to curb the spread of COVID-19, working with other tech companies to share computing resources to help find a cure for the virus, and opening up IBM's vast patent library.
Protocol caught up with Kelly again to discuss how things have changed since the virus hit the U.S.
This interview has been lightly edited and condensed for length and clarity.
How has the Call for AI Ethics affected your response to the virus?
You recall that when we met, I had just come back from meetings with the Pontifical Academy. We had signed the Call for AI Ethics, which was all about making sure that the use of these technologies, particularly artificial intelligence, was broadly for the human good. It was a call for doing things to help health care, to preserve privacy, to make sure that everyone has access to it, to make sure it was used in ways that would preserve health and family, things that were good uses of technology. It was about saying, "Hey, let's not use this for fake news, let's not use this to disrupt society."
At the time, the issues that people were wrestling with were things around facial recognition — is that an invasion of privacy — the tracking of people and their locations, that kind of stuff. We were saying, "Hey, we've invented a lot of technology and we think we can do this responsibly." And then wham! We got hit by this coronavirus. We've tried to put those ethical concepts into action here. We've really tried to stay ahead of this thing as it swept around the Northern Hemisphere and then through the Southern Hemisphere. We've tried to bring our best technology to bear here.
A couple of examples: It might seem mundane, but call centers and crisis health centers were being overwhelmed with calls for information. People were waiting hours on the phone just to get a response. So we built a series of AI Watson chatbots that now many clients are using, whether it's health care institutions or state governments or unemployment insurance agencies. We were visiting one of the largest health care provider hospital systems in the country, and the CEO and the CIO and one of the researchers said, "John, we're just out of compute power. We need more supercomputing to model drug discovery for the virus." We said, "OK, we can fix that." So we pulled together a consortium of the biggest supercomputing centers in the world, with the Department of Energy, universities, IBM and some other tech companies. We said, "Here's more compute power than anybody's ever had access to, for free: Go find solutions to the problems." They told us that the rate of discovery is just off the charts.
I noticed that it was very difficult to get good, viable data on how fast the virus was spreading in the United States, in a county or city, and across the country. So I thought, we can use our AI and we can use our Weather Company app. Every county in the U.S. has to report to the CDC their infection and death rates. The trouble is that when you lift the hood, everybody's reporting it in a different way. We used artificial intelligence two to four times a day to scrape all of their data, which is in different formats — sometimes it's an Excel file, sometimes it's a PDF, sometimes it's a handwritten piece of paper — we scrape it, and then we post it, just like we post a weather map. We post a coronavirus map by county in the U.S.
I saw that the supercomputing collaboration is being described as a Manhattan Project for coronavirus. What does that entail logistically?
We took the lead because we know where all the big supercomputers are — we built most of them — Oak Ridge, Livermore, MIT, RPI. We have relationships with them all, and they all had their own little pockets of research going on. We said, "Hey, we could pool this and have a process to prioritize who gets on them quickly." We called all of our friends at the supercomputing centers and said we're going to pull this together. We're going to have a few leaders that are going to rapidly evaluate, and we want you to donate compute capacity, and we'll make decisions in 24 hours [about who gets to go online]. For example, if [someone] has a proposal and it looks good, you're going to be on Oak Ridge tomorrow morning. It's that fast.
How many labs and hospitals are hooked up to the system right now?
Last I checked, it was 50 or 60 different power users. [A spokesperson for IBM said it has now received 68 proposals to date: "36 have been matched with supercomputing systems and 27 of those experiments are already underway. The projects fall into three core categories: 1) better understanding the protein structure of the virus, 2) using AI to identify cellular binding sites or narrow down molecular candidates, and 3) forecasting the spread of the pandemic."]
Before coronavirus, the number one concern about AI was its use for surveillance and facial recognition. One of the big trade-offs that we see going on right now is between what we need to do to detect and track the spread of the disease while also respecting personal privacy. How are you handling that conflict?
The big question now is the privacy issue around location. This particular virus spreads based on a cough or sneeze within 6 feet. So you really want to know which people were together when. Today it's mostly manual. If John has the virus, where was John 10 days ago? Who did he see? People manually hunt this down. Of course, technology is totally capable of knowing that, because whether it's the GPS on your phone, or in the case of our weather app, we know your location. We use it to push a weather forecast for the city you're in. So the big challenge right now is, how far do we push that location-based tracking?
We've taken the position that it has to be an opt-in. We should not — based on those ethical principles from the Vatican — track people's locations, and I should not try to find out that you were next to Adam last Tuesday night, for example. It's not ethical. So we actually have built an application that we'll be piloting in India starting this week. It's an opt-in for our employees. They can opt in, and when they opt in, we track for them, and for us, and actually for other employees, where the person has been for the last 16 days. Then on the 17th day we drop [the data]. So we don't track you forever. We delete it. But for the last 16 days, we have a record of where you were. We're not calling it a tracker, we're calling it a location reminder.
If I all of a sudden start showing symptoms, and I want to know where I was a week ago Wednesday, I can't remember where the heck I was. This will tell me where I've been, and I can inform people that I was near. Or we at IBM, because they've opted in and given us permission, can tell other people who were in that location that someone — we won't tell them who — has symptoms; take your temperature more often, be careful, watch for any symptoms, let us know if there's any problem. It's a way of at least taking the "Where was I a week ago?" part out of it. We're not going so far as to say, "Hey, I'm going to push information to you that you were next to this person and that person had it." We're not going to step over that line.
That pilot program is for IBM employees in India?
It is. We're piloting versions of that for some other companies, if they want to know, as an example, inside of an automotive factory, how close have our people come to each other. We have social distancing rules, and we just want to know that our employees are keeping distance from each other. Again, opt-in. So wherever we, or our clients, say that people can opt in and we use it only for the stated purpose of reminding you where you were, or reminding you to socially distance, that meets our value system. We don't think it's ethical to truly electronically trace and track.
[Statement from IBM: "The health and safety of IBM employees, clients and partners remains our top priority. 'IBM Location Reminder' is one voluntary initiative to fight COVID-19 that we are piloting in India for our employees. The mobile app, designed and developed in India, is a new tool we're making available on a voluntary basis to IBMers to help curb the spread of the virus by identifying employees who may have come in close contact with a person affected in the last 14 days."]
One of the big anxieties is what happens when, hopefully, this pandemic passes: How do we transition back? What kind of role do these technologies have?
Like all businesses, we started, actually almost a month ago, thinking about going back to work, new ways of working, and what's the company going to be like? We're testing and trying all of the technologies. It's a living lab. We think it's not going to go back to being the same. It's going to be much different. People will be much more aware of their behaviors and social distancing. I think that people will be more willing to opt in and give data that's important for these kinds of things. I probably wouldn't have said that back when we met the first time, but I think this, frankly, has educated and scared people. They're going to want to know that if this thing starts to break out again, we're going to have even earlier warning systems.
The coronavirus, as bad as it is, it's not Ebola, as an example. But what if this thing had the spread rate that it has, but it had the death rate of Ebola? I think we, as a society, would have been much more willing to really take a hard line, to step in and start tracing and tracking big time. So I think it will be very interesting to see how society wrestles now with these ethical issues of AI and these technologies under the framework of a whole different world. We thought about this, and we said, "Look, we don't know where this is going to go." We're just not smart enough to know. But we do know that we have a heck of a lot of technology that can be useful in fighting this disease. That's why we did the patent pledge.
Some other companies said, look, here's five patents in my portfolio, and I'll license to some people, or I'll license to anybody, but only for two years. We stepped back and said, we're not that smart, so we will pledge our entire portfolio, free for use, for these corona-like viruses, basically forever. It's not just a couple of patents, it's the whole enchilada. Thousands and thousands of patents. You can use it for discovering or building devices. We're hoping others follow. We're the largest [patent] producer in the world, for 27 years and running.
We know that in there, there are thousands of patents that relate to tools and techniques that can be used in these pandemic situations. We know there's some biology patents in there that can be used for things that kill viruses on surfaces. We know there's patents in there that use UV light to kill bacteria on a touchscreen. So there's this huge war chest of technology and ideas. We just didn't want people worrying about, OK well, we've got to pay IBM if we use this. We said, "No, this is too important. Ethically, we shouldn't hoard this stuff. If you want to use it, take it and use it for this purpose."
Are companies and researchers making use of the patent library?
The calls are starting to come in. People want to commercialize some of the materials that kill viruses. We know that states, corporations and drug discovery companies want access to this AI. The scale of this thing means that humans can't answer all the phones. Humans can't do all the drug testing. So, there's a huge amount of interest. And I think it'll be both small startup companies and IT companies that no longer have to negotiate some huge cross-license with us. Time matters here. We just said, "take it, use it."