Citizen’s plan to keep people safe (and beat COVID-19) with an app

Citizen CEO Andrew Frame talks privacy, safety, coronavirus and the future of the neighborhood watch.

Citizen added COVID-19 tracking to its app over the summer — but its bigger plans got derailed.

Photo: Citizen

Citizen is an app built on the idea that transparency is a good thing. It's the place where users — more than 7 million of them, in 28 cities with many more to come soon — can find out when there's a crime, a protest or an incident of any kind nearby. (Just yesterday, it alerted me, along with 17,900 residents of Washington, D.C., that it was about to get very windy. It did indeed get windy.) Users can stream or upload video of what's going on, locals can chat about the latest incidents and everyone's a little safer at the end of the day knowing what's happening in their city.

At least, that's how CEO Andrew Frame sees it. Critics of Citizen say the app is creating hordes of voyeurs, incentivizing people to run into dangerous situations just to grab a video, and encouraging racial profiling and other problematic behaviors all under the guise of whatever "safety" means. They say the app promotes paranoia, alerting users to things that they don't actually need to know about. (That the app was originally called "Vigilante" doesn't help its case.)

Above all, Citizen raises questions: When does safety trump privacy? What are people willing to give up or risk in order to keep themselves and their loved ones safe? What does "safety" even mean in an increasingly digital world?

Frame joined the Source Code podcast to discuss all that and more. He talked about the app's history, what it means to promote safety above all else, how his company tried — and failed — to fight COVID-19 long before other tech companies got involved and what it means to be a good Citizen citizen.

The following excerpts from our conversation have been lightly edited for length and clarity.

Citizen has had sort of a winding history, so maybe go all the way back to the beginning of the product and the company. What was the thing you were trying to build when you started out building it?

I wanted to build something that was consumer [facing], because that's where my passion lies. And I feel like that's the only thing I'm sort of good at. It had to be mobile first, network-effect driven, with a very noble mission so that once the network effect activates, hopefully, it becomes the gift that keeps on giving.

As soon as I identified wanting to build a safety product, the question was, how do you get to market with a product that keeps people safe? There was a moment where I was sitting outside, and I was just thinking about, what is the Trojan horse to build a safety system? And it just dawned on me: I think all of us remember playing with police scanners, and listening to police. It's this incredible data stream that is filled with people in need of help. And it was just this closed system that had open access for listening. When a fire hits a building, nobody knows except the fire department. Why don't the 1,000 people inside of the building know that their building is on fire? The information is right there, transmitted right through the walls of that building.

And so it just instantly hit me: Let's open this system, create a trusted shared safety system where police and civilians and everybody has access to the same information. There are so many great byproducts of creating this open, shared system, one of which is that you get the people out of the burning building. You get the kidnapped kid back because you got the whole neighborhood engaged. You eliminate police brutality, because now you've built what I call "conditional transparency." It's not a radical transparency. It's a conditional transparency, meaning the transparency hits once a kid is missing. It doesn't mean there are planes overhead surveilling; there's no surveillance state in this. It's simply transparency when it is urgently needed. And that was kind of the genesis of this thing.

I'm getting ahead of myself here, but you kind of already got into what over the years has been complicated in how people think about Citizen. The reason to not tell the people in the building that there's a fire is that you don't want to cause a panic, right? And the reason to not tell people in the neighborhood is that you don't want them running toward the fire, because some people will run toward a fire.

All the pushback I've ever seen about Citizen has basically been about that fundamental question. Is it productive to have all of this information given to everyone all the time? At some point, do you just have to decide that, on balance, you believe it is?

You know, there would be no innovation if people were just fixated on worst-case scenarios. On any product, we can think about worst-case scenarios. And of course, there are valid reasons. But it's not going to stop you from building something great.

Just take Uber, right? I was taught not to hitchhike; that'd be a very dangerous thing. And if we were all in a room thinking about what happens when you just allow people to hitchhike with their neighbors, and build it as a global platform? People would say, "Oh, my gosh, that's the worst idea of all time."

A lot of people did say that about Uber!

Yeah. So you know, it's just so easy to fixate on the worst-case scenario. This is no longer an idea. This is something that is prevalent across 28 cities; we have ZIP codes where 60[%], 70[%], 80% of the entire ZIP code are active Citizen users. We don't have those stories. Believe me, I was also terrified of this. At the beginning, it gave me and the team a huge amount of anxiety, because we also thought about the worst-case scenario. But we also thought about the best-case scenario, which is why we built this and why we're scaling. And it far exceeds the unfounded fear of some of these worst-case scenarios.

I feel like COVID has become kind of a bigger part of Citizen than I would have expected. Rewinding to the early days of the pandemic, was it obvious to you that Citizen needed to be part of this, needed to help be the solution here?

At first, everybody in the company wanted to pitch COVID ideas. It wasn't until we realized just the architectural model of contact tracing — a very simple idea of just enabling Bluetooth and allowing everybody's Bluetooth radio to say hello to each other as you come in close physical contact — that it was like, OK, wait a sec, we already have distribution. We had 30% of New York City on the app back in March.

We just went, "Oh, my goodness, we have the safety platform, all we have to do is ask these 30% of New Yorkers to turn on Bluetooth." Suddenly, we have the largest and most dense contact-tracing system in any big city in the world, in ground zero, which was New York at the time.
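To make the architecture Frame describes concrete, here is a minimal Python sketch of that "Bluetooth radios say hello to each other" model, using rotating anonymous tokens in the spirit of the design Apple and Google later shipped. The class and function names are hypothetical, and the in-memory simulation stands in for the phone's actual Bluetooth APIs; the interview doesn't detail Citizen's real implementation.

```python
import secrets
import time

class Device:
    """One phone in the proximity network (hypothetical model)."""

    def __init__(self):
        self.current_token = secrets.token_hex(16)  # rotating anonymous ID
        self.my_tokens = [self.current_token]       # every token this phone has broadcast
        self.seen = []                              # (token, timestamp) pairs heard nearby

    def rotate_token(self):
        # Rotate periodically so broadcasts can't be linked back to one person.
        self.current_token = secrets.token_hex(16)
        self.my_tokens.append(self.current_token)

    def hello(self, other):
        # Two phones in close physical contact exchange anonymous tokens.
        now = time.time()
        self.seen.append((other.current_token, now))
        other.seen.append((self.current_token, now))

def exposure_times(device, positive_tokens):
    # When someone tests positive, their broadcast tokens are published and each
    # phone checks locally whether it ever "said hello" to any of them.
    return [when for token, when in device.seen if token in positive_tokens]

a, b = Device(), Device()
a.hello(b)                                   # the two phones crossed paths
print(exposure_times(b, set(a.my_tokens)))   # b learns it was exposed if a tests positive
```

The key property of the model is that matching happens on each phone: nothing identifying leaves the device unless someone reports a positive result.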

Unfortunately, we never got the approval to launch it.

What happened?

It's a complicated story. Apple and Google had their own plans. We came forward with this architecture, and we needed help. We needed allies. We needed Apple, we needed Google, we needed the government.

We created a product that integrated at-home testing and contact tracing, because we knew those were the two tactics for managing this here in the U.S. Our ask was basically, a) obviously we need to get it in the App Store; b) we wanted the government to pay for all that at-home testing for the U.S. Labcorp was working on their at-home test kit. And basically, you could just ship a test kit to anybody who asks for it within 24 hours; they take a saliva sample, send it back, and the result is delivered in the app. So it'll say, "You're negative for COVID."

Keep in mind, this was back in March. We built prototypes and everything inside the company; we were taking tests and getting the results in the app. Now, if you are positive, it would automatically inform all of the people that your Bluetooth recently saw that "you have had a potential exposure to COVID." Those people would then be offered a free test that would show up the next day. You can imagine if we partnered with Amazon on the logistics side, where Amazon is trucking these. It's like, OK, tech, come together and let's go: What can we do about COVID? There was zero government support. None whatsoever. We needed them to pay for the tests. It would have been great if Amazon did the logistics. But we called it the "testing-tracing flywheel." And this was, again, back in March, when people had never even heard of contact tracing.
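As a rough illustration of that "testing-tracing flywheel" loop, here is a hedged sketch: a positive result notifies recent contacts, each of whom is offered a next-day test. The contact log is keyed by name purely for brevity (a real design would match anonymous tokens as in the earlier sketch), and notify and ship_test_kit are hypothetical stand-ins for the push-notification, lab and logistics integrations that were never built.

```python
# user -> users their phone recently "said hello" to (hypothetical, simplified)
contact_log = {
    "alice": ["bob", "carol"],
    "bob": ["alice"],
    "carol": ["alice", "dave"],
}

def notify(user, message):
    print(f"[{user}] {message}")  # in a real product, a push notification

def ship_test_kit(user):
    notify(user, "A free at-home test kit will arrive within 24 hours.")

def handle_result(user, positive):
    if not positive:
        notify(user, "You're negative for COVID.")
        return
    notify(user, "You're positive for COVID.")
    for contact in contact_log.get(user, []):
        notify(contact, "You have had a potential exposure to COVID.")
        ship_test_kit(contact)  # the flywheel: an exposure leads to a test, which feeds more tracing

handle_result("alice", positive=True)
```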

We didn't get the support. We didn't get the alliances; we were on our own on this one. And on the government side, there was a Contact Tracing Task Force, and we were selected as the app to provide contact tracing to America, before [the task force] was suddenly just disbanded and dissolved and went away one day.

My question was going to be, wouldn't this be a useful time to do things like partner with local governments, but you're saying you did, and then it fell apart?

Well, there was no official partnership. But we were told that this was by far the best system for America. And there were just a bunch of meetings that didn't really go anywhere. And then suddenly, you know, they put a bunch of great technology entrepreneurs on the task force, who we knew! We knew all these people, and it was great. Sometimes government needs entrepreneurs to step up and solve problems. And in this case, it seemed like something was going to happen, but then it was just instantly disbanded with no plan.

Looking back now with almost a year of hindsight, was there anything else you could have done? Should you have just pushed through and said, "We'll figure something else out, screw all the rest of you?"

No, we couldn't, we didn't have approval from the app stores. Right now, innovation is no longer open. You have to go through the app stores, and they will decide if your innovation can reach the world. This is not how it was, pre-mobile. With the internet, you can build anything you want. And you have an open, uncensored, unfiltered internet. And with app stores, you don't. You must get through the process.

This is actually sort of a perfect example of the kind of tension we were talking about earlier. You have an obvious societal good. No one is going to argue that curbing the spread of COVID-19 is not a good thing. We're all on the same page about that. But then, and this is what I think we spent maybe six months too long debating, you have all of the privacy implications of that. What does it mean that there's now a system that knows not only that I have COVID, but who gave it to me and where they got it? Were those the kinds of debates that you were having, even back in March? Was that the holdup?

That actually annoyed me, because I just don't think that COVID was this mass privacy thing. All of the data was protected; nobody would know. And in practice, people share it anyway just through social interaction: every person I know who has had COVID has told me. It's not like, "Oh, my gosh, this person has COVID. Nobody can know."

And we didn't expose the name. We did expose the location. There was a big fight even with the app stores, because we thought that the location was super relevant. Why? Because it protects more people. If you went to a superspreader dinner, it should say, here's the time and place you got COVID. We can say Friday night, 8 p.m., here's the location. We're not going to tell you who, but you're like, "Oh, that was the birthday party." What's the next step? Tell everybody at the birthday party that there was an exposure there, because not everybody has the contact tracer.
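The location step Frame argues for could look roughly like the sketch below: given the time and place of an exposure, notify everyone whose phone recorded presence there within the window, without naming the positive person. The visit log, places and names are invented for illustration and are not Citizen's actual data model.

```python
from datetime import datetime, timedelta

# user -> (place, arrival time) records from their phone (hypothetical data)
visits = {
    "you":     [("Birthday party, 8th St NW", datetime(2020, 3, 13, 20, 15))],
    "host":    [("Birthday party, 8th St NW", datetime(2020, 3, 13, 19, 45))],
    "someone": [("Grocery store", datetime(2020, 3, 13, 20, 0))],
}

def users_to_notify(place, start, window):
    exposed = []
    for user, log in visits.items():
        for visited_place, when in log:
            if visited_place == place and start <= when <= start + window:
                exposed.append(user)  # "there was an exposure here"; no name is shared
    return exposed

print(users_to_notify("Birthday party, 8th St NW",
                      datetime(2020, 3, 13, 19, 30), timedelta(hours=3)))
```

Running the sketch flags the two people who were at the party during that window and leaves the unrelated visit alone.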

This was about beating COVID. I just think that there was this red herring around privacy that stopped everything. Privacy became the fundamental discussion, when it's like, OK, we won on the privacy side, but how are we doing as a nation now? Printing trillions of dollars, small businesses all going out of business. What mattered more: this privacy thing that seems to be extremely unfounded and a red herring, or beating COVID?

That's the big existential question around all of this, right? And it goes back to what you were saying before about best- and worst-case scenarios. There's just always going to be a tension between privacy and safety, right? Even as we talk about things like encryption, there's always going to be that tension. And at some point, all you can do is pick a side and be clear about which side you've picked.

Yeah. I mean, you've got to use judgment. And this is really where mission comes in. What is this privacy preservation that we're even talking about? I mean, the amount of tracking going on by Big Tech is thousands of times more privacy-invading than the COVID thing we were talking about.

We just got so sucked into talking about privacy preservation as the only issue, when in the meantime, COVID was spreading rapidly and destroying families and small businesses and American restaurants that'll never come back. I think that the focus was on the wrong area.
