'You’re reading my mind': GitHub's AI coder amazes (and terrifies) developers

Will Copilot make it into every developer’s toolkit?

A diagram explaining how Copilot works.

Copilot, built with OpenAI’s Codex tool and Microsoft-owned GitHub’s monstrous code database, launched in late June.

Image: GitHub

Tech consultant Ady Ngom laughed nervously. A tool called GitHub Copilot had just anticipated the exact function he wanted to type. The AI had helped him write the classic Fibonacci function; now he wanted to speed it up using memoization. Copilot, which Ngom had installed only moments before, was one step ahead of him. “Can it listen to what I’m saying? Maybe that’s how it got it,” Ngom joked on a November LinkedIn live broadcast, where he was chatting about Copilot with other developers.

“I was like, holy moly, you are just reading my mind right now,” Ngom told Protocol. “It was a surreal moment.”
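The code Ngom was writing isn’t shown in the article, but the pattern he describes is a standard one, and a minimal Python sketch makes it concrete: a naive recursive Fibonacci function, then the memoized version that Copilot anticipated before he could type it. (This is an illustration of the technique, not a recording of Copilot’s actual suggestion.)

```python
from functools import lru_cache

# Naive recursive Fibonacci: exponential time, because the same
# subproblems are recomputed over and over.
def fib(n: int) -> int:
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

# Memoized version -- the optimization Ngom was about to write when
# Copilot suggested it. Caching each result collapses the runtime
# from exponential to linear.
@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

Memoization is a common enough refactor that a model trained on public code sees it constantly, which is part of why the suggestion can feel like mind reading.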

Copilot, built with OpenAI’s Codex tool and Microsoft-owned GitHub’s monstrous code database, has stunned coders far and wide since its technical preview launch in late June of last year. Search #GitHubCopilot on Twitter and you’ll see software developers have had similarly amazing, yet somewhat jarring experiences. “Github #Copilot is so useful that it scares me at times,” one user tweeted. “I am truly amazed by the potential and craziness of #GitHubCopilot. I mean it seriously is crazy,” said another. Right now, Copilot is available only to individual developers who sign up.

Described as “your AI pair programmer,” Copilot recommends lines or blocks of code right in your editor. It can autocomplete repetitive code, offer lists of potential solutions and turn comments into code. It’s not perfect; France-based UI engineer Ivan Galiatin said sometimes it can feel like coding with a child. But in the six months since its launch, Copilot’s usefulness and power are clear. GitHub says Copilot’s suggestions make up more than 35% of newly written code in languages like Java and Python.
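The comments-to-code workflow looks roughly like this: the developer writes only a comment and a function signature, and the tool proposes a body. The example below is a hypothetical sketch in Python — the function name, signature and body are invented for illustration, not an actual recorded Copilot suggestion.

```python
from collections import Counter

# The developer types only the comment and the signature below;
# a tool like Copilot then proposes the function body.
# (Illustrative sketch only -- not a recorded suggestion.)

# Return the n most common words in a text, ignoring case.
def top_words(text: str, n: int) -> list[tuple[str, int]]:
    words = text.lower().split()
    return Counter(words).most_common(n)
```

The developer then accepts the proposed body with a keystroke or keeps typing to override it.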

Any new technology that automates a significant chunk of work will always be both amazing and scary. The Copilot team hears the “amazed and scared” reactions loud and clear. Greg Brockman, co-founder and CTO of OpenAI, told Protocol, “We want to maximize the reasons to be amazed and minimize the reasons to be scared.”

Copilot prompts questions about the future of education, labor and creativity within the computer science world. The debates will only get fiercer if Copilot makes its way to the workplace.

“We see many people clamoring for it,” said Oege de Moor, the GitHub VP overseeing Copilot. “Just imagine, you’re leading a large organization and you hear about 35% of the code could be written. That’s really a big win.”

How Copilot works

AI code completion isn’t new. Plenty of developers use tools like code predictors Tabnine and Kite. But Copilot stands out because of the companies behind it. GitHub hosts the work of 73 million developers: an incredible wealth of information on which Copilot can train. Combine this with Microsoft’s reach and OpenAI’s comprehensive AI research, and it’s hard to fail.

“One thing that’s very convenient about building a product like this is you can really dogfood it internally,” said Brockman. “Dogfooding” (encouraging engineers to use the software they make) helps identify flaws. The teams at OpenAI and GitHub are intimately familiar with coding frustrations and technicalities, which made it easier to evaluate Copilot as they built it.

The goal is to replace the boring part of coding. Instead of going to Stack Overflow or Quora or Google to find a basic coding solution, Copilot grabs it for you. You press tab to accept the suggestion, or keep typing to ignore. “I can delegate some tasks that I just don’t want to do,” Galiatin said. “It saves a lot of mental energy and a lot of time.”

“The part that always gets me is how quickly it adapts to your style of coding, instead of imposing on you a way of coding,” Ngom said. It won’t make you a better coder, Ngom said. But it will make you faster. “It says, 'How can I facilitate your thinking process?' rather than, 'How can I take away your thinking process and just give you code?'”

Training new coders

De Moor hopes Copilot draws more people to the coding profession, helping tech companies tackle the industrywide recruitment problem. Highly skilled developers can offload grunt work and solve more interesting problems. Beginners or low-skilled developers can use Copilot as a jumping-off point.

“People who are not yet proficient in a programming language or who do not have the benefit of a computer science education, this will help them get started much more quickly,” de Moor said.

Not everyone agrees that Copilot is good for beginners. “I mentor junior developers and I tell them to stay away from Copilot,” Ngom said. “It is very important to get a solid understanding of programming and coding practice first.” Some developers are fearful of Copilot undermining computer science education, teaching students to use it as a crutch throughout their careers.

Copilot makes sense for intermediate and advanced developers, Ngom said. “I don’t have any problem with copy and pasting, as long as you understand what you’re copy and pasting.”

But Brockman thinks AI-powered tools like Copilot could be a real “democratizing force” in computer science education. He referenced a handy experimental feature that explains strings of code in plain language. “You can have this personalized code tutor that is just for you and able to give you the feedback and instruction you need,” Brockman said.

Copilot in the workplace: Is it a developer tool or is it a developer?

An obvious concern is Copilot squashing originality. Then there’s the existential concern of Copilot replacing developers altogether. The World Economic Forum predicted that AI will replace 85 million jobs worldwide by 2025 — though it also says it will create 97 million jobs at the same time.

Though some are concerned about AI taking their jobs, the developers Protocol spoke to are not. At least, not yet. Copilot is no match for human intuition, and developers’ jobs extend beyond writing code. De Moor and Brockman, as expected, pushed back on concerns as well.

“The truly creative part of coding is deciding what the software should do,” de Moor said. “I don’t foresee a future where Copilot produces useful code without human input. I do see unbridled human creativity no longer held back by irrelevant details.”

“I actually think what we’re seeing is AI taking no jobs, but taking the drudge work from all jobs in unison,” Brockman said. “We have these tools that just enhance what people can do.”

Right now, these conversations are hypothetical. Copilot isn’t in the workplace yet. Enterprises are still working through how to use AI generally, and Copilot has to clear some specific hurdles.

For one, using a tool trained on public code to create commercial products is ethically and legally ambiguous. That debate is still evolving. There are also ethical questions about code reproduction. GitHub is working on a filter to eliminate Copilot suggestions that are carbon copies of code in GitHub’s public repositories. De Moor said this doesn’t happen often, and when it does, it’s usually because there’s only one way to write the code. But he doesn’t want users “to be worried that you might unwittingly infringe an open-source license by using Copilot.”

Suman Hansada, an India-based SaaS engineer, has been using Copilot for personal projects but can’t use it in his job. Copilot, and by extension GitHub and Microsoft, learns from its users’ code. That’s a no-go for companies reliant on customers paying for their code. “Your company can have proprietary code that they don’t want to be in the public domain,” Hansada said.

Still, as Copilot and OpenAI’s Codex improve, Brockman thinks workplace adoption is imminent. “You multiply the productivity benefits across all your developers,” he said. “I think we’re gonna see these tools at every developer’s desk.”

