Enterprise

Amazon’s Werner Vogels: Enterprises are more daring than you might think

The longtime chief technology officer talked with Protocol about the AWS customers that first flocked to serverless, how AI and ML are making life easier for developers, and his “primitives, not frameworks” stance.

Amazon CTO Werner Vogels. Photo: Amazon

When AWS unveiled Lambda in 2014, Werner Vogels thought the serverless compute service would be the domain of young, more tech-savvy businesses.

But it was enterprises that flocked to serverless first, Amazon’s longtime chief technology officer told Protocol in an interview last week.

“For them, it was immediately obvious what the benefits were and how you only pay for the five microseconds that this code runs, and any idle is not being charged to you,” Vogels said. “And you don't have to worry about reliability and security and multi-[availability zone] and all these things that then go out of the window. That was really an eye-opener for me — this idea that we sometimes have in our head that sort of the young businesses are more technologically advanced and moving faster. Clearly in the area of serverless, that was not the case.”

AWS Lambda launched into general availability in 2015, and more than a million customers are using it today, according to AWS.

Vogels gave Protocol a rundown on AWS Lambda and serverless computing, which allows customers to build and run applications and services without provisioning or managing servers. He also talked about Amazon CodeWhisperer, AWS’ new machine learning-powered coding tool, launched in preview in June; how artificial intelligence and ML are changing developers’ lives; and his thoughts on AWS providing customers with “primitives” versus higher-level managed services.

This interview has been edited and condensed for clarity.

So what's the “state of the state” on AWS Lambda and how it's helping customers, and are there any new features that we can expect?

You'll see a whole range of different migrations happening. We've had folks from Capital One that migrated old mainframe code to Lambda. [iRobot, which Amazon announced plans to acquire on Friday], the folks that make Roomba, the automatic [vacuum] cleaner, have their complete back end running as serverless because, for example, that's a service that their customers don't pay for, and as such, they really wanted to minimize their costs yet provide a good service. There's a whole range of different projects happening, whether that is pre-processing images at some telescope deep in Chile all the way up to monitoring Snowcones running on the International Space Station, where they run Lambda on that device as well and can actually do processing of imagery and things like that. It's become quite pervasive in that sense.

Now, the one thing is, of course, if you have existing code and you want to move over to the cloud … moving over to a virtual machine is easy — it's all in the same environment that you had on-premises. If you want to decompose the application that you had but don't want to make too many code changes, containers are probably a better target for that.

But for quite a few of our customers that really want to start from scratch and really innovate and think about [what] event-driven architectures look like, serverless quickly becomes the default target for them. Mostly also because it's not only that we see a significant reduction in cost for our customers, but also a significant reduction in their carbon footprints, because we're able to do much better packing on energy than customers would be able to do by themselves. We now also run serverless on our Graviton processors, so you'll easily see a 40% reduction in cost and energy usage.
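
To make “event-driven” concrete, here is a minimal sketch of the kind of function customers deploy on Lambda: a Python handler that reacts to an S3 “object created” notification. The handler name follows Lambda's convention; the business logic is a placeholder.

```python
import json
import urllib.parse


def lambda_handler(event, context):
    """React to S3 "object created" notifications; there are no servers to manage.

    The platform handles provisioning, scaling and failover, and bills per invocation.
    """
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Placeholder business logic: e.g., resize the image or index the document.
        print(f"New object: s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps("processed")}
```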

But I'm always a bit ambivalent about the word “serverless,” mostly because many people associate it with when we launched Lambda. But in essence, the first service that we launched, S3, is also really serverless. For me, serverless means that our customers don't have to think about security, reliability, managing performance, managing scale, doing failover — all those kinds of things — and really controlling costs. And so, in essence, almost all services at AWS are serverless by nature. If you think about DynamoDB [a serverless NoSQL database], or if you think about Neptune [a graph database service] or any of the other services that we have, most of them are serverless because you don't have to think about provisioning them or managing them. That's all done for you.

Can you talk about the value of CodeWhisperer and what you think is the next big thing for, or the future of, low-code/no-code?

For me, CodeWhisperer is more an assistant to a developer. There's a number of application areas where I think machine learning really shines, and that is augmenting professionals by helping them and taking away mundane tasks. And we already did that, of course, in AWS. If you think about development, there's CodeGuru and DevOps Guru, which are both machine-learning services that help customers with, on one hand, operations and, on the other, early security checks during the development process.

CodeWhisperer takes that a step further. If you look at how developers develop, there's quite a few mundane tasks where you will go search on the web for a … piece of code — how do we do [single sign-on] login into X, Y or Z? Most people will just cut and paste or do a little translation. If that was in Python and you need to actually write it in TypeScript, you may do a translation on that.

There's a lot of work, actually, that developers do in that particular area. So we thought that we could really help our customers there by … using machine learning to look at the complete base of the AWS code, the Amazon code and all the open-source code that is out there, then do a qualitative test on that and include it in this body of work. With that, we can easily help customers: they just write some plain text saying, “I want a [single sign-on] login here,” and then the code automatically appears. And with that, we can do checks for security, we can do checks for bias. There's lots of other things that are now possible because we're basically assisting the developer in being more efficient and actually writing the code that they really want to write.
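
As an illustration of that comment-to-code flow, here is a hypothetical exchange: the developer writes a plain-text comment and the assistant suggests a completion. This is a sketch of the workflow, not actual CodeWhisperer output; the function and bucket names are made up.

```python
import boto3

# Developer's prompt, written as a comment:
# upload a local report file to an S3 bucket

def upload_report(file_path: str, bucket: str, key: str) -> None:
    """Suggested completion: upload a local file to S3 with boto3."""
    s3 = boto3.client("s3")
    s3.upload_file(file_path, bucket, key)


# The developer then only writes the business logic around it:
upload_report("q3-results.csv", "example-reports-bucket", "reports/q3-results.csv")
```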

When we launched Lambda, I said the only code that will be written in the future is business logic. Well, it turns out we're still not completely there, but tools like CodeWhisperer definitely help us to get on that path because you can focus on what's the unique code that you need to write for the application that you have, instead of the same code that everybody else needs to write.

People really like it. It's also something that we continuously improve. This is not a standing-still product. As we look at more code, as we get more feedback, the service improves.

If I think about … software developers, it's one of the few jobs in the world where you can be truly creative and can go to work and create something new every morning. However, there's quite a bit of heavy lifting still around that [that] sort of has nothing to do with your creativity or your ability to solve problems. With CodeWhisperer, we really tried to take the heavy lifting away so that people can focus on the creativity part of the development job, and I think anything we can do there, developers like.

In your tech predictions for 2022, you said this is the year when artificial intelligence and machine learning take on the undifferentiated heavy lifting in the lives of developers. Can you just expand on that, and how AWS is helping that?

When you think about CodeWhisperer and CodeGuru and DevOps Guru … or Copilot from GitHub … this is just the beginning of the application of machine learning to augment humans. Whether it's a radiologist somewhere looking at imagery late at night who gets help from machine learning to compare these images, or whether it's a developer, we're really at the cusp of how machine learning will accelerate the way that we can build digital systems.

I was in Germany not that long ago, and there the government told me that they have 80,000 open IT positions. With all the labor scarcity in the world, anything we can do to make the lives of developers easier so that they're more productive, and to make it easier for people who do not have a four-year computer science degree to actually get started in the IT world, will benefit all the enterprises in the world.

What's another developer problem that you're trying to solve, or what are developers asking AWS for?

If you're an organization like AWS or Amazon or quite a few other organizations around the world, you make use of the DevOps principle, where basically your developers also have operational tasks. If you do operations, there's information coming from 10 or 20 different sources. There's log files, there's metrics, there's dashboards. The hard part is actually tying that information together and … analyzing the massive amounts of log files that are being produced by systems in real time, surfacing that to the operators, showing that there may be potential problems and then giving context around it, because normally these log files are pretty cryptic. So what we do with DevOps Guru, for example, is provide context around it such that the operators can immediately start taking action and looking for the root cause of particular problems. So we're looking at all of the different aspects of development and operations to see what kinds of things we can build to help customers there.
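
As a small sketch of what pulling those surfaced insights into your own tooling can look like (assuming the boto3 DevOps Guru client and its list_insights call; treat the exact filter shape as illustrative):

```python
import boto3

# List ongoing reactive insights that DevOps Guru has surfaced from logs and metrics.
guru = boto3.client("devops-guru")
response = guru.list_insights(
    StatusFilter={"Ongoing": {"Type": "REACTIVE"}}
)

for insight in response.get("ReactiveInsights", []):
    # Each insight carries context an operator can act on immediately.
    print(insight["Severity"], insight["Name"])
```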

At AWS re:Invent last year, you put up a slide that read “primitives, not frameworks,” and you said AWS gives customers primitives or simple machines, not frameworks. Meanwhile, Google Cloud and Microsoft are offering these sorts of larger, chunkier blocks, such as managed services where customers don't have to do the heavy lifting, and AWS seems to be selling more of them as well.

Let me clarify that. It mostly has to do with the speed of innovation at AWS.

Last year, we launched more than 3,000 features and services. And so why are we still looking at these fine-grained building blocks? Let me go back to the beginning of AWS — when we started, the way software companies at that moment provided infrastructure or platforms was basically that they would give developers everything [but] the kitchen sink on Day One. And they would tell you, "This is how you shall develop software on this platform." Given that these platforms took quite a while to develop, basically what you operate is a platform that is already five years old and looking five years back.

Werner Vogels gives his keynote at AWS re:Invent 2021. Photo: Amazon Web Services, Inc.

We knew that if cloud would really be effective, development would change radically. Development would indeed be able to scale quicker and make use of multiple availability zones and many different types of databases and things like that. So we needed to make sure that we were not building things from the past, but that we were building for how our customers would want to build in 2025. To do that, you don't give them everything and tell them what to do. You give them small building blocks, and that's what I mean by primitives. And all these small building blocks together make a very rich ecosystem for developers to choose from.

Now, quite a few customers, especially the more tech-savvy companies, are more than happy to put these building blocks together themselves. For example, if you want to build a data lake, you have to use Glue [a serverless data integration service], you have to use S3, maybe some Redshift, Kinesis for ingestion, Athena for ad hoc analytics. I think there's quite a few customers that are building these things by themselves.
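
As a small example of composing those primitives yourself, here is a sketch of an ad hoc Athena query over data already catalogued by Glue and stored in S3. The database, table and results bucket are placeholders.

```python
import boto3

athena = boto3.client("athena")

# Run an ad hoc query against a Glue-catalogued table backed by objects in S3.
response = athena.start_query_execution(
    QueryString="SELECT page, COUNT(*) AS hits FROM clickstream GROUP BY page LIMIT 10",
    QueryExecutionContext={"Database": "analytics_lake"},  # placeholder Glue database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},  # placeholder bucket
)
print("Started query:", response["QueryExecutionId"])
```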

But then there's a whole category of customers that just want a data lake. They don't want to think about Glue and S3 and Kinesis, so we give them a solution called Lake Formation. That automatically pulls all these things together and gives them this higher-level component.

Now, the fact is that we are delivering these higher-level solutions. For example, some customers just want a backup solution, and they don't want to think about how to move things into S3 and then do some intelligent tiering [so] that if this data isn't accessed in two weeks, it is moved into cold storage. They don't want to think about that. They just want a backup solution. And so for that, we provide them a backup service. So we do have these higher-level services. They're more managed-style services for you, but they're all still based on the primitives that sit underneath. So if you want to start with Lake Formation and later on maybe start tweaking things under the covers, that's still possible for you. While we are providing these higher-level components, where customers have to worry less about which components fit together, we still provide the underlying components to the developers as well.
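
To illustrate the kind of plumbing he is describing, here is a sketch of an S3 lifecycle rule that moves objects to cold storage. Note the rule transitions objects 14 days after creation, a simpler stand-in for the access-based tiering described above; the bucket name is a placeholder, and a managed backup service would configure this for you.

```python
import boto3

s3 = boto3.client("s3")

# Transition objects to Glacier 14 days after they are created.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "cold-storage-after-14-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object in the bucket
                "Transitions": [{"Days": 14, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```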

Is quantum computing something that enterprise CTOs should be keeping their eye on? Do you expect there to be an enterprise use for it, or will it be a domain just for researchers, or is it just too far out to surmise?

There is a back-and-forth there. If I look at some of the newer developments, it's clearly research-oriented. The reason for us to provide Braket, which is our quantum compute service, is so that customers can start experimenting with the different types of hardware that are out there. And there's typical usage there: It's life sciences, it's oil and gas. All of these companies are already investigating whether they could see significant speed-ups if they transformed their algorithms into things that could run on a quantum machine.
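
For context on what that experimentation looks like, here is a minimal sketch using the Amazon Braket Python SDK's local simulator to run a two-qubit Bell-state circuit; targeting real hardware means addressing a managed QPU by its device ARN instead.

```python
from braket.circuits import Circuit
from braket.devices import LocalSimulator

# Build a Bell-state circuit: Hadamard on qubit 0, then CNOT from qubit 0 to qubit 1.
bell = Circuit().h(0).cnot(0, 1)

# Run on the local simulator; swap in AwsDevice("<QPU ARN>") to target real hardware.
device = LocalSimulator()
result = device.run(bell, shots=1000).result()
print(result.measurement_counts)  # expect roughly half "00" and half "11"
```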

Now, there's a major difference between, let's say, traditional development and quantum development. The tools, the compilers, the software principles, the books, the documentation for traditional development — that ecosystem is huge, and you have great support.

In quantum, I think what we'll see in the coming four or five years, as I listen to the Amazon researchers working on this, [is that] much of the work will not only go into hardware, but also into how to provide better software support around it, such that development for these types of machines becomes easier or even reaches the same level as for traditional machines. But one of the things that I think is very, very clear is that we're not going to be able to solve new problems necessarily with quantum computing; we're just going to be able to solve old problems much, much faster. That's why the life sciences companies and health care and … companies that are very interested in high-performance compute are experimenting with quantum, because that … could accelerate their algorithms, maybe by orders of magnitude. But we still have to see the results of that. So I'm keeping a very close eye on it, because I think there may be very interesting workloads and application areas in the future.
