OpenSSL dodges a bullet
Hello, and welcome to Protocol Enterprise! Today: why fears of a major vulnerability in a key piece of open-source software turned out to be overblown; why open-source AI has played such a role in research collaborations between the U.S. and China; and AMD limps out of the third quarter thanks to a shaky consumer economy.
Open and shut
The team that maintains OpenSSL, a widely used piece of open-source software that provides encryption for internet communications, disclosed a pair of vulnerabilities on Tuesday that affect the most recent version of the software.
However, after initially flagging the issue as “critical” in a heads-up advisory last week, the OpenSSL team has downgraded the new vulnerabilities to a severity rating of “high,” though administrators are still being urged to patch systems quickly.
- The OpenSSL project team disclosed last week that a new vulnerability would be announced on Nov. 1 but did not provide specifics.
- The announcement had generated significant attention in the cybersecurity community due to the ubiquity of OpenSSL and the massive impact of a previously disclosed critical vulnerability in the software: the Heartbleed vulnerability of 2014.
- OpenSSL enables secure internet communications by providing the underlying technology for the HTTPS protocol, now used on 82% of page loads worldwide, according to Firefox.
- The Heartbleed vulnerability affected a significant number of major websites and led to attacks including the theft of hundreds of social insurance numbers in Canada, which prompted the shutdown of a tax filing website for the Canada Revenue Agency.
The new vulnerability only affects OpenSSL versions 3.0 and above. Data from cybersecurity vendor Wiz suggests that just 1.5% of OpenSSL instances are affected by the vulnerability.
- “[Given] the fact the vulnerability is primarily client-side, requires the malicious certificate to be signed by a trusted CA (or the user to ignore the warning), and is complex to exploit, I estimate a low chance of seeing in-the-wild exploitation,” security researcher Marcus Hutchins wrote in a post.
- The pre-announcement last week was presumably intended to give organizations time to determine whether their applications would be affected before the full details of the vulnerabilities were disclosed, said Brian Fox, co-founder and CTO of software supply chain security vendor Sonatype.
- Given the tendency for malicious actors to quickly utilize major vulnerabilities in cyberattacks, many expected that attackers would begin seeking to exploit the issue shortly after the disclosure.
- The new vulnerabilities both involve buffer overflow issues, a common bug in software code that can enable an attacker to gain unauthorized access to parts of memory.
The severity rating was downgraded to “high” after analysis determined that several mitigating factors make the issue less severe, according to the OpenSSL advisory.
- “Many platforms implement stack overflow protections which would mitigate against the risk of remote code execution,” the OpenSSL team wrote in the advisory.
- One initial analysis suggests that exploiting the vulnerability is more difficult than it could be since the issue occurs after the validation of an encryption certificate.
Sponsored content from SkyBridge

Valuations have become less hype-driven and more realistic; the amount of time spent on due diligence has increased substantially; and every founder needs to directly, clearly, and concisely answer the question, “Does this project have any real-world utility, and does it create economic value?”
Killing the golden goose
There’s no shortage of space race themes tossed around when people discuss China’s AI advancements. But the distance between how yesterday’s energy and space-related technologies were built and how today’s AI-related tech is produced is striking. In fact, the way modern AI technologies are developed shows there isn’t really a race for one country to win.
The AI industry has skyrocketed because a global community has constructed it, together, brick by digital brick. Not only have developers in China created popular open-source AI parts, there are several examples of open-source AI tech used today that combine components from the U.S. and China:
- Alibaba’s open-source DeepRec recommendation engine is based on Google’s TensorFlow and crafted with help from Intel and AI chipmaker Nvidia.
- An open-source deep learning project from Baidu blends its AI framework PaddlePaddle with Google’s TensorFlow and Kubernetes.
- Apache Spark, the data processing engine created by the team that went on to found Databricks, chose Huawei’s Volcano as its default scheduler for batch data processing.
Now that the U.S. government has signaled further separation of U.S. tech from China, China AI and tech experts warn against approaches that are too broad. “The fact that AI development is so globalized renders that broad brush overall strategy relatively infeasible,” said Jeff Ding, an assistant professor of political science at George Washington University, who publishes a newsletter focused on AI in China.
Matt Sheehan, a fellow in the Asia Program at the Carnegie Endowment for International Peace added, “if the U.S. government comes in and tells AI scientists they can’t publish like this anymore, we’re going to see a huge backlash that could do major damage to U.S. competitiveness.”
There’s lots more to dig into in this full story, which includes an interactive cross-border AI graphic. It’s part of our series Are the U.S. and China Really in an AI Race?
AI and chips: What the future holds for the U.S. and China
Join Protocol Enterprise’s Kate Kaye for a virtual event Thursday, Nov. 3 at 10:30 a.m. PDT that will feature two separate discussions between tech and policy experts on the future of AI-related partnerships among tech businesses, developers, and AI researchers in the U.S. and China, part of Protocol Enterprise’s special report on the future of global AI development amid the rise of nationalism.
In the first discussion, Kate and an expert panel — Davis Sawyer, co-founder and chief product officer of Deeplite; Xiaomeng Lu, director of geotechnology at Eurasia Group; and Abigail Coplin, assistant professor of sociology and science, technology, and society at Vassar College — will address AI tech collaboration between the U.S. and China, the possibility of further U.S.-China detachments that could affect the AI and semiconductor industries, how AI tech collaboration between the U.S. and China will be difficult to disentangle, and more.
In the second discussion, Kate will be joined by Matt Sheehan, a fellow in the Asia Program at the Carnegie Endowment for International Peace, and Rebecca Arcesati, an analyst at the Mercator Institute for China Studies (MERICS), to examine the realities and misconceptions around AI ethics in China, the country’s AI and data privacy regulations, and the risks of an AI conversation in the U.S. driven by national security forces.
Saved by the data center
Industry watchers had some idea what to expect from AMD’s earnings Tuesday afternoon. Not only did the chipmaker issue a revenue warning for the third quarter, but a bunch of the U.S. hyperscalers reported earnings last week, which included projections for capital spending in several cases.
AWS predicted a $10 billion increase in its tech infrastructure spending, and Meta said it was spending more on data center growth, largely driven by the costs around AI. Microsoft and Google also essentially signaled there was little to worry about on that front.
AMD reported that third-quarter data center operating profit jumped 64% to $505 million and revenue rose 45% to $1.6 billion, compared with the year-ago period. Healthy results to be sure, but the company will likely face more intense competition in the near future, as Intel disclosed Tuesday that it planned to launch its next-generation Sapphire Rapids server chips Jan. 10, after multiple delays.
AMD’s revenue warning last month was triggered by a much weaker-than-expected market for PC chips. Tuesday’s results are the first in recent memory in which each of the company’s other segments — data center, gaming, and embedded — exceeded its PC chip business in revenue.
Amid a choppy market and a rapid drop-off in demand for consumer chips, AMD’s outlook for the remainder of the year continues to reflect that weakness.
“We will continue to invest in our strategic priorities around the data center, embedded and commercial markets, while tightening expenses across the rest of the business and aligning our supply chain with the current demand outlook,” AMD CEO Lisa Su said in her prepared remarks in the earnings call.
— Max A. Cherney (email | twitter)

Around the enterprise
Data center operators are not nearly as prepared for the effects of climate change as they should be, Protocol’s Lisa Martine Jenkins reported.
Oracle laid off more cloud employees, this time in its Oracle Cloud Infrastructure group, according to Business Insider.

Sponsored content from SkyBridge

The VC correction is proving once again that valuations are not an indicator of success. While money continues to flow, the crypto winter and VC slowdown have forced even the most committed Web3 venture capitalists (and their investors) to proceed with more caution.
Thanks for reading — see you tomorrow!