Intel hits out at 'mess' of using custom silicon in virtual RAN

Operators using custom silicon alongside general purpose processors will need twice the resources, says Intel.

Iain Morris, International Editor

September 11, 2023

7 Min Read
Tennis champion Novak Djokovic at the Queen's Club in 2018. What does the world's best tennis player have in common with Intel? Loads, of course. (Source: Carine06 via Creative Commons)

If semiconductor companies were likened to sports stars, Intel could easily be matched to Novak Djokovic, the Serbian tennis player who won his twenty-fourth major last night at the US Open. Both are general-purpose players, hard to topple anywhere but lacking the pizzazz of rivals in specific conditions. Both have dominated their respective fields for years, and yet their grips are no longer as tight. Rooting for their underdog challengers is almost instinctive – unless you are an Intel shareholder or Djokovic's mum.

Just as the tennis maestro is under attack from a small band of much younger players, so Intel is having to fight off an array of silicon rivals in markets it once had to itself. Much of the world's attention is fixed on PCs and data centers, where Intel last year made about $51 billion in revenues. But the much smaller telecom market for radio access network (RAN) equipment is a rare growth opportunity for Intel. The seepage of cloud computing into the RAN would naturally suit a company whose chips are ubiquitous in the cloud, as vital as nets on a tennis court.

Last year, Intel is likely to have made just $100 million from this nascent "virtual" RAN market, according to Joe Madden, founder and lead analyst of Mobile Experts. But virtualization still accounts for just a single-digit-percentage share of the overall RAN market. If it takes off, as some analysts expect, there will be a lot more at stake in future chip revenues. Hence Intel's determination to snuff out any threats.

Those are now coming from several directions. In a plain-vanilla fully virtualized RAN, all software functions would be handled by a general-purpose central processing unit (CPU) of the kind Intel makes. But companies such as Marvell, Nvidia and Qualcomm are pitching customized silicon as an auxiliary for the most demanding software. These "inline accelerators," as they are usually called, would entirely replace the CPU in Layer 1 (or L1), a category of demanding RAN functions. The RAN could run with a less powerful and less costly CPU, meaning Intel would earn less.

Inline or out

But there is a trade-off. While an operator might gain in performance, energy efficiency and CPU savings through customized silicon, it could stand to lose on virtualization. Custom silicon, almost by definition, uses proprietary code that cannot just be lifted and shifted to another type of chip. At the very least, an operator using these inline accelerators alongside Intel's CPUs would have two balls in play – something even Djokovic would struggle to manage.

"You can't have pieces of the network be cloud-native and pieces of the network be not cloud-native because then you end up with a mess for the operator, where they have to build two management systems, where they have to hire two sets of people who understand two different systems," said Sachin Katti, the general manager of the network and edge group at Intel.

He is confident Intel has addressed any performance degradation linked to CPUs through its own accelerators. Unlike the inline accelerators sold by Marvell, Nvidia and Qualcomm, these offload just one or two functions and keep everything on the same platform. "As far as the software is concerned, it doesn't even notice the difference," he told Light Reading. "The main software is always running on a general-purpose system and that is why it can always be managed and updated in a cloud-native manner."

(Chart: chip stock prices in 2023)

Naysayers would argue this sort of openness is convenient for Intel because of its dominant position in the bigger computing world. Most CPUs today come from Intel, which had a 71% share of the data-center market last year, according to Counterpoint Research. The main alternative, with a 20% share, was AMD, and it uses the same x86 instruction set. In other words, the software written for more than 90% of the world's CPUs is based on the same foundational architecture. For years, Intel has worked to cultivate the broadest possible ecosystem of developers.

The good news is that x86 chipmakers and their partners can boast some degree of portability where rivals cannot. The standout example is Ericsson – inconveniently enough for small competitors like Mavenir, which enjoys portraying the Swedish company as antagonistic to the concept of an open and virtual RAN. Having already worked with Intel on cloud RAN products, Ericsson announced a tie-up with AMD earlier this year. Software written for one can apparently be moved to the other. "That is what we believe our customers are asking for because it provides true openness and true mobility in the software stack," said Fredrik Jejdling, the head of Ericsson's networks business, in April.

Arm flexing

The caveat concerns the roughly 9% of this CPU market not based on x86. Today, the main alternative is Arm, a UK-based chip designer that sells blueprints to a host of mainly US customers, including Marvell, Nvidia and Qualcomm. The inline accelerators promoted by Marvell and Qualcomm feature Arm's technology, while Nvidia is one of several companies building Arm-based CPUs. Others targeting the RAN sector include AWS and an Oracle-backed startup called Ampere Computing. What Ericsson could not do is move its cloud RAN software to one of these Arm-based CPUs.

But Katti does not view this lack of compatibility between Arm and x86 as an "insurmountable technical problem," pointing out that AWS already has a system called Nitro capable of "recompiling" x86 software to run on Arm. "I don't see why, if the telco world wanted to solve that problem, you couldn't," said Katti. "It's just a question of whether it's important enough for the telcos to want to figure it out."

Currently, the answer is probably no. Arm still has a long way to go, as evidenced by its smaller footprint in the market for data-center CPUs and even smaller presence in virtual RAN, where Intel at the start of the year was boasting a 99% share. Not unreasonably, Katti also argues that Arm cannot easily be compared with Intel because it operates solely as a licensor to numerous other licensees.

"Arm is not one product," said Katti. "It is ultimately an architectural license, an instruction set. Every variant of Arm is different and so Ampere's Arm product is different from AWS's Arm product. They are not really comparable. And, frankly, I don't think software you write for Ampere is easily portable to Arm running on AWS. So don't assume because it's Arm it means the software is portable. It actually will require a significant amount of work even within the Arm ecosystem."

An acknowledgement of Arm's immaturity comes from other quarters as well. "Arm has built the platform and hasn't thought through some of these other things, like what kinds of optimization are required for the telco world," said Geetha Ram, the head of RAN compute for HPE. She notes a lack of Arm support for encryption at the central units (CUs) of any virtual RAN. "It takes x times longer to get something done on Arm because it's not optimized."

Code violation

A lack of portability leaves Layer 1 software developers with a potentially awkward choice: Either stick to one silicon platform, perhaps benefiting from some portability between Intel and AMD if they are in the mix, or maintain multiple libraries of code for different accelerators. Ericsson seems likely to have two already – one for purpose-built products, where its own silicon is used, and another for Intel- or AMD-based cloud RAN. Nokia, by contrast, relies on Marvell for both purpose-built and cloud RAN. The Layer 1 software does not change between them.

Still unclear is how much advantage companies would see from the full Layer 1 virtualization envisaged by Katti. Even if there were no custom silicon in use, Ram thinks network equipment providers (NEPs) would continue to retain dedicated resources for Layer 1 development. "Would a NEP allow someone to do this like an app developer on x86?" she said. "I don't think the x86 model of app development is valid for a critical piece of code like L1."

But if she is right, virtualization or cloudification surely loses some of its appeal. The reason for investing in one or the other is supposedly to run all workloads on the same underlying platform, replacing old vertical silos with a new horizontal layer. It's out with the spaghetti tangles and in with a neat lasagne, to borrow an analogy a few telco executives have used before now. Lasagne could turn out to be more expensive and less digestible than they hoped.

About the Author(s)

Iain Morris

International Editor, Light Reading

Iain Morris joined Light Reading as News Editor at the start of 2015 -- and we mean, right at the start. His friends and family were still singing Auld Lang Syne as Iain started sourcing New Year's Eve UK mobile network congestion statistics. Prior to boosting Light Reading's UK-based editorial team numbers (he is based in London, south of the river), Iain was a successful freelance writer and editor who had been covering the telecoms sector for the past 15 years. His work has appeared in publications including The Economist (classy!) and The Observer, besides a variety of trade and business journals. He was previously the lead telecoms analyst for the Economist Intelligence Unit, and before that worked as a features editor at Telecommunications magazine. Iain started out in telecoms as an editor at consulting and market-research company Analysys (now Analysys Mason).