Chip choices kickstart open RAN war between lookaside and inline

A comprehensive guide to the different types of RAN acceleration, what various suppliers are doing and what drives their strategies.

Iain Morris, International Editor

August 15, 2023

Nokia shows off equipment at this year's Mobile World Congress. (Source: Nokia)

In parts of a radio access network (RAN), software is a bit like motor fuel. Just as the same basic formula is good for most engines, so the same code works on numerous silicon chips. But deep in the bowels of the RAN, where computing demands are heaviest, porting the software written for one hardware platform to another is more like a human organ transplant. The chances of a terminal rejection are high.

Across swathes of the industry, there is little optimism this can change, despite a push by some telcos for complete hardware and software disaggregation at all layers of the RAN software stack. In the data link and network layers (commonly known as Layers 2 and 3, or L2 and L3), Nokia has designed a software "trunk" that runs on multiple platforms – whether custom-built or using the general-purpose designs of Intel (with its x86 architecture) and Arm. Down in the physical layer (L1), it cannot easily do the same.

"L1 is much closer to the underlying HW [hardware] and requires a lot more compute/processing," explained Mark Atkinson, Nokia's RAN chief, during an email exchange. "Therefore, L1 SW [software] is custom developed. When the compute architecture changes, the SW needs to be re-written."

The issue has shaped part of the Finnish vendor's strategy regarding more open, virtual and cloud-native networks. In one of these, much of the software it has written for its purpose-built kit could be deployed on common, off-the-shelf equipment, whether x86- or Arm-based. But the L1 setup is effectively unchanged. Whatever the scenario, the same Nokia software is combined with the silicon of Marvell, a US chipmaker.

Ericsson and Nokia are poles apart

Nokia has controversially opted for a technique called inline acceleration to achieve this. In a strict virtual or cloud RAN, the central processing unit – usually Intel-supplied – would support all the network functions. As far as Nokia is concerned, this general-purpose tool does not measure up performance-wise in L1. Its answer is to shift the L1 processing from the CPU to the same in-house software and Marvell chip used in purpose-built deployments. A PCIe card, which can be slotted into any compatible server, hosts the L1 technologies.

Nokia's rationale is compelling. Besides addressing the performance problems of using general-purpose CPUs, the inline accelerators can draw on the scale economies and upgrade cycle of purpose-built kit. The controversy is largely about the cloud credentials. If hardware and software are so closely entwined, dedicated to certain functions and completely separate from the main CPU platform, does inline really qualify as virtualized? Intel, perhaps unsurprisingly, says no.

Major vendors and their L1 strategies (Source: companies, Light Reading)
(Note: Table is not meant to be taken as a strict guide to commercial activity but highlights partners and technologies described in interviews and press releases; HW in this instance refers to the choice of silicon)

Intel has a huge ally in Ericsson, Nokia's main rival, which has a radically different approach to cloud RAN. Unlike Nokia, the Swedish vendor believes it is possible to keep most L1 processing on the CPU without performance degradation. The exceptions singled out in a white paper from Intel are functions known as forward error correction (FEC) and discrete Fourier transform (DFT), which is used in more antenna-rich "massive MIMO" networks.

These would therefore need their own hardware accelerator, albeit one handling fewer tasks. This has been achieved through an alternative to inline dubbed lookaside. Intel previously offered this on PCIe cards. More recently, it has criticized those cards as an additive cost and complication and begun putting the accelerator on the same die as the CPU. To distinguish it from the original lookaside approach, Intel calls this an integrated accelerator.
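For readers unfamiliar with the jargon, the difference between the two models is easiest to see as control flow. The C sketch below is purely illustrative and uses hypothetical names (lookaside_l1, inline_l1, the accelerator steps in the comments); it is not any vendor's code, only a picture of where the L1 work sits in each approach.

/* Conceptual sketch of lookaside versus inline acceleration.
 * All names are hypothetical placeholders, not any vendor's API. */
#include <stdio.h>

typedef struct { int slot_id; } slot_t;

/* Lookaside: the CPU runs most of the L1 pipeline and hands only the
 * heaviest blocks, such as forward error correction, to an accelerator,
 * then picks up the result and carries on. */
static void lookaside_l1(const slot_t *s) {
    printf("slot %d | CPU:   channel estimation, equalization, demapping\n", s->slot_id);
    printf("slot %d | ACCEL: forward error correction offloaded\n", s->slot_id);
    printf("slot %d | CPU:   resumes with the decoded bits\n", s->slot_id);
}

/* Inline: the accelerator card sits in the data path and runs the whole
 * L1 pipeline itself, so the CPU only ever sees L2/L3 traffic. */
static void inline_l1(const slot_t *s) {
    printf("slot %d | ACCEL: full L1 pipeline (CPU handles L2/L3 only)\n", s->slot_id);
}

int main(void) {
    slot_t s = { .slot_id = 1 };
    lookaside_l1(&s);
    inline_l1(&s);
    return 0;
}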

The implication is that Ericsson cannot realize the "synergies" trumpeted by Nokia. In purpose-built L1, it appears less reliant than its Finnish rival on third parties, instead using chips designed by Ericsson Silicon, its in-house semiconductor business. The rules of the game outlined by Atkinson would mean the software written for these could not be deployed on Intel's CPUs and accelerators. Ericsson presumably needs two sets of code – one for purpose-built and the other for cloud RAN.

The L1 data flow using Intel's virtual RAN technology (Source: Intel)

Doing the same as Nokia and Marvell would raise potentially awkward questions for Ericsson. Marvell spies an opportunity to sell merchant silicon to L1 software developers besides Nokia. But if Ericsson did likewise, it would be selling hardware to its RAN competitors. Using its purpose-built technologies for inline would leave Ericsson more vulnerable than Nokia to accusations that it continues to exercise control over the most important hardware and software elements.

Ericsson's partnership with Intel, by contrast, allows it to present cloud RAN as something markedly different from purpose-built. One of the big selling points, according to Intel, is that all layers of the RAN stack can be written in the same standard C/C++ programming language, reused in follow-on CPU generations and ported to other CPUs. This probably means CPUs that are at least x86-based, if not from Intel; Arm barely figures in this CPU market today. Ericsson recently claimed it can run the same L1 code on AMD as well as Intel chips by avoiding Intel's FlexRAN reference design and relying on in-house software.
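The portability argument comes down to which constructs the code relies on. The sketch below is a generic illustration, not Ericsson's or Intel's actual L1 source: the first loop is plain C that compiles unchanged on x86 or Arm, while the second, hand-vectorized with x86 AVX2 intrinsics, would have to be rewritten for an Arm CPU.

/* Illustrative only: why "standard C" ports across CPUs while optimized
 * kernels can tie code to one instruction set. Not vendor code. */
#include <stddef.h>
#ifdef __AVX2__
#include <immintrin.h>
#endif

/* Plain C: any compiler for any CPU architecture can build this. */
void scale_portable(float *x, float a, size_t n) {
    for (size_t i = 0; i < n; i++)
        x[i] *= a;
}

#ifdef __AVX2__
/* x86-specific: the same loop written with AVX2 intrinsics runs faster
 * on Intel or AMD but must be rewritten for Arm's NEON or SVE. */
void scale_avx2(float *x, float a, size_t n) {
    __m256 va = _mm256_set1_ps(a);
    size_t i = 0;
    for (; i + 8 <= n; i += 8)
        _mm256_storeu_ps(x + i, _mm256_mul_ps(_mm256_loadu_ps(x + i), va));
    for (; i < n; i++)
        x[i] *= a;
}
#endif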

Qualcomm gets into software

Such optionality is not a feature of the more customized chips used for inline acceleration, according to Intel. These typically come with digital signal processors (DSPs) that must be managed and programmed with proprietary tools and languages, it says in its white paper. "Of course, the more you try to optimize, the more the L1 tends to be tied to the hardware," conceded Gerardo Giaretta, in charge of 5G RAN infrastructure for Qualcomm, another inline backer.

Yet Giaretta resists the criticism of inline as a wholly proprietary tech incompatible with virtualization. In his mind, there are two sides to L1. One of these, which Giaretta calls the "data-handling" bit, is "super hardware-dependent," he says. But he does not view it as a point of competitive differentiation between software vendors.

The secret sauce lies in algorithms to do with channel estimation and beamforming, he believes. "That part of the software can still be implemented in a pretty open way in an inline accelerator," he recently told Light Reading. "Our solution uses DSPs that are not Qualcomm-specific and have a toolchain that is very common to the others."
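To give a flavor of what "channel estimation" means here, the sketch below shows the textbook least-squares estimate, in which the channel at each pilot subcarrier is the received symbol divided by the known transmitted pilot. It is generic classroom material included purely for illustration and says nothing about how Qualcomm or anyone else implements the algorithm.

/* Textbook least-squares channel estimation: H[k] = Y[k] / X[k] at each
 * pilot subcarrier. Illustrative only; not any vendor's implementation. */
#include <complex.h>
#include <stdio.h>

static void ls_channel_estimate(const double complex *rx, const double complex *pilots,
                                double complex *h_est, int num_pilots) {
    for (int k = 0; k < num_pilots; k++)
        h_est[k] = rx[k] / pilots[k];   /* received symbol over known pilot */
}

int main(void) {
    double complex pilots[2] = { 1.0, I };                       /* known transmitted pilots */
    double complex rx[2]     = { 0.8 + 0.1 * I, -0.2 + 0.9 * I }; /* received symbols */
    double complex h[2];
    ls_channel_estimate(rx, pilots, h, 2);
    for (int k = 0; k < 2; k++)
        printf("H[%d] = %.2f %+.2fi\n", k, creal(h[k]), cimag(h[k]));
    return 0;
}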


An engineer examines private 5G products supplied by Qualcomm.
(Source: Qualcomm)

While Giaretta declined to provide more details, he insisted that nearly all DSP intellectual property used in this RAN area originates with two companies: CEVA and Cadence Design Systems (thanks to the Tensilica business that Cadence acquired back in 2013). "Every vendor knows how to program those DSPs," he said. "We are not bringing a Qualcomm-specific DSP that makes the entire platform difficult to code on."

Still, Qualcomm is supplying not only silicon but also L1 software with its own inline accelerator cards, making it look very different from the hardware-only Marvell. Its approach seems to hold interest for newer entrants that lack the resources and RAN experience of Ericsson and Nokia. They include NEC and Rakuten of Japan as well as US-based Mavenir.

Spread betting

Working with Qualcomm means all three can pursue an inline strategy while keeping in-house resources freer for lookaside as well. Mavenir, notably, started out in the RAN as a FlexRAN client of Intel, building its software on top of the chipmaker's reference design. The drawback is that FlexRAN works only with Intel's products. It is not even compatible with AMD, the only other big vendor of x86-based chips. But thanks to partnerships, Mavenir can feasibly invest in FlexRAN-based software for lookaside rollouts with Intel and introduce Qualcomm if customers prefer inline. For L2 and L3, the same software should be deployable on multiple CPUs, according to Atkinson's logic.

Much the same goes for Rakuten. Altiostar, the company it wholly acquired two years ago, also started out as a major FlexRAN client. Even before the takeover, Rakuten Mobile was building a nationwide mobile network in Japan using Altiostar's software with Intel's CPUs and hardware accelerators. Tareq Amin, who quit the CEO post at Rakuten Mobile last week, went as far as saying he would never use PCIe cards, deeming them costly and complicated.

Yet Rakuten Symphony, the part of the Japanese company that serves other telcos, has had a partnership with Qualcomm since February last year. In theory, this should allow it to pursue the same approach as Mavenir. NEC, the other RAN vendor Giaretta cites as a Qualcomm customer, could do likewise. NEC did not respond to questions about its technical strategy, but Giaretta said it is using Qualcomm's L1 software.


Tareq Amin, formerly of Rakuten, viewed PCIe cards as an additional cost.
(Source: Iain Morris/Light Reading)

Some mystery also surrounds South Korea's Samsung, the biggest of the RAN vendors after Huawei, Ericsson, Nokia and ZTE, according to various analysts. Last week, it announced an updated virtual RAN alliance with Intel, and it has been emphatic that its software is not based on FlexRAN. This all makes it look very similar to Ericsson, and Samsung similarly boasts a range of purpose-built RAN products.

Unlike Ericsson, though, Samsung claims to be agnostic about acceleration techniques. "Samsung is offering both inline and lookaside accelerators to fulfill various configuration options and the needs of each operator," said a spokesperson for Samsung in emailed comments.

Samsung declined to divulge further details, including the names of chip suppliers, but it is probably working with Marvell. The Nokia supplier was named as an open RAN partner in October last year. Much further back, in 2019, Samsung and Marvell revealed they were collaborating on both 4G and 5G, without reference to virtualization or the cloud. Could Samsung have developed purpose-built L1 software for Marvell's chips that it can redeploy with inline accelerator cards, much as Nokia is doing? Possibly.

More inline guys

That leaves Fujitsu, another Japanese vendor best known for its role with Dish Network in the US, to round out the main group. "We're inline guys," Greg Manganello, Fujitsu's head of network integration and software, told Light Reading at this year's Mobile World Congress, citing Marvell and Nvidia as his main chip suppliers.

Rather like Qualcomm, Nvidia stumps up both hardware and software. Unlike Qualcomm and others, however, it offers graphics processing units (GPUs) for much of the inline acceleration. These can either be integrated with its Arm-based CPUs or provided on separate accelerator cards. There is no obligation to use Aerial, Nvidia's L1 software, but the alternative would presumably mean finding developers who are comfortable coding for GPUs. Fujitsu has opted for Aerial, it confirmed by email.

Critics regard Nvidia as an expensive and power-hungry choice. By Nvidia's own admission, the economics are questionable unless GPUs deployed at the network edge are used both for L1 acceleration and to support artificial intelligence (AI) needs such as the training of large language models (LLMs). Scott Petty, Vodafone's chief technology officer, has downplayed interest. "The need for on-prem LLMs – the business case is not there yet," he said at a recent press briefing. Accessing generative AI through hyperscalers is the preferred strategy for now.


Vodafone CTO Scott Petty seems unlikely to buy from Nvidia anytime soon.
(Source: Vodafone)

Apathy and peril

All this seems to explain why Fujitsu also has a partnership with software-less Marvell. And there, just like Nokia (and possibly Samsung), it is probably contributing its own L1 code. One takeaway from all this is the industry's heavy reliance on a single highly regarded but relatively small and unprofitable chipmaker. For its last full fiscal year, Marvell made sales of $5.9 billion and a net loss of $163.5 million, although sales were up $1.46 billion on the year-earlier figure.

It now figures in the plans of at least three of the world's biggest RAN vendors outside China. And Nokia appears to have no L1 alternatives. The dangers of dependency were illustrated just a few years ago when problems at Intel upset Nokia's whole 5G business. Broadcom and Marvell were subsequently introduced as part of a supplier diversification strategy, but they now appear to be serving entirely different needs.

Decisions about lookaside and inline come with trade-offs and risk. Covering both bases means spreading resources more thinly. Focusing on one could be dangerous if telcos swing en masse behind the other, or decide RAN virtualization is really not for them. Today, hardly any telcos sound as fanatical as the Nordic vendors about specific acceleration techniques. They may never do so, or may prove unwilling to switch vendors over something so apparently arcane. The opposite scenario is the more perilous.

Update: The table in this story has been changed since it was first published to include AMD, which had been mistakenly omitted, as a hardware supplier to Ericsson.

— Iain Morris, International Editor, Light Reading

About the Author(s)

Iain Morris

International Editor, Light Reading

Iain Morris joined Light Reading as News Editor at the start of 2015 -- and we mean, right at the start. His friends and family were still singing Auld Lang Syne as Iain started sourcing New Year's Eve UK mobile network congestion statistics. Prior to boosting Light Reading's UK-based editorial team numbers (he is based in London, south of the river), Iain was a successful freelance writer and editor who had been covering the telecoms sector for the past 15 years. His work has appeared in publications including The Economist (classy!) and The Observer, besides a variety of trade and business journals. He was previously the lead telecoms analyst for the Economist Intelligence Unit, and before that worked as a features editor at Telecommunications magazine. Iain started out in telecoms as an editor at consulting and market-research company Analysys (now Analysys Mason).
