11 insights from the Layer123 Reunion Congress: part 1
Layer123's note: this is part 1 of 2 – you can read the 'Automation' and 'Future' sections here.
Dr. Mark H Mortensen, Principal Analyst at ACG Research, spent three days at the Layer123 Reunion: Intelligent Network Automation Congress in April - and here is his summary of what the speakers and delegates thought about some key issues in our industry.
I attended the excellent Layer123 Reunion conference in Madrid, Spain, on 27-28 April. Here I discuss the major topics and the generally agreed points I saw in the sessions – along with my personal opinions, flagged as MYTAKE.
This is meant to document my understanding of what I heard – if you were there, please let me know if you disagree with my interpretations of the proceedings. And everyone is invited to comment on my opinions on LinkedIn, where I will be posting this document broken into the four topical areas, at www.linkedin.com/in/markhmortensen.
VIRTUALIZATION

Cloud native software technologies (from the hyperscalers) have won.
The tech wars are over on this. Cloud native software architectures (service-oriented open APIs, microservices-based software architectures, CI/CT/CD processes, and containerized deployment) have won over their former competitors.
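To make those pillars concrete, here is a minimal sketch of what "cloud native" means at the code level: a small, single-purpose microservice exposing an open HTTP API, with a health endpoint for an orchestrator's liveness probes. All names here (the service, paths, port) are invented for illustration, not drawn from any vendor's product.

```python
# Illustrative microservice sketch: one small service, one open API.
# Service name, endpoints, and data are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class InventoryHandler(BaseHTTPRequestHandler):
    # In-memory state for the sketch; a real CNF would externalize state
    # so the orchestrator can create and destroy instances freely.
    ports = {"ge-0/0/1": "up", "ge-0/0/2": "down"}

    def do_GET(self):
        if self.path == "/healthz":        # liveness-probe target
            self._reply(200, {"status": "ok"})
        elif self.path == "/v1/ports":     # the service's one job
            self._reply(200, self.ports)
        else:
            self._reply(404, {"error": "not found"})

    def log_message(self, *args):
        pass  # keep stdout quiet in this sketch

    def _reply(self, code, body):
        payload = json.dumps(body).encode()
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

def serve(port: int = 8080) -> None:
    """Container entrypoint: one process, one service, one open API."""
    HTTPServer(("", port), InventoryHandler).serve_forever()
```

In a containerized deployment, an image built around `serve()` would be one replaceable unit; the orchestrator polls `/healthz` and restarts or reschedules the container when it fails.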
Innovations in these areas have come, and will probably continue to come, from the hyperscalers. Telecoms will be more a consumer of them than an innovator in these areas.
It is not clear what the next round will look like.
MYTAKE: This is the most significant advance in software technology I have seen in my 44 years in the industry – bigger than the move to object-oriented programming. Sure, it uses prodigious amounts of hardware, storage, bandwidth, and electrical power. But it will be to the telecoms (and all industries) what structural steel and reinforced concrete were to architecture – releasing the designers from the constraints and high construction and maintenance costs of the past and unleashing new creative solutions. I believe I am already seeing some of these in some new vendors’ offerings and some new initiatives from the more progressive of the traditional vendors. And let’s not try to mess with the technology too much, creating our own “Telco-grade” versions of things that already work.
Network virtualization progress is slower than expected, but important.
Network virtualization, in its original form, is dead.
The original concept of a common x86-based compute under-layer is gone – we now have GPUs, other specialized processors, and other hardware platforms that run the difficult software, or provide a strong, secure distributed computing platform, much better.
MYTAKE: The “commonality” focus has shifted to the software that sits on all of these processors, normalizing the operations of implementing the software on the various hardware platforms while making use of the right hardware for its best operations. Although there are many options, the VMware proprietary solution (Tanzu, with its deep security features) and Red Hat OpenShift (riding on the open source innovation model) will dominate for the foreseeable future, with most vendors choosing one of them as their major platform.
We are all disappointed at the slow pace of NFV implementation, although all agree it has been an interesting 10 years.
In the data center market, about 40% of the routers are now virtualized. We are seeing nowhere near as deep a deployment in telco, even here. And in other places in the network, VNF/CNF deployments are minimal (with the exception of 4G components like the HSS, which were classic software modules that could easily be ported onto a virtualized or containerized infrastructure).
The 5G SA core (and parts of the rest of 5G) is the exception – it is going into production use now.
The original concept was borrowed from the enterprise application space, where virtualization saved about 30% of infrastructure costs. However, prices for virtualized versions of network elements are about the same as their physical counterparts. This may have more to do with commercial concerns of the equipment vendors, who still have significant market power (in the Porter 5-forces sense), than it does with the costs of creating the CNFs. There are some new, small software-based vendors trying to change this situation.
Disaggregation and the use of small, software-based vendors are two ways that the CSPs hope to gain more market power. But they are worried about the additional operational complexity of the first, and the scale of support and financial stability of the second.
MYTAKE: Telco NFV did not get anywhere near the 30% cost reduction the founders hoped for, and costs of virtualized networks may actually be higher by the time you do a TCO. But the agility and flexibility are so much greater that it will be worth it in most cases. Expect this to be a major trend, especially if hardware supply chain problems persist. 5G will be the test case.
We now understand how to implement CNFs; the focus now shifts to how to manage them.
There are many good ways of deploying CNFs now that operate at scale. It matters little what kind of software it is – software is software (except for the need for high-performance or tuned hardware, as discussed above).
Deployment is mostly in private clouds now, but hybrid and public cloud implementations are gaining traction in the marketplace (Rakuten, Dish, and others).
Deployments are mostly static – locking the CNFs to the underlying computing platforms.
We are seeing some automatic scaling in some cases.
The future will be dynamically distributed software.
MYTAKE: The future will not be “Where is the best place to run this software?” but “Where is the best place to run this software NOW and for the next 5 minutes,” slopping it around the distributed computing platform as needed.
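That "where is the best place to run this software NOW" idea can be sketched as a scoring function over candidate sites, re-evaluated on a short interval. Everything below is hypothetical – the site names, the two metrics, the threshold, and the weights are invented for illustration, not taken from any real scheduler.

```python
# Illustrative dynamic-placement sketch: score candidate sites on current
# load and latency, and re-pick the winner every evaluation interval.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    cpu_free: float    # fraction of CPU currently free, 0.0-1.0
    latency_ms: float  # current latency to the workload's users

def best_site(sites, min_cpu_free=0.2, latency_weight=1.0):
    """Return the lowest-cost site; a real loop would call this every few minutes."""
    candidates = [s for s in sites if s.cpu_free >= min_cpu_free]
    if not candidates:
        raise RuntimeError("no site has spare capacity")
    # Lower latency and more headroom are both good; fold into one cost.
    cost = lambda s: latency_weight * s.latency_ms - 10.0 * s.cpu_free
    return min(candidates, key=cost)

sites = [Site("core-dc", 0.6, 20.0),
         Site("edge-1", 0.3, 4.0),
         Site("edge-2", 0.1, 3.0)]   # edge-2 is closest but overloaded
print(best_site(sites).name)  # edge-1: close to users and has headroom
```

As the load numbers change between evaluations, the winner changes with them – which is exactly the "slopping it around the distributed computing platform" behavior described above, minus all the hard parts (state migration, traffic steering) that a real implementation must solve.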
5G will be the proof point for network CNFs in Telecoms.
NEW MODELS

Network element disaggregation comes in two flavors – horizontal and vertical.
There are two types of element disaggregation – horizontal and vertical:
Horizontal breaks up the box into several boxes (e.g. the TIP disaggregated optical network project). The main purpose of this is to reduce vendor lock-in and cost (by increasing competition).
Vertical takes some of the control software out of the box, defining a control API to the box (e.g. the RICs in the O-RAN architecture). The main purpose of this is to provide an open programming environment to allow the CSP or third parties to provide innovative control software.
By disaggregating, someone must take on the job of ensuring it all works together when deployed (the TIP is taking this on), and someone has to manage the piece parts in the field. It is less clear how the latter will happen – the default is the operators themselves.
O-RAN will be the proof point for vertical disaggregation, with its near-real-time and non-real-time flavors of RIC.
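The vertical-disaggregation pattern above can be sketched in a few lines: the box agrees to implement a narrow control API, and third parties write control apps against that API, loosely in the spirit of an O-RAN xApp running on a RIC. The class names, methods, cell identifiers, and the -3 dB bias value below are all invented for illustration; they do not come from the O-RAN specifications.

```python
# Hypothetical sketch of a vertical-disaggregation control API.
from abc import ABC, abstractmethod

class RadioControlAPI(ABC):
    """The open API the disaggregated box agrees to implement."""
    @abstractmethod
    def cell_load(self, cell_id: str) -> float: ...
    @abstractmethod
    def set_handover_bias(self, cell_id: str, bias_db: float) -> None: ...

class SimRadio(RadioControlAPI):
    """Stand-in implementation; a real one would talk to the hardware."""
    def __init__(self, loads):
        self.loads, self.bias = dict(loads), {}
    def cell_load(self, cell_id):
        return self.loads[cell_id]
    def set_handover_bias(self, cell_id, bias_db):
        self.bias[cell_id] = bias_db

def balance_load(radio: RadioControlAPI, cells, threshold=0.8):
    """A third-party control app: nudge traffic off overloaded cells."""
    for cell in cells:
        if radio.cell_load(cell) > threshold:
            radio.set_handover_bias(cell, -3.0)  # make the cell less attractive

radio = SimRadio({"cell-a": 0.9, "cell-b": 0.4})
balance_load(radio, ["cell-a", "cell-b"])
print(radio.bias)  # only the overloaded cell is biased: {'cell-a': -3.0}
```

The point of the pattern is that `balance_load` could be written by the CSP or a third party and swapped for a smarter policy without touching the box – that is the open programming environment vertical disaggregation is meant to create.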
MYTAKE: Horizontal disaggregation will be consequential to the market shares and pricing power of the vendors in those markets, but the major effect on the industry will be the vertical disaggregation. We’ve tried this before, with the Advanced Intelligent Network (AIN) architecture for circuit switches that outboarded the advanced call control features from the switching fabric into an auxiliary box. It was moderately successful but did not really advance the multivendor aspects much. Maybe this time we will be able to unlock the creativity of the larger software community – I am hopeful.
Creating the software-based future is as much a people problem as a technical problem.
Our current workforce in Telecom needs to upgrade, through programs of up-skilling (“It’s easier to teach software to a network person than teaching network to a software person.”) and replacement. Software skills will be key. However, CSPs face the challenge of not being known as leading-edge places for new hires.
Getting managers and workers to accept increased automation is proving difficult due to several major issues:
job retention concerns,
distrust of the recommendations of AIs,
the concern of personal and corporate liability if the automation causes major problems.
Embodying the need for security in everyone’s minds and everyday work practices will take time and effort.
MYTAKE: I’ve written extensively on the AI and management issues. It is hard, indeed, and will take time, good management, and technological advances. We must adopt the “if it happens twice, automate it” approach and involve the network technicians in the process of automating the tasks, then build out processes within domains, then do cross-domain work. The TM Forum’s Autonomous Networks architecture is a good guide here. Planning, implementation, management, and maintenance of the large number of these automated processes, AI training datasets, refresh and update procedures, etc. will be the next round of problems we will create.
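The "if it happens twice, automate it" approach can be sketched as a minimal closed loop: count recurring alarm signatures, and from the second occurrence on, run the remediation playbook a technician has captured for that signature. The alarm name and playbook below are invented; a real loop would live in an orchestration platform with the audit and rollback machinery this sketch omits.

```python
# Minimal "if it happens twice, automate it" closed-loop sketch.
from collections import Counter

class AutomationLoop:
    def __init__(self, playbooks):
        self.playbooks = playbooks   # alarm signature -> remediation function
        self.seen = Counter()
        self.log = []

    def on_alarm(self, signature):
        self.seen[signature] += 1
        if self.seen[signature] >= 2 and signature in self.playbooks:
            self.playbooks[signature]()            # automated from the 2nd time on
            self.log.append(("auto", signature))
        else:
            self.log.append(("manual", signature))  # page a human

def restart_card():
    pass  # placeholder for a remediation a technician wrote down

loop = AutomationLoop({"LINECARD_CRC": restart_card})
loop.on_alarm("LINECARD_CRC")  # first occurrence: handled manually
loop.on_alarm("LINECARD_CRC")  # second occurrence: automated
print(loop.log)  # [('manual', 'LINECARD_CRC'), ('auto', 'LINECARD_CRC')]
```

Involving the technicians means the `playbooks` table is populated by the people who handled the incident the first time – which is also where the maintenance burden noted above (keeping many such playbooks current) comes from.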
Too many competing standards still, but things are getting better, not worse.
It is all about APIs right now. Defining practical APIs, using a use-case methodology, has been shown to be very effective. And the CSPs are more willing to take part in setting these use-case requirements than they used to be.
There are still too many standards in play, which confuses vendors and CSPs. But we are seeing more effective cooperation happening between organizations. The TIP MUST (for SDN transport) project is a good example of cooperation, using CSP-created use cases to feed requirements into the existing SDOs, then validating the results with field work. They are also coordinating with the IETF TerraFlow OS project, which is creating an open source version of the cross-domain control software.
MYTAKE: We have learned a lot from the IETF way of creating “standards” – they need to be incremental and reduced to practice early in the process to ensure we are building something practical. Many of us remember the boondoggles of Q3 network management interfaces with ASN.1 notation (too complex, finally collapsed under its own weight) and Asynchronous Transfer Mode (ATM) data networking (killed by the simpler TCP/IP-based protocols). We did learn from those. We need to move more quickly to the various SDOs working together, giving up ground to create the “centers of gravity” for further standards work. The ITU, and later the IETF, were the centers of gravity for network management. The focus is now shifting to the TM Forum. I believe that the other SDOs should start to orbit more around them.