Intelligence and Learning in O-RAN for 5G and 6G Cellular Networks

Northeastern University

Background
Traditionally, cellular networks have involved a rigid separation among the entities that contribute to a network deployment, i.e., telecom vendors, operators, and chipset manufacturers. Performance optimization was mostly implemented by vendors during the equipment design process. As a result, the appliances sold to operators (e.g., core network elements or base stations) included only a finite set of possible configurations, leaving operators with little to no room for controlling the network and with limited decision-making capabilities, mostly restricted to deployment choices and network coverage. Such a closed and inflexible approach prevents agile control of the network, often results in sub-optimal performance and, most importantly, severely hinders the deployment of data-driven solutions.
The fifth (5G) and sixth (6G) generations of cellular networks will undoubtedly accelerate the transition from inflexible, monolithic network architectures to agile, disaggregated architectures based on softwarization and virtualization, as well as on the openness and re-programmability of network components.
Open, disaggregated, and flexible architectures are expected to enable new functionalities, including the ability to: (i) provide on-demand virtual network slices on the same physical infrastructure, isolating different mobile virtual network operators, diverse network services, and run-time traffic requirements, and split network functions across multiple software and hardware components, possibly supplied by different vendors; (ii) capture and expose network analytics that were not accessible in older, monolithic architectures; and (iii) control the entire physical network infrastructure in real time via third-party software applications and open interfaces. A minimal sketch of such a per-tenant slice description is given below.
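The following sketch is purely illustrative and not a standards-defined data model: the field names, tenant names, and resource-share check are assumptions used only to show how on-demand slices for different tenants could be described and validated before being pushed to shared infrastructure through open interfaces.

```python
# Illustrative (not standards-defined) sketch of per-tenant network slices
# instantiated on the same physical infrastructure via open interfaces.
from dataclasses import dataclass

@dataclass
class SliceConfig:
    tenant: str              # mobile virtual network operator or service owner
    prb_share: float         # fraction of physical resource blocks reserved
    scheduling_policy: str   # e.g., "round_robin", "proportional_fair"
    max_latency_ms: float    # run-time QoS target for this slice

slices = [
    SliceConfig("mvno_a", prb_share=0.5,
                scheduling_policy="proportional_fair", max_latency_ms=50.0),
    SliceConfig("iot_service", prb_share=0.2,
                scheduling_policy="round_robin", max_latency_ms=200.0),
]

# A controller would push these configurations to the shared base stations;
# here we only check that the slices do not over-commit the radio resources.
assert sum(s.prb_share for s in slices) <= 1.0
for s in slices:
    print(s)
```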
Technology Overview
Northeastern University researchers have developed a data-driven, closed-control-loop, large-scale experimental testbed based on the O-RAN architecture, built with open-source, programmable Radio Access Network (RAN) and RAN Intelligent Controller (RIC) components. A Deep Reinforcement Learning (DRL) agent runs as an xApp on the RIC and dynamically selects the optimal configuration of the network's base stations and of the network slices instantiated on them (e.g., the slice configuration and the scheduling policy to execute in each slice), based on performance metrics reported by the base stations at run time. This practical integration of closed control loops into cellular networks effectively implements the vision of self-optimizing, autonomous networks: the DRL xApp automates network control within the O-RAN framework, selecting the best configuration for each network slice instantiated on the base stations according to the performance metrics they report, as sketched below.
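The sketch below shows the general shape of such a closed control loop, under simplifying assumptions: a small tabular Q-learning agent stands in for the DRL agent, and placeholder functions (collect_kpis, apply_policy) stand in for the RIC/E2 exchanges with the base stations. The slice names, KPI fields, scheduling policies, and reward are illustrative assumptions, not the actual Northeastern xApp implementation or an O-RAN SDK API.

```python
# Minimal, self-contained sketch of a KPI-driven closed control loop for
# per-slice scheduling-policy selection. All RIC/base-station interactions
# are simulated placeholders; the agent is a toy stand-in for a DRL agent.
import random
from collections import defaultdict

SCHEDULING_POLICIES = ["round_robin", "waterfilling", "proportional_fair"]

def discretize(kpi):
    """Map raw per-slice KPIs to a coarse state (hypothetical bucketing)."""
    tput_bucket = min(int(kpi["throughput_mbps"] // 10), 5)
    buf_bucket = min(int(kpi["buffer_kb"] // 100), 5)
    return (tput_bucket, buf_bucket)

class TabularAgent:
    """Epsilon-greedy Q-learning over a small table (stand-in for DRL)."""
    def __init__(self, actions, eps=0.1, alpha=0.5, gamma=0.9):
        self.q = defaultdict(float)
        self.actions, self.eps, self.alpha, self.gamma = actions, eps, alpha, gamma

    def act(self, state):
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

def collect_kpis(slice_id):
    """Placeholder for KPIs the base station would report (simulated here)."""
    return {"throughput_mbps": random.uniform(0, 50),
            "buffer_kb": random.uniform(0, 500)}

def apply_policy(slice_id, policy):
    """Placeholder for pushing a new scheduling policy to the base station."""
    print(f"slice {slice_id}: applying {policy}")

def reward_from(kpi):
    """Toy reward: favor high throughput, penalize large buffer backlogs."""
    return kpi["throughput_mbps"] - 0.01 * kpi["buffer_kb"]

# Closed control loop: one agent per slice, a few iterations for illustration.
agents = {s: TabularAgent(SCHEDULING_POLICIES) for s in ("embb", "urllc", "mmtc")}
for step in range(5):
    for slice_id, agent in agents.items():
        kpi = collect_kpis(slice_id)
        state = discretize(kpi)
        policy = agent.act(state)
        apply_policy(slice_id, policy)
        next_kpi = collect_kpis(slice_id)   # KPIs observed after the control action
        agent.update(state, policy, reward_from(next_kpi), discretize(next_kpi))
```

In a real deployment the observation, action, and reward would come from live base-station metrics over standardized interfaces rather than the simulated stand-ins above, and the tabular agent would be replaced by the trained DRL policy.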
Benefits

Significantly simplifies network management and improves overall performance in next-generation cellular networks
Automatically optimizes the performance of the network's base stations, as a whole or per network slice, based on real-time traffic demands and Quality-of-Service (QoS) requirements

Applications

Cellular network managers and providers

Opportunity

Development partner
Commercial partner
Licensing
