Abstract

Predictions of future mobile data traffic expect a significant increase in user demand, requiring new forms of mobile network infrastructure. Fifth generation (5G) communication standards propose the densification of small cell access base stations (BSs) in order to provide multigigabit and low latency connectivity. This densification requires a high capacity backhaul network. Using optical links to connect all the small cells is not economically feasible for large scale radio access networks where multiple BSs are deployed. A wireless backhaul formed by a mesh of millimeter-wave (mmWave) links is an attractive mobile backhaul solution, as flexible wireless (multihop) paths can be formed to interconnect all the access BSs. Moreover, a wireless backhaul allows the dynamic reconfiguration of the backhaul topology to match varying traffic demands or to adaptively power small cells on/off for green backhaul operation. However, conducting and precisely controlling reconfiguration experiments over real mmWave multihop networks is a challenging task. In this paper, we develop a Software-Defined Networking (SDN) based approach to enable such dynamic backhaul reconfiguration and use real-world mmWave equipment to set up a SDN-enabled mmWave testbed for conducting various reconfiguration experiments. In our approach, the SDN control plane is responsible not only for configuring the forwarding plane but also for the link configuration, antenna alignment, and adaptive mesh node power on/off operations. We implement the SDN-based reconfiguration operations in a testbed with four nodes, each equipped with multiple mmWave interfaces that can be mechanically steered to connect to different neighbors. We evaluate the impact of various reconfiguration operations on existing user traffic using a set of extensive testbed measurements. Moreover, we measure the impact of the channel assignment on existing traffic, showing that an optimal channel assignment between the mesh links can result in a 44% throughput increase compared to a suboptimal configuration.

1. Introduction

By 2021, mobile data traffic is predicted to grow to 49 exabytes per month, a sevenfold increase over 2016 [1]. With the ongoing increase in mobile devices and their traffic demands, the current mobile communication infrastructures will soon become resource-saturated. Consequently, fifth generation (5G) mobile networks need to provide multigigabit capacity and low latency connectivity at the access level, especially with the emergence of extremely demanding applications such as augmented reality, ultra-high definition video streaming, and self-driving automobiles [2]. To that end, multiple enablers for these requirements are proposed, including increased spectrum efficiency, network densification, and spectrum extension. Spectrum efficiency aims to improve the wireless radio resource usage, e.g., by coordinating multiple base stations (BSs) using schemes such as coordinated multipoint processing (CoMP) [3], improving modulation and coding schemes [4], or using massive multiple-input multiple-output (MIMO) techniques [5]. Network densification aims to create ultra-dense networks (UDNs), while spectrum extension explores the usage of additional frequency bands for communication. More specifically, millimeter-wave (mmWave) band frequencies, located between 30 and 140 GHz, are a promising solution, due to the large amount of available spectrum, which can provide the necessary multigigabit capacity [6].

Once the 5G capacity requirements are fulfilled at the access level, the backhaul network needs to be designed so that it does not become the network bottleneck. The backhaul connects the BSs to the core network and is typically formed by fiber-cabled or point-to-point fixed microwave wireless links. However, interconnecting all the UDN small cells through fiber links is not economically feasible, which motivates the need for wireless backhaul solutions. mmWave-band links can be used to support the aggregated fronthaul traffic, as the small cells would often be located in close vicinity, forming multihop backhaul topologies [7]. However, the large number of small cells and backhaul links resulting from network densification brings new challenges to network management, owing to the complexity of resource allocation problems, forwarding decisions in the backhaul, and mmWave-related connectivity problems. 5G standardization motivates split-plane HetNet architectures, where the control and data planes are separated. For example, the macrocells can provide control plane connectivity while the data plane mostly forwards traffic through the small cells [8]. With a split-plane architecture, Software-Defined Networking (SDN) becomes an attractive solution for backhaul management. SDN is a networking paradigm where the control plane is decoupled from the data plane, logically centralizing the control plane at the SDN controller [9], which can be replicated and distributed for scalability and fault tolerance. With SDN, the controller has a global network view and can make decisions on the forwarding plane based on the overall network knowledge.

Due to the dynamic nature of mobile communication traffic caused by diverse traffic demands during different times of the day, user mobility, and/or temporary network failures (for example, caused by long lasting obstacle blockage in mmWave links), it is important to be able to dynamically reconfigure the backhaul, in order to maintain the connectivity and adapt backhaul capacity to traffic demands. Such reconfiguration typically involves rerouting existing traffic to match new forwarding decisions and turning on more small cells to provide additional localized capacity on-demand. By adaptively powering off nodes and links that are not needed, the backhaul can also be managed in an energy efficient way. Such adaptation leads to significant changes in the topology and link configuration and ought to be seamless in order to minimize the impact on existing traffic. Consequently, a proper orchestration of the reconfiguration steps is required. From the SDN architecture perspective, these reconfiguration operations can be centrally triggered by a controller, in order to achieve different target policies defined by network operators, such as energy efficiency or capacity maximization.

While the reconfiguration of the mmWave wireless backhaul remains a fundamental aspect to consider, it is also important to build testbed environments in order to study the impact of such reconfiguration operations on real traffic. Existing testbed work is mostly focused on exploring physical layer aspects, such as beamforming in IEEE 802.11ad [10] networks [11] or propagation properties of mmWave frequencies [12]. UE connectivity with access points (APs) using IEEE 802.11ad links has also been explored, focusing on AP deployment [13] and lower-layer protocol tuning and its relation to higher-layer (i.e., TCP) protocols [14]. Within the wireless backhaul, several architectures consider the use of mmWave-related technology for the backhaul links, with management performed through SDN [15–17]. However, research on mmWave testbeds is still limited, especially regarding dynamic network reconfiguration and the integration of multihop mmWave mesh topology management with SDN, which are needed to build future adaptive and reconfigurable SDN-based mmWave backhaul networks.

In this paper, we use the concept of SDN to develop dynamically reconfigurable mmWave mesh backhaul networks, where reconfiguration experiments can be orchestrated in a flexible way. In our approach, the SDN control plane exposes a high-level API that can be used by management applications to schedule and trigger various reconfiguration operations, which include the channel (re-)assignment of the links, the update of flow forwarding rules, the establishment of links with different neighbor nodes, the alignment of mmWave directional antennas, and the powering on/off of small cell backhaul nodes. We deploy an indoor testbed involving a SDN controller and four small cell multiradio mmWave mesh nodes, which is used to conduct controlled experiments to demonstrate the capabilities of our SDN-based reconfiguration orchestration platform. Based on various use cases, we orchestrate the alignment of mechanically steerable antenna elements with adaptive power reconfiguration and dynamic backhaul traffic rerouting, effectively coping with varying traffic demands. Our set of experiments is designed to evaluate the impact of various backhaul reconfiguration operations on existing user traffic. Moreover, we conduct experiments to show the impact of different channel assignment configurations in the wireless backhaul. Our results show a 44% throughput increase and 83 times lower latency for a scenario without cochannel interference, compared to when cochannel interference is present.

In summary, this paper provides the following main contributions:
(i) A comprehensive overview of wireless backhaul testbeds and related reconfiguration use cases
(ii) A detailed presentation of our SDN-based architecture for wireless backhaul management
(iii) A thorough description of our mesh network testbed with steerable mmWave interfaces and power control units, which are configurable through a SDN control plane
(iv) A performance evaluation of the SDN-based reconfiguration of our testbed, focused on the quality of existing user traffic over different topology changes

The remainder of this paper is structured as follows. Section 2 explores the usage of SDN for managing a wireless backhaul, which is followed by our corresponding architecture, in Section 3. Section 4 describes the testbed used to conduct our evaluation, presented in Section 5. Lastly, we present our conclusions and future work directions in Section 6.

2. Software-Defined Networking for Wireless Network Management

The centralized principles of SDN have motivated the design of new split-plane architectures, which can be beneficial for the network operation. With the centralization of the control plane, network programmability and the enforcement of global network policies become easier. Consequently, the network configuration tuning can be done by the SDN control plane, rather than individually at each network device. Considering the management of the forwarding plane, SDN allows the configuration of the managed devices with forwarding rules that match different packet header fields. The operator can then choose to install generic forwarding rules (e.g., matching traffic from a specific ingress port) or apply fine-tuned rules to distinguish specific flows (by matching their protocol source and destination port numbers, for example).
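
To make the distinction concrete, the following sketch contrasts the two rule granularities; the dictionary format is purely illustrative and does not correspond to the exact rule encoding of any particular controller or switch.

# Generic rule: match all traffic arriving on a given ingress port.
coarse_rule = {
    "match": {"in_port": 3},
    "actions": [{"output": 1}],
    "priority": 100,
}

# Fine-tuned rule: match a single UDP flow by protocol and port numbers.
fine_rule = {
    "match": {
        "eth_type": 0x0800,   # IPv4
        "ip_proto": 17,       # UDP
        "udp_src": 5001,
        "udp_dst": 5201,
    },
    "actions": [{"output": 2}],
    "priority": 200,          # evaluated before the generic rule
}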

The SDN concept has been applied to different mobile network segments, mostly combined with network functions virtualization (NFV) of current mobile packet core infrastructures, e.g., long-term evolution evolved packet core (LTE EPC) [18]. Yet, the adoption of this type of architecture in the wireless backhaul has not been vastly explored. Therefore, hereby we discuss the placement of SDN functionalities for the wireless backhaul management, focusing on the reconfiguration aspects.

Wireless networks can be more complex to manage, due to additional configuration primitives that are not present in wired networks (e.g., neighbor selection, channel assignment, or transmission power configuration). Consider the transition between two configuration states (each being the set of active topology nodes and links, together with their configuration and traffic forwarding states), C1 and C2. While a basic forwarding rule update in wired networks only requires the specification of the new forwarding rules to install, a reconfiguration in a wireless network involves more steps. For example, the new links from C2 must first be established, and only when each link is ready can the new forwarding rules be installed and the unused links from C1 removed. Other configuration primitives entail the assignment of channels to links, the power on/off operation of small cell mesh nodes, and the alignment of directional antennas to form links with new neighbors. In particular, when such link establishment takes a significant amount of time and no backup paths are available, the service quality for existing users can be significantly penalized. Without the coordination and proper ordering of these commands, the network can experience significant disruption of the existing traffic.
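
This ordering constraint amounts to a make-before-break procedure. The following minimal sketch, using hypothetical helper names on the controller side, illustrates the sequence discussed above:

def reconfigure(c1, c2, controller):
    # 1) Bring up the new links from C2 first and wait until each is ready.
    for link in c2.links - c1.links:
        controller.establish_link(link)
        controller.wait_until_up(link)
    # 2) Only then reroute traffic onto the new topology.
    controller.install_forwarding_rules(c2.rules)
    # 3) Finally, tear down the links from C1 that are no longer used.
    for link in c1.links - c2.links:
        controller.remove_link(link)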

There are multiple options for how an enhanced backhaul reconfiguration could be achieved:
(i) Using a distributed approach, the backhaul network would autonomously reconfigure itself based on, e.g., distributed protocols. The SDN control plane would only be responsible for the forwarding rule installation in the data plane. An example of such an approach is used in [19], which uses a distributed channel assignment approach. Its drawback, however, is that the routing typically impacts the traffic demands for a given channel, which in turn impacts the channel assignment. In order to achieve optimized joint channel assignment and routing, additional coordination between the distributed management protocols and the SDN control plane would be required.
(ii) Using a centralized but legacy approach, the backhaul network would be managed by a separate network management and control plane, while the SDN control plane would be responsible for the establishment of forwarding rules. For example, in optical transport networks, a separate optical transport network control plane is connected to a centralized network management system (NMS). By using legacy protocols, the NMS is then responsible for wavelength assignment and optical path management [20], which impacts the available capacity at the routing layer. Again, suboptimal performance would be achieved if the legacy network management performed its own decision logic regarding configuration and adaptation, independent of the SDN control plane responsible for the traffic steering. For optimal operation, coordination between those two management systems would be required.
(iii) Using an integrated SDN-based approach, the SDN control plane would be responsible for the management of the wireless network configuration aspects, as well as the establishment of forwarding rules. Such an integrated approach would enable SDN controlled traffic engineering and wireless link management. This significantly simplifies the joint optimization of traffic routing and wireless resource management (e.g., channel assignment). When using an integrated SDN-based approach, the configuration of wireless resources should be carried out by a set of distributed SDN controllers [21], for scalability and resilience reasons.

In this paper, we leverage the integrated SDN-based approach and we identify a set of important backhaul reconfiguration use cases in the next section.

2.1. Wireless Backhaul Reconfiguration Use Cases

Even with a centralized global network view, the reconfiguration decision-making processes for a dense wireless backhaul are difficult to compute. The high number of managed nodes and possible parameterization options increases the complexity of calculating new configuration states. This motivates the need for new algorithms and frameworks that optimize the backhaul operation for energy efficiency goals or other objectives. These frameworks can receive topology data as inputs from the SDN controller and apply optimization algorithms to calculate new network configuration states. As such optimization algorithms are difficult to implement, they can be outsourced to dedicated frameworks and implemented in domain specific modeling languages (e.g., MiniZinc [22] or OPL [23]), and solvers such as Gurobi or CPLEX can be used to apply, e.g., branch-and-cut or heuristic algorithms to solve the optimization problems.

Within a dense wireless backhaul, a UE can often be within coverage of multiple small cells, alongside one or more macrocells. Typically, the UE connects to the cell with the highest received signal strength. However, solely using this metric often leads to situations where some BSs are highly loaded while others are under-utilized, especially in hot-spot areas, where a high number of users can be located simultaneously. To avoid the multihop backhaul becoming the bottleneck, user association schemes that jointly optimize resources in fronthaul and backhaul (e.g., optimal routing and power allocation) are required [24]. Similarly, but considering the dual connectivity of the UE to both macro- and small cells, the overall backhaul throughput can be maximized by calculating an optimal traffic routing between the macro- and small cells [25].

Due to the densification of future 5G mmWave wireless backhaul networks, the high number of small cells contributes to a significant increase of the overall power consumption. Therefore, it becomes important to adaptively turn small cells on/off, in order to optimize the network for power efficient operation while still maintaining network connectivity. Deciding which cells should be turned on or off, combined with an efficient routing of existing traffic, is a difficult optimization problem to solve [26]. In addition, to minimize the backhaul energy consumption while maximizing the available capacity, the adaptive small cell powering on/off can be combined with an optimal user association [27]. Moreover, the backhaul small cells can rely on additional renewable energy sources, which can be used to optimize the overall energy consumption alongside the adaptive powering of the backhaul small cells [28].

While it is known that mmWave links have a dynamic link margin due to random high path loss, shadowing, and blockages [29], cochannel interference makes forming stable mmWave links even more challenging [30]. To solve such interference problems, interference coordination between the base stations is required. However, interference coordination on a per-packet basis is difficult to achieve due to strict timing requirements and is typically not available in off-the-shelf mmWave hardware (e.g., IEEE 802.11ad). Alternatively, interference coordination can be achieved through channel assignment schemes, which assign different frequencies to different links for a longer duration. As the optimal channel assignment depends on the traffic matrix, changes in traffic typically require a channel reassignment.
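
As a simple illustration of such a scheme, the sketch below greedily assigns each link the channel least used among its interfering neighbors. The interferes predicate is an assumption of the sketch; in practice it would be derived from measurements or a propagation model.

def assign_channels(links, interferes, channels):
    """Greedy channel assignment sketch: pick, for each link, the channel
    with the fewest already-assigned interfering neighbors on it."""
    assignment = {}
    for link in links:
        neighbor_channels = [assignment[other] for other in assignment
                             if interferes(link, other)]
        assignment[link] = min(channels, key=neighbor_channels.count)
    return assignment

# Example with two channels, mirroring hardware that supports channels 2 and 3:
# assignment = assign_channels(backhaul_links, interferes, channels=[2, 3])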

2.2. Software-Defined Wireless Architectures and Testbeds

The usage of SDN-based architectures has been widely proposed for the management of wireless backhaul networks. Specific to SDN-based small cell backhaul networks, aspects such as an out-of-band control channel, energy efficiency, dynamic optimization policies, resilient forwarding, and flexible path computation strategies should be taken into consideration [17]. The 5G-Crosshaul architecture [31] introduces a control infrastructure that is responsible for the management of heterogeneous fronthaul and backhaul networks, based on SDN/NFV principles, where different virtual network functions are managed according to an active orchestration of network resources. As mmWave mesh networks can also be formed in different environments, the authors of [32] propose to use SDN to manage a network formed by unmanned aerial vehicles that are interconnected through IEEE 802.11ad/802.11ac links.

Recently, the application of SDN-based concepts has been studied in wireless testbeds. For example, in [33] it is shown that centralized SDN-based forwarding reconfiguration can reduce network disruption, compared to distributed routing protocols. The centralized architecture allows the SDN controller to quickly detect failures and react to them, while distributed routing protocols have higher response times due to their inherent convergence times. Using a SDN-based mesh testbed, the authors of [34] show the benefits of controller-triggered network reconfiguration, addressing interference-aware routing and flow load balancing scenarios. A SDN-empowered wireless small cell backhaul testbed is presented with WiseHAUL [35], featuring nine nodes that can have their forwarding tables configured by a controller. Similarly, the impact of OpenFlow-based resilience in mmWave mesh backhaul networks has been studied, showing how the backhaul can benefit from fast-failover resiliency and SDN-based reconfiguration to avoid network disruption caused by temporary failures [36]. Using the NITOS testbed, the authors of [37] deployed a hybrid backhaul composed of mmWave and sub-6 GHz network interfaces, where the SDN controller can obtain link quality metrics related to the mmWave interfaces through OpenFlow protocol extensions. The framework of [38] uses an outdoor deployment of a mmWave mesh network, with mmWave links spanning up to 185 meters, to conduct network performance measurements in Berlin. Within the 5G-XHaul project, a city-wide SDN testbed with mmWave wireless backhaul links was deployed in Bristol [39], featuring a demonstration where different network slices are routed through paths formed by heterogeneous links. Additionally, Facebook’s Terragraph [40] proposes a commercial solution for a mmWave-based backhaul, as an alternative to legacy copper and fiber networks.

Although different architectural solutions have been proposed to use SDN for the management of wireless networks, dynamic and complex reconfiguration aspects of such networks have not been thoroughly considered. Therefore, additional experimental work using real-world SDN-based wireless testbeds, together with a proper performance evaluation of key reconfiguration aspects, is necessary in order to understand and quantify the benefits of SDN for dynamic backhaul reconfiguration.

3. Architectural Considerations

The centralized aspects of software-defined wireless networks make this paradigm attractive for managing a wireless backhaul. However, it is important to address configurability aspects if the backhaul configuration needs to be changed over time. Our goal is therefore to provide an architecture that can orchestrate the overall wireless backhaul reconfiguration, taking advantage of the dynamic configuration aspects of wireless networks. We present the SOCRA (Software-Defined Small Cell Radio Access Network) architecture, whose main functionalities are the following:
(i) Provide a split-plane network design, using an out-of-band control channel
(ii) Enable the change of backhaul forwarding configurations, by specifying different routes, which can be individually tuned to match single flows
(iii) Adaptively power on/off the mesh nodes, in order to reduce the overall backhaul power consumption
(iv) Dynamically configure the wireless backhaul links and related configuration parameters, e.g., channel assignment and beam alignment
(v) Provide a high-level orchestration API to network operators, allowing them to modify the network configuration (e.g., align two mmWave interfaces and establish a link between them)

We consider an architecture where a SDN control plane is responsible for the management of a wireless mesh backhaul. The backhaul is a HetNet formed by multiple small cell nodes that are interconnected by mmWave wireless links, as depicted in Figure 1. The small cells can provide localized high capacity coverage to existing UEs and forward the access-level traffic through the multihop mmWave links towards the core network. In addition, the small cells are located under the umbrella of macrocells (e.g., LTE eNodeBs), which can provide an out-of-band control channel, as well as a backup data plane for the backhaul. The SDN control plane manages the network by dynamically configuring the mesh nodes’ forwarding rules, the wireless links and topology formation, and the power configurations (powering nodes and interfaces on/off).

To enable outdoor links that can span large distances (e.g., up to 200 m), high antenna gain is required, especially for mmWave links. This is due to the high path loss at mmWave frequency bands and the additional oxygen attenuation [41]. Furthermore, for backhaul networks, a coverage of 360° is necessary. The high gain can be achieved either using large antenna arrays with at least 8×8 antenna elements, or using regular arrays and lenses [42]. To achieve full 360° coverage with digital beamforming, multiple radio units can be combined (e.g., four radio units, each covering 90°). In order to enable connections with multiple neighbors within the same sector, a radio unit can use point-to-multipoint (P2MP) connectivity [39]. While this increases the overall system flexibility, it also increases its complexity, as those nodes have to share the available bandwidth of a single mmWave interface through beam switching techniques, which has an impact on the total achievable throughput per node. For a mesh topology where small cell nodes need to receive packets and forward them to another neighbor towards the destination, this reduces the capacity on all interconnected links.

When using reflect arrays or lenses to achieve high link gains, high gain beams are used on the transceiver and are focused at a single focal point, creating narrow “pencil” beams [43]. However, this requires a mechanical alignment of the antennas and reflect arrays to form point-to-point (P2P) links, at the cost of eliminating P2MP capabilities, as these would require a constant beam realignment between the connected nodes. On the other hand, each radio unit can provide full 360° coverage by rotating the antennas and reflect arrays. This allows all established links to be dedicated between two nodes, offering the full link capacity, independent of neighboring nodes. The available link capacities can then be used as input parameters for optimization frameworks to calculate optimal solutions for traffic aware mesh backhaul reconfiguration. Additionally, resilience is provided, as multiple mmWave interfaces can be positioned in the same sector. Nonetheless, mechanical alignment requires the SDN control plane to calculate the necessary angles between the backhaul nodes and align the radio units before establishing the links. The alignment is time costly (i.e., on the order of seconds) and depends on the angle and rotational speed of the mechanical elements. Yet, once all links are established, the backhaul topology is rather static and changes are only necessary due to, e.g., permanent link blocking, hardware failures, or significant changes in traffic demands, which typically happen on the order of minutes or hours. As targeted in our design, load balancing and recovery from link failures can be handled on higher layers, using fast-failover among different neighbor nodes [36], leading to fast rerouting within the mesh topology.

3.1. SOCRA SDN Controller

The SOCRA SDN controller architecture is depicted in the top part of Figure 1 and is designed to enable the configuration of the wireless backhaul. Internally, it contains core SDN controller modules, which include the Packet Handler, Address Tracker, Flow Writer, and Network Graph. For scalability and resiliency, the controller ideally needs to be distributed using, e.g., a microservices architecture or other scalable multicontroller architectural principles. However, in this paper, for simplicity and testbed purposes, we limit the discussion to a single controller.

The Address Tracker keeps track of the locations of network hosts (e.g., UEs), and when they were lastly observed. The Network Graph maintains the backhaul topology, providing an internal interface to compute paths between backhaul nodes. The Packet Handler receives incoming packets sent to the controller by managed devices. When it receives a packet, if the destination can be resolved by the Address Tracker, it calculates a path between the source and destination nodes through the Network Graph and installs the corresponding forwarding rules using the Flow Writer.
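
The following simplified sketch illustrates this reactive forwarding logic; the component interfaces are hypothetical simplifications of the actual controller modules.

def handle_packet_in(pkt, address_tracker, network_graph, flow_writer):
    # Resolve where the destination host was last observed.
    location = address_tracker.resolve(pkt.eth_dst)
    if location is None:
        flow_writer.flood(pkt)  # destination unknown: flood and keep learning
        return
    # Compute a path through the backhaul graph and push per-hop rules.
    path = network_graph.compute_path(pkt.ingress_node, location.node)
    for node, out_port in path.hops():
        flow_writer.install(node,
                            match={"eth_dst": pkt.eth_dst},
                            actions=[{"output": out_port}])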

In addition, the controller features modules specific to the wireless backhaul configuration. More specifically, it includes an mmWave Configuration Service (MCS) and a Power Manager module. The MCS handles the configuration of backhaul mmWave links, managing the creation/modification of backhaul links and respective beam/antenna alignment configuration. The Power Manager is responsible for managing and monitoring the backhaul nodes’ power states, powering on/off backhaul nodes and accessing their respective power consumption.

To allow communication with network management and orchestration applications, the controller exposes a set of REST Northbound APIs through its Orchestrator Interface. The provided APIs feature high-level configuration commands that can be used to configure the managed backhaul network, which are detailed in Section 3.3. The communication between the controller and the mesh nodes is performed through Southbound APIs, allowing the management of forwarding tables, mmWave link configuration, and power states.

3.2. Small Cell Mesh Node

As part of the proposed architecture, the backhaul network is formed by multiple small cell mesh nodes. We present an overview of a small cell mesh node in Figure 2. Each node is composed of two main components: the power control unit (PCU) and the computation device.

The PCU is used to manage the power supply of the computation device, providing power consumption information and adaptively powering the mesh node’s computation device on/off. The computation device is connected to a flexible number of mmWave radio interfaces and their respective movement controllers. When using mechanically rotating antennas, the movement controllers are responsible for rotating and aligning the mmWave interfaces according to the requested positioning. This positioning information contains the azimuth and elevation angles that the mmWave interface should transition to, within the supported azimuth and elevation ranges, assuming that all the interfaces were initially calibrated to the same reference alignment in both coordinates.

Internally, the mesh nodes contain different modules that handle the multiple types of configuration requests. The forwarding plane maintains the mesh node’s forwarding tables and is responsible for processing incoming and outgoing packets from the mmWave interfaces, interacting with the mmWave driver and the operating system’s network stack. The link configuration module is responsible for the configuration of the mmWave interfaces, setting up new links and handling the parameterization of existing links, e.g., channel assignment or transmission power, by interacting with the mmWave driver. The interface movement module communicates with the movement controller, configuring the mesh node antennas’ positioning and alignment. Finally, the Power Management module is responsible for gracefully shutting down the computation device.

3.3. Management and Orchestration

As previously mentioned in Section 3.1, the Orchestrator Interface of the SOCRA SDN controller can receive configuration requests from management applications. These requests allow the managed backhaul network to be configured from a high-level perspective. Internally, the controller then translates the received messages into the respective low-level configuration commands, which directly interact with the managed network devices. An overview of the provided RESTful API is presented in Table 1. The API can be divided into mmWave link, power, and forwarding rule management commands. The mmWave link commands allow link configurations to be requested by specifying the related interfaces, the link parameterization (e.g., the used channel) and, optionally, the alignment values. The power configuration request messages abstract the power management operations, while the forwarding rule management messages provide an interface for specifying the desired backhaul traffic forwarding rules.
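
As an illustration, a management application could issue requests such as the following. The endpoint paths, field names, and values are hypothetical; the concrete commands are those summarized in Table 1.

import requests

BASE = "http://controller:8181/socra/v1"  # hypothetical endpoint prefix
AUTH = ("admin", "admin")

# Request a link between interface 1 of node 4 (as AP) and interface 2 of
# node 3 (as STA) on channel 2, including the target antenna alignments.
requests.post(f"{BASE}/links", auth=AUTH, json={
    "ap":  {"node": 4, "iface": 1, "azimuth": 210.0, "elevation": -2.0},
    "sta": {"node": 3, "iface": 2, "azimuth": 30.0, "elevation": 2.0},
    "channel": 2,
})

# Power off a mesh node that currently serves no traffic.
requests.post(f"{BASE}/nodes/2/power", auth=AUTH, json={"state": "off"})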

Using these APIs, the SDN control plane can be coordinated by management applications, which can then configure the backhaul based on different use cases. We now present two scenarios where the proposed architecture can be used to reconfigure the mmWave mesh backhaul.

3.3.1. Use Case I: Optimal Steerable mmWave Mesh Backhaul Reconfiguration

In this use case, the orchestrator API of the SDN controller is used to adjust the backhaul topology, in order to cope with, e.g., changes in traffic demands or permanent node or link failures. Given a currently deployed topology state C1 (Figure 3(a)), the API can be used to orchestrate the necessary steps in order to implement a given new topology and link configuration state C2, where additional backhaul nodes are powered on, the topology is changed to form new links and forwarding rules are adjusted accordingly (Figure 3(d)).

Consequently, when a new link with a different neighbor is to be established, the mmWave beam alignment must change. If the antennas need to be mechanically aligned, this operation is not immediate and, due to the directionality of the used transceivers, it is not possible to establish a new link until the interfaces are nearly aligned. Therefore, when a network interface that serves traffic needs to be realigned, the existing traffic needs to be rerouted via a different set of links/nodes before the configuration operation starts, in order to avoid the disruption of the ongoing network service. The ordering of this reconfiguration plays an important role, as it is desired to avoid the disruption of the backhaul operation (Figure 3(b)), while creating alternative backup paths, which allows existing traffic not to be affected by ongoing topology changes (Figure 3(c)).

To solve this wireless mesh backhaul reconfiguration problem, we previously developed a Mixed Integer Linear Program (MILP) based framework that computes the optimal sequence of antenna realignment, link establishment, and flow routing operations that need to be performed when transitioning between two topology configurations C1 and C2 [44]. The system model builds a wireless backhaul topology with directional network interfaces that can be realigned over time. The constraints include the maximum link capacity, the possible links that could be formed, flow conservation, and the interface movement speed and alignment. The decision variables define the links, interface position and movement status (moving clockwise or counterclockwise), flow rates, and packet loss. The inputs contain the backhaul nodes, their respective positions, the possible links and maximum capacity, the necessary interface alignment angles, the Internet-connected gateway nodes, and the initial and final traffic demand matrices. The optimization goal is to minimize the total packet loss during the transition between the C1 and C2 topology configuration states. The total packet loss is quantified by the sum of all the backhaul nodes’ packet loss over the total reconfiguration time.
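
Using notation introduced here for illustration, with $\mathcal{N}$ denoting the set of backhaul nodes, $T$ the number of reconfiguration time slots, and $\ell_{n,t}$ the packet loss of node $n$ in slot $t$, the objective described above can be sketched as

\min \sum_{t=1}^{T} \sum_{n \in \mathcal{N}} \ell_{n,t}

subject to the capacity, flow conservation, and interface movement constraints listed above; the full formulation is given in [44].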

The framework aims to reduce the packet loss by establishing backup links between unused network interfaces, before the ones used in the final topology configuration begin their alignment. Therefore, by providing a higher number of network interfaces per node, the total packet loss reduction can be improved. In addition, by providing additional reconfiguration time slots, it is possible to establish a higher number of backup links, which also contribute to the overall packet loss reduction.

In the evaluation section, we validate the main configuration primitives that our framework offers in order to implement the given use case. These primitives include (a) rotating the small cell network interfaces to adaptively change the backhaul topology, (b) dynamic backhaul mmWave link establishment, and (c) backhaul traffic forwarding. Configuration commands with these primitives can be exchanged between the framework and the SDN controller’s Northbound REST API as JSON formatted messages. The topology data (e.g., node positioning and the set of possible links between the mesh nodes) can be provided as input by the SDN controller, obtained from the controller’s network graph. The computed framework solutions can then be translated into specific configuration instructions that are sent to the SDN controller. For example, the interface movement variable values can be translated into single instructions containing the involved interface and the destination alignment coordinates. Additionally, the link status variable can be translated into link configuration and forwarding rule installation messages, whenever there is a change in the respective variable values.
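
A possible interpretation step is sketched below; the solution and controller helpers are hypothetical and simply mirror the decision variables described above.

def emit_instructions(solution, controller):
    """Walk consecutive solver time slots and issue only the deltas."""
    previous = solution.slot(0)
    for t in range(1, solution.num_slots()):
        current = solution.slot(t)
        # Interface position variables that changed become alignment commands.
        for iface in current.interfaces():
            if current.position(iface) != previous.position(iface):
                azimuth, elevation = current.position(iface)
                controller.align(iface, azimuth, elevation)
        # Link status variables that flipped become link and flow commands.
        for link in current.links() - previous.links():
            controller.configure_link(link)
            controller.install_rules(current.flows(link))
        for link in previous.links() - current.links():
            controller.remove_link(link)
        previous = current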

3.3.2. Use Case II: On-Demand Powered Mesh Backhaul

Future small cell backhaul networks will often be formed by a dense mesh deployment of small cell nodes, generally in close vicinity and interconnected by mmWave wireless links. Having all small cell nodes always powered on would lead to high operational costs due to their power consumption. For green backhaul operation, the backhaul nodes which do not serve any user traffic should be adaptively powered off, which changes the backhaul topology. For example, the authors of [24] jointly solve the optimal UE association, traffic flow routing, power allocation, and switch on/off operation in order to minimize both the access and backhaul power consumption.

In this work, we consider [26], which computes the overall topology on/off state, minimizing the total power consumption while serving the required user traffic demands. The LTE macrocell provides coverage to the UEs and serves as an mmWave gateway that connects the surrounding small cells to the core network. The small cells can provide multigigabit capacity, as long as they can be connected to the macrocell through multihop paths. Therefore, the algorithm can activate small cells for relaying, even if they do not serve any user traffic. Additionally, the UE traffic can be multiplexed among different paths, splitting the required demand.

The constraints ensure that the traffic demand can be served, the maximum link capacity is not violated, and the traffic forwarding is only done on active APs. The decision variables define the traffic demand of each AP, the associated users with each AP, the amount of traffic sent on each link, for each flow, and the on/off state of the APs. The algorithm execution has three steps that (1) perform an initial on/off selection of the backhaul nodes, (2) create the necessary mmWave links, and (3) activate relay nodes. The algorithm computes new backhaul topologies and routing paths when UE traffic demands change. In addition, it reduces the backhaul power consumption by offloading the UE traffic through the LTE macrocell, therefore taking advantage of such HetNet architecture.

Overall, the authors of [26] focus on the calculation of a new backhaul configuration state C2, given C1 and assuming a change of traffic demands. On the other hand, the use case presented in Section 3.3.1 is based on the reconfiguration between C1 and C2. Yet, both use cases can be combined in order to (a) calculate a new backhaul topology and forwarding configuration C2, based on energy efficiency requirements, and (b) compute the necessary backhaul reconfiguration operations to transition between the initial C1 and final C2 configuration states.

In the evaluation section, we therefore focus on seamless reconfiguration operations that involve multiple on/off operations on the backhaul nodes, the establishment of new links, and the update of the forwarding rules. As previously described, the SDN controller can request a new backhaul configuration state by providing the topology information and existing traffic demands as input to the algorithm.

4. mmWave Mesh Backhaul Testbed

In order to evaluate our architecture and reconfiguration actions, we deploy a small cell wireless mesh backhaul testbed using mmWave links, which is integrated with our SDN controller and the small cell mesh node extensions (see Figure 4). The testbed is composed of a SDN controller, four small cell nodes (N1 to N4), and three nodes responsible for the traffic generation, i.e., Sender (S), Receiver 1 (R1), and Receiver 2 (R2). The backhaul mesh nodes are connected to an internal control network through a WiFi link, and the receiver and sender nodes are connected to the same control network through an Ethernet link. The mesh nodes use the internal WiFi network for the SDN control channel; however, our architecture supports the usage of other link types (e.g., LTE or WiMAX) as the control channel [45]. The small cell nodes are interconnected through four mmWave links (N1-N2, N1-N3, N2-N4, and N3-N4).

The testbed mesh nodes are positioned indoors, and the respective positioning layout is illustrated in Figure 5. The mmWave interfaces of each node are stacked on top of each other and mounted on tripods, with the exception of N4, which has its interfaces also stacked but attached to an existing platform in the lab room. As seen in Figure 5, the link distances between the mmWave interface pairs vary between 2.4 m and 3.8 m.

We use three separate machines to generate traffic (R1, R2, and S), which are connected to N4, N2, and N1, respectively, via a 10 gigabit Ethernet link.

We use the following three classes of commercial off-the-shelf minicomputers as computation devices, which provide the necessary computation power to handle the high-throughput forwarding rates:
(i) Class 1: Intel NUC Kit D54250WYK (Intel Core i5-4250U, 16 GB RAM, SSD)
(ii) Class 2: Intel NUC Kit NUC7i7BNH (Intel Core i7-7567U, 16 GB RAM, SSD)
(iii) Class 3: Gigabyte BRIX GB-BKi7HA-7500 (Intel Core i7-7500U, 16 GB RAM, SSD)

In our experimental setup, the nodes N1, N3, and N4 are class 2 devices, and node N2 is a class 1 device. The SDN controller operates on a dedicated class 1 machine and is connected to the internal wired and wireless control networks. Both R1 and R2 are class 3 devices and S is a class 1 device. All testbed nodes use Ubuntu 16.04 with kernel 4.15.0-34. To orchestrate experiments and test the SDN-based reconfiguration capabilities of our testbed, we use an additional laptop connected to the control network. This laptop uses a REST client application to communicate with the SDN controller, issuing the commands presented in Section 3.3. Additionally, N2 and N3 are equipped with TP-Link HS110 Smart Plug PCUs, used to dynamically switch them on or off, which are also connected to the internal WiFi control network.

The power consumption of the used mesh nodes is listed in Table 2. The measurements were conducted under different computing requirements (idle and under high CPU load, with and without the mmWave interfaces connected to the node’s USB ports), collecting the reported power consumption every second, during one hour, for every measured state. We measure the power consumption using a GUDE 8001 Power Distribution Unit, which has built-in power monitoring functionalities. In the idle power state, the computation device has the used software running, but without any additional computing processes. To perform the CPU-loaded measurements, we use the stress-ng tool (http://kernel.ubuntu.com/~cking/stress-ng/) to maximize the device’s CPU utilization. While the power consumption is significantly higher under high CPU load, the idle consumption is far from negligible: if the backhaul topology is formed by thousands of small cell nodes, the aggregated power consumption of the idle nodes could reach substantial values, motivating the need for energy efficient topology management policies.

4.1. SDN Controller

We implement the SDN control plane (see Section 3.1) using a single SDN controller that extends the OpenDaylight (ODL) L2Switch project (https://github.com/opendaylight/l2switch), which provides implementations of the Packet Handler, Address Tracker, Flow Writer, and Network Graph components.

The MCS translates high-level mmWave link configuration commands into individual device configuration messages, implemented through OpenFlow extensions (see Section 4.1.1). The configuration of a link between two nodes is done by setting the first node as an access point (AP) and the second as a station (STA). Based on the nodes’ internal interface identifiers, a unique SSID for that link is created (e.g., "of:4:1-of:3:2", if the new link is between interface 1 of node 4 and interface 2 of node 3). In addition, the involved interfaces’ positioning parameters can also be provided as inputs.

The Power Manager interacts with the mesh node’s PCU using an implementation of the TP-Link Smart Home Protocol as a southbound API (https://github.com/intrbiz/hs110). Additionally, the Power Manager sends shutdown requests to the computation device via the extended OpenFlow API. A high-level node power off command is internally orchestrated by (1) sending a graceful shutdown request to the computation device and (2) sending a PCU power off request 5 seconds later (so the power supply is cut after the computation device’s operating system is no longer running). Similarly, the Power Manager boots up the computation devices by turning the respective PCU’s power on, as they are configured to wake on power.
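
The orchestration of these power commands can be sketched as follows, with illustrative API names standing in for the OpenFlow extension and the Smart Home Protocol implementation:

import time

def power_off_node(node, openflow_api, pcu_api):
    # 1) Graceful OS shutdown via the extended OpenFlow API.
    openflow_api.send_shutdown(node.datapath_id)
    # 2) Cut the supply at the PCU 5 s later, once the OS is down.
    time.sleep(5)
    pcu_api.set_relay(node.pcu_address, on=False)

def power_on_node(node, pcu_api):
    # The computation devices wake on power, so restoring the PCU relay
    # is sufficient to boot the node.
    pcu_api.set_relay(node.pcu_address, on=True)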

4.1.1. OpenFlow Extensions

To increase the wireless backhaul configurability, we introduce new message types to the OpenFlow protocol. This approach has also been considered previously in, e.g., [37, 45, 46]. The extensions are implemented by extending ODL’s OpenFlow Plugin, which is responsible for the serialization and deserialization of OpenFlow protocol messages. An abstracted list of the new OpenFlow messages can be found in Table 3. The ofp_mmwave_config message is sent by the SDN controller to address the small cell configuration, regarding (1) the mmWave link configuration, (2) the mmWave interface alignment, and (3) the small cell power state (on/off). Whenever a new node connects to the SDN controller, the controller requests initial status data from the node through a features request message. If the new node is a small cell mesh node, it includes that information in the features reply and then sends an ofp_sc_features message (requested by the controller), which contains its GPS coordinates, the PCU IP address used to manage its power configuration, and a list of scanned neighbors (BSSID, RSSI, and respective channel). If the node does not send an ofp_sc_features message, it is treated as a regular OpenFlow device by the SDN controller, providing backwards compatibility with the original OpenFlow specification. In addition, the controller periodically requests statistics from the existing mmWave links, which are returned as a multipart list of ofp_mmwave_stats messages.
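
For illustration, the information carried by an ofp_sc_features reply could be rendered as follows; the field layout is a sketch of the content listed above, not the actual wire format.

from dataclasses import dataclass
from typing import List

@dataclass
class ScannedNeighbor:
    bssid: str
    rssi: int      # received signal strength, in dBm
    channel: int

@dataclass
class SmallCellFeatures:
    latitude: float
    longitude: float
    pcu_ip: str                       # address used for power management
    neighbors: List[ScannedNeighbor]  # result of the last neighbor scan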

4.2. Small Cell Mesh Nodes

Each small cell mesh node has a computation device that is connected to two mmWave interfaces. The nodes can be placed in a closed compartment, as seen in Figure 6(a). This enclosure provides a water and weather resistant case with flexible mounting options for outdoor experiments, e.g., on streetlights or walls, enabling an easy integration onto existing infrastructures. The used materials do not add significant attenuation to the mmWave transmission. Additionally, for ease of mobility and interaction with the node components, the nodes can be placed on adjustable tripods or similar support structures, which we used during our indoor experiments, as seen in Figure 6(b). The computation devices provide USB 3 ports, which are used to connect the mmWave interfaces, although additional device bus types can be integrated in future research (e.g., Thunderbolt 3 or M.2).

Each mesh node uses a modified version of Open vSwitch (OVS) 2.10.1, which integrates the enhanced OpenFlow API extensions. Internally, to process wireless-related commands, OVS exchanges messages with a Small Cell Agent (SCA) via a local UDP socket. The SCA software is written in Python and communicates with the local hardware and software components of the computation device. As seen in Figure 7, the SCA has multiple internal modules: the Beam Steering module uses a serial communication module to interact with the movement controller and sets the alignment of the mmWave interfaces. The Power Management module gracefully terminates all the running software, i.e., OVS and the other SCA modules, whenever the SCA receives a shutdown request message from OVS. Lastly, the Statistics and Link Configuration modules interact with the existing WiGig software in order to retrieve RSSI statistics and configure the existing links. Internally, the node software uses ported versions of the built-in wpa_cli and wpa_supplicant tools, which are adapted to operate with the mmWave module drivers (wigig_cli and wigig_supplicant, respectively).

To configure a mmWave link, the SCA orchestrates a set of procedures that (1) detach the involved interface from the OVS bridge (after inspecting the wigig_supplicant log, we discovered that the WiGig mmWave driver does not correctly process EAPOL authentication frames when the interfaces are added as OVS switch ports; as we do not have access to the driver source code, we were not able to fix this behavior), (2) start a new wigig_supplicant instance, (3) perform a local link discovery routine, and finally (4) reattach the interface to OVS. The local link discovery is conducted differently depending on whether the interface is configured as AP or STA: when in STA mode, the SCA detects the new link by sending ICMP packets through the involved interface every 0.1 s using the ping tool until it receives a response. When in AP mode, the new link is detected when the SCA captures an ICMP packet on the new interface. To delete existing links, the SCA terminates the respective running wigig_supplicant instances.
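
The following sketch summarizes this procedure, using hypothetical helper names for the SCA internals:

import time

def configure_link(sca, iface, ssid, mode):
    """Sketch of the four-step link setup; `mode` is "AP" or "STA"."""
    sca.ovs_detach_port(iface)                    # 1) work around the EAPOL issue
    sca.start_wigig_supplicant(iface, ssid, mode) # 2) (re)start the supplicant
    if mode == "STA":
        while not sca.ping_peer(iface):           # 3a) probe until the peer replies
            time.sleep(0.1)
    else:
        sca.wait_for_icmp(iface)                  # 3b) AP side: capture the probe
    sca.ovs_attach_port(iface)                    # 4) reattach to the OVS bridge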

4.3. Steerable mmWave Interfaces

Each mmWave interface used in the mesh nodes is formed by a USB 3 dongle with a 60 GHz transceiver, manufactured by Panasonic Inc., Japan, which can be seen inside and outside its original enclosure in Figure 8. The dongles are based on the WiGig/IEEE 802.11ad standards [47]. However, as they do not comply with the full protocol specification, the dongles do not support features such as digital beamforming or P2MP connectivity. The WiGig module uses modulation and coding scheme (MCS) 9 (π/2-QPSK with a 13/16 coding rate and a PHY data rate of 2502.5 Mbps, which translates into an achievable MAC-layer throughput of 1.6 Gbps [48]). The module can operate on channels 2 and 3, among the 4 channels of IEEE 802.11ad, which span 59.40 to 61.56 GHz and 61.56 to 63.72 GHz (each with 2.16 GHz of bandwidth), respectively [49].

To provide the necessary gain to cope with the high path loss at mmWave frequency bands, the mmWave interfaces use a passive antenna reflect array, which offers beam forming capabilities. This array is made from specifically designed printed circuit boards (PCBs). It adds a gain of 26 dBi and significantly narrows the beam, allowing a maximum intersite distance of 200 m while greatly reducing the interference between adjacent nodes [50]. This narrow beam requires a fine alignment of the mmWave interfaces in order to form links between the nodes. For that reason, we integrate the mmWave dongle and reflect array in a steerable mechanical platform. The platform allows full movement freedom in the horizontal direction and a limited range in the vertical direction. The movement in each direction is controlled by a stepper motor, which is connected to a movement controller. This controller is connected to the computation device through a serial-to-USB interface, which can be used by the computation device to modify the antenna alignment. Figure 9 shows the front and back of a fully assembled mmWave interface. On the left, the front of the reflect array can be seen, along with the USB mmWave dongle and both stepper motors below. Figure 9 (right) shows the back of the device, containing the movement controller PCB.

5. Testbed Evaluation and Discussion

In this section, we evaluate several aspects of SDN-based mesh backhaul reconfiguration, using our mmWave multihop testbed. Our experiments aim not only to validate the developed backhaul reconfiguration primitives, but also serve to answer the following questions:
(i) What is the impact of topology changes and suboptimal channel assignment on existing traffic, using SDN-based mesh backhaul reconfiguration?
(ii) What is the impact of energy efficient small cell power on/off operations on existing traffic, using SDN-based mesh backhaul reconfiguration?

In the following subsections, we will analyze important key performance indicators such as delay, loss, and throughput when answering both questions for a variety of traffic demands. We begin with baseline experiments to identify the performance impact of different primitives used for the backhaul reconfiguration operations.

We use different traffic generation tools to conduct our experiments. iperf3 (https://software.es.net/iperf/) generates UDP traffic between the sender and receiver nodes. The traffic flows are orchestrated remotely to run until the respective iperf3 client applications are terminated. Each flow uses 7882 byte packets, matching the configured MTU of the mmWave interfaces. We vary the sending rate according to our experiments. The iperf3 server instance reports the throughput and loss values every second, which we later correlate with the configuration stages to obtain the average and standard deviation values. We use the ping tool to measure the RTT by sending ICMP packets every 10 ms between the involved hosts. We use tcpdump to capture traffic for the whole experiment duration on the wired links of the receiver and sender nodes (as well as on their mesh node counterparts) and, on all the mmWave interfaces of the involved testbed nodes, after each mmWave link is established (as it is not possible to start the capture earlier). We implement scripts to correlate the trace files in order to dissect the RTT per link and per node, identifying latency bottlenecks.
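
A minimal sketch of how a single run can be scripted is shown below; the host name is a placeholder, and the iperf3 and ping flags follow their standard usage.

import subprocess

# 800 Mbps UDP flow with 7882-byte packets, reporting every second; the
# duration is generous and the process is terminated by the orchestrator.
client = subprocess.Popen([
    "iperf3", "-c", "receiver1", "-u",
    "-b", "800M",    # target sending rate
    "-l", "7882",    # packet size matching the mmWave MTU
    "-t", "3600",    # long duration, stopped early when the run ends
    "--json",
])

# RTT probes every 10 ms (sub-0.2 s intervals require root on Linux).
rtt = subprocess.Popen(["ping", "-i", "0.01", "receiver1"])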

5.1. Baseline Backhaul Link Reconfiguration

As a baseline measurement, we evaluate the impact of a single backhaul link configuration on existing traffic. To that end, we prepare a testbed setup where mesh node N1 uses a single mmWave interface. Initially, N1 has its interface aligned with N2 and a link is established between the two nodes (Figure 10(a)). In addition, the N2-N4 and N3-N4 links are also established. At the same time, the sender node S is sending an 800 Mbps UDP flow to R1 across the N1-N2-N4 path. The reconfiguration procedure consists of aligning N1’s interface with N3 (through a rotation), establishing a new N1-N3 link, and updating the previously installed forwarding rules to match the new link once it is detected by the SDN controller (Figure 10(b)). We repeat the experiment 15 times.

The links are aligned by requesting N1’s antenna to rotate to a new position that aligns with the opposite link interface. The alignment values are calculated before the experiments, first according to the nodes’ indoor positions, followed by manual tuning of the respective interface alignment angles in order to improve the link signal quality. The obtained azimuth and elevation values are then used as input in the configuration scripts during the experiments, although they can also be stored in the SDN controller. The forwarding rules are updated whenever new links are formed by different interfaces. Since the involved links are wireless, it is necessary to rewrite the MAC addresses on each link to match the source and destination MAC addresses of the associated STA/AP on the respective links [51, 52]. Before traffic is delivered to the hosts, the small cell nodes rewrite the MAC addresses, restoring the original source/destination addresses. Although the MAC addresses are modified on each link, the end-to-end forwarding paths remain identified by the source and destination IP addresses of the respective hosts.
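
For illustration, a rule rewriting the MAC addresses at an intermediate node could look as follows; the addresses and port numbers are hypothetical.

# Rule at N2 for the S -> R1 flow: the path stays identified by the hosts'
# IP pair, while the MACs are rewritten to the STA/AP pair of the next hop.
mac_rewrite_rule = {
    "match": {
        "eth_type": 0x0800,
        "ipv4_src": "10.0.0.1",   # sender S
        "ipv4_dst": "10.0.0.2",   # receiver R1
    },
    "actions": [
        {"set_field": {"eth_src": "02:00:00:00:02:01"}},  # N2 egress iface
        {"set_field": {"eth_dst": "02:00:00:00:04:02"}},  # N4 ingress iface
        {"output": 2},                                    # towards N4
    ],
}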

Figure 11 shows the cumulative distribution function (CDF) of the time intervals required to perform the configuration operations and reestablish end-to-end connectivity between the sender and receiver nodes. These include the mechanical interface alignment from the N1-N2 position to the N1-N3 destination, the internal link configuration by N1 and N3, and the detection of the new link by the SDN controller. The interface alignment time is measured by polling the mechanical controller status until the interface is no longer moving. The internal link configuration time is computed by taking the maximum link configuration time between the AP and the STA. The link detection at the SDN controller is obtained by measuring the elapsed time from when the link configuration requests are sent to N1 and N3 until the new link is detected in ODL. Internally, ODL detects a new link when it receives a Link Layer Discovery Protocol (LLDP) packet from one of the corresponding interfaces (which are flooded every 5 seconds, or when a new switch port is added to a node) and updates its network graph. Lastly, the traffic interruption interval is obtained by calculating the maximum time during which no packets are exchanged during the reconfiguration process. The results show that the alignment time is constant (3.06 s). The internal link configuration requires the lowest delay of all the steps involved in reestablishing connectivity. Nonetheless, this operation still takes an average of 2.88 s, due to the overhead of restarting wigig_supplicant, as previously mentioned in Section 4.2. Note that we only trigger the internal link configuration after the interfaces have been properly aligned. The new link is detected in the SDN controller on average around 3.41 s after the antennas have been aligned. This is because, after the internal link is configured, the controller needs to receive the LLDP link detection messages and update its internal network graph, which, for this single link, takes less than one second (interface detection, followed by generating, receiving, and processing an incoming LLDP packet at the controller). The average total traffic interruption time (6.50 s) is very close to the sum of the average alignment time and the average link detection time at the SDN controller (6.46 s). Through the analysis of the packet trace files, we found that N1 can still transmit packets to N2 for some time after the antennas start to rotate away, until the link breaks due to complete misalignment. In addition, the mechanical platform ramps up its speed, moving more slowly at the beginning of the rotation. This causes the end-to-end connectivity interruption interval to be smaller than the total reconfiguration time.

This experiment demonstrates that the SDN controller is capable of reconfiguring the network and reestablishing end-to-end connectivity between the sender and receiver nodes. However, because N1 has only a single interface available, the backhaul reconfiguration leads to packet loss and negatively impacts existing traffic whenever the antennas need to rotate and links need to be reestablished with different neighbors. In the next sections, we evaluate the additional benefit of having multiple radios and antennas, which allows us to establish backup paths that can serve existing traffic during reconfiguration.

5.2. Optimal Steerable mmWave Mesh Backhaul Reconfiguration

To implement any of the wireless backhaul reconfiguration use cases from Section 2.1, it is crucial that the wireless backhaul can perform these reconfiguration operations with minimal disruption to existing UE traffic. Consequently, we ask the question: what is the impact of topology changes and suboptimal channel assignment on existing traffic in an SDN-based mesh backhaul reconfiguration?

To answer this question, we design a set of experiments in which two different backhaul configurations (C1, C2) serve a given traffic matrix. Moreover, these experiments aim to validate the related reconfiguration primitives that are used in network optimization frameworks (e.g., the primitives previously described in Section 3.3.2).

The experiments are orchestrated by a set of procedures that allow the transition from an initial topology state C1 to any given final configuration C2. The main goal is to make this transition as seamless as possible, leading to minimal traffic disruption. The C1-C2 transition is done by (1) aligning the involved C2 mmWave antennas to their final positions, (2) configuring the new links of C2, (3) updating the routes of the new topology by rewriting OpenFlow rules, and (4) deleting the unused links of C1.
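The four steps can be summarized by the following orchestration sketch; the controller methods are illustrative names rather than the actual API of our implementation.

```python
def transition(controller, c1, c2):
    """Move the backhaul from configuration C1 to C2 with minimal
    disruption (illustrative controller methods)."""
    new_links = c2.links - c1.links
    for link in new_links:               # (1) align the C2 antennas
        controller.align_interfaces(link)
    for link in new_links:               # (2) configure the new links
        controller.configure_link(link)
    controller.update_routes(c2.routes)  # (3) rewrite the OpenFlow rules
    for link in c1.links - c2.links:     # (4) delete unused C1 links
        controller.remove_link(link)
```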

In our experiments, the sender node has an active flow towards each receiver node. We consider two different scenarios. In the first one, the initial backhaul configuration C1 is able to handle the existing traffic demand, but one of the primary mmWave links, which forwards the traffic of both flows, needs to be deactivated. The backhaul is therefore reconfigured (antenna movement, neighbor establishment, rule update) to reroute all the traffic through new paths that are not initially configured, forming configuration state C2. In the second scenario, the initial backhaul configuration contains one bottleneck link that cannot forward the required traffic between the sender and the two receiver nodes. The backhaul is then reconfigured to split the two flows over different paths. We investigate both optimal and suboptimal channel assignments in the mesh and their impact on the reconfiguration.

For both scenarios, the experimental methodology is similar. The testbed is initialized by configuring the N1-N2 and N2-N4 mmWave links on channels 3 and 2, respectively. Once the links are available, the initial forwarding rules are installed, routing the traffic between S and R1 over the N1-N2-N4 path, and the traffic between S and R2 over the N1-N2 link (Figure 12). With the forwarding rules installed, the traffic and RTT measurements start and, approximately 20 s later, the backhaul reconfiguration is triggered by the SDN controller, aligning the interfaces to the new positions and configuring the new links. With the new links formed, the forwarding rules of the new configuration C2 are installed, replacing the initial ones. Afterwards, unused links from the initial topology are removed and the measurements continue until the end of the experiment. We repeat each experiment 15 times.
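For clarity, one iteration of this methodology can be written as a small driver, reusing the transition sketch above; the helper names and the total duration parameter are placeholders.

```python
import time

def run_iteration(controller, measurements, c1, c2,
                  warmup_s=20, total_s=90):
    """One experiment iteration: bring up C1, measure, reconfigure to C2
    after the warm-up period, and keep measuring until the end."""
    controller.apply_configuration(c1)  # links plus initial rules
    measurements.start()                # iperf3 traffic and RTT probes
    time.sleep(warmup_s)
    transition(controller, c1, c2)      # see the sketch in Section 5.2
    time.sleep(total_s - warmup_s)
    measurements.stop()

# Each configuration pair is repeated 15 times.
```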

5.2.1. Scenario I: Reconfiguration under Low Traffic Volume

In this scenario, we aim to evaluate the reconfiguration of our testbed when transitioning between two configuration states C1 and C2, where the backhaul can provide the required demands in both the initial and the end state. Such a transition is necessary when the SDN controller detects a link failure persisting for a long duration, caused by, e.g., long-term blockage or hardware problems in the small cell transceivers. In this experiment, the sender node starts by sending two 500 Mbps UDP flows: F1 (to Receiver 1) and F2 (to Receiver 2). As the N1-N2 link is disabled in the final configuration state C2, the topology is reconfigured by setting up the N1-N3 (channel 2) and N3-N4 (channel 3) links. When the new links are established, the traffic is rerouted to the new configuration C2, where the forwarding rules for F1 are configured to use the N1-N3-N4 path, and F2 is routed through the N1-N3-N4-N2 path (Figure 13).

To avoid traffic disruptions, the rules are installed first in the nodes without ongoing traffic from active flows, and then in the nodes where the initial links are being used, as captured in the sketch below. Therefore, forwarding rules for F1 are installed first in N3, updating N4 and N1 afterwards. The new F2 forwarding entries are installed in N3 and N4, then in N2 and N1. After the traffic is rerouted to the new paths, the N1-N2 link is disabled and the testbed reaches the final configuration state C2. With our testbed, the calculation of the flow installation order is straightforward and can easily be hard-coded in the experimental scripts, but in larger scenarios such a flow migration must be computed with the whole topology as input [53]. It is worth noting that, during the initial configuration state C1, flow F1 is routed over two mmWave hops and F2 over one hop, while in C2, flow F1 experiences two wireless hops and F2 experiences three hops.
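This ordering can be captured in a few lines. The sketch below updates nodes that do not yet carry the flow first and the ingress node last; for larger topologies, this order would instead be produced by a flow migration algorithm [53].

```python
def ordered_install(controller, flow, old_path, new_path):
    """Install the rules of `flow` along `new_path`, touching nodes
    without ongoing traffic first and the ingress node last, so that
    traffic is only rerouted once the rest of the path is ready."""
    ingress = new_path[0]
    fresh = [n for n in new_path if n not in old_path]   # e.g., N3
    shared = [n for n in new_path[1:] if n in old_path]  # e.g., N4
    for node in fresh + shared:
        controller.install_rule(node, flow)
    controller.install_rule(ingress, flow)               # e.g., N1 last
```

For F1 (old path N1-N2-N4, new path N1-N3-N4) this yields the order N3, N4, N1; for F2 (old path N1-N2, new path N1-N3-N4-N2) it yields N3, N4, N2, N1, matching the description above.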

Figure 14 shows the RTT for both receivers R1 and R2 along with the throughput of flows F1 and F2 over time. Complementing the plot, Table 4 lists the average and standard deviation (stdev) of RTT, throughput, and packet loss over all test iterations and configuration stages.

As the traffic demands of these experiments do not saturate the links, the impact of queuing is minimal for both nodes. R1 experiences a higher RTT because it traverses more wireless hops in C1. After the links are aligned and while the second path is being configured (42 s), we observe sporadic RTT spikes at both receiver nodes (up to nearly 10 ms). A more detailed analysis of these sudden delay spikes revealed interference effects, caused by having all the mmWave links active during this period and by the physical deployment of our nodes in the testbed. These interference effects motivated a more thorough analysis of the impact of channel assignment changes on mmWave links, which is discussed in detail in Section 5.2.3.

After the forwarding rules are updated, we can observe the impact of the new configuration C2 on the existing flows. The average RTT of R1 stays roughly the same, since the new forwarding configuration has the same number of hops (with a small average increase of 0.28 ms, caused by interference between the N2-N4 and N1-N3 links), while the average RTT of R2 increases to approximately 1.7 ms. This value is higher than the average RTT of R1 because the new forwarding path for F2 is composed of three mmWave hops, instead of the initial one-hop path.

Despite the RTT variations between the two receiver nodes, the throughput of flows F1 and F2 is not affected by the multiple reconfiguration operations. We therefore show that an orchestrated reconfiguration of the network under a low traffic volume can be achieved with minor impact on existing flows and without causing packet loss.

5.2.2. Scenario II: Reconfiguration with High Traffic Volume

When backhaul links are highly utilized, additional user traffic demand spikes (e.g., a sudden increase of users entering a stadium or a music arena) lead to persistent link congestion. To cope with the increased demand, new small cells can be powered on, if available, routing traffic away from hotspots. With new demands, the backhaul orchestrator is required to compute a new backhaul configuration that can fulfill the increased traffic demand. To that end, we evaluate such a scenario in our testbed, following the same reconfiguration goals as previously described.

In this set of experiments, we set up two 900 Mbps UDP flows F1 and F2 between S and R1 and R2, respectively. The reconfiguration of the testbed is then triggered, configuring the N1-N3 and N3-N4 links. When the new links are formed, the forwarding rules of flow F1 are updated to use the N1-N3-N4 path. As in the previous scenario, we install the new rules first in N3, and only afterwards in N4 and N1. When flow F1 is successfully rerouted, the N2-N4 link is deactivated, leaving flow F1 routed through N1-N3-N4 and flow F2 routed over the N1-N2 forwarding path (Figure 15).

For this experiment, Figure 16 shows the throughput of flows F1 and F2, along with the RTT between the sender and both receiver nodes, over the elapsed experiment time of one iteration. In addition, the respective average and stdev values, before and after the network reconfiguration, are presented in Table 5.

From the beginning of the traffic measurements (approximately 6 s), we observe the saturation of the N1-N2 link, as F1 and F2 receive less bandwidth than their 900 Mbps targets. The aggregated throughput of both flows is capped at the 1.5 Gbps maximum available link capacity, resulting in an average packet loss of approximately 15% over both flows (18.9% for F1 and 11% for F2). The link congestion results in queue buildups, leading to bufferbloat and approximately 45 ms RTT for both receiver nodes (except during the interval between the start of the measurements and the interface alignment, where the average is slightly lower because the first values are recorded before the N1-N2 link becomes congested).

After the new N1-N3-N4 path is configured and flow F1 is rerouted, both F1 and F2 reach the desired throughput. The congestion on the N1-N2 link disappears, reducing the latency between S and R2 to around 1.2 ms. At the same time, the RTT between S and R1 drops to 1.7 ms. This higher latency, compared to the values for R2, is caused by the additional hop between the two nodes (S-N1-N3-N4-R1, compared to the S-N1-N2-R2 path). During this last experiment interval, we again observe interference between the mmWave links, which results in latency spikes (e.g., at 46 s) caused by a single high-RTT measurement (4 ms), followed by a return to the average latency values on the subsequent ICMP packets. As can be seen in Table 5, the average packet loss for the last configuration stage is nonzero. This is because iperf3 records packet loss every second, which does not perfectly align with the configuration stage changes.

5.2.3. Impact of Channel Assignment within the mmWave Backhaul

To investigate the effects of channel assignment on traffic and interference, we conduct a set of experiments with a suboptimal channel configuration on the used links. To that end, we repeat the experiments from Section 5.2.2, modifying the channels used in the mmWave link configuration: N1-N2 and N2-N4 were set to use channel 2, while N1-N3 and N3-N4 used channel 3. With this configuration, each of the two disjoint paths between N1 and N4 uses the same channel on both of its links.
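Concretely, the two compared channel assignments differ only in the per-link channel map; the values below are taken directly from the experiment descriptions.

```python
# Optimal assignment (Sections 5.2.1 and 5.2.2): the two hops of each
# disjoint N1-N4 path use different channels.
optimal_channels = {("N1", "N2"): 3, ("N2", "N4"): 2,
                    ("N1", "N3"): 2, ("N3", "N4"): 3}

# Suboptimal assignment (this section): both hops of each path share a
# channel, creating cochannel interference at the relay nodes.
suboptimal_channels = {("N1", "N2"): 2, ("N2", "N4"): 2,
                       ("N1", "N3"): 3, ("N3", "N4"): 3}
```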

The throughput and RTT over time for a single experiment iteration are shown in Figure 17, while the average and stdev values for the 15 tested iterations can be found in Table 6. From the start of the measurements until the update of the forwarding rules (43 s), both flows are routed over the N1-N2 link, which uses the same channel as the N2-N4 second hop of flow F1. In contrast to the previous experiments, where the N1-N2 link was fully utilized, both flows reach an aggregated throughput of only approximately 670 Mbps, around 44% of the full link utilization. Consequently, both nodes experience increased packet loss (75.14% for F1 and 47.99% for F2), caused not only by the N1-N2 link saturation, but also by cochannel interference. In this scenario, both links within a path use the same channel and both of N2’s interfaces are vertically stacked (placing the transmitter and receiver radios in close physical proximity). When N2 receives a packet from N1 on one interface and has to forward a different packet to N4 in parallel, the transmitter radio at N2 does not sense the ongoing reception from N1, due to the directional antennas, and transmits towards N4 in parallel. The high transmit power of N2’s sending interface then causes additional cochannel interference on the reception of the packet from N1 at N2’s receiving radio. This additional interference reduces the dynamic link margin that the mmWave links need to cope with random high path loss, shadowing, and blockages, further leading to high random packet loss. Similarly, when N4 sends an ACK back to N2 while N2 sends an ACK back to N1, the ACK transmissions interfere, leading to packet loss and reduced throughput. Note also that the throughput of F2 is higher than that of F1 before the reconfiguration: F2 is routed over a single hop, while F1 is forwarded over two links, leading to a higher packet loss rate for F1.

Regarding the RTT measured at the receiver nodes before the reconfiguration, we can observe the effects of increased queuing delay at N1-N2, as the RTT rises shortly after the traffic congests the links. However, the average values are considerably higher than in the previous scenario (by 212.96 ms at R1 and 35.9 ms at R2). While the channel assignment without significant interference produced similar RTT values at both receiver nodes (as the delay was primarily caused by a single congested link), in this set of experiments the RTT measured at R1 exceeds that of R2 by approximately 177 ms, due to the additional delay on the N2-N4 link. The cochannel interference leads to high packet loss, requiring the lower layer to frequently retransmit lost packets, which in turn causes significant queue buildups and bufferbloat.

After the network is reconfigured, the cochannel interference negatively affects the performance of flow F1, as it is forwarded over two hops sharing the same channel (ch. 3). The reasons for the low throughput and high packet loss of F1 are the same as described above. At the same time, F2 recovers from its throughput deficit and high latency, as the N1-N2 link now exclusively uses channel 2 and solely forwards the traffic between the source and R2.

To conclude, the negative impact of a suboptimal channel assignment within the wireless backhaul is clearly observable, even when using directional antennas and forwarding packets over more than one hop. The effects of cochannel interference cause a significant decrease in the throughput of existing flows and an increase in the measured RTT, compared to an identical scenario where a better channel assignment results in less interference among the used links.

5.3. Adaptive On/Off Mesh Backhaul Operation

To reduce the overall backhaul power consumption, unused nodes should be powered off, leading to a change of the backhaul configuration. Therefore, when transitioning between different configuration states, the SDN controller needs to be able to turn backhaul nodes on or off, reconfigure the involved mmWave backhaul links, and update the existing forwarding rules according to the newly formed topology. In this section, we evaluate the adaptive on/off configuration primitives by performing a seamless reconfiguration of the backhaul that combines link realignment, link configuration, and forwarding rule updates, orchestrated by the SDN controller.

At the beginning of the experiment, N3 is powered off and all the remaining mesh nodes are switched on. We initially configure the N1-N2-N4 path (configuration C1) by installing the respective links and forwarding rules, and start an 800 Mbps UDP flow F1 between S and R1, as shown in Figure 18(a). At the same time, we start the RTT measurement between R1 and S. After 10 s, we power on N3, which takes approximately 33 s to boot. With all the mesh nodes turned on, we reconfigure the testbed to use the N1-N3-N4 path, using the same reconfiguration routines as in the previously described scenarios: first aligning the involved mmWave interfaces of N1, N3, and N4, followed by configuring the new links and updating the forwarding rules to match the new path. Ten seconds later, we disable the links of the N1-N2-N4 path and power off N2, leaving the mesh network operating again with three powered-on nodes, as seen in Figure 18(b) (configuration C2). N2 is powered on again after 15 s and, once it is operational, we reestablish connectivity on the N1-N2-N4 path, rewriting the flows to their original configuration until the end of the experiment (configuration C1).
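The on/off sequence combines the earlier reconfiguration routines with power management calls. The sketch below uses illustrative method names; only the boot time reflects our testbed observations.

```python
def swap_relay(controller, node_on, node_off,
               new_links, new_routes, old_links):
    """Power on one relay node, migrate the traffic onto the path
    through it, and power off the now unused relay."""
    controller.power_on(node_on)
    controller.wait_until_operational(node_on)  # booting takes ~33 s
    for link in new_links:
        controller.align_interfaces(link)
        controller.configure_link(link)
    controller.update_routes(new_routes)        # ingress node last
    for link in old_links:
        controller.remove_link(link)
    controller.power_off(node_off)

# C1 -> C2: swap_relay(ctrl, "N3", "N2",
#                      [("N1", "N3"), ("N3", "N4")], routes_c2,
#                      [("N1", "N2"), ("N2", "N4")])
```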

As can be seen in Table 7, the receiver does not experience any packet loss during any of the experimental phases. Consequently, the throughput of the existing flow between the two nodes maintains the desired rate. Figure 19 shows the RTT and throughput values over one iteration, where the network reconfiguration events are marked by vertical dashed lines. The RTT values measured per link and end-to-end are plotted as separate lines.

While there is no significant deviation in the overall measured RTT values during the experiment, similarly to the previous experiments it is possible to see sporadic RTT spikes whenever all the backhaul links are active (shortly after 64 s and 138 s, respectively), caused by interference from the MAC-layer link establishment messages being exchanged. Shortly after the new forwarding rules are installed (around 71 s and 146 s), we observe a short interruption in the per-link latency measurements on the mmWave links, while the total RTT is not affected. A closer inspection of the collected trace files reveals that, while the forwarding rules are being updated, the end-to-end delay is still successfully measured at R1 (by sending and receiving the respective ICMP request and reply packets). However, a number of ICMP requests are sent by R1 over the newly configured path, while the replies return over the old path, as the forwarding rules in N1 are the last ones to be installed. Whenever the request and reply packets are split over different links, the RTT cannot be calculated individually per link, as both packets are required on the same link to compute this metric.

Similarly to the baseline link configuration experiments, the interface alignment times are stable: 4.92 s (stdev 0.053 s) for the rotation of N3’s interface towards the N3-N1 link, and 3.48 s (stdev 0.012 s) for the rotation of N3’s interface towards the N3-N4 link. Figure 20 shows a CDF of the internal link configuration time at the mesh nodes, alongside the link detection time at the SDN controller (as measured in Section 5.1), for all the link pairs configured during this set of experiments. While the internal link configuration time is consistent, averaging 1.92 s with a stdev of 0.149 s, the detection of the new links at the SDN controller varies between 2.06 s and 7.75 s. This variation is caused by the controller’s scheduling of LLDP packets for the backhaul links, which occurs every 5 seconds, or when a new switch port is added. When a new link is configured and the mmWave interface is added back to OVS by the SCA, the transmitted LLDP packet is not received by the opposite interface if both interfaces of that link are not yet ready (i.e., not yet configured in OVS on both nodes). In that case, the link can only be detected by the next LLDP packet, transmitted 5 seconds later, provided the link is then ready (e.g., if an LLDP packet is sent at t = 1.8 s and the link becomes ready at t = 1.9 s, the next LLDP packet is only transmitted at t = 6.8 s).
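The worked example follows directly from the periodic flooding. A minimal timing model, assuming a fixed 5 s LLDP period and that a probe is lost whenever the link is not yet ready:

```python
import math

def detection_time(last_probe, link_ready, period=5.0):
    """Earliest time at which a newly configured link can be detected,
    under a simplified model of the controller's LLDP scheduling."""
    if link_ready <= last_probe:
        return last_probe            # the in-flight probe succeeds
    missed = math.ceil((link_ready - last_probe) / period)
    return last_probe + missed * period

print(detection_time(1.8, 1.9))  # -> 6.8, matching the example above
```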

5.4. Reflections

Our experiments demonstrate how SDN enables the orchestration of different backhaul reconfiguration mechanisms, even in the presence of complex reconfiguration operations, i.e., series of interface alignment, link establishment, power on/off management, and rerouting steps. As we have seen, a proper orchestration leads to seamless network operation and uninterrupted end-to-end user connectivity. By adaptively powering the network nodes on and off, the backhaul can easily be reconfigured to achieve energy efficiency goals and contribute towards green network operation.

However, we also need to consider how resiliency can be provided jointly with this type of energy efficient network reconfiguration. Specifically, if all the unused backhaul nodes in a given topology configuration state are powered off, any kind of failure, even a temporary one, would disrupt the network operation. Therefore, a balance needs to be struck between energy efficient and resilient operation of the mesh backhaul, where nodes can be adaptively powered on and off while backup links and/or paths are still provided. Having both fast-failover resiliency and adaptive on/off backhaul reconfiguration mechanisms available in the network would allow operators to fine-tune the backhaul operation according to the desired policies.

6. Conclusions

In this paper, we present the SOCRA architecture, which uses the SDN control plane to manage a small cell wireless multihop mesh backhaul network. The SDN control plane is responsible for all configuration-related operations, including channel assignment, interface alignment using mechanically rotating directional antennas, powering the small cells on and off, and managing the forwarding state. The proposed architecture provides an orchestration interface for communicating with external optimization frameworks, which can use it to optimize the backhaul for different operational goals (e.g., energy efficiency or resiliency).

We implemented an SDN controller and a multiradio mmWave small cell backhaul node and deployed a testbed in a mesh topology formed by small cell nodes, which interpret the reconfiguration commands issued by the SDN controller and translate them into the appropriate actions. The nodes are equipped with multiple mechanically steerable mmWave interfaces that can rotate and align with each other, dynamically forming new links according to the SDN controller’s instructions. The testbed was used to validate different reconfiguration primitives, including the realignment of the mmWave interfaces, the dynamic configuration of the backhaul links, the powering on/off of the small cell nodes, and the update of the forwarding rules. We evaluated the reconfiguration of our testbed when transitioning between different configuration states under different traffic scenarios and channel assignments. Our measurement results indicate that it is possible to reconfigure the backhaul without a significant impact on existing UE traffic, provided that backup paths are available for temporary traffic routing.

As future work, we intend to investigate the impact of the backhaul channel assignment under different positions of, and distances between, the backhaul nodes and interfaces. Moreover, we will extend our backhaul optimization framework to consider the channels used on the formed links and develop fast heuristics that guide the backhaul orchestrator by providing an ordered sequence of reconfiguration operations to perform, minimizing the impact on end-to-end performance.

Data Availability

The experimental log and trace files used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

Parts of this work have been funded by the Knowledge Foundation of Sweden through the project SOCRA. This work is also funded by the European Commission (EC) H2020 programme and the Ministry of Internal Affairs and Communications (MIC) in Japan under grant agreements 723171 (EC) and 0159-0149, 0150, 0151 (MIC).