METRO-HAUL: Crowdsourced video broadcast


Overview

Metro-Haul has designed and built a cost-effective, agile, disaggregated packet-optical metro infrastructure with compute capabilities at the network edge, addressing capacity growth alongside requirements such as low latency and high bandwidth. The Metro-Haul control plane consists of the Control, Orchestration and Management (COM) system, based on the principles of the ETSI NFV framework and a hierarchical SDN control plane. The COM system facilitates the deployment of multilayer end-to-end network slices, including Virtual Network Functions (VNFs) deployed across multiple datacentres (with multiple Virtual Infrastructure Managers) together with dedicated packet and optical network resources, such as Layer 2 (L2) VPNs and photonic media channels.

Crowdsourced Live Video Streaming (CLVS) is an example of such a Network Service (NS), in which thousands of users attending an event (sports, concerts, etc.) stream video from their smartphones to a CLVS platform. The content from all the users is edited in real time, producing an aggregated video that can be broadcast to a large number of viewers. Metro-Haul has conducted an experimental demonstration of the CLVS NS on its infrastructure, with NFV orchestration by the COM system implemented over a disaggregated optical network testbed in the UK.

Architecture

The testbed includes an inter-partner VPN, over which all the control plane components, hosted at various partner premises across Europe, connect and interact. The data plane spans two Points of Presence (PoPs), at the University of Bristol and at BT Adastral Park. At each site, the optical infrastructure comprises a ROADM connected to a Voyager muxponder, with optical fibre linking the two sites. Each PoP runs an OpenStack instance hosting the VNFs; for orchestration, the VNF packages and the descriptors for the network service are on-boarded.
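As a rough illustration, the dual-site, dual-layer topology can be captured as a small data structure. This is only a sketch; all node and port identifiers are hypothetical placeholders, not the testbed's actual names.

```python
# Minimal sketch of the two-PoP Metro-Haul testbed topology.
# All identifiers (node/port names) are hypothetical placeholders.

topology = {
    "pops": {
        "bristol-amen": {"vim": "openstack-bristol", "roadm": "roadm-br", "muxponder": "voyager-br"},
        "bt-mcen":      {"vim": "openstack-bt",      "roadm": "roadm-bt", "muxponder": "voyager-bt"},
    },
    "links": [
        # Optical fibre between the two sites (ROADM to ROADM)
        {"layer": "optical", "a": "roadm-br", "z": "roadm-bt"},
        # Voyager line ports attach to the local ROADM
        {"layer": "optical", "a": "voyager-br:line", "z": "roadm-br:add-drop"},
        {"layer": "optical", "a": "voyager-bt:line", "z": "roadm-bt:add-drop"},
        # Ethernet client ports face the OpenStack compute servers
        {"layer": "packet", "a": "voyager-br:client", "z": "openstack-bristol"},
        {"layer": "packet", "a": "voyager-bt:client", "z": "openstack-bt"},
    ],
}
```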

Deployments

First, a CLVS application administrator requests a CLVS NS from Net2Plan, which provides the network planning service. Net2Plan gathers the NS network/compute resource, latency, and bandwidth requirements, then executes a resource allocation algorithm that produces the NS Descriptor (NSD) and the VNF locations as output. Here, the Deep Packet Inspection (DPI) VNF is placed at the Bristol Access Metro Edge Node (AMEN) and the firewall (FW) VNF at the BT Metro Core Edge Node (MCEN). Net2Plan uses this output to deploy the network slice by instantiating the CLVS NS through an NFV Orchestrator (NFVO).
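To make the planning step concrete, the sketch below shows a simple latency- and capacity-aware placement heuristic in the spirit of this stage. It is not Net2Plan's actual algorithm; site names, CPU figures, and latency bounds are all illustrative assumptions.

```python
# Hedged sketch of a latency/capacity-aware VNF placement step.
# This is NOT Net2Plan's actual algorithm; all numbers are made up.

SITES = {
    "bristol-amen": {"free_cpu": 16, "latency_to_users_ms": 2.0},
    "bt-mcen":      {"free_cpu": 32, "latency_to_users_ms": 6.0},
}

VNFS = [
    {"name": "dpi", "cpu": 8, "max_latency_ms": 3.0},   # latency-sensitive: edge
    {"name": "fw",  "cpu": 8, "max_latency_ms": 10.0},  # tolerant: core edge
]

def place(vnfs, sites):
    """Greedy placement: pick the feasible site with the most spare CPU."""
    placement = {}
    for vnf in vnfs:
        feasible = [
            (name, s) for name, s in sites.items()
            if s["free_cpu"] >= vnf["cpu"]
            and s["latency_to_users_ms"] <= vnf["max_latency_ms"]
        ]
        if not feasible:
            raise RuntimeError(f"no feasible site for {vnf['name']}")
        name, site = max(feasible, key=lambda kv: kv[1]["free_cpu"])
        site["free_cpu"] -= vnf["cpu"]
        placement[vnf["name"]] = name
    return placement

print(place(VNFS, SITES))  # e.g. {'dpi': 'bristol-amen', 'fw': 'bt-mcen'}
```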

The OSM NFVO offers the E2E slicing service to deploy the NS using the NSD; this service includes both VNF deployment and the interconnection of the VNFs. OSM deploys the VNFs as VMs on the compute servers using OpenStack as the reference Virtual Infrastructure Manager (VIM). Once the VNFs are active, OSM requests an L2-based E2E connectivity service offered by the PSC to interconnect the VNFs.
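As a rough illustration, an NS can be instantiated through OSM's northbound (SOL005-style) REST API roughly as follows. This is a minimal sketch: the endpoint URL, credentials, and IDs are placeholders, and the exact payload fields can vary between OSM releases.

```python
# Minimal sketch of instantiating an NS through OSM's northbound API.
# URL, credentials, and IDs are placeholders, not the demo's actual values.
import requests

OSM = "https://osm.example.net:9999/osm"  # hypothetical NBI endpoint

# 1. Authenticate and obtain a bearer token.
tok = requests.post(
    f"{OSM}/admin/v1/tokens",
    json={"username": "admin", "password": "admin", "project_id": "admin"},
    verify=False,
).json()
headers = {"Authorization": f"Bearer {tok['id']}"}

# 2. Create and instantiate the CLVS network service from its on-boarded
#    NSD, targeting the registered OpenStack VIM account.
ns = requests.post(
    f"{OSM}/nslcm/v1/ns_instances_content",
    headers=headers,
    json={
        "nsName": "clvs-ns",
        "nsdId": "<clvs-nsd-uuid>",            # on-boarded NS descriptor
        "vimAccountId": "<openstack-vim-id>",  # default VIM for the VNFs
    },
    verify=False,
).json()
print("NS instance id:", ns.get("id"))
```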

The PSC is a hierarchical SDN controller that implements the IETF L2SM model as its Northbound Interface (NBI); OSM specifies the AMEN and MCEN network endpoints and the VLAN ID when requesting connectivity. The PSC first connects the AMEN and MCEN at the optical layer using the optical connectivity service offered by the ONOS SDN controller. We have implemented Transport API (TAPI) v2.1 photonic media layer extensions on the ONOS NBI, allowing the PSC to request a connectivity service (optical channel, OCh) between the Voyager muxponder line ports. In addition, device drivers (using NETCONF/YANG) for the Voyager muxponder (OpenConfig terminal-device model) and the ROADMs (OpenROADM device model) are implemented to configure the line ports and network media channels, respectively.

Once the optical connectivity is deployed, the PSC deploys the L2 VLAN-based packet connectivity between the Ethernet client ports of the Voyager muxponders using the NetOS packet SDN controller. NetOS exposes the topology to the PSC via a TAPI context, in which the optical link between the Voyager muxponders is discovered using LLDP. To interconnect the Voyager muxponder client ports, NetOS configures the VLAN on both the client and line ports, which are internally connected to the same bridge on the Voyager muxponder. Once the VLAN is deployed, the E2E connectivity is established. Finally, OSM requests connectivity in this way for each VLAN between the VNFs.
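For illustration, a TAPI v2.1 photonic-media connectivity-service request, as the PSC might issue towards the ONOS NBI, could look like the sketch below. The RESTCONF path, service-interface-point UUIDs, and payload details are illustrative assumptions; the exact shape depends on the controller's TAPI implementation.

```python
# Hedged sketch of a TAPI v2.1 connectivity-service request at the
# photonic media layer. Paths, UUIDs, and credentials are placeholders.
import requests

ONOS = "http://onos.example.net:8181"  # hypothetical controller address

service = {
    "tapi-connectivity:connectivity-service": [{
        "uuid": "11111111-2222-3333-4444-555555555555",
        "layer-protocol-name": "PHOTONIC_MEDIA",
        "end-point": [
            {   # Voyager line port at the Bristol AMEN (hypothetical SIP)
                "local-id": "ep-amen",
                "service-interface-point": {
                    "service-interface-point-uuid": "<sip-uuid-bristol>"},
            },
            {   # Voyager line port at the BT MCEN (hypothetical SIP)
                "local-id": "ep-mcen",
                "service-interface-point": {
                    "service-interface-point-uuid": "<sip-uuid-bt>"},
            },
        ],
    }]
}

resp = requests.post(
    f"{ONOS}/onos/restconf/data/tapi-common:context/"
    "tapi-connectivity:connectivity-context",
    json=service,
    auth=("onos", "rocks"),  # ONOS default credentials, for illustration
)
resp.raise_for_status()
```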

Results

The Metro-Haul project deploys the requested Network Service using a set of services that form its Control, Orchestration, and Management (COM) system. Thirty runs of the crowdsourced video streaming demo were conducted to obtain average setup times, with 95% confidence intervals, for each COM service. The complete network service is deployed within ≈5 minutes, which meets the 5G PPP service-deployment KPI. The average setup times (in seconds, ± the 95% confidence interval half-width) are listed below; a sketch of how such intervals are computed follows the list.

  • Network planning: 6.059 ± 0.283.
  • E2E network service: 292.679 ± 6.056.
  • E2E connection: 131.549 ± 5.241.
  • Optical connection: 56.149 ± 0.649.
  • VLAN connection (no optics): 20.129 ± 0.710.
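The sketch below shows how a mean with a 95% confidence interval can be computed from repeated setup-time measurements, such as the 30 demo runs above. The sample values are made up for illustration; this is a standard Student's t interval, not necessarily the project's exact procedure.

```python
# Sketch: mean and 95% confidence interval from repeated measurements.
# The sample values are invented for illustration only.
import statistics
from math import sqrt
from scipy import stats

samples = [56.4, 55.8, 56.9, 55.6, 56.1, 56.3]  # setup times in seconds

n = len(samples)
mean = statistics.mean(samples)
sem = statistics.stdev(samples) / sqrt(n)   # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)       # two-sided 95% t quantile
half_width = t_crit * sem

print(f"{mean:.3f} ± {half_width:.3f} s (95% CI)")
```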

5G Empowerment

5G network orchestration utilises SDN and NFV technologies to leverage programmable networks and network functions running on generic compute servers. This allows flexibility and agility in network service deployment: a network service that might take hours or days to deploy with legacy technology can now be deployed in minutes, as illustrated by the results above for the crowdsourced video streaming demo.

Locations: Bristol (UK), Ipswich (UK)

Dates: Q3-2020

Partners involved: BT, TIM, CTTC, University of Bristol, UPC, CNIT, OLC, NAUDIT, TUE, TEI, NOKIA, Zeeta Networks

METRO-HAUL Website | @MetroHaul

