
Periodic Reporting for period 2 - SUPERFLUIDITY (Superfluidity: a super-fluid, cloud-native, converged edge system)

Teaser

The vision of the SUPERFLUIDITY project, in the framework of 5G, emerges from powerful drivers that are shaping our society. The first driver is population growth, together with ever-increasing globalisation and physical and virtual mobility: more people (2.5 billion...

Summary

The vision of the SUPERFLUIDITY project, in the framework of 5G, emerges from powerful drivers that are shaping our society.

The first driver is population growth, together with ever-increasing globalisation and physical and virtual mobility: more people (2.5 billion in 1950, almost 7.5 billion today, half of them living in cities) and more interconnections among them.
The second driver is the proliferation of new or improved applications and services that need network connectivity: social networks, high-definition video, IoT (metering, smart homes, connected cars), Industry 4.0 (the fourth industrial revolution, i.e. the current trend towards automation and data exchange in manufacturing), low-latency services (gaming, virtual reality, autonomous vehicles), and advanced services (face recognition, speech translation, cognitive and expert systems, big-data exploitation).

Simply put, we have more connected devices, each requiring higher data rates, lower latency and ubiquitous coverage, with very high user densities possible. This scenario sets out challenging requirements for the evolution of communication and networking technologies. An important answer to these challenges is to optimise resource utilisation by allocating resources dynamically in time and space and, more generally, to resort to virtualisation techniques as much as possible.

The benefits of a full virtualisation of network devices, at all layers, include: i) sharing: resources can be divided into multiple virtual pieces used by different users; ii) isolation: sharing a resource does not endanger the security and privacy of its users; iii) aggregation: if individual resources are not large enough to accomplish a task, they can be aggregated; iv) dynamics: resources can be reallocated in space and time on demand; v) ease of management: software-based devices are easier to manage and update. In addition, the network must be programmable as a function of the needs of the services it provides. The overall vision is therefore that of a software network with an application/service-centric network control, able to dynamically share and allocate virtualised resources, thereby reducing costs, simplifying network management, increasing flexibility, easing evolution, and enabling dynamic deployment of network services.

Work performed

The Superfluidity project focused on achieving a ‘superfluid’ Internet, i.e. the ability to instantiate and provision network functions and services on the fly, run them anywhere in the network (core, aggregation, edge), and make them portable across multiple hardware (computing and networking) platforms, while taking advantage of high-performance accelerators when available.
Superfluidity builds upon the recognition that network services and functions usually comprise multiple elementary building blocks, which we generically call Reusable Functional Blocks (RFBs), and which can operate at different layers in the protocol stack.
A second fundamental consideration is that the infrastructures supporting the composition and execution of RFBs are heterogeneous. For example, an important trend in the current evolution of NFV is the support of containers alongside regular VMs. In addition to VMs and containers, Superfluidity also focuses on a third promising virtualization technology, namely unikernels.
A third important consideration is that the NFV/SDN concepts are currently supported by a diversified set of frameworks and tools. An opportunistic and agile approach was therefore chosen to apply the Superfluidity concepts in the Superfluidity integrated system demonstrator.
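As an illustration only (the class names and execution environments below are hypothetical, not the project's actual descriptors or APIs), the following minimal Python sketch shows how a network service could be modelled as a graph of RFBs, each mapped to one of the heterogeneous execution environments mentioned above (VM, container or unikernel), and how a trivial deployment plan could be derived from it.

```python
# Minimal, illustrative sketch of RFB composition (hypothetical names,
# not the project's actual descriptors or APIs).
from dataclasses import dataclass, field

EXEC_ENVS = {"vm", "container", "unikernel"}

@dataclass
class RFB:
    """A Reusable Functional Block with a preferred execution environment."""
    name: str
    exec_env: str  # one of EXEC_ENVS

    def __post_init__(self):
        if self.exec_env not in EXEC_ENVS:
            raise ValueError(f"unknown execution environment: {self.exec_env}")

@dataclass
class ServiceGraph:
    """A network service described as a set of RFBs plus directed links."""
    rfbs: dict = field(default_factory=dict)
    links: list = field(default_factory=list)

    def add(self, rfb: RFB):
        self.rfbs[rfb.name] = rfb

    def connect(self, src: str, dst: str):
        self.links.append((src, dst))

    def deployment_plan(self):
        """Group RFBs by execution environment (a stand-in for real orchestration)."""
        plan = {}
        for rfb in self.rfbs.values():
            plan.setdefault(rfb.exec_env, []).append(rfb.name)
        return plan

# Example: a toy edge service chain (purely illustrative).
svc = ServiceGraph()
svc.add(RFB("ran-baseband", "vm"))         # legacy, hardware-assisted block
svc.add(RFB("firewall", "unikernel"))      # tiny, fast-booting block
svc.add(RFB("video-cache", "container"))   # stateful application block
svc.connect("ran-baseband", "firewall")
svc.connect("firewall", "video-cache")
print(svc.deployment_plan())
```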

1. Selection and detailed description of use cases / scenarios for the integrated project testbed
2. Setup of two interconnected testbeds, elaboration of an integration plan, component deployment and integration
2.1. Final integrated system up and running. Validation and assessment of the integrated system.
3. Kuryr: enabling the Neutron networking ecosystem for containers, regardless of whether they run on bare-metal servers or inside OpenStack VMs. The work done avoids double encapsulation in the nested case and speeds up container boot-up in Neutron networks.
4. RDCL 3D: a tool for the design of NFV-based services on heterogeneous infrastructures.
4.1. The OSM GUI for the OSM lightweight build is based on the RDCL 3D tool. It is part of OSM Release Four, to be released in May 2018.
5. Integration of a decomposed C-RAN solution prototyped with the RDCL 3D tool
6. Unikernel technology: LightVM re-architects the Xen virtualization system and uses unikernels to achieve VM boot times of a few milliseconds for up to 8,000 guests (faster than containers).
7. Unikernel orchestration: OpenVIM extensions to support unikernels; performance evaluation of Virtual Infrastructure Manager extensions (OpenVIM, OpenStack and Nomad) for unikernel orchestration.
7.1. OpenVIM extensions for unikernels merged into mainstream OpenVIM.
8. Intent-based reprogrammable fronthaul network infrastructure
9. Algorithms and tools for allocating flow/packet-processing functions to processors in FastClick
10. Design of a Packet Manipulation Processor (PMP) based on a RISC architecture for extended data-plane programmability
11. Integration of the Citrix Hammer Traffic Generator in the demonstrator
12. A novel approach to resource allocation in the 5G data plane that extends the packet-switching principle to CPUs.
13. OSM adoption and integration in the MEC environment as a Management and Orchestration (MEO) solution for application life-cycle management
14. Platform-aware C-RAN baseband RFB adaptation and optimization using a dynamic task clustering and scheduling approach
15. Scalable and predictable 5G baseband architecture employing dynamic task scheduling and guaranteed-service inter-task connection allocation
16. Evaluation of the cost of software switching for NFV
17. MDP (Markov Decision Process)-optimized VM scaling and load-balancing mechanism for NFV (illustrated by the first sketch after this list)
18. Automated workload fingerprinting, identifying the hardware subsystems on a compute node that are most significantly affected by the deployment of a workload (illustrated by the second sketch after this list)
19. A novel automated approach for the implementation of block abstraction models.
20. Debugging of P4 programs
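To make item 17 above more concrete, here is a minimal value-iteration sketch of an MDP-based VM scaling policy. All states, costs and transition probabilities are invented for illustration; this is not the project's actual model or code.

```python
# Minimal value-iteration sketch for MDP-based VM scaling (illustrative only:
# the states, costs, and transition probabilities below are made up).
import itertools

VM_LEVELS = [1, 2, 3, 4]            # number of running VM instances
LOAD_LEVELS = [0, 1, 2, 3]          # discretised offered load
ACTIONS = [-1, 0, +1]               # scale down, keep, scale up
GAMMA = 0.9                         # discount factor

VM_COST = 1.0                       # cost per running VM per step
SLA_PENALTY = 5.0                   # penalty per unit of unserved load

# Assumed load dynamics: load stays, rises, or falls with fixed probabilities.
LOAD_TRANSITIONS = {"stay": 0.6, "up": 0.2, "down": 0.2}

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def reward(vms, load):
    # Each VM serves one load unit; unserved load incurs an SLA penalty.
    unserved = max(0, load - vms)
    return -(VM_COST * vms + SLA_PENALTY * unserved)

def next_states(vms, load, action):
    """Yield (probability, next_state) pairs for a given action."""
    new_vms = clamp(vms + action, VM_LEVELS[0], VM_LEVELS[-1])
    for move, p in LOAD_TRANSITIONS.items():
        delta = {"stay": 0, "up": 1, "down": -1}[move]
        new_load = clamp(load + delta, LOAD_LEVELS[0], LOAD_LEVELS[-1])
        yield p, (new_vms, new_load)

states = list(itertools.product(VM_LEVELS, LOAD_LEVELS))
V = {s: 0.0 for s in states}

# Standard value iteration over the full state space.
for _ in range(200):
    V = {
        (vms, load): max(
            sum(p * (reward(vms, load) + GAMMA * V[s2])
                for p, s2 in next_states(vms, load, a))
            for a in ACTIONS
        )
        for vms, load in states
    }

# Extract the greedy scaling policy from the converged value function.
policy = {}
for vms, load in states:
    policy[(vms, load)] = max(
        ACTIONS,
        key=lambda a: sum(p * (reward(vms, load) + GAMMA * V[s2])
                          for p, s2 in next_states(vms, load, a)),
    )
print(policy[(1, 3)])  # expect 1: scale up when under-provisioned
```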
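Similarly, item 18 can be illustrated with a small sketch that compares per-subsystem telemetry collected before and after a workload is deployed and ranks subsystems by relative change. The metric names and numbers are invented; this is not the project's actual tooling.

```python
# Illustrative workload-fingerprinting sketch (made-up metrics and numbers).
# Idea: compare per-subsystem telemetry before and after deploying a workload
# and rank subsystems by relative change in mean utilisation.
from statistics import mean

# Hypothetical utilisation samples (percent) on a compute node,
# grouped by hardware subsystem.
baseline = {
    "cpu":     [12, 15, 14, 13],
    "memory":  [40, 41, 39, 40],
    "disk_io": [5, 6, 5, 7],
    "network": [20, 22, 21, 19],
}
with_workload = {
    "cpu":     [55, 60, 58, 57],
    "memory":  [45, 46, 44, 45],
    "disk_io": [6, 5, 7, 6],
    "network": [65, 70, 68, 66],
}

def fingerprint(before, after):
    """Return subsystems sorted by relative increase in mean utilisation."""
    scores = {}
    for subsystem in before:
        b, a = mean(before[subsystem]), mean(after[subsystem])
        scores[subsystem] = (a - b) / b if b else float("inf")
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for subsystem, score in fingerprint(baseline, with_workload):
    print(f"{subsystem}: {score:+.0%} change in mean utilisation")
# The top-ranked subsystems (here cpu and network) form the workload's fingerprint.
```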

Final results

Hereafter we list some areas in which SUPERFLUIDITY has advanced the state of the art. More detailed information can be found in the public deliverable D8.8: Final Report on Innovation and Exploitation Actions (http://superfluidity.eu/results/deliverables/).

Cloud Networking: SUPERFLUIDITY aimed to meet the stringent requirements imposed by future 5G networks by designing and implementing a superfluid, converged network architecture that is location, hardware and time-independent.

Network Services Decomposition and Programmability: SUPERFLUIDITY has devised programming abstractions targeted to 5G functions.

RAN Cloud and Mobile Edge Computing: Beyond the current vision of a static RAN function fully located at a single “edge computing” site, SUPERFLUIDITY supported the ability to dynamically deploy a softwarized C-RAN.

Automated Security and Correctness: SUPERFLUIDITY has worked on automatic correctness verification of network services.

Website & more info

More info: http://superfluidity.eu/.