Software-defined networking

From Wikipedia, the free encyclopedia

Software-defined networking (SDN) is an approach to building computer network devices and software that separates and abstracts two essential components of such devices: the control plane and the data plane. The first concepts originated at Stanford University around 2005. By 2013, many manufacturers were describing their products as SDN-capable, and the term had already become something of a buzzword.

Description

Architecture overview

SDN enables network administrators to manage the network more easily by abstracting the lower functional levels into virtual services: the hardware no longer has to be configured manually. This became increasingly important with the advent of virtualization, in which a larger data center has to create and configure growing numbers of virtual systems over the network, along with the associated firewall rules and network addresses. There are several approaches to building comparable virtual networks (VLANs), but they lead to a high level of complexity. SDN gives network administrators programmable, central control of network traffic without requiring manual access to the individual physical network components.

SDN decouples the system that decides where to send the data (the control plane) from the underlying system that forwards the data to the selected destination (the data plane). The developers and providers of these systems state that this technology simplifies network administration and enables new applications, such as network virtualization, in which the control plane is separated from the data plane and implemented as a pure application.

The Open Networking Foundation was established to promote SDN standards. Trends like cloud computing are blurring the boundaries between network and computer, a technical environment in which SDN standards seem useful. Alcatel-Lucent, however, pursues an approach of its own.

Background

Internet Protocol (IP) networks are based on the concept of autonomous systems (AS). This approach makes it possible to expand networks by connecting more and more nodes, each of which forwards incoming packets to a sensible next node without requiring a detailed view of the entire network structure. The concept is simple and has proven stable and scalable. In its simplest form, however, it does not allow a component to be moved to another point in the network, because it would then no longer receive its packets: the identity of a network component results from its place in the network. In addition, simple AS structures make it difficult to assign certain properties to a component, such as logical grouping, access control, quality of service, specific processing of data before delivery, or context information on data streams that goes beyond the content of the individual packet.
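The hop-by-hop forwarding described above can be illustrated with a longest-prefix-match lookup: each node holds only a small local table of prefixes and next hops, with no global view of the network. The table contents and next-hop names below are made up purely for illustration:

```python
import ipaddress

# Hypothetical local forwarding table: prefix -> next hop.
TABLE = {
    ipaddress.ip_network("10.0.0.0/8"):  "router-1",
    ipaddress.ip_network("10.1.0.0/16"): "router-2",
    ipaddress.ip_network("0.0.0.0/0"):   "default-gw",
}

def next_hop(addr):
    """Forward to the most specific matching prefix (longest match)."""
    a = ipaddress.ip_address(addr)
    best = max((net for net in TABLE if a in net),
               key=lambda net: net.prefixlen)
    return TABLE[best]
```

Because every node resolves only its own table, the network can grow by adding nodes without any of them needing the full topology.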

Supplementary standards such as DHCP, routing protocols, virtual LANs (VLANs), and virtual private networks (VPNs) have been adopted by the Internet Engineering Task Force (IETF) to address these needs. This gradually increased the complexity of specifying and configuring networks for administrators.

The increased use of the Internet, now also via mobile devices, was one of the drivers behind the need for highly standardized operating system instances. Cloud architectures with dynamic resource allocation take this into account, and SDN aims to roll out the required network configurations in a manner comparable to the provisioning of virtual systems. This supports the highly dynamic relocation of more or less heavily used systems to suitable hardware resources: switches no longer have to be configured manually each time to comply with the agreed policies. Instead, adjustments to routing and firewall rules, bandwidth allocations, and so on are automated and centralized, while the decentralized hardware components only take on simple tasks such as forwarding a packet to the correct port.

The central, software-defined control also recognizes specific contexts that can be derived from source-destination relationships. For example, not every packet has to undergo the complete firewall check if earlier, comparable packets from the same connection have already passed it. Mechanisms for integrating manufacturer-specific functions and implementations also exist. OpenFlow goes one step further by standardizing the commands for configuring the data plane. The OpenFlow protocol makes it possible to develop software controllers that control the entire network. They can be implemented in a centralized or distributed manner, placing a layer above the traditional IP core functions that manages the more complex and subscriber-specific network functions.
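The connection-context idea can be sketched as a flow cache: only the first packet of a flow is handed to the central policy check, and the verdict is cached so that later packets of the same flow are decided locally in the data plane. This is a minimal illustration, not a real SDN API; the `Switch` and `controller_check` names are hypothetical:

```python
# Sketch of flow-based caching: the first packet of a flow triggers
# the (expensive) central check; later packets of the same flow are
# decided from the switch's local flow table.

def controller_check(flow):
    """Hypothetical central firewall check (expensive, counted)."""
    controller_check.calls += 1
    src, dst, port = flow
    return port in (80, 443)  # toy policy: allow web traffic only

controller_check.calls = 0

class Switch:
    def __init__(self):
        self.flow_table = {}  # flow -> cached verdict

    def handle(self, flow):
        if flow not in self.flow_table:       # table miss: ask controller
            self.flow_table[flow] = controller_check(flow)
        return self.flow_table[flow]          # table hit: local decision
```

In a real deployment the cached entries would carry timeouts, and the controller would push rules proactively as well as reactively.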

The term "software-defined networking" was coined in 2009 by Kate Greene.

Decoupling of data plane and control plane

An SDN can be configured so that the control plane runs on different hardware from the data plane: a switch then only forwards packets, while a separate server takes over the tasks of the control plane.

There are two reasons for this approach. First, the two planes can be scaled independently, with different numbers of devices and different replacement cycles. Second, hardware optimally suited to each task can be used, e.g. a powerful server for the control plane and a larger number of energy-saving switches for pure forwarding.

The control plane and the data plane must themselves communicate with one another over the network. OpenFlow, administered by the Open Networking Foundation, is a standard for such a protocol, but other methods can also be used.

SDN deployment models

Symmetrical vs. asymmetrical
In an asymmetric model, the global information of an SDN is centralized as much as possible, while the operation of the switches is distributed as much as possible. The expectations are clear: centralizing the configuration avoids redundancies and possible inconsistencies, while distributing the data traffic avoids bandwidth bottlenecks at central points. Open questions remain, however, such as how tolerant such a design is to failures, especially across several locations, and whether such centralized structures can scale sufficiently. In the opposite approach, each component also knows all the control-plane configuration relevant to it, so that even after partial failures each remaining substructure can continue to work normally within its limits. Approaches that have proven particularly practical are those in which the number of control planes is minimal, but each location can work autonomously in an emergency, without a single point of failure.
Floodless vs. flood-based
In the flood-based model, a significant proportion of the global information distribution is achieved by announcing every change using ordinary broadcast and multicast mechanisms. This allows SDN models to become more symmetrical. Transparent bridging is used to make information generally known and to disseminate the identity of the network participants. On the other hand, the network load per node increases as more nodes are added, which limits scalability. In the floodless model, by contrast, the correct functioning of all components is ensured via local caches of the SDN lookup tables, which are synchronized with one another from time to time.
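The floodless variant can be sketched as versioned local caches that merge their entries during periodic pairwise synchronization, instead of broadcasting every change. All names and the sync scheme here are illustrative, not part of any SDN standard:

```python
# Sketch of the floodless model: each node keeps a local lookup cache
# with version numbers; a periodic pairwise sync makes the caches
# consistent without broadcasting every individual change.

class Node:
    def __init__(self):
        self.cache = {}  # key -> (version, value)

    def update(self, key, version, value):
        cur = self.cache.get(key)
        if cur is None or version > cur[0]:  # keep only the newest entry
            self.cache[key] = (version, value)

    def sync_with(self, other):
        """Merge both caches; the higher version wins on each key."""
        for key, (ver, val) in list(other.cache.items()):
            self.update(key, ver, val)
        for key, (ver, val) in list(self.cache.items()):
            other.update(key, ver, val)
```

Between syncs a node may briefly serve a stale entry, which is the price paid for avoiding per-change flooding.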
Host-based vs. network-centric
The host-based model assumes that in data centers with many virtual machines it is advantageous to do the SDN processing on the hypervisor system, because free capacity is always available for the relatively low load. The network-centric approach, by contrast, stays traditional, with dedicated routers, and does not delegate routing functions to the virtualization hosts.

Some boundary lines cannot always be drawn sharply. Virtualization hosts can take on some CPU-heavy SDN tasks such as encrypting VPN traffic, while other functions are located on dedicated SDN servers. There are also certain dependencies; host-based approaches imply an asymmetrical design.

Applications

One application of SDN is Infrastructure as a Service (IaaS). Here SDN is combined with virtual systems and virtual storage, allowing an "elastic", i.e. demand-driven, resource allocation. Scale-out scenarios in particular, where additional systems are brought online as required, benefit from the possible automation. Providers of very large software installations such as Google and Facebook show that suitable software architectures can be found for such distributed systems. But even for small applications running on a single VM, the same mechanisms can help to abstract completely from the (network) hardware.

Another approach provides for dynamic reallocation of virtual systems to the virtualization hosts. The aim is to utilize as few hosts as possible as fully as possible. Virtualization environments in which reallocations are carried out only manually, if at all, at best take overloaded hosts into account, while unused capacity is mostly ignored, leading to a creeping underutilization of the hosts concerned and a waste of resources.

SDN allows loads to be distributed over many connections, for example between the application servers and the network backbone. Traditionally, VLANs are defined manually here, and routes are set or bonding is used, which is time-consuming and does not allow dynamic adaptation to changing loads. Load balancing across application servers, distributed firewalls, and other applications are also included.
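Spreading flows over several parallel links can be sketched with a simple hash-based scheme, similar in spirit to equal-cost multipath (ECMP): each flow is hashed onto one of the available links, so packets of the same flow always take the same path (preserving packet order) while different flows spread across links. The names below are illustrative only:

```python
import hashlib

def pick_link(flow, links):
    """Map a flow (src, dst, port) deterministically onto one link."""
    digest = hashlib.sha256(repr(flow).encode()).digest()
    return links[digest[0] % len(links)]  # stable choice per flow
```

An SDN controller could instead install per-flow rules weighted by current link load, which is exactly the kind of dynamic adaptation manual VLAN and bonding setups cannot offer.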

SDN can also support Managed Network Services (MNS) in companies or at carriers. The concern here is service level agreements: even with constant changes to the network, as is inevitable in such environments, each individual subscriber should receive their guaranteed bandwidth, latency, availability, and security features.

If the SDN "overlay" does not take the characteristics of the underlying infrastructure into account, inefficiency and low throughput are the likely consequences. Carriers in particular are therefore interested in SDN solutions that take data volumes, topology, and hardware into account and react accordingly. Accordingly, there are proposals for SDN solutions that incorporate network resources so that the data stream can be continuously optimized and requirements are handled more predictably.

Access control in SDN

For security reasons, administrators are usually given remote access to the control plane via role-based access control (RBAC).
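A role-based access check can be sketched as a mapping from roles to permitted operations: a request is allowed only if one of the user's roles grants the operation. The role and permission names below are purely illustrative:

```python
# Minimal role-based access control (RBAC) sketch: permissions are
# attached to roles, and users acquire them only through their roles.

ROLE_PERMISSIONS = {
    "netadmin": {"read_config", "write_config", "push_flow_rules"},
    "operator": {"read_config"},
}

def allowed(user_roles, operation):
    """True if any of the user's roles grants the operation."""
    return any(operation in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)
```

Keeping permissions on roles rather than on individual users means an administrator's rights change automatically when their role assignment changes.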


Web links