Network congestion avoidance

from Wikipedia, the free encyclopedia

Network congestion avoidance is the process in telecommunication networks of preventing traffic jams. The underlying problem is that all resources are limited, in particular the available channel capacity and the computing power of the individual routers. Loading the network beyond its available capacity leads to increased transit delay (latency), more or less pronounced fluctuations in that delay (jitter), and the loss of data packets.

Deliberately overloading one or more network components is known as a denial-of-service (DoS) attack and is used to render individual hosts or parts of a network unusable.

Procedures for avoiding overload situations can be applied at various points; each has specific advantages and disadvantages.

End-to-end strategies

Traffic originates at a source host and finds its way through the network, hop by hop, to its destination host. Both the destination host and each individual hop can tell the source host to throttle its sending rate. Connection-oriented protocols such as TCP typically monitor the number of corrupted or lost data packets as well as delays in order to adapt the transmission rate accordingly.
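The rate adaptation described above can be illustrated with a minimal sketch of additive-increase/multiplicative-decrease (AIMD), the control loop underlying classic TCP congestion control. The function name and parameters are illustrative, not taken from any real TCP implementation:

```python
def aimd_update(cwnd: float, loss_detected: bool,
                increase: float = 1.0, decrease: float = 0.5) -> float:
    """One round-trip update of a TCP-style congestion window (AIMD).

    Simplified model: grow additively by one segment per RTT while no
    loss is observed, shrink multiplicatively when loss signals congestion.
    """
    if loss_detected:
        return max(1.0, cwnd * decrease)  # back off multiplicatively
    return cwnd + increase                # probe for more bandwidth

# The window grows linearly, then halves when the network signals congestion.
cwnd = 10.0
cwnd = aimd_update(cwnd, loss_detected=False)  # -> 11.0
cwnd = aimd_update(cwnd, loss_detected=True)   # -> 5.5
```

The asymmetry (slow growth, fast backoff) is what lets many competing connections converge toward a fair share of the bottleneck capacity.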

TCP/IP-based congestion avoidance

TCP global synchronization occurs when many parallel TCP connections lose packets in a router queue at the same time because the buffer does not distinguish service classes but simply drops the last elements of the queue. As a result, many (or all) TCP connections throttle their transmission streams almost simultaneously, so that the congestion clears. After a while, however, the connections increase their transmission rates again, so that another overload situation can occur. This oscillation (a wave-like rise and fall) of all TCP connections is called TCP global synchronization; the term is closely associated with queue tail drop, i.e. the dropping of packets at the end of the queue without regard to service classes.
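The mechanism can be sketched in a few lines: a tail-drop buffer discards every arrival once it is full, so when several flows burst into it together, all of them see loss in the same instant and back off together. The topology and flow names below are a made-up toy example:

```python
from collections import deque

def tail_drop_enqueue(queue: deque, packet, capacity: int) -> bool:
    """Tail drop: if the buffer is full, the arriving packet is simply
    discarded, regardless of which flow or service class it belongs to."""
    if len(queue) >= capacity:
        return False  # dropped at the tail
    queue.append(packet)
    return True

# Three flows each send a packet per round into a buffer of size 4.
# Once the buffer fills, every flow loses packets at the same time,
# so all of them would halve their windows together (global synchronization).
queue = deque()
dropped = {flow: 0 for flow in "ABC"}
for rnd in range(3):
    for flow in "ABC":
        if not tail_drop_enqueue(queue, (flow, rnd), capacity=4):
            dropped[flow] += 1
print(dropped)  # every flow sees loss: {'A': 1, 'B': 2, 'C': 2}
```

Because the loss signal hits all flows simultaneously rather than at random, their AIMD control loops fall into lockstep, which is exactly what active queue management tries to break up.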

Hop-based strategies

Every network element that is not transparent is called a hop. A router is such a network element; each router runs an operating system that controls the hardware and provides the means to take part in network congestion avoidance: on the one hand through the choice of routing protocol, on the other hand through active queue management (AQM).

Routing protocol

Based on the routing protocol in use, a router decides to which of the neighbouring routers known to it each packet is forwarded.
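As a sketch of such a decision, link-state protocols such as OSPF compute shortest paths with Dijkstra's algorithm and forward each packet to the first hop on the cheapest path. The graph, router names, and function below are a hypothetical toy example, not any real protocol implementation:

```python
import heapq

def next_hop(graph: dict, source: str, dest: str):
    """Return (first hop from `source` on the cheapest path to `dest`, cost),
    using Dijkstra's algorithm over link costs."""
    pq = [(0, source, None)]  # (cost so far, node, first hop taken)
    seen = set()
    while pq:
        cost, node, first = heapq.heappop(pq)
        if node in seen:
            continue
        seen.add(node)
        if node == dest:
            return first, cost
        for neigh, weight in graph.get(node, {}).items():
            if neigh not in seen:
                heapq.heappush(
                    pq, (cost + weight, neigh, neigh if first is None else first))
    return None, float("inf")

# Toy topology: R1 reaches R4 more cheaply via R3 (cost 2) than via R2 (cost 6).
graph = {
    "R1": {"R2": 1, "R3": 1},
    "R2": {"R4": 5},
    "R3": {"R4": 1},
}
print(next_hop(graph, "R1", "R4"))  # ('R3', 2)
```

The forwarding table entry for R4 would thus point at R3; when link costs change, the protocol floods the update and each router recomputes its next hops.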

Bufferbloat avoidance

Buffers are necessary and sensible: they serve to absorb load peaks. However, each router contains multiple buffers, and their sizes have grown over time. During congestion, all buffers fill up and the undesirable effects described above occur. It is therefore necessary to match the size of each buffer as closely as possible to its actual need.
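A common rule of thumb for "the actual need" is the bandwidth-delay product (BDP): a bottleneck buffer of roughly one BDP can absorb a burst without adding gratuitous queueing delay. The following sketch just evaluates that formula; the function name and example figures are illustrative:

```python
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product in bytes: link rate (bits/s) times
    round-trip time (s), converted from bits to bytes."""
    return bandwidth_bps / 8 * rtt_s

# A 100 Mbit/s link with a 40 ms round-trip time needs roughly 500 kB
# of buffer; a buffer much larger than that no longer absorbs bursts,
# it only adds standing queueing delay (bufferbloat).
print(bdp_bytes(100e6, 0.040))  # 500000.0
```

Oversizing beyond this point is precisely the bufferbloat problem: the extra capacity does not raise throughput, it is simply paid for in latency.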

Active Queue Management (AQM)

The network scheduler manages the data packets in the outgoing queue buffer. Buffers that are too small lead to packet loss during load peaks; buffers that are too large lead to increased transit delay when they are full. Depending on the algorithm used, the network scheduler can both deliberately discard packets in the buffer and change the order of the packets within it. A common example is the prioritization of data packets belonging to a real-time connection, such as IP telephony packets, or of packets belonging to an SSH session.
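One classic AQM algorithm in this spirit is Random Early Detection (RED), recommended in RFC 2309: instead of waiting for the buffer to overflow, the router starts dropping arriving packets at random once the average queue length passes a threshold, signalling individual flows early and thus avoiding global synchronization. The following is a minimal sketch of RED's drop-probability curve; the parameter values are made up for illustration:

```python
def red_drop_probability(avg_qlen: float, min_th: float, max_th: float,
                         max_p: float = 0.1) -> float:
    """RED drop probability: zero below min_th, rising linearly to max_p
    at max_th, and 1.0 (forced drop) once the average queue exceeds max_th."""
    if avg_qlen < min_th:
        return 0.0          # queue short: accept everything
    if avg_qlen >= max_th:
        return 1.0          # queue critical: drop everything
    return max_p * (avg_qlen - min_th) / (max_th - min_th)

# Drop probability grows smoothly with the average queue length:
for q in (2, 10, 15, 25):
    print(q, red_drop_probability(q, min_th=5, max_th=20))
# 2 -> 0.0, 10 -> ~0.033, 15 -> ~0.067, 25 -> 1.0
```

Because drops are probabilistic, different TCP flows back off at different moments, which smooths out the wave-like oscillation described in the section on TCP global synchronization.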


Web links

  • RFC 2309 "Recommendations on Queue Management and Congestion Avoidance in the Internet"