21 Congestion at Network layer and MPLS

Prof. Bhushan Trivedi


 

Introduction

 

We have seen how the network layer processes packets, including how routing takes place in wired and wireless networks, and how it serves the transport layer by taking services from the data link layer. We have also discussed how a router finds the next immediate router and forwards a packet using its routing table. The connectionless forwarding deployed at the IP layer introduces problems of its own. One of them is congestion, a traffic jam of data: when a router cannot process data as fast as it arrives, its buffers fill up, and if the process continues, it starts dropping packets. That phenomenon is known as congestion. We need to handle it at the network layer, as that is where routing decisions are made. Moreover, rudimentary routing based only on the destination IP address is not suitable for ISPs and other networks where service-based routing is preferred. In this module we will study one typical approach to providing such service, known as Multiprotocol Label Switching (MPLS).

 

Congestion

 

Congestion, in short, is a traffic jam in a network. Often, for a router, the inflow is greater than the outflow: packets keep pouring in at the input ports but do not leave the output ports at the same pace. As a consequence, packets start accumulating and filling the router's buffers, and the outgoing queues grow longer and longer. Eventually the buffers overflow and the router starts dropping packets. We will learn about congestion and some methods to handle it in the following sections.

 

Congestion is likely when the traffic that needs to pass through the network exceeds its capacity. Let us understand the difference between a congestion problem and a flow control problem. A flow control problem occurs when a sender transmits faster than the receiver can receive, so the receiver's buffers overflow and packets are dropped. In congestion, the intermediaries receive more packets than they can handle; there is no mismatch between sender and receiver speeds. It is the network, or the intermediaries, that cannot handle the packets at the rate at which they arrive. However, both problems are managed with similar techniques. For example, the place where the problem occurs is usually provided with enough buffers to survive a reasonable jump in traffic. If the traffic increases in short bursts, flow control is managed by buffers at the receiver and congestion is managed by buffers in the routers. When the traffic goes high, the buffers hold the additional packets; as it goes down, the output lines continue sending at full speed for a while to drain those buffers.

 

We have already learned that network traffic is bursty in nature. This burstiness actually helps during congestion. Since not everybody sends at full speed at the same time, and a sender blasts at full speed only once in a while, ISPs can manage their networks in a more optimized fashion. Let us take an example.

 

Consider an ISP with five customers, each connected by a 2 Mb line, while the ISP connects to the rest of the world over a 10 Mb line. If every customer sent at full speed, even a little additional traffic would congest the outgoing line, as the total would exceed 10 Mb. In fact, the bursty nature of customer traffic helps the ISP here, because a customer rarely sends at peak speed (2 Mb in this case). If each of them sends at half speed on average, the ISP carries only 5 Mb of output, and its 10 Mb line is more than sufficient. If one sender transmits faster for a while, say at 3 Mb, the line can still accommodate it without a problem. Even when two users send at 3 Mb each, the line can accommodate one more Mb, as the total comes to only 9 Mb. As we know, such bursts do not last long, so this situation does not persist. By the time some other user decides to send more data, the earlier senders have probably exhausted their bursts. Even if the flow momentarily exceeds 10 Mb, the ISP's buffers can hold the extra packets until the burst ends.
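The arithmetic of this example can be captured in a few lines. The sketch below uses the numbers from the example itself; the helper function and the burst patterns are only an illustration, assuming each customer's current rate is known.

```python
# Aggregate load at the ISP's 10 Mb uplink for five bursty customers
# on 2 Mb access lines (numbers from the example above).
ACCESS_RATE = 2.0   # Mb, the peak speed of one customer line
UPLINK = 10.0       # Mb, the ISP's outgoing line

def uplink_ok(rates):
    """Return whether the combined customer traffic fits the uplink."""
    total = sum(rates)
    return total <= UPLINK, total

# Average case: everyone at half speed -> 5 Mb, plenty of headroom.
print(uplink_ok([1.0] * 5))                  # (True, 5.0)
# Two customers bursting at 3 Mb, the rest at 1 Mb -> 9 Mb, still fits.
print(uplink_ok([3.0, 3.0, 1.0, 1.0, 1.0]))  # (True, 9.0)
# Everyone at peak speed -> exactly 10 Mb; any extra traffic congests.
print(uplink_ok([ACCESS_RATE] * 5))          # (True, 10.0)
```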

 

Congestion becomes a serious problem only when the bursts last long enough to fill the buffers completely. When a router has no buffer space left and no capacity to forward the incoming packets, it has no option but to drop them. Unfortunately, congestion feeds upon itself. A congested router drops packets and slows down in responding. The neighboring routers start feeling the pressure: they do not receive acknowledgments, so they cannot discard packets from their outgoing queues, and congestion builds up at their end too. It is similar to a congested crossroads congesting the adjoining crossroads unless some action is taken to alleviate it.

 

Congestion Control Algorithms

 

The methods to alleviate congestion are known as congestion control algorithms, and many of them exist. They can be categorized into two types: congestion avoidance algorithms and congestion detection algorithms. A detection algorithm lets every sender send as much data as it wants, whenever it wants, and takes remedial action if congestion occurs. Preventive (avoidance) solutions, on the other hand, make the network control each sender, allowing or denying transmission based on network conditions. As the Internet is connectionless, the detection strategy works better there: IP has no way to decide how much or when a sender may send; it can only observe the network condition and take evasive action when congestion appears. We have discussed how the autonomy of routers helps solve such problems by avoiding congested paths. On the contrary, a connection-oriented network can maintain strict control over the connection establishment process and admit a connection only if the network can manage it. This strategy demands a design in which every sender is given a specific bandwidth or rate at which it may transmit, with the network strictly monitoring compliance. The network refuses new connections when it cannot handle them and also reduces traffic on ongoing connections. These processes are known as admission control and flow control.
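As a rough illustration of admission control, consider the sketch below. The capacity figure, the rate bookkeeping, and the function name are hypothetical; real networks negotiate rates during connection establishment.

```python
# Admission control in a connection-oriented network: admit a new
# connection only if its requested rate still fits in the capacity.
CAPACITY = 100.0    # Mb the network can carry; an illustrative figure
admitted = []       # rates of the connections currently carried

def admit(requested_rate):
    """Allow the new connection only if the network can manage it."""
    if sum(admitted) + requested_rate <= CAPACITY:
        admitted.append(requested_rate)
        return True    # connection established at the agreed rate
    return False       # denied: the network cannot handle it

print(admit(60.0))  # True
print(admit(50.0))  # False: 60 + 50 would exceed the 100 Mb capacity
```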

 

Congestion Control Management in the Internet

 

The Internet provides two different layers of congestion control: it detects and recovers from congestion using the autonomy of routers at the network layer, and it provides flow control at the TCP level to reduce congestion. There are two ways TCP learns about congestion. The first is known as implicit congestion control: whenever a retransmission occurs, TCP assumes congestion and acts accordingly. The second, known as explicit congestion notification, gives the sending TCP process explicit information about congestion, forcing it to reduce its flow. Both methods are used in the Internet. However, admission control is not directly provided at either the IP or the TCP layer. When VoIP connections are running, admission control is needed: one additional connection might reduce the quality of the other connections to an unacceptable level, and ideally it should be denied. Since the transport layer does not provide that service, regulation is done at the application layer. A recent addition to congestion control happens at Internet routers, known as Random Early Discard, and is described below.

 

Global Synchronization and Random Early Discard

 

When a router's buffers overflow, the normal reaction is to drop incoming packets one after another. A router typically carries multiple connections, and this process hits all the connections passing through it (assuming all of them are sending with almost equal probability). TCP is designed so that whenever a connection experiences a retransmission, it assumes congestion and drops back to sending one segment at a time, however large the sender's window was, and then quickly builds up to half of the original window. This behavior has a dramatic effect on TCP's performance. Let us try to understand.

 

Consider a router with 10 connections, each carrying 2 kb of data, for a total inflow of 2 × 10 = 20 kb. Assume the network capacity has dropped to 19 kb; that is why the queue is full and newly arriving packets start being dropped. When the router's queue is full, it drops every incoming packet for a while, inducing a packet loss on each of the connections. TCP then reduces the traffic on every connection, say to 0.5 kb each, so the incoming flow falls to 0.5 × 10 = 5 kb. That is far too little for the network, and for a while the bandwidth is wasted. Soon the senders' windows grow back to full capacity, the flow of 20 kb causes congestion yet again, the flow drops back to 5 kb, rises to 20, and falls back to 5 once more. Such erratic oscillation neither pleases the user nor utilizes the network bandwidth properly. This phenomenon is known as global synchronization. Clearly there is no need to increase and decrease the traffic to such extremes, and a better solution is needed. That solution, known as Random Early Discard, is described in the following.
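A toy simulation of this oscillation follows, using the numbers from the example. The ramp-up step of 0.5 kb per tick is an assumption for illustration, not real TCP window arithmetic.

```python
# Global synchronization: 10 connections, 2 kb each, capacity 19 kb.
CONNS, FULL, CUT, CAPACITY = 10, 2.0, 0.5, 19.0

rates = [FULL] * CONNS
for tick in range(12):
    total = sum(rates)
    print(f"tick {tick}: offered load {total:.1f} kb")
    if total > CAPACITY:
        # Tail drop hits every connection at once: all back off together.
        rates = [CUT] * CONNS
    else:
        # All senders also ramp back up together.
        rates = [min(FULL, r + 0.5) for r in rates]
# The load oscillates 20 -> 5 -> 10 -> 15 -> 20 -> 5 ...: the link sits
# underused most of the time even though demand never went away.
```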

 

When the output queues grow alarmingly long, the router does not wait until they are full; it drops a packet belonging to a random TCP connection. That reduces traffic on that one connection while the others continue at full speed. The total traffic is now 9 × 2 + 0.5 = 18.5 kb, just below the network's current capacity of 19 kb, and most connections see no degradation in their speed at all. If the network enters severe congestion and the capacity falls further to 16 kb, RED drops packets on two more connections and the traffic reduces to 7 × 2 + 3 × 0.5 = 15.5 kb, which works. There is no dramatic rise and fall, and users do not experience erratic swings in their traffic.

 

RED seems a good solution, but a user whose packet is dropped might find it unfair: others are allowed to send at full speed while he is not, and he may feel deprived of his rights if he has paid the ISP for his connection. RED handles this by picking the connection to drop at random, so the same user is not picked every time. Because the method discards a packet a little earlier than congestion actually sets in, it is named Random Early Discard.
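A minimal sketch of the RED decision logic is given below. The thresholds, the averaging weight, and the maximum drop probability are illustrative; real routers tune these per interface.

```python
import random

# Random Early Discard: start dropping, with growing probability,
# before the queue is actually full.
MIN_TH, MAX_TH = 5, 15      # queue-length thresholds (packets)
MAX_P, WEIGHT = 0.1, 0.2    # max drop probability, averaging weight

avg_queue = 0.0

def red_should_drop(queue_len):
    """Decide whether to drop an arriving packet."""
    global avg_queue
    # An exponentially weighted average smooths out short bursts.
    avg_queue = (1 - WEIGHT) * avg_queue + WEIGHT * queue_len
    if avg_queue < MIN_TH:
        return False        # no congestion in sight: keep everything
    if avg_queue >= MAX_TH:
        return True         # severe congestion: drop every arrival
    # In between, drop with a probability that rises with the average;
    # the random choice is what spreads the drops across connections.
    p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p
```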

 

Jitter control

 

All the methods above, except admission control, are good for normal data traffic. For real-time traffic such as audio or video, they make little sense. It is sometimes better to continue sending at full speed even when congestion arrives: the user might lose a few frames but continues to receive the video or can continue talking. For an average human user, the loss of a small number of frames is less disconcerting than a complete loss of communication or an unacceptable delay. Thus we need some other way to solve the problem for real-time traffic.

 

The process that smooths the delivery of real-time traffic and keeps the variation in inter-packet delivery delay to a minimum is known as jitter control. The variation in inter-packet intervals is known as jitter, and the idea is to reduce it, keeping the delivery intervals as close to the original intervals as possible.

 

For jitter control, routers normally provide a separate queue for real-time traffic and let a real-time packet go before other data packets, even when the data packets in the queue arrived earlier. When a packet arrives ahead of its schedule, the router may hold it in the queue until its scheduled departure time.
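The two ideas above, priority for real-time packets and holding early arrivals until their scheduled time, can be sketched as follows. The packet fields and the queue layout are hypothetical.

```python
import heapq
import time

REALTIME, BULK = 0, 1   # lower number is served first

queue = []              # entries: (priority, arrival_order, packet)
arrivals = 0

def enqueue(packet, realtime=False):
    """Real-time packets go ahead of bulk data already in the queue."""
    global arrivals
    heapq.heappush(queue, (REALTIME if realtime else BULK, arrivals, packet))
    arrivals += 1

def dequeue(now):
    """Serve the head of the queue, holding packets that arrived early."""
    _, _, packet = heapq.heappop(queue)
    wait = packet.get("departure_time", now) - now
    if wait > 0:
        time.sleep(wait)   # hold until the scheduled departure time
    return packet

enqueue({"id": "data-1"})
enqueue({"id": "voice-1", "departure_time": 0.0}, realtime=True)
print(dequeue(now=0.0))    # voice-1 jumps ahead of the earlier data-1
```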

 

To learn which services a packet needs, IP provides a field called differentiated services; we will not elaborate on it further. We have also seen that the Wi-Fi extensions provide separate service classes for real-time traffic.

 

The process of Switching

 

Routers in the Internet route traffic based on the destination address of the packet. A few methods that work better for many modern situations have been proposed; switching is one of them.

 

Switching is a method in which the packet is forwarded not on the basis of the destination address but on some other information, known as a tag, which is much smaller than a destination address.

 

The conventional process of routing is described as follows.

 

1. The router receives a packet from some neighbor router or a neighbor network.

 

2. It extracts the destination address from the packet.

 

3. It extracts the network portion from the destination address.

 

4. It then looks up the routing table to find the best next router for that destination network.

 

5. It sends the packet over to the best neighbor.

 


 

This process changes a bit when switching is used. The problem with the routing process is that, even after all possible optimizations such as prefix-based aggregation and hierarchical routing, we cannot shrink the routing table to a minimal number of entries, and processing long IP addresses with their respective masks takes more time than we would like.
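Steps 3 and 4 of the five-step process above amount to a longest-prefix match, sketched below with a made-up routing table.

```python
import ipaddress

# A tiny routing table; the entries are invented for illustration.
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"):    "router-A",
    ipaddress.ip_network("10.20.0.0/16"):  "router-B",
    ipaddress.ip_network("10.20.30.0/24"): "router-C",
}

def next_hop(destination):
    addr = ipaddress.ip_address(destination)
    # Every prefix must be checked against the address and its mask;
    # among the matches, the longest prefix wins. This per-packet work
    # is exactly what switching tries to avoid.
    matches = [net for net in ROUTES if addr in net]
    if not matches:
        return None
    return ROUTES[max(matches, key=lambda net: net.prefixlen)]

print(next_hop("10.20.30.40"))   # router-C: the /24 beats the /16 and /8
```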

 

Ideally, administrators would like to group packets by service requirement rather than by destination address. For example, customers downloading a file demand a different service than a VoIP connection: a low-delay, low-bandwidth path is better for VoIP, while a longer-delay, high-bandwidth path is better for the file download. Similarly, different categories of customers need different types of service; a gold customer may expect 1 Gbps, a silver customer 700 Mbps, and a bronze customer only 500 Mbps. The idea is to tag each packet according to the service it requires, irrespective of the destination IP address. Whenever a packet enters the network, it is examined, the service it requires is determined, and the packet is tagged with a label indicating that service. The routers along the path then only need to look at that label to forward the packet toward the destination.

 

Such a design simplifies forwarding: only a handful of service categories need to be honored in a network, so only a handful of tags are needed. The forwarding tables become much shorter, and processing small tags is much faster than processing lengthy IP addresses.
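By contrast, a label-based forwarding table is one exact-match lookup on a small integer. The tag values below are hypothetical; the same numbers reappear in the edge-classification example later in this module.

```python
# A switched forwarding table keyed by tag rather than by IP prefix.
TAG_TABLE = {
    5: "path-to-mail-servers",    # SMTP service class
    3: "path-to-web-servers",     # HTTP service class
    2: "network-1-policy-path",   # source-based policy route
}

def forward(tag):
    # A single dictionary lookup: no prefixes, no masks, no comparisons.
    return TAG_TABLE[tag]

print(forward(5))   # 'path-to-mail-servers'
```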

 

Business relationships can also be modeled with this scheme. For example, suppose an ISP called ISP-1 has a business relationship with ISP-2. When ISP-2's packet enters ISP-1, ISP-1 tags the packet as per that business relationship and treats it accordingly. A multi-tier ISP system, where one ISP works under another, can also be modeled, by pushing an additional label on top of the existing label. Thus in a larger hierarchy, where every ISP has its own way of interpreting packets and its own business relationships with others, each finds its own tag on the packet and treats it accordingly. We describe that process in a little more detail in the following.

 

Labeling hierarchy in MPLS

 

To understand the hierarchy and its impact on MPLS processing, consider figure 31.1. There are four service providers. ISP-1 acts at the top layer, ISP-2 and ISP-3 act below it, and the fourth ISP works under ISP-3. Thus ISP-1 provides services to the customers of ISP-2 and ISP-3, and ISP-3 in turn provides services to ISP-4. Normally, the larger ISPs get a bunch of IP addresses and distribute them to their customers (the lower-level ISPs) based on their contracts and requirements. The lowest-level ISPs serve customers like us and provide us IP addresses and services based on our contract with them. One typical service that we always insist on is bandwidth, 2 Mb or 20 Mb, etc.1 Once a customer has decided to take a particular service and has paid for it, his packets are to be tagged with the specific service type he is entitled to. Moreover, the treatment changes when the packet reaches another ISP's region. For example, suppose we are customers of ISP-3 and our packet now enters ISP-2's area. How that packet is treated there depends solely on the contract between ISP-2 and ISP-3: a tag indicating the service our packet is entitled to in ISP-2's region is additionally pushed onto it. A customer may be given some options to choose from by the lowest-level ISP, and we get the service we have subscribed to; higher-grade services are provided to those who pay for them. The ISP's network must forward packets accordingly, and as far as possible, that ISP's contracts with others should also reflect that policy.

1 Usually, bandwidth is quoted as a peak value for bursts, an average value for the normal case, and so on.

 

Look at the case described in figure 31.1. Assume a packet arrives at D and has to travel along the path D-I-P-Q-L-M-W. The path crosses regions belonging to multiple ISPs, and the figure depicts the regions of the respective ISPs. When the packet arrives at router D in ISP-1, ISP-1 pushes its specific tag based on its contract with the sender. When the packet goes to I, which belongs to ISP-2, a new tag is pushed on top of it, indicating the service it should receive based on the contract between ISP-1 and ISP-2. When it leaves from P, it leaves ISP-2's region, so that tag is removed. It enters ISP-3's region when it reaches Q, so a new tag is pushed indicating the service it is to receive based on ISP-3's contract with ISP-1. It then enters ISP-4's region. Unlike the earlier case, ISP-4 is a subset of ISP-3, so the packet has not left ISP-3's region and that tag is not removed; instead, a new tag is pushed based on ISP-4's understanding of how ISP-3's customers are to be treated. When the packet leaves M, that tag is removed, as it is leaving ISP-4's region and going back into ISP-3's region. When it reaches W, it has also left ISP-3's region, so that tag is removed too. If the packet is to travel further outside the ISP-1 network, X must also remove the ISP-1 tag.
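The pushes and pops along this path can be traced with a simple stack, as sketched below. That L is the entry point into ISP-4's region is an assumption read from the figure, and the tag names are placeholders.

```python
# Label stack of the packet along the path D-I-P-Q-L-M-W of figure 31.1.
stack = []

def push(tag):
    stack.append(tag)
    print(f"push {tag}: stack = {stack}")

def pop():
    print(f"pop {stack.pop()}: stack = {stack}")

push("ISP-1-tag")   # D: the packet enters ISP-1's region
push("ISP-2-tag")   # I: it enters ISP-2's region
pop()               # P: it leaves ISP-2's region
push("ISP-3-tag")   # Q: it enters ISP-3's region
push("ISP-4-tag")   # L: it enters ISP-4, still inside ISP-3, so no pop
pop()               # M: it leaves ISP-4, back into ISP-3's region
pop()               # W: it leaves ISP-3; only the ISP-1 tag remains
```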

 

Figure 31.2: The tags attached to and detached from the packet as it travels along the path of figure 31.1. The left column indicates the node where the packet currently is; the right column indicates the tags attached to the packet, in sequence.

 

Figure 31.2 depicts the addition and removal of tags at the various nodes. The advantage of this mechanism is that, while a packet is traveling over a link, the routers can determine exactly the type of service to be provided to it. Moreover, forwarding becomes faster, as the tags are small and there are only a handful of them, depending on the number of ISPs the packet traverses and the types of service they provide.

 

MPLS not only simplifies forwarding; it also helps to monitor whether SLAs (Service Level Agreements) or MoUs (Memoranda of Understanding) are being followed. Organizations and ISPs often have clear-cut MoUs for service. With MPLS in place, it is easy for an organization to see whether the MoUs or SLAs are being met.

 

Providing services for other layers

 

One of the major advantages of MPLS is the packet classification process it enables. Routers have the liberty to route packets based on the administrator's instructions rather than on specific fields of a specific header. Monitoring service level agreements, providing specific services to a specific class of customers, source-address-based routing, and load balancing between multiple routes are all possible with MPLS tags. MPLS enables users to give a packet a specific flow label indicating the type of service it should receive.

 

MPLS allows labeling based not only on layer-3 information (the IP address) but also on other layers, such as layer-4 (TCP or UDP) fields like the port number, which indicates a particular application, among other things. A user can decide that SMTP traffic (going to the mail server) is routed differently from FTP traffic (going to the file server). Users can also give instructions such as: outgoing requests of network-1 users must pass only through proxy server-1, and those of network-2 users only through proxy server-2, by routing accordingly. If a user wants UDP traffic routed separately, that is possible when MPLS tagging is done on layer-4 data; such a requirement is quite common for administrators who are serious about security and view UDP traffic as 'dangerous'. Another common requirement is a separate path for HTTP traffic, which usually has the major share of network traffic. That too is possible with MPLS. Even ISPs tend to use MPLS, especially when connecting to other ISPs, to monitor and control traffic.

 

Process

 

When MPLS is implemented in a network, the routers are first instructed about the types of labels possible on that network. The routers at the edge of the network label incoming packets based on their content, the administrator-provided rules, and the label types. For example, when a new packet comes in, the router looks at the destination port number; if it is 25 (heading for a mail server), the router might attach tag 5, based on its understanding that tag 5 is for SMTP traffic. If the port number is 80 (heading for a web server), the router might attach tag 3. If the packet comes from network-1, the router might attach tag 2, based on the instruction that traffic from network-1 should travel along a specific route irrespective of where it is heading.
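A sketch of such edge classification rules follows, using the tag numbers from this example (5 for SMTP, 3 for HTTP, 2 for network-1). The rule order and the packet field names are assumptions.

```python
def classify(packet):
    """Return the tag an edge router would attach to this packet."""
    if packet["src_net"] == "network-1":
        return 2      # policy: network-1 traffic takes a fixed route
    if packet["dst_port"] == 25:
        return 5      # SMTP, heading for the mail server
    if packet["dst_port"] == 80:
        return 3      # HTTP, heading for a web server
    return 0          # default service class

print(classify({"src_net": "network-2", "dst_port": 25}))   # 5
print(classify({"src_net": "network-1", "dst_port": 80}))   # 2
```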

 

Subsequent routers along the path look only at the label and forward the packet accordingly, without looking at the IP header or the destination IP address. By choosing a particular path for a particular stream of packets, it is possible to provide a specific quality of service. For example, if we reserve a particular segment for routing VoIP calls, only the packets tagged with that label are routed into that segment. Another possibility is to divide the available bandwidth between different types of customers based on their chosen plans.

 

Finally, when the packet leaves the network, the final router does the important job of stripping the label off the packet and sending it out as it entered. The reason is obvious: the routers outside this network have little idea about the tags it uses and cannot handle them.

 

The routers at the edges, which attach and detach the tags, and the intermediary routers, which forward packets based on tags, must be able to read and process tags. Such routers are known as MPLS-enabled routers.

 

MPLS seems to work like a virtual circuit, but there are some important differences.

 

 

1. The labels do not depend on the values in the IP header, especially the sender's and receiver's addresses. The value is decided by the administrator based on the category of traffic to be routed separately. A virtual circuit, in contrast, decides a path based precisely on those values. A critical difference that follows from this is that packets with different destination addresses may carry the same tag, whereas virtual circuits must use different values for different destination addresses.

 

2. A virtual circuit is a connection-oriented service and demands connection establishment and closing phases, which are not required here. However close it might seem, MPLS is connectionless, not connection-oriented.

 

3. In the case of a VC, every router along the path keeps an entry for the virtual circuit number and routes packets based on it. MPLS, being connectionless, takes a different approach. When we study the MPLS frame format, we will see how MPLS handles this.

 

MPLS tag format

 

 

What can we notice from the figure showing the tag format? Let us list the observations.

 

1. You can notice the similarity between the packet classification tag and the MPLS tag. Both are attached just before the IP header, for the same reason: they need to be processed before the IP header is either processed or skipped.

 

2. An important outcome of the MPLS header being attached above IP is that IPv4 and IPv6 traffic can be routed in the same manner if their requirements are the same.

 

3. MPLS is used on normal Ethernet networks as well as on router-to-router links. On a normal network, the MPLS tag comes after that network's header; on a router-to-router link, it comes after the PPP header. Router-to-router links use the PPP protocol, and when multiple types of traffic between a pair of routers need different services, they use MPLS. When MPLS is used with Ethernet, the value of the type field (which describes what the Ethernet frame is carrying) is 0x8847.

 

4. The size of the MPLS tag is 32 bits. The label field, which is 20 bits, has the largest share of it. It holds the actual label value, which is locally significant: any pair of routers, say R1 and R2, decide between themselves the labels used over the link connecting them. We have seen the edge router choose tag 5 for SMTP and tag 3 for HTTP traffic; the next (MPLS-enabled) router might tag the SMTP packet with label 20 and the HTTP traffic with label 25, as per its agreement with the router after it. Therefore, the tags used when a packet travels from R1 to R2 are not the same as those used between R2 and R3, or R3 and R4. Twenty bits can hold 2^20 labels, but in practice only a handful are actually used. (A sketch of the full 32-bit layout follows this list.)

 

5. Another field, the QoS field, indicates the service the routers have agreed upon. Apart from the label, this field indicates some extra services to be provided to the packet. At this point in time, it is used for experimental purposes.

 

6. When routing happens using the IP header, routers can detect that the routing process has run into an error and the packet is being routed in a cycle. For example, routing tables may be deployed incorrectly so that, for destination D, A forwards to B, B forwards to C, and C forwards back to A. The packet then roams around in a cycle and never reaches the destination. IP has a Time To Live (TTL) field, initialized to a value larger than the maximum number of hops the packet should ever travel and decremented at each hop. If the packet does roam in a cycle, the TTL value becomes 0 at some point and a router discards the packet, learning that it is cycling. This prevents a packet from roaming around indefinitely even when routers are configured incorrectly. That advantage would be lost once the IP header is no longer used for forwarding, so a TTL field is added to the MPLS tag to provide the same effect. If MPLS-enabled routers ever route a packet in a cycle, the TTL field becomes zero and indicates that something is wrong.

 

7. The S field indicates whether one tag or more is attached: it has the value 1 if only one tag is attached and 0 otherwise. This field is useful when the packet is sent outside the network. The final router, on seeing a tag with S = 1, removes it; if the value is 0, it has to remove multiple tags, not only the outer one. The S field is sometimes called the BoS (Bottom of Stack) field, as it is true only when the tag is indeed at the bottom of the stack, the only remaining entry.
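The 32-bit layout just described can be packed and unpacked with simple shifts and masks, as in the sketch below.

```python
# MPLS tag: | label (20 bits) | QoS (3 bits) | S (1 bit) | TTL (8 bits) |
def pack_mpls(label, qos, s, ttl):
    return (label << 12) | (qos << 9) | (s << 8) | ttl

def unpack_mpls(word):
    return {
        "label": (word >> 12) & 0xFFFFF,   # the locally significant label
        "qos":   (word >> 9) & 0x7,        # extra service indication
        "s":     (word >> 8) & 0x1,        # 1 = bottom of the stack
        "ttl":    word & 0xFF,             # loop protection, as in IP
    }

hdr = pack_mpls(label=20, qos=0, s=1, ttl=64)
print(unpack_mpls(hdr))   # {'label': 20, 'qos': 0, 's': 1, 'ttl': 64}
```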

 

We have stated that the first router in the network tags the packet appropriately based on a few things. How do the intermediary routers decide the tag value? Do they need to elaborately check the inner details? If so, the advantage we gain from using tags for routing would be defeated. There is a simple trick to avoid such recalculation. The administrators design multiple classes that demand specific forwarding, called FECs or Forwarding Equivalence Classes, and each class has specific routing-related information associated with it. For example, the FEC for SMTP may be to drive the content to an SMTP server through a specified path, while the FEC for a VoIP call may be a minimum-hop-length path. The FEC is the service class the packet is supposed to get, and it is the value from which the tag is derived. Thus the first router at the edge of the network decides not only the tag but also the FEC. Every other router along the path looks at the FEC value and chooses its tag accordingly. For a particular connection, the FEC is a constant value for every packet; the routers along the path use it to choose the type of service and assign the tag accordingly.

 

The difference between a tag and an FEC is worth noticing. The FEC remains the same for every packet belonging to the same connection throughout the route, and it can even be shared between multiple connections. The tag, on the other hand, changes at every hop even within a single transmission; the label is a local value, significant only for a pair of routers.
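The hop-by-hop change of tags can be pictured as a per-link swap table, as in the sketch below; the mapping uses the label values from the earlier example (5 to 20 for SMTP, 3 to 25 for HTTP) and is otherwise illustrative.

```python
# The label is significant only on one link, so each router rewrites
# it using the mapping agreed with the next hop.
R2_SWAP = {5: 20, 3: 25}   # labels on link R1-R2 -> labels on link R2-R3

def switch(packet):
    packet["label"] = R2_SWAP[packet["label"]]
    return packet

print(switch({"label": 5}))   # the SMTP packet leaves R2 with label 20
```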

 

The MPLS labels and FECs are decided by some network-wide protocol processing. That part is not standardized in the true sense, and many variants are used in practice. We will not throw further light on this issue, as it depends on the type of MPLS solution used and the administrator's specific policies and needs.

 

Summary

 

Congestion is a common problem in networks and needs a solution. Connectionless forwarding at the network layer enables routers to avoid congested paths, but additional measures are needed. A technique based on Random Early Discard helps avoid the global synchronization problem and mitigates congestion to some extent. MPLS is a solution that lets administrators provide service-based forwarding: a handful of tags differentiates packets based on their routing needs, and packets are switched using these labels instead of IP addresses. MPLS is heavily used by ISPs to honor business relationships and forward traffic accordingly. The tags are managed locally, based on Forwarding Equivalence Classes decided by the administrators for their own convenience in implementing their specific routing decisions.

