24 Connection management in TCP and congestion control

Prof. Bhushan Trivedi

 

Introduction

 

TCP manages connection release in two different fashions: symmetric, where both parties close together, and asymmetric, where one party closes the connection before the other does. We will throw some more light on both in this module. TCP also manages congestion control using two different methods: one based on explicit notification and the other based on implicit observation. We will look at both in this module.

 

Connection release examples

 

A connection is released by TCP when the application demands so. For example, when we start a mail client, it instructs the TCP underneath to establish a connection to the mail server through which we would like to send mail. Once we finish sending a mail, we are done and press the sign-out button. The mail client then sends a connection release message to the TCP underneath. The client TCP now communicates with the server TCP to close the connection that it established earlier. We will learn about this process of connection release in this section. You might incorrectly assume that the connection release process is quite similar to connection establishment. Unlike connection establishment, however, the connection close process is not guaranteed to be foolproof. We will soon see why.

 

Let us take one more example to see the need for the connection release process. When a user opens a web browser, the HTTP client initiates a connection with the web server: it instructs the TCP running on that very machine to communicate with the TCP of the machine where the web server is running. Once the user has finished surfing the website, he might press the log-out button. The HTTP client then instructs TCP to close the connection. The TCP client communicates with the TCP server at the machine where the web server is hosted to release the connection that they established some time back.

 

Symmetric and Asymmetric close

 

When a user browses the web or uses a mail client, the server usually has nothing left to send when the client is done. Unlike that, when a file is being downloaded, it is quite possible that when the user closes down, the server is still sending the remaining part of the file. It is better if the server completes that process before closing. It is, therefore, a good idea that the server can decide whether or not to close immediately when the client decides to do so. That is the reason why TCP provides two different types of connection release: asymmetric, where the client and server close separately, and symmetric, where both of them do so together. In the case of an asymmetric close, it is possible that one of the sides closes first while the other side continues to send for a while. This type of connection is also known as a half-open connection.

 

Connection release is more complex

 

The process of connection release is more complex than establishment. For example, when the connection is being released, it is quite possible that a message does not reach the receiver, which then cannot synchronize with the sender for the connection close process. The literature describes this as the two-army problem and concludes that there is no complete solution to the connection close problem. You may refer to Reference 1 or Reference 2 to learn about that problem. We will soon see by example that the process is not foolproof.

 

The difference between establishment and close is that the other end is already active during close. If the establishment process does not go through correctly, the other end has no problem, as the connection is simply not established. In the case of close, the other end is active, and if it cannot get the close message from the sender and the sender closes down on its own, it is left stranded. That is why connection release is the more complex problem.

 

Connection release process

 

The connection release process was discussed in the previous module. Figure 26.1 depicts the case. In the case of a symmetric close, the disconnection ACK and the disconnection request from the receiver are sent together as a single segment.

 

[Figure 26.1: The connection release process]

 

The above normal case, sometimes known as a four-way handshake or a modified three-way handshake, is basically an asymmetric process of closing. The connection establishment process does not require an asymmetric version, but closing does. The reason is that nothing was going on before connection establishment for which either party needs to wait. Unlike that, it is quite possible that one of the parties still has something to send when the other party signals the end. We need an asymmetric close in this case.

 

Disconnection cases

 

Let us try to see how the disconnection process goes on and what the consequences of some of the possible events are. We will also see that one of the cases does not close the connection in an amicable manner.

 

[Figure 26.2: Disconnection cases]

Closely observe the cases depicted in figure 26.2. The first case describes the symmetric close operation. You can see that the ack of the DR is sent together with the DR from the receiver. The next four cases describe four different situations which lead the connection close process into a problem. Let us discuss each case one after another.

 

Case (b) is where the first disconnection request is lost. The sender times out and retransmits, the receiver receives the fresh DR, and the connection is closed without any problem thereafter. Case (c) describes a more extreme situation where not only the first DR but subsequent DRs are also lost due to some problem in the network. The sender cannot continue forever, so it retransmits the DR only a specific number of times, say 10. After sending 10 DRs, it concludes that it cannot proceed further and closes from its side. The unfortunate part of this case is that the receiver might be completely unaware of the sender's predicament and continue listening on that port while the sender has closed down. Case (d) is where the DR is received correctly but the ack to it and the subsequent DRs are lost. Fortunately, the receiver has already received the DR and has, therefore, started a timer. The sender, as in the previous case, closes down on its own, but the receiver also closes down in an amicable manner when its timer expires. Case (e) describes the situation where the DR and the reverse DR are both received but the final ACK is lost. That is also taken care of by the receiver's timer. The sender does not need any timer for closing here.
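The bounded-retry behaviour of cases (b) and (c) can be sketched in a few lines. This is only an illustration; the retry limit of 10 comes from the example above, and the function names are ours, not part of any TCP implementation.

```python
# Sketch of DR retransmission with a bounded retry count.
# network_delivers(attempt) models whether the DR of a given attempt
# gets through; MAX_DR_RETRIES is the illustrative limit from the text.

MAX_DR_RETRIES = 10

def release_connection(network_delivers):
    """Try to close; give up unilaterally after MAX_DR_RETRIES lost DRs."""
    for attempt in range(MAX_DR_RETRIES):
        if network_delivers(attempt):
            return "closed gracefully"   # DR got through, close completes
    return "closed unilaterally"         # case (c): peer may keep listening

# A network that loses every DR reproduces case (c):
print(release_connection(lambda a: False))   # closed unilaterally
# A network that delivers the retransmitted DR reproduces case (b):
print(release_connection(lambda a: a == 1))  # closed gracefully
```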

 

Congestion Control

 

Congestion control is ideally the job of the network layer, as it is the network layer which decides the outgoing path for each incoming packet. However, the Internet's network layer is connectionless, which restricts its processing abilities. IP receives many packets, but it cannot relate a packet to any connection, nor can it communicate back to the sending IP to slow down on a particular connection. If IP were connection oriented, the job would have been very easy: a congested router would inform the IP that initiated the connection and tell it to slow down. TCP has to do that job in the absence of a connection-oriented network layer.

 

Using transport layer to manage congestion

 

In the case of TCP, it decides the volume of data to be sent (based on the size of the sender's window); it also communicates with the receiver, and the receiver communicates back with the sender. That means they can talk to each other and the sender can regulate the traffic flow as per the receiver's feedback. We have also seen that TCP keeps information about every connection and can relate every segment to a connection. A connection in TCP parlance is a collection of two endpoints. An endpoint is a combination of an IP address and a port number. An IP address identifies a machine and a port number identifies a process running within it. For example, 128.66.203.7 is an IP address describing a machine, and 80 describes a web server running within it. That means (128.66.203.7, 80) defines an endpoint. Another endpoint may be (128.66.203.37, 1234), where the machine is identified by 128.66.203.37 and the process runs on port number 1234. A connection is identified by the pair of these two endpoints. TCP is designed to discern which connection a segment belongs to.
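The endpoint-pair idea above can be sketched as a small lookup table. This is a minimal illustration, not real TCP code; the helper names and the connection-table layout are our assumptions.

```python
# A connection is a pair of endpoints; an endpoint is an (IP address, port)
# pair. Every segment carries all four values, so TCP can map any received
# segment back to its connection.

connections = {}

def endpoint(ip, port):
    return (ip, port)

def register(local, remote, state="ESTABLISHED"):
    """Record a connection keyed by its (local endpoint, remote endpoint) pair."""
    connections[(local, remote)] = state

def lookup(src_ip, src_port, dst_ip, dst_port):
    """Find the connection a received segment belongs to (or None)."""
    return connections.get((endpoint(dst_ip, dst_port),
                            endpoint(src_ip, src_port)))

# The example from the text: web server 128.66.203.7:80, client 128.66.203.37:1234
register(endpoint("128.66.203.7", 80), endpoint("128.66.203.37", 1234))
print(lookup("128.66.203.37", 1234, "128.66.203.7", 80))  # ESTABLISHED
```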

 

TCP provides extensive methods to control congestion, but UDP has no mechanism for doing so. Thus, when we discuss congestion control, our discussion is confined to TCP only. UDP is normally used for real-time traffic like VoIP calls and video transmission. Congestion control mechanisms have two problems: first, they slow down the transmission rate, and second, they demand retransmission of the lost segments. Neither suits real-time communication. A human viewer prefers a snowy video, or a video with some frames missing, to a jittery video. A human listener prefers a word or two lost to a long delay between words. If congestion drops a packet or two, it is fine, but real-time communication is better not slowed down.

 

Congestion control at transport layer

 

Congestion has always been a burning issue in Internet circles, not only for users, who always want bigger and better bandwidth, but for researchers and administrators alike. The Internet committees that worked on congestion control have, therefore, done extensive research to solve the congestion problem in the best possible manner. Not satisfied with a single solution, they provided multiple solutions and have kept improving the congestion control process since the inception of the Internet. We discussed earlier how congestion control is managed at the network layer; now we will concentrate on how it is managed at the transport layer by TCP.

 

When congestion control is deployed at the transport layer, it works in phases. The first phase detects the congestion. The second phase then takes the remedial action, usually identifying the troubling connections and slowing them down. In the third phase, the traffic is increased back to the regular speed once the congestion is removed1.

 

Implicit and Explicit Congestion detection

 

TCP combats congestion with two different methods of detection. The first is known as the implicit, or indirect, way of detecting. Whenever TCP needs to retransmit because an ack does not come back in time, it understands that there is congestion in the network. This simple method is an example of learning about congestion in an indirect way. One may ask, "Why can't TCP ask IP, which anyway is aware of congestion, to determine it for sure?" TCP can, but should not, ask IP, as we would like to reduce inter-layer communication to

 

1  TCP deploys the "let the congestion happen, we will take remedial action if so" approach and not "let everybody send as per the network load so there is no likelihood of congestion", simply because the Internet is connectionless and we do not have any control over the connection establishment process through any network-wide management process.

 

promote layer independence. If either TCP or IP ever changed its version, such a solution would stop working. So the slightly inefficient but more independent method of indirect learning is used. However, the next method, described in the following, does depend on communication between these two layers, but without really remaining dependent on their structure.

 

Another method to detect congestion is known as explicit congestion notification. A router experiencing congestion sets a bit in the packet's IP header. The IP packet with the modified header travels to the receiver. When it reaches the receiver, the receiver's IP process learns about the congestion along the path and informs the TCP process running on top of it. The TCP process then sets a specific bit in the TCP header, known as the congestion experienced (ECN-Echo) bit, in the ACK to that specific segment. When that ack reaches the sender, the sender learns that the segment passed through a congested area and slows down the traffic flow on that connection accordingly. It also indicates to the other end that it has received the congestion notification and reduced the traffic, by setting another bit in the TCP header known as the "congestion notification received" (congestion window reduced) bit. The messages that IP and TCP send do not depend on each other's structure, so a change of version does not have any impact on them. Another point: the router which sets the bit in the IP header could also have set the TCP header bit, but it should not poke into higher-layer protocol content; that is better left to the receiver's TCP.
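The bit exchange above can be sketched with the flag values standardized in RFC 3168: the IP header carries the Congestion Experienced (CE) codepoint, and the TCP flags byte carries the ECE and CWR bits. The helper function names are ours, for illustration only.

```python
# Sketch of ECN signalling. A congested router marks the IP ECN field with
# CE (binary 11); the receiver's TCP echoes it with the ECE flag in the ack;
# the sender cuts its window and answers with the CWR flag.

IP_ECN_CE = 0b11   # "Congestion Experienced" codepoint in the IP header
TCP_ECE   = 0x40   # ECN-Echo bit in the TCP flags byte
TCP_CWR   = 0x80   # Congestion Window Reduced bit

def receiver_builds_ack(ip_ecn_field):
    """Receiver sets ECE in the ack if the packet was marked CE en route."""
    return TCP_ECE if ip_ecn_field == IP_ECN_CE else 0

def sender_reacts(tcp_flags):
    """Sender slows down and sets CWR when the ack carries ECE."""
    if tcp_flags & TCP_ECE:
        return TCP_CWR   # tells the receiver the window has been reduced
    return 0

ack_flags = receiver_builds_ack(IP_ECN_CE)
print(sender_reacts(ack_flags) == TCP_CWR)  # True
```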

 

Fast recovery

 

[Figure 26.3: Fast recovery]

TCP, upon receipt of three consecutive duplicate acks, assumes the segment starting at the ack number is lost and retransmits it without waiting for the retransmission timer to go off. This process is called fast recovery and is described in figure 26.3.

 

You can notice that the segment 101-200 is lost and thus the receiver keeps sending the ack of the earlier segment, i.e. 101, indicating "data till 100 is received and I am now expecting 101". You can also see that three more segments are sent and received correctly at the receiver, but every time the ack is numbered 101. When three such duplicate acks arrive apart from the first genuine ack of the segment, the TCP sender senses trouble and retransmits the 101-200 segment without waiting for it to time out. When the receiver receives that segment, it acknowledges all segments together by sending a cumulative ack 501.
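The duplicate-ack trigger can be sketched as a simple counter. This is a minimal illustration of the rule, not real TCP code; the threshold of three duplicates is from the text.

```python
# Sketch of the fast-recovery trigger: three duplicate acks for the same
# number cause immediate retransmission of the segment starting there.

DUP_ACK_THRESHOLD = 3

def process_acks(acks):
    """Return the sequence number retransmitted early, or None."""
    dup_count, last_ack = 0, None
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == DUP_ACK_THRESHOLD:
                return ack          # retransmit the segment starting at `ack`
        else:
            dup_count, last_ack = 0, ack
    return None

# Figure 26.3's scenario: one genuine ack 101 followed by three duplicates
print(process_acks([101, 101, 101, 101]))  # 101
```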

 

Slow Start and AIMD

 

Let us now learn how TCP reacts to the implicit signal of congestion, the need for retransmission.

 

TCP keeps another value apart from the receiver's window advertisement, called the congestion window. Whenever there is no congestion, TCP keeps the congestion window growing exponentially: it starts with 1 segment, goes to 2, then 4, and so on. That process is, ironically, called slow start, though it is anything but slow. TCP can send as much data as the minimum of the congestion window and the receiver's advertised window suggests. We have already seen how the window advertisement is managed; we will now see how the congestion window is managed.

 

TCP starts with a congestion window as big as the receiver's window in the beginning. That value is also called the congestion window threshold. As soon as TCP experiences congestion, the threshold is halved and slow start begins; that means the congestion window is reduced to 1 segment. The size of 1 segment depends on the network; for example, for an Ethernet network, the maximum segment size is 1460 bytes. The congestion window increases exponentially during slow start, till it reaches the congestion window threshold value.

 

Let us take a few examples to understand all these terms. Figure 26.4 showcases the process of slow start. The congestion window starts with 1, then grows exponentially to 2, 4, 8, and 16; each acknowledged window of segments doubles the congestion window. The slow start continues till the congestion window threshold. The threshold is set to half the size of the congestion window at the moment the congestion occurred. For example, if retransmission is required when the congestion window size was 16, the threshold is set to 8. So the exponential growth continues till the congestion window grows to 8. Once that stage is reached, the window grows much more slowly, to avoid leading to congestion yet again: only when a full window of segments is acknowledged is the congestion window increased by one segment. That phase is known as congestion avoidance; it is also known as additive increase. This part is depicted in figure 26.5. Till 8 the growth is exponential, but after that it increases linearly.
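The growth pattern just described can be simulated in a few lines. This is a sketch of the window-growth rule only (window sizes in segments, one step per round trip); the function name and loop structure are our illustration.

```python
# Growth of the congestion window per round trip: exponential (doubling)
# until the threshold is reached, then additive (+1 segment per round).

def window_growth(threshold, rounds):
    """Return the congestion window size at the start of each round."""
    cwnd, trace = 1, []
    for _ in range(rounds):
        trace.append(cwnd)
        cwnd = cwnd * 2 if cwnd < threshold else cwnd + 1
    return trace

# Threshold 8, as in the example: 1, 2, 4, 8 exponentially, then linearly.
print(window_growth(8, 7))  # [1, 2, 4, 8, 9, 10, 11]
```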

 

Another example, depicted in figure 26.6, puts everything together. The retransmission is required when the congestion window size was 64. TCP restarts with a congestion window of 1 segment and a threshold of half the earlier size, 32. Till 32 we continue with slow start, and after that we observe the additive increase. Three consecutive duplicate acks arrive when the congestion window reaches 64. Fast recovery, which we have seen before, is deployed now: the congestion window is reduced to half, and the additive increase resumes with a congestion window of 32. You can now justify the name fast recovery: had there been a retransmission after a timeout, the window would begin from a congestion window as low as 1. That slump in the congestion window, from whatever it was previously, is known as multiplicative decrease, and everything together is known as AIMD, i.e. additive increase, multiplicative decrease.
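The two reactions in this example, restarting from 1 after a timeout versus resuming from the halved window after fast recovery, can be sketched as one decision function. The function and event names are ours, for illustration under the rules stated above.

```python
# Sketch of how the sender reacts to the two congestion signals:
# a timeout restarts slow start from 1 segment, while three duplicate
# acks (fast recovery) resume additive increase from the halved window.

def on_loss_event(cwnd, event):
    """Return (new_cwnd, new_threshold) after a congestion signal."""
    threshold = max(cwnd // 2, 1)         # multiplicative decrease
    if event == "timeout":
        return 1, threshold               # back to slow start
    if event == "triple_dup_ack":
        return threshold, threshold       # fast recovery: resume additively
    raise ValueError(event)

# The figure 26.6 example: duplicate acks at cwnd 64 resume at 32, not at 1.
print(on_loss_event(64, "triple_dup_ack"))  # (32, 32)
print(on_loss_event(64, "timeout"))         # (1, 32)
```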

 

[Figures 26.4, 26.5 and 26.6: slow start, additive increase, and AIMD]

Slow start and AIMD together made TCP's idea of learning from the indirect signal of packet loss quite successful, as networks, especially wired networks, have become more and more reliable due to better design of wires, better routers, and the additional fault tolerance built into them. The explicit congestion notification scheme, which was added later, provides an additional check that TCP uses to combat congestion in a more informed fashion. As we mentioned before, congestion has always been a burning issue, and thus there is one more trick normally deployed by TCP to help manage congestion in a smoother fashion. We have already studied RED while we discussed IP. We will revisit RED in this module with some more information.

 

Fast retransmit

 

Whenever fast recovery takes place, the ack usually comes back indicating that all other segments are received correctly, as indicated in figure 26.3. That might not be the case; consider the example shown in figure 26.7. When the sender gets three consecutive ACK 101s, it fast-recovers and sends the 101-200 segment. It should get ACK 601 but instead gets 401. Fearing, rightly, that 401-500 is also lost, it immediately retransmits that segment without waiting for anything else, and it gets the ack 601. If the final 501-600 were also lost, this process would return the ack 501, which again forces a retransmit of 501-600; fortunately, here it is received correctly, so that does not demand yet another retransmission. This process is known as fast retransmit, as the sender neither waits for the retransmission timer to go off nor waits for three consecutive duplicate acks.

 

RED from the context of TCP

 

We have seen that, due to the nature of TCP, if a router drops a packet each from all the connections passing through it, the process of global synchronization begins. After learning about slow start, you can understand why. Figure 26.7 describes the RED process. The first part, depicted in (a), describes the normal operation of an outgoing queue of a router.

 

[Figure: the RED process; (a) normal queue operation, (b) tail drop, (c) RED]

The process described in (b) is the tail drop: once the queue is full, all arriving packets, from all connections, are dropped. The third case, (c), describes RED. Two values are maintained, Min and Max, where Max indicates the maximum size of the queue; the Min value is decided by the router. In normal processing mode, the router starts dropping packets only when the queue reaches the size Max. RED changes that behavior slightly. Here is the description.

 

RED process

 

The RED process is executed for each incoming packet. The average queue length of the router's outgoing line is also calculated. The average value is calculated in a weighted-average fashion: a small value g is used as a weight, multiplied by the queue length at the moment the current packet arrives at the router. Thus, the average queue size is calculated as shown in the following pseudo-code, which is self-explanatory.

 

For every incoming packet do the following

{

    Queuen = (1 − g) × Queuen−1 + (g × current queue length)

    // Queuen   = weighted average when datagram N arrives
    // Queuen−1 = weighted average when datagram N−1 arrived
    // (g is chosen as a very small value)

    If Queuen < Min, the packet is queued.
    If Queuen > Max, the packet is dropped.
    Otherwise (the average falls between Min and Max),
        p is calculated and the packet is dropped with probability p.

    Queuen−1 = Queuen    // the current average becomes the previous average

}
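The pseudo-code above can be made runnable as follows. This is a sketch under assumptions: the values of g, Min, Max, and the linear formula for the drop probability p are illustrative choices, not prescribed by the text.

```python
# Runnable sketch of the RED pseudo-code. The probability p rises linearly
# from 0 at Min to max_p at Max (a common, but illustrative, choice).

import random

class RedQueue:
    def __init__(self, g=0.002, min_th=5, max_th=15, max_p=0.1):
        self.g, self.min_th, self.max_th, self.max_p = g, min_th, max_th, max_p
        self.avg = 0.0      # weighted average queue length (Queue_n)
        self.queue = []

    def arrive(self, packet):
        """Return True if the packet is queued, False if it is dropped."""
        # Queue_n = (1 - g) * Queue_{n-1} + g * current queue length
        self.avg = (1 - self.g) * self.avg + self.g * len(self.queue)
        if self.avg < self.min_th:
            self.queue.append(packet)      # below Min: always enqueue
            return True
        if self.avg > self.max_th:
            return False                   # above Max: always drop
        # Between Min and Max: drop with probability p
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        if random.random() < p:
            return False
        self.queue.append(packet)
        return True

r = RedQueue()
print(r.arrive("pkt"))  # True (the average starts near zero, below Min)
```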

 

Summary

 

In this module, we continued our journey of connection management. We saw five different cases of disconnection and found that there is one case in which the protocol fails, meaning the connection is closed without the other party noticing it. We looked at two different methods of detecting congestion, explicit and implicit. We described how the method based on explicit congestion notification helps TCP recover from congestion in an informed fashion. We saw how slow start and AIMD help TCP recover from congestion after a retransmission. Finally, we saw how the RED-based congestion control process is carried out.

 

References

 

1. Computer Networks by Bhushan Trivedi, Oxford University Press

2. Data Communication and Networking by Bhushan Trivedi, Oxford University Press