"In October of '86, the Internet had the first of what became a series of congestion collapses. ..., the data throughput from LBL to UC Berkeley (sites separated by 400 yards and 2 IMP - i.e., router - hops) dropped from 32 Kbits/sec to 40 bits/sec."
In other words, in the analysis of the TCP congestion control scheme, we always assume that every TCP source is sending at the maximum data rate.
This is achieved by sending packets whose size is as large as possible.
The dependency between the data transmission rate and the transmit window size is pretty complicated and very dynamic in nature.
The following examples will derive a simple relationship between the data transmission rate and the transmit window size.
But do not conclude that the data rate is proportional to the window size. The examples below are "idealized": network delays, route changes, and other factors can make the relationship very unpredictable and dynamic.
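The idealized relationship can be sketched as a one-line model: a sender with a window of W bytes and a round-trip time of RTT seconds can have at most W bytes "in flight", so its best-case rate is W/RTT. This is a minimal sketch of the idealized case only; the function name and numbers are illustrative, not from any real TCP stack.

```python
# Idealized throughput model: with window W bytes and round-trip time
# RTT seconds, at most W bytes are "in flight" per RTT, so the
# best-case data rate is W / RTT bytes/sec (here converted to bits/sec).
# Real networks are far more dynamic, as noted above.

def ideal_rate_bps(window_bytes: int, rtt_sec: float) -> float:
    """Best-case sending rate in bits/sec for a given window and RTT."""
    return window_bytes * 8 / rtt_sec

# Example: a 64 KB window over a 100 ms round trip
print(ideal_rate_bps(64 * 1024, 0.1))   # -> 5242880.0 bits/sec (~5 Mbps)
```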
Transmit Window
The advertised window size is static
(In fact, if a TCP flow can send at top speed (the window is equal to the advertised window size), you don't need congestion control ! :-))
The Congestion Window Size (CWND) is dynamic.
In fact, it changes faster than the weather and it is just as unpredictable...
AWS is negotiated at connection establishment and remains unchanged afterwards
CWND changes over time !!!
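The interplay of the two windows can be sketched in one line: the sender may have at most min(CWND, AWS) unacknowledged bytes in flight, where AWS protects the receiver and CWND protects the network. A minimal sketch; the function name and the example values are illustrative.

```python
# A TCP sender may have at most min(CWND, AWS) unacknowledged bytes in
# flight: AWS (advertised window size) protects the receiver,
# CWND (congestion window) protects the network.

def effective_window(cwnd: int, aws: int) -> int:
    """Bytes the sender may have in flight at this moment."""
    return min(cwnd, aws)

print(effective_window(cwnd=8000, aws=65535))    # network is the bottleneck
print(effective_window(cwnd=120000, aws=65535))  # receiver is the bottleneck
```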
 
(We have not yet discussed HOW TCP changes the value of CWND.) In the remainder of the discussion, we will see how TCP updates the value of CWND.
If the network can handle this transmission rate, TCP will not need to do any congestion control !!! (Because the bottleneck is at the receiver...)
The picture above shows a scenario where the network capacity is less than what the receiver can handle - i.e., the network is the bottleneck.
Because the packet drop happens at the moment when the sender was transmitting at 50 Kbps, the new target congestion rate is set to 25 Kbps.
(In the figure, it happens when the sender is transmitting at 30 Kbps.)
Because the packet drop happens at the moment when the sender was transmitting at 30 Kbps, the new target congestion rate is set to 15 Kbps.
And so on....
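The "halve the rate seen at the drop" rule above is simple enough to write down directly. A minimal sketch; the function name is illustrative, and the example values are the ones from the drops above.

```python
# Multiplicative decrease: when a packet drop is detected, the new target
# congestion rate is set to HALF the rate at which the drop occurred.

def new_target(rate_at_drop_kbps: float) -> float:
    """New target congestion rate after a drop, in Kbps."""
    return rate_at_drop_kbps / 2

print(new_target(50))   # -> 25.0 Kbps, as in the first drop above
print(new_target(30))   # -> 15.0 Kbps, as in the second drop
```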
In the slow start phase, transmission rate increases exponentially in time.
In the congestion avoidance phase, transmission rate increases linearly in time.
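The two growth patterns can be sketched as a toy per-RTT trajectory of CWND (in units of MSS): doubling every RTT while below SSThresh, then +1 MSS per RTT. The function name and the SSThresh value are illustrative, not from any real implementation.

```python
# Toy per-RTT CWND trajectory (units of MSS): exponential growth during
# slow start (CWND doubles every RTT), then linear growth (+1 MSS per
# RTT) in congestion avoidance once CWND reaches SSThresh.

def cwnd_trajectory(ssthresh: int, rtts: int) -> list:
    """CWND value (in MSS) at the start of each of the first `rtts` RTTs."""
    cwnd, trace = 1, []
    for _ in range(rtts):
        trace.append(cwnd)
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
    return trace

print(cwnd_trajectory(ssthresh=16, rtts=8))
# -> [1, 2, 4, 8, 16, 17, 18, 19]
```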
We will look at each mechanism separately and indicate when each mechanism is appropriate.
Initialization:
Slow Start:
Why not just set CWND to SSThresh and be done with it ???
(i.e., CWND > SSThresh)
Example of TCP operation in the congestion avoidance phase:
TCP sends out 4 packets (each containing MSS bytes) to the receiver.
CWND = CWND + MSS * MSS/CWND   // CWND = 4 MSS:     4 MSS + MSS * MSS/(4 MSS)         = 4 MSS + MSS * 1/4         = 4.25 MSS
CWND = CWND + MSS * MSS/CWND   // CWND = 4.25 MSS:  4.25 MSS + MSS * MSS/(4.25 MSS)   = 4.25 MSS + MSS * 1/4.25   = 4.485 MSS
CWND = CWND + MSS * MSS/CWND   // CWND = 4.485 MSS: 4.485 MSS + MSS * MSS/(4.485 MSS) = 4.485 MSS + MSS * 1/4.485 = 4.708 MSS
CWND = CWND + MSS * MSS/CWND   // CWND = 4.708 MSS: 4.708 MSS + MSS * MSS/(4.708 MSS) = 4.708 MSS + MSS * 1/4.708 = 4.92 MSS
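The per-ACK updates worked out above can be reproduced mechanically. A minimal sketch with CWND kept in units of MSS; the function name is illustrative. (The fourth value, 4.921, matches the 4.92 above up to rounding.)

```python
# Reproduce the per-ACK congestion avoidance update worked out above,
# starting from CWND = 4 MSS.  CWND is kept in units of MSS, so the
# update CWND = CWND + MSS*MSS/CWND becomes cwnd = cwnd + 1/cwnd.

def ca_updates(cwnd_mss: float, acks: int) -> list:
    """CWND (in MSS, rounded to 3 places) after each of `acks` ACKs."""
    trace = []
    for _ in range(acks):
        cwnd_mss = cwnd_mss + 1.0 / cwnd_mss
        trace.append(round(cwnd_mss, 3))
    return trace

print(ca_updates(4.0, 4))   # -> [4.25, 4.485, 4.708, 4.921]
```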
(In the slow start phase, CWND DOUBLES after each RTT)
CWND = CWND + MSS * MSS/CWND + MSS/8
So why be so foolish ???
If TCP stopped increasing CWND, it would not be true to its goal (of using all the bandwidth that the network can offer).
(This technique is similar to kids testing their boundary by asking their parents for favors over and over again... The boundary may have moved :-))
This is the new "safe" operation level...
/home/cs558000/bin/ns Tahoe.tcl
I have saved a copy of the animation file generated by the simulation.
The NAM (Network Animator) output file is Tahoe.nam; view it with:
/home/cs558000/bin/nam Tahoe.nam
In gnuplot, issue the command:
plot "WinFile" using 1:2 title "Flow 1" with lines 1
You should see this plot:
You can see the operation of TCP Tahoe clearly from the above figure:
TCP marks SSThresh = 25 (approximately) and begins another slow start.
SSThresh is approximately 22.
 
 
When TCP performs a fast retransmit (so TCP did not timeout):
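The difference between the two variants' reactions to a loss can be sketched in a few lines. This is a minimal illustration, not a full implementation; the function name is hypothetical, and the max(CWND/2, 2 MSS) floor is the conventional SSThresh rule.

```python
# Sketch of how Tahoe and Reno react to a loss signal (CWND in MSS units).
# Tahoe restarts from CWND = 1 MSS after ANY loss; Reno, after a fast
# retransmit (triple duplicate ACK, not a timeout), resumes directly
# from the new SSThresh, skipping slow start.

def on_loss(cwnd: float, variant: str, timeout: bool) -> tuple:
    """Return (new SSThresh, new CWND) after a loss signal."""
    ssthresh = max(cwnd / 2, 2.0)        # halve on any loss signal
    if variant == "reno" and not timeout:
        return ssthresh, ssthresh        # fast recovery: skip slow start
    return ssthresh, 1.0                 # Tahoe, or any timeout

print(on_loss(40.0, "tahoe", timeout=False))   # -> (20.0, 1.0)
print(on_loss(40.0, "reno",  timeout=False))   # -> (20.0, 20.0)
```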
/home/cs558000/bin/ns Reno.tcl
The NAM (Network Animator) output file is Reno.nam; view it with:
/home/cs558000/bin/nam Reno.nam
In gnuplot, issue the command:
plot "Reno-Window" using 1:2 title "Flow 1" with lines 1
You should see this plot:
You can see that this small change in TCP Reno has resulted in a huge performance improvement: