Saturday 9 March 2013

note:

these articles are just for reference; just write the points you think are important and suit the question best. it's not necessary to write each and every word.
thank you.

hop-by-hop choke packets

At high speeds or over long distances, sending a choke packet to the source hosts does not
work well because the reaction is so slow. Consider, for example, a host in San Francisco
(router A in Fig. 5-28) that is sending traffic to a host in New York (router D in Fig. 5-28) at
155 Mbps. If the New York host begins to run out of buffers, it will take about 30 msec for a
choke packet to get back to San Francisco to tell it to slow down. The choke packet
propagation is shown as the second, third, and fourth steps in Fig. 5-28(a). In those 30 msec,
another 4.6 megabits will have been sent. Even if the host in San Francisco completely shuts
down immediately, the 4.6 megabits in the pipe will continue to pour in and have to be dealt
with. Only in the seventh diagram in Fig. 5-28(a) will the New York router notice a slower flow.
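A quick check of the arithmetic (a minimal sketch in Python; the 30 msec one-way delay is taken from the example above):

# Data still in the pipe while the choke packet travels back to the source.
rate_bps = 155e6                    # sending rate: 155 Mbps
delay_s = 30e-3                     # ~30 msec for the choke packet to arrive
in_flight = rate_bps * delay_s      # bits sent before the source can react
print(in_flight / 1e6, "megabits")  # -> 4.65, the "4.6 megabits" above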
An alternative approach is to have the choke packet take effect at every hop it passes through,
as shown in the sequence of Fig. 5-28(b).
[Fig. 5-28. (a) A choke packet that affects only the source. (b) A choke packet that affects each hop it passes through.]
 Here, as soon as the choke packet reaches F, F is
required to reduce the flow to D. Doing so will require F to devote more buffers to the flow,
since the source is still sending away at full blast, but it gives D immediate relief, like a
headache remedy in a television commercial. In the next step, the choke packet reaches E,
which tells E to reduce the flow to F. This action puts a greater demand on E's buffers but
gives F immediate relief. Finally, the choke packet reaches A and the flow genuinely slows
down.
The net effect of this hop-by-hop scheme is to provide quick relief at the point of congestion at
the price of using up more buffers upstream. In this way, congestion can be nipped in the bud
without losing any packets. The idea is discussed in detail and simulation results are given in
(Mishra and Kanakia, 1992).
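As a rough sketch of the mechanism (router names taken from the example above; the 50% cut per hop is an assumption, not from the text):

# Hop-by-hop choke propagation on the path A -> E -> F -> D of Fig. 5-28.
# Each router that sees the choke packet cuts its forwarding rate at once,
# buffering the excess until the cut reaches the next hop upstream.
def propagate_choke(path, rates, cut=0.5):
    for router in reversed(path[:-1]):   # choke visits F, then E, then A
        rates[router] *= cut             # this hop slows down immediately
        print(f"choke reaches {router}: {rates[router]:.1f} Mbps")

path = ["A", "E", "F", "D"]              # source A, congested router D
rates = {r: 155.0 for r in path}         # everyone starts at full blast
propagate_choke(path, rates)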

choke packets

The previous congestion control algorithm is fairly subtle. It uses a roundabout means to tell
the source to slow down. Why not just tell it directly? In this approach, the router sends a
choke packet back to the source host, giving it the destination found in the packet. The
original packet is tagged (a header bit is turned on) so that it will not generate any more choke
packets farther along the path and is then forwarded in the usual way.
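In sketch form, the router side might look like this (field and parameter names are illustrative assumptions):

# On congestion at the output line, send a choke packet back toward the
# source and set the header bit so that routers farther along the path do
# not generate more choke packets for this same packet.
def handle_packet(packet, line_congested, send_choke, forward):
    if line_congested and not packet["choked"]:
        send_choke(packet["src"], packet["dest"])  # warn the source directly
        packet["choked"] = True                    # the tagged header bit
    forward(packet)                                # then forward as usual

pkt = {"src": "A", "dest": "D", "choked": False}
handle_packet(pkt, line_congested=True,
              send_choke=lambda src, dest: print("choke ->", src),
              forward=lambda p: print("forwarded to", p["dest"]))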
When the source host gets the choke packet, it is required to reduce the traffic sent to the
specified destination by X percent. Since other packets aimed at the same destination are
probably already under way and will generate yet more choke packets, the host should ignore
choke packets referring to that destination for a fixed time interval. After that period has
expired, the host listens for more choke packets for another interval. If one arrives, the line is
still congested, so the host reduces the flow still more and begins ignoring choke packets
again. If no choke packets arrive during the listening period, the host may increase the flow
again. The feedback implicit in this protocol can help prevent congestion yet not throttle any
flow unless trouble occurs.
Hosts can reduce traffic by adjusting their policy parameters, for example, their window size.
Typically, the first choke packet causes the data rate to be reduced to 0.50 of its previous rate,
the next one causes a reduction to 0.25, and so on. Increases are done in smaller increments
to prevent congestion from reoccurring quickly.
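The host side of this protocol might be sketched as follows (the halving pattern is from the text; the interval length and the increase step are assumptions):

# React to choke packets for one destination: halve the rate on a choke,
# ignore further chokes for an interval, then listen again; increase the
# rate in small steps while no chokes arrive.
IGNORE_OR_LISTEN = 2.0   # seconds per ignore/listen interval (assumed)
INCREASE_STEP = 0.1      # gentle additive increase (assumed)

class ChokeResponder:
    def __init__(self, rate):
        self.rate = rate
        self.ignore_until = 0.0

    def on_choke(self, now):
        if now < self.ignore_until:
            return                     # choke from packets already under way
        self.rate *= 0.5               # first choke -> 0.50, next -> 0.25 ...
        self.ignore_until = now + IGNORE_OR_LISTEN

    def on_quiet_interval(self):
        self.rate += INCREASE_STEP     # line no longer congested: speed up

r = ChokeResponder(rate=155.0)
r.on_choke(now=0.0)
r.on_choke(now=1.0)                    # ignored: inside the ignore interval
print(r.rate)                          # 77.5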
Several variations on this congestion control algorithm have been proposed. For one, the
routers can maintain several thresholds. Depending on which threshold has been crossed, the
choke packet can contain a mild warning, a stern warning, or an ultimatum.
Another variation is to use queue lengths or buffer utilization instead of line utilization as the
trigger signal. The same exponential weighting can be used with this metric as with the smoothed
line-utilization estimate u, of course.
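The exponential weighting in question is a moving average; applied to a queue length it looks like this (the update rule for u follows the book's earlier treatment of line utilization; the samples and the constant a = 0.9 are assumptions):

# Exponentially weighted moving average, u_new = a * u_old + (1 - a) * f,
# where f is the instantaneous sample (utilization, or here a queue length)
# and a controls how quickly old history is forgotten.
def smooth(old, sample, a=0.9):
    return a * old + (1 - a) * sample

u = 0.0
for queue_len in [0, 2, 5, 9, 12]:   # hypothetical instantaneous samples
    u = smooth(u, queue_len)         # send a choke once u crosses a threshold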

congestion prevention policies

Let us begin our study of methods to control congestion by looking at open loop systems.
These systems are designed to minimize congestion in the first place, rather than letting it
happen and reacting after the fact. They try to achieve their goal by using appropriate policies
at various levels.
policies that affect congestion

Let us start at the data link layer and work our way upward. The retransmission policy is
concerned with how fast a sender times out and what it transmits upon timeout. A jumpy
sender that times out quickly and retransmits all outstanding packets using go back n will put
a heavier load on the system than will a leisurely sender that uses selective repeat. Closely
related to this is the buffering policy. If receivers routinely discard all out-of-order packets,
these packets will have to be transmitted again later, creating extra load. With respect to
congestion control, selective repeat is clearly better than go back n.
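A toy example of the difference in load (window size and loss position assumed for illustration):

# Extra packets resent after one loss, with 8 packets outstanding and
# packet number 3 lost.
WINDOW, LOST = 8, 3
go_back_n = WINDOW - LOST      # the loss and everything after it is resent
selective_repeat = 1           # only the lost packet itself is resent
print(go_back_n, "vs", selective_repeat)   # 5 vs 1 retransmissions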
Acknowledgement policy also affects congestion. If each packet is acknowledged immediately,
the acknowledgement packets generate extra traffic. However, if acknowledgements are saved
up to piggyback onto reverse traffic, extra timeouts and retransmissions may result. A tight
flow control scheme (e.g., a small window) reduces the data rate and thus helps fight
congestion.
At the network layer, the choice between using virtual circuits and using datagrams affects
congestion since many congestion control algorithms work only with virtual-circuit subnets.
Packet queueing and service policy relates to whether routers have one queue per input line,
one queue per output line, or both. It also relates to the order in which packets are processed
(e.g., round robin or priority based). Discard policy is the rule telling which packet is dropped
when there is no space. A good policy can help alleviate congestion and a bad one can make it
worse.
A good routing algorithm can help avoid congestion by spreading the traffic over all the lines,
whereas a bad one can send too much traffic over already congested lines. Finally, packet
lifetime management deals with how long a packet may live before being discarded. If it is too
long, lost packets may clog up the works for a long time, but if it is too short, packets may
sometimes time out before reaching their destination, thus inducing retransmissions.
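One common realization of lifetime management is a per-packet hop counter (a sketch; the field name and initial value are assumptions):

# Each router decrements the counter and discards the packet at zero, so
# stale packets and their duplicates cannot circulate indefinitely.
def forward_one_hop(packet):
    packet["hops_left"] -= 1
    return packet if packet["hops_left"] > 0 else None   # None = discarded

pkt = {"payload": "data", "hops_left": 3}
while pkt:
    pkt = forward_one_hop(pkt)     # dropped after three hops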

general principles of congestion control

Many problems in complex systems, such as computer networks, can be viewed from a control
theory point of view. This approach leads to dividing all solutions into two groups: open loop
and closed loop. Open loop solutions attempt to solve the problem by good design, in essence,
to make sure it does not occur in the first place. Once the system is up and running, midcourse
corrections are not made.
Tools for doing open-loop control include deciding when to accept new traffic, deciding when to
discard packets and which ones, and making scheduling decisions at various points in the
network. All of these have in common the fact that they make decisions without regard to the
current state of the network.
In contrast, closed loop solutions are based on the concept of a feedback loop. This approach
has three parts when applied to congestion control:
1. Monitor the system to detect when and where congestion occurs.
2. Pass this information to places where action can be taken.
3. Adjust system operation to correct the problem.
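Put together as a toy simulation, the three parts might look like this (the metric, threshold, drain rate, and the halving are all assumptions):

import random

queue_len = 0
send_rate = 10                  # packets offered per tick by the sources
for tick in range(20):
    # 1. Monitor the system: watch the queue for congestion.
    arrivals = random.randint(0, send_rate)
    queue_len = max(0, queue_len + arrivals - 8)   # line drains 8 per tick
    # 2. Pass the information to where action can be taken (the sources).
    congested = queue_len > 20
    # 3. Adjust system operation to correct the problem.
    if congested:
        send_rate = max(1, send_rate // 2)         # sources slow down
    else:
        send_rate += 1                             # cautiously speed up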


The presence of congestion means that the load is (temporarily) greater than the resources (in
part of the system) can handle. Two solutions come to mind: increase the resources or
decrease the load. For example, the subnet may start using dial-up telephone lines to
temporarily increase the bandwidth between certain points. On satellite systems, increasing
transmission power often gives higher bandwidth. Splitting traffic over multiple routes instead
of always using the best one may also effectively increase the bandwidth. Finally, spare
routers that are normally used only as backups (to make the system fault tolerant) can be put
on-line to give more capacity when serious congestion appears.
However, sometimes it is not possible to increase the capacity, or it has already been
increased to the limit. The only way then to beat back the congestion is to decrease the load.
Several ways exist to reduce the load, including denying service to some users, degrading
service to some or all users, and having users schedule their demands in a more predictable
way.

introduction to congestion

When too many packets are present in (a part of) the subnet, performance degrades. This
situation is called congestion. When the number of packets
dumped into the subnet by the hosts is within its carrying capacity, they are all delivered
(except for a few that are afflicted with transmission errors) and the number delivered is
proportional to the number sent. However, as traffic increases too far, the routers are no
longer able to cope and they begin losing packets. This tends to make matters worse. At very
high traffic, performance collapses completely and almost no packets are delivered: when too
much traffic is offered, congestion sets in and performance degrades sharply.

Congestion can be brought on by several factors. If all of a sudden, streams of packets begin
arriving on three or four input lines and all need the same output line, a queue will build up. If
there is insufficient memory to hold all of them, packets will be lost. Adding more memory may
help up to a point, but Nagle (1987) discovered that if routers have an infinite amount of
memory, congestion gets worse, not better, because by the time packets get to the front of
the queue, they have already timed out (repeatedly) and duplicates have been sent. All these
packets will be dutifully forwarded to the next router, increasing the load all the way to the
destination.
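To make the arithmetic concrete (line speeds and buffer size assumed for illustration):

# Four full 100 Mbps input lines converging on one 100 Mbps output line:
# the queue grows at 300 Mbps and any finite buffer fills in moments.
inputs_mbps = 4 * 100
output_mbps = 100
buffer_mbits = 64                                # 8 MB of router buffer
growth = inputs_mbps - output_mbps               # 300 Mbps of queue growth
print(buffer_mbits / growth, "seconds to overflow")   # about 0.21 s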
Slow processors can also cause congestion. If the routers' CPUs are slow at performing the
bookkeeping tasks required of them (queueing buffers, updating tables, etc.), queues can build
up, even though there is excess line capacity. Similarly, low-bandwidth lines can also cause
congestion. Upgrading the lines but not changing the processors, or vice versa, often helps a
little, but frequently just shifts the bottleneck. Also, upgrading part, but not all, of the system,
often just moves the bottleneck somewhere else. The real problem is frequently a mismatch
between parts of the system. This problem will persist until all the components are in balance.

It is worth explicitly pointing out the difference between congestion control and flow control, as
the relationship is subtle. Congestion control has to do with making sure the subnet is able to
carry the offered traffic. It is a global issue, involving the behavior of all the hosts, all the
routers, the store-and-forward processing within the routers, and all the other factors that
tend to diminish the carrying capacity of the subnet.
Flow control, in contrast, relates to the point-to-point traffic between a given sender and a
given receiver. Its job is to make sure that a fast sender cannot continually transmit data
faster than the receiver is able to absorb it. Flow control frequently involves some direct
feedback from the receiver to the sender to tell the sender how things are doing at the other
end.