
MCN UNIT - I

UNIT – I

Network layer: Network layer design issues: store-and-forward packet switching, services
provided to the transport layer, implementation of connectionless service, implementation of
connection-oriented service, comparison of virtual-circuit and datagram subnets. Routing
algorithms: shortest path routing, flooding, distance vector routing, link state routing,
hierarchical routing. Congestion control algorithms: approaches to congestion control,
traffic-aware routing, admission control, traffic throttling, choke packets, load shedding,
random early detection. Quality of Service: application requirements, traffic shaping,
leaky and token buckets.

Network Layer design issues


In the following sections we will provide an introduction to some of the issues that the
designers of the network layer must grapple with. These issues include the service provided
to the transport layer and the internal design of the subnet.

Store-and-forward packet switching


 The operation of the network layer protocols can be seen in Fig. 5-1.
 The major components of the system are the carrier's equipment (routers connected by
transmission lines), shown inside the shaded oval, and the customers' equipment, shown
outside the oval.
 Host H1 is directly connected to one of the carrier's routers, A, by a leased line. In
contrast, H2 is on a LAN with a router, F, owned and operated by the customer. This router
also has a leased line to the carrier's equipment.
 We have shown F as being outside the oval because it does not belong to the carrier, but
in terms of construction, software, and protocols, it is probably no different from the
carrier's routers. Whether it belongs to the subnet is arguable, but for the purposes of this
chapter, routers on customer premises are considered part of the subnet.

Fig. 5-1: The environment of the network layer protocols

 A host with a packet to send transmits it to the nearest router, either on its own LAN or
over a point-to-point link to the carrier.
 The packet is stored there until it has fully arrived so the checksum can be verified. Then
it is forwarded to the next router along the path until it reaches the destination host,
where it is delivered.
 This mechanism is store-and-forward packet switching.


Services provided to the Transport layer


The network layer provides services to the transport layer at the network layer/transport layer
interface. The network layer services have been designed with the following goals in mind.
1. The services should be independent of the router technology.
2. The transport layer should be shielded from the number, type, and topology of the
routers present.
3. The network addresses made available to the transport layer should use a uniform
numbering plan, even across LANs and WANs.

 The discussion centers on whether the network layer should provide connection-oriented
service or connectionless service.
 One camp (represented by the Internet community) argues, based on 30 years of actual
experience with a real, working computer network, that the subnet is inherently unreliable,
no matter how it is designed. Therefore, the hosts should accept the fact that the network is
unreliable and do error control (i.e., error detection and correction) and flow control
themselves.
 This viewpoint leads quickly to the conclusion that the network service should be
connectionless, with primitives SEND PACKET and RECEIVE PACKET and little else.
 In particular, no packet ordering and flow control should be done, because the hosts are
going to do that anyway, and there is usually little to be gained by doing it twice.
 Furthermore, each packet must carry the full destination address, because each packet
sent is carried independently of its predecessors, if any.
 The other camp (represented by the telephone companies) argues that the subnet should
provide a reliable, connection-oriented service.
 These two camps are best exemplified by the Internet and ATM. The Internet offers
connectionless network-layer service; ATM networks offer connection-oriented
network-layer service. However, it is interesting to note that as quality-of-service
guarantees are becoming more and more important, the Internet is evolving. In particular,
it is starting to acquire properties normally associated with connection-oriented service,
as we will see later.

Implementation of connectionless service


 Two different organizations are possible, depending on the type of service offered.
 If connectionless service is offered, packets are injected into the subnet individually and
routed independently of each other. No advance setup is needed. In this context, the
packets are frequently called datagrams (in analogy with telegrams) and the subnet is
called a datagram subnet.
 If connection-oriented service is used, a path from the source router to the destination
router must be established before any data packets can be sent. This connection is called
a VC (virtual circuit), in analogy with the physical circuits set up by the telephone
system, and the subnet is called a virtual- circuit subnet.

 Let us now see how a datagram subnet works. Suppose that the process P1 in Fig. 5-2
has a long message for P2. It hands the message to the transport layer with instructions
to deliver it to process P2 on host H2. The transport layer code runs on H1, typically
within the operating system. It prepends a transport header to the front of the message
and hands the result to the network layer, probably just another procedure within the
operating system.


Figure 5-2. Routing within a datagram subnet

 Let us assume that the message is four times longer than the maximum packet size, so
the network layer has to break it into four packets, 1, 2, 3, and 4, and send each of them
in turn to router A using some point-to-point protocol, for example, PPP.
 At this point the carrier takes over. Every router has an internal table telling it where
to send packets for each possible destination. Each table entry is a pair consisting of a
destination and the outgoing line to use for that destination. Only directly-connected lines
can be used.
 For example, in Fig. 5-2, A has only two outgoing lines—to B and C—so every
incoming packet must be sent to one of these routers, even if the ultimate destination is
some other router. A's initial routing table is shown in the figure under the label
''initially''. As they arrived at A, packets 1, 2, and 3 were stored briefly (to verify their
checksums). Then each was forwarded to C according to A's table. Packet 1 was then
forwarded to E and then to F. When it got to F, it was encapsulated in a data link layer
frame and sent to H2 over the LAN. Packets 2 and 3 follow the same route.
 However, something different happened to packet 4. When it got to A it was sent to
router B, even though it is also destined for F. For some reason, A decided to send
packet 4 via a different route than that of the first three. Perhaps it learned of a traffic
jam somewhere along the ACE path and updated its routing table, as shown under the
label ''later.''
 The algorithm that manages the tables and makes the routing decisions is called the
routing algorithm.
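The forwarding step just described can be sketched in a few lines (a toy Python model; the table entries are illustrative, not taken from the figure itself). Each entry pairs a destination with the outgoing line to use, and because every packet is routed independently, updating the table changes the route of the very next packet:

```python
# Sketch of router A's forwarding table for the topology of Fig. 5-2.
# Only directly connected lines (here B and C) may appear as outgoing lines.
initially = {"A": "-", "B": "B", "C": "C", "D": "B", "E": "C", "F": "C"}

# After learning of a traffic jam along the ACE path, A reroutes via B.
later = dict(initially, E="B", F="B")

def forward(table, destination):
    # Datagram forwarding: each packet is looked up independently, so the
    # chosen line depends only on the table contents at that moment.
    return table[destination]

print(forward(initially, "F"))  # 'C' -- the route taken by packets 1, 2, 3
print(forward(later, "F"))      # 'B' -- the route taken by packet 4
```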

Implementation of connection-oriented service

 For connection-oriented service, we need a virtual-circuit subnet. Let us see how that
works.


 The idea behind virtual circuits is to avoid having to choose a new route for every packet sent, as in Fig.
5-2. Instead, when a connection is established, a route from the source machine to the
destination machine is chosen as part of the connection setup and stored in tables inside the routers.
 That route is used for all traffic flowing over the connection, exactly the same way that the telephone system
works.
 When the connection is released, the virtual circuit is also terminated. With connection-oriented service, each
packet carries an identifier telling which virtual circuit it belongs to.
 As an example, consider the situation of Fig. 5-3. Here, host H1 has established connection 1 with host H2. It
is remembered as the first entry in each of the routing tables. The first line of A's table says that if a packet
bearing connection identifier 1 comes in from H1, it is to be sent to router C and given connection identifier 1.
Similarly, the first entry at C routes the packet to E, also with connection identifier 1.

Figure 5-3. Routing within a virtual-circuit subnet.

Now let us consider what happens if H3 also wants to establish a connection to H2. It chooses
connection identifier 1 (because it is initiating the connection and this is its only connection) and tells
the subnet to establish the virtual circuit. This leads to the second row in the tables. Note that we have a
conflict here because although A can easily distinguish connection 1 packets from H1 from connection 1
packets from H3, C cannot do this. For this reason, A assigns a different connection identifier to the
outgoing traffic for the second connection. Avoiding conflicts of this kind is why routers need the
ability to replace connection identifiers in outgoing packets. In some contexts, this is called label
switching.
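The label-replacement behavior can be sketched as follows (a minimal Python model, not any particular router's implementation). The key point is that the VC table is indexed by the pair (incoming line, incoming connection identifier), which is what lets C stay unambiguous even though both hosts chose identifier 1:

```python
# Sketch of router A's virtual-circuit table for Fig. 5-3.
# Key: (incoming line, incoming VC id) -> (outgoing line, outgoing VC id).
vc_table_A = {
    ("H1", 1): ("C", 1),  # H1's connection keeps identifier 1
    ("H3", 1): ("C", 2),  # H3 also chose 1, so A relabels it to 2
}

def switch(table, in_line, packet):
    # Label switching: replace the connection identifier and forward.
    out_line, out_vc = table[(in_line, packet["vc"])]
    packet["vc"] = out_vc
    return out_line, packet

print(switch(vc_table_A, "H3", {"vc": 1, "data": "..."}))
# ('C', {'vc': 2, 'data': '...'}) -- no clash with H1's packets at C
```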

Comparison of virtual-circuit and datagram subnets

The major issues are listed in Fig. 5-4, although purists could probably find a counterexample
for everything in the figure.


Figure 5-4. Comparison of datagram and virtual-circuit subnets.

Routing Algorithms

The routing algorithm is that part of the network layer software responsible for deciding which output
line an incoming packet should be transmitted on.

Properties of Routing Algorithm:


Correctness, simplicity, robustness, stability, fairness, and optimality.

Fairness and optimality may sound obvious, but as it turns out, they are often contradictory
goals. Consider, for example, a network in which there is enough traffic between A and A',
between B and B', and between C and C' to saturate the horizontal links. To maximize the total
flow, the X to X' traffic should be shut off altogether. Unfortunately, X and X' may not see it
that way. Evidently, some compromise between global efficiency and fairness to individual
connections is needed.

Category of Algorithm
 Routing algorithms can be grouped into two major classes: nonadaptive and adaptive.


 Nonadaptive algorithms do not base their routing decisions on measurements or estimates of
the current traffic and topology. Instead, the choice of the route to use to get from I to J is
computed in advance, off-line, and downloaded to the routers when the network is booted.
 This procedure is sometimes called static routing.
 Adaptive algorithms, in contrast, change their routing decisions to reflect changes in the
topology, and usually the traffic as well.
 This procedure is sometimes called dynamic routing.

The Optimality Principle


 If router J is on the optimal path from router I to router K, then the optimal path from J to K also
falls along the same route.
 The set of optimal routes from all sources to a given destination form a tree rooted at the destination.
Such a tree is called a sink tree.

(a) A subnet. (b) A sink tree for router B.

 In the figure, the distance metric is the number of hops. Note that a sink tree is not
necessarily unique; other trees with the same path lengths may exist.
 The goal of all routing algorithms is to discover and use the sink trees for all routers.

Shortest path routing


• A technique to study routing algorithms: The idea is to build a graph of the subnet, with each
node of the graph representing a router and each arc of the graph representing a
communication line (often called a link).
• To choose a route between a given pair of routers, the algorithm just finds the shortest path
between them on the graph.
• One way of measuring path length is the number of hops. Another metric is the geographic
distance in kilometers. Many other metrics are also possible. For example, each arc could be
labeled with the mean queuing and transmission delay for some standard test packet as determined
by hourly test runs.
• In the general case, the labels on the arcs could be computed as a function of the distance, bandwidth,
average traffic, communication cost, mean queue length, measured delay, and other factors. By


changing the weighting function, the algorithm would then compute the ''shortest'' path measured
according to any one of a number of criteria or to a combination of criteria.

The first five steps used in computing the shortest path from A to D. The arrows indicate the working node.
 To illustrate how the labelling algorithm works, look at the weighted, undirected graph
of Fig. 5-7(a), where the weights represent, for example, distance.
 We want to find the shortest path from A to D. We start out by marking node A as
permanent, indicated by a filled-in circle.
 Then we examine, in turn, each of the nodes adjacent to A (the working node), relabeling each
one with the distance to A.
 Whenever a node is relabelled, we also label it with the node from which the probe was made so
that we can reconstruct the final path later.
 Having examined each of the nodes adjacent to A, we examine all the tentatively labelled
nodes in the whole graph and make the one with the smallest label permanent, as shown in
Fig. 5-7(b).
 This one becomes the new working node.

We now start at B and examine all nodes adjacent to it. If the sum of the label on B and the distance from
B to the node being considered is less than the label on that node, we have a shorter path, so the
node is relabelled.
After all the nodes adjacent to the working node have been inspected and the tentative labels changed if
possible, the entire graph is searched for the tentatively-labelled node with the smallest value. This node
is made permanent and becomes the working node for the next round. Figure 5-7 shows the first five
steps of the algorithm.


 To see why the algorithm works, look at Fig. 5-7(c). At that point we have just made E
permanent. Suppose that there were a shorter path than ABE, say AXYZE. There are two
possibilities: either node Z has already been made permanent, or it has not been. If it has, then
E has already been probed (on the round following the one when Z was made permanent), so
the AXYZE path has not escaped our attention and thus cannot be a shorter path.
 Now consider the case where Z is still tentatively labelled. Either the label at Z is greater than
or equal to that at E, in which case AXYZE cannot be a shorter path than ABE, or it is less
than that of E, in which case Z and not E will become permanent first, allowing E to be
probed from Z.
 This algorithm is given in Fig. 5-8. The global variables n and dist describe the graph
and are initialized before shortest path is called. The only difference between the program and
the algorithm described above is that in Fig. 5-8, we compute the shortest path starting at the
terminal node, t, rather than at the source node, s. Since the shortest path from t to s in an
undirected graph is the same as the shortest path from s to t, it does not matter at which end we
begin (unless there are several shortest paths, in which case reversing the search might discover
a different one). The reason for searching backward is that each node is labelled with its
predecessor rather than its successor. When the final path is copied into the output variable,
path, the path is thus reversed. By reversing the search, the two effects cancel, and the answer
is produced in the correct order.
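A compact Python rendering of the labelling algorithm (Dijkstra's algorithm) may help. Unlike the program of Fig. 5-8, this sketch searches forward from the source and uses a heap instead of a linear scan to find the smallest tentative label; the example graph's weights are illustrative, not copied from the figure:

```python
import heapq

def shortest_path(graph, source, dest):
    """Dijkstra's labelling algorithm. `graph` maps each node to a dict of
    {neighbor: arc weight}. Returns (distance, path) from source to dest."""
    dist = {source: 0}
    prev = {}                          # predecessor labels, to rebuild the path
    permanent = set()
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node in permanent:
            continue                   # stale tentative label
        permanent.add(node)            # smallest tentative label becomes permanent
        if node == dest:
            break
        for neighbor, weight in graph[node].items():
            nd = d + weight
            if neighbor not in permanent and nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd    # relabel with the shorter distance
                prev[neighbor] = node  # remember where the probe was made from
                heapq.heappush(heap, (nd, neighbor))
    path, node = [], dest
    while node != source:              # walk the predecessor labels backward
        path.append(node)
        node = prev[node]
    path.append(source)
    return dist[dest], path[::-1]

# A weighted, undirected graph in the spirit of Fig. 5-7 (weights illustrative).
graph = {
    "A": {"B": 2, "G": 6},
    "B": {"A": 2, "C": 7, "E": 2},
    "C": {"B": 7, "D": 3},
    "D": {"C": 3, "H": 2},
    "E": {"B": 2, "F": 2, "G": 1},
    "F": {"E": 2, "H": 2},
    "G": {"A": 6, "E": 1, "H": 4},
    "H": {"D": 2, "F": 2, "G": 4},
}
print(shortest_path(graph, "A", "D"))  # (10, ['A', 'B', 'E', 'F', 'H', 'D'])
```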

Flooding

 Another static algorithm is flooding, in which every incoming packet is sent out on every outgoing
line except the one it arrived on.
 Flooding obviously generates vast numbers of duplicate packets, in fact, an infinite number unless
some measures are taken to damp the process.
 One such measure is to have a hop counter contained in the header of each packet, which is decremented
at each hop, with the packet being discarded when the counter reaches zero.
 Ideally, the hop counter should be initialized to the length of the path from source to destination.
If the sender does not know how long the path is, it can initialize the counter to the worst case,
namely, the full diameter of the subnet.
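The relay decision at one router can be sketched as below (Python; the topology is illustrative). The hop counter damps the flood: a packet whose counter reaches zero is discarded rather than relayed:

```python
# Illustrative adjacency lists: router -> directly connected routers.
topology = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}

def flood(router, in_line, packet):
    packet = dict(packet)
    packet["hops"] -= 1            # decremented at each hop
    if packet["hops"] <= 0:
        return []                  # counter exhausted: discard the packet
    # Send out on every line except the one the packet arrived on.
    return [(n, dict(packet)) for n in topology[router] if n != in_line]

# If the path length is unknown, initialize the counter to the subnet diameter.
print(flood("B", "A", {"hops": 3, "data": "..."}))
# [('D', {'hops': 2, 'data': '...'})]
```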

Distance vector routing

 Distance vector routing algorithms operate by having each router maintain a table (i.e, a vector) giving
the best known distance to each destination and which line to use to get there.
 These tables are updated by exchanging information with the neighbors.
The distance vector routing algorithm is sometimes called by other names, most commonly
the distributed Bellman-Ford routing algorithm and the Ford-Fulkerson algorithm, after the
researchers who developed it (Bellman, 1957; Ford and Fulkerson, 1962).
 It was the original ARPANET routing algorithm and was also used in the Internet under the name RIP.


(a) A subnet. (b) Input from A, I, H, K, and the new routing table for J.

 Part (a) shows a subnet. The first four columns of part (b) show the delay vectors received from
the neighbours of router J.
 A claims to have a 12-msec delay to B, a 25-msec delay to C, a 40-msec delay to D, etc.
Suppose that J has measured or estimated its delay to its neighbours, A, I, H, and K as 8, 10,
12, and 6 msec, respectively. Each node constructs a one-dimensional array containing the
"distances"(costs) to all other nodes and distributes that vector to its immediate neighbors.

1. The starting assumption for distance-vector routing is that each node knows the cost of the
link to each of its directly connected neighbors.
2. A link that is down is assigned an infinite cost.

Example


Table 1. Initial distances stored at each node (global view).

Information          Distance to Reach Node
stored at node       A    B    C    D    E    F    G
A                    0    1    1    ∞    1    1    ∞
B                    1    0    1    ∞    ∞    ∞    ∞
C                    1    1    0    1    ∞    ∞    ∞
D                    ∞    ∞    1    0    ∞    ∞    1
E                    1    ∞    ∞    ∞    0    ∞    ∞
F                    1    ∞    ∞    ∞    ∞    0    1
G                    ∞    ∞    ∞    1    ∞    1    0

We can represent each node's knowledge about the distances to all other nodes as a table like the one given
in Table 1.

Note that each node only knows the information in one row of the table. Every node sends a
message to its directly connected neighbors containing its personal list of distances. (For
example, A sends its information to its neighbors B, C, E, and F.)
1. If any of the recipients of the information from A find that A is advertising a path shorter
than the one they currently know about, they update their list to give the new path length and
note that they should send packets for that destination through A. (Node B learns from A that
node E can be reached at a cost of 1; B also knows it can reach A at a cost of 1, so it adds
these to get the cost of reaching E by means of A. B records that it can reach E at a cost
of 2 by going through A.)
2. After every node has exchanged a few updates with its directly connected neighbors, all
nodes will know the least-cost path to all the other nodes.
3. In addition to updating their list of distances when they receive updates, the nodes need to
keep track of which node told them about the path that they used to calculate the cost, so that
they can create their forwarding table. (For example, B knows that it was A who said "I
can reach E in one hop," and so B puts an entry in its table that says "To reach E, use the
link to A.")
Table 2. Final distances stored at each node (global view).

Information          Distance to Reach Node
stored at node       A    B    C    D    E    F    G
A                    0    1    1    2    1    1    2
B                    1    0    1    2    2    2    3
C                    1    1    0    1    2    2    2
D                    2    2    1    0    3    2    1
E                    1    2    2    3    0    2    3
F                    1    2    2    2    2    0    1
G                    2    3    2    1    3    1    0

In practice, each node's forwarding table consists of a set of triples of the form (Destination,
Cost, NextHop).
For example, Table 3 shows the complete routing table maintained at node B for the network in
Figure 1.

Table 3. Routing table maintained at node B.

Destination Cost NextHop


A 1 A

C 1 C

D 2 C

E 2 A

F 2 A

G 3 A
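One round of the table construction can be sketched as the distributed Bellman-Ford update below (a simplified Python model; real protocols such as RIP add timers, triggered updates, and loop-avoidance measures). Run at node B with the Table 1 vectors from its neighbors A and C, it reproduces most of Table 3; the entry for G appears only after further exchange rounds:

```python
INF = float("inf")

def dv_update(me, link_costs, neighbor_vectors):
    """One distance-vector round at a single node.
    link_costs: {neighbor: cost of the direct link}
    neighbor_vectors: {neighbor: {destination: that neighbor's distance}}
    Returns {destination: (cost, next_hop)} -- (Destination, Cost, NextHop)."""
    table = {}
    for neighbor, link in link_costs.items():
        for dest, d in neighbor_vectors[neighbor].items():
            if dest == me:
                continue                      # distance to self is always 0
            cost = link + d                   # cost of going via this neighbor
            if cost < table.get(dest, (INF, None))[0]:
                table[dest] = (cost, neighbor)
    return table

# Node B: direct links to A and C, each of cost 1 (rows taken from Table 1).
vectors = {
    "A": {"A": 0, "B": 1, "C": 1, "D": INF, "E": 1, "F": 1, "G": INF},
    "C": {"A": 1, "B": 1, "C": 0, "D": 1, "E": INF, "F": INF, "G": INF},
}
print(dv_update("B", {"A": 1, "C": 1}, vectors))
# {'A': (1, 'A'), 'C': (1, 'C'), 'E': (2, 'A'), 'F': (2, 'A'), 'D': (2, 'C')}
```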

THE COUNT-TO-INFINITY PROBLEM

The count-to-infinity problem.


 Consider the five-node (linear) subnet of Fig. 5-10, where the delay metric is the number of
hops. Suppose A is down initially and all the other routers know this. In other words, they have
all recorded the delay to A as infinity.
 Now let us consider the situation of Fig. 5-10(b), in which all the lines and routers are initially
up. Routers B, C, D, and E have distances to A of 1, 2, 3, and 4, respectively. Suddenly A goes
down, or alternatively, the line between A and B is cut, which is effectively the same thing from
B's point of view. B no longer hears from A directly, but C still advertises a distance of 2 to A,
so B wrongly concludes it can reach A via C at distance 3. On the next exchange C raises its
distance to 4, then B raises its to 5, and so on: the routers count upward toward whatever value
stands for infinity. This slow reaction to bad news is the count-to-infinity problem.

Link state routing

The idea behind link state routing is simple and can be stated as five parts. Each router must
do the following:
1. Discover its neighbors and learn their network addresses.
2. Measure the delay or cost to each of its neighbors.
3. Construct a packet telling all it has just learned.
4. Send this packet to all other routers.
5. Compute the shortest path to every other router.

Learning about the Neighbours

When a router is booted, its first task is to learn who its neighbours are. It accomplishes this goal by
sending a special HELLO packet on each point-to-point line. The router on the other end is expected to send
back a reply telling who it is.

Figure: (a) Nine routers and a LAN. (b) A graph model of (a).

Measuring Line Cost


 The link state routing algorithm requires each router to know, or at least have a reasonable
estimate of, the delay to each of its neighbors. The most direct way to determine this delay is to
send over the line a special ECHO packet that the other side is required to send back immediately.
 By measuring the round-trip time and dividing it by two, the sending router can get a
reasonable estimate of the delay.
 For even better results, the test can be conducted several times, and the average used. Of course,
this method implicitly assumes the delays are symmetric, which may not always be the case.

Figure: A subnet in which the East and West parts are connected by two lines.

 Unfortunately, there is also an argument against including the load in the delay calculation.
Consider the subnet of Fig. 5-12, which is divided into two parts, East and West, connected by
two lines, CF and EI.

Building Link State Packets

(a) A subnet. (b) The link state packets for this subnet.
 Once the information needed for the exchange has been collected, the next step is for each router to
build a packet containing all the data.
 The packet starts with the identity of the sender, followed by a sequence number and age (to
be described later), and a list of neighbours.
 For each neighbour, the delay to that neighbour is given.
 An example subnet is given in Fig. 5-13(a) with delays shown as labels on the lines. The
corresponding link state packets for all six routers are shown in Fig. 5-13(b).
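A link state packet can be modeled as a small record (a Python sketch; the field layout and delay values are illustrative). The sequence number lets other routers recognize duplicates and out-of-date information, and the age field ensures a packet cannot circulate forever:

```python
def build_lsp(router, seq, neighbor_delays, max_age=60):
    """Construct a link state packet: the sender's identity, a sequence
    number, an age, and the measured (neighbor, delay) list."""
    return {
        "source": router,
        "seq": seq,        # incremented for each new packet from this router
        "age": max_age,    # decremented in transit; stale packets are discarded
        "links": dict(neighbor_delays),
    }

# Router B in the style of Fig. 5-13: neighbors A, C, and F.
print(build_lsp("B", seq=1, neighbor_delays={"A": 4, "C": 2, "F": 6}))
```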


Distributing the Link State Packets

The packet buffer for router B in Fig. 5-13.


 In Fig. 5-14, the link state packet from A arrives directly, so it must be sent to C and F
and acknowledged to A, as indicated by the flag bits.
 Similarly, the packet from F has to be forwarded to A and C and acknowledged to F.

Hierarchical routing

• The routers are divided into what we will call regions, with each router knowing all the details
about how to route packets to destinations within its own region, but knowing nothing about the
internal structure of other regions.
• For huge networks, a two-level hierarchy may be insufficient; it may be necessary to group
the regions into clusters, the clusters into zones, the zones into groups, and so on, until we run
out of names for aggregations.


 Figure 5-15 gives a quantitative example of routing in a two-level hierarchy with five regions.
 The full routing table for router 1A has 17 entries, as shown in Fig. 5-15(b).
 When routing is done hierarchically, as in Fig. 5-15(c), there are entries for all the local routers
as before, but all other regions have been condensed into a single router, so all traffic for region
2 goes via the 1B-2A line, but the rest of the remote traffic goes via the 1C-3B line.
 Hierarchical routing has reduced the table from 17 to 7 entries. As the ratio of the number of
regions to the number of routers per region grows, the savings in table space increase.
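The condensation can be sketched with a lookup table (Python; the entries and hop counts are illustrative, and router names are assumed to encode their region as the leading character). The hierarchical table keeps one entry per local router but only one entry per remote region:

```python
# Hierarchical table for router 1A: local routers individually, remote
# regions condensed into single entries (line, hops).
hierarchical_1A = {
    "1B": ("1B", 1), "1C": ("1C", 1),   # local routers, as before
    "2":  ("1B", 2),                    # all of region 2 via the 1B-2A line
    "3":  ("1C", 2),                    # remaining remote traffic via 1C-3B
    "4":  ("1C", 3),
    "5":  ("1C", 4),
}

def lookup(table, dest):
    # Try an exact (local) entry first, then fall back to the region entry.
    return table.get(dest) or table[dest[0]]

print(lookup(hierarchical_1A, "1C"))  # ('1C', 1): a local entry
print(lookup(hierarchical_1A, "2D"))  # ('1B', 2): the condensed region entry
```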

Congestion control algorithms

When too many packets are present in (a part of) the subnet, performance degrades. This
situation is called congestion.
 Figure 5-25 depicts the symptom. When the number of packets dumped into the subnet by the
hosts is within its carrying capacity, they are all delivered (except for a few that are
afflicted with transmission errors) and the number delivered is proportional to the number sent.
 However, as traffic increases too far, the routers are no longer able to cope and they begin
losing packets. This tends to make matters worse. At very high traffic, performance collapses
completely and almost no packets are delivered.

Figure 5-25. When too much traffic is offered, congestion sets in and performance degrades sharply.

Congestion can be brought on by several factors. If all of a sudden, streams of packets begin arriving on
three or four input lines and all need the same output line, a queue will build up.
 If there is insufficient memory to hold all of them, packets will be lost.
 Slow processors can also cause congestion. If the routers' CPUs are slow at
performing the bookkeeping tasks required of them (queuing buffers, updating tables, etc.),
queues can build up, even though there is excess line capacity. Similarly, low-bandwidth lines
can also cause congestion.


Approaches to congestion control


 Many problems in complex systems, such as computer networks, can be viewed from a control
theory point of view. This approach leads to dividing all solutions into two groups: open loop and
closed loop.

Figure: Timescales of approaches to congestion control

 Open loop solutions attempt to solve the problem by good design.

Tools for doing open-loop control include deciding when to accept new traffic, deciding when
to discard packets and which ones, and making scheduling decisions at various points in the network.
 Closed loop solutions are based on the concept of a feedback loop.
 This approach has three parts when applied to congestion control:
1. Monitor the system to detect when and where congestion occurs.
2. Pass this information to places where action can be taken.
3. Adjust system operation to correct the problem.
 A variety of metrics can be used to monitor the subnet for congestion. Chief among these are
the percentage of all packets discarded for lack of buffer space, the average queue lengths, the
number of packets that time out and are retransmitted, the average packet delay, and the standard
deviation of packet delay. In all cases, rising numbers indicate growing congestion.
 The second step in the feedback loop is to transfer the information about the congestion from the point
where it is detected to the point where something can be done about it.
 In all feedback schemes, the hope is that knowledge of congestion will cause the hosts to
take appropriate action to reduce the congestion.
 The presence of congestion means that the load is (temporarily) greater than the resources (in part
of the system) can handle. Two solutions come to mind: increase the resources or decrease the load.

Traffic aware routing

The routing schemes discussed earlier adapted to changes in topology, but not to changes in load. The goal in taking load into account
when computing routes is to shift traffic away from hotspots that will be the first places in the network to
experience congestion.

The most direct way to do this is to set the link weight to be a function of the (fixed) link bandwidth
and propagation delay plus the (variable) measured load or average queuing delay. Least-weight paths
will then favor paths that are more lightly loaded, all else being equal.


Consider the network of Fig. 5-23, which is divided into two parts, East and West, connected by two
links, CF and EI. Suppose that most of the traffic between East and West is using link CF, and, as a
result, this link is heavily loaded with long delays. Including queueing delay in the weight used for
the shortest path calculation will make EI more attractive. After the new routing tables have been
installed, most of the East- West traffic will now go over EI, loading this link. Consequently, in the next
update, CF will appear to be the shortest path. As a result, the routing tables may oscillate wildly,
leading to erratic routing and many potential problems.

If load is ignored and only bandwidth and propagation delay are considered, this problem does not occur.
Attempts to include load but change weights within a narrow range only slow down routing oscillations.
Two techniques can contribute to a successful solution. The first is multipath routing, in which there
can be multiple paths from a source to a destination. In our example this means that the traffic can be
spread across both of the East to West links. The second one is for the routing scheme to shift
traffic across routes slowly enough that it is able to converge.

Admission control
 One technique that is widely used to keep congestion that has already started from getting worse
is admission control.
 Once congestion has been signaled, no more virtual circuits are set up until the problem has gone away.
 An alternative approach is to allow new virtual circuits but carefully route all new virtual
circuits around problem areas. For example, consider the subnet of Fig. 5-27(a), in which two
routers are congested, as indicated.


Figure 5-27. (a) A congested subnet. (b) A redrawn subnet that eliminates the congestion.
A virtual circuit from A to B is also shown.

Suppose that a host attached to router A wants to set up a connection to a host attached to router B.
Normally, this connection would pass through one of the congested routers. To avoid this situation, we
can redraw the subnet as shown in Fig. 5-2/(b), omitting the congested routers and all of their lines.
The dashed line shows a possible route for the virtual circuit that avoids the congested routers.

Traffic throttling

Each router can easily monitor the utilization of its output lines and other resources. For
example, it can associate with each line a real variable, u, whose value, between 0.0 and 1.0,
reflects the recent utilization of that line. To maintain a good estimate of u, a sample of the
instantaneous line utilization, f (either 0 or 1), can be made periodically and u updated
according to

u_new = a * u_old + (1 - a) * f

where the constant a determines how fast the router forgets recent history.

Whenever u moves above the threshold, the output line enters a ''warning'' state. Each newly
arriving packet is checked to see if its output line is in the warning state. If it is, some action
is taken. The action taken can be one of several alternatives, which we will now discuss.
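The utilization estimate and the warning check can be sketched as below (Python; the smoothing constant and threshold values are illustrative assumptions):

```python
WARNING_THRESHOLD = 0.8           # illustrative trigger level

def update_utilization(u, f, a=0.5):
    """One periodic update: f is the instantaneous sample (0 or 1);
    the constant a controls how fast the router forgets recent history."""
    return a * u + (1 - a) * f

u = 0.0
for f in [1, 1, 1, 0, 1, 1, 1, 1]:           # samples from a mostly busy line
    u = update_utilization(u, f)
    state = "WARNING" if u > WARNING_THRESHOLD else "ok"
    print(f"u = {u:.3f}  {state}")           # crosses 0.8 on the third sample
```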

THE WARNING BIT


 The old DECNET architecture signaled the warning state by setting a special bit in the packet's
header.
 When the packet arrived at its destination, the transport entity copied the bit into the next
acknowledgement sent back to the source. The source then cut back on traffic.
 As long as the router was in the warning state, it continued to set the warning bit, which meant
that the source continued to get acknowledgements with it set.
 The source monitored the fraction of acknowledgements with the bit set and adjusted its
transmission rate accordingly. As long as the warning bits continued to flow in, the source
continued to decrease its transmission rate. When they slowed to a trickle, it increased its
transmission rate.
 Note that since every router along the path could set the warning bit, traffic increased only when
no router was in trouble.


Choke Packets
 In this approach, the router sends a choke packet back to the source host, giving it the
destination found in the packet.
 The original packet is tagged (a header bit is turned on) so that it will not generate any more
choke packets farther along the path and is then forwarded in the usual way.
 When the source host gets the choke packet, it is required to reduce the traffic sent to the
specified destination by X percent. Since other packets aimed at the same destination are probably
already under way and will generate yet more choke packets, the host should ignore choke
packets referring to that destination for a fixed time interval. After that period has expired, the
host listens for more choke packets for another interval. If one arrives, the line is still congested,
so the host reduces the flow still more and begins ignoring choke packets again. If no choke
packets arrive during the listening period, the host may increase the flow again.
 The feedback implicit in this protocol can help prevent congestion yet not throttle any flow
unless trouble occurs.
 Hosts can reduce traffic by adjusting their policy parameters.
 Increases are done in smaller increments to prevent congestion from reoccurring quickly.
 Routers can maintain several thresholds. Depending on which threshold has been crossed, the
choke packet can contain a mild warning, a stern warning, or an ultimatum.
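The host side of this protocol can be sketched as a small state machine (Python; the reduction factor X, the interval length, and the increase step are illustrative assumptions, and real hosts typically realize the cut by adjusting policy parameters such as their window size):

```python
class ChokeResponder:
    """Per-destination reaction to choke packets: cut the rate, ignore
    further choke packets for an interval, then listen again."""

    def __init__(self, rate, cut=0.5, ignore_interval=2.0):
        self.rate = rate                    # current sending rate
        self.cut = cut                      # reduce by X percent (here 50%)
        self.ignore_interval = ignore_interval
        self.ignore_until = 0.0

    def on_choke(self, now):
        if now < self.ignore_until:
            return                          # packets already under way; ignore
        self.rate *= self.cut               # still congested: reduce again
        self.ignore_until = now + self.ignore_interval

    def on_quiet_listen_period(self):
        self.rate *= 1.1                    # increase in smaller increments

r = ChokeResponder(rate=100.0)
r.on_choke(now=0.0)
r.on_choke(now=1.0)   # falls inside the ignore window, so it has no effect
print(r.rate)         # 50.0
```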

HOP-BY-HOP BACK PRESSURE


 At high speeds or over long distances, sending a choke packet to the source hosts does not work
well because the reaction is so slow.
Consider, for example, a host in San Francisco (router A in Fig. 5-28) that is sending traffic to a
host in New York (router D in Fig. 5-28) at 155 Mbps. If the New York host begins to run out of
buffers, it will take about 30 msec for a choke packet to get back to San Francisco to tell it to slow down.
The choke packet propagation is shown as the second, third, and fourth steps in Fig. 5-28(a). In
those 30 msec, another 4.6 megabits will have been sent. Even if the host in San Francisco
completely shuts down immediately, the 4.6 megabits in the pipe will continue to pour in and
have to be dealt with. Only in the seventh diagram in Fig. 5-28(a) will the New York router
notice a slower flow.


Figure 5-28. (a) A choke packet that affects only the source. (b) A choke packet that affects each hop it
passes through.
An alternative approach is to have the choke packet take effect at every hop it passes through, as shown in
the sequence of Fig. 5-28(b). Here, as soon as the choke packet reaches F, F is required to reduce the flow
to D. Doing so will require F to devote more buffers to the flow, since the source is still sending away at
full blast, but it gives D immediate relief, like a headache remedy in a television commercial. In the next
step, the choke packet reaches E, which tells E to reduce the flow to F. This action puts a greater
demand on E's buffers but gives F immediate relief. Finally, the choke packet reaches A and the flow
genuinely slows down. The net effect of this hop-by-hop scheme is to provide quick relief at the point of
congestion at the price of using up more buffers upstream. In this way, congestion can be nipped in the
bud without losing any packets.

Load Shedding
 When none of the above methods make the congestion disappear, routers can bring out the
heavy artillery: load shedding.
 Load shedding is a fancy way of saying that when routers are being inundated with packets
that they cannot handle, they just throw them away.
 A router drowning in packets can just pick packets at random to drop, but usually it can do better
than that.
 Which packet to discard may depend on the applications running.
 To implement an intelligent discard policy, applications must mark their packets in priority
classes to indicate how important they are. If they do this, then when packets have to be discarded,
routers can first drop packets from the lowest class, then the next lowest class, and so on.


Random Early Detection


 It is well known that dealing with congestion after it is first detected is more effective than letting
it gum up the works and then trying to deal with it. This observation leads to the idea of
discarding packets before all the buffer space is really exhausted. A popular algorithm for doing
this is called RED (Random Early Detection).
 In some transport protocols (including TCP), the response to lost packets is for the source to slow
down. The reasoning behind this logic is that TCP was designed for wired networks and wired
networks are very reliable, so lost packets are mostly due to buffer overruns rather than
transmission errors. This fact can be exploited to help reduce congestion.
 By having routers drop packets before the situation has become hopeless (hence the ''early'' in
the name), the idea is that there is time for action to be taken before it is too late. To determine
when to start discarding, routers maintain a running average of their queue lengths. When the
average queue length on some line exceeds a threshold, the line is said to be congested and action
is taken.
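The RED decision at enqueue time can be sketched as below (Python; the thresholds, drop probability, and averaging weight are illustrative parameters, and real implementations add refinements such as counting packets since the last drop):

```python
import random

class REDQueue:
    """Random Early Detection: track a running average of the queue length
    and start dropping probabilistically once it crosses a threshold,
    before the buffer is actually full."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, capacity=20, w=0.2):
        self.queue, self.avg = [], 0.0
        self.min_th, self.max_th, self.max_p = min_th, max_th, max_p
        self.capacity, self.w = capacity, w

    def enqueue(self, packet):
        # Exponentially weighted running average of the queue length.
        self.avg = (1 - self.w) * self.avg + self.w * len(self.queue)
        if len(self.queue) >= self.capacity or self.avg >= self.max_th:
            return "dropped (congested)"
        if self.avg >= self.min_th:
            # Early random drop; the probability grows with the average,
            # and a TCP source that loses the packet will slow down.
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            if random.random() < p:
                return "dropped early"
        self.queue.append(packet)
        return "queued"
```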

Quality of Service

Quality of service (QoS) is an internetworking issue that has been discussed more than
defined. We can informally define quality of service as something a flow seeks to attain.

Flow Characteristics
Traditionally, four types of characteristics are attributed to a flow: reliability, delay, jitter,
and bandwidth, as shown in Figure 24.15.

Reliability

Reliability is a characteristic that a flow needs. Lack of reliability means losing a packet or
acknowledgment, which entails retransmission. However, the sensitivity of application
programs to reliability is not the same. For example, it is more important that electronic mail,
file transfer, and Internet access have reliable transmissions than telephony or audio
conferencing.

Delay

Source-to-destination delay is another flow characteristic. Again, applications can tolerate
delay in different degrees. In this case, telephony, audio conferencing, video conferencing,
and remote log-in need minimum delay, while delay in file transfer or e-mail is less
important.


JITTER
The variation (i.e., standard deviation) in the packet arrival times is called jitter. High jitter,
for example, having some packets take 20 msec and others 30 msec to arrive, will give an
uneven quality to the sound or movie. Jitter is illustrated in Fig. 5-29. In contrast, an
agreement that 99 percent of the packets be delivered with a delay in the range of 24.5 msec
to 25.5 msec might be acceptable.

Figure 5-29. (a) High jitter. (b) Low jitter.


 The jitter can be bounded by computing the expected transit time for each hop along the
path. When a packet arrives at a router, the router checks to see how much the packet is
behind or ahead of its schedule. This information is stored in the packet and updated at
each hop. If the packet is ahead of schedule, it is held just long enough to get it back on
schedule. If it is behind schedule, the router tries to get it out the door quickly.
 In fact, the algorithm for determining which of several packets competing for an output
line should go next can always choose the packet furthest behind in its schedule.
 In this way, packets that are ahead of schedule get slowed down and packets that are
behind schedule get speeded up, in both cases reducing the amount of jitter.
 In some applications, such as video on demand, jitter can be eliminated by buffering at
the receiver and then fetching data for display from the buffer instead of from the
network in real time. However, for other applications, especially those that require real-
time interaction between people such as Internet telephony and videoconferencing, the
delay inherent in buffering is not acceptable.

Bandwidth

Different applications need different bandwidths. In video conferencing we need to send
millions of bits per second to refresh a color screen, while the total number of bits in an
e-mail may not reach even a million.

Flow Classes

Based on the flow characteristics, we can classify flows into groups, with each group having
similar levels of characteristics. This categorization is not formal or universal; some
protocols such as ATM have defined classes, as we will see later.

TECHNIQUES TO IMPROVE QOS

In Section 24.5 we tried to define QoS in terms of its characteristics. In this section, we
discuss some techniques that can be used to improve the quality of service. Common methods
include scheduling, traffic shaping, admission control, and resource reservation.


1. Over Provisioning
2. Buffering
3. Traffic Shaping
The Leaky Bucket Algorithm
The Token Bucket Algorithm
4. Resource Reservation
5. Admission Control
6. Packet Scheduling

1. Over provisioning
• Providing plenty of router capacity, buffer space, and bandwidth so that packets fly through
easily.
• Problem: expensive.
2. Buffering
• Flows are buffered on the receiving side before being delivered.
• Problem: increases delay. Advantage: smooths out jitter.

3. Traffic Shaping
Refer to the Traffic Shaping concept later in this unit.

4. Resource reservation
Once a specific route for a flow is established, it is possible to reserve resources along that
route to make sure the needed capacity is available. Three different kinds of resources can
potentially be reserved: bandwidth, buffer space, and CPU cycles.
Bandwidth
If a flow requires 1 Mbps and the outgoing line has a capacity of 2 Mbps, the flow can be
routed through this line.
Buffer space
An arriving packet is deposited on the network interface card by the hardware itself. The
router software copies it to a buffer in RAM and queues that buffer for transmission on the
chosen outgoing line. If no buffer is available, the packet has to be discarded.
For good quality of service, some buffers can be reserved for a specific flow so that the flow
does not have to compete for buffers with other flows.
CPU cycles
A router takes CPU time to process a packet. Making sure that the CPU is not overloaded is
needed to ensure timely processing of each packet.


5. Admission Control

 Admission control refers to the mechanism used by a router, or a switch, to accept or
reject a flow based on predefined parameters called flow specifications.
 Before a router accepts a flow for processing, it checks the flow specifications to see
if its capacity (in terms of bandwidth, buffer size, CPU speed, etc.) and its previous
commitments to other flows can handle the new flow.
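A minimal sketch of the acceptance test (Python; the flow-specification fields and numbers are illustrative assumptions):

```python
def admit(flow_spec, capacity, commitments):
    """Accept a new flow only if the capacity left after all previous
    commitments can carry it on every parameter of the flow specification."""
    remaining = {k: capacity[k] - sum(f[k] for f in commitments)
                 for k in capacity}
    return all(flow_spec[k] <= remaining[k] for k in flow_spec)

capacity = {"bandwidth_mbps": 100, "buffers": 500}
commitments = [{"bandwidth_mbps": 40, "buffers": 200}]
print(admit({"bandwidth_mbps": 30, "buffers": 100}, capacity, commitments))  # True
print(admit({"bandwidth_mbps": 70, "buffers": 100}, capacity, commitments))  # False
```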

6. Packet Scheduling

Packets from different flows arrive at a switch or router for processing. A good scheduling
technique treats the different flows in a fair and appropriate manner. Several scheduling
techniques are designed to improve the quality of service. We discuss three of them here:
FIFO queuing, priority queuing, and weighted fair queuing.

FIFO Queuing

In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node (router or
switch) is ready to process them. If the average arrival rate is higher than the average
processing rate, the queue will fill up and new packets will be discarded. A FIFO queue is
familiar to those who have had to wait for a bus at a bus stop. Figure 24.16 shows a
conceptual view of a FIFO queue.

Priority Queuing

In priority queuing, packets are first assigned to a priority class. Each priority class has its
own queue. The packets in the highest-priority queue are processed first. Packets in the
lowest-priority queue are processed last. Note that the system does not stop serving a queue
until it is empty. Figure 24.17 shows priority queuing with two priority levels (for simplicity).


A priority queue can provide better QoS than the FIFO queue because higher priority traffic,
such as multimedia, can reach the destination with less delay. However, there is a potential
drawback. If there is a continuous flow in a high-priority queue, the packets in the
lower-priority queues will never have a chance to be processed. This is a condition called
starvation.

Weighted Fair Queuing

A better scheduling method is weighted fair queuing. In this technique, the packets are still
assigned to different classes and admitted to different queues. The queues, however, are
weighted based on the priority of the queues; higher priority means a higher weight. The
system processes packets in each queue in a round-robin fashion with the number of packets
selected from each queue based on the corresponding weight. For example, if the weights are
3, 2, and 1, three packets are processed from the first queue, two from the second queue, and
one from the third queue. If the system does not impose priority on the classes, all weights
can be equal. In this way, we have fair queuing with priority. Figure 24.18 shows the
technique with three classes.
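The 3/2/1 example can be sketched directly (Python; the packet names are illustrative). Each round visits the queues in order and takes up to `weight` packets from each:

```python
from collections import deque

def weighted_fair_order(queues, weights):
    """Serve the queues round-robin, up to `weight` packets each per round."""
    out = []
    while any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q:
                    out.append(q.popleft())
    return out

queues = [deque(["A1", "A2", "A3", "A4"]),
          deque(["B1", "B2", "B3", "B4"]),
          deque(["C1", "C2", "C3", "C4"])]
print(weighted_fair_order(queues, [3, 2, 1]))
# Round 1: A1 A2 A3 B1 B2 C1; round 2: A4 B3 B4 C2; then C3, C4
```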

Application Requirements

Definition: Applications require specific network performance metrics like bandwidth,
latency, and reliability for optimal functioning.
Critical Requirements:
Bandwidth: Ensures sufficient data transfer rates.
Latency: Low delay is critical for real-time applications (e.g., VoIP, video conferencing).
Jitter: Consistent packet delivery times prevent disruptions in streaming applications.
Packet Loss: Minimization ensures data integrity.
Categories of Applications:
Real-time Applications: Require low latency and jitter (e.g., VoIP, online gaming).
Non-Real-Time Applications: Can tolerate delays but need reliability (e.g., file transfers,
emails).
Importance of QoS for Applications:
 Prevents service disruptions by allocating resources based on priority.
 Supports evolving requirements for IoT and high-bandwidth applications.
Examples of Application-Specific QoS:
 IPTV: Requires high bandwidth and minimal jitter.


 Online Gaming: Demands low latency and high reliability.
 Video Conferencing: Needs high bandwidth, low jitter, and low packet loss.

Traffic Shaping

Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the
network. Two techniques can shape traffic: leaky bucket and token bucket.

Leaky Bucket

If a bucket has a small hole at the bottom, the water leaks from the bucket at a constant rate
as long as there is water in the bucket. The rate at which the water leaks does not depend on
the rate at which the water is input to the bucket unless the bucket is empty. The input rate
can vary, but the output rate remains constant. Similarly, in networking, a technique called
leaky bucket can smooth out bursty traffic. Bursty chunks are stored in the bucket and sent
out at an average rate. Figure 24.19 shows a leaky bucket and its effects.


In the figure, we assume that the network has committed a bandwidth of 3 Mbps for a host.
The use of the leaky bucket shapes the input traffic to make it conform to this commitment.
In Figure 24.19 the host sends a burst of data at a rate of 12 Mbps for 2 s, for a total of 24
Mbits of data. The host is silent for 5 s and then sends data at a rate of 2 Mbps for 3 s, for a
total of 6 Mbits of data. In all, the host has sent 30 Mbits of data in 10 s. The leaky bucket
smooths the traffic by sending out data at a rate of 3 Mbps during the same 10 s. Without the
leaky bucket, the beginning burst may have hurt the network by consuming more bandwidth
than is set aside for this host. We can also see that the leaky bucket may prevent congestion.
As an analogy, consider the freeway during rush hour (bursty traffic). If, instead, commuters
could stagger their working hours, congestion on our freeways could be avoided.

A simple leaky bucket implementation is shown in Figure 24.20. A FIFO queue holds the
packets. If the traffic consists of fixed-size packets (e.g., cells in ATM networks), the process
removes a fixed number of packets from the queue at each tick of the clock. If the traffic
consists of variable-length packets, the fixed output rate must be based on the number of
bytes or bits.


The following is an algorithm for variable-length packets:


1. Initialize a counter to n at the tick of the clock.
2. If n is greater than the size of the packet, send the packet and decrement the counter by the
packet size. Repeat this step until n is smaller than the packet size.
3. Reset the counter and go to step 1.
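These three steps translate directly into code (a Python sketch; the queue contents and the value of n are illustrative):

```python
from collections import deque

def leaky_bucket_tick(queue, n):
    """One clock tick for variable-length packets. Step 1: set a counter
    to n. Step 2: while the counter exceeds the head-of-line packet size,
    send the packet and decrement the counter. Step 3: the counter is
    effectively reset when the next tick calls this function again."""
    counter, sent = n, []
    while queue and counter > queue[0]:
        size = queue.popleft()
        counter -= size
        sent.append(size)
    return sent

queue = deque([200, 400, 500, 300])    # packet sizes in bytes
print(leaky_bucket_tick(queue, 1000))  # [200, 400]; the 500-byte packet waits
print(leaky_bucket_tick(queue, 1000))  # [500, 300]
```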

Token Bucket

The leaky bucket is very restrictive. It does not credit an idle host. For example, if a host is
not sending for a while, its bucket becomes empty. Now if the host has bursty data, the leaky
bucket allows only an average rate. The time when the host was idle is not taken into account.
On the other hand, the token bucket algorithm allows idle hosts to accumulate credit for the
future in the form of tokens. For each tick of the clock, the system sends n tokens to the
bucket. The system removes one token for every cell (or byte) of data sent. For example, if n
is 100 and the host is idle for 100 ticks, the bucket collects 10,000 tokens. Now the host can
consume all these tokens in one tick with 10,000 cells, or the host takes 1000 ticks with 10
cells per tick. In other words, the host can send bursty data as long as the bucket is not empty.
Figure 24.21 shows the idea.

The token bucket can easily be implemented with a counter. The counter is initialized to zero.
Each time a token is added, the counter is incremented by 1. Each time a unit of data is sent,
the counter is decremented by 1. When the counter is zero, the host cannot send data.
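The counter description above can be sketched as a small class (Python; the bucket capacity, which bounds how much credit an idle host may accumulate, is an added assumption, as is the batch-send interface):

```python
class TokenBucket:
    """Counter-based token bucket: n tokens arrive per clock tick, one
    token is spent per cell sent, and an idle host accumulates credit."""

    def __init__(self, n, capacity):
        self.n = n                   # tokens added per tick
        self.capacity = capacity     # assumed bucket size, bounds the burst
        self.tokens = 0              # the counter starts at zero

    def tick(self):
        self.tokens = min(self.tokens + self.n, self.capacity)

    def send(self, cells):
        """Send up to `cells` cells; returns how many actually went out."""
        sent = min(cells, self.tokens)
        self.tokens -= sent          # one token removed per cell
        return sent

tb = TokenBucket(n=100, capacity=10_000)
for _ in range(100):                 # host idle for 100 ticks
    tb.tick()
print(tb.send(10_000))               # 10000: the whole burst goes in one tick
```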

Combining Token Bucket and Leaky Bucket

 The two techniques can be combined to credit an idle host and at the same time
regulate the traffic.
 The leaky bucket is applied after the token bucket; the rate of the leaky bucket needs
to be higher than the rate of tokens dropped in the bucket.
