Introduction To IP and ATM Design and Performance
Introduction To IP and ATM Design and Performance
J M Pitts
J A Schormans
Queen Mary
University of London
UK
New York
Weinheim
Brisbane
Toronto
Singapore
First Edition published in 1996 as Introduction to ATM Design and Performance by John Wiley & Sons, Ltd.
Copyright 2000 by John Wiley & Sons, Ltd
Baffins Lane, Chichester,
West Sussex, PO19 1UD, England
National
01243 779777
International (C44) 1243 779777
e-mail (for orders and customer service enquiries): [email protected]
Visit our Home Page on http://www.wiley.co.uk or http://www.wiley.com
Reprinted March 2001
All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted,
in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except
under the terms of the Copyright Designs and Patents Act 1988 or under the terms of a licence issued by
the Copyright Licensing Agency, 90 Tottenham Court Road, London, W1P 9HE, UK, without the permission
in writing of the Publisher, with the exception of any material supplied specifically for the purpose of being
entered and executed on a computer system, for exclusive use by the purchaser of the publication.
Neither the authors nor John Wiley & Sons Ltd accept any responsibility or liability for loss or damage
occasioned to any person or property through using the material, instructions, methods or ideas contained
herein, or acting or refraining from acting as a result of such use. The author(s) and Publisher expressly disclaim
all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty
on the authors or Publisher to correct any errors or defects in the software.
Designations used by companies to distinguish their products are often claimed as trademarks. In all instances
where John Wiley & Sons is aware of a claim, the product names appear in initial capital or all capital
letters. Readers, however, should contact the appropriate companies for more complete information regarding
trademarks and registration.
Other Wiley Editorial Offices
John Wiley & Sons, Inc., 605 Third Avenue,
New York, NY 10158-0012, USA
Wiley-VCH Verlag GmbH
Pappelallee 3, D-69469 Weinheim, Germany
Jacaranda Wiley Ltd, 33 Park Road, Milton,
Queensland 4064, Australia
John Wiley & Sons (Canada) Ltd, 22 Worcester Road
Rexdale, Ontario, M9W 1L1, Canada
John Wiley & Sons (Asia) Pte Ltd, 2 Clementi Loop #02-01,
Jin Xing Distripark, Singapore 129809
To
Suzanne, Rebekah, Verity and Barnabas
Jacqueline, Matthew and Daniel
Contents
Preface
PART I
1
xi
INTRODUCTORY TOPICS
Circuit Switching
Packet Switching
Cell Switching and ATM
Connection-orientated Service
Connectionless Service and IP
Buffering in ATM switches and IP routers
Buffer Management
Traffic Control
3
5
7
8
9
11
11
13
15
15
Source models
Queueing behaviour
16
18
30
30
32
35
37
37
40
41
Teletraffic Engineering
45
Sharing Resources
Mesh and Star Networks
Traffic Intensity
Performance
TCP: Traffic, Capacity and Performance
Variation of Traffic Intensity
Erlangs Lost Call Formula
Traffic Tables
45
45
47
49
49
50
52
53
viii
CONTENTS
Performance Evaluation
Methods of Performance Evaluation
Measurement
Predictive evaluation: analysis/simulation
Queueing Theory
Notation
Elementary relationships
The M/M/1 queue
The M/D/1/K queue
Delay in the M/M/1 and M/D/1 queueing systems
69
71
73
74
75
77
77
77
Traffic Models
81
81
82
83
86
89
PART II
95
97
97
98
100
104
108
End-to-end delay
Cell-Scale Queueing
Cell-scale Queueing
Multiplexing Constant-bit-rate Traffic
Analysis of an Infinite Queue with Multiplexed CBR Input: The
ND/D/1
Heavy-traffic Approximation for the M/D/1 Queue
Heavy-traffic Approximation for the ND/D/1 Queue
Cell-scale Queueing in Switches
60
60
61
64
65
69
Cell-rate simulation
58
Fundamentals of Simulation
Accelerated Simulation
57
57
57
57
Burst-Scale Queueing
ATM Queueing Behaviour
Burst-scale Queueing Behaviour
110
113
113
114
115
117
119
121
125
125
127
ix
CONTENTS
10
11
150
151
152
153
155
157
159
160
160
161
165
167
167
168
172
Dimensioning
Combining The Burst and Cell Scales
Dimensioning The Buffer
Small buffers for cell-scale queueing
Large buffers for burst-scale queueing
13
149
12
129
129
131
136
139
141
145
Priority Control
Priorities
Space Priority and The Cell Loss Priority Bit
Partial Buffer Sharing
Increasing the admissible load
Dimensioning buffers for partial buffer sharing
172
173
176
178
182
182
184
187
187
190
193
198
200
205
205
205
207
214
215
218
219
CONTENTS
PART III
14
IP PERFORMANCE
AND TRAFFIC MANAGEMENT
15
Resource Reservation
229
230
231
234
236
238
239
245
253
253
254
255
257
259
IP Buffer Management
First-in First-out Buffering
Random Early Detection Probabilistic Packet Discard
Virtual Buffers and Scheduling Algorithms
17
229
16
227
261
265
267
267
267
273
Precedence queueing
Weighted fair queueing
273
274
275
279
Self-similar Traffic
287
287
289
292
293
References
299
Index
301
Preface
In recent years, we have taught design and performance evaluation
techniques to undergraduates and postgraduates in the Department
of Electronic Engineering at Queen Mary, University of London
(http://www.elec.qmw.ac.uk/) and to graduates on various University
of London M.Sc. courses for industry. We have found that many
engineers and students of engineering experience difficulty in making
sense of teletraffic issues. This is partly because of the subject itself:
the technologies and standards are flexible, complicated, and always
evolving. However, some of the difficulties arise because of the advanced
mathematical models that have been applied to IP and ATM analysis.
The research literature, and many books reporting on it, is full of
differing analytical approaches applied to a bewildering array of traffic
mixes, buffer management mechanisms, switch designs, and traffic and
congestion control algorithms.
To counter this trend, our book, which is intended for use by students
both at final-year undergraduate, and at postgraduate level, and by practising engineers in the telecommunications and Internet world, provides
an introduction to the design and performance issues surrounding IP
and ATM. We cover performance evaluation by analysis and simulation,
presenting key formulas describing traffic and queueing behaviour, and
practical examples, with graphs and tables for the design of IP and ATM
networks.
In line with our general approach, derivations are included where they
demonstrate an intuitively simple technique; alternatively we give the
formula (and a reference) and then show how to apply it. As a bonus,
the formulas are available as Mathcad files (see below for details) so
there is no need to program them for yourself. In fact, many of the
graphs have the Mathcad code right beside them on the page. We have
ensured that the need for prior knowledge (in particular, probability
theory) has been kept to a minimum. We feel strongly that this enhances
the work, both as a textbook and as a design guide; it is far easier to
xii
PREFACE
make progress when you are not trying to deal with another subject in
the background.
For the second edition, we have added a substantial amount of new
material on IP traffic issues. Since the first edition, much work has
been done in the IP community to make the technology QoS-aware. In
essence, the techniques and mechanisms to do this are generic however,
they are often disguised by the use of confusing jargon in the different
communities. Of course, there are real differences in the technologies, but
the underlying approaches for providing guaranteed performance to a
wide range of service types are very similar.
We have introduced new ideas from our own research more accurate,
usable results and understandable derivations. These new ideas make
use of the excess-rate technique for queueing analysis, which we have
found applicable to a wide variety of queueing systems. Whilst we still
do not claim that the book is comprehensive, we do believe it presents
the essentials of design and performance analysis for both IP and ATM
technologies in an intuitive and understandable way.
Organization
In Chapter 1, we describe both IP and ATM technologies. On the surface
the technologies appear to be rather different, but both depend on similar
approaches to buffer management and traffic control in order to provide
performance guarantees to a wide variety of services. We highlight
the fundamental operations of both IP and ATM as they relate to the
underlying queueing and performance issues, rather than describe the
technologies and standards in detail. Chapter 2 is the executive summary
for the book: it gathers together the range of analytical solutions covered,
lists the parameters, and groups them according to their use in addressing
IP and ATM traffic issues. You may wish to skip over it on a first reading,
but use it afterwards as a ready reference.
PREFACE
xiii
Chapter 3 introduces the concept of resource sharing, which underpins the design and performance of any telecommunications technology,
in the context of circuit-switched networks. Here, we see the tradeoff between the economics of providing telecommunications capability
and satisfying the service requirements of the customer. To evaluate
the performance of shared resources, we need an understanding of
queueing theory. In Chapter 4, we introduce the fundamental concept
of a queue (or waiting line), its notation, and some elementary relationships, and apply these to the basic process of buffering, using ATM as
an example. This familiarizes the reader with the important measures of
delay and loss (whether of packets or cells), the typical orders of magnitude for these measures, and the use of approximations, without having
to struggle through analytical derivations at the same time. Simulation
is widely used to study performance and design issues, and Chapter 5
provides an introduction to the basic principles, including accelerated
techniques.
Chapter 6 describes a variety of simple traffic models, both for single
sources and for aggregate traffic, with sample parameter values typical
of IP and ATM. The distinction between levels of traffic behaviour,
particularly the cell/packet and burst levels is introduced, as well as
the different ways in which timing information is presented in source
models. Both these aspects are important in helping to simplify and
clarify the analysis of queueing behaviour.
In Part II, we turn to queueing and traffic control issues, with the specific
focus on ATM. Even if your main interest is in IP, we recommend you
read these chapters. The reason is not just that the queueing behaviour
is very similar (ATM cells and fixed-size packets look the same to a
queueing system), but because the development of an appreciation for
both the underlying queueing issues and the influence of key traffic
parameters builds in a more intuitive way.
In Chapter 7, we treat the queueing behaviour of ATM cells in output
buffers, taking the reader very carefully through the analytical derivation
of the queue state probability distribution, the cell loss probability, and
the cell delay distribution. The analytical approach used is a direct
probabilistic technique which is simple and intuitive, and key stages
in the derivation are illustrated graphically. This basic technique is
the underlying analytical approach applied in Chapter 13 to the more
complex issues of priority mechanisms, in Chapter 14 to basic packet
switching with variable-length packets, and in Chapter 17 to the problem
of queueing under self-similar traffic input.
Chapters 8 and 9 take the traffic models of Chapter 6 and the concept
of different levels of traffic behaviour, and apply them to the analysis of
ATM queueing. The distinction between cell-scale queueing (Chapter 8)
and burst-scale queueing (Chapter 9) is of fundamental importance
xiv
PREFACE
xv
PREFACE
Acknowledgements
This new edition has benefited from the comments and questions raised
by readers of the first edition, posted, e-mailed and telephoned from
around the world. We would like to thank our colleagues in the
Department of Electronic Engineering for a friendly, encouraging and
stimulating academic environment in which to work. But most important
of all are our families thank you for your patience, understanding and
support through thick and thin!
PART I
Introductory Topics
An Introduction
to the Technologies of IP
and ATM
the bare necessities
CIRCUIT SWITCHING
In traditional analogue circuit switching, a call is set-up on the basis that
it receives a path (from source to destination) that is its property for
the duration of the call, i.e. the whole of the bandwidth of the circuit
is available to the calling parties for the whole of the call. In a digital
circuit-switched system, the whole bit-rate of the line is assigned to a
call for only a single time slot per frame. This is called time division
multiplexing.
During the time period of a frame, the transmitting party will generate
a fixed number of bits of digital data (for example, 8 bits to represent
the level of an analogue telephony signal) and these bits will be grouped
together in the time slot allocated to that call. On a transmission link, the
same time slot in every frame is assigned to a call for the duration of that
call (Figure 1.1). So the time slot is identified by its position in the frame,
hence use of the name position multiplexing, although this term is not
used as much as time division multiplexing.
When a connection is set up, a route is found through the network and
that route remains fixed for the duration of the connection. The route will
probably traverse a number of switching nodes and require the use of
many transmission links to provide a circuit from source to destination.
The time slot position used by a call is likely to be different on each link.
The switches which interconnect the transmission links perform the time
slot interchange (as well as the space switching) necessary to provide
the through-connection (e.g. link M, time slot 2 switches to link N, time
slot 7 in Figure 1.2).
Direction of transmission
Another 8 bits of data
Time
Duration of frame
One frame contains 8 time slots,
each time slot contains 8 bits
Figure 1.1.
Link M
. . .
Time
Link N
. . .
Figure 1.2.
PACKET SWITCHING
PACKET SWITCHING
Lets now consider a generic packet-switching network, i.e. one intended
to represent the main characteristics of packet switching, rather than any
particular packet-switching system (later on in the chapter well look
more closely at the specifics of IP).
Instead of being organized into single eight-bit time slots which repeat
at regular intervals, data in a packet-switched network is organised into
packets comprising many bytes of user data (bytes may also be known as
octets). Packets can vary in size depending on how much data there is to
send, usually up to some predetermined limit (for example, 4096 bytes).
Each packet is then sent from node to node as a group of contiguous bits
fully occupying the link bit-rate for the duration of the packet. If there
is no packet to send, then nothing is sent on the link. When a packet
is ready, and the link is idle, then the packet can be sent immediately.
If the link is busy (another packet is currently being transmitted), then
the packet must wait in a buffer until the previous one has completed
transmission (Figure 1.3).
Each packet has a label to identify it as belonging to a particular
communication. Thus packets from different sources and to different
Packet
waiting
in buffer
Direction of transmission
Transmitted packet
...
Label Information
Link overhead
Packet being
transmitted
Link idle
...
Link overhead added to
beginning and end of
packet that is being
transmitted
Time
Figure 1.3.
which rejects (blocks) a connection request if there is no circuit available. The effect of this non-blocking operation is that packets experience
greater and greater delays across the network, as the load on the network
increases. As the load approaches the network capacity, the node buffers
become full, and further incoming packets cannot be stored. This triggers retransmission of those packets which only worsens the situation
by increasing the load; the successful throughput of packets decreases
significantly.
In order to maintain throughput, congestion control techniques, particularly flow control, are used. Their aim is to limit the rate at which sources
offer packets to the network. The flow control can be exercised on a linkby-link, or end-to-end basis. Thus a connection cannot be guaranteed any
particular bit-rate: it is allowed to send packets to the network as and
when it needs to, but if the network is congested then the network exerts
control by restricting this rate of flow.
The main performance issues for a user of a packet-switched network
are the delay experienced on any connection and the throughput. The
network operator aims to maximize throughput and limit the delay, even
in the presence of congestion. The user is able to send information on
demand, and the network provides error control through re-transmission
of packets on a link-by-link basis. Capacity is not dedicated to the
connection, but shared on a dynamic basis with other connections. The
capacity available to the user is reduced by the per-packet overheads
required for label multiplexing, flow and error control.
Direction of transmission
Time
A cell, containing a
header field and an
information field
CONNECTION-ORIENTATED SERVICE
Lets take a more detailed look at the cell header in ATM. The label
consists of two components: the virtual channel identifier (VCI) and the
virtual path identifier (VPI). These identifiers do not have end-to-end
(user-to-user) significance; they identify a particular virtual channel (VC)
or virtual path (VP) on the link over which the cell is being transmitted.
When the cell arrives at the next node, the VCI and the VPI are used to
look up in the routeing table to what outgoing port the cell should be
switched and what new VCI and VPI values the cell should have. The
routeing table values are established at the set-up of a connection, and
remain constant for the duration of the connection, so the cells always take
the same route through the network, and the cell sequence integrity of
the connection is maintained. Hence ATM provides connection-orientated
service.
But surely only one label is needed to achieve this cell routeing
mechanism, and that would also make the routeing tables simpler: so
why have two types of identifier? The reason is for the flexibility gained
in handling connections. The basic equivalent to a circuit-switched or
packet-switched connection in ATM is the virtual channel connection
(VCC). This is established over a series of concatenated virtual channel
links. A virtual path is a bundle of virtual channel links, i.e. it groups a
number of VC links in parallel. This idea enables direct logical routes to
be established between two switching nodes that are not connected by a
direct physical link.
The best way to appreciate why this concept is so flexible is to consider
an example. Figure 1.5 shows three switching nodes connected in a
physical star structure to a cross-connect node. Over this physical
network, a logical network of three virtual paths has been established.
SWITCH
CROSS-CONNECT
physical link
physical link
VCI: 42
SWITCH
VPI: 12
VPI: 25
VCI: 42
Cross-connect converts
VPI values (e.g. 12 25)
but does not change VCI
values (e.g. 42)
SWITCH
Figure 1.5. Virtual Paths and Virtual Channels
10
BUFFER MANAGEMENT
11
BUFFER MANAGEMENT
Both ATM and IP feature buffer management mechanisms that are
designed to enhance the capability of the networks. In essence, these
mechanisms deal with how cells or packets gain access to the finite
waiting area of the buffer and, once in that waiting area, how they gain
access to the server for onward transmission. The former deals with how
the buffer space is partitioned, and the discard policies in operation. The
latter deals with how the packets or cells are ordered and scheduled for
service, and how the service capacity is partitioned.
The key requirement is to provide partitions, i.e. virtual buffers, through
which different groups of traffic can be forwarded. In the extreme, a
virtual buffer is provided for each IP flow, or ATM connection, and it has
its own buffer space and service capacity allocation. This is called per-flow
or per-VC queueing. Typically, considerations of scale mean that traffic,
whether flows or connections, must be handled in aggregate through
virtual buffers, particularly in the core of the network. Terminology varies
12
TRAFFIC CONTROL
13
TRAFFIC CONTROL
We have seen that both IP and ATM provide temporary storage for
packets and cells in buffers across the network, introducing variable
delays, and on occasion, loss too. These buffers incorporate various mechanisms to enable the networks to cater for different types of traffic both
elastic and inelastic. As we have noted, part of the solution to this problem
is the use of buffer management strategies: partitioning and reserving
appropriate resources both buffer space and service capacity. However,
there is another part to the overall solution: traffic control. This allows
users to state their communications needs, and enables the network to
coordinate and monitor its corresponding provision.
Upon receiving a reservation request (for a connection in ATM, or a
flow in IP), a network assesses whether or not it can handle the traffic, in
addition to what has already been accepted on the network. This process
is rather more complicated than for circuit switching, because some of
the reservation requests will be from variable bit-rate (VBR) services, for
which the instantaneous bit-rate required will be varying in a random
manner over time, as indeed will be the capacity available because many
of the existing connections will also be VBR! So if a request arrives for
a time-varying amount of capacity, and the capacity available is also
varying with time, it is no longer a trivial problem to determine whether
the connection or flow should be accepted.
In practice such a system works in the following way: the user declares
values for some parameters which describe the traffic behaviour of the
requested connection or flow, as well as the loss and delay performance
required; the network then uses these traffic and performance values
to come to an accept/reject decision, and informs the user. If accepted,
the network has to ensure that the sequence of cells or packets corresponds to the declared traffic values. This whole process is aimed at
preventing congestion in the network and ensuring that the performance
requirements are met for all carried traffic.
The traffic and performance values agreed by the user and the network
form a traffic contract. The mechanism which makes the accept/reject
decision is the admission control function, and this resides in the ATM
switching nodes or IP routers in the network. A mechanism is also
necessary to ensure compliance with the traffic contract, i.e. the user
14
should not exceed the peak (or mean, or whatever) rate that was agreed
for the connection, flow, or aggregate. This mechanism is called usage
parameter control (UPC) in ATM and is situated on entry to the network.
If the user does exceed the traffic contract, then the UPC mechanism
takes action to protect the network from the effects of this excess, e.g.
discarding some of the cells from the non-compliant connection. A similar
mechanism in DiffServ for IP networks is called traffic conditioning, and
this involves packet metering, marking, shaping and dropping.
In order to design algorithms for these mechanisms, it is important that
we understand the characteristics of the traffic sources, and the effects
these sources have when they are multiplexed through buffers in the
network, in terms of the delay and loss incurred. How we design the
algorithms is very closely related to how large we make the buffers, and
what buffer management mechanisms are proposed. Buffer dimensioning
and management mechanisms depend on how we intend to handle the
different services and their performance requirements.
This chapter is the executive summary for the book: it provides a quick
way to find a range of analytical solutions for a variety of design and
performance issues relating to IP and ATM traffic problems. If you are
already familiar with performance evaluation and want a quick overview
of what the book has to offer, then read on. Otherwise, youll probably
find that its best to skip this chapter, and come back to it after you have
read the rest of the book youll then be able to use this chapter as a
ready reference.
16
Source models
Model:
Use:
Formula:
Parameters:
Location:
Model:
Use:
Formulas:
Parameters:
Location:
Model:
Use:
Formulas:
Parameters:
Location:
Model:
Use:
Formula:
Parameters:
Location:
geometric distribution
inter-arrival times, service times, for cells, packets, bursts, flows, calls
Prfk time slots between arrivalsg D 1 pk1 p
Prf k time slots between arrivalsg D 1 1 pk
k time slots
p probability of an arrival, or end of service, in a time slot
Chapter 6, page 85
Poisson distribution
number of arrivals or amount of work, for octets, cells, packets, bursts,
flows, calls
Tk T
Prfk arrivals in time Tg D
e
k!
T time
k number of arrivals, or amount of work
rate of arrivals
Chapter 6, page 86
binomial distribution
number of arrivals (in time, or from a number of inputs) or amount of
work, for octets, cells, packets, bursts, flows, calls
Prfk arrivals in N time slotsg D
N!
1 pNk pk
N k! k!
Model:
Use:
Formulas:
Parameters:
Location:
Model:
Use:
Formulas:
Parameters:
Location:
Model:
Use:
Formulas:
17
Batch distribution
number of arrivals, or amount of work, for octets, cells, packets,
bursts, flows, calls
a0 D 1 p
a1 D p b1
a2 D p b2
..
.
ak D p bk
..
.
aM D p bM
k number of arrivals
p probability there is a batch of arrivals in a time slot
bk probability there are k arrivals in a batch (given that there is a
batch in a time slot)
M maximum number of arrivals in batch
Chapter 6, page 88
ONOFF two-state
rate of arrivals, for octets, cells, packets
1
Ton D E[on]
R
1
Toff D E[off]
C
R rate of arrivals
E[on] mean number of arrivals in ON state
C service rate, or rate of time-base
E[off] mean number of time units in OFF state
Chapter 6, page 91
Pareto distribution
number of arrivals, or amount of work, for octets, cells, packets, etc.
PrfX > xg D
x
Fx D 1
x
C1
f x D
E[x] D
1
(continued overleaf )
18
Model:
Pareto distribution
Parameters:
Location:
Queueing behaviour
There are a number of basic queueing relationships which are true,
regardless of the pattern of arrivals or of service, assuming that the buffer
capacity is infinite (or that the loss is very low). For the basic FIFO queue,
there is a wide range of queueing analyses that can be applied to both
IP and ATM, according to the multiplexing scenario. These queueing
relationships and analyses are summarized below.
Model:
Use:
Formulas:
Parameters:
Location:
Model:
Use:
Formulas:
elementary relationships
queues with infinite buffer capacity
Ds
w D tw (known as Littles formula)
q D tq (ditto)
tq D tw C s
qDwC
mean number of arrivals per unit time
s mean service time for each customer
utilization; fraction of time the server is busy
w mean number of customers waiting to be served
tw mean time a customer spends waiting for service
q mean number of customers in the system (waiting or being
served)
tq mean time a customer spends in the system
Chapter 4, page 61
M/M/1
classical continuous-time queueing model; NB: assumes variablesize customers, so more appropriate for IP, but has been used for
ATM
qD
1
(continued)
19
Model:
Parameters:
Location:
Model:
Use:
Formulas:
M/M/1
s
tw D
1
Prfsystem size D xg D 1 x
Prfsystem size > xg D xC1
utilization; load (as fraction of service rate) offered to system
q mean number in the system (waiting or being served)
tw mean time spent waiting for service
x buffer capacity in packets or cells
Chapter 4, page 62
k1
si ak i
iD1
sk D
a0
PrfUd D 1g D Ud 1 D s0 C s1
PrfUd D kg D Ud k D sk
1
ai
iD0
Bd k D
Td k D
k
E[a]
k
Ud j Bd k j
jD1
Td,n k D
k
jD1
s
2 1
ak probability there are k arrivals in a time slot
for M/D/1: tw D
Parameters:
20
Model:
Location:
Model:
Use:
Formulas:
k1
ui ak i
iD1
uk D
a0
uX D AX
s0 D
1
X
ui
iD0
E[a] 1 s0
E[a]
Location:
21
Model:
Use:
Formulas:
Parameters:
ND/D/1
multiple constant-bit-rate (CBR) sources into deterministic server
this can be applied to ATM, and to IP (with fixed packet sizes)
N
N!
nx n
n x Nn D N C x
Qx D
1
n! N n!
D
D
DnCx
nDxC1
x buffer capacity (in cells or packets)
N number of CBR sources
D period of CBR source (in service time slots)
Qx probability that queue exceeds x (estimate for loss probability)
Location:
Model:
Use:
Formulas:
Parameters:
2x
1
1
x D lnQx
2
1
2x
D
2 x lnQx
Location:
Model:
Use:
Formulas:
Parameters:
x 1
2x N C
22
Model:
Location:
Model:
Use:
Formulas:
Parameters:
Location:
Geo/Geo/1
basic discrete-time queueing model for IP (variable-size packets)
p
s0 D 1
q
p
p
1q k
sk D 1
q
1q
1p
k
p
1q
Qk D
q
1p
p
1 q x/q
Qx D
q
1p
q probability a packet completes service at the end of an octet slot
p probability a packet arrives in an octet slot
sk probability there are k octets in system
Qk probability that queue exceeds k octets
Qx probability that queue exceeds x packets
Chapter 14, page 232
k
e e 2 C C e
e e 2 C C e
Formulas: pk D 1
1 C e
1 C e
kC1
e e 2 C C e
Qk D
1 C e
Parameters: arrival rate of Poisson process
pk probability an arriving excess-rate cell/packet finds k in the
system
(continued)
23
Model:
Use:
Formulas:
E[a] 1 a1 1 C a1 C a02
pk D 1
a0 E[a] 1 C a0
k
E[a] 1 a1 1 C a1 C a02
Parameters:
Location:
Model:
Use:
24
Model:
Formulas:
1
gi D
iD1
a0 D
1
gi ei
iD1
a1 D
1
gi i ei
iD1
E[a] 1 a1 1 C a1 C a02
pk D 1
a0 E[a] 1 C a0
k
E[a] 1 a1 1 C a1 C a02
Parameters:
Location:
Model:
Use:
Formulas:
Parameters:
C R e
XCR
Ton 1RCC
1 C R C e
RC
CLPexcess-rate
CLP D
R
R ON rate
C service rate of buffer
XCR
Ton 1RCC
(continued)
25
Model:
Location:
Model:
Use:
Formulas:
Parameters:
Location:
Model:
Use:
Formulas:
ONOFF/D/1/K
X buffer capacity in cells/packets
Ton mean duration in ON state
Toff mean duration in OFF state
activity factor of source (probability of being ON)
CLP loss probability
Chapter 9, page 130
ONOFF/D/1/K
basic discrete-time queueing model for IP or ATM, suitable for per-flow
or per-VC scenarios
1
aD1
Ton R C
1
sD1
Toff C
1
pX D
X
s
1a
1C
1
a
sa
RC
RC
CLP D
CLPexcess-rate D
pX
R
R
R ON rate
C service rate of buffer
X buffer capacity in cells/packets
Ton mean duration in ON state
Toff mean duration in OFF state
pk D probability an excess-rate arrival finds k in the buffer
CLP loss probability
Chapter 9, page 136
multiple ONOFF sources bufferless analysis
burst-scale loss model for IP or ATM for delay-sensitive traffic,
or, combined with burst-scale delay analysis, for delay-insensitive
traffic
m
Ton
D
D
h
Ton C Toff
C
N0 D
h
(continued overleaf )
26
Model:
N!
n 1 Nn
n! N n!
N
pn n N0
nDdN0 e
Location:
Model:
Use:
Formulas:
Parameters:
e
Prfcell needs bufferg
1 2 N0
bN0 c!
m mean rate of single source
h ON rate of single source
C service rate of buffer
N total number of ONOFF sources being multiplexed
offered load as fraction of service rate
N0 minimum number of active sources for burst-scale queueing
Prfcell needs bufferg estimate of loss probability
Location:
27
Model:
Use:
Formulas:
Parameters:
Location:
Model:
Use:
Formulas:
CLPexcess-rate D e
N total number of ONOFF sources being multiplexed
Ton mean duration in ON state for single source
Toff mean duration in OFF state for single source
h ON rate of single source
C service rate of buffer
number of bursts arriving per unit time
b mean number of cells/packets per burst
offered load as fraction of service rate
N0 minimum number of active sources for burst-scale queueing
CLPexcess-rate excess-rate loss probability, i.e. conditioned on the
probability that the cell/packet needs a buffer
Chapter 9, page 146
multiple ONOFF sources excess-rate analysis
combined burst-scale loss and delay analysis suitable for IP and
ATM scenarios with multiple flows (e.g. RSVP), or variable-bitrate (VBR) traffic (e.g. SBR/VBR transfer capability)
C
N0 D
h
Ap
AD
h
N
0 1
N0
r
N0
A
A
N0
A
N0
DD
N0 !
N0 A
r!
N0 !
N0 A
rD0
Ton D
h Ton
C Ap
(continued overleaf )
28
Model:
Ron D C C h
xC1
1
1
hD
Ton Ron C
Qx D
C Ap
1
Toff C Roff
Parameters:
Location:
Model:
Use:
Formulas:
BD
1
(continued)
29
Model:
Geo/Pareto/1
qD
B
a0 D 1 q
a1 D q b1
a2 D q b2
..
.
ak D q bk
E[a] D
s0 D 1 E[a]
sk 1 s0 ak 1
sk D
Parameters:
Location:
k1
si ak i
iD1
a0
x number of arrivals, or amount of work
a power-law decay
bx probability that Pareto batch is of size x packets
B mean batch size in packets
mean number of packets arriving per time unit
q probability that a batch arrives in a time unit
ak probability there are k arrivals in a time unit
E[a] mean number of arrivals per time unit
sk probability there are k in the system at the end of any time
unit
Chapter 17, page 293
1
1
1
1
xD1
0.5
X C 0.5
1
1
1
Formulas: bx D
1
Xx>1
x 0.5
x C 0.5
X C 0.5
0
x>X
BD
x bx
x
(continued overleaf )
30
k1
si ak i
iD1
a0
Parameters: x number of arrivals, or amount of work
power-law decay
bx probability that Pareto batch is of size x packets
B mean batch size in packets
mean number of packets arriving per time unit
q probability that a batch arrives in a time unit
ak probability there are k arrivals in a time unit
E[a] mean number of arrivals per time unit
sk probability there are k in the system at the end of any time unit
Location: Chapter 17, page 298
31
space thus any of the analysis methods for FIFO queues (see previous
section) can be applied, as appropriate to the multiplexing scenario, and
traffic source(s). Typically these will give a decay rate for each virtual
buffer, which can then be used, along with the performance requirement,
to assess the partitioning of buffer space.
There is clearly benefit in partitioning to maintain different performance guarantees for a variety of service types sharing an output port.
However, the cost of partitioning is that it is not optimal when considering the overall loss situation at an output port: the loss of a cell or
packet from a full virtual buffer may not be necessary if buffer space is
shared. Indeed, buffer space can be shared across multiple output ports.
The results for both partitioning and sharing are summarized below.
Model:
Use:
Formulas:
Xj
V
Xj
jD1
logSj
XC
logdrj
logSi
jD1
Xi D
V
logdri
1
logdri
logdrj
jD1
V
Parameters:
Location:
Model:
Use:
Formulas:
32
Model:
k
PN1 j P1 k j
jD0
QN k D 1
k
PN j
jD0
Location:
QN k 1 efkCN1lnkCN1klnkN1lnN1Cklndr CN1ln1dr g
dr decay rate in individual buffer
pk queue state probability for single (virtual) buffer, i.e. probability that individual buffer has k cells/packets
PN k autoconvolution for N buffers sharing space, i.e. probability
that shared space has k cells/packets
QN k overflow probability from shared buffer
Chapter 16, page 280
Model:
33
k1
aj
jD0
Ah k D 1
k1
jD0
1
ah j
n imn
i m!
ah
al
ai
a m, n D
n!
i
m
n!
a
a
iDmCn
mCn1
1
0
am C n C i
A m, n D 1
ai
0
n1
jD0
iD0
iD0
n C i!
j! n C i j!
ah
a
j nCij
al
a
u0 D 1
1 a0
u1 D
a0
below the threshold:
k1
Ak C
ui Ak i C 1
iD1
uk D
a0
AM C
M1
ui A0 M i, 1
iD1
uM D
ah 0
M1
fui A0 M i, k M C 1g
iD1
k1
C
fui Ah k i C 1g
iDM
uk D
A0 M, X M
uX D
ah 0
1
s0 D X
ui
ah 0
iD0
34
Model:
iD0
j
CLPh D
ll j D
iDM
j lh j
M1
ah
iD0
j
CLPl D
Parameters:
Location:
1
si
al
a
rDMiCj
X
ar
r M i!
r M i j! j!
ah
a
rMij
si al j
iDM
j ll j
al
a mean arrival rate of both high- and low-priority arrivals
al mean arrival rate of low-priority arrivals
ah mean arrival rate of high-priority arrivals
ak probability there are k arrivals in one time slot
al k probability there are k low-priority arrivals in one time slot
ah k probability there are k high-priority arrivals in one time slot
Ak probability at least k cells arrive in one time slot
Ah k probability at least k high-priority cells arrive in one time
slot
a0 m, n probability that m cells of either low or high priority are
admitted, up to the threshold, and a further n high-priority cells
are admitted above the threshold
A0 m, n probability that m cells of either low or high priority are
admitted, up to the threshold, and at least a further n high-priority
cells are admitted above the threshold
sk probability there are k cells in the system
lh j probability that j high-priority cells are lost in a time slot
ll j probability that j low-priority cells are lost in a time slot
M PBS threshold (cells)
X buffer capacity in cells
CLP overall cell loss probability
CLPh cell loss probability for high-priority cells
CLPl cell loss probability for low-priority cells
Chapter 13, page 207
35
Model:
Use:
Formulas:
Location:
Model:
Use:
Formulas:
dr
1 dr
dr decay rate based on appropriate queueing analysis (see earlier,
e.g. use both basic packet-scale and burst-scale decay rates)
qD
Parameters:
36
Model:
Use:
Formulas:
Parameters:
Location:
Model:
Use:
Formulas:
sk D
a0
a1 k a
e 1
k!
a2 k a
a2 k D
e 2
k!
k
1
a2 i
a1 k D
iD0
bk D
a2
u0 D s0 C s1
uk D sk C 1 for k > 0
k
i
uk i
bj a1 i j
vk D
iD0
jD0
a1 xk ka
a1 k, x D
e 1
x!
w0 D v0
k
vi a1 k i, k i
wk D
iD1
for k > 0
(continued)
37
Model:
Parameters:
Location:
38
Model:
Formulas:
nC1
ln
min CLPi
iD1!nC1
then accept if
n
hi
hnC1
C
1
C
C
iD1
If
nC1>
ln
then accept if
2 x2
min CLPi
iD1!nC1
n
hnC1
hi
C
C
C
iD1
2 x n C 1
2 x n C 1 2 x2 C n C 1 ln
Parameters:
Location:
Model:
Use:
Formulas:
min CLPi
iD1!nC1
Parameters:
right-hand side of inequality test is based on heavy traffic approximation; for alternative based on exact M/D/1 analysis, see Table 10.1
hi fixed arrival rate of ith flow or connection
CLPi loss requirement of ith flow or connection
n number of existing flows or connections
(continued)
39
Model:
Location:
Model:
Use:
Formulas:
e
1 2 N0
bN0 c!
and can be found in Table 10.4
CLP
Parameters:
Location:
Model:
Use:
Formulas:
X 1
N0 b 4C1
40
Model:
Parameters:
Location:
Policing mechanisms
Once admitted, the flow of packets or cells is monitored (to ensure
compliance with the traffic contract) by a policing function, usually a
token, or leaky, bucket. In IP, the token bucket is typically integrated into
the queue scheduling mechanism, whereas in ATM, the leaky bucket is
normally a separate function on entry to the network. Both can be assessed
using a variety of forms of queueing analysis (see earlier), in order to
know how to configure the buckets appropriately. Some examples of
analysis are summarized below.
Model:
Use:
Formulas:
Parameters:
Location:
Model:
Use:
Formulas:
Parameters:
Location:
Model:
allowance for impact on load of dual leaky bucket for VBR traffic using
SBR transfer capability in ATM
"
#
IBT
MBS D 1 C
T
TPCR
SCR
X
2
CD
MBS
D
MBS lnCLP
D 2
X
TPCR inter-arrival time at peak cell rate
TSCR inter-arrival time at sustainable cell rate (SCR)
IBT intrinsic burst tolerance
D inter-arrival time at sustainable cell rate, in units of the cell slot
time
MBS maximum burst size allowed through by leaky bucket for SCR
X buffer capacity of output port downstream in network
admissible load assuming worst-case arrival pattern of maximumsized bursts arriving at cell slot rate
Chapter 11, page 183
multiple ONOFF sources excess-rate analysis
Use:
Formulas:
Parameters:
Location:
41
42
Parameters:
Location:
Model:
Use:
Formulas:
Parameters:
D
3
b
1
N0
X buffer capacity in cells or packets
b mean number of cells/packets per burst
maximum offered load expected
N0 minimum number of active sources for burst-scale queueing
(calculated from service rate of buffer, C, and maximum arrival rate,
h, for an individual flow or connection)
CLPtarget overall loss probability as design target
CLPbsl loss probability contribution from burst-scale loss analysis
Location:
Model:
Use:
Model:
Formulas:
43
AN
A1
A2
AN
BD
1C
C
C C
N!
1!
2!
N!
see Table 12.4 for full erlang traffic table
link dimension is then
C D ECR N
Parameters:
Location:
Chapter 3, page 52
Teletraffic Engineering
the economic and service arguments
SHARING RESOURCES
A simple answer to the question Why have a network? is To communicate information between people. A slightly more detailed answer would
be: To communicate information between all people who would want to
exchange information, when they want to. Teletraffic engineering addresses
the problems caused by sharing of network resources among the population of users; it is used to answer questions like: How much traffic
needs to be handled? What level of performance should be maintained?
What type of, and how many, resources are required? How should the
resources be organized to handle traffic?
46
TELETRAFFIC ENGINEERING
Contrast this with a star network, where each user has a single handset
connected to two N to 1 switches, and the poles of the switches are
connected by a single path (Figure 3.2). In this example, there are N
handsets, N C 1 paths, and 2 switches. However, only 2 users may
communicate at any one time, i.e. 3/N C 1 of the paths, 2/N of the
handsets and both of the switches would be in use. So for a network with
120 users, the maximum values are: path utilization is just under 3%,
handset utilization is just under 2% and switch utilization is 100%.
In the course of one day, suppose that each one of the 120 users
initiates on average two 3-minute calls. Thus the total traffic volume is
120 2 3 D 720 call minutes, i.e. 12 hours of calls. Both star and mesh
networks can handle this amount of traffic; the mesh network can carry
up to 60 calls simultaneously; the star network carries only 1 call at a
time. The mesh network provides the maximum capability for immediate
communication, but at the expense of many paths and switches. The star
network provides the minimum capability for communication between
any two users at minimum cost, but at the inconvenience of having to
wait to use the network.
The capacity of the star network could be increased by installing M
switching units, where each unit comprises two N to 1 switches linked
by a single path (Figure 3.3). Thus, with N/2 switching units, the star
network would have the same communication capability as the mesh
network, with the same number of switches and handsets, but requiring
only 3N/2 paths. Even in this case, though, the size becomes impractical as
N
N.(N-1)/2
N
Figure 3.1.
N
N+1
2
Figure 3.2.
47
TRAFFIC INTENSITY
N
N +M
2M
Figure 3.3.
TRAFFIC INTENSITY
Traffic volume is defined as the total call holding time for all calls, i.e. the
number of calls multiplied by the mean holding time per call. This is not
very helpful in determining the total number of paths or switching units
required. We need a measure that gives some indication of the average
workload we are applying to the network.
Traffic intensity is defined in two ways, depending on whether we
are concerned with the workload applied to the network (offered traffic),
or the work done by the network (carried traffic). The offered traffic
intensity is defined as:
ch
AD
T
where c is number of call attempts in time period T, and h is the mean
call holding time (the average call duration). Note that if we let T equal h
then the offered traffic intensity is just the number of call attempts during
the mean call holding time. The rate of call attempts, also called the call
arrival rate, is given by
c
aD
T
So the offered traffic intensity can also be expressed as
ADah
For any specific pattern of call attempts, there may be insufficient paths
to satisfy all of the call attempts; this is particularly obvious in the case
of the star network in Figure 3.2 which has just one path available. A call
attempt made when the network is full is blocked (lost) and cannot be
carried. If, during time period T, cc calls are carried and cl calls are lost,
then the total number of call attempts is
c D cc C cl
48
TELETRAFFIC ENGINEERING
We then have
AD
cc C cl h
DCCL
T
cc h
T
cl h
T
AD
49
PERFORMANCE
The two different network structures, mesh and star, illustrate how the
same volume of traffic can be handled very differently. With the star
network, users may have to wait significantly longer for service (which,
in a circuit-switched network, can mean repeated attempts by a user to
establish a call). A comparison of the waiting time and the delay that users
will tolerate (before they give up and become customers of a competing
network operator) enables us to assess the adequacy of the network. The
waiting time is a measure of performance, as is the loss of a customer.
This also shows a general principle about the flow of traffic:introducing
delay reduces the flow, and a reduced traffic flow requires fewer
resources. The challenge is to find an optimum value of the delay
introduced in order to balance the traffic demand, the performance
requirements, and the amount (and cost) of network resources. We
will see that much teletraffic engineering is concerned with assessing
the traffic flow of cells or packets being carried through the delaying
mechanism of the buffer.
Capacity
Traffic
Figure 3.4. Traffic, Capacity and Performance
50
TELETRAFFIC ENGINEERING
may be fixed in order to determine how the others vary with each
other, or two elements may be fixed in order to find a value for the
third. For example, the emphasis in dimensioning is on determining the
capacity required, given specific traffic demand and performance targets.
Performance engineering aims at assessing the feasibility of a particular
network design (or, more commonly, an aspect or part of a network)
under different traffic conditions; hence the emphasis is on varying the
traffic and measuring the performance for a given capacity (network
design). Admission control procedures for calls in an ATM network
have the capacity and performance requirements fixed, with the aim of
assessing how much, and what mix of, traffic can be accepted by the
network.
In summary, a network provides the ability to communicate information between users, with the aim of providing an effective service at
reasonable cost. It is uneconomic to provide separate paths between every
pair of users. There is thus a need to share paths, and provide users with
the means to access these paths when required. A network comprises
building blocks (switches, terminal equipment, transmission paths), each
of which has a finite capacity for transferring information. Whether or not
this capacity is adequate depends on the demand from users for transferring information, and the requirements that users place on that transfer.
Teletraffic engineering is concerned with the relationships between these
three elements of traffic, capacity and performance.
51
10
15
101
Probability
102
103
104
105
106
107
Poisson (k, ) :D
k
e
k!
i :D 0.. 15
xi :D i
yi :D Poisson (i, 2.5)
Figure 3.5. Graph of the Distribution of Demand for an Offered Traffic Intensity of
2.5 E, and the Mathcad Code to Generate (x, y) Values for Plotting the Graph
200 3
D 10 E
60
52
TELETRAFFIC ENGINEERING
A2
AN
A1
C
C C
1C
1!
2!
N!
L
A
Number of circuits, N
10
15
100
101
102
103
104
105
106
107
AN
BA, N :D N!
N r
A
rD0 r!
i :D 0.. 15
xi :D i
yi :D B2.5 , i
Figure 3.6. Graph of the Probability of Call Blocking for A D 2.5 E, and the Mathcad
Code to Generate x, y Values for Plotting the Graph
53
TRAFFIC TABLES
TRAFFIC TABLES
The problem is that Erlangs lost call formula gives the call blocking (i.e.
loss) probability, B, given a certain number, N, of trunks being offered
a certain amount, A, of traffic. But the dimensioning question comes
the other way around: with a certain amount, A, of traffic offered, how
many trunks, N, are required to give a blocking probability of B? It is
not possible to express N in terms of B, so traffic tables, like the one in
Table 3.1, have been produced (using iteration), and are widely used, to
simplify this calculation.
Table 3.1. Table of Traffic which May Be Offered,
Based on Erlangs Lost Call Formula
Probability of blocking, B
Number of
trunks, N
0.02
1
2
3
4
5
6
7
8
9
10
0.02
0.22
0.60
1.1
1.7
2.3
2.9
3.6
4.3
5.1
0.01
0.005
0.001
0.005
0.105
0.35
0.7
1.1
1.6
2.2
2.7
3.3
4.0
0.001
0.046
0.19
0.44
0.8
1.1
1.6
2.1
2.6
3.1
offered traffic, A:
0.01
0.15
0.45
0.9
1.4
1.9
2.5
3.1
3.8
4.5
54
TELETRAFFIC ENGINEERING
0.6
B = 0.02
0.5
B = 0.01
0.4
B = 0.001
0.3
B = 0.005
0.2
0.1
0
Figure 3.7.
3
Offered traffic
Percentage overload
0
50
100
150
Probability of call
blocking
0.1
0.01
0.001
10 circuits
7 circuits
4 circuits
TRAFFIC TABLES
55
However, if we consider how a group of circuits perform under overload, there are disadvantages in having large groups. Here, we use the
rows of data from Table 3.1 and plot, in Figure 3.8, the blocking probability against the percentage increase in offered traffic over the offered
traffic for B D 0.001. Small groups of circuits do better under overload
conditions than larger groups; this is because the inefficient small groups
have more waste capacity to deal with unexpected overload and the
deterioration in the blocking probability is small. For a large group of
circuits this deterioration can be substantial.
Performance Evaluation
hows it going?
Measurement
Measurement methods require real networks to be available for experimentation. The advantage of direct measurement of network performance
is that no detail of network operation is excluded: the actual operation of
the real network is being monitored and measured. However, there are
some constraints. A revenue-earning network cannot be exercised to its
limits of performance because customers are likely to complain and take
their business elsewhere. An experimental network may be limited in the
number and type of traffic sources available, thus restricting the range of
realistic experimental conditions.
58
PERFORMANCE EVALUATION
QUEUEING THEORY
Analysis of the queueing process is a fundamental part of performance
evaluation, because queues (or waiting lines) form in telecommunications systems whenever customers contend for limited resources. In
technologies such as ATM or IP not only do connections contest, and
may be made to queue, but each accepted connection consists of a stream
of cells or packets and these also must queue at the switching nodes or
routers as they traverse the network.
We will use a queue then as a mathematical expression of the idea of
resource contention (Figure 4.1): customers arrive at a queueing system
needing a certain amount of service; they wait for service, if it is not
immediately available, in a storage area (called a buffer, queue, or
waiting line); and having waited a certain length of time, they are
served and leave the system. Note that the term customers is the general
expression you will encounter in queueing theory terminology and it is
q, number of customers in system
Customers
arriving
with rate
w, number of
customers waiting
r, utilization
Buffer
Server
s, service time
Customers
leaving
59
QUEUEING THEORY
used to mean anything that queues; in ATM or IP, the customers can be
cells, packets, bursts, flows, or connections. In the rest of this chapter, the
queueing systems refer to ATM buffers and the customers are cells.
Any queueing system is described by the arrival pattern of customers,
the service pattern of customers, the number of service channels, and
the system capacity. The arrival pattern of customers is the input to
a queueing system and can sometimes be specified just as the average
number of arrivals per unit of time (mean arrival rate, ) or by the average
time between arrivals (mean inter-arrival time). The simplest input any
queueing system can have is deterministic, in which the arrival pattern
is one customer every t time units, i.e. an arrival rate of 1/t. So, for a
64 kbit/s constant bit-rate (CBR) service, if all 48 octets of the information
field are filled then the cell rate is 167 cell/s, and the inter-arrival time
is 6 ms. If the arrival pattern is stochastic (i.e. it varies in some random
fashion over time), then further characterization is required, e.g. the
probability distribution of the time between arrivals. Arrivals may come
in batches instead of singly, and the size of these batches may vary. We
will look at a selection of arrival patterns in Chapter 6.
The service pattern of customers, as with arrival patterns, can be
described as either a rate, , of serving customers, or as the time, s,
required to service a customer. There is one important difference: service
time or service rate are conditioned on the system not being empty. If it is
empty, the service facility is said to be idle. However, when an ATM cell
buffer is empty, a continuous stream of empty cell slots is transmitted.
Thus the server is synchronized and deterministic; this is illustrated in
Figure 1.4
In the mathematical analysis of an ATM buffer, the synchronization is
often neglected thus a cell is assumed to enter service immediately upon
entry to an empty buffer, instead of waiting until the beginning of the
next free slot. For a 155.52 Mbit/s link, the cell slot rate is 366 792 cell/s
and the service time per cell is 2.726 s. However, 1 in every 27 cell slots is
used for operations and maintenance (OAM) cells for various monitoring
and measurement duties. Thus the cell slot rate available for traffic is
26
366 792 D 353 208 cell/s
27
which can be approximated as a service time per cell of 2.831 s.
The number of service channels refers to the number of servers that
can serve customers simultaneously. Multi-channel systems may differ
according to the organization of the queue(s): each server may have its
own queue, or there may be only one queue for all the servers. This is of
particular interest when analysing different ATM switch designs.
The system capacity consists of the waiting area and the number of
service channels, and may be finite or infinite. Obviously in a real system
60
PERFORMANCE EVALUATION
Notation
Kendalls notation, A/B/X/Y/Z, is widely used to describe queueing
systems:
A
B
X
Y
Z
Elementary relationships
Table 4.1 summarizes the notation commonly used for the various
elements of a queueing process. This notation is not standardized, so
beware. . . for example, q may be used, either to mean the average
number of customers in the system, or the average number waiting to
be served (unless otherwise stated, we will use it to mean the average
number in the system).
There are some basic queueing relationships which are true, assuming
that the system capacity is infinite, but regardless of the arrival or service
Table 4.1.
Notation
s
q
tq
w
tw
Description
mean number of arrivals per unit time
mean service time for each customer
utilization; fraction of time the server is busy
mean number of customers in the system (waiting or being served)
mean time a customer spends in the system
mean number of customers waiting to be served
mean time a customer spends waiting for service
61
QUEUEING THEORY
patterns and the number of channels or the queue discipline. The utilization, , is equal to the product of the mean arrival rate and the mean
service time, i.e.
Ds
for a single-server queue. With one thousand 64 kbit/s CBR sources, the
arrival rate is 166 667 cell/s. We have calculated that the service time of a
cell is 2.831 s, so the utilization, , is 0.472.
The mean number of customers in the queue is related to the average
time spent waiting in the queue by a formula called Littles formula (often
written as L D W). In our notation this is:
w D tw
So, if the mean waiting time is 50 s, then the average queue length
is 8.333 cells. This relationship also applies to the average number of
customers in the system:
q D tq
The mean time in the system is simply equal to the sum of the mean
service time and waiting time, i.e.
tq D tw C s
which, in our example, gives a value of 52.831 s. The mean number of
customers in a single-server system is given by
qDwC
which gives a value of 8.805 cells.
62
PERFORMANCE EVALUATION
50
40
30
20
10
0
0.0
0.2
0.4
Utilization
0.6
0.8
1.0
1
i :D 0.. 999
i
xi :D
1000
yi :D q xi
q :D
Figure 4.2. Graph of the Average Number of Cells in the M/M/1 Queueing System,
and the Mathcad Code to Generate (x, y) Values for Plotting the Graph
63
QUEUEING THEORY
which suggests that it is best to operate the system below 80% utilization
to avoid large queues building up.
But we still do not have any idea of how large to make the ATM buffer.
The next step is to look at the distribution of system size which is given by
Prfsystem size D xg D 1 x
Figure 4.3 shows this distribution for a range of different utilization
values, including the value of 0.472 which is our particular example. In
this case we can read from the graph that the probability associated with
a system size of 10 cells is 0.0003.
From this we might conclude that a buffer length of 10 cells would
not be adequate to meet the cell loss probability (CLP) requirements of
ATM which are often quoted as being 108 or less. For the system size
probability to be less than 108 , the system size needs to be 24 cells;
the actual probability is 7.89 109 . In making this deduction, we have
approximated the CLP by the probability that the buffer has reached a
Queue size
10
15
100
= 0.2
= 0.4
= 0.472
= 0.6
= 0.8
101
Probability
20
102
103
104
105
PrSystemSizeisX , x :D 1 x
i :D 0.. 20
xi :D i
y1i :D PrSystemSizeisX 0.2, xi
y2i :D PrSystemSizeisX 0.4, xi
y3i :D PrSystemSizeisX 0.472, xi
y4i :D PrSystemSizeisX 0.6, xi
y5i :D PrSystemSizeisX 0.8, xi
Figure 4.3. Graph of the System State Distribution for the M/M/1 Queue, and the
Mathcad Code to Generate (x, y) Values for Plotting the Graph
64
PERFORMANCE EVALUATION
particular level in our infinite buffer model. This assumes that an infinite
buffer model is a good model of a finite buffer, and that Prfsystem size D
xg is a reasonable approximation to the loss from a finite queue of size x.
Before we leave the M/M/1, lets look at another approximation to the
CLP. This is the probability that the system size exceeds x. This is found by
summing the state probabilities up to and including that for x, and then
subtracting this sum from 1 (this is a simpler task than summing from
x C 1 up to infinity). The equation for this turns out to be very simple:
Prfsystem size > xg D xC1
When x D 24 and D 0.472, this equation gives a value of 7.06 109
which is very close to the previous estimate.
Now Figure 4.4 compares the results for the two approximations,
Prfsystem size D xg and Prfsystem size xg, with the actual loss probability from the M/M/1/K system, for a system size of 24 cells, with the
utilization varying from 0 to 1. What we find is that all three approaches
give very similar results over most utilization values, diverging only
when the utilization approaches 100%. For the example utilization value
of 0.472, there is in fact very little difference. The main point to note
here is that an infinite queue can provide a useful approximation for a
finite one.
1E+00
1E02
1E04
0.2
Utilization
0.4
0.6
0.8
1
Pr {x>24}
M/M/1/24
Pr{x=24}
1E06
1E08
1E10
1E12
Figure 4.4. Comparison of CLP Estimates for Finite M/M/1 Queueing System
65
QUEUEING THEORY
arrival times, remains the same. We will deal with a finite queue directly,
rather than approximating it to an infinite queue. This, then, is called the
M/D/1/K queueing system.
The solution for this system is described in Chapter 7. Figure 4.5
compares the cell loss from the M/D/1/K with the M/M/1 CLP estimator, Prfsystem size D xg, when the system size is 10. As before, the
utilization ranges from 0 to 1. At the utilization of interest, 0.472, the
difference between the cell loss results is about two orders of magnitude.
So we need to remember that performance evaluation answers can
be rather sensitive to the choice of model, and that this means they will
always be, to some extent, open to debate. For the cell loss probability in
the M/D/1/K to be less than 108 , the system size needs to be a minimum
of 15 cells, and the actual CLP (if it is 15 cells) is 4.34 109 . So, by using
a more accurate model of the system (compared to the M/M/1), we
can save on designed buffer space, or alternatively, if we use a system
size of 24 cells, the utilization can be increased to 66.8%, rather than
47.2%. This increase corresponds to 415 extra 64 kbit/s simultaneous
CBR connections.
It is also worth noting from Figure 4.5 that the cell loss probabilities are
very close for high utilizations, i.e. the difference between the two models,
with their very different service time assumptions, becomes almost
negligible under heavy traffic conditions. In later chapters we present
some useful heavy traffic results which can be used for performance
evaluation of ATM, where applicable.
0.2
Utilization
0.4
0.6
0.8
1
0.01
0.0001
1E06
M/M/1
M/D/1/10
1E08
1E10
1E12
66
PERFORMANCE EVALUATION
important to real-time services, e.g. voice and video. Littles result allows
us to calculate the average waiting time from the average number waiting
in the queue and the arrival rate. If we apply this analysis to the example
of 1000 CBR connections multiplexed together, we obtain the following:
tw D
w
0.422
D
D 2.532 s
166 667
s
1
s
2 1
In both cases we need to add the service time (cell transmission time) to
obtain the overall delay through the system. But the main point to note is
that the average waiting time in the M/D/1 queue (which works out as
1.265 s in our example) is half that for the M/M/1 queue.
Figure 4.6 shows the average waiting time against utilization for both
queue models. The straight line shows the cell service time. Notice how it
dominates the delay up to about 60% utilization. We can take as a useful
rule of thumb that the average delay arising from queueing across a
network will be approximately twice the sum of the service times. This
assumes, of course, that the utilization in any queue will be no more
than about 60%. For the total end-to-end delay, we must also add in the
propagation times on the transmission links.
So, are these significant values? Well, yes, but, taken alone, they
are not sufficient. We should remember that they are averages, and
cells will actually experience delays both larger and smaller. Delay is
particularly important when we consider the end-to-end characteristics
of connections; all the cells in a connection will have to pass through a
series of buffers, each of which will delay them by some random amount
depending on the number of cells already in the buffer on arrival. This
will result in certain cells being delayed more than others, so-called delay
jitter, or cell delay variation (CDV).
67
QUEUEING THEORY
50
40
30
20
M/M/1
10
M/D/1
0
0.0
0.2
0.4
0.6
Utilization
0.8
1.0
s
1
s
MD1tw , s :D
2 1
i :D 0.. 999
i
xi :D
1000
y1i :D MM1tw xi , 2.831
y2i :D MD1tw xi , 2.831
y3i :D 2.831
MM1tw , s :D
Figure 4.6. Graph of the Average Waiting Times for M/M/1 and M/D/1 Queues,
and the Mathcad Code to Generate (x, y) Values for Plotting the Graph
Other traffic
Monitored call
3
2
Longer
interval
Figure 4.7.
1
Shorter
interval
68
PERFORMANCE EVALUATION
10
1E+00
Probability
1E01
Number
of
buffers
1E02
10
1E03
3
2
1E04
1E05
Figure 4.8.
Fundamentals of Simulation
those vital statistics
In the former, the simulator moves from time instant i to time instant
i C 1 regardless of whether the system state has changed, e.g. if the
M/D/1 queue is empty at i it could still be empty at i C 1 and the
program will still only advance the clock to time i C 1. These instants
can correspond to cell slots in ATM. In discrete-event simulation, the
simulator clock is advanced to the next time for which there is a change
in the state of the simulation model, e.g. a cell arrival or departure at the
M/D/1 queue.
So we have a choice: discrete time advance or discrete event advance.
The latter can run more quickly because it will cut out the slot-toslot transitions when the queue is empty, but the former is easier
to understand in the context of ATM because it is simpler to implement and it models the cell buffer from the point of view of the
server process, i.e. the conveyor belt of cell slots (see Figure 1.4).
We will concentrate on the discrete time advance mechanism in this
introduction.
70
FUNDAMENTALS OF SIMULATION
The main program loop implements the discrete time advance mechanism
in the form of a loop counter, i. The beginning of the loop corresponds
to the start of time slot i, and the first section generate new arrivals calls
function Poisson which returns a random non-negative integer for the
number of cell arrivals during this current time slot. We model the queue
with an arrivals-first buffer management strategy, so the service instants
occur at the end of the time slot after any arrivals. This is dealt with by
the second section, serve a waiting cell, which decrements the queue state
variable K, if it is greater than 0, i.e. if the queue is not empty. At this
point, in store results we record the state of the queue in a histogram. This
is simply a count of the number of times the queue is in state K, for each
possible value of K, (see Figure 5.1), and can be converted to an estimate
of the state probability distribution by dividing each value in the array
histogram[] by the total number of time slots in the simulation run.
71
Histogram(K)
102
101
100
5
queue state, K
10
Figure 5.1. An Example of a Histogram of the Queue State (for a Simulation Run of
1000 Time Slots)
The REPEAT loop corresponds to the generation of cells, and the loop
records the number of cells in the batch in variable j, returning the
final total in variable X. Remember that with this particular simulation
program we are not interested in the arrival time of each cell within the
slot, but in the number of arrivals during a slot.
72
FUNDAMENTALS OF SIMULATION
The period is of particular relevance for ATM traffic studies, where rare
events can occur with probabilities as low as 1010 (e.g. lost cells). Once
an RNG repeats its sequence, unwanted correlations will begin to appear
in the results, depending on how the random number sequence has been
applied. In our discrete time advance simulation, we are simulating time
slot by time slot, where each time slot can have 0 or more cell arrivals.
The RNG is called once per time slot, and then once for each cell arrival
73
during the time slot. With the discrete event advance approach, a cellby-cell simulator would call the RNG once per cell arrival to generate the
inter-arrival time to the next cell.
The WichmannHill algorithm has a period of about 7 1012 . Thus,
so long as the number of units simulated does not exceed the period
of 7 1012 , this RNG algorithm can be applied. The computing time
required to simulate this number of cells is impractical anyway, so we can
be confident that this RNG algorithm will not introduce correlation due
to repetition of the random number sequence. Note that the period of the
WichmannHill algorithm is significantly better than many of the random
number generators that are supplied in general-purpose programming
languages. So, check carefully before you use a built-in RNG.
Note that there are other ways in which correlations can appear in a
sequence of random numbers. For more details, see [5.1].
74
FUNDAMENTALS OF SIMULATION
store results
i Ai
time slotlimit
actualload D 0.495
actualload :D
q :D 0 , 1 ..
maxK
Kbinsq :D q
histogramK :D hist (Kbins , K)
end of simulation
Q2 D
histogram[K]
KD3
or, alternatively as
i
Q2 D
2
histogram[K]
KD0
If we start our M/D/1 simulator, and plot Q2 for it as this value evolves
over time, we will see something like that which is shown in Figure 5.2.
75
1E+00
1E01
Actual value
Q(2)
1E02
Simulation
measurements
1E03
Figure 5.2.
Here, the simulator calculates a measurement result for Q2 every 1000
time slots; that is to say it provides an estimate of Q2 every 1000 slots. But
from Figure 5.2 we can see that there are transient measurements, and
that these strongly reflect the initial system state. It is possible to cut out
these measurements in the calculation of steady state results; however,
it is not easy to identify when the transient phase is finished. We might
consider the first 7000 slots as the transient period in our example.
O
Q2
D
Q2j
jD1
76
FUNDAMENTALS OF SIMULATION
calculating
N
O
Q2
z/2
2
O
Q2j Q2
jD1
N N 1
10
Experiments
Actual value
ACCELERATED SIMULATION
77
Validation
Any simulation model will need to be checked to ensure that it works. This
can be a problem: a very general program that is capable of analysing
a large number of scenarios will be impossible to test in all of them,
especially as it would probably have been developed to solve systems
that have no analytical solution to check against. However, even for the
most general of simulators it will be possible to test certain simple models
that do have analytical solutions, e.g. the M/D/1.
ACCELERATED SIMULATION
In the discussion on random number generation we mentioned that the
computing time required to simulate 1012 cells is impractical, although
cell loss probabilities of 1010 are typically specified for ATM buffers. In
fact, most published simulation results for ATM extend no further than
probabilities of 105 or so.
How can a simulation be accelerated in order to be able to measure
such rare events? There are three main ways to achieve this: use more
computing power, particularly in the form of parallel processing; use
statistical techniques to make better use of the simulation measurements;
and decompose the simulation model into connection, burst and cell
scales and use only those time scales that are relevant to the study.
We will focus on the last approach because it extends the analytical
understanding of the cell and burst scales that we develop in later
chapters and applies it to the process of simulation. In particular, burstscale queueing behaviour can be modelled by a technique called cell-rate
simulation.
Cell-rate simulation
The basic unit of traffic with cell-rate simulation is a burst of cells.
This is defined as a fixed cell-rate lasting for a particular time period
78
FUNDAMENTALS OF SIMULATION
Aell rate
A burst of
fixed cell rate
An event marks
a change from
one fixed cell
rate to another
Time
during which it is assumed that the inter-arrival times do not vary (see
Figure 5.4). Thus instead of an event being the arrival or service of a cell,
an event marks the change from one fixed cell-rate to another. Hence
traffic sources in a cell-rate simulator must produce a sequence of bursts
of cells. Such traffic sources, based on a cell-rate description are covered
in Chapter 6.
The multiplexing of bursts from different sources through an ATM
buffer has to take into account the simultaneous nature of these bursts.
Bursts from different sources will overlap in time and a change in the rate
of just one source can affect the output rates of all the other VCs passing
through the buffer.
An ATM buffer is described by two parameters: the maximum number
of cells it can hold, i.e. its buffer capacity; and the constant rate at which
cells are served, i.e. its cell service-rate. The state of a queue, at any
moment in time, is determined by the combination of the input rates
of all the VCs, the current size of the queue, and the queue parameter
values.
The flow of traffic through a queue is described by input, output,
queueing and loss rates (see Figure 5.5). Over any time period, all cells
input to the buffer must be accounted for; they are or served queued or
lost. At any time, the rates for each VC, and for all VCs, must balance:
input rate D output rate C queueing rate C loss rate
When the queue is empty, the output rates of VCs are equal to their
input rates, the total input rate is less than the service rate, and so there
is no burst-scale queueing.
79
ACCELERATED SIMULATION
Input rate
ve
Output rate
Queueing rate
+ve
Loss rate
Figure 5.5.
The Balance of Cell Rates in the Queueing Model for Cell-Rate Simulation
80
FUNDAMENTALS OF SIMULATION
100000
Low utilization (40%)
10000
100
1000
10000
100000
Traffic Models
youve got a source
82
TRAFFIC MODELS
Time-scale of activity
Dimensioning
Use made
of the
service
Calendar
Connection
Burst
Characteristic
behaviour
of the
service
Cell
Performance Engineering
Figure 6.1. Levels of Traffic Behaviour
83
Time
t
t1
Figure 6.3.
t2
84
TRAFFIC MODELS
5e006
1e005
Time
1.5e005
2e005
2.5e005
F(t)
0.1
0.01
F, t :D 1 et
i :D 1.. 250
x1i :D i 107
y1i :D F166667, x1i
j :D 1 .. 8
x2j :D j 2 831 106
y2j :D 1
Figure 6.4. Graph of the Negative Exponential Distribution for a Load of 0.472, and
the Mathcad Code to Generate x, y Values for Plotting the Graph
where the arrival rate is . This distribution, Ft, is shown in Figure 6.4
for a load of 47.2% (i.e. the 1000 CBR source example from Chapter 4). The
arrival rate is 166 667 cell/s which corresponds to an average inter-arrival
time of 6 s. The cell slot intervals are also shown every 2.831 s on the
time axis.
The discrete time equivalent is to have a geometrically distributed
number of time slots between arrivals (Figure 6.5), where that number is
counted from the end of the first cell to the end of the next cell to arrive.
...
Time
85
Obviously a cell rate of 1 cell per time slot has an inter-arrival time of
1 cell slot, i.e. no empty cell slots between arrivals. The probability that a
cell time slot contains a cell is a constant, which we will call p. Hence a
time slot is empty with probability 1 p. The probability that there are k
time slots between arrivals is given by
Prfk time slots between arrivalsg D 1 pk1 p
i.e. k 1 empty time slots, followed by one full time slot. This is the
geometric distribution, the discrete time equivalent of the negative exponential distribution. The geometric distribution is often introduced in
text books in terms of the throwing of dice or coins, hence it is thought
Time
0
5e006
1e005
1.5e005
2e005
2.5e005
Probability
0.1
0.01
F, t :D 1 et
Geometric (p, k) :D 1 1 pk
i :D 1..250
x1i :D i 107
y1i :D F166667, x1i
j :D 1..8
y2j :D 1
j :D 1..8
y3j :D Geometric 166667 2.831 106 , j
Figure 6.6. A Comparison of Negative Exponential and Geometric Distributions,
and the Mathcad Code to Generate x, y Values for Plotting the Graph
86
TRAFFIC MODELS
COUNTING ARRIVALS
An alternative way of presenting timing information about an arrival
process is by counting the number of arrivals in a defined time interval.
There is an equivalence here with the inter-arrival time approach in
continuous time: negative exponential distributed inter-arrival times
form a Poisson process:
Prfk arrivals in time Tg D
Tk T
e
k!
N!
1 pNk pk
N k! k!
87
COUNTING ARRIVALS
Buffer
Source
Negative exponential
distribution for time
between arrivals
Figure 6.7.
Stream
Geometrically distributed
number of time slots
between cells in
synchronized cell stream
to the buffer, at a cell arrival rate of cells per time slot. At the buffer
output, a cell occupies time slot i with probability p as we previously
defined for the Bernoulli process. Now if is the cell arrival rate and p
is the output cell rate (both in terms of number of cells per time slot),
and if we are not losing any cells in our (infinite) buffer, we must have
that D p.
Note that the output process of an ATM buffer of infinite length, fed
by a Poisson source is not actually a Bernoulli process. The reason is that
the queue introduces dependence from slot to slot. If there are cells in the
buffer, then the probability that no cell is served at the next cell slot is 0,
whereas for the Bernoulli process it is 1 p. So, although the output cell
stream is not a memoryless process, the Bernoulli process is still a useful
approximate model, variations of which are frequently encountered in
teletraffic engineering for ATM and for IP.
The limitation of the negative exponential and geometric inter-arrival
processes is that they do not incorporate all of the important characteristics of typical traffic, as will become apparent later.
Certain forms of switch analysis assume batch-arrival processes: here,
instead of a single arrival with probability p, we get a group (the batch),
and the number in the group can have any distribution. This form of
arrival process can also be considered in this category of counting arrivals.
For example, at a buffer in an ATM switch, a batch of arrivals up to some
maximum, M, arrive from different parts of the switch during a time slot.
This can be thought of as counting the same number of arrivals as cells in
the batch during that time slot. The Bernoulli process with batch arrivals
is characterized by having an independent and identically distributed
number of arrivals per discrete time period. This is defined in two parts:
the presence of a batch
Prfthere is a batch of arrivals in a time slotg D p
or the absence of a batch
88
TRAFFIC MODELS
k
e
k!
For the binomial distribution, we now want the probability that there
are k arrivals from M inputs where each input has a probability, p, of
producing a cell arrival in any time slot. Thus
ak D
M!
1 pMk pk
M k! k!
and the total arrival rate is M p cells per time slot. Figure 6.8 shows
what happens when the total arrival rate is fixed at 0.95 cells per time
89
RATES OF FLOW
10
100
101
Poisson
M=100
M=20
M=10
Probability
102
103
104
105
106
10 7
108
Poisson k , :D
k
e
k!
Binomial k , M , p :D
M!
1 pMk pk
M k! k!
i :D 0 .. 10
xi :D i
y1i :D Poisson x
i , 0.95
0.95
100
0.95
y3i :D Binomial xi , 20,
20
0.95
y4i :D Binomial xi , 10,
10
y2i :D Binomial
xi , 100,
Figure 6.8. A Comparison of Binomial and Poisson Distributions, and the Mathcad
Code to Generate x, y Values for Plotting the Graph
slot and the numbers of inputs are 10, 20 and 100 (and so p is 0.095,
0.0475 and 0.0095 respectively). The binomial distribution tends towards
the Poisson distribution, and in fact in the limit as N ! 1 and p ! 0 the
distributions are the same.
RATES OF FLOW
The simplest form of source using a rate description is the periodic
arrival stream. We have already met an example of this in 64 kbit/s CBR
90
TRAFFIC MODELS
telephony, which has a cell rate of 167 cell/s in ATM. The next step is
to consider an ONOFF source, where the process switches between a
silent state, producing no cells, and a state which produces a particular
fixed rate of cells. Sources with durations (in the ON and OFF states)
distributed as negative exponentials have been most frequently studied,
and have been applied to data traffic, to packet-speech traffic, and as a
general model for bursty traffic in an ATM multiplexor.
Figure 6.9 shows a typical teletraffic model for an ONOFF source.
During the time in which the source is on (called the sojourn time in
the active state), the source generates cells at a rate of R. After each cell,
another cell is generated with probability a, or the source changes to the
silent state with probability 1 a. Similarly, in the silent state, the source
generates another empty time slot with probability s, or moves to the
active state with probability 1 s. This type of source generates cells in
patterns like that shown in Figure 6.10; for this pattern, R is equal to half
of the cell slot rate. Note that there are empty slots during the active state;
these occur if the cell arrival rate, R, is less than the cell slot rate.
We can view the ONOFF source in a different way. Instead of showing
the cell generation process and empty time slot process explicitly as
Bernoulli processes, we can simply describe the active state as having a
geometrically distributed number of cell arrivals, and the silent state as
having a geometrically distributed number of cell slots. The mean number
of cells in an active state, E[on], is equal to the inverse of the probability
of exiting the active state, i.e. 1/1 a cells. The mean number of empty
Pr{no} = 1- a
SILENT STATE
Silent for
another time
slot?
Pr{yes} = s
Pr{yes} = a
ACTIVE STATE
Generate
another cell
arrival?
Pr{no} = 1-s
Figure 6.9.
ACTIVE
SILENT
1/R
1/C
ACTIVE
Time
91
RATES OF FLOW
1
SILENT STATE
ACTIVE STATE
Generate empty
slotsat a rate of C
E[off] = 1/(1-s)
Generate cells at
a rate of R
E[on] =1/(1-a)
1
Figure 6.11.
cell slots in a silent state, E[off], is equal to 1/1 s cell slots. At the
end of a sojourn period in a state, the process switches to the other state
with probability 1. Figure 6.11 shows this alternative representation of
the ONOFF source model.
It is important to note that the geometric distributions for the active
and silent states have different time bases. For the active state the unit of
time is 1/R, i.e. the cell inter-arrival time. Thus the mean duration in the
active state is
1
Ton D E[on]
R
For the silent state the unit of time is 1/C, where C is the cell slot rate;
thus the mean duration in the silent state is
Toff D
1
E[off]
C
92
TRAFFIC MODELS
1
SILENT STATE
ACTIVE STATE
C = 353208 slot/s
R = 167 cell/s
Figure 6.12.
1
D 160
1a
aD1
1
D 0.993 75
160
E[off] D
1
D 596 921
1s
and
so
sD1
1
D 0.999 998 324 7
596 921
93
RATES OF FLOW
STATE 3
p(1,3)
p(2,3)
p(3,1)
p(3,2)
p(1,2)
STATE 1
STATE 2
p (2,1
PART II
ATM Queueing and
Traffic Control
98
n+1
Time (slotted)
Departure instant
for cell in service
during time slot n 1
Figure 7.1.
Strategy
Departure instant
for cell in service
during time slot n
enters service at the beginning of a time slot. The cell departs at the end
of a time slot, and this is synchronized with the start of service of the
next cell (or empty time slot, if there is nothing waiting in the buffer).
Cells arrive during time slots, as shown in Figure 7.1. The exact instants
of arrival are unimportant, but we will assume that any arrivals in a time
slot occur before the departure instant for the cell in service during the
time slot. This is called an arrivals-first buffer management strategy. We
will also assume that if a cell arrives during time slot n, the earliest it can
be transmitted (served) is during time slot n C 1.
For our analysis, we will use a Bernoulli process with batch arrivals,
characterized by an independent and identically distributed batch of k
arrivals (k D 0, 1, 2, . . .) in each cell slot:
ak D Prfk arrivals in a cell slotg
It is particularly important to note that the state probabilities refer to the
state of the queue at moments in time that are usually called the end of
time-slot instants. These instants are after the arrivals (if there are any)
and after the departure (if there is one); indeed they are usually defined
to be at a time t after the end of the slot, where t ! 0.
99
i+2
i+1
a(0)
i
a(1)
a(i-2)
a(i-1)
a(i)
a(i)
.
.
.
3
2
1
0
Figure 7.2. How to Reach State i at the End of a Time Slot from States at the End of
the Previous Slot
s
100
If we now consider the service time of a cell to be one time slot, for
simplicity, then the average number of arrivals per time slot is denoted
E[a] (which is the mean of the arrival distribution ak), and the average
number of cells carried per time slot is the utilization. Thus
E[a] D
But the utilization is just the steady-state probability that the system is
not empty, so
E[a] D D 1 s0
and therefore
s0 D 1 E[a]
So from just the arrival rate (without any knowledge of the arrival
distribution ak) we are able to determine the probability that the system
is empty at the end of any time slot. It is worth noting that, if the applied
cell arrival rate is greater than the cell service rate (one cell per time
slot), then
s0 < 0
which is a very silly answer! Obviously then we need to ensure that cells
are not arriving faster (on average) than the system is able to transmit
them. If E[a] 1 cell per time slot, then it is said that the queueing system
is unstable, and the number of cells in the buffer will simply grow in an
unbounded fashion.
a(0)
a(0)
101
You may ask how it can be that sk applies as the state probabilities for
the end of time slot n 1 and time slot n. Well, the answer lies in the fact
that these are steady-state (sometimes called long-run) probabilities,
and, on the assumption that the buffer has been active for a very long
period, the probability distribution for the queue at the end of time slot
n 1 is the same as the probability distribution for the end of time slot n.
Our equation can be rearranged to give a formula for s1:
s1 D s0
1 a0
a0
We can continue with this process to find a similar expression for the
general state, k.
sk 1 D s0 ak 1 C s1 ak 1 C s2 ak 2 C C sk 1
a1 C sk a0
which, when rearranged, gives:
sk 1 s0 ak 1
k1
si ak i
iD1
sk D
a0
2
a(1)
a(1)
1
0
a(0)
102
Queue size
0
10
15
20
25
30
10
State probability
101
Poisson
Binomial
102
103
104
105
106
k
e
k!
Binomial k, M, P :D 0 if k > M
M!
1 pMK pk
M K! k!
k :D 0.. 30
Poisson k, :D
if k M
103
Because we have used the simplifying assumption that the queue length
is infinite, we can, theoretically, make k as large as we like. In practice,
how large we can make it will depend upon the value of sk that results
from this calculation, and the program used to implement this algorithm
(depending on the relative precision of the real-number representation
being used).
Now what about results? What does this state distribution look like?
Well, in part this will depend on the actual input distribution, the values
of ak, so we can start by obtaining results for the two input distributions
discussed in Chapter 6: the binomial and the Poisson. Specifically, let us
Buffer capacity, X
10
15
20
25
30
100
101
Poisson
Binomial
102
103
104
105
106
QX, s :D qx0
1 s0
for i 2 1.. X
if X > 0
qx
qxi1 si
i
qx
xk :D k
yP :D infiniteQ30, aP, 0.8
yB :D infiniteQ30, aB, 0.8
y1 :D Q30, yP
y2 :D Q30, yB
Figure 7.6. Graph of the Approximation to the Cell Loss by the Probability that the
Queue State Exceeds X, and the Mathcad Code to Generate (x, y) Values for Plotting
the Graph
104
1 a0
a0
k1
si ak i
iD1
a0
For the system to become full with the arrivals-first buffer management
strategy, there is actually only one way in which this can happen at the end
of time-slot instants: to be full at the end of time slot i, the buffer must begin
slot i empty, and have X or more cells arrive in the slot. If the system is
non-empty at the start, then just before the end of the slot (given enough
arrivals) the system will be full, but when the cell departure occurs at
the slot end, there will be X 1 cells left, and not X. So for the full state,
we have:
sX D s0 AX
105
where
Ak D 1 a0 a1 ak 1
So Ak is the probability that at least k cells arrive in a slot. Now we face
the problem that, without the value for s0, we cannot evaluate sk for
k > 0. What we do is to define a new variable, uk, as follows:
uk D
sk
s0
so
u0 D 1
Then
u1 D
1 a0
a0
uk 1 ak 1
k1
ui ak i
iD1
uk D
a0
uX D AX
and all the values of uk, 0 k X, can be evaluated! Then using the
fact that all the state probabilities must sum to 1, i.e.
X
si D 1
iD0
we have
X
si
iD0
1
ui
D
D
s0
s0
iD0
X
so
s0 D
1
X
ui
iD0
The other values of sk, for k > 0, can then be found from the definition
of uk:
sk D s0 uk
Now we can apply the basic traffic theory again, using the relationship
between offered, carried and lost traffic at the cell level, i.e.
LDAC
106
Queue size
0
10
101
State probability
102
103
Poisson
Binomial
104
105
106
107
108
109
1010
k : D 0.. 10
aPk :D Poisson k, 0.8
aBk :D Binominal k, 8, 0.1
1
finiteQstate (X, a) :D u0
1 a0
u1
a0
for k 2 2.. X 1
if X > 2
k1
ui ak1
uk1 ak1
iD1
uk
a0
X1
u
1
ai if X > 1
x
iD0
1
s0
X
ui
iD0
for k 2 1.. X
sk
s0 uk
s
xk :D k
y1 :D finiteQstate (10, aP)
y2 :D finiteQstate (10, aB)
Figure 7.7. Graph of the State Probability Distribution for a Finite Queue of 10 Cells
and a Load of 80%, and the Mathcad Code to Generate (x, y) Values for Plotting the
Graph
107
Buffer capacity, X
0
10
15
20
25
30
101
Poisson
Binomial
102
103
104
105
106
k : D 0.. 30
aPk :D Poisson k, 0.8
aBk :D Binominal k, 8, 0.1
finiteQloss (X, a, Ea) :D u0
u1
for
uk
ux
s0
for
sk
CLP
i :D 2.. 30
xi :D i
y1i :D finiteQloss (xi , aP, 0.8)
y2i :D finiteQloss (xi , aB, 0.8)
1
1 a0
a0
k 2 2.. X 1
if X > 2
k1
ui aki
uk1 ak1
iD1
1
X1
a0
ai
if X > 1
iD0
1
X
ui
iD0
k 2 1.. X
s0 uk
Ea 1 s0
Ea
Figure 7.8. Graph of the Exact Cell Loss Probability against System Capacity X for
a Load of 80%
108
As before, we consider the service time of a cell to be one time slot, for
simplicity; then the average number of arrivals per time slot is E[a] and
the average number of cells carried per time slot is the utilization. Thus
L D E[a] D E[a] 1 s0
and the cell loss probability is just the ratio of lost traffic to offered traffic:
CLP D
E[a] 1 s0
E[a]
Figure 7.7 shows the state probability distribution for an output buffer
of capacity 10 cells (which includes the server) being fed from our 8
Bernoulli sources each having p D 0.1 as before. The total load is 80%.
Notice that the probability of the buffer being full is very low in the
Poisson case, and zero in the binomial case. This is because the arrivalsfirst strategy needs 10 cells to arrive at an empty queue in order for the
queue to fill up; the maximum batch size with 8 Bernoulli sources is
8 cells.
Now we can generate the exact cell loss probabilities for finite buffers.
Figure 7.8 plots the exact CLP value for binomial and Poisson input to a
finite queue of system capacity X, where X varies from 2 up to 30 cells.
Now compare this with Figure 7.6.
DELAYS
We looked at waiting times in M/M/1 and M/D/1 queueing systems in
Chapter 4. Waiting time plus service time gives the system time, which is
the overall delay through the queueing system. So, how do we work out
the probabilities associated with particular delays in the output buffers
of an ATM switch? Notice first that the delay experienced by a cell, which
we will call cell C, in a buffer has two components: the delay due to the
unfinished work (cells) in the buffer when cell C arrives, Ud ; and the
delay caused by the other cells in the batch in which C arrives, Bd .
T d D Ud C B d
where Td is the total delay from the arrival of C until the completion of
its transmission (the total system time).
In effect we have already determined Ud ; these values are given by the
state probabilities as follows:
PrfUd D 1g D Ud 1 D s0 C s1
Remember that we assumed that each cell will be delayed by at least 1
time slot, the slot in which it is transmitted. For all k > 1 we have the
109
DELAYS
relationship:
PrfUd D kg D Ud k D sk
The formula for Bd k D PrfBd D kg accounts for the position of C within
the batch as well:
k
1
ai
iD0
Bd k D
E[a]
k
Ud j Bd k j
jD1
We plot the cell delay probabilities for the example we have been
considering (binomial and Poisson input processes, p D 0.1 and M D 8,
D 0.8) in Figure 7.9.
Delay (time slots)
Probability of delay
10
0.1
Poisson
Binomial
0.01
0.001
Figure 7.9.
Cell Delay Probabilities for a Finite Buffer of Size 10 Cells with a Load of 80%
110
End-to-end delay
To find the cell delay variation through a number of switches, we convolve
the cell delay distribution for a single buffer with itself. Let
Td,n k D Prftotal delay through n buffers D kg
Then, for two switches the delay distribution is given by
Td,2 k D
k
jD1
There is one very important assumption we are making: that the arrivals
to each buffer are independent of each other. This is definitely not the
case if all the traffic through the first buffer goes through the second
one. In practice, it is likely that only a small proportion will do so; the
bulk of the traffic will be routed elsewhere. This situation is shown in
Figure 7.10.
We can extend our calculation for 2 switches by applying it recursively
to find the delay through n buffers:
Td,n k D
k
jD1
Through traffic
Buffer n
Through traffic
111
DELAYS
10
20
30
40
50
n=1
Probability of delay
1E01 n=2
n=3
1E02
n=5
1E03
n=7
1E04
n=9
1E05
Figure 7.11. End-to-End Delay Distributions for 1, 2, 3, 5, 7 and 9 Buffers, with a Load of 80%
Cell-Scale Queueing
dealing with the jitters
CELL-SCALE QUEUEING
In Chapter 4 we considered a situation in which a large collection of CBR
voice sources all send their cells to a single buffer. We stated that it was
reasonably accurate under certain circumstances (when the number of
sources is large enough) to model the total cell-arrival process from all
the voice sources as a Poisson process.
Now a Poisson process is a single statistical model from which the
detailed information about the behaviour of the individual sources has
been lost, quite deliberately, in order to achieve simplicity. The process
features a random number (a batch) of arrivals per slot (see Figure 8.1)
where this batch can vary as 0, 1, 2, . . . , 1.
So we could say that in, for example, slot n C 4, the process has
overloaded the queueing system because two cells have arrived one
more than the buffer can transmit. Again, in slot n C 5 the buffer has
been overloaded by three cells in the slot. So the process provides short
periods during which its instantaneous arrival rate is greater than the cell
service rate; indeed, if this did not happen, there would be no need for a
buffer.
But what does this mean for our N CBR sources? Each source is at a
constant rate of 167 cell/s, so the cell rate will never individually exceed
the service rate of the buffer; and provided N 167 < 353 208 cell/s, the
total cell rate will not do so either. The maximum number of sources
is 353 208/167 D 2115 or, put another way, each source produces one
cell every 2115 time slots. However, the sources are not necessarily
arranged such that a cell from each one arrives in its own time slot;
indeed, although the probability is not high, all the sources could be
(accidentally) synchronized such that all the cells arrive in the same slot.
In fact, for our example of multiplexing 2115 CBR sources, it is possible
114
CELL-SCALE QUEUEING
n+1
Figure 8.1.
n+2
n+3
n+7
n+8 n+9
n+10
for any number of cells varying from 0 up to 2115 to arrive in the same
slot. The queueing behaviour which arises from this is called cell-scale
queueing.
Queue
size
Figure 8.2. Repeating Patterns in the Size of the Queue when Constant-Bit-Rate
Traffic Is Multiplexed
115
Queue
size
Queue
size
Figure 8.2.
(continued)
116
CELL-SCALE QUEUEING
n! N n!
D
D
nDxC1
DNCx
DnCx
Lets put some numbers in, and see how the cell loss varies with different
parameters and their values. The distribution of Qx for a fixed load of
Buffer capacity
0
10
20
30
40
100
101
102
103
Q(x)
104
105
106
107
108
109
N = 1000
N = 500
N = 200
N = 50
1010
N
NDD1Q x, N, :D D
N
nx n
n x Nn D N C x
combin
(N,
n)
1
D
D
DnCx
nDxC1
xk :D k
y1k :D NDD1Q k, 1000, 0.95
y2k :D NDD1Q k, 500, 0.95
y3k :D NDD1Q k, 200, 0.95
y4k :D NDD1Q k, 50, 0.95
k :D 0.. 40
Figure 8.3. Results for the ND/D/1 Queue with a Load of 95%, and the Mathcad
Code to Generate (x, y) Values for Plotting the Graph
117
Qx D e
2x
1
118
CELL-SCALE QUEUEING
Buffer capacity
0
10
20
30
10 0
101
102
103
CLP
104
105
106
107
= 0.95
108
= 0.75
109
= 0.55
1010
k :D 0.. 30
ap95k :D Poisson k, 0.95
ap75k :D Poisson k, 0.75
ap55k :D Poisson k, 0.55
2x
1
MD1Qheavy x, :D e
i :D 2.. 30
xk :D k
Y1i :D finiteQloss xi , ap95, 0.95
Y2k :D MD1Qheavy xk , 0.95
Y3i :D finiteQloss xi , ap75, 0.75
Y4k :D MD1Qheavy xk , 0.75
Y5i :D finiteQloss xi , ap55, 0.55
Y6k :D MD1Qheavy xk , 0.55
Figure 8.4. Comparing the Heavy-Traffic Results for the M/D/1 with Exact Analysis
of the M/D/1/K, and the Mathcad Code to Generate (x, y) Values for Plotting the
Graph
1
119
and
D
2x
2x lnQx
We will not investigate how to use these equations just yet. The first
relates to buffer dimensioning, and the second to admission control, and
both these topics are dealt with in later chapters.
10
20
30
40
101
102
Q(x)
103
104
105
106
107
108
109
N = 1000
N = 500
N = 200
N = 50
1010
k :D 0.. 40
2x
x 1
C
N
NDD1Qheavy x, N, :D e
xk :D k
y1k :D NDD1Q k, 1000, 0.95
y2k :D NDD1Q k, 500, 0.95
y3k :D NDD1Q k, 200, 0.95
y4k :D NDD1Q k, 50, 0.95
y5k :D NDD1Qheavy k, 1000, 0.95
y6k :D NDD1Qheavy k, 500, 0.95
y7k :D NDD1Qheavy k, 200, 0.95
y8k :D NDD1Qheavy k, 50, 0.95
Figure 8.5. Comparison of Exact and Approximate Results for ND/D/1 at a Load
of 95%, and the Mathcad Code to Generate (x, y) Values for Plotting the Graph
120
CELL-SCALE QUEUEING
Qx D e
x 1
2x N C
Figure 8.5 shows how the approximation compares with exact results
from the ND/D/1 analysis for a load of 95%. The approximate results
are shown as lines, and the exact results as markers. In this case the
approximation is in very good agreement. Figure 8.6 shows how the
Buffer capacity
0
10
20
30
40
100
= 0.95
101
= 0.95
102
= 0.95
103
CLP
104
105
106
107
108
109
1010
k :D 0.. 40
xk :D k
y1k :D NDD1Q k, 200, 0.95
y2k :D NDD1Qheavy k, 200, 0.95
y3k :D NDD1Q k, 200, 0.75
y4k :D NDD1Qheavy k, 200, 0.75
y5k :D NDD1Q k, 200, 0.55
y6k :D NDD1Qheavy k, 200, 0.55
Figure 8.6. Comparison of Exact and Approximate Results for ND/D/1 for a
variety of Loads, with N D 200, and the Mathcad Code to Generate (x, y) Values for
Plotting the Graph
121
122
CELL-SCALE QUEUEING
Source
1
Source
multiplexer
Source
2
.
.
.
Source
N
Source
1
Source
multiplexer
Source
2
.
.
.
2 2 ATM
switching
element
Source
N
Figure 8.8.
10
15
20
1E+00
1E01
1E02
1E03
1E04
1E05
1E06
1E07
1E08
1E09
1E10
output rate of either of the switch output ports, but still there can be cell
loss in the switch. Figure 8.9 shows an example of the cell loss probabilities
for either of the output buffers in the switch for the scenario illustrated in
Figure 8.8. This assumes that the output from each source multiplexor is
a Bernoulli process, with parameter p0 D 0.5, and that the cells are routed
123
Burst-Scale Queueing
information overload!
126
BURST-SCALE QUEUEING
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
Figure 9.1.
127
50%
25%
33%
108%
3
0
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
128
BURST-SCALE QUEUEING
We can identify two parts to this excess-rate demand, and analyse the
parts separately. First, what is the probability that an arriving cell is an
excess-rate cell? This is the same as saying that the cell needs burst-scale
buffer storage. Then, secondly, what is the probability that such a cell is
lost, i.e. the probability that a cell is lost, given that it is an excess-rate
cell? We can then calculate the overall cell loss probability arising from
burst-scale queueing as:
Prfcell is lostg Prfcell is lostjcell needs bufferg Prfcell needs bufferg
The probability that a cell needs the buffer is called the burst-scale loss
factor; this is found by considering how the input rate compares with the
service rate of the queue. A cell needs to be stored in the buffer if the
total input rate exceeds the queues service rate. If there is no burst-scale
buffer storage, these cells are lost, and
Prfcell is lostg Prfcell needs bufferg
The probability that a cell is lost given that it needs the buffer is called
the burst-scale delay factor; this is the probability that an excess-rate cell
is lost. If the burst-scale buffer size is 0, then this probability is 1, i.e. all
excess-rate cells are lost. However, if there is some buffer storage, then
only some of the excess-rate cells will be lost (when this buffer storage
is full).
Figure 9.3 shows how these two factors combine on a graph of cell
loss probability against the buffer capacity. The burst-scale delay factor is
shown as a straight line with the cell loss decreasing as the buffer capacity
increases. The burst-scale loss factor is the intersection of the straight line
with the zero buffer axis.
Buffer capacity
0
10
20
30
1
0.1
CLP
40
129
R
Excess cell rate
C
Service for
excess-rate
cells
Time
Figure 9.4.
130
BURST-SCALE QUEUEING
Ton
X
Toff
Figure 9.5.
Analysis
CLPexcess-rate D
C R e
XCR
Ton 1RCC
1 C R C e
XCR
Ton 1RCC
where
R = ON rate
C = service rate of queue
X = buffer capacity of queue
Ton = mean duration in ON state
Toff = mean duration in OFF state
and
=
Ton
D probability that the source is active
Ton C Toff
Note that CLPexcess-rate is the probability that a cell is lost given that it is an
excess-rate cell. The probability that a cell is an excess-rate cell is simply
the proportion of excess-rate cells to all arriving cells, i.e. R C/R. Thus
the overall cell loss probability is
CLP D
RC
CLPexcess-rate
R
131
state, i.e. when there are excess-rate arrivals. Thus the mean number of
excess-rate arrivals in one ON/OFF cycle is R C Ton , and so the mean
excess rate is simply R C . The number of excess-rate cells arriving
during the time period is R C T, and so the number of excess-rate
cells actually lost during a time period T is given by the right-hand
side of the equation. There is no other way of losing cells, so the two
sides of the equation are indeed equal, and the result for CLP follows
directly.
We will take an example and put numbers into the formula later on,
when we can compare with the results for the discrete approach.
a)
b)
132
BURST-SCALE QUEUEING
Pr{no} = 1a
Silent for
another
time slot?
Generate
another excessrate arrival?
Pr{yes} = s
Figure 9.6.
Pr{yes} = a
Pr{no} = 1s
1
1a
But this is simply the mean duration in the ON state multiplied by the
excess rate, so
1
D Ton R C
E[on] D
1a
giving
aD1
1
Ton R C
In a similar manner, the mean number of empty time slots in the OFF
state is
1
D Toff C
E[off] D
1s
Cell rate
a
R
1a
1-a
C
1s
s s s s
s
Time
Figure 9.7.
The Process of Arrivals and Time Slots for the ON/OFF Source Model
133
giving
sD1
1
Toff C
Time
Figure 9.8.
134
BURST-SCALE QUEUEING
1a
pX
a
X
X1
(1-a).p(X )
135
is full), and then there is a gap of at least two empty time slots, so that
the next arrival sees fewer than X 1 in the queue. The second term is
for an arriving excess-rate cell which sees X 1 in the queue, taking the
state of the queue up to X, and then there is a gap of at least two empty
time slots, so that the next arrival sees fewer than X 1 in the queue.
Rearranging, and substituting for pX, gives
pX 2 D
s
pX 1
a
s
pX i C 1
a
pX i D
s
a
1a
pX
s
pX i D pX C
iD0
a.p(Xi+1)
iD1
pX D 1
Xi+2
Xi+1
a.p(Xi)
X i
1a
s
Xi
Multiply by s
Figure 9.10. Equating Up- and Down-Crossing Probabilities in the General Case
136
BURST-SCALE QUEUEING
X
iD1
1
s
a
i
1a
s
pX D
1C
s
a
1
1a
sa
which is valid except when a D s (in which case the previous formula
must be used). As in the case of the continuous fluid-flow analysis, the
overall cell loss probability is given by
CLP D
RC
RC
CLPexcess-rate D
pX
R
R
0.96
D 0.362
0.96 C 1.69
137
Buffer capacity
0
10
20
30
40
100
101
102
103
104
C = 80
C = 120
C = 150
C = 160
105
k :D 1..
40
RC
R
C R e
XCR
Ton1RCC
1 C R C e
1
CLPdiscFFC, , R, X, Ton :D
Toff
Ton
a
1
Ton R C
1
Toff C
RC
R
s X
1a
1
C
1
a
sa
Xk :D k
y1k :D CLPdiscFF 80, 0.362, 167, k, 0.96
y2k :D CLPdiscFF 120, 0.362, 167, k, 0.96
y3k :D CLPdiscFF 150, 0.362, 167, k, 0.96
y4k :D CLPdiscFF 160, 0.362, 167, k, 0.96
y5k :D CLPcontFF 80, 0.362, 167, k, 0.96
y6k :D CLPcontFF 120, 0.362, 167, k, 0.96
y7k :D CLPcontFF 150, 0.362, 167, k, 0.96
y8k :D CLPcontFF 160, 0.362, 167, k, 0.96
Figure 9.11.
XCR
Ton1RCC
Cell Loss Probability against Buffer Capacity for a Single ON/OFF Source
138
BURST-SCALE QUEUEING
Capacity
(cell/s)
80
120
150
160
83.52
45.12
16.32
6.72
Buffer capacity
0
10
20
30
40
100
101
102
103
* 10
*1
* 0.1
104
105
k :D 1..
40
xk :D k
y1k :D CLPdiscFF 150, 0.362, 167, k, 9.6
y2k :D CLPdiscFF 150, 0.362, 167, k, 0.96
y3k :D CLPdiscFF 150, 0.362, 167, k, 0.096
y4k :D CLPcontFF 150, 0.362, 167, k, 9.6
y5k :D CLPcontFF 150, 0.362, 167, k, 0.96
y6k :D CLPcontFF 150, 0.362, 167, k, 0.096
Figure 9.12. The Effect of Scaling the Mean State Durations, Ton and Toff , when
C D 150 cell/s
139
cell loss probability plotted against the buffer capacity, X, as the service
capacity is varied between the mean and peak cell rates of the source.
The discrete fluid-flow results are shown as markers, and the continuous
as solid lines.
As the service capacity gets closer to the ON rate, R, the gradient
steepens. This means that the buffer is better able to cope with the bursts
of excess-rate cells. We can see more clearly why this is so by looking
at Table 9.1, which shows the average number of excess-rate cells in an
active period. When this number is large relative to the capacity of the
buffer, then the buffer does not cope very well because it only takes
a fraction of an average burst to fill it up. It would not make much
difference if there was no buffer spacethere is so little difference to the
cell loss over the range of buffer capacity shown. The buffer only makes
a difference to the cell loss if the average excess-rate burst length is less
than the buffer capacity, i.e. when it would take a number of bursts
to fill the buffer. Notice that it is only in these circumstances that the
discrete and continuous fluid-flow results show any difference; and then
the discrete approach is more accurate because it does not include the
fractions of cells allowed by the continuous fluid-flow analysis. These
small amounts actually represent quite a large proportion of an excessrate burst when the average number of excess-rate cells in the burst is
small.
Figure 9.12 illustrates the strong influence of the average state durations
on the results for cell loss probability. Here, C D 150 cell/s, with other
parameter values as before, and the Ton and Toff values have been scaled
by 0.1, 1 and 10. In each case the load on the buffer remains constant at a
value of
167
R
D 0.403
D 0.362
C
150
Ton
Ton C Toff
140
BURST-SCALE QUEUEING
Ton
Ton
Toff
Source 1
Ton
Toff
X
Mean rate of source
m = hTon/(Ton + Toff)
Source 2
Toff
Source N
Figure 9.13.
Ton
m
D
h
Ton C Toff
C
h
This may well not be an integer value. If we round the value up, to
dN0 e (this notation means take the first integer above N0 ), this gives the
minimum number of sources required for burst-scale queueing to take
place. If we round the value down, to bN0 c (this notation means take the
first integer below N0 ), this gives the maximum number of sources we
can have in the system without having burst-scale queueing.
We saw earlier in the chapter that the burst-scale queueing behaviour
can be separated into two components: the burst-scale loss factor, which
is the probability that a cell is an excess-rate cell; and the burst-scale delay
factor, which is the probability that a cell is lost given that it is an excessrate cell. Both factors contribute to quantifying the cell loss: the burst-scale
loss factor gives the cell loss probability if we assume there is no buffer;
141
this value is multiplied by the burst-scale delay factor to give the cell loss
probability if we assume there is a buffer of some finite capacity.
h C
h C
hC
D
D
m
h
h
N!
n 1 Nn
n! N n!
pn n h C
nDdN0 e
The mean arrival rate is simply N m, so the probability that a cell needs
the buffer is given by the ratio of the mean excess rate to the mean
arrival rate:
N
pn n h C
nDdN0 e
Nm
pn n N0
nDdN0 e
Lets put some numbers into this formula, using the example of two
different types of video source, each with a mean bit-rate of 768 kbit/s and
peak bit-rates of either 4.608 Mbit/s or 9.216 Mbit/s. The corresponding
142
BURST-SCALE QUEUEING
Table 9.2.
h (cell/s)
Parameter Values
2000
D
h
12 000
24 000
0.167
0.083
N0 D
353 207.55
h
29.43
14.72
cell rates are m D 2000 cell/s and h D 12 000 cell/s or 24 000 cell/s, and
the other parameter values are shown in Table 9.2.
Figure 9.14 shows how the probability that a cell needs the buffer
increases with the number of video sources being multiplexed through
the buffer. The minimum number of sources needed to produce burstscale queueing is 30 (for h D 12 000) or 15 (for h D 24 000). The results
show that about twice these values (60 and 30, respectively) produce
loss probabilities of about 1010 , increasing to between 101 and 102
for 150 of either source (see Figure 9.14). For both types of source the
mean rate, m, is 2000 cell/s, so the average load offered to the buffer, as a
fraction of its service capacity, ranges from 30 2000/353 208 17% up
to 150 2000/353 208 85%.
We know from Chapter 6 that the binomial distribution can be approximated by the Poisson distribution when the number of sources, N,
becomes large. This can be used to provide an approximate result for
Prfcell needs bufferg, the burst-scale loss factor, and it has the advantage
of being less demanding computationally because there is no summation [9.3].
Prfcell needs bufferg
N0 bN0 c
N0
1
e
1
2 N0
bN0 c!
m h
N
Nm
DN D
C
h C
N0
Figure 9.15 shows results for ON/OFF sources with peak rate h D
12 000 cell/s, and mean rates varying from m D 2000 cell/s ( D 0.167)
down to 500 cell/s ( D 0.042). N0 is fixed at 29.43, and the graph plots
the loss probability varying with the offered load,
. We can see that
for any particular value of
the burst-scale loss factor increases, as the
activity factor, , decreases, towards an upper limit given by the approximate result. The approximation thus gives a conservative estimate of the
probability that a cell needs the buffer. Note that as the activity factor
decreases, the number of sources must increase to maintain the constant
load, taking it into the region for which the Poisson approximation is
valid.
143
Number of sources, N
30
60
90
120
150
10 0
h = 24000 cell/s
h = 12000 cell/s
10 1
10 2
10 3
10 4
10 5
10 6
10 7
10 8
10 9
10 10
k :D 0.. 120
N
BSLexact , N, NO :D
xk :D k C 30
nDceil NO
2000
353207.55
, xk ,
24000
24000
2000
353207.55
y2k :D BSLexact
, xk ,
12000
12000
y1k :D BSLexact
How does this Poisson approximation change our view of the source
process? Instead of considering N identical ON/OFF sources each with
probability of being in the active state and producing a burst of fixed
rate h, we are modelling the traffic as just one Poisson source which
produces overlapping bursts. The approximation equates the average
number of active sources with the average number of bursts in progress.
Its similar to our definition of traffic intensity, but at the burst level.
The average number of active sources is simply N ; now, recalling
that the probability of being active is related to the average durations in
144
BURST-SCALE QUEUEING
Offered load,
0.0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
10 0
101
102
103
104
105
106
107
approx.
=0.042
=0.083
=0.167
108
109
1010
k :D 0..
130
BSLapprox
, NO :D
N2k :D k C 30
N2k 2000
x1k :D
353207.5
y1k :D BSLapprox
x2k :D x1k
y2k :D BSLexact
N3k :D 2 k C 60
N3k 1000
x3k :D
353207.5
y3k :D BSLexact
1
NOfloor NO
NO
e
1
2 NO
floor NO!
x1k ,
353207.55
12000
2000
353207.55
, N2k ,
12000
12000
1000
353207.55
, N3k ,
12000
12000
N4k :D 4 k C 120
N4k 500
x4k :D
353207.5
500
353207.55
y4k :D BSLexact
, N4k ,
12000
12000
Figure 9.15.
1.0
145
Ton
Ton C Toff
N
D Ton
Ton C Toff
which is the average burst duration multiplied by the burst rate, (each
source produces one burst every cycle time, Ton C Toff ). This is the average
number of bursts in progress.
b
C
N
Ton C Toff
Nm
D
C
C
146
BURST-SCALE QUEUEING
CLPexcess-rate D e
X 1
3
N0
b 4
C1
hm
m
so Toff takes values of 0.2 second or 0.44 second respectively. The ON/OFF
source cycle times Ton C Toff are 0.24 s and 0.48 s, so the burst rates for
the equivalent Poisson source of bursts are 4.167 (i.e. 1/0.24) or 2.083
times the number of sources, N, respectively.
Figure 9.16 shows the effect of the buffer capacity on the excess-rate
cell loss when there are 60 sources, giving an offered load of 0.34. The
results for three types of source are shown: the two just described, and
147
Buffer capacity, X
100
100
200
300
400
101
102
k :D 1.. 40
BSDapprox NO, X, b,
:D e
xk :D k 10
NO
X 1
3
b 4
C1
353207.5
y1k :D BSDapprox
, xk , 960, 0.34
24000
353207.5
, xk , 480, 0.34
y2k :D BSDapprox
24000
353207.5
, xk , 480, 0.34
y3k :D BSDapprox
12000
Figure 9.16.
then the higher peak-rate source with an average active state duration
of half the original. This makes the average burst length, b, the same as
that for the lower-rate source. We can then make a fair assessment of the
impact of N0 , with b and
kept constant. It is clear then that as the peak
rate decreases, and therefore N0 increases, the buffer is better able to cope
with the excess-rate bursts.
Figure 9.17 shows how the two factors which make up the overall
cell loss probability are combined. The buffer capacity value was set at
400 cells. This corresponds to a maximum waiting time of 1.1 ms. The
burst-scale delay factor is shown for the two different peak rates as the
148
BURST-SCALE QUEUEING
Number of sources, N
30
60
90
120
150
100
101
102
103
104
105
106
burst scale delay
burst scale loss
combined (h = 24000 cell/s)
burst scale loss
burst scale delay
combined (h = 12000 cell/s)
107
108
109
1010
k :D 0..
120
xk :D k C 30
353207.55
xk 2000
y1k :D BSDapprox
, 400, 960,
24000
353207.5
2000
353207.55
y2k :D BSLexact
, xk ,
24000
24000
y3k :D y1k y2k
353207.55
xk 2000
y4k :D BSDapprox
, 400, 480,
12000
353207.5
2000
353207.55
y5k :D BSLexact
, xk ,
12000
12000
y6k :D y4k y5k
Figure 9.17.
Loss Factor
curves with markers only. These results tend to an excess-rate loss probability of 1 as the number of sources, and hence the offered load increases.
The burst-scale loss results from Figure 9.14 are shown as the lines without
markers. The overall cell loss probabilities are the product of the two
factors and are the results shown with both lines and markers. Notice that
the extra benefit gained by having a large buffer for burst-scale queueing
does not appear to be that significant, for the situation considered here.
10
Connection Admission
Control
the net that likes to say YES!
150
151
x
(cells)
101
102
103
104
105
106
107
108
109
1010
1011
1012
5
10
15
20
25
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
96.3%
99.9%
99.9%
99.9%
99.9%
99.9%
99.9%
99.9%
99.9%
99.9%
99.9%
99.9%
99.9%
99.9%
99.9%
99.9%
99.9%
99.9%
99.9%
99.9%
59.7%
85.2%
92.4%
95.6%
97.2%
98.2%
98.9%
99.4%
99.7%
99.9%
99.9%
99.9%
99.9%
99.9%
99.9%
99.9%
99.9%
99.9%
99.9%
99.9%
41.9%
71.2%
82.4%
87.7%
90.7%
92.7%
94.0%
95.0%
95.7%
96.3%
96.7%
97.1%
97.4%
97.7%
97.9%
98.1%
98.2%
98.4%
98.5%
98.6%
16.6%
60.1%
74.2%
81.3%
85.4%
88.2%
90.1%
91.5%
92.6%
93.5%
94.2%
94.7%
95.2%
95.6%
95.9%
96.2%
96.5%
96.7%
96.9%
97.1%
6.6%
50.7%
66.9%
75.5%
80.7%
84.1%
86.6%
88.4%
89.8%
90.9%
91.8%
92.6%
93.2%
93.7%
94.2%
94.6%
95.0%
95.3%
95.5%
95.8%
2.9%
42.7%
60.4%
70.2%
76.2%
80.3%
83.2%
85.4%
87.1%
88.5%
89.6%
90.5%
91.3%
92.0%
92.5%
93.0%
93.5%
93.9%
94.2%
94.5%
1.35%
35.8%
54.4%
65.2%
72.0%
76.7%
80.0%
82.6%
84.6%
86.2%
87.5%
88.6%
89.5%
90.3%
91.0%
91.5%
92.1%
92.5%
92.9%
93.3%
0.62%
29.9%
49.0%
60.5%
68.0%
73.2%
77.0%
79.8%
82.1%
83.9%
85.4%
86.7%
87.7%
88.6%
89.4%
90.1%
90.7%
91.2%
91.7%
92.1%
0.28%
24.9%
44.0%
56.2%
64.2%
69.9%
74.0%
77.2%
79.7%
81.7%
83.4%
84.8%
86.0%
87.0%
87.9%
88.6%
89.3%
89.9%
90.5%
91.0%
0.13%
20.7%
39.5%
52.1%
60.6%
66.7%
71.2%
74.6%
77.4%
79.6%
81.4%
83.0%
84.3%
85.4%
86.4%
87.2%
88.0%
88.7%
89.3%
89.8%
0.06%
17.1%
35.4%
48.2%
57.2%
63.6%
68.4%
72.1%
75.1%
77.5%
79.5%
81.2%
82.6%
83.8%
84.9%
85.9%
86.7%
87.4%
88.1%
88.7%
0.03%
14.2%
31.6%
44.6%
53.9%
60.7%
65.8%
69.7%
72.9%
75.5%
77.6%
79.4%
81.0%
82.3%
83.5%
84.5%
85.4%
86.2%
86.9%
87.6%
152
2x
2 x lnCLP
2 x ln
2x
min CLPi
iD1!nC1
153
x 1
2x N C
2xN
2 x N 2 x2 C N lnCLP
nC1
ln
2 x2
min CLPi
iD1!nC1
154
then we can load the link up to 100% with any mix of n C 1 CBR sources,
i.e. we can accept the connection provided that
n
hi
hnC1
C
1
C
C
iD1
Otherwise, if
2 x2
nC1>
ln
min CLPi
iD1!nC1
2 x n C 1
2 x n C 1 2
x2
C n C 1 ln
min CLPi
iD1!nC1
N
nDxC1
N!
n! N n!
nx
D
n
1
nx
D
Nn
DNCx
DnCx
155
Table 10.2. CAC Look-up Table for Deterministic Bit-Rate Transfer Capability:
Maximum Number of Sources for 100% Loading, Given Buffer Capacity and Cell
Loss Probability
23
89
200
353
550
790
1064
1389
1758
2171
2627
3126
3669
4256
4885
5558
6275
7035
7839
8685
11
45
100
176
275
395
537
701
886
1085
1313
1563
1834
2128
2442
2779
3137
3517
3919
4342
8
30
67
118
183
264
358
467
591
729
881
1042
1223
1418
1628
1852
2091
2345
2613
2895
6
23
50
89
138
198
269
351
443
547
661
786
922
1064
1221
1389
1568
1758
1959
2171
5
19
41
71
111
159
215
281
355
438
529
629
738
856
982
1111
1255
1407
1567
1737
5
5
5
16
14
13
34
30
26
60
52
45
92
80
70
133 114 100
180 155 136
234 201 176
296 254 223
365 313 275
441 379 332
525 450 394
616 528 462
714 612 536
819 702 615
931 799 699
1045 901 789
1172 1005 884
1306 1119 985
1447 1240 1085
5
12
24
41
63
89
121
157
198
244
295
351
411
477
547
622
702
786
876
970
5
11
22
37
57
81
109
142
179
220
266
316
371
429
493
560
632
708
788
873
5
11
20
34
52
74
100
129
163
201
242
288
337
391
448
510
575
644
717
794
5
10
19
32
48
68
92
119
150
185
223
264
310
359
411
468
527
591
658
729
156
Table 10.3. (a) Maximum Admissible Load for a Buffer Capacity of 10 Cells, Given Number of
Sources and Cell Loss Probability
1
10
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
98.9%
98.0%
93.5%
92.0%
91.3%
90.9%
90.6%
90.4%
90.3%
90.2%
90.1%
2
10
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
96.2%
93.8%
90.9%
88.9%
88.2%
87.0%
83.0%
81.7%
81.1%
80.8%
80.5%
80.4%
80.2%
80.1%
80.1%
3
10
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
88.9%
84.8%
82.2%
80.5%
78.4%
77.6%
76.9%
73.5%
72.3%
71.8%
71.6%
71.5%
71.4%
71.2%
71.1%
71.0%
4
10
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
85.7%
78.4%
74.6%
72.3%
70.7%
69.0%
68.2%
67.6%
64.7%
63.7%
63.3%
63.1%
62.8%
62.7%
62.6%
62.5%
62.5%
5
10
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
95.2%
75.0%
69.0%
64.9%
63.2%
61.4%
60.6%
60.0%
59.2%
56.7%
56.0%
55.6%
55.3%
55.2%
55.1%
55.0%
54.9%
54.9%
106
107
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
94.4%
85.7%
82.6%
80.0%
65.2%
59.7%
56.8%
55.1%
53.9%
53.0%
52.3%
51.8%
49.5%
48.9%
48.5%
48.4%
48.2%
48.1%
48.1%
48.0%
48.0%
100.0%
100.0%
100.0%
100.0%
100.0%
93.8%
84.2%
81.0%
75.0%
73.1%
69.0%
56.6%
51.3%
49.0%
47.6%
46.7%
46.0%
45.5%
44.8%
43.1%
42.6%
42.3%
42.1%
42.0%
41.9%
41.9%
41.8%
41.8%
108
109
1010
1011
1012
appropriate, using the mean cell rate, mi , instead of the peak cell rate hi ,
to calculate the load in the inequality test; i.e. if
n
mnC1
mi
C
C
C
iD1
2 x ln
2x
min CLPi
iD1!nC1
157
Table 10.3. (b) Maximum Admissible Load for a Buffer Capacity of 50 Cells, Given Number of
Sources and Cell Loss Probability
1
10
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
2
10
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
3
10
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
99.3%
98.6%
98.1%
4
5
10
10
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
99.8%
99.0%
97.9%
97.0%
96.3%
95.7%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
99.6%
98.4%
97.5%
96.6%
95.5%
94.7%
94.0%
93.5%
106
107
108
109
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
98.5%
97.2%
96.0%
95.2%
94.3%
93.2%
92.4%
91.7%
91.2%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
98.0%
96.2%
94.7%
93.6%
92.8%
92.0%
90.9%
90.2%
89.6%
89.1%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
99.3%
98.0%
95.6%
93.9%
92.4%
91.4%
90.5%
89.8%
88.7%
87.9%
87.4%
86.9%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
98.5%
96.9%
95.5%
93.1%
91.5%
90.2%
89.1%
88.3%
87.6%
86.5%
85.8%
85.2%
84.8%
1010
1011
1012
8
10
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
9
10
100.0%
100.0%
100.0%
100.0%
100.0%
100.0%
99.6%
1010
1011
1012
100.0%
100.0%
100.0%
100.0%
99.6%
99.0%
98.4%
100.0%
100.0%
99.9%
99.1%
98.4%
97.7%
97.2%
100.0%
99.5%
98.6%
97.8%
97.2%
96.5%
96.0%
Nm
C
158
e
CLP
1 2 N0
bN0 c!
How can we use this formula in a connection admission control algorithm? In a similar manner to Erlangs lost call formula, we must use the
formula to produce a table which allows us, in this case, to specify the
required cell loss probability and the source peak cell rate and find out
the maximum allowed utilization. We can then calculate the maximum
number of sources of this type (with mean cell rate m) that can be accepted
using the formula
C
ND
m
Table 10.4 does not directly use the peak cell rate, but, rather, the number
of peak cell rates which fit into the service capacity, i.e. the parameter N0 .
Example peak rates for the standard service capacity of 353 208 cell/s are
shown.
So, if we have a source with a peak cell rate of 8830.19 cell/s (i.e.
3.39 Mbit/s) and a mean cell rate of 2000 cell/s (i.e. 768 kbit/s), and we
want the CLP to be no more than 1010 , then we can accept
Table 10.4. Maximum Admissible Load for Burst-Scale Constraint
h
(cell/s)
N0
101
102
103
104
105
106
107
108
109
35 320.76
17 660.38
11 773.59
8 830.19
7 064.15
5 886.79
5 045.82
4 415.09
3 924.53
3 532.08
1 766.04
1 177.36
883.02
706.42
588.68
504.58
10
20
30
40
50
60
70
80
90
100
200
300
400
500
600
700
72.1%
82.3%
86.5%
88.9%
90.5%
91.7%
92.5%
93.2%
93.7%
94.2%
96.4%
97.3%
97.8%
98.1%
98.4%
98.5%
52.3%
67.0%
73.7%
77.8%
80.5%
82.5%
84.1%
85.3%
86.3%
87.2%
91.7%
93.6%
94.7%
95.4%
95.9%
96.3%
37.9%
54.3%
62.5%
67.5%
71.1%
73.7%
75.8%
77.4%
78.8%
80.0%
86.4%
89.2%
90.8%
91.9%
92.7%
93.3%
28.1%
44.9%
53.8%
59.5%
63.5%
66.6%
69.0%
71.0%
72.6%
74.0%
81.8%
85.3%
87.4%
88.8%
89.9%
90.7%
21.2%
37.7%
46.9%
53.0%
57.4%
60.8%
63.5%
65.7%
67.5%
69.1%
78.0%
82.0%
84.5%
86.2%
87.4%
88.4%
16.2%
32.0%
41.4%
47.7%
52.4%
55.9%
58.8%
61.2%
63.2%
64.9%
74.7%
79.2%
82.0%
83.9%
85.3%
86.4%
12.5%
27.4%
36.8%
43.3%
48.1%
51.8%
54.8%
57.3%
59.5%
61.3%
71.8%
76.8%
79.8%
81.9%
83.4%
84.7%
9.7%
23.6%
32.9%
39.4%
44.3%
48.2%
51.3%
54.0%
56.2%
58.1%
69.3%
74.6%
77.8%
80.1%
81.8%
83.1%
7.6%
20.5%
29.6%
36.1%
41.1%
45.0%
48.3%
51.0%
53.3%
55.3%
67.0%
72.6%
76.0%
78.4%
80.2%
81.7%
5.9%
17.8%
26.7%
33.2%
38.2%
42.2%
45.5%
48.3%
50.6%
52.7%
64.9%
70.7%
74.4%
76.9%
78.8%
80.3%
4.7%
15.6%
24.1%
30.6%
35.6%
39.6%
43.0%
45.8%
48.2%
50.4%
62.9%
69.0%
72.8%
75.5%
77.5%
79.1%
3.7%
13.6%
21.9%
28.2%
33.2%
37.3%
40.7%
43.6%
46.0%
48.2%
61.1%
67.5%
71.4%
74.2%
76.3%
78.0%
159
ND
where is chosen (as a function of CLP and N0 ) from Table 10.4 in the
manner described previously.
n
C
mi
mnC1
C
min CLPi ,
iD1!nC1
C
C
max
h
i
iD1
iD1!nC1
160
C h
N0
h
C
D
D
hD
N
h N
N
G
Two-level CAC
One of these approaches, aimed at simplifying CAC, is to divide the
sources into classes and partition the service capacity so that each source
161
162
calculate:
40 475
N0 x
D
D 53.79
b
0.04 8830.19
58 2000
D 0.328
353 208
So we can calculate the CLP gain due to the burst-scale delay factor:
CLPexcess-rate D e
X 13
N0 b 4C1
D 8.58 104
Thus there is a further CLP gain of about 103 , i.e. an overall CLP of
about 1013 .
Although the excess-rate cell loss is an exponential function, which can
thus be rearranged fairly easily, we will use a tabular approach because
it clearly illustrates the process required. Table 10.5 specifies the CLP
and the admissible load in order to find a value for N0 x/b (this was
introduced in Chapter 9 as the size of a buffer in units of excess-rate
bursts). The CLP target is 1010 . By how much can the load be increased
so that the overall CLP meets this target? Looking down the 102 column
of Table 10.5, we find that the admissible load could increase to a value
of nearly 0.4. Then, we check in Table 10.4 to see that the burst-scale loss
contribution for a load of 0.394 is 108 . Thus the overall CLP meets our
target of 1010 .
The number of connections that can be accepted is now
ND
0.02
0.04
0.06
0.08
0.10
0.12
0.14
0.16
0.18
0.20
0.22
0.24
0.26
0.28
0.30
0.32
0.34
0.36
0.38
0.40
0.42
0.44
0.46
0.48
0.50
0.52
0.54
load,
10
2.6
3.0
3.4
3.9
4.4
5.0
5.6
6.4
7.2
8.1
9.1
10.3
11.6
13.1
14.8
16.7
18.9
21.4
24.3
27.7
31.6
36.2
41.5
47.8
55.3
64.1
74.8
1
5.3
6.0
6.9
7.8
8.8
10.0
11.3
12.7
14.4
16.2
18.2
20.6
23.2
26.2
29.5
33.4
37.8
42.9
48.7
55.4
63.3
72.4
83.1
95.6
110.5
128.3
149.5
10
2
7.9
9.1
10.3
11.7
13.3
15.0
16.9
19.1
21.5
24.3
27.4
30.8
34.8
39.2
44.3
50.1
56.7
64.3
73.0
83.1
94.9
108.6
124.6
143.5
165.8
192.4
224.3
10
3
10.6
12.1
13.8
15.6
17.7
20.0
22.6
25.5
28.7
32.4
36.5
41.1
46.4
52.3
59.1
66.8
75.6
85.7
97.4
110.9
126.5
144.8
166.1
191.3
221.0
256.5
299.0
10
4
13.2
15.1
17.2
19.5
22.1
25.0
28.2
31.9
35.9
40.5
45.6
51.4
58.0
65.4
73.8
83.5
94.5
107.2
121.7
138.6
158.1
180.9
207.6
239.1
276.3
320.6
373.8
10
5
15.9
18.1
20.6
23.4
26.5
30.0
33.9
38.2
43.1
48.6
54.7
61.7
69.6
78.5
88.6
100.2
113.4
128.6
146.1
166.3
189.8
217.1
249.2
286.9
331.6
384.8
448.5
106
18.5
21.1
24.1
27.3
31.0
35.0
39.5
44.6
50.3
56.7
63.9
72.0
81.1
91.5
103.4
116.9
132.3
150.0
170.4
194.0
221.4
253.3
290.7
334.7
386.8
448.9
523.3
107
21.1
24.2
27.5
31.2
35.4
40.0
45.2
51.0
57.5
64.8
73.0
82.2
92.7
104.6
118.2
133.6
151.2
171.5
194.8
221.7
253.0
289.5
332.2
382.5
442.1
513.0
598.0
108
23.8
27.2
30.9
35.1
39.8
45.0
50.8
57.3
64.6
72.9
82.1
92.5
104.3
117.7
132.9
150.3
170.1
192.9
219.1
249.4
284.6
325.7
373.8
430.4
497.4
577.1
672.8
109
26.4
30.2
34.4
39.0
44.2
50.0
56.5
63.7
71.8
81.0
91.2
102.8
115.9
130.8
147.7
167.0
189.0
214.3
243.5
277.2
316.3
361.9
415.3
478.2
552.6
641.3
747.5
1010
Table 10.5. Burst-Scale Delay Factor Table for Values of N0 x/b, Given Admissible Load and CLP
31.7
36.2
41.3
46.8
53.1
60.0
67.8
76.5
86.2
97.1
109.5
123.4
139.1
156.9
177.2
200.4
226.8
257.2
292.2
332.6
379.5
434.3
498.3
573.8
663.1
769.5
897.0
1012
(continued overleaf )
29.1
33.2
37.8
42.9
48.6
55.0
62.1
70.1
79.0
89.0
100.3
113.1
127.5
143.9
162.5
183.7
207.9
235.8
267.8
304.9
347.9
398.1
456.8
526.0
607.9
705.4
822.3
1011
163
0.56
0.58
0.60
0.62
0.64
0.66
0.68
0.70
0.72
0.74
0.76
0.78
0.80
0.82
0.84
0.86
0.88
0.90
0.92
0.94
0.96
0.98
load,
10
2
10
3
10
4
10
5
106
107
109
1010
1011
1012
87.6
175.2
262.7
350.3
437.9
525.5
613.1
700.6
788.2
875.8
963.4
1 051.0
103.2
206.4
309.5
412.7
515.9
619.1
722.3
825.5
928.6
1 031.8
1 135.0
1 238.2
122.3
244.6
367.0
489.3
611.6
733.9
856.3
978.6
1 100.9
1 223.2
1 345.6
1 467.9
146.0
292.1
438.1
584.1
730.2
876.2
1 022.2
1 168.2
1 314.3
1 460.3
1 606.3
1 752.4
175.7
351.4
527.1
702.8
878.5
1 054.2
1 229.9
1 405.6
1 581.3
1 756.9
1 932.6
2 108.3
213.2
426.5
639.7
853.0
1 066.2
1 279.5
1 492.7
1 706.0
1 919.2
2 132.5
2 345.7
2 558.9
261.4
522.8
784.2
1 045.6
1 307.0
1 568.4
1 829.8
2 091.2
2 352.6
2 614.0
2 875.4
3 136.8
324.1
648.1
972.2
1 296.3
1 620.3
1 944.4
2 268.5
2 592.5
2 916.6
3 240.7
3 564.7
3 888.8
407.0
814.0
1 220.9
1 627.9
2 034.9
2 441.9
2 848.9
3 255.8
3 662.8
4 069.8
4 476.8
4 883.8
518.8
1 037.6
1 556.4
2 075.2
2 593.9
3 112.7
3 631.5
4 150.3
4 669.1
5 187.9
5 706.7
6 225.5
672.9
1 345.8
2 018.8
2 691.7
3 364.6
4 037.5
4 710.4
5 383.4
6 056.3
6 729.2
7 402.1
8 075.0
890.9
1 781.9
2 672.8
3 563.7
4 454.7
5 345.6
6 236.5
7 127.5
8 018.4
8 909.3
9 800.3
10 691.2
1 208.9
2 417.7
3 626.6
4 835.4
6 044.3
7 253.1
8 462.0
9 670.9
10 879.7
12 088.6
13 297.4
14 506.3
1 689.8
3 379.7
5 069.5
6 759.3
8 449.1
10 139.0
11 828.8
13 518.6
15 208.4
16 898.3
18 588.1
20 277.9
2 451.0
4 902.0
7 353.0
9 804.0
12 255.0
14 706.0
17 157.0
19 608.0
22 058.9
24 509.9
26 960.9
29 411.9
3 725.8
7 451.5
11 177.3
14 903.0
18 628.8
22 354.5
26 080.3
29 806.1
33 531.8
37 257.6
40 983.3
44 709.1
6 023.0
12 045.9
18 068.9
24 091.9
30 114.8
36 137.8
42 160.8
48 183.7
54 206.7
60 229.7
66 252.6
72 275.6
10 591.9
21 183.8
31 775.7
42 367.6
52 959.5
63 551.3
74 143.2
84 735.1
95 327.0
1 05 918.9
1 16 510.8
1 27 102.7
21 047.1
42 094.1
63 141.2
84 188.3 1 05 235.3 1 26 282.4 1 47 329.5
1 68 376.5
1 89 423.6
2 10 470.7
2 31 517.7
2 52 564.8
50 742.2 1 01 484.3 1 52 226.5 2 02 968.6 2 53 710.8 3 04 452.9 3 55 195.1
4 05 937.2
4 56 679.4
5 07 421.5
5 58 163.7
6 08 905.8
1 74 133.0 3 48 266.0 5 22 399.0 6 96 532.0 8 70 665.0 10 44 798.0 12 18 931.0 13 93 064.0 15 67 197.0 17 41 330.0 19 15 463.0 20 89 596.0
14 16 089.8 28 32 179.7 42 48 269.5 56 64 359.3 70 80 449.2 84 96 539.0 99 12 628.8 11 328 718.7 12 744 808.5 14 160 898.3 15 576 988.2 16 993 078.0
10
1
164
CONNECTION ADMISSION CONTROL
165
2.
rate-envelope multiplexing
3.
The first corresponds to peak rate allocation, i.e. the deterministic bit-rate
transfer capability, and deals with the cell-scale queueing behaviour.
In this book we have considered two different algorithms, based on
either the M/D/1 or ND/D/1 systems. The second and third operating
principles allow for the statistical multiplexing of variable bit-rate streams
and are two approaches to providing the statistical bit-rate transfer
capability. Rate envelope multiplexing is the term for what we have
called the burst-scale loss factor, i.e. it is the bufferless approach. The
term arises because the objective is to keep the total input rate to within
the service rate; any excess rate is assumed to be lost. Rate sharing
corresponds to the combined burst-scale loss and delay factors, i.e. it
assumes there is a large buffer available to cope with the excess cell
rates. It allows higher admissible loads, but the penalty is greater delay.
Thus the objective is not to limit the combined cell rate, but to share
the service capacity by providing sufficient buffer space to absorb the
excess-rate cells.
These three different operating principles require different traffic
parameters to describe the source traffic characteristics. DBR requires
just the peak cell rate of the source. Rate envelope multiplexing additionally needs the mean cell rate, and rate sharing requires peak cell rate,
mean cell rate and some measure of burst length. The actual parameters
depend on the CAC policy and what information it uses. But there is one
166
11
the ability to detect any traffic situation that does not conform to the
traffic contract,
a rapid response to violations of the traffic contract, and
168
But are all these features possible in one algorithm? Lets recall what
parameters we want to check. The most important one is the peak cell
rate; it is needed for both deterministic and statistical bit-rate transfer
capabilities. For SBR, the traffic contract also contains the mean cell
rate (for rate envelope multiplexing). With rate-sharing statistical multiplexing, the burst length is additionally required. Before we look at a
specific algorithm, lets consider the feasibility of controlling the mean
cell rate.
Tk T
e
k!
10 1 11 1
e
e D 0.2642
0!
1!
Thus this strict mean cell rate control would reject one or more cells from
a well-behaved Poisson source in 26 out of every 100 time units. What
proportion of the number of cells does this represent? Well, we know that
the mean number of cells per time unit is 1, and this can also be found by
summing the probabilities of there being k cells weighted by the number
of cells, k, i.e.
mean number of cells D 1 D 0
10 1
11 1
12 1
e C1
e C2
e
0!
1!
2!
C C k
1k 1
e C
k!
When there are k 1 cell arrivals in a time unit, then one cell is allowed
on to the network and k 1 are rejected. Thus the proportion of cells
being allowed on to the network is
1
1
1k 1
11 1
1
e C
e
1!
k!
kD2
D 0.6321
169
which means that almost 37% of cells are being rejected although the
traffic contract is not being violated.
There are two options open to us: increase the maximum number of
cells allowed into the network per time unit or increase the measurement
interval to many time units. The object is to decrease this proportion of
cells being rejected to an acceptably low level, for example 1 in 1010 .
Lets define j as the maximum number of cells allowed into the network
during time interval T. The first option requires us to find the smallest
value of j for which the following inequality holds:
1
Tk T
e
k j
k!
kDjC1
1010
T
where, in this case, the mean cell rate of the source, , is 1 cell per time
unit, and the measurement interval, T, is 1 time unit. Table 11.1 shows
the proportion of cells rejected for a range of values of j.
To meet our requirement of no more than 1 in 1010 cells rejected for
a Poisson source of mean rate 1 cell per time unit, we must accept up
to 12 cells per time unit. If the Poisson source doubles its rate, then our
limit of 12 cells per time unit would result in 1.2 107 of the cells being
rejected. Ideally we would want 50% of the cells to be rejected to keep
the source to its contracted mean of 1 cell per time unit. If the Poisson
source increases its rate to 10 cells per time unit, then 5.3% of the cells are
Table 11.1. Proportion of Cells Rejected when no more than j cells
Are Allowed per Time Unit
1 cell/time unit
2 cells/time unit
10 cells/time unit
3.68E-01
1.04E-01
2.33E-02
4.35E-03
6.89E-04
9.47E-05
1.15E-05
1.25E-06
1.22E-07
1.09E-08
9.00E-10
6.84E-11
4.84E-12
3.20E-13
1.98E-14
5.68E-01
2.71E-01
1.09E-01
3.76E-02
1.12E-02
2.96E-03
6.95E-04
1.47E-04
2.82E-05
4.96E-06
8.03E-07
1.21E-07
1.69E-08
2.21E-09
2.71E-10
9.00E-01
8.00E-01
7.00E-01
6.01E-01
5.04E-01
4.11E-01
3.24E-01
2.46E-01
1.79E-01
1.25E-01
8.34E-02
5.31E-02
3.22E-02
1.87E-02
1.03E-02
170
rejected, and hence over 9 cells per time unit are allowed through. Thus
measurement over a short interval means that either too many legitimate
cells are rejected (if the limit is small) or, for cells which violate the
contract, not enough are rejected (when the limit is large).
Lets now extend the measurement interval. Instead of tabulating for
all values of j, the results are shown in Figure 11.1 for two different time
intervals: 10 time units and 100 time units. For the 1010 requirement, j
is 34 (for T D 10) and 163 (for T D 100), i.e. the rate is limited to 3.4 cells
per time unit, or 1.63 cells per time unit over the respective measurement
intervals. So, as the measurement interval increases, the mean rate is
being more closely controlled. The problem now is that the time taken to
Maximum number of cells, j, allowed in T time units
0
10
20
40
60
80
100
120
140
180
200
10 1
160
10 2
10 3
10 4
10 5
10 6
10 7
10 8
10 9
10 10
k :D 1.. 200
max j
k j dpoisk, T
kDjC1
xk :D k
y1k :D Propreject 100, 1, xk , 250
y2k :D Propreject 10, 1, xk , 250
T
Figure 11.1. Proportion of Cells Rejected for Limit of j Cells in T Time Units
171
0
10 0
10 1
10 2
10 3
log T :D 0.. 30
Findj , T, reject, max j :D j
ceil T
while Propreject T, , j, max j C j > reject
j
jC1
j
log T
xlog T :D 10 10
Findj 1, xlog T , 1010 , 500
y1log T :D
xlog T
y2log T :D 1
Figure 11.2.
172
173
Cell
stream
Counter
value 5
Bucket
limit
3
2
1
0
Leak
rate
10 11 12
traffic source generates a burst of cells at a rate higher than the leak rate,
the bucket begins to fill. Provided that the burst is short, the bucket will
not fill up and no action will be taken against the cell stream. If a long
enough burst of cells arrives at a rate higher than the leak rate, then the
bucket will eventually overflow. In this case, each cell that arrives to find
the counter at its maximum value is deemed to be in violation of the
traffic contract and may be discarded or tagged by changing the CLP
bit in the cell header from high to low priority. Another possible course
of action is for the connection to be released.
In Figure 11.3, the counter has a value of 2 at the start of the sequence.
The leak rate is one every four cell slots and the traffic source being
monitored is in a highly active state sending cells at a rate of 50% of
the cell slot rate. It is not until the tenth cell slot in the sequence that a
cell arrival finds the bucket on its limit. This non-conforming cell is then
subject to discard or tagging. An important point to note is that the cells
do not pass through the bucket, as though queueing in a buffer. Cells do
not queue in the bucket, and therefore there is no variable delay through
a leaky bucket. However, the operation of the bucket can be analysed as
though it were a buffer with cells being served at the leak rate. This then
allows us to find the probability that cells will be discarded or tagged by
the UPC function.
174
T 23 = 10 cell slots
2
T 12 = 14 cell slots
1
T 23 = 6 cell slots
2
3
Time
175
Lets see how the leaky bucket would work in this situation. First, we
must alter slightly our leaky bucket algorithm so that it can deal with any
values of T (the inter-arrival time at the peak cell rate) and (the CDV
tolerance). The leaky bucket counter works with integers, so we need to
find integers k and n such that
Dk
T
n
i.e. the inter-arrival time at the peak cell rate is divided into n equal parts,
with n chosen so that the CDV tolerance is an integer multiple, k, of T/n.
Then we operate the leaky bucket in the following way: the counter is
incremented by n (the splash) when a cell arrives, and it is decremented
at a leak rate of n/T. If the addition of a splash takes the counter above its
limit of k C n, then the cell is in violation of the contract and is discarded
or tagged. If the counter value is greater than n but less than or equal to
k C n, then the cell is within the CDV tolerance and is allowed to enter
the network.
Figure 11.5 shows how the counter value changes for the three cell
arrivals of the example of Figure 11.4. In this case, n D 10, k D 4, the leak
rate is equal to the cell slot rate, and the leaky bucket limit is k C n D 14.
We assume that, when a cell arrives at the same time as the counter is
decremented, the decrement takes place first, followed by the addition of
the splash of n. Thus in the example shown the counter reaches, but does
not exceed, its limit at the arrival of cell number 3. This is because the
inter-arrival time between cells 2 and 3 has suffered the maximum CDV
permitted in the traffic contract which the leaky bucket is monitoring.
Figure 11.6 shows what happens for the case when cell number 2 is
delayed by 5 cell slots rather than 4 cell slots. The counter exceeds its
limit when cell number 3 arrives, and so that cell must be discarded
because it has violated the traffic contract.
Bucket
limit
15
Counter
value
10
3
1
0
CDV = 4 cell slots
Figure 11.5. Example of Cell Stream with CDV within the Tolerance
176
15
Counter
value
Traffic
contract
violation
Bucket
limit
3
10
0
CDV = 5 cell slots
Figure 11.6.
The same principle applies if the tolerance, , exceeds the peak rate
inter-arrival time, T, i.e. k > n. In this case it will take a number of
successive cells with inter-arrival times less than T for the bucket to build
up to its limit. Note that this extra parameter, the CDV tolerance, is an
integral part of the traffic contract and must be specified in addition to
the peak cell rate.
177
Maximum
burst size
1 2 3 4 5 6
1 2 3 4 5 6
Figure 11.7.
Time
first n units in the leaky bucket effectively act as the service space for a
splash. These n units are required for the leaky bucket to operate correctly
on a peak cell rate whether or not there is any CDV tolerance. Thus it is
in the extra k units where a queue forms, and so the leaky bucket limit of
k C n is equivalent to the system capacity.
So, we can analyse the formation of a queue by considering the time
taken, tMBS , for an excess rate to fill up the leaky buckets queueing space, k:
tMBS D
queueing space
k
D
excess rate
n CSR n PCR
where CSR is the cell slot rate, and PCR is the peak cell rate. We also
know that
n
k D D n PCR
T
so, substituting for k gives
tMBS D
n PCR
PCR
D
n CSR PCR
CSR PCR
The maximum burst size is found by multiplying tMBS by the cell slot rate
and adding one for the first cell in the burst which fills the server space, n:
CSR PCR
CSR PCR
which, in terms of the inter-arrival time at the peak rate, T, and the cell
slot time, , is
MBS D 1 C
T
178
20
D 6 cells
MBS D 1 C
51
The time taken, tcycle , for the leaky bucket to go through one complete
cycle of filling, during the maximum burst, and then emptying during
the silence period, is given by
n MBS D n PCR tcycle
where the left-hand side gives the total number of units by which the
leaky bucket is incremented, and the right-hand side gives the total
number by which it is decremented. The total number of cell slots in a
complete cycle is CSR tcycle . It is necessary to round this up to the nearest
integer number of cell slots to ensure that the leaky bucket empties
completely, so the number of empty cell slots is
MBS
MBS
ECS D CSR
PCR
179
2 X C D
lnCLP
D 2
X
with the proviso that the load can never exceed a value of 1. This formula
applies to the CBR cell streams. For the worst-case streams, we just
replace X by X/MBS to give:
X
2
CD
MBS
D
MBS lnCLP
D 2
X
where
/
D1C
MBS D 1 C
T
D1
Note that D is just the inter-arrival time, T, in units of the cell slot time, .
180
35 321
32 110
29 434
27 170
25 229
23 547
22 075
20 777
19 623
18 590
17 660
16 819
16 055
15 357
14 717
14 128
13 585
13 082
12 615
12 180
11 774
10 092
8 830
7 849
7 064
6 422
5 887
5 434
5 046
4 709
4 415
4 155
3 925
3 718
3 532
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
9
9
12
13
13
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
6
6
8
8
11
11
12
15
15
16
16
17
22
23
24
24
25
25
26
27
27
35
40
45
50
54
58
65
70
75
80
85
90
95
100
5
5
6
7
8
9
10
10
13
13
13
14
17
18
19
19
20
20
26
27
27
30
33
45
50
54
58
61
65
68
71
85
90
95
100
3
4
5
6
7
7
8
9
11
11
11
12
14
15
15
16
16
20
21
21
22
30
33
36
39
54
58
61
65
68
71
75
78
82
85
181
slots (corresponding to time values of 0.057, 0.113, 0.17, 0.226 and 0.283 ms
respectively). The peak cell rates being monitored vary from 1% up to 10%
of the cell slot rate. If the CDV tolerance is zero, then in this case the link
can be loaded to 100% of capacity for each of the peak cell rates shown.
Figure 11.8 plots the data of Table 11.2 as the admissible load against the
monitored peak cell rate. Note that when the CDV is relatively small (e.g.
40 cell slots or less), then there is little or no reduction in the admissible
Admissible load
100.0
9
8
7
6
5
4
3
0.057 ms
0.113 ms
0.170 ms
0.226 ms
0.283 ms
101.0
0
10000
20000
Peak cell rate
30000
40000
k :D 1.. 35
1
max
X, CSR, D, , CLP :D
CSR
MaxBS
1 C floor
D
1
X
2
CD
MaxBS
MaxBS lnCLP
D 2
X
floorD
N
N
D if N > D
N
D
Figure 11.8. Admissible Load for CBR Sources with Different CDV Tolerances
182
Dk :D k C 9 if k < 22
k 21 5 C 30 otherwise
CSR :D 353207.5
CSR
PCRk :D
Dk
CLP :D 1010
20
y1k :D max
50, CSR, Dk ,
, CLP
CSR
40
, CLP
y2k :D max
50, CSR, Dk ,
CSR
60
, CLP
y3k :D max
50, CSR, Dk ,
CSR
80
, CLP
y4k :D max
50, CSR, Dk ,
CSR
100
, CLP
y5k :D max
50, CSR, Dk ,
CSR
Figure 11.8. (continued)
load in this example. The CDV in the access network may well be of this
order, particularly if the access network utilization is low and buffers are
dimensioned to cope with only cell-scale and not burst-scale queueing.
Traffic shaping
One solution to the problem of worst-case traffic is to introduce a spacer
after the leaky bucket in order to enforce a minimum time between
cells, corresponding to the particular peak cell-rate being monitored by
the leaky bucket. Alternatively, this spacer could be implemented before
the leaky bucket as per-VC queueing in the access network. Spacing
is performed only on those cells that conform to the traffic contract;
this prevents the bunching together of cells (whether of the worst-case
traffic or caused by variation in cell delay within the CDV tolerance of the
traffic contract). However, spacing introduces extra complexity, which
is required on a per-connection basis. The leaky bucket is just a simple
countera spacer requires buffer storage and introduces delay.
183
one to control the peak cell rate, the other to control a virtual mean
cell rate. In ITU Recommendation I.371 the term used for this virtual
mean is sustainable cell rate (SCR). With two leaky buckets, the effect
of the CDV tolerance on the peak-cell-rate leaky bucket is not so severe.
The reason is that the leaky bucket for the sustainable cell rate limits
the number of worst-case bursts that can pass through the peak-cell-rate
leaky bucket. For each ON/OFF cycle at the cell slot rate the SCR leakybucket level increases by a certain amount. When the SCR leaky bucket
reaches its limit, the ON/OFF cycles must stop until the SCR counter
has returned to zero. So the maximum burst size is still determined by
the PCR leaky-bucket parameter values, but the overall mean cell rate
allowed onto the network is limited by the sustainable cell rate rather
than the peak cell rate.
This dual leaky bucket arrangement is called the leaky cup and
saucer. The cup is the leaky bucket for the sustainable cell rate: it is a
deep container with a base of relatively small diameter. The saucer is
the leaky bucket for the peak cell rate: it is a shallow container with a
large-diameter base. The depth corresponds to the bucket limit and the
diameter of the base to the cell rate being controlled.
The worst-case traffic is shown in Figure 11.9(a). The effect of the leaky
buckets is to limit the number of cells over different time periods. For
the example in the figure, the saucer limit is 2 cells in 4 cell slots and
the cup limit is 6 cells in 24 cell slots. An alternative worst-case traffic
which is adopted in ITU Recommendation E.736 [10.3] is an ON/OFF
source with maximum-length bursts at the peak cell rate rather than at
the cell slot rate. An example of this type of worst-case traffic is shown
in Figure 11.9(b). Note that the time axis is in cell slots, so the area under
the curve is equal to the number of cells sent.
The maximum burst size at the peak cell rate is obtained in a similar
way to that at the cell slot rate, i.e.
MBS D 1 C
IBT
TSCR TPCR
where IBT is called the intrinsic burst tolerance. This is another important
parameter in the traffic contract (in addition to the inter-arrival times TSCR
and TPCR for the sustainable and peak cell rates respectively). The purpose
of the intrinsic burst tolerance is in fact to specify the burst length limit
in the traffic contract.
Two CDV tolerances are specified in the traffic contract. We are already
familiar with the CDV tolerance, , for the peak cell rate. From now on we
call this PCR to distinguish it from the CDV tolerance for the sustainable
0
. This latter has to be added to the intrinsic burst tolerance
cell rate, SCR
in order to determine the counter limit for the cup. As before, we need to
184
Cell rate
0.75
0.5
0.25
0
(a)
5
10
Saucer empties
15
Cell rate
30
Time
0.75
0.5
0.25
0
20
25
Cup empties
10
15
(b)
20
25
Cup empties
30
Time
TSCR
n
In most cases, n can be set to 1 because the intrinsic burst tolerance will
be many times larger than TSCR .
185
X
CD
2
MBS
D
MBS lnCLP
D 2
X
A graph of utilization against the maximum burst size is shown in
Figure 11.10. The CLP requirement varies from 104 down to 1010 . The
1.0
Admissible load
CLP = 1e04
CLP = 1e06
CLP = 1e08
CLP = 1e10
0.5
0.0
0
10
20
30
Maximum burst size
k :D 1.. 50
2
max
CS X, D, MaxBS, CLP :D
40
X
CD
MaxBS
50
MaxBS lnCLP
D 2
X
MBSk :D k
y1k :D max
CS 50, 100, MBSk , 104
y2k :D max
CS 50, 100, MBSk , 106
y3k :D max
CS 50, 100, MBSk , 108
y4k :D max
CS 50, 100, MBSk , 1010
Figure 11.10. Admissible Load for Worst-Case Traffic through Leaky Cup and
Saucer
186
link buffer has a capacity of 50 cells, the cell slot rate is 353 208 cell/s,
and the sustainable cell rate is chosen to be 3532 cell/s, i.e. D D 100. The
maximum burst size allowed by the leaky cup and saucer is varied from
1 up to 50 cells. The peak cell rate and intrinsic burst tolerance are not
specified explicitly; different combinations can be calculated from the
maximum burst size and the sustainable cell rate.
It is important to use the correct value of MBS because this obviously
can have a significant effect on the admissible load. Suppose that the
peak cell rate is twice the sustainable cell rate, i.e. TPCR D TSCR /2. The
maximum burst size at the peak cell rate is
MBSPCR
D1C
IBT
D 1 C 2 IBT
TSCR
T
TSCR
SCR
12
Dimensioning
real networks dont lose cells?
188
DIMENSIONING
CLPbsl
load
30
60
90
4.46E-10
1.11E-05
9.10E-04
0.17
0.34
0.51
Buffer capacity, X
0
10
20
30
40
50
60
70
80
90
100
100
101
102
N = 90
N = 60
N = 30
103
104
105
106
107
108
109
1010
Nm
OverallCLP X, N, m, h, C, b :D
C
C
N0
h
m
h
for i 2 0 .. X
ai
Poisson i,
finiteQloss X, a,
csloss
bsloss
BSLexact , N, N0 BSDapprox N0, X, b,
csloss C bsloss
xk :D k
y1k :D OverallCLP k , 90 , 2000 , 24000 , 353207.5 , 480
y2k :D OverallCLP k , 60 , 2000 , 24000 , 353207.5 , 480
y3k :D OverallCLP k , 30 , 2000 , 24000 , 353207.5 , 480
k :D 2.. 100
Figure 12.1.
Overall Cell Loss Probability against Buffer Capacity for N VBR Sources
189
10
20
30
40
50
60
70
80
90
100
100
101
102
103
104
105
106
107
108
N = 170
N = 150
N = 120
109
1010
k :D 0.. 50
xk :D k
170 2000
353207. 5
150 2000
y2k :D NDD1Q k , 150 ,
353207. 5
120 2000
y3k :D NDD1Q k , 120 ,
353207. 5
y1k :D NDD1Q
k , 170 ,
Figure 12.2. Cell Loss Probability against Buffer Capacity for N CBR Sources
190
DIMENSIONING
Buffer capacity, X
0
10
20
30
40
50
60
70
80
90
100
100
load = 0.96
load = 0.85
load = 0.68
101
102
103
104
105
106
107
108
109
1010
k :D 0.. 100
aP68k :D Poisson k , 0 . 68
aP85k :D Poisson k , 0 . 85
aP96k :D Poisson k , 0 . 96
i :D 2.. 100
xi :D i
y1i :D finiteQloss xi , aP68 , 0 . 68
y2i :D finiteQloss xi , aP85 , 0 . 85
y3i :D finiteQloss xi , aP96 , 0 . 96
Figure 12.3. Cell Loss Probability against Buffer Capacity for Random Traffic
capacity for 120, 150 and 170 sources. The corresponding values for the
offered load are 0.68, 0.85, and 0.96 respectively. Figure 12.3 takes the load
values used for the CBR traffic and assumes that the traffic is random.
The cell loss results are found using the exact analysis for the finite
M/D/1 system. A summary of the three different situations is depicted
in Figure 12.4, comparing 30 VBR sources, 150 CBR sources, and an
offered load of 0.85 of random traffic (the same load as 150 CBR sources).
191
Buffer capacity, X
0
10
20
30
40
50
60
70
80
90
100
100
VBR
random
CBR
101
102
103
104
105
106
107
108
109
1010
k :D 0.. 100
aP85k :D Poisson k , 0 . 85
i :D 2.. 100
xi :D i
150 2000
y1i :D NDD1Q i , 150 ,
353207. 5
y2i :D finiteQloss xi , aP85 , 0 . 85
y3i :D OverallCLP i , 30 , 2000 , 24000 , 353207. 5 , 480
Figure 12.4. Comparison of VBR, CBR and Random Traffic through a Finite Buffer
Restrict the number of bursty sources so that the total input rate only
rarely exceeds the cell slot rate, and assume that all excess-rate cells
are lost. This is the bufferless or burst-scale loss option (also known
as rate envelope multiplexing).
2.
Assume that we have a big enough buffer to cope with excessrate cells, so only a proportion are lost; the other excess-rate cells are
delayed in the buffer. This is the burst-scale delay option (rate-sharing
statistical multiplexing).
192
DIMENSIONING
Short buffer
Single
server
Loss
sensitive
cells
...
Long
Buffer
...
193
load
buffer
capacity
mean
maximum
mean
maximum
(cells)
delay s
delay s
delay s
delay s
(a) Buffer Dimensioning for Cell-Scale Queueing: Buffer Capacity, Mean and Maximum
Delay, Given the Offered Load and a Cell Loss Probability of 108
0.50
0.51
0.52
0.53
0.54
0.55
0.56
0.57
0.58
0.59
0.60
0.61
0.62
0.63
0.64
0.65
0.66
0.67
0.68
0.69
0.70
0.71
0.72
0.73
0.74
0.75
0.76
0.77
16
16
17
17
17
18
18
19
19
20
20
21
21
22
23
23
24
25
25
26
27
28
29
30
31
33
34
35
4.2
4.3
4.4
4.4
4.5
4.6
4.6
4.7
4.8
4.9
5.0
5.0
5.1
5.2
5.3
5.5
5.6
5.7
5.8
6.0
6.1
6.3
6.5
6.7
6.9
7.1
7.3
7.6
45.3
45.3
48.1
48.1
48.1
51.0
51.0
53.8
53.8
56.6
56.6
59.5
59.5
62.3
65.1
65.1
67.9
70.8
70.8
73.6
76.4
79.3
82.1
84.9
87.8
93.4
96.3
99.1
1.1
1.1
1.1
1.1
1.1
1.1
1.2
1.2
1.2
1.2
1.2
1.3
1.3
1.3
1.3
1.4
1.4
1.4
1.5
1.5
1.5
1.6
1.6
1.7
1.7
1.8
1.8
1.9
11.3
11.3
12.0
12.0
12.0
12.7
12.7
13.4
13.4
14.2
14.2
14.9
14.9
15.6
16.3
16.3
17.0
17.7
17.7
18.4
19.1
19.8
20.5
21.2
21.9
23.4
24.1
24.8
(continued overleaf )
194
DIMENSIONING
buffer
capacity
mean
maximum
mean
maximum
load
(cells)
delay s
delay s
delay s
delay s
0.78
0.79
0.80
0.81
0.82
0.83
0.84
0.85
0.86
0.87
0.88
0.89
0.90
0.91
0.92
0.93
0.94
0.95
0.96
0.97
0.98
37
39
41
43
45
48
51
54
58
62
67
73
79
88
98
112
129
153
189
248
362
7.9
8.2
8.5
8.9
9.3
9.7
10.3
10.9
11.5
12.3
13.2
14.3
15.6
17.1
19.1
21.6
25.0
29.7
36.8
48.6
72.2
104.8
110.4
116.1
121.7
127.4
135.9
144.4
152.9
164.2
175.5
189.7
206.7
223.7
249.1
277.5
317.1
365.2
433.2
535.1
702.1
1024.9
2.0
2.0
2.1
2.2
2.3
2.4
2.6
2.7
2.9
3.1
3.3
3.6
3.9
4.3
4.8
5.4
6.3
7.4
9.2
12.2
18.0
26.2
27.6
29.0
30.4
31.9
34.0
36.1
38.2
41.1
43.9
47.4
51.7
55.9
62.3
69.4
79.3
91.3
108.3
133.8
175.5
256.2
(b) Buffer Dimensioning for Cell-Scale Queueing: Buffer Capacity, Mean and Maximum
Delay, Given the Offered Load and a Cell Loss Probability of 1010
0.50
0.51
0.52
0.53
0.54
0.55
0.56
0.57
0.58
0.59
0.60
0.61
0.62
0.63
0.64
0.65
0.66
0.67
0.68
0.69
19
20
20
21
21
22
23
23
24
24
25
26
26
27
28
29
30
31
32
33
4.2
4.3
4.4
4.4
4.5
4.6
4.6
4.7
4.8
4.9
5.0
5.0
5.1
5.2
5.3
5.5
5.6
5.7
5.8
6.0
53.8
56.6
56.6
59.5
59.5
62.3
65.1
65.1
67.9
67.9
70.8
73.6
73.6
76.4
79.3
82.1
84.9
87.8
90.6
93.4
1.1
1.1
1.1
1.1
1.1
1.1
1.2
1.2
1.2
1.2
1.2
1.3
1.3
1.3
1.3
1.4
1.4
1.4
1.5
1.5
13.4
14.2
14.2
14.9
14.9
15.6
16.3
16.3
17.0
17.0
17.7
18.4
18.4
19.1
19.8
20.5
21.2
21.9
22.6
23.4
195
buffer
capacity
mean
maximum
mean
maximum
load
(cells)
delay s
delay s
delay s
delay s
0.70
0.71
0.72
0.73
0.74
0.75
0.76
0.77
0.78
0.79
0.80
0.81
0.82
0.83
0.84
0.85
0.86
0.87
0.88
0.89
0.90
0.91
0.92
0.93
0.94
0.95
0.96
0.97
0.98
34
35
37
38
39
41
43
45
47
49
51
54
57
60
64
68
73
79
85
93
102
113
126
144
167
199
246
324
476
6.1
6.3
6.5
6.7
6.9
7.1
7.3
7.6
7.9
8.2
8.5
8.9
9.3
9.7
10.3
10.9
11.5
12.3
13.2
14.3
15.6
17.1
19.1
21.6
25.0
29.7
36.8
48.6
72.2
96.3
99.1
104.8
107.6
110.4
116.1
121.7
127.4
133.1
138.7
144.4
152.9
161.4
169.9
181.2
192.5
206.7
223.7
240.7
263.3
288.8
319.9
356.7
407.7
472.8
563.4
696.5
917.3
1347.6
1.5
1.6
1.6
1.7
1.7
1.8
1.8
1.9
2.0
2.0
2.1
2.2
2.3
2.4
2.6
2.7
2.9
3.1
3.3
3.6
3.9
4.3
4.8
5.4
6.3
7.4
9.2
12.2
18.0
24.1
24.8
26.2
26.9
27.6
29.0
30.4
31.9
33.3
34.7
36.1
38.2
40.3
42.5
45.3
48.1
51.7
55.9
60.2
65.8
72.2
80.0
89.2
101.9
118.2
140.9
174.1
229.3
336.9
(c) Buffer Dimensioning for Cell-Scale Queueing: Buffer Capacity, Mean and Maximum
Delay, Given the Offered Load and a Cell Loss Probability of 1012
0.50
0.51
0.52
0.53
0.54
0.55
0.56
0.57
0.58
0.59
0.60
0.61
23
24
24
25
26
26
27
28
28
29
30
31
4.2
4.3
4.4
4.4
4.5
4.6
4.6
4.7
4.8
4.9
5.0
5.0
65.1
67.9
67.9
70.8
73.6
73.6
76.4
79.3
79.3
82.1
84.9
87.8
1.1
1.1
1.1
1.1
1.1
1.1
1.2
1.2
1.2
1.2
1.2
1.3
16.3
17.0
17.0
17.7
18.4
18.4
19.1
19.8
19.8
20.5
21.2
21.9
(continued overleaf )
196
DIMENSIONING
buffer
capacity
mean
maximum
mean
maximum
load
(cells)
delay s
delay s
delay s
delay s
0.62
0.63
0.64
0.65
0.66
0.67
0.68
0.69
0.70
0.71
0.72
0.73
0.74
0.75
0.76
0.77
0.78
0.79
0.80
0.81
0.82
0.83
0.84
0.85
0.86
0.87
0.88
0.89
0.90
0.91
0.92
0.93
0.94
0.95
0.96
0.97
0.98
32
33
34
35
36
37
38
39
41
42
44
46
47
49
52
54
56
59
62
65
69
73
78
83
89
96
104
113
124
138
154
176
204
244
303
400
592
5.1
5.2
5.3
5.5
5.6
5.7
5.8
6.0
6.1
6.3
6.5
6.7
6.9
7.1
7.3
7.6
7.9
8.2
8.5
8.9
9.3
9.7
10.3
10.9
11.5
12.3
13.2
14.3
15.6
17.1
19.1
21.6
25.0
29.7
36.8
48.6
72.2
90.6
93.4
96.3
99.1
101.9
104.8
107.6
110.4
116.1
118.9
124.6
130.2
133.1
138.7
147.2
152.9
158.5
167.0
175.5
184.0
195.4
206.7
220.8
235.0
252.0
271.8
294.4
319.9
351.1
390.7
436.0
498.3
577.6
690.8
857.9
1132.5
1676.1
1.3
1.3
1.3
1.4
1.4
1.4
1.5
1.5
1.5
1.6
1.6
1.7
1.7
1.8
1.8
1.9
2.0
2.0
2.1
2.2
2.3
2.4
2.6
2.7
2.9
3.1
3.3
3.6
3.9
4.3
4.8
5.4
6.3
7.4
9.2
12.2
18.0
22.6
23.4
24.1
24.8
25.5
26.2
26.9
27.6
29.0
29.7
31.1
32.6
33.3
34.7
36.8
38.2
39.6
41.8
43.9
46.0
48.8
51.7
55.2
58.7
63.0
67.9
73.6
80.0
87.8
97.7
109.0
124.6
144.4
172.7
214.5
283.1
419.0
the finite M/D/1 queue to show the buffer capacity for a given load and
cell loss probability. The first column is the load, varying from 50% up to
98%, and the second column gives the buffer size for a particular cell loss
probability requirement (Table 12.2 part (a) is for a CLP of 108 , part (b)
is for 1010 , and part (c) is for 1012 ). Then there are extra columns which
197
give the mean delay and maximum delay through the buffer for link
rates of 155.52 Mbit/s and 622.08 Mbit/s. The maximum delay is just the
buffer capacity multiplied by the time per cell slot, s, at the appropriate
link rate. The mean delay depends on the load, , and is calculated using
the formula for an infinite M/D/1 system:
tq D s C
s
2 1
This is very close to the mean delay through a finite M/D/1 because the
loss is extremely low (mean delays only differ noticeably when the loss
from the finite system is high).
Figure 12.6 presents the mean and maximum delay values from
Table 12.2 in the form of a graph and clearly shows how the delays
increase substantially above a load of about 80%. This graph can be used
to select a maximum load according to the cell loss and delay constraints,
and the buffers link rate; the required buffer size can then be read from
the appropriate part of Table 12.2.
So, to summarize, we dimension short buffers to cope with cell-scale
queueing behaviour using Table 12.2 and Figure 12.6. This approach is
applicable to networks which offer the deterministic bit-rate transfer
capability and the statistical bit-rate transfer capability based on rate
envelope multiplexing. For SBR based on rate sharing, buffer dimensioning requires a different approach, based on the burst-scale queueing
behaviour.
900
Delay in microseconds through
155.52 Mbit/s link buffer
250
Maximum delay
225
800
200
700
175
600
150
CLP = 108
500
125
CLP = 1010
400
100
CLP = 1012
300
75
200
50
100
0
25
Mean delay
0.5
0.6
0.7
0.8
0.9
1000
Load
Figure 12.6. Mean and Maximum Delays for a Buffer with Link Rate of either
155.52 Mbit/s or 622.08 Mbit/s for Cell Loss Probabilities of 108 , 1010 and 1012
198
DIMENSIONING
10000
1000
100
CLP = 1012
CLP = 1010
10
CLP = 108
1
0.5
0.6
0.7
0.8
0.9
1.0
Load
Figure 12.7. Buffer Capacity in Units of Mean Burst Length, Given Load and Cell
Loss Probability, for Traffic with a Peak Cell Rate of 1/100 of the Cell Slot Rate (i.e.
N0 D 100)
199
0.0
0.1
0.2
0.3
0.4
Load
0.5
0.6
0.7
0.8
0.9
1.0
1000
100
10
1
N0 = 10, 20, ... 100, 200, ... 700
Figure 12.8. Buffer Capacity in Units of Mean Burst Length, Given Load and the
Ratio of Cell Slot Rate to Peak Cell Rate, for a Cell Loss Probability of 1010
(Table 12.3). We then need to find the value of X/b which gives a cell
loss contribution, CLPbsd , from the burst-scale delay factor, to meet the
overall CLP target. This is found by rearranging the equation
X 13
N0 b 4C1
CLPtarget
D CLPbsd D e
CLPbsl
to give
4C1
X
D
b
1 3
ln
CLPtarget
CLPbsl
N0
Table 12.3 gives the figures for an overall CLP target of 1010 , and
Figure 12.7 shows results for three different CLP targets: 108 , 1010 and
1012 . Figure 12.8 shows results for a range of values of N0 for an overall
CLP target of 1010 .
Table 12.3. Burst-Scale Parameter Values for N0 D 100 and a CLP Target of 1010
CLPbsl
101
102
103
104
105
106
107
108
109
1010
load,
0.94
0.87
0.80
0.74
0.69
0.65
0.61
0.58
0.55
0.53
CLPbsd
109
108
107
106
105
104
103
102
101
100
5064.2
394.2
84.6
31.1
14.7
7.7
4.1
2.1
0.8
X/b
200
DIMENSIONING
1E+00
10
C
cell/s
D
1E02
100
D=100
D=40
1E03
1E05
1E06
90
D=80
1E01
1E04
80
D=20
D=150
D=200
1E07
1E08
1E09
D=300
1E10
1E11
1E12
Figure 12.9. Maximum Number of CBR Connections, Given D Cell Slots between
Arrivals and CLP
201
(where C is the cell slot rate), we can find from Figure 12.9 the maximum
number of connections that can be admitted onto a link of cell rate C.
The link cannot be loaded to more than 100% capacity, so the maximum
possible number of sources of any particular type is numerically equal
to the (constant) number of cell slots between arrivals. Lets take an
example. Suppose we have CBR sources of cell rate 3532 cell/s being
multiplexed over a 155.52 Mbit/s link, with a CLP requirement of 107 .
This gives a value of 100 for D, and from Figure 12.9, the maximum
number of connections is (near enough) 50.
Now that we have a figure for the maximum number of connections,
we can calculate the offered traffic at the connection level for a given
probability of blocking. Figure 12.10 shows how the connection blocking
probability varies with the maximum number of connections for different
offered traffic intensities. With our example, we find that for 50 connections maximum and a connection blocking probability of 0.02, the offered
traffic intensity is 40 erlangs. Note that the mean number of connections
in progress is numerically equal to the offered traffic, i.e. 40 connections.
The cell loss probability for this number of connections can be found
from Figure 12.9: it is 2 109 . This is over an order of magnitude lower
than the CLP requirement in the traffic contract, and therefore provides
a useful safety margin.
For variable-bit-rate traffic, we will only consider rate envelope multiplexing and not rate sharing. Figure 12.11 shows how the cell loss
10
20
90
0.1
100
100 E
90 E
0.01
80 E
0.001
70 E
0.0001
10 E
20 E
30 E
40 E
50 E
60 E
202
DIMENSIONING
1E+00
1E01
1E02
1E03
1E04
1E05
1E06
1E07
1E08
1E09
1E10
1E11
1E12
0.1
0.2
0.3
0.4
Load
0.5
0.6
0.7
0.8
0.9
probability from a (short) buffer varies with the utilization for a range of
VBR sources of different peak cell rates. The key parameter defining this
relationship is N0 , the ratio of the cell slot rate to the peak cell rate. Given
N0 and a CLP requirement, we can read off a value for the utilization.
This then needs to be multiplied by the ratio of the cell slot rate, C, to the
mean cell rate, m, to obtain the maximum number of connections that can
be admitted onto the link.
So, for example, for sources with a peak cell rate of 8830 cell/s and
a mean cell rate of 3532 cell/s being multiplexed onto a 155.52 Mbit/s
link, N0 is 40 and, according to Figure 12.11, the utilization is about
0.4 for a CLP of 108 . This utilization is multiplied by 100 (i.e. C/m)
to give a maximum of 40 connections. From Figure 12.10, an offered
traffic intensity of 30 erlangs gives a connection blocking probability of
just under 0.02 for 40 connections maximum. Thus the mean number
of connections in progress is 30, giving a mean load of 0.3, and from
Figure 12.11 the corresponding cell loss probability is found to be 1011 .
This is three orders of magnitude better than for the maximum number
of connections.
As an alternative to using Figure 12.10, a traffic table based on Erlangs
lost call formula is provided in Table 12.4.
So, we have seen that, for both CBR and VBR traffic, when the connection blocking probability requirements are taken into account, the actual
cell loss probability can be rather lower than that for the maximum
allowed number of connections.
203
Table 12.4.
0.1
0.05
0.02
0.01
0.005
0.001
0.0001
0.02
0.22
0.60
1.09
1.65
2.27
2.93
3.62
4.34
5.08
5.84
6.61
7.40
8.20
9.00
9.82
10.65
11.49
12.33
13.18
14.03
14.89
15.76
16.63
17.50
18.38
19.26
20.15
21.03
21.93
22.82
23.72
24.62
25.52
26.43
27.34
28.25
29.16
30.08
30.99
0.01
0.15
0.45
0.86
1.36
1.90
2.50
3.12
3.78
4.46
5.15
5.87
6.60
7.35
8.10
8.87
9.65
10.43
11.23
12.03
12.83
13.65
14.47
15.29
16.12
16.95
17.79
18.64
19.48
20.33
21.19
22.04
22.90
23.77
24.63
25.50
26.37
27.25
28.12
29.00
0.005
0.10
0.34
0.70
1.13
1.62
2.15
2.72
3.33
3.96
4.61
5.27
5.96
6.66
7.37
8.09
8.83
9.57
10.33
11.09
11.85
12.63
13.41
14.20
14.99
15.79
16.59
17.40
18.21
19.03
19.85
20.67
21.50
22.33
23.16
24.00
24.84
25.68
26.53
27.38
0.001
0.04
0.19
0.43
0.76
1.14
1.57
2.05
2.55
3.09
3.65
4.23
4.83
5.44
6.07
6.72
7.37
8.04
8.72
9.41
10.10
10.81
11.52
12.24
12.96
13.70
14.43
15.18
15.93
16.68
17.44
18.20
18.97
19.74
20.51
21.29
22.07
22.86
23.65
24.44
0.0001
0.03
0.08
0.23
0.45
0.72
1.05
1.42
1.82
2.26
2.72
3.20
3.71
4.23
4.78
5.33
5.91
6.49
7.09
7.70
8.31
8.94
9.58
10.22
10.88
11.53
12.20
12.88
13.56
14.24
14.93
15.63
16.33
17.04
17.75
18.46
19.18
19.91
20.63
21.37
0.11
0.59
1.27
2.04
2.88
3.75
4.66
5.59
6.54
7.51
8.48
9.47
10.46
11.47
12.48
13.50
14.52
15.54
16.57
17.61
18.65
19.69
20.73
21.78
22.83
23.88
24.93
25.99
27.05
28.11
29.17
30.23
31.30
32.36
33.43
34.50
35.57
36.64
37.71
38.78
0.05
0.38
0.89
1.52
2.21
2.96
3.73
4.54
5.37
6.21
7.07
7.95
8.83
9.72
10.63
11.54
12.46
13.38
14.31
15.24
16.18
17.13
18.07
19.03
19.98
20.94
21.90
22.86
23.83
24.80
25.77
26.74
27.72
28.69
29.67
30.65
31.63
32.62
33.60
34.59
(continued overleaf )
204
DIMENSIONING
0.1
0.05
0.02
0.01
0.005
0.001
31.91
32.83
33.75
34.68
35.60
36.53
37.46
38.39
39.32
40.25
44.9
49.6
54.3
59.1
63.9
68.6
73.4
78.3
83.1
87.9
92.8
97.6
102.5
107.4
112.3
117.1
122.0
126.9
131.8
136.8
141.7
146.6
151.5
156.4
161.4
166.3
171.3
176.2
181.2
186.1
29.88
30.77
31.65
32.54
33.43
34.32
35.21
36.10
37.00
37.90
42.4
46.9
51.5
56.1
60.7
65.3
70.0
74.6
79.3
84.0
88.7
93.4
98.2
102.9
107.7
112.4
117.2
122.0
126.7
131.5
136.3
141.1
145.9
150.7
155.5
160.4
165.2
170.0
174.9
179.7
28.23
29.08
29.93
30.79
31.65
32.51
33.38
34.24
35.11
35.98
40.3
44.7
49.1
53.6
58.1
62.6
67.2
71.7
76.3
80.9
85.5
90.1
94.7
99.3
104.0
108.6
113.3
118.0
122.7
127.3
132.0
136.7
141.5
146.2
150.9
155.6
160.4
165.1
169.8
174.6
25.23
26.03
26.83
27.64
28.44
29.25
30.06
30.87
31.69
32.51
36.6
40.7
44.9
49.2
53.5
57.8
62.1
66.4
70.8
75.2
79.6
84.0
88.5
92.9
97.4
101.9
106.4
110.9
115.4
119.9
124.4
129.0
133.5
138.1
142.6
147.2
151.8
156.4
161.0
165.6
0.0001
39.86
40.93
42.01
43.08
44.16
45.24
46.32
47.40
48.48
49.56
54.9
60.4
65.8
71.2
76.7
82.2
87.6
93.1
98.6
104.1
109.5
115.0
120.5
126.0
131.5
137.0
142.5
148.1
153.6
159.1
164.6
170.1
175.6
181.1
186.7
192.2
197.7
203.2
208.7
214.0
35.58
36.57
37.56
38.55
39.55
40.54
41.54
42.53
43.53
44.53
49.5
54.5
59.6
64.6
69.7
74.8
79.9
85.0
90.1
95.2
100.3
105.4
110.6
115.7
120.9
126.0
131.2
136.3
141.5
146.7
151.8
157.0
162.2
167.3
172.5
177.7
182.9
188.1
193.3
198.5
22.10
22.84
23.58
24.33
25.08
25.83
26.58
27.34
28.10
28.86
32.7
36.6
40.5
44.5
48.6
52.6
56.7
60.9
65.0
69.2
73.4
77.6
81.9
86.2
90.4
94.7
99.0
103.4
107.7
112.1
116.4
120.8
125.2
129.6
134.0
138.4
142.8
147.2
151.7
156.1
13
Priority Control
the customer comes first
PRIORITIES
ATM networks can feature two forms of priority mechanism: space and
time. Both forms relate to how an ATM buffer operates, and these are
illustrated in Figure 13.1. Space priority addresses whether or not a cell
is admitted into the finite waiting area of the buffer. Time priority deals
with the order in which cells leave the waiting area and enter the server
for onward transmission. Thus the main focus for the space priority
mechanism is to distinguish different levels of cell loss performance,
whereas for time priority the focus is on the delay performance. For both
forms of priority, the waiting area can be organized in different ways,
depending on the specific priority algorithm being implemented.
The ATM standards explicitly support space priority, by the provision
of a cell loss priority bit in the ATM cell header. High priority is indicated
by the cell loss priority bit having a value of 0, low priority with a value of
1. Different levels of time priority, however, are not explicitly supported
in the standards. One way they can be organized is by assigning different
levels of time priority to particular VPI/VCI values or ranges of values.
206
PRIORITY CONTROL
Space priority
mechanism controls
access to buffer
capacity
Time priority
mechanism controls
access to server
capacity
Waiting area
Server
ATM buffer
H
5
H
4
H
3
H
2
H
1
server
ATM buffer
The last low priority cell is pushed
out of the buffer, providing room
for the arriving high priority cell
H
6
H
5
H
4
H
3
H
2
H
1
server
ATM buffer
Figure 13.2.
207
server
ATM buffer
Threshold
Figure 13.3. Space Priority: Partial Buffer Sharing
are rather more complex for the push-out mechanism because, when a
high-priority cell arrives at a full buffer, a low-priority cell in the buffer
must be found and discarded. Thus the partial buffer sharing scheme
achieves the best compromise between performance and complexity.
Lets look at how partial buffer sharing can be analysed, so we can
quantify the improvements in admissible load that are possible with
space priorities.
ak a
e
k!
where the mean arrival rate (in cells per cell slot) is given by parameter a.
This mean arrival rate is the sum of mean arrival rates, ah and al , for the
high- and low-priority streams respectively:
a D ah C al
208
PRIORITY CONTROL
and so we can define the probability that there are k high-priority arrivals
in one slot as
ak
ah k D h eah
k!
and the probability that there are k low-priority arrivals in one slot as
al k D
akl al
e
k!
s0 1 a0
a0
k1
si Ak i C 1
iD1
where Ak is the probability that at least k cells arrive during the time
slot, and is expressed simply as the probability that fewer than k cells do
not arrive.
k1
Ak D 1
aj
jD0
209
s0 Ak C
k1
si Ak i C 1
iD1
sk D
a0
a m, n D
1
iDmCn
i m!
ai
n! i m n!
ah
a
n imn
al
a
210
PRIORITY CONTROL
1
a0 m, j
jDn
A0 m, n D 1
mCn1
ai
iD0
ah
a
1
am C n C i
iD0
jD0
j nCij
n1
n C i!
j! n C i j!
al
a
M1
si A0 M i, 1
iD1
211
M1
fsi A0 M i, k M C 1g
iD1
k1
fsi Ah k i C 1g
iDM
This differs from the situation for k D M in two respects: first, the crossing
up from state 0 requires M cells of either priority and a further k M of
high-priority; and secondly, it is now possible to cross the line from a state
at or above the threshold this can only be achieved with high-priority
arrivals.
At the buffer limit, k D X, we have only one way of reaching this state:
from state 0, with M cells of either priority followed by at least X M
cells of high-priority. If there is at least one cell in the queue at the start
of the slot, and enough arrivals fill the queue, then at the end of the slot,
the cell in the server will complete service and take the queue state from
X down to X 1. Thus for k D X we have
sX ah 0 D s0 A0 M, X M
Now, as in Chapter 7, we have no value for s0, so we cannot evaluate
sk for k > 0. Therefore we define a new variable, uk, as
uk D
sk
s0
so
u0 D 1
Then
u1 D
1 a0
a0
k1
ui Ak i C 1
iD1
a0
At the threshold
AM C
uM D
M1
ui A0 M i, 1
iD1
ah 0
212
PRIORITY CONTROL
A M, k M C
M1
fui A0 M i, k M C 1g
iD1
k1
C
uk D
fui Ah k i C 1g
iDM
ah 0
A0 M, X M
ah 0
1
X
ui
iD0
and, from that, find the rest of the state probability distribution:
sk D s0 uk
Before we go on to calculate the cell loss probability for the high-and
low-priority cell streams, lets first show an example state probability
distribution for an ATM buffer implementing the partial buffer sharing
scheme. Figure 13.4 shows the state probabilities when the buffer capacity
is 20 cells, and the threshold level is 15 cells, for three different loads:
(i) the low priority load, al , is 0.7 and the high-priority load, ah , is 0.175 of
the cell slot rate; (ii) al D 0.6 and ah D 0.15; and (iii) al D 0.5 and ah D 0.125.
The graph shows a clear distinction between the gradients of the state
probability distribution below and above the threshold level. Below the
threshold, the queue behaves like an ordinary M/D/1 with a gradient
corresponding to the combined high- and low-priority load. Above the
threshold, only the high-priority cell stream has any effect, and so the
gradient is much steeper because the load on this part of the queue is
much less.
In Chapter 7, the loss probability was found by comparing the offered
and the carried traffic at the cell level. But now we have two different
priority streams, and the partial buffer sharing analysis only gives the
combined carried traffic. The overall cell loss probability can be found
213
Queue size
0
10
15
20
1E+00
1E01
1E02
State probability
(i)
1E03
(ii)
1E04
(iii)
1E05
1E06
1E07
1E08
1E09
Figure 13.4. State Probability Distribution for ATM Buffer with Partial Buffer Sharing (i) al D 0.7,
ah D 0.175; (ii) al D 0.6, ah D 0.15; (iii) al D 0.5, ah D 0.125
using
CLP D
al C ah 1 s0
al C ah
CLPh D
j lh j
ah
where lh j is the probability that j high-priority cells are lost in a cell slot
and is given by
lh j D
M1
iD0
si a0 M i, X M C j C
X
si ah X i C j
iDM
The first summation on the right-hand side accounts for the different ways
of losing j cells when the state of the system is less than the threshold.
214
PRIORITY CONTROL
CLPl D
j ll j
al
where ll j is the probability that j low-priority cells are lost in a cell slot
and is given by
ll j D
M1
iD0
1
r M i!
si
ar
r M i j! j!
rDMiCj
X
ah
a
rMij j
al
a
si al j
iDM
The first term on the right-hand side accounts for the different ways of
losing j cells when the state of the system is less than the threshold. This
involves filling up to the threshold with either M i cells of either low or
high-priority, followed by any number of high-priority cells along with j
low-priority cells (which are lost). The second summation deals with the
different ways of losing j cells when the state of the system is above the
threshold. This is simply the probability of j low-priority cells arriving in
a time slot, for each of the states at or above the threshold.
215
be increased to 0.7 to give a cell loss probability of 1.16 103 , and the
high-priority load of 0.125 has a cell loss probability of 9.36 1011 . The
total admissible load has increased by just over 30% of the cell slot rate,
from 0.521 to 0.825, representing a 75% increase in the low-priority traffic.
If the threshold is set to M D 18, the low-priority load can only be
increased to 0.475 giving a cell loss probability of 5.6 108 , and the
high-priority load of 0.125 has a cell loss probability of 8.8 1011 . But
even this is an extra 8% of the cell slot rate, representing an increase in
20% for the low-priority traffic, for a cell loss margin of between two and
three orders of magnitude. Thus a substantial increase in load is possible,
particularly if the difference in cell loss probability requirement is large.
Figure 13.5.
ah D 0.125
0.7
0.8
0.9
Low priority
High priority
216
PRIORITY CONTROL
Buffer capacity, X
0
10
20
30
40
50
1E+00
1E01
Low priority
1E02
Cell loss probability
1E03
1E04
1E05
1E06
1E07
1E08
1E09
High priority
1E10
1E11
1E12
Figure 13.6. Low- and High-Priority Cell Loss against Buffer Capacity, for a D 0.925 and
XMD5
Set the threshold by using Table 10.1 based on the M/D/1/X analysis
(without priorities) for the combined load and the combined cell loss
probability requirement. The latter is found using the following
relationship (which is based on equating the average number of cells
Table 13.1. Cell Loss Probability Margin between Low- and High-Priority Traffic
0.01
0.02
0.03
0.04
0.05
0.1
0.15
0.2
0.25
2.7E-03
6.5E-06
1.4E-08
3.0E-11
5.6E-03
2.7E-05
1.2E-07
5.4E-10
8.4E-03
6.3E-05
4.5E-07
2.9E-09
1.8E-11
1.1E-02
1.2E-04
1.2E-06
1.0E-08
9.0E-11
1.4E-02
1.8E-04
2.5E-06
2.8E-08
3.3E-10
2.8E-02
8.4E-04
2.6E-05
6.7E-07
1.9E-08
4.4E-02
2.2E-03
1.1E-04
4.9E-06
2.5E-07
5.9E-02
4.3E-03
3.2E-04
2.2E-05
1.6E-06
7.6E-02
7.5E-03
7.4E-04
7.0E-05
7.0E-06
217
0.05
0.1
0.15
0.2
0.25
1E+00
1E01
XM=1
1E02
XM=2
1E03
XM=3
1E04
XM=4
1E05
XM=5
1E06
1E07
1E08
1E09
1E10
Figure 13.7. Cell Loss Probability Margin against High-Priority Load for Different
Values of X M
al CLPl C ah CLPh
a
From Table 10.1 we find that the threshold is between 20 and 25 cells,
but closer to 20; we will use M D 21. Table 13.1 gives an additional buffer
space of 3 cells for a margin of 104 and high-priority load of 0.15.
Thus the total buffer capacity is 24. If we put these values of X D 24
and M D 21, al D 0.55 and ah D 0.15 back into the analysis, the results
are CLPl D 5.5 107 and CLPh D 6.3 1011 . For a buffer size of 23, a
threshold of 20, and the same load, the results are CLPl D 1.1 106 and
CLPh D 1.2 1010 .
Two of the values for high-priority load in Table 13.1 are of particular
interest in the development of a useful dimensioning rule; these values
are 0.04 and 0.25. In Figure 13.8, the CLP margin is plotted against the
218
PRIORITY CONTROL
Cuffer space above threshold, X M
0
1E+00
1E01
Cell loss probability margin
1E02
1E03
ah = 0.25
1E04
1E05
1E06
al = 0.04
1E07
1E08
1E09
1E10
1E11
Figure 13.8. Cell Loss Probability Margin against Buffer Space Reserved for
High-Priority Traffic, X M
219
server
low
low
low
low
low
low
high
low
the queue goes into service. It is only during time slot n C 1 that the highpriority cell arrives and is then placed at the head of the queue. The same
principle can be applied with many levels of priority. Note that any cell
arriving to find the buffer full is lost, regardless of the level of time priority.
The effect of time priorities is to decrease the delay for the higherpriority traffic at the expense of increasing the delays for the lowerpriority traffic. As far as ATM is concerned, this means that real-time
connections (e.g. voice and interactive video) can be speeded on their
way at the expense of delaying the cells of connections which do not have
real-time constraints, e.g. email data.
To analyse the delay performance for a system with two levels of time
priority, we will assume an M/D/1 system, with infinite buffer length.
Although time priorities do affect the cell loss performance, we will
concentrate on those analytical results that apply to delay.
a1 C a2
2 1 a1
a1 C a2
2
1 a1 a2
w1 a1 C
w2 D
220
PRIORITY CONTROL
where wi is the mean wait (in time slots) endured by cells of priority i
while in the buffer.
Consider an ATM scenario in which a very small proportion of traffic,
say about 1%, is given high time priority. Figure 13.10 shows the effect
on the mean waiting times. Granting a time priority to a small proportion
of traffic has very little effect on the mean wait for the lower-priority
traffic, which is indistinguishable from the mean wait when there are no
priorities. We can also see from the results that the waiting time for the
high-priority cells is greatly improved.
Figure 13.11 shows what happens if the proportion of high-priority
traffic is significantly increased, to 50% of the combined high- and
low-priority load. Even in this situation, mean waiting times for the
low-priority cells are not severely affected, and waiting times for the
priority traffic have still been noticeably improved. Figure 13.12 illustrates the case when most of the traffic is high-priority and only
1% is of low priority. Here, there is little difference between the nopriority case and the results for the high-priority traffic, but the very
small amount of low-priority traffic has significantly worse waiting
times.
The results so far are for the mean waiting time. Lets now consider
the effect of time priorities on the distribution of the waiting-time (see
also [13.2]). To find the waiting time probabilities for cells in an ATM
20
15
Priority 2
No priority
Priority 1
10
0
0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
Total load
Figure 13.10. Mean Waiting Time for High and Low Time-Priority Traffic, where
the Proportion of High-Priority Traffic is 1% of the Total Load
221
20
15
priority 2
no priority
priority 1
10
0.1
0.2
0.3
0.7
0.8
0.9
Figure 13.11. Mean Waiting Time for High and Low Time-Priority Traffic, where
the Proportion of High-Priority Traffic is 50% of the Total Load
20
15
priority 2
no priority
priority 1
10
0.1
0.2
0.3
0.4 0.5
Total load
0.6
0.7
0.8
0.9
Figure 13.12. Mean Waiting Time for High and Low Time-Priority Traffic, where
the Proportion of High-Priority Traffic is 99% of the Total Load
buffer where different levels of time priority are present requires the use
of convolution (as indeed did finding waiting times in a non-priority
buffersee Chapter 7). A cell, say C, arriving in time slot i will wait
behind a number of cells, and this number has four components:
222
PRIORITY CONTROL
1.
the total number of cells of equal or higher priority that are present
in the buffer at the end of time slot i 1
2. the number of cells of the same priority that are ahead of C in the batch
in which C arrives
3. all the cells of higher priority than C that arrive in time slot i
4.
component 2
component 3
k
i
uk i
vk D
bj a1 i j
iD0
jD0
But where do the three distributions, uk, bk and a1 k come from?
As we are assuming Poisson arrivals for both priorities, a1 k is simply:
a1 k D
ak1 a1
e
k!
ak2 a2
e
k!
where
a1 is the arrival rate (in cells per time slot) of high-priority cells
a2 is the arrival rate (in cells per time slot) of low-priority cells
223
The unfinished work, uk, is actually found from the state probabilities,
denoted sk, the formula for which was given in Chapter 7:
u0 D s0 C s1
uk D sk C 1
for k > 0
What about the wait caused by other cells of low priority arriving in the
same batch, but in front of C? Well, there is a simple approach here too:
bk D PrfC is k C 1th in the batchg
D
k
a2 i
iD0
a2
So now all the parts are assembled, and we need only implement
the convolution to find the virtual waiting-time distribution. However,
this still leaves us with the problem of accounting for subsequent highpriority arrivals. In fact, this is very easy to do using a formula developed
(originally) for an entirely different purpose. The result is that:
w0 D v0
k
wk D
vi a1 k i, k i
iD1
for k > 0
where:
wk D Prfa priority 2 cell must wait k time slots before it enters serviceg
a1 k, x D
axk
1
eka1
x!
224
PRIORITY CONTROL
10
1E+00
1E01
Probability
1E02
priority 2
priority 1
1E03
1E04
1E05
1E06
1E07
1E08
Figure 13.13. Waiting-Time Distribution for High- and Low Time-Priority Traffic,
where the Proportion of High-Priority Traffic Is 1% of a Total Load of 0.8 Cells per
Time Slot
Waiting time (time slots)
0
10
1E+00
1E01
Probability
1E02
1E03
priority 2
priority 1
1E04
1E05
1E06
1E07
1E08
Figure 13.14. Waiting Time Distribution for High and Low Time-Priority Traffic,
where the Proportion of High-Priority Traffic Is 50% of a Total Load of 0.8 Cells per
Time Slot
225
SERVER
PART III
IP Performance and
Traffic Management
14
230
the packet waiting-time probabilities, by which we mean the probabilities associated with a packet being delayed k time units.
Toct
q
231
q
1
D
s
Toct
p
Toct
p
D
q
This is also the utilization, assuming an infinite buffer size and, hence, no
packet loss. We define the state probability, i.e. the probability of being
in state k, as
sk D Prfthere are k octets in the queueing system at the
end of any octet slotg
As before, the utilization is just the steady-state probability that the
system is not empty, so
D 1 s0
and therefore
s0 D 1
p
q
p
p
1 a0
D 1
s1 D s0
a0
q
1p
Similarly, we find a formula for s2 by writing the balance equation for
s1, and rearranging:
s2 D
232
s2 D s1
1q
1p
p
p
D 1
q
1p
p
p
D 1
q
1q
1q
1p
1q 2
1p
p
sk D 1
q
1q
1q
1p
k
for k > 0
1
q
giving an expression for the probability that the queue exceeds x packets:
Qx D
1q
1p
x
q
So, what do the results look like? Lets use a load of 80%, for comparison
with the results in Chapter 7, and assume an average packet size of
233
p
D 0.8
q
1
D 500 ) q D 0.002
q
p D 0.8 0.002 D 0.0016
The results are shown in Figure 14.1, labelled Geo/Geo/1. Those
labelled Poisson and Binomial are the results from Chapter 7
Buffer capacity, X
0
10
15
20
25
100
Geo/Geo/1
Poisson
Binomial
101
102
103
104
105
106
1
500
p :D 0.8 q
q :D
p
packetQ x :D
q
k :D 0.. 30
xk :D k
y1 :D packetQ x
1q
1p
x
Figure 14.1. Graph of the Probability that the Queue State Exceeds X, and the
Mathcad Code to Generate (x, y) Values for Plotting the Geo/Geo/1 Results. For
Details of how to Generate the Results for Poisson and Binomial Arrivals to a
Deterministic Queue, see Figure 7.6
234
(Figure 7.6) for fixed service times at a load of 80%. Notice that the
variability in the packet sizes (and hence service times) produces a
flatter gradient than the fixed-cell-size analysis for the same load. The
graph shows that, for a given performance requirement (e.g. 0.01), the
buffer needs to be about twice the size (X D 21) of that for fixed-size
packets or cells (X D 10). This corresponds closely with the difference, in
average waiting times, between M/D/1 and M/M/1 queueing systems
mentioned in Chapter 4.
sk C 1
sk
as k ! 1
p
p
1 q kC1
1q
sk C 1
q
1q
1p
D
D
sk
1p
p
p
1q k
1
q
1q
1p
1
235
10
15
20
25
30
100
90% load
80% load
101
102
103
104
Here we see a constant decay rate
105
Figure 14.2.
System
The Decay Rate of the State Probabilities for the M/D/1 Queueing
Radio
s(1)/s(0)
s(2)/s(1)
s(3)/s(2)
s(4)/s(3)
s(5)/s(4)
s(6)/s(5)
s(7)/s(6)
DR
1.4596
0.9430
0.8359
0.8153
0.8129
0.8129
0.8129
236
the waiting time i.e. the sum of the service time of all the packets
ahead in the queue
the loss the probability that the buffer overflows a finite length is
often closely approximated by the probability that the infinite buffer
model contains more than would fit in the given finite buffer length.
10
20
30
100
Overflow probability
constant multiplier
Constant
multiplier
101
decay rate
Decay
rate
102
103
40
237
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1.0
100
101
Loss probability
102
103
104
105
106
107
108
109
1010
Q X, s :D qx0
1 s0
for i 2 1.. X
qx
qxi1 si
i
qxX
i :D 10
j :D 0.. 14
k :D 0.. 30
j
loadj :D
C 0.25
20
aPk,j :D Poisson k, loadj
xj :D loadj
if X > 0
238
239
240
IMC
101
e
No change
e
Under load
100
Excess arrivals
l e e
q
q
q
102
q
...
103
241
But we can find an expression for E[B] based on the arrival probabilities:
1
E[B] D
i 1 ai
iD2
1 a0 a1
where
ak D Prfk arrivals in a packet service timeg
The numerator weights all the probabilities of having i packets arriving
by the number that are actually excess-rate packets, i.e. i 1. This ranges
over all situations in which there is at least one excess-rate packet arrival.
The denominator normalizes the probabilities to this condition (that there
are ER arrivals). A simple rearrangement of the numerator gives
1
E[B] D
i ai
iD1
1
ai
iD1
1 a0 a1
E[a] 1 a0
1 a0 a1
where E[a] is the mean number of packets arriving per unit time. We now
have an expression for the parameter of the geometric ER series:
qD1
1 a0 a1
1
D1
E[B]
E[a] 1 C a0
Consider now how the queue increases in size by one packet. We define
the state probability as
pk D Prfan arriving excess-rate packet finds k packets in the bufferg
Remember that we are creating an Imbedded Markov Chain at excessrate arrival instants. Thus to move from state k to k C 1 either we need
another ER packet in the same service time interval, with probability q,
or for the queue content to remain unchanged until the next ER packet
arrival. To express this latter probability we need to define
dk D Prfqueue content decreases by k packets between ER arrivalsg
and
Dk D Prfqueue content decreases by at least k packets
between ER arrivalsg
242
The queue size decreases only when there is no arrival, and it remains
the same only when there is one arrival. For the queue to decrease
by k packets between excess arrivals, there must be k slots with no
arrivals, with probability a0, any number of slots with one arrival, with
probability a1, and then a slot with excess arrivals, with probability
1 a0 a1. So dk is given by
dk D
1
nDk
In fact, we only need d0 in the queue analysis, and this is simply
d0 D
1
nD0
which reduces to
d0 D
1 a0 a1
1 a1
We also require
D1 D 1 d0 D
a0
1 a1
Now, for the balance equations: in a similar way to the discrete fluid-flow
analysis in Chapter 9, we develop these by equating the up and down
probabilities of crossing between adjacent queue states. As before, we are
concerned with the state as seen by an excess-rate arrival, so we must
consider arriving packets one at a time. Thus the state can only ever
increase by one.
Initially, let the buffer capacity be X packets. To cross between states
X 1 and X, an arriving excess-rate packet sees X 1, taking the queue
state up to X, and another excess-rate packet follows to see the queue in
state X. This happens either immediately, with probability q, or after any
number of time units in which the queue state stays the same, i.e. with
probability 1 q d0. So the probability of going up is
Prfgoing upg D q C 1 q d0 pX 1
To go down, an arriving excess-rate packet sees X in the queue and is
lost (because the buffer is full) and then there is a gap containing any
number of time units, at least one of which is empty and the rest in which
the queue state does not change. Then the next excess-rate arrival sees
fewer than X in the queue. For this to happen, it is simply the probability
that the next ER packet does not see X, or, put another way, one minus
243
the probability that the next ER packet does see X. This latter probability
has the same conditions as the up transition, i.e. either another ER packet
follows immediately, or it follows after any number of time units in which
the queue state stays the same. So
Prfgoing downg D 1 q C 1 q d0 pX
Equating the probabilities of going up and down gives
q C 1 q d0 pX 1 D 1 q C 1 q d0 pX
For a line between X 1 and X 2, equating probabilities gives
q C 1 q d0 pX 2 D 1 q C 1 q d0 D1 pX
C 1 q C 1 q d0 D1 pX 1
The left-hand side is the probability of going up, and has the same
conditions as before. The probability of coming down, on the right-hand
side of the equation, contains two possibilities. The first term is for an
arriving ER packet which sees X in the queue and is lost (because the
buffer is full) and the second term is for an arriving ER packet which
sees X 1 in the queue, taking the state of the queue up to X. Then, in
both cases, there is a period without ER packets during which the queue
content decreases by at least two empty time units, so that the next ER
arrival sees fewer than X 1 in the queue.
Substituting for pX, and rearranging gives
q C 1 q d0 pX 2 D D1 pX 1
In the general case, for a line between X i C 1 and X i, the probability
of going up remains the same as before, i.e. the only way to go up is
for an ER packet to see X i, and to be followed (either immediately
or after a period during which the queue state remains the same) by
another ER packet which sees X i C 1. The probability of going down
consists of many components, one for each state above X i, but they
can be arranged in two groups: the probability of coming down from
X i C 1 itself; and the probability of coming down to below X i C 1
from above X i C 1. This latter is just the probability of going down
between X i C 2 and X i C 1 multiplied by D1, which is the same
as going up from X i C 1 multiplied by D1. This is precisely the same
grouping as illustrated in Figure 9.10 for the discrete fluid-flow approach.
The general equation then is
q C 1 q d0 pX i D D1 pX i C 1
244
so
q C 1 q d0
pX D p0
D1
X
pi D p0
iD0
X
q C 1 q d0 i
iD0
D1
D1
q C 1 q d0
pk D 1
D1
q C 1 q d0
D1
k
k
E[a] 1 a1 1 C a1 C a02
kC1
245
These then are the general forms for pk and Qk, into which we can
substitute appropriate expressions from the Poisson distribution, i.e.
E[a] D
a0 D e
a1 D e
to give
e e 2 C C e
pk D 1
1 C e
kC1
e e 2 C C e
Qk D
1 C e
e e 2 C C e
1 C e
Well, was it really worth all the effort? Lets take a look at some results.
Figure 14.7 shows the queue state probabilities for three different values
of D 0.55, 0.75, 0.95. In the figure, the lines are the exact results found
using the approach developed in Chapter 7, with Poisson input, and the
markers show the results from the excess-rate analysis with GAPP input.
Note that the results from the exact analysis are discrete, not continuous,
but are shown as continuous lines for clarity.
Figure 14.8 shows the results for Qk, the probability that the queue
exceeds k, comparing exact and excess-rate GAPP analyses. Figure 14.9
compares the excess-rate GAPP results with those from the heavy-traffic
analysis in Chapter 8. It is clear that the excess-rate GAPP provides a
very accurate approximation to the exact results across the full range of
load values, and it is significantly more accurate than the heavy-traffic
approximation.
246
Queue size
0
10
20
30
40
100
101
102
State probability
103
104
105
106
Exact 95%
ER GAPP 95%
Exact 75%
ER GAPP 75%
Exact 55%
ER GAPP 55%
107
108
109
1010
k :D 0.. 40
aP95k :D Poisson k, 0.95
aP75k :D Poisson k, 0.75
aP55k :D Poisson k, 0.55
k
e e 2 C C e
e e 2 C C e
GAPPMDI k, :D 1
1 C e
1 C e
xk :D k
y1 :D infiniteQ 40, aP95, 0.95
y2 :D GAPPMDI x, 0.95
y3 :D infiniteQ 40, aP75, 0.75
y4 :D GAPPMDI x, 0.75
y5 :D infiniteQ 40, aP55, 0.55
y6 :D GAPPMDI x, 0.55
Figure 14.7. State Probability Distributions at Various Load Levels, Comparing the Exact Analysis
and Excess-Rate Analysis Methods
247
Buffer capacity, X
0
10
20
30
40
100
101
102
Exact 95%
ER GAPP 95%
Exact 75%
ER GAPP 75%
Exact 55%
ER GAPP 55%
103
104
105
106
107
108
109
1010
k :D 0..
40
GAPPMD1Q k, :D
e e 2 C C e
1 C e
kC1
xk :D k
yP1 :D infiniteQ 40, aP95, 0.95
y1 :D Q 40, yP1
y2 :D GAPPM1Q x, 0.95
yP3 :D infiniteQ 40, aP75, 0.75
y3 :D Q 40, yP3
y4 :D GAPPMD1Q x, 0.75
yP5 :D infiniteQ 40, aP55, 0.55
y5 :D Q 40, yP5
y6 :D GAPPMD1Q x, 0.55
Figure 14.8. Probability that the Queue Exceeds X for Various Load Levels,
Comparing the Exact Analysis and Excess-Rate Analysis Methods
But what if 50% of the packets are short, say 40 octets, and 50% of the
packets are long, say 960 octets? The average packet length is 500 octets,
equivalent to one time unit. The probability that there are no packets
arriving in one packet service time is now a weighted sum of the two
possible situations, i.e.
248
Buffer capacity, X
0
10
20
30
40
100
101
102
Heavy Traffic 95%
ER GAPP 95%
Heavy Traffic 75%
ER GAPP 75%
Heavy Traffic 55%
ER GAPP 55%
103
104
105
106
107
108
109
1010
k :D 0..
40
2k
HeavyTrafficMD1Q k, :D e
xk :D k
y1 :D HeavyTrafficMD1Q x, 0.95
y2 :D GAPPMD1Q x, 0.95
y3 :D HeavyTrafficMD1Q x, 0.75
y4 :D GAPPMD1Q x, 0.75
y5 :D HeavyTrafficMD1Q x, 0.55
y6 :D GAPPMD1Q x, 0.55
1
Figure 14.9. Probability that the Queue Exceeds X for Various Load Levels,
Comparing the Excess-Rate and Heavy Traffic Approximation Methods
a0 D 0.5
40
500
0!
0
40
960
500
0!
48
0
giving
a0 D 0.5 e 25 C 0.5 e 25
The same approach applies for a1, i.e.
a1 D 0.5
960
e 500
2
2
48 48
e 25 C 0.5
e 25
25
25
249
Lets make this more general. Assume we have short packets of length 1
time unit, and long packets of length n time units. The proportion of short
packets is ps , and so the proportion of long packets is (1 ps ). Packets
arrive at a rate of packets per time unit. The mean service time (in time
units) is then simply
s D ps 1 C 1 ps n
and mean number of arrivals per packet service time (i.e. the utilization)
is given by
E[a] D D s D fps C 1 ps ng
The general form for the probability of k arrivals in a packet service time
is then given by
ak D ps
k
n k n
e C 1 ps
e
k!
k!
1
gi D
iD1
a0 D
1
gi ei
iD1
a1 D
1
gi i ei
iD1
where
gk D Prfa packet requires k time units to be servedg
But for now, lets keep to the bi-modal case and look at some results
(in Figure 14.10) for different lengths and proportions of short and long
packets. We fix the short packet length to 40 octets, the mean packet
250
Buffer capacity, X
0
10
20
30
40
100
101
102
103
104
105
106
Bi-modal 2340
Geo/Geo/1
Bi-modal 960
Bi-modal 540
M/D/1
107
k :D 0.. 40
longandshortQ k, , ps, n :D
ps
C
1
ps n
Ea
ps e C 1 ps en
a0
a1
ps e C 1 ps n en
Ea 1 a1 1 C a1 C a02
decayrate
a0 Ea 1 C a0
decayratekC1
xk :D k
y1 :D GAPPMD1Q x, 0.8
y2 :D longandshortQ x, 0.8, 0.08, 13.5
y3 :D longandshortQ x, 0.8, 0.5, 24
y4 :D longandshortQ x, 0.8, 0.8, 58.5
y5 :D packetQ x
Figure 14.10. Probability that the Queue Exceeds X for Different Service Distributions (Geometric, Deterministic and Bi-modal)
251
length to 500 octets and the load to 0.8 packets arriving per mean service
time. We set the time unit to be the time to serve a short packet, so
sD
500
D ps 1 C 1 ps n
40
Figure 14.10 shows results for three different lengths for the long packets:
2340, 960, and 540 octets. These give values of n D 58.5, 24 and 13.5
with corresponding values of ps D 0.8, 0.5 and 0.08 respectively. Also
in the figure are the results for the M/D/1 with the same load of 0.8,
and the Geo/Geo/1 results from Figure 14.1 (a load of 0.8 and mean
packet-size of 500 octets). Note that the M/D/1 gives a lower bound on
the probabilities, but the Geo/Geo/1 is not the worst case. Introducing
a small proportion of short packets, and hence slightly increasing the
length of the long packets (to maintain an average of 500 octets) results
in a decay rate only a little higher than the M/D/1. When there are equal
proportions of short and long packets, the decay rate approaches that for
the Geo/Geo/1. However, when most of the packets are short, and only
20% are (very) long packets, the decay rate is rather worse than that for
the Geo/Geo/1 queue.
15
Resource Reservation
go with the flow
254
RESOURCE RESERVATION
Ap
Ton h
Ap
D F Ton
h
i.e. the flow attempt rate multiplied by the mean flow duration.
255
PI input ports
Output port
of interest
PI
256
RESOURCE RESERVATION
N
N1
N2
Exponentially
distributed
OFF period
Exponentially
distributed
ON period
Ron
Roff
2N state process
Figure 15.2.
2 state process
S
.
.
.
C/h
C/h -1
.
.
.
3
2
ON period
ON period
T(on) = mean ON time
Ron = mean ON rate
Channel
capacity = C
OFF period
OFF period
T(off) = mean OFF time
Roff = mean OFF rate
Figure 15.3.
Time
257
C
h
Ap
D F Ton
h
N0
N0 A
DD
N
1
0
AN0
Ar
N0
C
r!
N0 !
N0 A
rD0
AN0
N0!
258
RESOURCE RESERVATION
N0 B
N0 A C A B
The mean number of calls (packet flows) waiting, averaged over all calls,
is given by
A
wDD
N0 A
But what we need is the mean number waiting, conditioned on there
being some waiting. This is simply given by
A
w
D
D
N0 A
Thus, when the aggregate traffic is in the ON state, i.e. there are some
packet flows waiting, then the mean input rate to the output port exceeds
the service rate. This excess rate is simply the product of the conditional
mean number waiting and the packet rate of a packet flow, h. So
Ron D C C h
Ap
A
DCCh
N0 A
C Ap
The mean duration in the excess-rate (ON) state is the same as the
conditional mean delay for calls in the waiting-call system. From Littles
formula, we have
A
w D F tw D
tw
Ton
which, on rearranging and substituting for w, gives
tw D
Ton
A
Ton
wD
D
A
A
N0 A
tw
Ton
h Ton
D
D
D
N0 A
C Ap
This completes the parameterization of the ON state. In order to parameterize the OFF state we need to make use of D, the probability that a
packet flow is delayed. This probability is, in fact, the probability that the
259
1D
D
The mean load, in packet/s, is the weighted sum of the rates in the ON
and OFF states, i.e.
Ap D D Ron C 1 D Roff
and so
Roff D
Ap D Ron
1D
1
Toff C Roff
For a finite buffer size of X, we had the following results from Chapter 9:
pX 1 D
and
pX i D
1a
pX
a
s
pX i C 1
a
260
RESOURCE RESERVATION
p0
0<k<X
s
pk D
k
s
a
p0
kDX
1a
s
These state probabilities must sum to 1, and so, after some rearrangement,
we can find p0 thus:
a
s
p0 D
X
1s
a
1
1a
s
1
s
s
As we found in the previous chapter for this form of expression, the
probability that the queue exceeds k packets is then a geometric progression, i.e.
kC1
a
Qk D
s
This result is equivalent to the burst-scale delay factor it is the probability that excess-rate packets see more than k in the queue. It is in our,
now familiar, decay rate form, and provides an excellent approximation
to the probability that a finite buffer of length k overflows. This latter is a
good approximation to the loss probability.
However, we have not quite finished. We now need an expression for
the probability that a packet is an excess-rate arrival. In the discrete fluidflow model of Chapter 9, this was simply R C/R the proportion of
arrivals that are excess-rate arrivals. This simple expression needs to be
modified because when the aggregate process is in the OFF state, packets
are still arriving at the queue.
We need to find the ratio of the mean excess rate to the mean arrival
rate. If we consider a single ONOFF cycle of the aggregate model, then
this ratio is the mean number of excess packets in an ON period to the
261
VOICE-OVER-IP, REVISITED
Ron C Ton
Ap Ton C Toff
hD
C Ap
xC1
1
1
hD
Ton Ron C
Qx D
C Ap
1
Toff C Roff
VOICE-OVER-IP, REVISITED
In the last chapter we looked at the excess-rate M/D/1 analysis as
a suitable model for voice-over-IP. The assumption of a deterministic
server is reasonable, given that voice packets tend to be of fixed size,
and the Poisson arrival process is a good limit for N CBR sources when
N is large (as we found in Chapter 8). But if the voice sources are using
activity detection, then they do not send packets during silent periods.
Thus we have ONOFF behaviour, which can be viewed as a series of
overlapping packet flows (see Figure 15.1).
Suppose we have N D 100 packet voice sources, each producing packets
at a rate of h D 167 packet/s, when active, into a buffer of size X D 100
packets and service capacity C D 7302.5 packet/s. The mean time when
active is Ton D 0.35 seconds and when inactive is Toff D 0.65 second, thus
each source has, on average, one active period every Ton C Toff D 1 second.
The rate at which these active periods arrive, from the population of N
packet sources, is then
FD
N
D 100 s1
Ton C Toff
Therefore, we can find the overall mean load, Ap , and the offered traffic,
A, in erlangs.
Ap D F Ton h D 100 0.35 167 D 5845 packet/s
262
RESOURCE RESERVATION
C
D 43.728
h
which needs to be rounded down to the nearest integer, i.e. N0 D 43. Lets
now parameterize the two-state excess-rate model.
AN0
N !
B D N 0 D 0.028 14
0
Ar
rD0
DD
r!
N0 B
D 0.134 66
N0 A C A B
Ron D C C h
Ap
D 7972.22
C Ap
Ap D Ron
D 5513.98
1D
h Ton
D 0.0401
Ton D
C Ap
Roff D
Toff D Ton
1D
D 0.257 71
D
We can now calculate the geometric parameters, a and s, and hence the
decay rate.
aD1
1
D 0.962 77
Ton Ron C
sD1
1
D 0.997 83
Toff C Roff
decay rate D
a
D 0.964 86
s
hD
D 0.015 43
C Ap
263
VOICE-OVER-IP, REVISITED
QX D
C Ap
XC1
a
s
D 4.161 35 104
Figure 15.4 shows these analytical results on a graph of Qx against x. The
Mathcad code to generate the analytical results is shown in Figure 15.5.
Also shown, as a dashed line in Figure 15.4, are the results of applying
the burst-scale analysis (both loss and delay factors, from Chapter 9) to
the same scenario. Simulation results for this scenario show a decay rate
of approximately 0.97. The figure of 0.964 86 obtained from the excessrate aggregate flow analysis is very close to these simulation results,
and illustrates the accuracy of the excess-rate technique. In contrast, the
burst-scale delay factor gives a decay rate of 0.998 59. This latter is typical
of other published techniques which tend to overestimate the decay rate
by a significant margin; the interested reader is referred to [15.4] for a
more comprehensive comparison.
If we return to the M/D/1 scenario, where we assume that the voice
sources are of a constant rate, how many sources can be supported
over the same buffer, and with the same packet loss probability? The
excess-rate analysis gives us the following equation:
e e 2 C C e
Q100 D
1 C e
101
D 4.161 35 104
Buffer capacity, X
0
10
20
30
40
50
60
70
80
90 100
10 0
101
102
103
104
105
Figure 15.4. Packet Loss Probability Estimate for Voice Sources, Based on ExcessRate Aggregate Flow Analysis
264
RESOURCE RESERVATION
k :D 0.. 100
Ap
afQ k , h , Tflow , Ap, C :D A
h
C
N0
floor
h
AN0
1
B
r
N0! N0 A
rD0
r!
N0 B
D
N0 A C A B
h Tflow
Ton
C Ap
1-D
Ton
Toff
D
Ap
Ron
CCh
C Ap
Ap D Ron
Roff
1D
1
1
Ton
Ron
C
decayrate
1
1
Toff C Roff
hD
probexcess
C Ap
probexcess decayratekC1
xk :D k
y :D afQ x , 167 , 0.35 , 5845 , 7302.5
Figure 15.5.
which is plotted in Figure 15.6 for values of load ranging from 0.8 up to
0.99 of the service capacity.
The value of loss probability we require occurs at an offered load of
about 0.96; in fact 0.961 yields a loss probability of 4.164 104 . This
offered load is just the total input rate from all the CBR sources, divided
by the service rate of the output port. So, we have
NCBR h
D 0.961
C
0.961 7302.5
D 42.02
NCBR D
167
265
0.80
0.85
Offered load
0.90
0.95
1.00
100
Packet loss probability
101
102
103
104
105
106
107
108
109
1010
Figure 15.6. Packet Loss Probability Estimate for Voice Sources, Based on ExcessRate M/D/1 Analysis
266
RESOURCE RESERVATION
10000
9500
9000
12
10
8500
8000
7500
7000
6
4
12
10
6
4
6500
0
20
40
60
80
100
Token bucket size, B packets
120
Figure 15.7. Example of Relationship between Token Bucket Parameter Values for
Voice-over-IP Aggregate Traffic
Figure 15.7 shows the relationship between B and R for various values
of the packet loss probability estimate (102 down to 1012 ). The scenario
is the aggregate flow of voice-over-IP traffic, using the parameter values
and formulas in the previous section. The tokens are equivalent to packets,
rather than octets, in this figure. A simple scaling factor (the number of
octets per packet) can be applied to convert to octets. There is a clear
trade-off between rate allocation (R) and burstiness (B) for the aggregate
flow. With a smaller rate allocation, the aggregate flow exceeds this value
more often, and so a larger token bucket is required to accommodate
these bursts.
16
IP Buffer Management
packets in the space time continuum
268
IP BUFFER MANAGEMENT
269
30
Queue size
20
10
0
5000
6000
7000
8000
9000
10000
Time
Figure 16.1.
w D 0.002
Sample Trace of Actual Queue Size (Grey) and EWMA (Black) with
30
Queue size
20
10
0
5000
6000
7000
8000
9000
10000
Time
Figure 16.2.
w D 0.01
Sample Trace of Actual Queue Size (Grey) and EWMA (Black) with
270
IP BUFFER MANAGEMENT
Configuring the values of the thresholds, min and max , depends on the
target queue size, and hence system load, required. In [16.1] a rule of
thumb is given to set max > 2min in order to avoid the synchronization
problems mentioned earlier, but no specific guidance is given on setting
min . Obviously if there is not much difference between the thresholds,
then the mechanism cannot provide sufficient advance warning of potential congestion, and it soon gets into a state where it drops all arriving
packets. Also, if the thresholds are set too low, this will constrain the
normal operation of the buffer, and lead to under-utilization. So, are there
any useful indicators?
From the packet queueing analysis in the previous two chapters, we
know that in general the queue state probabilities can be expressed as
pk D 1 dr dr k
where dr is the decay rate, k is the queue size and pk is the queue state
probability. The mean queue size can be found from this expression, as
follows:
1
1
k pk D 1 dr
k dr k
qD
kD1
kD1
1
k 1 dr k
kD2
1
dr k
kD1
qD
1
dr k
kD1
1
dr k
kD2
And, as before, we now subtract this equation from the previous one to
obtain
1 dr q D dr
qD
dr
1 dr
271
For the example shown in Figures 16.1 and 16.2, assuming a fixed packet
size (i.e. the M/D/1 queue model) and using the GAPP formula with a
load of 0.9 gives a decay rate of
e e 2 C C e
dr D
1 C e
D 0.817
D0.9
0.817
D 4.478
1 0.817
which is towards the lower end of the values shown on the EWMA traces.
Figure 16.3 gives some useful indicators to aid the configuration of the
thresholds, min and max . These curves are for both the mean queue size
against decay rate, and for various levels of probability of exceeding a
threshold queue size. Recall that the latter is given by
Prfqueue size > kg D Qk D dr kC1
103
Q(k) = 0.0001
Q(k) = 0.01
Q(k) = 0.1
mean queue size
102
101
100
0.80
Figure 16.3.
Decay Rate
0.85
0.90
Decay rate
0.95
1.00
272
IP BUFFER MANAGEMENT
So, to find the threshold k, given a specified probability, we just take logs
of both sides and rearrange thus:
threshold D
logPrfthreshold exceededg
1
logdr
Note that this defines a threshold in terms of the probability that the
actual queue size exceeds the threshold, not the probability that the
EWMA queue size exceeds the threshold. But it does indicate how the
queue behaviour deviates from the mean size in heavily loaded queues.
But what if we want to be sure that the mechanism can cope with a
certain level of bursty traffic, without initiating packet discard? Recall
the scenario in Chapter 15 for multiplexing an aggregate of packet flows.
There, we found that although the queue behaviour did not go into the
excess-rate ON state very often, when it did, the bursts could have a
substantial impact on the queue (producing a decay rate of 0.964 72). It
is thus the conditional behaviour of the queueing above the long-term
average which needs to be taken into account. In this particular case, the
decay rate of 0.964 72 has a mean queue size of
qD
0.964 72
D 27.345 packets
1 0.964 72
5845
D 0.8
7302.5
0.659
D 1.933 packets
1 0.659
273
Precedence queueing
There are a variety of different scheduling algorithms. In Chapter 13, we
looked at time priorities, also called head-of-line (HOL) priorities, or
precedence queueing in IP. This is a static scheme: each arriving packet
has a fixed, previously defined, priority level that it keeps for the whole
of its journey across the network. In IPv4, the Type of Service (TOS) field
can be used to determine the priority level, and in IPv6 the equivalent
field is called the Priority Field. The scheduling operates as follows (see
Figure 16.4): packets of priority 2 will be served only if there are no packets
Inputs
.
.
.
.
.
.
Packet router
Priority 1 buffer
.
.
.
Outputs
server
Priority 2 buffer
..
Priority P buffer
274
IP BUFFER MANAGEMENT
275
.
.
.
N IP
flows
entering
a buffer
Single o/p
line
sense: it makes far worse use of the available space than would, for
example, complete sharing of a buffer. This can be easily seen when you
realize that a single flows virtual buffer can overflow, so causing loss,
even when there is still plenty of space available in the rest of the buffer.
Each virtual buffer can be treated independently for performance analysis, so any of the previous approaches covered in this book can be re-used.
If we have per-flow queueing, then the input traffic is just a single source.
With a variable-rate flow, the peak rate, mean rate and burst length can be
used to characterize a single ONOFF source for queueing analysis. If we
have per-class queueing, then whatever is appropriate from the M/D/1,
M/G/1 or multiple ONOFF burst-scale analyses can be applied.
logloss probability
1
logdr
For realistically sized buffers, one packet space will make little difference,
so we can simplify this equation further to give
X
logloss probability
logdr
276
IP BUFFER MANAGEMENT
Parameter
Bi-modal 540
Bi-modal 960
Bi-modal 2340
40
540
13.5
0.08
0.064
0.8
0.4628
0.33 982
0.67 541
40
960
24
0.5
0.064
0.8
0.57 662
0.19 532
0.78 997
40
2340
58.5
0.8
0.064
0.8
0.75 514
0.06 574
0.91 454
X1
X2
X3
C/3
C/3
Service rate
C packet/s
C/3
277
and so
X2 D
X1 logdr1
logdr2
X3 D
X1 logdr1
logdr3
X1 logdr1 X1 logdr1
C
logdr2
logdr3
3
jD1
1
logdrj
So we can write
X
Xi D
logdri
3
jD1
1
logdrj
278
IP BUFFER MANAGEMENT
V
Xj
jD1
..
.
XV D
279
This buffer partitioning formula can be used to evaluate the correct size
of the partition allocated to any traffic class depending only on knowing
the total space available, the decay rates and the desired scaling factors.
For our example with three virtual buffers, we have
X D 200 packets
dr1 D 0.675 41
dr2 D 0.789 97
dr3 D 0.914 54
and
S1 D 10 000
S2 D 100
S3 D 1
By applying the general partitioning formula, we obtain (to the nearest
whole packet)
X1 D 46 packets
X2 D 56 packets
X3 D 98 packets
This gives loss probabilities for each of the virtual buffers of
LP1 D 1.446 108
LP2 D 1.846 106
LP3 D 1.577 104
280
IP BUFFER MANAGEMENT
N=8 virtual
o/p buffers
Example
switch
element
with 8 i/p
and 8 o/p
lines, and
a single
shared o/p
buffer.
N=8 o/p
lines
Figure 16.7. Example of a Switch/Router with Output Ports Sharing Buffer Space
P2 k D
k
P1 j P1 k j
jD0
i.e. to find the probability that the shared buffer is in state k, find all the
different ways in which the two individual queues can have k packets
between them, and sum these probabilities.
The autoconvolution for N buffers sharing the buffer space can then be
constructed recursively, i.e.
PN k D
k
PN1 j P1 k j
jD0
QN k D 1
k
PN j
jD0
281
where dr is the decay rate, k is the queue size, and pk is the individual queue state probability. The autoconvolution of this geometric
distribution is given by a negative binomial; thus
PN k D kCN1 CN1 dr k 1 dr N
where k is the size of the combined queues in the shared buffer.
The probability that the shared buffer overflows is expressed as
QN k 1 D 1
k1
PN j D
jD0
1
PN j
jDk
1
PN j
jDk
1
PN k qjk
jDk
Note that this is, in essence, the same approach that we used previously in Chapter 14 for the Geometrically Approximated Poisson Process.
However, we cannot parameterize it via the mean of the excess-rate batch
size instead we estimate the geometric parameter, q, from the ratio of
successive queue state probabilities:
qD
PN k C 1
D
PN k
kCN
k C N dr
kC1
for k N
1
1q
282
IP BUFFER MANAGEMENT
Total queue size
0
10
20
30
40
50
60
70
80
90
100
100
101
State probability
102
103
104
105
10
107
108
separate
conv. 2 buffers
neg. bin. 2 buffers
conv. 4 buffers
neg. bin. 4 buffers
conv. 8 buffers
neg. bin. 8 buffers
283
10
20
30
40
50
60
70
80
90
100
101
102
103
104
105
106
107
108
separate
conv. 2 buffers
neg. bin. 2 buffers
conv. 4 buffers
neg. bin. 4 buffers
conv. 8 buffers
neg. bin. 8 buffers
Q(s) :D qx0
1 s0
for i 2 1.. last (s)
qx
qxi1 si
i
qx
NegBinomialQ N , k , dr :D combin (k C N 1, N 1 drk 1 drN1
k :D 0.. 100
xk :D k
y1 :D Q (Psingle)
y2 :D Q (Autoconv2, Psingle
y3k :D NegBinomialQ 2, k , dr
y4 :D Q (Autoconv 4, Psingle
y5k :D NegBinomialQ 4, k, dr
y6 :D Q (Autoconv 8, Psingle
y7k :D NegBinomialQ 8, k , dr
Figure 16.9. Overflow Probabilities for Shared Buffers, and Mathcad Code to
Generate (x, y) Values for Plotting the Graph
284
IP BUFFER MANAGEMENT
10
100
10
102
103
20
30
40
50
separate
simple, 2 buffers
neg. bin. 2 buffers
simple, 4 buffers
neg. bin. 4 buffers
simple, 8 buffers
neg. bin. 8 buffers
104
105
106
107
108
k :D 0 .. 50
NegBinomialQ (N, k, dr) :D k
kN
combin (k C N 1 , N 1 drk 1 drN1
SimpleNBQ N , k , dr :D k
kN
ekCN1lnkCN1klnkN1lnN1
a
b
eklndrCN1ln1dr
a b
xk :D k
y1 :D Q (Psingle)
y2k :D SimpleNBQ 2, k , dr
y3k :D NegBinomialQ 2, k , dr
y4k :D SimpleNBQ 4, k , dr
y5k :D NegBinomialQ 4, k , dr
y6k :D SimpleNBQ 8, k , dr
y7k :D NegBinomialQ 8, k , dr
Figure 16.10. Comparison Showing the Benefits of Sharing Buffer Space Overflow
Probability vs. Buffer Capacity per Output Port
gives
QN k 1
k C N 1kCN1
dr k 1 dr N1
kk N 1N1
which has the distinct advantage of not requiring the user to evaluate
large factorials. Applying logarithms ensures that all the powers can be
285
17
Self-similar Traffic
play it again, Sam
288
SELF-SIMILAR TRAFFIC
50
100
1
45
40
35
30
25
20
15
10
5
0
10
20
30
40
50
60
Time (scaled units)
70
80
90
100
100
1
45
40
35
30
25
20
15
10
5
0
10
20
30
40
50
60
70
80
90
100
289
Another way of expressing this is that the process has long-term (slowly
decaying) correlations. So, an individual communications process with
heavy-tailed sojourn times exhibits long-range dependence. And the
aggregation of LRD sources produces a traffic stream with self-similar
characteristics.
So, how do we model and analyse the impact of this traffic? There
have been claims that traditional approaches to teletraffic modelling
no longer apply. Much research effort has been, and is being, spent on
developing new teletraffic models, such as Fractional Brownian Motion
(FBM) processes (e.g. [17.2]) and non-linear chaotic maps (e.g. [17.3]).
However, because of their mathematical complexity, assessing their
impact on network resources is not a simple task, although good progress
is being made.
In this chapter we take a different approach: with a little effort we can
re-apply what we already know about traffic engineering usefully, and
generate results for these new scenarios quickly. Indeed, this is in line
with our approach throughout this book.
1
x
PrfX > xg D
where is the parameter which specifies the minimum value that the
distribution can take, i.e. x . For example, if D 25, then PrfX > 25g D
1, i.e. X cannot be less than or equal to 25. For our purposes it is often
convenient to set D 1.
The cumulative distribution function is
Fx D 1
x
and the probability density function is given by
f x D
C1
290
SELF-SIMILAR TRAFFIC
1
Note that for this formula to be correct, > 1 is essential; otherwise the
Pareto has an infinite mean.
Lets put some numbers in to get an idea of the effect of moving to
heavy-tailed distributions. Assume that we have a queue with a timeslotted arrival process of packets or cells. The load is 0.5, and we have a
batch arriving as a Bernoulli process, such that
Prfthere is a batch in a time slotg D 0.25
thus the mean number of arrivals in any batch is 2. We calculate the
probability of having more than x arrivals in any time slot, in two cases:
for an exponentially distributed batch size, and for a Pareto-distributed
batch size. In the former case, we have
x
D2
1
E[x]
D2
E[x] 1
hence
2
1
x
giving
1
10
2
0.25 D 0.0025
Thus for a batch size of greater than 10 arrivals there is not that much
difference between the two distributions the probability is of the same
291
order of magnitude. However, if we try again for more than 100 arrivals
we obtain
Prf>100 arrivals in any time slotg D e
100
2
0
100
101
20
40
60
1
100
2
Batch size, x
80
100
120
140
160
180
200
Pareto, E(x) = 10
Pareto, E(x) = 2
Exponential, E(x) = 10
Exponential, E(x) = 2
102
Pr{X>x}
103
104
105
106
107
108
292
SELF-SIMILAR TRAFFIC
100
10
3 4 5 67 101
Batch size, x
2 3 4 5 67 102
3 4 5 67 103
101
102
Pareto, E(x) = 10
Pareto, E(x) = 2
Exponential, E(x) = 10
Exponential, E(x) = 2
Pr{X>x}
103
104
105
106
107
108
Figure 17.4.
Scale for x
293
.
.
.
Pareto distributed
number of packets
in an arriving batch
.
.
.
Time
294
SELF-SIMILAR TRAFFIC
Fx D 1
1
x
20
40
60
Batch size
80 100 120
140
160
180
200
100
= 1.9
= 1.1
101
Probability
102
103
104
105
106
107
108
BatchPareto q , k , :D 1 q if k D 0
1
1
q if k D 1
1 . 5
1
1
q
k 0.5
k C 0.5
maxX :D 1000
k :D 0 .. maxX
l :D 0.. 1
1.9
:D
1.1
l
Bl :D
l 1
:D 0.25
ql :D
Bl
xk :D k
y1k :D BatchPareto q0 , k , 0
y2k :D BatchPareto q1 , k , 1
Figure 17.6.
otherwise
295
Queue size
100
3 4 5 6 7 101
3 4 5 6 7 102
2 3 4 5 6 7 103
100
= 1.9
= 1.1
State probability
101
102
103
104
105
106
maxX :D 1000
k :D 0.. maxX
1 :D 0..1
1.9
:D
1.1
1
B1 :D
1 1
:D 0.25
q1 :D
B1
aP1k :D Batchpareto q0 , k, 0
ap2k :D Batchpareto q1 , k, 1
xk :D k
y1 :D infiniteQmaxX, aP1,
y2 :D infiniteQmaxX, aP2,
Figure 17.7.
distribution, i.e.
1
x 0.5
1
x C 0.5
Note that F1 D 0, i.e. the probability that an arriving batch is less than
or (exactly) equal to 1 packet is zero. Remember this is for a continuous
distribution; so, for the discrete case of a batch size of one packet,
296
SELF-SIMILAR TRAFFIC
we have
1
1.5
0.25
D
B
B
100
101
100
3 4 5 6 7 101
Size
3 4 5 6 7 102
3 4 5 6 7 103
= 1.9
= 1.1
Probability
102
103
104
105
106
107
108
Figure 17.8. Comparison of Power-Law Decays for Arrival (Thin) and Queue-State
(Thick) Probability Distributions
297
100
3 4 5 6 7 101
Size
2 3 4 5 6 7 102
3 4 5 6 7 103
100
101
= 1.9
= 1.1
Probability
102
103
104
105
106
107
108
BatchparetoTrunc q , k , , X :D 1 qif k D 0
1
1
1.5
q
if k D 1
1
1
XC0.5
1
1
k0.5
kC0.5
q
1
1
X C 0.5
0 if k > X
Xlimit :D 500
aP1k :D BatchParetoTrunc q0 , k, 0 , Xlimit
aP2k :D BatchParetoTrunc q1 , k, 1 , Xlimit
alt1 :D
k.aP1k
if
k X k > 1
alt1 D 0.242
alt2 :D
k.aP2k
k
alt2 D 0.115
y1 :D infiniteQ maxX, aP1, alt1
y2 :D infiniteQ maxX, aP2, alt1
Figure 17.9. Effect of Truncated Power-Law Decays for Arrival (Thin) and Queue-State (Thick)
Probability Distributions
298
SELF-SIMILAR TRAFFIC
Now that we have prepared the arrival distribution, we can put this
directly into the queueing analysis from Chapter 7. Figure 17.7 shows
the resulting queue state probabilities for both D 1.1 and 1.9. Note that
the queue-state probabilities have power-law decay similar to, but not
the same as, the arrival distributions. This is illustrated in Figure 17.8,
which shows the arrival probabilities as thin lines and the queue-state
probabilities as thick lines.
From these results it appears that the advantage of having a large buffer
is somewhat diminished by having to cope with LRD traffic: no buffer
would seem to be large enough! However, in practice there is an upper
limit to the time scales of correlated traffic activity. We can model this by
truncating the Pareto distribution, and simply using the same approach
to the queueing analysis.
Suppose X is the maximum number of packets in a batch. Our truncated, discrete version of the Pareto distribution now looks like
1
1
1
1
xD1
0.5
X C 0.5
1
1
1
bx D
1
Xx>1
x 0.5
x C 0.5
X C 0.5
0
x>X
Note that, because of the truncation, the probability density needs to be
conditioned on what remains, i.e.
1
1
X C 0.5
Figure 17.9 shows the result of applying this arrival distribution to the
queueing analysis from Chapter 7. In this case we have the same values
as before for , i.e. D 1.1 and 1.9, and we set X D 500. The load is
reduced because of the truncation to 0.115 and 0.242 respectively. The
figure shows both the truncated arrival distributions and the resulting
queue-state distributions. For the latter, it is clear that the power-law
decay begins to change, even before the truncation limit, towards an
exponential decay.
So, we can see that it is important to know the actual limit of the
ON period activity in the presence of LRD traffic, because it has such a
significant effect on the buffer size needed.
References
[1.1] Griffiths, J.M. (ed.), ISDN Explained: Worldwide Network and Applications Technology, John Wiley & Sons, ISBN 0 471 93480 1 (1992)
[1.2] Cuthbert, L.G. and Sapanel, J-C., ATM: the Broadband Telecommunications Solution, The Institution of Electrical Engineers, ISBN 0 85296 815 9 (1993)
[1.3] Thomas, S.A., IPng and the TCP/IP Protocols: Implementing the Next Generation
Internet, John Wiley & Sons, ISBN 0 471 13088 5 (1996)
[3.1] Flood, J.E., Telecommunications Switching, Traffic and Networks, Prentice Hall,
ISBN 0 130 33309 3 (1995)
[3.2] Bear, D., Principles of Telecommunication Traffic Engineering, Peter Peregrinus
(IEE), ISBN 0 86341 108 8 (1988)
[5.1] Law, A.M. and Kelton, W.D., Simulation Modelling and Analysis, McGraw-Hill,
ISBN 0 07 100803 9 (1991)
[5.2] Pitts, J.M., Cell-rate modelling for accelerated simulation of ATM at the burst
level, IEE Proceedings Communications, 142, 6, December 1995
[6.1] Cosmas, J.P., Petit, G., Lehnert, R., Blondia, C., Kontovassilis, K., and
Cassals, O., A review of voice, data and video traffic models for ATM,
European Transactions on Telecommunications, 5, 2, March 1994
[7.1] Pattavina, A., Switching Theory: Architecture and Performance in Broadband ATM
Networks, John Wiley & Sons ISBN 0 471 96338 0 (1998)
[8.1] Roberts, J.W. and Virtamo, J.T., The superposition of periodic cell arrival
processes in an ATM multiplexer, IEEE Trans. Commun., 39, 2, pp. 298303
[8.2] Norros, I., Roberts, J.W., Simonian, A. and Virtamo, J.T., The superposition of
variable bit rate sources in an ATM multiplexer, IEEE JSAC, 9, 3, April 1991,
pp. 378387
[8.3] Schormans, J.A., Pitts, J.M., Clements, B.R. and Scharf, E.M., Approximation
to M/D/1 for ATM CAC, buffer dimensioning and cell loss performance,
Electronics Letters, 32, 3, 1996, pp. 164165
[9.1] Onvural, R., Asynchronous Transfer Mode Networks: Performance Issues, Artech
House (1995)
[9.2] Schormans, J.A., Pitts, J.M. and Cuthbert, L.G., Exact fluid-flow analysis of
single on/off source feeding an ATM buffer, Electronics Letters, 30, 14, 7 July
1994, pp. 11161117
[9.3] Lindberger, K., Analytical methods for the traffical problems with statistical multiplexing in ATM networks, 13th International Teletraffic Congress,
Copenhagen 1991, 14: Teletraffic and datatraffic in a period of change
[10.1] ITU Recommendation I.371 TRAFFIC CONTROL AND CONGESTION CONTROL IN
B-ISDN, August 1996
300
REFERENCES
Index
Asynchronous Transfer Mode (ATM)
standards 58
technology of 3, 8
time slotted nature 8
Switches 11
Bernoulli process
with batch arrivals 19, 87, 98
binomial distribution 16, 89
distribution of cell arrivals 16
formula for 16
blocking
connection blocking 47
buffers
balance equations for buffering 99
delays in 110
at output 99
finite 104
sharing and partitioning 273, 279
see also priorities
virtual buffers in IP 273
burst scale queueing 125
by simulation 77
in combination with cell scale 187
using large buffers 198
call
arrival rate 16, 47
average holding time 47
capacity 49
cell 7
loss priority bit 205
loss probability 104, 236, 245
see also queue
switching 97
cell scale queueing 113, 121
by simulation 73
in combination with burst scale 187
channel 5
circuits 5
circuit switching 3
connection admission control (CAC) 149
a practical scheme 159
CAC in the ITU standards 165
equivalent cell rate and linear CAC 160
using 2 level CAC 160
via burst scale analysis 39, 157, 161
via cell scale analysis 37, 152, 153
using M/D/1 152
using N.D/D/1 153
connectionless service 10
constant bitrate sources (CBR) 113, 125, 150
multiplex of 113
cross connect 9
deterministic bitrate transfer capability (DBR)
150
DIFFSERV 12, 253
Differentiated performance 32
Decay rate analysis 234
as an approximation for buffer overflow
236
used in Dimensioning 42, 187
of buffers 192
effective bandwidth see CAC, equivalent cell
rate
equivalent bandwidth see CAC, equivalent
cell rate
Erlang
Erlangs lost call formula 42, 52
traffic table 54
302
Excess Rate analysis (see also GAPP) 22, 27
for multiplex of ON/OFF sources 27
for VoIP 261
for RED 271
exponential distribution
formula for 16
of inter-arrival times 16
Geometrically Approximated Poisson Process
(GAPP) 22, 23, 240
GAPP approximation for M/D/1 queue
239
GAPP approximation for RED in IP 271
GAPP approximation for buffer sharing
279
geometric distribution 86
formula for 86
of inter-arrival times 86
of number of cell arrivals / empty slots 87
with batch arrivals 87
Internet Protocol (IP) 10, 58
IP queueing see Packet Queueing
IP source models, see sources models for IP
IP best effort traffic, ER queueing analysis
245
IP packet flow aggregation 254, 255
IP buffer management 267
IP virtual buffers 272
IP buffer partitioning 275
IP buffer sharing 279
INTSERV 12, 253
label multiplexing see packet switching
load 47
Long Range Dependency in Traffic 287
in queue model 293
mesh networks 45
MPLS 12
multiplexors
octets 5
packet queueing 229
for variable length packets 247
packet switching 5, 229
Pareto distribution 17, 289
in queueing model 30, 293
performance evaluation 57
INDEX
by analysis 58
by measurement 57
by simulation 57
Poisson 16
distribution of number of cell arrivals per
slot 16, 86
distribution of traffic 51, 86
formula for distribution 16
position multiplexing 4
priorities 32, 205
space priority and selective discarding
205
partial buffer sharing 207
via M/D/1 analysis 207
push out mechanism 206
precedence queueing in IP 273
time priority 35, 218
distribution of waiting times 2/22, 220
via mean value analysis 219
via M/D/1 analysis 220
QoS 253
queue
cell delay variation 68
cell delay in ATM switch 68, 108
cell loss probability in 104
via simulation 73
cell waiting time probability 108, 220
customers of 58
deterministic see M/D/1
discard mechanisms 32
fluid-flow queueing model 129
continuous analysis 129
discrete analysis 131
M/D/1 21, 22, 66, 117
delay in 108
heavy traffic approximation 117
mean delay in 66
M/M/1 19, 62
mean delay in 61
mean number in 62
system size distribution 63
summary of formulae 18
N.D/D/1 21, 115
heavy traffic approximation 117
queueing
burst scale queueing behaviour 127
queue state probabilities 98
queueing system, instability of 100
steady state probabilities, meaning of 99
303
INDEX
output buffered 97
statistical bitrate transfer capability (SBR)
10/2
sustainable cell rate 150
Synchronous Digital Hierarchy (SDH) 5
teletraffic engineering 45
time division multiplexing 3
timeslot 3
traffic
busy hour 51
carried 47
conditioning of aggregate flows in IP
intensity 47, 50
levels of behaviour 81
lost 47
models, see sources
offered 47
traffic shaping 182
traffic contract 150
usage parameter control (UPC) 13, 167
by controlling mean cell rate 168
by controlling the peak cell rate 173
by dual leaky buckets (leaky cup and
saucer) 40, 182
by leaky bucket 172
by window method 172
Tolerance problem 176
worst case cell streams 178
variable bitrate services (VBR) 150
virtual channels 9, 205
virtual channel connection 9, 205
virtual paths 9, 205
Voice over IP (VoIP) 239
basic queueing model 239
advanced queueing model 261
Weighted Fair Queueing (WFQ)
274
265