EPMS: Process Analysis and Simulation

This document discusses approaches to analyzing business processes, including queueing theory, simulation modeling, and knowledge-based approaches. It then focuses on using queueing theory to model business processes, where activities are modeled as queues and resources (human or machines) perform transformations on job units as they move through the process. Analytical queueing models provide insights but have limitations due to assumptions. Simulation offers more flexibility to model complex, variable business processes and enables experimentation through "what if" scenarios.

14

EPMS for Business Process Analysis

There are three general approaches to process analysis and design: an analytical approach,
a computational approach, and a qualitative or knowledge-based approach. First, queue-
ing theory provides a framework to analyze business processes. As an analysis tool, it
has a benefit in that an analysis team can quickly build a model and obtain results. The
drawbacks are that queueing models can be mathematically complex, are approximate
for more complicated systems, and the results are only valid if the actual system being
studied matches the underlying assumptions of the queueing model. Second, within com-
putational approaches, there are three main simulation models: continuous simulation,
discrete-event simulation, and agent-based simulation. The benefits of simulation are that
a system of almost any complexity could, in theory, be modeled accurately. A drawback is
that simulation modeling often requires significant expertise and time to both develop a
model and to analyze it. Also, obtaining greater accuracy in the results requires securing more accurate data than might be called for in analytical approaches. Third,
a knowledge-based approach uses rules or heuristics based on best practices to guide the
analysis. There are many heuristics that can be applied; it is through the knowledgeable
application of these rules that the process can be improved.
Since a business process is a set of activities, business processes can be modeled with
queuing theory. In business processes, a flow or job unit is routed from activity to activity
and, at each activity, some transformation is done to the job until the job finally departs
the process. Each activity of the process is performed by a resource, either of a human or
machine nature. If the resource is busy when the job arrives, then the job must wait in a
queue until the resource becomes available. The benefits of applying queueing theory to analyze business processes are that, first, it provides the analyst with insight into the performance of business processes and, second, the performance analysis can be conducted rapidly, allowing for fast generation of alternative process designs.
Analytical queuing models offer powerful means for understanding and evaluating
queuing processes. However, the use of these analytical models is somewhat restricted
by their underlying assumptions. The limitations pertain to the structure of the queuing
system, the way variability can be incorporated into the models, and the focus on steady-
state analysis. Because many business processes are cross-functional and characterized
by complex structures and variability patterns, a more flexible modeling tool is needed.
Simulation, discussed in the latter half of this chapter, offers this flexibility and represents
a powerful approach for analysis and quantitative evaluation of business processes.
Simulation is a technique that enables us to define and launch an imitation of the behav-
ior of a certain real system in order to analyze its functionality and performance in detail.
For this purpose, real-life input data is required and collected for use in running, observing
the system’s behavior over time, and conducting different experiments without disturb-
ing the functioning of the original system. One of the most important properties of the
simulation technique is to enable experts to carry out experiments on the behavior of a sys-
tem by generating various options of “what if” questions. This characteristic of simulation
makes it possible to explore ideas and create different scenarios based on an

understanding of the system’s operation and deep analysis of the simulation output results.
This actually represents simulation’s main advantage, which consequently has led to the
widespread use of the technique in various fields for both academic and practical purposes.
Process simulation can be the primary source of copious amounts of process data
under differing experimental conditions and parameters, which can then be mined
and analyzed for its characteristics and patterns. Process mining and process analy-
sis are covered in several publications like W. M. P. van der Aalst (2002), M. Dumas,
M. La Rosa, J. Mendling, and H. Reijers (2013) and M. Laguna and J. Marklund (2013).

14.1 Queuing Systems


We come across people "queuing" for many activities in daily life: consulting in a medical clinic, rationing in a ration shop, the issue of cinema tickets, the issue of rail or airline tickets, and so on. The arriving people are called "customers," while the person issuing the ticket is referred to as the "server." There can be more than one queue and more than one server in many cases, for example, at the outpatient department (OPD) or at rail and bus ticket counters. If the server is free at the time of arrival of a customer, the customer can be serviced immediately. If there are a number of people, a waiting line and consequently waiting time come into operation. There can also be server idle time.
Queuing theory was originally developed by Agner Krarup Erlang in 1909. Erlang was
a Danish engineer who worked in the Copenhagen telephone exchange. While studying
the telephone traffic problem, he used a Poisson process as the arrival mechanism and, for
the first time, modeled the telephone queues as an M/D/1 queuing system. Ever since the
first queuing model, queuing theory has been well-developed and extended to many com-
plex situations, even with complicated queuing networks. These models, together with
the advancement of computer technology, have been used widely now in many fields and
have shown significant benefits in optimizing behavior within these systems.
The entities that request services are called customers, and the processes that provide
services and fulfill customers’ needs are called service channels. It is obvious that capacity is
the key factor influencing system behavior. If the service channels have too little capacity to satisfy customer demand, then a waiting line will form and the system may become more and more crowded; thus, the quality of service will be degraded and many customers might choose to leave the system before getting served. From the standpoint of customers, the more service channel capacity, the better; this implies less waiting and higher service quality. On the other hand, if the service channels
have more capacity than needed, then, from the service provider’s perspective, more ser-
vice channels predominantly mean more investment, capital expenditure, and human labor
involved, which increases the operations costs of the service or the manufacturing process.
Thus, one of the most important purposes of studying queuing theory is to find a balance
between these two costs, i.e., the waiting cost and the service cost. If the customer waits for too
long, he/she may not be happy with the service and thus might not return in the future, caus-
ing loss of potential profit; or, conversely, parts may be waiting too long, increasing production
cycle time and thus again losing potential sales and profits. These costs are considered to be
waiting costs. Service costs are those that increase service capacity such as salary paid to the
servers. Queuing theory application balances these two costs by determining the right level of

FIGURE 14.1
Total cost of queue operations versus process capacity.

service so that the total cost of the operations (waiting cost + service cost) can be optimized.
Figure 14.1 shows the schematic of the total cost of queue operations versus process capacity.
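The trade-off behind Figure 14.1 can be sketched numerically: given an estimate of the average queue length at each candidate capacity level, pick the capacity that minimizes total hourly cost. The cost rates and Lq estimates below are purely illustrative assumptions, not data from the text.

```python
# Sketch of the waiting-cost vs. service-cost trade-off from Figure 14.1.
wait_cost_per_job_hour = 25.0   # cost of one job waiting one hour (assumed)
cost_per_server_hour = 15.0     # cost of staffing one server (assumed)

# Hypothetical average queue lengths (Lq) for 1..5 servers.
lq_by_servers = {1: 8.1, 2: 2.4, 3: 0.9, 4: 0.4, 5: 0.1}

def total_cost(servers: int) -> float:
    """Total hourly cost = waiting cost + service cost."""
    return (wait_cost_per_job_hour * lq_by_servers[servers]
            + cost_per_server_hour * servers)

best = min(lq_by_servers, key=total_cost)
for c in sorted(lq_by_servers):
    print(c, round(total_cost(c), 1))
print("lowest-cost capacity:", best)
```

With these assumed numbers, adding servers first drives total cost down (waiting cost falls quickly) and then back up (service cost dominates), which is exactly the U-shaped curve of Figure 14.1.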

14.1.1 Queuing Process


1. Calling population: This represents a group of flow units, for example customers,
some of whom are associated with the queuing system for predetermined pur-
poses. This means that a flow unit joins one or more queues of flow units waiting
in different buffers within a certain process.
The calling population may contain flow units of the same type or flow units of
different types. The first is called a homogeneous calling population, whereas the
second is called a heterogeneous calling population. Calling populations are usu-
ally heterogeneous. For example, patients enter a hospital for different purposes,
i.e., different health problems; to this regard, such a calling population consists of
different subpopulations.
The calling population could be understood as an infinite or finite population.
The calling population is considered as infinite when a flow unit joins a queue of
such a large number of flow units that the queuing system is not affected by the
arrival of the new flow unit (for example, a patient joins a waiting list of patients
in a long queue that may last months of waiting for the purpose of undergoing an
operation). Otherwise, the calling population is considered finite if the criterion
for an infinite population is not valid (for example, a patient joins a queue of a
number of patients in a waiting room to see a specialist physician).
2. Arrival process: This represents a determined way or path that every flow unit
should follow after entering the queuing process. The path for each flow unit is
determined depending on the type of flow unit. Such a path consists of a number
of activities through which the flow unit passes.
There are queues, particularly when people (customers) as flow units form the
queue, in which the person who joins the queue may make a decision of one of the
following three possible actions:

• Balk: the person decides not to join the queue because of the large number of people already in it.
• Renege: the person joins the queue but, after some time, decides to leave it.
• Wait: the person decides to stay in the queue regardless of the time spent waiting.

3. Queue configuration: This indicates the type of queue that the flow unit joins,
which determines the requirements to join a queue and the behavior of a flow unit
or customer who joins the queue. There are two types of queues, as follows:
• Single-line queue configuration requires that a flow unit joins the queue at the
end of a single line and the flow unit is served after all the flow units before it
are served.
• The multiple-line queue configuration enables the flow unit to choose one of
several queue lines.
4. Queue discipline: Queue discipline represents the rule or discipline used to choose
the next flow unit for serving; there are different disciplines used depending on the
purpose of the queuing system. The most commonly used rule is known as first-in-
first-out. Some queue disciplines also use priority of the flow unit to select the next to
be served. This rule is, for example, used in medical institutions, where patients with
life-threatening conditions or children have the highest priority to be served first.
5. Service mechanism: This consists of a number of servers that perform a set of tasks on the flow unit within a process. The flow unit enters the service facility when a resource is available to provide the flow unit with the service it needs. The time spent by the server performing the work on the flow unit is called the service time.

14.2 Queuing Models


Whenever an OPD is scheduled, one witnesses large queues at the OPD section prior to the start of the OPD hours to register for patient number tokens. The actors in this situation are the patients arriving to get tokens for the physician's consultation, and the registration counter at the hospital provides the service by issuing tokens. The arrival process is represented by the interarrival times, while the service process is represented by the service time per patient. The interarrival time is the time between successive patient arrivals. The service time per patient is the time taken to provide a patient the expected service.
In the earlier OPD clinic example, the interarrival time may be very long during an OPD off day and short during an OPD scheduled day. Similarly, the service rate will be high during OPD scheduled days and slightly lower during OPD off days. In other words, the interarrival times and service rates are probabilistic.
Queue discipline represents the manner in which the customers waiting in a queue are
served. It may be:

• First-come-first-serve
• Service in random order
• Last-come-first-serve

If there are multiple queues, customers often switch to the queue whose length is smaller. This is known as jockeying. Sometimes, customers turn away from the queue upon seeing its length. This is known as balking. If customers wait for a long time in the queue without being serviced, they may leave. This is known as reneging.
The field of queueing theory has developed a taxonomy to describe systems based on their arrival process, service process, and number of servers, written as arrival/service/number of servers. The basic notation, widely used in queueing theory, is composed of three symbols separated by forward slashes. The values for the symbols are:

• M for Poisson or exponential distributions
• D for deterministic (constant) distributions
• E for Erlang distributions
• G for general distributions (any arbitrary distribution)
• GI for general independent in the case of arrival rates

There are varieties of queuing models that arise from the elements of a queue that are
described next.

14.2.1 Model I: Pure Birth Model


The pure birth model considers only arrivals, and the inter-arrival time in the pure birth
model is explained by the exponential distribution. Birth of babies is a classical example
for the pure birth model.
Let P0(t) be the probability of no customer arrivals during a period of time t.
Let Pn(t) be the probability of n arrivals during a period of time t.
As P0(t) = e^(−λt),

Pn(t) = (λt)^n e^(−λt) / n!,  n = 0, 1, 2, …

This is a Poisson distribution with mean E{n | t} = λt arrivals during a period of time t.
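The Poisson probabilities above are easy to check numerically; in this sketch the rate λ = 3 arrivals per hour and the horizon t = 2 hours are illustrative assumptions:

```python
import math

def p_n(n: int, lam: float, t: float) -> float:
    """Pure birth model: probability of exactly n arrivals in a period of length t."""
    return (lam * t) ** n * math.exp(-lam * t) / math.factorial(n)

lam, t = 3.0, 2.0  # assumed arrival rate (per hour) and horizon (hours)

# The probabilities sum to 1 and the mean count equals λt (= 6 here).
total = sum(p_n(n, lam, t) for n in range(100))
mean = sum(n * p_n(n, lam, t) for n in range(100))
print(round(total, 6), round(mean, 6))
```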

14.2.2 Model II: Pure Death Model


The pure death model is the converse of the pure birth model, in the sense that only departures are considered in this model. Here, the system starts with N customers at time 0, and no further arrivals are permitted to the system. The departure rate is μ customers per unit time.
Pn(t) is the probability of n customers in the system after t time units.
So,

Pn(t) = (μt)^(N−n) e^(−μt) / (N − n)!,  n = 1, 2, …, N

and

P0(t) = 1 − Σ (from n = 1 to N) Pn(t)

14.2.3 Model III: Generalized Poisson Queuing Model


This model considers both interarrival time and service time, and both these times fol-
low exponential distribution. During the early operation of the system, it will be in the
transient state. On the contrary, if the system is in operation for a long time, it attains
a steady state. In this model, both the interarrival and the service time exhibit state
dependency.
Assuming n as the number of customers in the system (“system” referring to those cus-
tomers who are waiting for service and who are being serviced), then:

λ n is the arrival rate where there are n customers in the system already
µ n is the service rate where there are n customers in the system already
Pn is the steady-state probability of n customers in the system

All of the earlier steady-state probabilities help in determining the different parameters for
the model such as average queue length, waiting time in the system, and various measures
of system’s performance.
Then, with

Pn = [λn−1 λn−2 … λ0 / (μn μn−1 … μ1)] P0,  n = 1, 2, …

we can determine P0 from the normalizing condition Σ (from n = 0 to ∞) Pn = 1.
For n = 0,

P1 = (λ0/μ1) P0

For n = 1,

λ0 P0 + μ2 P2 = (λ1 + μ1) P1

Substituting the value of P1, we get

P2 = (λ1 λ0 / μ2 μ1) P0
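This recursion can be evaluated numerically for arbitrary state-dependent rates by truncating the state space at a finite cap; the rate functions below are illustrative assumptions:

```python
def birth_death_probs(lam, mu, max_n: int):
    """Steady-state probabilities of a birth-death queue, truncated at max_n.
    lam(n): arrival rate with n customers present; mu(n): service rate with n present."""
    weights = [1.0]  # un-normalized ratios P_n / P_0 built by the recursion
    for n in range(1, max_n + 1):
        weights.append(weights[-1] * lam(n - 1) / mu(n))
    p0 = 1.0 / sum(weights)
    return [w * p0 for w in weights]

# Illustrative state-dependent rates: arrivals slow down as the system fills.
probs = birth_death_probs(lam=lambda n: 4.0 / (n + 1), mu=lambda n: 5.0, max_n=50)
print(round(sum(probs), 6))   # probabilities are normalized
print(round(probs[0], 4))     # steady-state probability of an empty system
```

Note that P1/P0 equals λ0/μ1 by construction, matching the hand derivation above.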

14.2.4 Single-Server Models


The basic queuing models can be classified using Kendall notation, which employs six parameters to define a model, (P/Q/R) : (X/Y/Z).
The parameters of the notation are as follows:

P is the distribution of the arrival rate.


Q is the distribution of the service rate.
R refers to the number of service channels providing the service.
X is the service discipline; it may be general, first-come-first-serve, service in random
order, or last-come-first-serve.
Y is the maximum number of customers allowed to stay in the system at any point
in time.
Z is the calling source size.

Model IV (M/M/1): (GD/∞/∞)


The features of this model are as follows:

1. There is a single service channel providing the service.


2. Arrival rate or input follows Poisson distribution.
3. The service rate is exponentially distributed.
4. There is no limit on the system’s capacity.
5. Customers are served on a first-come-first-served basis.

Assuming,

1. λ: arrival rate of customers (number/hour)


2. μ: service rate (number/hour)
3. T: mean time between arrivals = 1/λ
4. t: average time of servicing = 1/μ
5. ρ(rho): utilization factor or traffic intensity = λ/μ
6. P0: idling factor of the facility; that is, the probability of providing the service right away to an arriving customer without any wait = 1 − ρ
7. Pn: probability that there are n customers waiting in the system for service.
8. N: number of customers in the system
9. Lq: length of the queue or average number of customers in the queue waiting for
service
10. Ls: average number of customers in the system (both at service counters and in the
queue)
11. Wq: average waiting time in the queue
12. Ws: average waiting time in the system
13. W | W > 0: expected waiting time of a customer who has to wait
14. L | L > 0: expected length of a nonempty queue

The formula list for the model (M/M/1) : (GD/∞/∞) is given in the following:

1. P0 = 1 − (λ/μ) = 1 − ρ

2. Pn = (λ/μ)^n (1 − λ/μ) = ρ^n (1 − ρ)

3. Probability of queue size greater than or equal to n: P(Q ≥ n) = ρ^n

4. Ls = λ/(μ − λ)

5. Lq = [λ/(μ − λ)] × (λ/μ) = λ²/[μ(μ − λ)]

6. Ws = Ls × (1/λ) = 1/(μ − λ)

7. Wq = λ/[μ(μ − λ)]

8. L | L > 0 = μ/(μ − λ)

9. Average waiting time in the nonempty queue, W | W > 0 = 1/(μ − λ)

10. Probability of an arrival waiting for t minutes or more = ρ e^(−(μ−λ)t)
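As a quick numeric sketch, these formulas can be evaluated and cross-checked against Little's law (Ls = λWs and Lq = λWq); the rates λ = 8 and μ = 10 per hour are illustrative assumptions:

```python
# M/M/1 : (GD/inf/inf) performance measures; lam and mu are assumed example rates.
lam, mu = 8.0, 10.0              # arrivals/hour, services/hour (requires lam < mu)
rho = lam / mu                   # utilization (traffic intensity)

p0 = 1 - rho                     # probability the system is empty
ls = lam / (mu - lam)            # average number in the system
lq = lam ** 2 / (mu * (mu - lam))  # average number in the queue
ws = 1 / (mu - lam)              # average time in the system
wq = lam / (mu * (mu - lam))     # average waiting time in the queue

# Little's law ties the counts and times together: Ls = lam*Ws, Lq = lam*Wq.
print(p0, ls, lq, ws, wq)
```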

Model V (M/M/1): (GD/N/∞)


The features of this model are:

1. A single-service channel to provide the service


2. Both the arrival rate and service rate follow Poisson distribution
3. The number of customers allowed in the system cannot exceed N at any point of time

The formula list is given in the following:

1. (a) Pn = [(1 − λ/μ)/(1 − (λ/μ)^(N+1))] × (λ/μ)^n,  λ/μ ≠ 1,  n = 0, 1, 2, …, N

       = [(1 − ρ)/(1 − ρ^(N+1))] × ρ^n,  ρ ≠ 1,  n = 0, 1, 2, …, N

   (b) If ρ = 1, that is, 100% utilization,

       Pn = 1/(N + 1)

2. Effective arrival rate of customers (λe):

   λe = λ(1 − PN) = μ(Ls − Lq)

3. (a) Ls = ρ[1 − (N + 1)ρ^N + Nρ^(N+1)] / [(1 − ρ)(1 − ρ^(N+1))], if ρ ≠ 1

   (b) If ρ = 1, Ls = N/2

4. Lq = Ls − λe/μ = Ls − λ(1 − PN)/μ

5. Ws = Ls/λe = Ls/[λ(1 − PN)]

6. Wq = Lq/λe = Lq/[λ(1 − PN)] = Ws − 1/μ
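A minimal numeric sketch of the finite-capacity model; λ = 8, μ = 10, and N = 5 are illustrative assumptions:

```python
# M/M/1 : (GD/N/inf): single server, at most N customers in the system.
lam, mu, cap = 8.0, 10.0, 5   # assumed example rates and system limit N
rho = lam / mu

# P_n = (1 - rho) / (1 - rho**(N+1)) * rho**n  (valid for rho != 1)
pn = [(1 - rho) / (1 - rho ** (cap + 1)) * rho ** n for n in range(cap + 1)]

lam_e = lam * (1 - pn[cap])                  # effective arrival rate
ls = sum(n * p for n, p in enumerate(pn))    # average number in the system
lq = ls - lam_e / mu
ws = ls / lam_e
wq = lq / lam_e                              # equals ws - 1/mu
print(round(sum(pn), 6), round(ls, 4), round(wq, 4))
```

Since arrivals that find the system full are lost, λe is strictly smaller than λ here.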

14.2.5 Multiple-Server Models


In this case, there will be C counters or servers in parallel to serve the customers. A customer has the option of choosing the server that is free, or that has the least number of people waiting for service. Arrivals follow a Poisson process with mean rate λ, while service times are exponentially distributed with mean rate μ.

Model VII (M/M/C): (GD/∞/∞)


The features of this model are as follows:

1. Arrival rate and service rate follow Poisson distribution.


2. There are C serving channels.
3. An infinite number of customers is permitted in the system.
4. GD stands for general discipline servicing.

The formula list is given in the following (with ρ = λ/μ):

1. Pn = (ρ^n/n!) P0 for 0 ≤ n ≤ C

   Pn = [ρ^n/(C^(n−C) C!)] P0 for n > C

2. P0 = [ Σ (from n = 0 to C−1) ρ^n/n! + (ρ^C/C!) × 1/(1 − ρ/C) ]^(−1)

3. Lq = [Cρ/(C − ρ)²] PC

4. Ls = Lq + ρ

5. Wq = Lq/λ

6. Ws = Wq + 1/μ

Morse (1998) shows that, for (M/M/C) : (GD/∞/∞),

Lq → ρ/(C − ρ) as ρ/C → 1
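A small sketch evaluating these multiple-server formulas; λ = 8, μ = 3, and C = 4 are illustrative assumptions (stability requires ρ < C):

```python
import math

def mmc_measures(lam: float, mu: float, c: int):
    """Steady-state measures for an M/M/C queue with offered load rho = lam/mu < C."""
    rho = lam / mu
    assert rho < c, "system must be stable (rho < C)"
    p0 = 1.0 / (sum(rho ** n / math.factorial(n) for n in range(c))
                + rho ** c / math.factorial(c) * 1.0 / (1 - rho / c))
    pc = rho ** c / math.factorial(c) * p0     # probability of exactly C in system
    lq = c * rho / (c - rho) ** 2 * pc         # average queue length
    ls = lq + rho                              # average number in the system
    wq = lq / lam
    ws = wq + 1 / mu
    return p0, lq, ls, wq, ws

p0, lq, ls, wq, ws = mmc_measures(lam=8.0, mu=3.0, c=4)
print(round(lq, 4), round(ls, 4))
```

With C = 1 the formulas collapse to the single-server results of Model IV, which makes a handy sanity check.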

Model VIII (M/M/C): (GD/N/∞)


The features of this model include:

1. The system limit is finite and is equal to N.


2. The arrival rate and service rate follow Poisson distribution.
3. The maximum queue length is N − C.
4. There are C service channels.

Here, λe < λ because of the finite limit N.

The generalized model can be defined as

λn = (N − n)λ for 0 ≤ n ≤ N
λn = 0 for n ≥ N

μn = nμ for 0 ≤ n ≤ C
μn = Cμ for C ≤ n ≤ N
μn = 0 for n ≥ N


The formula list is given in the following (NCn denotes the binomial coefficient):

1. Pn = NCn ρ^n P0 for 0 ≤ n ≤ C

   Pn = NCn [n! ρ^n/(C! C^(n−C))] P0 for C ≤ n ≤ N

2. P0 = [ Σ (from n = 0 to C) NCn ρ^n + Σ (from n = C+1 to N) NCn n! ρ^n/(C! C^(n−C)) ]^(−1)

3. Lq = Σ (from n = C+1 to N) (n − C) Pn

4. Ls = Lq + λe/μ

5. λe = μ(C − C1), where C1 = Σ (from n = 0 to C) (C − n) Pn; equivalently, λe = λ(N − Ls)

6. Ws = Ls/λe

7. Wq = Lq/λe
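Because these expressions are tedious to evaluate by hand, the sketch below builds Pn directly from the state-dependent rates λn = (N − n)λ and μn = min(n, C)μ, then recovers Ls, Lq, and λe. The parameter values are illustrative assumptions:

```python
def finite_source_mmc(lam: float, mu: float, c: int, n_max: int):
    """Multi-server model with state-dependent rates:
    lam_n = (N - n) * lam, mu_n = min(n, C) * mu, for n = 0..N."""
    weights = [1.0]  # un-normalized P_n / P_0 from the birth-death recursion
    for n in range(1, n_max + 1):
        lam_prev = (n_max - (n - 1)) * lam
        mu_n = min(n, c) * mu
        weights.append(weights[-1] * lam_prev / mu_n)
    p0 = 1.0 / sum(weights)
    pn = [w * p0 for w in weights]
    ls = sum(n * p for n, p in enumerate(pn))                 # average in system
    lq = sum((n - c) * p for n, p in enumerate(pn) if n > c)  # average in queue
    lam_e = lam * (n_max - ls)                                # effective arrival rate
    return pn, ls, lq, lam_e

# Assumed example: N = 6 sources, C = 2 servers, lam = 0.5, mu = 4.0.
pn, ls, lq, lam_e = finite_source_mmc(lam=0.5, mu=4.0, c=2, n_max=6)
print(round(sum(pn), 6), round(ls, 4), round(lq, 4))
```

The identity Ls = Lq + λe/μ from the formula list holds for the computed values, which is a useful consistency check on the recursion.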

14.3 Simulation
For queuing systems such as the M/M/C queue, the analytical models are well-developed
and we can predict their steady-state performance without too much difficulty. We made many assumptions (e.g., a Poisson arrival process) about the analytical queuing models in the previous sections in order to simplify the problem so that a mathematical model could be
formulated. However, in the real world, the situation becomes more dynamic and compli-
cated than those mathematical models can handle; even though queuing network models
are available, many situations are simply beyond the capabilities of the analytical math-
ematical model.
When modeling these systems, information such as shift patterns, lunch breaks, machine
breakdowns, arrival rates, and so forth cannot be ignored, as they will have significant
impacts on system performance; also, some systems might never arrive at a steady state
and do not operate on a 24/7 basis. So, it is nearly impossible to study these types of sys-
tems using queuing theory and models; a theoretical solution for those queuing systems
would be difficult to obtain. An alternative to the mathematical model is to use the simula-
tion model instead.
Simulation is the imitation or representation of the behavior of some real thing, state
of affairs, or process. The act of simulating something generally entails representing cer-
tain key characteristics or behaviors of a selected physical or abstract system. To simulate
means to mimic the reality in some way; simulation is a system that represents or emulates
the behavior of another system over time. More specifically, a simulation is the imitation of
the operation of a real-world process or system over time.
Characteristics of simulation:

• Simulation enables the study and experimentation of the internal interactions of a complex system or a subsystem.
• The knowledge gained in designing a simulation model may be of great value
toward suggesting improvement in the system under investigation.
• By changing simulation inputs and observing the corresponding outputs, valu-
able insight may be obtained into which variables are most important and how
they influence other variables.
• Simulation can be used to experiment with new designs or policies prior to imple-
mentation, so as to prepare for what may happen in real life.
• Simulation can be used to verify analytic solutions.
• Simulation models designed for training allow learning to occur without cost or
disruption.
• Animation shows a system in simulated operation so that the plan can be
visualized.

The advantages of simulation include:

1. A simulation can help in understanding how the system operates.


2. Bottleneck analysis can be performed, indicating where work-in-process, informa-
tion, materials, and so on are being excessively delayed.
3. What-if questions can be answered, which is useful in the design of new
systems.
4. New policies, operating procedures, decision rules, information flows, organi-
zational procedures and so on can be explored without disrupting the ongoing
operations of the real system.
5. Time can be compressed or expanded, allowing for a speed-up or slowdown of the
phenomena under investigation.

14.3.1 Simulation Models


The rapid development of computer hardware and software in recent years has made com-
puter simulation an effective tool for process modeling and an attractive technique for
predicting the performance of alternative process designs. It also helps in optimizing their
efficiency. The main advantage of simulation is that it is a tool that compresses time and
space, thus enabling a robust validation of ideas for process design and improvement.
Modern simulation software in a sense combines the descriptive strength of the symbolic
models with the quantitative strength of the analytical models. It offers graphical repre-
sentation of the model through graphical interfaces, as well as graphical illustration of the
system dynamics through plots of output data and animation of process operations. At
the same time, it enables estimation of quantitative performance measures through the
statistical analysis of output data. The main disadvantage of simulation is the time spent
learning how to use the simulation software and how to interpret the results.
Until recently, simulation software packages could be used only as what-if tools. This
means that, given a simulation model, the designer would experiment with alternative
designs and operating strategies in order to measure system performance. Consequently,
in such an environment, the model becomes an experimental tool that is used to find
an effective design. However, modern simulation software packages merge optimization
technology with simulation. The optimization consists of an automated search for the best
values (near-optimal values) of input factors (the decision variables). This valuable tool
allows designers to identify critical input factors that the optimization engine can manipu-
late to search for the best values. The best values depend on the measure of performance
that is obtained after one or several executions of the simulation model.
A simulation model is a tool for evaluating a given design, while an optimization model is a tool used to search for an optimal solution to a decision problem; that is, a simulation model is, by nature, descriptive, and an optimization model is, by nature, prescriptive.

Discrete-Event Simulation


Business processes usually are modeled as computer-based, dynamic, stochastic, and dis-
crete simulation models. The most common way to represent these models in a computer
is using discrete-event simulation. In simple terms, discrete-event simulation describes
how a system with discrete flow units or jobs evolves over time. Technically, this means
that a computer program tracks how and when state variables such as queue lengths and
resource availabilities change over time. The state variables change as a result of an event
(or discrete event) occurring in the system.
Because the state variables change only when an event occurs, a discrete-event simula-
tion model examines the dynamics of the system from one event to the next. That is, the
simulation moves the simulation clock from one event to the next and considers that the
system does not change in any way between two consecutive events. The simulation keeps
track of the time when each event occurs but assumes that nothing happens during the
elapsed time between two consecutive events.
Discrete-event models focus only on the time instances when these discrete events
occur. This feature allows for significant time compression because it makes it pos-
sible to skip through all of the time segments between events in which the state of
the system remains unchanged. Consequently, in a short period of time, a computer can
simulate a large number of events corresponding to a lengthy real-time span.
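As a minimal sketch of this next-event time advance, the following simulates a single-server queue with exponential interarrival and service times, jumping the clock from one scheduled event to the next; the rates, seed, and run length are illustrative assumptions:

```python
import heapq
import random

def simulate_mm1(lam: float, mu: float, num_jobs: int, seed: int = 42) -> float:
    """Discrete-event simulation of a single-server queue; returns average delay."""
    rng = random.Random(seed)
    events = []  # priority queue of (time, kind); the clock jumps event to event
    clock, busy, waiting, delays, done = 0.0, False, [], [], 0
    heapq.heappush(events, (rng.expovariate(lam), "arrival"))
    while done < num_jobs:
        clock, kind = heapq.heappop(events)  # advance clock to the next event
        if kind == "arrival":
            heapq.heappush(events, (clock + rng.expovariate(lam), "arrival"))
            if busy:
                waiting.append(clock)        # remember when this job arrived
            else:
                busy = True
                delays.append(0.0)           # server idle: no delay
                heapq.heappush(events, (clock + rng.expovariate(mu), "departure"))
        else:  # departure
            done += 1
            if waiting:
                delays.append(clock - waiting.pop(0))  # waited from arrival to now
                heapq.heappush(events, (clock + rng.expovariate(mu), "departure"))
            else:
                busy = False
    return sum(delays) / len(delays)

print(round(simulate_mm1(lam=8.0, mu=10.0, num_jobs=20000), 2))
```

For these rates the analytical Wq of Model IV is λ/(μ(μ − λ)) = 0.4 hours, and a long simulation run should land close to that value.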
To illustrate the mechanics of a discrete-event simulation model, consider an informa-
tion desk with a single server. Assume that the objective of the simulation is to estimate the
average delay of a customer. The simulation then must have the following state variables:

1. Status of the server (busy or idle)


2. Number of customers in the queue
3. Time of arrival of each person in the queue

As the simulation runs, two events can change the value of these state variables; these are:

• Arrival of a customer, which either changes the status of the server from idle to
busy or increases the number of customers in the queue
• Completion of service, which either changes the status of the server from busy to
idle or decreases the number of customers in the queue

A single-server queuing process can be represented with a timeline on which the time of
each event is marked (Figure 14.2).
Assuming the following notation:

tj: arrival time of the jth job
Aj = tj − tj−1: time between the arrival of job j − 1 and the arrival of job j
Sj: service time for job j
Dj: delay time for job j
cj = tj + Dj + Sj: completion time for job j
ei: time of occurrence of event i

FIGURE 14.2
Events timeline for a single server.
EPMS for Business Process Analysis 343

Figure 14.2 shows a graphical representation of the events in a single-server process. This
example has six events, starting with event 0 and finishing with event 5. Event 0, e0, is the
initialization of the simulation. Event 1, e1, is the arrival of the first job, with arrival time
equal to t1. The arrival of the second job occurs at time t2. Because c1 > t2, the second job
is going to experience a delay. The delay D2 is equal to the difference between c1 and t2
(D2 = c1 − t2).
Further in Figure 14.2, the completion time for job 1 is calculated as c1 = t1 + S1, because this job does not experience any delay. The last event in this figure, labeled e5, is the completion time for job 2. In this case, the calculation of the completion time c2 includes the waiting time D2 (c2 = t2 + D2 + S2).
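The calculations above generalize to any number of jobs through the recurrence Dj = max(0, c(j−1) − tj) and cj = tj + Dj + Sj. A small sketch of this replay in Python (the function name is an assumption):

```python
def delays_and_completions(arrivals, services):
    """Replay a single-server FIFO queue:
    D_j = max(0, c_{j-1} - t_j) and c_j = t_j + D_j + S_j."""
    delays, completions = [], []
    c_prev = 0.0
    for t, s in zip(arrivals, services):
        d = max(0.0, c_prev - t)   # wait only if the server is still busy
        c = t + d + s
        delays.append(d)
        completions.append(c)
        c_prev = c
    return delays, completions

# Hypothetical times: job 1 arrives at t1 = 1 with S1 = 4, so c1 = 5;
# job 2 arrives at t2 = 3, so D2 = c1 - t2 = 2 and c2 = 3 + 2 + 4 = 9.
print(delays_and_completions([1.0, 3.0], [4.0, 4.0]))
```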
There are three main mathematical system formalisms distinguished by how they
treat time and data values; they are as follows:

• Continuous systems: These systems are classically modeled by differential equations in linear and nonlinear manners. Values are continuous quantities and are computable for all times.
• Temporally discrete (sampled data) systems: These systems have continuously
valued elements measured at discrete time points. Their behavior is described by
difference equations. Sampled data systems are increasingly important because
they are the basis of most computer simulations and nearly all real-time digital
signal processing.
• Discrete-event systems: A discrete-event system is one in which some or all of the
quantities take on discrete values at arbitrary points in time. Queuing networks
are a classical example. Asynchronous digital logic is a pure example of a discrete-
event system. The quantities of interest (say data packets in a communication net-
work) move around the network in discrete units, but they may arrive or leave a
node at an arbitrary, continuous time.
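As an illustration of the second category, a sampled-data system described by a first-order difference equation is evaluated only at the discrete sample points. The coefficients below are arbitrary, chosen only for illustration:

```python
def simulate_sampled(a, b, x0, inputs):
    """First-order sampled-data system x[k+1] = a*x[k] + b*u[k],
    stepped forward only at the discrete sample instants."""
    x, trajectory = x0, [x0]
    for u in inputs:
        x = a * x + b * u      # difference equation, one sample step
        trajectory.append(x)
    return trajectory

# Constant input u = 1; the state converges toward b / (1 - a) = 2.
print(simulate_sampled(0.5, 1.0, 0.0, [1, 1, 1]))
```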

Continuous systems have a large and powerful body of theory. Linear systems have
comprehensive analytical and numerical solution methods and an extensive theory of
estimation and control. Nonlinear systems are still incompletely understood, but many
numerical techniques are available, some analytical stability methods are known, and
practical control approaches are accessible. The very active field of dynamical systems
addresses nonlinear as well as control aspects of systems. Similar results are available
for sampled data systems. Computational frameworks exist for discrete-event systems
(based on state machines and Petri nets), but are less complete than those for differen-
tial or difference equation systems in their ability to determine stability and synthesize
control laws. A variety of simulation tools are available for all three types of systems.
Some tools attempt to integrate all three types into a single framework, though this is
difficult.
Many modern systems are a mixture of all three types. For example, consider a
computer-based temperature controller for a chemical process. The complete system may
include continuous plant dynamics, a sampled data system for control under normal con-
ditions, and discrete-event controller behavior associated with threshold crossings and
mode changes. A comprehensive and practical modern system theory should answer the
classic questions about such a mixed system—stability, closed-loop dynamics, and control
law synthesis. No such comprehensive theory exists, but constructing one is an objective
of current research.
14.3.2 Simulation Procedure


1. Problem formulation: In this step, a problem formulation statement should be prepared, which shows that the problem discussed is completely understood. The problem statement should be signed by the customer or organization that orders the simulation and also by the person or team manager who is responsible for conducting the system simulation.
2. The setting of objectives and overall project plan: In this step, the simulation team should assess whether simulation is an appropriate method for solving the problem formulated in the first step. In addition, realistic goals that the simulation can be expected to achieve should be defined.
In accordance with the defined goals, the simulation should be considered as an
independent project or a subproject within a large project, for example a simula-
tion subproject that deals with the preparation, running, and analysis of results
of a simulation of a business process model within a project of business process
management.
3. Model conceptualization: This step deals with developing a model of the system
that is intended to be simulated. To do this properly, the modeler should use a
modeling technique that enables her/him to transfer the behavior of the system
into the model developed as closely as possible. The model developed should be
shown to the users in order to suggest corrections necessary to make it a true
reflection of the original system discussed.
4. Data collection: The simulation of any system requires the collection of detailed
data about the system’s behavior and each activity performed within the frame-
work of the system. The collection of data is usually done in connection with system modeling activities; this is when interviews are organized with users.
Concerning business process management projects, analysts usually collect
data about the organization’s business processes; their work processes; and each
activity within every work process, such as its description, time duration, con-
straints, resources needed, costs, and other data.
5. Model translation: The model of the system developed in the third step has to be
translated into a simulation language program, such as GPSS/H, using the data
collected in the previous step.
There are also a number of software packages, such as iGrafx, Arena, and oth-
ers, which enable modelers to translate their models into simple diagrams, such as
a flowchart, before running the simulation process.
6. Verification: This step verifies whether the written program accurately reflects the translation of the system's model into the program. It requires debugging the program carefully in order to remove any mistakes that exist in it. The result of this step should be a program that represents the behavior of the system as presented by the model.
7. Validation: The validation step examines the model developed in order to find
out whether it is a true reflection of the original real system. This can be achieved
by performing a comparison between the model and the system concerned. Such
a comparison could be carried out by testing the simulation model using tests
already used from real processes in which the input data and the expected output
data are known in advance.
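Such a comparison can be as simple as checking that the simulated outputs stay within a tolerance of the known outputs of the real process. A sketch, in which the function name and the 5% tolerance are assumptions:

```python
def validate(simulated, expected, tolerance=0.05):
    """The model passes validation if every simulated output deviates from
    the corresponding known real output by at most the relative tolerance."""
    return all(abs(s - e) / e <= tolerance for s, e in zip(simulated, expected))

# Hypothetical average cycle times from three tests with known real outcomes.
print(validate([10.2, 4.9, 7.1], [10.0, 5.0, 7.0]))   # True: all within 5%
print(validate([12.0, 4.9, 7.1], [10.0, 5.0, 7.0]))   # False: 20% deviation
```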
8. Experimental design: In the experimental design step, the simulation team prepares different alternative scenarios for running the simulation process. These scenarios are developed on the basis of a complete understanding of the behavior of the system, generating different possible behaviors of the system by using what-if questions, and trying to implement ideas for achieving improvements in the functioning of the system.
9. Runs and analysis: In this step, the simulation team deals with estimating and
analyzing the performance results of the simulation in the prepared scenarios of
the previous step. On the basis of the results of the previously completed simu-
lation runs, the team may determine the need for conducting more simulation
runs. New ideas may be considered in the context of making changes in the exist-
ing scenarios or new scenarios developed on the basis of the carefully analyzed
output data, leading to the performance of new simulation runs on the system
concerned.

14.4 Process Analytics


Section 14.2.3 introduced the four performance measures of quality, time, cost, and flex-
ibility. These performance measures are usually considered to be generally relevant for
any kind of business. Beyond this general set, a company should also identify specific
measures. Often, the measures are industry-specific, like profit per square meter in gas-
tronomy, return rate in online shopping, or customer churn in marketing. Any specific
measure that a company aims to define should be accurate, cost-effective, and easy-to-
understand. This subsection focuses on the four general performance measures of qual-
ity, time, cost, and flexibility. The question this section addresses is how to spot when a
process does not perform well according to one of these dimensions. An event log is the data generated by the execution, or effectively by the simulation, of processes; event logs provide very detailed data that are relevant to process performance.

14.4.1 Quality Measurement


The quality of a product created in a process is often not directly visible from execution logs. However, a good indication is to check whether there are repetitions in the execution logs, because such repetitions typically occur when a task has not been completed successfully. Repetitions can be found in the sequences of tasks.
The loop of a rework pattern increases the cycle time of a task to

CT = T / (1 − r)

in comparison to T being the time to execute the task only once.


The repetition probability r from a series of event logs would be

r = 1 − T / CT
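Both formulas are straightforward to apply in code; for example (function names are illustrative):

```python
def rework_cycle_time(T, r):
    """CT = T / (1 - r): expected cycle time of a task with
    single-pass time T and rework probability r."""
    return T / (1.0 - r)

def repetition_probability(T, CT):
    """r = 1 - T / CT, estimated from event-log measurements."""
    return 1.0 - T / CT

ct = rework_cycle_time(10.0, 0.2)            # 10 / 0.8 = 12.5
print(ct, repetition_probability(10.0, ct))  # recovers r of about 0.2
```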

Both CT and T can now be determined using the data of the event logs.
In some information systems, it might be easier to track repetition based on the assign-
ment of tasks to resources. One example is helpdesk ticketing systems that record which
resource is working on a case. Also, the logs of these systems offer insights into repetition.
A typical process supported with ticketing systems is incident resolution. For example, an
incident might be a call by a patient who complains that the online doctor’s appointment
booking system does not work. Such an incident is recorded by a dedicated participant—
for example, a call center agent. Then, it is forwarded to a first-level support team who tries
to solve the problem. In case the problem turns out to be too specific, it is forwarded to a
second-level support team with specialized knowledge in the problem domain.
In the best case, the problem is solved and the patient is notified accordingly. In the undesirable case, the team identifies that the problem is within the competence area of another team. As a consequence, the problem is routed back to the first-level team. Similar to the repetition of tasks, we now see that there is a repeated assignment of the problem to the same team. Accordingly, log information can be used to determine how likely it is that a problem is routed back.
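Given per-case assignment sequences from a ticketing log, this likelihood can be estimated by counting the cases in which some team appears more than once. A sketch; the team names are hypothetical:

```python
from collections import Counter

def reassignment_rate(case_assignments):
    """Fraction of cases in which some team receives the same case
    more than once, i.e. the problem was routed back to it."""
    routed_back = sum(
        1 for teams in case_assignments
        if any(n > 1 for n in Counter(teams).values())
    )
    return routed_back / len(case_assignments)

log = [
    ["agent", "level-1", "level-2"],             # solved without routing back
    ["agent", "level-1", "level-2", "level-1"],  # routed back to first level
]
print(reassignment_rate(log))  # 0.5
```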

14.4.2 Time Measurement


Time, and its more specific measures cycle time and waiting time, are important general performance measures. Event logs typically show timestamps such that they can be used for
time analysis. Time analysis is concerned with the temporal occurrence and probabilities
of different types of events. The event logs of a process generally relate each event to the
point in time of its occurrence. Therefore, it is straightforward to plot events on the time
axis. Furthermore, we can employ classifiers to group events on a second axis. A classifier
typically refers to one of the attributes of an event, like a case identification number (ID) or
participant ID.
There are two levels of detail for plotting events in a diagram, as follows:

1. Dotted charts using the timestamp to plot an event: the dotted chart is a simple yet
powerful visualization tool for event logs. Each event is plotted on a two-dimensional
canvas, with the first axis representing its occurrence in time and the second axis rep-
resenting its association with a classifier such as a case ID. There are different options
to organize the first axis. Time can be represented either in a relative manner, such that
the first event is counted as zero, or in an absolute manner, such that later cases with
a later start event are further right in comparison to cases that began earlier. The sec-
ond axis can be sorted according to different criteria. For instance, cases can be shown
according to their historical order or their overall cycle time.
2. A timeline chart showing the duration of a task and its waiting time: the temporal
analysis of event logs can be enhanced with further details if a corresponding
process model is available and tasks can be related to a start and an end event. The
idea is to utilize the concept of token replay for identifying the point in time when
a task gets activated.
• For tasks in a sequence, the activation time is the point in time when the previ-
ous task was completed.
• For tasks after an AND-join, this is the point in time when all previous tasks
were completed.
• For XOR-joins and splits it is the point when one of the previous tasks completes.

Using this information, we can plot a task not as a dot but instead as a bar in a timeline
chart. A timeline chart shows a waiting time (from activation until starting) and a process-
ing time (from starting until completion) for each task. The timelines of each task can be
visualized in a similar way as a dot in the dotted chart. The timeline chart is more infor-
mative than the dotted chart, since it shows the duration of the tasks and also the waiting
times. Both pieces of information are a valuable input for quantitative process analysis.
When thousands of cases are available as a log, one can estimate the distribution of wait-
ing time and processing time of each task, and:

1. Bottlenecks with long waiting times can be spotted.
2. The tasks upon which it is most promising to focus redesign efforts can be identified.
3. The remaining execution times of running process instances can be estimated, which is helpful for process monitoring.
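Given activation, start, and completion timestamps recovered by token replay, the waiting and processing times per task can be aggregated directly. A minimal sketch, in which the tuple layout and task name are assumptions:

```python
from collections import defaultdict

def waiting_and_processing(events):
    """Average waiting time (activation -> start) and processing time
    (start -> completion) per task, from (task, activated, started,
    completed) tuples extracted from an event log."""
    waits, procs = defaultdict(list), defaultdict(list)
    for task, activated, started, completed in events:
        waits[task].append(started - activated)
        procs[task].append(completed - started)
    avg = lambda xs: sum(xs) / len(xs)
    return {t: (avg(waits[t]), avg(procs[t])) for t in waits}

# Two hypothetical executions of the same task.
log = [("Check claim", 0, 2, 5), ("Check claim", 1, 4, 6)]
print(waiting_and_processing(log))  # {'Check claim': (2.5, 2.5)}
```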

14.4.3 Cost Measurement


In a process context, cost measurement is mainly related to the problem of assigning indi-
rect costs. Direct costs like the purchasing costs of four wheels that are assembled on a
car can be easily determined. Indirect labor or machine depreciation are more difficult.
In accounting, the concept of activity-based costing (ABC) was developed to more accu-
rately assign indirect costs to products and services as well as to individual customers. The
motivation of ABC is that human resources and machinery are often shared by different
products and services as well as are used to serve different customers. For instance, the
depot of BuildIT rents out expensive machinery such as bulldozers to different construc-
tion sites. On the one hand, that involves costs in terms of working hours of the persons
working at the depot. On the other hand, machines like bulldozers lose value over time
and require maintenance. The idea of ABC is to use activities in a manner so as to help
distribute the indirect costs, e.g., those associated with the depot.
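As a sketch of the ABC idea, an indirect cost pool can be distributed over cost objects in proportion to their consumption of an activity driver. The depot figures below are hypothetical:

```python
def abc_allocate(indirect_cost, driver_units):
    """Activity-based costing: distribute an indirect cost pool over
    cost objects in proportion to their driver consumption."""
    rate = indirect_cost / sum(driver_units.values())  # cost per driver unit
    return {obj: rate * units for obj, units in driver_units.items()}

# Hypothetical: depot costs of 9,000 split by bulldozer-hours per site.
print(abc_allocate(9000.0, {"site A": 40, "site B": 20}))
# {'site A': 6000.0, 'site B': 3000.0}
```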

14.4.4 Flexibility Measurement


Flexibility refers to the degree of variation that a process permits. This flexibility can be
discussed in relation to the event logs the process produces. For the company owning the
process, this is important information in order to compare the desired level of flexibility
with the actual flexibility. It might turn out that the process is more flexible than what is demanded from a business perspective; in such a case, flexibility amounts to a lack of standardization. Often, the performance of processes suffers when too many options are allowed.
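One simple flexibility indicator derivable from an event log is the number of distinct task sequences (variants) it contains. A sketch; the function name and traces are illustrative:

```python
from collections import Counter

def variant_profile(traces):
    """Flexibility indicator: the number of distinct task sequences
    (variants) in an event log, and how often each occurs."""
    variants = Counter(tuple(trace) for trace in traces)
    return len(variants), variants

# Three cases, two distinct variants: more variants suggests more flexibility.
log = [["A", "B", "C"], ["A", "C", "B"], ["A", "B", "C"]]
n, variants = variant_profile(log)
print(n)  # 2
```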

14.5 Summary
This chapter explained the rationale for modeling business processes with queuing theory.
In business processes, each activity of the process is performed by a resource (i.e., either a
human resource or machine resource); thus, if the resource is busy when the job arrives,
then the job will wait in a queue until the resource becomes available. The benefits of applying queueing theory to analyze business processes are, first, that it provides the analyst with insight into the performance of business processes and, second, that the performance analysis can be conducted rapidly, allowing for the fast generation of alternative process designs. The second half of the chapter introduced simulation as a technique that enables
defining and experimenting with an imitation of the behavior of a real system in order to analyze its functionality and performance in greater detail. For this purpose, real-life input data are collected and used to run and observe the system's behavior over time and to conduct different experiments without disturbing the functioning of the original system.
