

Optimization Techniques and their Applications to Mine Systems

This book describes the fundamental and theoretical concepts of optimization algorithms in a systematic manner, along with their potential applications and implementation strategies in mining engineering. It explains the basics of systems engineering, linear programming and integer linear programming, transportation and assignment algorithms, network analysis, dynamic programming, queuing theory, and their applications to mine systems. Reliability analysis of mine systems, inventory management in mines, and applications of non-linear optimization in mines are discussed as well. All the optimization algorithms are explained with suitable examples and numerical problems in each of the chapters.
Features include:

• Integrates operations research, reliability, and novel computerized technologies in a single volume, with a modern vision of continuous improvement of mining systems.
• Systematically reviews optimization methods and algorithms applied to mining systems, including reliability analysis.
• Provides software-based solutions such as MATLAB®, AMPL, and LINDO for the optimization problems.
• Supports all discussed algorithms with examples in each chapter.
• Includes case studies for performance improvement of mine systems.

This book is aimed primarily at professionals, graduate students, and researchers in mining
engineering.

Optimization Techniques and their Applications to Mine Systems

Amit Kumar Gorai and Snehamoy Chatterjee



MATLAB® is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not warrant the
accuracy of the text or exercises in this book. This book’s use or discussion of MATLAB® software or related products does
not constitute endorsement or sponsorship by The MathWorks of a particular pedagogical approach or particular use of the
MATLAB® software.
First edition published 2023
by CRC Press
6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-​2742
and by CRC Press
4 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN
CRC Press is an imprint of Taylor & Francis Group, LLC
© 2023 Taylor & Francis Group, LLC
Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume
responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted
to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission
to publish in this form has not been obtained. If any copyright material has not been acknowledged, please write and let us
know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized
in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying,
microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, access www.copyright.com or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For works that are not available on CCC please contact [email protected]
Trademark notice: Product or corporate names may be trademarks or registered trademarks and are used only for identification
and explanation without intent to infringe.
Library of Congress Cataloging‑in‑Publication Data
Names: Gorai, Amit Kumar, author. | Chatterjee, Snehamoy, editor.
Title: Optimization techniques and their applications to mine systems / Amit Kumar Gorai, Snehamoy Chatterjee.
Description: Boca Raton : CRC Press, 2022. | Includes bibliographical references and index.
Identifiers: LCCN 2022002805 | ISBN 9781032060989 (hardback) | ISBN 9781032060996 (paperback) |
ISBN 9781003200703 (ebook)
Subjects: LCSH: Mining engineering.
Classification: LCC TN153.G575 2022 | DDC 622.0285–dc23/eng/20220408
LC record available at https://lccn.loc.gov/2022002805
ISBN: 9781032060989 (hbk)
ISBN: 9781032060996 (pbk)
ISBN: 9781003200703 (ebk)
DOI: 10.1201/​9781003200703
Typeset in Times
by Newgen Publishing UK

Contents
Preface................................................................................................................................................xi
Author biographies...........................................................................................................................xiii

Chapter 1 Introduction to mine systems........................................................................................ 1


1.1 Definition of a system......................................................................................... 1
1.2 Types of system................................................................................................... 1
1.3 System approach................................................................................................. 4
1.4 System analysis................................................................................................... 5
1.5 Elements of a mining system.............................................................................. 5
1.6 Definition and classification of optimization problem........................................ 6
1.6.1 Based on the existence of constraints.................................................... 6
1.6.2 Based on the nature of the equations involved...................................... 7
1.6.3 Based on the permissible values of the decision variables.................... 8
1.6.4 Based on the number of the objective function..................................... 8
1.7 Solving optimization problems........................................................................... 9
1.7.1 Classical optimization techniques......................................................... 9
1.7.1.1 Direct methods........................................................................ 9
1.7.1.2 Gradient methods.................................................................... 9
1.7.1.3 Linear programming methods................................................. 9
1.7.1.4 Interior point method.............................................................. 9
1.7.2 Advanced optimization techniques....................................................... 9

Chapter 2 Basics of probability and statistics.............................................................................. 11


2.1 Definition of probability................................................................................... 11
2.2 Additional theory of probability....................................................................... 13
2.3 Probability distributions.................................................................................... 14
2.4 Common probability distribution functions...................................................... 16
2.4.1 Uniform distribution............................................................................ 16
2.4.2 Normal distribution............................................................................. 18
2.4.3 Poisson distribution............................................................................. 26
2.4.4 Exponential distribution...................................................................... 27
2.5 Conditional probability..................................................................................... 29
2.6 Memoryless property of the probability distribution........................................ 30
2.7 Theorem of total probability for compound events.......................................... 32
2.8 Bayes’ rule........................................................................................................ 33
2.9 Definition of statistics....................................................................................... 34
2.10 Statistical analyses of data................................................................................ 36
2.10.1 Common tools of descriptive statistics................................................ 36
2.10.1.1 Arithmetic Mean............................................................... 36
2.10.1.2 Median.............................................................................. 37
2.10.1.3 Mode................................................................................. 39
2.10.1.4 Standard deviation............................................................40
2.10.1.5 Mean Absolute Deviation.................................................42
2.10.1.6 Skewness........................................................................... 43


2.10.1.7 Coefficient of variation..................................................... 43


2.10.1.8 Expectation or expected value.......................................... 44
2.10.1.9 Variance and covariance................................................... 46
2.10.1.10 Correlation coefficient...................................................... 48
2.10.2 Standard analysis tools of inferential statistics...................................48
2.10.2.1 Hypothesis Tests............................................................... 48

Chapter 3 Linear programming for mining systems.................................................................... 55


3.1 Introduction....................................................................................................... 55
3.2 Definition of Linear Programming Problem (LPP).......................................... 55
3.3 Solution algorithms of LPP............................................................................... 56
3.3.1 Graphical method................................................................................ 56
3.3.1.1 Multiple Solutions................................................................. 59
3.3.1.2 Unbounded solution.............................................................. 60
3.3.2 Simplex method................................................................................... 61
3.3.3 Big-​M method..................................................................................... 68
3.4 Sensitivity analysis............................................................................................ 71
3.4.1 Graphical method of sensitivity analysis............................................. 72
3.4.2 Sensitivity analysis of the model using simplex method..................... 76
3.5 The dual problem.............................................................................................. 82
3.5.1 Formulation of dual problem for a given primal LPP......................... 82
3.5.2 Dual simplex algorithm....................................................................... 86
3.6 Case Study of the application of LPP in optimization of coal
transportation from mine to power plants......................................................... 91

Chapter 4 Transportation and assignment problems in mines..................................................... 99


4.1 Definition of a transportation problem.............................................................. 99
4.2 Types of transportation problem....................................................................... 99
4.3 Solution algorithms of a transportation model................................................ 100
4.3.1 Initial basic feasible solution............................................................. 102
4.3.1.1 The north-​west corner method............................................ 102
4.3.1.2 Matrix minimum method.................................................... 103
4.3.1.3 Vogel Approximation Method (VAM)................................ 104
4.3.2 Determination of optimal solution.................................................... 107
4.3.2.1 The Modified Distribution method..................................... 107
4.3.2.2 Stepping Stone Method....................................................... 113
4.3.3 Solution algorithm of an unbalanced transportation model.............. 116
4.3.4 Solution algorithm of a transportation model with
prohibited routes................................................................................ 118
4.3.5 Solution algorithm for degeneracy problem...................................... 119
4.4 Assignment problem....................................................................................... 119
4.4.1 The Hungarian Assignment Method (HAM)....................................120
4.5 Case study on the application of transportation model in
mining system................................................................................................. 130

Chapter 5 Integer linear programming for mining systems....................................................... 145


5.1 Definition........................................................................................................ 145
5.2 Formulation of ILP......................................................................................... 145

5.3 Solution algorithms of an ILP......................................................................... 147


5.3.1 Cutting plane method or Gomory’s cut method................................ 147
5.3.2 Branch and bound (B&B) algorithm................................................. 155
5.4 Case Study of the application of Mixed Integer Programming
(MIP) in production scheduling of a mine...................................................... 163

Chapter 6 Dynamic programming for mining systems.............................................................. 183


6.1 Introduction..................................................................................................... 183
6.2 Solution algorithm of dynamic programming................................................. 183
6.3 Example 1: Maximising Project NPV............................................................. 184
6.3.1 Backward recursion algorithm.......................................................... 185
6.3.2 Forward recursion algorithm............................................................. 188
6.4 Example 2: Decision on ultimate pit limit (UPL) of
two-​dimensional (2-​D) blocks........................................................................ 190
6.5 Example 3: Stope boundary optimization using dynamic programming........ 200
6.6 Case Study of dynamic programming applications to determine
the ultimate pit for a copper deposit............................................................... 204

Chapter 7 Network analysis for mining project planning.......................................................... 215


7.1 Introduction..................................................................................................... 215
7.2 Representation of network diagram................................................................ 215
7.3 Methods of determining the duration of a project.......................................... 216
7.3.1 Critical Path Method (CPM)............................................................. 216
7.3.2 Program Evaluation and Review Technique (PERT)........................ 225
7.3.2.1 PERT analysis algorithm..................................................... 225
7.4 Network crashing............................................................................................ 228

Chapter 8 Reliability analysis of mining systems...................................................................... 241


8.1 Definition........................................................................................................ 241
8.2 Statistical concepts of reliability..................................................................... 241
8.3 Hazard function............................................................................................... 241
8.4 Cumulative hazard rate................................................................................... 242
8.5 Reliability functions........................................................................................ 243
8.5.1 Reliability calculation with an exponential
distribution function.......................................................................... 243
8.5.2 Reliability calculation with a normal probability
density function................................................................................. 247
8.5.3 Reliability calculation with a Weibull distribution
probability density function.............................................................. 250
8.5.4 Reliability calculation with a Poisson distribution
probability mass function.................................................................. 253
8.5.5 Reliability calculation for a binomial distribution............................ 254
8.6 Mean time between failure (MTBF) and mean time to
failure (MTTF)................................................................................................ 255
8.7 Maintainability and mean time to repair (MTTR).......................................... 257
8.8 Reliability of a system.................................................................................... 259
8.8.1 System reliability on a series configuration...................................... 259
8.8.2 System reliability on parallel configuration...................................... 261

8.8.3 System reliability of a combination of series and parallel system.......................... 263
8.8.4 System reliability of k-​out-​of-​n configuration..................................264
8.8.5 System reliability of bridge configuration......................................... 266
8.8.6 System reliability of standby redundancy......................................... 268
8.9 Availability...................................................................................................... 270
8.10 Improvement of system reliability.................................................................. 272
8.10.1 Redundancy optimization.................................................................. 272
8.11 Reliability analysis to a mine system: A Case Study...................................... 278
8.11.1 Introduction....................................................................................... 278
8.11.2 Data................................................................................................... 278
8.11.3 Exploratory data analysis.................................................................. 278
8.11.4 Estimating the best fit probability density function (PDF)
for TBF and TTR............................................................................... 279
8.11.5 Reliability analysis for estimation of maintenance schedule............ 284

Chapter 9 Inventory management in mines...............................................................................289


9.1 Introduction..................................................................................................... 289
9.2 Costs involved in inventory models................................................................ 290
9.3 Inventory models............................................................................................. 291
9.3.1 Deterministic model.......................................................................... 292
9.3.1.1 Basic economic order quantity (EOQ) model..................... 292
9.3.1.2 EOQ model with planned shortages................................... 296
9.3.1.3 EOQ model with price discounts........................................ 301
9.3.1.4 Multi-​item EOQ model with no storage limitation............. 306
9.3.1.5 Multi-​item EOQ model with storage limitation.................. 309
9.3.2 Fixed time-​period model................................................................... 311
9.3.3 Probabilistic EOQ model.................................................................. 314

Chapter 10 Queuing theory and its application in mines............................................................. 321


10.1 Introduction..................................................................................................... 321
10.2 Kendall notation.............................................................................................. 322
10.3 Probability distributions commonly used in queuing models......................... 323
10.3.1 Geometric distribution....................................................................... 323
10.3.2 Poisson distribution........................................................................... 324
10.3.3 Exponential distribution.................................................................... 324
10.3.4 Erlang distribution............................................................................. 324
10.4 Relation between the exponential and Poisson distributions.......................... 325
10.5 Little’s law...................................................................................................... 327
10.6 Queuing Model............................................................................................... 327
10.6.1 M/​M/​1 Model.................................................................................... 327
10.6.1.1 Time-​dependent behaviour of the flows of
dump trucks......................................................................... 328
10.6.2 M/​M/​s queuing system...................................................................... 334
10.6.3 Infinite server queue model (M/​M/​∞)............................................... 344
10.6.4 (M/​M/​s): (FCFS)/​K/​K queuing system............................................. 346
10.7 Cost models..................................................................................................... 351
10.8 Case Study for the application of queuing theory for shovel-​truck
optimization in an open-​pit mine.................................................................... 357

Chapter 11 Non-​linear algorithms for mining systems................................................................ 367


11.1 Introduction..................................................................................................... 367
11.2 Stationary points............................................................................................. 367
11.3 Classifications of non-​linear programming..................................................... 368
11.3.1 Unconstrained optimization algorithm for solving non-​linear
problems............................................................................................ 368
11.3.2 Constrained optimization algorithm for solving non-​linear
problems............................................................................................ 373
11.4 Case study on the application of non-​linear optimization for open-​pit
production scheduling..................................................................................... 377

Bibliography.................................................................................................................................. 383
Index............................................................................................................................................... 387

Preface
Mining is one of the oldest industries, dating back almost 20,000 years. Today, the top 40 mining companies alone generate more than US$700 bn in revenue worldwide. Although mining companies are generating a significant amount of revenue, the net profit margin decreased from 25 per cent in 2010 to 10 per cent in 2018. As time passes, mining is becoming more challenging due to greater depths, lower grades, limited resources, and complex geo-mining
conditions. Therefore, mine system optimization will play an important role in maximizing profit
by satisfying several constraints. Moreover, today’s mining industry uses complex and sophisticated
systems whose reliability has become a critical issue. A significant amount of research is going on
around the globe to address these challenges; however, to the best of the authors’ knowledge, there
is no single book available that covers systems engineering and optimization from a mining industry perspective. Students, researchers, and engineers need to consult multiple sources to find reliable information on this subject, which causes serious difficulty. This book combines different systems engineering and optimization concepts in the light of mining engineering to provide a one-stop shop for all information seekers. The book covers almost every aspect of systems engineering and optimization and is presented so that readers do not require previous knowledge of the
subject to understand the contents. The book describes the fundamentals and theoretical concepts
of optimization algorithms and their potential applications and implementation strategies in mines.
This book includes chapters on the basics of systems engineering, linear programming, and integer
linear programming and their applications in mines, transportation and assignment algorithms,
network analysis, dynamic programming, queuing theory and its applications to mine systems, reli-
ability analysis of mine systems, inventory management in mines, and applications of non-​linear
optimization in mines. The book contains example problems and their solutions, and at the end of
each chapter, there are various problems that give readers the opportunity to test their knowledge and understanding of the topics. A wide-ranging list of references is provided to give
readers a view of developments in the area over the years. The book is composed of 11 chapters.
This book will be valuable to many individuals, including graduate and undergraduate students,
researchers, academicians in mining engineering, mining engineering professionals, and associated
professionals concerned with mining equipment.
We owe a tremendous debt of gratitude to many individuals and organizations, especially to those companies around the globe who have shared their data for use in this book. The quality of this book has also been substantially improved by the suggestions of the reviewers (Julian M. Ortiz of Queen's University, Mustafa Kumral of McGill University, and Victor Octavio Tenorio of the University of Arizona) and of our colleagues in academia and industry.


Author biographies
Dr Amit Kumar Gorai is an Associate Professor in the Department of Mining Engineering at the
National Institute of Technology, Rourkela, India. Prior to joining NIT Rourkela, Dr Gorai had
worked at Birla Institute of Technology Mesra, Ranchi, for over seven years. He has published
over 60 research articles in the area of reliability analysis of mining systems, machine learning
applications for quality monitoring of ores/​coal, remote sensing applications for environmental
management in mines, and so on. Dr Gorai has also written one guidebook, A Complete Guide for
Mining Engineers, and one edited book, Sustainable Mining Practices.
Dr. Gorai received his PhD from the Indian School of Mines, Dhanbad, India. He is the recipient of
the Endeavour Executive Fellowship from the Australian Government for working at the University
of New South Wales, Sydney, Australia, and Raman Postdoctoral Fellowship from University Grants
Commission, New Delhi for working at Jackson State University, MS, USA. He has been teaching
Mine Systems Engineering at NIT Rourkela for the last few years.
He has completed several sponsored research projects in the field of environmental modelling. His
current research area is systems optimization, machine learning, GIS, and remote sensing. Dr Gorai
is a member of the Institution of Engineers India (IEI), The Mining, Geological & Metallurgical
Institute (MGMI) of India, and the International Association for Mathematical Geosciences (IAMG).

Dr Snehamoy Chatterjee is an Associate Professor and Witte Family Endowed Faculty Fellow
in Mining Engineering in the Geological and Mining Engineering and Sciences Department at
Michigan Technological University. Before joining Michigan Tech, Dr Chatterjee worked as an
Assistant Professor at the National Institute of Technology, Rourkela, India. Dr Chatterjee specializes
in ore reserve estimation, short-​and long-​range mine planning, mining machine reliability analysis,
mine safety evaluation, and the application of image analysis and artificial intelligence in mining
problems. He received his PhD in Mining Engineering from the Indian Institute of Technology
Kharagpur, India. Dr Chatterjee worked as a Post-​Doctoral Fellow at the University of Alaska
Fairbanks and as a research associate at COSMO Stochastic Mine Planning Laboratory, McGill
University, Canada, where he focused on mine planning optimization and ore-​body modelling under
uncertainty. Presently, Dr Chatterjee is actively involved in research work in resource modelling,
production planning, online quality monitoring, and machine learning. He teaches courses and
advises students on topics related to mine planning, mineral resource modelling, mining machine
reliability, and vision-​based online quality monitoring. He has completed several sponsored research
and industry projects for different government organizations and mining companies in India and
the USA.
Dr Chatterjee is an active member of the International Association for Mathematical Geosciences (IAMG), the Society for Mining, Metallurgy, and Exploration, Inc. (SME), and the American Geophysical Union (AGU). He has served as a co-convener and a technical committee member for several inter-
national mining conferences. He is also a reviewer for more than 30 journals and has received
The Editor’s Best Reviewer Awards 2014 from Mathematical Geosciences Journal. He is the 2015
APCOM Young Professional Award recipient at the 37th APCOM in Fairbanks, Alaska. He is an
editorial board member of the International Journal of Mining, Reclamation and Environment and
Journal of Artificial Intelligence Research, and associate editor of Results in Geophysical Sciences.


1 Introduction to mine systems

1.1 DEFINITION OF A SYSTEM
A system can be defined as a device or scheme that accepts single or multiple inputs and provides
single or multiple outputs. According to Dooge (1973), a system is ‘any structure, device, scheme,
or procedure, real or abstract, that interrelates an input and output or cause and effect or any other
things/information in a given time reference'. Systems theory views a mine as a complex system of interconnected subsystems. A mining system can be classified as an open system or a closed system depending on the demarcation and definition of its boundary, which determines which entities are inside the system and which are outside. In an open system, both materials/mass and energy can flow through the boundary of the system; whereas, in a closed system, energy can flow through the boundary but materials/mass remain fixed and cannot cross it. One can make a simplified representation of a mine system in order to understand it and predict its future behaviour. A typical example of a mining system is represented in Figure 1.1.

1.2 TYPES OF SYSTEM
Any system can be defined or classified in multiple ways. These are as follows:

• Simple and Complex Systems: If the input has a direct relation with the output, the system
is said to be a simple system. It may be linear or non-​linear in nature.
On the other hand, a complex system is a combination of several simple systems. Each of these simple systems can be termed a sub-system. Each sub-system has a distinct relation between input and output, which may be linear or non-linear in nature.
Typical examples of a simple and a complex system are presented here.

Example of a simple system


$y = a_1 x_1$

FIGURE 1.1 A typical example of a mining system (mine as an open system).


Example of a complex system

$y = a_1 x_1 + a_2 x_2 + \cdots + a_n x_n$

$y = y_1 + y_2 + \cdots + y_n$

where $y$ is the output from the system, $x_1, x_2, \ldots, x_n$ are inputs to the system, and $a_1, a_2, \ldots, a_n$ are weights associated with the inputs. Both the above simple and complex systems are linear. In the simple system, the output is directly proportional to the single input, but the same does not hold true for the complex system. Thus, the complex system represents $n$ sub-systems or simple systems.

• Linear and Nonlinear Systems: A linear system is a system in which the output varies directly with respect to the inputs. A linear system satisfies the superposition and homogeneity principles; that is, the output for a combination of inputs is equal to the sum of the outputs for each of the individual inputs.

For example, let two inputs to a system be $x_1(t)$ and $x_2(t)$, and let the corresponding outputs be $y_1(t)$ and $y_2(t)$, respectively. Then, for a linear system, the superposition and homogeneity principles hold, and mathematically this can be represented as:

$$f\left[a_1 x_1(t) + a_2 x_2(t)\right] = a_1 f\left[x_1(t)\right] + a_2 f\left[x_2(t)\right] = a_1 y_1(t) + a_2 y_2(t)$$

It is evident from the above relationship that the response to a weighted combination of inputs is equal to the same weighted combination of the individual responses.
On the other hand, if the superposition and homogeneity principles are not satisfied, the system is called a non-linear system. That is, in a non-linear system, the above equality does not hold:

$$f\left[a_1 x_1(t) + a_2 x_2(t)\right] \neq a_1 f\left[x_1(t)\right] + a_2 f\left[x_2(t)\right]$$

$$f\left[a_1 x_1(t) + a_2 x_2(t)\right] \neq a_1 y_1(t) + a_2 y_2(t)$$
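The superposition test above is easy to check numerically. The short Python sketch below (not from the book; the function names and random test signals are purely illustrative) applies weighted combinations of inputs to a candidate system and compares the response with the same weighted combination of the individual responses:

```python
import numpy as np

def is_linear(system, trials=100, tol=1e-9):
    """Numerically test the superposition/homogeneity principle for a
    single-input system y = f(x), where x is a sampled signal array."""
    rng = np.random.default_rng(0)
    for _ in range(trials):
        x1, x2 = rng.normal(size=50), rng.normal(size=50)
        a1, a2 = rng.normal(), rng.normal()
        lhs = system(a1 * x1 + a2 * x2)          # response to the combined input
        rhs = a1 * system(x1) + a2 * system(x2)  # combination of the responses
        if not np.allclose(lhs, rhs, atol=tol):
            return False
    return True

# y(t) = 3 x(t) is linear; y(t) = x(t)^2 violates superposition.
print(is_linear(lambda x: 3 * x))     # True
print(is_linear(lambda x: x ** 2))    # False
```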

• Time-Variant and Time-Invariant Systems: If any time shift in the input to the system causes the same amount of time shift in the output, the system is said to be a time-invariant system.

If the above condition is not satisfied, the system is said to be a time-variant system.

Example of a time-invariant system

$$y(t) = k + x(t)$$

where $y(t)$ is the output and $x(t)$ is the input of the system, and $k$ is a constant.

Let the input be delayed by $\Delta t$ and let the resulting output be $y_1(t)$. We have $y_1(t) = k + x(t - \Delta t)$.
Again, delaying the original output by $\Delta t$ gives $y_2(t)$, so $y_2(t) = y(t - \Delta t) = k + x(t - \Delta t)$.

Therefore, the above two equations indicate that $y_1(t) = y_2(t)$.

Thus, the above system is a time-​invariant system.

Example of a time-​variant system

$$y(t) = t\,x(t)$$

Let the input be delayed by $\Delta t$ and let the resulting output be $y_1(t)$. We have $y_1(t) = t\,x(t - \Delta t)$.
Again, delaying the original output by $\Delta t$ gives $y_2(t)$, so $y_2(t) = y(t - \Delta t) = (t - \Delta t)\,x(t - \Delta t)$.

Therefore, the above two equations indicate that $y_1(t) \neq y_2(t)$.

Thus, the above system is a time-​variant system.
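The same kind of numerical check can be applied to time invariance: delay the input by $\Delta t$ and compare the result with the delayed output. A minimal Python sketch using discrete time steps (illustrative function names, not from the book):

```python
import numpy as np

def is_time_invariant(system, n=50, shift=3, tol=1e-9):
    """Compare the response to a delayed input with the delayed response.
    `system` maps an input sequence x and a time-index array t to an output."""
    rng = np.random.default_rng(1)
    x = rng.normal(size=n)
    t = np.arange(n)
    x_delayed = np.roll(x, shift)          # x(t - Δt), with circular padding
    y1 = system(x_delayed, t)              # response to the delayed input
    y2 = np.roll(system(x, t), shift)      # delayed response to the original input
    # ignore the wrapped-around samples introduced by np.roll
    return np.allclose(y1[shift:], y2[shift:], atol=tol)

print(is_time_invariant(lambda x, t: 5.0 + x))  # y(t) = k + x(t): time-invariant -> True
print(is_time_invariant(lambda x, t: t * x))    # y(t) = t x(t):  time-variant   -> False
```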

• Continuous and Discrete Systems: A system is said to be continuous if its variable(s) change continuously over time.

On the other hand, if the variable(s) change only at discrete intervals of time, the system is said to be discrete.
For example, the number of dump trucks waiting in a queue to be loaded in a mine is a discrete variable; at any moment, the number of dump trucks is an integer.
On the other hand, the strata pressure on the roof is continuous in nature; it is subject to change at every moment.
The graphical representation of the characteristics of the discrete and continuous data is shown
in Figure 1.2.

FIGURE 1.2 Discrete vs. continuous system.



• Lumped Parameter and Distributed Parameter Systems: If the dependent variables are a
function of time alone and the variation in space is either non-​existent or ignored, the system
is said to be a lumped parameter system. This type of system can be represented by ordinary
differential equations.
On the other hand, if all the dependent variables are functions of time and at least one variable
has spatial variation, then the system is said to be a distributed parameter system. This type of
system is governed by a partial differential equation.
• Static and Dynamic Systems: If the output of the system at any time depends only on the input at that time, the system is said to be a static system. But if the output of the system also depends on past values of the input, then the system is said to be a dynamic system. A static system has the memory-less property, whereas a dynamic system does not.

Example of a static system: $y(t) = c\,x(t)$.

At $t = 0$, the output of the system is given by

$$y(0) = c\,x(0)$$

It is clear that the output depends on the current input only, and hence the system is static and memory-less.
Example of a dynamic system: $y(t) = c_1 x(t) + c_2 x(t - n)$.
At $t = 0$, the output of the system is given by

$$y(0) = c_1 x(0) + c_2 x(-n)$$

In the above equation, $x(-n)$ represents a past value of the input, and thus the system needs memory to produce the current output. This type of system is called a dynamic system.
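The memory distinction can also be illustrated in code: a static system needs only the current input sample, whereas a dynamic system must store past samples. A minimal sketch, with hypothetical constants chosen only for illustration:

```python
def static_system(x_now, c=2.0):
    """y(t) = c * x(t): the output depends only on the current input."""
    return c * x_now

class DynamicSystem:
    """y(t) = c1*x(t) + c2*x(t-n): needs memory of the last n inputs."""
    def __init__(self, c1=1.0, c2=0.5, n=2):
        self.c1, self.c2, self.n = c1, c2, n
        self.history = [0.0] * n                 # stores x(t-1), ..., x(t-n)

    def step(self, x_now):
        y = self.c1 * x_now + self.c2 * self.history[-1]   # uses x(t-n)
        self.history = [x_now] + self.history[:-1]         # update the memory
        return y

dyn = DynamicSystem()
for x in [1.0, 0.0, 0.0, 2.0]:
    print(static_system(x), dyn.step(x))
```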

• Deterministic and Probabilistic Systems: If the occurrence of all events is known with com-
plete certainty, the system is said to be a deterministic system. In a deterministic system, the
output remains the same for constant input.

But, if the occurrence of an event cannot be perfectly known, the system is said to be probabil-
istic. In a probabilistic system, the output may not be constant for the same input, and thus the input-​
output relationship in this type of system is probabilistic in nature.
For example, the failure time of a mining machine is probabilistic, but the capacity of the same
machine is deterministic.
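This contrast can be mimicked in a short sketch: a machine's rated capacity is a fixed (deterministic) quantity, while its time to failure is a random draw from a probability distribution. The numbers and the exponential assumption below are purely illustrative, not data from the book:

```python
import random

# Deterministic quantity: the machine capacity is the same every time it is queried.
CAPACITY_TONNES = 35.0                  # assumed value, for illustration only

# Probabilistic quantity: time to failure varies from realization to realization.
MEAN_TIME_TO_FAILURE_H = 120.0          # assumed mean, for illustration only
random.seed(42)

def sample_time_to_failure():
    """Draw a failure time from an assumed exponential distribution."""
    return random.expovariate(1.0 / MEAN_TIME_TO_FAILURE_H)

print(CAPACITY_TONNES, CAPACITY_TONNES)                    # identical every time
print(sample_time_to_failure(), sample_time_to_failure())  # differs between draws
```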

1.3 SYSTEM APPROACH
In any mining system, the relationship between input and output is controlled by multiple factors, such as the characteristics of the deposit (shape, size, depth, and strength), mining method, type of machinery, workforce, etc., and the physical laws governing the system. In many mining systems, the characteristics of the deposit and the laws governing the system are very complex. Thus, modelling those complex systems requires simplifying assumptions and transformation functions to determine the output corresponding to an input. The system analysis process requires defining and formulating the system by constructing a mathematical model, wherein the input-output relationships are estimated from the existing operating conditions of the system.
In general, the objective of the system approach is to break down a complex system into multiple
simple sub-​systems for a better understanding of the different components of the complex system.
Most mining systems are open, as most of their sub-systems are linked to each other. To analyse and understand a mining system, the different sub-systems need to be identified by defining their boundaries.

The system approach should focus on the common objectives of the system without neglecting the sub-systems. Major characteristics of a system approach are:

1. Holism: A change in any sub-system of a system directly or indirectly affects the properties of the entire system (Boulding, 1985; Litterer, 1973; von Bertalanffy, 1968).
2. Specialization: The entire system can be divided into different subsystems for easy
understanding of the typical role of each sub-​system in the system.
3. Non-​summation: Every subsystem is of importance to the entire system, and hence it is of
utmost importance to understand the role of individual sub-​systems to get the complete per-
spective of the system (Boulding, 1985; Litterer, 1973).
4. Grouping: The process of grouping into more specialized sub-systems may introduce its own complexity. Therefore, it is desirable to group related disciplines or sub-disciplines together.
5. Coordination: The grouped components in a sub-​system need coordination and control
in the study of systems. It is difficult to maintain a unified, holistic concept without proper
coordination.
6. Emergent properties: A group of interrelated components exhibits properties as a whole that are not the properties of any individual component. This is the general view of a system.

1.4 SYSTEM ANALYSIS
System analysis is a process of collecting information, identifying problems, and decomposing a
system into smaller sub-​systems. It is usually done using a standard optimization technique based
on the formulated mathematical equations. It should be noted that systems analysis is not simply solving a mathematical model; it also requires decision-making for designing a system. The system
analysis techniques can be used for solving both descriptive and prescriptive models. A descriptive
model explains how a system works; whereas, a prescriptive model offers a solution for optimal
operation of the system for achieving the desired objectives.

1.5 ELEMENTS OF A MINING SYSTEM


The elements of a mining system are represented in a diagram, as shown in Figure 1.3.

FIGURE 1.3 Elements of a mining system.



• Inputs and outputs


The main objective of a mining system is to maximize (or minimize) outputs from the system
for a given input. The inputs to a typical mining system are workforce, heavy earth moving
machinery, explosives, supports, transportation equipment, etc. The outputs of any mining
system are production, productivity, quality, accident and incident rates, etc.
• Production operation
The production operation in a mining system involves the actual transformation of input (in-​
situ coal or ore) to output (extracted coal or mineral) for supplying to the consumer. The pro-
duction process involves many operations like drilling and blasting, loading into dump trucks/​
haul trucks or conveyor belts, transportation, etc. Therefore, the operational process needs to be modified by changing the input, either totally or partially, depending on the desired production.
• Control management
The control management guides the mining system by analysing the pattern of activities
governing input, operations, and output. Decision-making plays a significant role in optimizing the production operations based on the available resources and desired outputs. The
production behaviour of a mining system is controlled by the operational process. To optimize
the system, management should know how much input is needed to obtain the desired output.
• Feedback
Feedback of the production process requires decision-​making on the system’s alteration to
achieve the desired output. Positive feedback is routine information that reinforces the performance of the system; whereas negative feedback provides information to management for corrective action.
• Environment
The output of the system depends on the environment within which the mining system operates. It
includes any external factors like political interference and market conditions that affect the
actual performance of the system.
• Boundaries and interface
Each system has boundaries that determine its periphery of influence and control. Boundaries
are the limits that identify its sub-systems, processes, and the interrelationships among them. Thus,
the definition of the boundaries of any mining system is crucial in determining the nature of
its interface with other systems for successful design.

1.6 DEFINITION AND CLASSIFICATION OF OPTIMIZATION PROBLEM


Optimization is a process of searching for the most cost-​effective or most efficient solution of any
system or sub-​systems under the given constraints of any defined problem by maximizing desired
factors and minimizing undesired ones. In other words, optimization problems are represented
by a mathematical equation of the objective function and constraints, and decision variables are
estimated based on a formal search procedure for optimizing the objective function. An optimization
problem can be classified in multiple ways, such as by the nature of the constraints, the characteristics of the decision variables, the type of equations used, and the number of objective functions. A brief description of the classification is given below.

1.6.1 Based on the existence of constraints


• Constrained optimization problem: Any problem is said to be a constrained optimization
problem if it has one or more constraints. The generalized form of a constrained optimization
problem is shown below:

$$\text{Maximize (Minimize)}\quad Z = \sum_{i=1}^{m}\sum_{k=1}^{m} c_i x_i^{p_k}, \qquad p_k \in \mathbb{R}$$

Subject to

$$\sum_{j=1}^{n}\sum_{i=1}^{m} a_{ji} x_i^{p_k} \le b_i$$

$$x_i \ge 0, \qquad i = 1 \text{ to } m$$
In the above optimization problem, $Z$ is called the objective function that needs to be optimized. The variables $x_1, x_2, \ldots, x_m$ are called decision variables. The decision variables are subject to $n$ constraints along with the non-negativity constraints. The value of $p_k$ can be any real number.

• Unconstrained optimization problem: Any problem is said to be an unconstrained optimiza-


tion problem if it has no constraints. The generalized form of an unconstrained optimization
problem is shown below:

$$\text{Maximize (Minimize)}\quad Z = \sum_{i=1}^{m}\sum_{k=1}^{m} c_i x_i^{p_k}, \qquad p_k \in \mathbb{R}$$

In the above optimization problem, $Z$ is called the objective function that needs to be optimized. The variables $x_i$ are called decision variables. The decision variables are not subject to any constraints. The value of $p_k$ can be any real number.

1.6.2 Based on the nature of the equations involved


• Linear programming problem (LPP): If the objective function and all the constraints of
a problem are linear, then the problem is said to be a linear programming problem (LPP). It
is probably the single-​most applied optimization technique in engineering decision-​making.
Thus, the generalized form of the LPP is

$$\text{Maximize (Minimize)}\quad Z = \sum_{i=1}^{m} c_i x_i$$

Subject to

$$\sum_{i=1}^{m} a_{ji} x_i \le b_j, \qquad x_i \ge 0, \qquad i = 1 \text{ to } m \text{ and } j = 1 \text{ to } n$$

If the decision variables are restricted to integer values, the same problem represents an integer-​
programming problem.
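To make the LPP form concrete, the sketch below solves a small hypothetical production-allocation LPP with SciPy's linprog routine. The coefficients are invented purely for illustration, and SciPy is just one possible tool (the book itself points to MATLAB®, AMPL, and LINDO for software-based solutions):

```python
from scipy.optimize import linprog

# Hypothetical example: x1 = tonnes of coal, x2 = tonnes of ore per shift.
# Maximize Z = 40*x1 + 30*x2  subject to
#   2*x1 + 1*x2 <= 100   (loading capacity)
#   1*x1 + 3*x2 <= 90    (hauling capacity)
#   x1, x2 >= 0
# linprog minimizes, so the objective coefficients are negated.
c = [-40, -30]
A_ub = [[2, 1], [1, 3]]
b_ub = [100, 90]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)      # optimal decision variables (x1 = 42, x2 = 16)
print(-res.fun)   # optimal (maximized) value of Z
```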

• Quadratic programming problem: In a quadratic programming problem, the objective


function is a quadratic function, and all constraint functions are linear. The general form of a
quadratic problem is as follows:

$$\text{Maximize (Minimize)}\quad Z = q^{T} x + \frac{1}{2} x^{T} Q x$$

Subject to

$$A x = a, \qquad B x \le b, \qquad x \ge 0$$
The objective function is arranged such that the vector q contains all of the (single-​differentiated)
linear terms and Q contains all of the (twice-​differentiated) quadratic terms. The constants contained
in the objective function are left out of the general formulation.
As for the constraints, the matrix equation Ax = a contains all of the linear equality constraints,
and Bx ≤ b are the linear inequality constraints.
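For the special case with equality constraints only (no $Bx \le b$ term), the quadratic programme can be solved directly from its first-order (KKT) conditions. The small numerical example below is illustrative only and is not taken from the book:

```python
import numpy as np

# Minimize q^T x + 0.5 * x^T Q x  subject to  A x = a  (equality constraints only).
# KKT conditions:  [Q  A^T] [x]   [-q]
#                  [A   0 ] [λ] = [ a]
Q = np.array([[2.0, 0.0], [0.0, 4.0]])
q = np.array([-2.0, -8.0])
A = np.array([[1.0, 1.0]])
a = np.array([3.0])

kkt = np.block([[Q, A.T], [A, np.zeros((1, 1))]])   # assemble the KKT matrix
rhs = np.concatenate([-q, a])
sol = np.linalg.solve(kkt, rhs)
x, lam = sol[:2], sol[2:]
print(x)   # optimal decision variables satisfying x1 + x2 = 3 (here x = [1, 2])
```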

• Non-linear programming problem: In a non-linear programming problem, the objective function and/or one or more constraints are non-linear. An unconstrained problem can be represented as a non-linear programming problem if at least one of the values of $p_k$ is not equal to 1.

1.6.3 Based on the permissible values of the decision variables


• Integer programming problem: Integer programming is a variant of LPP in which the decision variables are integers. It is a special case of LPP in which all the decision variables are restricted to take only integer (or discrete) values. The general form of an integer programming problem is given by

$$\text{Maximize (Minimize)}\quad Z = \sum_{i=1}^{m} c_i x_i$$

Subject to

$$\sum_{i=1}^{m} a_{ji} x_i \le b_j, \qquad x_i \in \text{Integers}, \qquad i = 1 \text{ to } m \text{ and } j = 1 \text{ to } n$$

If some of the variables are integer values and the remaining are real values, the problem is a
mixed integer programming problem. The binary programming problem is a special case of an
integer programming problem, where decision variables are all binary, that is, 0 or 1.
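A tiny illustration of a binary programming problem is sketched below. It is solved by brute-force enumeration of all 0-1 combinations rather than by the branch-and-bound method discussed later in the book; the equipment-selection data are invented for illustration:

```python
from itertools import product

# Hypothetical binary problem: choose which of four equipment upgrades to buy.
# Maximize total benefit subject to a budget constraint; each x_i is 0 or 1.
benefit = [12, 9, 7, 15]     # benefit of each upgrade
cost    = [ 6, 4, 3,  8]     # cost of each upgrade
budget  = 14

best_x, best_z = None, float("-inf")
for x in product([0, 1], repeat=len(benefit)):            # enumerate all 2^4 choices
    if sum(c * xi for c, xi in zip(cost, x)) <= budget:   # feasibility check
        z = sum(b * xi for b, xi in zip(benefit, x))
        if z > best_z:
            best_x, best_z = x, z

print(best_x, best_z)    # best feasible selection and its objective value
```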

1.6.4 Based on the number of the objective function


• Single-​objective problem: If the optimization problem has only one objective function, then
the problem is said to be a single-​objective problem.
• Multi-​objective problem: If the optimization problem has more than one objective function,
then the problem is said to be a multi-​objective problem.

The mathematical form of a multi-​objective problem can be represented as follows:

$$\text{Minimize}\quad f_1(x), f_2(x), \ldots, f_k(x)$$

$$\text{Subject to}\quad g_j(x) \le 0, \qquad j = 1, 2, \ldots, m$$

In the given problem, the objective functions $f_1(x), f_2(x), \ldots, f_k(x)$ need to be minimized simultaneously. The problem has $m$ constraints.

1.7 SOLVING OPTIMIZATION PROBLEMS


In the last few decades, many algorithms have been developed for solving different types of opti-
mization problems. The selection of a suitable algorithm for solving an optimization problem
depends on the nature of the problem. The major advancement in optimization happened after the
development of digital computers. Currently, many options are available to solve complex optimiza-
tion problems. A few techniques are explained below.

1.7.1 Classical optimization techniques


The classical optimization techniques are useful for solving constrained and unconstrained single-​
and multi-​variable optimization problems that involve continuous and differentiable functions. The
following are a few popular classical optimization techniques.

1.7.1.1 Direct methods
Direct methods are simple search approaches for obtaining the optimum solution. These methods do not require derivatives at any point to find the optimal solution. The golden-section search (Press et al., 2007) can be used to solve one-dimensional problems, whereas the univariate search or random search method (Rastrigin, 1963) can be used for solving multi-dimensional problems.
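A compact Python sketch of the golden-section search mentioned above is given below for a one-dimensional minimization; the test function is illustrative only:

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Minimize a unimodal function f on [a, b] without using derivatives."""
    inv_phi = (math.sqrt(5) - 1) / 2           # 1 / golden ratio ≈ 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):                        # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                                  # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Example: the minimum of f(x) = (x - 3)^2 + 2 on [0, 10] is at x = 3.
print(golden_section_min(lambda x: (x - 3) ** 2 + 2, 0, 10))
```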

1.7.1.2 Gradient methods
In gradient methods, derivative information of the objective function is used to locate the solution. The first derivative gives the slope of the function, which becomes zero at an optimal point, and the gradient (the direction of steepest change) guides the search towards that point. For a one-dimensional problem, Newton's method (Avriel, 2003) can be used to find the optimum of the function.
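As a sketch of this idea, Newton's method for a one-dimensional function repeatedly updates $x \leftarrow x - f'(x)/f''(x)$ until the first derivative (the slope) is close to zero. The example function below is illustrative only:

```python
def newton_optimize(df, d2f, x0, tol=1e-8, max_iter=50):
    """Find a stationary point of f by driving its first derivative df to zero,
    using the second derivative d2f (Newton's method for optimization)."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: f(x) = x^4 - 3x^2 + 2, so f'(x) = 4x^3 - 6x and f''(x) = 12x^2 - 6.
# Starting near x0 = 1.5, the iteration converges to the local minimum at
# x = sqrt(3/2) ≈ 1.2247, where the slope f'(x) is zero.
x_star = newton_optimize(lambda x: 4 * x**3 - 6 * x,
                         lambda x: 12 * x**2 - 6,
                         x0=1.5)
print(x_star)
```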

1.7.1.3 Linear programming methods


In this method, linear mathematical functions are formulated to represent both the objective
functions and constraints to derive the optimal solution. For single- and two-variable linear programming problems, the graphical solution method can be used. For problems with more than two variables,
the simplex method (Murty, 1983) can be used to determine the optimal solution.

1.7.1.4 Interior point method


The interior point method (IPM), also referred to as the barrier method, is generally used to solve linear and nonlinear convex optimization problems. The method was first proposed by the Soviet mathematician Dikin in 1967 and reinvented in the USA in the mid-1980s. In this method, violations of the inequality constraints are prevented by augmenting the objective function with a barrier term that forces the unconstrained optimum to lie in the feasible space.

1.7.2 Advanced optimization techniques


Most of the real mining optimization problems involve complexities like a large number of variables,
non-​linearity, and multiple conflicting objectives. Furthermore, it is difficult to find the global
optimum solution due to the large search space. In general, if the system cannot be solved using
the classical optimization methods, evolutionary algorithms (EAs) can be used. EAs are
applied to a large-​scale optimization problem for obtaining near-​optimum solutions. This type of
algorithm can be easily applied to an optimization problem with many decision variables and non-​
linear objective functions and constraints.

The genetic algorithm (GA), described by Goldberg (1989), was one of the first evolutionary optimization techniques. The GA was designed based on the Darwinian principle of 'the survival of the fittest and the natural process of evolution through reproduction'. In the last few decades, many
other evolutionary algorithms like Particle Swarm Optimization (PSO) (Eberhart and Kennedy,
1995), estimation of distribution algorithm (EDA) (Pelikan, 2005), Tabu search (Glover, 1986), Ant
Colony (Colorni et al., 1991), etc., have been developed.
In this type of algorithm, the solution process starts from a population of random candidate solutions and moves towards the optimum through successive generation and selection.

2 Basics of probability and statistics

2.1 DEFINITION OF PROBABILITY
Probability is defined as the chance of occurrence of an event in an experiment. The set of all possible outcomes is called the sample space, and a subset of the sample space is known as an event.
If there are S exhaustive (i.e., at least one of the events must occur), mutually exclusive (i.e., only
one event occurs at a time), and equally likely outcomes of a random experiment (i.e., equal chances
of occurrence of each event) and r of them are favourable to the occurrence of an event A, the prob-
ability of the occurrence of the event A is given by

$$P(A) = \frac{r}{S} \tag{2.1}$$
It is sometimes expressed as ‘odds in favour of A’ or the ‘odds against A’. The ‘odds in favour of
A’ is defined as the ratio of occurrence of event A to the non-​occurrence of event A. On the other
hand, ‘odds against A’ is defined as the ratio of non-​occurrence of event A to the occurrence of
event A.

$$\text{Odds in favour of } A = \frac{\text{Probability of occurrence of event } A}{\text{Probability of non-occurrence of event } A} = \frac{r/S}{(S-r)/S} = \frac{r}{S-r}$$

Again,

$$\text{Odds against } A = \frac{\text{Probability of non-occurrence of event } A}{\text{Probability of occurrence of event } A} = \frac{(S-r)/S}{r/S} = \frac{S-r}{r}$$

The total probability of the occurrence of any event ranges from 0 to 1.

i.e., $0 \le P(A) \le 1$

$P(A) = 0$ indicates event A is impossible, and $P(A) = 1$ indicates the event is certain.


In the above discussion, the discrete sample space was considered. But, the probability can also
be determined for a continuous sample space. For a continuous sample space, the probability of
occurrence is measured as a probability density function. The probability density function of
any continuous random variable gives the relative likelihood of any outcome in a specific range.
Therefore, for a continuous random variable, the probability of any single outcome is zero.

Example 2.1: Two detonators are picked at random from a detonator box that has 12 detonators,
of which four are defective. Determine the probability that both of the chosen detonators are defective.


Solution: Let A be the event of picking two defective detonators.


The probability of occurrence of event A is given by

$$P(A) = \frac{\text{Number of ways of selecting 2 defective detonators out of 4}}{\text{Number of ways of selecting 2 detonators out of 12}}$$

$$\Rightarrow P(A) = \frac{{}^{4}C_{2}}{{}^{12}C_{2}} = \frac{6}{66} = \frac{1}{11} \approx 0.09$$

where ${}^{4}C_{2}$ represents the number of ways two items can be picked from four items, and ${}^{12}C_{2}$ represents the number of ways two items can be picked from 12 items. The probability of occurrence of event A is approximately 0.09; therefore, the probability that both of the picked detonators are defective is about 0.09.
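The combinatorial calculation in Example 2.1 can be verified directly with Python's built-in math.comb function:

```python
from math import comb

# P(both picked detonators are defective) = C(4, 2) / C(12, 2)
p = comb(4, 2) / comb(12, 2)
print(p)   # 6/66 = 1/11 ≈ 0.0909
```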

Example 2.2: From an open-pit coal mine, 500 workers were randomly chosen. It was found that 30 workers experienced an injury in the year 2020. The distribution of injuries, based on the younger age group (age ≤ 35 years) and the older age group (age > 35 years), generates the following cross-classification table.

                        Number of workers
Age group               Injured    Non-injured    Row total
Younger age group       10         120            130
Older age group         20         350            370
Column total            30         470            500

Determine the odds of injury for the younger age group compared to the older age group.

Solution
We have,

Number of injured workers in the younger age group = NYI = 10
Number of non-injured workers in the younger age group = NYNI = 120
Number of injured workers in the older age group = NOI = 20
Number of non-injured workers in the older age group = NONI = 350

Odds of injury for the younger group = NYI / NYNI = 10/120

Odds of injury for the older group = NOI / NONI = 20/350

∴ Odds of injury for the younger age group compared to the older age group = (10/120) × (350/20) ≈ 1.46

Therefore, the odds of injury for the younger age group are about 1.46 times the odds for the older age group.
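The odds ratio for a 2×2 cross-classification of this kind can be computed directly in code; the Python sketch below is an illustrative reproduction of the calculation (the variable names are chosen here for readability).

# 2x2 cross-classification: injured / non-injured counts for each age group
younger = {"injured": 10, "non_injured": 120}
older = {"injured": 20, "non_injured": 350}

odds_younger = younger["injured"] / younger["non_injured"]
odds_older = older["injured"] / older["non_injured"]
odds_ratio = odds_younger / odds_older
print(f"Odds ratio (younger vs older) = {odds_ratio:.2f}")  # about 1.46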

2.2 ADDITION THEOREM OF PROBABILITY


If two events A and B are mutually exclusive, the probability of occurrence of either A or B is
given by

P ( A ∪ B ) = P ( A) + P ( B ) (2.2)

where P(A) is the probability of event A and P(B) is the probability of event B.


If events A and B are not mutually exclusive, then

P ( A ∪ B ) = P ( A) + P ( B ) – P ( A ∩ B ) (2.3)

P ( A ∩ B ) represents the probability of occurrence of both events simultaneously.


For n mutually exclusive events A1, A2, …, An, the probability of occurrence of any one of them can be presented as:

P(A1 ∪ A2 ∪ … ∪ An) = P(A1) + P(A2) + … + P(An) (2.4)

In the case of three non-​mutually exclusive events:

P ( A ∪ B ∪ C ) = P ( A ) + P ( B ) + P (C ) – P ( A ∩ B ) − P ( B ∩ C ) − P ( A ∩ C ) + P ( A ∩ B ∩ C ) (2.5)

where
P(A) = probability of occurrence of event A
P(B) = probability of occurrence of event B
P(C) = probability of occurrence of event C
P(A ∩ B) = probability that events A and B occur simultaneously
P(B ∩ C) = probability that events B and C occur simultaneously
P(A ∩ C) = probability that events A and C occur simultaneously
P(A ∩ B ∩ C) = probability that all three events A, B, and C occur simultaneously

Example 2.3: The probability of failure of dump truck A is 0.7, and that of dump truck B is 0.2. If the probability that both dump trucks fail simultaneously is 0.3, determine the probability that neither of the dump trucks fails.

Solution
We have,

Probability of failure of dump truck A = P(A) = 0.7

Probability of failure of dump truck B = P(B) = 0.2

Probability of failure of both dump trucks A and B = P(A ∩ B) = 0.3

Probability of failure of either dump truck A or B = P(A ∪ B) = P(A) + P(B) − P(A ∩ B)

⇒ P(A ∪ B) = 0.7 + 0.2 − 0.3 = 0.6

Probability that neither of the dump trucks fails = 1 − P(A ∪ B) = 1 − 0.6 = 0.4

Therefore, the probability that neither dump truck fails is 0.4.

2.3 PROBABILITY DISTRIBUTIONS
The probability distribution of a discrete random variable is referred to as the probability mass function (PMF), whereas that of a continuous random variable is referred to as the probability density function (PDF). A PMF, P(X), or a PDF, f(x), must be non-negative, and the total probability over the entire sample space must equal 1. A probability distribution can be represented by a discrete or a continuous function, as explained in the subsequent subsections.
The probability distribution of a discrete random variable is represented by spikes of probability values corresponding to the values of the random variable.
An important probability measure is the cumulative distribution function (CDF). The CDF of a discrete random variable, FX(k), represents the probability that the random variable X is less than or equal to k. It can be represented mathematically as

P(X ≤ k) = FX(k) = Σ_{x=0}^{k} P(x) (2.6)

For example, suppose a mine worker takes leave at random on any one day of the week. The probabilities of taking leave on each day are given in Table 2.1. The probabilities of taking leave on Sunday and Saturday are 0.3 and 0.2, respectively; for the remaining days, the probability of taking leave is 0.1.
The spikes of the PMF and the CDF for the given data (Table 2.1) are shown in Figures 2.1(a) and 2.1(b), respectively. The CDF values can be determined directly from the PMF values by cumulating them. The CDF and PMF values for the first day of the week (Monday) are the same. For each subsequent day, the CDF value is calculated as the sum of the current day's PMF value and the previous day's CDF value. The CDF value for the last day is 1 [Figure 2.1(b)], which equals the sum of the PMF values over all days.
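This cumulation of PMF values into CDF values can be expressed in a few lines of code; the following Python sketch uses the leave probabilities of Table 2.1 and is provided only as an illustration (itertools.accumulate performs the running sum).

from itertools import accumulate

days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
pmf = [0.1, 0.1, 0.1, 0.1, 0.1, 0.2, 0.3]   # Table 2.1

cdf = list(accumulate(pmf))                  # running sum of the PMF
for day, p, F in zip(days, pmf, cdf):
    print(f"{day}: PMF = {p:.1f}, CDF = {F:.1f}")
# The final CDF value is 1.0, the total probability.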
On the other hand, the probability distribution of a continuous random variable is represented by a smooth curve, as shown in Figure 2.2(a). The CDF of a continuous random variable is a non-decreasing function with a maximum value of 1, as shown in Figure 2.2(b). Mathematically, it can be represented as

P(X ≤ k) = FX(k) = ∫_0^k f(x) dx (2.7)

where f(x) is the probability density function of the continuous random variable X, and FX(k) is the cumulative distribution function for X ≤ k, given by the ordinate of the CDF curve at k.
The shaded area of Figure 2.2(a) represents the probability that X ≤ k, which equals the CDF value FX(k) [Figure 2.2(b)].

TABLE 2.1
Probability Mass Function

Day           Monday   Tuesday   Wednesday   Thursday   Friday   Saturday   Sunday
X             1        2         3           4          5        6          7
P(X = k)      1/10     1/10      1/10        1/10       1/10     2/10       3/10
FX(k)         1/10     2/10      3/10        4/10       5/10     7/10       1

FIGURE 2.1 Probability distribution of a discrete random variable: (a) probability mass function (PMF) and
(b) cumulative distribution function (CDF).

FIGURE 2.2 Probability density function of a continuous random variable: (a) shaded area represents
P ( X ≤ k ) and (b) cumulative distribution function (CDF), showing the probability vs. k.

The total probability for any PDF is equal to 1 and can be presented as:

∫_{−∞}^{+∞} f(x) dx = 1 (2.8)

The probability of a continuous random variable taking a value exactly equal to a given value is zero. This follows from

P(X = m) = ∫_m^m f(x) dx = 0

2.4 COMMON PROBABILITY DISTRIBUTION FUNCTIONS


2.4.1 Uniform distribution
For the discrete uniform distribution, a finite number of values are equally likely to be observed, as shown in Figure 2.3(a). The PMF of a discrete uniform random variable on the interval [a, b] is given by

f(x) = 1/n, for a ≤ x ≤ b, where n = b − a + 1

The cumulative distribution function of the discrete uniform distribution on the interval [a, b] can be determined as

F(xi) = i/n, i = 1, 2, …, n

where xi denotes the i-th value in the interval. The CDF of a discrete uniform distribution is shown in Figure 2.3(b).

FIGURE 2.3 Characteristics of uniform distribution: (a) PMF of a discrete random variable, (b) CDF of
a discrete random variable, (c) PDF of a continuous random variable, and (d) CDF of a continuous random
variable.

On the other hand, the probability density function of the continuous uniform distribution on the interval [a, b] is given by

f(x) = 0,           x < a
       1/(b − a),   a ≤ x ≤ b (2.9)
       0,           x > b

The cumulative distribution function of the continuous uniform distribution on the interval [a, b] can be determined as

P(X ≤ x) = F(x) = ∫_a^x f(t) dt

FX(x) = 0,                 x < a
        (x − a)/(b − a),   a ≤ x ≤ b (2.10)
        1,                 x > b

A uniform distribution, also called a rectangular distribution, has a constant probability density, as shown in Figure 2.3(c). The nature of the CDF of a uniform distribution is shown in Figure 2.3(d). The CDF of the uniform distribution is a straight line that intercepts the x-axis at the value a and has a slope of 1/(b − a).

Example 2.4: The daily explosive demand in an opencast mine is uniformly distributed between 2550 and 3250 kg. The explosive tank, which has a storage capacity of 3000 kg, is refilled daily after the end of the last shift. What is the probability that the tank will be empty before the end of the last shift?

Solution
For a uniform distribution of demand in the range a (= 2550) to b (= 3250), the probability density function can be represented as

f(x) = 1/(b − a) = 1/(3250 − 2550) = 1/700

P(3000 < x ≤ 3250) = ∫_{3000}^{3250} f(x) dx = (1/700) ∫_{3000}^{3250} dx = (1/700) [x]_{3000}^{3250} = 250/700 ≈ 0.357

Therefore, the probability that the explosive tank will be empty before the end of the last shift is approximately 0.357.

Example 2.5: The daily coal production from a mine follows a continuous uniform distribution over the range [3000, 3500] tonnes. Find the probability that the production on a randomly selected day is greater than 3200 tonnes.

Solution
The probability that the production on a randomly selected day is greater than 3200 tonnes is given by

FIGURE 2.4 PDF of the given uniform distribution function.

P(3200 < x ≤ 3500) = ∫_{3200}^{3500} f(x) dx = ∫_{3200}^{3500} 1/(3500 − 3000) dx = (1/500) [x]_{3200}^{3500} = 300/500 = 0.6

Therefore, the probability of coal production greater than 3200 tonnes is 0.6.
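The uniform-distribution probabilities in Examples 2.4 and 2.5 can be checked with a small helper function; the Python sketch below is an illustrative addition (not part of the original text) that evaluates P(X > k) for a continuous uniform random variable on [a, b].

def uniform_prob_greater(a: float, b: float, k: float) -> float:
    """P(X > k) for X uniformly distributed on [a, b]."""
    if k <= a:
        return 1.0
    if k >= b:
        return 0.0
    return (b - k) / (b - a)

# Example 2.4: demand uniform on [2550, 3250] kg, tank capacity 3000 kg
print(round(uniform_prob_greater(2550, 3250, 3000), 3))  # about 0.357

# Example 2.5: production uniform on [3000, 3500] tonnes, threshold 3200 tonnes
print(round(uniform_prob_greater(3000, 3500, 3200), 3))  # 0.6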

Example 2.6: A mine worker arrives at the pit bottom at a random time (i.e., with no prior knowledge of the scheduled start time) to ride the cage on its next trip. The cage leaves the pit bottom every 10 minutes without fail, and hence the next trip will start at any time during the next 10 minutes with evenly distributed probability (a uniform distribution). Find the probability that the cage will start within 5 minutes of the worker's arrival at the pit bottom.

Solution
For a uniform distribution, the probability density function (Figure 2.4) is a horizontal line over the interval from 0 to 10 minutes. As the total probability is one, the total area under the curve must be one, and hence the height of the horizontal line is 1/10 per minute.
The probability that the cage will start within 5 minutes of the worker's arrival at the pit bottom is P(0 ≤ X ≤ 5). This corresponds to the shaded region in Figure 2.4, whose area is 5 × (1/10) = 1/2.
Thus, the probability that the cage will start within the next 5 minutes after the arrival of the worker at the pit bottom is

P(0 ≤ X ≤ 5) = 1/2

2.4.2 Normal distribution
Although many distribution functions have been developed and applied for different purposes, the Gaussian distribution, also known as the normal distribution, is the most widely used distribution across all disciplines. The probability density function of the normal distribution is given by

f(x) = (1/(σ√(2π))) e^(−(x − µ)²/(2σ²)) (2.11)

where
µ = mean of the distribution
σ² = variance of the distribution
x ∈ (−∞, ∞)

FIGURE 2.5 The probability density function of a normal distribution.

The nature of the PDF of a normal distribution is shown in Figure 2.5. The PDF of the normal distribution is a bell-shaped curve that is symmetric around the mean µ. Therefore, the probabilities of the right-hand and left-hand sides of the mean µ are equal, each being 0.5.
A normal distribution with an arbitrary data range can be converted into the standardized normal density by putting (x − µ)/σ = z, which is referred to as the z-score or standard score. Differentiating z = (x − µ)/σ, we get dz = dx/σ. Thus, Eq. (2.11) can be rewritten as

f(z) = (1/√(2π)) e^(−z²/2)

In the normalized PDF, µ = 0 and σ = 1, as shown in Figure 2.6. The CDF of the standardized normal density function is given by

F(z) = ∫_{−∞}^{z} f(u) du = (1/√(2π)) ∫_{−∞}^{z} e^(−u²/2) du

The peak of the curve (at the mean) of the normalized PDF of the normal distribution is approximately 0.399 (= 1/√(2π)).
Furthermore, any normal distribution can be standardized to suit the particular mean and standard deviation of interest. This can be demonstrated using the probability P(X ≤ b), written as

P(X ≤ b) = ∫_{−∞}^{b} f(x) dx = ∫_{−∞}^{b} (1/(σ√(2π))) e^(−(x − µ)²/(2σ²)) dx

The above expression gives the area under the curve from the extreme left (−∞) to x = b, represented by the shaded region shown in Figure 2.7.

FIGURE 2.6 The probability density function of the standardized normal distribution curve.

FIGURE 2.7 Shaded area represents the probability of X ≤ b.

Thus, by substituting z = (x − µ)/σ (so that dz = dx/σ), the probability expression can be written as:

P(X ≤ b) = ∫_{−∞}^{b} (1/(σ√(2π))) e^(−(x − µ)²/(2σ²)) dx = ∫_{−∞}^{(b − µ)/σ} (1/√(2π)) e^(−z²/2) dz

We can transform any data that follows a normal distribution to a standardized normal distribution.
This transformation will help to determine the probabilities for any normally distributed data using
the standardized normal distribution (Table 2.2) without any application of integral calculus. The
beauty of normal distribution is that we can easily calculate the probability within a specific range
by knowing the standard deviation value. For example, if the mean is μ and the standard deviation
is σ, then about 68% of the values lie within μ ± σ, and 95% of values lie within μ ± 2σ in a normal
distribution, as shown in Figure 2.8.
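These coverage probabilities can be reproduced numerically from the standard normal CDF; the Python sketch below uses the normal distribution available in Python's standard library and is provided only as an illustrative check.

from statistics import NormalDist

standard_normal = NormalDist()  # mu = 0, sigma = 1

# Probability of lying within mu +/- k*sigma for any normal distribution
for k in (1, 2, 3):
    coverage = standard_normal.cdf(k) - standard_normal.cdf(-k)
    print(f"within mu +/- {k} sigma: {coverage:.4f}")
# Prints approximately 0.6827, 0.9545, 0.9973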

TABLE 2.2
Standardized Normal Distribution Table

z 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
0.0 0.5000 0.5040 0.5080 0.5120 0.5160 0.5199 0.5239 0.5279 0.5319 0.5359
0.1 0.5398 0.5438 0.5478 0.5517 0.5557 0.5596 0.5636 0.5675 0.5714 0.5753
0.2 0.5793 0.5832 0.5871 0.5910 0.5948 0.5987 0.6026 0.6064 0.6103 0.6141
0.3 0.6179 0.6217 0.6255 0.6293 0.6331 0.6368 0.6406 0.6443 0.6480 0.6517
0.4 0.6554 0.6591 0.6628 0.6664 0.6700 0.6736 0.6772 0.6808 0.6844 0.6879
0.5 0.6915 0.6950 0.6985 0.7019 0.7054 0.7088 0.7123 0.7157 0.7190 0.7224
0.6 0.7257 0.7291 0.7324 0.7357 0.7389 0.7422 0.7454 0.7486 0.7517 0.7549
0.7 0.7580 0.7611 0.7642 0.7673 0.7704 0.7734 0.7764 0.7794 0.7823 0.7852
0.8 0.7881 0.7910 0.7939 0.7967 0.7995 0.8023 0.8051 0.8078 0.8106 0.8133
0.9 0.8159 0.8186 0.8212 0.8238 0.8264 0.8289 0.8315 0.8340 0.8365 0.8389
1.0 0.8413 0.8438 0.8461 0.8485 0.8508 0.8531 0.8554 0.8577 0.8599 0.8621
1.1 0.8643 0.8665 0.8686 0.8708 0.8729 0.8749 0.8770 0.8790 0.8810 0.8830
1.2 0.8849 0.8869 0.8888 0.8907 0.8925 0.8944 0.8962 0.8980 0.8997 0.9015
1.3 0.9032 0.9049 0.9066 0.9082 0.9099 0.9115 0.9131 0.9147 0.9162 0.9177
1.4 0.9192 0.9207 0.9222 0.9236 0.9251 0.9265 0.9279 0.9292 0.9306 0.9319
1.5 0.9332 0.9345 0.9357 0.9370 0.9382 0.9394 0.9406 0.9418 0.9429 0.9441
1.6 0.9452 0.9463 0.9474 0.9484 0.9495 0.9505 0.9515 0.9525 0.9535 0.9545
1.7 0.9554 0.9564 0.9573 0.9582 0.9591 0.9599 0.9608 0.9616 0.9625 0.9633
1.8 0.9641 0.9649 0.9656 0.9664 0.9671 0.9678 0.9686 0.9693 0.9699 0.9706
1.9 0.9713 0.9719 0.9726 0.9732 0.9738 0.9744 0.9750 0.9756 0.9761 0.9767
2.0 0.9772 0.9778 0.9783 0.9788 0.9793 0.9798 0.9803 0.9808 0.9812 0.9817
2.1 0.9821 0.9826 0.9830 0.9834 0.9838 0.9842 0.9846 0.9850 0.9854 0.9857
2.2 0.9861 0.9864 0.9868 0.9871 0.9875 0.9878 0.9881 0.9884 0.9887 0.9890
2.3 0.9893 0.9896 0.9898 0.9901 0.9904 0.9906 0.9909 0.9911 0.9913 0.9916
2.4 0.9918 0.9920 0.9922 0.9924 0.9927 0.9929 0.9931 0.9932 0.9934 0.9936
2.5 0.9938 0.9940 0.9941 0.9943 0.9945 0.9946 0.9948 0.9949 0.9951 0.9952
2.6 0.9953 0.9955 0.9956 0.9957 0.9958 0.9960 0.9961 0.9962 0.9963 0.9964
2.7 0.9965 0.9966 0.9967 0.9968 0.9969 0.9970 0.9971 0.9972 0.9973 0.9974
2.8 0.9974 0.9975 0.9976 0.9977 0.9977 0.9978 0.9979 0.9979 0.9980 0.9981
2.9 0.9981 0.9982 0.9982 0.9983 0.9984 0.9984 0.9985 0.9985 0.9986 0.9986

The probability value for a normally distributed variable can be calculated from the standardized normal distribution table. The row labels contain the integer part and the first decimal place of z, and the column labels contain the second decimal place of z. The values within the table are the probabilities corresponding to the different z-values; each is the area under the normal curve from negative infinity up to the specified z-value.
For example, to find the value for z ≤ 0.72, the cell in the row labelled 0.7 and the column labelled 0.02 gives a cumulative probability (from −∞) of 0.7642. Four different cases can be observed when the z-score value is used for probability calculation.

Case 1: P(z ≤ a)
The probability that z is less than or equal to any specific positive value a can be determined directly from the standard normal distribution table (Table 2.2). This cumulative probability is generally represented as Φ(a).

FIGURE 2.8 Probability density function of a normal distribution with σ, 2σ, and 3σ limits.

FIGURE 2.9 Shaded area represents P ( z ≤ a ).

Therefore, for any positive value a, the probability that z is less than or equal to a can be determined as

P(z ≤ a) = Φ(a) for a > 0

Graphically, this can be represented as shown in Figure 2.9. The shaded region (Figure 2.9) represents the probability that z is less than or equal to a.

Case 2: P ( z ≤ − a )
Since the standard normal distribution table only provides the probability for values less than a
positive z-​value (i.e., z-​values on the right-​hand side of the mean), the probability for z less than a
negative value can be determined using an indirect method. The standard normal distribution has
a total area (probability) equal to 1, and it is also symmetrical around the mean. Thus, the probability
for z ≤ − a [Figure 2.10(a)] will be the same as the probability of z > a [Figure 2.10(b)]. The prob-
ability of z ≤ a is shown in Figure 2.10(c).

FIGURE 2.10 Graphical representation of the probability: (a) shaded area representing P ( z ≤ − a) , (b) the
shaded area representing P ( z ≥ a) , and (c) the shaded area representing P ( z ≤ a) .

We have,

P(z ≤ −a) = Φ(−a) for a > 0

Also, we know that

P(z > a) + P(z ≤ a) = 1

⇒ P(z > a) = 1 − P(z ≤ a) = 1 − Φ(a)

⇒ P(z ≤ −a) = 1 − P(z ≤ a) = 1 − Φ(a)

Note: P(z ≤ −a) = P(z > a)

Therefore, Φ(−a) = 1 − Φ(a).

Case 3: P(a < z ≤ b)
The shaded area in Figure 2.11 represents the probability that a < z ≤ b, which is given by

P(a < z ≤ b) = P(z ≤ b) − P(z ≤ a) = Φ(b) − Φ(a)

Case 4: P(−a < z ≤ b)
The shaded area in Figure 2.12 represents the probability that −a < z ≤ b, which is given by

P(−a < z ≤ b) = P(z ≤ b) − P(z ≤ −a) = P(z ≤ b) − P(z > a)
             = P(z ≤ b) − {1 − P(z ≤ a)} = Φ(b) − {1 − Φ(a)}
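All four cases reduce to values of the cumulative function Φ; the short Python sketch below evaluates them using the standard library's normal distribution (an illustrative addition with example values a = 1 and b = 1.83, not part of the original text).

from statistics import NormalDist

phi = NormalDist().cdf   # standard normal CDF, Phi(z)

a, b = 1.0, 1.83
print(f"Case 1: P(z <= a)      = {phi(a):.4f}")            # Phi(a)
print(f"Case 2: P(z <= -a)     = {1 - phi(a):.4f}")        # 1 - Phi(a)
print(f"Case 3: P(a < z <= b)  = {phi(b) - phi(a):.4f}")   # Phi(b) - Phi(a)
print(f"Case 4: P(-a < z <= b) = {phi(b) - (1 - phi(a)):.4f}")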

FIGURE 2.11 Shaded area represents P ( a < z ≤ b) .

FIGURE 2.12 Shaded area represents P ( − a < z ≤ b) .

Example 2.7: The mine management is investigating the time the workers take to complete a specific task using an individual learning approach. The management determined that the time to complete the task follows a normal distribution with a mean µ of 60 minutes and a standard deviation σ of 6 minutes. Determine the probability that a randomly selected mine worker will perform the task in

a) Less than 71 minutes.
b) Greater than 66 minutes.
c) Less than 54 minutes.
d) Greater than 66 minutes and less than 71 minutes.

Solution

(a) The probability of completing the task in less than 71 minutes can be calculated as:

P(X < t) = P((X − µ)/σ < (t − µ)/σ) = P(z < (71 − 60)/6) = P(z < 1.83)

To calculate the probability, we need to find the value corresponding to z = 1.83 in Table 2.2. First, find ‘1.8’ on the left side (z column) and move across the table to the column headed ‘0.03’; the value in that cell is 0.9664.

P(z < 1.83) = 0.9664

Thus, the probability that a randomly selected mine worker finishes the task in less than 71 minutes is 96.64%.

(b) The probability of completing the task in more than 66 minutes can be calculated as:

P(X > t) = P((X − µ)/σ > (t − µ)/σ) = P(z > (66 − 60)/6) = P(z > 1) = 1 − P(z ≤ 1) = 1 − 0.8413 = 0.1587

Find ‘1.0’ on the left side (z column) and move across the table to the column headed ‘0.00’; the value in that cell is 0.8413. Thus, the probability that a randomly selected mine worker finishes the task in more than 66 minutes is 15.87%.

(c) The probability of completing the task in less than 54 minutes can be calculated as:

P(z < (t − µ)/σ) = P(z < (54 − 60)/6) = P(z < −1) = P(z > 1) = 0.1587 [refer to sub-problem (b)]

Thus, the probability that a randomly selected mine worker finishes the task in less than 54 minutes is 15.87%.

(d) The probability of completing the task in greater than 66 minutes and less than 71 minutes can be calculated as:

P(t1 < X < t2) = P((t1 − µ)/σ < z < (t2 − µ)/σ) = P((66 − 60)/6 < z < (71 − 60)/6)

⇒ P(1 < z < 1.83) = P(z < 1.83) − P(z < 1) = 0.9664 − 0.8413 = 0.1251

Therefore, the probability that a randomly selected mine worker finishes the task between 66 and 71 minutes is 12.51%.
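The four answers can also be verified without table lookups by using the normal distribution from Python's standard library; the sketch below is an illustrative check (its figures differ very slightly from the table-based values because z = 11/6 is not rounded to 1.83).

from statistics import NormalDist

task_time = NormalDist(mu=60, sigma=6)   # task completion time, in minutes

p_a = task_time.cdf(71)                      # P(X < 71)
p_b = 1 - task_time.cdf(66)                  # P(X > 66)
p_c = task_time.cdf(54)                      # P(X < 54)
p_d = task_time.cdf(71) - task_time.cdf(66)  # P(66 < X < 71)
print(f"(a) {p_a:.4f}  (b) {p_b:.4f}  (c) {p_c:.4f}  (d) {p_d:.4f}")
# Approximately (a) 0.9666, (b) 0.1587, (c) 0.1587, (d) 0.1253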

Example 2.8: The mean grade of the iron ore in a particular mine is 65%, with a standard deviation of 8%. Determine the probability that the mean grade of 25 random iron ore samples is at most 68%, assuming the grade is normally distributed.

Solution
Given data:

Mean grade of the random samples (x̄) = 68%
Number of random samples (n) = 25
Mean grade of the iron ore in the mine (µ) = 65%
Standard deviation of the iron ore grade (σ) = 8%

The z-score can be determined as

z = (x̄ − µ)/(σ/√n) = (68 − 65)/(8/√25) = 1.875 ≈ 1.88

From Table 2.2, the probability corresponding to z = 1.88 is 0.9699. This reveals that the probability that the mean grade of 25 randomly selected iron ore samples is at most 68% is approximately 0.97. Equivalently, a sample mean grade of 68% or higher would be observed only about 3% of the time if the samples indeed come from this iron ore mine.
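The same calculation on the sampling distribution of the mean can be scripted as follows; this is an illustrative Python sketch (the standard error of the mean is σ/√n) and is not part of the original text.

from math import sqrt
from statistics import NormalDist

mu, sigma, n = 65.0, 8.0, 25        # population mean grade (%), std. dev. (%), sample size
x_bar = 68.0                         # observed sample mean grade (%)

standard_error = sigma / sqrt(n)                 # 8 / 5 = 1.6
z = (x_bar - mu) / standard_error                # 1.875
p_at_most = NormalDist().cdf(z)                  # P(sample mean <= 68%)
print(f"z = {z:.3f}, P(mean <= 68%) = {p_at_most:.4f}")  # about 0.9696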

2.4.3 Poisson distribution
The Poisson distribution, a discrete probability distribution, gives the probability of a number of occurrences of a specific event within a particular time interval. In a Poisson process, events occur at a known constant mean rate, and the times between consecutive events are independent of one another. The Poisson distribution is a count distribution and is thus represented by a probability mass function.
The probability mass function of the Poisson distribution is written as:

f(X = x) = (λ^x e^(−λ))/x!, x = 0, 1, 2, … (2.12)

where
λ = mean number of occurrences within the specific time interval (the mean of the distribution)
x = number of occurrences of the given event
x! = factorial of x

The expected number of occurrences E(X) per unit time is equal to the mean number of arrivals (λ) per unit time. It can be derived as follows:

E(X) = Σ_{x=0}^{∞} x f(x) = Σ_{x=0}^{∞} x (λ^x e^(−λ))/x! = λ e^(−λ) Σ_{x=1}^{∞} λ^(x−1)/(x − 1)! = λ e^(−λ) · e^λ = λ

Example 2.9: The failures of a shovel occur according to a Poisson distribution with an average of four failures in every 20 weeks. Determine the probability that there will not be more than one failure during a particular week.

Solution
The average number of failures per week is given by

λ = 4/20 = 0.2

The probability mass function for a Poisson distribution is given by

P(X = x) = (λ^x e^(−λ))/x!
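The probability of not more than one failure during a particular week is therefore

P(X ≤ 1) = P(X = 0) + P(X = 1) = e^(−0.2) + 0.2 e^(−0.2) = 1.2 e^(−0.2) ≈ 0.982

Therefore, the probability that there will not be more than one failure during a particular week is approximately 0.982.
This result can also be cross-checked numerically; the short Python sketch below is an illustrative addition that evaluates the Poisson probabilities directly from the PMF.

from math import exp, factorial

def poisson_pmf(x: int, lam: float) -> float:
    """Poisson probability mass function, P(X = x)."""
    return (lam ** x) * exp(-lam) / factorial(x)

lam = 0.2  # average number of shovel failures per week
p_at_most_one = poisson_pmf(0, lam) + poisson_pmf(1, lam)
print(f"P(X <= 1) = {p_at_most_one:.4f}")  # about 0.9825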
