
Torben Engelmeyer

Managing Intermittent Demand

Wuppertal, Germany

Doctoral Thesis - University of Wuppertal, 2015

ISBN 978-3-658-14061-8 ISBN 978-3-658-14062-5 (eBook)


DOI 10.1007/978-3-658-14062-5

Library of Congress Control Number: 2016939049

Springer Gabler
© Springer Fachmedien Wiesbaden 2016
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or
dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt
from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained
herein or for any errors or omissions that may have been made.

Printed on acid-free paper

This Springer Gabler imprint is published by Springer Nature


The registered company is Springer Fachmedien Wiesbaden GmbH
Contents

List of Figures
List of Tables
List of Symbols

1 Introduction

I Fundamentals and Methodology

2 Inventory Management
  2.1 Supply Chain Performance Measurement
  2.2 Relevant Costs
  2.3 Inventory Policies
    2.3.1 Stochastic Inventory Models
    2.3.2 Determination of the Order-Up-To-Level S
    2.3.3 Determination of the Reorder Point s

3 Demand Analysis and Forecasting
  3.1 Time Series Analysis
  3.2 Croston-Type Models
  3.3 Integer-Valued Autoregressive Moving Average Processes
    3.3.1 Model Specification
    3.3.2 Process Estimation and Identification
    3.3.3 Forecasting in Pure INAR Processes
    3.3.4 Forecast Aggregation
    3.3.5 Point Forecasts
  3.4 Forecasting Performance Measures

4 Demand Classification
  4.1 ABC Classification
  4.2 Forecast-Based Classification
  4.3 Multi-Criteria Inventory Classification

II Empirical Analysis

5 Simulation Design
  5.1 Data Description and Preparation
  5.2 Classification
  5.3 Simulation Procedure
  5.4 Implementation

6 Results
  6.1 Forecasts
  6.2 Inventory Simulation
    6.2.1 α-Service Level Target
    6.2.2 β-Service Level Target
  6.3 Summary

7 Conclusion

A Appendix

Bibliography
List of Figures

2.1 Inventory levels of one SKU over five weeks
2.2 Different inventory policies
2.3 Reorder point determination subject to an α-service constraint
2.4 Reorder point determination subject to a β-service constraint

3.1 Theoretical ACF and PACF of an AR(2) and an MA(2) process
3.2 Simulated compound Bernoulli process
3.3 Parameter fitting of the Croston procedure
3.4 Simulated INAR(1) process
3.5 Graph representation of the Markov chain of an INAR(1)
3.6 Graph representation of the Markov chain of an INAR(2)
3.7 Forecast of the future PMF of an INAR process
3.8 Graph representation of the Markov chain of an INAR(2)

4.1 Pareto chart of the revenue for a German wholesaler
4.2 SKU clusters based on three single-criteria classifications
4.3 Classification scheme
4.4 Distribution of the ABC and HIL clusters

5.1 Revenue and advertisement over time
5.2 Distribution of the inventory risk index
5.3 Different samples of a rolling simulation
5.4 Rolling forecast over time
5.5 Inventory simulation of an SKU over 30 periods
5.6 Implementation of the inventory simulation algorithm
5.7 Structure of the parallelization setup

6.1 One-step-ahead forecast performance separated according to risk clusters
6.2 One-step-ahead percentage better forecast performance of all SKUs
6.3 Distribution of one-step-ahead MASE separated according to methods
6.4 Difference between achieved and target service in case of an α-service constraint
6.5 Achieved service vs. mean inventory levels (α-service target)
6.6 Distribution of the α-service level separated according to methods
6.7 Comparison of the resulting inventory level and the inventory risk clusters
6.8 Difference between achieved and target service in case of a β-service constraint
6.9 Achieved service vs. mean inventory levels (β-service target)
6.10 Distribution of the β-service level separated according to methods
6.11 Comparison of the resulting inventory levels and the inventory risk clusters

A.1 Five-step-ahead forecast performance separated according to risk cluster
A.2 Five-step-ahead percentage better forecast performance of all SKUs
A.3 Five-step-ahead percentage better forecast performance of M cluster
A.4 Five-step-ahead percentage better forecast performance of N cluster
A.5 Five-step-ahead percentage better forecast performance of O cluster
A.6 One-step-ahead percentage better forecast performance of M cluster
A.7 One-step-ahead percentage better forecast performance of N cluster
A.8 One-step-ahead percentage better forecast performance of O cluster
A.9 Distribution of five-step-ahead MASE separated according to method
A.10 Distribution of the inventory level separated according to method
A.11 Inventory level separated according to inventory risk clusters (CRO/Gamma)
A.12 Inventory level separated according to inventory risk clusters (CRO/Normal)
A.13 Inventory level separated according to inventory risk clusters (ES/Gamma)
A.14 Inventory level separated according to inventory risk clusters (ES/Normal)
A.15 Inventory level separated according to inventory risk clusters (LEV/Gamma)
A.16 Inventory level separated according to inventory risk clusters (LEV/Normal)
A.17 Inventory level separated according to inventory risk clusters (SYN/Gamma)
A.18 Inventory level separated according to inventory risk clusters (SYN/Normal)
A.19 Inventory level separated according to inventory risk clusters (TEU/Gamma)
A.20 Inventory level separated according to inventory risk clusters (TEU/Normal)
A.21 Distribution of the inventory level separated according to method
A.22 Comparison of the resulting inventory level and the inventory risk cluster (CRO/Gamma)
A.23 Comparison of the resulting inventory level and the inventory risk cluster (CRO/Normal)
A.24 Comparison of the resulting inventory level and the inventory risk cluster (ES/Gamma)
A.25 Inventory level separated according to inventory risk clusters (ES/Normal)
A.26 Inventory level separated according to inventory risk clusters (LEV/Gamma)
A.27 Inventory level separated according to inventory risk clusters (LEV/Normal)
A.28 Inventory level separated according to inventory risk clusters (SYN/Gamma)
A.29 Inventory level separated according to inventory risk clusters (SYN/Normal)
A.30 Inventory level separated according to inventory risk clusters (TEU/Gamma)
A.31 Inventory level separated according to inventory risk clusters (TEU/Normal)
List of Tables

2.1 Exemplary demand series with corresponding inventory
2.2 Expected interest rate and gross margin of European industry sectors

4.1 Different weighting schemes of the MCIC approach

5.1 Summary of variables

A.1 Summary of achieved α-service (CRO/Gamma)
A.2 Summary of achieved α-service (CRO/Normal)
A.3 Summary of achieved α-service (ES/Gamma)
A.4 Summary of achieved α-service (ES/Normal)
A.5 Summary of achieved α-service (INAR)
A.6 Summary of achieved α-service (LEV/Gamma)
A.7 Summary of achieved α-service (LEV/Normal)
A.8 Summary of achieved α-service (SYN/Gamma)
A.9 Summary of achieved α-service (SYN/Normal)
A.10 Summary of achieved α-service (TEU/Gamma)
A.11 Summary of achieved α-service (TEU/Normal)
A.12 Summary of achieved β-service (CRO/Gamma)
A.13 Summary of achieved β-service (CRO/Normal)
A.14 Summary of achieved β-service (ES/Gamma)
A.15 Summary of achieved β-service (ES/Normal)
A.16 Summary of achieved β-service (INAR)
A.17 Summary of achieved β-service (LEV/Gamma)
A.18 Summary of achieved β-service (LEV/Normal)
A.19 Summary of achieved β-service (SYN/Gamma)
A.20 Summary of achieved β-service (SYN/Normal)
A.21 Summary of achieved β-service (TEU/Gamma)
A.22 Summary of achieved β-service (TEU/Normal)
List of Symbols

α  Probability of satisfying the demand in a period directly from stock
β  Share of demand which can be delivered directly from stock with no delay
δ  Expected number of periods between two consecutive positive demands
η  Inventory turnover
γ(h)  Autocovariance function at lag h
μy  Expectation of the demand series
μ²ltd  Second moment of the demand during lead time
μY+  Expectation of positive demands
ωi  CAPM risk measure
πy+  Probability of a positive demand in period t
πltd  Probability of a positive demand during lead time
ρ(h)  Autocorrelation function at lag h
σy  Standard deviation of demand
σY+  Standard deviation of positive demands
σltd  Standard deviation of demand during lead time
εt  Error terms
ξ  Probability vector of the current Markov state
ax  Selection vector
Bi  Bernoulli distributed random variable
Ci  Clustering criteria
cij  j-th criterion of the i-th SKU
CV²  Squared coefficient of variation
D  Gap between s and S
fltd  Probability density function of the demand during lead time
G  Maximal plausible demand in a period
h  Holding costs per unit per period
It  Inventory level in period t
J  Number of criteria in MCIC
K  Fixed order costs
L  Lead time
M  Transition matrix of an INAR(p) process
Px  Set of all paths where the sum of the weights of the visited vertices equals x
pltd  Probability mass function of the demand during lead time
Q  Order quantity
qt  Scaled forecasting error
r  Reorder interval
S  Order-up-to level
s  Reorder point
T  Time series length
U  Number of SKUs in MCIC
uf  Return of a risk-free asset
ui  Return of asset i
um  Return of the market portfolio
wij  Weight of the j-th criterion of the i-th SKU
Xt  Random variable modeling the occurrence of a positive demand in period t
yf  First positive demand
Yt  Random variable modeling the demand in period t
yt  Observed demand in period t
Yt+  Random variable modeling the positive demand in period t

1 Introduction

The past years of logistics management have been studded with buzzwords
trying to condense the achievements and problems of their times. Just-in-time
production changed the view on inventories from assets which convert to cash
into pure cost drivers. With the increasing availability of data, concepts like
Efficient Consumer Response and Predictive Analytics resulted in the need for
optimized decisions based on uncertain demand information. These trends show
that inventory management is a crucial function of logistics, and it is almost
a truism that optimal inventory decisions rest on optimal forecasts. In practice,
however, this frequently leads to problems. On the one hand, forecasts are
calculated with sophisticated methods that exploit all the features of a demand
series in order to be as accurate as possible. On the other hand, there is a wide
range of stochastic inventory models for all kinds of circumstances, which come
with rigid stochastic assumptions such as Gaussian- or gamma-distributed lead
time demand.

Taken individually, both approaches are optimal, and no problems arise if all
assumptions along the method chain are met. The problem emerges when forecast
and inventory management methods are combined during the sales and operations
planning process. Forecast method selection depends on statistical error measures
and does not consider the resulting supply chain performance, while the stochastic
inventory model reduces the forecast to its first two moments, so most of the
forecast information remains unused when the reorder levels are optimized.
Additionally, assuming continuous demand is always a simplifying assumption
which holds only for a small subset of Stock Keeping Units (SKUs). Johnston and Boylan (2003)
stated that about 75% of all items across most branches move no more


than six times a year and therefore are referred to as intermittent. Those
intermittent demand SKUs collectively sum up to about 60% of inventory
investments and lead to theoretical inconsistencies between the methods
used in the sales and operations planning process.

To address these issues, this study presents a consistent forecast inventory
model which directly connects forecast and inventory optimization. It is based
on predicting the future probability mass functions (PMFs) of demand by
assuming an integer-valued autoregressive process as the demand process. These
future PMFs are aggregated using a Markov chain simulation technique, and in
addition a modern multi-criteria inventory classification scheme is presented
to distinguish different SKU clusters by their inventory risk. These methods
are combined to create a new consistent approach without any theoretical breaks.

In an extensive simulation study, based on the demand series of 4 310 SKUs


of a German wholesaler, the consistent approach is compared with a wide
range of forecast/inventory model combinations. To obtain the most realistic
results, the evaluation is based exclusively on out-of-sample forecasts, and
the simulation covers different order costs, service level targets, and lead times.

By using the consistent approach, the mean inventory level is lowered


whereas the service level is increased. Thus, the consistent approach leads
to dominant replenishment strategies, which improve the overall inventory
performance. Additionally, it is shown that the forecast methods which
perform best in terms of statistical error measures lead to the worst overall
inventory performance.

The remainder of this work is structured as follows. In Chapter 2 the


basics of inventory management are described, and the different supply
chain performance measures and the relevant costs are presented. The
focus of this chapter is on stochastic inventory policies, since those are

essential for an automated inventory management system. Chapter 2 also


highlights the importance of appropriate forecasts for inventory control.

After this, Chapter 3 presents forecast methods which are suitable for
intermittent demand series. It starts with the fundamentals of time series
analysis and a description of the available forecast methods, but the chapter
mainly deals with integer-valued autoregressive processes. These have been
applied in finance and physics, but there is no record of integer-valued
autoregressive processes previously being connected with inventory management.
Additionally, the Markov chain simulation technique used to aggregate the
future PMFs is proposed at the end of this chapter.

Based on the definitions of Chapters 2 and 3, Chapter 4 describes methods


to classify demand series. The first section starts with the description of
the widely used ABC classification, and the next section deals with the new
multi-criteria inventory classification to distinguish SKUs by their inventory
risk.

The empirical part of this study begins with the description of the data
and the simulation design (Chapter 5) which includes the application of
all methods proposed in the previous chapters. After this, the results are
described in Chapter 6. It is divided into four sections. First, the results
of the forecast performance in terms of statistical error measures are
presented, and then the next two sections of the chapter deal with the
inventory performance of all the methods for the α- and β-service level tar-
gets, respectively. The last section of this chapter summarizes the results.
This study closes with a conclusion and outlook in Chapter 7.
Part I

Fundamentals and Methodology


2 Inventory Management

The objective of inventory management is to fulfill the customer’s needs by


stocking the right quantity at the right time such that the resulting costs
are minimal. It is a crucial function in most companies, and in general it
cannot be separated from other functions. For example, the optimal inven-
tory policy will certainly depend on promotion campaigns conducted by
the marketing department. Nevertheless, this chapter deals with the tech-
nical part of inventory management and methods in order to find optimal
inventory policies. Additionally, when focusing on retail companies, only
inventories of finished goods will be considered, which are referred to as
stock keeping units (SKUs).

This chapter is divided into three parts. Section 2.1 deals with the measure-
ment of the supply chain performance. Thus, it describes what is meant
by fulfilling the customer’s needs. After this, Section 2.2 gives an overview
of the relevant costs, their relationships, and how they can be estimated
based on a company’s balance sheet. The last and main part of this chapter
introduces different inventory policies and the decision variables which can
be set in order to fulfill the customer’s needs at minimal costs. It focuses
on stochastic inventory models and methods to find an optimal inventory
policy.

2.1 Supply Chain Performance Measurement

This section considers the measurement of supply chain performance, i.e.


what separates a good inventory management system from a bad one.
Three different measures are described, namely the α-, the β-service, and


the inventory turnover. All measures are widely used in theory and practice
and feature two different views of what is meant by supply chain performance.
The α- and the β-service measure the performance from the customer's
perspective, whereas the inventory turnover is a technical measure that
describes the supply chain performance in the sense of capital efficiency.
This section presents these three performance indicators and gives an example
to illustrate the properties of these views.

Figure 2.1: Inventory levels of one SKU over five weeks

To introduce the terms and methods of inventory management, Figure 2.1


shows the continuously sampled inventory level of an exemplary SKU over
five weeks. The inventory level starts at 10 pieces, and the underlying in-
ventory policy is static. This means that at the beginning of each week a
delivery of 20 pieces arrives, and the inventory level increases. The
continuously tracked demand has a relatively high variance. Combined with the
static inventory policy, this leads to negative inventory levels at the end of
weeks 2 and 4. This backlog, i.e. the demand which waits to be satisfied, is
marked with a gray shade. The backlog adds up to 10 pieces at the end of
week 2 and 1 piece at the end of week 4. By way of representing every sin-
gle inventory change, the figure gives a very detailed view of what is really
happening with the inventory level. Nevertheless, in most circumstances
the data is not that detailed, but aggregated for different periods. Thus,

Table 2.1 lists the weekly aggregated data of Figure 2.1. The demand is
aggregated as the sum of sold pieces within a week. The second row lists
the inventory level at the end of each week. Therefore, the inventory level
at week 0 is the starting inventory level of the first week. The inventory
level in each period is equal to the inventory level of the previous week plus
deliveries of the current week minus the demand during the current week.
For example, the inventory level at the end of week 3 is 5, because the week
starts with −10, the demand is 5, and a quantity of 20 is delivered at the
beginning of the week ((−10) − 5 + 20 = 5). In this setup it is assumed that customer
demands which cannot be fulfilled are backordered, which means the cus-
tomer will wait for the next delivery if the SKU is understocked. This
assumption will not hold in every case, but if one considers the inventory
of an online store, this might be realistic. The assumption that unsatis-
fied customers will wait for the next delivery reduces the complexity and
is therefore well suited for an introductory example.

            Week 0  Week 1  Week 2  Week 3  Week 4  Week 5

Demand                20      40       5      26      10
Inv. level     10     10     -10       5      -1       9
Deliveries            20      20      20      20      20

Table 2.1: Exemplary demand series with corresponding inventory

α-Service Level

The α-service focuses on customer satisfaction. It measures the probability


of serving the demand in a period directly from stock. In other words, the
α-service measures the probability of completely satisfying a customer.
The α-service is given by:

α = P (yt ≤ It ) , (2.1)

where yt is the demand of a customer or the demand within a period and It


is the inventory level at time t. For the given example, the α-service is 60%,
because in 2 of the 5 periods the whole demand could not be delivered
directly from stock, no matter whether the backlog is 10 pieces (period 2)
or just 1 piece (period 4).

Fixing an α-service target can be useful if the customer is unsatisfied no
matter whether the shortage is one piece or a hundred, as is the case in the
automotive sector, in which the production line stops at any shortage (Nahmias,
2009).

β-Service Level

In contrast to the customer focus of the α-service, the β-service level focuses
on delivered quantities. It measures the share of demand which can be
delivered directly from stock with no delay. For this reason the β-service
is usually used as a target for a warehouse. It is given by:

β = 1 − E[(yt − It)+] / E(yt) ,    (2.2)

where (yt − It)+ = max(yt − It, 0), so that E[(yt − It)+] is the expected
shortage per period, i.e. the expected part of the demand which exceeds the
current inventory level. For the given example β equals 89.1%,
because 11 of 101 pieces could not be delivered directly from stock.

It can be seen that choosing one service level over the other can lead to
totally different values, so there is no universally right choice; it
depends on the circumstances. If it matters how many pieces of an
order could not be delivered, one should use the β-service, but if it is an
all-or-nothing evaluation of whether the inventory management is successful,
the α-service is the appropriate performance measure.
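Both service measures can be reproduced from the aggregated data in Table 2.1. The following sketch (illustrative Python; the variable names are mine, not the book's) tracks the weekly inventory recursion, counts the periods served completely from stock for the α-service, and accumulates the shortages for the β-service:

```python
# Sketch: alpha- and beta-service for the Table 2.1 example.
# Backordering is assumed, so a negative level carries unmet
# demand into the next week.
demand     = [20, 40, 5, 26, 10]    # weeks 1-5
deliveries = [20, 20, 20, 20, 20]   # arrive at the start of each week
level = 10                          # inventory at the end of week 0

served_periods = 0                  # periods fully served from stock
total_shortage = 0                  # pieces not delivered without delay

for y, q in zip(demand, deliveries):
    available = level + q           # backlog (negative level) is netted out
    shortage = max(y - available, 0)
    if shortage == 0:
        served_periods += 1
    total_shortage += shortage
    level = available - y           # inventory at the end of the week

alpha = served_periods / len(demand)     # 3/5 = 0.6
beta = 1 - total_shortage / sum(demand)  # 1 - 11/101 ≈ 0.891
```

The loop reproduces the inventory row of Table 2.1 (10, −10, 5, −1, 9) and yields α = 60% and β ≈ 89.1%, matching the worked example.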

Inventory Turnover

The inventory turnover η indicates how often the average inventory level is
sold within a given period. It therefore focuses on the capital tied up in
the inventory. Additionally, as a dimensionless measure it allows the inventory
levels of two different SKUs to be compared. It is defined as the ratio of
the sales of an SKU to its average inventory level:

η = Σ_{t=1}^{T} yt / ( (1/T) Σ_{t=1}^{T} It ) .    (2.3)

Compared with the service measures, one can see that the inventory turnover
does not consider the customer at all. It is a purely inventory-focused
measure and is not used to find an optimal inventory policy. In most setups
inventory management needs to fulfill a certain service constraint while
minimizing the total inventory costs; a high inventory turnover thus results
from a good inventory policy. The average inventory level in each period is
the mean of the inventory level at the beginning of the period (the inventory
level at the end of the previous period plus deliveries) and the inventory
level at the end of the period. Therefore, the average inventory levels of
the five periods are 20, 10, 7.5, 12, and 14. For the given example, η equals
7.95 because in total 101 pieces are sold and the overall average inventory
level is 12.7.
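The turnover can be sketched with the same recursion, taking each period's average inventory as the mean of its beginning and ending level as described above (illustrative Python, not from the book):

```python
# Sketch: inventory turnover for the Table 2.1 example.
demand     = [20, 40, 5, 26, 10]
deliveries = [20, 20, 20, 20, 20]
level = 10                           # inventory at the end of week 0

avg_levels = []
for y, q in zip(demand, deliveries):
    beginning = level + q            # previous end plus deliveries
    level = beginning - y            # inventory at the end of the period
    avg_levels.append((beginning + level) / 2)

# eta = total sales / overall average inventory level, eq. (2.3)
eta = sum(demand) / (sum(avg_levels) / len(avg_levels))
```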

2.2 Relevant Costs

In order to minimize the total inventory costs, it is crucial to have infor-


mation about what the relevant costs are. This fact might be trivial, but
Themido et al. (2000) state that there is a lack of appropriate cost infor-
mation in many companies. In addition, Askarany, Yazdifar, and Askary
(2010) argue that there is still a high potential to increase the accuracy of
the internal cost estimates of companies. It is not the focus of this work
to describe the methods to estimate costs, but every optimal inventory
control system is based on those estimates. Therefore, in addition to the
description of the relevant costs, there will also be a reference to estimation
methods.

The relevant costs of an inventory control system are split into three
different parts. First, there are the order costs, which arise with each order.
These are costs like handling, transportation, or labor costs, as well as the
variable costs of an order, which may vary due to quantity discounts (Axsäter, 2006, p.
44ff). The order costs per unit decrease as the order quantity increases. So
there is a positive impact of high order quantities on order costs. This part
of the inventory costs can be estimated with data provided by internal ac-
counting. Themido et al. (2000) and Everaert et al. (2008) discuss the use
of activity-based costing models to determine the handling, transportation
and order costs. They argue that the use of activity-based costing models
can increase the accuracy of a company’s logistic costs estimation.

The second part of the inventory costs are the holding costs, which occur
when keeping SKUs on stock. The holding costs again consist of different
parts: the interest which needs to be paid on the fixed capital, the
opportunity costs of using this capital in a different way, and the costs of
running the warehouse. Even though they are non-cash expenses, the risks of
spoilage of perishable goods and of theft also need to be considered, because
they increase with higher inventory levels (Axsäter, 2006, p. 44).
Overall the holding costs will rise with the order quantity, thus there is a

negative impact of a high order quantity on holding costs. The warehouse
costs, spoilage, and theft can be estimated based on data provided by internal
accounting, whereas the opportunity costs of capital can be estimated using
the capital asset pricing model (CAPM).

The CAPM was independently developed by Sharpe (1964) and Lintner
(1965) to value capital assets based on their risk. The concept of this
approach is to assume a linear relationship between the rate of return an
investor expects from an asset and the asset's risk. Thus, for a given risk,
this model can be used to estimate the expected rate of return, i.e. the
opportunity costs of capital (Singhal and Raturi, 1990). Sharpe (1964) and
Lintner (1965) propose a risk measure which is based on the relationship
between the risk of the asset and the risk of the market portfolio. It can
be calculated as follows:¹

ωi = cov(uM ; ui ) / var(uM ) , (2.4)

where cov(uM ; ui ) denotes the covariance between the rate of return of the
asset and the rate of return of the market. var(uM ) denotes the variance
of the return of the market portfolio. Based on this risk measure ωi , the
expected rate of return can be calculated using the following equation:

ui = uf + ωi · (uM − uf ) , (2.5)

where uf is the return of a risk free asset. Jones and Tuzel (2013) showed
a relationship between the risk measure of the CAPM and the inventory
levels of a company in an empirical study. Therefore, there is empirical
evidence that the estimated ωi influences inventory decisions. The CAPM
is used to estimate the opportunity costs of capital in different industry
sectors in Europe in order to give an indication about the inventory holding
costs. This might not be a very accurate measure, but it will provide a

1 The proposed risk measure is usually denoted as β, but for reasons of consistency ω
will be used.

suitable estimate. Table 2.2 lists the results of an empirical analysis using
the CAPM. It is based on the stock returns of 9 833 European companies
in 39 industry sectors between January 2000 and May 2014 (Bloomberg,
2014). The different ω-values were calculated by comparing the returns of
each sector portfolio, i.e. an equally weighted portfolio containing
all the companies of a sector, with the market portfolio containing all 9 833
shares. The risk-free rate was assumed to be 0.045, which equals the average
return of a German federal bond during this period. In addition to the
expected interest rate, the table also lists the 95% confidence interval of
the expected interest rate, the average gross margin, and the number of
sampled companies of the sector.
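The calculation behind (2.4) and (2.5) is short enough to sketch in a few lines. The return series below are invented illustrative numbers, not the Bloomberg data behind Table 2.2, and the risk-free rate matches the 0.045 assumed in the text.

```python
# Sketch of the CAPM risk measure (2.4) and expected return (2.5).
# The return series are invented, purely for illustration.

def covariance(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

u_market = [0.04, -0.02, 0.07, 0.01, -0.05, 0.06]   # market portfolio returns
u_asset  = [0.05, -0.01, 0.09, 0.00, -0.07, 0.08]   # sector portfolio returns

# (2.4): ratio of the covariance with the market to the market variance
omega = covariance(u_market, u_asset) / covariance(u_market, u_market)

u_f = 0.045                             # risk-free rate, as assumed in the text
u_m = sum(u_market) / len(u_market)     # expected market return
u_i = u_f + omega * (u_m - u_f)         # (2.5): expected rate of return
```

With these numbers the sector moves slightly more than the market, so ω exceeds 1; a real estimate would of course be based on long return histories as in Table 2.2.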

The third part of the inventory costs arises with a stock-out. These costs are
called shortage or lost sales costs. To define them, it needs to be
distinguished whether, in case of a shortage, the customer is willing to wait
until the product is available again. If so, there are additional costs of
express deliveries or compensation discounts. If the customer is not willing
to wait, the sale is lost and therefore, the company will not earn the gross
margin. In addition, the customer is unsatisfied, which raises the risk that
the customer may switch to a competitor. The costs of an unsatisfied
customer are called loss of goodwill costs. Whereas the gross margin
is relatively easy to estimate (see Table 2.2), the estimation of the loss of
goodwill costs is almost impossible (Crone, 2010, p. 39). The risk of a
stock-out falls with higher order quantities. Thus, higher order quantities
have a positive impact on the lost sales costs.

After describing the three parts of the inventory costs, it can be seen that
there is a trade-off between high order quantities, which reduce the order
and lost sales costs, and the low order quantities, which cut the holding
costs. Therefore, one goal of inventory management is to find an order
quantity, which balances these different costs.

ICB sector name ui CI0.95 gross margin n


Aerospace & Defense 0.115 [0.103;0.127] 0.268 40
Alternative Energy 0.137 [0.119;0.154] 0.217 66
Automobiles & Parts 0.178 [0.161;0.195] 0.207 74
Beverages 0.071 [0.061;0.081] 0.471 92
Chemicals 0.134 [0.124;0.144] 0.284 120
Construction & Materials 0.221 [0.160;0.283] 0.233 245
Electricity 0.076 [0.069;0.084] 0.382 105
Electronic & Electrical Equipment 0.113 [0.106;0.120] 0.366 219
Equity Investment Instruments 0.110 [0.102;0.117] 0.770 344
Financial Services 0.064 [0.055;0.072] 0.468 652
Fixed Line Telecommunications 0.110 [0.065;0.156] 0.419 47
Food & Drug Retailers 0.106 [0.098;0.115] 0.223 54
Food Producers 0.071 [0.066;0.076] 0.278 209
Forestry & Paper 0.123 [0.111;0.136] 0.292 42
Gas, Water & Multiutilities 0.083 [0.072;0.093] 0.376 40
General Industrials 0.109 [0.100;0.118] 0.246 84
General Retailers 0.201 [0.174;0.228] 0.376 207
Health Care Equipment & Services 0.104 [0.096;0.113] 0.519 212
Household Goods & Home Construction 0.108 [0.100;0.116] 0.281 136
Industrial Engineering 0.125 [0.117;0.133] 0.299 264
Industrial Metals & Mining 0.216 [0.190;0.242] 0.132 78
Industrial Transportation 0.089 [0.082;0.097] 0.257 241
Leisure Goods 0.094 [0.079;0.108] 0.405 78
Media 0.087 [0.070;0.105] 0.447 317
Mining 0.095 [0.076;0.114] 0.238 202
Mobile Telecommunications 0.233 [0.187;0.278] 0.429 33
Nonequity Investment Instruments 0.110 [0.104;0.115] 0.483 1625
Nonlife Insurance 0.121 [0.112;0.130] 0.587 73
Oil & Gas Producers 0.156 [0.135;0.177] 0.334 159
Oil Equipment, Services & Distribution 0.128 [0.117;0.140] 0.359 81
Personal Goods 0.106 [0.095;0.116] 0.453 149
Pharmaceuticals & Biotechnology 0.129 [0.120;0.138] 0.633 240
Real Estate Investment & Services 0.071 [0.066;0.076] 0.577 403
Real Estate Investment Trusts 0.089 [0.081;0.096] 0.817 97
Software & Computer Services 0.107 [0.098;0.115] 0.559 512
Support Services 0.102 [0.095;0.110] 0.312 375
Technology Hardware & Equipment 0.136 [0.122;0.150] 0.402 165
Tobacco 0.128 [0.110;0.146] 0.379 4
Travel & Leisure 0.072 [0.059;0.086] 0.282 333

Table 2.2: Expected interest rate and gross margin of European industry sectors.

2.3 Inventory Policies

Inventory management answers the question of when to order and how


much should be ordered. The inventory policy is the strategic framework
which determines the way those questions are answered, i.e. whether an
order will be placed on a regular basis or whether the order quantity is
constant. The different inventory policies are defined by the variables the
inventory management decides on. The literature provides a vast number
of inventory policies to find optimal solutions for various different circum-
stances (see Bakker, Riezebos, and Teunter (2012) and Goyal and Giri
(2001) for an overview). Therefore, this section only describes a subset of
inventory policies which are frequently used in practice.

The literature distinguishes mainly between four different inventory poli-


cies. They can be differentiated from each other by considering the flexibil-
ity of the time when an order is placed and the amount which is ordered.
There could either be a fixed interval r between two consecutive orders, or
an order could be placed whenever the inventory level falls below a certain
level s. The same scheme holds for the order quantity, which can be either
fixed or variable. If the quantity is fixed for all orders, it is denoted as Q.
If instead the order size depends on the current inventory level and should
fill the stock up to a given level, this order up to level is denoted as S. The
third variable represents the review interval, i.e. whether the inventory
level is known at every point in time or just at given intervals T . If T > 0
the inventory policy is referred to as periodic review. However, if T = 0 the
policies are called continuous review because the inventory level is known
at each point in time. The different inventory policies are defined by the
combination of those decision variables. Thus, there are four different cases:

• (r,Q,T) - fixed time between orders, fixed order quantity

• (r,S,T) - fixed time between orders, variable order quantity

• (s,Q,T) - variable time between orders, fixed order quantity



• (s,S,T) - variable time between orders, variable order quantity

These four inventory policies also include several special cases. For exam-
ple, the news vendor model for highly perishable goods can be formulated
as a base-stock policy (Bouakiz and Sobel, 1992). The intuition behind
those base-stock policies is to order, whenever the inventory level has been
reduced, exactly enough so that the inventory level is S at the beginning of
each period. They are
in fact (s,S,T ) policies, but due to their special properties they are referred
to as (S − 1,S,1) policies.

Figure 2.2: Different inventory policies. The four panels plot the simulated
inventory levels and order sizes of the (r,Q,T ), (r,S,T ), (s,Q,T ) and
(s,S,T ) policies.

To illustrate the behavior of the four different inventory policies, Figure 2.2
shows the inventory levels and order sizes over 30 weeks of simulated
inventories using each of these different policies. The rows separate the
decision about when an order is placed whereas the columns separate a fixed
order size from a flexible one. All four graphs are based on the same demand
series and parameters. The order size Q and the order up-to-level S are set to
30, the fixed interval between two orders r is 10, and the reorder point s
is 5. Once an order is placed, it takes 2 weeks until the delivery arrives.
Thus, the lead time L is 2 and the review period T is 1 week for all graphs.
The upper-left graph plots the inventory and order sizes of the (r,Q,T )
policy based on the given parameters. The first order is placed in period
4 and arrives two weeks later in period 6. One can see that all order
sizes are equal, and the time between two consecutive orders is fixed at 10
weeks. The dotted line denotes the inventory position, which equals the
inventory level plus the future deliveries. Therefore, the inventory position
and inventory level are equal if no delivery is in progress. As in Figure 2.1
the gray shaded inventory marks a backlog. For the first policy this is the
case in period 5, 15, 23, 24 and 25.

The second graph shows the results of a (r,S,T ) policy and has the same
structure as the first graph. Additionally, the horizontal dashed line marks
the order-up-to level S. As in the first graph, the orders are equidistant,
but in this graph they differ in size. The order size is selected in the way
that the order will raise the inventory position to S, in other words the
order size is equal to S − It . Thus, the order-up-to level is an upper bound
of the inventory position. The inventory level will not reach the order-up-to
level in most cases, even if the inventory position does, due to the demand
during the lead time L.

The third graph shows the results of a (s,Q,T ) policy. In this case an order
of Q pieces is placed whenever the inventory position drops below s (dashed
line). The inventory position is used for the order rule because it gives clear
instructions, unlike the inventory level. Consider an order policy based on
the inventory level. An order would be placed not only in week 3, but also
in week 4, because the inventory level is still below s since the delivery has
not arrived yet. In contrast, a rule based on the inventory position will
not trigger a further order in week 4, because the inventory position already
accounts for the outstanding delivery. Additionally, it can be seen
that the reorder point s is a lower bound of the inventory position.

The last graph shows the results of a simulated (s,S,T ) policy. The in-
ventory position fluctuates between the reorder level s and the order-up-to
level S. Similar to the third graph, the first order is placed in week 3,
whereas the second and third orders are placed in period 13 and 20. Back-
logs appear in period 14 and 21. This inventory policy has the highest
adaptability, and Sani and Kingsman (1997) show that for intermittent
demand series the (s,S,T ) policy is best suited. Therefore, a (s,S,T )
policy is described and used in the following.
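The mechanics of such a policy can be reproduced with a small simulation. The sketch below implements a minimal (s,S,T ) policy with backlogging; the demand series and starting stock are invented, while s = 5, S = 30 and a lead time of L = 2 mirror the parameters used for Figure 2.2.

```python
# Minimal (s,S,T) policy simulation with backlogging and a lead time of
# two periods. The demand series is invented, not the one behind Figure 2.2.

s, S, L = 5, 30, 2
demand = [3, 4, 6, 2, 5, 1, 0, 7, 4, 3, 6, 2, 0, 5, 4]

level = 20                 # physical stock (negative values mark a backlog)
pipeline = {}              # arrival period -> order quantity in transit
orders = []

for t, d in enumerate(demand):
    level += pipeline.pop(t, 0)              # receive deliveries due now
    position = level + sum(pipeline.values())
    level -= d                               # serve (or backlog) the demand
    position -= d
    if position < s:                         # review rule: order up to S
        q = S - position
        pipeline[t + L] = q
        orders.append((t, q))
```

Note that the order rule tests the inventory position, not the physical level, exactly as argued in the text: an outstanding delivery prevents a duplicate order in the period after the reorder point is crossed.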

2.3.1 Stochastic Inventory Models

The challenges of inventory management arise from the combination of two
circumstances: an uncertain demand and a positive lead time. For example,
if there is no lead time (L = 0) one would place an order
whenever the inventory level falls to zero (s = 0) no matter how volatile
the demand is because it would be delivered immediately. If the demand is
certain and the lead time is positive, one would place the order such that
the remaining stock lasts until the next order arrives. If, however, the lead
time is positive and the demand is uncertain, a new problem arises. It
could be the case that the reorder point s, i.e. the
amount that should last until the new order arrives, is too low. Thus, the
demand cannot be satisfied from stock, and will be backlogged. This leads
to shortage costs and unsatisfied customers. Therefore, an inventory policy
based on stochastic demands needs to regard those uncertainties and the
resulting costs. This can be done by finding a set of inventory variables,
e.g. the reorder point s and the order-up-to level S, which minimizes the
total inventory costs as the sum of order, holding, and shortage costs. But,
as mentioned above, it is hard to estimate the loss of goodwill costs, which
constitute the main part of the shortage costs (Crone, 2010, p. 39). To avoid
this issue, the selection of s and S can be based on minimizing the sum
of the holding and order costs while satisfying a certain service constraint,
e.g. find s,S such that an α-service level of 95% is achieved at minimal
cost. This method avoids the need to determine the shortage costs because
they are implicitly assumed by selecting a service target. Those service
constrained inventory policies will be used in the following.

2.3.2 Determination of the Order-Up-To-Level S

As described above, when using a (s,S,T ) policy the inventory position will vary
between the two bounds s and S. There are some approaches to determine
S based on the demand series directly (e.g., see Teunter, Syntetos, and
Babai (2010)), but in most cases S results from selecting a reorder point
s and the gap between s and S, denoted as D. This position gap D is
selected such that the sum of the order and inventory costs is minimized.
Thus, the remainder of this section describes the economic order quantity
model to determine the gap D.

The most widely used inventory model is the economic order quantity model
developed in F. W. Harris (1990).2 This model can be denoted as a (r,Q,0)
policy because the order interval and quantity are constant, and the
inventory is known at every point in time. It determines the cost optimal
position gap D∗ based on the assumption that the lead time is zero and
that the underlying demand series is a continuous-time continuous-space
time series with a constant demand rate. The demand needs to be fulfilled
immediately, i.e. shortages are not allowed. Based on these assumptions,
the optimal solution can be derived in a straightforward manner. As
shortages are prohibited and the lead time is zero, a cost optimal order is
placed every time the inventory level reaches 0. Therefore, the total costs
C(D) are determined by the order and holding costs:

C(D) = (μy /D) · K + (D/2) · h , (2.6)

where D is the inventory position gap, μy is the demand in a given period,


K are the fixed order costs and h are the holding costs per unit and per
period. F. W. Harris (1990) defines the order costs as product of the
μ
number of orders ( Dy ) and the fixed order costs K. The holding costs are
defined as the product of the holding costs per unit and per period h and
the average inventory level in the period ( D
2 ). Thus, in order to calculate
the optimal inventory position gap using the EOQ model, D∗ results from
the minimum of the total costs function C(D), i.e. the root of the first

derivative.

2 The original publication by Harris dates back to 1913 while this citation refers to the
digital reprint in 1990.

minimize_D C(D) = (μy /D) · K + (D/2) · h  ⇒  D∗ = √(2 · μy · K / h) . (2.7)
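As a quick numerical check of (2.6) and (2.7), with invented inputs: a demand of 1 000 units per period, fixed order costs of 50 and holding costs of 2.5 per unit and period yield an optimal gap of 200 units.

```python
import math

def eoq(mu_y, K, h):
    """Cost-optimal inventory position gap D* according to (2.7)."""
    return math.sqrt(2 * mu_y * K / h)

def total_costs(D, mu_y, K, h):
    """Order plus holding costs C(D) according to (2.6)."""
    return mu_y / D * K + D / 2 * h

D_star = eoq(1000, 50, 2.5)            # 200.0 units

# at the optimum, neighboring order quantities are never cheaper
assert total_costs(D_star, 1000, 50, 2.5) <= total_costs(199, 1000, 50, 2.5)
assert total_costs(D_star, 1000, 50, 2.5) <= total_costs(201, 1000, 50, 2.5)
```

At D∗ = 200 the order and holding components are both 250, illustrating the balancing property of the EOQ.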

The underlying assumptions of the EOQ are very strict and will not hold
in many practical setups. Therefore, the literature provides extensions to
the EOQ in order to reduce the strictness of the assumptions. Axsäter
(2006, p. 55 ff) considers the EOQ in case of finite production rates and
discounts. Grubbström and Erdem (1999) derive an adapted version of the
EOQ with backlogging and Weiss (1982) and Goh (1994) regard nonlinear
holding costs. There is also a wide range of literature dealing with imperfect
SKU quality (see Khan et al. (2011) for a review). A historical review and
several other extensions of the EOQ are given in Choi (2013).

Haneveld and Teunter (1998) show that in case of slow moving demand, an
inventory policy based on maximizing the discounted cash flows instead of
minimizing the inventory costs leads to better results. This addresses the
main drawback of the EOQ model: the strict assumption of a certain and
fixed demand rate. While this assumption leads to the straightforward derivation of
the optimal order quantity D∗ , it does not reflect the majority of practical
settings in which the future demand is uncertain and variable.

Nevertheless, the original EOQ model is still frequently used due to its
easy implementation and robustness against violations of the assumptions
(Drake and Marley, 2014). Therefore, and for reasons of comparability, the
EOQ will be used to determine the inventory gap D even if there may be
potential issues.

2.3.3 Determination of the Reorder Point s

The determination of the reorder point s differs depending on whether an


α- or β-service constraint is specified. As described in Section 2.1, the α-
service is the probability of satisfying the entire demand in a period, or in
other words not going out of stock. In general the literature provides the
following rule to determine the reorder point for a given α-service constraint
(Schneider, 1978):

∫_{−∞}^{s} fltd (x) dx = α . (2.8)

s is implicitly defined as the α-quantile of the lead time demand. This


formula only holds if the lead time demand is approximated by a continuous
distribution, which is always a simplifying assumption if one considers the
demand series. Thus, it needs to be transferred into the discrete case:


Σ_{i=0}^{s} pltd (i) ≥ α . (2.9)

This rule has the same interpretation as (2.8) and it is theoretically much
closer to an observed demand series, but it has the drawback that it does
not have a unique solution for s. The reorder point is the smallest s which
satisfies condition (2.9). This is equal to the definition of the Value-at-
Risk, a frequently used risk measure for financial assets in portfolio theory.
Therefore, the determination of s for a given α-service level can also be
interpreted by means of portfolio theory. Figure 2.3 shows the quantile
function of the lead time demand. The dashed lines indicate an α-service
level of 95% and the resulting s.
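Rule (2.9) can be evaluated directly once a discrete lead time demand distribution is available. The sketch below assumes, purely for illustration, a Poisson lead time demand; any probability mass function could be plugged in.

```python
import math

def poisson_pmf(i, lam):
    """PMF of a Poisson distribution, used here as an illustrative stand-in."""
    return math.exp(-lam) * lam ** i / math.factorial(i)

def reorder_point_alpha(pmf, alpha):
    """Smallest s whose cumulative probability reaches alpha, as in (2.9)."""
    s, cum = 0, pmf(0)
    while cum < alpha:
        s += 1
        cum += pmf(s)
    return s

lam = 4.0   # invented mean lead time demand
s = reorder_point_alpha(lambda i: poisson_pmf(i, lam), 0.95)   # smallest s with CDF >= 0.95
```

For a mean lead time demand of 4 the 95% constraint already pushes the reorder point to twice the mean, which illustrates how skewed, low-mean demand inflates safety stocks.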

If the lead time distribution is approximated by a continuous distribution,


Schneider (1978) derives an implicit calculation rule to determine s for a
given α-service constraint. In most cases the assumption of a Gaussian lead
time demand distribution is used to calculate s. The main reason for this
is the central limit theorem and the convolution invariance of the normal
distribution. This means that the sum of two independent normally
distributed random variables is normally distributed again. The assumption
of a normal distribution might be no problem for fast moving SKUs, but in
the case of intermittent demand series this assumption could lead to poor
results. Intermittent demand series have a low mean value and a relatively
high variance. Therefore, a normal distribution based on those values will
have a considerable positive density in the negative range. To avoid these
problems, the literature provides inventory policies which are based on the
gamma distribution (e.g., see Dunsmuir and Snyder (1989) and Moors and
Strijbosch (2002)). This distribution is only defined for positive values and
is therefore a suitable supplement to the normal distribution. In addition,
the gamma distribution is also completely defined by the first two central
moments and is convolution invariant. Thus, in the following the reorder
points are approximated using the normal distribution and additionally the
gamma distribution.

Figure 2.3: Reorder point determination subject to an α-service constraint
(quantile function of the lead time demand over the cumulative probability).
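Under the normal assumption the simplest route to a reorder point is the plain α-quantile of (2.8). Note that this is the textbook quantile rule, not Schneider's refined approximation discussed below; Python's statistics.NormalDist yields the safety factor directly. The moments used here are invented.

```python
from statistics import NormalDist

def reorder_point_normal(mu_ltd, sigma_ltd, alpha):
    """Plain alpha-quantile of a normal lead time demand, cf. (2.8).

    This is the simple quantile rule, not Schneider's refined
    approximation for (s,S) policies."""
    q = NormalDist().inv_cdf(alpha)      # standard normal safety factor
    return mu_ltd + q * sigma_ltd        # reverse z-transformation

s = reorder_point_normal(mu_ltd=20.0, sigma_ltd=5.0, alpha=0.95)
```

For a 95% target the safety factor is about 1.645, so with these invented moments s lands a little above 28 units.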

If a normal distribution is assumed, the reorder point s can be found as


the root of the following approximation:


f (q) = −(1 − α) · (D + μ2ltd /(2 · μltd )) / σltd − (q/2) · erfc(q/√2)
        + e^(−q²/2) / √(2π) = 0 , (2.10)
s = μltd + q · σltd , (2.11)

where μltd is the expectation, μ2ltd is the second moment and σltd is the
standard deviation of the lead time demand. erfc(x) is the complemen-
tary error function (Abramowitz and Stegun, 1972, p. 297). If a gamma
distribution is assumed, s can be found as the root of the following approx-
imation

f (s) = μltd · Γ(p + 1, b · s) / Γ(p + 1) − s · Γ(p, b · s) / Γ(p)
        − (1 − α) · (D + μ2ltd /(2 · μltd )) = 0 , (2.12)

where Γ(x,y) is the incomplete gamma function (Abramowitz and Stegun,
1972, p. 260), p = μ²y / σ²y and b = (μy / σ²y ) · (L + 1).
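The moment matching behind the gamma approximation is easy to verify numerically. Note that random.gammavariate in the Python standard library uses a shape/scale parametrization, so the scale is the inverse of the rate; the lead time demand moments below are invented.

```python
import random

# Moment matching for a gamma approximation of the lead time demand:
# shape p and rate b follow from the (invented) mean and variance.
mu, sigma2 = 12.0, 48.0

p = mu ** 2 / sigma2        # shape parameter
b = mu / sigma2             # rate parameter

assert abs(p / b - mu) < 1e-9            # gamma mean     = p / b
assert abs(p / b ** 2 - sigma2) < 1e-9   # gamma variance = p / b**2

# empirical check with the standard library (scale = 1 / rate)
random.seed(1)
sample = [random.gammavariate(p, 1 / b) for _ in range(100_000)]
mean = sum(sample) / len(sample)
```

The sample mean recovers the target mean up to simulation noise, confirming that the first two moments fully pin down the gamma approximation.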

In case of a β-service constraint, s generally can be found by using the fol-


lowing rule if a continuous lead time distribution is assumed. s is implicitly
defined as the limit for which the upper partial moment of the lead time
distribution, i.e. the expected lost sales, is equal to the lost sales restriction
((1 − β) · D).

∫_{s}^{∞} (x − s) · fltd (x) dx = (1 − β) · D . (2.13)

As in (2.8), Equation (2.13) is transferred into (2.14) for a discrete space


lead time distribution. Again this transformation results in a non-unique
solution for s. Because the expected lost sales decrease with a higher
reorder point, the reorder point is selected as the lowest s which satisfies
inequality (2.14):



Σ_{i=s}^{∞} (i − s) · pltd (i) ≤ (1 − β) · D . (2.14)
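Analogously to the α-service case, rule (2.14) can be evaluated numerically; again a Poisson lead time demand serves as an invented stand-in, and D is an illustrative position gap.

```python
import math

def poisson_pmf(i, lam):
    """PMF of a Poisson distribution, used as an illustrative stand-in."""
    return math.exp(-lam) * lam ** i / math.factorial(i)

def expected_lost_sales(s, pmf, upper=100):
    """Upper partial moment of the lead time demand at level s,
    truncated at a point where the tail mass is negligible."""
    return sum((i - s) * pmf(i) for i in range(s, upper))

def reorder_point_beta(pmf, beta, D):
    """Smallest s whose expected lost sales satisfy (2.14)."""
    s = 0
    while expected_lost_sales(s, pmf) > (1 - beta) * D:
        s += 1
    return s

lam, D = 4.0, 10     # invented mean lead time demand and position gap
s = reorder_point_beta(lambda i: poisson_pmf(i, lam), 0.95, D)
```

Because the β-service only restricts the expected shortfall relative to the order size, the resulting reorder point (5 here) is markedly lower than the α-service point for the same distribution.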

As mentioned above, the α-service can be interpreted by means of portfolio


theory. This is also the case for the β-service even if it is not a perfect
match. The expected shortfall is a risk measure from portfolio theory.
In contrast to (2.14) it tracks the lower partial moment but the general
intuition remains the same. Therefore, both the α- and β-service may also
be interpreted as risk measures. Figure 2.4 shows the expected lost sales
(upper partial moment) of the lead time demand distribution in relation to
the order size. The dashed lines indicate the resulting reorder size s if the
β-service is equal to 95% (1 − β = 0.05).

Figure 2.4: Reorder point determination subject to a β-service constraint
(expected lost sales relative to the order size, plotted over s).

Schneider (1978) also derives an implicit solution for s under a continuous
lead time distribution assumption if a β-service constraint is given. If a normal
distribution is assumed, the reorder point s can be calculated using the
following approximation: the root of (2.15) can be interpreted as the safety
factor (Schneider, 1978), while s results from the reverse z-transformation
of q in (2.16). The notation is the same as in (2.10).

f (q) = (β − 1) · (2 · D · μltd + μ2ltd ) / σ²ltd + (1/2) · (q² + 1) · erfc(q/√2)
        − q · e^(−q²/2) / √(2π) = 0 , (2.15)
s = μltd + q · σltd . (2.16)

In case of a gamma distributed lead time demand, Schneider (1978) argues
that the provided approximation only holds if the value of D is high. This
might be a problem due to the low mean of intermittent demand series.
Thus, the procedure of Dunsmuir and Snyder (1989) is used instead. Their
approach is also based on the upper partial moment of the gamma
distribution, but they regard the properties of intermittent demand series. The
implicit rule provided in Dunsmuir and Snyder (1989) in order to calculate
the reorder point s has the same structure as equation (2.13). It is also
based on the integral over the gamma density function, but this integral
can be solved using the incomplete gamma function. This leads to the
following implicit calculation rule for the reorder point s:

(e^(−v·s) · (v · s)^(v−1) · s − Γ(v, v · s) · (s − 1)) / Γ(v) = (1 − β) · S / πltd , (2.17)

v = (μ+ltd )² / (σ+ltd )² , (2.18)

where the parameters μ+ltd and (σ+ltd )² are the expectation and variance of
the positive demand during the lead time. They can be calculated using the
following rules (Dunsmuir and Snyder, 1989, p. 17):

μ+ltd = L · μY / πltd , (2.19)

(σ+ltd )² = (L · σ²y + (1 − πltd ) · πltd · (μ+ltd )²) / πltd , (2.20)

πltd = 1 − (1 − πY )^L , (2.21)

where πY is the share of positive demands.
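The lead time quantities πltd, μ+ltd and (σ+ltd)² reduce to a few lines of arithmetic on the per-period demand parameters. The sketch below follows the calculation rules given in (2.19) to (2.21); all input values are invented.

```python
# Lead time demand parameters of the compound demand model following the
# calculation rules (2.19)-(2.21). All input values are invented.
L = 2            # lead time in periods
pi_Y = 0.3       # share of periods with positive demand
mu_Y = 1.2       # mean demand per period
sigma2_y = 4.0   # demand variance per period

pi_ltd = 1 - (1 - pi_Y) ** L                    # (2.21): P(any demand in lead time)
mu_pos = L * mu_Y / pi_ltd                      # (2.19): mean positive lead time demand
sigma2_pos = (L * sigma2_y
              + (1 - pi_ltd) * pi_ltd * mu_pos ** 2) / pi_ltd   # (2.20)
```

With a 30% chance of demand per period, a two-period lead time already sees positive demand in roughly half of all cycles, which is what makes the conditional moments so much larger than the unconditional ones.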


3 Demand Analysis and Forecasting

All the described inventory policies rely on the knowledge about future de-
mand. The EOQ assumes this future demand to be certain with a constant
demand rate. In contrast, the definition of the variables of the described
(s,S,T ) policies rely on the knowledge of the future lead time demand distri-
bution. If a continuous lead time demand distribution is assumed, informa-
tion about the first two moments of the lead time demand will be needed.
This chapter deals with methods utilized to estimate the expectation and
variance of the future demand for intermittent demand series.

From a statistical point of view every ordering event of a given SKU pro-
duces mainly two kinds of information: The amount and the timestamp of
the order which together form the demand time series. Due to the slow
stock rotation of intermittent SKUs, the special property of these time se-
ries is the high share of periods with zero demand (yt = 0) even if
the degree of aggregation is high, such as weeks or months (Syntetos, Babai, et
al., 2011, p. 34). Thus, the methods described in this chapter are designed
to take into account the features of intermittent time series.

Section 3.1 provides a short introduction into time series analysis and the
notation used. After that, Section 3.2 describes the most common fore-
casting models for intermittent demand, which are based on the work of
Croston (1972). But as mentioned above, determining the reorder level is
based on the knowledge of the probability mass function (PMF) of the fu-
ture demand. Therefore, Section 3.3 introduces integer-valued autoregres-
sive moving average processes, which can be used to estimate the complete
future PMF of the demand during lead time.

© Springer Fachmedien Wiesbaden 2016


T. Engelmeyer, Managing Intermittent Demand,
DOI 10.1007/978-3-658-14062-5_3

3.1 Time Series Analysis

The mathematical model underlying a time series is called data generat-


ing process (DGP) (Brockwell and Davis, 2009, p. 7f). This DGP is an
ordered sequence of random variables Yt , where t is the time index which
theoretically lasts from −∞ to ∞, but for practical reasons the finite part
of the DGP in the range t ∈ {0,1,2, . . . ,T } is used hereafter. The time
series itself is considered as one realization of the underlying DGP where
each observation yt realizes from the corresponding random variable Yt .
A simple example is the white noise process defined in (3.1):

Yt = εt ∀t , (3.1)

where the random variable εt may follow any distribution, but in most
cases it is assumed that εt follows a normal distribution, in which case the
process is called Gaussian white noise. Thus, if εt ∼ N (μ,σ 2 ), the expectation
and variance of the process are equal for all periods t. This property of a
DGP is known as mean and variance stationarity. If the DGP includes linear
combinations of the preceding random variables, the process is called an
autoregressive process. Equation (3.2) denotes an AR(1) process as the
random variable Yt is a linear combination of one past random variable
Yt−1 and the error term εt . Thus, an AR(2) process would consider the
last two random variables, and a general AR(p) process would regard the
last p random variables.

Yt = φ · Yt−1 + εt . (3.2)

The φ coefficients can be interpreted as the influence of the past of the


DGP on its current state. If the absolute value of φ is less than 1 (|φ| < 1),
the AR(1) process is mean and variance stationary. The counterpart of
an AR process is the moving average (MA) process, which is defined as a
linear combination of the past and current error terms. The MA process is
mean and variance stationary, regardless of the value of the parameters ψ.

Yt = εt + ψ · εt−1 . (3.3)
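Both process classes are easy to simulate. The sketch below generates a Gaussian AR(1) path following (3.2) and an MA(1) path following (3.3); the coefficients and sample size are invented.

```python
import random

random.seed(42)
T = 10_000
eps = [random.gauss(0, 1) for _ in range(T)]   # Gaussian white noise, cf. (3.1)

phi, psi = 0.5, 0.3    # invented coefficients; |phi| < 1 keeps the AR(1) stationary

# AR(1): Y_t = phi * Y_{t-1} + eps_t, cf. (3.2)
y_ar = [0.0] * T
for t in range(1, T):
    y_ar[t] = phi * y_ar[t - 1] + eps[t]

# MA(1): Y_t = eps_t + psi * eps_{t-1}, cf. (3.3)
y_ma = [eps[t] + psi * eps[t - 1] for t in range(1, T)]

mean_ar = sum(y_ar) / len(y_ar)   # close to 0 for a stationary path
```

Since |φ| < 1, the simulated AR(1) path fluctuates around zero, which is exactly the mean stationarity described in the text.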

Beside the variance stationarity there is a more general definition which


treats not only the variation of one random variable of the process, but
also the linear relationship between consecutive random variables. The
autocovariance at lag h is defined as the covariance between the two random
variables Yt and Yt−h .

γ(h) = COV (Yt ,Yt−h ) . (3.4)

The autocovariance γ(h) is the expected common variation of two random


variables Yt and Yt−h of a given DGP, where the time lag h denotes the
number of periods between the random variables. Hence, it holds for the
special case of a white noise process that γ(h) is equal to 0 if h ≠ 0. In
case h = 0, γ(0) is equal to the variance of εt , denoted as σε2 . Therefore,
one mostly just refers to covariance stationarity, which includes variance
stationarity. In addition, based on the autocovariance, the autocorrelation
ρ(h) is defined in (3.5). It is a dimensionless measure describing the linear
relationship between two random variables of the DGP. It is a continuous
measure between −1 and 1, where h denotes the time lag between the
random variables. The sign of ρ(h) indicates the direction of the linear
relationship and if ρ(h) equals 0, a linear relationship between the random
variables can be excluded. ρ(0) always equals 1, and in case of a white
noise process all ρ(h) ∀h ≠ 0 are 0. The definition of the autocorrelation as
a function of the lag h refers to the term autocorrelation function (ACF).

ρ(h) = γ(h) / γ(0) . (3.5)

Another important statistical instrument for analyzing stochastic processes


is the partial autocorrelation function (PACF). This function maps the cor-
relation between two random variables while all other correlations are ex-
cluded. The PACF ν(h) can be calculated using the Yule-Walker equations
(Kirchgässner, Wolters, and Hassler, 2012, p. 53) as the h-th element of
the solution vector of the following equation system:
⎡ 1         ρ(1)      ρ(2)      ...  ρ(h − 1) ⎤ ⎡ ν1 ⎤   ⎡ ρ(1) ⎤
⎢ ρ(1)      1         ρ(1)      ...  ρ(h − 2) ⎥ ⎢ ν2 ⎥   ⎢ ρ(2) ⎥
⎢  ...                               ...      ⎥ ⎢ ... ⎥ = ⎢ ...  ⎥ .   (3.6)
⎣ ρ(h − 1)  ρ(h − 2)  ρ(h − 3)  ...  ρ(1)     ⎦ ⎣ νh ⎦   ⎣ ρ(h) ⎦

Figure 3.1 shows the theoretical ACF and PACF of an AR(2) and MA(2)
process. The parameters of the two processes are φ = {0.3,0.2} and ψ =
{0.3,0.2}. It can be seen that the ACF of an AR process decreases expo-
nentially and has a limit of 0. In contrast, the PACF of an AR process
breaks off after the order of the process. This is also the case if the ACF
of an MA process is considered. It breaks off after the order of the process,
whereas the PACF of an MA process has a more complex structure. The
absolute value of the PACF of an MA process also decreases exponentially
and has a limit of 0.

Due to the properties of the ACF and PACF, i.e. the respective break off
after the process order, they can be used to identify the processes. For
example if the PACF of a process breaks off after lag 1 and the ACF
exponentially decreases, the underlying DGP is an AR(1) process.
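This identification heuristic can be checked numerically: the sample ACF of a simulated AR(1) path should decay roughly geometrically. A pure-Python sketch with invented coefficient and sample size:

```python
import random

def acf(y, h):
    """Sample autocorrelation at lag h, following (3.4) and (3.5)."""
    n = len(y)
    mean = sum(y) / n
    gamma0 = sum((v - mean) ** 2 for v in y) / n
    gammah = sum((y[t] - mean) * (y[t - h] - mean) for t in range(h, n)) / n
    return gammah / gamma0

random.seed(0)
phi, T = 0.7, 20_000
y = [0.0]
for _ in range(T):                      # AR(1) path, cf. (3.2)
    y.append(phi * y[-1] + random.gauss(0, 1))

rho1, rho2 = acf(y, 1), acf(y, 2)       # theory: rho(h) = phi ** h
```

The sample values come out close to the theoretical φ and φ², i.e. close to 0.7 and 0.49, matching the exponential decay of the AR ACF described above.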

After describing the fundamentals of time series analysis, the next two
sections deal with two different approaches to model intermittent demand
series. First, Section 3.2 introduces the class of methods based on Croston
(1972) and Section 3.3 presents the integer-valued autoregressive processes.
Figure 3.1: Theoretical ACF and PACF of an AR(2) and an MA(2) process.

3.2 Croston-Type Models

The forecasting method developed in Croston (1972) is the most commonly used forecasting procedure for intermittent demand (Teunter, Syntetos, and Babai, 2011). The basic idea of this method is to split the observations into two groups (Gardner Jr., 1985): on the one hand, the observations with zero demand (y_t = 0), and on the other hand, the observations with positive demand (y_t > 0). The forecast itself is a combination of the estimated time lag between two consecutive positive demands and the expected amount of items sold in a period with positive demand. Croston (1972) assumes that the demand at time t (Y_t) is the product of two independent random variables X_t and Y_t^+:

$$Y_t = X_t \cdot Y_t^{+} \quad \forall t \,, \qquad (3.7)$$
$$X_t \sim \mathrm{Ber}(\pi_Y^{+}) \,, \qquad (3.8)$$
$$Y_t^{+} \sim N(\mu_{Y^{+}}, \sigma^{2}_{Y^{+}}) \,, \qquad (3.9)$$

where X_t follows a Bernoulli distribution, which indicates whether something is sold in period t. The probability of a positive demand is constant and denoted as π_Y^+, where a superscript + indicates the focus on positive demands. The random variable Y_t^+ defines the amount of items sold in a period with positive demand and follows a normal distribution with expectation μ_{Y^+} and variance σ²_{Y^+}. Thus, Croston (1972) assumes that the demand is generated by a compound Bernoulli process.

Technically, a compound Bernoulli process is a combination of a Bernoulli process, i.e. a sequence of independent and identically distributed Bernoulli random variables modeling whether something is sold in a given period, and another random variable modeling the quantity sold. Independent and identically distributed means that the realization of one variable has no influence on the realization of any other variable and that the probabilities of the possible realizations 0 (no demand) and 1 (positive demand) remain constant over time. These features lead to two statistical properties of a Bernoulli process (Syntetos, Babai, et al., 2011, p. 35):

• The number of positive demands in a given time interval is binomially distributed.

• The number of periods between positive demands is negative binomially distributed and therefore, the number of periods between two consecutive positive demands is geometrically distributed.

Figure 3.2 shows a simulated compound Bernoulli process as assumed in Croston (1972), with X_t ∼ Ber(0.3) and Y_t^+ ∼ N(10, 1). It can be seen that the share of zero demands is about 70% and that, if the simulated demand is positive, it fluctuates around the expectation of Y_t^+. Due to the continuous distribution of the positive demands, the process has innovations that cannot be observed in a practical setup, because demand is naturally discrete. This is one of the main criticisms of the Croston procedure.
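The demand model in (3.7)–(3.9) can be simulated in a few lines. A minimal sketch, reproducing the setting of the simulation described above (π_Y^+ = 0.3, μ_{Y^+} = 10, σ_{Y^+} = 1); names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_compound_bernoulli(T, pi_pos=0.3, mu=10.0, sigma=1.0):
    """Simulate Y_t = X_t * Y_t^+ with X_t ~ Ber(pi_pos), Y_t^+ ~ N(mu, sigma^2)."""
    occurrence = rng.random(T) < pi_pos          # X_t: does a sale occur?
    sizes = rng.normal(mu, sigma, T)             # Y_t^+: size if a sale occurs
    return np.where(occurrence, sizes, 0.0)      # Y_t
```

With a long simulation the share of zero periods is close to 1 − π_Y^+ = 0.7, and the positive demands scatter around μ_{Y^+} = 10, as in Figure 3.2.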

Based on these definitions, the next steps in Croston (1972) are straightforward. Due to the independence of Y_t^+ and X_t, the expectation of Y_t can

12.5

10.0

7.5
Yt

5.0

2.5

0.0
10 20 30 40 50
t

Figure 3.2: Simulated compound Bernoulli process

be written as:

$$E(Y_t) = E(X_t \cdot Y_t^{+}) = E(X_t)\cdot E(Y_t^{+}) = \pi_Y^{+} \cdot \mu_{Y^{+}} \,. \qquad (3.10)$$

Therefore, the expectation of Y_t depends on the probability of a positive demand π_Y^+ and the expectation of Y_t^+ (μ_{Y^+}). As mentioned above, the time between two consecutive positive demands follows a geometric distribution. The expected time between two consecutive positive demands δ equals 1/π_Y^+. This is used to rewrite the expectation of Y_t:

$$\delta = \frac{1}{\pi_Y^{+}} \iff \pi_Y^{+} = \frac{1}{\delta} \,, \qquad (3.11)$$
$$E(Y_t) = \pi_Y^{+}\cdot\mu_{Y^{+}} = \frac{1}{\delta}\cdot\mu_{Y^{+}} = \frac{\mu_{Y^{+}}}{\delta} \,. \qquad (3.12)$$

Thus, the expectation of Y_t is equal to the expectation of the positive demands (μ_{Y^+}) weighted by the expected time between two positive demands δ. These two factors are estimated separately via exponential smoothing using the following recursive equations:

$$\hat{y}^{+}_{t} = \alpha \cdot y_t + (1-\alpha)\cdot \hat{y}^{+}_{t-1} \,, \qquad (3.13)$$
$$\hat{d}_t = \alpha \cdot d_t + (1-\alpha)\cdot \hat{d}_{t-1} \,, \qquad (3.14)$$

where y_t is the realization of Y_t^+ and d_t is a variable counting the periods since the last positive demand. The parameter α is the smoothing factor. It is identical for both equations and weights the influence of past observations. The prediction itself is, analogous to the expectation of Y_t, calculated as the ratio of ŷ_t^+ and d̂_t:

$$\hat{y}^{\,C}_{t+h} = \frac{\hat{y}^{+}_{t}}{\hat{d}_t} \,, \qquad (3.15)$$

where the superscript C indicates the Croston method. In contrast to (3.4) and (3.5), the parameter h denotes the forecasting horizon, i.e. the number of periods between the current time index and the period the forecast is calculated for. The calculation is independent of h and therefore ŷ^C_{t+h} is constant for every horizon. The variance of the forecast is given by:³

$$V^{C}(\hat{y}_{t+h}) = \frac{\alpha\,\sigma^{2}_{Y^{+}} + \alpha(1-\alpha)\,\dfrac{(1-\delta)\,\mu^{2}_{Y^{+}} + \sigma^{2}_{Y^{+}}}{\delta^{2}}}{2-\alpha} \,, \qquad (3.16)$$

which is also constant for every forecasting horizon.

Since Croston uses simple exponential smoothing to calculate the components ŷ_t^+ and d̂_t, the forecasting procedure is similar to the method of exponential smoothing. The first step is determining the start values. After that, Equations (3.13) and (3.14) are used to fit the parameters, and in the last step Equation (3.15) is used to calculate the forecast.

The forecast is based on the recursive equations given in (3.13) and (3.14). A recursive definition always raises the question of how to set the starting values in case t = 1. The literature provides several different approaches to address this problem, but in contrast to the other approaches, the version provided in Willemain et al. (1994, p. 535f) is based on the values of the demand time series. Therefore, this method is used:

³ The original specification of the variance of ŷ^C_{t+h} in Croston (1972) is incorrect. Thus, the given equation is the corrected version provided in Rao (1973).

$$\hat{y}^{+}_{0} = y_f \,, \qquad (3.17)$$
$$\hat{d}_0 = f \,. \qquad (3.18)$$

The constant f defines the index of the first period with a positive demand. Therefore, ŷ_0^+ is equal to the first positive demand, and d̂_0 is set to the index of this period.

After defining the starting values, one needs to select a proper smoothing parameter α. Generally there are three ways of doing so. First, a value can be chosen externally without regard to the data. The literature suggests values between 0.1 and 0.3 (see Croston (1972) or Gardner Jr. (1985) for example). This is clearly the most convenient way to determine α, but Gardner Jr. (2006, p. 651) argues that, given suitable search algorithms, there is no reason to choose exogenous smoothing parameters. Therefore, the second option is to select α via numerical optimization, such that the resulting α minimizes the forecasting error, for example the mean squared error of the one-step-ahead forecast. These error measures are considered in Section 3.4. The third way of finding α is to estimate it by means of statistical inference. While there has been progress in finding stochastic models underlying ad-hoc methods like exponential smoothing, this is not the case for the Croston method. For example, Hyndman, Koehler, et al. (2008) have developed a state space formulation of many exponential smoothing methods, which provides tools of statistical inference such as parameter estimation and forecasting intervals. However, Shenstone and Hyndman (2005) argue that there is no consistent stochastic model underlying the Croston procedure. Therefore, determining α by statistical estimation is not considered further, and numerical optimization is used hereafter.

By using these starting values and the smoothing parameter, the model is fitted along the time series. Within this procedure two different cases are distinguished. If the demand in period t is positive, ŷ_t^+ and d̂_t are calculated using Equations (3.13) and (3.14), and d_t is reset to 1. If y_t is zero, the parameters do not change and are carried over into the next period, while the variable d_t, which counts the number of periods since the last positive demand, is incremented by 1:

$$\hat{y}^{+}_{t} = \hat{y}^{+}_{t-1} \,, \qquad (3.19)$$
$$\hat{d}_t = \hat{d}_{t-1} \,, \qquad (3.20)$$
$$d_t = d_{t-1} + 1 \,. \qquad (3.21)$$
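The fitting procedure can be condensed into a short sketch. The following function is illustrative (not from the thesis); it implements the recursions (3.13)–(3.15) with the Willemain-style start values (3.17)–(3.18) and assumes the series contains at least one positive observation:

```python
import numpy as np

def croston_forecast(y, alpha=0.1):
    """Croston's method: smooth the positive demand sizes and the
    inter-demand intervals, updating only in periods with positive demand.
    Periods are 1-based in the thesis, hence d_hat starts at the 1-based
    index of the first sale."""
    y = np.asarray(y, dtype=float)
    f = int(np.flatnonzero(y > 0)[0])   # 0-based index of first positive demand
    y_hat = y[f]                        # start value: first positive demand
    d_hat = float(f + 1)                # start value: its 1-based period index
    d = 1                               # periods since the last positive demand
    for t in range(f + 1, len(y)):
        if y[t] > 0:
            y_hat = alpha * y[t] + (1 - alpha) * y_hat   # (3.13)
            d_hat = alpha * d + (1 - alpha) * d_hat      # (3.14)
            d = 1
        else:
            d += 1                      # carry parameters over unchanged
    return y_hat / d_hat                # (3.15), constant for every horizon h
```

For the series [0, 0, 5, 0, 5] with α = 0.1 the recursion yields ŷ^+ = 5 and d̂ = 2.9, hence a forecast of 5/2.9 per period.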

Figure 3.3 illustrates this process of parameter fitting. The upper part shows the simulated time series of Figure 3.2 together with the results of (3.13) and (3.15). The lower part shows the variable d_t and the result of (3.14). It can be seen that d_t is incremented by 1 in each period with zero demand and is reset to 1 if y_t is positive. The dotted line in the lower part shows the estimate d̂_t, which is only updated in periods with a positive demand. The dotted line in the upper part shows the estimate ŷ_t^+, which equals the exponential smoothing of y_t excluding all periods with zero demand. The dashed line shows the fit of the Croston procedure; it, too, is only updated when the demand is positive.

Figure 3.3: Parameter fitting of the Croston procedure

Despite the fact that the Croston method is widely accepted as the default forecasting procedure for intermittent demand (Babai, Syntetos, and Teunter, 2014), there are some issues with this method.⁴ For example, there are problems with the model specification. Croston (1972) ignores the fact that the number of sales is naturally discrete and assumes Y_t^+ to be normally distributed. Additionally, updating the forecast only in periods with a positive demand leads to long stretches without parameter updates if the demand is highly intermittent. Separating the demand into two independent random variables Y_t^+ and X_t, i.e. assuming that the level of demand is independent of its occurrence, is a very strict assumption. Syntetos and Boylan (2001) show that the estimator ŷ^C_{t+h} is positively biased. This is due to the fact that E(1/X) ≠ 1/E(X) (Teunter, Syntetos, and Babai, 2011). Therefore, they suggest that the Croston procedure should not be used.

⁴ A more detailed overview of the issues of the Croston method can be found in Gardner Jr. (2006, p. 655f).

To correct the bias of the Croston estimator, Syntetos and Boylan (2005) provide a new estimator, denoted as ŷ^S_{t+h}, in a later work:⁵

$$\hat{y}^{\,S}_{t+h} = \left(1 - \frac{\alpha}{2}\right)\cdot \frac{\hat{y}^{+}_{t}}{\hat{d}_t} \,. \qquad (3.22)$$

Compared with the original Croston estimator, ŷ^S_{t+h} is weighted with the factor 1 − α/2, where α is the smoothing parameter used in (3.13) and (3.14). Since α is defined between 0 and 1, the correction factor takes values between 0.5 (α = 1) and 1 (α = 0) and therefore reduces the forecast. The authors do not change the structure of the Croston procedure. Thus, the initialization, the calculation of α, and the parameter fitting remain the same.

⁵ Syntetos and Boylan (2001) already provide an unbiased estimator, but the literature makes no mention of this version; therefore only the estimator given in Syntetos and Boylan (2005) is used hereafter.
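The correction in (3.22) is a one-line damping of the Croston ratio; for illustration (a sketch, names not from the thesis):

```python
def sba_forecast(croston_value, alpha):
    """Syntetos-Boylan approximation (3.22): damp the Croston
    ratio y_hat/d_hat by the factor 1 - alpha/2."""
    return (1 - alpha / 2) * croston_value
```

For example, a Croston forecast of 2.0 with α = 0.2 becomes 0.9 · 2.0 = 1.8; with α → 0 the correction vanishes.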

Another estimator frequently mentioned in the literature was developed in Leven and Segerstedt (2004). The authors intend to develop a straightforward estimator that is not biased as ŷ^C_{t+h} is and does not have more parameters than necessary. Their method does not distinguish between periods of zero demand and periods of positive demand. It is given by:

$$\hat{y}^{\,L}_{t} = \alpha\cdot\frac{y_t}{d_t} + (1-\alpha)\cdot\hat{y}^{\,L}_{t-1} \,, \qquad (3.23)$$
$$\hat{y}^{\,L}_{t+h} = \hat{y}^{\,L}_{t} \,. \qquad (3.24)$$

As in the previously adjusted version of the Croston procedure, Leven and Segerstedt (2004) do not change the main procedure. The parameters are also only updated in periods of positive demand, and d_t is a counting variable. This definition does not lead to the bias of ŷ^C_t (Leven and Segerstedt, 2004, p. 362f), but Teunter and Duncan (2009, p. 322) show that ŷ^L_{t+h} is also biased.

The last estimator described in this section is provided in Teunter, Syntetos, and Babai (2011). This estimator is also based on the ideas of Croston (1972), but it changes the procedure more than the preceding adaptations of the Croston method. The demand series is also separated into positive and zero demands, but π_Y^+ is estimated instead of δ. This has the advantage that the parameters can be updated in every period. Furthermore, the two recursive updating equations use two different smoothing parameters, as suggested in Schultz (1987). In case of a positive demand in period t, the parameters are updated using the following equations:

$$\hat{y}^{+}_{t} = \alpha\cdot y_t + (1-\alpha)\cdot\hat{y}^{+}_{t-1} \,, \qquad (3.25)$$
$$\hat{\pi}^{+}_{t} = \beta\cdot x_t + (1-\beta)\cdot\hat{\pi}^{+}_{t-1} \,, \qquad (3.26)$$
$$x_t = \begin{cases} 0 & \text{if } y_t = 0 \\ 1 & \text{else} \end{cases} \,, \qquad (3.27)$$

where x_t is a variable which indicates whether y_t is positive. If the demand in period t is zero, ŷ_t^+ is not updated and is carried over from the last period.

In all cases the forecast is calculated as the product of the smoothed probability of a positive demand π̂_t^+ and the smoothed level of the positive demands ŷ_t^+:

$$\hat{y}^{\,T}_{t+h} = \hat{\pi}^{+}_{t}\cdot\hat{y}^{+}_{t} \,. \qquad (3.28)$$

In addition, the authors provide an adapted calculation rule for the forecast variance, which accounts for the two different smoothing parameters:

$$V(\hat{y}^{\,T}_{t+h}) = \frac{\alpha\,\pi_Y^{+}\,\sigma^{2}_{Y^{+}}}{2-\alpha} + \frac{\beta\,{\pi_Y^{+}}^{2}(1-\pi_Y^{+})\,\mu^{2}_{Y^{+}}}{2-\beta} + \frac{\alpha\beta\,\pi_Y^{+}(1-\pi_Y^{+})\,\sigma^{2}_{Y^{+}}}{(2-\alpha)(2-\beta)} \,. \qquad (3.29)$$
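The TSB recursion (3.25)–(3.28) can be sketched as follows. The start values used here (first positive demand, empirical share of positive periods) are a simplifying assumption of this sketch, not prescribed by the method; names are illustrative:

```python
def tsb_forecast(y, alpha=0.1, beta=0.1):
    """Teunter-Syntetos-Babai method: smooth the occurrence probability in
    every period (rate beta) and the positive demand size only when demand
    occurs (rate alpha)."""
    y_hat = next(v for v in y if v > 0)            # assumed start: first positive demand
    p_hat = sum(1 for v in y if v > 0) / len(y)    # assumed start: demand share
    for v in y:
        x = 1.0 if v > 0 else 0.0                  # x_t as in (3.27)
        p_hat = beta * x + (1 - beta) * p_hat      # (3.26), updated every period
        if v > 0:
            y_hat = alpha * v + (1 - alpha) * y_hat  # (3.25)
    return p_hat * y_hat                           # (3.28)
```

Updating π̂_t^+ in every period is the practical difference to the Croston-type recursions above: the forecast decays during long runs of zero demand instead of staying frozen.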

The empirical results on the performance of the different Croston-type methods are ambiguous. There are mainly four empirical studies comparing the forecast performance of different methods suitable for intermittent demand.

Willemain et al. (1994) compared the performance of the Croston procedure to exponential smoothing. They did a comparison based on industry data and a Monte Carlo study in which they violated Croston's underlying distributional assumption. Using different forecast measures, they found Croston to be more accurate than exponential smoothing and recommend using the Croston method for intermittent demand series.

Eaves and Kingsman (2004) compare the Croston method, the adapted version provided in Syntetos and Boylan (2005), exponential smoothing, and a moving average approach. The authors compare the methods using different forecasting performance measures and the resulting stock levels. Although the results could not confirm an overall performance increase of Croston compared with exponential smoothing, the authors state that the adaptation of Syntetos and Boylan performs best.

Teunter and Duncan (2009) focus on forecast performance measures and the achieved service levels. They could not confirm the results of the first two empirical studies with respect to the forecast performance measures. However, they show that using Croston-type methods can improve the resulting service levels.

Babai, Syntetos, and Teunter (2014) is the first comparative study which also includes the Croston adaptation developed in Teunter, Syntetos, and Babai (2011). The authors use two different industry datasets. In one dataset the Croston procedure performed better than exponential smoothing, while in the other the opposite occurred. In both cases the adaptations of Syntetos and Boylan (2005) and of Teunter, Syntetos, and Babai (2011) led to an increase in forecast performance compared with the Croston procedure.

All of the above-mentioned studies show that Croston-type forecast methods outperform exponential smoothing, but except for Willemain et al. (1994) no later study recommends the Croston method itself; they recommend the adaptation of Syntetos and Boylan instead. The forecast quality also differs if inventory performance, i.e. the stock and service levels, is regarded in addition to the traditional forecast performance measures. However, all of the forecasting methods presented so far only estimate the first two central moments of the demand series; therefore, an additional lead time demand distribution assumption is required in order to use those forecasts for inventory management.

3.3 Integer-Valued Autoregressive Moving Average Processes

This section introduces the integer-valued version of the widely used Box-Jenkins autoregressive moving average (ARMA) models. The restriction to a discrete state space while retaining several of the statistical properties of classical ARMA models is achieved through the introduction of the binomial thinning operator. The use of integer-valued autoregressive moving average (INARMA) processes has several advantages compared with the ad-hoc methods described before. First, all forecasting models presented in Section 3.2 only estimate the first two central moments of the demand series and lack a consistent theoretical basis. In addition, forecasts based on INARMA processes are coherent, i.e. they satisfy all constraints of the demand time series, which is the foundation of a suitable time series model (Chatfield, 2000, p. 81).

The literature on the theoretical properties of INARMA processes is widespread, but there are only a few known applications. For example, Brännäs and Quoreshi (2010), Jung and Tremayne (2010), and McCabe, Martin, and D. Harris (2011) use INARMA processes to model the number of transactions in intra-day stock data. While Brännäs and Quoreshi used per-minute order data, Jung and Tremayne and McCabe, Martin, and D. Harris focus on iceberg orders; this type of order reduces the price impact of large orders by splitting them into small parts. Jung and Tremayne (2006) proposed a physical application by modeling the number of gold particles in a liquid suspension. Several other applications are conceivable because count data is widespread, like the number of members of a queue (McCabe, Martin, and D. Harris, 2011), the annual count of natural disasters, or the number of patients in an emergency department (Maiti and Biswas, 2015). However, there is no known application of INARMA processes in inventory optimization.

To use the models presented so far in inventory control, an additional assumption about the probability distribution of the lead time demand is required, which may lead to theoretical inconsistencies. For example, Croston (1972) assumes a compound Bernoulli distributed demand, while many inventory optimization methods rely on a normally distributed demand assumption. In contrast, the consistent statistical model underlying INARMA models makes it possible to predict the complete future probability distribution of the demand. This can be used directly to determine the reorder level s without any additional assumption. While there are several different definitions of INARMA processes, this thesis focuses on INARMA models with Poisson distributed marginals and the binomial thinning operator.⁶

The remainder of this section is structured as follows. First, the model specifications of INARMA processes are introduced in Section 3.3.1. Then, Section 3.3.2 presents techniques to estimate and identify different types of INARMA processes. Section 3.3.3 describes methods to calculate the probability mass function of future demand, and Section 3.3.4 defines how those probability mass functions can be aggregated. Finally, Section 3.3.5 considers the calculation of point forecasts based on the future probability mass functions.

3.3.1 Model Specification

The idea of INARMA processes is that the demand in period t results from two mechanisms. First, the current demand depends on the sales of the last periods, i.e. on how much demand of the past periods 'survives'. Second, the current demand depends on the outcome of an innovation process. This idea was first introduced in Al-Osh and Alzaid (1987), who define the first-order integer-valued autoregressive process (INAR(1)) as follows:

$$\underbrace{Y_t}_{\text{current demand}} = \underbrace{\phi \circ Y_{t-1}}_{\text{lingering demand}} + \underbrace{\varepsilon_t}_{\text{new demand}} \,. \qquad (3.30)$$

⁶ A detailed overview of different thinning operators is given in Weiß (2008).

The demand in period t (Y_t) is the sum of the lingering demand of the last period, φ ◦ Y_{t−1}, and the new demand ε_t. The '◦' denotes the binomial thinning operator developed in Steutel and Harn (1979). It is defined as the sum of Y_{t−1} many independent and identically distributed Bernoulli random variables with P(B_i = 1) = φ:

$$\phi \circ X = \sum_{i=1}^{X} B_i \,, \qquad (3.31)$$
$$B_i \sim \mathrm{Ber}(\phi) \,. \qquad (3.32)$$

Thus, (φ ◦ X) is itself a random variable following a binomial distribution with expectation E(φ ◦ X) = φ · X and variance V(φ ◦ X) = φ · (1 − φ) · X. In addition, it holds that 0 ◦ X = 0 and 1 ◦ X = X (Al-Osh and Alzaid, 1987, p. 262). The parameter φ is defined on the closed interval between 0 and 1, and the random variables ε_t are assumed to be independent and identically Poisson distributed with intensity parameter λ. Figure 3.4 shows 20 periods of a simulated INAR(1) process with λ = 1.5 and φ = 0.5. The autoregressive component is highlighted in light gray, while ε_t is marked in dark gray. It shows a typical pattern of an INAR(1) process, with a relatively high variance and periods of zero observations. In period 1 the value of Y_1 results exclusively from the Poisson innovation ε_1, and in periods 2 and 3 the Poisson innovation equals 0. Thus, the values of periods 2 and 3 result from the binomial thinning, i.e. the autoregressive component. The value of period 1 is cut by half in period 2 and again cut by half in period 3. Therefore, if the value of Y_t is high, it is likely that Y_{t+1} will also be relatively high, and after a period in which Y_t equals 0, the value of Y_{t+1} depends only on the Poisson innovation ε_{t+1}.
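The data-generating mechanism of (3.30)–(3.32) can be simulated directly, since the binomial thinning φ ◦ Y_{t−1} is just a Binomial(Y_{t−1}, φ) draw. A sketch with the same parameters as the simulation described above (λ = 1.5, φ = 0.5); names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_inar1(T, phi=0.5, lam=1.5):
    """Simulate Y_t = phi o Y_{t-1} + eps_t with eps_t ~ Poisson(lam)."""
    y = np.zeros(T, dtype=np.int64)
    y[0] = rng.poisson(lam)                       # period 1: innovation only
    for t in range(1, T):
        lingering = rng.binomial(y[t - 1], phi)   # phi o Y_{t-1}
        y[t] = lingering + rng.poisson(lam)       # plus new demand eps_t
    return y
```

The resulting series is integer-valued and non-negative by construction, and its long-run mean approaches the stationary value λ/(1 − φ) = 3 for these parameters.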

Figure 3.4: Simulated INAR(1) process

According to Alzaid and Al-Osh (1990), Jung and Tremayne (2006), and Bu and McCabe (2008), the INAR(1) process can be generalized by considering more time lags. Equation (3.33) defines a general INAR(p) process with p different time lags:

$$Y_t = \phi_1 \circ Y_{t-1} + \phi_2 \circ Y_{t-2} + \dots + \phi_p \circ Y_{t-p} + \varepsilon_t \,, \qquad (3.33)$$

where the general intuition remains the same, but the demand Y_t now depends on ε_t and the demands of the last p periods.

Whereas the AR part describes the relationship between the random variables of the process, the MA part describes the relationship between the different error terms ε_t. Al-Osh and Alzaid (1988, p. 284) define the integer-valued MA(1) process as follows:

$$\text{INMA(1):}\quad Y_t = \varepsilon_t + \psi \circ \varepsilon_{t-1} \,. \qquad (3.34)$$

Due to this definition, the demand in period t can be interpreted as purely random, but the demand in the current period depends on the new demand of the recent period, ε_{t−1}. The definitions of a general INMA(q) process and of the combined INARMA(p,q) process are straightforward (Al-Osh and Alzaid, 1988, p. 295):

$$\text{INMA(q):}\quad Y_t = \varepsilon_t + \psi_1 \circ \varepsilon_{t-1} + \psi_2 \circ \varepsilon_{t-2} + \dots + \psi_q \circ \varepsilon_{t-q} \,, \qquad (3.35)$$

$$\text{INARMA(p,q):}\quad Y_t = \sum_{i=1}^{p} \phi_i \circ Y_{t-i} + \varepsilon_t + \sum_{i=1}^{q} \psi_i \circ \varepsilon_{t-i} \,. \qquad (3.36)$$

Analogously to the general INAR(p), the general INMA(q) is the sum of ε_t and q many lagged thinnings of the recent errors ε_{t−1},...,ε_{t−q}. The general INARMA(p,q) process is the sum of p many lagged binomial thinnings of the recent observations, ε_t, and q many lagged thinnings of the recent error terms.

3.3.2 Process Estimation and Identification

A comprehensive overview of the different estimation methods, separated by process order, is given in Mohammadipour (2009, pp. 104ff). In this section two different estimation methods are presented. The first addresses pure INAR processes, and the second covers the general INARMA(p,q) case.

The conditional least squares estimator for the parameters of an INAR(p) process can be obtained by minimizing the following quadratic function:

$$Q(\hat{\phi},\hat{\lambda}) = \sum_{t=p+1}^{T}\left(y_t - (\hat{\phi}_1\, y_{t-1} + \hat{\phi}_2\, y_{t-2} + \dots + \hat{\phi}_p\, y_{t-p} + \hat{\lambda})\right)^{2} \,, \qquad (3.37)$$

$$Q(\hat{\phi},\hat{\lambda}) = \sum_{t=p+1}^{T}\left(y_t - \sum_{i=1}^{p}\hat{\phi}_i\, y_{t-i} - \hat{\lambda}\right)^{2} \,, \qquad (3.38)$$
t=p+1 i=1

where φ̂ is the vector of the estimators for the different INAR lags and λ̂ denotes the estimator for the expectation of the Poisson marginal. The estimators are thus chosen to minimize the overall quadratic difference between the expected value of y_t and the observation. The minimization can be done either by setting the gradient to zero or, due to the convexity of Q(φ̂,λ̂), through numerical optimization using the Nelder-Mead algorithm (Nelder and Mead, 1965). Du and Li (1991) show that the conditional least squares estimator is asymptotically normal and strongly consistent.
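Because the objective in (3.38) is quadratic in (φ̂, λ̂), minimizing it is an ordinary least squares problem: regress y_t on its p lags and an intercept, where the intercept estimates λ. A sketch (names illustrative, not from the thesis):

```python
import numpy as np

def cls_inar(y, p=1):
    """Conditional least squares for an INAR(p) per (3.38)."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    # design matrix: one column per lag i = 1..p, plus a column of ones for lambda
    cols = [y[p - i - 1:T - i - 1] for i in range(p)]
    X = np.column_stack(cols + [np.ones(T - p)])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coef[:p], coef[p]    # (phi estimates, lambda estimate)
```

Note that ordinary least squares does not enforce the constraints 0 ≤ φ̂_i ≤ 1 and λ̂ ≥ 0; a constrained optimizer would be needed to guarantee admissible estimates in small samples.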

The most recent approach for estimating the parameters of a general INARMA(p,q) process has been developed in Neal and Subba Rao (2007). It is a Markov Chain Monte Carlo (MCMC) approach which estimates all parameters simultaneously. This approach is briefly described below.

An MCMC approach estimates the parameters by modeling the estimation process as a Markov chain, with the distribution of the estimates as the stationary distribution of the Markov process. This stationary distribution results from an iterative Monte Carlo simulation. To understand the mechanics of the MCMC approach given in Neal and Subba Rao (2007), one must consider a given observation of an INARMA(p,q) process as the sum of p many independent Bernoulli experiments based on the past observations, q many Bernoulli experiments based on the past error terms, and the current error term ε_t; all those past observations are in turn based on p + q many Bernoulli experiments as well. If one knew the outcomes of all Bernoulli experiments, estimating the parameters would be simple; this is the main premise of the MCMC procedure. Based on randomly selected parameters, different sets of Bernoulli outcomes are simulated, where randomly means drawing the parameters from a distribution that depends on the past simulation. Each iteration consists of the following two steps.

First, the estimators φ̂_1,...,φ̂_p, ψ̂_1,...,ψ̂_q and λ̂ are drawn from random distributions. One after another, all φ̂'s and ψ̂'s are updated following a beta distribution, whereas λ̂ is drawn from a gamma distribution.

The beta distribution is a continuous distribution with support between 0 and 1 (Johnson, Kotz, and Balakrishnan, 1995a, p. 210ff). It is defined by two positive shape parameters and can therefore take a large variety of shapes. The shape parameters are set in such a way that the expectation of the distribution reflects the current state of the Markov chain, i.e. if, for example, about 30% of the sales are carried over to the following period through the Bernoulli experiments, the expectation of the beta distribution for φ̂_1 will be 0.3.

The gamma distribution is also a continuous distribution, with support on the positive range (Johnson, Kotz, and Balakrishnan, 1995b, p. 337ff). It is defined by two positive parameters and is also very adaptable. The λ parameter affects the value of the marginal (ε_t) in every period. Therefore, the expectation of the gamma distribution used to draw the estimate λ̂ is set to the mean of the guessed marginals in the current Markov state.

After updating the estimates, they are used to guess which Bernoulli outcomes could have led to the observed time series. These guesses are then used in the next iteration step to update the estimates.

The stationary distribution of the Markov chain, i.e. the distribution of the estimates, is determined by storing the estimates over several iterations. Finally, the average of the stored estimates is used as the estimator of the p + q thinning parameters and λ.

The definition of the binomial thinning operator as a sum of independent Bernoulli experiments with a constant success probability has an additional advantage. The autocorrelation function (ACF) and the partial autocorrelation function (PACF) are the same as in the continuous ARMA case (see Mohammadipour (2009) for a detailed analysis). Thus, the most commonly used tools for model identification presented in Section 3.1 can be applied without any adjustments: the ACF can be used to identify pure INMA(q) processes and the PACF to identify pure INAR(p) processes.

A method to identify mixed INARMA(p,q) processes is described in Enciso-Mora, Neal, and Subba Rao (2009). It is based on the MCMC estimation of Neal and Subba Rao (2007), but in addition this approach includes the order of the process in the state of the Markov chain and estimates it simultaneously with the parameters. This method is not used in the remainder of this thesis; please refer to Enciso-Mora, Neal, and Subba Rao (2009) for additional information.

3.3.3 Forecasting in Pure INAR Processes

As with the Croston-type forecasting methods, INARMA processes can be used to produce point forecasts of the conditional expectation E(Y_{t+h}|y_t,...) and the conditional variance V(Y_{t+h}|y_t,...). Furthermore, the consistent specification of the underlying stochastic model of INARMA processes makes it possible to forecast the complete probability mass function of the future random variables Y_{t+h}. As previously shown in Section 2.3.3, this is exactly what is needed to optimize the reorder point s and the order-up-to level S for given service degree constraints. In addition, the future probability mass function holds all the information needed to calculate point forecasts. Therefore, there is no detailed description of MSE optimal point forecasts of INARMA processes here; instead, Section 3.3.5 provides an overview of two methods to calculate point forecasts of INARMA processes.⁷ The remainder of this section is separated according to the model order of the assumed process. First, the probability mass function forecast is described for the INAR(1) case, second for the more general INAR(p) case, and third for the most general INARMA(p,q) case.

Freeland and McCabe (2004) provide an approach to calculate the conditional h-step-ahead probability mass function p̂(Y_{t+h} = x | y_t) of an INAR(1) process based on the last observation:

$$\hat{p}(Y_{t+h}=x \mid y_t) = \sum_{i=0}^{\min(x,\,y_t)} \binom{y_t}{i}\,(\hat{\phi}^{h})^{i}\,(1-\hat{\phi}^{h})^{y_t-i} \cdot \exp\!\left(-\hat{\lambda}\,\frac{1-\hat{\phi}^{h}}{1-\hat{\phi}}\right) \frac{1}{(x-i)!}\left(\hat{\lambda}\,\frac{1-\hat{\phi}^{h}}{1-\hat{\phi}}\right)^{x-i} \,. \qquad (3.39)$$

⁷ For a detailed description of point forecasts in INARMA processes see Mohammadipour (2009, p. 124ff).

This equation is based on an idea described in Bu, McCabe, and Hadri (2008, p. 976). They consider an INAR(1) process (3.30) to be the sum of two independent random variables, the binomial thinning (φ ◦ Y_{t−1}) and the Poisson innovation (ε_t). Therefore, the probability of Y_t = x can be calculated via the convolution of those two random variables. This is how Equation (3.39) should be interpreted: it is the convolution of a binomial and a Poisson distribution. As shown in Bu and McCabe (2008), this equation can be extended to the INAR(p) case:

$$
\hat{p}(Y_t = x \mid y_{t-1},\dots,y_{t-p}) =
\sum_{i_1=0}^{\min(x,\,y_{t-1})}\binom{y_{t-1}}{i_1}\hat{\phi}_1^{\,i_1}(1-\hat{\phi}_1)^{y_{t-1}-i_1}
\cdots
\sum_{i_p=0}^{\min(x-(i_1+\dots+i_{p-1}),\,y_{t-p})}\binom{y_{t-p}}{i_p}\hat{\phi}_p^{\,i_p}(1-\hat{\phi}_p)^{y_{t-p}-i_p}
\cdot \frac{e^{-\hat{\lambda}}\,\hat{\lambda}^{\,x-(i_1+\dots+i_p)}}{(x-(i_1+\dots+i_p))!} \,, \qquad (3.40)
$$

but with a slightly different interpretation. A general INAR(p) process as given in Equation (3.33) is considered as the convolution of φ_1 ◦ Y_{t−1} and φ_2 ◦ Y_{t−2} + · · · + φ_p ◦ Y_{t−p} + ε_t, which are independent random variables. Then φ_2 ◦ Y_{t−2} + · · · + φ_p ◦ Y_{t−p} + ε_t is regarded as the convolution of φ_2 ◦ Y_{t−2} and φ_3 ◦ Y_{t−3} + · · · + φ_p ◦ Y_{t−p} + ε_t, and so on. This is what Equation (3.40) represents. The drawback of this extension is that it only holds for the one-step

predictive PMF. To gather further forecasting horizons, the authors define


the INAR(p) process as a Markov chain, with each state represented by the
value of the past p observations. Due to the Poisson innovations the set of
states is theoretically infinitely large, but the authors argue that the prob-
ability of observing most of those states is negligible. Therefore, a Markov
chain with a finite set of states remains. The state set of an INAR(1)
process, for example, would be A = ((0),(1),(2),...,(G)), and in case of an
INAR(2) process A would be ((0,0),(0,1),...,(0,G),(1,0),...,(1,G),...,(G,G))
with G as a number large enough to gather all plausible states. The h-
step-ahead forcast of the PMF is then given by (Bu and McCabe, 2008, p.
155):

\[
P(Y_{t+h} = x \mid y_t, y_{t-1}, \ldots, y_{t-p+1}) = \xi_t M^h a_x \qquad \forall h > 0, \tag{3.41}
\]

with ξt as the probability vector of the current state, M^h as the h-step transition matrix, and ax as a selecting vector containing zeros except for a 1 at position x. The one-step matrix M¹ can be calculated using Equation (3.40), while M^h equals M^(h−1)M.
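This finite-state construction can be sketched as follows; the truncation bound G, the parameter values, and all names are assumptions of this illustration. The matrix M is filled with the one-step PMF (Equation (3.39) with h = 1), and Equation (3.41) is applied as a matrix power:

```python
import numpy as np
from math import comb, exp, factorial

def inar1_transition_matrix(G, phi, lam):
    """One-step transition matrix of a Poisson INAR(1) truncated to the
    finite state set A = (0, 1, ..., G); entry [y, x] = P(next = x | y)."""
    M = np.zeros((G + 1, G + 1))
    for y in range(G + 1):
        for x in range(G + 1):
            M[y, x] = sum(
                comb(y, i) * phi ** i * (1 - phi) ** (y - i)
                * exp(-lam) * lam ** (x - i) / factorial(x - i)
                for i in range(min(x, y) + 1)
            )
    return M

G, phi, lam = 25, 0.5, 1.2
M = inar1_transition_matrix(G, phi, lam)
xi = np.zeros(G + 1)
xi[3] = 1.0                                   # current state: y_t = 3
pmf_h2 = xi @ np.linalg.matrix_power(M, 2)    # Eq. (3.41) with h = 2
```

For G chosen large enough, the truncated chain reproduces the closed-form h-step PMF of Equation (3.39) up to a negligible leakage of probability mass.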

[Figure: states 0, 1 and 2 of the chain across the periods t, t+1, t+2, t+3]

Figure 3.5: Graph representation of the Markov chain of an INAR(1)

Figure 3.5 shows a graphical representation of the different states of an


INAR(1) with G = 2. It is assumed that Yt equals 1. Thus, it can be seen
that from this starting state in period t every other state is reachable in

[Figure: states (0,0) to (2,2) of the chain across the periods t, t+1, t+2, t+3]

Figure 3.6: Graph representation of the Markov chain of an INAR(2)

the consecutive periods t + 1, t + 2, .... Additionally, after period t + 1 the pattern of the graph is regular. In contrast to Figure 3.5, Figure 3.6 shows the graph representation of the Markov chain of an INAR(2) process. In order to represent the two lagged variables of the INAR(2) process, the state is given by the last two realizations of Yt, i.e. the starting state (1,0) represents the realizations yt−1 = 1 and yt = 0. For this reason not all states are reachable from the starting state: the realization of yt in period t needs to be the same as the realization of yt−1 in period t + 1. Hence, from the starting state only those states are reachable where yt−1 equals 0, and only in period t + 2 are all states reachable from the starting state.

Figure 3.7 shows an exemplary demand series of an intermittent SKU over 30 periods and the subsequent 10-step forecast of the probability mass function of yt. The brightness of the squares after period 30 indicates the probability of the different realizations of the future random variables (Yt+h). Every vertical set of squares belongs to a different PMF. The structure of the different PMFs changes considerably within the first periods and converges to the equilibrium distribution. The mode of the PMF falls because the last observation (y30) is above the unconditional mean of the demand series. An important fact is that the PMFs are not symmetric and that each PMF depends on its predecessor, as described in this section.

[Figure: demand series for t = 1,...,30 and heat map of P(yt+h) for the forecast periods 31 to 40, with yt ranging from 0 to 10]

Figure 3.7: Forecast of the future PMF of an INAR process

Neal and Subba Rao (2007) also propose a method to determine the fu-
ture PMF of a mixed INARMA process. They use the MCMC approach
described in Section 3.3.2, but instead of using the past observations to
guess the Bernoulli outcomes of the process, they use the sampled values
of the MCMC approach itself. This has the advantage that it is possible to
calculate the future PMFs of general INARMA(p,q) processes, but there is
no report about a practical application of this approach.

3.3.4 Forecast Aggregation

As described in Section 2.3.3, information about the lead time demand, i.e. the PMF of the demand during the lead time, p̂ltd(x), is needed in order to optimize the reorder point s.


\[
\hat{p}_{ltd}(x) = P\left(\sum_{i=1}^{h} Y_{t+i} = x \,\middle|\, y_t, y_{t-1}, \ldots, y_{t-p+1}\right). \tag{3.42}
\]

In most cases this PMF is obtained by the convolution of the future PMFs while assuming the independence of Yt, Yt+1,...,Yt+h (Nahmias, 1979, p. 276). In case of INARMA(p,q) processes this assumption does not hold because each PMF depends on its p predecessors. Therefore, a new approach to determine the PMF of the sum of future demands during the lead time is described in this section. By using the law of total probability, those probabilities can be written as:


\[
P\left(\sum_{i=1}^{h} Y_{t+i} = x \,\middle|\, y_t, \ldots\right) = \sum_{j_1=0}^{x} \sum_{j_2=0}^{x-j_1} \sum_{j_3=0}^{x-(j_1+j_2)} \cdots \sum_{j_{h-1}=0}^{x-(j_1+\ldots+j_{h-2})} P\big(Y_{t+1}=j_1 \cap Y_{t+2}=j_2 \cap \ldots \cap Y_{t+h}=x-(j_1+\ldots+j_{h-1})\big). \tag{3.43}
\]

The right-hand side of this equation is the sum of the probabilities of all intersections which equal x. If x were 0, for example, there would be just one summand, i.e. the probability that all future values (Yt+1,...,Yt+h) are zero. If x is 1, there are h-many summands, each being the probability that exactly one future observation equals 1 while all others are zero. The formulation given in (3.43) may help to calculate the p̂ltd(x) of an INAR(1) process, but finding a formulation for the intersection probabilities of an INAR(p) or even an INARMA(p,q) will be hard if not impossible. Therefore, in the following a more intuitive way to calculate those intersection probabilities using a graph representation will be presented. It is based on the same idea of defining the process as a finite state space Markov chain described above. The Markov chain will be defined as a directed acyclic graph, where the different states, i.e. the vertices, represent the values of Yt, Yt+1,...,Yt+h, and the edges represent the switching probabilities from one state to the next. Additionally, an auxiliary vertex E with edges connecting every state of the last layer (t + h) to it has been added to the graph.

Figure 3.8 shows a graph representation of this Markov chain of an INAR(2)


process with G = 2. Starting from the current certain state (1,0) there are

[Figure: states (0,0) to (2,2) across the layers t, t+1, t+2, t+3, all connected to the auxiliary end vertex E]

Figure 3.8: Graph representation of the Markov chain of an INAR(2)

three ((G + 1)-many) edges leading to the next layer of the graph, marked with (t + 1). The next layers (t + 2, t + 3, ...) have (G + 1)² reachable vertices, where every vertex except those in the (t + 2) layer is reachable from G + 1 preceding vertices. All vertices of the (t + h) layer are connected to the auxiliary end vertex E. All (G + 1)^h-many paths from the starting vertex (1,0) to the end vertex E represent the plausible outcomes of the sum of the future random variables Yt+1,...,Yt+h. By weighting the edges with the switching probabilities and the vertices with the current value of Yt, the length of a path indicates the probability of that path, while the visited vertices provide information about the resulting value of Σ_{i=1}^{h} Yt+i of that path, i.e. the sum of the vertex weights. The edges which connect the last layer (t + h) to the auxiliary end vertex E are weighted with 1. Let P be the set of all paths from (1,0) to E and let Px denote the subset of these paths where the sum of the weights of the visited vertices equals x. Each path p is defined by the edges it uses (e1, e2,...,eh+1). Thus, the probability mass function of the demand during the lead time can be

defined as:


\[
\hat{p}_{ltd}(x) = \sum_{p \in P_x} \prod_{i=1}^{h+1} f(e_i), \tag{3.44}
\]

where f(ei) is the weight of the i-th edge of the path, e.g. f(e1) is the probability of switching from the first to the second vertex of this path. This probability is calculated using Equation (3.41). In most cases the number of states in a layer is much higher than the number of layers; the set of all possible paths should therefore be determined using depth-first search. An advantage of this formulation is that it is based on methods and terms which are used in operations research (e.g., see Sun and Queyranne (2002)).
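A minimal sketch of this path enumeration for the INAR(1) case, assuming a given toy one-step transition matrix M; all names and values are illustrative only:

```python
import numpy as np

def ltd_pmf_exact(M, y0, h):
    """Exact lead-time-demand PMF via depth-first enumeration of all
    (G+1)**h paths through the layered Markov graph (Eq. 3.44, INAR(1)).
    Path probability = product of edge weights; path value = vertex sum."""
    G = M.shape[0] - 1
    pmf = {}

    def dfs(state, depth, total, prob):
        if depth == h:                 # reached the layer connected to E
            pmf[total] = pmf.get(total, 0.0) + prob
            return
        for nxt in range(G + 1):       # one edge per reachable next state
            dfs(nxt, depth + 1, total + nxt, prob * M[state, nxt])

    dfs(y0, 0, 0, 1.0)
    return pmf

# toy one-step transition matrix with G = 2 (rows sum to 1)
M = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])
pmf_ltd = ltd_pmf_exact(M, y0=1, h=2)
```

The edges to the end vertex E carry weight 1, so they are left implicit in the recursion.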

In order to use other graph algorithms it might be useful to redefine the edge weights. In most cases edge weights are interpreted as the distance between two vertices and are summed up within graph algorithms, e.g. to find the shortest path. The summation of switching probabilities, however, would not lead to meaningful results. Thus, the edge weights can be redefined as logarithmic switching probabilities, which leads to a different definition of the PMF of the lead time demand:
\[
\hat{p}_{ltd}(x) = \sum_{p \in P_x} \exp\left(\sum_{i=1}^{h+1} g(e_i)\right), \tag{3.45}
\]

where g(ei) = ln f(ei). Using this definition, the summation of the edge weights is meaningful. The only drawback of using logarithmic probabilities as edge weights is that it leads to negative edge weights, and only a subset of graph algorithms is suitable for graphs with negative edge weights.

The calculation of p̂ltd(x) relies on the evaluation of all possible paths between the starting vertex (1,0) and the auxiliary end vertex E. As mentioned above, the cardinality of the set P rises polynomially with the number of states and exponentially with the number of layers (|P| = (G + 1)^h). Therefore, the calculation of p̂ltd(x) is complex, especially if the lead time is long. Because the aim of this procedure is to calculate the future PMF and the majority of paths have very low probabilities, much computational effort is misspent on paths with low probabilities. Hence, the next step is to reduce the set of all possible paths to the most plausible ones.

This reduction is achieved via a Monte Carlo simulation. Considering the future Markov states given in Figure 3.8, where each plausible demand pattern is represented by a path through the graph, information about the most plausible paths can be gathered by starting at a state and selecting the next state randomly. The probabilities of selecting the next state are calculated using Equation (3.41). Therefore, selecting the next state is like playing roulette, where the sizes of the pockets represent the probabilities of jumping to the respective next states. This method simulates different paths based on their probability, so the same path may be simulated repeatedly while other paths are not evaluated at all. The estimated probability that the lead time demand equals x (p̂ltd(x)) is calculated as the share of the paths with a vertex weight sum of x. For example, if 1000 paths are simulated, and 500 result in a lead time demand of 0 while 300 result in a lead time demand of 1, the estimated probabilities would be p̂ltd(0) = 0.5 and p̂ltd(1) = 0.3. This approach will not result in the exact probabilities for the future demand, but the computational effort can be limited by choosing a small number of simulation runs, i.e. evaluated paths.
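A sketch of this roulette-wheel path sampling, again for an assumed toy transition matrix; the row renormalization compensates for the truncation of the state set at G:

```python
import numpy as np

def ltd_pmf_mc(M, y0, h, n_paths=5000, seed=1):
    """Monte Carlo estimate of the lead-time-demand PMF: repeatedly sample a
    path of h consecutive states (roulette-wheel selection on the rows of M)
    and record the share of paths whose vertex weights sum to each value x."""
    rng = np.random.default_rng(seed)
    states = np.arange(M.shape[0])
    counts = {}
    for _ in range(n_paths):
        y, total = y0, 0
        for _ in range(h):
            p = M[y] / M[y].sum()        # renormalize a truncated row
            y = rng.choice(states, p=p)  # roulette-wheel state selection
            total += int(y)
        counts[total] = counts.get(total, 0) + 1
    return {x: c / n_paths for x, c in sorted(counts.items())}

M = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])
est = ltd_pmf_mc(M, y0=1, h=3)
```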

3.3.5 Point Forecasts

Besides the estimates of the future probability mass functions and their lead time aggregation, point forecasts may also prove useful. This section provides two different ways to calculate those point forecasts. The first approach is to predict the future expectation. The interpretation of the results is very close to that of the point forecasts calculated using the Croston-type methods. For a given PMF of the future demands the calculation of ŷt+h is straightforward:


\[
\hat{y}_{t+h} = \sum_{i=0}^{\infty} i \cdot \hat{P}(Y_{t+h} = i), \tag{3.46}
\]

where P̂(Yt+h = i) can be calculated using (3.41). Due to the Poisson innovations of the INAR process this sum theoretically has infinitely many summands, but as mentioned above most of the summands will be relatively small. Thus, one can replace the upper limit of the sum with G in order to simplify the calculations (see Section 3.3.3). The drawback of this approach is that it produces forecast values which do not satisfy the constraints of the underlying model, i.e. the values of ŷt+h calculated using Equation (3.46) are not integer-valued. In order to avoid this issue, the literature provides techniques to produce coherent forecasts (e.g., see Freeland and McCabe (2004) and Jung and Tremayne (2006)). Therefore, the second approach described in this section is a coherent approach based on the median.

The specification of ŷt+h via the median also relies on the PMF of Yt+h:
 

\[
\hat{y}_{t+h} = \inf\left\{ y \,\middle|\, \sum_{i=0}^{y} \hat{P}(Y_{t+h} = i) \ge 0.5 \right\}. \tag{3.47}
\]

The future demand estimate ŷt+h is defined as the infimum, i.e. the smallest value y for which the cumulative probability of Yt+h reaches 0.5. This specification always leads to integer-valued forecasts.
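Both point forecasts can be sketched directly from an assumed forecast PMF (the names and the toy PMF below are illustrative):

```python
def mean_forecast(pmf):
    """Expectation-based point forecast, Eq. (3.46), with the sum
    truncated at the largest state G = len(pmf) - 1."""
    return sum(i * p for i, p in enumerate(pmf))

def median_forecast(pmf):
    """Coherent point forecast, Eq. (3.47): the smallest integer y whose
    cumulative probability reaches 0.5."""
    cum = 0.0
    for y, p in enumerate(pmf):
        cum += p
        if cum >= 0.5:
            return y
    return len(pmf) - 1

pmf = [0.30, 0.25, 0.20, 0.15, 0.10]   # an assumed forecast PMF of Y_{t+h}
mean_forecast(pmf)     # 1.5 (non-integer, hence not coherent)
median_forecast(pmf)   # 1  (integer-valued, hence coherent)
```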

Nevertheless, all forecast methods presented in Section 3.2 produce non-coherent forecasts. Thus, for reasons of comparability, Equation (3.46) will be used to calculate point forecasts of Yt+h.

3.4 Forecasting Performance Measures

As mentioned above, the aim of this study is to develop a consistent forecast-based inventory model which integrates the forecast model into the inventory optimization without inconsistencies in the underlying assumptions of those two parts. In general, the literature on inventory optimization and forecasting is separate, and the performance of different forecast models is measured without regard to the resulting inventory performance. Instead, the forecast models are compared using forecasting performance measures, which are introduced below.

Hyndman and Koehler (2006) provide a very detailed overview of various


forecasting performance measures. They trade off the advantages and dis-
advantages of a wide range of forecasting performance measures and com-
pare them for different underlying time series. They consider continuous
space time series as well as low count time series as they appear with in-
termittent demand. Therefore, this section focuses on the properties of the
performance measures suitable for this kind of time series, namely scale-
dependent and scaled measures.

All scale-dependent measures are based on the forecasting error ε̃t :

ε̃t = yt − ŷt , (3.48)

where yt is the observation and ŷt is the forecast at time t. In case of an unbiased forecasting method, the expectation of this error is 0. Two steps remain to calculate a forecasting performance measure: first, positive and negative deviations between yt and ŷt should not cancel each other out, and second, the different errors along the time series should be aggregated in order to improve interpretability. The absolute values |ε̃t| or the squares ε̃t² are used to avoid balancing of the positive and negative deviations between the forecast and the observation, and the resulting errors are aggregated using the mean or the median. The combination of those approaches leads to four different scale-dependent forecast performance measures:

Mean Absolute Error (MAE) = mean(|ε̃t |) , (3.49)


Mean Squared Error (MSE) = mean(ε̃2t ) , (3.50)
Median Absolute Error (MdAE) = median(|ε̃t |) , (3.51)
Median Squared Error (MdSE) = median(ε̃2t ) . (3.52)

The MAE assumes a linear loss function and thus weights every deviation equally. In contrast, due to the squaring of ε̃t, the MSE weights larger deviations between the forecast and the observation more heavily than smaller ones. This also implies that the MSE is not on the same scale as the underlying time series; therefore, the root of the MSE, the RMSE, is reported in most cases. Using the median instead of the mean to aggregate the error terms leads to an outlier-robust measure.

Scale-dependent performance measures are suited for low count time series,
and they allow the comparison of different forecast methods for the same
time series. However, due to their scale dependency a direct comparison of
the forecast performance of different time series is impossible.

To avoid this problem Hyndman and Koehler (2006) propose using scaled
performance measures. They differ from scale-dependent performance mea-
sures by weighting the error term with the MAE of the in-sample naive
forecast as follows:

\[
q_t = \frac{\tilde{\varepsilon}_t}{\frac{1}{T-1} \sum_{t=2}^{T} |y_t - y_{t-1}|}, \tag{3.53}
\]

where qt is the scaled error. The future estimates of the naive method equal the current value (ŷt = yt−1). Thus, the MAE of this method reduces to the mean of the absolute first differences of the time series, as shown in the denominator of (3.53). Analogously, these scaled errors can be aggregated in four different ways:

Mean Absolute Scaled Error (MASE) = mean(|qt |) , (3.54)


Mean Squared Scaled Error (MSSE) = mean(qt2 ) , (3.55)
Median Absolute Scaled Error (MdASE) = median(|qt |) , (3.56)
Median Squared Scaled Error (MdSSE) = median(qt2 ) . (3.57)

These four measures allow different forecasts of different time series to be compared and have an intuitive interpretation. A MASE < 1, for example, indicates a forecast method which gives better forecasts than the one-step naive method. If the MASE is larger than 1, the forecast is inferior. Hyndman and Koehler (2006) weigh the advantages and aptitudes of the different performance measures and recommend using the MASE.
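A sketch of the MASE calculation on a toy intermittent series; by construction, the naive forecast itself scores exactly 1 (the function name and the series are assumptions of this illustration):

```python
import numpy as np

def mase(y, y_hat):
    """Mean Absolute Scaled Error, Eqs. (3.53)-(3.54).
    y: observations y_1..y_T; y_hat: forecasts for y_2..y_T.
    Errors are scaled by the in-sample MAE of the one-step naive method."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    scale = np.mean(np.abs(np.diff(y)))   # (1/(T-1)) * sum_{t=2}^T |y_t - y_{t-1}|
    return float(np.mean(np.abs(y[1:] - y_hat)) / scale)

y = [0, 2, 0, 0, 1, 0, 3, 0]   # a toy intermittent demand series
mase(y, y[:-1])                 # naive forecast: exactly 1.0
mase(y, [0.75] * 7)             # flat mean-level forecast: below 1 here
```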

This chapter provides methods which give information about future de-
mand. This information can be used to determine the reorder point s. The
Croston-type models described in Section 3.2 still need additional assump-
tions about the distribution of the demand during the lead time. In con-
trast, the proposed INARMA processes enable the supply chain manager
to derive the reorder point directly without any additional assumptions.
Therefore, using the PMF forecasts of INARMA processes in order to find
the optimal inventory policy is a consistent approach, and it does not have
any theoretical breaks.
4 Demand Classification

The methods presented in Chapters 2 and 3 have been selected because they regard the properties of low count time series as they arise with intermittent demand. In a practical setup with thousands of SKUs, only a subset is classified as intermittent. This chapter introduces methods to classify SKUs in order to determine suitable methods for SKUs with different properties. Section 4.1 describes the widely used single-criteria Pareto classification and gives a short overview of k-means clustering, which allows the boundaries of the classes to be estimated from the data instead of being fixed exogenously. Section 4.2 introduces a two-criteria classification scheme developed in Syntetos, Boylan, and Croston (2005) and Boylan, Syntetos, and Karakostas (2008) to distinguish among smooth, erratic, lumpy, and intermittent demand. The chapter closes with the description of a multi-criteria item classification in Section 4.3.

4.1 ABC Classification

The ABC classification is a synonym for the single-criteria Pareto classification scheme, which is the most widely used SKU classification scheme (Babai, Ladhari, and Lajili, 2015, p. 1). It aims to provide easy criteria to determine differences in a single criterion of the SKUs, like revenue, price, or volatility of demand. While the literature provides many different three-letter combinations, it is common to use the letters 'ABC' to indicate classification by revenue, the letters 'HIL'8 for classification by price, and 'XYZ' for the volatility of demand. Thus, classification by revenue places

8 The letters HIL stand for high, intermediate and low.


the most lucrative SKUs in the A-cluster and the least lucrative in the C-cluster (Gudehus, 2005, p. 133). This is achieved by sorting and cumulating the shares of SKU revenue. Along this cumulative share the SKUs are classified for given thresholds, which usually are 0.80 and 0.95 (Gudehus, 2005, p. 134). This means that the most lucrative SKUs, which together have a revenue share of 80%, are A-SKUs; the next 15% build the B-cluster; and the last 5% are C-SKUs. This procedure can be used for all positively valued criteria. Thus, in order to define this procedure formally, let C be the descending ordered vector containing the criterion values of U-many SKUs, so that Ci ≥ Cj ∀i < j. The share of Ci in the total of the criterion is defined as:

\[
\tilde{C}_i = \frac{C_i}{\sum_{i=1}^{U} C_i}. \tag{4.1}
\]
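The sorting and cumulating procedure can be sketched as follows; the function name, the thresholds, and the toy revenues are assumptions of this illustration:

```python
import numpy as np

def abc_classify(criterion, thresholds=(0.80, 0.95)):
    """Single-criterion Pareto (ABC) classification: sort descending,
    cumulate the shares (Eq. 4.1) and cut at the given thresholds."""
    c = np.asarray(criterion, float)
    order = np.argsort(c)[::-1]                  # descending order
    cum_share = np.cumsum(c[order]) / c.sum()    # cumulative share of the total
    labels = np.empty(len(c), dtype="<U1")
    labels[order] = np.where(cum_share <= thresholds[0], "A",
                     np.where(cum_share <= thresholds[1], "B", "C"))
    return labels

abc_classify([100, 60, 20, 10, 5, 3, 2])   # -> A A B B C C C
```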

Figure 4.1 shows the cumulative sum of the revenue of the data described in
Section 5.1. This plot is called a Pareto chart, and it tracks the cumulative
revenue against the share of the number of SKUs. All three clusters contain
nearly the same share of SKUs. Therefore, about 33% of the SKUs earn
80% of the revenue, the next third earns about 15%, and the last third of
the SKUs generates 5% of the revenue.

As mentioned above, instead of those thresholds being specified exoge-


nously, they can be estimated from the data. This can be done by using
k-means cluster analysis to group the SKUs. This method minimizes the
squared distance between the SKUs within a cluster and the cluster centers.
Starting with a given number of random cluster centers, e.g. 0.40, 0.875,
and 0.975, the algorithm iteratively repeats the following two steps until
the solution converges (Hastie, Tibshirani, and Friedman, 2009, p. 460f):

• Assign every SKU to the cluster with the nearest center.

• Update all cluster centers based on the new assignment.
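The two steps above can be sketched for a single criterion as follows; the data points and the starting centers are illustrative only:

```python
import numpy as np

def kmeans_1d(x, centers, n_iter=100):
    """Plain k-means loop on a single criterion: assign every SKU to the
    nearest center, then update the centers, until the solution converges."""
    x = np.asarray(x, float)
    centers = np.asarray(centers, float)
    for _ in range(n_iter):
        # step 1: assign every SKU to the cluster with the nearest center
        assign = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        # step 2: update all cluster centers based on the new assignment
        new = np.array([x[assign == k].mean() if np.any(assign == k)
                        else centers[k] for k in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    return assign, centers

shares = np.array([0.10, 0.15, 0.20, 0.50, 0.55, 0.90, 0.95])
assign, centers = kmeans_1d(shares, centers=[0.0, 0.5, 1.0])
```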



[Figure: Pareto chart, cumulative share of revenue (0 to 1) against share of SKUs, with the A, B and C regions marked]

Figure 4.1: Pareto chart of the revenue for a German wholesaler

After convergence, the last assignment is the solution of the ABC analysis.
The thresholds can be obtained as the highest values of the criteria of the
A and B cluster.

Single-criteria Pareto classification is very versatile and is used to classify SKUs based on many different criteria. Teunter, Babai, and Syntetos (2009), for example, link ABC classification to inventory policies by using inventory costs to classify SKUs. But if inventory decisions are based on several single-criteria classifications, the resulting set of guidelines becomes complex. Consider an inventory system which is based on the ABC, XYZ and HIL classifications mentioned above. Figure 4.2 shows the resulting set of clusters. By using the three different classifications, 3³ = 27 different clusters occur. A guideline for the highlighted CXL-cluster might contain a certain forecasting method, inventory policy, and service degree. In order to reduce this complexity, two different classification schemes which regard more than one criterion are described in the remainder of this chapter.

[Figure: cube of clusters spanned by the ABC, XYZ and HIL classifications]

Figure 4.2: SKU clusters based on three single-criteria classifications

4.2 Forecast-Based Classification

As mentioned above, all forecast methods are only suitable for specific types of demand patterns, i.e. if the wrong method is chosen, the forecast quality will be poor. This fact is utilized by the classification scheme described in this section. If a forecast method is suitable for a specific type of demand pattern, it will produce small forecasting errors, whereas a non-suitable method will produce high error measures. Therefore, if the forecasting errors of a certain SKU are poor in case of Exponential Smoothing and better when using the Croston method, this SKU might have an intermittent demand pattern. Syntetos, Boylan, and Croston (2005) and Boylan, Syntetos, and Karakostas (2008) propose a classification scheme based on this idea. They distinguish between four different demand patterns, namely intermittent, lumpy, erratic, and smooth, and use two dimensions as criteria to separate them: the variation of the positive demands and the probability of positive demands. This probability of a positive, i.e. non-zero, demand is denoted as πY+ and is defined as:

\[
\pi_Y^{+} = P(Y_t > 0). \tag{4.2}
\]



Because a forecast is usually more difficult for time series with a higher variation, the variation of positive demands is regularly used as a measure for the predictability of the demand pattern. Therefore, it is the criterion in most XYZ analysis setups.

As the second dimension to classify SKUs, Syntetos, Boylan, and Croston


(2005) suggest the squared coefficient of variation. It is defined as:
\[
CV^2 = \left(\frac{\sigma_Y^{+}}{\mu_Y^{+}}\right)^2. \tag{4.3}
\]

The coefficient of variation (CV ) itself is regularly used in classification and


has also been proposed as a single parameter of the gamma distribution
(Snyder, 1984). The coefficient of variation can be interpreted as the rela-
tive variation of demand around the mean. The squared version originates
in Williams (1984) as a result of a variance partition of demand, but it
has a technical advantage over the plain coefficient of variation. Due to
the non-linear transformation, SKUs with a CV less than 1 will result in
a smaller value, and SKUs with a CV greater than 1 will lead to a higher
value. This stretches the distances between the values and simplifies the
classification. Figure 4.3 shows the classification scheme.

[Figure: four quadrants over πY+ (horizontal axis) and CV² (vertical axis): intermittent (bottom left), smooth (bottom right), lumpy (top left), erratic (top right)]

Figure 4.3: Classification scheme

It can be seen that a SKU with a high probability of a positive demand is classified as smooth or erratic, whereas a low value of πY+ leads to a classification as intermittent or lumpy. Additionally, a SKU is classified as intermittent or smooth if the value of CV² is below a certain threshold. The thresholds of πY+ and CV² are estimated by minimizing the overall forecasting error when a suitable forecasting method is used in each group. Boylan, Syntetos, and Karakostas (2008) recommend using the Croston method for smooth demand patterns and the unbiased derivative proposed in Syntetos, Boylan, and Croston (2005) for the other three groups. They suggest πY+ = 0.76 and CV² = 0.49 as thresholds.
groups. They suggest, πY+ = 0.76 and CV 2 = 0.49 as thresholds.

Apart from the need to define the thresholds, this approach is intuitive and provides a comprehensive overview of the structure of the SKU set. Thus, this classification scheme is used in the remainder of this study to state the different properties of the SKUs and to determine in which region they are found. Figure 4.4 links the forecast-based classification to the ABC and HIL analyses described in Section 4.1; both panels show the SKU classification scheme of a German wholesaler.9 In most cases the SKUs will be irregularly distributed

9 The used dataset will be described in Section 5.1.

[Figure: two hexagonal binning plots of CV² against πY+, colored by the ABC clusters (left) and the HIL clusters (right)]

Figure 4.4: Distribution of the ABC and HIL clusters

in this scheme. Thus, a hexagonal binning plot is used instead of a scatter plot. One hexagon may contain more than one SKU, and the coloring is determined by the modal ABC or HIL cluster of the SKUs in this hexagon. In both cases the 0.80/0.15/0.05 rule was used for the Pareto classification. The two lines indicate the suggested thresholds of πY+ = 0.76 and CV² = 0.49. The left plot shows the distribution of the ABC clusters. It can be seen that the majority of A-SKUs are in the region of smooth and erratic demand, while the intermittent demand SKUs are almost exclusively in the C cluster. Thus, smooth and erratic demand SKUs generate the majority of the company's revenue. The right plot indicates that high-value SKUs are mainly in the section with a share of positive demands less than 0.75 and therefore belong to the group of lumpy and intermittent demand. Linking the Pareto classification and the forecast-based classification suggests that these SKUs are important due to the high fixed working capital caused by avoidable stocks rather than the revenue generated by them.

4.3 Multi-Criteria Inventory Classification

As described in Section 4.1, using inventory guidelines based on several different criteria leads to complex structures. This section deals with a multi-criteria inventory classification (MCIC) approach which allows for different criteria while avoiding this complexity. The approach was developed in Ng (2007) and is used to generate an inventory risk indicator. Compared with other MCIC approaches, it has two advantages: it avoids the problem of selecting weights for the different criteria by choosing them through optimization, and it can be formulated as a linear program, a frequently used model class in logistics. Additionally, Babai, Ladhari, and Lajili (2015) show the superiority of this approach over three other MCIC approaches based on inventory performance in an empirical study.

Consider a MCIC setup with J-many criteria of U-many SKUs and let cij be the j-th criterion of the i-th SKU. The first step is to transform the different criteria onto a 0-1 scale by using Equations (4.4) and (4.5). The structure of both equations is the same, but (4.5) switches the direction of the relationship between the input criterion and the final index. Thus, Formula (4.4) is used in case a higher value of the criterion should lead to a higher value of the resulting index, whereas if Equation (4.5) is used, a higher value of the criterion leads to a lower value of the resulting index.

\[
\tilde{c}_{ij} = \frac{c_{ij} - \min(c_{\bullet j})}{\max(c_{\bullet j}) - \min(c_{\bullet j})}, \tag{4.4}
\]
\[
\tilde{c}_{ij} = 1 - \frac{c_{ij} - \min(c_{\bullet j})}{\max(c_{\bullet j}) - \min(c_{\bullet j})}, \tag{4.5}
\]

where the •-notation in c•j denotes the vector containing the j-th criterion of all SKUs; min() refers to the smallest and max() to the largest value of this vector. Therefore, by using Equations (4.4) and (4.5), the lowest value of c̃•j is 0 and the highest is 1.

In the next step the different values of c̃i• are aggregated. As mentioned above, the advantage of this approach is that the aggregation weights do not have to be selected in advance. Only the ranking of the criteria must be specified, i.e. c̃•1 is the most important criterion, c̃•2 the second most important, and so on. The weight of a certain criterion is calculated based on its respective rank. The intuition behind this step is to build the index Ci from all measures c̃i• regarding their rank, i.e. if the value of the most important criterion c̃i1 is higher than all other c̃ij, it should have the highest weight; if c̃i2 has the highest value, Ci should be a weighted sum of c̃i1 and c̃i2. Ng (2007) models this idea as the linear program defined in Formulas (4.6) to (4.9).


\[
\text{maximize} \quad C_i = \sum_{j=1}^{J} w_{ij} \cdot \tilde{c}_{ij} \tag{4.6}
\]

subject to:

\[
\sum_{j=1}^{J} w_{ij} = 1, \tag{4.7}
\]
\[
w_{ij} - w_{i(j+1)} \ge 0, \quad \forall j \in \{1,2,\ldots,(J-1)\}, \tag{4.8}
\]
\[
w_{ij} \ge 0, \quad \forall j \in \{1,2,\ldots,J\}. \tag{4.9}
\]

The objective (4.6) is the maximization of the weighted sum of the criteria of a given SKU by choosing the weights wij subject to three constraints. Equation (4.7) restricts the sum of all weights to 1. Constraint (4.9) is a non-negativity constraint; together with Constraint (4.7) it requires Ci to be a weighted arithmetic mean of the c̃ij. The rank of the different criteria is regarded in Constraint (4.8), which ensures that the weight of the j-th criterion is always greater than or equal to that of the (j + 1)-th criterion. Without this constraint the solution would be trivial, as the maximal Ci is always yielded by weighting the highest c̃ij with 1. Instead, Constraint (4.8) leads to a weighting scheme that equally weights all criteria from the most important criterion down to the criterion with the highest value. Table 4.1 lists the different weighting schemes in case of J = 3.

max(c̃i•)    wi1     wi2     wi3
c̃i1         1.00    0.00    0.00
c̃i2         0.50    0.50    0.00
c̃i3         0.33    0.33    0.33

Table 4.1: Different weighting schemes of the MCIC approach

There are three possible weightings schemes if J = 3. Either ci1 , ci2 or ci3
could have the highest value. Thus, if ci1 is the maximum of ci• , wi1 will
be 1, and all other weights will be 0. If ci2 is the maximum of ci• , wi1 and
wi2 will be 0.5, and wi3 will be 0. If ci3 is the maximum of ci• , all three
weights will be 0.3̄. Due to the formulation as a linear program, Ci can
be calculated via a standard software solver like the one implemented in
Microsoft Excel.
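The linear program does not actually require a solver: the feasible weight vectors (nonincreasing, nonnegative, summing to 1) are convex combinations of the equal-weight schemes listed in Table 4.1, so the linear objective is maximized at one of them and the optimal Ci is simply the largest partial mean of the ranked criteria. A minimal sketch of this shortcut, with hypothetical criterion values:

```python
def mcic_index(criteria):
    """Inventory risk index C_i for one SKU.

    `criteria` holds the (normalized) criterion values c_i1, ..., c_iJ,
    ordered from most to least important.  The optimum of the linear
    program (4.6)-(4.9) equals the largest partial mean, i.e. the best
    choice among the equal-weight schemes of Table 4.1.
    """
    best = 0.0
    running_sum = 0.0
    for j, c in enumerate(criteria, start=1):
        running_sum += c
        best = max(best, running_sum / j)
    return best

# Hypothetical criterion values reproducing the Table 4.1 schemes:
mcic_index([0.9, 0.2, 0.1])  # c_i1 largest -> C_i = c_i1 = 0.9
mcic_index([0.2, 0.8, 0.1])  # c_i2 largest -> (c_i1 + c_i2) / 2 = 0.5
```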

In the third step of this MCIC approach, the Pareto classification scheme
described in Section 4.1 is used to group the SKUs. Thus, the SKUs are
grouped into three clusters according to their inventory risk index Ci . The
M cluster contains the SKUs with the highest risk whereas the N cluster
contains the SKUs with a lower risk. The SKUs with the relatively lowest
risk are grouped into the O cluster. The three letters M, N and O are
chosen arbitrarily, but may relate to major, noticeable, and ordinary risk.
Part II

Empirical Analysis
5 Simulation Design

This chapter introduces the empirical analysis. First, Section 5.1 describes
the used dataset and presents summaries of the variables. After this Sec-
tion 5.2 deals with the application of the forecast-based classification and
the MCIC approach. Section 5.3 describes the procedure of the inven-
tory simulation, and the details of the implementation are provided in
Section 5.4.

5.1 Data Description and Preparation

The dataset used in the empirical simulation study contains information
from 29 807 SKUs of a German wholesaler. The tracked time period spans
90 weeks from the 26th calendar week in 2002 to the 11th calendar week
in 2004. In addition to the number of sales in each week, this dataset
contains two further variables: the price of the SKUs during each week and
a dummy variable tracking whether a marketing campaign took place that
week. The type of marketing campaign is unknown.

The first step of this analysis is the data preparation whereby implausible
values and inappropriate demand series are removed. Some demand series
contain values of −1, but there is no information about which event may
lead to negative demands. Therefore, those demands are set to 0. Addition-
ally, the described methods are unsuitable for seasonal goods like winter
tires or gingerbread. Therefore, SKUs which have a very long period of
zero demands (30 weeks or more) are removed from the dataset. Finally,
18 288 SKUs remain after this step.
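A sketch of this preparation step, assuming the demand history is held as one list of weekly values per SKU. Reading "a very long period of zero demands" as a run of 30 or more consecutive zero weeks is an interpretation; the thesis does not spell out the exact rule:

```python
def prepare(demand_series, max_zero_run=30):
    """Clean the weekly demand data (a dict: SKU id -> list of weekly demands).

    Implausible demands of -1 are set to 0, and SKUs with a run of
    `max_zero_run` or more consecutive zero-demand weeks are dropped,
    as these are likely seasonal goods (winter tires, gingerbread).
    """
    def longest_zero_run(series):
        run = best = 0
        for y in series:
            run = run + 1 if y == 0 else 0
            best = max(best, run)
        return best

    # set negative demands to zero
    cleaned = {sku: [max(y, 0) for y in series]
               for sku, series in demand_series.items()}
    # drop suspected seasonal SKUs
    return {sku: series for sku, series in cleaned.items()
            if longest_zero_run(series) < max_zero_run}
```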

© Springer Fachmedien Wiesbaden 2016


T. Engelmeyer, Managing Intermittent Demand,
DOI 10.1007/978-3-658-14062-5_5

In order to give an overview of the variables, Table 5.1 lists aggregated
cross-sectional statistics whereas Figure 5.1 shows the revenue and the share
of SKUs with a marketing campaign over time.

    Variable         Min.   1st Qu.   Median     Mean   3rd Qu.      Max.
    price            0.14      1.85     3.82     7.16      7.54    679.60
    demand           0.06      1.70     5.74    16.89     16.25   2658.00
    advertisement    0.00      0.00     0.00     0.08      0.13      0.78

Table 5.1: Summary of variables

The average SKU price ranges between € 0.14 and € 679.60. The median
is lower than the mean. Therefore, the distribution of the average prices is
right-skewed. 75% of the SKUs are sold for less than € 7.54 on average. The
distribution of the average demand per week is also right-skewed and ranges
between 0.06 and 2 658.00 pieces per week. 75% of the SKUs are sold less
than 16.25 times per period. The interpretation of the marketing campaigns
is slightly different because this variable does not have a quantitative scale.
The variable advertisement tracks the share of weeks in which a marketing
campaign has taken place for the SKUs. Thus, there is no marketing
campaign for at least 50% of the SKUs in the considered time period. The
average share of periods with a marketing campaign is 8%, and there is at
least one SKU with a share of 78%.

Figure 5.1 shows the aggregation of the variables for the different SKUs.
Thus, the upper part of the graphic illustrates the revenue generated by the
18 288 SKUs whereas the lower part shows the share of SKUs for which a
marketing campaign has taken place within the given week. The blue lines
indicate a locally weighted smoothing. Due to Christmas sales, the revenue
has two peaks each at the end of a year. Additionally, the revenue of the
second half of 2002 is higher compared to the revenue in 2003. It ranges
between about € 600 000 and € 1 694 000, and after the 27th week of 2003
the smoothed revenue remains constant for the rest of the considered time
period. The share of SKUs for which a marketing campaign has taken
place ranges between 4.5 and 12.7%. It can be seen that the smoothed
value floats slightly over time whereas the actual value varies significantly
from period to period.

Figure 5.1: Revenue and advertisement over time (weekly revenue in € million and weekly share of SKUs with a marketing campaign, with locally weighted smoothing)

5.2 Classification

As a first step, the 18 288 SKUs are grouped according to the forecast-
based classification method described in Section 4.2. The limit values of
the measures CV 2 and πY+ are optimized using the MASE. For every SKU
a one-step-ahead forecast is calculated using Exponential Smoothing (ES),
and the method developed in Syntetos and Boylan (2005) (SYN).10 These
forecasts are rated using the MASE. Thus, every SKU has two different
MASE values: one for the method suitable for intermittent demand (SYN)
and one for the method that is not (ES).
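The MASE (Hyndman and Koehler, 2006) scales the mean absolute forecast error by the in-sample mean absolute error of the naive one-step forecast, so a value of 1 matches the naive benchmark and lower values are better. A minimal sketch:

```python
def mase(actual, forecast, training):
    """Mean Absolute Scaled Error of a forecast for one SKU.

    The mean absolute error on the evaluated forecasts is divided by the
    mean absolute error of the naive (last observation) forecast over
    the training sample.
    """
    naive_mae = sum(abs(a - b) for a, b in zip(training[1:], training[:-1])) \
        / (len(training) - 1)
    mae = sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
    return mae / naive_mae
```

With a training series [0, 2, 0, 2] the naive in-sample error is 2, so forecast errors of 1 yield a MASE of 0.5.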

The limit values CV 2∗ and πY+∗ are optimized via a grid search in two
consecutive steps. First, several different average MASE values across all

10 The smoothing parameters of both methods are also optimized using the MASE.

SKUs are calculated for different threshold values of πY+∗ . If the share of
positive demands (πY+ ) of a SKU is below the threshold πY+∗ , the MASE of
SYN is regarded. Otherwise, if πY+ ≥ πY+∗ the MASE of ES is used. The
minimal average MASE across all SKUs results from a threshold πY+∗ =
0.83. After this step the threshold of the squared coefficient of variation
CV 2∗ is optimized in the same way. The MASE of SYN is used if CV 2 is
below CV 2∗ and otherwise, the MASE of ES is used. The minimal average
MASE across all SKUs results at CV 2∗ = 0.5. These optimal values lead
to the final classification. If a SKU has a squared coefficient of variation
below 0.5 and a share of positive demands below 0.83, it is classified as
intermittent demand. These values match well with those proposed in
Syntetos and Boylan (2005). The group of intermittent demand contains
4 310 SKUs and comprises approximately 14.5% of all 29 807 SKUs.
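One stage of this grid search can be sketched as follows; the candidate grid and the per-SKU values are hypothetical, and the CV² threshold is found analogously in the second stage:

```python
def optimal_threshold(shares, mase_syn, mase_es, grid):
    """Grid search for the limit value of the share of positive demands.

    For each candidate threshold, the average MASE across all SKUs is
    computed using the MASE of SYN for SKUs whose share of positive
    demands lies below the threshold and the MASE of ES otherwise; the
    candidate with the minimal average MASE is returned.
    """
    def avg_mase(limit):
        picked = [syn if share < limit else es
                  for share, syn, es in zip(shares, mase_syn, mase_es)]
        return sum(picked) / len(picked)
    return min(grid, key=avg_mase)

# Hypothetical values: SKU 1 is intermittent (SYN better), SKU 2 is not.
optimal_threshold([0.2, 0.9], [0.5, 1.2], [1.0, 0.6], [0.1, 0.5, 1.0])
```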

In addition to forecast-based classification, the SKUs are also classified
using the MCIC approach described in Section 4.3. The following SKU
criteria are included in ranked order: price, probability of a positive demand,
and the coefficient of variation. The thresholds between the clusters are
determined by the k-means approach described in Section 4.1.

Figure 5.2 illustrates a hexagon binning plot of the distribution of the
three inventory risk clusters derived from the forecast-based classification
described above. It can be seen that within the group of smooth and erratic
demand series, nearly all SKUs show a low inventory risk. The group of
intermittent demand (bottom-left corner) contains all three clusters and is
horizontally separated. Thus, within the cluster of intermittent demand, a
SKU is classified as risky (M cluster) if the probability of a positive demand
is below 0.30 whereas if πY+ ≥ 0.70, it is classified as low-risk (O cluster). The
distribution of the inventory risk in the group of lumpy demand series is
more complex. There is also a horizontal separation between the N and O
cluster, but the M cluster crosses the other clusters diagonally.
Figure 5.2: Distribution of the inventory risk index (hexagon binning over πY+ and CV 2; clusters M, N, and O)

5.3 Simulation Procedure

This section describes the simulation procedure. In order to attain
meaningful results, the proposed empirical study is divided into several
independent simulations based on different parameters. First, different service
constraints may be used to optimize inventory decisions. Therefore, both
the α- and the β-service level are used as targets, and to gather information
about the behavior of the methods in case of different values of those
targets, 5 different target values are simulated for each of them. The
lead time L is the second parameter which varies among the different
simulations. This parameter is set to 1, 3 and 5 weeks and in addition,
the fixed order costs are also set to € 5 and € 25 to simulate low and high
order costs. Due to the fact that no interactions between the SKUs are
considered, every SKU can be simulated separately, which leads to a scal-
able simulation design. By multiplying these different parameters settings
(2 service targets with 5 values each, 3 lead times, 2 order costs, 4 310

SKUs) there are a total of 258 600 different and independent simulations.
The remainder of this section describes the processes within each of those
simulations.
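The size of this simulation grid can be checked directly. The concrete target values below are an assumption consistent with the five targets per service measure and the values 0.91 and 0.99 mentioned in Chapter 6:

```python
from itertools import product

# Assumed target values: the text names five targets per service measure
# and mentions 0.91 as the lowest and 0.99 as the highest.
service_targets = [("alpha", s) for s in (0.91, 0.93, 0.95, 0.97, 0.99)] \
                + [("beta", s) for s in (0.91, 0.93, 0.95, 0.97, 0.99)]
lead_times = (1, 3, 5)   # weeks
order_costs = (5, 25)    # EUR, low and high fixed order costs
n_skus = 4310

# every combination is one independent simulation
runs = list(product(service_targets, lead_times, order_costs, range(n_skus)))
print(len(runs))  # 10 * 3 * 2 * 4310 = 258 600
```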

Figure 5.3: Different samples of a rolling simulation (past/future split of each time series at t = 60, 61, 62, . . .)

In order to simulate a realistic application of the described methods, the
simulation is calculated along each of the time series. This reproduces
the behavior of newly implemented inventory systems in a company. At
the date of implementation there is a certain dataset available containing
information of the past. Then, as time goes by, this training sample (past)
grows and includes new information each week. Therefore, at first each
time series is split into two parts: the first 60 weeks include the training
sample, and the remaining 30 weeks comprise the test sample. Step by step
the course of time in a practical setup is simulated by increasing the size of
the training sample gradually. Figure 5.3 shows these sample sizes. They
are denoted as past and future during the different simulation steps. Then
each of those simulation steps is divided into three intermediate steps: the
forecast, the parameter optimization, and the inventory simulation.
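The expanding training window can be sketched as a simple generator over the 90-week series:

```python
def expanding_windows(series, train_start=60):
    """Yield (past, future) splits of a demand series.

    The first split trains on the first 60 weeks; the training sample
    then grows by one week per simulation step, mimicking a newly
    implemented inventory system that accumulates history over time.
    """
    for t in range(train_start, len(series)):
        yield series[:t], series[t:]
```

For a 90-week series this produces 30 simulation steps, the last of which has a single remaining test week.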

Forecast

Each simulation step starts with the calculation of the forecasts based
on six different methods described in Sections 3.2 and 3.3. These are

namely the methods of Croston (CRO), Syntetos and Boylan (SYN), Leven
and Segerstedt (LEV), Teunter, Syntetos, and Babai (TEU), Exponential
Smoothing (ES) and integer-valued autoregressive processes (INAR). In
each step the smoothing parameters of the first five models and the model
order of INAR are estimated based on the past data. Then, the forecasts
and variances are estimated for the future L periods. In case of INAR L-
many future PMFs are estimated and aggregated using the Markov chain
approach presented in Section 3.3.4. The median of the L-many future
PMFs of INAR is used as point forecasts, whereas the inventory parameter
optimization is based on the aggregated PMF. Figure 5.4 shows the struc-
ture of this procedure over time. In every period for all methods L-many
forecasts are estimated based on the past observations.
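The thesis aggregates the L future PMFs of INAR with the Markov chain approach of Section 3.3.4. As an illustration of the aggregation idea only, the following sketch combines per-period PMFs by discrete convolution under a simplifying independence assumption, which the Markov chain approach does not need:

```python
def aggregate_pmfs(pmfs):
    """Aggregate L per-period demand PMFs into one lead time demand PMF.

    Simplifying assumption: the per-period demands are treated as
    independent, so the PMFs combine by discrete convolution.  The
    thesis instead uses a Markov chain approach, which accounts for
    the serial dependence of the INAR process.
    """
    total = [1.0]  # a certain demand of 0 before any period is added
    for pmf in pmfs:
        new = [0.0] * (len(total) + len(pmf) - 1)
        for i, p in enumerate(total):
            for j, q in enumerate(pmf):
                new[i + j] += p * q
        total = new
    return total

# Two periods with P(demand = 0) = P(demand = 1) = 0.5 each:
aggregate_pmfs([[0.5, 0.5], [0.5, 0.5]])  # [0.25, 0.5, 0.25]
```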

Figure 5.4: Rolling forecast over time (in each step t = 60, 61, 62, . . . the L future periods are forecast from the past observations)

Parameter Optimization

Based on these forecasts, the inventory parameters are optimized using the
different stochastic inventory models described in Section 2.3.3. All order
sizes D are calculated using the EOQ, but the determination of the reorder
points s differs among the methods. The determination of the reorder
point in case of the Croston-type models and the Exponential Smoothing
(CRO, SYN, LEV, TEU, ES) is based on additional assumptions of a lead

time demand distribution. Thus, in this case the reorder point is calculated
based on the assumption of a normal or a gamma distributed demand
during the lead time. These resulting reorder points are rounded to the
next integer value in order to use them afterwards. In contrast, there is
no need of an additional distribution assumption in case of INAR, and the
resulting reorder points are, due to the definition, always integer valued.
To sum up this intermediate step, 11 different reorder points are calculated
in every simulation step whereas the order sizes are equal for all methods.
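Two of these building blocks can be sketched directly: the EOQ order size and, for the discrete INAR case, the reorder point as the smallest integer whose lead time demand CDF reaches the service target. Treating the α-service constraint as a quantile condition on the lead time demand is a simplified reading of the models in Section 2.3.3:

```python
import math

def eoq(demand_rate, order_cost, holding_cost):
    """Classic economic order quantity."""
    return math.sqrt(2 * order_cost * demand_rate / holding_cost)

def reorder_point(pmf, alpha):
    """Smallest integer s with lead time demand CDF >= alpha.

    Works directly on a discrete PMF (e.g. the aggregated INAR
    forecast), so no rounding is needed; under the normal or gamma
    assumption the continuous quantile is rounded instead.
    """
    cdf = 0.0
    for s, p in enumerate(pmf):
        cdf += p
        if cdf >= alpha:
            return s
    return len(pmf) - 1

eoq(100, 5, 0.1)                          # sqrt(2*5*100/0.1) = 100.0
reorder_point([0.5, 0.3, 0.1, 0.1], 0.9)  # CDF reaches 0.9 at s = 2
```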

Inventory Simulation

This intermediate step is straightforward. The demand series is used to
update the inventory position, and if it falls below the calculated reorder
point, an order is placed. It arrives after L periods. A separate inventory
position is simulated for all of the 11 different forecast/distribution combi-
nations. The resulting data is used to calculate the achieved service levels
and average inventory levels.
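A minimal sketch of this simulation loop for a single SKU under an (s, q) policy. The initial stock of q units, the lost-sales treatment of unmet demand, and measuring the α-service as the share of periods whose demand is fully met are assumptions this section does not spell out:

```python
def simulate(demand, s, q, lead_time):
    """Minimal (s, q) inventory simulation for one SKU.

    If the inventory position (on hand plus on order) falls below the
    reorder point s at the end of a period, an order of size q is placed
    and arrives after `lead_time` periods.  Returns the achieved
    alpha-service level as the share of fully served periods.
    """
    on_hand = q
    pipeline = []  # list of (arrival_period, amount)
    periods_served = 0
    for t, y in enumerate(demand):
        # receive orders arriving this period
        on_hand += sum(amount for due, amount in pipeline if due == t)
        pipeline = [(due, amount) for due, amount in pipeline if due > t]
        if y <= on_hand:
            periods_served += 1
        on_hand = max(on_hand - y, 0)  # unmet demand is lost
        position = on_hand + sum(amount for _, amount in pipeline)
        if position < s:
            pipeline.append((t + lead_time, q))
    return periods_served / len(demand)
```

For example, with a reorder point of 3, an order size of 10, and a lead time of 2, a demand series [8, 8, 8] yields one stockout period and an achieved α-service of 2/3.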

The exemplary results of the inventory simulation of one SKU over 30
periods are illustrated in Figure 5.5. It is based on INAR and L = 2, and
the target β-service level is 0.95. The demand within a week yt is shown
in gray whereas the solid yellow line shows the reorder level s, and the
dashed orange line shows the inventory level It . The vertical dashed lines
indicate whether an order is placed at the end of the period. The shown
section of the demand series starts with 5 periods without any demand. It
can be seen that s is reduced from 5 to 3 within these periods. Due to the
first demand in week 66, I66 falls below s, and an order is placed. This
order increases the inventory level 2 periods later. The demand series is
intermittent, and one can see that the reorder point increases as a result of
a high demand, e.g. after periods 71, 76, and 80. This is a typical pattern
of reorder point determination using INAR and leads to a high inventory
performance. After period 80 the demand is lower and more frequent.

Overall, the service level target of this exemplary inventory simulation is
met with an achieved service level of 0.954. This is because the total demand
of 66 pieces could be satisfied in every period except for the last one, which
had a demand of 3 pieces.

Figure 5.5: Inventory simulation of an SKU over 30 periods (demand yt, inventory level It, and reorder level s in pieces for t = 65, . . . , 85)

5.4 Implementation

As mentioned in the previous section, each simulation step consists of
several parts. In order to give an overview of the implementation, Figure 5.6
illustrates the developed algorithm. The inventory simulation is divided
into three modules: namely the forecast, the optimization, and the actual
inventory simulation. These modules are shown in the center of the figure,
each of which requires external data and parameters. The results of the
algorithm are calculated by the forecast and inventory simulation module.

The forecast module consists of six forecast methods. The calculations
of the four Croston-type methods (CRO, LEV, SYN, TEU) and ES differ
from the calculations needed for the INAR forecast, but all of the methods
depend on the same demand data. In case of the Croston-type models
and ES, the parameters are optimized using the MASE. It can be seen that

the optimization is an iterative process between the forecast, the parameter
optimization, and the evaluation using the MASE. In case of INAR the first
step is the model identification based on the ACF and PACF. After this, the
parameters are estimated using the CLS or MCMC approach, respectively.
INAR predicts the future PMF and the median whereas CRO, ES, LEV,
SYN, and TEU predict the future expectation value and variance. The
median and the expectation value are evaluated using the MASE and these
values are saved as results.

Additionally, the future expectation and variance as well as the PMF fore-
cast are used in the optimization module. It is divided into a set of six
different methods which result from all combinations of the two service levels
and the three distribution assumptions. The results of the five Croston-type
forecast models are used as inputs for the optimization based on the
model is the input for the INAR optimization. The set of other input pa-
rameters is equal for all six optimization modules. It consists of the order
costs, the holding costs, the service level, and the lead time. The outputs
of this module are reorder points and order-up-to levels.

The actual inventory simulation is the last module. It uses the different
reorder points, order-up-to levels, and the demand data to simulate the
inventory level and position in each period as shown in Figure 5.5. Ad-
ditionally, this module calculates the achieved α- and β-service levels and
taken together with the inventory level those achieved service levels are
saved as results.

The described algorithms are implemented using the statistical programming
language R (R Core Team, 2015) and C++, while the C++ code
was seamlessly integrated into the R environment using the Rcpp package.
This approach of using two different programming languages has several
advantages. It combines the convenience of the interpreted programming
language R with the efficiency of the compiled language C++. The R
code handles the data processing and implements the simulation structure,
whereas the C++ code implements the frequently called functions, e.g.
the forecast calculation, the reorder point determination, and the calculation
of the MASE. In total the codebase of the algorithms amounts to
about 7 000 lines of code.

Figure 5.6: Implementation of the inventory simulation algorithm (modules: forecast — CRO, ES, LEV, SYN, TEU with MASE-based parameter optimization and INAR with model identification and parameter estimation; optimization — α- and β-service under the gamma, normal, and INAR distributions; inventory simulation — yielding the achieved service and inventory levels)

As already mentioned, the inventory simulations of the different SKUs are
independent and therefore parallelizable. In order to achieve low computation
times, the calculations are distributed over several machines, whereas
the data of all SKUs are stored in a database on a single data node. Each
worker node may have one or more CPUs. This design increases the scal-
ability and robustness of the computation and leads to the advantage that
the worker nodes neither need to have the same hardware, nor need to be
available during the entire runtime of the simulation. The data node keeps
track of the overall process, the assignment of the SKUs to the worker
nodes, and the availability of the different worker nodes. Therefore, if a
new worker node becomes available, it registers itself at the data node
and receives data. Otherwise, if a worker node cancels its calculation, the

remaining SKUs are rescheduled. During the simulation each available
worker node receives data from the data node and computes the inventory
simulation. Afterwards, the results are sent back to the data node, and if
the simulation is still running, it receives data from the data node again.
Figure 5.7 shows the structure of this parallelization setup and the schedul-
ing of the SKUs (cylinders). It can be seen that the different worker nodes’
performance leads to an irregular assignment of the SKUs. This can eas-
ily be described as a waiting queue at the check-in counter of an airport.
There is a single waiting queue of air passengers (list of SKUs) and several
check-in counters (worker nodes). The personnel might differ in efficiency
and over time counters are opened and closed. Nevertheless, each passen-
ger is processed at the next free counter. There is no schedule in advance.
This procedure is scalable because more passengers can be processed in the
same time if more counters are opened.
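The pull-based scheduling described by the check-in analogy can be sketched on a single machine with a thread pool; the thesis distributes the work over several machines with a central data node, so this is only the single-machine analogue:

```python
from concurrent.futures import ThreadPoolExecutor

def run_all(skus, simulate_one, workers=4):
    """Dynamic scheduling of independent SKU simulations.

    Like the check-in counters in the analogy, each worker pulls the
    next SKU from a shared queue as soon as it is free, so faster
    workers simply process more SKUs; no schedule is fixed in advance.
    Results are returned in the order of the input SKUs.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(simulate_one, skus))

# Placeholder workload: squaring stands in for one SKU simulation.
results = run_all(range(10), lambda sku: sku * sku, workers=3)
```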

Figure 5.7: Structure of the parallelization setup (a single data node assigning SKUs to several worker nodes)

The advantage of the inventory simulation over this example lies in the costs.
The inventory simulation was calculated using computing instances of a
cloud computing service provider. The 4 310 SKUs were processed in about
3 hours by two worker nodes, each having 16 CPU cores, and approximately
18 GB of data were generated.
6 Results

This chapter presents and analyzes this study’s results. Section 6.1 focuses
on the forecast performance of the different methods and shows three dif-
ferent analyses. First, the distribution of the MASE is examined in order
to give an impression of the overall method performance. Additionally, the
second analysis shows the percentage better forecast performance. Last,
the third analysis presents the distribution of the MASE in relation to the
probability of a positive demand and the squared coefficient of variation.

The results of the inventory simulation are presented in Section 6.2. They
are separated according to the service targets and present the achieved
service levels as well as the resulting inventory levels. Both subsections
provide four different analyses. First, the difference between the service
target levels and the achieved service levels are shown, separated according
to the different methods. Then, in order to provide information about the
economic results, the relation between the achieved service levels and the
inventory level is shown. The third analysis provides an overview of the
achieved service levels in relation to the probability of a positive demand
and the squared coefficient of variation of a SKU. Finally, each subsection
closes with an analysis of the relation between the resulting inventory level
and inventory risk cluster. Due to the vast amount of data created in this
study, the provided analyses only cover a subset. A more complete overview
is given in the appendix.


6.1 Forecasts

The first analysis considers the general performance of the applied fore-
cast methods separated according to the different inventory risk clusters.
Figure 6.1 illustrates a combination of a box and a violin plot of the one-
step-ahead MASE of the six different forecast methods.11 A lower MASE
indicates a better forecast, and a MASE of 1 indicates a forecast performance
which equals the performance of the naive forecast.

Figure 6.1: One-step-ahead forecast performance separated according to risk clusters (combined box and violin plots of the MASE per method in the M, N, and O clusters)

The forecast

performance of the different methods in the M cluster is heterogeneous.


CRO has a relatively high range between the highest and the lowest MASE,
and the median MASE is just slightly below 1. Its third quartile is the
highest of all methods in all clusters. The overall performance of SYN
is better compared with CRO and SYN also outperforms LEV. The third
quartile of the MASE of SYN is slightly below 1. LEV has the highest
range and no clear peak. It shares the lowest values with ES and INAR,
but it also has the highest MASE. In contrast, TEU has the lowest range
and the second best median after INAR. The third quartile of TEU is be-
low the median of CRO, SYN, LEV, and ES. Additionally, the distribution
11 The results of a rolling five-step-ahead forecast can be found in the appendix.

of TEU is symmetric. The distribution of ES is the only left-skewed one,
and the third quartile of ES is slightly below 1. The INAR model has the
lowest MASE. As in case of ES, the third quartile of INAR is below the
median of all the other methods except for TEU, and it is even smaller than
the first quartile of CRO. Compared with the M cluster, the interquartile
ranges are much smaller in the N cluster. While CRO, SYN and LEV still
have the highest values, not a single third quartile is higher than 1. TEU
has the smallest range and is symmetric. The results in the O cluster are
very similar to the results of the M cluster. Therefore, there is no obvious
relationship between the inventory risk and the forecast performance. The

CRO SYN LEV TEU ES INAR

CRO 23 62 30 32 20
SYN 77 69 37 52 30
LEV 38 31 36 30 19
TEU 70 63 64 59 33
ES 68 48 70 41 28
INAR 80 70 81 67 72

Figure 6.2: One-step-ahead percentage better forecast performance of all SKUs

next analysis considers the percentage better forecast performance across


all SKUs.
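The percentage-better comparison of Figure 6.2 can be computed directly from the per-SKU MASE values; a minimal sketch with hypothetical method names and values:

```python
def percentage_better(mase_by_method):
    """Pairwise percentage-better comparison as in Figure 6.2.

    `mase_by_method` maps each method name to its list of MASE values
    over the same SKUs; entry (r, c) of the result is the share of SKUs
    (in %) for which method r has a strictly lower MASE than method c.
    """
    methods = list(mase_by_method)
    n_skus = len(next(iter(mase_by_method.values())))
    table = {}
    for r in methods:
        for c in methods:
            if r != c:
                wins = sum(a < b for a, b in
                           zip(mase_by_method[r], mase_by_method[c]))
                table[(r, c)] = 100.0 * wins / n_skus
    return table

# Two hypothetical methods, four SKUs:
table = percentage_better({"A": [0.5, 0.8, 0.2, 1.1],
                           "B": [1.0, 0.6, 0.9, 0.3]})
```

Note that row and column entries for the same pair sum to 100 only when no ties occur, which matches the off-diagonal pairs in Figure 6.2.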

Figure 6.2 shows the results of this analysis. The numbers and filling color
indicate the share of SKUs in which the row method has a lower MASE
than the column method. The forecasting performance of CRO is poor
compared with the other methods. LEV is the only method which is worse
in terms of forecast performance compared with CRO. The results of SYN
are better. For 77% of the SKUs SYN produces better forecasts than CRO
and in 52% better forecasts than ES. LEV is worse in 69% compared with
SYN. TEU produces the best forecasts in the group of Croston-type models.
In 70% TEU is better compared with CRO, in 63% compared with SYN

and in 64% compared with LEV. ES is the only method which has not
been developed for intermittent demand series in particular. Nevertheless,
ES produces forecasts which are superior compared to CRO (68%) and
LEV (70%). INAR shows the comparatively best forecast performance of
all methods. It excels CRO and LEV in about 80% of cases and SYN, TEU,
and ES in about 70% of cases.

Figure 6.3: Distribution of one-step-ahead MASE separated according to methods (hexagonal binning of the MASE over πY+ and CV 2, one panel per method)

Figure 6.3 shows a hexagonal binning plot of the distribution of the
one-step-ahead MASE separated according to the applied methods. Generally,
the previously mentioned results are again
noticeable. Thus, LEV has the lowest forecast performance compared with
the other methods while TEU, ES, and INAR produce far better forecasts.
The additional insight given in this figure is the distribution of the MASE in
case of a changing probability of a positive demand and a changing squared
coefficient of variation. The MASE distribution of CRO, SYN, and LEV
is far more heterogeneous compared to the distribution of TEU, ES and

INAR. The mentioned disparity is erratic, i.e. there is no obvious structure
in the MASE distribution for all the methods.

The results in this section show that the methods which regard the properties
of intermittent demand do not generally increase the forecast accuracy
compared with ES. LEV produces by far the worst forecasts whereas the
performance of INAR is superior compared with the other methods. SYN
is favorable compared with CRO, LEV, and ES. TEU produces the best
forecast in the group of Croston-type models. Additionally, there is no ev-
idence for a relationship between inventory risk and forecast performance.

6.2 Inventory Simulation

This section presents the results of the inventory simulation. It is separated
according to the service target. Thus, Section 6.2.1 presents the results of
the inventory simulation for given α-service constraint, and Section 6.2.2
presents the results for given β-service constraints. Both are structured in
the same way. The first part describes the difference between the service
target and the achieved average service whereas the second part deals with
the relationship between the achieved average service and the resulting
inventory level. Each section closes with an analysis of the relationship
between inventory level and inventory risk cluster.

Those three analyses are used because they all view the results from a dif-
ferent perspective. The main results of the inventory simulation are clearly
the achieved service levels, but exclusively using this perspective might lead
to incorrect conclusions. As previously stated, inventory management aims
to fulfill customer needs at minimal cost. Therefore, the second analysis
shows the relation between the achieved service levels and the inventory
level. This analysis also calculates the relative change in inventory level if
the service is increased. The third analysis connects the provided inventory
risk clustering with the resulting inventory levels on a SKU basis.
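The achieved service levels behind these analyses can be computed from the simulated demand and the demand actually served. The sketch below uses common textbook definitions (α as the share of fully served periods, β as the fill rate), which may differ in detail from the thesis's Section 2.3:

```python
def achieved_service(demand, satisfied):
    """Achieved alpha- and beta-service levels from simulation output.

    `satisfied` holds the demand actually served in each period.  The
    alpha-service is measured as the share of periods whose demand was
    fully met, the beta-service (fill rate) as the share of the total
    demand that was served.  Assumes a positive total demand.
    """
    alpha = sum(s >= y for y, s in zip(demand, satisfied)) / len(demand)
    beta = sum(satisfied) / sum(demand)
    return alpha, beta
```

With the totals from the Figure 5.5 example (66 pieces demanded, 3 of them unserved in the last period), the fill rate is 63/66 ≈ 0.954, matching the achieved service reported there.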

6.2.1 α-Service Level Target

Figure 6.4 illustrates the difference between the average achieved service
level and the target value. The columns separate the different lead times,
and the rows separate the two different order costs. The shapes mark the
assumed lead time demand distributions, and the methods are color-coded.
The black lines indicate a perfect match between the service target and the
average achieved service.

Figure 6.4: Difference between achieved and target service in case of an α-service constraint (achieved average service over the service target, per lead time L ∈ {1, 3, 5}, order costs K ∈ {5, 25}, method, and assumed lead time demand distribution)

In case of a short lead time L = 1 and low order

costs K = 5, the results are mainly close together. INAR has the highest
achieved service levels across all target values, and it stands out that TEU,
ES, and LEV in combination with a gamma distribution lead to the poorest
results. Even if the spread between them and the other methods decreases
at higher service targets, the spread does not disappear. For low service
targets (0.91) LEV/Gamma leads to a service of about 0.74, ES/Gamma
leads to a service of 0.77, and TEU/Gamma leads to an average service
of 0.85. All other methods lead to average service levels of approximately

0.95. In case of high service targets (0.99) LEV/Gamma, ES/Gamma,


and TEU/Gamma lead to service levels between 0.93 and 0.95 whereas the
other service levels are close to the target.

In case of L = 3 and K = 5, the main results remain the same as in the
L = 1 case. The overall performance decreases for all the methods and
all the service targets. Solely ES/Gamma leads to better results. INAR
has the highest service levels across all the target values, and in case of
lower service levels SYN/Normal and CRO/Gamma perfectly meet the
target. Increasing service targets increase the gap between the target and
the achieved service levels for all the methods except for INAR, which is
also the only method that meets the service constraint in case of targets
higher than 0.95.

In case of a long lead time (L = 5), the structural change between the
service levels of L = 1 and L = 3 continues. All the methods achieve a
lower service across all the target values and in this case INAR is the only
method which has a service level which meets the service constraints. All
the other methods for all the other service targets have a lower average
service level than the target value. The results of the methods based on
normal distribution spread far more widely than in case of shorter lead
times. TEU/Gamma and LEV/Gamma still have the lowest service.
Figure 6.5: Achieved service vs. mean inventory levels (α-service target; mean inventory in € 1 000 over the achieved average service, per lead time, order cost, method, and distribution)

For higher order costs the achieved service levels are far better and closer
together. Especially in case of L = 3 and L = 5 there are only slight differences.
Independent of the lead times and the service targets, INAR has an average
service of over 0.99 and therefore overshoots every target. In case of L = 1
and K = 25 the structure of the achieved service is very similar to the
L = 1, K = 5 case. Only TEU/Gamma, ES/Gamma, and LEV/Gamma
do not meet the service constraint. If L = 5, the distance between the
results of these three methods and the other methods declines, but for
high service targets INAR is the only method which meets the constraints.
TEU/Gamma results in the worst service.

The next analysis concerns the relationship between the achieved service
and the resulting mean inventory levels. Figure 6.5 is structured similarly
to Figure 6.4 and shows the results of this analysis, but in contrast to the
prior figure, high values are no longer desirable per se. In fact, a suitable
method leads to high service and low inventory levels. There is a trade-off
between these two values, but in the sense of Pareto efficiency some methods
are dominated by others because they achieve the same service levels with
higher inventories, or lower service levels with the same inventory level.
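The dominance criterion used in this comparison can be made explicit with a short sketch; the (service, inventory) pairs below are illustrative placeholders, not simulation results:

```python
# Pareto dominance on (service, inventory): a method is dominated if
# another achieves at least its service with at most its inventory,
# and is strictly better on one criterion.  Values are made up.
results = {
    "ES/Normal":  (0.97, 560_000),
    "CRO/Gamma":  (0.93, 430_000),
    "SYN/Normal": (0.96, 470_000),
    "INAR":       (0.99, 540_000),
}

def dominated(a, b):
    """True if point a = (service, inventory) is dominated by b."""
    (sa, ia), (sb, ib) = a, b
    return sb >= sa and ib <= ia and (sb > sa or ib < ia)

def pareto_front(results):
    return {m: v for m, v in results.items()
            if not any(dominated(v, w)
                       for n, w in results.items() if n != m)}

print(sorted(pareto_front(results)))  # → ['CRO/Gamma', 'INAR', 'SYN/Normal']
```

In this made-up example ES/Normal is dominated by INAR, which reaches a higher service at a lower inventory.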

It appears that for short lead times (L = 1) and low order costs (K = 5)
several methods lead to largely the same results. One can distinguish
three different clusters. First, the results of all the methods based on the
gamma distribution except for SYN/Gamma are close together. The second
cluster consists of ES/Normal and TEU/Normal, and the third cluster
includes all the other methods. This third cluster dominates both other
clusters: whereas the first cluster is dominated because of its poor service
levels, the second cluster is dominated because of its high inventory costs.
The results of ES/Normal and TEU/Normal demonstrate the difference
between the prior analysis and this one. Judged by the achieved and
target service levels exclusively, both ES/Normal and TEU/Normal lead to
sufficient results, but in relation to the mean inventory levels both methods
are dominated. Across all different service targets, ES/Normal results in
approximately 24%-36% higher inventories. If the service level is increased
by one percentage point, the inventory level rises by about 3.9% in the group
of non-dominated methods.12

In case of L = 3 and K = 5 there is no change in the group of dominated
methods, but compared with the previous case the inventory levels are
spread more widely. The high service level of INAR is also associated
with the highest inventory levels, between € 530 000 and € 660 000.
The other dominant methods lead to inventories between € 400 000 and
€ 515 000. Additionally, one can see that the inventory levels grow
exponentially with rising service targets. Among the dominant methods the
inventory level rises by about 4.7% if the service increases by one
percentage point.

The spread between INAR and the other dominant methods increases if L =
5. TEU/Normal, TEU/Gamma, ES/Normal, ES/Gamma, and LEV/Gamma
are still dominated and are therefore not preferable. As previously
mentioned, the achieved service of all the methods except for INAR is lower
compared with the shorter lead times, owing to the simultaneous reduction
in inventory levels. CRO and SYN remain dominant regardless of the lead
time distribution used. In this case the inventory level rises by
approximately 4.4% if the service increases by one percentage point.

12 Estimate from a log-linear regression model which regresses the inventory level on the
achieved service level of the dominant methods.
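The reported "x% per percentage point" figures come from the log-linear regression described in footnote 12. A sketch of such an estimate follows; the data points are made up and numpy is assumed to be available:

```python
import numpy as np

# Illustrative (achieved service, mean inventory) pairs for a set of
# non-dominated methods; not values from the simulation.
service = np.array([0.91, 0.93, 0.95, 0.97, 0.99])
inventory = np.array([400_000, 416_000, 433_000, 451_000, 470_000])

# Log-linear model: log(inventory) = a + b * service.  A one-
# percentage-point increase in service (Δ = 0.01) then raises
# inventory by a factor exp(0.01 * b) ≈ 1 + 0.01 * b.
b, a = np.polyfit(service, np.log(inventory), 1)
print(f"≈ {0.01 * b:.1%} higher inventory per percentage point of service")
```

With these illustrative numbers the estimated slope implies roughly a 2% inventory increase per percentage point of service; the thesis reports values around 3–5% depending on L and K.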

The results in case of K = 25 are very similar overall. The same methods
are dominated, and the methods group into the same clusters. The main
difference is the much higher inventory level, which ranges from € 865 000
to € 1 050 000 among the dominant methods. The growing separation between
INAR and the other dominant models is also noticeable. The inventory level
rises by approximately 3.1% if L = 1, by approximately 4.3% if L = 3, and
by approximately 4.7% if L = 5 when service levels increase by 1 percentage
point.

Figure 6.6 shows the distribution of the α-service for L = 5 and K = 5,
separated according to method. Darker blue indicates higher service levels.
The service levels of CRO/Gamma and CRO/Normal are almost uniformly
distributed, although CRO/Gamma shows a slight decrease in the achieved
service levels as the probability of a positive demand increases. In
contrast, the service levels of ES/Gamma vary noticeably with a changing
probability of a positive demand, i.e. they increase with a higher
probability. ES/Normal has largely the same structure as CRO/Normal. The
distribution of INAR is very homogeneous; there is no noticeable change in
the average service level for different probabilities of a positive demand or
different demand variations. LEV/Gamma shares the same pattern as
ES/Gamma, but much more pronounced. In contrast, the pattern is mirrored
for SYN/Normal and SYN/Gamma: for those two methods the service level
decreases with a higher probability of a positive demand. As mentioned
above, the overall service level of TEU/Gamma is lower compared with the
other methods, but interestingly the distribution of the service level seems
to be reversed when TEU/Gamma and TEU/Normal
are compared. The service level increases with a higher probability of a
positive demand in case of TEU/Gamma, which is similar to ES/Gamma
and LEV/Gamma. The service achieved with TEU/Normal decreases with
a higher probability of a positive demand as it is the case for SYN/Normal
and SYN/Gamma.

[Figure: one heat-map panel per method/distribution combination (CRO, ES, LEV, SYN, TEU with Gamma and Normal; INAR/INAR); x-axis: πY+ (0.2–0.8); y-axis: CV² (0.0–0.5); color: achieved α-service level (0.5–1.0, darker = higher)]

Figure 6.6: Distribution of the α-service level separated according to methods



Figure 6.7 shows the distribution of the mean inventory level separated
according to the different inventory risk clusters. It only contains the data
of the INAR model in case of K = 5, L = 5, and α = 0.95. The mean
inventory levels of the SKUs within the M cluster have the widest spread.
The distribution is right-skewed, and the highest simulated mean inventory
level is about € 2 500. The median of this cluster is about € 250, and the
first quartile of the M cluster lies above the third quartile of the N
cluster, at about € 165. The distribution of the mean inventory levels in
the N cluster is more peaked. The median is about € 115, and the highest
mean inventory level in this cluster is about € 1 000. The distribution of
the O cluster is again more peaked, but it is bimodal, with two noticeable
peaks above and below the median. The highest mean inventory level in this
cluster is about € 420 and the median is € 110. Overall, there is an obvious
relation between the mean inventory level and the inventory risk cluster.
[Boxplot: mean inventory (in €, scale 0–2 500) by inventory risk cluster M, N, O]

Figure 6.7: Comparison of the resulting inventory level and the inventory risk clusters

6.2.2 β-Service Level Target

This section presents the results of the inventory simulation based on the
β-service level. The structure corresponds to Section 6.2.1; therefore, the
first figure in this section shows the difference between the achieved
service and the service target in case of a β-service constraint.
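The two service measures can be contrasted with a small sketch. The definitions below follow common usage — α as the share of demand periods served without a stockout, β as the fill rate — and may differ in detail from the simulation's exact operationalization; the series are made up:

```python
# α- vs β-service on an illustrative intermittent demand series.
demand    = [0, 3, 0, 5, 2, 0, 4, 1]   # per-period demand
delivered = [0, 3, 0, 4, 2, 0, 4, 1]   # one unit short in period 4

# α-service: fraction of periods with positive demand served in full.
pos = [(d, s) for d, s in zip(demand, delivered) if d > 0]
alpha = sum(s >= d for d, s in pos) / len(pos)

# β-service (fill rate): fraction of total demand served from stock.
beta = sum(delivered) / sum(demand)

print(alpha, beta)  # → 0.8 0.9333333333333333
```

The β measure weights each demanded unit equally, which is why SKUs with rare but large demands can score very differently under the two targets.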

In case of short lead times (L = 1) and low order costs (K = 5) one can
distinguish two different clusters of methods. On the one hand, the methods
based on the gamma distribution achieve a very high overall service, but
they overshoot the target; moreover, except for the highest service target,
no increase in the achieved service level with rising targets is noticeable.
On the other hand, the methods based on the normal distribution and INAR
perform better. For low service targets they also overshoot, but by a
smaller margin, and the achieved service increases with a rising target. The
results in this second cluster are quite similar, but compared with the other
methods TEU/Normal leads to the lowest service levels.

With rising lead times (L = 3) the structure of the results changes
significantly. The achieved service of the group based on the gamma
distribution is reduced by approximately 6 percentage points. In comparison,
the group of methods based on the normal distribution performs worse in
case of low targets, but except for TEU/Normal they perform better in case
of high targets. The result of INAR is the best among all the methods, and
a gap between INAR and all the other methods is noticeable.

In case of L = 5 and K = 5 this gap is even wider, but INAR still fits the
target very well. None of the other methods satisfies the service target;
the undershoot ranges from approximately 7 percentage points for ES/Normal
to approximately 13 percentage points for TEU/Normal.

If the order costs are high (K = 25), the difference between the methods is
much smaller. In case of L = 1 there is almost no change in the achieved
service across the different service level targets. The methods based on the
gamma distribution lead to service levels of approximately 0.98 across all
targets. The achieved service levels of the other methods are lower and also
almost constant at about 0.97.

[Figure: panels for L = 1, L = 3, L = 5 (columns) and K = 5, K = 25 (rows); x-axis: service target (0.93–0.97); y-axis: avg. service (0.8–1.0); methods ES, CRO, SYN, LEV, TEU, INAR under normal, gamma, and INAR distribution assumptions]

Figure 6.8: Difference between achieved and target service in case of a β-service con-
straint

In case of K = 25 and L = 3 the results of all methods are much better
compared with the previous cases. CRO/Normal and SYN/Normal match the
0.95 target, and INAR matches the 0.99 service level target. For lower
service level targets the methods based on the gamma distribution lead to
higher service levels compared with INAR, whereas the methods based on
the normal distribution lead to lower service levels. Additionally, INAR is
the only method that meets the service level targets above 0.95.
TEU/Normal leads to the lowest service levels.

In case of K = 25 and L = 5 the differences between the methods increase
further. Except for the two lowest service level targets, INAR is the only
method which meets the constraint. For a service level target of 0.91 there
is a gap between the methods based on the gamma distribution and those
based on the normal distribution; this gap closes with a rising service
level target. TEU/Normal again leads to the lowest service levels, whereas
LEV/Gamma and LEV/Normal lead to the highest service levels within their
groups.

[Figure: panels for L = 1, L = 3, L = 5 (columns) and K = 5, K = 25 (rows); x-axis: avg. service (0.8–1.0); y-axis: inventory (in 1 000 €); methods ES, CRO, SYN, LEV, TEU, INAR under normal, gamma, and INAR distribution assumptions]

Figure 6.9: Achieved service vs. mean inventory levels (β-service target)

The next analysis, shown in Figure 6.9, considers the relationship between
the achieved β-service levels and the resulting inventory levels. In case of
short lead times the results are very close together. It is hard to
distinguish among the different methods, but one can see that ES/Normal is
dominated. For low order costs, the inventory level is between € 430 000
and € 550 000 and rises by approximately 3.6% if the service level is
increased by 1 percentage point. In case of K = 25 the inventory level is
between € 920 000 and € 1 030 000, and it rises by approximately 3.2% under
the same increase.

If K = 5 and L = 3, the results show an interesting anomaly. ES/Normal
is again dominated for all service targets, but INAR is also partly
dominated. The anomaly is that the results of INAR for the low service
targets are dominated by the results of LEV/Gamma, LEV/Normal,
ES/Gamma, SYN/Normal, and CRO/Normal for the high service level targets.
This means that even if the achieved service of those methods falls far
below the target, the target can still be met by increasing the input
parameter of the inventory simulation. Hence, these methods lead to service
levels above 0.91 and are dominant. The inventory among the dominant
methods is between € 365 000 and € 640 000.

This anomaly also appears in case of L = 3 and K = 25. In this case the
inventory level is between € 850 000 and € 1 130 000, and it increases by
3.5% if the service is increased by 1 percentage point. The results of the
methods based on the gamma distribution are close together and dominant,
whereas ES/Normal is dominated for all service level targets. Except for
the highest target, INAR is dominated.

In case of L = 5 and K = 5 one can distinguish three different groups.
ES/Normal forms the first group, and as in the previous cases it is
dominated. The second group also consists of a single method: INAR leads
to the highest inventory levels, approximately € 700 000, but is dominant
for all different service targets. The third group contains all other
methods. In this group CRO and SYN lead to dominant strategies regardless
of the lead time distribution assumption. The lowest inventory level is
approximately € 325 000, and among the dominant methods the inventory
level increases by 4.2% if the service level is increased by 1 percentage
point.

In case of L = 5 and K = 25 the inventory level has the widest range,
between € 800 000 and € 1 200 000. Among the dominant methods the
inventory level increases by 4.5% if the service level is increased by 1
percentage point. As in all the other cases, ES/Normal is dominated, and
the results of the high service targets of LEV/Normal and LEV/Gamma
dominate the results of the low service targets of INAR. Additionally,
CRO/Gamma and SYN/Gamma lead to dominant strategies for low service
level targets.

[Figure: one heat-map panel per method/distribution combination (CRO, ES, LEV, SYN, TEU with Gamma and Normal; INAR/INAR); x-axis: πY+ (0.2–0.8); y-axis: CV² (0.0–0.5); color: achieved β-service level (0.5–1.0, darker = higher)]

Figure 6.10: Distribution of the β-service level separated according to methods



Figure 6.10 shows the distribution of the β-service in case of L = 5 and
K = 5, separated according to methods. A darker blue indicates a higher
service level. For low probabilities of a positive demand, the distributions
of the service levels of CRO/Gamma and CRO/Normal are quite similar at
high levels. The service level of both methods decreases with a rising
probability of a positive demand, but in case of CRO/Gamma the decrease is
steeper. ES/Gamma and ES/Normal share the same decrease as CRO/Gamma
and CRO/Normal. For a small probability of a positive demand ES/Gamma
leads to higher service levels, whereas ES/Normal leads to lower service
levels if the probability of a positive demand is approximately 0.8. The
distribution of the INAR service levels is homogeneous at a high value,
i.e. there is no noticeable relationship between the different values of CV²
or πY+ and the service level. The results of LEV/Gamma, LEV/Normal,
and SYN/Gamma are quite similar to the previous results of CRO and ES.
SYN/Normal shows the steepest decline in the service level for a rising
probability of a positive demand. As mentioned above, TEU/Gamma and
TEU/Normal lead to the lowest service levels among the regarded methods.
Interestingly, both methods also show a decrease in service levels along
the x-axis, but instead of a gradual decrease, the service level seems to
drop at approximately πY+ = 0.25.

The results of the last analysis are shown in Figure 6.11. It tracks the
resulting INAR inventory levels in case of L = 5 and K = 5 and a β-service
target of 0.95, separated according to the inventory risk clusters. It can
be seen that the M cluster gathers the SKUs with the highest inventory
levels. This cluster has the widest range, with a maximum inventory level
of approximately € 2 200. The median inventory level is € 238, and the first
quartile is approximately € 150. The distribution of the inventory levels
within the N cluster is more peaked. The third quartile is at the same level
as the first quartile of the M cluster, at approximately € 150. The
interquartile range of the inventory level within the N cluster is
approximately € 85, and the highest value in this cluster is € 640.
Interestingly, there is only

[Boxplot: inventory (in €, scale 0–2 000) by inventory risk cluster M, N, O]

Figure 6.11: Comparison of the resulting inventory levels and the inventory risk clusters

a slight difference between the distributions of the N and O clusters. The
first quartile of the O cluster, at € 66, is only € 1 below the first
quartile of the N cluster. The median is € 93, the third quartile is € 130,
and the highest inventory level within the O cluster is approximately € 454.

6.3 Summary

There is a major difference in the forecast performance among the methods
used. LEV leads to the widest range of MASE values, whereas the INAR
performance is the best of all the methods used. Among the group of
Croston-type models, TEU leads to the smallest range and the best results.
Additionally, there is no noticeable relation between the forecast
performance and the inventory risk clusters.
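The MASE used here scales the absolute forecast errors by the in-sample mean absolute error of the one-step naive forecast, which keeps the measure well-defined for series with many zeros. A minimal sketch with made-up numbers:

```python
# Mean absolute scaled error (MASE): out-of-sample absolute errors
# scaled by the in-sample mean absolute error of the naive one-step
# forecast.  Values below are illustrative.
def mase(train, actual, forecast):
    scale = sum(abs(a - b) for a, b in zip(train[1:], train[:-1])) \
        / (len(train) - 1)
    errors = [abs(a - f) for a, f in zip(actual, forecast)]
    return sum(errors) / len(errors) / scale

train = [0, 2, 0, 0, 3, 0, 1, 0]               # intermittent history
print(mase(train, actual=[0, 2, 0], forecast=[0.75, 0.75, 0.75]))
```

Values below 1 mean the forecast beats the in-sample naive benchmark on average.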

Therefore, this study reveals a massive mismatch between the forecast
performance of these methods and the results of the inventory simulation.
There is an obvious relationship between the inventory performance and
the inventory risk clusters. The use of TEU leads to the worst results in
almost every case, whereas CRO, SYN, and even LEV lead to high service
levels and dominant strategies. If the inventory is optimized using an
α-service level restriction, the use of the gamma distribution leads to low
service levels. INAR exceeds the service targets in almost all cases, and
for high service level targets it is the only method that meets them. As
expected, the resulting inventory level increases with a higher service
level, but surprisingly the size of this increase is robust at approximately
4%. The dominant strategies are LEV/Normal, SYN/Gamma, CRO/Gamma,
and INAR, and the classification of the SKUs' inventory risk clearly
distinguishes among the resulting inventory levels.

In case of the β-service level targets and low order costs, the performance
of all the methods except for INAR declines significantly with rising lead
times. INAR matches the service target and, along with CRO/Normal,
CRO/Gamma, and SYN/Normal, leads to dominant results. The distribution
of the achieved INAR service level across the probability of a positive
demand is homogeneous, whereas the service levels of all the other methods
suffer from a higher probability of a positive demand. Interestingly,
LEV/Gamma and ES/Gamma show the opposite pattern in case of an
α-service target. The distribution of the inventory levels separated
according to the inventory risk clusters shows that the clustering leads to
a suitable separation between the inventory levels of the M and N clusters,
but there is no significant difference between the N and O clusters.
7 Conclusion

Optimal inventory decisions are based on appropriate forecasts, but there
is frequently an inconsistency between the stochastic inventory framework
and the forecast model. On the one hand, major efforts are made to
consider all the features of the demand series to produce the most accurate
forecasts. On the other hand, the majority of inventory frameworks are
accompanied by rigid stochastic assumptions, such as Gaussian or gamma
distributed lead time demand, i.e. they rely only on the point and variance
predictions of the forecast model. Therefore, most of the forecast
information remains unused when the reorder levels are optimized. This may
not be a problem for fast-moving goods, but if the average demand is low or
intermittent, the simplifying assumption of continuous demands will lead
to non-optimal results. This work aims to increase the service level and to
reduce the inventory level by combining the forecast and inventory model
into one consistent forecast-based inventory model. It is based on the
prediction of the future probability distribution by assuming an
integer-valued autoregressive process as the demand process. Using a
simulation study based on the demand series of a German wholesaler, this
integrated method is compared with a wide range of forecast/inventory
model combinations.
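To make the demand assumption concrete, the following sketch simulates a first-order INAR process with binomial thinning and Poisson innovations. This is a common specification for intermittent count data; the order, innovation distribution, and parameter values here are assumptions of the sketch, not the estimates used in the thesis:

```python
import math
import random

def simulate_inar1(alpha, lam, n, seed=1):
    """Simulate Y_t = alpha ∘ Y_{t-1} + e_t, where '∘' is binomial
    thinning (each unit of Y_{t-1} survives with probability alpha)
    and e_t is a Poisson(lam) innovation."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's method, adequate for small lam.
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    y, series = 0, []
    for _ in range(n):
        survivors = sum(rng.random() < alpha for _ in range(y))  # thinning
        y = survivors + poisson(lam)
        series.append(y)
    return series

demand = simulate_inar1(alpha=0.4, lam=0.5, n=10_000)
# The stationary mean is lam / (1 - alpha) = 0.833...; the sample mean
# should be close to it.
print(sum(demand) / len(demand))
```

Because the process lives on the non-negative integers, its full conditional probability mass function can be propagated over the lead time, which is exactly the information the consistent inventory model exploits.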

In order to base this application on a solid structure, Chapter 2 dealt with
the foundations of inventory management and started with the description
of the trade-off between inventory performance measures and relevant
costs. The main part of Chapter 2 introduced different inventory policies
and emphasized the importance of knowledge about the future demands.
Consequently, Chapter 3 presented a wide range of forecast methods which
are suitable for intermittent demand series. The last theoretical chapter
dealt with the classification of SKUs and presented an MCIC approach
to distinguish different inventory risk clusters. After this, Chapter 5
presented the simulation design and the implementation. Finally, Chapter 6
concluded with the results.

The contribution of this study is the connection between inventory
optimization and forecast estimation based on the prediction of the
complete future probability mass function. The developed algorithms can
be used to identify, estimate, and predict the demand as well as to
optimize the inventory decision of intermittent SKUs. Additionally, the
inventory simulation can be used to gain insight into the relationship
between the service and inventory levels. This work also offers a new
inventory risk classification, which has proven to be useful with regard to
distinguishing the SKUs' resulting inventory levels.

The presented consistent INAR model outperforms every other method in
relation to the achieved service level. Comparing the achieved service
level and the resulting inventory level, INAR leads to higher service
levels combined with lower or equal inventory levels, i.e. dominant
inventory strategies, in almost every case. By using a method which is
suitable for intermittent demand series, inventory levels can be reduced by
approximately 20%, or in other words, an increase in service can be
achieved at the same inventory level. Considering the results of the
forecast and inventory simulation separately, it can be seen that CRO,
LEV, and SYN, which perform poorly with regard to the MASE, lead to
dominant inventory strategies, whereas TEU, which achieves low MASE
values, is dominated independently of the demand distribution.

These results have several implications. First, and most importantly, one
must not regard forecasting as a separate problem. If the forecast is used
in inventory management, the appropriate method should be selected based
on inventory performance and not based on statistical error measures. In
addition, the use of appropriate methods for intermittent demand leads to
significantly improved results. In case of an α-service level target and low
order costs, the inventory management should be based on the consistent
INAR model, whereas in case of high order costs, methods based on the
normal distribution lead to dominant strategies. In case of a β-service
level target the INAR model leads to a very good match between the service
target and achieved service level in all cases. Hence, it should be used to
determine the reorder point.

There are limitations to the presented models and inventory simulations.
Since the models and simulations are based on univariate stochastic
processes, they do not consider any exogenous variables; the influence of
promotion campaigns or other external factors therefore cannot be taken
into account. Additionally, due to the lack of information about the lead
times and order costs of the SKUs, those parameters have been set to
commonly assumed values. If those parameters are available, the simulation
might lead to more realistic results. The demand of a SKU is a discrete
measure, and assuming it to be continuous is always a simplification.
Nevertheless, the presented method is not efficient in case of fast-moving
consumer goods. The main reason for this is the dramatic increase in the
number of possible states of the Markov chain. Therefore, the presented
consistent INAR model should be adapted for those SKUs.

The integration of a forecast and an optimization problem using the
complete future probability mass function opens up further possible
applications. Other optimization problems in supply chain management are
also based on distribution assumptions, and the presented approach might
foster a consistent integration of forecasts into those robust optimization
problems as well. A more specific extension of the provided approach would
be taking stochastic lead times into account. Even though more information
is necessary, this adaptation could increase the practical relevance even
further.
A Appendix

[Boxplots: five-step-ahead MASE (0.5–2.0) per method (CRO, SYN, LEV, TEU, ES, INAR), one panel per inventory risk cluster M, N, O]

Figure A.1: Five-step-ahead forecast performance separated according to risk cluster


        CRO  SYN  LEV  TEU   ES  INAR
CRO       –   23   62   29   33    23
SYN      77    –   69   37   52    31
LEV      38   31    –   36   30    18
TEU      71   63   64    –   60    34
ES       67   48   70   40    –    29
INAR     77   69   82   66   71     –

Figure A.2: Five-step-ahead percentage better forecast performance of all SKUs
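The percentage-better figures in these tables compare the methods pairwise, presumably as the share of SKUs on which the row method achieves the lower error (the off-diagonal pairs sum to 100). A sketch of how such a matrix can be computed; the error values are made up:

```python
# Pairwise percentage-better matrix: share of SKUs on which one
# method's error (e.g. MASE) is lower than another's.  Illustrative.
errors = {                      # per-method error for three SKUs
    "CRO":  [0.9, 1.2, 0.8],
    "TEU":  [0.7, 1.0, 0.9],
    "INAR": [0.6, 1.1, 0.7],
}

def pct_better(a, b):
    wins = sum(ea < eb for ea, eb in zip(errors[a], errors[b]))
    return round(100 * wins / len(errors[a]))

for row in errors:
    print(row, [pct_better(row, col) if col != row else "-"
                for col in errors])
```

Note that ties (identical errors) would make a pair sum to less than 100, which does not occur in the tables here.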

        CRO  SYN  LEV  TEU   ES  INAR
CRO       –   19   56   20   30    17
SYN      81    –   62   25   48    21
LEV      44   38    –   38   37    14
TEU      80   75   62    –   64    25
ES       70   52   63   36    –    21
INAR     83   79   86   75   79     –

Figure A.3: Five-step-ahead percentage better forecast performance of M cluster

        CRO  SYN  LEV  TEU   ES  INAR
CRO       –   27   75   46   40    30
SYN      73    –   81   57   58    41
LEV      25   19    –   32   19    21
TEU      54   43   68    –   53    41
ES       60   42   81   47    –    38
INAR     70   59   79   59   62     –

Figure A.4: Five-step-ahead percentage better forecast performance of N cluster



        CRO  SYN  LEV  TEU   ES  INAR
CRO       –   22   55   20   29    19
SYN      78    –   63   25   48    26
LEV      45   37    –   37   36    17
TEU      80   75   63    –   63    31
ES       71   52   64   37    –    24
INAR     81   74   83   69   76     –

Figure A.5: Five-step-ahead percentage better forecast performance of O cluster

        CRO  SYN  LEV  TEU   ES  INAR
CRO       –   19   55   20   29    13
SYN      81    –   62   24   48    18
LEV      45   38    –   39   36    18
TEU      80   76   61    –   62    25
ES       71   52   64   38    –    19
INAR     87   82   82   75   81     –

Figure A.6: One-step-ahead percentage better forecast performance of M cluster

        CRO  SYN  LEV  TEU   ES  INAR
CRO       –   26   75   46   39    28
SYN      74    –   80   58   58    41
LEV      25   20    –   33   18    18
TEU      54   42   67    –   53    39
ES       61   42   82   47    –    36
INAR     72   59   82   61   64     –

Figure A.7: One-step-ahead percentage better forecast performance of N cluster



        CRO  SYN  LEV  TEU   ES  INAR
CRO       –   22   55   21   28    17
SYN      78    –   62   26   48    25
LEV      45   38    –   37   36    20
TEU      79   74   63    –   63    31
ES       72   52   64   37    –    24
INAR     83   75   80   69   76     –

Figure A.8: One-step-ahead percentage better forecast performance of O cluster

[Figure: one heat-map panel per method (CRO, SYN, LEV, TEU, ES, INAR); x-axis: πY+ (0.2–0.8); y-axis: CV² (0.0–0.5); color: five-step-ahead MASE (0.50–1.50)]

Figure A.9: Distribution of five-step-ahead MASE separated according to method



[Figure: one heat-map panel per distribution/method combination (Gamma and Normal with CRO, ES, LEV, SYN, TEU; INAR/INAR); x-axis: πY+ (0.2–0.8); y-axis: CV² (0.0–0.5); color: inventory level (1 000–5 000)]

Figure A.10: Distribution of the inventory level separated according to method



[Boxplot: inventory (in €, scale 0–1 500) by inventory risk cluster M, N, O]
Figure A.11: Inventory level separated according to inventory risk clusters
(CRO/Gamma)

[Boxplot: inventory (in €, scale 0–1 500) by inventory risk cluster M, N, O]
Figure A.12: Inventory level separated according to inventory risk clusters
(CRO/Normal)

[Boxplot: inventory (in €, scale 0–2 500) by inventory risk cluster M, N, O]
Figure A.13: Inventory level separated according to inventory risk clusters (ES/Gamma)

[Boxplot: inventory (in €, scale 0–3 000) by inventory risk cluster M, N, O]
Figure A.14: Inventory level separated according to inventory risk clusters (ES/Normal)

[Boxplot: inventory (in €, scale 0–2 000) by inventory risk cluster M, N, O]
Figure A.15: Inventory level separated according to inventory risk clusters
(LEV/Gamma)

[Boxplot: inventory (in €, scale 0–2 000) by inventory risk cluster M, N, O]
Figure A.16: Inventory level separated according to inventory risk clusters
(LEV/Normal)

[Boxplot: inventory (in €, scale 0–1 500) by inventory risk cluster M, N, O]
Figure A.17: Inventory level separated according to inventory risk clusters
(SYN/Gamma)

[Boxplot: inventory (in €, scale 0–2 000) by inventory risk cluster M, N, O]
Figure A.18: Inventory level separated according to inventory risk clusters
(SYN/Normal)

[Boxplot: inventory (in €, scale 0–3 000) by inventory risk cluster M, N, O]
Figure A.19: Inventory level separated according to inventory risk clusters
(TEU/Gamma)

[Boxplot: inventory (in €, scale 0–3 000) by inventory risk cluster M, N, O]
Figure A.20: Inventory level separated according to inventory risk clusters
(TEU/Normal)

Min. 1st Qu. Median Mean 3rd Qu. Max.


L=1 K=5 α = 0.91 0.00 0.93 0.95 0.94 0.97 1.00
α = 0.93 0.00 0.93 0.97 0.95 0.97 1.00
α = 0.95 0.00 0.95 0.97 0.96 0.98 1.00
α = 0.97 0.00 0.97 0.97 0.97 1.00 1.00
α = 0.99 0.00 0.97 0.98 0.98 1.00 1.00
K = 25 α = 0.91 0.48 0.93 0.97 0.95 1.00 1.00
α = 0.93 0.55 0.95 0.97 0.96 1.00 1.00
α = 0.95 0.67 0.97 0.97 0.97 1.00 1.00
α = 0.97 0.85 0.97 0.98 0.98 1.00 1.00
α = 0.99 0.90 0.98 1.00 0.99 1.00 1.00
L=3 K=5 α = 0.91 0.00 0.90 0.93 0.91 0.97 1.00
α = 0.93 0.00 0.92 0.93 0.92 0.98 1.00
α = 0.95 0.00 0.92 0.95 0.93 0.98 1.00
α = 0.97 0.00 0.93 0.97 0.94 1.00 1.00
α = 0.99 0.00 0.95 0.98 0.95 1.00 1.00
K = 25 α = 0.91 0.00 0.92 0.93 0.94 1.00 1.00
α = 0.93 0.00 0.93 0.95 0.95 1.00 1.00
α = 0.95 0.00 0.93 0.97 0.96 1.00 1.00
α = 0.97 0.00 0.95 0.98 0.97 1.00 1.00
α = 0.99 0.00 0.97 1.00 0.98 1.00 1.00
L=5 K=5 α = 0.91 0.00 0.87 0.92 0.87 0.97 1.00
α = 0.93 0.00 0.88 0.92 0.88 0.98 1.00
α = 0.95 0.00 0.88 0.93 0.89 0.98 1.00
α = 0.97 0.00 0.90 0.95 0.90 1.00 1.00
α = 0.99 0.00 0.92 0.97 0.91 1.00 1.00
K = 25 α = 0.91 0.00 0.90 0.93 0.94 1.00 1.00
α = 0.93 0.00 0.90 0.95 0.95 1.00 1.00
α = 0.95 0.00 0.92 0.97 0.96 1.00 1.00
α = 0.97 0.00 0.93 0.98 0.97 1.00 1.00
α = 0.99 0.00 0.95 1.00 0.97 1.00 1.00

Table A.1: Summary of achieved α-service (CRO/Gamma)



Min. 1st Qu. Median Mean 3rd Qu. Max.


L=1 K=5 α = 0.91 0.00 0.93 0.97 0.96 1.00 1.00
α = 0.93 0.00 0.93 0.97 0.96 1.00 1.00
α = 0.95 0.00 0.95 0.97 0.97 1.00 1.00
α = 0.97 0.00 0.97 0.98 0.98 1.00 1.00
α = 0.99 0.00 0.97 1.00 0.98 1.00 1.00
K = 25 α = 0.91 0.48 0.97 1.00 0.97 1.00 1.00
α = 0.93 0.55 0.97 0.98 0.97 1.00 1.00
α = 0.95 0.67 0.97 0.98 0.98 1.00 1.00
α = 0.97 0.85 0.97 1.00 0.98 1.00 1.00
α = 0.99 0.90 0.98 1.00 0.99 1.00 1.00
L=3 K=5 α = 0.91 0.00 0.92 0.95 0.93 0.98 1.00
α = 0.93 0.00 0.92 0.95 0.94 1.00 1.00
α = 0.95 0.00 0.93 0.97 0.95 1.00 1.00
α = 0.97 0.00 0.95 0.98 0.96 1.00 1.00
α = 0.99 0.00 0.95 0.98 0.96 1.00 1.00
K = 25 α = 0.91 0.00 0.93 0.97 0.96 1.00 1.00
α = 0.93 0.00 0.93 0.97 0.96 1.00 1.00
α = 0.95 0.00 0.95 0.98 0.97 1.00 1.00
α = 0.97 0.02 0.97 1.00 0.98 1.00 1.00
α = 0.99 0.77 0.98 1.00 0.99 1.00 1.00
L=5 K=5 α = 0.91 0.00 0.90 0.93 0.90 1.00 1.00
α = 0.93 0.00 0.90 0.95 0.91 1.00 1.00
α = 0.95 0.00 0.92 0.95 0.92 1.00 1.00
α = 0.97 0.00 0.93 0.97 0.93 1.00 1.00
α = 0.99 0.00 0.93 0.98 0.94 1.00 1.00
K = 25 α = 0.91 0.00 0.90 0.97 0.95 1.00 1.00
α = 0.93 0.00 0.92 0.98 0.96 1.00 1.00
α = 0.95 0.00 0.93 0.98 0.97 1.00 1.00
α = 0.97 0.00 0.95 1.00 0.98 1.00 1.00
α = 0.99 0.00 0.97 1.00 0.98 1.00 1.00

Table A.2: Summary of achieved α-service (CRO/Normal)



Min. 1st Qu. Median Mean 3rd Qu. Max.


L=1 K=5 α = 0.91 0.00 0.87 0.93 0.77 0.97 1.00
α = 0.93 0.00 0.88 0.93 0.78 0.97 1.00
α = 0.95 0.00 0.90 0.95 0.82 0.97 1.00
α = 0.97 0.00 0.93 0.97 0.89 0.98 1.00
α = 0.99 0.00 0.97 0.98 0.95 1.00 1.00
K = 25 α = 0.91 0.00 0.88 0.93 0.83 0.97 1.00
α = 0.93 0.00 0.92 0.97 0.86 0.97 1.00
α = 0.95 0.00 0.93 0.97 0.90 1.00 1.00
α = 0.97 0.00 0.97 0.97 0.94 1.00 1.00
α = 0.99 0.00 0.97 1.00 0.97 1.00 1.00
L=3 K=5 α = 0.91 0.00 0.85 0.92 0.82 0.95 1.00
α = 0.93 0.00 0.87 0.93 0.85 0.97 1.00
α = 0.95 0.00 0.90 0.93 0.89 0.98 1.00
α = 0.97 0.00 0.93 0.95 0.92 1.00 1.00
α = 0.99 0.00 0.95 0.98 0.94 1.00 1.00
K = 25 α = 0.91 0.00 0.88 0.93 0.89 1.00 1.00
α = 0.93 0.00 0.92 0.93 0.92 1.00 1.00
α = 0.95 0.00 0.93 0.95 0.94 1.00 1.00
α = 0.97 0.00 0.95 0.98 0.95 1.00 1.00
α = 0.99 0.00 0.97 1.00 0.97 1.00 1.00
L=5 K=5 α = 0.91 0.00 0.82 0.90 0.81 0.95 1.00
α = 0.93 0.00 0.85 0.92 0.84 0.97 1.00
α = 0.95 0.00 0.87 0.93 0.86 0.98 1.00
α = 0.97 0.00 0.90 0.95 0.89 1.00 1.00
α = 0.99 0.00 0.92 0.98 0.92 1.00 1.00
K = 25 α = 0.91 0.00 0.88 0.92 0.90 1.00 1.00
α = 0.93 0.00 0.90 0.93 0.92 1.00 1.00
α = 0.95 0.00 0.92 0.97 0.94 1.00 1.00
α = 0.97 0.00 0.93 0.98 0.95 1.00 1.00
α = 0.99 0.00 0.97 1.00 0.97 1.00 1.00

Table A.3: Summary of achieved α-service (ES/Gamma)



Min. 1st Qu. Median Mean 3rd Qu. Max.


L=1 K=5 α = 0.91 0.00 0.93 0.97 0.93 1.00 1.00
α = 0.93 0.00 0.93 0.97 0.93 1.00 1.00
α = 0.95 0.00 0.95 0.97 0.94 1.00 1.00
α = 0.97 0.00 0.97 0.98 0.96 1.00 1.00
α = 0.99 0.00 0.97 1.00 0.98 1.00 1.00
K = 25 α = 0.91 0.00 0.93 0.97 0.95 1.00 1.00
α = 0.93 0.00 0.95 0.97 0.96 1.00 1.00
α = 0.95 0.00 0.97 0.97 0.97 1.00 1.00
α = 0.97 0.00 0.97 0.98 0.98 1.00 1.00
α = 0.99 0.32 0.98 1.00 0.99 1.00 1.00
L=3 K=5 α = 0.91 0.00 0.90 0.93 0.90 0.98 1.00
α = 0.93 0.00 0.92 0.95 0.92 1.00 1.00
α = 0.95 0.00 0.93 0.97 0.94 1.00 1.00
α = 0.97 0.00 0.95 0.98 0.95 1.00 1.00
α = 0.99 0.00 0.97 1.00 0.97 1.00 1.00
K = 25 α = 0.91 0.00 0.92 0.93 0.94 1.00 1.00
α = 0.93 0.00 0.93 0.97 0.95 1.00 1.00
α = 0.95 0.18 0.93 0.98 0.97 1.00 1.00
α = 0.97 0.00 0.97 1.00 0.98 1.00 1.00
α = 0.99 0.00 0.98 1.00 0.99 1.00 1.00
L=5 K=5 α = 0.91 0.00 0.88 0.93 0.88 0.98 1.00
α = 0.93 0.00 0.90 0.95 0.90 1.00 1.00
α = 0.95 0.00 0.90 0.95 0.91 1.00 1.00
α = 0.97 0.00 0.92 0.97 0.92 1.00 1.00
α = 0.99 0.00 0.95 1.00 0.94 1.00 1.00
K = 25 α = 0.91 0.00 0.90 0.95 0.94 1.00 1.00
α = 0.93 0.00 0.92 0.97 0.95 1.00 1.00
α = 0.95 0.00 0.93 0.98 0.96 1.00 1.00
α = 0.97 0.00 0.95 1.00 0.97 1.00 1.00
α = 0.99 0.00 0.98 1.00 0.98 1.00 1.00

Table A.4: Summary of achieved α-service (ES/Normal)



Min. 1st Qu. Median Mean 3rd Qu. Max.


L=1 K=5 α = 0.91 0.00 0.97 0.98 0.98 1.00 1.00
α = 0.93 0.00 0.97 0.98 0.98 1.00 1.00
α = 0.95 0.00 0.98 1.00 0.99 1.00 1.00
α = 0.97 0.00 0.98 1.00 0.99 1.00 1.00
α = 0.99 0.00 1.00 1.00 0.99 1.00 1.00
K = 25 α = 0.91 0.87 0.98 1.00 0.99 1.00 1.00
α = 0.93 0.87 0.98 1.00 0.99 1.00 1.00
α = 0.95 0.88 1.00 1.00 0.99 1.00 1.00
α = 0.97 0.88 1.00 1.00 1.00 1.00 1.00
α = 0.99 0.90 1.00 1.00 1.00 1.00 1.00
L=3 K=5 α = 0.91 0.00 0.98 1.00 0.98 1.00 1.00
α = 0.93 0.00 0.98 1.00 0.98 1.00 1.00
α = 0.95 0.00 0.98 1.00 0.99 1.00 1.00
α = 0.97 0.00 1.00 1.00 0.99 1.00 1.00
α = 0.99 0.00 1.00 1.00 1.00 1.00 1.00
K = 25 α = 0.91 0.00 1.00 1.00 0.99 1.00 1.00
α = 0.93 0.00 1.00 1.00 1.00 1.00 1.00
α = 0.95 0.77 1.00 1.00 1.00 1.00 1.00
α = 0.97 0.77 1.00 1.00 1.00 1.00 1.00
α = 0.99 0.85 1.00 1.00 1.00 1.00 1.00
L=5 K=5 α = 0.91 0.00 0.98 1.00 0.98 1.00 1.00
α = 0.93 0.00 0.98 1.00 0.98 1.00 1.00
α = 0.95 0.00 0.98 1.00 0.98 1.00 1.00
α = 0.97 0.00 1.00 1.00 0.99 1.00 1.00
α = 0.99 0.00 1.00 1.00 0.99 1.00 1.00
K = 25 α = 0.91 0.00 1.00 1.00 0.99 1.00 1.00
α = 0.93 0.43 1.00 1.00 1.00 1.00 1.00
α = 0.95 0.43 1.00 1.00 1.00 1.00 1.00
α = 0.97 0.62 1.00 1.00 1.00 1.00 1.00
α = 0.99 0.78 1.00 1.00 1.00 1.00 1.00

Table A.5: Summary of achieved α-service (INAR)



Min. 1st Qu. Median Mean 3rd Qu. Max.


L=1 K=5 α = 0.91 0.00 0.87 0.93 0.74 0.97 1.00
α = 0.93 0.00 0.88 0.93 0.75 0.97 1.00
α = 0.95 0.00 0.90 0.95 0.76 0.98 1.00
α = 0.97 0.00 0.92 0.97 0.81 0.98 1.00
α = 0.99 0.00 0.95 0.98 0.93 1.00 1.00
K = 25 α = 0.91 0.00 0.85 0.93 0.77 0.97 1.00
α = 0.93 0.00 0.90 0.97 0.79 0.97 1.00
α = 0.95 0.00 0.93 0.97 0.83 0.98 1.00
α = 0.97 0.00 0.97 0.97 0.89 1.00 1.00
α = 0.99 0.00 0.97 1.00 0.97 1.00 1.00
L=3 K=5 α = 0.91 0.00 0.80 0.92 0.74 0.95 1.00
α = 0.93 0.00 0.83 0.93 0.78 0.97 1.00
α = 0.95 0.00 0.87 0.95 0.83 0.98 1.00
α = 0.97 0.00 0.90 0.95 0.88 1.00 1.00
α = 0.99 0.00 0.93 0.97 0.94 1.00 1.00
K = 25 α = 0.91 0.00 0.88 0.93 0.84 0.98 1.00
α = 0.93 0.00 0.92 0.93 0.88 1.00 1.00
α = 0.95 0.00 0.93 0.97 0.91 1.00 1.00
α = 0.97 0.00 0.95 0.98 0.95 1.00 1.00
α = 0.99 0.03 0.97 1.00 0.98 1.00 1.00
L=5 K=5 α = 0.91 0.00 0.72 0.90 0.75 0.97 1.00
α = 0.93 0.00 0.77 0.92 0.79 0.98 1.00
α = 0.95 0.00 0.82 0.93 0.83 0.98 1.00
α = 0.97 0.00 0.87 0.93 0.87 1.00 1.00
α = 0.99 0.00 0.90 0.97 0.91 1.00 1.00
K = 25 α = 0.91 0.00 0.88 0.92 0.87 1.00 1.00
α = 0.93 0.00 0.90 0.95 0.90 1.00 1.00
α = 0.95 0.00 0.92 0.97 0.93 1.00 1.00
α = 0.97 0.00 0.93 0.98 0.95 1.00 1.00
α = 0.99 0.05 0.95 1.00 0.97 1.00 1.00

Table A.6: Summary of achieved α-service (LEV/Gamma)



Min. 1st Qu. Median Mean 3rd Qu. Max.


L=1 K=5 α = 0.91 0.00 0.93 0.97 0.96 1.00 1.00
α = 0.93 0.00 0.93 0.97 0.96 1.00 1.00
α = 0.95 0.00 0.95 0.98 0.97 1.00 1.00
α = 0.97 0.00 0.97 0.98 0.98 1.00 1.00
α = 0.99 0.00 0.97 1.00 0.98 1.00 1.00
K = 25 α = 0.91 0.48 0.97 1.00 0.97 1.00 1.00
α = 0.93 0.55 0.97 0.98 0.97 1.00 1.00
α = 0.95 0.77 0.97 0.98 0.98 1.00 1.00
α = 0.97 0.90 0.97 1.00 0.99 1.00 1.00
α = 0.99 0.90 0.98 1.00 0.99 1.00 1.00
L=3 K=5 α = 0.91 0.00 0.92 0.95 0.94 1.00 1.00
α = 0.93 0.00 0.93 0.97 0.95 1.00 1.00
α = 0.95 0.00 0.93 0.97 0.95 1.00 1.00
α = 0.97 0.00 0.95 0.98 0.96 1.00 1.00
α = 0.99 0.00 0.97 1.00 0.97 1.00 1.00
K = 25 α = 0.91 0.00 0.93 0.98 0.96 1.00 1.00
α = 0.93 0.00 0.93 0.98 0.97 1.00 1.00
α = 0.95 0.00 0.95 1.00 0.98 1.00 1.00
α = 0.97 0.00 0.97 1.00 0.98 1.00 1.00
α = 0.99 0.80 0.98 1.00 0.99 1.00 1.00
L=5 K=5 α = 0.91 0.00 0.90 0.95 0.91 1.00 1.00
α = 0.93 0.00 0.92 0.97 0.92 1.00 1.00
α = 0.95 0.00 0.92 0.97 0.93 1.00 1.00
α = 0.97 0.00 0.93 0.98 0.94 1.00 1.00
α = 0.99 0.00 0.95 0.98 0.95 1.00 1.00
K = 25 α = 0.91 0.00 0.92 0.97 0.96 1.00 1.00
α = 0.93 0.00 0.93 0.98 0.96 1.00 1.00
α = 0.95 0.00 0.95 1.00 0.97 1.00 1.00
α = 0.97 0.00 0.97 1.00 0.98 1.00 1.00
α = 0.99 0.50 0.98 1.00 0.99 1.00 1.00

Table A.7: Summary of achieved α-service (LEV/Normal)



Min. 1st Qu. Median Mean 3rd Qu. Max.


L=1 K=5 α = 0.91 0.00 0.92 0.93 0.94 0.97 1.00
α = 0.93 0.00 0.93 0.95 0.95 0.97 1.00
α = 0.95 0.00 0.93 0.97 0.96 0.98 1.00
α = 0.97 0.00 0.95 0.97 0.97 1.00 1.00
α = 0.99 0.00 0.97 0.98 0.98 1.00 1.00
K = 25 α = 0.91 0.47 0.92 0.97 0.95 1.00 1.00
α = 0.93 0.52 0.93 0.97 0.96 1.00 1.00
α = 0.95 0.67 0.97 0.97 0.97 1.00 1.00
α = 0.97 0.85 0.97 0.98 0.98 1.00 1.00
α = 0.99 0.88 0.98 1.00 0.99 1.00 1.00
L=3 K=5 α = 0.91 0.00 0.88 0.93 0.90 0.95 1.00
α = 0.93 0.00 0.90 0.93 0.91 0.97 1.00
α = 0.95 0.00 0.92 0.93 0.92 0.98 1.00
α = 0.97 0.00 0.93 0.95 0.94 1.00 1.00
α = 0.99 0.00 0.95 0.98 0.96 1.00 1.00
K = 25 α = 0.91 0.00 0.92 0.93 0.94 1.00 1.00
α = 0.93 0.00 0.93 0.93 0.95 1.00 1.00
α = 0.95 0.00 0.93 0.95 0.96 1.00 1.00
α = 0.97 0.00 0.95 0.98 0.97 1.00 1.00
α = 0.99 0.00 0.97 1.00 0.98 1.00 1.00
L=5 K=5 α = 0.91 0.00 0.85 0.90 0.85 0.95 1.00
α = 0.93 0.00 0.87 0.92 0.87 0.97 1.00
α = 0.95 0.00 0.88 0.92 0.88 0.98 1.00
α = 0.97 0.00 0.90 0.95 0.90 1.00 1.00
α = 0.99 0.00 0.92 0.97 0.93 1.00 1.00
K = 25 α = 0.91 0.00 0.90 0.92 0.93 1.00 1.00
α = 0.93 0.00 0.90 0.93 0.94 1.00 1.00
α = 0.95 0.00 0.92 0.95 0.95 1.00 1.00
α = 0.97 0.00 0.93 0.98 0.96 1.00 1.00
α = 0.99 0.00 0.95 1.00 0.97 1.00 1.00

Table A.8: Summary of achieved α-service (SYN/Gamma)



Min. 1st Qu. Median Mean 3rd Qu. Max.


L=1 K=5 α = 0.91 0.00 0.93 0.95 0.95 0.97 1.00
α = 0.93 0.00 0.93 0.97 0.95 0.98 1.00
α = 0.95 0.00 0.93 0.97 0.96 1.00 1.00
α = 0.97 0.00 0.95 0.98 0.97 1.00 1.00
α = 0.99 0.00 0.97 0.98 0.98 1.00 1.00
K = 25 α = 0.91 0.47 0.93 0.97 0.96 1.00 1.00
α = 0.93 0.52 0.95 0.97 0.97 1.00 1.00
α = 0.95 0.67 0.97 0.97 0.97 1.00 1.00
α = 0.97 0.85 0.97 0.98 0.98 1.00 1.00
α = 0.99 0.90 0.98 1.00 0.99 1.00 1.00
L=3 K=5 α = 0.91 0.00 0.88 0.93 0.91 0.97 1.00
α = 0.93 0.00 0.90 0.93 0.92 0.98 1.00
α = 0.95 0.00 0.92 0.95 0.93 1.00 1.00
α = 0.97 0.00 0.93 0.97 0.95 1.00 1.00
α = 0.99 0.00 0.95 0.98 0.96 1.00 1.00
K = 25 α = 0.91 0.00 0.92 0.93 0.95 1.00 1.00
α = 0.93 0.00 0.93 0.95 0.95 1.00 1.00
α = 0.95 0.00 0.93 0.97 0.97 1.00 1.00
α = 0.97 0.00 0.95 0.98 0.97 1.00 1.00
α = 0.99 0.77 0.97 1.00 0.99 1.00 1.00
L=5 K=5 α = 0.91 0.00 0.87 0.92 0.87 0.97 1.00
α = 0.93 0.00 0.88 0.92 0.89 0.98 1.00
α = 0.95 0.00 0.90 0.93 0.91 1.00 1.00
α = 0.97 0.00 0.92 0.95 0.92 1.00 1.00
α = 0.99 0.00 0.93 0.98 0.94 1.00 1.00
K = 25 α = 0.91 0.00 0.90 0.93 0.94 1.00 1.00
α = 0.93 0.00 0.90 0.95 0.95 1.00 1.00
α = 0.95 0.00 0.92 0.97 0.96 1.00 1.00
α = 0.97 0.00 0.93 1.00 0.97 1.00 1.00
α = 0.99 0.50 0.97 1.00 0.98 1.00 1.00

Table A.9: Summary of achieved α-service (SYN/Normal)



Min. 1st Qu. Median Mean 3rd Qu. Max.


L=1 K=5 α = 0.91 0.00 0.88 0.93 0.84 0.97 1.00
α = 0.93 0.00 0.90 0.93 0.86 0.97 1.00
α = 0.95 0.00 0.92 0.95 0.88 0.97 1.00
α = 0.97 0.00 0.93 0.97 0.90 0.98 1.00
α = 0.99 0.00 0.95 0.97 0.92 1.00 1.00
K = 25 α = 0.91 0.00 0.88 0.93 0.88 0.97 1.00
α = 0.93 0.00 0.92 0.97 0.90 0.97 1.00
α = 0.95 0.00 0.95 0.97 0.92 1.00 1.00
α = 0.97 0.00 0.97 0.97 0.93 1.00 1.00
α = 0.99 0.00 0.97 1.00 0.95 1.00 1.00
L=3 K=5 α = 0.91 0.00 0.85 0.90 0.82 0.93 1.00
α = 0.93 0.00 0.87 0.92 0.84 0.95 1.00
α = 0.95 0.00 0.88 0.93 0.86 0.97 1.00
α = 0.97 0.00 0.90 0.93 0.87 0.98 1.00
α = 0.99 0.00 0.93 0.95 0.90 1.00 1.00
K = 25 α = 0.91 0.00 0.88 0.93 0.88 0.97 1.00
α = 0.93 0.00 0.92 0.93 0.90 1.00 1.00
α = 0.95 0.00 0.93 0.93 0.91 1.00 1.00
α = 0.97 0.00 0.93 0.97 0.93 1.00 1.00
α = 0.99 0.00 0.95 0.98 0.95 1.00 1.00
L=5 K=5 α = 0.91 0.00 0.80 0.88 0.78 0.92 1.00
α = 0.93 0.00 0.82 0.90 0.79 0.93 1.00
α = 0.95 0.00 0.83 0.90 0.81 0.95 1.00
α = 0.97 0.00 0.85 0.92 0.83 0.97 1.00
α = 0.99 0.00 0.88 0.93 0.86 0.98 1.00
K = 25 α = 0.91 0.00 0.88 0.90 0.87 1.00 1.00
α = 0.93 0.00 0.90 0.92 0.89 1.00 1.00
α = 0.95 0.00 0.90 0.93 0.90 1.00 1.00
α = 0.97 0.00 0.92 0.97 0.92 1.00 1.00
α = 0.99 0.00 0.93 0.98 0.94 1.00 1.00

Table A.10: Summary of achieved α-service (TEU/Gamma)



Min. 1st Qu. Median Mean 3rd Qu. Max.


L=1 K=5 α = 0.91 0.00 0.92 0.97 0.95 1.00 1.00
α = 0.93 0.00 0.93 0.97 0.95 1.00 1.00
α = 0.95 0.00 0.93 0.97 0.96 1.00 1.00
α = 0.97 0.00 0.95 0.97 0.97 1.00 1.00
α = 0.99 0.00 0.97 0.98 0.98 1.00 1.00
K = 25 α = 0.91 0.57 0.93 1.00 0.97 1.00 1.00
α = 0.93 0.50 0.95 0.97 0.97 1.00 1.00
α = 0.95 0.70 0.97 0.97 0.97 1.00 1.00
α = 0.97 0.72 0.97 0.98 0.98 1.00 1.00
α = 0.99 0.65 0.98 1.00 0.99 1.00 1.00
L=3 K=5 α = 0.91 0.00 0.88 0.93 0.90 0.97 1.00
α = 0.93 0.00 0.90 0.93 0.91 0.98 1.00
α = 0.95 0.00 0.90 0.93 0.92 0.98 1.00
α = 0.97 0.00 0.93 0.95 0.93 1.00 1.00
α = 0.99 0.00 0.93 0.97 0.95 1.00 1.00
K = 25 α = 0.91 0.00 0.92 0.93 0.94 1.00 1.00
α = 0.93 0.00 0.93 0.93 0.95 1.00 1.00
α = 0.95 0.00 0.93 0.97 0.96 1.00 1.00
α = 0.97 0.00 0.95 0.98 0.97 1.00 1.00
α = 0.99 0.00 0.97 1.00 0.98 1.00 1.00
L=5 K=5 α = 0.91 0.00 0.83 0.90 0.85 0.97 1.00
α = 0.93 0.00 0.85 0.90 0.86 0.97 1.00
α = 0.95 0.00 0.87 0.92 0.88 0.98 1.00
α = 0.97 0.00 0.88 0.93 0.89 1.00 1.00
α = 0.99 0.00 0.90 0.95 0.91 1.00 1.00
K = 25 α = 0.91 0.00 0.90 0.92 0.93 1.00 1.00
α = 0.93 0.00 0.90 0.93 0.94 1.00 1.00
α = 0.95 0.00 0.90 0.97 0.95 1.00 1.00
α = 0.97 0.00 0.93 0.98 0.96 1.00 1.00
α = 0.99 0.00 0.95 1.00 0.97 1.00 1.00

Table A.11: Summary of achieved α-service (TEU/Normal)



[Figure: grid of panels — Gamma/CRO, Gamma/ES, Gamma/LEV, Gamma/SYN, Gamma/TEU, INAR/INAR, Normal/CRO, Normal/ES, Normal/LEV, Normal/SYN, Normal/TEU — plotting CV² (y-axis, 0.0–0.5) against π_Y+ (x-axis, 0.2–0.8); legend encodes the inventory level from 1000 to 5000.]

Figure A.21: Distribution of the inventory level separated according to method
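Figure A.21's axes are the two standard intermittent-demand descriptors: the squared coefficient of variation of the (nonzero) demand sizes, CV², and the share of periods with positive demand, π_Y+. As a hedged sketch — the function name and the sample series are hypothetical, and the thesis's exact estimators may differ — both statistics can be computed from a demand history like this:

```python
# Illustrative only: the two axes of Figure A.21 from a raw demand series.
# CV^2 uses the nonzero demand sizes; pi is the share of nonzero periods.
import statistics

def intermittency_stats(demand):
    """Return (CV^2 of positive demand sizes, share of positive-demand periods)."""
    nonzero = [d for d in demand if d > 0]
    cv2 = (statistics.pstdev(nonzero) / statistics.mean(nonzero)) ** 2
    pi_pos = len(nonzero) / len(demand)
    return cv2, pi_pos

# Hypothetical demand history with many zero periods:
cv2, pi_pos = intermittency_stats([0, 0, 3, 0, 1, 0, 0, 2, 0, 0])
print(round(cv2, 2), pi_pos)
```

Whether the population or the sample standard deviation is used (here `pstdev`) is a convention choice; for the classification purpose of such plots the difference is usually immaterial.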



[Figure: inventory level (in €, axis 0–600) by inventory risk cluster M, N, O.]

Figure A.22: Comparison of the resulting inventory level and the inventory risk cluster (CRO/Gamma)
[Figure: inventory level (in €, axis 0–1000) by inventory risk cluster M, N, O.]

Figure A.23: Comparison of the resulting inventory level and the inventory risk cluster (CRO/Normal)

[Figure: inventory level (in €, axis 0–900) by inventory risk cluster M, N, O.]

Figure A.24: Comparison of the resulting inventory level and the inventory risk cluster (ES/Gamma)

[Figure: inventory level (in €, axis 0–4000) by inventory risk cluster M, N, O.]

Figure A.25: Inventory level separated according to inventory risk clusters (ES/Normal)

[Figure: inventory level (in €, axis 0–1500) by inventory risk cluster M, N, O.]

Figure A.26: Inventory level separated according to inventory risk clusters (LEV/Gamma)

[Figure: inventory level (in €, axis 0–1500) by inventory risk cluster M, N, O.]

Figure A.27: Inventory level separated according to inventory risk clusters (LEV/Normal)

[Figure: inventory level (in €, axis 0–600) by inventory risk cluster M, N, O.]

Figure A.28: Inventory level separated according to inventory risk clusters (SYN/Gamma)

[Figure: inventory level (in €, axis 0–1500) by inventory risk cluster M, N, O.]

Figure A.29: Inventory level separated according to inventory risk clusters (SYN/Normal)
[Figure: inventory level (in €, axis 0–2000) by inventory risk cluster M, N, O.]

Figure A.30: Inventory level separated according to inventory risk clusters (TEU/Gamma)

[Figure: inventory level (in €, axis 0–2500) by inventory risk cluster M, N, O.]

Figure A.31: Inventory level separated according to inventory risk clusters (TEU/Normal)

Min. 1st Qu. Median Mean 3rd Qu. Max.


L=1 K=5 β = 0.91 0.00 0.95 0.98 0.97 1.00 1.00
β = 0.93 0.00 0.95 0.98 0.97 1.00 1.00
β = 0.95 0.00 0.95 0.98 0.97 1.00 1.00
β = 0.97 0.00 0.95 0.98 0.97 1.00 1.00
β = 0.99 0.00 0.96 0.99 0.97 1.00 1.00
K = 25 β = 0.91 0.69 0.98 1.00 0.99 1.00 1.00
β = 0.93 0.69 0.98 1.00 0.99 1.00 1.00
β = 0.95 0.69 0.98 1.00 0.99 1.00 1.00
β = 0.97 0.69 0.98 1.00 0.99 1.00 1.00
β = 0.99 0.69 0.98 1.00 0.99 1.00 1.00
L=3 K=5 β = 0.91 0.00 0.87 0.93 0.90 0.99 1.00
β = 0.93 0.00 0.87 0.93 0.90 1.00 1.00
β = 0.95 0.00 0.88 0.93 0.91 1.00 1.00
β = 0.97 0.00 0.89 0.94 0.92 1.00 1.00
β = 0.99 0.00 0.91 0.96 0.94 1.00 1.00
K = 25 β = 0.91 0.00 0.93 0.98 0.96 1.00 1.00
β = 0.93 0.00 0.94 0.98 0.96 1.00 1.00
β = 0.95 0.00 0.94 0.98 0.96 1.00 1.00
β = 0.97 0.00 0.94 0.98 0.96 1.00 1.00
β = 0.99 0.00 0.96 1.00 0.97 1.00 1.00
L=5 K=5 β = 0.91 0.00 0.80 0.88 0.83 0.95 1.00
β = 0.93 0.00 0.81 0.88 0.84 0.95 1.00
β = 0.95 0.00 0.82 0.89 0.85 0.96 1.00
β = 0.97 0.00 0.84 0.91 0.86 0.97 1.00
β = 0.99 0.00 0.87 0.93 0.89 1.00 1.00
K = 25 β = 0.91 0.00 0.90 0.96 0.94 1.00 1.00
β = 0.93 0.00 0.90 0.96 0.94 1.00 1.00
β = 0.95 0.00 0.91 0.96 0.94 1.00 1.00
β = 0.97 0.00 0.92 0.97 0.95 1.00 1.00
β = 0.99 0.00 0.93 0.99 0.96 1.00 1.00

Table A.12: Summary of achieved β-service (CRO/Gamma)
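Tables A.12 through A.22 report the β-service (fill-rate) counterpart to the α-service tables above. As a hedged, self-contained sketch — the definitions here are the common textbook ones, and the thesis's exact per-cycle measurement may differ — both achieved service measures can be computed from a simulated demand/fulfilment history:

```python
# Illustrative only: achieved alpha- and beta-service for one SKU.
# alpha here: share of positive-demand periods served without shortage;
# beta: share of demanded units that were served (fill rate).

def achieved_service(demand, satisfied):
    """Return (alpha, beta) for paired per-period demand and satisfied demand."""
    positive = [(d, s) for d, s in zip(demand, satisfied) if d > 0]
    alpha = sum(s >= d for d, s in positive) / len(positive)
    beta = sum(satisfied) / sum(demand)
    return alpha, beta

# Hypothetical four-period history: one period is only partially served.
alpha, beta = achieved_service([2, 0, 3, 1], [2, 0, 2, 1])
```

In this example two of the three demand periods are fully served (α = 2/3) while five of six demanded units are delivered (β = 5/6), which illustrates why β typically exceeds α for lumpy demand.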



Min. 1st Qu. Median Mean 3rd Qu. Max.


L=1 K=5 β = 0.91 0.00 0.90 0.95 0.94 1.00 1.00
β = 0.93 0.00 0.91 0.95 0.94 1.00 1.00
β = 0.95 0.00 0.92 0.96 0.95 1.00 1.00
β = 0.97 0.00 0.93 0.96 0.95 1.00 1.00
β = 0.99 0.00 0.95 0.98 0.97 1.00 1.00
K = 25 β = 0.91 0.65 0.94 0.98 0.97 1.00 1.00
β = 0.93 0.65 0.95 0.98 0.97 1.00 1.00
β = 0.95 0.65 0.95 0.98 0.97 1.00 1.00
β = 0.97 0.65 0.96 1.00 0.98 1.00 1.00
β = 0.99 0.69 0.97 1.00 0.98 1.00 1.00
L=3 K=5 β = 0.91 0.00 0.84 0.90 0.88 0.97 1.00
β = 0.93 0.00 0.85 0.91 0.89 0.97 1.00
β = 0.95 0.00 0.87 0.93 0.90 0.98 1.00
β = 0.97 0.00 0.89 0.94 0.92 1.00 1.00
β = 0.99 0.00 0.92 0.96 0.94 1.00 1.00
K = 25 β = 0.91 0.00 0.90 0.95 0.94 1.00 1.00
β = 0.93 0.00 0.91 0.96 0.94 1.00 1.00
β = 0.95 0.00 0.92 0.96 0.95 1.00 1.00
β = 0.97 0.03 0.93 0.98 0.96 1.00 1.00
β = 0.99 0.58 0.96 1.00 0.97 1.00 1.00
L=5 K=5 β = 0.91 0.00 0.78 0.87 0.83 0.95 1.00
β = 0.93 0.00 0.80 0.88 0.84 0.96 1.00
β = 0.95 0.00 0.82 0.90 0.86 0.97 1.00
β = 0.97 0.00 0.85 0.92 0.88 1.00 1.00
β = 0.99 0.00 0.88 0.95 0.91 1.00 1.00
K = 25 β = 0.91 0.00 0.87 0.93 0.92 1.00 1.00
β = 0.93 0.03 0.88 0.94 0.93 1.00 1.00
β = 0.95 0.03 0.89 0.95 0.94 1.00 1.00
β = 0.97 0.04 0.91 0.97 0.95 1.00 1.00
β = 0.99 0.04 0.94 1.00 0.96 1.00 1.00

Table A.13: Summary of achieved β-service (CRO/Normal)



Min. 1st Qu. Median Mean 3rd Qu. Max.


L=1 K=5 β = 0.91 0.00 0.95 0.98 0.97 1.00 1.00
β = 0.93 0.00 0.95 0.98 0.97 1.00 1.00
β = 0.95 0.00 0.95 0.98 0.97 1.00 1.00
β = 0.97 0.00 0.95 0.98 0.97 1.00 1.00
β = 0.99 0.00 0.96 1.00 0.98 1.00 1.00
K = 25 β = 0.91 0.69 0.98 1.00 0.99 1.00 1.00
β = 0.93 0.69 0.98 1.00 0.99 1.00 1.00
β = 0.95 0.69 0.98 1.00 0.99 1.00 1.00
β = 0.97 0.69 0.98 1.00 0.99 1.00 1.00
β = 0.99 0.69 0.98 1.00 0.99 1.00 1.00
L=3 K=5 β = 0.91 0.00 0.87 0.93 0.90 1.00 1.00
β = 0.93 0.00 0.87 0.93 0.91 1.00 1.00
β = 0.95 0.00 0.88 0.94 0.91 1.00 1.00
β = 0.97 0.00 0.89 0.94 0.92 1.00 1.00
β = 0.99 0.00 0.92 0.96 0.94 1.00 1.00
K = 25 β = 0.91 0.00 0.93 0.98 0.96 1.00 1.00
β = 0.93 0.00 0.94 0.98 0.96 1.00 1.00
β = 0.95 0.00 0.94 0.98 0.96 1.00 1.00
β = 0.97 0.00 0.94 0.99 0.97 1.00 1.00
β = 0.99 0.00 0.96 1.00 0.97 1.00 1.00
L=5 K=5 β = 0.91 0.00 0.80 0.88 0.84 0.95 1.00
β = 0.93 0.00 0.81 0.89 0.84 0.95 1.00
β = 0.95 0.00 0.82 0.90 0.85 0.96 1.00
β = 0.97 0.00 0.84 0.91 0.87 0.97 1.00
β = 0.99 0.00 0.86 0.93 0.89 1.00 1.00
K = 25 β = 0.91 0.00 0.90 0.96 0.94 1.00 1.00
β = 0.93 0.00 0.90 0.96 0.94 1.00 1.00
β = 0.95 0.00 0.91 0.96 0.94 1.00 1.00
β = 0.97 0.00 0.92 0.97 0.95 1.00 1.00
β = 0.99 0.00 0.94 1.00 0.96 1.00 1.00

Table A.14: Summary of achieved β-service (ES/Gamma)



Min. 1st Qu. Median Mean 3rd Qu. Max.


L=1 K=5 β = 0.91 0.00 0.91 0.95 0.94 1.00 1.00
β = 0.93 0.00 0.92 0.95 0.94 1.00 1.00
β = 0.95 0.00 0.93 0.96 0.95 1.00 1.00
β = 0.97 0.00 0.94 0.97 0.96 1.00 1.00
β = 0.99 0.00 0.96 1.00 0.97 1.00 1.00
K = 25 β = 0.91 0.66 0.94 0.97 0.97 1.00 1.00
β = 0.93 0.66 0.95 0.98 0.97 1.00 1.00
β = 0.95 0.66 0.95 0.98 0.97 1.00 1.00
β = 0.97 0.66 0.96 1.00 0.98 1.00 1.00
β = 0.99 0.78 0.98 1.00 0.99 1.00 1.00
L=3 K=5 β = 0.91 0.00 0.84 0.90 0.89 0.97 1.00
β = 0.93 0.00 0.86 0.92 0.90 0.98 1.00
β = 0.95 0.00 0.88 0.93 0.91 1.00 1.00
β = 0.97 0.00 0.90 0.95 0.93 1.00 1.00
β = 0.99 0.00 0.93 0.98 0.95 1.00 1.00
K = 25 β = 0.91 0.00 0.90 0.95 0.94 1.00 1.00
β = 0.93 0.00 0.91 0.96 0.95 1.00 1.00
β = 0.95 0.00 0.92 0.96 0.95 1.00 1.00
β = 0.97 0.00 0.94 0.98 0.96 1.00 1.00
β = 0.99 0.00 0.96 1.00 0.98 1.00 1.00
L=5 K=5 β = 0.91 0.00 0.79 0.87 0.83 0.95 1.00
β = 0.93 0.00 0.81 0.89 0.85 0.96 1.00
β = 0.95 0.00 0.83 0.91 0.87 0.98 1.00
β = 0.97 0.00 0.86 0.93 0.89 1.00 1.00
β = 0.99 0.00 0.90 0.97 0.92 1.00 1.00
K = 25 β = 0.91 0.00 0.87 0.93 0.92 1.00 1.00
β = 0.93 0.00 0.88 0.94 0.93 1.00 1.00
β = 0.95 0.00 0.90 0.95 0.94 1.00 1.00
β = 0.97 0.04 0.92 0.98 0.95 1.00 1.00
β = 0.99 0.29 0.96 1.00 0.97 1.00 1.00

Table A.15: Summary of achieved β-service (ES/Normal)



Min. 1st Qu. Median Mean 3rd Qu. Max.


L=1 K=5 β = 0.91 0.00 0.92 0.95 0.94 1.00 1.00
β = 0.93 0.00 0.92 0.95 0.94 1.00 1.00
β = 0.95 0.00 0.93 0.96 0.95 1.00 1.00
β = 0.97 0.00 0.94 0.97 0.96 1.00 1.00
β = 0.99 0.00 0.97 1.00 0.98 1.00 1.00
K = 25 β = 0.91 0.49 0.96 0.99 0.97 1.00 1.00
β = 0.93 0.61 0.96 0.99 0.97 1.00 1.00
β = 0.95 0.65 0.96 0.99 0.97 1.00 1.00
β = 0.97 0.65 0.96 1.00 0.98 1.00 1.00
β = 0.99 0.69 0.99 1.00 0.99 1.00 1.00
L=3 K=5 β = 0.91 0.00 0.89 0.93 0.92 0.98 1.00
β = 0.93 0.00 0.90 0.95 0.93 1.00 1.00
β = 0.95 0.00 0.92 0.97 0.95 1.00 1.00
β = 0.97 0.00 0.95 0.99 0.96 1.00 1.00
β = 0.99 0.00 0.98 1.00 0.98 1.00 1.00
K = 25 β = 0.91 0.33 0.92 0.96 0.95 1.00 1.00
β = 0.93 0.03 0.93 0.97 0.96 1.00 1.00
β = 0.95 0.39 0.94 0.99 0.96 1.00 1.00
β = 0.97 0.58 0.96 1.00 0.98 1.00 1.00
β = 0.99 0.61 1.00 1.00 0.99 1.00 1.00
L=5 K=5 β = 0.91 0.00 0.88 0.94 0.92 1.00 1.00
β = 0.93 0.00 0.90 0.96 0.93 1.00 1.00
β = 0.95 0.00 0.92 0.97 0.95 1.00 1.00
β = 0.97 0.00 0.95 1.00 0.96 1.00 1.00
β = 0.99 0.00 0.99 1.00 0.98 1.00 1.00
K = 25 β = 0.91 0.09 0.91 0.96 0.94 1.00 1.00
β = 0.93 0.55 0.93 0.98 0.96 1.00 1.00
β = 0.95 0.42 0.95 1.00 0.97 1.00 1.00
β = 0.97 0.45 0.97 1.00 0.98 1.00 1.00
β = 0.99 0.66 1.00 1.00 0.99 1.00 1.00

Table A.16: Summary of achieved β-service (INAR)



Min. 1st Qu. Median Mean 3rd Qu. Max.


L=1 K=5 β = 0.91 0.00 0.95 0.98 0.97 1.00 1.00
β = 0.93 0.00 0.95 0.98 0.97 1.00 1.00
β = 0.95 0.00 0.95 0.98 0.97 1.00 1.00
β = 0.97 0.00 0.95 0.98 0.97 1.00 1.00
β = 0.99 0.39 0.97 1.00 0.98 1.00 1.00
K = 25 β = 0.91 0.69 0.98 1.00 0.99 1.00 1.00
β = 0.93 0.69 0.98 1.00 0.99 1.00 1.00
β = 0.95 0.69 0.98 1.00 0.99 1.00 1.00
β = 0.97 0.69 0.98 1.00 0.99 1.00 1.00
β = 0.99 0.73 0.98 1.00 0.99 1.00 1.00
L=3 K=5 β = 0.91 0.00 0.87 0.93 0.90 1.00 1.00
β = 0.93 0.00 0.88 0.93 0.91 1.00 1.00
β = 0.95 0.00 0.89 0.94 0.91 1.00 1.00
β = 0.97 0.00 0.90 0.95 0.93 1.00 1.00
β = 0.99 0.00 0.93 0.97 0.95 1.00 1.00
K = 25 β = 0.91 0.00 0.94 0.98 0.96 1.00 1.00
β = 0.93 0.00 0.94 0.98 0.96 1.00 1.00
β = 0.95 0.00 0.94 0.98 0.96 1.00 1.00
β = 0.97 0.00 0.94 0.99 0.97 1.00 1.00
β = 0.99 0.00 0.96 1.00 0.98 1.00 1.00
L=5 K=5 β = 0.91 0.00 0.81 0.88 0.84 0.96 1.00
β = 0.93 0.00 0.82 0.90 0.85 0.96 1.00
β = 0.95 0.00 0.84 0.91 0.87 0.97 1.00
β = 0.97 0.00 0.86 0.92 0.88 0.98 1.00
β = 0.99 0.00 0.88 0.95 0.90 1.00 1.00
K = 25 β = 0.91 0.03 0.90 0.96 0.94 1.00 1.00
β = 0.93 0.03 0.90 0.96 0.94 1.00 1.00
β = 0.95 0.03 0.91 0.96 0.95 1.00 1.00
β = 0.97 0.03 0.93 0.98 0.95 1.00 1.00
β = 0.99 0.00 0.95 1.00 0.97 1.00 1.00

Table A.17: Summary of achieved β-service (LEV/Gamma)



Min. 1st Qu. Median Mean 3rd Qu. Max.


L=1 K=5 β = 0.91 0.00 0.90 0.94 0.93 0.98 1.00
β = 0.93 0.00 0.91 0.95 0.94 0.99 1.00
β = 0.95 0.00 0.92 0.95 0.94 1.00 1.00
β = 0.97 0.00 0.93 0.96 0.95 1.00 1.00
β = 0.99 0.00 0.95 0.98 0.96 1.00 1.00
K = 25 β = 0.91 0.65 0.94 0.97 0.96 1.00 1.00
β = 0.93 0.65 0.95 0.98 0.97 1.00 1.00
β = 0.95 0.66 0.95 0.98 0.97 1.00 1.00
β = 0.97 0.66 0.96 0.99 0.97 1.00 1.00
β = 0.99 0.66 0.97 1.00 0.98 1.00 1.00
L=3 K=5 β = 0.91 0.00 0.83 0.90 0.88 0.96 1.00
β = 0.93 0.00 0.85 0.91 0.89 0.97 1.00
β = 0.95 0.00 0.86 0.92 0.90 0.98 1.00
β = 0.97 0.00 0.89 0.94 0.91 1.00 1.00
β = 0.99 0.00 0.91 0.96 0.93 1.00 1.00
K = 25 β = 0.91 0.00 0.90 0.95 0.94 1.00 1.00
β = 0.93 0.00 0.91 0.96 0.94 1.00 1.00
β = 0.95 0.00 0.92 0.96 0.95 1.00 1.00
β = 0.97 0.05 0.93 0.98 0.96 1.00 1.00
β = 0.99 0.05 0.96 1.00 0.97 1.00 1.00
L=5 K=5 β = 0.91 0.00 0.78 0.87 0.82 0.94 1.00
β = 0.93 0.00 0.80 0.88 0.84 0.95 1.00
β = 0.95 0.00 0.82 0.90 0.86 0.97 1.00
β = 0.97 0.00 0.85 0.92 0.88 1.00 1.00
β = 0.99 0.00 0.88 0.95 0.90 1.00 1.00
K = 25 β = 0.91 0.05 0.87 0.93 0.92 1.00 1.00
β = 0.93 0.00 0.88 0.94 0.93 1.00 1.00
β = 0.95 0.00 0.89 0.95 0.94 1.00 1.00
β = 0.97 0.00 0.92 0.97 0.95 1.00 1.00
β = 0.99 0.05 0.94 1.00 0.96 1.00 1.00

Table A.18: Summary of achieved β-service (LEV/Normal)



Min. 1st Qu. Median Mean 3rd Qu. Max.


L=1 K=5 β = 0.91 0.00 0.95 0.98 0.97 1.00 1.00
β = 0.93 0.00 0.95 0.98 0.97 1.00 1.00
β = 0.95 0.00 0.95 0.98 0.97 1.00 1.00
β = 0.97 0.00 0.95 0.98 0.97 1.00 1.00
β = 0.99 0.00 0.96 0.99 0.97 1.00 1.00
K = 25 β = 0.91 0.69 0.98 1.00 0.99 1.00 1.00
β = 0.93 0.69 0.98 1.00 0.99 1.00 1.00
β = 0.95 0.69 0.98 1.00 0.99 1.00 1.00
β = 0.97 0.69 0.98 1.00 0.99 1.00 1.00
β = 0.99 0.69 0.98 1.00 0.99 1.00 1.00
L=3 K=5 β = 0.91 0.00 0.86 0.92 0.90 0.99 1.00
β = 0.93 0.00 0.87 0.93 0.90 0.99 1.00
β = 0.95 0.00 0.87 0.93 0.91 1.00 1.00
β = 0.97 0.00 0.88 0.94 0.91 1.00 1.00
β = 0.99 0.00 0.91 0.96 0.93 1.00 1.00
K = 25 β = 0.91 0.00 0.93 0.98 0.96 1.00 1.00
β = 0.93 0.00 0.94 0.98 0.96 1.00 1.00
β = 0.95 0.00 0.94 0.98 0.96 1.00 1.00
β = 0.97 0.00 0.94 0.98 0.96 1.00 1.00
β = 0.99 0.00 0.95 1.00 0.97 1.00 1.00
L=5 K=5 β = 0.91 0.00 0.79 0.87 0.83 0.95 1.00
β = 0.93 0.00 0.80 0.88 0.83 0.95 1.00
β = 0.95 0.00 0.81 0.88 0.84 0.95 1.00
β = 0.97 0.00 0.83 0.90 0.86 0.96 1.00
β = 0.99 0.00 0.86 0.93 0.89 0.99 1.00
K = 25 β = 0.91 0.00 0.89 0.95 0.94 1.00 1.00
β = 0.93 0.00 0.90 0.96 0.94 1.00 1.00
β = 0.95 0.00 0.90 0.96 0.94 1.00 1.00
β = 0.97 0.00 0.91 0.96 0.94 1.00 1.00
β = 0.99 0.00 0.93 0.98 0.96 1.00 1.00

Table A.19: Summary of achieved β-service (SYN/Gamma)



Min. 1st Qu. Median Mean 3rd Qu. Max.


L=1 K=5 β = 0.91 0.00 0.90 0.94 0.94 1.00 1.00
β = 0.93 0.00 0.91 0.95 0.94 1.00 1.00
β = 0.95 0.00 0.92 0.96 0.95 1.00 1.00
β = 0.97 0.00 0.93 0.97 0.96 1.00 1.00
β = 0.99 0.00 0.95 0.99 0.97 1.00 1.00
K = 25 β = 0.91 0.65 0.94 0.97 0.97 1.00 1.00
β = 0.93 0.65 0.95 0.98 0.97 1.00 1.00
β = 0.95 0.65 0.95 0.98 0.97 1.00 1.00
β = 0.97 0.69 0.96 1.00 0.98 1.00 1.00
β = 0.99 0.69 0.97 1.00 0.98 1.00 1.00
L=3 K=5 β = 0.91 0.00 0.83 0.90 0.88 0.96 1.00
β = 0.93 0.00 0.85 0.91 0.89 0.97 1.00
β = 0.95 0.00 0.86 0.92 0.90 0.98 1.00
β = 0.97 0.00 0.88 0.94 0.92 1.00 1.00
β = 0.99 0.00 0.92 0.96 0.94 1.00 1.00
K = 25 β = 0.91 0.03 0.90 0.95 0.94 1.00 1.00
β = 0.93 0.58 0.91 0.95 0.94 1.00 1.00
β = 0.95 0.58 0.92 0.96 0.95 1.00 1.00
β = 0.97 0.61 0.93 0.98 0.96 1.00 1.00
β = 0.99 0.67 0.95 1.00 0.97 1.00 1.00
L=5 K=5 β = 0.91 0.00 0.77 0.86 0.81 0.94 1.00
β = 0.93 0.00 0.79 0.87 0.83 0.95 1.00
β = 0.95 0.00 0.81 0.89 0.85 0.96 1.00
β = 0.97 0.00 0.84 0.91 0.87 1.00 1.00
β = 0.99 0.00 0.88 0.95 0.91 1.00 1.00
K = 25 β = 0.91 0.00 0.86 0.92 0.91 1.00 1.00
β = 0.93 0.00 0.87 0.93 0.92 1.00 1.00
β = 0.95 0.00 0.88 0.95 0.93 1.00 1.00
β = 0.97 0.04 0.91 0.97 0.94 1.00 1.00
β = 0.99 0.04 0.94 1.00 0.96 1.00 1.00

Table A.20: Summary of achieved β-service (SYN/Normal)



Min. 1st Qu. Median Mean 3rd Qu. Max.


L=1 K=5 β = 0.91 0.00 0.95 0.98 0.97 1.00 1.00
β = 0.93 0.00 0.95 0.98 0.97 1.00 1.00
β = 0.95 0.00 0.95 0.98 0.97 1.00 1.00
β = 0.97 0.00 0.95 0.98 0.97 1.00 1.00
β = 0.99 0.00 0.96 0.99 0.97 1.00 1.00
K = 25 β = 0.91 0.69 0.98 1.00 0.99 1.00 1.00
β = 0.93 0.69 0.98 1.00 0.99 1.00 1.00
β = 0.95 0.69 0.98 1.00 0.99 1.00 1.00
β = 0.97 0.69 0.98 1.00 0.99 1.00 1.00
β = 0.99 0.69 0.98 1.00 0.99 1.00 1.00
L=3 K=5 β = 0.91 0.00 0.86 0.92 0.90 0.99 1.00
β = 0.93 0.00 0.87 0.93 0.90 0.99 1.00
β = 0.95 0.00 0.87 0.93 0.91 1.00 1.00
β = 0.97 0.00 0.88 0.94 0.91 1.00 1.00
β = 0.99 0.00 0.90 0.95 0.92 1.00 1.00
K = 25 β = 0.91 0.00 0.93 0.98 0.96 1.00 1.00
β = 0.93 0.00 0.93 0.98 0.96 1.00 1.00
β = 0.95 0.00 0.94 0.98 0.96 1.00 1.00
β = 0.97 0.00 0.94 0.98 0.96 1.00 1.00
β = 0.99 0.00 0.95 1.00 0.97 1.00 1.00
L=5 K=5 β = 0.91 0.00 0.79 0.87 0.82 0.95 1.00
β = 0.93 0.00 0.80 0.88 0.83 0.95 1.00
β = 0.95 0.00 0.81 0.88 0.84 0.95 1.00
β = 0.97 0.00 0.82 0.89 0.85 0.96 1.00
β = 0.99 0.00 0.84 0.91 0.86 0.98 1.00
K = 25 β = 0.91 0.00 0.89 0.95 0.93 1.00 1.00
β = 0.93 0.00 0.89 0.95 0.94 1.00 1.00
β = 0.95 0.00 0.90 0.96 0.94 1.00 1.00
β = 0.97 0.00 0.91 0.96 0.94 1.00 1.00
β = 0.99 0.00 0.92 0.98 0.95 1.00 1.00

Table A.21: Summary of achieved β-service (TEU/Gamma)



Min. 1st Qu. Median Mean 3rd Qu. Max.


L=1 K=5 β = 0.91 0.00 0.90 0.94 0.93 0.98 1.00
β = 0.93 0.00 0.90 0.94 0.93 0.98 1.00
β = 0.95 0.00 0.91 0.95 0.94 0.99 1.00
β = 0.97 0.00 0.92 0.95 0.94 1.00 1.00
β = 0.99 0.00 0.94 0.98 0.96 1.00 1.00
K = 25 β = 0.91 0.65 0.94 0.97 0.96 1.00 1.00
β = 0.93 0.65 0.94 0.97 0.97 1.00 1.00
β = 0.95 0.65 0.95 0.98 0.97 1.00 1.00
β = 0.97 0.65 0.95 0.98 0.97 1.00 1.00
β = 0.99 0.66 0.97 1.00 0.98 1.00 1.00
L=3 K=5 β = 0.91 0.00 0.82 0.88 0.86 0.94 1.00
β = 0.93 0.00 0.83 0.89 0.87 0.95 1.00
β = 0.95 0.00 0.84 0.90 0.88 0.95 1.00
β = 0.97 0.00 0.86 0.92 0.90 0.97 1.00
β = 0.99 0.00 0.90 0.95 0.92 1.00 1.00
K = 25 β = 0.91 0.00 0.89 0.94 0.93 1.00 1.00
β = 0.93 0.00 0.90 0.94 0.94 1.00 1.00
β = 0.95 0.00 0.91 0.95 0.94 1.00 1.00
β = 0.97 0.00 0.92 0.96 0.95 1.00 1.00
β = 0.99 0.00 0.95 0.99 0.97 1.00 1.00
L=5 K=5 β = 0.91 0.00 0.75 0.83 0.78 0.91 1.00
β = 0.93 0.00 0.76 0.84 0.80 0.92 1.00
β = 0.95 0.00 0.78 0.86 0.81 0.93 1.00
β = 0.97 0.00 0.81 0.88 0.84 0.95 1.00
β = 0.99 0.00 0.85 0.92 0.87 0.98 1.00
K = 25 β = 0.91 0.00 0.85 0.91 0.90 1.00 1.00
β = 0.93 0.00 0.86 0.92 0.91 1.00 1.00
β = 0.95 0.00 0.88 0.93 0.92 1.00 1.00
β = 0.97 0.00 0.89 0.95 0.93 1.00 1.00
β = 0.99 0.00 0.92 0.98 0.95 1.00 1.00

Table A.22: Summary of achieved β-service (TEU/Normal)


Bibliography

Abramowitz, M. and I. Stegun (1972). Handbook of Mathematical Functions


with Formulas. Courier Corporation.

Alzaid, A. A. and M. Al-Osh (1990). “An Integer-Valued pth-Order Autore-


gressive Structure (INAR(p)) Process”. In: Journal of Applied Probability
27.2, pp. 314–324.

Askarany, D., H. Yazdifar, and S. Askary (2010). “Supply Chain Manage-


ment, Activity-Based Costing and Organisational Factors”. In: Interna-
tional Journal of Production Economics 127.2, pp. 238–248.

Axsäter, S. (2006). Inventory Control. Vol. 90. International Series in Op-


erations Research & Management Science. New York : Springer.

Babai, M. Z., T. Ladhari, and I. Lajili (2015). “On the Inventory Per-
formance of Multi-Criteria Classification Methods: Empirical Investiga-
tion”. In: International Journal of Production Research 53, pp. 279–290.

Babai, M. Z., A. A. Syntetos, and R. H. Teunter (2014). “Intermittent


Demand Forecasting: an Empirical Study on Accuracy and the Risk of
Obsolescence”. In: International Journal of Production Economics 157,
pp. 212–219.

Bakker, M., J. Riezebos, and R. H. Teunter (2012). “Review of Inventory


Systems with Deterioration Since 2001”. In: European Journal of Oper-
ational Research 221.2, pp. 275–284.

© Springer Fachmedien Wiesbaden 2016


T. Engelmeyer, Managing Intermittent Demand,
DOI 10.1007/978-3-658-14062-5
150 Bibliography

Bloomberg (2014). Shareprices of European Companies Between January


2000 and May 2014. Retrieved from Bloomberg database.

Bouakiz, M. and M. J. Sobel (1992). “Inventory Control with an Exponen-


tial Utility Criterion”. In: Operations Research 40.3, pp. 603–608.

Boylan, J. E., A. A. Syntetos, and G. C. Karakostas (2008). “Classification


for Forecasting and Stock Control: a Case Study”. In: The Journal of the
Operational Research Society 59.4, pp. 473–481.

Brännäs, K. and S. Quoreshi (2010). “Integer-Valued Moving Average Mod-


elling of the Number of Transactions in Stocks”. In: Applied Financial
Economics 20, pp. 1429–1440.

Brockwell, P. J. and R. A. Davis (2009). Time Series: Theory and Methods.


Springer.

Bu, R. and B. P. McCabe (2008). “Model Selection, Estimation and Fore-


casting in INAR(p) Models: a Likelihood-Based Markov Chain Approach”.
In: International Journal of Forecasting 24.1, pp. 151–162.

Bu, R., B. P. McCabe, and K. Hadri (2008). “Maximum Likelihood Es-


timation of Higher-Order Integer-Valued Autoregressive Processes”. In:
Journal of Time Series Analysis 29.6, pp. 973–994.

Chatfield, C. (2000). Time-Series Forecasting. CRC Press.

Choi, T. (2013). Handbook of EOQ Inventory Problems. Ed. by T. Choi.


Vol. 197. Stochastic and Deterministic Models and Applications. Boston,
MA: Springer Science & Business Media.

Crone, S. F. (2010). Neuronale Netze zur Prognose und Disposition im


Handel. Springer.
Bibliography 151

Croston, J. D. (1972). “Forecasting and Stock Control for Intermittent Demands”. In: Operational Research Quarterly 23.3, pp. 289–303.

Drake, M. J. and K. A. Marley (2014). “A Century of the EOQ”. In: Handbook of EOQ Inventory Problems.

Du, J. and Y. Li (1991). “The Integer-Valued Autoregressive (INAR(p)) Model”. In: Journal of Time Series Analysis 12.2, pp. 129–142.

Dunsmuir, W. and R. D. Snyder (1989). “Control of Inventories with Intermittent Demand”. In: European Journal of Operational Research 40, pp. 16–21.

Eaves, A. H. and B. G. Kingsman (2004). “Forecasting for the Ordering and Stock-Holding of Spare Parts”. In: The Journal of the Operational Research Society 55.4, pp. 431–437.

Enciso-Mora, V., P. Neal, and T. Subba Rao (2009). “Efficient Order Selection Algorithms for Integer-Valued ARMA Processes”. In: Journal of Time Series Analysis 30.1, pp. 1–18.

Everaert, P. et al. (2008). “Cost Modeling in Logistics Using Time-Driven ABC”. In: International Journal of Physical Distribution & Logistics Management 38.3, pp. 172–191.

Freeland, R. K. and B. P. McCabe (2004). “Forecasting Discrete Valued Low Count Time Series”. In: International Journal of Forecasting 20.3, pp. 427–434.

Gardner Jr., E. S. (1985). “Exponential Smoothing: The State of the Art”. In: Journal of Forecasting 4.1, pp. 1–28.

– (2006). “Exponential Smoothing: The State of the Art—Part II”. In: International Journal of Forecasting 22.4, pp. 637–666.

Goh, M. (1994). “EOQ Models with General Demand and Holding Cost Functions”. In: European Journal of Operational Research 73.1, pp. 50–54.

Goyal, S. K. and B. C. Giri (2001). “Recent Trends in Modeling of Deteriorating Inventory”. In: European Journal of Operational Research 134.1, pp. 1–16.

Grubbström, R. W. and A. Erdem (1999). “The EOQ with Backlogging Derived Without Derivatives”. In: International Journal of Production Economics 59.1-3, pp. 529–530.

Gudehus, T. (2005). Logistik. Grundlagen - Strategien - Anwendungen. Berlin/Heidelberg: Springer.

Haneveld, W. K. and R. H. Teunter (1998). “Effects of Discounting and Demand Rate Variability on the EOQ”. In: International Journal of Production Economics 54, pp. 173–192.

Harris, F. W. (1990). “How Many Parts to Make at Once”. In: Operations Research 38.6, pp. 947–950.

Hastie, T., R. Tibshirani, and J. Friedman (2009). The Elements of Statistical Learning. Data Mining, Inference, and Prediction, Second Edition. Springer.

Hyndman, R. J. and A. B. Koehler (2006). “Another Look at Measures of Forecast Accuracy”. In: International Journal of Forecasting 22.4, pp. 679–688.

Hyndman, R. J., A. B. Koehler, et al. (2008). Forecasting with Exponential Smoothing. The State Space Approach. Berlin, Heidelberg: Springer.

Johnson, N. L., S. Kotz, and N. Balakrishnan (1995a). Continuous Univariate Distributions. Volume 2. John Wiley & Sons, Inc.

– (1995b). Continuous Univariate Distributions. Volume 1. John Wiley & Sons, Inc.

Johnston, F. R. and J. E. Boylan (2003). “An Examination of the Size of Orders From Customers, Their Characterisation and the Implications for Inventory Control of Slow Moving Items”. In: The Journal of the Operational Research Society 54, pp. 833–837.

Jones, C. S. and S. Tuzel (2013). “Inventory Investment and the Cost of Capital”. In: Journal of Financial Economics 107.3, pp. 557–579.

Jung, R. C. and A. R. Tremayne (2006). “Coherent Forecasting in Integer Time Series Models”. In: International Journal of Forecasting 22.2, pp. 223–238.

– (2010). “Convolution-Closed Models for Count Time Series with Applications”. In: Journal of Time Series Analysis 32.3, pp. 268–280.

Khan, M. et al. (2011). “A Review of the Extensions of a Modified EOQ Model for Imperfect Quality Items”. In: International Journal of Production Economics 132.1, pp. 1–12.

Kirchgässner, G., J. Wolters, and U. Hassler (2012). Introduction to Modern Time Series Analysis. Springer Texts in Business and Economics. Berlin, Heidelberg: Springer Science & Business Media.

Leven, E. and A. Segerstedt (2004). “Inventory Control with a Modified Croston Procedure and Erlang Distribution”. In: International Journal of Production Economics 90.3, pp. 361–367.

Lintner, J. (1965). “The Valuation of Risk Assets and the Selection of Risky Investments in Stock Portfolios and Capital Budgets”. In: The Review of Economics and Statistics 47.1, pp. 13–37.

Maiti, R. and A. Biswas (2015). “Coherent Forecasting for Stationary Time Series of Discrete Data”. In: Advances in Statistical Analysis 99, pp. 337–365.

McCabe, B. P., G. M. Martin, and D. Harris (2011). “Efficient Probabilistic Forecasts for Counts”. In: Journal of the Royal Statistical Society: Series B (Statistical Methodology) 73.2, pp. 253–272.

Mohammadipour, M. (2009). “Intermittent Demand Forecasting with Integer Autoregressive Moving Average Models”. PhD thesis.

Moors, J. J. and L. W. Strijbosch (2002). “Exact Fill Rates for (R, s, S) Inventory Control with Gamma Distributed Demand”. In: The Journal of the Operational Research Society 53.11, pp. 1268–1274.

Nahmias, S. (1979). “Simple Approximations for a Variety of Dynamic Leadtime Lost-Sales Inventory Models”. In: Operations Research 27.5, pp. 904–924.

– (2009). Production and Operations Analysis. McGraw-Hill.

Neal, P. and T. Subba Rao (2007). “MCMC for Integer-Valued ARMA Processes”. In: Journal of Time Series Analysis 28.1, pp. 92–110.

Nelder, J. A. and R. Mead (1965). “A Simplex Method for Function Minimization”. In: The Computer Journal 7.4, pp. 308–313.

Ng, W. L. (2007). “A Simple Classifier for Multiple Criteria ABC Analysis”. In: European Journal of Operational Research 177.1, pp. 344–353.

Al-Osh, M. and A. A. Alzaid (1987). “First-Order Integer-Valued Autoregressive (INAR(1)) Process”. In: Journal of Time Series Analysis 8.3, pp. 261–275.

– (1988). “Integer-Valued Moving Average (INMA) Process”. In: Statistical Papers 29.1, pp. 281–300.

R Core Team (2015). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.

Rao, A. V. (1973). “A Comment on: Forecasting and Stock Control for Intermittent Demands”. In: Operational Research Quarterly 24.4, pp. 639–640.

Sani, B. and B. G. Kingsman (1997). “Selecting the Best Periodic Inventory Control and Demand Forecasting Methods for Low Demand Items”. In: The Journal of the Operational Research Society 48.7, pp. 700–713.

Schneider, H. (1978). “Methods for Determining the Re-Order Point of an (s,S) Ordering Policy When a Service Level Is Specified”. In: The Journal of the Operational Research Society 29.12, pp. 1181–1193.

Schultz, C. R. (1987). “Forecasting and Inventory Control for Sporadic Demand Under Periodic Review”. In: The Journal of the Operational Research Society 38.5, pp. 453–458.

Sharpe, W. F. (1964). “Capital Asset Prices: A Theory of Market Equilibrium Under Conditions of Risk”. In: The Journal of Finance 19.3, pp. 425–442.

Shenstone, L. and R. J. Hyndman (2005). “Stochastic Models Underlying Croston’s Method for Intermittent Demand Forecasting”. In: Journal of Forecasting 24.6, pp. 389–402.

Singhal, V. R. and A. S. Raturi (1990). “The Effect of Inventory Decisions and Parameters on the Opportunity Cost of Capital”. In: Journal of Operations Management 9.3, pp. 1–15.

Snyder, R. D. (1984). “Inventory Control with the Gamma Probability Distribution”. In: European Journal of Operational Research 17.3, pp. 373–381.

Steutel, F. W. and K. van Harn (1979). “Discrete Analogues of Self-Decomposability and Stability”. In: The Annals of Probability 7.5, pp. 893–899.

Sun, D. and M. Queyranne (2002). “Production and Inventory Model Using Net Present Value”. In: Operations Research 50.3, pp. 528–537.

Syntetos, A. A., M. Z. Babai, et al. (2011). “Distributional Assumptions for Parametric Forecasting of Intermittent Demand”. In: Service Parts Management: Demand Forecasting and Inventory Control. Ed. by N. Altay and L. A. Litteral. Springer London, pp. 31–52.

Syntetos, A. A. and J. E. Boylan (2001). “On the Bias of Intermittent Demand Estimates”. In: International Journal of Production Economics 71.1-3, pp. 457–466.

– (2005). “The Accuracy of Intermittent Demand Estimates”. In: International Journal of Forecasting 21.2, pp. 303–314.

Syntetos, A. A., J. E. Boylan, and J. D. Croston (2005). “On the Categorization of Demand Patterns”. In: The Journal of the Operational Research Society 56.5, pp. 495–503.

Teunter, R. H., M. Z. Babai, and A. A. Syntetos (2009). “ABC Classification: Service Levels and Inventory Costs”. In: Production and Operations Management 19.3, pp. 343–352.

Teunter, R. H. and L. Duncan (2009). “Forecasting Intermittent Demand: A Comparative Study”. In: The Journal of the Operational Research Society 60.3, pp. 321–329.

Teunter, R. H., A. A. Syntetos, and M. Z. Babai (2010). “Determining Order-Up-To Levels Under Periodic Review for Compound Binomial (Intermittent) Demand”. In: European Journal of Operational Research 203.3, pp. 619–624.

– (2011). “Intermittent Demand: Linking Forecasting to Inventory Obsolescence”. In: European Journal of Operational Research 214.3, pp. 606–615.

Themido, I. et al. (2000). “Logistic Costs Case Study-An ABC Approach”. In: The Journal of the Operational Research Society 51.10, pp. 1148–1157.

Weiß, C. H. (2008). “Thinning Operations for Modeling Time Series of Counts—a Survey”. In: Advances in Statistical Analysis 92, pp. 319–341.

Weiss, H. J. (1982). “Economic Order Quantity Models with Nonlinear Holding Costs”. In: European Journal of Operational Research 9.1, pp. 56–60.

Willemain, T. R. et al. (1994). “Forecasting Intermittent Demand in Manufacturing: A Comparative Evaluation of Croston’s Method”. In: International Journal of Forecasting 10.4, pp. 529–538.

Williams, T. M. (1984). “Stock Control with Sporadic and Slow-Moving Demand”. In: The Journal of the Operational Research Society 35.10, pp. 939–948.
