Analytics For Supply Chain and Operations 7-2014
ANALYTICS FOR SUPPLY CHAIN AND OPERATIONS
Background
In the last few years, there has been increased interest in using analytics in many areas
of business. Part of the interest is driven by increased availability of data from Enterprise
Resource Planning (ERP) systems as well as “big data” from sources such as social media or
in the supply chain context from sensors and RFID tags. There are also more advanced
technological capabilities to use analytics packages running on hardware that has larger
memory and processing capacity or on the “cloud.”
Analytics have always played an important role in supply chain and operations in the
form of demand forecasting and planning, transportation routing, inventory optimization and
network design. Analytics are also widely applied in related areas such as manufacturing,
procurement and pricing. More recently analytics have been applied to supply chain
segmentation, risk management, complexity reduction and manufacturing flexibility.
Many companies struggle with where to start investing resources in analytics. They may
also be concerned with the possible disconnect between the business requirements and the
analytics process. They would also prefer to have this capability in house so as to be able to
continuously support operations with analytics.
OPS Rules has developed expertise that enables this process through its
analyze/innovate/transform approach to Supply Chain and Operations Analytics.
- Analyze
  - Understand the challenges
  - Define the goals and related tradeoffs
  - Select the correct analytics modeling technology
  - Apply a proven methodology to make sure the model corresponds to the customer's operations
- Innovate
  - Work with the customer to come up with the questions that will drive the scenario and trade-off analysis
  - Work closely with the customer to make sure the work corresponds to the needs and produces applicable results
- Transform
  - Deliver actionable results that help achieve improvements fast
  - Transfer knowledge to the customer team for long-term productivity
This methodology provides not only a proven way to achieve significant results, but
also a way to transfer the knowledge and skills to the team in order to continue to benefit from
this approach.
Modeling Methodology
Our goal is to apply proven analytic and data-driven approaches to examine the
opportunities we believe exist for many organizations. After understanding the area of concern
and how it can best be approached, we create a validated model of the current system. We
compare the model results to the details of the business. Once the results are acceptable, we
analyze a complete set of improvement scenarios using simulation and optimization tools.
The results of these models provide a way to analyze the tradeoffs in the system as well as new ideas for improvement. The models combine detailed mathematics with universal manufacturing and supply chain laws that capture the relationships among the key variables of the system.
One interesting aspect of modeling is that if results for a specific scenario are
inconsistent with the intuition about the business, you need to understand where the
discrepancy is coming from. Sometimes, the model suggests a new and surprising insight that
would have been hard or impossible to obtain without the model. But, sometimes, it is an
indication of a problem with the data, the assumptions or the model. Any such discrepancy
needs to be understood before continuing with the model.
We also leverage empirical rules, based on prior research at many companies, that explain the relationships between operations strategies and channel characteristics, product attributes, customer value and information technology capability.
Typically, we follow a nine-step process, with the customer involved in all steps.
The expertise required to perform this type of analysis is threefold. First, it requires a deep understanding of the company's operations. Second, it requires knowledge of the analytics tools and modelling technology. Finally, it requires knowledge of supply chain principles in areas such as inventory, production and transportation. Capabilities in these three areas combine to generate ideas for scenarios and to identify the drivers of costs and inefficiencies in the system.
A good example of what this type of analysis involves is the work we did with PepsiCo Worldwide Flavours (PWF) on end-to-end inventory optimization, which they call "Attila the Hun" inventory optimization. PWF recently went through a reorganization that led to a reassessment of inventory in the manufacturing plants. With a multi-tier network of three plants, four Distribution Centers (DCs) serving domestic and international markets, ~500 finished goods and ~2,000 components and raw materials, this was not something that could be done easily. Management realized that this complex multi-level supply chain network could not be fully optimized using single-echelon optimization methods.
Therefore the company chose to work with OPS Rules and deploy an end-to-end inventory optimization process. The initial part of the process was to create a validated baseline model of PWF's network; these are steps 1, 2 and 3 of our methodology. We also created optimized models of the baseline, which is step 4.
The next steps, covering steps 5, 6 and 7 of our methodology and the most creative parts of the work, were to plan the various scenarios that would uncover the most information about the drivers of inventory in the PWF supply chain.
Our first discovery was that most of the excess inventory was in raw materials at the
plants. This led to devising several scenarios to see what was driving the raw material
inventory. These included:
We examined what would happen if we changed supplier lead times for the raw materials by either reducing or increasing them. It turned out that this had a relatively minor impact on the amount of inventory held in the plant.
Then we looked at another factor that is often neglected by inventory planners: lead-time variability. We discovered that reducing lead-time variability even a little had a significant impact on supply chain costs.
This implies that one of the best ways for PWF to reduce inventory at the plants is to work
with suppliers to improve their performance by focusing on improving “on time delivery.” This
was an unexpected insight, as management typically expects other factors to be more
important. Steps 8 and 9 would ensue from this conclusion.
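As a rough illustration of why lead-time variability matters so much, the sketch below uses the standard safety-stock approximation to compare the inventory needed when only the average lead time changes versus when its variability is reduced. The demand and lead-time numbers are invented for the example and are not taken from the PWF engagement.

```python
import math

def safety_stock(z, mean_lead_time, sd_lead_time, mean_demand, sd_demand):
    """Standard safety-stock approximation when demand and lead time
    vary independently (both expressed per period)."""
    return z * math.sqrt(mean_lead_time * sd_demand ** 2
                         + (mean_demand ** 2) * sd_lead_time ** 2)

Z = 1.65                      # roughly a 95% cycle service level
MEAN_D, SD_D = 100.0, 30.0    # hypothetical daily demand for one raw material

base     = safety_stock(Z, mean_lead_time=10, sd_lead_time=4, mean_demand=MEAN_D, sd_demand=SD_D)
shorter  = safety_stock(Z, mean_lead_time=8,  sd_lead_time=4, mean_demand=MEAN_D, sd_demand=SD_D)
steadier = safety_stock(Z, mean_lead_time=10, sd_lead_time=2, mean_demand=MEAN_D, sd_demand=SD_D)

print(f"baseline safety stock:          {base:8.0f} units")
print(f"20% shorter average lead time:  {shorter:8.0f} units")
print(f"halved lead-time variability:   {steadier:8.0f} units")
```

In this made-up example, trimming the average lead time barely moves the safety stock, while halving lead-time variability cuts it roughly in half, which is consistent with the pattern described above.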
Analytics is, of course, a very wide area. In this section we focus on a technology that until recently had not been widely applied in supply chains, machine learning, and in particular on how it can be combined with optimization to produce breakthrough results.
A good definition of machine learning is the following: "Machine learning is about learning to do better
in the future based on what was experienced in the past. The emphasis of machine learning is
on automatic methods. The goal is to devise learning algorithms that do the learning
automatically without human intervention or assistance. The machine learning paradigm can be
viewed as programming by example. Often we have a specific task in mind, such as spam
filtering. But rather than program the computer to solve the task directly, in machine learning,
we seek methods by which the computer will come up with its own program based on
examples that we provide."
Some examples of machine learning include demand and price forecasting, character or face
recognition, medical diagnosis, fraud detection, topic spotting (such as trending news). There
are several types of machine learning results – the main ones are:
Regression – here the predicted values are continuous numbers. For instance, when pricing a product or putting your house on the market, the machine learning algorithm learns from previous sales and predicts the price based on this information.
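As a minimal sketch of regression, the snippet below fits a linear model to a handful of made-up past house sales (size and age versus sale price) and predicts the price of a new listing; the data and feature choices are illustrative only.

```python
from sklearn.linear_model import LinearRegression

# Hypothetical past sales: [size in square metres, age in years] -> sale price
X_train = [[70, 30], [85, 12], [120, 5], [95, 20], [150, 2], [60, 45]]
y_train = [210_000, 280_000, 420_000, 310_000, 530_000, 175_000]

model = LinearRegression().fit(X_train, y_train)

# Predict a price for a new 100 m^2, 10-year-old house
predicted = model.predict([[100, 10]])[0]
print(f"suggested listing price: {predicted:,.0f}")
```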
Classification – here the predicted values are discrete: for instance, whether you will be accepted into a certain program or whether or not you have a certain disease. The machine learning algorithm again looks at previous cases and predicts whether your characteristics place you in or out of certain groups.
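A correspondingly small classification sketch, again on invented data: a logistic-regression model learns from past admission decisions and predicts whether a new applicant falls in the "accepted" group.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical past cases: [test score, years of experience] -> accepted (1) or not (0)
X_train = [[55, 0], [62, 1], [70, 2], [78, 4], [85, 3], [90, 6], [65, 5], [50, 2]]
y_train = [0, 0, 0, 1, 1, 1, 1, 0]

clf = LogisticRegression().fit(X_train, y_train)

new_applicant = [[72, 3]]
print("predicted class:", clf.predict(new_applicant)[0])
print("probability of acceptance:", round(clf.predict_proba(new_applicant)[0][1], 2))
```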
Another aspect of machine learning is what type of process is used – these can be:
Supervised – here the training data include the outcome, such as a list of characteristics together with the resulting price or label, and these examples are used to predict new cases. This is typically used in price forecasting, but also in recommendation engines and in character and face recognition.
Unsupervised – here the data are provided with no particular conclusion in mind and the algorithms find patterns and correlations based solely on the data. This is used for data mining and has become more sophisticated in recent years with the use of visualization to help subject matter experts determine the meaning of the results. This type of process has applications in fraud detection, genetics analysis and finance.
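For the unsupervised case, the sketch below clusters transactions purely on their attributes (made-up amounts and times of day), with no labels; a subject matter expert would then interpret whether a small, distant cluster represents, say, potential fraud.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical transactions: [amount in dollars, hour of day]
normal  = rng.normal(loc=[50, 14], scale=[20, 3], size=(200, 2))
unusual = rng.normal(loc=[900, 3], scale=[50, 1], size=(5, 2))
X = np.vstack([normal, unusual])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

sizes = np.bincount(kmeans.labels_)
print("cluster sizes:", sizes)                 # one large cluster, one small outlier cluster
print("cluster centres:\n", kmeans.cluster_centers_.round(1))
```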
While a machine learning approach enhances the quality of grouping and forecasting, the resulting predictions are often made independently of one another. For a system to work in a meaningful way, all the parts need to be optimized relative to each other.
Examples
The following are examples of implementing analytics in different operations areas
and at different levels from manufacturing operations to end-to-end supply chain
optimization.
1) Complexity Reduction
2) End to end inventory optimization at Schneider Electric
3) Supply chain risk management at Ford
4) Simulation of complex manufacturing process
5) Machine Learning and optimization at Rue La La
Challenge: Many companies are aware of the need to simplify their supply chains and
processes in order to become more efficient and profitable. One way to achieve this is to gain
a deeper understanding of customer needs and product performance in order to reduce
complexity. Once a firm focuses on complexity reduction, many opportunities for profit
improvement may become evident, but where do you start?
Approach: We use David Simchi-Levi's framework for supply chain segmentation, which takes advantage of synergies to reduce complexity and benefit from economies of scale. One important concept in this method is "trimming the tail" to reduce product complexity and cost, which results in increased margins.
A critical component is to constantly evaluate your portfolio of products and offerings. Most
companies typically measure each SKU by units sold, revenue and profit contribution. Then,
depending primarily upon which products are the most profitable, they decide which should be
eliminated.
Process: To ensure complete assessment of the company’s portfolio, three things need to be
considered:
1. All costs have to be figured into the equation: Inventory costs, logistics costs, manufacturing
set-up costs, costs to configure or assemble, and cost of direct materials.
2. Variability: how much will the demand for the SKU fluctuate?
3. Relationship Analysis: assess the relationships between SKUs (some SKUs help drive demand for others).
Once costs are calculated, variability is determined, and product relationships (plus a few other factors) are taken into account, a map of the portfolio can be created. With these illuminating facts and metrics, better analysis can be performed and decisions can be made about the portfolio. Below is an example of one of the maps/charts we use to plot the portfolio and make decisions about where to best hold inventory for which SKUs. Similar graphs are used to show the total landed cost, variation and profit contribution characteristics of each SKU in the portfolio.
The graph below shows an example of the power of understanding demand and its variability so that inventory can be positioned effectively across a distribution network. In this case the analysis was used to determine the retailer's hub-and-spoke stocking policy. Items with high variability and low sales are mostly stored in the hubs, which benefit from risk pooling demand across many stores. Items with high demand and low variability are mostly stored in the spokes, as they are replenished often. Items with low volume and low variability are split between the hubs and spokes depending on other characteristics.
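A minimal sketch of that stocking logic follows, using hypothetical thresholds (the actual cut-offs would come from the client's data): each SKU's average weekly demand and coefficient of variation determine whether it is stocked at the hubs, at the spokes, or split between them.

```python
def stocking_location(avg_weekly_demand, cv, demand_cutoff=100.0, cv_cutoff=0.5):
    """Assign a SKU to hubs, spokes, or both, following the policy sketched above.
    The cut-off values are illustrative assumptions, not client parameters."""
    high_demand = avg_weekly_demand >= demand_cutoff
    high_variability = cv >= cv_cutoff

    if high_variability and not high_demand:
        return "hub"           # low sales, high variability: risk-pool at the hub
    if high_demand and not high_variability:
        return "spoke"         # fast, steady movers: replenish the spokes often
    if not high_demand and not high_variability:
        return "hub or spoke"  # split based on other characteristics
    return "review"            # high demand and high variability: handle case by case

# Hypothetical SKUs: (name, average weekly demand, coefficient of variation)
portfolio = [("A100", 20, 0.9), ("B200", 450, 0.2), ("C300", 35, 0.3)]
for sku, demand, cv in portfolio:
    print(sku, "->", stocking_location(demand, cv))
```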
Benefits: The evaluation of your portfolio can be a painstaking process. Many companies hold hundreds of thousands of SKUs, but this analysis is vital and must be performed and periodically re-assessed to ensure you are maximizing the profitability of your business. You should also remember that increasing the number of products in your portfolio increases the variability of ALL existing products in your portfolio.
Approach: First, the company needs to identify ways to optimize its supply chain to improve performance: increase productivity, reduce cost and improve asset and working capital efficiency. Through this analysis the company can identify operations strategies for further cost reduction.
- Model current process steps and network using actual client data and commercially available analysis tools
- Inventory levels
- Forecast accuracy
- Supplier uncertainty
For the controllable risks, there are standard strategies and approaches related to day-
to-day operations such as maintaining inventory and backup plans. With their past experience,
companies can anticipate some of these controllable risks. But for the “black swans” the
perception is that they are so rare and unexpected that there is not much that can be done. We
believe that both these assumptions are wrong.
Approach: While a specific unexpected event is very hard to forecast, there are still quite a few
of these in a given period. In the last few months of 2012, we experienced Hurricane Sandy,
an extended strike at the Long Beach port, renewed unrest in Egypt and a small Japanese
earthquake.
There are several actions that can be taken to prepare for these events:
In order to understand the impact of supply chain disruptions, it is necessary to model the
supply chain and assess the cost and recovery time from various closures and other scenarios.
We also do a qualitative benchmarking and assessment of existing risk management
processes and systems. We avoid trying to create an “incident-based risk prioritization” for a
few reasons. First, it is better to analyze the network and see where the potential incidents or
events will have the greatest impact.
- Step 1: Identify each critical node in the system for a given business unit, product line, geographical region, or just for the most critical products in the portfolio.
- Step 2: Calculate the Time-To-Recovery (TTR) for each of these nodes in the network.
- Step 3: Using the TTR information from Step 2 in the supply chain model, we then help companies calculate the cost of lost sales during the TTR; this provides what we call the Financial Impact (FI).
- Step 4: Finally, we calculate the Risk Exposure Index™ by aggregating the Financial Impact (FI) across all nodes. The figure below is a simplified illustration of a network and the measurements at each node. The aggregation of the individual measurements allows us to create a Risk Exposure Index™ for a given value chain.
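As a hedged illustration of Steps 1 through 4, the sketch below walks a toy network through the same arithmetic: each node gets a TTR, the financial impact is the margin lost while that node is down, and the impacts are then aggregated. The node data are invented, and the choice to report both the total and the worst single node is our assumption; the exact aggregation used for the index is not spelled out here.

```python
# Hypothetical critical nodes: name, time-to-recovery in weeks,
# and margin lost per week while that node is unavailable.
nodes = [
    {"name": "Supplier A (resin)", "ttr_weeks": 8, "lost_margin_per_week": 1.2e6},
    {"name": "Plant 1 (moulding)", "ttr_weeks": 4, "lost_margin_per_week": 3.0e6},
    {"name": "DC East",            "ttr_weeks": 1, "lost_margin_per_week": 0.8e6},
]

for node in nodes:
    # Step 3: financial impact = lost sales margin accumulated over the TTR
    node["financial_impact"] = node["ttr_weeks"] * node["lost_margin_per_week"]

# Step 4: aggregate across all nodes (illustrative aggregation choices)
total_exposure = sum(n["financial_impact"] for n in nodes)
worst_node = max(nodes, key=lambda n: n["financial_impact"])

for node in nodes:
    print(f'{node["name"]:<22} FI = ${node["financial_impact"]:>12,.0f}')
print(f"total exposure across nodes: ${total_exposure:,.0f}")
print(f"largest single exposure:     {worst_node['name']}")
```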
This approach has several benefits:
- A quantified measure of risk: it determines the cost of risk based on the entire network.
- There is no requirement to forecast or scenario-plan against the myriad events that could possibly occur and affect your operations. Measuring the risk allows you to prioritize and locate the areas that will be bottlenecks during a crisis.
- It forces a discussion to understand why the TTR for similar facilities or suppliers differs.
- It forces a process to reduce TTR in various stages of the supply chain and allows you to do so in a prioritized fashion.
The Harvard Business Review article "From Superstorms to Factory Fires" describes how this method was successfully applied at Ford Motor Company.
Approach: Using a simulation model, you can review the current process and discover the bottlenecks. New ideas can then be analyzed for time and cost. You can experiment, for instance, with a fixed production schedule to reduce setup costs, as well as with other changes to the process. The next step would be to expand the analysis and improve the production process by looking at how its output is coordinated with the overall supply chain planning process through an optimization model.
Process: Using the company's data, we build an initial model to simulate the real system. Typically, this requires several iterations to validate the simulation model and ensure it correctly represents the system we are optimizing. This process achieves multiple outcomes.
First, we learn about the uniqueness of the specific facility and its production.
Second, we create a baseline whose constraints we can then change to illustrate what happens under a different set of conditions.
The current-state simulation model has four modules: demand and production logic, production sequence, production process and KPI collectors.
The simulation model includes information on both demand and production logic so as to capture trade-offs between cycle times, set-ups, fill rates, etc. This logic itself consists of four components.
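The sketch below is a toy version of such a current-state model, written as a plain day-by-day loop rather than with a commercial simulation package: random daily demand, batch production triggered by a reorder point, a setup before each batch, and two simple KPI collectors (fill rate and number of setups). All parameter values are invented for illustration.

```python
import random

random.seed(42)

# Illustrative parameters (not client data)
DAYS = 365
MEAN_DEMAND = 100          # units per day
REORDER_POINT = 400        # start a batch when inventory falls below this
BATCH_SIZE = 1_200         # units produced per batch
SETUP_DAYS = 1             # days lost to a changeover before each batch
PRODUCTION_RATE = 600      # units per production day

inventory = 800
pending_batch = 0          # units still to be produced in the current batch
setup_remaining = 0
served = demanded = setups = 0

for day in range(DAYS):
    # Production logic: trigger a batch (with a setup) at the reorder point
    if inventory < REORDER_POINT and pending_batch == 0 and setup_remaining == 0:
        setup_remaining = SETUP_DAYS
        pending_batch = BATCH_SIZE
        setups += 1

    if setup_remaining > 0:
        setup_remaining -= 1                 # machine busy with changeover
    elif pending_batch > 0:
        produced = min(PRODUCTION_RATE, pending_batch)
        pending_batch -= produced
        inventory += produced

    # Demand logic: random daily demand; unmet demand counts against fill rate
    demand = max(0, int(random.gauss(MEAN_DEMAND, 30)))
    shipped = min(demand, inventory)
    inventory -= shipped
    served += shipped
    demanded += demand

# KPI collectors
print(f"fill rate: {served / demanded:.1%}, setups per year: {setups}")
```

Changing the reorder point, batch size or setup time in such a model makes the trade-offs between cycle times, set-ups and fill rates visible before anything is changed on the shop floor.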
Approach: The team started by looking at historical data and discovered that it could solve both problems by setting prices based on that data. The approach combined machine learning, to predict demand for new items and estimate lost sales, with optimization, to take competing styles into account when setting prices.
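The sketch below shows the general shape of such a combination on invented data: a simple regression model learns how demand for a style responds to its own price and to the average price of competing styles, and a brute-force search then picks the joint price assignment that maximizes predicted revenue. Rue La La's actual demand models and optimization are far more sophisticated; this is only a structural illustration.

```python
from itertools import product
from sklearn.linear_model import LinearRegression

# Hypothetical history: [own price, average price of competing styles] -> units sold
X_hist = [[30, 40], [35, 40], [40, 45], [45, 45], [50, 50], [55, 50], [60, 55]]
y_hist = [120, 105, 95, 80, 70, 55, 45]
demand_model = LinearRegression().fit(X_hist, y_hist)

styles = ["dress_A", "dress_B", "dress_C"]
price_menu = [39.90, 49.90, 59.90]          # candidate prices per style
inventory = {"dress_A": 80, "dress_B": 60, "dress_C": 100}

best_revenue, best_prices = -1.0, None
for prices in product(price_menu, repeat=len(styles)):   # joint search over competing styles
    revenue = 0.0
    for style, price in zip(styles, prices):
        competitors = [p for s, p in zip(styles, prices) if s != style]
        avg_competitor_price = sum(competitors) / len(competitors)
        predicted_demand = demand_model.predict([[price, avg_competitor_price]])[0]
        revenue += price * min(max(predicted_demand, 0.0), inventory[style])
    if revenue > best_revenue:
        best_revenue, best_prices = revenue, dict(zip(styles, prices))

print("recommended prices:", best_prices)
print(f"predicted event revenue: ${best_revenue:,.0f}")
```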
Process: Integration of the MIT Demand Forecasting and Price Optimization Tool with Rue La La's business processes. Every day, the price optimizer suggests recommended prices for the first-exposure styles in events starting the next day. It prices all styles for an event together in a single run. The run takes about an hour, and at the end of it an email goes out to all of the merchants with price recommendations for the exposure styles in the next day's events.
Part of this process is to ensure that a couple of things are accounted for:
1. The model does not use competitors' prices as an input, so somebody needs to make sure that the recommended prices are not outside normal ranges. This keeps pricing competitive, which is the fundamental value proposition to Rue La La's customers.
2. The merchants need to be educated about how the prices are actually set. Typically, merchants control pricing and there is some art to this process. To ensure that they are not completely cut out of the loop, the recommended prices are shared with them and they confirm that the recommendations are sensible.
Benefits: Implementing this approach produced increased revenues of 10% and won the team
the 2014 INFORMS Revenue Management and Pricing Section Practice Award.
Conclusion
Supply chain analytics is therefore not just about crunching data and finding
correlations; it requires deep understanding of the way supply chains work and the factors that
influence costs and risks. If the tradeoffs are balanced well, these types of analytics projects
can result in large savings (10% to 30% is typical) while maintaining service levels and other
important performance measures.
OPS Rules’ proven approach and methodology can be the key to successful projects
that can lead to developing internal capabilities for continued use of analytics to make the right
decisions for your business.
analyze.innovate.transform