OPERATIONS MANAGEMENT
POKA YOKE-
• Poka Yoke is a Japanese term that means mistake-proofing or error prevention.
• The method is used in industrial engineering to prevent defects and resolve them during the production process, eliminating the need for human quality control after the process. Poka Yoke is frequently used in Lean Manufacturing and Six Sigma to ensure as few errors as possible in a production process.
• Poka Yoke makes it practically impossible to have processing errors.
It forces actions to be carried out correctly, leaving no room for
misunderstandings and/or human error. It's about measures that
prevent further errors from being made.
• Many solutions based on this concept tend to be simple, cheap, and
effective. They can be integrated into the product design or in one of
the process steps.
• Many technological inventions have been based on this concept.
Examples are control systems, contact methods, fixed-value
methods, motion-steps methods and sensing devices. Some of
these solutions are used very often, like limit switches, touch
switches and proximity switches.
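As a software analogy (a hypothetical sketch, not from the notes): a poka-yoke design rejects an incorrect action before processing can begin, just as a keyed plug or limit switch does.

```python
# Minimal software poka-yoke sketch: the part accepts only one
# orientation, analogous to a keyed plug or a limit switch.
from enum import Enum

class Orientation(Enum):
    KEYED = "keyed"        # the single correct way to insert the part
    REVERSED = "reversed"  # physically blocked by the fixture

def insert_part(orientation: Orientation) -> None:
    """Reject the wrong orientation before processing can start."""
    if orientation is not Orientation.KEYED:
        raise ValueError("Part cannot be inserted this way (poka-yoke stop)")
    print("Part seated correctly; process may proceed.")

insert_part(Orientation.KEYED)        # OK
# insert_part(Orientation.REVERSED)   # would raise immediately
```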
SERVICE BLUEPRINT-
• Service design is the activity of planning and organizing a business’s
resources (people, props, and processes) in order to (1) directly
improve the employee’s experience, and (2) indirectly, the customer’s
experience. Service blueprinting is the primary mapping tool used in
the service design process.
• Definition: A service blueprint is a diagram that visualizes the
relationships between different service components — people, props
(physical or digital evidence), and processes — that are directly tied
to touchpoints in a specific customer journey.
Elements of a service blueprint
Service blueprints typically contain five categories that illustrate the
main components of the service being mapped out.
Physical evidence
What customers (and even employees) come in contact with.
Though first in line, it’s usually the last element added.
Example: This category includes locations, like a physical store or the
company website, but also any signage, receipts, notification or
confirmation emails, etc.
Customer actions
What customers do during the service experience.
Example: Customers might visit the website, talk to an employee (in
person or online), make a purchase, place an order, accept an order,
or receive something.
Frontstage or visible employee actions
What customers see and who they interact with. For tech-heavy
businesses, add in or replace this category with the technology that
interacts with the customer.
Example: Employees might greet a customer visiting a physical
location, respond to questions through chat, send emails, take an
order, or provide status information.
Backstage or invisible contact employee actions
All other employee actions, preparations, or responsibilities
customers don’t see but that make the service possible.
Example: Employees might write content for the website/email/etc.,
provide approval, complete a review process, make preparations,
package an order, etc.
Support processes
Internal/additional activities that support the employees providing
the service.
Example: Third-party vendors who deliver supplies, a carrier service,
equipment or software used, delivery or payment systems, etc.
Lines
Service blueprints also include lines to separate each category,
clarifying how components in a service process interact with each
other. This allows employees and managers to better understand
their role and, most importantly, possible sources of customer
dissatisfaction within a service experience.
Optional categories
If you need more detail, you could also add a timeline to show how
long each step takes, some kind of success metric to measure goals,
or the customer’s emotions throughout the process.
Fundamentally, service blueprints center on the customer. They
allow for a clear vision of the service design, which in turn helps
organizations refine their processes and deliver pleasing,
memorable customer experiences.
Benefits of using service blueprints
Because services aren’t tangible, it can be difficult to convince
decision-makers and executives that changes need to be made. It
can be even more difficult to talk about specific changes without
first having a full picture of the process. Visualizing each step and
each interaction in the process takes away that vagueness and
highlights areas for improvement.
Service blueprints empower organizations to optimize their service
processes. Additional benefits include:
• Scalability and flexibility: Service blueprints accommodate as much or as little detail as needed. They can show high-level overviews or intricate steps.
• Cross-functionality and knowledge transferability: Employees and managers in long-standing or complex processes can easily lose sight of the bigger picture or of how each action affects other departments, fellow employees, or even the customer. Service blueprints clarify interactions and reduce silos.
• Competition: Service blueprints allow you to compare what you want your service to look like with what it looks like now, or to compare your company’s services with a competitor’s.
• Failure analysis: Once you can see who is (or should be) doing what, it’s much easier to diagnose what’s going wrong.
Service blueprints create a visible structure for implementing and
achieving operational goals. Their cross-functionality likewise
fosters better communication between customers, employees, and
management, which increases the chances that companies will
understand their customers and respond to their needs while
keeping their service processes free from unnecessary complications
and redundancies.
AGGREGATE PLANNING-
• It is a business strategy that helps companies plan how
to use their resources and produce what people will
want in the future.
• Aggregate planning is creating a production
schedule for a given period. It starts after listing out all
the crucial requirements for uninterrupted production.
The usual planning horizon ranges from three to twelve
months.
• Aggregate planning does not differentiate colors, sizes,
and features. For example, in a mobile handset
manufacturing company, aggregate planning considers
only the total number of handsets, not the separate
models’ colors.
Aggregate planning examples
A plant might be manufacturing five different kinds of products. Demand for two of the products may be going up, while the market for the other three might be coming down.
The company is interested only in the overall growth and the resources (people,
machines, storage, and raw materials) needed for the following year.
If the above company forecasts the five products individually, each prediction can have some error. However, if they combine these forecasts, the aggregate demand figure is subject to less error: highs and lows tend to cancel each other out randomly. That leads to greater accuracy in the total demand forecast than in isolated demand forecasts.
Hence, aggregating the individual products’ demands and handling the aggregate
production plan is better than discussing individual production plans. That leads to
better utilization of resources.
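A small simulation makes the cancellation effect concrete (all numbers are illustrative assumptions, not from the notes):

```python
# Sketch of the cancellation effect described above: five products whose
# forecasts each miss true demand by an independent random error.
import random

random.seed(42)
TRIALS, PRODUCTS, TRUE_DEMAND = 10_000, 5, 1_000

per_product_err = 0.0
aggregate_err = 0.0
for _ in range(TRIALS):
    errors = [random.gauss(0, 0.08) * TRUE_DEMAND for _ in range(PRODUCTS)]
    per_product_err += sum(abs(e) for e in errors) / PRODUCTS
    aggregate_err += abs(sum(errors))  # highs and lows partly cancel

print("relative error, single product :",
      round(per_product_err / TRIALS / TRUE_DEMAND, 3))
print("relative error, aggregate      :",
      round(aggregate_err / TRIALS / (PRODUCTS * TRUE_DEMAND), 3))
# the aggregate forecast error is noticeably smaller in relative terms
```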
ISO 9000 –
• It is a set of internationally recognized standards for quality
assurance and management.
• Published by the International Organization for Standardization, it
aims to encourage the production of goods and services that meet a globally acceptable level of quality.
• ISO 9000 lays out best practices, guidelines, and a standard
vocabulary for quality management systems.
• ISO 9001 is the portion of ISO 9000 that consists of action items for
a business or other organization that seeks ISO certification.
Understanding ISO 9000
ISO 9000 standards were developed to help manufacturers effectively
document the quality system elements that need to be implemented to
maintain an efficient quality system. They are increasingly being applied to
any organization or industry.
ISO 9001 is now being used as a basis for quality management—in the
service sector, education, and government—to help organizations satisfy
their customers, meet regulatory requirements, and achieve continual
improvement.
LEARNING CURVES-
• The learning curve is a visual representation of how long it takes to
acquire new skills or knowledge.
• In business, the slope of the learning curve represents the rate at which learning new skills translates into cost savings for a company.
• The steeper the slope of the learning curve, the higher the cost
savings per unit of output.
Understanding Learning Curves
The learning curve also is referred to as the experience curve, the cost
curve, the efficiency curve, or the productivity curve. This is because the
learning curve provides measurement and insight into all the above
aspects of a company. The idea behind this is that any employee, regardless of position, takes time to learn how to carry out a specific task or duty. At first, the amount of time needed to produce the associated output is high. Then, as the task is repeated, the employee learns how to complete it quickly, reducing the time needed per unit of output.
That is why the learning curve is downward sloping in the beginning with a
flat slope toward the end, with the cost per unit depicted on the Y-axis and
total output on the X-axis. As learning increases, it decreases the cost per
unit of output initially before flattening out, as it becomes harder to
increase the efficiencies gained through learning.
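The notes describe the shape qualitatively; the classic Wright learning-curve model quantifies it. The model and the 80% rate below are standard conventions assumed here, not given in the notes: with an r% learning rate, unit cost falls to r% of its previous value each time cumulative output doubles.

```python
# Wright learning-curve model: T_n = T_1 * n ** b, with b = log(r)/log(2).
import math

def unit_time(n: int, t_first: float, learning_rate: float) -> float:
    """Time (or cost) of the n-th unit under an r% learning curve."""
    b = math.log(learning_rate) / math.log(2)
    return t_first * n ** b

# 80% curve: every doubling of cumulative output cuts unit time to 80%.
for n in (1, 2, 4, 8, 16):
    print(n, round(unit_time(n, t_first=100.0, learning_rate=0.80), 1))
# prints 100.0, 80.0, 64.0, 51.2, 41.0 -- the downward-sloping curve above
```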
RELIABILITY-
When you do quantitative research, you have to consider the reliability and validity of
your research methods and instruments of measurement.
Reliability tells you how consistently a method measures something. When you apply
the same method to the same sample under the same conditions, you should get the
same results. If not, the method of measurement may be unreliable.
There are four main types of reliability. Each can be estimated by comparing
different sets of results produced by the same method.
• Test-retest: consistency of the same test over time.
• Interrater: consistency of the same test conducted by different people.
• Parallel forms: consistency of different versions of a test which are designed to be equivalent.
• Internal consistency: consistency of the individual items of a test.
Test-retest reliability
Test-retest reliability measures the consistency of results when you repeat the
same test on the same sample at a different point in time. You use it when you
are measuring something that you expect to stay constant in your sample.
Why it’s important
Many factors can influence your results at different points in time: for example,
respondents might experience different moods, or external conditions might affect
their ability to respond accurately.
Test-retest reliability can be used to assess how well a method resists these factors
over time. The smaller the difference between the two sets of results, the higher the
test-retest reliability.
Interrater reliability
Interrater reliability (also called interobserver reliability) measures the degree of
agreement between different people observing or assessing the same thing. You use
it when data is collected by researchers assigning ratings, scores or categories to
one or more variables.
Why it’s important
People are subjective, so different observers’ perceptions of situations and
phenomena naturally differ. Reliable research aims to minimize subjectivity as much
as possible so that a different researcher could replicate the same results.
When designing the scale and criteria for data collection, it’s important to make sure
that different people will rate the same variable consistently with minimal bias. This is
especially important when there are multiple researchers involved in data collection
or analysis.
Parallel forms reliability
Parallel forms reliability measures the correlation between two equivalent versions of
a test. You use it when you have two different assessment tools or sets of questions
designed to measure the same thing.
Why it’s important
If you want to use multiple different versions of a test (for example, to avoid
respondents repeating the same answers from memory), you first need to make sure
that all the sets of questions or measurements give reliable results.
Internal consistency
Internal consistency assesses the correlation between multiple items in a test that
are intended to measure the same construct.
You can calculate internal consistency without repeating the test or involving other
researchers, so it’s a good way of assessing reliability when you only have one data
set.
Why it’s important
When you devise a set of questions or ratings that will be combined into an overall
score, you have to make sure that all of the items really do reflect the same thing. If
responses to different items contradict one another, the test might be unreliable.
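Internal consistency is most often summarized with Cronbach’s alpha, a standard statistic the notes do not name. A minimal sketch with made-up ratings:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(totals)).
import statistics

def cronbach_alpha(items: list[list[float]]) -> float:
    """items[i] = all respondents' scores on question i."""
    k = len(items)
    item_vars = sum(statistics.variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total
    return k / (k - 1) * (1 - item_vars / statistics.variance(totals))

# three questions intended to measure the same construct, five respondents
q1 = [4, 5, 3, 4, 2]
q2 = [4, 4, 3, 5, 2]
q3 = [5, 5, 2, 4, 3]
print(round(cronbach_alpha([q1, q2, q3]), 2))  # closer to 1 = more consistent
```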
ABC INVENTORY ANALYSIS-
ABC (Always Better Control) analysis is one of the most commonly used inventory
management methods. ABC analysis groups items into three categories (A, B, and
C) based on their level of value within a business.
Classifying inventory with ABC analysis helps businesses prioritize their inventory,
optimize operations, and make clear decisions.
• A items: This is your inventory with the highest annual consumption value. It should be your highest priority and should rarely, if ever, experience a stockout.
• B items: Inventory that sells regularly but not nearly as much as A items.
Often inventory that costs more to hold than A items.
• C items: This is the rest of your inventory that doesn’t sell much, has the
lowest inventory value, and makes up the bulk of your inventory cost.
Inventory categorization is essential with physical products because it protects your
profit margins and prevents write-offs and losses for spoiled inventory. It is also the
first step in reducing obsolete inventory, supply chain optimization, increasing prices,
and forecasting demand.
ABC inventory analysis is based on the Pareto Principle, meaning it’s often the case
that about 20% of a company’s inventory accounts for 80% of its value. This insight
enables leaders to make more operationally informed decisions.
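A minimal sketch of the classification logic (SKU names, numbers, and the 80/15/5 cutoffs are illustrative conventions, not mandated by the notes):

```python
# ABC classification sketch: rank SKUs by annual consumption value and
# split on cumulative value share, following the Pareto Principle.
skus = {  # sku: (annual units, unit cost) -- illustrative numbers
    "S1": (5000, 20.0), "S2": (300, 150.0), "S3": (12000, 1.5),
    "S4": (800, 30.0),  "S5": (150, 10.0),
}
value = {s: units * cost for s, (units, cost) in skus.items()}
total = sum(value.values())

cumulative = 0.0
for sku in sorted(value, key=value.get, reverse=True):
    cumulative += value[sku] / total
    cls = "A" if cumulative <= 0.80 else ("B" if cumulative <= 0.95 else "C")
    print(sku, f"{value[sku]:>9.2f}", f"{cumulative:6.1%}", cls)
```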
Benefits of ABC analysis
1. More accurate demand forecasting
2. Better control of high-value inventory
3. Strategic pricing
ERGONOMICS-
Defined as the science of fitting a workplace to the user’s needs, ergonomics aims to
increase efficiency and productivity and reduce discomfort.
Think about the angle of your computer monitor, or the height of your desk. Think about
whether your eyes are strained by the end of the day or if your wrists hurt from typing. A
sound understanding of ergonomics can prevent most workplace injuries by adjusting
tools to the user, putting an emphasis on proper posture to reduce the impact of
repetitive movements.
The use of computers and rapidly changing technology in the modern workplace has
greatly increased the need for ergonomics. Desks, chairs, monitors, keyboards and
lighting all need to be assessed when creating a workspace, whether it is at the office or
at home.
Ergonomics also takes into account the need for movement throughout the day. Office
furniture has traditionally encouraged stiff, fixed postures and little movement. However,
a balance between sitting and standing, which can be aided with a height-adjustable
desk, is a proven way to combat the effects of sedentary workplace behavior.
Benefits of Ergonomics
1. Lower costs
2. Higher productivity
3. Better product quality
4. Improved employee engagement
5. Better safety culture
Quality Function Deployment (QFD)-
Definition: Quality Function Deployment, or QFD, is a model for product
development and production popularized in Japan in the 1960s. The model
aids in translating customer needs and expectations into technical requirements by listening to the voice of the customer.
What is Quality Function Deployment?
Although it might sound like a modern testing methodology, Quality
Function Deployment (QFD) has a 50-year track record of putting
customer needs first throughout the entire product development process.
Focusing consistently on customer desires, QFD ensures these are always
considered during both the design process and various quality assurance
milestones throughout the entire product lifecycle.
By continuously circling back to the Voice of the Customer, QFD ensures
every technical requirement takes the customer into account, using matrix
diagrams such as the House of Quality to drive customer value into every
stage.
QFD is most appropriate when companies are focused on relatively
iterative innovation versus something completely new since there is a large
base of customer feedback and input to drive the process. When a product
is creating a completely new category, it’s more difficult to fully articulate
the voice of the customer since they don’t necessarily have a frame of
reference, but even in these cases carrying forward what is known about
customer needs and preferences can provide value.
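A hedged sketch of the House of Quality core calculation, using the common 9/3/1 relationship convention (all names and weights below are hypothetical):

```python
# Hypothetical House of Quality core: customer needs (weighted by
# importance) x relationship scores (9 = strong, 3 = moderate, 1 = weak)
# -> technical priorities.
needs = {"easy to hold": 5, "long battery life": 4, "durable": 3}
tech = ["grip diameter", "battery capacity", "casing material"]

# relationship matrix: rows follow `needs`, columns follow `tech`
rel = {
    "easy to hold":      [9, 0, 3],
    "long battery life": [0, 9, 0],
    "durable":           [1, 0, 9],
}

priority = [sum(needs[n] * rel[n][j] for n in needs) for j in range(len(tech))]
for name, score in sorted(zip(tech, priority), key=lambda p: -p[1]):
    print(f"{name:18} {score}")
# the highest score marks the technical requirement that most serves
# the voice of the customer
```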
Business Process Reengineering (BPR)-
Today, it is one of the most radical, revolutionary, and powerful management tools available.
BPR is defined as “ The fundamental rethinking and radical redesign of the business
process to achieve dramatic improvements in critical contemporary measures of
performance such as cost, quality, service, and speed.”
BPR offers a process-based approach to strategy rather than a market-based approach. The tool concentrates on the process activities that convert inputs into outputs for the customer.
It does not rely on minor improvements through modification; it strikes at the very root of the design.
The entire form or structure of the process may change during the reengineering and the
reengineered process may not resemble the old one.
The entire process of reengineering revolves around how to give the customer what he
wants at the right time and in the most cost effective manner.
BPR is a powerful tool that promises order-of-magnitude improvements in revenues, profits, productivity, cycle time, and efficiency.
Advantages of Business Process Reengineering:
• Improvement in the organization as a whole.
• Better systems and management improvements in the areas of products and services, design and operations, and system operations.
• Takes advantage of improved operations.
• Improved application of industrial engineering in the areas of organizational strategy, management functions, plant utilization, quality improvement, creativity and innovation, and competitive confidence.
• Improvement in customer satisfaction.
• Improvement in customer satisfaction.
Just-in-Time (JIT)-
The just-in-time (JIT) inventory system is a management strategy that
aligns raw-material orders from suppliers directly with production
schedules. Companies employ this inventory strategy to increase
efficiency and decrease waste by receiving goods only as they need them
for the production process, which reduces inventory costs. This method
requires producers to forecast demand accurately.
KEY TAKEAWAYS
• The just-in-time (JIT) inventory system is a management strategy
that minimizes inventory and increases efficiency.
• Just-in-time manufacturing is also known as the Toyota Production
System (TPS) because the car manufacturer Toyota adopted the
system in the 1970s.
• Kanban is a scheduling system often used in conjunction with JIT to
avoid overcapacity of work in process.
• The success of the JIT production process relies on steady
production, high-quality workmanship, no machine breakdowns, and
reliable suppliers.
• The terms short-cycle manufacturing, used by Motorola, and
continuous-flow manufacturing, used by IBM, are synonymous with
the JIT system.
How Does Just-in-Time Inventory Work?
The just-in-time (JIT) inventory system minimizes inventory and increases
efficiency. JIT production systems cut inventory costs because
manufacturers receive materials and parts as needed for production and
do not have to pay storage costs. Manufacturers are also not left with
unwanted inventory if an order is canceled or not fulfilled.
One example of a JIT inventory system is a car manufacturer that operates
with low inventory levels but heavily relies on its supply chain to deliver the
parts it requires to build cars on an as-needed basis. Consequently, the
manufacturer orders the parts required to assemble the vehicles only after
an order is received.
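The notes mention kanban alongside JIT; a sizing formula commonly used with kanban (an assumption here, not given in the notes) can be sketched as:

```python
# Kanban sizing sketch:
#   cards = demand_per_day * lead_time_days * (1 + safety) / container_qty
import math

def kanban_cards(demand_per_day: float, lead_time_days: float,
                 safety_factor: float, container_qty: int) -> int:
    cards = demand_per_day * lead_time_days * (1 + safety_factor) / container_qty
    return math.ceil(cards)  # round up: better a spare card than a stockout

# e.g. 500 parts/day, 2-day replenishment, 10% safety, 50 parts per bin
print(kanban_cards(500, 2, 0.10, 50))  # -> 22 cards circulating
```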
PRODUCTIVITY-
A management tool called “productivity” measures the efficiency with which a production system converts inputs into outputs. Productivity is the relationship between the “output” generated by a production system and the “input” used to produce it.
Productivity = Output/Input
Higher production is sometimes equated with higher productivity, and the two terms are used synonymously; however, this is not always true.
Productivity improvements can take any of the following forms:
• Increase output / Decrease Input
• Increase output/constant input
• Constant output/ Decrease input
• Increase output more/ Decrease input less
• Decrease output less/Decrease input more.
Productivity measurement is derived from the ratio between output and input:
Productivity = Output/Input, or symbolically, P = O/I
where P = Productivity, O = Output, and I = Input.
Productivity therefore measures how well inputs or resources are utilized to obtain the desired output: the higher the ratio, the greater the productivity. In a given situation, an entrepreneur or enterprise should try to improve this ratio over time, which indicates productivity improvement. Before productivity can be calculated, both the output and the input must be measured.
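A small worked example of P = O/I (numbers are illustrative):

```python
# Productivity as output per unit of input: output in units,
# input in labor-hours.
def productivity(output: float, input_: float) -> float:
    return output / input_

before = productivity(output=1200, input_=400)  # 3.0 units per hour
after = productivity(output=1380, input_=400)   # more output, same input
print(f"before: {before:.2f}  after: {after:.2f} "
      f"({(after / before - 1):.0%} productivity improvement)")
```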
Acceptance Sampling-
Acceptance sampling is a statistical measure used in quality control. It
allows a company to determine the quality of a batch of products by
selecting a specified number for testing. The quality of this designated
sample will be viewed as the quality level for the entire group of products.
A company cannot test every one of its products at all times. There may be
too many to inspect at a reasonable cost or within a reasonable timeframe.
Also, comprehensive testing might damage the product or make it unfit for
sale in some way. Testing a small sample would be indicative without
ruining the bulk of the product run.
KEY TAKEAWAYS
• Acceptance sampling is a quality-control measure that lets a
company determine the quality of an entire product lot by testing
randomly selected samples and using statistical analysis.
• When done correctly, acceptance sampling is effective for quality
control.
• While it was developed during World War II as a quick fix for
manufacturing, acceptance sampling shouldn't permanently replace
more systemic acceptance quality control methods.
Understanding Acceptance Sampling
Acceptance sampling tests a representative sample of the product for
defects. The process involves first, determining the size of a product lot to
be tested, then the number of products to be sampled, and finally the
number of defects acceptable within the sample batch.
Products are chosen at random for sampling. The procedure usually
occurs at the manufacturing site, just before the products are to be
shipped. The goal is to measure the quality of a batch with a specified
degree of statistical certainty without having to test every single unit.
Based on the results—how many of the predetermined number of samples
pass or fail the testing—the company decides whether to accept or reject
the lot.
The statistical behavior of a sampling plan is usually described by a probability model such as the binomial distribution, which gives the probability of accepting a lot at any given defect rate.
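A minimal sketch of that calculation for a single sampling plan (the plan and defect rates below are illustrative, not from the notes):

```python
# Single sampling plan (n, c): accept the lot if the sample of n units
# contains at most c defectives. P(accept) follows the binomial
# distribution.
from math import comb

def p_accept(n: int, c: int, p_defective: float) -> float:
    return sum(
        comb(n, k) * p_defective**k * (1 - p_defective) ** (n - k)
        for k in range(c + 1)
    )

# n = 50 sampled units, accept with at most c = 2 defectives
print(round(p_accept(50, 2, 0.02), 3))  # good lot (2% defective): ~0.92
print(round(p_accept(50, 2, 0.10), 3))  # bad lot (10% defective): ~0.11
```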
SEQUENCING AND SCHEDULING-
Scheduling is the allocation of resources over time to perform a collection of tasks
and it is a decision making function. The practical problem of allocating resources
over time to perform a collection of tasks arises in a variety of situations. In most
cases, however, scheduling does not become a concern until some fundamental
planning problems are resolved, and it must be recognized that scheduling
decisions are of secondary importance to a broader set of managerial decisions.
The scheduling process most often arises in situations where resource availability is fixed by the long-term commitments of a prior planning horizon.
Sequencing is the order of processing a set of tasks over available resources.
Scheduling involves both sequencing (the task of allocation) and the determination of process commencement and completion times, i.e., time-tabling. Sequencing problems occur whenever there is a choice as to the order in which a group of tasks
can be performed. The shop supervisor or scheduler can deal with sequencing
problems in a variety of ways. The simplest approach is to ignore the problem and
accomplish the tasks in any random order. The most frequently used approach is to
schedule heuristically according to predetermined "rules of thumb". In certain cases,
scientifically derived scheduling procedures can be used to optimize the scheduling
objectives.
JOHNSON’S RULE-
Johnson’s Rule is a technique that can be used to minimize the completion time for a group of jobs that are to be processed on two machines or at two successive work centers.
Objectives of Johnson’s Rule
To minimize the processing time for sequencing a group of jobs through two work centers.
To minimize the total idle times on the machines.
To minimize the flow time from the beginning of the first job until the finish of the last job.
Conditions for Johnson’s Rule
Job time (including setup and processing) must be known and constant for each job at each work center.
Job times must be independent of the job sequence.
All jobs must follow the same two-step work sequence.
Job priorities cannot be used.
Steps Involved In Johnson’s Rule
Johnson’s rule minimizes makespan when scheduling a group of jobs on two workstations.
Step 1: Scan the processing time at each workstation and find the
shortest processing time among the jobs not yet scheduled. If two or
more jobs are tied, choose one job arbitrarily.
Step 2: If the shortest processing time is on workstation 1, schedule the
corresponding job as early as possible. If the shortest processing time is
on workstation 2, schedule the corresponding job as late as possible.
Step 3: Eliminate the job just scheduled from further consideration.
Repeat steps 1 and 2 until all jobs have been scheduled.
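As a minimal sketch, the steps above translate directly into code (job names and times are illustrative):

```python
# Johnson's rule for two work centers.
def johnsons_rule(jobs: dict[str, tuple[float, float]]) -> list[str]:
    """jobs[name] = (time on workstation 1, time on workstation 2)."""
    front, back, remaining = [], [], dict(jobs)
    while remaining:
        # Step 1: find the globally shortest remaining processing time
        job = min(remaining, key=lambda j: min(remaining[j]))
        t1, t2 = remaining.pop(job)  # Step 3: remove from consideration
        if t1 <= t2:
            front.append(job)   # Step 2: shortest on WS1 -> as early as possible
        else:
            back.insert(0, job)  # shortest on WS2 -> as late as possible
    return front + back

jobs = {"A": (5, 2), "B": (1, 6), "C": (9, 7), "D": (3, 8), "E": (10, 4)}
print(johnsons_rule(jobs))  # ['B', 'D', 'C', 'E', 'A']
```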
Why Forecasts Are Wrong
• Unsuitable software – software that doesn’t have the necessary capabilities, has
mathematical errors, or uses inappropriate methods. It is also possible that the
software is perfectly sound but due to untrained or inexperienced forecasters, it is
misused.
• The second reason is when untrained, unskilled, or inexperienced forecasters
exhibit behaviors that affect forecast accuracy. One example is over-adjustment, or
as W. Edwards Deming put it, “fiddling” with the process. This happens when a
forecaster constantly adjusts the forecast based on new information. Research
suggests that much of this fiddling makes no improvement in forecast accuracy and
is simply wasted effort.
• Forecasting should be a dispassionate and scientific exercise seeking a “best
guess” at what is really going to happen in the future. The third reason for
forecasting inaccuracy is process contamination by the biases, personal agendas,
and ill-intentions of forecasting participants. Instead of presenting an unbiased best
guess at what is going to happen, the forecast comes to represent what
management wants to see happen – no matter what the marketplace is saying.
• Finally, bad forecasting can occur because the desired level of accuracy is
unachievable for the behavior being forecast. Consider calling heads or tails in the
tossing of a fair coin. It doesn’t matter that we may want to achieve 60, 70 or 90
percent accuracy. The reality is that over a large number of tosses, we will only be
right half of the time and nothing can change that. The nature of the behavior
determines how well we can forecast it – and this applies to demand for products
and services just as it does to tossing coins.
X-BAR CHART
The x-bar and R-charts are quality control charts used to monitor the mean and variation of a process based on samples taken at given times. The control limits on both charts are used to monitor the mean and variation of the process going forward. If a point falls outside the control limits, it indicates that the mean or variation of the process is out of control; assignable causes may be suspected at that point. On the x-bar chart, the y-axis shows the grand mean and the control limits, while the x-axis shows the sample groups.
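A hedged sketch of the standard limit calculation (A2, D3, D4 are published table constants for subgroups of n = 5; the measurements are made up):

```python
# X-bar and R control limits for subgroups of size n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114  # standard factors for n = 5

samples = [[5.02, 4.99, 5.01, 5.00, 4.98],
           [5.00, 5.03, 4.97, 5.01, 5.02],
           [4.98, 5.00, 5.02, 4.99, 5.01]]

xbars = [sum(s) / len(s) for s in samples]
ranges = [max(s) - min(s) for s in samples]
grand_mean, r_bar = sum(xbars) / len(xbars), sum(ranges) / len(ranges)

print(f"x-bar chart: {grand_mean - A2 * r_bar:.3f} to {grand_mean + A2 * r_bar:.3f}")
print(f"R chart    : {D3 * r_bar:.3f} to {D4 * r_bar:.3f}")
```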
P-CHART
A p-chart is an attributes control chart used with data collected in subgroups of varying sizes. Because the subgroup size can vary, it shows the proportion of nonconforming items rather than the actual count. P-charts show how the process changes over time. The process attribute (or characteristic) is always described in a yes/no, pass/fail, go/no-go form.
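A minimal sketch of the usual p-chart limit formula, p-bar plus or minus three sigma, recomputed per subgroup because sizes vary (counts are made up):

```python
# p-chart limits: p_bar +/- 3 * sqrt(p_bar * (1 - p_bar) / n_i).
import math

inspected = [120, 95, 130, 110]          # subgroup sizes (vary)
defective = [6, 4, 9, 5]                 # nonconforming items found

p_bar = sum(defective) / sum(inspected)  # overall proportion nonconforming
for n, d in zip(inspected, defective):
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    ucl, lcl = p_bar + 3 * sigma, max(0.0, p_bar - 3 * sigma)
    print(f"n={n:3d}  p={d / n:.3f}  limits=({lcl:.3f}, {ucl:.3f})")
```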
C-CHART
A c-chart is an attributes control chart used with data collected in
subgroups that are the same size. C-charts show how the process,
measured by the number of nonconformities per item or group of
items, changes over time. Nonconformities are defects or
occurrences found in the sampled subgroup. They can be
described as any characteristic that is present but should not be,
or any characteristic that is not present but should be.
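The matching sketch for c-chart limits, c-bar plus or minus three times the square root of c-bar, which holds because subgroup sizes are equal (counts are illustrative):

```python
# c-chart limits: c_bar +/- 3 * sqrt(c_bar), equal subgroup sizes.
import math

counts = [3, 5, 2, 4, 6, 3]  # nonconformities found per subgroup
c_bar = sum(counts) / len(counts)
ucl = c_bar + 3 * math.sqrt(c_bar)
lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))  # floor at zero
print(f"center={c_bar:.2f}  LCL={lcl:.2f}  UCL={ucl:.2f}")
```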
What is an operating characteristic (OC) curve?
The operating characteristic (OC) curve depicts the discriminatory power of an
acceptance sampling plan. The OC curve plots the probabilities of accepting a lot
versus the fraction defective.
When the OC curve is plotted, the sampling risks are obvious. You should always examine the OC curve before using a sampling plan.
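The same binomial calculation used for acceptance sampling, tabulated across defect rates, traces out the OC curve (plan numbers are illustrative):

```python
# OC curve for an (n, c) plan: probability of accepting the lot as the
# true fraction defective rises.
from math import comb

def p_accept(n: int, c: int, p: float) -> float:
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(c + 1))

n, c = 50, 2  # illustrative plan
for p in (0.01, 0.02, 0.04, 0.06, 0.08, 0.10):
    print(f"fraction defective {p:.2f} -> P(accept) = {p_accept(n, c, p):.3f}")
# a steeply falling curve indicates a plan with strong discriminatory power
```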
Consequences of Poor Quality
Quality is vital in all aspects of life. It is imperative in business because
poor quality can lead to negative consequences such as lost customers,
decreased profits, and even closure.
To maintain good quality, you must have a system in place that ensures
that all of your products meet specific standards. You should also ensure
that your employees follow these same guidelines when producing your
products.
The Consequences of Poor Quality
Poor quality can lead to several negative outcomes. These consequences
can be placed in the following broad categories.
1. Customer Related Consequences
2. Operation Related Consequences
3. Business Related Consequences
4. Society Related Consequences
SIX SIGMA
Six Sigma is a management ideology that focuses on statistical improvements to a business process and advocates for quantitative measurements of success over qualitative markers. As such, Six Sigma practitioners are business people who use statistics, financial analysis, and project management to achieve improved business functionality.
Six Sigma is a statistical benchmark that shows how well a business process works. A process is said to run at Six Sigma quality when its output stays within six standard deviations of the mean, which corresponds to no more than 3.4 defects per million opportunities. In other words, a process is considered efficient if it produces fewer than 3.4 defects per one million chances. A defect is anything produced outside of consumer satisfaction.
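A quick way to see the arithmetic, using the conventional 1.5-sigma shift (an industry convention assumed here rather than stated in the notes):

```python
# Converting defects per million opportunities (DPMO) to a sigma level,
# applying the conventional 1.5-sigma shift.
from statistics import NormalDist

def sigma_level(dpmo: float) -> float:
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

print(round(sigma_level(3.4), 2))     # ~6.0 -- the Six Sigma benchmark
print(round(sigma_level(66_807), 2))  # ~3.0 -- a "three sigma" process
```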
The 5 Steps of Six Sigma
Adherents and practitioners of the Six Sigma method follow an approach
called DMAIC. This acronym stands for define, measure, analyze, improve, and control.
DMAIC is a statistically driven methodology that companies implement as
a mental framework for business process improvement. According to the
ideology, a business may solve any seemingly unsolvable problem by
following the five DMAIC steps:
1. A team of people, led by a Six Sigma champion, defines a faulty
process on which to focus, decided through an analysis of company
goals and requirements. This definition outlines the problem, goals,
and deliverables for the project.
2. The team measures the initial performance of the process. These
statistical measures make up a list of potential inputs, which may
cause the problem and help the team understand the process's
benchmark performance.
3. Then the team analyzes the process by isolating each input, or
potential reason for the failure, and testing it as the root of the
problem. The team uses analytics to identify the reason for process
errors.
4. The team works from there to improve system performance.
5. The group adds controls to the process to ensure it does not regress and become ineffective once again.
QUALITY AWARDS
Total Quality Management (TQM) supports the organizations in their efforts to
obtain satisfied customers. A major boost to the growth of TQM is the promotion of
quality award models like Deming Application Prize, Malcolm Baldrige National
Quality Award, Rajiv Gandhi National Quality Award and CII-EXIM Bank award. These
award frameworks are used by many organizations to assess and benchmark their
level of TQM implementation. The common purpose of a quality award is to promote excellence in quality and the implementation of total quality management in enterprises. Many Indian companies have spruced up their quality efforts with TQM and have even won quality awards. Some important quality awards in use in India can be described, compared, and analysed by critically comparing the criteria of the various quality award models, which shows how TQM principles are incorporated in them.
COST OF QUALITY (COQ)-
Cost of quality (COQ) is defined as a methodology that allows an organization to determine the
extent to which its resources are used for activities that prevent poor quality, that appraise the
quality of the organization’s products or services, and that result from internal and external
failures. Having such information allows an organization to determine the potential savings to be gained by implementing process improvements. Quality costs are typically grouped into the following categories:
• Cost of poor quality (COPQ)
• Appraisal costs
• Internal failure costs
• External failure costs
• Prevention costs
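A hedged worked example of how the categories combine (all figures are illustrative):

```python
# Total cost of quality as the sum of the four categories listed above.
coq = {
    "prevention":       40_000,   # training, process planning
    "appraisal":        60_000,   # inspection, testing, audits
    "internal failure": 90_000,   # scrap, rework found before shipment
    "external failure": 150_000,  # returns, warranty claims, recalls
}
total = sum(coq.values())
copq = coq["internal failure"] + coq["external failure"]  # cost of poor quality
print(f"total COQ = {total:,}; COPQ = {copq:,} "
      f"({copq / total:.0%} of COQ comes from failures)")
```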
Consumer surveys: Weakness
In most surveys, the total number of consumers is very large, and it is not feasible to contact every consumer or potential consumer. Therefore, some type of sampling is done, and the survey results are based on this smaller sample. While this sort of survey can gain access to information that would not otherwise be accessible, there are also dangers hidden in this information: unless the survey is designed by an expert in the field, the questions may not reveal the correct information. In fact, there is a real risk that wrong information gets validated just because it is the result of a consumer survey.
Secondly, the sample has to be correctly chosen to ensure that the entire population
of consumers, present and potential, is correctly represented. Otherwise, again, there is a fear of a false positive result. Also, these surveys can be expensive and time-consuming. Moreover, the surveying personnel cannot eliminate any irrationality that
may creep into the answers due to certain prevalent external factors at the time of
the survey. Any market rumors, whether positive or negative, regarding the firm’s
products or services, can affect the rationality of the response.
Sales force composite: Weakness
The sales force composite method is not free from limitations either. Since sales agents are not experts in forecasting, they cannot properly employ sophisticated forecasting techniques, nor do they have complete data for fact-based forecasting.
Weakness of Committee Form of Organisation:
This form of organisation suffers from the following
weaknesses:
1. Delay: The main drawback of the committee form of organisation is delay in taking decisions. A number of persons express their viewpoints in meetings, and a lot of time is taken to reach a decision. The fixing of committee meetings is also time-consuming: an agenda is issued and a convenient date is fixed for the meeting. The decision-making process is very slow, and many business opportunities may be lost due to delayed decisions.
2.Compromise:
Generally, efforts are made to reach consensus decisions. The viewpoint of the majority is taken as a unanimous decision of the committee. The thinking of the minority may be valid, but it may not be pursued for fear of being singled out. Members may accept a less-than-optimal solution out of fear that if their solution proves wrong, they will be blamed for it.
3. No Accountability:
No individual accountability can be fixed if decisions are bad. Every member of the committee tries to defend himself by saying that he suggested a different solution. If accountability cannot be fixed, it is a weakness of the organisation.
4. Domination by Some Members:
Some members try to dominate committee meetings. They try to thrust their viewpoint on others. The aggressiveness of some members helps them carry the majority with them, and the minority view is ignored. This type of decision making is not in the interest of the organisation.
5. Strained Relations:
Sometimes relations among committee members, or with others, become strained. If some members take divergent stands on certain issues, others may feel offended. If an issue concerning other persons is discussed in a committee, members taking a stand those persons dislike may offend them. The discussions in meetings are generally leaked to other employees, and unpleasant decisions may not be liked by those who are adversely affected. This affects relations of employees not only on the job but at a personal level as well.
6. Lack of Effectiveness:
The role of committees is not effective in all areas. Committees may be useful where grievance redressal or interpersonal and interdepartmental matters are concerned. They may not be effective where policies are to be framed and quick decisions are required; individual initiative is more effective in these cases. So committees have a limited role to play.
Why forecast?
The main reason is that your customers are key to your business, and your forecasts/demand plans represent your customer demand. If you provide excellent service to your customers in the most efficient and consistent manner, you will likely be one of the rare businesses that can demonstrate sustainable and profitable growth.
So, since forecasting / demand planning is a critical success factor, what
are the best practices that can be applied to any industry/ company?
I’ve seen many examples of forecasting processes, tools, etc., and yet the example that stands out and achieved the best results was based on a pragmatic approach that could be applied to any industry or company. There were three critical components to that successful example.
Q. Which type of forecasting approach, qualitative or quantitative,
is better?
Statistical data are essentially quantitative or numerical. For statistical analysis, qualitative data must be transformed into a quantitative form. Statistical forecasting must be quantitative, not qualitative. Hence quantitative forecasting is better than qualitative forecasting.
What Is Total Quality Management (TQM)?
• Total quality management (TQM) is the continual process of
detecting and reducing or eliminating errors in manufacturing,
streamlining supply chain management, improving the customer
experience, and ensuring that employees are up to speed with
training. Total quality management aims to hold all parties involved
in the production process accountable for the overall quality of the
final product or service.
• It is not the same as Six Sigma. TQM focuses on ensuring that
internal guidelines and process standards reduce errors, while Six
Sigma looks to reduce defects.
• TQM is considered a customer-focused process that focuses on
consistently improving business operations. It strives to ensure all
associated employees work toward the common goals of improving
product or service quality, as well as improving the procedures that
are in place for production.
• TQM oversees all activities and tasks needed to maintain a desired
level of excellence within a business and its operations. This
includes the determination of a quality policy, creating and
implementing quality planning and assurance, and quality control
and quality improvement measures.
Elements of Total Quality Management –
• Management’s commitment to quality
• Customer satisfaction
• Preventing rather than detecting defects
• Measurement of Quality
• Continuous improvement
• Corrective action for root cause
• Training
• Recognition of high quality
• Involvement of Employees
• Benchmarking.