Margin of safety (financial)
Margin of safety (safety margin) is the difference between the intrinsic value of a stock and its
market price.
Another definition: in break-even analysis (accounting), the margin of safety is how much output or
sales level can fall before a business reaches its break-even point.
History
Benjamin Graham and David Dodd, founders of value investing, coined the term margin of safety in
their seminal 1934 book, Security Analysis. The term is also described in Graham's The Intelligent
Investor. Graham said that "the margin of safety is always dependent on the price paid" (The
Intelligent Investor, Benjamin Graham, HarperBusiness Essentials, 2003).
Application to investing
Using margin of safety, one should buy a stock when it is worth more than its price on the
market. This is the central thesis of value investing philosophy which espouses preservation of
capital as its first rule of investing. Benjamin Graham suggested looking at unpopular or neglected
companies with low P/E and P/B ratios. One should also analyze financial statements and footnotes
to understand whether companies have hidden assets (e.g., investments in other companies) that
are potentially unnoticed by the market.
The margin of safety protects the investor from both poor decisions and downturns in the market.
Because fair value is difficult to accurately compute, the margin of safety gives the investor room
for error.
A common interpretation of margin of safety is how far below intrinsic value one is paying for a
stock. For high quality issues, value investors typically want to pay 90 cents for a dollar (90% of
intrinsic value) while more speculative stocks should be purchased for up to a 50 percent
discount to intrinsic value (pay 50 cents for a dollar).[1]
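As a rough numerical illustration of these rules of thumb (the figures and the helper function below are hypothetical, not from the article), a Python sketch of the maximum price implied by a required margin of safety:

def max_purchase_price(intrinsic_value_per_share, pay_fraction):
    # Price ceiling when paying only pay_fraction of the estimated intrinsic value.
    return intrinsic_value_per_share * pay_fraction

# High-quality issue: pay about 90 cents per dollar of intrinsic value (assumed estimate of $100).
print(max_purchase_price(100.0, 0.90))   # 90.0

# More speculative issue: demand up to a 50 percent discount.
print(max_purchase_price(100.0, 0.50))   # 50.0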
Application to accounting
In accounting parlance, margin of safety is the difference between the expected (or actual) sales
level and the break-even sales level. It can be expressed in equation form as follows:
Margin of Safety = Expected (or Actual) Sales Level (quantity or dollar amount) − Break-even
Sales Level (quantity or dollar amount)
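As a short worked sketch of this equation (Python, with made-up sales figures; expressing the margin of safety also as a percentage of expected sales is a common companion measure, not something stated above):

expected_sales = 120000    # expected sales in dollars (assumed)
breakeven_sales = 90000    # break-even sales in dollars (assumed)

margin_of_safety = expected_sales - breakeven_sales            # 30000
margin_of_safety_ratio = margin_of_safety / expected_sales     # 0.25, i.e. sales can fall 25% before losses

print(margin_of_safety, margin_of_safety_ratio)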
Rare book
Also, Margin of Safety is a rare and out-of-print book written by Seth Klarman, founder of
Baupost Limited Partners, a value-investing-focused hedge fund based in Boston, MA. Copies of
the book are considered something of a collector's item and can regularly be found on eBay or
Amazon.com in the $1,200-$2,000 price range.
References
• Graham, Benjamin; Dodd, David. Security Analysis: The Classic 1934 Edition. McGraw-Hill,
1996. ISBN 0-07-024496-0.
• http://www.businessweek.com/magazine/content/06_32/b3996085.htm
• http://www.worldfinancialblog.com/investing/ben-grahams-margin-of-safety/26/
The break-even point for a product is the point where total revenue received equals the total costs
associated with the sale of the product (TR = TC).[1] A break-even point is typically calculated in
order for businesses to determine whether it would be profitable to sell a proposed product, as
opposed to attempting to modify an existing product so that it can be made lucrative. Break-even
analysis can also be used to analyze the potential profitability of an expenditure in a sales-based
business.
break-even point (in units) = fixed costs / contribution per unit
contribution per unit = selling price per unit − variable cost per unit
break-even point (in sales value) = (fixed costs / contribution per unit) × selling price per unit
Firms may still decide not to sell low-profit products, for example those not fitting well into their
sales mix. Firms may also sell products that lose money - as a loss leader, to offer a complete
line of products, etc. But if a product does not break even, or a potential product looks like it
clearly will not sell better than the break even point, then the firm will not sell, or will stop
selling, that product.
An example:
• Assume we are selling a product for £2 each.
• Assume that the variable cost associated with producing and selling the product is 60p.
• Assume that the fixed cost related to the product (the basic costs that are incurred in
operating the business even if no product is produced) is £1000.
• In this example, the firm would have to sell 1000 / (2.00 − 0.60) ≈ 714.3, i.e. 715 units,
to break even.
At the break-even point, total revenue equals total costs (net profit is zero):
TR = TC = Fixed cost + Variable cost
Selling price × Quantity = Fixed cost + Quantity × Variable cost per unit
SP × Q = FC + Q × VC
Q × (SP − VC) = FC
Break-even quantity = FC / (SP − VC)
where FC is fixed cost, SP is selling price per unit and VC is variable cost per unit
Internet research
By inserting different prices into the formula, you will obtain a number of break-even points, one
for each possible price charged. If the firm changes the selling price for its product, from £2 to
£2.30 in the example above, then it would have to sell only 1000 / (2.30 − 0.60) ≈ 588.2, i.e. 589
units, to break even, rather than 715.
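A minimal sketch of this calculation (Python, using the figures from the example above; rounding up to whole units is an assumption, since fractional units cannot be sold):

import math

def breakeven_units(fixed_cost, selling_price, variable_cost):
    # Units needed so that total contribution covers the fixed cost.
    contribution_per_unit = selling_price - variable_cost
    return math.ceil(fixed_cost / contribution_per_unit)

print(breakeven_units(1000, 2.00, 0.60))   # 715 units at a price of 2.00
print(breakeven_units(1000, 2.30, 0.60))   # 589 units at a price of 2.30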
To make the results clearer, they can be graphed. To do this, you draw the total cost curve (TC in
the diagram) which shows the total cost associated with each possible level of output, the fixed
cost curve (FC) which shows the costs that do not vary with output level, and finally the various
total revenue lines (R1, R2, and R3) which show the total amount of revenue received at each
output level, given the price you will be charging.
The break even points (A,B,C) are the points of intersection between the total cost curve (TC)
and a total revenue curve (R1, R2, or R3). The break even quantity at each selling price can be
read off the horizontal axis, and the break even price at each selling price can be read off the
vertical axis. The total cost, total revenue, and fixed cost curves can each be constructed with
simple formulae. For example, the total revenue curve is simply the product of selling price
times quantity for each output quantity. The data used in these formulae come either from
accounting records or from various estimation techniques such as regression analysis.
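As a rough illustration of such a chart (not part of the original text), the following Python/matplotlib sketch draws the fixed cost line (FC), the total cost line (TC) and three revenue lines (R1, R2, R3) for assumed prices, so the break-even intersection points can be read off; all numbers are hypothetical:

import numpy as np
import matplotlib.pyplot as plt

fixed_cost = 1000.0           # FC (assumed)
variable_cost = 0.60          # variable cost per unit (assumed)
prices = [1.20, 2.00, 2.30]   # three hypothetical selling prices for R1, R2, R3

quantity = np.linspace(0, 2000, 200)
total_cost = fixed_cost + variable_cost * quantity

plt.plot(quantity, np.full_like(quantity, fixed_cost), label="FC")
plt.plot(quantity, total_cost, label="TC")
for i, price in enumerate(prices, start=1):
    plt.plot(quantity, price * quantity, label="R%d (price %.2f)" % (i, price))

plt.xlabel("Output quantity")
plt.ylabel("Costs and revenue")
plt.legend()
plt.show()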
Limitations
• Break-even analysis is only a supply side (i.e. costs only) analysis, as it tells
you nothing about what sales are actually likely to be for the product at these
various prices.
• It assumes that fixed costs (FC) are constant. Although this is true in the
short run, an increase in the scale of production is likely to cause fixed costs
to rise.
• It assumes average variable costs are constant per unit of output, at least in
the range of likely quantities of sales. (i.e. linearity)
• It assumes that the quantity of goods produced is equal to the quantity of
goods sold (i.e., there is no change in the quantity of goods held in inventory
at the beginning of the period and the quantity of goods held in inventory at
the end of the period).
• In multi-product companies, it assumes that the relative proportions of each
product sold and produced are constant (i.e., the sales mix is constant).
Instrumental values are values like ambition, courage, persistence and politeness.
They are not ends in themselves but means of achieving terminal values.
An instrumental value is a basic conviction that a specific mode of conduct is preferable to an
opposite or converse mode of conduct. Terminal values, by contrast, are convictions about
desirable end states of existence rather than about the means of reaching them.
Emotional intelligence
Emotional intelligence (EI) describes the ability, capacity, skill or, in the case of the trait EI
model, a self-perceived grand ability to identify, assess, manage and control the emotions of one's
self, of others, and of groups.[1] Different models have been proposed for the definition of EI and
disagreement exists as to how the term should be used.[2] Despite these disagreements, which are
often highly technical, the ability EI and trait EI models (but not the mixed models) enjoy
support in the literature and have successful applications in different domains.
The earliest roots of emotional intelligence can be traced to Darwin's work on the importance of
emotional expression for survival and adaptation.[3] In the 1900s, even though traditional
definitions of intelligence emphasized cognitive aspects such as memory and problem-solving,
several influential researchers in the intelligence field of study had begun to recognize the
importance of the non-cognitive aspects. For instance, as early as 1920, E.L. Thorndike used the
term social intelligence to describe the skill of understanding and managing other people.[4]
Similarly, in 1940 David Wechsler described the influence of non-intellective factors on intelligent
behavior, and further argued that our models of intelligence would not be complete until we can
adequately describe these factors.[3] In 1983, Howard Gardner's Frames of Mind: The Theory of
Multiple Intelligences[5] introduced the idea of multiple intelligences which included both
Interpersonal intelligence (the capacity to understand the intentions, motivations and desires of
other people) and Intrapersonal intelligence (the capacity to understand oneself, to appreciate
one's feelings, fears and motivations). In Gardner's view, traditional types of intelligence, such as
IQ, fail to fully explain cognitive ability.[6] Thus, even though the names given to the concept
varied, there was a common belief that traditional definitions of intelligence are lacking in ability
to fully explain performance outcomes.
The first use of the term "emotional intelligence" is usually attributed to Wayne Payne's doctoral
thesis, A Study of Emotion: Developing Emotional Intelligence from 1985.[7] However, prior to
this, the term "emotional intelligence" had appeared in Leuner (1966). Greenspan (1989) also put
forward an EI model, followed by Salovey and Mayer (1990), and Goleman (1995). The
distinction between trait emotional intelligence and ability emotional intelligence was introduced
in 2000.[8]
As a result of the growing acknowledgement by professionals of the importance and relevance of
emotions to work outcomes,[9] the research on the topic continued to gain momentum, but it
wasn't until the publication of Daniel Goleman's best seller Emotional Intelligence: Why It Can
Matter More Than IQ that the term became widely popularized.[10] Nancy Gibbs' 1995 Time
magazine article highlighted Goleman's book and was the first in a string of mainstream media
interest in EI.
Defining emotional intelligence
Substantial disagreement exists regarding the definition of EI, with respect to both terminology
and operationalizations. There has been much confusion regarding the exact meaning of this
construct. The definitions are so varied, and the field is growing so rapidly, that researchers are
constantly amending even their own definitions of the construct. At the present time, there are
three main models of EI:
• Ability EI models
• Mixed models of EI
• Trait EI model
Succession planning is a critical part of the human resources planning process. Human
resources planning (HRP) is the process of having the right number of employees in the
right positions in the organization at the time that they are needed. HRP involves
forecasting, or predicting, the organization's needs for labor and supply of labor and
then taking steps to move people into positions in which they are needed.
Despite its many advantages, internal selection can also have some drawbacks. While
the opportunities for advancement may be motivating to employees who believe that
they can move up within the organization at a future date, those employees who feel that
they have been passed over for promotion or are at a career plateau are likely to become
discouraged and may choose to leave the organization. Having an employee who has
been trained and socialized by the organization may limit the availability of skills,
innovation, or creativity that may be found when new employees are brought in from
the outside. Finally, internal selection still leaves a position at a lower level that must be
staffed from the outside, which may not reduce recruitment and selection costs.
Many companies organize their management training and development efforts around
succession planning. However, not all organizations take a formal approach to it, and
instead do so very informally, using the opinions of managers as the basis for
promotion, with little consideration of the actual requirements of future positions.
Informal succession planning is likely to result in managers who are promoted due to
criteria that are unrelated to performance, such as networking within and outside of the
organization. Organizations would be better served by promoting managers who were
able to successfully engage in human resource management activities and communicate
with employees. Poor succession planning, such as just described, can have negative
organizational consequences. Research indicates that poor preparation for advancement
into managerial positions leaves almost one-third of new executives unable to meet
company expectations for job performance. This may have negative repercussions for
the newly promoted manager, the other employees, and the company's bottom line.
The second major step for succession planning is to define and measure individual
qualifications needed for each targeted position. Such qualifications should be based on
information from a recent job analysis. Once these qualifications are defined, employees
must be evaluated on these qualifications to identify those with a high potential for
promotion. This may involve assessing both the abilities and the career interests of
employees. If a lower-level manager has excellent abilities but little interest in
advancement within the organization, then development efforts aimed at promotion will
be a poor investment.
To determine the level of abilities of employees within the organization, many of the
same selection tools that are used for assessing external candidates can be used, such as
general mental ability tests, personality tests, and assessment centers. However, when
selecting internally, the company has an advantage in that it has much more data on
internal candidates, such as records of an employee's career progress, experience, past
performance, and self-reported interests regarding future career steps.
DEVELOPING MANAGERS.
The third step of succession planning, which is actually ongoing throughout the process,
is the development of the managers who are identified as having promotion potential. In
order to prepare these lower-level managers for higher positions, they need to engage in
development activities to improve their skills. Some of these activities may include:
• Job rotation through key executive positions. By working in different executive positions
throughout the organization, the manager gains insight into the overall strategic
workings of the company. Additionally, the performance of this manager at the executive
level can be assessed before further promotions are awarded.
• Education. Formal courses may improve managers' abilities to understand the financial
and operational aspects of business management. Many companies will pay for
managers to pursue degrees such as Masters in Business Administration (MBAs), which
are expected to provide managers with knowledge that they could not otherwise gain
from the company's own training and development programs.
• Performance-related training and development for current and future roles. Specific
training and development provided by the company may be required for managers to
excel in their current positions and to give them skills that they need in higher-level
positions.
In the final step of succession planning, the organization identifies a career path for each
high-potential candidate—those who have the interest and ability to move upward in the
organization. A career path is the typical set of positions that an employee might hold in
the course of his or her career. In succession planning, it is a road map of positions and
experiences designed to prepare the individual for an upper-level management position.
Along with career paths, the organization should develop replacement charts,
which indicate the availability of candidates and their readiness to step into the
various management positions. These charts are depicted as organizational charts in
which possible candidates to replace others are listed in rank order for each
management position. These rank orders are based on the candidates' potential scores,
which are derived on the basis of their past performance, experience, and other relevant
qualifications. The charts indicate who is currently ready for promotion and who needs
further grooming to be prepared for an upper-level position.
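As a simple illustration of how such a rank ordering might be produced (this Python sketch and its candidate data are hypothetical, not taken from the text), candidates can be sorted by a potential score derived from past performance and experience:

# Hypothetical replacement-chart data for one management position.
candidates = [
    {"name": "A. Rao",   "performance": 4.5, "experience_years": 8,  "ready": True},
    {"name": "B. Mehta", "performance": 4.8, "experience_years": 5,  "ready": False},
    {"name": "C. Singh", "performance": 3.9, "experience_years": 12, "ready": True},
]

def potential_score(c):
    # Assumed weighting: past performance counts more than raw experience.
    return 0.7 * c["performance"] + 0.3 * (c["experience_years"] / 10)

ranked = sorted(candidates, key=potential_score, reverse=True)
for rank, c in enumerate(ranked, start=1):
    status = "ready now" if c["ready"] else "needs further grooming"
    print(rank, c["name"], round(potential_score(c), 2), status)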
The first potential problem in succession planning is the crowned prince syndrome,
which occurs when upper management considers for advancement only those
employees who have become visible to them. In other words, rather than looking at a
wider array of individual employees and their capabilities, upper management focuses
only on one person—the "crowned prince." This person is often one who has been
involved in high-profile projects, has a powerful and prominent mentor, or has
networked well with organizational leaders. There are often employees throughout the
organization who are capable of and interested in promotion who may be overlooked
because of the more visible and obvious "crowned prince," who is likely to be promoted
even if these other employees are available. Not only are performance problems a
potential outcome of this syndrome, but also the motivation of current employees may
suffer if they feel that their high performance has been overlooked. This may result in
turnover of high quality employees who have been overlooked for promotion.
TALENT DRAIN.
The talent drain is the second potential problem that may occur in succession planning.
Because upper management must identify only a small group of managers to receive
training and development for promotion, those managers who are not assigned to
development activities may feel overlooked and therefore leave the organization. This
turnover may reduce the number of talented managers that the organization has at the
lower and middle levels of the hierarchy. Exacerbating this problem is that these
talented managers may work for a competing firm or start their own business, thus
creating increased competition for their former company.
The final problem that can occur in succession planning is the concern with managing
large amounts of human resources information. Because succession planning requires
retention of a great deal of information, it is typically best to store and manage it on a
computer. Attempting to maintain such records by hand may prove daunting. Even on
the computer, identifying and evaluating many years' worth of information about
employees' performance and experiences may be difficult. Add to that the challenges of
comparing distinct records of performance to judge promotion capability, and this
information overload is likely to increase the difficulty of successful succession
planning.
Employment is the first operative function of HRM. It is concerned with securing and employing people possessing the required kind and
level of human resources necessary to achieve the organizational objectives. It covers functions such as job analysis, human resource
planning, recruitment, selection, placement, induction and internal mobility.
Job Analysis: It is the process of study and collection of information relating to the operations and responsibilities of a specific job. It
includes:
1.Collection of data, information, facts and ideas relating to various aspects of jobs including men, machines and materials.
2.Preparation of job description, job specification, job requirements and employee specification which help in identifying the nature, levels
and quantum of human resources.
3.Providing the guides, plans and basis for job design and for all operative functions of HRM.
Human Resource Planning: It is the process of determining and assuring that the organization will have an adequate number of qualified
persons, available at the proper times, performing jobs which meet the needs of the organization and which provide satisfaction for the
individuals involved. It involves:
*Estimation of present and future requirement and supply of human resources basing on objectives and long range plans of the organization.
*Calculation of net human resources requirement based on present inventory of human resources.
*Taking steps to mould, change, and develop the strength of existing employees in the organization so as to meet the future human
resources requirements.
*Preparation of action programs to get the rest of human resources from outside the organization and to develop the human resources of
existing employees.
Recruitment: It is the process of searching for prospective employees and stimulating them to apply for jobs in an organization.
Selection: It is the process of ascertaining the qualifications, experience, skill, knowledge, etc., of an applicant with a view to appraising
his or her suitability for a job.
Placement: It is the process of assigning the selected candidate to the most suitable job in terms of job requirements. It is the matching of
employee specifications with job requirements. This function includes:
(b) Conducting follow-up studies and appraising employee performance in order to determine the employee's adjustment to the job.
Induction and Orientation: Induction and orientation are the techniques by which a new employee is rehabilitated in the changed
surroundings and introduced to the practices, policies, purposes, people, etc., of the organization.
(a) Acquaint the employee with the company philosophy, objectives, policies, career planning and development, opportunities, products,
market share, social and community standing, company history, culture, etc.
(b) Introduce the employee to the people with whom he has to work, such as peers, supervisors and subordinates.
(c) Mould the employee's attitude by orienting him to the new working and social environment.
Human resource management deals with the management of people in an organization. It is widely accepted that human resources are the
main component of an organization, and that the success or failure of an organization depends on how effectively this component is
managed. The concept integrates and involves the entire human force of the organization, working together with a sense of
common purpose that has to be infused into the organization.
Human resource management is dedicated to developing a suitable corporate culture, and to designing and implementing programs that
reflect the core values of the enterprise. Human resource management is proactive rather than waiting to be told what to do about recruiting,
paying or training people. Human resource management is related to the continuous process of manpower planning, selection, performance
appraisal, administration, and training and development. It is a deep-rooted, comprehensive activity taken up to improve the quality of the
human beings who are the vital assets of the organization; the competence and capability of employees are improved by adopting scientific
methods which enable them to play their assigned roles effectively.
Today, management techniques in corporate enterprises are changing very fast, and this is even more so in human resource management.
The human resource development manager has to actuate every person who works in the organization. His job is to create a team spirit in
the minds of the members of the corporate enterprise; before he actuates his workers he should be able to self-actuate and work with his
group of workers. "Actuating is getting all the members of the group to work and to strive to achieve the objectives of the enterprise."
MEANING
In a simple sense, human resource management means employing people, developing their resources, utilizing and maintaining them, and
compensating their services in tune with the job and organizational requirements, with a view to achieving the goals of the organization, the
individual and society.
Human resource management functions help managers in recruiting, managing relations, and training and developing members for an
organization. Human resource management is also concerned with hiring and motivating people in an organization; it focuses on people in
organizations.
DEFINITIONS
According to Leon C. Megginson, human resource management is "the total knowledge, skills, creative abilities, talents and aptitudes of
an organization's workforce, as well as the values, attitudes and beliefs of the individuals involved".
According to Flippo, human resource management is "the planning, organizing, directing and controlling of the procurement, development,
compensation, integration and maintenance of human resources to the end that individual, organizational and social objectives are
accomplished".
Human resource management can also be defined as planning, organizing and controlling the functions of employing, developing and
compensating human resources, resulting in the creation and development of human relations, with a view to contributing proportionately
to organizational, individual and social goals.
• Setting general and specific management policy for organizational relationships, and establishing and maintaining a suitable
organization for better cooperation.
• Collective bargaining, contract negotiation, contract administration and grievance handling.
• Aiding in the self-development of employees at all levels by providing opportunities for personal development and growth and
for acquiring the requisite skills and experience.
• Developing and maintaining motivation among workers by providing certain incentives.
OBJECTIVES
The primary objective of human resource management is to ensure the availability of a competent and willing workforce to an organization.
Beyond this there are other objectives too. Specifically, human resource management objectives are divided into four categories: social,
organizational, functional and personal.
SOCIAL OBJECTIVE
Every organization has to set objectives keeping society in mind. Along with its organizational objectives it has to set certain social
objectives in order to help society. The primary objective here is to be ethically and socially responsive to the needs and challenges of
society while minimizing the negative impact of such demands upon the organization. The failure of organizations to use their resources
for society's benefit in ethical ways may lead to restrictions.
ORGANIZATIONAL OBJECTIVES
To recognize the role of human resource management in bringing about organizational effectiveness. Human resource management is not
an end in itself; it is only a means to assist the organization with its primary objectives. The department exists to serve the rest of the
organization.
FUNCTIONAL OBJECTIVES
To maintain the department's contribution at a level appropriate to the organization's needs. The function should not become too expensive
at the cost of the organization it serves; resources are wasted when HRM is either more or less sophisticated than the organization's
demands require.
PERSONAL OBJECTIVES
To assist employees in achieving their personal goals insofar as these goals enhance the individual's contribution to the organization. The
personal objectives of employees have to be met if workers are to be maintained and retained; otherwise employee performance and
satisfaction may decline and employees may try to leave the organization.
1. SOCIAL SIGNIFICANCE: Proper management enhances people's dignity by satisfying their social needs. This is
done by maintaining a balance between the jobs available and the job seekers, according to their needs and
qualifications; providing suitable and most productive employment; eliminating waste of human resources; and
helping people to make their own decisions in their own interests.
2. PROFESSIONAL SIGNIFICANCE: By providing a healthy working environment it promotes teamwork among
employees. This is done by maintaining the dignity of the employee as a human being, providing maximum
opportunity for personal development, improving employees' skills and capacities, and correcting errors through
the reallocation of work.
3. SIGNIFICANCE FOR THE INDIVIDUAL ENTERPRISE: It can help the organization accomplish its goals by creating
the right attitude among employees, utilizing human resources to the maximum extent, and attaining cooperation
among the employees.
Human resource development is mainly concerned with developing the skills, knowledge and competencies of people; it is a people-
oriented concept.
Human resources and human resource management are related to human resource development: human resources are simply people, and
human resource management is the activity of managing the people and the business of an organization. Human resource development is
the systematic process of change within an organization; it is a specialized process that assists people to reach their potential and further
strengthens the goals of the organization. Human resource development can be applied at both the organizational and the national level.
The concept of human resource development was formally introduced by Leonard Nadler in 1969 at a conference organized by the
American Society for Training and Development.
VARIABLE PAY: It is compensation that is linked directly to performance accomplishment (bonuses, incentives, stock options).
BENEFITS: These are indirect rewards given to an employee or group of employees as a part of organizational membership (health
insurance, vacation pay, pension, etc.).
1. ATTRACT TALENT: Compensation needs to be high enough to attract talented people. Since many firms compete to
hire the services of competent people, the salaries offered must be high enough to motivate them to apply.
2. RETAIN TALENT: If compensation levels fall below the expectations of employees or are not competitive,
employees may quit in frustration.
3. ENSURE EQUITY: Pay should equal the worth of a job; similar jobs should get similar pay, and more qualified
people should get better wages.
4. CONTROL COSTS: The cost of hiring people should not be too high. Effective compensation management
ensures that workers are neither overpaid nor underpaid.
5. EASE OF OPERATION: The compensation management system should be easy to understand and operate.
Only then will it promote understanding regarding pay-related matters between employees, the union and
management.
1.3.4 COMPONENTS OF PAY STRUCTURE IN INDIA
WAGES: In India, different Acts include different items under wages. Under the Workmen's Compensation Act, 1923, wages for holidays,
overtime pay, bonus, attendance bonus and good conduct bonus form part of wages. The following items, however, are generally excluded
from wages:
1. Bonus or other payments under a profit-sharing scheme which do not form part of the contract of employment.
2. Value of any house accommodation, supply of light, water, medical attendance, travelling allowance or payment in lieu
thereof, or any other concession.
3. Any sum paid to defray special expenses entailed by the nature of the employment.
4. Any contribution to pension, provident fund or a scheme of social security and social insurance benefits.
BASIC WAGE: The basic wage in India corresponds with what has been recommended by the Fair Wages Committee (1948) and the 15th
Indian Labour Conference (1957). The various awards by wage tribunals and wage boards, pay commission reports and job evaluations
also serve as guiding principles in determining the basic wage.
DEARNESS ALLOWANCE: It is the allowance paid to employees in order to enable them to face the increasing dearness of essential
commodities. It serves as a cushion, a sort of insurance against increases in the price levels of commodities. Instead of increasing wages
every time there is a rise in price levels, DA is paid to neutralize the effects of inflation; when prices go down, DA can always be reduced.
This has, however, remained a hypothetical situation, as prices never come down enough to necessitate a cut in the dearness allowance
payable to employees.
WAGE AND SALARY ADMINISTRATION
Employee compensation may be classified into two types: basic compensation and supplementary compensation. Basic compensation
refers to monetary payments in the form of wages and salaries. The term wage implies remuneration to workers doing manual work, while
the term salary is usually defined to mean compensation to office, managerial, technical and professional staff.
1.3.5 OBJECTIVES: A sound plan of compensation administration seeks to achieve the following objectives:
RESEARCH DESIGN
Research design decides the fate of the proposal and its outcome; if the design is
defective, the whole outcome and report will be faulty and undependable. Designing is the
preliminary step in every activity, and it is at the designing stage that the purpose for which
the research is to be used also has to be decided. Designing thus provides a picture of
the whole before starting the work.
OBJECTIVES OF THE STUDY
1. To analyse the various compensation methods existing in the organization.
2. To identify the relationship of performance with compensation methods in the organization.
3. To check the level of motivation the employees get through the compensation provided.
4. To analyse employees' level of satisfaction with the compensation provided by the company.
5. To identify the process linking performance and compensation methods.
METHODOLOGY
Research methodology may be understood as the science of studying how research is done
scientifically. It is a way to systematically solve a research problem.
RESEARCH DESIGN IN BAMUL
"A research design is the arrangement of conditions for collection and
analysis of data in a manner that aims to combine relevance to the
research purpose with economy in procedure." In fact, the research
design is the conceptual structure within which research is conducted;
it constitutes the blueprint for the collection, measurement and
analysis of data. As such, the design includes an outline of what the
researcher will do, from writing the hypothesis and its operational
implications to the final analysis of data.
• Schedule the meeting as close as possible to the employee's departure from the company.
Many companies plan this as the last stop for departing employees.
• Explain the purpose of the interview to the employee: it is to gather information about
the employee's perception of the company and how it treats employees.
• Assure the employee that comments made during the exit interview will remain
anonymous except in the case of allegations of misconduct.
• Be prepared to answer employee's questions.
• Set the right tone. Be warm, receptive and interested in what the employee has to say.
Listen. Don't insert personal comments, provide opinions or defend the company and its
actions. Your role is to gather information and stay objective.
• Review any noncompetition or nondisclosure agreements they may have signed.
• Gather or verify that all company property and material has been returned.
• Document the exit interview.
Many companies develop an exit interview form that is completed by the interviewer.
Modigliani-Miller theorem
The Modigliani-Miller theorem (of Franco Modigliani, Merton Miller) forms the basis for modern
thinking on capital structure. The basic theorem states that, under a certain market price process
(the classical random walk), in the absence of taxes, bankruptcy costs, and asymmetric information,
and in an efficient market, the value of a firm is unaffected by how that firm is financed.[1] It does
not matter if the firm's capital is raised by issuing stock or selling debt. It does not matter what
the firm's dividend policy is. Therefore, the Modigliani-Miller theorem is also often called the
capital structure irrelevance principle.
Modigliani was awarded the 1985 Nobel Prize in Economics for this and other contributions.
Miller was awarded the 1990 Nobel Prize in Economics, along with Harry Markowitz and William
Sharpe, for their "work in the theory of financial economics," with Miller specifically cited for
"fundamental contributions to the theory of corporate finance."
The theorem was originally proven under the assumption of no taxes. It is made up of two
propositions which can also be extended to a situation with taxes.
Consider two firms which are identical except for their financial structures. The first (Firm U) is
unlevered: that is, it is financed by equity only. The other (Firm L) is levered: it is financed
partly by equity, and partly by debt. The Modigliani-Miller theorem states that the value of the
two firms is the same.
Figure: Proposition II with risky debt. As leverage (D/E) increases, the WACC (k0) stays constant.
Proposition I (with taxes): VL = VU + TC·D
where
• VL is the value of a levered firm.
• VU is the value of an unlevered firm.
• TC·D is the tax rate (TC) multiplied by the value of debt (D).
• The term TC·D assumes debt is perpetual.
This means that there are advantages for firms to be levered, since corporations can deduct
interest payments. Therefore leverage lowers tax payments. Dividend payments are non-
deductible.
Proposition II (with taxes): rE = r0 + (D/E) × (1 − TC) × (r0 − rD)
where
• rE is the required rate of return on equity, or cost of levered equity (the cost of unlevered equity
plus a financing premium).
• r0 is the cost of capital of an all-equity (unlevered) firm, rD is the cost of debt, D/E is the
debt-to-equity ratio, and TC is the corporate tax rate.
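A small numerical sketch of the two propositions with taxes (Python; all inputs are assumed for illustration and are not from the article):

# Hypothetical inputs.
V_U = 1000.0   # value of the unlevered firm
D = 400.0      # market value of (perpetual) debt
t_c = 0.30     # corporate tax rate
r_0 = 0.10     # cost of capital of an all-equity firm
r_d = 0.05     # cost of debt

# Proposition I with taxes: the levered firm is worth more by the tax shield t_c * D.
V_L = V_U + t_c * D            # 1000 + 120 = 1120

# Proposition II with taxes: the cost of levered equity rises with leverage.
E = V_L - D                    # equity value of the levered firm
r_e = r_0 + (D / E) * (1 - t_c) * (r_0 - r_d)

print(V_L, round(r_e, 4))      # 1120.0 0.1194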
marketing myopia
Definition
Short sighted and inward looking approach to marketing that focuses on the needs of the firm instead
of defining the firm and its products in terms of the customers' needs and wants. Such self-centered
firms fail to see and adjust to the rapid changes in their markets and, despite their previous eminence,
falter, fall, and disappear. This concept was discussed in an article (titled 'Marketing Myopia,' in July-
August 1960 issue of Harvard Business Review) by Harvard Business School emeritus professor of
marketing, Theodore C. Levitt (1925-), who suggests that firms get trapped in this bind because they
omit to ask the vital question, "What business are we in?"
marketing myopia
narrow-minded approach to a marketing situation where only short-range goals are considered or where the
marketing focuses on only one aspect out of many possible marketing attributes. Because of its
shortsightedness, marketing myopia is an inefficient marketing approach.
Related term (from the Dictionary of Marketing Terms):
marketing concept
goal-oriented, integrated philosophy practiced by producers of goods and services that focuses on satisfying
the needs of consumers over the needs of the producing company. The marketing concept holds that the
desires and needs of the target market must be determined and satisfied in order to successfully achieve the
goals of the producer.
Product Mix
Product mix is the combination of products manufactured or traded by the same business
house to reinforce its presence in the market, increase market share and increase turnover
for more profitability. Normally the product mix of a medium-sized organization stays within
the synergy of its other products; however, large industrial groups may have diversified
products within their core competencies. Larsen & Toubro Ltd, Godrej and Reliance in India
are some examples.
One of the realities of business is that most firms deal with multiple products. This helps a firm
diffuse its risk across different product groups. It also enables the firm to appeal to a much
larger group of customers, or to different needs of the same customer group. So when
Videocon chose to diversify into other consumer durables like music systems, washing
machines and refrigerators, it sought to satisfy the needs of the middle and upper-middle
income group of consumers.
Likewise, Bajaj Electricals, a household name in India, has almost ninety products in its
portfolio, ranging from low-value items like bulbs to high-priced consumer durables like
mixers, and to luminaires and lighting projects. The number of products carried by a firm at a
given point of time is called its product mix. This product mix contains product lines and
product items. In other words, it is the composite of products offered for sale by a firm.
Often firms take decisions to change their product mix. These decisions are dictated by the
above factors and also by the changes occurring in the market place. For example, the changing
lifestyles of Indian consumers led BPL-Sanyo to launch an entire range of white goods like
refrigerators, washing machines and microwave ovens; it also motivated the firm to launch
other entertainment electronics. Rahejas, a well-known builders' firm in Bombay, took a
major decision to convert one of its theatre buildings in the western suburbs of Bombay into
a large garments and accessories store for men, women and children, perhaps the first of
its kind in India to carry almost all products required by these customer groups. Competition
from low-priced washing powders (mainly Nirma) forced Hindustan Lever to launch
different brands of detergent powder at different price levels, positioned at different market
segments. Customer preference for herbs, mainly shikakai, motivated Lever to launch black
Sunsilk shampoo, which contains shikakai. Also, low purchasing power and a cultural bias
against shampoo made Hindustan Lever consider smaller packaging, mainly sachets, for
single use. So, it is the changes or anticipated changes in the market place that motivate a
firm to consider changes in its product mix.
The product mix of a company, which is generally defined as the total composite of products
offered by a particular organization, consists of both product lines and individual products. A
product line is a group of products within the product mix that are closely related, either because
they function in a similar manner, are sold to the same customer groups, are marketed through
the same types of outlets, or fall within given price ranges. A product is a distinct unit within the
product line that is distinguishable by size, price, appearance, or some other attribute. For
example, all the courses a university offers constitute its product mix; courses in the marketing
department constitute a product line; and the basic marketing course is a product item. Product
decisions at these three levels are generally of two types: those that involve the width (variety) and
depth (assortment) of the product line, and those that involve changes in the product mix over
time.
The depth (assortment) of the product mix refers to the number of product items offered within
each line; the width (variety) refers to the number of product lines a company carries. For
example, Table 1 illustrates the hypothetical product mix of a major state university.
Table 1: Hypothetical State University Product Mix (wide width, average depth). The product
lines (departments) include Political Science, Education, Mathematics, Engineering, Nursing,
English, Biology, Physics and Chemistry; the product items within each line are course offerings
such as Political Theory, Government Relations, State Government, Teaching, Teaching
Internship, Post-Secondary Teaching, Math Theory, Advanced Math, English Literature,
European Writers and Writing.
The product lines are defined in terms of academic departments. The depth of each line is shown
by the number of different product items—course offerings—offered within each product line.
(The examples represent only a partial listing of what a real university would offer.) The state
university has made the strategic decision to offer a diverse product mix. Because the university
has numerous academic departments, it can appeal to a large cross-section of potential students.
This university has decided to offer a wide product line (academic departments), but the depth of
each department (course offerings) is only average.
In order to see the difference in product mix, product line, and products, consider a smaller
college that focuses on the sciences, represented in Table 2. This college has decided to
concentrate its resources in a few departments (again, this is only a partial listing).
Table 2: Hypothetical Small College Product Mix (narrow width, large depth)
Mathematics: Geometric Concepts; Analytic Geometry and Calculus; Differential Equations
Physics: Intermediate Physics; Advanced Physics and Astronomy; Electromagnetic Theory
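To make the width/depth terminology concrete, here is a small Python sketch (using hypothetical data mirroring Table 2) that computes the width and depth of a product mix represented as product lines mapped to product items:

# Hypothetical product mix: product lines -> product items (courses).
product_mix = {
    "Mathematics": ["Geometric Concepts", "Analytic Geometry and Calculus", "Differential Equations"],
    "Physics": ["Intermediate Physics", "Advanced Physics and Astronomy", "Electromagnetic Theory"],
}

width = len(product_mix)                 # number of product lines
depth = {line: len(items) for line, items in product_mix.items()}
average_depth = sum(depth.values()) / width

print(width, depth, average_depth)       # 2 lines, depth 3 in each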
CORPORATE-LEVEL STRATEGY
Corporate-level strategies address the entire strategic scope of the enterprise. This is the
"big picture" view of the organization and includes deciding in which product or service
markets to compete and in which geographic regions to operate. For multi-business
firms, the resource allocation process—how cash, staffing, equipment and other
resources are distributed—is typically established at the corporate level. In addition,
because market definition is the domain of corporate-level strategists, the responsibility
for diversification, or the addition of new products or services to the existing
product/service line-up, also falls within the realm of corporate-level strategy. Similarly,
whether to compete directly with other firms or to selectively establish cooperative
relationships—strategic alliances—falls within the purview of corporate-level strategy,
while requiring ongoing input from business-level managers.
Table 1: Corporate, Business, and Functional Strategy (columns: Level of Strategy, Definition, Example).
1. What should be the scope of operations; i.e.; what businesses should the firm be in?
2. How should the firm allocate its resources among existing businesses?
3. What level of diversification should the firm pursue; i.e., which businesses represent the
company's future? Are there additional businesses the firm should enter or are there
businesses that should be targeted for termination or divestment?
4. How diversified should the corporation's business be? Should we pursue related
diversification; i.e., similar products and service markets, or is unrelated diversification;
i.e., dissimilar product and service markets, a more suitable approach given current and
projected industry conditions? If we pursue related diversification, how will the firm
leverage potential cross-business synergies? In other words, how will adding new
product or service businesses benefit the existing product/service line-up?
5. How should the firm be structured? Where should the boundaries of the firm be drawn
and how will these boundaries affect relationships across businesses, with suppliers,
customers and other constituents? Do the organizational components such as research
and development, finance, marketing, customer service, etc. fit together? Are the
responsibilities of each business unit clearly identified and is accountability established?
The BCG matrix classifies business-unit performance on the basis of the unit's relative
market share and the rate of market growth as shown in Figure 1.
Figure 1: BCG Model of Portfolio Analysis
Products and their respective strategies fall into one of four quadrants. The typical starting point
for a new business is as a question mark. If the product is new, it has no market share, but the
predicted growth rate is good. What typically happens in an organization is that management is
faced with a number of these types of products but with too few resources to develop all of them.
Thus, the strategic decision-maker must determine which of the products to attempt to develop
into commercially viable products and which ones to drop from consideration. Question marks
are cash users in the organization. Early in their life, they contribute no revenues and require
expenditures for market research, test marketing, and advertising to build consumer awareness.
If the correct decision is made and the product selected achieves a high market share, it
becomes a BCG matrix star. Stars have high market share in high-growth markets. Stars
generate large cash flows for the business, but also require large infusions of money to
sustain their growth. Stars are often the targets of large expenditures for advertising and
research and development to improve the product and to enable it to establish a
dominant position in the industry.
Cash cows are business units that have high market share in a low-growth market. These
are often products in the maturity stage of the product life cycle. They are usually well-
established products with wide consumer acceptance, so sales revenues are usually high.
The strategy for such products is to invest little money into maintaining the product and
divert the large profits generated into products with more long-term earnings potential,
i.e., question marks and stars.
Dogs are businesses with low market share in low-growth markets. These are often cash
cows that have lost their market share or question marks the company has elected not to
develop. The recommended strategy for these businesses is to dispose of them for
whatever revenue they will generate and reinvest the money in more attractive
businesses (question marks or stars).
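A minimal sketch of how business units might be assigned to these four quadrants (Python; the cutoffs of 1.0 relative market share and 10 percent market growth are common textbook conventions assumed here, not taken from the text):

def bcg_quadrant(relative_market_share, market_growth_rate,
                 share_cutoff=1.0, growth_cutoff=0.10):
    # Classify a business unit into the BCG matrix using the assumed cutoffs.
    high_share = relative_market_share >= share_cutoff
    high_growth = market_growth_rate >= growth_cutoff
    if high_share and high_growth:
        return "star"
    if high_share:
        return "cash cow"
    if high_growth:
        return "question mark"
    return "dog"

# Hypothetical business units: (relative market share, market growth rate).
units = {"Unit A": (1.8, 0.15), "Unit B": (2.2, 0.03),
         "Unit C": (0.4, 0.20), "Unit D": (0.3, 0.02)}
for name, (share, growth) in units.items():
    print(name, bcg_quadrant(share, growth))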
Despite its simplicity, the BCG matrix suffers from limited variables on which to base
resource allocation decisions among the businesses making up the corporate portfolio.
Notice that the only two variables composing the matrix are relative market share and
the rate of market growth. Now consider how many other factors contribute to business
success or failure. Management talent, employee commitment, industry forces such as
buyer and supplier power and the introduction of strategically-equivalent substitute
products or services, changes in consumer preferences, and a host of others determine
ultimate business viability. The BCG matrix is best used, then, as a beginning point, but
certainly not as the final determination for resource allocation decisions as it was
originally intended. Consider, for instance, Apple Computer. With a market share for its
Macintosh-based computers below ten percent in a market notoriously saturated with a
number of low-cost competitors and growth rates well-below that of other technology
pursuits such as biotechnology and medical device products, the BCG matrix would
suggest Apple divest its computer business and focus instead on the rapidly growing
iPod business (its music download business). Clearly, though, there are both
technological and market synergies between Apple's Macintosh computers and its fast-
growing iPod business. Divesting the computer business would likely be tantamount to
destroying the iPod business.
A more stringent approach, but still one with weaknesses, is a competitive assessment. A
competitive assessment is a technique for ranking an organization relative to its peers in
the industry. The advantage of a competitive assessment over the BCG matrix for
corporate-level strategy is that the competitive assessment includes critical success
factors, or factors that are crucial for an organization to prevail when all the organizations in an
industry are competing for the same customers. A six-step process that allows
corporate strategists to define appropriate variables, rather than being locked into the
market share and market growth variables of the BCG matrix, is used to develop a table
that shows a business's ranking relative to the critical success factors that managers
identify as the key factors influencing failure or success (a small worked illustration follows
the list below). These steps include:
1. Identifying key success factors. This step allows managers to select the most appropriate
variables for its situation. There is no limit to the number of variables managers may
select; the idea, however, is to use those that are key in determining competitive
strength.
3. Identifying main industry rivals. This step helps managers focus on one of the most
common external threats; competitors who want the organization's market share.
6. Adding the values. The sum of the values for a manager's organization versus
competitors gives a rough idea if the manager's firm is ahead or behind the competition
on weighted key success factors that are critical for market success.
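The sketch below (Python, with hypothetical success factors, importance weights and 1-10 ratings) illustrates the weighted-sum idea behind these steps:

# Hypothetical key success factors with weights (summing to 1.0) and ratings for three firms.
factors = {
    "product quality":    {"weight": 0.4, "Our Firm": 8, "Rival A": 6, "Rival B": 9},
    "cost position":      {"weight": 0.3, "Our Firm": 5, "Rival A": 9, "Rival B": 6},
    "distribution reach": {"weight": 0.3, "Our Firm": 7, "Rival A": 7, "Rival B": 5},
}

firms = ["Our Firm", "Rival A", "Rival B"]
scores = {firm: sum(f["weight"] * f[firm] for f in factors.values()) for firm in firms}

for firm, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(firm, round(score, 2))   # higher weighted total = stronger competitive position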
A competitive strength assessment is superior to a BCG matrix because it adds more
variables to the mix. In addition, these variables are weighted in importance in contrast
to the BCG matrix's equal weighting of market share and market growth. Regardless of
these advantages, competitive strength assessments are still limited by the type of data
they provide. When the values are summed in step six, each organization has a number
assigned to it. This number is compared against other firms to determine which is
competitively the strongest. One weakness is that these data are ordinal: they can be
ranked, but the differences among them are not meaningful. A firm with a score of four
is not twice as good as one with a score of two, but it is better. The degree of
"betterness," however, is not known.
GROWTH STRATEGIES
Growth strategies are designed to expand an organization's performance, usually as
measured by sales, profits, product mix, market coverage, market share, or other
accounting and market-based variables. Typical growth strategies involve one or more
of the following:
1. With a concentration strategy the firm attempts to achieve greater market penetration by
becoming highly efficient at servicing its market with a limited product line (e.g.,
McDonalds in fast foods).
2. By using a vertical integration strategy, the firm attempts to expand the scope of its
current operations by undertaking business activities formerly performed by one of its
suppliers (backward integration) or by undertaking business activities performed by a
business in its channel of distribution (forward integration).
3. A diversification strategy entails moving into different markets or adding different
products to its mix. If the products or markets are related to existing product or service
offerings, the strategy is called concentric diversification. If expansion is into products or
services unrelated to the firm's existing business, the diversification is called
conglomerate diversification.
STABILITY STRATEGIES
When firms are satisfied with their current rate of growth and profits, they may decide
to use a stability strategy. This strategy is essentially a continuation of existing
strategies. Such strategies are typically found in industries having relatively stable
environments. The firm is often making a comfortable income operating a business that
it knows, and sees no need to make the psychological and financial investment that
would be required to undertake a growth strategy.
RETRENCHMENT STRATEGIES
Retrenchment strategies involve a reduction in the scope of a corporation's activities,
which also generally necessitates a reduction in number of employees, sale of assets
associated with discontinued product or service lines, possible restructuring of debt
through bankruptcy proceedings, and in the most extreme cases, liquidation of the firm.
• A divestment decision occurs when a firm elects to sell one or more of the businesses in
its corporate portfolio. Typically, a poorly performing unit is sold to another company
and the money is reinvested in another business within the portfolio that has greater
potential.
• Bankruptcy involves legal protection against creditors or others allowing the firm to
restructure its debt obligations or other payments, typically in a way that temporarily
increases cash flow. Such restructuring allows the firm time to attempt a turnaround
strategy. For example, since the airline hijackings and the subsequent tragic events of
September 11, 2001, many of the airlines based in the U.S. have filed for bankruptcy to
avoid liquidation as a result of stymied demand for air travel and rising fuel prices. At
least one airline has asked the courts to allow it to permanently suspend payments to its
employee pension plan to free up positive cash flow.
• Liquidation is the most extreme form of retrenchment. Liquidation involves the selling
or closing of the entire operation. There is no future for the firm; employees are released,
buildings and equipment are sold, and customers no longer have access to the product or
service. This is a strategy of last resort and one that most managers work hard to avoid.
BUSINESS-LEVEL STRATEGIES
Business-level strategies are similar to corporate-level strategies in that they focus on overall performance. In contrast to corporate-level strategy, however, they focus on only one business rather than on a portfolio of businesses. Business units represent individual entities
oriented toward a particular industry, product, or market. In large multi-product or
multi-industry organizations, individual business units may be combined to form
strategic business units (SBUs). An SBU represents a group of related business
divisions, each responsible to corporate head-quarters for its own profits and losses.
Each strategic business unit will likely have its own competitors and its own unique strategy. Business-level strategies commonly focus on a particular product or service line and involve decisions regarding individual products within that line. There are also
strategies regarding relationships between products. One product may contribute to
corporate-level strategy by generating a large positive cash flow for new product
development, while another product uses the cash to increase sales and expand market
share of existing businesses. Given this potential for business-level strategies to impact
other business-level strategies, business-level managers must provide ongoing, intensive
information to corporate-level managers. Without such information, corporate-level managers cannot effectively manage overall organizational direction.
Business-level strategies are thus primarily concerned with:
4. Monitoring product or service markets so that strategies conform to the needs of the
markets at the current stage of evolution.
ANALYSIS OF BUSINESS-LEVEL STRATEGIES
PORTER'S GENERIC STRATEGIES.
Cost leadership provides firms above-average returns even with strong competitive
pressures. Lower costs allow the firm to earn profits after competitors have reduced
their profit margin to zero. Low-cost production further limits pressure from customers to lower prices, as customers cannot buy more cheaply from a competitor. Cost
leadership may be attained via a number of techniques. Products can be designed to
simplify manufacturing. A large market share combined with concentrating selling
efforts on large customers may contribute to reduced costs. Extensive investment in
state-of-the-art facilities may also lead to long run cost reductions. Companies that
successfully use this strategy tend to be highly centralized in their structure. They place
heavy emphasis on quantitative standards and measuring performance toward goal
accomplishment.
Efficiencies that allow a firm to be the cost leader also allow it to compete effectively
with both existing competitors and potential new entrants. Finally, low costs reduce the
likely impact of substitutes. Substitutes tend to displace the products of higher-cost producers first, and significantly harm the cost leader's sales only if their producers can offer the substitute product or service at a lower cost than competitors. In many instances, the need to climb the experience curve inhibits a new entrant's ability to pursue this tactic.
Differentiation strategies require a firm to create something about its product that is
perceived as unique within its market. Whether the features are real, or just in the mind
of the customer, customers must perceive the product as having desirable features not
commonly found in competing products. Customers also must be relatively price-insensitive. Adding product features means that the production or distribution costs of a differentiated product will be somewhat higher than those of a generic, non-differentiated product. Customers must be willing to pay more than the marginal cost of
adding the differentiating feature if a differentiation strategy is to succeed.
Differentiation may be attained through many features that make the product or service
appear unique. Possible strategies for achieving differentiation include warranty (Sears tools carry a lifetime guarantee against breakage), brand image (Coach handbags,
Tommy Hilfiger sportswear), technology (Hewlett-Packard laser printers), features
(Jenn-Air ranges, Whirlpool appliances), service (Makita hand tools), and dealer
network (Caterpillar construction equipment), among other dimensions. Differentiation does not allow a firm to ignore costs; rather, it makes the firm's products less susceptible to cost pressures from competitors because customers see the product as unique and are willing to pay extra to have the desirable features.
Differentiation often forces a firm to accept higher costs in order to make a product or
service appear unique. The uniqueness can be achieved through real product features or
advertising that causes the customer to perceive that the product is unique. Whether the
difference is achieved through adding more vegetables to the soup or effective
advertising, costs for the differentiated product will be higher than for non-
differentiated products. Thus, firms must remain sensitive to cost differences. They
must carefully monitor the incremental costs of differentiating their product and make
certain the difference is reflected in the price.
A focus strategy is often appropriate for small, aggressive businesses that do not have
the ability or resources to engage in a nation-wide marketing effort. Such a strategy may
also be appropriate if the target market is too small to support a large-scale operation.
Many firms start small and expand into a national organization. Wal-Mart started in
small towns in the South and Midwest. As the firm gained in market knowledge and
acceptance, it was able to expand throughout the South, then nationally, and now
internationally. The company started with a focused cost-leader strategy in its limited
market and was able to expand beyond its initial market segment.
Firms utilizing a focus strategy may also be better able to tailor advertising and
promotional efforts to a particular market niche. Many automobile dealers advertise
that they are the largest-volume dealer for a specific geographic area. Other dealers
advertise that they have the highest customer-satisfaction scores or the most awards for
their service department of any dealer within their defined market. Similarly, firms may
be able to design products specifically for a customer. Customization may range from
individually designing a product for a customer to allowing the customer input into the
finished product. Tailor-made clothing and custom-built houses include the customer in
all aspects of production from product design to final acceptance. Key decisions are
made with customer input. Providing such individualized attention to customers may
not be feasible for firms with an industry-wide orientation.
FUNCTIONAL-LEVEL STRATEGIES.
Functional-level strategies are concerned with coordinating the functional areas of the
organization (marketing, finance, human resources, production, research and
development, etc.) so that each functional area upholds and contributes to individual
business-level strategies and the overall corporate-level strategy. This involves
coordinating the various functions and operations needed to design, manufacture,
deliver, and support the product or service of each business within the corporate
portfolio. Functional strategies are primarily concerned with:
• Assuring that functional strategies mesh with business-level strategies and the overall
corporate-level strategy.
Functional strategies are frequently concerned with appropriate timing. For example,
advertising for a new product could be expected to begin sixty days prior to shipment of
the first product. Production could then start thirty days before shipping begins. Raw materials, in turn, may need to be ordered at least two weeks before production starts. Thus, functional strategies have a shorter time orientation than
either business-level or corporate-level strategies. Accountability is also easiest to
establish with functional strategies because results of actions occur sooner and are more
easily attributed to the function than is possible at other levels of strategy. Lower-level
managers are most directly involved with the implementation of functional strategies.
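As a small illustration of the back-scheduling described above, the following Python sketch works the intermediate dates out from a hypothetical shipment date; the date itself and the exact offsets are the figures quoted in the example, used only for illustration.

# Minimal back-scheduling sketch: advertising begins 60 days before first
# shipment, production 30 days before, and raw materials are ordered two
# weeks before production starts.
from datetime import date, timedelta

ship_date = date(2024, 9, 1)          # hypothetical first-shipment date

advertising_start = ship_date - timedelta(days=60)
production_start = ship_date - timedelta(days=30)
raw_material_order = production_start - timedelta(weeks=2)

print("Order raw materials:", raw_material_order)
print("Start production:   ", production_start)
print("Start advertising:  ", advertising_start)
print("First shipment:     ", ship_date)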
McKinsey 7S Framework
Introduction
This paper discusses McKinsey's 7S Model that
was created by the consulting company
McKinsey and Company in the early 1980s.
Since then it has been widely used by
practitioners and academics alike in analysing
hundreds of organisations. The paper explains
each of the seven components of the model
and the links between them. It also includes
practical guidance and advice for students to analyse organisations using this model. At
the end, some sources for further information
on the model and case studies available on this
website are mentioned.
The McKinsey 7S model was named after a
consulting company, McKinsey and Company,
which has conducted applied research in
business and industry (Pascale & Athos, 1981;
Peters & Waterman, 1982). All of the authors
worked as consultants at McKinsey and
Company; in the 1980s, they used the model to
analyse over 70 large organisations. The
McKinsey 7S Framework was created as a
recognisable and easily remembered model in
business. The seven variables, which the authors term "levers", all begin with the letter "S": structure, strategy, systems, skills, style, staff and shared
values. Structure is defined as the skeleton of
the organisation or the organisational chart.
The authors describe strategy as the plan or
course of action in allocating resources to
achieve identified goals over time. The systems
are the routine processes and procedures
followed within the organisation. Staff are
described in terms of personnel categories
within the organisation (e.g. engineers),
whereas the skills variable refers to the
capabilities of the staff within the organisation
as a whole. The way in which key managers
behave in achieving organisational goals is
considered to be the style variable; this variable
is thought to encompass the cultural style of the
organisation. The shared values variable,
originally termed superordinate goals, refers to
the significant meanings or guiding concepts
that organisational members share (Peters and
Waterman, 1982).
The shape of the model (as shown in figure 1) was also designed to illustrate the interdependency of the variables, which is reflected in the model's alternative name, the "Managerial Molecule". While the authors
thought that other variables existed within
complex organisations, the variables
represented in the model were considered to be
of crucial importance to managers and
practitioners (Peters and Waterman, 1982).
The analysis of several organisations using the
model revealed that American companies tend
to focus on those variables which they feel they
can change (e.g. structure, strategy and
systems) while neglecting the other variables.
These other variables (e.g. skills, style, staff
and shared values) are considered to be "soft"
variables. Japanese and a few excellent
American companies are reportedly successful
at linking their structure, strategy and systems
with the soft variables. The authors have
concluded that a company cannot merely
change one or two variables to change the
whole organisation.
For long-term benefit, they feel that the
variables should be changed to become more
congruent as a system. The external
environment is not mentioned in the McKinsey
7S Framework, although the authors do
acknowledge that other variables exist and that
they depict only the most crucial variables in
the model. While alluded to in their discussion
of the model, the notion of performance or
effectiveness is not made explicit in the model.
Description of 7 Ss
Strategy: Strategy is the plan of action an
organisation prepares in response to, or
anticipation of, changes in its external
environment. Strategy is differentiated from tactics or operational actions by its nature of being premeditated, well thought through and
often practically rehearsed. It deals with
essentially three questions (as shown in figure
2): 1) where the organisation is at this moment
in time, 2) where the organisation wants to be
in a particular length of time and 3) how to get
there. Thus, strategy is designed to transform
the firm from the present position to the new
position described by objectives, subject to
constraints of the capabilities or the potential
(Ansoff, 1965).
Structure: A business needs to be organised in a specific form, generally referred to as its organisational structure. Organisations
are structured in a variety of ways, dependent
on their objectives and culture. The structure of
the company often dictates the way it operates
and performs (Waterman et al., 1980).
Traditionally, businesses have been structured in a hierarchical way with several
divisions and departments, each responsible for
a specific task such as human resources
management, production or marketing. Many
layers of management controlled the
operations, with each answerable to the upper
layer of management. Although this is still the
most widely used organisational structure, the
recent trend is increasingly towards a flat
structure where the work is done in teams of
specialists rather than fixed departments. The
idea is to make the organisation more flexible and to devolve power by empowering employees and eliminating middle management layers (Boyle, 2007).
Temporary downsizing:  947,000    941,000    965,000    966,000    965,000
Permanent downsizing:  3,082,000  3,127,000  3,124,000  3,144,000  3,015,000
However, economists remain optimistic about downsizing and the effects of downsizing on the
economy when the rate of overall job growth outpaces the rate of job elimination. A trend toward
outsourcing jobs overseas to countries with lower labor costs is a form of downsizing that affects
some U.S. employees. These jobs are not actually eliminated, but instead moved out of reach of
the employees who lose their jobs to outsourcing. Some economists, however, suggest that the
overall net effect of such outsourced jobs will actually be an increase in U.S. jobs as resulting
corporate operating efficiencies allow for more employment of higher-tier (and thus higher-
wage) positions. Regardless of whether downsizing is good or bad for the national economy,
companies continue to downsize and the trend shows few signs of slowing down. For some
sectors, this trend is projected to be particularly prevalent through 2012, as shown in Table 2.
Table 2: Occupation | Projected Decline
par value
Definition
The nominal dollar amount assigned to a security by the issuer. For an equity security, par value is usually a very
small amount that bears no relationship to its market price, except for preferred stock, in which case par value is
used to calculate dividend payments. For a debt security, par value is the amount repaid to the investor when the
bond matures (usually, corporate bonds have a par value of $1000, municipal bonds $5000, and federal bonds
$10,000). In the secondary market, a bond's price fluctuates with interest rates. If interest rates are higher than
the coupon rate on a bond, the bond will be sold below par value (at a "discount"). If interest rates have fallen, the
price will be above par value (at a "premium").
Par value
From Wikipedia, the free encyclopedia
Par value, in finance and accounting, means stated value or face value. From this comes the
expressions at par (at the par value), over par (over par value) and under par (under par value).
The term "par value" has several meanings depending on context and geography.
[edit] Bonds
In the U.S. bond markets, the Par Value (as stated on the face of the bond) is the amount that the
issuing firm is to pay to the bondholder at the maturity date. The present value of the Par Value plus the present value of the annuity of interest payments equals the bond price.
A bond is worth its par value when the price is equal to the face value. When a bond is worth less
than its par value, it is priced at a discount; conversely when a bond is valued above its par value,
the bond is priced at a premium.
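Stated symbolically (a sketch, with F the par value, C the coupon payment per period, r the market yield per period and n the number of periods to maturity; these symbols are introduced here only for illustration):

P = \frac{F}{(1+r)^{n}} + C \cdot \frac{1-(1+r)^{-n}}{r}

When r exceeds the coupon rate the price falls below F (a discount); when r is lower, the price rises above F (a premium).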
[edit] Stock
Par value stock has no relation to market value and, as a concept, is somewhat archaic. The par
value of a stock was the share price upon initial offering; the issuing company promised not to
issue further shares below par value, so investors could be confident that no one else was
receiving a more favorable issue price. Thus, Par Value is a nominal value of a security which
is determined by an issuing company as a minimum price. This was far more important in
unregulated equity markets than in the regulated markets that exist today.
Par value also has bookkeeping purposes. It allows the company to put a de minimis value for the
stock on the company's financial statement.
Many common stocks issued today do not have par values; those that do (usually only in
jurisdictions where par values are required by law) have extremely low par values (often the
smallest unit of currency in circulation), for example a one-cent par value on a stock issued at US$25 per share. Most states do not allow a company to issue stock below par value.
No-par stocks have "no par value" printed on their certificates. Instead of par value, some U.S.
states allow no-par stocks to have a stated value, set by the board of directors of the corporation,
which serves the same purpose as par value in setting the minimum legal capital that the
corporation must have after paying any dividends or buying back its stock.
For preferred stock, par value remains relevant and tends to reflect the issue price. Dividends on preferred
stocks are calculated as a percentage of par value.
Also, par value still matters for a callable common stock: the call price is usually either par value
or a small fixed percentage over par value.
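A minimal Python sketch of these two uses of par value; the par value, dividend rate and call premium below are hypothetical figures, not values from the article.

# Preferred dividends are quoted as a percentage of par; a call price is
# often par plus a small fixed percentage.
par_value = 25.00          # stated par value per share (hypothetical)
dividend_rate = 0.06       # 6% of par, set in the share terms (hypothetical)
call_premium = 0.05        # call price quoted as 5% over par (hypothetical)

annual_dividend = par_value * dividend_rate          # $1.50 per share per year
call_price = par_value * (1 + call_premium)          # $26.25 per share

print(f"Annual preferred dividend: ${annual_dividend:.2f}")
print(f"Call price: ${call_price:.2f}")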
In the United States, it is legal for a corporation to issue "watered" shares below par value.
However, the purchasers of "watered" shares incur an accounting liability to the corporation for
the difference between the par value and the price they paid. Today, in many jurisdictions, par
values are no longer required for common stocks.
[edit] Currency
The term "at par" is also used when two currencies are exchanged at equal value (for instance, in
1964, Trinidad and Tobago switched from British West Indies dollar to the new Trinidad and Tobago
dollar, and that switch was "at par", meaning that the Central Bank of Trinidad and Tobago replaced each old dollar with a new one).
Coupon (bond)
From Wikipedia, the free encyclopedia
Uncut bond coupons on 1922 Mecca Temple (NY, NY, U.S.A.) construction bond
The coupon or coupon rate of a bond is the amount of interest paid per year expressed as a
percentage of the face value of the bond. It is the interest rate that a bond issuer will pay to a
bondholder.[1]
[edit] Overview
For example, if you hold $10,000 nominal of a bond described as a 4.5% loan stock, you will
receive $450 in interest each year (probably in two installments of $225 each; a semi-annual
payment).
Not all bonds have coupons. Zero-coupon bonds are those which do not pay interest, but are sold
at the initial offering to investors at a price less than the par value. When held to maturity, the
bond is redeemed for par value.
The origin of the expression "coupon" is that bonds were historically issued as bearer certificates,
so that possession of the certificate was conclusive proof of ownership. Several coupons, one for
each scheduled interest payment covering a number of years, were printed on the certificate. At
the due date the owner would physically detach the coupon and present it for payment of the
interest (known as "clipping the coupon").[2]
Between the issue date and the redemption date, the price of a bond will be determined by the
market, taking into account among other things:
• The amount and date of the redemption payment at maturity;
• The amounts and dates of the coupons;
• The ability of the issuer to pay interest and repay the principal at maturity;
• The yield offered by other similar bonds in the market.
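As a rough illustration of how the factors above combine, the following Python sketch discounts the remaining coupons and the redemption payment at a single assumed market yield. Annual coupons and a flat yield are simplifying assumptions made here for illustration, not part of the article.

# Minimal bond-pricing sketch: price = discounted coupons + discounted
# redemption payment, all at the yield offered by comparable bonds.
def bond_price(face, coupon_rate, market_yield, years_to_maturity):
    coupon = face * coupon_rate
    price = sum(coupon / (1 + market_yield) ** t
                for t in range(1, years_to_maturity + 1))
    price += face / (1 + market_yield) ** years_to_maturity
    return price

# The 4.5% loan stock from the overview, assuming 10 years remaining and a
# 5% market yield: the result is below par, i.e. the bond trades at a discount.
print(round(bond_price(10_000, 0.045, 0.05, 10), 2))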
Principal value
The amount that the issuer of a bond agrees to repay the bondholder at the
maturity date. The principal is also referred to as the redemption value, maturity value,
par value or face value.
Principal
The par or face value of a debt instrument
Principal-agent relationship
A situation that can be modeled as one person, an agent, who acts on the behalf of another person, the principal.
Principle of diversification
Highly diversified portfolios will have negligible unsystematic risk. In other words, unsystematic risks disappear in
portfolios, and only systematic risks survive.
Back-end value
The amount paid to remaining shareholders in the second stage of a two-tier or partial tender offer.
Going-concern value
The value of a company as a whole over and above the sum of the values of each of its parts; the value of organization
learning and reputation.
Terminal value
The value at maturity.
Face value
Alternative name for par value.
Value manager
A manager who seeks to buy stocks that are at a discount to their "fair value" and sell them at or in excess of that value.
Often a value stock is one with a low price to book value ratio.
Value dating
Refers to when value or credit is given for funds transferred between banks.
Value date
In the market for eurodollar deposits and foreign exchange, value date refers to the delivery date of funds traded. Normally
it is on spot transactions two days after a transaction is agreed upon and the future date in the case of a forward foreign
exchange trade.
Value-at-Risk
A value-at-risk (VAR) model is a procedure for estimating the probability of portfolio losses exceeding some specified
proportion based on a statistical analysis of historical market price trends, correlations, and volatilities.
Value-added tax
Value-added tax (VAT) is a method of indirect taxation whereby a tax is levied at each stage of production on the value
added at that specific stage.
Utility value
The welfare a given investor assigns to an investment with a particular return and risk.
Straight value
Also called investment value, the value of a convertible security without the conversion option.
Standardized value
Also called the normal deviate, the distance of one data point from the mean, divided by the standard deviation of the
distribution.
Salvage value
Scrap value of plant and equipment.
Residual value
Usually refers to the value of a lessor's property at the time the lease expires.
Replacement value
Current cost of replacing the firm's assets.
Relative value
The attractiveness measured in terms of risk, liquidity, and return of one instrument relative to another, or for a given
instrument, of one maturity relative to another.
Present value
The amount of cash today that is equivalent in value to a payment, or to a stream of payments, to be received in the
future.
Par value
Also called the maturity value or face value, the amount that the issuer agrees to pay at the maturity date.
Market value
(a) The price at which a security is trading and could presumably be purchased or sold. (b) The value investors believe a
firm is worth; calculated by multiplying the number of shares outstanding by the current market price of a firm's shares.
Loan value
The amount a policyholder may borrow against a whole life insurance policy at the interest rate specified in the policy.
Liquidation value
Net amount that could be realized by selling the assets of a firm after paying the debt.
Bond value
With respect to convertible bonds, the value the security would have if it were not convertible apart from the conversion
option.
Book value
A company's book value is its total assets minus intangible assets and liabilities, such as debt. A company's book value
might be more or less than its market value.
Cash-surrender value
An amount the insurance company will pay if the policyholder ends a whole life insurance policy.
Conversion value
Also called parity value, the value of a convertible security if it is converted immediately.
Embedded value
A methodology that reflects future shareholder profits in the life insurance business. Embedded value equals the free
surplus plus the value of in-force business. Embedded value is hard to compare across companies since each company determines its own input parameters, for example the level of target surplus.
Salvage Value
The amount remaining at the end of an asset's depreciable useful life; the residual or recoverable value of a depreciated asset. The gross salvage value may be reduced by a removal or disposal cost.
Extrinsic Value
The time value component of an option premium.
Trade credit
From Wikipedia, the free encyclopedia
Trade credit is credit extended to a buyer by its suppliers, allowing goods or services to be bought now and paid for later. For example, Wal-Mart, the largest retailer in the world, has used trade credit as a
larger source of capital than bank borrowings; trade credit for Wal-Mart is 8 times the amount of
capital invested by shareholders.[1]
There are many forms of trade credit in common use. Various industries use various specialized
forms. They all have, in common, the collaboration of businesses to make efficient use of capital
to accomplish various business objectives.
[edit] Example
The operator of an ice cream stand may sign a franchising agreement, under which the distributor
agrees to provide ice cream stock under the terms "Net 60" with a ten percent discount on
payment within 30 days, and a 20% discount on payment within 10 days. This means that the
operator has 60 days to pay the invoice in full. If sales are good within the first week, the
operator may be able to send a cheque for all or part of the invoice, and make an extra 20% on
the ice cream sold. However, if sales are slow, leading to a month of low cash flow, then the
operator may decide to pay within 30 days, obtaining a 10% discount, or use the money another
30 days and pay the full invoice amount within 60 days.
The ice cream distributor can do the same thing. Receiving trade credit from milk and sugar
suppliers on terms of Net 30, 2% discount if paid within ten days, means they are apparently
taking a loss or disadvantageous position in this web of trade credit balances. Why would they
do this? First, they have a substantial markup on the ingredients and other costs of production of
the ice cream they sell to the operator. There are many reasons and ways to manage trade credit
terms for the benefit of a business. The ice cream distributor may be well-capitalized, either from the owners' investment or from accumulated profits, and may be looking to expand its markets. It may be aggressive in attempting to locate new customers or to help them get established. It is not in its interests for customers to go out of business from cash flow instabilities, so its financial terms aim to accomplish two things:
1. Allow startup ice cream parlors the ability to mismanage their investment in
inventory for a while, while learning their markets, without having a dramatic
negative balance in their bank account which could put them out of business. This is, in effect, a short-term business loan made to help expand the distributor's market and customer base.
2. By tracking who pays, and when, the distributor can see potential problems
developing and take steps to reduce or increase the allowed amount of trade
credit extended to prospering or faltering businesses. This limits the exposure to losses from customers who go bankrupt and would never pay for the ice cream delivered.
Trade Credit
Definition: An arrangement to buy goods or services on account, that is, without making immediate cash
payment
For many businesses, trade credit is an essential tool for financing growth. Trade credit is the credit
extended to you by suppliers who let you buy now and pay later. Any time you take delivery of materials,
equipment or other valuables without paying cash on the spot, you're using trade credit.
When you're first starting your business, however, suppliers most likely aren't going to offer you trade credit. They're going to want to
make every order c.o.d. (cash or check on delivery) or paid by credit card in advance until you've established that you can pay your bills
on time. While this is a fairly normal practice, you can still try and negotiate trade credit with suppliers. One of the things that will help
you in these negotiations is a properly prepared financial plan.
When you visit your supplier to set up your order during your startup period, ask to speak directly to the owner of the business if it's a
small company. If it's a larger business, ask to speak to the CFO or any other person who approves credit. Introduce yourself. Show the
officer the financial plan you've prepared. Tell the owner or financial officer about your business, and explain that you need to get your
first orders on credit in order to launch your venture.
Depending on the terms available from your suppliers, the cost of trade credit can be quite high. For example, assume you make a purchase from a supplier who decides to extend credit to you. The terms the supplier offers are a two-percent cash discount if paid within 10 days and a net date of 30 days. Essentially, the supplier is saying that if you pay within 10 days, the purchase price will be discounted by two percent. On the other hand, by forfeiting the two-percent discount, you're able to use your money for 20 more days. On an annualized basis, this is actually costing you 36 percent of the total cost of the items you are purchasing from this supplier (360 ÷ 20 days = 18 periods per year without the discount; 18 × 2 percent = 36 percent in discounts missed).
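The same calculation in a short Python sketch, using both the rough 360-day method above and a slightly more precise 365-day figure; the terms are the 2/10, net 30 example just given, and the refinement is added here for illustration rather than taken from the text.

# Annualized cost of forgoing a "2/10, net 30" cash discount.
discount = 0.02        # 2% discount
discount_days = 10     # pay within 10 days to earn it
net_days = 30          # otherwise full amount due in 30 days

extra_days = net_days - discount_days                     # 20 extra days of credit

simple_annual_cost = (360 / extra_days) * discount        # ~36%, as in the text
# Measured against the discounted price and a 365-day year, the cost is a
# little higher (~37.2%).
nominal_annual_cost = (discount / (1 - discount)) * (365 / extra_days)

print(f"Simple annualized cost:  {simple_annual_cost:.1%}")
print(f"365-day annualized cost: {nominal_annual_cost:.1%}")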
Cash discounts aren't the only factor you have to consider in the equation. There are also late-payment or delinquency penalties should
you extend payment beyond the agreed-upon terms. These can usually run between one and two percent on a monthly basis. If you miss
your net payment date for an entire year, that can cost you as much as 12 to 24 percent in penalty interest.
Effective use of trade credit requires intelligent planning to avoid unnecessary costs through forfeited cash discounts or incurred delinquency penalties. But every business should take full advantage of the trade credit that is available without additional cost in order to reduce its need for capital from other sources.
Classical conditioning
From Wikipedia, the free encyclopedia
One of Pavlov’s dogs with a surgically implanted cannula to measure salivation, Pavlov
Museum, 2005
The original and most famous example of classical conditioning involved the salivary
conditioning of Pavlov's dogs. During his research on the physiology of digestion in dogs,
Pavlov noticed that, rather than simply salivating in the presence of meat powder (an innate
response to food that he called the unconditioned response), the dogs began to salivate in the
presence of the lab technician who normally fed them. Pavlov called these psychic secretions.
From this observation he predicted that, if a particular stimulus in the dog’s surroundings were
present when the dog was presented with meat powder, then this stimulus would become
associated with food and cause salivation on its own. In his initial experiment, Pavlov used a
metronome to call the dogs to their food and, after a few repetitions, the dogs started to salivate
in response to the metronome.
[edit] Types
Diagram representing forward conditioning. The time interval increases from left to
right.
Forward conditioning: During forward conditioning the onset of the CS precedes the onset of
the US. Two common forms of forward conditioning are delay and trace conditioning.
Delay conditioning: In delay conditioning the CS is presented and is overlapped by the presentation of the US.
Trace conditioning: During trace conditioning the CS and US do not overlap. Instead, the CS is
presented, a period of time is allowed to elapse during which no stimuli are presented, and then
the US is presented. The stimulus-free period is called the trace interval; it may also be called the "conditioning interval".
Simultaneous conditioning: During simultaneous conditioning, the CS and US are presented
and terminated at the same time.
Backward conditioning: Backward conditioning occurs when a conditioned stimulus
immediately follows an unconditioned stimulus. Unlike in forward conditioning, in which the conditioned stimulus precedes the unconditioned stimulus, the conditioned response here tends to be inhibitory. This is because the conditioned stimulus serves as a signal that the
unconditioned stimulus has ended, rather than a reliable method of predicting the future
occurrence of the unconditioned stimulus.
Temporal conditioning: The US is presented at regularly timed intervals, and CR acquisition is
dependent upon correct timing of the interval between US presentations. The background, or
context, can serve as the CS in this example.
Unpaired conditioning: The CS and US are not presented together. Usually they are presented
as independent trials that are separated by a variable, or pseudo-random, interval. This procedure
is used to study non-associative behavioral responses, such as sensitization.
CS-alone extinction: The CS is presented in the absence of the US. This procedure is usually done after the CR has been acquired through forward conditioning training. Eventually, the CR
frequency is reduced to pre-training levels.
[edit] Procedure variations
In addition to the simple procedures described above, some classical conditioning studies are
designed to tap into more complex learning processes. Some common variations are discussed
below.
[edit] Classical discrimination/reversal conditioning
In this procedure, two CSs and one US are typically used. The CSs may be of the same modality (such as lights of different intensity), or they may be of different modalities (such as an auditory CS and a visual CS). One of the CSs is designated CS+ and its presentation is always
followed by the US. The other CS is designated CS- and its presentation is never followed by the
US. After a number of trials, the organism learns to discriminate CS+ trials and CS- trials such
that CRs are only observed on CS+ trials.
During Reversal Training, the CS+ and CS- are reversed and subjects learn to suppress
responding to the previous CS+ and show CRs to the previous CS-.
[edit] Classical ISI discrimination conditioning
This is a discrimination procedure in which two different CSs are used to signal two different
interstimulus intervals. For example, a dim light may be presented 30 seconds before a US, while a
very bright light is presented 2 minutes before the US. Using this technique, organisms can learn
to perform CRs that are appropriately timed for the two distinct CSs.
[edit] Latent inhibition conditioning
In this procedure, a CS is presented several times before paired CS-US training commences. The
pre-exposure of the subject to the CS before paired training slows the rate of CR acquisition
relative to organisms that are not CS pre-exposed. Also see Latent inhibition for applications.
[edit] Conditioned inhibition conditioning
Three phases of conditioning are typically used:
Phase 1: A CS (the CS+) is repeatedly paired with the US (CS+/US trials) until CRs are reliably elicited.
Phase 2: CS+/US trials are continued, but are interspersed with trials on which the CS+ is presented in compound with a second CS (the CS-) and not followed by the US (i.e., CS+/CS- trials). Typically, organisms show CRs on CS+/US trials, but suppress responding on CS+/CS- trials.
Phase 3: In this retention test, the previous CS- is paired with the US. If conditioned inhibition has occurred, the rate of acquisition to the previous CS- should be impaired relative to organisms that did not experience Phase 2.
[edit] Blocking
Main article: Blocking effect
This form of classical conditioning involves two phases.
Phase 1: One CS (CS1) is repeatedly paired with the US until conditioned responses to CS1 are reliably observed.
Phase 2: A compound of CS1 and a second stimulus (CS2) is paired with the US.
Test: A separate test for each CS (CS1 and CS2) is performed. The blocking effect is observed in a lack of conditioned response to CS2, suggesting that the first phase of training blocked the acquisition of the second CS.
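To make the blocking result concrete, here is a minimal Python simulation using the Rescorla-Wagner learning rule, a standard model of classical conditioning that is not discussed in the text above; the learning rate, asymptote and trial counts are arbitrary illustrative values.

# Rescorla-Wagner sketch of blocking: all stimuli present on a trial share a
# single prediction error, so a fully trained CS1 leaves little error for CS2.
alpha, lam = 0.2, 1.0          # learning rate and maximum associative strength
V = {"CS1": 0.0, "CS2": 0.0}   # associative strengths

def trial(present):
    """One US-reinforced trial: update every presented CS by the shared error."""
    error = lam - sum(V[cs] for cs in present)
    for cs in present:
        V[cs] += alpha * error

for _ in range(30):            # Phase 1: CS1 -> US
    trial(["CS1"])
for _ in range(30):            # Phase 2: CS1 + CS2 -> US
    trial(["CS1", "CS2"])

# Test: CS2 has acquired almost no associative strength -- CS1 "blocked" it.
print({cs: round(v, 3) for cs, v in V.items()})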
[edit] Applications
[edit] Little Albert
Main article: Little Albert experiment
In human psychology, implications for therapies and treatments using classical conditioning
differ from operant conditioning. Therapies associated with classical conditioning are aversion
therapy, flooding and systematic desensitization.
Classical conditioning is short-term, usually requiring less time with therapists and less effort
from patients, unlike humanistic therapies.[citation needed] The therapies mentioned are designed to
cause either aversive feelings toward something, or to reduce unwanted fear and aversion.
[edit] Theories of classical conditioning
There are two competing theories of how classical conditioning works. The first, stimulus-
response theory, suggests that an association to the unconditioned stimulus is made with the
conditioned stimulus within the brain, but without involving conscious thought. The second theory, stimulus-stimulus theory, involves cognitive activity, in which the conditioned stimulus is
associated to the concept of the unconditioned stimulus, a subtle but important distinction.
Stimulus-response theory, referred to as S-R theory, is a theoretical model of behavioral
psychology that suggests humans and other animals can learn to associate a new stimulus — the
conditioned stimulus (CS) — with a pre-existing stimulus — the unconditioned stimulus (US),
and can think, feel or respond to the CS as if it were actually the US.
The opposing theory, put forward by cognitive behaviorists, is stimulus-stimulus theory (S-S theory), a model of classical conditioning that holds that a cognitive component is required to understand conditioning and that stimulus-response theory is an inadequate model. S-R theory suggests that an animal can learn to associate a conditioned stimulus (CS)
such as a bell, with the impending arrival of food termed the unconditioned stimulus, resulting in
an observable behavior such as salivation. Stimulus-stimulus theory suggests that instead the
animal salivates to the bell because it is associated with the concept of food, which is a very fine
but important distinction.
To test this theory, psychologist Robert Rescorla undertook the following experiment.[2] Rats learned to associate a loud noise (the unconditioned stimulus) with a light (the conditioned stimulus). The response of the rats was to freeze and cease movement. What would happen, then, if the rats were habituated to the US? S-R theory would suggest that the rats would continue to respond to the CS, but if S-S theory is correct, they would be habituated to the concept of a loud sound (danger), and so would not freeze to the CS. The experimental results suggest that S-S theory was correct, as the rats no longer froze when exposed to the signal light.[3] The theory remains influential and is applied in everyday life.[1]
A plant layout study is an engineering study used to analyze different physical configurations for
an industrial plant.[1]
[edit] General
Modern industrial manufacturing plants involve a complex mix of functions and operations.
Various techniques exist, but general areas of concern include the following:[2]
• Space (adequate area to house each function)
• Affinity (functions located in close proximity to other related functions)
• Material handling
• Communications (telephone, data, telemetry, and other signal items)
• Utilities (electrical, gas, steam, water, sewer, and other utility services)
• Buildings (structural and architectural forms; sitework)
The acceptable quality limit (AQL) is the worst tolerable process average, expressed as a percentage or ratio of defectives, that is still considered acceptable: that is, it is at an acceptable quality level.[1] Closely related terms are the rejectable quality limit and level (RQL).[1][2] In a quality control procedure, a
process is said to be at an acceptable quality level if the appropriate statistic used to construct a
control chart does not fall outside the bounds of the acceptable quality limits. Otherwise, the
process is said to be at a rejectable control level.
The usage of the abbreviation AQL for the term Acceptable Quality Level has recently been
changed in the standards issued by at least one national standards organization (ANSI/ASQ) to
relate to the term Acceptance Quality Level.[3][4] It is unclear whether this interpretation will be
brought into general usage, but the underlying meaning remains the same.
An acceptable quality level is an inspection standard describing the maximum number of defects
that could be considered acceptable during the random sampling of an inspection. The defects
found during inspection are sometimes classified into three levels: critical, major and minor.
Critical defects are those that render the product unsafe or hazardous for the end user or that
contravene mandatory regulations. Major defects can result in the product's failure, reducing its
marketability, usability or saleability. Lastly, minor defects do not affect the product's
marketability or usability, but represent workmanship defects that make the product fall short of
defined quality standards. Different companies maintain different interpretations of each defect
type. In order to avoid argument, buyers and sellers agree on an AQL standard, chosen according
to the level of risk each party assumes, which they use as a reference during pre-shipment
inspection.
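As an illustration of how such a sampling plan behaves, the following Python sketch computes the probability of accepting a lot at several true defect rates under a binomial model; the sample size and acceptance number are hypothetical and are not taken from any published AQL table.

# Single sampling plan sketch: accept the lot if the number of defectives
# found in the sample does not exceed the acceptance number.
from math import comb

def prob_accept(defect_rate, sample_size, accept_number):
    """P(defectives found <= accept_number) under binomial sampling."""
    return sum(comb(sample_size, k)
               * defect_rate ** k * (1 - defect_rate) ** (sample_size - k)
               for k in range(accept_number + 1))

n, c = 125, 7                  # hypothetical plan: inspect 125 units, accept if <= 7 defectives
for p in (0.01, 0.025, 0.04, 0.065, 0.10):
    print(f"true defect rate {p:.1%}: P(accept) = {prob_accept(p, n, c):.2f}")
# Lots at or below the agreed AQL are accepted with high probability; clearly
# worse lots are usually rejected.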
Profiteering (business)
From Wikipedia, the free encyclopedia
Profiteering is a pejorative term for the act of making a profit by methods considered unethical.
Business owners may be accused of profiteering when they raise prices during an emergency
(especially a war). The term is also applied to businesses that play on political corruption to obtain
government contracts.
Some types of profiteering are illegal, such as price-fixing syndicates and other anti-competitive behaviour, for example collusion on fuel surcharges (see British Airways price-fixing allegations). Others are restricted by industry codes of conduct, such as the aggressive marketing in the third world of products like baby milk (see Nestlé boycott).
Michael Porter
From Wikipedia, the free encyclopedia
Michael Eugene Porter (born 1947) is the Bishop William Lawrence University Professor at
Harvard Business School. He is a leading authority on company strategy and the competitiveness
of nations and regions. Michael Porter’s work is recognized in many governments, corporations
and academic circles globally. He chairs Harvard Business School's program dedicated to newly appointed CEOs of very large corporations.
[edit] Works
Competitive Strategy
• Porter, M.E. (1979) "How competitive forces shape strategy", Harvard Business Review, March/April 1979.
• Porter, M.E. (1980) Competitive Strategy, Free Press, New York, 1980.
• Porter, M.E. (1985) Competitive Advantage, Free Press, New York, 1985.
• Porter, M.E. (ed.) (1986) Competition in Global Industries, Harvard Business
School Press, Boston, 1986.
• Porter, M.E. (1987) "From Competitive Advantage to Corporate Strategy", Harvard
Business Review, May/June 1987, pp 43-59.
• Porter, M.E. (1996) "What is Strategy", Harvard Business Review, Nov/Dec 1996.
• Porter, M.E. (1998) On Competition, Boston: Harvard Business School, 1998.
• Porter, M.E. (1990, 1998) "The Competitive Advantage of Nations", Free Press, New
York, 1990.
• Porter, M.E. (1991) "Towards a Dynamic Theory of Strategy", Strategic Management
Journal, 12 (Winter Special Issue), pp. 95-117.
• McGahan, A.M. & Porter, M.E. (1997) "How Much Does Industry Matter, Really?", Strategic Management Journal, 18 (Summer Special Issue), pp. 15-30.
• Porter, M.E. (2001) "Strategy and the Internet", Harvard Business Review, March 2001,
pp. 62-78.
• Porter, M.E. & Kramer, M.R. (2006) "Strategy and Society: The Link Between
Competitive Advantage and Corporate Social Responsibility", Harvard Business Review,
December 2006, pp. 78-92.
Domestic Health Care
• Porter, M.E. & Teisberg, E.O. (2006) "Redefining Health Care: Creating Value-Based
Competition On Results", Harvard Business School Press, 2006.
Global Health Care
• Jain SH, Weintraub R, Rhatigan J, Porter ME, Kim JY. Delivering Global Health.
Student British Medical Journal 2008; 16:27.[1]
• Kim JY, Rhatigan J, Jain SH, Weintraub R, Porter ME. From a declaration of
values to the creation of value in global health: a report from Harvard
University's Global Health Delivery Project. Glob Public Health. 2010
Mar;5(2):181-8.
• Rhatigan, Joseph, Sachin H Jain, Joia S. Mukherjee, and Michael E. Porter.
"Applying the Care Delivery Value Chain: HIV/AIDS Care in Resource Poor
Settings." Harvard Business School Working Paper, No. 09-093, February
2009.
[edit] Criticisms
Porter has been criticized by some academics for inconsistent logical argument in his assertions.
[1]
Critics have also labeled Porter's conclusions as lacking in empirical support and as justified
with selective case studies.[2][3][4][5]
Psychographic
From Wikipedia, the free encyclopedia
In the field of marketing, demographics, opinion research, and social research in general,
psychographic variables are any attributes relating to personality, values, attitudes, interests, or
lifestyles. They are also called IAO variables (for Interests, Activities, and Opinions). They can
be contrasted with demographic variables (such as age and gender), behavioral variables (such as
usage rate or loyalty), and firmographic variables (such as industry, seniority and functional area).
Psychographics are often confused with demographics. This confusion can create fundamentally
flawed definitions. For example, historical generations are defined by psychographic variables
like attitudes, personality formation, and cultural touchstones. The traditional definition of the
"Baby Boom Generation" has been the subject of much criticism because it is based on
demographic variables where it should be based on psychographic variables. While all other
generations are defined by psychographic variables, the Boomer definition is based on a
demographic variable: the fertility rates of its members' parents.
When a relatively complete profile of a person or group's psychographic make-up is constructed,
this is called a psychographic profile. Psychographic profiles are used in market segmentation as
well as in advertising.
Some categories of psychographic factors used in market segmentation include:
• Activity, Interest, Opinion (AIO)
• Attitudes
• Values
• 3 Strategies of Market Leaders
• Customers for Life
• By: Brian Tracy
• The purpose of a business is to create and keep a customer.
• The two most important words to keep in mind in developing a successful customer base are Positioning
and Differentiation.
• Differentiation refers to your ability to separate yourself and your product or service from that of your
competitors. And it is the key to building and maintaining a competitive advantage.
• This is the advantage that you and your company have over your competitors in the same marketplace – the unique and special benefits that no one else can give your customers.
(1) Fixed order quantity, variable time between orders (EOQ, EPQ, and Quantity
Discount)
(2) On-hand inventory balance serves as order trigger (R)
(3) Perpetual inventory count
(4) 2-bin system
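A minimal Python sketch of such a fixed-order-quantity system, using the classic economic order quantity formula and a reorder point as the order trigger (R); the demand, cost and lead-time figures below are hypothetical.

# EOQ sets how much to order; the reorder point R triggers an order when the
# on-hand inventory balance falls to it.
from math import sqrt

annual_demand = 12_000        # units per year (hypothetical)
order_cost = 50.0             # cost per order placed (hypothetical)
holding_cost = 2.0            # cost to hold one unit for a year (hypothetical)
lead_time_days = 7
daily_demand = annual_demand / 365

eoq = sqrt(2 * annual_demand * order_cost / holding_cost)   # fixed order quantity
reorder_point = daily_demand * lead_time_days               # on-hand level that triggers an order

print(f"Order quantity (EOQ): {eoq:.0f} units")
print(f"Reorder point (R):    {reorder_point:.0f} units")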
Corporate governance
From Wikipedia, the free encyclopedia
Corporate governance is the set of processes, customs, policies, laws, and institutions affecting the
way a corporation (or company) is directed, administered or controlled. Corporate governance also
includes the relationships among the many stakeholders involved and the goals for which the
corporation is governed. The principal stakeholders are the shareholders, management, and the
board of directors. Other stakeholders include employees, customers, creditors, suppliers,
regulators, and the community at large.
Corporate governance is a multi-faceted subject.[1] An important theme of corporate governance
is to ensure the accountability of certain individuals in an organization through mechanisms that
try to reduce or eliminate the principal-agent problem. A related but separate thread of discussions
focuses on the impact of a corporate governance system in economic efficiency, with a strong
emphasis on shareholders' welfare. There are yet other aspects to the corporate governance subject, such as the stakeholder view and the corporate governance models around the world.
There has been renewed interest in the corporate governance practices of modern corporations
since 2001, particularly due to the high-profile collapses of a number of large U.S. firms such as
Enron Corporation and MCI Inc. (formerly WorldCom). In 2002, the U.S. federal government passed
the Sarbanes-Oxley Act, intending to restore public confidence in corporate governance.
[edit] Definition
In A Board Culture of Corporate Governance, business author Gabrielle O'Donovan defines
corporate governance as 'an internal system encompassing policies, processes and people, which
serves the needs of shareholders and other stakeholders, by directing and controlling
management activities with good business savvy, objectivity, accountability and integrity. Sound
corporate governance is reliant on external marketplace commitment and legislation, plus a healthy board culture which safeguards policies and processes.'
O'Donovan goes on to say that 'the perceived quality of a company's corporate governance can
influence its share price as well as the cost of raising capital. Quality is determined by the
financial markets, legislation and other external market forces plus how policies and processes
are implemented and how people are led. External forces are, to a large extent, outside the circle
of control of any board. The internal environment is quite a different matter, and offers
companies the opportunity to differentiate from competitors through their board culture. To date,
too much of corporate governance debate has centred on legislative policy, to deter fraudulent
activities and transparency policy which misleads executives to treat the symptoms and not the
cause.'[2]
Corporate governance can also be seen as a system of structuring, operating and controlling a company with a view to achieving long-term strategic goals that satisfy shareholders, creditors, employees, customers and suppliers, while complying with legal and regulatory requirements and meeting environmental and local community needs.
The report of the SEBI committee (India) on corporate governance defines corporate governance as "the acceptance by management of the inalienable rights of shareholders as the true owners of the
corporation and of their own role as trustees on behalf of the shareholders. It is about
commitment to values, about ethical business conduct and about making a distinction between
personal & corporate funds in the management of a company.” The definition is drawn from the
Gandhian principle of trusteeship and the Directive Principles of the Indian Constitution.
Corporate Governance is viewed as business ethics and a moral duty. See also Corporate Social
Entrepreneurship regarding employees who are driven by their sense of integrity (moral
conscience) and duty to society. This notion stems from traditional philosophical ideas of virtue
(or self-governance)[3] and represents a "bottom-up" approach to corporate governance (agency)
which supports the more obvious "top-down" (systems and processes, i.e. structural) perspective.
[edit] History - United States
In the 19th century, state corporation laws enhanced the rights of corporate boards to govern
without unanimous consent of shareholders in exchange for statutory benefits like appraisal
rights, to make corporate governance more efficient. Since that time, and because most large
publicly traded corporations in the US are incorporated under corporate administration friendly
Delaware law, and because the US's wealth has been increasingly securitized into various
corporate entities and institutions, the rights of individual owners and shareholders have become
increasingly derivative and dissipated. The concerns of shareholders over administration pay and stock losses have periodically led to more frequent calls for corporate governance reforms.
In the 20th century, in the immediate aftermath of the Wall Street Crash of 1929, legal scholars such as Adolf Augustus Berle, Edwin Dodd, and Gardiner C. Means pondered the changing role of
the modern corporation in society. Berle and Means' monograph "The Modern Corporation and
Private Property" (1932, Macmillan) continues to have a profound influence on the conception of
corporate governance in scholarly debates today.
From the Chicago school of economics, Ronald Coase's "The Nature of the Firm" (1937) introduced
the notion of transaction costs into the understanding of why firms are founded and how they
continue to behave. Fifty years later, Eugene Fama and Michael Jensen's "The Separation of
Ownership and Control" (1983, Journal of Law and Economics) firmly established agency theory
as a way of understanding corporate governance: the firm is seen as a series of contracts. Agency
theory's dominance was highlighted in a 1989 article by Kathleen Eisenhardt ("Agency theory: an assessment and review", Academy of Management Review).
US expansion after World War II through the emergence of multinational corporations saw the
establishment of the managerial class. Accordingly, the following Harvard Business School
management professors published influential monographs studying their prominence: Myles Mace
(entrepreneurship), Alfred D. Chandler, Jr. (business history), Jay Lorsch (organizational behavior)
and Elizabeth MacIver (organizational behavior). According to Lorsch and MacIver "many large
corporations have dominant control over business affairs without sufficient accountability or
monitoring by their board of directors."
Since the late 1970s, corporate governance has been the subject of significant debate in the U.S.
and around the globe. Bold, broad efforts to reform corporate governance have been driven, in
part, by the needs and desires of shareowners to exercise their rights of corporate ownership and
to increase the value of their shares and, therefore, wealth. Over the past three decades, corporate
directors’ duties have expanded greatly beyond their traditional legal responsibility of duty of
loyalty to the corporation and its shareowners.[4]
In the first half of the 1990s, the issue of corporate governance in the U.S. received considerable
press attention due to the wave of CEO dismissals (e.g.: IBM, Kodak, Honeywell) by their boards.
The California Public Employees' Retirement System (CalPERS) led a wave of institutional
shareholder activism (something only very rarely seen before), as a way of ensuring that
corporate value would not be destroyed by the now traditionally cozy relationships between the
CEO and the board of directors (e.g., by the unrestrained issuance of stock options, not infrequently backdated).
In 1997, the East Asian Financial Crisis saw the economies of Thailand, Indonesia, South Korea,
Malaysia and The Philippines severely affected by the exit of foreign capital after property assets
collapsed. The lack of corporate governance mechanisms in these countries highlighted the
weaknesses of the institutions in their economies.
In the early 2000s, the massive bankruptcies (and criminal malfeasance) of Enron and Worldcom,
as well as lesser corporate debacles such as Adelphia Communications, AOL, Arthur Andersen, Global Crossing, and Tyco, led to increased shareholder and governmental interest in corporate
governance. This is reflected in the passage of the Sarbanes-Oxley Act of 2002.[3]
[edit] Impact of Corporate Governance
The positive effect of corporate governance on different stakeholders ultimately is a strengthened
economy, and hence good corporate governance is a tool for socio-economic development.[5]
[edit] Role of Institutional Investors
For many years, worldwide, buyers and sellers of corporate stock were individual investors, such as wealthy businessmen or families, who often had a vested, personal and emotional interest
in the corporations whose shares they owned. Over time, markets have become largely
institutionalized: buyers and sellers are largely institutions (e.g., pension funds, mutual funds,
hedge funds, exchange-traded funds, other investor groups; insurance companies, banks, brokers, and
other financial institutions).
The rise of the institutional investor has brought with it some increase of professional diligence
which has tended to improve regulation of the stock market (but not necessarily in the interest of
the small investor or even of the naïve institutions, of which there are many). This process occurred simultaneously with substantial growth in individuals investing in the market indirectly (for example, individuals hold twice as much money in mutual funds as they do in bank accounts). However, this growth occurred primarily by way of individuals turning over their
funds to 'professionals' to manage, such as in mutual funds. In this way, the majority of
investment now is described as "institutional investment" even though the vast majority of the
funds are for the benefit of individual investors.
Program trading, the hallmark of institutional trading, averaged over 80% of NYSE trades in some
months of 2007. [4] (Moreover, these statistics do not reveal the full extent of the practice,
because of so-called 'iceberg' orders. See Quantity and display instructions under last reference.)
Unfortunately, there has been a concurrent lapse in the oversight of large corporations, which are
now almost all owned by large institutions. The Board of Directors of large corporations used to be
chosen by the principal shareholders, who usually had an emotional as well as monetary
investment in the company (think Ford), and the Board diligently kept an eye on the company
and its principal executives (they usually hired and fired the President, or Chief Executive Officer, CEO).
A recent study by Credit Suisse found that companies in which "founding families retain a stake
of more than 10% of the company's capital enjoyed a superior performance over their respective
sectorial peers." Since 1996, this superior performance amounts to 8% per year.[5] Forget the
celebrity CEO. "Look beyond Six Sigma and the latest technology fad. One of the biggest
strategic advantages a company can have, [BusinessWeek has found], is blood lines." [6] In that
last study, "BW identified five key ingredients that contribute to superior performance. Not all
are qualities unique to enterprises with retained family interests. But they do go far to explain
why it helps to have someone at the helm— or active behind the scenes— who has more than a
mere paycheck and the prospect of a cozy retirement at stake." See also, "Revolt in the
Boardroom," by Alan Murray.
Nowadays, if the owning institutions don't like what the President/CEO is doing and they feel
that firing them will likely be costly (think "golden handshake") and/or time consuming, they will
simply sell out their interest. The Board is now mostly chosen by the President/CEO, and may be
made up primarily of their friends and associates, such as officers of the corporation or business
colleagues. Since the (institutional) shareholders rarely object, the President/CEO generally takes
the Chair of the Board position for himself or herself (which makes it much more difficult for the
institutional owners to "fire" him/her). Occasionally, but rarely, institutional investors support
shareholder resolutions on such matters as executive pay and anti-takeover ("poison pill") measures.
Finally, the largest pools of invested money (such as the mutual fund 'Vanguard 500', or the
largest investment management firm for corporations, State Street Corp.) are designed simply to
invest in a very large number of different companies with sufficient liquidity, based on the idea
that this strategy will largely eliminate individual company financial or other risk and, therefore,
these investors have even less interest in a particular company's governance.
Since the marked rise in the use of Internet transactions from the 1990s, both individual and
professional stock investors around the world have emerged as a potential new kind of major
(short term) force in the direct or indirect ownership of corporations and in the markets: the
casual participant. Even as the purchase of individual shares in any one corporation by individual
investors diminishes, the sale of derivatives (e.g., exchange-traded funds (ETFs), Stock market index
options [7], etc.) has soared. As a result, the interests of most investors are now rarely tied to the fortunes of individual corporations.
But, the ownership of stocks in markets around the world varies; for example, the majority of the
shares in the Japanese market are held by financial companies and industrial corporations (there
is a large and deliberate amount of cross-holding among Japanese keiretsu corporations and within
S. Korean chaebol 'groups') [8], whereas stocks in the USA, the UK and Europe are much more broadly owned, often still by large individual investors.
[edit] Parties to corporate governance
Parties involved in corporate governance include the regulatory body, the Chief Executive Officer, the board of directors, management, shareholders and auditors. Other stakeholders who
take part include suppliers, employees, creditors, customers and the community at large.
In corporations, the shareholder delegates decision rights to the manager to act in the principal's
best interests. This separation of ownership from control implies a loss of effective control by
shareholders over managerial decisions. Partly as a result of this separation between the two
parties, a system of corporate governance controls is implemented to assist in aligning the
incentives of managers with those of shareholders. With the significant increase in equity
holdings of investors, there has been an opportunity for a reversal of the separation of ownership
and control problems because ownership is not so diffuse.
A board of directors often plays a key role in corporate governance. It is their responsibility to
endorse the organisation's strategy, develop directional policy, appoint, supervise and remunerate
senior executives and to ensure accountability of the organisation to its owners and authorities.
The Company Secretary, known as a Corporate Secretary in the US and often referred to as a
Chartered Secretary if qualified by the Institute of Chartered Secretaries and Administrators (ICSA),
is a high ranking professional who is trained to uphold the highest standards of corporate
governance, effective operations, compliance and administration.
All parties to corporate governance have an interest, whether direct or indirect, in the effective
performance of the organization. Directors, workers and management receive salaries, benefits
and reputation, while shareholders receive capital return. Customers receive goods and services;
suppliers receive compensation for their goods or services. In return these individuals provide
value in the form of natural, human, social and other forms of capital.
A key factor is an individual's decision to participate in an organisation, e.g. by providing financial capital, in the trust that they will receive a fair share of the organisational returns. If some parties receive more than their fair return, then other participants may choose not to continue participating, potentially leading to organizational collapse.
[edit] Principles
Key elements of good corporate governance principles include honesty, trust and integrity,
openness, performance orientation, responsibility and accountability, mutual respect, and
commitment to the organization.
Of importance is how directors and management develop a model of governance that aligns the
values of the corporate participants and then evaluate this model periodically for its
effectiveness. In particular, senior executives should conduct themselves honestly and ethically,
especially concerning actual or apparent conflicts of interest, and disclosure in financial reports.
Commonly accepted principles of corporate governance include:
• Rights and equitable treatment of shareholders: Organizations should
respect the rights of shareholders and help shareholders to exercise those
rights. They can help shareholders exercise their rights by effectively
communicating information that is understandable and accessible and
encouraging shareholders to participate in general meetings.
• Interests of other stakeholders: Organizations should recognize that they
have legal and other obligations to all legitimate stakeholders.
• Role and responsibilities of the board: The board needs a range of skills
and understanding to be able to deal with various business issues and have
the ability to review and challenge management performance. It needs to be
of sufficient size and have an appropriate level of commitment to fulfill its
responsibilities and duties. There are issues about the appropriate mix of
executive and non-executive directors.
• Integrity and ethical behaviour: Ethical and responsible decision making
is not only important for public relations, but it is also a necessary element in
risk management and avoiding lawsuits. Organizations should develop a code
of conduct for their directors and executives that promotes ethical and
responsible decision making. It is important to understand, though, that
reliance by a company on the integrity and ethics of individuals is bound to
eventual failure. Because of this, many organizations establish Compliance
and Ethics Programs to minimize the risk that the firm steps outside of ethical
and legal boundaries.
• Disclosure and transparency: Organizations should clarify and make
publicly known the roles and responsibilities of board and management to
provide shareholders with a level of accountability. They should also
implement procedures to independently verify and safeguard the integrity of
the company's financial reporting. Disclosure of material matters concerning
the organization should be timely and balanced to ensure that all investors
have access to clear, factual information.
Issues involving corporate governance principles include:
• internal controls and internal auditors
• the independence of the entity's external auditors and the quality of their
audits
• oversight and management of risk
• oversight of the preparation of the entity's financial statements
• review of the compensation arrangements for the chief executive officer and
other senior executives
• the resources made available to directors in carrying out their duties
• the way in which individuals are nominated for positions on the board
• dividend policy
Nevertheless "corporate governance," despite some feeble attempts from various quarters,
remains an ambiguous and often misunderstood phrase. For quite some time it was confined only
to corporate management. That is not so. It is something much broader, for it must include a fair,
efficient and transparent administration and strive to meet certain well defined, written
objectives. Corporate governance must go well beyond law. The quantity, quality and frequency
of financial and managerial disclosure, the degree and extent to which the board of Director
(BOD) exercise their trustee responsibilities (largely an ethical commitment), and the
commitment to run a transparent organization- these should be constantly evolving due to
interplay of many factors and the roles played by the more progressive/responsible elements
within the corporate sector. John G. Smale, a former member of the General Motors board of
directors, wrote: "The Board is responsible for the successful perpetuation of the corporation.
That responsibility cannot be relegated to management."[6] However, it should be noted that a
corporation should cease to exist if that is in the best interests of its stakeholders. Perpetuation
for its own sake may be counterproductive.
[edit] Mechanisms and controls
Corporate governance mechanisms and controls are designed to reduce the inefficiencies that
arise from moral hazard and adverse selection. For example, to monitor managers' behaviour, an
independent third party (the external auditor) attests the accuracy of information provided by
management to investors. An ideal control system should regulate both motivation and ability.
[edit] Internal corporate governance controls
Internal corporate governance controls monitor activities and then take corrective action to
accomplish organisational goals. Examples include:
• Monitoring by the board of directors: The board of directors, with its
legal authority to hire, fire and compensate top management, safeguards
invested capital. Regular board meetings allow potential problems to be
identified, discussed and avoided. Whilst non-executive directors are thought
to be more independent, they may not always result in more effective
corporate governance and may not increase performance.[7] Different board
structures are optimal for different firms. Moreover, the ability of the board to
monitor the firm's executives is a function of its access to information.
Executive directors possess superior knowledge of the decision-making
process and therefore evaluate top management on the basis of the quality
of its decisions that lead to financial performance outcomes, ex ante. It could
be argued, therefore, that executive directors look beyond the financial
criteria.
• Internal control procedures and internal auditors: Internal control
procedures are policies implemented by an entity's board of directors, audit
committee, management, and other personnel to provide reasonable
assurance of the entity achieving its objectives related to reliable financial
reporting, operating efficiency, and compliance with laws and regulations.
Internal auditors are personnel within an organization who test the design
and implementation of the entity's internal control procedures and the
reliability of its financial reporting.
• Balance of power: The simplest balance of power is very common: require
that the President be a different person from the Treasurer. This application
of separation of power is further developed in companies where separate
divisions check and balance each other's actions. One group may propose
company-wide administrative changes, another group may review and veto the
changes, and a third group may check that the interests of people (customers,
shareholders, employees) outside the three groups are being met.
• Remuneration: Performance-based remuneration is designed to relate some
proportion of salary to individual performance. It may be in the form of cash
or non-cash payments such as shares and share options, superannuation or
other benefits. Such incentive schemes, however, are reactive in the sense
that they provide no mechanism for preventing mistakes or opportunistic
behaviour, and can elicit myopic behaviour.
Agenda 21
Agenda 21 is a programme run by the United Nations (UN) related to sustainable development; it was adopted at the planet's first summit to discuss global warming related issues. It is a comprehensive
blueprint of action to be taken globally, nationally and locally by organizations of the UN,
governments, and major groups in every area in which humans directly affect the environment.
[edit] Development of Agenda 21
The full text of Agenda 21 was revealed at the United Nations Conference on Environment and
Development (Earth Summit), held in Rio de Janeiro on June 14, 1992, where 178 governments
voted to adopt the program. The final text was the result of drafting, consultation and
negotiation, beginning in 1989 and culminating at the two-week conference. The number 21
refers to an agenda for the 21st century. It may also refer to the number on the UN's agenda at
this particular summit.
[edit] Rio +5
In 1997, the General Assembly of the UN held a special session to appraise five years of progress
on the implementation of Agenda 21 (Rio +5). The Assembly recognized progress as 'uneven'
and identified key trends including increasing globalization, widening inequalities in income and a
continued deterioration of the global environment. A new General Assembly Resolution (S-19/2)
promised further action.
[edit] The Johannesburg Summit
The Johannesburg Plan of Implementation, agreed at the World Summit on Sustainable Development
(Earth Summit 2002) affirmed UN commitment to 'full implementation' of Agenda 21, alongside
achievement of the Millennium Development Goals and other international agreements.
[edit] Implementation
The Commission on Sustainable Development acts as a high level forum on sustainable
development and has acted as preparatory committee for summits and sessions on the
implementation of Agenda 21. The United Nations Division for Sustainable Development acts as
the secretariat to the Commission and works 'within the context of' Agenda 21. Implementation
by member states remains essentially voluntary.
[edit] Structure and contents
There are 40 chapters in Agenda 21, divided into four main sections.
[edit] Section I: Social and Economic Dimensions
Includes combating poverty, changing consumption patterns, population and demographic
dynamics, promoting health, promoting sustainable settlement patterns and integrating
environment and development into decision-making.
[edit] Section II: Conservation and Management of Resources for
Development
Includes atmospheric protection, combating deforestation, protecting fragile environments,
conservation of biological diversity (biodiversity), and control of pollution.
[edit] Section III: Strengthening the Role of Major Groups
Includes the roles of children and youth, women, NGOs, local authorities, business and workers.
[edit] Section IV: Means of Implementation
Includes science, technology transfer, education, international institutions and mechanisms, and financial mechanisms.
[edit] Local Agenda 21
The implementation of Agenda 21 was intended to involve action at international, national,
regional and local levels. Some national and state governments have legislated or advised that
local authorities take steps to implement the plan locally, as recommended in Chapter 28 of the
document. Such programmes are often known as 'Local Agenda 21' or 'LA21'.[1]
Decision Support System
[edit] Overview
A Decision Support System (DSS) is a class of information systems (including but not limited
to computerized systems) that support business and organizational decision-making activities. A
properly designed DSS is an interactive software-based system intended to help decision makers
compile useful information from a combination of raw data, documents, personal knowledge, or
business models to identify and solve problems and make decisions.
Typical information that a decision support application might gather and present includes the following (a brief sketch follows this list):
• inventories of current information assets (including legacy and
relational data sources, cubes, data warehouses, and data marts),
• comparative sales figures between one week and the next,
• projected revenue figures based on new product sales assumptions.
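As a rough, hypothetical sketch only (the figures, dictionary keys and function names below are invented, not taken from the text or from any particular DSS product), the last two items might be computed from raw data along these lines:

```python
# Hypothetical sketch of a DSS-style summary: comparative weekly sales and
# projected revenue from user-supplied sales assumptions. All figures invented.

weekly_sales = {"week_1": 120_000.0, "week_2": 134_500.0}  # revenue in dollars

def week_over_week_change(sales: dict) -> float:
    """Comparative sales figure: percentage change from week 1 to week 2."""
    return (sales["week_2"] - sales["week_1"]) / sales["week_1"] * 100

def projected_revenue(assumed_unit_price: float, assumed_units_sold: int) -> float:
    """Projected revenue figure based on new-product sales assumptions."""
    return assumed_unit_price * assumed_units_sold

if __name__ == "__main__":
    print(f"Week-over-week sales change: {week_over_week_change(weekly_sales):+.1f}%")
    print(f"Projected revenue: ${projected_revenue(25.0, 6_000):,.0f}")
```

The point of such a summary is simply to turn raw data and user assumptions into a few comparable figures for the decision maker.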
[edit] History
According to Keen (1978)[1], the concept of decision support has evolved from two main areas of
research: The theoretical studies of organizational decision making done at the Carnegie Institute
of Technology during the late 1950s and early 1960s, and the technical work on interactive
computer systems, mainly carried out at the Massachusetts Institute of Technology in the 1960s.[1] It
is considered that the concept of DSS became an area of research of its own in the middle of the
1970s, before gaining in intensity during the 1980s. In the middle and late 1980s, executive
information systems (EIS), group decision support systems (GDSS), and organizational decision
support systems (ODSS) evolved from the single user and model-oriented DSS.
According to Sol (1987),[2] the definition and scope of DSS have been migrating over the years. In the 1970s, DSS was described as "a computer based system to aid decision making". In the late 1970s, the DSS movement started focusing on "interactive computer-based systems which help decision-makers utilize data bases and models to solve ill-structured problems". In the 1980s, DSS were expected to provide systems "using suitable and available technology to improve effectiveness of managerial and professional activities", and by the end of the 1980s DSS faced a new challenge towards the design of intelligent workstations.[2]
In 1987 Texas Instruments completed development of the Gate Assignment Display System
(GADS) for United Airlines. This decision support system is credited with significantly reducing
travel delays by aiding the management of ground operations at various airports, beginning with
O'Hare International Airport in Chicago and Stapleton Airport in Denver, Colorado.[3][4]
Beginning in about 1990, data warehousing and on-line analytical processing (OLAP) began
broadening the realm of DSS. As the turn of the millennium approached, new Web-based
analytical applications were introduced.
The advent of increasingly capable reporting technologies has seen DSS begin to emerge as a critical component of management design. Examples of this can be seen in the intense discussion of DSS in the education environment.
DSS also have a weak connection to the user interface paradigm of hypertext. Both the University of
Vermont PROMIS system (for medical decision making) and the Carnegie Mellon ZOG/KMS
system (for military and business decision making) were decision support systems which also
were major breakthroughs in user interface research. Furthermore, although hypertext researchers
have generally been concerned with information overload, certain researchers, notably Douglas
Engelbart, have been focused on decision makers in particular.
[edit] Taxonomies
As with the definition, there is no universally-accepted taxonomy of DSS either. Different authors
propose different classifications. Using the relationship with the user as the criterion,
Haettenschwiler[5] differentiates passive, active, and cooperative DSS. A passive DSS is a system
that aids the process of decision making, but that cannot bring out explicit decision suggestions
or solutions. An active DSS can bring out such decision suggestions or solutions. A cooperative
DSS allows the decision maker (or its advisor) to modify, complete, or refine the decision
suggestions provided by the system, before sending them back to the system for validation. The
system again improves, completes, and refines the suggestions of the decision maker and sends
them back to her for validation. The whole process then starts again, until a consolidated solution
is generated.
Another taxonomy for DSS has been created by Daniel Power. Using the mode of assistance as
the criterion, Power differentiates communication-driven DSS, data-driven DSS, document-
driven DSS, knowledge-driven DSS, and model-driven DSS.[6]
• A communication-driven DSS supports more than one person working on a
shared task; examples include integrated tools like Microsoft's NetMeeting or
Groove[7]
• A data-driven DSS or data-oriented DSS emphasizes access to and
manipulation of a time series of internal company data and, sometimes,
external data.
• A document-driven DSS manages, retrieves, and manipulates unstructured
information in a variety of electronic formats.
• A knowledge-driven DSS provides specialized problem-solving expertise
stored as facts, rules, procedures, or in similar structures.[6]
• A model-driven DSS emphasizes access to and manipulation of a statistical,
financial, optimization, or simulation model. Model-driven DSS use data and
parameters provided by users to assist decision makers in analyzing a
situation; they are not necessarily data-intensive. Dicodess is an example of an
open source model-driven DSS generator[8].
Using scope as the criterion, Power[9] differentiates enterprise-wide DSS and desktop DSS. An
enterprise-wide DSS is linked to large data warehouses and serves many managers in the
company. A desktop, single-user DSS is a small system that runs on an individual manager's PC.
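As a loose illustration of the model-driven category in Power's taxonomy above (the cost-volume-profit model, parameter names and figures below are assumptions made for this sketch, not drawn from any cited DSS), a model-driven tool essentially wraps a quantitative model so that the decision maker can vary its parameters and compare scenarios:

```python
# Hypothetical model-driven DSS sketch: a simple cost-volume-profit "what-if"
# model whose parameters are supplied by the decision maker. All values invented.

def projected_profit(units: int, price: float, variable_cost: float, fixed_cost: float) -> float:
    """Profit under one scenario: contribution from sales less fixed cost."""
    return units * (price - variable_cost) - fixed_cost

def compare_scenarios(scenarios: list[dict]) -> None:
    """Present the model's output for each user-supplied set of assumptions."""
    for s in scenarios:
        profit = projected_profit(s["units"], s["price"], s["variable_cost"], s["fixed_cost"])
        print(f"{s['name']}: projected profit {profit:,.0f}")

if __name__ == "__main__":
    compare_scenarios([
        {"name": "Base case",    "units": 10_000, "price": 20.0, "variable_cost": 12.0, "fixed_cost": 50_000},
        {"name": "Higher price", "units": 9_000,  "price": 22.0, "variable_cost": 12.0, "fixed_cost": 50_000},
    ])
```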
[edit] Architecture
Goals vs Objectives
When you have something you want to accomplish, it is important to set both goals and
objectives. Once you learn the difference between goals and objectives, you will realize how important it is to have both of them. Goals without objectives can never be accomplished, while objectives without goals will never get you to where you want to be. The two concepts are
separate but related and will help you to be who you want to be.
Definition of Goals and Objectives
Goals are long-term aims that you want to accomplish.
Objectives are concrete attainments that can be achieved by following a certain number of steps.
Goals and objectives are often used interchangeably, but the main difference comes in their level
of concreteness. Objectives are very concrete, whereas goals are less structured.
Calculating EVA
In the field of corporate finance, Economic Value Added is a way to determine the value created,
above the required return, for the shareholders of a company.
The basic formula is:
EVA = (r − c) × K = NOPAT − (c × K)
where
r is the firm's return on capital, NOPAT is the Net Operating Profit After Tax, c is the Weighted
Average Cost of Capital (WACC) and K is capital employed. To put it simply, EVA is the profit
earned by the firm less the cost of financing the firm's capital.
Shareholders of the company will receive a positive value added when the return from the capital
employed in the business operations is greater than the cost of that capital; see Working capital
management. Any value obtained by employees of the company or by product users is not
included in the calculations.
[edit] Relationship to Market Value Added
The firm's market value added, or MVA, is the discounted sum of all future expected economic value added:
MVA = EVA_1 / (1 + c) + EVA_2 / (1 + c)^2 + EVA_3 / (1 + c)^3 + ...
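As a rough numerical sketch of the two formulas above (the NOPAT, WACC, capital and projected-EVA figures are invented purely for illustration), the calculation might look like this:

```python
# Hypothetical EVA / MVA illustration; all input figures are invented.

def eva(nopat: float, wacc: float, capital_employed: float) -> float:
    """EVA = NOPAT - (c * K): profit earned less the cost of financing the firm's capital."""
    return nopat - wacc * capital_employed

def mva(expected_evas: list[float], wacc: float) -> float:
    """MVA as the discounted sum of expected future EVA figures."""
    return sum(e / (1 + wacc) ** t for t, e in enumerate(expected_evas, start=1))

if __name__ == "__main__":
    nopat, wacc, capital = 1_200_000.0, 0.10, 8_000_000.0
    this_year_eva = eva(nopat, wacc, capital)   # 1,200,000 - 800,000 = 400,000
    print(f"EVA: {this_year_eva:,.0f}")
    # Assume, purely for illustration, that the same EVA is expected for five more years.
    print(f"Approximate MVA: {mva([this_year_eva] * 5, wacc):,.0f}")
```

The sketch shows the sign logic only: EVA is positive when the return on the capital employed exceeds its cost, and MVA aggregates those expected surpluses at the firm's cost of capital.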
Culture of India
From Wikipedia, the free encyclopedia
Jump to:navigation, search
A Kathakali performer as Krishna. One of the eight major Indian classical dances, Kathakali is
more than 1,500 years old and its theme is heavily influenced by the Puranas.[1]
The culture of India has been shaped not only by its long history, unique geography and diverse
demography, but also by its ancient heritages, which were formed during the Indus Valley
Civilization and evolved further during the Vedic age, rise and decline of Buddhism, the Golden age,
invasions from Central Asia, European colonization and the rise of Indian nationalism.
The languages, religions, dance, music, architecture and its customs differ from place to place
within the country, but nevertheless possess a commonality. The culture of India is an
amalgamation of diverse sub-cultures spread all over the country and traditions that are several
millennia old.
[edit] Religion
Close-up of a statue depicting Maitreya at the Thikse Monastery in Ladakh, India.
Dharmic religions such as Hinduism and Buddhism are indigenous to India.[2]
India is the birth place of Dharmic religions such as Hinduism, Buddhism, Jainism and Sikhism.[3]
Dharmic religions, also known as Indian religions, are a major form of world religions next to
the Abrahamic ones. Today, Hinduism and Buddhism are the world's third- and fourth-largest
religions respectively, with around 1.4 billion followers altogether.
India is one of the most religiously diverse nations in the world, with some of the most deeply
religious societies and cultures. Religion still plays a central and definitive role in the life of most
of its people.
The religion of 80% of the people is Hinduism. Islam is practiced by around 13% of all Indians.[4]
Sikhism, Jainism and especially Buddhism are influential not only in India but across the world.
Christianity, Zoroastrianism, Judaism and the Bahá'í Faith are also influential but their numbers are
smaller. Despite the strong role of religion in Indian life, atheists and agnostics also have visible influence, along with a self-ascribed tolerance of other people.
[edit] Society
[edit] Overview
According to Eugene M. Makar, traditional Indian culture is defined by relatively strict social
hierarchy. He also mentions that from an early age, children are reminded of their roles and
places in society.[5] This is reinforced by the fact that many believe gods and spirits have an
integral and functional role in determining their life.[5] Several differences such as religion divide
the culture.[5] However, a far more powerful division is the traditional Hindu bifurcation into non-
polluting and polluting occupations.[5] Strict social taboos have governed these groups for thousands
of years.[5] In recent years, particularly in cities, some of these lines have blurred and sometimes
even disappeared.[5] The nuclear family is becoming central to Indian culture. Important family
relations extend as far as gotra, the mainly patrilinear lineage or clan assigned to a Hindu at birth.
[5]
In rural areas, and sometimes in urban areas as well, it is common for three or four generations of the family to live under the same roof.[5] The patriarch often resolves family issues.[5]
Among developing countries, India has low levels of occupational and geographic mobility.
People tend to choose the same occupations as their parents and rarely move geographically within the country.
[6]
During the nationalist movement, pretentious behaviour was something to be avoided.
Egalitarian behaviour and social service were promoted while nonessential spending was disliked
and spending money for ‘showing off’ was deemed a vice. This image continues in politics with
many politicians wearing simple looking / traditionally rural clothes, such as the traditional 'kurta
-pyjama' and the 'Gandhi topi'.
[edit] Family
Main articles: Hindu joint family, Arranged marriage in India, and Women in India
Family plays a significant role in the Indian culture. For generations, India has had a prevailing
tradition of the joint family system. It is a system under which extended members of a family -
parents, children, the children’s spouses and their offspring, etc. - live together. Usually, the
eldest male member is the head in the joint Indian family system. He makes all important
decisions and rules, and other family members abide by them.
[edit] Marriage
For centuries, arranged marriages have been the tradition in Indian society. Even today, the vast
majority of Indians have their marriages planned by their parents and other respected family-
members, with the consent of the bride and groom.[7] Arranged matches are made after taking
into account factors such as age, height, personal values and tastes, the backgrounds of their
families (wealth, social standing) and their castes and the astrological compatibility of the
couples' horoscopes.
In India, marriage is thought to be for life,[8] and the divorce rate is extremely low: 1.1% compared with about 50% in the United States.[9] Arranged marriages generally have a much lower divorce rate, though divorce rates have risen significantly in recent years:
"Opinion is divided over what the phenomenon means: for traditionalists the
rising numbers portend the breakdown of society while, for some modernists,
they speak of a healthy new empowerment for women."[10]
Although child marriage was outlawed in 1860, its practice continues in some rural parts of
India.[11] According to UNICEF’s “State of the World’s Children-2009” report, 47% of India's
women aged 20–24 were married before the legal age of 18, with 56% in rural areas.[12] The
report also showed that 40% of the world's child marriages occur in India.[13]
[edit] Names and language
Indian names are based on a variety of systems and naming conventions, which vary from region to
region. Names are also influenced by religion and caste and may come from the Indian epics.
India's population speaks a wide variety of languages.
[edit] Gender equality
Although women and men are equal before the law and the trend toward gender equality has
been noticeable, women and men still occupy distinct functions in Indian society. Women's role in society is often to perform household work and pro bono community work.[5] This low
rate of participation has ideological and historical reasons. Women and women's issues appear
only 7-14% of the time in news programs.[5] In most Indian families, women do not own any
property in their own names, and do not get a share of parental property.[14] Due to weak
enforcement of laws protecting women, they continue to have little access to land and property.
[15]
In many families, especially rural ones, the girls and women face nutritional discrimination
within the family, and are anaemic and malnourished.[14] They still lag behind men in terms of
income and job status. Traditional Hindu art, such as Rangoli (or Kolam), is very popular among
Indian women. Popular and influential women's magazines include Femina, Grihshobha, Woman's Era and Savvy.
[edit] Animals
See also: Wildlife of India, Animal husbandry in India, and Cattle in religion
The varied and rich wildlife of India has had a profound impact on the region's popular culture.
The common name for wilderness in India is "jungle", a word adopted into the English language by the British colonialists. The word was also made famous in The Jungle Book by Rudyard Kipling. India's wildlife has been the subject of numerous other tales and fables such as the
Panchatantra and the Jataka tales.[16]
In Hinduism, the cow is regarded as a symbol of ahimsa (non-violence), mother goddess and
bringer of good fortune and wealth.[17] For this reason, cows are revered in Hindu culture and
feeding a cow is seen as an act of worship.[18]
[edit] Namaste
Namaste, Namaskar or Namaskaram or Vannakam is a common spoken greeting or salutation in
the Indian subcontinent. Namaskar is considered a slightly more formal version than namaste but
both express deep respect. It is commonly used in India and Nepal by Hindus, Jains and Buddhists,
and many continue to use this outside the Indian subcontinent. In Indian and Nepali culture, the
word is spoken at the beginning of written or verbal communication. However, the same hands-folded gesture is usually made wordlessly upon departure. In yoga, namaste is said to mean "The
light in me honors the light in you", as spoken by both the yoga instructor and yoga students.
Taken literally, it means "I bow to you". The word is derived from Sanskrit (namas): to bow,
obeisance, reverential salutation, and respect, and (te): "to you".
When spoken to another person, it is commonly accompanied by a slight bow made with hands
pressed together, palms touching and fingers pointed upwards, in front of the chest. The gesture can also be performed wordlessly, or while calling on another god (e.g., "Jai Shri Krishna"), and carries the same meaning.
[edit] Festivals
Main article: Festivals in India
India, being a multi-cultural and multi-religious society, celebrates holidays and festivals of
various religions. The three national holidays in India, the Independence Day, the Republic Day and
the Gandhi Jayanti, are celebrated with zeal and enthusiasm across India. In addition, many states
and regions have local festivals depending on prevalent religious and linguistic demographics.
Popular religious festivals include the Hindu festivals of Navratri, Diwali, Ganesh Chaturthi, Durga Puja, Holi, Rakshabandhan and Dussehra. Several harvest festivals, such as Sankranthi, Pongal, Onam and Nuakhai, are also fairly popular.
Certain festivals in India are celebrated by multiple religions. Notable examples include Diwali,
which is celebrated by Hindus, Sikhs and Jains, and Buddh Purnima, celebrated by Buddhists
and Hindus. Islamic festivals, such as Eid ul-Fitr, Eid al-Adha and Ramadan, are celebrated by
Muslims across India. Adding colors to the culture of India, the Dree Festival is one of the tribal
festivals of India celebrated by the Apatanis of the Ziro valley of Arunachal Pradesh, which is the
easternmost state of India.
[edit] Cuisine
Main article: Cuisine of India
A variety of Indian curries and vegetable dishes.
The multiple varieties of Indian cuisine are characterized by their sophisticated and subtle use of
many spices and herbs. Each family of this cuisine is characterized by a wide assortment of
dishes and cooking techniques. Though a significant portion of Indian food is vegetarian, many
traditional Indian dishes also include chicken, goat, lamb, fish, and other meats.
Food is an important part of Indian culture, playing a role in everyday life as well as in festivals.
Indian cuisine varies from region to region, reflecting the varied demographics of the ethnically
diverse subcontinent. Generally, Indian cuisine can be split into five categories: North, South, East, West and North-eastern Indian.
Despite this diversity, some unifying threads emerge. Varied uses of spices are an integral part of
food preparation, and are used to enhance the flavor of a dish and create unique flavors and
aromas. Cuisine across India has also been influenced by various cultural groups that entered
India throughout history, such as the Persians, Mughals, and European colonists. Though the
tandoor originated in Central Asia, Indian tandoori dishes, such as chicken tikka made with Indian
ingredients, enjoy widespread popularity.[19]
Indian cuisine is one of the most popular cuisines across the globe.[20] Historically, Indian spices and herbs were among the most sought-after trade commodities. The spice trade between India
and Europe led to the rise and dominance of Arab traders to such an extent that European
explorers, such as Vasco da Gama and Christopher Columbus, set out to find new trade routes with
India leading to the Age of Discovery.[21] The popularity of curry, which originated in India, across
Asia has often led to the dish being labeled as the "pan-Asian" dish.[22]
[edit] Clothing
A girl from Tripura sports a bindi while preparing to take part in a traditional dance
festival.
Traditional Indian clothing for women includes the sari and the Ghaghra Choli (Lehenga). For men, traditional clothes include the Dhoti/pancha/veshti or Kurta. In some rural parts of India, traditional clothing is still mostly worn. In southern India the men wear long, white sheets of cloth called dhoti in English and veshti in Tamil. Over the dhoti, men wear shirts, t-shirts, or anything else. Women wear a sari, a long sheet of colourful cloth with patterns, draped over a simple or fancy blouse. This is worn by young ladies and women. Little girls wear a pavada, a long skirt worn under a blouse.
The bindi is part of women's make-up. Traditionally, the red bindi (or sindhur) was worn only by married Hindu women, but now it has become a part of women's fashion. A bindi is also worn by some as a third eye, said to see what the other eyes cannot and reputed to protect the brain from the outside and the sun.[23] Indo-western clothing is the fusion of Western and
Subcontinental fashion.
Delhi is considered to be India's fashion capital, housing the annual Fashion weeks.
[edit] Literature
[edit] History
Main article: Indian literature
Rabindranath Tagore, Asia's first Nobel laureate.[24]
The earliest works of Indian literature were orally transmitted.[citation needed] Sanskrit literature begins
with the Rig Veda, a collection of sacred hymns dating to the period 1500–1200 BCE. The
Sanskrit epics Ramayana and Mahabharata appeared towards the end of the first millennium
BCE. Classical Sanskrit literature flourished in the first few centuries of the first millennium CE.
Tamil literature begins with the Sangam literature, a collection of sacred hymns dating to the period 10000 BCE–1200 BCE.[citation needed] The Tamil epics Tolkappiyam and Thirukkural appeared towards the end of the first millennium BCE.[citation needed] Classical Tamil literature flourished in the first few centuries of the first millennium CE.[citation needed]
In the medieval period, literature in Kannada and Telugu appeared in the 9th–10th and 11th centuries respectively,[25] followed by the first Malayalam works in the 12th century. During this time, literature in Tamil, Bengali, Marathi, Urdu and various dialects of Hindi began to appear as well.
Some of the most important authors from India are Rabindranath Tagore, Ramdhari Singh 'Dinkar', Subramania Barathi, Kuvempu, Bankim Chandra Chattopadhyay, Michael Madhusudan Dutt, Munshi Premchand, Muhammad Iqbal and Devaki Nandan Khatri. In contemporary India, among the writers who
have received critical acclaim are: Girish Karnad, Agyeya, Nirmal Verma, Kamleshwar, Vaikom
Muhammad Basheer, Indira Goswami, Mahasweta Devi, Amrita Pritam, Maasti Venkatesh Ayengar,
Qurratulain Hyder and Thakazhi Sivasankara Pillai.
In contemporary Indian literature, there are two major literary awards; these are the Sahitya
Akademi Fellowship and the Jnanpith Award. Seven Jnanpith awards have been awarded in Kannada, six in Hindi, five in Bengali, four in Malayalam, and three each in Marathi, Gujarati, Urdu and Oriya.[26]
[edit] Poetry
Main article: Indian poetry
Illustration of the Battle of Kurukshetra. With more than 74,000 verses, long prose
passages, and about 1.8 million words in total, the Mahābhārata is one of the longest
epic poems in the world.
India has had strong traditions of poetry, as well as prose composition, since the time of the Rigveda. Poetry
is often closely related to musical traditions, and much of poetry can be attributed to religious
movements. Writers and philosophers were often also skilled poets. In modern times, poetry has
served as an important non-violent tool of nationalism during the Indian freedom movement. A
famous example of this tradition can be found in such figures as Rabindranath Tagore, Kuvempu and K. S. Narasimhaswamy in modern times, and poets such as Basava (vachanas), Kabir
and Purandaradasa (padas and devaranamas) in medieval times, as well as the epics of ancient
times. Two examples of poetry from Tagore's Gitanjali serve as the national anthems of both
India and Bangladesh.
[edit] Epics
The Ramayana and Mahabharata are the oldest preserved and well-known epics of India. Versions
have been adopted as the epics of Southeast Asian countries like Thailand, Malaysia and Indonesia.
In addition, there are five epics in the classical Tamil language: Silappadhikaram, Manimegalai,
Civaka Cintamani, Valaiyapathi and Kundalakesi.
Other regional variations of these, as well as unrelated epics include the Tamil Kamba
Ramayanam, in Kannada, the Pampa Bharata by Adikavi Pampa, Torave Ramayana by Kumara
Valmiki and Karnata Bharata Katha Manjari by Kumaravyasa, Hindi Ramacharitamanasa, and
Malayalam Adhyathmaramayanam.
[edit] Music
The music of India includes multiple varieties of religious, folk, popular, pop, and classical music.
The oldest preserved examples of Indian music are the melodies of the Samaveda that are still
sung in certain Vedic Śrauta sacrifices. India's classical music tradition is heavily influenced by
Hindu texts. It includes two distinct styles: Carnatic and Hindustani music. It is noted for the use of
several Raga, melodic modes. It has a history spanning millennia and it was developed over
several eras. It remains instrumental to religious inspiration, cultural expression and pure
entertainment.
Purandaradasa is considered the "father of carnatic music" (Karnataka sangeeta pitamaha).[27][28]
[29]
He concluded his songs with a salutation to Lord Purandara Vittala and is believed to have
composed as many as 475,000 songs in the Kannada language.[30] However, only about 1000 are
known today.[27][31]
[edit] Dance
Main article: Indian dance
Odissi dancer in front of the Konark Sun Temple.
Indian dance too has diverse folk and classical forms. Among the well-known folk dances are the
bhangra of the Punjab, the bihu of Assam, the chhau of Jharkhand and Orissa, the ghoomar of
Rajasthan, the dandiya and garba of Gujarat, the Yakshagana of Karnataka and lavani of
Maharashtra and Dekhnni of Goa. Eight dance forms, many with narrative forms and mythological
elements, have been accorded classical dance status by India's National Academy of Music,
Dance, and Drama. These are: bharatanatyam of the state of Tamil Nadu, kathak of Uttar Pradesh,
kathakali and mohiniattam of Kerala, kuchipudi of Andhra Pradesh, manipuri of Manipur, odissi of the
state of Orissa and the sattriya of Assam.[32][33]
Kalarippayattu, or Kalari for short, is considered one of the world's oldest martial arts. It is
preserved in texts such as the Mallapurana. Kalari and other later martial arts have been assumed by some[who?] to have traveled to China, like Buddhism, and to have eventually developed into Kung-fu.[citation needed] Other later martial arts are Gatka, Pehlwani and Malla-yuddha.
[edit] Drama and theater
Natyacarya Mani Madhava Chakyar as Ravana in Bhasa's Abhiṣeka Nataka Kutiyattam -
one of the oldest surviving drama traditions of the world.
Indian drama and theater has a long history alongside its music and dance. Kalidasa's plays like
Shakuntala and Meghadoota are some of the older plays, following those of Bhasa. One of the
oldest surviving theatre traditions of the world is the 2,000 year old Kutiyattam of Kerala. It
strictly follows the Natya Shastra.[34] The natakas of Bhasa are very popular in this art form. The late Nātyāchārya Padma Shri Māni Mādhava Chākyār, the unrivaled maestro of this art form and of Abhinaya,[citation needed] revived the age-old drama tradition from extinction. He was known for his mastery of Rasa Abhinaya. He began to perform the Kalidasa plays Abhijñānaśākuntala,
Vikramorvaśīya and Mālavikāgnimitra; Bhasa's Swapnavāsavadatta and Pancharātra; Harsha's
Nagananda in Kutiyattam form.[35][36]
The tradition of folk theater is popular in most linguistic regions of India. In addition, there is a
rich tradition of puppet theater in rural India, going back to at least the second century BCE. (It is
mentioned in Patanjali's commentary on Panini). Group Theater is also thriving in the cities,
initiated by the likes of Gubbi Veeranna,[37] Utpal Dutt, Khwaja Ahmad Abbas, and K. V. Subbanna
and still maintained by groups like Nandikar, Ninasam and Prithvi Theatre.
[edit] Visual arts
Main article: Indian art
[edit] Painting
Main article: Indian painting
The earliest Indian paintings were the rock paintings of pre-historic times, such as the petroglyphs found in places like Bhimbetka, some of which go back to the Stone Age. Ancient texts outline theories
of darragh and anecdotal accounts suggesting that it was common for households to paint their
doorways or indoor rooms where guests resided.
Cave paintings from Ajanta, Bagh, Ellora and Sittanavasal and temple paintings testify to a love of
naturalism. Most early and medieval art in India is Hindu, Buddhist or Jain. A freshly made
coloured flour design (Rangoli) is still a common sight outside the doorstep of many (mostly
South Indian) Indian homes. Raja Ravi Varma is one of the classical painters from India.
Madhubani painting, Mysore painting, Rajput painting, Tanjore painting and Mughal painting are some notable genres of Indian art, while Nandalal Bose, M. F. Husain, S. H. Raza, Geeta Vadhera, Jamini Roy and B. Venkatappa[38] are some modern painters. Among present-day artists, Atul Dodiya, Bose Krishnamacnahri, Devajyoti Ray and Shibu Natesan represent a new era of Indian art in which global art shows direct amalgamation with Indian classical styles. These recent artists have acquired international recognition. The Jehangir Art Gallery in Mumbai and the Mysore Palace have a few good Indian paintings on display.
[edit] Sculpture
Main article: Sculpture in India
The first sculptures in India date back to the Indus Valley civilization, where stone and bronze
figures have been discovered. Later, as Hinduism, Buddhism, and Jainism developed further, India
produced some extremely intricate bronzes as well as temple carvings. Some huge shrines, such as the one at Ellora, were not constructed using blocks but carved out of solid rock.
Sculptures produced in the northwest, in stucco, schist, or clay, display a very strong blend of
Indian and Classical Hellenistic or possibly even Greco-Roman influence. The pink sandstone
sculptures of Mathura evolved almost simultaneously. During the Gupta period (4th to 6th century)
sculpture reached a very high standard in execution and delicacy in modeling. These styles and others elsewhere in India evolved, leading to classical Indian art that contributed to Buddhist and Hindu sculpture throughout Southeast, Central and East Asia.
[edit] Architecture
Main article: Indian architecture
The Umaid Bhawan Palace in Rajasthan, one of the largest private residences in the
world.[39]
Indian architecture encompasses a multitude of expressions over space and time, constantly
absorbing new ideas. The result is an evolving range of architectural production that nonetheless
retains a certain amount of continuity across history. Some of its earliest productions are found in the Indus Valley Civilization (2600-1900 BCE), which is characterised by well-planned cities and
houses. Religion and kingship do not seem to have played an important role in the planning and
layout of these towns.
During the period of the Mauryan and Gupta empires and their successors, several Buddhist
architectural complexes, such as the caves of Ajanta and Ellora and the monumental Sanchi Stupa
were built. Later on, South India produced several Hindu temples like Chennakesava Temple at
Belur, the Hoysaleswara Temple at Halebidu, and the Kesava Temple at Somanathapura,
Brihadeeswara Temple, Thanjavur, the Sun Temple, Konark, Sri Ranganathaswamy Temple at
Srirangam, and the Buddha stupa (Chinna Lanja dibba and Vikramarka kota dibba) at Bhattiprolu.
Angkor Wat, Borobudur and other Buddhist and Hindu temples indicate strong Indian influence on
South East Asian architecture, as they are built in styles almost identical to traditional Indian
religious buildings.
The traditional system of Vaastu Shastra serves as India's version of Feng Shui, influencing town
planning, architecture, and ergonomics. It is unclear which system is older, but they contain
certain similarities. Feng Shui is more commonly used throughout the world. Though Vastu is
conceptually similar to Feng Shui in that it also tries to harmonize the flow of energy (also called life-force, or Prana in Sanskrit and Chi/Ki in Chinese/Japanese) through the house, it differs in the
details, such as the exact directions in which various objects, rooms, materials, etc. are to be
placed.
With the advent of Islamic influence from the west, Indian architecture was adapted to allow the
traditions of the new religion. Fatehpur Sikri, Taj Mahal, Gol Gumbaz, Qutub Minar, Red Fort of Delhi
are creations of this era, and are often used as the stereotypical symbols of India. The colonial
rule of the British Empire saw the development of Indo-Saracenic style, and mixing of several
other styles, such as European Gothic. The Victoria Memorial or the Chhatrapati Shivaji Terminus are
notable examples.
Indian architecture has influenced eastern and southeastern Asia, due to the spread of Buddhism.
A number of Indian architectural features such as the temple mound or stupa, temple spire or
sikhara, temple tower or pagoda and temple gate or torana, have become famous symbols of
Asian culture, used extensively in East Asia and South East Asia. The central spire is also
sometimes called a vimanam. The southern temple gate, or gopuram is noted for its intricacy and
majesty.
Contemporary Indian architecture is more cosmopolitan. Cities are extremely compact and
densely populated. Mumbai's Nariman Point is famous for its Art Deco buildings. Recent creations
such as the Lotus Temple, and the various modern urban developments of India like Chandigarh,
are notable.
[edit] Recreation and sports
Main article: Sports in India
The annual snake boat race is performed during Onam Celebrations on the Pamba River at
Aranmula near Pathanamthitta.
In the area of recreation and sports, India has evolved a number of games. The modern eastern
martial arts originated as ancient games and martial arts in India, and it is believed by some that
these games were transmitted to foreign countries, where they were further adapted and
modernized. Traditional indigenous sports include kabaddi and gilli-danda, which are played in
most parts of the country.
A few games introduced during the British Raj have grown quite popular in India: field hockey,
football (soccer) and especially cricket. Although field hockey is India's official national sport,
cricket is by far the most popular sport not only in India, but the entire subcontinent, thriving
recreationally and professionally. Cricket has even been used recently as a forum for diplomatic
relations between India and Pakistan. The two nations' cricket teams face off annually and such
contests are quite impassioned on both sides. Polo is also popular.
Indoor and outdoor games like Chess, Snakes and Ladders, Playing cards, Carrom, Badminton are
popular. Chess was invented in India.
Games of strength and speed flourished in India. In ancient India stones were used for weights,
marbles, and dice. Ancient Indians competed in chariot racing, archery, horsemanship, military tactics,
wrestling, weight lifting, hunting, swimming and running races.
[edit] Television
Indian television started off in 1959 in New Delhi with tests for educational telecasts.[40] Indian small screen programming started off in the mid 1970s. At that time there was only one national channel, Doordarshan, which was government owned. The year 1982 saw a revolution in TV programming in India: with the New Delhi Asian Games that year, India saw the colour version of TV. The Ramayana and Mahabharat were among the popular television series produced. By the late 1980s more and more people started to own television sets. Though there was a single channel, television programming had reached saturation. Hence the government opened up another channel which had part national programming and part regional. This channel was known as DD 2, later DD Metro. Both channels were broadcast terrestrially.
In 1991, the government liberalized its markets, opening them up to cable television. Since then, there has been a spurt in the number of channels available. Today, Indian television is a huge industry by itself, with thousands of programmes in all the states of India. The small screen has produced numerous celebrities of its own, some even attaining national fame. TV soaps are extremely popular with housewives as well as working women, and even men of all kinds. Some lesser known actors have found success in Bollywood. Indian TV
now has many of the same channels as Western TV, including stations such as Cartoon Network,
Nickelodeon, and MTV India.
[edit] Cinema
Main article: Cinema of India
Shooting of a Bollywood dance number.
Bollywood is the informal name given to the popular Mumbai-based film industry in India.
Bollywood and the other major cinematic hubs (in Bengali, Kannada, Malayalam, Marathi, Tamil,
Punjabi and Telugu) constitute the broader Indian film industry, whose output is considered to be the
largest in the world in terms of number of films produced and number of tickets sold.
India has produced many critically acclaimed film-makers like K. Vishwanath, Bapu, Jagdaman Grewal, Satyajit Ray, Ritwik Ghatak, Guru Dutt, Adoor Gopalakrishnan, Girish Kasaravalli, Shekhar Kapoor, Hrishikesh Mukherjee, Shankar Nag, Girish Karnad, G. V. Iyer, etc. (See Indian film directors). With the opening up of the economy in recent years and consequent
exposure to world cinema, audience tastes have been changing. In addition, multiplexes have
mushroomed in most cities, changing the revenue patterns.
SMALL INDUSTRIES DEVELOPMENT ORGANISATION (SIDO)
ORGANISATIONAL STRUCTURE OF SIDO
Small Industries Development Organisation (SIDO), the apex body at the Central level for formulating
policy for the development of Small Scale Industries in the country, is headed by the Additional
Secretary & Development Commissioner (Small Scale Industries) under the Ministry of Small Scale
Industries, Govt. of India.
SIDO plays a very constructive role in strengthening this vital sector, which has proved to be one of
the strong pillars of the economy of the country. It functions through a network of field offices,
namely 30 SISIs, 28 Br. SISIs, 4 RTCs and 7 FTSs, along with various training and production centers
and specialized institutes spread over different parts of the country. It renders services in the
following areas:
• Advising the Govt. in policy matters concerning the small scale sector.
• Providing techno-economic and managerial consultancy, common facilities and extension services.
• Providing facilities for technology up-gradation, modernization, quality improvement and infrastructure.
• Human resources development through training and skill up-gradation.
• Providing economic information services.
• Maintaining close liaison and vital linkage with the Central Ministries, Planning Commission,
Financial Institutions, State Govts. and similar other developmental organizations/agencies related
to the promotion and development of the SSI sector.
• Evolving and coordinating policies for the development of ancillaries.
• Monitoring of the PMRY Scheme.
• Monitoring the working of different Tool Rooms & PPDCs.
Convertible bond
In finance, a convertible note (or, if it has a maturity of greater than 10 years, a convertible
debenture) is a type of bond that the holder can convert into shares of common stock in the issuing
company or cash of equal value, at an agreed-upon price. It is a hybrid security with debt- and
equity-like features. Although it typically has a low coupon rate, the instrument carries additional
value through the option to convert the bond to stock, and thereby participate in further growth in
the company's equity value. The investor receives the potential upside of conversion into equity
while protecting the downside with cash flow from the coupon payments.
From the issuer's perspective, the key benefit of raising money by selling convertible bonds is a
reduced cash interest payment. However, in exchange for the benefit of reduced interest payments,
the value of shareholders' equity is reduced due to the stock dilution expected when bondholders
convert their bonds into new shares.
The convertible bond markets in the United States and Japan are of primary global importance.
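To make the conversion trade-off concrete, here is a minimal sketch (not part of the original text)
that values a convertible at maturity as the better of its redemption value and its conversion value;
the par value of 1,000 and the conversion ratio of 25 shares are hypothetical figures chosen only to
illustrate the mechanics.

# Minimal sketch with hypothetical figures: payoff of one convertible bond at maturity.
def convertible_payoff(share_price, par=1000.0, conversion_ratio=25.0):
    """Payoff at maturity of one convertible bond, ignoring accrued coupons."""
    redemption_value = par                                # straight-bond outcome
    conversion_value = conversion_ratio * share_price     # equity outcome
    return max(redemption_value, conversion_value)

for price in (20, 30, 60):                                # share-price scenarios
    print(price, convertible_payoff(price))
# At 20 or 30 the bond floor (1000) dominates; at 60 conversion (1500) is worth more.

The max() in the sketch is the source of the "downside protection with upside potential" described
above: the bond floor limits losses while conversion captures gains in the share price.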
• ILO Constitution
• ILO Convention No. 29: Forced Labour Convention, 1930
• ILO Convention No. 81: Labour Inspection Convention, 1947
• ILO Convention No. 87: Freedom of Association and Protection of the Right to Organise, 1948
• ILO Convention No. 97: Migration for Employment Convention, 1949
• ILO Convention No. 98: Right to Organise and Collective Bargaining Convention, 1949
• ILO Convention No. 100: Equal Remuneration Convention, 1951
• ILO Convention No. 102: Social Security (Minimum Standards) Convention, 1952
• ILO Convention No. 105: Abolition of Forced Labour Convention, 1957
• ILO Convention No. 111: Discrimination (Employment and Occupation) Convention, 1958
• ILO Convention No. 115: Radiation Protection Convention, 1960
• ILO Convention No. 122: Employment Policy Convention, 1964
• ILO Convention No. 129: Labour Inspection (Agriculture) Convention, 1969
• ILO Convention No. 138: Minimum Age Convention, 1973
• ILO Convention No. 143: Migrant Workers (Supplementary Provisions) Convention, 1975
• ILO Convention No. 144: Tripartite Consultation (International Labour Standards) Convention, 1976
• ILO Convention No. 155: Occupational Safety and Health Convention, 1981
• ILO Convention No. 158: Termination of Employment Convention, 1982
• ILO Convention No. 161: Occupational Health Services Convention, 1985
• ILO Convention No. 182: Worst Forms of Child Labour Convention, 1999
• ILO Convention No. 187: Promotional Framework for Occupational Safety and Health Convention, 2006
• ILO Declaration of Philadelphia
• ILO Declaration on Fundamental Principles and Rights at Work, 1998
• ILO Declaration on Social Justice for a Fair Globalization
United Nations Global Compact
The United Nations Global Compact, also known as Compact or UNGC, is a United Nations
initiative to encourage businesses worldwide to adopt sustainable and socially responsible policies,
and to report on their implementation. The Global Compact is a principle-based framework for
businesses, stating ten principles in the areas of human rights, labour, the environment and anti-
corruption. Under the Global Compact, companies are brought together with UN agencies, labour
groups and civil society.
The Global Compact is the world's largest corporate citizenship initiative and as a voluntary
initiative has two objectives: "Mainstream the ten principles in business activities around the
world" and "Catalyse actions in support of broader UN goals, such as the Millennium Development
Goals (MDGs)."[1]
The Global Compact was first announced by the then UN Secretary-General Kofi Annan in an
address to The World Economic Forum on January 31, 1999, and was officially launched at UN
Headquarters in New York on July 26, 2000.
The Global Compact Office is supported by six UN agencies: the United Nations High
Commissioner for Human Rights; the United Nations Environment Programme; the International Labour
Organization; the United Nations Development Programme; the United Nations Industrial Development
Organization; and the United Nations Office on Drugs and Crime.
The Ten Principles
The Global Compact was initially launched with nine Principles. On June 24, 2004, during the first
Global Compact Leaders Summit, Kofi Annan announced the addition of a tenth principle
against corruption. This step followed an extensive consultation process with all Global Compact
participants.
Human Rights
Businesses should:
• Principle 1: Support and respect the protection of internationally proclaimed human rights; and
• Principle 2: Make sure that they are not complicit in human rights abuses.
Labour Standards
Businesses should uphold:
• Principle 3: the freedom of association and the effective recognition of the right to
collective bargaining;
• Principle 4: the elimination of all forms of forced and compulsory labour;
• Principle 5: the effective abolition of child labour; and
• Principle 6: the elimination of discrimination in employment and occupation.
Environment
Businesses should:
• Principle 7: support a precautionary approach to environmental challenges;
• Principle 8: undertake initiatives to promote environmental responsibility; and
• Principle 9: encourage the development and diffusion of environmentally
friendly technologies.
Anti-Corruption
• Principle 10: Businesses should work against corruption in all its forms,
including extortion and bribery.
[edit] Facilitation
The Global Compact is not a regulatory instrument, but rather a forum for discussion and a
network for communication including governments; companies and labour organisations, whose
actions it seeks to influence; and civil society organizations, representing its stakeholders.
The Compact itself says of companies that have declared their support for the Global Compact
principles: "This does not mean that the Global Compact recognizes or certifies that these
companies have fulfilled the Compact's principles."
The Compact's goals are intentionally flexible and vague, but it distinguishes the following
channels through which it provides facilitation and encourages dialogue: policy dialogues,
learning, local networks and projects.
The first Global Compact Leaders Summit, chaired by the then Secretary-General Kofi Annan,
was held in UN Headquarters in New York on June 24, 2004. It aimed to bring "intensified
international focus and increased momentum" to the Global Compact. On the eve of the
conference, delegates were invited to attend the first Prix Ars Electronica Digital Communities
award ceremony, which was co-hosted by a representative from the UN.
The second Global Compact Leaders Summit, chaired by Secretary-General Ban Ki-moon, was
held on 5–6 July 2007 at the Palais des Nations in Geneva, Switzerland. It adopted the Geneva
Declaration on corporate responsibility.
CSR/SRI is still voluntary in Denmark, but if a company has no policy on CSR it must state its
position on CSR in its annual financial report.
[edit] Crises and their consequences
Often it takes a crisis to precipitate attention to CSR. One of the most active stands against
environmental management is the CERES Principles that resulted after the Exxon Valdez incident
in Alaska in 1989 (Grace and Cohen 2006). Other examples include the lead-contaminated paint
used by toy giant Mattel, which required a recall of millions of toys globally and caused the
company to initiate new risk management and quality control processes. In another example,
Magellan Metals in the West Australian town of Esperance was responsible for lead contamination
that killed thousands of birds in the area. The company had to cease business immediately and work
with independent regulatory bodies to execute a cleanup. Odwalla also experienced a crisis, with
sales dropping 90 percent and the company's stock price dropping 34 percent, due to several
cases of E. coli spread through Odwalla apple juice. The company ordered a recall of all apple or
carrot juice products, introduced a new process called "flash pasteurization", and kept lines of
communication with customers constantly open.
[edit] Stakeholder priorities
Increasingly, corporations are motivated to become more socially responsible because their most
important stakeholders expect them to understand and address the social and community issues
that are relevant to them. Understanding what causes are important to employees is usually the
first priority because of the many interrelated business benefits that can be derived from
increased employee engagement (i.e. more loyalty, improved recruitment, increased retention,
higher productivity, and so on). Key external stakeholders include customers, consumers,
investors (particularly institutional investors), regulators, academics, and the media.
Adjudication
Adjudication is the legal process by which an arbiter or judge reviews evidence and argumentation
including legal reasoning set forth by opposing parties or litigants to come to a decision which
determines rights and obligations between the parties involved. Three types of disputes are
resolved through adjudication:
1. Disputes between private parties, such as individuals or corporations.
2. Disputes between private parties and public officials.
3. Disputes between public officials or public bodies.
[edit] In Australia
Robert Gaussen is said to have pioneered the introduction of the adjudication process in
Australia through his role in drafting adjudication legislation in most states and territories in
the country.
[edit] In Victoria
Adjudication[4] is a relatively new process introduced by the Government of Victoria[5] in
Australia, to allow for the rapid determination of progress claims under building contracts or
sub-contracts and contracts for the supply of goods or services in the building industry. This
process was designed to ensure cash flow to businesses in the building industry, without parties
getting tied up in lengthy and expensive litigation or arbitration. It is regulated by the Building and
Construction Industry Security of Payment Act 2002.[6]
Builders, sub-contractors and suppliers need to carefully choose a nominating authority to which
they make an adjudication application.[7]
[edit] In Queensland
The Building and Construction Industry Payments Act 2004 (BCIPA) came into effect in
Queensland in October 2004. Through a statutory-based process known as adjudication, a
claimant can seek to resolve payment-on-account disputes. The act covers construction contracts
and related contracts for the supply of goods and services, whether written or verbal. BCIPA is regulated by
the Building and Construction Industry Payments Agency, a branch of the Queensland Building
Services Authority[8].
Conciliation
Conciliation is an alternative dispute resolution (ADR) process whereby the parties to a dispute
(including future interest disputes) agree to utilize the services of a conciliator, who then meets
with the parties separately in an attempt to resolve their differences. The conciliator does this by lowering
tensions, improving communications, interpreting issues, providing technical assistance,
exploring potential solutions and bringing about a negotiated settlement.
Conciliation differs from arbitration in that the conciliation process, in and of itself, has no legal
standing, and the conciliator usually has no authority to seek evidence or call witnesses, usually
writes no decision, and makes no award.
Conciliation differs from mediation in that the main goal is to conciliate, most of the time by
seeking concessions. In mediation, the mediator tries to guide the discussion in a way that
optimizes the parties' needs, takes feelings into account and reframes representations.
In conciliation the parties seldom, if ever, actually face each other across the table in the
presence of the conciliator.
[edit] Effectiveness
Recent studies in the processes of negotiation have indicated the effectiveness of a technique that
deserves mention here. A conciliator assists each of the parties to independently develop a list of
all of their objectives (the outcomes which they desire to obtain from the conciliation). The
conciliator then has each of the parties separately prioritize their own list from most to least
important. He/She then goes back and forth between the parties and encourages them to "give"
on the objectives one at a time, starting with the least important and working toward the most
important for each party in turn. The parties rarely place the same priorities on all objectives, and
usually have some objectives that are not listed by the other party. Thus the conciliator can
quickly build a string of successes and help the parties create an atmosphere of trust which the
conciliator can continue to develop.
Most successful conciliators are highly skilled negotiators. Some conciliators operate under the
auspices of any one of several non-governmental entities, or for governmental agencies such as
the Federal Mediation and Conciliation Service in the United States.
[edit] Conciliation in Japan
Japanese law makes extensive use of conciliation (調停, chōtei) in civil disputes. The most
common forms are civil conciliation and domestic conciliation, both of which are managed under
the auspices of the court system by one judge and two non-judge "conciliators."
Civil conciliation is a form of dispute resolution for small lawsuits, and provides a simpler and
cheaper alternative to litigation. Depending on the nature of the case, non-judge experts (doctors,
appraisers, actuaries, etc.) may be called by the court as conciliators to help decide the case.
Domestic conciliation is most commonly used to handle contentious divorces, but may apply to
other domestic disputes such as the annulment of a marriage or acknowledgment of paternity.
Parties in such cases are required to undergo conciliation proceedings and may only bring their
case to court once conciliation has failed.
Average cost
From Wikipedia, the free encyclopedia
Jump to:navigation, search
In economics, average cost is equal to total cost divided by the number of goods produced (the
output quantity, Q). It is also equal to the sum of average variable costs (total variable costs
divided by Q) plus average fixed costs (total fixed costs divided by Q). Average costs may be
dependent on the time period considered (increasing production may be expensive or impossible
in the short term, for example). Average costs affect the supply curve and are a fundamental
component of supply and demand.
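In symbols, restating the definitions just given (where TC is total cost, FC total fixed cost and VC
total variable cost):
AC(Q) = TC(Q)/Q = (FC + VC(Q))/Q = AFC(Q) + AVC(Q)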
[edit] Overview
Average cost is distinct from the price, and depends on the interaction with demand through
elasticity of demand and elasticity of supply. In cases of perfect competition, price may be lower than
average cost due to marginal cost pricing.
Average cost will vary in relation to the quantity produced unless fixed costs are zero and
variable costs constant. A cost curve can be plotted, with cost on the y-axis and quantity on the x-
axis. Marginal costs are often shown on these graphs, with marginal cost representing the cost of
the last unit produced at each point; marginal costs are the first derivative of total or variable costs.
A typical average cost curve will have a U-shape, because fixed costs are all incurred before any
production takes place and marginal costs are typically increasing, because of diminishing marginal
productivity. In this "typical" case, for low levels of production there are economies of scale:
marginal costs are below average costs, so average costs are decreasing as quantity increases. An
increasing marginal cost curve will intersect a U-shaped average cost curve at its minimum, after
which point the average cost curve begins to slope upward. This is indicative of diseconomies of
scale. For further increases in production beyond this minimum, marginal cost is above average
costs, so average costs are increasing as quantity increases. An example of this typical case
would be a factory designed to produce a specific quantity of widgets per period: below a certain
production level, average cost is higher due to under-utilised equipment, while above that level,
production bottlenecks increase the average cost.
[edit] Relationship to marginal cost
When average cost is declining as output increases, marginal cost is less than average cost. When
average cost is rising, marginal cost is greater than average cost. When average cost is neither
rising nor falling (at a minimum or maximum), marginal cost equals average cost.
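These statements follow from differentiating average cost with respect to output, a standard calculus
step added here for clarity:
d(AC)/dQ = d(TC/Q)/dQ = (MC·Q - TC)/Q^2 = (MC - AC)/Q
so average cost falls wherever MC < AC, rises wherever MC > AC, and is flat exactly where MC = AC.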
Other special cases for average cost and marginal cost appear frequently:
• Constant marginal cost/high fixed costs: each additional unit of production is
produced at constant additional expense per unit. The average cost curve
slopes down continuously, approaching marginal cost. An example may be
hydroelectric generation, which has no fuel expense, limited maintenance
expenses and a high up-front fixed cost (ignoring irregular maintenance costs
or useful lifespan). Industries where fixed marginal costs obtain, such as
electrical transmission networks, may meet the conditions for a natural
monopoly, because once capacity is built, the marginal cost to the incumbent
of serving an additional customer is always lower than the average cost for a
potential competitor. The high fixed capital costs are a barrier to entry.
• Minimum efficient scale / maximum efficient scale: marginal or average costs may be
non-linear, or have discontinuities. Average cost curves may therefore only
be shown over a limited scale of production for a given technology. For
example, a nuclear plant would be extremely inefficient (very high average
cost) for production in small quantities; similarly, its maximum output for any
given time period may essentially be fixed, and production above that level
may be technically impossible, dangerous or extremely costly. The long run
elasticity of supply will be higher, as new plants could be built and brought
on-line.
• Low or zero fixed costs / constant marginal cost: since there is no economy of
scale, average cost will be close to or equal to marginal cost. Examples may
include buying and selling of commodities (trading) etc...
Marginal cost
From Wikipedia, the free encyclopedia
Jump to:navigation, search
In economics and finance, marginal cost is the change in total cost that arises when the quantity
produced changes by one unit. That is, it is the cost of producing one more unit of a good.[1]
Mathematically, the marginal cost (MC) function is expressed as the first derivative of the total
cost(TC) function with respect to quantity (Q). Note that the marginal cost may change with
volume, and so at each level of production, the marginal cost is the cost of the next unit
produced.
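Written out, with the discrete approximation used when calculus is not applied:
MC(Q) = dTC(Q)/dQ ≈ ΔTC/ΔQ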
In general terms, marginal cost at each level of production includes any additional costs required
to produce the next unit. If producing additional vehicles requires, for example, building a new
factory, the marginal cost of those extra vehicles includes the cost of the new factory. In practice,
the analysis is segregated into short and long-run cases, and over the longest run, all costs are
marginal. At each level of production and time period being considered, marginal costs include
all costs which vary with the level of production, and other costs are considered fixed costs.
A number of other factors can affect marginal cost and its applicability to real world problems.
Some of these may be considered market failures. These may include information asymmetries, the
presence of negative or positive externalities, transaction costs, price discrimination and others.
For discrete calculation without calculus, marginal cost equals the change in total (or variable)
cost that comes with each additional unit produced. For instance, suppose the total cost of
making 1 shoe is $30 and the total cost of making 2 shoes is $40. The marginal cost of producing
the second shoe is $40 - $30 = $10.
[edit] Economies of scale
Production may be subject to economies of scale (or diseconomies of scale). Increasing returns to
scale are said to exist if additional units can be produced for less than the previous unit, that is,
average cost is falling. This can only occur if average cost at any given level of production is
higher than the marginal cost. Conversely, there may be levels of production where marginal cost
is higher than average cost, and average cost will rise for each unit of production after that point.
This type of production function is generally known as diminishing marginal productivity: at low
levels of production, productivity gains are easy and marginal costs falling, but productivity gains
become smaller as production increases; eventually, marginal costs rise because increasing
output (with existing capital, labor or organization) becomes more expensive. For this generic
case, minimum average cost occurs at the point where average cost and marginal cost are equal
(when plotted, the two curves intersect); this point will not be at the minimum for marginal cost
if fixed costs are greater than zero.
[edit] Short and long run costs and economies of scale
A textbook distinction is made between short-run and long-run marginal cost. The former takes
fixed costs as unchanged (for example, the capital equipment and overhead of the producer); any
change in production involves only changes in the inputs of labour, materials and energy. The
latter allows all inputs, including capital items (plant, equipment, buildings) to vary.
A long-run cost function describes the cost of production as a function of output assuming that
all inputs are obtained at current prices, that current technology is employed, and everything is
being built new from scratch. In view of the durability of many capital items this textbook
concept is less useful than one which allows for some scrapping of existing capital items or the
acquisition of new capital items to be used with the existing stock of capital items acquired in the
past. Long-run marginal cost then means the additional cost or the cost saving per unit of
additional or reduced production, including the expenditure on additional capital goods or any
saving from disposing of existing capital goods. Note that marginal cost upwards and marginal
cost downwards may differ, in contrast with marginal cost according to the less useful textbook
concept.
Economies of scale are said to exist when marginal cost according to the textbook concept falls
as a function of output and is less than the average cost per unit. This means that the average cost
of production from a larger new built-from-scratch installation falls below that from a smaller
new built-from-scratch installation. Under the more useful concept, with an existing capital
stock, it is necessary to distinguish those costs which vary with output from accounting costs
which will also include the interest and depreciation on that existing capital stock, which may be
of a different type from what can currently be acquired, having been bought in past years at past prices. The concept
of economies of scale then does not apply.
[edit] Externalities
Externalities are costs (or benefits) that are not borne by the parties to the economic transaction.
A producer may, for example, pollute the environment, and others may bear those costs. A
consumer may consume a good which produces benefits for society, such as education; because
the individual does not receive all of the benefits, he may consume less than efficiency would
suggest. Alternatively, an individual may be a smoker or alcoholic and impose costs on others. In
these cases, production or consumption of the good in question may differ from the optimum
level.
[edit] Negative externalities of production
Much of the time, private and social costs do not diverge from one another, but at times social
costs may be either greater or less than private costs. When marginal social costs of production
are greater than that of the private cost function, we see the occurrence of a negative externality of
production. Productive processes that result in pollution are a textbook example of production that
creates negative externalities.
Such externalities are a result of firms externalising their costs onto a third party in order to
reduce their own total cost. As a result of externalising such costs we see that members of society
will be negatively affected by such behavior of the firm. In this case, we see that an increased
cost of production on society creates a social cost curve that depicts a greater cost than the
private cost curve.
In an equilibrium state we see that markets creating negative externalities of production will
overproduce that good. As a result, the socially optimal production level would be lower than
that observed.
[edit] Positive externalities of production
When marginal social costs of production are less than that of the private cost function, we see
the occurrence of a positive externality of production. Production of public goods are a textbook
example of production that create positive externalities. An example of such a public good,
which creates a divergence in social and private costs, includes the production of education. It is
often seen that education is a positive for any whole society, as well as a positive for those
directly involved in the market.
Examining the relevant diagram we see that such production creates a social cost curve that is
less than that of the private curve. In an equilibrium state we see that markets creating positive
externalities of production will under produce that good. As a result, the socially optimal
production level would be greater than that observed.
[edit] Social costs
Main article: Social cost
Of great importance in the theory of marginal cost is the distinction between the marginal private
and social costs. The marginal private cost shows the cost associated to the firm in question. It is
the marginal private cost that is used by business decision makers in their profit maximization
goals, and by individuals in their purchasing and consumption choices. Marginal social cost is
similar to private cost in that it includes the cost functions of private enterprise but also that of
society as a whole, including parties that have no direct association with the private costs of
production. It incorporates all negative and positive externalities, of both production and
consumption.
Hence, when deciding whether or how much to buy, buyers take account of the cost to society of
their actions if private and social marginal cost coincide. The equality of price with social
marginal cost, by aligning the interest of the buyer with the interest of the community as a whole
is a necessary condition for economically efficient resource allocation.
[edit] Other cost definitions
• Fixed costs are costs which do not vary with output, for example, rent. In the
long run all costs can be considered variable.
• Variable costs, also known as operating costs, prime costs, on costs and direct
costs, are costs which vary directly with the level of output, for example,
labor, fuel, power and cost of raw material.
• Social costs of production are costs incurred by society, as a whole, resulting
from private production.
• Average total cost is the total cost divided by the quantity of output.
• Average fixed cost is the fixed cost divided by the quantity of output.
• Average variable cost is the variable cost divided by the quantity of output.
FC = 420
VC = 60Q + Q^2
TC = 420 + 60Q + Q^2
MC = 60 + 2Q
ATC = 420/Q + 60 + Q
AFC = 420/Q
AVC = 60 + Q
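As a quick check of these formulas, the short sketch below (a minimal illustration, not part of the
original text) evaluates them and locates the output at which ATC is minimized, which is also where
MC crosses ATC:

import math

FC = 420                              # fixed cost from the example above

def vc(q):  return 60*q + q**2        # variable cost
def tc(q):  return FC + vc(q)         # total cost
def mc(q):  return 60 + 2*q           # marginal cost (dTC/dQ)
def atc(q): return tc(q) / q          # average total cost
def afc(q): return FC / q             # average fixed cost
def avc(q): return vc(q) / q          # average variable cost

# ATC = 420/Q + 60 + Q is minimized where its derivative is zero, i.e. Q = sqrt(420).
q_min = math.sqrt(420)
print(round(q_min, 2), round(atc(q_min), 2), round(mc(q_min), 2))
# Output: 20.49 100.99 100.99 -- at the ATC minimum, marginal cost equals average total cost.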
Living wage
From Wikipedia, the free encyclopedia
Jump to:navigation, search
Living wage is a term used to describe the minimum hourly wage necessary for shelter (housing
and incidentals such as clothing and other basic needs) and nutrition for a person for an extended
period of time (lifetime). In developed countries such as the United Kingdom or Switzerland, this
standard generally means that a person working forty hours a week, with no additional income,
should be able to afford a specified quality or quantity of housing, food, utilities, transport, health
care, and recreation.
This concept differs from the minimum wage in that the latter is set by law and may fail to meet
the requirements of a living wage. It differs somewhat from basic needs in that the basic needs
model usually measures a minimum level of consumption, without regard for the source of the
income. A related concept is that of a family wage – one sufficient to not only live on oneself, but
also to raise a family, though these notions may be conflated.
[edit] Implementations
The national and international living wage movements are supported by many labor unions and
community action groups such as ACORN.
[edit] Australia
In Australia, the 1907 Harvester Judgment ruled that an employer was obliged to pay his
employees a wage that guaranteed them a standard of living which was reasonable for "a human
being in a civilised community," regardless of his capacity to pay. Justice Higgins established a
wage of 7/- (7 shillings) per day or 42/- per week as a 'fair and reasonable' minimum wage for
unskilled workers. The judgment was later overturned but remains influential. In 1913, to
compensate for the rising cost of living, the basic wage was increased to 8/- per day, the first
increase since the minimum was set. The first Retail Price Index in Australia was published late in
1912. The basic wage system remained in place in Australia until 1967. It was also adopted by
some state tribunals and was in use in some states in the 1980s.
[edit] United States
In the United States, the state of Maryland and several municipalities and local governments have
enacted ordinances which set a minimum wage higher than the federal minimum for the purpose
of requiring all jobs to meet the living wage for that region. However, San Francisco, California
and Santa Fe, New Mexico have notably passed very wide-reaching living wage ordinances.
U.S. cities with living wage laws include Santa Fe and Albuquerque in New Mexico; San
Francisco, California; and Washington D.C.[3] (The city of Chicago, Illinois also passed a living wage
ordinance in 2006, but it was vetoed by the mayor.) Living wage laws typically cover only
businesses that receive state assistance or have contracts with the government.[4]
This effort began in 1994 when an alliance between a labor union and religious leaders in
Baltimore launched a successful campaign requiring city service contractors to pay a living
wage[5]. Subsequent to this effort, community advocates have won similar ordinances in cities
such as Boston, Los Angeles, San Francisco, and St. Louis. In 2007, there were at least 140
living wage ordinances in cities throughout the United States and more than 100 living wage
campaigns underway in cities, counties, states, and college campuses[6].
[edit] United Kingdom
In the United Kingdom, many campaigning organisations have responded to the low level of the
National Minimum Wage by asserting the need for it to be increased to a level more comparable
to a living wage. For instance, the Mayor of London's office hosts a Living Wage Unit which
monitors the level needed for a living wage in London (which has considerably higher living
costs than the rest of the UK). Other organisations with an interest in living wage issues include
the Living Wage Campaign[7], Church Action on Poverty[8] and the Scottish Low Pay Unit.
The Guardian newspaper columnist Polly Toynbee is also a major supporter of the campaign for a
living wage. The charity London Citizens is campaigning for a living wage to be implemented
across London.
[edit] Alternative policies
Some critics argue that there are alternative ways to deliver income to the poor, such as the
US Earned Income Tax Credit, the UK Working Tax Credit or a negative income tax, that don't have
the unemployment and deadweight loss effects that critics claim are the result of living wage law.
A further alternative is a job guarantee, where jobs are provided to all comers at a living wage,
setting a de facto (but not de jure) living wage.
Minimum wage
From Wikipedia, the free encyclopedia
Jump to:navigation, search
A minimum wage is the lowest hourly, daily or monthly wage that employers may legally pay to
employees or workers. Equivalently, it is the lowest wage at which workers may sell their labor.
Although minimum wage laws are in effect in a great many jurisdictions, there are differences of
opinion about the benefits and drawbacks of a minimum wage. Supporters of the minimum wage
say that it increases the standard of living of workers and reduces poverty.[1] Opponents say that
if it is high enough to be effective, it increases unemployment, particularly among workers with
very low productivity due to inexperience or handicap, thereby harming lesser skilled workers to
the benefit of better skilled workers.[2]
[edit] Background
A sweatshop in Chicago, Illinois in 1903
Minimum wages were first proposed as a way to control the proliferation of sweat shops in
manufacturing industries. The sweat shops employed large numbers of women and young
workers, paying them what were considered to be substandard wages. The sweatshop owners
were thought to have unfair bargaining power over their workers, and a minimum wage was
proposed as a means to make them pay "fairly." Over time, the focus changed to helping people,
especially families, become more self sufficient. Today, minimum wage laws cover workers in
most low-paid fields of employment.[3]
The minimum wage has a strong social appeal, rooted in concern about the ability of markets to
provide income equity for the least able members of the work force. An obvious solution to this
concern is to redefine the wage structure politically to achieve a socially preferable distribution
of income. Thus, minimum wage laws have usually been judged against the criterion of reducing
poverty.[4]
Although the goals of the minimum wage are widely accepted as proper, there is great
disagreement as to whether the minimum wage is effective in attaining its goals. From the time
of their introduction, minimum wage laws have been highly controversial politically, and have
received much less support from economists than from the general public. Despite decades of
experience and economic research, debates about the costs and benefits of minimum wages
continue today.[3]
The classic exposition of the minimum wage's shortcomings in reducing poverty was provided
by George Stigler in 1946:
• Employment may fall more than in proportion to the wage increase, thereby
reducing overall earnings;
• As uncovered sectors of the economy absorb workers released from the
covered sectors, the decrease in wages in the uncovered sectors may exceed
the increase in wages in the covered ones;
• The impact of the minimum wage on family income distribution may be
negative unless the fewer but better jobs are allocated to members of needy
families rather than to, for example, teenagers from families not in poverty;
• The legal restriction that employers cannot pay less than a legislated wage is
equivalent to the legal restriction that workers cannot work at all in the
protected sector unless they can find employers willing to hire them at that
wage.[4]
Direct empirical studies indicate that anti-poverty effects in the U.S. would be quite modest,
even if there were no unemployment effects. Very few low-wage workers come from families in
poverty. Those primarily affected by minimum wage laws are teenagers and low-skilled adult
females who work part time, and any wage rate effects on their income is strictly proportional to
the hours of work they are offered. So, if market outcomes for low-skilled families are to be
supplemented in a socially satisfactory way, factors other than wage rates must also be
considered. Employment opportunities and the factors that limit labor market participation must
be considered as well.[4] Economist Thomas Sowell has also argued that regardless of custom or
law, the real minimum wage is always zero, and zero is what some people would receive if they
fail to find jobs when they try to enter the workforce, or they lose the jobs they already have.[5]
[edit] Minimum wage law
Main article: Minimum wage law
First enacted in New Zealand in 1894,[6][7] there is now legislation or binding collective bargaining
regarding minimum wage in more than 90% of all countries.[8]
Minimum wage rates vary greatly across many different jurisdictions, not only in setting a
particular amount of money (e.g. US$7.25 per hour under U.S. Federal law, $8.55 in the U.S.
state of Washington,[9] and £5.80 (for those aged 22+) in the United Kingdom[10]), but also in terms
of which pay period (e.g. Russia and China set monthly minimums) or the scope of coverage.
Some jurisdictions allow employers to count tips given to their workers as credit towards the
minimum wage level. (See also: List of minimum wages by country)
[edit] Informal minimum wages
Sometimes a minimum wage exists without a law. Custom and extra-legal pressures from
governments or labor unions can produce a de facto minimum wage. So can international public
opinion, by pressuring multinational companies to pay Third World workers wages usually
found in more industrialized countries. The latter situation in Southeast Asia and Latin America
has been publicized in recent years, but it existed with companies in West Africa in the middle of
the twentieth century.[5]
[edit] Economics of the minimum wage
[edit] Simple supply and demand
Main article: Supply and demand
An analysis of supply and demand of the type shown in introductory mainstream economics
textbooks implies that by mandating a price floor above the equilibrium wage, minimum wage
laws should cause unemployment.[11][12] This is because a greater number of workers are willing
to work at the higher wage while a smaller number of jobs will be available at the higher wage.
Companies can be more selective in those whom they employ; thus the least skilled and least
experienced will typically be excluded.
According to the model shown in nearly all introductory textbooks on economics, increasing the
minimum wage decreases the employment of minimum-wage workers.[13] One such textbook
says:
"If a higher minimum wage increases the wage rates of unskilled workers above the level that
would be established by market forces, the quantity of unskilled workers employed will fall. The
minimum wage will price the services of the least productive (and therefore lowest-wage)
workers out of the market. ... The direct results of minimum wage legislation are clearly mixed.
Some workers, most likely those whose previous wages were closest to the minimum, will enjoy
higher wages. Others, particularly those with the lowest prelegislation wage rates, will be unable
to find work. They will be pushed into the ranks of the unemployed or out of the labor force."[14]
It illustrates the point with a supply and demand diagram similar to the one below.
It is assumed that workers are willing to labor for more hours if paid a higher wage. Economists
graph this relationship with the wage on the vertical axis and the quantity (hours) of labor
supplied on the horizontal axis. Since higher wages increase the quantity supplied, the supply of
labor curve is upward sloping, and is shown as a line moving up and to the right.[15]
A firm's cost is a function of the wage rate. It is assumed that the higher the wage, the fewer
hours an employer will demand of an employee. This is because, as the wage rate rises, it
becomes more expensive for firms to hire workers and so firms hire fewer workers (or hire them
for fewer hours). The demand of labor curve is therefore shown as a line moving down and to the
right.[15]
Combining the demand and supply curves for labor allows us to examine the effect of the
minimum wage. We will start by assuming that the supply and demand curves for labor will not
change as a result of raising the minimum wage. This assumption has been questioned. If no
minimum wage is in place, workers and employers will continue to adjust the quantity of labor
supplied according to price until the quantity of labor demanded is equal to the quantity of labor
supplied, reaching equilibrium price, where the supply and demand curves intersect. Minimum
wage behaves as a classical price floor on labor. Standard theory says that, if set above the
equilibrium price, more labor will be willing to be provided by workers than will be demanded
by employers, creating a surplus of labor i.e. unemployment.[15]
In other words, the simplest and most basic economics says this about commodities like labor
(and wheat, for example): Artificially raising the price of the commodity tends to cause the
supply of it to increase and the demand for it to lessen. The result is a surplus of the commodity.
When there is a wheat surplus, the government buys it. Since the government doesn't hire surplus
labor, the labor surplus takes the form of unemployment, which tends to be higher with
minimum wage laws than without them.[5]
So the basic theory says that raising the minimum wage helps workers whose wages are raised,
and hurts people who are not hired (or lose their jobs) because companies cut back on
employment. But proponents of the minimum wage hold that the situation is much more
complicated than the basic theory can account for.
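The textbook price-floor argument can be made concrete with a small numerical sketch; the linear
supply and demand curves below are hypothetical and chosen only to show how a surplus of labor
(unemployment in this model) appears once the wage floor is set above the equilibrium wage.

# Hypothetical linear labor market (quantities in millions of hours, wage in dollars/hour).
def labor_supplied(wage):
    return 10 * wage            # upward-sloping supply: higher wages draw out more hours

def labor_demanded(wage):
    return 120 - 10 * wage      # downward-sloping demand: higher wages reduce hiring

equilibrium_wage = 6.0          # 10w = 120 - 10w  =>  w = 6, with 60 units employed
for wage in (equilibrium_wage, 7.25):                      # no floor vs. a binding floor
    surplus = labor_supplied(wage) - labor_demanded(wage)
    print(wage, labor_supplied(wage), labor_demanded(wage), surplus)
# At the 7.25 floor, 72.5 units are offered but only 47.5 are demanded: a surplus of 25,
# which in this simple model is the unemployment created by the price floor.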
One complicating factor is possible monopsony in the labor market, whereby the individual
employer has some market power in determining wages paid. Thus it is at least theoretically
possible that the minimum wage may boost employment. Though single employer market power
is unlikely to exist in most labor markets in the sense of the traditional 'company town,'
asymmetric information, imperfect mobility, and the 'personal' element of the labor transaction
give some degree of wage-setting power to most firms.[16]
[edit] Criticism of the "textbook model"
The argument that minimum wages decrease employment is based on a simple supply and
demand model of the labor market. A number of economists (for example Pierangelo
Garegnani[17], Robert L. Vienneau[18], and Arrigo Opocher & Ian Steedman[19]), building on the
work of Piero Sraffa, argue that that model, even given all its assumptions, is logically incoherent.
Michael Anyadike-Danes and Wynne Godley[20] argue, based on simulation results, that little of
the empirical work done with the textbook model constitutes a potentially falsifying test, and,
consequently, empirical evidence hardly exists for that model. Graham White [21] argues, partially
on the basis of Sraffianism, that the policy of increased labor market flexibility, including the
reduction of minimum wages, does not have an "intellectually coherent" argument in economic
theory.
Gary Fields, Professor of Labor Economics and Economics at Cornell University, argues that the
standard "textbook model" for the minimum wage is "ambiguous", and that the standard
theoretical arguments incorrectly measure only a one-sector market. Fields says a two-sector
market, where "the self-employed, service workers, and farm workers are typically excluded
from minimum-wage coverage… [and with] one sector with minimum-wage coverage and the
other without it [and possible mobility between the two]," is the basis for better analysis.
Through this model, Fields shows the typical theoretical argument to be ambiguous and says "the
predictions derived from the textbook model definitely do not carry over to the two-sector case.
Therefore, since a non-covered sector exists nearly everywhere, the predictions of the textbook
model simply cannot be relied on."[22]
An alternate view of the labor market has low-wage labor markets characterized as monopsonistic
competition wherein buyers (employers) have significantly more market power than do sellers
(workers). This monopsony could be a result of intentional collusion between employers, or
naturalistic factors such as segmented markets, information costs, imperfect mobility and the
'personal' element of labor markets. In such a case the diagram above would not yield the
quantity of labor clearing and the wage rate. This is because while the upward sloping aggregate
labor supply would remain unchanged, instead of using the downward labor demand curve
shown in the diagram above, monopsonistic employers would use a steeper downward sloping
curve corresponding to marginal expenditures to yield the intersection with the supply curve
resulting in a wage rate lower than would be the case under competition. In addition, the amount of
labor sold would be lower than the competitive optimal allocation.
Such a case is a type of market failure and results in workers being paid less than their marginal
value. Under the monopsonistic assumption, an appropriately set minimum wage could increase
both wages and employment, with the optimal level being equal to the marginal productivity of
labor.[23] This view emphasizes the role of minimum wages as a market regulation policy akin to
antitrust policies, as opposed to an illusory "free lunch" for low-wage workers.
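The monopsony argument can be illustrated with numbers (a hypothetical sketch, not drawn from the
cited studies): suppose the wage needed to attract L workers is w(L) = L, so the wage bill is L^2 and
the marginal expenditure of hiring is 2L, while the value of the L-th worker to the firm is
MRP(L) = 30 - L.

# Hypothetical monopsony sketch: inverse labor supply w(L) = L, marginal revenue product 30 - L.
def supply_wage(L):
    return L                    # wage required to attract L workers

def mrp(L):
    return 30 - L               # value of the L-th worker to the firm

# Without a wage floor the monopsonist sets MRP equal to marginal expenditure: 30 - L = 2L,
# giving L = 10 workers at a wage of supply_wage(10) = 10.
L_no_floor, w_no_floor = 10, supply_wage(10)

# With a minimum wage of 13, any worker up to L = 13 can be hired at exactly 13, and hiring
# stays profitable as long as MRP(L) >= 13 (i.e. up to L = 17), so employment is min(13, 17).
minimum_wage = 13
L_with_floor = min(13, 30 - minimum_wage)
assert mrp(L_with_floor) >= minimum_wage   # the 13th worker is still worth hiring

print(L_no_floor, w_no_floor)              # 10 10
print(L_with_floor, minimum_wage)          # 13 13 -> both wage and employment rise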
Another reason minimum wage may not affect employment in certain industries is that the
demand for the product the employees produce is highly inelastic.[24] For example, if
management is forced to increase wages, management can pass on the increase in wage to
consumers in the form of higher prices. Since demand for the product is highly inelastic,
consumers continue to buy the product at the higher price and so the manager is not forced to lay
off workers.
Three other possible reasons minimum wages do not affect employment were suggested by Alan
Blinder: higher wages may reduce turnover, and hence training costs; raising the minimum wage
may "render moot" the potential problem of recruiting workers at a higher wage than current
workers; and minimum wage workers might represent such a small proportion of a business's
cost that the increase is too small to matter. He admits that he does not know if these are correct,
but argues that "the list demonstrates that one can accept the new empirical findings and still be a
card-carrying economist."[25]
[edit] Debate over consequences
Various groups have great ideological, political, financial, and emotional investments in issues
surrounding minimum wage laws. For example, agencies that administer the laws have a vested
interest in showing that "their" laws do not create unemployment, as do labor unions, whose
members' jobs are protected by minimum wage laws. On the other side of the issue, low-wage
employers such as restaurants finance the Employment Policies Institute, which has released
numerous studies opposing the minimum wage.[26] The presence of these powerful groups and
factors means that the debate on the issue is not always based on dispassionate analysis.
Additionally, it is extraordinarily difficult to separate the effects of minimum wage from all the
other variables that affect employment.[5]
The following summarizes the arguments made by those for and against minimum wage laws:
Arguments in favor of Minimum Wage Laws
Supporters of the minimum wage claim it has these effects:
• Increases the standard of living for the poorest and most vulnerable class in society and raises the average.[1]
• Motivates and encourages employees to work harder (unlike welfare programs and other transfer payments).[27]
• Stimulates consumption, by putting more money in the hands of low-income people who spend their entire paychecks.[1]
• Increases the work ethic of those who earn very little, as employers demand more return from the higher cost of hiring these employees.[1]
• Decreases the cost of government social welfare programs by increasing incomes for the lowest-paid.[1]
Arguments against Minimum Wage Laws
Opponents of the minimum wage claim it has these effects:
• As a labor market analogue of political-economic protectionism, it excludes low cost competitors from labor markets, hampers firms in reducing wage costs during trade downturns, generates various industrial-economic inefficiencies as well as unemployment, poverty, and price rises, and generally dysfunctions.[28]
• Hurts small business more than large business.[29]
• Reduces quantity demanded of workers, either through a reduction in the number of hours worked by individuals, or through a reduction in the number of jobs.[30][31]
• May cause inflation as businesses try to compensate by raising the prices of the goods being sold.[32][33]
• Benefits some workers at the expense of the poorest and least productive.[34]
• Can result in the exclusion of certain groups from the labour force.[35]
• Businesses may spend less on training their employees.[36]
• Is less effective than other methods (e.g. the Earned Income Tax Credit) at reducing poverty, and is more damaging to businesses than those other methods.[36]
• Discourages further education among the poor by enticing people to enter the job market.[36]
In 2006, the International Labour Organization (ILO)[8] argued that the minimum wage could not be
directly linked to unemployment in countries that have suffered job losses. In April 2010, the
Organisation for Economic Co-operation and Development (OECD)[37] released a report arguing that
countries could alleviate teen unemployment by “lowering the cost of employing low-skilled
youth” through a sub-minimum training wage. A study of U.S. states showed that businesses'
annual and average payrolls grew faster and employment grew at a faster rate in states with a
minimum wage.[38] The study showed a correlation, but did not claim to prove causation.
Although strongly opposed by both the business community and the Conservative Party when
introduced in 1999, the minimum wage introduced in the UK is no longer controversial and the
Conservatives reversed their opposition in 2000.[39] A review of its effects found no discernible
impact on employment levels.[40] However, prices in the minimum wage sector were found to
have risen significantly faster than prices in non-minimum wage sectors, most notably in the four
years following the implementation of the minimum wage.[41]
Since the introduction of a national minimum wage in the UK in 1999, its effects on employment
were subject to extensive research and observation by the Low Pay Commission. The Low Pay
Commission found that, rather than make employees redundant, employers have reduced their
rate of hiring, reduced staff hours, increased prices, and have found ways to cause current
workers to be more productive (especially service companies).[42] Neither trade unions nor
employer organizations contest the minimum wage, although the latter had opposed it especially
heavily until 1999.
[edit] Empirical studies
Economists disagree as to the measurable impact of minimum wages in the 'real world'. This
disagreement usually takes the form of competing empirical tests of the elasticities of demand and
supply in labor markets and the degree to which markets differ from the efficiency that models of
perfect competition predict.
Economists have done empirical studies on numerous aspects of the minimum wage,
prominently including:[3]
• Employment effects, the most frequently studied aspect
• Effects on the distribution of wages and earnings among low-paid and higher-
paid workers
• Effects on the distribution of incomes among low-income and higher-income
families
• Effects on the skills of workers through job training and the deferring of work
to acquire education
• Effects on prices and profits
Until the mid-1990s, a strong consensus existed among economists, both conservative and
liberal, that the minimum wage reduced employment, especially among younger and low-skill
workers.[13] In addition to the basic supply-demand intuition, there were a number of empirical
studies that supported this view. For example, Gramlich (1976) found that many of the benefits
went to higher income families, and in particular that teenagers were made worse off by the
unemployment associated with the minimum wage.[43]
Brown et al. (1983) note that time series studies to that point had found that for a 10 percent
increase in the minimum wage, there was a decrease in teenage employment of 1-3 percent.
However, for the effect on the teenage unemployment rate, the studies exhibited wider variation
in their estimates, from zero to over 3 percent. In contrast to the simple supply/demand figure
above, it was commonly found that teenagers withdrew from the labor force in response to the
minimum wage, which produced the possibility of equal reductions in the supply as well as the
demand for labor at a higher minimum wage and hence no impact on the unemployment rate.
Using a variety of specifications of the employment and unemployment equations (using ordinary
least squares vs. generalized least squares regression procedures, and linear vs. logarithmic
specifications), they found that a 10 percent increase in the minimum wage caused a 1 percent
decrease in teenage employment, and no change in the teenage unemployment rate. The study
also found a small, but statistically significant, increase in unemployment for adults aged 20–24.
[44]
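A minimal sketch, on synthetic data, of the kind of logarithmic OLS specification described above; the variable names, control and coefficient values are illustrative assumptions, not Brown et al.'s actual model or data. The coefficient on log(minimum wage) is the employment elasticity, so an estimate near -0.1 corresponds to the reported "10 percent increase, roughly 1 percent decrease" relationship.

    # Sketch of a log-log employment equation estimated by OLS (synthetic data).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 120                                    # hypothetical quarterly observations
    log_min_wage = np.log(rng.uniform(2.0, 4.0, n))
    log_adult_emp = rng.normal(10.0, 0.05, n)  # stand-in control variable
    true_elasticity = -0.10
    log_teen_emp = (1.5 + true_elasticity * log_min_wage
                    + 0.8 * log_adult_emp + rng.normal(0, 0.02, n))

    # Ordinary least squares on [constant, log minimum wage, control]
    X = np.column_stack([np.ones(n), log_min_wage, log_adult_emp])
    beta, *_ = np.linalg.lstsq(X, log_teen_emp, rcond=None)
    print(f"estimated elasticity: {beta[1]:.3f}")
    print(f"implied effect of a 10% minimum-wage increase: {beta[1]*10:.1f}% "
          "change in teenage employment")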
Wellington (1991) updated Brown et al.'s research with data through 1986 to provide new
estimates encompassing a period when the real (i.e., inflation-adjusted) value of the minimum
wage was declining, due to the fact that it had not increased since 1981. She found that a 10%
increase in the minimum wage decreased teenage employment by 0.6 percentage points, with no
effect on either the teen or young adult unemployment rates.[45]
Some research suggests that the unemployment effects of small minimum wage increases are
dominated by other factors.[5] In Florida, where voters approved an increase in 2004, a
comprehensive follow-up study found a strong economy, with employment growth exceeding that of
previous years in Florida and that of the U.S. as a whole.[6]
[edit] Card and Krueger
In 1992, the minimum wage in New Jersey increased from $4.25 to $5.05 per hour (an 18.8%
increase) while the adjacent state of Pennsylvania remained at $4.25. David Card and Alan Krueger
gathered information on fast food restaurants in New Jersey and eastern Pennsylvania in an
attempt to see what effect this increase had on employment within New Jersey. Basic economic
theory would have implied that relative employment should have decreased in New Jersey. Card
and Krueger surveyed employers before the April 1992 New Jersey increase, and again in
November-December 1992, asking managers for data on the full-time equivalent staff level of
their restaurants both times.[46] Based on data from the employers' responses, the authors
concluded that the increase in the minimum wage increased employment in the New Jersey
restaurants.[47]
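The New Jersey/Pennsylvania design lends itself to a difference-in-differences comparison: the change in the treated state minus the change in the neighbouring control state. A minimal sketch follows; the full-time-equivalent (FTE) figures are hypothetical placeholders, not the study's survey data.

    # Difference-in-differences sketch of the NJ/PA comparison (placeholder numbers).
    def diff_in_diff(treat_before, treat_after, control_before, control_after):
        """DiD estimate: change in the treated state minus change in the control state."""
        return (treat_after - treat_before) - (control_after - control_before)

    nj_before, nj_after = 20.4, 21.0   # mean FTE per store, New Jersey (hypothetical)
    pa_before, pa_after = 23.3, 21.2   # mean FTE per store, Pennsylvania (hypothetical)

    effect = diff_in_diff(nj_before, nj_after, pa_before, pa_after)
    print(f"estimated employment effect of the wage increase: {effect:+.1f} FTE per store")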
Card and Krueger expanded on this initial article in their 1995 book Myth and Measurement:
The New Economics of the Minimum Wage (ISBN 0-691-04823-1). They argued that the negative
employment effects of minimum wage laws are minimal if not non-existent. For example, they
look at the 1992 increase in New Jersey's minimum wage, the 1988 rise in California's minimum
wage, and the 1990-91 increases in the federal minimum wage. In addition to their own findings,
they reanalyzed earlier studies with updated data, generally finding that the older results of a
negative employment effect did not hold up in the larger datasets.
Critics, however, argue that their research was flawed.[48] Subsequent attempts to verify the
claims requested payroll records from employers to verify employment, and found that the
minimum wage increases were followed by decreases in employment. On the other hand, an
assessment of data collected and analyzed by David Neumark and William Wascher did not
initially contradict the Card/Krueger results,[49] but in a later revised version they found that the
same general sample did show an increase in unemployment. The 18.8% wage hike resulted in
"[statistically] insignificant—although almost always negative" employment effects.[50]
Another possible explanation for why the current minimum wage laws may not affect
unemployment in the United States is that the minimum wage is set close to the equilibrium
point for low and unskilled workers. Thus in the absence of the minimum wage law unskilled
workers would be paid approximately the same amount. However, an increase above this
equilibrium point could likely bring about increased unemployment for the low and unskilled
workers.[15]
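A small sketch of the equilibrium argument above, using an assumed linear supply and demand for unskilled labor (the parameters are hypothetical): a floor at or below the market-clearing wage leaves employment unchanged, while a floor above it reduces employment and creates excess supply.

    # Assumed linear labor market: demand L_d = 100 - 8w, supply L_s = 20 + 12w.
    # Market-clearing wage: 100 - 8w = 20 + 12w  ->  w* = 4.0, L* = 68.
    def employment_under_floor(wage_floor):
        demand = lambda w: 100 - 8 * w
        supply = lambda w: 20 + 12 * w
        w_eq = 80 / 20                        # market-clearing wage = 4.0
        w = w_eq if wage_floor <= w_eq else wage_floor   # floor binds only above w_eq
        employed = min(demand(w), supply(w))  # short side of the market is hired
        unemployed = max(supply(w) - demand(w), 0)
        return w, employed, unemployed

    for floor in (3.5, 4.0, 5.0):
        print("floor", floor, "->", employment_under_floor(floor))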
[edit] Reaction to Card and Krueger
Some leading economists such as Greg Mankiw, Kevin M. Murphy and Nobel laureate Gary Becker
do not accept the Card/Krueger results,[51][52] while some others, like Nobel laureates Paul
Krugman[53] and Joseph Stiglitz do accept them as correct.[54][55]
According to economists Donald Deere (Texas A&M), Kevin Murphy (University of Chicago), and
Finis Welch (Texas A&M), Card and Krueger's conclusions are contradicted by "common sense
and past research". They conclude that:[56]
Each of the four studies examines a different piece of the minimum
wage/employment relationship. Three of them consider a single state, and two of
them look at only a handful of firms in one industry. From these isolated findings
Card and Krueger paint a big picture wherein increased minimum wages do not
decrease, and may increase, employment. Our view is that there is something
wrong with this picture. Artificial increases in the price of unskilled laborers
inevitably lead to their reduced employment; the conventional wisdom remains
intact.
Nobel laureate James M. Buchanan responded to the Card and Krueger study in the Wall Street
Journal, arguing:[57]
...no self-respecting economist would claim that increases in the minimum wage
increase employment. Such a claim, if seriously advanced, becomes equivalent to a
denial that there is even minimum scientific content in economics, and that, in
consequence, economists can do nothing but write as advocates for ideological
interests. Fortunately, only a handful of economists are willing to throw over the
teaching of two centuries; we have not yet become a bevy of camp-following
whores.
Nobel laureate Paul Krugman has argued in favour of the Card and Krueger result, stating that
Card and Krueger:[59]
... found no evidence that minimum wage increases in the range that the United
States has experienced led to job losses. Their work has been attacked because it
seems to contradict Econ 101 and because it was ideologically disturbing to many.
Yet it has stood up very well to repeated challenges, and new cases confirming its
results keep coming in.
"I warn you, Sir! The discourtesy of this bank is beyond all limits. One word more
and I — I withdraw my overdraft!"
An overdraft occurs when withdrawals from a bank account exceed the available balance. In this
situation a person is said to be "overdrawn".
If there is a prior agreement with the account provider for an overdraft protection plan, and the
amount overdrawn is within this authorised overdraft limit, then interest is normally charged at
the agreed rate. If the balance exceeds the agreed terms, then fees may be charged and higher
interest rate might apply.
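A minimal sketch of how the arranged/unarranged split described above might translate into daily charges; the limit, interest rates and fee are hypothetical examples, not any particular bank's terms.

    # Daily overdraft charge under a hypothetical authorised limit and rates.
    def daily_overdraft_charge(balance, authorised_limit=500.0,
                               agreed_apr=0.15, unauthorised_apr=0.30,
                               excess_fee=5.0):
        """Return (interest, fee) for one day on a given end-of-day balance."""
        if balance >= 0:
            return 0.0, 0.0
        overdrawn = -balance
        if overdrawn <= authorised_limit:
            # within the agreed limit: interest at the agreed rate, no fee
            return overdrawn * agreed_apr / 365, 0.0
        # portion within the limit accrues the agreed rate; the excess accrues
        # the higher rate and attracts a fee
        within = authorised_limit * agreed_apr / 365
        excess = (overdrawn - authorised_limit) * unauthorised_apr / 365
        return within + excess, excess_fee

    print(daily_overdraft_charge(-300))   # within the authorised limit
    print(daily_overdraft_charge(-800))   # exceeds the authorised limit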
Contents
[hide]
• 1 History of the overdraft
• 2 Reasons for overdrafts
• 3 United Kingdom
○ 3.1 Overdraft protection in the UK
3.1.1 Amount of fees
3.1.2 Legal status and controversy
• 4 United States
○ 4.1 Overdraft protection in the US
4.1.1 Ad-hoc coverage of overdrafts
4.1.2 Overdraft lines of credit
4.1.3 Linked accounts
4.1.4 Bounce protection plans
○ 4.2 Industry statistics
○ 4.3 Transaction processing order
○ 4.4 Proposed legislation
• 5 See also
• 6 References
In 2006 the Office of Fair Trading issued a statement which concluded that credit card issuers were
levying penalty charges when customers exceeded their maximum spend limit and/or made late
payments to their accounts. In the statement, the OFT recommended that credit card issuers set
such fees at a maximum of 12 UK pounds.[2]
In the statement, the OFT opined that the fees charged by credit card issuers were analogous to
unauthorized overdraft fees charged by banks. Many customers who have incurred unauthorized
overdraft fees have used this statement as a springboard to sue their banks in order to recover the
fees. It is currently thought that the England and Wales county courts are flooded with such
claims.[3] Claimants tend frequently to be assisted by web sites such as The Consumer Action
Group.[4] To date, many banks have not appeared in court to justify their unauthorized overdraft
charging structures, and many customers have recovered such charges in full.[5] However, there
have been cases where the courts have ruled in favor of the banks or struck out claims brought by
customers who had not adequately made a case against their bank.[6]
[edit] United States
[edit] Overdraft protection in the US
Overdraft protection is a financial service offered by banking institutions primarily in the United
States. Overdraft or courtesy pay program protection pays items presented to a customer's
account when sufficient funds are not present to cover the amount of the withdrawal. Overdraft
protection can cover ATM withdrawals, purchases made with a debit card, electronic transfers,
and checks. In the case of non-preauthorized items such as checks, or ACH withdrawals,
overdraft protection allows for these items to be paid as opposed to being returned unpaid, or
bouncing. However, ATM withdrawals and purchases made with a debit or check card are
considered preauthorized and must be paid by the bank when presented, even if this causes an
overdraft.
[edit] Ad-hoc coverage of overdrafts
Traditionally, the manager of a bank would look at the bank's list of overdrafts each day. If the
manager saw that a favored customer had incurred an overdraft, they had the discretion to pay
the overdraft for the customer. Banks traditionally did not charge for this ad-hoc coverage.
However, it was fully discretionary, and so could not be depended on. With the advent of large-
scale interstate branch banking, traditional ad-hoc coverage has practically disappeared.
The one exception to this is so-called "force pay" lists. At the beginning of each business day,
branch managers often still get a computerized list of items that are pending rejection, only for
accounts held in their specific branch, city or state. Generally, if a customer is able to come into
the branch with cash or make a transfer to cover the amount of the item pending rejection, the
manager can "force pay" the item. In addition, if there are extenuating circumstances or the item
in question is from an account held by a regular customer, the manager may take a risk by paying
the item, but this is increasingly uncommon. Banks have a cut-off time by which this action must
take place; after that time, the item automatically switches from "pending rejection" to
"rejected," and no further action may be taken.
[edit] Overdraft lines of credit
This form of overdraft protection is a contractual relationship in which the bank promises to pay
overdrafts up to a certain dollar limit. A consumer who wants an overdraft line of credit must
complete and sign an application, after which the bank checks the consumer's credit and
approves or denies the application. Overdraft lines of credit are loans and must comply with the
Truth in Lending Act. As with linked accounts, banks typically charge a nominal fee per overdraft,
and also charge interest on the outstanding balance. Some banks charge a small monthly fee
regardless of whether the line of credit is used. This form of overdraft protection is available to
consumers who meet the creditworthiness criteria established by the bank for such accounts.
Once the line of credit is established, the available credit may be visible as part of the customer's
available balance.
[edit] Linked accounts
Also referred to as "Overdraft Transfer Protection", a checking account can be linked to another
account, such as a savings account, credit card, or line of credit. Once the link is established,
when an item is presented to the checking account that would result in an overdraft, funds are
transferred from the linked account to cover the overdraft. A nominal fee is usually charged for
each overdraft transfer, and if the linked account is a credit card or other line of credit, the
consumer may be required to pay interest under the terms of that account.
The main difference between linked accounts and an overdraft line of credit is that an overdraft
line of credit is typically only usable for overdraft protection. Separate accounts that are linked
for overdraft protection are independent accounts in their own right.
[edit] Bounce protection plans
A more recent product being offered by some banks is called "bounce protection."
Smaller banks offer plans administered by third party companies which help the banks gain
additional fee income.[7] Larger banks tend not to offer bounce protection plans, but instead
process overdrafts as disclosed in their account terms and conditions.
In either case, the bank may choose to cover overdrawn items at their discretion and charge an
overdraft fee, the amount of which may or may not be disclosed. As opposed to traditional ad-
hoc coverage, this decision to pay or not pay overdrawn items is automated and based on
objective criteria such as the customer's average balance, the overdraft history of the account, the
number of accounts the customer holds with the bank, and the length of time those accounts have
been open.[8] However, the bank does not promise to pay the overdraft even if the automated
criteria are met.
Bounce protection plans have some superficial similarities to overdraft lines of credit and ad-hoc
coverage of overdrafts, but tend to operate under different rules. Like an overdraft line of credit,
the balance of the bounce protection plan may be viewable as part of the customer's available
balance, yet the bank reserves the right to refuse payment of an overdrawn item, as with
traditional ad-hoc coverage. Banks typically charge a one-time fee for each overdraft paid. A
bank may also charge a recurring daily fee for each day during which the account has a negative
balance.
Critics argue that because funds are advanced to a consumer and repayment is expected, bounce
protection is a type of loan.[9] Because banks are not contractually obligated to cover the
overdrafts, "bounce protection" is not regulated by the Truth in Lending Act, which prohibits
certain deceptive advertisements and requires disclosure of the terms of loans. Historically,
bounce protection could be added to a consumer's account without his or her permission or
knowledge.
In May 2005, Regulation DD of the Truth in Savings Act was amended to require that banks
offering "bounce protection" plans provide certain disclosures to their customers. These
amendments include requirements to disclose the types of transaction that may cause bounce
protection to be triggered, the fees associated with bounce protection, separate statement
categories to enumerate the number of fees charged, and restrictions on the marketing of bounce
protection programs to deter misleading advertisements. These disclosures are already provided
by larger banks which process overdrafts according to their terms and conditions.
[edit] Industry statistics
U.S. banks are projected to collect over $38.5 billion in overdraft fees for 2009, nearly double
compared to 2000.[10]
[edit] Transaction processing order
An area of controversy with regards to overdraft fees is the order in which a bank posts
transactions to a customer's account. This is controversial because largest to smallest processing
tends to maximize overdraft occurrences on a customer's account. This situation can arise when
the account holder makes a number of small debits for which there are sufficient funds in the
account at the time of purchase. Later, the account holder makes a large debit that overdraws the
account (either accidentally or intentionally). If all of the items present for payment to the
account on the same day, and the bank processes the largest transaction first, multiple overdrafts
can result.
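A short simulation of the posting-order effect described above, with a hypothetical starting balance, debit amounts and per-item fee; posting the largest debit first turns a single overdraft into several.

    # Count overdraft fees under different posting orders (hypothetical figures).
    def count_overdrafts(balance, debits, fee=35.0):
        """Post debits in the given order; charge a fee each time the balance goes negative."""
        overdrafts = 0
        for amount in debits:
            balance -= amount
            if balance < 0:
                overdrafts += 1
                balance -= fee
        return overdrafts

    debits = [4.50, 7.25, 12.00, 3.10, 600.00]   # four small purchases, then one large one
    print("chronological (largest last):", count_overdrafts(500.0, debits))             # 1 fee
    print("largest first:              ", count_overdrafts(500.0, sorted(debits, reverse=True)))  # 5 fees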
The "biggest check first" policy is common among large U.S. banks.[11] Banks argue that this is
done to prevent a customer's most important transactions (such as a rent or mortgage check, or
utility payment) from being returned unpaid, despite some such transactions being guaranteed.
Consumers have attempted to litigate to prevent this practice, arguing that banks use "biggest
check first" to manipulate the order of transactions to artificially trigger more overdraft fees to
collect. Banks in the United States are mostly regulated by the Office of the Comptroller of the
Currency, a Federal agency, which has formally approved of the practice; the practice has
recently been challenged, however, under numerous individual state deceptive practice laws.[12]
Bank deposit agreements usually provide that the bank may clear transactions in any order, at the
bank's discretion.[13]
Global Depository Receipt
From Wikipedia, the free encyclopedia
Jump to:navigation, search
Foreign direct investment (FDI) refers to long-term participation by country A in country B.
It usually involves participation in management, joint ventures, and transfer of technology and expertise.
There are two types of FDI: inward foreign direct investment and outward foreign direct
investment, resulting in a net FDI inflow (positive or negative).
[edit] History
Foreign direct investment (FDI) is a measure of foreign ownership of productive assets, such as
factories, mines and land. Increasing foreign investment can be used as one measure of growing
economic globalization. Net inflows of foreign direct investment as a percentage of gross
domestic product (GDP) are a commonly used indicator. The largest flows of foreign investment
occur between the industrialized countries (North America, Western Europe and Japan), but flows
to non-industrialized countries are increasing sharply.
US International Direct Investment Flows:[1]
Period     FDI Outflow    FDI Inflows    Net
1960-69    $ 42.18 bn     $ 5.13 bn      + $ 37.04 bn
[edit] Types
A foreign direct investor may be classified in any sector of the economy and could be any one of
the following:[citation needed]
• an individual;
• a group of related individuals;
• an incorporated or unincorporated entity;
• a public company or private company;
• a group of related enterprises;
• a government body;
• an estate (law), trust or other societal organisation; or
• any combination of the above.
[edit] Methods
The foreign direct investor may acquire 10% or more of the voting power of an enterprise in an
economy through any of the following methods:
• by incorporating a wholly owned subsidiary or company
• by acquiring shares in an associated enterprise
• through a merger or an acquisition of an unrelated enterprise
• participating in an equity joint venture with another investor or enterprise
Foreign direct investment incentives may take the following forms:[citation needed]
• low corporate tax and income tax rates
• tax holidays
• other types of tax concessions
• preferential tariffs
• special economic zones
• EPZ - Export Processing Zones
• Bonded Warehouses
• Maquiladoras
• investment financial subsidies
• soft loan or loan guarantees
• free land or land subsidies
• relocation & expatriation subsidies
• job training & employment subsidies
• infrastructure subsidies
• R&D support
• derogation from regulations (usually for very large projects)
The term foreign institutional investment denotes all those investors or investment companies
that are not located within the territory of the country in which they are investing. These are, in
effect, outsiders in the financial markets of the particular country. Foreign institutional
investment is a common term in the financial sector of India.
The types of institutions involved in foreign institutional investment include:
• Mutual funds
• Hedge funds
• Pension funds
• Insurance companies
Rapidly growing economies such as India are becoming favoured destinations for foreign
institutional investors. These markets have the potential to grow in the near future, which is the
prime reason behind the growing interest of foreign investors. The promise of rapid growth of the
invested funds attracts investors, and so they are coming in large numbers to these countries. The
money that comes in through foreign institutional investment is referred to as 'hot money'
because it can be withdrawn from the market at any time by these investors.
The foreign investment market was not well developed in the past. But as globalization took
hold, the diversified global markets became more integrated. Because of this, the investment
sector grew stronger and at the same time allowed foreigners to enter national financial markets.
At the same time, developing countries recognized the value of foreign investment and allowed
both foreign direct investment and foreign institutional investment in their financial markets.
Although foreign direct investments are long-term investments, foreign institutional investments
are unpredictable. The Securities and Exchange Board of India (SEBI) oversees foreign
institutional investments in India and has imposed several rules and regulations on them.
Price skimming
From Wikipedia, the free encyclopedia
Jump to:navigation, search
Price skimming is a pricing strategy in which a marketer sets a relatively high price for a product or
service at first, then lowers the price over time. It is a temporal version of price discrimination/yield
management. It allows the firm to recover its sunk costs quickly before competition steps in and
lowers the market price.
Price skimming is sometimes referred to as riding down the demand curve. The objective of a
price skimming strategy is to capture the consumer surplus. If this is done successfully, then
theoretically no customer will pay less for the product than the maximum they are willing to pay.
In practice, it is almost impossible for a firm to capture all of this surplus.
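A small sketch of the surplus-capture idea, under an assumed linear demand curve and the simplifying assumption that each group of buyers purchases at the first price at or below its willingness to pay; the numbers are illustrative only, not drawn from the text.

    # Assumed linear demand: quantity demanded = 100 - price (hypothetical units).
    def demand_quantity(price):
        return max(100 - price, 0)            # buyers willing to pay at least `price`

    def revenue_single_price(price):
        return price * demand_quantity(price)

    def revenue_skimming(prices):
        """Revenue when the price is stepped down through `prices` (highest first)."""
        revenue, already_sold = 0.0, 0
        for p in sorted(prices, reverse=True):
            q = demand_quantity(p)
            revenue += p * (q - already_sold)  # only the new buyers pay this price
            already_sold = q
        return revenue

    print("single price of 50:     ", revenue_single_price(50))           # 2500
    print("skim at 80, 60, 40, 20: ", revenue_skimming([80, 60, 40, 20]))  # 4000

Stepping the price down captures more of the consumer surplus than any single uniform price, which is the intuition behind "riding down the demand curve".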
[edit] Limitations of Price Skimming
There are several potential problems with this strategy.
• It is effective only when the firm is facing an inelastic demand curve. If the long-run
demand schedule is elastic, market equilibrium will be achieved by quantity changes
rather than price changes. Penetration pricing is a more suitable strategy in this case. Price changes by any
one firm will be matched by other firms resulting in a rapid growth in industry
volume. Dominant market share will typically be obtained by a low cost
producer that pursues a penetration strategy.
• A price skimmer must be careful with the law. Price discrimination is illegal in
many jurisdictions, but yield management is not. Price skimming can be
considered either a form of price discrimination or a form of yield
management. Price discrimination uses market characteristics (such as price
elasticity) to adjust prices, whereas yield management uses product
characteristics. Marketers see this legal distinction as quaint since in almost
all cases market characteristics correlate highly with product characteristics.
If using a skimming strategy, a marketer must speak and think in terms of
product characteristics in order to stay on the right side of the law.
• The inventory turn rate can be very low for skimmed products. This could cause
problems for the manufacturer's distribution chain. It may be necessary to
give retailers higher margins to convince them to handle the product
enthusiastically.
• Skimming encourages the entry of competitors. When other firms see the high
margins available in the industry, they will quickly enter.
• Skimming results in a slow rate of product diffusion and adoption. This results in
a high level of untapped demand. This gives competitors time to either
imitate the product or leapfrog it with a new innovation. If competitors do
this, the window of opportunity will have been lost.
• The manufacturer could attract negative publicity if it lowers the price too
fast and without significant product changes. Some early purchasers will feel
they have been ripped off; they will feel it would have been better to wait
and purchase the product at a much lower price. This negative sentiment will
be transferred to the brand and the company as a whole.
• High margins may make the firm inefficient. There will be less incentive to
keep costs under control. Inefficient practices will become established,
making it difficult to compete on value or price.
Writing in Basic Marketing, E. Jerome McCarthy and William Perreault Jr. observed
that "a penetration pricing policy tries to sell the whole market at one low price. Such an
approach might be wise when the 'elite' market—those willing to pay a high price—is
small. This is the case when the whole demand curve [for the product] is fairly elastic. A
penetration policy is even more attractive if selling larger quantities results in lower
costs because of economies of scale. Penetration pricing may be wise if the firm expects
strong competition very soon after introduction. A low penetration price may be called a
'stay out' price. It discourages competitors from entering the market." Once the product
has secured a desired market share, its producers can then review business conditions
and decide whether to gradually increase the price.
Penetration pricing, however, is not the same as introductory price dealing, in which
marketers attach temporary low prices to new products when they first hit the market.
"These temporary price cuts should not be confused with low penetration prices," wrote
McCarthy and Perreault Jr. "The plan [with introductory price dealing] is to raise prices
as soon as the introductory offer is over."
Lockout (industry)
From Wikipedia, the free encyclopedia
A lockout is a work stoppage in which an employer prevents employees from working. This is
different from a strike, in which employees refuse to work.
[edit] Causes
A lockout may happen for several reasons. When only part of a trade union votes to strike, the
purpose of a lockout is to put pressure on a union by reducing the number of members who are
able to work. For example, if the anticipated strike severely hampers the work of non-striking
workers, the employer may declare a lockout until the workers end the strike.
Another case in which an employer may impose a lockout is to avoid slowdowns or intermittent
work-stoppages.
Other times, particularly in the United States, a lockout occurs when union membership rejects
the company's final offer at negotiations and offers to return to work under the same conditions
of employment as existed under the now-expired contract. In such a case, the lockout is designed
to pressure the workers into accepting the terms of the company's last offer.
[edit] Lock-in
The term lock-in refers to the practice of physically preventing workers from leaving a
workplace. In most jurisdictions this is illegal but is occasionally reported, especially in some
developing countries.[citation needed]
More recently, lock-ins have been carried out by employees against management, which have
been labelled 'bossnapping' by the mainstream media. In France during March 2009, 3M's national
manager was locked in his office for 24 hours by employees in a dispute over redundancies.[1][2][3]
The following month, employees of a call centre managed by Synovate in Auckland locked the
front doors of the office, in response to management locking them out.[4] Such practices bear
mild resemblance to the gherao in India.
[edit] Ireland
Cartoon showing the depth of ill feeling caused by the Dublin Lockout.
The Dublin Lockout (Irish: Frithdhúnadh Mór Bhaile-Átha-Cliath) was a major industrial dispute
between approximately 20,000 workers and 300 employers which took place in Ireland's capital
city of Dublin. The dispute lasted from 26 August 1913 to 18 January 1914, and is often viewed
as the most severe and significant industrial dispute in Irish history. Central to the dispute was the
workers' right to unionize.
[edit] United States
In the United States, under Federal labor law, an employer may hire only temporary replacements
during a lockout. In a strike, unless it is an unfair labor practice (ULP) strike, an employer may
legally hire permanent replacements. Also, in many U.S. states, employees who are locked-out
are eligible to receive unemployment benefits, but are not eligible for such benefits during a strike.
[citation needed]
For the above reasons, many American employers have historically been reluctant to impose
lockouts, instead attempting to provoke a strike. However, as American unions have increasingly
begun to resort to slowdowns rather than strikes, lockouts have come "back in fashion" for many
employers, and even as the incidence of strikes declines, the incidence of lockouts is on the rise
in the U.S.[citation needed]
Recent notable lockout incidents have been reported in professional sports, notably involving the
National Basketball Association in the 1998–99 season and the National Hockey League in the 1994–
95 and 2004–05 seasons.
Layoff
From Wikipedia, the free encyclopedia
Jump to:navigation, search
[edit] Etymology
Euphemisms are often used to "soften the blow" in the process of firing and being fired
(Wilkinson 2005; Redman and Wilkinson 2006), including "downsize", "excess", "rightsize",
"delayering", "smartsize", "redeployment", "workforce reduction", "workforce optimization",
"simplification", "force shaping", "recussion", and "reduction in force" (also called a "RIF",
especially in the government employment sector). "Mass layoff" implies laying off a large
number of workers. "Attrition" implies that positions will be eliminated as workers quit or retire.
"Early retirement" means workers may quit now yet still remain eligible for their retirement
benefits later. While "redundancy" is a specific legal term in UK labour law, it may be perceived as
obfuscation. Firings imply misconduct or failure while lay-offs imply economic forces beyond
one's control.
[edit] Unemployment compensation
The method of separation may have an effect on a former employee's ability to collect whatever
form of unemployment compensation might be available in their jurisdiction. In many U.S. states,
workers who are laid off can file an unemployment claim and receive compensation. Depending
on local or state laws, workers who leave voluntarily are generally ineligible to collect
unemployment benefits, as are those who are fired for gross misconduct. Also, lay-offs due to a
firm's moving production overseas may entitle one to increased re-training benefits.
Certain countries (e.g., France and Germany) distinguish between leaving the company of one's
own free will, in which case the person is not entitled to unemployment benefits, and leaving
voluntarily as part of a RIF, in which case the person is entitled to them. A RIF reduces
the number of positions, rather than laying off specific people, and is usually
accompanied by internal redeployment. A person might leave even if their job isn't reduced,
unless the employer has strong objections. In this situation, it's more beneficial for the state to
facilitate the departure of the more professionally active people, since they are less likely to
remain jobless. Often they find new jobs while still being paid by their old companies, costing
nothing to the social security system in the end.
There have also been increasing concerns about the organisational effectiveness of the post-
downsized ‘anorexic organisation’. The benefits, which organisations claim to be seeking from
downsizing, centre on savings in labour costs, speedier decision making, better communication,
reduced product development time, enhanced involvement of employees and greater
responsiveness to customers (De Meuse et al. 1997, p. 168). However, some writers draw
attention to the ‘obsessive’ pursuit of downsizing to the point of self-starvation marked by
excessive cost cutting, organ failure and an extreme pathological fear of becoming inefficient.
Hence ‘trimming’ and ‘tightening belts’ are the order of the day (Tyler and Wilkinson 2007).
[edit] Derivative terms
Downsizing has come to mean much more than job losses, as the word downsize may now be
applied to almost everything. People describe downsizing their cars, houses and nearly anything
else that can be measured or valued.
This has also spawned the opposite term upsize, which means to grow, expand or purchase
something larger.
Closure (business)
Closure is the term used to refer to the actions necessary when it is no longer necessary or
possible for a business or other organization to continue to operate. Closure may be the result of a
bankruptcy, where the organization lacks sufficient funds to continue operations, as a result of the
proprietor of the business dying, as a result of a business being purchased by another
organization (or a competitor) and shut down as superfluous, or because it is the non-surviving
entity in a corporate merger. A closure may occur because the purpose for which the organization
was created is no longer necessary.
While a closure is typically of a business or a non-profit organization, any entity which is created by
human beings can be subject to a closure, from a single church to a whole religion, up to and
including an entire country if, for some reason, it ceases to exist.
Closures are of two types: voluntary or involuntary. Voluntary closures of organizations are
much rarer than involuntary ones because, in the absence of some change making operations
impossible or unnecessary, most organizations will continue to operate until something happens
that forces them to close.
The most common form of voluntary closure would be when a group of people decide to start
some organization such as a social club, a band, or a non-profit organization, then at some point
those involved decide to quit. If the organization has no outstanding debts or pending operations
to finish, closure may consist of nothing more than the informal organization ceasing to exist.
This is referred to as the organizers walking away from the organization.
If an organization has debts that cannot be paid, it may be necessary to perform liquidation of its
assets. If there is anything left after the assets are converted to cash, in the case of a for-profit
organization, the remainder is distributed to the stockholders; in the case of a non-profit, by law
any remaining assets must be distributed to another non-profit.
If an organization has more debts than assets, it may have to declare bankruptcy. If the
organization has viability, it reorganizes itself as a result of the bankruptcy and continues
operations. If it is not viable for the business to continue operating, then a closure occurs through
a bankruptcy liquidation: its assets are liquidated, the creditors are paid from whatever assets
could be liquidated, and the business ceases operations.
Possibly the largest "closure" in history was the destruction of the Soviet Union into the composite
countries that represented it. In comparison, the end of East Germany can be considered a merger
rather than a closure as West Germany assumed all of the assets and liabilities of East Germany.
The end of the Soviet Union was the equivalent of a closure through a bankruptcy liquidation,
because while Russia assumed most of the assets and responsibilities of the former Soviet Union,
it did not assume all of them. There have been issues over who is responsible for unpaid parking
tickets accumulated by motor vehicles operated on behalf of diplomatic missions operated by the
former Soviet Union in other countries, as Russia claims it is not responsible for them.
Major business closures include the bankruptcy of the Penn Central railroad, the Enron
scandal, and MCI WorldCom's bankruptcy and eventual merger into Verizon.
Two-factor theory
From Wikipedia, the free encyclopedia
Jump to: navigation, search
For Schachter's two factor theory of emotion, see Two factor theory of emotion.
It has been suggested that Hygiene factors be merged into this article or
section. (Discuss)
The two-factor theory (also known as Herzberg's motivation-hygiene theory) states that there
are certain factors in the workplace that cause job satisfaction, while a separate set of factors cause
dissatisfaction. It was developed by Frederick Herzberg, a psychologist, who theorized that job
satisfaction and job dissatisfaction act independently of each other.[1]
Contents
[hide]
• 1 Two-factor theory
fundamentals
• 2 Validity and criticisms
• 3 Implications for management
• 4 References
• 5 Further reading
• 6 External links
[edit] Two-factor theory fundamentals
Attitudes and their connection with industrial mental health are related to Maslow's theory of
motivation. Herzberg's findings have had a considerable theoretical, as well as a practical, influence
on attitudes toward administration.[2] According to Herzberg, individuals are not content with the
satisfaction of lower-order needs at work, for example, those associated with minimum salary
levels or safe and pleasant working conditions. Rather, individuals look for the gratification of
higher-level psychological needs having to do with achievement, recognition, responsibility,
advancement, and the nature of the work itself. So far, this appears to parallel Maslow's theory of
a need hierarchy. However, Herzberg added a new dimension to this theory by proposing a two-
factor model of motivation, based on the notion that the presence of one set of job characteristics
or incentives leads to worker satisfaction at work, while another and separate set of job
characteristics leads to dissatisfaction at work. Thus, satisfaction and dissatisfaction are not on a
continuum with one increasing as the other diminishes, but are independent phenomena. This
theory suggests that to improve job attitudes and productivity, administrators must recognize and
attend to both sets of characteristics and not assume that an increase in satisfaction leads to
decrease in unpleasurable dissatisfaction.
The two-factor, or motivation-hygiene theory, developed from data collected by Herzberg from
interviews with a large number of engineers and accountants in the Pittsburgh area. From
analyzing these interviews, he found that job characteristics related to what an individual does —
that is, to the nature of the work he performs — apparently have the capacity to gratify such
needs as achievement, competency, status, personal worth, and self-realization, thus making him
happy and satisfied. However, the absence of such gratifying job characteristics does not appear
to lead to unhappiness and dissatisfaction. Instead, dissatisfaction results from unfavorable
assessments of such job-related factors as company policies, supervision, technical problems,
salary, interpersonal relations on the job, and working conditions. Thus, if management wishes to
increase satisfaction on the job, it should be concerned with the nature of the work itself — the
opportunities it presents for gaining status, assuming responsibility, and for achieving self-
realization. If, on the other hand, management wishes to reduce dissatisfaction, then it must focus
on the job environment — policies, procedures, supervision, and working conditions[1]. If
management is equally concerned with both (as is usually the case), then managers must give
attention to both sets of job factors.
The theory was based around interviews with 203 American accountants & engineers in
Pittsburgh, chosen because of their professions' growing importance in the business world. The
subjects were asked to relate times when they felt exceptionally good or bad about their present
job or any previous job, and to provide reasons, and a description of the sequence of events
giving rise to that positive or negative feeling.
Here is the description of this interview analysis:
Briefly, we asked our respondents to describe periods in their lives when they were exceedingly
happy and unhappy with their jobs. Each respondent gave as many "sequences of events" as he
could that met certain criteria—including a marked change in feeling, a beginning and an end,
and contained some substantive description other than feelings and interpretations…
The proposed hypothesis appears verified. The factors on the right that led to satisfaction
(achievement, intrinsic interest in the work, responsibility, and advancement) are mostly
unipolar; that is, they contribute very little to job dissatisfaction. Conversely, the dis-satisfiers
(company policy and administrative practices, supervision, interpersonal relationships, working
conditions, and salary) contribute very little to job satisfaction[3].
Two-factor theory distinguishes between:
• Motivators (e.g., challenging work, recognition, responsibility) that give
positive satisfaction, arising from intrinsic conditions of the job itself, such as
recognition, achievement, or personal growth[4], and
• Hygiene factors (e.g. status, job security, salary and fringe benefits) that do not
give positive satisfaction, though dissatisfaction results from their absence.
These are extrinsic to the work itself, and include aspects such as company
policies, supervisory practices, or wages/salary[4].
Essentially, hygiene factors are needed to ensure an employee is not dissatisfied, while motivation
factors are needed to motivate an employee to higher performance. Herzberg also further
classified our actions and how and why we do them: for example, if you perform a work-related
action because you have to, that is classed as movement, but if you perform a work-related
action because you want to, that is classed as motivation.
Unlike Maslow, who offered little data to support his ideas, Herzberg and others have presented
considerable empirical evidence to confirm the motivation-hygiene theory, although their work
has been criticized on methodological grounds.
[edit] Validity and criticisms
In 1968 Herzberg stated that his two-factor theory study had already been replicated 16 times in
a wide variety of populations, including some in Communist countries, and corroborated by
studies using different procedures that agreed with his original findings regarding intrinsic
employee motivation, making it one of the most widely replicated studies on job attitudes.
While the Motivator-Hygiene concept is still well regarded, satisfaction and dissatisfaction are
generally no longer considered to exist on separate scales. The separation of satisfaction and
dissatisfaction has been shown to be an artifact of the Critical Incident Technique (CIT) used by
Herzberg to record events.[5] Furthermore, it has been noted that the theory does not allow for
individual differences, such as particular personality traits, which would affect individuals'
unique responses to motivating or hygiene factors.[4]
A number of behavioral scientists have pointed to inadequacies in the need hierarchy and
motivation-hygiene theories. The most basic is the criticism that both of these theories contain
the relatively explicit assumption that happy and satisfied workers produce more. Another
problem is that these and other statistical theories are concerned with explaining "average"
behavior, despite considerable differences between individuals. For example, if playing a better
game of golf is the means an individual chooses to satisfy his need for recognition, then he will
find ways to play and think about golf more often, perhaps resulting in an accompanying lower
output on the job. Finally, in his pursuit of status he might take a balanced view and strive to
pursue several behavioral paths in an effort to achieve a combination of personal status objectives.
In other words, this individual's expectation or estimated probability that a given behavior will
bring a valued outcome determines his choice of means and the effort he will devote to these
means. In effect, the expectancy model depicts an employee asking himself the question
posed by one investigator, "How much payoff is there for me toward attaining a personal goal
while expending so much effort toward the achievement of an assigned organizational
objective?" [6] The Expectancy theory by Victor Vroom also provides a framework for motivation
based on expectations.
This approach to the study and understanding of motivation would appear to have certain
conceptual advantages over other theories: First, unlike Maslow's and Herzberg's theories, it is
capable of handling individual differences. Second, its focus is toward the present and the future,
in contrast to drive theory, which emphasizes past learning. Third, it specifically relates
behavior to a goal and thus eliminates the problem of assumed relationships, such as between
motivation and performance. Fourth, it relates motivation to ability: Performance =
Motivation × Ability.
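A minimal sketch combining the expectancy framing above with the Performance = Motivation × Ability relation. The scores are hypothetical values on a 0-1 scale; the multiplicative motivation term follows Vroom's expectancy-instrumentality-valence formulation mentioned in the text.

    # Vroom-style motivational force times ability, with hypothetical scores.
    def motivation(expectancy, instrumentality, valence):
        """Effort->performance belief * performance->reward belief * value of the reward."""
        return expectancy * instrumentality * valence

    def predicted_performance(expectancy, instrumentality, valence, ability):
        return motivation(expectancy, instrumentality, valence) * ability

    # An able employee who doubts the reward will follow performance:
    print(predicted_performance(expectancy=0.9, instrumentality=0.3, valence=0.8, ability=0.9))
    # The same employee once the performance-reward link is made credible:
    print(predicted_performance(expectancy=0.9, instrumentality=0.9, valence=0.8, ability=0.9))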
That said, a study by the Gallup Organization, as detailed in the book "First, Break All the Rules:
What the World's Greatest Managers Do" by Marcus Buckingham and Curt Coffman, appears to
provide strong support for Herzberg's division of satisfaction and dissatisfaction onto two
separate scales. In this book, the authors discuss how the study identified twelve questions that
provide a framework for determining high-performing individuals and organizations. These
twelve questions align squarely with Herzberg's motivation factors, while hygiene factors were
determined to have little effect on motivating high performance.
To better understand employee attitudes and motivation, Frederick Herzberg performed studies
to determine which factors in an employee's work environment caused satisfaction or
dissatisfaction. He published his findings in the 1959 book The Motivation to Work.
The studies included interviews in which employees were asked what pleased and displeased
them about their work. Herzberg found that the factors causing job satisfaction (and presumably
motivation) were different from those causing job dissatisfaction. He developed the motivation-
hygiene theory to explain these results. He called the satisfiers motivators and the dissatisfiers
hygiene factors, using the term "hygiene" in the sense that they are considered maintenance
factors that are necessary to avoid dissatisfaction but that by themselves do not provide
satisfaction.
The following table presents the leading factors causing satisfaction and the leading factors
causing dissatisfaction, listed in order from higher to lower importance.
Leading to satisfaction:
• Achievement
• Recognition
• Work itself
• Responsibility
• Advancement
• Growth
Leading to dissatisfaction:
• Company policy
• Supervision
• Relationship with boss
• Work conditions
• Salary
• Relationship with peers
• Security
Herzberg reasoned that because the factors causing satisfaction are different from those causing
dissatisfaction, the two feelings cannot simply be treated as opposites of one another. The
opposite of satisfaction is not dissatisfaction, but rather, no satisfaction. Similarly, the opposite
of dissatisfaction is no dissatisfaction.
While at first glance this distinction between the two opposites may sound like a play on words,
Herzberg argued that there are two distinct human needs portrayed. First, there are physiological
needs that can be fulfilled by money, for example, to purchase food and shelter. Second, there is
the psychological need to achieve and grow, and this need is fulfilled by activities that cause one
to grow.
From the above table of results, one observes that the factors that determine whether there is
dissatisfaction or no dissatisfaction are not part of the work itself, but rather, are external factors.
Herzberg often referred to these hygiene factors as "KITA" factors, where KITA is an acronym
for Kick In The Ass, the process of providing incentives or a threat of punishment to cause
someone to do something. Herzberg argues that these provide only short-run success because the
motivator factors that determine whether there is satisfaction or no satisfaction are intrinsic to the
job itself, and do not result from carrot and stick incentives.
In a survey of 80 teaching staff at Egyptian private universities, Mohamed Hossam El-Din
Khalifa and Quang Truong (2009) found that perception of equity was directly related to job
satisfaction when the outcome in the equity comparison was one of Herzberg's motivators. By
contrast, perception of equity and job satisfaction were not related when the outcome in the
equity comparison was one of Herzberg's hygiene factors. The findings of this study provide
indirect support for Herzberg's conclusion that improving hygiene factors does not lead to
improvement in an employee's job satisfaction.
[edit] Implications for management
If the motivation-hygiene theory holds, management not only must provide hygiene factors to
avoid employee dissatisfaction, but also must provide factors intrinsic to the work itself for
employees to be satisfied with their jobs.
Herzberg argued that job enrichment is required for intrinsic motivation, and that it is a
continuous management process. According to Herzberg:
• The job should have sufficient challenge to utilize the full ability of the
employee.
• Employees who demonstrate increasing levels of ability should be given
increasing levels of responsibility.
• If a job cannot be designed to use an employee's full abilities, then the firm
should consider automating the task or replacing the employee with one who
has a lower level of skill. If a person cannot be fully utilized, then there will be
a motivation problem.
Critics of Herzberg's theory argue that the two-factor result is observed because it is natural for
people to take credit for satisfaction and to blame dissatisfaction on external factors.
Furthermore, job satisfaction does not necessarily imply a high level of motivation or
productivity.
Herzberg's theory has been widely read, and despite its weaknesses its enduring value is that it
recognizes that true motivation comes from within a person and not from KITA factors (French,
2008).
360-degree feedback
From Wikipedia, the free encyclopedia
Contents
[hide]
• 1 History
• 2 Accuracy
• 3 Results
• 4 References
[edit] History
The German Military first began gathering feedback from multiple sources in order to evaluate
performance during World War II (Fleenor & Prince, 1997). Also during this time period, others
explored the use of multi-rater feedback via the concept of T-groups.
One of the earliest recorded uses of surveys to gather information about employees occurred in
the 1950s at Esso Research and Engineering Company (Bracken, Dalton, Jako, McCauley, &
Pollman, 1997). From there, the idea of 360-degree feedback gained momentum, and by the
1990s most human resources and organization development professionals understood the concept.
The problem was that collecting and collating the feedback demanded a paper-based effort
including either complex manual calculations or lengthy delays. The first led to despair on the
part of practitioners; the second to a gradual erosion of commitment by recipients.
Multi-rater feedback use steadily increased in popularity, due largely to the use of the Internet in
conducting web-based surveys (Atkins & Wood, 2002). Today, studies suggest that over one-
third of U.S. companies use some type of multi-source feedback (Bracken, Timmreck, &
Church, 2001a). Others claim that this estimate is closer to 90% of all Fortune 500 firms
(Edwards & Ewen, 1996). In recent years, Internet-based services have become the norm, with a
growing menu of useful features (e.g., multiple languages, comparative reporting, and aggregate
reporting) (Bracken, Summers, & Fleenor, 1998).
[edit] Accuracy
A study on the patterns of rater accuracy shows that length of time that a rater has known the
person being rated has the most significant effect on the accuracy of a 360-degree review. The
study shows that subjects in the group “known for one to three years” are the most accurate,
followed by “known for less than one year,” followed by “known for three to five years” and the
least accurate being “known for more than five years.” The study concludes that the most
accurate ratings come from knowing the person long enough to get past first impressions, but not
so long as to begin to generalize favorably (Eichinger, 2004).
It has been suggested that multi-rater assessments often generate conflicting opinions, and that
there may be no way to determine whose feedback is accurate (Vinson, 1996). Studies have also
indicated that self-ratings are generally significantly higher than the ratings of others (Lublin,
1994; Yammarino & Atwater, 1993; Nowack, 1992).
[edit] Results
Several studies (Hazucha et al., 1993; London & Wohlers, 1991; Walker & Smither, 1999)
indicate that the use of 360-degree feedback helps people improve performance. In a 5-year
Walker and Smither (1999) study, no improvement in overall ratings was found between the 1st
and 2nd year, but higher scores were noted between 2nd and 3rd and 3rd and 4th years. A study
by Reilly et al. (1996) found that performance increased between the 1st and 2nd
administrations, and sustained this improvement 2 years later. Additional studies show that 360
feedback may be predictive of future performance (Maylett & Riboldi, 2007).
Some authors maintain that 360 processes are much too complex to make blanket generalizations
about their effectiveness (Bracken, Timmreck, Fleenor, & Summers, 2001b; Smither, London, &
Reilly, 2005). Smither et al. (2005) suggest, "We therefore think that it is time for researchers
and practitioners to ask, 'Under what conditions and for whom is multisource feedback likely to
be beneficial?' (rather than asking 'Does multisource feedback work?') (p. 60)." Their meta-
analysis of 24 longitudinal studies looks at individual and organizational moderators that point to
many potential determinants of behavior change, including positive feedback orientation,
positive reactions to feedback, goal setting, and taking action.
Bracken et al. (2001b) and Bracken and Timmreck (2001) focus on process features that are
likely to also have major effects in creating behavior change and offer best practices in those
areas. Some of these factors have been researched and been shown to have significant impact.
Greguras and Robie (1998) document how the number of raters used in each rater category
(direct report, peer, manager) affects the reliability of the feedback, with direct reports being the
least reliable and therefore requiring more participation. Multiple pieces of research (Bracken &
Paul, 1993; Kaiser & Kaplan, 2006; Caputo & Roch, 2009; English, Rose, & McClellan, 2009)
have demonstrated that the response scale can have a major effect on the results, and some
response scales are indeed better than others. Goldsmith and Underhill (2001) report the
powerful influence of the participant behavior of following up with raters to discuss their results.
Other potentially powerful moderators of behavior change include how raters are selected,
manager approval, instrument quality (reliability and validity), rater training and orientation,
participant training, manager (supervisor) training, coaching, integration with HR systems, and
accountability (Bracken et al., 2001b).
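One way to make the raters-versus-reliability point concrete is the Spearman-Brown prophecy formula, which is not cited in the text but is a standard illustration of how averaging more raters of a given category raises the reliability of the aggregate feedback; the single-rater reliabilities below are hypothetical.

    # Spearman-Brown prophecy: reliability of the mean of n parallel raters.
    def spearman_brown(single_rater_reliability, n_raters):
        r = single_rater_reliability
        return (n_raters * r) / (1 + (n_raters - 1) * r)

    # Hypothetical single-rater reliabilities by rater category:
    for category, r1 in [("direct reports", 0.30), ("peers", 0.40), ("manager", 0.50)]:
        agg = [round(spearman_brown(r1, n), 2) for n in (1, 3, 5, 8)]
        print(category, "reliability with 1/3/5/8 raters:", agg)

The category with the lowest single-rater reliability needs the most raters to reach a given aggregate reliability, which is consistent with the recommendation that direct reports require more participation.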
Other authors state that the use of multi-rater assessment does not improve company
performance. One 2001 study found that 360-degree feedback was associated with a 10.6 percent
decrease in market value, while another study concludes that "there is no data showing that [360-
degree feedback] actually improves productivity, increases retention, decreases grievances, or is
superior to forced ranking and standard performance appraisal systems. It sounds good, but there
is no proof it works." (Pfau & Kay, 2002) Similarly, Seifert, Yukl, and McDonald (2003) state
that there is little evidence that the multi-rater process results in change.
Additional studies (Maylett, 2005) found no correlation between an employee's multi-rater
assessment scores and his or her top-down performance appraisal scores (provided by the
person's supervisor), and advised that although multi-rater feedback can be effectively used for
appraisal, care should be taken in its implementation (Maylett, 2009). This research suggests that
360-degree feedback and performance appraisals get at different outcomes, and that both 360-
degree feedback and traditional performance appraisals should be used in evaluating overall
performance.[1]
Collecting data is the second step in the Workforce Planning process. Data collection includes
conducting an Environmental Scan and SWOT Analysis and a Supply/Demand Analysis.
An Environmental Scan requires identifying the internal and external Strengths, Weaknesses,
Opportunities and Threats (SWOT) that will affect the short- and long-term goals of the
agency.
Information and trends discovered during the Environmental Scan process can provide the
foundation for a SWOT Analysis finding. For example, if the Environmental Scan predicts that
there will be a shortage of trained child welfare workers, this shortage would likely be identified
as a Threat in the SWOT Analysis.
Assistance under the scheme is available for meeting the expenditure on purchase of capital
equipment, acquisition of technical know-how, upgradation of process technology and products
with thrust on quality improvement, improvement in packaging, and the cost of TQM and
acquisition of ISO-9000 series certification.
Units which are already exporting their products, or have the potential to export at least 25% of
their output by adopting the modernisation scheme, would be eligible for assistance from the fund
provided they have been in operation for at least 3 years and are not in default to banks/financial
institutions. Assistance under the scheme will be need-based, subject to a minimum of Rs. 10
lakh per unit.
The ratio of contribution to the fund could be 60% from the Government of India and 40% from the State
Government, Industry Associations and other developmental agencies, including banks, put
together. The initiative could lie with the State Govts./Industry Associations to raise their
contribution of 40% by mobilising resources at the State level. Assistance to such a fund is
restricted to Rs. 30 lakhs per fund. The scope and activities to be generated out of the Fund are
as follows:
This technology fund, inter alia, is to bring about technological upgradation in selected areas of
the SSI Sector with the involvement of CEIR Labs, Tool Rooms, Testing Centres, PPDC, etc. It
will also help in the development of prototypes, designs and drawings, and in the dissemination of
information through seminars, workshops, consultancy, etc.
Arranging technology transfer between SMEs within the country, and also arranging
tie-ups for technology transfer between large and small industries, particularly for ancillarisation
and vendor development.
Arranging technology transfer from Indian small enterprises to small enterprises in other
developing countries.
It is also proposed that the following activities could be included to make a portion of the fund
more useful for getting faster results:
i. The fund at the disposal of the Government/DCSSI could also be utilised for sponsoring
technology-related training programmes in India and abroad.
- Provide partial funding support of up to 50% of the cost of acquisition of technologies,
negotiation of technology transfer agreements, and such related activities.
- Meeting the expenditure for the participation of experts from developed countries in various
seminars/workshops related to technology transfer/acquisition.
ii. Conducting technology-related studies, including cluster studies, etc.
MODERNISATION OF SELECTED SMALL SCALE INDUSTRIES
During the Eighth Five Year Plan, a sum of Rs. 70 lakhs was earmarked for the programme of
modernisation of selected small scale industries. Under this scheme it is proposed to prepare
modernisation guides, status reports, technology upgradation reports, cluster study reports and
unit-specific study reports, and to organise contact programmes in the form of seminars/workshops
for dissemination of information.
LOCK-OUT means the temporary closing of a place of employment, or the .... to compel the
workmen to accept the terms and conditions of the employer.
The AQL (Acceptance Quality Level), the maximum percent defective that can be considered satisfactory as a
process average for sampling inspection, is 1% here. Its corresponding Pa is about 89%; it should
normally be at least that high.
The hypergeometric and binomial distributions are also used. The alpha risk is the probability of rejecting
relatively good lots (at AQL). The beta risk is the probability of accepting relatively bad lots (at
LTPD/RQL); it is the probability of accepting product of some stated undesirable quality, i.e. the value of
Pa at that stated quality level.
The OC curves are a means of quantifying alpha and beta risks for a given attribute sampling plan. The
Pa value obtained assumes that the distribution of defectives among a lot is random – either the
underlying process is in control, or the product was well mixed before being divided into lots. The samples
must be selected randomly from the entire lot. The alpha risk is 1 − Pa. The shape of the OC curves is
affected by the sample size (n) and accept number (c) parameters. Increasing both the accept number
and sample size will bring the curve closer to the ideal shape, with better discrimination.
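As a rough illustration of how these risks are computed, the sketch below evaluates Pa under the binomial model for a hypothetical single sampling plan. The sample size n = 100, acceptance number c = 2 and LTPD of 5% are assumed for illustration only; the 89% figure quoted above refers to whichever plan the original example used.

```python
# Sketch: operating characteristic (OC) curve values for an attribute
# sampling plan, using the binomial model mentioned above.
# The plan parameters (n = 100, c = 2) and the LTPD of 5% are
# illustrative assumptions, not values taken from the text.
from math import comb

def prob_accept(p, n, c):
    """Pa: probability of accepting a lot whose fraction defective is p,
    i.e. the chance of finding c or fewer defectives in a sample of n."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

n, c = 100, 2            # sample size and acceptance number (assumed plan)
aql, ltpd = 0.01, 0.05   # AQL from the text; LTPD assumed for illustration

pa_at_aql = prob_accept(aql, n, c)
alpha = 1 - pa_at_aql              # producer's risk: rejecting a good lot
beta = prob_accept(ltpd, n, c)     # consumer's risk: accepting a bad lot

print(f"Pa at AQL  = {pa_at_aql:.2%}  (alpha = {alpha:.2%})")
print(f"Pa at LTPD = {beta:.2%}  (beta)")
```

Evaluating prob_accept over a range of p values traces out the OC curve; raising n and c together steepens it, which is the improved discrimination described above.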
Linear Regression
Scatter Diagrams
We often wish to look at the relationship between two things (e.g. between a person's height and
weight) by comparing data for each of these things. A good way of doing this is by drawing a
scatter diagram.
"Regression" is the process of finding the function satisfied by the points on the scatter diagram.
Of course, the points might not fit the function exactly but the aim is to get as close as possible.
"Linear" means that the function we are looking for is a straight line (so our function f will be of
the form f(x) = mx + c for constants m and c).
Here is a scatter diagram with a regression line drawn in:
Correlation
Correlation is a term used to describe how strong the relationship between the two variables
appears to be.
We say that there is a positive linear correlation if y increases as x increases and we say there is a
negative linear correlation if y decreases as x increases. There is no correlation if x and y do not
appear to be related.
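The text describes correlation only qualitatively. One standard way to quantify it, offered here as a supplementary sketch since no coefficient is introduced above, is Pearson's sample correlation coefficient r: positive for a positive linear correlation, negative for a negative one, and near zero when there is little or no linear relationship.

```python
# Sketch: Pearson's sample correlation coefficient, one standard way of
# quantifying the linear correlation described above (the text itself
# does not name a specific coefficient). r > 0 indicates positive linear
# correlation, r < 0 negative, and r near 0 little or no correlation.
from math import sqrt

def correlation(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    syy = sum((y - mean_y) ** 2 for y in ys)
    return sxy / sqrt(sxx * syy)

# Illustrative data: heights (cm) and weights (kg) for five people.
heights = [150, 160, 165, 172, 180]
weights = [52, 58, 63, 70, 79]
print(correlation(heights, weights))  # close to +1: strong positive correlation
```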
Explanatory and Response Variables
In many experiments, one of the variables is fixed or controlled and the point of the experiment
is to determine how the other variable varies with the first. The fixed/controlled variable is
known as the explanatory or independent variable and the other variable is known as the
response or dependent variable.
I shall use "x" for my explanatory variable and "y" for my response variable, but I could have
used any letters.
Regression Lines
By Eye
If there is very little scatter (we say there is a strong correlation between the variables), a
regression line can be drawn "by eye". You should make sure that your line passes through the
mean point (the point (x̄, ȳ), where x̄ is the mean of the data collected for the explanatory variable and
ȳ is the mean of the data collected for the response variable).
Two Regression Lines
When there is a reasonable amount of scatter, we can draw two different regression lines
depending upon which variable we consider to be the most accurate. The first is a line of
regression of y on x, which can be used to estimate y given x. The other is a line of regression of
x on y, used to estimate x given y.
If there is a perfect correlation between the data (in other words, if all the points lie on a straight
line), then the two regression lines will be the same.
Least Squares Regression Lines
This is a method of finding a regression line without estimating where the line should go by eye.
If the equation of the regression line is y = ax + b, we need to find what a and b are. We find
these by solving the "normal equations".
Normal Equations
The "normal equations" for the line of regression of y on x are:
Σy = aΣx + nb and
Σxy = aΣx² + bΣx
The values of a and b are found by solving these equations simultaneously.
For the line of regression of x on y, the "normal equations" are the same but with x and y
swapped.
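As a sketch, solving the two normal equations simultaneously for a and b gives the familiar closed-form expressions used below. The notation y = ax + b follows the text; the data values are illustrative only.

```python
# Sketch: line of regression of y on x obtained by solving the normal
# equations above (Σy = aΣx + nb and Σxy = aΣx² + bΣx) for the slope a
# and intercept b. The data points are illustrative only.

def regression_y_on_x(xs, ys):
    n = len(xs)
    sum_x, sum_y = sum(xs), sum(ys)
    sum_xy = sum(x * y for x, y in zip(xs, ys))
    sum_x2 = sum(x * x for x in xs)
    # Solving the two simultaneous equations gives the usual formulas:
    a = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
    b = (sum_y - a * sum_x) / n
    return a, b

xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.3, 5.9, 8.2, 9.8]
a, b = regression_y_on_x(xs, ys)
print(f"y = {a:.3f}x + {b:.3f}")
# The fitted line passes through the mean point (x̄, ȳ), consistent with
# the "by eye" guidance above, since b = ȳ - a·x̄.
```

For the line of regression of x on y, the same function can be called with the arguments swapped.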
Adapted from Robert S. Kaplan and David P. Norton, “Using the Balanced
Scorecard as a Strategic Management System,” Harvard Business Review
(January-February 1996): 76.
Perspectives
The balanced scorecard suggests that we view the organization from four perspectives, and that we
develop metrics, collect data, and analyze them relative to each of these perspectives:
The Learning & Growth Perspective
This perspective includes employee training and corporate
cultural attitudes related to both individual and corporate self-improvement.
In a knowledge-worker organization, people --
the only repository of knowledge -- are the main resource. In
the current climate of rapid technological change, it is
becoming necessary for knowledge workers to be in a
continuous learning mode. Metrics can be put into place to
guide managers in focusing training funds where they can
help the most. In any case, learning and growth constitute
the essential foundation for success of any knowledge-worker
organization.
Kaplan and Norton emphasize that 'learning' is more than
'training'; it also includes things like mentors and tutors
within the organization, as well as that ease of
communication among workers that allows them to readily
get help on a problem when it is needed. It also includes
technological tools; what the Baldrige criteria call "high
performance work systems."
The Business Process Perspective
This perspective refers to internal business processes. Metrics
based on this perspective allow the managers to know how
well their business is running, and whether its products and
services conform to customer requirements (the mission).
These metrics have to be carefully designed by those who
know these processes most intimately; with our unique
missions these are not something that can be developed by
outside consultants.
The Customer Perspective
Recent management philosophy has shown an increasing
realization of the importance of customer focus and customer
satisfaction in any business. These are leading indicators: if customers are not
satisfied, they will eventually find other
suppliers that will meet their needs. Poor performance from
this perspective is thus a leading indicator of future decline,
even though the current financial picture may look good.
In developing metrics for satisfaction, customers should be
analyzed in terms of kinds of customers and the kinds of
processes for which we are providing a product or service to
those customer groups.
The Financial Perspective
Kaplan and Norton do not disregard the traditional need for
financial data. Timely and accurate funding data will always
be a priority, and managers will do whatever necessary to
provide it. In fact, often there is more than enough handling
and processing of financial data. With the implementation of a
corporate database, it is hoped that more of the processing
can be centralized and automated. But the point is that the
current emphasis on financials leads to the "unbalanced"
situation with regard to other perspectives. There is perhaps
a need to include additional financial-related data, such as
risk assessment and cost-benefit data, in this category.
Strategy Mapping
Strategy maps are communication tools used to tell a story of
how value is created for the organization. They show a
logical, step-by-step connection between strategic objectives
(shown as ovals on the map) in the form of a cause-and-effect
chain. Generally speaking, improving performance in
the objectives found in the Learning & Growth perspective
(the bottom row) enables the organization to improve its
Internal Process perspective Objectives (the next row up),
which in turn enables the organization to create desirable
results in the Customer and Financial perspectives (the top
two rows).
Balanced Scorecard Software
The balanced scorecard is not a piece of software.
Unfortunately, many people believe that implementing
software amounts to implementing a balanced
scorecard. Once a scorecard has been developed and
implemented, however, performance management software
can be used to get the right performance information to the
right people at the right time. Automation adds structure and
discipline to implementing the Balanced Scorecard system,
helps transform disparate corporate data into information and
knowledge, and helps communicate performance information.
Strategic Theme
Strategic Themes are key areas in which an organization must
excel in order to achieve its mission and vision, and deliver value
to customers. Strategic Themes are the organization's "Pillars of
Excellence."
Strategy Map
A Strategy Map displays the cause-effect relationships among the
objectives that make up a strategy. A good Strategy Map tells a
story of how value is created for the business.
Strategy
How an organization intends to accomplish its vision; an approach,
or “game plan”.
Targets
Desired levels of performance for performance measures.
Vision
A vision statement is an organization's picture of future success;
where it wants to be in the future.
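As a minimal, purely illustrative sketch, the glossary terms above could be represented in code as objectives grouped under the four perspectives, each carrying a measure and a target. None of the names or values below are prescribed by the balanced scorecard framework itself.

```python
# Minimal sketch of how the scorecard terms defined above might be
# represented: objectives grouped under the four perspectives, each
# carrying a performance measure and a target. All names and values
# here are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Objective:
    name: str
    measure: str     # the performance measure tracked for this objective
    target: float    # desired level of performance (a "Target" above)

@dataclass
class Scorecard:
    vision: str
    perspectives: Dict[str, List[Objective]] = field(default_factory=dict)

card = Scorecard(
    vision="Be the preferred supplier in our region",
    perspectives={
        "Learning & Growth": [Objective("Develop staff skills", "Training hours per employee", 40)],
        "Business Process": [Objective("Reduce defects", "Defect rate (%)", 1.0)],
        "Customer": [Objective("Improve satisfaction", "Customer satisfaction score", 4.5)],
        "Financial": [Objective("Grow revenue", "Year-on-year revenue growth (%)", 8.0)],
    },
)
print(card.perspectives["Customer"][0])
```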
The circular flow of income forms the basis for all models of the macro-economy, and
understanding the circular flow process is key to explaining how national income, output
and expenditure are created over time.
Injections and withdrawals
The circular flow will adjust following new injections into it or new withdrawals from it. An
injection of new spending will increase the flow. A net injection relates to the overall effect
of injections in relation to withdrawals following a change in an economic variable.
Savings and investment
The simple circular flow is, therefore, adjusted to take into account withdrawals and
injections. Households may choose to save (S) some of their income (Y) rather than spend it
(C), and this reduces the circular flow of income. Marginal decisions to save reduce the flow
of income in the economy because saving is a withdrawal out of the circular flow. However,
firms also purchase capital goods, such as machinery, from other firms, and this spending is
an injection into the circular flow. This process, called investment (I), occurs because
existing machinery wears out and because firms may wish to increase their capacity to
produce.
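A minimal sketch of this simple two-sector flow, assuming households save a fixed fraction s of income and firms inject a fixed amount of investment I each period, shows income settling where the withdrawal equals the injection (S = I), i.e. Y = I / s. The savings rate and investment figure below are illustrative assumptions, not values from the text.

```python
# Sketch of the simple two-sector circular flow described above:
# households save a fixed fraction s of income (a withdrawal) and firms
# inject a fixed amount of investment I each period. Iterating the flow
# shows income settling where the injection equals the withdrawal
# (S = I), i.e. Y = I / s.

def circular_flow(investment, savings_rate, periods=30, y0=0.0):
    y = y0
    for _ in range(periods):
        consumption = (1 - savings_rate) * y   # spending that stays in the flow
        y = consumption + investment           # next period's income
    return y

I, s = 100.0, 0.2
print(circular_flow(I, s))   # approaches I / s = 500
print(I / s)                 # equilibrium income where S = I
```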