MACROECONOMICS
AN INTRODUCTION TO ADVANCED METHODS
THIRD EDITION
WILLIAM M. SCARTH
Copyright © 2009 William M. Scarth
All rights reserved. No part of the work covered by the copyright hereon may be
reproduced or used in any form without written permission.
I have tried to maintain the user friendly exposition that has been
appreciated in earlier editions — giving equal treatment to explaining
technical details and to exposing the essence of each result and
controversy. Using basic mathematics throughout, the book introduces
readers to the actual research methods of macroeconomics. But in addition
to explaining methods, it discusses the underlying logic at the intuitive
level, and with an eye to both the historical development of the subject,
and the ability to apply the analysis to applied policy debates. Concerning
application, some of the highlighted topics are: the Lucas critique of
standard methods for evaluating policy, credibility and dynamic
consistency issues in policy design, the sustainability of rising debt levels
and an evaluation of Europe's Stability Pact, the optimal inflation rate, the
implications of alternative monetary policies for pursuing price stability
(price-level vs inflation-rate targeting, fixed vs flexible exchange rates),
tax reform (trickle-down controversies and whether second-best initial
conditions ease the trade-off between efficiency and equity objectives),
theories of the natural unemployment rate and the possibility of multiple
equilibria, alternative low-income support policies, and globalization
(including the alleged threat to the scope for independent macro policy).
I welcome comments from users ([email protected]). Indeed,
much useful feedback in the past is reflected in these pages, so it is
appropriate to acknowledge a few debts here. In earlier editions, I have
thanked some of my mentors — individuals who have been instrumental in
my own development. In this edition, I confine my acknowledgements to
two groups — those who have provided helpful discussion concerning
particular topics, and some impressive students who have helped improve
earlier drafts. I thank John Burbidge, Peter Howitt, Harriet Jackson, Ron
Kneebone, Jean-Paul Lam, David Laidler, Tiff Macklem, Lonnie Magee,
Hamza Malik, Thomas Moutos, Tony Myatt, Siyam Rafique, Krishna
Sengupta, John Smithin, Malick Souare, Mike Veall, and especially Leilei
Tang and Huizi Zhao for their capable and generous assistance. It should,
of course, be emphasized that none of these individuals can be held
responsible for how I may have filtered their remarks. Despite the real
contributions of these individuals, my greatest debt is to my wife, Kathy,
whose unfailing love and support have been invaluable. Without this
support I would have been unable to work at making the exciting
developments in modern macroeconomics more accessible.
Contents
Chapter 1: Keynes and the Classics 1
1.1 Introduction 1
1.2 Criteria for Model Selection 2
1.3 The Textbook Classical Model:
The Labour Market with Flexible Wages 5
1.4 The Textbook Keynesian Model:
The Labour Market with Money-Wage Rigidity 11
1.5 Generalized Disequilibrium:
Money-Wage and Price Rigidity 14
1.6 Conclusions 17
2.1 Introduction 19
2.2 A Simple Dynamic Model:
Keynesian Short-Run Features
and a Classical Full-Equilibrium 19
2.3 The Correspondence Principle 23
2.4 Can Increased Price Flexibility be De-Stabilizing? 26
2.5 Monetary Policy as a Substitute for Price Flexibility 31
2.6 Conclusions 35
Chapter 3: Model-Consistent Expectations 36
3.1 Introduction 36
3.2 Uncertainty in Traditional Macroeconomics 39
3.3 Adaptive Expectations 48
3.4 Rational Expectations: Basic Analysis 51
3.5 Rational Expectations: Extended Analysis 56
3.6 Conclusions 60
Chapter 4: The Micro-Foundations
of Modern Macroeconomics 64
4.1 Introduction 64
4.2 The Lucas Critique 64
4.3 Household Behaviour 70
4.4 Firms' Behaviour: Factor Demands 79
4.5 Firms' Behaviour: Setting Prices 83
4.6 Conclusions 87
5.1 Introduction 89
5.2 The Original Real Business Cycle Model 89
5.3 Extensions to the Basic Model 93
5.4 Optimal Inflation Policy 101
5.5 Harberger Triangles vs. Okun's Gap 105
5.6 Conclusions 111
References 283
Chapter 1
Keynes and the Classics
1.1 Introduction
Almost 70 years have elapsed since the publication of Keynes' The
General Theory of Employment, Interest and Money, yet the controversies
between his followers and those macroeconomists who favour a more
classical approach have remained active. One purpose of this book is to
examine some of these controversies, to draw attention to developments
that have led to an important synthesis of ideas from both traditions, and to
illustrate in some detail how this integrated approach can inform policy
debates.
At the policy level, the hallmarks of Keynesian analysis are that
involuntary unemployment can exist and that, without government
assistance, any adjustment of the system back to the "natural"
unemployment rate is likely to be slow and to involve cycles and
overshoots. In its extreme form, the Keynesian view is that adjustment
back to equilibrium simply does not take place without policy assistance.
This view can be defended by maintaining either of the following
positions: (i) the economy has multiple equilibria, only one of which
involves "full" employment; or (ii) there is only one equilibrium, and it
involves "full" employment, but the economic system is unstable without
the assistance of policy, so it cannot reach the "full" employment
equilibrium on its own.
We shall consider the issue of multiple equilibria in Chapter 9. In
earlier chapters, we focus on the question of convergence to a full
equilibrium. To simplify the exposition, we concentrate on stability versus
outright instability, which is the extreme form of the issue. We interpret
any tendency toward outright instability as analytical support for the more
general proposition that adjustment between full equilibria is protracted.
In this first chapter, we examine alternative specifications of the
labour market, such as perfectly flexible money wages (the textbook
Classical model) and completely fixed money wages (the textbook
Keynesian model), to clarify some of the causes of unemployment. We
consider fixed goods prices as well (the model of generalized
disequilibrium), and then we build on this background in later chapters.
For example, in Chapter 2, we assume that nominal rigidities are only
temporary, and we consider a dynamic analysis that has Classical
properties in full equilibrium, but Keynesian features in the transitional
periods on the way to full equilibrium. Fifty years ago, Paul Samuelson
labelled this class of dynamic models the Neoclassical Synthesis.
In Chapter 3, we enrich this dynamic analysis by exploring
alternative ways of bringing expectations into the analysis. With
expectations involved, it is not obvious that an increased degree of price
flexibility lessens the amount of cyclical unemployment that follows from
a decrease in aggregate demand. By the end of Chapter 3, we will have
identified two important considerations that make macroeconomic
convergence more problematic: firms' reactions to sticky prices and sales
constraints, and expectations.
In Chapter 4, we rectify one major limitation of the analysis to
that point — that formal micro-foundations have been missing. The inter-
temporal optimization that is needed to overcome this limitation is
explained in Chapter 4. Then, in Chapter 5, we examine the New Classical
approach to business cycle analysis — the modern, more micro-based
version of the market-clearing approach to macroeconomics, in which no
appeal to sticky prices is involved. Finally, in Chapters 6 and 7, we
examine what has been called the "New" Neoclassical Synthesis — a
business-cycle analysis that blends the microeconomic rigour of the New
Classicals with the empirical applicability that has always been the focus
of the Keynesian tradition and the original Neoclassical Synthesis.
For the remainder of the book (the final five chapters), the focus
shifts from short-run stabilization issues to concerns about long-run living
standards. In these chapters, we focus on structural unemployment and the
challenge of raising productivity growth.
1.3 The Textbook Classical Model:
The Labour Market with Flexible Wages
Western countries tried a policy package of tax cuts along with decreased
money supply growth; the motive for this policy package was, to a large
extent, the belief that the Classical macro model has some short-run policy
relevance. Such policies are controversial, however, because various
analysts believe that the model ignores some key questions. Is the real-
world supply curve approximately vertical in the short run? Are labour
supply elasticities large enough to lead to a significant shift in aggregate
supply? Many economists doubt that these conditions are satisfied.

Figure 1.1 Derivation of the Aggregate Demand Curve
Another key issue is the effect on macroeconomic convergence of the
growing government debt that accompanies this combination policy of
decreased reliance on taxation and money issue as methods of government
finance. The textbook Classical model abstracts from this consideration.
An explicit treatment of government debt is considered later in this book
(in Chapter 7), and a negative verdict on the possibility of tax cuts paying
for themselves is available in Mankiw and Weinzierl (2006).
Before leaving the textbook Classical model, we summarize a
graphic exposition that highlights both the goods market and the labour
market. In Figure 1.3, consider that the economy starts at point A. Then a
decrease in government spending occurs. The initial effect is a leftward
shift of the IS curve (and therefore, in the aggregate demand curve). At the
initial price level, aggregate supply exceeds aggregate demand. The result
is a fall in the price level, and this (in turn) causes two shifts in the labour
market quadrant of Figure 1.3: (1) labour demand shifts down (because of
the decrease in the marginal revenue product of labour); and (2) labour
supply shifts down by the same proportionate amount as the decrease in
the price level (because of workers' decreased money-wage claims). Both
workers and firms care about real wages; had we drawn the labour market
with the real wage on the vertical axis, neither the first nor the second
shift would occur. These shifts occur because we must "correct" for
having drawn the labour demand and supply curves with reference to the
nominal wage. The final observation point for the economy is B in both
bottom panels of Figure 1.3. The economy avoids ever having a recession
in actual output and employment since the shock is fully absorbed by the
falling wages and prices. These fixed levels of output and employment are
often referred to as the economy's "natural rates" (denoted here by $\bar{Y}$ and $\bar{N}$).
Many economists find this model unappealing for two reasons.
First, they think they do observe recessions in response to drops in
aggregate demand. Second, adjustment within this model involves firms
that are perfectly happy to let inventories accumulate. A series of large
decreases in aggregate demand would cause a dramatic increase in
inventories, yet firms apparently never want to work them down since the
model shows no layoffs.
This implicit build-up of inventories will be particularly acute if
the economy is characterized by the phenomenon that Keynes called a
"liquidity trap". This special case can be considered by letting the interest
sensitivity of money demand become very large: $L_r \to \infty$. By checking
the slope expression for the aggregate demand curve (equation (1.6)
above), the reader can verify that this situation involves the aggregate
demand curve being so steep that it is almost vertical. Thus, when this
curve shifts to the left, it may no longer intersect the aggregate supply
curve anywhere in the positive quadrant. In this situation, falling wages
and prices cannot eliminate the recession. Indeed, no consistent full
equilibrium exists in this case. The Classical model can, however, be
modified to avoid this problem by allowing the household consumption-
savings decision to depend on the quantity of liquid assets available - by
making the consumption function $C[(1-k)Y, M/P]$. The second term in this
function is referred to as the Pigou effect.
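To make the mechanics of the Pigou effect concrete, here is a minimal sketch (the notation follows the consumption function just given; $C_1$ and $C_2$ denote the partial derivatives with respect to the two arguments, and holding $r$ fixed stands in for the liquidity-trap case):

$$Y = C[(1-k)Y,\; M/P] + I(r) + G, \qquad C_2 > 0,$$

so, with $r$ held fixed,

$$\frac{\partial Y}{\partial P} = \frac{-C_2\, M/P^2}{1 - C_1(1-k)} < 0,$$

provided $0 < C_1(1-k) < 1$. Aggregate demand then slopes downward through the real-balance channel alone, so falling prices can still close a recessionary gap even when the interest-rate channel is dead.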
Figure 1.3 Production Function, Labour Market, and Aggregate Supply and Demand

1.4 The Textbook Keynesian Model:
The Labour Market with Money-Wage Rigidity
Contracts, explicit or implicit, often fix money wages for a period of time.
In Chapter 8, we shall consider some of the considerations that might
motivate these contracts. For the present, however, we simply presume the
existence of fixed money-wage contracts and we explore their
macroeconomic implications.
On the assumption that money wages are fixed by contracts for
the entire relevant short run, $W$ is now taken as an exogenous variable
stuck at value $\bar{W}$. Some further change in the model is required, however,
since otherwise we would now have five equations in four unknowns — $Y$,
$N$, $r$, and $P$.
Since the money wage does not clear the labour market in this
case, we must distinguish actual employment, labour demand, and labour
supply, which are all equal only in equilibrium. The standard assumption
in disequilibrium analyses is to assume that firms have the "right to
manage" the size of their labour force during the period after which the
wage has been set. This means that labour demand is always satisfied, and
that the five endogenous variables are now $Y, r, P, N, N^s$, where the latter
variable is desired labour supply. Since this variable occurs nowhere in
the model except in equation (1.5), that equation solves residually for $N^s$.
Actual employment is determined by the intersection of the labour
demand curve and the given money wage line.
These inconsistencies between Keynesian beliefs on the one hand and the
properties of the textbook (perfect competition version of the) Keynesian
model on the other suggest that Keynesian economists must have
developed other models that involve more fundamental departures from
the Classical system. One of these developments is the generalization of
the notion of disequilibrium to apply beyond the labour market, a concept
pioneered by Barro and Grossman (1971) and Malinvaud (1977).
If the price level is rigid in the short run, the aggregate supply
curve is horizontal. There are two ways in which this specification can be
defended. One becomes evident when we focus on slope expression (1.7).
This expression equals zero if $F_{NN} = 0$. To put the point verbally, the
marginal product of labour is constant if labour and capital must be
combined in fixed proportions. This set of assumptions — rigid money
wages and fixed-coefficient technology — is often appealed to in defending
fixed-price models. (Note that these models are the opposite of supply-
side economics since with a horizontal supply curve, output is completely
demand-determined, not supply-determined.)
Another defence for price rigidity is simply the existence of long-
term contracts fixing the money price of goods as well as factors. To use
this interpretation, however, we must re-derive the equations in the macro
model that relate to firms, since if the goods market is not clearing, it may
no longer be sensible for firms to set marginal revenue equal to marginal
cost. This situation is evident in Figure 1.5, which shows a perfectly
competitive firm facing a sales constraint. If there were no sales constraint
the firm would operate at point A, with marginal revenue (which equals
price) equal to marginal cost. Since marginal cost $= W(dN/dY) = W/F_N$,
Figure 1.5 A Competitive Firm Facing a Sales Constraint
The model now has two key differences from what we labelled
the textbook Keynesian model. First, labour demand is now independent
of the real wage, so any reduction in the real wage does not help in raising
employment. Second, the real wage is now a shift variable for the IS curve,
and therefore for the aggregate demand curve for goods, so wage cuts can
decrease aggregate demand and thereby lower employment. (This second
point is explained more fully below.) These properties can be verified
formally by noting that the model becomes simply equations 1.1 to 1.3 but
with W and P exogenous and with the revised investment function
replacing I(r). The three endogenous variables are Y, r, and N, with N
solved residually by equation 1.3.
The model is presented graphically in Figure 1.6. The initial
observation point is A in both the goods and labour markets. Assume a
decrease in government expenditure. The demand for goods curve moves
left so firms can only sell $\bar{y}$; the labour demand curve becomes the vertical
line at the employment required to produce $\bar{y}$, and the observation point
moves to point B in both diagrams.
Unemployment clearly exists. Can it be eliminated? Increases in M or G
would shift the demand for goods back, so these policies would still work.
But what about a wage cut? If the W line shifts down, all that happens is
that income is redistributed from labour to capitalists (as shown by the
shaded rectangle). If capitalists have a smaller marginal propensity to
consume than workers, the demand for goods shifts further to the left,
leading to further declines in real output and employment. The demand for
goods shifts to the left in any event, however, since given the modified
investment function (equation 1.8), the lower wage reduces investment.
Thus, wage cuts actually make unemployment worse.
Figure 1.6 The Effects of Falling Demand with Fixed Wages and Prices
1.6 Conclusions
Chapter 2
The traditional (that is, pre-New Classical) analysis of economic cycles
involved a compact structure that included the textbook Classical and
Keynesian models as special cases. This simple — yet encompassing —
framework was achieved by dropping any explicit treatment of the labour
market (and the production function). Instead, a single summary
relationship of the supply side of the goods market was specified. That
one function was an expectations-augmented Phillips curve — a
relationship that imposes temporary rigidity for goods prices in the short
run, but Classical-dichotomy (natural-rate) features in full equilibrium.
This simple, but complete, model of simultaneous fluctuations in real
output and inflation consisted of two equations: a Phillips curve (the
supply-side specification), and a summary of IS-LM theory — a simple
reduced-form aggregate-demand function. The purpose of this chapter is
to review the properties of this standard dynamic model.
These IS and LM relationships can be combined to yield the
aggregate demand function (by eliminating the interest rate via simple
substitution). The result is
$$y = \theta(m - p) + \beta g \qquad (2.1)$$

while the supply side is summarized by the expectations-augmented Phillips curve

$$\dot{p} = \phi(y - \bar{y}) + \pi. \qquad (2.2)$$

The new notation, $\bar{y}$ and $\pi$, denote the natural rate of output (the
value that emerges in the textbook Classical model, and a value we take as
an exogenous variable in the present chapter) and the core inflation rate.
Since p is the logarithm of the price level, its absolute time change equals
the percentage change in the price level. Thus, $\dot{p}$ is the inflation rate.
Initially, the core inflation rate is assumed to be zero. Later on in the
chapter, the core inflation rate is assumed to equal the full-equilibrium
inflation rate, and since we assume a constant natural rate of output, this
core inflation rate is simply equal to the rate of monetary expansion:
$\pi = \dot{m}$. If we assume the rate of monetary expansion to be zero, there is
no difference between these specifications.
The full equilibrium properties of this system are: $y = \bar{y}$,
$\dot{p} = \pi = \dot{m}$, and $\bar{r} = (\beta\bar{g} - \bar{y})/\alpha$, so (as already noted) macroeconomists
talk in terms of the "natural" output rate, the "natural" interest rate, and
the proposition that there is no lasting inflation-output trade-off. Milton
Friedman went so far as to claim that inflation is "always and
everywhere" a monetary phenomenon, but this assertion is supported by
the model only if prices are completely flexible (that is, if parameter $\phi$
approaches infinity). In general, the model involves simultaneous
fluctuations in real output and inflation, bringing predictions such as:
disinflation must involve a temporary recession. Such properties imply
that Friedman's claim is accurate only when comparing full long-run
equilibria. Nevertheless, the presumption that the model's full equilibrium
is, in fact, reached as time proceeds, should not be viewed as terribly
controversial, since it turns out that this model's stability condition can be
violated in only rather limited circumstances.
Keynes' approach to macroeconomics involved the concern that
convergence to a classical full-equilibrium should not be presumed.
Indeed, Keynes argued that a central job for macro theory was to identify
those circumstances when convergence is unlikely, so that policy can be
designed to ensure that real economies do not get into these
circumstances. So while this traditional model involves sticky prices in
the short run (and from this vantage point, at least, it is appealing to
Keynesians), the fact that — when expectations are ignored — it rejects the
possibility of instability as rather unlikely makes it offensive to
Keynesians. How has this model been altered to avoid this offensive
feature? The answer: by letting expected inflation depend on actual
inflation, and by allowing these expectations to have demand-side effects.
Thus far, we have limited the effects of anticipated inflation to the
wage/price setting process (by allowing the core, or full-equilibrium,
inflation rate to enter the Phillips curve). As an extension, we can allow
the nominal and the real interest rates to differ by peoples' expectations
concerning inflation in the short run. But before we introduce this
distinction, we discuss stability in this initial, more basic, model.
Mathematically, we can focus on the question of convergence to
full equilibrium by taking the time derivative of the aggregate demand
equation, assuming that autonomous spending and the money supply are
not changing in an ongoing fashion (setting $\dot{g} = \dot{m} = 0$), and substituting
out $\dot{p}$ by using the Phillips curve (with $\pi = 0$). The result is

$$\dot{y} = -s(y - \bar{y}), \qquad s = \theta\phi. \qquad (2.3)$$
Figure 2.1 Long-Run Aggregate Supply, Short-Run Aggregate Supply, and Aggregate Demand
We now extend our simple dynamic aggregate supply and demand model
by allowing inflationary and deflationary expectations to affect aggregate
demand. We continue to assume descriptive behavioural equations,
leaving the consideration of formal micro-foundations until Chapter 4. We
now distinguish real and nominal interest rates. The former is involved in
the IS relationship, since we assume that households and firms realize that
it is the real interest rate that represents the true cost of postponing
consumption and borrowing. But it is the nominal interest rate that
belongs in the LM equation, as long as we assume that peoples' portfolio
choice is between non-indexed "bonds" (that involve a real return of
$r = i - \dot{p}$) and money (that involves a real return of $-\dot{p}$). The real yield
differential is, therefore, $i$ — the nominal interest rate. Notice that, to avoid
having to specify a relationship between actual and expected inflation, we
have simply assumed that they are equal. We consider alternative
specifications in Chapter 3. In any event, when the nominal interest rate is
eliminated by substitution, the IS-LM summary is

$$y = \theta(m - p) + \beta g + \psi\dot{p}. \qquad (2.1a)$$
At the intuitive level, the basic rationale for this term is straightforward:
aggregate demand is higher if expected (equals actual) inflation rises,
since people want to "beat the price increases" by purchasing the items
now. Similarly, aggregate demand is lower if people expect deflation,
since in this case they want to postpone purchases so that they can benefit
from the expected lower prices in the future. Thus, while current
aggregate demand depends inversely on the current price of goods, it
depends positively on the expected future price of goods.
On the supply side, as long as we assume (as above) that the
natural output rate is constant, we know that the core (full-equilibrium)
inflation rate is the money growth rate, so the aggregate supply
relationship can remain as specified earlier. The model now consists of
equations (2.1a) and (2.2).
We are interested in determining how this simple economy reacts
to shocks such as a once-for-all drop in autonomous spending. What
determines how real output is affected in the short run? Under what
conditions will the economy's self-correction mechanism work (that is,
under what conditions will the short-run effect — a recession — be
temporary and automatically eliminated)? Was Keynes right when he
argued that it is a "good" thing that prices are sticky? That is, is the
magnitude or duration of the recession made worse if the short-run
Phillips curve is steeper (if coefficient $\phi$ is larger)? It is to these questions
that we now turn.
The effect of a change in autonomous expenditure on output is
calculated by substituting equation (2.2) into equation (2.1a) to eliminate
the inflation rate. Further, we simplify by setting $\pi = \dot{m} = 0$. The resulting
at-a-point-in-time reduced form for output is

$$dy/dg = \beta/(1 - \phi\psi),$$
which is positive only if the denominator is positive. Thus, the model only
supports the proposition that a drop in demand causes a recession if the
denominator is positive.
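For readers who want the one-line algebra behind this reduced form (using equations (2.1a) and (2.2) as reconstructed above):

$$y = \theta(m - p) + \beta g + \psi\dot{p}, \quad \dot{p} = \phi(y - \bar{y}) \;\Rightarrow\; y(1 - \phi\psi) = \theta(m - p) + \beta g - \phi\psi\bar{y},$$

and since $p$ is predetermined at a point in time, $dy/dg = \beta/(1 - \phi\psi)$.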
It may seem surprising that — in such a simple model as this one —
our basic assumptions about the signs of the parameters are not sufficient
to determine the sign of this most basic policy multiplier. If we cannot
"solve" this problem in such a simple setting, we will surely be plagued
with sign ambiguities in essentially all macro models that are more
complicated than this one. Macroeconomists have responded to this
problem in three ways. First, on the basis of empirical work, theorists
have become more confident in making quantitative assumptions about
the model's parameters, not just qualitative (or sign) assumptions. But
given the controversy that surrounds most econometric work, this strategy
has somewhat limited appeal. The second approach is to provide more
explicit micro-foundations for the model's behavioural equations. By
having a more specific theory behind these relationships, we have more
restrictions on the admissible magnitudes for these structural coefficients.
While this approach limits the model's sign ambiguity problems, as we
shall see in Chapter 3, it does not fully eliminate them. Thus, some
reliance must remain on what Paul Samuelson called the correspondence
principle many years ago. He assumed that the least controversial
additional assumption that can be made concerning the model's
parameters (other than their signs) is to assume that — given infinite time —
the system will eventually converge to its full equilibrium. After all, most
economists presume that we eventually get to equilibrium. To exploit this
belief, Samuelson's recommendation was to derive the system's dynamic
stability condition, and then to use that condition as a restriction to help
sign the corresponding comparative static multipliers. Since macro-
economists are assuming eventual convergence implicitly, Samuelson felt
that nothing more of substance is being assumed when that presumption is
made more explicit to sign policy multipliers. This has been standard
procedure in the profession for 60 years, and we will apply the
correspondence principle in our analysis here. But before doing so, we
note that some macroeconomists regard the use of the correspondence
principle as suspect.
The dissenters can see that there is an analogy between
macroeconomists using the correspondence principle and micro-
economists focusing on second-order conditions. Microeconomists use the
second-order conditions to resolve sign ambiguities in their analyses — that
are based on agents obeying the first-order conditions. There is no
controversy in this case, because the second-order conditions are an
integral part of the model, and analysts are simply making implicit
assumptions explicit. But the analogy between second-order conditions in
micro and dynamic stability conditions in macro breaks down since, in
most macro models, analysts have the freedom to specify more than one
set of assumptions for the model's dynamics. Thus, there is an element of
arbitrariness in macro applications of the correspondence principle that is
not present when microeconomists rely on second-order conditions for
additional restrictions. One of the purposes of providing explicit micro-
foundations for macroeconomics is to discipline macro model builders so
that they have less opportunity for making what others might regard as
arbitrary assumptions.
A more fundamental problem with the correspondence principle is
that some economists (for example, Keynes) are not prepared to assume
stability. Indeed, some of them can be viewed as arguing that this issue
should be the fundamental focus of research (see Tobin 1975, 1980; and
Hahn and Solow 1986). According to this approach, we should compare
the stability conditions under alternative policy regimes, to see whether or
not a particular policy is a built-in stabilizer. Policy regimes that lead to
likely instability should be avoided. Thus, even though the stability
conditions are not presumed to hold by all analysts, all economists must
know how to derive these conditions. Thus, we now consider the stability
condition for our aggregate supply and demand model.
The stability of the economy is assessed by taking the time
derivative of equation (2.4) and using equation (2.2) to, once again,
eliminate the inflation rate. In this derivation, we assume that there are no
further (ongoing) changes in autonomous spending and that the natural
rate is constant ($\dot{g} = \dot{\bar{y}} = 0$). As in the simpler model which suppressed
the distinction between real and nominal interest rates, the result is
$\dot{y} = -s(y - \bar{y})$, but now the expression for the stability parameter is

$$s = \theta\phi/(1 - \psi\phi).$$
Figure 2.2 Output time paths with low $\phi$ and with high $\phi$ (the once-for-all drop in $g$ occurs at the indicated time)
equals the initial output loss (the impact multiplier) divided by the speed
of adjustment. Thus, in this case, the cumulative output loss is $\beta/(\theta\phi)$.
According to this method of weighting the "bad" short-run effect and the
"good" longer-run effect of a larger parameter $\phi$, then, an increased degree
of price flexibility is deemed desirable. Of course, supporters of Keynes
can argue that what matters is a discounted cumulative output loss
calculation – not what we have just calculated. Once the short run is given
more weight than the longer run, it is immediately apparent from Figure
2.2 that the undesirable aspects of an increased degree of price flexibility
could dominate. Overall, this analysis provides at least partial analytical
support for the Keynesian proposition that increased price flexibility may
not help the built-in stability properties of the economy. This issue is
particularly important since many policy analysts advocate that
governments use taxes and/or subsidies to stimulate private firms and their
workers to adopt such arrangements as profit-sharing and shorter wage
contracts. One motive for encouraging these institutional changes is a
desire to increase wage flexibility (which would indirectly bring increased
price flexibility) and the proponents of these policies simply presume that
their adoption would be "good."
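The undiscounted-versus-discounted comparison is easy to quantify. The following sketch evaluates the expressions derived above; the parameter values are illustrative assumptions, not estimates from the book:

beta, theta, psi = 0.2, 1.0, 2.0       # illustrative parameter assumptions
for phi in (0.1, 0.4):                 # "low" versus "high" price flexibility
    impact = beta / (1 - psi * phi)    # output loss on impact, per unit drop in g
    s = theta * phi / (1 - psi * phi)  # speed of adjustment: the gap decays like exp(-s*t)
    disc = ", ".join(f"rho={rho}: {impact / (s + rho):.2f}" for rho in (0.05, 5.0))
    print(f"phi={phi}: impact={impact:.2f}, speed={s:.2f}, "
          f"undiscounted loss={impact / s:.2f}, discounted loss ({disc})")

With little discounting (rho = 0.05) the high-flexibility economy suffers the smaller cumulative loss, but with heavy discounting (rho = 5.0) the ranking reverses: exactly the Keynesian point about the short run receiving more weight.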
The opposite presumption seems to be involved at central banks,
such as the Bank of Canada. Analysts there have noted that one of the
"beneficial" aspects of our reaching price stability, is that average contract
length has increased dramatically over the last 20 years. With low
inflation, people are prepared to sign long-term wage contracts. While this
means lower industrial-dispute and negotiation costs, it also means that
the size of parameter 4 is now smaller. This development is "good" for
macroeconomic stability only if Keynes was right. It appears that the
Bank of Canada is comfortable siding with Keynes on this issue.
Before leaving this section, we investigate what things make the
economy's stability condition more or less likely to be satisfied. We
derived above that convergence to full equilibrium occurs only if $\psi\phi < 1$.
To assess the plausibility of this condition being met, we focus on the
detailed determinants of the reduced-form aggregate demand parameters
given earlier. Using these interpretations, the stability condition can be re-
expressed in terms of the underlying IS-LM parameters, and the interest
elasticity of money demand turns out to be central to that re-expression.
So, under monetary aggregate targeting at least, the efficacy of the
economy's self-correction mechanism very much depends on the interest
elasticity of money demand. The intuition behind this fact is straight-
forward. Lower prices have two effects on aggregate demand. Falling
actual prices stimulate demand, and (other things equal) help end a
recession. But expectations of falling prices raise real interest rates, and
(other things equal) dampen demand and thereby worsen a recession. The
stabilizing effect of falling actual prices works through its expansionary
effect on the real money supply, while the destabilizing effect of expected
deflation works through interest rates and the associated increase in
money demand. If real money demand increases more than real money
supply, the initial short-fall in the demand for goods is increased. A
liquidity trap maximizes the chance that the money-demand effect is
stronger — making the economy unstable.
We conclude that this simple dynamic model represents a
compact system that allows for Keynes's worry that wage and price
decreases can worsen a deep recession through expectational effects. At
the same time, however, it is important to realize that the model does not
suggest that instability must always occur. Indeed, macroeconomists can
appeal to this one simplified model, and consistently argue that
government intervention was justified in the 1930s (to avoid instability or
at least protracted adjustment problems) but was not required in the 1970s
(to avoid hyperinflation). Since any scientist wants a single, simple model
to "explain" a host of diverse situations, it is easy to see why variants of
this model represented mainstream macroeconomics for many years.
Some macroeconomists interpret the economy as having a stable
"corridor." The term corridor has been used to capture the notion that the
economic system may well be stable in the face of small disturbances that
do not push the level of activity outside its normal range of operation (the
corridor). But that fact still leaves open the possibility that large shocks
can push the economy out of the stable range. For example, as long as
shocks are fairly small, we do not get into a liquidity trap. But sometimes
a shock is big enough to make this extreme outcome relevant, and the
economy is pushed out of the stable corridor. This appears to be a
reasonable characterization of the Great Depression in the 1930s. At that
time, individuals became convinced that bankruptcies would be prevalent.
As a result, they developed an extraordinary preference for cash, and the
corresponding liquidity trap destroyed the applicability of the self-correcting
mechanism. It is important for Keynesians that instability does not always
occur, so that Keynesian concerns cannot be dismissed by simply
observing that there have been many episodes which have not involved
macroeconomic breakdown. Howitt (1979) has argued that models that do
not permit a corridor feature of this sort cannot claim to truly represent
Keynes's ideas. While this discussion of corridors has been instructive, it
has not been completely rigorous. After all, formal modeling of corridors
would require nonlinear relationships, and our basic model in this chapter
has involved linear relationships.
2.5 Monetary Policy as a Substitute for Price Flexibility

Suppose the central bank sets the money supply according to

$$m = \bar{m} - x[(p + y) - (\bar{p} + \bar{y})],$$

where the bars denote (constant) target values and parameter $x$ defines
alternative policies: $x \to 0$ implies a constant money supply (what we
have been assuming in the previous section); $x \to \infty$ implies pegging
nominal income. Since the target variables are constant, the equilibrium
inflation rate is zero for both of these monetary policies. This policy
reaction function can be combined with equations (2.1a) and (2.2). Using
the methods already described, the reader can verify that the impact
autonomous spending multiplier is

$$dy/dg = \beta/(1 - \phi\psi + \theta x),$$

the adjustment speed is

$$s = \theta\phi(1 + x)/(1 - \phi\psi + \theta x),$$

and the cumulative output loss is $\beta/(\theta\phi(1 + x))$. These results imply that
nominal income targeting (an increase in parameter x) reduces the size of
the impact effect which follows an aggregate demand shock, and it has an
ambiguous effect on the speed with which this temporary output deviation
is eliminated. These effects are not quite the same as we obtained for an
increase in price flexibility, but the net effect on the undiscounted
cumulative output outcome is similar. The overall output effect is made
smaller by nominal income targeting. In this sense, then, a more active
attempt to target nominal income can substitute for an attempt to vary the
degree of wage/price flexibility.
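A quick numerical reading of these expressions (with the same illustrative parameter values as before, and with the formulas as reconstructed above):

beta, theta, psi, phi = 0.2, 1.0, 2.0, 0.2
for x in (0.0, 1.0, 10.0):       # x = 0: money target; large x: near nominal-income peg
    denom = 1 - phi * psi + theta * x
    impact = beta / denom                 # shrinks steadily as x rises
    s = theta * phi * (1 + x) / denom     # direction of change depends on parameters
    print(f"x={x:>4}: impact={impact:.3f}, speed={s:.3f}, "
          f"cumulative loss={impact / s:.3f}")   # = beta/(theta*phi*(1+x))

The impact effect and the cumulative loss both fall as $x$ rises, while the effect on the speed of adjustment is ambiguous (its direction depends on the other parameters), matching the text.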
Readers may regard this analysis of monetary policy as somewhat
dated. After all, central banks no longer set policy in terms of targeting
monetary aggregates. Instead, they adjust interest rates with a view to
targeting inflation directly. Their research branches investigate whether
their interest-rate reaction function should focus on the deviation of the
inflation rate from target (the current practice), on the deviation of the
price level from target, or on the deviation of nominal GDP from target.
For the remainder of this section, we recast the analysis so that it can
apply directly to this set of questions. Also, we consider a different
specification of disturbances. Thus far, we have focused on the effects of a
once-for-all change in demand. Now, we focus on an ongoing cycle in
autonomous spending. Since modern analysis views policy as an ongoing
process, not a series of isolated events, this alternative treatment of
disturbances is appealing for an analysis that highlights model-consistent
expectations on the part of private agents who understand that policy
involves an ongoing interaction with the economy.
The revised model involves the IS relationship, the central bank's
interest-rate setting rule, the Phillips curve, and the specification of the
ongoing cycle (a sine curve) in autonomous demand:
$$y = -\alpha r + \beta g$$
$$r + \dot{p} = \bar{r} + \delta[\lambda(\dot{p} - 0) + (1 - \lambda)(p - 0)]$$
$$\dot{p} = \phi(y - \bar{y})$$
$$g = \bar{g} + \varepsilon\sin(t)$$
Next, we take the time derivative and use the Phillips curve again to
eliminate inflation:
where the arbitrary parameters, $\mu$ and $A$, are yet to be determined.
Substituting the trial solution into the model, we have $\mu^{t+1}A = (1 + r)\mu^t A$,
or $\mu = (1 + r)$. Similarly, substituting $t = 0$ into the trial solution, we have
$A = x_0$. As a result, the initially arbitrary reduced-form coefficients, $\mu$ and
$A$, are now determined as functions of the economically meaningful
parameters, $r$ and $x_0$.
We use this same procedure for our macro model here. Following
Chiang (1984, p. 472), the (trial) solution for output can be written as a
combination of sine and cosine terms, with the coefficients determined by
substituting this trial solution into the model.
Even for this simple model, illustrative parameter values are needed to
assess the resulting amplitude of the cycle in $y$. Representative values
(considering a period length of one year) are: $\alpha = 0.8$, $\beta = 0.2$
(autonomous spending is 20% of GDP), $\delta = 1$, and $\phi = 0.2$ (see Walsh
(2003a)). With these values, the amplitude of the real output cycle is equal
to one-fifth of the amplitude of the autonomous spending cycle if the
central bank targets the inflation rate (if $\lambda = 1$). In contrast, the amplitude
of the output cycle is a slightly larger fraction (0.23 instead of 0.20) of
that of the autonomous spending cycle if the central bank targets the price
level ($\lambda = 0$). According to the model, then, the contemplated move from
inflation targeting to price-level targeting is not supported.
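Since the reaction function above is our reconstruction of a garbled original, it is worth checking that it reproduces these numbers. A small forward-Euler simulation (unit-amplitude spending cycle) does:

import math

alpha, beta, delta, phi = 0.8, 0.2, 1.0, 0.2

def output_amplitude(lam, dt=0.001, T=300.0):
    # The model reduces to
    #   pdot = phi*(beta*sin(t) - alpha*delta*(1-lam)*p) / (1 + phi*alpha*(delta*lam - 1)),
    # with y - ybar = pdot/phi; measure the output amplitude over the final cycle.
    p, t, amp = 0.0, 0.0, 0.0
    denom = 1 + phi * alpha * (delta * lam - 1)
    while t < T:
        pdot = phi * (beta * math.sin(t) - alpha * delta * (1 - lam) * p) / denom
        if t > T - 2 * math.pi:
            amp = max(amp, abs(pdot / phi))
        p += pdot * dt
        t += dt
    return amp

for lam in (1.0, 0.0):
    print(f"lambda={lam}: output amplitude = {output_amplitude(lam):.3f}")

This prints roughly 0.200 under inflation targeting and 0.234 under price-level targeting — the one-fifth and 0.23 fractions reported above.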
The intuition behind this result can be appreciated by considering
an exogenous increase in the price level. With inflation-rate targeting,
such a "bygone" outcome is simply accepted, and future inflation is
resisted. But with price-level targeting, future inflation has to be less than
zero to eliminate this past outcome. That is, only under price-level
targeting is a policy-induced recession called for. So price-level targeting
seems obviously "bad." The reason that this consideration may not be the
dominant one, however, is that the avoidance of any long-term price-level
drift (that is a feature of price-level targeting) has a stabilizing effect on
expectations. For the plausible parameter values considered here, it
appears that the former (destabilizing) effect of price-level targeting
slightly outweighs the latter (stabilizing) effect.
2.6 Conclusions
The analysis of this chapter has involved a simple model that summarizes
mainstream macroeconomics before the rational-expectations and new-
classical "revolutions". This version of macroeconomics is called the
Neoclassical Synthesis since it combines important elements of both
Classical and Keynesian traditions. It involves both the long-run
equilibrium defined by the static textbook Classical model, and the
temporary nominal stickiness feature that is the hallmark of the textbook
Keynesian model (as the mechanism whereby the system departs from its
full equilibrium in the short run). We have used this model to explain the
correspondence principle, to examine how several monetary policies
might be used to make up for the fact that prices are not more flexible, and
to establish whether more flexible prices are desirable or not.
We have learned how important it is to add expectations to a
macro model. Initially, expectations were an issue stressed by Keynesians,
since they represent a mechanism that makes convergence to full
equilibrium less assured. But, more recently, macroeconomists of all
persuasions highlight expectations in their analysis. This is because
stabilization policy is now modelled as an ongoing operation, not an
isolated one-time event, and analysts are drawn to models in which agents
are fully aware of what the government has been, and will be, doing.
Thus, as far as stabilization policy analysis is concerned, there has been a
convergence of views in the sense that all modern analysis focuses on
model-consistent expectations (as we did by equating actual and expected
inflation in this chapter).
We extend our appreciation of these issues in the next two
chapters, by exploring alternative treatments of expectations in the next
chapter, and by providing more explicit micro-foundations for the
synthesis model in Chapter 4. Once the rational-expectations (Chapter 3)
and the new-classical (Chapter 5) revolutions have been explored, we will
be in a position to consider an updated version of the synthesis model —
Chapter 3
Model-Consistent Expectations
3.1 Introduction
3.2 Uncertainty in Traditional Macroeconomics
The "hat" denotes the output gap. The monetary policy reaction function
involves the money growth rate adjusting to the most recent observation
on the output gap (if x is positive, not zero). Otherwise the money growth
rate is a random variable, since we assume that u is a standard "error"
term, with expected value of zero, no serial correlation, and a constant
variance. Among other things, the model can be used to assess whether
"leaning against the wind" is a good policy (as Keynesians have always
recommended), or whether (as Milton Friedman has long advocated) a
"hands off" policy is better. We can evaluate this "rules vs. discretionary
policy" debate by analyzing whether a zero value for parameter x leads to
a smaller value for output variance (the variable the central bank is
assumed to care about (given the policy reaction function)) or whether a
positive value for parameter x delivers the smaller variance. But we
postpone this policy analysis for the moment, by imposing x = 0. By
taking the first difference of the demand function, and substituting in both
the policy reaction function and the Phillips curve, we have:
$$Y_t = vY_{t-1} + \theta u_t, \qquad (3.1)$$
where $v = (1 - \theta\phi)$. Aside from the ongoing error term, there are (in
general) four possible time paths that can follow from a first-order
difference equation of this sort (as shown in Figure 3.1). We observe
explosion if $v > 1$, asymptotic approach to full equilibrium if $0 < v < 1$,
damped oscillations if $-1 < v < 0$, and explosive cycles if $v < -1$. The
standard way of using the model to "explain" business cycles is to assume
that $0 < v < 1$, and that the stochastic disturbance term keeps the economy
from ever settling down to full equilibrium. With these assumptions, the
model predicts ongoing cycles.
It is useful to compare the stability conditions for continuous-time
and discrete-time macro models. In the former, we saw that overshoots
were not possible, so the stability condition is just a qualitative restriction
(that the sign of parameter s be appropriate). But in discrete time,
overshoots of the full equilibrium are possible, so the stability condition
involves a quantitative restriction on the model's parameters (that the
absolute value of u be less than unity). To maximize the generality of
their analyses, macro theorists prefer to restrict their assumptions to
qualitative, not quantitative, presumptions. This fact clarifies why much of
modern macro theory is specified in continuous time. But, as noted above,
if stochastic considerations are to form the focus of the analysis, we must
put up with the more restrictive stability conditions of discrete-time
analysis, since stochastic differential equations are beyond the technical
abilities of many analysts.
Now let us re-introduce ongoing policy, by considering the
possibility of $x > 0$. In this case, $v = (1 - \theta(\phi + x))$. Is an increase in $x$ a
stabilizing influence? Answering this requires the asymptotic variance of
output, which we now derive.
Figure 3.1 Possible Time Paths for Output
1. Direct Instability ($v > 1$)
2. Direct Convergence ($0 < v < 1$)
3. Damped Cycles ($-1 < v < 0$)
4. Explosive Cycles — not shown ($v < -1$)
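These time paths are easy to generate directly from equation (3.1); the $v$ values and the shock size below are illustrative:

import random

random.seed(0)
theta = 1.0
for v in (1.05, 0.8, -0.8):        # direct instability, direct convergence, damped cycles
    y, path = 5.0, []              # start away from the full equilibrium of zero
    for _ in range(12):
        y = v * y + theta * random.gauss(0.0, 0.1)   # small ongoing shocks
        path.append(round(y, 2))
    print(f"v={v:+.2f}: {path}")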
$$E(Y_t) = vE(Y_{t-1}).$$
When this relationship is subtracted from equation (3.1), and the outcome
is squared, we have

$$[Y_t - E(Y_t)]^2 = v^2[Y_{t-1} - E(Y_{t-1})]^2 + \theta^2 u_t^2 + 2v\theta u_t[Y_{t-1} - E(Y_{t-1})]. \qquad (3.2)$$

Taking expectations of this expression, the cross-product term vanishes
($u$ is serially uncorrelated and unpredictable), and we must
consider the effects on the economy of a whole series of stochastic shocks
buffeting the system through time (at any moment, the shocks from many
periods continue to have some effect). To capture this ongoing uncertainty
in the variance calculation, we need to assume that expectations for period
$t$ are based on information from period $(t - j)$, where $j$ is much larger than
one. The convention is to calculate the asymptotic variance by letting $j$
approach infinity. In this case, both $[Y_t - E(Y_t)]^2$ and $[Y_{t-1} - E(Y_{t-1})]^2$ equal
$\sigma^2_Y$, so equation (3.2) leads to (3.3).
$$Y_{t+n} = v^{n+1}Y_{t-1} + \theta Z,$$

where

$$Z = u_{t+n} + vu_{t+n-1} + v^2u_{t+n-2} + \cdots + v^nu_t,$$

so that, as $n$ grows large,

$$\sigma^2_Y = \theta^2(1 + v^2 + v^4 + \cdots)\sigma^2_u = \theta^2\sigma^2_u/(1 - v^2). \qquad (3.3)$$
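A Monte Carlo check of (3.3), with illustrative values $v = 0.8$, $\theta = 1$, $\sigma_u = 1$:

import random

random.seed(1)
v, theta, sigma_u = 0.8, 1.0, 1.0
y, draws = 0.0, []
for t in range(200_000):
    y = v * y + theta * random.gauss(0.0, sigma_u)
    if t > 1_000:                  # discard the transient
        draws.append(y)
mean = sum(draws) / len(draws)
variance = sum((d - mean) ** 2 for d in draws) / len(draws)
print(f"sample variance: {variance:.3f}")
print(f"equation (3.3):  {theta**2 * sigma_u**2 / (1 - v**2):.3f}")   # 2.778

The two numbers agree closely, as they should.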
Differentiating this objective function with respect to the decision variable
G, we have the optimizing rule, and the optimal value for the policy
variable that emerges is:
$$G^* = -\gamma/(\sigma^2 + \sigma^2_e).$$
This system is quite similar to the model we have discussed thus far in this
chapter. Indeed, the aggregate-demand and Phillips-curve relationships
44
are exactly the same. The policy reaction function is more up-to-date in
the present specification, with the central bank adjusting its monetary
aggregate with a view to hitting a price-level, not an output, target. For
simplicity, we assume that the value of the level of all variables with bars
above them is unity (so the associated logarithms (the lower-case
variables appearing in this system) are zero). The solution equation for the
output gap then is, as was the case with our earlier analysis, a first-order
linear difference equation, with coefficient $v = 1 - \theta\phi(1 + x)$ on the
lagged gap. Convergence to full equilibrium occurs if $v$ is a fraction. The
model cannot predict an ongoing cycle of constant amplitude. Is a policy
of price-level targeting recommended? Not necessarily: an increase in
parameter $x$ can make $v$ exceed unity in absolute value, so more
aggressive price-level targeting can be destabilizing. But this policy can
never change the fact that output gravitates to its natural rate value (unity)
as long as instability is avoided. We now show
natural rate value (unity) as long as instability is avoided. We now show
that this property is lost if the curvature involved in the Phillips curve is
altered just slightly. Indeed, the classical dichotomy is lost in this case.
These assertions can be defended by replacing the Phillips curve
in this simple model with the following:
This one change leads to a reduced form for output that is slightly
different:
average rate of output observed in the long run (a real variable) is affected
by the policy reaction coefficient (that is, by monetary policy).
Despite its simplicity, equation (3.4) is consistent with many
different time paths for output, as is summarized in Table 3.1. Two of the
possible dynamic patterns are shown in Figure 3.2. The general shape of
equation (3.4) that is shown in both parts of the figure can be verified by
noting that both the values 0 and 1 for current X imply that next period's X
is zero, and that the slope of the relationship decreases as current X rises
(and is zero when current $X$ is 0.5).
The dashed lines in Figure 3.2 indicate the time sequence.
Convergence to the natural rate (which involves X equal to 0.5 and Y
equal to 1) occurs with the "low" value for $\beta$, but a limit cycle (one that
maintains a constant amplitude forever) occurs with the "high" value for $\beta$.
In the right-hand panel of Figure 3.2, "full equilibrium" involves the
economy shifting back and forth between points A and D (every other
period) forever. The average level of economic activity is represented by
point C. Since point D corresponds to the natural rate of output, point C
involves an average activity level for the economy that is below the
natural rate. Thus, with a non-linear reduced form relationship, it is
possible that output remains below the natural rate (on average) even in
the long run.
Table 3.1

$\beta < 1$: $X$ must approach 0
$1 < \beta < 3$: $X$ must approach $(\beta - 1)/\beta$, so $Y$ approaches one
$3 \le \beta < 3.449$: $X$ follows a 2-period limit cycle
$3.449 \le \beta < 3.549$: $X$ follows a 4-period limit cycle
$3.549 \le \beta < 3.57$: $X$ follows even-numbered-period limit cycles of ever greater number (8, 16, 32, ...)
$3.57 \le \beta \le 4$: $X$ is subject to chaotic fluctuations
$\beta > 4$: $X$ must approach negative infinity
Figure 3.2 Convergence and Cycles in a Nonlinear Model
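The dynamics in Table 3.1 and Figure 3.2 can be reproduced by iterating the map the text describes (zero at $X = 0$ and $X = 1$, flat at $X = 0.5$), which we read as the logistic form $X_{t+1} = \beta X_t(1 - X_t)$; the $\beta$ values below are illustrative:

def tail(beta, x0=0.3, burn=200, keep=4):
    # Iterate X_{t+1} = beta*X_t*(1 - X_t), discard the transient, report 4 values.
    x = x0
    for _ in range(burn):
        x = beta * x * (1 - x)
    out = []
    for _ in range(keep):
        x = beta * x * (1 - x)
        out.append(round(x, 4))
    return out

for beta in (2.5, 3.2, 3.9):       # convergence, 2-period limit cycle, chaos
    print(f"beta={beta}: {tail(beta)}")

With $\beta = 2.5$ the map settles at $(\beta - 1)/\beta = 0.6$; with $\beta = 3.2$ it alternates between roughly 0.513 and 0.799 forever; with $\beta = 3.9$ it wanders chaotically.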
There are three reasons for integrating basic macro analysis and
chaos theory. First, we can appreciate that ongoing changes in exogenous
variables do not need to be assumed to explain business cycles. Second,
we can see that intuitively appealing policies can create cycles (both
regular ones and chaotic ones) — so once again — built-in "stabilizers" can
be destabilizing. Finally, we can appreciate that the entire nature of the
full equilibrium for real variables — whether it involves asymptotic
approach to the natural rate or a limit cycle which averages out to an
output level that is less than the natural rate (to take just two of the
possibilities) — can depend on the short-run targeting strategy for
implementing monetary policy. Keynesians have become particularly
excited about this implication of non-linearity, since it implies that they
need not follow what has been the convention — conceding the full-
equilibrium properties of macro models to the Classicals, and limiting the
Keynesian contribution to a discussion of approach paths. With non-
linearities, the short and long-run properties of models cannot be so
readily separated.
Two useful surveys of work on non-linear dynamics are
Mullineux and Peng (1993) and Rosser (1990). It has proved difficult for
econometricians to distinguish between the traditional view of business
cycles (linear systems with stochastic shocks maintaining the amplitude of
cycles that would otherwise dampen out) and this alternative view
(endogenous deterministic cycles in a non-linear system that never die
down). Despite this inability to reject the relevance of the non-linear
approach, it has not become too popular. Mainstream macroeconomists
have limited their attention to linear stochastic systems, as we do for the
remainder of the chapter.
Real world business cycles are not the two-period saw-tooth cycles that
we see in Figure 3.1; they are more irregular. But if mainstream analysis
does not rely on exogenous variables following sine-curve time paths, and
it simplifies by ignoring non-linearities, how does the mainstream
approach generate realistic-looking cycles in the model's endogenous
variables? The answer is by extending the linear model so that it is
consistent with cycles that appear more like a sine curve. This can be done
if the model is complicated to the point that the reduced-form for output
deviations is a second-order (or higher) linear difference equation. One
way of doing this is by allowing for adaptive expectations. Other ways are
discussed in Chapter 5, where we explore New Classical macro-
economics.
Ignoring all policy variables and error terms for simplicity, the
revised demand and supply functions embed the adaptive-expectations
scheme

$$\pi_t = \lambda\Delta p_{t-1} + (1 - \lambda)\pi_{t-1},$$

and combining the relationships yields a second-order reduced form for
output of the type

$$y_t = v_1 y_{t-1} + v_2 y_{t-2}.$$

It is
left for the reader to verify that the bigger is the role that adaptive
expectations play in the system (that is, the bigger is parameter $\lambda$), the
more likely it is for the system to display cycles, and for the system to
display instability. It is for this reason that Keynesian economists have
emphasized expectations. And, as we discovered in Chapter 2, similar
conclusions emerge when a model-consistent form of expectations
(perfect foresight) is assumed, instead of adaptive expectations.
While the adaptive-expectations hypothesis has some appealing
features, it has an unappealing one as well. It makes little sense to analyze
the effectiveness of any government policy that is intended to improve
agents' economic welfare if that analysis does not permit those agents to
understand both the environment they live in and the effect of the ongoing
policy intervention within that environment. Our modeling has allowed
for all these considerations only in the perfect-foresight case — not in the
adaptive-expectations case.
A less "unrealistic" version of model-consistent expectations
(compared to perfect foresight) is the "rational expectations" hypothesis.
According to this hypothesis, agents make forecast errors, but (since they
are aware of the probability distributions that generate the random shocks
that hit the macro economy), they make their forecasts of the endogenous
variables as if they were calculating the best forecast possible from the
formal model. Thus, forecast errors occur, but they are not systematic.
The remaining sections of the chapter explain how we can analyze models
of this sort, and what some of the policy implications are.
Consider the following simple system:

$$Y_t = C_t + G, \qquad C_t = cY_t^e,$$

and one of

$$Y_t^e = Y_{t-1} \qquad \text{or} \qquad Y_t^e = E_{t-1}(Y_t).$$
The first two equations define goods market clearing and state that private
demand is proportional to agents' expectations concerning what income
they will receive that period. The remaining equations define two
hypotheses concerning how those income expectations may be formed.
The former can be thought of as either static expectations (the forecast for
today is equal to what was actually observed yesterday), or a degenerate
version of adaptive expectations (with the entire weight in the weighted
average put on the most recent past). The second hypothesis is rational
expectations. According to this hypothesis, agents' subjective forecast is
the same as we can calculate by evaluating the mathematical expectation
of actual current income (as determined in the model).
In the static expectations case, the at-a-point-in-time solution
equation for current GDP is:

$$Y_t = cY_{t-1} + G.$$
Consider a once-for-all increase in autonomous spending, $G$. The solution
equation for $Y$ indicates that GDP rises by the same amount as the
increase in $G$ in that very period. Then, $Y$ keeps rising by smaller and
smaller amounts each period, with the overall increase in output (given
infinite time) being:

$$\Delta Y = (1/(1 - c))\Delta G.$$
Under rational expectations, the model instead becomes

$$Y_t = cE_{t-1}(Y_t) + G.$$

This is not a solution equation for actual output, $Y$, since there are still two
endogenous variables ($Y$ and its expectation) in this one equation. We
need a second equation — one for the forecast of $Y$. This can be had by
taking the expectations operator through this "almost reduced-form":
$E_{t-1}(Y_t) = cE_{t-1}(Y_t) + G$, so that $E_{t-1}(Y_t) = (1/(1 - c))G$. The result is

$$Y_t = (1/(1 - c))G.$$
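The contrast between the two hypotheses is easy to see numerically ($c = 0.5$ and a permanent rise of $G$ from 0 to 1 are illustrative choices):

c, G, T = 0.5, 1.0, 8

y, static_path = 0.0, []
for _ in range(T):
    y = c * y + G                  # static expectations: Y_t = c*Y_{t-1} + G
    static_path.append(round(y, 3))

rational_path = [round(G / (1 - c), 3)] * T   # rational: Y_t = G/(1-c) at once

print("static  :", static_path)    # 1.0, 1.5, 1.75, ... approaching 2.0
print("rational:", rational_path)  # 2.0 in every period

Under static expectations output approaches its new equilibrium of 2 only gradually; under rational expectations the entire multiplier effect arrives in the first period.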
We now explore the properties of a slightly more complicated
rational-expectations model. The system we now consider is similar to
that in the previous section in that it involves descriptive aggregate
demand and supply functions. But there are several extensions here. For
one thing, we allow for variable prices and interest rates. Also, we focus
on monetary (not fiscal) policy. Initially, we continue to suppress any
distinction between nominal and real interest rates, and we specify an
arbitrary monetary policy reaction function. The initial model is defined
by the following four equations.
y_t = a - ψr_t + v_t    (3.5)
p_t - p_{t-1} = φy_t + p_t^e - p_{t-1} + u_t    (3.6)
r_t = r̄ + λ(p_t^e - 0)    (3.7)
p_t^e = E_{t-1}(p_t)    (3.8)
y and p stand for the natural logs of real output and the price level. Both the natural rate of output and the central bank's target for the price level are unity, so the logs of these variables are zero (and ȳ = a - ψr̄ = 0). v and u are stochastic shocks — drawn from distributions that involve zero means, constant variances and no serial correlation. r is the interest rate, not its logarithm.
Equation (3.5) is a standard (descriptive) IS relationship, which is also the aggregate demand function since the central bank sets the interest rate. Since we focus exclusively on monetary policy in this discussion, fiscal variables are constant (and embedded in the intercept). Equation (3.6) summarizes the supply side of the goods market. It is a standard expectations-augmented Phillips curve. Equation (3.7) is the central bank's reaction function. The bank raises (lowers) the interest rate above (below) its long-run average value whenever the bank's forecast for the (log of the) price level is above (below) target. There is no need to specify an LM equation, since its only function is to determine what value the bank had to adjust the money supply to (in order for the public's demand for money function to be satisfied at this interest rate). Since the money supply enters none of the other equations of the model, we can afford to ignore this consideration.

Equation (3.8) defines rational expectations; it specifies that the agents' subjective expectation for price is equal to what we (as model manipulators) can calculate as the mathematical expectation of price. The time subscript for the expectations operator denotes that agents know all values for the stochastic shocks up to and including the previous period (time period t-1). Agents (and the central banker) must forecast the current shock — at the end of the previous period before it is revealed — on the basis of all past information. Agents know the exact structure of the economy (the form of all equations and all slope coefficients). The only thing they do not know is what the current and all future values of the additive error terms are.
To solve the model, we first eliminate the interest rate and the expected price variables by simple substitution. The results are:

y_t = -ψλE_{t-1}(p_t) + v_t    (3.9)
p_t = φy_t + E_{t-1}(p_t) + u_t    (3.10)

Taking expectations through both equations shows that E_{t-1}(y_t) = E_{t-1}(p_t) = 0, so the solutions are

y_t = v_t
p_t = φy_t + u_t

and the associated asymptotic variances are

var(y) = σ_v²
var(p) = φ²σ_v² + σ_u²
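These variance expressions are easy to verify by simulation. The sketch below draws the two shocks, builds the solutions, and compares sample variances with the formulas; the value of φ and the shock standard deviations are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, sig_v, sig_u = 0.5, 1.0, 0.8    # illustrative values
T = 200_000

v = rng.normal(0.0, sig_v, T)
u = rng.normal(0.0, sig_u, T)

y = v                  # solution: y_t = v_t
p = phi * y + u        # solution: p_t = phi*y_t + u_t

print("var(y):", round(y.var(), 3), " theory:", sig_v**2)
print("var(p):", round(p.var(), 3), " theory:", phi**2 * sig_v**2 + sig_u**2)
```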
chapters. We verify that policy irrelevance is a very model-specific result
by changing equation (3.7) to the following
r_t = r̄ + λ(p_t - 0)    (3.7a)
and re-deriving the output and price variances. It is left for the reader to
duplicate the earlier steps, and to verify that the revised solutions are:
y_t = (v_t - ψλu_t)/(1 + φψλ)
p_t = φy_t + u_t
formally. But such a conclusion is not warranted, as we will see in the
next section — where we consider a more complicated model. In such
settings, it is almost impossible to sort things out without the precision
that accompanies a formal solution.
interest rate equal to the value that it expects will deliver its goal) we set
E,_,(i,). The result is:
When this relationship is substituted back into the IS function, and the
Phillips curve is simplified slightly, we have the model in the following
revised form:
We now proceed with solving the model, using trial solutions and the
undetermined coefficient method. Since there is only one pre-determined
variable in the model (the previous period's price) and one exogenous
variable (the demand shock), there can only be these two terms in the
solution equations for real output and the price level. Thus, we assume the
following trial solutions:
We now substitute the trial solutions into (3.11a) and (3.12a). First, after substituting (3.13) into (3.12a), we have

c = (1 + aφ)/2

and

d = bφ/2.
We now have expressions for everything in (3.11a) except the error term. Once these are substituted in and the result is compared to (3.13), the remaining two identifying restrictions emerge:

a = α - ψc
b = αd - ψd + 1.

The four identifying restrictions are now solved explicitly for a, b, c and d, so the reduced forms contain only primitive parameters:

a = -1/φ
b = 2/(3 + ψφ)
c = 0
d = φ/(3 + ψφ)
The expressions for price and output volatility (the asymptotic variance
for each variable) follow directly from the trial solutions:
from the outset. It is left for the reader to verify that the revised expressions for the identifying restrictions are:

a = -c
b = 1 - d
d = φ/(2 - c + φ)
c = [(φ + 2) ± √((φ + 2)² - 4)]/2
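The quadratic for c delivers two roots whose product is exactly one, so one root lies inside the unit circle and the other outside whenever φ > 0. A minimal sketch, using an illustrative value of φ, computes both roots and keeps the stable one — the standard selection in rational-expectations models of this kind.

```python
import math

phi = 0.2                                     # illustrative Phillips-curve slope
disc = (phi + 2.0) ** 2 - 4.0
roots = [((phi + 2.0) + s * math.sqrt(disc)) / 2.0 for s in (1.0, -1.0)]

print("roots:", roots, " product:", roots[0] * roots[1])   # product is 1
stable = [r for r in roots if abs(r) < 1.0][0]             # inside the unit circle
print("stable root c =", round(stable, 3))                 # about 0.642
```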
Chapter 11
The Solow growth model (and all the extensions we considered in Chapter
10, except the last one involving non-renewable resources) has the
property that savings/investment policies cannot permanently affect the
economic growth rate. This property of the model stems from the fact that
the man-made item that raises labour's productivity — physical capital
accumulation — involves diminishing returns. Given our standard
assumption of diminishing returns, as more capital is accumulated, its
marginal product must fall. This makes it ever less tempting for households to acquire more capital. As this reaction sets in over time, the
temporary rise in productivity growth that initially accompanies a rise in
saving must wither away. The only way we can have a model that avoids
this prediction is by changing our assumption about the production
process. We need to build a model in which there is a man-made factor of
production that does not involve diminishing returns. The obvious
candidate is "knowledge". There seems to be no compelling reason why
we need assume that the more knowledge we have, the less valuable more
knowledge becomes. Thus, to have a theory of endogenous productivity
growth, economists have built models involving constant returns to scale
in the production process. We consider three of these models in this
chapter.
Y = K^a(BN)^{1-a}
B = aK
Y = AK
wN = (1 - a)Y
(r + δ)K = aY
Ċ/C = r(1 - τ) - ρ
K̇ = Y - C - G - δK
G = τrK + twN
The second and third equations follow from profit maximization; firms
hire each factor up to the point that the marginal product equals the rental
cost. Combining the first and third equations, we have: aA = (r + δ).
Since three variables in this relationship are technology coefficients, this
equation fixes the pre-tax rate of interest. This simplifies the interpretation
of the remaining three relationships.
The fourth equation is the standard micro-based consumption function (where there is an interest-income tax, τ). For simplicity we have assumed infinitely lived family dynasties (for whom the probability of death is zero). The fifth equation is the GDP identity — indicating that the capital stock grows whenever more output is produced than is consumed. The final equation defines a balanced government budget. Spending is financed by the revenue that is collected from the interest-income tax (rate τ) and the wage-income tax (rate t).
When a balanced growth path is reached, capital, output and
consumption all grow at the same rate, n. This growth rate in living
standards is also the labour productivity growth rate; otherwise "effective"
labour would not be growing at the same rate as all other aggregates. We
re-write the final three equations to focus on this equilibrium growth rate.
In doing so, we divide the fifth equation through by K, define x and g as
C/K and G/Y respectively, and we use the second and third equations to
simplify. The results are:
n = r(1 - τ) - ρ
n = A(1 - g) - x - δ
g = τa - (τδ/A) + t(1 - a)
Ċ/C = r - ρ
ẋ/x = r + x + δ - ρ - A
The total differential of this system is:

dĊ = n̄ dC
dẋ = x̄ dx

where n̄ and x̄ are the steady-state values of the growth rate and the consumption-capital ratio.
consumption-capital ratio. Since both these entities are positive, both
these first-order differential equations involve an unstable dynamic
process. If we drew a phase diagram in C-x space, all arrows showing the
tendency for motion would point to outright instability. There is not even
a saddle path to jump on to. But since both variables are "jump" variables,
the economy is capable of jumping immediately to the one and only stable
point — the full equilibrium. In keeping with the convention that we limit
our attention to feasible stable outcomes, we assume that the economy
jumps instantly to its new steady state — the moment the exogenous shock
occurs to make the steady-state growth rate higher. In short, there are no
transitional dynamics in this growth model. We exploit this fact in our
growth policy analysis in the final chapter. We rely on this fact here as
well, as we illustrate the significance of a slightly higher growth rate.
Assume that today's GDP is unity. If n and r denote the GDP growth rate
and the discount rate, the present value (PV) of all future GDP is

PV = (1 + n)/(r - n).
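The significance of a slightly higher growth rate is easy to see numerically. The sketch below combines n = r(1 - τ) - ρ with this PV formula, using the calibration values reported later in the chapter (r = 0.075, ρ = 0.03, τ = 0.20); the experiment of eliminating the tax is illustrative only.

```python
# Growth rate in the AK model: n = r*(1 - tau) - rho; PV of future GDP
# (today's GDP = 1, discount rate r): PV = (1 + n)/(r - n).
r, rho = 0.075, 0.03                 # calibration values used in this chapter

for tau in (0.20, 0.0):
    n = r * (1.0 - tau) - rho
    pv = (1.0 + n) / (r - n)
    print(f"tau = {tau:.2f}: n = {n:.3f}, PV of all future GDP = {pv:.1f}")
```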
Y = AK^a(bH)^{1-a}
K̇ = Y - C - G - δK    (11.1)
Ḣ = B(1 - b)H - δH    (11.2)
B is the gross rate of return on each unit of human capital employed in the
education sector. The total amount that is employed there is proportion
(1 - b) of the total. The net rate of return is (B - δ), since, for simplicity,
we assume that physical and human capital depreciate at the same rate.
It is not obvious how we should interpret the education sector. On
the one hand, it can be thought of as "home study" where individuals
simply refrain from paid employment, and use the time to learn more. On
this interpretation, these individuals are receiving no income (when away
from the manufacturing sector) so there is no taxable wage income
generated in the knowledge sector. This interpretation is appealing when
thinking of university students. The other way of thinking about the
education sector is that it is a "research institute" that employs researchers.
On this interpretation, everyone in society is automatically aware of all
existing knowledge. This sector's job is to produce new knowledge. This
interpretation of the education sector is appealing when thinking about
professors, since professors receive wages which the government can tax.
We proceed in two stages. First, we follow the "home study" inter-
pretation; then, later in this section, we shift to the "research institute"
interpretation. Unfortunately, the policy implications of the two models
are very different.
Households are assumed to optimize. One outcome is a standard
consumption function (listed below). Another implication is that
individuals arrange their portfolio of assets so that — in equilibrium and at
the margin — they are indifferent between their three options: holding
physical capital employed in the goods sector, holding human capital
employed in the goods sector, and holding human capital employed in the
education sector. Since there is no tax on home study activity, the after-tax and before-tax yields on human capital in that sector are the same. We denote that return by

r* = (B - δ).
It is clear from this relationship that, since there is no taxation in the home
study sector, r* is totally pinned down by two technology parameters.
The net of depreciation (but before tax) rate of return on physical capital in the goods sector is

r = aY/K - δ.    (11.3)

For portfolio equilibrium, the after-tax yield, r(1 - τ), must equal what is available on human capital placed in the other sector:

r(1 - τ) = r*.    (11.4)
The model consists of the four numbered equations plus the consumption
function (equation (11.5) below). Here we generalize (compared to the
simple AK model) by re-introducing population growth and overlapping
generations (a positive probability of death). In this case, the growth rate for aggregate consumption (C) is the sum of the growth rate for per capita consumption (c) plus the population growth rate (Ċ/C = n = y + z). The consumption function is the one derived in Chapter 4 (section 4.3). Re-expressed for balanced growth, the model's relationships include:
y + z + x + δ(1 - t) = (1 - t)(r + δ)/a    (11.1a)
y + z = B(1 - b) - δ    (11.2a)
The five equations determine r, r*, b, x and the productivity growth rate y. In simplifying these five relationships, we have substituted in the assumption of balanced steady-state growth — that the growth rates of C, K and H are all equal. Thus, we are limiting our attention to full equilibrium, and for a simplified exposition, we are ignoring the transitional dynamics (that are not degenerate in this case, as they were for the AK model). Readers interested in an analysis of the between-steady-states dynamics should read Barro and Sala-i-Martin (1995, p. 182).
It is worth taking stock of the model's structure. The taste parameter (ρ), the demographic parameters (p and z), the technology parameters (a, B and δ), and the exogenous policy variable (t) are all
given from outside. We can use the model to determine the effect on the
steady-state growth rate of a shift to a smaller government. If there is an
exogenous cut in the tax rate (and a corresponding drop in the level of
government program spending), we would expect — on intuitive grounds —
that there would be a higher growth rate. First, the lower tax rate can be
expected to encourage growth since interest taxation is a disincentive to
accumulate capital. Further, with lower government spending, there is less
wastage. After all, in this model, since G enters neither the utility function
nor the production function, society loses whenever it has any government.
We do not intend to argue that this is the most satisfactory way of
modelling government activity. Instead, we are just pointing out that it is
in this setting that the move-to-smaller-government initiative should be
expected to be at its most powerful. It is alarming, therefore, that we find
such tiny growth rate effects (below).
Of course, to derive quantitative results, the model has to be
calibrated. For illustration, we use the following set of representative
parameter values:
r   net-of-depreciation return on capital           0.075
t   tax rate                                        0.20
y   initial productivity growth rate                0.02
z   population growth rate                          0.02
p   annual death probability                        0.02
δ   depreciation rate                               0.04
ρ   rate of time preference                         0.03
a   physical capital output share (manufacturing)   0.33
r* = (B - δ)(1 - t)
where t is the tax on wage income. (Since we are now highlighting the
taxability of income in this sector, we revert to our earlier practice
(followed in section 11.2 involving the AK model) of allowing for
different rates of tax on the income derived from physical and human
capital.)
r_K(1 - τ) = r*  and  r_H(1 - t) = r*.
Eliminating the pre-tax rates of return from these relationships by substitution, we have

r* = (B - δ)(1 - t)
y + z = B(1 - b) - δ
y + z + x + δ = [(1 - g)(r* + δ(1 - τ))]/[a(1 - τ)]
g = aτ + (1 - a)t/b + a(1 - τ)λ(x - δ)[r* + δ(1 - τ) - δt(1 - a)(1 - t)]/[r* + δ(1 - t)]
y = (r* - ρ) - (p + z)(p + ρ)/(x(1 + λ))
The new variable is λ, the sales tax rate. The five equations determine r*, b, x, y and one of the three tax rates. The taste parameter (ρ), the demographic parameters (p and z), the technology parameters (a, B and δ), and the exogenous policy variables (g and two of the three tax rates) are
all given from outside. To illustrate, we consider using the model to
determine the effect on the steady-state growth rate of a cut in the tax on
wage income (financed by an increase in the tax on the earnings of
physical capital). From the first equation, we see that the cut in t can have
the same large increase in r* that we got in the AK model. Then, focusing
on the first term on the right-hand side of the fifth equation, we see that
the productivity growth rate moves one-for-one with r* (again, as in the
AK setting). The presence of the other term on the right-hand side of the
fifth equation means that things are a little more complicated.
Nevertheless, with a long life expectancy (that is, with a small value for
the death probability) this other term is not very important quantitatively.
Hence, we are back to the Lucas level of excitement. With this version of
the two-sector model, we can expect big and permanent growth-rate
effects from fiscal policy.
But there is a critical difference. In this model it is the wage-
income tax that needs to be cut to stimulate growth, not the interest-
income tax. This is because — of the two man-made inputs that are part of
the production processes — it is human capital that is the central one. Its
production is the process that involves constant returns to scale (and it is
the presence of constant returns to scale that makes endogenous growth
possible). This means that the growth-retarding tax is the one that lowers
the incentive for people to acquire human capital. Even when the cut in
this tax is financed by a higher tax on interest income, growth is
significantly stimulated. Simulations with a calibrated model verify that
growth is stimulated a little more if the wage-tax cut is financed by a
higher consumption tax instead of a higher interest-income tax. But this
difference is very small. Thus, the model provides only the smallest
support for the common presumption in applied tax-policy circles — that it
is physical capital that is the bad thing to tax.
Y = C + G + K̇ + Ḣ
aA = r
(1 - a)Y/H = w.
∫_0^∞ [ln C_t + ln G_t]e^{-ρt} dt

subject to C + K̇ + Ḣ = rK + wH. As usual, ρ is the rate of impatience.
For simplicity at this stage (until the next chapter), we ignore the taxes
that are necessary to finance government spending. Households choose
the time paths for C, K and H. The first-order conditions are:
n = r - ρ
r = w.
Among other things, we add taxes to this system in the next chapter, to
determine the method of government finance that is most conducive to
raising growth and household welfare. In the present chapter, we focus on
the implications of imperfect competition.
The easiest way of allowing for some market power is to specify a
two-stage production process. Intermediate goods are created by primary
producers (competitive firms) employing both forms of capital. Then,
final goods are produced by the second-stage producers. This second stage
involves fixed-coefficient technology (each single final good requires one
intermediate product and no additional primary factors). The final good is
sold at a mark-up over marginal cost:
Letting the price of final goods be the numeraire (set at unity), the mark-
up be m > 1, and marginal cost be denoted by MC, this standard pricing
relationship becomes
1 = m(MC).
MC = r/F_K = w/F_H.
Combining these last relationships, we have

r = F_K/m
w = F_H/m.
n(1 + B) = A - c
B = (1 - a)/a
A = βB^{1-a}
r = aA/m
n = r - ρ
By comparing people's incomes — people with and without further education — many labour economists have estimated the returns to more
education. A typical result from these cross-section regression equations
(involving earnings as a function of years of schooling) is that the annual
return to education is in the 7.5% range. We use this return to estimate the
higher standards of living that we might enjoy if everyone were more
educated. This is a controversial thing to do, since it may be that much of
the estimated return in the cross-section regressions just reflects a signal —
that smarter people can make it through school and less clever people
cannot. So, even if school does nothing but visibly separate people into
these two groups, employers will find it useful to use this signal of native
intelligence to save decision-making costs. The education process itself
may not raise productivity. If this is the case, and the high private return
reflects just signaling, then we should not apply it in an economy-wide
thought experiment. After all, a signal has no discriminating power if
everyone has the credential. Despite this controversy, we assume that
education is not just a signal. This means that the following calculation is
biased toward finding too big a pay-off from more investment in
education. This bias only supports the conclusion that we reach, however,
since the estimated pay-off is very small — despite the upward bias.
Consider transferring 1% of GDP out of current consumption, and
into education. This reallocation is like buying an equity that pays a
dividend of 7.5% of 1% of GDP forever. The present value of the stream
of dividends that accompanies this year's equity purchase is therefore
(.075)(0.01)(a + a² + a³ + ...) = (.075)(.01)V,

where a = (1 + n)/(1 + r), so that V = (1 + n)/(r - n).
The cost of this year's equity purchase is the lost consumption, which is
0.01. The net benefit, NB, is therefore
NB = (0.01)[(.075) V— 1].
Illustrative parameter values are r = 0.075 and n = 0.04. These values make

NB = 0.0123,

that is, 1.23% of one year's GDP.
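The arithmetic is summarized in the following sketch, which reproduces the 1.23% figure from the values just given.

```python
# Net benefit of moving 1% of GDP into education, with a 7.5% annual return:
r, n = 0.075, 0.04                    # discount rate and GDP growth rate
V = (1.0 + n) / (r - n)               # value of a growing perpetuity, about 29.7
NB = 0.01 * (0.075 * V - 1.0)         # dividends minus the forgone consumption
print(f"V = {V:.2f}, NB = {NB:.4f}")  # NB is about 0.0123, i.e. 1.23% of GDP
```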
Given the amount that most western countries are spending on education at this time — something in the order of 5 or 6 percent of GDP — a permanent one-percentage-point increase in the share of GDP going to education has to be regarded as a very big investment. Yet this investment
is estimated to bring a series of annual benefits that is equivalent to a one-
time pay-off equal to 1.23%. The interesting question is: what increase in n (from a starting value of 0.04, with an r value of 0.075) is required to raise V = (1 + n)/(r - n) by 1.23? The answer is that n must rise to 0.0406. In
other words, when measured in terms of the equivalent permanent annual
growth rate increase, a truly large investment in education can be expected
to raise living standards by just four one-hundredths of a percentage point.
This is just what we obtained from the "home study" version of the two-
sector endogenous growth model in section 11.3. This back-of-the-
envelope exercise suggests that this particular model's predictions may be
the most relevant for assessing real-world policy options. In turn, this
suggests that we temper our enthusiasm — compared to Lucas'. For an
alternative defense of this same recommendation, see Easterly (2005).
We close this section with a brief reference to some empirical
studies. Have economists found any evidence that policy can permanently
affect growth rates? At a very basic level, the answer surely has to be
"yes". The experience of the formally planned economies has shown
dramatically that a market system cannot work without a set of supporting
institutions (such as a legal system that creates and protects property
rights). These institutions — that induce people to be producers and not just
rent seekers — are the outcome of policy.
But at a more specific level, there is evidence against policy
mattering. An example pointed out by Romer (2001) involves three
countries. During the 1960-1997 period, the United States, Bolivia and
Malawi had essentially the same growth rates, and very different fiscal
policies. These differences have resulted in very different levels of living
standards (as predicted by exogenous growth theory) but not different
growth rates. Another problem concerns estimated production functions.
As these estimates vary the definition of capital from a narrow physical
capital measure to an ever more broad measure of capital (including
human capital), capital's estimated share ranges between one-third and
eight-tenths. No study has found a value of unity (which is required to
support the linear differential equation that lies at the core of the
endogenous growth framework).
Finally, Jones (1995) has presented some interesting projections.
He used the pre-1940 growth experience of the United States as a basis for
making a simple projection of how much per capita incomes would grow
over the next 45 years. He replicated what someone might have predicted
back then — without taking any account of later developments. He
expected these projections to under-estimate actual growth, since they
ignore the vast increase in human capital that has been accumulated
during the later period. For example, the share of scientists and engineers
in the labour force has increased by 300%. In addition, 85% of individuals now finish high school, rather than 25%. But despite all this, the simple
projections over-estimated the actual growth in living standards. This
exercise has created a challenge for new growth theory to explain.
What advice can be given to policy makers — given all this
controversy? It would seem that — if a policy involves definite short-term
pain and uncertain long-term growth-rate gains — it may be prudent to
postpone implementing that policy until the insight from further research
can be had. This cautious strategy does not mean that nothing can be
recommended in the meantime. After all, a number of policies can be
present-value justified if they deliver a (much less uncertain) one-time
level-effect on living standards. This may not be as "exciting" as a
permanent growth-rate effect, but it is still very worthwhile. For example,
the increase in living standards that we estimated to follow from
government debt-reduction policy (see section 10.4) is a very worthwhile
outcome. This analysis assures us that there is a large scope for relevant
policy analysis — even if it is too soon to base advice on an area of inquiry
— endogenous growth theory — that is still "in progress".
11.6 Conclusions
Chapter 12
Growth Policy
12.1 Introduction
The purpose of this chapter is to assess the analytical support for several
propositions advocated by policy advisors, using a simple version of
endogenous growth theory. We proceed through the following steps. First,
we consider the optimal tax question by comparing income taxes to
expenditure taxes. Without any market distortions allowed for in the
analysis, we show that expenditure taxes are recommended. This basic
analysis has been very influential. For example, the report of the President's Advisory Panel on Federal Tax Reform in the United States (issued in 2005) argues for a wholesale replacement of the income tax with an expenditure tax. Our analysis indicates how the analytical underpinnings for this proposal
are sensitive to the existence of other sources of market failure. For
example, in section 2, we consider a second-best setting in which the
government is "too big." In this situation, it is appropriate for the
government to levy distorting taxes, so the income tax should not be
eliminated. Many policy analysts argue both that government is "too big"
and that income taxes are "bad." This section of the chapter challenges
these analysts to identify the analytical underpinnings of their views. A
related finding follows from an extension that allows for two groups of
households, one so impatient that these individuals do not accumulate
physical capital. Progressive taxation is analyzed by taxing only "rich"
households to finance transfers to the "hand-to-mouth" group. The
analysis does not support replacing a progressive income tax with a
progressive expenditure tax.
Other second-best considerations are considered in later sections
of the chapter. In section 3, we focus on consumption externalities. This
consideration pushes the conclusion for tax policy in the opposite
direction; it strengthens the case for the expenditure tax. Some analysts
(such as Frank (2005)) argue that consumption externalities are very
important from an empirical point of view. How else, asks Frank, can we
explain the fact that survey measures of subjective happiness show no
increases over a half century — when this period has involved significant
increases in the standard measures of economic growth (such as per capita
GDP)?
Unemployment is added in section 4 of the chapter. In this
second-best situation, an employment subsidy is supported as a
mechanism for simultaneously lowering unemployment and raising the
growth rate. This result is a challenge for policy analysts who argue that
we face a trade-off in the pursuit of low-income support policy (our equity
objectives) and higher-growth policy (our efficiency objective). It is
interesting that basic endogenous growth theory can provide examples
such as this one — that illustrate the relevance of what Alan Blinder has
called "percolate up" economics. In this class of models it seems
relatively easy to find settings in which a fiscal policy that is designed to
help those lower down on the "economic ladder" has indirect benefits for
those up the ladder. This is similar in spirit, but opposite in direction, to
the more widely known approach known as "trickle down" economics —
in which a fiscal policy designed to help the rich generates indirect
benefits for those further down the economic ladder.
Finally, in section 5 of the chapter, we outline what basic growth
models imply about how future living standards may be affected by a
major demographic event that is much discussed. We consider the
increase in the old-age dependency ratio that will accompany the aging of
the population that will occur as the post-war baby-boom generation
moves on to retirement.
12.2 Tax Reform: Income Taxes vs. the Progressive Expenditure Tax
Many economists and tax-reform panels have called for a shift in tax
policy: a decreased reliance on income taxation and an increased reliance
on expenditure taxes. The standard analytical underpinning for this tax-
reform proposal is endogenous growth theory. While less emphasis is
given to equity considerations in the theory (since standard growth theory
involves a single representative agent), policy analysts sometimes call for
a progressive expenditure tax — to avoid the regressivity that would
otherwise accompany the use of sales taxes. The purpose of this section of
the chapter is to use a simple version of endogenous growth theory to
review this debate.
We begin with the proposition that total output produced each
period, Y, is used in two ways. First, it is purchased by households to be
consumed that period, C. Second, it is purchased by households to add to
their stock of capital, K. K̇ refers to that period's increase in capital. The supply-equals-demand statement is

Y = C + K̇.    (12.1)
Next we specify the production process; in this initial case, it is very
simple — output is proportional to the one input that is used in the
production process, K:
Y = AK. (12.2)
utility = ∫_0^∞ (ln C_t)e^{-ρt} dt

C + K̇ = rK + R - τrK - sC.
r = c + n    (12.5)
z = τ + sc/r    (12.6)
n = r(1 - τ) - ρ    (12.7)
Equations (12.5) through (12.7) solve for the economy's growth rate, n,
the consumption-to-capital ratio, c, and one of the tax rates (we assume
that the sales tax rate, s, is residually determined by the model). The equations indicate how these three variables are affected by any change we wish to assume in the technology parameter, r, the taste parameter, ρ, and the government's exogenous instruments — the transfers-to-GDP ratio, z, and the income tax rate, τ.
Given the existing debate on tax reform, we focus on cutting the
income tax rate, and financing this initiative with a corresponding increase
in the sales tax rate. The effect on the growth rate follows immediately from (12.7): dn/dτ = -r < 0. Thus, shifting to the sales tax raises the ongoing growth rate of living standards. But there is a one-time level effect on living standards (consumption) as well, and from (12.5) we see that this outcome is adverse: dc/dτ = -dn/dτ = r > 0. So a shift away
from income taxation toward an increased reliance on expenditure taxes
shifts the time path of household consumption in the manner shown in
Figure 12.1. (We noted in section 11.2 that this model involves no
transitional dynamics; that is, the time path for living standards moves
immediately from its original equilibrium path to its new one.)
[Figure 12.1: The time path of living standards (ln C)]
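The figure's message — short-term pain set against a permanently steeper path — can be quantified in one line: the number of years before the new consumption path overtakes the old one is the initial percentage drop divided by the increase in the growth rate. The numbers below are illustrative, not calibration values.

```python
# ln C(t) = ln C0 + n*t: the tax substitution lowers C0 but raises n.
n_old, n_new = 0.030, 0.035          # illustrative growth rates
drop = 0.02                          # illustrative 2% fall in the initial level

crossover = drop / (n_new - n_old)   # years until the new path overtakes
print(f"the new path overtakes the old one after {crossover:.0f} years")
```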
where C_0 = cK_0 and K_0 is the initial capital stock at the time when the tax substitution takes effect. From (12.8), we can determine the effect on overall welfare of the tax substitution:
dSW/dτ = [(1/c)(dc/dτ) + (1/ρ)(dn/dτ)]/ρ
dSW/dτ = r(ρ - c)/(cρ²)
The fact that this expression is negative indicates that the government can raise people's material welfare by cutting the income tax rate — all the way
to zero. This is the standard proof that we should rely on expenditure, not
income, taxes to finance the transfer payments. Because the generosity of
the transfer does not affect the household's consumption-saving choice,
there is no such thing as an "optimal" value for transfers. Whatever level
is arbitrarily chosen has to be financed by expenditure taxes if the
government wishes to maximize the material welfare of its citizens.
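This sign claim can be verified with a couple of lines of symbolic algebra: imposing dc/dτ = r and dn/dτ = -r in the welfare derivative reproduces the expression above, and (12.5) with (12.7) implies c = rτ + ρ > ρ, so the expression is negative. A minimal sympy sketch:

```python
import sympy as sp

r, rho, c = sp.symbols('r rho c', positive=True)

# dSW/dtau with dc/dtau = r and dn/dtau = -r imposed:
dSW_dtau = ((1 / c) * r + (1 / rho) * (-r)) / rho

# Matches the expression in the text:
print(sp.simplify(dSW_dtau - r * (rho - c) / (c * rho**2)))  # prints 0

# Numerical sign check with c = r*tau + rho (tau = 0.2):
print(dSW_dtau.subs({r: 0.075, rho: 0.03, c: 0.045}))        # negative
```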
We now consider a sensitivity test, by asking how the optimal-tax
conclusion is affected by our replacing the government transfer payment
with a program whereby the government buys a fraction of the GDP and
distributes it free to users (as in the case of government-provided health
care). We continue to assume that no resources are needed to convert
newly produced consumer goods into new capital or (flow) into the
government service. Thus, the economy's production function remains
(12.2), and the supply-equals-demand relationship becomes
Y = C + K̇ + G    (12.1a)
utility = ∫_0^∞ [ln C_t + γ ln G_t]e^{-ρt} dt    (12.3a)
Parameter γ indicates the relative value that households attach to the
government service. Since the government imposes the level of
government spending, individual households still have only one choice to
make; they must choose their accumulation of capital with a view to
maximizing the present discounted value of private consumption. The
solution to that problem is still equation (12.4).
The model is now defined by equations (12.1a), (12.2), (12.3a)
and (12.4). Defining g = G/Y as the ratio of the government's spending to
GDP, these relationships can be re-expressed in compact form:
r(1 - g) = c + n    (12.5a)
g = τ + sc/r    (12.6a)
n = r(1 - τ) - ρ    (12.7)
(MU/price) for the private consumption good = (MU/price) for the government-provided good
With the price of both goods being unity, and the utility function that has been assumed, this condition requires 1/C = γ/G; that is, the optimal program-spending-to-GDP ratio is g* = γc/r. This definition is used to simplify the overall welfare effect of varying the income tax rate. The result is:
Y = C + K̇ + Ḣ + G
Y = βK^a H^{1-a}

By defining B = H/K and A = βB^{1-a}, the production function can be re-expressed as Y = AK — the same form as that used above. But in this setting, tax policy affects A.
As explained in section 11.4, it is assumed that households rent
out their physical and human capital to firms (that are owned by other
households). Profit maximization on the part of firms results in factors
being hired to the point that marginal products just equal rental prices (r is
the rental price of physical capital; w that for human capital):
aA = r
(1 - a)Y/H = w.
Households maximize the same utility function. In this case, the constraint involved is: (1 + s)C + K̇ + Ḣ = r(1 - τ)K + w(1 - τ)H. In addition to the familiar consumption-growth rule, n = r(1 - τ) - ρ, the optimization leads to r = w. In equilibrium, households must be indifferent between holding their wealth in each of the two forms of capital, and the equal-yields relationship imposes this equilibrium condition. Finally, the government budget constraint is G = τrK + τwH + sC, and the compact version of the full model is:
full model is:
A = βB^{1-a}
r = aA
n = r(1 - τ) - ρ
B = (1 - a)/a
n(1 + B) = A(1 - g) - c
g = τ + sc/A.
Perhaps the only result that readers might need help with concerns
the effect of the tax substitution on social welfare. The social welfare
function is the utility function of the representative household:
When this solution is evaluated at the extreme points in time, and the result is substituted into the expression for SW, equation (12.8a) above is the result. When differentiating that equation with respect to the tax rate, we use c = C/K, which implies that (dC_0/C_0)/dτ = (dc/c)/dτ since K cannot jump (K_0 is independent of policy). Also, since G_0 = grK_0, we know that dG_0/dτ = 0 as well. The g* = γc/r result emerges from maximizing SW subject to the resource constraint, that is, by evaluating dSW/dG = 0.
We now move on to consider income-distribution issues. Mankiw
(2000) has suggested that, for the sake of realism, all fiscal policy
analyses should allow for roughly half of the population operating as
infinitely-lived family dynasties and the other half operating hand-to-
mouth. Thus far in this tax-policy analysis, we have not followed this
advice. Further, since all households have been identical, progressive
taxes could not be considered. We now extend our analysis so that we can
follow Mankiw's suggestion — both to increase the empirical applicability
of the analysis, and to make it possible to investigate progressive taxes.
Since the poor consume a higher proportion of their income than
do the rich, a shift from a proportional income tax to a proportional
expenditure tax makes the tax system regressive. For this reason, policy
analysts are drawn to the progressive expenditure tax. It is hoped that this
tax can avoid creating an equity problem, as we take steps to eliminate
what is perceived to be an efficiency problem. For the remainder of this
section of the chapter, we use our otherwise standard growth model to
examine shifting between income and expenditure taxes — when both
taxes are strongly progressive. To impose progressivity in a stark fashion
we assume that only "Group 1" households — the inter-temporal
consumption smoothers who are "rich" — pay any taxes. "Group 2"
households — the "poor" individuals who live hand-to-mouth — pay no
taxes. In this section, we revert to our original specification concerning
the expenditure side of the government budget. Once again we assume
that there is no government-provided good; the taxes collected from the
rich are used to make a transfer payment to the poor.
Group 2 households have a utility function just like the Group 1 utility function, except Group 2 people are more impatient. Their rate of time preference is θ, which is sufficiently larger than the other group's rate of impatience, ρ, that Group 2 people never save voluntarily, so they never acquire any physical capital. It is assumed that they simply have to
do some saving, in the form of acquiring the human capital that is
absolutely required for employment. But beyond that "compulsory"
saving, they do none. Thus, this group's consumption function is simply
their budget constraint. Using E to denote total expenditure by this portion of the population (assumed to be one half of the total), we have E = R + (wH - Ḣ)/2, since this group receives no interest income and it pays no taxes, but it does receive the transfer payment. Since only Group 1 pays either tax, both the income and the expenditure tax are progressive.
The government budget constraint is R = τ[rK + (wH/2)] + sC. We re-express the Group 2 expenditure relationship and the government budget constraint by using the optimization conditions for firms and the rich households, and by defining e = E/K and z = R/Y. The compact form of the model is:
A = βB^{1-a}
r = aA
n = r(1 - τ) - ρ
B = (1 - a)/a
n = r - a(c + e)
e = zA + [(1 - a)A - Bn]/2
z = ((1 + a)τ/2) + (sc/A)
We assume that the initial situation involves only an expenditure tax (τ = 0 initially), and we use the model to determine the effects of moving away from this initial situation by introducing the income tax (and cutting the expenditure tax by whatever is needed to maintain budget balance). If the
conventional wisdom (that a progressive expenditure tax is preferred to a
progressive income tax) is to be supported, the analysis must render the
verdict that undesirable effects follow from the introduction of the income
tax. The results are
and for Group 2 (using a similar welfare function with θ replacing ρ):
The model must be calibrated to assess the sign of these overall material
welfare effects. To illustrate the outcomes, we assume the following
illustrative values:
These values imply a rate of impatience for the "poor" that is twice that of the "rich" (θ = 10%), and the following initial conditions: c = 0.182 and e = 0.118. When these representative parameter values are substituted into
the welfare change expressions, we see that both are positive. This implies
that both rich and poor are made better off by having the lower growth
rate that results from replacing the progressive expenditure tax with the
progressive income tax. There appears to be no support for conventional
wisdom, at least for these representative parameter values.
The intuition behind this finding runs as follows. From the selfish
perspective of the rich, the optimal transfer to the poor is zero. Thus, with
a positive transfer, the government is "too big," and since this problem is
a constant proportion of a growing GDP, less growth is preferred. Thus,
we have an additional application of the general principle that emerged
earlier in this section. Finally, as far as the poor are concerned, in this
illustrative calibration, they are impatient enough to prefer the favourable
consumption-level effect that accompanies the shift to the income tax.
The analysis in this section has involved numerous simplifying
assumptions. With a view to providing a little balance, we consider one
such assumption in the remainder of this section. By doing so, we re-
iterate Judd's (2002) point that the presence of imperfect competition
strengthens the case for an increased reliance on expenditure taxation.
Here we add taxes to the monopolistic competition model that was
explained in section 11.4. We assume that there is no government
provided good, that the government returns all revenue to private agents
as a lump-sum transfer, and that all forms of income (including monopoly
profits) are taxed at the same rate. As a result, the government budget
constraint is R = τY + sC. The compact listing of the model is quite
similar to that discussed in earlier parts of this section, except that the
monopoly mark-up parameter, m, appears here in the fourth equation:
n(1 + B) = A - c
B = (1 - a)/a
A = βB^{1-a}
r = aA/m
n = r(1 - τ) - ρ
z = τ + sc/A
The best value for the income tax rate is that which makes this expression
zero. Since m exceeds unity, that best value for τ is a negative number.
Not only should positive income taxation be eliminated; the ownership of
capital should be subsidized. This outcome follows from another second-
best situation. With imperfect competition, there is the opportunity for
profit income. This possibility means a reduced incentive to earn income
by acquiring and employing capital (compared to what exists in a
competitive economy). If there is no other source of market failure, it is
optimal for the government to remove this incentive — by subsidizing the
non-profit source of income (that is, by subsidizing capital accumulation).
In the real economy, there are likely several market failures — for
example, both imperfect competition and an over-expanded government
(at least as viewed from the perspective of the rich). As we have seen in
this section of the chapter, the former distortion calls for a negative τ value, while the latter distortion calls for a positive τ value. Without
further empirical analysis, it seems difficult to defend the proposition that
the best value for the income tax rate is zero.
chooses. Further, we allow one household's utility to depend inversely on
the level of consumption of other households. This last feature is
necessary if we are to explore Frank's (2005) suggestion that positional
goods be taken more seriously in otherwise standard analysis. To simplify
the exposition, and to highlight these extensions, we introduce them into
the most basic model that was considered in the first few paragraphs of
section 12.2. In particular, there is perfect competition, only one
composite form of capital, and no hand-to-mouth individuals or govern-
ment provided good.
For the present discussion, it is best to think of capital as human
capital. Letting x denote the fraction of each household's time that is spent
at work, xH is employed capital, and the remainder, (1 — x)H, is what is
devoted to leisure. The resource constraint, the production function, and
the government budget constraint are:
Y = C + Ḣ
Y = rxH
R = τrxH + sC
The remaining equations that define the model are the first-order
conditions that emerge from the following optimization. Households are
assumed to maximize
n = r(1 - τ) - ρ
ωc(1 + s) = r(1 - τ)(1 - x)
rx = c + n
z = τ + sc/rx
where c is now C/H. These four equations define the behaviour that takes
place in the decentralized market economy. They can be compared to the
four equations that define the outcome that would obtain if a benevolent
planner were in charge. The planner would maximize the same utility
function, but she would recognize that average consumption, C̄, could not differ from C.
Formally, this optimization involves maximizing
n = r - ρ
ωc = r(1 - λ)(1 - x)
rx = c + n
z = τ + sc/rx
How can we ensure that the planner's outcome and the decentralized market outcome coincide? We answer this question in two stages. First, if goods are not positional at all (λ = 0), the outcomes are the same only if τ = s = z = 0. Intuitively, if there is no market failure, there is no role for government. Second, if goods have a positional feature (λ > 0), the outcomes are the same only if τ = 0, s = λ/(1 - λ), and z > 0. Intuitively, if there is a market failure (a negative externality arising from consumption), consumption should be discouraged. There is no role for an income tax.
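The second stage is easy to confirm symbolically: with τ = 0 and s = λ/(1 - λ), the household's leisure-consumption condition collapses to the planner's (the common utility weight ω appears on both sides and cancels, so it is omitted below). A minimal sympy sketch:

```python
import sympy as sp

c, r, x, lam, s = sp.symbols('c r x lam s', positive=True)

market = sp.Eq(c * (1 + s), r * (1 - x))       # household condition with tau = 0
planner = sp.Eq(c, r * (1 - lam) * (1 - x))    # planner's condition

c_market = sp.solve(market.subs(s, lam / (1 - lam)), c)[0]
c_planner = sp.solve(planner, c)[0]
print(sp.simplify(c_market - c_planner))       # 0: the two outcomes coincide
```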
Ignoring all the other issues that have been considered in earlier
sections of this chapter, then, we can conclude as follows: if goods have a
positional feature, then the government should rely on expenditure taxes,
not income taxes. But (just as we noted at the end of section 12.2) it is
difficult to reach a firm conclusion regarding tax policy if there are two
sources of market failure in the economy.
The remaining relationships that are needed to define the model
are the ones that describe how households make their consumption-vs-
saving (investment in capital) decision, how the unemployment rate is
determined, and how the government finances its employment-creating
initiative. We discuss each of these issues in turn.
As in section 12.2, we follow Mankiw (2000) and assume that
there are two groups of households — with each representing one half of
the population. One group is patient and the other is not. The patient
households save as long as the after-tax return on capital exceeds their rate
of impatience, and this saving generates the income that is necessary to
yield a positive percentage growth rate in consumption. This growth in
living standards equals the economy's productivity growth rate. The
simplest version of this outcome is the straightforward proposition that
the productivity growth rate equals the excess of the after-tax interest rate over the household's rate of impatience: Ċ/C = r(1 - τ) - ρ. The other
condition that follows from household optimization concerning capital
accumulation is that both physical and human capital must generate the
same rate of return per unit, so households are indifferent between holding
their wealth in each of the two forms of capital. This condition is
r = (1 — u)w.
These forward-looking households make two separate decisions.
As a group, each family makes the capital-accumulation decision by
following the consumption-growth relationship that was just discussed.
Following Alexopoulos (2003), we can think of this decision being
executed by the family matriarch, who takes the labour market outcomes
of the various family members as exogenous to her planning problem. She
chooses the optimal capital-accumulation plan, and allocates the
corresponding amount of consumption each period to each family member.
Each family member is free to augment that level of consumption by
adjusting her labour market involvement. The workers at each firm are
assumed to rely on a group representative to negotiate wages with their
employer. The negotiator pursues a wage that exceeds the workers'
outside option, but only to a limited degree since the negotiator values a
high level of employment as well. As explained in section 8.3,
unemployment emerges in this setting. Specifically, we have
u = (a(1 - v)/v)(1 - θ), where (as already noted) θ is the employment subsidy and v is the exponent on employment in the labour negotiator's Cobb-Douglas objective function ((1 - v) is the weight on wages). Of
particular interest here is that the unemployment rate varies inversely with
the level of the employment subsidy. Clearly, the growth model is not
needed to arrive at this conclusion. What the model does is facilitate an
examination of how an employment subsidy affects the growth rate of
living standards (the productivity growth rate).
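The direct labour-market effect is immediate from the unemployment equation. In the sketch below, a is taken from the earlier calibration, while v and the subsidy values are assumed for illustration.

```python
# u = (a*(1 - v)/v)*(1 - theta): unemployment falls as the subsidy rises.
a, v = 0.33, 0.8       # a from the earlier calibration; v an assumed weight

for theta in (0.0, 0.05, 0.10):
    u = (a * (1.0 - v) / v) * (1.0 - theta)
    print(f"theta = {theta:.2f}: u = {u:.4f}")
```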
Some macroeconomists might find the separate-decisions format
for specifying the family unappealing. After all, for some time now, the
goal when providing micro-foundations for macroeconomics has been to
specify one overall optimization that simultaneously yields all behavioural
equations for the agents. But, in the interest of tractability, it is now
common to separate certain decisions. As already noted, in her model of
efficiency wages and endogenous growth, Alexopoulos (2003) adopts
precisely this same separation of the asset-accumulation and labour-
market-involvement household decisions. Similarly, in the New Neoclassical Synthesis approach to stabilization policy analysis, modellers
specify two separate firms: one to hire the factors and to sell
"intermediate" products, and the other to buy the intermediate products
and to sell final products to households. These "final goods" sellers have
no costs at all, other than the menu costs that are incurred when changing
prices. The two-stage process involving two separate firms is adopted
solely to separate the optimization for gradual price adjustment from the
optimization for determining factor demands. Thus, while it can be
regarded as contrived, this practice of separating certain decisions appears
to be accepted as a necessary way of proceeding in even the most central
areas of study in modern macroeconomics. As noted, we follow this
practice here.
The second group of households is impatient. They have such a
high rate of time preference that they never save — beyond the investment
in human capital that is necessary to have a job. As a result, this group
simply consumes all their income — which is half the after-tax labour
income generated each period, plus a transfer payment that the
government pays to this lower-income group, minus their spending on
acquiring human capital. This group interacts with employers in the same
way as was described in the previous paragraph. Thus, since this group
constitutes half the population, they represent half the unemployed. They
are relatively poor since, by never acquiring any physical capital, they
receive no "interest" income. The spending function for Group-2
households is given by E = R+[w(1— 0(1— u)H — MI 2, where R is
transfer-payment receipts.
Finally, we note the government's balanced-budget constraint. It is θw(1 - u)H + R = τrK + τw(1 - u)H, a relationship which states that the income tax revenue pays for general transfers and the employment subsidy. Balanced growth is assumed, so Ċ/C = K̇/K = Ḣ/H = n. The equations determine the responses of n, c, e, τ, r, w, u, A and B, when the
employment subsidy is introduced (θ is increased). c and e are defined as C/K and E/K, and it is assumed that the government fixes the transfer-payments-to-GDP ratio, R/Y. For the remainder of this section, we discuss four properties of this system — that n, c and e all rise, and that u falls — as θ becomes positive. It is left for the reader to use the equations to verify the results, if desired.
Introducing the employment subsidy has a direct effect in the
labour market — lower unemployment. As a result, physical capital has
more labour to work with, and this raises physical capital's marginal
product, and so raises the interest rate. Thus, there is an increased
incentive to save. Financing this initiative with a higher income tax rate
shrinks, but does not eliminate, this increased incentive to save. The value
of the formal model is that it allows us to see that between these
competing effects on the after-tax return on saving — the rise in the pre-tax
rate of return and the rise in the tax rate applied to that return — the former
must dominate. Further, the model clarifies that there is no short-term pain
involved (in either the richer or the poorer households having to cut
current consumption) in order to secure this long-term gain (higher
productivity growth). Indeed, there are "good news" outcomes on three
fronts: unemployment falls, the level of consumption rises, and the
ongoing growth rate of living standards rises. We conclude that basic
endogenous-growth analysis can support initiatives designed to reduce
structural unemployment.
As we noted when reporting a similar finding in section 9.3, this
conclusion will not be regarded as too surprising, if one recalls the
Bhagwati/Ramaswami (1963) theorem — a proposition which states that
we have the best chance of improving economic welfare if the attempt to
alleviate a distortion is introduced at the very source of that distortion.
Many prominent economists such as Phelps (1997), Solow (1998),
and Freeman (1999) have advocated employment subsidies. In practical
terms, they call for a major enlarging of the earned income tax credit
policy in the United States. As surprising as it may seem, given the high
profile that is enjoyed by these advocates of employment subsidies, this broad strategy has not been investigated at all extensively within an endogenous-productivity-growth setting. This section of the chapter has been intended as a partial filling of this gap. Of course, much sensitivity testing is needed to see if similar results emerge in other formulations of endogenous growth. Also, it will be instructive to extend this analysis to an open-economy setting. Van Der Ploeg (1996) and Turnovsky (2002) provide useful starting points for pursuing this agenda.
12.5 The Aging Population and Future Living Standards
C = (p + ρ)(K + H)
K̇ = rK + w - C
Ḣ = (r + p)H - w
C = (p + ρ)(K + H)
K̇ = rK + w - C
The second term on the right-hand side is smaller than previously, since
only a portion of the population is working (and therefore receiving wage
income). The (new) third term on the right-hand side accounts for the fact
that each individual receives the wage for only a portion of her remaining
life. For a detailed derivation of this aggregate human capital
accumulation identity, the reader must consult Nielsen (1994). Evaluation
of the double integral (across cohorts of different ages and across the
remaining years in any one person's life) is only possible if wages are
constant. Perfect international capital mobility and constant-returns-to-
scale technology are sufficient assumptions to deliver this independence
of wages from changes in demography. As a result, it is prudent to limit
our application of this particular model to small open economies. In any
event, when the three relationships are combined, we have the revised
aggregate consumption function:
12.6 Conclusions
References
Abel, A., N.G. Mankiw, L.H. Summers and R.J. Zeckhauser (1989),
"Assessing Dynamic Efficiency: Theory and Evidence," Review
of Economic Studies 56, 1-20.
Adelman, I. and F.L. Adelman (1959), "The Dynamic Properties of the
Klein-Goldberger Model," Econometrica 27, 596-625.
Aghion, P. and P. Howitt (1992), "A Model of Growth through Creative
Destruction," Econometrica 60, 323-351.
Aghion, P. and P. Howitt (1998), Endogenous Growth Theory
(Cambridge, Mass.: MIT Press).
Aghion, P., E. Caroli and C. Garcia-Penalosa (1999), "Inequality and
Economic Growth: The Perspective of the New Growth
Theories," Journal of Economic Literature 37, 1615-1660.
Akerlof, G.A. and J. Yellen (1985), "A Near Rational Model of the
Business Cycle, with Wage and Price Inertia," Quarterly Journal
of Economics 100, 823-838.
Alexopoulos, M. (2003), "Growth and Unemployment in a Shirking
Efficiency Wage Model," Canadian Journal of Economics 36,
728-746.
Alexopoulos, M. (2004), "Unemployment and the Business Cycle,"
Journal of Monetary Economics 51, 277-298.
Alvi, E. (1993), "Near-Rationality, Menu Costs, Strategic
Complementarity, and Real Rigidity: An Integration," Journal of
Macroeconomics 15, 619-625.
Amato, J.D., and T. Laubach (2003), "Rule-of-Thumb Behaviour and
Monetary Policy," European Economic Review 47, 791-831.
Ambler, S. and L. Phaneuf (1992), "Wage Contracts and Business Cycle
Models," European Economic Review 36, 783-800.
Andres, J., J. D. Lopez-Salido, and E. Nelson (2005), "Sticky-Price
Models and the Natural Rate Hypothesis," Journal of Monetary
Economics 52, 1025-1053.
Auerbach, A. and L. Kotlikoff (1987), Dynamic Fiscal Policy
(Cambridge: Cambridge University Press).
Ball, L. (1994), "What Determines the Sacrifice Ratio?" in N. G. Mankiw
(ed.) Monetary Policy (Chicago: University of Chicago Press).
Ball, L. (1995), "Disinflation with Imperfect Credibility," Journal of
Monetary Economics 35, 5-23.
Ball, L., D.W. Elmendorf, and N.G. Mankiw (1995), "The Deficit
Gamble," Journal of Money, Credit and Banking 30, 699-720.
Ball, L. and N.G. Mankiw (1994), "A Sticky-Price Manifesto," Carnegie-
Rochester Conference Series on Public Policy 41, 127-151.
Ball, L., N.G. Mankiw, and R. Reis (2005), "Monetary Policy for
Inattentive Economies," Journal of Monetary Economics 52, 703-
725.
Barbie, M., M. Hagedorn, and A. Kaul (2004), "Assessing Aggregate
Tests of Efficiency for Dynamic Economies," Topics in
Macroeconomics 4.
Barlevy, G. (2004), "The Cost of Business Cycles Under Endogenous
Growth," American Economic Review 94, 964-991.
Barro, R.J. (1977), "Unanticipated Money Growth and Unemployment in
the United States," American Economic Review 67, 101-115.
Barro, R.J. and D.B. Gordon (1983), "Rules, Discretion and Reputation in
a Model of Monetary Policy," Journal of Monetary Economics
12, 101-122.
Barro, R.J. and H.I. Grossman (1971), "A General Disequilibrium Model
of Income and Employment," American Economic Review 61, 82-
93.
Barro, R.J. and X. Sala-i-Martin (1995), Economic Growth (New York:
McGraw-Hill).
Basu, S., M.S. Kimball, N.G. Mankiw and D.N. Weil (1990), "Optimal
Advice for Monetary Policy," Journal of Money, Credit and
Banking 22, 19-36.
Benhabib, J., R. Rogerson and R. Wright (1991), "Homework in Macro-
economics: Household Production and Aggregate Fluctuations,"
Journal of Political Economy 99, 1166-1187.
Bernanke, B.S. and V.R. Reinhart (2004), "Conducting Monetary Policy
at Very Low Short-Term Interest Rates," American Economic
Review Papers and Proceedings 94, 85-90.
Bettendorf, L.J.H. and B.J. Heijdra (2006), "Population Ageing and
Pension Reform in a Small Open Economy with Non-Traded
Goods," Journal of Economic Dynamics and Control 30, 2389-
2424.
Bhagwati, J. and V. Ramaswami (1963), "Domestic Distortions, Tariffs,
and the Theory of Optimum Subsidy," Journal of Political
Economy 71, 44-50.
Blackburn, K. and A. Pelloni (2005), "Growth, Cycles, and Stabilization
Policy," Oxford Economic Papers 57, 262-282.
Blanchard, O.J. (1981), "Output, the Stock Market, and Interest Rates,"
American Economic Review 71, 132-143.
Blanchard, O.J. (1985a), "Credibility, Disinflation and Gradualism,"
Economics Letters 17, 211-217.
Blanchard, O.J. (1985b), "Debt, Deficits and Finite Horizons," Journal of
Political Economy 93, 223-247.
Blanchard, O.J. and S. Fischer (1989), Lectures on Macroeconomics
(Cambridge, Mass.: MIT Press).
Blanchard, O.J., and N. Kiyotaki (1987), "Monopolistic Competition and
the Effects of Aggregate Demand," American Economic Review
77, 647-666.
Blanchard, O.J. and L.H. Summers (1986), "Hysteresis and the European
Unemployment Problem," NBER Macroeconomics Annual 1986
1, 15-78.
Blanchard, O. and P. Weil (2003), "Dynamic Efficiency, the Riskless
Rate, and Debt Ponzi Games under Uncertainty," Advances in
Macroeconomics 3.
Blinder, A.S. (1973), "Can Income Taxes Be Inflationary? An Expository
Note," National Tax Journal 26, 295-301.
Blinder, A.S. (1981), "Inventories and the Structure of Macro Models,"
American Economic Review Papers and Proceedings 71, 11-16.
Blinder, A.S. (1987), "Keynes, Lucas, and Scientific Progress," American
Economic Review Papers and Proceedings 77, 130-135.
Bohn, H. (1995), "The Sustainability of Budget Deficits in a Stochastic
Economy," Journal of Money, Credit and Banking 27, 257-271.
Bouakez, H., E. Cardia, and F.J. Ruge-Murcia (2005), "Habit Formation
and the Persistence of Monetary Shocks," Journal of Monetary
Economics 52, 1073-1088.
Bouakez, H. and T. Kano (2006), "Learning-by-doing or Habit
Formation?" Review of Economic Dynamics 9, 508-524.
Brainard, W. (1967), "Uncertainty and the Effectiveness of Policy,"
American Economic Review 57, 411-425.
Brock, W.A., S.N. Durlauf, and K.D. West (2003), "Policy Evaluation in
Uncertain Economic Environments," Brookings Papers on
Economic Activity, 235-322.
Bryant, J. (1991), "A Simple Rational-Expectations Keynes-Type Model,"
in N.G. Mankiw and D. Romer (eds.), New Keynesian Economics
Volume 2 (Cambridge, Mass.: MIT Press).
Buiter, W.H. and Miller, M. (1982), "Real Exchange Rate Overshooting
and the Output Cost of Bringing Down Inflation," European
Economic Review 18, 85-123.
Burbidge, J.B. (1984), "Government Debt: Reply," American Economic
Review 74, 766-767.
Cagan, P. (1956), "The Monetary Dynamics of Hyperinflation," in M.
Friedman, ed., Studies in the Quantity Theory of Money (Chicago:
University of Chicago Press).
Calvo, G.A. (1983), "Staggered Prices in a Utility-Maximizing
Framework," Journal of Monetary Economics 12, 383-398.
Calvo, G.A. and F.S. Mishkin (2003), "The Mirage of Exchange Rate
Regimes for Emerging Market Countries," The Journal of
Economic Perspectives 17, 99-118.
Calvo, G.A. and M. Obstfeld (1988), "Optimal Time-Consistent Fiscal
Policy with Finite Lifetimes," Econometrica 56, 411-432.
Campbell, J., and N.G. Mankiw (1987), "Are Output Fluctuations
Transitory?" Quarterly Journal of Economics 52, 857-880.
Carroll, C.D. (2001), "Death to the Log-Linearized Consumption Euler
Equation! (And Very Poor Health to the Second-Order
Approximation)," Advances in Macroeconomics 1.
Chiang, A.C. (1984), Fundamental Methods of Mathematical Economics
Third edition (New York: McGraw-Hill).
Christiano, L. and M. Eichenbaum (1992), "Current Real-Business-Cycle
Theories and Aggregate Labor-Market Fluctuations," American
Economic Review 82, 430-450.
Christiano, L., M. Eichenbaum, and C. Evans (2005), "Nominal Rigidities
and the Dynamic Effects of a Shock to Monetary Policy," Journal
of Political Economy 113, 1-45.
Coenen, G., A. Orphanides, and V. Wieland (2006), "Price Stability and
Monetary Policy Effectiveness when Nominal Interest Rates are
Bounded at Zero," Advances in Macroeconomics 6.
Cohen, A.J. and G.C. Harcourt (2003), "Retrospectives: Whatever
happened to the Cambridge Capital Theory Controversies?"
Journal of Economic Perspectives 17, 199-214.
Cover, J.P. and P. Pecorino (2004), "Optimal Monetary Policy and the
Correlation between Prices and Output," Contributions to
Macroeconomics 4.
Cooley, T.F. and G.D. Hansen (1991), "The Welfare Costs of Moderate
Inflation," Journal of Money, Credit and Banking 23, 483-503.
Cooper, R., and A. John (1988), "Coordinating Coordination Failures in
Keynesian Models," Quarterly Journal of Economics 103, 441-
464.
Cooper R., and A. Johri (2002), "Learning By Doing and Aggregate
Fluctuations," Journal of Monetary Economics 49, 1539-1566.
Das, S.P. and C. Ghate (2005), "Endogenous Distribution, Politics, and
the Growth-Equity Trade-off," Contributions to Macroeconomics
5.
Devereux, M., A.C. Head and B.J. Lapham (1993), "Monopolistic
Competition, Technology Shocks, and Aggregate Fluctuations,"
Economics Letters 41, 57-61.
Devereux, M.B., P.R. Lane, and J. Xu (2006), "Exchange Rates and
Monetary Policy in Emerging Market Economies," The Economic
Journal 116, 478-506.
Diamond, P.A. (1965), "National Debt in a Neoclassical Growth Model,"
American Economic Review 55, 1126-1150.
Diamond, P.A. (1984), A Search-Equilibrium Approach to the Micro
Foundations of Macroeconomics (Cambridge, Mass. and London,
England: MIT Press).
Dixon, H.D. and E. Kara (2006), "How to Compare Taylor and Calvo
Contracts: A Comment on Michael Kiley," Journal of Money,
Credit, and Banking 38, 1119-1126.
Domeij, D. (2005), "Optimal Capital Taxation and Labour Market
Search," Review of Economic Dynamics 8, 623-650.
Dornbusch, R. (1976), "Expectations and Exchange Rate Dynamics,"
Journal of Political Economy 84, 1161-1176.
Drazen, A. and P.R. Masson (1994), "Credibility of Policies Versus
Credibility of Policymakers," Quarterly Journal of Economics 90,
735-754.
Driskill, R. (2006), "Multiple Equilibria in Dynamic Rational
Expectations Models: A Critical Review," European Economic
Review 50, 171-210.
Edwards, S. and E. Levy Yeyati (2005), "Flexible Exchange Rates as Shock
Absorbers," European Economic Review 49, 2079-2105.
Eggertsson, G.B. and M. Woodford (2004), "Policy Options in a Liquidity
Trap," American Economic Review Papers and Proceedings 94,
76-79.
Easterly, W. (2005), "National Policies and Economic Growth," in P.
Aghion and S. Durlauf (eds.), Handbook of Economic Growth
(Amsterdam: Elsevier).
Estrella, A. and J. Fuhrer (2002), "Dynamic Inconsistencies: Counter-
factual Implications of a Class of Rational Expectations Models,"
American Economic Review 92, 1013-1028.
Farmer, R.E.A. (1993) The Macroeconomics of Self-Fulfilling Prophecies
(Cambridge, Mass.: MIT Press).
Faruqee, H. (2003), "Debt, Deficits, and Age-Specific Mortality," Review
of Economic Dynamics 6, 300-312.
Fatas, A. (2000), "Do Business Cycles Cast Long Shadows? Short-Run
Persistence and Economic Growth," Journal of Economic Growth
5, 147-162.
Fatas, A. and I. Mihov (2003), "The Case for Restricting Fiscal Policy
Discretion," Quarterly Journal of Economics 118, 1419-1447.
Favero, C.A. and F. Milani (2005), "Parameter Instability, Model
Uncertainty and the Choice of Monetary Policy," Topics in
Macroeconomics 5.
Fillion, J-F. (1996), "L'endettement du Canada et ses effets sur les taux
d'interet reels de long terme," Bank of Canada Working Paper 96-
14.
Fischer, S. (1980), "Dynamic Inconsistency, Cooperation, and the
Benevolent Dissembling Government," Journal of Economic
Dynamics and Control 2, 93-107.
Fischer, S. (1995), "Central-Bank Independence Revisited," The American
Economic Review Proceedings 85, 201-206.
Fischer, S. and L. Summers (1989), "Should Nations Learn to Live with
Inflation?" American Economic Review Papers and Proceedings
79, 382-387.
Fleming, J.M. (1962), "Domestic Financial Policies Under Fixed and
Floating Exchange Rates," International Monetary Fund Staff
Papers 9, 369-379.
Frank, R.H. (2005), "Positional Externalities Cause Large and Preventable
Welfare Losses," American Economic Review Papers and
Proceedings 95, 137-141.
Freeman, R. (1999), The New Inequality: Creating Solutions for Poor
America (Boston: Beacon Press).
Friedman, M. (1948), "A Monetary and Fiscal Framework for Economic
Stability," The American Economic Review 38, 245-264.
Friedman, M. (1953), "The Case for Flexible Exchange Rates," in Essays
in Positive Economics (Chicago: University of Chicago Press).
Friedman, M. (1957), A Theory of the Consumption Function (Princeton,
N.J.: National Bureau of Economic Research and Princeton
University Press).
Friedman, M. (1959), A Program for Monetary Stability (New York:
Fordham University Press).
Frydman, R. and E. Phelps, (1983), Individual Forecasting and Aggregate
Outcomes: Rational Expectations Examined (Cambridge:
Cambridge University Press).
Gali, J. and M. Gertler (1999), "Inflation Dynamics: A Structural
Econometric Analysis," Journal of Monetary Economics 44 195-
222.
Gali, J., M. Gertler, and J.D. Lopez-Salido (2005), "Robustness of the
Estimates of the Hybrid New Keynesian Phillips Curve," Journal
of Monetary Economics 52, 1107-1118.
Galor, O. and J. Zeira (1993), "Income Distribution and Macro-
economics," Review of Economic Studies 60, 35-52.
Garcia-Penalosa, C. and S.J. Turnovsky (2005), "Second-best Optimal
Taxation of Capital and Labour in a Developing Economy,"
Journal of Public Economics 89, 1045-1074.
Gertler, M. (1999), "Government Debt and Social Security in a Life-Cycle
Economy," Carnegie-Rochester Conference Series on Public Policy 50, 61-
110.
Geweke, J. (1985), "Macroeconometric Modeling and the Theory of the
Representative Agent," The American Economic Review
Proceedings 75, 206-210.
Giannitsarou, C. (2005), "E-stability Does Not Imply Learnability,"
Macroeconomic Dynamics 9, 276-287.
Goodfriend, M. and R.G. King (1997), "The New Neoclassical Synthesis
and the Role of Monetary Policy," NBER Macroeconomics Annual 12,
231-283.
Goodfriend, M. (2004), "Monetary Policy in the New Neoclassical
Synthesis: A Primer," Federal Reserve Bank of Richmond
Economic Quarterly 90, 21-45.
Gorbet, F. and J. Helliwell (1971), "Assessing the Dynamic Efficiency of
Automatic Stabilizers," Journal of Political Economy 79, 826-
845.
Graham, B.S. and J.R.W. Temple (2006), "Rich Nations, Poor Nations:
How Much Can Multiple Equilibria Explain?" Journal of
Economic Growth 11, 5-41.
Gravelle, J.G. (1991), "Income, Consumption, and Wage Taxation in a
Life-Cycle Model: Separating Efficiency from Distribution,"
American Economic Review 81, 985-995.
Hahn, F.H. and R.M. Solow (1986), "Is Wage Flexibility a Good Thing?"
in W. Beckerman (ed.), Wage Rigidity and Unemployment
(Baltimore: The Johns Hopkins University Press).
Hall, R.E. (2003), "Modern Theory of Unemployment Fluctuations:
Empirics and Policy Applications," American Economic Review
Papers and Proceedings 93, 145-150.
Hansen, G.D. (1985), "Indivisible Labor and the Business Cycle," Journal
of Monetary Economics 16, 309-327.
Hansen, G.D., and R. Wright (1992), "The Labor Market in Real Business
Cycle Theory," Federal Reserve Bank of Minneapolis Quarterly
Review 16, 2-12.
Hartley, J.E. (1997), The Representative Agent in Macroeconomics (New
York: Routledge).
Helpman, E. (2004), The Mystery of Economic Growth (Cambridge,
Mass.: Harvard University Press).
Honkapohja, S. and K. Mitra (2004), "Are Non-fundamental Equilibria
Learnable in Models of Monetary Policy?" Journal of Monetary
Economics 51, 1743-1770.
Howitt, P. (1978), "The Limits to Stability of Full Employment
Equilibrium," Scandinavian Journal of Economics 80, 265-282.
Howitt, P. (1985), "Transaction Costs in the Theory of Unemployment,"
American Economic Review 75, 88-100.
Howitt, P. (1986), "The Keynesian Recovery," Canadian Journal of
Economics 19, 626-641.
Howitt, P. and R.P. McAfee (1992), "Animal Spirits," The American
Economic Review 82, 493-507.
Huang, K.X.D., Z. Liu, and L. Phaneuf (2004), "Why Does the
Cyclical Behavior of Real Wages Change Over Time?" The
American Economic Review 94, 836-857.
Huang, K.X.D. and Z. Liu (2005), "Inflation Targeting: What Inflation
Rate to Target?" Journal of Monetary Economics 52, 1435-1462.
Ireland, P.N. (2003), "Endogenous Money or Sticky Prices?" Journal of
Monetary Economics 50, 1623-1648.
Jackson, A.L. (2005), "Disinflationary Boom Reversion," Macroeconomic
Dynamics 9, 489-575.
Jensen, H. (2002), "Targeting Nominal Income Growth or Inflation,"
American Economic Review 92, 928-956.
Jensen, S. and S. Nielsen (1993), "Aging, Intergenerational Distribution
and Public Pension Systems," Public Finance 48, 29-42.
Johnson, L. (1977), "Keynesian Dynamics and Growth," Journal of
Money, Credit and Banking 9, 328-340.
Jones, C.I. (1995), "Time Series Tests of Endogenous Growth Models,"
Quarterly Journal of Economics 110, 495-525.
Jones, C.I. (2002), Introduction to Economic Growth, Second Edition
(New York: W.W. Norton).
Jones, C.I. (2003), "Population and Ideas: A Theory of Endogenous
Growth," in P. Aghion, R. Frydman, J. Stiglitz and M. Woodford
(eds.) Knowledge, Information, and Expectations in Modern
Macroeconomics (Princeton: Princeton University Press).
Judd, J.P. and B. Trehan (1995), "The Cyclical Behavior of Prices: Interpreting
the Evidence," Journal of Money, Credit and Banking 27, 789-
797.
Judd, K.L. (2002), "Capital Income Taxation with Imperfect Compe-
tition," American Economic Review Papers and Proceedings 92,
417-421.
Keynes, J. M. (1936), The General Theory of Employment, Interest and
Money (London: Macmillan).
Kim, J. and D.W. Henderson (2005), "Inflation Targeting and Nominal-
income-growth Targeting: When and Why Are They Sub-
optimal?" Journal of Monetary Economics 52, 1463-1495.
King, R.G. (1993), "Will the New Keynesian Macroeconomics Resurrect
the IS-LM Model?" Journal of Economic Perspectives 7, 67-82.
King, R.G. (2000), "The New IS-LM Model: Language, Logic, and
Limits," Federal Reserve Bank of Richmond Economic Quarterly
86, 45-103.
King, R.G. (2006), "Discretionary Policy and Multiple Equilibria,"
Economic Quarterly, Federal Reserve Bank of Richmond 92, 1-
15.
Kirman, A.P. (1992), "Whom or What Does the Representative Individual
Represent?" Journal of Economic Perspectives 6, 117-136.
Kirsanova, T., C. Leith, and S. Wren-Lewis (2006), "Should Central
Banks Target Consumer Prices or the Exchange Rate?" The
Economic Journal 116, F208-F231.
Knight, F.H. (1921), Risk, Uncertainty and Profit (London: London
School of Economics).
Koskela, E. and R. Schob (2002), "Why Governments Should Tax Mobile
Capital in the Presence of Unemployment," Contributions to
Economic Analysis and Policy 1.
Kremer, M. (1993a), "The 0-Ring Theory of Economic Development,"
Quarterly Journal of Economics 108, 551-575.
Kremer, M. (1993b), "Population Growth and Technical Change: One
Million B.C. to 1990," Quarterly Journal of Economics 108, 681-
716.
Kydland, F.E. and Prescott, E.C. (1977), "Rules Rather than Discretion:
The Inconsistency of Optimal Plans," Journal of Political
Economy 85, 473-492.
Kydland, F.E. and Prescott, E.C. (1982), "Time to Build and Aggregate
Fluctuations," Econometrica 50, 1345-1370.
Lam, J-P. and W. Scarth (2006), "Balanced Budgets vs. Keynesian Built-
In Stabilizers," mimeo.
Lipsey, R.G. (1960), "The Relation Between Unemployment and the
Rate of Change of Money Wage Rates in the U.K. 1862-1957: A
Further Analysis," Economica 27, 1-31.
Lipsey, R and K. Lancaster (1956), "The General Theory of the Second
Best," Review of Economic Studies 24, 11-32.
Loh, S. (2002), "A Cold-Turkey versus a Gradualist Approach in a Menu
Cost Model," Topics in Macroeconomics 2.
Lucas, R.E.,Jr. (1976), "Econometric Policy Evaluations: A Critique," in
K. Brunner and A.H. Meltzer, eds., The Phillips Curve and the
Labor Market (Amsterdam, New York, and Oxford: North
Holland).
Lucas, R.E.,Jr. (1987), Models of Business Cycles (Oxford: Blackwell).
Lucas, R.E., Jr. (1988), "On the Mechanics of Economic Development,"
Journal of Monetary Economics 22, 3-42.
Lucas, R.E.,Jr. and T.J. Sargent (1979), "After Keynesian
Macroeconomics," Federal Reserve Bank of Minneapolis
Quarterly Review 3, 1-16.
Malik, H.A. (2004), Four Essays on the Theory of Monetary Policy, PhD
Dissertation in Economics, McMaster University.
Malik, H.A. and W.M. Scarth (2006), "Is Price Flexibility De-stabilizing?
A Reconsideration," mimeo.
Malinvaud, E. (1977), The Theory of Unemployment Reconsidered
(Oxford: Blackwell).
Majumdar, S. and S.W. Mukand (2004), "Policy Gambles," American
Economic Review 94, 1207-1223.
Mankiw, N.G. (1985), "Small Menu Costs and Large Business Cycles: A
Macroeconomic Model of Monopoly," Quarterly Journal of
Economics 100, 529-537.
Mankiw, N.G. (1992), "The Reincarnation of Keynesian Economics,"
European Economic Review 36, 559-565.
Mankiw, N.G. (2000), "The Savers-Spenders Theory of Fiscal Policy,"
American Economic Review Papers and Proceedings 90, 120-
125.
Mankiw, N.G., D. Romer and D.N. Weil (1992), "A Contribution to the
Empirics of Economic Growth," Quarterly Journal of Economics
107, 407-437.
Mankiw, N.G. and R. Reis (2002), "Sticky Information Versus Sticky
Prices: A Proposal to Replace the New Keynesian Phillips
Curve," Quarterly Journal of Economics 17, 1295-1328.
Mankiw, N.G. and M. Weinzierl (2006), "Dynamic Scoring: A Back-of-
the-envelop Guide," Journal of Public Economics 90, 1415-1433.
Manning, A. (1990), "Imperfect Competition, Multiple Equilibria, and
Unemployment Policy," Economic Journal Supplement 100, 151-
162.
Mansoorian, A. and M. Mohsin (2004), "Monetary Policy in a Cash-in-
Advance Economy: Employment, Capital Accumulation, and the
Term Structure of Interest Rates," Canadian Journal of
Economics 37, 336-352.
Marrero, G.A. and A. Novales (2005), "Growth and Welfare: Distorting
versus Non-Distorting Taxes," Journal of Macroeconomics 27,
403-433.
McCallum, B.T. (1980), "Rational Expectations and Macroeconomic
Stabilization Policy: An Overview," Journal of Money, Credit
and Banking 12, 716-746.
McCallum, B.T. (1983), "On Non-Uniqueness in Rational Expectations
Models: An Attempt at Perspective," Journal of Monetary
Economics 11, 139-168.
McCallum, B.T. (1984), "Are Bond-Financed Deficits Inflationary? A
Ricardian Analysis," Journal of Political Economy 92, 123-135.
McCallum, B.T. (1986), "On 'Real' and 'Sticky-Price' Theories of the
Business Cycle," Journal of Money, Credit and Banking 18, 397-
414.
McCallum, B.T. (1995), "Two Fallacies Concerning Central-Bank
Independence," American Economic Review Proceedings 85, 207-
211.
McCallum, B.T. (2001), "Monetary Policy Analysis in Models Without
Money," Federal Reserve Bank of St. Louis Review 83, 145-160.
McCallum, B.T. (2003), "Multiple-Solution Indeterminacies in Monetary
Policy Analysis," Journal of Monetary Economics 50, 1153-1175.
McCallum, B. and E. Nelson (1999), "An Optimizing IS-LM Specification
for Monetary Policy and Business Cycle Analysis," Journal of
Money, Credit and Banking 31, 296-316.
McDonald, I. and R.M. Solow (1981), "Wage Bargaining and
Employment," The American Economic Review 71, 896-908.
McGrattan, E.R. (1994), "A Progress Report on Business Cycle Models,"
Federal Reserve Bank of Minneapolis Quarterly Review 18, 2-16.
Mehra, Y.P. (2004), "The Output Gap, Expected Future Inflation and
Inflation Dynamics: Another Look," Topics in Macroeconomics
4.
Metzler, L.A. (1941), "The Nature and Stability of Inventory Cycles,"
Review of Economics and Statistics 23, 113-129.
Milbourne, R.D., D.D. Purvis and D. Scoones (1991), "Unemployment
Insurance and Unemployment Dynamics," Canadian Journal of
Economics 24, 804-826.
Minford, P. and D. Peel (2002), Advanced Macroeconomics: A Primer
(Northampton: Edward Elgar).
Moutos, T., and W. Scarth (2004), "Some Macroeconomic Consequences
of Basic Income and Employment Subsidies," in J. Agell, M.
Keen and A. Weichenrieder (eds.) Labor Market Institutions and
Public Regulation (Cambridge, Mass.: MIT Press).
Mullineux, A. and W. Peng (1993), "Nonlinear Business Cycle
Modelling," Journal of Economic Surveys 7, 41-83.
Mundell, R.A. (1963), "Capital Mobility and Stabilization Policy under
Fixed and Flexible Exchange Rates," Canadian Journal of
Economics and Political Science 29, 475-485.
Mussa, M. (1981), "Sticky Prices and Disequilibrium Adjustment in a
Rational Model of the Inflationary Process," American Economic
Review 71, 1020-1027.
Nelson, E. (2003), "The Future of Monetary Aggregates in Monetary
Policy Analysis," Journal of Monetary Economics 50, 1029-1059.
Nielsen, S. (1994), "Social Security and Foreign Indebtedness in a Small
Open Economy," Open Economies Review 5, 47-63.
Nordhaus, W.D. (1992), "Lethal Model 2: The Limits to Growth
Revisited," Brookings Papers on Economic Activity 2, 1-43.
Oh, S. and M. Waldman (1994), "Strategic Complementarity Slows
Macroeconomic Adjustment to Temporary Shocks," Economic
Inquiry 32, 318-329.
Okun, A.M. (1962), "Potential GNP: Its Measurement and Significance,"
Proceedings of the Business and Economic Statistics Section of
the American Statistical Association, 98-104.
Okun, A.M. (1975), Equity and Efficiency: The Big Trade-Off
(Washington: The Brookings Institution).
Parkin, M. (2000), "What Have We Learned About Price Stability?" in
Price Stability and The Long-Run Target for Monetary Policy
Bank of Canada Conference Proceedings.
Pemberton, J. (1995), "Trends Versus Cycles: Asymmetric Preferences
and Heterogeneous Individual Responses," Journal of
Macroeconomics 17, 241-256.
Persson, T. and G. Tabellini (1994), Monetary and Fiscal Policy
(Cambridge, Mass.: MIT Press).
Pesaran, M.H. (1982), "A Critique of the Proposed Tests of the Natural
Rate-Rational Expectations Hypothesis," Economic Journal 92,
529-554.
Pesaran, M.H. (1987), The Limits to Rational Expectations (Oxford:
Blackwell).
Phelps, E. (1997), Rewarding Work (Boston: Harvard University Press).
Pissarides, C.A. (1998), "The Impact of Employment Tax Cuts on
Unemployment and Wages; The Role of Unemployment Benefits
and Tax Structure," European Economic Review 42, 155-183.
Ramsey, F.P. (1928), "A Mathematical Theory of Saving," Economic
Journal 38, 543-559.
Ravenna, F. and C.E. Walsh (2006), "Optimal Monetary Policy with the
Cost Channel." Journal of Monetary Economics 53, 199-216.
Rebelo, S. (2005), "Real Business Cycle Models: Past, Present and
Future," The Scandinavian Journal of Economics 107, 217-238.
Roberts, J.M. (2006), "How Well Does the New Keynesian Sticky-Price
Model Fit the Data?" Contributions to Macroeconomics 6.
Rogerson, R.D. (1988), "Indivisible Labor, Lotteries and Equilibrium,"
Journal of Monetary Economics 21, 3-16.
Romer, C.D. and D. Romer (1989), "Does Monetary Policy Matter? A
New Test in the Spirit of Friedman and Schwartz," NBER
Macroeconomics Annual 4, 121-170.
Romer, C.D. and D.H. Romer (2004), "A New Measure of Monetary
Shocks: Derivation and Implications," American Economic
Review 94, 1055-1084.
Romer, D. (1993), "The New Keynesian Synthesis," Journal of Economic
Perspectives 7, 5-22.
Romer, D. (2001), Advanced Macroeconomics, Second Edition (New
York: McGraw-Hill).
Romer, P. (1990), "Endogenous Technical Change," Journal of Political
Economy 98, S71-S102.
Rosser, J.B., Jr. (1990), "Chaos Theory and the New Keynesian
Economics," The Manchester School 58, 265-291.
Rowe, N. (1987), "An Extreme Keynesian Macro-economic Model with
Formal Microeconomic Foundations," Canadian Journal of
Economics 20, 306-320.
Rudd, J. and K. Whelan (2005), "New Tests of the New-Keynesian
Phillips Curve," Journal of Monetary Economics 52, 1167-1181.
Rudd, J. and K. Whelan (2006), "Can Rational Expectations Sticky-Price
Models Explain Inflation Dynamics?" American Economic
Review 96, 303-320.
Rudebusch, G.D. (2005), "Assessing the Lucas Critique in Monetary
Policy Models," Journal of Money, Credit, and Banking 37, 245-
272.
Saint-Paul, G. (2006), "Distribution and Growth in an Economy with
Limited Needs: Variable Markups and the End of Work," The
Economic Journal 116, 382-407.
Sargent, T.J. (1978), "Rational Expectations, Econometric Exogeneity,
and Consumption," Journal of Political Economy 86, 673-700.
Sargent, T.J. (1982), "Beyond Demand and Supply Curves in Macro-
economics," American Economic Review Papers and Proceedings
72, 382- 389.
Sargent, T.J. and N. Wallace (1981), "Some Unpleasant Monetarist
Arithmetic," Federal Reserve Bank of Minneapolis Quarterl;
Review 5, 1-17.
Scarth, W. (2004), "What Should We Do About the Debt?" in C. Ragan
and W. Watson (eds.) Is The Debt War Over? (Montreal: Institute
for Research on Public Policy).
Scarth, W. (2005), "Fiscal Policy Can Raise Both Employment and
Productivity," International Productivity Monitor 11, 39-46.
Scarth, W. and M. Souare (2002), "Baby-Boom Aging and Average
Living Standards," SEDAP Research Paper No268.
Shaikh, A. (1974), "Laws of Production and Laws of Algebra: The
Humbug Production Function," Review of Economics and
Statistics 56, 115-120.
Smets, F. and R. Wouters (2003), "An Estimated Stochastic Dynamic
General Equilibrium Model of the Euro Area," Journal of the
European Economic Association 1, 1123-1175.
Smyth, D.J. (1974), "Built-in Flexibility of Taxation and Stability in a
Simple Dynamic IS-LM Model," Public Finance 29, 111-113.
Soderstrom, U. (2002), "Targeting Inflation with a Role for Money."
Contributions to Macroeconomics 2.
Solon, G., R. Barsky and J.A. Parker (1994), "Measuring the Cyclicality
of Real Wages: How Important is Composition Bias?" Quarterly
Journal of Economics 109, 1-25.
Solow, R.M. (1956), "A Contribution to the Theory of Economic
Growth," Quarterly Journal of Economics 70, 65-94.
Solow, R.M. (1998), Monopolistic Competition and Macroeconomic
Theory (Cambridge: Cambridge University Press).
Solow, R.M. (2003), "General Comments on Part IV," in P. Aghion,
R. Frydman, J. Stiglitz and M. Woodford (eds.) Knowledge,
Information, and Expectations in Modern Macroeconomics
(Princeton: Princeton University Press).
Stiglitz, J.E. (1979), "Equilibrium in Product Markets with Imperfect
Information," American Economic Review Papers and
Proceedings 69, 339-346.
Stiglitz, J.E. (1992), "Capital Markets and Economic Fluctuations in
Capitalist Economies," European Economic Review 36, 269-306.
Stoneman, P. (1979), "A Simple Diagrammatic Apparatus for the
Investigation of a Macroeconomic Model of Temporary
Equilibria," Economica 46, 61-66.
Summers, L.H. (1986), "Some Skeptical Observations on Real Business
Cycle Theory," Federal Reserve Bank of Minneapolis Quarterly
Review 10, 23-27.
Summers, L.H. (1988), "Relative Wages, Efficiency Wages and
Keynesian Unemployment," American Economic Review Papers
and Proceedings 78, 383-388.
Svensson, L. (1999), "Price Level Targeting vs. Inflation Targeting,"
Journal of Money, Credit and Banking 31, 277-295.
Svensson, L. (2003), "Escaping from a Liquidity Trap and Deflation: The
Foolproof Way and Others," The Journal of Economic
Perspectives 17, 145-166.
Swan, T.W. (1956), "Economic Growth and Capital Accumulation,"
Economic Record 32, 334-361.
Taylor, J.B. (1977), "Conditions for Unique Solutions in Stochastic
Macroeconomic Models with Rational Expectations," Econo-
metrica 45, 1377-1385.
Taylor, J.B. (1979a), "Estimation and Control of a Macroeconomic
Model with Rational Expectations," Econometrica 47, 1267-1286.
Taylor, J.B. (1979b), "Staggered Wage Setting in a Macro Model," The
American Economic Review 69, 108-113.
Tobin, J. (1969), "A General Equilibrium Approach to Monetary Theory,"
Journal of Money, Credit and Banking 1, 15-29.
Tobin, J. (1975), "Keynesian Models of Recession and Depression," The
American Economic Review Proceedings 65, 195-202.
Tobin, J. (1977), "How Dead Is Keynes?" Economic Inquiry 15, 459-468.
Turnovsky, S.J. (2002), "Knife-Edge Conditions and The Macro-
economics of Small Open Economies," Macroeconomic
Dynamics 6, 307-335.
Van Der Ploeg, F. (1996), "Budgetary Policies, Foreign Indebtedness, the
Stock Market and Economic Growth," Oxford Economic Papers
48, 382-396.
Van Groezen, B., L. Meijdam, and H.A.A. Verbon (2005), "Serving the
Old: Ageing and Economic Growth," Oxford Economic Papers
57, 647-663.
Van Parijs, P. (2000), "A Basic Income for All," Boston Review 25.
Walsh, C. (2003a), Monetary Theory and Policy, Second Edition
(Cambridge, Mass.: MIT Press).
Walsh, C. (2003b), "Speed Limit Policies: The Output Gap and Optimal
Monetary Policy," American Economic Review 93, 265-278.
Wieland, V. (2006), "Monetary Policy and Uncertainty about the Natural
Unemployment Rate: Brainard-Style Conservatism versus
Experimental Activism," Advances in Macroeconomics 6,
Woglom, G. (1982), "Under-Employment Equilibrium with Rational
Expectations," Quarterly Journal of Economics 96, 89-107.
Wolfson, P. (1994), "A Simple Keynesian Model with Flexible Wages: a
Succinct Synthesis," Journal of Macroeconomics 16, 129-156.
Woodford, M. (1991), "Self-Fulfilling Expectations and Fluctuations in
Aggregate Demand," in N.G. Mankiw and D. Romer (eds.), New
Keynesian Economics (Cambridge, Mass.: MIT Press).
Woodford, M. (2003a), "Inflation Stabilization and Welfare," Contri-
butions to Macroeconomics 3.
Woodford, M. (2003b), "Comment on: Multiple-Solution Indeterminacies
in Monetary Policy Analysis," Journal of Monetary Economics
50, 1177-1188.
Wright, S. (2004), "Monetary Stabilisation with Nominal Asymmetries,"
The Economic Journal 114, 196-222.
Readers who wish to pursue the nonuniqueness issue could
consult McCallum (2003), and the critical commentary on that paper
offered by Woodford (2003b). Another development in this literature
focuses on which of the multiple equilibria agents converge to, if they
start with incomplete information and have to gradually acquire
what we have assumed they have from the outset — rational expectations.
The idea is that only equilibria that agents can reach via gradual learning
should be deemed admissible. Two recent references for this literature are
Honkapohja and Mitra (2004) and Giannitsarou (2005). A general survey
of the nonuniqueness problem in monetary policy models is available in
Driskill (2006).
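To see how learning can select among equilibria, consider a minimal
simulation sketch (the model and all parameter values here are
illustrative assumptions, not taken from the text). Agents forecast the
price with their current estimate of its mean and update that estimate by
recursive least squares; the belief converges to the rational-expectations
value when the expectational feedback (alpha) is not too strong:

import numpy as np

# A stylized self-referential model: price_t = mu + alpha * forecast + shock.
# The rational-expectations equilibrium price is mu / (1 - alpha).
rng = np.random.default_rng(0)
mu, alpha, sigma = 2.0, 0.5, 0.1      # illustrative parameters; alpha < 1
belief = 0.0                          # agents' initial estimate of the mean price
for t in range(1, 5001):
    price = mu + alpha * belief + sigma * rng.standard_normal()
    belief += (price - belief) / t    # recursive least-squares (running-mean) update

print(f"learned belief: {belief:.3f}; rational-expectations value: {mu/(1-alpha):.3f}")

With alpha below one, the belief settles near 4.0, the rational-expectations
value; equilibria that fail such a convergence test are the ones this
literature deems inadmissible.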
Despite the incompleteness in the methodology for dealing with
multiple equilibria, we proceed in the present circumstance by simply
rejecting the unstable outcome. Thus, we follow convention and focus on
the fractional value for parameter c. When the same illustrative parameter
value (0.2) is inserted into this version of the model, we have
3.6 Conclusions
full equilibrium less assured. But, more recently, macroeconomists of all
persuasions highlight expectations in their analysis. This is because
stabilization policy is now modelled as an ongoing operation, not an
isolated one-time event, and analysts restrict their attention to models in
which agents are fully aware of what the government has been, and will
be, doing. Thus, as far as stabilization policy analysis is concerned, there
has been a convergence of views in two senses. First, all modern analysis
focuses on model-consistent expectations. A central task for this chapter
has been to equip readers to be able to execute the required derivations.
The second dimension of convergence among macroeconomists — whether
they come from New Classical or more traditional Keynesian
traditions — is that they all emphasize micro-foundations. Without starting
from a specification of utility and a well-identified source of market
failure, there is no way we can argue that one policy is "better" than
another. Understanding these micro-foundations is our task for the next
chapter.
The analyses covered in the latter sections of this chapter are
examples of current research in macroeconomics. Unfortunately, space
constraints force us to exclude many interesting studies. For example,
further work on the theory of monetary policy has focused on more
general objective functions for the central bank. Instead of assuming that
the bank restricts its attention to hitting a price-level target in just the
current period, for example, a vast literature considers central banks with
an objective function involving both current and future values of several
macroeconomic outcomes. A central consideration in this literature is a
credibility issue. Since current outcomes depend on agents' expectations
of the future, it can be tempting for the central bank to promise a
particular future policy (to get favourable expectations effects at that time)
and then to deliver a different monetary policy once the future arrives
(since agents cannot then go back and change their earlier expectations).
Walsh (2003a, chapter 11) surveys this literature.
Other interesting work enriches the theory of rational expectations.
In one strand of the literature, agents are assumed to know some, but not
all, of the current endogenous variable values, when forecasts of the other
variables are being made. For example, when forecasting this period's real
GDP, we usually know more than just the lagged values of all variables.
We know a couple of current variable values — for example, for the
interest rate and the exchange rate (which are reported in the news media
every day, and which are never revised). We do not know enough to
figure out the current values for all the current "error terms," but we can
make somewhat better forecasts than we assumed agents could make in
our analysis above. Minford and Peel (2002) provide a good summary of
this class of rational expectations models.
As already noted, another interesting line of investigation
involves agents who have to use least-squares forecasting techniques to
learn about a change in the economy's structure. In these models, agents'
forecasts gradually converge to the version of rational expectations that
we have assumed from the outset, and examined, in this chapter. While
the level of technical difficulty rises quite dramatically as these extensions
are pursued, they represent important developments. One reason for this
importance is the fact that some empirical work has been "unkind" to the
basic version of rational expectations that we have considered here.
One awkward bit of evidence for the rational-expectations
hypothesis is that surveyed series on expectations differ from the
associated actual series in systematic ways. Another source of tension
follows from sectoral econometric studies, such as estimated consumption
functions. For example, when Friedman (1957) tested his permanent-
income theory of consumption, he tested that hypothesis and an additional
one — that permanent income is related to actual measured income
according to the adaptive expectations scheme — simultaneously. This
package of hypotheses was not rejected by the data. Rational expectations
theorists have criticized Friedman's work on the grounds that the adaptive
expectations part of the package gave him a "free" parameter. Since there
was no restriction based on the theory for the coefficient of expectation
revision to be anything other than a positive fraction, Friedman's
computer was free to pick a value for that parameter that maximized the
goodness of fit of the overall package of hypotheses.
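The "free parameter" point can be made concrete with a small sketch
(the data and functional forms below are fabricated purely for
illustration; they are not Friedman's data or procedure). The
adaptive-expectations revision coefficient is restricted by the theory
only to be a positive fraction, so an estimation routine is free to
search over it for the best fit:

import numpy as np

# Fabricated income and consumption series, for illustration only.
rng = np.random.default_rng(1)
y = 100 + np.cumsum(rng.standard_normal(200))     # measured income
C = 0.9 * y + 5 * rng.standard_normal(200)        # "observed" consumption

def fit_error(lam):
    # Adaptive expectations: yP_t = yP_{t-1} + lam * (y_t - yP_{t-1}).
    yP = np.empty_like(y)
    yP[0] = y[0]
    for t in range(1, len(y)):
        yP[t] = yP[t - 1] + lam * (y[t] - yP[t - 1])
    k = (C @ yP) / (yP @ yP)                      # best-fitting propensity to consume
    return np.sum((C - k * yP) ** 2)

lams = np.linspace(0.01, 0.99, 99)
best = lams[int(np.argmin([fit_error(l) for l in lams]))]
print(f"revision coefficient chosen purely to maximize fit: {best:.2f}")

Nothing in the permanent-income theory pins down the chosen coefficient;
it is selected only because it maximizes the fit of the overall package of
hypotheses, which is exactly the criticism described above.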
When Friedman's critics re-estimate the permanent-income theory
of consumption — as a package with rational expectations instead — they
find much less support. It can be argued that these later studies (for
example, Sargent (1978)) have gone too far in replacing the free-
parameter problem with a very restrictive assumption concerning the rest
of the economy. Since rational expectations has to be implemented as a
full-model proposition, the consumption function cannot be estimated
without specifying the entire rest of the macro model. Since this is done in
a boldly simplified way in any study that focuses on just one
relationship (for example, the consumption function), imposing the
associated cross-equation restrictions can essentially destroy any chance
that the original theory might have had to get a "passing grade." In other
words, as it is often applied, empirical work that combines inter-temporal
optimization by one group of agents, an over-simplified specification of
the rest of the economy, and rational expectations may involve too few —
not too many — free parameters. Carroll's (2001) concern goes even
further. He generates fictitious data from sets of simulations involving
hypothetical consumers who behave exactly according to the theory (the
strict permanent income hypothesis plus rational expectations). He finds
that fictitious researchers who use this data still reject the theory. So there
is certainly more research to do on how to test both our inter-temporal
models and the hypothesis of rational expectations. Beyond acquiring an
initial awareness of these challenges, it is hoped that readers have
acquired two things by studying this chapter: both an ability to solve and
interpret basic rational-expectations models, and a sense of perspective
concerning this literature.
We end this chapter by summarizing the prerequisites for any
analysis of stabilization policy to be deemed acceptable by modern
macroeconomists — whether their background is Classical or Keynesian.
These features are: consistent with inter-temporal optimization, allowing
for some short-run nominal stickiness, and involving model-consistent
expectations. In addition to these features, if the analysis is to be used to
directly inform actual policy debates, it has to be highly aggregative and
fairly simple (that is, involve a limited number of equations). The
framework that appears to satisfy all these prerequisites has been called
the New Neoclassical Synthesis. We considered the basic version of this
framework in section 3.5 in this chapter. In the next chapter, we discuss
the micro-foundations of this framework. Then, in the following two
chapters, we explore in a much more complete manner, the development
of this new synthesis.
Chapter 4
The Micro-Foundations
Of Modern Macroeconomics
4.1 Introduction
Before the early 1970s, virtually all macro policy analyses were
inconsistent with the principles of microeconomics. The households and
firms who operated within the macro model followed the same decision
rules no matter how their environment was altered by changes in policy
regime. Economics is often defined as the subject that explores the
implications of constrained maximization. But this description did not
apply to traditional macroeconomics. Since this was first forcefully
pointed out by Nobel laureate Robert Lucas in 1976, and since
macroeconomists have been working hard to avoid this criticism ever
since, we begin this chapter by explaining what has become known as the
"Lucas critique" of more traditional macroeconomics. The material in this
section provides a compact summary of Lucas' argument — with an
explicit applied example — by relying on a simple theory behind the
Phillips curve. A fuller (inter-temporal) version of this theory of sticky
prices is available in McCallum (1980) and Mussa (1981). Here, for
simplicity, we consider just a one-period optimization.
Firms face negotiation costs when setting wages. When workers
are dissatisfied with the prospect that wage changes will be too low, they
work to rule or strike, and this industrial action lowers output (and raises
firms' costs). When firm owners are dissatisfied with the prospect that
wage changes will be too high, they lock out workers, and these actions
also reduce output and raise costs. If prices are a mark-up on wages, this
same reasoning implies that these adjustment costs are incurred whenever
price increases are either above or below what full-equilibrium
considerations dictate would be an "appropriate" price change. To capture
these considerations, we assume that the price-setter's optimization
involves balancing the costs of fast versus slow price adjustment. Fast
adjustment lessens the cost of being away from full equilibrium, while
slow adjustment lessens the adjustment costs. The following cost function
captures these considerations:
$$\text{costs} = (p_t - \bar{p}_t)^2 + \beta\left[(p_t - p_{t-1}) - (\bar{p}_t - \bar{p}_{t-1})\right]^2$$
$\bar{p}$ is the full-equilibrium price — the one that would have the firm
operating at its natural rate of output (at the minimum point of its long-run
average-cost curve). The first term in the cost function captures the cost of
being away from this desirable long-run level of operations. The second
term captures the adjustment cost that exists whenever the actual change
in wages (and therefore prices) is either above or below what is dictated
by the firm's equilibrium considerations. Parameter β defines the relative
importance of these adjustment costs. To keep this demonstration of the
Lucas critique simplified, we take β to be a "primitive" parameter.
Economics starts with a specification of tastes and technology. If one
wants to explore the determination of tastes, one becomes a psychologist,
not an economist, and if one is interested in understanding technology,
one becomes an engineer, not an economist. So tastes and technology are
the primitive constructs in our discipline. Clearly, parameter β is not
primitive in this ultimate sense. However, since the relative cost of
adjustment can be assumed to be an aspect of the given "technology"
faced by firms, we interpret β in this way, in the interests of a simplified
exposition.
Since the equilibrium considerations and previous history are
beyond the control of this period's price-setter, costs are minimized by
differentiating the cost function with respect to the current actual price
and setting the result equal to zero. After a bit of manipulation, we have
$$(p_t - p_{t-1}) = (\bar{p}_t - \bar{p}_{t-1}) - (1/\beta)(p_t - \bar{p}_t),$$

or, in continuous time,

$$\dot{p} = \dot{\bar{p}} - \gamma(p - \bar{p}),$$

where $\gamma = 1/\beta$. The expectations-augmented Phillips curve is usually
written in the form

$$\dot{p} = \dot{\bar{p}} + \phi(y - \bar{y}),$$
and this form must now be related to the price-change relationship just
derived, via an aggregate demand relationship. For simplicity, let us
assume that transactions technology dictates that each level of output
requires a certain amount of money. This is captured in a quantity-theory
specification for aggregate demand (where parameter θ is also "primitive"
— that is, unaffected by changes in the policy regime):

$$y = \theta(m - p).$$

Monetary policy is summarized by the central bank's reaction function:

$$m = \bar{m} - \chi(p - \bar{p}).$$
According to this relationship, the central bank raises (lowers) the (log of
the) nominal money supply — above (below) its long-run average value —
whenever the price level is below (above) its target value. If parameter χ
is zero, the central bank is a monetary-aggregate targeter; if χ approaches
infinity, the bank is a price-level targeter.
Substituting the policy reaction function into the quantity theory
equation, we obtain the nation's aggregate demand function:
$$y = \theta\left[\bar{m} - p - \chi(p - \bar{p})\right].$$
Next, we define the full-equilibrium price as that value for p that makes
output equal its natural rate. This value can be determined by setting
$y = \bar{y}$ and $p = \bar{p}$ in the demand function:

$$\bar{y} = \theta(\bar{m} - \bar{p}).$$
This equation can be used in two ways. First, if the natural rate is constant,
the time derivative of this equation states that $\dot{\bar{p}} = \dot{\bar{m}}$; in other words, the
core inflation rate is the money growth rate. The second use of the last
equation is that it can be subtracted from the demand function. The result
is:

$$y - \bar{y} = -\theta(1+\chi)(p - \bar{p}).$$
Substituting this last relationship into the price-change equation that was
derived above, we end with:
$$\dot{p} = \dot{\bar{p}} + \phi(y - \bar{y}),$$

where $\phi = \gamma/(\theta(1+\chi))$.
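To make the algebra behind this slope explicit, combining the price-change
rule with the demand-side relationship reconstructed above gives

$$\dot{p} = \dot{\bar{p}} - \gamma(p - \bar{p})
\quad\text{and}\quad
(p - \bar{p}) = -\frac{y - \bar{y}}{\theta(1+\chi)}
\quad\Longrightarrow\quad
\dot{p} = \dot{\bar{p}} + \frac{\gamma}{\theta(1+\chi)}(y - \bar{y}).$$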
This Phillips curve has two important features. First, it states that
temporary deviations of inflation from the monetary growth rate correlate
with temporary deviations of real output from the natural rate. This
implies that there is no inconsistency in the view that inflation is
ultimately a purely monetary phenomenon, and the view that the Phillips
curve is an important ingredient in a theory of the short-run interaction
between inflation and real activity. As noted in earlier chapters, this has long
been a feature of standard macroeconomics.
Second, we conclude that the summary parameter φ cannot be
considered a primitive parameter. According to this derivation, γ and θ are
technology parameters, since they define the adjustment-cost and
transactions processes. Thus, these parameters are not affected by
aggregate demand policy — that is, by changes in the value of monetary-
policy parameter χ. But precisely because these "private sector" response
coefficients are policy invariant, it must be the case that the slope of the
short-run Phillips curve, φ, does depend on monetary policy. Thus, the
slope of the short-run Phillips curve does not represent purely "supply-
side" phenomena.
Standard practice in applied economics (in all fields — not just
macroeconomics) involves estimating a model, and then using those
estimated coefficients to simulate what would happen if policy were
different. The Lucas critique is the warning that it may not make sense to
assume that those estimated coefficients would be the same if an
alternative policy regime were in place. The only way we can respond to
this warning is to have some theory behind each of the model's equations.
We can then derive how (if at all) the coefficients depend on the policy
regime. This derivation of the short-run Phillips curve illustrates two
things. First, the Lucas critique does apply to the Phillips curve — its slope
is a policy-dependent coefficient. Second, the derivation illustrates how
we can react constructively to the Lucas critique. Just because the slope
coefficient depends on policy, it does not mean that legitimate counter-
factual experiments cannot proceed. The value of micro-foundations is
evident. They do not just expose the non-primitive nature of the Phillips
curve slope; they also outline precisely how to adjust that parameter to
conduct theoretically defensible simulations of alternative policy rules.
One illustration of how the answers to policy questions change
when the micro-foundations are respected can be had by reconsidering the
results that were reported in Chapter 2 (Section 2.4). In that analysis —
which involved essentially this same model but without the micro
underpinnings — we determined that the speed of adjustment between full
equilibria (parameter s in $\dot{p} = -s(p - \bar{p})$) could be derived in a
straightforward fashion, and in this case we have $s = \phi\theta(1+\chi)$. Without
considering micro underpinnings, we conclude that more aggressive
policy increases the economy's adjustment speed back to full equilibrium
following disturbances. But, according to the Lucas critique, we should
take account of the micro basis of the Phillips curve, and substitute out the
policy-dependent Phillips curve slope parameter (by using the
$\phi = \gamma/(\theta(1+\chi))$ result) before conducting the policy analysis. When this
is done, the expression for the adjustment speed parameter becomes
$s = \phi\theta(1+\chi) = \gamma$.
This result has dramatically different policy implications. It says that
monetary policy has no effect on the economy's adjustment speed. So
respecting the Lucas critique does not just affect the smaller details of
policy analysis; it can change the analysis in fundamental ways.
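A tiny numerical sketch makes the point (the parameter values are
illustrative assumptions, not taken from the text). For any policy regime χ,
the Phillips slope changes, yet the implied adjustment speed is always γ:

# Illustrative check of the policy-invariance result derived above.
gamma, theta = 0.5, 1.0                  # assumed "technology" parameters
for chi in (0.0, 1.0, 9.0):              # three monetary-policy regimes
    phi = gamma / (theta * (1 + chi))    # policy-dependent Phillips-curve slope
    s = phi * theta * (1 + chi)          # adjustment speed implied by the model
    print(f"chi = {chi:4.1f}: slope phi = {phi:.3f}, speed s = {s:.3f}")

A naive analyst who treats the estimated slope as primitive would predict
faster adjustment under more aggressive policy; substituting out the
policy-dependent slope shows that the speed is γ in every regime.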
But is the Lucas critique actually important in the real world? It
appears so, if the history of recent decades is considered. Our micro-based
analysis predicts that a move toward more direct price-level targeting on
the part of the central bank can be expected to decrease the slope of
estimated Phillips curves. This effect is not usually noted, so, for example,
many people were surprised that the monetary-policy-induced recessions
of the early 1980s and the early 1990s were as long and deep as they were.
These outcomes are less surprising when this underlying theory is
considered. In the 1980s and 1990s, central banks were increasingly
gearing policy to price-level targets. With a flatter Phillips curve emerging
as a result, the contraction in demand that was part of the disinflation
policy had a noticeably larger effect on real output than the previous
estimates of the Phillips curve had suggested. Viewed through the analysis
of this section, these outcomes are not surprising after all. While this
rather dramatic change in monetary policy clearly illustrates the
importance of the Lucas critique, some authors (such as Rudebusch
(2005)) have noted that the empirical evidence concerning less bold
changes in policy does not support the conclusion of structural breaks in
the estimated reduced forms of macro models.
Despite the value of micro-foundations (that has been illustrated
in a policy-relevant setting above), one consideration has kept some
macroeconomists from working toward a more elaborate microeconomic
base for conventional macro models. This problem is aggregation — an
issue which was not addressed in the Phillips curve analysis just given.
The conclusion which emerges from the aggregation literature is that the
conditions required for consistent aggregation are so rigid that constrained
maximization at the individual level may have very few macroeconomic
implications — that is, very few useful insights for aggregative analysis.
This presents a problem since the only way to solve the Lucas critique is
to use optimizing underpinnings to go "behind demand and supply
curves" (Sargent 1982), and to treat only the ultimate taste and technology
parameters as primitive (policy-invariant). If aggregation issues prevent
these individual optimizations from imposing any restrictions on
macroeconomic relationships, the Lucas critique cannot be faced. Yet
only a few commentators (for example, Geweke 1985) emphasize that
ignoring aggregation issues can be as important as ignoring the Lucas
critique.
Thus, we are on the horns of a dilemma. Economists should
ignore neither aggregation problems nor optimizing underpinnings. Yet
the current convention is, in essence, to ignore aggregation issues by
building macro models involving no differences between any individuals
— the so-called representative-agent model. The only justification for this
approach is an empirical one — that the predictions of the macro models,
which are based on such a representative agent, are not rejected by the
data. Thus macroeconomists have reacted to this dilemma in a pragmatic
way. Since aggregate models seem consistent with the macroeconomic
"facts," then no matter how restrictive the aggregation requirements seem,
not too much seems to be lost by assuming that the economy operates as if
these restrictions are appropriate. Some macroeconomists find this
pragmatic approach unconvincing, and they draw attention to inherent
logical difficulties within the representative-agent methodology (see
Kirman (1992) and Hartley (1997)).
In later chapters (10 and 12), we make use of an over-lapping
generations macro model that involves both appealing optimization
underpinnings and explicit aggregation across cohorts of different ages.
Indeed, later in the present chapter, we introduce readers to this model,
which generalizes the single everlasting representative-agent framework.
The over-lapping generations model certainly respects the Lucas critique
without ignoring the aggregation challenge. Nevertheless, since much of
the modern macro literature that we need to survey in this book follows
the representative-agent convention, we focus on this simpler model as
well.
The representative household is assumed to maximize

$$\text{utility} = \sum_{i=0}^{\infty}\left(\frac{1}{1+\rho}\right)^{i} \ln C_i,$$

where i is the index of time periods, ρ is the rate of time preference (the higher is
ρ, the more impatient people are), and the logarithmic form for the utility
function at each point in time is consistent with two propositions — that a
certain minimum amount of consumption (defined to be one unit) is
needed to live (to receive any positive amount of utility), and that (beyond
that one unit) there is diminishing marginal utility of consumption.
The household maximizes this utility function subject to the
constraint that the present value of the entire stream of consumption be no
more than the present value of its disposable income. As readers will have
learned in basic micro theory, to achieve utility maximization, the
household must arrange its affairs so that the ratio of the marginal utilities
of any two items is equal to the ratio of the prices of those two items. In this
case, the two items are the levels of consumption in any two adjacent time
periods. If the price of buying one unit of consumption today is unity, then
the price of one unit of consumption deferred for one period is less than
unity since the household's funds can be invested at the real rate of
interest for one period. Thus, the price of one-period-deferred
consumption is (1/(1+r)). With this insight, and the knowledge that
marginal utility for a logarithmic utility function is the inverse of
consumption, we can write the condition for utility maximization as:
$$C_{t+1} = \left[(1+r)/(1+\rho)\right]C_t,$$

which implies

$$\Delta C = \left[(r-\rho)/(1+\rho)\right]C_t,$$

or, in continuous time,

$$\dot{C} = (r - \rho)C.$$
Consumption is proportional to total (human plus non-human) wealth:

$$C = \rho(A + H).$$
The factor of proportionality is the rate of time preference. To see that this
is the same theory as the one just derived, we need to know how both
human and non-human wealth change over time. In the case of non-
human wealth, the specification is familiar. If each individual's level of
employment is one unit, she acquires assets when the sum of her wage
income, w, and interest income, rA, exceeds current consumption:
$$\dot{A} = w + rA - C.$$
Human wealth is the present value of all future after-tax wage income.
Since intuition is more straightforward in a discrete-time specification,
initially we write human wealth as:
$$H_t = \frac{w_t}{1+r} + \frac{w_{t+1}}{(1+r)^2} + \cdots$$

Since $(1+r)H_t = w_t + H_{t+1}$, this definition implies

$$\Delta H = rH - w.$$
In continuous time, then, the theory can be summarized by three relationships:

$$C = \rho(A + H)$$
$$\dot{A} = w + rA - C$$
$$\dot{H} = rH - w.$$
By taking the time derivative of the first relationship, and substituting the
other two equations into the result, we have:
$$\dot{C} = \rho(\dot{A} + \dot{H}) = \rho\left[r(A+H) - C\right] = (r - \rho)C.$$
The same result can be obtained directly from the continuous-time
optimization problem: the household chooses consumption to maximize

$$\text{utility} = \int_0^{\infty} (\ln C)\,e^{-\rho t}\,dt$$

subject to

$$\dot{A} = rA + w - C.$$
Since many readers will not have been taught how to deal with a situation
in which the objective function is an integral and the constraint is a
differential equation, a simple "cook book" rule is given here. Whenever
readers confront this situation, they should write down what is known as
the Hamiltonian. In this case it is
A ="ln[rA+w— A].
A A —A A = 0,
where subscripts stand for partial derivatives. It is left for the reader to
verify that, in this case, following this procedure leads to exactly what was
derived earlier:
$$\dot{C} = (r - \rho)C.$$
Since the reader already knew the answer in this case, he/she can feel
reassured that the "cook book" method for dealing with continuous-time
specifications does "work".
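For readers who wish to carry out the verification, a sketch follows
(writing $C = rA + w - \dot{A}$ for the argument of the logarithm):

$$\mathcal{H}_A = \frac{re^{-\rho t}}{C}, \qquad
\mathcal{H}_{\dot{A}} = -\frac{e^{-\rho t}}{C}, \qquad
\frac{d}{dt}\mathcal{H}_{\dot{A}} = \frac{\rho e^{-\rho t}}{C} + \frac{e^{-\rho t}\dot{C}}{C^2},$$

so the rule $\mathcal{H}_A - (d/dt)\mathcal{H}_{\dot{A}} = 0$ reduces to
$r - \rho - \dot{C}/C = 0$, which is the result stated above.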
[Figure 4.1: the household's inter-temporal budget constraint and
indifference curve, with present income and consumption on the
horizontal axis and future income and consumption on the vertical axis.]
To have more intuition about this model of household behaviour,
consider Figure 4.1. For simplicity, the diagram refers to a planning
horizon with only two periods: the present (measured on the horizontal
axis) and the future (measured on the vertical axis), and taxes on interest
income are ignored. The household's endowment point is A. Since the
household can borrow and lend at rate r, the maximum amount of
consumption in each of the two periods is marked on both axes. The line
joining these two points is the inter-temporal budget constraint, and the
household chooses the point on this boundary of its feasible set that allows
it to reach the highest indifference curve (point B).
What happens if the government raises taxes today to retire some
government bonds? Since this just amounts to the government substituting
current for future taxes (with the present value of the household's tax
liabilities staying constant), all that happens is that the endowment point
shifts in a northwest direction along a fixed budget constraint to a point
such as C. The household simply borrows more, and continues to consume
at point B. Thus, the Ramsey model involves what is known as
"Ricardian Equivalence" — the proposition that the size of the outstanding
government debt is irrelevant.
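A small numerical sketch may make this concrete. The Python fragment below solves the two-period problem for a household with logarithmic utility; the income, tax, and preference numbers are purely illustrative assumptions. Shifting taxes between the two periods, holding their present value constant, leaves the chosen consumption plan unchanged:

```python
# Two-period consumption choice with log utility:
#   max ln(C1) + beta*ln(C2)  subject to  C1 + C2/(1+r) = lifetime wealth.
# All numbers are illustrative assumptions, not values from the text.
r, beta = 0.05, 0.97
Y1, Y2 = 100.0, 110.0      # endowment income in the two periods

def plan(T1, T2):
    wealth = (Y1 - T1) + (Y2 - T2) / (1 + r)  # PV of disposable income
    C1 = wealth / (1 + beta)                  # from the first-order condition
    C2 = beta * (1 + r) * C1
    return round(C1, 4), round(C2, 4)

print(plan(20.0, 21.0))                   # baseline tax timing
print(plan(30.0, 21.0 - 10.0 * (1 + r)))  # tax raised today, cut later; same PV
# Both lines print the same plan: only the PV of taxes matters.
```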
Some OECD governments have been very proud of themselves in
recent years — as they have been working down their debt-to-GDP ratios.
In a similar vein, policy analysts have regularly referred to the George W.
Bush administration's fiscal policy as irresponsible, since the United
States government debt-to-GDP ratio has been rising dramatically. It is
clear that these OECD Ministers of Finance and these commentators on
US policy do not believe in Ricardian Equivalence. Perhaps this is
because they know that some "real world" individuals are liquidity
constrained — that is, they cannot borrow. We can focus on this situation in
Figure 4.1, by realizing that such an individual faces a budget constraint
given by DAE. Such an individual would be at point A — a corner solution
— initially. Then, after an increase in current taxes (which retires some
bonds and, therefore, cuts future taxes), the budget constraint becomes
DCF. The individual moves to point C. Since current consumption is
affected by the quantity of bonds outstanding, Ricardian Equivalence does
not apply.
Allowing for liquidity constraints is just one way to eliminate the
Ricardian Equivalence property. Another consideration is that people may
discount the future since they expect to die. The Ramsey model assumes
that the decision-making agent lives forever. If agents are infinitely lived
dynasties (households who have children), this assumption may be quite
appropriate. Nevertheless, some households have no children, so we
should consider the case in which agents do not live forever. The model
has been extended (by Blanchard (1985b)) by assuming that each
individual faces a constant probability of death, p. With this assumption,
life expectancy is (1/p) and (as derived in Blanchard and Fischer (1989),
chapter 3) the aggregate consumption function becomes
C = (ρ + p)(A + H)
Ȧ = w + (r + p)A − C − pA
Ḣ = (r + p)H − w.
The new terms in the wealth-accumulation identities stem from the fact
that people realize that the present value of their future wage income is
smaller when death is a possibility. The most convenient way to think of
the arrangements for non-human wealth is that there is a competitive
annuity industry in the economy. It provides each individual with annuity
income on her holdings of A throughout her lifetime, and in exchange the
individual bequeaths her non-human wealth to the annuity company when
she dies. Since a new person is born to replace each one that dies (so that
the overall population size is constant), in aggregate, both these payments
to and from the annuity companies are pA each period. When these
identities are substituted into the level version of the aggregate
consumption function, the result is:
76
This consumption function collapses to Ramsey's when the death
probability is zero. Since government bonds are part of non-human wealth
(variable A), empirical workers have utilized this formulation to test
Ricardian Equivalence (the proposition that p = 0). In pooled time-series
cross-section regressions with future consumption regressed on current
consumption and non-human wealth, empirical researchers are able to
reject the null hypothesis that the A variable's coefficient is zero. We
respect these empirical results when using an overlapping-generations
analysis to evaluate deficit and debt reduction, and the implications of an
aging population, in Chapters 10 and 12, by allowing for p > 0 in that
analysis. However, since the New Neoclassical Synthesis approach to
stabilization policy analysis simplifies by setting p = 0, we follow this
convention as we report on that literature in Chapters 6 and 7.
Thus far, our theory of households has been simplified by
assuming exogenous labour supply. If households can vary the quantity of
leisure they can consume, inter-temporal optimization leads to the
derivation of both the consumption function and a labour-supply function.
With endogenous leisure, both current goods consumption and leisure
turn out to depend on permanent income. As far as labour income is
concerned, permanent income depends on both the current wage rate and
the present (discounted) value of the future wage rate. Thus, current
labour supply depends positively on both the current wage and the interest
rate, and negatively on the future wage. This aspect of the inter-temporal
model of household behaviour plays a central part in New Classical
macroeconomics (Chapter 5) — the first generation of fully micro-based
macro models. Also, the theory behind the "new" Phillips curve (that
forms an integral part of the New Neoclassical Synthesis) relies very
much on this same labour supply function. To have a micro base for the
New Synthesis that is internally consistent, we want the labour supply
function to emerge from the same theory of the household that lies behind
the consumption function. Thus, for several reasons, we must now extend
that earlier analysis to allow for a labour-leisure (labour supply) choice.
As noted, for simplicity and to be able to report the New
Neoclassical Synthesis literature as it is, we revert to the ever-lasting
family-dynasty version of the theory of the household for this extension.
These household dynasties live forever and they choose a saving plan that
is designed to smooth consumption over time. Here we assume that
households maximize the discounted value of a utility function that is a
positive function of consumption and a negative function of labour
supplied:
utility = ∫_0^∞ [ln C − (1/(1+ε))N^{1+ε}]e^{−ρt} dt

subject to

C = rA + (W/P)N − Ȧ.

Following the same "cook book" procedure, the first-order conditions are:

Ċ/C = r − ρ and
N = (W/(PC))^{1/ε}.
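The second of these conditions can be verified quickly (a sketch, using the same substitution as above): inserting the constraint into the objective, the integrand becomes [ln(rA + (W/P)N − Ȧ) − (1/(1+ε))N^{1+ε}]e^{−ρt}, and setting the derivative with respect to N equal to zero gives (W/P)(1/C) − N^ε = 0, so N^ε = W/(PC) and N = (W/(PC))^{1/ε}. Note that 1/ε is the wage elasticity of this labour supply curve (holding consumption constant).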
We use the labour supply function in the derivation of the Phillips curve
below. In this section of the chapter, we focus on the IS relationship. If,
for simplicity, we ignore investment and government spending, we know
that C = Y. Using lower-case letters to denote logarithms (for all variables
except the interest rate) and using r̄ to represent the rate of time
preference, the consumption function can be re-expressed as ẏ = (r − r̄).
Except for adding an autonomous component of expenditure, this
completes the derivation of the "new" IS relationship. To add a
component of demand that is independent of interest rates, we take a
logarithmic approximation of the economy's resource constraint (Y = C +
G): y = αc + (1 − α)g, where α is the full-equilibrium ratio of interest-
sensitive consumption to output. Taking the time derivative of this
approximation, and using ċ = (r − r̄), we end with the following slightly
more general micro-based IS relationship:

ẏ = α(r − r̄) + (1 − α)ġ.
The final term is eliminated (by setting α = 1) when analysts are not
focusing on fiscal policy. We consider the policy implications of this
"new" IS relationship in Chapters 6 and 7.
Differentiating with respect to the firm's choice variables (N, K and I),
the reader can verify that the following rules must be followed for the
firm to be maximizing:
F_N = w
I = a(q − 1),  where a = 1/(2b)
r = F_K/q − δ + q̇/q.
The first rule is familiar; it stipulates that firms must hire labour each
period up to the point that its marginal product has been driven down to its
rental cost (the real wage). The second rule is intuitive; it states that firms
should invest in acquiring more capital whenever it is worth more than
consumption goods (alternatively, whenever the stock market values
capital at more than its purchase price). The third equation states that
individuals should be content to own capital when its overall return equals
what is available on alternative assets (interest rate r). The overall return
is composed of a "dividend" plus a "capital gain". When measured as a
percentage of the amount invested, the former term is the gross earnings
(capital's marginal product divided by the purchase price of capital) minus
the depreciation rate. The final term on the right-hand side measures the
capital gain. In our specification of the firms' investment function in
Chapter 1 (that was embedded within the basic IS relationship), we
assumed static expectations (q̇ = 0). In Chapter 6 we are more general;
we examine what insights are missed by assuming static expectations. But
to simplify the exposition concerning installation costs for the remainder
of this section, we assume static expectations.
It is useful to focus on the implications of this theory of the firm
for the several models that were discussed in Chapter 1. In the textbook
Classical and Keynesian models, firms were assumed to have investment
and labour demand functions just like those that we have derived here. As
a result, we can now appreciate that the implicit assumption behind these
models is that firms maximize profits. That is, they pick factor inputs
according to cost minimization, and they can simultaneously adjust
employment and output to whatever values they want. With no installation
costs for labour, the standard marginal-product-equal-wage condition
applies. But, with adjustment costs for capital, a gap between the marginal
product of capital and its rental cost (the interest rate plus capital's rate of
depreciation) exists in the short run. The optimal investment function is:
I = a[F_K/(r + δ) − 1].
This relationship states that investment is proportional to the gap between
capital's marginal product and its rental cost. Since capital's marginal
product is positively related to the employment of labour, this result
"justifies" assuming that investment depends positively on output and
negatively on the rate of interest. (This is standard in traditional IS-LM
theory, although sometimes (as in Chapter 1 above), analysts simplify by
excluding the income argument.)
If firms encounter a sales constraint (that is, if they pick factor
inputs with the goal of achieving cost minimization without being able to
simultaneously adjust employment and output to whatever values they
want), the optimal investment function is:
For the theory to receive empirical support, we must find that the
estimated partial-adjustment coefficient equals the depreciation rate.
It was mentioned above that we could interpret the Lagrange
multiplier, q, as the value of stocks. In fact, it can also be interpreted as
the slope of the nation's production-possibilities curve. We defend
both interpretations now. In a well-functioning stock market, the value of
equities equals the present value of the income derived from owning
capital. If capital is held into the indefinite future, its per-period earnings
will be national output, PF(N ,K), minus the wage bill, WN. To obtain
the present value of this flow, it is discounted by the sum of the real
interest rate and the depreciation rate. (Not only must the future be
discounted to calculate present value, but also the capital stock must be
maintained.) Assuming constant returns to scale, we have
F(N,K) = F_K K + F_N N. Using this fact and PF_N = W, the market value
of equities can be re-expressed in nominal terms as PF_K K/(r + δ). We
can define q as the ratio of the market's valuation of capital to its actual
purchase price, PK. This means q = F_K/(r + δ). This is most appealing.
When shares can be sold for such a price that capital can increase at the
same rate as the ownership of the company is being diluted, and there are
some additional funds left over, the existing owners should approve
expansion. This was the intuition behind the Keynes-Tobin (1969)
approach to the investment function. It is reassuring to know that we can
embrace a model of investment that is consistent with both this Keynesian
intuition and formal inter-temporal optimization.
Without adjustment costs, there is no difference between
consumption goods and investment goods. In that case, the economy's
production possibilities curve (drawn in C—I space) is a straight
(negatively sloped) 45-degree line. But with adjustment costs, this is not
the case. Ignoring government spending for a simplified exposition, the
amount of goods that are available for consumption is given by
C = F(N,K) − I − bI². The slope of the production possibilities curve is
obtained by taking the total differential of this definition, while imposing
the condition that factor supplies are constant (dN = dK = 0). The result is
dC = −q dI, since we know from our earlier derivations that q = 1 + 2bI.
Thus, Tobin's valuation ratio can also be interpreted as the relative price
of investment goods in terms of consumption goods. As long as macro
theorists specify the resource constraint to include the installation cost
term, the production possibilities curve has its normal bowed-out shape,
and the model involves a consistent aggregation of the two kinds of goods
(even though it appears as simple as a one-sector model).
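Spelling out that total differential: with dN = dK = 0, we have dC = −dI − 2bI dI = −(1 + 2bI)dI = −q dI. The absolute slope of the production possibilities curve therefore equals q, and since q itself rises with I, the curve has the bowed-out shape just described.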
Other versions of the firms' investment function emerge if we
specify alternative installation-cost functions. For example, with an
installation-cost specification that normalizes for the size of the firm (such
as bI²/K or bI²/Y) we get slightly different investment functions:
I/K = a(q − 1) and I/Y = a(q − 1). In this last specification, since the
installation process involves labour, there is also an important revision in
the labour demand function: F_N(1 + b(I/Y)²) = W/P. With this
specification, the separation of demand and supply-side fiscal policy
instruments is blurred. Anything, such as program spending, that can
affect interest rates and therefore investment, is a policy variable that
causes a shift in the position of the labour demand function. (With higher
interest rates, fewer workers are needed for installing capital (at any real
wage).) Since the labour market is what lies behind the aggregate supply
curve for goods, G is a policy variable that shifts the position of both the
demand and supply curves for goods. While some New Keynesian
economists (such as Stiglitz (1992)) use models of equity rationing to give
their model this very feature, space constraints limit our ability to pursue
further these models involving interdependent aggregate supply and
demand curves.
Finally, before closing this section, it is worth noting what
happens if there were no adjustment costs. In this case, parameter b would
be set to zero, and the first-order conditions imply that q would always
equal one, and that F_K = r + δ would hold at all times. In such a world,
capital and labour would be treated in a symmetric fashion. Both factors
could be adjusted costlessly at each point in time so that — even for capital
— marginal product would equal rental cost. There would be no well-
defined investment function in such a world, since firms would always
have the optimal amount of capital. As a result, firms would passively
invest whatever households saved. We examine macro models with this
feature, when we discuss economic growth in Chapters 10-12. But all
models that focus on short-run fluctuations (that is, both Classical and
Keynesian models of short-run cycles) allow for adjustment costs.
The specification of temporary price stickiness that is favoured in the
literature is Calvo's (1983), so we explore that analysis here.
A full treatment would be very complicated. We would need to
derive the firms' factor demand functions (labour demand and investment)
and the firms' optimal price-setting strategy simultaneously — within one
very general inter-temporal optimization. This is rarely attempted in the
literature. Instead, it is assumed that there are two separate groups of firms.
The first produces an "intermediate" product, and it is these firms who
demand labour and capital. The second group of firms buys the
intermediate product and (without incurring any costs, but subject to a
constraint on how often selling prices can be changed) sells them as
final goods. This two-stage procedure is an ad hoc simplification that is
intended to "justify" our not integrating the optimal factor-demands
problem with the optimal-price-setting problem. Even with this separation,
a full treatment of the price-setting problem is more complicated than
what is presented here. The fuller treatment involves product
differentiation across firms. Strictly speaking, this is necessary, since each
individual firm must have some monopoly power to have a price-setting
decision. To ease exposition, however, the present discussion suppresses
the formal treatment of monopolistic competition. By comparing this
analysis to King (2000) and Goodfriend (2004), the reader can verify that
the "bottom line" is unaffected by our following the many authors who
take this short-cut.
Prices are sticky. Specifically, a proportion, θ, of firms cannot
change their price each period. One way of thinking about the
environment is to assume that all firms face a constant probability of
being able to change price. That probability is (1 − θ), so the average
duration of each price is 1/(1 − θ). Then, if p denotes the index of all prices
ruling at each point in time and z denotes what is set by those who do
adjust their price at each point in time, we have:

p_t = θp_{t−1} + (1 − θ)z_t.    (4.1)
Those firms that can adjust choose z_t with a view to the future, since the
price they set now may remain in force for some time; they minimize the
expected discounted sum of squared deviations between that price and
marginal cost:

Σ_{j=0}^{∞} (1/(1+ρ))^j θ^j E_t(z_t − mc_{t+j})².
It is shown in the following paragraphs that the first-order condition for
this problem can be approximated by the following equation (if the
discount factor is set to unity):

z_t = (1 − θ)mc_t + θE_t(z_{t+1}).    (4.3)

To verify this, let χ denote the discount factor, so the objective is

Σ_{j=0}^{∞} χ^j θ^j E_t(z_t − mc_{t+j})².

The first-order condition is

Σ_{j=0}^{∞} χ^j θ^j E_t(z_t − mc_{t+j}) = 0, so

z_t Σ_{j=0}^{∞} (θχ)^j = Σ_{j=0}^{∞} χ^j θ^j E_t(mc_{t+j}), and

z_t = (1 − θχ)Σ_{j=0}^{∞} (θχ)^j E_t(mc_{t+j}).

Setting χ = 1 and writing this expression recursively yields equation (4.3).
The next steps in the derivation are as follows. First, write (4.1) forward
one period in time. Second, take the expectations operator, E_t, through
the result. Third, substitute the result into (4.3), then that result into (4.1).
Finally, simplify what remains, using the definition of the inflation rate:
π_t = p_t − p_{t−1}. We end with
π_t = E_t(π_{t+1}) + μ(rmc_t),

where rmc stands for (the logarithm of) each firm's real marginal cost:
rmc_t = mc_t − p_t, and where μ = (1 − θ)²/θ.
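Carrying out the steps just listed explicitly (with the discount factor set to unity): writing (4.1) forward and taking expectations gives E_t(p_{t+1}) = θp_t + (1 − θ)E_t(z_{t+1}), so E_t(z_{t+1}) = [E_t(p_{t+1}) − θp_t]/(1 − θ). Substituting this into (4.3), and the result into (4.1), we have p_t(1 + θ²) − θp_{t−1} − θE_t(p_{t+1}) = (1 − θ)²mc_t, which rearranges to the equation just reported, with μ = (1 − θ)²/θ.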
Real marginal cost is, in turn, positively related to the level of output; in
logarithms, we can write this relationship as

(mc − p) = ηy + k,

where k is a constant.
We pick units so that, in full equilibrium, price equals marginal cost
equals unity. This implies that firms have monopolistic power only when
they are out of long-run equilibrium, and it implies that the logarithms of
both price and marginal cost are zero. Thus, the full-equilibrium version
of this last equation is

0 = ηȳ + k.

Subtracting this last relationship from the previous one, we end with

(mc − p) = η(y − ȳ).
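Before moving on, it may help to see the stickiness that this structure delivers. The short Python simulation below is illustrative only (the values of θ and χ, the truncation horizon, and the assumed marginal-cost path are not from the text); it computes each period's reset price from the truncated optimal-price formula and aggregates with the price-index equation (4.1):

```python
# Calvo pricing under perfect foresight (standing in for E_t):
#   z_t = (1 - theta*chi) * sum_j (theta*chi)^j * mc_{t+j}
#   p_t = theta*p_{t-1} + (1 - theta)*z_t
# Parameter values and the mc path are illustrative assumptions.
theta, chi = 0.75, 0.99
T, J = 40, 400                                   # periods shown, truncation horizon
mc = [0.01 * (0.9 ** t) for t in range(T + J)]   # a decaying marginal-cost path

p_prev = 0.0
for t in range(T):
    z = (1 - theta * chi) * sum((theta * chi) ** j * mc[t + j] for j in range(J))
    p = theta * p_prev + (1 - theta) * z
    if t < 6:
        print(t, round(p, 5))   # the price level adjusts only gradually
    p_prev = p
```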
4.6 Conclusions
Chapter 5
The Challenge of
New Classical Macroeconomics
5.1 Introduction
Our analysis thus far may have left the impression that all macro-
economists feel comfortable with models which "explain" short-run
business cycles by appealing to some form of nominal rigidity. This may
be true as far as macro policy-makers are concerned, but this has not been
a good description of the view of many macro theorists. These theorists
are concerned that (until recently) the profession has not been very
successful in providing micro foundations for nominal rigidity. Even those
who have pioneered the "new synthesis" of Keynesian and Classical
approaches (for example, King (2000)) have expressed concern that some
of its underlying assumptions concerning sticky prices can be regarded as
"heroic."
The response of New Keynesian academics is to work at
developing more convincing models of "menu" costs (the costs of
changing nominal prices), and to elaborate how other features, such as
real rigidities and strategic complementarities, make the basing of
business cycle theory on seemingly small menu costs appealing after all
(see Chapter 8). But there has been another reaction — on the part of New
Classicals. Their reaction to the proposition that menu costs "seem" too
small to explain business cycles is to investigate whether cycles can be
explained without any reference to nominal rigidities at all. They have
been remarkably successful in demonstrating that this is possible. Further,
some revolutionary conclusions have been derived from this
"equilibrium" approach to cycles. Perhaps the most central result is that
the estimated benefit to society of completely eliminating business cycles
may be trivial! This chapter explains this so-called real business cycle
approach, and how it leads to this strong verdict regarding stabilization
policy.
L_t + N_t = 1    (5.2)
Y_t = e^{z_t} K_t^α N_t^{1−α}    (5.3)
z_t = z̄ + ẑ_t    (5.4)
ẑ_t = φẑ_{t−1} + v_t    (5.5)
K_{t+1} = (1 − δ)K_t + I_t    (5.6)
Y_t = C_t + I_t    (5.7)
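The flavour of this system can be conveyed with a short simulation. The Python sketch below is not a solution of the full optimization problem; it simply feeds the technology process (5.5) through the production function (5.3) and the accumulation identity (5.6), with an assumed constant saving rate and fixed labour standing in for the household's actual decision rules (all parameter values are illustrative assumptions):

```python
import math
import random

# Illustrative parameters: capital share, depreciation, shock persistence,
# assumed saving rate, fixed labour supply.
alpha, delta, phi, s, N = 0.33, 0.025, 0.95, 0.2, 0.3
K, z_hat = 10.0, 0.0
random.seed(0)

for t in range(20):
    z_hat = phi * z_hat + random.gauss(0.0, 0.007)           # (5.5): AR(1) shock
    Y = math.exp(z_hat) * (K ** alpha) * (N ** (1 - alpha))  # (5.3), with z-bar = 0
    I = s * Y                                                # assumed saving rule
    K = (1 - delta) * K + I                                  # (5.6): accumulation
    print(t, round(Y, 4), round(K, 4))   # output inherits the shock's persistence
```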
A prominent extension of the household's utility function is habit
formation, under which the habit stock, h, adjusts to consumption
according to:

h_{t+1} − h_t = λ(c_t − h_t).
This relationship is an additional persistence-generation mechanism that
helps both calibrated and estimated models match real-world data (see, for
example, Bouakez et al (2005)). But since the habit-adjustment process is
identical to the adaptive expectations formula, critics argue that New
Classicals cannot simultaneously argue that rational expectations is
fundamentally more appealing than adaptive expectations, and that this
extension to the standard utility function is not ad hoc. Of course, as a
technical matter, New Classicals can safely ignore such criticism. Since
our subject starts from a specification of tastes and technology, every
assumption about such matters is necessarily "arbitrary," and it seems to
demonstrate a misunderstanding of the bounds of our discipline to call any
such assumption "ad hoc." On the other hand, since the hypothesis of
adaptive expectations concerns the relationship between actual and
forecasted values of endogenous variables, not exogenous items such as
the definition of tastes, it is legitimately viewed as an arbitrary (ad hoc)
specification.
It is not useful for us to get bogged down in methodological
dispute. At the practical level, two considerations are worth mentioning.
First, since it is difficult to find any non-time-series-econometrics
evidence to use as a basis for choosing an "appropriate" value for the
habit-persistence parameter, λ, it has been difficult for practitioners to
avoid some proliferation of the free-parameter problem as they implement
this extension. The second point worth noting is that this generalization of
the utility function does not adequately repair the real wage-employment
correlations, so other changes to the basic model have become quite
prevalent in the literature as well. It is to some of these other
modifications that we now turn.
One such extension is indivisible labour. Some New Classical
models make working an all-or-nothing choice for labour suppliers. At the
macro level, then, variation in employment comes from changes in the
number of people working, not from variations in average hours per
worker. This means that the macro correlations are not pinned down by
needing to be consistent with evidence from micro studies of the hours
supplied by each individual (that show a very small elasticity). For more
detail, see Hansen (1985) and Rogerson (1988).
Another extension focuses on non-market activity. Statistical
agencies in OECD countries have estimated that, on average, households
produce items for their own consumption equal in value to about one third
of measured GDP. Shifting between market work and this home production
can be handled within the same labour supply and demand
framework. For simplicity, in the present discussion, we assume static
expectations, so no separate expected future consumption and expected
future wage rates need to be considered. But when the labour supply and
demand equations are combined (to eliminate the real wage by
substitution), we are left with a relationship that stipulates output as a
positive function of the interest rate. We can use the IS relationship to
replace the interest rate. The resulting summary of the labour demand,
labour supply, production-function, and IS relationships is a vertical line
in P-Y space with government spending and tax rates — in addition to the
technology shock — as shift influences.
The real business cycle model allows no role for the money
supply, so — to have the model determine nominal prices — we must add
some sort of LM relationship to the system. Initially, to avoid having to re-
specify the household's optimization, this relationship was assumed to be
the quantity-theory of money relationship (L(Y) = M / P) — justified as a
specification of the nation's trading transactions technology. This
relationship is the economy's aggregate demand function, and the nominal
money supply is the only shift influence. Thus, the New Classical model
still exhibits the classical dichotomy as far as monetary policy is
concerned. But fiscal policy — even a change in program spending — has a
supply-side effect, so it has real output effects. Readers can draw the
appropriate aggregate demand and supply diagram to compare the effects
on output and the price level of variations in autonomous expenditure
across several models — those examined in Chapter 1 and the New
Classical model of the present chapter. Since the traditional Keynesian
model involves the prediction that the price level rises during business-
cycle booms, while this New Classical model has the property that the
price level falls during booms, various researchers (for example, Cover
and Pecorino (2004)) have tried to exploit this difference in prediction to
be able to discriminate between these alternative approaches to
interpreting cycles.
Care must be exercised when pursuing this strategy, however,
since there is no reason to restrict our attention to an LM relationship that
does not involve the interest rate. As we see in section 5.4 below, it is not
difficult to extend the household-optimization part of the New Classical
model to derive such a more general LM relationship from first principles.
When this more general specification is involved in the model, the IS
function is needed to eliminate the interest rate both from the labour-
market relationships (to obtain the aggregate supply of goods function)
and from the LM relationship (to obtain the aggregate demand for goods
function). As a result, changes in autonomous spending shift the position
of both the aggregate supply and demand curves, and the model no longer
predicts that the price level must move contra-cyclically.
Other extensions to the basic New Classical model can be
understood within the same labour supply and demand framework that we
have just discussed. Any mechanism which causes the labour demand
curve to shift over the cycle decreases the burden that has to be borne by
technology shocks, and any mechanism which causes the labour supply
curve to shift accomplishes the same thing — while at the same time
decreasing the variability of real wages over the cycle. The variability of
price mark-ups over the cycle is an example of a demand-shift mechanism,
and households shifting between market-oriented employment and home
production is an example of a supply-shift mechanism. Regarding the
former, we know that the mark-up of price over marginal cost falls during
booms because of the entry of new firms. This fact causes the labour
demand curve to shift to the right during booms. Devereux, Head, and
Lapham (1993) have shown how this mechanism can operate within a real
business cycle model involving imperfect competition.
There are still other reasons for the labour demand curve to move
in a way that adds persistence to employment variations. Some authors
(such as Christiano and Eichenbaum (1992)) introduce a payment lag. If
firms have to pay their wage bill one period before receiving their sales
revenue, the labour demand function becomes F_N = (W/P)(1 + r), so
variations in the interest rate shift the position of the labour demand curve.
Further, firms may encounter adjustment costs when hiring/firing labour,
and learning-by-doing may be an important phenomenon (see Cooper and
Johri (2002)). In the latter case, tomorrow's labour productivity is high if
today's employment level is high. Simulations have shown that the
consistency between the output of calibrated equilibrium models and real-
world time series is increased, when persistence-generation mechanisms
such as these are added. It can be challenging to discriminate between
some of these mechanisms. For example, there is a strong similarity
between the habits extension and the learning-by-doing extension. But
despite this, Bouakez and Kano (2006) conclude that the habits approach
fits the facts better.
As noted above, there is an interesting convergence involved
in parts of New Keynesian and New Classical work. Keynesians have
been taking expectations and micro foundations more seriously to
improve the logical consistency of their systems, while Classicals are
embracing such things as autonomous expenditure variation, imperfect
competition and payment lags to improve the empirical success of their
models. Despite this convergence, however, there is still a noticeable
difference in emphasis. Classical models have the property that the
observed fluctuations in employment have been chosen by agents, so there
is no obvious role for government to reduce output variation below what
agents have already determined to be optimal. This presumption of social
optimality is inappropriate, however, if markets fail for any reason (such
as externalities, moral hazard, or imperfect competition).
Hansen and Wright (1992) have shown that when all of these
extensions are combined, a simple aggregative model can generate data
that reflects fairly well the main features of the real-world real wage-
employment correlations after all. But still the model does not fit the facts
well enough, so even pioneers of this approach (for example, Goodfriend
and King (1997)) have called for the New Neoclassical Synthesis in which
temporarily sticky prices are added to the real business cycle model. We
consider this synthesis in some detail in Chapter 6.
It may seem surprising that New Classicals have embraced the
hallmark of Keynesian analysis — sticky prices. Why has this happened?
Perhaps because there is one fact that appears to support the relevance of
nominal rigidities — the well-documented correlation between changes in
the nominal money supply and variations in real output. Either this is
evidence in favour of nominal rigidities, or it is evidence that the central
bank always accommodates — increasing the money supply whenever
more is wanted (during an upswing). In response to this reverse causation
argument, Romer and Romer (1989) have consulted the minutes of the
Federal Reserve's committee meetings to establish seven clear episodes
during which contractionary monetary policy was adopted as an
unquestionably exogenous and discretionary development. The real
effects that have accompanied these major shifts in policy simply cannot
be put down to accommodative behaviour on the part of the central bank.
Further evidence is offered in Romer and Romer (2004), and further
support is provided by Ireland's (2003) econometric results. This evidence
— especially that which is derived from independent evidence of the
central bank's deliberations — removes the uncertainty that remains when
only statistical causality tests are performed. One final consideration is
that real and nominal exchange rates are very highly correlated. Many
economists argue that there appears to be no way to account for this fact
other than by embracing short-run nominal rigidities.
Keynesians welcome this convergence of research approaches, yet
(at the conceptual level) they remain concerned about the lack of market
failure involved in the classical tradition. Also, they have some empirical
concerns, and a few of these are summarized in the next few paragraphs.
The real business cycle approach is based on the notion of inter-
temporal substitution of labour supply. However, micro studies of
household behaviour suggest that leisure and the consumption of goods
are complements, not substitutes (as assumed in New Classical theory).
Another awkward fact is that, in the United States at least, only 15 percent
of actual labour market separations are quits. The rest of separations are
layoffs. In addition, the data on quits indicates that they are higher in
booms. The real business cycle model predicts that all separations are
quits and that they are higher in recessions. Finally, it is a fact that a high
proportion of unemployment involves individuals who have been out of
work for a long time — an outcome that does not seem consistent with the
assumption of random separations.
Other interpretation disputes stem from the fact that it is
impossible to observe the technology shocks directly. In Solow's original
work, the residual accounted for 48 percent of the variation in the output
growth rate. Later work, which measured inputs more carefully, avoided
some aggregation problems, and allowed for a variable utilization rate for
both labour and capital, pushed the residual's contribution down to 3
percent. It is no wonder that New Classicals can explain a lot with the
original Solow residuals; they contain a lot more than technology shocks.
Incidentally, many analysts find it reassuring that the residuals are now
perceived to be much smaller. Surely, if there are both positive and
negative technology shocks, disturbances at the individual firm or industry
level would largely "cancel out" each other, so that, in the aggregate, there
would not be large losses in technological knowledge. Possibly it would
be better if New Classicals interpreted real shocks more broadly, and
included such things as variations in the relative price of raw materials, as
well as technology shocks, in what they consider as supply-side
disturbances.
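The residual itself is straightforward to compute. The Python sketch below backs out the growth-rate version of the residual; the data series and the capital share of 0.3 are invented here purely for illustration:

```python
# Solow residual in growth-rate form:
#   dz = dy - alpha*dk - (1 - alpha)*dn,  alpha = capital's income share.
# The growth-rate series below are invented for illustration.
alpha = 0.3
dy = [0.031, 0.042, -0.011, 0.025]   # output growth
dk = [0.030, 0.028, 0.022, 0.024]    # capital growth
dn = [0.015, 0.021, -0.009, 0.012]   # labour growth

for t in range(len(dy)):
    dz = dy[t] - alpha * dk[t] - (1 - alpha) * dn[t]
    print(t, round(dz, 4))   # "technology" plus everything mis-measured
```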
Quite apart from all these specific details, and even the
general question of real-wage-employment correlations, some economists
regard the inter-temporal substitution model as outrageous. In its simplest
terms, it suggests that the Great Depression of the 1930s was the result of
agents anticipating World War II and deciding to withhold their labour
services for a decade until that high labour-demand period arrived.
Summers (1986) asks: even if workers took such a prolonged
voluntary holiday during the 1930s, how can the same strategic behaviour
be posited for the machines that were also unemployed?
Given these problems, why does real business-cycle theory appeal
to many of the best young minds of the profession? Blinder (1987)
attributes the attraction to "Lucas's keen intellect and profound
influence," but it also comes from the theory's firm basis in
microeconomic principles and its ability to match significant features of
real-world business cycles. Rebelo (2005) provides a clear and balanced
assessment of both the successes and some of the challenges that remain
for the research agenda of real business cycle theorists. It is likely that
both this ongoing willingness to address these challenges, and the shift of
the New Classicals from calibration to estimation, have strengthened the
appeal of this approach to young researchers. Finally, perhaps another
consideration is that this school of thought's insistence on starting from a
specification of utility makes it possible for straightforward normative
(not just positive) analysis to be conducted. Since the theory is so
explicitly grounded in a competitive framework with optimizing agents
who encounter no market failure problems, the output and employment
calculations are not just "fairly realistic"; they can be viewed as optimal
responses to the exogenous technology shocks that hit a particular
economy.
Using this interpretation, economists have a basis for calculating
the welfare gains from stabilization policy. Lucas (1987) has used data on
the volatility of consumption over the cycle and an assumed degree of
curvature in the utility of consumption function that appears to fit some
facts to calculate how much business cycles lower utility. He concludes
that "eliminating aggregate consumption variability entirely would ... be
the equivalent in utility terms of an increase in average consumption of
something like one or two tenths of a percentage point." Lucas'
conclusion has been influential; it is one of the reasons macroeconomists
have shifted their emphasis to growth theory (Chapters 10-12) in recent
years.
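The logic behind such calculations can be stated compactly. With constant-relative-risk-aversion utility, U(C) = C^{1−γ}/(1−γ), a second-order approximation implies that eliminating consumption variability with (percentage) standard deviation σ is worth a permanent increase in average consumption of roughly (γ/2)σ². For instance, with γ = 2.5 and σ = 0.03 (numbers chosen here only to indicate orders of magnitude, not Lucas's exact inputs), the gain is about 0.001, or one tenth of one percent of consumption.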
It is interesting to interpret the simulations produced by modern
real business cycle theorists as an up-dated version of Adelman and
Adelman (1959). These authors performed a similar stochastic simulation
experiment with a small econometric model; their intention was to show
that a standard Keynesian model (with just a few numerical parameters)
could mimic the actual US data. Since both groups (Old Keynesians and
New Classicals) have established that their models are consistent with
significant parts of actual business-cycle data (and therefore should be
taken seriously), how can any one of them argue that its preferred
approach should have priority in the profession's research agenda (for this
reason alone)? Even New Keynesians, for example Ambler and Phaneuf
(1992), have shown that an updating of the original Adelman/Adelman
study gives the same support to the New Keynesian approach. Now that
all groups have proved that their approach has passed this basic test — to
be taken seriously as one of the legitimate and contending schools of
thought — it seems that either some additional criterion for choosing among
the different approaches is required, or a synthesis of the New Classical
and sticky-price approaches should be embraced. Indeed, as we explore in
the next chapter, this synthesis has been just what has developed.
Before ending this chapter and moving on to the synthesis model,
we do two things. First, we use a particular version of New Classical
theory to illustrate how this school of thought can be used to contribute to
policy debates. In particular, we show how it facilitates our estimating the
long-term benefits of adopting a low-inflation policy. Second, we pursue
Lucas's proposition that the value of stabilization policy is trivial.
C = wN + rK + τ − πm − Ȧ.
Consumption is the sum of wage and capital income, plus the
transfer payments received from the government, τ, minus the inflation
tax incurred by holding real money balances, and minus asset
accumulation. Since A = K + m and i = r + π, we can re-express the
constraint as C = wN + rA + τ − im − Ȧ. Households do not focus on the
fact that their individual transfer payment may depend on how much
inflation tax the government collects from them — individually. That is,
they do not see the aggregate government budget constraint (given below)
as applying at the individual level. However, there is an additional
complication that confronts households. This novel feature is the "cash-in-
advance" constraint: each period's consumption cannot exceed the start-
of-period money holdings. Since the rate of return on bonds dominates
that on money, individuals satisfy the financing constraint as an equality:
C = m. Thus, we replace the m term in the constraint with C. Finally, for
simplicity, capital does not depreciate, and since there is no growth or
government program spending, investment is zero in full equilibrium, and
so (in full equilibrium) total output and C are identical. Firms have a
Cobb-Douglas production function (with capital's exponent being θ), and
they hire labour and capital so that marginal products equal rental costs.
The full-equilibrium version of the model is described by the following
equations:
r = ρ
N/(1 − N) = (1 − α)(1 − θ)/(α(1 + r + π))
C = K^θ N^{1−θ}
θC/K = r
τ = πC.
This equation states that lump-sum transfer payments, τ, are paid to
individuals, and in aggregate, these transfers are financed by the inflation
tax.
With T endogenous, cutting inflation is unambiguously "good".
Lower inflation eliminates a tax that distorts the household saving
decision, and no other distortion is introduced by the government having
to levy some other tax to acquire the missing revenue. Consumption,
employment, output and the capital stock all increase by the same
percentage when the inflation rate is reduced.
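These full-equilibrium equations can be solved recursively, and a few lines of Python make the comparative statics transparent (given the functional forms above; the parameter values are illustrative assumptions). Employment comes from the labour-market condition, and consumption and capital are then proportional to employment, so all three move together when inflation changes:

```python
# Full equilibrium of the cash-in-advance model for two inflation rates.
# alpha (leisure weight), theta (capital share) and rho are illustrative.
alpha, theta, rho = 0.6, 0.3, 0.03

def steady_state(pi):
    B = (1 - alpha) * (1 - theta) / (alpha * (1 + rho + pi))
    N = B / (1 + B)                                  # from N/(1-N) = B
    C = (theta / rho) ** (theta / (1 - theta)) * N   # from C = K^theta N^(1-theta)
    K = theta * C / rho                              # from theta*C/K = r = rho
    return round(N, 4), round(C, 4), round(K, 4)

for pi in (0.10, 0.0):
    print(pi, steady_state(pi))
# Cutting inflation from 10 percent to zero raises N, C and K by the
# same proportion, as stated in the text.
```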
Thus far, much of this book has focused on explanations of the business
cycle and an evaluation of stabilization policy. It has been implicit that
there would be significant gains for society if the business cycle could be
eliminated. The standard defence for this presumption can be given by
referring to Figure 5.2 (where we now allow for ongoing growth by
drawing the (log of the) GDP time paths with a positive slope).
Without business cycles, actual output, y, would coincide with the
natural rate, ȳ, and both series would follow a smooth growth path such
as the straight line labelled ȳ in Figure 5.2. But because we observe
business cycles, the actual output time path is cyclical — as is the wavy
line labelled y in the figure. Traditionally, Keynesians equated the natural
rate with potential GDP, and Figure 5.2 reflects this interpretation by
having the actual and natural rates coinciding only at the peak of each
cycle. Okun (1962) measured the area between the two time paths for the
United States over a period of several decades, and since the
average recession involved a loss of at least 5 percent of national output,
the sum of the so-called Okun gaps was taken to represent a very large
loss in material welfare. The payoff to be derived from a successful
stabilization policy seemed immense.
[Figure 5.2: the (log of) actual output, y, fluctuates around the smooth natural-rate path, ȳ; time is on the horizontal axis]
Before explaining how the New Classicals have taken issue with
this analysis, it is useful to note how the likely payoff that can follow from
successful microeconomic policy was estimated back when Okun was
writing. As an example, consider a reduction in the income tax rate, which
(since it applies to interest income) distorts the consumption-savings
decision. In section 4.3, we derived the consumption function that follows
from inter-temporal optimization on the part of an infinitely lived agent
who is not liquidity constrained; the decision rule is:

Ċ/C = r(1 − t) − ρ,
where C, r, ρ, and t denote consumption, the real interest rate, the rate of
time preference, and an income tax rate that does not exempt interest
income (as we assumed in the previous section).
Traditional applied microeconomic analysis involved focusing on
full equilibrium without growth (that is, on the r(1 − t) = ρ relationship),
and combining this supply-of-savings function with the full-equilibrium
demand for capital (F_K = r + δ, where F(K,N) = Y is the production
function and δ is the depreciation rate for capital). Assuming a fixed
quantity of labour employed, a Cobb-Douglas function, Y = K^θ N^{1−θ},
and that the rates of time preference and capital depreciation are
independent of tax policy, these relationships imply
[Figure 5.3: utility (vertical axis) plotted against wage income (horizontal axis), with incomes of $50, $100 and $150 marked]
Even if gains and losses did cancel out, there would still be some
benefit to individuals as long as they are risk averse. That is, two income
streams with the same present value are not evaluated as equal in utility
terms if one income stream involves volatility. Figure 5.3 illustrates this
issue. It shows that with risk aversion, an individual refuses a fair bet — for
example, she refuses to pay $100 for the right to play a game in which
there is a 50-50 chance of receiving either $150 or $50. The expected
value of the game is $100, but — given the uncertainty — the utility that can
be derived from this expected value is not as big as what is enjoyed when
the $100 is certain. Thus, an individual with diminishing marginal utility
is willing to give up an amount of utility equal to distance DB to eliminate
the variability in her income stream. If the degree of risk aversion is very
slight, the utility of income function is almost linear, and distance DB is
very tiny. This is the reasoning that Lucas (1987) used in arriving at his
estimate of the value of stabilization policy. Using a time-separable utility
function with a constant coefficient of relative risk aversion (for which
empirical demand systems yield an estimate), Lucas was able to quantify
the benefits of eliminating variability, and as already noted, he concluded
that they were trivial.
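The distance DB is easy to quantify. The Python fragment below uses logarithmic utility purely as an illustration of diminishing marginal utility (the figure does not commit to a particular functional form); it computes the expected utility of the 50-50 gamble over $50 and $150, and the certain income that delivers the same utility:

```python
import math

# A 50-50 gamble over $50 and $150 versus $100 for sure, with log utility.
lo, hi = 50.0, 150.0
eu = 0.5 * math.log(lo) + 0.5 * math.log(hi)  # expected utility of the gamble
ce = math.exp(eu)                             # certainty-equivalent income

print(round(ce, 2))           # about 86.60: the gamble is worth less than $100
print(round(100.0 - ce, 2))   # the premium paid to eliminate the variability
```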
Keynesians have made three points in reacting to Lucas. The first
concerns whether there is market failure. According to New Classicals,
unemployment is voluntary, so when output is low it is because the value
of leisure is high. What is so bad about an "output loss" if it is just another
word for a "leisure gain"? But Keynesians think that a significant
component of unemployment is involuntary, since it stems from some
market failure such as asymmetric information, adverse selection, or
externalities in the trading process. (We examine these possibilities in
Chapter 8.) According to this view, smoothing is not the only result to
follow from stabilization policy. Indeed, just the commitment to attempt
stabilization may be sufficient to shift the economy to a Pareto-superior
equilibrium in models that involve both market failure and multiple
equilibria. Thus, stabilization policy can affect the mean, not just the
variance, of income. In terms of Figure 5.2, stabilization policy can both
reduce the wiggles in the y line and shift up its intercept. Lucas'
calculations simply assume that this second effect is not possible.
A second point concerns the distribution of the gains and losses
over the business cycle. A relatively small proportion of the population
bears most of the variability, so these individuals are sliding back and
forth around a much wider arc of their utility function (than Lucas
assumed). Even staying within Lucas' framework and numerical values,
Pemberton (1995) has shown that this distributional consideration can
raise the estimated benefit of stabilization policy by a factor of eight. An
even bigger revision is called for if a different utility function is used.
Pemberton notes that many experimental studies have cast fundamental
doubt on the expected utility approach. Indeed, the equity-premium puzzle
implies that we cannot have confidence in utility functions like the one
Lucas used. When some of the alternatives are used to rework Lucas'
calculations, it turns out that business cycles do involve significant
welfare implications.
Thus far, our discussion of the relative size of Okun gaps and
Harberger triangles has ignored two things: the effects of tax
changes before full equilibrium is reached, and the effects (if
any) on the economy's average rate of growth. These issues can be
clarified with reference to Figure 5.4. As we have seen, a cut in the
interest-income tax stimulates savings. As a result, current consumption
must drop, as shown by the step down in the solid-line time path in the
left-hand panel of Figure 5.4. Individuals must suffer this lower standard
of living for a time, before the increase in the stock of capital (made
possible by the higher saving) takes place. Our illustrative calculations
have estimated the long-term gain (the step up in the dashed line in the
left-hand panel of the figure) but not this short-term pain. Thus, the
comparison of Okun gaps and Harberger triangles is not complete without
a dynamic analysis of tax policy (which is provided in Chapters 10 and
12). But we must also note that a tax policy which stimulates savings may
not just cause a once-for-all increase in the level of living standards. It
may raise the ongoing growth rate of consumption, as shown in the right-
hand panel of Figure 5.4. There is still a period of short-term pain in this
case, but the effect on the present value of all future consumption can be
much more dramatic. Whether tax policy can have any effect on the long-
run average growth rate has been much debated in recent years, and this
debate is covered in the final two chapters of the book. But if it can, we
would have to conclude that the size of Harberger triangles may be far
bigger than earlier analysts had thought.
Does this mean that Lucas is right after all — that microeconomic
policy initiatives are more important than stabilization policy? Not
necessarily. As Fatas (2000), Barlevy (2004) and Blackburn and Pelloni
(2005) have shown, there is a negative correlation between the variance of
output growth and its mean value. It seems that a more volatile business
cycle is not conducive to investment, and so it contributes to a smaller
long-run growth rate than would otherwise occur. Thus, endogenous
growth analysis raises the size of both Harberger triangles and Okun gaps.
[Figure 5.4: two panels plotting ln C against time — "Old Growth Theory" (left) and "New Growth Theory" (right)]
It seems that a prudent reaction to the Okun gap vs. Harberger triangle
debate is to take the view that the profession should allocate some of its
resources to investigating both stabilization policy and long-term growth
policy (eliminating distortions). As we shall see in later chapters, the same
analytical tools are needed to pursue both tasks.
5.6 Conclusions
Chapter 6
The New Neoclassical Synthesis

6.1 Introduction
In this chapter, we analyze what has been called the "New Neoclassical
Synthesis" in macroeconomics. As noted in earlier chapters, this approach
attempts to combine the best of two earlier schools of thought. First, it is
consistent with the empirical "fact of life" that prices are sticky in the
short run (the Keynesian tradition). Second, it is based on the presumption
that the Lucas critique must be respected. That is, it is in keeping with the
demands of the New Classicals; both the temporary price stickiness and
the determinants of the demand for goods must be based on a clearly
specified inter-temporal optimization.
This synthesis involves the basic (infinitely lived representative
agent) version of the inter-temporal theory of the household (derived in
Chapter 4, section 3) to re-specify the IS relationship, and the similar
theory of the firm (derived in Chapter 4, section 5) as a basis for a re-
specified Phillips curve. The traditional (or "old") IS relationship involves
the level of aggregate demand (output) depending inversely on the interest
rate, and the traditional Phillips curve involves the level of the inflation
rate depending positively on the output gap. When we derived the micro-
based IS and Phillips curve relationships in Chapter 4, they appeared
rather different: ẏ = (r − r̄) and π̇ = −φ(y − ȳ). Thus the "new" IS
relationship involves the change in aggregate demand depending
positively on the interest rate, and the "new" Phillips curve involves the
change in the inflation rate depending inversely on the output gap.
We investigated one aspect of monetary policy in a model
involving these "new" relationships in a rational-expectations setting in
Chapter 3 (section 5). Since that analysis was rather messy, in this chapter
we simplify in three ways. First, we ignore stochastic shocks, so that
rational expectations becomes the same thing as perfect foresight. Second,
we use a continuous-time specification, so that a geometric approach —
phase diagrams — can be used instead of algebra. Third, for most of the
chapter, we consider only one aspect of the new model at a time — initially,
the new IS relationship with a traditional Phillips curve, and then a new
Phillips curve with a traditional IS relationship. In each case, we wish to
explore how (if at all) these changes in the model's specification affect the
answer to a standard stabilization policy question: what happens to output
when the central bank embarks on a disinflation policy?
6.2 Phase Diagram Methodology
y = ȳ − ψ(r − r̄)    (6.1)
ṗ = φ(y − ȳ) + ẋ    (6.2)
r + ṗ = r̄ + ẋ + λ(p − x)    (6.3)
The first equation (the aggregate demand function) states that output falls
below its full-equilibrium value when the real interest rate rises above its
full equilibrium value. The second equation (the dynamic aggregate
supply function) states that inflation exceeds the authority's target
inflation rate whenever the actual rate of output exceeds the natural rate.
The third equation states that the central bank raises the nominal interest
rate above its full equilibrium value whenever the price level exceeds the
bank's target value for the price level, x. The slope parameters (the three
Greek letters) are all positive.
We focus on a contractionary monetary policy; the central bank
lowers its target value for the price level in a once-for-all, previously
unexpected, fashion. Further, we assume that — both before and after this
change — the bank did maintain, and will then revert to maintaining, a
constant value for that target variable (x). If we were to graph this
exogenous variable — the level of x as a function of time — it would appear
as a horizontal line that drops down in a one-time step fashion at a
particular point in time. At that very instant, the slope of the graph is
undefined, but both before and after that point in time, the slope, ẋ, is
zero.
We are interested in knowing what the time graphs for real output
and the price level are in the face of this one-time contractionary monetary
policy. We learned how to answer this question in Chapter 2. We were
able to re-write the model as a single linear differential equation in one
variable, and from that compact version of the system, we could derive
both the impact effect on real output, and the nature of the time paths after
the policy change had occurred. Specifically, we learned that (as long as
the system is stable) there is a temporary recession (which is biggest at the
very instant that the target price level is cut). The output time path then
starts rising asymptotically back up to the unaffected natural rate line, and
the temporary recession is gradually eliminated (see Figure 2.2, p. 28).
There is no jump in the price level; the Keynesian element of the synthesis
is that the price level is a sticky variable. But while it cannot "jump" at a
point in time, it can adjust gradually through time. In this case, it
gradually falls to (asymptotically approaches) the new lower value of x.
These properties represent the base for comparison in the present chapter.
We want to know if the output and price-level time paths follow these
same general patterns in a series of modified models.
The first modified model is defined by equations (6.1a), (6.2) and
(6.3). The only change is that the traditional IS relationship is replaced by
the "new" IS function:
ẏ = (r − r̄)    (6.1a)

Substituting (6.3) into (6.1a) to eliminate the interest rate, and then using
(6.2) to eliminate ṗ (recalling that ẋ = 0 both before and after the policy
change), we have:

ẏ = λ(p − x) − φ(y − ȳ).    (6.4)

Equations (6.2) and (6.4) represent the compact version of the model.
These relationships contain no endogenous variables other than the two
we are focusing on — y and p — and no time rates of change of any
endogenous variables other than these same two. This is exactly the
format we need, if we are to draw a phase diagram with y and p on the two
axes. It is customary to put the "jump" variable — in this case, y — on the
vertical axis, and the sticky-at-a-point-in-time variable — in this case, p —
on the horizontal axis. We now use equations (6.2) and (6.4) to derive the
phase diagram.
The goal is to draw two "no-motion" lines in a y-p space graph.
The (ṗ = 0) locus is all combinations of y and p values that involve p not
changing through time. The (ẏ = 0) locus is all combinations of y and p
values that involve y not changing through time. The model's full
equilibrium involves neither variable changing, so — graphically — full
equilibrium is determined by the intersection of the two no-motion loci.
Only that one point involves no motion in both variables. To draw each
no-motion locus, we must determine its three properties: What is the slope
of the locus? What precise motion occurs when the economy is observed
at a point that is not on this line? and How does this locus shift (if at all)
when each exogenous variable is changed? We now proceed to answer all
three questions for both no-motion loci.
The properties of the (ṗ = 0) locus can be determined from the
ṗ relationship — equation (6.2). When ṗ = 0, this relationship reduces to
y = ȳ, and this fact is graphed as the horizontal line in Figure 6.1,
labelled (ṗ = 0). So we have already answered question one; the slope of
[Figure 6.1: the horizontal (ṗ = 0) locus in y-p space, with ṗ > 0 above the line (at points such as A) and ṗ < 0 below it]
the (ṗ = 0) locus is zero. What motion takes place when the economy is
at a point off this line? The best way to answer this question is to assume
that we are at such a point, say point A in Figure 6.1, and then
determine what equation (6.2) implies about point A. At A, actual output
exceeds the natural rate, and (according to equation (6.2)) ṗ must be
positive at point A as a result. This is just a mathematical way of saying
that "p is rising", so we draw in a horizontal arrow pointing to the right in
this upper region of the diagram to show this rising price level. It may
seem tempting to put an upward pointing arrow in the graph, since we are
talking about "rising" prices. But we must remember that we are graphing
p on the horizontal axis, so a rising value means an arrow pointing east,
not north. Similar reasoning leads to our inserting a west-pointing
arrow below the (ṗ = 0) locus. We have now summarized what is
happening (with respect to the price level, at least) at every point in the
plane. p is rising when it is observed at values above the line, falling when
observed at points below the line, and not moving at all when observed at
points on the line. The third (and final) question of interest concerning the
(ṗ = 0) locus is: Does this line shift when the central bank lowers the
value of x — its price-level target? Since this exogenous variable does not
appear in equation (6.2), the answer is simply "no"; this policy does not
shift the position of this no-motion locus. We now proceed to ask and
answer these same three questions for the (ẏ = 0) locus.
The properties of the (ẏ = 0) locus follow from equation (6.4). To
determine the slope of this no-motion locus, we substitute in the definition
of no motion (ẏ = 0), and solve for the variable that we are measuring on
the vertical axis in our phase diagram, y:

$$y = (\lambda/\phi)p + [\bar{y} - (\lambda/\phi)x].$$
The slope expression in this equation is the coefficient on the variable that
is being measured on the horizontal axis. Since this coefficient is (λ/φ) > 0,
we know that the (ẏ = 0) locus is positively sloped. The intercept is the
term in square brackets. Within this term, the coefficient of x is −(λ/φ) < 0,
so we know that the reduction in x must increase the vertical intercept of
the (ẏ = 0) locus. Thus, we know that the contractionary monetary policy
moves the positively sloped (ẏ = 0) locus up on the page. To answer the
remaining question — what motion is involved when the economy is
observed at a point that is not on this locus — we must revert to equation
(6.4), the version that does not have ẏ = 0 imposed. From (6.4), we
see that ∂ẏ/∂y = −φ < 0, which means that the time change in y goes from
zero to a negative value as we move to a point off the line and above the
line. Thus, we label all points above the line as involving ẏ < 0, and this
is what justifies the arrows pointing south in this region of Figure 6.2.
Similarly, we label all points below the line as involving ẏ > 0, and, as a
result, we insert a north-pointing arrow, indicating this rising y motion,
for all observation points below the line in Figure 6.2.
Figure 6.2 The Properties of the (ẏ = 0) Locus
[Figures 6.2 and 6.3: the positively sloped (ẏ = 0) locus in y-p space with its arrows of motion; Figure 6.3 combines it with the horizontal (ṗ = 0) locus, full equilibrium E, and the negatively sloped dotted saddle path]
There is an alternative way to realize that these are the appropriate arrows
of motion, and this is by considering points that are to the right or left of
the line, instead of above or below the line. To proceed in this way, we
derive ∂ẏ/∂p = λ > 0 from equation (6.4). This sign means that the time
change in y goes from zero to a positive value as we move off the line to
the right. Thus, we label all points in this region as having the property
(ẏ > 0), and we show this with the arrow that is pointing north. When
deriving a phase diagram, we can select either a vertical-displacement
thought experiment or a horizontal-displacement thought
experiment — whichever appears to involve the simpler algebra.
We are now ready to put both no-motion lines — with their
corresponding arrows that indicate what changes occur when the economy
is observed at points off these lines — in the same graph. This is done in
Figure 6.3. Point E is full equilibrium, since it is the only point in the
plane that involves no motion for either endogenous variable. The two loci
divide the rest of the plane into four regions, and the combined forces that
operate at all these other points are shown. These forces show that the
system has both convergent and divergent tendencies. If the economy is
ever observed at points in the north-east or the south-west regions of
Figure 6.3, for example, the system diverges from full equilibrium, and
we conclude that the system is unstable. This same conclusion is
warranted for many points in the north-west and south-east regions of
Figure 6.3. Readers can verify this by selecting an arbitrary "starting"
point in one of these regions and tracing the economy's possible time path.
If that time path misses point E, the trajectory enters either the north-east
or the south-west regions and instability is assured once again. But since
trajectories that start in the north-west and the south-east regions might
just hit (and therefore end at) point E, stability is possible. The dotted
negatively sloped line — labelled the saddle path in Figure 6.3 — shows this
possible stable path. So if this economy is ever observed on the saddle
path, there will be convergence to full equilibrium.
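The saddle-point configuration can also be verified directly from the compact system. The sketch below is a minimal numerical check in Python, assuming the system takes the reconstructed form of (6.2) and (6.4); the parameter values are illustrative only.

```python
import numpy as np

# Jacobian of the compact system ydot = -phi*(y - ybar) + lam*(p - x),
# pdot = phi*(y - ybar), evaluated with illustrative parameter values.
phi, lam = 0.2, 0.5
J = np.array([[-phi, lam],    # d(ydot)/dy, d(ydot)/dp
              [ phi, 0.0]])   # d(pdot)/dy, d(pdot)/dp
roots = np.linalg.eigvals(J)
print(roots)           # one positive and one negative root
print(np.prod(roots))  # determinant = -lam*phi < 0: a saddle point
```

Because the determinant of this Jacobian is negative for any positive φ and λ, one root is always positive and one is always negative, which is exactly the mix of convergent and divergent tendencies just described.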
At this point, the outcome seems quite arbitrary; the system may
or may not involve stability. To resolve this problem, we apply the
correspondence principle. We observe, in the real world, that the
instability prediction seems not to apply. Thus, if we are to relate the
model to this reality, we must treat the unstable outcomes as
mathematically possible but inadmissible on empirical-applicability
grounds. Hence, we simply assume that the economy manages to get on
the saddle path. This is possible, since history does not pin down a value
for one of the two endogenous variables. The price level is pre-determined
at each point in time, but the level of output has been assumed to be a
variable that is free to "jump" at a point in time. These assumptions reflect
the Keynesian feature of the synthesis model in the (instantaneous) short
run. Even though prices are flexible as time passes, the quantity of output
can change faster than its price in the short run. We can summarize this
given-history constraint that operates on the price level, but not on output,
by adding an "initial conditions constraint" line in the phase diagram. This
is done in Figure 6.4.
Figure 6.4 Jumping on the Saddle Path
[Figure: the (ẏ = 0) and (ṗ = 0) loci intersecting at full equilibrium E, the negatively sloped saddle path, and the vertical initial-conditions constraint; the constraint and the saddle path intersect at the short-run outcome A]
There are four loci in Figure 6.4. The intersection of two — the
(ẏ = 0) and the (ṗ = 0) lines — determines the full equilibrium of the
system (the long-run outcome — point E). The intersection of the other two
lines — the saddle path that cuts through the long-run equilibrium point
and the pre-existing initial conditions constraint — determines the short-
run outcome (point A). The saddle path is the line we need to jump on to if
outright instability (which is presumed to be irrelevant on empirical
grounds) is to be avoided, and the initial conditions line is what we can
jump along. The intersection (point A) is the only point in the plane that is
both feasible (given the historically determined starting price level) and
desirable. While this methodology offers no discussion of a decentralized
mechanism that would help individual agents coordinate to find point A, it
assumes — on the basis of instability not being observed — that agents
somehow achieve this starting point. While this seems somewhat arbitrary,
it must be realized that some additional assumption is needed to complete
the model. After all, the two-equation system involves three endogenous
items: p, y, and ẏ. With y being a jump variable, both its level and its
time derivative are determined within the model. So an additional
restriction is needed to close the model, and this additional restriction is
that the system always jump on to the relevant saddle path, the moment
some previously unexpected event takes place. Any other assumption
renders the system unstable, and therefore unable to be related to the
(presumed to be stable) real world.
It is easier to appreciate how the model works by actually
following through a specific event. We do just this by referring to Figure
6.5. The economy starts in full equilibrium at point A — the intersection of
the initial no-motion loci. Then a once-for-all, previously unexpected,
drop in x occurs, as the central bank performs this unanticipated monetary
contraction. We have determined that this event shifts the (ẏ = 0) line up
to the left — to what is shown as the dashed line in Figure 6.5. The new
full equilibrium point is C, but the economy cannot jump immediately to
this point, because the price level is predetermined at a point in time. The
initial conditions line (always a vertical line if we follow the convention
of graphing the jump variable on the vertical axis) is the vertical line
going through the initial equilibrium point A. Since y is a jump variable,
the economy can move — instantaneously — to any point on this line. To
determine which point, we draw in the saddle path going through the new
full equilibrium point C. The intersection of this line with the initial
conditions line determines the immediate jump point — B.
[Figure 6.5: the (ẏ = 0) locus before and after the cut in x, the (ṗ = 0) locus (the same both before and after), the vertical initial-conditions constraint through A, and the saddle path through the new full equilibrium; the economy jumps from A to B and then travels along the saddle path to the new full equilibrium C]
To characterize the adjustment path, write the solution of the system in
the form (p_t − x) = z₁e^{δ₁t} + z₂e^{δ₂t}, with a corresponding expression
for (y_t − ȳ), where the δs are the characteristic roots, and the zs are determined by
initial conditions. Since we are restricting attention to saddle-path
outcomes, we know that one characteristic root must be positive, and the
other negative. Let us assume δ₂ > 0 and δ₁ < 0. By presuming a jump to
the saddle path, the unstable root is precluded from having influence,
so z₂ = 0 is imposed as an initial condition.
With a linear system, the equation of the saddle path must be
linear as well:

$$(y_t - \bar{y}) = \Gamma(p_t - x) \qquad (6.6)$$
We substitute equations (6.7) — the z₂ = 0 versions of these solutions —
and their time derivatives, into (6.2) and (6.4) and obtain

$$\delta_1 = \phi\Gamma \qquad \text{and} \qquad (\delta_1 + \phi)\Gamma = \lambda. \qquad (6.8)$$
These two equations can be solved for the stable root and the slope of the
saddle path. The absolute value of the stable root defines the speed of
adjustment, while the slope of the saddle path is used to calculate impact
multipliers.
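These calculations are easy to carry out numerically. The sketch below solves the conditions in (6.8), as reconstructed above, for the saddle-path slope and the stable root; the parameter values and the size of the policy change are illustrative assumptions.

```python
import numpy as np

# Eliminating delta1 from delta1 = phi*Gamma and (delta1 + phi)*Gamma = lam
# gives the quadratic phi*Gamma**2 + phi*Gamma - lam = 0; the negative
# root is the saddle-path slope.  Values are illustrative.
phi, lam = 0.2, 0.5
Gamma = np.roots([phi, phi, -lam]).min()   # negative root: saddle-path slope
delta1 = phi * Gamma                       # stable (negative) characteristic root
dx = -0.1                                  # an assumed cut in the target price level
print(Gamma, delta1)
print(-Gamma * dx)   # impact effect on output: negative, a temporary recession
```

The last line anticipates the impact-multiplier calculation described next.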
For example, the impact effect on real output of the unanticipated
reduction in the target price level follows from the saddle path equation
(6.6). Since the price level cannot jump at the moment the policy changes,
the output change on impact is Δy = ΓΔ(p − x) = −ΓΔx, and this is
negative (Γ is negative and Δx is negative): a temporary recession once
again.
The next modification replaces the traditional Phillips curve with the
"new" Phillips curve:

$$\dot{\pi} = -\phi(y - \bar{y}) \qquad (6.2a)$$

where π denotes the inflation rate (π = ṗ).
[Figure 6.6: the (π̇ = 0) locus, the (ṗ = 0) locus, and the saddle path, drawn with π on the vertical axis and p on the horizontal axis]
In this model, the counterpart of (6.4) is a relationship of the form

$$\dot{\pi} = -\psi_1\pi + \psi_2(p - x) \qquad (6.9)$$

where ψ₁ and ψ₂ are positive combinations of the model's underlying parameters.
This equation has exactly the same format as (6.4), with π playing the role
in (6.9) that y plays in (6.4). This means that the (π̇ = 0) locus in the
present model has all the same properties as the (ẏ = 0) locus had in the
model of the previous section (it is positively sloped, there are downward
pointing arrows of motion above the locus, and the position shifts left
when x is cut). Thus, the entire policy discussion proceeds in Figure 6.6 as
an exact analogy to that which accompanied Figure 6.5 above. To save
space, we leave it to the reader to review this discussion if necessary. We
simply close this particular analysis with a short discussion that links the
impact effect on π to the impact effect on y. Equations (6.1) and (6.3)
combine to yield an expression in which the output gap varies positively
with π and negatively with (p − x).
The phase diagram (Figure 6.6) proves that π falls when x falls. Given this
outcome, and the fact that p cannot jump, this last equation implies that y
must jump down. Thus, the predictions of this new-Phillips-curve model
align with those of the new-IS-curve model (of the last section) and of the
first synthesis model (of Chapter 2): there is a temporary recession that
immediately begins shrinking as time passes beyond the impact period.
We conclude, as in the previous section, that the predicted effects of
monetary policy are not specific to any one of these models.
[Figure: the policy experiment in the new-Phillips-curve model, with the (ṗ = 0) locus shown both before and after the cut in x, and the saddle path through the new full equilibrium]
Both the "old" and the "new" IS curves can be viewed as special cases of
a more general specification:

$$\dot{y} - \psi(y - \bar{y}) = -\Omega(R - \bar{R}), \qquad \dot{R} = k(R - r).$$
There are three terms in this general specification; the last two are
involved in the "old" IS function, while the first two are involved in the
"new" IS function. It is interesting that models involving this quite general
relationship were being analyzed for a number of years before the New
Neoclassical synthesis was formally introduced in the literature (for
example, see Blanchard (1981)).
Another analysis that involves forward-looking agents and asset
prices within the demand side of the model is Dornbusch's (1976) model
of overshooting exchange rates. The open-economy equivalent of the
"old" IS-LM system is the Mundell (1963)-Fleming (1962) model.
Dornbusch extended this framework to allow for exchange-rate
expectations and perfect foresight. His goal was to make the model more
consistent with what had previously been regarded as surprising volatility
in exchange rates. Dornbusch's model involved the prediction that the
exchange rate adjusts more in the short run than it does in full equilibrium
— in response to a change in monetary policy. The following system is a
version of Dornbusch's model.
The small open economy model involves equations (6.2) and (6.3),
along with an interest-parity condition linking the domestic interest rate
to the expected change in the exchange rate, and an exchange-rate-dependent
demand relationship whose compact form is equation (6.11).
The phase diagram can be constructed from (6.2) and (6.11). As we have
encountered several times already, the form of (6.11) is the same as (6.4)
so the phase diagram analysis is very similar to what was presented in
detail in section 6.2. For this reason, we leave it for interested readers to
complete this specific open-economy analysis. However, before moving
on, we do note that (6.11) can be re-expressed, using (6.3), as a
relationship that involves just three variables: ẏ, (r − r̄), and (y − ȳ).
We consider next a descriptive model of overlapping wage contracts, in
which the price level is a weighted average of the wage rates set in the
contracts that are still in effect: p_t = τ[w_t + (1−τ)w_{t−1} + (1−τ)²w_{t−2} + ...].
Writing this equation lagged once, multiplying the result through by (1 − τ),
and then subtracting the result from the original, yields

$$p_t - p_{t-1} = \Omega(w_t - p_t)$$

where Ω = τ/(1 − τ). With constant returns to scale technology, units can
be chosen so that the marginal product of labour is unity; thus p stands for
both the wage index and the price level. Each contracted w is set with a
view to the expected (equals actual) price that will obtain in the various
periods in the future, and to the state of market pressure in all future
periods (with the weight given to each period in the future depending on
the number of contracts that will run for two, three or more periods). Thus,
we have

$$w_t = \tau[(p_t + \phi y_t) + (1-\tau)(p_{t+1} + \phi y_{t+1}) + (1-\tau)^2(p_{t+2} + \phi y_{t+2}) + \ldots],$$

where output appears as y since we define the natural rate as zero. Writing this last equation forward
one period, multiplying the result by (1 − τ), and subtracting this last
equation from that result, we have
$$w_{t+1} - w_t = \Omega(w_t - p_t - \phi y_t).$$

The equivalent relationships in continuous time are

$$\dot{p} = \Omega(w - p) \qquad \text{and} \qquad \dot{w} = \Omega(w - p - \phi y).$$
The full model consists of these two relationships and equations (6.1) and
(6.3). In continuous time, the length of one period is just an instant; thus, y,
r, and w are jump variables, while p is predetermined at each instant.
Defining v = w − p as an additional jump variable, we can re-express the
system (using the by-now familiar steps) as the pair

$$\dot{v} = -\Omega\phi y \qquad \text{and} \qquad \dot{p} = \Omega v.$$

The first of these equations, once y is eliminated by using (6.1) and (6.3),
has the very same form as equation (6.9). The second is analogous to the
definition of π in section 6.3. This symmetry implies that this descriptive model of
overlapping wage contracts and the micro-based model of optimal price
adjustment have the same macro properties.
that analysis was very messy as well. There is one other option — to stay in
continuous time and allow for ongoing changes in exogenous variables,
but to rely on the undetermined-coefficient solution method that was
explained in Chapter 2 (section 5) and used again in Chapter 3. We take
this approach in this remaining section of this chapter.
The exogenous variable that we want to model as involving
ongoing changes is autonomous spending. Thus, we must start by
indicating how the new synthesis model is altered slightly — compared to
what was derived in Chapter 4 — when this component of aggregate
demand is considered. It is left for readers to rework the Chapter 4
analysis to verify that — with an exogenous component of aggregate
demand, g — the new IS function becomes

$$\dot{y} = \alpha(r - \bar{r}) + (1-\alpha)\dot{g}$$

and the "new" Phillips curve becomes

$$\dot{\pi} = -\phi(y - \bar{y}) + \beta(g - \bar{g}).$$

Here (1 − α) is the full-equilibrium ratio of autonomous spending to total
output (that is, α = C̄/Ȳ, with Y = C + G), the positive coefficients φ and β
are combinations of α and θ, and θ is the proportion of firms that cannot
change their price each period.
While there seems to be little to justify assuming that there is no
exogenous component to aggregate demand (α = 1), this is the common
assumption in the literature. We extend that literature in this section — first
by focusing on the basic new synthesis model, and then by considering
what has been called a "hybrid" model — that blends old and new
approaches.
To keep this section self-contained so that readers do not have to
keep referring back to equations that were defined much earlier in the
chapter, the full set of the model's relationships is listed together here.
The basic version of the model is defined by equations (6.12) through
(6.15):

$$\dot{y} = \alpha(r - \bar{r}) + (1-\alpha)\dot{g} \qquad (6.12)$$

$$\dot{\pi} = -\phi y + \beta g \qquad (6.13)$$

$$r = \bar{r} + \mu[\lambda\pi + (1-\lambda)p] \qquad (6.14)$$

$$g = \delta\sin(t) \qquad (6.15)$$

with π = ṗ throughout. These equations define (respectively) the "new" IS
relationship (aggregate demand), the "new" Phillips curve (aggregate supply),
monetary policy (the interest-rate setting rule), and the exogenous cycle in
the autonomous component of demand.
The first two equations have already been discussed. Equation (6.14) is
the central bank's reaction function. We continue to focus on a bank that
is committed to price stability and this is why there is a zero inflation-rate
target term in the interest-rate-setting equation. But some of the remaining
details are slightly different here compared to earlier sections of this
chapter. We have chosen units so that both the (now constant) target price
level and the natural rate of output are zero. This monetary-policy reaction
function states that the bank sets the current nominal interest rate above
(below) its long-run average value whenever either the inflation rate is
above (below) its target value, or whenever the price level is above (below)
its target level.
As noted in earlier chapters, one major point of debate among
monetary policy analysts is whether central banks should pursue an
inflation-rate target or a price-level target. We consider this debate here by
examining alternative values for parameter λ. Inflation targeting is
involved if λ = 1, while price-level targeting is specified by λ = 0. We focus
on the implications of this choice for the economy's short-run built-in
stability properties. The existing literature has provided a thorough
investigation of this policy choice in the face of supply shocks, but there
has been nothing reported regarding demand shocks. It is partly to address
this gap in the literature that we focus on business cycles that are caused
by exogenous variations in autonomous demand, as defined by the sine
curve in equation (6.15). We proceed by deriving the reduced form for
real output, to see how the amplitude of the resulting cycle in y is affected
by changes in the monetary policy parameter (λ).
To analyze the model, we first substitute (6.14) into (6.12) to
eliminate the real interest rate gap. Then, we take the time derivative of
the result, and use (6.13) to eliminate the change in the inflation rate.
Finally, we take one more time derivative and use (6.13) again, to
eliminate the remaining inflation-change term. The result, equation (6.16),
is a linear differential equation that relates y and its time derivatives to g
and its time derivatives. The trial solution, y = B sin(t) + C cos(t),
together with its first and third time derivatives, and equation (6.15)
together with its first and third time derivatives, δcos(t) and −δcos(t), are
all substituted into (6.16). The resulting coefficient-identifying restrictions
determine B and C as functions of the model's parameters.
Illustrative parameter values are needed to assess the resulting amplitude
of the cycle in y. Representative values are: α = 0.8, φ = 0.2 (which, given
the restrictions noted in the third paragraph of this section, imply β =
0.022) and δ = 1.0. With these values, the amplitude of the real output
cycle is about 15% larger if the central bank targets the inflation rate (λ =
1) than it is if the central bank targets the price level (λ = 0). According to
the model, then, the contemplated move from inflation-targeting to price-
level targeting is supported. Long-run price stability is achieved in either
case, and there is a small bonus with price-level targeting — there is a
slight reduction in real output volatility.
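The amplitude comparison can be reproduced with a short calculation that applies the undetermined-coefficient method directly. The sketch below relies on the reconstructed reaction function (6.14), and the response coefficient μ = 0.5 is an assumed value that is not pinned down here, so the ratio it prints is illustrative rather than a replication of the 15% figure; the remaining parameter values are the ones given above.

```python
import numpy as np

# Undetermined-coefficient solution of (6.12)-(6.15).  The policy rule is
# the reconstructed r = rbar + mu*(lam*pi + (1 - lam)*p); mu is an assumed
# response coefficient.
alpha, phi, beta, delta, mu = 0.8, 0.2, 0.022, 1.0, 0.5

def output_amplitude(lam):
    # Posit y = B sin t + C cos t, pi = D sin t + E cos t, p = F sin t + G cos t,
    # substitute into the model, and match the sine and cosine coefficients.
    k = alpha * mu
    A = np.zeros((6, 6)); rhs = np.zeros(6)   # unknowns: [B, C, D, E, F, G]
    A[0, 1] = -1; A[0, 2] = -k*lam; A[0, 4] = -k*(1 - lam)   # ydot, sin terms
    A[1, 0] = 1;  A[1, 3] = -k*lam; A[1, 5] = -k*(1 - lam)   # ydot, cos terms
    rhs[1] = (1 - alpha) * delta
    A[2, 3] = -1; A[2, 0] = phi; rhs[2] = beta * delta       # pidot, sin terms
    A[3, 2] = 1;  A[3, 1] = phi                              # pidot, cos terms
    A[4, 2] = -1; A[4, 5] = -1                               # pdot = pi, sin terms
    A[5, 4] = 1;  A[5, 3] = -1                               # pdot = pi, cos terms
    B, C = np.linalg.solve(A, rhs)[:2]
    return np.hypot(B, C)

# Amplitude of the output cycle: inflation targeting relative to price-level targeting
print(output_amplitude(1.0) / output_amplitude(0.0))   # exceeds one
```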
The reason why this issue requires a formal analysis is that there
are competing effects. These can be readily appreciated at the intuitive
level (as was noted in Chapter 2). Consider an exogenous increase in the
price level. With inflation-rate targeting, such a "bygone" outcome is
simply accepted, and only future inflation is resisted. But with price-level
targeting, future inflation has to be less than zero to eliminate this past
outcome. That is, only under price-level targeting is a policy-induced
recession called for. The reason that this consideration may not be the
dominant one, however, is that the avoidance of any long-term price-level
drift has a stabilizing effect on expectations. For plausible parameter
values, it appears that the former (destabilizing) effect is outweighed by
the latter (stabilizing) effect. As noted in Chapter 3 (section 5), Svensson
(1999) has interpreted this result as representing a "free lunch." The
present analysis supports Svensson.
But it was also noted in Chapter 3 that this free lunch proposition
is not robust. Walsh (2003b) has shown that it disappears when the
analysis shifts from the basic new synthesis model (that we have relied on
above) to a "hybrid" model — at least as far as supply-side shocks are
concerned. Walsh excluded demand shocks from his analysis. For the
remainder of this section, we outline what is meant by a hybrid model,
and then we determine whether Walsh's conclusion is appropriate for
demand disturbances (which is our focus here). We conclude that it is.
It is important that we consider some reformulation of the new
Phillips curve. This is because the strict version of this relationship makes
a prediction that is clearly refuted by empirical observation. Recall that
the strict version is π̇ = −φ(y − ȳ). Consider what this equation implies
during a period of disinflation — a period when the inflation rate is falling
(that is, when π̇ is negative). This Phillips curve predicts that the output
gap must be positive in such situations; that is, that we must enjoy a boom
— not a recession — during disinflations! This is clearly not what we have
observed. Thus, while the new Phillips curve is appealing on micro-
foundations grounds, it is less so on empirical applicability grounds. To
improve the applicability without losing much on the optimization-
underpinnings front, macroeconomists have allowed past inflation, not
just future inflation, to play a role in the new Phillips curve. This
extension avoids the counter-factual prediction concerning booms during
disinflations. Jackson (2005) has shown that a substantial degree of such a
"backward-looking" element is needed in this regard. Since hybrid
versions of the Phillips curve provide this feature, we consider them now.
Hybrid versions of the new synthesis model involve IS and
Phillips curve relationships that are weighted averages involving both the
new forward-looking specification and a backward-looking component.
The latter is included by appealing to the notion of decision-making costs.
It is assumed that there is a proportion of agents who find it too expensive
to engage in inter-temporal optimization. In an attempt to approximate
what the other optimizing agents are doing, the rule-of-thumb agents
simply mimic the behaviour of other agents with a one-period lag.
For example, on the supply side (using π to denote the inflation
rate and writing relationships in discrete time), the optimizers set prices
in the forward-looking (Calvo) fashion, while the rule-of-thumb agents
repeat, with a one-period lag, the pricing behaviour of the others. Giving
the two groups equal weight yields the hybrid Phillips curve (6.13a), in
which current inflation depends on a 50-50 average of expected future
inflation and past inflation, along with the output gap.
Several authors (Amato and Laubach (2003), Gali and Gertler (1999),
Walsh (2003b), Estrella and Fuhrer (2002), Smets and Wouters (2003),
Christiano et al (2005) and Jensen (2002)) have derived versions of a
hybrid that are equivalent to (6.13a) but which involve a more elaborate
derivation. Some lead to the proposition that the coefficient on the output
gap should be bigger in the hybrid environment than it is in the simpler
Calvo setting, while others lead to the opposite prediction. At least one,
Jensen, supports exactly (6.13a). There is a mixed empirical verdict on
alternative versions of the new hybrid Phillips curve. Gali et al (2005)
argue that most of the weight should be assigned to the forward-looking
component, while the results of Rudd and Whelan (2005) lead to the
opposite conclusion. Roberts' (2006) empirical findings support our 50-50
specification. Rudd and Whelan (2006) are quite negative on the general
empirical success of the hybrid models, while Mehra (2004) reaches a
more positive conclusion. Mankiw and Reis (2002) argue that a "sticky
information" approach is more appealing than any of the sticky price
models, and their approach is followed up in Ball et al (2005) and Andres
et al (2005). Given our inability to establish a clear preference for any one
of these hybrid price-change relationships, it seems advisable to use the
intermediate specification that (6.13a) represents, with wide sensitivity
testing on its slope parameters when reporting calibrated results with the
model.
We now explain a representative specification of a hybrid IS
relationship. The log-linear approximation of the resource constraint is
common to both the forward-looking and backward-looking components:

$$y_t = \alpha c_t + (1-\alpha)g_t.$$

The optimizing agents set

$$c_t = c_{t+1} - (r - \bar{r}),$$

and the rule-of-thumb agents mimic what other agents did in the previous
period:

$$c_t = c_{t-1}.$$
Some analysts have shown that these policy implications can be affected when there is a
direct effect of interest rates in the new Phillips curve.
Finally, readers may be surprised by the following feature of the
new synthesis framework. Much attention is paid to the proposition that
the analysis starts from a clear specification of the agents' utility function.
Then, when policies are evaluated with the model, analysts seem to revert
to the "old" habit of ranking the outcomes according to which policy
delivers the smallest deviations of output and the price level from their
full-equilibrium values. Shouldn't an analysis that is based on an explicit
utility function (involving consumption and leisure as arguments) use that
very same utility function to evaluate alternative policies? Indeed, this is
the only approach that could claim to have internal consistency. It is
reassuring, therefore, that Woodford (2003a) has shown how the standard
policy-maker's objective function — involving price and output deviations
— can be derived explicitly as an approximation that follows directly from
the private agents' utility function that underpins the analysis.
6.6 Conclusions
When the New Classical revolution began in the 1970s, strong statements
were made concerning "old fashioned" macroeconomics. For example,
Lucas and Sargent (1979) referred to that work as "fatally flawed", and
King (1993) argued that the IS-LM-Phillips analysis (the first Neoclassical
Synthesis) was a "hazardous base on which to ... undertake policy
advice". But, as Mankiw (1992) has noted, that analysis has been
"reincarnated" (not "resurrected" — since the "new" IS-LM-Phillips
analysis certainly involves important differences in its structure from its
predecessor), and we are left with a framework that is now embraced as a
very useful base for undertaking policy advice. It is "good news" that this
analytical framework involves roughly the same level of aggregation and
abstraction as the older analysis, since this facilitates communication
between actual policy makers (who were brought up in the older tradition)
and modern analysts. In short, there is reason for much more optimism
than there was 30 years ago.
This is not to say that there is no controversy remaining. Indeed,
some macroeconomists still actively debate the relative merits of the "old"
and "new" specifications of the IS and Phillips curve relationships. This is
because, as noted above, the "old" relationships appear to be much more
consistent with real-world data, despite the fact that the "new" rela-
tionships are more consistent with at least one specific version of
microeconomics. It is frustrating to have to choose between the two
criteria for evaluating macro models — consistency with the facts and
consistency with optimization theory. It is for this reason that many
researchers are focused on developing the hybrid models that share some
of the features of both the "old" and "new" approaches.
One purpose of this chapter has been to make the reader aware of
these developments, and to thereby impart some perspective. But since we
wished to limit the technical demands on the reader, for the most part, we
limited the analysis to a set of "partially new" models. We have found that
some of the properties of these models are very similar. This is fortunate.
It means that policy makers do not have to wait until all the controversies
within modern macroeconomics are settled before taking an initiative. It
is true that the answers to some questions are model-specific. For example,
the question as to whether the central bank should target the inflation rate
or the price level receives different answers as we vary the model. In
Chapter 3, using an "old" specification, we concluded that inflation-rate
targeting is preferred. In section 5 of the present chapter, we saw two
things — that a "new" model supports the opposite conclusion, and that a
hybrid model swings the support back in the direction of inflation-rate
targeting. Nevertheless, despite the sensitivity of our recommendations to
changes in model specification, policy makers can still proceed on this
issue. This is because, as already noted, the hybrid model seems to score
the best when evaluated according to both model-selection criteria
(consistency with both constrained maximization theory and data).
The main methodological tool that is used in this chapter is the
phase diagram. We will use this tool in later chapters as well. One of the
most interesting things that has emerged from our use of phase diagrams
is that the impact of government policies can be very different —
depending on whether private agents did or did not anticipate that policy.
This sensitivity can sometimes make it very difficult for macroeconomists
to test their models empirically.
Chapter 7
7.1 Introduction
Perhaps the central question in monetary policy is: Why should the central
bank target a zero (or perhaps some low) rate of inflation? We analyzed this
question in Chapter 5 (section 4). We noted there that estimates of the
"sacrifice ratio" involved in disinflation have averaged about 3.5. This
means that a country incurs a one-time loss of about 3.5 percentage points
of GDP — in some year or other — for each percentage point of reduction
in the full-equilibrium inflation rate. We argued that it is difficult to be
dogmatic about what constitutes the optimal inflation rate, however, since
estimates of the benefits of lower inflation are not at all precise, and they
range both above and below this cost estimate. We add to that earlier
analysis in this section in two ways, and both extensions are motivated by
empirical considerations. For one thing, empirical studies suggest that the
short-run Phillips curve is flatter at low inflation rates. For another,
estimated sacrifice ratios vary a great deal, and one of the considerations
that appears to be important is how abrupt or gradual the disinflation
episode is (see Ball (1994)). In this section, we highlight analyses that
address these issues.
One of the reasons for flatter Phillips curves at low inflation rates
may be the fact that wages appear to be more "sticky" in the downward
direction than they are in the upward direction. The implications of this
proposition can be appreciated by considering the short run Phillips curves
shown in Figure 7.1. They are "curved" — that is, flatter in the lower
quadrant than they are in the upper quadrant, to reflect relative downward
rigidity.
[Figure 7.1: nonlinear short-run Phillips curves (trade-off lines), flatter in the lower quadrant than in the upper quadrant, with one curve drawn for expected inflation of zero and one for a higher expected inflation rate]

Stochastic shocks will push the economy's observation point into the lower quadrant for half
of the time. Even if inflation averages zero, the fact that the Phillips curve
is flatter in the lower part of the diagram means that the unemployment
rate will average more than the natural rate. The moral of this story is that —
with nonlinear short-run Phillips curves and stochastic shocks — there is a
long-run trade-off after all. When we are "too ambitious" and aim too low,
we raise the average unemployment rate.
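The mechanics of this long-run trade-off are easy to see in a minimal simulation. The two slope values below are assumed purely for illustration; the point is only that a trade-off line that is flatter in the lower quadrant, combined with symmetric shocks, raises average unemployment above the natural rate.

```python
import numpy as np

# A piecewise-linear short-run trade-off: u = u* - 0.5*pi when pi > 0,
# u = u* - 2.0*pi when pi < 0 (flatter in the lower quadrant).  Slopes
# are assumed, purely for illustration.
rng = np.random.default_rng(0)
pi = rng.normal(0.0, 1.0, 1_000_000)   # inflation outcomes, averaging zero
u_star = 6.0                            # the natural unemployment rate
u = np.where(pi > 0, u_star - 0.5 * pi, u_star - 2.0 * pi)
print(pi.mean())   # approximately zero
print(u.mean())    # noticeably above u_star
```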
The remainder of this section reviews Barro and Gordon (1983) who
apply the concept of dynamic consistency in policy-making to monetary
policy. Their model abstracts from the non-linearities and stochastic
shocks that have been the focus of the analysis in the last three paragraphs.
It emphasizes that monetary policy should deliver an entirely predictable
inflation rate — whatever target value is selected. The reasoning is best
clarified by considering the following two equations:

$$u = u^* - a(\pi - \pi^e)$$

$$L = (u - ju^*)^2 + b(\pi - 0)^2.$$
The bank's unemployment target, ju*, is assumed to be below the natural
rate (j < 1). Only if this is assumed can we interpret the unemployment as
"involuntary." (The interpretation of unemployment as involuntary is
discussed in much greater detail in Chapter 8. The analysis in that chapter
provides a rigorous defense for the proposition that it is reasonable to
specify parameter j as a fraction. This is important, since the analysis in
the present section breaks down if j is not a fraction.)
The loss function is illustrated in Figure 7.2 by the indifference
curves. Curves that are closest to the point marked "best" represent higher
utility. Suppose that the economy is at point A, with an undesirable level
of unemployment. The bank is tempted to exploit the inflationary
expectations already embedded in existing contracts by engineering an
inflation surprise, moving the economy toward point B, which is on a more
desirable indifference curve than
point A. Once this attempt to move to B gets going, the system gravitates
to point C — which is on the least desirable of the three indifference curves.
The central bank will make the right decision if it focuses on the vertical
long-run Phillips curve. If it does so, and tries to achieve the point of
tangency between this constraint and the indifference map, the bank will
realize that point A is that best tangency point. By focusing on the long
run — and ignoring the fact that it can (but should not) exploit the fact that
individuals have already signed their wage and price contracts and so are
already committed to their inflationary expectations — the bank can deliver
point A (and this is better than point C). The moral of the story is that the
bank should focus on the long run, and not give in to the temptation to
create inflation surprises in the short run.
A formal proof of this proposition can be developed as follows.
The central bank has two options. First, it can be myopic and inter-
ventionist by revising its decision every period — capitalizing on the fact
that the inflationary expectations of private agents are given at each point
in time. Second, it can take a long-term view and be passive — just setting
inflation at what is best in full equilibrium and not taking action on a
period-by-period basis. In each case, the appropriate inflation rate can be
calculated by substituting the constraint (the Phillips curve equation) into
the objective function, differentiating that objective function with respect
to the bank's choice variable, the inflation rate, and setting that derivative
equal to zero. In the myopic version, inflationary expectations are taken as
given (and that fact can be exploited), while in the passive version, the
bank subjects itself to the additional constraint that it not falsify anyone's
expectations. In this case, the π = πᵉ constraint is imposed before, not
after, optimization by the bank.
In the myopic case, the first-order condition (taking πᵉ as given when
differentiating) is

$$\partial L/\partial\pi = -2a(u - ju^*) + 2b\pi = 0.$$

Since πᵉ = π in full equilibrium, u = u*, so the myopic policy generates
π = au*(1 − j)/b and losses equal to

$$L(\text{myopic}) = (u^*(1-j))^2(1 + a^2/b).$$

In the passive case, the first-order condition is (by substituting π = πᵉ and
u = u* into the loss function before differentiation):

$$\partial L/\partial\pi = 2b\pi = 0,$$

so this policy generates π = 0 (the height of point A in Figure 7.2) and
losses equal to

$$L(\text{passive}) = (u^*(1-j))^2.$$
By inspecting the two loss expressions, the reader can verify that losses
are smaller with the "hands off" approach to policy.
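The comparison is easy to verify numerically. The sketch below evaluates the two loss expressions just derived, for illustrative parameter values.

```python
# Barro-Gordon losses under the two policy options; parameter values are
# illustrative (j must be a fraction).
u_star, j, a, b = 6.0, 0.5, 1.0, 1.0

gap = u_star * (1 - j)                # the bank's unemployment ambition, u* - ju*
L_passive = gap**2                     # passive: pi = 0 and u = u*
pi_myopic = a * gap / b                # from the myopic first-order condition
L_myopic = gap**2 + b * pi_myopic**2   # myopic: u = u* in equilibrium, but pi > 0
print(L_passive, L_myopic)             # passive losses are smaller
```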
Despite what has just been shown, individuals know that central
banks will always be tempted by point B. When the central banker's boss
(the government) is particularly worried about an electorate (that is
frustrated by unemployment), there is pressure on the bank to pursue what
has been called the "myopic" policy. Following Fischer and Summers
(1989), let parameter c be the central bank's credibility coefficient. If
people believe that there is no chance that the bank will be myopic, its c
coefficient is unity. If people expect the myopic policy with certainty, the
bank's c coefficient is zero. All real-world central banks will have a
credibility coefficient that is somewhere between zero and one. Thus, in
general, social losses will lie between L(passive) and L(myopic), closer to
L(passive) the higher is the bank's credibility. The resulting loss expression
teaches us three important lessons: losses will be less
with a higher c, a lower a, and a higher b. Let us consider each of these
lessons in turn.
Anything that increases central bank credibility is "good." This
explains why all central bankers make "tough" speeches — assuring
listeners that the bank cares much more about inflation than it does about
unemployment. It also explains why some countries (such as New
Zealand) link the central banker's pay to his/her performance (he/she loses
the job if the inflation target is exceeded). In addition, it explains why
particularly conservative individuals are appointed as central bankers -
even by left-wing governments, and why central banks are set up to be
fairly independent of the government of the day (such as in the United
Kingdom). Finally, it explains why developing countries, and countries
that have had a history of irresponsible monetary policy, often choose to
give up trying to establish credibility — and just use another country's
currency instead of their own.
Anything that makes the family of short-run Phillips curves
steeper (that is, makes the slope expression (−1/a) bigger in absolute
value) is "good." Thus,
the analysis supports the wider use of profit sharing and/or shorter wage
contracts.
Finally, increased indexation is "bad." Twenty-five years ago,
when various western countries were debating the wisdom of fighting
inflation, some argued that it would be best to avoid the temporary
recession that accompanies disinflation. These individuals argued that it
would be better to include a cost-of-living clause in all wage, pension and
loan contracts. With indexed contracts, unexpected inflation would not
transfer income and wealth in an unintended fashion away from
pensioners and lenders to others. Thus, some people argued that
embracing indexation and "living with inflation" would have been better
than fighting it. Our analysis can be used to evaluate this controversy
since widespread indexation would change parameter b. If we had indexed
contracts, any given amount of inflation would result in smaller social
losses. Thus, more indexation means a smaller parameter b. But the final
loss expression indicates that a smaller b is "bad." This is because
indexation makes it more sensible for the central bank to be myopic.
There is always the temptation of point B, and indexation makes the cost
of going for B seem smaller.
We can summarize as follows. More indexation is "good news"
since it makes any given amount of inflation less costly. But more
indexation is "bad news" since it tempts the central bank to choose more
inflation. It is interesting (and perhaps unexpected) that — according to the
formal analysis — the "bad news" dimension must be the dominant
consideration. (This is why economists sometimes use formal analyses.
Algebra is a more precise tool than geometry, and sometimes this means
we can learn more from algebraic analysis. This analysis of indexation is
an illustration of this fact.) Overall, the analysis supports disinflation over
"living with inflation."
Before leaving this analysis of central bank credibility, it is worth
noting several things. First, the analysis does not prove that zero inflation
is best; it assumes it. Another point that is worth emphasizing is that what
this central bank credibility analysis rules out is policy that creates
surprises. It does not rule out discretionary policy that is well-announced
in advance. This distinction can be clarified as follows. Let the inflation
term in the loss function be the average inflation rate, π*, instead of just
this period's inflation, and suppose that the bank follows the announced
reaction function

$$\pi = \pi^* + d(u - u^*),$$
which states that the bank raises aggregate demand (and therefore
inflation) whenever unemployment rises above the natural rate. It is
assumed that this reaction is announced in advance — as an ongoing
decision rule. Thus, it is a policy that is not based on causing private-
sector forecasts to be incorrect. If we substitute this policy reaction
function into a standard Phillips curve,

$$u = u^* - h(\pi - \pi^e),$$

the result is

$$u = u^* - a(\pi^* - \pi^e),$$

where a = h/(1 + dh). The analysis proceeds just as before — using the
revised loss function and this revised Phillips curve. The only thing that is
new is the revised interpretation of parameter a — that is defined in this
paragraph. As before, anything that makes a smaller is "good," and we see
that embracing discretionary policy that is announced in advance (moving
to a higher value for coefficient d) is therefore "good." Overall, then, this
credibility analysis supports discretionary policy — as long as that policy
is a pre-announced ongoing process, not a one-time event. This is why the
policy analysis in earlier chapters of this book has involved policy
reaction functions that were assumed to be fully understood by the agents
in the model.
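The way a pre-announced feedback rule shrinks the effective slope parameter can be checked with a one-line calculation; the value of h and the grid of d values are illustrative.

```python
# Effective Phillips-curve parameter under an announced rule: a = h/(1 + d*h).
h = 1.0
for d in (0.0, 0.5, 1.0, 2.0):
    print(d, h / (1 + d * h))   # a falls monotonically as d rises: "good"
```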
More on central bank independence is available in Fischer (1995)
and McCallum (1995). As a result of this basic model of policy credibility,
we now have an integrated analysis of disinflation which combines
important insights from both Keynesian and Classical perspectives.
Keynesians have emphasized rigidities in nominal wages and prices to
explain the recession that seems always to accompany disinflation. New
Classicals have stressed the inability of the central bank to make credible
policy announcements. Ball (1995) has shown that both sticky prices and
incomplete credibility are necessary to understand the short-run output
effects of disinflation. Indeed, if incomplete credibility is not an issue, Ball
shows that disinflation can cause a temporary boom, not a recession. Only
a fraction of firms can adjust their prices quickly to the disinflationary
monetary policy. But if the policy is anticipated with full credibility, firms
that do adjust know that money growth will fall considerably while their
prices are in effect. Thus, it is rational for them to reduce dramatically
their price increases, and this can even be enough to push the overall
inflation rate below the money growth rate initially. This is what causes
the predicted temporary boom. However, since we know that such booms
do not occur during disinflations, we can conclude that policy credibility
must be a central issue.
The general conclusion to follow from this monetary policy
analysis is that the only dynamically consistent — that is, credible — policy
approach is one that leaves the authorities no freedom to react in a
previously unexpected fashion to developments in the future.
Discretionary policy that is not part of an ongoing well-understood rule
can be sub-optimal because "there is no mechanism to induce future
policy makers to take into consideration the effect of their policy, via the
expectations mechanism, upon current decisions of agents" (Kydland and
Prescott (1977, p. 627)).
This argument for "rules" (with or without predictable feedback)
over "discretion" (unpredictable feedback) has many other applications in
everyday life. For example, there is the issue of negotiating with
kidnappers. In any one instance, negotiation is appealing — so that the life
of the hostage can be saved. But concessions create an incentive for more
kidnapping. Many individuals have decided that the long-term benefits of
a rule (no negotiations) exceed those associated with discretion (engaging
in negotiations). Another application of this issue, that is less important
but closer to home for university students, concerns the institution of final
exams. Both students and professors would be better off if there were no
final examinations. Yet students seem to need the incentive of the exam to
work properly. Thus, one's first thought is that the optimal policy would
be a discretionary one in which the professor promises an exam and then,
near the end of term, breaks that promise. With this arrangement, the
students would work and learn but would avoid the trauma of the exam,
and the professor would avoid the marking. The problem is that students
can anticipate such a broken promise (especially if there is a history of
this behaviour), so we opt for a rules approach — exams no matter what.
Most participants regard this policy as the best one. (For further
discussion, see Fischer (1980).)
We close this section by returning to disinflation policy. In this
further treatment of the issue, we integrate political and macroeconomic
considerations. We include this material for two reasons. First, it shows
how political uncertainty can be a source of incomplete credibility in
economic policy making. Second, it provides a framework for examining
whether gradualism in policy making is recommended or not.
Blanchard (1985a) has presented a simple model which highlights
this interplay. His analysis focuses on the possibility of two equilibria —
both of which are durable since both are consistent with rational
expectations. The essence of this model is that disinflation policy only
works if agents expect low money growth in the future. If a "hard-line"
political party is in power and it promises disinflation, two outcomes are
possible. On the one hand, agents can expect that the disinflation will
succeed and that the anti-inflation party will remain in power. Since both
these expectations will be realized, this outcome is a legitimate
equilibrium. On the other hand, agents can expect that the disinflation will
fail, and that (perhaps as a result) the anti-inflation party will lose power.
Since the change in government means that the disinflation policy is not
maintained, once again, the results validate the initial expectation, and this
outcome is another legitimate equilibrium. Blanchard formalizes this
argument, to see what strategy is advised for the hard-line party.
The basic version of the model is straightforward. Unemployment
depends inversely on the excess of the money growth rate, g, over the
inflation rate, π: u = −γ(g − π). The inflation rate is a weighted average of
the current actual money growth rate and the growth rate that is expected in
the future, gᵉ: π = ωg + (1 − ω)gᵉ. The hard-line party (which is in power
initially) sets g = ḡ. The political opponent, the "soft" party, sets g = g*,
where g* > ḡ.
If agents expect the hard-liners to stay in power, gᵉ = g, so π = g and
unemployment is zero. If agents expect the soft party to gain power,
π = ωg + (1 − ω)g*. Combining this relationship with the unemployment
equation, we have a summary of how unemployment and inflation interact:

$$u = \gamma\left(\frac{1-\omega}{\omega}\right)(g^* - \pi).$$
Suppose that the hard-liners retain power only so long as unemployment
does not rise above some tolerable level; through this last equation, that
tolerance corresponds to a critical inflation rate. To recap — for inflation
rates between that critical value and g*, the hard-liners stay in
power and disinflation succeeds. For all inflation rates lower than the
critical value, there are two equilibria. If the hard-liners are expected to
remain in power,
any chosen inflation rate below g* and u = 0 are the outcomes and the
hard-liners do remain in power. However, private agents may expect the
hard-liners to lose, and in that case we observe the same chosen inflation
rate along with positive unemployment (the value given by the
unemployment-inflation relationship just derived) and the hard-liners do
lose power.
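A small numerical example makes the two self-fulfilling outcomes concrete. All of the values below (γ, ω, ḡ, g*) are assumed purely for illustration.

```python
# Two self-fulfilling outcomes in Blanchard's model: u = -gamma*(g - pi)
# and pi = omega*g + (1 - omega)*g_e, with the hard-liners setting g = g_bar.
gamma, omega = 1.0, 0.5
g_bar, g_star = 0.02, 0.10         # hard-line and "soft" money growth rates

# Outcome 1: agents expect the hard-liners to survive (g_e = g_bar)
pi_1 = omega * g_bar + (1 - omega) * g_bar
u_1 = -gamma * (g_bar - pi_1)      # u = 0, so the hard-liners do survive
# Outcome 2: agents expect the soft party to win (g_e = g_star)
pi_2 = omega * g_bar + (1 - omega) * g_star
u_2 = -gamma * (g_bar - pi_2)      # u > 0, so the hard-liners do lose power
print(u_1, u_2)
```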
Three reactions to this two-equilibria feature are possible. One is
to interpret it as a theoretical underpinning for gradualism: the hard-liners
can avoid any chance of losing power (and therefore ensure that the two-
equilibria range of outcomes never emerges) by only disinflating within
the safe range identified above. (This is Blanchard's reaction.) A second reaction is to
argue that the model should be re-specified to eliminate the two-equilibria
possibility. (Blanchard notes that this is achieved if the probability of the
hard-liners retaining power is less than unity even if unemployment never
exceeds zero.) The third reaction is to argue that we must develop more
explicit learning models of how agents' belief structures are modified
through time. (This is Farmer's (1993) reaction, as he exhorts
macroeconomists to focus directly on self-fulfilling prophecies rather than to
avoid analyzing these phenomena.)
Blanchard's analysis is not the only one to provide a rationale for
gradualism. Our study of stabilization policy when the authority is
uncertain about the system's parameter values (Chapter 3, section 2)
showed that uncertainty makes optimal policy less activist. Drazen and
Masson (1994) derive a very similar result to Blanchard's, in an open
economy model that stresses the duration of the hard-line policy. It is
commonplace to assume that a central bank's credibility rises with the
duration of its hard-line position. This presumption does not always hold
in Drazen and Masson's study, since the longer period of (temporary)
suffering can so undermine support for the central bank's policy that it
becomes rational for everyone to expect that a softening of its stand must
take place. The bank can avoid this loss of credibility only by being less
hard-line in the first place. Drazen and Masson compare their model to a
person following an ambitious diet. When such a person assures us that he
will skip lunch today, is the credibility of that promise enhanced or
undermined by the observation that the individual has not eaten at all for
three days? On the one hand, there is clear evidence of the person's
determination; on the other hand, the probability is rising that the
individual will simply have to renege on his promise. Again, credibility
may be higher with gradualism.
For a more thorough sampling of the literature which integrates
macro theory, credibility and considerations of political feasibility, see
Persson and Tabellini (1994). A recent study that is similar to Blanchard's
is Loh (2002). Also, King (2006) provides a very general and concise
explanation of why discretionary policy creates a multiple equilibria
problem that a rules approach avoids.
This section of the chapter uses the new neoclassical synthesis approach
to consider the implications for output volatility of alternative exchange-
rate regimes. A more detailed version of this material is available in Malik
and Scarth (2006).
The model is defined by equations (7.1) through (7.5). These
equations define (respectively) the new IS relationship, the new Phillips
curve, interest parity, monetary policy (assuming flexible exchange rates),
and the exogenous cycle in autonomous export demand. The definition of
variables that readers have not encountered in earlier chapters is given
following the equations.
A similar expression can be derived for the fixed-exchange-rate version of the model.
The reader can verify that a sufficient, though not necessary, condition for
A₄ to be positive is that intermediate imports be less than half of GDP.
Taking this to be uncontroversial, we make this assumption. Then, a
sufficient, though not necessary, condition for expression (7.10) to be
positive is a restriction on the model's parameters that, as noted above,
the micro-foundations imply must hold. We conclude that the model supports a
flexible exchange rate. It is noteworthy that this conclusion is warranted
for quite a range of monetary policies followed under flexible exchange
rates — all the way from a hard-line approach (absolute price-level
targeting) to a policy that pays significant attention to real-output
outcomes (nominal GDP targeting). Since each one of these options
delivers lower output volatility — with no trade-off in terms of long-run
price stability — the flexible exchange rate policy is supported.
There is a vast pre-new-synthesis literature on this question. The
original analysis — the Mundell-Fleming model — predicted that a flexible
exchange rate would serve as a shock absorber in the face of demand
shocks. As complications such as exchange-rate expectations and supply-
side effects of the exchange rate were added as extensions to this
descriptive analysis, the support for flexible exchange rates — as a
mechanism for achieving lower output volatility — was decreased but (in
most studies) not eliminated. The analysis in this section allows for all
these extensions, and for the dynamic structure that is imposed by the
micro-foundations and inter-temporal optimization that forms the core of
the new synthesis approach. For those who have maintained a preference
for flexible exchange rates throughout this period, the robustness of our
conclusion is reassuring. As we found in the previous chapter, both old
and new analyses often complement each other — at the level of policy
implications.
A number of recent studies have focused on monetary policy in
small open economies. Kirsanova et al (2005) focus on the question of
which price level the central bank should attempt to keep on target — the
price of domestically produced goods or the overall consumer price index.
The former is a sticky variable, while the latter has a jump dimension
since it is affected directly by the exchange rate (through its effect on the
price of imports). Similar closed-economy analyses have addressed the
question: should the price level or the wage level be targeted by monetary
policy? The general answer to these questions is that the central bank
should target whichever nominal variable is the stickiest. After all, welfare
costs arise when the sticky variable cannot adjust to its equilibrium value.
If the central bank never lets that variable get very far from that desired
value, its stickiness cannot matter much.
Devereux et al (2006) apply the new synthesis approach to models
of developing economies; they provide a detailed account of how optimal
monetary policy depends on the extent of exchange-rate pass-through.
Calvo and Mishkin (2003) offer a less model-specific discussion of the
options for developing countries. They argue that the choice between
fixed and flexible exchange rates is likely to be less important than
whether or not the country possesses good institutions and a set of sound
fiscal policies. Finally, Edwards and Levy Yeyati (2005) provide a summary of
the empirical evidence. By comparing the group of countries that have
adopted flexible exchange rates with those that have not, they find that —
despite the differences in monetary policy strategies within the first group
— a flexible exchange rate acts as a shock absorber.
We turn next to fiscal policy and the measurement of budget deficits. The
nominal deficit is

$$D = PG - PT + iB,$$

where P is the price level, G and T are real program spending and real tax
revenue, i is the nominal interest rate, and B is the stock of government
bonds. Expressed as ratios to nominal GDP (denoted by lower-case letters),
the deficit ratio is

$$d = g - t + ib. \qquad (7.11)$$
We consider two fiscal regimes — first, one that involves the government
maintaining a fixed structural, or primary, deficit-to-GDP ratio. This
policy means that (g − t) remains an exogenous constant, and the overall
deficit, d, is endogenous (determined in (7.11)). Substituting (7.11) into
the debt-accumulation equation (7.12) to eliminate d yields

$$\dot{b} = (g - t) + (r - n)b,$$

where r is the real interest rate and n is the economy's real growth rate.
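The stability point can be illustrated by iterating this equation year by year; the parameter values below are assumed purely for illustration.

```python
# Debt-ratio dynamics with an exogenous primary deficit:
# b(t+1) = b(t) + (g - t) + (r - n)*b(t).
primary = 0.02                       # the primary deficit ratio, (g - t)
for r, n in ((0.05, 0.03), (0.02, 0.03)):
    b = 0.5                          # initial debt-to-GDP ratio
    for _ in range(100):             # a century of budgets
        b = b + primary + (r - n) * b
    print(r, n, round(b, 2))         # explodes when r > n; bounded when r < n
```

With r > n the debt ratio grows without bound unless the primary deficit is adjusted, which is the instability discussed below.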
Consider, for comparison, the standard practice in cost-benefit studies.
For a benefit stream, x, that grows with the economy at rate n, the
present value of this item is $\int_0^\infty x e^{(n-R)t}\,dt$, where R is the after-tax real
interest rate. Clearly, if n > R, this present value is infinite (while if
R > n, it equals x/(R − n)). To avoid such
a situation in cost-benefit studies, analysts always assume R > n.
Does internal consistency demand that we make the same
assumption (R > n) when considering macroeconomic policy (for example,
when assessing the stability or instability of bond-financed budget deficits)
as we do when conducting microeconomic policy analysis (for example,
when conducting cost-benefit studies)? If so, we must conclude that bond-
financing of budget deficits with an exogenous primary deficit involves
instability, and so this fiscal regime should be avoided. But a number of
analysts answer this question in the negative. They note that different rates
of interest are involved in firms' and the government's investment
decisions on the one hand, and in the financing of the government debt on
the other. Investment decisions involve risk, and it is customary to
account for risk in cost-benefit calculations by using a "high" discount
rate. Thus, it is not surprising that estimates of the marginal product of
capital are noticeably higher than the rate of return paid on a safe
government bond. It is possible for the average real growth rate to be
consistently below the former and consistently above the latter. If so, it
may be possible to argue that there is no inherent inconsistency involved
in following standard cost-benefit practice, while at the same time
maintaining that a bond-financed deficit — with exogenous program
spending and tax rates — is feasible.
It is difficult to react to this proposition formally, since our
discussion has not involved uncertainty, and so it contains no well-defined
mechanism for maintaining any gap between the marginal product of
capital and a risk-free yield. We can say that during the final quarter of the
twentieth century, government bond yields exceeded real growth rates in
many countries. In this environment, the entire distinction between safe
and risky assets ceases to be important since there is no doubt that bond
financing (along with an exogenous primary deficit) involves instability.
The literal take-off of government-debt-to-GDP ratios during much of that
time is, of course, consistent with this interpretation.
In the end, over long time intervals at least, it seems fair to say
that there is serious doubt concerning the relative magnitude of the growth
rate and the level of government bond yields. Bohn (1995), Ball, Elmendorf and Mankiw (1995), and Blanchard and Weil (2003) have developed models of bond-financed deficits that involve explicit modelling of uncertainty. Ball et al. consider historical evidence for the
United States since 1871. They estimate that this experience is consistent
with being able, with a probability in the 80-90 percent range, to run
temporary deficits and then roll over the resulting government debt
forever. The country can likely grow its way out of a large debt problem without ever having to raise taxes. As a result, the welfare of all generations can be improved. This result is not certain, however. Nor does it
violate the evidence in favour of dynamic efficiency — since there is still a
positive probability that some generations will be worse off if growth is
insufficient to lower the debt-to-GDP ratio.
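The flavour of this experiment can be conveyed with a stylized Monte Carlo sketch. The distribution assumed for the interest-growth gap below is invented purely for illustration; Ball et al. estimate the corresponding probability from historical U.S. data:

# A stylized Monte Carlo in the spirit of the Ball-Elmendorf-Mankiw
# experiment. The distribution of the interest-growth gap (r - n) is
# invented here; the resulting frequency depends entirely on it.
import random

random.seed(0)

def rollover_succeeds(b0=0.5, horizon=100, cap=1.2,
                      mean_gap=-0.003, sd_gap=0.05):
    # Debt is rolled over with a zero primary deficit, so the ratio grows
    # at the realized gap r - n each year. "Success" means the ratio never
    # crosses the (arbitrary) cap within the horizon.
    b = b0
    for _ in range(horizon):
        b *= 1.0 + random.gauss(mean_gap, sd_gap)
        if b >= cap:
            return False
    return True

trials = 10_000
wins = sum(rollover_succeeds() for _ in range(trials))
print(f"debt rolled over successfully in {100 * wins / trials:.1f}% of trials")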
Ball et al. use their result to argue that the usual metaphor
concerning government deficits — that they are like termites eating one's
house — is inappropriate. The termite analogy suggests very gradual but
inevitable disaster. They advocate thinking of deficits like a homeowner's
decision not to buy fire insurance. Living standards can be higher as long
as no fire occurs, but there is a large adverse effect if the fire does occur.
They prefer the fire insurance metaphor because the occurrence of a fire is
not inevitable. Of course, reasonable people can disagree about the
likelihood of the fire. Since we are not sure what caused the productivity
slowdown in the mid-1970s, for example, it may be that extrapolating the
more rapid average growth rates of the previous century into the future is
ill-advised. Similarly, with world capital markets now integrated, and with
the high demand for savings stemming from the developing markets in
Asia and elsewhere, world (real) interest rates may rise above historical
norms for a prolonged period of time. In short, it may be quite imprudent
to put much weight on the 1945-1975 historical experience (which was an
historical anomaly since it involved n > r). If so, the termite analogy is not
so bad after all.
Should the debt ratio be allowed to vary over the business cycle? One of
the central lessons of the Great Depression was that adjusting annual
spending and taxation with a view to maintaining a fixed budget-balance
target "come hell or high water" increases output volatility: spending has
to be cut and taxes raised as the economy slows down, which is exactly
the time we do not want that to happen. The Keynesian message was that
it is better to help balance the economy by balancing the budget over the
time horizon of one full business cycle, not over an arbitrary shorter
period such as one year. Thus, for at least a half century following the
Depression, it was assumed that a rigid annually balanced budget
approach was "obviously" to be avoided. But the Keynesian message has
been increasingly ignored in recent years. As the "hell or high water"
quotation from the Canadian finance minister in 1994 indicates,
governments have reverted to annual budget-balance targets that permit
only very small departures from a more rigid regime. Adoption of the
"Growth and Stability Pact" in Europe has applied similar pressure. As
The Economist magazine has editorialized:
"as the euro area faces the possibility of its first recession ... the stability pact must not only preclude any fiscal easing but even trammel the operation of fiscal 'automatic stabilizers.' That could mean that these countries are required to increase taxes or cut public spending even as their economies slow. That smacks of 1930s-style self-flagellation" (Aug. 25, 2001, p. 13).
Are The Economist's editorial writers correct or are they putting too much
stock in an "old" analysis that has not been modified to make it consistent
with modern standards of rigour? Is it, or is it not, appropriate for the
government to allow cyclical variation in its debt ratio by running deficits
during recessions and surpluses during booms?
There is a long literature assessing the usefulness of Keynesian-
style "built-in stabilizers." For example, 35 years ago, Gorbet and
Helliwell (1971) and Smyth (1974) showed that these mechanisms can
serve as de-stabilizers. Running a deficit budget during a downturn may
well decrease the size of that initial recession. But over time the
government debt must be worked back down, so the overall speed of
adjustment of the economy is reduced. The initial recession is smaller, but
the recovery takes longer. While this literature identifies this trade-off
between a favourable initial impact effect and an unfavourable persistence
effect, it is rather dated, in that expectations are not highlighted, and the
behavioural equations are descriptive, not formally micro-based.
We can re-interpret some of our earlier modelling to make this
analysis somewhat more up-to-date. Consider ongoing shocks, in the
context of the simple model involving perfect foresight and descriptive
behavioural relationships (discussed in Chapter 2, pages 32-34). If we focus on the inflation-targeting (λ = 1) case, we see that the amplitude of the real-output cycle varies directly with the IS-curve parameters α and β. A Keynesian fiscal policy involves taxation depending positively on output, and this (in turn) makes these parameters smaller. We can appreciate this by considering the relationship that lies behind the IS specification: Y = C((1 − t)Y) + I(r) + G. The total differential of this relationship implies that the coefficients in y = −αr + βg must be interpreted as
α = −I′/(1 − C′(1 − t)) and β = 1/(1 − C′(1 − t)).
As is readily seen, the steeper is the tax function (the larger is t), the smaller are the summary coefficients α and β. As a result, allowing for the
"built-in stabilizers" does lower output volatility. Thus, with a somewhat
more up-to-date analysis of fiscal policy (one that allows for the modern treatment of expectations), support for Keynesian stabilizers re-emerges.
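A quick calculation, using hypothetical values for the marginal propensity to consume and for the interest sensitivity of investment, confirms how a steeper tax function shrinks the summary coefficients:

# Hypothetical values: marginal propensity to consume C' = 0.8, and an
# interest sensitivity of investment I' = -0.5. From the total differential
# of Y = C((1-t)Y) + I(r) + G:
#   alpha = -I'/(1 - C'(1-t)),   beta = 1/(1 - C'(1-t)).
c_prime, i_prime = 0.8, -0.5

for t in (0.0, 0.15, 0.30):
    denom = 1.0 - c_prime * (1.0 - t)
    print(f"t = {t:.2f}:  alpha = {-i_prime/denom:.2f}, beta = {1.0/denom:.2f}")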
But, as we have seen concerning monetary policy, the analytical
underpinning for any policy is now viewed as quite incomplete if that
analysis does not involve micro-based behavioural relationships. Lam and
Scarth (2006) have investigated whether the (undesirable) "increased
persistence" property of Keynesian debt policy is bigger or smaller than
the (desirable) decreased impact-effect property, when that regime is
compared to a rigid regime (involving a constant debt ratio) — in a setting
that respects the requirements of the modern approach to business cycle
theory. Here is a summary of their analysis.
Except for the presence of the (log of) government spending, the
model's new IS and Phillips curve relationships are standard:
The remaining equations of the model define how monetary and fiscal
policies are conducted. For monetary policy, an important consideration is
simplification. There is one time derivative in the new IS relationship, two
time derivatives in the new Phillips curve, and one time derivative needed
to define Keynesian fiscal policy (the change in bonds equals the current
deficit). Thus, we need a monetary policy that can reduce the order of
dynamics that is inherent in a macro model that involves both forward-
looking dynamically optimizing agents, and the dynamics of bond-
financed deficits. To this end, we assume that the central bank has a
constant target value for the price level, I-3-, and that the bank adjusts the
interest rate by whatever it takes to ensure that the actual price level
approaches this target according to the following relationship:
= --2(p - (7.15)
The bank ensures that the percentage change in the actual price is
proportional to the percentage gap between the actual and target price
levels. When the time derivative of this policy rule is substituted into the
new Phillips curve, (7.14), we have a convenient static relationship. In the rigid fiscal regime, the government balances its budget at every point in time, so government spending must satisfy
(g − ḡ) = ν(y − ȳ), (7.17)
where ν = τ/(G/Y) = τ/(1 − α). The fact that this relationship involves
government spending falling whenever the economy is in recession is
what leads the analysis to reject a balanced-budget rule and to support the
Keynesian approach.
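A small numerical illustration of (7.17), with invented values for the tax rate and the spending share, shows how tightly a balanced-budget rule forces spending to track output:

# Equation (7.17) with invented numbers: tau = 0.30 and G/Y = 0.25 give
# nu = tau/(G/Y) = 1.2, so a balanced budget forces spending to fall more
# than one-for-one with output.
tau, g_share = 0.30, 0.25
nu = tau / g_share

for gap in (-0.01, -0.02, -0.03):          # log-deviation of output
    print(f"output {100*gap:+.0f}% -> spending must move {100*nu*gap:+.1f}%")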
We now turn to the specification of fiscal policy in a Keynesian
setting. In this regime, temporary budget deficits and surpluses are
permitted. But there is the possibility of instability in the debt-to-GDP
ratio, since the economy has a positive interest rate, and no ongoing real
growth. To avoid this problem, we specify that the Keynesian government
reduces spending — whenever debt rises — by what is necessary to avoid
instability. This policy is specified by G = Ḡ − γ(B − B̄). Recalling that Ḡ = τȲ − (1 − τ)rB̄, we can specify government spending in the Keynesian regime by
(g − ḡ) = −θ(b − b̄). (7.18)
Bond-financing of the deficit requires
Ḃ = G + (1 − τ)rB − τY.
After substituting in the equations for both the actual and full-equilibrium values for spending, and using the fact that ḃ = Ḃ/Ȳ, this bond-issue equation can be re-expressed as
[B̄/Ȳ]ḃ = (1 − τ − γ)(b − b̄) − (τȲ/B̄)(y − ȳ).
when the rigid fiscal regime is in place. Since the three-equation system is
linear, all variables involve the same speed of adjustment. Given equation
(7.15), that adjustment speed is λ. The undiscounted output loss is the
impact effect divided by the adjustment speed, so the overall output
deviation is
overall output deviation = (impact effect)/λ. (7.23)
An analogous calculation applies under the Keynesian fiscal regime, where the dynamics are governed by the coefficient matrix, A, of the three-equation system. Stability requires a condition on this matrix, and we assume that this condition is met. The adjustment speed in this case is −trace(A)/2, which is definitely smaller than λ, the adjustment speed that obtains in the rigid fiscal regime.
We can summarize as follows: the Keynesian approach has both
desirable and undesirable features — it involves a smaller initial recession
but a slower speed with which that recession is dissipated. Since the
overall sum of the output deviations is a measure that gives weight to both
these features, we compare this expression across fiscal policy regimes. In
this Keynesian case, we have
of adjustment back to the natural rate. Historically, it has been difficult for
macroeconomists to evaluate dynamic issues when their models have had
only limited micro-foundations. With its solid grounding in inter-temporal
optimization, however, the new neoclassical synthesis gives analysts more
confidence in their ability to assess dynamic considerations of this sort.
The fact that the speed of adjustment outcome can be the quantitatively
dominant consideration in this fiscal policy debate means that what is now
mainstream macroeconomics offers support for maintaining Europe's
Stability Pact.
7.6 Conclusions
Some of the most central questions that have been debated in the
stabilization policy field over the years are: Is following a rule preferred to
discretionary policy? Is a flexible exchange rate a shock absorber? Are
rigid budget-balance rules required to avoid an exploding debt-to-GDP
ratio? Does the adoption of a budget-balance rule decrease the
effectiveness of fiscal built-in stabilizers? Now that modern macro-
economics has a new paradigm — the New Neoclassical Synthesis —
analysts are returning to these long-standing issues, and checking to see if
the conventional wisdom — which is based on earlier, less micro-based
models — is threatened by the sensitivity test that the new modelling
approach makes possible. The purpose of this chapter has been to explain
how these questions have been pursued within this new framework.
Some very central insights have emerged — such as the whole
question of dynamic consistency in policy making. Part of the chapter
focused on the financing of government budget deficits. We learned how
one common specification of bond-financed fiscal deficits is infeasible.
Not only does this help us interpret fiscal policy, it can bring useful
insights concerning monetary policy. For example, some economists have
relied on this result to offer an explanation of why the disinflation policy
of the early 1980s took so long to work. Sargent and Wallace (1981) note
that if the fiscal deficit is exogenously set and if bond financing can be
used only temporarily, a reduction in money issue today must mean an
increased reliance on money issue in the future. They show that, therefore,
rational agents would not necessarily expect the inflation rate to fall along
with the current money-growth rate. Even in the short run, in the Sargent/Wallace model, inflation can rise as money growth falls.
The ultimate issue raised by Sargent and Wallace is which policy
variables — the fiscal or the monetary instruments — should be set
residually. In the 1980s, macroeconomic policy in many countries
involved imposing monetary policy as a constraint on fiscal decisions — an
approach that led to extreme reliance on bond-financed deficits and a
dramatic increase in debt-to-GDP ratios. The analysis in part of this
chapter indicates that it might have been more appropriate for monetary
policy to be determined residually, as Friedman originally proposed in
1948. As long as fiscal parameters are set so that the budget deficit
averages zero, this arrangement would control the longer-run growth in
the money supply and, therefore, the underlying inflation rate. It would
also have the advantage of avoiding the instabilities associated with
inordinate reliance on bond financing. But these benefits can be had only
if the fiscal policy record is consistent and credible. Friedman gave up
expecting this in 1959, and he switched to supporting an exogenous
monetary policy. Yet the analysis of this chapter suggests that this
exogenous monetary policy can involve a decrease in the built-in stability
properties of the macro-economy. Increased reliance on bond financing
can lead to instability (an unstable time path for the debt-to-GDP ratio).
This problem can be avoided by embracing a rigid rule for the annual
budget balance, and as we have seen, the rigid fiscal regime may not
worsen short-run macroeconomic performance appreciably.
Chapter 8
Structural Unemployment
8.1 Introduction
denote this outside option as b, and we define it more fully below. Thus, the index of worker effort is defined as
q = ((w − b)/b)^a.
Firms hire labour, N, and set the wage, w, to maximize profits, F(qN) − wN. The first-order condition for employment is
∂(profits)/∂N = F′q − w = 0,
which states that firms should hire labour up to the point that its marginal
product has been pushed down to the rental price of labour (the wage).
This is the standard optimal hiring rule (encountered in basic price theory)
except that here, the marginal product expression involves the work-effort
index. To decide on the best wage, we work out
∂(profits)/∂w = F′(∂q/∂w)N − N = 0.
To see the intuition behind this wage-setting rule, we use the other first-
order condition to substitute out F'q = w and get
w = b/(1 − a).
According to this rule, firms must set the wage equal to their workers'
outside option if there is no variability in worker effort (if a equals 0).
This is what is assumed in the standard competitive model of the labour
market. But with variable worker productivity (a > 0), it becomes optimal
to set the wage above the outside option — to induce workers to work hard
(to lower the probability of getting fired by shirking less).
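A numerical check of this wage-setting rule may be useful. The sketch below (not from the text) uses the equivalent formulation that the firm minimizes the cost per efficiency unit of labour, w/q; the grid search recovers w = b/(1 − a):

# Sketch (not from the text): with effort q = ((w - b)/b)**a, the firm's
# wage choice is equivalent to minimizing the cost per efficiency unit of
# labour, w/q. The grid search should recover w = b/(1 - a).
import numpy as np

a, b = 0.1, 1.0
wages = np.linspace(1.001, 2.0, 100_000)
effort = ((wages - b) / b) ** a
w_star = wages[np.argmin(wages / effort)]

print(f"numerical optimum:  {w_star:.4f}")
print(f"formula b/(1 - a):  {b / (1 - a):.4f}")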
The implications for the unemployment rate can be determined
once the workers' outside option is defined. We assume
b = (1 − u)w + ufw,
where f is the unemployment-insurance replacement rate. Combining this definition with the wage-setting rule, w = b/(1 − a), yields
u = a/(1 − f).
This solution for the unemployment rate teaches us three things. First,
unemployment is zero if there is no variability in worker effort (that is, if
parameter a is zero). Second, increased generosity in the unemployment
insurance system (a higher value for parameter f) raises unemployment.
This is because higher unemployment insurance shrinks the relative pay-
off that individuals get by keeping their jobs. As a result, they choose to
shirk more. Knowing this, firms raise the wage in an attempt to lessen this
reaction of their employees. With higher wages, firms shift back along
their (downward sloping) labour demand curve, and hire fewer workers.
(By the way, this does not mean that unemployment insurance is "bad."
After all, with unemployment insurance, any one unemployment spell hurts the individual less. It is just that there is a trade-off: this beneficial effect comes at the cost of an increased frequency of unemployment spells.) The third
implication of the solution equation for the unemployment rate follows
from the fact that it does not include a productivity term, F'. Thus, the
proposition that investment in education (that raises overall productivity)
would lower unemployment is not supported by this analysis. Higher
productivity is desirable because it raises the wages of those who already
have jobs, not because it brings more people jobs. This prediction is
consistent with centuries of economic history. Vast productivity growth
has led to similar increases in real wages, without any significant long-
term trend in the unemployment rate.
We have focused on this model so that we have at least one
rigorous framework for arguing that some unemployment is involuntary.
We will use the theory to evaluate how fiscal policy might be used to
lower unemployment in the next chapter. Before closing the present
discussion of efficiency-wage theory, however, it is useful to extend
Summers' analysis by considering optimization on the part of households
(not just firms). It is preferable that the household variable-work-effort
function be derived, not assumed. This can be accomplished by assuming
that households maximize (πw + (1 − π)b − βbq^γ), where π is the proportion of time that the individual is employed in her current job, and this proportion rises with worker effort: π = q^ψ. The first two terms in the objective function define the individual's income; she receives w if she keeps her current job, and she receives b if she does not. The final term defines the disutility associated with putting effort into one's job. To be compatible with the income components of the objective function, this term is scaled by b. Since it is reasonable to specify that higher work effort increases the probability of keeping one's job, but at a decreasing rate, ψ must be less than one. Similarly, since it is appealing for higher effort to decrease utility at an increasing rate, γ must exceed unity. Household behaviour follows from substituting in the constraint and differentiating the objective function with respect to q. The result is the effort function given above, if a is interpreted as 1/(γ − ψ) and units are chosen so that ψ/(γβ) = 1.
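The following sketch verifies this claim numerically for one hypothetical parameterization (γ = 2, ψ = 0.5, and β chosen so that ψ/(γβ) = 1):

# Numerical check with hypothetical parameters: gamma = 2, psi = 0.5, and
# beta chosen so that psi/(gamma*beta) = 1. The maximizer of
#   q**psi * w + (1 - q**psi) * b - beta*b*q**gamma
# should coincide with ((w - b)/b)**a, where a = 1/(gamma - psi).
import numpy as np

gamma, psi = 2.0, 0.5
beta = psi / gamma                  # imposes psi/(gamma*beta) = 1
a = 1.0 / (gamma - psi)
w, b = 1.3, 1.0

q = np.linspace(1e-6, 1.0, 200_000)
utility = q**psi * w + (1 - q**psi) * b - beta * b * q**gamma
print(f"numerical optimum:            {q[np.argmax(utility)]:.4f}")
print(f"effort function ((w-b)/b)**a: {((w - b) / b) ** a:.4f}")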
We pursue several policy implications of this model of efficiency
wages in the next chapter. But for much of the remainder of this chapter,
we consider alternative ways to model the labour market.
Unions are assumed to maximize the income of a representative
member of the group. This can be defined as w(N/L) + w̄(L − N)/L, where N and L denote employment and the size of union membership respectively, w is the wage paid to union members if they are employed, w̄ is the wage they receive if they are unemployed (which could be interpreted as unemployment insurance), and N/L is the probability of employment. The slope of the union's indifference curves is derived by setting the total differential of the expected income definition to zero. The result is dw/dN = −(w − w̄)/N, which is negative since the union wage cannot be less than the individual's reservation wage.
The union achieves the highest indifference curve by picking the wage
that corresponds to the point at which the labour demand curve is tangent
to an indifference curve (point A in Figure 8.1). Once the wage is set,
firms are free to choose employment. But since the union has taken the
firm's reaction into account, its members know that point A will be
chosen. We can derive what this model predicts concerning the real wage
and employment by including a shift variable — such as A in a revised
production function: AF(N). Comparative static predictions are calculated
by taking the total differential of the labour demand curve and the equal
slopes condition. It is left for the reader to verify that the model does not
predict real wage rigidity. One purpose of examining this model was to
see whether the existence of unions in labour markets leads to wage
rigidity and/or involuntary unemployment. Since the model contains no
explicit explanation of why individuals deal with the firm exclusively
through the union (are they forced to? did they choose to?), it is not
possible to say whether lower employment means higher involuntary
unemployment.
Let us now investigate whether wage rigidity occurs in the co-
operative model of union/firm interaction. The outcome in the previous
model is inefficient since there are many wage/employment outcomes —
all the points within the shaded, lens-shaped region in Figure 8.1 — that
can make both the firm and the union better off than they are at point A.
The co-operative model assumes that the two parties reach an agreement
and settle at one of the Pareto-efficient points that lie along the contract
curve. Completing the model now requires some additional assumption
that defines how the two parties divide the gains from trade. The
additional assumption that is most common (see McDonald and Solow
(1981)) is that the two bargainers reach a Nash equilibrium. Without
specifying some rule of this sort, we cannot derive any predictions about
how the wage level responds to shifts in the position of the labour demand
function.
Employment effects can, however, be derived without any such
additional specification. The equation of the contract curve is obtained by equating the slope expressions for the iso-profit and the indifference curves. With the shift variable for labour's marginal product inserted, this equal-slopes condition is (AF′ − w)/N = −(w − w̄)/N, or AF′ = w̄. The
contract curve is vertical since w does not enter this equation. From this
equation of the contract curve, we see that we can determine the effects
on employment of changes in A and IT without having to specify the
bargaining model that is required for the model to yield any real wage
predictions. It appears that this co-operative union model does not support the hypothesis of real wage rigidity. But because the contract curve is vertical, the theory's employment predictions are the same as those that follow from imposing real wage rigidity in a macroeconomic context: for macroeconomic employment effects, it is as if real wages were fixed.
We complete our analysis of the co-operative model by using the
standard Nash product to derive the condition which determines the
division of the rents between the union and the firm. The function that is
delegated to the arbitrator to maximize involves the product of two items:
first, what the firm can earn in profits if co-operation is achieved (minus
what it gets with no co-operation — zero), and second, the similar
differential for workers. This product can be written as [(w − w̄)N^ψ]^θ V^(1−θ), where V is profits: V = AF(N) − wN, θ is the bargaining power parameter
(θ = 1: unions have all the power; θ = 0: firms have all the power), and ψ is a union preference parameter (ψ = 1: the union is utilitarian in that it values all its members (not just the currently employed); ψ = 0: the union is seniority oriented (only the wages of those currently employed are valued)).
After differentiating the arbitrator's objective function with
respect to w and N and simplifying, we have two labour market equations
(which are the imperfect competition analogues for supply and demand
curves) to determine wages and employment. McDonald and Solow call
these two relationships the efficiency locus and the equity locus. The
equation that defines the contract curve is the efficiency locus. It is
AF′ = w̄ with a utilitarian union, while it is AF′ = w with a seniority-based union. With the seniority-based union, the negotiations pay no attention to employment and the union indifference curves are horizontal lines. Thus, in this case, the labour demand curve and the contract curve coincide, and this version of the system essentially replicates the non-co-operative model. Finally, the equation which defines the division of the rents (the equity locus) is θV = (1 − θ)(w − w̄)N, whether unions are
utilitarian or not.
Pissarides (1998) uses a model of union-firm interaction that
combines features of the two different approaches that have just been
summarized. His model follows the "right to manage" aspect of the non-
co-operative approach, in that it is the firm that chooses the employment
level after the wage has been determined. But the model involves a key
feature of the co-operative approach as well, since the wage is not set
unilaterally by the union. Instead, it is the result of a bargaining process
involving both the union and the employer. A simplified version of
Pissarides' model (involving risk neutrality on the part of the union and a
Cobb-Douglas production function) is explained here. In the first stage,
an arbitrator is appointed to choose the wage which maximizes the
following Nash product function: (I − Ī)^θ (V − 0)^(1−θ). I is the index of the workers' net benefit from the contract. Pissarides' definition of this net benefit is I = wN + (L − N)[(1 − u)w* + uw̄]. As above, N is employment in this firm, and L is union membership. It is assumed that those who do not find employment in this firm seek employment elsewhere. These individuals face probabilities equal to the employment rate, and the unemployment rate, concerning whether they secure another job (and are paid w*) or whether they are without work (and receive unemployment insurance equal to w̄). Ī is what individuals receive if employment at this firm, N, is zero. Thus, (I − Ī) = N(w − (1 − u)w* − uw̄). As far as the
firm's profit is concerned, we have V = Y − wN and the production function is Y = AN^γ.
Differentiating the arbitrator's objective function with respect to
w, and then substituting in the equations that define full equilibrium
(w = w*) and the unemployment insurance system (w̄ = fw), we have:
u = a/(1 − f), where a = [θ(1 − γ)]/[γ(1 − θ)]. We see that this version of
imperfect competition in the labour market yields the same equation for
the natural unemployment rate as did our efficiency-wage model. Despite
this similarity, there are some new results embedded within this
alternative derivation of this reduced form. For example, in this
union/firm interaction interpretation, we see that the natural
unemployment rate is predicted to rise, the higher is the degree of union
power. Thus, if lower structural unemployment is the goal, we might view
the model as providing some support for legislation that is designed to
limit workers' rights.
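An illustrative calculation (all numbers hypothetical) makes the union-power prediction concrete:

# Hypothetical numbers: gamma is the labour exponent in Y = A*N**gamma and
# f is the UI replacement rate. u = a/(1 - f), a = theta(1-gamma)/(gamma(1-theta)).
gamma, f = 0.98, 0.5

for theta in (0.3, 0.5, 0.7):       # union bargaining power
    a = theta * (1 - gamma) / (gamma * (1 - theta))
    print(f"theta = {theta:.1f}:  u = {100 * a / (1 - f):.1f}%")

With these values the natural rate rises from below 2 percent to nearly 10 percent as θ increases.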
The more general point is that many policies that are designed to
lower the natural unemployment rate (some of which we stress in the next
chapter) receive equivalent analytical support — whether one appeals to
efficiency-wage theory or union/firm interaction theory. We can have
more confidence about making applied policy advice when the underlying
rationale for that policy proposal is not dependent on just one
interpretation of the labour market or the other.
u = s/(s + f), where s denotes the job-separation rate.
What search theorists have done is to build a model that makes the job
finding rate, f, endogenous (and based on optimizing behaviour). The
resulting solution for f is then substituted into the unemployment-rate
equation just presented, so that the determinants of the natural
unemployment rate are then exposed.
We need additional notation to define this theory (and we use
Romer's): A — the marginal product of each employed worker, C — the
fixed cost of maintaining a job, and w — the wage rate. It is assumed that
there is no cost of posting a vacancy. The profit associated with each
filled job is given by (A — C — w), and the profit associated with each job
vacancy is (— C). We assume static expectations, so that we can represent
the present value of receiving these flows indefinitely, by simply
multiplying them by (1/r).
The technology of the matching process is specified by:
M = aU^η V^(1−η),
so that the rate at which unemployed individuals find jobs is
f = M/U = aU^(η−1)V^(1−η) = a(u/v)^(η−1) = θ(x),
where x ≡ u/v. Letting V_E and V_U denote the values of being employed and unemployed, and V_F and V_V the values to a firm of a filled job and of a vacancy, the return to holding a job is defined by
rV_E = w − s(V_E − V_U). (8.1)
This equation defines the annual dividend of having a job as the wage
received, and the capital loss as the difference between the value of a job
and the value of not having one (being unemployed). The probability of
sustaining this capital loss is the job separation rate. The annual returns that are associated with the other states are defined in a similar manner.
The value to a firm of maintaining a filled job is
rV_F = (A − C − w) − s(V_F − V_V). (8.2)
The firm's annual profit is the "dividend" and the difference between the
value of a filled job and the value of an unfilled vacancy is the "capital
loss". Again, the separation rate is the probability of sustaining this
capital loss.
The annual return of being unemployed is given by
rV_U = f(V_E − V_U), (8.3)
and the annual return of holding a vacancy is
rV_V = −C + xθ(x)(V_F − V_V). (8.4)
In a steady state, the flow of workers into jobs must equal the flow of separations:
M = sE, or fU = sE. (8.5)
Given that the cost of posting a vacancy is zero, firms must have posted a
sufficient number to ensure that the marginal benefit is zero:
rV_V = 0. (8.6)
Finally, the wage must be set so that the gains of the match are distributed
between individuals and firms in a way that is consistent with the "market
power" of these groups. As in our models of union/firm interaction, we
assume a Nash equilibrium, and for simplicity here, we assume equal
levels of bargaining power. This implies that the wage is set so that an
individual's gain from a match is equal to the firm's gain from the
relationship:
V_E − V_U = V_F − V_V. (8.7)
The solution proceeds as follows. Subtracting (8.3) from (8.1), and (8.4) from (8.2), we have
V_E − V_U = w/(r + s + f), (8.8)
V_F − V_V = (A − w)/(r + s + xθ(x)). (8.9)
Substituting (8.8) and (8.9) into (8.7), and solving for w, we have
w = A(r + s + f)/(2(r + s) + f + xθ(x)). (8.10)
Finally, substituting (8.10) into (8.11) to eliminate w, imposing V_V = 0, and substituting in f = θ(x), xθ(x) for the vacancy-filling rate, and z = A/C, we end with
u = s/(s + θ(x)).
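A numerical sketch, using the matching function as reconstructed above and invented parameter values, traces how steady-state unemployment varies with the unemployment-vacancy ratio:

# Steady-state unemployment u = s/(s + f), with the finding rate generated
# by the matching function reconstructed above. All numbers are invented.
a_match, eta = 0.4, 0.5             # matching efficiency and elasticity
s = 0.02                            # monthly separation rate

for x in (0.5, 1.0, 2.0):           # x = u/v, unemployment per vacancy
    f = a_match * x ** (eta - 1.0)  # theta(x): finding rate falls as x rises
    print(f"x = {x:.1f}:  f = {f:.2f}/month, u = {100 * s / (s + f):.1f}%")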
Some of the labour market models that were surveyed in the three
previous sections of this chapter provide support for the hypothesis of
real wage rigidity, but a fundamental problem for business-cycle theory is
to explain why purely nominal shocks have real effects. The Keynesian
and new neoclassical synthesis approaches to this question rely on
nominal wage and/or price rigidity. Can the models of this chapter apply
in any way to this question?
Some analysts have argued that the theories of real wage rigidity
can apply to nominal wages in an indirect way. When wages are set,
bargaining is about the real wage. But the item that is actually set as a
result of these decisions is the money wage. It is set at the level intended
to deliver the desired real wage, given inflationary expectations. With this
interpretation, we can argue that the theories apply to money wage-
setting, although an additional assumption regarding indexation is
required. We would expect agents to set the money wage with a full
indexing clause so there would be no need to incur errors in inflationary
expectations. But given that catch-up provisions can roughly replace ex
ante indexing formulae, and given that households and firms want to tie
wages to different price indexes, the costs of full indexation are probably
not worth the benefit (as McCallum (1986) has stressed).
Quite apart from the preceding argument, some analysts are
uneasy about applying an adjustment-cost model (such as the one we
explored in Chapter 4) to explain sticky goods prices. Negotiation costs
between buyers and firms are not a feature of reality for many
commodities. Of course the sale of many commodities involves posting
prices, but it does not seem compelling to rest all of sticky-price
macroeconomics on an item that seems rather trivial (such as the cost of
printing new prices in catalogues — the so-called "menu" cost — and the
cost of informing sales staff about price changes). The response that one
can make to the charge that adjustment costs for many nominal prices
cannot be "that important" is simply to demonstrate that even explicitly
small price-change costs can lead to large welfare losses. Akerlof and
Yellen (1985) and Mankiw (1985) provide analysis that is intended to
support this view.
Let us examine a brief summary of the argument (along the lines
suggested by Romer (1993)). Consider a monopolist who must set her
nominal price before the relevant period but who can change that price
later (during the period) at a "small" cost. A situation in which the price
has been set at too high a value is illustrated in Figure 8.2. When the firm
set its price, it did not guess the then-future position of its demand curve
perfectly. As it enters the period analyzed in Figure 8.2, it has already
posted a price equal to OA, but the appropriate price is OB. The firm must
now decide whether making the change is worthwhile. As far as private
profits are concerned, the firm loses an amount equal to area FGH by not
lowering its price to OB. The cost to society of not adjusting the price is
area DGHE — potentially a much bigger amount. It is quite possible for
even quite small adjustment costs to be larger than area FGH but much
smaller than area DGHE. Thus, the social gains from price adjustment
may far exceed the private gains.
Figure 8.2 Menu Costs (price is plotted against quantity per unit of time; the diagram shows the posted price and the demand, marginal revenue, and marginal cost curves)
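The argument can be made concrete with a linear-demand example (all numbers invented); the computed private and social gains correspond to areas FGH and DGHE respectively:

# Linear-demand example (invented numbers): demand P = A - B*Q with
# constant marginal cost c. The private gain from cutting a stale price to
# the optimum corresponds to area FGH; the social gain to area DGHE.
A, B, c = 10.0, 1.0, 4.0

def profit(p):
    return (p - c) * (A - p) / B

def welfare(p):                      # consumer surplus plus profit
    q = (A - p) / B
    return 0.5 * B * q * q + profit(p)

p_opt = (A + c) / 2                  # currently appropriate price ("OB")
p_posted = 8.0                       # stale posted price ("OA")

print(f"private gain from adjusting: {profit(p_opt) - profit(p_posted):.2f}")
print(f"social gain from adjusting:  {welfare(p_opt) - welfare(p_posted):.2f}")
# Any menu cost between 1.00 and 3.50 blocks adjustment even though
# adjustment would raise social welfare.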
There are a couple of reasons why this analysis may not support such
sweeping conclusions. For one thing, Figure 8.2 shows just the case
involving prices remaining too high. It is left for the reader to draw a
diagram in which the existing price is set at too low a value. It is still true
that firms incur a small private cost (in terms of foregone profits) that may
easily be dominated by the menu cost, if they do not raise prices. And it is
still true that the implications of not adjusting price are much larger for
society. But this time, that large area is a gain in welfare. Since prices
may be too low just about as often as they are too high, it may be roughly
the case that menu costs lead to no significant net effect on society's welfare. While this normative issue has been overstated by some
Keynesians, the positive point remains — even seemingly trivial menu
costs may dominate the private benefits of incurring them.
A second issue that has been glossed over in our discussion of
Figure 8.2 is a more detailed focus on how the demand and cost curves
may have shifted to create an initial situation such as that illustrated. One
possibility is that the vertical intercept of the demand curve and the height
of the marginal cost curve shifted down by the same amount. But if real
wages are rigid, and the nominal price does not fall (given menu costs),
the position of the marginal cost curve should not be shifted down at all.
Such a redrawing of Figure 8.2 shrinks the size of area FGH, and so
makes it all the more likely that even small menu costs can be the
dominant consideration. In short, real wage rigidity may increase the
applicability of the menu-cost model to such an extent that the hypothesis
can be said to play a central (if indirect) role in providing the micro
foundations for nominal rigidities.
The intuition behind this result is perhaps best appreciated by
considering an oligopoly. Each firm finds it costly to change its relative
price, since a higher relative price is immediately noticed by its current
customers, while a lower relative price is not widely noticed by the other
firms' customers. Thus, there is a real rigidity — in relative prices. Even if
the nominal rigidity — the actual cost of changing its own nominal price —
is very small, all firms will behave as if this is not the case — because of
the real rigidity. Thus, real rigidities magnify the importance of a little
nominal rigidity.
Alvi (1993) has presented a simple proof that this proposition is
quite general, which we now summarize. Alvi assumes that each firm's
profit function can be written as V(P/P̄, M/P̄), where P, P̄ and M
denote the firm's own price, the economy-wide average price, and the
nominal money supply. The first argument in this function captures
imperfect competition, while the second captures aggregate effects. The
fact that firms care only about relative prices and the real value of money
means that no money illusion is involved. Each firm's optimum outcome
can be written as
P = P̄ H(M/P̄), (8.13)
which indicates that the best value for the firm's price is simply a function of the two items which the firm takes as parameters: P̄ and M. Note that real rigidity (that is, relative price rigidity) is prevalent if H′ is small. We assume P = P̄ = M = 1 initially, and that H′ = h. Then, the total differential of (8.13) implies:
This final equation implies that z = 1 if there are no menu costs (q = 0). With menu costs, however, it implies ∂z/∂h > 0, so a given amount of menu cost results in a high degree of overall nominal rigidity if real rigidities are prevalent (that is, if h is small). We conclude that it is not necessarily
unreasonable to base a theory of business cycles on small nominal
"menu" costs.
Research continues on the microeconomics of menu costs. To
some, the most appealing model of price changes at the individual level is
known as the two-sided (S,s) adjustment rule. It involves the firm only
incurring the fixed cost of adjustment when the gap between the desired
price and the existing one exceeds a critical value (S on the high side, s on
the low side). Heterogeneity among firms can take various forms, such as
differing initial positions within common (S,s) bands, or firm-specific
shocks. As is usual in the aggregation literature, not all specifications lead
to well-defined macro implications.
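As an illustration of how such a rule behaves, the sketch below simulates a single firm whose desired price drifts upward with trend money growth (all parameters invented):

# One firm following a two-sided (S,s) rule (all parameters invented). The
# desired log price drifts up with money growth; the firm pays the menu
# cost and resets only when the gap leaves the band.
import random

random.seed(1)
S, s = 0.05, -0.05                  # trigger points for the gap p* - p
p = p_star = 0.0                    # log posted price and log desired price
changes = 0

for month in range(240):
    p_star += 0.002 + random.gauss(0.0, 0.01)   # trend plus shock
    if not (s < p_star - p < S):                # gap outside the band
        p = p_star                              # reset and pay the menu cost
        changes += 1

print(f"{changes} price changes in 240 months")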
Ball and Mankiw (1994) draw the distinction between time-
contingent adjustment models and state-contingent adjustment models.
The theory we covered in Chapter 4 is an example of the former, while
the (S,s) models are examples of the latter. Ball and Mankiw note that no
robust conclusions have emerged from the literature on state-contingent
adjustment, but that this state of affairs is not necessarily upsetting. This
is because time-contingent adjustment is optimal if the main cost is
gathering information about the state rather than making the actual price
adjustment. Also, in economies with two groups of firms — one making
each kind of adjustment — it turns out that the sluggish adjustment on the
part of the time-contingent firms makes it rational for those monitoring
developments continuously according to the state-contingent model to
behave much like the other group. Thus, it may well be that the quadratic
adjustment cost model of Chapter 4 is not such a bad approximation of a
theory with much more detailed structure.
Another issue that is being researched is whether it matters to
specify explicitly that it is the gathering of information, not the re-setting
of prices, that is costly. Mankiw and Reis (2002) have shown that a "sticky information" version of an expectations-augmented Phillips curve may fit the facts better than a "sticky price" version does. Further, they
demonstrate that this version of a new synthesis model can lead to
different conclusions regarding the relative appeal of alternative monetary
policies.
8.6 Conclusions
who has added a version of efficiency-wage theory to the real business cycle framework — have relied on this proposition to generate more persistence in real variables within their models.
There are three tasks that we address in the remaining chapters.
First, as just noted, we use the models developed here to analyze a series
of policy proposals designed to lower structural unemployment and to
raise the economic position of those on low income. Second, we use some
of these models to investigate the possibility of multiple equilibria. If
theory leads to the possibility that there are two "natural" unemployment
rates — both a high-employment equilibrium and a low-employment
equilibrium — there is an "announcement effect" role for policy. It may be
possible for the government to induce agents to focus on the high-activity
outcome if agents know that the policy maker stands ready to push the
system to that outcome if necessary. It is possible that no action — just the commitment to act — will be needed. Third, we would like to
see whether policies that are geared to reducing structural unemployment
have an undesirable long-run implication. Might these initiatives retard
the productivity growth rate? We examine the first two issues in the next
chapter, and then devote the final three chapters to an analysis of long-
term growth.
Chapter 9
9.1 Introduction
income tax rate. It is left for the reader to verify that these modifications
change the unemployment rate solution to
u = a/[1 − (f/(1 − t))].
This equation implies that an increase in the tax rate raises the natural
unemployment rate. This occurs because higher taxes reduce the relative
pay-off individuals receive from work. To lessen the resulting increase in
worker shirking, firms offer a higher wage, and they hire fewer workers at
this higher price.
The importance of taxes can be illustrated by considering some
illustrative parameter values. Realistic assumptions are: u = .05, f = .50, t
= .15. These representative values are consistent with this model only if a
= .02, which we therefore assume. Now consider fixing a and f at 0.02
and 0.5 respectively, while higher tax rates are considered. The reader can
verify that the unemployment rate rises by one percentage point (to u =
.06) when the tax rate rises by 10 percentage points to 0.25, and the
unemployment rate rises by much more (2 and 2/3 percentage points, to
0.0867) when the tax rate rises by an additional 10 percentage points to
0.35. This thought experiment indicates that one does not need to have
ultra right-wing views to be concerned about efficiency in government.
Only with such efficiency can we have the many valuable services of
government with the lowest possible taxes, and (as this numerical
example suggests) high taxes can very much raise unemployment.
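The arithmetic of this thought experiment is easily reproduced:

# The text's arithmetic: u = a/(1 - f/(1 - t)) with a = 0.02 and f = 0.5.
a, f = 0.02, 0.5

for t in (0.15, 0.25, 0.35):
    u = a / (1.0 - f / (1.0 - t))
    print(f"t = {t:.2f}:  u = {100 * u:.2f}%")   # 4.86% (about .05), 6.00%, 8.67%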
It is instructive to examine the effects that several other taxes
have (or more precisely, do not have) within this basic version of the
efficiency-wage model. With an employer payroll tax, τ, the firms' wage bill becomes wN(1 + τ), and with a sales tax, λ, the wage that concerns households is w* = w/(1 + λ). It is left for the reader to verify that, when
these changes are made in the specification of the efficiency-wage model,
there is no change in the solution equation for the unemployment rate. It
is useful to review the intuition behind why employee payroll taxes do,
but these other taxes do not, affect unemployment. As already noted, both
a more generous unemployment insurance system and a higher employee
payroll tax increase unemployment. Both these measures lower the
relative return from working. To compensate for the deterioration in work
effort that results, firms must raise wages, and this makes a lower level of
employment optimal.
The other taxes do not change the relative return of work
compared to being unemployed. For example, sales taxes must be paid
simply because goods are purchased; it makes no difference how the
purchaser obtained her funds. This is why the natural unemployment rate
is unaffected by the sales tax. Similar reasoning applies to the employer
payroll tax. A cut in this levy increases both the ability of the worker's
employer to pay higher wages and the ability of all other firms to pay that
individual higher wages. Competition among firms for workers forces this
entire increase in ability to pay to be transferred to those already working
(in the form of higher wages). As a result there is no reduction in
unemployment. The same outcome follows for anything that shifts the
labour demand curve without having any direct effect within the workers'
effort function. This is why we stressed in the previous chapter that
increases in general productivity raise wages — and do not lower
unemployment — in this efficiency-wage setting.
These results imply that we can have a lower natural
unemployment rate if we rely more heavily on a sales tax, instead of an
income tax. They also imply that investments in training and education
lead to higher wages, but not to lower unemployment. But before we can
have confidence in such strong predictions, and exhort real-world
authorities to act on this advice, we need to know whether they are
supported by the other theories of the natural unemployment rate.
To check the effects of various fiscal policies in our models of
union-firm interaction, we add a wage-income tax (which, as above, can
also be interpreted as the employee payroll tax), an employer payroll tax,
and a sales tax. As in Chapter 8, the function that is delegated to the
arbitrator to maximize involves the product of two items: first, what the
firm can earn in profits if co-operation is achieved (minus what it gets
with no co-operation — zero), and second, the similar differential in
returns for workers. This product is [(((1— t)w— 17)41 + 2))N" J8 V" .
V = AF(N) — wN(l+z) is profits. 0 is the union bargaining power
parameter, and w is the union seniority parameter.
After differentiating the arbitrator's objective function with
respect to w and N and simplifying, we have the tax-included versions of
the two labour market equations that determine wages and employment.
The equation that defines the contract curve is (1 − t)AF′ = w̄(1 + τ) with a utilitarian union, while it is AF′ = w(1 + τ) with a seniority-based union. The equity relationship is θAF(N) = w(1 + τ)N − (1 − θ)(1 + τ)w̄N/(1 − t)
whether unions are utilitarian or not. In both cases, the level of
employment is unaffected by sales taxes, but it is affected by both the
employer and the employee payroll tax (an increase in either tax raises
unemployment).
We add the same set of taxes to the Pissarides (1998) model of
union-firm interaction that combines features of the co-operative and non-
co-operative approaches. The arbitrator's objective function is still (I − Ī)^θ (V − 0)^(1−θ), and the production function is still Y = AN^γ. There are several changes: (I − Ī) = ((w − (1 − u)w*)(1 − t) − uw̄)N/(1 + λ), V = Y − wN(1 + τ), and the labour demand condition becomes AγN^(γ−1) = w(1 + τ). Proceeding with the same steps as we followed in Chapter 8, we arrive at the revised solution equation for the unemployment rate:
u = a/[1 − (f/(1 − t))].
Proceeding with the solution, and using Hall's (2003) calibration, we reach several policy conclusions. Some are similar to the outcomes that we
discovered in our analysis of efficiency wages and unions. For example,
an increase in the employee payroll tax increases unemployment. Even
the magnitude of this response is comparable to our earlier findings. (If t
is raised from zero to 0.1, the unemployment rate rises by about one half
of one percentage point.) This finding means that our earlier conclusion is
robust across alternative specifications of the labour market. For this
policy, at least, it appears not to matter that there is controversy
concerning how best to model structural unemployment. Policy makers
can proceed without needing to wait for this controversy to be resolved.
But this assurance does not apply to all policy initiatives, since
some of the implications of search theory are different from the policy theorems that followed from the other models. For example, in this
specification, both the employer payroll tax and the interest rate affect the
natural unemployment rate — predictions that are at odds with both the
efficiency-wage model and Pissarides' model of union/firm interaction.
But not all of these differences are important. For example, while the
interest rate matters in the present specification (since a higher interest
rate lowers the benefit of having a job and so raises equilibrium
unemployment), the practical significance of this effect is nonexistent.
The reader can verify that, when Hall's calibration is used, and when the
annual interest rate is raised by even two or three percentage points, the
effect on the unemployment rate is truly trivial. Hence, some of the
differences across models of the labour market are irrelevant for policy
purposes, and we can proceed with the policy prescriptions that
accompanied the earlier specifications.
However, not all the differences across natural unemployment
rate models can be dispensed with in this way. For example, in this search
model, the unemployment rate is increased by the existence of a sales tax.
Again, for Hall's calibration, we find that increasing the sales tax rate from zero to 0.1 makes the unemployment rate rise by about one half of one
percentage point. This is a non-trivial effect, and it differs markedly from
the zero response we discovered with efficiency wages and unions.
This different outcome is important for the general debate on
whether we should follow the advice of many public-finance practitioners
— that we should replace our progressive personal income tax with a
progressive expenditure tax. According to growth theory (models which
usually involve no unemployment, which we examine in chapters 10-12),
this tax substitution should increase long-run living standards. According
to efficiency-wage and union theory, this tax substitution should bring the
additional benefit of lowering the natural unemployment rate. But as just
noted, this fortuitous outcome is not supported by search theory.
However, this search model indicates that the cut in the wage income tax
can be expected to lower unemployment by about the same amount as the
increase in the expenditure tax can be expected to raise unemployment.
Thus, even this model does not argue for rejecting the move to an
expenditure tax. In this limited sense, then, the labour market models give
a single message: with respect to lowering the natural unemployment rate,
we either gain, or at least do not lose, by embracing a shift to expenditure-
based taxation.
One of the primary concerns about the new global economy is income
inequality. Compared with many low-wage countries, the developed
economies (often referred to as the North) have an abundance of skilled
workers and a small proportion of unskilled workers. The opposite is the
case in the developing countries (the South). With increased integration
of the world economies, the North specializes in the production of goods
that emphasize their relatively abundant factor, skilled labour, so it is the
wages of skilled workers that are bid up by increased foreign trade. The
other side of this development is that Northern countries rely more on
imports to supply goods that require only unskilled labour, so the demand
for unskilled labour falls in the North. The result is either lower wages for
the unskilled in the North (if there is no legislation that puts a floor on
wages there) or rising unemployment among the unskilled in the North (if
there is a floor on wage rates, such as that imposed by minimum wage
laws and welfare). In either case, unskilled Northerners can lose income
in the new global economy.
There is a second hypothesis concerning rising income inequality.
It is that, during the final quarter of the twentieth century, skills-biased
technical change has meant that the demand for skilled workers has risen
while that for unskilled workers has fallen. Technical change has
increased the demand for skilled individuals to design and program in
such fields as robotics, while it has decreased the demand for unskilled
workers since the robots replace these individuals. Just as with the free-
trade hypothesis, the effects of these shifts in demand depend on whether
it is possible for wages in the unskilled sector to fall. The United States
and Europe are often cited as illustrations of the different possible
outcomes. The United States has only a limited welfare state, so there is
little to stop increased wage inequality from emerging, as indeed it has in
recent decades. Europe has much more developed welfare states that
maintain floors below which the wages of unskilled workers cannot fall.
When technological change decreases the demand for unskilled labour,
firms have to reduce their employment of these individuals. Thus, Europe
has avoided large increases in wage inequality, but the unemployment
rate has been high there for many years.
Most economists favour the skill-biased technical change explanation for rising income inequality. This is because inequality has
increased so much within each industry and occupation, in ways that are
unrelated to imports. The consensus has been that only 11% of the rising
inequality in America can be attributed to the expansion of international
trade. But whatever the causes, the plight of the less skilled is dire.
Even if globalization is not the cause of the low income problem
for unskilled individuals in the North, it may be an important constraint
on whether their governments can do anything to help them. This is the
fundamental challenge posed by globalization. Citizens expect their
governments to provide support for low-income individuals so that
everyone shares the benefits of rising average living standards. The anti-
globalization protesters fear that governments can no longer do this. The
analysis in this section — which draws heavily on Moutos and Scarth
(2004) — suggests that such pessimism is not warranted. To address this
question specifically, let us assume that capitalists (the owners of capital)
are "rich" and that they have the ability to re-locate their capital costlessly
to lower-tax jurisdictions. Also, we assume that labour is "poor" and that
these individuals cannot migrate to other countries. Can the government
help the "poor" by raising the tax it imposes on the capitalists and using
the revenue to provide a tax cut for the workers? Anti-globalization
protesters argue that the answer to this question is "obviously no." They
expect capital to relocate to escape the higher tax, and the result will be
less capital for the captive domestic labour force to work with. Labour's
living standards could well go down — even with the cut in the wage-
income tax rate. It is worthwhile reviewing the standard analysis, since it
is the basis for recommending that we not tax a factor that is supplied
perfectly elastically (such as capital is for a small open economy). Figure
9.1 facilitates this review. The solid lines represent the initial demand and
supply curves for capital. The demand curve is the diminishing marginal
productivity relationship that is drawn for an assumed constant level of
labour employed. The supply curve is perfectly elastic at the yield that
owners of capital can receive on an after-tax basis in the rest of the world.
Before the tax on capital is levied to finance a tax cut for labour, the
economy is observed at the intersection of these solid-line demand and
supply curves, and GDP is represented by the sum of the five regions
numbered 1 to 5.
When the government raises the tax on capital, capitalists demand
a higher pre-tax return — an amount that is just enough to keep the after-
tax yield equal to what is available elsewhere. Thus, the higher (dashed)
supply curve in Figure 9.1 becomes relevant. Domestically produced
output falls by regions 1 and 3. Capital owners do not lose region 1, since
they now earn this income in the rest of the world. Labour loses regions 3
and 4, but since the tax revenue is used to make an unconditional transfer
to labour, their net loss is just region 3. But this is a loss, so the analysis
supports the propositions that capital is a bad thing to tax, and that it is
impossible to raise labour's income.
But this standard analysis involves the assumption that the policy has no
effect on the number of men and women employed. If the level of
employment rises, capital can be a good thing to tax after all. If there is
unemployment in the labour market, and no similar excess supply in the
capital market, the economy involves a distortion before this policy is
initiated. The existence of involuntary unemployment means that, before
the policy, society's use of labour is "too small," and that (from society's
point of view) profit maximization has led firms to use "too much" capital
compared to labour. A tax on capital induces firms to shift more toward
employing labour and this helps lessen the initial distortion. But can this
desirable effect of the policy package outweigh the traditional cost (the
loss of income represented by region 3 in Figure 9.1)? Figure 9.2 suggests
that this is possible. As long as the wage-income tax cut results in lower
unemployment, each unit of capital has more labour to work with, and so
it is more productive. This is shown in Figure 9.2 as a shift up in the
position of the marginal product of capital curve (shown by the higher
dashed demand curve). In this case, the total income available to labour is
affected in two ways. It is reduced by the shaded triangle, and it is
increased by the shaded parallelogram.
If the gain exceeds the loss, the low-income support policy is
effective after all. It lowers unemployment, it raises the total income of
the "poor" (labour) and it does not reduce the income of the "rich" (the
owners of capital). This approach to low-income support is not a zero-
sum game, in the sense that labour is not helped at the expense of
capitalists. This is because the size of the overall economic "pie" has been
increased by the policy. Labour receives a bigger slice, and capitalists get the
same slice as before. And all of this appears possible — despite the fact
that the government faces the constraints that are stressed by the anti-
globalization protesters. The same result is stressed in Koskela and Schöb
(2002). In their model, unemployment results from unions rather than from
the asymmetric information that underlies our specification below. Related
work, involving search theory instead of either efficiency wages or
unions, is available in Domeij (2005).
There are two crucial questions: First, is it reasonable to expect
that a cut in the wage-income tax rate will lower the long-run average
unemployment rate? We addressed that question in the previous section
of this chapter, and we discovered that the answer is "yes." The second
question concerns whether it is reasonable to argue that the gain can be
bigger than the loss. It is straightforward to answer this question by
combining: one of our models of unemployment (we choose the
efficiency-wage model), a production function that involves both capital
and labour as inputs, a government budget identity, and the hypothesis of
perfect capital mobility. We now define just such a model, and derive the
condition that must be satisfied for this revenue-neutral tax substitution to
provide the Pareto improvement that we have just discussed.
Y = (qN)^(1−γ) K^γ
q = [(w(1 − t) − b)/b]^α
b = (1 − u)w(1 − t) + u f w
(1 − γ)Y/N = w
γY/K = r
u = α(1 − t)/(1 − t − f)
N = 1 − u
r(1 − τ) = r*
G + f w u = τ r K + t w N
Here, τ refers to the tax on the earnings of domestically employed
capital, not an employer payroll tax.
The equations determine Y, N, u, w, b, K, r, q and τ for given
values of the other variables and g = G/Y. We use this system to derive
the effects on the unemployment rate, u, and the average income of a
labourer, b, of a cut in the wage tax rate, t, that is financed by a change in
(presumed to be an increase in) the tax on capital, τ. To accomplish this,
we take the total differential of the system, and eliminate the other
endogenous variable changes by substitution. The goal is to sign du/dt
and db/dt.
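As a concrete illustration of this total-differential exercise, the following minimal numerical sketch solves the model directly, before and after the tax substitution. All parameter values here (α, γ, f, G, r*) are illustrative assumptions, not calibrations from the text; brentq simply finds the capital-tax rate τ that balances the government budget for a given wage-tax rate t.

from scipy.optimize import brentq

alpha, gamma, f, G, r_star = 0.05, 0.3, 0.5, 0.19, 0.05  # illustrative values

def solve(t):
    u = alpha * (1 - t) / (1 - t - f)     # from the effort (Solow) condition
    N = 1 - u
    q = (alpha / (1 - alpha)) ** alpha    # equilibrium effort is a constant

    def budget_gap(tau):
        r = r_star / (1 - tau)            # perfect capital mobility: r(1 - tau) = r*
        Y = q * N * (gamma / r) ** (gamma / (1 - gamma))  # from gamma*Y/K = r
        w = (1 - gamma) * Y / N           # labour demand
        # G + f*w*u = tau*r*K + t*w*N, and tau*r*K = tau*gamma*Y:
        return G + f * w * u - tau * gamma * Y - t * w * N

    tau = brentq(budget_gap, 0.0, 0.7)    # stay on the rising side of the Laffer curve
    r = r_star / (1 - tau)
    Y = q * N * (gamma / r) ** (gamma / (1 - gamma))
    w = (1 - gamma) * Y / N
    b = (1 - alpha) * (1 - t) * w         # average labour income in equilibrium
    return u, b, tau

for t in (0.25, 0.20):                    # finance a 5-point wage-tax cut with tau
    u, b, tau = solve(t)
    print(f"t={t:.2f}: u={u:.3f}  b={b:.3f}  tau={tau:.3f}")

With these (purely illustrative) numbers, cutting t from 0.25 to 0.20 forces τ up from roughly 0.03 to 0.13, the unemployment rate falls from 15.0 to about 13.3 percent, and b rises, which is the Pareto-improving outcome described in the text.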
It turns out that the second of these policy multipliers has an
ambiguous sign. Nevertheless, we can show that the average income of a
labourer must rise, as long as the government does not encounter a
"Laffer curve" phenomenon. What this means is that the government
must raise one tax rate when the other is cut. Laffer believed that the
opposite might be true — that a cut in one tax rate might so increase the
level of economic activity (the overall tax base) that overall revenue
collected would increase — despite the fact that the tax rate was reduced.
Most economists read the evidence as being against this proposition, and
so have concluded that tax cuts do less than fully finance themselves. That
is, most analysts are comfortable assuming that the other tax rate would
have to be raised. We assume that here. To make use of this non-
controversial assumption, we need to work out dτ/dt and to assume that
this expression is negative. It is left for the reader to verify that this
assumption is necessary and sufficient to sign the average-income
response (to ensure that db/dt is negative). The unemployment-rate
response is unambiguous in any event.
We conclude that low-income support policy by governments in
small open economies is quite feasible — despite the constraint imposed
by globalization — as long as the revenue that is raised from taxing capital
is used to lessen the pre-existing distortion in the labour market. Since a
transfer to labour that is not conditional on employment status does not
meet this requirement, using that instrument (in an attempt to provide
low-income support) fails. Nevertheless, the fact that a Pareto
improvement is found with the wage-income tax cut policy means that
the anti-globalization protesters have been premature in their verdict
concerning the inability of governments in small open economies to raise
the economic position of the low-income individuals within their
countries.
Before closing this section, it is worth reviewing why a Pareto
improvement is possible. For an initiative to be both efficiency-enhancing
and equity-enhancing, the economy must be starting from a "second best"
situation. Involuntary unemployment involves just this kind of situation.
We can clarify by recalling an example introduced in the original paper
on this topic (Lipsey and Lancaster (1956)). In a two-good economy,
standard analysis leads to the proposition that a selective sales tax is
"bad". With a tax on the purchase of just one good, the ratio of market
prices does not reflect the ratio of marginal costs, so decentralized
markets cannot replicate what a perfect planner could accomplish —
achieve the most efficient use of society's scarce resources. Society is
producing and consuming "too little" of the taxed good, and "too much"
of the untaxed good. But this conclusion assumes that there is no pre-
existing market distortion — before the tax is levied. A different verdict
emerges if it is assumed that there is an initial market failure. For
example, if one good is produced by a monopolist who restricts output
and raises price above marginal cost, a similar inefficiency is created
(with society consuming "too little" of this good and "too much" of the
competitively supplied good). There are two policies that can "fix" this
problem. One is to try to use the Competition Act to eliminate the
monopoly; the other is to levy a selective excise tax on the sale of the
other product. With this tax, both prices can be above their respective
marginal costs by the same proportion, and society gets the efficient
allocation of resources — even with the monopoly.
So the verdict concerning the desirability of a selective sales tax
is completely reversed, when we switch from a no-other-distortions
situation to a with-other-distortions setting. The analysis in this section
shows that this same logic applies in macroeconomics to factor markets.
With incomplete information in the labour market, labour's price is "too
high" and firms employ "too little" labour. By stimulating employment,
we can increase overall efficiency — have higher GDP — as we improve
equity (by lowering unemployment). This sort of outcome is what led to
the Bhagwati/Ramaswami (1963) theorem. This proposition concerns a
second-best setting, and it states that we have the best chance of
improving economic welfare if the attempt to alleviate the distortion is
introduced at the very source of that distortion. Since the distortion in this
case is that wages are "too high" to employ everyone, one would expect
that the government can improve things by pulling the wage that firms
have to pay to hire labour back down. This takes place in our analysis
since the employee payroll tax cut lessens workers' wage claims. Another
way of saying essentially the same thing is to note that the second-best
problem is the existence of asymmetric information in the labour market,
which leads to a level of employment that is "too low." By directly
stimulating employment, the employee payroll tax cut partially removes
the original distortion at source, and this is why the analysis supports this
initiative.
To compare two competing anti-poverty instruments, an employment
subsidy (at rate s) paid to firms and a basic income (at rate p) paid to all
individuals, the model becomes:
Y = (qN)^(1−γ) K^γ
q = [(w(1 − t + p) − b)/b]^α
b = (1 − u)w(1 − t) + p w
(1 − γ)Y/N = w(1 − s)
γY/K = r
u = α(1 − s)(1 − t + p)/(1 − t)
N = 1 − u
r(1 − τ) = r*
G + p w + s w N = τ r K + t w N
With the guaranteed annual income, workers' wage claims rise, unemployment
rises, and the capital tax needed to finance the transfer drives capital
abroad, leaving each worker less productive. This negative effect must
dominate, so the analysis does not support the introduction of a
guaranteed annual income.
The overall conclusion is that the employment subsidy can be
defended — even when the model highlights the "globalization constraint"
(the fact that the financing of the initiative requires a higher tax rate,
which scares away capital). However, the basic income proposal cannot
be defended. The intuition behind this difference in outcomes is the same
as that which applied in the previous section. The employment subsidy
addresses a distortion (asymmetric information in the labour market) at
source, while the guaranteed annual income does not.
We pursue the analysis of employment subsidies in one additional
way. To motivate this further investigation, we note that our model has
not involved any specialized features that would make it particularly
applicable to developing economies. Development economists have
stressed two things about production possibilities in the lesser developed
countries that we now insert into our analysis. First, they have stressed
that developing countries often have a limited supply of some crucial
input — a problem that cannot be highlighted if we restrict our attention to
the Cobb-Douglas production function (that allows firms to produce each
level of output with any ratio of factor inputs). Second, they have stressed
that workers can be so under-nourished that their effectiveness on the job
can be compromised. The following adaptation of the earlier model
allows for these considerations.
Y = min(V, L/θ)
V = (qN)^(1−γ) K^γ
q = [(w − b)/b]^α (w̄)^η
b = (1 − u)w
(1 − vθ)(1 − γ)Y/N = w(1 − s)
(1 − vθ)γY/K = r
u = α(1 − s)
N = 1 − u
v = v*
r(1 − τ) = r*
G + s w N = τ r K
The first two equations define the production process, and this two-part
specification follows suggestions made by Moutos. The first equation is
the overall production function. It is a Leontief fixed-coefficient rela-
tionship which states that output is equal to the minimum of two inputs —
skilled labour, L, and remaining value-added, V. The latter is a standard
Cobb-Douglas function of unskilled labour, N, and capital, K. Skilled
labour is the "crucial" input; each unit of output requires 0 units of this
input. The remaining value added can be produced with an infinite variety
of unskilled-labour-to-capital ratios. Development economists refer to this
type of specification as an "O-ring" theory of production. This label is
based on the NASA disaster in which the astronauts aboard the Challenger
perished because of one tiny flaw — a damaged O-ring seal. The basic
idea is that — no matter how many and how good all other inputs are — if
one is missing, the entire enterprise amounts to nothing. Skilled labour is
the analogue to the O-ring in our case, and this
is a concise way of imposing the notion that the modern world involves
knowledge-based economies. With profit maximization, firms do not hire
unused factors, so we proceed on the assumption that Y = V = (1/θ)L.
The third equation is the unskilled worker effort index. It is
different from what was specified above in two ways. The non-essential
way is that — for simplicity — we have removed the basic-income and the
unemployment-insurance policies in this specification, and we have also
set taxes on both forms of labour to zero. The novel feature in the effort
relationship is the second argument on the right-hand side. We can think
of this as a "nourishment effect"; with parameter n> 0, it is the case that —
other things equal — the higher is the unskilled labour wage, the more
healthy, and therefore, the more productive are these individuals. There is
no variable-worker-effort function for skilled labour. For one thing, it is
assumed that their wage is high enough for there to be no concern about
their basic health and nourishment. Further, since these individuals have
"good" jobs, there is no reason for them to consider shirking; they enjoy
their work too much. Thus, only the unskilled become unemployed.
The fifth, sixth and seventh equations are the first-order
conditions that follow from profit maximization. Profits are defined as
Y − wN − vL − rK + swN. The next three equations define factor supplies;
unskilled labour is stuck within the country (inelastic supply), and the
other two factors are perfectly mobile internationally. Skilled labour can
earn wage v*, and capital can earn rent r*, in the rest of the world. The
last equation is the government budget constraint. Program spending and
the employment-subsidy expenses (paid to firms for hiring unskilled
labour) are financed by a tax on capital.
We do not expect readers to work out the formal results of this
model. Its structure is spelled out just so that readers are aware of how to
adapt the analysis to a developing economy setting. We simply assert the
result that emerges: the subsidy to firms for hiring unskilled labour brings
both good and bad news. The good news is that the unemployment rate is
reduced. The bad news is that sufficient capital is pushed out of the
country (due to the higher tax levied on capital to finance the employment
initiative) for the average income of an unskilled individual, b, to fall.
Thus, it is harder for governments to provide low-income support in the
very set of countries where pursuing this objective is most compelling.
To keep exposition straightforward, we followed Moutos'
specification of the O-ring feature in the production function (which is
much simpler than the standard specification, as in Kremer (1993a)).
Given this departure from the literature, it is useful to provide some
sensitivity test. To this end, we report a different, less thorough-going,
method of decreasing substitution possibilities within the production
process. We revert to just the one (unskilled) labour and capital
specification, but we switch from Cobb-Douglas to a CES production
function with an elasticity of factor substitution equal to one half (not
unity as with the Cobb-Douglas). The production and factor-demand
functions become
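(The display that follows is a sketch of the general form, with distribution parameter δ and with the tax and subsidy wedges suppressed for brevity; it is not the text's exact parameterization.)

Y = [δ(qN)^(−1) + (1 − δ)K^(−1)]^(−1)
δ q (Y/(qN))^2 = w
(1 − δ)(Y/K)^2 = r

With an elasticity of substitution of one half, factor demands respond only half as elastically to relative factor prices as in the Cobb-Douglas case, so firms find it harder to substitute away from a factor whose cost has risen.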
Suppose that unemployment-insurance generosity is higher when last
period's unemployment was higher: f_t = a u_{t−1} if the previous period's
unemployment rate is below some upper limit (u_{t−1} < ū), while f_t = f̄
once that maximum is reached (u_{t−1} ≥ ū). Since the solution equation
for the unemployment rate in the simpler version of the model with no
taxes is u_t = α/(1 − f_t), the unemployment rate follows a first-order
nonlinear difference equation, u_t = α/(1 − a u_{t−1}), as long as it is
below the upper bound.
(Figure: u_t plotted against u_{t−1}.)
Figure 9.4 Multiple Equilibria with Increasing Returns to Scale
As Figure 9.4 suggests, increasing returns to scale can generate multiple
equilibria. Farmer (1993) has also considered increasing returns to scale —
examining how the New Classical model is affected by this extension.
Multiple stable equilibria emerge.
"Strategic complementarity" is a game-theoretic term which has
been used to interpret many of the multiple-equilibria models. As Cooper
and John (1988) have noted, there is a general reason that coordination
fails in many of these New Keynesian models. The general feature is that
the larger is aggregate production, the larger is the incentive for each
individual to produce. They show that this feature provides a general
underpinning for Keynesian multiplier effects, Oh and Waldman (1994)
explain that it is a basis for slow adjustment, and Alvi (1993) proves that
strategic complementarity can (along with real rigidities) accentuate the
importance of any nominal rigidities that are present in the system.
The most general notion of multiple equilibria is found in models
that involve hysteresis. The simplest model of this sort — provided by
Blanchard and Summers (1986) — is based on the idea that the more
senior members of a union (the "insiders") are the ones who make the
decisions on wages. These workers are assumed to give no weight to the
preferences of members who are no longer seen — having become
unemployed. The insiders' power stems from median-voter
considerations. The wage is set equal to the value that makes the firm
want to hire just the number of workers who were employed in the
previous period. Thus, expected employment in time t, denoted E(N_t),
equals last period's employment, N_{t−1}. An expression for expected
employment can be had by specifying a labour demand function.
Blanchard and Summers assume a simple aggregate demand function for
goods, Y_t = c(M_t − P_t), and constant returns to scale in production; thus, if
units are chosen so that labour's marginal product is unity, Y_t = N_t and
P_t = W_t (where Y stands for output, M for money supply, P for price, and
W for the wage rate). The implied labour demand function is
N_t = c(M_t − W_t). If the expectations operator is taken through this
relationship and the resulting equation is subtracted from the original, we
have E(N_t) = N_t − c(M_t − E(M_t)), since wages are set so that
W_t = E(W_t). Replacing expected employment by N_{t−1}, the time path for
employment becomes

N_t = N_{t−1} + c(M_t − E(M_t)).
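A minimal simulation makes the hysteresis property concrete; the shock volatility and the value of c are illustrative assumptions.

import random

random.seed(1)
c, N = 0.5, 1.0                         # illustrative demand slope and starting employment
path = [N]
for _ in range(20):
    surprise = random.gauss(0.0, 0.02)  # M_t - E(M_t): an unforecastable money shock
    N = N + c * surprise                # N_t = N_{t-1} + c*(M_t - E(M_t))
    path.append(N)
print(" ".join(f"{x:.3f}" for x in path))

Because each money surprise is carried forward forever, employment has no tendency to return to any particular level; its simulated path is a random walk.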
This model is consistent with both the random-walk observation
concerning output and employment rates (Campbell/Mankiw 1987) and
the "money surprise" literature (Barro 1977). Unexpected changes in
aggregate demand affect employment, and there is nothing to pull the
level of employment back to any particular equilibrium (because the
preferences of laid-off workers no longer matter for wage-setting).
Blanchard and Summers consider several variations of this and other
models to test the robustness of the hysteresis prediction. Some of these
extensions allow the "outsiders" to exert some pressure on wage-setting,
with the effect that the prediction of pure hysteresis is replaced by one of
extreme persistence.
Another source of multiple equilibria is "the average opinion
problem" in rational expectations. The economy has many equilibria —
each fully consistent with rational expectations — and each one
corresponding to a possible view of what all agents expect all the others
to take as the going market price (see Frydman and Phelps (1983)).
Ultimately, models such as these lead us to the proposition that
the belief structure of private agents is part of the "fundamentals" — much
like tastes and technology — so that economists should study the several
equilibria rather than search for some rationale to treat all but one as
inadmissible. (Readers saw how common this practice is when learning
phase-diagram methods in Chapter 6.) This plea for further study
inevitably forces analysts to explore how agents gradually achieve
rational expectations. For example, consider even a very limited aspect of
learning — can agents grope their way to knowing the actual values of a
model's structural coefficients if all they start with is knowledge of the
form of the model? Pesaran (1982) surveys some of the studies which
pose this class of questions. Some plausible adaptive learning schemes
converge to unique rational expectations equilibria in some contexts, but
not always. Despite the assumption that agents incur no decision-making
costs, these plausible learning models sometimes lead to cycles and/or
divergence from rational expectations equilibria. With decision making
costs, Pesaran (1987) has stressed that agents can become trapped in a
kind of vicious circle of ignorance. If agents expect further learning is not
economically worthwhile, insufficient information will be accumulated to
properly test that initial belief and therefore to realize that the original
decision may have been mistaken. This implies that systematic forecast
errors may not be eliminated with economically rational expectations.
Most studies of multiple equilibria do not question the entire
concept of rational expectations; instead, they stress how the economy
might shift between them — resulting in fluctuations in aggregate demand
that are ongoing due to the self-fulfilling cycle of revised expectations (as
in Woodford (1991)). This class of models is quite different from both
traditional macroeconomics and New Classical work, where cycles are
caused by exogenous shocks to fundamentals (such as autonomous
spending in Keynesian models or technology in the real business cycle
framework). In the standard approach, it is almost always the case that it
is optimal for agents to absorb these shocks (at least partly) by permitting
a business cycle to exist. After all, stochastic shocks are a fact of life. As
we have seen in earlier chapters, attempts by the government to lessen
these cycles can reduce welfare. But if cycles result solely from self-
fulfilling expectations, then it is much easier to defend the proposition
that the elimination of cycles is welfare improving. Indeed, government
may not need to actually do anything to eliminate the cycles other than
make a commitment to intervene to stabilize if that were ever necessary.
Knowledge of that commitment may be sufficient to cause agents to
expect (and therefore achieve) a non-cyclical equilibrium.
Howitt and McAfee (1992) build a similar model of endogenous
self-fulfilling cycles. It is based on the theory of search behaviour in the
labour market covered in section 8.4 — one that involves a supposedly
"non-fundamental" random variable called (in deference to Keynes)
"animal spirits." A particularly interesting feature of the analysis is that
the equilibrium involving ongoing cycles between the optimistic and
pessimistic phases is stable in a learning sense. Bayesian updating induces
convergence to this equilibrium with positive probability, even though
agents start with no belief that animal spirits affect the probability of
successful matches in the labour market search activity. Models such as
this one provide a solid modern pedigree for even the most (apparently
non-scientific) of Keynesian ideas — animal spirits.
There are many more models of multiple equilibria in the
literature that focus on other topics. But enough has been covered for
readers to appreciate how many public-economics terms — externality,
incomplete information, missing markets, non-convexity, moral hazard,
market power — appear in New Keynesian analyses. The intention is to
meet the challenge posed by the New Classicals — to provide firmer micro-
foundations for macro policies — so that those policies can be motivated on
the basis of some well-identified market failure (a second-best initial condition). This
means that the principles that underlie normative analysis in
macroeconomics are becoming consistent with the principles that underlie
microeconomic policy analysis — an outcome much applauded by New
Classicals.
9.6 Conclusions
The purpose of this chapter has been to use some of the micro-based
macro models of the natural unemployment rate that were developed in
the previous chapter to assess several policies that have been used or
advocated for reducing structural unemployment and/or raising the
incomes of unskilled individuals. Here, we summarize a few of the key
findings.
First, there is considerable analytical support for a policy of
decreasing our reliance on income taxation and increasing that on
expenditure taxes. This tax substitution can be expected to lower the
natural unemployment rate. Second, involuntary unemployment creates a
second-best environment in the labour market. In such a setting, it can be
welfare-improving to impose a distorting tax — even one levied on capital
that is supplied perfectly elastically — if the revenue can be used to reduce
the pre-existing distortion in the other factor market (the labour market).
This environment makes low-income support possible — even for the
government of a small open economy that faces the "globalization
constraint." This second-best analysis was extended so that the appeal of
competing anti-poverty policies — providing employment subsidies to
firms or providing basic income to individuals — could be compared.
Finally, we explored how natural-unemployment-rate analyses
could be modified to consider some of the additional constraints that
confront policy makers in developing economies, and to consider the
possibility of multiple equilibria. The possibility of multiple equilibria
suggests an "announcement effect" rationale for policy. With both a high-
employment equilibrium and a low-employment equilibrium possible —
and with both involving self-fulfilling rational expectations — policy can
induce agents to focus on the high-activity outcome if agents know that
the policy maker stands ready to push the system to that outcome if
necessary. It is quite possible that the commitment to act, rather than any
action itself, is all that is needed.
The natural unemployment rate is a long-run concept. There is
another long-run aspect of real economies that we have ignored thus far in
the book. This feature is the fact that there is ongoing growth — a long-run
trend in the natural rate of output. We focus on this issue — productivity
growth — in the remaining chapters of the book.
Chapter 10
10.1 Introduction
10.2 The Solow Model
Y = F(N, K)
S = I
S = sY
I = K̇ + δK
For example, assuming a Cobb-Douglas function, Y = K^α N^(1−α), we have
y = Y/N = (K/N)^α = f(k). Using the time derivative of the k = K/N
definition, the entire model can be summarized in a single differential
equation:

k̇ = sf(k) − (n + δ)k
Figure 10.1 The Solow Growth Model and the Golden Rule
(The figure plots the saving schedule sf(k) and the required-investment line (δ + n)k; they intersect at the equilibrium k*.)
The second line is a ray out from the origin with a slope equal to (n + δ). This line can be
interpreted as the "required" investment line, if required refers to what is
necessary to keep the capital stock growing at the same rate as the
effective labour supply. With no investment, the capital stock is shrinking
through depreciation at rate δ. Thus, to make up for this, and to have
capital grow at effective labour's growth rate, n, gross investment must
proceed at a rate equal to the sum of these two factors for the system to achieve
balanced growth. Capital-labour ratio k* is the equilibrium, since it marks
the intersection of the actual per-effective-worker saving/investment
schedule with the required saving/investment schedule.
Suppose the economy starts with a capital-labour ratio that is
smaller than the value k* (that is, we start to the left of the equilibrium).
this region of the figure, the height of the actual saving/investment curve
is greater than the height of the required saving/investment line. Thus, the
economy is accumulating more capital than is necessary to keep the
capital-labour ratio constant, and that ratio must, therefore, rise. The
economy moves inexorably toward the k* level of capital intensity.
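A minimal numerical sketch of this convergence, assuming a Cobb-Douglas f(k) = k^α and illustrative parameter values:

alpha, s, n, delta = 0.3, 0.2, 0.01, 0.05      # illustrative parameter values
dt, T = 0.1, 500.0

k_star = (s / (n + delta)) ** (1.0 / (1.0 - alpha))   # where s*f(k) = (n+delta)*k
k, t = 0.5 * k_star, 0.0                               # start to the left of k*
while t < T:
    k += dt * (s * k ** alpha - (n + delta) * k)       # k_dot = s*f(k) - (n+delta)*k
    t += dt
print(f"k* = {k_star:.3f}; simulated k after {T:.0f} periods = {k:.3f}")

Starting from half of k*, the simulated capital-labour ratio closes essentially all of the gap to k*, mirroring the arrows in the figure.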
(Figure 10.2 plots ln C against time: a short-run dip in consumption followed by movement to a higher rising trend line.)
We examine these models in Chapter 11. Before doing so, however, we investigate
how basic exogenous growth theory has been modified to respect the
Lucas critique.
where

θ = p(p + ρ)/[r(1 − τ) − ρ − n]
Figure 10.3 Phase Diagram
(Per-effective-worker consumption c is on the vertical axis; the ċ = 0 locus is shown.)
A cut in interest taxation shifts the ċ = 0 locus to the right; the economy
moves from point 1 to point 2 immediately, and then from point 2 to point
3 gradually. The formal analysis confirms that there is short-term pain
(lower c initially) followed by long-term gain (higher c in the new full
equilibrium), so the result is similar to that pictured in Figure 10.2. The
only difference stems from the fact that c in the present analysis is per-
effective-worker consumption, not per-person consumption (what was the
focus in Figure 10.2). As a result, there is no positive slope to the trend
lines in the version of Figure 10.2 that would apply to variable c.
Nevertheless, the formal analysis accomplishes two things: it confirms
that the short-term pain is followed by long-term gain, and it
facilitates a calculation which determines whether the short-term pain is or
is not dominated by the long-term gain. To answer this question, we
calculate dPV/dτ, where PV is the present-value function:

PV = ∫₀^∞ e^(−λt) ln(c_t) dt

and λ is the social discount rate. This welfare function is based on the
instantaneous household utility function that was involved in the
derivation of the model's consumption function, so there should be no
controversy about this general form of social welfare function. But there is
controversy concerning what discount rate to use.
One candidate is λ = r(1 − τ) − n, the economy's interest rate net of tax
and growth. This is what is used in standard applied benefit-cost analysis
— based as it is on the hypothetical compensation principle. Another
candidate is λ = ρ, each individual's rate of time preference. For internal
consistency, this option must be used if agents live forever (that is, if
p = 0). But in the overlapping-generations setting that is our focus here,
the λ = ρ assumption is not so obviously appealing. It is not without any
appeal in this context, however, since it has been shown that this discount
rate is an integral part of the only time-consistent social welfare function
to be identified in the literature as consistent with this overlapping-
generations structure (Calvo and Obstfeld (1988)). Given the uncertainty
concerning what discount rate to use in public policy analysis, it is
instructive to consider both these options. Two rather different
conclusions emerge (and readers can verify this by following the
procedure outlined in section 6.2).
If the time preference rate of any one generation is used as the
social discount rate, we find that — even allowing for the short-term pain —
agents are better off if the interest-income tax is eliminated. But if the net
market interest rate is used, there is less support for this initiative. This is
because, with the net interest rate exceeding the time preference rate in the
overlapping generations setting, this decision rule involves discounting
the long-term gain more heavily. It turns out that ∂PV/∂τ = 0 in this case,
so the pro-savings policy is neither supported nor rejected when the
hypothetical compensation criterion is used. This result is consistent with
Gravelle (1991) who argues that this tax substitution has more to do with
distribution than efficiency.
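The calculation is easy to reproduce numerically. In the following minimal sketch, the consumption path and both discount rates are illustrative assumptions, chosen only to mimic a policy with short-term pain and long-term gain.

import math

rho, r_net = 0.02, 0.04        # assumed time-preference and net-of-tax-and-growth rates

def delta_pv(lam, horizon=1500.0, dt=0.05):
    # Present value of ln(c_t) relative to the old path, where c_t = 1.
    total, t = 0.0, 0.0
    while t < horizon:
        c = 1.02 - 0.03 * math.exp(-0.02 * t)   # c(0)=0.99 (pain), c(inf)=1.02 (gain)
        total += math.exp(-lam * t) * math.log(c) * dt
        t += dt
    return total

for lam in (rho, r_net):
    print(f"lambda = {lam:.2f}: change in PV = {delta_pv(lam):+.3f}")

The change in PV is clearly positive when λ = ρ but approximately zero when λ equals the higher net interest rate, reproducing the knife-edge verdict reported above.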
We return to this issue — the support or lack thereof for pro-
savings initiatives — in Chapter 12. In that chapter we will add interesting
features to the tax-substitution analysis. First, the equilibrium growth rate
will be endogenous, so fiscal policies will affect growth permanently.
Second, we will consider two groups of households — one very patient and
the other much less so. With each group having its own rate of time
preference — one above the economy's net interest rate, and the other
below it — we can be more explicit when we compare the short-term pain
and the long-term gain of fiscal policies that raise national savings. In the
meantime, we extend the present exogenous-growth model with just one
class of households so that we can consider a policy of government budget
deficit and debt reduction, for a small open economy.
As above, all variables are defined as ratios — with the denominator being
the quantity of labour measured in efficiency units. Units of output are
chosen so that, initially, these ratios can be interpreted as ratios to GDP as
well. The new variable is a, the nation's foreign debt ratio.
The first equation is the private-sector consumption function —
very similar to what has been discussed above. There is one difference
here; there is an additional component to non-human wealth. In addition
to the domestically owned part of the physical capital stock (k — a), there
is the stock of bonds issued by the government to domestic residents (b).
The second equation states that firms maximize profits and hire
capital until the marginal product equals the rental cost. For this
application, we focus on a small open-economy setting. As a result, the
interest rate is determined from outside (it is pinned down by the foreign
interest rate and the assumption of perfect capital mobility). Since k is the
only endogenous variable in the second equation, this optimal-hiring-rule
relationship pegs the capital stock. Thus, in analyzing domestic policy
initiatives, we take k as a constant (and set k̇ = 0).
The third equation combines the GDP identity with the
accumulation identity for foreign debt (both written in ratio form). This
version of the accumulation identity states that the foreign debt-to-GDP
ratio rises whenever net exports fall short of the pre-existing interest
payment obligations to citizens in the rest of the world. The interest
payment term reflects the fact that — even when net exports are zero — the
foreign debt ratio rises if the growth rate of the numerator (the interest
rate paid on that debt) exceeds the growth rate of the denominator (the
GDP growth rate, n). Net exports are defined by the expression in square
brackets in the third equation.
The final two equations define the government accounting
identities (and they were discussed in Chapter 7 (section 4)).
As noted, to avoid advanced mathematics, we ignore the details
involved in the dynamic approach to full equilibrium, and focus on the
long run. Once full equilibrium is reached, all aggregates are growing at
the same rate as overall GDP (at rate n), so ongoing changes in the ratios
(all the dotted terms in these five equations) are zero. The five equations
then determine the equilibrium values of c, k, a, b and one policy variable
— which we take to be t. We take the total differential of the system, we set
all exogenous variable changes except that in d to zero, and we eliminate
the changes in a, b and t by substitution. The result is:
Pro-savings initiatives often take the form of the government
using the tax system to stimulate private saving, instead of boosting
public-sector saving via deficit reduction. Some people oppose these tax
initiatives on equity grounds; they believe that only the rich have enough
income to do much saving, and so they are the only ones who can benefit
from these tax policies. Those who favour pro-savings tax concessions
argue that this presumption is incorrect; indeed they argue that most of the
benefits go to those with lower incomes. But since the process by which
these benefits "trickle down" the income scale is indirect, they argue,
many people do not understand it and thus reject these tax initiatives
inappropriately. We now examine whether this trickle-down view is
correct, first in a closed economy, and then in a small open economy.
Figure 10.5 shows the market for capital; the demand curve's
negative slope shows diminishing returns (the fixed quantity of labour
being shared ever more widely), and the supply curve's positive slope
captures the fact that saving is higher with higher returns. Equilibrium
occurs at point E. With just two factors of production, capital and labour,
the area under the marginal product of capital curve represents total output.
Thus, GDP is the entire area under the demand curve up to point E in
Figure 10.5. Further, since each unit of capital is receiving a rate of return
equal to the height of point E, capital's share of national income is the
rectangle below the horizontal line going through point E. Labour gets the
residual amount — the triangle above this line. A tax policy designed to
stimulate savings shifts the capital supply curve to the right, as shown in
Figure 10.5. Equilibrium moves from point E to point A, and total output
increases by the additional area under the marginal product curve (that is,
by an amount equal to the shaded trapezoid in Figure 10.5). So the pro-
saving tax initiative does raise per-capita output. But how is this
additional output distributed? The owners of capital get the dark shaded
rectangle, and labour gets the light shaded triangle. So even if capitalists
do all the saving and are the apparent beneficiaries of the tax policy, and
even if workers do no saving, labour does get something.
Furthermore, labour's benefit is not just the small shaded triangle
in Figure 10.5. Being more plentiful, capital's rate of return has been bid
down to a lower level. Since that lower rate is being paid on all units of
capital, there has been a transfer from capital to labour of the rectangle
formed by the horizontal lines running through points E and A. So, after
considering general-equilibrium effects, we see that capital owners may
not gain at all; labour, on the other hand, must gain. We conclude that, in
a closed economy (that can determine its own interest rate), the benefits of
pro-savings tax initiatives do trickle down.
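The geometry can be checked with a minimal numerical example; the linear marginal-product schedule and the two capital stocks are assumptions chosen for illustration.

a, b = 0.12, 0.01                 # assumed MPK schedule: r = a - b*K
K0, K1 = 4.0, 5.0                 # equilibrium capital before/after the saving shift

def gdp(K):
    return a * K - 0.5 * b * K * K     # area under the demand (MPK) curve

for K in (K0, K1):
    r = a - b * K                       # rate of return read off the demand curve
    print(f"K={K:.1f}: GDP={gdp(K):.4f}, capital income={r*K:.4f}, "
          f"labour income={gdp(K) - r*K:.4f}")

In this example labour income rises from 0.080 to 0.125; capital income happens to rise too, but since the lower rate of return applies to all units of capital, capital owners can end up no better off under other parameter choices, exactly as the figure suggests.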
Figure 10.6 The Size and Distribution of Income: An Open Economy
(Two panels, each with the rate of return (Percent) on the vertical axis and the Quantity of Capital on the horizontal axis, showing a horizontal Foreign Supply line and a Demand (Marginal Product) curve.)
In all our models considered thus far in the book, we have assumed that
there are only two inputs in the production process, and that both can
expand without limit. Many concerned citizens see this as a fundamental
limitation of mainstream economics. This section briefly considers this
issue — in two stages. In the first, we introduce a third factor — land. It is
assumed that the quantity of land cannot grow, and we ask whether this
fact necessarily brings the growth in living standards to a halt in the long
run. In the second stage, we consider a bigger challenge. Instead of
considering a factor that cannot grow, we focus on one whose supply
continually shrinks. This is the appropriate assumption for non-renewable
resources. Again, the focus of the analysis is on whether ongoing growth
remains possible and, if so, whether it is likely. Our treatment follows that in Jones
(2002, 170-177) very closely.
Fixed land (denoted by T) is included in the following (otherwise
standard) production function:

Y = B K^α T^β N^(1−α−β)
B is the exogenously determined productivity index (so Ḃ/B = γ).
Land is fixed, and labour (the population) grows at a constant rate (Ṫ = 0
and Ṅ/N = z). We wish to focus on balanced growth paths (where output
and capital grow at the same rate), so we re-express the production
function in a way that highlights the capital-output ratio. After
dividing through by Y^α, we have

Y = B^(1/(1−α)) (K/Y)^(α/(1−α)) T^(β/(1−α)) N^((1−α−β)/(1−α)).

After taking logs and then time derivatives, and noting that T and (K/Y)
are constant, we get an expression for the output growth rate (that is, for
the sum of the per capita output growth rate plus z):

Ẏ/Y = γ* + (1 − β*)z

where γ* = γ/(1 − α) and β* = β/(1 − α). Subtracting z from both sides,
we end with

ẏ/y = γ* − β*z.

Living standards can keep rising despite the fixed supply of land, then, as
long as γ* exceeds the growth drag β*z. When the fixed factor is instead a
non-renewable resource, E, drawn from a dwindling stock R according to
E = θR (so that Ṙ = −E and Ė/E = −θ), the production function becomes

Y = B K^α E^β N^(1−α−β)

and the same steps lead to

ẏ/y = γ* − β*(θ + z).
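To get a feel for magnitudes, consider some assumed illustrative values (not estimates): α = 0.3, β = 0.1, z = 0.01 and θ = 0.005. Then β* = 0.1/0.7 ≈ 0.143, and the drag term is

β*(θ + z) ≈ 0.143 × 0.015 ≈ 0.002,

about one-fifth of a percentage point of growth per year.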
This result indicates that the drag on growth is bigger when the natural
resource is shrinking. Nordhaus (1992) has estimated these parameters,
and he concludes that the annual growth-retarding effect (the second term
on the right-hand side) is about one-third of one percentage point. The
implication is that, as long as the productivity growth rate can be expected
to exceed this amount, we can enjoy rising living standards — even though
we are running out of non-renewable resources. A second reassuring point
can be made. If resources really are running out as we have assumed in
this analysis, the real prices of these resources should be increasing. In
many cases they are not. In other words, the rate of discovery of new
supplies appears to be dominating our rate of using the resources. These
observations suggest that the model may have focused on a case that is
more stringent than what we have yet had to confront. On the other hand,
the Cobb-Douglas function implies an elasticity of factor substitution that
may very much overstate how easy it is for firms to make do with less of
the dwindling resources, and there has been no mention of pollution
externalities. Thus, it is worrisome that concern about non-renewable
resources is left out of the standard analyses of fiscal policy and growth.
Despite this concern, space limitations make it necessary for us to follow
this convention in the remaining two chapters of the book.
It is worth noting that this discussion of non-renewable resources
has involved our switching from exogenous-growth to endogenous-growth
analysis, since in this case the full-equilibrium growth rate is affected by
a policy parameter, θ. We pursue a more systematic exploration of
endogenous growth in the next chapter.
10.6 Conclusions
The purpose of this chapter has been to position readers so that they can
benefit from our exploration of "new" growth theory in the remainder of
the book. Traditional growth analysis is "old" in that (originally) it lacked
formal optimization as a basis for its key behavioural relationship — the
savings function — and (even today) it involves a rate of technological
progress that is exogenous. In the analysis that has been covered in this
chapter, we have seen that the first dimension of oldness has been
removed through the addition of micro-foundations. We consider ways of
endogenizing the rate of technical progress in the next chapter.
The basic policy prescription that follows from both the original
and the micro-based version of traditional analysis is that pro-savings
policies can be supported. These initiatives result in a temporary increase
in the growth rate of consumption, and a permanent increase in the level
of per capita consumption. Calibrated versions of these models support
the conclusion that, even without a permanent growth-rate effect, higher
saving leads to quite substantial increases in average living standards. If
higher saving leads to capital accumulation (as it does in a closed
economy), even a poor labourer who does not save benefits from a pro-
savings fiscal policy. Such individuals benefit indirectly, since they have
more capital with which to work. But if the higher saving leads only to
increased domestic ownership of the same quantity of capital, the poor
labourer is not made better off. So, in some circumstances, the increase in
average living standards does not involve higher incomes for everyone.
Our task in the next chapter is to establish whether this basic conclusion —
that a pro-savings policy is supported in models that do not stress income
distribution issues, but it may not be otherwise — holds in an endogenous
productivity growth setting.