Foreword
This is an incomplete collection of my lecture notes for various courses in the field of econometric production analysis. The notes may still contain many typos, errors, and inconsistencies; please report any problems to [email protected]. I am grateful to my former students, who helped me to improve my teaching and these notes through their questions, suggestions, and comments. Finally, I thank the R community for providing so many excellent tools for econometric production analysis.
January 15, 2014
Arne Henningsen
Contents

1 Introduction 10
  1.1 Objectives of the course and the lecture notes 10
  1.2.3 Vectors 13
  1.2.4 Simple functions 14
  1.2.7 Functions 18
  1.2.9 Extension packages 19
  1.3 Data sets 20
    1.3.2.2 Mean-scaling Quantities 23
    1.3.2.5 Aggregating Quantities 24
  1.4.2 Quasiconcavity 26
  1.4.3 Delta Method 26

2 Primal Approach: Production Function 28
  2.1 Theory 28
    2.1.1 Production Function 28
    2.1.2 Average Products 28
    2.1.3 Total Factor Productivity 28
    2.1.4 Marginal Products 29
    2.1.5 Output Elasticities 29
    2.1.6 Elasticity of Scale 29
    2.1.9 Elasticities of Substitution 30
  2.2 Productivity Measures 36
    2.2.1 Average Products 36
  2.3 Linear Production Function 40
    2.3.1 Specification 40
    2.3.2 Estimation 40
    2.3.3 Properties 40
    2.3.5 Marginal Products 42
    2.3.6 Output Elasticities 42
    2.3.7 Elasticity of Scale 45
  2.4 Cobb-Douglas Production Function 55
    2.4.1 Specification 55
    2.4.2 Estimation 55
    2.4.3 Properties 56
    2.4.5 Output Elasticities 57
    2.4.6 Marginal Products 57
    2.4.7 Elasticity of Scale 58
  2.5 Quadratic Production Function 78
    2.5.1 Specification 78
    2.5.2 Estimation 79
    2.5.3 Properties 81
    2.5.5 Marginal Products 82
    2.5.6 Output Elasticities 84
    2.5.7 Elasticity of Scale 84
  2.6 Translog Production Function 96
    2.6.1 Specification 96
    2.6.2 Estimation 96
    2.6.3 Properties 98
    2.6.5 Output Elasticities 99
  2.7.4 Summary 121

3 Dual Approach: Cost Functions 127
  3.1 Theory 127
  3.2.1 Specification 128
  3.2.2 Estimation 128
  3.2.3 Properties 129
  3.3.1 Specification 146
  3.3.2 Estimation 147
  3.3.3 Properties 147
  3.4.1 Specification 149
  3.4.2 Estimation 150

4 Dual Approach: Profit Function 171
  4.1 Theory 171
  4.3.1 Specification 173
  4.3.2 Estimation 174
  4.3.3 Properties 174
  4.4.1 Specification 187
  4.4.2 Estimation 188
  4.4.3 Properties 188

5 Stochastic Frontier Analysis 192
  5.1 Theory 192
  5.2.1 Specification 195
  5.3.1 Specification 208

6 Panel Data and Technological Change 221
  6.1.2.1 Pooled Estimation of the Translog Production Function with Constant and Neutral Technological Change 228
  6.1.3 Translog Production Function with Non-Constant and Non-Neutral Technological Change 235
    6.1.3.1 Pooled Estimation of a Translog Production Function with Non-Constant and Non-Neutral Technological Change 235
  6.2.3 Translog Production Frontier with Non-Constant and Non-Neutral Technological Change 260
1 Introduction
1.1 Objectives of the course and the lecture notes
Knowledge about production technologies and producer behavior is important for politicians, business organizations, government administrations, financial institutions, the EU, and other national and international organizations that need to know how contemplated policies and market conditions can affect production, prices, income, and resource utilization in agriculture as well as in other industries. The same knowledge is relevant for the consultancy of individual firms, which may want to compare themselves with other firms and their technology with the best-practice technology.

The participants of my courses in the field of econometric production analysis will obtain relevant theoretical knowledge and practical skills so that they can contribute to the knowledge about production technologies and producer behavior. After completing my courses in the field of econometric production analysis, the students should be able to:

- use econometric production analysis and efficiency analysis to analyze various real-world questions,
- interpret the results of econometric production analyses and efficiency analyses,
- choose a relevant approach for econometric production and efficiency analysis, and
- critically evaluate the appropriateness of a specific econometric production analysis or efficiency analysis for analyzing a specific real-world question.

These lecture notes focus on practical applications of econometrics and microeconomic production theory. Hence, they complement textbooks in microeconomic production theory (rather than substituting for them).
> sqrt( 2 )   # square root
[1] 1.414214
> 2^(1/2)   # the same
[1] 1.414214
> 2^0.5
[1] 1.414214
> log( 3 )   # natural logarithm
[1] 1.098612
> exp( 3 )   # exponential function
[1] 20.08554

The commands can span multiple lines. They are executed as soon as the command can be considered complete:

> 2 +
+ 3
[1] 5
> ( 2
+ + 3 )
[1] 5
> a = 4
> a
[1] 4
> b = 5
> b
[1] 5
> a * b
[1] 20
In these lecture notes, I stick to the traditional assignment operator, i.e. the arrow symbol (<-).
Please note that R is case-sensitive, i.e. R distinguishes between upper-case and lower-case
letters. Therefore, the following commands return error messages:
> A
> B
> Log(3)
> LOG(3)
1.2.3 Vectors
> v <- 1:4
> v
[1] 1 2 3 4
> 2 + v
[1] 3 4 5 6
> 2 * v
[1] 2 4 6 8
> log( v )
[1] 0.0000000 0.6931472 1.0986123 1.3862944
> w <- 2^v
> w
[1]  2  4  8 16
> v + w   # element-wise addition
[1]  3  6 11 20
> v * w   # element-wise multiplication
[1]  2  8 24 64
> v %*% w   # scalar (inner) product
     [,1]
[1,]   98
> w[2]
[1] 4
> w[ c( 1, 3 ) ]
[1] 2 8
> w[2:4]
[1]  4  8 16
> w[-2]   # select all but the second element
[1]  2  8 16
> length( w )
[1] 4
> w == 2^(1:4)
[1] TRUE TRUE TRUE TRUE
> all.equal( w, 2^(1:4) )
[1] TRUE
> w > 3 & w < 6   # ampersand = and
[1] FALSE  TRUE FALSE FALSE
> w < 3 | w > 6   # vertical line = or
[1]  TRUE FALSE  TRUE  TRUE
> women
   height weight
1      58    115
2      59    117
3      60    120
4      61    123
5      62    126
6      63    129
7      64    132
8      65    135
9      66    139
10     67    142
11     68    146
12     69    150
13     70    154
14     71    159
15     72    164
> nrow( women )
[1] 15
> ncol( women )
[1] 2
> women[[ "height" ]]
[1] 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
> women$height
[1] 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
> women$height[ 3 ]
[1] 60
> women[ 3, "height" ]   # the same
[1] 60
> women[ 3, 1 ]
[1] 60
> women[ 1:3, 1 ]
[1] 58 59 60
> women[ 1:3, ]
  height weight
1     58    115
2     59    117
3     60    120
[Output of the women data set with an additional column bmi omitted]
1.2.7 Functions
In order to execute a function in R, the function name has to be followed by a pair of parenthesis
(round brackets). The documentation of a function (if available) can be obtained by, e.g., typing
at the R prompt a question mark followed by the name of the function.
> ?log
One can read in the documentation of the function log, e.g., that this function has a second
optional argument base, which can be used to specify the base of the logarithm. By default, the
base is equal to the Euler number (e, exp(1)). A different base can be chosen by adding a second
argument, either with or without specifying the name of the argument.
> log( 100, base = 10 )
[1] 2
> log( 100, 10 )
[1] 2
"v"
"w"
"women"
# remove an object
> ls()
[1] "a"
"b"
"v"
"women"
"vLab"
"vMat"
"qApples"
"qOtherOut" "qOut"
[7] "pCap"
"pLab"
"pMat"
"pOut"
"adv"
generated in order to be able to conduct some further analyses with this data set. Variable names
starting with v indicate volumes (values), variable names starting with q indicate quantities, and
variable names starting with p indicate prices.
1
In order to focus on the microeconomic analysis rather than on econometric issues in panel data analysis, we
only use a single year from this panel data set.
This information is also available in the documentation of this data set, which can be obtained by the command:
help( "appleProdFr86", package = "micEcon" ).
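If the micEcon package is installed, the data set can be loaded into the current R session, e.g., by the following commands (a minimal sketch):

> library( "micEcon" )
> data( "appleProdFr86", package = "micEcon" )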
1.3.1.2 Abbreviating the name of the data set
In order to avoid too much typing, we give the data set a much shorter name (dat) by creating a copy of the data set and removing the original data set:
> dat <- appleProdFr86
> rm( appleProdFr86 )
1.3.1.3 Calculation of input quantities
Our data set does not contain input quantities, but it includes the prices and costs (volumes) of the inputs. As we will need input quantities for many of our analyses, we calculate input quantity indices based on the following identity:

v_i = w_i x_i,    (1.1)

where w_i is the price, x_i is the quantity, and v_i is the volume of the i-th input. In R, we can calculate the input quantities with the following commands:
> dat$qCap <- dat$vCap / dat$pCap
> dat$qLab <- dat$vLab / dat$pLab
> dat$qMat <- dat$vMat / dat$pMat
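As a quick consistency check of identity (1.1), multiplying the calculated quantities by the prices must return the original volumes (a sketch):

> all.equal( with( dat, c( qCap * pCap, qLab * pLab, qMat * pMat ) ),
+    with( dat, c( vCap, vLab, vMat ) ) )
[1] TRUE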
1.3.1.4 Calculation of total costs and variable costs
Total costs are defined as

c = \sum_{i=1}^{N} w_i x_i,    (1.2)

where N denotes the number of inputs. We can calculate the apple producers' total costs by the following command:
> dat$cost <- with( dat, vCap + vLab + vMat )
Alternatively, we can calculate the costs by summing up the products of the quantities and the
corresponding prices over all inputs:
> all.equal( dat$cost, with( dat, pCap * qCap + pLab * qLab + pMat * qMat ) )
[1] TRUE
Variable costs are defined as

c^v = \sum_{i \in N^1} w_i x_i,    (1.3)

where N^1 is the set of indices of the variable inputs. If capital is a quasi-fixed input and labor and materials are variable inputs, the apple producers' variable costs can be calculated by the following command:
> dat$vCost <- with( dat, vLab + vMat )
1.3.1.5 Calculation of profit and gross margin
Profit is defined as

\pi = p y - \sum_{i=1}^{N} w_i x_i = p y - c,    (1.4)

where p is the output price, y is the output quantity, and all other variables are defined as above. We can calculate the apple producers' profits by:
> dat$profit <- with( dat, pOut * qOut - cost )
Alternatively, we can calculate the profit by subtracting the products of the quantities and the corresponding prices of all inputs from the revenues:

> all.equal( dat$profit, with( dat,
+    pOut * qOut - pCap * qCap - pLab * qLab - pMat * qMat ) )
[1] TRUE
The gross margin (variable profit) is defined as

\pi^v = p y - \sum_{i \in N^1} w_i x_i = p y - c^v,    (1.5)

where all variables are defined as above. If capital is a quasi-fixed input and labor and materials are variable inputs, the apple producers' gross margins can be calculated by the following command:
> dat$vProfit <- with( dat, pOut * qOut - vLab - vMat )
"FMERCODE" "PROD"
"AREA"
"LABOR"
"NPK"
[7] "OTHER"
"PRICE"
"AREAP"
"LABORP"
"NPKP"
"OTHERP"
"EDYRS"
"HHSIZE"
"NADULT"
"BANRAT"
[13] "AGE"
22
1 Introduction
PROD: output (tonnes of freshly threshed rice)
AREA: area planted (hectares)
LABOR: labor used (man-days of family and hired labor)
NPK: fertilizer used (kg of active ingredients)
YEARDUM: time period (1 = 1990, ..., 8 = 1997)

In our analysis of the production technology of the rice producers, we will use the variable PROD as the output quantity and the variables AREA, LABOR, and NPK as input quantities.
1.3.2.2 Mean-scaling Quantities
In some model specifications, it is advantageous to use mean-scaled quantities. Therefore, we create new variables with mean-scaled input and output quantities:

> riceProdPhil$prod <- riceProdPhil$PROD / mean( riceProdPhil$PROD )
> riceProdPhil$area <- riceProdPhil$AREA / mean( riceProdPhil$AREA )
> riceProdPhil$labor <- riceProdPhil$LABOR / mean( riceProdPhil$LABOR )
> riceProdPhil$npk <- riceProdPhil$NPK / mean( riceProdPhil$NPK )

As expected, the sample means of the mean-scaled variables are all one, so that the logarithms of these sample means are all zero (except for negligibly small rounding errors):

> colMeans( riceProdPhil[ , c( "prod", "area", "labor", "npk" ) ] )
prod area labor npk
   1    1     1   1
> log( colMeans( riceProdPhil[ , c( "prod", "area", "labor", "npk" ) ] ) )
         prod          area         labor           npk
 0.000000e+00 -1.110223e-16  0.000000e+00  0.000000e+00

We also create variables with the logarithms of the mean-scaled quantities:

> riceProdPhil$lProd <- log( riceProdPhil$prod )
> riceProdPhil$lArea <- log( riceProdPhil$area )
> riceProdPhil$lLabor <- log( riceProdPhil$labor )
> riceProdPhil$lNpk <- log( riceProdPhil$npk )

Please note that the (arithmetic) mean values of the logarithmic mean-scaled variables are not equal to zero:

> colMeans( riceProdPhil[ , c( "lProd", "lArea", "lLabor", "lNpk" ) ] )
1.3.2.5 Aggregating Quantities
The Laspeyres and the Paasche quantity indices are defined as

X_j^L = \frac{\sum_i x_{ij} p_{i0}}{\sum_i x_{i0} p_{i0}}, \qquad X_j^P = \frac{\sum_i x_{ij} p_{ij}}{\sum_i x_{i0} p_{ij}},    (1.6)

where subscript i indicates the good, subscript j indicates the observation, x_{i0} is the base quantity, and p_{i0} is the base price of the i-th good, e.g. the sample means.

³ Please note that the specification of the variable YEARDUM as the time dimension in the panel data set pdat converts this variable to a categorical variable. If a numeric time variable is needed, it can be created, e.g., by the command pdat$year <- as.numeric( pdat$YEARDUM ).
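The panel data set pdat mentioned in this footnote could, for instance, be created with the plm package; using FMERCODE as the firm identifier and YEARDUM as the time period is an assumption based on the variable descriptions above (a minimal sketch):

> library( "plm" )
> pdat <- pdata.frame( riceProdPhil, c( "FMERCODE", "YEARDUM" ) )
> pdat$year <- as.numeric( pdat$YEARDUM )   # numeric time variable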
The Paasche and Laspeyres quantity indices of all three inputs in the data set of French apple producers can be calculated by applying equation (1.6) with the sample means as base quantities and base prices:

> dat$XP <- with( dat, ( qCap * pCap + qLab * pLab + qMat * pMat ) /
+    ( mean( qCap ) * pCap + mean( qLab ) * pLab + mean( qMat ) * pMat ) )
> dat$XL <- with( dat, ( qCap * mean( pCap ) + qLab * mean( pLab ) +
+    qMat * mean( pMat ) ) / ( mean( qCap ) * mean( pCap ) +
+    mean( qLab ) * mean( pLab ) + mean( qMat ) * mean( pMat ) ) )

In many cases, the choice of the formula for calculating quantity indices does not have a major influence on the result. We demonstrate this with two scatter plots, where we set the argument log of the second plot command to the character string "xy" so that both axes are measured in logarithmic terms and the dots (firms) are more equally spread:

> plot( dat$XP, dat$XL )
> plot( dat$XP, dat$XL, log = "xy" )

[Figure: Laspeyres quantity index (XL) plotted against the Paasche quantity index (XP), with linear axes (left) and logarithmic axes (right)]
1.4.2 Quasiconcavity
A function f(x): R^N → R is quasiconcave if its level plots (isoquants) are convex. This is the case if

f( \lambda x^l + (1 - \lambda) x^u ) \geq \min( f(x^l), f(x^u) )  \quad \forall \; x^l, x^u, \; 0 \leq \lambda \leq 1.    (1.7)

Quasiconcavity can be checked with the bordered Hessian matrix

B = \begin{bmatrix}
0 & f_1 & f_2 & \cdots & f_N \\
f_1 & f_{11} & f_{12} & \cdots & f_{1N} \\
f_2 & f_{12} & f_{22} & \cdots & f_{2N} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
f_N & f_{1N} & f_{2N} & \cdots & f_{NN}
\end{bmatrix},    (1.8)

where f_i denotes the partial derivative of f(x) with respect to x_i and f_{ij} denotes the second partial derivative of f(x) with respect to x_i and x_j. The function f(x) is quasiconcave if the determinants of the bordered leading principal minors alternate in sign, i.e. |B_1| \leq 0, |B_2| \geq 0, |B_3| \leq 0, \ldots, where |B_1| is the determinant of the upper left 2×2 sub-matrix of B, |B_2| is the determinant of the upper left 3×3 sub-matrix of B, \ldots, and |B_N| is the determinant of B (Chambers, 1988, p. 312; Chiang, 1984, p. 393f).
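As a minimal sketch of how these determinant conditions can be checked numerically in R for a two-input function, assume that the first derivatives (f1, f2) and the second derivatives (f11, f12, f22) have been evaluated at the point of interest; the function name and the numeric values below are purely illustrative:

> quasiConcave2 <- function( f1, f2, f11, f12, f22 ) {
+    # bordered Hessian matrix, cf. equation (1.8)
+    B <- matrix( c( 0,  f1,  f2,
+                    f1, f11, f12,
+                    f2, f12, f22 ), nrow = 3, byrow = TRUE )
+    # quasiconcavity requires |B1| <= 0 and |B2| = |B| >= 0
+    det( B[ 1:2, 1:2 ] ) <= 0 & det( B ) >= 0
+ }
> quasiConcave2( f1 = 0.5, f2 = 0.8, f11 = -0.1, f12 = 0.05, f22 = -0.2 )
[1] TRUE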
1.4.3 Delta Method
If \hat{\beta} is a vector of estimated parameters with (estimated) variance covariance matrix Var(\hat{\beta}) and z = g(\hat{\beta}) is a (possibly nonlinear) function of these parameters, we can calculate the approximate variance covariance matrix of z by

Var(z) \approx \frac{\partial g(\beta)}{\partial \beta} \; Var(\hat{\beta}) \; \left( \frac{\partial g(\beta)}{\partial \beta} \right)^{\top},    (1.9)

where \partial g(\beta) / \partial \beta is the Jacobian matrix of z = g(\beta) with respect to \beta and the superscript ⊤ is the transpose operator.
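A minimal sketch of applying the delta method (1.9) in R, here for the ratio of two coefficients of a linear regression; the model and the function g are illustrative examples, not taken from the analyses below:

> model <- lm( qOut ~ qCap + qLab + qMat, data = dat )
> beta <- coef( model )
> z <- beta[ "qLab" ] / beta[ "qMat" ]   # z = g(beta)
> # Jacobian of g(beta) with respect to beta
> jac <- rep( 0, length( beta ) )
> jac[ names( beta ) == "qLab" ] <- 1 / beta[ "qMat" ]
> jac[ names( beta ) == "qMat" ] <- - beta[ "qLab" ] / beta[ "qMat" ]^2
> varZ <- t( jac ) %*% vcov( model ) %*% jac   # equation (1.9)
> sqrt( varZ )   # approximate standard error of z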
2 Primal Approach: Production Function

2.1 Theory

2.1.1 Production Function
A production function

y = f(x)    (2.1)

indicates the maximum quantity of a single output (y) that can be obtained with a vector of given input quantities (x). It is usually assumed that production functions fulfill some properties (see Chambers, 1988, p. 9).

2.1.2 Average Products
The average product of the i-th input is

AP_i = \frac{y}{x_i} = \frac{f(x)}{x_i}.    (2.2)

The more output a firm produces per unit of input, the more productive is this firm and the higher is the corresponding average product. If two firms use identical input quantities, the firm with the larger output quantity is more productive (has a higher average product). And if two firms produce the same output quantity, the firm with the smaller input quantity is more productive (has a higher average product). However, if these two firms use different input combinations, one firm could be more productive regarding the average product of one input, while the other firm could be more productive regarding the average product of another input.

2.1.3 Total Factor Productivity
Total factor productivity relates the output quantity to an aggregate of all input quantities (X):

TFP = \frac{y}{X}.    (2.3)
2.1.4 Marginal Products
The marginal product of the i-th input is the partial derivative of the production function with respect to this input quantity:

MP_i = \frac{\partial f(x)}{\partial x_i}.    (2.4)

2.1.5 Output Elasticities
The output elasticity of the i-th input is

\epsilon_i = \frac{\partial f(x)}{\partial x_i} \, \frac{x_i}{f(x)} = \frac{MP_i}{AP_i}.    (2.5)

In contrast to the marginal products, the changes of the input and output quantities are measured in relative terms, so that output elasticities are independent of the units of measurement. Output elasticities are sometimes also called partial output elasticities or partial production elasticities.
2.1.6 Elasticity of Scale
The elasticity of scale is the sum of all output elasticities:

\epsilon = \sum_{i=1}^{N} \epsilon_i.    (2.6)

If the technology has increasing returns to scale (\epsilon > 1), total factor productivity increases when all input quantities are proportionally increased, because the relative increase of the output quantity y is larger than the relative increase of the aggregate input quantity X in equation (2.3). If the technology has decreasing returns to scale (\epsilon < 1), total factor productivity decreases when all input quantities are proportionally increased, because the relative increase of the output quantity y is smaller than the relative increase of the aggregate input quantity X. If the technology has constant returns to scale (\epsilon = 1), total factor productivity remains constant when all input quantities change proportionally, because the relative change of the output quantity y is equal to the relative change of the aggregate input quantity X.

If the elasticity of scale (monotonically) decreases with firm size, the firm has its most productive scale size at the point where the elasticity of scale is one.
2.1.7 Marginal Rates of Technical Substitution
The marginal rate of technical substitution (MRTS) between inputs i and j is

MRTS_{i,j} = \frac{\partial x_i}{\partial x_j} = - \frac{MP_j}{MP_i}.    (2.7)

2.1.8 Relative Marginal Rates of Technical Substitution
The relative marginal rate of technical substitution (RMRTS) measures the same substitution in relative (percentage) terms:

RMRTS_{i,j} = \frac{\partial x_i / x_i}{\partial x_j / x_j} = MRTS_{i,j} \, \frac{x_j}{x_i} = - \frac{\epsilon_j}{\epsilon_i}.    (2.8)

2.1.9 Elasticities of Substitution
The elasticity of substitution measures the curvature of the isoquants:

\sigma_{ij} = \frac{d (x_i / x_j)}{d (MP_j / MP_i)} \cdot \frac{MP_j / MP_i}{x_i / x_j} = \frac{d (x_i / x_j)}{d \, MRTS_{ij}} \cdot \frac{MRTS_{ij}}{x_i / x_j}.    (2.9)

Thus, if input i is substituted for input j so that the input ratio x_i / x_j increases by \sigma_{ij} percent, the marginal rate of technical substitution between input i and input j will increase by 1 percent.
2.1.9.1 Direct Elasticities of Substitution
The direct elasticity of substitution can be calculated by

\sigma^D_{ij} = \frac{f_i x_i + f_j x_j}{x_i x_j} \, \frac{F_{ij}}{F},    (2.10)

where F is the determinant of the bordered Hessian matrix

B = \begin{bmatrix}
0 & f_1 & f_2 & \cdots & f_N \\
f_1 & f_{11} & f_{12} & \cdots & f_{1N} \\
f_2 & f_{12} & f_{22} & \cdots & f_{2N} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
f_N & f_{1N} & f_{2N} & \cdots & f_{NN}
\end{bmatrix},    (2.11)
and F_{ij} is the co-factor of f_{ij}, i.e. (−1)^{i+j} times the determinant of the matrix that is obtained by deleting the row and the column of B that contain f_{ij}:¹

F_{ij} = (-1)^{i+j} \begin{vmatrix}
0 & f_1 & \cdots & f_{j-1} & f_{j+1} & \cdots & f_N \\
f_1 & f_{11} & \cdots & f_{1,j-1} & f_{1,j+1} & \cdots & f_{1N} \\
\vdots & \vdots & & \vdots & \vdots & & \vdots \\
f_{i-1} & f_{1,i-1} & \cdots & & & \cdots & f_{i-1,N} \\
f_{i+1} & f_{1,i+1} & \cdots & & & \cdots & f_{i+1,N} \\
\vdots & \vdots & & \vdots & \vdots & & \vdots \\
f_N & f_{1N} & \cdots & f_{j-1,N} & f_{j+1,N} & \cdots & f_{NN}
\end{vmatrix},    (2.12)

where f_i is the partial derivative of the production function f with respect to the i-th input quantity (x_i), and f_{ij} is the second partial derivative of the production function f with respect to the i-th and the j-th input quantities (x_i, x_j).

As the bordered Hessian matrix is symmetric, the co-factors are also symmetric (F_{ij} = F_{ji}), so that also the direct elasticities of substitution are symmetric (\sigma^D_{ij} = \sigma^D_{ji}).
2.1.9.2 Allen Elasticities of Substitution
The Allen elasticity of substitution is defined as

\sigma_{ij} = \frac{\sum_k f_k x_k}{x_i x_j} \, \frac{F_{ij}}{F}.    (2.13)

Comparing equations (2.10) and (2.13) shows the relationship between the direct and the Allen elasticities of substitution:

\sigma^D_{ij} = \frac{f_i x_i + f_j x_j}{\sum_k f_k x_k} \, \sigma_{ij}.    (2.14)

As the input quantities and the marginal products should always be positive, the direct elasticities of substitution and the Allen elasticities of substitution always have the same sign, and the direct elasticities of substitution are always smaller than the Allen elasticities of substitution in absolute terms, i.e. |\sigma^D_{ij}| \leq |\sigma_{ij}|.

The Allen elasticities of substitution fulfill the condition

\sum_i K_i \sigma_{ij} = 0 \quad \text{with} \quad K_i = \frac{f_i x_i}{\sum_k f_k x_k}.    (2.15)

¹ The exponent of (−1) usually is the sum of the number of the deleted row (i + 1) and the number of the deleted column (j + 1), i.e. i + j + 2. In our case, we can simplify this to i + j, because (−1)^{i+j+2} = (−1)^{i+j} (−1)² = (−1)^{i+j}.
2.1.9.3 Morishima Elasticities of Substitution
The Morishima elasticity of substitution is defined as

\sigma^M_{ij} = \frac{f_j \, F_{ij}}{x_i \, F} - \frac{f_j \, F_{jj}}{x_j \, F},    (2.16)

where F_{ij} and F are defined as above. In contrast to the direct elasticity of substitution and the Allen elasticity of substitution, the Morishima elasticity of substitution is usually not symmetric (\sigma^M_{ij} \neq \sigma^M_{ji}).

From the above definition of the Morishima elasticities of substitution (2.16), we can derive the relationship between the Morishima elasticities of substitution and the Allen elasticities of substitution:

\sigma^M_{ij} = \frac{f_j x_j}{\sum_k f_k x_k} \, \frac{\sum_k f_k x_k}{x_i x_j} \, \frac{F_{ij}}{F} - \frac{f_j x_j}{\sum_k f_k x_k} \, \frac{\sum_k f_k x_k}{x_j^2} \, \frac{F_{jj}}{F}    (2.17)

= \frac{f_j x_j}{\sum_k f_k x_k} \, \sigma_{ij} - \frac{f_j x_j}{\sum_k f_k x_k} \, \sigma_{jj}    (2.18)

= \frac{f_j x_j}{\sum_k f_k x_k} \left( \sigma_{ij} - \sigma_{jj} \right),    (2.19)

where \sigma_{jj} can be calculated as the Allen elasticities of substitution with equation (2.13), but does not have an economic meaning.
2.1.10 Profit Maximization
The profit of a firm is

\pi = p y - \sum_i w_i x_i,    (2.20)

where p is the price of the output and w_i is the price of the i-th input. If the firm faces output price p and input prices w_i, we can calculate the maximum profit that can be obtained by the firm by solving the following optimization problem:

\max_{y, x} \; p y - \sum_i w_i x_i, \quad \text{s.t.} \; y = f(x).    (2.21)

Substituting the production function for y turns this into an unrestricted optimization problem:

\max_x \; p f(x) - \sum_i w_i x_i.    (2.22)

The first-order conditions are

\frac{\partial \pi}{\partial x_i} = p \, \frac{\partial f(x)}{\partial x_i} - w_i = p \, MP_i - w_i = 0,    (2.23)

so that we get

w_i = p \, MP_i = MVP_i,    (2.24)

i.e. at the profit maximum, each input price is equal to the corresponding marginal value product (MVP).

2.1.11 Cost Minimization
The total cost of the firm is

c = \sum_i w_i x_i.    (2.25)

A firm that minimizes the cost of producing a given output quantity y solves the optimization problem

\min_x \; \sum_i w_i x_i, \quad \text{s.t.} \; y = f(x),    (2.26)

which corresponds to the Lagrangian function

L = \sum_i w_i x_i + \lambda \left( y - f(x) \right)    (2.27)

with first-order conditions

\frac{\partial L}{\partial x_i} = w_i - \lambda \, \frac{\partial f(x)}{\partial x_i} = 0    (2.28)

\frac{\partial L}{\partial \lambda} = y - f(x) = 0    (2.29)

and

w_i = \lambda \, MP_i.    (2.30)

Taking the ratio of the first-order conditions (2.30) for two inputs i and j gives

\frac{w_i}{w_j} = \frac{MP_i}{MP_j} = - MRTS_{ji}.    (2.31)

As profit maximization implies producing the optimal output quantity with minimum costs, the first-order conditions for the optimal input combinations (2.31) can be obtained not only from the cost minimization problem but also from the profit maximization problem.
2.1.12 Derived Input Demand and Output Supply Functions

2.1.12.1 Derived from profit maximization
If we replace the marginal products in the first-order conditions for profit maximization (2.23) by the equations for calculating these marginal products and then solve this system of equations for the input quantities, we get the (unconditional) input demand functions:

x_i = x_i(p, w),    (2.33)

where w = [w_i] is the vector of all input prices. The input demand functions indicate the optimal input quantities (x_i) given the output price (p) and all input prices (w). We can obtain the output supply function from the production function by replacing all input quantities by the corresponding input demand functions:

y = f( x(p, w) ) = y(p, w),    (2.34)

where x(p, w) = [x_i(p, w)] is the set of all input demand functions. The output supply function indicates the optimal output quantity (y) given the output price (p) and all input prices (w). Hence, the input demand and output supply functions can be used to analyze the effects of prices on the (optimal) input use and output supply. In economics, the effects of price changes are usually measured in terms of price elasticities. These price elasticities can measure the effects of the input prices on the input quantities:

\epsilon_{ij}(p, w) = \frac{\partial x_i(p, w)}{\partial w_j} \, \frac{w_j}{x_i(p, w)},    (2.35)
the effects of the input prices on the output quantity (expected to be non-positive):

\epsilon_{yj}(p, w) = \frac{\partial y(p, w)}{\partial w_j} \, \frac{w_j}{y(p, w)},    (2.36)

the effects of the output price on the input quantities (expected to be non-negative):

\epsilon_{ip}(p, w) = \frac{\partial x_i(p, w)}{\partial p} \, \frac{p}{x_i(p, w)},    (2.37)

and the effect of the output price on the output quantity (expected to be non-negative):

\epsilon_{yp}(p, w) = \frac{\partial y(p, w)}{\partial p} \, \frac{p}{y(p, w)}.    (2.38)

The effect of an input price on the optimal quantity of the same input is expected to be non-positive (\epsilon_{ii}(p, w) \leq 0). If the cross-price elasticities between two inputs i and j are positive (\epsilon_{ij}(p, w) \geq 0, \epsilon_{ji}(p, w) \geq 0), they are considered gross substitutes. If the cross-price elasticities between two inputs i and j are negative (\epsilon_{ij}(p, w) \leq 0, \epsilon_{ji}(p, w) \leq 0), they are considered gross complements.
2.1.12.2 Derived from cost minimization
If we replace the marginal products in the first-order conditions for cost minimization (2.30) by the equations for calculating these marginal products and then solve this system of equations for the input quantities, we get the conditional input demand functions:

x_i = x_i(w, y).    (2.39)

These input demand functions are called conditional, because they indicate the optimal input quantities (x_i) given all input prices (w) and conditional on the fixed output quantity (y). The conditional input demand functions can be used to analyze the effects of input prices on the (optimal) input use if the output quantity is given. The effects of price changes on the optimal input quantities can be measured by conditional price elasticities:

\epsilon_{ij}(w, y) = \frac{\partial x_i(w, y)}{\partial w_j} \, \frac{w_j}{x_i(w, y)}.    (2.40)

The effect of the output quantity on the optimal input quantities can also be measured in terms of elasticities (expected to be positive):

\epsilon_{iy}(w, y) = \frac{\partial x_i(w, y)}{\partial y} \, \frac{y}{x_i(w, y)}.    (2.41)

The conditional effect of an input price on the optimal quantity of the same input is expected to be non-positive (\epsilon_{ii}(w, y) \leq 0). If the conditional cross-price elasticities between two inputs i and j are positive (\epsilon_{ij}(w, y) \geq 0, \epsilon_{ji}(w, y) \geq 0), they are considered net substitutes. If the conditional cross-price elasticities between two inputs i and j are negative (\epsilon_{ij}(w, y) \leq 0, \epsilon_{ji}(w, y) \leq 0), they are considered net complements.
2.2 Productivity Measures

2.2.1 Average Products
The average products of the three inputs can be computed according to equation (2.2):

> dat$apCap <- dat$qOut / dat$qCap
> dat$apLab <- dat$qOut / dat$qLab
> dat$apMat <- dat$qOut / dat$qMat

[Figure: histograms of the average products apCap, apLab, and apMat]
[Figure: scatter plots of the average products (apCap, apLab, apMat) against each other and against the output quantity qOut, partly with logarithmic axes]

[Figure: histogram of total factor productivity (TFP) and scatter plots of TFP against the output quantity qOut and against the aggregate input quantity X (logarithmic scales)]

[Figure: box plots of TFP, log(X), and log(qOut) for firms without and with advisory service]
2.3 Linear Production Function

2.3.1 Specification
A linear production function has the form

y = \beta_0 + \sum_{i=1}^{N} \beta_i x_i.    (2.42)
2.3.2 Estimation
We can add a stochastic error term to this linear production function and estimate it for our data set using the command lm:

> prodLin <- lm( qOut ~ qCap + qLab + qMat, data = dat )
> summary( prodLin )

Call:
lm(formula = qOut ~ qCap + qLab + qMat, data = dat)

Residuals:
     Min       1Q   Median       3Q      Max
-3888955  -773002    86119   769073  7091521

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.616e+06  2.318e+05  -6.972 1.23e-10 ***
qCap         1.788e+00  1.995e+00   0.896    0.372
qLab         1.183e+01  1.272e+00   9.300  < 2e-16 ***
qMat         4.667e+01  1.123e+01   4.157 5.61e-05 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Multiple R-squared:  0.7868,	Adjusted R-squared:  0.7821
2.3.3 Properties
As the coefficients of all three input quantities are positive, the monotonicity condition is (globally) fulfilled. However, the coefficient of the capital quantity is not statistically significantly different from zero. Therefore, we cannot be sure that the capital quantity has a positive effect on the output quantity.
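These two statements can be verified directly from the estimated model object (a sketch):

> coef( prodLin )[ c( "qCap", "qLab", "qMat" ) ] > 0   # monotonicity: all TRUE
> summary( prodLin )$coefficients[ "qCap", "Pr(>|t|)" ]   # p-value of qCap: 0.372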
The output quantities predicted by the linear production function can be obtained by:

> dat$qOutLin <- fitted( prodLin )
We can evaluate the fit of the model by comparing the observed with the fitted output
quantities:
> compPlot( dat$qOut, dat$qOutLin )
> compPlot( dat$qOut[ dat$qOutLin > 0 ], dat$qOutLin[ dat$qOutLin > 0 ],
+    log = "xy" )
The resulting graphs are shown in figure 2.6. While the graph in the left panel uses a linear scale for the axes, the graph in the right panel uses a logarithmic scale for both axes. Hence, the deviations from the 45°-line illustrate the absolute deviations in the left panel and the relative deviations in the right panel.
[Figure 2.6: observed (horizontal axis) against fitted (vertical axis) output quantities, with linear axes (left) and logarithmic axes (right)]
2.3.5 Marginal Products
The marginal products of a linear production function are simply its coefficients:

MP_i = \frac{\partial y}{\partial x_i} = \beta_i.    (2.43)

Hence, if a firm increases capital input by one unit, the output will increase by 1.79 units; if a firm increases labor input by one unit, the output will increase by 11.83 units; and if a firm increases materials input by one unit, the output will increase by 46.67 units.
2.3.6 Output Elasticities
The output elasticities of a linear production function are

\epsilon_i = \frac{\partial y}{\partial x_i} \, \frac{x_i}{y} = MP_i \, \frac{x_i}{y} = \frac{MP_i}{AP_i}.    (2.44)

We can calculate them based on the observed output quantities:

> dat$eCap <- coef( prodLin )[ "qCap" ] * dat$qCap / dat$qOut
> dat$eLab <- coef( prodLin )[ "qLab" ] * dat$qLab / dat$qOut
> dat$eMat <- coef( prodLin )[ "qMat" ] * dat$qMat / dat$qOut

as well as based on the output quantities predicted by the model:

> dat$eCapFit <- coef( prodLin )[ "qCap" ] * dat$qCap / dat$qOutLin
> dat$eLabFit <- coef( prodLin )[ "qLab" ] * dat$qLab / dat$qOutLin
> dat$eMatFit <- coef( prodLin )[ "qMat" ] * dat$qMat / dat$qOutLin
[Figure 2.7: Linear production function: output elasticities based on observed output quantities — histograms of eCap, eLab, and eMat]

[Figure 2.8: Linear production function: output elasticities based on predicted output quantities — histograms of eCapFit, eLabFit, and eMatFit]
As figure 2.8 shows, the ranges of the output elasticities that are calculated from the predicted output quantities are much larger than the ranges of the output elasticities that are calculated from the observed output quantities. Because one predicted output quantity is negative, the output elasticities of this observation are also negative.
2.3.7 Elasticity of Scale
The elasticity of scale is the sum of the output elasticities:

\epsilon = \sum_i \epsilon_i.    (2.45)

Hence, the elasticities of scale of all firms in the sample can be calculated by:

> dat$eScale <- with( dat, eCap + eLab + eMat )
> dat$eScaleFit <- with( dat, eCapFit + eLabFit + eMatFit )

The mean values of the elasticities of scale can be calculated by

> colMeans( subset( dat, , c( "eScale", "eScaleFit" ) ) )
   eScale eScaleFit
 3.056945  3.334809

while the median elasticity of scale is around 1.86. Hence, if a firm increases all input quantities by one percent, the output quantity will usually increase by around 1.9 percent. This means that most firms have increasing returns to scale and hence, the firms could increase their productivity by increasing the firm size (i.e. increasing all input quantities).
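The share of firms with increasing returns to scale can be computed directly (a sketch):

> mean( dat$eScale > 1 )   # share of observations with elasticity of scale > 1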
The (variation of the) elasticities of scale can be visualized with histograms:
> hist( dat$eScale )
> hist( dat$eScaleFit )

[Figure 2.10: Linear production function: elasticities of scale for different firm sizes — histograms of eScale and eScaleFit and scatter plots of the elasticities of scale against the output quantity qOut (logarithmic axes)]
[Figure: Linear production function: relative marginal rates of technical substitution — histograms of rmrtsCapLab, rmrtsLabCap, rmrtsCapMat, rmrtsMatCap, rmrtsLabMat, and rmrtsMatLab]
[Figure: Linear production function: marginal value products (MVP) of capital, labor, and materials plotted against the corresponding input prices (wCap, wLab, wMat), with linear and logarithmic axes]
The resulting graphs are shown in figure 2.13. The upper left graph shows that the ratio between the capital price and the labor price is larger than the absolute value of the marginal rate of technical substitution between labor and capital (0.151) for most firms in the sample:

\frac{w_{cap}}{w_{lab}} > | MRTS_{lab,cap} | = \frac{MP_{cap}}{MP_{lab}}.    (2.46)

Or, taken the other way round, the lower left graph shows that the ratio between the labor price and the capital price is smaller than the absolute value of the marginal rate of technical substitution between capital and labor (6.616) for most firms in the sample:

\frac{w_{lab}}{w_{cap}} < | MRTS_{cap,lab} | = \frac{MP_{lab}}{MP_{cap}}.    (2.47)

Hence, these firms can get closer to the minimum of their costs by substituting labor for capital, because this will decrease the marginal product of labor and increase the marginal product of capital, so that the absolute value of the MRTS between labor and capital gets closer to the corresponding input price ratio.
[Figure 2.13: histograms of the input price ratios wCap/wLab, wCap/wMat, wLab/wMat, wLab/wCap, wMat/wCap, and wMat/wLab]
With a linear production function, the profit-maximizing input quantities are either zero or infinite, depending on whether the marginal value product of the input is below or above its price:

x_i(p, w) = \begin{cases}
0 & \text{if } MVP_i < w_i \\
\text{indeterminate} & \text{if } MVP_i = w_i \\
\infty & \text{if } MVP_i > w_i
\end{cases}    (2.48)

If all input quantities are zero, the output quantity is equal to the intercept, which is zero in case of weak essentiality. Otherwise, the output quantity is indeterminate or infinity:

y(p, w) = \begin{cases}
0 & \text{if } MVP_i < w_i \; \forall \, i \\
\infty & \text{if } MVP_i > w_i \; \text{for at least one } i \\
\text{indeterminate} & \text{otherwise}
\end{cases}    (2.49)

A cost-minimizing producer will use only a single input, i.e. the input with the lowest cost per unit of produced output (w_i / MP_i). If the lowest cost per unit of produced output can be obtained by two or more inputs, these input quantities are indeterminate:

x_i(w, y) = \begin{cases}
\dfrac{y - \beta_0}{\beta_i} & \text{if } \dfrac{\beta_i}{w_i} > \dfrac{\beta_j}{w_j} \; \forall \, j \neq i \\
0 & \text{if } \dfrac{\beta_i}{w_i} < \dfrac{\beta_j}{w_j} \; \text{for some } j \neq i \\
\text{indeterminate} & \text{otherwise}
\end{cases}    (2.50)

Given that the unconditional and conditional input demand functions and the output supply functions based on the linear production function are discontinuous and often return either zero or infinite values, it does not make much sense to use this functional form to predict the effects of price changes when the true technology implies that firms always use non-zero finite input quantities.
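For illustration, the cost per unit of output that each input implies under the linear specification, w_i / β_i, can be compared directly; the input with the smallest value would be the only input used by a cost-minimizing producer (a sketch):

> colMeans( with( dat, cbind(
+    cap = pCap / coef( prodLin )[ "qCap" ],
+    lab = pLab / coef( prodLin )[ "qLab" ],
+    mat = pMat / coef( prodLin )[ "qMat" ] ) ) )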
2.4 Cobb-Douglas Production Function

2.4.1 Specification
The Cobb-Douglas production function has the form

y = A \prod_{i=1}^{N} x_i^{\alpha_i}.    (2.51)

This function can be linearized by taking the (natural) logarithm on both sides:

\ln y = \alpha_0 + \sum_{i=1}^{N} \alpha_i \ln x_i,    (2.52)

where \alpha_0 is equal to \ln A.
2.4.2 Estimation
We can estimate this Cobb-Douglas production function for our data set using the command lm:

> prodCD <- lm( log( qOut ) ~ log( qCap ) + log( qLab ) + log( qMat ),
+    data = dat )
> summary( prodCD )

Call:
lm(formula = log(qOut) ~ log(qCap) + log(qLab) + log(qMat), data = dat)

Residuals:
     Min       1Q   Median       3Q      Max
-1.67239 -0.28024  0.00667  0.47834  1.30115

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) -2.06377    1.31259  -1.572   0.1182
log(qCap)    0.16303    0.08721   1.869   0.0637 .
log(qLab)    0.67622    0.15430   4.383 2.33e-05 ***
log(qMat)    0.62720    0.12587   4.983 1.87e-06 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Multiple R-squared:  0.5943,	Adjusted R-squared:  0.5854
F-statistic: 66.41 on 3 and 136 DF,  p-value: < 2.2e-16
2.4.3 Properties
The monotonicity condition is (globally) fulfilled, as the estimated coefficients of all three (logarithmic) input quantities are positive and the output quantity as well as all input quantities are non-negative (see equation 2.54). However, the coefficient of the (logarithmic) capital quantity is only statistically significantly different from zero at the 10% level. Therefore, we cannot be sure that the capital quantity has a positive effect on the output quantity.

The quasiconcavity of our estimated Cobb-Douglas production function is checked in section 2.4.12.

The production technology described by a Cobb-Douglas production function always shows weak and strict essentiality, because the output quantity becomes zero as soon as a single input quantity becomes zero (see equation 2.51).

The input requirement sets derived from Cobb-Douglas production functions are always closed and non-empty for y > 0 if strict monotonicity is fulfilled for at least one input (\exists \, i \in \{1, \ldots, N\}: \alpha_i > 0), as the input quantities must be non-negative (x_i \geq 0 \; \forall \, i).

The Cobb-Douglas production function always returns finite, real, and single values if the input quantities are non-negative and finite. The predicted output quantity is non-negative as long as A and the input quantities are non-negative, where A = \exp( \alpha_0 ) is positive even if \alpha_0 is negative.

All Cobb-Douglas production functions are continuous and twice-continuously differentiable.
2.4.4 Predicted Output Quantities
We can calculate the output quantities that are predicted by our estimated Cobb-Douglas production function by:

> dat$qOutCD <- exp( fitted( prodCD ) )

We can evaluate the fit of the Cobb-Douglas production function by comparing the observed with the fitted output quantities:

> compPlot( dat$qOut, dat$qOutCD )
> compPlot( dat$qOut, dat$qOutCD, log = "xy" )
[Figure: observed against fitted output quantities of the Cobb-Douglas production function, with linear axes (left) and logarithmic axes (right)]
2.4.5 Output Elasticities
The output elasticities of a Cobb-Douglas production function are equal to its coefficients:

\epsilon_i = \frac{\partial y}{\partial x_i} \, \frac{x_i}{y} = \frac{\partial \ln y}{\partial \ln x_i} = \alpha_i.    (2.53)

Hence, if a firm increases capital input by one percent, the output will increase by 0.16 percent; if a firm increases labor input by one percent, the output will increase by 0.68 percent; and if a firm increases materials input by one percent, the output will increase by 0.63 percent. The output elasticity of capital is somewhat larger and the output elasticity of labor is considerably smaller when estimated by a Cobb-Douglas production function than when estimated by a linear production function. Indeed, the output elasticities of all three inputs are now in the reasonable range, i.e. between zero and one.

2.4.6 Marginal Products
The marginal products are

MP_i = \frac{\partial y}{\partial x_i} = \frac{\partial \ln y}{\partial \ln x_i} \, \frac{y}{x_i} = \alpha_i \, \frac{y}{x_i} = \alpha_i \, AP_i.    (2.54)
[Figure: histograms of the marginal products of capital (MP Cap), labor (MP Lab), and materials (MP Mat) derived from the Cobb-Douglas production function]
[Figure 2.16: Cobb-Douglas production function: marginal rates of technical substitution (MRTS) — histograms of mrtsCapLabCD, mrtsLabCapCD, mrtsCapMatCD, mrtsMatCapCD, mrtsLabMatCD, and mrtsMatLabCD]
For the three-input case, the first partial derivatives of the Cobb-Douglas production function are

f_1 = \alpha_1 \frac{y}{x_1}, \quad f_2 = \alpha_2 \frac{y}{x_2}, \quad f_3 = \alpha_3 \frac{y}{x_3},    (2.55–2.57)

the second own derivatives are

f_{11} = \frac{f_1^2}{y} - \frac{f_1}{x_1}, \quad f_{22} = \frac{f_2^2}{y} - \frac{f_2}{x_2}, \quad f_{33} = \frac{f_3^2}{y} - \frac{f_3}{x_3},    (2.58–2.60)

and the second cross derivatives are

f_{12} = \frac{f_1 f_2}{y}, \quad f_{13} = \frac{f_1 f_3}{y}, \quad f_{23} = \frac{f_2 f_3}{y}.    (2.61–2.63)

Generally, for an N-input Cobb-Douglas function, the first and second derivatives are

f_i = \alpha_i \, \frac{y}{x_i}    (2.64)

f_{ij} = \frac{f_i f_j}{y} - \delta_{ij} \, \frac{f_i}{x_i},    (2.65)

where \delta_{ij} is Kronecker's delta:

\delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}    (2.66)
In the calculations of the partial derivatives (f_i), we have simplified the formulas by replacing the right-hand side of the Cobb-Douglas function (2.51) by the output quantity. When we calculated the marginal products (partial derivatives) of the Cobb-Douglas function in section 2.4.6, we used the observed output quantities for y. However, as the fit (R² value) of our model is not 100%, the observed output quantities are generally not equal to the output quantities predicted by our model, i.e. the right-hand side of the Cobb-Douglas function (2.51) using the estimated parameters. The better the fit of our model, the smaller is the difference between the observed and the predicted output quantities. If we believe in our estimated model, it is more consistent with microeconomic theory to use the predicted output quantities and disregard the stochastic error term (the difference between observed and predicted output quantities) that is caused, e.g., by measurement errors, (good or bad) luck, or unusual(ly) (good or bad) weather conditions.
We can calculate the first derivatives (marginal products) with the predicted output quantities
(see section 2.4.4):
> dat$fCap <- coef(prodCD)["log(qCap)"] * dat$qOutCD / dat$qCap
> dat$fLab <- coef(prodCD)["log(qLab)"] * dat$qOutCD / dat$qLab
> dat$fMat <- coef(prodCD)["log(qMat)"] * dat$qOutCD / dat$qMat
Based on these first derivatives, we can also calculate the second derivatives:
> dat$fCapCap <- with( dat, fCap^2 / qOutCD - fCap / qCap )
> dat$fLabLab <- with( dat, fLab^2 / qOutCD - fLab / qLab )
> dat$fMatMat <- with( dat, fMat^2 / qOutCD - fMat / qMat )
> dat$fCapLab <- with( dat, fCap * fLab / qOutCD )
> dat$fCapMat <- with( dat, fCap * fMat / qOutCD )
> dat$fLabMat <- with( dat, fLab * fMat / qOutCD )
We can compute the bordered Hessian matrix, e.g. for the first observation:

> bhm <- with( dat[ 1, ], matrix( c(
+    0,    fCap,    fLab,    fMat,
+    fCap, fCapCap, fCapLab, fCapMat,
+    fLab, fCapLab, fLabLab, fLabMat,
+    fMat, fCapMat, fLabMat, fMatMat ), nrow = 4 ) )
> bhm
          [,1]          [,2]          [,3]          [,4]
[1,]  0.000000  6.229014e+00  6.031225e+00  5.909091e+01
[2,]  6.229014 -6.202845e-05  1.169835e-05  1.146146e-04
[3,]  6.031225  1.169835e-05 -5.423455e-06  1.109752e-04
[4,] 59.090913  1.146146e-04  1.109752e-04 -6.462733e-04

Based on this bordered Hessian matrix, we can calculate the co-factors F_{ij}:

> FCapLab <- - det( bhm[ -2, -3 ] )
> FCapLab
[1] -0.06512713
> FCapMat <- det( bhm[ -2, -4 ] )
> FCapMat
[1] -0.006165438
> FLabMat <- - det( bhm[ -3, -4 ] )
> FLabMat
[1] -0.02641227
So that we can calculate the direct elasticities of substitution (for the first observation):

> esdCapLab <- with( dat[ 1, ], ( qCap * fCap + qLab * fLab ) /
+    ( qCap * qLab ) * FCapLab / det( bhm ) )
> esdCapLab
[1] 0.5723001
> esdCapMat <- with( dat[ 1, ], ( qCap * fCap + qMat * fMat ) /
+    ( qCap * qMat ) * FCapMat / det( bhm ) )
> esdCapMat
[1] 0.5388715
> esdLabMat <- with( dat[ 1, ], ( qLab * fLab + qMat * fMat ) /
+    ( qLab * qMat ) * FLabMat / det( bhm ) )
> esdLabMat
[1] 0.8888284

As all elasticities of substitution are positive, we can conclude that all pairs of inputs are substitutes for each other and no pair of inputs is complementary. If the firm substitutes capital for labor so that the ratio between the capital and labor quantity (x_cap / x_lab) increases by 0.57 percent, the (absolute value of the) MRTS between capital and labor (|dx_cap / dx_lab| = f_lab / f_cap) increases by one percent. Or, the other way round, if the firm substitutes capital for labor so that the absolute value of the MRTS between capital and labor increases by one percent, e.g. because the price ratio between labor and capital (w_lab / w_cap) increases by one percent, the ratio between the capital and labor quantity (x_cap / x_lab) will increase by 0.57 percent.

We can calculate the elasticities of substitution for all firms by automatically repeating the above commands for each observation using a for loop:²

> dat$esdCapLab <- NA
> dat$esdCapMat <- NA
> dat$esdLabMat <- NA
> for( obs in 1:nrow( dat ) ) {
+    # bordered Hessian at observation 'obs'; new names are used here so that
+    # bhm and the co-factors computed above are not overwritten (see footnote 2)
+    bhmObs <- with( dat[ obs, ], matrix( c(
+       0,    fCap,    fLab,    fMat,
+       fCap, fCapCap, fCapLab, fCapMat,
+       fLab, fCapLab, fLabLab, fLabMat,
+       fMat, fCapMat, fLabMat, fMatMat ), nrow = 4 ) )
+    FCapLabObs <- - det( bhmObs[ -2, -3 ] )
+    FCapMatObs <- det( bhmObs[ -2, -4 ] )
+    FLabMatObs <- - det( bhmObs[ -3, -4 ] )
+    dat$esdCapLab[ obs ] <- with( dat[ obs, ], ( qCap * fCap + qLab * fLab ) /
+       ( qCap * qLab ) * FCapLabObs / det( bhmObs ) )
+    dat$esdCapMat[ obs ] <- with( dat[ obs, ], ( qCap * fCap + qMat * fMat ) /
+       ( qCap * qMat ) * FCapMatObs / det( bhmObs ) )
+    dat$esdLabMat[ obs ] <- with( dat[ obs, ], ( qLab * fLab + qMat * fMat ) /
+       ( qLab * qMat ) * FLabMatObs / det( bhmObs ) )
+ }

² As I want to use the bordered Hessian matrix and some of its co-factors after the loop, I do not want to overwrite the values in bhm, FCapLab, FCapMat, and FLabMat in the loop. Therefore, I do not use the same variable names for the bordered Hessian matrix and the co-factors in the loop.
> range( dat$esdCapLab )
[1] 0.5723001 0.5723001
> range( dat$esdCapMat )
[1] 0.5388715 0.5388715
> range( dat$esdLabMat )
[1] 0.8888284 0.8888284
The direct elasticities of substitution based on the Cobb-Douglas production function are the
same for all firms.
2.4.11.2 Allen Elasticities of Substitution
The calculation of the Allen elasticities of substitution is similar to the calculation of the direct elasticities of substitution:

> numerator <- with( dat[ 1, ], qCap * fCap + qLab * fLab + qMat * fMat )
> esaCapLab <- numerator / ( dat$qCap[ 1 ] * dat$qLab[ 1 ] ) *
+    FCapLab / det( bhm )
> esaCapLab
[1] 1
> esaCapMat <- numerator / ( dat$qCap[ 1 ] * dat$qMat[ 1 ] ) *
+    FCapMat / det( bhm )
> esaCapMat
[1] 1
> esaLabMat <- numerator / ( dat$qLab[ 1 ] * dat$qMat[ 1 ] ) *
+    FLabMat / det( bhm )
> esaLabMat
[1] 1

All elasticities of substitution are exactly one. This is no surprise and confirms that our calculations have been done correctly, because the Cobb-Douglas production function always has Allen elasticities of substitution equal to one, irrespective of the input and output quantities and the estimated parameters. Hence, the Cobb-Douglas function cannot be used to analyze the substitutability of the inputs, because it will always return Allen elasticities of substitution equal to one, no matter whether the true elasticities are close to zero or close to infinity.

Although it seemed that we got free estimates of the direct elasticities of substitution from the Cobb-Douglas production function in section 2.4.11.1, they are indeed forced to be

\sigma^D_{ij} = \frac{f_i x_i + f_j x_j}{\sum_k f_k x_k} = \frac{\alpha_i y + \alpha_j y}{\sum_k \alpha_k y} = \frac{\alpha_i + \alpha_j}{\sum_k \alpha_k}

(see equation 2.14). Hence, the Cobb-Douglas production function cannot be used to analyze the substitutability between inputs.
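We can verify this relationship with our estimates: recomputing the three direct elasticities of substitution from the estimated coefficients alone reproduces the values obtained above (a sketch):

> aSum <- sum( coef( prodCD )[ c( "log(qCap)", "log(qLab)", "log(qMat)" ) ] )
> ( coef( prodCD )[ "log(qCap)" ] + coef( prodCD )[ "log(qLab)" ] ) / aSum   # 0.5723
> ( coef( prodCD )[ "log(qCap)" ] + coef( prodCD )[ "log(qMat)" ] ) / aSum   # 0.5389
> ( coef( prodCD )[ "log(qLab)" ] + coef( prodCD )[ "log(qMat)" ] ) / aSum   # 0.8888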
2.4.11.3 Morishima Elasticities of Substitution
In order to calculate the Morishima elasticities of substitution, we need to calculate the co-factors of the diagonal elements of the bordered Hessian matrix:

> FCapCap <- det( bhm[ -2, -2 ] )
> FLabLab <- det( bhm[ -3, -3 ] )
> FMatMat <- det( bhm[ -4, -4 ] )

Now, we can calculate the Morishima elasticities of substitution according to equation (2.16), e.g. for the first observation:

> esmCapLab <- with( dat[ 1, ], ( fLab / qCap ) * FCapLab / det( bhm ) -
+    ( fLab / qLab ) * FLabLab / det( bhm ) )
> esmCapLab
[1] 1
> esmLabCap <- with( dat[ 1, ], ( fCap / qLab ) * FCapLab / det( bhm ) -
+    ( fCap / qCap ) * FCapCap / det( bhm ) )
> esmLabCap
[1] 1
> esmCapMat <- with( dat[ 1, ], ( fMat / qCap ) * FCapMat / det( bhm ) -
+    ( fMat / qMat ) * FMatMat / det( bhm ) )
> esmCapMat
[1] 1
> esmMatCap <- with( dat[ 1, ], ( fCap / qMat ) * FCapMat / det( bhm ) -
+    ( fCap / qCap ) * FCapCap / det( bhm ) )
> esmMatCap
[1] 1
> esmMatLab <- with( dat[ 1, ], ( fLab / qMat ) * FLabMat / det( bhm ) -
+    ( fLab / qLab ) * FLabLab / det( bhm ) )
> esmMatLab
[1] 1

As with the Allen elasticities of substitution, all Morishima elasticities of substitution based on Cobb-Douglas production functions are exactly one.
From condition (2.15), we can show that all Morishima elasticities of substitution are always one (\sigma^M_{ij} = 1 \; \forall \, i \neq j) if all Allen elasticities of substitution are one (\sigma_{ij} = 1 \; \forall \, i \neq j):

\sigma^M_{ij} = K_j \sigma_{ij} - K_j \sigma_{jj} = K_j + \sum_{k \neq j} K_k \sigma_{kj} = \sum_k K_k = 1.    (2.67)
2.4.12 Quasiconcavity
We start by checking whether our estimated Cobb-Douglas production function is quasiconcave at the first observation, using the determinant conditions of section 1.4.2:

> bhm
          [,1]          [,2]          [,3]          [,4]
[1,]  0.000000  6.229014e+00  6.031225e+00  5.909091e+01
[2,]  6.229014 -6.202845e-05  1.169835e-05  1.146146e-04
[3,]  6.031225  1.169835e-05 -5.423455e-06  1.109752e-04
[4,] 59.090913  1.146146e-04  1.109752e-04 -6.462733e-04
> det( bhm[ 1:2, 1:2 ] ) <= 0 & det( bhm[ 1:3, 1:3 ] ) >= 0 & det( bhm ) <= 0
[1] TRUE

We can repeat this check for all observations using a for loop:

> dat$quasiConc <- NA
> for( obs in 1:nrow( dat ) ) {
+    bhmObs <- with( dat[ obs, ], matrix( c(
+       0,    fCap,    fLab,    fMat,
+       fCap, fCapCap, fCapLab, fCapMat,
+       fLab, fCapLab, fLabLab, fLabMat,
+       fMat, fCapMat, fLabMat, fMatMat ), nrow = 4 ) )
+    dat$quasiConc[ obs ] <- det( bhmObs[ 1:2, 1:2 ] ) <= 0 &
+       det( bhmObs[ 1:3, 1:3 ] ) >= 0 & det( bhmObs ) <= 0
+ }
> sum( dat$quasiConc )
[1] 140
Our estimated Cobb-Douglas production function is quasiconcave at all of the 140 observations. In fact, all Cobb-Douglas production functions are quasiconcave in inputs if A \geq 0, \alpha_1 \geq 0, \ldots, \alpha_N \geq 0, while Cobb-Douglas production functions are concave in inputs if, additionally, they have constant or decreasing returns to scale (\sum_{i=1}^{N} \alpha_i \leq 1).
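Whether our estimates additionally satisfy the condition for concavity can be checked directly (a sketch):

> sum( coef( prodCD )[ c( "log(qCap)", "log(qLab)", "log(qMat)" ) ] )
> # about 1.47 > 1: the estimated function is quasiconcave but not concave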
[Figure: Cobb-Douglas production function: marginal value products (MVP) of capital, labor, and materials plotted against the corresponding input prices, with linear and logarithmic axes]
[Figure: Cobb-Douglas production function: histograms and scatter plots of the input price ratios wCap/wLab, wCap/wMat, and wLab/wMat]
[Figure: comparison of the input price ratios with the corresponding marginal rates of technical substitution for the three input pairs]
\frac{w_{cap}}{w_{lab}} > | MRTS_{lab,cap} | = \frac{MP_{cap}}{MP_{lab}}    (2.68)

Hence, most firms can get closer to the minimum of their production costs by substituting labor for capital, because this will decrease the marginal product of labor and increase the marginal product of capital, so that the absolute value of the MRTS between labor and capital increases and gets closer to the corresponding input price ratio. Similarly, the graphs in the middle column indicate that most firms should substitute materials for capital, and the graphs on the right indicate that the majority of the firms should substitute materials for labor.⁴
A profit-maximizing producer would choose the following input quantities:

x_i(p, w) = \begin{cases}
\dfrac{\alpha_i \, p}{w_i} \left( A \, p^{\epsilon} \prod_j \left( \dfrac{\alpha_j}{w_j} \right)^{\alpha_j} \right)^{\frac{1}{1 - \epsilon}} & \text{if } \epsilon < 1 \\
\text{indeterminate} & \text{if } \epsilon = 1 \\
\infty & \text{if } \epsilon > 1
\end{cases}    (2.69)

and would produce the following output quantity:

y(p, w) = \begin{cases}
\left( A \, p^{\epsilon} \prod_j \left( \dfrac{\alpha_j}{w_j} \right)^{\alpha_j} \right)^{\frac{1}{1 - \epsilon}} & \text{if } \epsilon < 1 \\
\text{indeterminate} & \text{if } \epsilon = 1 \\
\infty & \text{if } \epsilon > 1
\end{cases}
\quad \text{with } \epsilon = \sum_j \alpha_j.    (2.70)

If the technology has increasing returns to scale (\epsilon > 1), the optimal input and output quantities are infinity. As our estimated Cobb-Douglas production function has increasing returns to scale, the optimal input quantities are infinity. Therefore, we cannot evaluate the effect of prices on the optimal input quantities.
A cost-minimizing producer would choose the following input quantities:

x_i(w, y) = \left( \frac{y}{A} \prod_{j \neq i} \left( \frac{\alpha_i \, w_j}{\alpha_j \, w_i} \right)^{\alpha_j} \right)^{\frac{1}{\sum_j \alpha_j}}.    (2.71)

For our three-input Cobb-Douglas production function, we get the following conditional input demand functions:

x_{cap}(w, y) = \left( \frac{y}{A} \left( \frac{\alpha_{cap}}{w_{cap}} \right)^{\alpha_{lab} + \alpha_{mat}} \left( \frac{w_{lab}}{\alpha_{lab}} \right)^{\alpha_{lab}} \left( \frac{w_{mat}}{\alpha_{mat}} \right)^{\alpha_{mat}} \right)^{\frac{1}{\alpha_{cap} + \alpha_{lab} + \alpha_{mat}}}    (2.72)

x_{lab}(w, y) = \left( \frac{y}{A} \left( \frac{\alpha_{lab}}{w_{lab}} \right)^{\alpha_{cap} + \alpha_{mat}} \left( \frac{w_{cap}}{\alpha_{cap}} \right)^{\alpha_{cap}} \left( \frac{w_{mat}}{\alpha_{mat}} \right)^{\alpha_{mat}} \right)^{\frac{1}{\alpha_{cap} + \alpha_{lab} + \alpha_{mat}}}    (2.73)

x_{mat}(w, y) = \left( \frac{y}{A} \left( \frac{\alpha_{mat}}{w_{mat}} \right)^{\alpha_{cap} + \alpha_{lab}} \left( \frac{w_{cap}}{\alpha_{cap}} \right)^{\alpha_{cap}} \left( \frac{w_{lab}}{\alpha_{lab}} \right)^{\alpha_{lab}} \right)^{\frac{1}{\alpha_{cap} + \alpha_{lab} + \alpha_{mat}}}    (2.74)

⁴ This generally confirms the results of the linear production function for the relationship between capital and labor and the relationship between capital and materials. However, in contrast to the linear production function, the results obtained by the Cobb-Douglas functional form indicate that most firms should substitute materials for labor (rather than the other way round).
We can use these formulas to calculate the cost-minimizing input quantities based on the observed input prices and the predicted output quantities. Alternatively, we could calculate the cost-minimizing input quantities based on the observed input prices and the observed output quantities. However, in the latter case, the predicted output quantities based on the cost-minimizing input quantities would differ from the predicted output quantities based on the observed input quantities, so that a comparison of the cost-minimizing input quantities with the observed input quantities would be less useful.

As the coefficients of the Cobb-Douglas function repeatedly occur in the formulas for calculating the cost-minimizing input quantities, it is convenient to define short-cuts for them:

> A <- exp( coef( prodCD )[ "(Intercept)" ] )
> aCap <- coef( prodCD )[ "log(qCap)" ]
> aLab <- coef( prodCD )[ "log(qLab)" ]
> aMat <- coef( prodCD )[ "log(qMat)" ]

Now, we can calculate the cost-minimizing input quantities:

> dat$qCapCD <- with( dat, ( qOutCD / A * ( aCap / pCap )^( aLab + aMat ) *
+    ( pLab / aLab )^aLab * ( pMat / aMat )^aMat )^( 1 / ( aCap + aLab + aMat ) ) )
> dat$qLabCD <- with( dat, ( qOutCD / A * ( aLab / pLab )^( aCap + aMat ) *
+    ( pCap / aCap )^aCap * ( pMat / aMat )^aMat )^( 1 / ( aCap + aLab + aMat ) ) )
> dat$qMatCD <- with( dat, ( qOutCD / A * ( aMat / pMat )^( aCap + aLab ) *
+    ( pCap / aCap )^aCap * ( pLab / aLab )^aLab )^( 1 / ( aCap + aLab + aMat ) ) )

Before we continue, we check whether it is indeed possible to produce the predicted output quantity with the calculated cost-minimizing input quantities:

> dat$qOutTest <- with( dat, A * qCapCD^aCap * qLabCD^aLab * qMatCD^aMat )
> all.equal( dat$qOutTest, dat$qOutCD )
[1] TRUE
[Figure: observed input quantities (qCap, qLab, qMat) plotted against the cost-minimizing input quantities (qCapCD, qLabCD, qMatCD), with linear and logarithmic axes]
[Figure: histogram of the ratio between the minimum (cost-minimizing) costs and the observed costs, costProdCD / cost]

[Table: observed and cost-minimizing input quantities and costs (qCap, qCapCD, qLab, qLabCD, qMat, qMatCD, cost, costProdCD) for the first five observations]
For a Cobb-Douglas production function with two inputs, solving the first-order conditions for cost minimization for the input quantities gives the conditional input demand function of the first input:

x_1(w, y) = \left( \frac{y}{A} \left( \frac{\alpha_1 w_2}{\alpha_2 w_1} \right)^{\alpha_2} \right)^{\frac{1}{\alpha_1 + \alpha_2}}    (2.75)

Differentiating this function with respect to w_1, w_2, and y and multiplying by w_1 / x_1, w_2 / x_1, and y / x_1, respectively, yields the conditional demand elasticities of the first input (equations 2.76–2.92):

\epsilon_{11}(w, y) = \frac{\partial x_1(w, y)}{\partial w_1} \, \frac{w_1}{x_1(w, y)} = - \frac{\alpha_2}{\alpha_1 + \alpha_2}

\epsilon_{12}(w, y) = \frac{\partial x_1(w, y)}{\partial w_2} \, \frac{w_2}{x_1(w, y)} = \frac{\alpha_2}{\alpha_1 + \alpha_2}

\epsilon_{1y}(w, y) = \frac{\partial x_1(w, y)}{\partial y} \, \frac{y}{x_1(w, y)} = \frac{1}{\alpha_1 + \alpha_2}

The conditional input demand function of the second input is

x_2(w, y) = \left( \frac{y}{A} \left( \frac{\alpha_2 w_1}{\alpha_1 w_2} \right)^{\alpha_1} \right)^{\frac{1}{\alpha_1 + \alpha_2}}    (2.93)

with the corresponding elasticities

\epsilon_{21}(w, y) = \frac{\alpha_1}{\alpha_1 + \alpha_2}, \quad
\epsilon_{22}(w, y) = - \frac{\alpha_1}{\alpha_1 + \alpha_2}, \quad
\epsilon_{2y}(w, y) = \frac{1}{\alpha_1 + \alpha_2}.    (2.94–2.96)
One can similarly derive the input demand elasticities for the general case of N inputs:

\epsilon_{ij}(w, y) = \frac{\partial x_i(w, y)}{\partial w_j} \, \frac{w_j}{x_i(w, y)} = \frac{\alpha_j}{\epsilon} - \delta_{ij}    (2.97)

\epsilon_{iy}(w, y) = \frac{\partial x_i(w, y)}{\partial y} \, \frac{y}{x_i(w, y)} = \frac{1}{\epsilon},    (2.98)

where \delta_{ij} is (again) Kronecker's delta (2.66) and \epsilon = \sum_j \alpha_j is the elasticity of scale. We have calculated all these elasticities based on the estimated coefficients of the Cobb-Douglas production function; they are presented in table 2.1. If the price of capital increases by one percent, the cost-minimizing firm will decrease the use of capital by 0.89% and increase the use of labor and materials by 0.11% each. If the price of labor increases by one percent, the cost-minimizing firm will decrease the use of labor by 0.54% and increase the use of capital and materials by 0.46% each. If the price of materials increases by one percent, the cost-minimizing firm will decrease the use of materials by 0.57% and increase the use of capital and labor by 0.43% each. If the cost-minimizing firm increases the output quantity by one percent, (s)he will increase all input quantities by 0.68%.
Table 2.1: Conditional demand elasticities
        wcap    wlab    wmat      y
xcap   -0.89    0.46    0.43   0.68
xlab    0.11   -0.54    0.43   0.68
xmat    0.11    0.46   -0.57   0.68
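The entries of table 2.1 follow directly from equations (2.97) and (2.98) and can be reproduced from the coefficient short-cuts defined above (a sketch):

> eScaleCD <- aCap + aLab + aMat
> alpha <- c( aCap, aLab, aMat )
> elaCond <- matrix( rep( alpha / eScaleCD, each = 3 ), nrow = 3 ) - diag( 3 )
> elaCond <- cbind( elaCond, 1 / eScaleCD )   # last column: output elasticities
> rownames( elaCond ) <- c( "xcap", "xlab", "xmat" )
> colnames( elaCond ) <- c( "wcap", "wlab", "wmat", "y" )
> round( elaCond, 2 )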
2.5 Quadratic Production Function

2.5.1 Specification
A quadratic production function with N inputs is

y = \beta_0 + \sum_i \beta_i x_i + \frac{1}{2} \sum_i \sum_j \beta_{ij} x_i x_j,    (2.99)

where the restriction \beta_{ij} = \beta_{ji} is required to identify all coefficients, because x_i x_j and x_j x_i are the same regressors. Based on this general form, we can derive the specification of a quadratic production function with our three inputs.
2.5.2 Estimation
We can estimate this quadratic production function with the command

> prodQuad <- lm( qOut ~ qCap + qLab + qMat + I( 0.5 * qCap^2 ) +
+    I( 0.5 * qLab^2 ) + I( 0.5 * qMat^2 ) + I( qCap * qLab ) +
+    I( qCap * qMat ) + I( qLab * qMat ), data = dat )
> summary( prodQuad )

Call:
lm(formula = qOut ~ qCap + qLab + qMat + I(0.5 * qCap^2) + I(0.5 *
    qLab^2) + I(0.5 * qMat^2) + I(qCap * qLab) + I(qCap * qMat) +
    I(qLab * qMat), data = dat)

Residuals:
     Min       1Q   Median       3Q      Max
-3928802  -695518  -186123   545509  4474143

Coefficients:
                  Estimate Std. Error t value Pr(>|t|)
(Intercept)     -2.911e+05  3.615e+05  -0.805 0.422072
qCap             5.270e+00  4.403e+00   1.197 0.233532
qLab             6.077e+00  3.185e+00   1.908 0.058581 .
qMat             1.430e+01  2.406e+01   0.595 0.553168
I(0.5 * qCap^2)  5.032e-05  3.699e-05   1.360 0.176039
I(0.5 * qLab^2) -3.084e-05  2.081e-05  -1.482 0.140671
I(0.5 * qMat^2) -1.896e-03  8.951e-04  -2.118 0.036106 *
I(qCap * qLab)  -3.097e-05  1.498e-05  -2.067 0.040763 *
I(qCap * qMat)  -4.160e-05  1.474e-04  -0.282 0.778206
I(qLab * qMat)   4.011e-04  1.112e-04   3.607 0.000439 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Multiple R-squared:  0.8449,	Adjusted R-squared:  0.8342
We can compare the quadratic production function with the linear production function, which is nested in the quadratic specification, e.g. by a likelihood ratio test:

> lrtest( prodLin, prodQuad )
Likelihood ratio test

Model 1: qOut ~ qCap + qLab + qMat
Model 2: qOut ~ qCap + qLab + qMat + I(0.5 * qCap^2) + I(0.5 * qLab^2) +
    I(0.5 * qMat^2) + I(qCap * qLab) + I(qCap * qMat) + I(qLab * qMat)
  #Df  LogLik Df  Chisq Pr(>Chisq)
1   5 -2191.3
2  11 -2169.1  6 44.529  5.806e-08 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

These tests show that the linear production function is clearly inferior to the quadratic production function and hence, should not be used for analyzing the production technology of the firms in this data set.
2.5.3 Properties
We cannot see from the estimated coefficients whether the monotonicity condition is fulfilled. Unless all coefficients (except possibly the intercept) are non-negative, quadratic production functions cannot be globally monotone, because there will always be a set of input quantities that result in negative marginal products. We will check the monotonicity condition at each observation in section 2.5.5.

Our estimated quadratic production function does not fulfill the weak essentiality assumption, because the intercept is different from zero (although its deviation from zero is not statistically significant). The production technology described by a quadratic production function with more than one (relevant) input never shows strict essentiality.

The input requirement sets derived from quadratic production functions are always closed and non-empty.

The quadratic production function always returns finite, real, and single values, but the non-negativity assumption is only fulfilled if all coefficients (including the intercept) are non-negative.

All quadratic production functions are continuous and twice-continuously differentiable.
[Figure: observed against fitted output quantities of the quadratic production function, with linear axes (left) and logarithmic axes (right)]
2.5.5 Marginal Products
The marginal products derived from the quadratic production function are

MP_i = \frac{\partial y}{\partial x_i} = \beta_i + \sum_j \beta_{ij} x_j.    (2.101)

We can simplify the code for computing the marginal products and some other figures by using short names for the coefficients:

> b1 <- coef( prodQuad )[ "qCap" ]
> b2 <- coef( prodQuad )[ "qLab" ]
> b3 <- coef( prodQuad )[ "qMat" ]
> b11 <- coef( prodQuad )[ "I(0.5 * qCap^2)" ]
> b22 <- coef( prodQuad )[ "I(0.5 * qLab^2)" ]
> b33 <- coef( prodQuad )[ "I(0.5 * qMat^2)" ]
> b12 <- b21 <- coef( prodQuad )[ "I(qCap * qLab)" ]
> b13 <- b31 <- coef( prodQuad )[ "I(qCap * qMat)" ]
> b23 <- b32 <- coef( prodQuad )[ "I(qLab * qMat)" ]

Now, we can use the following commands to calculate the marginal products in R:

> dat$mpCapQuad <- with( dat, b1 + b11 * qCap + b12 * qLab + b13 * qMat )
> dat$mpLabQuad <- with( dat, b2 + b21 * qCap + b22 * qLab + b23 * qMat )
> dat$mpMatQuad <- with( dat, b3 + b31 * qCap + b32 * qLab + b33 * qMat )
[Figure: histograms of the marginal products of capital, labor, and materials derived from the quadratic production function]
2.5.6 Output Elasticities
The output elasticities derived from the quadratic production function are

\epsilon_i = MP_i \, \frac{x_i}{y}.    (2.102)

As explained in section 2.4.11.1, we will use the predicted output quantities rather than the observed output quantities. We can calculate the output elasticities with:

> dat$eCapQuad <- with( dat, mpCapQuad * qCap / qOutQuad )
> dat$eLabQuad <- with( dat, mpLabQuad * qLab / qOutQuad )
> dat$eMatQuad <- with( dat, mpMatQuad * qMat / qOutQuad )

We can visualize (the variation of) these output elasticities with histograms:

> hist( dat$eCapQuad, 15 )
> hist( dat$eLabQuad, 15 )
> hist( dat$eMatQuad, 15 )
[Figure: histograms of the output elasticities eCapQuad, eLabQuad, and eMatQuad]
2.5.7 Elasticity of Scale
The elasticities of scale can again be calculated as the sum of the output elasticities:

> dat$eScaleQuad <- with( dat, eCapQuad + eLabQuad + eMatQuad )

[Figure 2.26: Quadratic production function: elasticities of scale at different firm sizes — histograms of eScaleQuad (for all observations and for the observations where the monotonicity condition is fulfilled) and scatter plots of the elasticities of scale against the observed output quantity (logarithmic axes)]
> dat$mrtsCapLabQuad <- with( dat, - mpLabQuad / mpCapQuad )
> dat$mrtsLabCapQuad <- with( dat, - mpCapQuad / mpLabQuad )
> dat$mrtsCapMatQuad <- with( dat, - mpMatQuad / mpCapQuad )
> dat$mrtsMatCapQuad <- with( dat, - mpCapQuad / mpMatQuad )
> dat$mrtsLabMatQuad <- with( dat, - mpMatQuad / mpLabQuad )
> dat$mrtsMatLabQuad <- with( dat, - mpLabQuad / mpMatQuad )
As the marginal rates of technical substitution (MRTS) are meaningless if the monotonicity
condition is not fulfilled, we visualize (the variation of) these MRTSs only for the observations,
where the monotonicity condition is fulfilled:
> hist( dat$mrtsCapLabQuad[ dat$monoQuad ], 30 )
> hist( dat$mrtsLabCapQuad[ dat$monoQuad ], 30 )
> hist( dat$mrtsCapMatQuad[ dat$monoQuad ], 30 )
> hist( dat$mrtsMatCapQuad[ dat$monoQuad ], 30 )
> hist( dat$mrtsLabMatQuad[ dat$monoQuad ], 30 )
> hist( dat$mrtsMatLabQuad[ dat$monoQuad ], 30 )
The resulting graphs are shown in figure 2.27. As some outliers hide the variation of the majority
of the RMRTS, we use function colMedians (package miscTools) to show the median values of
the MRTS:
86
40
20
80
60
20
0
10
0
0
60
40
Frequency
30
20
Frequency
30
20
10
Frequency
40
40
50
15
1000
600
mrtsLabCapQuad
200
mrtsCapMatQuad
10
0
5
0
6
20
Frequency
10
Frequency
40
0
20
Frequency
60
30
15
80
mrtsCapLabQuad
10
60
mrtsMatCapQuad
40
20
mrtsLabMatQuad
mrtsMatLabQuad
Figure 2.27: Quadratic production function: marginal rates of technical substitution (RMRTS)
> colMedians( subset( dat, monoQuad,
+
+
-0.44741654
-14.19802214
-0.07043235
-7.86423950
mrtsMatLabQuad
-0.12715788
Given that the median marginal rate of technical substitution between capital and labor is -2.24,
a typical firm that reduces the use of labor by one unit, has to use around 2.24 additional units
of capital in order to produce the same amount of output as before. Alternatively, the typical
firm can replace one unit of labor by using 0.13 additional units of materials.
87
15
10
700
rmrtsLabCapQuad
Frequency
15
100
rmrtsCapMatQuad
10
10
Frequency
300
25
80
60
40
40
30
20
10
rmrtsMatCapQuad
20
Frequency
500
30
20
rmrtsCapLabQuad
60
20
20
35
200
20
400
15
600
0
800
40
Frequency
20
40
Frequency
40
0
20
Frequency
60
60
80
80
80
rmrtsLabMatQuad
15
10
rmrtsMatLabQuad
Figure 2.28: Quadratic production function: relative marginal rates of technical substitution
(RMRTS)
The resulting graphs are shown in figure 2.28. As some outliers hide the variation of the majority
of the RMRTS, we use function colMedians (package miscTools) to show the median values of
88
-0.1793986
-4.2567577
-0.2349206
-0.7745132
rmrtsMatLabQuad
-1.2911336
Given that the median relative marginal rate of technical substitution between capital and labor
is -5.57, a typical firm that reduces the use of labor by one percent, has to use around 5.57 percent
more capital in order to produce the same amount of output as before. Alternatively, the typical
firm can replace one percent of labor by using 1.29 percent more materials.
xi M Pi ij = 0 j
(2.103)
In order to check this condition, we need to calculate not only (normal) elasticities of substitution
(ij ; i 6= j) but also economically not meaningful elasticities of self-substitution (ii ):
> dat$esaCapLabQuad <- NA
> dat$esaCapMatQuad <- NA
> dat$esaLabMatQuad <- NA
> dat$esaCapCapQuad <- NA
> dat$esaLabLabQuad <- NA
> dat$esaMatMatQuad <- NA
> for( obs in 1:nrow( dat ) ) {
+
89
+
+
+
+
+
+
+
+
+
+
+
+
+
+ }
Before we take a look at and interpret the elasticities of substitution, we check whether the
conditions (2.103) are fulfilled:
> range( with( dat, qCap * mpCapQuad * esaCapCapQuad +
+
[1] -3.725290e-08
1.043081e-07
[1] -3.725290e-09
2.095476e-09
[1] -1.117587e-08
1.396984e-09
90
25
20
15
10
Frequency
15
5
10
Frequency
30
20
esaCapLabQuad
10
Frequency
40
50
20
esaCapMatQuad
0.0
0.5
1.0
1.5
2.0
2.5
esaLabMatQuad
91
2.5.11 Quasiconcavity
We check whether our estimated quadratic production function is quasiconcave at each observation:
> dat$quasiConcQuad <- NA
> for( obs in 1:nrow( dat ) ) {
+
+ }
> sum( dat$quasiConcQuad )
[1] 0
Our estimated quadratic production function is quasiconcave at none of the 140 observations.
92
20
500
400
10
w Cap
300
200
40
0.1
1.0
0.5
2.0
100
300
5.0
500
w Mat
100
10
20
50
MVP Mat
5.0 10.0
0.5
0.1
30
2.0
5.0
2.0
0.5
MVP Cap
MVP Lab
20
w Lab
100
10
0
60
40
30
60
MVP Mat
20
20
MVP Lab
40
MVP Cap
40
w Cap
5.0
20.0
10
w Lab
20
50
100
w Mat
93
- dat$mrtsLabCapQuad[ dat$monoQuad ] )
- dat$mrtsMatCapQuad[ dat$monoQuad ] )
- dat$mrtsMatLabQuad[ dat$monoQuad ] )
15
10
15
10.00
2.00
0.010
w Cap / w Lab
0.100
1.000
0.20 0.50
0.02
94
w Cap / w Mat
w Lab / w Mat
0.02 0.05
1.000
0.100
0.001
0.50 2.00
0.001
0.50
0.10
0.010
10.00
2.00
0.02
0.10
w Cap / w Mat
w Cap / w Lab
0.02
4
3
0
10
5
0
0.10
0.50
w Lab / w Mat
2.00
40
20
10
20
0
10
10
0
5
15
40
10
20
Frequency
15
10
10
15
Frequency
20
30
20
25
25
30
Frequency
30
Frequency
40
Frequency
40
30
20
Frequency
50
50
60
60
60
70
3 2 1
(2.104)
Hence, these firms can get closer to the minimum of their production costs by substituting labor
for capital, because this will decrease the marginal product of labor and increase the marginal
95
1 XX
ij ln xi ln xj
2 i j
i ln xi +
with ij = ji .
2.6.2 Estimation
We can estimate this Translog production function with the command
> prodTL <- lm( log( qOut ) ~ log( qCap ) + log( qLab ) + log( qMat )
+
data = dat )
1Q
Median
3Q
Max
-1.68015 -0.36688
0.05389
0.44125
1.26560
96
(2.105)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)
-4.14581
21.35945
-0.194
0.8464
log(qCap)
-2.30683
2.28829
-1.008
0.3153
log(qLab)
1.99328
4.56624
0.437
0.6632
log(qMat)
2.23170
3.76334
0.593
0.5542
I(0.5 * log(qCap)^2)
-0.02573
0.20834
-0.124
0.9019
I(0.5 * log(qLab)^2)
-1.16364
0.67943
-1.713
0.0892 .
I(0.5 * log(qMat)^2)
-0.50368
0.43498
-1.158
0.2490
0.56194
0.29120
1.930
0.0558 .
0.23534
-1.742
0.0839 .
I(log(qLab) * log(qMat))
0.42750
1.539
I(log(qCap) * log(qLab))
0.65793
0.1262
--Signif. codes:
0.6296,
Adjusted R-squared:
0.6039
None of the estimated coefficients is statistically significantly different from zero at the 5% significance level and only three coefficients are statistically significant at the 10% level. As the
Cobb-Douglas production function is nested in the Translog production function, we can apply
a Wald test or likelihood ratio test to check whether the Cobb-Douglas production function is
rejected in favor of the Translog production function. This can be done by the functions waldtest
and lrtest (package lmtest):
> waldtest( prodCD, prodTL )
Wald test
Model 1: log(qOut) ~ log(qCap) + log(qLab) + log(qMat)
Model 2: log(qOut) ~ log(qCap) + log(qLab) + log(qMat) + I(0.5 * log(qCap)^2) +
I(0.5 * log(qLab)^2) + I(0.5 * log(qMat)^2) + I(log(qCap) *
log(qLab)) + I(log(qCap) * log(qMat)) + I(log(qLab) * log(qMat))
Res.Df Df
1
136
130
Pr(>F)
6 2.062 0.06202 .
--Signif. codes:
97
LogLik Df
5 -137.61
11 -131.25
Chisq Pr(>Chisq)
6 12.727
0.04757 *
--Signif. codes:
At the 5% significance level, the Cobb-Douglas production function is accepted by the Wald test
but rejected in favor of the Translog production function by the likelihood ratio test. In order
to reduce the chance of using a too restrictive functional form, we proceed with the Translog
production function.
2.6.3 Properties
We cannot see from the estimated coefficients whether the monotonicity condition is fulfilled. The
Translog production function cannot be globally monotone, because there will be always a set of
input quantities that result in negative marginal products.6 The Translog function would only be
globally monotone, if all first-order coefficients are positive and all second-order coefficients are
zero, which is equivalent to a Cobb-Douglas function. We will check the monotonicity condition
at each observation in section 2.6.5.
All Translog production functions fulfill the weak and the strong essentiality assumption, because as soon as a single input quantity approaches zero, the right-hand side of equation (2.105)
approaches minus infinity (if monotonicity is fulfilled), and thus, the output quantity y = exp(ln y)
approaches zero. Hence, if a data set includes observations with a positive output quantity but
at least one input quantity that is zero, strict essentiality cannot be fulfilled in the underlying
true production technology so that the Translog production function is not a suitable functional
form for analyzing this data set.
The input requirement sets derived from Translog production functions are always closed and
non-empty. The Translog production function always returns finite, real, non-negative, and single
values as long as all input quantities are strictly positive. All Translog production functions are
continuous and twice-continuously differentiable.
Please note that ln xj is a large negative number if xj is a very small positive number.
98
1e+07
2.0e+07
0.0e+00
1.0e+07
1e+05
5e+05 2e+06
fitted
1.0e+07
0.0e+00
fitted
2.0e+07
1e+05
observed
5e+05
5e+06
observed
X
ln y
= i +
ij ln xj
ln xi
j
(2.106)
We can simplify the code for computing these output elasticities by using short names for the
coefficients:
99
We can visualize (the variation of) these output elasticities with histograms:
> hist( dat$eCapTL, 15 )
> hist( dat$eLabTL, 15 )
Frequency
0.4
0.0
0.4
0.8
5
0
10
20
30
25
20
15
10
Frequency
15
10
Frequency
20
25
1.0
eCap
0.0
eLab
0.5
1.0
1.5
2.0
eMat
100
X
y
y ln y
y
M Pi =
=
=
ij ln xj
i +
xi
xi ln xi
xi
j
(2.107)
We can calculate the marginal products based on the output elasticities that we have calculated
above. As argued in section 2.4.11.1, we use the predicted output quantities in this calculation:
> dat$mpCapTL <- with( dat, eCapTL * qOutTL / qCap )
> dat$mpLabTL <- with( dat, eLabTL * qOutTL / qLab )
> dat$mpMatTL <- with( dat, eMatTL * qOutTL / qMat )
We can visualize (the variation of) these marginal products with histograms:
> hist( dat$mpCapTL, 15 )
> hist( dat$mpLabTL, 15 )
> hist( dat$mpMatTL, 15 )
The resulting graphs are shown in figure 2.35. If the firms increase capital input by one unit,
the output of most firms will increase by around 4 units. If the firms increase labor input by
one unit, the output of most firms will increase by around 4 units. If the firms increase material
input by one unit, the output of most firms will increase by around 70 units.
101
10
10
20
15
0
10
Frequency
20
20
15
0
10
Frequency
15
10
5
Frequency
20
25
mpCapTL
10
15
20
25
50
mpLabTL
100
mpMatTL
dat$eMatTL
8
6
0
Frequency
10
5
Frequency
15
1.2
eScaleTL
1.3
1.4
1.5
1.6
1.7
eScaleTL[ monoTL ]
102
1e+05
5e+05
2e+06
1.6
1.4
1.2
eScaleTL
1.4
1.2
eScaleTL
1.6
1e+07
0.5
1e+05
5e+05
2e+06
1e+07
2.0
5.0
eScaleTL[ monoTL ]
eScaleTL[ monoTL ]
observed output
1.0
0.5
observed output
1.0
2.0
5.0
Figure 2.37: Translog production function: elasticities of scale at different firm sizes
The resulting graphs are shown in figure 2.37. Both of them indicate that the elasticity of scale
slightly decreases with firm size but there are considerable increasing returns to scale even for
the largest firms in the sample. Hence, all firms in the sample would gain from increasing their
size and the optimal firm size seems to be larger than the largest firm in the sample.
103
150
100
50
10 20 30 40 50 60 70
Frequency
40
30
0
10
20
Frequency
40
20
Frequency
60
50
60
60
20
800
mrtsLabCapTL
600
400
200
mrtsCapMatTL
0.6
0.4
0.2
0.0
30
0
10
20
Frequency
40
60
0
20
40
Frequency
10
5
Frequency
15
50
80
mrtsCapLabTL
40
1200
mrtsMatCapTL
800
400
mrtsLabMatTL
mrtsMatLabTL
Figure 2.38: Translog production function: marginal rates of technical substitution (MRTS)
The resulting graphs are shown in figure 2.39. As some outliers hide the variation of the majority
of the MRTS, we use function colMedians (package miscTools) to show the median values of the
MRTS:
> colMedians( subset( dat, monoTL,
+
+
104
-1.19196521 -12.72554396
-0.07858435 -12.79850828
-0.07813810
Given that the median marginal rate of technical substitution between capital and labor is -0.84,
a typical firm that reduces the use of labor by one unit, has to use around 0.84 additional units
of capital in order to produce the same amount of output as before. Alternatively, the typical
firm can replace one unit of labor by using 0.08 additional units of materials.
-0.3539150
-3.0064237
rmrtsMatLabTL
-0.7439008
105
-0.3331325
-1.3444115
300
100
25
20
15
10
60
400
rmrtsLabCapTL
300
200
100
rmrtsCapMatTL
40
30
20
Frequency
40
30
20
3.0
2.0
1.0
rmrtsMatCapTL
0.0
10
10
10
Frequency
15
50
50
60
20
rmrtsCapLabTL
40
20
0
0
500
Frequency
Frequency
20
40
Frequency
60
40
0
20
Frequency
60
80
80
50 40 30 20 10
rmrtsLabMatTL
35
25
15
rmrtsMatLabTL
Figure 2.39: Translog production function: relative marginal rates of technical substitution
(RMRTS)
106
2y
=
xi xj
(i +
=
k ik ln xk )
y
xi
(2.108)
xj
ij y
i +
=
+
xj xi
ij y i +
=
+
xi xj
X
ik ln xk y
ij i +
ik ln xk
xi
xj
k
ik ln xk
xi
j +
X
k
jk ln xk
y
x2i
X
y
ij i +
ik ln xk
xj
k
(2.109)
!
y
x2i
(2.110)
ij y i j y
i y
=
+
ij 2
xi xj
xi xj
xi
y
=
(ij + i j ij i ) ,
xi xj
(2.111)
(2.112)
where ij is (again) Kroneckers delta (2.66). Alternatively, the second derivatives of the Translog
function can be expressed based on the marginal products (instead of the output elasticities):
ij y M Pi M Pj
M Pi
2y
=
+
ij
xi xj
xi xj
y
xi
Now, we can calculate the second derivatives for each observation in our data set:
> dat$fCapCapTL <- with( dat,
+
107
(2.113)
+
+
+
+
+
+
108
+ }
Before we take a look at and interpret the elasticities of substitution, we check whether the
conditions (2.103) are fulfilled:
> range( with( dat, qCap * mpCapTL * esaCapCapTL +
+
[1] -5.960464e-08
1.907349e-06
[1] -1.490116e-08
5.960464e-08
[1] -3.725290e-08
1.311302e-06
The extremely small deviations from zero are most likely caused by rounding errors that are
unavoidable on digital computers. This test does not proof that all of our calculations are done
correctly but if we had made a mistake, we probably would have discovered it. Hence, we can be
rather sure that our calculations are correct.
As the elasticities of substitution measure changes in the marginal rates of technical substitution
(MRTS) and the MRTS are meaningless if the monotonicity conditions are not fulfilled, also the
elasticities of substitution are meaningless if the monotonicity conditions are not fulfilled. Hence,
we visualize (the variation of) the Allen elasticities of substitution only for the observations,
where the monotonicity condition is fulfilled:
> hist( dat$esaCapLabTL[ dat$monoTL ], 30 )
> hist( dat$esaCapMatTL[ dat$monoTL ], 30 )
> hist( dat$esaLabMatTL[ dat$monoTL ], 30 )
> hist( dat$esaCapLabTL[ dat$monoTL & abs( dat$esaCapLabTL ) < 10 ], 30 )
> hist( dat$esaCapMatTL[ dat$monoTL & abs( dat$esaCapMatTL ) < 10 ], 30 )
> hist( dat$esaLabMatTL[ dat$monoTL & abs( dat$esaLabMatTL ) < 10 ], 30 )
109
200
500
1000
1500
50
150
esaLabMatTL
10
Frequency
15
20
12
10
8
10
Frequency
10
5
100
25
esaCapMatTL
15
esaCapLabTL
Frequency
30
10
0
10
0
0
400
20
Frequency
30
20
Frequency
30
20
10
Frequency
40
40
40
50
50
50
10
2.5436068
0.4193423
The median elasticity of substitution between labor and materials (0.42) lies between the elasticity
of substitution of the Leontief production function ( = 0) and the elasticity of substitution of
the Cobb-Douglas production function ( = 1). Hence, the substitutability between labor and
materials seems to be rather low. A typical firm who substitutes materials for labor (or vice versa)
so that the MRTS between materials and labor increases (decreases) by one percent, has increased
(decreased) the ratio between the quantity of materials and the labor quantity by 0.42 percent. If
the firm is maximizing profit or minimizing costs and the price ratio between labor and materials
110
2.6.12 Quasiconcavity
We check whether our estimated Translog production function is quasiconcave at each observation:
> dat$quasiConcTL <- NA
> for( obs in 1:nrow( dat ) ) {
+
+ }
> sum( dat$quasiConcTL )
[1] 63
Our estimated Translog production function is quasiconcave at 63 of the 140 observations.
111
30
60
150
10 15 20 25 30
100
5.00
100
150
5e02
0.20
MVP Lab
5e+00
5e01
MVP Cap
0.05
5e01
50
w Mat
200
20.00
5e02
w Lab
1.00
5e+01
w Cap
100
MVP Mat
5
50
40
20
20
10
MVP Mat
40
5
5
40
20
15
MVP Lab
40
20
50
25
20
MVP Cap
10
60
5e+00
5e+01
0.05
0.20
w Cap
1.00
5.00 20.00
10
w Lab
20
50
100
w Mat
112
- dat$mrtsLabCapTL[ dat$monoTL ] )
- dat$mrtsMatCapTL[ dat$monoTL ] )
- dat$mrtsMatLabTL[ dat$monoTL ] )
113
(2.114)
0.6
60
1
0
1.000
0.100
0.001
0.001
1e+02
0.010
0.500
0.100
0.020
w Lab / w Mat
0.005
1e+01
1e02
1e01
1e+00
w Cap / w Lab
w Cap / w Mat
1e+00
1e02
w Cap / w Lab
1e+02
0.4
0.3
0.2
0.1
40
0.0
20
60
40
20
0
0.5
0.001
0.005
0.050
0.500
0.001
w Cap / w Mat
114
0.010
0.100
w Lab / w Mat
1.000
60
80
10
0.6
0.2
0.4
0.6
20
15
10
Frequency
10
20
Frequency
15
10
5
0
Frequency
20
30
25
25
40
0.2
30
40
5
0
20
40
20
10
10
0
30
Frequency
20
15
Frequency
30
20
Frequency
25
50
40
30
60
115
qmCap
qmLab
qmMat
-1.110223e-16 -1.110223e-16
0.000000e+00
0.000000e+00
Please note that mean-scaling does not imply that the mean values of the logarithmic variables
are zero:
> colMeans( log( dat[ , c( "qmOut", "qmCap", "qmLab", "qmMat" ) ] ) )
qmOut
qmCap
qmLab
qmMat
data = dat )
116
1Q
Median
3Q
Max
-1.68015 -0.36688
0.05389
0.44125
1.26560
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)
-0.09392
0.08815
-1.065
0.28864
log(qmCap)
0.15004
0.11134
1.348
0.18013
log(qmLab)
0.79339
0.17477
log(qmMat)
0.50201
0.16608
3.023
I(0.5 * log(qmCap)^2)
-0.02573
0.20834
-0.124
0.90189
I(0.5 * log(qmLab)^2)
-1.16364
0.67943
-1.713
0.08916 .
I(0.5 * log(qmMat)^2)
-0.50368
0.43498
-1.158
0.24902
0.56194
0.29120
1.930
0.05582 .
0.23534
-1.742
0.08387 .
I(log(qmLab) * log(qmMat))
0.42750
1.539
I(log(qmCap) * log(qmLab))
0.65793
0.00302 **
0.12623
--Signif. codes:
0.6296,
Adjusted R-squared:
0.6039
While the intercept and the first-order coefficients have adjusted to the new units of measurement,
the second-order coefficients of the Translog function remain unchanged (compare with estimates
in section 2.6.2):
> all.equal( coef(prodTL)[-c(1:4)], coef(prodTLm)[-c(1:4)],
+
check.attributes = FALSE )
[1] TRUE
In case of functional forms that are invariant to the units of measurement (e.g. linear, CobbDouglas, quadratic, Translog), mean-scaling does not change the relative indicators of the technology (e.g. output elasticities, elasticities of scale, relative marginal rates of technical substitution, elasticities of substitution). As the logarithms of the mean values of the mean-scaled input
117
118
119
120
2.7.4 Summary
The various criteria for assessing whether the quadratic or the Translog functional form is more
appropriate for analyzing the production technology in our data set are summarized in table 2.2.
While the quadratic production function results in less monotonicity violations and less implausible output elasticities, the Translog production function seems to give a better fit to the data
and results in slightly more plausible elasticities of substitution.
Table 2.2: Criteria for assessing functional forms
quadratic Translog
R2 of y
0.84
0.77
2
R of ln y
0.55
0.63
visual fit
()
ok
total monotonicity violations
41
54
observations with monotonicity violated
39
48
negative output quantities
0
0
observations with quasiconcavity violated
140
77
implausible elasticities of scale
0
0
implausible output elasticities
28
56
implausible elasticities of substitution
cap,lab
121
gradients = TRUE )
data = dat,
641971.3 0.8432296
122
16.0
13.0
14.5
log(qOut)
16.0
14.5
13.0
log(qOut)
10
11
12
13
11.5
12.0
13.0
13.5
14.0
14.5
16.0
log(qLab)
13.0
log(qOut)
log(qCap)
12.5
9.0
9.5
10
11
12
13
0.5
11.5
12.0
12.5
13.0
13.5
14.0
log(qLab)
0.5
log(qCap)
0.5
0.5
0.5
0.5
9.0
9.5
123
641971.3 0.8432296
Significance Tests
P Value:
log(qCap) 0.1478697
log(qLab) 0.0025063 **
log(qMat) 0.0025063 **
--Signif. codes:
0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The results confirm the results from the parametric regressions that labor and materials have a
significant effect on the output while capital does not have a significant effect (at 10% significance
level).
The following commands plot histograms of the three output elasticities and the elasticity of
scale:
> hist( gradients( prodNP )[ ,1] )
> hist( gradients( prodNP )[ ,2] )
> hist( gradients( prodNP )[ ,3] )
> hist( rowSums( gradients( prodNP ) ) )
The resulting graphs are shown in figure 2.46. The monotonicity condition is fulfilled at almost
all observations, only 1 output elasticity of capital and 1 output elasticity of labor is negative.
All firms operate under increasing returns to scale with most farms having an elasticity of scale
around 1.4.
Finally, we visualize the relationship between firm size and the elasticity of scale based on our
non-parametric estimation results:
> plot( dat$qOut, rowSums( gradients( prodNP ) ), log = "x" )
> plot( dat$X, rowSums( gradients( prodNP ) ), log = "x" )
124
40
0
20
Frequency
40
20
0
Frequency
0.2
0.1
0.0
0.1
0.2
0.3
0.0
0.2
0.4
0.8
1.0
40
0
20
30
Frequency
labor
0 10
Frequency
capital
0.6
0.4
0.6
0.8
1.0
1.2
1.4
1.2
1.4
materials
1.6
1.8
scale
Figure 2.46: Output elasticities and elasticities of scale estimated by non-parametric kernel
regression
1.8
1e+05
5e+05
5e+06
1.2
elaScaleNP
1.6
1.4
1.6
1.8
1.2
elaScaleNP
1.4
0.5
qOut
1.0
2.0
5.0
Figure 2.47: Relationship between firm size and elasticities of scale estimated by non-parametric
kernel regression
125
126
wi x i
(3.1)
wi xi , s.t. f (x) y
(3.2)
returns the minimal (total) cost that is required to produce at least the output quantity y given
input prices w.
It is important to distinguish the cost definition (3.1) from the cost function (3.2).
(3.3)
c(w, y)
y
c(w, y)
y
(3.4)
At the cost-minimizing points, the elasticity of size is equal to the elasticity of scale (Chambers,
1988, p. 7172). For homothetic production technologies such as the Cobb-Douglas production
technology, the elasticity of size is always equal to the elasticity of scale (Chambers, 1988, p. 72
74).1
Further details about the relationship between the elasticity of size and the elasticity of scale are available, e.g.,
in McClelland, Wetzstein, and Musserwetz (1986).
127
cv (w1 y, , x2 ) = min
x1
wi xi , s.t. f (x1 , x2 ) y
(3.5)
iN 1
where w1 denotes the vector of the prices of all variable inputs, x2 denotes the vector of the
quantities of all quasi-fixed inputs, cv denotes the variable costs defined in equation (1.3), and
N 1 is a vector of the indices of the variable inputs.
y y
(3.6)
i ln wi + y ln y
(3.7)
X
i
with 0 = ln A.
3.2.2 Estimation
The linearized Cobb-Douglas cost function can be estimated by OLS:
> costCD <- lm( log( cost ) ~ log( pCap ) + log( pLab ) + log( pMat ) + log( qOut ),
+
data = dat )
1Q
Median
3Q
Max
0.24439
0.74339
Coefficients:
128
6.75383
0.40673
16.605
log(pCap)
0.07437
0.04878
1.525
0.12969
log(pLab)
0.46486
0.14694
3.164
0.00193 **
log(pMat)
0.48642
0.08112
log(qOut)
0.37341
0.03072
12.154
--Signif. codes:
0.6884,
Adjusted R-squared:
0.6792
3.2.3 Properties
As the coefficients of the (logarithmic) input prices are all non-negative, this cost function is
monotonically non-decreasing in input prices. Furthermore, the coefficient of the (logarithmic)
output quantity is non-negative so that this cost function is monotonically non-decreasing in
output quantities. The Cobb-Douglas cost function always implies no fixed costs, as the costs
are always zero if the output quantity is zero. Given that A = exp(0 ) is always positive,
all Cobb-Douglas cost functions that are based on its (estimated) linearized version fulfill the
non-negativity condition.
Finally, we check if the Cobb-Douglas cost function is positive linearly homogeneous in input
prices. This condition is fulfilled if
t c(w, y) = c(t w, y)
ln(t c) = 0 +
(3.8)
i ln(t wi ) + y ln y
(3.9)
(3.10)
ln t + ln c = 0 +
i ln t +
i ln wi + y ln y
ln c + ln t = 0 + ln t
i +
ln c + ln t = ln c + ln t
i ln wi + y ln y
(3.11)
(3.12)
ln t = ln t
(3.13)
1=
(3.14)
Hence, the homogeneity condition is only fulfilled if the coefficients of the (logarithmic) input
prices sum up to one. As they sum up to 1.03 the homogeneity condition is not fulfilled in our
estimated model.
129
N
1
X
(3.15)
i=1
and replace N in the cost function (3.7) by the right-hand side of the above equation:
ln c = 0 +
ln c = 0 +
ln c ln wN = 0 +
ln
c
= 0 +
wN
N
1
X
i=1
N
1
X
i=1
N
1
X
i=1
N
1
X
i ln wi + 1
N
1
X
i ln wN + y ln y
(3.16)
i=1
i (ln wi ln wN ) + ln wN + y ln y
(3.17)
i (ln wi ln wN ) + y ln y
(3.18)
i ln
i=1
wi
+ y ln y
wN
(3.19)
This Cobb-Douglas cost function with linear homogeneity in input prices imposed can be estimated by following command:
> costCDHom <- lm( log( cost / pMat ) ~ log( pCap / pMat ) + log( pLab / pMat ) +
+
1Q
Median
3Q
Max
0.24470
0.74688
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)
6.75288
0.40522
16.665
log(pCap/pMat)
0.07241
0.04683
1.546
log(pLab/pMat)
0.44642
0.07949
log(qOut)
0.37415
0.03021
12.384
--Signif. codes:
130
0.5456,
Adjusted R-squared:
0.5355
The coefficient of the N th (logarithmic) input price can be obtained by the homogeneity condition
(3.15). Hence, the estimate of Mat is 0.4812 in our model.
As there is no theory that says which input price should be taken for the normalization/deflation,
it is desirable that the estimation results do not depend on the price that is used for the normalization/deflation. This desirable property is fulfilled for the Cobb-Douglas cost function and
we can verify this by re-estimating the cost function, while using a different input price for the
normalization/deflation, e.g. capital:
> costCDHomCap <- lm( log( cost / pCap ) ~ log( pLab / pCap ) + log( pMat / pCap ) +
+
1Q
Median
3Q
Max
0.24470
0.74688
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)
6.75288
0.40522
16.665
log(pLab/pCap)
0.44642
0.07949
log(pMat/pCap)
0.48117
0.07285
log(qOut)
0.37415
0.03021
12.384
--Signif. codes:
0.8168,
Adjusted R-squared:
0.8128
The results are identical to the results from the Cobb-Douglas cost function with the price of
materials used for the normalization/deflation. The coefficient of the (logarithmic) capital price
can be obtained by the homogeneity condition (3.15). Hence, the estimate of Cap is 0.0724 in
our model with the capital price as numeraire, which is identical to the corresponding estimate
from the model with the price of materials as numeraire. Both models have identical residuals:
131
+ log(pLab)
+ log(pMat) = 1
RSS Df Sum of Sq
136 15.563
135 15.560
F Pr(>F)
LogLik Df
5 -44.878
6 -44.867
Chisq Pr(>Chisq)
1 0.0232
0.879
These tests clearly show that the data do not contradict linear homogeneity in input prices.
132
(3.20)
Now, we can calculate the second derivatives as derivatives of the first derivatives (3.20):
i wci
c
2c
= wi =
wi wj
wj
wj
c
i c
=
ij i 2
wi wj
wi
i
c
c
=
j
ij i 2
wi wj
wi
c
,
= i (j ij )
wi wj
(3.21)
(3.22)
(3.23)
(3.24)
where ij (again) denotes Kroneckers delta (2.66). Alternative, the second derivatives of the
Cobb-Douglas cost function with respect to the input prices can be written as:
2c
fi
fi fj
ij ,
=
wi wj
c
wi
(3.25)
Please note that the selection of c has no effect on the test for concavity, because all elements of the Hessian
matrix include c as a multiplicative term and c is always positive so that the value of c does not change the
sign of the principal minors and the determinant, as |c M | = c |M |, where M denotes a quadratic matrix, c
denotes a scalar, and the two vertical bars denote the determinant function.
133
[,2]
[,3]
[1,] -5031.9274
7323.804
775.3358
[2,]
[3,]
14046.155 -1570.0447
As all diagonal elements of this Hessian matrix are negative, the necessary conditions for negative semidefiniteness are fulfilled. Now, we calculate the principal minors in order to check the
sufficient conditions for negative semidefiniteness:
> hessian[1,1]
[1] -5031.927
> det( hessian[1:2,1:2] )
[1] 714919939
> det( hessian )
[1] 121651514835
While the conditions for the first two principal minors are fulfilled, the third principal minor is
positive, while negative semidefiniteness requires a non-positive third principal minor. Hence, this
Hessian matrix is not negative semidefinite and consequently, the Cobb-Douglas cost function is
not concave at the first observation.3
3
Please note that this Hessian matrix is not positive semidefinite either, because the first principal minor is
negative. Hence, the Cobb-Douglas cost function is neither concave nor convex at the first observation.
134
+ }
> sum( dat$concaveCD )
[1] 0
This shows that our Cobb-Douglas cost function without linear homogeneity imposed is concave
in input prices not at a single observation.
Now, we will check, whether our Cobb-Douglas cost function with linear homogeneity imposed
is concave in input prices. Again, we obtain the predicted total costs:
> dat$costCDHom <- exp( fitted( costCDHom ) ) * dat$pMat
We create short-cuts for the estimated coefficients:
> chCap <- coef( costCDHom )[ "log(pCap/pMat)" ]
> chLab <- coef( costCDHom )[ "log(pLab/pMat)" ]
> chMat <- 1 - chCap - chLab
We compute the second derivatives:
> hhCapCap <- chCap * ( chCap - 1 ) * dat$costCDHom / dat$pCap^2
> hhLabLab <- chLab * ( chLab - 1 ) * dat$costCDHom / dat$pLab^2
135
( dat$pCap * dat$pLab )
( dat$pCap * dat$pMat )
( dat$pLab * dat$pMat )
[,2]
[,3]
[1,] -4901.0204
6835.826
745.4417
[2,]
[3,]
13318.172 -1566.0312
As all diagonal elements of this Hessian matrix are negative, the necessary conditions for negative semidefiniteness are fulfilled. Now, we calculate the principal minors in order to check the
sufficient conditions for negative semidefiniteness:
> hessianHom[1,1]
[1] -4901.02
> det( hessianHom[1:2,1:2] )
[1] 695515989
> det( hessianHom )
[1] 0.0006325681
The conditions for the first two principal minors are fulfilled and the third principal minor is close
to zero, where it is negative on some computers but positive on other computers. As Hessian
matrices of linear homogeneous functions are always singular, it is expected that the determinant
136
dat$concaveCDHom[obs] <-
+ }
> sum( !dat$concaveCDHom )
[1] 0
This result indicates that the concavity condition is violated not at a single observation. Consequently, our Cobb-Douglas cost function with linear homogeneity imposed is concave in input
prices at all observations.
In fact, all Cobb-Douglas cost functions that are non-decreasing and linearly homogeneous in
all input prices are always concave (e.g. Coelli, 1995, p. 266).4
ln c(w, y)
c(w, y) wi
wi
wi xi (w, y)
=
= xi (w, y)
=
= si (w, y),
ln wi
wi c(w, y)
c(w, y)
c(w, y)
(3.26)
137
0.0
0.1
0.2
0.3
0.4
30
25
15
0
10
Frequency
20
15
0
10
Frequency
20
25
35
30
25
20
15
10
Frequency
30
0.3
0.4
0.5
0.6
0.7
0.8
0.1
0.2
0.3
0.4
0.5
0.6
c(w, y)
c(w, y)
= i
wi
wi
(3.27)
138
(3.28)
t c(w, y)
c(w, y)
c(t w, y)
= i
= i
= xi (w, y)
t wi
t wi
wi
(3.29)
Furthermore, input demand functions should be symmetric with respect to input prices:
xi (t w, y)
xj (t w, y)
=
wj
wi
(3.30)
This condition is fulfilled for the input demand functions derived from any Cobb-Douglas cost
function:
xi (w, y)
i c(w, y)
i
c(w, y)
i j
=
=
j
=
c(w, y) i 6= j
wj
wi wj
wi
wj
wi wj
xj (w, y)
j c(w, y)
j
c(w, y)
i j
=
=
i
=
c(w, y) i 6= j
wi
wj
wi
wj
wi
wi wj
(3.31)
(3.32)
(3.33)
This condition is fulfilled for the input demand functions derived from any linearly homogeneous
Cobb-Douglas cost function that is monotonically increasing in all input prices (as this implies
0 i 1):
i c(w, y)
c(w, y)
xi (w, y)
=
i
wi
wi wi
wi2
i
c(w, y)
c(w, y)
=
i
i
wi
wi
wi2
c(w, y)
= i
(i 1) 0
wi2
(3.34)
(3.35)
(3.36)
We can calculate the cost-minimizing input quantities that are predicted by a Cobb-Douglas
cost function by using equation (3.27). The following commands compare the observed input
quantities with the cost-minimizing input quantities that are predicted by our Cobb-Douglas
cost function with linear homogeneity imposed:
> compPlot( chCap * dat$costCDHom / dat$pCap, dat$qCap )
> compPlot( chLab * dat$costCDHom / dat$pLab, dat$qLab )
> compPlot( chMat * dat$costCDHom / dat$pMat, dat$qMat )
> compPlot( chCap * dat$costCDHom / dat$pCap, dat$qCap, log = "xy" )
> compPlot( chLab * dat$costCDHom / dat$pLab, dat$qLab, log = "xy" )
> compPlot( chMat * dat$costCDHom / dat$pMat, dat$qMat, log = "xy" )
139
1200000
20000
4e+05
400000
qMat optimal
qCap optimal
5e+05
5e+04
5e+05
5e+03
5e+04
2e+05
qLab observed
1e+05
2e+04
1e+05
100000
2e+04
60000
5e+03
20000
qLab optimal
1200000
2e+04
5e+05
800000
qMat observed
2e+05
qCap optimal
5e+03
qMat observed
800000
100000
400000
qLab observed
0e+00
qCap observed
60000
4e+05
2e+05
0e+00
qCap observed
5e+04
2e+05
5e+05
5e+03
qLab optimal
140
2e+04
5e+04
qMat optimal
ij (w, y) =
c(w, y)
wj
c(w, y)
i
j
ij i
wi
wj xi (w, y)
wi xi (w, y)
c(w, y)
i
= i j
ij
wi xi (w, y)
si (w, y)
i j
i
=
ij
si (w, y)
si (w, y)
=
= j ij
(3.37)
(3.38)
(3.39)
(3.40)
(3.41)
(3.42)
y
xi (w, y)
y
xi (w, y)
c(w, y) i
y
=
y wi xi (w, y)
c(w, y) i
y
= y
y wi xi (w, y)
y
c(w, y)
= i y
y wi xi (w, y)
c(w, y)
= i y
wi xi (w, y)
i
= y
si (w, y)
iy (w, y) =
= y
(3.43)
(3.44)
(3.45)
(3.46)
(3.47)
(3.48)
(3.49)
All derived input demand elasticities based on our estimated Cobb-Douglas cost function with
linear homogeneity imposed are presented in table 3.1. If the price of capital increases by one
percent, the cost-minimizing firm will decrease the use of capital by 0.93% and increase the
use of labor and materials by 0.07% each. If the price of labor increases by one percent, the
cost-minimizing firm will decrease the use of labor by 0.55% and increase the use of capital and
materials by 0.45% each. If the price of materials increases by one percent, the cost-minimizing
firm will decrease the use of materials by 0.52% and increase the use of capital and labor by
0.48% each. If the cost-minimizing firm increases the output quantity by one percent, (s)he will
141
ij = 0 i
(3.50)
The input demand elasticities derived from any linearly homogeneous Cobb-Douglas cost function
fulfill the homogeneity condition:
X
ij (w, y) =
X
j
(j ij ) =
ij = 1 1 = 0 i
(3.51)
As we computed the elasticities in table 3.1 based on the Cobb-Douglas function with linear
homogeneity imposed, these conditions are fulfilled for these elasticities.
It follows from the necessary conditions for the concavity of the cost function that all own-price
elasticities are non-positive:
ii 0 i
(3.52)
The input demand elasticities derived from any linearly homogeneous Cobb-Douglas cost function
that is monotonically increasing in all input prices fulfill the negativity condition, because linear
P
homogeneity (
i i
142
(3.53)
(3.54)
(3.55)
Hence, the symmetry condition is also fulfilled for the elasticities in table 3.1, e.g. scap cap,lab =
cap cap,lab = 0.07 0.45 is equal to slab lab,cap = lab lab,cap = 0.45 0.07.
(3.56)
(3.57)
This condition is fulfilled for the marginal costs derived from a linearly homogeneous CobbDouglas cost function:
c(t w, y)
c(t w, y)
t c(w, y)
c(w, y)
c(w, y)
= y
= y
= t y
=t
y
y
y
y
y
We can compute the marginal costs by following command:
> chOut <- coef( costCDHom )[ "log(qOut)" ]
> dat$margCost <- chOut * dat$costCDHom / dat$qOut
We can visualize these marginal costs with a histogram.
143
(3.58)
0 5
15
Frequency
25
0.0
0.1
0.2
0.3
0.4
0.5
margCost
0.0
0.5
1.0
1.5
2.0
2.5
3.0
0.50
0.10
0.02
margCost
2.00
margCost
0.02
0.10
pOut
0.50
2.00
pOut
144
0.50
1.0e+07
0.20
0.05
0.10
0.0e+00
margCost
0.3
0.2
0.1
margCost
0.4
0.5
2.0e+07
1e+05
5e+05
qOut
5e+06
qOut
Figure 3.5: Marginal costs depending on output quantity and firm size
The resulting graphs are shown in figure 3.5. Due to the large economies of size, the marginal
costs are decreasing with the output quantity.
The relation between output quantity and marginal costs in a Cobb-Douglas cost function can
be analyzed by taking the first derivative of the marginal costs (3.56) with respect to the output
quantity:
y c(w,y)
M C
y
=
y
y
y c(w, y)
c(w, y)
=
y
y
y
y2
c(w, y)
y c(w, y)
y
y
=
y
y
y2
c
= y 2 (y 1)
y
(3.59)
(3.60)
(3.61)
(3.62)
As y , c, and y 2 should always be positive, the marginal costs are (globally) increasing in the
output quantity, if there are decreasing returns to size (i.e. y > 1) and the marginal costs are
(globally) decreasing in the output quantity, if there are increasing returns to size (i.e. y < 1).
Now, we illustrate our estimated model by drawing the total cost curve for output quantities
between 0 and the maximum output level in the sample, where we use the sample means of the
input prices. Furthermore, we draw the average cost curve and the marginal cost curve for the
above-mentioned output quantities and input prices:
> y <- seq( 0, max( dat$qOut ), length.out = 200 )
> chInt <- coef(costCDHom)[ "(Intercept)" ]
> costs <- exp( chInt + chCap * log( mean( dat$pCap ) ) +
+
chOut * log( y ) )
145
0.0e+00
1.2
0.8
0.4
average costs
marginal costs
0.0
400000 800000
total costs
1.0e+07
2.0e+07
0.0e+00
1.0e+07
2.0e+07
cv = A
wii
iN 1
xj j y y ,
(3.63)
jN 2
where cv denotes the variable costs as defined in (1.3), N 1 is a vector of the indices of the variable
inputs, and N 2 is a vector of the indices of the quasi-fixed inputs. The Cobb-Douglas short-run
146
i ln wi +
iN 1
j ln xj + y ln y
(3.64)
jN 2
with 0 = ln A.
3.3.2 Estimation
The following commands estimate a Cobb-Douglas short-run cost function with capital as a
quasi-fixed input and summarize the results:
> costCDSR <- lm( log( vCost ) ~ log( pLab ) + log( pMat ) + log( qCap ) + log( qOut ),
+
data = dat )
1Q
Median
3Q
Max
0.20729
0.71633
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)
5.66013
0.42523
13.311
log(pLab)
0.45683
0.13819
3.306
0.00121 **
log(pMat)
0.44144
0.07715
log(qCap)
0.19174
0.04034
log(qOut)
0.29127
0.03318
--Signif. codes:
0.7265,
Adjusted R-squared:
0.7183
3.3.3 Properties
This short-run cost function is (significantly) increasing in the prices of the variable inputs (labor
and materials) as the coefficient of the labor price (0.457) and the coefficient of the materials
147
X
X
c
wi
= 0 +
i ln
+
j ln xj + y ln y
wk
wk
1
2
iN \k
(3.65)
jN
with k N 1 . We can estimate a Cobb-Douglas short-run cost function with capital as a quasifixed input and linear homogeneity in input prices imposed by the command:
> costCDSRHom <- lm( log( vCost / pMat ) ~ log( pLab / pMat ) +
+
1Q
Median
3Q
Max
0.19533
0.71792
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)
5.67882
0.42335
13.414
log(pLab/pMat)
0.53487
0.06781
log(qCap)
0.18774
0.03978
log(qOut)
0.29010
0.03306
--Signif. codes:
148
0.5963,
Adjusted R-squared:
0.5874
We can obtain the coefficient of the materials price from the homogeneity condition (3.15): 1
0.535 = 0.465. We can test the homogeneity restriction by a likelihood ratio test:
> lrtest( costCDSRHom, costCDSR )
Likelihood ratio test
Model 1: log(vCost/pMat) ~ log(pLab/pMat) + log(qCap) + log(qOut)
Model 2: log(vCost) ~ log(pLab) + log(pMat) + log(qCap) + log(qOut)
#Df
LogLik Df
5 -36.055
6 -35.838
Chisq Pr(>Chisq)
1 0.4356
0.5093
Given the large P -value, we can conclude that the data do not contradict the linear homogeneity
in the prices of the variable inputs.
While the linear homogeneity in the prices of all variable inputs is accepted and the short-run
cost function is still increasing in the output quantity and the prices of all variable inputs, the
estimated short-run cost function is still increasing in the capital quantity, which contradicts
microeconomic theory. Therefore, a further microeconomic analysis with this function is not
reasonable.
1
2
N
X
i ln wi
i=1
N X
N
X
+ y ln y
1
ij ln wi ln wj + yy (ln y)2
2
i=1 j=1
N
X
iy ln wi ln y
i=1
with ij = ji i, j.
149
(3.66)
3.4.2 Estimation
The Translog cost function can be estimated by following command:
> costTL <- lm( log( cost ) ~ log( pCap ) + log( pLab ) + log( pMat ) +
+
log( qOut ) + I( 0.5 * log( pCap )^2 ) + I( 0.5 * log( pLab )^2 ) +
data = dat )
1Q
Median
3Q
Max
-0.73251 -0.18718
0.02001
0.15447
0.82858
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)
25.383429
3.511353
log(pCap)
0.198813
0.537885
0.370 0.712291
log(pLab)
-0.024792
2.232126
-0.011 0.991156
log(pMat)
-1.244914
1.201129
-1.036 0.301992
log(qOut)
-2.040079
0.510905
I(0.5 * log(pCap)^2)
-0.095173
0.105158
-0.905 0.367182
I(0.5 * log(pLab)^2)
-0.503168
0.943390
-0.533 0.594730
I(0.5 * log(pMat)^2)
0.529021
0.337680
1.567 0.119728
0.244445
I(log(pCap) * log(pMat))
0.182268
0.130463
1.397 0.164865
I(log(pLab) * log(pMat))
0.139429
0.433408
0.322 0.748215
I(0.5 * log(qOut)^2)
0.164075
0.041078
0.042844
-0.656 0.513259
I(log(pLab) * log(qOut))
0.171134
0.044 0.964959
0.007533
150
-3.053 0.002772 **
0.048794
0.092266
0.529 0.597849
--Signif. codes:
0.7682,
Adjusted R-squared:
0.7423
As the Cobb-Douglas cost function is nested in the Translog cost function, we can use a
statistical test to check whether the Cobb-Douglas cost function fits the data as good as the
Translog cost function:
> lrtest( costCD, costTL )
Likelihood ratio test
Model 1: log(cost) ~ log(pCap) + log(pLab) + log(pMat) + log(qOut)
Model 2: log(cost) ~ log(pCap) + log(pLab) + log(pMat) + log(qOut) + I(0.5 *
log(pCap)^2) + I(0.5 * log(pLab)^2) + I(0.5 * log(pMat)^2) +
I(log(pCap) * log(pLab)) + I(log(pCap) * log(pMat)) + I(log(pLab) *
log(pMat)) + I(0.5 * log(qOut)^2) + I(log(pCap) * log(qOut)) +
I(log(pLab) * log(qOut)) + I(log(pMat) * log(qOut))
#Df
1
2
LogLik Df
Chisq Pr(>Chisq)
6 -44.867
16 -24.149 10 41.435
9.448e-06 ***
--Signif. codes:
Given the very small P -value, we can conclude that the Cobb-Douglas cost function is not suitable
for analyzing the production technology in our data set.
1
2
N
X
i ln(t
i=1
N X
N
X
(3.67)
wi ) + y ln y
1
ij ln(t wi ) ln(t wj ) + yy (ln y)2
2
i=1 j=1
N
X
iy ln(t wi ) ln y
i=1
151
(3.68)
= 0 +
N
X
i ln(t) +
i=1
N
X
i ln(wi ) + y ln y
(3.69)
i=1
N X
N
1X
N X
N
1X
+
ij ln(t) ln(t) +
ij ln(t) ln(wj )
2 i=1 j=1
2 i=1 j=1
N X
N X
N
N
1X
1X
ij ln(wi ) ln(t) +
ij ln(wi ) ln(wj )
2 i=1 j=1
2 i=1 j=1
N
N
X
X
1
iy ln(t) ln y +
iy ln(wi ) ln y
+ yy (ln y)2 +
2
i=1
i=1
= 0 + ln(t)
N
X
i=1
i +
N
X
i ln(wi ) + y ln y
(3.70)
i=1
N X
N
X
N
N
X
X
1
1
ln(wj )
ij
+ ln(t) ln(t)
ij + ln(t)
2
2
i=1 j=1
j=1
i=1
N
N
N X
N
X
X
1
1X
ln(t)
ln(wi )
ij +
ij ln(wi ) ln(wj )
2
2 i=1 j=1
i=1
j=1
N
N
X
X
1
2
iy ln(wi ) ln y
iy +
+ yy (ln y) + ln(t) ln y
2
i=1
i=1
= ln c(w, y) + ln(t)
N
X
(3.71)
i=1
N
N X
X
N
N
X
X
1
1
+ ln(t) ln(t)
ij + ln(t)
ij
ln(wj )
2
2
i=1
i=1 j=1
j=1
N
N
N
X
X
X
1
ln(t)
ln(wi )
ij + ln(t) ln y
iy
2
i=1
j=1
i=1
N X
N
N
N
X
X
X
1
1
ij + ln(t)
ln(wj )
ij
ln t = ln(t)
i + ln(t) ln(t)
2
2
i=1 j=1
j=1
i=1
i=1
N
X
N
N
N
X
X
X
1
iy
ln(t)
ln(wi )
ij + ln(t) ln y
2
i=1
i=1
j=1
N
X
N X
N
N
N
X
X
1
1X
i + ln(t)
1=
ij +
ij
ln(wj )
2
2 j=1
i=1
i=1
i=1 j=1
(3.72)
(3.73)
N
N
N
X
X
1X
ln(wi )
ij + ln y
iy
2 i=1
j=1
i=1
Hence, the homogeneity condition is only globally fulfilled (i.e. no matter which values t, w, and
y have) if the following parameter restrictions hold:
N
X
i = 1
(3.74)
i=1
152
ij =ji
ij = 0 j
i=1
N
X
N
X
ij = 0 i
(3.75)
j=1
iy = 0
(3.76)
i=1
We can see from the estimates above that these conditions are not fulfilled in our Translog cost
function. For instance, according to condition (3.74), the first-order coefficients of the input
prices should sum up to one but our estimates sum up to 0.199 + (0.025) + (1.245) = 1.071.
Hence, the homogeneity condition is not fulfilled in our estimated Translog cost function.
N
1
X
i=1
N
1
X
i=1
N
1
X
(3.77)
ij j
(3.78)
ij i
(3.79)
iy
(3.80)
j=1
N y =
N
1
X
i=1
Replacing N , N y and all iN and jN in equation (3.66) by the right-hand sides of equations (3.77) to (3.80) and re-arranging, we get
ln
N
1
X
c(w, y)
wi
= 0 +
i ln
+ y ln y
wN
w
N
i=1
(3.81)
1
1 N
X
wi
1 NX
wj
1
+
ij ln
ln
+ yy (ln y)2
2 j=1 i=1
wN
wN
2
N
1
X
i=1
iy ln
wi
ln y.
wN
This Translog cost function with linear homogeneity imposed can be estimated by following
command:
> costTLHom <- lm( log( cost / pMat ) ~ log( pCap / pMat ) +
+
I( 0.5 * log( pCap / pMat )^2 ) + I( 0.5 * log( pLab / pMat )^2 ) +
153
data = dat )
1Q
Median
3Q
Max
-0.6860 -0.2086
0.0192
0.1978
0.8281
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)
23.714976
3.445289
log(pCap/pMat)
0.306159
0.525789
0.582 0.561383
log(pLab/pMat)
1.093860
1.169160
0.936 0.351216
-1.933605
0.501090
I(0.5 * log(pCap/pMat)^2)
0.025951
0.089977
0.288 0.773486
I(0.5 * log(pLab/pMat)^2)
0.716467
0.338049
2.119 0.035957 *
0.142710
-2.052 0.042144 *
I(0.5 * log(qOut)^2)
0.158662
0.039866
I(log(pCap/pMat) * log(qOut))
-0.048274
0.040025
-1.206 0.229964
I(log(pLab/pMat) * log(qOut))
0.008363
0.096490
0.087 0.931067
log(qOut)
--Signif. codes:
0.6377,
Adjusted R-squared:
0.6126
We can use a likelihood ratio test to compare this function with the unconstrained Translog cost
function (3.66):
> lrtest( costTL, costTLHom )
Likelihood ratio test
154
LogLik Df
Chisq Pr(>Chisq)
16 -24.149
11 -29.014 -5 9.7309
0.08323 .
--Signif. codes:
The null hypothesis, linear homogeneity in input prices, is rejected at the 10% significance level
but not at the 5% level. Given the importance of microeconomic consistency and that 5% is the
standard significance level, we continue our analysis with the Translog cost function with linear
homogeneity in input prices imposed.
Furthermore, we can use a likelihood ratio test to compare this function with the Cobb-Douglas
cost function with homogeneity imposed (3.19):
> lrtest( costCDHom, costTLHom )
Likelihood ratio test
Model 1: log(cost/pMat) ~ log(pCap/pMat) + log(pLab/pMat) + log(qOut)
Model 2: log(cost/pMat) ~ log(pCap/pMat) + log(pLab/pMat) + log(qOut) +
I(0.5 * log(pCap/pMat)^2) + I(0.5 * log(pLab/pMat)^2) + I(log(pCap/pMat) *
log(pLab/pMat)) + I(0.5 * log(qOut)^2) + I(log(pCap/pMat) *
log(qOut)) + I(log(pLab/pMat) * log(qOut))
#Df
LogLik Df
5 -44.878
11 -29.014
Chisq Pr(>Chisq)
6 31.727
1.84e-05 ***
--Signif. codes:
Again, the Cobb-Douglas functional form is clearly rejected by the data in favor of the Translog
functional form.
Some parameters of the Translog cost function with linear homogeneity imposed (3.81) have
not been directly estimated (N , N y , all iN , all jN ) but they can be retrieved from the
155
0.3061589
> # alpha_ij
> matrix( c( ch11, ch12, ch13, ch21, ch22, ch23, ch31, ch32, ch33 ), ncol=3 )
[,1]
[1,]
[,2]
[,3]
0.02595083 -0.2928892
0.2669384
[2,] -0.29288920
[3,]
0.7164670 -0.4235778
0.26693837 -0.4235778
0.1566394
0.008362717
0.039911768
156
0.158661757
(3.82)
ln c(w, y)
ln y
1
(3.83)
We can calculate the cost flexibilities and the elasticities of size with following commands:
> dat$costFlex <- with( dat, chy + ch1y * log( pCap ) +
+
0.2
0.4
0.6
cost flexibility
0.8
50
40
30
10
0
20
0
5
0
0.0
20
Frequency
80
60
40
Frequency
20
15
10
Frequency
25
30
120
60
35
20
20
40
60
elasticity of size
80
100
10
elasticity of size
Figure 3.7: Translog cost function: cost flexibility and elasticity of size
The resulting graphs are presented in figure 3.7. Only 1 out of 140 cost flexibilities is negative.
Hence, the estimated Translog cost function is to a very large extent increasing in the output
quantity. All cost flexibilities are lower than one, which indicates that all apple producers operate
under increasing returns to size. Most cost flexibilities are around 0.5, which corresponds to an
elasticity of size of 2. Hence, if the apple producers increase their output quantity by one percent,
the total costs of most producers increases by around 0.5 percent. Orthe other way roundif
157
10
0.0e+00
1.0e+07
2.0e+07
dat$elaSize
dat$elaSize
60
40
20
20
dat$elaSize
10
80
100
> abline( 1, 0 )
0.0e+00
1.0e+07
qOut
2.0e+07
1e+05
qOut
5e+05
5e+06
qOut
Figure 3.8: Translog cost function: output quantity and elasticity of size
The resulting graphs are shown in figure 3.8. With increasing output quantity, the elasticity of
size approaches one (from above). Hence, small apple producers could gain a lot from increasing
their size, while large apple producers would gain much less from increasing their size. However,
even the largest producers still gain from increasing their size so that the optimal firm size is
larger than the largest firm in the sample.
y +
N
X
iy ln wi + yy ln y
i=1
c(w, y)
.
y
(3.84)
Hence, they areas alwaysequal to the cost flexibility multiplied by total costs and divided
by the output quantity. We can compute the total costs that are predicted by our estimated
Translog cost function by following command:
> dat$costTLHom <- exp( fitted( costTLHom ) ) * dat$pMat
158
40
20
0
Frequency
0.15
0.05
0.05
0.15
margCostTL
0.50
2.00
0.10
0.02
margCostTL
2.0
1.0
0.0
margCostTL
3.0
0.02
0.10
pOut
0.50
2.00
pOut
Figure 3.10: Translog cost function: marginal costs and output prices
The resulting graphs are shown in figure 3.10. The marginal costs of all firms are considerably
smaller than their output prices. Hence, all firms would gain from increasing their output level.
This is not surprising for a technology with large economies of scale.
159
1.0e+07
0.15
0.05
0.15
0.0e+00
0.05
margCost
0.05
0.05
0.15
margCost
0.15
2.0e+07
1e+05
qOut
5e+05
5e+06
qOut
Figure 3.11: Translog cost function: Marginal costs depending on output quantity
The resulting graphs are shown in figure 3.11. There is no clear relationship between marginal
costs and the output quantity.
Now, we illustrate our estimated model by drawing the average cost curve and the marginal
cost curve for output quantities between 0 and five times the maximum output level in the sample,
where we use the sample means of the input prices.
> y <- seq( 0, 5 * max( dat$qOut ), length.out = 200 )
> lpCap <- log( mean( dat$pCap ) )
> lpLab <- log( mean( dat$pLab ) )
> lpMat <- log( mean( dat$pMat ) )
> totalCost <- exp( ch0 + ch1 * lpCap + ch2 * lpLab + ch3 * lpMat +
+
160
0.0e+00
4.0e+07
8.0e+07
1.2e+08
0.100
0.080
0.090
average costs
marginal costs
0.070
0.1
0.2
0.3
0.4
average costs
marginal costs
0.5
0.0e+00
4.0e+07
8.0e+07
1.2e+08
161
xi (w, y) =
= i +
N
X
(3.85)
(3.86)
ij ln wj + iy ln y
j=1
c
wi
(3.87)
And we can re-arrange these derived input demand functions in order to obtain the cost-minimizing
cost shares:
si (w, y)
N
X
wi xi (w, y)
= i +
ij ln wj + iy ln y
c
j=1
(3.88)
We can calculate the cost-minimizing cost shares based on our estimated Translog cost function
by following commands:
> dat$shCap <- with( dat, ch1 + ch11 * log( pCap ) +
+
162
25
20
15
10
Frequency
0.1 0.0
0.1
0.2
0.3
0.4
10
20
Frequency
15
10
Frequency
20
30
30
25
0.0
0.5
shCap
1.0
0.2
0.2
shLab
0.4
0.6
0.8
1.0
shMat
1.0
shMat
0.2
0.0
0.6
1.0
observed
0.5
observed
shLab
0.2
0.1
observed
shCap
0.1 0.0
0.1
0.2
0.3
0.4
0.0
optimal
0.5
1.0
optimal
0.2 0.0
0.2
0.4
0.6
0.8
1.0
optimal
Figure 3.14: Translog cost function: observed and cost-minimizing cost shares
The resulting graphs are shown in figure 3.14. Most firms use less than optimal materials, while
there is a tendency to use more than optimal capital and a very slight tendency to use more than
optimal labor.
Similarly, we can compare the observed input quantities with the cost-minimizing input quantities:
> compPlot( dat$shCap * dat$costTLHom / dat$pCap,
+
dat$vCap / dat$pCap )
163
dat$vLab / dat$pLab )
qCap
qLab
qMat
1e+05
1e+05
3e+05
0e+00
5e+05
200000
100000
0e+00
5e+05
optimal
1e+06
optimal
5e+05
1e+06
observed
2e+05 4e+05
1e+05
observed
observed
50000
150000
250000
optimal
Figure 3.15: Translog cost function: observed and cost-minimizing input quantities
The resulting graphs are shown in figure 3.15. Of course, the conclusions derived from these
graphs are the same as conclusions derived from figure 3.14.
xi (w, y) wj
wj xi (w, y)
i +
(3.89)
PN
k=1 ik
ln wk + iy ln y
c
wi
wj
xi
wj
"
N
X
ij c
+ i +
ik ln wk + iy ln y
wj wi
k=1
ij i +
N
X
ik ln wk + iy ln y
k=1
"
xj
wi
(3.90)
(3.91)
c wj
wi2 xi
ij c
x i wi x j
x i wi c wj
=
+
ij
wi wj
c wi
c wi2 xi
ij c
wj x j
wj
+
ij
=
wi x i
c
wi
ij
=
+ sj ij ,
si
164
(3.92)
(3.93)
(3.94)
y
xi (w, y)
y
xi (w, y)
i +
(3.95)
PN
k=1 ik
ln wk + iy ln y
c
wi
y
"
N
X
iy c
+ i +
ik ln wk + iy ln y
y wi
k=1
iy c wi xi c 1 y
=
+
wi y
c y wi xi
iy c
c y
=
+
wi xi y c
iy
ln c
=
+
,
si
ln y
y
xi
!
(3.96)
#
c 1 y
y wi xi
(3.97)
(3.98)
(3.99)
(3.100)
[,2]
[,3]
[,4]
[2,]
[3,]
0.5938258
These demand elasticities indicate that when the capital price increases by one percent, the
demand for capital decreases by 0.638 percent, the demand for labor increases by 4.448 percent,
165
shLab
shMat
166
N
X
N X
N
N
X
1X
1
lim 0 +
i ln wi + y ln y +
ij ln wi ln wj + yy (ln y)2 +
iy ln wi ln y
y0+
2
2
i=1
i=1 j=1
i=1
(3.101)
= lim
y0+
N
X
1
y ln y + yy (ln y)2 +
iy ln wi ln y
2
i=1
167
(3.102)
15
20
100
200
20
10
80
60
80
100
70
10 0
80
Frequency
20
0
8
0.5
0.5
E mat lab
1.5
2.5
E mat mat
30
20
10
30
10
0
0
40
20
Frequency
10
20
Frequency
30
20
10
0
Frequency
30
40
40
40
50
E mat cap
50
30
100
100
60
20
0
4
50
E lab mat
40
Frequency
3
250
20
40
80
80
60
40
Frequency
20
200
0
20
E lab lab
150
60
Frequency
0
E lab cap
100
100
100
80
60
20
0
30
50
E cap mat
40
Frequency
80
60
40
0
20
Frequency
100
E cap lab
100
E cap cap
40
60
20
0
300
40
10
60
40
40
Frequency
80
80
60
0
20
40
Frequency
60
40
20
Frequency
80
100
0.0
0.5
E cap y
1.0
1.5
0.0
0.4
E lab y
168
0.8
E mat y
1.2
= lim
y0+
N
X
1
y + yy ln y +
iy ln wi
2
i=1
= lim
y0+
(3.103)
lim ln y
(3.104)
y0+
(3.105)
y0+
Hence, if coefficientt yy is negativ and the output quantity approaches zero (from above), the
predicted cost (exponential function of the right-hand side of equation 3.66) approaches zero so
that the no fixed costs property is asymptotically fulfilled.
Our estimated Translog cost function with linear homogeneity in input prices imposed (of
course) is linearly homogeneous in input prices. Hence, the linear homogeneity property is globally
fulfilled.
A cost function is non-decreasing in the output quantity if the cost flexibility and the elasticity
of size are non-negative. As we can see from figure 3.7, only a single cost flexibility and thus,
only a single elasticity of size is negative. Hence, our estimated Translog cost function with linear
homogeneity in input prices imposed violates the monotonicity condition regarding the output
quantity only at a single observation.
Given Shepards lemma, a cost function is non-decreasing in input prices if the derived costminimizing input quantities and the corresponding cost shares are non-negative. As we can see
from figure 3.13, our estimated Translog cost function with linear homogeneity in input prices
imposed predicts that 24 cost shares of capital, 10 cost shares of labor, and 3 cost shares of
materials are negative. In total, the monotonicity condition regarding the input prices is violated
at 36 observations:
> sum( dat$shCap < 0 | dat$shLab < 0 | dat$shMat < 0 )
[1] 36
Concavity in input prices of the cost function requires that the Hessian matrix of the cost
function with respect to the input prices is negative semidefinite. The elements of the Hessian
matrix are:
Hij =
2 c(w, y)
xi (w, y)
=
wi wj
wj
c
wi
N
X
ij c
=
+ i +
ik ln wk + iy ln y
wj wi
k=1
(3.106)
i +
PN
k=1 ik
ln wk + iy ln y
(3.107)
wj
N
X
xj
ij i +
ik ln wk + iy ln y
wi
k=1
c
wi2
(3.108)
ij c
x i wi x j
x i wi c
=
+
ij
wi wj
c wi
c wi2
(3.109)
169
ij c
xi xj
xi
+
ij ,
wi wj
c
wi
(3.110)
where ij (again) denotes Kroneckers delta (2.66). As the elements of the Hessian matrix have
the same sign as the corresponding elasticities (Hij = ij (w, y) xi /wj ), the positive own-price
elasticities of labor in figure 3.16 indicate that the element Hlab,lab is positive at all observations, where the monotonicity conditions regarding the input prices are fulfilled. As negative
semidefiniteness requires that all diagonal elements of the (Hessian) matrix are negative, we can
conclude that the estimated Translog cost function is concave at not a single observation where
the monotonicity conditions regarding the input prices are fulfilled.
This means that our estimated Translog cost function is inconsistent with microeconomic theory
at all observations.
170
wi xi , s.t. y = f (x)
(4.1)
returns the maximum profit that is attainable given the output price p and input prices w.
It is important to distinguish the profit definition (1.4) from the profit function (4.1).
(4.2)
where w1 denotes the vector of the prices of all variable inputs, x2 denotes the vector of the
quantities of all quasi-fixed inputs, cs (w1 , y, x2 ) is the short-run cost function (see section 3.3),
v denotes the gross margin defined in equation (1.5), and N 1 is a vector of the indices of the
variable inputs.
171
1e+07
2e+04
5e+05
profit
60
40
20
Frequency
80
0e+00
2e+07
4e+07
6e+07
0.5
profit
1.0
2.0
5.0
172
0e+00
2e+07
4e+07
6e+07
0.5
1.0
gross margin
2.0
5.0
gross margin
gross margin
60
40
0
20
Frequency
80
5e+03
2e+04
1e+05
5e+05
qCap
wj x j ,
(4.3)
jN 2
where N 2 is a vector of the indices of the quasi-fixed inputs. However, in the long-run, profit
must be non-negative:
(p, w) = max s (p, w, x2 ) 0,
x2
(4.4)
= Ap
Y
w i
i
(4.5)
i ln wi
(4.6)
with 0 = ln A.
Please note that the Cobb-Douglas profit function is used as a simple example here but that it is much too
restrictive for most real empirical applications (Chand and Kaul, 1986).
173
4.3.2 Estimation
The linearized Cobb-Douglas profit function can be estimated by OLS. As the logarithm of a
negative number is not defined and function lm automatically removes observations with missing
data, we do not have to remove the observations (apple producers) with negative profits manually.
> profitCD <- lm( log( profit ) ~ log( pOut ) + log( pCap ) + log( pLab ) +
+
1Q
Median
3Q
Max
-3.6183 -0.2778
0.1261
0.5986
2.0442
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)
13.9380
0.4921
28.321
log(pOut)
2.7117
0.2340
11.590
log(pCap)
-0.7298
0.1752
log(pLab)
-0.1940
0.4623
-0.420
0.676
log(pMat)
0.1612
0.2543
0.634
0.527
--Signif. codes:
0.5911,
Adjusted R-squared:
0.5776
As expected, lm reports that 14 observations have been removed due to missing data (logarithms
of negative numbers).
4.3.3 Properties
A Cobb-Douglas profit function is always continuous and twice continuously differentiable for all
p > 0 and wi > 0 i. Furthermore, a Cobb-Douglas profit function automatically fulfills the
non-negativity property, because the profit predicted by equation (4.5) is always positive as long
as coefficient A is positive (given that all input prices and the output price are positive). As A
174
(4.7)
ln(t ) = 0 + p ln(t p) +
i ln(t wi )
(4.8)
ln t + ln = 0 + p ln t + p ln p +
i ln t +
i ln wi
(4.9)
ln t + ln = 0 + p ln p +
i ln wi + ln t p +
(4.10)
ln = ln + ln t
p +
i 1
(4.11)
0 = ln t
p +
i 1
(4.12)
0 = p +
i 1
(4.13)
(4.14)
1 = p +
X
i
Hence, the homogeneity condition is only fulfilled if the coefficient of the (logarithmic) output
price and the coefficients of the (logarithmic) input prices sum up to one. As they sum up to
2.71 + (0.73) + (0.19) + 0.16 = 1.95, the homogeneity condition is not fulfilled in our estimated
model.
N
X
(4.15)
i=1
and replace p in the profit function (4.6) by the right-hand side of the above equation:
!
ln = 0 + 1
X
i
175
i ln p +
X
i
i ln wi
(4.16)
i ln p +
ln ln p = 0 +
i ln wi
(4.17)
i (ln wi ln p)
(4.18)
ln
wi
= 0 +
i ln
p
p
i
(4.19)
This Cobb-Douglas profit function with linear homogeneity imposed can be estimated by following
command:
> profitCDHom <- lm( log( profit / pOut ) ~ log( pCap / pOut ) +
+
1Q
Median
3Q
Max
-3.6045 -0.2724
0.0972
0.6013
2.0385
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)
14.27961
0.45962
31.068
log(pCap/pOut) -0.82114
0.16953
log(pLab/pOut) -0.90068
0.25591
log(pMat/pOut) -0.02469
0.23530
-0.105 0.916610
--Signif. codes:
0.3568,
Adjusted R-squared:
0.341
p-value: 1.091e-11
The coefficient of the (logarithmic) output price can be obtained by the homogeneity restriction (4.15). Hence, it is 1 (0.82) (0.9) (0.02) = 2.75. Now, all monotonicity conditions
are fulfilled: profit is increasing in the output price and decreasing in all input prices. We can
use a Wald test or a likelihood-ratio test to test whether the model and the data contradict the
homogeneity assumption:
176
+ log(pCap)
+ log(pLab)
+ log(pMat) = 1
RSS Df Sum of Sq
122 119.78
121 116.57
Pr(>F)
--Signif. codes:
LogLik Df
Chisq Pr(>Chisq)
6 -173.88
5 -175.60 -1 3.4316
0.06396 .
--Signif. codes:
Both tests reject the null hypothesis, linear homogeneity in all prices, at the 10% significance
level but not at the 5% level. Given the importance of microeconomic consistency and that 5%
is the standard significance level, we continue our analysis with the Cobb-Douglas profit function
with linear homogeneity imposed.
177
ln
=
= i
wi
ln wi wi
wi
(4.20)
and the first derivative with respect to the output price is:
ln
=
= p
p
ln p p
p
(4.21)
Now, we can calculate the second derivatives as derivatives of the first derivatives (4.20)
and (4.21):
2
wi wj
2
wi p
w
i
i wi
=
wj
wj
i
=
ij i 2
wi wj
wi
i
=
j
ij i 2
wi wj
wi
= i (j ij )
wi wj
=
w
i
i wi
=
p
i
=
wi p
i
=
p
wi p
= i p
wi p
(4.22)
(4.23)
(4.24)
(4.25)
(4.26)
(4.27)
(4.28)
(4.29)
p p
2
p
=
=
p2
p
p
p
=
p 2
p p
p
p
=
p p 2
p
p
p
= p (p 1) 2 ,
p
(4.30)
(4.31)
(4.32)
(4.33)
178
[,2]
[,3]
[,4]
[1,]
0.185633270
[2,]
0.060331020
0.07442915
0.074429148 10.64451706
179
Please note that this Hessian matrix is not negative semidefinite either, because the other three principal minors
are positive. Hence, the Cobb-Douglas profit function is neither concave nor convex at the first observation.
180
[,2]
[,3]
[,4]
[1,]
0.2198994186
0.315188197
0.0008740851 -1.30964735
[2,]
0.3151881974
2.114366248
0.0027786041 -4.16320062
[3,]
0.0008740851
0.002778604
0.0003198275 -0.01154546
181
dat$convexCDHom[obs] <-
+ }
> sum( !dat$convexCDHom, na.rm = TRUE )
[1] 0
This result indicates that the convexity condition is violated not at a single observation. Consequently, our Cobb-Douglas profit function with linear homogeneity imposed is convex in all prices
at all observations.
182
p =
In contrast to real shares, these profit shares are never between zero and one but they sum
up to one, as do real shares:
r+
py X
wi x i
+
i
ri =
py
i wi
xi
=1
(4.36)
For instance, an optimal profit share of the output of p = 2.75 means that profit maximization
would result in a total revenue that is 2.75 times as large as the profit, which corresponds to
a return on sales of 1/2.75 = 36%. Similarly, an optimal profit share of the capital input of
cap = 0.82 means that profit maximization would result in total capital costs that are 0.82
times as large as the profit.
The following commands draw histograms of the observed profit shares and compare them to
the optimal profit shares, which are predicted by our Cobb-Douglas profit function with linear
homogeneity imposed:
> hist( ( dat$pOut * dat$qOut / dat$profit )[
+
dat$profit > 0 ], 30 )
dat$profit > 0 ], 30 )
dat$profit > 0 ], 30 )
dat$profit > 0 ], 30 )
The resulting graphs are shown in figure 4.3. These results somewhat contradict previous results.
183
60
0
20
40
Frequency
40
20
0
Frequency
60
80
80
10
15
20
60
40
0
20
Frequency
40
20
0
Frequency
60
80
Figure 4.3: Cobb-Douglas profit function: observed and optimal profit shares
184
(4.37)
(4.38)
These output supply and input demand functions should be homogeneous of degree zero in all
prices:
y(t p, t w) = y(p, w)
(4.39)
xi (t p, t w) = xi (p, w)
(4.40)
This condition is fulfilled for the output supply and input demand functions derived from a
linearly homogeneous Cobb-Douglas profit function:
(t p, t w)
t (p, w)
(p, w)
= p
= p
= y(p, w)
tp
tp
p
(t p, t w)
t (p, w)
(p, w)
xi (t p, t w) = i
= i
= i
= xi (p, w)
t wi
t wi
wi
y(t p, t w) = p
(4.41)
(4.42)
yp (p, w) =
185
(4.43)
(4.44)
(4.45)
(4.46)
(4.47)
y(p, w) wj
wj y(p, w)
p (p, w) wj
=
p
wj
y(p, w)
wj
p
xj (p, w)
=
p
y(p, w)
(p, w) wj xj (p, w)
= p
p y(p, w) (p, w)
p rj (w, y)
=
r(w, y)
(4.50)
= j
(4.53)
yj (p, w) =
p
xi (p, w)
p
xi (p, w)
i (p, w)
p
=
wi
p
xi (p, w)
i
p
=
y(p, w)
wi
xi (p, w)
(p, w) p y(p, w)
= i
wi xi (p, w) (p, w)
i p
=
ri (w, y)
ip (p, w) =
= p
(4.48)
(4.49)
(4.51)
(4.52)
(4.54)
(4.55)
(4.56)
(4.57)
(4.58)
(4.59)
xi (p, w) wj
wj xi (p, w)
i (p, w)
wj
(p, w)
wj
=
+ ij i
2
wi
wj
xi (p, w)
xi (p, w)
wj
ij (p, w) =
i
wj
(p, w)
xj (p, w)
+ ij i
wi
xi (p, w)
wi xi (p, w)
(p, w) wj xj (p, w)
i
= i
ij
wi xi (p, w) (p, w)
ri (w, y)
i
i rj (w, y)
=
ij
ri (w, y)
ri (w, y)
=
= j ij
(4.60)
(4.61)
(4.62)
(4.63)
(4.64)
(4.65)
All derived input demand elasticities based on our Cobb-Douglas profit function with linear
186
v = A pp
wii
Y
iN 1
xj j ,
(4.66)
jN 2
X
iN 1
with 0 = ln A.
187
i ln wi +
X
jN 2
j ln xj
(4.67)
4.4.2 Estimation
We can estimate a Cobb-Douglas short-run profit function with capital as a quasi-fixed input
using the following commands. Again, function lm automatically removes the observations (apple
producers) with negative gross margin:
> profitCDSR <- lm( log( vProfit ) ~ log( pOut ) + log( pLab ) + log( pMat ) +
+
1Q
Median
3Q
Max
-4.7422 -0.0646
0.2578
0.4931
0.8989
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)
3.2739
1.2261
2.670 0.008571 **
log(pOut)
3.1745
0.2263
14.025
log(pLab)
-1.6188
0.4434
log(pMat)
-0.7637
0.2687
-2.842 0.005226 **
log(qCap)
1.0960
0.1245
--Signif. codes:
0.6591,
Adjusted R-squared:
0.6484
4.4.3 Properties
This short-run profit function fulfills all microeconomic monotonicity conditions: it is increasing
in the output price, it is decreasing in the prices of all variable inputs, and it is increasing in the
quasi-fixed input quantity. However, the homogeneity condition is not fulfilled, as the coefficient
of the output price and the coefficients of the prices of the variable inputs do not sum up to one
but to 3.17 + (1.62) + (0.76) = 0.79.
188
1Q
Median
3Q
Max
-4.7302 -0.0677
0.2598
0.5160
0.8916
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)
3.3145
1.2184
log(pLab/pOut)
-1.4574
0.2252
log(pMat/pOut)
-0.7156
0.2427
-2.949
1.0847
0.1212
log(qCap)
2.720
0.00743 **
0.00380 **
--Signif. codes:
0.5227,
Adjusted R-squared:
0.5115
We can obtain the coefficient of the output price from the homogeneity condition (4.15): 1
(1.457) (0.716) = 3.173. All microeconomic monotonicity conditions are still fulfilled: the
Cobb-Douglas short-run profit function with homogeneity imposed is increasing in the output
price, decreasing in the prices of all variable inputs, and increasing in the quasi-fixed input
quantity.
We can test the homogeneity restriction by a likelihood ratio test:
> lrtest( profitCDSRHom, profitCDSR )
Likelihood ratio test
189
LogLik Df
5 -180.27
6 -180.17
Chisq Pr(>Chisq)
1 0.1859
0.6664
Given the large P -value, we can conclude that the data do not contradict the linear homogeneity
in the output price and the prices of the variable inputs.
v
ln v v
v
=
= j
xj
ln xj xj
xj
(4.68)
Before we can calculate the shadow price of the capital input, we need to calculate the predicted
gross margin v . As the dependent variable of the Cobb-Douglas short-run profit function with
homogeneity imposed is ln( v / ln p), we have to apply the exponential function to the fitted
dependent variable and then we have to multiply the result with p, in order obtain the fitted
gross margins v . Furthermore, we have to be aware of that the fitted method only returns
the predicted values for the observations that were included in the estimation. Hence, we have
to make sure that the predicted gross margins are only assigned to the observations that have a
positive gross margin and hence, were included in the estimation:
> dat$vProfitCDHom[ dat$vProfit > 0 ] <+
Now, we can calculate the shadow price of the capital input for each apple producer who has a
positive gross margin and hence, was included in the estimation:
190
vProfitCDHom / qCap )
The following commands show the variation of the shadow prices of capital and compare them
to the observed capital prices:
> hist( dat$pCapShadow, 30 )
> hist( dat$pCapShadow[ dat$pCapShadow < 30 ], 30 )
100
200
300
400
5.0
50.0
0.2
1.0
shadow prices
6
0
Frequency
40
30
20
10
Frequency
50
10
60
500.0
10
15
20
25
30
0.2
1.0
5.0 20.0
200.0
observed prices
191
y
y
y = T E y
0 T E 1,
(5.1)
where y is the observed output quantity and y is the maximum output quantity that can be
produced with the observed input quantities x.
The output-oriented technical efficiency according to Farrell is defined as
y
y
TE =
y = T E y
T E 1.
(5.2)
These efficiency measures are graphically illustrated in Bogetoft and Otto (2011, p. 26, figure 2.2).
5.1.1.2 Input-Oriented Technical Efficiency with One Input
The input-oriented technical efficiency according to Shepard is defined as
TE =
x
x
x = T E x
T E 1,
(5.3)
where x is the observed input quantity and x is the minimum input quantity at which the
observed output quantities y can be produced.
The input-oriented technical efficiency according to Farrell is defined as
TE =
x
x
x = T E x
0 T E 1.
(5.4)
These efficiency measures are graphically illustrated in Bogetoft and Otto (2011, p. 26, figure 2.2).
192
y1
y2
yM
= = ... =
y1
y2
yM
yi = T E yi i
0 T E 1,
(5.5)
quantities (given a proportional increase of all output quantities) that can be produced with the
observed input quantities x, and M is the number of outputs.
The output-oriented technical efficiency according to Farrell is defined as
TE =
y1
y
y
= 2 = ... = M
y1
y2
yM
yi = T E yi i
T E 1.
(5.6)
These efficiency measures are graphically illustrated in Bogetoft and Otto (2011, p. 27, figure 2.3, right panel).
5.1.1.4 Input-Oriented Technical Efficiency with Two or More Inputs
The input-oriented technical efficiencies according to Shepard and Farrell assume a proportional
reduction of all inputs, while all outputs are held constant.
Hence, the input-oriented technical efficiency according to Shepard is defined as
TE =
x1
x2
xN
= = ... =
x1
x2
xN
xi = T E xi i
TE 1
(5.7)
where x1 , x2 , . . . , xN are the observed input quantities, x1 , x2 , . . . , xN are the minimum input
quantities (given a proportional decrease of all input quantities) at which the observed output
quantities y can be produced, and N is the number of inputs.
The input-oriented technical efficiency according to Farrell is defined as
TE =
x1
x
x
= 2 = ... = N
x1
x2
xN
xi = T E xi i
0 T E 1.
(5.8)
These efficiency measures are graphically illustrated in Bogetoft and Otto (2011, p. 27, figure 2.3, left panel).
5.1.1.5 Output-Oriented Allocative Efficiency and Revenue Efficiency
According to equation (5.6), the output-oriented technical efficiency according to Farrell is
TE =
y2
yM
p y
y1
=
= ... =
=
,
y1
y2
yM
py
193
(5.9)
p y
p y
=
,
p y
p y
(5.10)
where y is the vector of technically efficient and allocatively efficient output quantities and y is
the vector of output quantities so that p y = p y and yi /
yi = AE i.
Finally, the revenue efficiency according to Farrell is
RE =
p y
p y p y
=
= AE T E
py
p y p y
(5.11)
All these efficiency measures can also be specified according to Shepard by just taking the inverse of the Farrell specifications. These efficiency measures are graphically illustrated in Bogetoft
and Otto (2011, p. 40, figure 2.11).
5.1.1.6 Input-Oriented Allocative Efficiency and Cost Efficiency
According to equation (5.8), the input-oriented technical efficiency according to Farrell is
TE =
x
1
wx
x
2
x
N
=
,
=
= ... =
x1
x2
xN
wx
(5.12)
where x
is the vector of technically efficient input quantities and w is the vector of output prices.
The input-oriented allocative efficiency according to Farrell is defined as
AE =
w x
wx
=
,
wx
wx
(5.13)
where x is the vector of technically efficient and allocatively efficient input quantities and x
is
the vector of output quantities so that w x
= w x and x
i /
xi = AE i.
Finally, the cost efficiency according to Farrell is
CE =
w x
w x w x
=
= AE T E
wx
wx
wx
(5.14)
All these efficiency measures can also be specified according to Shepard by just taking the inverse of the Farrell specifications. These efficiency measures are graphically illustrated in Bogetoft
and Otto (2011, p. 36, figure 2.9).
5.1.1.7 Profit Efficiency
The profit efficiency according to Farrell is defined as
PE =
p y w x
,
pyw x
194
(5.15)
AP
,
AP
(5.16)
where AP = f (x)/x is the observed average product AP = f (x )/x is the maximum average
product, and x is the input quantity that results in the maximum average product.
The first-order condition for a maximum of the average product is
AP
f (x) 1 f (x)
=
2 =0
x
x x
x
(5.17)
(5.18)
Hence, a necessary (but not sufficient) condition for a maximum of the average product is an
elasticity of scale equal to one.
(5.19)
where u 0 are the non-positive residuals. One solution to achieve this could be to estimate
an average production function by ordinary least squares and then simply shift the production
function up until all residuals are negative or zero (see right panel of figure 5.1). However, this
195
o
o
o
o
o o
o o
o
o
o
o
o
o
with u 0,
(5.20)
where u 0 accounts for technical inefficiency and v accounts for statistical noise. This model
can be re-written (see, e.g. Coelli et al., 2005, p. 243):
y = f (x) eu ev
(5.21)
Output-oriented technical efficiencies are usually defined as the ratio between the observed
output and the (individual) stochastic frontier output (see, e.g. Coelli et al., 2005, p. 244):
TE =
y
f (x) eu ev
=
= eu
f (x) ev
f (x) ev
(5.22)
This is also true for the frequently-used Data Envelopment Analysis (DEA).
196
(5.23)
u N + (, u2 ),
(5.24)
where = 0 for a positive half-normal distribution and 6= 0 for a positive truncated normal
distribution. These assumptions result in a left-skewed distribution of the total error terms =
u + v, i.e. the density function is flat on the left and steep on the right. Hence, it is very rare
that a firm has a large positive residual (much higher output than the production function) but
it is not so rare that a firm has a large negative residual (much lower output than the production
function).
5.2.1.1 Marginal products and output elasticities in SFA models
Given the multiplicative specification of stochastic production frontier models (5.21) and assuming
that the random error v is zero, we can see that the marginal products are downscaled by the
level of the technical efficiency:
y
f (x) u
f (x)
f (x)
=
e = TE
= T E i
xi
xi
xi
xi
(5.25)
However, the partial production elasticities are unaffected by the efficiency level:
y xi
f (x) u xi
f (x) xi
ln f (x)
=
e
=
=
= i
xi y
xi
f (x)eu
xi f (x)
ln xi
(5.26)
As the output elasticities do not depend on the firms technical efficiency, also the elasticity of
scale does not depend on the firms technical efficiency.
197
10 15 20
0
Frequency
10 15 20
5
0
Frequency
1.5
0.5
1.5
residuals prodCD
0.5
residuals prodTL
data = dat )
198
Pr(>|z|)
(Intercept) 0.228813
1.247739
0.1834 0.8544981
log(qCap)
0.160934
0.081883
1.9654 0.0493668 *
log(qLab)
0.684777
0.146797
log(qMat)
0.465871
0.131588
sigmaSq
1.000040
0.202456
gamma
0.896664
--Signif. codes:
V ar(u) = u2 1
2
,
(5.27)
where (.) indicates the cumulative distribution function and (.) the probability density function
of the standard normal distribution. If the inefficiency term u follows a positive halfnormal
distribution (i.e. = 0), the above equation reduces to
h
V ar(u) = u2 1 (2 (0))2 ,
199
(5.28)
This equation relies on the assumption that the inefficiency term u and the noise term v are independent, i.e.
their covariance is zero.
200
Pr(>|z|)
(Intercept) 0.228813
1.247739
0.1834 0.8544981
log(qCap)
0.160934
0.081883
1.9654 0.0493668 *
log(qLab)
0.684777
0.146797
log(qMat)
0.465871
0.131588
sigmaSq
1.000040
0.202456
gamma
0.896664
sigmaSqU
0.896700
0.241715
sigmaSqV
0.103340
0.055831
1.8509 0.0641777 .
sigma
1.000020
0.101226
sigmaU
0.946942
0.127629
sigmaV
0.321465
0.086838
lambdaSq
8.677179
6.644542
1.3059 0.1915829
lambda
2.945705
1.127836
2.6118 0.0090061 **
varU
0.325843
NA
NA
NA
sdU
0.570827
NA
NA
NA
gammaVar
0.759217
NA
NA
NA
--Signif. codes:
p
p
= v2 = 2 (1 ) = Var (v), sigma = = 2 , sigmaU = u = u2 , sigmaV = v = v2 ,
lambdaSq = 2 = u2 /v2 , lambda = = u /v , varU = Var (u), sdU =
201
LogLik Df
5 -137.61
6 -133.89
Chisq Pr(>Chisq)
1 7.4387
0.003192 **
--Signif. codes:
Under the null hypothesis (no inefficiency, only noise), the test statistic asymptotically follows a
mixed 2 -distribution (Coelli, 1995).3 The rather small P-value indicates that the data clearly
reject the OLS model in favor of the stochastic frontier model, i.e. there is significant technical
inefficiency.
As neither the noise term v nor the inefficiency term u but only the total error term = u + v
is known, the technical efficiencies T E = eu are generally unknown. However, given that the
parameter estimates (including the parameters 2 and or v2 and u2 ) and the total error term
are known, it is possible to determine the expected value of the technical efficiency (see, e.g.
Coelli et al., 2005, p. 255):
Td
E = E eu
(5.29)
As a standard likelihood ratio test assumes that the test statistic follows a (standard) 2 -distribution under the
null hypothesis, a test that is conducted by the command lrtest( prodCD, prodCDSfa ) returns an incorrect
P-value.
202
0.2
0.4
0.6
0.8
1e+05
5e+05
effCD
0.8
0.4
0.6
0.2
0.2
0
effCD
0.6
0.4
effCD
10
5
Frequency
15
0.8
5e+06
0.5
1.0
qOut
2.0
5.0
data = dat )
Pr(>|z|)
(Intercept)
log(qCap)
-0.6332521
log(qLab)
4.4511064
log(qMat)
-1.3976309
0.9991 0.3177593
I(0.5 * log(qCap)^2)
0.0053258
I(0.5 * log(qLab)^2)
-1.5030433
I(0.5 * log(qMat)^2)
-0.5113559
I(log(qCap) * log(qLab))
0.4187529
0.1866174
0.2747251
203
0.0285 0.9772324
1.5243 0.1274434
I(log(qLab) * log(qMat))
0.9800294
0.4216637
2.3242 0.0201150 *
sigmaSq
0.9587307
0.1968009
gamma
0.9153387
sigmaSqU
0.8775633
0.2328364
sigmaSqV
0.0811674
0.0497448
1.6317 0.1027476
sigma
0.9791480
0.1004960
sigmaU
0.9367835
0.1242744
sigmaV
0.2848989
0.0873025
3.2634 0.0011010 **
10.8117751
9.0334816
1.1969 0.2313628
lambda
3.2881264
1.3736518
2.3937 0.0166789 *
varU
0.3188892
NA
NA
NA
sdU
0.5647027
NA
NA
NA
gammaVar
0.7971103
NA
NA
NA
lambdaSq
--Signif. codes:
11 -131.25
12 -128.07
1 6.353
0.005859 **
--Signif. codes:
A further likelihood ratio test indicates that it is not really clear whether the Translog stochastic
frontier model fits the data significantly better than the Cobb-Douglas stochastic frontier model:
> lrtest( prodCDSfa, prodTLSfa )
204
LogLik Df
6 -133.89
12 -128.07
Chisq Pr(>Chisq)
6 11.642
0.07045 .
--Signif. codes:
While the Cobb-Douglas functional form is accepted at the 5% significance level, it is rejected in
favor of the Translog functional form at the 10% significance level.
The efficiency estimates based on the Translog stochastic production frontier can be obtained
(again) by the efficiencies method:
> dat$effTL <- efficiencies( prodTLSfa )
The following commands illustrate their variation, their correlation with the output level, and
their correlation with the firm size (measured as input use):
> hist( dat$effTL, 15 )
> plot( dat$qOut, dat$effTL, log = "x" )
0.2
0.4
0.6
0.8
1e+05
5e+05
effTL
5e+06
0.8
0.4
0.6
0.2
0.2
effTL
0.6
0.4
effTL
8
6
4
Frequency
10
0.8
12
0.5
qOut
1.0
2.0
5.0
205
0.6
0.8
effTL
0.2
0.4
0.2
0.4
0.6
0.8
effCD
data = dat )
206
Pr(>|z|)
(Intercept)
0.6388793
0.1311531
log(qmCap)
0.1308903
0.1003318
1.3046
log(qmLab)
0.7065404
0.1555606
log(qmMat)
0.4657266
0.1516483
3.0711
0.002133 **
I(0.5 * log(qmCap)^2)
0.0053227
0.1848995
0.0288
0.977034
I(0.5 * log(qmLab)^2)
-1.5030266
0.6761522 -2.2229
0.026222 *
I(0.5 * log(qmMat)^2)
-0.5113617
0.3749803 -1.3637
0.172661
0.2686428
0.119047
I(log(qmCap) * log(qmLab))
0.4187571
1.5588
0.192038
0.1886950 -2.3167
0.020521 *
I(log(qmLab) * log(qmMat))
0.9800162
0.4201674
2.3324
0.019677 *
sigmaSq
0.9587158
0.1967744
gamma
0.9153349
--Signif. codes:
207
with u 0,
(5.30)
where u 0 accounts for cost inefficiency and v accounts for statistical noise. This model can be
re-written as:
c = c(w, y) eu ev
(5.31)
c
f (x) eu ev
=
= eu ,
v
c(w, y) e
c(w, y) ev
(5.32)
c(w, y) ev
c(w, y) ev
=
= eu .
c
f (x) eu ev
(5.33)
Assuming a normal distribution of the noise term v and a positive half-normal distribution of
the inefficiency term u, the distribution of the residuals from a cost function is expected to be
right-skewed in the case of cost inefficiencies.
208
10 20 30 40
Frequency
15
0 5
Frequency
25
0.5
0.0
0.5
0.5
residuals costCDHom
0.0
0.5
1.0
residuals costTLHom
bution of the inefficiency term (misspecification of the distribution of the noise term in the
SFA model),
the distribution of the inefficiency term is symmetric or left-skewed (misspecification of the
distribution of the inefficiency term in the SFA model),
the sampling of the observations by coincidence resulted in a symmetric or left-skewed distribution of the true total error term (u+v) in this specific sample, although the distribution
of the true total error term (u + v) in the population is right-skewed, and/or
the farm managers do not aim at maximizing profit (which implies minimizing costs) but
have other objectives.
It could also be that the distribution of the unknown true residuals in the sample is right-skewed,
but the OLS estimates are left-skewed, e.g. because
the parameter estimates are imprecise (but unbiased),
the estimated functional forms (Cobb-Douglas and Translog) are poor approximations of
209
ineffDecrease = FALSE )
6.74916371 0.74012850
Pr(>|z|)
1.6069
0.1081
log(qOut)
sigmaSq
0.11117637 0.01424404
gamma
0.00019319 0.06608496
0.0029
0.9977
--Signif. codes:
210
5 -44.878
6 -44.878
0.4991
This test confirms that the fit of the OLS model (which assumes that is zero and hence, that
there is no inefficiency) is not significantly worse than the fit of the stochastic frontier model.
In fact, the cost efficiency estimates are all very close to one. By default, the efficiencies()
method calculates the efficiency estimates as E [eu ], which means that we obtain estimates
of Farrell-type cost efficiencies (5.33). Given that E [eu ] is not equal to 1/E [eu ] (as the expectation operator is an additive operator), we cannot obtain estimates of Shepard-type cost
efficiencies (5.32) by taking the inverse of the estimates of the Farrell-type cost efficiencies (5.33).
However, we can obtain estimates of Shepard-type cost efficiencies (5.32) by setting argument
minusU of the efficiencies() method equal to FALSE, which tells the efficiencies() method
to calculate the efficiency estimates as E [eu ].
> dat$costEffCDHomFarrell <- efficiencies( costCDHomSfa )
> dat$costEffCDHomShepard <- efficiencies( costCDHomSfa, minusU = FALSE )
> hist( dat$costEffCDHomFarrell, 15 )
25
0.99630
0.99634
15
Frequency
0.99626
0 5
15
0 5
Frequency
25
1.00366
costEffCDHomFarrell
1.00370
1.00374
costEffCDHomShepard
211
(5.34)
This function can be used to analyze how the additional explanatory variables (z) affect the
output quantity for given input quantities, i.e. how they affect the productivity.
In case of a Cobb-Douglas functional form, we get following extended production function:
ln y = 0 +
i ln xi + z z
(5.35)
Based on this Cobb-Douglas production function and our data set on French apple producers,
we can check whether the apple producers who use an advisory service produce a different output
quantity than non-users with the same input quantities, i.e. whether the productivity differs
between users and non-users. This extended production function can be estimated by following
command:
> prodCDAdv <- lm( log( qOut ) ~ log( qCap ) + log( qLab ) + log( qMat ) + adv,
+
data = dat )
1Q
Median
3Q
Max
-1.7807 -0.3821
0.0022
0.4709
1.3323
212
1.29590
-1.801
0.0740 .
log(qCap)
0.15673
0.08581
1.826
0.0700 .
log(qLab)
0.69225
0.15190
log(qMat)
0.62814
0.12379
adv
0.25896
0.10932
2.369
0.0193 *
--Signif. codes:
0.6105,
Adjusted R-squared:
0.599
The estimation result shows that users of an advisory service produce significantly more than
non-users with the same input quantities. Given the Cobb-Douglas production function (5.35),
the coefficient of an additional explanatory variable can be interpreted as the marginal effect on
the relative change of the output quantity:
z =
ln y
ln y y
y 1
=
=
z
y z
z y
(5.36)
Hence, our estimation result indicates that users of an advisory service produce approximately
25.9% more output than non-users with the same input quantity but the large standard error
of this coefficient indicates that this estimate is rather imprecise. Given that the change of a
dummy variable from zero to one is not marginal and that the coefficient of the variable adv is
not close to zero, the above interpretation of this coefficient is a rather poor approximation. In
fact, our estimation results suggest that the output quantity of apple producers with advisory
service is on average exp(z ) = 1.296 times as large as (29.6% larger than) the output quantity of
apple producers without advisory service given the same input quantities. As users and non-users
of an advisory service probably differ in some unobserved variables that affect the productivity
(e.g. motivation and effort to increase productivity), the coefficient az is not necessarily the
causal effect of the advisory service but describes the difference in productivity between users
and non-users of the advisory service.
213
data = dat )
Pr(>|z|)
(Intercept) -0.247751
log(qCap)
0.156906
0.081337
1.9291 0.0537222 .
log(qLab)
0.695977
0.148793
log(qMat)
0.491840
0.139348
adv
0.150742
0.111233
1.3552 0.1753583
sigmaSq
0.916031
0.231604
gamma
0.861029
0.114087
--Signif. codes:
LogLik Df
6 -133.89
7 -132.87
Chisq Pr(>Chisq)
1 2.0428
0.1529
214
6 -134.76
7 -132.87
3.78
0.02593 *
--Signif. codes:
The following commands compute the technical efficiency estimates and compare them to the
efficiency estimates obtained from the Cobb-Douglas production frontier without advisory service
as an explanatory variable:
> dat$effCDAdv <- efficiencies( prodCDAdvSfa )
> compPlot( dat$effCD[ dat$adv == 0 ],
+
dat$effCDAdv[ dat$adv == 0 ] )
The resulting graph is shown in figure 5.8. It appears as if the non-users of an advisory service
became somewhat more efficient. This is because the stochastic frontier model that includes
the advisory service as an explanatory variable has in fact two production frontiers: a lower
frontier for the non-users of an advisory service and a higher frontier for the users of an advisory
service. The coefficient of the dummy variable adv, i.e. adv , can be interpreted as a quick
estimate of the difference between the two frontier functions. In our empirical case, the difference
is approximately 15.1%. However, a precise calculation indicates that the frontier of the users of
the advisory service is exp (adv ) = 1.163 times (16.3% higher than) the frontier of the non-users
of advisory service. And the frontier of the non-users of the advisory service is exp (adv ) =
0.86 times (14% lower than) the frontier of the users of advisory service. As the non-users of
an advisory service are compared to a lower frontier now, they appear to be more efficient now.
While it is reasonable to have different frontier functions for different soil types, it does not seem
to be too reasonable to have different frontier functions for users and non-users of an advisory
service, because there is no physical reasons, why users of an advisory service should have a
maximum output quantity that is different from the maximum output quantity of non-users.
215
0.8
0.6
0.4
0.2
0.2
0.4
0.6
0.8
Figure 5.8: Technical efficiency estimates of Cobb-Douglas production frontier with and without
advisory service as additional explanatory variable (circles = producers who do not
use an advisory service, solid dots = producers who use an advisory service
with = z,
(5.37)
where is an additional parameter (vector) to be estimated. Function sfa can also estimate
these efficiency effects frontiers. The additional variables that should explain the efficiency
level must be specified at the end of the model formula, where a vertical bar separates them from
the (regular) input variables:
> prodCDSfaAdvInt <- sfa( log( qOut ) ~ log( qCap ) + log( qLab ) + log( qMat ) |
+
216
-0.090747
Pr(>|z|)
log(qCap)
0.168625
0.082138
2.0529 0.0400775 *
log(qLab)
0.653868
0.142849
4.5773
log(qMat)
0.513527
0.132644
4.71e-06 ***
Z_(Intercept) -0.016714
Z_adv
-1.077550
sigmaSq
1.096441
0.747990
1.4659 0.1426891
gamma
0.863087
0.094468
--Signif. codes:
LogLik Df
5 -137.61
8 -130.52
Chisq Pr(>Chisq)
3 14.185
0.001123 **
--Signif. codes:
The test indicates that the fit of this model is significantly better than the fit of the OLS model
(without advisory service as explanatory variable).
217
1.00
-0.06
-0.48
-0.21
0.07
0.08
log(qCap)
-0.06
1.00
-0.37
-0.17
log(qLab)
-0.48
-0.37
1.00
-0.57
log(qMat)
-0.21
-0.17
-0.57
1.00
Z_(Intercept)
0.07
-0.13
0.20
-0.11
1.00
0.91
Z_adv
0.08
-0.13
0.26
-0.20
0.91
1.00
sigmaSq
0.03
0.10
-0.14
0.00
-0.95 -0.89
gamma
0.29
-0.03
0.07
-0.34
-0.55 -0.46
-0.13 -0.13
0.20
0.26
-0.11 -0.20
sigmaSq gamma
(Intercept)
0.03
log(qCap)
0.10 -0.03
log(qLab)
log(qMat)
-0.14
0.29
0.07
0.00 -0.34
Z_(Intercept)
-0.95 -0.55
Z_adv
-0.89 -0.46
sigmaSq
1.00
0.73
gamma
0.73
1.00
The estimate of the intercept of the inefficiency model (0 ) is very highly correlated with the
estimate of the (slope) coefficient of the advisory service in the inefficiency model (1 ) and the
estimate of the parameter 2 and it is considerably correlated with the estimate of the parameter .
The intercept can be suppressed by adding a -1 to the specification of the inefficiency model:
> prodCDSfaAdv <- sfa( log( qOut ) ~ log( qCap ) + log( qLab ) + log( qMat ) |
+
218
Pr(>|z|)
(Intercept) -0.090455
1.247496 -0.0725
0.94220
log(qCap)
0.168471
0.077008
2.1877
0.02869 *
log(qLab)
0.654341
0.139669
log(qMat)
0.513291
0.130854
Z_adv
-1.064859
0.545950 -1.9505
0.05112 .
sigmaSq
1.086417
0.255371
gamma
0.862306
--Signif. codes:
LogLik Df
5 -137.61
7 -130.52
Chisq Pr(>Chisq)
2 14.185
0.0002907 ***
--Signif. codes:
A likelihood ratio test confirms the t-test that the intercept in the inefficiency model is statistically
insignificant:
> lrtest( prodCDSfaAdv, prodCDSfaAdvInt )
219
7 -130.52
8 -130.52
1 2e-04
0.9892
The coefficient of the advisory service in the inefficiency model is now significantly negative
(at 10% significance level), which means that users of an advisory service have a significantly
smaller inefficiency term u, i.e. are significantly more efficient. The size of the coefficients of the
inefficiency model () cannot be reasonably interpreted. However, if argument margEff of the
efficiencies method is set to TRUE, this method does not only return the efficiency estimates but
also the marginal effects of the variables that should explain the efficiency level on the efficiency
estimates (see Olsen and Henningsen, 2011):
> dat$effCDAdv2 <- efficiencies( prodCDSfaAdv, margEff = TRUE )
The marginal effects differ between observations and are available in the attribute margEff. The
following command extracts and visualizes the marginal effects of the variable that indicates the
use of an advisory service on the efficiency estimates:
15
5
0
Frequency
0.02
0.03
0.04
0.05
0.06
marginal effect
Figure 5.9: Marginal effects of the variable that indicates the use of an advisory service on the
efficiency estimates
The resulting graph is shown in figure 5.9. It indicates that apple producers who use an advisory
service are between 6.3 and 6.4 percentage points more efficient than apple producers who do not
use an advisory service.
220
(6.1)
This function can be used to analyze how the time (t) affects the (available) production technology.
The average production technology (potentially depending on the time period) can be estimated
from panel data sets by the OLS method (i.e. pooled) or by any of the usual panel data methods
(e.g. fixed effects, random effects).
i ln xi + t t
221
(6.2)
t
y t
x
(6.3)
1Q
Median
3Q
Max
-1.83351 -0.16006
0.05329
0.22110
0.86745
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.665096
0.248509
log(AREA)
0.333214
0.062403
log(LABOR)
0.395573
0.066421
log(NPK)
0.270847
0.041027
mYear
0.010090
0.008007
1.260
0.208
--Signif. codes:
0.86,
Adjusted R-squared:
0.8583
The estimation result indicates an annual rate of technical change of 1%, but this is not statistically different from 0%, which means no technological change.
The command above can be simplified by using the pre-calculated logarithmic (and meanscaled) quantities:
222
1Q
Median
3Q
Max
-1.83351 -0.16006
0.05329
0.22110
0.86745
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.015590
0.019325
-0.807
0.420
lArea
0.333214
0.062403
lLabor
0.395573
0.066421
lNpk
0.270847
0.041027
mYear
0.010090
0.008007
1.260
0.208
--Signif. codes:
0.86,
Adjusted R-squared:
0.8583
The intercept has changed because of the mean-scaling of the input and output quantities but
all slope parameters are unaffected by using the pre-calculated logarithmic (and mean-scaled)
quantities:
> all.equal( coef( riceCdTime )[-1], coef( riceCdTimeS )[-1],
+
check.attributes = FALSE )
[1] TRUE
6.1.1.2 Panel data estimations of the Cobb-Douglas Production Function with
Technological Change
The panel data estimation with fixed individual effects can be done by:
> riceCdTimeFe <- plm( lProd ~ lArea + lLabor + lNpk + mYear, data = pdat )
> summary( riceCdTimeFe )
Oneway (individual) effect Within Model
223
-1.5900 -0.1570
0.0456
0.1780
Max.
0.8180
Coefficients :
Estimate Std. Error t-value
lArea
Pr(>|t|)
0.5607756
0.0785370
lLabor 0.2549108
0.0690631
lNpk
0.1748528
0.0484684
mYear
0.0130908
0.0071824
1.8226 0.0693667 .
--Signif. codes:
43.632
0.42995
Adj. R-Squared :
0.3712
model = "random" )
224
0.8
individual
0.2
theta:
0.02088 0.14451
0.4222
Residuals :
Min. 1st Qu.
-1.7500 -0.1430
0.0485
0.1910
Max.
0.8520
Coefficients :
Estimate Std. Error t-value
Pr(>|t|)
(Intercept) -0.0213044
0.0292268 -0.7289
0.4665
lArea
0.4563002
0.0662979
lLabor
0.3190041
0.0647524
lNpk
0.2268399
0.0426651
mYear
0.0115453
0.0071921
1.6053
0.1094
--Signif. codes:
117.05
0.75058
Adj. R-Squared :
0.73968
1st Qu.
Median
Mean
3rd Qu.
Max.
-0.817500 -0.081970
0.006677
0.000000
0.093980
0.554100
225
:-3.8110
lArea
Min.
:-5.2850
lLabor
Min.
:-2.72761
lNpk
Min.
:-1.3094
1st Qu.:-0.3006
1st Qu.:-0.4200
1st Qu.:-0.30989
1st Qu.:-0.1867
Median : 0.1145
Median : 0.6978
Median : 0.08778
Median : 0.1050
Mean
Mean
Mean
Mean
: 0.1839
: 0.5896
: 0.06079
: 0.1265
Max.
Max.
: 4.7633
Max.
Max.
NA's
:18
: 3.7270
: 1.75595
: 1.7180
mYear
Min.
:-0.471049
1st Qu.:-0.044359
Median :-0.008111
Mean
:-0.012327
: 0.275875
model = "pooling" )
This gives the same estimated coefficients as the model estimated by lm:
226
X
i
i ln xi +
1 XX
ij ln xi ln xj + t t
2 i j
(6.4)
227
(6.5)
X
ln y
= i +
ij ln xj
ln xi
j
(6.6)
In order to be able to interpret the first-order coefficients of the (logarithmic) input quantities
(i ) as output elasticities (i ) at the sample mean, we use the mean-scaled input quantities. We
also use the mean-scaled output quantity in order to use the same variables as Coelli et al. (2005,
p. 250).
6.1.2.1 Pooled estimation of the Translog Production Function with Constant and Neutral
Technological Change
The following command estimates a Translog production function that can account for constant
and neutral technical change:
> riceTlTime <- lm( lProd ~ lArea + lLabor + lNpk +
+
data = riceProdPhil )
1Q
Median
3Q
Max
-1.52184 -0.18121
0.04356
0.22298
0.87019
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)
0.013756
0.024645
0.558
lArea
0.588097
0.085162
lLabor
0.191764
0.080876
2.371
0.01831 *
lNpk
0.197875
0.051605
3.834
0.00015 ***
-0.435547
0.247491
-1.760
0.07935 .
0.303236
-2.448
0.01489 *
I(0.5 * lNpk^2)
0.020367
0.097907
0.208
0.83534
I(lArea * lLabor)
0.678647
0.216594
3.133
0.00188 **
I(0.5 * lArea^2)
228
0.57712
0.063920
0.145613
0.439
0.66097
I(lLabor * lNpk)
-0.178286
0.138611
-1.286
0.19926
0.012682
0.007795
1.627
0.10468
mYear
--Signif. codes:
0.8719,
Adjusted R-squared:
0.868
In the Translog production function that accounts for constant and neutral technological change,
the monotonicity conditions are fulfilled at the sample mean and the estimated output elasticities
of land, labor and fertilizer are 0.588, 0.192, and 0.198, respectively, at the sample mean. The
estimated (constant) annual rate of technological progress is around 1.3%.
Conduct a Wald test to test whether the Translog production function outperforms the CobbDouglas production function:
> library( "lmtest" )
> waldtest( riceCdTimeS, riceTlTime )
Wald test
Model 1: lProd ~ lArea + lLabor + lNpk + mYear
Model 2: lProd ~ lArea + lLabor + lNpk + I(0.5 * lArea^2) + I(0.5 * lLabor^2) +
I(0.5 * lNpk^2) + I(lArea * lLabor) + I(lArea * lNpk) + I(lLabor *
lNpk) + mYear
Res.Df Df
1
339
333
Pr(>F)
--Signif. codes:
The Cobb-Douglas specification is clearly rejected in favour of the Translog specification for the
pooled estimation.
6.1.2.2 Panel-data estimations of the Translog Production Function with Constant and
Neutral Technological Change
The following command estimates a Translog production function that can account for constant
and neutral technical change with fixed individual effects:
> riceTlTimeFe <- plm( lProd ~ lArea + lLabor + lNpk +
+
229
-1.0100 -0.1450
0.0191
0.1680
Max.
0.7460
Coefficients :
Estimate Std. Error t-value Pr(>|t|)
lArea
0.5828102
0.1173298
lLabor
0.0473355
0.0848594
0.5578 0.577402
lNpk
0.1211928
0.0610114
1.9864 0.047927 *
I(0.5 * lArea^2)
-0.8543901
I(0.5 * lNpk^2)
0.0429446
0.0987119
0.4350 0.663849
I(lArea * lLabor)
0.5867063
0.2125686
2.7601 0.006145 **
I(lArea * lNpk)
0.1167509
0.1461380
0.7989 0.424995
I(lLabor * lNpk)
-0.2371219
mYear
0.0165309
2.3887 0.017547 *
--Signif. codes:
43.632
0.49781
Adj. R-Squared :
0.42111
230
0.79
individual
0.21
theta:
0.01997 0.14130
0.434
Residuals :
Min. 1st Qu.
-1.3900 -0.1620
0.1840
Max.
0.7980
Coefficients :
Estimate Std. Error t-value
Pr(>|t|)
(Intercept)
0.0213211
0.0347371
0.6138
lArea
0.6831045
0.0922069
lLabor
0.0974523
0.0804060
1.2120
0.226370
lNpk
0.1708366
0.0546853
3.1240
0.001941 **
I(0.5 * lArea^2)
0.539776
-0.4275328
0.2468086 -1.7322
0.084156 .
0.2872825 -2.2166
0.027326 *
I(0.5 * lNpk^2)
0.0307547
0.0957745
0.3211
0.748324
I(lArea * lLabor)
0.5666863
0.2059076
2.7521
0.006245 **
I(lArea * lNpk)
0.1037657
0.1421739
0.7299
0.465995
I(lLabor * lNpk)
-0.2055786
0.1277476 -1.6093
0.108508
0.0070184
0.043549 *
mYear
0.0142202
2.0261
231
114.08
0.76662
Adj. R-Squared :
0.74211
> summary(riceTlTimePool)
Oneway (individual) effect Pooling Model
Call:
plm(formula = lProd ~ lArea + lLabor + lNpk + I(0.5 * lArea^2) +
I(0.5 * lLabor^2) + I(0.5 * lNpk^2) + I(lArea * lLabor) +
I(lArea * lNpk) + I(lLabor * lNpk) + mYear, data = pdat,
model = "pooling")
Balanced Panel: n=43, T=8, N=344
Residuals :
Min. 1st Qu.
-1.5200 -0.1810
0.2230
Max.
0.8700
Coefficients :
Estimate Std. Error t-value
Pr(>|t|)
(Intercept)
0.0137557
0.0246454
0.5581 0.5771201
lArea
0.5880972
0.0851622
lLabor
0.1917638
0.0808764
2.3711 0.0183052 *
lNpk
0.1978747
0.0516045
I(0.5 * lArea^2)
-0.4355466
232
I(0.5 * lNpk^2)
0.0203673
0.0979072
0.2080 0.8353358
I(lArea * lLabor)
0.6786472
0.2165937
3.1333 0.0018822 **
I(lArea * lNpk)
0.0639200
0.1456135
0.4390 0.6609677
I(lLabor * lNpk)
-0.1782859
mYear
0.0126820
1.6270 0.1046801
--Signif. codes:
263.52
0.87189
Adj. R-Squared :
0.84401
I(0.5 * lNp
233
I(0.5 * lNp
297
291
Chisq Pr(>Chisq)
6 39.321
6.191e-07 ***
--Signif. codes:
339
333
Chisq Pr(>Chisq)
6 30.077
3.8e-05 ***
--Signif. codes:
234
339
333
6 30.89
2.66e-05 ***
--Signif. codes:
The Cobb-Douglas functional form is rejected in favour of the Translog functional for for all three
panel-specifications that we estimated above. The Wald test for the pooled model differs from
the Wald test that we did in section 6.1.2.1, because waldtest by default uses a finite sample
F statistic for models estimated by lm but uses a large sample Chi-squared statistic for models
estimated by plm. The test statistic used by waldtest can be specified by argument test.
i ln xi +
X
1 XX
1
ij ln xi ln xj + t t +
ti ln xi + tt t2
2 i j
2
i
(6.7)
In this specification, the rate of technological change depends on the input quantities and the
time period:
X
ln y
= t +
ti ln xi + tt t
t
i
(6.8)
X
ln y
ij ln xj + ti t.
= i +
ln xi
j
(6.9)
235
1Q
Median
3Q
Max
-1.54976 -0.17245
0.04623
0.21624
0.87075
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)
0.001255
0.031934
0.039
lArea
0.579682
0.085892
lLabor
0.187505
0.081359
2.305
lNpk
0.207193
0.052130
-0.468372
0.265363
-1.765
0.07849 .
0.308046
-2.236
0.02599 *
I(0.5 * lNpk^2)
0.055993
0.099848
0.561
0.57533
I(lArea * lLabor)
0.676833
0.223271
3.031
0.00263 **
I(lArea * lNpk)
0.082374
0.151312
0.544
0.58654
I(lLabor * lNpk)
-0.226885
0.145568
-1.559
0.12005
mYear
0.008746
0.008513
1.027
0.30497
I(mYear * lArea)
0.003482
0.028075
0.124
0.90136
I(mYear * lLabor)
0.034661
0.029480
1.176
0.24054
I(mYear * lNpk)
-0.037964
0.020355
-1.865
I(0.5 * mYear^2)
0.007611
0.007954
0.957
I(0.5 * lArea^2)
0.96867
0.02181 *
0.06305 .
0.33933
--Signif. codes:
0.8734,
Adjusted R-squared:
0.868
We conduct a Wald test to test whether the Translog production function with non-constant
and non-neutral technological change outperforms the Cobb-Douglas production function and
the Translog production function with constant and neutral technological change:
236
Pr(>F)
339
--Signif. codes:
333
329
F Pr(>F)
4 0.9976 0.4089
The fit of the Translog specification with non-constant and non-neutral technological change is
significantly better than the fit of the Cobb-Douglas specification but it is not significantly better
than the fit of the Translog specification with constant and neutral technological change.
In order to simplify the calculation of the output elasticities (with equation 6.9) and the
annual rates of technological change (with equation 6.8), we create shortcuts for the estimated
coefficients:
> a1 <- coef( riceTlTimeNn )[ "lArea" ]
> a2 <- coef( riceTlTimeNn )[ "lLabor" ]
> a3 <- coef( riceTlTimeNn )[ "lNpk" ]
> at <- coef( riceTlTimeNn )[ "mYear" ]
237
We can calculate the elasticity of scale by taken the sum over all partial output elasticities:
> riceProdPhil$eScale <- with( riceProdPhil, eArea + eLabor + eNpk )
We can visualize (the variation of) the output elasticities and the elasticity of scale with
histograms:
> hist( riceProdPhil$eArea, 15 )
> hist( riceProdPhil$eLabor, 15 )
> hist( riceProdPhil$eNpk, 15 )
> hist( riceProdPhil$eScale, 15 )
The resulting graphs are shown in figure 6.1. If the firms increase the land area by one percent,
the output of most firms will increase by around 0.6 percent. If the firms increase labor input by
one percent, the output of most firms will increase by around 0.2 percent. If the firms increase
fertilizer input by one percent, the output of most firms will increase by around 0.25 percent. If
the firms increase all input quantities by one percent, the output of most firms will also increase
by around 1 percent. These graphs also show that the monotonicity condition is not fulfilled for
some observations:
> sum( riceProdPhil$eArea < 0 )
[1] 20
238
40
0
20
Frequency
40
20
0
Frequency
60
0.5
0.0
0.5
1.0
0.5
0.0
0.5
1.5
60
20
0
20
40
Frequency
eLabor
Frequency
eArea
1.0
0.1
0.1
0.3
0.5
0.8
1.0
eNpk
eScale
239
1.2
1.4
40
20
0
Frequency
0.05
0.00
0.05
0.10
tc
240
0.1670
Max.
0.7490
Coefficients :
Estimate Std. Error t-value
Pr(>|t|)
lArea
0.5857359
0.1191164
lLabor
0.0336966
0.0869044
0.3877
0.698494
lNpk
0.1276970
0.0623919
2.0467
0.041599 *
I(0.5 * lArea^2)
-0.8588620
0.2952677 -2.9088
0.003912 **
0.2979094 -2.0659
0.039733 *
I(0.5 * lNpk^2)
0.0673038
0.1014542
0.6634
0.507613
I(lArea * lLabor)
0.6016538
0.2164953
2.7791
0.005811 **
I(lArea * lNpk)
0.1205064
0.1549834
0.7775
0.437479
I(lLabor * lNpk)
-0.2660519
0.1353699 -1.9654
0.050336 .
mYear
0.0148796
0.0076143
1.9542
0.051654 .
I(mYear * lArea)
0.0105012
0.0270130
0.3887
0.697752
I(mYear * lLabor)
0.0230156
0.0286066
0.8046
0.421743
0.0199045 -1.4044
0.161277
I(mYear * lNpk)
-0.0279542
241
0.0058526
0.0069948
0.8367
0.403458
--Signif. codes:
43.632
0.50189
Adj. R-Squared :
0.41872
0.4275
Residuals :
Min. 1st Qu.
-1.3900 -0.1620
0.1800
Max.
0.7900
242
Pr(>|t|)
(Intercept)
0.0101183
0.0389961
0.2595
lArea
0.6809764
0.0930789
lLabor
0.0865327
0.0813309
1.0640
0.288128
lNpk
0.1800677
0.0554226
3.2490
0.001278 **
I(0.5 * lArea^2)
0.795434
-0.4749163
0.2627102 -1.8078
0.071557 .
0.2907148 -2.1144
0.035232 *
I(0.5 * lNpk^2)
0.0614961
0.0980315
0.6273
0.530891
I(lArea * lLabor)
0.5916989
0.2113078
2.8002
0.005409 **
I(lArea * lNpk)
0.1224789
0.1488815
0.8227
0.411297
I(lLabor * lNpk)
-0.2531048
0.1350400 -1.8743
0.061776 .
mYear
0.0116511
0.0077140
1.5104
0.131907
I(mYear * lArea)
0.0028675
0.0265731
0.1079
0.914134
I(mYear * lLabor)
0.0355897
0.0279156
1.2749
0.203242
I(mYear * lNpk)
-0.0344049
I(0.5 * mYear^2)
0.0069525
0.0195392 -1.7608
0.079198 .
0.0071510
0.331650
0.9722
--Signif. codes:
115.71
0.77169
Adj. R-Squared :
0.73804
243
I(0.5 * lNp
Df Chisq Pr(>Chisq)
287
9.392e-06 ***
--Signif. codes:
244
I(0.5 * lNp
Df
Chisq Pr(>Chisq)
329
0.0002103 ***
--Signif. codes:
Df Chisq Pr(>Chisq)
329
0.0001309 ***
--Signif. codes:
Finally, we test whether the fit of Translog specification with non-constant and non-neutral
technological change is significantly better than the fit of Translog specification with constant
and neutral technological change:
> waldtest( riceTlTimeNnFe, riceTlTimeFe )
Wald test
Model 1: lProd ~ lArea + lLabor + lNpk + I(0.5 * lArea^2) + I(0.5 * lLabor^2) +
I(0.5 * lNpk^2) + I(lArea * lLabor) + I(lArea * lNpk) + I(lLabor *
lNpk) + mYear + I(mYear * lArea) + I(mYear * lLabor) + I(mYear *
lNpk) + I(0.5 * mYear^2)
245
Chisq Pr(>Chisq)
287
291 -4 2.3512
0.6715
Chisq Pr(>Chisq)
329
333 -4 3.6633
0.4535
Chisq Pr(>Chisq)
329
333 -4 3.9905
0.4073
The tests indicate that the fit of Translog specification with constant and neutral technological
change is not significantly worse than the fit of Translog specification with non-constant and
non-neutral technological change.
The difference between the Wald tests for the pooled model and the Wald test that we did in
section 6.1.3.1 is explained at the end of section 6.1.2.2.
246
(6.10)
where the subscript k = 1, . . . , K indicates the firm, t = 1, . . . , T indicates the time period, and
all other variables are defined as before. We will apply the following three model specifications:
1. time-invariant individual efficiencies, i.e. ukt = uk , which means that each firm has an
individual fixed efficiency that does not vary over time;
2. time-variant individual efficiencies, i.e. ukt = uk exp( (t T )), which means that each
firm has an individual efficiency and the efficiency terms of all firms can vary over time
with the same rate (and in the same direction); and
3. observation-specific efficiencies, i.e. no restrictions on ukt , which means that the efficiency
term of each observation is estimated independently from the other efficiencies of the firm
so that basically the panel structure of the data is ignored.
Pr(>|z|)
247
0.035340
lArea
0.453900
0.064382
lLabor
0.288922
0.063853
lNpk
0.227542
0.040644
sigmaSq
0.155377
0.024144
gamma
0.464317
0.088270
--Signif. codes:
Pr(>|z|)
(Intercept) 0.1832751
0.0345895
lArea
0.4625174
0.0644245
lLabor
0.3029415
0.0641323
lNpk
0.2098907
0.0418709
mYear
0.0116003
0.0071758
1.6166
sigmaSq
0.1556806
0.0242951
gamma
0.4706143
0.0869549
0.106
--Signif. codes:
248
panel data
number of cross-sections = 43
number of time periods = 8
total number of observations = 344
thus there are 0 observations not in the panel
mean efficiency: 0.8176333
In the Cobb-Douglas production frontier that accounts for technological change, the monotonicity
conditions are globally fulfilled and the (constant) output elasticities of land, labor and fertilizer
are 0.463, 0.303, and 0.21, respectively. The estimated (constant) annual rate of technological
progress is around 1.2%. However, both the t-test for the coefficient of the time trend and a
likelihood ratio test give rise to doubts whether the production technology indeed changes over
time (P-values around 10%):
> lrtest( riceCdTimeSfaInv, riceCdSfaInv )
Likelihood ratio test
Model 1: riceCdTimeSfaInv
Model 2: riceCdSfaInv
#Df
LogLik Df
Chisq Pr(>Chisq)
7 -85.074
6 -86.430 -1 2.7122
0.09958 .
--Signif. codes:
Further likelihood ratio tests show that OLS models are clearly rejected in favor of the corresponding stochastic frontier models (no matter whether the production frontier accounts for
technological change or not):
> lrtest( riceCdSfaInv )
Likelihood ratio test
Model 1: OLS (no inefficiency)
Model 2: Error Components Frontier (ECF)
#Df
LogLik Df
5 -104.91
-86.43
Chisq Pr(>Chisq)
1 36.953
6.051e-10 ***
--Signif. codes:
249
LogLik Df
6 -104.103
-85.074
Chisq Pr(>Chisq)
1 38.057
3.434e-10 ***
--Signif. codes:
This model estimates only a single efficiency estimate for each of the 43 firms. Hence, the vector
returned by the efficiencies method only has 43 elements by default:
> length( efficiencies( riceCdSfaInv ) )
[1] 43
One can obtain the efficiency estimates for each observation by setting argument asInData equal
to TRUE:
> pdat$effCdInv <- efficiencies( riceCdSfaInv, asInData = TRUE )
Please note that the efficiency estimates for each firm still do not vary between time periods.
6.2.1.2 Time-variant Individual Efficiencies
Now we estimate a Cobb-Douglas production frontier with time-variant individual efficiencies.
Again, we estimate two Cobb-Douglas production frontiers, the first does not account for technological change, while the second does:
> riceCdSfaVar <- sfa( lProd ~ lArea + lLabor + lNpk,
+
250
Pr(>|z|)
(Intercept) 0.182016
0.035251
lArea
0.474919
0.066213
lLabor
0.300094
0.063872
lNpk
0.199461
0.042740
sigmaSq
0.129957
0.021098
gamma
0.369639
0.104045
time
0.058909
0.030863
1.9087 0.0563017 .
--Signif. codes:
Pr(>|z|)
(Intercept)
0.1817461
0.0358969
lArea
0.4761159
0.0654855
lLabor
0.2987926
0.0646926
251
0.1991403
mYear
-0.0031916
0.0428435
0.0150900 -0.2115
0.83249
sigmaSq
0.1255571
0.0287422
gamma
0.3478592
0.1488417
2.3371
0.01943 *
time
0.0711209
0.0657939
1.0810
0.27971
--Signif. codes:
LogLik Df
Chisq Pr(>Chisq)
8 -84.529
7 -84.550 -1 0.0433
0.8352
A positive sign of the coefficient (named time) indicates that efficiency is increasing over
time. However, in the model without technological change, the t-test for the coefficient and
252
6 -86.43
7 -84.55
Chisq Pr(>Chisq)
1 3.7601
0.05249 .
--Signif. codes:
In the model that accounts for technological change, the t-test for the coefficient and the
corresponding likelihood ratio test indicate that the efficiencies do not change over time:
> lrtest( riceCdTimeSfaInv, riceCdTimeSfaVar )
Likelihood ratio test
Model 1: riceCdTimeSfaInv
Model 2: riceCdTimeSfaVar
#Df
LogLik Df
7 -85.074
8 -84.529
Chisq Pr(>Chisq)
1 1.0912
0.2962
Finally, we can use a likelihood ratio test to simultaneously test whether the technology and the
technical efficiencies change over time:
> lrtest( riceCdSfaInv, riceCdTimeSfaVar )
Likelihood ratio test
Model 1: riceCdSfaInv
Model 2: riceCdTimeSfaVar
#Df
LogLik Df
6 -86.430
8 -84.529
Chisq Pr(>Chisq)
2 3.8034
0.1493
All together, these tests indicate that there is no significant technological change, while it remains
unclear whether the technical efficiencies significantly change over time.
253
(Intercept)
1.00
0.17
-0.12
0.04
0.43
0.47 -0.17
lArea
0.17
1.00
0.02
0.05
lLabor
-0.12 -0.68
0.02
time
1.00 -0.28
0.10
0.09
lNpk
0.02 -0.38
-0.28
1.00
0.02
0.03
0.01 -0.12
mYear
0.04 -0.08
0.10
0.02
1.00
0.70
0.69 -0.87
sigmaSq
0.43
0.02
-0.06
0.03
0.70
1.00
0.94 -0.85
gamma
0.47
0.05
-0.08
0.01
0.69
0.94
1.00 -0.84
time
-0.17
0.09
-0.85 -0.84
1.00
The estimate of the parameter for technological change (mYear) is highly correlated with the
estimate of the parameter that indicates the change of the efficiencies (time).
Again, further likelihood ratio tests show that OLS models are clearly rejected in favor of the
corresponding stochastic frontier models:
> lrtest( riceCdSfaVar )
Likelihood ratio test
Model 1: OLS (no inefficiency)
Model 2: Error Components Frontier (ECF)
#Df
LogLik Df
5 -104.91
-84.55
Chisq Pr(>Chisq)
2 40.713
4.489e-10 ***
--Signif. codes:
LogLik Df
Chisq Pr(>Chisq)
254
6 -104.103
-84.529
2 39.149
9.85e-10 ***
--Signif. codes:
In case of time-variant efficiencies, the efficiencies method returns a matrix, where each row
corresponds to one of the 43 firms and each column corresponds to one of the 0 time periods:
> dim( efficiencies( riceCdSfaVar ) )
[1] 43
One can obtain a vector of efficiency estimates for each observation by setting argument asInData
equal to TRUE:
> pdat$effCdVar <- efficiencies( riceCdSfaVar, asInData = TRUE )
6.2.1.3 Observation-specific efficiencies
Finally, we estimate a Cobb-Douglas production frontier with observation-specific efficiencies.
The following commands estimate two Cobb-Douglas production frontiers, the first does not
account for technological change, while the second does:
> riceCdSfa <- sfa( lProd ~ lArea + lLabor + lNpk, data = riceProdPhil )
> summary( riceCdSfa )
Error Components Frontier (see Battese & Coelli 1992)
Inefficiency decreases the endogenous variable (as in a production function)
The dependent variable is logged
Iterative ML estimation terminated after 9 iterations:
log likelihood values and parameters of two successive iterations
are within the tolerance limit
final maximum likelihood estimates
Estimate Std. Error z value
Pr(>|z|)
(Intercept) 0.333747
lArea
0.355511
0.060125
lLabor
0.333302
0.063026
lNpk
0.271277
0.035364
sigmaSq
0.238627
0.025941
gamma
0.885382
--Signif. codes:
255
data = riceProdPhil )
Pr(>|z|)
(Intercept) 0.3375352
lArea
0.3557511
0.0596403
lLabor
0.3507357
0.0631077
lNpk
0.2565321
0.0351012
mYear
0.0148902
0.0068853
2.1626
sigmaSq
0.2418364
0.0259495
gamma
0.8979766
0.03057 *
--Signif. codes:
The corresponding likelihood ratio test confirms that the model that accounts for technological
change fits the data significantly better than the model without technological change:
> lrtest( riceCdTimeSfa, riceCdSfa )
Likelihood ratio test
Model 1: riceCdTimeSfa
Model 2: riceCdSfa
  #Df  LogLik Df  Chisq Pr(>Chisq)
1   7 -83.767
2   6 -86.203 -1 4.8713    0.02731 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
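The following Translog specifications use mean-scaled input and output quantities. A minimal
sketch of how these variables could be created, assuming that the original (unscaled) quantities
are stored in the columns PROD, AREA, LABOR, and NPK of the data set riceProdPhil:
> riceProdPhil$prod <- riceProdPhil$PROD / mean( riceProdPhil$PROD )
> riceProdPhil$area <- riceProdPhil$AREA / mean( riceProdPhil$AREA )
> riceProdPhil$labor <- riceProdPhil$LABOR / mean( riceProdPhil$LABOR )
> riceProdPhil$npk <- riceProdPhil$NPK / mean( riceProdPhil$NPK )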
We use not only mean-scaled input quantities but also the mean-scaled output quantity in order
to obtain the same estimates as Coelli et al. (2005, p. 250). Please note that the order of the
coefficients/regressors in Coelli et al. (2005, p. 250) is different: intercept, mYear, log(area),
log(labor), log(npk), 0.5*log(area)^2, log(area)*log(labor), log(area)*log(npk), 0.5*log(labor)^2,
log(labor)*log(npk), 0.5*log(npk)^2.
> riceTlSfa <- sfa( log( prod ) ~ log( area ) + log( labor ) + log( npk ) +
+   I( 0.5 * log( area )^2 ) + I( 0.5 * log( labor )^2 ) + I( 0.5 * log( npk )^2 ) +
+   I( log( area ) * log( labor ) ) + I( log( area ) * log( npk ) ) +
+   I( log( labor ) * log( npk ) ), data = riceProdPhil )
> summary( riceTlSfa )
final maximum likelihood estimates
                             Estimate  Std. Error z value  Pr(>|z|)
(Intercept)                3.3719e-01
log(area)                  5.3429e-01  7.9139e-02
log(labor)                 2.0910e-01  7.4439e-02  2.8090 0.0049699 **
log(npk)                   2.2145e-01  4.5141e-02
I(0.5 * log(area)^2)      -5.1502e-01
I(0.5 * log(labor)^2)     -5.6134e-01
I(0.5 * log(npk)^2)       -7.1029e-05
I(log(area) * log(labor))  6.2604e-01  1.7284e-01
I(log(area) * log(npk))    8.1749e-02  1.3867e-01  0.5895 0.5555218
I(log(labor) * log(npk))  -1.5750e-01
sigmaSq                    2.1856e-01  2.4990e-02
gamma                      8.6930e-01
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
> riceTlTimeSfa <- sfa( log( prod ) ~ log( area ) + log( labor ) + log( npk ) +
+   I( 0.5 * log( area )^2 ) + I( 0.5 * log( labor )^2 ) + I( 0.5 * log( npk )^2 ) +
+   I( log( area ) * log( labor ) ) + I( log( area ) * log( npk ) ) +
+   I( log( labor ) * log( npk ) ) + mYear, data = riceProdPhil )
> summary( riceTlTimeSfa )
final maximum likelihood estimates
                            Estimate Std. Error z value  Pr(>|z|)
(Intercept)                0.3423626
log(area)                  0.5313816  0.0786313
log(labor)                 0.2308950  0.0744167  3.1027 0.0019174 **
log(npk)                   0.2032741  0.0448189
I(0.5 * log(area)^2)      -0.4758612
I(0.5 * log(labor)^2)     -0.5644708
I(0.5 * log(npk)^2)       -0.0072200
I(log(area) * log(labor))  0.6088402  0.1658019
I(log(area) * log(npk))    0.0617400  0.1383298  0.4463 0.6553627
I(log(labor) * log(npk))  -0.1370538
mYear                      0.0151111  0.0069164  2.1848 0.0289024 *
sigmaSq                    0.2217092  0.0251305
gamma                      0.8835549
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Again, a likelihood ratio test indicates that the Translog model that accounts for technological
change fits the data significantly better than the Translog model without technological change:
> lrtest( riceTlTimeSfa, riceTlSfa )
Likelihood ratio test
Model 1: riceTlTimeSfa
Model 2: riceTlSfa
  #Df  LogLik Df  Chisq Pr(>Chisq)
1  13 -74.410
2  12 -76.954 -1 5.0884    0.02409 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Moreover, likelihood ratio tests clearly reject the Cobb-Douglas functional form in favor of the
Translog functional form, both in the models without technological change and in the models
with technological change:
> lrtest( riceTlSfa, riceCdSfa )
Likelihood ratio test
Model 1: riceTlSfa
Model 2: riceCdSfa
  #Df  LogLik Df  Chisq Pr(>Chisq)
1  12 -76.954
2   6 -86.203 -6 18.497   0.005103 **
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
> lrtest( riceTlTimeSfa, riceCdTimeSfa )
Likelihood ratio test
Model 1: riceTlTimeSfa
Model 2: riceCdTimeSfa
  #Df  LogLik Df  Chisq Pr(>Chisq)
1  13 -74.410
2   7 -83.767 -6 18.714   0.004674 **
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
> riceTlTimeNnSfa <- sfa( log( prod ) ~ log( area ) + log( labor ) + log( npk ) +
+   I( 0.5 * log( area )^2 ) + I( 0.5 * log( labor )^2 ) + I( 0.5 * log( npk )^2 ) +
+   I( log( area ) * log( labor ) ) + I( log( area ) * log( npk ) ) +
+   I( log( labor ) * log( npk ) ) + mYear + I( mYear * log( area ) ) +
+   I( mYear * log( labor ) ) + I( mYear * log( npk ) ) + I( 0.5 * mYear^2 ),
+   data = riceProdPhil )
> summary( riceTlTimeNnSfa )
final maximum likelihood estimates
                            Estimate Std. Error z value  Pr(>|z|)
(Intercept)                0.3106562  0.0314015
log(area)                  0.5126722  0.0786138
log(labor)                 0.2380479  0.0745020  3.1952 0.0013974 **
log(npk)                   0.2151253  0.0443911
I(0.5 * log(area)^2)      -0.5095014
I(0.5 * log(labor)^2)     -0.5394564
I(0.5 * log(npk)^2)        0.0212605  0.0923038  0.2303 0.8178341
I(log(area) * log(labor))  0.6132444  0.1687103
I(log(area) * log(npk))    0.0683940  0.1436778  0.4760 0.6340580
I(log(labor) * log(npk))  -0.1590173
mYear                      0.0090026  0.0074634  1.2062 0.2277309
I(mYear * log(area))       0.0050527  0.0235232  0.2148 0.8299271
I(mYear * log(labor))      0.0241186  0.0254828  0.9465 0.3439117
I(mYear * log(npk))       -0.0335256
I(0.5 * mYear^2)           0.0149772  0.0069298  2.1613 0.0306744 *
sigmaSq                    0.2227259  0.0243844
gamma                      0.8957679
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Finally, we compare the Translog production frontier that can account for non-constant rates of
technological change and biased (non-neutral) technological change with the two simpler Translog
specifications:
> lrtest( riceTlTimeNnSfa, riceTlSfa )
Likelihood ratio test
Model 1: riceTlTimeNnSfa
Model 2: riceTlSfa
  #Df  LogLik Df  Chisq Pr(>Chisq)
1  17 -70.592
2  12 -76.954 -5 12.725     0.0261 *
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
> lrtest( riceTlTimeNnSfa, riceTlTimeSfa )
Likelihood ratio test
Model 1: riceTlTimeNnSfa
Model 2: riceTlTimeSfa
  #Df  LogLik Df  Chisq Pr(>Chisq)
1  17 -70.592
2  13 -74.410 -4  7.636     0.1059
These tests indicate that the Translog production frontier that can account for non-constant
rates of technological change as well as biased technological change is superior to the Translog
production frontier that does not account for any technological change, but that it is not
significantly better than the Translog production frontier that accounts for constant and neutral
technological change. Although the most flexible specification thus seems to be unnecessarily
complex, we use it in our further analysis for demonstrative purposes.
The following commands create short-cuts for some of the estimated coefficients and calculate
the rates of technological change at each observation:
> at <- coef(riceTlTimeNnSfa)["mYear"]
> atArea <- coef(riceTlTimeNnSfa)["I(mYear * log(area))"]
> atLabor <- coef(riceTlTimeNnSfa)["I(mYear * log(labor))"]
> atNpk <- coef(riceTlTimeNnSfa)["I(mYear * log(npk))"]
> att <- coef(riceTlTimeNnSfa)["I(0.5 * mYear^2)"]
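The rate of technological change at each observation is the partial derivative of the logged
Translog production frontier with respect to time, i.e. at + atArea * log(area) + atLabor *
log(labor) + atNpk * log(npk) + att * mYear. A minimal sketch of this calculation, which stores
the result in a (hypothetical) new variable tc:
> riceProdPhil$tc <- at + atArea * log( riceProdPhil$area ) +
+   atLabor * log( riceProdPhil$labor ) + atNpk * log( riceProdPhil$npk ) +
+   att * riceProdPhil$mYear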
The following command visualizes the variation of the individual rates of technological change:
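For instance, a histogram could be drawn with the following command, which assumes that the
rates of technological change were stored in the variable tc as in the sketch above:
> hist( riceProdPhil$tc, xlab = "technological change" )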
[Figure: histogram of the individual rates of technological change; x-axis: technological change
(about -0.05 to 0.10), y-axis: frequency (0 to 30)]
The total factor productivity of a firm can change over time due to:
• technological change,
• the firm's technical efficiency (TE), which might change if the firm's distance to the current
technology changes, and
• the firm's scale efficiency (SE), which might change if the firm's size relative to the optimal
firm size changes.
Hence, changes of a firm's (or a sector's) total factor productivity (TFP) can be decomposed into
technological changes (TC), technical efficiency changes (∆TE), and scale efficiency changes
(∆SE):

∆TFP = TC + ∆TE + ∆SE   (6.11)

This decomposition often helps to understand the reasons for improved or reduced total factor
productivity and competitiveness.
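For example, under this additive decomposition, a firm whose technology improves by 2%
(TC = 0.02), whose technical efficiency decreases by 1% (∆TE = -0.01), and whose scale efficiency
increases by 0.5% (∆SE = 0.005) experiences a TFP growth of 0.02 - 0.01 + 0.005 = 0.015,
i.e. 1.5% (these figures are purely illustrative).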
[Output: a matrix with one row per observation (the first 14 rows shown) that lists the indices of
up to four reference observations (e.g. 189, 303, 333), padded with NA where an observation has
fewer reference observations.]
[Output: the corresponding matrix of weights of the reference observations, with one column per
reference observation (L227, L256, L320, L331, L332, L333, ...); most entries are 0.]
[Output: a matrix with the columns sx2, sx3, ...; most entries are 0, apparently the input slacks
of the individual observations.]
Bibliography
Aigner, D., C.A.K. Lovell, and P. Schmidt. 1977. Formulation and Estimation of Stochastic
Frontier Production Function Models. Journal of Econometrics 6:21–37.
Battese, G.E., and T.J. Coelli. 1995. A Model for Technical Inefficiency Effects in a Stochastic
Frontier Production Function for Panel Data. Empirical Economics 20:325–332.
Bogetoft, P., and L. Otto. 2011. Benchmarking with DEA, SFA, and R, vol. 157 of International
Series in Operations Research & Management Science. Springer.
Chambers, R.G. 1988. Applied Production Analysis. A Dual Approach. Cambridge University
Press, Cambridge.
Chand, R., and J.L. Kaul. 1986. A Note on the Use of the Cobb-Douglas Profit Function.
American Journal of Agricultural Economics 68:162–164.
Chiang, A.C. 1984. Fundamental Methods of Mathematical Economics, 3rd ed. McGraw-Hill.
Coelli, T.J. 1995. Estimators and Hypothesis Tests for a Stochastic Frontier Function: A Monte
Carlo Analysis. Journal of Productivity Analysis 6:247–268.
Coelli, T.J., D.S.P. Rao, C.J. O'Donnell, and G.E. Battese. 2005. An Introduction to Efficiency
and Productivity Analysis, 2nd ed. New York: Springer.
Croissant, Y., and G. Millo. 2008. Panel Data Econometrics in R: The plm Package. Journal
of Statistical Software 27:1–43.
Czekaj, T., and A. Henningsen. 2012. Comparing Parametric and Nonparametric Regression
Methods for Panel Data: the Optimal Size of Polish Crop Farms. FOI Working Paper No.
2012/12, Institute of Food and Resource Economics, University of Copenhagen.
Hayfield, T., and J.S. Racine. 2008. Nonparametric Econometrics: The np Package. Journal of
Statistical Software 27:1–32.
Henning, C.H.C.A., and A. Henningsen. 2007. Modeling Farm Households' Price Responses in
the Presence of Transaction Costs and Heterogeneity in Labor Markets. American Journal of
Agricultural Economics 89:665–681.
Hurvich, C.M., J.S. Simonoff, and C.L. Tsai. 1998. Smoothing Parameter Selection in Nonparametric
Regression Using an Improved Akaike Information Criterion. Journal of the Royal
Statistical Society Series B 60:271–293.
Ivaldi, M., N. Ladoux, H. Ossard, and M. Simioni. 1996. Comparing Fourier and Translog
Specifications of Multiproduct Technology: Evidence from an Incomplete Panel of French
Farmers. Journal of Applied Econometrics 11:649–667.
Kleiber, C., and A. Zeileis. 2008. Applied Econometrics with R. New York: Springer.
Li, Q., and J.S. Racine. 2007. Nonparametric Econometrics: Theory and Practice. Princeton:
Princeton University Press.
McClelland, J.W., M.E. Wetzstein, and W.N. Musser. 1986. Returns to Scale and Size in
Agricultural Economics. Western Journal of Agricultural Economics 11:129–133.
Meeusen, W., and J. van den Broeck. 1977. Efficiency Estimation from Cobb-Douglas Production
Functions with Composed Error. International Economic Review 18:435–444.
Olsen, J.V., and A. Henningsen. 2011. Investment Utilization and Farm Efficiency in Danish
Agriculture. FOI Working Paper No. 2011/13, Institute of Food and Resource Economics,
University of Copenhagen.
Racine, J.S. 2008. Nonparametric Econometrics: A Primer. Foundations and Trends in
Econometrics 3:1–88.
Teetor, P. 2011. R Cookbook. O'Reilly Media.
Zuur, A., E.N. Ieno, and E. Meesters. 2009. A Beginner's Guide to R. Use R!, Springer.