FUNDAMENTAL METHODS OF MATHEMATICAL ECONOMICS
Third Edition

Alpha C. Chiang
Professor of Economics, The University of Connecticut

McGraw-Hill, Inc.

CONTENTS

Preface

Part 1  Introduction
1  The Nature of Mathematical Economics
   1.1  Mathematical versus Nonmathematical Economics
   1.2  Mathematical Economics versus Econometrics
2  Economic Models
   2.1  Ingredients of a Mathematical Model
   2.2  The Real-Number System
   2.3  The Concept of Sets
   2.4  Relations and Functions
   2.5  Types of Function
   2.6  Functions of Two or More Independent Variables
   2.7  Levels of Generality

Part 2  Static (or Equilibrium) Analysis
3  Equilibrium Analysis in Economics
   3.1  The Meaning of Equilibrium
   3.2  Partial Market Equilibrium - A Linear Model
   3.3  Partial Market Equilibrium - A Nonlinear Model
   3.4  General Market Equilibrium
   3.5  Equilibrium in National-Income Analysis
4  Linear Models and Matrix Algebra
   4.1  Matrices and Vectors
   4.2  Matrix Operations
   4.3  Notes on Vector Operations
   4.4  Commutative, Associative, and Distributive Laws
   4.5  Identity Matrices and Null Matrices
   4.6  Transposes and Inverses
5  Linear Models and Matrix Algebra (Continued)
   5.1  Conditions for Nonsingularity of a Matrix
   5.2  Test of Nonsingularity by Use of Determinant
   5.3  Basic Properties of Determinants
   5.4  Finding the Inverse Matrix
   5.5  Cramer's Rule
   5.6  Application to Market and National-Income Models
   5.7  Leontief Input-Output Models
   5.8  Limitations of Static Analysis

Part 3  Comparative-Static Analysis
6  Comparative Statics and the Concept of Derivative
   6.1  The Nature of Comparative Statics
   6.2  Rate of Change and the Derivative
   6.3  The Derivative and the Slope of a Curve
   6.4  The Concept of Limit
   6.5  Digression on Inequalities and Absolute Values
   6.6  Limit Theorems
   6.7  Continuity and Differentiability of a Function
7  Rules of Differentiation and Their Use in Comparative Statics
   7.1  Rules of Differentiation for a Function of One Variable
   7.2  Rules of Differentiation Involving Two or More Functions of the Same Variable
   7.3  Rules of Differentiation Involving Functions of Different Variables
   7.4  Partial Differentiation
   7.5  Applications to Comparative-Static Analysis
   7.6  Note on Jacobian Determinants
8  Comparative-Static Analysis of General-Function Models
   8.1  Differentials
   8.2  Total Differentials
   8.3  Rules of Differentials
   8.4  Total Derivatives
   8.5  Derivatives of Implicit Functions
   8.6  Comparative Statics of General-Function Models
   8.7  Limitations of Comparative Statics

Part 4  Optimization Problems
9  Optimization: A Special Variety of Equilibrium Analysis
   9.1  Optimum Values and Extreme Values
   9.2  Relative Maximum and Minimum: First-Derivative Test
   9.3  Second and Higher Derivatives
   9.4  Second-Derivative Test
   9.5  Digression on Maclaurin and Taylor Series
   9.6  Nth-Derivative Test for Relative Extremum of a Function of One Variable
10  Exponential and Logarithmic Functions
   10.1  The Nature of Exponential Functions
   10.2  Natural Exponential Functions and the Problem of Growth
   10.3  Logarithms
   10.4  Logarithmic Functions
   10.5  Derivatives of Exponential and Logarithmic Functions
   10.6  Optimal Timing
   10.7  Further Applications of Exponential and Logarithmic Derivatives
11  The Case of More than One Choice Variable
   11.1  The Differential Version of Optimization Conditions
   11.2  Extreme Values of a Function of Two Variables
   11.3  Quadratic Forms - An Excursion
   11.4  Objective Functions with More than Two Variables
   11.5  Second-Order Conditions in Relation to Concavity and Convexity
   11.6  Economic Applications
   11.7  Comparative-Static Aspects of Optimization
12  Optimization with Equality Constraints
   12.1  Effects of a Constraint
   12.2  Finding the Stationary Values
   12.3  Second-Order Conditions
   12.4  Quasiconcavity and Quasiconvexity
   12.5  Utility Maximization and Consumer Demand
   12.6  Homogeneous Functions
   12.7  Least-Cost Combination of Inputs
   12.8  Some Concluding Remarks

Part 5  Dynamic Analysis
13  Economic Dynamics and Integral Calculus
   13.1  Dynamics and Integration
   13.2  Indefinite Integrals
   13.3  Definite Integrals
   13.4  Improper Integrals
   13.5  Some Economic Applications of Integrals
   13.6  Domar Growth Model
14  Continuous Time: First-Order Differential Equations
   14.1  First-Order Linear Differential Equations with Constant Coefficient and Constant Term
   14.2  Dynamics of Market Price
   14.3  Variable Coefficient and Variable Term
   14.4  Exact Differential Equations
   14.5  Nonlinear Differential Equations of the First Order and First Degree
   14.6  The Qualitative-Graphic Approach
   14.7  Solow Growth Model
15  Higher-Order Differential Equations
   15.1  Second-Order Linear Differential Equations with Constant Coefficients and Constant Term
   15.2  Complex Numbers and Circular Functions
   15.3  Analysis of the Complex-Root Case
   15.4  A Market Model with Price Expectations
   15.5  The Interaction of Inflation and Unemployment
   15.6  Differential Equations with a Variable Term
   15.7  Higher-Order Linear Differential Equations
16  Discrete Time: First-Order Difference Equations
   16.1  Discrete Time, Differences, and Difference Equations
   16.2  Solving a First-Order Difference Equation
   16.3  The Dynamic Stability of Equilibrium
   16.4  The Cobweb Model
   16.5  A Market Model with Inventory
   16.6  Nonlinear Difference Equations - The Qualitative-Graphic Approach
17  Higher-Order Difference Equations
   17.1  Second-Order Linear Difference Equations with Constant Coefficients and Constant Term
   17.2  Samuelson Multiplier-Acceleration Interaction Model
   17.3  Inflation and Unemployment in Discrete Time
   17.4  Generalizations to Variable-Term and Higher-Order Equations
18  Simultaneous Differential Equations and Difference Equations
   18.1  The Genesis of Dynamic Systems
   18.2  Solving Simultaneous Dynamic Equations
   18.3  Dynamic Input-Output Models
   18.4  The Inflation-Unemployment Model Once More
   18.5  Two-Variable Phase Diagrams
   18.6  Linearization of a Nonlinear Differential-Equation System
   18.7  Limitations of Dynamic Analysis

Part 6  Mathematical Programming
19  Linear Programming
   19.1  Simple Examples of Linear Programming
   19.2  General Formulation of Linear Programs
   19.3  Convex Sets and Linear Programming
   19.4  Simplex Method: Finding the Extreme Points
   19.5  Simplex Method: Finding the Optimal Extreme Point
   19.6  Further Notes on the Simplex Method
20  Linear Programming (Continued)
   20.1  Duality
   20.2  Economic Interpretation of a Dual
   20.3  Activity Analysis: Micro Level
   20.4  Activity Analysis: Macro Level
21  Nonlinear Programming
   21.1  The Nature of Nonlinear Programming
   21.2  Kuhn-Tucker Conditions
   21.3  The Constraint Qualification
   21.4  Kuhn-Tucker Sufficiency Theorem: Concave Programming
   21.5  Arrow-Enthoven Sufficiency Theorem: Quasiconcave Programming
   21.6  Economic Applications
   21.7  Limitations of Mathematical Programming

The Greek Alphabet
Mathematical Symbols
A Short Reading List
Answers to Selected Exercise Problems
Index
CHAPTER ONE
THE NATURE OF MATHEMATICAL ECONOMICS

Mathematical economics is not a distinct branch of economics in the sense that public finance or international trade is. Rather, it is an approach to economic analysis, in which the economist makes use of mathematical symbols in the statement of the problem and also draws upon known mathematical theorems to aid in reasoning. As far as the specific subject matter of analysis goes, it can be micro- or macroeconomic theory, public finance, urban economics, or what not.

Using the term mathematical economics in the broadest possible sense, one may very well say that every elementary textbook of economics today exemplifies mathematical economics insofar as geometrical methods are frequently utilized to derive theoretical results. Conventionally, however, mathematical economics is reserved to describe cases employing mathematical techniques beyond simple geometry, such as matrix algebra, differential and integral calculus, differential equations, difference equations, etc. It is the purpose of this book to introduce the reader to the most fundamental aspects of these mathematical methods—those encountered daily in the current economic literature.

1.1 MATHEMATICAL VERSUS NONMATHEMATICAL ECONOMICS

Since mathematical economics is merely an approach to economic analysis, it should not and does not differ from the nonmathematical approach to economic analysis in any fundamental way. The purpose of any theoretical analysis, regardless of the approach, is always to derive a set of conclusions or theorems from a given set of assumptions or postulates via a process of reasoning. The major difference between "mathematical economics" and "literary economics" lies principally in the fact that, in the former, the assumptions and conclusions are stated in mathematical symbols rather than words and in equations rather than sentences; moreover, in place of literary logic, use is made of mathematical theorems—of which there exists an abundance to draw upon—in the reasoning process. Inasmuch as symbols and words are really equivalents (witness the fact that symbols are usually defined in words), it matters little which is chosen over the other. But it is perhaps beyond dispute that symbols are more convenient to use in deductive reasoning, and certainly are more conducive to conciseness and preciseness of statement.

The choice between literary logic and mathematical logic, again, is a matter of little import, but mathematics has the advantage of forcing analysts to make their assumptions explicit at every stage of reasoning. This is because mathematical theorems are usually stated in the "if-then" form, so that in order to tap the "then" (result) part of the theorem for their use, they must first make sure that the "if" (condition) part does conform to the explicit assumptions adopted.

Granting these points, though, one may still ask why it is necessary to go beyond geometric methods. The answer is that while geometric analysis has the important advantage of being visual, it also suffers from a serious dimensional limitation. In the usual graphical discussion of indifference curves, for instance, the standard assumption is that only two commodities are available to the consumer.
Such a simplifying assumption is not willingly adopted but is forced upon us because the task of drawing a three-dimensional graph is exceedingly difficult and the construction of a four- (or higher) dimensional graph is actually a physical impossibility. To deal with the more general case of 3, 4, or n goods, we must instead resort to the more flexible tool of equations. This reason alone should provide sufficient motivation for the study of mathematical methods beyond geometry.

In short, we see that the mathematical approach has claim to the following advantages: (1) the "language" used is more concise and precise; (2) there exists a wealth of mathematical theorems at our service; (3) in forcing us to state explicitly all our assumptions as a prerequisite to the use of the mathematical theorems, it keeps us from the pitfall of an unintentional adoption of unwanted implicit assumptions; and (4) it allows us to treat the general n-variable case.

Against these advantages, one sometimes hears the criticism that a mathematically derived theory is inevitably unrealistic. However, this criticism is not valid. In fact, the epithet "unrealistic" cannot even be used in criticizing economic theory in general, whether or not the approach is mathematical. Theory is by its very nature an abstraction from the real world. It is a device for singling out only the most essential factors and relationships so that we can study the crux of the problem at hand, free from the many complications that do exist in the actual world. Thus the statement "theory lacks realism" is merely a truism that cannot be accepted as a valid criticism of theory. It then follows logically that it is quite meaningless to pick out any one approach to theory as "unrealistic." For example, the theory of the firm under pure competition is unrealistic, as is the theory of the firm under imperfect competition, but whether these theories are derived mathematically or not is irrelevant and immaterial.

In sum, we might liken the mathematical approach to a "mode of transportation" that can take us from a set of postulates (point of departure) to a set of conclusions (destination) at a good speed. Common sense would tell us that, if you intend to go to a place 2 miles away, you will very likely prefer driving to walking, unless you have time to kill or want to exercise your legs. Similarly, as a theorist who wishes to get to your conclusions more rapidly, you will find it convenient to "drive" the vehicle of mathematical techniques appropriate for your particular purpose. You will, of course, have to take "driving lessons" first; but since the skill thus acquired tends to be of service for a long, long while, the time and effort required would normally be well spent indeed.

For a serious "driver"—to continue with the metaphor—some solid lessons in mathematics are imperative. It is obviously impossible to introduce all the mathematical tools used by economists in a single volume. Instead, we shall concentrate on only those that are mathematically the most fundamental and economically the most relevant. Even so, if you work through this book conscientiously, you should at least become proficient enough to comprehend most of the professional articles you will come across in such periodicals as the American Economic Review, Quarterly Journal of Economics, Journal of Political Economy, Review of Economics and Statistics, and Economic Journal.
Those of you who, through this exposure, develop a serious interest in mathematical economics can then proceed to a more rigorous and advanced study of mathematics.

1.2 MATHEMATICAL ECONOMICS VERSUS ECONOMETRICS

The term "mathematical economics" is sometimes confused with a related term, "econometrics." As the "metric" part of the latter term implies, econometrics is concerned mainly with the measurement of economic data. Hence it deals with the study of empirical observations using statistical methods of estimation and hypothesis testing. Mathematical economics, on the other hand, refers to the application of mathematics to the purely theoretical aspects of economic analysis, with little or no concern about such statistical problems as the errors of measurement of the variables under study.

In the present volume, we shall confine ourselves to mathematical economics. That is, we shall concentrate on the application of mathematics to deductive reasoning rather than inductive study, and as a result we shall be dealing primarily with theoretical rather than empirical material. This is, of course, solely a matter of choice of the scope of discussion, and it is by no means implied that econometrics is less important.

Indeed, empirical studies and theoretical analyses are often complementary and mutually reinforcing. On the one hand, theories must be tested against empirical data for validity before they can be applied with confidence. On the other, statistical work needs economic theory as a guide, in order to determine the most relevant and fruitful direction of research. A classic illustration of the complementary nature of theoretical and empirical studies is found in the study of the aggregate consumption function. The theoretical work of Keynes on the consumption function led to the statistical estimation of the propensity to consume, but the statistical findings of Kuznets and Goldsmith regarding the relative long-run constancy of the propensity to consume (in contradiction to what might be expected from the Keynesian theory), in turn, stimulated the refinement of aggregate consumption theory by Duesenberry, Friedman, and others.*

In one sense, however, mathematical economics may be considered as the more basic of the two: for, to have a meaningful statistical and econometric study, a good theoretical framework—preferably in a mathematical formulation—is indispensable. Hence the subject matter of the present volume should be useful not only for those interested in theoretical economics, but also for those seeking a foundation for the pursuit of econometric studies.

* John M. Keynes, The General Theory of Employment, Interest and Money, Harcourt, Brace and Company, Inc., New York, 1936, Book III; Simon Kuznets, National Income: A Summary of Findings, National Bureau of Economic Research, 1946, p. 53; Raymond Goldsmith, A Study of Saving in the United States, vol. 1, Princeton University Press, Princeton, N.J., 1958, chap. 3; James S. Duesenberry, Income, Saving, and the Theory of Consumer Behavior, Harvard University Press, Cambridge, Mass., 1949; Milton Friedman, A Theory of the Consumption Function, National Bureau of Economic Research, Princeton University Press, Princeton, N.J., 1957.

CHAPTER TWO
ECONOMIC MODELS

As mentioned before, any economic theory is necessarily an abstraction from the real world.
For one thing, the immense complexity of the real economy makes it impossible for us to understand all the interrelationships at once; nor, for that matter, are all these interrelationships of equal importance for the understanding of the particular economic phenomenon under study. The sensible procedure is, therefore, to pick out what appeal to our reason to be the primary factors and relationships relevant to our problem and to focus our attention on these alone. Such a deliberately simplified analytical framework is called an economic model, since it is only a skeletal and rough representation of the actual economy.

2.1 INGREDIENTS OF A MATHEMATICAL MODEL

An economic model is merely a theoretical framework, and there is no inherent reason why it must be mathematical. If the model is mathematical, however, it will usually consist of a set of equations designed to describe the structure of the model. By relating a number of variables to one another in certain ways, these equations give mathematical form to the set of analytical assumptions adopted. Then, through application of the relevant mathematical operations to these equations, we may seek to derive a set of conclusions which logically follow from those assumptions.

Variables, Constants, and Parameters

A variable is something whose magnitude can change, i.e., something that can take on different values. Variables frequently used in economics include price, profit, revenue, cost, national income, consumption, investment, imports, exports, and so on. Since each variable can assume various values, it must be represented by a symbol instead of a specific number. For example, we may represent price by P, profit by π, revenue by R, cost by C, national income by Y, and so forth. When we write P = 3 or C = 18, however, we are "freezing" these variables at specific values (in appropriately chosen units).

Properly constructed, an economic model can be solved to give us the solution values of a certain set of variables, such as the market-clearing level of price, or the profit-maximizing level of output. Such variables, whose solution values we seek from the model, are known as endogenous variables (originating from within). However, the model may also contain variables which are assumed to be determined by forces external to the model, and whose magnitudes are accepted as given data only; such variables are called exogenous variables (originating from without). It should be noted that a variable that is endogenous to one model may very well be exogenous to another. In an analysis of the market determination of wheat price (P), for instance, the variable P should definitely be endogenous; but in the framework of a theory of consumer expenditure, P would become instead a datum to the individual consumer, and must therefore be considered exogenous.

Variables frequently appear in combination with fixed numbers or constants, such as in the expressions 7P or 0.5R. A constant is a magnitude that does not change and is therefore the antithesis of a variable. When a constant is joined to a variable, it is often referred to as the coefficient of that variable. However, a coefficient may be symbolic rather than numerical. We can, for instance, let the symbol a stand for a given constant and use the expression aP in lieu of 7P in a model, in order to attain a higher level of generality (see Sec. 2.7).
This symbol a is a rather peculiar case—it is supposed to represent a given constant, and yet, since we have not assigned to it a specific number, it can take virtually any value. In short, it is a constant that is variable! To identify its special status, we give it the distinctive name parametric constant (or simply parameter).

It must be duly emphasized that, although different values can be assigned to a parameter, it is nevertheless to be regarded as a datum in the model. It is for this reason that people sometimes simply say "constant" even when the constant is parametric. In this respect, parameters closely resemble exogenous variables, for both are to be treated as "givens" in a model. This explains why many writers, for simplicity, refer to both collectively with the single designation "parameters."

As a matter of convention, parametric constants are normally represented by the symbols a, b, c, or their counterparts in the Greek alphabet: α, β, and γ. But other symbols naturally are also permissible. As for exogenous variables, in order that they can be visually distinguished from their endogenous cousins, we shall follow the practice of attaching a subscript 0 to the chosen symbol. For example, if P symbolizes price, then P₀ signifies an exogenously determined price.

Equations and Identities

Variables may exist independently, but they do not really become interesting until they are related to one another by equations or by inequalities. At this juncture we shall discuss equations only.

In economic applications we may distinguish between three types of equation: definitional equations, behavioral equations, and equilibrium conditions.

A definitional equation sets up an identity between two alternate expressions that have exactly the same meaning. For such an equation, the identical-equality sign ≡ (read: "is identically equal to") is often employed in place of the regular equals sign =, although the latter is also acceptable. As an example, total profit is defined as the excess of total revenue over total cost; we can therefore write

π ≡ R − C

A behavioral equation, on the other hand, specifies the manner in which a variable behaves in response to changes in other variables. This may involve either human behavior (such as the aggregate consumption pattern in relation to national income) or nonhuman behavior (such as how total cost of a firm reacts to output changes). Broadly defined, behavioral equations can be used to describe the general institutional setting of a model, including the technological (e.g., production function) and legal (e.g., tax structure) aspects. Before a behavioral equation can be written, however, it is always necessary to adopt definite assumptions regarding the behavior pattern of the variable in question. Consider the two cost functions

(2.1)  C = 75 + 10Q
(2.2)  C = 110 + Q²

where Q denotes the quantity of output. Since the two equations have different forms, the production condition assumed in each is obviously different from the other. In (2.1), the fixed cost (the value of C when Q = 0) is 75, whereas in (2.2) it is 110. The variation in cost is also different. In (2.1), for each unit increase in Q, there is a constant increase of 10 in C. But in (2.2), as Q increases unit after unit, C will increase by progressively larger amounts. Clearly, it is primarily through the specification of the form of the behavioral equations that we give mathematical expression to the assumptions adopted for a model.
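The contrast between the two behavioral equations is easy to tabulate. The following minimal Python sketch (ours, not part of the original text) evaluates (2.1) and (2.2) over the first few output levels:

```python
# Sketch: comparing the two behavioral cost equations (2.1) and (2.2).
def cost_linear(q):
    """Equation (2.1): fixed cost 75, constant marginal increase of 10."""
    return 75 + 10 * q

def cost_quadratic(q):
    """Equation (2.2): fixed cost 110, progressively larger increases."""
    return 110 + q ** 2

for q in range(5):
    print(q, cost_linear(q), cost_quadratic(q))
# Increments under (2.1) are always 10; under (2.2) they are 1, 3, 5, 7, ...
```

The printed increments make the assumed production conditions visible: a constant marginal addition under (2.1), and progressively larger additions under (2.2).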
The third type of equation, equilibrium conditions, has relevance only if our model involves the notion of equilibrium. If so, the equilibrium condition is an equation that describes the prerequisite for the attainment of equilibrium. Two of the most familiar equilibrium conditions in economics are

Q_d = Q_s    [quantity demanded = quantity supplied]
S = I        [intended saving = intended investment]

which pertain, respectively, to the equilibrium of a market model and the equilibrium of the national-income model in its simplest form. Because equations of this type are neither definitional nor behavioral, they constitute a class by themselves.

2.2 THE REAL-NUMBER SYSTEM

Equations and variables are the essential ingredients of a mathematical model. But since the values that an economic variable takes are usually numerical, a few words should be said about the number system. Here, we shall deal only with so-called "real numbers."

Whole numbers such as 1, 2, 3,... are called positive integers; these are the numbers most frequently used in counting. Their negative counterparts −1, −2, −3,... are called negative integers; these can be employed, for example, to indicate subzero temperatures (in degrees). The number 0 (zero), on the other hand, is neither positive nor negative, and is in that sense unique. Let us lump all the positive and negative integers and the number zero into a single category, referring to them collectively as the set of all integers.

Integers, of course, do not exhaust all the possible numbers, for we have fractions, such as 1/2, 2/3, and 5/4, which—if placed on a ruler—would fall between the integers. Also, we have negative fractions, such as −1/2 and −2/5. Together, these make up the set of all fractions.

The common property of all fractional numbers is that each is expressible as a ratio of two integers; thus fractions qualify for the designation rational numbers (in this usage, rational means ratio-nal). But integers are also rational, because any integer n can be considered as the ratio n/1. The set of all integers and the set of all fractions together form the set of all rational numbers.

Once the notion of rational numbers is used, however, there naturally arises the concept of irrational numbers—numbers that cannot be expressed as ratios of a pair of integers. One example is the number √2 = 1.4142..., which is a nonrepeating, nonterminating decimal. Another is the special constant π = 3.1415... (representing the ratio of the circumference of any circle to its diameter), which is again a nonrepeating, nonterminating decimal, as is characteristic of all irrational numbers.

Each irrational number, if placed on a ruler, would fall between two rational numbers, so that, just as the fractions fill in the gaps between the integers on a ruler, the irrational numbers fill in the gaps between rational numbers. The result of this filling-in process is a continuum of numbers, all of which are so-called "real numbers." This continuum constitutes the set of all real numbers, which is often denoted by the symbol R. When the set R is displayed on a straight line (an extended ruler), we refer to the line as the real line.
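As an aside, Python's fractions module mirrors the ratio definition of rational numbers; the sketch below (ours, with an arbitrary bound of 1000 on the denominator) also shows that √2 can only ever be approximated by a ratio of two integers:

```python
import math
from fractions import Fraction

half = Fraction(1, 2)
n = Fraction(7)               # the integer 7, viewed as the ratio 7/1
print(half + n)               # exact rational arithmetic: 15/2

# An irrational number such as sqrt(2) has no exact ratio form;
# any float is itself only a rational approximation of it.
approx = Fraction(math.sqrt(2)).limit_denominator(1000)
print(approx, float(approx))  # a best ratio with denominator <= 1000
```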
In Fig. 2.1 are listed (in the order discussed) all the number sets, arranged in relationship to one another. If we read from bottom to top, however, we find in effect a classificatory scheme in which the set of real numbers is broken down into its component and subcomponent number sets. This figure therefore is a summary of the structure of the real-number system.

[Figure 2.1: the set of all real numbers, broken down into rational numbers (integers and fractions) and irrational numbers.]

Real numbers are all we need for the first 14 chapters of this book, but they are not the only numbers used in mathematics. In fact, the reason for the term "real" is that there are also "imaginary" numbers, which have to do with the square roots of negative numbers. That concept will be discussed later, in Chap. 15.

2.3 THE CONCEPT OF SETS

We have already employed the word "set" several times. Inasmuch as the concept of sets underlies every branch of modern mathematics, it is desirable to familiarize ourselves at least with its more basic aspects.

Set Notation

A set is simply a collection of distinct objects. These objects may be a group of (distinct) numbers, or something else. Thus, all the students enrolled in a particular economics course can be considered a set, just as the three integers 2, 3, and 4 can form a set. The objects in a set are called the elements of the set.

There are two alternative ways of writing a set: by enumeration and by description. If we let S represent the set of three numbers 2, 3, and 4, we can write, by enumeration of the elements,

S = {2, 3, 4}

But if we let I denote the set of all positive integers, enumeration becomes difficult, and we may instead simply describe the elements and write

I = {x | x a positive integer}

which is read as follows: "I is the set of all (numbers) x, such that x is a positive integer." Note that braces are used to enclose the set in both cases. In the descriptive approach, a vertical bar (or a colon) is always inserted to separate the general symbol for the elements from the description of the elements. As another example, the set of all real numbers greater than 2 but less than 5 (call it J) can be expressed symbolically as

J = {x | 2 < x < 5}

Membership in a set is indicated by the symbol ∈ (read: "is an element of"); thus 2 ∈ S. If every element of a set T is also an element of a set S, then T is said to be a subset of S. Using the symbols ⊂ (is contained in) and ⊃ (includes), we may then write T ⊂ S or S ⊃ T.

It is possible that two given sets happen to be subsets of each other. When this occurs, however, we can be sure that these two sets are equal. To state this formally: we can have S₁ ⊂ S₂ and S₂ ⊂ S₁ if and only if S₁ = S₂.
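Python's built-in set type follows the same notation closely. A brief illustrative sketch (ours, not the book's), with the descriptive set restricted to a finite integer range since a program cannot enumerate an unbounded domain:

```python
# Enumeration vs. description, in Python's set notation.
S = {2, 3, 4}                                # by enumeration
J = {x for x in range(100) if 2 < x < 5}     # by description (comprehension),
                                             # over the integers 0..99 only

print(2 in S)              # membership: 2 is an element of S -> True
print({2, 3} <= S)         # subset: {2, 3} is contained in S -> True
print(S <= S and S >= S)   # mutual inclusion implies equality -> True
```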
Note that, whereas the ∈ symbol relates an individual element to a set, the ⊂ symbol relates a subset to a set. As an application of this idea, we may state on the basis of Fig. 2.1 that the set of all integers is a subset of the set of all rational numbers. Similarly, the set of all rational numbers is a subset of the set of all real numbers.

How many subsets can be formed from the five elements in the set S = {1, 3, 5, 7, 9}? First of all, each individual element of S can count as a distinct subset of S, such as {1}, {3}, etc. But so can any pair, triple, or quadruple of these elements, such as {1, 3}, {1, 5},..., {3, 7, 9}, etc. For that matter, the set S itself (with all its five elements) can be considered as one of its own subsets—every element of S is an element of S, and thus the set S itself fulfills the definition of a subset. This is, of course, a limiting case, that from which we get the "largest" possible subset of S, namely, S itself.

At the other extreme, the "smallest" possible subset of S is a set that contains no element at all. Such a set is called the null set, or empty set, denoted by the symbol ∅ or { }. The reason for considering the null set as a subset of S is quite interesting: If the null set is not a subset of S (∅ ⊄ S), then ∅ must contain at least one element x such that x ∉ S. But since by definition the null set has no element whatsoever, we cannot say that ∅ ⊄ S; hence the null set is a subset of S.

Counting all the subsets of S, including the two limiting cases S and ∅, we find a total of 2⁵ = 32 subsets. In general, if a set has n elements, a total of 2ⁿ subsets can be formed from those elements.*

* Given a set with n elements (a, b, c,...), we may first classify its subsets into two categories: one with the element a in it, and one without. Each of these two can be further classified into two subcategories: one with the element b in it, and one without. Note that by considering the second element b, we double the number of categories in the classification from 2 to 4 (= 2²). By the same token, the consideration of the element c will increase the total number of categories to 8 (= 2³). When all n elements are considered, the total number of categories will become the total number of subsets, and that number is 2ⁿ.

It is extremely important to distinguish the symbol ∅ or { } clearly from the notation {0}; the former is devoid of elements, but the latter does contain an element, zero. The null set is unique; there is only one such set in the whole world, and it is considered a subset of any set that can be conceived.

As a third possible type of relationship, two sets may have no elements in common at all. In that case, the two sets are said to be disjoint. For example, the set of all positive integers and the set of all negative integers are disjoint sets. A fourth type of relationship occurs when two sets have some elements in common but some elements peculiar to each. In that event, the two sets are neither equal nor disjoint; also, neither set is a subset of the other.
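The 2ⁿ count is easy to verify by brute force. A small sketch (ours) that enumerates every subset of S = {1, 3, 5, 7, 9} using itertools.combinations:

```python
from itertools import combinations

S = [1, 3, 5, 7, 9]
# One subset for every size k = 0, 1, ..., 5; k = 0 gives the null set,
# k = 5 gives S itself.
subsets = [set(c) for k in range(len(S) + 1) for c in combinations(S, k)]
print(len(subsets))             # 32, i.e., 2**5
print(subsets[0], subsets[-1])  # set() and {1, 3, 5, 7, 9}
```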
In the former, only the elements common to A and B are acceptable, whereas in the latter, membership in either A or B is sufficient to establish membership in the union set, The operator symbols 1 and U—which, incidentally, have the same kind of general status as the symbols, +. +, etc.—therefore have the connotations “and” and “or.” respectively. This point can be better appreciated by comparing the following formal definitions of intersection and union: Intersection: AQ B=(x|x€Aandx € BY Union AUB=(x|x€Aorx € B) Before explaining the complement of a set, let us first introduce the concept of universal set, In a particular context of discussion, if the only numbers used are the set of the first seven positive integers, we may refer to it as the universal set, U, Then, with a given set, say, A = (3, 6.7}, we can define another set 4 (read: “the complement of A”) as the set that contains all the numbers in the universal ECONOMIC MODELS. 1S That is. (x |x © Uand x € 4} = (1.2.4.5) set U which are not in the A et Note that, whereas the symbol U has the connotation “or” and the symbol 9 means “and.” the complement symbol ~ carries the implication of “not.” Example 5 (f U = (5,6,7,8.9} and 4 = (5.6). then A = (7.8.93 Example 6 What is the complement of U? Since every object (number) under consideration is included in the universal set, the complement of U must be empty. Thus 0 = & The three types of set operation can be visualized in the three diagrams of Fig. 2.2, known as Venn diagrams. In diagram a, the points in the upper circle form a set A, and the points in the lower circle form a set B. The union of A and B then consists of the shaded area covering both circles, In diagram b are shown the same two sets (circles). Since their intersection should comprise only the points common to both sets, only the (shaded) overlapping portion of the two circles satisfies the definition. In diagram c. let the points in the rectangle be the universal set and tet A be the set of points in the circle; then the complement set 4 will be the (shaded) area outside the circle, Laws of Set Operations From Fig, 2.2. it may be noted that the shaded area in diagram a represents not only 4 U B but also BU A. Analogously. in diagram 6 the small shaded area is the visual representation not only of A. B but also of BO A, When formalized. Union Intersection Complement AUB AnB A | A 4 \ A | } | i A | : | | | | fo (a) (6. — (e} Figure 22 16 INTRODUCTION this result is known as the commutative law (of unions and intersections) AUB=BUA ANB=BNA These relations are very similar to the algebraic laws a + 6 = 6 + aand aX b= bxa To take the union of three sets 4, B. and C, we first take the union of any two sets and then “union” the resulting set with the third; a similar procedure is applicable to the intersection operation. The results of such operations are illustrated in Fig. 2.3. It is interesting that the order in which the sets are selected for the operation is immaterial. This fact gives rise to the associative law (of unions and intersections): AU(BUC)=(AUB)UC AN(BOC)=(ANB)NC These equations are strongly reminiscent of the algebraic laws a + (b + c) = (a +b) +e and a x (bX c) = (aX b) Xe There is also a law of operation that applies when unions and intersections are used in combination. This is the distributive law (of unions and intersections): AU(BOC)=(AUB)N(AUC) AN(BUC)=(ANB)U(ANC) These resemble the algebraic law a X (b + ¢) = (a X b) + (aX c). Example 7 Verify the distributive law. 
given A = (4,5), B= (3,6,7), and C= (2.3). To verify the first part of the law, we find the left- and right-hand expressions separately: Left: AU(BAC) 3) UG) = 8.4.5} Right: (AV BINA UC) = B,4.5,6,.7)9 2.3.4.5 (3.4.5) AUBUC AN BNC Figure 23 ECONOMIC MODELS 17 Since the two sides yield the same result, the law is ver procedure for the second part of the law, we have Left: AN(BUC)= {4.5} 02367 = © ed. Repeating the Right: (ANB)U(ANC)=BUG=9 Thus the law is again verified. EXERCISE 2.3 1 Write the following in set notation: (a) The set of all real numbers greater than 27. (b) The set of all real numbers greater than 8 but less than 73, 2 Given the sets S = (2,4,6), Ss = (7.2.6). 4,2,6), and following statements are true? (a) 8-5 (3 € (g) $, > Sy (b) S=R? (02) 4 ES, UH OCS. (c) SES; (f) SOR (1) Sy > (1,2) 3 Referring to the four sets given in the preceding problem, find: (a) SU (0) S08 (2) YASS, (bh) SUS, (AISAS, (PSUS US, 4 Which of the following statements are valid? 4), which of the (U)AUA=A (ANE =D (AN ASA (fA UMA - (c) AUS=A_— (g) The complement of 4 is (4) 4UU=U 5 Given 4 = (4.5.6). B 3.4.6.7}, and C = (2.3.6), verify the distributive law, 6 Verify the distributive law by means of Venn diagrams, with different orders of successive shading, 7 Enumerate all the subsets of the set (a. b.c} 8 Enumerate all the subsets of the set S = (1.3.5.7), How many subsets are there altogether? 9 Example 6 shows that @ is the complement of U. But since the null set is a subset of any set, 2 must be a subset of U. Inasmuch as the term “complement of U" implies the notion of being nor in U, whereas the term “subset of U implies the notion of being int L it seems paradoxical for & to be both of these. How do you resolve this paradox? 2.4 RELATIONS AND FUNCTIONS Our discussion of sets was prompted by the usage of that term in connection with the various kinds of numbers in our number system. However. sets can refer as well to objects other than numbers. In particular, we can speak of sets of 18 iwtRODUCTION “ordered pairs" —to be defined presently—which will lead us to the important concepts of relations and functions Ordered Pai In writing a set (a, b}, we do not care about the order in which the elements a and + appear, because by definition (a, 5} = (b, a). The pair of elements a and b is in this case an unordered pair. When the ordering of a and b does carry a significance, however, we can write two different ordered pairs denoted by (a. b) and (b, a). which have the property that (a,b) # (b, a) unless a = b. Similar concepts apply to a set with more than two elements, in which c distinguish between ordered and unordered triples, quadruples, quintuples, and so forth, Ordered pairs, triples, etc., collectively can be called ordered sets Example 1 To show the age and weight of each student in a class, we can form ordered pairs (a, w), in which the first element indicates the age (in years) and the second element indicates the weight (in pounds). Then (19,127) and (127,19) would obviously mean different things. Moreover, the latter ordered pair would hardly fit any student anywhere. Example 2. When we speak of the set of the five finalists in a contest, the order in which they are listed is of no consequence and we have an unordered quintuple. But after they are judged, respectively, as the winner, first runner-up, etc., the list becomes an ordered quintuple, Ordered pairs, like other objects, can be elements of a set. Consider the rectangular (cartesian) coordinate plane in Fig. 
Ordered pairs, like other objects, can be elements of a set. Consider the rectangular (cartesian) coordinate plane in Fig. 2.4, where an x axis and a y axis cross each other at a right angle, dividing the plane into four quadrants. This xy plane is an infinite set of points, each of which represents an ordered pair whose first element is an x value and the second element a y value. Clearly, the point labeled (4, 2) is different from the point (2, 4); thus ordering is significant here.

[Figure 2.4: the rectangular (cartesian) coordinate plane, with quadrants I through IV and sample points such as (2, 2), (4, 2), and (4, 4).]

With this visual understanding, we are ready to consider the process of generation of ordered pairs. Suppose, from two given sets, x = {1, 2} and y = {3, 4}, we wish to form all the possible ordered pairs with the first element taken from set x and the second element taken from set y. The result will, of course, be the set of four ordered pairs (1, 3), (1, 4), (2, 3), and (2, 4). This set is called the cartesian product (named after Descartes), or direct product, of the sets x and y and is denoted by x × y (read: "x cross y"). It is important to remember that, while x and y are sets of numbers, the cartesian product turns out to be a set of ordered pairs. By enumeration, or by description, we may express the cartesian product alternatively as

x × y = {(1, 3), (1, 4), (2, 3), (2, 4)}
or  x × y = {(a, b) | a ∈ x and b ∈ y}

The latter expression may in fact be taken as the general definition of cartesian product for any given sets x and y.

To broaden our horizon, now let both x and y include all the real numbers. Then the resulting cartesian product

(2.3)  x × y = {(a, b) | a ∈ R and b ∈ R}

will represent the set of all ordered pairs with real-valued elements. Besides, each ordered pair corresponds to a unique point in the cartesian coordinate plane of Fig. 2.4, and, conversely, each point in the coordinate plane also corresponds to a unique ordered pair in the set x × y. In view of this double uniqueness, a one-to-one correspondence is said to exist between the set of ordered pairs in the cartesian product (2.3) and the set of points in the rectangular coordinate plane. The rationale for the notation x × y is now easy to perceive; we may associate it with the crossing of the x axis and the y axis in Fig. 2.4. A simpler way of expressing the set x × y in (2.3) is to write it directly as R × R; this is also commonly denoted by R².

Extending this idea, we may also define the cartesian product of three sets x, y, and z as follows:

x × y × z = {(a, b, c) | a ∈ x, b ∈ y, c ∈ z}

which is a set of ordered triples. Furthermore, if the sets x, y, and z each consist of all the real numbers, the cartesian product will correspond to the set of all points in a three-dimensional space. This may be denoted by R × R × R, or more simply, R³. In the following development, all the variables are taken to be real-valued; thus the framework of our discussion will generally be R², or R³,..., or Rⁿ.

Relations and Functions

Since any ordered pair associates a y value with an x value, any collection of ordered pairs—any subset of the cartesian product (2.3)—will constitute a relation between y and x. Given an x value, one or more y values will be specified by that relation. For convenience, we shall now write the elements of x × y generally as (x, y)—rather than as (a, b), as was done in (2.3)—where both x and y are variables.
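itertools.product generates exactly the cartesian product just defined. The sketch below (ours) also previews the idea of a relation by carving a subset out of the product:

```python
from itertools import product

x = {1, 2}
y = {3, 4}
# The cartesian product x × y: every ordered pair (a, b) with a in x, b in y.
pairs = set(product(x, y))
print(pairs)     # {(1, 3), (1, 4), (2, 3), (2, 4)}

# A relation is any subset of the product; e.g., keep pairs with b = a + 2.
relation = {(a, b) for (a, b) in pairs if b == a + 2}
print(relation)  # {(1, 3), (2, 4)}
```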
Example 3  The set {(x, y) | y = 2x} is a set of ordered pairs including, for example, (1, 2), (0, 0), and (−1, −2). It constitutes a relation, and its graphical counterpart is the set of points lying on the straight line y = 2x, as seen in Fig. 2.5.

Example 4  The set {(x, y) | ...

2.5 TYPES OF FUNCTION

Polynomial Functions

The general form of a polynomial function of a single variable x is

y = a₀ + a₁x + a₂x² + ··· + aₙxⁿ

in which each term contains a coefficient as well as a nonnegative-integer power of the variable x. Depending on the value of the integer n, we have several subclasses of polynomial function:

Case of n = 0:  y = a₀                          [constant function]
Case of n = 1:  y = a₀ + a₁x                    [linear function]
Case of n = 2:  y = a₀ + a₁x + a₂x²             [quadratic function]
Case of n = 3:  y = a₀ + a₁x + a₂x² + a₃x³      [cubic function]

and so forth. The superscript indicators of the powers of x are called exponents. The highest power involved, i.e., the value of n, is often called the degree of the polynomial function; a quadratic function, for instance, is a second-degree polynomial, and a cubic function is a third-degree polynomial.* The order in which the several terms appear to the right of the equals sign is inconsequential; they may be arranged in descending order of power instead. Also, even though we have put the symbol y on the left, it is also acceptable to write f(x) in its place.

* In the several equations just cited, the last coefficient (aₙ) is always assumed to be nonzero; otherwise the function would degenerate into a lower-degree polynomial.

When plotted in the coordinate plane, a linear function will appear as a straight line, as illustrated in Fig. 2.8a. When x = 0, the linear function yields y = a₀; thus the ordered pair (0, a₀) is on the line. This gives us the so-called "y intercept" (or vertical intercept), because it is at this point that the vertical axis intersects the line. The other coefficient, a₁, measures the slope (the steepness of incline) of our line. This means that a unit increase in x will result in an increment in y in the amount of a₁. What Fig. 2.8a illustrates is the case of a₁ > 0, involving a positive slope and thus an upward-sloping line; if a₁ < 0, the line will be downward-sloping.

A quadratic function, on the other hand, plots as a parabola—roughly, a curve with a single built-in bump or wiggle. The particular illustration in Fig. 2.8b implies a negative a₂; in the case of a₂ > 0, the curve will "open" the other way, displaying a valley rather than a hill. The graph of a cubic function will, in general, manifest two wiggles, as illustrated in Fig. 2.8c. These functions will be used quite frequently in the economic models discussed below.

Rational Functions

A function in which y is expressed as a ratio of two polynomials in the variable x is known as a rational function (again, meaning ratio-nal). According to this definition, any polynomial function must itself be a rational function, because it can always be expressed as a ratio to 1, which is a constant function. A special rational function that has interesting applications in economics is the function

y = a/x    or    xy = a

which plots as a rectangular hyperbola, as in Fig. 2.8d. Since the product of the two variables is always a fixed constant in this case, this function may be used to represent that special demand curve—with price P and quantity Q on the two axes—for which the total expenditure PQ is constant at all levels of price. (Such a demand curve is the one with a unitary elasticity at each point on the curve.) Another application is to the average fixed cost (AFC) curve. With AFC on one axis and output Q on the other, the AFC curve must be rectangular-hyperbolic because AFC × Q (= total fixed cost) is a fixed constant.

[Figure 2.8: graphs of (a) a linear function, (b) a quadratic function, (c) a cubic function, (d) a rectangular hyperbola, (e) an exponential function, and (f) a logarithmic function.]
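For concreteness, a small sketch (ours, with arbitrarily chosen coefficients) evaluating a linear function, a quadratic with a₂ < 0, and the rectangular hyperbola xy = a:

```python
# Evaluating the low-degree polynomial types and the hyperbola xy = a.
def linear(x, a0=1.0, a1=2.0):                  # y-intercept a0, slope a1
    return a0 + a1 * x

def quadratic(x, a0=0.0, a1=4.0, a2=-1.0):      # a2 < 0: a hill, not a valley
    return a0 + a1 * x + a2 * x ** 2

def hyperbola(x, a=6.0):                        # xy = a; undefined at x = 0
    return a / x

for x in (1, 2, 3, 4):
    print(x, linear(x), quadratic(x), hyperbola(x))
# For the hyperbola, x * y stays at 6 -- a constant total expenditure PQ.
```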
Rather, the curve approaches the axes asymptotically: as y becomes very large, the curve will come ever closer to the Linear Quadratic Slope = a (Case of a2 0) (a () Cubic Rectangular hyperboli o” B10) () (a) Exponential Logarthoe ya" » ~ogy, x (bay ° “0 te) i Figure 28 ECONOMIC MODELS. 27 y axis but never actually reach it, and similarly for the x axis, The axes constitute the asymprores of this function Nonalgebraic Functions Any function expressed in terms of polynomials and/or roots (such as square root) of polynomials is an algebraic function. Accordingly, the functions discussed thus far are all algebraic, A function such as y = ¥x* + 3 is not rational, yet it is algebraic. However, exponential functions such as y = b‘, in which the independent variable appears in the exponent, are nonalgebraic. The closely related logarithmic functions, such as y = log,x, are also nonalgebraic. These two types of function will be explained in detail in Chap. 10, but their general graphic shapes are indicated in Fig. 28¢ and f. Other types of nonalgebraic function are the rrigonometric (or circular) functions, which we shall discuss in Chap. 15 in connection with dynamic analysis. We should add here that nonalgebraic func- tions are also known by the more esoteric name of transcendental functions A Digression on Exponents In discussing polynomial functions, we introduced the term exponents as indica tors of the power to which a variable (or number) is to be raised. The expression 6? means that 6 is to be raised to the second power; that is, 6 is to be multiplied by itself, or 6 = 6 X 6 = 36. In general, we define XU SXK XK KY sv terens and as a special case, we note that x' = x, From the general definition. it follows that exponents obey the following rules: Rule I x™ Xx" = x (for example, x* x x x) Proor x" xx"= (xXx xX x|[xK ax xx) XX XX m+ m terms Rule ; "(e#0) | forexample, “> = m terms mR KKK PROOF == ———— = XXNX KE xm XXxxoXe \ _ — ~ mn ~ terms 28. INTRODUCTION because the m terms in the denominator cancel out m of the m terms in the numerator. Note that the case of x = 0 is ruled out in the statement of this rule. This is because when x = 0, the expression x”/x" would involve division by zero, which is undefined, What if m 0) +dP (c,d >0) Qn. Qs Qua bP (demand) Q. 20rd P (suppiv) PQ Figure 3.1 38 STATIC (OR EQUILIBRIUM) ANALYSIS vertical intercept is seen to be negative. at —c. Why did we want to specify such a negative vertical intercept? The answer is that, in so doing, we force the supply curve to have a positive horizontal intercept at P;, thereby satisfying the proviso stated earlier that supply will not be forthcoming unless the price is positive and sufficiently high. The reader should observe that, contrary to the usual practice, quantity rather than price has been plotted vertically in Fig, 3.1. This, however. is in line with the mathematical convention of placing the dependent variable on the vertical axis. In a different context below, in which the demand curve is viewed from the standpoint of a business firm as describing the average-revenue curve, AR = P = #(Q,). we shall reverse the axes and plot P vertically. With the model thus constructed, the next step is to solve it, i., to obtain the solution values of the three endogenous variables, Q,. Q,. and P. The solution values, to be denoted Q,. @,, and P, are those values that satisfy the three equations in (3.1) simultaneously: i.e., they are the values which, when substituted into the three equations. 
make the latter a set of true statements, In the context of an equilibrium model, those values may also be referred to as the equilibrium values of the said variables. Since Q, = Q,. however. they can be replaced by a single symbol Q. Hence, an equilibrium solution of the model may simply be denoted by an ordered pair (P,Q). In case the solution is not unique, several ordered pairs may each satisfy the system of simultaneous equations: there will then be a solution set with more than one element in it, However, the multiple- equilibrium situation cannot arise in a linear model such as the present one. Solution by Elimination of Variables One way of finding a solution to an equation system is by successive elimination of variables and equations through substitution. In (3.1), the model contains three equations in three variables. However, in view of the equating of Q, and Q, by the equilibrium condition, we can let Q= Q,=Q, and rewrite the model equivalently as follows Q=a-bP Q=-c+adP (3.2) thereby reducing the model to two equations in two variables. Moreover. by substituting the first equation into the second in (3.2), the model can be further reduced to a single equation in a single variable: a- bP = -c+dP or, after subtracting (a + dP) from both sides of the equation and multiplying through by ~ 1, (33) (b+d)P=ate EQUILIBRIUM ANALYSIS IN ECONOMICS 39 This result is also obtainable directly from (3.1) by substituting the second and third equations into the firs Since b + d + 0, it is permissible to divide both sides of (3.3) by (b + d). The result is the solution value of P ate b+d (3.4) = Note that P is—as all solution values should be—expressed entirely in terms of the parameters, which represent given data for the model. Thus P is a determinate value, as it ought to be. Also note that P is positive—as a price should be—because all the four parameters are positive by model specification. To find the equilibrium quantity @ (= Q, = Q,) that corresponds to the value P, simply substitute (3.4) into either equation of (3.2), and then solve the resulting equation. Substituting (3.4) into the demand function, for instance, we can get _ bla +e) +d)-blatc) _ ad~be b+d b+d abe, (35) @= which is again an expression in terms of parameters only. Since the denominator (b + d) is positive, the positivity of @ requires that the numerator (ad — be) be positive as well. Hence, to be economically meaningful, the present model should contain the additional restriction that ad > be The meaning of this restriction can be seen in Fig. 3.1, It is well known that the P and Q of a market model may be determined graphically at the intersection of the demand and supply curves. To have Q > 0 is to require the intersection point to be located above the horizontal axis in Fig. 3.1, which in turn requires the slopes and vertical intercepts of the two curves to fulfill a certain restriction on their relative magnitudes. That restriction, according to (3.5), is ad > be, given that both and d are positive. The intersection of the demand and supply curves in Fig. 3.1, incidentally, is in concept no different from the intersection shown in the Venn diagram of Fig. 2.2b. There is one difference only: instead of the points lying within two circles. the present case involves the points that lie on two lines. Let the set of points on the demand and supply curves be denoted, respectively, by D and S. 
The intersection of the demand and supply curves in Fig. 3.1, incidentally, is in concept no different from the intersection shown in the Venn diagram of Fig. 2.2b. There is one difference only: instead of the points lying within two circles, the present case involves the points that lie on two lines. Let the set of points on the demand and supply curves be denoted, respectively, by D and S. Then, by utilizing the symbol Q (= Q_d = Q_s), the two sets and their intersection can be written

D = {(P, Q) | Q = a − bP}
S = {(P, Q) | Q = −c + dP}

The intersection set contains in this instance only a single element, the ordered pair (P̄, Q̄). The market equilibrium is unique.

EXERCISE 3.2

1  Given the market model

Q_d = Q_s    Q_d = …    Q_s = …

find P̄ and Q̄ by (a) elimination of variables and (b) using formulas (3.4) and (3.5). (Use fractions rather than decimals.)

2  Let the demand and supply functions be as follows:

(a) Q_d = 51 − 3P      (b) Q_d = 30 − 2P
    Q_s = 6P − 10          Q_s = −6 + 5P

find P̄ and Q̄ by elimination of variables. (Use fractions rather than decimals.)

3  According to (3.5), for Q̄ to be positive, it is necessary that the expression (ad − bc) have the same algebraic sign as (b + d). Verify that this condition is indeed satisfied in the models of the preceding two problems.

4  If (b + d) = 0 in the linear market model, can an equilibrium solution be found by using (3.4) and (3.5)? Why or why not?

5  If (b + d) = 0 in the linear market model, what can you conclude regarding the positions of the demand and supply curves in Fig. 3.1? What can you conclude, then, regarding the equilibrium solution?

3.3 PARTIAL MARKET EQUILIBRIUM - A NONLINEAR MODEL

Let the linear demand in the isolated market model be replaced by a quadratic demand function, while the supply function remains linear. Then, if numerical coefficients are employed rather than parameters, a model such as the following may emerge:

(3.6)  Q_d = Q_s
       Q_d = 4 − P²
       Q_s = 4P − 1

As previously, this system of three equations can be reduced to a single equation by elimination of variables (by substitution):

4 − P² = 4P − 1

or

(3.7)  P² + 4P − 5 = 0

This is a quadratic equation because the left-hand expression is a quadratic function of variable P. The major difference between a quadratic equation and a linear one is that, in general, the former will yield two solution values.
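Anticipating the quadratic formula (3.10) derived below, a quick numerical check (ours) of the two solution values of (3.7):

```python
import math

# Roots of P^2 + 4P - 5 = 0, i.e., a = 1, b = 4, c = -5 in (3.9)-(3.10).
a, b, c = 1.0, 4.0, -5.0
disc = b**2 - 4*a*c                  # discriminant, 36 here
roots = ((-b + math.sqrt(disc)) / (2*a),
         (-b - math.sqrt(disc)) / (2*a))
print(roots)   # (1.0, -5.0); only P = 1 is economically admissible
```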
Since the variable f(P) now disappears (having been assigned a zero value), the result is a quadratic equation in the single variable P.* Now that f(P) is restricted to a zero value, only a select number of P values can satisfy (3.7) and qualify as its solution values, namely, those P values at which the parabola in Fig. 3.2 intersects the horizontal axis—on which f(P) is zero. Note that this time the solution values are just P values, not ordered pairs. The solution P values are often referred to as the roots of the quadratic equation f(P) = 0 or, alternatively, as the zeros of the quadratic function f(P).

* The distinction between quadratic function and quadratic equation just discussed can be extended also to cases of polynomials other than quadratic. Thus, a cubic equation results when a cubic function is set equal to zero.

There are two such intersection points in Fig. 3.2, namely, (1, 0) and (-5, 0). As required, the second element of each of these ordered pairs (the ordinate of the corresponding point) shows f(P) = 0 in both cases. The first element of each ordered pair (the abscissa of the point), on the other hand, gives the solution value of P. Here we get two solutions,

P1* = 1    and    P2* = -5

but only the first is economically admissible, as negative prices are ruled out.

The Quadratic Formula

Equation (3.7) has been solved graphically, but an algebraic method is also available. In general, given a quadratic equation in the form

(3.9)    ax² + bx + c = 0        (a ≠ 0)

its two roots can be obtained from the quadratic formula:

(3.10)    x1*, x2* = [-b ± (b² - 4ac)^(1/2)] / 2a

where the + part of the ± sign yields x1* and the - part yields x2*. This widely used formula is derived by means of a process known as "completing the square." First, dividing each term of (3.9) by a results in the equation

x² + (b/a)x + c/a = 0

Subtracting c/a from, and adding b²/4a² to, both sides of the equation, we get

x² + (b/a)x + b²/4a² = b²/4a² - c/a

The left side is now a "perfect square," and thus the equation can be expressed as

(x + b/2a)² = (b² - 4ac)/4a²

or, after taking the square root on both sides,

x + b/2a = ±(b² - 4ac)^(1/2) / 2a

Finally, by subtracting b/2a from both sides, the result in (3.10) is evolved.

Applying the formula to (3.7), where a = 1, b = 4, c = -5, and x = P, the roots are found to be

P1*, P2* = [-4 ± (16 + 20)^(1/2)] / 2 = (-4 ± 6)/2 = 1, -5

which check with the graphical solutions in Fig. 3.2. Again, we reject P2* = -5 on economic grounds and, after omitting the subscript 1, write simply P* = 1. With this information in hand, the equilibrium quantity Q* can readily be found from either the second or the third equation of (3.6) to be Q* = 3.

Another Graphical Solution

One method of graphical solution of the present model has been presented in Fig. 3.2. However, since the quantity variable has been eliminated in deriving the quadratic equation, only P* can be found from that figure. If we are interested in finding P* and Q* simultaneously from a graph, we must instead use a diagram with Q on one axis and P on the other, similar in construction to Fig. 3.1. This is illustrated in Fig. 3.3.

[Figure 3.3: the curves Q = 4 - P² and Q = 4P - 1 in the PQ plane]

Our problem is of course again to find the intersection of two sets of points, namely,

D = {(P, Q) | Q = 4 - P²}    and    S = {(P, Q) | Q = 4P - 1}

If no restriction is placed on the domain and the range, the intersection set will contain two elements, namely,

D ∩ S = {(1, 3), (-5, -21)}

The former is located in quadrant I, and the latter (not drawn) in quadrant III. If the domain and range are restricted to being nonnegative, however, only the first ordered pair (1, 3) can be accepted. Then the equilibrium is again unique.
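The quadratic formula (3.10) translates directly into a few lines of code. The sketch below is illustrative only (the function name and the screening step are my own):

```python
import math

# Minimal sketch: roots of a*x^2 + b*x + c = 0 via the quadratic formula (3.10).
def quadratic_roots(a, b, c):
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                       # no real roots
    r = math.sqrt(disc)
    return [(-b + r) / (2 * a), (-b - r) / (2 * a)]

roots = quadratic_roots(1, 4, -5)       # equation (3.7): P^2 + 4P - 5 = 0
print(roots)                            # [1.0, -5.0]

# Only nonnegative prices are economically admissible:
print([p for p in roots if p >= 0])     # [1.0] -> P* = 1, hence Q* = 4*1 - 1 = 3
```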
Higher-Degree Polynomial Equations

If a system of simultaneous equations reduces not to a linear equation such as (3.3)* or to a quadratic equation such as (3.7), but to a cubic (third-degree polynomial) equation or quartic (fourth-degree polynomial) equation, the roots will be more difficult to find. One useful method which may work is that of factoring the function. For example, the expression x³ - x² - 4x + 4 can be written as the product of three factors (x - 1), (x + 2), and (x - 2). Thus the cubic equation

x³ - x² - 4x + 4 = 0

can be written after factoring as

(x - 1)(x + 2)(x - 2) = 0

In order for the left-hand product to be zero, at least one of the three terms in the product must be zero. Setting each term equal to zero in turn, we get

x - 1 = 0    or    x + 2 = 0    or    x - 2 = 0

These three equations will supply the three roots of the cubic equation, namely,

x1* = 1        x2* = -2        x3* = 2

* Equation (3.3) can be viewed as the result of setting the linear function (b + d)P - (a + c) equal to zero.

The trick is, of course, to discover the appropriate way of factoring. Unfortunately, no general rule exists, and it must therefore remain a matter of trial and error. Generally speaking, however, given an nth-degree polynomial equation f(x) = 0, we can expect exactly n roots, which may be found as follows. First, try to find a constant c1 such that f(x) is divisible by (x + c1). The quotient f(x)/(x + c1) will be a polynomial function of a lesser—(n - 1)st—degree; let us call it g(x). It then follows that

f(x) = (x + c1)g(x)

Now, try to find a constant c2 such that g(x) is divisible by (x + c2). The quotient g(x)/(x + c2) will again be a polynomial function of a lesser—this time (n - 2)nd—degree, say, h(x). Since g(x) = (x + c2)h(x), it follows that

f(x) = (x + c1)g(x) = (x + c1)(x + c2)h(x)

By repeating the process, it will be possible to reduce the original nth-degree polynomial f(x) to a product of exactly n terms:

f(x) = (x + c1)(x + c2) ··· (x + cn)

which, when set equal to zero, will yield n roots. Setting the first factor equal to zero, for example, one gets x1* = -c1. Similarly, the other factors will yield x2* = -c2, x3* = -c3, etc. These results can be more succinctly expressed by employing an index subscript i:

xi* = -ci        (i = 1, 2, ..., n)

Even though only one equation is written, the fact that the subscript can take n different values means that in all there are n equations involved. Thus the index subscript provides a very concise way of statement.
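If a convenient factoring does not suggest itself, numerical root-finding is a practical fallback. A minimal sketch, assuming the numpy library is available; the cubic is the one just factored:

```python
import numpy as np

# Minimal sketch: numerical roots of x^3 - x^2 - 4x + 4 = 0.
# numpy.roots takes the polynomial coefficients in descending order of powers.
coeffs = [1, -1, -4, 4]
print(np.roots(coeffs))   # approximately [-2. 2. 1.], matching the factoring above
```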
EXERCISE 3.3

1  Find the zeros of the following functions graphically:
(a) f(x) = x² - 7x + 10        (b) g(x) = 2x² - 4x - 16

2  Solve the preceding problem by the quadratic formula.

3  Solve the following polynomial equations by factoring:
(a) P² + 4P - 5 = 0  [see (3.7)]        (c) x³ - 7 …
(b) x³ + 2x² - 4x - 8 = 0               (d) 3x² …

4  Find a cubic function with roots 7, -2, and 5.

5  Find the equilibrium solution for each of the following models:
(a) Qd = Qs                (b) Qd = Qs
    Qd = 3 - P²                Qd = 8 - P²
    Qs = 6P - 4                Qs = P² - 2

6  The market equilibrium condition, Qd = Qs, is often expressed in an equivalent alternative form, Qd - Qs = 0, which has the economic interpretation "excess demand is zero." Does (3.7) represent this latter version of the equilibrium condition? If not, supply an appropriate economic interpretation for (3.7).

3.4 GENERAL MARKET EQUILIBRIUM

The last two sections dealt with models of an isolated market, wherein the Qd and Qs of a commodity are functions of the price of that commodity alone. In the actual world, though, no commodity ever enjoys (or suffers) such a hermitic existence; for every commodity, there would normally exist many substitutes and complementary goods. Thus a more realistic depiction of the demand function of a commodity should take into account the effect not only of the price of the commodity itself but also of the prices of most, if not all, of the related commodities. The same also holds true for the supply function. Once the prices of other commodities are brought into the picture, however, the structure of the model itself must be broadened so as to be able to yield the equilibrium values of these other prices as well. As a result, the price and quantity variables of multiple commodities must enter endogenously into the model en masse.

In an isolated-market model, the equilibrium condition consists of only one equation, Qd = Qs, or E = Qd - Qs = 0, where E stands for excess demand. When several interdependent commodities are simultaneously considered, equilibrium would require the absence of excess demand for each and every commodity included in the model, for if so much as one commodity is faced with an excess demand, the price adjustment of that commodity will necessarily affect the quantities demanded and quantities supplied of the remaining commodities, thereby causing price changes all around. Consequently, the equilibrium condition of an n-commodity market model will involve n equations, one for each commodity, in the form

(3.11)    Ei = Qdi - Qsi = 0        (i = 1, 2, ..., n)

If a solution exists, there will be a set of prices Pi* and corresponding quantities Qi* such that all the n equations in the equilibrium condition will be simultaneously satisfied.

Two-Commodity Market Model

To illustrate the problem, let us discuss a simple model in which only two commodities are related to each other. For simplicity, the demand and supply functions of both commodities are assumed to be linear. In parametric terms, such a model can be written as

(3.12)    Qd1 - Qs1 = 0
          Qd1 = a0 + a1P1 + a2P2
          Qs1 = b0 + b1P1 + b2P2
          Qd2 - Qs2 = 0
          Qd2 = α0 + α1P1 + α2P2
          Qs2 = β0 + β1P1 + β2P2

where the a and b coefficients pertain to the demand and supply functions of the first commodity, and the α and β coefficients are assigned to those of the second. We have not bothered to specify the signs of the coefficients, but in the course of analysis certain restrictions will emerge as a prerequisite to economically sensible results. Also, in a subsequent numerical example, some comments will be made on the specific signs to be given the coefficients.

As a first step toward the solution of this model, we can again resort to elimination of variables. By substituting the second and third equations into the first (for the first commodity) and the fifth and sixth equations into the fourth (for the second commodity), the model is reduced to two equations in two variables:

(3.13)    (a0 - b0) + (a1 - b1)P1 + (a2 - b2)P2 = 0
          (α0 - β0) + (α1 - β1)P1 + (α2 - β2)P2 = 0

These represent the two-commodity version of (3.11), after the demand and supply functions have been substituted into the two equilibrium-condition equations.

Although this is a simple system of only two equations, as many as 12 parameters are involved, and algebraic manipulations will prove unwieldy unless some sort of shorthand is introduced. Let us therefore define the shorthand symbols

ci = ai - bi        γi = αi - βi        (i = 0, 1, 2)

Then (3.13) becomes—after transposing the c0 and γ0 terms to the right-hand side of the equals sign:

(3.13')    c1P1 + c2P2 = -c0
           γ1P1 + γ2P2 = -γ0

which may be solved by further elimination of variables. From the first equation, it can be found that P2 = -(c0 + c1P1)/c2.
Substituting this into the second equation and solving, we get

(3.14)    P1* = (c2γ0 - c0γ2) / (c1γ2 - c2γ1)

Note that P1* is entirely expressed, as a solution value should be, in terms of the data (parameters) of the model. By a similar process, the equilibrium price of the second commodity is found to be

(3.15)    P2* = (c0γ1 - c1γ0) / (c1γ2 - c2γ1)

For these two values to make sense, however, certain restrictions should be imposed on the model. First, since division by zero is undefined, we must require the common denominator of (3.14) and (3.15) to be nonzero, that is, c1γ2 ≠ c2γ1. Second, to assure positivity, the numerator must have the same sign as the denominator.

The equilibrium prices having been found, the equilibrium quantities Q1* and Q2* can readily be calculated by substituting (3.14) and (3.15) into the second (or third) equation and the fifth (or sixth) equation of (3.12). These solution values will naturally also be expressed in terms of the parameters. (Their actual calculation is left to you as an exercise.)

Numerical Example

Suppose that the demand and supply functions are numerically as follows:

(3.16)    Qd1 = 10 - 2P1 + P2
          Qs1 = -2 + 3P1
          Qd2 = 15 + P1 - P2
          Qs2 = -1 + 2P2

What will be the equilibrium solution? Before answering the question, let us take a look at the numerical coefficients. For each commodity, Qsi is seen to depend on Pi alone, but Qdi is shown as a function of both prices. Note that while P1 has a negative coefficient in Qd1, as we would expect, the coefficient of P2 is positive. The fact that a rise in P2 tends to raise Qd1 suggests that the two commodities are substitutes for each other. The role of P1 in the Qd2 function has a similar interpretation.

With these coefficients, the shorthand symbols ci and γi will take the following values:

c0 = 10 - (-2) = 12        c1 = -2 - 3 = -5        c2 = 1 - 0 = 1
γ0 = 15 - (-1) = 16        γ1 = 1 - 0 = 1          γ2 = -1 - 2 = -3

and substitution into (3.14) and (3.15) yields

P1* = [1(16) - 12(-3)] / [(-5)(-3) - 1(1)] = 52/14 = 26/7
P2* = [12(1) - (-5)(16)] / [(-5)(-3) - 1(1)] = 92/14 = 46/7

from which the supply functions give Q1* = 64/7 and Q2* = 85/7. Thus all the equilibrium values turn out positive, as required. In order to preserve the exact values of P1* and P2* to be used in the further calculation of Q1* and Q2*, it is advisable to express them as fractions rather than decimals.

Could we have obtained the equilibrium prices graphically? The answer is yes. From (3.13), it is clear that a two-commodity model can be summarized by two equations in two variables P1 and P2. With known numerical coefficients, both equations can be plotted in the P1P2 coordinate plane, and the intersection of the two curves will then pinpoint P1* and P2*.
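Equivalently, (3.13') is just a two-equation linear system, so the numerical example can be checked with any linear solver. A sketch using numpy (illustrative only, not part of the text):

```python
import numpy as np
from fractions import Fraction

# Minimal sketch: solve (3.13') for the numerical model (3.16).
# c1*P1 + c2*P2 = -c0 and g1*P1 + g2*P2 = -g0, with the values computed above.
A = np.array([[-5.0, 1.0],
              [ 1.0, -3.0]])
rhs = np.array([-12.0, -16.0])
P1, P2 = np.linalg.solve(A, rhs)
print(Fraction(P1).limit_denominator(),
      Fraction(P2).limit_denominator())   # 26/7 46/7

# Equilibrium quantities from the supply functions in (3.16):
print(-2 + 3 * P1)   # Q1* = 64/7
print(-1 + 2 * P2)   # Q2* = 85/7
```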
The above discussion of the multicommodity market has been limited to the case of two commodities, but it should be apparent that we are already moving from partial-equilibrium analysis in the direction of general-equilibrium analysis. As more commodities enter into a model, there will be more variables and more equations, and the equations will get longer and more complicated. If all the commodities in an economy are included in a comprehensive market model, the result will be a Walrasian type of general-equilibrium model, in which the excess demand for every commodity is considered to be a function of the prices of all the commodities in the economy.

Some of the prices may, of course, carry zero coefficients when they play no role in the determination of the excess demand of a particular commodity; e.g., in the excess-demand function of pianos, the price of popcorn may well have a zero coefficient. In general, however, with n commodities in all, we may express the demand and supply functions as follows (using Qdi and Qsi as function symbols in place of f and g):

(3.17)    Qdi = Qdi(P1, P2, ..., Pn)
          Qsi = Qsi(P1, P2, ..., Pn)        (i = 1, 2, ..., n)

In view of the index subscript, these two equations represent the totality of the 2n functions which the model contains. (These functions are not necessarily linear.) Moreover, the equilibrium condition is itself composed of a set of n equations,

(3.18)    Qdi - Qsi = 0        (i = 1, 2, ..., n)

When (3.18) is added to (3.17), the model becomes complete. You should therefore count a total of 3n equations. Upon substitution of (3.17) into (3.18), however, the model can be reduced to a set of n simultaneous equations only:

Qdi(P1, P2, ..., Pn) - Qsi(P1, P2, ..., Pn) = 0        (i = 1, 2, ..., n)

Besides, inasmuch as Ei = Qdi - Qsi, where Ei is necessarily also a function of all the n prices, the above set of equations may be written alternatively as

Ei(P1, P2, ..., Pn) = 0        (i = 1, 2, ..., n)

Solved simultaneously, these n equations will determine the n equilibrium prices Pi*—if a solution does indeed exist. And then the Qi* may be derived from the demand or supply functions.

Solution of a General-Equation System

If a model comes equipped with numerical coefficients, as in (3.16), the equilibrium values of the variables will be in numerical terms, too. On a more general level, if a model is expressed in terms of parametric constants, as in (3.12), the equilibrium values will also involve parameters and will hence appear as "formulas," as exemplified by (3.14) and (3.15). If, for greater generality, even the function forms are left unspecified in a model, however, as in (3.17), the manner of expressing the solution values will of necessity be exceedingly general as well. Drawing upon our experience in parametric models, we know that a solution value is always an expression in terms of the parameters. For a general-function model containing, say, a total of m parameters (a1, a2, ..., am)—where m is not necessarily equal to n—the n equilibrium prices can therefore be expected to take the general analytical form of

(3.19)    Pi* = Pi*(a1, a2, ..., am)        (i = 1, 2, ..., n)

This is a symbolic statement to the effect that the solution value of each variable (here, price) is a function of the set of all parameters of the model. As this is a very general statement, it really does not give much detailed information about the solution. But in the general analytical treatment of some types of problem, even this seemingly uninformative way of expressing a solution will prove of use, as will be seen in a later chapter.

Writing such a solution is an easy task. But an important catch exists: the expression in (3.19) can be justified if and only if a unique solution does indeed exist, for then and only then can we map the ordered m-tuple (a1, a2, ..., am) into a determinate value for each price Pi*. Yet, unfortunately for us, there is no a priori reason to presume that every model will automatically yield a unique solution. In this connection, it needs to be emphasized that the process of "counting equations and unknowns" does not suffice as a test. Some very simple examples should convince us that an equal number of equations and unknowns (endogenous variables) does not necessarily guarantee the existence of a unique solution.
Consider the three simultaneous-equation systems

(3.20)    x + y = 8
          x + y = 9

(3.21)    2x + y = 12
          4x + 2y = 24

(3.22)    2x + 3y = 58
               y = 18
          x +  y = 20

In (3.20), despite the fact that two unknowns are linked together by exactly two equations, there is nevertheless no solution. These two equations happen to be inconsistent, for if the sum of x and y is 8, it cannot possibly be 9 at the same time. In (3.21), another case of two equations in two variables, the two equations are functionally dependent, which means that one can be derived from (and is implied by) the other. (Here, the second equation is equal to two times the first equation.) Consequently, one equation is redundant and may be dropped from the system, leaving in effect only one equation in two unknowns. The solution will then be the equation y = 12 - 2x, which yields not a unique ordered pair (x*, y*) but an infinite number of them, including (0, 12), (1, 10), (2, 8), etc., all of which satisfy that equation. Lastly, the case of (3.22) involves more equations than unknowns, yet the ordered pair (2, 18) does constitute the unique solution to it. The reason is that, in view of the existence of functional dependence among the equations (the first is equal to the second plus twice the third), we have in effect only two independent, consistent equations in two variables.

These simple examples should suffice to convey the importance of consistency and functional independence as the two prerequisites for application of the process of counting equations and unknowns. In general, in order to apply that process, make sure that (1) the satisfaction of any one equation in the model will not preclude the satisfaction of another and (2) no equation is redundant.
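For linear systems, these two prerequisites can be checked mechanically by comparing matrix ranks, a tool developed formally in Chapter 5. As a preview, a sketch (assuming numpy; the function is my own) applied to (3.20) through (3.22):

```python
import numpy as np

# Minimal sketch: diagnose the systems (3.20)-(3.22) by comparing the rank of
# the coefficient matrix with the rank of the augmented matrix.
def diagnose(A, d):
    A = np.asarray(A, dtype=float)
    aug = np.column_stack([A, d])
    r_A = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(aug)
    if r_A < r_aug:
        return "inconsistent: no solution"
    if r_A < A.shape[1]:
        return "dependent: infinitely many solutions"
    return "unique solution"

print(diagnose([[1, 1], [1, 1]], [8, 9]))                 # (3.20): inconsistent
print(diagnose([[2, 1], [4, 2]], [12, 24]))               # (3.21): dependent
print(diagnose([[2, 3], [0, 1], [1, 1]], [58, 18, 20]))   # (3.22): unique solution
```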
In (3.17), for example, the n demand and n supply functions may safely be assumed to be independent of one another, each being derived from a different source—each demand from the decisions of a group of consumers, and each supply from the decisions of a group of firms. Thus each function serves to describe one facet of the market situation, and none is redundant. Mutual consistency may perhaps also be assumed. In addition, the equilibrium-condition equations in (3.18) are also independent and presumably consistent. Therefore the analytical solution as written in (3.19) can in general be considered justifiable.*

For simultaneous-equation models, there exist systematic methods of testing the existence of a unique (or determinate) solution. These would involve, for linear models, an application of the concept of determinants, to be introduced in Chap. 5. In the case of nonlinear models, such a test would also require a knowledge of so-called "partial derivatives" and a special type of determinant called the Jacobian determinant, which will be discussed in Chaps. 7 and 8.

* This is essentially the way that Léon Walras approached the problem of the existence of general market equilibrium. In the modern literature, there can be found a number of sophisticated mathematical proofs of the existence of a competitive market equilibrium under certain postulated economic conditions. But the mathematics used is advanced. The easiest one to understand is perhaps the proof given in Robert Dorfman, Paul A. Samuelson, and Robert M. Solow, Linear Programming and Economic Analysis, McGraw-Hill Book Company, New York, 1958, chapter 13, which you should read after having studied Part 6 of the present volume.

EXERCISE 3.4

1  Work out the step-by-step solution of (3.13'), thereby verifying the results in (3.14) and (3.15).

2  Rewrite (3.14) and (3.15) in terms of the original parameters of the model in (3.12).

3  The demand and supply functions of a two-commodity market model are as follows:

Qd1 = 18 - 3P1 + P2        Qd2 = 12 + P1 - 2P2
Qs1 = -2 + 4P1             Qs2 = -2 + 3P2

Find Pi* and Qi* (i = 1, 2). (Use fractions rather than decimals.)

3.5 EQUILIBRIUM IN NATIONAL-INCOME ANALYSIS

Even though the discussion of static analysis has hitherto been restricted to market models in various guises—linear and nonlinear, one-commodity and multicommodity, specific and general—it, of course, has applications in other areas of economics also. As a simple example, we may cite the familiar Keynesian national-income model,

(3.23)    Y = C + I0 + G0
          C = a + bY        (a > 0,  0 < b < 1)

where Y and C stand for the endogenous variables national income and (planned) consumption expenditure, respectively, and I0 and G0 represent the exogenously determined investment and government expenditures. The first equation is an equilibrium condition (national income = total planned expenditure), while the second, the consumption function, is behavioral; the parameters a and b represent autonomous consumption and the marginal propensity to consume, respectively.

Solving these two simultaneous equations by substituting the second into the first, we obtain the equilibrium values of the endogenous variables:

Y* = (a + I0 + G0) / (1 - b)        C* = [a + b(I0 + G0)] / (1 - b)

Since b is specified to be a positive fraction, the denominator (1 - b) is nonzero and positive, so that Y* and C* are determinate and positive, as they should be.

EXERCISE 3.5

1  Given the following model:

Y = C + I0 + G0
C = a + b(Y - T)        (a > 0,  0 < b < 1)
T = d + tY              (d > 0,  0 < t < 1)        [t: income tax rate]

(a) How many endogenous variables are there?
(b) Find Y*, T*, and C*.

2  Let the national-income model be:

Y = C + I0 + G
C = a + b(Y - T0)       (a > 0,  0 < b < 1)
G = gY                  (0 < g < 1)

(a) How many endogenous variables are there?
(b) Find Y* and C*.
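The equilibrium-income formulas are easy to evaluate for any admissible parameter values. A minimal sketch (the parameter values are arbitrary illustrations, not from the text):

```python
# Minimal sketch: equilibrium income and consumption for the model (3.23).
def income_equilibrium(a, b, I0, G0):
    """Return (Y*, C*) for Y = C + I0 + G0, C = a + b*Y, with 0 < b < 1."""
    y_star = (a + I0 + G0) / (1 - b)
    c_star = (a + b * (I0 + G0)) / (1 - b)
    return y_star, c_star

Y, C = income_equilibrium(a=50, b=0.8, I0=100, G0=60)
print(Y, C)   # Y* = 210/0.2 = 1050, C* = 890; note C* = Y* - I0 - G0
```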
CHAPTER FOUR

LINEAR MODELS AND MATRIX ALGEBRA

For the simple models of Chap. 3, the solutions could be found by ordinary algebraic elimination. As more variables and equations appear, however, that procedure quickly becomes cumbersome, and a more systematic method is wanted. Matrix algebra serves this purpose: it enables us (1) to write an equation system, however large, in a compact way; (2) to test for the existence of a solution; and (3) to find that solution if it exists. Since matrix algebra is applicable only to linear-equation systems, however, its use presupposes either that the model is linear to begin with or that an acceptable linear approximation to it can be made.

4.1 MATRICES AND VECTORS

The two-commodity market model of Chap. 3 can be written—after eliminating the quantity variables—as a system of two linear equations, as in (3.13'). In general, a system of m linear equations in n variables (x1, x2, ..., xn) can be arranged into the format

(4.1)    a11x1 + a12x2 + ··· + a1nxn = d1
         a21x1 + a22x2 + ··· + a2nxn = d2
         .....................................
         am1x1 + am2x2 + ··· + amnxn = dm

In (4.1), the double-subscripted coefficient aij appears in the ith equation attached to the jth variable xj, and di denotes the constant term in the ith equation. For example, the system of three equations in three variables

(4.3)    6x1 + 3x2 + x3 = 22
         x1 + 4x2 - 2x3 = 12
         4x1 - x2 + 5x3 = 10

fits this format. There are essentially three ingredients in such a system: the set of coefficients aij, the set of variables, and the set of constant terms. If the three sets are arranged, respectively, as three rectangular arrays, we obtain

(4.4)    A = | 6  3  1 |        x = | x1 |        d = | 22 |
             | 1  4 -2 |            | x2 |            | 12 |
             | 4 -1  5 |            | x3 |            | 10 |

Each of these arrays constitutes a matrix: a rectangular array of numbers, parameters, or variables, referred to as the elements of the matrix. The numbers of rows and columns together define a matrix's dimension; A above is 3 × 3 (read: "three by three"), while x and d are each 3 × 1. A matrix with the same number of rows as columns, such as A, is called a square matrix. A matrix containing only one column, such as x or d, is given the special name column vector, and a matrix with only one row is called a row vector. To distinguish it from a column vector, a row vector is usually marked with a prime, as in x' = [x1 x2 x3]. A row or column vector with n elements is in effect an ordered n-tuple. Finally, two matrices are equal if and only if they have the same dimension and identical elements in the corresponding locations.

4.2 MATRIX OPERATIONS

Addition and Subtraction of Matrices

Two matrices can be added if and only if they have the same dimension. When this dimensional requirement is met, the matrices are added element by element.

Example 1    | 4  9 |   | 2  0 |   | 6  9 |
             | 2  1 | + | 0  7 | = | 2  8 |

Example 2    | a11 a12 |   | b11 b12 |   | a11 + b11   a12 + b12 |
             | a21 a22 | + | b21 b22 | = | a21 + b21   a22 + b22 |
             | a31 a32 |   | b31 b32 |   | a31 + b31   a32 + b32 |

In general, we may state the rule thus: [aij] + [bij] = [cij], where cij = aij + bij. Note that the sum matrix [cij] must have the same dimension as the component matrices [aij] and [bij]. The subtraction operation A - B can be similarly defined if and only if A and B have the same dimension. The operation entails the result [aij] - [bij] = [dij], where dij = aij - bij.

Example 3    | 9  3 |   | 6  8 |   | 9-6  3-8 |   | 3 -5 |
             | 2  0 | - | 1  3 | = | 2-1  0-3 | = | 1 -3 |

The subtraction operation A - B may be considered alternatively as an addition operation involving a matrix A and another matrix (-1)B. This, however, raises the question of what is meant by the multiplication of a matrix by a single number (here, -1).

Scalar Multiplication

To multiply a matrix by a number—or in matrix-algebra terminology, by a scalar—is to multiply every element of that matrix by the given scalar.

Example 4    7 | 3 -1 |   | 21 -7 |
               | 0  5 | = |  0 35 |

Example 5    (1/2) | a11 a12 |   | a11/2  a12/2 |
                   | a21 a22 | = | a21/2  a22/2 |

From these examples, the rationale of the name scalar should become clear, for it "scales up (or down)" the matrix by a certain multiple. The scalar can, of course, be a negative number as well.

Example 6    -1 | a11 a12 d1 |   | -a11 -a12 -d1 |
                | a21 a22 d2 | = | -a21 -a22 -d2 |

Note that if the matrix on the left represents the coefficients and the constant terms in the simultaneous equations

a11x1 + a12x2 = d1
a21x1 + a22x2 = d2

then multiplication by the scalar -1 will amount to multiplying both sides of both equations by -1, thereby changing the sign of every term in the system.

Multiplication of Matrices

Whereas a scalar can be used to multiply a matrix of any dimension, the multiplication of two matrices is contingent upon the satisfaction of a different dimensional requirement. Suppose that, given two matrices A and B, we want to find the product AB. The conformability condition for multiplication is that the column dimension of A (the "lead" matrix in the expression AB) must be equal to the row dimension of B (the "lag" matrix). For instance, if

(4.5)    A = [a11  a12]        B = | b11  b12  b13 |
             (1 × 2)               | b21  b22  b23 |
                                   (2 × 3)

the product AB then is defined, since A has two columns and B has two rows—precisely the same number.* This can be checked at a glance by comparing the second number in the dimension indicator for A, which is (1 × 2), with the first number in the dimension indicator for B, (2 × 3). On the other hand, the reverse product BA is not defined in this case, because B (now the lead matrix) has three columns while A (the lag matrix) has only one row; hence the conformability condition is violated.

* The matrix A, being a row vector, would normally be denoted by a'. We use the symbol A here to stress the fact that the multiplication rule being explained applies to matrices in general, not only to the product of one vector and one matrix.

In general, if A is of dimension m × n and B is of dimension p × q, the matrix product AB will be defined if and only if n = p. If defined, moreover, the product matrix AB will have the dimension m × q—the same number of rows as the lead matrix A and the same number of columns as the lag matrix B. For the matrices given in (4.5), AB will be 1 × 3.

It remains to define the exact procedure of multiplication. For this purpose, let us take the matrices A and B in (4.5) for illustration. Since the product AB is defined and is expected to be of dimension 1 × 3, we may write in general (using the symbol C rather than c' for the row vector) that

AB = C = [c11  c12  c13]

Each element in the product matrix C, denoted by cij, is defined as a sum of products, to be computed from the elements in the ith row of the lead matrix A and those in the jth column of the lag matrix B. To find c11, for instance, we should take the first row in A (since i = 1) and the first column in B (since j = 1)—as shown in the top panel of Fig. 4.2—and then pair the elements together sequentially, multiply out each pair, and take the sum of the resulting products, to get

(4.6)    c11 = a11b11 + a12b21

Similarly, for c12, we take the first row in A (since i = 1) and the second column in B (since j = 2), and calculate the indicated sum of products—in accordance with the lower panel of Fig. 4.2—as follows:

(4.6')    c12 = a11b12 + a12b22

By the same token, we should also have

(4.6'')    c13 = a11b13 + a12b23

[Figure 4.2: the sequential pairing of elements in the computation of c11 and c12]

It is the particular pairing requirement in this process which necessitates the matching of the column dimension of the lead matrix and the row dimension of the lag matrix before multiplication can be performed.

The multiplication procedure illustrated in Fig. 4.2 can also be described by using the concept of the inner product of two vectors. Given two vectors u and v with n elements each, say, (u1, u2, ..., un) and (v1, v2, ..., vn), arranged either as two rows or as two columns or as one row and one column, their inner product, written as u · v, is defined as

u · v = u1v1 + u2v2 + ··· + unvn

This is a sum of products of corresponding elements, and hence the inner product of two vectors is a scalar. If, for instance, we prepare after a shopping trip a vector of quantities purchased of n goods and a vector of their prices (listed in the corresponding order), then their inner product will give the total purchase cost. Note that the inner-product concept is exempted from the conformability condition, since the arrangement of the two vectors in rows or columns is immaterial. Using this concept, we can describe the element cij in the product matrix C = AB simply as the inner product of the ith row of the lead matrix A and the jth column of the lag matrix B. By examining Fig. 4.2, we can easily verify the validity of this description.
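The conformability rule and the sum-of-products definition (4.6) are exactly what a matrix-product routine implements. A sketch with the 1 × 2 and 2 × 3 shapes of (4.5), using numpy and illustrative numbers of my own choosing:

```python
import numpy as np

# Minimal sketch: a (1x2) lead matrix times a (2x3) lag matrix gives a 1x3 product.
A = np.array([[2, 5]])            # illustrative a11, a12
B = np.array([[1, 4, 0],
              [3, 2, 7]])         # illustrative b_kj

print((A @ B).shape)   # (1, 3): m x q from an (m x n)(n x q) product
print(A @ B)           # [[17 18 35]]; e.g. c11 = 2*1 + 5*3 = 17, as in (4.6)
# B @ A would raise a ValueError: B has 3 columns but A has only 1 row.
```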
The rule of multiplication outlined above applies with equal validity when the dimensions of A and B are other than those illustrated above; the only prerequisite is that the conformability condition be met.

Example 7  Given A = | 3  5 | and B = | -1  0 |, find AB. The product AB is obviously defined, and will be 2 × 2:
                     | 4  6 |         |  4  7 |

AB = | 3(-1) + 5(4)    3(0) + 5(7) |   | 17  35 |
     | 4(-1) + 6(4)    4(0) + 6(7) | = | 20  42 |

Example 8  Given

A = | 1  3 |        b = | 5 |
    | 2  8 |            | 9 |
    | 4  0 |            (2 × 1)
    (3 × 2)

find Ab. This time the product matrix should be 3 × 1, that is, a column vector:

Ab = | 1(5) + 3(9) |   | 32 |
     | 2(5) + 8(9) | = | 82 |
     | 4(5) + 0(9) |   | 20 |

Example 9  Given

A = | 3 -1  2 |        B = |  0  -1/5   3/10 |
    | 1  0  3 |            | -1   1/5   7/10 |
    | 4  0  2 |            |  0   2/5  -1/10 |
    (3 × 3)                (3 × 3)

find AB. The same rule of multiplication now yields a very special product matrix:

AB = | 0+1+0    -3/5 - 1/5 + 4/5    9/10 - 7/10 - 2/10 |   | 1  0  0 |
     | 0+0+0    -1/5 +  0  + 6/5    3/10 +  0  - 3/10  | = | 0  1  0 |
     | 0+0+0    -4/5 +  0  + 4/5   12/10 +  0  - 2/10  |   | 0  0  1 |

This last matrix—a square matrix with 1s in its principal diagonal (the diagonal running from northwest to southeast) and 0s everywhere else—exemplifies the important type of matrix known as the identity matrix. This will be further discussed below.

Example 10  Let us now take the matrix A and the vector x as defined in (4.4) and find Ax. The product matrix is a 3 × 1 column vector:

Ax = | 6  3  1 | | x1 |   | 6x1 + 3x2 + x3  |
     | 1  4 -2 | | x2 | = | x1 + 4x2 - 2x3  |
     | 4 -1  5 | | x3 |   | 4x1 - x2 + 5x3  |
     (3 × 3)    (3 × 1)   (3 × 1)

Repeat: the product on the right is a column vector, its corpulent appearance notwithstanding! When we write Ax = d, therefore, we have

| 6x1 + 3x2 + x3  |   | 22 |
| x1 + 4x2 - 2x3  | = | 12 |
| 4x1 - x2 + 5x3  |   | 10 |

which, according to the definition of matrix equality, is equivalent to the statement of the entire equation system in (4.3). Note that, to use the matrix notation Ax = d, it is necessary, because of the conformability condition, to arrange the variables xj into a column vector, even though these variables are listed in a horizontal order in the original equation system.

Example 11  The simple national-income model in two endogenous variables Y and C,

Y = C + I0 + G0
C = a + bY

can be rearranged into the standard format of (4.1) as follows:

Y - C = I0 + G0
-bY + C = a

Hence the coefficient matrix A, the vector of variables x, and the vector of constants d are:

A = |  1  -1 |        x = | Y |        d = | I0 + G0 |
    | -b   1 |            | C |            |    a    |
    (2 × 2)               (2 × 1)          (2 × 1)

Let us verify that this given system can be expressed by the equation Ax = d. By the rule of matrix multiplication, we have

Ax = |  1(Y) + (-1)(C) |   |  Y - C  |
     | -b(Y) +  1(C)   | = | -bY + C |

Thus the matrix equation Ax = d would give us

|  Y - C  |   | I0 + G0 |
| -bY + C | = |    a    |

Since matrix equality means the equality between corresponding elements, it is clear that the equation Ax = d does precisely represent the original equation system, as expressed in the (4.1) format above.

The Question of Division

While matrices, like numbers, can undergo the operations of addition, subtraction, and multiplication—subject to the conformability conditions—it is not possible to divide one matrix by another. That is, we cannot write A/B. For two numbers a and b, the quotient a/b (with b ≠ 0) can be written alternatively as ab⁻¹ or b⁻¹a, where b⁻¹ represents the inverse or reciprocal of b. Since ab⁻¹ = b⁻¹a, the quotient expression a/b can be used to represent both ab⁻¹ and b⁻¹a. The case of matrices is different. Applying the concept of inverses to matrices, we may in certain cases (discussed below) define a matrix B⁻¹ that is the inverse of matrix B. But from the discussion of the conformability condition it follows that, if AB⁻¹ is defined, there can be no assurance that B⁻¹A is also defined.
Even if AB⁻¹ and B⁻¹A are indeed both defined, they still may not represent the same product. Hence the expression A/B cannot be used without ambiguity, and it must be avoided. Instead, you must specify whether you are referring to AB⁻¹ or B⁻¹A—provided that the inverse B⁻¹ does exist and that the matrix product in question is defined. Inverse matrices will be further discussed below.

Digression on Σ Notation

The use of subscripted symbols not only helps in designating the locations of parameters and variables but also lends itself to a flexible shorthand for denoting sums of terms, such as those which arose during the process of matrix multiplication. The summation shorthand makes use of the Greek letter Σ (sigma, for "sum"). To express the sum of x1, x2, and x3, for instance, we may write

x1 + x2 + x3 = Σ(j=1 to 3) xj

which is read: "the sum of xj as j ranges from 1 to 3." The symbol j, called the summation index, takes only integer values. The expression xj represents the summand (that which is to be summed), and it is in effect a function of j. Aside from the letter j, summation indices are also commonly denoted by i or k, such as

Σ(i=3 to 7) xi = x3 + x4 + x5 + x6 + x7
Σ(k=0 to n) xk = x0 + x1 + x2 + ··· + xn

The application of Σ notation can be readily extended to cases in which the x term is prefixed with a coefficient or in which each term in the sum is raised to some integer power. For instance, we may write:

Σ(j=1 to 3) axj = ax1 + ax2 + ax3 = a(x1 + x2 + x3) = a Σ(j=1 to 3) xj
Σ(j=1 to 3) ajxj = a1x1 + a2x2 + a3x3
Σ(i=0 to n) aix^i = a0 + a1x + a2x² + ··· + anx^n

The last example, in particular, shows that the expression Σ(i=0 to n) aix^i can in fact be used as a shorthand form of the general polynomial function of (2.4). It may be mentioned in passing that, whenever the context of the discussion leaves no ambiguity as to the range of summation, the symbol Σ can be used alone, without an index attached (such as Σxj), or with only the index letter underneath (such as Σj xj).

Let us apply the Σ shorthand to matrix multiplication. In (4.6), (4.6'), and (4.6''), each element of the product matrix C = AB is defined as a sum of terms, which may now be rewritten as follows:

c11 = a11b11 + a12b21 = Σ(k=1 to 2) a1k bk1
c12 = a11b12 + a12b22 = Σ(k=1 to 2) a1k bk2
c13 = a11b13 + a12b23 = Σ(k=1 to 2) a1k bk3

In each case, the first subscript of cij is reflected in the first subscript of aik, and the second subscript of cij is reflected in the second subscript of bkj in the Σ expression. The index k, on the other hand, is a "dummy" subscript; it serves to indicate which particular pair of elements is being multiplied, but it does not show up in the symbol cij.

Extending this to the multiplication of an m × n matrix A = [aik] and an n × p matrix B = [bkj], we may now write the elements of the m × p product matrix AB = C = [cij] as

c11 = Σ(k=1 to n) a1k bk1        c12 = Σ(k=1 to n) a1k bk2        etc.

or more generally,

cij = Σ(k=1 to n) aik bkj        (i = 1, 2, ..., m;  j = 1, 2, ..., p)

This last equation represents yet another way of stating the rule of multiplication for the matrices defined above.
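The Σ expression for cij is precisely what a hand-rolled matrix product computes. A sketch (using the matrices of Example 7; the explicit loop is mine, shown only to mirror the formula):

```python
import numpy as np

# Minimal sketch: c_ij = sum over k of a_ik * b_kj, checked against numpy.
A = np.array([[3, 5],
              [4, 6]])
B = np.array([[-1, 0],
              [ 4, 7]])

C = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        C[i, j] = sum(A[i, k] * B[k, j] for k in range(2))  # the Σ expression

print(C)        # [[17. 35.], [20. 42.]] -- matches Example 7
print(A @ B)    # numpy's built-in matrix product gives the same result
```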
EXERCISE 4.2

1  Given A = […], B = […], and C = […], find: (a) A + B  (b) C - A  (c) 3A  (d) 4B + 2C

2  Given A = […], B = […], and C = […]:
(a) Is AB defined? Calculate AB. Can you calculate BA? Why?
(b) Is BC defined? Calculate BC. Is CB defined? If so, calculate CB. Is it true that BC = CB?

3  On the basis of the matrices given in Example 9, is the product BA defined? If so, calculate the product. In this case do we have AB = BA?

4  Find the product matrices in the following (in each case, append beneath every matrix a dimension indicator): (a) …  (b) …  (c) …  (d) …

5  Expand the following summation expressions:
(a) Σ(i=2 to 4) xi  (b) Σ(i=5 to 8) aixi  (c) …  (d) …  (e) …

6  Rewrite the following in Σ notation:
(a) x1(x1 - 1) + 2x2(x2 - 1) + 3x3(x3 - 1)
(b) a2(x3 + 2) + a3(x4 + 3) + a4(x5 + 4)
(c) …

7  Show that the following are true:
(a) (Σ(i=0 to n) xi) + xn+1 = Σ(i=0 to n+1) xi
(b) Σ(j=1 to n) abjyj = a Σ(j=1 to n) bjyj
(c) Σ(j=1 to n) (xj + yj) = Σ(j=1 to n) xj + Σ(j=1 to n) yj

4.3 NOTES ON VECTOR OPERATIONS

In the above, vectors are considered as special types of matrix. As such, they qualify for the application of all the algebraic operations discussed. Owing to their dimensional peculiarities, however, some additional comments on vector operations are useful.

Multiplication of Vectors

An m × 1 column vector u and a 1 × n row vector v' yield a product matrix uv' of dimension m × n.

Example 1  Given u = | 3 | and v' = [1  4  5], we can get
                     | 2 |

uv' = | 3(1)  3(4)  3(5) |   | 3  12  15 |
      | 2(1)  2(4)  2(5) | = | 2   8  10 |

Since each row in u consists of one element only, as does each column in v', each element of uv' turns out to be a single product instead of a sum of products. The product uv' is a 2 × 3 matrix, even though we started out only with two vectors. On the other hand, given a 1 × n row vector u' and an n × 1 column vector v, the product u'v will be of dimension 1 × 1.

Example 2  Given u' = [3  4] and v = | 9 |, we have u'v = [3(9) + 4(7)] = [55]
                                     | 7 |

As written, u'v is a matrix, despite the fact that only a single element is present. However, 1 × 1 matrices behave exactly like scalars with respect to addition and multiplication: [4] + [8] = [12], just as 4 + 8 = 12; and [3][7] = [21], just as 3(7) = 21. Moreover, 1 × 1 matrices possess no major properties that scalars do not have. In fact, there is a one-to-one correspondence between the set of all scalars and the set of all 1 × 1 matrices whose elements are scalars. For this reason, we may redefine u'v to be the scalar corresponding to the 1 × 1 product matrix. For the above example, we can accordingly write u'v = 55. Such a product is called a scalar product.* Remember, however, that while a 1 × 1 matrix can be treated as a scalar, a scalar cannot be replaced by a 1 × 1 matrix at will if further calculation is to be carried out, unless conformability conditions are fulfilled.

Example 3  Given a row vector u' = [3  6  9], find u'u. Since u is merely the column vector with the elements of u' arranged vertically, we have

u'u = [3  6  9] | 3 | = 3² + 6² + 9²
                | 6 |
                | 9 |

where we have omitted the brackets from the 1 × 1 product matrix on the right. Note that the product u'u gives the sum of squares of the elements of u. In general, if u' = [u1  u2  ···  un], then u'u will be the sum of squares (a scalar) of the elements ui:

u'u = u1² + u2² + ··· + un² = Σ(j=1 to n) uj²

Had we calculated the inner product u · u (or u' · u'), we would have, of course, obtained exactly the same result.

To conclude, it is important to distinguish between the meanings of uv' (a matrix larger than 1 × 1) and u'v (a 1 × 1 matrix, or a scalar). Observe, in particular, that a scalar product must have a row vector as the lead matrix and a column vector as the lag matrix; otherwise the product cannot be 1 × 1.

* The concept of scalar product is thus akin to the concept of inner product of two vectors with the same number of elements in each, which also yields a scalar. Recall, however, that the inner product is exempted from the conformability condition for multiplication, so that we may write it as u · v. In the case of a scalar product (denoted without a dot between the two vector symbols), on the other hand, we can express it only as a row vector multiplied by a column vector, with the row vector in the lead.
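The distinction between uv' and u'v is easy to see numerically. A sketch using the vectors of Examples 1 through 3 (assuming numpy):

```python
import numpy as np

# Minimal sketch: outer product u v' versus scalar (inner) product u'v.
u = np.array([3, 2])
v = np.array([1, 4, 5])
print(np.outer(u, v))    # 2x3 matrix: [[3 12 15], [2 8 10]]  (Example 1)

a = np.array([3, 4])
b = np.array([9, 7])
print(np.dot(a, b))      # 55, the scalar product of Example 2

w = np.array([3, 6, 9])
print(np.dot(w, w))      # 126 = 3^2 + 6^2 + 9^2, the sum of squares in Example 3
```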
Geometric Interpretation of Vector Operations

It was mentioned earlier that a column or row vector with n elements (referred to hereafter as an n-vector) can be viewed as an n-tuple, and hence as a point in an n-dimensional space (referred to hereafter as an n-space). Let us elaborate on this idea. In Fig. 4.3a, a point (3, 2) is plotted in a 2-space and is labeled u. This is the geometric counterpart of the vector u = [3  2]' or the vector u' = [3  2], both of which indicate in this context one and the same ordered pair. If an arrow (a directed-line segment) is drawn from the point of origin (0, 0) to the point u, it will specify the unique straight route by which to reach the destination point u from the point of origin. Since a unique arrow exists for each point, we can regard the vector u as graphically represented either by the point (3, 2) or by the corresponding arrow. Such an arrow, which emanates from the origin (0, 0) like the hand of a clock, with a definite length and a definite direction, is called a radius vector.

[Figure 4.3: radius vectors in the 2-space]

Following this new interpretation of a vector, it becomes possible to give geometric meanings to (a) the scalar multiplication of a vector, (b) the addition and subtraction of vectors, and more generally, (c) the so-called "linear combination" of vectors. First, if we plot the vector [6  4]' = 2u in Fig. 4.3a, the resulting arrow will overlap the old one but will be twice as long. In fact, the multiplication of vector u by any scalar k will produce an overlapping arrow, but the arrowhead will be relocated, unless k = 1. If the scalar multiplier is k > 1, the arrow will be extended out (scaled up); if 0 < k < 1, the arrow will be shortened (scaled down); and if k is negative, the direction of the arrow will be reversed as well.

For any two points u and v in a vector space, a distance function d(u, v) may be defined, with the following properties:

d(u, v) = d(v, u)
d(u, v) > 0                          (for u ≠ v)
d(u, v) ≤ d(u, w) + d(w, v)         (for w ≠ u, v)

The last property is known as the triangular inequality, because the three points u, v, and w together will usually define a triangle. When a vector space has a distance function defined that fulfills the above three properties, it is called a metric space. However, note that the distance d(u, v) has been discussed above only in general terms. Depending on the specific form assigned to the d function, there may result a variety of metric spaces. The so-called "euclidean space" is one specific type of metric space, with a distance function defined as follows. Let point u be the n-tuple (a1, a2, ..., an) and point v be the n-tuple (b1, b2, ..., bn); then the euclidean distance function is

d(u, v) = √[(a1 - b1)² + (a2 - b2)² + ··· + (an - bn)²]

where the square root is taken to be positive. As can be easily verified, this specific distance function satisfies all three properties enumerated above. Applied to the two-dimensional space in Fig. 4.3a, the distance between the two points (6, 4) and (3, 2) is found to be

√[(6 - 3)² + (4 - 2)²] = √(3² + 2²) = √13
This result is seen to be consistent with Pythagoras' theorem, which states that the length of the hypotenuse of a right-angled triangle is equal to the (positive) square root of the sum of the squares of the lengths of the other two sides. For if we take (6, 4) and (3, 2) to be u and v, and plot a new point w at (6, 2), we shall indeed have a right-angled triangle with the lengths of its horizontal and vertical sides equal to 3 and 2, respectively, and the length of the hypotenuse (the distance between u and v) equal to √(3² + 2²) = √13.

The euclidean distance function can also be expressed in terms of the square root of a scalar product of two vectors. Since u and v denote the two n-tuples (a1, ..., an) and (b1, ..., bn), we can write a column vector u - v, with elements a1 - b1, a2 - b2, ..., an - bn. What goes under the square-root sign in the euclidean distance function is, of course, simply the sum of squares of these n elements, which, in view of Example 3 above, can be written as the scalar product (u - v)'(u - v). Hence we have

d(u, v) = √[(u - v)'(u - v)]
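The scalar-product form of the distance is one line of code. A sketch with the two points used above (assuming numpy):

```python
import numpy as np

# Minimal sketch: euclidean distance d(u, v) = sqrt((u - v)'(u - v)).
u = np.array([6, 4])
v = np.array([3, 2])

diff = u - v
print(np.sqrt(diff @ diff))      # 3.6055... = sqrt(13)
print(np.linalg.norm(u - v))     # same result via the built-in norm
```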
EXERCISE 4.3

1  Given u' = [5  2  3], v' = [3  1  9], w' = [7  5  8], and x' = [x1  x2  x3], write out the column vectors u, v, w, and x, and find
(a) uv'  (c) xx'  (e) u'v  (g) u'u
(b) uw'  (d) v'w  (f) w'x  (h) x'x

2  Given w = […], x = […], y = […], and z = […]:
(a) Which of the following are defined: w'x, x'y', xy', y'y, zz', yw', x · y?
(b) Find all the products that are defined.

3  Having bought n items of merchandise at quantities Q1, ..., Qn and prices P1, ..., Pn, how would you express the total cost of purchase in (a) Σ notation and (b) vector notation?

4  Given two nonzero vectors w1 and w2, the angle θ (0° ≤ θ ≤ 180°) they form is related to the scalar product w1'w2 (= w2'w1) as follows:

θ is a(n) { acute / right / obtuse } angle if and only if w1'w2 { > / = / < } 0

Verify this by computing the scalar product for each of the following pairs of vectors (see Figs. 4.3 and 4.4): (a) …  (b) …  (c) …  (d) …  (e) …

5  Given u = […] and v = […], find the following graphically:
(a) 2v  (b) u + v  (c) u - v  (d) v - u  (e) 2u + 3v  (f) 4u - 2v

6  Since the 3-space is spanned by the three unit vectors defined in (4.7), any other 3-vector should be expressible as a linear combination of e1, e2, and e3. Show that the following 3-vectors can be so expressed: (a) …  (b) …  (c) …  (d) …

7  In the three-dimensional euclidean space, what is the distance between the following points?
(a) (3, 2, 8) and (0, -1, 5)        (b) (9, 0, 4) and (2, 0, -4)

8  The triangular inequality is written with the weak inequality sign ≤, rather than the strict inequality sign <. Under what circumstances would the "=" part of the inequality apply?

9  Express the length of a radius vector v in the euclidean n-space (i.e., the distance from the origin to point v) in terms of: (a) scalars  (b) a scalar product  (c) an inner product

4.4 COMMUTATIVE, ASSOCIATIVE, AND DISTRIBUTIVE LAWS

In ordinary scalar algebra, the additive and multiplicative operations obey the commutative, associative, and distributive laws as follows:

Commutative law of addition:          a + b = b + a
Commutative law of multiplication:    ab = ba
Associative law of addition:          (a + b) + c = a + (b + c)
Associative law of multiplication:    (ab)c = a(bc)
Distributive law:                     a(b + c) = ab + ac

These have been referred to during the discussion of the similarly named laws applicable to the union and intersection of sets. Most, but not all, of these laws also apply to matrix operations—the significant exception being the commutative law of multiplication.

Matrix Addition

Matrix addition is commutative as well as associative. This follows from the fact that matrix addition calls only for the addition of the corresponding elements of two matrices, and that the order in which each pair of corresponding elements is added is immaterial. In this context, incidentally, the subtraction operation A - B can simply be regarded as the addition operation A + (-B), and thus no separate discussion is necessary.

The commutative and associative laws can be stated as follows:

Commutative law    A + B = B + A

Proof  A + B = [aij] + [bij] = [aij + bij] = [bij + aij] = B + A

Example 1  Given A = [3  1] and B = [6  2], we find that A + B = B + A = [9  3].

Associative law    (A + B) + C = A + (B + C)

Proof  (A + B) + C = [aij + bij] + [cij] = [aij + bij + cij] = [aij] + [bij + cij] = A + (B + C)

Example 2  Given column vectors v1 = […], v2 = […], and v3 = […], we find that (v1 + v2) + v3 yields the same sum vector as v1 + (v2 + v3).

Applied to the linear combination of vectors k1v1 + ··· + knvn, this law permits us to select any pair of terms for addition (or subtraction) first, instead of having to follow the sequence in which the n terms are listed.

Matrix Multiplication

Matrix multiplication is not commutative, that is,

AB ≠ BA

As explained previously, even when AB is defined, BA may not be; but even if both products are defined, the general rule is still AB ≠ BA.

Example 3  Let A = | 1  2 | and B = | 0 -1 |; then
                   | 3  4 |         | 6  7 |

AB = | 1(0) + 2(6)    1(-1) + 2(7) |   | 12  13 |
     | 3(0) + 4(6)    3(-1) + 4(7) | = | 24  25 |

but

BA = | 0(1) - 1(3)    0(2) - 1(4) |   | -3  -4 |
     | 6(1) + 7(3)    6(2) + 7(4) | = | 27  40 |

Example 4  Let u' be 1 × 3 (a row vector); then the corresponding column vector u must be 3 × 1. The product u'u will be 1 × 1, but the product uu' will be 3 × 3. Thus, obviously, u'u ≠ uu'.

In view of the general rule AB ≠ BA, the terms premultiply and postmultiply are often used to specify the order of multiplication. In the product AB, the matrix B is said to be premultiplied by A, and A to be postmultiplied by B.

There do exist interesting exceptions to the rule AB ≠ BA, however. One such case is when A is a square matrix and B is an identity matrix. Another is when A is the inverse of B, that is, when A = B⁻¹. Both of these will be taken up again later. It should also be remarked here that the scalar multiplication of a matrix does obey the commutative law; thus kA = Ak if k is a scalar.

Although it is not in general commutative, matrix multiplication is associative.

Associative law    (AB)C = A(BC) = ABC

In forming the product ABC, the conformability condition must naturally be satisfied by each adjacent pair of matrices. If A is m × n and if C is p × q, then conformability requires that B be n × p:

A          B          C
(m × n)    (n × p)    (p × q)

Note the dual appearance of n and p in the dimension indicators. If the conformability condition is met, the associative law states that any adjacent pair of matrices may be multiplied out first, provided that the product is duly inserted in the exact place of the original pair.

Example 5  If x = | x1 | and A = | a1  0 |, then
                  | x2 |         | 0  a2 |

x'Ax = x'(Ax) = (x'A)x = [a1x1  a2x2] | x1 | = a1x1² + a2x2²
                                      | x2 |

which is a "weighted" sum of squares, in contrast to the simple sum of squares given by x'x.
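Both rules are easy to confirm numerically. A sketch using the matrices of Example 3 plus an arbitrary third matrix of my own (assuming numpy):

```python
import numpy as np

# Minimal sketch: AB != BA in general, while (AB)C = A(BC) always holds.
A = np.array([[1, 2], [3, 4]])
B = np.array([[0, -1], [6, 7]])
C = np.array([[2, 0], [1, 5]])   # arbitrary illustrative matrix

print(A @ B)                                      # [[12 13], [24 25]]
print(B @ A)                                      # [[-3 -4], [27 40]] -- different
print(np.array_equal((A @ B) @ C, A @ (B @ C)))   # True: associativity holds
```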
Matrix multiplication is also distributive.

Distributive law    A(B + C) = AB + AC        [premultiplication by A]
                    (B + C)A = BA + CA        [postmultiplication by A]

In each case, the conformability conditions for addition as well as for multiplication must, of course, be observed.

EXERCISE 4.4

1  Given A = […], B = […], and C = […], verify that
(a) (A + B) + C = A + (B + C)
(b) (A + B) - C = A + (B - C)

2  The subtraction of a matrix B may be considered as the addition of the matrix (-1)B. Does the commutative law of addition permit us to state that A - B = B - A? If not, how would you correct the statement?

3  Test the associative law of multiplication with the following matrices: A = […], B = […], C = […]

4  Prove that for any two scalars g and k:
(a) k(A + B) = kA + kB
(b) (g + k)A = gA + kA

5  Prove that (A + B)(C + D) = AC + AD + BC + BD.

6  If the matrix A in Example 5 had all its four elements nonzero, would x'Ax still give a weighted sum of squares? Would the associative law still apply?

4.5 IDENTITY MATRICES AND NULL MATRICES

Identity Matrices

Reference has been made earlier to the term identity matrix. Such a matrix is defined as a square (repeat: square) matrix with 1s in its principal diagonal and 0s everywhere else. It is denoted by the symbol I, or In, in which the subscript n serves to indicate its row (as well as column) dimension. Thus,

I2 = | 1  0 |        I3 = | 1  0  0 |
     | 0  1 |             | 0  1  0 |
                          | 0  0  1 |

But both of these can also be denoted by I. The importance of this special type of matrix lies in the fact that it plays a role similar to that of the number 1 in scalar algebra. For any number a, we have 1(a) = a(1) = a. Similarly, for any matrix A, we have

(4.8)    IA = AI = A

Example 1  Let A = | 1  2  3 |; then
                   | 2  0  3 |

I2 A = | 1  0 | | 1  2  3 |   | 1  2  3 |
       | 0  1 | | 2  0  3 | = | 2  0  3 | = A

and

A I3 = | 1  2  3 | | 1  0  0 |   | 1  2  3 |
       | 2  0  3 | | 0  1  0 | = | 2  0  3 | = A
                   | 0  0  1 |

Because A is 2 × 3, premultiplication and postmultiplication of A by I would call for identity matrices of different dimensions, namely, I2 and I3, respectively. But in case A is n × n, then the same identity matrix In can be used, so that (4.8) becomes In A = A In, thus illustrating an exception to the rule that matrix multiplication is not commutative.

The special nature of identity matrices makes it possible, during the multiplication process, to insert or delete an identity matrix without affecting the matrix product. This follows directly from (4.8). Recalling the associative law, we have, for instance,

A        I        B    =    (AI)B    =    AB
(m × n)  (n × n)  (n × p)

which shows that the presence or absence of I does not affect the product. Observe that dimension conformability is preserved whether or not I appears in the product.

An interesting case of (4.8) occurs when A = In, for then we have

In In = (In)² = In

which states that an identity matrix squared is equal to itself. A generalization of this result is that

(In)^k = In        (k = 1, 2, ...)

An identity matrix remains unchanged when it is multiplied by itself any number of times. Any matrix with such a property (namely, AA = A) is referred to as an idempotent matrix.

Null Matrices

Just as an identity matrix I plays the role of the number 1, a null matrix—or zero matrix—denoted by 0, plays the role of the number 0. A null matrix is simply a matrix whose elements are all zero. Unlike I, the zero matrix is not restricted to being square. Thus it is possible to write

0 = | 0  0 |        0 = | 0  0  0 |
    | 0  0 |            | 0  0  0 |

and so forth. A square null matrix is idempotent, but a nonsquare one is not. (Why?)
As the counterpart of the number 0, null matrices obey the following rules of operation (subject to conformability) with regard to addition and multiplication:

A        +    0     =    0     +    A     =    A
(m × n)      (m × n)    (m × n)    (m × n)    (m × n)

A        0       =    0        and        0        A       =    0
(m × n)  (n × p)      (m × p)              (q × m)  (m × n)      (q × n)

Note that, in multiplication, the null matrix to the left of the equals sign and the one to the right may be of different dimensions.

Example 2    | a11  a12 |   | 0  0 |   | a11  a12 |
             | a21  a22 | + | 0  0 | = | a21  a22 |

Example 3    | a11  a12  a13 | | 0 |   | 0 |
             | a21  a22  a23 | | 0 | = | 0 |
             (2 × 3)          | 0 |    (2 × 1)
                              (3 × 1)

To the left, the null matrix is a 3 × 1 null vector; to the right, it is a 2 × 1 null vector.

Idiosyncrasies of Matrix Algebra

Despite the apparent similarities between matrix algebra and scalar algebra, the case of matrices does display certain idiosyncrasies that serve to warn us not to "borrow" from scalar algebra too unquestioningly. We have already seen that, in general, AB ≠ BA in matrix algebra. Let us look at two more such idiosyncrasies.

For one thing, in the case of scalars, the equation ab = 0 always implies that either a or b is zero, but this is not so in matrix multiplication. Thus, we have

AB = | 2  4 | | -2   4 |   | 0  0 |
     | 1  2 | |  1  -2 | = | 0  0 |

although neither A nor B is itself a zero matrix.

As another illustration, for scalars, the equation cd = ce (with c ≠ 0) implies that d = e. The same does not hold for matrices. Thus, given

C = | 2  3 |        D = | 1  1 |        E = | -2  1 |
    | 6  9 |            | 1  2 |            |  3  2 |

we find that

CD = CE = |  5   8 |
          | 15  24 |

even though D ≠ E.

These strange results actually pertain only to the special class of matrices known as singular matrices, of which the matrices A, B, and C are examples. (Roughly, these matrices contain a row which is a multiple of another row.) Nevertheless, such examples do reveal the pitfalls of unwarranted extension of algebraic theorems to matrix operations.
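Both idiosyncrasies can be reproduced directly; note that the culprit in each case has a zero determinant, a concept made precise in Chapter 5. A sketch (assuming numpy):

```python
import numpy as np

# Minimal sketch: two idiosyncrasies of matrix algebra from the text.
A = np.array([[2, 4], [1, 2]])
B = np.array([[-2, 4], [1, -2]])
print(A @ B)                           # the zero matrix, though neither A nor B is zero

C = np.array([[2, 3], [6, 9]])
D = np.array([[1, 1], [1, 2]])
E = np.array([[-2, 1], [3, 2]])
print(np.array_equal(C @ D, C @ E))    # True, even though D != E
print(np.linalg.det(C))                # 0.0: C is singular, which is why this can happen
```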
EXERCISE 4.5

1  Given A = […], x = […], and b = […], calculate: (a) AI  (b) IA  (c) Ix  (d) x'I. Indicate the dimension of the identity matrix used in each case.

2  Calculate: (a) Ab  (b) AIb  (c) x'IA  (d) x'A. Does the insertion of I in (b) affect the result in (a)? Does the deletion of I in (d) affect the result in (c)?

3  What is the dimension of the null matrix resulting from each of the following?
(a) Premultiply A by a 4 × 2 null matrix.
(b) Postmultiply A by a 3 × 6 null matrix.
(c) Premultiply b by a 4 × 3 null matrix.
(d) Postmultiply x by a 1 × 5 null matrix.

4  Show that a diagonal matrix, i.e., a matrix of the form

| a1   0  ···   0 |
|  0  a2  ···   0 |
|  ⋮   ⋮        ⋮ |
|  0   0  ···  an |

can be idempotent only if each diagonal element is either 1 or 0. How many different numerical idempotent diagonal matrices of dimension n × n can be constructed altogether from the matrix above?

4.6 TRANSPOSES AND INVERSES

When the rows and columns of a matrix A are interchanged—so that its first row becomes the first column, and vice versa—we obtain the transpose of A, which is denoted by A' or Aᵀ. The prime symbol is by no means new to us; it was used earlier to distinguish a row vector from a column vector. In the newly introduced terminology, a row vector x' constitutes the transpose of the column vector x. The superscript T in the alternative symbol is obviously shorthand for the word transpose.

Example 1  Given A = | 3  8 -9 | and B = | 3  4 |, we can interchange the rows and columns and write
                     | 1  0  4 |         | 1  7 |
                     (2 × 3)             (2 × 2)

A' = |  3  1 |        B' = | 3  1 |
     |  8  0 |             | 4  7 |
     | -9  4 |
     (3 × 2)

By definition, if a matrix A is m × n, then its transpose A' must be n × m. An n × n square matrix, however, possesses a transpose with the same dimension.

Example 2  Given C = | 9 -1 | and D = | 1  0  4 |, then
                     | 3  2 |         | 0  3  7 |
                                      | 4  7  2 |

C' = |  9  3 |        D' = | 1  0  4 | = D
     | -1  2 |             | 0  3  7 |
                           | 4  7  2 |

Here, the dimension of each transpose is identical with that of the original matrix. In D', we also note the remarkable result that D' inherits not only the dimension of D but also the original array of elements! The fact that D' = D is the result of the symmetry of the elements with reference to the principal diagonal. Considering the principal diagonal in D as a mirror, the elements located to its northeast are exact images of the elements to its southwest; hence the first row reads identically with the first column, and so forth. The matrix D exemplifies the special class of square matrices known as symmetric matrices. Another example of such a matrix is the identity matrix I, which, as a symmetric matrix, has the transpose I' = I.

Properties of Transposes

The following properties characterize transposes:

(4.9)     (A')' = A
(4.10)    (A + B)' = A' + B'
(4.11)    (AB)' = B'A'

The first says that the transpose of the transpose is the original matrix—a rather self-evident conclusion. The second property may be verbally stated thus: the transpose of a sum is the sum of the transposes.

Example 3  If A = | 4  1 | and B = | 2  0 |, then
                  | 9  0 |         | 7  1 |

(A + B)' = | 6  16 |        and        A' + B' = | 4  9 | + | 2  7 |   | 6  16 |
           | 1   1 |                             | 1  0 |   | 0  1 | = | 1   1 |

The third property is that the transpose of a product is the product of the transposes in reverse order. To appreciate the necessity for the reversed order, let us examine the dimension conformability of the two products on the two sides of (4.11). If we let A be m × n and B be n × p, then AB will be m × p, and (AB)' will be p × m. For equality to hold, it is necessary that the right-hand expression B'A' be of the identical dimension. Since B' is p × n and A' is n × m, the product B'A' is indeed p × m, as required. Note that, on the other hand, the product A'B' is not even defined unless m = p.

Example 4  Given A = | 1  2 | and B = | 0 -1 |, we have
                     | 3  4 |         | 6  7 |

(AB)' = | 12  13 |'   | 12  24 |        and        B'A' = |  0  6 | | 1  3 |   | 12  24 |
        | 24  25 |  = | 13  25 |                          | -1  7 | | 2  4 | = | 13  25 |

This verifies the property.
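All three transpose properties can be confirmed mechanically. A sketch using the matrices of Example 4 (assuming numpy):

```python
import numpy as np

# Minimal sketch: checking the transpose properties (4.9)-(4.11).
A = np.array([[1, 2], [3, 4]])
B = np.array([[0, -1], [6, 7]])

print(np.array_equal(A.T.T, A))                 # (4.9)  (A')' = A
print(np.array_equal((A + B).T, A.T + B.T))     # (4.10) (A + B)' = A' + B'
print(np.array_equal((A @ B).T, B.T @ A.T))     # (4.11) (AB)' = B'A'
```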
To prove its uniqueness, let us suppose that B has been found to be an inverse for A, so that

    AB = BA = I

Now assume that there is another matrix C such that AC = CA = I. By premultiplying both sides of AB = I by C, we find that

    CAB = CI (= C)    [by (4.8)]

Since CA = I by assumption, the preceding equation is reducible to

    IB = C        or        B = C

That is, B and C must be one and the same inverse matrix. For this reason, we can speak of the (as against an) inverse of A.

5. The two parts of condition (4.12), namely, AA^{-1} = I and A^{-1}A = I, actually imply each other, so that satisfying either equation is sufficient to establish the inverse relationship between A and A^{-1}. To prove this, we should show that if AA^{-1} = I, and if there is a matrix B such that BA = I, then B = A^{-1} (so that BA = I must in effect be the equation A^{-1}A = I). Let us postmultiply both sides of the given equation BA = I by A^{-1}; then

    (BA)A^{-1} = IA^{-1}
    B(AA^{-1}) = IA^{-1}    [associative law]
    BI = IA^{-1}            [AA^{-1} = I by assumption]

Therefore, as required,

    B = A^{-1}    [by (4.8)]

Analogously, it can be demonstrated that, if A^{-1}A = I, then the only matrix C which yields CA^{-1} = I is C = A.

Example 5 Let A = [3 1; 0 2] and B = (1/6)[2 -1; 0 3]; then, since the scalar multiplier 1/6 in B can be moved freely through the product (commutative law of scalar multiplication), we can write

    AB = (1/6) [3 1; 0 2] [2 -1; 0 3] = (1/6) [6 0; 0 6] = [1 0; 0 1]

This establishes B as the inverse of A, and vice versa. The reverse multiplication, as expected, also yields the same identity matrix:

    BA = (1/6) [2 -1; 0 3] [3 1; 0 2] = (1/6) [6 0; 0 6] = [1 0; 0 1]

The following three properties of inverse matrices are of interest. If A and B are nonsingular matrices with dimension n × n, then:

(4.13)  (A^{-1})^{-1} = A
(4.14)  (AB)^{-1} = B^{-1}A^{-1}
(4.15)  (A')^{-1} = (A^{-1})'

The first says that the inverse of an inverse is the original matrix. The second states that the inverse of a product is the product of the inverses in reverse order. And the last one means that the inverse of the transpose is the transpose of the inverse. Note that in these statements the existence of the inverses and the satisfaction of the conformability condition are presupposed.

The validity of (4.13) is fairly obvious, but let us prove (4.14) and (4.15). Given the product AB, let us find its inverse, call it C. From (4.12) we know that CAB = I; thus, postmultiplication of both sides by B^{-1}A^{-1} will yield

(4.16)  CABB^{-1}A^{-1} = IB^{-1}A^{-1} (= B^{-1}A^{-1})

But the left side is reducible to C:

    CA(BB^{-1})A^{-1} = CAIA^{-1}    [by (4.12)]
                      = CAA^{-1} = CI = C    [by (4.12) and (4.8)]

Substitution of this into (4.16) then tells us that C = B^{-1}A^{-1} or, in other words, that the inverse of AB is equal to B^{-1}A^{-1}, as alleged. In this proof, the equation AA^{-1} = A^{-1}A = I was utilized twice. Note that the application of this equation is permissible if and only if a matrix and its inverse are strictly adjacent to each other in a product. We may write AA^{-1}B = IB = B, but never ABA^{-1} = B.

The proof of (4.15) is as follows. Given A', let us find its inverse, call it D. By definition, we then have DA' = I. But we know that

    (AA^{-1})' = I' = I

produces the same identity matrix. Thus we may write

    DA' = (AA^{-1})' = (A^{-1})'A'    [by (4.11)]

Postmultiplying both sides by (A')^{-1}, we obtain

    DA'(A')^{-1} = (A^{-1})'A'(A')^{-1}        or        D = (A^{-1})'    [by (4.12)]

Thus, the inverse of A' is equal to (A^{-1})', as alleged.

In the proofs just presented, mathematical operations were performed on whole blocks of numbers. If those blocks of numbers had not been treated as mathematical entities (matrices), the same operations would have been much more lengthy and involved. The beauty of matrix algebra lies precisely in its simplification of such operations.
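The transpose and inverse properties (4.10), (4.11), and (4.13) to (4.15) can all be spot-checked numerically. In the sketch below, A and B are Example 5's pair, while F is an arbitrary nonsingular matrix chosen here purely for illustration:

    import numpy as np

    A = np.array([[3., 1], [0, 2]])
    B = np.array([[2., -1], [0, 3]]) / 6     # Example 5's candidate inverse
    print(A @ B)                             # the identity matrix
    print(B @ A)                             # the identity matrix again

    F = np.array([[1., 2], [3, 4]])          # illustrative nonsingular matrix
    inv = np.linalg.inv
    print(np.allclose((A + F).T, A.T + F.T))          # (A+B)' = A' + B'
    print(np.allclose((A @ F).T, F.T @ A.T))          # (AB)' = B'A'
    print(np.allclose(inv(A @ F), inv(F) @ inv(A)))   # (AB)^(-1) = B^(-1)A^(-1)
    print(np.allclose(inv(A.T), inv(A).T))            # (A')^(-1) = (A^(-1))'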
Inverse Matrix and Solution of Linear-Equation System

The application of the concept of inverse matrix to the solution of a simultaneous-equation system is immediate and direct. Referring to the equation system in (4.3), we pointed out earlier that it can be written in matrix notation as

(4.17)    A    x  =  d
        (3×3)(3×1)  (3×1)

where A, x, and d are as defined in (4.4). Now if the inverse matrix A^{-1} exists, the premultiplication of both sides of equation (4.17) by A^{-1} will yield A^{-1}Ax = A^{-1}d, or

(4.18)    x  =  A^{-1}  d
        (3×1)  (3×3) (3×1)

The left side of (4.18) is a column vector of variables, whereas the right-hand product is a column vector of certain known numbers. Thus, by definition of the equality of matrices or vectors, (4.18) shows the set of values of the variables that satisfy the equation system, i.e., the solution values. Furthermore, since A^{-1} is unique if it exists, A^{-1}d must be a unique vector of solution values. We shall therefore write the x vector in (4.18) as x̄, to indicate its status as a (unique) solution.

Methods of testing the existence of the inverse and of calculating it will be discussed in the next chapter. It may be stated here, however, that the inverse of the matrix A in (4.4) is

    A^{-1} = (1/52) [18 -16 -10; -13 26 13; -17 18 21]

Thus (4.18) will turn out to be

    x̄ = (1/52) [18 -16 -10; -13 26 13; -17 18 21] [22; 12; 10] = (1/52) [104; 156; 52] = [2; 3; 1]

which gives the solution x̄1 = 2, x̄2 = 3, and x̄3 = 1.

The upshot is that, as one way of finding the solution of a linear-equation system Ax = d, where the coefficient matrix A is nonsingular, we can first find the inverse A^{-1}, and then postmultiply A^{-1} by the constant vector d. The product A^{-1}d will then give the solution values of the variables.
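As a check, the computation can be reproduced in numpy. The coefficient matrix written below is an assumption on my part: it is the matrix consistent with the inverse and the solution quoted above, since its own entries do not appear in this passage.

    import numpy as np

    A = np.array([[6., 3, 1], [1, 4, -2], [4, -1, 5]])   # assumed form of (4.3)'s matrix
    d = np.array([22., 12, 10])

    print(np.linalg.det(A))          # 52, so the inverse exists
    print(np.linalg.inv(A) * 52)     # [[18,-16,-10],[-13,26,13],[-17,18,21]], as quoted
    print(np.linalg.inv(A) @ d)      # the solution vector [2. 3. 1.]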
EXERCISE 4.6

1 Given A = …, B = …, and C = …, find A', B', and C'.

2 Use the matrices given in the preceding problem to verify that
(a) (A + B)' = A' + B'        (b) (AC)' = C'A'

3 Generalize the result (4.11) to the case of a product of three matrices by proving that, for any conformable matrices A, B, and C, the equation (ABC)' = C'B'A' holds.

4 Given the four matrices D = …, E = …, F = …, and G = …, test whether any one of them is the inverse of another.

5 Generalize the result (4.14) by proving that, for any conformable nonsingular matrices A, B, and C, the equation (ABC)^{-1} = C^{-1}B^{-1}A^{-1} holds.

6 Let A = I - X(X'X)^{-1}X'.
(a) Must A be square? Must (X'X) be square? Must X be square?
(b) Show that matrix A is idempotent. [Note: If X' and X are not square, it is inappropriate to apply (4.14).]

CHAPTER FIVE

LINEAR MODELS AND MATRIX ALGEBRA (CONTINUED)

In Chap. 4, it was shown that a linear-equation system, however large, may be written in a compact matrix notation. Furthermore, such an equation system can be solved by finding the inverse of the coefficient matrix, provided the inverse exists. Now we must address ourselves to the questions of how to test for the existence of the inverse and how to find that inverse. Only after we have answered these questions will it be possible to apply matrix algebra meaningfully to economic models.

5.1 CONDITIONS FOR NONSINGULARITY OF A MATRIX

A given coefficient matrix A can have an inverse (i.e., can be "nonsingular") only if it is square. As was pointed out earlier, however, the squareness condition is necessary but not sufficient for the existence of the inverse A^{-1}. A matrix can be square, but singular (without an inverse) nonetheless.

Necessary versus Sufficient Conditions

The concepts of "necessary condition" and "sufficient condition" are used frequently in economics. It is important that we understand their precise meanings before proceeding further.

A necessary condition is in the nature of a prerequisite: suppose that a statement p is true only if another statement q is true; then q constitutes a necessary condition of p. Symbolically, we express this as follows:

(5.1)  p ⇒ q

which is read: "p only if q," or alternatively, "if p, then q." It is also logically correct to interpret (5.1) to mean "p implies q." It may happen, of course, that we also have p ⇒ w at the same time. Then both q and w are necessary conditions for p.

Example 1 If we let p be the statement "a person is a father" and q be the statement "a person is male," then the logical statement p ⇒ q applies. A person is a father only if he is male, and to be male is a necessary condition for fatherhood. Note, however, that the converse is not true: fatherhood is not a necessary condition for maleness.

A different type of situation is that in which a statement p is true if q is true, but p can also be true when q is not true. In this case, q is said to be a sufficient condition for p. The truth of q suffices for establishing the truth of p, but it is not a necessary condition for p. This case is expressed symbolically by

(5.2)  p ⇐ q

which is read: "p if q" (without the word "only"), or alternatively, "if q, then p," as if reading (5.2) backwards. It can also be interpreted to mean "q implies p."

Example 2 If we let p be the statement "one can get to Europe" and q be the statement "one takes a plane to Europe," then p ⇐ q. Flying can serve to get one to Europe, but since ocean transportation is also feasible, flying is not a prerequisite. We can write p ⇐ q, but not p ⇒ q.

In a third possible situation, q is both necessary and sufficient for p. In such an event, we write

(5.3)  p ⇔ q

which is read: "p if and only if q" (also written as "p iff q"). The double-headed arrow is really a combination of the two types of arrow in (5.1) and (5.2); hence the joint use of the two terms "if" and "only if." Note that (5.3) states not only that p implies q but also that q implies p.

Example 3 If we let p be the statement "there are less than 30 days in the month" and q be the statement "it is the month of February," then p ⇔ q. To have less than 30 days in the month, it is necessary that it be February. Conversely, the specification of February is sufficient to establish that there are less than 30 days in the month. Thus q is a necessary-and-sufficient condition for p.

In order to prove p ⇒ q, it needs to be shown that q follows logically from p. Similarly, to prove p ⇐ q requires a demonstration that p follows logically from q. But to prove p ⇔ q necessitates a demonstration that p and q follow from each other.

Conditions for Nonsingularity

When the squareness condition is already met, a sufficient condition for the nonsingularity of a matrix is that its rows be linearly independent (or, what amounts to the same thing, that its columns be linearly independent). When the dual conditions of squareness and linear independence are taken together, they constitute the necessary-and-sufficient condition for nonsingularity (nonsingularity ⇔ squareness and linear independence).
An n × n coefficient matrix A can be considered as an ordered set of row vectors, i.e., as a column vector whose elements are themselves row vectors:

    A = [a11 a12 ⋯ a1n; a21 a22 ⋯ a2n; ⋯; an1 an2 ⋯ ann] = [v1'; v2'; ⋯; vn']

where vi' = [ai1 ai2 ⋯ ain], i = 1, 2, …, n. For the rows (row vectors) to be linearly independent, none must be a linear combination of the rest. More formally, as was mentioned in Sec. 4.3, linear row independence requires that the only set of scalars ki which can satisfy the vector equation

(5.4)  Σ_{i=1}^{n} ki vi' = 0    (the right side being a 1 × n zero vector)

be ki = 0 for all i.

Example 4 If the coefficient matrix is

    A = [3 4 5; 0 1 2; 6 8 10] = [v1'; v2'; v3']

then, since [6 8 10] = 2[3 4 5], we have v3' = 2v1' = 2v1' + 0v2'. Thus the third row is expressible as a linear combination of the first two, and the rows are not linearly independent. Alternatively, we may write the above equation as

    2v1' + 0v2' - v3' = [6 8 10] + [0 0 0] - [6 8 10] = [0 0 0]

Inasmuch as the set of scalars that led to the zero vector of (5.4) is not ki = 0 for all i, it follows that the rows are linearly dependent.
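Such a dependence is easy to confirm mechanically; a short numpy check of Example 4 might look like this:

    import numpy as np

    A = np.array([[3., 4, 5], [0, 1, 2], [6, 8, 10]])
    print(np.linalg.matrix_rank(A))        # 2: only two linearly independent rows
    print(2 * A[0] + 0 * A[1] - A[2])      # [0. 0. 0.]: the combination 2v1' + 0v2' - v3'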
Unlike the squareness condition, the linear-independence condition cannot normally be ascertained at a glance. Thus a method of testing linear independence among rows (or columns) needs to be developed. Before we concern ourselves with that task, however, it would strengthen our motivation first to have an intuitive understanding of why the linear-independence condition is heaped together with the squareness condition at all. From the discussion of counting equations and unknowns in Sec. 3.4, we recall the general conclusion that, for a system of equations to possess a unique solution, it is not sufficient to have the same number of equations as unknowns. In addition, the equations must be consistent with and functionally independent (meaning, in the present context of linear systems, linearly independent) of one another. There is a fairly obvious tie-in between the "same number of equations as unknowns" criterion and the squareness (same number of rows and columns) of the coefficient matrix. What the "linear independence among the rows" requirement does is to preclude inconsistency and linear dependence among the equations as well. Taken together, therefore, the dual requirement of squareness and row independence in the coefficient matrix is tantamount to the conditions for the existence of a unique solution enunciated in Sec. 3.4.

Let us illustrate how linear dependence among the rows of the coefficient matrix can cause inconsistency or linear dependence among the equations themselves. Let the equation system Ax = d take the form

    [10 4; 5 2] [x1; x2] = [d1; d2]

where the coefficient matrix A contains linearly dependent rows: v1' = 2v2'. (Note that its columns are also dependent, the first being 5/2 of the second.) We have not specified the values of the constant terms d1 and d2, but there are only two distinct possibilities regarding their relative values: (1) d1 = 2d2 and (2) d1 ≠ 2d2. Under the first, with, say, d1 = 12 and d2 = 6, the two equations are consistent but linearly dependent (just as the two rows of matrix A are), for the first equation is merely the second equation times 2. One equation is then redundant, and the system reduces in effect to a single equation, 5x1 + 2x2 = 6, with an infinite number of solutions. For the second possibility, with, say, d1 = 12 but d2 = 0, the two equations are inconsistent, because if the first equation (10x1 + 4x2 = 12) is true, then, by halving each term, we can deduce that 5x1 + 2x2 = 6; consequently the second equation (5x1 + 2x2 = 0) cannot possibly be true also. Thus no solution exists.

The upshot is that no unique solution will be available (under either possibility) so long as the rows in the coefficient matrix A are linearly dependent. In fact, the only way to have a unique solution is to have linearly independent rows (or columns) in the coefficient matrix. In that case, matrix A will be nonsingular, which means that the inverse A^{-1} does exist, and that a unique solution x̄ = A^{-1}d can be found.

Rank of a Matrix

Even though the concept of row independence has been discussed only with regard to square matrices, it is equally applicable to any m × n rectangular matrix. If the maximum number of linearly independent rows that can be found in such a matrix is r, the matrix is said to be of rank r. (The rank also tells us the maximum number of linearly independent columns in the said matrix.) The rank of an m × n matrix can be at most m or n, whichever is smaller. By definition, an n × n nonsingular matrix A has n linearly independent rows (or columns); consequently it must be of rank n. Conversely, an n × n matrix having rank n must be nonsingular.
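The two-equation illustration above can be rerun numerically. In the sketch below, the lstsq call is one standard way of exhibiting a member of the infinite solution set in the consistent case; it goes beyond the text's own toolkit:

    import numpy as np

    A = np.array([[10., 4], [5, 2]])       # dependent rows: v1' = 2 v2'
    print(np.linalg.matrix_rank(A))        # 1, so no unique solution for any d

    try:
        np.linalg.solve(A, np.array([12., 6]))
    except np.linalg.LinAlgError as err:
        print(err)                         # "Singular matrix": solve() refuses

    # d = (12, 6) is the consistent case; least squares picks one of the
    # infinitely many solutions, and A @ x reproduces d exactly
    x, *_ = np.linalg.lstsq(A, np.array([12., 6]), rcond=None)
    print(A @ x)                           # [12.  6.]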
EXERCISE 5.1

1 In the following paired statements, let p be the first statement and q the second. Indicate for each case whether (5.1) or (5.2) or (5.3) applies.
(a) It is a holiday; it is Thanksgiving Day.
(b) A geometric figure has four sides; it is a rectangle.
(c) Two ordered pairs (a, b) and (b, a) are equal; a is equal to b.
(d) A number is rational; it can be expressed as a ratio of two integers.
(e) A 4 × 4 matrix is nonsingular; the rank of the matrix is 4.
(f) The gasoline tank in my car is empty; I cannot start my car.
(g) The letter is returned to the sender for insufficient postage; the sender forgot to put a stamp on the envelope.

2 Let p be the statement "a geometric figure is a square," and let q be as follows:
(a) It has four sides.
(b) It has four equal sides.
(c) It has four equal sides each perpendicular to the adjacent one.
Which is true for each case: p ⇒ q, p ⇐ q, or p ⇔ q?

3 Are the rows linearly independent in each of the following matrices? (a) … (b) … (c) … (d) …

4 Check whether the columns of each matrix in the preceding problem are also linearly independent. Do you get the same answer as for row independence?

5.2 TEST OF NONSINGULARITY BY USE OF DETERMINANT

To ascertain whether a square matrix is nonsingular, we can make use of the concept of the determinant.

Determinants and Nonsingularity

The determinant of a square matrix A, denoted by |A|, is a uniquely defined scalar (number) associated with that matrix. Determinants are defined only for square matrices. For a 2 × 2 matrix A = [a11 a12; a21 a22], its determinant is defined to be the sum of two terms as follows:

(5.5)  |A| = |a11 a12; a21 a22| = a11a22 - a12a21    (= a scalar)

which is obtained by multiplying the two elements in the principal diagonal of A and then subtracting the product of the two remaining elements. In view of the dimension of matrix A, |A| as defined in (5.5) is called a second-order determinant.

Example 1 Given A = [10 4; 8 5] and B = [3 5; 0 -1], their determinants are

    |A| = |10 4; 8 5| = 10(5) - 4(8) = 18
    |B| = |3 5; 0 -1| = 3(-1) - 5(0) = -3

While a determinant (enclosed by two vertical bars rather than brackets) is by definition a scalar, a matrix as such does not have a numerical value. In other words, a determinant is reducible to a number, but a matrix is, in contrast, a whole block of numbers. It should also be emphasized that a determinant is defined only for a square matrix, whereas a matrix as such does not have to be square.

Even at this early stage of discussion, it is possible to have an inkling of the relationship between the linear dependence of the rows in a matrix A, on the one hand, and its determinant |A|, on the other. The two matrices

    C = [3 8; 3 8]        and        D = [2 6; 8 24]

both have linearly dependent rows, because c1' = c2' and d2' = 4d1'. Both of their determinants also turn out to be equal to zero:

    |C| = 3(8) - 8(3) = 0        |D| = 2(24) - 6(8) = 0

This result strongly suggests that a "vanishing" determinant (a zero-valued determinant) may have something to do with linear dependence. We shall see that this is indeed the case. Furthermore, the value of a determinant |A| can serve not only as a criterion for testing the linear independence of the rows (hence the nonsingularity) of matrix A, but also as an input in the calculation of the inverse A^{-1}, if it exists. First, however, we must widen our vista by a discussion of higher-order determinants.
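Up to floating-point rounding, numpy reproduces these second-order determinants, the vanishing ones included:

    import numpy as np

    print(np.linalg.det(np.array([[10., 4], [8, 5]])))   # 18
    print(np.linalg.det(np.array([[3., 5], [0, -1]])))   # -3
    print(np.linalg.det(np.array([[3., 8], [3, 8]])))    # 0: identical rows
    print(np.linalg.det(np.array([[2., 6], [8, 24]])))   # 0: second row = 4 x first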
Evaluating a Third-Order Determinant

A determinant of order 3 is associated with a 3 × 3 matrix. Given

    A = [a11 a12 a13; a21 a22 a23; a31 a32 a33]

its determinant has the value

(5.6)  |A| = a11 |a22 a23; a32 a33| - a12 |a21 a23; a31 a33| + a13 |a21 a22; a31 a32|
           = a11a22a33 - a11a23a32 + a12a23a31 - a12a21a33 + a13a21a32 - a13a22a31    (= a scalar)

Looking first at the lower line of (5.6), we see the value of |A| expressed as a sum of six product terms, three of which are prefixed by minus signs and three by plus signs. Complicated as this sum may appear, there is nonetheless a very easy way of "catching" all six terms from a given third-order determinant. This is best explained diagrammatically (Fig. 5.1).

Figure 5.1

In the determinant shown in Fig. 5.1, each element in the top row has been linked with two other elements via two solid arrows, as follows: a11 → a22 → a33, a12 → a23 → a31, and a13 → a21 → a32. Each triplet of elements so linked can be multiplied out, and their product taken as one of the six product terms in (5.6). The solid-arrow product terms are prefixed with plus signs.

On the other hand, each top-row element has also been connected with two other elements via two broken arrows, as follows: a11 → a23 → a32, a12 → a21 → a33, and a13 → a22 → a31. Each triplet of elements so connected can also be multiplied out, and their product taken as one of the six terms in (5.6). Such products are prefixed by minus signs. The sum of all six products will then be the value of the determinant. Example 2 and Example 3, two numerical illustrations of this cross-diagonal scheme, confirm that the six signed products do sum to the value of the determinant in each case.

This method of cross-diagonal multiplication provides a handy way of evaluating a third-order determinant, but unfortunately it is not applicable to determinants of orders higher than 3. For the latter, we must resort to the so-called "Laplace expansion" of the determinant.

Evaluating an nth-Order Determinant by Laplace Expansion

Let us first explain the Laplace-expansion process for a third-order determinant. Returning to the first line of (5.6), we see that the value of |A| can also be regarded as a sum of three terms, each of which is a product of a first-row element and a particular second-order determinant. This latter process of evaluating |A|, by means of certain lower-order determinants, illustrates the Laplace expansion of the determinant.

The three second-order determinants in (5.6) are not arbitrarily determined, but are specified by means of a definite rule. The first one, |a22 a23; a32 a33|, is a subdeterminant of |A| obtained by deleting the first row and first column of |A|. This is called the minor of the element a11 (the element at the intersection of the deleted row and column) and is denoted by |M11|. In general, the symbol |Mij| can be used to represent the minor obtained by deleting the ith row and jth column of a given determinant. Since a minor is itself a determinant, it has a value. As the reader can verify, the other two second-order determinants in (5.6) are, respectively, the minors |M12| and |M13|; that is,

    |M11| = |a22 a23; a32 a33|        |M12| = |a21 a23; a31 a33|        |M13| = |a21 a22; a31 a32|

A concept closely related to the minor is that of the cofactor. A cofactor, denoted by |Cij|, is a minor with a prescribed algebraic sign attached to it.* The rule of sign is as follows. If the sum of the two subscripts i and j in the minor |Mij| is even, then the cofactor takes the same sign as the minor; that is, |Cij| = |Mij|. If it is odd, then the cofactor takes the opposite sign to the minor; that is, |Cij| = -|Mij|. In short, we have

    |Cij| ≡ (-1)^{i+j} |Mij|

where it is obvious that the expression (-1)^{i+j} can be positive if and only if (i + j) is even. The fact that a cofactor has a specific sign is of extreme importance and should always be borne in mind.

Example 4 In the determinant |9 8 7; 6 5 4; 3 2 1|, the minor of the element 8 is

    |M12| = |6 4; 3 1| = -6

but the cofactor of the same element is

    |C12| = -|M12| = 6

because i + j = 1 + 2 = 3 is odd. Similarly, the cofactor of the element 4 is

    |C23| = -|M23| = -|9 8; 3 2| = 6

Using these new concepts, we can express a third-order determinant as

(5.7)  |A| = a11|M11| - a12|M12| + a13|M13| = a11|C11| + a12|C12| + a13|C13| = Σ_{j=1}^{3} a1j|C1j|

i.e., as a sum of three terms, each of which is the product of a first-row element and its corresponding cofactor. Note the difference in the signs of the a12|M12| and a12|C12| terms in (5.7); this is because 1 + 2 gives an odd number. The Laplace expansion of a third-order determinant thus serves to reduce the evaluation problem to one of evaluating only certain second-order determinants.

* Many writers use the symbols Mij and Cij (without the vertical bars) for minors and cofactors. We add the vertical bars to give visual emphasis to the fact that minors and cofactors are in the nature of determinants and, as such, have scalar values.
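The minor-and-cofactor bookkeeping is easy to mechanize. The sketch below defines both for an arbitrary square matrix and reproduces Example 4's cofactors:

    import numpy as np

    def minor(A, i, j):
        """Delete row i and column j (0-indexed) and return the subdeterminant."""
        sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
        return np.linalg.det(sub)

    def cofactor(A, i, j):
        """The signed minor (-1)**(i+j) |Mij|."""
        return (-1) ** (i + j) * minor(A, i, j)

    A = np.array([[9., 8, 7], [6, 5, 4], [3, 2, 1]])
    print(minor(A, 0, 1))      # |M12| = -6
    print(cofactor(A, 0, 1))   # |C12| = +6
    print(cofactor(A, 1, 2))   # |C23| = +6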
A similar reduction is achieved in the Laplace expansion of higher-order determinants. In a fourth-order determinant |B|, for instance, the top row will contain four elements b11, …, b14; thus, in the spirit of (5.7), we may write

    |B| = Σ_{j=1}^{4} b1j|C1j|

where the cofactors |C1j| are of order 3. Each third-order cofactor can then be evaluated as in (5.6). In general, the Laplace expansion of an nth-order determinant will reduce the problem to one of evaluating n cofactors, each of which is of the (n - 1)st order, and the repeated application of the process will methodically lead to lower and lower orders of determinants, eventually culminating in the basic second-order determinants as defined in (5.5). Then the value of the original determinant can be easily calculated.

Although the process of Laplace expansion has been couched in terms of the cofactors of the first-row elements, it is also feasible to expand a determinant by the cofactors of any row or, for that matter, of any column. For instance, if the first column of a third-order determinant |A| consists of the elements a11, a21, and a31, expansion by the cofactors of these elements will also yield the value of |A|:

    |A| = a11|C11| + a21|C21| + a31|C31| = Σ_{i=1}^{3} ai1|Ci1|

Example 5 Given |A| = |5 6 1; 2 3 0; 7 -3 0|, expansion by the first row produces the result

    |A| = 5 |3 0; -3 0| - 6 |2 0; 7 0| + 1 |2 3; 7 -3| = 0 - 0 - 27 = -27

But expansion by the first column yields the identical answer:

    |A| = 5 |3 0; -3 0| - 2 |6 1; -3 0| + 7 |6 1; 3 0| = 0 - 2(3) + 7(-3) = -27

Insofar as numerical calculation is concerned, this fact affords us an opportunity to choose some "easy" row or column for expansion. A row or column with the largest number of 0s or 1s is always preferable for this purpose, because a 0 times its cofactor is simply 0, so that the term will drop out, and a 1 times its cofactor is simply the cofactor itself, so that at least one multiplication step can be saved. In Example 5, the easiest way to expand the determinant is by the third column, which consists of the elements 1, 0, and 0. We could therefore have evaluated it thus:

    |A| = 1 |2 3; 7 -3| - 0 + 0 = -27

To sum up, the value of a determinant |A| of order n can be found by the Laplace expansion of any row or any column as follows:

(5.8)  |A| = Σ_{j=1}^{n} aij|Cij|    [expansion by the ith row]
           = Σ_{i=1}^{n} aij|Cij|    [expansion by the jth column]
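Laplace expansion translates directly into a recursive routine. The sketch below expands along the first row at every level, exactly in the spirit of (5.8):

    def det_laplace(A):
        """Determinant by repeated Laplace expansion along the first row."""
        n = len(A)
        if n == 1:
            return A[0][0]
        total = 0
        for j in range(n):
            # minor: delete row 0 and column j
            sub = [row[:j] + row[j + 1:] for row in A[1:]]
            total += (-1) ** j * A[0][j] * det_laplace(sub)
        return total

    print(det_laplace([[5, 6, 1], [2, 3, 0], [7, -3, 0]]))   # -27, as in Example 5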
EXERCISE 5.2

1 Evaluate the following determinants:
(a) |8 1 3; 4 0 1; 6 0 3|        (c) |4 0 2; 6 0 3; 8 2 3|        (e) |a b c; b c a; c a b|
(b) |1 2 3; 4 7 5; 3 6 9|        (d) |1 1 4; 8 11 -2; 0 0 7|      (f) |x 5 0; y 2 1; 9 -1 8|

2 Determine the signs to be attached to the relevant minors in order to get the following cofactors of a determinant: |C13|, |C23|, |C31|, |C41|, and |C34|.

3 Given |a b c; d e f; g h i|, find the minors and the cofactors of the elements a, b, and f.

4 Evaluate the following determinants:
(a) |1 2 0 9; 2 3 4 6; 1 6 0 -1; 0 -5 0 8|        (b) |2 7 0 1; 5 6 4 8; 0 0 9 0; 1 -3 1 4|

5 In the first determinant of the preceding problem, find the value of the cofactor of the element 9.

5.3 BASIC PROPERTIES OF DETERMINANTS

We can now discuss some properties of determinants which will enable us to "discover" the connection between linear dependence among the rows of a square matrix and the vanishing of the determinant of that matrix. Five basic properties will be discussed here. These are properties common to determinants of all orders, although we shall illustrate mostly with second-order determinants.

Property I The interchange of rows and columns does not affect the value of a determinant. In other words, the determinant of a matrix A has the same value as that of its transpose A'; that is, |A| = |A'|.

Example 1  |4 3; 5 6| = |4 5; 3 6| = 9

Example 2  |a b; c d| = |a c; b d| = ad - bc

Property II The interchange of any two rows (or any two columns) will alter the sign, but not the numerical value, of the determinant.

Example 3 Interchanging the two rows of the determinant in Example 2 yields |c d; a b| = cb - ad = -(ad - bc).

Example 4  |0 1 3; 2 5 7; 3 0 1| = -26, but the interchange of the first and third columns yields |3 1 0; 7 5 2; 1 0 3| = 26.

Property III The multiplication of any one row (or one column) by a scalar k will change the value of the determinant k-fold.

Example 5 Multiplying the top row of the determinant in Example 2 by k, we get

    |ka kb; c d| = kad - kbc = k(ad - bc) = k |a b; c d|

It is important to distinguish between the two expressions kA and k|A|. In multiplying a matrix A by a scalar k, all the elements in A are to be multiplied by k. But, if we read the equation in the present example from right to left, it should be clear that, in multiplying a determinant |A| by k, only a single row (or column) is to be multiplied by k. This equation, therefore, in effect gives us a rule for factoring a determinant: whenever any single row or column contains a common divisor, it may be factored out of the determinant.

Example 6 Factoring the first column and then the second row in turn, we have

    |15a 7b; 12c 2d| = 3 |5a 7b; 4c 2d| = 3(2) |5a 7b; 2c d| = 6(5ad - 14bc)

The direct evaluation of the original determinant will, of course, produce the same answer.

In contrast, the factoring of a matrix requires the presence of a common divisor for all its elements, as in

    [ka kb; kc kd] = k [a b; c d]

Property IV The addition (subtraction) of a multiple of any row to (from) another row will leave the value of the determinant unaltered. The same holds true if we replace the word row by column in the above statement.

Example 7 Adding k times the top row of the determinant in Example 2 to its second row, we end up with the original determinant value:

    |a b; c+ka d+kb| = a(d + kb) - b(c + ka) = ad - bc = |a b; c d|

Property V If one row (or column) is a multiple of another row (or column), the value of the determinant will be zero. As a special case of this, when two rows (or two columns) are identical, the determinant will vanish.

Example 8

    |2a 2b; a b| = 2ab - 2ab = 0        |c c; d d| = cd - cd = 0

Additional examples of this type of "vanishing" determinant can be found in Exercise 5.2-1. This important property is, in fact, a logical consequence of Property IV. To understand this, let us apply Property IV to the two determinants in Example 8 and watch the outcome. For the first one, try to subtract twice the second row from the top row; for the second determinant, subtract the second column from the first column. Since these operations do not alter the values of the determinants, we can write

    |2a 2b; a b| = |0 0; a b|        |c c; d d| = |0 c; 0 d|

The new (reduced) determinants now contain, respectively, a row and a column of zeros; thus their Laplace expansion must yield a value of zero in both cases. In general, when one row (column) is a multiple of another row (column), the application of Property IV can always reduce all elements of that row (column) to zero, and Property V therefore follows.
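All five properties can be spot-checked numerically, here with Example 4's determinant (the printed values are exact up to floating-point rounding):

    import numpy as np

    A = np.array([[0., 1, 3], [2, 5, 7], [3, 0, 1]])
    print(np.linalg.det(A))                  # -26

    print(np.linalg.det(A.T))                # Property I:  -26 again
    print(np.linalg.det(A[[2, 1, 0]]))       # Property II: +26 (rows 1 and 3 swapped)

    B = A.copy(); B[0] *= 5
    print(np.linalg.det(B))                  # Property III: 5(-26) = -130

    C = A.copy(); C[1] += 4 * A[0]
    print(np.linalg.det(C))                  # Property IV: still -26

    D = A.copy(); D[2] = 3 * A[0]
    print(np.linalg.det(D))                  # Property V: 0 (row 3 = 3 x row 1)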
If we can indeed LINEAR MODELS AND MATRIX ALGEBRA (CONTINUED) 101 apply these properties to transform some row or column into a form containing mostly 0s or 1s, Laplace expansion of the determinant will become a much more manageable task. Determinantal Criterion for Nonsingularity Our present concern, however, is primarily to link the linear dependence of rows with the vanishing of a determinant. For this purpose, Property V can be invoked, Consider an equation system Ax = d: 304 27fx] [a 15 20 fx, [=| a, 4 0 I\}x} [ds This system can have a unique solution if and only if the rows in the coefficient matrix A are linearly independent, so that A is nonsingular. But the second row is five times the first; the rows are indeed dependent, and hence no unique solution exists. The detection of this row dependence was by visual inspection, but by virtue of Property V we could also have discovered it through the fact that [A] =0. The row dependence in a matrix may, of course, assume a more intricate and secretive pattern, For instance, in the matrix 41 27] [oj B=|5 2 1]=|% 101 v there exists row dependence because 20; — vs - 304 = 0; yet this fact defies visual detection. Even in this case, however, Property V will give us a vanishing determinant, |B| = 0, since by adding three times v to v3 and subtracting twice vj from it, the second row can be reduced to a zero vector. In general, any pattern of linear dependence among rows will be reflected in a vanishing determinant—and herein lies the beauty of Property V! Conversely, if the rows are linearly independent, the determinant must have a nonzero value. We have, in the above, tied the nonsingularity of a matrix principally to the ear independence among rows. But, on occasion, we have made the claim that, for a square matrix A, row independence = column independence. We are now equipped to prove that claim: According to Property I, we know that |] = |4’|. Since row independence in A @ |A| #0, we may also state that row independence in A <> |4’| + 0. But |4"| # 0 © row independence in the transpose 4’ = column indepen- dence in A (rows of A’ are by definition the columns of A). Therefore, row independence in A ¢* column independence in A. 102 stavic (oR EQUILIBRIUM) ANALYSIS Our discussion of the test of nonsingularity can now be summarized. Given a linear-equation system Ax = d, where A is an n X n coefficient matrix, [4] #0 «2 there is row (column) independence in matrix A < A is nonsingular @ A”! exists a unique solution Thus the value of the determinant of the coefficient matrix, |A], provides a convenient criterion for testing the nonsingularity of matrix A and the existence of a unique solution to the equation system Ax = d. Note, however, that the determinantal criterion says nothing about the algebraic signs of the solution values, i.e., even though we are assured of a unique solution when |A| + 0, we may sometimes get negative solution values that are economically inadmissible. Example 9 Does the equation system Ty — 3x, — 3x5 =7 2x, +4x,+ x, =0 =2x,- x, =2 possess a unique solution? The determinant |A| is 7-3 <3 mie (mo) He = 2 4 1)=-840 0 -2 -1 Therefore a unique solution does exist. Rank of a Matrix Redefined “ ‘The rank of a matrix A was earlier defined to be the maximum number of linearly independent rows in 4. 
Rank of a Matrix Redefined

The rank of a matrix A was earlier defined to be the maximum number of linearly independent rows in A. In view of the link between row independence and the nonvanishing of the determinant, we can redefine the rank of an m × n matrix as the maximum order of a nonvanishing determinant that can be constructed from the rows and columns of that matrix. The rank of any matrix is a unique number. Obviously, the rank can at most be m or n, whichever is smaller, because a determinant is defined only for a square matrix, and from a matrix of dimension, say, 3 × 5, the largest possible determinants (vanishing or not) will be of order 3. Symbolically, this fact may be expressed as follows:

    r(A) ≤ min{m, n}

which is read: "the rank of A is less than or equal to the minimum of the set of two numbers m and n." The rank of an n × n nonsingular matrix A must be n; in that case, we may write r(A) = n.

Sometimes, one may be interested in the rank of the product of two matrices. In that case, the following rule is of use:

    r(AB) ≤ min{r(A), r(B)}
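The redefinition suggests a brute-force rank routine: hunt for the largest square submatrix with a nonvanishing determinant. A sketch, fine for small matrices though inefficient for large ones:

    import numpy as np
    from itertools import combinations

    def rank_by_determinants(A, tol=1e-10):
        """Rank = order of the largest nonvanishing subdeterminant of A."""
        A = np.asarray(A, dtype=float)
        m, n = A.shape
        for k in range(min(m, n), 0, -1):
            for rows in combinations(range(m), k):
                for cols in combinations(range(n), k):
                    if abs(np.linalg.det(A[np.ix_(rows, cols)])) > tol:
                        return k
        return 0

    A = [[3, 4, 5, 1, 0], [0, 1, 2, 0, 1], [6, 8, 10, 2, 0]]   # 3 x 5, third row = 2 x first
    print(rank_by_determinants(A), np.linalg.matrix_rank(A))   # 2 2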
Since |A*| = 0, because of its two identical rows, the expansion by alien cofactors shown in (5.9) will of necessity yield a value of zero also. Property VI is valid for determinants of all orders and applies when a determinant is expanded by the alien cofactors of any row or any column. Thus ‘we may state, in general, that for a determinant of order n the following holds: determinant |4* , which differs from |A| only in its second L4G; (i#i’) [expansion by ith row and cofactors of i’th row] (5.10), LalGyl (43) — [expansion by jth column and cofactors of j’th column] Carefully compare (5.10) with (5.8). In the latter (regular Laplace expansion), the subscripts of a,, and of |C,,| must be identical in each product term in the sum. In the expansion by alien cofactors, such as in (5.10), on the other hand, one of the two subscripts (a chosen value of i’ or j’) is inevitably “out of place.” LINEAR MODELS AND MATRIX ALGEBRA (CONTINUED) 105 Matrix Inversion Property VI, as summarized in (5.10), is of direct help in developing a method of matrix inversion, i.e., of finding the inverse of a matrix. Assume that an n X n nonsingular matrix A is given: (5.1) A =[9 S22 * Gan (JA| #0) Bm Ayr” Fan Since each element of 4 has a cofactor |C;,|, it is possible to form a matrix of cofactors by replacing each element a,, in (5.11) with its cofactor |C,,|. Such a cofactor matrix, denoted by C= [|C,j|], must also be 1 X n, For our present purposes, however, the transpose /of Cis of more interest. This transpose C’ is commonly referred to as the adjoint of 4 and is symbolized by adj A. Written out, the adjoint takes the form nl IGuloves 1Gal (5.12) c sada} !Crl IGal + IGal (xem) eee vee 1Cinl VGanl [Gan The matrices A and C’ are conformable for multiplication, and their product AC" is another n X n matrix in which each element is a sum of products. By utilizing the formula for Laplace expansion as well as Property VI of determi- nants, the product AC’ may be expressed as follows: Lalo Layla) LaylG eo oo j-t Lael LaylGl Lael AC’ =| j=1 dnt I (on Lahey) LaylGl 2 LaylGyl im i i VA = cee 0 1Al =|. 0. : [by (5.8) and (5.10)] 0 0 | 10 0 ol 0 tlie. 6 -|= IAL, — [factoring] 106 STATIC (OR EQUILIBRIUM) ANALYSIS As the determinant |4| is a nonzero scalar, it is permissible to divide both sides of the equation AC’ = |A|J by |A|. The result is AC. c saat or A 14] lA] Premultiplying both sides of the last equation by A~', and using the result that ie ne Oya AWM = 1, we can get > = A~', or 1 (5.13) Att = jay 844 [by (5.12)] Now, we have found a way to invert the matrix A! The general procedure for finding the inverse of a square matrix A thus involves the following steps: (I) find || [we need to proceed with the subsequent steps if and only if |A| #0, for if || = 0, the inverse in (5.13) will be undefined]; (2) find the cofactors of all the elements of 4, and arrange them as a cofactor matrix C = [|C,,|]; (3) take the transpose of C to get adj 4; and (4) divide adj A by the determinant |A|. The result will be the desired inverse A~'. Example 2 Find the inverse of A = a 2]. Since |A| = —2 +0, the inverse A”! exists, The cofactor of each element is in this case a 1X 1 determinant, which is simply defined as the scalar element of that determinant itself (that is, |a,,| = a,,). Thus, we have jal aflenl Int} _ fo 3 bal IGul Gal lee Observe the minus signs attached to 1 and 2, as required for cofactors. 
Transposing the cofactor matrix yields

    adj A = [0 -2; -1 3]

so the inverse A^{-1} can be written as

    A^{-1} = (1/|A|) adj A = -(1/2) [0 -2; -1 3] = [0 1; 1/2 -3/2]

Example 3 Find the inverse of B = [4 1 -1; 0 3 2; 3 0 7]. Since |B| = 99 ≠ 0, the inverse B^{-1} also exists. The cofactor matrix is

    C = [ |3 2; 0 7|    -|0 2; 3 7|    |0 3; 3 0|;
         -|1 -1; 0 7|    |4 -1; 3 7|  -|4 1; 3 0|;
          |1 -1; 3 2|   -|4 -1; 0 2|   |4 1; 0 3| ]
      = [21 6 -9; -7 31 3; 5 -8 12]

Therefore,

    adj B = C' = [21 -7 5; 6 31 -8; -9 3 12]

and the desired inverse matrix is

    B^{-1} = (1/|B|) adj B = (1/99) [21 -7 5; 6 31 -8; -9 3 12]

You can check that the results in the above two examples do satisfy AA^{-1} = A^{-1}A = I and BB^{-1} = B^{-1}B = I, respectively.
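The four-step procedure is short enough to code directly. The sketch below builds the cofactor matrix, forms the adjoint as in (5.13), reproduces Example 2's inverse, and exhibits Property VI along the way:

    import numpy as np

    def cofactor_matrix(A):
        """The matrix C = [|Cij|] of signed minors."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        C = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
                C[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
        return C

    def inverse_via_adjoint(A):
        detA = np.linalg.det(A)
        if abs(detA) < 1e-12:
            raise ValueError("singular matrix: no inverse")
        return cofactor_matrix(A).T / detA        # adj A divided by |A|

    print(inverse_via_adjoint([[3., 2], [1, 0]]))     # [[0, 1], [0.5, -1.5]]

    M = np.array([[4., 1, 2], [5, 2, 1], [1, 0, 3]])  # Example 1's determinant
    print(M[0] @ cofactor_matrix(M)[1])               # 0: alien-cofactor expansion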
EXERCISE 5.4

1 Suppose that we expand a fourth-order determinant by its third column and the cofactors of the second-column elements. How would you write the resulting sum of products in Σ notation? What will be the sum of products in Σ notation if we expand it by the second row and the cofactors of the fourth-row elements?

2 Find the inverse of each of the following 2 × 2 matrices: (a) A = … (b) B = … (c) C = … (d) D = …

3 (a) Drawing on your answers to the preceding problem, formulate a two-step rule for finding the adjoint of a given 2 × 2 matrix A: in the first step, indicate what should be done to the two diagonal elements of A in order to get the diagonal elements of adj A; in the second step, indicate what should be done to the two off-diagonal elements of A. (Warning: this rule applies only to 2 × 2 matrices.)
(b) Add a third step which, in conjunction with the previous two steps, yields the 2 × 2 inverse matrix A^{-1}.

4 Find the inverse of each of the following matrices: (a) E = … (b) F = … (c) G = … (d) H = …

5 Is it possible for a matrix to be its own inverse?

5.5 CRAMER'S RULE

The method of matrix inversion just discussed enables us to derive a convenient, practical way of solving a linear-equation system, known as Cramer's rule.

Derivation of the Rule

Given an equation system Ax = d, where A is n × n, the solution can be written as

    x̄ = A^{-1}d = (1/|A|)(adj A) d    [by (5.13)]

provided A is nonsingular. According to (5.12), this means that

    x̄ = (1/|A|) [|C11| |C21| ⋯ |Cn1|; |C12| |C22| ⋯ |Cn2|; ⋯; |C1n| |C2n| ⋯ |Cnn|] [d1; d2; ⋯; dn]
      = (1/|A|) [Σ_i di|Ci1|; Σ_i di|Ci2|; ⋯; Σ_i di|Cin|]

Equating the corresponding elements on the two sides of the equation, we obtain the solution values

(5.14)  x̄j = (1/|A|) Σ_{i=1}^{n} di|Cij|

The Σ terms in (5.14) may look unfamiliar. What do they mean? From (5.8), we see that the Laplace expansion of a determinant |A| by its first column can be expressed in the form Σ_i ai1|Ci1|. If we replace the first column of |A| by the column vector d but keep all the other columns intact, then a new determinant will result, which we can call |A1|, the subscript 1 indicating that the first column has been replaced by d. The expansion of |A1| by its first column (the d column) will yield the expression Σ_i di|Ci1|, because the elements di now take the place of the elements ai1. Returning to (5.14), we see therefore that

    x̄1 = (1/|A|) Σ_i di|Ci1| = |A1| / |A|

Similarly, if we replace the second column of |A| by the column vector d, while retaining all the other columns, the expansion of the new determinant |A2| by its second column (the d column) will result in the expression Σ_i di|Ci2|. When divided by |A|, this latter sum will give us the solution value x̄2; and so on. This procedure can now be generalized. To find the solution value of the jth variable xj, we can merely replace the jth column of the determinant |A| by the constant terms d1, …, dn to get a new determinant |Aj| and then divide |Aj| by the original determinant |A|. Thus, the solution of the system Ax = d can be expressed as

(5.15)  x̄j = |Aj| / |A| = (1/|A|) |a11 ⋯ d1 ⋯ a1n; a21 ⋯ d2 ⋯ a2n; ⋯; an1 ⋯ dn ⋯ ann|
                          (the jth column replaced by d)

The result in (5.15) is the statement of Cramer's rule. Note that, whereas the matrix inversion method yields the solution values of all the endogenous variables at once (x̄ is a vector), Cramer's rule can give us the solution value of only a single endogenous variable at a time (x̄j is a scalar).

Example 1 Find the solution of the equation system

    5x1 + 3x2 = 30
    6x1 - 2x2 = 8

The coefficients and the constant terms give the following determinants:

    |A| = |5 3; 6 -2| = -28        |A1| = |30 3; 8 -2| = -84        |A2| = |5 30; 6 8| = -140

Therefore, by virtue of (5.15), we can immediately write

    x̄1 = |A1|/|A| = -84/-28 = 3        x̄2 = |A2|/|A| = -140/-28 = 5

Example 2 Find the solution of the equation system

     7x1 -  x2 -  x3 = 0
    10x1 - 2x2 +  x3 = 8
     6x1 + 3x2 - 2x3 = 7

The relevant determinants are found to be

    |A| = |7 -1 -1; 10 -2 1; 6 3 -2| = -61        |A1| = |0 -1 -1; 8 -2 1; 7 3 -2| = -61
    |A2| = |7 0 -1; 10 8 1; 6 7 -2| = -183        |A3| = |7 -1 0; 10 -2 8; 6 3 7| = -244

thus the solution values of the variables are

    x̄1 = |A1|/|A| = -61/-61 = 1        x̄2 = |A2|/|A| = -183/-61 = 3        x̄3 = |A3|/|A| = -244/-61 = 4

Notice that in each of these examples we find |A| ≠ 0. This is a necessary condition for the application of Cramer's rule, as it is for the existence of the inverse A^{-1}. Cramer's rule is, after all, based upon the concept of the inverse matrix, even though in practice it bypasses the process of matrix inversion.
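Cramer's rule is only a few lines of code: copy the coefficient matrix, overwrite one column with d, and take the ratio of determinants. Both examples above check out:

    import numpy as np

    def cramer(A, d):
        A = np.asarray(A, dtype=float)
        d = np.asarray(d, dtype=float)
        detA = np.linalg.det(A)
        x = np.empty(len(d))
        for j in range(len(d)):
            Aj = A.copy()
            Aj[:, j] = d                       # jth column replaced by d
            x[j] = np.linalg.det(Aj) / detA    # (5.15)
        return x

    print(cramer([[5, 3], [6, -2]], [30, 8]))                          # [3. 5.]
    print(cramer([[7, -1, -1], [10, -2, 1], [6, 3, -2]], [0, 8, 7]))   # [1. 3. 4.]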
Note on Homogeneous-Equation Systems

The equation systems Ax = d considered above can have any constants in the vector d. If d = 0, that is, if d1 = d2 = ⋯ = dn = 0, we obtain the special case Ax = 0, where 0 is a zero vector. This case is referred to as a homogeneous-equation system.* If the matrix A is nonsingular, a homogeneous-equation system can yield only the "trivial solution," namely, x̄1 = x̄2 = ⋯ = x̄n = 0. This follows from the fact that the solution x̄ = A^{-1}d will in this case become

    x̄ = A^{-1} 0 = 0
  (n×1)(n×n)(n×1) (n×1)

Alternatively, this outcome can be derived from Cramer's rule. The fact that d = 0 implies that |Aj|, for every j, must contain a whole column of zeros, and thus the solution will turn out to be

    x̄j = |Aj|/|A| = 0/|A| = 0    (for all j)

Curiously enough, the only way to get a nontrivial solution from a homogeneous-equation system is to have |A| = 0, that is, to have a singular coefficient matrix A! In that event, we have

    x̄j = |Aj|/|A| = 0/0

where the expression 0/0 is not equal to zero but is, rather, something undefined. Consequently, Cramer's rule is not applicable. This does not mean that we cannot obtain solutions; it means only that we cannot get a unique solution.

Consider the homogeneous-equation system

(5.16)  a11x1 + a12x2 = 0
        a21x1 + a22x2 = 0

It is self-evident that x̄1 = x̄2 = 0 is a solution, but that solution is trivial. Now, assume that the coefficient matrix A is singular, so that |A| = 0. This implies that the row vector [a11 a12] is a multiple of the row vector [a21 a22]; consequently, one of the two equations is redundant. By deleting, say, the second equation from (5.16), we end up with one (the first) equation in two variables, the solution of which is x̄1 = (-a12/a11)x̄2. This solution is nontrivial and well defined if a11 ≠ 0, but it really represents an infinite number of solutions because, for every possible value of x̄2, there is a corresponding value x̄1 such that the pair constitutes a solution. Thus no unique nontrivial solution exists for this homogeneous-equation system. This last statement is also generally valid for the n-variable case.

* The word "homogeneous" describes the property that when all the variables x1, …, xn are multiplied by the same number, the equation system will remain valid. This is possible only if the constant terms (those unattached to any xi) are all zero.

Solution Outcomes for a Linear-Equation System

Our discussion of the several variants of the linear-equation system Ax = d reveals that as many as four different types of solution outcome are possible. For a better overall view of these variants, we list them in tabular form in Table 5.1.

Table 5.1 Solution outcomes for a linear-equation system Ax = d

                                       d ≠ 0                            d = 0
                                       (nonhomogeneous system)          (homogeneous system)
  |A| ≠ 0                              There exists a unique,           There exists a unique,
  (matrix A nonsingular)               nontrivial solution x̄ ≠ 0        trivial solution x̄ = 0
  |A| = 0, equations dependent         There exist an infinite          There exist an infinite
  (matrix A singular)                  number of solutions (not         number of solutions (in-
                                       including the trivial one)       cluding the trivial one)
  |A| = 0, equations inconsistent      No solution exists               [Not applicable]
  (matrix A singular)

As a first possibility, the system may yield a unique, nontrivial solution. This type of outcome can arise only when we have a nonhomogeneous system with a nonsingular coefficient matrix A. The second possible outcome is a unique, trivial solution, and this is associated with a homogeneous system with a nonsingular matrix A. As a third possibility, we may have an infinite number of solutions. This eventuality is linked exclusively to a system in which the equations are dependent (i.e., in which there are redundant equations). Depending on whether the system is homogeneous, the trivial solution may or may not be included in the set of the infinite number of solutions. Finally, in the case of an inconsistent equation system, there exists no solution at all. From the point of view of a model builder, the most useful and desirable outcome is, of course, that of a unique, nontrivial solution x̄ ≠ 0.
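Table 5.1's classification can be automated. The sketch below uses the determinant for the nonsingular branch and, for the singular branch, the standard rank comparison between A and the augmented matrix [A | d], a test not developed in the text but equivalent to its consistency argument:

    import numpy as np

    def classify(A, d, tol=1e-10):
        A = np.asarray(A, dtype=float)
        d = np.asarray(d, dtype=float)
        if abs(np.linalg.det(A)) > tol:
            return "unique solution", np.linalg.solve(A, d)
        aug = np.hstack([A, d.reshape(-1, 1)])
        if np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug):
            return "infinite number of solutions", None
        return "no solution", None

    print(classify([[10, 4], [5, 2]], [12, 6]))   # dependent equations
    print(classify([[10, 4], [5, 2]], [12, 0]))   # inconsistent equations
    print(classify([[10, 4], [5, 3]], [0, 0]))    # unique, trivial solution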
EXERCISE 5.5

1 Use Cramer's rule to solve the following equation systems:
(a) 3x1 - 2x2 = 11        (c) 8x1 - 7x2 = 9
    2x1 +  x2 = 12             x1 +  x2 = 3
(b) -x1 + 3x2 = -3        (d) 6x1 + 9x2 = 15
    4x1 -  x2 = 12             7x1 - 3x2 = 4

2 For each of the equation systems in the preceding problem, find the inverse of the coefficient matrix, and get the solution by the formula x̄ = A^{-1}d.

3 Use Cramer's rule to solve the following equation systems:
(a) 8x1 -  x2       = 16        (c) 4x + 3y - 2z = 7
         2x2 + 5x3  = 5              x +  y      = 5
    2x1       + 3x3 = 7             3x       + z = 4
(b) -x1 + 3x2 + 2x3 = 24        (d) -x + y + z = 2
     x1       +  x3 = 6              x - y + z = 4
          5x2 -  x3 = 8              x + y - z = 6

4 Show that Cramer's rule can be derived alternatively by the following procedure. Multiply both sides of the first equation in the system Ax = d by the cofactor |C1j|, and then multiply both sides of the second equation by the cofactor |C2j|, etc. Add all the newly obtained equations. Then assign the values 1, 2, …, n to the index j, successively, to get the solution values x̄1, x̄2, …, x̄n as shown in (5.14).

5.6 APPLICATION TO MARKET AND NATIONAL-INCOME MODELS

Simple equilibrium models such as those discussed in Chap. 3 can be solved with ease by Cramer's rule or by matrix inversion.

Market Model

The two-commodity market model described in (3.12) can be written (after eliminating the quantity variables) as a system of two linear equations, as in (3.13):

    c1P1 + c2P2 = -c0
    γ1P1 + γ2P2 = -γ0

The three determinants needed, |A|, |A1|, and |A2|, have the following values:

    |A|  = |c1 c2; γ1 γ2| = c1γ2 - c2γ1
    |A1| = |-c0 c2; -γ0 γ2| = c2γ0 - c0γ2
    |A2| = |c1 -c0; γ1 -γ0| = c0γ1 - c1γ0

Therefore the equilibrium prices must be

    P̄1 = |A1|/|A| = (c2γ0 - c0γ2)/(c1γ2 - c2γ1)        P̄2 = |A2|/|A| = (c0γ1 - c1γ0)/(c1γ2 - c2γ1)

which are precisely those obtained in (3.14) and (3.15). The equilibrium quantities can be found, as before, by setting P1 = P̄1 and P2 = P̄2 in the demand or supply functions.

National-Income Model

The simple national-income model cited in (3.23) can also be solved by the use of Cramer's rule. As written in (3.23), the model consists of the following two simultaneous equations:

    Y = C + I0 + G0
    C = a + bY        (a > 0,  0 < b < 1)

Rearranged into the Ax = d format, with Y as the first variable, these become

    [1 -1; -b 1] [Y; C] = [I0 + G0; a]

so that |A| = 1 - b, |A1| = |I0+G0 -1; a 1| = I0 + G0 + a, and |A2| = |1 I0+G0; -b a| = a + b(I0 + G0). Cramer's rule therefore yields the equilibrium income and consumption

    Ȳ = (I0 + G0 + a)/(1 - b)        C̄ = [a + b(I0 + G0)]/(1 - b)

the same results as those obtained in Chap. 3.

5.7 LEONTIEF INPUT-OUTPUT MODELS

Any given final-demand vector d, together with the corresponding solution output levels x̄1, x̄2, and x̄3, must entail a definite required amount of the primary input. Would the amount required be consistent with what is available in the economy? On the basis of (5.20), the required primary input may be calculated as follows:

    Σ_{j=1}^{3} a0j x̄j = 0.3(24.84) + 0.3(20.68) + 0.4(18.36) = $21.00 billion

Therefore, the specific final demand d = [10; 5; 6] (in billions of dollars) will be feasible if and only if the available amount of the primary input is at least $21 billion. If the amount available falls short, then that particular production target will, of course, have to be revised downward accordingly.

One important feature of the above analysis is that, as long as the input coefficients remain the same, the inverse T^{-1} = (I - A)^{-1} will not change; therefore only one matrix inversion needs to be performed, even if we are to consider a hundred or a thousand different final-demand vectors, such as a spectrum of alternative development targets. This can mean considerable savings in computational effort as compared with the elimination-of-variable method, especially if large equation systems are involved. Note that this advantage is not shared by Cramer's rule. By the latter rule, the solution will be calculated according to the formula x̄j = |Tj|/|T|, but each time a different final-demand vector d is used, we must reevaluate the determinants |Tj|. This would be more time-consuming than the multiplication of a known T^{-1} by a new vector d.

Finding the Inverse by Approximation

For large equation systems, the task of inverting a matrix can be exceedingly lengthy and tedious. Even though computers can aid us, simpler computational schemes would still be desirable. For the input-output models under consideration, there does exist a method of finding an approximation to the inverse T^{-1} = (I - A)^{-1} to any desired degree of accuracy; thus it is possible to avoid the process of matrix inversion entirely.

Let us first consider the following matrix multiplication (m = a positive integer):

    (I - A)(I + A + A^2 + ⋯ + A^m)
        = (I + A + A^2 + ⋯ + A^m) - (A + A^2 + ⋯ + A^m + A^{m+1})
        = I - A^{m+1}

Had the result of the multiplication been the identity matrix I alone, we could have taken the matrix sum (I + A + A^2 + ⋯ + A^m) as the inverse of (I - A). It is the presence of the -A^{m+1} term that spoils things! Fortunately, though, there remains for us a second-best course, for if the matrix A^{m+1} can be made to approach an n × n null matrix as m increases, then I - A^{m+1} will approach I, and accordingly the said sum matrix (I + A + A^2 + ⋯ + A^m) will approach the desired inverse (I - A)^{-1}.
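The whole computation, including the series approximation of the inverse, fits in a few lines. The input-coefficient matrix below is an assumption on my part: it is the standard three-industry example consistent with the solution values and primary-input coefficients quoted above.

    import numpy as np

    A = np.array([[0.2, 0.3, 0.2],     # assumed input-coefficient matrix
                  [0.4, 0.1, 0.2],
                  [0.1, 0.3, 0.2]])
    d = np.array([10.0, 5.0, 6.0])     # final demand, in billions

    T = np.eye(3) - A                  # the Leontief matrix I - A
    x = np.linalg.solve(T, d)
    print(np.round(x, 2))              # [24.84 20.68 18.36]

    a0 = 1 - A.sum(axis=0)             # primary-input coefficients [0.3 0.3 0.4]
    print(round(a0 @ x, 2))            # 21.0: required primary input, in billions

    # approximate (I - A)^(-1) by I + A + A^2 + ... + A^m
    approx = np.eye(3)
    power = np.eye(3)
    for _ in range(60):
        power = power @ A
        approx += power
    print(np.allclose(approx, np.linalg.inv(T)))   # True: the series has converged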
