Engineering Math Review
Dr Colin Turner
October 15, 2009
Copyright Notice
The contents of this document are protected by a Creative Commons
License, which allows copying and modification with attribution, but not
commercial reuse. Please contact me for more details.
http://creativecommons.org/licenses/by-nc/2.0/uk/
Queries
Should you have any queries about these notes, you should approach your
lecturer or your tutor as soon as possible. Don't be afraid to ask questions: it
is possible you may have found an error, and if you have not, your questions
will help your lecturer or tutor understand the problems you are experiencing.
As mathematics is cumulative, it will be very hard to continue the module
with outstanding problems from the start; a bit of work at this point will
make the rest much easier going.
Practice
Mathematics requires practice. No matter how simple a procedure may
look when demonstrated in a lecture or a tutorial, you can have no idea how
well you can perform it until you try. As there is very little opportunity
for practice in the university environment, it is vitally important that you
attempt the questions provided in the tutorial, preferably before attending
the relevant tutorial class. Your time with your tutor will be best spent when
you arrive at the class with a list of problems you are unable to tackle; the
more specific, the better. If you find the questions too hard before the tutorial,
do not become discouraged: the mere act of thinking about the problem will
have a positive effect on your understanding of the problem once it is explained
to you in the tutorial.
Contact Details
My contact details are as follows:
Name: Dr Colin Turner
Room: 5F10
Phone: 68084 (+44-28-9036-8084 externally)
Email: [email protected]
WWW: http://newton.engj.ulst.ac.uk/crt/
Contents
1 Preliminaries
1.1 Introduction
1.2 Notation
1.3 Arithmetic
1.4 Decimal Places & Significant Figures
1.5 Standard Form

2 Number Systems
2.1 Natural numbers
2.2 Prime numbers
2.3 Integers
2.4 Real numbers
2.5 Rational numbers
2.6 Irrational Numbers

3 Basic Algebra
3.1 Rearranging Equations
3.2 Function Notation
3.3 Expansion of Brackets
3.4 Factorization
3.5 Laws of Indices
3.6 Laws of Surds
3.7 Quadratic Equations
3.8 Notation
3.9 Exponential and Logarithmic functions
3.10 Binomial Expansion
3.11 Arithmetic Progressions
3.12 Geometric Progressions

4 Trigonometry
4.1 Right-angled triangles
4.2 Notation
4.3 Table of values
4.4 Graphs of Functions
4.5 Multiple Solutions
4.6 Scalene triangles
4.7 Radian Measure
4.8 Identities
4.9 Trigonometric equations

5 Complex Numbers
5.1 Basic Principle
5.2 Examples
5.3 Argand Diagram Representation
5.4 Algebra of Complex Numbers
5.5 Definitions
5.6 Representation
5.7 De Moivre's Theorem
5.8 Trigonometric functions

6 Vectors & Matrices
6.1 Vectors
6.2 Matrices
6.3 Matrix Arithmetic
6.4 Determinant of a matrix
6.5 Inverse of a matrix
6.6 Matrix algebra
6.7 Solving equations
6.8 Row Operations

7 Graphs of Functions
7.1 Simple graph plotting
7.2 Important functions
7.3 Transformations on graphs
7.4 Even and Odd functions

8 Coordinate geometry
8.1 Elementary concepts
8.2 Equation of a straight line

9 Differential Calculus
9.1 Concept
9.2 Notation
9.3 Rules & Techniques
9.4 Examples
9.5 Tangents
9.6 Turning Points
9.7 Newton-Raphson
9.8 Partial Differentiation
9.9 Small Changes

10 Integral Calculus
10.1 Concept
10.2 Rules & Techniques
10.3 Examples

12 Differential Equations
12.1 Concept
12.2 Exact D.E.s
12.3 Variables separable D.E.s
12.4 First order linear D.E.s
12.5 Second order D.E.s

14.4 Triple integrals
14.5 Change of variable

15 Fourier Series
15.1 Periodic functions
15.2 Sets of functions
15.3 Fourier concepts
15.4 Important functions
15.5 Trigonometric expansions
15.6 Harmonics
15.7 Examples
15.8 Exponential Series

16 Laplace transforms
16.1 Definition
16.2 Important Transforms
16.3 Transforming derivatives
16.4 Transforming integrals
16.5 Differential Equations
16.6 Other theorems
16.7 Heaviside unit step function
16.8 The Dirac Delta
16.9 Transfer Functions

19 Probability
19.1 Events
19.2 Multiple Events
19.3 Probability Laws
19.4 Discrete Random Variables
19.5 Continuous Random Variables

20 The Normal Distribution
20.1 Definition
20.2 Standard normal distribution
20.3 The central limit theorem
20.4 Finding the Population mean
20.5 Hypothesis Testing
20.6 Difference of two normal distributions

A Statistical Tables
B Greek Alphabet
List of Tables
1.1 Basic notation
1.2 The law of signs
1.3 Order of precedence
1.4 SI prefixes for large numbers
1.5 SI prefixes for small numbers
5.1 Powers of j
18.1 An example of class intervals
19.1 Probabilities for total of two rolled dice
19.2 Calculating E(X) and var(X) for two rolled dice
A.1 Table of Φ(x) (Normal Distribution)
A.2 Table of χ² distribution (Part I)
A.3 Table of χ² distribution (Part II)
List of Figures
6.1 Vector Addition
6.2 Vector Subtraction
7.1 The graph of x^2 + 2x − 3
7.2 The graph of 2x + 3
7.3 The graph of 1/x
7.4 The graph of 1/x^2
7.5 The graph of e^x
7.6 The graph of ln x
7.7 Closeup of graph of ln x
7.8 Graph of sin x
7.9 Graph of sin x + 1
7.10 Graph of 2 sin x
7.11 Graph of sin(x + 90)
7.12 Closeup of graph of sin(2x)
An L and R circuit
An L, C, and R circuit
The unit step function u(t)
The displaced unit step function u(t − c)
Building functions that are on and off when we please
A positive waveform built from steps
A waveform built from steps
A waveform built from delayed linear functions
An impulse train built from Dirac deltas
Chapter 1
Preliminaries
1.1 Introduction
We shall start the course by recapping many definitions and results that
may already be well known. As mathematics is a cumulative subject, it is
necessary, however, to ensure that all the basics are in place before we can
go on.
We assume the reader is familiar with the elementary arithmetic of numbers: positive, negative and zero. We also assume the reader is familiar with
the decimal representation of numbers, and that they can evaluate simple
expressions, including fractional arithmetic.
1.2 Notation
We now list some mathematical notation that we may use in the course, or
that may be encountered elsewhere; this is shown in table 1.1.
Another important bit of notation is . . . , which is used as a sort of
mathematician's etcetera. For example
1, 2, 3, . . . , 10 is shorthand for 1, 2, 3, 4, 5, 6, 7, 8, 9, 10.
1, 2, 3, . . . is shorthand for 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, etc.
It's probably worth noting that in algebra when we use letters to represent
numbers, then
=   equal to
≠   not equal to
<   less than
>   greater than
≡   equivalent to
≈   approximately equal to
⇒   implies
∞   infinity
Table 1.1: Basic notation
1.3 Arithmetic
1.3.1 The law of signs
1.3.2 Order of precedence
We are familiar with the fact that expressions inside brackets must be evaluated first; that is what the bracket signifies. However, without brackets there
is still an inherent order in which operations must be done. Consider this
simple calculation:
2 + 3 × 4
Opinion is usually split as to whether the answer is 20 or 14. The reason is that multiplication should be performed before addition, and so the
3 × 4 segment should be calculated first. Be aware that not all calculators
understand this; test yours with this calculation.
Calculations should be performed in the order shown in table 1.3.
Brackets
Division
Multiplication
Addition
Subtraction
Table 1.3: Order of precedence
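As a quick check of the rule (a worked line added here for illustration):
\[ 2 + 3 \times 4 = 2 + 12 = 14, \qquad (2 + 3) \times 4 = 5 \times 4 = 20. \]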
1.4 Decimal Places & Significant Figures
1.4.1 Decimal Places
1.4.2 Significant Figures
Sometimes decimal places are not the most appropriate way to define accuracy. There is no specific number of decimal places that suits all situations.
For example, if we quote the radius of the Earth in metres, then probably no
number of decimal places is appropriate for most purposes, as the answer
will not be that accurate, and there will be so many other figures before it
that they are unlikely to be significant.
An alternative often used is to specify a number of significant figures.
This is essentially the number of figures, counted from the first non-zero digit, that should be displayed.
Suppose that we specify four significant figures. Then the speed of light in
m/s is written as:
c = 299,792,458 ≈ 299,800,000 m/s
which can be written more simply again in standard form (see below). The
issue here is that the other figures are less likely to have any real impact on
the answer of a problem. Similarly the standard atomic mass of Uranium is
238.02891 g/mol ≈ 238.0 g/mol
since we only have four significant figures, we round after the zero. Note that
writing the zero helps indicate the precision of the answer.
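To see the difference between decimal places and significant figures on a small number (an added illustration):
\[ 0.0123456 \approx 0.012 \ (3\text{ d.p.}), \qquad 0.0123456 \approx 0.0123 \ (3\text{ s.f.}). \]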
1.5 Standard Form
In science, large and small numbers are often represented by standard form.
This takes the form
a.bcd × 10^n
if we are using four significant figures. For example, we saw above that to
four significant figures
c = 299,800,000 m/s = 2998 × 100,000 m/s
or, working a bit more, we move the decimal place each time to the left
(which divides the left hand number by ten), and multiply by another ten
on the right to compensate:
= 2.998 × 100,000,000 m/s
Now all that remains to do is to write the number on the right as a power
of ten. We count the zeros; there are eight, and so
c ≈ 2.998 × 10^8 m/s.
The same applies for small numbers. The light emitted by a Helium-Neon
laser has a wavelength of
λ = 0.000 000 632 8 m
but this is clearly rather unwieldy to write down. This time we move the
decimal place to the right until we get to just after the first non-zero digit. Each
time we do this we essentially multiply by ten, and so to compensate we
have to divide by ten. This can be represented by increasingly large negative
values of the power.¹
So here, we need to move the decimal place seven times to the right, and
so we will multiply by 10^-7:
λ = 6.328 × 10^-7 m.
1.5.1 Standard prefixes
There are a number of prefixes applied to large and small numbers to allow
us to write them more meaningfully. You will have met many of them before.
The prefixes for large numbers are shown in table 1.4, and those for small
numbers in table 1.5.
¹ We will have to wait a while, until 3.5, to see exactly why this is.
Prefix   Symbol   Meaning        Power of Ten
deca     da       tens           10^1
hecto    h        hundreds       10^2
kilo     k        thousands      10^3
Mega     M        millions       10^6
Giga     G        billions       10^9
Tera     T        trillions      10^12
Peta     P        quadrillions   10^15
Exa      E        quintillions   10^18
Table 1.4: SI prefixes for large numbers
Prefix   Symbol   Meaning          Power of Ten
deci     d        tenths           10^-1
centi    c        hundredths       10^-2
milli    m        thousandths      10^-3
micro    µ        millionths       10^-6
nano     n        billionths       10^-9
pico     p        trillionths      10^-12
femto    f        quadrillionths   10^-15
atto     a        quintillionths   10^-18
Table 1.5: SI prefixes for small numbers
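Combining standard form with these prefixes, a short added example using the laser wavelength from above:
\[ \lambda = 6.328 \times 10^{-7}\,\mathrm{m} = 632.8 \times 10^{-9}\,\mathrm{m} = 632.8\ \mathrm{nm}. \]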
Chapter 2
Number Systems
We remind ourselves of different sets of numbers that we will refer to later.
2.1 Natural numbers
2.2 Prime numbers
A prime number is a positive integer which has exactly two factors¹, namely
itself and one.
Thus 2 is the first prime number, and the only even prime number. So
the prime numbers are given by
2, 3, 5, 7, 11, 13, 17, 19, . . .
¹ Recall that a factor of a number x is one that divides into x with no remainder.
2.3 Integers
2.4 Real numbers
2.5 Rational numbers
2.6 Irrational Numbers
Of course, not all real numbers are rational, and in fact many numbers you
will already have met
are not. These numbers are called irrational numbers.
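Standard examples (added here, as the original examples were lost in extraction) include
\[ \sqrt{2}, \quad \pi, \quad e, \]
none of which can be written as a ratio of two integers.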
Chapter 3
Basic Algebra
Algebra is perhaps the most important part of mathematics to master. If
you do not, you will find problems in all the areas you study, all caused by
the underlying weakness in your algebra skills. In many ways, algebra is
representative of mathematics in that it deals with forming an easy problem
out of a difficult one.
3.1 Rearranging Equations
3.1.1 Example
x + 4 = 9
Well, it is simple to see what value x has in this case. However, we wish
to show how rearranging works in these very simple cases. We wish to find x,
and this is really saying we want to manipulate the equation into the form:
x =??
Where ?? represents the answer. Therefore, we wish to have an x on its
own, on one side of the equation, with everything else on the other side of
the equals sign. To that end, we start to look at what is attached to x, how
it is attached, and how we should remove it. In our example, 4 is attached
to the x by the process of addition. Now, how do you get rid of a 4 that has
been added? Of course, the answer is to subtract it, but we must not simply
do this on one side, rather in accordance with 3.1 we must do it on both
sides of the equation to maintain its validity.
So we obtain
x + 4 − 4 = 9 − 4
Now the +4 − 4 on the L.H.S. cancel, leaving zero, and this step wouldn't
normally be written. So we finally obtain
x = 9 − 4 = 5
You may observe that it appears that the +4 crossed the equals sign to
become a −4 on the R.H.S. However, now we know what has actually
happened.
3.1.2 Order of Rearranging
((3(x^2)) − 4) = 8
where the brackets serve simply to underline the order in which things are
done. The effect is somewhat similar to an onion with the x in the very centre.
We could peel the onion from the outside in for the most tidy approach. That
is, we remove things in the reverse order to the way they were attached in
the first place.
So in our simple example, the 4 is subtracted last, so remove it first,
(adding 4 on both sides).
3x^2 − 4 + 4 = 8 + 4 ⇒ 3x^2 = 12
Now the x still has two things attached, the three, which is multiplied
on, and the 2 which is a power. Powers are done before multiplication, so we
remove in the reverse order again. Therefore we divide by 3 on both sides.
(3/3) x^2 = 12/3 ⇒ x^2 = 4
Now we only have one thing stuck to the x, and that is the power of 2.
To remove this we simply take the square root on both sides:
x = ±2.
Recall that -2 squared is also 4.
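As a quick added check, substituting either value back into the original equation:
\[ 3(\pm 2)^2 - 4 = 3 \times 4 - 4 = 8. \]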
3.1.3 Example
Instead we first gather all the x terms together, and we do this by performing the same operation on both sides. For example, we don't want the
2x on the RHS; it is positive and so it is present by addition. We subtract it
on both sides.
4x + 6 − 2x = 2x − 3 − 2x ⇒ 2x + 6 = −3
So we now have a simpler equation, with x only on one side. We can
proceed as before now to remove things from the x on the LHS. Subtract 6
on both sides.
2x + 6 − 6 = −3 − 6 ⇒ 2x = −9
Finally divide by 2 on both sides.
x = −9/2
3.1.4 Example
3.1.5 Example
−3 = (10/x) − 5.
Solution
We have a more serious problem here, namely that x is on the bottom line.
We begin by removing the 5 to clarify the equation, by adding 5 on both
sides of course.
−3 + 5 = (10/x) − 5 + 5 ⇒ 2 = 10/x
Now, there's not much attached to x, but the x is still on the bottom line.
That means the x has been divided into something (the 10 in this case). To
cancel the division by x, we multiply by x on both sides.
2x = (10/x) × x ⇒ 2x = 10
which simplifies our equation quite a lot. We can now divide by two on
both sides to finish.
2x/2 = 10/2 ⇒ x = 5
3.2 Function Notation
For the simple example f(x) = 2x − 3, the inverse function works out as
f⁻¹(y) = (y + 3)/2, or equivalently f⁻¹(x) = (x + 3)/2.
Recall that, with our simple example for f(x),
f(2) = 2(2) − 3 = 1.
If we now feed this output into the input of the inverse, we should get
back to our starting position (2):
f⁻¹(1) = (1 + 3)/2 = 2.
Just as before we insert the value in the brackets into x throughout the
expression for the inverse function, and you can see that indeed the inverse
function here has taken us back to the start.
We will meet other examples of inverse functions throughout this module.
It's not always possible to do this, and not all functions have inverses
unfortunately.
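A short worked derivation of the inverse used above, following the rearranging method of 3.1 (added for completeness):
\[ y = 2x - 3 \;\Rightarrow\; y + 3 = 2x \;\Rightarrow\; x = \frac{y+3}{2}, \qquad \text{so } f^{-1}(y) = \frac{y+3}{2}. \]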
3.3 Expansion of Brackets
3.3.1 Examples
3.3.2 Brackets upon Brackets
That simple approach only works for two brackets multiplied together, each
of which has exactly two terms. We shall examine a general technique
without this shortcoming.
(a + b)(c + d)
Pick any bracket, for the sake of demonstration, we shall pick the first.
Now take the first term in it (which is a). We now multiply this term on
each term of the other bracket in turn, adding all the results.
= ac + ad +
When we reach the end of the other bracket, we return to the first bracket
and move onto the next term, which is now b and do the same again, adding
to our existing terms.
= ac + ad + bc + bd +
Now we return to the first bracket, and move to the next term. We find
we have actually exhausted our supply of terms, and so our expansion is
really complete.
(a + b)(c + d) = ac + ad + bc + bd
To multiply several brackets together at once we should multiply two only
at a time. For example
(a + b)(c + d)(e + f ).
We begin by multiplying one pair together, let us say the first two, to
obtain:
= (ac + ad + bc + bd)(e + f ).
We may then complete the expansion, it is left to the reader as an exercise
to confirm that the full expansion will be:
= ace + ade + bce + bde + acf + adf + bcf + bdf.
3.3.3 Examples
2.
(x + y)^2 = (x + y)(x + y) = x^2 + xy + xy + y^2 = x^2 + 2xy + y^2
(See binomial expansions later.)
3.
(x + y)(x − y) = x^2 − xy + xy − y^2 = x^2 − y^2
(This is called the difference of two squares.)
4.
(3x − 2y)(x + 3) = 3x^2 + 9x − 2xy − 6y
5.
(2x − y)(x + 2y) = 2x^2 + 4xy − xy − 2y^2 = 2x^2 + 3xy − 2y^2
3.4 Factorization
3.4.1 Examples
Let us look at some examples. Remember that in each case, expanding the
end result should give us our original expression, and this allows you to check
and follow the logic.
1.
3x + 12y^2 − 6z = 3(x + 4y^2 − 2z)
2.
x^3 + 3x^2 + 4x = x(x^2 + 3x + 4)
3.
x^3 + 3x^2 = x^2(x + 3)
4.
2x^2 + 4xy + 8x^2 z = 2x(x + 2y + 4xz)
We can also factorise expressions into two or more brackets multiplied
together, but this is more difficult and we shall examine it later.
3.5 Laws of Indices
The term index is a formal term for a power, such as squaring, cubing etc,
and the plural of index is indices.
There are some simple laws of indices, which are shown in table 3.1.
1. x^a × x^b = x^(a+b)
2. x^a ÷ x^b = x^(a−b)
3. (x^a)^b = x^(ab)
4. x^0 = 1
5. x^(−b) = 1 / x^b
6. x^(1/b) = ᵇ√x (the b-th root of x)
7. x^(a/b) = ᵇ√(x^a)
Table 3.1: Laws of indices
3.5.1 Example proofs
We shall attempt to show how a selection of these results work, but such
demonstrations are for understanding and are not examinable.
Let us consider the first law, with a concrete example:
x^3 × x^2
We don't know what the number x is, but all that is important is that
the base values of each number are the same.
We recall that powers mean a string of the same thing multiplied together,
so that:
x^3 × x^2 = (x × x × x) × (x × x) = x^5.
In general, x^a × x^b is a string of a copies of x multiplied by a string of
b copies of x, giving a + b copies in all, which is exactly x^(a+b).
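A quick numerical check of the first two laws (an added line):
\[ 2^3 \times 2^2 = 8 \times 4 = 32 = 2^5, \qquad 2^5 \div 2^2 = 32 \div 4 = 8 = 2^3. \]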
3.5.2 Examples
4.
16^(1/4) = ⁴√16 (= 2)
5.
15^(−3) = 1/15^3 (= 1/3375)
6.
27^(−2/3) = 1/27^(2/3) = 1/(³√27)^2 = 1/9
3.6 Laws of Surds
√a × √b = √(ab)
√(a/b) = √a / √b
3.6.1 Examples
The second law of surds is most often used to work out the square roots of
fractions.
1.
r
1
1
1
= =
4
2
4
2.
r
4
4
2
= =
7
7
7
The first law was often used to split large square roots into smaller ones,
by attempting to divide the original number by a perfect square.
1.
√12 = √(4 × 3) = √4 × √3 = 2√3
2.
√80 = √(16 × 5) = √16 × √5 = 4√5
3.7 Quadratic Equations
A quadratic equation is one that can be written in the form ax^2 + bx + c = 0,
where a ≠ 0.
3.7.1 Examples
Here are some examples of equations which are, and which are not quadratic
equations, shown in table 3.3. Equations 1,2 and 3 are genuine quadratics,
even though terms are missing in 2 and 3. Equation 4 is the equation of a
straight line, or if you like a = 0 which is not permitted. Equation 5 contains
an x3 term, and so is a cubic equation and not a quadratic whose highest
term must be x2 .
Equation                  Quadratic?
1. 2x^2 + 3x − 4 = 0      YES
2. x^2 + 2x = 0           YES
3. x^2 + 3 = 0            YES
4. 4x + 2 = 0             NO
5. x^3 + 3x − 2 = 0       NO
Table 3.3: Quadratic and non-quadratic equations
3.7.2 Graphical interpretation
3.7.3 Factorization
3.7.4 Quadratic solution formula
(x + b/(2a))^2 = b^2/(4a^2) − c/a = (b^2 − 4ac)/(4a^2)
x + b/(2a) = ±√((b^2 − 4ac)/(4a^2))
which, using the laws of surds (see 3.6), yields our final result:
x = (−b ± √(b^2 − 4ac)) / (2a)
3.7.5 The discriminant
3.7.6 Examples
1. In this case a = 1, b = 2, c = −3. The solution formula yields
x = (−2 ± √(2^2 − 4(1)(−3))) / 2
which when calculated out yields answers of 1 and −3.
2. In this case a = 1, b = −4, c = 4. The solution formula yields
x = (+4 ± √((−4)^2 − 4(1)(4))) / 2
which when calculated out yields answers of 2 and 2. The repeated solution
is nothing unusual, and indicates that this quadratic has only one solution.
3. In this case a = 1, b = 1, c = 1. We shall follow this case in more detail.
x = (−1 ± √(1^2 − 4(1)(1))) / 2
  = (−1 ± √(1 − 4)) / 2
  = (−1 ± √(−3)) / 2
Note that we have a negative square root here. We cannot calculate this:
suppose that the answer was a positive number; then when squared we would
get a positive number, not −3. We also get a positive number when we square
a negative number, and we get zero when we square zero. Thus no number
can be √(−3).
There are no solutions to this quadratic equation.
3.7.7 Special cases
b = 0
In this case we have
ax^2 + c = 0 ⇒ x^2 = −c/a
so that
x = ±√(−c/a).
Note that −c/a must be positive, and this will be the case only if a and c
have different signs; otherwise there are no solutions. Note also the ± in the
formula: solving directly often leads to forgetting the solution coming from
the negative branch of the square root.
c = 0
In this case we have
ax^2 + bx = 0 ⇒ x(ax + b) = 0
from which we obtain that either
x = 0
which is one solution, or
ax + b = 0 ⇒ x = −b/a
3.8 Notation
3.8.1 Modulus or absolute value
The modulus or absolute value of x, written |x|, is the larger of x and −x.
Examples
Here are some examples
|2| = larger of 2 and −2 = 2;
|−2| = larger of −2 and 2 = 2;
|0| = larger of 0 and −0 = 0.
In other words, the modulus function simply strips off any leading minus
sign on the number.
Alternative definitions
There are some alternative, but equivalent definitions of |x|:
|x| = x if x ≥ 0, and |x| = −x if x < 0
and
|x| = √(x^2)
where we adopt the convention that the square root takes the positive
branch only, unless we include ±, which is commonly accepted.
3.8.2 Sigma notation
∑_{k=m}^{n} f(k), which means f(m) + f(m+1) + ⋯ + f(n).
Examples
Here are some examples of sigma notation:
1.
∑_{k=0}^{4} 2^k = 2^0 + 2^1 + 2^2 + 2^3 + 2^4 = 31
2.
∑_{k=4}^{∞} k = 4 + 5 + 6 + 7 + 8 + ⋯
3.
∑_{k=2}^{5} (−1)^(k+1) (1/3^k) = (−1)^3 (1/3^2) + (−1)^4 (1/3^3) + (−1)^5 (1/3^4) + (−1)^6 (1/3^5)
                               = −1/3^2 + 1/3^3 − 1/3^4 + 1/3^5
3.8.3 Factorials
3.9 Exponential and Logarithmic functions
3.9.1 Exponential functions
y = k^x
where k is some positive constant. These functions are important due to
their extraordinary ability to climb or decrease, as we shall see later when
we examine their graphs.
3.9.2 Logarithmic functions
x:         10   100   1000
log₁₀ x:    1     2      3
The scales of pH (the chemical scale of acidity), the decibel range, and the
Richter scale of earthquake intensity are all logarithmic, base 10.
Therefore a pH of 6 is 10 times more acidic than the neutral 7, and 5 is
100 times more acidic than 7, etc.
¹ The Scottish mathematician John Napier (1550-1617) did a great deal of work on logarithms.
1. logₙ(xy) = logₙ x + logₙ y
2. logₙ(x ÷ y) = logₙ x − logₙ y
3. logₙ(x^y) = y logₙ x
4. logₙ n = 1
5. logₙ 1 = 0
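A quick numeric check of these laws in base 10 (an added line):
\[ \log_{10}(1000) = \log_{10}(10 \times 100) = \log_{10} 10 + \log_{10} 100 = 1 + 2 = 3. \]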
3.9.3 Logarithms to solve equations
To solve an equation of the form a^x = b, take logarithms on both sides:
logₙ(a^x) = logₙ b
x logₙ a = logₙ b   (using law 3 of the laws of logs)
x = logₙ b / logₙ a
3.9.4 Examples
1.
x = log 27 / log 3 = 3
2.
x = log 30 / log 3 = 3.0959
Let
u = 2^x so that u^2 = (2^x)^2 = 2^(2x)
by the laws of indices (see 3.5). Placing these into the equation yields
u^2 − 5u + 6 = 0
which is now a plain quadratic. Solving by whatever method yields u = 2
and u = 3 as solutions. Having dealt with the quadratic, we now deal with
the exponential problem; recall that u = 2^x.
Consider the u = 2 solution: this means 2^x = 2, so that clearly x = 1,
with no logarithms required.
The u = 3 solution proceeds as follows:
2^x = 3
and following the procedure above we obtain
x = log 3 / log 2 = 1.5850
3.9.5 Anti-logging
3.9.6 Examples
10^(log x^2) = 10^2
On the LHS, the anti-log and log cancel, leaving
x^2 = 10^2 = 100 ⇒ x = 10
2. In this case, the base of the logarithm is e. We could immediately take
anti-logs on both sides, but the 4 in front of the ln x makes it messy. It is
easier to rearrange to ln x first (onion within onion as before).
ln x = 70/4
3.10 Binomial Expansion
3.10.1 Theory
Consider
(p + q)^n
for all numbers, and in particular positive integers n.
We now examine the expansion of powers of (p + q), and let us start by
considering some examples³:
(p + q)^2 = p^2 + 2pq + q^2;
(p + q)^3 = p^3 + 3p^2 q + 3pq^2 + q^3;
(p + q)^4 = p^4 + 4p^3 q + 6p^2 q^2 + 4pq^3 + q^4.
If you examine this you will note some facts:
Patterns
Binomial expansions follow these rules:
the powers of each term add to the power of the expansion;
the powers of p begin with n and decrement each time until they reach
0;
³ These were produced by considering (p + q)(p + q) and expanding, and then multiplying by (p + q) again for each higher power.
the powers of q begin with 0 and increment each time until they reach
n.
Armed with these facts we can write out the expansion for any value of
n, except that we as yet can't work out the coefficients. However, we can
note something about these too:
Binomial coefficients follow these rules:
the coefficients on the first and last term are both 1;
the coefficients on the second and one but last term are both n;
if the coefficients are laid out in a triangle, then any one can be found
by adding the two above.
This triangle layout is shown in table 3.5 as far as n = 8 and is known as
Pascal's triangle⁴.
1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
1 6 15 20 15 6 1
1 7 21 35 35 21 7 1
1 8 28 56 70 56 28 8 1
Table 3.5: Pascal's triangle
⁴ Named after Blaise Pascal (1623-1662), the noted French mathematician, physicist and
philosopher. Due to his work in hydrostatics the SI unit of pressure takes his name, and
a computer language is named after him for his production of a calculating machine at
age 19. Pascal, along with Fermat and De Moivre, was a pioneer in the mathematics of
probability.
3.10.2
40
Example
3.10.3
Examples
We can deal with more difficult problems in the same way. Expand the
following expressions fully
1.
4
1
x
x
2.
(2a + b)5
Solutions
1. It is usually easier to do the simple expansion first. That is, expand out
(a + b)4 which is
a4 + 4a3 b + 6a2 b2 + 4ab3 + b4
and now let a = x and let b = y1 (note that the minus sign is included in b
itself). Now insert these into the expansion, inserting brackets for safety.
2
3
4
1
1
1
1
2
+ 6(x)
+ 4(x)
+
(x) + 4(x)
x
x
x
x
4
Which, with careful working out using the laws of signs gives us
2
3 4
1
1
1
1
4
3
2
= x 4x
+ 6x
4x
+
x
x
x
x
41
3.10.4
High values of n
n
X
(n Cr pnr q r ).
r=0
3.11
Arithmetic Progressions
3.11.1
Examples
3.11.2
Let
a, a + d, a + 2d, . . .
be an A.P.. Then the sum of the first n terms of the A.P., denoted Sn is
given by
Sn =
n
(2a + (n 1)d)
2
42
3.11.3
Example
The 4th term of an A.P. is 12 and the 8th term is 32 . Find the initial term,
the 3rd term, and sum of the first 100 terms.
43
44
Solution
Suppose the A.P. has initial term a and common difference d as before.
Then we obtain
a + 3d = 21
a + 7d = 32
(i)
(ii)
So here we have two equations in two unknowns, so we solve by subtracting (i) from (ii) to obtain
4d = 2
and so we have found that d = 12 . Now that weve found d we can use it
to find a by simply inserting the value for d into either equation. Inserting
it into (ii) yields:
1
3
a + 7( ) = a = 2.
2
2
We have found a and d and so we can now go on to finish off the question.
The 3rd term will be a + 2d and so is 2 + 2( 12 ) = 2 + 1 = 1.
and the sum of the first 100 terms is given by
S100
3.11.4
=
=
=
=
100
(2(2)
2
+ (100 1) 12 )
50(4 + 99( 12 ))
50(45.5)
2275
Example
The sum of the first ten terms of an A.P. is 10, the sum of the first hundred terms is 8900. Find the initial term and common difference of the
progression.
Solution
First of all we formulate the problem in mathematics rather than words.
(2a + 9d) = 10
S10 = 10
2
5(2a + 9d) = 10
2a + 9d = 2
(i)
3.12
Geometric Progressions
3.12.1
Examples
45
3.12.2
46
Let
a, ar, ar2 , ar3 , . . . , arn1 , . . .
be a G.P.. Then the sum of the first n terms of the G.P., denoted Sn is given
by
Sn =
a(1 rn )
1r
provided that r 6= 1.
Although it is not examinable, a simple proof is presented below:
We can write
Sn = a + ar + ar2 + + arn2 + arn1
and multiplying both sides by r we obtain
rSn = ar + ar2 + + arn1 + arn
just as in the proof for the sum for A.P.s, we combine these two equations,
this time by subtraction to obtain
Sn rSn = a arn (1 r)Sn = a(1 rn )
and dividing by 1r on both sides now gives us the result. Of course, the
r 6= 1 restriction comes from this devision, ensuring that we are not dividing
by zero.
3.12.3
Sum to infinity
Sometimes, we may be asked to find the sum to infinity of a G.P., that is,
the sum of all the terms added together. This is not a relevant question with
A.P.s by the way, as we keep on adding a sizeable chunk each time. The only
way the sum to infinity can exist is if the amount we add on with each new
term shrinks to negligible levels as n tends to infinity. For example, consider
that |r| < 1 in a given G.P., then every subsequent power of r gets smaller
and smaller numerically speaking.
The sum to infinity is given by
S =
a
; |r| < 1
1r
3.12.4
47
Example
The 6th term of a geometric progression is 32, and the 11th term is 1. Determine
1. the common ratio of the progression;
2. the initial term of the progression;
3. the sum to infinity of the progression.
Solution
1. We begin by forumlating the given information as equations.
ar5 = 32 (i)
ar10 = 1 (ii)
This gives us two equations in two unknowns. There are two main methods of solving these, you could for example rearrange each equation to give
a and then put those two expression equal to each other, and solve for r. We
will solve by simply dividing equation (i) into (ii) to obtain
1
1
ar10
=
r5 =
5
ar
32
32
You can see that the as cancel, and the rs combine according to the laws
of indices (see 3.5). We find r by taking the 5th root on both sides, showing
that r = 21 .
2. We now have to obtain a, and we can do this by inserting our discovered
value for r into one of our equations. If we pick (i) we obtain
1
a
a( )5 = 32
= 32 a = 32 32 = 1024.
2
32
3. We have now found the two numbers which characterise this progression, and we are hence in a position to answer almost any subsequent
question on it.
The sum to infinity exists if |r| < 1, and in our example this is certainly
the case. We can then plug in our values
S =
1024
= 2048.
1 21
3.12.5
Example
48
Chapter 4
Trigonometry
Trigonometry is an essential skill for performing many real life mathematical
calculations, including the analysis of simple harmonic motion and waves.
There is a strong link between trigonometry and complex numbers as
evidenced by the polar and exponential form of complex numbers.
We recall the most elementary principles quickly.
4.1
Right-angled triangles
4.1.1
Labelling
In a right-angled triangle, we call the side opposite the right angle (that is,
the side that does not touch the right angle) the hypotenuse.
Given any other angle (which clearly cannot be other than acute), we
label sides relative to that angle. Thus the side opposite this acute angle is
called simply the opposite and the remaining side becomes the adjacent.
It should be clear that the opposite and adjacent will switch, depending
on what acute angle is considered, while the hypotenuse remains the same
side at all times.
50
4.1.2
Pythagoras Theorem
4.1.3
Recall that
sin =
A
O
O
; cos = ; tan =
H
H
A
Pythagoras (c. 560-480 BC) was a Greek methematician and mystic who founded a
cult in which astronomy, geometry and especially numbers were central. Pythagoras was
exiled to Metapontum around 500 BC for the political ambitions of the cult.
4.1.4
1
1
1
; csc =
; cot =
cos
sin
tan
Procedure
4.1.5
Example
51
4.2 Notation
52
Solution
If we draw the triangle which reflects this situation then the path of the plane
is the hypotenuse, the length of 1000m along the ground is the adjacent and
the height is the opposite.
(a) We know the angle, and the adjacent, we want the hypotenuse. Looking above (see 4.1.3) we see we need cos, so write the equation, filling the
blanks.
cos 15 =
1000
1000
H cos 15 = 1000 H =
= 1035.276m
H
cos 15
(b) We could now use Pythgoras theorem (see 4.1.2) for this, but we shall
use trigonometry to demonstrate. We have the angle, adjacent and we want
the opposite. We see that tan is the function that associates these together.
O
O = 1000 tan 15 O = 267.949m
1000
All figures accurate to 3 decimal places.
tan 15 =
4.2
Notation
53
1
.
sin x
Remember that all trig function work on angles, and they are meaningless
without them. Therefore sin 20 does not mean sin 20, rather it is similar to
4.2.1
Example
where the brackets are added for emphasis. Now the tan1 and tan functions
cancel each other (thats the whole point) so we obtain
= tan1 (0.5) = 26.565
4.3
Table of values
The exact values of the sine, cosine and tan functions at important angles
are given in table 4.1.
Note that tan 90 is not defined.
4.4
Graphs of Functions
54
Angle
sin
cos
tan
30
1
2
3
2
2
2
3
3
60
2
2
3
2
90
180
45
1
2
4.5
Multiple Solutions
4.5.1
CAST diagram
We use a type of diagram called a CAST diagram (see figure 4.4 on page 57
to remind us of the behaviour of the three basic trig functions. The name of
the diagram comes from the single letter to be found in each quadrant. The
significance is as follows
A All functions are positive in this quadrant;
S Only sin is positive, cos and tan are negative;
T Only tan is positive, sin and cos are negative;
C Only cos is positive, sin and tan are negative.
The use of the diagram is detailed below. Note that the diagram here is
labelled in the range 0 to 360 , but it is as easy to label in the range 180
to 180 .
55
56
4.5.2
Procedure
4.5.3
Example
1
2
2.
sin =
in the range 180 < 180.
1
3
57
4.6
Scalene triangles
4.6.1
Labelling
For non-right-angled triangles, another naming convention and set of equations are required. We name the sides a, b and c in an arbitrary order.
We then name the angle opposite side a with A, the angle opposite side
b becomes labelled B and similarly for c and C.
It should be noted that there is a close relationship between angles and
their opposite sides. In particular the larger the angle, the larger the opposite
side.
This labelling is shown in figure 4.5 on page 59.
4.6.2
Scalene trigonmetry
4.6.3
Sine Rule
58
59
4.6.4
Cosine Rule
60
We can also rearrange each equation (left as an exercise for the reader)
to obtain
b 2 + c 2 a2
2bc
allowing us to obtain an angle given all three sides.
For technical reasons there is no ambiguous case for the cosine rule (the
second solution will lie in the range of 270 to 360 and thus cannot appear
in a triangle).
cos A =
4.6.5
Example
=
sin B =
= 0.965
a
b
12
13
12
Using the inverse function sin1 we may show that B = 74.853 , but this is
no right angled triangle, as we must check for multiple solutions. Using the
method shown above (see 4.5) we can show that B = 105.147 , and there
seems to be no reason why this could not be correct also.
We have no choice but to accept that this is an ambiguous question, and
find both possible triangles, splitting it into two cases.
(a) Consider the case B = 74.853 , then as all the angles add to 180 we
obtain that C = 42.147 . All that remains to be found is c. We could use
the sine rule for this, but we shall use the cosine rule for variety.
c2 = a2 + b2 2ab cos C c2 = 122 + 132 2(12)(13) cos 42.147 = 81.675.
Thus we obtain c = 9.037. This complete solution is
A = 63 , B = 74.853 , C = 42.147
a = 12, b = 13, c = 9.037
61
(b) Consider the case B = 105.147 . Then C = 11.853 , and using the cosine
rule shows that
c2 = 122 + 132 2(12)(13) cos 11.853 = 7.652
so that c = 2.766 and the complete solution in this case is
A = 63 , B = 105.147 , C = 11.853
a = 12, b = 13, c = 2.766
4.7
Radian Measure
There is nothing special about the degree. The fact that we have 360 degrees
in a circle is a historical accident, and has no mathematical significance.
There is a natural unit of angle, which is mathematically simple. This
unit is called the radian.
There are 2 radians in a circle, so the radian is quite big, there being
only slightly over 6 in the whole circle.
4.7.1
Conversion
4.7.2
Length of Arc
62
Degrees Radians
0
30
45
60
90
180
360
=
s=
s = r
2
2r
2
Exercise
It is left as an exercise for the reader to show that if the angle was in
degrees the formula would be
s=
r
180
4.7.3
Area of Sector
4.8 Identities
63
= 2 A=
A = r2
2
r
2
2
Exercise
It is left as an exercise for the reader to show that if the angle was in
degrees the formula would be
A=
r2
360
4.8
Identities
4.8 Identities
64
x + x = 2x
is always true, regardless of the value of x. This is an identity, an equation
that holds true for all values of the variables in it.
We may use trigonometric identities to change the form of expressions to
make them easier to deal with.
4.8.1
Basic identities
tan =
sin
cos
sin2 + cos2 = 1
1 + tan2 = sec2
cot2 + 1 = csc2
4.8.2
4.8.3
tan tan
1 tan tan
2 tan
1 tan2
4.9
65
Trigonmetric equations
4.9.1
Example
4.9.2
Example
6 cos2 + sin = 5
Solution
The first problem consists of getting rid of the cos2 term which is not in
terms of sin . We recall from 4.8.1 that
sin2 + cos2 = 1 cos2 = 1 sin2
so inserting this into our equation, we obtain
6(1 sin2 ) + sin = 5 6 6 sin2 + sin = 5
6 sin2 sin 1 = 0
which is actually a quadratic equation (see 3.7) which can be made more
clear by letting
s = sin s2 = sin2
so we obtain
1
1
6s2 s 1 = 0 (2s 1)(3s + 1) = 0 s = ; s =
2
3
which we could have solved using the solution formula of course. Now let
us not forget that s was a temporary place holder, we must now solve the
equations
1
1
sin = ; sin =
2
3
which we solved previously in example 4.5.3 so we can write down our
solutions as
= 30 ; = 150 ; = 19.471 ; = 160.529
4.9.3
Example
66
67
Solution
The principle of superposition says that we can algebraicly add these two
signals to find the resultant, but we want to find the resulting wave exactly.
When we add the signals together we get
sin(t) + 2cos(t)
and let us suppose the resulting signal is of the form r sin(t + ), then4
by the results in 4.8.2 we can expand this out to obtain
r(sin(t) cos + sin cos(t))
Comparing these two equations gives us
r cos = 1 (i)
r sin = 2 (ii)
If we divide equation (ii) by equation (i), then using the results from 4.8.1
we obtain
tan = 2 = 63.435
Either using this to find r directly, or by squaring (i) and (ii) and adding
together we obtain
r2 cos2 + r2 sin2 = 5 r2 (cos2 + sin2 ) = 5 r2 = 5 r =
So the resultant wave is
A=
5 sin(t + 63.435 )
Chapter 5
Complex Numbers
We saw in 3.5 that we have a difficulty with certain square roots, namely the
square roots of negative numbers. We introduce the set of Complex Numbers
to address this situation.
5.1
Basic Principle
Rather than dealing with the square roots of all negative numbers, we can
reduce the problem to one such square root. For example, we can use the
laws of surds detailed in 3.5 to split negative square roots up.
9 =
25 =
1 9 = 3 1
1 25 = 5 1
a = 1 a
so in all cases the problem comes down to the square root of 1. Now
we know that no real number can have this value, but we shall assign this
special square root a specific symbol, namely j.
Please note that i is used throughout the mathematical world itself, both
in textbooks and in calculators, while engineers often prefer j to represent
this square root. You should be aware of this for background reading.
Thus we now say that
5.2 Examples
69
5.1.1
5.2
Examples
Here are some examples of equations that have complex number solutions,
and which we could not solve previously. In particular, we now see that
quadratic equations (see 3.7) which have no real solutions in fact have two
complex solutions.
1. x2 + x + 1 = 0
1 3
1 3j
=
.
2
2
So our two solutions are
1 + 3j
1 3j
x=
;x =
2
2
2. x2 6x + 13 = 0
Using the quadratic solution formula once more we obtain
p
6 (6)2 4(1)(13)
6 16
6 4j
=
=
.
2
2
2
So again we have two solutions, which are
x = 3 + 2j; x = 3 2j
5.3
The real numbers are frequently represented by a line often called the number
line or simply the real line.
Complex numbers are inherently two dimensional and cannot be presented on a line, instead we represent these on a plane. We often talk of the
complex plane as an interchangable name for the set of complex numbers, and
diagrams in the form of two dimensional graphs are called Argand diagrams.
We take the horizontal axis to be, essentially, the real line itself, and we
call it the real axis. We call the vertical axis the imaginary axis. Then,
given a complex number z = x + yj we represent it as the point (x, y) on the
diagram.
Note that every point on the diagram corresponds to a unique complex
number, and every complex number has a unique point on the diagram.
Figure 5.1 shows a typical argand diagrams with two complex numbers,
z and w, z = 3 + 5j and w = 4 2j.
5.4
70
5.4.1
Addition
71
5.4.2
Subtraction
5.4.3
Multiplication
72
73
Powers of j
We can deal with powers of j as a special case.
j1
j2
j3
j j2
j4
= j2 j2
j5
j j4
= 1
=
1 j
= 1 1
=
j1
We have shown how the powers may be calculated in table 5.1. Note
that by j 5 we are repeating ourselves, we can go round and round this table
for any power of j.
Examples
Here are some concrete examples involving specific numbers.
1.
(2 + 3j) (4 + j) = 8 12j + 2j + 3j 2 = 8 3 10j = 11 10j
2.
(2 + 3j)2 = (2 + 3j)(2 + 3j) = 4 + 12j + 9j 2 = 5 + 12j
3.
(2 + 3j)(2 3j) = 4 + 6j 6j 9j 2 = 4 + 9 = 13
4.
(2 + 3j)3 = (5 + 12j)(2 + 3j) = 10 + 24j 15j + 36j 2 = 46 + 9j
We could use the binomial expansion (see 3.10) to work out high powers
of complex numbers, but it is quite tedious to work out all the powers of j.
It is easier to use a technique still to come, De Moivres theorem (see 5.7).
5.4.4
Division
5.4.5
Examples
5 14j
5
14
= j
17
17 17
74
5.5 Definitions
75
section. However, the aim of this is to create a real number on the bottom
line, and in this case we can obtain that simply by multiplying by j. Its
quite ok to multiply by j though, and some people would prefer this.
=
5.5
5.5.1
3 + 2j
= 3 2j
1
Definitions
Modulus
p
x2 + y 2 .
Note that we met the concept of modulus before (see 3.8.1) and so a
new definition seems odd. Close examination will show that this definition
is exactly the same as the previous
modulus on the real numbers (note that
in this case y = 0 and so |z| = x2 ).
5.5.2
Conjugate
5.6 Representation
5.5.3
76
Real part
5.5.4
Imaginary part
5.6
Representation
5.6.1
Cartesian form
We normally use the form z = x + yj, representing the point (x, y) on our
Argand diagram.
This form or representation is called cartesian form, after Rene Descartes.1
1
for popularising analytical co-ordinate geometry using the system of co-ordinates that now
carry his name (in part at least).
5.6 Representation
5.6.2
77
Polar form
p
x2 + y 2
5.6 Representation
78
So this polar form doesnt change the value, as for example r cos = x,
but it only alters the form of expression.
Arithmetic
We can rework multiplication and division in polar form to get a useful result.
Let us take two complex numbers, in polar form.
z = r(cos + j sin ); w = s(cos + j sin )
Then
z w = rs(cos + j sin )(cos + j sin )
= rs{(cos )(cos ) + j(sin )(cos ) + j(cos )(sin ) + j 2 (sin )(sin )}
= rs{(cos )(cos ) (sin )(sin )} + j{(cos )(sin ) + (sin )(cos )}
Which, by two of the identities in 4.8 can be written as
z w = rs(cos( + ) + j sin( + ))
It is left to the reader as an exercise to show that
r
z
= (cos( ) + j sin( )).
w
s
Thus to multiply two complex numbers in polar form we multiply their
modulii and add their arguments. To divide, we divide their modulii and
subtract their arguments.
5.6.3
Exponential form
It can be shown, by the power series expansion of all the functions involved
that
ej = cos + j sin .
It follows that
rej = r(cos + j sin ).
Therefore we can represent a complex number in this form also. This
expansion is produced using calculus, and therefore because we are mixing
calculus with trigonometry we must use radian measure for any values of .
5.6 Representation
5.6.4
79
Examples
12 + 12 = 2; = tan1 =
1
4
We check our value for , this angle corresponds to a complex number in
the top right quadrant, which is correct. We proceed.
r=
1+j =
2(cos
+ j sin ) = 2ej 4
4
4
2. 1 j
12 + 12 = 2; = tan1 =
1
4
We check , it indicates a complex number in the upper right quadrant,
this is clearly wrong, as our number is in the lower left quadrant. Using the
techniques described in 4.5 we can determine that the correct solution is 5
4
or 3
(depending
on
the
range
we
are
using
they
are
the
same
angle).
4
Thus
r=
1 j =
3. 1 +
5
5
5
2(cos
+ j sin ) = 2ej 4
4
4
3j
q
2
r = 12 + 3 = 2; = tan1 3 =
3
Our check on reveals it to be in the correct quadrant, so
1+
3j = 2(cos
+ j sin = 2e 3 .
3
3
4. 1
r=
12 + 02 = 1; = tan1 0 = 0
02 + 12 = 1; = tan1
80
5.6.5
Examples
1. Calculate ej
By looking at the exponential representation of a complex number (see
5.6.3) we can see that this is a complex number with modulus 1 and argument
. By looking at this number on an argand diagram we can see that it is in
fact 1. This identity
ej = 1
is one of the most famous in mathematics, combining as it does an imaginary number and two irrational numbers in a simple way to perform a simple,
integer, result.
2. Calculate j j
This is another question most easily answered in exponential form. In
the previous examples we showed that
j = ej 2
so that
= e 2 = exp
2
which is, surprisingly, an ordinary real number, about 0.2079 to four
decimal places.
j j = (ej 2 )j = ej
5.7
2
2
De Moivres Theorem
z = r(cos + j sin )
then
z n = rn (cos n + j sin n).
That is, to take z to the power of n we take the modulus r to the power
of n, and multiply the argument by n.
This result allows us to easily work out real number powers of complex
numbers, as well as nth roots.
81
5.7.1
Examples
+ j sin
)
1 + 3j = 2(cos
3
3
Invoking De Moivres theorem says that
4
4
4
4
(1 + 3j) = 2 (cos
+ j sin
)
3
3
We could put this back in cartesian form is necessary
1
3
= 16(
) = 8 8 3j
2
2
6
2. (1 + 3j)
It is left to the reader to modify the above argument slightly to show that
this evaluates to be simply 64, a real number.
5.7.2
Roots of Unity
We have already seen that a real number has two square roots, for example
25 = 5; 25 = 5j.
As an exercise, it is interesting to plot these answers on an Argand diagram. You will note that the square roots are distributed at 180 to each
other.
Observe further that
14 = (1)4 = (j)4 = (j)4 = 1
which tells us that 1 has at least four fourth-roots, given by 1, 1, j
and j. Draw these solutions on an Argand diagram. Again the roots are
distributed perfectly, this time at 90 to each other.
What about the cube roots of 1?
82
1 z3 = 1 z3 1 = 0
1 3j
1 + 3j
,z =
z=
2
2
When we plot these solutions on an Argand diagram we find they are symmetrically distributed. The modulus of each solution is 1, and the arguments
are 0 , 120 and 240 respectively.
nth roots of unity
In general then, for the nth root, we can show that the solutions in polar
form (and radians) are
2m
2m
cos
+ j sin
n
n
5.7.3
Let z = r(cos + j sin ) then we can add any multiple of 2 radians and the
complex number is the same. Thus, if m is an integer
z = r(cos( + 2m) + j sin( + 2m))
and thus, by De Moivres theorem
1
1
+ 2m
+ 2m
n
n
n
z = z = r (cos
+ j sin
)
n
n
where m can be any integer, but the the range m = 0, 1, . . . , n 1 covers all
the solutions, after this we repeat existing solutions.
83
5.8
84
Trigonometric functions
ej + ej
2
sin =
ej ej
2j
and
or more readably
cos =
exp(j) + exp(j)
2
sin =
exp(j) exp(j)
2j
and
Chapter 6
Vectors & Matrices
6.1
Vectors
A scalar quantity has magnitude only, but a vector quantity has magnitude
and direction.
The real numbers are often called scalars, although when the sign of the
number is used it can represent direction in a very limited way (there are
only two possible choices, positive and negative).
The complex numbers are often used to represent vectors. We have already seen in 5.6.2 that complex numbers have a magnitude (length or modulus) and a direction (argument).
Vectors are frequently represented by lower case, bold letters. For example we talk about the vector a or b. When handwritten we usually underline
the vector as we cannot easily emulate a bold type face; thus we talk about
a or b.
Vectors are often represented pictorially as arrows, with the direction
of the line representing the direction of the vector (and so we must label
which direction we travel along the line) and the magnitude of the vector
represented by the length of the line.
6.1.1
Modulus
6.1 Vectors
86
6.1.2
Unit Vector
6.1.3
p
x2 + y 2 + z 2
6.1 Vectors
6.1.4
87
Examples
6.1.5
(1)2 + 22 + (1)2 = 6
32 + 02 + (4)2 = 25 = 5
Signs of vectors
6.1 Vectors
6.1.6
88
Addition
We add vectors by combining them nose to tail. That is, to add a and b
we follow a to its endpoint, and then glue b on at this point. See figure 6.1
for a graphical representation of vector addition.
This corresponds to the natural addition in the cartesian unit vectors.
6.1.7
Subtraction
We define
a b = a + (b).
In other words, we reverse the direction of vector b, but add it to a in the
usual way. See figure 6.2 for a graphical representation of vector subtraction.
6.1.8
Zero vector
The zero vector, denoted 0 has a mangitude of zero, and so its direction is
irrelevant. As you should expect
a+0=a
6.1 Vectors
89
6.1.9
Scalar Product
6.1 Vectors
90
Now
i.j = 1 1 cos 90 = 0
and similarly
i.j = j.i = i.k = k.i = j.k = k.j = 0
Thus if we have two vectors
a = x1 i + y1 j + z1 k, b = x2 i + y2 j + z2 k
Then
a.b = (x1 i + y1 j + z1 k)(x2 i + y2 j + z2 k)
= x1 i.x2 i + x1 i.y2 j + x1 i.z2 k
+y1 j.x2 i + y1 j.y2 j + y1 j.z2 k
+z1 k.x2 i + z1 k.y2 j + z1 k.z2 k
= x1 .x2 + y1 .y2 + z1 .z2
Thus the scalar product is very simple to work out when the vectors have
this form.
Application
The scalar product can be used to calculate the angle between two vectors,
in the following manner, by rearranging the formula which defines the dot
product we obtain
cos =
6.1.10
a.b
.
|a||b|
Example
6.1 Vectors
91
Solution
We calculate the modulus of eachvector (we have done this in an example
above) to find |a| = 13 and |b| = 6. We now calculate the scalar product
a.b = (3 1) + (4 2) + (12 1) = 7
and now we have
cos =
7
a.b
= 0.2198
=
|a||b|
13 6
6.1.11
Example
Let
a = 2i 2j + 2k, b = 5i + 4j + zk.
Find z so that a and b are perpendicular.
Solution
If a and b are perpendicular then = 90 and a.b = 0.
Thus
a.b = (2 5) + (2 4) + (2 z) = 0
10 8 + 2z = 0 2 = 2z z = 1
6.1.12
Vector Product
at right angles to
6.1 Vectors
92
Warning
Vector products are not commutative. That is
a b 6= b a
although a b and b a have the same magnitude, they have opposite
directions. Therefore
a b = b a
Cartesian unit vectors
Note in particular that i j has a magnitude of 1 1 sin 90 = 1 and has a
direction perpendicular to i and j such that i, j and k form a right angled
triple, thus it is the direction k.
So
i j = k, j k = i, k i = j
and
j i = k, k j = i, i k = j
while
i i = 0, j j = 0, k k = 0
Thus if we have two vectors
a = x1 i + y1 j + z1 k, b = x2 i + y2 j + z2 k
Then
a b = (x1 i + y1 j + z1 k) (x2 i + y2 j + z2 k)
= x1 i x2 i + x1 i y2 j + x1 i z2 k
+y1 j x2 i + y1 j y2 j + y1 j z2 k
+z1 k x2 i + z1 k y2 j + z1 k z2 k
= x1 y2 k x1 z2 j y1 x2 k + y1 z2 i + z1 x2 j z1 y2 i
= (y1 z2 y2 z1 )i + (x2 z1 x1 z2 )j + (x1 y2 x2 y1 )k
It can be shown later (see 6.4)
determinant.
a b =
j
y1
y2
k
z1
z2
6.2 Matrices
93
Application
The vector product is useful in a variety of situations, such as finding the
area of thetriangle enclosed by a and b (this is the magnitude of a b).
We shall use it simply to find a vector at right angles to two given vectors.
6.1.13
Example
6.2
Matrices
6.2.1
Square matrices
A square matrix has an equal number of rows and columns, and we say that
it is n n.
6.2 Matrices
6.2.2
94
6.2.3
Examples
2 3
2
0 ,B =
A= 1
2
1
1
D=
1 2
5
2 3
0 3
0
, C = 1 1
1
0
1
1 2
2
0 9 ,E =
0
6.2.4
The zero matrix is a square matrix, denoted by 0 in which all the entries are
zero. There is a family of zero matrices, one for each size of square matrix.
0 0 0
0 0
0=
,0 = 0 0 0 ,...
0 0
0 0 0
The identity matrix is a square matrix, denoted by I in which all the
entries are zero except for those entries on the leading diagonal 2 , which are
1. Thus, once again there is a family of identity matrices, one for each size
of square matrix.
1 0 0
1 0
I=
,I = 0 1 0 ,...
0 1
0 0 1
2
the leading diagonal goes from the top-left to the bottom-right of the matrix
6.3
95
Matrix Arithmetic
6.3.1
Addition
We only define the addition A + B of two matrices A and B when they are
of exactly the same size, that is, they are both m n. Then we add the
matrices by adding corresponding cells.
6.3.2
1.
Examples
2 3
0
1
2 1
2.
6.3.3
3
+
3
3 2
0
2
3 1
(2 + 3)
=
4
0
(1 + 3)
5 0 1
=
2 6 1
(3 + 3) (0 + 1)
(2 + 4) (1 + 0)
2
0
(3 + 2) (2 + 0)
=
4 2
(0 + 4) (2 + 2)
1 2
=
4
0
Subtraction
6.3.4
1.
Examples
2 3
0
1
2 1
3
1
(2 3)
=
0
(1 3)
1 6
1
=
4 2 1
3
4
(3 3) (0 1)
(2 4) (1 0)
6.3.5
3 2
0
2
96
2
0
(3 2) (2 0)
=
4 2
(0 4) (2 2)
5 2
=
4
4
Multiplication by a scalar
6.3.6
Examples
=
6.3.7
22
20
2
3 0
2
0 1 1
23 20
4
6 0
=
2 1 2 1
0 2 2
Domino Rule
(m
A
n)
B
(p
q)
and the product is defined only if the two inner numbers match, in this
case if n = p.
Moreover, we can use the outer numbers. If the product is defined then it
will be of a size defined by the outer numbers, that is m q in this example.
3
4
6.3.8
97
Multiplication
We begin by defining the product of a row vector and a column vector, which
are of equal length. Note that this satisfies the requirements of the domino
rule, and the product will be 1 1, or in other words an ordinary number.
To do the multiplication we multiply corresponding components and add:
b1
b2
A = a1 a2 . . . an , B = ..
.
bn
A B = a1 b 1 + a2 b 2 + + an b n
Note that we do not define the product B A.
All matrix multiplication returns to this simple multiplication, so it is
most important that it is grasped.
To multiply a matrix A by B we first check, using the domino rule, to
find whether the multiplication is possible, and also determine the size of the
answer.
Then, to calculate the entry in the ith row and jth column in the product
we multiply the ith row in A by the jth column in B using the approach
above.
We always multiply rows in the first matrix by columns in the second.
6.3.9
1.
2.
3
0
Examples
2
2
2
0
4 2
2
0
(3 2) + (2 4)
=
4 2
(0 2) + (2 4)
14
4
=
8 4
3 2
(2 3) + (0 0)
=
0
2
(4 3) + (2 0)
6 4
=
12 12
(3 0) + (2 2)
(0 0) + (2 2)
(2 2) + (0 2)
(4 2) + (2 2)
98
Note that our second example demonstrates that when we multiply matrices
in the opposite order (cf. our first example) we usually get a different result.
In fact, it can be even more strange than this, as our next examples show.
3.
1
0
2 3 1
0 1
0
1 2
2
3
(2 1) + (3 0) + (1 2) (2 0) + (3 1) + (1 3)
=
(0 1) + (1 0) + (2 2)
(0 0) + (1 1) + (2 3)
4 6
=
4 5
4.
1
0
0 1 2
0
2
3
(1 2) + (0 0)
(0 2) + (1 0)
=
(2 2) + (3 0)
3 1
1 2
(1 3) + (0 1)
(0 3) + (1 1)
(2 3) + (3 1)
2 3
1
= 0 1 2
4 3
8
(1 1) + (0 2)
(0 1) + (1 2)
(2 1) + (3 2)
In this case multiplication in the opposite order of the same matrices (examples 3 and 4) produces not only different matrices as a result, but different
sizes of matrices.
5.
2 1
0
1
(2 1) + (1 2) + (0 3)
4
1 0
2 2 = (1 1) + (0 2) + (2 3) = 5
1 2 2
3
(1 1) + (2 2) + (2 3)
1
The opposite product is not defined in this
6.
2
0 1 2 1
1
=
case.
0
2
2
1
0
2
4 2
6.3.10
Exercise
Write down any 3 3 matrix, and multiply it by I for that size, in both
orders. What effect does the multiplication have on the original matrix?
Try this for other sizes of square matrices.
6.4
Determinant of a matrix
6.4.1
1.
2.
3.
4.
6.4.2
Examples
2
1
3
= (2 4) (3 1) = 5
4
2 1
0 2
2
3
= (2 2) (0 1) = 4
4
= (2 6) (4 3) = 0
6
1 3
2 4
= (1 4) (3 2) = 2
To build up larger determinants we first introduce the sign rule for matrices.
This is just a square array of + and characters, such that the top left
character is always + and we alternate in each direction. Thus we have a
different rule of signs for each size of matrix.
+ +
+ +
+ +
+
+ ,
,
+ + ...
+
+ +
+ +
99
6.4.3
100
Order 3
6.4.4
Examples
We first expand along the top row, which is the most usual way.
0 5
4 5
4 0
= +2
1
+ 3
1 2
0 2
0 1
= 2(5) 1(8) + 3(4) = 6
This works fine, but it would be better to expand along something containing
a zero. Lets use the bottom row (remember to readjust for the table of signs).
1 3
2 3
2 1
1
= +0
4 5 + 2 4 0
0 5
= 0(. . . ) 1(2) + 2(4) = 6
as you can see, one of the smaller 2 2 determinants does not need to be
calculated, saving us some work, and achieving the same result.
6.4.5
Order 4
6.5
Inverse of a matrix
Note that we have not defined division in terms of matrices. Indeed there is
no exact counterpart of division in matrices, but we have a similar concept.
We multiply by the inverse of a matrix rather than talking of dividing.
The inverse of a square matrix A, denoted A1 is such that
A A1 = A1 A = I.
The inverse does not exist for all matrices. When A has an inverse we
say it is invertible, otherwise we say that A is singular.
The singular matrices are those that have a determinant of 0.
It is worth noting that if A1 is the inverse of A, so A is the inverse of
1
A , so that invertible matrices exist in pairs, hence the reason for calling
non invertible matrices singular.
101
6.5.1
102
Order 2
1
=
ad bc
d b
c
a
Note that |A| = ad bc, and so if |A| = 0 we have division by zero which
is why the inverse will not be defined.
Exercise
It is left to the reader as an exercise to show that
A A1 = A1 A = I
in this case.
6.5.2
Examples
1.
2
1
3
4
2.
1
1
=
5
2 1
0 2
1
4
6
3.
2
3
1
=
4
1
=
4
1
3
2
2
0
1
2
1
(. . . )
0
This matrix is singular, its inverse is not defined as the determinant is zero.
4.
1
1
1 3
4 3
=
2 4
2 1
2
6.5.3
103
Other orders
There are several techniques for finding the inverse of larger matrices, including row reduction and cofactors. We shall not cover these formally by
examination, but an outline of one, known as row reduction or Gaussian
elimination 5 is given below.
We first write the matrix, with the identity matrix I of the appropriate
size glued onto the right.
We then use any of the following operations repeatedly
swap any two rows;
multiply any row by a constant;
add any row to another;
subtract any row from another.
The aim is to produce the identity matrix I on the left of the block,
whereupon the matrix on the right will be the inverse. It is not always
possible to do this (if the determinant of the original matrix is zero it will be
impossible).
We can much more easily verify a matrix is an inverse of another.
6.5.4
Exercise
Show that
1 3 4
2 3 5
4
3 5
25 27
7
10 11
3
14
15 4
Named after Karl Gauss (1777-1855), thought by many to be the greatest of all math-
ematicians. In any case Gauss was a prolofic mathematician and contributed to diverse
fields in the subject such as number theory and mathematical astronomy.
104
Solution
To prove this you must multiply the matrices together in both directions.
Let us call the matrices A and B respectively. Then you should show that
A.B = I
and
B.A = I
6.6
Matrix algebra
6.6.1
Addition
B+A
Commutative law
A + (B + C)
(A + B) + C
Associative law
A+0
=A=
0+A
6.6.2
Multiplication
6.6.3
Mixed
There is one major result concerning the mixture of addition and subtraction,
which is the distributive law, shown in table 6.3.
105
AB
6=
BA
In general
A (B C)
(A B) C
Associative law
AI
=A=
IA
A A1
=I=
A1 A
= AB + AC Distributive law
(A + B)C
AC + BC Distributive law
6.7
Solving equations
y
2y
=
1
= 3
106
either premultiply (multiply on the front of both sides) or postmultiply (multiply on the back of both sides). Postmultiplication gives an equation which
while technically correct does not help us in our problem. If we premultiply
however, we obtain:
A1 .A.X = A1 .Y I.X = A1 .Y X = A1 .Y
and so we have obtained X.
6.7.1
Example
y
2y
=
1
= 3
Solution
It is rarely worthwhile to use matrix methods to solve two equations in two
unknowns, but we do it this way by way of an example.
Recall that we wrote this as the equation:
3 1
x
1
=
.
5
2
y
3
Now
1
2
3 1
1
A=
A =
5
5
2
(3 2) (1 5)
1
3
=
x
y
=
2
5
1
3
1
3
.
2
5
1
3
107
Now we now from the matrix algebra that the LHS should collapse simply
to the matrix X, so we could have started from this step, it is a matter of
choice. Some people like to do the multiplication on the LHS to verify that
the inverse was correct. In any case, we now proceed with multiplication on
the RHS.
x
1
=
.
y
4
Thus x = 1 and y = 4.
6.7.2
Example
27b + 7c = 1
11b + 3c = 2
+ 15b 4c = 3
Solution
This corresponds to the following matrix equation:
1
25 27
7
a
10 11
3 b = 2
3
14
15 4
c
and normally this would be pretty difficult to solve, as we would need
to find the inverse of the square matrix using some more time consuming
techniques. Fortunately, we showed in an example above that the inverse of
this matrix was
1 3 4
2 3 5
4
3 5
and so we can say that
a
1 3 4
1
b = 2 3 5 2 .
c
4
3 5
3
In this case we have skipped the multiplication on the left for brevity,
we know this should be the result if we have been correct in our choice of
inverse. We now perform the multipication on the right.
108
a
(1 1) + (3 2) + (4 3)
19
b = (2 1) + (2 2) + (5 3) = 21
c
(4 1) + (3 2) + (5 3)
13
Therefore a = 19, b = 21 and c = 13.
6.7.3
Row reduction
It is also possible to solve equations like this directly with row reduction,
without finding the inverse directly.
To do this, form the matrix composed of A with Y glued to the right,
and row reduce until we get the identity in the left hand block, at this point
the entries on the right hand column will be X, the variables we require.
It is not always possible to do this. We now examine many features of
matrices in greater depth and detail.
6.8
Row Operations
When one recalls how to solve simultaneous equations by elimination it becomes apparant that there are many operations we routinely use.
For example to solve
3x
2x
+ 2y
y
=
=
4
5
(i)
(ii)
109
Now when remind ourselves that this system of equations can be written
3
2
x
4
=
2 1
y
5
|
{z
} | {z } | {z }
A
6.8.1
Determinants
These principles can be applied to determinants too, but with some care.
As determinants depend intimately upon the values of the coefficients. For
example
1 0
= 1 but 2 0 = 2
0 1
0 1
so here it is clear that multiplying through on a row by a number does affect
the determinant. In effect all of the row operations discussed in 6.8 may be
used, with care taken for multiplication and division. For example, when
dividing by a constant on a given row, that constant is taken outside the
determinant where it must be multiplied back on. So from above we see that
2 0
1 0
0 1 = 2 0 1 = 2 1 = 2
It is also useful to note that swapping rows has an impact on the value of
the determinant. For example
1 0
0 1
0 1 = 1; (swap R1, R2) 1 0 = 1
In fact it can be shown that swapping any two rows in a determinant swaps
the sign of the result.
6.8.2
Example
1 2
1
4 2 3
2 3
1
Solution
We know that introducing zeros make determinant far easier to calculate. So
we proceed this way, starting off by subtracting row 1 from row 3 twice.
1
4
2
1
1
2
1
2
1
2
1
R2=R24R1
R3=R32R1
2 3
2 3
=
=
4
0 6 7
0 1 1
0 1 1
3
1
Although these determinants are equal, this last version is far easier to calculate, since if we expand along the first column we obtain
6 7
0 |. . . | + 0 |. . . |
= 1
1 1
= (6)(1) (7)(1) = 6 7 = 1.
6.9
We can use this approach to help us solve systems of equations where the
inverse is difficult or impossible to find. For example, to solve the set of
equations represented by the augmented matrix above, we simply apply row
operations in a sensible manner. For clarity at each stage we indicate just
what operation has occured, by referring to the rows as R1, R2 etc.
110
3
2 4
2 1 5
R2=R22
111
3
2 4
4 2 10
R2=R2+R1
3 2
7 0
4
14
We could continue, but in fact this is already possible to solve very trivially
now. Remembering what this augmented matrix represents we see that our
system of equations has now become.
3x +
7x
2y
=
=
4
14
(i)
(iii)
So that a solution for one variable is now obvious. From equation (iii) we
see that x = 2, and placing this value into equation (i) we see that
3(2) + 2y = 4 y = 1
and so the system is solved.
6.9.1
Gaussian Elimination
There is an ideal form for the rearranged section of the augmented matrix
to the left of the line. We again begin by looking at the simple example
represented by the system we just examined.
3
2 4
2 1 5
R2=R22
3
2 4
4 2 10
R2=R2+R1
3 2
7 0
4
14
1 0 2
as the second row now really does give a direct readout for x = 2. However,
we could have continued again
0 2 2
R1=R13R2
1 0
2
which now means the first row gives 2y = 2. We could take two more steps.
0 1 1
1 0
2
R1=R12
R1R2
1 0
2
0 1 1
112
The second step is just to make it very clear that the left closely resembles
the identity matrix. If we now examine this set of equations we see it is
simply
0x + y = 1; 1x + 0y = 2
or
x = 1, y = 2.
Therefore the most ideal solution might be to rearrange the left side of
the augmented matrix to the identity matrix, whereupon the right hand side
becomes the solutions for the variables. To accomplish this we attempt to
produce zeros in the left hand column everywhere except the top entry, and
then we try to produce zeros everywhere in the next column except for the
second entry and so on.
There are two reasons why we do not always do this:
the left hand side may not be square, such as when we have fewer
equations than unknowns;
the extra operations arent really necessary to solve the equations.
In practice therefore we try to reduce the cells to the bottom and left of
the leading diagonal to zero. It is nice, but not essential if the first number
on each row is a 1. This will allow us to begin with the bottom row and solve
for one unknown and then we move upwards through the row substituting
the known variables to find each next unknown.
This process is known as Gaussian Elimination.
6.9.2
Example
2y
2y
3y
+ z
3z
+ z
=
=
=
1
2
3
Solution
The augmented matrix for this system is as follows
1 2
1 1
AB = 4 2 3 2
2 3
1 3
1
1
2
1
1
1
2
1
R2=R24R1
0 6 7 2
0 6 7 2 R3=R32R1
3
1
2
3
1
0 1 1
so thats a great start. For our purposes row 1 is fine as it is now, and we
wont touch it in subsequent work. Now we work only with rows 2 and 3.
We now want to introduce a zero at the bottom of column 2, to continue our
job of zeroing the elements to the bottom and left of the leading diagonal.
Now we could add a sixth of row 2 to row 3 to achieve this, but itll be easier
to swap rows 2 and 3 first.
1
2
1
1
1
1
2
1
R2=R21
R2R3
1 0
1
1 1
0 1 1
0 6 7 2
0 6 7 2
I have also taken the liberty to multiply through row 2 by 1 as it makes
our life a little easier. Its much easier to make the bottom of column 2 a
zero now, and doesnt introduce fractions so early. If we simply add row 2
to row 3 six times.
1
1 2
1
R3=R3+6R2
0 1
1 1
0 0 1 8
and we need not go further. Row 3 now gives
z = 8 z = 8
and Row 2 now gives
y + z = 1 y + 8 = 1 y = 9
and Row 1 now gives
x + 2y + z = 1 x 18 + 8 = 1 x = 11
and so we have a unique solution
x = 11, y = 9, z = 8.
113
6.9.3
114
Example
+ z
+ z
+ z
=
=
=
1
2
3
Solution
The augmented matrix for this system is
3 2 1 1
3 1 2
AB = 2
4 7 1 3
We begin by producing a 1 somewhere in the first column, in order to make
subsequent work more simple. This isnt necessary, but just easier.
1
3
3 2 1 1
1
1
2
2
R2=R22
R1R2
3
1
1 3 2 1 1 .
1
2
2
4 7 1 3
4 7 1 3
This zero can now be used to produce an elimination of the other numbers
in the column.
3
3
1
1
1
1
1
1
2
2
2
2
R2=R23R1
0 13 1 2 R3=R34R1
0 13 1 2 .
2
2
2
2
4 7
1
3
0 13 1 1
Now we would next work to produce a zero at the bottom of the right hand
column. To reduce the fractions, lets multiply row 2 by 2.
1
3
1
1
2
2
R2=R22
0 13 1 4 .
0 13 1 1
Now we can create a zero in the bottom of column 2 by subtracting row 2
from row 3:
3
1
1
1
2
2
R3=R3R2
0 13 1 4 .
0
0
0
3
Now actually this represents a problem. The bottom row is now equivalent
to the equation
0x + 0y + 0z = 3 0 = 3
115
and this is clearly impossible. This indicates that our system of equations
is inconsistent and there is in fact no solution. A quick calculation of the
determinant of the coefficient matrix will reveal it to be zero.
6.9.4
Example
+ z
+ z
+ z
=
=
=
1
2
0
Solution
You may note this is extremely similar to the previous example. Again the
inverse of the coefficient matrix is zero. This means there is no inverse, but
critically it does not mean there are no solutions in itself, it just means there
is not a unique solution.
The augmented matrix for this system is
3 2 1 1
3 1 2
AB = 2
4 7 1 0
We begin by producing a 1 somewhere in the first column, in order to make
subsequent work more simple. This isnt necessary, but just easier.
3
1
3 2 1 1
1
1
2
2
R2=R22
R1R2
3
1
1 3 2 1 1 .
1
2
2
4 7 1 0
4 7 1 0
This zero can now be used to produce an elimination of the other numbers
in the column.
1
1
3
3
1
1
1
1
2
2
2
2
R2=R23R1
0 13 1 2 .
0 13 1 2 R3=R34R1
2
2
2
2
4 7
1
0
0 13 1 4
Now we would next work to produce a zero at the bottom of the right hand
column. To reduce the fractions, lets multiply row 2 by 2.
3
1
1
1
2
2
R2=R22
0 13 1 4 .
0 13 1 4
116
1
3
1
1
2
2
R3=R3R2
0 13 1 4 .
0
0
0
0
Compare this with example 6.9.2 at this point. It might be tempting to think
this is the same situation, but it is not. This time the bottom row is
0x + 0y + 0z = 0 0 = 0
which is true. Therefore there is no inconsistency, its just that one equation
out of the original three is essentially worthless, its a copy of the original
two. (If you look you will see that R3 = 2R1 R2 in the original system).
This means there will be no unique solution. In this case we proceed
as follows. There is no constraint on z, so z can take any value, say t for
example, where t is any real number.
Then row 2 gives (multiplying by 1 first for clarity)
13y + z = 4 13y = 4 t y =
1
(4 t)
13
3
1
26 12 + 3t 13t
7 5t
(4 t) t =
=
26
2
26
13
and thus
4t
7 5t
;y =
;z = t
13
13
represents all the (infinite!) solutions of this problem. This is called a parametric solution since the solution depends on one or more parameters (in
this case t), and as t takes all the values a real number can, we obtain all the
solutions of this system.
x=
6.9.5
The previous example also shows that elimination can be undertaken with
fewer equations than unknowns. If one repeats the analysis of the system
with the third equation removed one obtains exactly the same solutions.
If one has more equations than unknowns then either the system will be
inconsistent because some equations will contradict another, or one or more
6.10
6.11
Rank
The rank of a matrix A, denoted r(A) is the order of the largest square
matrix contained within A that has a non-zero determinant. The submatrix
can be formed out of any of the rows and columns of A.
The rank is essentially a measure of the amount of information in a matrix,
or alternatively the number of redundant rows in the matrix. Clearly the rank
of an m n matrix cannot exceed the smaller of m and n, as no larger square
matrix can be constituted.
6.11.1
Example
0 0 0
0 0 0
0 0 0
117
6.11 Rank
118
(b)
1
0
0
0
1
0
0
0
1
(c)
3 2 1
2
3 1
4 7 1
Solution
(a) It should be clear that the determinant of this 3 3 matrix is zero.
Therefore the rank is not 3. One can construct 9 possible submatrices 6 that
are 22, but these again all consist of zeros, and the determinant will be zero.
Therefore the rank cannot be 2. Finally there are 9 possible submatrices that
are 1 1, which are all zero. Therefore the rank cannot be 1. So the rank in
this case is zero.
Can you see that if exactly one number in the matrix was changed to 4
say then the rank would become 1?
(b) We take the determinant of this matrix to obtain 1 6= 0 and therefore the
rank of this matrix is 3.
(c) It is left as an exercise for the reader to show that the determinant of
this 3 3 matrix is 0. Therefore the rank of the matrix cannot be three. By
eliminating any one of the rows and any one of the columns we can generate
9 matrices which are 2 2. For example, if we eliminate row 3 and column
2 we obtain
3 1
2 1 = (3)(1) (1)(2) = 1 6= 0
so that we see the rank for this matrix is 2.
We can see looking at example (a) that no information is contained in this
matrix and the rank is 0, (b) may surprise by having a rank of 3 compared to
that in example (c) which is 2, but all the rows in (b) are genuinely different.
A close analysis of row 3 in example (c) will show it to be R3 = 2 R1 R2,
and therefore that row really doesnt add anything to the discussion.
6
R1 and C1, R1 and C2, R1 and C3, R2 and C1, etc. and it is not simply a matter of
testing the matrices found by eliminating the outer rows and columns.
6.11 Rank
6.11.2
119
Systems of equations
Matrix rank is particularly useful due to what it can tell you about the solutions presented by a system of equations. In particular, a comparison between
the rank of a co-efficient matrix A and the rank of the augmented matrix
AB is very useful. We look at some examples that may prove instructive.
6.11.3
Example
Find the rank of the co-efficient and augmented matrix for the following
system of equations.
3x + 2y
2x y
=
=
4
5
(i)
(ii)
Solution
You may note that this is the very simple system we looked at before in 6.9.
This system gave rise to the co-efficient matrix and augmented matrix as
follows.
3
2
3
2 4
A=
; AB =
2 1 5
2 1
Now |A| = (3)(1) (2)(2) = 7 6= 0, and so r(A) = 2. To look at the
rank of the augmented matrix there are two square 22 matrices that can be
formed, but we already know what the left hand one (found by eliminating
the third column) has a non zero determinant because it is the same as A.
Therefore
r(A) = r(AB ) = 2.
6.11.4
Example
Find the rank of the co-efficient and augmented matrix for the following
system of equations.
3x +
6x +
2y
4y
=
=
1
2
(i)
(ii)
6.11 Rank
120
Solution
It is clear here that equation (ii) is simply double equation (i). Therefore
there is actually only one equations worth of information here.
This system gives rise to the co-efficient matrix and augmented matrix
as follows.
3 2
3 2 1
A=
; AB =
6 4
6 4 2
Now |A| = (3)(4) (2)(6) = 0, and so the rank of A cannot be two.
Taking any of the possible one by one submatrices we can clearly see that
the determinant will not be zero. Therefore r(A) = 1.
For the augmented matrix we know that the submatrix formed by eliminating the third column has a zero determinant, but we have not checked
the submatrices formed by eliminating the first or middle column. However a
brief inspection shows that both of these also have a zero determinant. Once
again, any of the six possible 1 1 submatrices show that r(AB = 1.
r(A) = r(AB ) = 1.
It should be clear that since both of these equations are the same, then
by simply taking equation (i) we obtain that
1
3x + 2y = 1 y = (1 3x)
2
and consequently since there is no contraint on the choice of x and a choice
of x determines y there are infinitely many solutions.
6.11.5
Example
Find the rank of the co-efficient and augmented matrix for the following
system of equations.
3x +
6x +
2y
4y
=
=
1
1
(i)
(ii)
Solution
This is very similar to the previous example. However the second equation is
not quite the exact double of the first anymore. In fact it should be obvious
that this system of equations is inconsistent, since the two contradict each
other.
6.11 Rank
121
This system gives rise to the co-efficient matrix and augmented matrix
as follows.
3 2
3 2 1
A=
; AB =
6 4
6 4 1
Now |A| = (3)(4) (2)(6) = 0, and so the rank of A cannot be two.
Taking any of the possible one by one submatrices we can clearly see that
the determinant will not be zero. Therefore r(A) = 1.
For the augmented matrix we know that the submatrix formed by eliminating the third column has a zero determinant, but we have not checked
the submatrices formed by eliminating the first or middle column. If we
eliminate the first column the resulting submatrix has a determinant of
(2)(1) (1)(4) = 2 6= 0. Therefore the r(AB ) = 2.
r(A) = 1 < r(AB ) = 2.
Remember that this system of equations has no solutions.
6.11.6
Summary
Although the previous examples do not constitute a proof, they illustrate the
following results.
With a system of equations in n unknowns such that the matrix of coefficients is A and the augmented matrix is AB , then if
r(A) = r(AB ) = n then there exists a unique solution;
r(A) = r(AB ) < n then there are infinitely many solutions;
Note that this implies that the matrix A has a zero determinant, and thus is singular
(no inverse exists) and therefore other methods must be employed to find the solutions.
6.11.7
Exercise
Use rank to check the systems of equations already analysed in 6.9.2, 6.9.3
and 6.9.4.
6.12
6.12.1
Finding Eigenvalues
122
6.12.2
123
Example
3
1
1
1
Solution
So the characteristic equation is
3 1
0 3
=
1 1
0
1
1
=0
1
6.12.3
Finding eigenvectors
Once the eigenvalues for the matrix A are found, insert each value in turn
into the equation
AX = X (A I)X = 0
and solve for X. Note that there is no unique solution for each X since
eigenvectors are exactly those that turn into multiples of themselves upon
transformation.
6.12.4
Example
3
1
1
1
Solution
We know already that this matrix has a single eigenvalue of = 2.
Therefore the corresponding eigenvector will be a solution of the equation
(A 2I)X = 0
1
1
x
0
=
1 1
y
0
124
x+y
x y
=
0
0
You will note that in fact this is really only one equation in two unknowns.
This is entirely to be expected as noted above. Let x = t for some parameter
t, then y = t and the eigenvector for this eigenvalue is
t
t
for all the possible real values of t. So in fact each eigenvector is really a
class of vectors (that all lie in the same direction), with different magnitudes.
Some people like to normalise the vectors so that it has modulus one, which
is a trivial extra step.
6.12.5
Example
6.13 Diagonalisation
125
a little investigation shows they are the same equation. A solution then
would be (for example)
2
1
For = 3 we obtain
5 6
1
0
x
y
=3
x
y
5x 6y = 3x; x = 3y
a little investigation shows they are the same equation. A solution then
would be (for example)
3
1
Therefore the eigenvalues are 2 and 3 with corresponding eigenvectors
2
3
;
1
1
6.12.6
Other orders
For larger matrices the problem is tackled in the same way. Note that for a
3 3 matrix the characteristic equation is likely to be a cubic equation and
so on, so there may be algebraic difficulties in solving it but the principle
remains the same.
Also, in our 2 2 case finding the eigenvectors we noted that we get two
equations that are essentially the same. For larger matrices such as a 3 3
problem there may appear to be three equations and it is less likely one will
obviously be a multiple of another, but may be a combination of the other
two. Is the system is solved using the techniques discussed above then any
redundant equation will soon be eliminated. The solution will always be
parametric in nature.
6.13
Diagonalisation
Diagonal matrices are square matrices which have zeros in all their entries
except for those on the leading diagonals.
An important problem in matrix theory is the process of producing a
diagonal matrix D from a given square matrix A. This process is known as
diagonalisation.
6.13 Diagonalisation
126
1
0
0
0 2
0
D = ..
..
.
.
.
.
.
0 0
0 n
where 1 , 2 , . . . n are the eigenvalues of A.
6.13.1
6.13.2
Example
Find
2
0
0
1
3
Solution
We start by squaring
2
2
2 0
=
0
0 1
4+0
0+0
0+0
0+1
0+0
0+1
0
1
2
0
0
1
=
4
0
0
1
8
0
0
1
Clearly we would have achieved the result simply by cubing all the numbers
on the leading diagonal.
6.13 Diagonalisation
6.13.3
127
This can be used the find powers of other matrices. Suppose A is a square
matrix that has dioganalisation
D = S1 AS
then premultiply by S and postmultiply by S1 on both sides.
SD = IAS SDS1 = A
Then A2 can be found in the following way
A2 = SDS1 SDS1 = SD2 S1
since the inner matrices cancel. It can be proved by induction that
An = SDn S1
6.13.4
Example
Calculate
5 6
1
0
4
Solution
We have already solved the eigenvalue problem for this matrix in example
6.12.5 and so we can write that
A = SDS1
where
S=
2
1
3
1
It follows that
S
;D =
1
3
1 2
2
0
0
3
Therefore
1
A = SD4 S
2 3
16 0
1
3
=
1 1
0 81
1 2
2 3
16
48
=
1 1
81 162
32 + 243 96 486
211 390
=
=
16 + 81 48 162
65 114
Chapter 7
Graphs of Functions
We now look at the graphs of some simple functions that are of use to us.
7.1
While differential calculus can make the job of plotting graphs much easier
it is possible to do some basic graph plotting using very simple techniques
and observations.
The simplest way to plot a graph of y = f (x) is to insert a range of values
of x and calculate the corresponding y values and plot these on graph paper.
The advantages of this method are that it is simple to understand and can
even be easily undertaken by a computer. The disadvantages are, that for
humans it can be very tedious, and we have no way of knowing if interesting
features are occuring between our chosen x values, or outside the range.
We can use simple observations to improve the situation in more complicated examples.
7.1.1
Example
129
5 3 1
18
6
2
1 3 5
6 18 38
7.1.2
Example
0 5
3 13
Plotting these points and lining them up produces a graph like that shown
in figure 7.2.
130
7.1.3
Example
7.2
Important functions
There are a number of very important functions and relationships that appear
throughout mathematics, science and engineering. We look briefly at the
most important below.
7.2.1
Direct Proportion
131
example
y x y = kx
for some constant k which is known as the constant of proportionality. It
follows that this is a straight line relationship, and a simple straight line that
passes through the origin at that. This is the way to graphically identify
quantitys that are in direct proportion.
Example
For a constant current I Ohms law is an example of direction proportion,
with the voltage drop V being directly proportional to the resistance R.
V = IR
For a constant mass m Newtons second law shows that the force F exerted
on a particle is directly proportional to the acceleration a that is produced.
F = ma
7.2.2
Inverse Proportion
k
1
y=
x
x
132
1
x
Examples
The pressure P produced when a force F acts on an area A is inversely
proportional to that area, if the force is constant.
P =
7.2.3
F
A
133
1
x2
Example
Some of the most important relationships in science obey this law.
Newtons law of universal gravitation states that the force F between two
bodies of mass m1 and m2 respectively at a distance of r apart is given by
F =
Gm1 m2
r2
where G is the gravitational constant. If the two bodies in question stay the
same mass then the whole top line is constant and this relationship becomes
an inverse square law.
Coulombs law states that the force F between two charges Q1 and Q2 at
a distance of r is given by
1 Q1 Q2
F =
4 r2
where 0 is a constant known as the permittivity of the medium. Once again,
if the charges are constant then all the material at the front may be gathered
into one large constant term.
7.2.4
Exponential Functions
We met exponential functions before (see 3.9.1) and looked at them in some
detail, so we simply given an example of one such graph here. You will
observe that the graph is always positive (never below the x axis) and in
the case of exponential growth y = ex in this case (see figure 7.5) we have
extremely rapid growth to the right and fall off to the left.
7.2.5
Logarithmic Functions
Again, we have met logarithmic functions before (see 3.9.2) and we restrict
ourselves to plotting some examples here.
134
7.3
Transformations on graphs
135
7.3.1
Addition or Subtraction
7.3.2
Multiplication or Division
7.3.3
136
7.3.4
Multiplying or Dividing x
137
7.4
7.4.1
Even functions
An even function f (x) has the property that f (x) = f (x) for all values of
x.
Essentially this means the height of the function is the same whether we
insert x or x, or in other words the graph has reflective symmetry in the y
axis. The name even comes about because when such functions are expanded
as powers of x they only contain positive powers.
Examples of even functions are 1 = x0 , x2 , xn where n is even, cos x etc.
7.4.2
Odd functions
138
139
7.4.3
Combinations of functions
Many functions are neither odd nor even, and when such such functions are
combined the properties can be changed or lost.
Combination
Result
Odd Odd
Odd
Even Even
Even
Odd Even
Neither
Even Odd
Neither
140
7.4.4
Examples
1. Consider
f (x) = x2 , g(x) = x4
then clearly both functions are even.
f (x) + g(x) = x2 + x4 (even)
f (x) g(x) = x6 (even)
2. Consider
f (x) = x2 , g(x) = x3
Combination
Result
Odd Odd
Even
Even Even
Even
Odd Even
Odd
Even Odd
Odd
141
Chapter 8
Coordinate geometry
We now look at some basic coordinate geometry and we restrict ourselves to
two dimensions in the discussions that follow.
8.1
Elementary concepts
For the following concepts we will consider two arbitary points A defined by
(x1 , y1 ) and B defined by (x2 , y2 ). See figure 8.1 for an explanation of the
geometry that follows.
143
8.1.1
8.1.2
Example
d = (5 2)2 + (7 3)2 = 32 + 42 = 25 = 5
8.1.3
Example
8.1.4
144
We can find the midpoint M of the line segment joining A and B by using
the fact that triangles AM D and ABC are similar.
It can be down that
x1 + x2 y1 + y2
M=
,
2
2
Note that this is simply the average x coordinate and the average y coordinate.
8.1.5
Example
Find the midpoint of the line segment joining (2, 3) and (5, 7).
Solution
From the above formula we obtain
2+5 3+7
,
= (3.5, 5)
M=
2
2
8.1.6
Example
Find the midpoint of the line segment joining (1, 2) and (3, 4).
Solution
This time we obtain
M=
8.1.7
1 + 3 2 + 4
,
2
2
= (1, 1)
Gradient
The gradient of a line is simply the amount it rises for every unit it travels
to the right. It follows that a horizontal line has a gradient of zero, and a
vertical line as an infinite, or undefined gradient.
Using the diagram above we see that the gradient of the line segment
from A to B is given by
y2 y1
m=
x2 x1
8.1.8
145
Example
Find the gradient of the line segment joining (2, 3) and (5, 7).
Solution
Using the formula above, we obtain
m=
8.1.9
4
73
=
52
3
Example
Find the gradient of the line segment joining (1, 2) and (3, 4).
Solution
Again, but being very careful with our signs, we obtain
m=
8.2
6
3
4 2
=
=
3 1
4
2
8.2.1
146
8.2.2
m and c are the constants that define the line, once we have them we are finished.
Even though for different points on the line x and y will change, m and c will remain the
same.
8.2.3
Example
Find the equation of the line joining (2, 3) and (5, 7).
Solution
We have already peformed the first step in the example above by calculating
the gradient as m = 34 . So we know that
4
y = x+c
3
for some value of c.
We now insert either of the two points we have above. Let us choose the
first one
4
8
8
1
3= 2+c3= +cc=3 = .
3
3
3
3
Therefore the equation of the line is
1
4
y = x+ .
3
3
8.2.4
Example
Find the equation of the line joining (1, 2) and (3, 4).
Solution
Once again, we have previously determined the gradient between these two
points, this time it is m = 32 . Therefore our equation is
3
y = x+c
2
for some value of c.
We insert a point on the line, again we pick the first one.
3
3
3 1
2 = 1 + c 2 = + c c = 2
2
2
2 2
147
148
Chapter 9
Differential Calculus
Calculus is essentially the precise study of varying quantities. In a problem
involving a constant, such as a car travelling at constant speed it is easy to
answer most questions about it, such as distance travelled. However, if the
speed varies, everything about the problem becomes a little harder.
It is a testament to the genius of Isaac Newton that he developed calculus
as a stepping stone to his great work on the physics of gravity.
Another mathematician, Liebnitz developed calculus independently.
9.1
Concept
9.2
Notation
If y = f (x) then we often use the shorthand f 0 (x) for the derivative. Thus
f 0 (x) =
dy
dx
150
d2 y
.
dx2
f 000 (x) =
d3 y
.
dx3
Third derivative
9.3
9.3.1
Power Rule
151
d n
(x ) = nxn1 .
dx
Note that n must be a constant, that is it must depend on the value of x.
9.3.2
9.3.3
d
(u) +
dx
d
(u)
dx
d
(v) ;
dx
d
(v) .
dx
9.3.4
Chain Rule
9.3.5
Product Rule
9.4 Examples
9.3.6
If y =
152
Quotient Rule
u
v
9.3.7
Trigonometric Rules
9.3.8
Exponential Rules
d x
(e ) = ex .
dx
9.3.9
Logarithmic Rules
1
d
(ln x) = .
dx
x
9.4
Examples
(b)
4x3
(e) x
(c) 2x
(f) 2x
9.4 Examples
153
(a) 2x2 + x3 3x (b) 3x(x 2)
x+1
(c)
(d) (x 1)(x + 2)
x
3.
(a) (2x + 5)4
(c) (2x + 3)100
(e) sin 3x
(g) e2x
4.
(a) x sin x
(b) (3x2 + 1) cos x
(c) (ln x)(sin x) (d) (x2 + 3x)ex
5.
9.4.1
(a)
x
sin x
(b)
cos x
x
(c)
e2x
3x2 4
(d)
x2 3x + 6
x3 + 2x 5
Solutions
9.4 Examples
154
(e)
d
d 1
1 1
1
x 2 = x 2 =
x=
dx
dx
2
2 x
(f)
d 2
d 1
d 2
=
2x 2
1 =
dx x
dx x 2
dx
3
3
1
1
= 2 x 2 = x 2 =
2
x3
Note also that constants vanish when we differentiate them
d
d 0
k=
kx = 0 kx1 = 0
dx
dx
2. Although in some of these cases we could use exotic rules like the product
and quotient rule, all of these examples can be differentiated using the power
rule only, with a little algebra to get them in the right form, we also use the
fact that we can work around addition and subtraction.
(a) Power rule, working between each addition, subtraction
2 2x1 + 3x2 3 = 3x2 + 4x 3
(b) We could use the product rule, but its overkill to do so, instead consider
that
3x(x 2) = 3x2 6x
d
(3x2 6x) = 2 3x1 6 = 6x 6.
dx
(c) We could use the quotient rule, but some algebra is better
x+1
1
= 1 + = 1 + x1
x
x
d
1
(1 + x1 ) = 0 + 1x2 = x2 = 2 .
dx
x
(d) We could use the product rule, but algebra is easier
(x 1)(x + 2) = x2 + x 2
d 2
(x + x 2) = 2x + 1 0 = 2x + 1.
dx
3. The key to chain rules is to see some block of symbols we wish was simply
x. We let u be this block, and then write y in terms of u. Finally we plug
things into the chain rule formula, like so.
9.4 Examples
155
dx
du dx
dy
= 4u3 (2) = 4(2x + 5)3 2 = 8(2x + 5)3 .
dx
(b) Chain rule, y = u5 , u = 3x2 + 4
dy
= 5u4 (6x + 0) = 30x(3x2 + 4)4 .
dx
(c) Chain rule, y = u100 , u = 2x + 3
dy
= 100u99 2 = 200(2x + 3)99 .
dx
(d) Chain rule, y = u50 , u = 3x 2
dy
= 50u51 3 = 150(3x 2)51 .
dx
(e) Chain rule, y = sin u, u = 3x
dy
= cos u 3 = 3 cos 3x.
dx
(f) Chain rule, y = u2 , u = cos x
dy
= 2u sin x = 2 sin x cos x = sin 2x.
dx
(g) Chain rule, y = eu , u = 2x
dy
= eu 2 = 2e2x .
dx
(h) Chain rule, y = u3 , u = cos 2x
dy
= 3u2 2 sin 2x = 6(sin 2x)(cos2 2x).
dx
(We use the chain rule again to work out
d
dx
9.5 Tangents
156
2e2x (3x2 3x 4)
.
(3x2 4)2
(d) Quotient rule, u = x2 3x + 6, v = x3 + 2x 5
=
(x3 + 2x 5)2
9.5
Tangents
A tangent is a line which only grazes a curve at a specific point. The gradient
of the tangent and the curve are equal at that point.
Differentiation allows us to easily find tangents to curves at points.
The procedure is to differentiate the formula for the curve, as the derivative is the gradient. Recall that the equation of a straight line is
y = mx + c
where m and c are constants (and m is the gradient).
So once we find the derivative we have calculated m. To finish we need
only insert the coordinates of the point at which the tangent hits the curve
in order to calculate c.
9.5 Tangents
9.5.1
157
Example
dy
= m = 2(1) + 2 = 4.
dx
Thus our line has the form
y = 4x + c
so all we need to do now is to find c, which we do by plugging in the coordinates of some point on the line, which is (1, 0), so let x = 1, y = 0 and we
get
0 = 4(1) + c c = 4
and we have therefore our final line as
y = 4x 4.
9.5.2
Example
158
Solution
When x = 0, y = 0.e0 = 0 1 = 0, so the tangent passes through the point
(0, 0).
The gradient of the tangent is the gradient of the curve, and
dy
= ex .1 + x.ex = ex (1 + x)
dx
so when x = 0
dy
= m = e0 (1 + 0) = 1
dx
9.6
Turning Points
There are certain features in curves that particularly interest us. An example
would be the peaks and troughs of a curve, or in more exact language,
the maxima and minima. These, together with two other cases we shall
see later, are known as turning points.
Looking at such points on a curve reveals an important fact: at just the
point of the maximum or minimum, the curve is horizontal. In other words
the gradient is zero at these points.
9.6.1
There are four types of turning point, or points where the gradient is zero
for an instant. These are shown in figure 9.1.
Local maximum
A local maximum is a point on the curve so that if we were to imagine
standing at that point, the curve would descend in both directions, at least
for a little distance. This is case 1 in figure 9.1.
9.6.2
To find the turning points of a curve y = f (x) we first of all find the derivative,
f 0 (x) and solve the equation
159
160
dy
=0
dx
for x. There may be no, one or several values of x that solve this equation. For each value of x, place it into the formula for y = f (x) to find the
corresponding y coordinates.
In doing this we have found all the turning points of the curve.
f 0 (x) =
9.6.3
Once we have located our turning points we usually wish to know what type
of turning point each one is. There are two ways of determining this.
Second derivative test
The first method is to differentiate the derivative again, to obtain the second
derivative.
We plug the x values from each turning point into this function and,
depending upon the sign of our answer we can determine which type of
turning point we have. This is shown in table 9.1.
f 00 (x) Turning Point
Comments
Local minimum
Conclusive
Local maximum
Conclusive
Stationary inflection
Inconclusive
161
To use this method we examine the sign of the gradient, f 0 (x) a little to
the left and to the right of the turning point. It is important we stay close
however. The results we can conclude are shown in table 9.2.
Left Right Conclusion
+
Local maximum
Local minimum
9.6.4
Example
162
Finally it simply remains to classify the points, using the second derivative
test.
d2 y
x = 4 2 = 6(4) + 6 = ve
dx
which indicates a local maximum, and
x=2
d2 y
= 6(2) + 6 = +ve
dx2
9.6.5
Example
d2 y
= 6(0) = 0
dx2
which is inconclusive.
We have to fall back on using the first derivative. Try just left of the
turning point, say when x = 1
dy
= 3(1)2 = 3 = +ve
dx
and right just a bit, say when x = 1
dy
= 3(1)2 = 3 = +ve
dx
163
9.6.6
Example
9.7
Newton Rhapson
Comparing this with the quadratic solution formula is rather interesting, it should be-
come clear that the solution formula gives two solutions in general, each an equal distance
away from the turning point, but on either side.
164
it will not (usually) find the exact solution to the equation, but one which is
close enough for our purposes.
Suppose that we have an approximate solution for the equation
f (x) = 0
which is x = a1 , then we can find a better approximation to the solution
x = a2 from the following formula:
an+1 = an
f (a)
.
f 0 (a)
9.7.1
Example
f (an )
f 0 (an )
a2n 3an + 1
2an 3
165
0 3(0) + 1
= 0 0.3333 = 0.3333
2(0) 3
and continuing, placing this value back into the formula and so on, we obtain:
a3
a4
a4
= 0.3333
0.0476
= 0.3809 1.0131 103
= 0.3820 4.5907 107
= 0.3809
= 0.3820
= 0.3820.
=
2.5
= 2.6250
= 2.6181
= 2.6180
0.125
6.9444 103
2.1567 105
2.0800 1010
= 2.6250
= 2.6181
= 2.6180
= 2.6180.
9.7.2
Example
1
1
x
166
f (an )
f 0 (an )
which will be
ln(an ) an + 10
a1
n 1
So shall only explicitly do the first iteration
an+1 = an
a2 = 12
ln 12 12 + 10
= 12 0.5290 = 12.5290
(12)1 1
= 12.5280
= 12.5280
9.8
Partial Differentiation
When we have more than one variable, for example, suppose that some quantity z depends on both x and y, then we must use partial differentiation.
The partial derivative of z with respect to x denoted
z
x
means the derivative of z with respect to x, treated as though any variables other than x are constant.
In an analogy of our previous notation, we accept the convention that
x
means the partial derivative, with respect to x, of what follows.
It is possible to partially differentiate z with respect to x and then y
which is denoted by
z
2z
(z) =
=
.
y x
y x
xy
167
Note that this is usually the same as the partial derivative of z with
respect to y and then x, denoted
2z
yx
but not always, so the order on the bottom line of the notation is relevant.
Note that the derivatives take place in the reverse order from their listing on
the bottom line.
Of course, if we partially differentiate z with respect to x twice, we get
the more familiar notation
2z
.
x2
9.8.1
Example
z = x2 y + 3x3 y 2
Find
z z 2 z 2 z 2 z 2 z
, ,
,
,
,
.
x y xy yx x2 y 2
Solution
z
First find x
. To do this we differentiate the formula for z with respect to x,
treating every y as a constant:
z
= 2xy + 9x2 y 2 .
x
To find
constants:
z
y
z
= x2 + 6x3 y.
y
Now we can proceed with the other derivatives more easily.
2z
z
=
= 2x + 18x2 y;
xy
x y
2z
=
yx
y
z
x
= 2x + 18x2 y.
Note that these two derivates are the same, this is almost, but not always
the case.
168
2z
=
x2
x
z
= 2y + 18xy 2 ;
x
2z
z
= 6x3 .
=
y 2
y y
9.9
Small Changes
.
x
dx
It therefore follows that
dy
x.
dx
Thus, given our increase x in x, we can calculate the corresponding
increase in y, given by y.
We can do the same thing with partial differentiation if more than two
variables are concerned.
In general then
z
z
x + +
y.
z =
x
y
y =
9.9.1
Example
Note that x does not signify two numbers and x multiplied together. In this
context the small greek delta represents a small increase in the variable x. This is
common throughout the literature.
169
Solution
We know that the area of a circle is given by A = r2 , and that therefore
dA
= 2r
dr
and we know that
dA
r.
dr
= 10.
9.9.2
Example
A voltage of 12V is placed over a resistor of value 10. Without direct calculation, find the change in the current through the resistor if the resistance
is changed to 10.2.
Voltage V , Current I and resistance R are governed by Ohms law.
V = IR
Solution
We need to calulcate I so we first rearrange by dividing by R on both sides
to obtain
V
=I
R
Our initial current is thus 12
= 1.2A.
10
Here we have three variables, but we shouldnt be alarmed about this
because the voltage is constant, and we want current with respect to voltage,
and we may use the relationship
I
I
I
I
R.
R
R
R
I
V
= V R2 = 2
R
R
so first we have written I to make it a little easier to differentiate (power
rule rather than the quotient rule), and we remember that V is treated as a
constant in the partial derivative.
Now, plugging in our values we obtain
I = V R1
12
R I 0.12 0.2 = 0.024.
102
So the change results in a downward movement of the current of 0.024A
(or in other words, we estimate the new current to be 1.176A).
Direct calculation shows the answer in this case to be 1.176A (to three
decimal places).
I
170
171
Chapter 10
Integral Calculus
10.1
Concept
10.1.1
Constant of Integration
Consider that
d
d
d
(5x 10) =
(5x) =
(5x + 1000) = 5
dx
dx
dx
If we consider the problem of finding the integral of 5 with respect to x we
need something that differentiates to be 5, so any of these functions could
do.
173
10.2
Differentiation is a relatively simple procedure; with practice we can differentiate almost any function. Integration is, in general, much harder and many
of our favourite rules have no equivalent, so we must use a variety of less
perfect techniques. As before we look at some rules and techniques before
introducing examples.
10.2.1
Power Rule
10.2.2
Z
u vdx =
Z
udx
vdx.
10.2.3
Multiplication by a constant
10.2.4
Substitution
We have no analog of the chain rule in integration, but the closest thing is
the idea of substitution.
Once again, we look for an ackward expression that would be easier if
it were x, and we let this be u. We then differentiate u with respect to x,
and imagine that we can split the du from the dx. We rearrange for dx (and
sometimes other bits of x terms and replace the dx with du. We then perform
the u based integral.
10.2.5
When we looked at the chain rule in differentiation (see 9.3.4 we saw that
there was a quick way to think about it. We can differentiate the object u
. For example, we have the basic
as if it were x, provided we multiply by du
dx
rule
d
(sin x) = cos x,
dx
but the chain rule can extend this to
d
du
(sin u) = cos u .
dx
dx
This is very quick to operate, and the laborious process of subsitution in
integration does not easily lend itself to this speed.
Fortunately, if our u = ax + b where a and b are constants (b may be
zero), we can take a shortcut. We integrate u just as we would x, and then
174
175
1
eax+b dx = eax+b + c
a
and so on.
10.2.6
Logarithm rule
Again, appealing to the differentiation chain rule (see 9.3.4) we can extend
the basic result
1
d
(ln x) =
dx
x
to
d
1 du
u0
(ln u) =
=
dx
u dx
u
where u is some function of x. It follows that
Z 0
f (x)
dx = ln f (x) + c.
f (x)
That is, if we need to integrate a function where the numerator is the derivative of the denominator we simply state it to be the natural logarithm of the
bottom line (plus the usual constant of course).
10.2.7
Partial Fractions
Because of the lack of solid rules in integration, we must rely with our ingenuity in algebra far more than was the case in differentiation. One technique
that is useful for dealing with fraction expressions is called partial fractions
in which we try to split a complex fraction into several smaller, simpler ones.
The first step of the method is to attempt to factorize the bottom line as
completely as possible, and the exact action will be decided partially on how
well the bottom factors, and on the degree of the top line of the equation.
We shall assume that the degree1 of the top line is always less than the
degree of the lower line. If this is not the case we must begin with long
division in algebra which is beyond the scope of this module.
We consider several cases and in each case we give a suitable right hand
side which we hope to simplify our expression to.
Denominator factors completely into linear factors
In this case, the bottom line of the expression can be decomposed entirely
into linear factors, that is, factors of the form (ax + b) like (2x 3), or 3x
etc.. In this case we shall use the following target expression.
A
B
gx + h
=
+
(ax + b)(cx + d)
ax + b cx + d
So the expression on the right hand side is formed entirely of numbers over
each factor. The numbers are represented by the capitals A and B and are
to be determined.
Denominator factors with repeated linear factors
This is similar to the previous case in that the bottom line decomposes completely into linear factors, but this time one of those factors is repeated. For
example
gx + h
(ax + b)(cx + d)2
In this case it is not correct to use the expected expansion
gx + h
A
B
C
=
+
+
2
(ax + b)(cx + d)
ax + b cx + d cx + d
1
The degree of an equation is the index of the highest power of x, thus a quadratic
equation is of degree 2.
176
10.2.8
Integration by Parts
The integration by parts formula is the closest thing that we have to a product
rule for differentiation, and it is nothing like as powerful.
If u and v are functions of x then
Z
Z
dv
du
v dx = uv u dx
dx
dx
177
178
Proof
We include a simple proof for interest and completeness, it is not examinable.
This result comes from the differentiation product rule:
d
du
dv
(uv) = v
+u
dx
dx
dx
Integrating with respect to x both sides obtains
Z
Z
dv
du
uv = v dx + u dx,
dx
dx
with the derivative and integral cancelling on the LHS.
We now rearrange
R dv
this to the parts formula by, for example, subtracting u dx dx on both sides.
Observations
Note that to use this as a product rule we need to pick one bit of the product
dv
to be v and the other to be du
. We will also need to find dx
and u. It
dx
follows that we must pick one bit we can differentiate (v) and one bit we can
.
integrate du
dx
The RHS of the formula consists of two terms; the term uv is of no
concern, it is already integrated, but the other term can be a problem. If we
cannot work out the integral on the RHS this method is no use. We must
make this integral easier than our first one, or at least no worse.
10.2.9
Other rules
There are some other rules in integration that are useful, which are as follows.
Trigonometric functions
Z
sin xdx = cos x + c
Z
cos xdx = sin x + c
Z
tan xdx = ln | sec x| + c
Z
cot xdx = ln | sin x| + c
10.3 Examples
179
Z
sec xdx = ln | sec x + tan x| + c
Z
x
csc xdx = ln | tan | + c
2
Exponential functions
Z
ex dx = ex + c
Miscellaneous functions
Z
x
1
1
dx = tan1 + c
2
+a
a
a
Z
x 1
1
1
+c
dx =
ln
x 2 a2
2a x + a
Z
x
1
dx = sin1 + c
a
a2 x 2
x2
10.3
Examples
10.3.1
Examples
x3 + 4x2 + 6x 5dx
2.
3.
xdx
x2 + 1
dx
x
10.3 Examples
180
Solutions
These are all plain, power rule, integrations.
1.
x4 4x3
=
+
+ 3x2 5x + c.
4
3
2.
Z
1
= x 2 dx
2 3
= x 2 + c.
3
Z
1
= x + dx
x
3.
10.3.2
x2
+ ln x + c.
2
Examples
Z
sin 3xdx
2.
3.
4.
(3x 2)10 dx
x x2 3dx
Z
x
dx
x+1
Solutions
These can all be tackled with substitution (see 10.2.4).
1. Let
du
du
u = 3x
= 3 dx =
dx
3
So we can replace our 3x by u to obtain
Z
= sin udx,
10.3 Examples
181
but this isnt enough, although the function is now simple to integrate, it is
simple to integrate with respect to u, and not x. We also need to change the
dx, so we use the formula we obtained from rearranging du
.
dx
Z
1
1
=
sin udu = cos u + c
3
3
but it is not appropriate to leave u in the answer (just as in the differentiation
chain rule questions) so we place it back in:
1
= cos 3x + c.
3
This question is also (more quickly) solvable using the limited chain (see 10.2.5)
which is left as an exercise.
2. Let
du
du
= 3 dx =
.
dx
3
So we can replace occurences of 3x 2 and dx to obtain
Z
1
1 1 11
=
u10 du =
u + c.
3
3 11
u = 3x 2
1
(3x 2)11 + c.
33
This question is also (more quickly) solvable using the limited chain
(see 10.2.5) which is left as an exercise.
3. Let
du
du
= 2x
= xdx.
dx
2
Note that we try to place x terms with dx and constants and any u terms
(there arent any here) in du. So we obtain
Z
= x udx
u = x2 3
10.3 Examples
182
=
12 3
u2 + c
23
4. Let
u=x+1
du
= 1 du = dx
dx
x
dx,
u
10.3.3
Example
4x + 5
dx
(2x 1)(x + 2)
10.3 Examples
183
Solution
This is a partial fraction problem (see 10.2.7).
Before any integration takes place at all, we first perform the algebra of
simplifying the expression in the integral. From partial fractions we have
A
B
4x + 5
=
+
(2x 1)(x + 2)
2x 1 x + 2
Now we multiply up by the denominator of the LHS (2x 1)(x + 2) on both
sides, which removes the fractions from the problem.
4x + 5 = A(x + 2) + B(2x 1)
and we now have to find A and B. This expression is an identity, that is to
say, it is true for all values of x. We can insert a couple of values into x to get
simaltaneous equations, but we can make life very easy by picking carefully,
so that one bracket vanishes (to zero).
3
x = 2 3 = 5B B = ,
5
and similarly
x=
1
5
14
7= AA= .
2
2
5
10.3.4
Example
10.3 Examples
1.
184
2.
4x3 6x + 6
dx
x4 3x2 + 6x 4
Z
x
dx
x2 3
Solutions
These are both solvable using the logarithm rule (see 10.2.6).
1. Straight away we observe that the numerator is the derivative of the
denominator. So
= ln(x4 3x2 + 6x 4) + c
2. The numerator is not the denominator, but we can tweak things.
Z
2x
1
1
dx
=
ln(x2 3) + c.
=
2
x2 3
2
10.3.5
Examples
Z
x sin xdx;
2.
xe2x dx.
Solutions
These are classic parts integrations (see 10.2.8).
1. Pick the bit to differentiate
v=x
dv
=1
dx
185
dv
=1
dx
10.4
Definite Integration
is given by
F (b) F (a)
where
Z
F (x) =
f (x)dx.
This last integral, of the type we have already met is known as a indefinite
integral, and is generally speaking a function of x.
The definite integral will be a number, formed by inserting b and then a
into the indefinite integral and subtracting.
We call a and b the limits of integration.
Note that because the definite integral is a number, independent of x that
the variable x is sometimes called a dummy variable as it vanishes from the
result, and we might just as easily have used t or some other name with no
consequence.
10.4.1
186
Notation
If
Z
f (x)dx = F (x) + c
we normally write
b
f (x)dx = [F (x)]ba
which signifies that the expression has been integrated, but the limits have
not yet been inserted.
10.4.2
Concept
The definite integral usually has some significance, usually that of summing
up strips of area.
10.4.3
Areas
The area enclosed by the curve y = f (x) and above the x-axis from x = a to
x = b is given by
Z b
ydx
A=
a
10.4.4
Example
4
3x2 dx = x3 2
10.4.5
Example
187
Solution
We are not given limits here, but this is a quadratic, so we can find where
the graph hits the x-axis and hence have our limits.
Solve
1
2x2 3x 2 = 0 (2x + 1)(x 2) = 0 x = , x = 2.
2
So these are our limits. The area we require is given by
Z 2
2x2 3x 2dx
21
2
2 3 3 2
= x x 2x
3
2
12
2 1 3 3 1 2
1
2 3 3 2
2 2 2(2)
( ) ( ) 2( )
=
3
2
3 2
2 2
2
= 5.2083
which is below the axis, we can disregard the minus sign.
10.4.6
Volumes of Revolution
The volume formed when the curve y = f (x) is rotated 360 around the
x-axis, from x = a to x = b is given by
Z b
V =
y 2 dx
a
10.4.7
Example
Find the volume formed when the curve y = ex is revolved around the x-axis
between x = 1 and x = 3.
Solution
First of all we must find y 2 :
y 2 = (ex )2 = e2x
188
y 2 dx
1
e dx = e2x
=
2
1
6
=
e e2
2
10.4.8
2x
3
1
Mean Values
10.4.9
Example
10.4.10
Example
189
Solution
Recall that we use radians in calculus, so we are integrating over 360 here.
Z 2
1
M=
sin xdx
2 0 0
=
1
[cos x]2
0
2
1
{(cos 2) (cos 0)}
2
1
=
(1 1) = 0
2
Oddly, at first, the mean value of this function is zero. This is because
the sin graph spends an equal amount of time above and below the axis over
this range and the two areas cancel out. This is not really an acceptable was
of dealing with this sort of signal (which could represent, for example, an
A.C. current).
=
10.4.11
RMS Values
The mean value suffers from the problem that integration considers areas
below the x-axis to be negative, and sometimes we wish to use the superior
concept of the RMS or Root-Mean-Squared value of a function. This is given
by
s
Z b
1
y 2 dx
RM S =
ba a
so that it is literally the square root, of the mean, of the function squared.
The squaring ensures the function will be positive, while the square root
takes us back to the same units afterwards.
10.4.12
Example
190
Solution
We shall concentrate on finding the contents of the square root first.
Z 4
1
2
RM S =
(2x + 3)2 dx
42 2
Z
1 4 2
=
4x + 12x + 9dx
2 2
4
1 4x3
2
=
+ 6x + 9x
2 3
2
3
3
1
4(4)
4(2)
247
2
2
=
+ 6(4) + 9(4)
+ 6(2) + 9(2)
=
2
3
3
3
Thus the RMS is the square root of this
r
247
RM S =
9.0738
3
10.4.13
Example
that is, we divide by 2 to find the analogous D.C. value for an A.C. signal
of raw amplitude A.
= A2
Note
I rather glossed over the problem of integrating sin2 x so I do this now for
completeness.
Recall from our trigonometry that
cos 2x = cos2 x sin2 x
and
sin2 x + cos2 x = 1
Thus
cos 2x = (1 sin2 x) sin2 x = 1 2 sin2 x
1
sin2 x = (1 cos 2x)
2
whereas the function on the left is hard to integrate, the one on the right is
relatively simple.
Z
1
1
1
(1 cos 2x) dx =
x sin 2x + c
2
2
2
by using the limited chain rule (see 10.2.5).
10.5
Numerical Integration
191
10.5.1
192
Simpsons rule
10.5.2
Example
193
Chapter 11
Power Series
11.1
Definition
n
X
ai x i
i=0
11.1.1
Convergence
11.2
195
Maclaurins Expansion
11.2.1
A function where the only powers of x present are odd numbers is called an
odd function. Odd functions obey the rule
f (x) = f (x).
A function where the only powers of x present are even numbers is called
an even function. Even functions obey the rule
f (x) = f (x).
11.2.2
196
Example
0 2 1 3 0 4
x +
x + x + cdots
2!
3!
4!
Thus
x3 x5 x7
+
+
3!
5!
7!
It can be shown that this series has an infinite radius of convergence.
sin x = x
11.2.3
Exercise
Obtain the Maclaurin expansion for cos x, and show that cos x is an even
function.
It can be shown that this series has an infinite radius of convergence.
11.2.4
Example
197
and so
f (0) = f 0 (0) = f 00 (0) = f 000 (0) = = 1
so that the Maclaurin expansion is
ex = 1 + 1x +
1 2 1 3
x + x +
2!
3!
x2 x3 x4
+
+
+
2!
3!
4!
11.2.5
Exercise
11.2.6
Example
11.3
Taylors Expansion
11.3.1
198
Example
1
1 00
2
3!
, f (x) = 2 , f 000 (x) = 3 , f 4 (x) = 4 , . . .
x
x
x
x
and so
f (1) = 0, f 0 (1) = 1, f 00 (1) = 1, f 000 (1) = 2, f 4 (1) = 3!, . . .
and so
f (1 + x) = 0 + 1x +
1 2 2 3
x + x +
2!
3!
11.3.2
x2 x 3 x4 x5
+
+
+
2
3
4
5
So note that
f (a + x) f (a) = +f 0 (a)x +
199
and f 0 (a) = 0 so
f (a + x) f (a) =
If x is sufficiently small, (i.e. we are very close to the turning point) then we
neglect terms of x3 or higher.
f (a + x) f (a)
f 00 (a) 2
x
2!
Now x2! is clearly positive, so the sign of the LHS depends entirely upon the
sign of f 00 (a).
If f 00 (a) > 0 then f (a + x) f (a) > 0 f (a + x) > f (a) for small x;
If f 00 (a) < 0 then f (a + x) f (a) > 0 f (a + x) < f (a) for small x.
These statements are precisely that there is a local minimum, or maximum at x = a respectively.
Chapter 12
Differential Equations
A very important application of integration is that of differential equations.
These are equations in terms of the derivatives of a variable.
12.1
Concept
We shall restrict ourselves to Differntial Equationss involving only two variables x and y.
A Differential Equation or D.E. for short is an equation involving x, y
and the derivatives of y with respect to x.
The order of a differential equation is the number of the highest derivative
present in the equation.
In general we wish to find the function y = f (x) which satisfies the D.E.,
and in general, unless we have information to help us calculate them, we will
have a constant for each order of the D.E. when we solve it.
12.2
Exact D.E.s
201
This is a first order exact equation, and we could have second order or third
order equations, but we will restrict ourselves to first order equations of this
type here.
Equations of the form
d
(f (x, y)) = g(x)
dx
are also exact, as all that has to be done is to integrate on both sides with
respect to x to remove all derivatives.
12.2.1
Example
12.2.2
Example
dy
= x+1
dx
202
Solution
This doesnt look exact, but all we have to do is divide by x on both sides
to obtain
1
dy
x+1
dy
x2 + 1
=
dx
x
dx
x
so that we obtain
1
dy
1
= x 2 +
dx
x
Now integrate on both sides, with respect to x.
Z
Z
1
dy
1
dx = x 2 + dx
dx
x
so that we obtain
1
y = x2
which is
1
+ ln x + c
2
y = 2x 2 + ln x + c
12.2.3
Example
dy
+ 2xy = 4e2x
dx
Solution
This is an exact equation, but it doesnt look like it. The left hand side can
be written as the derivative of a product, as so
d
x2 y = 4e2x
dx
and so integrating both sides with respect to x removes the derivative (cancels
it).
Z
x2 y =
4e2x dx = 2e2x + c
2e2x + c
x2
12.3
203
dy
= g(x)
dx
12.3.1
Example
dy
=x
dx
Solution
There are two ways of doing this, the formal way and the informal way. For
this first example, we shall do both.
Formally, we integrate with respect to x on both sides.
Z
Z
dy
y dx = xdx
dx
204
ydy =
xdx
y2
x2
=
+c
2
2
Note that each integration technically gives rise to a constant, but we can
absorb them into a single one. Were finished with the calculus now, its only
algebra to polish up. Multiply by 2 on both sides
y 2 = x2 + 2c
but 2c is just a constant, say d so
y 2 = x2 + d
We could leave it here.
The other way of thinking is to imagine we split the dy from the dx. So
that here, we multiply up by dx on both sides
ydy = xdx
and then we place integral signs in front of both sides
Z
Z
ydy = xdx
this is not really what happens, but it works out the same, and is faster.
From here we continue as above.
12.3.2
Example
which defines the activity of a nuclear sample over time (radioactive decay), in which case
lambda is a poaitive constant known as the decay constant (which is related to the half
life). If we let y = I and x = t and =
in a discharging capacitor.
1
CR
dy = dx
y
and we can take the constant out on the RHS to obtain
Z
Z
1
dx
y
which gives us
ln y = x + c
R
205
y2
2y = 2 ln(x + 3) + c
2
12.4
206
207
d
(i(x)y) = i(x)g(x)
dx
in which case, from the product rule we have
d
di
i(x) = i(x)f (x)
= i(x)f (x)
dx
dx
which is first order variables separable
Z
Z
Z
di
= f (x)dx ln(i(x)) = f (x)dx
i(x)
which finally gives us
i(x) = e
12.4.1
f (x)dx
Z
= exp( f (x)dx)
Example
f (x)dx
=e
2dx
= e2x
dy
+ 2ye2x = ex e2x = e3x
dx
e y = e3x
dx
and if we now integrate on both sides we obtain
1
e2x y = e3x + c
3
2x
and thus we can now divide by e on both sides
1
y = ex + ce2x
3
12.4.2
Example
+ y=
dx x
x2
Now this is in classic first order linear form, and we try to find the integrating
factor i(x).
R 1
R
i(x) = e f (x)dx = e x dx = eln x = x
So we now multiply throughout by this factor, which is simply x, to obtain
x1
dy
x +y =
dx
x
which is now an exact D.E., looking at the LHS we see that
d
x1
(xy) =
dx
x
and so integrating both sides with respect to x yields
Z
Z
x1
1
xy =
dx = 1 dx = x ln x + c
x
x
and finally, dividing by x both sides we obtain
ln x c
+
y =1
x
x
208
12.5
d2 y
dy
+ b + cy = f (x).
2
dx
dx
When f (x) = 0 this is called a homogenous differential equation, otherwise it is called inhomogeneous. We shall only consider solutions to the
homogeneous case.
The solution to the homogenous equation is known as the Complementary
function, or C.F. for short.
There is a step-by-step procedure for solving these D.E.s.
12.5.1
Consider
dy
d2 y
+ b + cy = 0.
2
dx
dx
We begin by forming a quadratic equation known as the auxilliary equation. This equation is
am2 + bm + c = 0.
a
Now we know from 3.7 that this equation can have two, one or no real (two
complex) solutions.
Two real solutions
Suppose that we have two real solutions m = and m = , then the C.F. is
given by
y = Aex + Bex
where A and B are constants2 which can often be determined from data in
the question.
2
These arise out of the two integrations required for the second order derivative in the
equation in fact, but this process does not require us to do any integration at all.
209
210
Recall at this point that when we solve a quadratic equation to give two complex
12.5.2
211
Example
12.5.3
Example
12.5.4
212
Example
y = Ae 2 x + Be2x
12.5.5
Example
22 4(1)(5)
2pm 16
m=
=
2
2
= 1pm2j m = 1 + 2j, m = 1 2j
2
12.5.6
213
Example
m2 = 2 m = = j
and we have
m = j, m = j
and we have two complex solutions, this time with = 0 and = in the
above form. Thus
y = e0x (C cos x + D sin x)
and finally we have
y = C cos x + D sin x
which is the general form of a wave with angular frequency , the phase
angle and amplitude can be determined using a procedure demonstrated in
our trigonometry section (see 4.9.3).
This is another very important D.E., as it is the D.E. which describes all simple
harmonic motion (behind all sin and cos waves and similar phenomonena).
Chapter 13
Differentiation in several
variables
The function z = f (x, y) may be represented in three dimensions by a surface,
where the value z represents the height of the surface above the x, y plane.
13.1
Partial Differentiation
13.1.1
Procedure
13.1.2
215
Examples
z
x
and
z
.
y
2.
z
z
= y2;
= 2xy
x
y
z
= 3x2 (sin xy + 3x + y + 4) + x3 (y cos xy + 3)
x
z
= x3 (x cos xy + 1)
y
13.1.3
Notation
Due to the ambiguity of the f 0 notation (we do not know which variable
we differentiated with respect to) a different notation is adopted for partial
derivatives.
f
f=
x
x
f
f=
fy =
y
y
fx =
13.1.4
Higher Derivatives
216
217
f
2
2f
fyy =
= 2f =
y y
y
y 2
2
2f
f
=
f=
fxy =
y x
yx
yx
f
2
2f
fyx =
=
f=
x y
xy
xy
while it is usually the case that fxy = fyx it is not always the case.
13.1.5
Example
For the functions in Example 13.1.2 find the higher order derivatives fx y,
fy x, fy y and fx x.
Solution
1.
2
y = 0;
x
2
fx y =
y = 2y;
y
fy x =
2xy = 2y;
y
2xy = 2x.
fy y =
y
fx x =
2.
= 3x3 cos xy + 3x2 x4 y sin xy + x3 cos xy = 4x3 cos xy + 3x2 x4 y sin xy;
4
fy x =
x cos xy + x3
y
= x4 y sin xy + 4x3 cos xy + 3x2 ;
4
fy y =
x cos xy + x3
y
= x5 sin xy.
13.2
Taylors Theorem
f (a + h, b + k) = f (a, b) + h
+k
f (a, b) +
h
+k
f (a, b)
x
y
2!
x
y
n1
n
1
+ +
h
+k
f (a, b)+
h
+k
f (a+h, b+k)
(n 1)!
x
y
(n)!
x
y
where 0 < < 1 and the notation used indicates
r
r1
h
f (x, y) = h
h
f (x, y)
+k
+k
+k
x
y
x
y
x
y
where
h
+k
f (x, y) = hfx (x, y) + kfy (x, y)
x
y
13.3
Stationary Points
13.3.1
Types of points
The most important types of points to find are local maxima and local minima.
A local maximum (a, b) is a point so that for small h and k
f (a + h, b + k) f (a, b) f (a + h, b + k) f (a, b) 0.
In other words, even slight movements off centre result in a smaller value
(and they must be small, for larger maxima may exist elsewhere).
Conversely, a local minimum (a, b) is a point so that for small h and k
f (a + h, b + k) f (a, b) f (a + h, b + k) f (a, b) 0.
The conditions on the left are more understandable, but the form on the
right is useful later.
218
13.3.2
219
Finding points
At any such point the rate of change with respect to any variable must be
zero (otherwise the surface would be steep rather than flat approaching
from the direction of that variables axis).
Therefore, in three variables, where z = f (x, y)
z
z
= 0,
= 0.
x
y
These conditions are necessary for locating local maxima and minima
but they are not sufficient. Points that satisfy these equations are known as
critical points or stationary points. We need to classify points we find in that
way.
13.3.3
Classifying points
1
h
+k
f (a, b) +
f (a + h, b + k) f (a, b) =
2
x
y
and therefore
1
f (a + h, b + k) f (a, b) =
2
2
2
2
2
2
h
+ 2hk
+k
f (a, b) +
x2
xy
y 2
Remember that at the critical point the first partial derivatives of z with respect to x
and y are zero, which will cause the first term in the Taylor expansion with derivatives to
vanish.
13.3.4
Summary
To find and classify turning points in three variables where z = f (x, y) the
whole procedure is.
Solve fx = fy = 0, and find all (a, b);
2
If fxx fyy fxy
< 0 at (a, b) then we have a saddle point.
2
If fxx fyy fxy
>0
If fxx > 0 then (a, b) is a local minimum (and fyy will also be
positive).
If fxx < 0 then (a, b) is a local maximum (and fyy will also be
negative).
2
If fxx fyy fxy
= 0 further investigation is required.
220
13.3.5
221
Example
222
13.4
Implicit functions
223
then
dz
z dx z dy
=
+
=0
dx
x dx y dx
Thus
13.5
z
dy
= zx
dx
y
Lagrange Multipliers
0=
z
f (x, y, z(x, y)) = fy + fz
y
y
We can eliminate
only if
z
z
and
x + z
z
=0
x
y + z
z
=0
y
z
y
(a, b, c) = 0
2
Comte Joeseph Lewis Lagrange (1736-1813) was a French Mathematician who did a
224
x fz z fx = 0
y fz z fy = 0
when all functions are evaluated at (a, b, c). If we now define = fz /z ,
these conditions become
(a, b, c) = 0
fx + x = 0
fy + y = 0
fz + z = 0
The value is called the Lagrange multiplier and we find turning points
by working out the values of which satisfy the above four equations.
Another way of putting this is that we define
F = f +
and solve for the equations
Fx = Fy = Fz = = 0
13.5.1
Example
Find the closest distance from the surface z = x2 + y 2 to the point (3, 3, 4).
Solution
The distance from (x, y, z) to the point (3, 3, 4), which we shall call d satisfies the equation
d2 = f (x, y, z) = (x 3)2 + (y + 3)2 + (z 4)2
The surface specifies the constraint, given that (x, y, z) lies on the surface
z = x2 + y 2 we have that
(x, y, z) = x2 + y 2 z 2 = 0
We now consider F = f + .
F = (x 3)2 + (y + 3)+ (z 4)2 + (x2 + y 2 z)
13.6 Jacobians
225
and thus
Fx = 2(x 3) + 2x = 0
Fy = 2(y + 3) + 2y = 0
Fz = 2(z 4) = 0
= x2 + y 2 z = 0
If we rearrange each of these equations for each variable in turn, and insert
them into the fourth we obtain
9
9
+8
=0
+
2
2
(1 + )
(1 + )
2
36 ( + 8)(2 + 2 + 1) = 0
( 1)( + 7)( + 4) = 0
We examine the distance at each value for .
3
3
9
19
= 1 x = , y = , z = , d2 =
2
2
2
4
1
1
1
147
= 7 x = , y = , z = , d2 =
2
2
2
4
2
= 4 x = 1, y = 1, z = 2, d = 36
so the shortest distance was
19
2
13.6
Jacobians
The determinant
x
u
y
u
y
v
x
v
13.6.1
Differential
13.7
Parametric functions
Karl Gustav Jacob Jacobi (1804-1851) was a German mathematician who worked on
various fields such as analysis, number theory and who helped to found the area known as
elliptic functions, which were used to produce the recent proof of Fermats last Theorem.
226
227
in other words
dz
z dx z dy
=
+
dt
x dt
y dt
13.7.1
Example
dz
dt
when t = 0.
Solution
We need to evaluate fx and fy .
z
= y(2x) 4y 20 + 0 = 2xy 4y 20
x
z
= x2 4x 20y 19 + 0 = x2 80xy 19
y
We also require the ordinary derivatives of the x and y functions with
respect to t.
dx
dy
= 3t2 1,
= sin t
dt
dt
Therefore
dz
= (2xy 4y 20 )(3t2 1) + (x2 80xy 19 )( sin t)
dt
Now when t = 0, we have that
x = 03 0 = 0, y = cos 0 = 1
plugging this all in yields
dz
= (0 4(1))(0 1) + (0 0)(0) = 4 1 = 4
dt
13.8
Chain Rule
Suppose z = f (x, y) and x = x(u, v) and y = y(u, v) are functions such that
xu , xv , yu and yv all exist. Then
zu = fx xu + fy yu
228
z
f x f y
=
+
u
x u y u
and similarly
zv = fx xv + fy yv
i.e.
z
f x f y
=
+
v
x v
y v
Chapter 14
Integration in several variables
14.1
Double integrals
which we denote as
Z Z
f (x, y)dxdy
V =
R
14.1.1
Example
Calculate the volume under the graph of f (x, y) = xy + 1 over the interval
0 x 2 and 0 y 4.
230
Z
xy+1dxdy =
V =
0
14.2
x2 y
+x
2
2
Z
dy =
4
2y+2dy = y 2 + 2y 0 = 24
Change of order
It is possible, with care, to change the order in which the integration is performed (that is, whether we sum over x or y first). This is sometimes required
because the order we may try first results in a very difficult integration.
Usually it helps to sketch over the region of integration first, we shall
consider a simple situation in which R is such that any line parallel to the x
or y axis meets the boundary or R at most twice. If this is not the case then
it is possible to subdivide R and treat each section in this way.
One possibility is that we split R into two curves y = 1 (x) and y = 2 (x)
such that (x) (x) for a < x < b. We divide R into vertical strips of
14.3 Examples
231
width x, which is then divided into sections of height y. See figure 14.2 for
details.
Z x=b (Z
y=2 (x)
f (x, y)dy
dx
y=1 (x)
x=a
y=d
(Z
14.3
x=2 (y)
x=1 (y)
Examples
14.3 Examples
232
14.3.1
Example
V =
xy + 1dxdy
0
was the original integral, which, because the region of integration is so simple
(just a rectangle) we obtain
Z 2Z 4
V =
xy + 1dydx
0
Z
=
0
xy 2
+y
2
4
Z
dx =
8x + 4dx
0
= [4x2 + 4x]20 = 24
which was the same result we achived before.
14.3.2
Example
233
Solution
This volume is given by
Z
x2
cos xydydx.
0
Z
=
x sin x3 {x sin 0} dx
Z
=
x sin x3 dx.
14.4
Triple integrals
Double integrals sum a function over some region R, and if the function
translates as a height then the result is a volume that is calculated.
A volume could be obtained directly by a triple integral over some three
dimension region V (over the function 1). Actually this is usually pointless,
but sometimes we wish to sum a function over a volume and not just an area.
For example, suppose that the density of some region of space is given
by the function = (x, y, z), then the mass of the region could be found by
multiplying the density by the volume elements over the whole volume. In
other words
Z Z Z
m=
dxdydz.
V
14.4.1
Example
The density of a gas in a cubic box which extends in each axis from 0 to 5 is
given by
= x2 y + z
find the mass enclosed in the box.
234
Solution
The mass will be given by
Z Z Z
x2 y + zdxdydz
m=
V
which is
Z
=
0
x2 y + zdxdydz
14.5
Change of variable
x x
(x, y) u v
=
=J
y y
u, v
u v
is a Jacobian (see 13.6) related to the transformation.
14.5.1
235
Polar coordinates
= r(cos2 + sin2 theta) = r
14.5.2
Example
Evaluate
Z Z
(1
I=
p
x2 + y 2 dxdy
where the region of the circle has been represented by letting r run from 0
to 1 and letting run from 0 to 2, we have changed the integral in terms of
the new variable and added an r from the Jacobian for this transformation.
We now proceed
Z
=
0
r2 r3
2
3
1
Z
d =
d
= f rac3
6
14.5.3
236
cos sin 0
(x, y, z)
J=
= sin cos 0
(, , z)
0
0
1
= (cos2 + sin2 ) =
so note that in this coordination system, rho is the distance from the z axis,
not the origin as such.
14.5.4
To use three dimensional coordinates based on the distance from the origin
and not the z-axis we need to define the three coordinates r, the distance
from the origin, the angle from the projection of the point on the xy plane
from the x axis, and the angle between the z-axis and the line joing the
point to the origin.
This means the transformation equations are
x = r sin cos , y = r sin sin , z = r cos
r 0, 0 , 0 2
and thus
sin cos
(x, y, z)
= sin sin
J=
(r, , )
cos
Chapter 15
Fourier Series
Fourier1 series are a powerful tool. On one hand they simply allow a periodic
function to be expressed as an infinite series of simple functions, usually
trigonometric or exponential, but this also allows great insight into a function
be splitting into component frequencies.
15.1
Periodic functions
A periodic function is one that repeats its values at regular intervals. So that
if the repeat occurs ever T units on the x-axis.
f (x) = f (x + T ) = f (x + 2T ) + + f (x + nT )
The contant value T is known as the period of oscillation.
15.1.1
Example
Of course the classic examples of periodic functions are the graphs of sin x
and cos x.
Consider
y = A sin(t + )
1
Baron Jean Baptiste Fourier (1768-1830) was a French mathematician who narrowly
avoided the guillotine during the French Revolution. Fourier contributed greatly to the
use of differential equations to solve problems in physics.
238
1
.
f
15.1.2
Example
1
= 0.159Hz
2
(b) Everything here is a distraction apart from the constant 4.5 which
precedes the t. This is the angular frequency and so
= 2f f =
and therefore
T =
4.5
=
0.716Hz
2
2
1
1.396s.
f
15.2
239
Sets of functions
15.2.1
Orthogonal functions
15.2.2
Orthonormal functions
15.2.3
Norm of a function
The quantity
s
Z
(k (x))2 dx
||k || =
a
15.3
Fourier concepts
We now examine the most important concepts at the heart of the Fourier
Series.
15.3.1
Fourier coefficents
240
exists. Then the sequence (ck ) is called the Fourier co-efficients of f with
respect to the orthogonal system (k (x)).
15.3.2
Fourier series
ck k (x)dx
k=0
15.3.3
Convergence
15.4
Important functions
The most important sets of functions for Fourier Series are the trigonemetric
and exponential functions.
15.4.1
241
Trigonometric system
k1k =
12 dx = 2;
cos2 nxdx
Z
1
1
sin 2nx
=
(1 + cos 2nx)dx =
x+
=
2
2
2n
Z
2
2
sin2 nxdx
k sin nxk =
Z
1
1
sin 2nx
=
(1 cos 2nx)dx =
x
=
2
2
2n
Now we must show that the inner product of any function with any different function evaluates to zero. It is left as a trivial exercise to show that
the products of 1 with sin nx and 1 with cos nx produce a zero integral.
2
k cos nxk =
Z
1
cos(m n)x + cos(m + n)xdx
cos mx cos nxdx =
2
1 sin(m n)x sin(m + n)x
+
=
= 0 for (m 6= n).
2
mn
m+n
Z
Z
1
sin mx sin nxdx =
cos(m n)x cos(m + n)xdx
2
1 sin(m n)x sin(m + n)x
=
= 0 for (m 6= n).
2
mn
m+n
Z
Z
1
sin mx cos nxdx =
sin(m + n)x + sin(m n)xdx
2
1 cos(m + n)x cos(m n)x
=
+
= 0.
2
m+n
mn
15.4.2
242
Exponential system
15.5
Trigonometric expansions
a0 X
(an cos nx + bn sin nx)
+
2
n=1
15.5.1
Even functions
that
bn = 0, n > 0.
Note these are not different expansions, but what the previously stated expansions collapse to in this very special case.
2
This follows from the discussion in 7.4 before, noting the nature of each product being
integrated.
15.6 Harmonics
15.5.2
243
Odd functions
15.5.3
Other Ranges
Note that in much of the theory above, we have not assumed that all expansions run between and . In the trigonometric system it is relatively easy
to adapt the process for periodic functions that repeat from L to L. It can
be shown quite easily that
nx
nx
a0 X
an cos
+
+ bn sin
f (x) =
2
L
L
n=1
where
1
a0 =
L
15.6
f (x)dx
L
1
an =
L
1
bn =
L
f (x) cos
nx
dx for n > 0
L
f (x) sin
nx
dx for n > 0
L
L
L
Harmonics
When the Fourier series of a function f (x) is produced using the trigonometric system, it is clear that the function is made up of signals with specific
frequencies.
The functions
an cos nx + bn sin nx
could be combined together into one signal of the form
cn sin(nx + n )
using the technique shown in 4.9.3. Therefore this whole term represents the
component of the function or signal f (x) with angular frequency n.
15.6 Harmonics
244
This is called the nth harmonic. The 1st harmonic is therefore given by
a1 cos x + b1 sin x = c1 sin(x + 1 )
is known as the first harmonic or fundamental harmonic.
The term
a0
2
can be looked at as a form of static background noise that does not rely on
frequency at all.
15.6.1
Sometimes symmetry in the original function will tell us that some harmonics
will not be present in the final result.
Even Harmonics Only
If f (x) = f (x + ) then there will be only even harmonics.
Odd Harmonics Only
If f (x) = f (x + ) there there will be only odd harmonics.
We can now surmise a great deal about the terms we expect to find in
the series before expansion; there are shown in table 15.1.
f (x) = f (x) (Cosine Only) f (x) = f (x) (Sine only)
f (x) = f (x + )
15.6.2
Trigonometric system
15.7 Examples
245
1
n = tan
an
bn
15.6.3
Exponential system
The exponential system is less intuitive than the trigonometric system, but
the constants cn are often simpler to determine.
15.6.4
Percentage harmonic
With the constants cn defined as above we define the percentage of the nth
harmonic to be
cn
100.
c1
That is, the percentage that the amplitude of the nth harmonic is of the
amplitude of the fundamental harmonic.
15.7
Examples
15.7.1
Example
If f (x) = x for x such that < x and f (x + 2) = f (x) for all real x,
evaluate the Fourier series of f (x).
Solution
Since f (x) is an odd function we have that an = 0 and
Z
Z
2x cos nx
2
2
x sin nxdx =
+
cos nxdx
bn =
0
n
n 0
0
2 cos n
2(1)n+1
=
.
n
n
Thus the Fourier series of f (x) is
bn =
2(sin x
+ ).
2
3
4
15.8
246
Exponential Series
ejnx ejnx
2
sin nx =
ejnx ejnx
2j
Now consider the formula for the Fourier series of f (x) which is
a0 X
(an cos nx + bn sin nx)
f (x) =
+
2
n=1
and we shall insert these terms
bn jnx
a0 X an jnx
jnx
jnx
=
+
e +e
e e
+
2
2
2j
n=1
to simply things, we multiply top and bottom of the right most term by j
which gives us j on the top line and means we are dividing by j 2 = 1. We
absorb that into the brackets, inverting the subtraction to give
bn j jnx
a0 X an jnx
jnx
jnx
+
e +e
+
e
e
=
2
2
2
n=1
a0 X
+
=
2
n=1
an b n j
2
jnx
+
an + b n j
2
e
jnx
247
cn ejnx .
Note carefully here that n now runs from to . The relationship of the
contants cn are
1
2 (an jbn ) n > 0
1
a0
n=0
cn =
21
(a + jbn ) n < 0
2 n
Chapter 16
Laplace transforms
Laplace1 transforms are (among other things) a way of transforming a differential equation into an algebraic equation. This equation is then rearranged
and we attempt to reverse the transform. This last part is usually the hardest
unfortunately.
16.1
Definition
Suppose that f (t) is some function of t, then the Laplace transform of f (t),
denoted L{f (t)} is given by
Z
F (s) = L {f (t)} =
est f (t)dt
0
16.1.1
Example
Find L {eat }.
Solution
L eat =
at st
e e
0
dt =
e(as)t dt
These are named after Pierre Simon de Laplace (1749-1827), a brilliant French math-
16.1 Definition
249
e(as)t
=
as
0
L eat = 0
16.1.2
Example
Find L {1}.
Solution
st
L {a} =
1e
0
Thus
L {1} = 0
16.1.3
est
dt =
s
0
1
1
=
s
s
Example
L {t } =
tn est dt
du
dv
est
= ntn1 ;
v=
dx
dx
s
Thus we obtain
Z
st
est
ne
L {t } = t
ntn1 dtdt
s 0
s
0
n
16.1 Definition
250
and when zero is used as a limit the polynomial term is also zero. Thus,
tidying up the integral that remains, gives
Z
n n1 st
n
n
L {t } =
t e dt = L tn1
s 0
s
so that we obtain a reduction formula. Now, we know L {t0 } = L {1} = 1s .
Therefore,
1
11
= 2
L {t} =
ss
s
2 2 1
2
L t =
= 3
2
ss
s
and in general
n!
L {tn } = n+1
s
16.1.4
Inverse Transform
16.1.5
Elementary properties
16.1.6
Example
251
Solution
We use Eulers identity
ejt = cos t + j sin t
Now
L ejt =
1
s + j
s
= 2
+j 2
2
s j s + j
s +
s + 2
So
L ejt = L {cos t + j sin t}
and equating real and imaginary components we obtain
L {cos t} =
and
L {sin t} =
16.2
s2
s
+ 2
s2 + 2
Important Transforms
16.2.1
Let f (t) be a function of t with Laplace transform F (s), which exists for
s > b, then if a is a real number
L eat f (t) = F (s a)
for s > a + b.
Proof
Clearly, if
Z
0
252
f (t)
L {f (t)}
Condition
eat
1
sa
s>a
k
s
s>0
tn
sin at
cos at
sinh at
cosh at
n!
sn+1
s>0
a
+ a2
s>0
s
s 2 + a2
s>0
a
a2
s > |a|
s
s 2 a2
s > |a|
s2
s2
253
f (t)
L {f (t)}
Condition
eat tn
n!
(s a)n+1
s>a
eat sin bt
b
(s a)2 + b2
s>a
eat cos bt
sa
(s a)2 + b2
s>a
eat sinh at
b
(s a)2 b2
s > a + |b|
eat cosh at
sa
(s a)2 b2
s > a + |b|
F (s a) =
e(sa)t f (t)dt
We note that
at
L e f (t) =
at
e f (t)e
0
st
dt =
e(sa)t f (t)dt
16.2.2
The first shifting property of Laplace transforms gives rise to the following
other transforms, shown in table 16.2.
16.3
Transforming derivatives
16.3.1
First derivative
16.3.2
Second derivative
16.3.3
Higher derivatives
16.4
Transforming integrals
1
f (t)dt = F (s).
s
254
255
Proof
Let
f (t)dt = g(t)
0
or in other words
d
g(t)
dt
which under Laplace transformation yields
f (t) =
L {f (t)} = sg g(0)
Note that
Z
g(0) =
f (x)dx = 0
0
16.5
Differential Equations
16.5.1
Example
256
L {N } = N0
Now the expression
1
s+
1
s+
L {N } = N0 L et L {N } = L N0 et
Therefore, removing the transform on both sides we have
N = N0 et
16.5.2
Example
dx
dt
dx
d2 x
+ 5 3x = t 4
2
dt
dt
= 2 when t = 0.
257
Solution
First of all we transform the equation on both sides
2(s2 x sx(0) x0 (0)) + 5(sx x(0)) 3x =
1
4
.
2
s
s
Now we insert the initial conditions now, which are x(0) = 0 and x0 (0) = 2
to obtain
1
4
2s2 x 4 + 5sx 3x = 2 .
s
s
Next, we rearrange to make the Laplace transform of x(t), denoted by x
for short, to be the subject of the equation.
(2s2 + 5s 3)x =
x=
1
4
1 4s + 4s2
+
4
=
s2 s
s2
4s2 4s + 1
.
s2 (2s2 + 5s 3)
Finally, we have the most difficult part, we have already transformed the
whole differential equation into an algebraic equation and solved it, now we
have to invert the transform.
Factorize first, to simplify as much as possible.
x=
(2s 1)2
2s 1
= 2
2
s (2s 1)(s + 3)
s (s + 3)
s2 (s
7
9
1
3
258
71 1 1
7 1
2
9s 3s
9s+3
and therefore
x=
16.5.3
7 1
7
t e3t .
9 3
9
Example
dy
d2 y
d3 y
= 2 = 3 =0
dt
dt
dt
when t = 0.
Solution
Transforming the equation yields
s4 y s3 y(0) s2 y 0 (0) sy 00 (0) y 000 (0) 81y = 0
and inserting initial conditions simplifies this to
s4 y s3 81y = 0
which rearranges to
(s4 81)y = s3
y=
s3
s3
=
.
s4 81
(s2 9)(s2 + 9)
s3
A
B
Cs + D
=
+
+ 2
2
(s 3)(s + 3)(s + 9)
s3 s+3
s +9
259
1 1
1 1
1 s
+
+ 2
4s3 4s+3 2s +9
and we can easily now employ the inverse transform to obtain
y=
1
1
1
y = e3t + e3t + cos 9t
4
4
2
Alternative Solution
Suppose that we missed the second factorization of the bottom line, then we
would have proceeded as follows.
s3
As + B Cs + D
= 2
+ 2
2
2
(s 9)(s + 9)
s 9
s +9
and multiply by denominator of the L.H.S.
s3 = (As + B)(s2 + 9) + (Cs + D)(s2 9),
s = 3 27 = 54A + 18B 3 = 6A + 2B
s = 3 27 = 54A + 18B 3 = 6A + 2B
which solved together yield A =
1
2
and B = 0.
260
16.5.4
Example
s2
s
+ 2
This is the equation for Simple Harmonic Motion and is a very important differential
equation. It is not necessary to use Laplace transforms to solve it and this method is used
as an example.
16.5.5
261
Exercise
16.5.6
sin t
v
Example
A constant EMF E is applied at t = 0 to a series circuit of resistance R and inductance L, with i(0) = 0. Kirchhoff's voltage law gives Ri + L di/dt = E, and transforming both sides yields

Rī + L(sī − i(0)) = E/s.

Inserting the initial condition gives

(R + Ls)ī = E/s

so that

ī = E/(s(Ls + R)) = A/s + B/(Ls + R).

Therefore

E = A(Ls + R) + Bs

which is true for all values of s, so in particular

s = 0      ⟹  E = AR      ⟹  A = E/R
s = −R/L   ⟹  E = −BR/L   ⟹  B = −EL/R.

Hence

ī = (E/R)(1/s) − (EL/R)·1/(Ls + R) = (E/R)(1/s) − (E/R)·1/(s + R/L)

and inverting gives

i = E/R − (E/R)e^{−Rt/L} = (E/R)(1 − exp(−Rt/L)).
16.5.7 Example
Solution
From Kirchhoff's laws, for a series circuit of resistance R, inductance L and capacitance C driven by a constant EMF of 10 V from t = 0, we have

Ri + L di/dt + (1/C) ∫₀ᵗ i dt = 10

which when transformed (with i(0) = 0) gives

Rī + L(sī − i(0)) + (1/(Cs)) ī = 10/s.

With L = 1 H, R = 250 Ω and 1/C = 10⁴ this becomes

ī (s + 250 + 10⁴/s) = 10/s

so that

ī = 10/(s² + 250s + 10⁴) = 10/((s + 50)(s + 200))

and splitting into partial fractions and inverting gives

i = (1/15)(e^{−50t} − e^{−200t}).
16.6
Other theorems
Here are some more theorems about Laplace transforms for which no proof
is given. In each case we assume that f (t) is a function and that
F (s) = L {f (t)}
16.6.1 Change of Scale
If a > 0 then

L{f(at)} = (1/a) F(s/a).

16.6.2 Derivative of the transform

L{t f(t)} = − d/ds {F(s)}

or more generally

L{tⁿ f(t)} = (−1)ⁿ dⁿ/dsⁿ {F(s)}.
16.6.3 Convolution Theorem

L⁻¹{F(s)G(s)} = ∫₀ᵗ f(r) g(t − r) dr.
16.6.4
Example
Given that

L{cos t} = s/(s² + 1)

find

L{cos 3t}.

Solution
We can already work this out straight away from our table 16.1, but we do this as an exercise.
From 16.6.1 we see that

L{cos 3t} = (1/3) · (s/3)/((s/3)² + 1) = (1/9) · s/(s²/9 + 1) = (1/9) · 9s/(s² + 9) = s/(s² + 9).

16.6.5
Example
Given that

L{e^{4t}} = 1/(s − 4)

find

L{t e^{4t}}.

Solution
Again, we can work this out directly from our table 16.2, but this is an exercise.
We can see from 16.6.2 that we can write

L{t e^{4t}} = − d/ds [1/(s − 4)] = −(−1)(s − 4)^{−2} = 1/(s − 4)².
16.6.6
Example
Find

L⁻¹{1/((s − 2)(s − 3))}.

Solution
Once more, this problem would normally be tackled using partial fractions, but we use it as a very simple application of the convolution theorem (see 16.6.3).
Let

F(s) = 1/(s − 2), f(t) = e^{2t};   G(s) = 1/(s − 3), g(t) = e^{3t}.

Then

L⁻¹{1/((s − 2)(s − 3))} = ∫₀ᵗ e^{2r} e^{3(t−r)} dr = e^{3t} ∫₀ᵗ e^{−r} dr = e^{3t}(1 − e^{−t}) = e^{3t} − e^{2t}.
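A quick numerical sanity check of this convolution (my own illustrative snippet, assuming numpy and scipy are available): the integral of e^{2r} e^{3(t−r)} from 0 to t should agree with e^{3t} − e^{2t}.

import numpy as np
from scipy.integrate import quad

def convolution(t):
    # integral of f(r) g(t - r) dr from 0 to t, with f = e^{2r}, g = e^{3r}
    value, _ = quad(lambda r: np.exp(2*r) * np.exp(3*(t - r)), 0.0, t)
    return value

for t in [0.5, 1.0, 2.0]:
    print(t, convolution(t), np.exp(3*t) - np.exp(2*t))   # the two columns agree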
16.7 The unit step function
The unit step function is defined by

u(t) = 0  for t < 0
       1  for t ≥ 0

so that for any function f(t)

f(t) u(t − c) = 0     for t < c
                f(t)  for t ≥ c
and we have successfully switched off the function f (t) for times before
t = c. Frequently we shall want much more fine control than this however,
requiring that we can again switch off the function f (t) after some time
interval. This can easily be done by combining variations of u(t − c).
Figure 16.5: Building functions that are on and off when we please.
For example, consider figure 16.5 on page 267. Here we have shown the graphs of u(t − 2) and u(t − 4) and clearly, by subtracting the second graph from the first, we obtain the third graph, which shows that we can switch on between t = 2 and t = 4 or any other values we please.
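The same construction is easy to reproduce numerically; the short sketch below (my own, assuming numpy is available) builds the gate u(t − 2) − u(t − 4).

import numpy as np

t = np.linspace(0, 6, 13)
u = lambda x: np.heaviside(x, 1.0)   # 1 for x >= 0, 0 otherwise

gate = u(t - 2) - u(t - 4)           # switched on between t = 2 and t = 4
print(np.column_stack((t, gate)))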
16.7.1 Transform of the unit step function

L{u(t − c)} = e^{−cs}/s

Proof
By definition

L{u(t − c)} = ∫₀^∞ e^{−st} u(t − c) dt

and

e^{−st} u(t − c) = 0        for t < c
                   e^{−st}  for t ≥ c

and thus

L{u(t − c)} = ∫_c^∞ e^{−st} dt = e^{−cs}/s.

In particular,

L{u(t)} = 1/s.

16.7.2
Example
Find the function f (t), described by step functions, and the transform F (s)
for the waveform shown in figure 16.6.
Solution
From the graph we see that

f(t) = 4{u(t) − u(t − 2)} + 2{u(t − 4) − u(t − 6)} + {u(t − 7) − u(t − 8)}

which when expanded yields

f(t) = 4u(t) − 4u(t − 2) + 2u(t − 4) − 2u(t − 6) + u(t − 7) − u(t − 8).

Applying the transform to each term gives

F(s) = 4/s − 4e^{−2s}/s + 2e^{−4s}/s − 2e^{−6s}/s + e^{−7s}/s − e^{−8s}/s
     = (1/s)(4 − 4e^{−2s} + 2e^{−4s} − 2e^{−6s} + e^{−7s} − e^{−8s}).
16.7.3
Example
Find the function f (t), described by step functions, and the transform F (s)
for the waveform shown in figure 16.7.
Solution
From the graph we see that
f(t) = 2{u(t − 1) − u(t − 3)} + 1{u(t − 3) − u(t − 5)}

which when expanded yields

f(t) = 2u(t − 1) − 2u(t − 3) + u(t − 3) − u(t − 5) = 2u(t − 1) − u(t − 3) − u(t − 5).

Now we apply the transform and obtain

L{f(t)} = F(s) = 2e^{−s}/s − e^{−3s}/s − e^{−5s}/s = (1/s)(2e^{−s} − e^{−3s} − e^{−5s}).
16.7.4 Delayed functions
If L{f(t)} = F(s) then

L{f(t − a) u(t − a)} = e^{−as} F(s).

Proof
Clearly

L{f(t − a) u(t − a)} = ∫₀^∞ e^{−st} f(t − a) u(t − a) dt

but note that no area can occur before t = a (due to the switching off of the function with u(t − a)). Therefore this integral becomes

= ∫_a^∞ e^{−st} f(t − a) dt.

Note carefully the change of limits, and observe that s is a constant with respect to this integral. Substituting T = t − a gives

= e^{−as} ∫₀^∞ e^{−sT} f(T) dT = e^{−as} F(s).
16.7.5
Example
Find the function f (t), described by step functions, and the transform F (s)
for the waveform shown in figure 16.8.
Solution
Some simple examination shows that this function could be defined as

f(t) = t        when 0 ≤ t < 2
       2        when 2 ≤ t < 5
       12 − 2t  when 5 ≤ t < 6

and which is zero at all other times. Converting this into step function form, we obtain

f(t) = t{u(t) − u(t − 2)} + 2{u(t − 2) − u(t − 5)} + (12 − 2t){u(t − 5) − u(t − 6)}

f(t) = tu(t) − tu(t − 2) + 2u(t − 2) − 2u(t − 5) + 12u(t − 5) − 2tu(t − 5) + 2(t − 6)u(t − 6)

where the last section has been expanded to make it a delayed function. If we proceed to do this in the other sections where possible we obtain

f(t) = tu(t) − (t − 2)u(t − 2) − 2(t − 5)u(t − 5) + 2(t − 6)u(t − 6).

So now we appeal to the result for delayed functions, noting that the function being delayed is f(t) = t in each case, to obtain

F(s) = 1/s² − e^{−2s}/s² − 2e^{−5s}/s² + 2e^{−6s}/s²
     = (1/s²)(1 − e^{−2s} − 2e^{−5s} + 2e^{−6s}).
Another way in this case would be to expand with no forethought, whereupon we would have a list of step functions, and step functions multiplied by t. We could then use the derivative of the transform (see 16.6.2) to crack the problem.

16.8 The Dirac delta function
The delta function may be defined by

δ(t) = 0    for t < 0
       1/ε  for 0 ≤ t < ε
       0    for t ≥ ε

where ε > 0 and

∫ δ(t) dt = 1.

Its transform is

L{δ(t)} = (1/ε) L{u(t) − u(t − ε)} = (1/(εs)) (1 − e^{−sε}).

Recall that

e^{−x} = 1 − x + x²/2! − x³/3! + ⋯

so that

L{δ(t)} = (1/(εs)) (1 − 1 + sε − s²ε²/2! + s³ε³/3! − ⋯) = 1 − sε/2! + s²ε²/3! − ⋯

and now, allowing ε → 0, we obtain

L{δ(t)} = 1.
16.8.1 Delayed impulse
Combining this with the result for delayed functions (16.7.4) gives

L{δ(t − c)} = e^{−cs}.

16.8.2
Example
Find the Laplace transform of the wave train shown in figure 16.9.
Solution
We could describe the train as a function as follows
f(t) = 3δ(t − 1) − 2δ(t − 3) + δ(t − 4) − 3δ(t − 5) + 2δ(t − 6) − δ(t − 8)

and thus the transform will be

L{f(t)} = F(s) = 3e^{−s} − 2e^{−3s} + e^{−4s} − 3e^{−5s} + 2e^{−6s} − e^{−8s}.
16.9
Transfer Functions
Consider once again the circuit in figure 16.2, with a potentially varying EMF
e. If we take Laplace transforms on both sides and adopt the shorthands
ī = L{i(t)},  ē = L{e(t)}

then we obtain

Lsī + Rī + (1/(Cs)) ī = ē.

Thus

ī (LCs² + RCs + 1)/(Cs) = ē

ī/ē = Cs/(LCs² + RCs + 1).

This is called the transfer function for the circuit. In general the transfer function of a system is given by

F_T(s) = L{output}/L{input}.
The transfer function is determined by the system, and once found is unchanged by varying inputs and their corresponding outputs. The analysis of
the transfer function can reveal information about the stability of the system.
The system can be considered stable if the output remains bounded for all values of t, even as t → ∞. So terms in the output of the form e^t, t and t² cos 3t are all unbounded. On the other hand, terms of the form e^{−t} and e^{−2t} cos t show stability.
We can determine the presence of such terms by analysing the poles⁵ of the transfer function. This really comes down to examining the denominator and finding what values of s cause it to become zero.
We can then analyse the stability of the system by plotting the poles on
an Argand diagram, and using the following simple rules.
If all the poles occur to the left of the imaginary axis then the system
is stable;
If any pole occurs to the right of the imaginary axis then the system is
unstable;
If a pole occurs on the imaginary axis the system is marginally stable if the pole is of order ≤ 1. The system is unstable if the pole is of higher order.
So for example, consider the factors in the denominator of the transfer
function shown in table 16.3 and what they indicate. In each case we can see
an expression that would arise from the inverse and see the stability of the
end system.
⁵A pole of a function f(z) is a value which, when inserted into z, causes an infinite value of the function.
Table 16.3: Denominator factors, the corresponding terms in the inverse, and the stability they indicate.

Factor     Inverse     Pole    Order  Stable?
(s − 3)    e^{3t}      3       1      no
(s + 2)    e^{−2t}     −2      1      yes
(s + 2)²   t e^{−2t}   −2      2      yes
(s² + 4)   sin 2t      ±2j     1      yes (marginally)
s²         t           0       2      no
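The pole-based rules above are easy to automate. The helper below is my own sketch (assuming numpy is available), and the classify name and test polynomials are illustrative choices, not part of the notes.

import numpy as np

def classify(denominator_coeffs):
    poles = np.roots(denominator_coeffs)
    if np.any(poles.real > 0):
        return poles, "unstable"
    # poles on the imaginary axis: unstable if repeated, else marginally stable
    on_axis = poles[np.isclose(poles.real, 0.0)]
    if len(on_axis) != len(np.unique(np.round(on_axis, 8))):
        return poles, "unstable (repeated pole on the imaginary axis)"
    if len(on_axis) > 0:
        return poles, "marginally stable"
    return poles, "stable"

print(classify([1, 4, 4]))   # (s + 2)^2: poles at -2 (twice), stable
print(classify([1, 0, 4]))   # s^2 + 4: poles at +/- 2j, marginally stable
print(classify([1, 0, 0]))   # s^2: double pole at 0, unstable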
16.9.1
Impulse Response
If the input to the system is the impulse function (t), and the response is
the function h(t), then the transfer function is given by
F_T(s) = L{h(t)}/L{δ(t)} = L{h(t)}

since L{δ(t)} = 1. So one way to determine the transfer function for a system is simply to take the Laplace transform of the impulse response for that system.
Supposing that now we change the input function to f(t) and that the corresponding output function is g(t), we obtain

F_T(s) = L{g(t)}/L{f(t)}

so that

L{h(t)} = L{g(t)}/L{f(t)},  i.e.  L{g(t)} = L{f(t)} L{h(t)}.

So we can obtain the transform of the general output, provided we know the Laplace transform of the input and the impulse response for the system.
To obtain the output from the system, g(t), we now take the inverse transform on both sides:

g(t) = L⁻¹{L{f(t)} L{h(t)}}.

From the convolution theorem (see 16.6.3), we see that

g(t) = f ∗ h.
16.9.2 Initial value theorem

lim_{t→0} f(t) = lim_{s→∞} sF(s)

Proof
We know that

L{dx/dt} = ∫₀^∞ e^{−st} f′(t) dt = sF(s) − f(0).

As s → ∞ the integral will tend to zero, because the term e^{−st} tends rapidly to zero. Therefore

0 = lim_{s→∞} sF(s) − lim_{t→0} f(t).

16.9.3 Final value theorem

lim_{t→∞} f(t) = lim_{s→0} sF(s)

Proof
Once again, we begin with the observation

L{dx/dt} = ∫₀^∞ e^{−st} f′(t) dt = sF(s) − f(0).

This time we allow s → 0. On the left hand side (the integral) we obtain

∫₀^∞ f′(t) dt = lim_{t→∞} ∫₀ᵗ f′(t) dt = lim_{t→∞} f(t) − f(0)

while the right hand side becomes lim_{s→0} sF(s) − f(0), and the result follows.
Chapter 17
Z-transform
17.1
Concept
Consider a signal of a continuous function f (t) and how it appears when the
signal is examined at discrete time intervals, say when
t = kT, k = 0, 1, 2, . . .
where T is some fixed time period.
The sampled signal can be written as a train of weighted impulses

f_D(t) = Σ_{k=0}^{∞} f(kT) δ(t − kT)

where f(kT) is the value of the signal at the sampling instants and δ(t − kT) is the delayed Dirac delta. Note that k is a dummy variable in this summation and won't appear in the final expansion, although T will.
If we consider the Laplace transform of this sum, we obtain

F(s) = L{f_D(t)} = f(0) + f(T)e^{−Ts} + f(2T)e^{−2Ts} + ⋯

F(s) = Σ_{k=0}^{∞} f(kT) e^{−kTs}

and writing z = e^{sT} this becomes

Σ_{k=0}^{∞} f(kT) z^{−k}.

This defines the Z-transform of f(t),

F(z) = Z{f(t)} = Σ_{k=0}^{∞} f(kT) z^{−k}.
17.2
Important Z-transforms
17.2.1 Unit step function

Z{u(t)} = z/(z − 1)

Proof
From the definition of the Z-transform,

U(z) = Z{u(t)} = Σ_{k=0}^{∞} u(kT) z^{−k}.

Now u(kT) means the value of the step function at each of the specified sampling periods, but this is always simply one.

U(z) = Σ_{k=0}^{∞} 1·z^{−k} = 1 + z^{−1} + z^{−2} + z^{−3} + ⋯

which is a geometric progression. Using the formula for the sum to infinity we obtain that (if |z| > 1)

U(z) = 1/(1 − z^{−1}) = z/(z − 1).
17.2.2
Linear function
Z{t} = Tz/(z − 1)²

Proof
From the definition

F(z) = Z{t} = Σ_{k=0}^{∞} f(kT) z^{−k} = Σ_{k=0}^{∞} kT z^{−k} = T(z^{−1} + 2z^{−2} + 3z^{−3} + ⋯)

so that

F(z) = T z^{−1}/(1 − z^{−1})²

which upon multiplying through by z² top and bottom produces the desired result.
17.2.3
Exponential function

Z{e^{at}} = z/(z − e^{aT})

Proof
From the definition

F(z) = Z{e^{at}} = Σ_{k=0}^{∞} f(kT) z^{−k} = Σ_{k=0}^{∞} e^{akT} z^{−k}

= 1 + e^{aT}z^{−1} + e^{2aT}z^{−2} + e^{3aT}z^{−3} + ⋯

= 1 + (e^{aT}z^{−1}) + (e^{aT}z^{−1})² + (e^{aT}z^{−1})³ + ⋯

which is a geometric progression, so that (for |z| > e^{aT})

F(z) = 1/(1 − e^{aT}z^{−1}) = z/(z − e^{aT}).
17.2.4
Elementary properties
Just as for the Laplace transform, the Z-transform obeys the following basic
rules. If f (t) and g(t) are functions of t and c is a constant.
Z{f(t) ± g(t)} = Z{f(t)} ± Z{g(t)}
and
Z {cf (t)} = cZ {f (t)}
17.2.5 Shifting
In Laplace transforms the most important property was that of the transform of the derivative, which allowed differential equations to be solved easily. For the Z-transform the corresponding property is the shift of a function by a whole number of sampling periods.
If f(t) is such that F(z) = Z{f(t)}, then

Z{f(t − nT)} = z^{−n} F(z).

Proof
From the definition

Z{f(t − nT)} = Σ_{k=0}^{∞} f(kT − nT) z^{−k} = Σ_{k=0}^{∞} f((k − n)T) z^{−k}

= f(−nT) + f((1 − n)T)z^{−1} + f((2 − n)T)z^{−2} + ⋯

and since f(t) is taken to be zero before t = 0, the first n terms vanish, leaving

z^{−n}(f(0) + f(T)z^{−1} + f(2T)z^{−2} + ⋯) = z^{−n} F(z).
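The closed forms above are easy to check numerically; the short sketch below (my own, with arbitrarily chosen sample values a, T and z) sums the truncated series for Z{e^{at}} and compares it with z/(z − e^{aT}).

import math

a, T, z = 0.5, 0.1, 2.0          # assumed values, with |z| > e^{aT}
series = sum(math.exp(a*k*T) * z**(-k) for k in range(200))
closed_form = z / (z - math.exp(a*T))
print(series, closed_form)       # both approximately 2.108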
Chapter 18
Statistics
We are frequently required to describe the properties of large numbers of
objects in a simplified way. For example, political opinion polls seek to
distill the complex variety of political opinions in a region or country into
just a few figures. Similarly, when we talk about the mean time to failure
of a component, we use a single number to reflect the behaviour of a large
population of components.
We shall review some definitions.
18.1
Sigma Notation
Throughout statistics and mathematics in general, we often use sigma notation as a shorthand for a sum of objects. A capital sigma is used, and the
range of the summation is given above and below, unless this is obvious.
Σ_{i=a}^{b}

denotes a sum over i running from a to b.

18.1.1 Example

Σ_{i=1}^{n} i = 1 + 2 + 3 + ⋯ + n

Σ_{i=1}^{n} 1/i² = 1/1 + 1/4 + ⋯ + 1/n²

Σ_{i=1}^{∞} 1/2ⁱ = 1/2 + 1/4 + ⋯ = 1

Σ_{i=1}^{n} xᵢ = x₁ + x₂ + ⋯ + x_n
is a common abbreviation for adding up n items of data labelled appropriately. There are two specific summations which we examine carefully.
If k is a constant, and f(i) is a function of i, we can show easily:

Σ_{i=1}^{n} k = nk.

This is because

Σ_{i=1}^{n} k = k + k + k + ⋯ + k (n of these) = nk.

Also

Σ_{i=1}^{n} k f(i) = k Σ_{i=1}^{n} f(i).

This is because

Σ_{i=1}^{n} k f(i) = k f(1) + k f(2) + ⋯ + k f(n) = k (f(1) + f(2) + ⋯ + f(n)).
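Sigma notation corresponds directly to a sum over a range in code; the snippet below is my own illustration (values of n, k and f chosen arbitrarily).

n = 10
print(sum(i for i in range(1, n + 1)))          # sum of i from 1 to n = 55

k = 3.0
f = lambda i: i**2
print(sum(k * f(i) for i in range(1, n + 1)))   # k times the sum of f(i)
print(k * sum(f(i) for i in range(1, n + 1)))   # same value: 1155.0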
18.2
18.2.1
Sampling
18.3
18.4
Frequency
In many situations, a specific value will occur many times within a sample.
The frequency of a value is the number of times that value occurs within the
sample.
The relative frequency of a value is determined by dividing the frequency
for that value by the sum of all frequencies (which is the same as the number
of all values). Thus, relative frequency is always scaled between 0 and 1
inclusive and is closely related to the concept of probability.
18.5
Measures of Location
Measures of location attempt to generalise all the values with a single, central
value. These are often called averages, and the quantity that is normally called the average in everyday speech is just one example of this class of measures. There are three main averages.
18.5.1
Arithmetic Mean
x̄ = (1/n) Σ_{i=1}^{n} xᵢ

It is customary to use the symbol x̄ to denote the mean of a sample, and the symbol µ to denote the mean of the population.
Clearly the mean provides a rough measure of the centre point of the data; note that it may be somewhat unreliable for many uses if the data is very skewed (see 18.9).
There are two other commonly used averages.
18.5.2
Mode
The mode of a sample is the value that occurs the most within the sample,
that is, the value with the highest frequency (see 18.4).
When two values are tied with the highest frequency the sample is called
bimodal and both values are modes.
If more than two values are tied, we call the sample multimodal.
18.5.3
Median
The median of a sample is the middle value when the values of the sample
are ordered in ascending or descending order. Note that when there are an
even number of values, we usually use the mean of the middle two elements.
18.5.4
Example
State the mean, mode and median of the sample formed of the numbers 1, 2, 3, 4, 5, 6, 7, 8, 9.
Solution
The sample mean will be given by

x̄ = (1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9)/9 = 5.

The mode is the element with the highest frequency, but all the numbers 1 to 9 have the same frequency (1). Consequently there is no unique mode, or one might even suggest that every number is the mode.
The median is the middle number when the sample is arranged in order
(and it already is), so here the median is 5.
This kind of a distribution is called the Uniform Distribution because (at
least within a specific range) every item has an equal chance of being picked.
18.5.5
Example
State the mean, mode and median of the sample formed of the numbers 5,
5, 5, 5, 5, 5, 5, 5, 5.
Solution
The sample mean will be given by
x̄ = (5 + 5 + 5 + 5 + 5 + 5 + 5 + 5 + 5)/9 = 5.
This time the mode is clear cut: there is only one distinct value, and it has a frequency of 9, so the mode is 5.
The median is 5 once again.
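The three averages in the two examples above can be checked with the Python standard library; this is my own illustrative snippet (multimode requires Python 3.8 or later).

import statistics

sample = [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(statistics.mean(sample), statistics.median(sample))   # 5 and 5
# every value is tied on frequency 1, so multimode returns all nine values
print(statistics.multimode(sample))

sample2 = [5] * 9
print(statistics.mean(sample2), statistics.median(sample2),
      statistics.mode(sample2))                             # 5, 5 and 5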
18.6
Measures of Dispersion
The previous examples demonstrate that averages on their own do not tell
us enough about the data, we also want to know how spread out or dispersed
the data is.
18.6.1
Range
The simplest form of this measure is the range, which is simply the smallest
value subtracted from the largest value.
This measure is often unreliable, as the largest and smallest values can
often be freakish and error-prone (called outliers).
Note the whole measure depends on two values only.
18.6.2 Standard deviation
The standard deviation measures how spread out the data is about the mean. It is the square root of the mean of the squared deviations from the mean:

σ = √( (1/n) Σ_{i=1}^{n} (xᵢ − x̄)² ).
18.6.3
Inter-quartile range
18.7
Frequency Distributions
18.7.1
Class intervals
We often consider, either for simplicity or some other reason, that data falls
into certain ranges of values, known as class intervals.
For example, suppose we table the ages of all people admitted to casualty
in a specified time. Rather than consider every age value, we might consider
ages tabulated as shown in table 18.1. There is no need to insist on equal
widths of interval, we can change them as required.
The fundamental trick to dealing with this situation is to imagine that all
the items in each interval are concentrated on the central value. For example,
we consider the 7 items in the first interval all to be concentrated at 4.5 years,
and so on. The calculation can then go on unimpeded.
Table 18.1: Ages of people admitted to casualty, grouped into the class intervals 0-9, 10-19, 20-29, 30-39, 40-49, 50-59, 60-69, 70-79 and 80+, together with the frequency for each interval.
18.8
Cumulative frequency
18.8.1 Calculating the median
Once we have plotted our cumulative frequency graph, we find the value that is half of the largest cumulative frequency. If we draw a horizontal line from
the cumulative frequency to the graph and then vertically down, the figure
we land on is the median.
This is very useful for calculating (or estimating) the mean in data arranged by frequency tables.
18.8.2
Calculating quartiles
Finding the quartiles is very similar. To find the lower quartile take one
quarter of the highest cumulative frequency and draw a horizontal line to
the graph and vertically down to the axis we obtain the quartile. We start
with three quarters of the highest cumulative frequency to find the upper
quartile.
18.8.3 Calculating percentiles
We are not restricted to the 50%, or 25% and 75%, marks in this procedure; we can equally find the 10%, 5% or any other figure we like by working out this percentage of the total cumulative frequency and tracing to the right and down.
18.9
Skew
18.10
Correlation
When we have two sets of interlinked numbers of equal size, we are often
interested in whether the numbers show a correlation. That is to say, whether
or not there is a link between the two sets.
Suppose that we have two sets of numbers
x1 , x2 , x3 , . . . , xn ; y1 , y2 , y3 , . . . , yn
then clearly we can plot the values

(x₁, y₁), (x₂, y₂), (x₃, y₃), . . . , (x_n, y_n)

as points on a graph.
18.10.1
Linear regression
18.10.2
Correlation coefficient
The Pearson product moment correlation coefficient or simply linear correlation coefficient r is given by
r = [n Σxy − (Σx)(Σy)] / [√(n Σx² − (Σx)²) · √(n Σy² − (Σy)²)]

and is a measure of how well the scattered points fit the straight line above.
Interpretation
The value of r is bounded by -1 and 1. That is
−1 ≤ r ≤ 1.

A value of 1 or −1 signifies a perfect line up, with 1 representing a positive gradient (larger x produces larger y), and with −1 representing a negative gradient (larger x produces smaller y).
So values close to 1 or −1 suggest a correlation, while values close to 0 in the middle suggest no correlation.
Scaling
The value of r is unaffected by scaling - that is the units used do not change
it, nor does swapping the x and y values.
Warnings
Remember, this is designed for linear relationships. If you suspect a curved
relationship you should transform to a straight line first.
Note also that correlation does not indicate causality.
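The sketch below (my own, with illustrative data) computes r both from the formula above and with numpy's built-in routine as a cross-check.

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
n = len(x)

numerator = n * np.sum(x*y) - np.sum(x) * np.sum(y)
denominator = (np.sqrt(n*np.sum(x**2) - np.sum(x)**2) *
               np.sqrt(n*np.sum(y**2) - np.sum(y)**2))
print(numerator / denominator)      # close to +1: strong positive correlation
print(np.corrcoef(x, y)[0, 1])      # the same value from numpy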
Chapter 19
Probability
Probability is concerned with the likelihood of certain events. In many ways, then, probability can be thought of as an attempt to predict the likely outcomes of experiments, while statistics provides the means of analysing data after an event has occurred.
19.1
Events
19.1.1
Probability of an Event
19.1.2
Exhaustive lists
19.2
Multiple Events
19.2.1
Notation
19.2.2
Before we introduce methods for obtaining the probabilities of these combinations, we need to look at two important concepts.
Mutually exclusive events
Two events A and B are said to be mutually exclusive if and only if there is
no way that both A and B can occur.
Independent events
Two events A and B are said to be independent if and only if the occurrence of A does not affect the probability of B, and the occurrence of B does not affect the probability of A.
More than two events
We can easily extend these ideas to more than two events. For example, we
say that events A, B and C are mutually exclusive if and only if
A and B are mutually exclusive;
B and C are mutually exclusive;
A and C are mutually exclusive.
We perform a similar extension to the notion of independence, and if we
are dealing with more than three events.
19.3
Probability Laws
19.3.1 A or B
If A and B are mutually exclusive events then

P(A ∪ B) = P(A) + P(B).

19.3.2 not A

P(not A) = 1 − P(A).

19.3.3 1 event of N
If an experiment has N equally likely outcomes, the probability of any particular one of them occurring is 1/N.

19.3.4 n events of N
If an event corresponds to n of the N equally likely outcomes, its probability is n/N.
19.3.5
Examples
Given a fair coin, the probability of getting a head on a single toss of the coin is 1/2.
Given a fair die, the probability of getting a 6 on a single roll of the die is 1/6.
We now know how to find the probability of A or B in certain circumstances, so we turn to the problem of A and B.
19.3.6 A and B
If A and B are independent events, then

P(A ∩ B) = P(A) P(B).
19.3.7
Example
19.3.8
A or B or C or ...
19.3.9
19.3.10
Example
A fair die is rolled once; show that the probability of rolling a number less than 6 is 5/6.
Solution
There are at least three ways of tackling this problem.
Let A₁ be the event of rolling a 1, down to A₆ being the event of rolling a 6.
1. We could simply observe that there are only six equally likely outcomes, and our event pertains to five of them. That, with a simple application of 19.3.4, gives us a probability of 5/6.
2. Clearly A₁, A₂, . . . , A₅ are mutually exclusive (it is impossible to roll two numbers at the same time on one roll). Therefore (by 19.3.1) we can simply add the probabilities, and as each probability is 1/6 we arrive at an answer of 5/6.
3. We could observe that the question is equivalent to the probability of not rolling a 6. We know the probability of rolling a 6 is simply 1/6, so we subtract this from 1 to find 5/6 (see 19.3.2).
These results are useful, but we will frequently encounter situations where
two events are not mutually exclusive, or not independent.
19.3.11 A or B revisited
Given any two events A and B,

P(A ∪ B) = P(A) + P(B) − P(A ∩ B).

Compare this result to 19.3.1.
19.3.12
Example
A certain device consists of two components A and B, which are wired in parallel. Each component has enough capacity individually to work the device.
Each component is unaffected by the state of the other.
The probability of A being functional is 0.9, and the probability of B being
functional is 0.8. What is the probability of the device being functional?
Solution
We shall label A as the event that component A is functioning, and similarly
label B as the event that component B is functioning.
Unlike the example above, this device will work if A or B is functional.
We note that A and B are certainly not mutually exclusive; it is quite possible
for both components to be working at the same time. Therefore we have to
use the general form of the probability law (19.3.11), and not the basic form
(19.3.1).
P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = 0.9 + 0.8 − 0.72 = 0.98.

Note that A and B were independent, so we were able to simply use the P(A ∩ B) = P(A)P(B) relation (see 19.3.6) for that part of the calculation.
Note also that the reliability of the parallel system is much greater than
that of the individual components. Compare this to the previous example of
the series system.
So, we no longer require events to be mutually exclusive to find the probability of one of them occurring. So what about the notion of independence?
19.3.13
A and B revisited
Given any two events A and B, the probability that A and B both occur is
given by
P(A ∩ B) = P(A)P(B|A) = P(B)P(A|B).
Compare this result to 19.3.6.
This relationship also provides a neat way of working out conditional
probability.
19.3.14
Conditional probability
Given any two events A and B, the probability that A occurs given B already
has is given by
P(A|B) = P(A ∩ B) / P(B)

provided that P(B) ≠ 0.
19.3.15
Example
The probability that two components A and B in a device are both working
is 0.63. The probability that B is working is 0.7. Calculate the probability
that A is working given that B is.
Solution
Clearly P(A ∩ B) = 0.63, and from the above P(A|B) = 0.63/0.7 = 0.9.
Question. Do we have enough information in this example to calculate
P (A)?
We can also rearrange our probability law 19.3.13 to produce the following
important result.
19.3.16 Bayes' Theorem

P(B|A) = P(B)P(A|B) / P(A).

If an event F can occur with any one of a set of mutually exclusive and exhaustive events B₁, B₂, . . ., this extends to

P(A|F) = P(A)P(F|A) / Σᵢ P(F|Bᵢ)P(Bᵢ)

where A is one of the Bᵢ.
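A small worked use of Bayes' theorem in code, with numbers of my own choosing (not from the notes): a component comes from machine A with P(A) = 0.3 or machine B with P(B) = 0.7, and the fault probabilities are P(F|A) = 0.02 and P(F|B) = 0.01.

p_A, p_B = 0.3, 0.7
p_F_given_A, p_F_given_B = 0.02, 0.01

p_F = p_F_given_A * p_A + p_F_given_B * p_B    # total probability of a fault
p_A_given_F = p_A * p_F_given_A / p_F          # Bayes' theorem
print(p_A_given_F)                             # about 0.462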
19.4 Discrete Random Variables
A discrete random variable is any variable quantity that can only take exact
or separate numerical values. For example, while the length of feet measured
in people may assume any (therefore continuous) values, the shoe sizes they
have can have only discrete values.
19.4.1
Notation
It is usual to use capital letters to denote a variable, and lower case letters to denote possible outcomes. These outcomes should all be arranged to be mutually exclusive.
The probabilities of the possible outcomes must total one:

Σ_{i=1}^{n} P(X = xᵢ) = 1.
19.4.2
Expected Value
Consider, for example, the total score T when two fair dice are rolled. The possible outcomes xᵢ and their probabilities f(xᵢ) are:

xᵢ      2     3     4     5     6     7     8     9     10    11    12
f(xᵢ)   1/36  2/36  3/36  4/36  5/36  6/36  5/36  4/36  3/36  2/36  1/36

The expected (mean) total is then

E(T) = 2·(1/36) + 3·(2/36) + 4·(3/36) + ⋯ + 11·(2/36) + 12·(1/36) = 252/36 = 7.
The division has been moved to the second part of the multiplication to
help highlight that each term is merely the product of the outcome with the
probability of that outcome. We can generalise this as follows.
=
Definition
If a discrete random variable X has the possible outcomes x1 , x2 , . . . xn , with
a probability density function f (x) such that f (xi ) = P (X = xi ), then
E(X) = Σ_{i=1}^{n} xᵢ f(xᵢ).
19.4.3
Variance
Given that we have some measure of the mean outcome for a discrete random
variable we turn our attention to how dispersed the outcomes may be around
this mean. We do this by examining the variance which is the name for the
square of standard deviation (see 18.6.2).
Now the standard deviation is essentially the root of the mean of the squared deviations from the mean, so the variance will be the same calculation without this final root. We shall denote the variance of the variable X with the notation
with the notation
var (X) .
19.4.4
Example
Calculate the variance of the total score X when two fair dice are rolled.

xᵢ     f(xᵢ)   xᵢ f(xᵢ)   xᵢ² f(xᵢ)
2      1/36    2/36       4/36
3      2/36    6/36       18/36
4      3/36    12/36      48/36
5      4/36    20/36      100/36
6      5/36    30/36      180/36
7      6/36    42/36      294/36
8      5/36    40/36      320/36
9      4/36    36/36      324/36
10     3/36    30/36      300/36
11     2/36    22/36      242/36
12     1/36    12/36      144/36
total          7          54 5/6

Table 19.2: Calculating E(X) and var (X) for two rolled dice.

From the table E(X) = 7 and E(X²) = 54 5/6, so that

var(X) = E(X²) − [E(X)]² = 54 5/6 − 49 = 5 5/6.
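The totals in table 19.2 are easy to reproduce exactly; the snippet below is my own sketch using Python's fractions module.

from fractions import Fraction

outcomes = range(2, 13)
freq = [1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1]            # out of 36
probs = [Fraction(f, 36) for f in freq]

E = sum(x * p for x, p in zip(outcomes, probs))
E2 = sum(x * x * p for x, p in zip(outcomes, probs))
var = E2 - E**2
print(E, E2, var)                                   # 7, 329/6 (= 54 5/6), 35/6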
19.5 Continuous Random Variables
In fact this nature of the real numbers is noted in the fact that they are often described as the continuum. Furthermore, when people discuss the sizes of the integers versus that of the real numbers, the integers are of the size of the smallest infinity, ℵ₀, while the real numbers are of the size of the continuum, written c.
19.5.1 Definition
If X is a continuous variable such that there are real numbers a and b such that

P(a ≤ X < b) = 1,  P(X < a) = 0,  P(X ≥ b) = 0

then X is a continuous random variable.
19.5.2
looking for all values of X that round to the number 2, and so we find P(1.5 ≤ X < 2.5) instead.
Chapter 20
The Normal Distribution
20.1
Definition
20.2
20.2.1
Transforming variables
The transformation required is

z = (x − µ)/σ

and to complete our problem we examine our tables, which show the area under the graph (proportion) up to and including our value (which is 2).
20.2.2
Calculation of areas
Note that the tables provided, like most tables show areas calculated only
in a certain way. To work out other areas we must use ingenuity combined
with the following facts
1. The total area under the graph is 1;
2. The graph is perfectly symmetrical about 0;
3. Therefore the area under each side is 1/2.
20.2.3
Example
20.2.4
Confidence limits
20.2.5
Sampling distribution
Suppose that we have a large population, out of which we select many samples
of size n. If we calculate the sample mean for each sample, then we could
consider the sample formed by these averages.
This is a sample taken from the population of all possible averages of
samples of size n from the original population.
This is confusing - you must realise that there are two populations, the
original one (P1 say), and the population (P2 say) of samples of size n taken
from P1 .
Example
Take the following, extremely small example.
Suppose our population P1 consists of the numbers
1, 2, 3, 4, 5
and that we are taking samples of size 3, then the population P2 of all
possible samples looks like this.
{1, 2, 3}, {1, 2, 4}, {1, 3, 4}, {2, 3, 4}, {1, 2, 5}, {1, 3, 5}, {2, 3, 5}, {1, 4, 5}, {2, 4, 5}, {3, 4, 5}

20.3 The Central Limit Theorem
If samples of size n are drawn from a parent population with mean µ and standard deviation σ, then the means of those samples are themselves approximately normally distributed, with mean µ and standard deviation σ/√n. A value of n = 30 is quite safe for most original parent populations, but any sample size is safe for a normally distributed parent.¹
We write

µ_x̄ = µ

and

σ_x̄ = σ/√n.

¹There are exceptions; the Cauchy distribution does not obey the Central Limit Theorem.
20.4 Confidence Intervals
For samples of size n drawn from a population with mean µ and standard deviation σ, 95% of sample means x̄ satisfy

µ − 1.96 σ/√n ≤ x̄ ≤ µ + 1.96 σ/√n

where the figure of 1.96 is the corresponding z score for a two-tailed 95% confidence interval, and we are taking samples from the sample mean population, and so its standard deviation is as above.
If n = 1 then clearly this just becomes the normal 95% confidence limit, but as n becomes larger this interval becomes smaller and smaller, giving us a more detailed and precise picture of the possible locations of x̄.
Of course, normally the whole point is that we don't have the population mean, but instead have simply the sample mean and wish to deduce the population mean from it. The same logic can be employed. We can interpret the interval above like so:

|µ − x̄| ≤ 1.96 σ/√n

so that the distance between µ and x̄ has an upper limit (within our confidence level). Or to put it yet another way,

x̄ − 1.96 σ/√n ≤ µ ≤ x̄ + 1.96 σ/√n

so that given x̄ and an estimate for σ we can produce a range of possible values for µ, which will narrow as n increases given the relevant confidence level.
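For a concrete illustration (my own numbers, not from the notes): with x̄ = 102.0, σ = 8.0 and n = 64, the 95% interval for µ follows directly from the last inequality.

import math

x_bar, sigma, n = 102.0, 8.0, 64
half_width = 1.96 * sigma / math.sqrt(n)
print(x_bar - half_width, x_bar + half_width)    # 100.04 to 103.96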
20.5
Hypothesis Testing
Another use for this result is to determine if a mean has moved significantly.
This is generally only useful if we really were sure we knew it before.
Suppose that the mean of a population was reliably found to be µ. Some time later a sample of size n was taken from the new population, and the mean found to be x̄; assume the standard deviation σ has not changed.
20.5.1 The test statistic
We take as the null hypothesis H₀ that the mean has not changed, and as the alternative hypothesis H₁ that it has, and compute

z = |µ − x̄| √n / σ

where once again, due to the two-tailed nature of the problem, we are only interested in the magnitude of the difference, not the sign. If this test statistic
is larger than the value Z obtained above then either
1. the mean has not changed but this sample lay outside the confidence
interval even so; this is called a Type I error, and the probability of
this occurring is usually denoted as α and called the significance level. Clearly α = 0.05 for a 95% confidence.
2. the hypothesis H1 is actually true.
There is no way to determine for sure which is the case, which is why a
high level of confidence is useful in order to make the error unlikely. We are
therefore forced to accept H1 and reject H0 .
Conversely, if the value of z < Z then either
1. the mean has changed significantly, but our random sample simply did
not reflect this; this is called a Type II error, and the probability of this occurring is usually denoted as β.
2. the hypothesis H0 is actually true.
Again we reject H1 and accept the null hypothesis that the change is not
statistically significant.
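A worked test with numbers of my own choosing (not from the notes): previously µ = 50.0, and a sample of n = 36 now gives x̄ = 51.4 with σ = 3.0.

import math

mu, x_bar, sigma, n = 50.0, 51.4, 3.0, 36
z = abs(mu - x_bar) * math.sqrt(n) / sigma
print(z)   # 2.8, which exceeds 1.96, so at the 95% level we reject H0
           # (accepting the risk of a Type I error)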
20.6 Comparing two means
When the means of two independent samples, of sizes n₁ and n₂, are compared, the corresponding standard deviation is

√(σ₁²/n₁ + σ₂²/n₂).
Appendix A
Statistical Tables
For the standard normal distribution, that is the normal distribution with mean 0 and standard deviation 1, the probability density function is given by:

φ(x) = (1/√(2π)) e^{−x²/2} = (1/√(2π)) exp(−x²/2).

Table A.1 shows the cumulative normal distribution calculated as

Φ(x) = ∫_{−∞}^{x} φ(z) dz = ∫_{−∞}^{x} (1/√(2π)) e^{−z²/2} dz.
Table A.1: The cumulative normal distribution Φ(x). The row gives x to one decimal place and the column gives the second decimal place.

x     .00   .01   .02   .03   .04   .05   .06   .07   .08   .09
0.0   0.500 0.504 0.508 0.512 0.516 0.520 0.524 0.528 0.532 0.536
0.1   0.540 0.544 0.548 0.552 0.556 0.560 0.564 0.567 0.571 0.575
0.2   0.579 0.583 0.587 0.591 0.595 0.599 0.603 0.606 0.610 0.614
0.3   0.618 0.622 0.626 0.629 0.633 0.637 0.641 0.644 0.648 0.652
0.4   0.655 0.659 0.663 0.666 0.670 0.674 0.677 0.681 0.684 0.688
0.5   0.691 0.695 0.698 0.702 0.705 0.709 0.712 0.716 0.719 0.722
0.6   0.726 0.729 0.732 0.736 0.739 0.742 0.745 0.749 0.752 0.755
0.7   0.758 0.761 0.764 0.767 0.770 0.773 0.776 0.779 0.782 0.785
0.8   0.788 0.791 0.794 0.797 0.800 0.802 0.805 0.808 0.811 0.813
0.9   0.816 0.819 0.821 0.824 0.826 0.829 0.831 0.834 0.836 0.839
1.0   0.841 0.844 0.846 0.848 0.851 0.853 0.855 0.858 0.860 0.862
1.1   0.864 0.867 0.869 0.871 0.873 0.875 0.877 0.879 0.881 0.883
1.2   0.885 0.887 0.889 0.891 0.893 0.894 0.896 0.898 0.900 0.901
1.3   0.903 0.905 0.907 0.908 0.910 0.911 0.913 0.915 0.916 0.918
1.4   0.919 0.921 0.922 0.924 0.925 0.926 0.928 0.929 0.931 0.932
1.5   0.933 0.934 0.936 0.937 0.938 0.939 0.941 0.942 0.943 0.944
1.6   0.945 0.946 0.947 0.948 0.949 0.951 0.952 0.953 0.954 0.954
1.7   0.955 0.956 0.957 0.958 0.959 0.960 0.961 0.962 0.962 0.963
1.8   0.964 0.965 0.966 0.966 0.967 0.968 0.969 0.969 0.970 0.971
1.9   0.971 0.972 0.973 0.973 0.974 0.974 0.975 0.976 0.976 0.977
2.0   0.977 0.978 0.978 0.979 0.979 0.980 0.980 0.981 0.981 0.982
2.1   0.982 0.983 0.983 0.983 0.984 0.984 0.985 0.985 0.985 0.986
2.2   0.986 0.986 0.987 0.987 0.987 0.988 0.988 0.988 0.989 0.989
2.3   0.989 0.990 0.990 0.990 0.990 0.991 0.991 0.991 0.991 0.992
2.4   0.992 0.992 0.992 0.992 0.993 0.993 0.993 0.993 0.993 0.994
2.5   0.994 0.994 0.994 0.994 0.994 0.995 0.995 0.995 0.995 0.995
2.6   0.995 0.995 0.996 0.996 0.996 0.996 0.996 0.996 0.996 0.996
2.7   0.997 0.997 0.997 0.997 0.997 0.997 0.997 0.997 0.997 0.997
2.8   0.997 0.998 0.998 0.998 0.998 0.998 0.998 0.998 0.998 0.998
2.9   0.998 0.998 0.998 0.998 0.998 0.998 0.998 0.999 0.999 0.999
3.0   0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999 0.999
Percentage points of the χ² distribution: the tabulated value is exceeded with probability P% for ν degrees of freedom.

ν \ P%  99        95       90      85      80      60     50
1       0.000157  0.00393  0.0158  0.0358  0.064   0.275  0.455
2       0.0201    0.103    0.211   0.325   0.446   1.02   1.39
3       0.115     0.352    0.584   0.798   1.01    1.87   2.37
4       0.297     0.711    1.06    1.37    1.65    2.75   3.36
5       0.554     1.15     1.61    1.99    2.34    3.66   4.35
6       0.872     1.64     2.20    2.66    3.07    4.57   5.35
7       1.24      2.17     2.83    3.36    3.82    5.49   6.35
8       1.65      2.73     3.49    4.08    4.59    6.42   7.34
9       2.09      3.33     4.17    4.82    5.38    7.36   8.34
10      2.56      3.94     4.87    5.57    6.18    8.30   9.34
11      3.05      4.57     5.58    6.34    6.99    9.24   10.34
12      3.57      5.23     6.30    7.11    7.81    10.18  11.34
13      4.11      5.89     7.04    7.90    8.63    11.13  12.34
14      4.66      6.57     7.79    8.70    9.47    12.08  13.34
15      5.23      7.26     8.55    9.50    10.31   13.03  14.34
20      8.26      10.85    12.44   13.60   14.58   17.81  19.34
30      14.95     18.49    20.60   22.11   23.36   27.44  29.34
40      22.16     26.51    29.05   30.86   32.34   37.13  39.34
50      29.71     34.76    37.69   39.75   41.45   46.86  49.33
60      37.48     43.19    46.46   48.76   50.64   56.62  59.33

ν \ P%  40     30     20     10     5      2      1      0.1
1       0.708  1.07   1.64   2.71   3.84   5.41   6.63   10.83
2       1.83   2.41   3.22   4.61   5.99   7.82   9.21   13.82
3       2.95   3.66   4.64   6.25   7.81   9.84   11.34  16.27
4       4.04   4.88   5.99   7.78   9.49   11.67  13.28  18.47
5       5.13   6.06   7.29   9.24   11.07  13.39  15.09  20.51
6       6.21   7.23   8.56   10.64  12.59  15.03  16.81  22.46
7       7.28   8.38   9.80   12.02  14.07  16.62  18.48  24.32
8       8.35   9.52   11.03  13.36  15.51  18.17  20.09  26.12
9       9.41   10.66  12.24  14.68  16.92  19.68  21.67  27.88
10      10.47  11.78  13.44  15.99  18.31  21.16  23.21  29.59
11      11.53  12.90  14.63  17.28  19.68  22.62  24.73  31.26
12      12.58  14.01  15.81  18.55  21.03  24.05  26.22  32.91
13      13.64  15.12  16.98  19.81  22.36  25.47  27.69  34.53
14      14.69  16.22  18.15  21.06  23.68  26.87  29.14  36.12
15      15.73  17.32  19.31  22.31  25.00  28.26  30.58  37.70
20      20.95  22.77  25.04  28.41  31.41  35.02  37.57  45.31
30      31.32  33.53  36.25  40.26  43.77  47.96  50.89  59.70
40      41.62  44.16  47.27  51.81  55.76  60.44  63.69  73.40
50      51.89  54.72  58.16  63.17  67.50  72.61  76.15  86.66
60      62.13  65.23  68.97  74.40  79.08  84.58  88.38  99.61
Appendix B
Greek Alphabet
The Greek alphabet has been included for completeness here. Of course, not all the Greek letters are used in this course, but a full reference may prove useful.

Name      Lower case  Upper case
Alpha     α           Α
Beta      β           Β
Gamma     γ           Γ
Delta     δ           Δ
Epsilon   ε           Ε
Zeta      ζ           Ζ
Eta       η           Η
Theta     θ           Θ
Iota      ι           Ι
Kappa     κ           Κ
Lambda    λ           Λ
Mu        μ           Μ
Nu        ν           Ν
Xi        ξ           Ξ
Omicron   ο           Ο
Pi        π           Π
Rho       ρ           Ρ
Sigma     σ           Σ
Tau       τ           Τ
Upsilon   υ           Υ
Phi       φ           Φ
Chi       χ           Χ
Psi       ψ           Ψ
Omega     ω           Ω
Index
!, 32
<, 2
=, 2
>, 2
≠, 2
Im(z), 76
Re(z), 76
ℑ(z), 76
ℜ(z), 76
Σ, 2, 31
δ, 273
ω, 64
cos θ, 50
cot θ, 51
csc θ, 51
sec θ, 51
sin θ, 50
tan θ, 50
i, 86
j, 86
k, 86
i, 68
j, 68
nCr, 41
absolute value, 30
adjacent, 49
angles
converting, 61
degrees, 61
radians, 61
angular frequency, 64
antilog, 33, 36
antilogarithm, 33
arc length, 61
area of sector, 62
Argand diagrams, 70
augmented matrix, 108
binomial expansion, 38
binomial theorem, 38
BODMAS, 3, 12
brackets
expanding, 17
multiplying, 18
CAH, 51
calculus
derivative, 149
differential, 149
adding, 151
chain rule, 151
product rule, 151
quotient rule, 152
subtracting, 151
trigonometry, 152
integral, 172
areas, 186
definite, 185
fractions, 176
logarithm, 175
mean value, 188
numerical, 191
parts, 177
power rule, 173
RMS, 189
substitution, 174
volumes, 187
several variables
Lagrange multipliers, 223
stationary points, 218
turning points, 158
Cartesian vectors, 86
CAST diagram, 55
combinations, 41
complex numbers, 68, 69
addition, 71
algebra, 71
Argand diagrams, 70
cartesian form, 76
conjugate, 75
division, 74, 78
exponential form, 78
imaginary, 69
imaginary part, 76
modulus, 75, 77
multiplication, 72, 78
plane, 70
polar form, 77
real part, 76
subtraction, 72
complex plane, 70
constant of integration, 172
convolution, 264
cosine rule, 59
Coulombs law, 133
counting numbers, 8
cross product, 91
decibel, 33
decimal places, 3
delta, Dirac, 273
differential equations
Laplace transforms, 256
differentiation, see calculus, 149
partial, 214
discriminant, 28
domino rule, 96
dot product, 89
equation of straight line, 145
equations
exponential, 34
quadratic, 24
discriminant, 28
in trigonometry, 65
simple cases, 29
solving, 26, 27
rearranging, 11
solving
multiple solutions, 54
with matrices, 104
trigonometric, 54, 64
events, 297
independent, 299, 303
multiple, 298
mutually exclusive, 298
exp(), 37
expanding brackets, 17
expansion
binomial, 38
expected value, 305
exponent
exponential equations, 34
exponential functions, 32, 36
factor, 20
factorial, 32
factorization, 20, 26
Fourier
coefficients, 239
series, 240
function notation, 15
functions
exponential, 32, 36
inverse, 16
logarithmic, 33
trigonometric, 50
Gaussian elimination, 102
gradient, 144
of indices, 21
of logs, 34
of signs, 2
of surds, 23, 68
leading diagonal, 94
length of arc, 61
log, 33
logarithm, 33
logarithmic functions, 33
logarithms, 34
John, 33
Newtons laws
Gravitation, 133
second, 131
notation
function, 15
sigma, 31, 285
trigonometric, 52
numbers, 8
complex, see complex numbers
imaginary, 69
prime, 8
rational, 9
real, 9
whole, 8
Ohms law, 131
operator precedence, 3, 12
opposite, 49
order of precedence, 3
parabola, 25
partial differentiation, 214
partial fractions, 176
Pascals triangle, 39, 41
pH, 33
phase shift, 64
positive integers, 8
power, 21, 32
Power Series, 194
prime numbers, 8
principle of superposition, 66
Probability
distributions
uniform, 290
probability, 297
conditional, 303
continuous variables, 308
discrete variables, 304
exhaustive list, 297
expected value, 305
laws, 299
n of N , 300
1 of N , 300
and, 300, 301, 303
not, 299
or, 299, 301, 302
multiple events, 298
probability density function, 309
proportion
direct, 130
inverse, 131
inverse square, 132
Pythagoras theorem, 50
quadratic equations, 24, 27
complex solutions, 69, 75
radians, 61
rational numbers, 9
real line, 70
real numbers, 9
line, 9
rearranging equations, 11
Richter scale, 33
right angled triangles, 49
RMS value, 189
row reduction, 102
scalar product, 89
scalene triangles, 57
sector area, 62
Sigma notation, 31
sigma notation, see notation
significant figures, 4
Simpsons rule, 191
sine rule, 59
singular, 101
SOH, 51
stationary points, 158, 218
Statistics, 285
statistics
averages, 288, 289
cumulative frequency, 288
frequency, 288
measures
of dispersion, 290
of location, 288
population, 287
range, 291
relative frequency, 288
sample, 287
sampling, 287
standard deviation, 291
sum and product, 26
surd, 23
Taylors Expansion, 195
Taylors Theorem, 218
Taylors theorem, 197
TOA, 51
triangles
irregular, 57
labelling, 49, 58
right angled, 49
scalene, 57
trigonometric equations, 64
trigonometry, 49
common values, 53
cosine rule, 59
functions, 50
identities, 63
compound angles, 64
double angles, 64
non right angled, 58
sine rule, 59
turning points, 158
unit step function, see Laplace transforms
variables
continuous random, 308
discrete random, 304
vector product, 91
vectors, 85
addition, 88
Cartesian, 86
column, 94
cross product, 91
dot product, 89
modulus, 85
row, 94
scalar product, 89
subtraction, 88
unit, 86
zero, 88
warpspeed, 33
waveform, 64
whole numbers, 8
zero matrix, 94
zero vector, 88