MathEng5-M - Part 5
3. Variance
$$S_y^2 = \frac{\sum (y_i - \bar{y})^2}{n - 1}$$
where $n - 1$ = degrees of freedom
Normally distributed:
$\bar{y} - S_y$ to $\bar{y} + S_y$ - comprises 68% of the total measurements
$\bar{y} - 2S_y$ to $\bar{y} + 2S_y$ - comprises 95% of the total measurements
$t_{\alpha/2,\,n-1}$ = the standard random variable for the t-distribution with $n - 1$ degrees of freedom and a tail probability of $\alpha/2$
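As a quick check of these formulas, a minimal Python sketch (it reuses the y values from the regression example further below):

```python
import numpy as np
from scipy import stats

y = np.array([5, 6, 7, 6, 9, 8, 7, 10, 12, 12], dtype=float)
n = len(y)

ybar = y.mean()
Sy2 = np.sum((y - ybar) ** 2) / (n - 1)   # sample variance, n - 1 degrees of freedom
Sy = np.sqrt(Sy2)                          # sample standard deviation

# t-based confidence interval for the mean at the 95% level
alpha = 0.05
t_crit = stats.t.ppf(1 - alpha / 2, n - 1)  # t_{alpha/2, n-1}
half = t_crit * Sy / np.sqrt(n)

print(f"ybar = {ybar:.4f}, Sy^2 = {Sy2:.4f}, Sy = {Sy:.4f}")
print(f"95% CI for the mean: {ybar - half:.4f} to {ybar + half:.4f}")
```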
• Minimax criterion – a line is chosen that minimizes the maximum distance that an
individual point falls from the line.
a. Ill-suited for regression
b. Well-suited for fitting a simple function to a complicated function
$$S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} \left( y_{i,\text{measured}} - y_{i,\text{model}} \right)^2$$
$$S_r = \sum_{i=1}^{n} \left( y_i - a_0 - a_1 x_i \right)^2$$
3. Noting that $\sum a_0 = n a_0$, the normal equations are
$$n a_0 + \left( \sum x_i \right) a_1 = \sum y_i$$
$$\left( \sum x_i \right) a_0 + \left( \sum x_i^2 \right) a_1 = \sum x_i y_i$$
x 0 2 4 6 9 11 12 15 17 19
y 5 6 7 6 9 8 7 10 12 12
$n = 10$, $\sum x_i = 95$, $\sum y_i = 82$, $\sum x_i y_i = 911$, $\sum x_i^2 = 1277$
$$\bar{x} = \frac{95}{10} = 9.5 \qquad \bar{y} = \frac{82}{10} = 8.2$$
$$a_1 = \frac{n \sum x_i y_i - \sum x_i \sum y_i}{n \sum x_i^2 - \left( \sum x_i \right)^2} = \frac{10(911) - 95(82)}{10(1277) - (95)^2} = 0.3524699599$$
$$a_0 = \bar{y} - a_1 \bar{x} = 8.2 - 0.3524699599(9.5) = 4.851535381$$
Standard error of the estimate:
$$S_{y/x} = \sqrt{\frac{S_r}{n - 2}} = \sqrt{\frac{9.073965287}{10 - 2}} = 1.0650097$$
Correlation coefficient
$$r^2 = \frac{S_t - S_r}{S_t} = \frac{\sum (y_i - \bar{y})^2 - \sum (y_i - a_0 - a_1 x_i)^2}{\sum (y_i - \bar{y})^2}$$
$$r^2 = \frac{55.60 - 9.073965287}{55.60} = 0.8367991855$$
$$r = 0.9147672849$$
[Plot: the data points with the fitted line over $0 \le x \le 20$]
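The same computation as a minimal Python sketch (it reproduces $a_1$, $a_0$, $S_{y/x}$, $r^2$, and $r$ from the tabulated data):

```python
import numpy as np

x = np.array([0, 2, 4, 6, 9, 11, 12, 15, 17, 19], dtype=float)
y = np.array([5, 6, 7, 6, 9, 8, 7, 10, 12, 12], dtype=float)
n = len(x)

# Slope and intercept from the normal equations
a1 = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x**2) - np.sum(x)**2)
a0 = y.mean() - a1 * x.mean()

# Residual and total sums of squares
Sr = np.sum((y - a0 - a1 * x) ** 2)
St = np.sum((y - y.mean()) ** 2)

Syx = np.sqrt(Sr / (n - 2))   # standard error of the estimate
r2 = (St - Sr) / St           # coefficient of determination

print(f"a1 = {a1:.10f}, a0 = {a0:.9f}")
print(f"S_y/x = {Syx:.7f}, r^2 = {r2:.10f}, r = {np.sqrt(r2):.10f}")
```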
x 0.75 2 3 4 6 8 8.5
y 1.2 1.95 2 2.4 2.4 2.7 2.6
a) Saturation-growth rate
$$y = \alpha_3 \frac{x}{\beta_3 + x}$$
Linearized form:
$$\frac{1}{y} = \frac{\beta_3}{\alpha_3} \cdot \frac{1}{x} + \frac{1}{\alpha_3}$$
From the table, it yields
$$\frac{1}{y} = \frac{5}{13} \cdot \frac{1}{x} + \frac{25}{78}$$
$$\alpha_3 = \frac{78}{25} = 3.12 \qquad \beta_3 = \alpha_3 \cdot \frac{5}{13} = \frac{6}{5}$$
$$y = \frac{78}{25} \cdot \frac{x}{\frac{6}{5} + x} = \frac{78}{25} \cdot \frac{5x}{6 + 5x} = \frac{78}{5} \cdot \frac{x}{6 + 5x}$$

x      y      1/x            1/y            y (model)
0.75   1.20   1.3333333333   0.8333333333   1.2000000000
2.00   1.95   0.5000000000   0.5128205128   1.9500000000
3.00   2.00   0.3333333333   0.5000000000   2.2285714286
4.00   2.40   0.2500000000   0.4166666667   2.4000000000
6.00   2.40   0.1666666667   0.4166666667   2.6000000000
8.00   2.70   0.1250000000   0.3703703704   2.7130434783
8.50   2.60   0.1176470588   0.3846153846   2.7340206186
[Plots: $1/y$ versus $1/x$ with the linearized fit, and $y$ versus $x$ with the fitted curve, over $0 \le x \le 10$]
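A minimal Python sketch of the linearization procedure (fit a straight line to $1/y$ versus $1/x$, then back-transform; note it returns the ordinary least-squares coefficients for the transformed data):

```python
import numpy as np

x = np.array([0.75, 2, 3, 4, 6, 8, 8.5])
y = np.array([1.2, 1.95, 2, 2.4, 2.4, 2.7, 2.6])

# Transformed variables: 1/y = (beta3/alpha3)*(1/x) + 1/alpha3 is linear in 1/x
u, v = 1 / x, 1 / y
n = len(x)

slope = (n * np.sum(u * v) - np.sum(u) * np.sum(v)) / (n * np.sum(u**2) - np.sum(u)**2)
intercept = v.mean() - slope * u.mean()

alpha3 = 1 / intercept   # since intercept = 1/alpha3
beta3 = slope * alpha3   # since slope = beta3/alpha3

print(f"alpha3 = {alpha3:.6f}, beta3 = {beta3:.6f}")
print("model predictions:", alpha3 * x / (beta3 + x))
```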
xi     yi      xi²       xi³          xi⁴              xi·yi   xi²·yi
0.75   1.20    0.5625    0.421875     0.31640625       0.9     0.675
2.00   1.95    4.0000    8.000000     16.00000000      3.9     7.800
3.00   2.00    9.0000    27.000000    81.00000000      6.0     18.000
4.00   2.40    16.0000   64.000000    256.00000000     9.6     38.400
6.00   2.40    36.0000   216.000000   1296.00000000    14.4    86.400
8.00   2.70    64.0000   512.000000   4096.00000000    21.6    172.800
8.50   2.60    72.2500   614.125000   5220.06250000    22.1    187.850
Σ      32.25   15.25     201.8125     1441.546875      10965.37890625   78.5   511.925
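These sums are exactly what the normal equations for a second-order polynomial $y = a_0 + a_1 x + a_2 x^2$ require. A minimal Python sketch that assembles and solves them:

```python
import numpy as np

x = np.array([0.75, 2, 3, 4, 6, 8, 8.5])
y = np.array([1.2, 1.95, 2, 2.4, 2.4, 2.7, 2.6])
n = len(x)

# Normal equations for y = a0 + a1*x + a2*x^2, built from the tabulated sums
A = np.array([
    [n,            np.sum(x),    np.sum(x**2)],
    [np.sum(x),    np.sum(x**2), np.sum(x**3)],
    [np.sum(x**2), np.sum(x**3), np.sum(x**4)],
])
b = np.array([np.sum(y), np.sum(x * y), np.sum(x**2 * y)])

a0, a1, a2 = np.linalg.solve(A, b)
print(f"a0 = {a0:.6f}, a1 = {a1:.6f}, a2 = {a2:.6f}")
```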
[Plots: the data and the fitted second-order polynomial over $0 \le x \le 10$]
n   xi   yi     xi·yi   xi²·yi   xi²   xi³   xi⁴   (yi − ȳ)²    (yi − ŷi)²   ŷi
1   0    2.1    0       0        0     0     0     544.44444    0.14332      2.47857
2   1    7.7    7.7     7.7      1     1     1     314.47111    1.00286      6.69857
3   2    13.6   27.2    54.4     4     8     16    140.02778    1.08160      14.64000
4   3    27.2   81.6    244.8    9     27    81    3.12111      0.80487      26.30286
5   4    40.9   163.6   654.4    16    64    256   239.21778    0.61959      41.68714
6   5    61.1   305.5   1527.5   25    125   625   1272.11111   0.09434      60.79286
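A short Python check of this table (a sketch; `np.polyfit` returns the least-squares quadratic, and the goodness-of-fit statistics follow the same formulas as in the straight-line case):

```python
import numpy as np

x = np.array([0, 1, 2, 3, 4, 5], dtype=float)
y = np.array([2.1, 7.7, 13.6, 27.2, 40.9, 61.1])

a2, a1, a0 = np.polyfit(x, y, 2)   # coefficients, highest degree first
yhat = a0 + a1 * x + a2 * x**2     # model predictions (the table's last column)

Sr = np.sum((y - yhat) ** 2)       # sum of the (yi - yhat_i)^2 column
St = np.sum((y - y.mean()) ** 2)   # sum of the (yi - ybar)^2 column
r2 = (St - Sr) / St

print(f"a0 = {a0:.5f}, a1 = {a1:.5f}, a2 = {a2:.5f}")
print(f"Sr = {Sr:.5f}, St = {St:.5f}, r^2 = {r2:.7f}")
```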
[Plots: the data points and the fitted parabola over $0 \le x \le 6$]
❖ take the partial derivative with respect to each of the coefficients and set the resulting equations equal to zero, which leads to the normal equations
$$[Z]^T [Z] \{A\} = [Z]^T \{Y\}$$
g) Solution techniques
1. LU decomposition including Gauss elimination
2. Cholesky’s method
3. Matrix inversion
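As an illustration tying the normal equations above to technique 2 on this list, a minimal Python sketch (the basis functions $z_0 = 1$, $z_1 = x$ and the data from the earlier straight-line example are used just for concreteness):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

x = np.array([0, 2, 4, 6, 9, 11, 12, 15, 17, 19], dtype=float)
y = np.array([5, 6, 7, 6, 9, 8, 7, 10, 12, 12], dtype=float)

# Design matrix [Z]: one column per basis function (z0 = 1, z1 = x)
Z = np.column_stack([np.ones_like(x), x])

# Normal equations [Z]^T[Z]{A} = [Z]^T{Y}
ZtZ = Z.T @ Z
ZtY = Z.T @ y

# [Z]^T[Z] is symmetric positive definite, so Cholesky's method applies
c, low = cho_factor(ZtZ)
A = cho_solve((c, low), ZtY)

print("coefficients:", A)   # should match a0, a1 from the straight-line example
```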
where $s(a_j)$ = the standard error of coefficient $a_j$; that is, $s(a_j) = \sqrt{\operatorname{var}(a_j)}$. In a similar manner, lower and upper bounds on the slope can be formulated as
$$L = a_1 - t_{\alpha/2,\,n-2}\, s(a_1) \qquad U = a_1 + t_{\alpha/2,\,n-2}\, s(a_1)$$
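A minimal sketch of these bounds for the straight-line fit, assuming the standard result $s(a_1) = S_{y/x} / \sqrt{\sum (x_i - \bar{x})^2}$:

```python
import numpy as np
from scipy import stats

x = np.array([0, 2, 4, 6, 9, 11, 12, 15, 17, 19], dtype=float)
y = np.array([5, 6, 7, 6, 9, 8, 7, 10, 12, 12], dtype=float)
n = len(x)

a1 = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x**2) - np.sum(x)**2)
a0 = y.mean() - a1 * x.mean()

Syx = np.sqrt(np.sum((y - a0 - a1 * x) ** 2) / (n - 2))   # standard error of the estimate
s_a1 = Syx / np.sqrt(np.sum((x - x.mean()) ** 2))         # standard error of the slope

t = stats.t.ppf(1 - 0.05 / 2, n - 2)   # t_{alpha/2, n-2} for a 95% interval
print(f"L = {a1 - t * s_a1:.6f}, U = {a1 + t * s_a1:.6f}")
```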
The above equation cannot be manipulated so that it conforms to the general form of matrix
formulation for linear least squares. For the nonlinear case, the above equation can be solved
in an iterative fashion.
The Gauss-Newton method is one algorithm for minimizing the sum of the squares of the
residuals between data and nonlinear equations. The key concept underlying the technique is
that a Taylor series expansion is used to express the original nonlinear equation in an
approximate, linear form. Then, least-squares theory can be used to obtain new estimates of
the parameters that move in the direction of minimizing the residual.
$$y_i = f(x_i; a_0, a_1, \ldots, a_m) + e_i$$
❖ For convenience, this model can be expressed in abbreviated form by omitting the parameters:
$$y_i = f(x_i) + e_i$$
NONLINEAR REGRESSION
The nonlinear model can be expanded in a Taylor series around the parameter values and curtailed after the first derivative. Example: for a two-parameter case,
$$f(x_i)_{j+1} = f(x_i)_j + \frac{\partial f(x_i)_j}{\partial a_0} \Delta a_0 + \frac{\partial f(x_i)_j}{\partial a_1} \Delta a_1$$
where $j$ = the initial guess, $j + 1$ = the prediction, $\Delta a_0 = a_{0,j+1} - a_{0,j}$, and $\Delta a_1 = a_{1,j+1} - a_{1,j}$. Substituting this expansion into the model gives
$$y_i - f(x_i)_j = \frac{\partial f(x_i)_j}{\partial a_0} \Delta a_0 + \frac{\partial f(x_i)_j}{\partial a_1} \Delta a_1 + e_i$$
or in matrix form,
$$\{D\} = [Z_j]\{\Delta A\} + \{E\}$$
Applying linear least-squares theory to this linearized model yields the normal equations
$$[Z_j]^T [Z_j] \{\Delta A\} = [Z_j]^T \{D\}$$
Thus, the approach consists of solving for $\{\Delta A\}$, which can be employed to compute improved values for the parameters, as in
$$a_{0,j+1} = a_{0,j} + \Delta a_0 \qquad \text{and} \qquad a_{1,j+1} = a_{1,j} + \Delta a_1$$
❖ This procedure is repeated until the solution converges – that is, until
$$\left| \varepsilon_a \right|_k = \left| \frac{a_{k,j+1} - a_{k,j}}{a_{k,j+1}} \right| \times 100\%$$
falls below an acceptable stopping criterion for each of the parameters.
❖ Then the parameters would be adjusted systematically to minimize Sr, using search techniques of
the type described in optimization.
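The example table that follows evaluates $1 - e^{-a_1 x}$ and $a_0 x e^{-a_1 x}$, which are the partial derivatives of the model $f(x) = a_0 (1 - e^{-a_1 x})$. A minimal Gauss-Newton sketch for that model (the data points and starting guesses here are illustrative placeholders, not values from the lecture):

```python
import numpy as np

def gauss_newton(x, y, a0, a1, tol=1e-4, max_it=50):
    """Fit y = a0*(1 - exp(-a1*x)) by Gauss-Newton; tol is |eps_a| in percent."""
    for _ in range(max_it):
        f = a0 * (1 - np.exp(-a1 * x))   # current model values f(x_i)_j
        D = y - f                         # residual vector {D}
        # Jacobian [Z_j]: partial derivatives with respect to a0 and a1
        Z = np.column_stack([
            1 - np.exp(-a1 * x),          # df/da0
            a0 * x * np.exp(-a1 * x),     # df/da1
        ])
        # Normal equations [Z_j]^T[Z_j]{dA} = [Z_j]^T{D}
        dA = np.linalg.solve(Z.T @ Z, Z.T @ D)
        a0, a1 = a0 + dA[0], a1 + dA[1]
        # |eps_a|_k = |(a_{k,j+1} - a_{k,j}) / a_{k,j+1}| * 100%
        if np.all(np.abs(dA / np.array([a0, a1])) * 100 < tol):
            break
    return a0, a1

# Illustrative data and initial guesses only
x = np.array([0.25, 0.75, 1.25, 1.75, 2.25])
y = np.array([0.28, 0.57, 0.68, 0.74, 0.79])
print(gauss_newton(x, y, a0=1.0, a1=1.0))
```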
n   $x_i$   $y_i$   $1 - e^{-a_1 x_i}$   $a_0 x_i e^{-a_1 x_i}$