Least Square Methods
Suppose that the formula of f contains the parameters α1 , α2 , . . . , αK . Then the error quantity E(f )
that we wish to minimize will depend on these parameters, so we write it as E(α1 , α2 , . . . , αK ). From
multivariable calculus we know that to minimize E(α1 , α2 , . . . , αK ), we should solve the K equations
$$\frac{\partial E}{\partial \alpha_1} = 0, \qquad \frac{\partial E}{\partial \alpha_2} = 0, \qquad \ldots, \qquad \frac{\partial E}{\partial \alpha_K} = 0$$
for $\alpha_1, \alpha_2, \ldots, \alpha_K$.
If we choose the parameters of f in order to minimize the root-mean-square error, then the process is
called “least squares fitting”.
Minimizing the root-mean-square error
$$E_2(f) = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \bigl(y_i - f(x_i)\bigr)^2}$$
is equivalent to minimizing
$$\|r\|_2^2 = \sum_{i=1}^{n} \bigl(y_i - f(x_i)\bigr)^2$$
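The equivalence holds because the square root and the division by n are monotone transformations of the sum of squares, so both objectives are minimized by the same parameters. A small numerical sketch, using hypothetical data points and a one-parameter model f(x) = a·x:

```python
import numpy as np

# Hypothetical data points (x_i, y_i) for illustration.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.9, 2.2, 2.9])

# Evaluate both objectives over a grid of candidate slopes a.
a_grid = np.linspace(0.0, 2.0, 2001)
residuals = y[None, :] - a_grid[:, None] * x[None, :]  # r_i = y_i - f(x_i)

rms = np.sqrt((residuals**2).mean(axis=1))  # E_2(f), the RMS error
ssq = (residuals**2).sum(axis=1)            # ||r||_2^2, the sum of squares

# Both objectives are minimized at the same slope.
assert np.argmin(rms) == np.argmin(ssq)
```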
Write
$$E(\alpha_1, \alpha_2, \ldots, \alpha_K) = \sum_{i=1}^{n} \bigl(y_i - f(x_i)\bigr)^2$$
Then
$$\frac{\partial E}{\partial \alpha_k} = \sum_{i=1}^{n} 2\bigl(y_i - f(x_i)\bigr) \cdot \left(-\frac{\partial f(x_i)}{\partial \alpha_k}\right)$$
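This partial-derivative formula can be checked numerically against finite differences. A sketch, assuming a hypothetical quadratic model f(x) = α₀ + α₁x + α₂x² and made-up data:

```python
import numpy as np

# Hypothetical quadratic model with parameters alpha = (a0, a1, a2).
def f(alpha, x):
    return alpha[0] + alpha[1] * x + alpha[2] * x**2

def E(alpha, x, y):
    return np.sum((y - f(alpha, x))**2)

# Analytic gradient from the formula above:
# dE/d alpha_k = sum_i 2*(y_i - f(x_i)) * (-d f(x_i)/d alpha_k)
def grad_E(alpha, x, y):
    r = y - f(alpha, x)                             # residuals
    basis = np.vstack([np.ones_like(x), x, x**2])   # df/d alpha_k = x^k
    return -2.0 * basis @ r

# Made-up sample data and a test point in parameter space.
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.array([1.0, 1.3, 2.1, 3.2, 5.1])
alpha = np.array([0.5, -0.2, 1.1])

# Compare each component against a central finite difference.
h = 1e-6
for k in range(3):
    e = np.zeros(3)
    e[k] = h
    fd = (E(alpha + e, x, y) - E(alpha - e, x, y)) / (2 * h)
    assert abs(fd - grad_E(alpha, x, y)[k]) < 1e-4
```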
When f(x) is a polynomial of degree m (with m + 1 coefficients), the system of equations ∂E/∂αk = 0 can be written as a linear system Mα = β, where
$$
M = \begin{pmatrix}
\sum_{i=1}^{n} x_i^{2m} & \sum_{i=1}^{n} x_i^{2m-1} & \cdots & \sum_{i=1}^{n} x_i^{m} \\
\sum_{i=1}^{n} x_i^{2m-1} & \sum_{i=1}^{n} x_i^{2m-2} & \cdots & \sum_{i=1}^{n} x_i^{m-1} \\
\vdots & \vdots & \ddots & \vdots \\
\sum_{i=1}^{n} x_i^{m} & \sum_{i=1}^{n} x_i^{m-1} & \cdots & n
\end{pmatrix},
\qquad
\beta = \begin{pmatrix}
\sum_{i=1}^{n} x_i^{m} y_i \\
\sum_{i=1}^{n} x_i^{m-1} y_i \\
\vdots \\
\sum_{i=1}^{n} y_i
\end{pmatrix}
$$
and α is the array of parameters (i.e. coefficients of the polynomial) that we solve for.
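A minimal sketch of this procedure, assuming made-up data: assemble M and β for a degree-m polynomial and solve the linear system directly. (NumPy's `np.polyfit` serves only as a cross-check; the helper name is hypothetical.)

```python
import numpy as np

def polyfit_normal_equations(x, y, m):
    # powers[j, i] = x_i^(m - j), so alpha is ordered from the x^m
    # coefficient down to the constant term, matching the layout of M above.
    powers = np.vander(x, m + 1).T        # shape (m+1, n)
    M = powers @ powers.T                 # M[j, k] = sum_i x_i^(2m - j - k)
    beta = powers @ y                     # beta[j] = sum_i x_i^(m - j) * y_i
    return np.linalg.solve(M, beta)

# Made-up sample data, roughly quadratic.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.0, 4.8, 10.2, 17.1])

alpha = polyfit_normal_equations(x, y, 2)
assert np.allclose(alpha, np.polyfit(x, y, 2))
```

Note that for high-degree polynomials the matrix M becomes ill-conditioned, which is why library routines typically solve the least squares problem via QR or SVD factorizations rather than forming the normal equations explicitly.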