Lectures 17 To 20

Section A

Setting: the space of polynomials of degree at most n, with a weighted inner product

    (f, g) = ∫ f(x) g(x) w(x) dx,    w(x) > 0,

where w is a given weight function.
Orthogonality in an Inner Product Space

Theorem: Let u1, ..., un be non-zero orthogonal vectors, i.e. (ui, uj) = 0 for i ≠ j. Then {u1, ..., un} is linearly independent.

Proof: Suppose

    c1 u1 + c2 u2 + ... + cn un = 0.

Taking the inner product of both sides with u1,

    c1 (u1, u1) + c2 (u2, u1) + ... + cn (un, u1) = 0
    => c1 ||u1||^2 = 0            (since (uj, u1) = 0 for j ≠ 1)
    => c1 = 0.

Similarly c2 = 0, ..., cn = 0.  //
Consider a basis {u1, ..., un} of mutually orthogonal vectors. Any vector x can be written uniquely as

    x = c1 u1 + ... + cn un.

Taking the inner product with uj and using orthogonality,

    (x, uj) = cj (uj, uj)
    => cj = (x, uj) / ||uj||^2,    j = 1, ..., n.

A vector u is called normalized if ||u|| = 1.
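As a quick numerical illustration (a sketch with made-up vectors, using the standard dot product on R^3), the formula cj = (x, uj) / ||uj||^2 recovers the coordinates of x in an orthogonal basis:

```python
# Coordinates of x in an orthogonal basis via c_j = (x, u_j) / ||u_j||^2.
# The vectors here are illustrative only; any orthogonal basis works.

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def coords_in_orthogonal_basis(x, basis):
    # Valid only when the basis vectors are mutually orthogonal.
    return [dot(x, u) / dot(u, u) for u in basis]

# An orthogonal (not orthonormal) basis of R^3
u1, u2, u3 = [1.0, 1.0, 0.0], [1.0, -1.0, 0.0], [0.0, 0.0, 2.0]
x = [3.0, 1.0, 4.0]

c = coords_in_orthogonal_basis(x, [u1, u2, u3])
# Reconstruct x from the coefficients: x = c1 u1 + c2 u2 + c3 u3
recon = [c[0] * a + c[1] * b + c[2] * d for a, b, d in zip(u1, u2, u3)]
print(c)      # [2.0, 1.0, 2.0]
print(recon)  # [3.0, 1.0, 4.0]
```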
Desirable: a basis of vectors that are both orthogonal and normalized (an orthonormal basis).
Gram-Schmidt orthogonalization

Let {u1, ..., un} be a basis of V, dim(V) = n. Construct an orthonormal basis {v1, ..., vn} as follows.

Step 1:  v1 = u1 / ||u1||,    so span{v1} = span{u1}.

Step 2:  v2' = u2 - (u2, v1) v1,    v2 = v2' / ||v2'||.

Check orthogonality:

    (v2', v1) = (u2, v1) - (u2, v1)(v1, v1) = (u2, v1) - (u2, v1) = 0.

Also span{u1, u2} = span{v1, v2}.  (Exercise)

Step 3:  v3' = u3 - (u3, v1) v1 - (u3, v2) v2,    v3 = v3' / ||v3'||.

Check:

    (v3', v1) = (u3, v1) - (u3, v1)(v1, v1) - (u3, v2)(v2, v1) = 0,

and (v3', v2) = 0 similarly.

In general, vk' = uk - Σ_{i<k} (uk, vi) vi and vk = vk' / ||vk'||, giving an orthonormal basis with span{v1, ..., vk} = span{u1, ..., uk} for each k.
e.g. In R^2 with the standard dot product (x, y) = x^T y, apply the steps above to a given pair of vectors u1, u2 to obtain an orthonormal basis. (Exercise: carry this out for specific vectors.)
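The steps above can be sketched in code (a minimal version using the standard dot product on R^n, with vectors as plain lists):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors (classical Gram-Schmidt)."""
    ortho = []
    for u in vectors:
        # Subtract the components of u along the already-built v_i
        w = list(u)
        for v in ortho:
            c = dot(u, v)                      # (u_k, v_i); the v_i are unit vectors
            w = [wi - c * vi for wi, vi in zip(w, v)]
        norm = math.sqrt(dot(w, w))            # ||v_k'||
        ortho.append([wi / norm for wi in w])  # v_k = v_k' / ||v_k'||
    return ortho

vs = gram_schmidt([[1.0, 1.0], [1.0, 0.0]])
print(vs)  # v1 = (1/sqrt(2), 1/sqrt(2)); v2 orthogonal to v1, unit length
```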
Lecture 17, Section B

Orthogonality: if (ui, uj) = 0 for all i ≠ j and the ui are non-zero, then u1, ..., un are linearly independent.

Consider a basis {u1, ..., un} which is orthogonal. Then

    x = c1 u1 + ... + cn un

with unique coefficients. Taking the inner product with u1,

    (x, u1) = c1 (u1, u1) + c2 (u2, u1) + ... + cn (un, u1) = c1 ||u1||^2
    => c1 = (x, u1) / ||u1||^2,

and in general cj = (x, uj) / ||uj||^2.

A vector u is called normalized if ||u|| = 1.

Desirable: orthogonal + normalized = orthonormal vectors.
Gram-Schmidt (again):

Step 1: ||u1|| ≠ 0, so set v1 = u1 / ||u1||; then span{u1, ..., un} = span{v1, u2, ..., un}.

Step 2: v2' = u2 - (u2, v1) v1, then v2 = v2' / ||v2'||, so ||v2|| = 1. Orthogonality:

    (v2', v1) = (u2, v1) - (u2, v1)(v1, v1) = (u2, v1) - (u2, v1) = 0.

span{u1, ..., un} = span{v1, u2, ..., un} = span{v1, v2, u3, ..., un}.

Step 3: v3' = u3 - (u3, v1) v1 - (u3, v2) v2, then v3 = v3' / ||v3'||, with

    (v3', v1) = 0,    (v3', v2) = 0.    [Exercise]

In general,

    vk' = uk - Σ_{i=1}^{k-1} (uk, vi) vi,    vk = vk' / ||vk'||.
e.g. (worked example) In R^2 with a weighted inner product of the form (x, y) = x^T A y for a symmetric positive-definite matrix A: compute ||u1||^2 = (u1, u1) and set v1 = u1 / ||u1||; then v2' = u2 - (u2, v1) v1 and v2 = v2' / ||v2'||, evaluating every inner product through the matrix A.
Orthogonal polynomials (Legendre): on the space of polynomials, take

    (f, g) = ∫_{-1}^{1} f(x) g(x) dx.

Apply Gram-Schmidt to {1, x, x^2, ...}:

    ||u1||^2 = (1, 1) = ∫_{-1}^{1} 1 dx = 2,    v1 = 1/√2.

    v2' = x - (x, v1) v1 = x,    since (x, 1) = ∫_{-1}^{1} x dx = 0;
    ||v2'||^2 = ∫_{-1}^{1} x^2 dx = 2/3,    v2 = √(3/2) x.

    v3' = x^2 - (x^2, v1) v1 - (x^2, v2) v2 = x^2 - 1/3,
    since (x^2, v1) v1 = (1/2) ∫_{-1}^{1} x^2 dx = 1/3 and (x^2, v2) = 0.

These are scalar multiples of the Legendre polynomials. With the weighted inner product

    (f, g) = ∫_{-1}^{1} w(x) f(x) g(x) dx,    w(x) = 1/√(1 - x^2),

the same process produces the Chebyshev polynomials.
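A quick numerical check of the construction (a sketch using a simple midpoint-rule quadrature on [-1, 1]): the polynomials 1, x, x^2 - 1/3 come out mutually orthogonal, and ||1||^2 = 2.

```python
def inner(f, g, n=10000):
    # Midpoint-rule approximation of (f, g) = integral of f(x) g(x) over [-1, 1]
    h = 2.0 / n
    total = 0.0
    for k in range(n):
        x = -1.0 + (k + 0.5) * h
        total += f(x) * g(x)
    return total * h

p0 = lambda x: 1.0
p1 = lambda x: x
p2 = lambda x: x * x - 1.0 / 3.0

print(inner(p0, p0))  # ~ 2
print(inner(p0, p1))  # ~ 0
print(inner(p0, p2))  # ~ 0
print(inner(p1, p2))  # ~ 0
```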
Lecture 18, Section A

Best approximation

Let V be an inner product space, W a subspace of V, and x ∈ V. Geometric picture: the best approximation w* ∈ W is the point for which the error x - w* is orthogonal to W:

    (x - w*, w) = 0    for all w ∈ W.
Definition: Let W be a subspace of V and x ∈ V. Then w* ∈ W is the best approximation of x in W if

    ||x - w*|| ≤ ||x - w||    for all w ∈ W.

Theorem: w* is the best approximation of x in W if and only if

    (x - w*, w) = 0    for all w ∈ W.
Proof (⇐): Assume (x - w*, w) = 0 for all w ∈ W. For any w ∈ W, since w* - w ∈ W,

    ||x - w||^2 = ||(x - w*) + (w* - w)||^2
               = ||x - w*||^2 + 2 (x - w*, w* - w) + ||w* - w||^2
               = ||x - w*||^2 + ||w* - w||^2
               ≥ ||x - w*||^2.

Hence ||x - w*|| ≤ ||x - w|| for all w ∈ W, i.e. w* attains the minimum.
Conversely, assume ||x - w*|| ≤ ||x - w|| for all w ∈ W. To prove: (x - w*, w) = 0 for all w ∈ W. For any u ∈ W, take w = w* - u ∈ W:

    ||x - w||^2 = ||(x - w*) + u||^2 = ||x - w*||^2 + 2 (x - w*, u) + ||u||^2.

Using ||x - w||^2 ≥ ||x - w*||^2,

    ||u||^2 + 2 (x - w*, u) ≥ 0    for all u ∈ W.

Now fix w ∈ W and put u = t w, t ∈ R:

    t^2 ||w||^2 + 2 t (x - w*, w) ≥ 0    for all t.

If (x - w*, w) ≠ 0, choosing t small in magnitude and of the opposite sign to (x - w*, w) makes the left side negative, a contradiction. Hence

    (x - w*, w) = 0    for all w ∈ W.  //
Theorem: The best approximation of x in W is unique.

Proof: Assume w1 and w2 are two best approximations of x in W. Then

    (x - w1, w) = 0  and  (x - w2, w) = 0    for all w ∈ W.

Take w = w1 - w2 ∈ W:

    (x - w1, w1 - w2) = 0,    (x - w2, w1 - w2) = 0.

Subtracting,

    (w2 - w1, w1 - w2) = 0  =>  ||w1 - w2||^2 = 0  =>  w1 = w2.  //

When W has an orthogonal basis {w1, ..., wn}, the best approximation is given by an explicit formula (next lecture); the proof amounts to showing (x - w*, wi) = 0 for each i.
Lecture 18, Section B

Picture: x ∈ V, w* ∈ W, and the error x - w* is perpendicular to W:

    (x - w*, w) = 0    for all w ∈ W.

Definition (Best approximation): Let x ∈ V and let W be a subspace of V. Then w* ∈ W is the best approximation of x in W if

    ||x - w*|| ≤ ||x - w||    for all w ∈ W.

Theorem: w* is the best approximation of x in W if and only if

    (x - w*, w) = 0    for all w ∈ W.
Proof: We must show that ||x - w*|| ≤ ||x - w|| for all w ∈ W if and only if (x - w*, w) = 0 for all w ∈ W. Note that any w ∈ W can be written as w = w* - u with u = w* - w ∈ W.

(⇐) Assume (x - w*, w) = 0 for all w ∈ W. Then for any w ∈ W,

    ||x - w||^2 = ||x - w*||^2 + 2 (x - w*, w* - w) + ||w* - w||^2
               = ||x - w*||^2 + ||w* - w||^2,

so ||x - w||^2 ≥ ||x - w*||^2 for all w ∈ W, i.e.

    ||x - w*|| ≤ ||x - w||.
(⇒) Conversely, assume ||x - w*|| ≤ ||x - w|| for all w ∈ W. To prove: (x - w*, w) = 0 for all w ∈ W. For u ∈ W,

    ||x - (w* - u)||^2 = ||x - w*||^2 + 2 (x - w*, u) + ||u||^2,

and since w* - u ∈ W the left side is ≥ ||x - w*||^2, giving

    ||u||^2 + 2 (x - w*, u) ≥ 0    for all u ∈ W.        (*)

For any given w ∈ W, define u = -t (x - w*, w) w with t > 0. Then

    2 (x - w*, u) = -2 t (x - w*, w)^2,
    ||u||^2 = t^2 (x - w*, w)^2 ||w||^2,

so (*) becomes

    t^2 (x - w*, w)^2 ||w||^2 - 2 t (x - w*, w)^2 ≥ 0
    => t (x - w*, w)^2 ( t ||w||^2 - 2 ) ≥ 0.

Choosing 0 < t < 2 / ||w||^2 makes the second factor negative, forcing (x - w*, w)^2 ≤ 0, i.e.

    (x - w*, w) = 0    for all w ∈ W.  //
Theorem: The best approximation of x in W is unique.

Proof: If w1, w2 ∈ W are both best approximations of x in W, then

    (x - w1, w) = 0  and  (x - w2, w) = 0    for all w ∈ W.

Take w = w1 - w2 ∈ W:

    (x - w1, w1 - w2) = 0  and  (x - w2, w1 - w2) = 0
    => (w2 - w1, w1 - w2) = 0
    => ||w1 - w2||^2 = 0
    => w1 = w2.  //
Lecture 19, Section B

Best approximation of a vector x in a subspace W:

    (x - w*, w) = 0    for all w ∈ W.

Theorem: Let W be a subspace of V with an orthogonal basis {w1, ..., wn}. Then the best approximation of x in W is

    w* = Σ_{i=1}^{n} [ (x, wi) / ||wi||^2 ] wi.

Proof: It suffices to check (x - w*, wi) = 0 for i = 1, ..., n:

    (x - w*, wi) = (x, wi) - Σ_{j=1}^{n} [ (x, wj) / ||wj||^2 ] (wj, wi)
                 = (x, wi) - [ (x, wi) / ||wi||^2 ] ||wi||^2        ((wj, wi) = 0 for j ≠ i)
                 = (x, wi) - (x, wi) = 0.  //

So in any inner product space we can approximate a vector using a subspace.
e.g. V = R^2 with the dot product, x = (1, 2), W = span{(3, 0)}. Here w1 = (3, 0), so

    w* = [ (x, w1) / ||w1||^2 ] w1 = (3 / 9) (3, 0) = (1, 0).
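The projection formula is easy to check numerically (a sketch with the R^2 example above, plus the residual-orthogonality condition):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def best_approx(x, basis):
    # w* = sum_i (x, w_i)/||w_i||^2 * w_i  -- the basis must be orthogonal
    w = [0.0] * len(x)
    for wi in basis:
        c = dot(x, wi) / dot(wi, wi)
        w = [a + c * b for a, b in zip(w, wi)]
    return w

x = [1.0, 2.0]
ws = best_approx(x, [[3.0, 0.0]])
print(ws)  # [1.0, 0.0]

# The residual x - w* is orthogonal to the subspace:
r = [a - b for a, b in zip(x, ws)]
print(dot(r, [3.0, 0.0]))  # 0.0
```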
e.g. V = R^3, x = (1, 2, 3), W = the span of two orthogonal vectors w1, w2; then

    w* = [ (x, w1) / ||w1||^2 ] w1 + [ (x, w2) / ||w2||^2 ] w2.

e.g. V = the space of polynomials with (f, g) = ∫ f(x) g(x) dx, W = the quadratic polynomials: the best quadratic approximation of a given function is obtained from the same formula once an orthogonal basis of W is in hand.
Lecture 19, Section A

Best approximation: x ∈ V, W a subspace, w* ∈ W such that

    ||x - w*|| ≤ ||x - w||    for all w ∈ W
    <=>  (x - w*, w) = 0      for all w ∈ W.

Find a formula for w*.

Theorem: Let x ∈ V and let {w1, ..., wn} be an orthogonal basis of W. Then

    w* = Σ_{j=1}^{n} [ (x, wj) / ||wj||^2 ] wj.

Proof: We need to prove (x - w*, w) = 0 for all w ∈ W; it is enough to check this on the basis, i.e. (x - w*, wj) = 0 for j = 1, ..., n:

    (x - w*, wj) = (x, wj) - Σ_{i=1}^{n} [ (x, wi) / ||wi||^2 ] (wi, wj)
                 = (x, wj) - [ (x, wj) / ||wj||^2 ] ||wj||^2        ((wi, wj) = 0 for i ≠ j)
                 = (x, wj) - (x, wj) = 0.  //
e.g. 1: V = R^2, x given, W = span{w1}; then w* = [ (x, w1) / ||w1||^2 ] w1.

e.g. 2: W = the span of an orthogonal set of vectors; w* built term by term minimizes ||x - w|| over w ∈ W.
e.g. 3: V = the set of all polynomials, W = P2, the set of all quadratic polynomials, with

    (p1, p2) = ∫_{-1}^{1} p1(x) p2(x) dx.

An orthogonal basis of W: {1, x, x^2 - 1/3} (from the Legendre construction), with ||1||^2 = 2 and ||x||^2 = 2/3.
Lecture 20

Continuous least squares: take u = x^4 and approximate it by a quadratic w = a + b x + c x^2:

    ||u - w*|| ≤ ||u - w||    for all w ∈ P2,

i.e. minimize over a, b, c

    f(a, b, c) = ∫ ( u(x) - a - b x - c x^2 )^2 dx,

by setting the partial derivatives with respect to a, b, c to zero, or equivalently by using the projection formula with an orthogonal basis.
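Assuming the interval is [-1, 1], the projection formula with the orthogonal basis {1, x, x^2 - 1/3} gives the minimizer directly; a numerical sketch:

```python
def inner(f, g, n=20000):
    # (f, g) = integral of f(x) g(x) over [-1, 1], midpoint rule
    h = 2.0 / n
    return sum(f(-1.0 + (k + 0.5) * h) * g(-1.0 + (k + 0.5) * h)
               for k in range(n)) * h

u = lambda x: x ** 4
basis = [lambda x: 1.0, lambda x: x, lambda x: x * x - 1.0 / 3.0]

# c_j = (u, p_j) / ||p_j||^2 for each orthogonal basis polynomial p_j
coeffs = [inner(u, p) / inner(p, p) for p in basis]
print(coeffs)  # ~ [1/5, 0, 6/7]  =>  w*(x) = 1/5 + (6/7)(x^2 - 1/3) = (6/7) x^2 - 3/35
```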
Discrete least squares method

Given data (xi, yi), i = 1, ..., m, fit a straight line

    f(x) = a + b x:

find a and b minimizing

    Σ_{i=1}^{m} ( a + b xi - yi )^2.

In matrix form, with A the m×2 matrix whose i-th row is [1, xi] and y = (y1, ..., ym)^T, the normal equations are

    A^T A [a; b] = A^T y.

The same idea works for other models, e.g.

    a sin(xi) + b cos(xi) ≈ yi,    i = 1, ..., m,

or with more basis functions, a sin(xi) + b cos(xi) + c sin(2 xi) + d cos(2 xi) ≈ yi; again the coefficients satisfy

    A^T A [coefficients] = A^T y.
General case: given data (xi, yi), i = 1, ..., m, and a model

    a f1(x) + b f2(x),

require a f1(xi) + b f2(xi) ≈ yi for i = 1, ..., m, i.e. minimize

    Σ_{i=1}^{m} ( a f1(xi) + b f2(xi) - yi )^2.

With A the m×2 matrix with entries A_{i1} = f1(xi), A_{i2} = f2(xi),

    A^T A [a; b] = A^T y.
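A minimal sketch of the discrete case (made-up data; the 2×2 normal equations are solved by Cramer's rule):

```python
# Fit f(x) = a + b*x by least squares: solve (A^T A)[a; b] = A^T y,
# where row i of A is [1, x_i].
def fit_line(xs, ys):
    m = len(xs)
    s_x = sum(xs); s_xx = sum(x * x for x in xs)
    s_y = sum(ys); s_xy = sum(x * y for x, y in zip(xs, ys))
    # Normal equations:  [m    s_x ] [a] = [s_y ]
    #                    [s_x  s_xx] [b]   [s_xy]
    det = m * s_xx - s_x * s_x
    a = (s_y * s_xx - s_x * s_xy) / det
    b = (m * s_xy - s_x * s_y) / det
    return a, b

# Data lying exactly on y = 1 + 2x is recovered exactly
a, b = fit_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
print(a, b)  # 1.0 2.0
```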
Lecture: Least Square Methods
http://hkkaushik.wordpress.com/courses/mtl107/

Least Square methods

    Ax ≈ b,    b ∈ R^m,  x ∈ R^n,  m > n.
Origins of linear least squares problems: data fitting.

Origins of linear least squares problems (cont.): image morphing.

(Numerical Methods for Computational Science and Engineering, Introduction)
Linear least squares

We introduce the residual

    r = b - Ax ∈ R^m,

and pose

    min_x φ(x),    φ(x) = (1/2) ||r||_2^2.

Necessary conditions for a minimum:

    grad φ(x) = 0  <=>  ∂φ/∂x_k (x) = 0,    k = 1, ..., n.
Linear least squares (cont.)

(Recall grad f = (∂f/∂x1, ..., ∂f/∂xn)^T.)

    φ(x) = (1/2) r^T r = (1/2) (Ax - b)^T (Ax - b)
         = (1/2) ( x^T A^T A x - x^T A^T b - b^T A x + b^T b )
         = (1/2) x^T A^T A x - (A^T b)^T x + (1/2) b^T b.

    grad φ(x) = A^T A x - A^T b = A^T (Ax - b) = 0.

For any δx,

    φ(x + δx) = (1/2) || b - Ax - A δx ||_2^2
              = (1/2) || b - Ax ||_2^2 - (b - Ax)^T A δx + (1/2) || A δx ||_2^2
              = φ(x) + (1/2) || A δx ||_2^2  ≥  φ(x),

using A^T (b - Ax) = 0 at a solution of the normal equations.
Linear least squares (cont.)

    B := A^T A,    B ∈ R^{n×n}.

A has maximal rank  =>  B is SPD. So, in particular,

    (1/2) || A δx ||_2^2 > 0    for all δx ≠ 0,

and the minimizer is unique.
Theorem (Least squares): The problem min_x || b - Ax ||_2, where A has full column rank, has a unique solution that satisfies the normal equations

    (A^T A) x = A^T b.

The pseudoinverse is

    A^+ = (A^T A)^{-1} A^T ∈ R^{n×m}.
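A sketch comparing the normal-equations solution with a library least-squares solver (assuming NumPy is available; A is a random full-rank tall matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3))   # m = 10 > n = 3; full column rank (almost surely)
b = rng.standard_normal(10)

# Normal equations: (A^T A) x = A^T b
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# Library solver (QR/SVD based, numerically preferable to forming A^T A)
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(x_ne, x_ls))                    # True
print(np.allclose(A.T @ (b - A @ x_ne), 0))       # True: A^T r = 0
```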
Geometrical interpretation

From

    grad φ(x) = A^T A x - A^T b = A^T (Ax - b) = 0

we see that A^T r = 0: the residual r = b - Ax is orthogonal to the columns of A.
Example 6.1 from Ascher-Greif

    A = [ 1 0 1 ; 2 3 5 ; 5 3 2 ; 3 5 4 ; 1 6 3 ] ∈ R^{5×3},
    b = ( 4, 2, 5, 2, 1 )^T ∈ R^5.
Example 6.2 from Ascher-Greif: Linear regression

Consider fitting a given data set of m pairs (ti, bi) by a straight line:

    i  | 1   | 2   | 3
    ti | 0.0 | 1.0 | 2.0

Form B := A^T A and A^T b, and solve the normal equations.
Polynomial data fitting

    v(t) = x1 + x2 t + ... + xn t^{n-1}
    =>  v(ti) ≈ bi,    i = 1, ..., m.

The matrix that we now obtain is called a Vandermonde matrix:

    A = [ 1  t1  ...  t1^{n-2}  t1^{n-1}
          1  t2  ...  t2^{n-2}  t2^{n-1}
          :   :        :         :
          1  tm  ...  tm^{n-2}  tm^{n-1} ]  ∈ R^{m×n}.
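Building the Vandermonde system in code (a sketch, assuming NumPy; `np.vander` with `increasing=True` matches the column order above, and the data values are made up):

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0])
b = np.array([1.0, 2.0, 0.0, 5.0])    # made-up data values b_i
n = 3                                  # fit a quadratic: x1 + x2 t + x3 t^2

A = np.vander(t, n, increasing=True)   # rows [1, t_i, t_i^2]
x, *_ = np.linalg.lstsq(A, b, rcond=None)

print(A.shape)                                   # (4, 3)
print(np.allclose(A.T @ (b - A @ x), 0))         # True: residual orthogonal to columns
```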
(The same least-squares setup applies to other basis functions, e.g. a model of the form a e^x + b e^{2x}.)