Exam2 2016answers

This document contains the questions and solutions for an examination in Numerical Linear Algebra. It provides information about the exam such as date, time, examiner contact details, and grading criteria. The questions cover topics in numerical linear algebra including computing eigenvalues and condition numbers, matrix decompositions, linear systems of equations, and matrix factorizations. Students are instructed to show their work, answer clearly on one side of each page, and organize their answers by question number.


Questions for the course

Numerical Linear Algebra

TMA265/MMA600

Date: January 5, 2017, Time: 14.00 - 18.00

• Examiner: Larisa Beilina, tel. 070-4177036 or at work 031-772 3567.


• Results: examination results can be collected at the student office of the Department of Mathematics, daily 12.30-13.00.
• Grades: a pass (grade G) requires 15 points, counted together with the points from the homework assignments and computer exercises.
• Solutions will be posted at the end of the exam on the course homepage.
• Aids: handwritten notes on one side of an A4 sheet. Simple (not advanced) calculators are also allowed.

Instructions
• Answer each question carefully and clearly.
• Write on one side of the sheet only. Do not use a red pen. Do not answer more than one question per page.
• Sort your answers in the order in which the questions appear. Mark the answered questions on the cover. Count the number of sheets you hand in and fill in the number of every page on the cover.

Question 1
• 1. Find the eigenvalues of the matrix A defined as
$$A = \begin{pmatrix} 7 & 1 \\ 1 & 3 \end{pmatrix}.$$
Using the information about the eigenvalues of A, check whether A is a symmetric positive definite (s.p.d.) matrix or not. (1p)
• 2. Compute the condition number of the matrix A defined above in the two-norm $\| \cdot \|_2$. (2p)
• 3. Compute $\|A\|_\infty$ and $\|A\|_1$ for the matrix A defined as
$$A = \begin{pmatrix} 100 & 300 & -50 \\ -30 & 20 & -70 \\ -100 & 50 & 10 \end{pmatrix}.$$
Find the conjugate transpose matrix $A^*$ of this matrix A. (1p)

Question 2
• 1. Suppose that all leading principal submatrices of an n × n matrix A are nonsingular. Prove that there exist a unique unit lower triangular matrix L and a unique nonsingular upper triangular matrix U such that A = LU. (3p)
• 2. Suppose that A is an invertible square matrix and u, v are vectors. Suppose furthermore that $1 + v^T A^{-1} u \neq 0$. Then the Sherman-Morrison formula states that
$$(A + uv^T)^{-1} = A^{-1} - \frac{A^{-1} u v^T A^{-1}}{1 + v^T A^{-1} u}.$$
Here, $uv^T$ is the outer product of the two vectors u and v.
Prove that $YX = I$, where $Y = A^{-1} - \frac{A^{-1} u v^T A^{-1}}{1 + v^T A^{-1} u}$ (the right-hand side of the Sherman-Morrison formula) and $X = A + uv^T$. (2p)

Question 3
• 1. Consider the problem of solving the linear system of equations Ax = b. Let x̃ be an approximate solution of this system and let δx = x̃ − x. Derive an estimate for δx using the definition of the residual r = Ax̃ − b. (2p)
• 2. Give the definition of the relative condition number of the matrix A. (1p)

Question 4
• 1. Apply the method of normal equations to solve the linear least squares problem $\min_c \|Ac - y\|_2$ of fitting the polynomial
$$f(x, c) = \sum_{i=1}^{3} c_i x^{i-1}$$
to the data points $(x_i, y_i)$, $i = 1, \ldots, m$. (2p)

Question 5
• Transform the given matrix A to tridiagonal form using a Householder transformation:
$$A = \begin{pmatrix} 10 & 4 & 3 \\ 4 & 15 & 7 \\ 3 & 7 & 21 \end{pmatrix}.$$
(2p)
• Transform the same matrix A to tridiagonal form using a Givens rotation. (2p)

Question 6
• 1. Briefly describe the algorithm of tridiagonal QR iteration for the solution of the symmetric eigenvalue problem. (1p)
• 2. Give the definition of the Rayleigh quotient and discuss how and where the Rayleigh quotient can be applied. (2p)

Question 7
• 1. Describe the common structure of all algorithms, except Jacobi's method, for computing the SVD decomposition of a symmetric matrix A. (2p)
• 2. Present the Gauss-Seidel method for the iterative solution of the linear system Ax = b. (2p)

Numerical Linear Algebra

TMA265/MMA600
Solutions to the examination of 5 January 2017
Question 1
1. We solve the characteristic equation det(A − λI) = 0:
$$\det \begin{pmatrix} 7 - \lambda & 1 \\ 1 & 3 - \lambda \end{pmatrix} = 0.$$
Expanding gives $\lambda^2 - 10\lambda + 20 = 0$, with roots $\lambda_1 = 5 + \sqrt{5}$ and $\lambda_2 = 5 - \sqrt{5}$. Since A is symmetric and λ1 > 0, λ2 > 0, the matrix A is symmetric positive definite.
2. The condition number of A in the 2-norm is defined as $\kappa(A) = \|A^{-1}\|_2 \cdot \|A\|_2$. The inverse of A is
$$A^{-1} = \begin{pmatrix} 0.15 & -0.05 \\ -0.05 & 0.35 \end{pmatrix}.$$
Since $\|A^{-1}\|_2 \approx 0.3618$ and $\|A\|_2 \approx 7.2361$, we get $\kappa(A) \approx 2.618$.
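As a numerical cross-check (not part of the hand computation the exam expects), the eigenvalue and condition-number calculations can be reproduced with NumPy; a minimal sketch:

```python
import numpy as np

A = np.array([[7.0, 1.0], [1.0, 3.0]])

lam = np.linalg.eigvalsh(A)              # eigenvalues in ascending order
print(lam)                               # approx [2.7639, 7.2361], i.e. 5 -/+ sqrt(5)

# A is s.p.d. iff it is symmetric and all eigenvalues are positive.
is_spd = np.allclose(A, A.T) and np.all(lam > 0)
print(is_spd)                            # True

kappa = np.linalg.cond(A, 2)             # ||A||_2 * ||A^{-1}||_2
print(kappa)                             # approx 2.618
```

For a symmetric matrix the 2-norm condition number is simply the ratio of the largest to the smallest absolute eigenvalue, which is why `cond` agrees with the hand computation above.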
3. We use the definition of $A^*$: since A is real, $A^* = A^T$, and thus
$$A^* = \begin{pmatrix} 100 & -30 & -100 \\ 300 & 20 & 50 \\ -50 & -70 & 10 \end{pmatrix}.$$
$\|A\|_1 = \max(100 + |-30| + |-100|,\; 300 + 20 + 50,\; |-50| + |-70| + 10) = \max(230, 370, 130) = 370$ (maximum absolute column sum),
$\|A\|_\infty = \max(100 + 300 + |-50|,\; |-30| + 20 + |-70|,\; |-100| + 50 + 10) = \max(450, 120, 160) = 450$ (maximum absolute row sum).
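The norm computations can be checked numerically; a short sketch using NumPy:

```python
import numpy as np

A = np.array([[100, 300, -50],
              [-30,  20, -70],
              [-100, 50,  10]], dtype=float)

print(np.linalg.norm(A, 1))       # 370.0 (max absolute column sum)
print(np.linalg.norm(A, np.inf))  # 450.0 (max absolute row sum)
print(A.conj().T)                 # conjugate transpose; equals A^T since A is real
```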
Question 2
1. See Lecture 3 and the course book.
We use induction on n. For 1-by-1 matrices we have a = 1 · a. To prove the statement for n-by-n matrices Ã, we need to find unique (n−1)-by-(n−1) triangular matrices L and U, unique (n−1)-by-1 vectors l and u, and a unique nonzero scalar η such that
$$\tilde{A} = \begin{pmatrix} A & b \\ c^T & \delta \end{pmatrix} = \begin{pmatrix} L & 0 \\ l^T & 1 \end{pmatrix} \times \begin{pmatrix} U & u \\ 0 & \eta \end{pmatrix} = \begin{pmatrix} LU & Lu \\ l^T U & l^T u + \eta \end{pmatrix}.$$
By induction, unique L and U exist such that A = LU. Now let $u = L^{-1} b$, $l^T = c^T U^{-1}$, and $\eta = \delta - l^T u$, all of which are unique. The diagonal entries of U are nonzero by induction, and $\eta \neq 0$ since $0 \neq \det \tilde{A} = \det(U) \cdot \eta$.
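The inductive proof translates directly into a recursive construction. The sketch below (my own illustration, assuming all leading principal submatrices are nonsingular so that no pivoting is needed) builds L and U exactly as in the bordering argument:

```python
import numpy as np

def lu_no_pivot(A):
    """LU factorization A = LU with unit lower triangular L, following
    the bordering construction used in the inductive proof."""
    n = A.shape[0]
    if n == 1:
        return np.array([[1.0]]), A.astype(float).copy()
    # Write A~ = [[A', b], [c^T, delta]] with A' the leading (n-1)x(n-1) block.
    L, U = lu_no_pivot(A[:-1, :-1])
    b, c, delta = A[:-1, -1], A[-1, :-1], A[-1, -1]
    u = np.linalg.solve(L, b)            # u = L^{-1} b
    l = np.linalg.solve(U.T, c)          # l^T = c^T U^{-1}
    eta = delta - l @ u                  # eta != 0 when det(A~) != 0
    Lt = np.block([[L, np.zeros((n - 1, 1))],
                   [l[None, :], np.ones((1, 1))]])
    Ut = np.block([[U, u[:, None]],
                   [np.zeros((1, n - 1)), np.array([[eta]])]])
    return Lt, Ut

A = np.array([[4., 3., 2.], [2., 4., 1.], [1., 2., 3.]])
L, U = lu_no_pivot(A)
print(np.allclose(L @ U, A))             # True
```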
2. See Lecture 3.
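A quick numerical check of the Sherman-Morrison identity $YX = I$ (a sketch with an arbitrary small example; the matrix and vectors are my own choices satisfying $1 + v^T A^{-1} u \neq 0$):

```python
import numpy as np

A = np.array([[4., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
u = np.array([1., 2., 0.])
v = np.array([0., 1., 1.])

Ainv = np.linalg.inv(A)
denom = 1 + v @ Ainv @ u                 # must be nonzero for the formula to apply
Y = Ainv - np.outer(Ainv @ u, v @ Ainv) / denom   # right-hand side of Sherman-Morrison
X = A + np.outer(u, v)                   # rank-one update of A

print(np.allclose(Y @ X, np.eye(3)))     # True: Y is indeed (A + u v^T)^{-1}
```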

Question 3
1. See Lecture 5 and the course book.
To solve any equation f(x) = 0, we can try to use Newton's method to improve an approximate solution $x_i$ to get $x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}$. Applying this to f(x) = Ax − b yields one step of iterative refinement:

    r = A x_i − b
    solve A d = r for d
    x_{i+1} = x_i − d
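The refinement step above can be sketched in NumPy (the test matrix and right-hand side are my own illustrative choices; in exact arithmetic a single step already recovers the exact solution, since d = x_i − x):

```python
import numpy as np

def refine(A, b, x, steps=1):
    """Iterative refinement for Ax = b: Newton's method applied to f(x) = Ax - b."""
    for _ in range(steps):
        r = A @ x - b                  # residual
        d = np.linalg.solve(A, r)      # solve A d = r
        x = x - d
    return x

A = np.array([[7., 1.], [1., 3.]])
b = np.array([8., 4.])
x0 = np.array([1.1, 0.9])              # perturbed guess; exact solution is [1, 1]
print(refine(A, b, x0))                # approx [1, 1]
```

In practice the point of iterative refinement is that the residual r can be computed in higher precision while the solve reuses an existing (possibly inaccurate) factorization of A.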
2. See Lecture 4 and the course book. The relative condition number is given as $k_{CR}(A) = \| \, |A^{-1}| \cdot |A| \, \|$.
Question 4
1. See Lecture 6.
We can write the polynomial as $f(x, c) = \sum_{i=1}^{3} c_i x^{i-1} = c_1 + c_2 x + c_3 x^2$, and our data fitting problem $\min_c \|Ac - y\|_2$ for this polynomial takes the form
$$\begin{pmatrix} 1 & x_1 & x_1^2 \\ 1 & x_2 & x_2^2 \\ 1 & x_3 & x_3^2 \\ \vdots & \vdots & \vdots \\ 1 & x_m & x_m^2 \end{pmatrix} \cdot \begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} = \begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_m \end{pmatrix}.$$
The method of normal equations for $\min_c \|Ac - y\|_2$ gives
$$A^T A c = A^T y,$$
and then
$$c = (A^T A)^{-1} A^T y.$$
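A small worked instance of the normal equations (the sample data are my own, generated from a known quadratic so the recovered coefficients are easy to verify):

```python
import numpy as np

# Fit f(x, c) = c1 + c2 x + c3 x^2 to sample data via the normal equations.
x = np.array([0., 1., 2., 3., 4.])
y = 2 - 3 * x + 0.5 * x**2               # data generated by a known quadratic

Amat = np.vander(x, 3, increasing=True)  # Vandermonde columns: 1, x, x^2
c = np.linalg.solve(Amat.T @ Amat, Amat.T @ y)
print(c)                                 # approx [2, -3, 0.5]
```

Note that in floating-point practice QR factorization is usually preferred over the normal equations, since forming $A^T A$ squares the condition number of A.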

Question 5
• 1. To get a tridiagonal matrix we need to zero out the (3, 1) and (1, 3) entries. We apply the procedure of Lecture 8 for tridiagonalization using the Householder transformation.
First we compute α and r:
$$\alpha = -\operatorname{sgn}(a_{21}) \sqrt{\sum_{j=2}^{n} a_{j1}^2} = -\operatorname{sgn}(4) \sqrt{4^2 + 3^2} = -5,$$
$$r = \sqrt{\frac{1}{2}(\alpha^2 - a_{21}\alpha)} = 3\sqrt{\frac{5}{2}}.$$
From α and r, we construct the vector
$$v^{(1)} = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix},$$
where $v_1 = 0$, $v_2 = \frac{a_{21} - \alpha}{2r}$, $v_3 = \frac{a_{31}}{2r}$.
In our case $v^T = \left(0, \frac{3}{\sqrt{10}}, \frac{1}{\sqrt{10}}\right)$.
Then we compute
$$P_1 = I - 2 v^{(1)} (v^{(1)})^T = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -4/5 & -3/5 \\ 0 & -3/5 & 4/5 \end{pmatrix}$$
and obtain the tridiagonal matrix $A^{(1)}$ as
$$A^{(1)} = P_1 A P_1 \approx \begin{pmatrix} 10 & -5 & 0 \\ -5 & 23.88 & -4.84 \\ 0 & -4.84 & 12.12 \end{pmatrix}.$$
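The Householder step can be verified numerically; a sketch that follows the formulas for α, r, and v step by step:

```python
import numpy as np

A = np.array([[10., 4., 3.],
              [ 4., 15., 7.],
              [ 3., 7., 21.]])

a21, a31 = A[1, 0], A[2, 0]
alpha = -np.sign(a21) * np.hypot(a21, a31)     # -5
r = np.sqrt(0.5 * (alpha**2 - a21 * alpha))    # 3 * sqrt(5/2)
v = np.array([0.0, (a21 - alpha) / (2 * r), a31 / (2 * r)])  # unit vector

P1 = np.eye(3) - 2 * np.outer(v, v)            # Householder reflector
A1 = P1 @ A @ P1
print(np.round(A1, 2))   # approx [[10, -5, 0], [-5, 23.88, -4.84], [0, -4.84, 12.12]]
```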

• 2. To obtain the tridiagonal form of the matrix
$$A = \begin{pmatrix} 10 & 4 & 3 \\ 4 & 15 & 7 \\ 3 & 7 & 21 \end{pmatrix}$$
using a Givens rotation, we have to zero out the elements (1, 3) and (3, 1).
We construct the Givens matrix G in order to zero out the (3, 1) element of A (the (1, 3) element is then zeroed by symmetry of the similarity transform). To do that we compute c, s from the known $a = a_{21} = 4$ and $b = a_{31} = 3$ via
$$\begin{pmatrix} c & -s \\ s & c \end{pmatrix} \cdot \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} r \\ 0 \end{pmatrix}$$
to get
$$r = \sqrt{a^2 + b^2} = \sqrt{4^2 + 3^2} = 5, \quad c = \frac{a}{r} = 4/5 = 0.8, \quad s = \frac{-b}{r} = -3/5 = -0.6.$$
The Givens matrix is then
$$G = \begin{pmatrix} 1 & 0 & 0 \\ 0 & c & -s \\ 0 & s & c \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0.8 & 0.6 \\ 0 & -0.6 & 0.8 \end{pmatrix}$$
and the tridiagonal matrix is
$$G \cdot A \cdot G^T = \begin{pmatrix} 10 & 5 & 0 \\ 5 & 23.88 & 4.84 \\ 0 & 4.84 & 12.12 \end{pmatrix}.$$
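The Givens computation, checked numerically in a short sketch:

```python
import numpy as np

A = np.array([[10., 4., 3.],
              [ 4., 15., 7.],
              [ 3., 7., 21.]])

a, b = A[1, 0], A[2, 0]                  # a21 = 4, a31 = 3
rr = np.hypot(a, b)                      # r = 5
c, s = a / rr, -b / rr                   # c = 0.8, s = -0.6
G = np.array([[1., 0., 0.],
              [0.,  c, -s],
              [0.,  s,  c]])
T = G @ A @ G.T                          # similarity transform with the rotation
print(np.round(T, 2))   # approx [[10, 5, 0], [5, 23.88, 4.84], [0, 4.84, 12.12]]
```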
Question 6
1. The algorithm of tridiagonal QR iteration for a symmetric matrix A has the following structure:
(1) Given $A = A^T$, use a variation of Algorithm 4.6 (reduction to upper Hessenberg form, see Lecture 10) to find an orthogonal Q so that $QAQ^T = T$ is tridiagonal.
(2) Apply QR iteration to T to get a sequence $T = T_0, T_1, T_2, \ldots$ of tridiagonal matrices converging to diagonal form.
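Step (2) can be sketched with a plain unshifted QR iteration (a simplification: practical implementations use shifts and deflation for fast convergence). The test matrix is the tridiagonal matrix obtained in Question 5:

```python
import numpy as np

def tridiagonal_qr(T, iters=200):
    """Unshifted QR iteration: factor T_k = Q_k R_k, set T_{k+1} = R_k Q_k.
    Each iterate is orthogonally similar to T and stays tridiagonal;
    without shifts convergence is slow but sufficient for this example."""
    for _ in range(iters):
        Q, R = np.linalg.qr(T)
        T = R @ Q
    return T

T = np.array([[10., -5., 0.],
              [-5., 23.88, -4.84],
              [0., -4.84, 12.12]])
D = tridiagonal_qr(T)
print(np.round(np.diag(D), 4))           # approximate eigenvalues of T
```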
2. See Lecture 13.
The Rayleigh quotient of a symmetric matrix A and a nonzero vector u is $\rho(u, A) \equiv (u^T A u)/(u^T u)$. Its largest value, $\max_{u \neq 0} \rho(u, A)$, occurs for $u = q_1$ ($\xi = e_1$) and equals $\rho(q_1, A) = \alpha_1$, the largest eigenvalue of A. Its smallest value, $\min_{u \neq 0} \rho(u, A)$, occurs for $u = q_n$ ($\xi = e_n$) and equals $\rho(q_n, A) = \alpha_n$, the smallest eigenvalue of A.
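A short illustration, using the 2-by-2 matrix from Question 1: the Rayleigh quotient attains the extreme eigenvalues at the eigenvectors, and one step of Rayleigh quotient iteration (one of its main applications) sharply improves an approximate eigenpair. The starting guess below is my own choice:

```python
import numpy as np

def rayleigh(u, A):
    """Rayleigh quotient rho(u, A) = (u^T A u) / (u^T u)."""
    return (u @ A @ u) / (u @ u)

A = np.array([[7., 1.], [1., 3.]])
lam, Q = np.linalg.eigh(A)               # ascending eigenvalues, orthonormal eigenvectors

print(rayleigh(Q[:, 1], A))              # approx 7.2361 = 5 + sqrt(5), the maximum
print(rayleigh(Q[:, 0], A))              # approx 2.7639 = 5 - sqrt(5), the minimum

# One step of Rayleigh quotient iteration from a rough guess u.
u = np.array([1.0, 0.1])
sigma = rayleigh(u, A)
u = np.linalg.solve(A - sigma * np.eye(2), u)
u = u / np.linalg.norm(u)
print(rayleigh(u, A))                    # already very close to 5 + sqrt(5)
```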
Question 7
• 1. See Lecture 14 and the course book.
All the algorithms for the SVD of a general matrix G, except Jacobi's method, have an analogous structure:
(1) Reduce G to bidiagonal form B with orthogonal matrices U1 and V1: $G = U_1 B V_1^T$. This means B is nonzero only on the main diagonal and the first superdiagonal.
(2) Find the SVD of B: $B = U_2 \Sigma V_2^T$, where Σ is the diagonal matrix of singular values, and U2 and V2 are orthogonal matrices whose columns are the left and right singular vectors, respectively.
(3) Combine these decompositions to get $G = (U_1 U_2) \Sigma (V_1 V_2)^T$. The columns of $U = U_1 U_2$ and $V = V_1 V_2$ are the left and right singular vectors of G, respectively.
• 2. See Lecture 11 and the course book. To get the Gauss-Seidel method we use the same splitting as for the Jacobi method. Applying it to the solution of Ax = b we have
$$Ax = Dx - (\tilde{L}x + \tilde{U}x) = b.$$
We now rearrange the terms on the right-hand side as
$$Dx - \tilde{L}x = b + \tilde{U}x, \tag{0.1}$$
and thus the solution is computed as
$$x = (D - \tilde{L})^{-1}(b + \tilde{U}x) = (D - \tilde{L})^{-1} b + (D - \tilde{L})^{-1} \tilde{U} x.$$
We can rewrite the above equation using the notations $\tilde{L} = DL$ and $\tilde{U} = DU$ as
$$\begin{aligned} x &= (D - \tilde{L})^{-1} b + (D - \tilde{L})^{-1} \tilde{U} x \\ &= (D - DL)^{-1} b + (D - DL)^{-1} \tilde{U} x \\ &= (I - L)^{-1} D^{-1} b + (I - L)^{-1} D^{-1} \tilde{U} x \\ &= (I - L)^{-1} D^{-1} b + (I - L)^{-1} U x. \end{aligned} \tag{0.2}$$
Let us define
$$R_{GS} \equiv (I - L)^{-1} U, \qquad c_{GS} \equiv (I - L)^{-1} D^{-1} b. \tag{0.3}$$
Then the iterative update in the Gauss-Seidel method can be written as
$$x_{m+1} = R_{GS} x_m + c_{GS}. \tag{0.4}$$
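The update (0.4) can be sketched directly from the splitting (the test system uses the s.p.d. matrix from Question 5, for which Gauss-Seidel is guaranteed to converge; the right-hand side b is my own choice):

```python
import numpy as np

def gauss_seidel(A, b, x0, iters=50):
    """Gauss-Seidel update x_{m+1} = (D - Ltilde)^{-1} (b + Utilde x_m),
    with the splitting A = D - Ltilde - Utilde."""
    DL = np.tril(A)                      # D - Ltilde: diagonal plus strict lower part
    Ut = -np.triu(A, 1)                  # Utilde: minus the strict upper part of A
    x = x0.astype(float)
    for _ in range(iters):
        x = np.linalg.solve(DL, b + Ut @ x)
    return x

A = np.array([[10., 4., 3.],
              [4., 15., 7.],
              [3., 7., 21.]])            # s.p.d. and diagonally dominant
b = np.array([1., 2., 3.])
x = gauss_seidel(A, b, np.zeros(3))
print(np.allclose(A @ x, b))             # True
```

Since $D - \tilde{L}$ is lower triangular, each iteration amounts to a single forward substitution, which is why Gauss-Seidel can also be written as a componentwise sweep that uses updated entries of x as soon as they are available.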
