
Filtering via Rank-Reduced Hankel Matrix

CEE 690, ME 555 — System Identification — Fall, 2013


H.P. Gavin, September 24, 2013

This method goes by the names “Structured Total Least Squares” and “Singular Spectrum Analysis” and finds application in a very wide range of problems [4]. For noise-filtering applications, a discrete-time signal y_i, i = 1, ..., N, is broken into n time-shifted segments, and the segments are arranged as the columns of a Hankel matrix Y ∈ R^{m×n}, m > n, with Y_{ij} = y_{i+j−1}:

\[
\mathbf{Y} =
\begin{bmatrix}
y_1 & y_2     & \cdots & y_n     \\
y_2 & y_3     & \cdots & y_{n+1} \\
\vdots & \vdots &       & \vdots  \\
y_m & y_{m+1} & \cdots & y_{m+n-1}
\end{bmatrix}
\]

This is a Hankel matrix because the values along each anti-diagonal are all equal to one another.
The SVD of Y is Y = UΣV^T, and a reduced-rank version of Y can be reconstructed from the first r dyads of the SVD,

Y_r = U_r Σ_r V_r^T

where U_r ∈ R^{m×r} and V_r ∈ R^{n×r} contain the first r singular vectors of U and V, and Σ_r ∈ R^{r×r} contains the largest r singular values. This reduced-rank matrix Y_r is the rank-r matrix closest to Y in the sense of minimizing the Frobenius norm of their difference, ||Y − Y_r||_F^2. In general Y_r will not have the same Hankel structure as Y, but a matrix with a Hankel structure, Ȳ_r, can be obtained in a number of ways.
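For reference, the Hankel embedding and the rank-r truncation can be sketched in a few lines of Octave/MATLAB. This is a minimal sketch, not one of the course listings; it assumes a row-vector signal y, a chosen number of rows m, and a chosen rank r, and uses the built-in hankel function for the embedding:

N = length(y);                  % y is a 1 x N signal
n = N + 1 - m;                  % choose m so that m > n and m + n = N + 1
Y = hankel(y(1:m), y(m:N));     % m x n Hankel matrix with Y(i,j) = y(i+j-1)

[U, S, V] = svd(Y, 'econ');     % economy-size SVD, Y = U*S*V'
Yr = U(:,1:r) * S(1:r,1:r) * V(:,1:r)';    % closest rank-r matrix to Y

The approaches below differ in how the Hankel structure is then restored.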

• Singular Spectrum Analysis [2].

In SSA, Y_r is computed as above, and the elements along each anti-diagonal are replaced by the average of that anti-diagonal. The resulting matrix Ȳ_r will not have rank r and will not be the matrix closest to Y in the Frobenius-norm sense, but it will have a Hankel structure.

• Cadzow’s algorithm [1].

In Cadzow’s algorithm, the SSA averaging is applied repeatedly. After each anti-diagonal averaging step the rank of the matrix generally increases, so a new SVD can be computed, a new rank-r matrix can be constructed, and the anti-diagonals of the new reduced-rank matrix can be averaged again. A minimal sketch of this iteration appears after this list.

• Structured low-rank approximation [3].

These methods solve the constrained optimization problem: min ||Y − Ȳ_r||_F^2 such that rank(Ȳ_r) = r and Ȳ_r has the desired structure. These methods are iterative and apply more rigorous methods to determine Ȳ_r.
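As noted in the description of Cadzow’s algorithm above, the truncation and averaging steps alternate. A minimal sketch of that iteration in Octave/MATLAB follows; the function names cadzow and antidiag_average, the tolerance tol, and the iteration limit max_iter are illustrative assumptions, not part of the course listings:

function Yh = cadzow(Y, r, tol, max_iter)
  % alternate rank-r truncation and anti-diagonal averaging until the
  % truncated matrix is already (nearly) Hankel
  for iter = 1:max_iter
    [U, S, V] = svd(Y, 'econ');
    Yr = U(:,1:r) * S(1:r,1:r) * V(:,1:r)';   % rank-r truncation
    Yh = antidiag_average(Yr);                % restore the Hankel structure
    if norm(Yh - Yr, 'fro') < tol * norm(Yr, 'fro'), break; end
    Y = Yh;                                   % iterate on the Hankel matrix
  end
end

function H = antidiag_average(A)
  % replace each anti-diagonal of A by its average (the SSA averaging step)
  [m, n] = size(A);
  s = zeros(1, m+n-1);  c = zeros(1, m+n-1);
  for i = 1:m
    for j = 1:n
      k = i + j - 1;
      s(k) = s(k) + A(i,j);
      c(k) = c(k) + 1;
    end
  end
  y = s ./ c;                       % averaged (filtered) signal
  H = hankel(y(1:m), y(m:m+n-1));   % rebuild the Hankel matrix from it
end

Each pass re-imposes the rank constraint and the Hankel structure in turn.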

The low-rank (filtered) signal can be recovered from the first column and last row of Ȳ_r. The examples below apply SSA to recovering signals from noisy data. If a signal can be represented by a few components of the SVD of Y, this will be clear from the plot of the singular values of Y. This is the case in the first example (a good application of SSA, in which the signal-to-noise ratio can even be less than 1), but not in the second. SSA is a type of Principal Component Analysis (PCA).
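In element form, the SSA averaging and the recovery of the filtered signal amount to the following (a restatement of the averaging step described above):

\[
\bar{y}_k \;=\; \frac{1}{|\mathcal{A}_k|} \sum_{(i,j)\,\in\,\mathcal{A}_k} (\mathbf{Y}_r)_{ij} ,
\qquad
\mathcal{A}_k = \{\,(i,j) : i + j - 1 = k\,\} ,
\qquad k = 1, \ldots, N ,
\]

so that ȳ_1, ..., ȳ_m form the first column of Ȳ_r and ȳ_m, ..., ȳ_N form its last row.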

1 Recover a reduced basis from a very noisy measurement (PCA)


signal, a sum of sines:   y_i = Σ_j sin(2π f_j t_i)
noisy measurements:       ỹ_i = y_i + d n_y,      n_y is a unit white noise process
noise amplitude:          d = σ_y / (SNR √Δt),    SNR: signal-to-noise ratio

[Figure: Example 1 results — normalized singular values σ_i/σ_1 of Y versus index i; power spectral densities (PSD) versus frequency (Hz) of the noisy, filtered, and true signals; and the signals versus time (s) over 10–20 s.]

2 Filter a noisy measurement of a broad-band response


noise-driven linear dynamics:   ẋ = a x + b n_x,     n_x is a unit white noise process
true output:                    y = c x
noisy measurements:             ỹ = y + d n_y,       n_y is a unit white noise process
parameter values:               a = −0.5;  b = 1;  c = 1
noise amplitude:                d = σ_y / (SNR √Δt), SNR: signal-to-noise ratio

[Figure: Example 2 results — normalized singular values σ_i/σ_1 of Y versus index i; power spectral densities (PSD) versus frequency (Hz) of the noisy, true, and filtered signals; and the signals versus time (s) over 10–20 s.]

SVD_filter.m

function y = SVD_filter(y, m, sv_ratio)
% y = SVD_filter(y, m, sv_ratio)
% Use the SVD to filter out the least significant (noisy) portions of a signal
%
% INPUTS      DESCRIPTION
% ==========  ===============
% y           signal to be filtered, 1 x N
% m           rows in the Hankel matrix of y, Y;  m > N/2+1
% sv_ratio    remove components of the SVD of Y with s_i < sv_ratio * s_1
%
% OUTPUT      DESCRIPTION
% ==========  ===============
% y           re-construction of y from the low-rank approximation of Y

[l, N] = size(y);

% put the signal into a Hankel matrix of dimension m x n; m > n; m+n = N+1
if (m < N/2), error('SVD_filter: m should be greater than N/2'); end
n = N + 1 - m;                  % number of columns of the Hankel matrix

Y = zeros(m,n);
for k = 1:m
   Y(k,:) = y(k:k+n-1);
end

[U,S,V] = svd(Y,'econ');        % economy-size SVD of the Hankel matrix

K = max(find(diag(S)/S(1,1) > sv_ratio))   % find the most significant part

figure(3)
loglog(diag(S)/S(1,1),'o', [1 K],[1 1]*sv_ratio,'k', [K K],[sv_ratio 1],'k')
ylabel('\sigma_i / \sigma_1')
xlabel('i')
print('SVD_filter_svd.eps', '-color', '-solid', '-F:28');

% build a new rank-K matrix from the first K dyads of the SVD
Y = U(:,1:K) * diag(diag(S)(1:K)) * V(:,1:K)';

% Average anti-diagonal components to make the lower-rank matrix a Hankel matrix.
% Extract the filtered signal from the first column and last row of the
% lower-rank Hankel matrix.

y = zeros(1,N);
y(1) = Y(1,1);

for k = 2:m                     % first column of the Hankel matrix
   min_kn = min(k,n);
   y(k) = sum(diag(Y(k:-1:1, 1:min_kn))) / min_kn;
end
for k = 2:n                     % last row of the Hankel matrix
   y(m+k-1) = sum(diag(Y(m:-1:m-n+k, k:n))) / (n-k+1);
end

% ----------------------------------------- SVD_filter  H.P. Gavin  2013-09-10
% System Identification, Duke University, Fall 2013

% SVD_filter_test
% test the use of the SVD of a signal's Hankel matrix for filtering

% use SVD to remove noise
% m = number of rows in Hankel matrix; m >= N/2+1
% smaller m :: slower SVD :: less extraction
% smaller m :: sharper SVD knee :: less noise in principal components

% HP Gavin, CEE 699, System Identification, Fall 2013

epsPlots = 1;  formatPlot(epsPlots);   % formatPlot is a local plot-formatting helper

Example = 1;

randn('seed',2);                % initialize random number generator
N  = 2048;                      % number of data points
dt = 0.05;                      % time step increment
t  = [1:N]*dt;                  % time values

if (Example == 1)               % sum of harmonic signals

  freq = [ (sqrt(5)-1)/2  1  2/(sqrt(5)-1)  e  pi ]';   % set of frequencies
  yt   = sum(sin(2*pi*freq*t)) / length(freq);          % true signal

  SNR = 0.5;                    % works with a very poor signal-to-noise ratio
  m   = ceil(0.6*N + 1)
  sv_ratio = 0.60;              % singular value ratio closer to 1 :: more filtering

end

if (Example == 2)               % dynamical system driven by unit white noise

  yt = lsim(-0.5, 1, 1, 0, randn(1,N)/sqrt(dt), t, 0);  % true signal

  SNR = 2.0;                    % needs a better signal-to-noise ratio
  m   = ceil(0.9*N + 1)
  sv_ratio = 0.15;              % smaller singular value ratio :: less filtering

end

% add measurement noise
yn = yt + randn(1,N)/sqrt(dt) * sqrt(yt*yt'/N) / (SNR*sqrt(dt));

yf = SVD_filter(yn, m, sv_ratio);         % remove random components

yf_yt_err = norm(yf-yt)/norm(yt)          % compare filtered to true
yf_yn_err = norm(yf-yn)/norm(yt)          % compare filtered to noisy

nfft = 512;
[PSDyt, f] = psd(yt, 1/dt, nfft);
[PSDyn, f] = psd(yn, 1/dt, nfft);
[PSDyf, f] = psd(yf, 1/dt, nfft);

% Plotting ----------------------------------------------------------------

figure(1);
clf
plot(t,yt, t,yn, t,yf)
axis([10 20])
ylabel('signals')
xlabel('time, s')
if epsPlots, print(sprintf('SVD_filter_%d_1.eps',Example), '-color', '-solid', '-F:28'); end

figure(2)
clf
idx = [4:nfft/2];
loglog(f(idx),PSDyt(idx), f(idx),PSDyn(idx), f(idx),PSDyf(idx))
xlabel('frequency, Hz')
ylabel('PSD')
text(f(nfft/2), PSDyt(nfft/2), 'true')
text(f(nfft/2), PSDyn(nfft/2), 'noisy')
text(f(nfft/2), PSDyf(nfft/2), 'filtered')
if epsPlots, print(sprintf('SVD_filter_%d_2.eps',Example), '-color', '-solid', '-F:28'); end

References
[1] Cadzow, J., “Signal Enhancement: A Composite Property Mapping Algorithm,” IEEE Trans. Acoustics, Speech, and Signal Processing, 36(2):49–82 (1988).

[2] Golyandina, N., et al., Analysis of Time Series Structure: SSA and Related Techniques, Chapman & Hall/CRC, 2001.

[3] Lemmerling, P., Structured Total Least Squares: Analysis, Algorithms, and Applications, Ph.D. Dissertation, K.U. Leuven, 1999.

[4] Markovsky, I., “Structured Low-Rank Approximation and Its Applications,” Automatica, 44:891–909 (2008).
