Technical Report
CSD-TR-03-02
May 28, 2003
1 Introduction
During recent years there have been advances in learning from data using kernel methods. Kernel representations offer an alternative to explicitly learning non-linear functions: the data are projected into a high-dimensional feature space, which increases the computational power of linear learning machines. This still leaves open the issue of how best to choose the features, or the kernel function, in ways that will improve performance. We review some of the methods that have been developed for learning the feature space.
Proposed by H. Hotelling in 1936 [12], canonical correlation analysis (CCA) can be seen as the problem of finding basis vectors for two sets of variables such that the correlation between the projections of the variables onto these basis vectors is mutually maximised. In an attempt to increase the flexibility of the feature selection, kernelisation of CCA (KCCA) has been applied to map the hypotheses to a higher-dimensional feature space. KCCA has been applied in some preliminary work by Fyfe & Lai [8], Akaho [1] and, more recently, Vinokourov et al. [19] with improved results.
During recent years there has been a vast increase in the amount of multimedia content available both off-line and online, yet we are unable to access or make use of this data unless it is organised in such a way as to allow efficient browsing. To enable content-based retrieval with no reference to labelling, we attempt to learn the semantic representation of images and their associated text. We present a general approach using KCCA that can be used for content-based [11] as well as mate-based retrieval [18, 11]. In both cases we compare the KCCA approach to the Generalised Vector Space Model (GVSM), which aims at capturing some term-term correlations by looking at co-occurrence information.
This study aims to serve as a tutorial and give additional novel contribu-
tions in the following ways:
• In this study we follow the work of Borga [4] in representing the eigenproblem as two eigenvalue equations, as this allows us to reduce the computation time and the dimensionality of the eigenvectors.
• Further to that, we follow the idea of Bach & Jordan [2] to compute a new correlation matrix with reduced dimensionality. Though Bach & Jordan [2] address a very different problem, they use the same underlying technique of Cholesky decomposition to re-represent the kernel matrices. We show that partial Gram-Schmidt orthogonalisation [6] is equivalent to incomplete Cholesky decomposition, in the sense that incomplete Cholesky decomposition can be seen as a dual implementation of partial Gram-Schmidt.
• We show that the general approach can be adapted to two different types of problems, content-based and mate-based retrieval, by changing only the selection of eigenvectors used in the semantic projection.
The paper is structured as follows. In Section 2 we review the theoretical foundations of CCA, and in Section 3 we present the CCA and KCCA algorithms. Approaches to dealing with the computational problems that arise in Section 3 are presented in Section 4. Our experimental results are presented in Section 5. In Section 6 we present a generalisation framework for CCA, while Section 7 draws final conclusions.
2 Theoretical Foundations
Proposed by H. Hotelling in 1936 [12], canonical correlation analysis can be seen as the problem of finding basis vectors for two sets of variables such that the correlation between the projections of the variables onto these basis vectors is mutually maximised. Correlation analysis is dependent on the co-ordinate system in which the variables are described, so even if there is a very strong linear relationship between two sets of multidimensional variables, this relationship might not be visible as a correlation in the co-ordinate system used. Canonical correlation analysis seeks a pair of linear transformations, one for each of the sets of variables, such that when the sets of variables are transformed the corresponding co-ordinates are maximally correlated.
Consider a multivariate random vector of the form (x, y). Suppose we are given a sample of instances S = ((x_1, y_1), ..., (x_n, y_n)) of (x, y); we use S_x to denote (x_1, ..., x_n) and similarly S_y to denote (y_1, ..., y_n). We can consider defining a new co-ordinate for x by choosing a direction w_x and projecting x onto that direction,
\[ x \mapsto \langle w_x, x \rangle. \]
If we do the same for y by choosing a direction w_y, we obtain samples of the new co-ordinates,
\[ S_{x,w_x} = (\langle w_x, x_1 \rangle, \ldots, \langle w_x, x_n \rangle), \qquad S_{y,w_y} = (\langle w_y, y_1 \rangle, \ldots, \langle w_y, y_n \rangle). \]
Choosing w_x and w_y to maximise the correlation between the two vectors gives
\[ \rho = \max_{w_x, w_y} \operatorname{corr}(S_x w_x, S_y w_y) = \max_{w_x, w_y} \frac{\langle S_x w_x, S_y w_y \rangle}{\| S_x w_x \| \, \| S_y w_y \|}. \]
If we use \( \hat{E}[f(x, y)] \) to denote the empirical expectation of the function f(x, y), where
\[ \hat{E}[f(x, y)] = \frac{1}{m} \sum_{i=1}^{m} f(x_i, y_i), \]
we can rewrite the correlation as
\[ \rho = \max_{w_x, w_y} \frac{\hat{E}[\langle w_x, x \rangle \langle w_y, y \rangle]}{\sqrt{\hat{E}[\langle w_x, x \rangle^2]\, \hat{E}[\langle w_y, y \rangle^2]}} = \max_{w_x, w_y} \frac{\hat{E}[w_x' x y' w_y]}{\sqrt{\hat{E}[w_x' x x' w_x]\, \hat{E}[w_y' y y' w_y]}}. \]
It follows that
\[ \rho = \max_{w_x, w_y} \frac{w_x' \hat{E}[x y'] w_y}{\sqrt{w_x' \hat{E}[x x'] w_x \; w_y' \hat{E}[y y'] w_y}}, \]
where we use A' to denote the transpose of a vector or matrix A.
Now observe that the covariance matrix of (x, y) is
\[ C(x, y) = \hat{E}\left[ \begin{pmatrix} x \\ y \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}' \right] = \begin{pmatrix} C_{xx} & C_{xy} \\ C_{yx} & C_{yy} \end{pmatrix} = C. \qquad (2.1) \]
The total covariance matrix C is a block matrix where the within-sets covariance matrices are C_{xx} and C_{yy} and the between-sets covariance matrices are C_{xy} = C_{yx}'.
Hence we can rewrite ρ as
\[ \rho = \max_{w_x, w_y} \frac{w_x' C_{xy} w_y}{\sqrt{w_x' C_{xx} w_x \; w_y' C_{yy} w_y}}; \qquad (2.2) \]
the maximum canonical correlation is the maximum of ρ with respect to w_x and w_y.
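To make equation (2.2) concrete, the following sketch (in Python with NumPy/SciPy, an illustration rather than the implementation used for the experiments reported here) estimates the covariance blocks from row-sample matrices and solves the resulting generalised eigenproblem; the function name cca_directions and the small ridge term reg are our own choices.

import numpy as np
from scipy.linalg import eigh

def cca_directions(X, Y, reg=1e-8):
    """Primal CCA: leading canonical correlation and directions w_x, w_y for row-sample matrices X, Y."""
    X = X - X.mean(axis=0)                              # centre the data
    Y = Y - Y.mean(axis=0)
    m = X.shape[0]
    Cxx = X.T @ X / m + reg * np.eye(X.shape[1])        # within-set covariances (ridge added for stability)
    Cyy = Y.T @ Y / m + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / m                                   # between-sets covariance
    # Generalised eigenproblem  [0 Cxy; Cyx 0] w = rho [Cxx 0; 0 Cyy] w
    A = np.block([[np.zeros_like(Cxx), Cxy], [Cxy.T, np.zeros_like(Cyy)]])
    B = np.block([[Cxx, np.zeros_like(Cxy)], [np.zeros_like(Cxy.T), Cyy]])
    vals, vecs = eigh(A, B)                             # symmetric-definite generalised eigensolver
    w = vecs[:, -1]                                     # eigenvector of the largest eigenvalue rho
    return vals[-1], w[:X.shape[1]], w[X.shape[1]:]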
3 Algorithm
In this section we will give an overview of the Canonical correlation analysis
(CCA) and Kernel-CCA (KCCA) algorithms where we formulate the optimisa-
tion problem as a generalised eigenproblem.
Observe that the solution of equation (2.2) is not affected by re-scaling w_x or w_y, either together or independently: if, for example, w_x is replaced by αw_x, then
\[ \frac{\alpha w_x' C_{xy} w_y}{\sqrt{\alpha^2 w_x' C_{xx} w_x \; w_y' C_{yy} w_y}} = \frac{w_x' C_{xy} w_y}{\sqrt{w_x' C_{xx} w_x \; w_y' C_{yy} w_y}}. \]
Since the choice of re-scaling is therefore arbitrary, the CCA optimisation prob-
lem formulated in equation (2.2) is equivalent to maximising the numerator
subject to
\[ w_x' C_{xx} w_x = 1, \qquad w_y' C_{yy} w_y = 1. \]
The corresponding Lagrangian is
\[ L(\lambda_x, \lambda_y, w_x, w_y) = w_x' C_{xy} w_y - \frac{\lambda_x}{2}\left( w_x' C_{xx} w_x - 1 \right) - \frac{\lambda_y}{2}\left( w_y' C_{yy} w_y - 1 \right). \]
Kernel CCA offers an alternative by first projecting the data into a higher-dimensional feature space, φ : x ↦ φ(x), before performing CCA in the new feature space, essentially moving from the primal to the dual representation approach. Kernels are methods of implicitly mapping data into a higher-dimensional feature space, a method known as the "kernel trick". A kernel is a function K such that for all x, z ∈ X,
\[ K(x, z) = \langle \phi(x), \phi(z) \rangle. \qquad (3.5) \]
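As an illustration of equation (3.5), the Gaussian (RBF) kernel below evaluates the feature-space inner product without ever forming φ explicitly; this particular kernel and its bandwidth are example choices of ours, not ones prescribed by the text.

import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    """K[i, j] = exp(-||x_i - z_j||^2 / (2 sigma^2)): the implicit feature-space inner product."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Z**2, axis=1)[None, :] - 2 * X @ Z.T
    return np.exp(-sq / (2 * sigma**2))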
Using the definition of the covariance matrix in equation (2.1) we can rewrite the covariance matrix C using the data matrices X and Y, which have the sample vectors as rows and are therefore of size m × N; we obtain
\[ C_{xx} = X'X, \qquad C_{xy} = X'Y. \]
The directions w_x and w_y can be rewritten as projections of the data onto dual vectors α and β,
\[ w_x = X'\alpha, \qquad w_y = Y'\beta. \]
Substituting into equation (2.2) gives
\[ \rho = \max_{\alpha, \beta} \frac{\alpha' X X' Y Y' \beta}{\sqrt{\alpha' X X' X X' \alpha \cdot \beta' Y Y' Y Y' \beta}}. \qquad (3.6) \]
Let K_x = XX' and K_y = YY' be the kernel matrices corresponding to the two representations. Substituting into equation (3.6) we obtain
\[ \rho = \max_{\alpha, \beta} \frac{\alpha' K_x K_y \beta}{\sqrt{\alpha' K_x^2 \alpha \cdot \beta' K_y^2 \beta}}. \qquad (3.7) \]
We find that in equation (3.7) the variables are now represented in the dual form.
Observe that as with the primal form presented in equation (2.2), equation (3.7)
is not affected by re-scaling of α and β either together or independently. Hence the KCCA optimisation problem formulated in equation (3.7) is equivalent to maximising the numerator subject to
\[ \alpha' K_x^2 \alpha = 1, \qquad \beta' K_y^2 \beta = 1. \]
Taking derivatives of the corresponding Lagrangian with respect to α and β gives
\[ K_x K_y \beta - \lambda_\alpha K_x^2 \alpha = 0, \qquad K_y K_x \alpha - \lambda_\beta K_y^2 \beta = 0. \]
Subtracting β' times the second equation from α' times the first we have
\[ 0 = \alpha' K_x K_y \beta - \lambda_\alpha \alpha' K_x^2 \alpha - \beta' K_y K_x \alpha + \lambda_\beta \beta' K_y^2 \beta = \lambda_\beta \beta' K_y^2 \beta - \lambda_\alpha \alpha' K_x^2 \alpha, \]
which together with the constraints implies λ_α = λ_β; let λ = λ_α = λ_β.
Assuming K_y is invertible, the second derivative equation gives β = (1/λ) K_y^{-1} K_x α, and substituting into the first yields
\[ K_x K_y K_y^{-1} K_x \alpha - \lambda^2 K_x K_x \alpha = 0. \]
Hence
\[ K_x K_x \alpha - \lambda^2 K_x K_x \alpha = 0, \]
or
\[ I\alpha = \lambda^2 \alpha. \qquad (3.10) \]
We are left with a generalised eigenproblem of the form Ax = λx. We can deduce from equation (3.10) that λ = 1 for every vector α; hence we can choose the projections w_x to be the unit vectors j_i, i = 1, ..., m, while the w_y are the columns of (1/λ) K_y^{-1} K_x. Hence when K_x or K_y is invertible, perfect correlation can be obtained. Since kernel methods provide high-dimensional representations, such independence is not uncommon. It is therefore clear that a naive application of CCA in a kernel-defined feature space will not provide useful results. In the next section we investigate how this problem can be avoided.
4 Computational Issues
We observe from equation (3.10) that if K x is invertible maximal correlation is
obtained, suggesting learning is trivial. To force non-trivial learning we intro-
duce a control on the flexibility of the projections by penalising the norms of
the associated weight vectors by a convex combination of constraints based on
Partial Least Squares. Another computational issue that can arise is the use
of large training sets, as this can lead to computational problems and degener-
acy. To overcome this issue we apply partial Gram-Schmidt orthogonalisation (equivalently, incomplete Cholesky decomposition) to reduce the dimensionality of the kernel matrices.
4.1 Regularisation
To force non-trivial learning on the correlation we introduce a control on the
flexibility of the projection mappings using Partial Least Squares (PLS) to
penalise the norms of the associated weights. We convexly combine the PLS
term with the KCCA term in the denominator of equation (3.7) obtaining
\[ \rho = \max_{\alpha, \beta} \frac{\alpha' K_x K_y \beta}{\sqrt{(\alpha' K_x^2 \alpha + \kappa \|w_x\|^2) \cdot (\beta' K_y^2 \beta + \kappa \|w_y\|^2)}} = \max_{\alpha, \beta} \frac{\alpha' K_x K_y \beta}{\sqrt{(\alpha' K_x^2 \alpha + \kappa \alpha' K_x \alpha) \cdot (\beta' K_y^2 \beta + \kappa \beta' K_y \beta)}}. \]
We observe that the new regularised equation is not affected by re-scaling of α
or β, hence the optimisation problem is equivalent to maximising the numerator subject to
\[ \alpha' K_x^2 \alpha + \kappa \alpha' K_x \alpha = 1, \qquad \beta' K_y^2 \beta + \kappa \beta' K_y \beta = 1. \]
The corresponding Lagrangian is
\[ L(\lambda_\alpha, \lambda_\beta, \alpha, \beta) = \alpha' K_x K_y \beta - \frac{\lambda_\alpha}{2}(\alpha' K_x^2 \alpha + \kappa \alpha' K_x \alpha - 1) - \frac{\lambda_\beta}{2}(\beta' K_y^2 \beta + \kappa \beta' K_y \beta - 1). \]
Taking derivatives with respect to α and β, and subtracting β' times the second resulting equation from α' times the first, we have
\[ 0 = \alpha' K_x K_y \beta - \lambda_\alpha \alpha' (K_x^2 \alpha + \kappa K_x \alpha) - \beta' K_y K_x \alpha + \lambda_\beta \beta' (K_y^2 \beta + \kappa K_y \beta) = \lambda_\beta \beta' (K_y^2 \beta + \kappa K_y \beta) - \lambda_\alpha \alpha' (K_x^2 \alpha + \kappa K_x \alpha). \]
Hence λ_α = λ_β (= λ, say), and the second derivative equation gives
\[ \beta = \frac{(K_y + \kappa I)^{-1} K_y^{-1} K_y K_x \alpha}{\lambda} = \frac{(K_y + \kappa I)^{-1} K_x \alpha}{\lambda}. \]
Substituting in equation (4.1) gives
\[ K_x K_y (K_y + \kappa I)^{-1} K_x \alpha = \lambda^2 K_x (K_x + \kappa I) \alpha, \]
\[ K_y (K_y + \kappa I)^{-1} K_x \alpha = \lambda^2 (K_x + \kappa I) \alpha, \]
\[ (K_x + \kappa I)^{-1} K_y (K_y + \kappa I)^{-1} K_x \alpha = \lambda^2 \alpha. \]
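A direct way of solving this regularised eigenproblem on the full kernel matrices (before any dimensionality reduction) is sketched below; it assumes the kernel matrices fit in memory, and the helper name kcca_regularised is ours.

import numpy as np

def kcca_regularised(Kx, Ky, kappa):
    """Leading solution of (Kx + kI)^-1 Ky (Ky + kI)^-1 Kx alpha = lambda^2 alpha."""
    n = Kx.shape[0]
    I = np.eye(n)
    A = np.linalg.solve(Kx + kappa * I, Ky @ np.linalg.solve(Ky + kappa * I, Kx))
    vals, vecs = np.linalg.eig(A)                  # eigenvalues are lambda^2 (real up to round-off)
    top = int(np.argmax(vals.real))
    lam = np.sqrt(max(vals[top].real, 0.0))
    alpha = vecs[:, top].real
    beta = np.linalg.solve(Ky + kappa * I, Kx @ alpha) / max(lam, 1e-12)   # beta = (Ky + kI)^-1 Kx alpha / lambda
    return lam, alpha, beta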
\[ u_{kj} = \frac{1}{l_{kk}} \left( a_{kj} - \sum_{p=1}^{k-1} l_{kp} u_{pj} \right) \quad \text{for } j > k \geq 2, \qquad (4.5) \]
\[ l_{ik} = \frac{1}{u_{kk}} \left( a_{ik} - \sum_{p=1}^{k-1} l_{ip} u_{pk} \right) \quad \text{for } i > k \geq 2. \qquad (4.6) \]
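Read literally, recurrences (4.5) and (4.6) are the column-by-column updates of an LU-type factorisation; for a symmetric positive definite K, combining (4.6) with the usual Cholesky diagonal pivot l_kk = sqrt(a_kk - sum_p l_kp^2) gives the small sketch below (the diagonal rule is the standard choice, assumed here rather than stated in the text).

import numpy as np

def cholesky_from_recurrences(K):
    """Lower-triangular L with K = L L', built column by column as in (4.5)/(4.6) with U = L'."""
    n = K.shape[0]
    L = np.zeros_like(K, dtype=float)
    for k in range(n):
        L[k, k] = np.sqrt(K[k, k] - L[k, :k] @ L[k, :k])           # standard Cholesky diagonal pivot
        for i in range(k + 1, n):
            L[i, k] = (K[i, k] - L[i, :k] @ L[k, :k]) / L[k, k]    # recurrence (4.6)
    return L

For a symmetric positive definite K, np.allclose(L @ L.T, K) should then hold.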
The incomplete Cholesky decomposition algorithm takes the following form.
Input: N × N matrix K, precision parameter η
2. While Σ_{j=i}^{N} G_jj > η and i ≠ N + 1
  • Find best new element: j* = argmax_{j ∈ [i,N]} G_jj
  • Update j* = (j* + i) − 1
  • Update permutation P: P = P · P_next
  • Permute elements i and j* in K: K = P_next · K · P_next
  • Update (due to new permutation) the already calculated elements of G: exchange G_{i,1:i−1} and G_{j*,1:i−1}
The partial Gram-Schmidt orthogonalisation algorithm can be stated as follows.
Initialisations:
  m = size of K, an N × N matrix
  j = 1
  size and index are vectors with the same length as K
  feat is a zeros matrix equal to the size of K
  for i = 1 to m do
    norm2[i] = K_ii;
Algorithm:
  while Σ_i norm2[i] > η and j ≠ N + 1 do
    i_j = argmax_i(norm2[i]);
    index[j] = i_j;
    size[j] = √(norm2[i_j]);
    for i = 1 to m do
      feat[i, j] = (k(d_i, d_{i_j}) − Σ_{t=1}^{j−1} feat[i, t] · feat[i_j, t]) / size[j];
      norm2[i] = norm2[i] − feat[i, j] · feat[i, j];
    end;
    j = j + 1;
  end;
  return feat
Output:
  ‖K − feat · feat'‖ ≤ η, where feat is an N × M lower triangular matrix (see Appendix 1.2 for a proof).
To classify a new example at location i:
  for j = 1 to M do
    newfeat[j] = (K_{i,index[j]} − Σ_{t=1}^{j−1} newfeat[t] · feat[index[j], t]) / size[j];
  end;
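A compact Python transcription of the two pieces of pseudocode above, as a sketch that assumes the kernel matrix K is precomputed (so k(d_i, d_{i_j}) is simply K[i, i_j]); 0-based indexing replaces the 1-based indexing of the pseudocode.

import numpy as np

def partial_gram_schmidt(K, eta):
    """Partial Gram-Schmidt / incomplete Cholesky: returns feat with ||K - feat feat'|| <= eta."""
    N = K.shape[0]
    norm2 = K.diagonal().astype(float)
    feat = np.zeros((N, N))
    index, size = [], []
    j = 0
    while norm2.sum() > eta and j < N:
        ij = int(np.argmax(norm2))                                  # pivot with largest residual norm
        index.append(ij)
        size.append(np.sqrt(norm2[ij]))
        feat[:, j] = (K[:, ij] - feat[:, :j] @ feat[ij, :j]) / size[j]
        norm2 -= feat[:, j] ** 2                                    # update residual norms
        j += 1
    return feat[:, :j], np.array(index), np.array(size)

def project_new_example(k_row, feat, index, size):
    """Features of a new example, given its kernel evaluations k_row[i] = k(d_new, d_i)."""
    M = len(index)
    newfeat = np.zeros(M)
    for j in range(M):
        newfeat[j] = (k_row[index[j]] - newfeat[:j] @ feat[index[j], :j]) / size[j]
    return newfeat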
Setting
\[ \tilde{\alpha} = R_x \alpha, \qquad \tilde{\beta} = R_y \beta, \]
and substituting in equations (4.9) and (4.10), we find that we return to the primal representation of CCA with a dual representation of the data:
\[ Z_{xx} Z_{xy} \tilde{\beta} - \lambda Z_{xx}^2 \tilde{\alpha} = 0, \]
\[ Z_{yy} Z_{yx} \tilde{\alpha} - \lambda Z_{yy}^2 \tilde{\beta} = 0. \]
Assuming that Z_xx and Z_yy are invertible, we multiply the first equation by Z_xx^{-1} and the second by Z_yy^{-1}, so that
\[ \tilde{\beta} = \frac{Z_{yy}^{-1} Z_{yx} \tilde{\alpha}}{\lambda}, \]
and substituting in equation (4.11) gives
\[ Z_{xy} Z_{yy}^{-1} Z_{yx} \tilde{\alpha} = \lambda^2 Z_{xx} \tilde{\alpha}. \qquad (4.13) \]
Letting SS' be the complete Cholesky decomposition of Z_xx and setting \( \hat{\alpha} = S' \tilde{\alpha} \), we obtain the symmetric eigenproblem
\[ S^{-1} Z_{xy} Z_{yy}^{-1} Z_{yx} S'^{-1} \hat{\alpha} = \lambda^2 \hat{\alpha}. \]
In the regularised case, substituting K_x = R_x' R_x and K_y = R_y' R_y gives
\[ R_x' R_x R_y' R_y \beta - \lambda (R_x' R_x R_x' R_x + \kappa R_x' R_x) \alpha = 0, \]
\[ R_y' R_y R_x' R_x \alpha - \lambda (R_y' R_y R_y' R_y + \kappa R_y' R_y) \beta = 0. \]
Multiplying the first equation by R_x and the second by R_y gives
\[ R_x R_x' R_x R_y' R_y \beta - \lambda R_x (R_x' R_x R_x' R_x + \kappa R_x' R_x) \alpha = 0, \qquad (4.14) \]
\[ R_y R_y' R_y R_x' R_x \alpha - \lambda R_y (R_y' R_y R_y' R_y + \kappa R_y' R_y) \beta = 0. \qquad (4.15) \]
Rewriting equation (4.14) with the new reduced correlation matrices Z defined in Section 4.3, we obtain
\[ Z_{xx} Z_{xy} \tilde{\beta} - \lambda Z_{xx}(Z_{xx} + \kappa I) \tilde{\alpha} = 0, \]
\[ Z_{yy} Z_{yx} \tilde{\alpha} - \lambda Z_{yy}(Z_{yy} + \kappa I) \tilde{\beta} = 0. \]
Assuming that Z_xx and Z_yy are invertible, we multiply the first equation by Z_xx^{-1} and the second by Z_yy^{-1}, so that
\[ \tilde{\beta} = \frac{(Z_{yy} + \kappa I)^{-1} Z_{yx} \tilde{\alpha}}{\lambda}, \]
and substituting in equation (4.16) gives
\[ Z_{xy} (Z_{yy} + \kappa I)^{-1} Z_{yx} \tilde{\alpha} = \lambda^2 (Z_{xx} + \kappa I) \tilde{\alpha}. \]
Letting SS' be the complete Cholesky decomposition of Z_xx + κI and setting \( \hat{\alpha} = S' \tilde{\alpha} \), we obtain the symmetric eigenproblem
\[ S^{-1} Z_{xy} (Z_{yy} + \kappa I)^{-1} Z_{yx} S'^{-1} \hat{\alpha} = \lambda^2 \hat{\alpha}. \]
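Putting the pieces of this section together, the sketch below solves the reduced, regularised problem; it reuses the partial_gram_schmidt sketch given earlier, and names such as reduced_kcca are our own.

import numpy as np

def reduced_kcca(Kx, Ky, kappa, eta):
    """Regularised KCCA in the reduced space produced by partial Gram-Schmidt of each kernel matrix."""
    Gx, _, _ = partial_gram_schmidt(Kx, eta)              # Kx ~= Gx Gx'
    Gy, _, _ = partial_gram_schmidt(Ky, eta)              # Ky ~= Gy Gy'
    Rx, Ry = Gx.T, Gy.T                                   # so that alpha_tilde = Rx alpha, beta_tilde = Ry beta
    Zxx, Zyy, Zxy = Rx @ Rx.T, Ry @ Ry.T, Rx @ Ry.T
    S = np.linalg.cholesky(Zxx + kappa * np.eye(Zxx.shape[0]))     # complete Cholesky: S S' = Zxx + kappa I
    M = np.linalg.solve(S, Zxy @ np.linalg.solve(Zyy + kappa * np.eye(Zyy.shape[0]), Zxy.T))
    M = np.linalg.solve(S, M.T).T                         # S^-1 Zxy (Zyy + kI)^-1 Zyx S'^-1  (symmetric)
    lam2, Ahat = np.linalg.eigh(M)                        # eigenvalues are lambda^2
    Atilde = np.linalg.solve(S.T, Ahat)                   # alpha_tilde = S'^-1 alpha_hat
    return np.sqrt(np.clip(lam2, 0.0, None)), Atilde, (Rx, Ry)

The corresponding β̃ directions then follow from β̃ = (Z_yy + κI)^{-1} Z_yx α̃ / λ as derived above.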
5 Experimental Results
In the following experiments the problem of learning semantics of multimedia
content by combining image and text data is addressed. The synthesis is ad-
dressed by the kernel Canonical correlation analysis described in Section 4.3.
We test the use of the derived semantic space in an image retrieval task that
uses only image content. The aim is to allow retrieval of images from a text
query but without reference to any labeling associated with the image. This
can be viewed as a cross-modal retrieval task. We used the combined multimedia image-text web database kindly provided by the authors of [15], on whose test set we also attempt mate retrieval. The data were divided into three classes (Figure 1) - Sport, Aviation and Paintball - of 400 records each, consisting of JPEG images retrieved from the Internet with attached text. We randomly split each class into two halves, used as training and test data respectively. The features extracted from the data were the same as in [15] (a detailed description of the features used can be found in [15]): image HSV colour, image Gabor texture and term frequencies in the text.
We compute the value of κ for the regularisation by running KCCA with the association between image and text randomised. Let λ(κ) be the spectrum without randomisation (the database paired with itself) and λ_R(κ) be the spectrum with randomisation (the database paired with a randomised version of itself), where by spectrum we mean the vector whose entries are the eigenvalues. We would like the non-random spectrum to be as distant as possible from the randomised spectrum, since if the same correlation occurs for λ(κ) and λ_R(κ) then clearly overfitting is taking place. Therefore, for κ = 0 (no regularisation) and j = (1, ..., 1)' (the all-ones vector), we expect that we may have λ(κ) = λ_R(κ) = j, since it is very possible that the examples are linearly independent. Though we find that only 50% of the examples are linearly independent, this does not affect the selection of κ through this method. We choose κ to be the value for which the difference between the two spectra is maximal,
\[ \kappa = \arg\max_{\kappa} \| \lambda_R(\kappa) - \lambda(\kappa) \|. \]
Figure 1 Example images from the three classes (Aviation, Sports, Paintball).
We find that κ = 7, and we set the Gram-Schmidt precision parameter η = 0.5 via a heuristic technique.
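The selection procedure itself can be sketched as follows, assuming a helper kcca_spectrum(Kx, Ky, kappa) that returns the vector of canonical correlations (it could be built from the KCCA routines sketched earlier); both the helper and the candidate grid of kappa values are illustrative assumptions of ours.

import numpy as np

def select_kappa(Kx, Ky, kappas, seed=0):
    """Pick kappa maximising the gap between the true and the randomly paired KCCA spectra."""
    perm = np.random.default_rng(seed).permutation(Ky.shape[0])
    Ky_rand = Ky[np.ix_(perm, perm)]                     # randomise the image-text association
    gaps = []
    for kappa in kappas:
        lam = kcca_spectrum(Kx, Ky, kappa)               # spectrum of the true pairing
        lam_r = kcca_spectrum(Kx, Ky_rand, kappa)        # spectrum of the randomised pairing
        gaps.append(np.linalg.norm(lam_r - lam))
    return kappas[int(np.argmax(gaps))]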
To perform the test image retrieval we compute the features of the images
and text query using the Gram-Schmidt algorithm. Once we have obtained
the features for the test query (text) and test images we project them into the
semantic feature space using β̃ and α̃ (which are computed through training)
respectively. Now we can compare them using an inner product of the semantic
feature vector. The higher the value of the inner product, the more similar the
two objects are. Hence, we retrieve the images whose inner products with the
test query are highest.
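In code, the retrieval step is a projection followed by inner products. The sketch below is illustrative: A and B stand for the learnt image and text projection directions (α̃ and β̃ restricted to the chosen number of eigenvectors), and the inputs are assumed to be the Gram-Schmidt features of the test images and the test text query.

import numpy as np

def retrieve(text_query_feat, image_feats, A, B, n_retrieve=10):
    """Rank test images against a text query in the learnt semantic space."""
    q = text_query_feat @ B                   # project the text query with the text directions
    imgs = image_feats @ A                    # project the candidate images with the image directions
    scores = imgs @ q                         # inner products in the semantic space
    return np.argsort(-scores)[:n_retrieve]   # indices of the most similar images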
Image Set GVSM success KCCA success (30) KCCA success (5)
10 78.93% 85% 90.97%
30 76.82% 83.02% 90.69%
Figure 2 Success plot for content-based KCCA (with 5 and 30 eigenvectors) against GVSM: success (%) against image set size.
Figure 3 Images retrieved for the text query: ”height: 6-11 weight: 235 lbs
position: forward born: september 18, 1968, split, croatia college: none”
Image set GVSM success KCCA success (30) KCCA success (150)
10 8% 17.19% 59.5%
30 19% 32.32% 69%
In Table 3 we compare the performance of the KCCA algorithm with the GVSM over 10 and 30 image sets, while in Table 4 we present the overall success over all image sets. In Figure 6 we see the overall performance of the KCCA method against the GVSM for all possible image sets.
(Figure: success (%) against the number of eigenvectors used.)
The success rate in Table 3 and Figure 6 is computed as
\[ \text{success} = \frac{1}{600} \sum_{j=1}^{600} \text{count}_j \times 100\%, \]
where count_j = 1 if the exact matching image to text query j was present in the set, else count_j = 0. The success rate in Table 4 is computed as above and averaged over all image sets.
Figure 5 Images retrieved for the text query: ”at phoenix sky harbor on july
6, 1997. 757-2s7, n907wa phoenix suns taxis past n902aw teamwork america
west america west 757-2s7, n907wa phoenix suns taxis past n901aw arizona at
phoenix sky harbor on july 6, 1997.” The actual match is the middle picture in
the first row.
and the remaining eigenvectors would not necessarily add meaningful semantic information.
Figure 6 Success plot for KCCA mate-based against GVSM (success (%) against
image set size).
The difference in performance between the a priori value κ̂ and the newly found optimal value κ is 1.0423% for 5 eigenvectors and 5.031% for 30 eigenvectors. The more substantial increase in performance in the latter case is due to the increase in the regularisation parameter, which compensates for the substantial decrease in performance (Figure 6) of the content-based retrieval when a high-dimensional semantic feature space is used.
(Figures: success (%) plotted against the number of eigenvectors used, and against the regularisation parameter kappa.)
The difference in performance between the a priori value κ̂ and the newly found optimal value κ is 0.627% for 150 eigenvectors and 0.7586% for 30 eigenvectors.
Our observed results support our proposed method for selecting the regu-
larisation parameter κ in an a priori fashion, since the difference between the
actual optimal κ and the a priori κˆ is very slight.
6 Generalisation of Canonical Correlation Analysis
We use
\[ \operatorname{Tr}(A) = \sum_i a_{ii} \qquad (6.1) \]
to denote the trace of a square matrix A. Let the set Y ⊆ R^n be the feasibility domain for y determined by the constraint g(y) = 0.
Assume the function f is convex in both variables x and y, and that the optimal solution for x can be expressed as a function h(y) of the optimal solution for y, where h is defined on the whole set Y and the functions f, g, h are twice continuously differentiable on R^m × Y. Then the optimisation problem with the same constraint,
\[ \min_{y} f(h(y), y) \qquad (6.8) \]
subject to (6.9)
\[ g(y) = 0, \qquad (6.10) \]
\[ y \in R^n, \qquad (6.11) \]
has the same optimum as the original problem (6.4).
Proof. Let the optimal solution of equation (6.4) be denoted by (x_1, y_1), and that of equation (6.8) by y_2. From the convexity of f and the equality of the feasibility domains, the optimal solutions have to be the same.
Let H^{(1)} and H^{(2)} be matrices whose columns contain the two sets of variables; we seek linear combinations of their columns. Introducing notation for the products of the matrices to simplify the formulas,
\[ \Sigma_{ij} = H^{(i)'} H^{(j)}, \qquad i, j = 1, 2. \qquad (6.12) \]
We are looking for linear combinations of the columns of these matrices such that the first pair of vectors (a_1^{(1)}, a_1^{(2)}) is the optimal solution of the optimisation problem
\[ \max_{a_1^{(1)}, a_1^{(2)}} a_1^{(1)'} \Sigma_{12} a_1^{(2)} \qquad (6.13) \]
subject to (6.14)
\[ a_1^{(1)'} \Sigma_{11} a_1^{(1)} = 1, \qquad (6.15) \]
\[ a_1^{(2)'} \Sigma_{22} a_1^{(2)} = 1. \qquad (6.16) \]
The meaning of this optimisation problem is to find the maximum correlation between linear combinations of the columns of the matrices H^{(1)}, H^{(2)}, subject to the lengths of the vectors corresponding to these linear combinations being normalised to 1.
To determine the remaining pairs of vectors (the columns of A^{(1)} and A^{(2)}), a series of optimisation problems is solved successively. For the pair of vectors (a_r^{(1)}, a_r^{(2)}), r = 2, ..., p, we have
\[ \max_{a_r^{(1)}, a_r^{(2)}} a_r^{(1)'} \Sigma_{12} a_r^{(2)} \qquad (6.17) \]
subject to (6.18)
\[ a_r^{(k)'} \Sigma_{kk} a_r^{(k)} = 1, \qquad (6.19) \]
\[ a_r^{(k)'} \Sigma_{kk} a_j^{(k)} = 0, \qquad (6.20) \]
\[ a_r^{(k)'} \Sigma_{kl} a_j^{(l)} = 0, \qquad (6.21) \]
\[ k, l = 1, 2, \quad j = 1, \ldots, r - 1. \qquad (6.22) \]
Problem (6.17) is problem (6.13) expanded by orthogonality constraints; namely, the components of every new pair in the iteration have to be orthogonal to the components of the previous pairs. The upper limit p of the iteration has to satisfy p ≤ min(rank(H^{(1)}), rank(H^{(2)})).
After the substitution the first problem becomes
\[ \max_{y_1^{(1)}, y_1^{(2)}} y_1^{(1)'} D_{12} y_1^{(2)} \qquad (6.26) \]
subject to (6.27)
\[ y_1^{(k)'} y_1^{(k)} = 1, \qquad k = 1, 2. \qquad (6.28) \]
Differentiating the corresponding Lagrangian L_1 gives
\[ \frac{\partial L_1}{\partial y_1^{(1)}} = 2 D_{12} y_1^{(2)} - 2\lambda_1 y_1^{(1)} = 0, \qquad (6.31) \]
\[ \frac{\partial L_1}{\partial y_1^{(2)}} = 2 D_{21} y_1^{(1)} - 2\lambda_2 y_1^{(2)} = 0. \qquad (6.32) \]
Multiplying equation (6.31) by y_1^{(1)'} and equation (6.32) by y_1^{(2)'} and dividing by the constant 2 provides
\[ y_1^{(1)'} D_{12} y_1^{(2)} - \lambda_1 y_1^{(1)'} y_1^{(1)} = 0, \qquad (6.33) \]
\[ y_1^{(2)'} D_{21} y_1^{(1)} - \lambda_2 y_1^{(2)'} y_1^{(2)} = 0. \qquad (6.34) \]
Based on the constraints of the optimisation problem (6.26) and the identity D_{21} = D_{12}' we have
\[ \lambda_1 = \lambda_2 = y_1^{(1)'} D_{12} y_1^{(2)}. \qquad (6.35) \]
After replacing λ_1 and λ_2 with λ the following equality system can be formulated:
\[ \begin{pmatrix} -\lambda I & D_{12} \\ D_{21} & -\lambda I \end{pmatrix} \begin{pmatrix} y_1^{(1)} \\ y_1^{(2)} \end{pmatrix} = 0. \qquad (6.36) \]
It is not too hard to realise that this equality system is a singular vector and singular value problem for the matrix D_{12}, with y_1^{(1)} and y_1^{(2)} being a left and a right singular vector and the value of the Lagrangian λ equal to the corresponding singular value. Based on these statements we can claim that the optimal solutions are the singular vectors belonging to the greatest singular value of the matrix D_{12}.
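This characterisation can be checked directly. The sketch below assumes Σ_11 and Σ_22 are invertible, uses D_12 = Σ_11^{-1/2} Σ_12 Σ_22^{-1/2} and y^(k) = Σ_kk^{1/2} a^(k) (the substitution used later in this section), and recovers the leading pair from the singular value decomposition of D_12; the helper names are ours.

import numpy as np

def inv_sqrt(S):
    """Inverse symmetric square root of a symmetric positive definite matrix."""
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

def first_pair(H1, H2):
    """Leading pair (a1, a2) maximising a1' Sigma12 a2 under the unit-variance constraints."""
    S11, S22, S12 = H1.T @ H1, H2.T @ H2, H1.T @ H2
    W1, W2 = inv_sqrt(S11), inv_sqrt(S22)
    U, svals, Vt = np.linalg.svd(W1 @ S12 @ W2)      # D12 = Sigma11^{-1/2} Sigma12 Sigma22^{-1/2}
    a1 = W1 @ U[:, 0]                                # a^(1) = Sigma11^{-1/2} y^(1)
    a2 = W2 @ Vt[0, :]                               # a^(2) = Sigma22^{-1/2} y^(2)
    return svals[0], a1, a2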
subject to (6.45)
\[ A^{(k)'} \Sigma_{kk} A^{(k)} = I, \qquad (6.46) \]
\[ a_i^{(k)'} \Sigma_{kl} a_j^{(l)} = 0, \qquad (6.47) \]
\[ k, l = 1, 2, \quad l \neq k, \quad i, j = 1, \ldots, p, \quad j \neq i, \qquad (6.48) \]
where I is the identity matrix of size p × p.
Repeating the substitution in equation (6.23), the set of feasible vectors for the simultaneous problem is equal to the left and right singular vectors of the matrix D_{12}; hence the optimal solution is compatible with that of the successive problems.
subject to (6.50)
\[ A^{(k)'} \Sigma_{kk} A^{(k)} = I, \qquad (6.51) \]
\[ a_i^{(k)'} \Sigma_{kl} a_j^{(l)} = 0, \qquad (6.52) \]
\[ k, l = 1, 2, \quad l \neq k, \quad i, j = 1, \ldots, p, \quad j \neq i. \qquad (6.53) \]
Unfolding the objective function of the minimisation problem (6.49) shows that the optimisation problem is the same as the maximisation problem (6.44).
subject to (6.55)
\[ A^{(k)'} \Sigma_{kk} A^{(k)} = I, \qquad (6.56) \]
\[ a_i^{(k)'} \Sigma_{kl} a_j^{(l)} = 0, \qquad (6.57) \]
\[ k, l = 1, \ldots, K, \quad l \neq k, \quad i, j = 1, \ldots, p, \quad j \neq i. \qquad (6.58) \]
In the forthcoming sections we will show how to simplify this problem.
The total squared distance, that is, the sum of the squared Euclidean distances between all possible pairs of vectors in X, is equal to
\[ \frac{1}{2} \sum_{k=1}^{m} \sum_{l=1, l \neq k}^{m} \| x_k - x_l \|_2^2 = \qquad (6.59) \]
\[ = \frac{1}{2} \sum_{i=1}^{n} \left( \sum_{k,l=1}^{m} x_{ki}^2 + \sum_{k,l=1}^{m} x_{li}^2 - 2 \sum_{k,l=1}^{m} x_{ki} x_{li} \right) \qquad (6.63) \]
\[ = \frac{1}{2} \sum_{i=1}^{n} \left( m \sum_{k=1}^{m} x_{ki}^2 + m \sum_{l=1}^{m} x_{li}^2 - 2 \sum_{k=1}^{m} x_{ki} \sum_{l=1}^{m} x_{li} \right). \qquad (6.64) \]
Hence the total squared distance turns out to be equal to the sum of the component-wise variances of the vectors in X multiplied by the square of the number of vectors.
The components of the optimal solution are equal to the mean values of the
corresponding components of the known vectors.
subject to (6.71)
\[ a_i^{(k)'} \Sigma_{kl} a_j^{(l)} = \begin{cases} 1 & \text{if } k = l \text{ and } i = j, \\ 0 & \text{if } (k \neq l \text{ and } i = j) \text{ or } (k = l \text{ and } i \neq j), \end{cases} \qquad (6.72) \]
\[ k, l = 1, \ldots, K, \quad i, j = 1, \ldots, p, \quad \text{except when } k \neq l \text{ and } i = j, \qquad (6.73) \]
where a_i^{(k)} denotes the ith column of the matrix A^{(k)} containing the possible linear combinations.
where we can compute the inverse because the columns of the matrix H^{(k)} are independent, meaning Σ_{kk} has full rank. We can transform this optimisation problem into a simpler form. First, we modify the set of constraints. To make this modification readable we introduce the notation
\[ \Sigma_{kk}^{-1/2} \Sigma_{kl} \Sigma_{ll}^{-1/2} = D_{kl}, \qquad k, l = 1, \ldots, K, \qquad (6.75) \]
\[ y_i^{(k)} = \Sigma_{kk}^{1/2} a_i^{(k)}. \]
The constraints then read
\[ y_i^{(k)'} y_j^{(k)} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j, \end{cases} \qquad (6.76) \]
\[ k = 1, \ldots, K, \quad i, j = 1, \ldots, p, \qquad (6.77) \]
\[ y_i^{(k)'} D_{kl} y_j^{(l)} = 0, \qquad (6.78) \]
\[ k, l = 1, \ldots, K, \quad k \neq l, \quad i, j = 1, \ldots, p, \quad i \neq j, \qquad (6.79) \]
for which we can recognise the singular value decomposition problems of the matrices {D_kl}. If we consider the matrix D_kl for a fixed pair of indices k, l and apply the singular value decomposition, D_kl = Y^{(k)} Λ_{kl} Y^{(l)'}, the matrices Y^{(k)} and Y^{(l)} have columns equal to the vectors y_i^{(k)} and y_i^{(l)} respectively, where i = 1, ..., p, Λ_{kl} is a diagonal matrix, and Y^{(k)'} Y^{(k)} = I, Y^{(l)'} Y^{(l)} = I. The constraints do not contain the items having indices with the properties k ≠ l and i = j; these give the singular values of the matrix D_kl,
\[ y_i^{(k)'} D_{kl} y_i^{(l)} = \Lambda_{ii}. \qquad (6.81) \]
The consequence of the singular decomposition form is that the set of feasible solutions of the optimisation problem with constraints (6.76) is equal to the set of singular vectors of the matrices {D_kl, k, l = 1, ..., K}.
To express the objective function of the optimisation problem (6.70) we use the notation
\[ Q_k = H^{(k)} \Sigma_{kk}^{-1/2}, \qquad (6.82) \]
\[ D_{kl} = Q_k' Q_l. \qquad (6.83) \]
We can derive another statement about the optimal solution of the problem. Exploiting the definition of the Frobenius norm, the objective function (6.70) can be rewritten as a sum of Euclidean norms of column vectors, where x_i denotes the ith column of the matrix X:
\[ \frac{1}{K} \sum_{k=1}^{K} \| X - H^{(k)} A^{(k)} \|_F^2 = \qquad (6.84) \]
\[ = \frac{1}{K} \sum_{k=1}^{K} \sum_{i=1}^{p} \| x_i - H^{(k)} a_i^{(k)} \|_2^2 = \qquad (6.85) \]
\[ = \frac{1}{K} \sum_{k=1}^{K} \sum_{i=1}^{p} \| x_i - Q_k y_i^{(k)} \|_2^2 = \qquad (6.86) \]
\[ = \frac{1}{K} \sum_{k=1}^{K} \sum_{i=1}^{p} \left\langle x_i - Q_k y_i^{(k)}, \; x_i - Q_k y_i^{(k)} \right\rangle. \qquad (6.87) \]
After computing the partial derivatives, where x_i denotes the ith column of the matrix X, we get
\[ \frac{\partial L}{\partial x_i} = \sum_{k=1}^{K} \left( 2 x_i - 2 Q_k y_i^{(k)} \right) = 0, \qquad i = 1, \ldots, p, \qquad (6.92) \]
\[ \frac{\partial L}{\partial y_i^{(k)}} = 2 D_{kk} y_i^{(k)} - 2 \sum_{j} \lambda_{k,ij} y_j^{(k)} - \sum_{l \neq k} \sum_{j \neq i} \lambda_{kl,ij} D_{kl} y_j^{(l)} = 0, \qquad (6.93) \]
\[ k = 1, \ldots, K, \quad i = 1, \ldots, p. \qquad (6.94) \]
Based on the proposition (3) we can replace the variable X in equation (6.70)
by an expression of the other variables without changing the optimum value
and the optimal solution. Thus we have the variance problem.
7 Conclusions
Through this study we have presented a tutorial on canonical correlation
analysis and have established a novel general approach to retrieving images
based solely on their content. This is then applied to content-based and mate-
based retrieval. Experiments show that image retrieval can be more accurate
than with the Generalised Vector Space Model. We demonstrate that one can choose the regularisation parameter κ a priori so that it performs well in very different regimes. Hence we have come to the conclusion that kernel Canonical
Correlation Analysis is a powerful tool for image retrieval via content. In the
future we will extend our experiments to other data collections.
These approaches can give tools to handle some problems in the kernel space,
where the inner products and the distances between the points are known but
the coordinates are not. For some problems it is sufficient to know only the
coordinates of a few special points, which can be expressed from the known
inner product, e.g. to do cluster analysis in the kernel space and to compute
the coordinates of the cluster centres only.
Acknowledgments
We would like to acknowledge the financial support of EU Projects KerMIT,
No. IST-2000-25341 and LAVA, No. IST-2001-34405.
1 Proof that ‖K − G_iG_i'‖ ≤ η
Proof.
\[ \operatorname{Trace}(AB) = \sum_{i=1}^{n} (AB)_{ii} = \sum_{i,j=1}^{n} a_{ij} b_{ji} = \sum_{j,i=1}^{n} b_{ji} a_{ij} = \sum_{j=1}^{n} (BA)_{jj} = \operatorname{Trace}(BA). \]
Proof.
\[ \operatorname{Trace}(\Lambda) = \operatorname{Trace}(V'AV) = \operatorname{Trace}((V'A)V) = \operatorname{Trace}(V(V'A)) = \operatorname{Trace}(VV'A) = \operatorname{Trace}(A). \]
Proof. The matrix norm is defined as
\[ \|A\| = \max_{x \neq 0} \frac{\|Ax\|}{\|x\|}. \]
Hence we obtain
\[ \max_{x \neq 0} \frac{\|Ax\|}{\|x\|} = \max_{\|x\| = 1} \|Ax\|, \qquad \|Ax\| = \sqrt{x'A'Ax}, \qquad \|Ax\|^2 = x'A'Ax. \]
Writing the eigendecomposition A'A = UDU', we have
\[ \|Ax\|^2 = x'UDU'x. \]
Hence we obtain
\[ \|A\| = \max_i \sqrt{\lambda_i}, \]
where the λ_i are the eigenvalues of A'A.
1.2 Proof
Theorem 7. If K is a positive definite matrix and G_iG_i' is its incomplete Cholesky decomposition, then the Euclidean norm of K − G_iG_i' is less than or equal to the trace of the uncalculated part of K. Let ∆K_i be the uncalculated part of K and let η = Trace(∆K_i); then ‖K − G_iG_i'‖ ≤ η.
Proof. Let GG' be the complete Cholesky decomposition K = GG', where G is a lower triangular matrix whose upper triangle is zero,
\[ G = \begin{pmatrix} A & 0 \\ B & C \end{pmatrix}. \]
Let G_iG_i' be the incomplete decomposition of K after i iterations of the Cholesky factorisation procedure,
\[ G_i = G_{1:n,1:i} = \begin{pmatrix} A \\ B \end{pmatrix}, \qquad \Delta K_i = \begin{pmatrix} 0 & 0 \\ 0 & CC' \end{pmatrix}. \]
We show that CC' is positive semi-definite:
\[ CC' = K_{i+1:n,i+1:n} - (G_i G_i')_{i+1:n,i+1:n} = K_{i+1:n,i+1:n} - BB' = K_{i+1:n,i+1:n} - B A^{-1} A B' = K_{i+1:n,i+1:n} - B A^{-1} (A B') = K_{i+1:n,i+1:n} - G_{i+1:n,1:i}\, G_{1:i,1:i}^{-1}\, K_{1:i,i+1:n}; \]
therefore, for any x,
\[ x' CC' x = \langle C'x, C'x \rangle \geq 0, \]
so every eigenvalue λ_C of CC' satisfies λ_C ≥ 0. Hence CC' is a positive semi-definite matrix, and ∆K_i is also positive semi-definite. Using Lemma 6 we are now able to show that
\[ \| K - G_i G_i' \| = \| \Delta K_i \| = \Big\| \sum_{i} \lambda_i w_i w_i' \Big\| = \max_i \lambda_i. \]
As the maximum eigenvalue is less than or equal to the sum of all the (non-negative) eigenvalues, using Lemma 5 we are able to rewrite the expression as
\[ \| K - G_i G_i' \| \leq \sum_{i=1}^{n} \lambda_i = \operatorname{Trace}(\Lambda) = \operatorname{Trace}(\Delta K_i). \]
Therefore,
\[ \| K - G_i G_i' \| \leq \eta. \]
Bibliography
[2] Francis Bach and Michael Jordan. Kernel independent component analysis. Journal of Machine Learning Research, 3:1-48, 2002.
[6] Nello Cristianini, John Shawe-Taylor, and Huma Lodhi. Latent semantic kernels. In Carla Brodley and Andrea Danyluk, editors, Proceedings of ICML-01, 18th International Conference on Machine Learning, pages 66-73. Morgan Kaufmann Publishers, San Francisco, US, 2001.
[7] Colin Fyfe and Pei Ling Lai. ICA using kernel canonical correlation analysis.
[8] Colin Fyfe and Pei Ling Lai. Kernel and nonlinear canonical correlation analysis. International Journal of Neural Systems, 2001.
[11] David R. Hardoon and John Shawe-Taylor. KCCA for different level precision in content-based image retrieval. Submitted to Third International Workshop on Content-Based Multimedia Indexing, IRISA, Rennes, France, 2003.
[16] Malte Kuss and Thore Graepel. The geometry of kernel canonical correla-
tion analysis. 2002.
[17] Yong Rui, Thomas S. Huang, and Shih-Fu Chang. Image retrieval: Cur-
rent techniques, promising directions, and open issues. Journal of Visual
Communications and Image Representation, 10:39–62, 1999.