
Canonical correlation analysis; An overview with application to learning methods

David R. Hardoon, Sandor Szedmak and John Shawe-Taylor

Department of Computer Science
Royal Holloway, University of London
Egham, Surrey TW20 0EX, England
{davidh, sandor, john}@cs.rhul.ac.uk

Technical Report
CSD-TR-03-02
May 28, 2003
Abstract

We present a general method using kernel Canonical Correlation Analysis to
learn a semantic representation of web images and their associated text. The
semantic space provides a common representation and enables a comparison
between the text and images. In the experiments we look at two approaches
to retrieving images based only on their content from a text query. We compare
the approaches against a standard cross-representation retrieval technique
known as the Generalised Vector Space Model.

Keywords: Canonical correlation analysis, kernel canonical correlation
analysis, partial Gram-Schmidt orthogonalisation, Cholesky decomposition,
incomplete Cholesky decomposition, kernel methods.

1 Introduction
During recent years there have been advances in data learning using kernel
methods. Kernel representations offer an alternative way of learning non-linear
functions by projecting the data into a high dimensional feature space, thereby
increasing the computational power of linear learning machines, though this
still leaves open the issue of how best to choose the features or the kernel
function in ways that will improve performance. We review some of the methods
that have been developed for learning the feature space.

• Principal Component Analysis (PCA) is a multivariate data analysis procedure
that involves a transformation of a number of possibly correlated variables
into a smaller number of uncorrelated variables known as principal components.
PCA only makes use of the training inputs while making no use of the labels.

• Independent Component Analysis (ICA), in contrast to correlation-based
transformations such as PCA, not only decorrelates the signals but also reduces
higher-order statistical dependencies, attempting to make the signals as
independent as possible. In other words, ICA is a way of finding a linear, not
necessarily orthogonal, co-ordinate system in any multivariate data. The
directions of the axes of this co-ordinate system are determined by both the
second and higher order statistics of the original data. The goal is to perform
a linear transform which makes the resulting variables as statistically
independent from each other as possible.

• Partial Least Squares (PLS) is a method similar to canonical correlation
analysis. It selects feature directions that are useful for the task at hand,
though PLS only uses one view of an object and the label as the corresponding
pair. PLS could be thought of as a method which looks for directions that are
good at distinguishing the different labels.

• Canonical Correlation Analysis (CCA) is a method of correlating linear
relationships between two multidimensional variables. CCA can be seen as
using complex labels as a way of guiding feature selection towards the
underlying semantics. CCA makes use of two views of the same semantic object
to extract the representation of the semantics. The main difference between
CCA and the other three methods is that CCA is closely related to mutual
information (Borga 1998 [3]). Hence CCA can be easily motivated in
information-based tasks and is our natural selection.

Proposed by H. Hotelling in 1936 [12], CCA can be seen as the problem of
finding basis vectors for two sets of variables such that the correlation
between the projections of the variables onto these basis vectors is mutually
maximised. In an attempt to increase the flexibility of the feature selection,
kernelisation of CCA (KCCA) has been applied to map the hypotheses to a
higher-dimensional feature space. KCCA has been applied in some preliminary
work by Fyfe & Lai [8], Akaho [1] and more recently Vinokourov et al. [19]
with improved results.

During recent years there has been a vast increase in the amount of
multimedia content available both off-line and online, though we are unable to
access or make use of this data unless it is organised in such a way as to
allow efficient browsing. To enable content-based retrieval with no reference
to labelling we attempt to learn the semantic representation of images and
their associated text. We present a general approach using KCCA that can
be used for content-based [11] as well as mate-based retrieval [18, 11]. In both
cases we compare the KCCA approach to the Generalised Vector Space Model
(GVSM), which aims at capturing some term-term correlations by looking at
co-occurrence information.

This study aims to serve as a tutorial and give additional novel contribu-
tions in the following ways:

• In this study we follow the work of Borga [4] where we represent the
eigenproblem as two eigenvalue equations as this allows us to reduce the com-
putation time and dimensionality of the eigenvectors.

• Further to that, we follow the idea of Bach & Jordan [2] to compute a
new correlation matrix with reduced dimensionality. Though Bach & Jordan
[2] address a very different problem, they use the same underlying technique of
Cholesky decomposition to re-represent the kernel matrices. We show that
partial Gram-Schmidt orthogonalisation [6] is equivalent to incomplete
Cholesky decomposition, in the sense that incomplete Cholesky decomposition
can be seen as a dual implementation of partial Gram-Schmidt.

• We show that the general approach can be adapted to two different types
of problems, content and mate retrieval, by only changing the selection of
eigenvectors used in the semantic projection.

• To simplify the learning of the KCCA we explore a method of selecting
the regularisation parameter a priori such that it gives a value that performs
well in several different tasks.

In this study we also present a generalisation of the framework for canonical
correlation analysis. Our approach is based on the works of Gifi (1990) and
Ketterling (1971). The purpose of the generalisation is to extend the canonical
correlation as an associativity measure between two sets of variables to more
than two sets, whilst preserving most of its properties. The generalisation
starts with the optimisation problem formulation of canonical correlation. By
changing the objective function we will arrive at the multi-set problem.
Applying similar constraint sets in the optimisation problems we find that the
feasible solutions are singular vectors of matrices, which are derived in the
same way for the original and generalised problems.

In Section 2 we present the theoretical background of CCA. In Section 3
we present the CCA and KCCA algorithms. Approaches to deal with the
computational problems that arise in Section 3 are presented in Section 4. Our
experimental results are presented in Section 5. In Section 6 we present the
generalisation framework for CCA, while Section 7 draws final conclusions.

2 Theoretical Foundations
Proposed by H. Hotelling in 1936 [12], Canonical correlation analysis can be
seen as the problem of finding basis vectors for two sets of variables such that
the correlation between the projections of the variables onto these basis vectors
is mutually maximised. Correlation analysis is dependent on the co-ordinate
system in which the variables are described, so even if there is a very strong
linear relationship between two sets of multidimensional variables, depending
on the co-ordinate system used, this relationship might not be visible as a
correlation. Canonical correlation analysis seeks a pair of linear
transformations, one for each of the sets of variables, such that when the sets
of variables are transformed the corresponding co-ordinates are maximally
correlated.

Consider a multivariate random vector of the form (x, y). Suppose we are
given a sample of instances S = ((x_1, y_1), ..., (x_n, y_n)) of (x, y); we use S_x to
denote (x_1, ..., x_n) and similarly S_y to denote (y_1, ..., y_n). We can consider
defining a new co-ordinate for x by choosing a direction w_x and projecting x
onto that direction,

$$ x \mapsto \langle w_x, x \rangle. $$

If we do the same for y by choosing a direction w_y, we obtain a sample of the
new x co-ordinate as

$$ S_{x, w_x} = (\langle w_x, x_1 \rangle, \ldots, \langle w_x, x_n \rangle), $$

with the corresponding values of the new y co-ordinate being

$$ S_{y, w_y} = (\langle w_y, y_1 \rangle, \ldots, \langle w_y, y_n \rangle). $$

The first stage of canonical correlation is to choose w_x and w_y to maximise
the correlation between the two vectors. In other words, the function to be
maximised is

$$ \rho = \max_{w_x, w_y} \mathrm{corr}(S_x w_x, S_y w_y) = \max_{w_x, w_y} \frac{\langle S_x w_x, S_y w_y \rangle}{\|S_x w_x\| \, \|S_y w_y\|}. $$

If we use $\hat{E}[f(x, y)]$ to denote the empirical expectation of the function f(x, y), where

$$ \hat{E}[f(x, y)] = \frac{1}{m} \sum_{i=1}^{m} f(x_i, y_i), $$

we can rewrite the correlation expression as

$$ \rho = \max_{w_x, w_y} \frac{\hat{E}[\langle w_x, x \rangle \langle w_y, y \rangle]}{\sqrt{\hat{E}[\langle w_x, x \rangle^2] \, \hat{E}[\langle w_y, y \rangle^2]}} = \max_{w_x, w_y} \frac{\hat{E}[w_x' x y' w_y]}{\sqrt{\hat{E}[w_x' x x' w_x] \, \hat{E}[w_y' y y' w_y]}}. $$

It follows that

$$ \rho = \max_{w_x, w_y} \frac{w_x' \hat{E}[x y'] w_y}{\sqrt{w_x' \hat{E}[x x'] w_x \; w_y' \hat{E}[y y'] w_y}}, $$

where we use A' to denote the transpose of a vector or matrix A.
Now observe that the covariance matrix of (x, y) is

$$ C(x, y) = \hat{E}\left[ \begin{pmatrix} x \\ y \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}' \right] = \begin{pmatrix} C_{xx} & C_{xy} \\ C_{yx} & C_{yy} \end{pmatrix} = C. \qquad (2.1) $$

The total covariance matrix C is a block matrix where the within-sets covariance
matrices are C_xx and C_yy and the between-sets covariance matrices are
C_xy = C_yx'.

Hence, we can rewrite the function ρ as

$$ \rho = \max_{w_x, w_y} \frac{w_x' C_{xy} w_y}{\sqrt{w_x' C_{xx} w_x \; w_y' C_{yy} w_y}}; \qquad (2.2) $$

the maximum canonical correlation is the maximum of ρ with respect to w_x
and w_y.

3 Algorithm
In this section we will give an overview of the Canonical Correlation Analysis
(CCA) and Kernel-CCA (KCCA) algorithms, where we formulate the
optimisation problem as a generalised eigenproblem.

3.1 Canonical Correlation Analysis


Observe that the solution of equation (2.2) is not affected by re-scaling w_x or
w_y, either together or independently, so that for example replacing w_x by αw_x
gives the quotient

$$ \frac{\alpha w_x' C_{xy} w_y}{\sqrt{\alpha^2 w_x' C_{xx} w_x \; w_y' C_{yy} w_y}} = \frac{w_x' C_{xy} w_y}{\sqrt{w_x' C_{xx} w_x \; w_y' C_{yy} w_y}}. $$

Since the choice of re-scaling is therefore arbitrary, the CCA optimisation
problem formulated in equation (2.2) is equivalent to maximising the numerator
subject to

$$ w_x' C_{xx} w_x = 1, \qquad w_y' C_{yy} w_y = 1. $$

The corresponding Lagrangian is

$$ L(\lambda, w_x, w_y) = w_x' C_{xy} w_y - \frac{\lambda_x}{2}\left(w_x' C_{xx} w_x - 1\right) - \frac{\lambda_y}{2}\left(w_y' C_{yy} w_y - 1\right). $$

Taking derivatives with respect to w_x and w_y we obtain

$$ \frac{\partial L}{\partial w_x} = C_{xy} w_y - \lambda_x C_{xx} w_x = 0, \qquad (3.1) $$

$$ \frac{\partial L}{\partial w_y} = C_{yx} w_x - \lambda_y C_{yy} w_y = 0. \qquad (3.2) $$

Subtracting w_y' times the second equation from w_x' times the first we have

$$ 0 = w_x' C_{xy} w_y - \lambda_x w_x' C_{xx} w_x - w_y' C_{yx} w_x + \lambda_y w_y' C_{yy} w_y, $$

which together with the constraints implies that λ_y − λ_x = 0; let λ = λ_x = λ_y.


Assuming C_yy is invertible we have

$$ w_y = \frac{C_{yy}^{-1} C_{yx} w_x}{\lambda}, \qquad (3.3) $$

and so substituting in equation (3.1) gives

$$ C_{xy} C_{yy}^{-1} C_{yx} w_x - \lambda^2 C_{xx} w_x = 0, $$

or

$$ C_{xy} C_{yy}^{-1} C_{yx} w_x = \lambda^2 C_{xx} w_x. \qquad (3.4) $$

We are left with a generalised eigenproblem of the form Ax = λBx. We can


therefore find the co-ordinate system that optimises the correlation between
corresponding co-ordinates by first solving for the generalised eigenvectors of
equation (3.4) to obtain the sequence of w x ’s and then using equation (3.3) to
find the corresponding w y’s.

As the covariance matrices C_xx and C_yy are symmetric positive definite
we are able to decompose them using a complete Cholesky decomposition
(more details on Cholesky decomposition can be found in Section 4.2)

$$ C_{xx} = R_{xx} R_{xx}', $$

where R_xx is a lower triangular matrix. If we let u_x = R_xx' w_x we are able to
rewrite equation (3.4) as follows

$$ C_{xy} C_{yy}^{-1} C_{yx} R_{xx}'^{-1} u_x = \lambda^2 R_{xx} u_x, $$
$$ R_{xx}^{-1} C_{xy} C_{yy}^{-1} C_{yx} R_{xx}'^{-1} u_x = \lambda^2 u_x. $$

We are therefore left with a symmetric eigenproblem of the form Ax = λx.
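As an illustration of the primal CCA computation just described, the following minimal NumPy/SciPy sketch (our own illustration, not code from the report) solves the generalised eigenproblem (3.4) with scipy.linalg.eigh, which accepts the B matrix of Ax = λBx directly; the small eps ridge is a hypothetical safeguard for near-singular covariance matrices:

    import numpy as np
    from scipy.linalg import eigh

    def linear_cca(X, Y, eps=1e-8):
        """Minimal primal CCA sketch: X, Y are centred data matrices with
        samples as rows. Returns correlations and directions w_x, w_y."""
        m = X.shape[0]
        Cxx = X.T @ X / m + eps * np.eye(X.shape[1])   # within-set covariances
        Cyy = Y.T @ Y / m + eps * np.eye(Y.shape[1])
        Cxy = X.T @ Y / m                              # between-set covariance

        # Generalised eigenproblem (3.4): Cxy Cyy^{-1} Cyx w_x = lambda^2 Cxx w_x
        A = Cxy @ np.linalg.solve(Cyy, Cxy.T)
        lam2, Wx = eigh(A, Cxx)                        # ascending eigenvalues
        order = np.argsort(lam2)[::-1]
        lam2, Wx = lam2[order], Wx[:, order]

        # Equation (3.3): w_y = Cyy^{-1} Cyx w_x / lambda
        lam = np.sqrt(np.clip(lam2, 0, None))
        Wy = np.linalg.solve(Cyy, Cxy.T @ Wx) / np.clip(lam, 1e-12, None)
        return lam, Wx, Wy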



3.2 Kernel Canonical Correlation Analysis


CCA may not extract useful descriptors of the data because of its linearity.
Kernel CCA offers an alternative solution by first projecting the data into a
higher dimensional feature space

$$ \phi : x = (x_1, \ldots, x_n) \mapsto \phi(x) = (\phi_1(x), \ldots, \phi_N(x)) \quad (n < N) $$

before performing CCA in the new feature space, essentially moving from the
primal to the dual representation approach. Kernels are methods of implicitly
mapping data into a higher dimensional feature space, a method known as the
"kernel trick". A kernel is a function K such that for all x, z ∈ X

$$ K(x, z) = \langle \phi(x) \cdot \phi(z) \rangle, \qquad (3.5) $$

where φ is a mapping from X to a feature space F. Kernels offer a great deal
of flexibility, as they can be generated from other kernels. In the kernel the
data only appears through entries in the Gram matrix; therefore this approach
gives a further advantage, as the number of tuneable parameters and the
updating time do not depend on the number of attributes being used.
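For concreteness, the short sketch below is our own illustration (the function name and the choice of a Gaussian kernel are our assumptions, though the experiments in Section 5 do use a Gaussian kernel for the image colour component); it computes a Gram matrix from data held as rows, using only inner products and distances, never the explicit feature map φ:

    import numpy as np

    def gaussian_gram(X, sigma):
        """Gram matrix K with K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
        sq_norms = np.sum(X**2, axis=1)
        sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
        return np.exp(-np.maximum(sq_dists, 0.0) / (2.0 * sigma**2))

    # Usage (hypothetical names): K_x = gaussian_gram(image_features, sigma);
    # K_y could come from a linear kernel on term-frequency vectors, K_y = T @ T.T.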

Using the definition of the covariance matrix in equation (2.1) we can rewrite
the covariance matrix C using the data matrices (of vectors) X and Y, which
have the sample vectors as rows and are therefore of size m × N; we obtain

$$ C_{xx} = X'X, \qquad C_{xy} = X'Y. $$

The directions w_x and w_y (of length N) can be rewritten as the projection of
the data onto the directions α and β (of length m)

$$ w_x = X'\alpha, \qquad w_y = Y'\beta. $$

Substituting into equation (2.2) we obtain the following

$$ \rho = \max_{\alpha, \beta} \frac{\alpha' X X' Y Y' \beta}{\sqrt{\alpha' X X' X X' \alpha \cdot \beta' Y Y' Y Y' \beta}}. \qquad (3.6) $$

Let K_x = XX' and K_y = YY' be the kernel matrices corresponding to the two
representations. We substitute into equation (3.6)

$$ \rho = \max_{\alpha, \beta} \frac{\alpha' K_x K_y \beta}{\sqrt{\alpha' K_x^2 \alpha \cdot \beta' K_y^2 \beta}}. \qquad (3.7) $$

We find that in equation (3.7) the variables are now represented in the dual
form.
Observe that as with the primal form presented in equation (2.2), equation (3.7)
is not affected by re-scaling of α and β either together or independently. Hence
the KCCA optimisation problem formulated in equation (3.7) is equivalent to
maximising the numerator subject to

$$ \alpha' K_x^2 \alpha = 1, \qquad \beta' K_y^2 \beta = 1. $$

The corresponding Lagrangian is

$$ L(\lambda, \alpha, \beta) = \alpha' K_x K_y \beta - \frac{\lambda_\alpha}{2}\left(\alpha' K_x^2 \alpha - 1\right) - \frac{\lambda_\beta}{2}\left(\beta' K_y^2 \beta - 1\right). $$

Taking derivatives with respect to α and β we obtain

$$ \frac{\partial L}{\partial \alpha} = K_x K_y \beta - \lambda_\alpha K_x^2 \alpha = 0, \qquad (3.8) $$

$$ \frac{\partial L}{\partial \beta} = K_y K_x \alpha - \lambda_\beta K_y^2 \beta = 0. \qquad (3.9) $$

Subtracting β' times the second equation from α' times the first we have

$$ 0 = \alpha' K_x K_y \beta - \lambda_\alpha \alpha' K_x^2 \alpha - \beta' K_y K_x \alpha + \lambda_\beta \beta' K_y^2 \beta = \lambda_\beta \beta' K_y^2 \beta - \lambda_\alpha \alpha' K_x^2 \alpha, $$

which together with the constraints implies that λ_α − λ_β = 0; let λ = λ_α = λ_β.


Considering the case where the kernel matrices K_x and K_y are invertible, we
have

$$ \beta = \frac{K_y^{-1} K_y^{-1} K_y K_x \alpha}{\lambda} = \frac{K_y^{-1} K_x \alpha}{\lambda}; $$

substituting in equation (3.8) we obtain

$$ K_x K_y K_y^{-1} K_x \alpha - \lambda^2 K_x K_x \alpha = 0. $$

Hence

$$ K_x K_x \alpha - \lambda^2 K_x K_x \alpha = 0 $$

or

$$ I \alpha = \lambda^2 \alpha. \qquad (3.10) $$

We are left with a generalised eigenproblem of the form Ax = λx. We can
deduce from equation (3.10) that λ = 1 for every vector α; hence we can
choose the projections w_x to be the unit vectors j_i, i = 1, ..., m, while w_y are the
columns of (1/λ) K_y^{-1} K_x. Hence when K_x or K_y is invertible, perfect correlation can
be formed. Since kernel methods provide high dimensional representations such
independence is not uncommon. It is therefore clear that a naive application of
CCA in a kernel-defined feature space will not provide useful results. In the next
section we investigate how this problem can be avoided.

4 Computational Issues
We observe from equation (3.10) that if K x is invertible maximal correlation is
obtained, suggesting learning is trivial. To force non-trivial learning we intro-
duce a control on the flexibility of the projections by penalising the norms of
the associated weight vectors by a convex combination of constraints based on
Partial Least Squares. Another computational issue that can arise is the use
of large training sets, as this can lead to computational problems and
degeneracy. To overcome this issue we apply partial Gram-Schmidt
orthogonalisation (equivalently incomplete Cholesky decomposition) to reduce
the dimensionality of the kernel matrices.

4.1 Regularisation
To force non-trivial learning on the correlation we introduce a control on the
flexibility of the projection mappings using Partial Least Squares (PLS) to
penalise the norms of the associated weights. We convexly combine the PLS
term with the KCCA term in the denominator of equation (3.7), obtaining

$$ \rho = \max_{\alpha, \beta} \frac{\alpha' K_x K_y \beta}{\sqrt{(\alpha' K_x^2 \alpha + \kappa \|w_x\|^2) \cdot (\beta' K_y^2 \beta + \kappa \|w_y\|^2)}} = \max_{\alpha, \beta} \frac{\alpha' K_x K_y \beta}{\sqrt{(\alpha' K_x^2 \alpha + \kappa \alpha' K_x \alpha) \cdot (\beta' K_y^2 \beta + \kappa \beta' K_y \beta)}}. $$

We observe that the new regularised equation is not affected by re-scaling of α
or β, hence the optimisation problem is subject to

$$ \alpha' K_x^2 \alpha + \kappa \alpha' K_x \alpha = 1, \qquad \beta' K_y^2 \beta + \kappa \beta' K_y \beta = 1. $$

The corresponding Lagrangian is

$$ L(\lambda_\alpha, \lambda_\beta, \alpha, \beta) = \alpha' K_x K_y \beta - \frac{\lambda_\alpha}{2}\left(\alpha' K_x^2 \alpha + \kappa \alpha' K_x \alpha - 1\right) - \frac{\lambda_\beta}{2}\left(\beta' K_y^2 \beta + \kappa \beta' K_y \beta - 1\right). $$

Taking derivatives with respect to α and β

$$ \frac{\partial L}{\partial \alpha} = K_x K_y \beta - \lambda_\alpha (K_x^2 \alpha + \kappa K_x \alpha), \qquad (4.1) $$

$$ \frac{\partial L}{\partial \beta} = K_y K_x \alpha - \lambda_\beta (K_y^2 \beta + \kappa K_y \beta). \qquad (4.2) $$

Subtracting β' times the second equation from α' times the first we have

$$ 0 = \alpha' K_x K_y \beta - \lambda_\alpha \alpha' (K_x^2 \alpha + \kappa K_x \alpha) - \beta' K_y K_x \alpha + \lambda_\beta \beta' (K_y^2 \beta + \kappa K_y \beta) $$
$$ = \lambda_\beta \beta' (K_y^2 \beta + \kappa K_y \beta) - \lambda_\alpha \alpha' (K_x^2 \alpha + \kappa K_x \alpha), $$

which together with the constraints implies that λ_α − λ_β = 0; let λ = λ_α = λ_β.

Considering the case where K_x and K_y are invertible, we have

$$ \beta = \frac{(K_y + \kappa I)^{-1} K_y^{-1} K_y K_x \alpha}{\lambda} = \frac{(K_y + \kappa I)^{-1} K_x \alpha}{\lambda}; $$

substituting in equation (4.1) gives

$$ K_x K_y (K_y + \kappa I)^{-1} K_x \alpha = \lambda^2 K_x (K_x + \kappa I) \alpha $$
$$ K_y (K_y + \kappa I)^{-1} K_x \alpha = \lambda^2 (K_x + \kappa I) \alpha $$
$$ (K_x + \kappa I)^{-1} K_y (K_y + \kappa I)^{-1} K_x \alpha = \lambda^2 \alpha. $$

We obtain a generalised eigenproblem of the form Ax = λx.
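A compact way to see this in practice is sketched below (our own illustration, not code from the report): given precomputed Gram matrices it forms the matrix of the last equation and takes its leading eigenvectors; the clipping of small eigenvalues is a hypothetical numerical safeguard.

    import numpy as np

    def regularised_kcca(Kx, Ky, kappa, n_components=5):
        """Sketch of regularised KCCA in the dual: solves
        (Kx + kappa I)^{-1} Ky (Ky + kappa I)^{-1} Kx alpha = lambda^2 alpha
        and recovers beta = (Ky + kappa I)^{-1} Kx alpha / lambda."""
        m = Kx.shape[0]
        I = np.eye(m)
        A = np.linalg.solve(Kx + kappa * I,
                            Ky @ np.linalg.solve(Ky + kappa * I, Kx))
        # A is not symmetric in general, so use the general eigen-solver.
        lam2, alphas = np.linalg.eig(A)
        order = np.argsort(-lam2.real)[:n_components]
        lam = np.sqrt(np.clip(lam2.real[order], 0, None))
        alphas = alphas.real[:, order]
        betas = np.linalg.solve(Ky + kappa * I, Kx @ alphas) / np.clip(lam, 1e-12, None)
        return lam, alphas, betas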

4.2 Cholesky Decomposition


We describe some background information on direct factorisation methods
based on triangular decomposition [13]; we seek a factorisation

$$ LU = A \qquad (4.3) $$

in which the diagonal elements of L are not necessarily unity. If we consider
L ≡ (l_ij) and U ≡ (u_ij), then equation (4.3) implies

$$ l_{kk} u_{kk} = a_{kk} - \sum_{p=1}^{k-1} l_{kp} u_{pk}, \qquad k \ge 2, \qquad (4.4) $$

$$ u_{kj} = \frac{1}{l_{kk}}\left(a_{kj} - \sum_{p=1}^{k-1} l_{kp} u_{pj}\right), \qquad j > k \ge 2, \qquad (4.5) $$

$$ l_{ik} = \frac{1}{u_{kk}}\left(a_{ik} - \sum_{p=1}^{k-1} l_{ip} u_{pk}\right), \qquad i > k \ge 2. \qquad (4.6) $$

Theorem 1. Let A be symmetric. If the factorisation LU = A is possible,
then the choice l_kk = u_kk implies l_ik = u_ki, that is, LL' = A.

Proof. Use equation (4.4) and induction on k.

A simple, non-singular, symmetric matrix for which the factorisation is not
possible is

$$ \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}. $$

On the other hand, if the symmetric matrix A is positive definite (i.e., x'Ax > 0
whenever x'x > 0), then the factorisation is possible. We have

Theorem 2. Let A be symmetric, positive definite. Then A can be factored in
the form

$$ LL' = A. $$

Proof. If we define $l_{kk} = u_{kk} = \sqrt{a_{kk} - \sum_{p=1}^{k-1} l_{kp} u_{pk}}$, then we obtain from the previous
equations LU = A with l_ik = u_ki.
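As a quick numerical illustration of Theorem 2 (our own sketch, not from the report), numpy.linalg.cholesky returns exactly such a lower triangular factor L with LL' = A for a symmetric positive definite A:

    import numpy as np

    A = np.array([[4.0, 2.0, 0.5],
                  [2.0, 3.0, 1.0],
                  [0.5, 1.0, 2.0]])          # symmetric positive definite
    L = np.linalg.cholesky(A)                # lower triangular factor
    assert np.allclose(L @ L.T, A)           # LL' = A as in Theorem 2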

Incomplete Cholesky Decomposition


Complete decomposition of a kernel matrix is an expensive step and should be
avoided with real world data. Incomplete Cholesky decomposition as described
in [2] differs from Cholesky decomposition in that all pivots which are below
a certain threshold are skipped. If M is the number of non-skipped pivots,
then we obtain a lower triangular matrix G_i with only M nonzero columns.
Symmetric permutations of rows and columns are necessary during the
factorisation if we require the rank to be as small as possible (Golub and Van Loan, 1983).

We describe the algorithm from [2] (with slight modification); the initialisation
step below is our reconstruction of a line lost in this copy:

Input: N × N matrix K, precision parameter η.

1. Initialisation: i = 1, K' = K, P = I, G = 0 (an N × N matrix of zeros),
   and set the residual diagonal G'_jj = K_jj for j = 1, ..., N.
2. While Σ_{j=i}^{N} G'_jj > η and i ≠ N + 1
   • Find best new element: j* = argmax_{j ∈ [i,N]} G'_jj
   • Update j* = (j* + i) − 1
   • Update permutation P: P = P · P_next (P_next swaps rows/columns i and j*)
   • Permute elements i and j* in K': K' = P_next · K' · P_next
   • Update (due to the new permutation) the already calculated elements of G:
     permute rows i and j* of G_{:, 1:i−1}
   • Permute elements (i, i) and (j*, j*) of the residual diagonal: G'(i, i) ↔ G'(j*, j*)
   • Set G_ii = sqrt(G'_ii)
   • Calculate the ith column of G:
     G_{i+1:n, i} = (1 / G_ii) ( K'_{i+1:n, i} − Σ_{j=1}^{i−1} G_{i+1:n, j} G_{ij} )
   • Update the residual diagonal: G'_jj = K'_jj − Σ_{k=1}^{i} G_{jk}², for j = i+1, ..., N
   • Update i = i + 1
3. Output P, G and M = i.

Output: an N × M lower triangular matrix G and a permutation matrix
P such that ‖P'KP − GG'‖ ≤ η (see appendix 1.2 for proof).

The algorithm involves picking one column of K at a time, choosing the
column to be added by greedily maximising a lower bound on the reduction
in the error of the approximation. After l steps, we have an approximation of
the form $\tilde{K}_l = G_l G_l'$, where G_l is N × l. The ranking of the N − l remaining vectors
can be computed by comparing the diagonal elements of the remainder matrix
$K - G_l G_l'$.

Partial Gram-Schmidt Orthogonalisation

We explore the Partial Gram-Schmidt Orthogonalisation (PGSO) algorithm,
described in [6], as our matrix decomposition approach. ICD can be seen as
equivalent to PGSO, as ICD is the dual implementation of PGSO. PGSO
works as follows: the projection is built up as the span of a subset of the
projections of a set of m training examples. These are selected by performing
a Gram-Schmidt orthogonalisation of the training vectors in the feature space.
We slightly modify the Gram-Schmidt algorithm so that it uses a precision
parameter as a stopping criterion, as shown in [2].

Given a kernel matrix K from a training set, and precision parameter η:

Initialisations:
    N = size of K (an N × N matrix)
    j = 1
    size and index are vectors with the same length as K
    feat is a zero matrix of the same size as K
    for i = 1 to N do
        norm2[i] = K_ii;
    end;

Algorithm:
    while Σ_i norm2[i] > η and j ≠ N + 1 do
        i_j = argmax_i (norm2[i]);
        index[j] = i_j;
        size[j] = sqrt(norm2[i_j]);
        for i = 1 to N do
            feat[i, j] = ( k(d_i, d_{i_j}) − Σ_{t=1}^{j−1} feat[i, t] · feat[i_j, t] ) / size[j];
            norm2[i] = norm2[i] − feat(i, j) · feat(i, j);
        end;
        j = j + 1;
    end;
    return feat

Output:
    ‖K − feat · feat'‖ ≤ η, where feat is an N × M lower triangular matrix
    (see appendix 1.2 for proof).

We observe that the output is equivalent to the output of ICD.

To classify a new example at location i:

Given a kernel K from a testing set (row i holds the kernel evaluations of the
new example against the training examples):

    for j = 1 to M do
        newfeat[j] = ( K_{i, index[j]} − Σ_{t=1}^{j−1} newfeat[t] · feat[index[j], t] ) / size[j];
    end;

The advantage of using the partial Gram-Schmidt orthogonalisation (PGSO) in
comparison to the incomplete Cholesky decomposition (as described in Section
4.2) is that there is no need for a permutation matrix P.
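The pseudocode above translates almost line for line into NumPy; the sketch below is our own rendering (function names and the vectorised inner loop are our choices, not the report's code) of PGSO with the precision-parameter stopping rule, together with the projection of new examples.

    import numpy as np

    def pgso(K, eta):
        """Partial Gram-Schmidt orthogonalisation of a kernel matrix K.

        Returns feat (N x M), index and size so that feat @ feat.T approximates
        K with residual trace at most eta."""
        N = K.shape[0]
        feat = np.zeros((N, N))
        index, size = [], []
        norm2 = np.diag(K).astype(float).copy()   # residual diagonal
        j = 0
        while norm2.sum() > eta and j < N:
            ij = int(np.argmax(norm2))            # pivot with largest residual
            index.append(ij)
            size.append(np.sqrt(norm2[ij]))
            feat[:, j] = (K[:, ij] - feat[:, :j] @ feat[ij, :j]) / size[j]
            norm2 -= feat[:, j] ** 2              # update residual diagonal
            j += 1
        return feat[:, :j], np.array(index), np.array(size)

    def pgso_project(K_test, index, size, feat):
        """Project test examples (rows of K_test against training examples)."""
        n_test, M = K_test.shape[0], len(index)
        newfeat = np.zeros((n_test, M))
        for j in range(M):
            newfeat[:, j] = (K_test[:, index[j]]
                             - newfeat[:, :j] @ feat[index[j], :j]) / size[j]
        return newfeat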

4.3 Kernel-CCA with PGSO


So far we have considered the kernel matrices as invertible, although in
practice this may not be the case. In this section we address the issue of using
large training sets, which may lead to computational problems and degeneracy.
We use PGSO to approximate the kernel matrices such that we are able to
re-represent the correlation with reduced dimensionality.

Decomposing the kernel matrices K_x and K_y via PGSO, where R_x and R_y are
lower triangular matrices, gives

$$ K_x \approx R_x R_x', \qquad K_y \approx R_y R_y'. $$

Substituting the new representation into equations (3.8) and (3.9):

$$ R_x R_x' R_y R_y' \beta - \lambda R_x R_x' R_x R_x' \alpha = 0, \qquad (4.7) $$
$$ R_y R_y' R_x R_x' \alpha - \lambda R_y R_y' R_y R_y' \beta = 0. \qquad (4.8) $$

Multiplying the first equation by R_x' and the second equation by R_y' gives

$$ R_x' R_x R_x' R_y R_y' \beta - \lambda R_x' R_x R_x' R_x R_x' \alpha = 0, \qquad (4.9) $$
$$ R_y' R_y R_y' R_x R_x' \alpha - \lambda R_y' R_y R_y' R_y R_y' \beta = 0. \qquad (4.10) $$

Let Z denote the new correlation matrices with the reduced dimensionality:

$$ R_x' R_x = Z_{xx}, \quad R_y' R_y = Z_{yy}, \quad R_x' R_y = Z_{xy}, \quad R_y' R_x = Z_{yx}. $$

Let $\tilde\alpha$ and $\tilde\beta$ be the reduced directions, such that

$$ \tilde\alpha = R_x' \alpha, \qquad \tilde\beta = R_y' \beta. $$

Substituting in equations (4.9) and (4.10) we find that we return to the primal
representation of CCA with a dual representation of the data:

$$ Z_{xx} Z_{xy} \tilde\beta - \lambda Z_{xx}^2 \tilde\alpha = 0, $$
$$ Z_{yy} Z_{yx} \tilde\alpha - \lambda Z_{yy}^2 \tilde\beta = 0. $$

Assume that Z_xx and Z_yy are invertible. Multiplying the first equation
by Z_xx^{-1} and the second by Z_yy^{-1} gives

$$ Z_{xy} \tilde\beta - \lambda Z_{xx} \tilde\alpha = 0, \qquad (4.11) $$
$$ Z_{yx} \tilde\alpha - \lambda Z_{yy} \tilde\beta = 0. \qquad (4.12) $$

We are able to rewrite $\tilde\beta$ from equation (4.12) as

$$ \tilde\beta = \frac{Z_{yy}^{-1} Z_{yx} \tilde\alpha}{\lambda}, $$

and substituting in equation (4.11) gives

$$ Z_{xy} Z_{yy}^{-1} Z_{yx} \tilde\alpha = \lambda^2 Z_{xx} \tilde\alpha; \qquad (4.13) $$

we are left with a generalised eigenproblem of the form Ax = λBx. Let SS'
be equal to the complete Cholesky decomposition of Z_xx, such that Z_xx = SS'
where S is a lower triangular matrix, and let $\hat\alpha = S' \tilde\alpha$. Substituting in equation
(4.13) we obtain

$$ S^{-1} Z_{xy} Z_{yy}^{-1} Z_{yx} S'^{-1} \hat\alpha = \lambda^2 \hat\alpha. $$

We now have a symmetric eigenproblem of the form Ax = λx.

KCCA Regularisation with PGSO

We combine the dimensionality reduction introduced in the previous Section 4.3
with the regularisation parameter (Section 4.1) to maximise the learning.
Following the same approach as in the previous section we can rewrite equations (4.1)
and (4.2), with the approximations of K_x and K_y as formulated in equations (4.7)
and (4.8) respectively, in the following manner:

$$ R_x R_x' R_y R_y' \beta - \lambda (R_x R_x' R_x R_x' + \kappa R_x R_x') \alpha = 0, $$
$$ R_y R_y' R_x R_x' \alpha - \lambda (R_y R_y' R_y R_y' + \kappa R_y R_y') \beta = 0. $$

Multiplying the first equation by R_x' and the second equation by R_y' gives

$$ R_x' R_x R_x' R_y R_y' \beta - \lambda R_x' (R_x R_x' R_x R_x' + \kappa R_x R_x') \alpha = 0, \qquad (4.14) $$
$$ R_y' R_y R_y' R_x R_x' \alpha - \lambda R_y' (R_y R_y' R_y R_y' + \kappa R_y R_y') \beta = 0. \qquad (4.15) $$

Rewriting equations (4.14) and (4.15) with the reduced correlation matrices Z as
defined in the previous Section 4.3, we obtain

$$ Z_{xx} Z_{xy} \tilde\beta - \lambda Z_{xx} (Z_{xx} + \kappa I) \tilde\alpha = 0, $$
$$ Z_{yy} Z_{yx} \tilde\alpha - \lambda Z_{yy} (Z_{yy} + \kappa I) \tilde\beta = 0. $$

Assume that Z_xx and Z_yy are invertible. Multiplying the first equation
by Z_xx^{-1} and the second by Z_yy^{-1} gives

$$ Z_{xy} \tilde\beta - \lambda (Z_{xx} + \kappa I) \tilde\alpha = 0, \qquad (4.16) $$
$$ Z_{yx} \tilde\alpha - \lambda (Z_{yy} + \kappa I) \tilde\beta = 0. \qquad (4.17) $$

We are able to rewrite $\tilde\beta$ from equation (4.17) as

$$ \tilde\beta = \frac{(Z_{yy} + \kappa I)^{-1} Z_{yx} \tilde\alpha}{\lambda}; $$

substituting in equation (4.16) gives

$$ Z_{xy} (Z_{yy} + \kappa I)^{-1} Z_{yx} \tilde\alpha = \lambda^2 (Z_{xx} + \kappa I) \tilde\alpha. \qquad (4.18) $$

We are left with a generalised eigenproblem of the form Ax = λBx. Performing
a complete Cholesky decomposition of Z_xx + κI = SS', where S is a lower
triangular matrix, and letting $\hat\alpha = S' \tilde\alpha$, substituting in equation (4.18) gives

$$ S^{-1} Z_{xy} (Z_{yy} + \kappa I)^{-1} Z_{yx} S'^{-1} \hat\alpha = \lambda^2 \hat\alpha. $$

We obtain a symmetric eigenproblem of the form Ax = λx.
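Putting the pieces together, the following sketch (our own composition, not the report's code; it assumes factors R_x, R_y with K ≈ RR', such as those from the pgso sketch above, and uses scipy.linalg.eigh for the symmetric reduced problem) implements the reduced, regularised eigenproblem just derived.

    import numpy as np
    from scipy.linalg import cholesky, eigh

    def kcca_pgso_regularised(Rx, Ry, kappa, n_components=30):
        """Reduced regularised KCCA: Rx, Ry satisfy Kx ~ Rx Rx', Ky ~ Ry Ry'.
        Returns correlations and the reduced directions alpha~, beta~."""
        Zxx, Zyy = Rx.T @ Rx, Ry.T @ Ry           # reduced within-set matrices
        Zxy = Rx.T @ Ry                            # reduced between-set matrix
        Ix, Iy = np.eye(Zxx.shape[0]), np.eye(Zyy.shape[0])

        # Symmetric problem S^{-1} Zxy (Zyy + kI)^{-1} Zyx S'^{-1} a^ = lambda^2 a^
        # with Zxx + kappa I = S S' (S lower triangular).
        S = cholesky(Zxx + kappa * Ix, lower=True)
        M = Zxy @ np.linalg.solve(Zyy + kappa * Iy, Zxy.T)
        A = np.linalg.solve(S, np.linalg.solve(S, M.T).T)   # S^{-1} M S^{-T}
        lam2, a_hat = eigh(A)
        order = np.argsort(lam2)[::-1][:n_components]
        lam = np.sqrt(np.clip(lam2[order], 0, None))

        alpha_t = np.linalg.solve(S.T, a_hat[:, order])      # alpha~ = S'^{-1} a^
        beta_t = (np.linalg.solve(Zyy + kappa * Iy, Zxy.T @ alpha_t)
                  / np.clip(lam, 1e-12, None))
        return lam, alpha_t, beta_t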

5 Experimental Results
In the following experiments the problem of learning the semantics of multimedia
content by combining image and text data is addressed. The synthesis is
addressed by the kernel Canonical Correlation Analysis described in Section 4.3.
We test the use of the derived semantic space in an image retrieval task that
uses only image content. The aim is to allow retrieval of images from a text
query but without reference to any labelling associated with the image. This
can be viewed as a cross-modal retrieval task. We used the combined multimedia
image-text web database, which was kindly provided by the authors of [15],
where we are trying to facilitate mate retrieval on a test set. The data was
divided into three classes (Figure 1): Sport, Aviation and Paintball, with 400
records each; it consisted of jpeg images retrieved from the Internet with attached
text. We randomly split each class into two halves which were used as training
and test data accordingly. The features extracted from the data were the
same as in [15] (a detailed description of the features used can be found in [15]):
image HSV colour, image Gabor texture and term frequencies in text.

We compute the value of κ for the regularisation by running the KCCA with the
association between image and text randomised. Let λ(κ) be the spectrum
without randomisation (the database with itself) and λ_R(κ) be the spectrum with
randomisation (the database with a randomised version of itself), where by
spectrum we mean the vector whose entries are the eigenvalues. We would like
the non-random spectrum to be as distant as possible from the randomised
spectrum, since if the same correlation occurs for λ(κ) and λ_R(κ) then clearly
overfitting is taking place. Therefore we expect for κ = 0 (no regularisation),
letting j = (1, ..., 1)' (the all ones vector), that we may have λ(κ) = λ_R(κ) = j,
since it is very possible that the examples are linearly independent. Though we
find that only 50% of the examples are linearly independent, this does not affect
the selection of κ through this method. We choose the κ for which the spectrum
of the randomised set is maximally different (in the two norm) from the true
spectrum,

$$ \kappa = \arg\max_{\kappa} \| \lambda_R(\kappa) - \lambda(\kappa) \|. $$

We find that κ = 7, and set, via a heuristic technique, the Gram-Schmidt
precision parameter η = 0.5.
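The κ selection just described amounts to a small grid search; the sketch below is our own illustration (the callable spectrum_fn, which should return the vector of canonical correlations for a given pairing permutation and κ, is an assumed interface, e.g. a wrapper around the regularised KCCA sketch above, and the candidate grid is hypothetical).

    import numpy as np

    def select_kappa(spectrum_fn, n_examples, kappas, seed=0):
        """Pick kappa maximising || lambda_R(kappa) - lambda(kappa) ||_2,
        where lambda_R uses a randomised image-text pairing."""
        rng = np.random.default_rng(seed)
        identity = np.arange(n_examples)
        random_perm = rng.permutation(n_examples)   # breaks the image-text pairing
        best_kappa, best_gap = None, -np.inf
        for kappa in kappas:
            lam = spectrum_fn(identity, kappa)      # true spectrum
            lam_R = spectrum_fn(random_perm, kappa) # randomised spectrum
            gap = np.linalg.norm(lam_R - lam)
            if gap > best_gap:
                best_kappa, best_gap = kappa, gap
        return best_kappa

    # Usage (hypothetical grid): kappa = select_kappa(spectrum_fn, m, [0.1, 1, 7, 30, 90])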
Figure 1: Example of images in the database (classes: Sports, Aviation, Paintball).

To perform the test image retrieval we compute the features of the images
and text query using the Gram-Schmidt algorithm. Once we have obtained
the features for the test query (text) and test images we project them into the
semantic feature space using $\tilde\beta$ and $\tilde\alpha$ (which are computed through training)
respectively. Now we can compare them using an inner product of the semantic
feature vectors. The higher the value of the inner product, the more similar the
two objects are. Hence, we retrieve the images whose inner products with the
test query are highest.
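In code, this retrieval step reduces to two projections and a matrix of inner products; a minimal sketch follows (our own, assuming PGSO test features as produced by the pgso_project sketch above and training directions alpha_t, beta_t from the KCCA sketch).

    import numpy as np

    def rank_images(text_feats, image_feats, beta_t, alpha_t):
        """Rank test images for each test text query by semantic inner product.

        text_feats:  (n_queries x M_y) PGSO features of the text queries
        image_feats: (n_images  x M_x) PGSO features of the test images
        beta_t, alpha_t: reduced KCCA directions for text and images."""
        text_sem = text_feats @ beta_t           # project text into semantic space
        image_sem = image_feats @ alpha_t        # project images into semantic space
        scores = text_sem @ image_sem.T          # inner products, queries x images
        return np.argsort(-scores, axis=1)       # best-matching images first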

We compared the performance of our methods with a retrieval technique
based on the Generalised Vector Space Model (GVSM). This uses as a semantic
feature vector the vector of inner products between either a text query and
each training label or a test image and each training image. For both methods we
have used a Gaussian kernel, with σ = max distance / 20, for the image colour
component, and all experiments were an average of 10 runs. For convenience
we separate the content-based and mate-based approaches into the following
Subsections 5.1 and 5.2 respectively.

5.1 Content-Based Retrieval


In this experiment we used the first 30 and 5 $\tilde\alpha$ and $\tilde\beta$ eigenvectors
(corresponding to the largest eigenvalues). We computed the 10 and 30 images
whose semantic feature vectors have the closest inner product with the
semantic feature vector of the chosen text. Success is considered if the images
contained in the set are of the same label as the query text (Figure 3 shows a
retrieval example for a set of 5 images).

Image set    GVSM success    KCCA success (30)    KCCA success (5)
10           78.93%          85%                  90.97%
30           76.82%          83.02%               90.69%

Table 1: Success cross-results between kernel-CCA and the Generalised Vector Space Model.

Figure 2: Success plot for content-based KCCA against GVSM (success (%) against
image set size; curves: KCCA (5), KCCA (30), GVSM).

In Tables 1 and 2 we compare the performance of the kernel-CCA algorithm and
the generalised vector space model. In Table 1 we present the performance of the
methods over 10 and 30 image sets, whereas in Table 2, as plotted in Figure 2, we
see the overall performance of the KCCA method against the GVSM for image
sets (1 − 200); at the 200th image set location the maximum of 200 × 600
images of the same label over all text queries can be retrieved (we only have
200 images per label). The success rate in Table 1 and Figure 2 is computed as
follows:

$$ \text{success \% for image set } i = \frac{\sum_{j=1}^{600} \sum_{k=1}^{i} \mathrm{count}^j_k}{i \times 600} \times 100, $$

where count^j_k = 1 if the image k in the set is of the same label as the text query
j, else count^j_k = 0. The success rate in Table 2 is computed as
above and averaged over all image sets.

As visible in Figure 4 we observe that adding eigenvectors to the semantic
projection reduces the success of the content-based retrieval. We
speculate that this may be the result of unnecessary detail in the semantic
projection, as the semantic information needed is contained in the first few
eigenvectors. Hence a minimal selection of 5 eigenvectors is sufficient to obtain
a high success rate.
Figure 3: Images retrieved for the text query: "height: 6-11 weight: 235 lbs
position: forward born: september 18, 1968, split, croatia college: none".

Method       overall success
GVSM         72.3%
KCCA (30)    79.12%
KCCA (5)     88.25%

Table 2: Success rate over all image sets (1 − 200).

5.2 Mate-Based Retrieval


In this experiment we used the first 150 and 30 $\tilde\alpha$ and $\tilde\beta$ eigenvectors
(corresponding to the largest eigenvalues). We computed the 10 and 30 images
whose semantic feature vectors have the closest inner product with the
semantic feature vector of the chosen text. A successful match is considered if
the image that actually matched the chosen text is contained in this set. We
compute the success as the average of 10 runs (Figure 5 shows a retrieval example
for a set of 5 images).

Image set    GVSM success    KCCA success (30)    KCCA success (150)
10           8%              17.19%               59.5%
30           19%             32.32%               69%

Table 3: Success cross-results between kernel-CCA and the Generalised Vector Space Model.

Figure 4: Content-based plot of eigenvector selection against overall success (%).

In Table 3 we compare the performance of the KCCA algorithm with the GVSM
over 10 and 30 image sets, whereas in Table 4 we present the overall success over
all image sets. In Figure 6 we see the overall performance of the KCCA method
against the GVSM for all possible image sets.
The success rate in Table 3 and Figure 6 is computed as follows:

$$ \text{success \% for image set } i = \frac{\sum_{j=1}^{600} \mathrm{count}^j}{600} \times 100, $$

where count^j = 1 if the exact matching image to the text query j was present in
the set, else count^j = 0. The success rate in Table 4 is computed as above and
averaged over all image sets.

Method        overall success
GVSM          70.6511%
KCCA (30)     83.4671%
KCCA (150)    92.9781%

Table 4: Success rate over all image sets.

As visible in Figure 7 we find that, unlike the content-based retrieval (Section 5.1),
increasing the number of eigenvectors used will assist in locating the
matching image to the query text. We speculate that this may be the result
of added detail towards exact correlation in the semantic projection, though
we do not compute for all eigenvectors, as this process would be expensive
and the remaining eigenvectors would not necessarily add meaningful semantic
information.
Figure 5: Images retrieved for the text query: "at phoenix sky harbor on july
6, 1997. 757-2s7, n907wa phoenix suns taxis past n902aw teamwork america
west america west 757-2s7, n907wa phoenix suns taxis past n901aw arizona at
phoenix sky harbor on july 6, 1997." The actual match is the middle picture in
the first row.

It is visible that kernel-CCA significantly outperforms the GVSM method
both in content retrieval and in mate retrieval.

5.3 Regularisation Parameter


We next verify that the method of selecting the regularisation parameter κ
a priori gives a value that performs well. We randomly split each class into two
halves, which were used as training and test data accordingly; we keep this
division fixed for all runs. We set the value of the incomplete Gram-Schmidt
orthogonalisation precision parameter to η = 0.5 and run over possible values
of κ, where for each value we test its content-based and mate-based retrieval
performance.

Let κ̂ be the previous optimal choice of the regularisation parameter, κ̂ = κ = 7.
As we define the new optimal value of κ by its performance on the testing set,
we can say that this method is biased (loosely speaking, it is cheating). Though
we will show that, despite this, the difference between the performance of the
biased κ and our a priori κ̂ is slight.

In Table 5 we compare the overall performance of the content-based (CB)
retrieval with respect to the different values of κ, and in Figures 8 and 9 we plot
the comparison. We observe that the difference in performance between the
a priori value κ̂ and the newly found optimal value of κ is 1.0423% for 5
eigenvectors and 5.031% for 30 eigenvectors. The more substantial increase in
performance on the latter is due to the increase in the selection of the
regularisation parameter, which compensates for the substantial decrease in
performance (Figure 6) of the content-based retrieval when a high dimensional
semantic feature space is used.
Figure 6: Success plot for KCCA mate-based retrieval against GVSM (success (%)
against image set size; curves: KCCA (150), KCCA (30), GVSM).

κ      CB-KCCA (30)    CB-KCCA (5)
0      46.278%         43.8374%
κ̂      83.5238%        91.7513%
90     88.4592%        92.7936%
230    88.5548%        92.5281%

Table 5: Overall success of content-based (CB) KCCA with respect to κ.

κ      MB-KCCA (30)    MB-KCCA (150)
0      73.4756%        83.46%
κ̂      84.75%          92.4%
170    85.5086%        92.9975%
240    85.5086%        93.0083%
430    85.4914%        93.027%

Table 6: Overall success of mate-based (MB) KCCA with respect to κ.

In Table 6 we compare the overall performance of the mate-based (MB)
retrieval with respect to the different values of κ, and in Figures 10 and 11 we
plot the comparison. We observe that in this case the difference in performance
between the a priori value κ̂ and the newly found optimal value of κ is 0.627%
for 150 eigenvectors and 0.7586% for 30 eigenvectors.
Figure 7: Mate-based plot of eigenvector selection against overall success (%).

Figure 8: Content-based: κ selection against overall success for 30 eigenvectors.

Figure 9: Content-based: κ selection against overall success for 5 eigenvectors.

Figure 10: Mate-based: κ selection against overall success for 30 eigenvectors.

Figure 11: Mate-based: κ selection against overall success for 150 eigenvectors.

Our observed results support our proposed method for selecting the regu-
larisation parameter κ in an a priori fashion, since the difference between the
actual optimal κ and the a priori κˆ is very slight.

6 Generalisation of Canonical Correlation Analysis


In this section we follow A. Gifi's book "Nonlinear Multivariate Analysis" (1990) [9]
and partially J. R. Ketterling, "Canonical analysis of several sets of variables"
(1971) [14].

6.1 Some notations


For an n × n square matrix A having elements { a ij } , i,j = 1,...,n w e can
define the trace by the formula

Tr(A ) = a ii (6.1)
i

the norm F , so called the Frobenius norm, defined by


a2
A F = Tr A A = ij (6.2)
ij

and if a i denotes the ith column(row) of A then we have


2
A F = ai 2 = ai,ai (6.3)
i i

the notation 2 means the Euclidean, l 2 , norm of a vector.



6.2 Some propositions


Proposition 3. Let an optimisation problem be given in the form

$$ \min_{x, y} f(x, y) \qquad (6.4) $$
$$ \text{subject to} \qquad (6.5) $$
$$ g(y) = 0, \qquad (6.6) $$
$$ x \in \mathbb{R}^m, \; y \in \mathbb{R}^n. \qquad (6.7) $$

Let the set Y ⊆ R^n be the feasibility domain for y determined by the constraint
g(y) = 0.

Assume the function f is convex in both variables x and y, that the optimal solution
of x can be expressed by a function h(y) of the optimal solution of y, where
the function h is defined on the whole set Y, and that the functions f, g, h are
twice continuously differentiable on R^m × Y.
Then the optimisation problem with the same constraint

$$ \min_{y} f(h(y), y) \qquad (6.8) $$
$$ \text{subject to} \qquad (6.9) $$
$$ g(y) = 0, \qquad (6.10) $$
$$ y \in \mathbb{R}^n, \qquad (6.11) $$

has the same optimal solution in y as equation (6.4) has.

Proof. Let the optimal solution of equation (6.4) be denoted by (x_1, y_1) and that of
equation (6.8) by y_2.

Based on the condition of the proposition we have x_1 = h(y_1). Because
y_1 is a feasible solution for equation (6.8), we have f(x_1, y_1) = f(h(y_1), y_1) ≥
f(h(y_2), y_2); but the objective function of equation (6.4) is not restricted in
the first variable, thus the inequality f(x_1, y_1) ≤ f(h(y_2), y_2) also holds, hence
f(x_1, y_1) = f(h(y_2), y_2).

From the convexity of f and the equality of the feasibility domains, the optimum
solutions have to be the same.

6.3 Formulation of the Canonical Correlation


Let H^(1), H^(2) be matrices of size m × n_1 and m × n_2 respectively, and assume
the sums of the elements in the columns of these matrices are equal to 0 (they
are centralised) and that the columns are linearly independent vectors within
each matrix. We consider arbitrary linear combinations of the columns of these
matrices in the form H^(1) a_i^(1), H^(2) a_i^(2), i = 1, ..., p. Let A^(1) = (a_1^(1), ..., a_{n_1}^(1)) and
A^(2) = (a_1^(2), ..., a_{n_2}^(2)) be matrices comprising the vectors of the linear combinations as
columns. Introducing notation for the products of the matrices to simplify the
formulas:

$$ \Sigma_{ij} = H^{(i)'} H^{(j)}, \qquad i, j = 1, 2. \qquad (6.12) $$

We are looking for linear combinations of the columns of these matrices such
that the first pair of vectors (a_1^(1), a_1^(2)) is an optimal solution of the
optimisation problem:

$$ \max_{a_1^{(1)}, a_1^{(2)}} a_1^{(1)'} \Sigma_{12} a_1^{(2)} \qquad (6.13) $$
$$ \text{subject to} \qquad (6.14) $$
$$ a_1^{(1)'} \Sigma_{11} a_1^{(1)} = 1, \qquad (6.15) $$
$$ a_1^{(2)'} \Sigma_{22} a_1^{(2)} = 1. \qquad (6.16) $$

The meaning of this optimisation problem is to find the maximum correlation
between the linear combinations of the columns of the matrices H^(1), H^(2),
subject to the length of the vectors corresponding to these linear combinations
being normalised to 1.
To determine the remaining pairs of vectors, columns of A^(1) and A^(2),
a series of optimisation problems is solved successively. For the pair of
vectors (a_r^(1), a_r^(2)), r = 2, ..., p, we have

$$ \max_{a_r^{(1)}, a_r^{(2)}} a_r^{(1)'} \Sigma_{12} a_r^{(2)} \qquad (6.17) $$
$$ \text{subject to} \qquad (6.18) $$
$$ a_r^{(k)'} \Sigma_{kk} a_r^{(k)} = 1, \qquad (6.19) $$
$$ a_r^{(k)'} \Sigma_{kk} a_j^{(k)} = 0, \qquad (6.20) $$
$$ a_r^{(k)'} \Sigma_{kl} a_j^{(l)} = 0, \qquad (6.21) $$
$$ k, l = 1, 2, \quad j = 1, \ldots, r - 1. \qquad (6.22) $$

Problem (6.17) is problem (6.13) expanded by orthogonality constraints, namely
the components of every new pair in the iteration have to be orthogonal to the
components of the previous pairs.

The upper limit p of the iteration has to be ≤ min(rank(H^(1)), rank(H^(2))).

Applying the Karush-Kuhn-Tucker conditions we can express the optimal
solutions of problem (6.13) and of problems (6.17) for r = 2, ..., p. Let us
begin with problem (6.17).

First we apply a substitution such that

$$ y_i^{(k)} = \Sigma_{kk}^{1/2} a_i^{(k)}, \qquad (6.23) $$
$$ D_{kl} = \Sigma_{kk}^{-1/2} \Sigma_{kl} \Sigma_{ll}^{-1/2}, \qquad (6.24) $$
$$ k, l = 1, 2, \quad i = 1, \ldots, p. \qquad (6.25) $$

Thus we have the problem

$$ \max_{y_1^{(1)}, y_1^{(2)}} y_1^{(1)'} D_{12} y_1^{(2)} \qquad (6.26) $$
$$ \text{subject to} \qquad (6.27) $$
$$ y_1^{(k)'} y_1^{(k)} = 1, \quad k = 1, 2. \qquad (6.28) $$

The Lagrangian of this problem has the form

$$ L_1 = y_1^{(1)'} D_{12} y_1^{(2)} + \tfrac{1}{2} \lambda_1 \left(1 - y_1^{(1)'} y_1^{(1)}\right) + \tfrac{1}{2} \lambda_2 \left(1 - y_1^{(2)'} y_1^{(2)}\right), \qquad (6.30) $$

where λ_1 and λ_2 are the Lagrangian multipliers. The vectors of partial
derivatives of L_1 with respect to the vectors y_1^(1), y_1^(2) are equal to 0 by the KKT
conditions, thus we get

$$ \frac{\partial L_1}{\partial y_1^{(1)}} = D_{12} y_1^{(2)} - \lambda_1 y_1^{(1)} = 0, \qquad (6.31) $$
$$ \frac{\partial L_1}{\partial y_1^{(2)}} = D_{21} y_1^{(1)} - \lambda_2 y_1^{(2)} = 0. \qquad (6.32) $$

Multiplying equation (6.31) by y_1^{(1)'} and equation (6.32) by y_1^{(2)'} provides

$$ y_1^{(1)'} D_{12} y_1^{(2)} - \lambda_1 y_1^{(1)'} y_1^{(1)} = 0, \qquad (6.33) $$
$$ y_1^{(2)'} D_{21} y_1^{(1)} - \lambda_2 y_1^{(2)'} y_1^{(2)} = 0. \qquad (6.34) $$

Based on the constraints of the optimisation problem (6.26) and the identity
D_21 = D_12' we have

$$ \lambda_1 = \lambda_2 = y_1^{(1)'} D_{12} y_1^{(2)}. \qquad (6.35) $$

After replacing λ_1 and λ_2 with λ the following equality system can be formulated

$$ \begin{pmatrix} -\lambda I & D_{12} \\ D_{21} & -\lambda I \end{pmatrix} \begin{pmatrix} y_1^{(1)} \\ y_1^{(2)} \end{pmatrix} = 0. \qquad (6.36) $$

It is not too hard to realise that this equality system is a singular vector and
value problem of the matrix D_12, having y_1^(1) and y_1^(2) as a left and a right
singular vector respectively, with the value of the Lagrangian λ equal to the
corresponding singular value. Based on these statements we can claim that the
optimal solutions are the singular vectors belonging to the greatest singular
value of the matrix D_12.

Considering the successive optimisation problems and applying a similar
substitution for all the variables a_i^(k) as introduced in equation (6.23), a problem
involving the greatest r singular values and the corresponding left and right
singular vectors arises.
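To make the singular value connection concrete, the sketch below (our own illustration, not from the report) computes the canonical vectors of two centred data matrices by forming D_12 = Σ_11^{-1/2} Σ_12 Σ_22^{-1/2} and taking its SVD; the eigenvalue clipping in the inverse square root is a hypothetical numerical safeguard.

    import numpy as np

    def inv_sqrt(S, eps=1e-10):
        """Inverse square root of a symmetric positive definite matrix."""
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(np.maximum(w, eps))) @ V.T

    def cca_via_svd(H1, H2):
        """Canonical correlations as singular values of D12 = S11^{-1/2} S12 S22^{-1/2}."""
        S11, S22, S12 = H1.T @ H1, H2.T @ H2, H1.T @ H2
        S11_is, S22_is = inv_sqrt(S11), inv_sqrt(S22)
        Y1, corr, Y2t = np.linalg.svd(S11_is @ S12 @ S22_is, full_matrices=False)
        A1 = S11_is @ Y1                   # undo the substitution y = S^{1/2} a
        A2 = S22_is @ Y2t.T
        return corr, A1, A2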

6.4 The simultaneous formulation of the canonical correlation

Instead of using the successive formulation of the canonical correlation we can
join the subproblems into one. The simultaneous formulation is the optimisation
problem

$$ \max_{(a_1^{(1)}, a_1^{(2)}), \ldots, (a_p^{(1)}, a_p^{(2)})} \sum_{i=1}^{p} a_i^{(1)'} \Sigma_{12} a_i^{(2)} \qquad (6.37) $$
$$ \text{subject to} \qquad (6.38) $$
$$ a_i^{(1)'} \Sigma_{11} a_j^{(1)} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{otherwise,} \end{cases} \qquad (6.39) $$
$$ a_i^{(2)'} \Sigma_{22} a_j^{(2)} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{otherwise,} \end{cases} \qquad (6.40) $$
$$ i, j = 1, \ldots, p, \qquad (6.41) $$
$$ a_i^{(1)'} \Sigma_{12} a_j^{(2)} = 0, \qquad (6.42) $$
$$ i, j = 1, \ldots, p, \quad j \neq i. \qquad (6.43) $$

Based on equation (6.37) and the definition of the Frobenius norm we have a
compact formulation of the canonical correlation problem:

$$ \max_{A^{(1)}, A^{(2)}} \mathrm{Tr}\left( A^{(1)'} \Sigma_{12} A^{(2)} \right) \qquad (6.44) $$
$$ \text{subject to} \qquad (6.45) $$
$$ A^{(k)'} \Sigma_{kk} A^{(k)} = I, \qquad (6.46) $$
$$ a_i^{(k)'} \Sigma_{kl} a_j^{(l)} = 0, \qquad (6.47) $$
$$ k, l = \{1, 2\}, \; l \neq k, \quad i, j = 1, \ldots, p, \; j \neq i, \qquad (6.48) $$

where I is the identity matrix of size p × p.

Repeating the substitution in equation (6.23), the set of feasible vectors for
the simultaneous problem is equal to the left and right singular vectors of the
matrix D_12, hence the optimal solution is compatible with that of the successive
problems.

6.5 Correlation versus Distance


The canonical correlation problem can be transformed into a distance problem
where the distance between two matrices is measured by the Frobenius norm:

$$ \min_{A^{(1)}, A^{(2)}} \left\| H^{(1)} A^{(1)} - H^{(2)} A^{(2)} \right\|_F \qquad (6.49) $$
$$ \text{subject to} \qquad (6.50) $$
$$ A^{(k)'} \Sigma_{kk} A^{(k)} = I, \qquad (6.51) $$
$$ a_i^{(k)'} \Sigma_{kl} a_j^{(l)} = 0, \qquad (6.52) $$
$$ k, l = 1, 2, \; l \neq k, \quad i, j = 1, \ldots, p, \; j \neq i. \qquad (6.53) $$

Unfolding the objective function of the minimisation problem (6.49) shows that the
optimisation problem is the same as the maximisation problem (6.44).

6.6 The generalisation of canonical correlation

Exploiting the distance problem we can give a generalisation of the canonical
correlation for more than two known matrices. Given a set of matrices
{H^(1), ..., H^(K)} with dimensions m × n_1, ..., m × n_K, we are looking for
the linear combinations of the columns of these matrices, in the matrix form
A^(1), ..., A^(K), such that they give the optimum solution of the problem

$$ \min_{A^{(1)}, \ldots, A^{(K)}} \sum_{k, l = 1}^{K} \left\| H^{(k)} A^{(k)} - H^{(l)} A^{(l)} \right\|_F \qquad (6.54) $$
$$ \text{subject to} \qquad (6.55) $$
$$ A^{(k)'} \Sigma_{kk} A^{(k)} = I, \qquad (6.56) $$
$$ a_i^{(k)'} \Sigma_{kl} a_j^{(l)} = 0, \qquad (6.57) $$
$$ k, l = 1, \ldots, K, \; l \neq k, \quad i, j = 1, \ldots, p, \; j \neq i. \qquad (6.58) $$

In the forthcoming sections we will show how to simplify this problem.

6.7 Total Distance versus Variance


Given a set of vectors X = {x_1, ..., x_m} ⊆ R^n. The notation x_ki means the ith
component of the vector x_k.

The total squared distance, the sum of the squared Euclidean distances of
all possible pairs of vectors in X, is equal to

$$ \frac{1}{2} \sum_{k=1}^{m} \sum_{l=1, l \neq k}^{m} \|x_k - x_l\|_2^2; \qquad (6.59) $$

as for any k, x_k − x_k = 0, we can drop the constraint l ≠ k, thus we have

$$ = \frac{1}{2} \sum_{k=1, l=1}^{m} \|x_k - x_l\|_2^2 \qquad (6.60) $$
$$ = \frac{1}{2} \sum_{k=1, l=1}^{m} \sum_{i=1}^{n} (x_{ki} - x_{li})^2 \qquad (6.61) $$
$$ = \frac{1}{2} \sum_{k=1, l=1}^{m} \sum_{i=1}^{n} \left( x_{ki}^2 + x_{li}^2 - 2 x_{ki} x_{li} \right) \qquad (6.62) $$
$$ = \frac{1}{2} \sum_{i=1}^{n} \left( \sum_{k=1, l=1}^{m} x_{ki}^2 + \sum_{k=1, l=1}^{m} x_{li}^2 - \sum_{k=1, l=1}^{m} 2 x_{ki} x_{li} \right) \qquad (6.63) $$
$$ = \frac{1}{2} \sum_{i=1}^{n} \left( m \sum_{k=1}^{m} x_{ki}^2 + m \sum_{l=1}^{m} x_{li}^2 - 2 \sum_{k=1}^{m} x_{ki} \sum_{l=1}^{m} x_{li} \right). \qquad (6.64) $$

To simplify the formula we introduce

$$ M_1^{(i)} = \frac{1}{m} \sum_{k=1}^{m} x_{ki}, \qquad M_2^{(i)} = \frac{1}{m} \sum_{k=1}^{m} x_{ki}^2; \qquad (6.65) $$

we can reformulate equation (6.64) as

$$ = \sum_{i=1}^{n} m^2 \left( M_2^{(i)} - (M_1^{(i)})^2 \right); \qquad (6.66) $$

applying the well-known identity of the variance for the vectors
(x_11, ..., x_m1), ..., (x_1n, ..., x_mn) gives

$$ = m^2 \sum_{i=1}^{n} \frac{1}{m} \sum_{k=1}^{m} \left( x_{ki} - M_1^{(i)} \right)^2. \qquad (6.67) $$

Hence the total squared distance turns out to be equal to the sum of the
component-wise variances of the vectors in X multiplied by the square of the
number of the vectors.
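A short numerical check of this identity (our own sketch, not from the report) makes the bookkeeping easy to trust:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(7, 3))                       # m = 7 vectors in R^3
    # 0.5 * sum over all ordered pairs of squared distances ...
    total_sq_dist = 0.5 * np.sum((X[:, None, :] - X[None, :, :]) ** 2)
    # ... equals m^2 times the sum of the component-wise (population) variances.
    assert np.isclose(total_sq_dist, X.shape[0] ** 2 * np.var(X, axis=0).sum())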

Another statement about the variance is introduced. If we have the following
optimisation problem

$$ \min_{z} \sum_{k=1}^{m} \|x_k - z\|_2^2, \qquad (6.68) $$

then the optimal solution can be expressed by

$$ z_i = \frac{1}{m} \sum_{k=1}^{m} x_{ki}. \qquad (6.69) $$

The components of the optimal solution are equal to the mean values of the
corresponding components of the known vectors.

6.8 General form


Let H^(1), ..., H^(K) be a set of known matrices of size m × n_1, ..., m × n_K
and let X be an unknown matrix of size m × p. The columns of the matrices
H^(1), ..., H^(K) are centralised, i.e. the mean of every column in every matrix
is equal to 0. We assume the columns of every matrix H^(k), k = 1, ..., K,
are linearly independent. A notation to simplify the formulas is introduced:
Σ_kl = H^(k)' H^(l). We are looking for linear combinations of the columns of the
known matrices and a corresponding X such that they are the optimal solution
of the optimisation problem given by

$$ \min_{X, A^{(1)}, \ldots, A^{(K)}} \frac{1}{K} \sum_{k=1}^{K} \left\| X - H^{(k)} A^{(k)} \right\|_F^2 \qquad (6.70) $$
$$ \text{subject to} \qquad (6.71) $$
$$ a_i^{(k)'} \Sigma_{kl} a_j^{(l)} = \begin{cases} 1 & \text{if } k = l \text{ and } i = j, \\ 0 & \text{if } (k = l \text{ and } i \neq j) \text{ or } (k \neq l \text{ and } i \neq j), \end{cases} \qquad (6.72) $$
$$ k, l = 1, \ldots, K, \quad i, j = 1, \ldots, p, \text{ except when } k \neq l \text{ and } i = j, \qquad (6.73) $$

where a_i^(k) denotes the ith column of the matrix A^(k) containing the possible
linear combinations.

Applying the substitutions, for all k = 1, ..., K, i = 1, ..., p,

$$ y_i^{(k)} = \Sigma_{kk}^{1/2} a_i^{(k)}, \qquad (6.74) $$

where we can compute the inverse because the columns of the matrix H^(k) are
independent, meaning Σ_kk has full rank, we can transform this optimisation
problem into a simpler form. First, we modify the set of constraints. To
make this modification readable the notation

$$ \Sigma_{kk}^{-1/2} \Sigma_{kl} \Sigma_{ll}^{-1/2} = D_{kl}, \qquad k, l = 1, \ldots, K, \qquad (6.75) $$

is introduced, where $\Sigma_{kk}^{-1/2}$ denotes the inverse of $\Sigma_{kk}^{1/2}$.

Thus the constraints take the form

$$ y_i^{(k)'} y_j^{(k)} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j, \end{cases} \qquad (6.76) $$
$$ k = 1, \ldots, K, \quad i, j = 1, \ldots, p, \qquad (6.77) $$
$$ y_i^{(k)'} D_{kl} y_j^{(l)} = 0, \qquad (6.78) $$
$$ k, l = 1, \ldots, K, \; k \neq l, \quad i, j = 1, \ldots, p, \; i \neq j, \qquad (6.79) $$

for which we can recognise the singular decomposition problems of the matrices
{D_kl}. If we consider the matrix D_kl for a fixed pair of indices k, l and
apply the singular decomposition we have

$$ D_{kl} = Y^{(k)} \Lambda_{kl} Y^{(l)'}, \qquad (6.80) $$

where the matrices Y^(k) and Y^(l) have columns equal to the vectors y_i^(k) and y_i^(l)
respectively, i = 1, ..., p, Λ_kl is a diagonal matrix of the singular values, and
Y^(k)' Y^(k) = I, Y^(l)' Y^(l) = I. The constraints do not contain the
items having indices with the properties k ≠ l and i = j; these give the singular
values of the matrix D_kl:

$$ y_i^{(k)'} D_{kl} y_i^{(l)} = \Lambda_{ii}. \qquad (6.81) $$

The consequence of the singular decomposition form is that the set of
feasible solutions of the optimisation problem with constraints (6.76) is equal
to the set of the singular vectors of the matrices {D_kl, k, l = 1, ..., K}.
To express the objective function of the optimisation problem (6.70) we use
the notations

$$ Q_k = H^{(k)} \Sigma_{kk}^{-1/2}, \qquad (6.82) $$
$$ D_{kl} = Q_k' Q_l. \qquad (6.83) $$

We can derive another statement about the optimal solution of the problem.
Exploiting the definition of the Frobenius norm, the objective function of (6.70)
can be rewritten as a sum of the squared Euclidean norms of the column vectors,
where x_i denotes the ith column of the matrix X:

$$ \frac{1}{K} \sum_{k=1}^{K} \left\| X - H^{(k)} A^{(k)} \right\|_F^2 \qquad (6.84) $$
$$ = \frac{1}{K} \sum_{k=1}^{K} \sum_{i=1}^{p} \left\| x_i - H^{(k)} a_i^{(k)} \right\|_2^2 \qquad (6.85) $$
$$ = \frac{1}{K} \sum_{k=1}^{K} \sum_{i=1}^{p} \left\| x_i - Q_k y_i^{(k)} \right\|_2^2 \qquad (6.86) $$
$$ = \frac{1}{K} \sum_{k=1}^{K} \sum_{i=1}^{p} \left\langle x_i - Q_k y_i^{(k)}, \; x_i - Q_k y_i^{(k)} \right\rangle. \qquad (6.87) $$

The constraints are formulated in equation (6.76).

For the Lagrangian function of the optimisation problem we have:

$$ L = \sum_{k=1}^{K} \sum_{i=1}^{p} \left\langle x_i - Q_k y_i^{(k)}, \; x_i - Q_k y_i^{(k)} \right\rangle \qquad (6.88) $$
$$ + \sum_{k}^{K} \sum_{i}^{p} \lambda_{k, ii} \left( 1 - y_i^{(k)'} y_i^{(k)} \right) \qquad (6.89) $$
$$ + \sum_{k}^{K} \sum_{\substack{i, j \\ i \neq j}}^{p} \lambda_{k, ij} \left( - y_i^{(k)'} y_j^{(k)} \right) \qquad (6.90) $$
$$ + \sum_{\substack{k, l \\ k \neq l}}^{K} \sum_{\substack{i, j \\ i \neq j}}^{p} \lambda_{kl, ij} \left( - y_i^{(k)'} D_{kl} y_j^{(l)} \right). \qquad (6.91) $$

We disregard the constant 1/K from the objective function (6.70).

After computing the partial derivatives, where x_i denotes the ith column of
the matrix X, we get

$$ \frac{\partial L}{\partial x_i} = \sum_{k=1}^{K} \left( 2 x_i - 2 Q_k y_i^{(k)} \right) = 0, \qquad i = 1, \ldots, p, \qquad (6.92) $$

$$ \frac{\partial L}{\partial y_i^{(k)}} = 2 D_{kk} y_i^{(k)} - 2 Q_k' x_i - 2 \lambda_{k, ii} y_i^{(k)} - \sum_{\substack{j \\ j \neq i}}^{p} \lambda_{k, ij} y_j^{(k)} - \sum_{\substack{l \\ l \neq k}}^{K} \sum_{\substack{j \\ j \neq i}}^{p} \lambda_{kl, ij} D_{kl} y_j^{(l)} = 0, \qquad (6.93) $$
$$ k = 1, \ldots, K, \quad i = 1, \ldots, p. \qquad (6.94) $$

We can express x_i for any i = 1, ..., p from (6.92):

$$ x_i = \frac{1}{K} \sum_{l=1}^{K} Q_l y_i^{(l)}, \qquad i = 1, \ldots, p. \qquad (6.95) $$

Based on Proposition 3 we can replace the variable X in equation (6.70)
by an expression of the other variables without changing the optimum value
and the optimal solution. Thus we have the variance problem.

7 Conclusions
Through this study we have presented a tutorial on canonical correlation
analysis and have established a novel general approach to retrieving images
based solely on their content. This was applied to content-based and mate-based
retrieval. Experiments show that image retrieval can be more accurate
than with the Generalised Vector Space Model. We demonstrate that one
can choose the regularisation parameter κ a priori so that it performs well in very
different regimes. Hence we have come to the conclusion that kernel Canonical
Correlation Analysis is a powerful tool for image retrieval via content. In the
future we will extend our experiments to other data collections.

In the procedure of the generalisation of the canonical correlation analysis
we can see that the original problem can be transformed and reinterpreted as a
total distance problem or a variance minimisation problem. This special duality
between the correlation and the distance requires more investigation in order
to give a more suitable description of the structure of some special spaces
generated by different kernels.

These approaches can give tools to handle some problems in the kernel space,
where the inner products and the distances between the points are known but
the coordinates are not. For some problems it is sufficient to know only the
coordinates of a few special points, which can be expressed from the known
inner products, e.g. to do cluster analysis in the kernel space and to compute
the coordinates of the cluster centres only.

Acknowledgments
We would like to acknowledge the financial support of EU Projects KerMIT,
No. IST-2000-25341 and LAVA, No. IST-2001-34405.

1 Proof that ‖K − G_i G_i'‖ ≤ η

1.1 Some notation


Lemma 4. Let A and B be square matrices and let Trace(A) = Σ_i^n a_ii;
then Trace(AB) = Trace(BA).

Proof.

$$ \mathrm{Trace}(AB) = \sum_i^n (AB)_{ii} = \sum_{i, j}^n a_{ij} b_{ji} = \sum_{j, i}^n b_{ji} a_{ij} = \sum_j^n (BA)_{jj} = \mathrm{Trace}(BA). $$

Lemma 5. Let A be a symmetric matrix having eigenvalue decomposition
A = V ΛV' (so we are able to write Λ = V'AV); then, using Lemma 4,
Trace(Λ) = Trace(A).

Proof.

$$ \mathrm{Trace}(\Lambda) = \mathrm{Trace}(V'AV) = \mathrm{Trace}((V'A)V) = \mathrm{Trace}(V(V'A)) = \mathrm{Trace}(VV'A) = \mathrm{Trace}(A). $$

Hence we have shown that

$$ \sum_i^n a_{ii} = \sum_i^n \lambda_i. $$

Lemma 6. If A is a symmetric matrix, the Euclidean norm of A is equal to
the maximum eigenvalue of A.

Proof.

$$ \|A\| = \max_{x \neq 0} \frac{\|Ax\|}{\|x\|}. $$

For any c ∈ R the scaling does not change the quotient:

$$ \frac{\|cAx\|}{\|cx\|} = \frac{c\|Ax\|}{c\|x\|} = \frac{\|Ax\|}{\|x\|}. $$

Hence we obtain

$$ \max_{x \neq 0} \frac{\|Ax\|}{\|x\|} = \max_{\|x\| = 1} \|Ax\|, $$

$$ \|Ax\| = \sqrt{x' A' A x}, \qquad \|Ax\|^2 = x' A' A x. $$

Let UDU' be the eigenvalue decomposition of A'A, such that D is a diagonal
matrix containing the squares of the eigenvalues of A:

$$ A'A = UDU', \qquad \|Ax\|^2 = x' U D U' x. $$

Setting w = U'x, and as U is orthogonal we can rewrite the constraint ‖x‖ = 1 as ‖w‖ = 1:

$$ \|A\|^2 = \max_{\|w\| = 1} w' D w = \max_{\|w\| = 1} \sum_i \lambda_i^2 w_i^2. $$

We can see that the following holds

$$ \max_{\sum_i w_i^2 = 1} \sum_i \lambda_i^2 w_i^2 = \max_i \lambda_i^2. $$

Hence we obtain

$$ \|A\| = \max_i \lambda_i. $$

1.2 Proof
Theorem 7. If K is a positive definite matrix and GG' is its incomplete
Cholesky decomposition, then the Euclidean norm of K − G_iG_i' is less than or
equal to the trace of the uncalculated part of K. Let ΔK_i be the
uncalculated part of K and let η = Trace(ΔK_i); then ‖K − G_iG_i'‖ ≤ η.

Proof. Let GG' be the complete Cholesky decomposition K = GG',
where G is a lower triangular matrix whose upper triangular part is zero:

$$ G = \begin{pmatrix} A & 0 \\ B & C \end{pmatrix}. $$

Let G_iG_i' be the incomplete decomposition of K, where i is the number of
iterations of the Cholesky factorisation procedure:

$$ G_i = G_{1:n, 1:i} = \begin{pmatrix} A \\ B \end{pmatrix}, $$

such that $G_i G_i' = \tilde{K}_i$, where $\tilde{K}_i$ is the approximation of K, subject to a
symmetric permutation of rows and columns. We assume that the rows and columns
of K are ordered and no permutation is necessary (this is only for convenience
of the proof). Let $\Delta K_i = K - \tilde{K}_i$.

Let A ∈ G_{1:i, 1:i}, B ∈ G_{i+1:n, 1:i} and C ∈ G_{i+1:n, i+1:n}. Then

$$ K = GG' = \begin{pmatrix} AA' & AB' \\ BA' & BB' + CC' \end{pmatrix}, $$
$$ \tilde{K}_i = G_i G_i' = \begin{pmatrix} AA' & AB' \\ BA' & BB' \end{pmatrix}, $$
$$ \Delta K_i = \begin{pmatrix} 0 & 0 \\ 0 & CC' \end{pmatrix}. $$

We show that CC' is positive semi-definite:

$$ CC' = K_{i+1:n, i+1:n} - \tilde{K}_{i \, i+1:n, i+1:n} $$
$$ = K_{i+1:n, i+1:n} - BB' $$
$$ = K_{i+1:n, i+1:n} - B \cdot A^{-1} A \cdot B' $$
$$ = K_{i+1:n, i+1:n} - B \cdot A^{-1} \cdot (A B') $$
$$ = K_{i+1:n, i+1:n} - G_{i+1:n, 1:i} \cdot G_{1:i, 1:i}^{-1} \cdot K_{1:i, i+1:n}; $$

therefore

$$ x \, CC' \, x' = \langle xC, xC \rangle \ge 0, $$

so the eigenvalues λ_C ≥ 0 and CC' is a positive semi-definite matrix; hence ΔK_i is
also a positive semi-definite matrix. Using Lemma 6 we are now able to show that

$$ \|K - \tilde{K}_i\| = \|K - G_i G_i'\| = \|\Delta K_i\| = \max_i \lambda_i. $$

As the maximum eigenvalue is less than or equal to the sum of all the eigenvalues
of a positive semi-definite matrix, using Lemma 5 we are able to rewrite the
expression as

$$ \|K - G_i G_i'\| \le \sum_i^n \lambda_i = \mathrm{Trace}(\Lambda) = \mathrm{Trace}(\Delta K_i). $$

Therefore,

$$ \|K - G_i G_i'\| \le \eta. $$
Bibliography

[1] Shotaro Akaho. A kernel method for canonical correlation analysis. In


International Meeting of Psychometric Society, Osaka, 2001.

[2] Francis Bach and Michael Jordan. Kernel independent component analysis.
Journal of Machine Learning Research, 3:1–48, 2002.

[3] Magnus Borga. Learning Multidimensional Signal Processing. PhD thesis,
Linköping Studies in Science and Technology, 1998.

[4] Magnus Borga. Canonical correlation: a tutorial, 1999.

[5] Nello Cristianini and John Shawe-Taylor. An Introduction to Support Vec-


tor Machines and other kernel-based learning methods. Cambridge Univer-
sity Press, 2000.

[6] Nello Cristianini, John Shawe-Taylor, and Huma Lodhi. Latent semantic
kernels. In Carla Brodley and Andrea Danyluk, editors, Proceedings of
ICML-01, 18th International Conference on Machine Learning, pages 66–
73. Morgan Kaufmann Publishers, San Francisco, US, 2001.

[7] Colin Fyfe and Pei Ling Lai. ICA using kernel canonical correlation analysis.

[8] Colin Fyfe and Pei Ling Lai. Kernel and nonlinear canonical correlation
analysis. International Journal of Neural Systems, 2001.

[9] A. Gifi. Nonlinear Multivariate Analysis. Wiley, 1990.

[10] G. H. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins
University Press, Baltimore, MD, 1983.

[11] David R. Hardoon and John Shawe-Taylor. KCCA for different level precision
in content-based image retrieval. In Submitted to Third International
Workshop on Content-Based Multimedia Indexing, IRISA, Rennes, France,
2003.

[12] H. Hotelling. Relations between two sets of variates. Biometrika, 28:312–


377, 1936.

[13] E. Isaacson and H. B. Keller. Analysis of Numerical Methods. John Wiley


& Sons, Inc, 1966.

[14] J. R. Ketterling. Canonical analysis of several sets of variables. Biometrika,


58:433–451, 1971.

[15] T. Kolenda, L. K. Hansen, J. Larsen, and O. Winther. Independent com-


ponent analysis for understanding multimedia content. In H. Bourlard,
T. Adali, S. Bengio, J. Larsen, and S. Douglas, editors, Proceedings of IEEE
Workshop on Neural Networks for Signal Processing XII, pages 757–766,
Piscataway, New Jersey, 2002. IEEE Press. Martigny, Valais, Switzerland,
Sept. 4-6, 2002.

[16] Malte Kuss and Thore Graepel. The geometry of kernel canonical correla-
tion analysis. 2002.

[17] Yong Rui, Thomas S. Huang, and Shih-Fu Chang. Image retrieval: Cur-
rent techniques, promising directions, and open issues. Journal of Visual
Communications and Image Representation, 10:39–62, 1999.

[18] Alexei Vinokourov, David R. Hardoon, and John Shawe-Taylor. Learn-


ing the semantics of multimedia content with application to web image
retrieval and classification. In Proceedings of Fourth International Sym-
posium on Independent Component Analysis and Blind Source Separation,
Nara, Japan, 2003.

[19] Alexei Vinokourov, John Shawe-Taylor, and Nello Cristianini. Inferring a
semantic representation of text via cross-language correlation analysis. In
Advances in Neural Information Processing Systems 15 (to appear), 2002.
