From Algebraic Structures to Tensors
Matrices and Tensors in Signal Processing Set
coordinated by
Gérard Favier

Volume 1

From Algebraic Structures to Tensors

Edited by

Gérard Favier
First published 2019 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as
permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced,
stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers,
or in the case of reprographic reproduction in accordance with the terms and licenses issued by the
CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the
undermentioned address:

ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK
www.iste.co.uk

John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA
www.wiley.com

© ISTE Ltd 2019


The rights of Gérard Favier to be identified as the author of this work have been asserted by him in
accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Control Number: 2019945792

British Library Cataloguing-in-Publication Data


A CIP record for this book is available from the British Library
ISBN 978-1-78630-154-3
Contents

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi

Chapter 1. Historical Elements of Matrices and Tensors . . . . . . . 1

Chapter 2. Algebraic Structures . . . . . . . . . . . . . . . . . . . . . . . . 9


2.1. A few historical elements . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2. Chapter summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3. Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3.1. Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3.2. Sets of numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3.3. Cartesian product of sets . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3.4. Set operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3.5. De Morgan’s laws . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3.6. Characteristic functions . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3.7. Partitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3.8. σ-algebras or σ-fields . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3.9. Equivalence relations . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3.10. Order relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4. Maps and composition of maps . . . . . . . . . . . . . . . . . . . . . . . 17
2.4.1. Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4.2. Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.4.3. Composition of maps . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.5. Algebraic structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.5.1. Laws of composition . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.5.2. Definition of algebraic structures . . . . . . . . . . . . . . . . . . . 22
2.5.3. Substructures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.5.4. Quotient structures . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.5.5. Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

2.5.6. Rings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.5.7. Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.5.8. Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.5.9. Vector spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.5.10. Vector spaces of linear maps . . . . . . . . . . . . . . . . . . . . . 38
2.5.11. Vector spaces of multilinear maps . . . . . . . . . . . . . . . . . . 39
2.5.12. Vector subspaces . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.5.13. Bases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.5.14. Sum and direct sum of subspaces . . . . . . . . . . . . . . . . . . 45
2.5.15. Quotient vector spaces . . . . . . . . . . . . . . . . . . . . . . . . 47
2.5.16. Algebras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.6. Morphisms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.6.1. Group morphisms . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.6.2. Ring morphisms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
2.6.3. Morphisms of vector spaces or linear maps . . . . . . . . . . . . . 51
2.6.4. Algebra morphisms . . . . . . . . . . . . . . . . . . . . . . . . . . 56

Chapter 3. Banach and Hilbert Spaces – Fourier Series and Orthogonal Polynomials . . . . . 57
3.1. Introduction and chapter summary . . . . . . . . . . . . . . . . . . . . . 57
3.2. Metric spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.2.1. Definition of distance . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.2.2. Definition of topology . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.2.3. Examples of distances . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.2.4. Inequalities and equivalent distances . . . . . . . . . . . . . . . . . 62
3.2.5. Distance and convergence of sequences . . . . . . . . . . . . . . . 62
3.2.6. Distance and local continuity of a function . . . . . . . . . . . . . 62
3.2.7. Isometries and Lipschitzian maps . . . . . . . . . . . . . . . . . . 63
3.3. Normed vector spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.3.1. Definition of norm and triangle inequalities . . . . . . . . . . . . . 63
3.3.2. Examples of norms . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.3.3. Equivalent norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.3.4. Distance associated with a norm . . . . . . . . . . . . . . . . . . . 69
3.4. Pre-Hilbert spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.4.1. Real pre-Hilbert spaces . . . . . . . . . . . . . . . . . . . . . . . . 70
3.4.2. Complex pre-Hilbert spaces . . . . . . . . . . . . . . . . . . . . . . 70
3.4.3. Norm induced from an inner product . . . . . . . . . . . . . . . . . 72
3.4.4. Distance associated with an inner product . . . . . . . . . . . . . . 75
3.4.5. Weighted inner products . . . . . . . . . . . . . . . . . . . . . . . . 76
3.5. Orthogonality and orthonormal bases . . . . . . . . . . . . . . . . . . . 76
3.5.1. Orthogonal/perpendicular vectors and Pythagorean theorem . . . 76
3.5.2. Orthogonal subspaces and orthogonal complement . . . . . . . . . 77
3.5.3. Orthonormal bases . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.5.4. Orthogonal/unitary endomorphisms and isometries . . . . . . . . 79

3.6. Gram–Schmidt orthonormalization process . . . . . . . . . . . . . . . . 80


3.6.1. Orthogonal projection onto a subspace . . . . . . . . . . . . . . . . 80
3.6.2. Orthogonal projection and Fourier expansion . . . . . . . . . . . . 80
3.6.3. Bessel’s inequality and Parseval’s equality . . . . . . . . . . . . . 82
3.6.4. Gram–Schmidt orthonormalization process . . . . . . . . . . . . . 83
3.6.5. QR decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3.6.6. Application to the orthonormalization of a set of functions . . . . 86
3.7. Banach and Hilbert spaces . . . . . . . . . . . . . . . . . . . . . . . . . . 88
3.7.1. Complete metric spaces . . . . . . . . . . . . . . . . . . . . . . . . 88
3.7.2. Adherence, density and separability . . . . . . . . . . . . . . . . . 90
3.7.3. Banach and Hilbert spaces . . . . . . . . . . . . . . . . . . . . . . 91
3.7.4. Hilbert bases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.8. Fourier series expansions . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.8.1. Fourier series, Parseval’s equality and Bessel’s inequality . . . . . 97
3.8.2. Case of 2π-periodic functions from R to C . . . . . . . . . . . . . 97
3.8.3. T -periodic functions from R to C . . . . . . . . . . . . . . . . . . 102
3.8.4. Partial Fourier sums and Bessel’s inequality . . . . . . . . . . . . . 102
3.8.5. Convergence of Fourier series . . . . . . . . . . . . . . . . . . . . . 103
3.8.6. Examples of Fourier series . . . . . . . . . . . . . . . . . . . . . . 108
3.9. Expansions over bases of orthogonal polynomials . . . . . . . . . . . . 117

Chapter 4. Matrix Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123


4.1. Chapter summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
4.2. Matrix vector spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
4.2.1. Notations and definitions . . . . . . . . . . . . . . . . . . . . . . . 124
4.2.2. Partitioned matrices . . . . . . . . . . . . . . . . . . . . . . . . . . 125
4.2.3. Matrix vector spaces . . . . . . . . . . . . . . . . . . . . . . . . . . 126
4.3. Some special matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
4.4. Transposition and conjugate transposition . . . . . . . . . . . . . . . . . 128
4.5. Vectorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
4.6. Vector inner product, norm and orthogonality . . . . . . . . . . . . . . . 130
4.6.1. Inner product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
4.6.2. Euclidean/Hermitian norm . . . . . . . . . . . . . . . . . . . . . . 131
4.6.3. Orthogonality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
4.7. Matrix multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
4.7.1. Definition and properties . . . . . . . . . . . . . . . . . . . . . . . 132
4.7.2. Powers of a matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
4.8. Matrix trace, inner product and Frobenius norm . . . . . . . . . . . . . 137
4.8.1. Definition and properties of the trace . . . . . . . . . . . . . . . . . 137
4.8.2. Matrix inner product . . . . . . . . . . . . . . . . . . . . . . . . . . 138
4.8.3. Frobenius norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
4.9. Subspaces associated with a matrix . . . . . . . . . . . . . . . . . . . . . 139

4.10. Matrix rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141


4.10.1. Definition and properties . . . . . . . . . . . . . . . . . . . . . . . 141
4.10.2. Sum and difference rank . . . . . . . . . . . . . . . . . . . . . . . 143
4.10.3. Subspaces associated with a matrix product . . . . . . . . . . . . 143
4.10.4. Product rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
4.11. Determinant, inverses and generalized inverses . . . . . . . . . . . . . 145
4.11.1. Determinant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
4.11.2. Matrix inversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
4.11.3. Solution of a homogeneous system of linear equations . . . . . . 149
4.11.4. Complex matrix inverse . . . . . . . . . . . . . . . . . . . . . . . 150
4.11.5. Orthogonal and unitary matrices . . . . . . . . . . . . . . . . . . 150
4.11.6. Involutory matrices and anti-involutory matrices . . . . . . . . . 151
4.11.7. Left and right inverses of a rectangular matrix . . . . . . . . . . . 153
4.11.8. Generalized inverses . . . . . . . . . . . . . . . . . . . . . . . . . 155
4.11.9. Moore–Penrose pseudo-inverse . . . . . . . . . . . . . . . . . . . 157
4.12. Multiplicative groups of matrices . . . . . . . . . . . . . . . . . . . . . 158
4.13. Matrix associated to a linear map . . . . . . . . . . . . . . . . . . . . . 159
4.13.1. Matrix representation of a linear map . . . . . . . . . . . . . . . . 159
4.13.2. Change of basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
4.13.3. Endomorphisms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
4.13.4. Nilpotent endomorphisms . . . . . . . . . . . . . . . . . . . . . . 166
4.13.5. Equivalent, similar and congruent matrices . . . . . . . . . . . . 167
4.14. Matrix associated with a bilinear/sesquilinear form . . . . . . . . . . . 168
4.14.1. Definition of a bilinear/sesquilinear map . . . . . . . . . . . . . . 168
4.14.2. Matrix associated to a bilinear/sesquilinear form . . . . . . . . . 170
4.14.3. Changes of bases with a bilinear form . . . . . . . . . . . . . . . 170
4.14.4. Changes of bases with a sesquilinear form . . . . . . . . . . . . . 171
4.14.5. Symmetric bilinear/sesquilinear forms . . . . . . . . . . . . . . . 172
4.15. Quadratic forms and Hermitian forms . . . . . . . . . . . . . . . . . . 174
4.15.1. Quadratic forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
4.15.2. Hermitian forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
4.15.3. Positive/negative definite quadratic/Hermitian forms . . . . . . . 177
4.15.4. Examples of positive definite quadratic forms . . . . . . . . . . . 178
4.15.5. Cauchy–Schwarz and Minkowski inequalities . . . . . . . . . . . 179
4.15.6. Orthogonality, rank, kernel and degeneration of a bilinear form . 180
4.15.7. Gauss reduction method and Sylvester’s inertia law . . . . . . . . 181
4.16. Eigenvalues and eigenvectors . . . . . . . . . . . . . . . . . . . . . . . 184
4.16.1. Characteristic polynomial and Cayley–Hamilton theorem . . . . 184
4.16.2. Right eigenvectors . . . . . . . . . . . . . . . . . . . . . . . . . . 186
4.16.3. Spectrum and regularity/singularity conditions . . . . . . . . . . 187
4.16.4. Left eigenvectors . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
4.16.5. Properties of eigenvectors . . . . . . . . . . . . . . . . . . . . . . 188
4.16.6. Eigenvalues and eigenvectors of a regularized matrix . . . . . . . 190
4.16.7. Other properties of eigenvalues . . . . . . . . . . . . . . . . . . . 190

4.16.8. Symmetric/Hermitian matrices . . . . . . . . . . . . . . . . . . . 191


4.16.9. Orthogonal/unitary matrices . . . . . . . . . . . . . . . . . . . . . 193
4.16.10. Eigenvalues and extrema of the Rayleigh quotient . . . . . . . . 194
4.17. Generalized eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . 195

Chapter 5. Partitioned Matrices . . . . . . . . . . . . . . . . . . . . . . . . 199


5.1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
5.2. Submatrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
5.3. Partitioned matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
5.4. Matrix products and partitioned matrices . . . . . . . . . . . . . . . . . 202
5.4.1. Matrix products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
5.4.2. Vector Kronecker product . . . . . . . . . . . . . . . . . . . . . . . 202
5.4.3. Matrix Kronecker product . . . . . . . . . . . . . . . . . . . . . . . 202
5.4.4. Khatri–Rao product . . . . . . . . . . . . . . . . . . . . . . . . . . 204
5.5. Special cases of partitioned matrices . . . . . . . . . . . . . . . . . . . . 205
5.5.1. Block-diagonal matrices . . . . . . . . . . . . . . . . . . . . . . . . 205
5.5.2. Signature matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
5.5.3. Direct sum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
5.5.4. Jordan forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
5.5.5. Block-triangular matrices . . . . . . . . . . . . . . . . . . . . . . . 206
5.5.6. Block Toeplitz and Hankel matrices . . . . . . . . . . . . . . . . . 207
5.6. Transposition and conjugate transposition . . . . . . . . . . . . . . . . . 207
5.7. Trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
5.8. Vectorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
5.9. Blockwise addition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
5.10. Blockwise multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . 209
5.11. Hadamard product of partitioned matrices . . . . . . . . . . . . . . . . 209
5.12. Kronecker product of partitioned matrices . . . . . . . . . . . . . . . . 210
5.13. Elementary operations and elementary matrices . . . . . . . . . . . . . 212
5.14. Inversion of partitioned matrices . . . . . . . . . . . . . . . . . . . . . 214
5.14.1. Inversion of block-diagonal matrices . . . . . . . . . . . . . . . . 215
5.14.2. Inversion of block-triangular matrices . . . . . . . . . . . . . . . 215
5.14.3. Block-triangularization and Schur complements . . . . . . . . . 216
5.14.4. Block-diagonalization and block-factorization . . . . . . . . . . . 216
5.14.5. Block-inversion and partitioned inverse . . . . . . . . . . . . . . 217
5.14.6. Other formulae for the partitioned 2 × 2 inverse . . . . . . . . . 218
5.14.7. Solution of a system of linear equations . . . . . . . . . . . . . . 219
5.14.8. Inversion of a partitioned Gram matrix . . . . . . . . . . . . . . . 220
5.14.9. Iterative inversion of a partitioned square matrix . . . . . . . . . 220
5.14.10. Matrix inversion lemma and applications . . . . . . . . . . . . . 221
5.15. Generalized inverses of 2 × 2 block matrices . . . . . . . . . . . . . . 222
5.16. Determinants of partitioned matrices . . . . . . . . . . . . . . . . . . . 224
5.16.1. Determinant of block-diagonal matrices . . . . . . . . . . . . . . 224
5.16.2. Determinant of block-triangular matrices . . . . . . . . . . . . . 225

5.16.3. Determinant of partitioned matrices with square diagonal blocks 225


5.16.4. Determinants of specific partitioned matrices . . . . . . . . . . . 226
5.16.5. Eigenvalues of CB and BC . . . . . . . . . . . . . . . . . . . . . 227
5.17. Rank of partitioned matrices . . . . . . . . . . . . . . . . . . . . . . . . 228
5.18. Levinson–Durbin algorithm . . . . . . . . . . . . . . . . . . . . . . . . 229
5.18.1. AR process and Yule–Walker equations . . . . . . . . . . . . . . 230
5.18.2. Levinson–Durbin algorithm . . . . . . . . . . . . . . . . . . . . . 232
5.18.3. Linear prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . 237

Chapter 6. Tensor Spaces and Tensors . . . . . . . . . . . . . . . . . . . 243


6.1. Chapter summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
6.2. Hypermatrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
6.2.1. Hypermatrix vector spaces . . . . . . . . . . . . . . . . . . . . . . 244
6.2.2. Hypermatrix inner product and Frobenius norm . . . . . . . . . . 245
6.2.3. Contraction operation and n-mode hypermatrix–matrix product . 245
6.3. Outer products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
6.4. Multilinear forms, homogeneous polynomials and hypermatrices . . . . 251
6.4.1. Hypermatrix associated to a multilinear form . . . . . . . . . . . . 251
6.4.2. Symmetric multilinear forms and symmetric hypermatrices . . . . 252
6.5. Multilinear maps and homogeneous polynomials . . . . . . . . . . . . . 255
6.6. Tensor spaces and tensors . . . . . . . . . . . . . . . . . . . . . . . . . . 255
6.6.1. Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
6.6.2. Multilinearity and associativity . . . . . . . . . . . . . . . . . . . . 257
6.6.3. Tensors and coordinate hypermatrices . . . . . . . . . . . . . . . . 257
6.6.4. Canonical writing of tensors . . . . . . . . . . . . . . . . . . . . . 258
6.6.5. Expansion of the tensor product of N vectors . . . . . . . . . . . . 260
6.6.6. Properties of the tensor product . . . . . . . . . . . . . . . . . . . . 261
6.6.7. Change of basis formula . . . . . . . . . . . . . . . . . . . . . . . . 266
6.7. Tensor rank and tensor decompositions . . . . . . . . . . . . . . . . . . 268
6.7.1. Matrix rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
6.7.2. Hypermatrix rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
6.7.3. Symmetric rank of a hypermatrix . . . . . . . . . . . . . . . . . . . 269
6.7.4. Comparative properties of hypermatrices and matrices . . . . . . . 269
6.7.5. CPD and dimensionality reduction . . . . . . . . . . . . . . . . . . 271
6.7.6. Tensor rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
6.8. Eigenvalues and singular values of a hypermatrix . . . . . . . . . . . . . 274
6.9. Isomorphisms of tensor spaces . . . . . . . . . . . . . . . . . . . . . . . 276

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Preface

This book is part of a collection of four books about matrices and tensors, with
applications to signal processing. Although the title of this collection suggests an
orientation toward signal processing, the results and methods presented should also
be of use to readers of other disciplines.

Writing books on matrices is a real challenge given that so many excellent books
on the topic have already been written1. How, then, to stand out from existing works,
and which Ariadne’s thread to unwind? One way to stand apart was to treat matrices
and tensors in parallel. Viewed as extensions of matrices to orders higher than two,
the latter have many similarities with matrices, but also important differences in terms
of rank, uniqueness of decomposition, and potential for representing multi-dimensional,
multi-modal, and inaccurate data. Moreover, regarding the guiding thread, it consists
in presenting structural foundations, then both matrix and tensor decompositions, in
addition to related processing methods, finally leading to applications, by means of a
presentation as self-contained as possible, and with some originality in the topics being
addressed and the way they are treated.

Therefore, in Volume 2, we shall use an index convention generalizing Einstein’s
summation convention, to write and to demonstrate certain equations involving multi-
index quantities, as is the case with matrices and tensors. A chapter will be dedicated
to Hadamard, Kronecker, and Khatri–Rao products, which play a very important role
in matrix and tensor computations.
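
As a simple illustration of the idea behind such a convention (the generalized convention itself will be defined in Volume 2): under Einstein’s summation convention, an index repeated twice in a product implies summation over that index, so that the matrix product C = AB is written c_{ij} = a_{ik} b_{kj}, the sum over k being implicit.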

After a reminder of main matrix decompositions, including a detailed presentation
of the SVD (for singular value decomposition), we shall present different tensor
operations, as well as the two main tensor decompositions which will be the basis of
both fundamental and applied developments, in the last two volumes. These standard
tensor decompositions can be seen as extensions of matrix SVD to tensors of order
higher than two. A few examples of equations for representing signal processing
problems will be provided to illustrate the use of such decompositions. A chapter
will be devoted to structured matrices. Different properties will be highlighted, and
extensions to tensors of order higher than two will be presented. Two other chapters
will concern quaternions and quaternionic matrices, on the one hand, and polynomial
matrices, on the other hand.

1 A list of books, far from exhaustive, is provided in Chapter 1.
In Volume 3, an overview of several tensor models will be given, taking certain
constraints (structural, linear dependency of factors, sparsity, and non-negativity)
into account. Some of these models will be used in Volume 4, for the
design of digital communication systems. Tensor trains and tensor networks will
also be presented for the representation and analysis of massive data (big data).
The algorithmic aspect will be taken into consideration with the presentation of
different processing methods.

Volume 4 will mainly focus on tensorial approaches for array processing,
wireless digital communications (first point-to-point, then cooperative), modeling and
identification of both linear and nonlinear systems, as well as the reconstruction of
missing data in data matrices and tensors, the so-called problems of matrix and tensor
completion. For these applications, tensor-based models will be more particularly
detailed. Monte Carlo simulation results will be provided to illustrate some of the
tensorial methods. This will be particularly the case of semi-blind receivers recently
developed for wireless communication systems.

Matrices and tensors, and more generally linear algebra and multilinear algebra,
are at the same time exciting, extensive, and fundamental topics, equally important
for teaching and research as for applications. It is worth noting here that the
choices made for the content of the books of this collection have not been guided
by educational programs, which explains some gaps compared to standard algebra
treatises. The guiding thread has rather been to present the definitions, properties,
concepts, and results necessary for a good understanding of the processing methods
and applications considered in these books. In addition to the great diversity of topics,
another difficulty resided in the order in which they should be addressed, given that
many topics overlap, certain notions and/or results sometimes being used before they
have been defined and/or demonstrated, which requires referring the reader to sections
or chapters that follow.
Four particularities should be highlighted. The first relates to the close relationship
between some of the topics being addressed, certain methods presented and recent
research results, particularly with regard to tensorial approaches for signal processing.
The second reflects the desire to situate the results stated in their historical context,
using some biographical information on certain authors cited, as well as lists
of references comprehensive enough to explore specific results in more depth, and also to extend the
biographical sources provided. This has motivated the introductory chapter entitled
“Historical elements of matrices and tensors.”

The last two characteristics concern the presentation and illustration of the properties
and methods under consideration. Some will be provided without demonstration
because of their simplicity or their availability in numerous books on the subject.
Others will be demonstrated, either for pedagogical reasons, since their knowledge
should allow for a better understanding of the results being demonstrated, or because
of the difficulty of finding them in the literature, or else owing to the originality of the
proposed demonstrations, as will be the case, for example, of those making use of the
index convention. The use of many tables should also be noted, with the purpose of
recalling key results while presenting them in a synthetic and comparative manner.

Finally, numerous examples will be provided to illustrate certain definitions,
properties, decompositions, and methods presented. This will be particularly the case
for the fourth book, dedicated to applications of tensorial tools, which has been my
main source of motivation. After 15 years of research work (pioneering for some of it)
aiming to use tensor decompositions for modeling and identifying
nonlinear dynamical systems, and for designing wireless communication systems
based on new tensor models, it seemed useful to me to share this experience and this
scientific path, in order to make tensor tools as accessible as possible and to motivate
new applications based on tensor approaches.
This first book, whose content is described below, provides an introduction to
matrices and tensors based on the structures of vector spaces and tensor spaces,
along with the presentation of fundamental concepts and results. In the first part
(Chapters 2 and 3), a refresher of the mathematical bases related to classical algebraic
structures is presented, highlighting the growing complexity of the
structures under consideration, ranging from monoids to vector spaces and
algebras. The notions of norm, inner product, and Hilbert basis are detailed in order to
introduce Banach and Hilbert spaces. The Hilbertian approach, which is fundamental
for signal processing, is illustrated with two methods widely employed for signal
representation and analysis, as well as for function approximation, namely, Fourier
series and orthogonal polynomial series.
Chapter 4 is dedicated to matrix algebra. The notions of fundamental subspaces
associated with a matrix, rank, determinant, inverse, auto-inverse, generalized
inverse, and pseudo-inverse are described therein. Matrix representations of linear and
bilinear/sesquilinear maps are established. The effect of a change of basis is studied,
leading to the definition of equivalent, similar, and congruent matrices. The notions of
eigenvalue and eigenvector are then defined, ending with matrix eigendecomposition,
and in some cases, with its diagonalization, which are topics to be covered in Volume
2. The case of certain structured matrices, such as symmetric/Hermitian matrices and
orthogonal/unitary matrices, is more specifically considered. The interpretation of
eigenvalues as extrema of the Rayleigh quotient is presented, before introducing the
notion of generalized eigenvalues.

In Chapter 5, we consider partitioned matrices. This type of structure is inherent
to matrix products in general, and Kronecker and Khatri–Rao products in particular.

Partitioned matrices corresponding to block-diagonal and block-triangular matrices,
as well as to Jordan forms are described. Next, transposition/conjugate transposition,
vectorization, addition and multiplication operations, as well as Hadamard and
Kronecker products, are presented for partitioned matrices. Elementary operations and
associated matrices allowing the partitioned matrices to be decomposed are detailed.
These operations are then utilized for block-triangularization, block-diagonalization,
and block-inversion of partitioned matrices. The matrix inversion lemma, which is
widely used in signal processing, is deduced from block-inversion formulae. This
lemma is used to demonstrate a few inversion formulae very often encountered in
calculations. Fundamental results on generalized inverse, determinant, and rank of
partitioned matrices are presented. The Levinson algorithm is demonstrated using
the formula for inverting a partitioned square matrix, recursively with respect to the
matrix order. This algorithm, which is one of the most famous in signal processing,
allows to efficiently solve the problem of parameter estimation of autoregressive
(AR) models and linear predictors, recursively with respect to the order of the
AR model and of the predictor, respectively. To illustrate the results of Chapter 3
relatively to orthogonal projection, it is shown that forward and backward linear
predictors, optimal in the sense of the MMSE (minimum mean squared error), can be
interpreted in terms of orthogonal projectors on subspaces of the Hilbert space of the
second-order stationary random signals.

In Chapter 6, hypermatrices and tensors are introduced in close connection with
multilinear maps and multilinear forms. Hypermatrix vector spaces are first defined,
along with operations such as inner product and contraction of hypermatrices, the
particular case of the n-mode hypermatrix–matrix product being considered in more
detail. Hypermatrices associated with multilinear forms and maps are highlighted,
and symmetric hypermatrices are introduced through the definition of symmetric
multilinear forms. Then, tensors of order N > 2 are defined in a formal way as
elements of a tensor space, i.e., a tensor product of N vector spaces. The effect of
changes of bases of the tensor space on the coordinate hypermatrix of a tensor is
analyzed. In addition, some properties of the tensor product are described, with a
focus on the so-called universal property. Following this, the notion of tensor rank,
based on the canonical polyadic decomposition (CPD), is introduced, as well as the
notions of eigenvalues and singular values of a tensor. These highlight the similarities
and the differences between matrices and tensors of order greater than two. Finally,
the concept of tensor unfolding is illustrated via the definition of isomorphisms of
tensor spaces.

I want to thank my colleagues Sylvie Icart and Vicente Zarzoso for their review of
some chapters and Henrique de Morais Goulart, who co-authored Chapter 4.

Gérard FAVIER
August 2019
[email protected]
1

Historical Elements of Matrices and Tensors

The objective of this introduction is by no means to outline a rigorous and
comprehensive historical background of the theory of matrices and tensors. Such
a historical record should be the work of a historian of mathematics and would
require thorough bibliographical research, including reading the original publications
of the authors cited, to analyze and reconstruct the progress of mathematical thinking
throughout the years and collaborations. A very interesting illustration of this type
of analysis is provided, for example, in the form of a “representation of research
networks”1, over the period 1880–1907, in which are identified the interactions and
influences of some mathematicians, such as James Joseph Sylvester (1814–1897),
Karl Theodor Weierstrass (1815–1897), Arthur Cayley (1821–1895), Leopold
Kronecker (1823–1891), Ferdinand Georg Frobenius (1849–1917), or Eduard
Weyr (1852–1903), with respect to the theory of matrices, the theory of numbers
(quaternions, hypercomplex numbers), bilinear forms, and algebraic structures.

Our modest goal here is to locate in time the contributions of a few mathematicians
and physicists2 who have laid the foundations for the theory of matrices and tensors,
and to whom we will refer later in our presentation. This choice is necessarily very
incomplete.

1 F. Brechenmacher, “Les matrices : formes de représentation et pratiques opératoires
(1850–1930)”, Culture MATH - Expert site ENS Ulm / DESCO, December 20, 2006.
2 For more information on the mathematicians cited in this introduction, refer to the document
“Biographies de mathématiciens célèbres”, by Johan Mathieu, 2008, and the remarkable site
Mac Tutor History of Mathematics Archive (http://www-history.mcs.st-andrews.ac.uk) of the
University of St. Andrews, in Scotland, which contains a very large number of biographies of
mathematicians.


The first studies of determinants that preceded those of matrices were conducted
independently by the Japanese mathematician Seki Kowa (1642–1708) and the
German mathematician Gottfried Leibniz (1646–1716), and then by the Scottish
mathematician Colin Maclaurin (1698–1746) for solving 2 × 2 and 3 × 3 systems
of linear equations. These works were then generalized by the Swiss mathematician
Gabriel Cramer (1704–1752) for the resolution of n × n systems, leading, in
1750, to the famous formulae that bear his name, whose demonstration is due to
Augustin-Louis Cauchy (1789–1857).
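
In modern notation, Cramer’s formulae state that, for a system Ax = b with A an invertible n × n matrix, the unique solution has components

x_i = det(A_i)/det(A), i = 1, . . . , n,

where A_i denotes the matrix obtained from A by replacing its ith column with the right-hand side b.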
In 1772, Alexandre-Théophile Vandermonde (1735–1796) defined the notion of
determinant, and Pierre-Simon Laplace (1749–1827) formulated the computation
of determinants by means of an expansion according to a row or a column, an
expansion which will be presented in section 4.11.1. In 1773, Joseph-Louis Lagrange
(1736–1813) discovered the link between the calculation of determinants and that of
volumes. In 1812, Cauchy used, for the first time, the determinant in the sense that it
has today, and he established the formula for the determinant of the product of two
rectangular matrices, a formula which was found independently by Jacques Binet
(1786–1856), and which is nowadays called the Binet–Cauchy formula.
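
In modern notation, this formula states that, for A of size m × n and B of size n × m with m ≤ n,

det(AB) = Σ_S det(A_S) det(B_S),

where the sum runs over all subsets S of {1, . . . , n} with m elements, A_S denoting the m × m submatrix of A formed by the columns indexed by S, and B_S the m × m submatrix of B formed by the rows indexed by S. For m = n, it reduces to det(AB) = det(A) det(B).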

In 1810, Johann Carl Friedrich Gauss (1777–1855) introduced a notation using a
table, similar to matrix notation, to write a 3 × 3 system of linear equations, and he
proposed the elimination method, known as Gauss elimination through pivoting, to
solve it. This method, also known as the Gauss–Jordan elimination method, was in fact
known to Chinese mathematicians (first century). It was presented in a modern form,
by Gauss, when he developed the least squares method, first published by Adrien-
Marie Legendre (1752–1833), in 1805.
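
In modern terms, the method transforms the system into an equivalent triangular one, which is then solved by back substitution. As an illustration, here is a minimal sketch in Python (names such as gauss_solve are ours; this is an illustrative reconstruction, not an excerpt from the texts cited):

    import numpy as np

    def gauss_solve(A, b):
        """Solve Ax = b by Gauss elimination with partial pivoting (illustrative sketch)."""
        A = np.array(A, dtype=float)  # work on copies so the inputs are left untouched
        b = np.array(b, dtype=float)
        n = A.shape[0]
        for k in range(n - 1):
            p = k + np.argmax(np.abs(A[k:, k]))          # pivot: largest entry in column k
            A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]  # swap rows k and p
            for i in range(k + 1, n):
                m = A[i, k] / A[k, k]                    # elimination multiplier
                A[i, k:] -= m * A[k, k:]
                b[i] -= m * b[k]
        x = np.zeros(n)                                  # back substitution
        for i in range(n - 1, -1, -1):
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

For instance, gauss_solve([[2, 1], [1, 3]], [3, 5]) returns [0.8, 1.4]; in practice, one would of course call numpy.linalg.solve.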

Several determinants of special matrices are designated by the names of their
authors, such as Vandermonde’s, Cauchy’s, Hilbert’s, and Sylvester’s determinants.
Sylvester was the first to use the word “matrix”, in 1850, to designate
a rectangular table of numbers. The presentation of the determinant of an nth-order
square matrix as an alternating n-linear function of its n column vectors is due to
Weierstrass and Kronecker, at the end of the 19th century.

The foundations of the theory of matrices were laid in the 19th century around the
following topics: determinants for solving systems of linear equations, representation
of linear transformations and quadratic forms (a topic which will be addressed in detail
in Chapter 4), matrix decompositions and reductions to canonical forms, that is to say,
diagonal or triangular forms such as the Jordan (1838–1922) normal form with Jordan
blocks on the diagonal, introduced by Weierstrass, the block-triangular form of Schur
(1875–1941), or the Frobenius normal form that is a block-diagonal matrix, whose
blocks are companion matrices.
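
For instance, a Jordan block of order 3 associated with an eigenvalue λ is the matrix

J_3(λ) = | λ 1 0 |
         | 0 λ 1 |
         | 0 0 λ |

with λ on the main diagonal, 1 on the superdiagonal, and 0 elsewhere; the Jordan normal form is then a block-diagonal matrix whose diagonal blocks are Jordan blocks of this type.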

A history of the theory of matrices in the 19th century was published by Thomas
Hawkins3 in 1974, highlighting, in particular, the contributions of the British
mathematician Arthur Cayley, seen by historians as one of the founders of the theory
of matrices. Cayley laid the foundations of the classical theory of determinants4 in
1843. He then developed matrix computation5 by defining certain matrix operations as
the product of two matrices, the transposition of the product of two matrices, and the
inversion of a 3 × 3 matrix using cofactors, and by establishing different properties of
matrices, including, namely, the famous Cayley–Hamilton theorem which states that
every square matrix satisfies its characteristic equation. This result highlighted for the
fourth order by William Rowan Hamilton (1805–1865), in 1853, for the calculation
of the inverse of a quaternion, was stated in the general case by Cayley in 1857, but
the demonstration for any arbitrary order is due to Frobenius, in 1878.
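
Stated in modern notation: if p(λ) = det(λI_n − A) denotes the characteristic polynomial of an n × n matrix A, the theorem asserts that p(A) = 0. For n = 2, it reduces to

A^2 − tr(A) A + det(A) I_2 = 0,

from which the inverse A^{−1} = (tr(A) I_2 − A)/det(A) follows whenever det(A) ≠ 0.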
An important part of the theory of matrices concerns the spectral theory, namely,
the notions of eigenvalue and characteristic polynomial. Directly related to the
integration of systems of linear differential equations, this theory has its origins in
physics, and more particularly in celestial mechanics for the study of the orbits of
planets, conducted in the 18th century by mathematicians, physicists, and astronomers
such as Lagrange and Laplace, then in the 19th century by Cauchy, Weierstrass,
Kronecker, and Jordan.

The names of certain matrices and associated determinants are those of the
mathematicians who have introduced them. This is the case, for example, for
Alexandre Théophile Vandermonde (1735–1796) who gave his name to a matrix
whose elements on each row (or each column) form a geometric progression and
whose determinant is a polynomial. It is also the case for Carl Jacobi (1804–1851)
and Ludwig Otto Hesse (1811–1874), for Jacobian and Hessian matrices, namely,
the matrices of first- and second-order partial derivatives of a vector function, whose
determinants are called Jacobian and Hessian, respectively. The same is true for the
Laplacian matrix or Laplace matrix, which is used to represent a graph. We can also
mention Charles Hermite (1822–1901) for Hermitian matrices, related to the so-called
Hermitian forms (see section 4.15). Specific matrices such as Fourier (1768–1830)
and Hadamard (1865–1963) matrices are directly related to the transforms of the
same name. Similarly, Householder (1904–1993) and Givens (1910–1993) matrices
are associated with transformations corresponding to reflections and rotations,
respectively. The so-called structured matrices, such as Hankel (1839–1873) and
Toeplitz (1881–1943) matrices, play a very important role in signal processing.
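
As an illustration of the first example, the Vandermonde matrix of order n built from scalars x_1, . . . , x_n has generic element [V]_{ij} = x_i^{j−1}, so that each row forms a geometric progression, and its determinant is the polynomial

det(V) = ∏_{1≤i<j≤n} (x_j − x_i),

which vanishes if and only if two of the x_i coincide.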

3 Thomas Hawkins, “The theory of matrices in the 19th century”, Proceedings of the
International Congress of Mathematicians, Vancouver, 1974.
4 Arthur Cayley, “On a theory of determinants”, Cambridge Philosophical Society 8, l–16,
1843.
5 Arthur Cayley, “A memoir on the theory of matrices”, Philosophical Transactions of the
Royal Society of London 148, 17–37, 1858.

Matrix decompositions are widely used in numerical analysis, especially to solve
systems of equations using the method of least squares. This is the case, for example,
of EVD (eigenvalue decomposition), SVD (singular value decomposition), LU, QR,
UD, Cholesky (1875–1918), and Schur (1875–1941) decompositions, which will be
presented in Volume 2.

Just as matrices and matrix computation play a fundamental role in linear algebra,
tensors and tensor computation are at the origin of multilinear algebra. It was in the
19th century that tensor analysis first appeared, along with the works of German
mathematicians Georg Friedrich Bernhard Riemann6 (1826–1866) and Elwin Bruno
Christoffel (1829–1900) in (non-Euclidean) geometry, introducing the index notation
and the notions of metric, manifold, geodesic, curved space, and curvature tensor, which gave
rise to what is today called Riemannian geometry and differential geometry.

It was the Italian mathematician Gregorio Ricci-Curbastro (1853–1925) with
his student Tullio Levi-Civita (1873–1941) who were the founders of the tensor
calculus, then called absolute differential calculus7, with the introduction of the
notion of covariant and contravariant components, which was used by Albert Einstein
(1879–1955) in his theory of general relativity, in 1915.

Tensor calculus originates from the study of the invariance of quadratic forms
under the effect of a change of coordinates and, more generally, from the theory of
invariants initiated by Cayley8, with the introduction of the notion of hyperdeterminant
which generalizes matrix determinants to hypermatrices. Refer to the article by Crilly9
for an overview of the contribution of Cayley on the invariant theory. This theory
was developed by Jordan and Kronecker and involved controversy10 between these
two authors, then continued by David Hilbert (1862–1943), Elie Joseph Cartan
(1869–1951), and Hermann Klaus Hugo Weyl (1885–1955), for algebraic forms
(or homogeneous polynomials), or for symmetric tensors11. A historical review of the
theory of invariants was made by Dieudonné and Carrell12.

6 A detailed analysis of Riemann’s contributions to tensor analysis has been made by Ruth
Farwell and Christopher Knee, “The missing link: Riemann’s Commentatio, differential
geometry and tensor analysis”, Historia Mathematica 17, 223–255, 1990.
7 G. Ricci and T. Levi-Civita, “Méthodes de calcul différentiel absolu et leurs applications”,
Mathematische Annalen 54, 125–201, 1900.
8 A. Cayley, “On the theory of linear transformations”, Cambridge Journal of Mathematics 4,
193–209, 1845. A. Cayley, “On linear transformations”, Cambridge and Dublin Mathematical
Journal 1, 104–122, 1846.
9 T. Crilly, “The rise of Cayley’s invariant theory (1841–1862)”, Historia Mathematica 13,
241–254, 1986.
10 F. Brechenmacher, “La controverse de 1874 entre Camille Jordan et Leopold Kronecker:
Histoire du théorème de Jordan de la décomposition matricielle (1870–1930)”, Revue d’histoire
des Mathématiques, Société Mathématique de France 2, no. 13, 187–257, 2008 (hal-00142790v2).
This property of invariance vis-à-vis the coordinate system characterizes the
laws of physics and, thus, mathematical models of physics. This explains that
tensor calculus is one of the fundamental mathematical tools for writing and
studying equations that govern physical phenomena. This is the case, for example, in
general relativity, in continuum mechanics, for the theory of elastic deformations, in
electromagnetism, thermodynamics, and so on.
The word tensor was introduced by the German physicist Woldemar Voigt
(1850–1919), in 1899, for the geometric representation of tensions (or pressures)
and deformations in a body, in the areas of elasticity and crystallography. Note that
the word tensor had been used earlier, and independently, by the Irish mathematician, physicist
and astronomer William Rowan Hamilton (1805–1865), in 1846, to designate the
modulus of a quaternion13.

As we have just seen in this brief historical overview, tensor calculus was
used initially in geometry and to describe physical phenomena using tensor fields,
facilitating the application of differential operators (gradient, divergence, curl,
and Laplacian) to tensor fields14.

Thus, we define the electromagnetic tensor (or Maxwell’s (1831–1879) tensor)
describing the structure of the electromagnetic field, the Cauchy stress tensor, and
the deformation tensor (or Green–Lagrange deformation tensor), in continuum
mechanics, and the fourth-order curvature tensor (or Riemann–Christoffel tensor) and
the third-order torsion tensor (or Cartan tensor15) in differential geometry.

11 M. Olive, B. Kolev, and N. Auffray, “Espace de tenseurs et théorie classique des invariants”,
21ème Congrès Français de Mécanique, Bordeaux, France, 2013 (hal-00827406).
12 J. A. Dieudonné and J. B. Carrell, Invariant Theory, Old and New, Academic Press, 1971.
13 See page 9 in E. Sarrau, Notions sur la théorie des quaternions, Gauthiers-Villars, Paris,
1889, http://rcin.org.pl/Content/13490.
14 The notion of tensor field is associated with physical quantities that may depend on both
spatial coordinates and time. These variable geometric quantities define differentiable functions
on a domain of the physical space. Tensor fields are used in differential geometry, in algebraic
geometry, general relativity, and in many other areas of mathematics and physics. The concept
of tensor field generalizes that of vector field.
15 E. Cartan, “Sur une généralisation de la notion de courbure de Riemann et les espaces à
torsion”, Comptes rendus de l’Académie des Sciences 174, 593–595, 1922. Elie Joseph Cartan
(1869–1951), French mathematician and student of Jules Henri Poincaré (1854–1912) and
Charles Hermite (1822–1901) at the Ecole Normale Supérieure. He brought major contributions
concerning the theory of Lie groups, differential geometry, Riemannian geometry, orthogonal
polynomials, and elliptic functions. He discovered spinors, in 1913, as part of his work on the
representations of groups. Like tensor calculus, spinor calculus plays a major role in quantum
physics. His name is associated with Albert Einstein (1879–1955) for the classical theory of
gravitation that relies on the model of general relativity.

After their introduction as computational and representation tools in physics
and geometry, tensors have been the subject of mathematical developments
related to polyadic decomposition (Hitchcock 1927) aiming to generalize dyadic
decompositions, that is to say, matrix decompositions such as SVD.
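
In today’s notation, a polyadic decomposition expresses a third-order array X, of dimensions I × J × K, as a sum of R rank-one terms,

x_{ijk} = Σ_{r=1}^{R} a_{ir} b_{jr} c_{kr},

just as a dyadic decomposition expresses a matrix as a sum of rank-one terms, x_{ij} = Σ_{r} a_{ir} b_{jr}, of which the SVD, with orthonormal factors weighted by the singular values, is the best-known instance.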
Then emerged their applications as tools for the analysis of three-dimensional data
generalizing matrix analysis to sets of matrices, viewed as arrays of data characterized
by three indices. We can mention here the works of pioneers in factor analysis by
Cattell16 and Tucker17 in psychometrics (Cattell 1944; Tucker 1966), and Harshman18
in phonetics (Harshman 1970) who have introduced Tucker’s and PARAFAC (parallel
factors) decompositions. This last one was proposed independently by Carroll
and Chang (1970), under the name of canonical decomposition (CANDECOMP),
following the publication of an article by Wold (1966), with the objective to generalize
the (Eckart and Young 1936) decomposition, that is, SVD, to arrays of order higher
than two. This decomposition was then called CP (for CANDECOMP/PARAFAC) by
Kiers (2000). For an overview of tensor methods applied to data analysis, the reader
should consult the books by Coppi and Bolasco (1989) and Kroonenberg (2008).

16 Raymond Cattell (1905–1998), Anglo-American psychologist who used factorial analysis
for the study of personality with applications to psychotherapy.
17 Ledyard Tucker (1910–2004), American mathematician, expert in statistics and psychology,
and more particularly known for the tensor decomposition which bears his name.
18 Richard Harshman (1943–2008), an expert in psychometrics and father of three-dimensional
PARAFAC analysis, which is the most widely used tensor decomposition in applications.
From the early 1990s, tensor analysis, also called multi-way analysis, has also
been widely used in chemistry, and more specifically in chemometrics (Bro 1997).
Refer to, for example, the book by Smilde et al. (2004) for a description of various
applications in chemistry.
In parallel, at the end of the 1980s, statistical “objects”, such as moments and
cumulants of order higher than two, naturally emerged as tensors (McCullagh
1987). Tensor-based applications were then developed in signal processing for solving
the problem of blind source separation using cumulants (Cardoso 1990, 1991; Cardoso
and Comon 1990). The book by Comon and Jutten (2010) outlines an overview of
methods for blind source separation.

In the early 2000s, tensors were used for modeling digital communication
systems (Sidiropoulos et al. 2000a), for array processing (Sidiropoulos et al. 2000b),
for multi-dimensional harmonics recovery (Haardt et al. 2008; Jiang et al. 2001;
Sidiropoulos 2001), and for image processing, more specifically for face recognition
(Vasilescu and Terzopoulos 2002). The field of wireless communication systems
has then given rise to a large number of tensor models (da Costa et al. 2018;
de Almeida and Favier 2013; de Almeida et al. 2008; Favier et al. 2012a; Favier and
de Almeida 2014b; Favier et al. 2016). These models will be covered in a chapter
of Volume 3. Tensors have also been used for modeling and parameter estimation of
dynamic systems both linear (Fernandes et al. 2008, 2009a) and nonlinear, such as
Volterra systems (Favier and Bouilloc 2009a, 2009b, 2010) or Wiener-Hammerstein
systems (Favier and Kibangou 2009a, 2009b; Favier et al. 2012b; Kibangou and
Favier 2008, 2009, 2010), and for modeling and estimating nonlinear communication
channels (Bouilloc and Favier 2012; Fernandes et al. 2009b, 2011; Kibangou and
Favier 2007). These different tensor-based applications in signal processing will be
addressed in Volume 4.

Many applications of tensors also concern speech processing (Nion et al. 2010),
MIMO radar (Nion and Sidiropoulos 2010), and biomedical signal processing,
particularly for electroencephalography (EEG) (Cong et al. 2015; de Vos et al. 2007;
Hunyadi et al. 2016), and electrocardiography (ECG) signals (Padhy et al. 2018),
magnetic resonance imaging (MRI) (Schultz et al. 2014), or hyperspectral imaging
(Bourennane et al. 2010; Velasco-Forero and Angulo 2013), among many others.
Today, tensors viewed as multi-index tables are used in many areas of application for
the representation, mining, analysis, and fusion of multi-dimensional and multi-modal
data (Acar and Yener 2009; Cichocki 2013; Lahat et al. 2015; Morup 2011).

A very large number of books address linear algebra and matrix calculus, for
example: Gantmacher (1959), Greub (1967), Bellman (1970), Strang (1980), Horn
and Johnson (1985, 1991), Lancaster and Tismenetsky (1985), Noble and Daniel
(1988), Barnett (1990), Rotella and Borne (1995), Golub and Van Loan (1996),
Lütkepohl (1996), Cullen (1997), Zhang (1999), Meyer (2000), Lascaux and Théodor
(2000), Serre (2002), Abadir and Magnus (2005), Bernstein (2005), Gourdon (2009),
Grifone (2011), and Aubry (2012).

For multilinear algebra and tensor calculus, there are much less reference
books, for example: Greub (1978), McCullagh (1987), Coppi and Bolasco (1989),
Smilde et al. (2004), Kroonenberg (2008), Cichocki et al. (2009), and Hackbusch
(2012). For an introduction to multilinear algebra and tensors, see Ph.D. theses by
de Lathauwer (1997) and Bro (1998). The following synthesis articles can also be
consulted: (Bro 1997; Cichocki et al. 2015; Comon 2014; Favier and de Almeida
2014a; Kolda and Bader 2009; Lu et al. 2011; Papalexakis et al. 2016; Sidiropoulos
et al. 2017).
2

Algebraic Structures

2.1. A few historical elements

We make here a brief historical note concerning algebraic structures. The notion
of structure plays a fundamental role in mathematics. In a treatise entitled Eléments
de mathématique, comprising 11 books, Nicolas Bourbaki1 distinguishes three main
types of structures: algebraic structures, ordered structures that equip sets with an
order relation, and topological structures equipping sets with a topology that allows
the definition of topological concepts such as open sets, neighborhood, convergence,
and continuity. Some structures are mixed, that is, they combine several of the three
basic structures. That is the case, for instance, of Banach and Hilbert spaces which
combine the vector space structure with the notions of norm and inner product, that is,
a topology.
Algebraic structures endow sets with laws of composition governing operations
between elements of a same set or between elements of two distinct sets. These
composition laws, known as internal and external laws, respectively, exhibit certain
properties such as associativity, commutativity, and distributivity, together with the
existence (or not) of a symmetric (inverse) element for each element, and of a neutral
(identity) element. Algebraic structures
make it possible to characterize, in particular, sets of numbers, polynomials, matrices,
and functions. The study of these structures (groups, rings, fields, vector spaces, etc.)
and their relationships is the primary purpose of general algebra, also called abstract
algebra. A reminder of the basic algebraic structures will be given in this chapter.

The vector spaces gave rise to linear algebra for the resolution of systems of
linear equations and the study of linear maps (also called linear mappings, or linear
transformations). Linear algebra is closely related to the theory of matrices and matrix
algebra, of which an introduction will be made in Chapter 4.

1 Nicolas Bourbaki is the pseudonym of a group of French mathematicians formed in 1935.


Multilinear algebra extends linear algebra to the study of multilinear maps, through
the notions of tensor space and tensor, which will be introduced in Chapter 6.
Although the resolution of (first- and second-degree) equations can be traced
to the Babylonians2 (about 2000 BC, according to Babylonian tables), then to the
Greeks (300 BC), to the Chinese (200 BC), and to the Indians (6th century), algebra
as a discipline emerged in the Arab-Muslim world, during the 8th century. It gained
momentum in the West, in the 16th century, with the resolution of algebraic (or
polynomial) equations, first with the works of the Italian mathematicians Tartaglia
(1500–1557) and Jérôme Cardan (1501–1576) for cubic equations, whose first
resolution formula is attributed to Scipione del Ferro (1465–1526), and of Lodovico
Ferrari (1522–1565) for quartic equations.
then René Descartes (1596–1650) can also be mentioned, for the introduction of the
notation making use of letters to designate unknowns in equations, and the use of
superscripts to designate powers.
A fundamental structure, linked to the notion of symmetry, is that of the group,
which gave rise to the theory of groups, stemming from the theory of algebraic
equations and the study of arithmetic properties of algebraic numbers, at the end of
the 18th century, and from geometry, at the beginning of the 19th century. We may cite, for
example, Joseph-Louis Lagrange (1736–1813), Niels Abel (1802–1829), and Evariste
Galois (1811–1832), for the study of algebraic equations, the works of Carl Friedrich
Gauss (1777–1855) on the arithmetic theory of quadratic forms, and those of Felix
Klein (1849–1925) and Hermann Weyl (1885–1955) in non-Euclidean geometry. We
can also mention the works of Marie Ennemond Camille Jordan (1838–1922) on the
general linear group, that is, the group of invertible square matrices, and on the Galois
theory. In 1870, he published a treatise on the theory of groups, including the reduced
form of a matrix, known as Jordan form, for which he received the Poncelet prize of
the Academy of Sciences.
Groups involve a single binary operation.

The algebraic structure of ring was proposed by David Hilbert (1862–1943)


and Emmy Noether (1882–1935), while that of field was introduced independently
by Leopold Kronecker (1823–1891) and Richard Dedekind (1831–1916). In 1893,
Heinrich Weber (1842–1913) presented the first axiomatization3 of commutative
fields, completed in 1910 by Ernst Steinitz (1871–1928). Field extensions led to

2 Arnaud Beauville, “Histoire des équations algébriques”, https://math.unice.fr/beauvill/pubs/Equations.pdf and http://www.maths-et-tiques.fr/index.php/Histoire-des-maths/Nombres/Histoire-de-l-algebre.
3 Used for developing a scientific theory, the axiomatic method is based on a set of propositions
called axioms. Its founders are Greek mathematicians, among whom Euclid and Archimedes
(c. 300 BC) are the most famous, for their work in (Euclidean) geometry and arithmetic. It was
at the end of the 19th century that the axiomatic method experienced a growing interest with
the works of Richard Dedekind (1831–1916), Georg Cantor (1845–1918), and Giuseppe Peano
(1858–1932), for the construction of the sets of integers and real numbers, as well as those of
David Hilbert for his axiomatization of Euclidean geometry.

the Galois theory, with the initial aim of solving algebraic equations. In 1843, a first
example of non-commutative field was introduced by William Rowan Hamilton
(1805–1865), with quaternions.

Rings and fields are algebraic structures involving two binary operations, generally
called addition and multiplication.

The underlying structure to the study of linear systems, and more generally
to linear algebra, is that of vector space (v.s.) introduced by Hermann Grassmann
(1809–1877), then axiomatically formalized by Giuseppe Peano, with the introduction
of the notion of R-vector space, at the end of the 19th century. The German
mathematicians David Hilbert (1862–1943) and Otto Toeplitz (1881–1940), Hilbert’s
student, together with the Polish mathematician Stefan Banach (1892–1945), extended
vector spaces to spaces of infinite dimension, called Hilbert spaces and Banach spaces
(or normed vector spaces (n.v.s.)).

The study of systems of linear equations and linear transformations, which is


closely linked to that of matrices, led to the concepts of linear independence, basis,
dimension, rank, determinant, and eigenvalues, which will be considered in this
chapter as well as in Chapters 4 and 6. In Chapter 3, we shall see that by equipping
v.s. with a norm and an inner product, n.v.s. can be obtained on the one hand, and
pre-Hilbertian spaces on the other. The concept of distance allows for the definition
of metric spaces. Norms and distances are used for studying the convergence of
sequences in the context of Banach and Hilbert spaces, of infinite dimension, which
will also be addressed in the next chapter. The extension of linear spaces to multilinear
spaces will be considered in this chapter and in Chapter 6 through the introduction of
the tensor product, with generalization of matrices to hypermatrices and tensors of
order higher than two.

2.2. Chapter summary

The objective of this chapter is to carry out an overview of the main algebraic
structures, while recalling definitions and results that will be useful for other chapters.
First, we recall some results related to sets and maps, and we then present the
definitions and properties of internal and external composition laws on a set. Various
algebraic structures are then detailed: groups, rings, fields, modules, v.s., and algebras.
The notions of substructures and quotient structures are also defined.


The v.s. structure is considered in more detail. Different examples are given,
including v.s. of linear maps and multilinear maps. The concepts of vector subspace,
linear independence, basis, dimension, direct sum of subspaces, and quotient space
are recalled, before summarizing the different structures under consideration in
a table.

The notion of homomorphism, also called morphism, is then introduced, and


morphisms of groups, rings, vector spaces, and algebras are described. The case of
morphisms of v.s., that is, of linear maps, is addressed in more depth. The notions of
isomorphism, endomorphism, and dual basis are defined. The canonical factorization
of linear maps based on the notion of quotient v.s., and the rank theorem, which is a
fundamental result in linear algebra, are presented.

2.3. Sets

2.3.1. Definitions

A set A is a collection of elements {a1 , a2 , · · · }. It is said that ai is an element of


the set A, or ai belongs to A, and it is written ai ∈ A, or A ∋ ai .

A subset B of a set A is a set whose elements also belong to A. It is said that B is


included in A, and it is written B ⊆ A or A ⊇ B:
B ⊆ A ⇔ ∀x ∈ B ⇒ x ∈ A.
If B ⊆ A and B ≠ A, then it is said that B is a proper subset of A, and we write B ⊂ A.

The empty set, denoted by ∅, is by definition the set that contains no elements.
We have ∅ ⊆ A, ∀A.

A finite set E is a set that has a finite number of elements. This number N is called
the cardinality of E, and it is often denoted by |E| or Card(E). There are $2^N$ distinct
subsets of E.
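
As a minimal illustration (not part of the original text), the $2^N$ subsets of a finite set can be enumerated in Python; the set E and its elements below are arbitrary examples:

from itertools import combinations

E = {"a", "b", "c"}  # a finite set with cardinality N = 3

# Enumerate all subsets of E: for each size k = 0, ..., N, list the
# k-element combinations; in total there are 2**N = 8 distinct subsets.
subsets = [set(c) for k in range(len(E) + 1)
           for c in combinations(sorted(E), k)]

print(len(subsets))   # 8, i.e. 2**3
print(subsets[0])     # set(), since the empty set is a subset of every set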

An infinite set E is said to be countable when there exists a bijection between E


and the set of natural numbers (N) or integers (Z). This definition is due to Cantor4.
This means that the elements of a countable set can be indexed as xn , with n ∈ N
or Z. This is the case of sampled signals, namely, discrete-time signals, where n is
the sampling index (t = nTe , where Te is the sampling period), while sets of analog
signals (i.e. continuous-time signals) x(t) are not countable with respect to the time
variable t ∈ R.

4 Georg Cantor (1845–1918), German mathematician born in Saint Petersburg, who is at the
origin of the theory of sets. He is known for the theorem that bears his name, relative to set
cardinality, as well as for his contributions to the theory of numbers.

2.3.2. Sets of numbers

In Table 2.1, we present a few sets of numbers5 that satisfy the following inclusion
relations: N ⊂ Z ⊂ P ⊂ R ⊂ C ⊂ Q ⊂ O. We denote by R∗ = R\{0} the set of
non-zero real numbers, and similarly for N∗, Z∗, P∗, and C∗. In Volume 2, a chapter will
be dedicated to complex numbers, quaternions, and octonions6, with the purpose of
highlighting the matrix representations of these numbers, and introducing quaternionic
and octonionic matrices.

Sets  Definitions
N     Natural numbers including 0
Z     Integers
P     Rational numbers
R     Real numbers
R+    Positive real numbers
R−    Negative real numbers
C     Complex numbers
Q     Quaternions
O     Octonions

Table 2.1. Sets of numbers

2.3.3. Cartesian product of sets

The Cartesian product7 of $N$ sets $A_1, A_2, \cdots, A_N$, denoted $A_1 \times A_2 \times \cdots \times A_N$, or
still $\times_{n=1}^{N} A_n$, is the set of all ordered $N$-tuples $(x_1, x_2, \cdots, x_N)$, where $x_n \in A_n$, $n \in \langle N \rangle = \{1, 2, \cdots, N\}$:
$$\times_{n=1}^{N} A_n = \{(x_1, x_2, \cdots, x_N) : x_n \in A_n,\ n \in \langle N \rangle\}.$$

5 For the set of rational numbers, the notation P is a substitute for the usual notation Q, which
will be used to designate the set of quaternions instead of H, often used to refer to Hamilton,
discoverer of quaternions.
6 Quaternions and octonions, which can be considered as generalizations of complex numbers,
themselves extending real numbers, are part of hypercomplex numbers.
7 The notion of Cartesian product is due to René Descartes (1596–1650), French philosopher,
mathematician, and physicist, and author of philosophical works including the treatise entitled
Discours de la méthode pour bien conduire sa raison et chercher la vérité dans les sciences
(Discourse on the Method for Rightly Conducting the Reason, and Seeking Truth in the
Sciences), which contains the famous quote “I think, therefore I am” (originally in Latin
“Cogito, ergo sum”). He introduced the Cartesian product to represent the Euclidean plane and
three-dimensional space, in the context of analytic geometry, also called Cartesian geometry,
using coordinate systems.

Operations                  Definitions

Equality                    $A = B$ if and only if $A \subseteq B$ and $B \subseteq A$.

Transitivity                if $A \subset B$ and $B \subset C$, then $A \subset C$.

Union (or sum)              $A \cup B = \{x : x \in A \text{ or } x \in B\}$.

Intersection (or product)   $A \cap B = \{x : x \in A \text{ and } x \in B\}$.

Complementation             $A \subset \Omega \Rightarrow \bar{A} = \{x \in \Omega : x \notin A\}$.

Reduction (or difference)   $A - B = A \cap \bar{B}$.

Exclusive or                $A \oplus B = (A - B) \cup (B - A)$.

Table 2.2. Set operations

For example, we define the Cartesian product of $N$ sets of indices $J_n = \{1, \cdots, I_n\}$
as $J = \times_{n=1}^{N} J_n$. The elements of $J$ are the ordered $N$-tuples of indices $(i_1, \cdots, i_N)$,
with $i_n \in J_n$. Later in the book, $\times_{n=1}^{N} I_n$ will be used to highlight the dimensions.

When $A_n = A$, $\forall n \in \langle N \rangle$, the Cartesian product will be written as $\times_{n=1}^{N} A_n = A^N$.

If the sets are vector spaces, we then have a Cartesian product of vector spaces,
which is a fundamental notion underlying, in particular, the definition of multilinear
maps and therefore, as will be seen in section 6.6, that of tensor spaces.
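
As an illustrative sketch (an assumed example, not from the text), the Cartesian product of index sets $J_n = \{1, \cdots, I_n\}$ can be enumerated in Python with itertools.product; the dimensions $I_1 = 2$, $I_2 = 3$, $I_3 = 2$ are arbitrary:

from itertools import product

# Index sets J_n = {1, ..., I_n} for arbitrary dimensions I_1, I_2, I_3
I = [2, 3, 2]
J = [range(1, In + 1) for In in I]

# The Cartesian product J_1 x J_2 x J_3: all ordered 3-tuples (i_1, i_2, i_3)
tuples = list(product(*J))

print(len(tuples))   # 12 = 2 * 3 * 2, the product of the dimensions
print(tuples[0])     # (1, 1, 1)
print(tuples[-1])    # (2, 3, 2)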

2.3.4. Set operations

In Table 2.2, we summarize the main set operations8.


Union and intersection are commutative, associative, and distributive:
– Commutativity: $A \cup B = B \cup A$ ; $A \cap B = B \cap A$.
– Associativity: $(A \cup B) \cup C = A \cup (B \cup C)$ ; $(A \cap B) \cap C = A \cap (B \cap C)$.
– Distributivity: $A \cup (B \cap C) = (A \cup B) \cap (A \cup C)$ ; $A \cap (B \cup C) = (A \cap B) \cup (A \cap C)$.
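
These identities can be checked numerically on small examples; the following sketch (illustrative only, with arbitrarily chosen sets) uses Python's built-in set type:

# Check commutativity, associativity, and distributivity on an example;
# an assert raises an error if the corresponding identity failed.
A, B, C = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}

assert A | B == B | A                      # commutativity of union
assert (A | B) | C == A | (B | C)          # associativity of union
assert A | (B & C) == (A | B) & (A | C)    # union distributes over intersection
assert A & (B | C) == (A & B) | (A & C)    # intersection distributes over union
print("all identities hold on this example")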

Exclusive or (also called exclusive disjunction), denoted $A \oplus B$, is the set of all
elements of $A$ or $B$ that do not belong to both sets at once.

8 Set union and intersection are also denoted by A + B and A B, respectively.



$N$ sets $A_1, \cdots, A_N$ are said to be mutually exclusive if they are pairwise disjoint,
that is, if $A_i \cap A_j = \emptyset$, $\forall i, j \neq i$.

The following properties hold for $A \subset \Omega$ and $B \subset \Omega$:
– $A \cup \bar{A} = \Omega$ ; $A \cap \bar{A} = \emptyset$ ; $\bar{\bar{A}} = A$.
– $\bar{\Omega} = \emptyset$ ; $\bar{\emptyset} = \Omega$.
– If $B \subset A$ then $\bar{B} \supset \bar{A}$.

2.3.5. De Morgan’s laws

De Morgan’s laws, also called rules9, are properties related to the complement of
a union or an intersection of subsets of the same set. Thereby, for two subsets A and
B, it follows that:
$$\overline{A \cup B} = \bar{A} \cap \bar{B} \quad ; \quad \overline{A \cap B} = \bar{A} \cup \bar{B}$$
and in general for $N$ subsets:
$$\overline{\bigcup_{n=1}^{N} A_n} = \bigcap_{n=1}^{N} \bar{A}_n \quad ; \quad \overline{\bigcap_{n=1}^{N} A_n} = \bigcup_{n=1}^{N} \bar{A}_n.$$

The equalities above are logical equivalences, and the symbol of equality can be
replaced by the symbol of equivalence (⇔).

De Morgan’s laws express the fact that the complement of unions and intersections
of sets can be obtained by replacing all sets by their complements, unions by
intersections, and intersections by unions. Therefore, for example:
$$\overline{A \cap (B \cup C)} \Leftrightarrow \bar{A} \cup (\bar{B} \cap \bar{C}),$$
or equivalently:
$$\overline{A(B + C)} \Leftrightarrow \bar{A} + \bar{B}\,\bar{C}.$$
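
A minimal numerical check of De Morgan's laws, assuming a finite universal set $\Omega$ (an illustration, not part of the original text):

# De Morgan's laws on subsets of a finite universal set Omega.
Omega = set(range(10))
A = {0, 1, 2, 3}
B = {2, 3, 4, 5}

def comp(X):
    # Complement of X with respect to Omega
    return Omega - X

assert comp(A | B) == comp(A) & comp(B)    # complement of a union
assert comp(A & B) == comp(A) | comp(B)    # complement of an intersection

# N-set version, here with N = 3 subsets of Omega
subsets = [{0, 1}, {1, 2, 3}, {3, 4}]
union = set().union(*subsets)
inter = Omega.intersection(*subsets)
assert comp(union) == set.intersection(*[comp(X) for X in subsets])
assert comp(inter) == set().union(*[comp(X) for X in subsets])
print("De Morgan's laws verified on this example")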

2.3.6. Characteristic functions

For a given subset F of E, the characteristic function, or indicator function, is the


function $\chi_F : E \to \{0, 1\}$ such that:
$$E \ni x \mapsto \chi_F(x) = \begin{cases} 1 & \text{if } x \in F \\ 0 & \text{if } x \notin F \end{cases}.$$
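
The indicator function translates directly into code; a short sketch (with an assumed ambient set E and subset F chosen for illustration):

def indicator(F):
    # Return the characteristic function chi_F of the subset F
    return lambda x: 1 if x in F else 0

E = range(6)       # ambient set E = {0, 1, ..., 5}
F = {1, 3, 5}      # a subset of E

chi_F = indicator(F)
print([chi_F(x) for x in E])   # [0, 1, 0, 1, 0, 1]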

9 Augustus de Morgan (1806–1871), British mathematician who is the founder of modern logic
with George Boole (1815–1864).

2.3.7. Partitions

An $N$-partition of $\Omega$ is a collection of $N$ disjoint subsets $A_n$, $n \in \langle N \rangle$, of $\Omega$ whose
union is equal to $\Omega$:
$$A_n \subset \Omega\ , \quad \bigcup_{n=1}^{N} A_n = \Omega\ , \quad A_n \cap A_i = \emptyset \quad \forall n \text{ and } i \neq n.$$
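
A partition can be checked programmatically: the subsets must be pairwise disjoint and their union must cover $\Omega$. The following sketch is illustrative, with an arbitrary $\Omega$:

from itertools import combinations

def is_partition(parts, Omega):
    # True if parts is a partition of Omega: pairwise disjoint, union = Omega
    pairwise_disjoint = all(p & q == set() for p, q in combinations(parts, 2))
    covers = set().union(*parts) == Omega
    return pairwise_disjoint and covers

Omega = set(range(6))
print(is_partition([{0, 1}, {2, 3}, {4, 5}], Omega))     # True
print(is_partition([{0, 1}, {1, 2}, {3, 4, 5}], Omega))  # False: 1 appears twice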

2.3.8. σ-algebras or σ-fields

Let $\Omega$ be a non-empty set. A σ-algebra (or σ-field) on $\Omega$ is a collection $\mathcal{A}$ of
subsets of $\Omega$ satisfying the following properties:
– $\mathcal{A}$ is not empty.
– $\mathcal{A}$ is closed under complement, namely, $\forall A_n \in \mathcal{A}$, then $\bar{A}_n \in \mathcal{A}$.
– $\mathcal{A}$ is closed under countable unions, namely, if $A_n \in \mathcal{A}$, $\forall n \in \mathbb{N}$, then
$\bigcup_{n \in \mathbb{N}} A_n \in \mathcal{A}$. A union is said to be countable because the set of subsets $A_n$ is countable.

The pair $(\Omega, \mathcal{A})$ is called a measurable space, and the subsets $A_n$ are called
measurable sets. By equipping the measurable space $(\Omega, \mathcal{A})$ with a measure $\mu : \mathcal{A} \to [0, +\infty]$, the triplet $(\Omega, \mathcal{A}, \mu)$ is called a measure space.

In probability theory, $\Omega$ is the universal set, that is, the set of all possible
experimental outcomes of a random trial, also called elementary events. Defining an
event $A_n$ as a set of elementary events, a collection (or field) $\mathcal{A}$ of events is called
a σ-field, and the pair $(\Omega, \mathcal{A})$ is a measurable space. When this space is endowed
with a probability measure $P$, the triplet $(\Omega, \mathcal{A}, P)$ defines a probability space, where
$P$ satisfies, for any element $A_n$ of $\mathcal{A}$: $0 \leq P(A_n) \leq 1$, $P(\emptyset) = 0$, and $P(\Omega) = 1$. Here, $P(\emptyset) = 0$
means that the empty set is an impossible event, whereas $P(\Omega) = 1$
means that $\Omega$ is a sure event, that is, an event which always occurs.
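
On a finite $\Omega$, the power set is the largest σ-algebra, and a probability measure can be defined from elementary probabilities. The sketch below (a toy example, assuming a fair six-sided die) illustrates the axioms $P(\emptyset) = 0$ and $P(\Omega) = 1$:

from itertools import combinations
from fractions import Fraction

Omega = {1, 2, 3, 4, 5, 6}    # elementary events: outcomes of a fair die

# The power set of Omega, which is the largest sigma-algebra on Omega
sigma_algebra = [frozenset(c) for k in range(len(Omega) + 1)
                 for c in combinations(Omega, k)]

def P(event):
    # Uniform probability measure: P(A) = |A| / |Omega|
    return Fraction(len(event), len(Omega))

assert P(frozenset()) == 0           # the impossible event
assert P(frozenset(Omega)) == 1      # the sure event
print(P(frozenset({2, 4, 6})))       # 1/2, probability of an even outcome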

2.3.9. Equivalence relations

Let E be a non-empty set. An equivalence relation on E, denoted by ∼, is a binary


relation that is reflexive, symmetric, and transitive:
– Reflexivity: $\forall a \in E$, $a \sim a$.
– Symmetry: $\forall (a, b) \in E^2$, $a \sim b \Rightarrow b \sim a$.
– Transitivity: $\forall (a, b, c) \in E^3$, $a \sim b$ and $b \sim c \Rightarrow a \sim c$.

The elements equivalent to an element $a$ form a set called the equivalence class of $a$,
denoted by $c_a \subset E$. The set of all equivalence classes associated with the equivalence
relation $\sim$ forms a partition of $E$, denoted by $E/\sim$ and called the quotient set or quotient
space of $E$.
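
For instance, congruence modulo $m$ is an equivalence relation on the integers, and its equivalence classes form the quotient set. The sketch below (illustrative, with $m = 3$ and a finite subset of integers) groups elements into their classes:

def equivalence_classes(elements, equiv):
    # Group elements into classes of the equivalence relation equiv
    classes = []
    for x in elements:
        for c in classes:
            if equiv(x, next(iter(c))):   # compare with one representative
                c.add(x)
                break
        else:
            classes.append({x})           # x starts a new equivalence class
    return classes

# Congruence modulo 3 on E = {0, ..., 8}: a ~ b iff a - b is divisible by 3
E = range(9)
classes = equivalence_classes(E, lambda a, b: (a - b) % 3 == 0)
print(classes)   # [{0, 3, 6}, {1, 4, 7}, {2, 5, 8}], i.e. the quotient set E/~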