From Algebraic Structures to Tensors
Matrices and Tensors in Signal Processing Set
coordinated by
Gérard Favier

Volume 1

From Algebraic Structures to Tensors

Edited by

Gérard Favier
First published 2019 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as
permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced,
stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers,
or in the case of reprographic reproduction in accordance with the terms and licenses issued by the
CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the
undermentioned address:

ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK
www.iste.co.uk

John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA
www.wiley.com

© ISTE Ltd 2019


The rights of Gérard Favier to be identified as the author of this work have been asserted by him in
accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Control Number: 2019945792

British Library Cataloguing-in-Publication Data


A CIP record for this book is available from the British Library
ISBN 978-1-78630-154-3
Contents

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi

Chapter 1. Historical Elements of Matrices and Tensors . . . . . . . 1

Chapter 2. Algebraic Structures . . . . . . . . . . . . . . . . . . . . . . . . 9


2.1. A few historical elements . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2. Chapter summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3. Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3.1. Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3.2. Sets of numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3.3. Cartesian product of sets . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3.4. Set operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3.5. De Morgan’s laws . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3.6. Characteristic functions . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3.7. Partitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3.8. σ-algebras or σ-fields . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3.9. Equivalence relations . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3.10. Order relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4. Maps and composition of maps . . . . . . . . . . . . . . . . . . . . . . . 17
2.4.1. Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4.2. Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.4.3. Composition of maps . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.5. Algebraic structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.5.1. Laws of composition . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.5.2. Definition of algebraic structures . . . . . . . . . . . . . . . . . . . 22
2.5.3. Substructures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.5.4. Quotient structures . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.5.5. Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

2.5.6. Rings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.5.7. Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.5.8. Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.5.9. Vector spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.5.10. Vector spaces of linear maps . . . . . . . . . . . . . . . . . . . . . 38
2.5.11. Vector spaces of multilinear maps . . . . . . . . . . . . . . . . . . 39
2.5.12. Vector subspaces . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.5.13. Bases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.5.14. Sum and direct sum of subspaces . . . . . . . . . . . . . . . . . . 45
2.5.15. Quotient vector spaces . . . . . . . . . . . . . . . . . . . . . . . . 47
2.5.16. Algebras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.6. Morphisms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.6.1. Group morphisms . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.6.2. Ring morphisms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
2.6.3. Morphisms of vector spaces or linear maps . . . . . . . . . . . . . 51
2.6.4. Algebra morphisms . . . . . . . . . . . . . . . . . . . . . . . . . . 56

Chapter 3. Banach and Hilbert Spaces – Fourier Series and


Orthogonal Polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.1. Introduction and chapter summary . . . . . . . . . . . . . . . . . . . . . 57
3.2. Metric spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.2.1. Definition of distance . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.2.2. Definition of topology . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.2.3. Examples of distances . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.2.4. Inequalities and equivalent distances . . . . . . . . . . . . . . . . . 62
3.2.5. Distance and convergence of sequences . . . . . . . . . . . . . . . 62
3.2.6. Distance and local continuity of a function . . . . . . . . . . . . . 62
3.2.7. Isometries and Lipschitzian maps . . . . . . . . . . . . . . . . . . 63
3.3. Normed vector spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.3.1. Definition of norm and triangle inequalities . . . . . . . . . . . . . 63
3.3.2. Examples of norms . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
3.3.3. Equivalent norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.3.4. Distance associated with a norm . . . . . . . . . . . . . . . . . . . 69
3.4. Pre-Hilbert spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.4.1. Real pre-Hilbert spaces . . . . . . . . . . . . . . . . . . . . . . . . 70
3.4.2. Complex pre-Hilbert spaces . . . . . . . . . . . . . . . . . . . . . . 70
3.4.3. Norm induced from an inner product . . . . . . . . . . . . . . . . . 72
3.4.4. Distance associated with an inner product . . . . . . . . . . . . . . 75
3.4.5. Weighted inner products . . . . . . . . . . . . . . . . . . . . . . . . 76
3.5. Orthogonality and orthonormal bases . . . . . . . . . . . . . . . . . . . 76
3.5.1. Orthogonal/perpendicular vectors and Pythagorean theorem . . . 76
3.5.2. Orthogonal subspaces and orthogonal complement . . . . . . . . . 77
3.5.3. Orthonormal bases . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.5.4. Orthogonal/unitary endomorphisms and isometries . . . . . . . . 79

3.6. Gram–Schmidt orthonormalization process . . . . . . . . . . . . . . . . 80


3.6.1. Orthogonal projection onto a subspace . . . . . . . . . . . . . . . . 80
3.6.2. Orthogonal projection and Fourier expansion . . . . . . . . . . . . 80
3.6.3. Bessel’s inequality and Parseval’s equality . . . . . . . . . . . . . 82
3.6.4. Gram–Schmidt orthonormalization process . . . . . . . . . . . . . 83
3.6.5. QR decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
3.6.6. Application to the orthonormalization of a set of functions . . . . 86
3.7. Banach and Hilbert spaces . . . . . . . . . . . . . . . . . . . . . . . . . . 88
3.7.1. Complete metric spaces . . . . . . . . . . . . . . . . . . . . . . . . 88
3.7.2. Adherence, density and separability . . . . . . . . . . . . . . . . . 90
3.7.3. Banach and Hilbert spaces . . . . . . . . . . . . . . . . . . . . . . 91
3.7.4. Hilbert bases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
3.8. Fourier series expansions . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.8.1. Fourier series, Parseval’s equality and Bessel’s inequality . . . . . 97
3.8.2. Case of 2π-periodic functions from R to C . . . . . . . . . . . . . 97
3.8.3. T -periodic functions from R to C . . . . . . . . . . . . . . . . . . 102
3.8.4. Partial Fourier sums and Bessel’s inequality . . . . . . . . . . . . . 102
3.8.5. Convergence of Fourier series . . . . . . . . . . . . . . . . . . . . . 103
3.8.6. Examples of Fourier series . . . . . . . . . . . . . . . . . . . . . . 108
3.9. Expansions over bases of orthogonal polynomials . . . . . . . . . . . . 117

Chapter 4. Matrix Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123


4.1. Chapter summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
4.2. Matrix vector spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
4.2.1. Notations and definitions . . . . . . . . . . . . . . . . . . . . . . . 124
4.2.2. Partitioned matrices . . . . . . . . . . . . . . . . . . . . . . . . . . 125
4.2.3. Matrix vector spaces . . . . . . . . . . . . . . . . . . . . . . . . . . 126
4.3. Some special matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
4.4. Transposition and conjugate transposition . . . . . . . . . . . . . . . . . 128
4.5. Vectorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
4.6. Vector inner product, norm and orthogonality . . . . . . . . . . . . . . . 130
4.6.1. Inner product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
4.6.2. Euclidean/Hermitian norm . . . . . . . . . . . . . . . . . . . . . . 131
4.6.3. Orthogonality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
4.7. Matrix multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
4.7.1. Definition and properties . . . . . . . . . . . . . . . . . . . . . . . 132
4.7.2. Powers of a matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
4.8. Matrix trace, inner product and Frobenius norm . . . . . . . . . . . . . 137
4.8.1. Definition and properties of the trace . . . . . . . . . . . . . . . . . 137
4.8.2. Matrix inner product . . . . . . . . . . . . . . . . . . . . . . . . . . 138
4.8.3. Frobenius norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
4.9. Subspaces associated with a matrix . . . . . . . . . . . . . . . . . . . . . 139

4.10. Matrix rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141


4.10.1. Definition and properties . . . . . . . . . . . . . . . . . . . . . . . 141
4.10.2. Sum and difference rank . . . . . . . . . . . . . . . . . . . . . . . 143
4.10.3. Subspaces associated with a matrix product . . . . . . . . . . . . 143
4.10.4. Product rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
4.11. Determinant, inverses and generalized inverses . . . . . . . . . . . . . 145
4.11.1. Determinant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
4.11.2. Matrix inversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
4.11.3. Solution of a homogeneous system of linear equations . . . . . . 149
4.11.4. Complex matrix inverse . . . . . . . . . . . . . . . . . . . . . . . 150
4.11.5. Orthogonal and unitary matrices . . . . . . . . . . . . . . . . . . 150
4.11.6. Involutory matrices and anti-involutory matrices . . . . . . . . . 151
4.11.7. Left and right inverses of a rectangular matrix . . . . . . . . . . . 153
4.11.8. Generalized inverses . . . . . . . . . . . . . . . . . . . . . . . . . 155
4.11.9. Moore–Penrose pseudo-inverse . . . . . . . . . . . . . . . . . . . 157
4.12. Multiplicative groups of matrices . . . . . . . . . . . . . . . . . . . . . 158
4.13. Matrix associated to a linear map . . . . . . . . . . . . . . . . . . . . . 159
4.13.1. Matrix representation of a linear map . . . . . . . . . . . . . . . . 159
4.13.2. Change of basis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
4.13.3. Endomorphisms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
4.13.4. Nilpotent endomorphisms . . . . . . . . . . . . . . . . . . . . . . 166
4.13.5. Equivalent, similar and congruent matrices . . . . . . . . . . . . 167
4.14. Matrix associated with a bilinear/sesquilinear form . . . . . . . . . . . 168
4.14.1. Definition of a bilinear/sesquilinear map . . . . . . . . . . . . . . 168
4.14.2. Matrix associated to a bilinear/sesquilinear form . . . . . . . . . 170
4.14.3. Changes of bases with a bilinear form . . . . . . . . . . . . . . . 170
4.14.4. Changes of bases with a sesquilinear form . . . . . . . . . . . . . 171
4.14.5. Symmetric bilinear/sesquilinear forms . . . . . . . . . . . . . . . 172
4.15. Quadratic forms and Hermitian forms . . . . . . . . . . . . . . . . . . 174
4.15.1. Quadratic forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
4.15.2. Hermitian forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
4.15.3. Positive/negative definite quadratic/Hermitian forms . . . . . . . 177
4.15.4. Examples of positive definite quadratic forms . . . . . . . . . . . 178
4.15.5. Cauchy–Schwarz and Minkowski inequalities . . . . . . . . . . . 179
4.15.6. Orthogonality, rank, kernel and degeneration of a bilinear form . 180
4.15.7. Gauss reduction method and Sylvester’s inertia law . . . . . . . . 181
4.16. Eigenvalues and eigenvectors . . . . . . . . . . . . . . . . . . . . . . . 184
4.16.1. Characteristic polynomial and Cayley–Hamilton theorem . . . . 184
4.16.2. Right eigenvectors . . . . . . . . . . . . . . . . . . . . . . . . . . 186
4.16.3. Spectrum and regularity/singularity conditions . . . . . . . . . . 187
4.16.4. Left eigenvectors . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
4.16.5. Properties of eigenvectors . . . . . . . . . . . . . . . . . . . . . . 188
4.16.6. Eigenvalues and eigenvectors of a regularized matrix . . . . . . . 190
4.16.7. Other properties of eigenvalues . . . . . . . . . . . . . . . . . . . 190

4.16.8. Symmetric/Hermitian matrices . . . . . . . . . . . . . . . . . . . 191


4.16.9. Orthogonal/unitary matrices . . . . . . . . . . . . . . . . . . . . . 193
4.16.10. Eigenvalues and extrema of the Rayleigh quotient . . . . . . . . 194
4.17. Generalized eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . 195

Chapter 5. Partitioned Matrices . . . . . . . . . . . . . . . . . . . . . . . . 199


5.1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
5.2. Submatrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
5.3. Partitioned matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
5.4. Matrix products and partitioned matrices . . . . . . . . . . . . . . . . . 202
5.4.1. Matrix products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
5.4.2. Vector Kronecker product . . . . . . . . . . . . . . . . . . . . . . . 202
5.4.3. Matrix Kronecker product . . . . . . . . . . . . . . . . . . . . . . . 202
5.4.4. Khatri–Rao product . . . . . . . . . . . . . . . . . . . . . . . . . . 204
5.5. Special cases of partitioned matrices . . . . . . . . . . . . . . . . . . . . 205
5.5.1. Block-diagonal matrices . . . . . . . . . . . . . . . . . . . . . . . . 205
5.5.2. Signature matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
5.5.3. Direct sum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
5.5.4. Jordan forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
5.5.5. Block-triangular matrices . . . . . . . . . . . . . . . . . . . . . . . 206
5.5.6. Block Toeplitz and Hankel matrices . . . . . . . . . . . . . . . . . 207
5.6. Transposition and conjugate transposition . . . . . . . . . . . . . . . . . 207
5.7. Trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
5.8. Vectorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
5.9. Blockwise addition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
5.10. Blockwise multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . 209
5.11. Hadamard product of partitioned matrices . . . . . . . . . . . . . . . . 209
5.12. Kronecker product of partitioned matrices . . . . . . . . . . . . . . . . 210
5.13. Elementary operations and elementary matrices . . . . . . . . . . . . . 212
5.14. Inversion of partitioned matrices . . . . . . . . . . . . . . . . . . . . . 214
5.14.1. Inversion of block-diagonal matrices . . . . . . . . . . . . . . . . 215
5.14.2. Inversion of block-triangular matrices . . . . . . . . . . . . . . . 215
5.14.3. Block-triangularization and Schur complements . . . . . . . . . 216
5.14.4. Block-diagonalization and block-factorization . . . . . . . . . . . 216
5.14.5. Block-inversion and partitioned inverse . . . . . . . . . . . . . . 217
5.14.6. Other formulae for the partitioned 2 × 2 inverse . . . . . . . . . 218
5.14.7. Solution of a system of linear equations . . . . . . . . . . . . . . 219
5.14.8. Inversion of a partitioned Gram matrix . . . . . . . . . . . . . . . 220
5.14.9. Iterative inversion of a partitioned square matrix . . . . . . . . . 220
5.14.10. Matrix inversion lemma and applications . . . . . . . . . . . . . 221
5.15. Generalized inverses of 2 × 2 block matrices . . . . . . . . . . . . . . 222
5.16. Determinants of partitioned matrices . . . . . . . . . . . . . . . . . . . 224
5.16.1. Determinant of block-diagonal matrices . . . . . . . . . . . . . . 224
5.16.2. Determinant of block-triangular matrices . . . . . . . . . . . . . 225

5.16.3. Determinant of partitioned matrices with square diagonal blocks 225


5.16.4. Determinants of specific partitioned matrices . . . . . . . . . . . 226
5.16.5. Eigenvalues of CB and BC . . . . . . . . . . . . . . . . . . . . . 227
5.17. Rank of partitioned matrices . . . . . . . . . . . . . . . . . . . . . . . . 228
5.18. Levinson–Durbin algorithm . . . . . . . . . . . . . . . . . . . . . . . . 229
5.18.1. AR process and Yule–Walker equations . . . . . . . . . . . . . . 230
5.18.2. Levinson–Durbin algorithm . . . . . . . . . . . . . . . . . . . . . 232
5.18.3. Linear prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . 237

Chapter 6. Tensor Spaces and Tensors . . . . . . . . . . . . . . . . . . . 243


6.1. Chapter summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
6.2. Hypermatrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
6.2.1. Hypermatrix vector spaces . . . . . . . . . . . . . . . . . . . . . . 244
6.2.2. Hypermatrix inner product and Frobenius norm . . . . . . . . . . 245
6.2.3. Contraction operation and n-mode hypermatrix–matrix product . 245
6.3. Outer products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
6.4. Multilinear forms, homogeneous polynomials and hypermatrices . . . . 251
6.4.1. Hypermatrix associated to a multilinear form . . . . . . . . . . . . 251
6.4.2. Symmetric multilinear forms and symmetric hypermatrices . . . . 252
6.5. Multilinear maps and homogeneous polynomials . . . . . . . . . . . . . 255
6.6. Tensor spaces and tensors . . . . . . . . . . . . . . . . . . . . . . . . . . 255
6.6.1. Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
6.6.2. Multilinearity and associativity . . . . . . . . . . . . . . . . . . . . 257
6.6.3. Tensors and coordinate hypermatrices . . . . . . . . . . . . . . . . 257
6.6.4. Canonical writing of tensors . . . . . . . . . . . . . . . . . . . . . 258
6.6.5. Expansion of the tensor product of N vectors . . . . . . . . . . . . 260
6.6.6. Properties of the tensor product . . . . . . . . . . . . . . . . . . . . 261
6.6.7. Change of basis formula . . . . . . . . . . . . . . . . . . . . . . . . 266
6.7. Tensor rank and tensor decompositions . . . . . . . . . . . . . . . . . . 268
6.7.1. Matrix rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
6.7.2. Hypermatrix rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
6.7.3. Symmetric rank of a hypermatrix . . . . . . . . . . . . . . . . . . . 269
6.7.4. Comparative properties of hypermatrices and matrices . . . . . . . 269
6.7.5. CPD and dimensionality reduction . . . . . . . . . . . . . . . . . . 271
6.7.6. Tensor rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
6.8. Eigenvalues and singular values of a hypermatrix . . . . . . . . . . . . . 274
6.9. Isomorphisms of tensor spaces . . . . . . . . . . . . . . . . . . . . . . . 276

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Preface

This book is part of a collection of four books about matrices and tensors, with
applications to signal processing. Although the title of this collection suggests an
orientation toward signal processing, the results and methods presented should also
be of use to readers from other disciplines.

Writing books on matrices is a real challenge, given that so many excellent books
on the topic have already been written1. How, then, to stand out from the existing
literature, and which Ariadne’s thread to unwind? One way to stand apart was to treat
matrices and tensors in parallel. Viewed as extensions of matrices to orders higher than
two, the latter have many similarities with matrices, but also important differences in
terms of rank, uniqueness of decomposition, and potential for representing
multi-dimensional, multi-modal, and inaccurate data. As for the guiding thread, it
consists in presenting structural foundations first, then both matrix and tensor
decompositions together with the related processing methods, and finally
applications, by means of a presentation as self-contained as possible, and with some
originality in the topics addressed and the way they are treated.

Therefore, in Volume 2, we shall use an index convention generalizing Einstein’s
summation convention to write and prove certain equations involving multi-index
quantities, as is the case with matrices and tensors. A chapter will be dedicated
to the Hadamard, Kronecker, and Khatri–Rao products, which play a very important role
in matrix and tensor computations.
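As a foretaste of these products, the following NumPy sketch (illustrative only, not from the book; the `khatri_rao` helper is our own, since NumPy does not provide one) contrasts the Kronecker product with its column-wise variant:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])        # 2 x 2
B = np.array([[0, 1],
              [1, 0]])        # 2 x 2

# Kronecker product: each entry a_ij of A scales a full copy of B,
# yielding a 4 x 4 block matrix.
K = np.kron(A, B)

# Khatri-Rao product (column-wise Kronecker): A and B must have the same
# number of columns; column j of the result is kron(A[:, j], B[:, j]).
def khatri_rao(A, B):
    assert A.shape[1] == B.shape[1], "same number of columns required"
    return np.vstack([np.kron(A[:, j], B[:, j])
                      for j in range(A.shape[1])]).T

KR = khatri_rao(A, B)
print(K.shape, KR.shape)      # (4, 4) (4, 2)
```

Note how the Khatri–Rao product keeps the column count of its factors, which is why it appears so often in tensor factorizations.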

After a reminder of the main matrix decompositions, including a detailed presentation
of the SVD (singular value decomposition), we shall present different tensor
operations, as well as the two main tensor decompositions that will be the basis of
both fundamental and applied developments in the last two volumes. These standard
tensor decompositions can be seen as extensions of the matrix SVD to tensors of order
higher than two. A few examples of equations for representing signal processing
problems will be provided to illustrate the use of such decompositions. A chapter
will be devoted to structured matrices. Different properties will be highlighted, and
extensions to tensors of order higher than two will be presented. Two other chapters
will concern quaternions and quaternionic matrices, on the one hand, and polynomial
matrices, on the other hand.

1 A list of books, far from exhaustive, is provided in Chapter 1.
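To make the link between the SVD and low-rank structure concrete, here is a small NumPy sketch (illustrative only, not taken from the book) showing that the SVD of a sum of two outer products reveals its rank, and that truncation yields the best low-rank approximation:

```python
import numpy as np

# A rank-2 matrix built as a sum of two outer products.
rng = np.random.default_rng(0)
u1, u2 = rng.standard_normal(5), rng.standard_normal(5)
v1, v2 = rng.standard_normal(4), rng.standard_normal(4)
M = np.outer(u1, v1) + np.outer(u2, v2)        # shape (5, 4)

# Thin SVD: M = U @ diag(s) @ Vh, singular values in decreasing order.
U, s, Vh = np.linalg.svd(M, full_matrices=False)

# Only two singular values are numerically nonzero: the rank is revealed.
rank = int(np.sum(s > 1e-10 * s[0]))
print(rank)                                    # 2

# Keeping only the leading term gives the best rank-1 approximation of M
# in the Frobenius norm (Eckart-Young theorem); the error equals s[1].
M1 = s[0] * np.outer(U[:, 0], Vh[0, :])
```

For tensors of order higher than two, no such truncated decomposition is optimal in general, which is one of the key differences developed in the later volumes.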
In Volume 3, an overview of several tensor models will be carried out by
taking some constraints (structural, linear dependency of factors, sparsity, and non-
negativity) into account. Some of these models will be used in Volume 4, for the
design of digital communication systems. Tensor trains and tensor networks will
also be presented for the representation and analysis of massive data (big data).
The algorithmic aspect will be taken into consideration with the presentation of
different processing methods.

Volume 4 will mainly focus on tensorial approaches for array processing,
wireless digital communications (first point-to-point, then cooperative), modeling and
identification of both linear and nonlinear systems, as well as the reconstruction of
missing data in data matrices and tensors, the so-called problems of matrix and tensor
completion. For these applications, tensor-based models will be detailed in greater
depth. Monte Carlo simulation results will be provided to illustrate some of the
tensorial methods. This will particularly be the case for semi-blind receivers recently
developed for wireless communication systems.

Matrices and tensors, and more generally linear and multilinear algebra,
are at once exciting, extensive, and fundamental topics, equally important
for teaching and research as for applications. It is worth noting here that the
choices made for the content of the books in this collection have not been guided
by educational programs, which explains some gaps compared to standard algebra
treatises. The guiding thread has rather been to present the definitions, properties,
concepts, and results necessary for a good understanding of the processing methods and
applications considered in these books. In addition to the great diversity of topics,
another difficulty resided in the order in which they should be addressed, since many
topics overlap and certain notions and results are sometimes used before they have
been defined or proved, which requires referring the reader to sections or chapters
that follow.
Four particularities should be highlighted. The first relates to the close relationship
between some of the topics addressed, certain methods presented, and recent
research results, particularly with regard to tensorial approaches for signal processing.
The second reflects the intention to situate the stated results in their historical context,
using some biographical information on certain cited authors, as well as lists
of references comprehensive enough both to explore specific results in more depth and
to extend the biographical sources provided. This has motivated the introductory
chapter entitled “Historical elements of matrices and tensors.”

The last two characteristics concern the presentation and illustration of the properties
and methods under consideration. Some will be provided without proof
because of their simplicity or their availability in numerous books on the subject.
Others will be proved, either for pedagogical reasons, since knowing the proofs should
allow for a better understanding of the results, or because of the difficulty of finding
them in the literature, or due to the originality of the proposed proofs, as will be
the case, for example, for those making use of the index convention. Note also the
use of many tables, whose purpose is to recall key results while presenting them in a
synthetic and comparative manner.

Finally, numerous examples will be provided to illustrate certain definitions,
properties, decompositions, and methods presented. This will particularly be the case
for the fourth book, dedicated to applications of tensorial tools, which has been my
main source of motivation. After 15 years of research work (pioneering for some of it)
aimed at using tensor decompositions for modeling and identifying nonlinear
dynamical systems, and at designing wireless communication systems based on new
tensor models, it seemed useful to me to share this experience and this scientific path,
in order to make tensor tools as accessible as possible and to motivate new
applications based on tensor approaches.
This first book, whose content is described below, provides an introduction to
matrices and tensors based on the structures of vector spaces and tensor spaces,
along with the presentation of fundamental concepts and results. In the first part
(Chapters 2 and 3), a refresher on the mathematical bases related to classical algebraic
structures is presented, highlighting the growing complexity of the structures under
consideration, ranging from monoids to vector spaces and algebras. The notions of
norm, inner product, and Hilbert basis are detailed in order to introduce Banach and
Hilbert spaces. The Hilbertian approach, which is fundamental for signal processing,
is illustrated with two methods widely employed for signal representation and
analysis, as well as for function approximation, namely, Fourier series and orthogonal
polynomial series.
Chapter 4 is dedicated to matrix algebra. The notions of fundamental subspaces
associated with a matrix, rank, determinant, inverse, auto-inverse, generalized
inverse, and pseudo-inverse are described therein. Matrix representations of linear and
bilinear/sesquilinear maps are established. The effect of a change of basis is studied,
leading to the definition of equivalent, similar, and congruent matrices. The notions of
eigenvalue and eigenvector are then defined, ending with matrix eigendecomposition,
and in some cases, with its diagonalization, which are topics to be covered in Volume
2. The case of certain structured matrices, such as symmetric/hermitian matrices and
orthogonal/unitary matrices, is more specifically considered. The interpretation of
eigenvalues as extrema of the Rayleigh quotient is presented, before introducing the
notion of generalized eigenvalues.

In Chapter 5, we consider partitioned matrices. This type of structure is inherent
to matrix products in general, and Kronecker and Khatri–Rao products in particular.

Partitioned matrices corresponding to block-diagonal and block-triangular matrices,
as well as to Jordan forms, are described. Next, transposition/conjugate transposition,
vectorization, addition and multiplication operations, as well as Hadamard and
Kronecker products, are presented for partitioned matrices. Elementary operations and
associated matrices allowing the partitioned matrices to be decomposed are detailed.
These operations are then utilized for block-triangularization, block-diagonalization,
and block-inversion of partitioned matrices. The matrix inversion lemma, which is
widely used in signal processing, is deduced from block-inversion formulae. This
lemma is used to demonstrate a few inversion formulae very often encountered in
calculations. Fundamental results on generalized inverse, determinant, and rank of
partitioned matrices are presented. The Levinson algorithm is derived using
the formula for inverting a partitioned square matrix, recursively with respect to the
matrix order. This algorithm, which is one of the most famous in signal processing,
makes it possible to efficiently solve the problem of parameter estimation for
autoregressive (AR) models and linear predictors, recursively with respect to the
order of the AR model and of the predictor, respectively. To illustrate the results of
Chapter 3 relating to orthogonal projection, it is shown that forward and backward
linear predictors, optimal in the sense of the MMSE (minimum mean squared error),
can be interpreted in terms of orthogonal projectors onto subspaces of the Hilbert
space of second-order stationary random signals.
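As an aside, the matrix inversion lemma mentioned above can be checked numerically in its rank-1 form (the Sherman–Morrison identity). The following Python sketch uses arbitrary 2 × 2 example data, chosen purely for illustration and not taken from the book:

```python
# Numerical check of the Sherman-Morrison identity (a rank-1 special case
# of the matrix inversion lemma): for an invertible A and vectors u, v with
# 1 + v^T A^{-1} u != 0,
#   (A + u v^T)^{-1} = A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u).

def inv2(M):
    """Inverse of a 2x2 matrix given as [[a, b], [c, d]]."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[4.0, 1.0], [2.0, 3.0]]   # arbitrary invertible example matrix
u = [1.0, 2.0]                 # arbitrary update vectors
v = [3.0, 1.0]

# Left-hand side: direct inversion of A + u v^T.
Apuv = [[A[i][j] + u[i] * v[j] for j in range(2)] for i in range(2)]
lhs = inv2(Apuv)

# Right-hand side: the Sherman-Morrison update of A^{-1}.
Ainv = inv2(A)
Au = [sum(Ainv[i][k] * u[k] for k in range(2)) for i in range(2)]  # A^{-1} u
vA = [sum(v[k] * Ainv[k][j] for k in range(2)) for j in range(2)]  # v^T A^{-1}
denom = 1.0 + sum(v[k] * Au[k] for k in range(2))
rhs = [[Ainv[i][j] - Au[i] * vA[j] / denom for j in range(2)] for i in range(2)]

max_err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
```

The two sides agree to machine precision; in signal processing this update is what allows recursive least squares to avoid re-inverting a full matrix at each step.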

In Chapter 6, hypermatrices and tensors are introduced in close connection with
multilinear maps and multilinear forms. Hypermatrix vector spaces are first defined,
along with operations such as the inner product and the contraction of hypermatrices –
the particular case of the n-mode hypermatrix-matrix product being considered in more
detail. Hypermatrices associated with multilinear forms and maps are highlighted,
and symmetric hypermatrices are introduced through the definition of symmetric
multilinear forms. Then, tensors of order N > 2 are defined in a formal way as
elements of a tensor space, i.e., a tensor product of N vector spaces. The effect of
basis changes in the tensor space on the coordinate hypermatrix of a tensor is analyzed.
In addition, some attributes of the tensor product are described, with a focus on
the so-called universal property. Following this, the notion of tensor rank based on the
canonical polyadic decomposition (CPD) is introduced, together with the notions of
eigenvalues and singular values of a tensor. These notions highlight the similarities
and the differences between matrices and tensors of order greater than two. Finally,
the concept of tensor unfolding is illustrated via the definition of isomorphisms of
tensor spaces.

I want to thank my colleagues Sylvie Icart and Vicente Zarzoso for their review of
some chapters and Henrique de Morais Goulart, who co-authored Chapter 4.

Gérard FAVIER
August 2019
[email protected]
1. Historical Elements of Matrices and Tensors

The objective of this introduction is by no means to outline a rigorous and
comprehensive historical background of the theory of matrices and tensors. Such
a historical record should be the work of a historian of mathematics and would
require thorough bibliographical research, including reading the original publications
of authors cited to analyze and reconstruct the progress of mathematical thinking
throughout years and collaborations. A very interesting illustration of this type
of analysis is provided, for example, in the form of a “representation of research
networks”1, over the period 1880–1907, in which are identified the interactions and
influences of some mathematicians, such as James Joseph Sylvester (1814–1897),
Karl Theodor Weierstrass (1815–1897), Arthur Cayley (1821–1895), Leopold
Kronecker (1823–1891), Ferdinand Georg Frobenius (1849–1917), or Eduard
Weyr (1852–1903), with respect to the theory of matrices, the theory of numbers
(quaternions, hypercomplex numbers), bilinear forms, and algebraic structures.

Our modest goal here is to locate in time the contributions of a few mathematicians
and physicists2 who have laid the foundations for the theory of matrices and tensors,
and to whom we will refer later in our presentation. This choice is necessarily very
incomplete.

1 F. Brechenmacher, "Les matrices : formes de représentation et pratiques opératoires
(1850–1930)", Culture MATH - Expert site ENS Ulm / DESCO, December 20, 2006.
2 For more information on the mathematicians cited in this introduction, refer to the document
“Biographies de mathématiciens célèbres”, by Johan Mathieu, 2008, and the remarkable site
Mac Tutor History of Mathematics Archive (http://www-history.mcs.st-andrews.ac.uk) of the
University of St. Andrews, in Scotland, which contains a very large number of biographies of
mathematicians.

From Algebraic Structures to Tensors, First Edition. Edited by Gérard Favier.
© ISTE Ltd 2019. Published by ISTE Ltd and John Wiley & Sons, Inc.

The first studies of determinants that preceded those of matrices were conducted
independently by the Japanese mathematician Seki Kowa (1642–1708) and the
German mathematician Gottfried Leibniz (1646–1716), and then by the Scottish
mathematician Colin Maclaurin (1698–1746) for solving 2 × 2 and 3 × 3 systems
of linear equations. These works were then generalized by the Swiss mathematician
Gabriel Cramer (1704–1752) for the resolution of n × n systems, leading, in
1750, to the famous formulae that bear his name, whose proof is due to
Augustin-Louis Cauchy (1789–1857).
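Cramer's formulae are easy to state for a 2 × 2 system: each unknown is a ratio of two determinants. The following Python sketch applies them to an arbitrary example system, given purely for illustration:

```python
# Cramer's rule for a 2x2 linear system
#   a11 x + a12 y = b1
#   a21 x + a22 y = b2
# x = det([[b1, a12], [b2, a22]]) / D,  y = det([[a11, b1], [a21, b2]]) / D,
# where D = a11*a22 - a12*a21 is the determinant of the coefficient matrix.

def cramer_2x2(a11, a12, a21, a22, b1, b2):
    D = a11 * a22 - a12 * a21
    if D == 0:
        raise ValueError("singular system: Cramer's rule does not apply")
    x = (b1 * a22 - a12 * b2) / D
    y = (a11 * b2 - b1 * a21) / D
    return x, y

# Example: 2x + y = 5, x + 3y = 10
x, y = cramer_2x2(2, 1, 1, 3, 5, 10)  # -> x = 1.0, y = 3.0
```

The same ratio-of-determinants pattern extends to n × n systems, although elimination methods are preferred in practice for numerical reasons.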
In 1772, Alexandre-Théophile Vandermonde (1735–1796) defined the notion of
determinant, and Pierre-Simon Laplace (1749–1827) formulated the computation
of determinants by means of an expansion according to a row or a column, an
expansion which will be presented in section 4.11.1. In 1773, Joseph-Louis Lagrange
(1736–1813) discovered the link between the calculation of determinants and that of
volumes. In 1812, Cauchy used, for the first time, the determinant in the sense that it
has today, and he established the formula for the determinant of the product of two
rectangular matrices, a formula which was found independently by Jacques Binet
(1786–1856), and which is called nowadays the Binet–Cauchy formula.

In 1810, Johann Carl Friedrich Gauss (1777–1855) introduced a notation using a
table, similar to matrix notation, to write a 3 × 3 system of linear equations, and he
proposed the elimination method, known as Gauss elimination through pivoting, to
solve it. This method, also known as Gauss–Jordan elimination method, was in fact
known to Chinese mathematicians (first century). It was presented in a modern form,
by Gauss, when he developed the least squares method, first published by Adrien-
Marie Legendre (1752–1833), in 1805.
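The elimination method referred to here reduces the augmented system to triangular form and then back-substitutes. A minimal, generic Python sketch (a textbook implementation with partial pivoting, not code from the book, applied to an arbitrary example system) is:

```python
# Gaussian elimination with partial pivoting, solving A x = b.
def gauss_solve(A, b):
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    for k in range(n):
        # Pivot: bring the largest |entry| of column k (rows k..n-1) to row k.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        # Eliminate column k below the pivot.
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    # Back-substitution on the upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Example 3x3 system with solution x = [1, 2, 3].
A = [[2.0, 1.0, -1.0], [1.0, 3.0, 2.0], [3.0, -1.0, 1.0]]
b = [1.0, 13.0, 4.0]
x = gauss_solve(A, b)
```

Partial pivoting (the row swap) is a modern numerical safeguard; the elimination idea itself is the one attributed above to Gauss and to the Chinese mathematicians of the first century.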

Several determinants of special matrices are designated by the names of their
authors, such as Vandermonde's, Cauchy's, Hilbert's, and Sylvester's determinants.
The last of these, Sylvester, was the first to use the word "matrix", in 1850, to designate
a rectangular table of numbers. The presentation of the determinant of an nth-order
square matrix as an alternating n-linear function of its n column vectors is due to
Weierstrass and Kronecker, at the end of the 19th century.

The foundations of the theory of matrices were laid in the 19th century around the
following topics: determinants for solving systems of linear equations, representation
of linear transformations and quadratic forms (a topic which will be addressed in detail
in Chapter 4), matrix decompositions and reductions to canonical forms, that is to say,
diagonal or triangular forms such as the Jordan (1838–1922) normal form with Jordan
blocks on the diagonal, introduced by Weierstrass, the block-triangular form of Schur
(1875–1941), or the Frobenius normal form that is a block-diagonal matrix, whose
blocks are companion matrices.

A history of the theory of matrices in the 19th century was published by Thomas
Hawkins3 in 1974, highlighting, in particular, the contributions of the British
mathematician Arthur Cayley, seen by historians as one of the founders of the theory
of matrices. Cayley laid the foundations of the classical theory of determinants4 in
1843. He then developed matrix computation5 by defining certain matrix operations, such as
the product of two matrices, the transposition of the product of two matrices, and the
inversion of a 3 × 3 matrix using cofactors, and by establishing different properties of
matrices, including, namely, the famous Cayley–Hamilton theorem which states that
every square matrix satisfies its characteristic equation. This result, highlighted for the
fourth order by William Rowan Hamilton (1805–1865), in 1853, for the calculation
of the inverse of a quaternion, was stated in the general case by Cayley in 1857, but
the proof for an arbitrary order is due to Frobenius, in 1878.
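For a 2 × 2 matrix, the Cayley–Hamilton theorem reads A² − tr(A)·A + det(A)·I = 0, which can be verified directly; here is a minimal Python sketch with an arbitrary example matrix:

```python
# Cayley-Hamilton check for a 2x2 matrix A: the characteristic polynomial is
# p(t) = t^2 - tr(A) t + det(A), and the theorem states that
# p(A) = A^2 - tr(A) A + det(A) I is the zero matrix.

A = [[1.0, 2.0], [3.0, 4.0]]  # arbitrary example

tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]

# A squared
A2 = [[sum(A[i][k] * A[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]

I = [[1.0, 0.0], [0.0, 1.0]]
pA = [[A2[i][j] - tr * A[i][j] + det * I[i][j] for j in range(2)]
      for i in range(2)]  # should be the 2x2 zero matrix
```

Hamilton's quaternion computation mentioned above exploits exactly this identity: it expresses A⁻¹ from p(A) = 0 as A⁻¹ = (tr(A)·I − A)/det(A) in the 2 × 2 case.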
An important part of the theory of matrices concerns the spectral theory, namely,
the notions of eigenvalue and characteristic polynomial. Directly related to the
integration of systems of linear differential equations, this theory has its origins in
physics, and more particularly in celestial mechanics for the study of the orbits of
planets, conducted in the 18th century by mathematicians, physicists, and astronomers
such as Lagrange and Laplace, then in the 19th century by Cauchy, Weierstrass,
Kronecker, and Jordan.

The names of certain matrices and associated determinants are those of the
mathematicians who have introduced them. This is the case, for example, for
Alexandre-Théophile Vandermonde (1735–1796), who gave his name to a matrix
whose elements on each row (or each column) form a geometric progression and
whose determinant is a polynomial. It is also the case for Carl Jacobi (1804–1851)
and Ludwig Otto Hesse (1811–1874), for Jacobian and Hessian matrices, namely,
the matrices of first- and second-order partial derivatives of a vector function, whose
determinants are called Jacobian and Hessian, respectively. The same is true for the
Laplacian matrix or Laplace matrix, which is used to represent a graph. We can also
mention Charles Hermite (1822–1901) for Hermitian matrices, related to the so-called
Hermitian forms (see section 4.15). Specific matrices such as Fourier (1768–1830)
and Hadamard (1865–1963) matrices are directly related to the transforms of the
same name. Similarly, Householder (1904–1993) and Givens (1910–1993) matrices
are associated with transformations corresponding to reflections and rotations,
respectively. The so-called structured matrices, such as Hankel (1839–1873) and
Toeplitz (1881–1940) matrices, play a very important role in signal processing.
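As an illustration of the Vandermonde matrix mentioned above, its determinant has the closed form ∏ over i &lt; j of (x_j − x_i). A small Python check on three arbitrary points:

```python
# Vandermonde matrix V with rows (1, x_i, x_i^2, ..., x_i^{n-1});
# its determinant equals the product of (x_j - x_i) over all pairs i < j.

def vandermonde(xs):
    n = len(xs)
    return [[x ** j for j in range(n)] for x in xs]

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along row 0."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

xs = [1.0, 2.0, 4.0]          # arbitrary distinct nodes
V = vandermonde(xs)
lhs = det3(V)                 # determinant computed directly

rhs = 1.0                     # closed-form product of differences
for i in range(3):
    for j in range(i + 1, 3):
        rhs *= xs[j] - xs[i]
```

The product formula also shows at a glance why a Vandermonde matrix is invertible exactly when the nodes are pairwise distinct, the property underlying polynomial interpolation.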

3 Thomas Hawkins, “The theory of matrices in the 19th century”, Proceedings of the
International Congress of Mathematicians, Vancouver, 1974.
4 Arthur Cayley, “On a theory of determinants”, Cambridge Philosophical Society 8, l–16,
1843.
5 Arthur Cayley, “A memoir on the theory of matrices”, Philosophical Transactions of the
Royal Society of London 148, 17–37, 1858.

Matrix decompositions are widely used in numerical analysis, especially to solve
systems of equations using the method of least squares. This is the case, for example,
of EVD (eigenvalue decomposition), SVD (singular value decomposition), LU, QR,
UD, Cholesky (1875–1918), and Schur (1875–1941) decompositions, which will be
presented in Volume 2.

Just as matrices and matrix computation play a fundamental role in linear algebra,
tensors and tensor computation are at the origin of multilinear algebra. It was in the
19th century that tensor analysis first appeared, along with the works of German
mathematicians Georg Friedrich Bernhard Riemann6 (1826–1866) and Elwin Bruno
Christoffel (1829–1900) in (non-Euclidean) geometry, introducing the index notation
and the notions of metric, manifold, geodesic, curved space, and curvature tensor, which gave
rise to what is today called Riemannian geometry and differential geometry.

It was the Italian mathematician Gregorio Ricci-Curbastro (1853–1925), with
his student Tullio Levi-Civita (1873–1941), who were the founders of tensor
calculus, then called absolute differential calculus7, with the introduction of the
notion of covariant and contravariant components, which was used by Albert Einstein
(1879–1955) in his theory of general relativity, in 1915.

Tensor calculus originates from the study of the invariance of quadratic forms
under the effect of a change of coordinates and, more generally, from the theory of
invariants initiated by Cayley8, with the introduction of the notion of hyperdeterminant,
which generalizes matrix determinants to hypermatrices. Refer to the article by Crilly9
for an overview of Cayley's contribution to invariant theory. This theory
was developed by Jordan and Kronecker, giving rise to a controversy10 between these
two authors, and was then continued by David Hilbert (1862–1943), Elie Joseph Cartan
(1869–1951), and Hermann Klaus Hugo Weyl (1885–1955), for algebraic forms

6 A detailed analysis of Riemann’s contributions to tensor analysis has been made by Ruth
Farwell and Christopher Knee, “The missing link: Riemann’s Commentatio, differential
geometry and tensor analysis”, Historia Mathematica 17, 223–255, 1990.
7 G. Ricci and T. Levi-Civita, “Méthodes de calcul différentiel absolu et leurs applications”,
Mathematische Annalen 54, 125–201, 1900.
8 A. Cayley, “On the theory of linear transformations”, Cambridge Journal of Mathematics 4,
193–209, 1845. A. Cayley, “On linear transformations”, Cambridge and Dublin Mathematical
Journal 1, 104–122, 1846.
9 T. Crilly, “The rise of Cayley’s invariant theory (1841–1862)”, Historica Mathematica 13,
241–254, 1986.
10 F. Brechenmacher, “La controverse de 1874 entre Camille Jordan et Leopold Kronecker:
Histoire du théorème de Jordan de la décomposition matricielle (1870–1930)”, Revue d’histoire
des Mathématiques, Society Math De France 2, no. 13, 187–257, 2008 (hal-00142790v2).

(or homogeneous polynomials), or for symmetric tensors11. A historical review of the
theory of invariants was made by Dieudonné and Carrell12.
This property of invariance vis-à-vis the coordinate system characterizes the
laws of physics and, thus, the mathematical models of physics. This explains why
tensor calculus is one of the fundamental mathematical tools for writing and
studying the equations that govern physical phenomena. This is the case, for example, in
general relativity, in continuum mechanics for the theory of elastic deformations, in
electromagnetism, in thermodynamics, and so on.
The word tensor was introduced by the German physicist Woldemar Voigt
(1850–1919), in 1899, for the geometric representation of tensions (or pressures)
and deformations in a body, in the areas of elasticity and crystallography. Note that
the word tensor was introduced independently by the Irish mathematician, physicist
and astronomer William Rowan Hamilton (1805–1865), in 1846, to designate the
modulus of a quaternion13.

As we have just seen in this brief historical overview, tensor calculus was
used initially in geometry and to describe physical phenomena using tensor fields,
facilitating the application of differential operators (gradient, divergence, curl,
and Laplacian) to tensor fields14.

Thus, we define the electromagnetic tensor (or Maxwell's (1831–1879) tensor)
describing the structure of the electromagnetic field, the Cauchy stress tensor, and
the deformation tensor (or Green–Lagrange deformation tensor), in continuum
mechanics, and the fourth-order curvature tensor (or Riemann–Christoffel tensor) and
the third-order torsion tensor (or Cartan tensor15) in differential geometry.

11 M. Olive, B. Kolev, and N. Auffray, “Espace de tenseurs et théorie classique des invariants”,
21ème Congrès Français de Mécanique, Bordeaux, France, 2013 (hal-00827406).
12 J. A. Dieudonné and J. B. Carrell, Invariant Theory, Old and New, Academic Press, 1971.
13 See page 9 in E. Sarrau, Notions sur la théorie des quaternions, Gauthiers-Villars, Paris,
1889, http://rcin.org.pl/Content/13490.
14 The notion of tensor field is associated with physical quantities that may depend on both
spatial coordinates and time. These variable geometric quantities define differentiable functions
on a domain of the physical space. Tensor fields are used in differential geometry, in algebraic
geometry, general relativity, and in many other areas of mathematics and physics. The concept
of tensor field generalizes that of vector field.
15 E. Cartan, “Sur une généralisation de la notion de courbure de Riemann et les espaces à
torsion”, Comptes rendus de l’Académie des Sciences 174, 593–595, 1922. Elie Joseph Cartan
(1869–1951), French mathematician and student of Jules Henri Poincaré (1854–1912) and
Charles Hermite (1822–1901) at the Ecole Normale Supérieure. He brought major contributions
concerning the theory of Lie groups, differential geometry, Riemannian geometry, orthogonal
polynomials, and elliptic functions. He discovered spinors, in 1913, as part of his work on the

After their introduction as computational and representation tools in physics
and geometry, tensors have been the subject of mathematical developments
related to polyadic decomposition (Hitchcock 1927) aiming to generalize dyadic
decompositions, that is to say, matrix decompositions such as SVD.
Then emerged their applications as tools for the analysis of three-dimensional data
generalizing matrix analysis to sets of matrices, viewed as arrays of data characterized
by three indices. We can mention here the works of pioneers in factor analysis by
Cattell16 and Tucker17 in psychometrics (Cattell 1944; Tucker 1966), and Harshman18
in phonetics (Harshman 1970), who introduced Tucker's and PARAFAC (parallel
factors) decompositions. The latter was proposed independently by Carroll
and Chang (1970), under the name of canonical decomposition (CANDECOMP),
following the publication of an article by Wold (1966), with the objective to generalize
the (Eckart and Young 1936) decomposition, that is, SVD, to arrays of order higher
than two. This decomposition was then called CP (for CANDECOMP/PARAFAC) by
Kiers (2000). For an overview of tensor methods applied to data analysis, the reader
should consult the books by Coppi and Bolasco (1989) and Kroonenberg (2008).
From the early 1990s, tensor analysis, also called multi-way analysis, has also
been widely used in chemistry, and more specifically in chemometrics (Bro 1997).
Refer to, for example, the book by Smilde et al. (2004) for a description of various
applications in chemistry.
In parallel, at the end of the 1980s, statistical "objects," such as moments and
cumulants of order higher than two, have naturally emerged as tensors (McCullagh
1987). Tensor-based applications were then developed in signal processing for solving
the problem of blind source separation using cumulants (Cardoso 1990, 1991; Cardoso
and Comon 1990). The book by Comon and Jutten (2010) outlines an overview of
methods for blind source separation.

In the early 2000s, tensors were used for modeling digital communication
systems (Sidiropoulos et al. 2000a), for array processing (Sidiropoulos et al. 2000b),
for multi-dimensional harmonics recovery (Haardt et al. 2008; Jiang et al. 2001;
Sidiropoulos 2001), and for image processing, more specifically for face recognition

representations of groups. Like tensor calculus, spinor calculus plays a major role in quantum
physics. His name is associated with Albert Einstein (1879–1955) for the classical theory of
gravitation that relies on the model of general relativity.
16 Raymond Cattell (1905–1998), Anglo-American psychologist who used factorial analysis
for the study of personality with applications to psychotherapy.
17 Ledyard Tucker (1910–2004), American mathematician, expert in statistics and psychology,
and more particularly known for tensor decomposition which bears his name.
18 Richard Harshman (1943–2008), an expert in psychometrics and father of three-dimensional
PARAFAC analysis which is the most widely used tensor decomposition in applications.

(Vasilescu and Terzopoulos 2002). The field of wireless communication systems
has then given rise to a large number of tensor models (da Costa et al. 2018;
de Almeida and Favier 2013; de Almeida et al. 2008; Favier et al. 2012a; Favier and
de Almeida 2014b; Favier et al. 2016). These models will be covered in a chapter
of Volume 3. Tensors have also been used for modeling and parameter estimation of
dynamic systems both linear (Fernandes et al. 2008, 2009a) and nonlinear, such as
Volterra systems (Favier and Bouilloc 2009a, 2009b, 2010) or Wiener-Hammerstein
systems (Favier and Kibangou 2009a, 2009b; Favier et al. 2012b; Kibangou and
Favier 2008, 2009, 2010), and for modeling and estimating nonlinear communication
channels (Bouilloc and Favier 2012; Fernandes et al. 2009b, 2011; Kibangou and
Favier 2007). These different tensor-based applications in signal processing will be
addressed in Volume 4.

Many applications of tensors also concern speech processing (Nion et al. 2010),
MIMO radar (Nion and Sidiropoulos 2010), and biomedical signal processing,
particularly for electroencephalography (EEG) (Cong et al. 2015; de Vos et al. 2007;
Hunyadi et al. 2016), and electrocardiography (ECG) signals (Padhy et al. 2018),
magnetic resonance imaging (MRI) (Schultz et al. 2014), or hyperspectral imaging
(Bourennane et al. 2010; Velasco-Forero and Angulo 2013), among many others.
Today, tensors viewed as multi-index tables are used in many areas of application for
the representation, mining, analysis, and fusion of multi-dimensional and multi-modal
data (Acar and Yener 2009; Cichocki 2013; Lahat et al. 2015; Morup 2011).

A very large number of books address linear algebra and matrix calculus, for
example: Gantmacher (1959), Greub (1967), Bellman (1970), Strang (1980), Horn
and Johnson (1985, 1991), Lancaster and Tismenetsky (1985), Noble and Daniel
(1988), Barnett (1990), Rotella and Borne (1995), Golub and Van Loan (1996),
Lütkepohl (1996), Cullen (1997), Zhang (1999), Meyer (2000), Lascaux and Théodor
(2000), Serre (2002), Abadir and Magnus (2005), Bernstein (2005), Gourdon (2009),
Grifone (2011), and Aubry (2012).

For multilinear algebra and tensor calculus, there are much less reference
books, for example: Greub (1978), McCullagh (1987), Coppi and Bolasco (1989),
Smilde et al. (2004), Kroonenberg (2008), Cichocki et al. (2009), and Hackbusch
(2012). For an introduction to multilinear algebra and tensors, see Ph.D. theses by
de Lathauwer (1997) and Bro (1998). The following synthesis articles can also be
consulted: (Bro 1997; Cichocki et al. 2015; Comon 2014; Favier and de Almeida
2014a; Kolda and Bader 2009; Lu et al. 2011; Papalexakis et al. 2016; Sidiropoulos
et al. 2017).
2. Algebraic Structures

2.1. A few historical elements

We make here a brief historical note concerning algebraic structures. The notion
of structure plays a fundamental role in mathematics. In a treatise entitled Eléments
de mathématique, comprising 11 books, Nicolas Bourbaki1 distinguishes three main
types of structures: algebraic structures, ordered structures that equip sets with an
order relation, and topological structures equipping sets with a topology that allows
the definition of topological concepts such as open sets, neighborhood, convergence,
and continuity. Some structures are mixed, that is, they combine several of the three
basic structures. That is the case, for instance, of Banach and Hilbert spaces which
combine the vector space structure with the notions of norm and inner product, that is,
a topology.
Algebraic structures endow sets with laws of composition governing operations
between elements of a same set or between elements of two distinct sets. These
composition laws known as internal and external laws, respectively, exhibit certain
properties such as associativity, commutativity, and distributivity, with the existence
(or not) of a symmetric element for each element, and of a neutral element. Algebraic structures
make it possible to characterize, in particular, sets of numbers, polynomials, matrices,
and functions. The study of these structures (groups, rings, fields, vector spaces, etc.)
and their relationships is the primary purpose of general algebra, also called abstract
algebra. A reminder of the basic algebraic structures will be carried out in this chapter.

The vector spaces gave rise to linear algebra for the resolution of systems of
linear equations and the study of linear maps (also called linear mappings, or linear
transformations). Linear algebra is closely related to the theory of matrices and matrix
algebra, of which an introduction will be made in Chapter 4.

1 Nicolas Bourbaki is the pseudonym of a group of French mathematicians formed in 1935.


Multilinear algebra extends linear algebra to the study of multilinear maps, through
the notions of tensor space and tensor, which will be introduced in Chapter 6.
Although the resolution of (first- and second-degree) equations can be traced
to the Babylonians2 (about 2000 BC, according to Babylonian tables), then to the
Greeks (300 BC), to the Chinese (200 BC), and to the Indians (6th century), algebra
as a discipline emerged in the Arab-Muslim world, during the 8th century. It gained
momentum in the West, in the 16th century, with the resolution of algebraic (or
polynomial) equations: first with the works of the Italian mathematicians Tartaglia
(1500–1557) and Jérôme Cardan (1501–1576) for cubic equations, whose first
solution formula is attributed to Scipione del Ferro (1465–1526), and then with the
work of Lodovico Ferrari (1522–1565) for quartic equations. The work of François Viète (1540–1603)
then René Descartes (1596–1650) can also be mentioned, for the introduction of the
notation making use of letters to designate unknowns in equations, and the use of
superscripts to designate powers.
A fundamental structure, linked to the notion of symmetry, is that of the group,
which gave rise to the theory of groups, arising from the theory of algebraic equations
and the study of arithmetic properties of algebraic numbers, at the end of the 18th
century, and of geometry, at the beginning of the 19th century. We may cite, for
example, Joseph-Louis Lagrange (1736–1813), Niels Abel (1802–1829), and Evariste
Galois (1811–1832), for the study of algebraic equations, the works of Carl Friedrich
Gauss (1777–1855) on the arithmetic theory of quadratic forms, and those of Felix
Klein (1849–1925) and Hermann Weyl (1885–1955) in non-Euclidean geometry. We
can also mention the works of Marie Ennemond Camille Jordan (1838–1922) on the
general linear group, that is, the group of invertible square matrices, and on the Galois
theory. In 1870, he published a treatise on the theory of groups, including the reduced
form of a matrix, known as Jordan form, for which he received the Poncelet prize of
the Academy of Sciences.
Groups involve a single binary operation.

The algebraic structure of a ring was proposed by David Hilbert (1862–1943)
and Emmy Noether (1882–1935), while that of a field was introduced independently
by Leopold Kronecker (1823–1891) and Richard Dedekind (1831–1916). In 1893,
Heinrich Weber (1842–1913) presented the first axiomatization3 of commutative
fields, completed in 1910 by Ernst Steinitz (1871–1928). Field extensions led to

2 Arnaud Beauville, "Histoire des équations algébriques", https://math.unice.fr/beauvill/
pubs/Equations.pdf and http://www.maths-et-tiques.fr/index.php/Histoire-des-maths/Nombres/
Histoire-de-l-algebre.
3 Used for developing a scientific theory, the axiomatic method is based on a set of propositions
called axioms. Its founders are Greek mathematicians, which include Euclid and Archimedes
(c. 300 BC) as the most famous, for their work in geometry (Euclidean) and arithmetic. It was

the Galois theory, with the initial aim of solving algebraic equations. In 1843, the
first example of a non-commutative field, the quaternions, was introduced by
William Rowan Hamilton (1805–1865).

Rings and fields are algebraic structures involving two binary operations, generally
called addition and multiplication.

The structure underlying the study of linear systems, and more generally
linear algebra, is that of a vector space (v.s.), introduced by Hermann Grassmann
(1809–1877), then axiomatically formalized by Giuseppe Peano, with the introduction
of the notion of R-vector space, at the end of the 19th century. The German
mathematicians David Hilbert (1862–1943) and Otto Toeplitz (1881–1940), Hilbert's
student, together with the Polish mathematician Stefan Banach (1892–1945),
extended vector spaces to spaces of infinite dimension, called Hilbert spaces and
Banach spaces (or normed vector spaces (n.v.s.)).

The study of systems of linear equations and linear transformations, which is
closely linked to that of matrices, led to the concepts of linear independence, basis,
dimension, rank, determinant, and eigenvalues, which will be considered in this
chapter as well as in Chapters 4 and 6. In Chapter 3, we shall see that by equipping
v.s. with a norm and an inner product, n.v.s. can be obtained on the one hand, and
pre-Hilbertian spaces on the other. The concept of distance allows for the definition
of metric spaces. Norms and distances are used for studying the convergence of
sequences in the context of Banach and Hilbert spaces, of infinite dimension, which
will also be addressed in the next chapter. The extension of linear spaces to multilinear
spaces will be considered in this chapter and in Chapter 6 through the introduction of
the tensor product, with generalization of matrices to hypermatrices and tensors of
order higher than two.

2.2. Chapter summary

The objective of this chapter is to carry out an overview of the main algebraic
structures, while recalling definitions and results that will be useful for other chapters.
First, we recall some results related to sets and maps, and we then present the
definitions and properties of internal and external composition laws on a set. Various
algebraic structures are then detailed: groups, rings, fields, modules, v.s., and algebras.
The notions of substructures and quotient structures are also defined.

at the end of the 19th century that the axiomatic method experienced a growing interest with
the works of Richard Dedekind (1831–1916), Georg Cantor (1845–1918), and Giuseppe Peano
(1858–1932), for the construction of the sets of integers and real numbers, as well as those of
David Hilbert for his axiomatization of Euclidean geometry.
12 From Algebraic Structures to Tensors

The v.s. structure is considered in more detail. Different examples are given,
including v.s. of linear maps and multilinear maps. The concepts of vector subspace,
linear independence, basis, dimension, direct sum of subspaces, and quotient space
are recalled, before summarizing the different structures under consideration in
a table.

The notion of homomorphism, also called morphism, is then introduced, and
morphisms of groups, rings, vector spaces, and algebras are described. The case of
morphisms of v.s., that is, of linear maps, is addressed in more depth. The notions of
isomorphism, endomorphism, and dual basis are defined. The canonical factorization
of linear maps based on the notion of quotient v.s., and the rank theorem, which is a
fundamental result in linear algebra, are presented.

2.3. Sets

2.3.1. Definitions

A set A is a collection of elements {a1, a2, ...}. It is said that ai is an element of
the set A, or that ai belongs to A, and this is written ai ∈ A, or A ∋ ai.

A subset B of a set A is a set whose elements also belong to A. It is said that B is
included in A, and this is written B ⊆ A or A ⊇ B:

B ⊆ A ⇔ (∀x, x ∈ B ⇒ x ∈ A).

If B ⊆ A and B ≠ A, then B is said to be a proper subset of A, and we write B ⊂ A.

The empty set, denoted by ∅, is by definition the set that contains no elements.
We have ∅ ⊆ A, ∀A.
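
These membership and inclusion relations translate directly into operations on finite sets. The following Python sketch (an illustration added here, not part of the text, with sets chosen arbitrarily) checks membership, inclusion, proper inclusion, and the empty-set property:

```python
# Membership, inclusion, and the empty set on small finite sets.
A = {1, 2, 3, 4}
B = {2, 4}

print(2 in A)        # membership: 2 ∈ A
print(B <= A)        # inclusion: B ⊆ A
print(B < A)         # proper inclusion: B ⊂ A (B ⊆ A and B ≠ A)
print(set() <= A)    # ∅ ⊆ A holds for every set A
```

Python's `<=` and `<` on `set` objects implement exactly the ⊆ and ⊂ relations defined above.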

A finite set E is a set that has a finite number of elements. This number N is called
the cardinality of E, and it is often denoted by |E| or Card(E). A set of cardinality N
has 2^N distinct subsets.
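
The 2^N count can be verified by enumerating all subsets of a small set, for instance with itertools (a quick numerical check in Python, not taken from the text):

```python
from itertools import chain, combinations

def powerset(E):
    """Return all subsets of the finite set E, each as a tuple."""
    s = list(E)
    # Subsets of size 0, 1, ..., len(s), chained into one list.
    return list(chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

E = {1, 2, 3}
subsets = powerset(E)
print(len(subsets))  # 2**3 = 8 distinct subsets
```

The empty set is included (the size-0 subset), so `powerset(set())` has exactly one element, consistent with 2^0 = 1.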

An infinite set E is said to be countable when there exists a bijection between E
and the set of natural numbers (N) or integers (Z). This definition is due to Cantor4.
This means that the elements of a countable set can be indexed as xn, with n ∈ N
or n ∈ Z. This is the case of sampled signals, namely discrete-time signals, where n is
the sampling index (t = nTe, where Te is the sampling period), whereas sets of analog
signals (i.e., continuous-time signals) x(t) are not countable with respect to the time
variable t ∈ R.
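
As a small illustration of this indexing (the sampling period Te and the signal are arbitrary choices for this sketch, not values from the text), restricting a continuous-time signal x(t) to the sample instants t = nTe yields a family indexed by the integer n, hence countable:

```python
import math

Te = 0.001  # sampling period Te (arbitrary choice for illustration)

def x(t):
    """An analog signal x(t): a 50 Hz sine tone, chosen arbitrarily."""
    return math.sin(2 * math.pi * 50 * t)

# The sampled signal x[n] = x(n * Te) is indexed by n ∈ N (or Z),
# so its values form a countable family, unlike x(t) over t ∈ R.
samples = [x(n * Te) for n in range(8)]
print(samples[0])  # x(0) = 0.0
```

Only the indexed values x[n] are produced; between two sample instants the analog signal takes uncountably many values that the discrete-time signal never enumerates.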

4 Georg Cantor (1845–1918), German mathematician born in Russia, who founded the theory of sets.
He is known for the theorem that bears his name, concerning set cardinality, as well as for his
contributions to number theory.