From Algebraic Structures to Tensors
Matrices and Tensors in Signal Processing Set
coordinated by
Gérard Favier
Volume 1
Edited by
Gérard Favier
First published 2019 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as
permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced,
stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers,
or in the case of reprographic reproduction in accordance with the terms and licenses issued by the
CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the
undermentioned address:
www.iste.co.uk www.wiley.com
Preface xi
2.5.6. Rings 27
2.5.7. Fields 32
2.5.8. Modules 33
2.5.9. Vector spaces 33
2.5.10. Vector spaces of linear maps 38
2.5.11. Vector spaces of multilinear maps 39
2.5.12. Vector subspaces 41
2.5.13. Bases 43
2.5.14. Sum and direct sum of subspaces 45
2.5.15. Quotient vector spaces 47
2.5.16. Algebras 47
2.6. Morphisms 49
2.6.1. Group morphisms 49
2.6.2. Ring morphisms 51
2.6.3. Morphisms of vector spaces or linear maps 51
2.6.4. Algebra morphisms 56
References 281
Index 291
Preface
This book is part of a collection of four books about matrices and tensors, with
applications to signal processing. Although the title of this collection suggests an
orientation toward signal processing, the results and methods presented should also
be of use to readers of other disciplines.
Writing books on matrices is a real challenge, given that so many excellent books on the topic have already been written1. How, then, can one stand out from existing works, and which Ariadne’s thread should be unwound? One way to stand apart was to treat matrices and tensors in parallel. Viewed as extensions of matrices to orders higher than two, tensors have many similarities with matrices, but also important differences in terms of rank, uniqueness of decomposition, and potential for representing multi-dimensional, multi-modal, and inaccurate data. As for the guiding thread, it consists in presenting structural foundations, then matrix and tensor decompositions together with related processing methods, and finally applications, through a presentation that is as self-contained as possible and with some originality in the topics addressed and the way they are treated.
higher than two. A few examples of equations for representing signal processing
problems will be provided to illustrate the use of such decompositions. A chapter
will be devoted to structured matrices. Different properties will be highlighted, and
extensions to tensors of order higher than two will be presented. Two other chapters
will concern quaternions and quaternionic matrices, on the one hand, and polynomial
matrices, on the other hand.
In Volume 3, an overview of several tensor models will be given, taking various constraints (structural, linear dependency of factors, sparsity, and non-negativity) into account. Some of these models will be used in Volume 4 for the design of digital communication systems. Tensor trains and tensor networks will also be presented for the representation and analysis of massive data (big data). Algorithmic aspects will be taken into consideration through the presentation of different processing methods.
Matrices and tensors, and more generally linear algebra and multilinear algebra, are at once exciting, extensive, and fundamental topics, as important for teaching and research as for applications. It is worth noting here that the choices made for the content of the books in this collection have not been guided by educational programs, which explains some gaps compared with standard algebra treatises. The guiding thread has rather been to present the definitions, properties, concepts, and results necessary for a good understanding of the processing methods and applications considered in these books. In addition to the great diversity of topics, another difficulty lay in the order in which they should be addressed, because many topics overlap and certain notions and/or results are sometimes used before they have been defined and/or proved, which requires referring the reader to later sections or chapters.
Four particularities should be highlighted. The first relates to the close relationship between some of the topics addressed, certain methods presented, and recent research results, particularly with regard to tensor approaches for signal processing. The second reflects the intention to situate the stated results in their historical context, using some biographical information on certain cited authors, as well as lists of references comprehensive enough to explore specific results in greater depth and to extend the biographical sources provided. This has motivated the introductory chapter entitled “Historical elements of matrices and tensors.”
The last two characteristics concern the presentation and illustration of the properties and methods under consideration. Some will be provided without proof because of their simplicity or their availability in numerous other books. Others will be proved, either for pedagogical reasons, since knowing the proofs should allow for a better understanding of the results, or because of the difficulty of finding them in the literature, or because of the originality of the proposed proofs, as will be the case, for example, for those making use of the index convention. Note also the use of many tables, whose purpose is to recall key results while presenting them in a synthetic and comparative manner.
I want to thank my colleagues Sylvie Icart and Vicente Zarzoso for their review of some chapters, as well as Henrique de Morais Goulart, who co-authored Chapter 4.
Gérard FAVIER
August 2019
[email protected]
1
Historical Elements of Matrices and Tensors
Our modest goal here is to locate in time the contributions of a few mathematicians
and physicists2 who have laid the foundations for the theory of matrices and tensors,
and to whom we will refer later in our presentation. This choice is necessarily very
incomplete.
The first studies of determinants that preceded those of matrices were conducted
independently by the Japanese mathematician Seki Kowa (1642–1708) and the
German mathematician Gottfried Leibniz (1646–1716), and then by the Scottish
mathematician Colin Maclaurin (1698–1746) for solving 2 × 2 and 3 × 3 systems
of linear equations. These works were then generalized by the Swiss mathematician
Gabriel Cramer (1704–1752) for the resolution of n × n systems, leading, in
1750, to the famous formulae that bear his name, whose demonstration is due to
Augustin-Louis Cauchy (1789–1857).
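In modern notation, these formulae express the solution of a system Ax = b, with A an invertible n × n matrix, as
\[
x_i = \frac{\det(A_i)}{\det(A)}, \qquad i = 1, \ldots, n,
\]
where A_i denotes the matrix obtained from A by replacing its i-th column with the vector b.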
In 1772, Alexandre-Théophile Vandermonde (1735–1796) defined the notion of
determinant, and Pierre-Simon Laplace (1749–1827) formulated the computation
of determinants by means of an expansion according to a row or a column, an
expansion which will be presented in section 4.11.1. In 1773, Joseph-Louis Lagrange
(1736–1813) discovered the link between the calculation of determinants and that of
volumes. In 1812, Cauchy used, for the first time, the determinant in the sense that it
has today, and he established the formula for the determinant of the product of two
rectangular matrices, a formula which was found independently by Jacques Binet
(1786–1856), and which is called nowadays the Binet–Cauchy formula.
The foundations of the theory of matrices were laid in the 19th century around the
following topics: determinants for solving systems of linear equations, representation
of linear transformations and quadratic forms (a topic which will be addressed in detail
in Chapter 4), matrix decompositions and reductions to canonical forms, that is to say,
diagonal or triangular forms such as the Jordan (1838–1922) normal form with Jordan
blocks on the diagonal, introduced by Weierstrass, the block-triangular form of Schur
(1875–1941), or the Frobenius normal form that is a block-diagonal matrix, whose
blocks are companion matrices.
A history of the theory of matrices in the 19th century was published by Thomas
Hawkins3 in 1974, highlighting, in particular, the contributions of the British
mathematician Arthur Cayley, seen by historians as one of the founders of the theory
of matrices. Cayley laid the foundations of the classical theory of determinants4 in
1843. He then developed matrix computation5 by defining certain matrix operations, such as the product of two matrices, the transposition of a product of two matrices, and the inversion of a 3 × 3 matrix using cofactors, and by establishing various properties of matrices, including the famous Cayley–Hamilton theorem, which states that every square matrix satisfies its characteristic equation. This result, highlighted for the fourth order by William Rowan Hamilton (1805–1865) in 1853 for the calculation of the inverse of a quaternion, was stated in the general case by Cayley in 1857, but the proof for an arbitrary order is due to Frobenius, in 1878.
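As a simple illustration in modern notation, consider a generic 2 × 2 matrix and its characteristic polynomial:
\[
A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \qquad p_A(\lambda) = \det(\lambda I_2 - A) = \lambda^2 - (a+d)\lambda + (ad - bc).
\]
The Cayley–Hamilton theorem then asserts that
\[
A^2 - (a+d)A + (ad - bc)\, I_2 = 0,
\]
which can be checked directly by expanding A^2.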
An important part of the theory of matrices concerns the spectral theory, namely,
the notions of eigenvalue and characteristic polynomial. Directly related to the
integration of systems of linear differential equations, this theory has its origins in
physics, and more particularly in celestial mechanics for the study of the orbits of
planets, conducted in the 18th century by mathematicians, physicists, and astronomers
such as Lagrange and Laplace, then in the 19th century by Cauchy, Weierstrass,
Kronecker, and Jordan.
The names of certain matrices and associated determinants are those of the
mathematicians who have introduced them. This is the case, for example, for
Alexandre Théophile Vandermonde (1735–1796) who gave his name to a matrix
whose elements on each row (or each column) form a geometric progression and
whose determinant is a polynomial. It is also the case for Carl Jacobi (1804–1851) and Ludwig Otto Hesse (1811–1874), for Jacobian and Hessian matrices, namely, the matrix of first-order partial derivatives of a vector function and the matrix of second-order partial derivatives of a scalar function, whose determinants are called the Jacobian and the Hessian, respectively. The same is true for the
Laplacian matrix or Laplace matrix, which is used to represent a graph. We can also
mention Charles Hermite (1822–1901) for Hermitian matrices, related to the so-called
Hermitian forms (see section 4.15). Specific matrices such as Fourier (1768–1830)
and Hadamard (1865–1963) matrices are directly related to the transforms of the
same name. Similarly, Householder (1904–1993) and Givens (1910–1993) matrices
are associated with transformations corresponding to reflections and rotations,
respectively. The so-called structured matrices, such as Hankel (1839–1873) and Toeplitz (1881–1940) matrices, play a very important role in signal processing.
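To fix ideas, a Vandermonde matrix built from scalars x_1, ..., x_n can be written (here with the geometric progressions along the rows; the transposed convention is also used) as
\[
V = \begin{pmatrix}
1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\
1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\
\vdots & \vdots & \vdots & & \vdots \\
1 & x_n & x_n^2 & \cdots & x_n^{n-1}
\end{pmatrix}, \qquad
\det(V) = \prod_{1 \leq i < j \leq n} (x_j - x_i),
\]
so that the determinant is indeed a polynomial in the x_i, vanishing whenever two of them coincide.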
3 Thomas Hawkins, “The theory of matrices in the 19th century”, Proceedings of the
International Congress of Mathematicians, Vancouver, 1974.
4 Arthur Cayley, “On a theory of determinants”, Cambridge Philosophical Society 8, 1–16, 1843.
5 Arthur Cayley, “A memoir on the theory of matrices”, Philosophical Transactions of the
Royal Society of London 148, 17–37, 1858.
Just as matrices and matrix computation play a fundamental role in linear algebra,
tensors and tensor computation are at the origin of multilinear algebra. It was in the
19th century that tensor analysis first appeared, along with the works of German
mathematicians Georg Friedrich Bernhard Riemann6 (1826–1866) and Elwin Bruno
Christoffel (1829–1900) in non-Euclidean geometry, introducing the index notation and the notions of metric, manifold, geodesic, curved space, and curvature tensor, which gave
rise to what is today called Riemannian geometry and differential geometry.
Tensor calculus originates from the study of the invariance of quadratic forms
under the effect of a change of coordinates and, more generally, from the theory of
invariants initiated by Cayley8, with the introduction of the notion of hyperdeterminant
which generalizes matrix determinants to hypermatrices. Refer to the article by Crilly9 for an overview of Cayley’s contribution to invariant theory. This theory was developed by Jordan and Kronecker and involved a controversy10 between these
two authors, then continued by David Hilbert (1862–1943), Elie Joseph Cartan
(1869–1951), and Hermann Klaus Hugo Weyl (1885–1955), for algebraic forms
6 A detailed analysis of Riemann’s contributions to tensor analysis has been made by Ruth
Farwell and Christopher Knee, “The missing link: Riemann’s Commentatio, differential
geometry and tensor analysis”, Historia Mathematica 17, 223–255, 1990.
7 G. Ricci and T. Levi-Civita, “Méthodes de calcul différentiel absolu et leurs applications”,
Mathematische Annalen 54, 125–201, 1900.
8 A. Cayley, “On the theory of linear transformations”, Cambridge Journal of Mathematics 4,
193–209, 1845. A. Cayley, “On linear transformations”, Cambridge and Dublin Mathematical
Journal 1, 104–122, 1846.
9 T. Crilly, “The rise of Cayley’s invariant theory (1841–1862)”, Historia Mathematica 13, 241–254, 1986.
10 F. Brechenmacher, “La controverse de 1874 entre Camille Jordan et Leopold Kronecker: Histoire du théorème de Jordan de la décomposition matricielle (1870–1930)”, Revue d’histoire des mathématiques, Société Mathématique de France 2, no. 13, 187–257, 2008 (hal-00142790v2).
As we have just seen in this brief historical overview, tensor calculus was
used initially in geometry and to describe physical phenomena using tensor fields,
facilitating the application of differential operators (gradient, divergence, curl, and Laplacian) to tensor fields14.
11 M. Olive, B. Kolev, and N. Auffray, “Espace de tenseurs et théorie classique des invariants”,
21ème Congrès Français de Mécanique, Bordeaux, France, 2013 (hal-00827406).
12 J. A. Dieudonné and J. B. Carrell, Invariant Theory, Old and New, Academic Press, 1971.
13 See page 9 in E. Sarrau, Notions sur la théorie des quaternions, Gauthier-Villars, Paris, 1889, http://rcin.org.pl/Content/13490.
14 The notion of tensor field is associated with physical quantities that may depend on both
spatial coordinates and time. These variable geometric quantities define differentiable functions
on a domain of the physical space. Tensor fields are used in differential geometry, in algebraic
geometry, general relativity, and in many other areas of mathematics and physics. The concept
of tensor field generalizes that of vector field.
15 E. Cartan, “Sur une généralisation de la notion de courbure de Riemann et les espaces à
torsion”, Comptes rendus de l’Académie des Sciences 174, 593–595, 1922. Elie Joseph Cartan
(1869–1951), French mathematician and student of Jules Henri Poincaré (1854–1912) and Charles Hermite (1822–1901) at the Ecole Normale Supérieure. He made major contributions to the theory of Lie groups, differential geometry, Riemannian geometry, orthogonal polynomials, and elliptic functions. He discovered spinors, in 1913, as part of his work on the representations of groups. Like tensor calculus, spinor calculus plays a major role in quantum physics. His name is associated with Albert Einstein (1879–1955) for the classical theory of gravitation that relies on the model of general relativity.
In the early 2000s, tensors were used for modeling digital communication systems (Sidiropoulos et al. 2000a), for array processing (Sidiropoulos et al. 2000b), for multi-dimensional harmonics recovery (Haardt et al. 2008; Jiang et al. 2001; Sidiropoulos 2001), and for image processing, more specifically for face recognition
16 Raymond Cattell (1905–1998), Anglo-American psychologist who used factor analysis for the study of personality, with applications to psychotherapy.
17 Ledyard Tucker (1910–2004), American mathematician, expert in statistics and psychology,
and more particularly known for tensor decomposition which bears his name.
18 Richard Harshman (1943–2008), an expert in psychometrics and father of three-dimensional
PARAFAC analysis which is the most widely used tensor decomposition in applications.
Many applications of tensors also concern speech processing (Nion et al. 2010), MIMO radar (Nion and Sidiropoulos 2010), and biomedical signal processing, particularly for electroencephalography (EEG) (Cong et al. 2015; de Vos et al. 2007; Hunyadi et al. 2016) and electrocardiography (ECG) signals (Padhy et al. 2018), as well as magnetic resonance imaging (MRI) (Schultz et al. 2014) and hyperspectral imaging (Bourennane et al. 2010; Velasco-Forero and Angulo 2013), among many others.
Today, tensors viewed as multi-index tables are used in many areas of application for
the representation, mining, analysis, and fusion of multi-dimensional and multi-modal
data (Acar and Yener 2009; Cichocki 2013; Lahat et al. 2015; Morup 2011).
A very large number of books address linear algebra and matrix calculus, for
example: Gantmacher (1959), Greub (1967), Bellman (1970), Strang (1980), Horn
and Johnson (1985, 1991), Lancaster and Tismenetsky (1985), Noble and Daniel
(1988), Barnett (1990), Rotella and Borne (1995), Golub and Van Loan (1996),
Lütkepohl (1996), Cullen (1997), Zhang (1999), Meyer (2000), Lascaux and Théodor
(2000), Serre (2002), Abadir and Magnus (2005), Bernstein (2005), Gourdon (2009),
Grifone (2011), and Aubry (2012).
For multilinear algebra and tensor calculus, there are far fewer reference books, for example: Greub (1978), McCullagh (1987), Coppi and Bolasco (1989),
Smilde et al. (2004), Kroonenberg (2008), Cichocki et al. (2009), and Hackbusch
(2012). For an introduction to multilinear algebra and tensors, see Ph.D. theses by
de Lathauwer (1997) and Bro (1998). The following survey articles can also be consulted: (Bro 1997; Cichocki et al. 2015; Comon 2014; Favier and de Almeida
2014a; Kolda and Bader 2009; Lu et al. 2011; Papalexakis et al. 2016; Sidiropoulos
et al. 2017).
2
Algebraic Structures
We begin with a brief historical note concerning algebraic structures. The notion
of structure plays a fundamental role in mathematics. In a treatise entitled Eléments
de mathématique, comprising 11 books, Nicolas Bourbaki1 distinguishes three main
types of structures: algebraic structures, ordered structures that equip sets with an
order relation, and topological structures equipping sets with a topology that allows
the definition of topological concepts such as open sets, neighborhood, convergence,
and continuity. Some structures are mixed, that is, they combine several of the three basic structures. This is the case, for instance, with Banach and Hilbert spaces, which combine the vector space structure with the notions of norm and inner product, that is, with a topology.
Algebraic structures endow sets with composition laws governing operations between elements of the same set or between elements of two distinct sets. These composition laws, known as internal and external laws, respectively, exhibit certain properties such as associativity, commutativity, and distributivity, together with the existence (or not) of a neutral element and of a symmetric (inverse) element for each element. Algebraic structures make it possible to characterize, in particular, sets of numbers, polynomials, matrices, and functions. The study of these structures (groups, rings, fields, vector spaces, etc.) and their relationships is the primary purpose of general algebra, also called abstract algebra. A review of the basic algebraic structures will be given in this chapter.
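As simple illustrations of these notions, (Z, +) is a group for the internal law of addition, with neutral element 0 and with −n as the symmetric of n; (Z, +, ×) is a commutative ring; (Q, +, ×) and (R, +, ×) are fields; and R^n, equipped with vector addition (internal law) and multiplication by real scalars (external law), is a vector space over R.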
The vector spaces gave rise to linear algebra for the resolution of systems of
linear equations and the study of linear maps (also called linear mappings, or linear
transformations). Linear algebra is closely related to the theory of matrices and matrix algebra, an introduction to which will be given in Chapter 4.
Multilinear algebra extends linear algebra to the study of multilinear maps, through
the notions of tensor space and tensor, which will be introduced in Chapter 6.
Although the resolution of (first- and second-degree) equations can be traced
to the Babylonians2 (about 2000 BC, according to Babylonian tables), then to the
Greeks (300 BC), to the Chinese (200 BC), and to the Indians (6th century), algebra
as a discipline emerged in the Arab-Muslim world, during the 8th century. It gained
momentum in the West, in the 16th century, with the resolution of algebraic (or
polynomial) equations, first with the works of the Italian mathematicians Tartaglia (1500–1557) and Jérôme Cardan (1501–1576) for cubic equations, whose first resolution formula is attributed to Scipione del Ferro (1465–1526), and of Lodovico Ferrari (1522–1565) for quartic equations. The work of François Viète (1540–1603)
then René Descartes (1596–1650) can also be mentioned, for the introduction of the
notation making use of letters to designate unknowns in equations, and the use of
superscripts to designate powers.
A fundamental structure, linked to the notion of symmetry, is that of the group,
which gave rise to the theory of groups, issued from the theory of algebraic equations
and the study of arithmetic properties of algebraic numbers, at the end of the 18th
century, and of geometry, at the beginning of the 19th century. We may cite, for
example, Joseph-Louis Lagrange (1736–1813), Niels Abel (1802–1829), and Evariste
Galois (1811–1832), for the study of algebraic equations, the works of Carl Friedrich
Gauss (1777–1855) on the arithmetic theory of quadratic forms, and those of Felix
Klein (1849–1925) and Hermann Weyl (1885–1955) in non-Euclidean geometry. We
can also mention the works of Marie Ennemond Camille Jordan (1838–1922) on the
general linear group, that is, the group of invertible square matrices, and on the Galois
theory. In 1870, he published a treatise on the theory of groups, including the reduced
form of a matrix, known as Jordan form, for which he received the Poncelet prize of
the Academy of Sciences.
Groups involve a single binary operation.
the Galois theory, with the initial aim of solving algebraic equations. In 1843, a first example of a non-commutative field was introduced by William Rowan Hamilton (1805–1865), with quaternions.
Rings and fields are algebraic structures involving two binary operations, generally
called addition and multiplication.
The structure underlying the study of linear systems, and more generally linear algebra, is that of a vector space (v.s.), introduced by Hermann Grassmann (1809–1877) and then axiomatically formalized by Giuseppe Peano, with the introduction of the notion of R-vector space, at the end of the 19th century. The German mathematicians David Hilbert (1862–1943) and Otto Toeplitz (1881–1940), Hilbert’s student, together with the Polish mathematician Stefan Banach (1892–1945), extended vector spaces to spaces of infinite dimension, called Hilbert spaces and Banach spaces (or normed vector spaces (n.v.s.)).
The objective of this chapter is to carry out an overview of the main algebraic
structures, while recalling definitions and results that will be useful for other chapters.
First, we recall some results related to sets and maps, and we then present the
definitions and properties of internal and external composition laws on a set. Various
algebraic structures are then detailed: groups, rings, fields, modules, v.s., and algebras.
The notions of substructures and quotient structures are also defined.
at the end of the 19th century that the axiomatic method experienced a growing interest with
the works of Richard Dedekind (1831–1916), Georg Cantor (1845–1918), and Giuseppe Peano
(1858–1932), for the construction of the sets of integers and real numbers, as well as those of
David Hilbert for his axiomatization of Euclidean geometry.
The v.s. structure is considered in more detail. Different examples are given,
including v.s. of linear maps and multilinear maps. The concepts of vector subspace,
linear independence, basis, dimension, direct sum of subspaces, and quotient space
are recalled, before summarizing the different structures under consideration in
a table.
2.3. Sets
2.3.1. Definitions
The empty set, denoted by ∅, is by definition the set that contains no elements.
We have ∅ ⊆ A, ∀A.
A finite set E is a set that has a finite number of elements. This number N is called
the cardinality of E, and it is often denoted by |E| or Card(E). There are 2^N distinct subsets.
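For example, if E = {a, b, c}, then Card(E) = 3 and the 2^3 = 8 distinct subsets of E are ∅, {a}, {b}, {c}, {a, b}, {a, c}, {b, c}, and {a, b, c}.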
4 Georg Cantor (1845–1918), German mathematician, born in Saint Petersburg, who is at the origin of the theory of sets. He is known for the theorem that bears his name, relative to set cardinality, as well as for his contributions to the theory of numbers.