From Algebraic Structures to Tensors
Matrices and Tensors in Signal Processing Set
coordinated by
Gérard Favier
Volume 1
Edited by
Gérard Favier
First published 2019 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as
permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced,
stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers,
or in the case of reprographic reproduction in accordance with the terms and licenses issued by the
CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the
undermentioned address:
www.iste.co.uk www.wiley.com
Preface  xi
2.5.6. Rings  27
2.5.7. Fields  32
2.5.8. Modules  33
2.5.9. Vector spaces  33
2.5.10. Vector spaces of linear maps  38
2.5.11. Vector spaces of multilinear maps  39
2.5.12. Vector subspaces  41
2.5.13. Bases  43
2.5.14. Sum and direct sum of subspaces  45
2.5.15. Quotient vector spaces  47
2.5.16. Algebras  47
2.6. Morphisms  49
2.6.1. Group morphisms  49
2.6.2. Ring morphisms  51
2.6.3. Morphisms of vector spaces or linear maps  51
2.6.4. Algebra morphisms  56
References  281
Index  291
Preface
This book is part of a collection of four books about matrices and tensors, with
applications to signal processing. Although the title of this collection suggests an
orientation toward signal processing, the results and methods presented should also
be of use to readers of other disciplines.
Writing books on matrices is a real challenge given that so many excellent books
on the topic have already been written1. How, then, to stand out from what already
exists, and which Ariadne’s thread to unwind? One way to stand apart was to treat
matrices and tensors in parallel. Viewed as extensions of matrices to orders higher than
two, tensors have many similarities with matrices, but also important differences in
terms of rank, uniqueness of decomposition, and potential for representing
multi-dimensional, multi-modal, and inaccurate data. As for the guiding thread, it
consists in presenting the structural foundations, then matrix and tensor decompositions
together with the related processing methods, and finally the applications, in a
presentation that is as self-contained as possible and with some originality in the topics
addressed and the way they are treated.
higher than two. A few examples of equations for representing signal processing
problems will be provided to illustrate the use of such decompositions. A chapter
will be devoted to structured matrices. Different properties will be highlighted, and
extensions to tensors of order higher than two will be presented. Two other chapters
will concern quaternions and quaternionic matrices, on the one hand, and polynomial
matrices, on the other hand.
In Volume 3, an overview of several tensor models will be given, taking into
account certain constraints (structural, linear dependency of factors, sparsity, and
non-negativity). Some of these models will be used in Volume 4 for the design of
digital communication systems. Tensor trains and tensor networks will also be
presented for the representation and analysis of massive data (big data). The
algorithmic aspect will be addressed through the presentation of different
processing methods.
Matrices and tensors, and more generally linear algebra and multilinear algebra,
are at the same time exciting, extensive, and fundamental topics, as important
for teaching and research as for applications. It is worth noting here that the
choices made for the content of the books of this collection have not been guided
by educational programs, which explains some gaps compared with standard algebra
treatises. The guiding thread has rather been to present the definitions, properties,
concepts, and results necessary for a good understanding of the processing methods and
applications considered in these books. Beyond the great diversity of topics,
another difficulty lay in the order in which they should be addressed: many topics
overlap, and certain notions and/or results are sometimes used before they have been
defined and/or proved, which requires referring the reader to later sections or chapters.
Four particularities should be highlighted. The first relates to the close relationship
between some of the topics addressed, certain of the methods presented, and recent
research results, particularly with regard to tensorial approaches for signal processing.
The second reflects the intention to situate the stated results in their historical context,
using biographical notes on some of the cited authors, as well as reference lists
comprehensive enough to let the reader go deeper into specific results and to extend the
biographical sources provided. This has motivated the introductory chapter entitled
“Historical elements of matrices and tensors.”
The last two characteristics concern the presentation and illustration of the properties
and methods under consideration. Some are stated without proof, because of their
simplicity or their availability in numerous books on the subject. Others are proved,
either for pedagogical reasons, because knowing the proof helps in understanding the
result, or because they are difficult to find in the literature, or because of the originality
of the proposed proofs, as is the case, for example, of those making use of the index
convention. The use of many tables should also be noted, with the purpose of recalling
key results while presenting them in a synthetic and comparative manner.
I want to thank my colleagues Sylvie Icart and Vicente Zarzoso for their review of
some chapters and Henrique de Morais Goulart, who co-authored Chapter 4.
Gérard FAVIER
August 2019
[email protected]
1
Historical Elements of Matrices and Tensors
Our modest goal here is to locate in time the contributions of a few mathematicians
and physicists2 who have laid the foundations for the theory of matrices and tensors,
and to whom we will refer later in our presentation. This choice is necessarily very
incomplete.
The first studies of determinants that preceded those of matrices were conducted
independently by the Japanese mathematician Seki Kowa (1642–1708) and the
German mathematician Gottfried Leibniz (1646–1716), and then by the Scottish
mathematician Colin Maclaurin (1698–1746) for solving 2 × 2 and 3 × 3 systems
of linear equations. These works were then generalized by the Swiss mathematician
Gabriel Cramer (1704–1752) for the resolution of n × n systems, leading, in
1750, to the famous formulae that bear his name, whose demonstration is due to
Augustin-Louis Cauchy (1789–1857).
In 1772, Alexandre-Théophile Vandermonde (1735–1796) defined the notion of
determinant, and Pierre-Simon Laplace (1749–1827) formulated the computation
of determinants by means of an expansion according to a row or a column, an
expansion which will be presented in section 4.11.1. In 1773, Joseph-Louis Lagrange
(1736–1813) discovered the link between the calculation of determinants and that of
volumes. In 1812, Cauchy used, for the first time, the determinant in the sense that it
has today, and he established the formula for the determinant of the product of two
rectangular matrices, a formula which was found independently by Jacques Binet
(1786–1856), and which is called nowadays the Binet–Cauchy formula.
The foundations of the theory of matrices were laid in the 19th century around the
following topics: determinants for solving systems of linear equations, representation
of linear transformations and quadratic forms (a topic which will be addressed in detail
in Chapter 4), matrix decompositions and reductions to canonical forms, that is to say,
diagonal or triangular forms such as the Jordan (1838–1922) normal form with Jordan
blocks on the diagonal, introduced by Weierstrass, the block-triangular form of Schur
(1875–1941), or the Frobenius normal form that is a block-diagonal matrix, whose
blocks are companion matrices.
A history of the theory of matrices in the 19th century was published by Thomas
Hawkins3 in 1974, highlighting, in particular, the contributions of the British
mathematician Arthur Cayley, seen by historians as one of the founders of the theory
of matrices. Cayley laid the foundations of the classical theory of determinants4 in
1843. He then developed matrix computation5 by defining certain matrix operations,
such as the product of two matrices, the transposition of a product of two matrices, and
the inversion of a 3 × 3 matrix using cofactors, and by establishing different properties
of matrices, including, notably, the famous Cayley–Hamilton theorem, which states that
every square matrix satisfies its characteristic equation. This result, highlighted for the
fourth order by William Rowan Hamilton (1805–1865), in 1853, for the calculation
of the inverse of a quaternion, was stated in the general case by Cayley in 1857, but
the proof for an arbitrary order is due to Frobenius, in 1878.
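The theorem is easy to check numerically; the short sketch below (our own addition, using NumPy, and not part of the original text) evaluates the characteristic polynomial of a random 3 × 3 matrix at the matrix itself and verifies that the result is, up to rounding, the zero matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

# Coefficients of the characteristic polynomial det(lambda*I - A)
coeffs = np.poly(A)                      # [1, c1, c2, c3]

# Evaluate p(A) = A^3 + c1*A^2 + c2*A + c3*I
p_A = sum(c * np.linalg.matrix_power(A, len(coeffs) - 1 - k)
          for k, c in enumerate(coeffs))
print(np.allclose(p_A, np.zeros_like(A)))   # True: A satisfies its characteristic equation
```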
An important part of the theory of matrices concerns the spectral theory, namely,
the notions of eigenvalue and characteristic polynomial. Directly related to the
integration of systems of linear differential equations, this theory has its origins in
physics, and more particularly in celestial mechanics for the study of the orbits of
planets, conducted in the 18th century by mathematicians, physicists, and astronomers
such as Lagrange and Laplace, then in the 19th century by Cauchy, Weierstrass,
Kronecker, and Jordan.
The names of certain matrices and associated determinants are those of the
mathematicians who have introduced them. This is the case, for example, for
Alexandre Théophile Vandermonde (1735–1796) who gave his name to a matrix
whose elements on each row (or each column) form a geometric progression and
whose determinant is a polynomial. It is also the case for Carl Jacobi (1804–1851)
and Ludwig Otto Hesse (1811–1874), for Jacobian and Hessian matrices, namely,
the matrices of first- and second-order partial derivatives of a vector function, whose
determinants are called Jacobian and Hessian, respectively. The same is true for the
Laplacian matrix or Laplace matrix, which is used to represent a graph. We can also
mention Charles Hermite (1822–1901) for Hermitian matrices, related to the so-called
Hermitian forms (see section 4.15). Specific matrices such as Fourier (1768–1830)
and Hadamard (1865–1963) matrices are directly related to the transforms of the
same name. Similarly, Householder (1904–1993) and Givens (1910–1993) matrices
are associated with transformations corresponding to reflections and rotations,
respectively. The so-called structured matrices, such as Hankel (1839–1873) and
Toeplitz (1881–1940) matrices, play a very important role in signal processing.
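As a small illustration of the Vandermonde matrix mentioned above (our own sketch, not taken from the book), the rows built from nodes x_1, ..., x_n are geometric progressions, and the determinant equals the product of the pairwise differences x_j − x_i for i < j.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
V = np.vander(x, increasing=True)   # row i is (1, x_i, x_i^2, x_i^3): a geometric progression

# The determinant is the product of the pairwise differences (x_j - x_i), i < j
prod = np.prod([x[j] - x[i] for i in range(len(x)) for j in range(i + 1, len(x))])
print(np.isclose(np.linalg.det(V), prod))   # True
```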
3 Thomas Hawkins, “The theory of matrices in the 19th century”, Proceedings of the
International Congress of Mathematicians, Vancouver, 1974.
4 Arthur Cayley, “On a theory of determinants”, Cambridge Philosophical Society 8, l–16,
1843.
5 Arthur Cayley, “A memoir on the theory of matrices”, Philosophical Transactions of the
Royal Society of London 148, 17–37, 1858.
Just as matrices and matrix computation play a fundamental role in linear algebra,
tensors and tensor computation are at the origin of multilinear algebra. It was in the
19th century that tensor analysis first appeared, along with the works of German
mathematicians Georg Friedrich Bernhard Riemann6 (1826–1866) and Elwin Bruno
Christoffel (1829–1900) in non-Euclidean geometry, introducing the index notation
and the notions of metric, manifold, geodesic, curved space, and curvature tensor, which gave
rise to what is today called Riemannian geometry and differential geometry.
Tensor calculus originates from the study of the invariance of quadratic forms
under the effect of a change of coordinates and, more generally, from the theory of
invariants initiated by Cayley8, with the introduction of the notion of hyperdeterminant
which generalizes matrix determinants to hypermatrices. Refer to the article by Crilly9
for an overview of Cayley’s contribution to invariant theory. This theory
was developed by Jordan and Kronecker and involved controversy10 between these
two authors, then continued by David Hilbert (1862–1943), Elie Joseph Cartan
(1869–1951), and Hermann Klaus Hugo Weyl (1885–1955), for algebraic forms
6 A detailed analysis of Riemann’s contributions to tensor analysis has been made by Ruth
Farwell and Christopher Knee, “The missing link: Riemann’s Commentatio, differential
geometry and tensor analysis”, Historia Mathematica 17, 223–255, 1990.
7 G. Ricci and T. Levi-Civita, “Méthodes de calcul différentiel absolu et leurs applications”,
Mathematische Annalen 54, 125–201, 1900.
8 A. Cayley, “On the theory of linear transformations”, Cambridge Journal of Mathematics 4,
193–209, 1845. A. Cayley, “On linear transformations”, Cambridge and Dublin Mathematical
Journal 1, 104–122, 1846.
9 T. Crilly, “The rise of Cayley’s invariant theory (1841–1862)”, Historia Mathematica 13,
241–254, 1986.
10 F. Brechenmacher, “La controverse de 1874 entre Camille Jordan et Leopold Kronecker:
Histoire du théorème de Jordan de la décomposition matricielle (1870–1930)”, Revue d’histoire
des Mathématiques, Society Math De France 2, no. 13, 187–257, 2008 (hal-00142790v2).
As we have just seen in this brief historical overview, tensor calculus was
used initially in geometry and to describe physical phenomena using tensor fields,
facilitating the application of differential operators (gradient, divergence, curl,
and Laplacian) to tensor fields14.
11 M. Olive, B. Kolev, and N. Auffray, “Espace de tenseurs et théorie classique des invariants”,
21ème Congrès Français de Mécanique, Bordeaux, France, 2013 (hal-00827406).
12 J. A. Dieudonné and J. B. Carrell, Invariant Theory, Old and New, Academic Press, 1971.
13 See page 9 in E. Sarrau, Notions sur la théorie des quaternions, Gauthier-Villars, Paris,
1889, http://rcin.org.pl/Content/13490.
14 The notion of tensor field is associated with physical quantities that may depend on both
spatial coordinates and time. These variable geometric quantities define differentiable functions
on a domain of the physical space. Tensor fields are used in differential geometry, in algebraic
geometry, general relativity, and in many other areas of mathematics and physics. The concept
of tensor field generalizes that of vector field.
15 E. Cartan, “Sur une généralisation de la notion de courbure de Riemann et les espaces à
torsion”, Comptes rendus de l’Académie des Sciences 174, 593–595, 1922. Elie Joseph Cartan
(1869–1951), French mathematician and student of Jules Henri Poincaré (1854–1912) and
Charles Hermite (1822–1901) at the Ecole Normale Supérieure. He brought major contributions
concerning the theory of Lie groups, differential geometry, Riemannian geometry, orthogonal
polynomials, and elliptic functions. He discovered spinors, in 1913, as part of his work on the
representations of groups. Like tensor calculus, spinor calculus plays a major role in quantum
physics. His name is associated with Albert Einstein (1879–1955) for the classical theory of
gravitation that relies on the model of general relativity.
In the early 2000s, tensors were used for modeling digital communication
systems (Sidiropoulos et al. 2000a), for array processing (Sidiropoulos et al. 2000b),
for multi-dimensional harmonics recovery (Haardt et al. 2008; Jiang et al. 2001;
Sidiropoulos 2001), and for image processing, more specifically for face recognition
16 Raymond Cattell (1905–1998), Anglo-American psychologist who used factorial analysis
for the study of personality with applications to psychotherapy.
17 Ledyard Tucker (1910–2004), American mathematician, expert in statistics and psychology,
and more particularly known for tensor decomposition which bears his name.
18 Richard Harshman (1943–2008), an expert in psychometrics and father of three-dimensional
PARAFAC analysis which is the most widely used tensor decomposition in applications.
Many applications of tensors also concern speech processing (Nion et al. 2010),
MIMO radar (Nion and Sidiropoulos 2010), and biomedical signal processing,
particularly for electroencephalography (EEG) (Cong et al. 2015; de Vos et al. 2007;
Hunyadi et al. 2016), and electrocardiography (ECG) signals (Padhy et al. 2018),
magnetic resonance imaging (MRI) (Schultz et al. 2014), or hyperspectral imaging
(Bourennane et al. 2010; Velasco-Forero and Angulo 2013), among many others.
Today, tensors viewed as multi-index tables are used in many areas of application for
the representation, mining, analysis, and fusion of multi-dimensional and multi-modal
data (Acar and Yener 2009; Cichocki 2013; Lahat et al. 2015; Morup 2011).
A very large number of books address linear algebra and matrix calculus, for
example: Gantmacher (1959), Greub (1967), Bellman (1970), Strang (1980), Horn
and Johnson (1985, 1991), Lancaster and Tismenetsky (1985), Noble and Daniel
(1988), Barnett (1990), Rotella and Borne (1995), Golub and Van Loan (1996),
Lütkepohl (1996), Cullen (1997), Zhang (1999), Meyer (2000), Lascaux and Théodor
(2000), Serre (2002), Abadir and Magnus (2005), Bernstein (2005), Gourdon (2009),
Grifone (2011), and Aubry (2012).
For multilinear algebra and tensor calculus, there are far fewer reference
books, for example: Greub (1978), McCullagh (1987), Coppi and Bolasco (1989),
Smilde et al. (2004), Kroonenberg (2008), Cichocki et al. (2009), and Hackbusch
(2012). For an introduction to multilinear algebra and tensors, see Ph.D. theses by
de Lathauwer (1997) and Bro (1998). The following survey articles can also be
consulted: (Bro 1997; Cichocki et al. 2015; Comon 2014; Favier and de Almeida
2014a; Kolda and Bader 2009; Lu et al. 2011; Papalexakis et al. 2016; Sidiropoulos
et al. 2017).
2
Algebraic Structures
We make here a brief historical note concerning algebraic structures. The notion
of structure plays a fundamental role in mathematics. In a treatise entitled Eléments
de mathématique, comprising 11 books, Nicolas Bourbaki1 distinguishes three main
types of structures: algebraic structures, ordered structures that equip sets with an
order relation, and topological structures equipping sets with a topology that allows
the definition of topological concepts such as open sets, neighborhood, convergence,
and continuity. Some structures are mixed, that is, they combine several of the three
basic structures. That is the case, for instance, of Banach and Hilbert spaces which
combine the vector space structure with the notions of norm and inner product, that is,
a topology.
Algebraic structures endow sets with laws of composition governing operations
between elements of a same set or between elements of two distinct sets. These
composition laws known as internal and external laws, respectively, exhibit certain
properties such as associativity, commutativity, and distributivity, with the existence
(or not) of a symmetric (inverse) element for each element, and of a neutral element. Algebraic structures
make it possible to characterize, in particular, sets of numbers, polynomials, matrices,
and functions. The study of these structures (groups, rings, fields, vector spaces, etc.)
and their relationships is the primary purpose of general algebra, also called abstract
algebra. A reminder of the basic algebraic structures will be carried out in this chapter.
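By way of illustration only (the helper is_group below is ours and not part of the text), the group axioms for a small finite set can be checked by brute force in Python: closure, associativity, a neutral element, and a symmetric (inverse) element for each element. Addition modulo n gives a standard example, while the nonzero residues modulo 6 fail under multiplication.

```python
def is_group(elements, op, identity):
    """Brute-force check of the group axioms on a finite set equipped with the law `op`."""
    closed = all(op(a, b) in elements for a in elements for b in elements)
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a in elements for b in elements for c in elements)
    neutral = all(op(a, identity) == a and op(identity, a) == a for a in elements)
    inverses = all(any(op(a, b) == identity for b in elements) for a in elements)
    return closed and assoc and neutral and inverses

n = 6
Zn = set(range(n))
print(is_group(Zn, lambda a, b: (a + b) % n, 0))          # True: (Z_n, +) is a group
print(is_group(Zn - {0}, lambda a, b: (a * b) % n, 1))    # False: {1,...,5} not closed mod 6
```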
Vector spaces gave rise to linear algebra for the resolution of systems of
linear equations and the study of linear maps (also called linear mappings, or linear
transformations). Linear algebra is closely related to the theory of matrices and matrix
algebra, which will be introduced in Chapter 4.
Multilinear algebra extends linear algebra to the study of multilinear maps, through
the notions of tensor space and tensor, which will be introduced in Chapter 6.
Although the resolution of (first- and second-degree) equations can be traced
to the Babylonians2 (about 2000 BC, according to Babylonian tables), then to the
Greeks (300 BC), to the Chinese (200 BC), and to the Indians (6th century), algebra
as a discipline emerged in the Arab-Muslim world, during the 8th century. It gained
momentum in the West, in the 16th century, with the resolution of algebraic (or
polynomial) equations, first with the works of the Italian mathematicians Tartaglia
(1500–1557) and Jérôme Cardan (1501–1576) for cubic equations, whose first
resolution formula is attributed to Scipione del Ferro (1465–1526), and of Lodovico
Ferrari (1522–1565) for quartic equations. The work of François Viète (1540–1603)
then René Descartes (1596–1650) can also be mentioned, for the introduction of the
notation making use of letters to designate unknowns in equations, and the use of
superscripts to designate powers.
A fundamental structure, linked to the notion of symmetry, is that of the group,
which gave rise to the theory of groups, arising from the theory of algebraic equations
and the study of arithmetic properties of algebraic numbers, at the end of the 18th
century, and of geometry, at the beginning of the 19th century. We may cite, for
example, Joseph-Louis Lagrange (1736–1813), Niels Abel (1802–1829), and Evariste
Galois (1811–1832), for the study of algebraic equations, the works of Carl Friedrich
Gauss (1777–1855) on the arithmetic theory of quadratic forms, and those of Felix
Klein (1849–1925) and Hermann Weyl (1885–1955) in non-Euclidean geometry. We
can also mention the works of Marie Ennemond Camille Jordan (1838–1922) on the
general linear group, that is, the group of invertible square matrices, and on the Galois
theory. In 1870, he published a treatise on the theory of groups, including the reduced
form of a matrix, known as Jordan form, for which he received the Poncelet prize of
the Academy of Sciences.
Groups involve a single binary operation.
the Galois theory, with the initial aim to solve algebraic equations. In 1843, a first
example of non-commutative field was introduced by William Rowan Hamilton
(1805–1865), with quaternions.
Rings and fields are algebraic structures involving two binary operations, generally
called addition and multiplication.
The structure underlying the study of linear systems, and more generally
linear algebra, is that of vector space (v.s.), introduced by Hermann Grassmann
(1809–1877), then axiomatically formalized by Giuseppe Peano, with the introduction
of the notion of R-vector space, at the end of the 19th century. The German
mathematicians David Hilbert (1862–1943) and Otto Toeplitz (1881–1940), Hilbert’s
student, together with the Polish mathematician Stefan Banach (1892–1945), extended
vector spaces to spaces of infinite dimension, called Hilbert spaces and Banach spaces
(or normed vector spaces (n.v.s.)).
The objective of this chapter is to provide an overview of the main algebraic
structures, while recalling definitions and results that will be useful for other chapters.
First, we recall some results related to sets and maps, and we then present the
definitions and properties of internal and external composition laws on a set. Various
algebraic structures are then detailed: groups, rings, fields, modules, v.s., and algebras.
The notions of substructures and quotient structures are also defined.
at the end of the 19th century that the axiomatic method experienced a growing interest with
the works of Richard Dedekind (1831–1916), Georg Cantor (1845–1918), and Giuseppe Peano
(1858–1932), for the construction of the sets of integers and real numbers, as well as those of
David Hilbert for his axiomatization of Euclidean geometry.
The v.s. structure is considered in more detail. Different examples are given,
including v.s. of linear maps and multilinear maps. The concepts of vector subspace,
linear independence, basis, dimension, direct sum of subspaces, and quotient space
are recalled, before summarizing the different structures under consideration in
a table.
2.3. Sets
2.3.1. Definitions
The empty set, denoted by ∅, is by definition the set that contains no elements.
We have ∅ ⊆ A, ∀A.
A finite set E is a set that has a finite number of elements. This number N is called
the cardinality of E, and it is often denoted by |E| or Card(E). There are 2^N distinct
subsets of E.
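To illustrate this counting result, here is a short Python sketch, not part of the original text, that enumerates all subsets of a small finite set and checks that there are indeed 2^N of them (the helper power_set is our own).

```python
from itertools import combinations

def power_set(E):
    """Return a list of all subsets of the finite set E."""
    elements = list(E)
    return [set(c) for k in range(len(elements) + 1)
            for c in combinations(elements, k)]

E = {1, 2, 3, 4}
subsets = power_set(E)
assert len(subsets) == 2 ** len(E)   # 2^N distinct subsets, here 16
print(len(subsets))
```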
4 Georg Cantor (1845–1918), German mathematician born in Saint Petersburg, who is at the
origin of set theory. He is known for the theorem that bears his name, relative to set cardinality,
as well as for his contributions to number theory.
In Table 2.1, we present a few sets of numbers5 that satisfy the following inclusion
relations: N ⊂ Z ⊂ P ⊂ R ⊂ C ⊂ Q ⊂ O. We denote by R∗ = R\{0} the set of
nonzero real numbers.

Sets   Definitions
N      Natural numbers including 0
Z      Integers
P      Rational numbers
R      Real numbers
R+     Positive real numbers
R−     Negative real numbers
C      Complex numbers
Q      Quaternions
O      Octonions

Table 2.1. Sets of numbers
5 For the set of rational numbers, the notation P is a substitute for the usual notation Q, which
will be used to designate the set of quaternions instead of H, often used to refer to Hamilton,
discoverer of quaternions.
6 Quaternions and octonions, which can be considered as generalizations of complex numbers,
themselves extending real numbers, are part of hypercomplex numbers.
7 The notion of Cartesian product is due to René Descartes (1596–1650), French philosopher,
mathematician, and physicist, and author of philosophical works including the treatise entitled
Discours de la méthode pour bien conduire sa raison et chercher la vérité dans les sciences
(Discourse on the Method for Rightly Conducting the Reason, and Seeking Truth in the
Sciences), which contains the famous quote “I think, therefore I am” (originally in Latin
“Cogito, ergo sum”). He introduced the Cartesian product to represent the Euclidean plane and
three-dimensional space, in the context of analytic geometry, also called Cartesian geometry,
using coordinate systems.
Operations        Definitions
Complementation   $A \subset \Omega \Rightarrow \overline{A} = \{x \in \Omega : x \notin A\}$
Exclusive or      $A \oplus B = (A - B) \cup (B - A)$
When $A_n = A$, $\forall n \in \langle N \rangle$, the Cartesian product will be written as $\times_{n=1}^{N} A_n = A^N$.
If the sets are vector spaces, we then have a Cartesian product of vector spaces
which is a fundamental notion underlying, in particular, the definition of multilinear
maps, and therefore, as it will be seen in section 6.6, that of tensor spaces.
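For concreteness, the exclusive or of Table 2.2 and the Cartesian product introduced above can be experimented with directly in Python; the snippet below is an added illustration and not part of the original text.

```python
from itertools import product

A = {1, 2, 3}
B = {3, 4}

# Exclusive or (symmetric difference): A ⊕ B = (A − B) ∪ (B − A)
sym_diff = (A - B) | (B - A)
assert sym_diff == A ^ B                 # Python's built-in symmetric difference
print(sym_diff)                          # {1, 2, 4}

# Cartesian product A × B, and A^N when all the factors are equal
print(len(set(product(A, B))))           # |A| * |B| = 6 pairs
print(len(set(product(A, repeat=3))))    # |A|**3 = 27 triplets of A^3
```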
De Morgan’s laws, also called rules9, are properties related to the complement of
a union or an intersection of subsets of the same set. Thereby, for two subsets A and
B, it follows that:
$\overline{A \cup B} = \overline{A} \cap \overline{B}$ ;  $\overline{A \cap B} = \overline{A} \cup \overline{B}$
and in general for N subsets:
$\overline{\bigcup_{n=1}^{N} A_n} = \bigcap_{n=1}^{N} \overline{A_n}$ ;  $\overline{\bigcap_{n=1}^{N} A_n} = \bigcup_{n=1}^{N} \overline{A_n}$.
The equalities above are logical equivalences, and the symbol of equality can be
replaced by the symbol of equivalence (⇔).
De Morgan’s laws express the fact that the complement of unions and intersections
of sets can be obtained by replacing all sets by their complements, unions by
intersections, and intersections by unions. Therefore, for example:
$\overline{A \cap (B \cup C)} \Leftrightarrow \overline{A} \cup (\overline{B} \cap \overline{C}),$
or equivalently:
$\overline{A\,(B + C)} \Leftrightarrow \overline{A} + \overline{B}\,\overline{C}.$
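These identities are easy to verify exhaustively on a small universe Ω; the sketch below is an added illustration, with helper names of our own choosing, checking both identities for every pair of subsets of a 5-element set.

```python
from itertools import combinations

Omega = set(range(5))

def complement(X):
    return Omega - X

def all_subsets(S):
    S = list(S)
    return [set(c) for k in range(len(S) + 1) for c in combinations(S, k)]

subsets = all_subsets(Omega)
ok = all(complement(A | B) == complement(A) & complement(B) and
         complement(A & B) == complement(A) | complement(B)
         for A in subsets for B in subsets)
print(ok)   # True: both De Morgan identities hold for all pairs of subsets
```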
9 Augustus De Morgan (1806–1871), British mathematician who, together with George Boole
(1815–1864), founded modern logic.
2.3.7. Partitions
The pair (Ω, A) is called a measurable space, and the subsets A_n are called
measurable sets. By equipping the measurable space (Ω, A) with a measure µ : A →
[0, +∞], the triplet (Ω, A, µ) is called a measure space.
In probability theory, Ω is the universal set, that is, the set of all possible
experimental outcomes of a random trial, also called elementary events. Defining an
event A_n as a set of elementary events, a collection (or field) A of events is called
a σ-field, and the pair (Ω, A) is a measurable space. When this space is endowed
with a probability measure P, the triplet (Ω, A, P) defines a probability space, where
P satisfies, for any element A_n of A: P(∅) = 0, P(Ω) = 1, and 0 ≤ P(A_n) ≤ 1. Here
P(∅) = 0 means that the empty set is an impossible event, whereas P(Ω) = 1 means
that Ω is a sure event, that is, an event which always occurs.
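As a toy illustration (ours, not the author's), consider the finite probability space of a fair die: Ω = {1, ..., 6}, A the collection of all subsets of Ω, and the uniform measure P(A_n) = |A_n|/|Ω|. The sketch below checks the axioms stated above.

```python
from fractions import Fraction
from itertools import combinations

Omega = frozenset(range(1, 7))            # elementary events: outcomes of a fair die

def P(event):
    """Uniform probability measure on the subsets of Omega."""
    return Fraction(len(event), len(Omega))

# The sigma-field A: here, the collection of all subsets of Omega
events = [frozenset(c) for k in range(len(Omega) + 1)
          for c in combinations(Omega, k)]

assert P(frozenset()) == 0                # the empty set is an impossible event
assert P(Omega) == 1                      # Omega is a sure event
assert all(0 <= P(A) <= 1 for A in events)
assert P(frozenset({1, 2})) + P(frozenset({5})) == P(frozenset({1, 2, 5}))  # additivity
print("axioms verified on", len(events), "events")
```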
The elements equivalent to an element a form a set called the equivalence class of a,
denoted by c_a ⊂ E. The set of all equivalence classes associated with the equivalence
relation ∼ forms a partition of E, denoted by E/∼ and called the quotient set or quotient
space of E.
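A standard example, added here for illustration, is congruence modulo n on the integers, whose quotient set has exactly n equivalence classes; the helper quotient_set below is our own.

```python
from collections import defaultdict

def quotient_set(E, canonical):
    """Partition E into equivalence classes, where x ~ y iff canonical(x) == canonical(y)."""
    classes = defaultdict(set)
    for x in E:
        classes[canonical(x)].add(x)
    return list(classes.values())

# Congruence modulo n: x ~ y iff x ≡ y (mod n)
n = 3
classes = quotient_set(range(-10, 11), lambda x: x % n)
print(len(classes))          # 3 equivalence classes
for c in classes:
    print(sorted(c))
```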