Linear Algebra

Linear algebra studies vectors and vector spaces using linear maps and matrices. A vector space is a set of elements that can be added together and multiplied by scalars according to certain rules. Linear algebra has applications in engineering, physics, and other sciences. It originated in the 19th century and took its modern form in the early 20th century with the generalization of matrices and tensors. The main structures in linear algebra are vector spaces and linear maps between them. Key concepts include linear combinations, spans, bases, dimensions, and representing linear transformations with matrices by choosing bases.


Linear algebra is a branch of mathematics that studies vectors. It mainly works with families of vectors called vector spaces or linear spaces, together with functions that input one vector and output another according to certain rules. These functions are called linear maps (or linear transformations or linear operators) and are often represented by matrices. Linear algebra is central to modern mathematics and its applications. An elementary application of linear algebra is the solution of a system of linear equations in several unknowns. More advanced applications are ubiquitous, in areas as diverse as abstract algebra and functional analysis. Linear algebra has a concrete representation in analytic geometry and is generalized in operator theory and in module theory. It has extensive applications in engineering, physics, the natural sciences, and the social sciences. Nonlinear mathematical models can often be approximated by linear ones.
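To illustrate that elementary application, here is a minimal sketch using NumPy; the coefficient matrix and right-hand side are made-up values chosen only for demonstration.

import numpy as np

# System of linear equations in two unknowns x and y:
#   2x + 3y = 8
#    x -  y = -1
A = np.array([[2.0, 3.0],
              [1.0, -1.0]])  # coefficient matrix
b = np.array([8.0, -1.0])    # right-hand side

# Solve A @ x = b (A is square and invertible here)
x = np.linalg.solve(A, b)
print(x)  # [1. 2.]  -> x = 1, y = 2

# Check: substituting the solution back reproduces b
assert np.allclose(A @ x, b)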

History
The subject first took its modern form in the first half of the twentieth century, when many ideas and methods of previous centuries were generalized as abstract algebra. Matrices and tensors were introduced in the latter part of the 19th century. The use of these objects in quantum mechanics, special relativity, and statistics did much to spread the subject of linear algebra beyond pure mathematics. The origin of many of these ideas is discussed in the articles on determinants and Gaussian elimination. However, to claim that the concepts of linear algebra were known to mathematicians before the end of the nineteenth century would be an instance of the historical error of anachronism.
Main structures
The main structures of linear algebra are vector spaces and linear maps between them. A vector space is a set whose elements can be added together and multiplied by scalars, or numbers. In many physical applications, the scalars are real numbers, R. More generally, the scalars may form any field F; thus one can consider vector spaces over the field Q of rational numbers, the field C of complex numbers, or a finite field F_q. These two operations must behave similarly to the usual addition and multiplication of numbers: addition is commutative and associative, multiplication distributes over addition, and so on. More precisely, the two operations must satisfy a list of axioms chosen to emulate the properties of addition and scalar multiplication of Euclidean vectors in the coordinate n-space R^n. One of the axioms stipulates the existence of a zero vector, which behaves analogously to the number zero with respect to addition. Elements of a general vector space V may be objects of any nature, for example, functions or polynomials, but when viewed as elements of V they are frequently called vectors.
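To make the axioms concrete, the following sketch (assuming NumPy, with made-up example vectors in R^3) checks a few of the vector-space properties numerically: commutativity of addition, distributivity of scalar multiplication over addition, and the behavior of the zero vector.

import numpy as np

# Two vectors in R^3 and a scalar from the field R
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -1.0, 0.5])
r = 2.5
zero = np.zeros(3)  # the zero vector

# Addition is commutative: u + v = v + u
assert np.allclose(u + v, v + u)

# Scalar multiplication distributes over addition: r(u + v) = ru + rv
assert np.allclose(r * (u + v), r * u + r * v)

# The zero vector behaves like the number zero: v + 0 = v
assert np.allclose(v + zero, v)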
Given two vector spaces V and W over a field F, a linear transformation is a map

    T : V → W

that is compatible with addition and scalar multiplication:

    T(u + v) = T(u) + T(v),   T(rv) = rT(v)

for any vectors u, v ∈ V and any scalar r ∈ F.
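As a numerical sketch (assuming NumPy; the matrix and test vectors are made-up examples), one can spot-check both conditions for the map T(v) = Av defined by a matrix A, the prototypical linear transformation.

import numpy as np

# A linear map T: R^2 -> R^3 represented by a 3 x 2 matrix
A = np.array([[1.0, 2.0],
              [0.0, -1.0],
              [3.0, 0.5]])

def T(v):
    # Matrix-vector product defines a linear map
    return A @ v

u = np.array([1.0, -2.0])
v = np.array([0.5, 4.0])
r = 3.0

# Compatibility with addition: T(u + v) = T(u) + T(v)
assert np.allclose(T(u + v), T(u) + T(v))

# Compatibility with scalar multiplication: T(rv) = r T(v)
assert np.allclose(T(r * v), r * T(v))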


A fundamental role in linear algebra is played by the notions of linear combination, span, and linear independence of vectors, and by the basis and the dimension of a vector space. Given a vector space V over a field F, an expression of the form

    r1v1 + r2v2 + ⋯ + rkvk,

where v1, v2, …, vk are vectors and r1, r2, …, rk are scalars, is called a linear combination of the vectors v1, v2, …, vk with coefficients r1, r2, …, rk. The set of all linear combinations of vectors v1, v2, …, vk is called their span. A linear combination of any system of vectors with all zero coefficients is the zero vector of V. If this is the only way to express the zero vector as a linear combination of v1, v2, …, vk, then these vectors are linearly independent. A linearly independent set of vectors that spans a vector space V is a basis of V. If a vector space admits a finite basis, then any two bases have the same number of elements (called the dimension of V) and V is a finite-dimensional vector space. This theory can be extended to infinite-dimensional spaces.
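A minimal sketch of the independence test (assuming NumPy, with made-up example vectors): a finite set of vectors is linearly independent exactly when the matrix having them as columns has rank equal to the number of vectors, which np.linalg.matrix_rank can check numerically.

import numpy as np

def linearly_independent(*vectors):
    # Stack the vectors as columns of a matrix; they are independent
    # iff the rank equals the number of vectors.
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])

print(linearly_independent(e1, e2))           # True: part of the standard basis
print(linearly_independent(e1, e2, e1 + e2))  # False: the third lies in the span of the first two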
There is an important distinction between the coordinate n-space R^n and a general finite-dimensional vector space V. While R^n has a standard basis {e1, e2, …, en}, a vector space V typically does not come equipped with a basis, and many different bases exist (although they all consist of the same number of elements, equal to the dimension of V). Having a particular basis {v1, v2, …, vn} of V allows one to construct a coordinate system in V: the vector with coordinates (r1, r2, …, rn) is the linear combination

    v = r1v1 + r2v2 + ⋯ + rnvn.

The condition that v1, v2, …, vn span V guarantees that each vector v can be assigned coordinates, whereas the linear independence of v1, v2, …, vn further assures that these coordinates are determined in a unique way (i.e. there is only one linear combination of the basis vectors that is equal to v). In this way, once a basis of a vector space V over F has been chosen, V may be identified with the coordinate n-space F^n. Under this identification, addition and scalar multiplication of vectors in V correspond to addition and scalar multiplication of their coordinate vectors in F^n. Furthermore, if V and W are an n-dimensional and an m-dimensional vector space over F, and a basis of V and a basis of W have been fixed, then any linear transformation T : V → W may be encoded by an m × n matrix A with entries in the field F, called the matrix of T with respect to these bases. Therefore, by and large, the study of linear transformations, which were defined axiomatically, may be replaced by the study of matrices, which are concrete objects. This is a major technique in linear algebra.
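As a sketch of this identification (assuming NumPy; the basis and the map below are made-up examples): the coordinates of a vector v in a basis {v1, …, vn} are found by solving a linear system, and the matrix of a map T is built by applying T to each basis vector and expressing the result in the chosen basis.

import numpy as np

# A made-up basis of R^2, stored as the columns of B: v1 = (1, 0), v2 = (1, 1)
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

v = np.array([3.0, 2.0])

# Coordinates r of v in the basis: solve B @ r = v
r = np.linalg.solve(B, v)
print(r)  # [1. 2.]  because v = 1*v1 + 2*v2

# A made-up linear map T: R^2 -> R^2
def T(x):
    return np.array([2.0 * x[0] + x[1], x[0] - x[1]])

# Matrix of T with respect to the basis B (used in both domain and codomain):
# column j holds the B-coordinates of T(v_j).
columns = [np.linalg.solve(B, T(B[:, j])) for j in range(B.shape[1])]
A = np.column_stack(columns)

# Check: for any x, the B-coordinates of T(x) equal A times the B-coordinates of x
x = np.array([0.5, -4.0])
assert np.allclose(np.linalg.solve(B, T(x)), A @ np.linalg.solve(B, x))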
