In this paper we summarize the book “Elementary Linear Algebra” by Howard Anton and Chris Rorres. The book describes the basic methods of elementary linear algebra and its objects of a linear nature: vector (or linear) spaces, linear transformations, systems of linear equations, and quadratic and bilinear forms. The main linear algebra tools considered in the book are systems of linear algebraic equations, determinants, matrices, and conjugation.
The book consists of ten chapters; each chapter covers one of the major areas of linear algebra.
The first chapter is “Systems of Linear Equations and Matrices”.
A system of m linear algebraic equations with n unknowns (a linear system; the abbreviation SLAE is also used) is, in linear algebra, a system of equations of the following form:

    a11*x1 + a12*x2 + ... + a1n*xn = b1
    a21*x1 + a22*x2 + ... + a2n*xn = b2
    ...
    am1*x1 + am2*x2 + ... + amn*xn = bm

Here m is the number of equations and n is the number of unknowns; x1, x2, ..., xn are the unknowns to be determined, while the coefficients of the system a11, a12, ..., amn and the constant terms b1, b2, ..., bm are assumed to be known. The indices of a coefficient aij indicate, respectively, the number of the equation (i) and of the unknown (j) at which that coefficient stands.
The book covers the basic methods for finding solutions of systems of linear algebraic equations, including Gaussian elimination, Gauss-Jordan elimination, Cramer’s rule, the matrix (inverse) method, and others.
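To illustrate how such a system can be solved in practice, here is a minimal Python sketch (not from the book; the matrix and right-hand side are made up for illustration). NumPy’s numpy.linalg.solve relies on an LU factorization, i.e. a variant of Gaussian elimination with partial pivoting.

    import numpy as np

    # Coefficient matrix and right-hand side of a 3x3 system A x = b
    A = np.array([[ 2.0,  1.0, -1.0],
                  [-3.0, -1.0,  2.0],
                  [-2.0,  1.0,  2.0]])
    b = np.array([8.0, -11.0, -3.0])

    # A unique solution exists because det(A) != 0
    x = np.linalg.solve(A, b)   # Gaussian-elimination-based (LU) solver
    print(x)                    # [ 2.  3. -1.]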
The second chapter is “Determinants”.
The determinant is one of the basic concepts of linear algebra. It is a polynomial that combines the elements of a square matrix in such a way that its value is preserved under transposition and under linear combinations of rows or columns. In other words, the determinant characterizes the content of the matrix; in particular, if the matrix has linearly dependent rows or columns, its determinant is zero.
The determinant plays a key role in the solution of linear systems in general, and many basic concepts are introduced on its basis. In general, a matrix can be defined over any commutative ring, in which case the determinant is an element of the same ring.
There are several basic methods for finding the determinant of a matrix; among them the book discusses cofactor expansion, row reduction, and Cramer’s rule.
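As a rough illustration of the first two of these methods, the following Python sketch (not from the book) computes a 3x3 determinant by a naive cofactor expansion along the first row and compares it with numpy.linalg.det, which works by row reduction (LU factorization).

    import numpy as np

    def det_cofactor(M):
        # Naive Laplace expansion along the first row; cost grows factorially,
        # so this is for illustration only (row reduction is O(n^3)).
        n = len(M)
        if n == 1:
            return M[0][0]
        total = 0.0
        for j in range(n):
            minor = [row[:j] + row[j+1:] for row in M[1:]]   # delete row 0, column j
            total += (-1) ** j * M[0][j] * det_cofactor(minor)
        return total

    M = [[1.0, 0.0, 2.0],
         [-3.0, 4.0, 6.0],
         [-1.0, -2.0, 3.0]]
    print(det_cofactor(M))              # 44.0
    print(np.linalg.det(np.array(M)))   # ~44.0, computed via row reduction (LU)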
The third chapter is “Euclidean Vector Spaces”.
Euclidean space, in the original sense, is a space whose properties are described by the axioms of Euclidean geometry. In a more general sense, the term may denote similar and closely related objects, defined below. Usually the n-dimensional Euclidean space is denoted En, although the notation Rn is also acceptable and frequently used.
In this book a finite-dimensional Hilbert space is considered, that is, a finite-dimensional real vector space Rn with a (positive definite) scalar product introduced on it; this scalar product induces a norm:

    ||x|| = sqrt(<x, x>) = sqrt(x1^2 + x2^2 + ... + xn^2)
In this chapter several basic notions of Euclidean vector spaces are discussed, such as the norm, the dot product, distance in Rn, orthogonality, the cross product, etc.
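A short Python sketch of these operations in R3 with NumPy (the vectors are illustrative, not taken from the book):

    import numpy as np

    u = np.array([1.0, 2.0, 2.0])
    v = np.array([2.0, -1.0, 0.0])

    norm_u = np.linalg.norm(u)          # ||u|| = sqrt(1 + 4 + 4) = 3
    dot_uv = np.dot(u, v)               # <u, v> = 1*2 + 2*(-1) + 2*0 = 0
    dist = np.linalg.norm(u - v)        # Euclidean distance between u and v
    orthogonal = np.isclose(dot_uv, 0)  # u and v are orthogonal: dot product is 0
    w = np.cross(u, v)                  # cross product, orthogonal to both u and v

    print(norm_u, dot_uv, dist, orthogonal, w)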
The fourth chapter is about General Vector Spaces.
In contrast to Euclidean vector spaces, a vector space in general is a mathematical structure formed by a set of elements, called vectors, for which the operations of addition and of multiplication by a scalar are defined. These operations are subject to eight axioms. The scalars may be elements of the real numbers, the complex numbers, or any other field. A particular case of such vectors are the usual vectors of Euclidean space, which are used, for example, to represent physical forces. It should be noted that, as an element of a vector space, a vector need not be presented as a directed line segment. Generalizing the concept of a “vector” to an element of a vector space of any nature not only avoids confusion of terms but also makes it possible to understand, or even to anticipate, a number of results that are valid for spaces of arbitrary nature.
The book discusses the specifics of real vector spaces, subspaces, linear independence of vectors, coordinates and bases, dimension, the row space, column space and null space, rank, nullity, the fundamental matrix spaces, and so on.
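Rank and nullity, for instance, are easy to check numerically; a minimal Python sketch (the matrix is illustrative) using the rank-nullity theorem:

    import numpy as np

    # By the rank-nullity theorem, rank(A) + nullity(A) equals the number of columns of A.
    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0],     # twice the first row -> linearly dependent rows
                  [1.0, 0.0, 1.0]])

    rank = np.linalg.matrix_rank(A)    # 2
    nullity = A.shape[1] - rank        # 3 - 2 = 1
    print(rank, nullity)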
The fifth chapter is about eigenvalues and eigenvectors.
An eigenvector is a concept in linear algebra defined for an arbitrary square matrix or linear transformation: it is a nonzero vector which, when multiplied by the matrix or acted on by the transformation, gives a collinear vector, i.e. the same vector multiplied by a scalar value called an eigenvalue of the matrix or linear transformation.
The concepts of eigenvector and eigenvalue are key in linear algebra, and many constructions are built on them. The set of all eigenvectors of a linear transformation corresponding to a given eigenvalue (together with the zero vector) is called an eigenspace, and the set of all eigenvalues of a matrix or linear transformation is called the spectrum of the matrix or transformation.
Let L be a linear space over a field K and A : L -> L a linear transformation.
An eigenvector of the linear transformation A is a nonzero vector x in L such that Ax = lambda*x for some lambda in K.
An eigenvalue of the linear transformation A is a value of lambda for which there exists an eigenvector, i.e. for which the equation Ax = lambda*x has a nonzero solution x in L.
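A small Python sketch (the matrix is illustrative) showing how eigenvalues and eigenvectors can be computed with NumPy and how the defining relation Ax = lambda*x can be verified:

    import numpy as np

    A = np.array([[2.0, 0.0],
                  [1.0, 3.0]])

    eigenvalues, eigenvectors = np.linalg.eig(A)   # columns of `eigenvectors` are eigenvectors
    for k in range(len(eigenvalues)):
        lam = eigenvalues[k]
        x = eigenvectors[:, k]
        print(np.allclose(A @ x, lam * x))         # True: A x = lambda x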
The sixth chapter is “Inner Product Spaces”.
A complex linear space U is called unitary (or Hermitian) if to every pair of elements u, v of this space a complex number <u, v>, called the scalar product, is assigned, and this correspondence satisfies the following conditions:

    <u, v> is the complex conjugate of <v, u>;
    <a*u + b*w, v> = a*<u, v> + b*<w, v> for all scalars a, b;
    <u, u> >= 0, and <u, u> = 0 only for u = 0.
This chapter considers basic constructions in inner product spaces, such as the Gram-Schmidt process, QR-decomposition, best approximation, least squares, etc.
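A hedged Python sketch of two of these constructions (the data are illustrative): numpy.linalg.qr returns an orthonormal basis of the column space, as the Gram-Schmidt process would produce up to signs, and numpy.linalg.lstsq computes the least-squares best approximation.

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [1.0, 2.0],
                  [1.0, 3.0]])
    b = np.array([1.0, 2.0, 2.0])

    Q, R = np.linalg.qr(A)                         # A = Q R, Q^T Q = I, R upper triangular
    x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
    print(np.allclose(Q.T @ Q, np.eye(2)))         # True: columns of Q are orthonormal
    print(x)                                       # least-squares solution of A x ~ b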
The seventh chapter is about diagonalization and quadratic forms.
This chapter discusses the following concepts: orthogonal matrices, orthogonal diagonalization, quadratic forms, optimization using quadratic forms, and Hermitian, unitary and normal matrices. A quadratic form is a function on a vector space defined by a homogeneous polynomial of degree two in the coordinates of the vector.
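A small Python sketch (with an illustrative matrix) of evaluating a quadratic form and orthogonally diagonalizing its symmetric matrix:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])                     # symmetric matrix of the form
    x = np.array([1.0, -1.0])

    q = x @ A @ x                                  # q(x) = 2*x1^2 + 2*x1*x2 + 2*x2^2
    eigenvalues, P = np.linalg.eigh(A)             # P is orthogonal, its columns are eigenvectors
    D = np.diag(eigenvalues)
    print(q)                                       # 2.0
    print(np.allclose(A, P @ D @ P.T))             # True: A = P D P^T (orthogonal diagonalization)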
The eighth chapter is about linear transformations.
A linear map, or linear transformation, is a generalization of a linear numerical function (more precisely, of the function y = kx) to more general sets of arguments and values. In contrast to nonlinear operators, linear operators are sufficiently well studied, which makes it possible to apply the results of the general theory, since their properties do not depend on the nature of the variables.
This chapter also covers the definitions of isomorphism, compositions and inverse transformations, and matrices of general linear transformations.
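For transformations given by matrices, composition corresponds to matrix multiplication and the inverse transformation to the inverse matrix; a minimal Python sketch with illustrative matrices:

    import numpy as np

    A = np.array([[0.0, -1.0],
                  [1.0,  0.0]])                    # rotation by 90 degrees
    B = np.array([[2.0, 0.0],
                  [0.0, 3.0]])                     # scaling

    x = np.array([1.0, 1.0])
    composed = A @ (B @ x)                         # apply B first, then A
    print(np.allclose(composed, (A @ B) @ x))      # True: the matrix of the composition is A B
    A_inv = np.linalg.inv(A)
    print(np.allclose(A_inv @ (A @ x), x))         # True: the inverse undoes the transformation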
Isomorphism is a very general concept used in various branches of mathematics. In general terms it can be described as follows: suppose we are given two sets, each carrying a certain structure (groups, rings, vector spaces, etc.). A bijection between them is called an isomorphism if it preserves this structure; if an isomorphism between the two sets exists, they are said to be isomorphic. Isomorphism always defines an equivalence relation on the class of sets with structure.
Objects between which there is an isomorphism are, in some sense, “equally arranged”: they are isomorphic. A classic example of isomorphic systems is the set R of all real numbers with the operation of addition and the set R+ of positive real numbers with the operation of multiplication. The mapping x -> exp(x) is an isomorphism in this case.
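A tiny numeric check of the structure-preserving property of this isomorphism, exp(x + y) = exp(x) * exp(y) (the values below are arbitrary):

    import math

    x, y = 1.3, -0.7
    # exp carries addition in R to multiplication in R+
    print(math.isclose(math.exp(x + y), math.exp(x) * math.exp(y)))   # True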
The ninth chapter is “Numerical Methods”.
The chapter covers such numerical methods of linear algebra as LU-decompositions, the power method, Internet search engines, a comparison of procedures for solving linear systems, and singular value decomposition. It also considers data compression based on the singular value decomposition, using MATLAB, Mathematica or Maple.
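As an illustration of one of these methods, here is a minimal Python sketch of the power method (not the book’s code; the matrix and iteration count are illustrative), which estimates the dominant eigenvalue and eigenvector of a matrix:

    import numpy as np

    def power_method(A, iterations=100):
        x = np.ones(A.shape[0])
        for _ in range(iterations):
            x = A @ x
            x = x / np.linalg.norm(x)              # normalize to avoid overflow
        eigenvalue = x @ A @ x                     # Rayleigh quotient estimate (||x|| = 1)
        return eigenvalue, x

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    lam, v = power_method(A)
    print(lam)                                     # close to the largest eigenvalue of A
    print(np.allclose(A @ v, lam * v, atol=1e-6))  # True: approximately an eigenpair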
The tenth chapter is about applications of linear algebra.
Linear algebra has a very wide spectrum of applications to real-world problems. The basic areas of application are geometric linear programming, cubic spline interpolation, Markov chains, graph theory, games of strategy, economic models, forest management, computer graphics, and so on.
Examples
- Matrix addition and subtraction
Consider the matrices
A = [2 1 0 3; -1 0 2 4; 4 -2 7 0],  B = [-4 3 5 1; 2 2 0 -1; 3 2 -4 5],  C = [1 1; 2 2]
Then, if we consider the addition of A and B, we obtain the matrix whose entries are the sums of the corresponding entries of A and B:
A + B = [-2 4 5 4; 1 2 2 3; 7 0 3 5]
If we consider subtraction, we obtain the matrix whose entries result from subtracting each entry of B from the corresponding entry of A:
A - B = [6 -2 -5 2; -3 -2 2 5; 1 -4 11 -5]
As the dimensions of A and C (and of B and C) are not the same, the expressions A + C, B + C, A - C and B - C are undefined.
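The same arithmetic can be checked with a short Python/NumPy sketch (not part of the book’s example); note that adding matrices of different sizes raises an error, which mirrors the statement that A + C is undefined:

    import numpy as np

    A = np.array([[2, 1, 0, 3], [-1, 0, 2, 4], [4, -2, 7, 0]])
    B = np.array([[-4, 3, 5, 1], [2, 2, 0, -1], [3, 2, -4, 5]])
    C = np.array([[1, 1], [2, 2]])

    print(A + B)           # matches the matrix A + B shown above
    print(A - B)           # matches the matrix A - B shown above
    try:
        A + C              # shapes (3, 4) and (2, 2) are incompatible
    except ValueError as e:
        print("undefined:", e)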
- Using Cramer’s rule to solve the system of linear equations
x1 + 2x3 = 6
-3x1 + 4x2 + 6x3 = 30
-x1 - 2x2 + 3x3 = 8
So we have to consider the determinants of the following matrices, where Ai is obtained from the coefficient matrix A by replacing its i-th column with the column of constant terms:
A = [1 0 2; -3 4 6; -1 -2 3],  A1 = [6 0 2; 30 4 6; 8 -2 3],  A2 = [1 6 2; -3 30 6; -1 8 3],  A3 = [1 0 6; -3 4 30; -1 -2 8]
We obtain that the determinants are the following:
det(A1) = -40,  det(A2) = 72,  det(A3) = 152,  det(A) = 44
Hence, by Cramer’s rule xi = det(Ai)/det(A), the solutions are:
x1 = -40/44 = -10/11,  x2 = 72/44 = 18/11,  x3 = 152/44 = 38/11
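For completeness, a short Python/NumPy sketch (not from the book) that reproduces this Cramer’s-rule computation numerically:

    import numpy as np

    A = np.array([[ 1.0,  0.0, 2.0],
                  [-3.0,  4.0, 6.0],
                  [-1.0, -2.0, 3.0]])
    b = np.array([6.0, 30.0, 8.0])

    det_A = np.linalg.det(A)                       # ~44
    x = []
    for i in range(3):
        Ai = A.copy()
        Ai[:, i] = b                               # replace the i-th column by b
        x.append(np.linalg.det(Ai) / det_A)        # Cramer's rule: x_i = det(A_i) / det(A)
    print(x)                                       # ~[-10/11, 18/11, 38/11]
    print(np.allclose(A @ np.array(x), b))         # True: the solution satisfies the system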