
Mappings

Among linear mappings from one vector space to another, the mappings to the one-dimensional space of scalar coefficients are especially important, and we have seen them in two forms: the vector space of all such functions, which is the dual space, and the collection of positive definite symmetric bilinear forms, which are usually called inner products and written as $(x, y)$, without invoking any explicit function name. The pairing of a dual space element with a vector is often written in the same style, $[x, y]$, distinguished by square brackets rather than round parentheses. Curly brackets are reserved for sets, or lists of items.

Inner products serve to connect geometry to linear algebra, a connection familiar from many introductory courses in engineering, physics, or even mathematics. The essential element of the relationship is the fact that a bilinear function with one argument held fixed is a linear function of the other argument; just which linear function results depends on the value chosen for the fixed argument. If that value is taken from the reciprocal basis, the result is a member of the dual basis, which establishes the connection between the two concepts.
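As a small concrete illustration, here is a minimal numpy sketch (the vectors and coefficients are invented for the example) of the fact that an inner product with its first argument held fixed is a linear function of the second argument:

    import numpy as np

    a = np.array([2.0, -1.0, 3.0])      # the fixed first argument (example values)
    f = lambda x: np.dot(a, x)          # f(x) = (a, x), a scalar-valued function of x

    u = np.array([1.0, 0.0, 2.0])
    v = np.array([0.0, 5.0, -1.0])
    alpha, beta = 2.5, -0.5

    # linearity: f(alpha u + beta v) = alpha f(u) + beta f(v)
    print(np.isclose(f(alpha*u + beta*v), alpha*f(u) + beta*f(v)))   # True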

When the fixed first argument is a reciprocal basis vector and the inner product is applied to the direct basis vectors, written as columns, the result is the Kronecker delta relationship required of a dual basis. If the reciprocal basis vectors are inserted into one matrix as rows, and the direct basis vectors are placed in another matrix as columns, that mutual relationship simply makes the two matrices inverses of one another. That is where the reciprocal basis gets its name.
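The relationship can be checked numerically. The following sketch assumes the ordinary dot product and an invented $3 \times 3$ basis, placing the direct basis vectors in the columns of one matrix and obtaining the reciprocal basis as the rows of its inverse:

    import numpy as np

    # an example basis invented for this sketch, direct basis vectors as columns
    B = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0]])

    R = np.linalg.inv(B)                # reciprocal basis vectors as rows

    # R @ B collects all the inner products (r_i, b_j); the result is the
    # identity matrix, i.e. the Kronecker delta relation defining the dual basis
    print(np.allclose(R @ B, np.eye(3)))    # True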

Passing on from mappings of all kinds from vector spaces, and sets of vector spaces, to the one-dimensional space of scalar coefficients, the next important category of mappings consists of those from a vector space to itself; amongst all mappings, they enjoy the unique feature that they can be composed indefinitely, since no other spaces ever have to be specified. Amongst other things, that means that such mappings can be iterated, that there are polynomials of mappings, and that there are such things as stable (invariant) subspaces of mappings. Evidently finding a basis for a stable subspace is the key to finding the subspace itself; so much the better if the basis itself is stable, which is the rationale for introducing eigenvectors.
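These ideas can be sketched with a deliberately simple $2 \times 2$ matrix invented for the example, showing a polynomial of a mapping and an eigenvector spanning a one-dimensional stable subspace:

    import numpy as np

    # an example matrix chosen for this sketch; its eigenvalues are 2 and 3
    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])

    # a polynomial of the mapping: p(A) = A^2 - 5A + 6I = (A - 2I)(A - 3I)
    p_of_A = A @ A - 5.0*A + 6.0*np.eye(2)
    print(p_of_A)                       # the zero matrix (Cayley-Hamilton)

    # an eigenvector: A v = 3 v, so the line spanned by v is carried into
    # itself, a stable subspace whose single basis vector is itself stable
    v = np.array([1.0, 1.0])
    print(np.allclose(A @ v, 3.0*v))    # True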

Conducting efficient and reliable searches for eigenvectors is an important activity in the practical application of linear algebra, perhaps even more so than the other significant venture, the finding of matrix inverses. Over the years, the preferred schemes for finding eigenvectors have changed, both with advances in computing technology and as a result of theoretical investigations.

Before describing specific techniques, it is worth looking at some symmetry properties of matrices, because of the influence they have on the stable subspaces and the preferred bases associated with them. At the outset, there are two great categories of matrices, which have their own distinctive properties and areas of application. For physicists and engineers, normal matrices predominate, because of their relationship to such symmetry considerations as Newton's third law (action and reaction) or the passivity of electrical circuits. Those matrices have a complete orthonormal set of eigenvectors, with many convenient estimates and bounds for their eigenvalues.

Amongst the normal matrices are those which are symmetric or Hermitian, having only real eigenvalues, and, even more specialized, those which are positive definite, having only positive eigenvalues.
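These properties are easy to confirm numerically. The sketch below uses an invented real symmetric matrix (hence normal), checking that its eigenvalues are real and its eigenvectors orthonormal, and then a positive definite matrix built from it:

    import numpy as np

    # an example matrix for this sketch: real symmetric, hence normal
    S = np.array([[2.0, 1.0, 0.0],
                  [1.0, 2.0, 1.0],
                  [0.0, 1.0, 2.0]])

    vals, vecs = np.linalg.eigh(S)      # solver intended for symmetric/Hermitian input
    print(vals)                                   # real eigenvalues
    print(np.allclose(vecs.T @ vecs, np.eye(3)))  # True: columns are orthonormal

    P = S @ S + np.eye(3)               # positive definite by construction
    print(np.all(np.linalg.eigvalsh(P) > 0))      # True: all eigenvalues positive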

The other great category consists of those matrices with positive matrix elements, which form a strict subset of the matrices with non-negative elements; the additional zeroes allowed in the larger class permit a great deal of limiting behavior which is not accessible to the strictly positive matrices. Such matrices are of interest in probability theory, and in such fields of application as economics. The outstanding attribute of this category of matrices is the uniqueness of the largest eigenvalue, which is easily found by iteration, and of the positive eigenvector associated with it.
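The iteration in question can be sketched directly. The matrix below is invented for the example, with all entries strictly positive; repeated multiplication followed by renormalization settles onto the largest eigenvalue and its positive eigenvector:

    import numpy as np

    # an example matrix for this sketch, all elements strictly positive
    A = np.array([[2.0, 1.0, 0.5],
                  [1.0, 3.0, 1.0],
                  [0.5, 2.0, 4.0]])

    x = np.ones(3)                      # any positive starting vector will do
    for _ in range(200):
        y = A @ x
        x = y / np.linalg.norm(y)       # renormalize to keep the iteration bounded

    lam = x @ (A @ x)                   # Rayleigh-quotient estimate of the eigenvalue
    print(lam)                          # the unique largest eigenvalue
    print(x)                            # its eigenvector, with all components positive
    print(np.allclose(A @ x, lam * x))  # True once the iteration has converged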


Pedro Hernandez 2004-02-28