Given a vector (see Vector Spaces, Modules, and Linear Algebra), we have seen that one of the things we can do to it is to “scale” it (in fact, it is one of the defining properties of a vector). We can also use a matrix (see Matrices) to scale vectors. Consider, for example, the matrix
$$\begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}.$$
Applying this matrix to any vector “doubles” the magnitude of the vector:

$$\begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 2a \\ 2b \end{pmatrix} = 2\begin{pmatrix} a \\ b \end{pmatrix}.$$

This is applicable to any vector except, of course, the zero vector, which no amount of scaling can change, and which is therefore excluded from our discussion in this post.
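For readers who like to experiment, here is a small numerical sketch of this in Python with NumPy (the particular vector is an arbitrary choice for illustration):

```python
import numpy as np

# The matrix 2I "doubles" every vector it is applied to.
A = np.array([[2, 0],
              [0, 2]])

v = np.array([3, -1])              # an arbitrary nonzero vector
print(A @ v)                       # [ 6 -2]
print(np.allclose(A @ v, 2 * v))   # True: the result is exactly 2v
```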
The interesting case, however, is when the matrix “scales” only a few special vectors. Consider, for example, the matrix

$$\begin{pmatrix} 3 & 1 \\ 1 & 3 \end{pmatrix}.$$
Applying it to the vector

$$\begin{pmatrix} 1 \\ 0 \end{pmatrix}$$

gives us

$$\begin{pmatrix} 3 \\ 1 \end{pmatrix}.$$
This is, of course, not an example of “scaling”. However, for the vector

$$\begin{pmatrix} 1 \\ 1 \end{pmatrix}$$

we get

$$\begin{pmatrix} 4 \\ 4 \end{pmatrix}.$$

This is a scaling, since

$$\begin{pmatrix} 4 \\ 4 \end{pmatrix} = 4\begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$
The same holds true for the vector

$$\begin{pmatrix} 1 \\ -1 \end{pmatrix},$$

from which we obtain

$$\begin{pmatrix} 2 \\ -2 \end{pmatrix},$$

which is also a “scaling”, this time by a factor of $2$. Finally, this also holds true for scalar multiples of the two vectors we have enumerated. These vectors, the only “special” ones that are merely scaled by our linear transformation (represented by our matrix), are called the eigenvectors of the linear transformation, and the factors by which they are scaled are called the corresponding eigenvalues.
So far we have focused on finite-dimensional vector spaces, which give us a lot of convenience; for instance, we can express finite-dimensional vectors as column matrices. But there are also infinite-dimensional vector spaces; recall that the conditions for a set to be a vector space are that its elements can be added or subtracted, and scaled. An example of an infinite-dimensional vector space is the set of all continuous real-valued functions of the real numbers (with the real numbers serving as the field of scalars).
Given two continuous real-valued functions of the real numbers $f$ and $g$, the functions $f+g$ and $f-g$ are also continuous real-valued functions of the real numbers, and the same is true for $cf$, for any real number $c$. Thus we can see that the set of continuous real-valued functions of the real numbers forms a vector space.
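To make this concrete, we can represent such functions as Python callables and implement the vector space operations pointwise (a minimal sketch; the helper names are my own invention for illustration):

```python
import math

def add(f, g):
    """Pointwise sum: (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def subtract(f, g):
    """Pointwise difference: (f - g)(x) = f(x) - g(x)."""
    return lambda x: f(x) - g(x)

def scale(c, f):
    """Scalar multiple: (c f)(x) = c * f(x)."""
    return lambda x: c * f(x)

h = add(math.sin, math.cos)   # sin + cos is again a continuous function
k = scale(3.0, math.exp)      # and so is 3 * exp
print(h(0.0), k(0.0))         # 1.0 3.0
```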
Matrices are not usually used to express linear transformations when it comes to infinite-dimensional vector spaces, but we still retain the concept of eigenvalues and eigenvectors. Note that a linear transformation $T$ is a function from a vector space to another (possibly itself) which satisfies the conditions

$$T(u+v) = T(u) + T(v)$$

and

$$T(cv) = cT(v).$$
Since our vector spaces in the infinite-dimensional case may be composed of functions, we may think of linear transformations as “functions from functions to functions” that satisfy the conditions earlier stated.
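As a sketch of what these two conditions mean in practice, the function below spot-checks them numerically for a given transformation at given inputs (this only tests the conditions on samples, it does not prove linearity; the names are illustrative):

```python
import numpy as np

def check_linearity(T, u, v, c):
    """Spot-check T(u + v) = T(u) + T(v) and T(cv) = cT(v)."""
    additive = np.allclose(T(u + v), T(u) + T(v))
    homogeneous = np.allclose(T(c * v), c * T(v))
    return additive and homogeneous

# Any matrix gives a linear transformation, so this prints True.
A = np.array([[3, 1], [1, 3]])
T = lambda x: A @ x
print(check_linearity(T, np.array([1.0, 2.0]), np.array([-1.0, 4.0]), 2.5))
```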
Consider the “operation” of taking the derivative (see An Intuitive Introduction to Calculus). The rules of calculus concerning derivatives (which can be derived from the basic definition of the derivative) state that we have

$$\frac{d}{dx}(f+g) = \frac{df}{dx} + \frac{dg}{dx}$$

and

$$\frac{d}{dx}(cf) = c\frac{df}{dx}$$

where $c$ is a constant. This holds true for “higher-order” derivatives as well. This means that the “derivative operator”

$$\frac{d}{dx}$$
is an example of a linear transformation from an infinite-dimensional vector space to another (note that the functions that comprise our vector space must be “differentiable”, and that the derivatives of our functions must possess the same defining properties we required for our vector space).
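We can illustrate this linearity numerically with finite-difference approximations of the derivative (a rough sketch; the functions, step size, and sample point are arbitrary choices):

```python
import math

def deriv(f, x, h=1e-6):
    """Central-difference approximation to the derivative of f at x."""
    return (f(x + h) - f(x - h)) / (2 * h)

f, g, c, x = math.sin, math.exp, 5.0, 0.7

# d/dx (f + g) = df/dx + dg/dx, up to numerical error
print(abs(deriv(lambda t: f(t) + g(t), x) - (deriv(f, x) + deriv(g, x))) < 1e-6)

# d/dx (c f) = c df/dx, up to numerical error
print(abs(deriv(lambda t: c * f(t), x) - c * deriv(f, x)) < 1e-6)
```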
We now show an example of eigenvalues and eigenvectors in the context of infinite-dimensional vector spaces. Let our linear transformation be

$$\frac{d^{2}}{dx^{2}},$$

which stands for the “operation” of taking the second derivative with respect to $x$. We state again some of the rules of calculus pertaining to the derivatives of trigonometric functions (once again, they can be derived from the basic definitions, which is a fruitful exercise, or they can be looked up in tables):

$$\frac{d}{dx}\sin(x) = \cos(x)$$

$$\frac{d}{dx}\cos(x) = -\sin(x)$$

which means that

$$\frac{d^{2}}{dx^{2}}\sin(x) = \frac{d}{dx}\cos(x) = -\sin(x).$$

We can now see that the function $\sin(x)$ is an eigenvector of the linear transformation $\frac{d^{2}}{dx^{2}}$, with eigenvalue equal to $-1$.
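Again we can check this numerically; the sketch below approximates the second derivative of $\sin(x)$ at a few arbitrarily chosen points and compares it with $-\sin(x)$:

```python
import math

def second_deriv(f, x, h=1e-4):
    """Central-difference approximation to the second derivative of f at x."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

for x in [0.3, 1.0, 2.5]:
    # The two printed values agree (up to numerical error): taking the
    # second derivative merely scales sin by the eigenvalue -1.
    print(second_deriv(math.sin, x), -math.sin(x))
```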
Eigenvalues and eigenvectors play many important roles in linear algebra (and in its infinite-dimensional version, which is called functional analysis). We will mention here something we left out of our discussion in Some Basics of Quantum Mechanics. In quantum mechanics, “observables”, like the position, momentum, or energy of a system, correspond to certain kinds of linear transformations whose eigenvalues are real numbers (note that our field of scalars in quantum mechanics is the field of complex numbers $\mathbb{C}$). These eigenvalues correspond to the only values that we can obtain after measurement; we cannot measure values that are not eigenvalues.
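In the finite-dimensional complex case, such a linear transformation can be represented by a Hermitian matrix (one equal to its own conjugate transpose), and Hermitian matrices always have real eigenvalues. Here is a small sketch, with an arbitrarily chosen Hermitian matrix:

```python
import numpy as np

# An arbitrary Hermitian matrix: H equals its own conjugate transpose.
H = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])
print(np.allclose(H, H.conj().T))    # True

# eigvalsh is NumPy's eigenvalue routine for Hermitian matrices;
# despite the complex entries of H, its eigenvalues are real.
print(np.linalg.eigvalsh(H))         # [1. 4.]
```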
References:
Eigenvalues and Eigenvectors on Wikipedia
Linear Algebra Done Right by Sheldon Axler
Algebra by Michael Artin
Calculus by James Stewart
Introductory Functional Analysis with Applications by Erwin Kreyszig
Introduction to Quantum Mechanics by David J. Griffiths