We discussed linear algebra in Vector Spaces, Modules, and Linear Algebra, and there we focused on "finite-dimensional" vector spaces (the concept of dimension for vector spaces was discussed in More on Vector Spaces and Modules), writing vectors in the form

$\begin{pmatrix} a \\ b \end{pmatrix}$

Vectors need not be written in this way, since the definition of a vector space only requires that it be a set closed under addition and scalar multiplication. For example, we could simply have denoted vectors by $v$, or, in quantum mechanics, we could use what is called "Dirac notation", writing vectors as $|v\rangle$.

However, the notation that we used in Vector Spaces, Modules, and Linear Algebra is quite convenient; it allowed us to display the "components" explicitly. If we declare, for example, that our scalars are the set of real numbers $\mathbb{R}$, and that our vector space is the set of all vectors of the form

$\begin{pmatrix} a \\ b \end{pmatrix}$
where $a, b \in \mathbb{R}$, then we already know that we can use the following vectors for our basis:

$\begin{pmatrix} 1 \\ 0 \end{pmatrix}$

and

$\begin{pmatrix} 0 \\ 1 \end{pmatrix}$

since any vector can be expressed uniquely as a linear combination

$\begin{pmatrix} a \\ b \end{pmatrix} = a \begin{pmatrix} 1 \\ 0 \end{pmatrix} + b \begin{pmatrix} 0 \\ 1 \end{pmatrix}$
It is also quite easy to see that our vector space here has dimension $2$. What we have done is express our vector as a matrix, more specifically a column matrix. A **matrix** is a rectangular array of numbers (which we refer to as its "entries"), with some specific properties as we will discuss later. If a matrix has $m$ rows and $n$ columns, we refer to it as an $m \times n$ matrix. A matrix that has only one row is often referred to as a **row matrix**, and a matrix with only one column, as we have been using to express our vectors up to now, is referred to as a **column matrix**. A matrix with the same number of rows and columns is referred to as a **square matrix**.
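To make the unique expansion in the standard basis concrete, here is a minimal sketch in plain Python (the function name and sample values are ours, not from the original text):

```python
# Represent vectors in R^2 as plain lists [a, b].
# The standard basis vectors:
e1 = [1, 0]
e2 = [0, 1]

def linear_combination(a, b):
    """Form the vector a*e1 + b*e2, componentwise."""
    return [a * e1[i] + b * e2[i] for i in range(2)]

v = [3, -2]
# The coefficients in the standard basis are just the components of v,
# and the linear combination recovers v.
assert linear_combination(v[0], v[1]) == v
```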

Here are some examples of matrices (with real number entries):

$\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}$ (a $2 \times 3$ matrix)

$\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ (a $2 \times 2$ square matrix)

$\begin{pmatrix} 1 & 2 & 3 \end{pmatrix}$ (a $1 \times 3$ row matrix)

We will often adopt the notation that the entry in the first row and first column of a matrix $A$ will be labeled $a_{11}$, the entry in the second row and first column of the same matrix will be labeled $a_{21}$, and so on. Since we often denote vectors by $v$, we will denote the first component (the entry in the first row) by $v_{1}$, the second component by $v_{2}$, and so on.

We can perform operations on matrices. The set of $m \times n$ matrices, for fixed $m$ and $n$, forms a vector space, which means we can "scale" them (multiply them by a "scalar"), and we can also add or subtract them from each other. This is done "componentwise", i.e.

$(A + B)_{ij} = a_{ij} + b_{ij}, \qquad (cA)_{ij} = c\,a_{ij}$
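As a sketch in plain Python (matrices as lists of rows; the helper names are ours, not from the text), the componentwise operations look like:

```python
def mat_add(A, B):
    """Entrywise sum: (A + B)_ij = A_ij + B_ij.  A and B must be the same size."""
    return [[a + b for a, b in zip(row_a, row_b)] for row_a, row_b in zip(A, B)]

def mat_scale(c, A):
    """Scalar multiple: (cA)_ij = c * A_ij."""
    return [[c * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert mat_add(A, B) == [[6, 8], [10, 12]]
assert mat_scale(2, A) == [[2, 4], [6, 8]]
```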
Multiplication of matrices is more complicated. An $m \times n$ matrix can be multiplied by an $n \times p$ matrix to form an $m \times p$ matrix. Note that the number of columns of the first matrix must be equal to the number of rows of the second matrix. Note also that multiplication of matrices is not commutative; a product $AB$ of two matrices $A$ and $B$ may not be equal to the product $BA$ of the same matrices, contrary to what we find in the multiplication of ordinary numbers.

The procedure for obtaining the entries of this product matrix is as follows: let us denote the product of the $m \times n$ matrix $A$ and the $n \times p$ matrix $B$ by $C$ (this is an $m \times p$ matrix, as we have mentioned above) and let $c_{ij}$ be its entry in the $i$-th row and $j$-th column. Then

$c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}$
For example, we may have

$\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} = \begin{pmatrix} 19 & 22 \\ 43 & 50 \end{pmatrix}$

We highlight the following step to obtain the entry in the first row and first column:

$c_{11} = a_{11} b_{11} + a_{12} b_{21} = (1)(5) + (2)(7) = 19$
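The multiplication rule can be sketched in a few lines of plain Python (matrices as lists of rows; the function name is ours):

```python
def mat_mul(A, B):
    """Product of an m x n matrix A and an n x p matrix B:
    the (i, j) entry is the sum over k of A[i][k] * B[k][j]."""
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert mat_mul(A, B) == [[19, 22], [43, 50]]
# Multiplication is not commutative: BA differs from AB.
assert mat_mul(B, A) == [[23, 34], [31, 46]]
```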
Now that we know how to multiply matrices, we go back to vectors, which can always be written as column matrices. For the sake of simplicity we continue to restrict ourselves to finite-dimensional vector spaces. We have seen that writing vectors as column matrices provides us with several conveniences. Other kinds of matrices are also useful in studying vector spaces.

For instance, we noted in Vector Spaces, Modules, and Linear Algebra that an important kind of function between vector spaces (of the same dimension) are the **linear transformations**, which are functions $f$ such that $f(u + v) = f(u) + f(v)$ and $f(cv) = cf(v)$. We note that if $A$ is an $n \times n$ square matrix and $v$ is an $n \times 1$ column matrix, then the product $Av$ is another $n \times 1$ column matrix. It is a theorem that *all* linear transformations between $n$-dimensional vector spaces can be written as an $n \times n$ square matrix.
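As an illustration (a sketch in plain Python; the sample matrix and vectors are ours), applying a square matrix to a column vector does indeed satisfy the two linearity conditions:

```python
def mat_vec(A, v):
    """Apply an n x n matrix A to an n x 1 column vector v, giving the column vector Av."""
    return [sum(A[i][k] * v[k] for k in range(len(v))) for i in range(len(A))]

A = [[0, -1], [1, 0]]    # a sample 2 x 2 matrix (rotation by 90 degrees)
u, v, c = [1, 2], [3, 4], 5

# Linearity: A(u + v) = Au + Av  and  A(cv) = c(Av)
assert mat_vec(A, [u[i] + v[i] for i in range(2)]) == \
       [mat_vec(A, u)[i] + mat_vec(A, v)[i] for i in range(2)]
assert mat_vec(A, [c * x for x in v]) == [c * y for y in mat_vec(A, v)]
```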

We also have functions from a vector space to its set of scalars, which are sometimes referred to as **functionals**. The **linear functionals**, i.e. the functionals $f$ such that $f(u + v) = f(u) + f(v)$ and $f(cv) = cf(v)$, are represented by multiplying any column matrix by a row matrix (the number of their entries must be the same, as per the rules of matrix multiplication). For instance, we may have

$\begin{pmatrix} u_{1} & u_{2} \end{pmatrix} \begin{pmatrix} v_{1} \\ v_{2} \end{pmatrix} = u_{1} v_{1} + u_{2} v_{2}$
Note that the right side is just a real number (or complex number, or perhaps most generally, an element of the field of scalars of our vector space).
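Such a functional is just the row-times-column product, which collapses to a single scalar; a minimal sketch in plain Python (the name and sample values are ours):

```python
def functional(row, col):
    """Multiply a 1 x n row matrix by an n x 1 column matrix: the result is a scalar."""
    assert len(row) == len(col), "the numbers of entries must match"
    return sum(r * c for r, c in zip(row, col))

f = [2, -1, 3]   # a sample row matrix
v = [1, 4, 2]    # a sample column vector
assert functional(f, v) == 2 * 1 + (-1) * 4 + 3 * 2  # a single real number
```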

Matrices are rather ubiquitous in mathematics (and also in physics). In fact, some might teach the subject of linear algebra with the focus on matrices first. Here, however, we have taken the view of introducing first the abstract idea of vector spaces, with matrices viewed as a way of making the abstract ideas of vectors, linear transformations, and linear functionals more "concrete". At the very heart of linear algebra still remains the idea of a set whose elements can be added and scaled, and of functions between these sets that "respect" the addition and scaling. But when we want to actually compute things, matrices will often come in handy.

