Matrices

We discussed linear algebra in Vector Spaces, Modules, and Linear Algebra, where we focused on “finite-dimensional” vector spaces (the concept of dimension for vector spaces was discussed in More on Vector Spaces and Modules), writing vectors in the form

\displaystyle \left(\begin{array}{c}a\\b\end{array}\right)

Vectors need not be written in this way, since the definition of a vector space only requires a set closed under addition and scalar multiplication (subject to certain axioms). For example, we could simply denote vectors by v, or, as in quantum mechanics, use what we call “Dirac notation” and write vectors as |\psi\rangle.

However, the notation that we used in Vector Spaces, Modules, and Linear Algebra is quite convenient, since it displays the “components” explicitly. If we declare, for example, that our scalars are the real numbers \mathbb{R}, and that our vector space is the set of all vectors of the form

\displaystyle \left(\begin{array}{c}a\\b\end{array}\right)

where a,b\in \mathbb{R}, then we already know that we can use the following vectors for our basis:

\displaystyle \left(\begin{array}{c}1\\0\end{array}\right)

and

\displaystyle \left(\begin{array}{c}0\\1\end{array}\right)

since any vector can be expressed uniquely as a linear combination

\displaystyle \left(\begin{array}{c}a\\b\end{array}\right)=a\left(\begin{array}{c}1\\0\end{array}\right)+b\left(\begin{array}{c}0\\1\end{array}\right)

It is also quite easy to see that our vector space here has dimension 2. What we have done is express our vector as a matrix, more specifically a column matrix. A matrix is a rectangular array of numbers (which we refer to as its “entries”), together with specific rules for operating on them, which we will discuss shortly. If a matrix has m rows and n columns, we refer to it as an m\times n matrix. A matrix with only one row is often referred to as a row matrix, and a matrix with only one column, as we have been using to express our vectors up to now, is referred to as a column matrix. A matrix with the same number of rows and columns is referred to as a square matrix.

Here are some examples of matrices (with real number entries):

\displaystyle \left(\begin{array}{cc}1&-0.25\\ 100&0\\2&-5\end{array}\right)        (3\times 2 matrix)

\displaystyle \left(\begin{array}{cc}1&0\\ 0&\frac{3}{2}\end{array}\right)        (2\times 2 square matrix)

\displaystyle \left(\begin{array}{cccc}1&27&-\frac{4}{5}&10\end{array}\right)       (1\times 4 row matrix)

We will often adopt the notation that the entry in the first row and first column of a matrix A is labeled A_{1,1}, the entry in the second row and first column is labeled A_{2,1}, and so on. Similarly, if we denote a vector by v, we will denote its first component (the entry in the first row) by v_{1}, its second component by v_{2}, and so on.
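To make this indexing concrete, here is a small sketch in Python (our choice of language for illustration; note that Python lists are 0-indexed, while the notation above is 1-indexed, so A_{m,n} lives at A[m-1][n-1]):

```python
# The 3x2 example matrix from above, stored as a list of rows.
A = [[1, -0.25],
     [100, 0],
     [2, -5]]

print(A[0][0])  # the entry A_{1,1}, which is 1
print(A[1][0])  # the entry A_{2,1}, which is 100

# A vector stored as a flat list: its m-th component v_m is v[m-1].
v = [3, 7]
print(v[0])  # the component v_1, which is 3
```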

We can perform operations on matrices. The set of m\times n matrices, for fixed m and n, forms a vector space itself, which means we can “scale” them, i.e. multiply them by a “scalar”, and we can also add or subtract them from each other. These operations are done “componentwise”, i.e.

\displaystyle c\left(\begin{array}{cc}A_{1,1}&A_{1,2}\\ A_{2,1}&A_{2,2}\end{array}\right)=\left(\begin{array}{cc}cA_{1,1}&cA_{1,2}\\ cA_{2,1}&cA_{2,2}\end{array}\right)

\displaystyle \left(\begin{array}{cc}A_{1,1}&A_{1,2}\\ A_{2,1}&A_{2,2}\end{array}\right)+\left(\begin{array}{cc}B_{1,1}&B_{1,2}\\ B_{2,1}&B_{2,2}\end{array}\right)=\left(\begin{array}{cc}A_{1,1}+B_{1,1}&A_{1,2}+B_{1,2}\\ A_{2,1}+B_{2,1}&A_{2,2}+B_{2,2}\end{array}\right)

\displaystyle \left(\begin{array}{cc}A_{1,1}&A_{1,2}\\ A_{2,1}&A_{2,2}\end{array}\right)-\left(\begin{array}{cc}B_{1,1}&B_{1,2}\\ B_{2,1}&B_{2,2}\end{array}\right)=\left(\begin{array}{cc}A_{1,1}-B_{1,1}&A_{1,2}-B_{1,2}\\ A_{2,1}-B_{2,1}&A_{2,2}-B_{2,2}\end{array}\right)
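To make these componentwise rules concrete, here is a minimal Python sketch; the function names scale, add, and subtract are our own, not standard library routines:

```python
def scale(c, A):
    """Multiply every entry of the matrix A by the scalar c."""
    return [[c * entry for entry in row] for row in A]

def add(A, B):
    """Add two matrices of the same shape, entry by entry."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def subtract(A, B):
    """Subtract B from A, entry by entry."""
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(scale(2, A))     # [[2, 4], [6, 8]]
print(add(A, B))       # [[6, 8], [10, 12]]
print(subtract(A, B))  # [[-4, -4], [-4, -4]]
```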

Multiplication of matrices is more complicated. A j\times k matrix can be multiplied by a k\times l matrix to form a j\times l matrix. Note that the number of columns of the first matrix must be equal to the number of rows of the second matrix. Note also that multiplication of matrices is not commutative; the product AB of two matrices A and B need not equal the product BA of the same matrices (indeed, BA may not even be defined), contrary to what we find in the multiplication of ordinary numbers.

The procedure for obtaining the entries of this product matrix is as follows: Let’s denote the product of the j\times k matrix A and the k\times l matrix B by AB (this is a j\times l matrix, as we have mentioned above) and let (AB)_{m,n} be its entry in the m-th row and n-th column. Then

\displaystyle (AB)_{m,n}=\sum_{i=1}^{k}A_{m,i}B_{i,n}

For example, we may have

\displaystyle \left(\begin{array}{cc}1&-3\\ 2&0\\-2&6\end{array}\right) \left(\begin{array}{cccc}5&-2&0&1\\ 0&1&-1&4\end{array}\right)=\left(\begin{array}{cccc}(1)(5)+(-3)(0)&(1)(-2)+(-3)(1)&(1)(0)+(-3)(-1)&(1)(1)+(-3)(4)\\ (2)(5)+(0)(0)&(2)(-2)+(0)(1)&(2)(0)+(0)(-1)&(2)(1)+(0)(4)\\(-2)(5)+(6)(0)&(-2)(-2)+(6)(1)&(-2)(0)+(6)(-1)&(-2)(1)+(6)(4)\end{array}\right)

\displaystyle \left(\begin{array}{cc}1&-3\\ 2&0\\-2&6\end{array}\right) \left(\begin{array}{cccc}5&-2&0&1\\ 0&1&-1&4\end{array}\right)=\left(\begin{array}{cccc}5&-5&3&-11\\ 10&-4&0&2\\-10&10&-6&22\end{array}\right)

We highlight the following step to obtain the entry in the first row and first column:

\displaystyle \left(\begin{array}{cc}\mathbf{1}&\mathbf{-3}\\ 2&0\\-2&6\end{array}\right) \left(\begin{array}{cccc}\mathbf{5}&-2&0&1\\ \mathbf{0}&1&-1&4\end{array}\right)=\left(\begin{array}{cccc}\mathbf{(1)(5)+(-3)(0)}&(1)(-2)+(-3)(1)&(1)(0)+(-3)(-1)&(1)(1)+(-3)(4)\\ (2)(5)+(0)(0)&(2)(-2)+(0)(1)&(2)(0)+(0)(-1)&(2)(1)+(0)(4)\\(-2)(5)+(6)(0)&(-2)(-2)+(6)(1)&(-2)(0)+(6)(-1)&(-2)(1)+(6)(4)\end{array}\right)
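As a check on the formula, here is a minimal Python sketch of matrix multiplication (matmul is our own function, not a library routine); it reproduces the product computed above:

```python
def matmul(A, B):
    """Multiply a j x k matrix A by a k x l matrix B using
    (AB)_{m,n} = sum over i of A_{m,i} * B_{i,n}."""
    k = len(B)
    assert len(A[0]) == k, "columns of A must equal rows of B"
    return [[sum(A[m][i] * B[i][n] for i in range(k))
             for n in range(len(B[0]))]
            for m in range(len(A))]

A = [[1, -3], [2, 0], [-2, 6]]
B = [[5, -2, 0, 1], [0, 1, -1, 4]]
print(matmul(A, B))
# [[5, -5, 3, -11], [10, -4, 0, 2], [-10, 10, -6, 22]]
```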

Now that we know how to multiply matrices, let us go back to vectors, which can always be written as column matrices. For the sake of simplicity we continue to restrict ourselves to finite-dimensional vector spaces. We have seen that writing vectors as column matrices provides us with several conveniences. Other kinds of matrices are also useful in studying vector spaces.

For instance, we noted in Vector Spaces, Modules, and Linear Algebra that an important kind of function between vector spaces (here, of the same dimension) is the linear transformation: a function f such that f(av)=af(v) and f(u+v)=f(u)+f(v). We note that if A is an n\times n square matrix, and v is an n\times 1 column matrix, then the product Av is another n\times 1 column matrix. It is a theorem that every linear transformation between n-dimensional vector spaces can be written as an n\times n square matrix, once a basis has been chosen for each space.
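As a small numerical check, assuming the list-of-lists representation from the sketches above (the function apply and the sample values are our own), applying a square matrix to a column vector satisfies both linearity conditions:

```python
def apply(A, v):
    """Apply the n x n matrix A to the vector v, treating v as an
    n x 1 column matrix and returning the result as a flat list."""
    return [sum(A[m][i] * v[i] for i in range(len(v))) for m in range(len(A))]

A = [[1, 0], [0, 1.5]]  # the 2x2 square matrix from the examples above
u, v, a = [1, 2], [3, 4], 5

# f(av) = a f(v)
assert apply(A, [a * x for x in v]) == [a * y for y in apply(A, v)]
# f(u + v) = f(u) + f(v)
assert apply(A, [x + y for x, y in zip(u, v)]) == \
    [p + q for p, q in zip(apply(A, u), apply(A, v))]
print("linearity checks passed")
```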

We also have functions from a vector space to its set of scalars, which are sometimes referred to as functionals. A linear functional, i.e. a functional f such that f(av)=af(v) and f(u+v)=f(u)+f(v), is represented by a row matrix; applying it to a vector amounts to multiplying the column matrix representing the vector by that row matrix on the left (the numbers of entries must match, as per the rules of matrix multiplication). For instance, we may have

\displaystyle \left(\begin{array}{cccc}u_{1}&u_{2}&u_{3}&u_{4} \end{array}\right)\left(\begin{array}{c}v_{1}\\v_{2}\\v_{3}\\v_{4}\end{array}\right) = u_{1}v_{1}+u_{2}v_{2}+u_{3}v_{3}+u_{4}v_{4}

Note that the right side is just a real number (or complex number, or perhaps most generally, an element of the field of scalars of our vector space).
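In the same sketch style, applying a linear functional is just the 1\times n times n\times 1 case of matrix multiplication; the following Python snippet (with values of our own choosing) computes the sum above:

```python
def functional(u, v):
    """Apply the linear functional represented by the row matrix u to the
    column vector v: a 1 x n times n x 1 product, yielding a single scalar."""
    return sum(ui * vi for ui, vi in zip(u, v))

u = [1, 2, 3, 4]  # the row matrix representing the functional
v = [5, 6, 7, 8]  # the column vector it acts on
print(functional(u, v))  # 1*5 + 2*6 + 3*7 + 4*8 = 70
```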

Matrices are rather ubiquitous in mathematics (and also in physics). In fact, the subject of linear algebra is sometimes taught with the focus on matrices first. Here, however, we have introduced the abstract idea of vector spaces first, with matrices viewed as a way of making the abstract ideas of vectors, linear transformations, and linear functionals more “concrete”. At the very heart of linear algebra still remains the idea of a set whose elements can be added and scaled, and functions between these sets that “respect” the addition and scaling. But when we want to actually compute things, matrices will often come in handy.

