Linear Algebra for Team-Based Inquiry Learning

2022 Edition

Steven Clontz, University of South Alabama
Drew Lewis, University of South Alabama

August 2, 2022

Section 5.1: Row Operations and Determinants (GT1)

Activity 5.1.1 (~5 min)

The image below illustrates how the linear transformation \(T : \IR^2 \rightarrow \IR^2\) given by the standard matrix \(A = \left[\begin{array}{cc} 2 & 0 \\ 0 & 3 \end{array}\right]\) transforms the unit square.

Figure 1. Transformation of the unit square by the matrix \(A\text{.}\)

Part 1.

What are the lengths of \(A\vec e_1\) and \(A\vec e_2\text{?}\)

Activity 5.1.1 (~5 min)

The image below illustrates how the linear transformation \(T : \IR^2 \rightarrow \IR^2\) given by the standard matrix \(A = \left[\begin{array}{cc} 2 & 0 \\ 0 & 3 \end{array}\right]\) transforms the unit square.

Figure 2. Transformation of the unit square by the matrix \(A\text{.}\)

Part 2.

What is the area of the transformed unit square?

Activity 5.1.2 (~5 min)

The image below illustrates how the linear transformation \(S : \IR^2 \rightarrow \IR^2\) given by the standard matrix \(B = \left[\begin{array}{cc} 2 & 3 \\ 0 & 4 \end{array}\right]\) transforms the unit square.

Figure 3. Transformation of the unit square by the matrix \(B\)

Part 1.

What are the lengths of \(B\vec e_1\) and \(B\vec e_2\text{?}\)

Activity 5.1.2 (~5 min)

The image below illustrates how the linear transformation \(S : \IR^2 \rightarrow \IR^2\) given by the standard matrix \(B = \left[\begin{array}{cc} 2 & 3 \\ 0 & 4 \end{array}\right]\) transforms the unit square.

Figure 4. Transformation of the unit square by the matrix \(B\)

Part 2.

What is the area of the transformed unit square?

Observation 5.1.3

It is possible to find two nonparallel vectors that are scaled but not rotated by the linear map given by \(B\text{.}\)

\begin{equation*} B\vec e_1=\left[\begin{array}{cc} 2 & 3 \\ 0 & 4 \end{array}\right]\left[\begin{array}{c}1\\0\end{array}\right] =\left[\begin{array}{c}2\\0\end{array}\right]=2\vec e_1 \end{equation*}
\begin{equation*} B\left[\begin{array}{c}\frac{3}{4}\\\frac{1}{2}\end{array}\right] = \left[\begin{array}{cc} 2 & 3 \\ 0 & 4 \end{array}\right]\left[\begin{array}{c}\frac{3}{4}\\\frac{1}{2}\end{array}\right] = \left[\begin{array}{c}3\\2\end{array}\right] = 4\left[\begin{array}{c}\frac{3}{4}\\\frac{1}{2}\end{array}\right] \end{equation*}
Figure 5. Certain vectors are stretched out without being rotated.

The process for finding such vectors will be covered later in this chapter.

Observation 5.1.4

Notice that while a linear map can transform vectors in various ways, it always transforms parallelograms into parallelograms, and the areas of these parallelograms are all scaled by the same factor: in the case of \(B=\left[\begin{array}{cc} 2 & 3 \\ 0 & 4 \end{array}\right]\text{,}\) this factor is \(8\text{.}\)

Figure 6. A linear map transforming parallelograms into parallelograms.

Since this change in area is always the same for a given linear map, it will be equal to the area of the transformed unit square (which begins with area \(1\)).
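One way to see the factor of \(8\) directly: the columns of \(B\) are \(B\vec{e}_1=\left[\begin{array}{c}2\\0\end{array}\right]\) and \(B\vec{e}_2=\left[\begin{array}{c}3\\4\end{array}\right]\text{,}\) so the transformed unit square is a parallelogram with base \(2\) along the horizontal axis and height \(4\text{:}\)

\begin{equation*} \text{area}=(\text{base})(\text{height})=(2)(4)=8\text{.} \end{equation*}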

Remark 5.1.5

We will define the determinant of a square matrix \(B\text{,}\) or \(\det(B)\) for short, to be the factor by which \(B\) scales areas. In order to figure out how to compute it, we first figure out the properties it must satisfy.

Figure 7. The linear transformation \(B\) scaling areas by a constant factor, which we call the determinant

Activity 5.1.6 (~2 min)

The transformation of the unit square by the standard matrix \([\vec{e}_1\hspace{0.5em} \vec{e}_2]=\left[\begin{array}{cc}1&0\\0&1\end{array}\right]=I\) is illustrated below. If \(\det([\vec{e}_1\hspace{0.5em} \vec{e}_2])=\det(I)\) is the area of the resulting parallelogram, what is the value of \(\det([\vec{e}_1\hspace{0.5em} \vec{e}_2])=\det(I)\text{?}\)

Figure 8. The transformation of the unit square by the identity matrix.

The value for \(\det([\vec{e}_1\hspace{0.5em} \vec{e}_2])=\det(I)\) is:

  1. 0

  2. 1

  3. 2

  4. 4

Activity 5.1.7 (~2 min)

The transformation of the unit square by the standard matrix \([\vec{v}\hspace{0.5em} \vec{v}]\) is illustrated below: both columns are the same vector, so \(T(\vec{e}_1)=T(\vec{e}_2)=\vec{v}\text{.}\) If \(\det([\vec{v}\hspace{0.5em} \vec{v}])\) is the area of the generated parallelogram, what is the value of \(\det([\vec{v}\hspace{0.5em} \vec{v}])\text{?}\)

Figure 9. Transformation of the unit square by a matrix with identical columns.

The value of \(\det([\vec{v}\hspace{0.5em} \vec{v}])\) is:

  1. 0

  2. 1

  3. 2

  4. 4

Activity 5.1.8 (~5 min)

The transformations of the unit square by the standard matrices \([\vec{v}\hspace{0.5em} \vec{w}]\) and \([c\vec{v}\hspace{0.5em} \vec{w}]\) are illustrated below. Describe the value of \(\det([c\vec{v}\hspace{0.5em} \vec{w}])\text{.}\)

Figure 10. The parallelograms generated by \(\vec{v}\)/\(c\vec{v}\) and \(\vec{w}\)

Describe the value of \(\det([c\vec{v}\hspace{0.5em} \vec{w}])\text{:}\)

  1. \(\displaystyle \det([\vec{v}\hspace{0.5em} \vec{w}])\)

  2. \(\displaystyle c\det([\vec{v}\hspace{0.5em} \vec{w}])\)

  3. \(\displaystyle c^2\det([\vec{v}\hspace{0.5em} \vec{w}])\)

  4. Cannot be determined from this information.

Activity 5.1.9 (~5 min)

The parallelograms generated by the standard matrices \([\vec{u}\hspace{0.5em} \vec{w}]\text{,}\) \([\vec{v}\hspace{0.5em} \vec{w}]\text{,}\) and \([\vec{u}+\vec{v}\hspace{0.5em} \vec{w}]\) are illustrated below.

Figure 11. Parallelogram generated by \(\vec{u}+\vec{v}\) and \(\vec{w}\)

Describe the value of \(\det([\vec{u}+\vec{v}\hspace{0.5em} \vec{w}])\text{.}\)

  1. \(\displaystyle \det([\vec{u}\hspace{0.5em} \vec{w}])=\det([\vec{v}\hspace{0.5em} \vec{w}])\)

  2. \(\displaystyle \det([\vec{u}\hspace{0.5em} \vec{w}])+\det([\vec{v}\hspace{0.5em} \vec{w}])\)

  3. \(\displaystyle \det([\vec{u}\hspace{0.5em} \vec{w}])\det([\vec{v}\hspace{0.5em} \vec{w}])\)

  4. Cannot be determined from this information.

Definition 5.1.10

The determinant is the unique function \(\det:M_{n,n}\to\IR\) satisfying these properties:

  1. \(\displaystyle \det(I)=1\)
  2. \(\det(A)=0\) whenever two columns of the matrix are identical.
  3. \(\det[\cdots\hspace{0.5em}c\vec{v}\hspace{0.5em}\cdots]= c\det[\cdots\hspace{0.5em}\vec{v}\hspace{0.5em}\cdots]\text{,}\) assuming no other columns change.
  4. \(\det[\cdots\hspace{0.5em}\vec{v}+\vec{w}\hspace{0.5em}\cdots]= \det[\cdots\hspace{0.5em}\vec{v}\hspace{0.5em}\cdots]+ \det[\cdots\hspace{0.5em}\vec{w}\hspace{0.5em}\cdots]\text{,}\) assuming no other columns change.

Note that these last two properties together can be phrased as “The determinant is linear in each column.”
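For example, these properties alone determine the determinant of a diagonal matrix such as \(\left[\begin{array}{cc}5&0\\0&2\end{array}\right]\) (a matrix chosen here just for illustration), whose columns are \(5\vec{e}_1\) and \(2\vec{e}_2\text{:}\)

\begin{equation*} \det\left[\begin{array}{cc}5&0\\0&2\end{array}\right] = \det([5\vec{e}_1\hspace{0.5em}2\vec{e}_2]) = 5\det([\vec{e}_1\hspace{0.5em}2\vec{e}_2]) = (5)(2)\det([\vec{e}_1\hspace{0.5em}\vec{e}_2]) = 10\det(I) = 10\text{,} \end{equation*}

which agrees with the geometric picture: this map stretches the plane horizontally by \(5\) and vertically by \(2\text{,}\) scaling areas by \(10\text{.}\)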

Observation 5.1.11

The determinant must also satisfy other properties. Consider \(\det([\vec v \hspace{1em}\vec w+c \vec{v}])\) and \(\det([\vec v\hspace{1em}\vec w])\text{.}\)

Figure 12. Parallelogram built by \(\vec{v}\) and \(\vec{w}+c\vec{v}\)

The base of both parallelograms is \(\vec{v}\text{,}\) and their heights are the same, so their areas (and thus their determinants) are equal. This can also be proven using the other properties of the determinant:

\begin{align*} \det([\vec{v}\hspace{1em}\vec{w}+c\vec{v}]) &= \det([\vec{v}\hspace{1em}\vec{w}])+ \det([\vec{v}\hspace{1em}c\vec{v}])\\ &= \det([\vec{v}\hspace{1em}\vec{w}])+ c\det([\vec{v}\hspace{1em}\vec{v}])\\ &= \det([\vec{v}\hspace{1em}\vec{w}])+ c\cdot 0\\ &= \det([\vec{v}\hspace{1em}\vec{w}]) \end{align*}

Remark 5.1.12

Swapping columns may be thought of as a reflection, which we represent with a negative determinant. For example, the following matrices transform the unit square into the same parallelogram, but the second matrix reverses its orientation.

\begin{equation*} A=\left[\begin{array}{cc}2&3\\0&4\end{array}\right]\hspace{1em}\det A=8\hspace{3em} B=\left[\begin{array}{cc}3&2\\4&0\end{array}\right]\hspace{1em}\det B=-8 \end{equation*}
Figure 13. Reflection of a parallelogram as a result of swapping columns.

Observation 5.1.13

The fact that swapping columns multiplies the determinant by \(-1\) may be verified by adding and subtracting columns:

\begin{align*} \det([\vec{v}\hspace{1em}\vec{w}]) &= \det([\vec{v}+\vec{w}\hspace{1em}\vec{w}])\\ &= \det([\vec{v}+\vec{w}\hspace{1em}\vec{w}-(\vec{v}+\vec{w})])\\ &= \det([\vec{v}+\vec{w}\hspace{1em}-\vec{v}])\\ &= \det([\vec{v}+\vec{w}-\vec{v}\hspace{1em}-\vec{v}])\\ &= \det([\vec{w}\hspace{1em}-\vec{v}])\\ &= -\det([\vec{w}\hspace{1em}\vec{v}]) \end{align*}

Fact 5.1.14

To summarize, we've shown that the column versions of the three row-reducing operations on a matrix may be used to simplify a determinant in the following way (a worked example appears after this list):

  1. Multiplying a column by a scalar multiplies the determinant by that scalar:

    \begin{equation*} c\det([\cdots\hspace{0.5em}\vec{v}\hspace{0.5em} \cdots])= \det([\cdots\hspace{0.5em}c\vec{v}\hspace{0.5em} \cdots]) \end{equation*}

  2. Swapping two columns changes the sign of the determinant:

    \begin{equation*} \det([\cdots\hspace{0.5em}\vec{v}\hspace{0.5em} \cdots\hspace{1em}\vec{w}\hspace{0.5em} \cdots])= -\det([\cdots\hspace{0.5em}\vec{w}\hspace{0.5em} \cdots\hspace{1em}\vec{v}\hspace{0.5em} \cdots]) \end{equation*}

  3. Adding a multiple of a column to another column does not change the determinant:

    \begin{equation*} \det([\cdots\hspace{0.5em}\vec{v}\hspace{0.5em} \cdots\hspace{1em}\vec{w}\hspace{0.5em} \cdots])= \det([\cdots\hspace{0.5em}\vec{v}+c\vec{w}\hspace{0.5em} \cdots\hspace{1em}\vec{w}\hspace{0.5em} \cdots]) \end{equation*}
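For instance, these column operations let us verify the factor of \(8\) claimed in Observation 5.1.4 for \(B=\left[\begin{array}{cc}2&3\\0&4\end{array}\right]\text{:}\)

\begin{align*} \det\left[\begin{array}{cc} 2 & 3 \\ 0 & 4 \end{array}\right] &= 2 \det \left[\begin{array}{cc} 1 & 3 \\ 0 & 4 \end{array}\right] && \text{(factor 2 out of the first column)}\\ &= 2 \det \left[\begin{array}{cc} 1 & 0 \\ 0 & 4 \end{array}\right] && \text{(subtract 3 times column 1 from column 2)}\\ &= (2)(4) \det \left[\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right] = 8 && \text{(factor 4 out of the second column)} \end{align*}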

Activity 5.1.15 (~5 min)

The transformation given by the standard matrix \(A\) scales areas by \(4\text{,}\) and the transformation given by the standard matrix \(B\) scales areas by \(3\text{.}\) By what factor does the transformation given by the standard matrix \(AB\) scale areas?

Figure 14. Area changing under the composition of two linear maps
  1. \(\displaystyle 1\)

  2. \(\displaystyle 7\)

  3. \(\displaystyle 12\)

  4. Cannot be determined

Fact 5.1.16

Since the transformation given by the standard matrix \(AB\) is obtained by first applying the transformation given by \(B\) and then the transformation given by \(A\text{,}\) it scales areas by both factors in turn, so it follows that

\begin{equation*} \det(AB)=\det(A)\det(B)=\det(B)\det(A)=\det(BA)\text{.} \end{equation*}
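For instance, taking \(A=\left[\begin{array}{cc}2&0\\0&3\end{array}\right]\) and \(B=\left[\begin{array}{cc}2&3\\0&4\end{array}\right]\) from the earlier activities, which scale areas by \(6\) and \(8\) respectively, we have

\begin{equation*} AB=\left[\begin{array}{cc}2&0\\0&3\end{array}\right]\left[\begin{array}{cc}2&3\\0&4\end{array}\right]=\left[\begin{array}{cc}4&6\\0&12\end{array}\right]\text{,} \end{equation*}

and column-reducing \(AB\) as in Fact 5.1.14 gives \(\det(AB)=(4)(12)=48=(6)(8)=\det(A)\det(B)\text{.}\)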

Remark 5.1.17

Recall that row operations may be produced by matrix multiplication, as illustrated after this list.

  • Multiply the first row of \(A\) by \(c\text{:}\) \(\left[\begin{array}{cccc} c & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right]A\)

  • Swap the first and second row of \(A\text{:}\) \(\left[\begin{array}{cccc} 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right]A\)

  • Add \(c\) times the third row to the first row of \(A\text{:}\) \(\left[\begin{array}{cccc} 1 & 0 & c & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right]A\)
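As an informal sketch of why such a product performs the row operation, write \(A\) in terms of its rows \(R_1,R_2,R_3,R_4\text{;}\) multiplying by the row-swapping matrix above, for example, simply rearranges those rows:

\begin{equation*} \left[\begin{array}{cccc} 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right] \left[\begin{array}{c}R_1\\R_2\\R_3\\R_4\end{array}\right] = \left[\begin{array}{c}R_2\\R_1\\R_3\\R_4\end{array}\right] \end{equation*}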

Fact 5.1.18

The determinants of row operation matrices may be computed by manipulating columns to reduce each matrix to the identity:

  • Scaling a row: \(\det \left[\begin{array}{cccc} c & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right] = c\det \left[\begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right] = c\)

  • Swapping rows: \(\det \left[\begin{array}{cccc} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right] = -1\det \left[\begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right] = -1\)

  • Adding a row multiple to another row: \(\det \left[\begin{array}{cccc} 1 & 0 & c & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right] = \det \left[\begin{array}{cccc} 1 & 0 & c-1c & 0\\ 0 & 1 & 0-0c & 0\\ 0 & 0 & 1-0c & 0 \\ 0 & 0 & 0-0c & 1 \end{array}\right] = \det(I)=1\)

Activity 5.1.19 (~5 min)

Consider the row operation \(R_1+4R_3\to R_1\) applied as follows to show \(A\sim B\text{:}\)

\begin{equation*} A=\left[\begin{array}{cccc}1&2&3 & 4\\5&6 & 7 & 8\\9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16\end{array}\right] \sim \left[\begin{array}{cccc}1+4(9)&2+4(10)&3+4(11) & 4+4(12) \\5&6 & 7 & 8\\9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16\end{array}\right]=B \end{equation*}

Part 1.

Find a matrix \(R\) such that \(B=RA\text{,}\) by applying the same row operation to \(I=\left[\begin{array}{cccc}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{array}\right]\text{.}\)

Activity 5.1.19 (~5 min)

Consider the row operation \(R_1+4R_3\to R_1\) applied as follows to show \(A\sim B\text{:}\)

\begin{equation*} A=\left[\begin{array}{cccc}1&2&3 & 4\\5&6 & 7 & 8\\9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16\end{array}\right] \sim \left[\begin{array}{cccc}1+4(9)&2+4(10)&3+4(11) & 4+4(12) \\5&6 & 7 & 8\\9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16\end{array}\right]=B \end{equation*}

Part 2.

Find \(\det R\) by comparing with the previous slide.

Activity 5.1.19 (~5 min)

Consider the row operation \(R_1+4R_3\to R_1\) applied as follows to show \(A\sim B\text{:}\)

\begin{equation*} A=\left[\begin{array}{cccc}1&2&3 & 4\\5&6 & 7 & 8\\9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16\end{array}\right] \sim \left[\begin{array}{cccc}1+4(9)&2+4(10)&3+4(11) & 4+4(12) \\5&6 & 7 & 8\\9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16\end{array}\right]=B \end{equation*}

Part 3.

If \(C \in M_{4,4}\) is a matrix with \(\det(C)= -3\text{,}\) find

\begin{equation*} \det(RC)=\det(R)\det(C). \end{equation*}

Activity 5.1.20 (~5 min)

Consider the row operation \(R_1\leftrightarrow R_3\) applied as follows to show \(A\sim B\text{:}\)

\begin{equation*} A=\left[\begin{array}{cccc}1&2&3&4\\5&6&7&8\\9&10&11&12 \\ 13 & 14 & 15 & 16\end{array}\right] \sim \left[\begin{array}{cccc}9&10&11&12\\5&6&7&8\\1&2&3&4 \\ 13 & 14 & 15 & 16\end{array}\right]=B \end{equation*}

Part 1.

Find a matrix \(R\) such that \(B=RA\text{,}\) by applying the same row operation to \(I\text{.}\)

Activity 5.1.20 (~5 min)

Consider the row operation \(R_1\leftrightarrow R_3\) applied as follows to show \(A\sim B\text{:}\)

\begin{equation*} A=\left[\begin{array}{cccc}1&2&3&4\\5&6&7&8\\9&10&11&12 \\ 13 & 14 & 15 & 16\end{array}\right] \sim \left[\begin{array}{cccc}9&10&11&12\\5&6&7&8\\1&2&3&4 \\ 13 & 14 & 15 & 16\end{array}\right]=B \end{equation*}

Part 2.

If \(C \in M_{4,4}\) is a matrix with \(\det(C)= 5\text{,}\) find \(\det(RC)\text{.}\)

Activity 5.1.21 (~5 min)

Consider the row operation \(3R_2\to R_2\) applied as follows to show \(A\sim B\text{:}\)

\begin{equation*} A=\left[\begin{array}{cccc}1&2&3&4\\5&6&7&8\\9&10&11&12 \\ 13 & 14 & 15 & 16\end{array}\right] \sim \left[\begin{array}{cccc}1&2&3&4\\3(5)&3(6)&3(7)&3(8)\\9&10&11&12 \\ 13 & 14 & 15 & 16\end{array}\right]=B \end{equation*}

Part 1.

Find a matrix \(R\) such that \(B=RA\text{.}\)

Activity 5.1.21 (~5 min)

Consider the row operation \(3R_2\to R_2\) applied as follows to show \(A\sim B\text{:}\)

\begin{equation*} A=\left[\begin{array}{cccc}1&2&3&4\\5&6&7&8\\9&10&11&12 \\ 13 & 14 & 15 & 16\end{array}\right] \sim \left[\begin{array}{cccc}1&2&3&4\\3(5)&3(6)&3(7)&3(8)\\9&10&11&12 \\ 13 & 14 & 15 & 16\end{array}\right]=B \end{equation*}

Part 2.

If \(C \in M_{4,4}\) is a matrix with \(\det(C)= -7\text{,}\) find \(\det(RC)\text{.}\)

Remark 5.1.22

Recall that the column versions of the three row-reducing operations on a matrix may be used to simplify a determinant:

  1. Multiplying columns by scalars:

    \begin{equation*} \det([\cdots\hspace{0.5em}c\vec{v}\hspace{0.5em} \cdots])= c\det([\cdots\hspace{0.5em}\vec{v}\hspace{0.5em} \cdots]) \end{equation*}

  2. Swapping two columns:

    \begin{equation*} \det([\cdots\hspace{0.5em}\vec{v}\hspace{0.5em} \cdots\hspace{1em}\vec{w}\hspace{0.5em} \cdots])= -\det([\cdots\hspace{0.5em}\vec{w}\hspace{0.5em} \cdots\hspace{1em}\vec{v}\hspace{0.5em} \cdots]) \end{equation*}

  3. Adding a multiple of a column to another column:

    \begin{equation*} \det([\cdots\hspace{0.5em}\vec{v}\hspace{0.5em} \cdots\hspace{1em}\vec{w}\hspace{0.5em} \cdots])= \det([\cdots\hspace{0.5em}\vec{v}+c\vec{w}\hspace{0.5em} \cdots\hspace{1em}\vec{w}\hspace{0.5em} \cdots]) \end{equation*}

Remark 5.1.23

The determinants of row operation matrices may be computed by manipulating columns to reduce each matrix to the identity:

  • Scaling a row: \(\det \left[\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & c & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right] = c\)

  • Swapping rows: \(\det \left[\begin{array}{cccc} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right] = -1\)

  • Adding a row multiple to another row: \(\det \left[\begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & c & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right] = 1\)

Fact 5.1.24

Thus we can use row operations, as well as column operations, to simplify determinants:

  • Multiplying rows by scalars:

    \begin{equation*} \det\left[\begin{array}{c}\vdots\\cR\\\vdots\end{array}\right]= c\det\left[\begin{array}{c}\vdots\\R\\\vdots\end{array}\right] \end{equation*}

  • Swapping two rows:

    \begin{equation*} \det\left[\begin{array}{c}\vdots\\R\\\vdots\\S\\\vdots\end{array}\right]= -\det\left[\begin{array}{c}\vdots\\S\\\vdots\\R\\\vdots\end{array}\right] \end{equation*}

  • Adding a multiple of one row to another row:

    \begin{equation*} \det\left[\begin{array}{c}\vdots\\R\\\vdots\\S\\\vdots\end{array}\right]= \det\left[\begin{array}{c}\vdots\\R+cS\\\vdots\\S\\\vdots\end{array}\right] \end{equation*}

Observation 5.1.25

So we may compute the determinant of \(\left[\begin{array}{cc} 2 & 4 \\ 2 & 3 \end{array}\right]\) by manipulating its rows/columns to reduce the matrix to \(I\text{:}\)

\begin{align*} \det\left[\begin{array}{cc} 2 & 4 \\ 2 & 3 \end{array}\right] &= 2 \det \left[\begin{array}{cc} 1 & 2 \\ 2 & 3 \end{array}\right]\\ &= 2 \det \left[\begin{array}{cc} 1 & 2 \\ 0 & -1 \end{array}\right]\\ &= -2 \det \left[\begin{array}{cc} 1 & -2 \\ 0 & 1 \end{array}\right]\\ &= -2 \det \left[\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right]\\ &= -2 \end{align*}
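Different valid sequences of operations yield the same value. For instance, here is an alternative reduction of the same matrix using the row operations from Fact 5.1.24:

\begin{align*} \det\left[\begin{array}{cc} 2 & 4 \\ 2 & 3 \end{array}\right] &= \det \left[\begin{array}{cc} 2 & 4 \\ 0 & -1 \end{array}\right] && \text{(subtract row 1 from row 2)}\\ &= 2 \det \left[\begin{array}{cc} 1 & 2 \\ 0 & -1 \end{array}\right] && \text{(factor 2 out of row 1)}\\ &= -2 \det \left[\begin{array}{cc} 1 & 2 \\ 0 & 1 \end{array}\right] && \text{(factor } -1 \text{ out of row 2)}\\ &= -2 \det \left[\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right] = -2 && \text{(subtract 2 times row 2 from row 1)} \end{align*}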