
Section 5.1 Row Operations and Determinants (GT1)

Subsection 5.1.1 Class Activities

Activity 5.1.1.

The image in Figure 46 illustrates how the linear transformation \(T : \IR^2 \rightarrow \IR^2\) given by the standard matrix \(A = \left[\begin{array}{cc} 2 & 0 \\ 0 & 3 \end{array}\right]\) transforms the unit square.

Figure 46. Transformation of the unit square by the matrix \(A\text{.}\)
(a)

What are the lengths of \(A\vec e_1\) and \(A\vec e_2\text{?}\)

(b)

What is the area of the transformed unit square?

Activity 5.1.2.

The image below illustrates how the linear transformation \(S : \IR^2 \rightarrow \IR^2\) given by the standard matrix \(B = \left[\begin{array}{cc} 2 & 3 \\ 0 & 4 \end{array}\right]\) transforms the unit square.

Figure 47. Transformation of the unit square by the matrix \(B\)
(a)

What are the lengths of \(B\vec e_1\) and \(B\vec e_2\text{?}\)

(b)

What is the area of the transformed unit square?

Observation 5.1.3.

It is possible to find two nonparallel vectors that are scaled but not rotated by the linear map given by \(B\text{.}\)

\begin{equation*} B\vec e_1=\left[\begin{array}{cc} 2 & 3 \\ 0 & 4 \end{array}\right]\left[\begin{array}{c}1\\0\end{array}\right] =\left[\begin{array}{c}2\\0\end{array}\right]=2\vec e_1 \end{equation*}
\begin{equation*} B\left[\begin{array}{c}\frac{3}{4}\\\frac{1}{2}\end{array}\right] = \left[\begin{array}{cc} 2 & 3 \\ 0 & 4 \end{array}\right]\left[\begin{array}{c}\frac{3}{4}\\\frac{1}{2}\end{array}\right] = \left[\begin{array}{c}3\\2\end{array}\right] = 4\left[\begin{array}{c}\frac{3}{4}\\\frac{1}{2}\end{array}\right] \end{equation*}
Figure 48. Certain vectors are stretched out without being rotated.

The process for finding such vectors will be covered later in this chapter.
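
These computations can be spot-checked numerically. Below is a minimal sketch, assuming NumPy is available; the test vectors are the ones from the computation above.

```python
import numpy as np

# Spot-check: B stretches e1 by a factor of 2 and (3/4, 1/2) by a factor of 4,
# without rotating either vector.
B = np.array([[2, 3],
              [0, 4]])
e1 = np.array([1, 0])
v = np.array([3 / 4, 1 / 2])

print(np.allclose(B @ e1, 2 * e1))  # True
print(np.allclose(B @ v, 4 * v))    # True
```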

Observation 5.1.4.

Notice that while a linear map can transform vectors in various ways, it always transforms parallelograms into parallelograms, and the areas of these parallelograms are always scaled by the same factor: in the case of \(B=\left[\begin{array}{cc} 2 & 3 \\ 0 & 4 \end{array}\right]\text{,}\) this factor is \(8\text{.}\)

Figure 49. A linear map transforming parallelograms into parallelograms.

Since this scaling factor is the same for every region transformed by a given linear map, it is equal to the area of the transformed unit square (which begins with area \(1\)).
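
The claim that every area is scaled by the same factor of \(8\) can also be spot-checked numerically. Below is a minimal sketch, assuming NumPy is available, which measures the area of the parallelogram spanned by \(\vec p\) and \(\vec q\) as \(|p_1q_2-p_2q_1|\) and compares it with the area of its image under \(B\text{.}\)

```python
import numpy as np

# Area of the parallelogram spanned by p and q in R^2.
def area(p, q):
    return abs(p[0] * q[1] - p[1] * q[0])

B = np.array([[2.0, 3.0],
              [0.0, 4.0]])

rng = np.random.default_rng(0)
for _ in range(5):
    p, q = rng.normal(size=2), rng.normal(size=2)
    print(area(B @ p, B @ q) / area(p, q))  # prints 8.0 (up to rounding) each time
```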

Remark 5.1.5.

We will define the determinant of a square matrix \(B\text{,}\) written \(\det(B)\) for short, to be the factor by which \(B\) scales areas. To learn how to compute it, we first establish the properties it must satisfy.

Figure 50. The linear transformation \(B\) scaling areas by a constant factor, which we call the determinant

Activity 5.1.6.

The transformation of the unit square by the standard matrix \([\vec{e}_1\hspace{0.5em} \vec{e}_2]=\left[\begin{array}{cc}1&0\\0&1\end{array}\right]=I\) is illustrated below. If \(\det([\vec{e}_1\hspace{0.5em} \vec{e}_2])=\det(I)\) is the area of resulting parallelogram, what is the value of \(\det([\vec{e}_1\hspace{0.5em} \vec{e}_2])=\det(I)\text{?}\)

Figure 51. The transformation of the unit square by the identity matrix.

The value for \(\det([\vec{e}_1\hspace{0.5em} \vec{e}_2])=\det(I)\) is:

  1. 0

  2. 1

  3. 2

  4. 4

Activity 5.1.7.

The transformation of the unit square by the standard matrix \([\vec{v}\hspace{0.5em} \vec{v}]\) is illustrated below; note that \(T(\vec{e}_1)=T(\vec{e}_2)=\vec{v}\text{.}\) If \(\det([\vec{v}\hspace{0.5em} \vec{v}])\) is the area of the generated parallelogram, what is the value of \(\det([\vec{v}\hspace{0.5em} \vec{v}])\text{?}\)

Figure 52. Transformation of the unit square by a matrix with identical columns.

The value of \(\det([\vec{v}\hspace{0.5em} \vec{v}])\) is:

  1. 0

  2. 1

  3. 2

  4. 4

Activity 5.1.8.

The transformations of the unit square by the standard matrices \([\vec{v}\hspace{0.5em} \vec{w}]\) and \([c\vec{v}\hspace{0.5em} \vec{w}]\) are illustrated below. Describe the value of \(\det([c\vec{v}\hspace{0.5em} \vec{w}])\text{.}\)

Figure 53. The parallelograms generated by \(\vec{v}\)/\(c\vec{v}\) and \(\vec{w}\)

Describe the value of \(\det([c\vec{v}\hspace{0.5em} \vec{w}])\text{:}\)

  1. \(\displaystyle \det([\vec{v}\hspace{0.5em} \vec{w}])\)

  2. \(\displaystyle c\det([\vec{v}\hspace{0.5em} \vec{w}])\)

  3. \(\displaystyle c^2\det([\vec{v}\hspace{0.5em} \vec{w}])\)

  4. Cannot be determined from this information.

Consider the vectors \(\vec{u}\text{,}\) \(\vec{v}\text{,}\) \(\vec{u}+\vec{v}\text{,}\) and \(\vec{w}\) displayed below. Each pair of vectors generates a parallelogram, and the area of each parallelogram can be described in terms of determinants.

Figure 54. The vectors \(\vec{u}\text{,}\) \(\vec{v}\text{,}\) \(\vec{u}+\vec{v}\) and \(\vec{w}\)

For example, \(\det([\vec{u}\hspace{0.5em} \vec{w}])\) represents the shaded area shown below.

Figure 55. Parallelogram generated by \(\vec{u}\) and \(\vec{w}\)

Similarly, \(\det([\vec{v}\hspace{0.5em} \vec{w}])\) represents the shaded area shown below.

Figure 56. Parallelogram generated by \(\vec{v}\) and \(\vec{w}\)

Activity 5.1.9.

The parallelograms generated by the standard matrices \([\vec{u}\hspace{0.5em} \vec{w}]\text{,}\) \([\vec{v}\hspace{0.5em} \vec{w}]\text{,}\) and \([\vec{u}+\vec{v}\hspace{0.5em} \vec{w}]\) are illustrated below.

Figure 57. Parallelogram generated by \(\vec{u}+\vec{v}\) and \(\vec{w}\)

Describe the value of \(\det([\vec{u}+\vec{v}\hspace{0.5em} \vec{w}])\text{.}\)

  1. \(\displaystyle \det([\vec{u}\hspace{0.5em} \vec{w}])=\det([\vec{v}\hspace{0.5em} \vec{w}])\)

  2. \(\displaystyle \det([\vec{u}\hspace{0.5em} \vec{w}])+\det([\vec{v}\hspace{0.5em} \vec{w}])\)

  3. \(\displaystyle \det([\vec{u}\hspace{0.5em} \vec{w}])\det([\vec{v}\hspace{0.5em} \vec{w}])\)

  4. Cannot be determined from this information.

Definition 5.1.10.

The determinant is the unique function \(\det:M_{n,n}\to\IR\) satisfying these properties:

  1. \(\displaystyle \det(I)=1\)

  2. \(\det(A)=0\) whenever two columns of the matrix are identical.

  3. \(\det[\cdots\hspace{0.5em}c\vec{v}\hspace{0.5em}\cdots]= c\det[\cdots\hspace{0.5em}\vec{v}\hspace{0.5em}\cdots]\text{,}\) assuming no other columns change.

  4. \(\det[\cdots\hspace{0.5em}\vec{v}+\vec{w}\hspace{0.5em}\cdots]= \det[\cdots\hspace{0.5em}\vec{v}\hspace{0.5em}\cdots]+ \det[\cdots\hspace{0.5em}\vec{w}\hspace{0.5em}\cdots]\text{,}\) assuming no other columns change.

Note that these last two properties together can be phrased as “The determinant is linear in each column.”
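
These four properties are straightforward to spot-check on small matrices. Below is a minimal sketch, assuming NumPy is available, using one particular choice of columns \(\vec v\text{,}\) \(\vec w\) and scalar \(c\text{.}\)

```python
import numpy as np

det = np.linalg.det
col = np.column_stack  # assemble a matrix from its columns, matching the notation [v w]

v = np.array([1.0, 2.0])
w = np.array([3.0, -1.0])
c = 5.0

print(np.isclose(det(np.eye(2)), 1))                                          # property 1
print(np.isclose(det(col([v, v])), 0))                                        # property 2
print(np.isclose(det(col([c * v, w])), c * det(col([v, w]))))                 # property 3
print(np.isclose(det(col([v + w, w])), det(col([v, w])) + det(col([w, w]))))  # property 4
```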

Observation 5.1.11.

The determinant must also satisfy other properties. Consider \(\det([\vec v \hspace{1em}\vec w+c \vec{v}])\) and \(\det([\vec v\hspace{1em}\vec w])\text{.}\)

Figure 58. The parallelograms generated by \(\vec{v}\) and \(\vec{w}\text{,}\) and by \(\vec{v}\) and \(\vec{w}+c\vec{v}\)

The base of both parallelograms is \(\vec{v}\text{,}\) while the height has not changed, so the determinant does not change either. This can also be proven using the other properties of the determinant:

\begin{align*} \det([\vec{v}\hspace{1em}\vec{w}+c\vec{v}]) &= \det([\vec{v}\hspace{1em}\vec{w}])+ \det([\vec{v}\hspace{1em}c\vec{v}])\\ &= \det([\vec{v}\hspace{1em}\vec{w}])+ c\det([\vec{v}\hspace{1em}\vec{v}])\\ &= \det([\vec{v}\hspace{1em}\vec{w}])+ c\cdot 0\\ &= \det([\vec{v}\hspace{1em}\vec{w}]) \end{align*}
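
The same invariance can be spot-checked on a larger random matrix. Below is a minimal sketch, assuming NumPy is available.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))

B = A.copy()
B[:, 1] += 3 * B[:, 0]  # column operation: add 3 times the first column to the second

print(np.isclose(np.linalg.det(A), np.linalg.det(B)))  # True
```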

Remark 5.1.12.

Swapping columns may be thought of as a reflection, which is represented by a negative determinant. For example, the following matrices transform the unit square into the same parallelogram, but the second matrix reflects its orientation.

\begin{equation*} A=\left[\begin{array}{cc}2&3\\0&4\end{array}\right]\hspace{1em}\det A=8\hspace{3em} B=\left[\begin{array}{cc}3&2\\4&0\end{array}\right]\hspace{1em}\det B=-8 \end{equation*}
Figure 59. Reflection of a parallelogram as a result of swapping columns.
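
The values \(\det A=8\) and \(\det B=-8\) can be confirmed numerically. Below is a minimal sketch, assuming NumPy is available.

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [0.0, 4.0]])
B = np.array([[3.0, 2.0],
              [4.0, 0.0]])

print(np.linalg.det(A), np.linalg.det(B))  # approximately 8.0 and -8.0
```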

Observation 5.1.13.

The fact that swapping columns multiplies the determinant by \(-1\) may be verified by adding and subtracting columns.

\begin{align*} \det([\vec{v}\hspace{1em}\vec{w}]) &= \det([\vec{v}+\vec{w}\hspace{1em}\vec{w}])\\ &= \det([\vec{v}+\vec{w}\hspace{1em}\vec{w}-(\vec{v}+\vec{w})])\\ &= \det([\vec{v}+\vec{w}\hspace{1em}-\vec{v}])\\ &= \det([\vec{v}+\vec{w}-\vec{v}\hspace{1em}-\vec{v}])\\ &= \det([\vec{w}\hspace{1em}-\vec{v}])\\ &= -\det([\vec{w}\hspace{1em}\vec{v}]) \end{align*}
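
This chain of equalities can be traced numerically for a random pair of columns. Below is a minimal sketch, assuming NumPy is available; each line should print (approximately) the same value.

```python
import numpy as np

det = np.linalg.det
col = np.column_stack

rng = np.random.default_rng(2)
v, w = rng.normal(size=2), rng.normal(size=2)

print(det(col([v, w])))
print(det(col([v + w, w])))    # add the second column to the first
print(det(col([v + w, -v])))   # subtract the new first column from the second
print(det(col([w, -v])))       # add the second column to the first
print(-det(col([w, v])))       # factor -1 out of the second column
```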

Activity 5.1.15.

The transformation given by the standard matrix \(A\) scales areas by \(4\text{,}\) and the transformation given by the standard matrix \(B\) scales areas by \(3\text{.}\) By what factor does the transformation given by the standard matrix \(AB\) scale areas?

Figure 60. Area changing under the composition of two linear maps
  1. \(\displaystyle 1\)

  2. \(\displaystyle 7\)

  3. \(\displaystyle 12\)

  4. Cannot be determined

Remark 5.1.17.

Recall that row operations may be produced by matrix multiplication (a numerical check follows this list).

  • Multiply the first row of \(A\) by \(c\text{:}\) \(\left[\begin{array}{cccc} c & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right]A\)

  • Swap the first and second row of \(A\text{:}\) \(\left[\begin{array}{cccc} 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right]A\)

  • Add \(c\) times the third row to the first row of \(A\text{:}\) \(\left[\begin{array}{cccc} 1 & 0 & c & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right]A\)
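
Each of these products can be verified directly. Below is a minimal sketch, assuming NumPy is available and using an arbitrary \(4\times 4\) test matrix and \(c=2\text{.}\)

```python
import numpy as np

# A generic 4x4 test matrix with rows (1,2,3,4), (5,6,7,8), (9,10,11,12), (13,14,15,16).
A = np.arange(1, 17, dtype=float).reshape(4, 4)
c = 2.0

scale = np.diag([c, 1.0, 1.0, 1.0])   # multiply the first row by c

swap = np.eye(4)
swap[[0, 1]] = swap[[1, 0]]           # swap the first and second rows

add = np.eye(4)
add[0, 2] = c                         # add c times the third row to the first row

print(scale @ A)  # first row of A scaled by c
print(swap @ A)   # first two rows of A exchanged
print(add @ A)    # first row replaced by (row 1) + c*(row 3)
```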

Activity 5.1.19.

Consider the row operation \(R_1+4R_3\to R_1\) applied as follows to show \(A\sim B\text{:}\)

\begin{equation*} A=\left[\begin{array}{cccc}1&2&3 & 4\\5&6 & 7 & 8\\9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16\end{array}\right] \sim \left[\begin{array}{cccc}1+4(9)&2+4(10)&3+4(11) & 4+4(12) \\5&6 & 7 & 8\\9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16\end{array}\right]=B \end{equation*}
(a)

Find a matrix \(R\) such that \(B=RA\text{,}\) by applying the same row operation to \(I=\left[\begin{array}{cccc}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{array}\right]\text{.}\)

(b)

Find \(\det R\) by comparing with the previous remark.

(c)

If \(C \in M_{4,4}\) is a matrix with \(\det(C)= -3\text{,}\) find

\begin{equation*} \det(RC)=\det(R)\det(C). \end{equation*}

Activity 5.1.20.

Consider the row operation \(R_1\leftrightarrow R_3\) applied as follows to show \(A\sim B\text{:}\)

\begin{equation*} A=\left[\begin{array}{cccc}1&2&3&4\\5&6&7&8\\9&10&11&12 \\ 13 & 14 & 15 & 16\end{array}\right] \sim \left[\begin{array}{cccc}9&10&11&12\\5&6&7&8\\1&2&3&4 \\ 13 & 14 & 15 & 16\end{array}\right]=B \end{equation*}
(a)

Find a matrix \(R\) such that \(B=RA\text{,}\) by applying the same row operation to \(I\text{.}\)

(b)

If \(C \in M_{4,4}\) is a matrix with \(\det(C)= 5\text{,}\) find \(\det(RC)\text{.}\)

Activity 5.1.21.

Consider the row operation \(3R_2\to R_2\) applied as follows to show \(A\sim B\text{:}\)

\begin{equation*} A=\left[\begin{array}{cccc}1&2&3&4\\5&6&7&8\\9&10&11&12 \\ 13 & 14 & 15 & 16\end{array}\right] \sim \left[\begin{array}{cccc}1&2&3&4\\3(5)&3(6)&3(7)&3(8)\\9&10&11&12 \\ 13 & 14 & 15 & 16\end{array}\right]=B \end{equation*}
(a)

Find a matrix \(R\) such that \(B=RA\text{.}\)

(b)

If \(C \in M_{4,4}\) is a matrix with \(\det(C)= -7\text{,}\) find \(\det(RC)\text{.}\)

Remark 5.1.22.

Recall that the column versions of the three row-reducing operations may be used to simplify a determinant (a symbolic check follows this list):

  1. Multiplying columns by scalars:

    \begin{equation*} \det([\cdots\hspace{0.5em}c\vec{v}\hspace{0.5em} \cdots])= c\det([\cdots\hspace{0.5em}\vec{v}\hspace{0.5em} \cdots]) \end{equation*}
  2. Swapping two columns:

    \begin{equation*} \det([\cdots\hspace{0.5em}\vec{v}\hspace{0.5em} \cdots\hspace{1em}\vec{w}\hspace{0.5em} \cdots])= -\det([\cdots\hspace{0.5em}\vec{w}\hspace{0.5em} \cdots\hspace{1em}\vec{v}\hspace{0.5em} \cdots]) \end{equation*}
  3. Adding a multiple of a column to another column:

    \begin{equation*} \det([\cdots\hspace{0.5em}\vec{v}\hspace{0.5em} \cdots\hspace{1em}\vec{w}\hspace{0.5em} \cdots])= \det([\cdots\hspace{0.5em}\vec{v}+c\vec{w}\hspace{0.5em} \cdots\hspace{1em}\vec{w}\hspace{0.5em} \cdots]) \end{equation*}
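
These three rules can be checked symbolically on a general \(2\times 2\) matrix. Below is a minimal sketch, assuming SymPy is available; each printed expression simplifies to \(0\text{.}\)

```python
import sympy as sp

a, b, x, y, c = sp.symbols('a b x y c')
v = sp.Matrix([a, b])  # first column
w = sp.Matrix([x, y])  # second column

def det(*cols):
    # Determinant of the matrix whose columns are the given vectors.
    return sp.Matrix.hstack(*cols).det()

print(sp.simplify(det(c * v, w) - c * det(v, w)))   # rule 1: scaling a column
print(sp.simplify(det(v, w) + det(w, v)))           # rule 2: swapping columns flips the sign
print(sp.simplify(det(v + c * w, w) - det(v, w)))   # rule 3: adding a multiple of a column
```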

Remark 5.1.23.

The determinants of row operation matrices may be computed by manipulating columns to reduce each matrix to the identity (a numerical cross-check follows this list):

  • Scaling a row: \(\left[\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & c & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right]\)

  • Swapping rows: \(\left[\begin{array}{cccc} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right]\)

  • Adding a row multiple to another row: \(\left[\begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & c & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right]\)
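
As a cross-check, the determinant of each matrix above may also be computed numerically. Below is a minimal sketch, assuming NumPy is available and taking \(c=5\text{;}\) each printed value is the factor by which the corresponding row operation scales determinants.

```python
import numpy as np

c = 5.0

scale = np.diag([1.0, c, 1.0, 1.0])   # scale the second row by c

swap = np.eye(4)
swap[[0, 1]] = swap[[1, 0]]           # swap the first two rows

add = np.eye(4)
add[1, 2] = c                         # add c times the third row to the second row

for R in (scale, swap, add):
    print(round(np.linalg.det(R), 10))
```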

Observation 5.1.25.

So we may compute the determinant of \(\left[\begin{array}{cc} 2 & 4 \\ 2 & 3 \end{array}\right]\) by manipulating its rows/columns to reduce the matrix to \(I\text{:}\)

\begin{align*} \det\left[\begin{array}{cc} 2 & 4 \\ 2 & 3 \end{array}\right] &= 2 \det \left[\begin{array}{cc} 1 & 2 \\ 2 & 3 \end{array}\right]\\ &= 2 \det \left[\begin{array}{cc} 1 & 2 \\ 0 & -1 \end{array}\right]\\ &= -2 \det \left[\begin{array}{cc} 1 & -2 \\ 0 & 1 \end{array}\right]\\ &= -2 \det \left[\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right]\\ &= -2\det(I)\\ &= -2 \end{align*}
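
A direct numerical computation agrees with this reduction. Below is a minimal sketch, assuming NumPy is available.

```python
import numpy as np

print(np.linalg.det(np.array([[2.0, 4.0],
                              [2.0, 3.0]])))  # -2.0 up to floating-point rounding
```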

Subsection 5.1.2 Videos

Figure 61. Video: Row operations, matrix multiplication, and determinants

Subsection 5.1.3 Slideshow

Slideshow of activities available at https://teambasedinquirylearning.github.io/linear-algebra/2022/GT1.slides.html.

Exercises 5.1.4 Exercises

Exercises available at https://teambasedinquirylearning.github.io/linear-algebra/2022/exercises/#/bank/GT1/.

Subsection 5.1.5 Mathematical Writing Explorations

Exploration 5.1.26.

  • Prove or disprove. The determinant is a linear operator on the vector space of \(n \times n\) matrices.

  • Find a matrix that will double the area of a region in \(\mathbb{R}^2\text{.}\)

  • Find a matrix that will triple the area of a region in \(\mathbb{R}^2\text{.}\)

  • Find a matrix that will halve the area of a region in \(\mathbb{R}^2\text{.}\)

Subsection 5.1.6 Sample Problem and Solution

Sample problem Example B.1.21.