
Section B.1 Sample Exercises with Solutions

Here we model one exercise and solution for each learning objective. Your solutions should not look identical to those shown below, but these solutions can give you an idea of the level of detail required for a complete solution.

Example B.1.1. E1.

Consider the scalar system of equations

\begin{alignat*}{5} 3x_1 &\,+\,& 2x_2 &\,\,& &\,+\,&x_4 &= 1 \\ -x_1 &\,-\,& 4x_2 &\,+\,&x_3&\,-\,&7x_4 &= 0 \\ &\,\,& x_2 &\,-\,&x_3 &\,\,& &= -2 \end{alignat*}
  1. Rewrite this system as a vector equation.

  2. Write an augmented matrix corresponding to this system.

Solution.
  1. \begin{equation*} x_1\left[\begin{array}{c} 3 \\ -1 \\ 0 \end{array}{}\right] + x_2 \left[\begin{array}{c}2 \\ -4 \\ 1 \end{array}{}\right]+ x_3 \left[\begin{array}{c} 0 \\ 1 \\ -1 \end{array}{}\right] + x_4 \left[\begin{array}{c} 1 \\ -7 \\ 0 \end{array}{}\right] = \left[\begin{array}{c} 1 \\ 0 \\ -2 \end{array}{}\right] \end{equation*}

  2. \begin{equation*} \left[\begin{array}{cccc|c} 3 & 2 & 0 & 1 & 1 \\ -1 & -4 & 1 & -7 & 0 \\ 0 & 1 & -1 & 0 & -2 \end{array}\right] \end{equation*}

Example B.1.2. E2.

  1. Show that

    \begin{equation*} \RREF \left[\begin{array}{cccc} 0 & 3 & 1 & 2 \\ 1 & 2 & -1 & -3 \\ 2 & 4 & -1 & -1 \end{array}\right] = \left[\begin{array}{cccc} 1 & 0 & 0 & 4 \\ 0 & {1} & 0 & -1 \\ 0 & 0 & {1} & 5 \end{array}\right]. \end{equation*}
  2. Explain why the matrix \(\left[\begin{array}{cccc} 1 & 1 & 0 & 3 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{array}\right]\) is \textbf{not} in reduced row echelon form.

Solution.
  1. \begin{alignat*}{4} \left[\begin{array}{cccc} 0 & 3 & 1 & 2 \\ 1 & 2 & -1 & -3 \\ 2 & 4 & -1 & -1 \end{array}\right] &\sim& \left[\begin{array}{cccc} \circledNumber{1} & 2 & -1 & -3 \\ 0 & 3 & 1 & 2 \\ 2 & 4 & -1 & -1 \end{array}\right] &\hspace{0.2in} \text{Swap Rows 1 and 2}& \\ &\sim& \left[\begin{array}{cccc} \circledNumber{1} & 2 & -1 & -3 \\ 0 & 3 & 1 & 2 \\ 0 & 0 & 1 & 5 \end{array}\right] &\hspace{0.2in} \text{Add } -2 \text{ times Row 1 to Row 3}& \\ &\sim& \left[\begin{array}{cccc} \circledNumber{1} & 2 & -1 & -3 \\ 0 & \circledNumber{1} & \frac{1}{3} & \frac{2}{3} \\ 0 & 0 & 1 & 5 \end{array}\right] &\hspace{0.2in} \text{Multiply Row 2 by } \frac{1}{3}& \\ &\sim& \left[\begin{array}{cccc} \circledNumber{1} & 0 & -\frac{5}{3} & -\frac{13}{3} \\ 0 & \circledNumber{1} & \frac{1}{3} & \frac{2}{3} \\ 0 & 0 & \circledNumber{1} & 5 \end{array}\right] &\hspace{0.2in} \text{Add } -2 \text{ times Row 2 to Row 1}& \\ &\sim& \left[\begin{array}{cccc} \circledNumber{1} & 0 & -\frac{5}{3} & -\frac{13}{3} \\ 0 & \circledNumber{1} & 0 & -1 \\ 0 & 0 & \circledNumber{1} & 5 \end{array}\right] &\hspace{0.2in} \text{Add } -\frac{1}{3} \text{ times Row 3 to Row 2}& \\ &\sim& \left[\begin{array}{cccc} \circledNumber{1} & 0 & 0 & 4 \\ 0 & \circledNumber{1} & 0 & -1 \\ 0 & 0 & \circledNumber{1} & 5 \end{array}\right] &\hspace{0.2in} \text{Add } \frac{5}{3} \text{ times Row 3 to Row 1}& \end{alignat*}

  2. Circling our pivots, \(\left[\begin{array}{cccc} \circledNumber{1} & 1 & 0 & 3 \\ 0 & \circledNumber{1} & 1 & 1 \\ 0 & 0 & 0 & 0 \end{array}\right]\) we see that each pivot is a 1, the pivots descend in a staircase pattern, and the zero row is at the bottom. However, there is not a zero above the second pivot, so the matrix is not in reduced row echelon form.

Example B.1.3. E3.

Show how to find the solution set for the following system of linear equations.
\begin{alignat*}{4} 2x&\,+\,&4y&\,+\,&z &= 5 \\ x&\,+\,&2y &\,\,& &= 3 \end{alignat*}
Solution.

First, note that this system corresponds to the augmented matrix \(\left[\begin{array}{ccc|c} 2 & 4 & 1 & 5 \\ 1 & 2 & 0 & 3 \end{array}\right]\text{.}\) Then we compute (using technology)

\begin{equation*} \RREF \left( \left[\begin{array}{ccc|c} 2 & 4 & 1 & 5 \\ 1 & 2 & 0 & 3 \end{array}\right] \right) = \left[\begin{array}{ccc|c} 1 & 2 & 0 & 3 \\ 0 & 0 & 1 & -1\end{array}\right]. \end{equation*}

This corresponds to the system

\begin{alignat*}{2} x\,+\,2y&\,\,& &= 3 \\ &\,\,& z&= -1 \end{alignat*}

Since the \(y\)-column is a non-pivot column, \(y\) is a free variable, so we let \(y=a\text{;}\) then we have

\begin{alignat*}{3} x&\,+\,&2y&\,\,& &= 3\\ &\,\,&y &\,\,& &=a\\ &\,\,& &\,\,& z&= -1. \end{alignat*}

and thus

\begin{align*} x&= 3-2a\\ y&= a\\ z&= -1 \end{align*}

So the solution set is

\begin{equation*} \setBuilder{ \left[\begin{array}{c} 3-2a \\ a \\ -1 \end{array}\right] }{ a \in \IR }. \end{equation*}
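The row reduction above may be carried out with any computer algebra system. As a minimal sketch (assuming the Python library SymPy is available), the \(\RREF\) computation looks like this:

from sympy import Matrix

# augmented matrix of the system 2x + 4y + z = 5, x + 2y = 3
M = Matrix([[2, 4, 1, 5],
            [1, 2, 0, 3]])
rref_matrix, pivot_columns = M.rref()
print(rref_matrix)     # the reduced row echelon form shown above
print(pivot_columns)   # (0, 2), so the y-column (index 1) is free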

Example B.1.4. V1.

Let \(V\) be the set of all pairs of numbers \((x,y)\) of real numbers together with the following operations:

\begin{align*} (x_1,y_1) \oplus (x_2,y_2) &= (2x_1+2x_2,2y_1+2y_2)\\ c\odot (x,y) &= (cx,c^2y) \end{align*}
  1. Show that scalar multiplication distributes over vector addition:

    \begin{equation*} c\odot \left((x_1,y_1) \oplus (x_2,y_2) \right) = c \odot (x_1,y_1) \oplus c \odot (x_2,y_2) \end{equation*}
  2. Explain why \(V\) nonetheless is not a vector space.

Solution.
  1. We compute both sides:

    \begin{align*} c \odot \left((x_1,y_1) \oplus (x_2,y_2) \right) &= c \odot (2x_1+2x_2,2y_1+2y_2)\\ &= (c(2x_1+2x_2),c^2(2y_1+2y_2))\\ &= (2cx_1+2cx_2,2c^2y_1+2c^2y_2) \end{align*}

    and

    \begin{align*} c\odot (x_1,y_1) \oplus c \odot (x_2,y_2) &= (cx_1,c^2y_1) \oplus (cx_2,c^2y_2)\\ &= (2cx_1+2cx_2,2c^2y_1+2c^2y_2) \end{align*}

    Since these are the same, we have shown that the property holds.

  2. To show \(V\) is not a vector space, we must show that it fails one of the 8 defining properties of vector spaces. We will show that scalar multiplication does not distribute over scalar addition, i.e., there are values such that

    \begin{equation*} (c+d)\odot(x,y) \neq c \odot(x,y) \oplus d\odot(x,y) \end{equation*}
    • (Solution method 1) First, we compute

      \begin{align*} (c+d)\odot(x,y) &= ((c+d)x,(c+d)^2y)\\ &= ( (c+d)x, (c^2+2cd+d^2)y). \end{align*}

      Then we compute

      \begin{align*} c\odot (x,y) \oplus d\odot(x,y) &= (cx,c^2y) \oplus (dx,d^2y)\\ &= ( 2cx+2dx, 2c^2y+2d^2y). \end{align*}

      Since \((c+d)x\not=2cx+2dx\) when \(c,d,x,y=1\text{,}\) the property fails to hold.

    • (Solution method 2) When we let \(c,d,x,y=1\text{,}\) we may simplify both sides as follows.

      \begin{align*} (c+d)\odot(x,y) &= 2\odot(1,1)\\ &= (2\cdot1,2^2\cdot1)\\ &=(2,4) \end{align*}
      \begin{align*} c\odot (x,y) \oplus d\odot(x,y) &= 1\odot(1,1)\oplus 1\odot(1,1)\\ &= (1\cdot1,1^2\cdot1)\oplus(1\cdot1,1^2\cdot1)\\ &= (1,1)\oplus(1,1)\\ &= (2\cdot1+2\cdot1,2\cdot1+2\cdot1)\\ &= (4,4) \end{align*}

      Since these ordered pairs are different, the property fails to hold.
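    Either computation can also be checked symbolically. A minimal sketch, assuming SymPy is available (the names oplus and odot below are illustrative helpers, not part of any library):

    from sympy import symbols, simplify

    c, d, x, y = symbols("c d x y")

    def oplus(v, w):   # the "addition" defined on V
        return (2*v[0] + 2*w[0], 2*v[1] + 2*w[1])

    def odot(s, v):    # the scalar "multiplication" defined on V
        return (s*v[0], s**2*v[1])

    lhs = odot(c + d, (x, y))
    rhs = oplus(odot(c, (x, y)), odot(d, (x, y)))
    print([simplify(a - b) for a, b in zip(lhs, rhs)])       # nonzero entries witness the failure
    print([e.subs({c: 1, d: 1, x: 1, y: 1}) for e in lhs])   # [2, 4]
    print([e.subs({c: 1, d: 1, x: 1, y: 1}) for e in rhs])   # [4, 4]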

Example B.1.5. V2.

Consider the statement

The vector \(\left[\begin{array}{c} 3 \\ -1 \\ 2 \end{array}\right] \) is a linear combination of the vectors \(\left[\begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right] \text{,}\) \(\left[\begin{array}{c} 3 \\ 2 \\ -1 \end{array}\right] \text{,}\) and \(\left[\begin{array}{c} 1 \\ 1 \\ -1 \end{array}\right] \text{.}\)

  1. Write an equivalent statement using a vector equation.

  2. Explain why your statement is true or false.

Solution.
  1. By definition, this statement is equivalent to the statement

    There exists a solution to the vector equation \(x_1\left[\begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right] + x_2\left[\begin{array}{c} 3 \\ 2 \\ -1 \end{array}\right] + x_3\left[\begin{array}{c} 1 \\ 1 \\ -1 \end{array}\right] = \left[\begin{array}{c} 3 \\ -1 \\ 2 \end{array}\right].\)

  2. This vector equation corresponds to the augmented matrix \(\left[\begin{array}{ccc|c} 1 & 3 & 1 & 3 \\ 0 & 2 & 1 & -1 \\ 1 & -1 & -1 & 2 \end{array}\right]\text{.}\) Therefore, we compute

    \begin{equation*} \RREF \left[\begin{array}{ccc|c} 1 & 3 & 1 & 3 \\ 0 & 2 & 1 & -1 \\ 1 & -1 & -1 & 2 \end{array}\right] = \left[\begin{array}{ccc|c} 1 & 0 & -\frac{1}{2} & 0 \\ 0 & 1 & \frac{1}{2} & 0 \\ 0 & 0 & 0 & 1 \end{array}\right]. \end{equation*}

    Since this corresponds to an inconsistent system of equations, the vector equation

    \begin{equation*} x_1\left[\begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right] + x_2\left[\begin{array}{c} 3 \\ 2 \\ -1 \end{array}\right] + x_3\left[\begin{array}{c} 1 \\ 1 \\ -1 \end{array}\right] = \left[\begin{array}{c} 3 \\ -1 \\ 2 \end{array}\right] \end{equation*}

    has no solution, and therefore \(\left[\begin{array}{c} 3 \\ -1 \\ 2 \end{array}\right] \) is not a linear combination of \(\left[\begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right] \text{,}\) \(\left[\begin{array}{c} 3 \\ 2 \\ -1 \end{array}\right] \text{,}\) and \(\left[\begin{array}{c} 1 \\ 1 \\ -1 \end{array}\right] \text{.}\)
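    As a check (assuming SymPy is available), the solver linsolve reports an empty solution set for this inconsistent augmented matrix:

    from sympy import Matrix, linsolve, symbols

    x1, x2, x3 = symbols("x1 x2 x3")
    augmented = Matrix([[1, 3, 1, 3],
                        [0, 2, 1, -1],
                        [1, -1, -1, 2]])
    print(linsolve(augmented, x1, x2, x3))   # EmptySet: no solution exists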

Example B.1.6. V3.

Consider the statement

The set of vectors \(\left\{\left[\begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right] , \left[\begin{array}{c} 3 \\ 2 \\ -1 \end{array}\right] , \left[\begin{array}{c} 1 \\ 1 \\ -1 \end{array}\right] \right\}\) does not span \(\IR^3\text{.}\)

  1. Write an equivalent statement using a vector equation.

  2. Explain why your statement is true or false.

Solution.
  1. By definition, this statement is equivalent to the statement

    There is some \(\vec{v} \in \IR^3\) for which the vector equation \(x_1\left[\begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right]+ x_2 \left[\begin{array}{c} 3 \\ 2 \\ -1 \end{array}\right]+ x_3\left[\begin{array}{c} 1 \\ 1 \\ -1 \end{array}\right]=\vec{v}\) does not have a solution.

  2. We compute

    \begin{equation*} \RREF \left[\begin{array}{ccc} 1 & 3 & 1 \\ 0 & 2 & 1 \\ 1 & -1 & -1 \end{array}\right] = \left[\begin{array}{ccc} 1 & 0 & -\frac{1}{2} \\ 0 & 1 & \frac{1}{2} \\ 0 & 0 & 0 \end{array}\right] \end{equation*}

    Since the last row lacks a pivot, there is some vector \(\vec{v} \in \IR^3\) for which augmenting this matrix with \(\vec{v}\) produces an inconsistent system. That vector will not be in the span of these three vectors, so the vectors do not span \(\IR^3\text{.}\)
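    For instance, one such vector is \(\vec{e}_3=\left[\begin{array}{c} 0 \\ 0 \\ 1 \end{array}\right]\text{:}\) augmenting with it gives

    \begin{equation*} \RREF \left[\begin{array}{ccc|c} 1 & 3 & 1 & 0 \\ 0 & 2 & 1 & 0 \\ 1 & -1 & -1 & 1 \end{array}\right] = \left[\begin{array}{ccc|c} 1 & 0 & -\frac{1}{2} & 0 \\ 0 & 1 & \frac{1}{2} & 0 \\ 0 & 0 & 0 & 1 \end{array}\right], \end{equation*}

    which is inconsistent, so \(\vec{e}_3\) is not in the span of the three vectors.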

Example B.1.7. V4.

Consider the following two sets of Euclidean vectors.

\begin{equation*} W = \setBuilder{\left[\begin{array}{c} x \\ y \\ z \\ w \end{array}\right] }{x+y=3z+2w} \hspace{3em} U = \setBuilder{\left[\begin{array}{c} x \\ y \\ z \\ w \end{array}\right]}{x+y=3z+w^2} \end{equation*}

Explain why one of these sets is a subspace of \(\IR^4\text{,}\) and why the other is not.

Solution.

To show that \(W\) is a subspace, let \(\vec v=\left[\begin{array}{c} x_1 \\y_1 \\ z_1 \\ w_1 \end{array}\right]\in W\) and \(\vec w=\left[\begin{array}{c} x_2 \\y_2 \\ z_2 \\ w_2 \end{array}\right] \in W \text{,}\) so we know that \(x_1+y_1=3z_1+2w_1\) and \(x_2+y_2=3z_2+2w_2\text{.}\)

Consider

\begin{equation*} \left[\begin{array}{c} x_1 \\y_1 \\ z_1 \\ w_1\end{array}\right] +\left[\begin{array}{c} x_2 \\y_2 \\ z_2 \\ w_2 \end{array}\right] =\left[\begin{array}{c} x_1+x_2 \\y_1+y_2 \\ z_1+z_2 \\w_1+w_2 \end{array}\right] . \end{equation*}

To see if \(\vec{v}+\vec{w} \in W\text{,}\) we need to check if \((x_1+x_2)+(y_1+y_2) = 3(z_1+z_2)+2(w_1+w_2)\text{.}\) We compute

\begin{align*} (x_1+x_2)+(y_1+y_2) &= (x_1+y_1)+(x_2+y_2) &\text{by regrouping}\\ &= (3z_1+2w_1)+(3z_2+2w_2) & \text{since } \vec v,\vec w\in W\\ &=3(z_1+z_2)+2(w_1+w_2) & \text{by regrouping.} \end{align*}

Thus \(\vec v+\vec w\in W\text{,}\) so \(W\) is closed under vector addition.

Now consider

\begin{equation*} c\vec v =\left[\begin{array}{c} cx_1 \\cy_1 \\ cz_1 \\ cw_1 \end{array}\right] . \end{equation*}

Similarly, to check that \(c\vec{v} \in W\text{,}\) we need to check if \(cx_1+cy_1=3(cz_1)+2(cw_1)\text{,}\) so we compute

\begin{align*} cx_1+cy_1 & = c(x_1+y_1) &\text{by factoring}\\ &=c(3z_1+2w_1) &\text{since } \vec v\in W\\ &=3(cz_1)+2(cw_1) &\text{by regrouping} \end{align*}

and we see that \(c\vec v\in W\text{,}\) so \(W\) is closed under scalar multiplication. Therefore \(W\) is a subspace of \(\IR^4\text{.}\)

Now, to show \(U\) is not a subspace, we will show that it fails to be closed under one of the vector space operations.

  • (Solution Method 1) Now let \(\vec v=\left[\begin{array}{c} x_1 \\y_1 \\ z_1 \\ w_1 \end{array}\right]\in U\) and \(\vec w=\left[\begin{array}{c} x_2 \\y_2 \\ z_2 \\ w_2 \end{array}\right] \in U \text{,}\) so we know that \(x_1+y_1=3z_1+w_1^2\) and \(x_2+y_2=3z_2+w_2^2\text{.}\)

    Consider

    \begin{equation*} \vec{v}+\vec{w}= \left[\begin{array}{c} x_1 \\y_1 \\ z_1 \\ w_1\end{array}\right] +\left[\begin{array}{c} x_2 \\y_2 \\ z_2 \\ w_2 \end{array}\right] =\left[\begin{array}{c} x_1+x_2 \\y_1+y_2 \\ z_1+z_2 \\w_1+w_2 \end{array}\right] . \end{equation*}

    To see if \(\vec{v}+\vec{w} \in U\text{,}\) we need to check if \((x_1+x_2)+(y_1+y_2) = 3(z_1+z_2)+(w_1+w_2)^2\text{.}\) We compute

    \begin{align*} (x_1+x_2)+(y_1+y_2) &= (x_1+y_1)+(x_2+y_2) &\text{by regrouping}\\ &= (3z_1+w_1^2)+(3z_2+w_2^2) &\text{since } \vec v,\vec w\in U\\ &=3(z_1+z_2)+(w_1^2+w_2^2) &\text{by regrouping} \end{align*}

    and thus \(\vec v+\vec w\in U\) \textbf{only when} \(w_1^2+w_2^2=(w_1+w_2)^2\text{.}\) Since this is not true in general, \(U\) is not closed under vector addition, and thus cannot be a subspace.

  • (Solution Method 2) Note that the vector \(\vec v=\left[\begin{array}{c} 0\\1\\0\\1\end{array}\right] \) belongs to \(U\) since \(0+1=3(0)+1^2\text{.}\) However, the vector \(2\vec v=\left[\begin{array}{c} 0\\2\\0\\2\end{array}\right] \) does not belong to \(U\) since \(0+2\not=3(0)+2^2\text{.}\) Therefore \(U\) is not closed under scalar multiplication, and thus is not a subspace.

Example B.1.8. V5.

Consider the statement

The set of vectors \(\left\{ \left[\begin{array}{c} 3 \\ 2 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} -1 \\ 1 \\ 2 \\ 3 \end{array}\right] , \left[\begin{array}{c} 0 \\ 1 \\ -1 \\ 1 \end{array}\right] , \left[\begin{array}{c} 2 \\ 5 \\ 1 \\ 5 \end{array}\right] \right\}\) is linearly dependent.

  1. Write an equivalent statement using a vector equation.

  2. Explain why your statement is true or false.

Solution.
  1. This statement is equivalent to the statement

    The vector equation \(x_1\left[\begin{array}{c} 3 \\ 2 \\ 1 \\ 0 \end{array}\right] + x_2\left[\begin{array}{c} -1 \\ 1 \\ 2 \\ 3 \end{array}\right] + x_3\left[\begin{array}{c} 0 \\ 1 \\ -1 \\ 1 \end{array}\right]+ x_4\left[\begin{array}{c} 2 \\ 5 \\ 1 \\ 5 \end{array}\right] =\vec{0}\) has (infinitely many) nontrivial solutions.

  2. Converting the left side of this system to the corresponding matrix and row reducing, we have

    \begin{equation*} \RREF \left[\begin{array}{cccc} 3 & -1 & 0 & 2 \\ 2 & 1 & 1 & 5 \\ 1 & 2 & -1 & 1 \\ 0 & 3 & 1 & 5 \end{array}\right] = \left[\begin{array}{cccc} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 \end{array}\right]. \end{equation*}

    Since the fourth column is not a pivot column, the system has (infinitely many) nontrivial solutions. Thus the set of vectors is linearly dependent.
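    For instance, setting the free variable \(x_4=1\) in the reduced system gives \(x_1=-1\text{,}\) \(x_2=-1\text{,}\) \(x_3=-2\text{,}\) and indeed

    \begin{equation*} -\left[\begin{array}{c} 3 \\ 2 \\ 1 \\ 0 \end{array}\right] - \left[\begin{array}{c} -1 \\ 1 \\ 2 \\ 3 \end{array}\right] - 2\left[\begin{array}{c} 0 \\ 1 \\ -1 \\ 1 \end{array}\right] + \left[\begin{array}{c} 2 \\ 5 \\ 1 \\ 5 \end{array}\right] = \left[\begin{array}{c} 0 \\ 0 \\ 0 \\ 0 \end{array}\right] \end{equation*}

    is an explicit nontrivial linear dependence.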

Example B.1.9. V6.

Consider the statement

The set of vectors \(\left\{ \left[\begin{array}{c} 3 \\ 2 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} -1 \\ 1 \\ 2 \\ 3 \end{array}\right] , \left[\begin{array}{c} 0 \\ 1 \\ -1 \\ 1 \end{array}\right] , \left[\begin{array}{c} 2 \\ 5 \\ 1 \\ 5 \end{array}\right] \right\} \) is a basis of \(\IR^4\text{.}\)

  1. Write an equivalent statement in terms of other vector properties.

  2. Explain why your statement is true or false.

Solution.
  1. This statement is equivalent to the statement

    The set of vectors \(\left\{ \left[\begin{array}{c} 3 \\ 2 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} -1 \\ 1 \\ 2 \\ 3 \end{array}\right] , \left[\begin{array}{c} 0 \\ 1 \\ -1 \\ 1 \end{array}\right] , \left[\begin{array}{c} 2 \\ 5 \\ 1 \\ 5 \end{array}\right] \right\} \) is linearly independent and spans \(\IR^4\text{.}\)

  2. Compute

    \begin{equation*} \RREF \left[\begin{array}{cccc} 3 & -1 & 0 & 2 \\ 2 & 1 & 1 & 5 \\ 1 & 2 & -1 & 1 \\ 0 & 3 & 1 & 5 \end{array}\right] = \left[\begin{array}{cccc} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 \end{array}\right]. \end{equation*}

    Since the fourth column is not a pivot column, the vectors are linearly dependent and thus not a basis of \(\IR^4\text{.}\)

    (Alternate solution:) Since the fourth row is not a pivot row, the vectors do not span \(\IR^4\) and thus are not a basis of \(\IR^4\text{.}\)

Example B.1.10. V7.

Consider the subspace

\begin{equation*} W = \vspan \left\{ \left[\begin{array}{c} 1 \\ -3 \\ -1 \\ 2 \end{array}\right] , \left[\begin{array}{c} 1 \\ 0 \\ 1 \\ -2 \end{array}\right] , \left[\begin{array}{c} 3 \\ -6 \\ -1 \\ 2 \end{array}\right] , \left[\begin{array}{c} 1 \\ 6 \\ 1 \\ -1 \end{array}\right] , \left[\begin{array}{c} 2 \\ 3 \\ 0 \\ 1 \end{array}\right] \right\} . \end{equation*}
  1. Explain how to find a basis of \(W\text{.}\)

  2. Explain how to find the dimension of \(W\text{.}\)

Solution.
  1. Observe that

    \begin{equation*} \RREF \left[\begin{array}{ccccc} 1 & 1 & 3 & 1 & 2 \\ -3 & 0 & -6 & 6 & 3 \\ -1 & 1 & -1 & 1 & 0 \\ 2 & -2 & 2 & -1 & 1 \end{array}\right] = \left[\begin{array}{ccccc} 1 & 0 & 2 & 0 & 1 \\ 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right] \end{equation*}

    If we remove the vectors yielding non-pivot columns, the resulting set has the same span while being linearly independent. Therefore

    \begin{equation*} \left\{ \left[\begin{array}{c} 1 \\ -3 \\ -1 \\ 2 \end{array}\right] , \left[\begin{array}{c} 1 \\ 0 \\ 1 \\ -2 \end{array}\right] , \left[\begin{array}{c} 1 \\ 6 \\ 1 \\ -1 \end{array}\right] \right\} \end{equation*}

    is a basis of \(W\text{.}\)

  2. Since this (and thus every other) basis has three vectors in it, the dimension of \(W\) is \(3\text{.}\)

Example B.1.11. V8.

Consider the statement

The set of polynomials \(\setList{3x^3+2x^2+x,-x^3+x^2+2x+3,x^2-x+1,2x^3+5x^2+x+5}\) is linearly independent.

  1. Write an equivalent statement using a polynomial equation.

  2. Explain why your statement is true or false.

Solution.
  1. This statement is equivalent to the statement

    The polynomial equation \(y_1(3x^3+2x^2+x) + y_2(-x^3+x^2+2x+3)\) \(+y_3(x^2-x+1)+y_4(2x^3+5x^2+x+5)=0\) has no nontrivial solutions.

  2. This polynomial equation corresponds to the system of equations

    \begin{alignat*}{5} 3y_1 &-& y_2 & & &+& 2y_4 &=& 0\\ 2y_1 &+& y_2 &+& y_3 &+& 5y_4 &=& 0\\ y_1 &+& 2y_2 &-& y_3 &+& y_4 &=& 0\\ & & 3y_2 &+& y_3 &+& 5y_4 &=& 0 \end{alignat*}

    which corresponds to the coefficient matrix \(\left[\begin{array}{cccc} 3 & -1 & 0 & 2 \\ 2 & 1 & 1 & 5 \\ 1 & 2 & -1 & 1 \\ 0 & 3 & 1 & 5 \end{array}\right]\) (the augmented column of zeros is omitted). So we compute

    \begin{equation*} \RREF \left[\begin{array}{cccc} 3 & -1 & 0 & 2 \\ 2 & 1 & 1 & 5 \\ 1 & 2 & -1 & 1 \\ 0 & 3 & 1 & 5 \end{array}\right] = \left[\begin{array}{cccc} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 \end{array}\right]. \end{equation*}

    Since the fourth column is not a pivot column, there are (infinitely many) nontrivial solutions. Thus, the set \(\setList{3x^3+2x^2+x,-x^3+x^2+2x+3,x^2-x+1,2x^3+5x^2+x+5}\) is linearly dependent.

Example B.1.12. V9.

Consider the homogeneous system of equations

\begin{alignat*}{6} x_1 &\,+\,& x_2 &\,+\,& 3x_3 &\,+\,& x_4 &\,+\,& 2x_5 &=& 0\\ -3x_1 &\,\,& &\,-\,& 6x_3 &\,+\,&6 x_4 &\,+\,& 3x_5 &=& 0\\ -x_1 &\,+\,& x_2 &\,-\,& x_3 &\,+\,& x_4 &\,\,& &=& 0\\ 2x_1 &\,-\,& 2x_2 &\,+\,& 2x_3 &\,-\,& x_4 &\,+\,& x_5 &=& 0 \end{alignat*}
  1. Find the solution space of the system.

  2. Find a basis of the solution space.

Solution.
  1. Observe that

    \begin{equation*} \RREF \left[\begin{array}{ccccc|c} 1 & 1 & 3 & 1 & 2 & 0\\ -3 & 0 & -6 & 6 & 3 & 0\\ -1 & 1 & -1 & 1 & 0 & 0\\ 2 & -2 & 2 & -1 & 1& 0 \end{array}\right] = \left[\begin{array}{ccccc|c} 1 & 0 & 2 & 0 & 1 &0\\ 0 & 1 & 1 & 0 & 0 &0\\ 0 & 0 & 0 & 1 & 1 &0\\ 0 & 0 & 0 & 0 & 0&0 \end{array}\right] \end{equation*}

    Letting \(x_3=a\) and \(x_5=b\) (since those correspond to the non-pivot columns), this is equivalent to the system

    \begin{alignat*}{6} x_1 &\,\,& &\,+\,& 2x_3 &\,\,& &\,+\,& x_5 &=& 0\\ &\,\,& x_2 &\,+\,& x_3 &\,\,& &\,\,& &=& 0\\ &\,\,& &\,\,& x_3 &\,\,& &\,\,& &=& a\\ &\,\,& &\,\,& &\,\,& x_4 &\,+\,& x_5 &=& 0\\ &\,\,& &\,\,& &\,\,& &\,\,& x_5 &=& b \end{alignat*}

    Thus, the solution set is

    \begin{equation*} \setBuilder{\left[\begin{array}{c} -2a-b \\ -a \\ a \\ -b \\ b \end{array}\right]}{a,b \in \IR} . \end{equation*}
  2. Since we can write

    \begin{equation*} \left[\begin{array}{c} -2a-b \\ -a \\ a \\ -b \\ b \end{array}\right] = a \left[\begin{array}{c} -2 \\ -1 \\ 1 \\ 0 \\ 0 \end{array}\right] + b \left[\begin{array}{c} -1 \\ 0 \\ 0 \\ -1 \\ 1 \end{array}\right], \end{equation*}

    a basis for the solution space is

    \begin{equation*} \left \{ \left[\begin{array}{c} -2 \\ -1 \\ 1 \\ 0 \\ 0 \end{array}\right] , \left[\begin{array}{c} -1 \\ 0 \\ 0 \\ -1 \\ 1 \end{array}\right] \right\}. \end{equation*}
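    This basis can be double-checked with a computer algebra system. A minimal sketch, assuming SymPy is available:

    from sympy import Matrix

    A = Matrix([[1, 1, 3, 1, 2],
                [-3, 0, -6, 6, 3],
                [-1, 1, -1, 1, 0],
                [2, -2, 2, -1, 1]])
    print(A.nullspace())   # basis vectors for the solution space of the homogeneous system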

Example B.1.13. A1.

Consider the following maps of polynomials \(S: \P \rightarrow \P\) and \(T:\P\rightarrow\P\) defined by

\begin{equation*} S(f(x))= 3xf(x) \text{ and }T(f(x)) = 3f'(x)f(x). \end{equation*}

Explain why one of these maps is a linear transformation, and why the other map is not.

Solution.

To show \(S\) is a linear transformation, we must show two things:

\begin{equation*} S\left(f(x)+g(x)\right)=S(f(x))+S(g(x)) \end{equation*}
\begin{equation*} S(cf(x)) = cS(f(x)) \end{equation*}

To show \(S\) respects addition, we compute

\begin{align*} S\left(f(x)+g(x)\right) &= 3x\left(f(x)+g(x)\right) & \text{by definition of } S\\ &= 3xf(x)+3xg(x) & \text{by distributing} \end{align*}

But note that \(S(f(x))=3xf(x)\) and \(S(g(x))=3xg(x)\text{,}\) so we have \(S(f(x)+g(x))=S(f(x))+S(g(x))\text{.}\)

For the second part, we compute

\begin{align*} S\left(cf(x)\right) &= 3x\left(cf(x)\right) & \text{by definition of } S\\ &= 3cxf(x) & \text{rewriting the multiplication.} \end{align*}

But note that \(cS(f(x))=c(3xf(x))=3cxf(x)\) as well, so we have \(S(cf(x))=cS(f(x))\text{.}\) Now, since \(S\) respects both addition and scalar multiplication, we can conclude \(S\) is a linear transformation.

  • (Solution method 1) As for \(T\text{,}\) we compute

    \begin{align*} T(f(x)+g(x))& =3 (f(x)+g(x))'(f(x)+g(x)) &\text{by definition of } T\\ &= 3(f'(x)+g'(x))(f(x)+g(x)) & \text{since the derivative is linear}\\ &= 3f(x)f'(x)+3f(x)g'(x)+3f'(x)g(x)+3g(x)g'(x) &\text{by distributing} \end{align*}

    However, note that \(T(f(x))+T(g(x))=3f'(x)f(x)+3g'(x)g(x)\text{,}\) which is not always the same polynomial (for example, when \(f(x)=g(x)=x\)). So we see that \(T(f(x)+g(x)) \neq T(f(x))+T(g(x))\text{,}\) so \(T\) does not respect addition and is therefore not a linear transformation.

  • (Solution method 2) As for \(T\text{,}\) we may choose the polynomial \(f(x)=x\) and scalar \(c=2\text{.}\) Then

    \begin{equation*} T(cf(x))=T(2x)=3(2x)'(2x)=3(2)(2x)=12x. \end{equation*}

    But on the other hand,

    \begin{equation*} cT(f(x))=2T(x)=2(3)(x)'(x)=2(3)(1)(x)=6x. \end{equation*}

      Since this isn't the same polynomial, \(T\) does not preserve scalar multiplication and is therefore not a linear transformation.
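Either counterexample can be checked symbolically. A minimal sketch of the method 2 computation, assuming SymPy is available (the helper T below is illustrative):

from sympy import symbols, diff

x = symbols("x")

def T(f):
    return 3 * diff(f, x) * f   # the map T(f(x)) = 3 f'(x) f(x)

f = x          # the counterexample polynomial
print(T(2*f))  # 12*x
print(2*T(f))  # 6*x, a different polynomial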

Example B.1.14. A2.

  1. Find the standard matrix for the linear transformation \(T: \IR^3\rightarrow \IR^4\) given by

    \begin{equation*} T\left(\left[\begin{array}{c} x \\ y \\ z \\ \end{array}\right] \right) = \left[\begin{array}{c} -x+y \\ -x+3y-z \\ 7x+y+3z \\ 0 \end{array}\right]. \end{equation*}
  2. Let \(S: \IR^4 \rightarrow \IR^3\) be the linear transformation given by the standard matrix

    \begin{equation*} \left[\begin{array}{cccc} 2 & 3 & 4 & 1 \\ 0 & 1 & -1 & -1 \\ 3 & -2 & -2 & 4 \end{array}\right]. \end{equation*}

    Compute \(S\left( \left[\begin{array}{c} -2 \\ 1 \\ 3 \\ 2\end{array}\right] \right) \text{.}\)

Solution.
  1. Since

    \begin{align*} T\left(\left[\begin{array}{c} 1 \\ 0 \\ 0 \end{array}\right]\right) &= \left[\begin{array}{c} -1 \\ -1 \\ 7 \\0\end{array}\right]\\ T\left(\left[\begin{array}{c} 0 \\ 1 \\ 0 \end{array}\right]\right) &= \left[\begin{array}{c} 1 \\ 3 \\ 1 \\0 \end{array}\right]\\ T\left(\left[\begin{array}{c} 0 \\ 0 \\ 1 \end{array}\right]\right) &= \left[\begin{array}{c} 0 \\ -1 \\ 3 \\ 0 \end{array}\right], \end{align*}

    the standard matrix for \(T\) is \(\left[\begin{array}{ccc} -1 & 1 & 0 \\ -1 & 3 & -1 \\ 7 & 1 & 3 \\ 0 & 0 & 0 \end{array}\right] \text{.}\)

  2. \begin{equation*} S\left(\left[\begin{array}{c} -2 \\ 1 \\ 3 \\ 2 \end{array}\right] \right) = -2S(\vec{e}_1)+S(\vec{e}_2)+3S(\vec{e}_3)+2S(\vec{e}_4) \end{equation*}
    \begin{equation*} = -2 \left[\begin{array}{c} 2 \\ 0 \\ 3 \end{array}\right] + \left[\begin{array}{c} 3 \\ 1 \\ -2 \end{array}\right] + 3 \left[\begin{array}{c} 4 \\ -1 \\ -2 \end{array}\right]+2\left[\begin{array}{c} 1 \\ -1 \\ 4 \end{array}\right] = \left[\begin{array}{c} 13 \\ -4 \\ -6\end{array}\right]. \end{equation*}
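    A quick check of this arithmetic, assuming SymPy is available:

    from sympy import Matrix

    S = Matrix([[2, 3, 4, 1],
                [0, 1, -1, -1],
                [3, -2, -2, 4]])
    v = Matrix([-2, 1, 3, 2])
    print(S * v)   # the column vector with entries 13, -4, -6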

Example B.1.15. A3.

Let \(T: \IR^4 \rightarrow \IR^3\) be the linear transformation given by

\begin{equation*} T\left(\left[\begin{array}{c}x\\y\\z\\w\end{array}\right] \right) = \left[\begin{array}{c} x+3y+2z-3w \\ 2x+4y+6z-10w \\ x+6y-z+3w \end{array}\right] \end{equation*}
  1. Explain how to find the image of \(T\) and the kernel of \(T\text{.}\)

  2. Explain how to find a basis of the image of \(T\) and a basis of the kernel of \(T\text{.}\)

  3. Explain how to find the rank and nullity of T, and why the rank-nullity theorem holds for T.

Solution.
  1. To find the image we compute

    \begin{equation*} \Im(T) = T\left(\vspan\left\{\vec{e}_1,\vec{e}_2,\vec{e}_3,\vec{e}_4\right\}\right) \end{equation*}
    \begin{equation*} = \vspan\left\{T(\vec{e}_1),T(\vec{e}_2),T(\vec{e}_3),T(\vec{e}_4)\right\} \end{equation*}
    \begin{equation*} = \vspan\left\{\left[\begin{array}{c}1 \\ 2 \\ 1 \end{array}\right], \left[\begin{array}{c} 3 \\ 4 \\ 6 \end{array}\right], \left[\begin{array}{c} 2 \\ 6 \\ -1 \end{array}\right], \left[\begin{array}{c} -3 \\ -10 \\ 3 \end{array}\right]\right\}. \end{equation*}
  2. The kernel is the solution set of the corresponding homogeneous system of equations, i.e.

    \begin{alignat*}{5} x &+& 3y &+& 2z &-& 3w &=& 0\\ 2x &+& 4y &+& 6z &-& 10w &=& 0\\ x &+& 6y &-& z &+& 3w &=& 0 . \end{alignat*}

    So we compute

    \begin{equation*} \RREF\left[\begin{array}{cccc|c} 1 & 3 & 2 & -3 & 0 \\ 2 & 4 & 6 & -10 &0 \\ 1 & 6 & -1 & 3 & 0 \end{array}\right] = \left[\begin{array}{cccc|c} 1 & 0 & 5 & -9 & 0 \\ 0 & 1 & -1 & 2 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right]. \end{equation*}

    Then, letting \(z=a\) and \(w=b\) we have

    \begin{equation*} \ker T = \setBuilder{\left[\begin{array}{c}-5a+9b \\ a-2b \\ a \\ b \end{array}\right]}{a,b \in \IR}. \end{equation*}
  3. Since \(\Im(T) = \vspan\left\{\left[\begin{array}{c}1 \\ 2 \\ 1 \end{array}\right], \left[\begin{array}{c} 3 \\ 4 \\ 6 \end{array}\right], \left[\begin{array}{c} 2 \\ 6 \\ -1 \end{array}\right], \left[\begin{array}{c} -3 \\ -10 \\ 3 \end{array}\right]\right\}\text{,}\) we simply need to find a linearly independent subset of these four spanning vectors. So we compute

    \begin{equation*} \RREF \left[\begin{array}{cccc}1 & 3 & 2 & -3 \\ 2 & 4 & 6 & -10 \\ 1 & 6 & -1 & 3 \end{array}\right] = \left[\begin{array}{cccc} 1 & 0 & 5 & -9 \\ 0 & 1 & -1 & 2 \\ 0 & 0 & 0 & 0\end{array}\right]. \end{equation*}

    Since the first two columns are pivot columns, they form a linearly independent spanning set, so a basis for \(\Im T\) is \(\setList{\left[\begin{array}{c}1\\2\\1 \end{array}\right], \left[\begin{array}{c}3\\4\\6 \end{array}\right]}.\)

    To find a basis for the kernel, note that

    \begin{equation*} \ker T = \setBuilder{\left[\begin{array}{c}-5a+9b \\ a-2b \\ a \\ b \end{array}\right]}{a,b \in \IR} \end{equation*}
    \begin{equation*} = \setBuilder{a \left[\begin{array}{c}-5 \\ 1 \\ 1 \\ 0 \end{array}\right]+b \left[\begin{array}{c} 9 \\ -2 \\ 0 \\ 1 \end{array}\right]}{a,b \in \IR} \end{equation*}
    \begin{equation*} = \vspan\left\{ \left[\begin{array}{c} -5 \\ 1 \\ 1 \\ 0 \end{array}\right], \left[\begin{array}{c} 9 \\ -2 \\ 0 \\ 1 \end{array}\right]\right\}. \end{equation*}

    so a basis for the kernel is

    \begin{equation*} \setList{\left[\begin{array}{c}-5 \\ 1 \\ 1 \\ 0 \end{array}\right], \left[\begin{array}{c}9 \\ -2 \\ 0 \\ 1 \end{array}\right]}. \end{equation*}
  4. The dimension of the image (the rank) is \(2\text{,}\) the dimension of the kernel (the nullity) is \(2\text{,}\) and the dimension of the domain of \(T\) is \(4\text{,}\) so we see \(2+2=4\text{,}\) which verifies that the sum of the rank and nullity of \(T\) is the dimension of the domain of \(T\text{.}\)
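    The rank and nullity can be double-checked with a computer algebra system. A minimal sketch, assuming SymPy is available:

    from sympy import Matrix

    A = Matrix([[1, 3, 2, -3],
                [2, 4, 6, -10],
                [1, 6, -1, 3]])
    print(A.rank())            # the rank: dimension of the image
    print(len(A.nullspace()))  # the nullity: dimension of the kernel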

Example B.1.16. A4.

Let \(T: \IR^4 \rightarrow \IR^3\) be the linear transformation given by the standard matrix \(\left[\begin{array}{cccc} 1 & 3 & 2 & -3 \\ 2 & 4 & 6 & -10 \\ 1 & 6 & -1 & 3 \end{array}\right]\text{.}\)

  1. Explain why \(T\) is or is not injective.

  2. Explain why \(T\) is or is not surjective.

Solution.

Compute

\begin{equation*} \RREF\left[\begin{array}{cccc}1 & 3 & 2 & -3 \\ 2 & 4 & 6 & -10 \\ 1 & 6 & -1 & 3 \end{array}\right] = \left[\begin{array}{cccc} 1 & 0 & 5 & -9 \\ 0 & 1 & -1 & 2 \\ 0 & 0 & 0 & 0\end{array}\right]. \end{equation*}
  1. Note that the third and fourth columns are non-pivot columns, which means \(\ker T\) contains infinitely many vectors, so \(T\) is not injective.

  2. Since there are only two pivots, the image (i.e. the span of the columns) is a 2-dimensional subspace (and thus does not equal \(\IR^3\)), so \(T\) is not surjective.

Example B.1.17. M1.

Of the following three matrices, only two may be multiplied.

\begin{align*} A &= \left[\begin{array}{cc} 1 & -3 \\ 0 & 1 \end{array}\right] & B&= \left[\begin{array}{ccc} 4 & 1 & 2 \end{array}\right] & C&= \left[\begin{array}{ccc} 0 & 1 & 3 \\ 1 & -2 & 5 \end{array}\right] \end{align*}

Explain which two may be multiplied and why. Then show how to find their product.

Solution.

\(A\) and \(C\) are the only two that may be multiplied, in the order \(AC\text{,}\) since \(A\) is \(2\times 2\) and \(C\) is \(2\times 3\text{,}\) so the number of columns of \(A\) matches the number of rows of \(C\text{;}\) no other pairing has compatible dimensions. Thus \(AC\) will be the \(2\times 3\) matrix given by

\begin{align*} AC\left( \vec{e}_1 \right) &= A \left( \left[\begin{array}{c} 0 \\ 1 \end{array}\right] \right) = 0 \left[\begin{array}{c} 1 \\ 0 \end{array}\right] + 1\left[\begin{array}{c} -3 \\ 1 \end{array}\right] = \left[\begin{array}{c} -3 \\ 1 \end{array}\right] \\\\ AC\left( \vec{e}_2 \right) &= A \left( \left[\begin{array}{c} 1 \\ -2 \end{array}\right] \right) = 1 \left[\begin{array}{c} 1 \\ 0 \end{array}\right] -2\left[\begin{array}{c} -3 \\ 1 \end{array}\right] = \left[\begin{array}{c} 7 \\ -2 \end{array}\right] \\\\ AC\left( \vec{e}_3 \right) &= A \left( \left[\begin{array}{c} 3 \\ 5 \end{array}\right] \right) = 3 \left[\begin{array}{c} 1 \\ 0 \end{array}\right] + 5\left[\begin{array}{c} -3 \\ 1 \end{array}\right] = \left[\begin{array}{c} -12 \\ 5 \end{array}\right] \\ \end{align*}

Thus

\begin{equation*} AC = \left[\begin{array}{ccc} -3 & 7 & -12 \\ 1 & -2 & 5 \end{array}\right]. \end{equation*}
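As a quick check, assuming SymPy is available:

from sympy import Matrix

A = Matrix([[1, -3], [0, 1]])
C = Matrix([[0, 1, 3], [1, -2, 5]])
print(A * C)   # the 2x3 product computed above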

Example B.1.18. M2.

Let \(A\) be a \(4\times4\) matrix.

  1. Give a \(4\times 4\) matrix \(P\) that may be used to perform the row operation \({R_3} \to R_3+4 \, {R_1} \text{.}\)

  2. Give a \(4\times 4\) matrix \(Q\) that may be used to perform the row operation \({R_1} \to -4 \, {R_1}\text{.}\)

  3. Use matrix multiplication to describe the matrix obtained by applying \({R_3} \to 4 \, {R_1} + {R_3}\) and then \({R_1} \to -4 \, {R_1}\) to \(A\) (note the order).

Solution.
  1. \(\displaystyle P=\left[\begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 4 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right]\)

  2. \(\displaystyle Q=\left[\begin{array}{cccc} -4 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{array}\right]\)

  3. \(\displaystyle QPA\)

Example B.1.19. M3.

Explain why the matrix \(\left[\begin{array}{ccc}1 & 3 & 2 \\ 2 & 4 & 6 \\ 1 & 6 & -1 \end{array}\right]\) is or is not invertible.

Solution.

We compute

\begin{equation*} \RREF\left(\left[\begin{array}{ccc}1 & 3 & 2 \\ 2 & 4 & 6 \\ 1 & 6 & -1 \end{array}\right]\right) = \left[\begin{array}{ccc} 1 & 0 & 5 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{array}\right]. \end{equation*}

Since its \(\RREF\) is not the identity matrix, the linear map is not bijective and thus the matrix is not invertible.

Example B.1.20. M4.

Show how to compute the inverse of the matrix \(A=\left[\begin{array}{cccc} 1 & 2 & 3 & 5 \\ 0 & -1 & 4 & -2 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 1 \end{array}\right]\text{.}\)

Solution.

To find the matrix \(A^{-1}\) where \(AA^{-1}=I\text{,}\) we row-reduce the augmented matrix \([A|I]\text{.}\)

\begin{equation*} \RREF\left(\left[\begin{array}{cccc|cccc} 1 & 2 & 3 & 5 & 1 & 0 & 0 & 0 \\ 0 & -1 & 4 & -2 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 3 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \end{array}\right]\right) = \left[\begin{array}{cccc|cccc} 1 & 0 & 0 &0 & 1 & 2 & -11 & 32 \\ 0 & 1 & 0 & 0 & 0 & -1 & 4 & -14 \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 & -3 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \end{array}\right] \end{equation*}

So the inverse is \(\left[\begin{array}{cccc} 1 & 2 & -11 & 32 \\ 0 & -1 & 4 & -14 \\ 0 & 0 & 1 & -3 \\ 0 & 0 & 0 & 1 \end{array}\right]\text{.}\)
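Both the row reduction and the resulting inverse can be reproduced with a computer algebra system. A minimal sketch, assuming SymPy is available:

from sympy import Matrix, eye

A = Matrix([[1, 2, 3, 5],
            [0, -1, 4, -2],
            [0, 0, 1, 3],
            [0, 0, 0, 1]])
print(Matrix.hstack(A, eye(4)).rref()[0])   # RREF of [A|I]; the right half is the inverse
print(A.inv())                              # or compute the inverse directly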

Example B.1.21. G1.

Let \(A\) be a \(4 \times 4\) matrix with determinant \(-7\text{.}\)

  1. Let \(B\) be the matrix obtained from \(A\) by applying the row operation \(R_3 \to R_3+3R_4\text{.}\) What is \(\det(B)\text{?}\)

  2. Let \(C\) be the matrix obtained from \(A\) by applying the row operation \(R_2 \to -3R_2\text{.}\) What is \(\det(C)\text{?}\)

  3. Let \(D\) be the matrix obtained from \(A\) by applying the row operation \(R_3 \leftrightarrow R_4\text{.}\) What is \(\det(D)\text{?}\)

Solution.
  1. Adding a multiple of one row to another row does not change the determinant, so \(\det(B)=\det(A)=-7\text{.}\)

  2. Scaling a row scales the determinant by the same factor, so \(\det(C)=-3\det(A)=-3(-7)=21\text{.}\)

  3. Swapping two rows changes the sign of the determinant, so \(\det(D)=-\det(A)=7\text{.}\)

Example B.1.22. G2.

Show how to compute the determinant of the matrix

\begin{equation*} A = \left[\begin{array}{cccc} 1 & 3 & 0 & -1 \\ 1 & 1 & 2 & 4 \\ 1 & 1 & 1 & 3 \\ -3 & 1 & 2 & -5 \end{array}\right] \end{equation*}
Solution.

Here is one possible solution, first applying a single row operation, and then performing Laplace/cofactor expansions to reduce the determinant to a linear combination of \(2\times 2\) determinants:

\begin{align*} \det \left[\begin{array}{cccc} 1 & 3 & 0 & -1 \\ 1 & 1 & 2 & 4 \\ 1 & 1 & 1 & 3 \\ -3 & 1 & 2 & -5 \end{array}\right] &= \det \left[\begin{array}{cccc} 1 & 3 & 0 & -1 \\ 0 & 0 & 1 & 1 \\ 1 & 1 & 1 & 3 \\ -3 & 1 & 2 & -5 \end{array}\right] = (-1) \det \left[\begin{array}{ccc} 1 & 3 & -1 \\ 1 & 1 & 3 \\ -3 & 1 & -5 \end{array}\right] + (1) \det \left[\begin{array}{ccc} 1 & 3 & 0 \\ 1 & 1 & 1 \\ -3 & 1 & 2 \end{array}\right]\\ &= (-1) \left( (1) \det \left[\begin{array}{cc} 1 & 3 \\ 1 & -5 \end{array}\right] - (1) \det \left[\begin{array}{cc} 3 & -1 \\ 1 & -5 \end{array}\right] + (-3) \det \left[\begin{array}{cc} 3 & -1 \\ 1 & 3 \end{array}\right] \right) +\\ &\phantom{==} (1) \left( (1) \det \left[\begin{array}{cc} 1 & 1 \\ 1 & 2 \end{array}\right] - (3) \det \left[\begin{array}{cc} 1 & 1 \\ -3 & 2 \end{array}\right] \right)\\ &= (-1)\left( (1)(-8)-(1)(-14)+(-3)(10) \right) + (1) \left( (1)(1)-(3)(5) \right)\\ &= (-1) \left( -8+14-30 \right) + (1) \left(1-15 \right)\\ &=10 \end{align*}

Here is another possible solution, using row and column operations to first reduce the determinant to a \(3\times 3\) matrix and then applying a formula:

\begin{align*} \det \left[\begin{array}{cccc} 1 & 3 & 0 & -1 \\ 1 & 1 & 2 & 4 \\ 1 & 1 & 1 & 3 \\ -3 & 1 & 2 & -5 \end{array}\right] &= \det \left[\begin{array}{cccc} 1 & 3 & 0 & -1 \\ 0 & 0 & 1 & 1 \\ 1 & 1 & 1 & 3 \\ -3 & 1 & 2 & -5 \end{array}\right] = \det \left[\begin{array}{cccc} 1 & 3 & 0 & -1 \\ 0 & 0 & 1 & 0 \\ 1 & 1 & 1 & 2 \\ -3 & 1 & 2 & -7 \end{array}\right]\\ &=-\det \left[\begin{array}{cccc} 1 & 3 & 0 & -1 \\ 1 & 1 & 1 & 2 \\ 0 & 0 & 1 & 0 \\ -3 & 1 & 2 & -7 \end{array}\right] = -\det \left[\begin{array}{ccc} 1 & 3 & -1 \\ 1 & 1 & 2 \\ -3 & 1 & -7 \end{array}\right]\\ &=-((-7-18-1)-(3+2-21))\\ &=10 \end{align*}
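Either computation can be verified with a computer algebra system. A minimal sketch, assuming SymPy is available:

from sympy import Matrix

A = Matrix([[1, 3, 0, -1],
            [1, 1, 2, 4],
            [1, 1, 1, 3],
            [-3, 1, 2, -5]])
print(A.det())   # 10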

Example B.1.23. G3.

Explain how to find the eigenvalues of the matrix \(\left[\begin{array}{cc} -2 & -2 \\ 10 & 7 \end{array}\right] \text{.}\)

Solution.

Compute the characteristic polynomial:

\begin{equation*} \det(A-\lambda I) = \det \left[\begin{array}{cc} -2 - \lambda & -2 \\ 10 & 7-\lambda \end{array}\right] \end{equation*}
\begin{equation*} = (-2-\lambda)(7-\lambda)+20 = \lambda ^2 -5\lambda +6 = (\lambda -2)(\lambda -3) \end{equation*}

The eigenvalues are the roots of the characteristic polynomial, namely \(2\) and \(3\text{.}\)
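As a check (assuming SymPy is available), the characteristic polynomial and the eigenvalues can be computed directly:

from sympy import Matrix

A = Matrix([[-2, -2], [10, 7]])
print(A.charpoly())    # the characteristic polynomial
print(A.eigenvals())   # {2: 1, 3: 1}, each eigenvalue with its multiplicity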

Example B.1.24. G4.

Explain how to find a basis for the eigenspace associated to the eigenvalue \(3\) in the matrix

\begin{equation*} \left[\begin{array}{ccc} -7 & -8 & 2 \\ 8 & 9 & -1 \\ \frac{13}{2} & 5 & 2 \end{array}\right]. \end{equation*}
Solution.

The eigenspace associated to \(3\) is the kernel of \(A-3I\text{,}\) so we compute

\begin{equation*} \RREF(A-3I) = \RREF \left[\begin{array}{ccc} -7-3 & -8 & 2 \\ 8 & 9-3 & -1 \\ \frac{13}{2} & 5 & 2-3 \end{array}\right] = \end{equation*}
\begin{equation*} \RREF \left[\begin{array}{ccc} -10 & -8 & 2 \\ 8 & 6 & -1 \\ \frac{13}{2} & 5 & -1 \end{array}\right] = \left[\begin{array}{ccc} 1 & 0 & 1 \\ 0 & 1 & -\frac{3}{2} \\ 0 & 0 & 0 \end{array}\right]. \end{equation*}

Thus we see the kernel is

\begin{equation*} \setBuilder{\left[\begin{array}{c} -a \\ \frac{3}{2} a \\ a \end{array}\right]}{a \in \IR} \end{equation*}

which has a basis of \(\left\{ \left[\begin{array}{c} -1 \\ \frac{3}{2} \\ 1 \end{array}\right] \right\}\text{.}\)
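This eigenspace basis can be double-checked with a computer algebra system. A minimal sketch, assuming SymPy is available:

from sympy import Matrix, Rational, eye

A = Matrix([[-7, -8, 2],
            [8, 9, -1],
            [Rational(13, 2), 5, 2]])
print((A - 3*eye(3)).nullspace())   # a basis of the eigenspace for the eigenvalue 3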