
Section B.1 Sample Exercises with Solutions

Here we model one exercise and solution for each learning objective. Your solutions need not look identical to those shown below, but these solutions indicate the level of detail required for a complete solution.

Example B.1.1. LE1.

Consider the scalar system of equations
\begin{alignat*}{5} 3x_1 &\,+\,& 2x_2 &\,\,& &\,+\,&x_4 &= 1 \\ -x_1 &\,-\,& 4x_2 &\,+\,&x_3&\,-\,&7x_4 &= 0 \\ &\,\,& x_2 &\,-\,&x_3 &\,\,& &= -2 \end{alignat*}
  1. Rewrite this system as a vector equation.
  2. Write an augmented matrix corresponding to this system.
Solution.
  1. \begin{equation*} x_1\left[\begin{array}{c} 3 \\ -1 \\ 0 \end{array}\right] + x_2 \left[\begin{array}{c}2 \\ -4 \\ 1 \end{array}\right]+ x_3 \left[\begin{array}{c} 0 \\ 1 \\ -1 \end{array}\right] + x_4 \left[\begin{array}{c} 1 \\ -7 \\ 0 \end{array}\right] = \left[\begin{array}{c} 1 \\ 0 \\ -2 \end{array}\right] \end{equation*}
  2. \begin{equation*} \left[\begin{array}{cccc|c} 3 & 2 & 0 & 1 & 1 \\ -1 & -4 & 1 & -7 & 0 \\ 0 & 1 & -1 & 0 & -2 \end{array}\right] \end{equation*}
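The two representations encode the same data: the columns of the coefficient matrix are exactly the vectors appearing in the vector equation. As a sanity check, this can be verified with a short sketch (assuming SymPy is available; the variable names are ours):

```python
from sympy import Matrix

# Augmented matrix from part 2: coefficients on the left, constants on the right
aug = Matrix([[ 3,  2,  0,  1,  1],
              [-1, -4,  1, -7,  0],
              [ 0,  1, -1,  0, -2]])
A, b = aug[:, :4], aug[:, 4]

# The columns of A are the vectors in the vector equation of part 1
assert A[:, 2] == Matrix([0, 1, -1])   # coefficient vector of x_3
assert b == Matrix([1, 0, -2])         # right-hand side vector
```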

Example B.1.2. LE2.

  1. For each of the following matrices, explain why it is not in reduced row echelon form.
    \begin{equation*} A = \left[\begin{array}{ccc} -4 & 0 & 4 \\ 0 & 1 & -2 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array}\right] \hspace{2em} B = \left[\begin{array}{ccc} 0 & 1 & 2 \\ 1 & 0 & -3 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array}\right] \hspace{2em} C = \left[\begin{array}{ccc} 1 & -4 & 4 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array}\right] \end{equation*}
  2. Show step-by-step why
    \begin{equation*} \RREF \left[\begin{array}{cccc} 0 & 3 & 1 & 2 \\ 1 & 2 & -1 & -3 \\ 2 & 4 & -1 & -1 \end{array}\right] = \left[\begin{array}{cccc} 1 & 0 & 0 & 4 \\ 0 & {1} & 0 & -1 \\ 0 & 0 & {1} & 5 \end{array}\right]. \end{equation*}
Solution.
    • \(A=\left[\begin{array}{ccc} -4 & 0 & 4 \\ 0 & 1 & -2 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array}\right]\) is not in reduced row echelon form because the pivots are not all \(1\text{.}\)
    • \(B=\left[\begin{array}{ccc} 0 & 1 & 2 \\ 1 & 0 & -3 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array}\right]\) is not in reduced row echelon form because the pivots are not descending to the right.
    • \(C=\left[\begin{array}{ccc} 1 & -4 & 4 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array}\right]\) is not in reduced row echelon form because not every entry above and below each pivot is zero.
  2. \begin{alignat*}{4} \left[\begin{array}{cccc} 0 & 3 & 1 & 2 \\ 1 & 2 & -1 & -3 \\ 2 & 4 & -1 & -1 \end{array}\right] &\sim& \left[\begin{array}{cccc} \markedPivot{1} & 2 & -1 & -3 \\ 0 & 3 & 1 & 2 \\ 2 & 4 & -1 & -1 \end{array}\right] &\hspace{0.2in} \text{Swap Rows 1 and 2}& \\ &\sim& \left[\begin{array}{cccc} \markedPivot{1} & 2 & -1 & -3 \\ 0 & 3 & 1 & 2 \\ 0 & 0 & 1 & 5 \end{array}\right] &\hspace{0.2in} \text{Add } -2 \text{ Row 1 to Row 3}& \\ &\sim& \left[\begin{array}{cccc} \markedPivot{1} & 2 & -1 & -3 \\ 0 & \markedPivot{1} & \frac{1}{3} & \frac{2}{3} \\ 0 & 0 & 1 & 5 \end{array}\right] &\hspace{0.2in} \text{Multiply Row 2 by } \frac{1}{3}& \\ &\sim& \left[\begin{array}{cccc} \markedPivot{1} & 0 & -\frac{5}{3} & -\frac{13}{3} \\ 0 & \markedPivot{1} & \frac{1}{3} & \frac{2}{3} \\ 0 & 0 & \markedPivot{1} & 5 \end{array}\right] &\hspace{0.2in} \text{Add } -2 \text{ Row 2 to Row 1}& \\ &\sim& \left[\begin{array}{cccc} \markedPivot{1} & 0 & -\frac{5}{3} & -\frac{13}{3} \\ 0 & \markedPivot{1} & 0 & -1 \\ 0 & 0 & \markedPivot{1} & 5 \end{array}\right] &\hspace{0.2in} \text{Add } -\frac{1}{3} \text{ Row 3 to Row 2}& \\ &\sim& \left[\begin{array}{cccc} \markedPivot{1} & 0 & 0 & 4 \\ 0 & \markedPivot{1} & 0 & -1 \\ 0 & 0 & \markedPivot{1} & 5 \end{array}\right] &\hspace{0.2in} \text{Add } \frac{5}{3} \text{ Row 3 to Row 1}& \end{alignat*}
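The row reduction above can be double-checked with SymPy's `Matrix.rref()`, which returns the reduced row echelon form together with the pivot column indices (a sketch, assuming SymPy is installed):

```python
from sympy import Matrix

M = Matrix([[0, 3,  1,  2],
            [1, 2, -1, -3],
            [2, 4, -1, -1]])
R, pivots = M.rref()
# The RREF matches the final matrix in the chain of row operations
assert R == Matrix([[1, 0, 0,  4],
                    [0, 1, 0, -1],
                    [0, 0, 1,  5]])
assert pivots == (0, 1, 2)   # a pivot in every row
```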

Example B.1.3. LE3.

Consider each of the following systems of linear equations or vector equations.
  1. \begin{equation*} \begin{matrix} -2 \, x_{1} & + & x_{2} & + & x_{3} & = & -2 \\ -2 \, x_{1} & - & 3 \, x_{2} & - & 3 \, x_{3} & = & 0 \\ 3 \, x_{1} & + & x_{2} & + & x_{3} & = & 3 \\ \end{matrix} \end{equation*}
  2. \begin{equation*} x_{1} \left[\begin{array}{c} -5 \\ 3 \\ -1 \end{array}\right] + x_{2} \left[\begin{array}{c} 3 \\ -2 \\ 2 \end{array}\right] + x_{3} \left[\begin{array}{c} 14 \\ -9 \\ 7 \end{array}\right] = \left[\begin{array}{c} 1 \\ 0 \\ -4 \end{array}\right] \end{equation*}
  3. \begin{equation*} x_{1} \left[\begin{array}{c} 0 \\ -1 \\ -1 \end{array}\right] + x_{2} \left[\begin{array}{c} 1 \\ -4 \\ -4 \end{array}\right] + x_{3} \left[\begin{array}{c} 2 \\ -4 \\ -3 \end{array}\right] = \left[\begin{array}{c} -5 \\ 11 \\ 8 \end{array}\right] \end{equation*}
  • Explain how to find a simpler system or vector equation that has the same solution set for each.
  • Explain whether each solution set has no solutions, one solution, or infinitely many solutions. If the set is finite, describe it using set notation.
Solution.
  1. \begin{equation*} \mathrm{RREF}\left[\begin{array}{ccc|c} -2 & 1 & 1 & -2 \\ -2 & -3 & -3 & 0 \\ 3 & 1 & 1 & 3 \end{array}\right]=\left[\begin{array}{ccc|c} 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right] \end{equation*}
    This matrix corresponds to the simpler system
    \begin{equation*} \begin{matrix} x_{1} & & & & & = & 0 \\ & & x_{2} & + & x_{3} & = & 0 \\ & & & & 0 & = & 1 \\ \end{matrix} \end{equation*}
    The third equation \(0=1\) indicates that the system has no solutions. The solution set is \(\emptyset\text{.}\)
  2. \begin{equation*} \mathrm{RREF}\left[\begin{array}{ccc|c} -5 & 3 & 14 & 1 \\ 3 & -2 & -9 & 0 \\ -1 & 2 & 7 & -4 \end{array}\right]=\left[\begin{array}{ccc|c} 1 & 0 & -1 & -2 \\ 0 & 1 & 3 & -3 \\ 0 & 0 & 0 & 0 \end{array}\right] \end{equation*}
    This matrix corresponds to the simpler system
    \begin{equation*} \begin{matrix} x_{1} & & & - & x_3 & = & -2 \\ & & x_{2} & + & 3\,x_{3} & = & -3 \\ & & & & 0 & = & 0 \\ \end{matrix}. \end{equation*}
    Since the third column is a non-pivot column, \(x_3\) is a free variable, so the solution set has infinitely many solutions.
  3. \begin{equation*} \mathrm{RREF}\left[\begin{array}{ccc|c} 0 & 1 & 2 & -5 \\ -1 & -4 & -4 & 11 \\ -1 & -4 & -3 & 8 \end{array}\right]=\left[\begin{array}{ccc|c} 1 & 0 & 0 & -3 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & -3 \end{array}\right] \end{equation*}
    This matrix corresponds to the simpler system
    \begin{equation*} \begin{matrix} x_{1} & & & & & = & -3 \\ & & x_{2} & & & = & 1 \\ & & & & x_{3} & = & -3 \\ \end{matrix}. \end{equation*}
    This system has one solution. The solution set is \(\left\{ \left[\begin{array}{c} -3 \\ 1 \\ -3 \end{array}\right] \right\}\text{.}\)
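The three cases above follow a general pattern: comparing the rank of the coefficient matrix with the rank of the augmented matrix, and with the number of variables, classifies the solution set. A hedged SymPy sketch (the helper `classify` is ours, not from the text):

```python
from sympy import Matrix

def classify(A, b):
    """Classify the solutions of A x = b by comparing ranks."""
    aug = A.row_join(b)
    if A.rank() < aug.rank():
        return "none"      # some row reduces to 0 = 1
    return "one" if A.rank() == A.cols else "infinite"

A1, b1 = Matrix([[-2, 1, 1], [-2, -3, -3], [3, 1, 1]]), Matrix([-2, 0, 3])
A2, b2 = Matrix([[-5, 3, 14], [3, -2, -9], [-1, 2, 7]]), Matrix([1, 0, -4])
A3, b3 = Matrix([[0, 1, 2], [-1, -4, -4], [-1, -4, -3]]), Matrix([-5, 11, 8])

assert classify(A1, b1) == "none"
assert classify(A2, b2) == "infinite"
assert classify(A3, b3) == "one"
assert A3.solve(b3) == Matrix([-3, 1, -3])   # the unique solution from part 3
```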

Example B.1.4. LE4.

Consider the following vector equation.
\begin{equation*} x_{1} \left[\begin{array}{c} -3 \\ 0 \\ 4 \end{array}\right] + x_{2} \left[\begin{array}{c} -3 \\ 0 \\ 4 \end{array}\right] + x_{3} \left[\begin{array}{c} 0 \\ 1 \\ 0 \end{array}\right] + x_{4} \left[\begin{array}{c} -4 \\ -5 \\ 5 \end{array}\right] = \left[\begin{array}{c} -11 \\ -9 \\ 14 \end{array}\right] \end{equation*}
  1. Explain how to find a simpler system or vector equation that has the same solution set.
  2. Explain how to describe this solution set using set notation.
Solution.
First, we compute
\begin{equation*} \mathrm{RREF}\left[\begin{array}{cccc|c} -3 & -3 & 0 & -4 & -11 \\ 0 & 0 & 1 & -5 & -9 \\ 4 & 4 & 0 & 5 & 14 \end{array}\right]=\left[\begin{array}{cccc|c} 1 & 1 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 1 & 2 \end{array}\right]. \end{equation*}
This corresponds to the simpler system
\begin{equation*} \begin{matrix} x_{1} & + & x_2 & & & & & = & 1 \\ & & & & x_3 & & & = & 1 \\ & & & & & & x_4 & = & 2 \\ \end{matrix}. \end{equation*}
Since the second column is a non-pivot column, we let \(x_2=a\text{.}\) Making this substitution and then solving for \(x_1\text{,}\) \(x_3\text{,}\) and \(x_4\) produces the system
\begin{equation*} \begin{matrix} x_1 &=& 1-a \\ x_2 &=& a \\ x_3 &=& 1 \\ x_4 &=& 2 \\ \end{matrix} \end{equation*}
Thus, the solution set is \(\left\{ \left[\begin{array}{c} -a + 1 \\ a \\ 1 \\ 2 \end{array}\right] \,\middle|\, a \in\mathbb R \right\} \text{.}\)
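The parametrized solution can be verified by checking that it satisfies the original vector equation for every value of the parameter, and again for a specific value. A sketch assuming SymPy:

```python
from sympy import Matrix, symbols

a = symbols("a")
A = Matrix([[-3, -3, 0, -4],
            [ 0,  0, 1, -5],
            [ 4,  4, 0,  5]])
b = Matrix([-11, -9, 14])

# The claimed solution, with free parameter a
sol = Matrix([1 - a, a, 1, 2])
assert (A * sol - b).expand() == Matrix([0, 0, 0])  # holds for every a
assert A * sol.subs(a, 7) == b                      # spot check at a = 7
```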

Example B.1.5. EV1.

  1. Write a statement involving the solutions of a vector equation that’s equivalent to each claim below.
    • \(\left[\begin{array}{c} -13 \\ 3 \\ -13 \end{array}\right]\) is a linear combination of the vectors \(\left[\begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right] , \left[\begin{array}{c} 2 \\ 0 \\ 2 \end{array}\right] , \left[\begin{array}{c} 3 \\ 0 \\ 3 \end{array}\right] , \text{ and } \left[\begin{array}{c} -5 \\ 1 \\ -5 \end{array}\right]\text{.}\)
    • \(\left[\begin{array}{c} -13 \\ 3 \\ -15 \end{array}\right]\) is a linear combination of the vectors \(\left[\begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right] , \left[\begin{array}{c} 2 \\ 0 \\ 2 \end{array}\right] , \left[\begin{array}{c} 3 \\ 0 \\ 3 \end{array}\right] , \text{ and } \left[\begin{array}{c} -5 \\ 1 \\ -5 \end{array}\right]\text{.}\)
  2. Use these statements to determine if each vector is or is not a linear combination. If it is, give an example of such a linear combination.
Solution.
  • \(\left[\begin{array}{c} -13 \\ 3 \\ -13 \end{array}\right]\) is a linear combination of the vectors \(\left[\begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right] , \left[\begin{array}{c} 2 \\ 0 \\ 2 \end{array}\right] , \left[\begin{array}{c} 3 \\ 0 \\ 3 \end{array}\right] , \text{ and } \left[\begin{array}{c} -5 \\ 1 \\ -5 \end{array}\right]\) exactly when the vector equation
    \begin{equation*} x_1\left[\begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right] +x_2 \left[\begin{array}{c} 2 \\ 0 \\ 2 \end{array}\right] +x_3 \left[\begin{array}{c} 3 \\ 0 \\ 3 \end{array}\right] +x_4 \left[\begin{array}{c} -5 \\ 1 \\ -5 \end{array}\right] = \left[\begin{array}{c} -13 \\ 3 \\ -13 \end{array}\right] \end{equation*}
    has a solution. To solve this vector equation, we compute
    \begin{equation*} \mathrm{RREF}\, \left[\begin{array}{cccc|c} 1 & 2 & 3 & -5 & -13 \\ 0 & 0 & 0 & 1 & 3 \\ 1 & 2 & 3 & -5 & -13 \end{array}\right] = \left[\begin{array}{cccc|c} 1 & 2 & 3 & 0 & 2 \\ 0 & 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right]\text{.} \end{equation*}
    We see that this vector equation has solution set \(\left\{\left[\begin{array}{c}2-2a-3b \\ a \\ b \\ 3 \end{array}\right]\ \middle|\ a,b \in \mathbb{R}\right\}\text{,}\) so \(\left[\begin{array}{c} -13 \\ 3 \\ -13 \end{array}\right]\) is a linear combination; for example, \(2 \left[\begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right] + 3 \left[\begin{array}{c} -5 \\ 1 \\ -5 \end{array}\right] = \left[\begin{array}{c} -13 \\ 3 \\ -13 \end{array}\right]\text{.}\)
  • \(\left[\begin{array}{c} -13 \\ 3 \\ -15 \end{array}\right]\) is a linear combination of the vectors \(\left[\begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right] , \left[\begin{array}{c} 2 \\ 0 \\ 2 \end{array}\right] , \left[\begin{array}{c} 3 \\ 0 \\ 3 \end{array}\right] , \text{ and } \left[\begin{array}{c} -5 \\ 1 \\ -5 \end{array}\right]\) exactly when the vector equation
    \begin{equation*} x_1\left[\begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right] +x_2 \left[\begin{array}{c} 2 \\ 0 \\ 2 \end{array}\right] +x_3 \left[\begin{array}{c} 3 \\ 0 \\ 3 \end{array}\right] +x_4 \left[\begin{array}{c} -5 \\ 1 \\ -5 \end{array}\right] = \left[\begin{array}{c} -13 \\ 3 \\ -15 \end{array}\right] \end{equation*}
    has a solution. To solve this vector equation, we compute
    \begin{equation*} \mathrm{RREF}\, \left[\begin{array}{cccc|c} 1 & 2 & 3 & -5 & -13 \\ 0 & 0 & 0 & 1 & 3 \\ 1 & 2 & 3 & -5 & -15 \end{array}\right] = \left[\begin{array}{cccc|c} 1 & 2 & 3 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{array}\right]\text{.} \end{equation*}
    This vector equation has no solution, so \(\left[\begin{array}{c} -13 \\ 3 \\ -15 \end{array}\right]\) is not a linear combination.
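Both membership questions reduce to a rank comparison: a vector is a linear combination of a set exactly when adjoining it as an extra column does not increase the rank. A sketch assuming SymPy (the helper `is_combo` is ours):

```python
from sympy import Matrix

vs = [Matrix([1, 0, 1]), Matrix([2, 0, 2]),
      Matrix([3, 0, 3]), Matrix([-5, 1, -5])]
A = Matrix.hstack(*vs)

def is_combo(A, target):
    """target is in the column span of A iff adjoining it keeps the rank."""
    return A.row_join(target).rank() == A.rank()

assert is_combo(A, Matrix([-13, 3, -13]))
assert not is_combo(A, Matrix([-13, 3, -15]))
# the particular combination found in the solution
assert 2*vs[0] + 3*vs[3] == Matrix([-13, 3, -13])
```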

Example B.1.6. EV2.

  1. Write a statement involving the solutions of a vector equation that’s equivalent to each claim below.
    • The set of vectors \(\left\{ \left[\begin{array}{c} 1 \\ -1 \\ 2 \\ 0 \end{array}\right] , \left[\begin{array}{c} 3 \\ -2 \\ 3 \\ 3 \end{array}\right] , \left[\begin{array}{c} 10 \\ -7 \\ 11 \\ 9 \end{array}\right] , \left[\begin{array}{c} -6 \\ 3 \\ -3 \\ -9 \end{array}\right] \right\}\) spans \(\mathbb R^4\text{.}\)
    • The set of vectors \(\left\{ \left[\begin{array}{c} 1 \\ -1 \\ 2 \\ 0 \end{array}\right] , \left[\begin{array}{c} 3 \\ -2 \\ 3 \\ 3 \end{array}\right] , \left[\begin{array}{c} 10 \\ -7 \\ 11 \\ 9 \end{array}\right] , \left[\begin{array}{c} -6 \\ 3 \\ -3 \\ -9 \end{array}\right] \right\}\) does not span \(\mathbb R^4\text{.}\)
  2. Explain how to determine which of these statements is true.
Solution.
The set of vectors \(\left\{ \left[\begin{array}{c} 1 \\ -1 \\ 2 \\ 0 \end{array}\right] , \left[\begin{array}{c} 3 \\ -2 \\ 3 \\ 3 \end{array}\right] , \left[\begin{array}{c} 10 \\ -7 \\ 11 \\ 9 \end{array}\right] , \left[\begin{array}{c} -6 \\ 3 \\ -3 \\ -9 \end{array}\right] \right\}\) spans \(\mathbb{R}^4\) exactly when the vector equation
\begin{equation*} x_1 \left[\begin{array}{c} 1 \\ -1 \\ 2 \\ 0 \end{array}\right] +x_2 \left[\begin{array}{c} 3 \\ -2 \\ 3 \\ 3 \end{array}\right] +x_3 \left[\begin{array}{c} 10 \\ -7 \\ 11 \\ 9 \end{array}\right] +x_4 \left[\begin{array}{c} -6 \\ 3 \\ -3 \\ -9 \end{array}\right] =\vec{v} \end{equation*}
has a solution for all \(\vec{v} \in \mathbb{R}^4\text{.}\) If there is some vector \(\vec{v} \in \mathbb{R}^4\) for which this vector equation has no solution, then the set does not span \(\mathbb{R}^4\text{.}\) To answer this, we compute
\begin{equation*} \mathrm{RREF}\, \left[\begin{array}{cccc} 1 & 3 & 10 & -6 \\ -1 & -2 & -7 & 3 \\ 2 & 3 & 11 & -3 \\ 0 & 3 & 9 & -9 \end{array}\right] = \left[\begin{array}{cccc} 1 & 0 & 1 & 3 \\ 0 & 1 & 3 & -3 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]\text{.} \end{equation*}
Since the RREF has only two pivots, its bottom two rows are zero, so for some \(\vec{v} \in \mathbb{R}^4\) this vector equation has no solution. Therefore the set of vectors \(\left\{ \left[\begin{array}{c} 1 \\ -1 \\ 2 \\ 0 \end{array}\right] , \left[\begin{array}{c} 3 \\ -2 \\ 3 \\ 3 \end{array}\right] , \left[\begin{array}{c} 10 \\ -7 \\ 11 \\ 9 \end{array}\right] , \left[\begin{array}{c} -6 \\ 3 \\ -3 \\ -9 \end{array}\right] \right\}\) does not span \(\mathbb{R}^4\text{.}\)
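Equivalently, the set spans \(\mathbb{R}^4\) exactly when the matrix with these columns has a pivot in every row, i.e. rank \(4\). A SymPy sketch:

```python
from sympy import Matrix

A = Matrix([[ 1,  3, 10, -6],
            [-1, -2, -7,  3],
            [ 2,  3, 11, -3],
            [ 0,  3,  9, -9]])
# Rank 4 would mean the columns span R^4; here only two pivots survive
assert A.rank() == 2
assert A.rank() < A.rows   # so the set does not span R^4
```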

Example B.1.7. EV3.

Consider the following two sets of Euclidean vectors.
\begin{equation*} W = \setBuilder{\left[\begin{array}{c} x \\ y \\ z \\ w \end{array}\right] }{x+y=3z+2w} \hspace{3em} U = \setBuilder{\left[\begin{array}{c} x \\ y \\ z \\ w \end{array}\right]}{x+y=3z+w^2} \end{equation*}
Explain why one of these sets is a subspace of \(\IR^4\text{,}\) and why the other is not.
Solution.
To show that \(W\) is a subspace, let \(\vec v=\left[\begin{array}{c} x_1 \\y_1 \\ z_1 \\ w_1 \end{array}\right]\in W\) and \(\vec w=\left[\begin{array}{c} x_2 \\y_2 \\ z_2 \\ w_2 \end{array}\right] \in W \text{,}\) so we know that \(x_1+y_1=3z_1+2w_1\) and \(x_2+y_2=3z_2+2w_2\text{.}\)
Consider
\begin{equation*} \left[\begin{array}{c} x_1 \\y_1 \\ z_1 \\ w_1\end{array}\right] +\left[\begin{array}{c} x_2 \\y_2 \\ z_2 \\ w_2 \end{array}\right] =\left[\begin{array}{c} x_1+x_2 \\y_1+y_2 \\ z_1+z_2 \\w_1+w_2 \end{array}\right] . \end{equation*}
To see if \(\vec{v}+\vec{w} \in W\text{,}\) we need to check if \((x_1+x_2)+(y_1+y_2) = 3(z_1+z_2)+2(w_1+w_2)\text{.}\) We compute
\begin{align*} (x_1+x_2)+(y_1+y_2) &= (x_1+y_1)+(x_2+y_2) &\text{by regrouping}\\ &= (3z_1+2w_1)+(3z_2+2w_2) & \text{since } \vec{v},\vec{w}\in W\\ &=3(z_1+z_2)+2(w_1+w_2) & \text{by regrouping.} \end{align*}
Thus \(\vec v+\vec w\in W\text{,}\) so \(W\) is closed under vector addition.
Now consider
\begin{equation*} c\vec v =\left[\begin{array}{c} cx_1 \\cy_1 \\ cz_1 \\ cw_1 \end{array}\right] . \end{equation*}
Similarly, to check that \(c\vec{v} \in W\text{,}\) we need to check if \(cx_1+cy_1=3(cz_1)+2(cw_1)\text{,}\) so we compute
\begin{align*} cx_1+cy_1 & = c(x_1+y_1) &\text{by factoring}\\ &=c(3z_1+2w_1) &\text{since } \vec{v}\in W\\ &=3(cz_1)+2(cw_1) &\text{by regrouping} \end{align*}
and we see that \(c\vec v\in W\text{,}\) so \(W\) is closed under scalar multiplication. Therefore \(W\) is a subspace of \(\IR^4\text{.}\)
Now, to show \(U\) is not a subspace, we will show that it is not closed under vector addition.
  • (Solution Method 1) Now let \(\vec v=\left[\begin{array}{c} x_1 \\y_1 \\ z_1 \\ w_1 \end{array}\right]\in U\) and \(\vec w=\left[\begin{array}{c} x_2 \\y_2 \\ z_2 \\ w_2 \end{array}\right] \in U \text{,}\) so we know that \(x_1+y_1=3z_1+w_1^2\) and \(x_2+y_2=3z_2+w_2^2\text{.}\)
    Consider
    \begin{equation*} \vec{v}+\vec{w}= \left[\begin{array}{c} x_1 \\y_1 \\ z_1 \\ w_1\end{array}\right] +\left[\begin{array}{c} x_2 \\y_2 \\ z_2 \\ w_2 \end{array}\right] =\left[\begin{array}{c} x_1+x_2 \\y_1+y_2 \\ z_1+z_2 \\w_1+w_2 \end{array}\right] . \end{equation*}
    To see if \(\vec{v}+\vec{w} \in U\text{,}\) we need to check if \((x_1+x_2)+(y_1+y_2) = 3(z_1+z_2)+(w_1+w_2)^2\text{.}\) We compute
    \begin{align*} (x_1+x_2)+(y_1+y_2) &= (x_1+y_1)+(x_2+y_2) &\text{by regrouping}\\ &= (3z_1+w_1^2)+(3z_2+w_2^2) &\text{since } \vec{v},\vec{w}\in U\\ &=3(z_1+z_2)+(w_1^2+w_2^2) &\text{by regrouping} \end{align*}
    and thus \(\vec v+\vec w\in U\) \textbf{only when} \(w_1^2+w_2^2=(w_1+w_2)^2\text{.}\) Since this is not true in general, \(U\) is not closed under vector addition, and thus cannot be a subspace.
  • (Solution Method 2) Note that the vector \(\vec v=\left[\begin{array}{c} 0\\1\\0\\1\end{array}\right] \) belongs to \(U\) since \(0+1=3(0)+1^2\text{.}\) However, the vector \(2\vec v=\left[\begin{array}{c} 0\\2\\0\\2\end{array}\right] \) does not belong to \(U\) since \(0+2\not=3(0)+2^2\text{.}\) Therefore \(U\) is not closed under scalar multiplication, and thus is not a subspace.
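Solution Method 2 is easy to check numerically; here is a minimal sketch in plain Python (the membership helper `in_U` is our name, not from the text):

```python
def in_U(v):
    """Membership test for U = {(x, y, z, w) : x + y == 3z + w**2}."""
    x, y, z, w = v
    return x + y == 3*z + w**2

v = (0, 1, 0, 1)
assert in_U(v)                          # 0 + 1 == 3*0 + 1**2
assert not in_U(tuple(2*c for c in v))  # 0 + 2 != 3*0 + 2**2: not closed
```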

Example B.1.8. EV4.

  1. Write a statement involving the solutions of a vector equation that’s equivalent to each claim below.
    • The set of vectors \(\left\{ \left[\begin{array}{c} 1 \\ 3 \\ 4 \\ -4 \end{array}\right] , \left[\begin{array}{c} -1 \\ -3 \\ -4 \\ 4 \end{array}\right] , \left[\begin{array}{c} 0 \\ 1 \\ 3 \\ -3 \end{array}\right] \right\}\) is linearly independent.
    • The set of vectors \(\left\{ \left[\begin{array}{c} 1 \\ 3 \\ 4 \\ -4 \end{array}\right] , \left[\begin{array}{c} -1 \\ -3 \\ -4 \\ 4 \end{array}\right] , \left[\begin{array}{c} 0 \\ 1 \\ 3 \\ -3 \end{array}\right] \right\}\) is linearly dependent.
  2. Explain how to determine which of these statements is true.
Solution.
The set of vectors \(\left\{ \left[\begin{array}{c} 1 \\ 3 \\ 4 \\ -4 \end{array}\right] , \left[\begin{array}{c} -1 \\ -3 \\ -4 \\ 4 \end{array}\right] , \left[\begin{array}{c} 0 \\ 1 \\ 3 \\ -3 \end{array}\right] \right\}\) is linearly independent exactly when the vector equation
\begin{equation*} x_1 \left[\begin{array}{c} 1 \\ 3 \\ 4 \\ -4 \end{array}\right] +x_2 \left[\begin{array}{c} -1 \\ -3 \\ -4 \\ 4 \end{array}\right] +x_3 \left[\begin{array}{c} 0 \\ 1 \\ 3 \\ -3 \end{array}\right] =\left[\begin{array}{c}0 \\ 0 \\ 0 \\ 0 \end{array}\right] \end{equation*}
has no nontrivial (i.e. nonzero) solutions. The set is linearly dependent when there exists a nontrivial (i.e. nonzero) solution. We compute
\begin{equation*} \mathrm{RREF}\, \left[\begin{array}{ccc} 1 & -1 & 0 \\ 3 & -3 & 1 \\ 4 & -4 & 3 \\ -4 & 4 & -3 \end{array}\right] = \left[\begin{array}{ccc} 1 & -1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array}\right]\text{.} \end{equation*}
Thus, this vector equation has a solution set \(\left\{ \left[\begin{array}{c}a \\ a \\ 0 \end{array}\right]\ \middle|\ a \in \mathbb{R}\right\}\text{.}\) Since there are nontrivial solutions, we conclude that the set of vectors \(\left\{ \left[\begin{array}{c} 1 \\ 3 \\ 4 \\ -4 \end{array}\right] , \left[\begin{array}{c} -1 \\ -3 \\ -4 \\ 4 \end{array}\right] , \left[\begin{array}{c} 0 \\ 1 \\ 3 \\ -3 \end{array}\right] \right\}\) is linearly dependent.
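Linear dependence is equivalent to the matrix of column vectors having a nontrivial null space, which SymPy can confirm directly (a sketch):

```python
from sympy import Matrix

A = Matrix([[ 1, -1,  0],
            [ 3, -3,  1],
            [ 4, -4,  3],
            [-4,  4, -3]])
null_basis = A.nullspace()
assert len(null_basis) == 1                        # nontrivial kernel: dependent
assert A * null_basis[0] == Matrix.zeros(4, 1)
# e.g. the solution (a, a, 0) from the text with a = 1
assert A * Matrix([1, 1, 0]) == Matrix.zeros(4, 1)
```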

Example B.1.9. EV5.

  1. Write a statement involving spanning and independence properties that’s equivalent to each claim below.
    • The set of vectors \(\left\{ \left[\begin{array}{c} 1 \\ 3 \\ 4 \\ -4 \end{array}\right] , \left[\begin{array}{c} 0 \\ 1 \\ 3 \\ -3 \end{array}\right] , \left[\begin{array}{c} 3 \\ 11 \\ 18 \\ -18 \end{array}\right] , \left[\begin{array}{c} -2 \\ -7 \\ -11 \\ 11 \end{array}\right] \right\}\) is a basis of \(\mathbb{R}^4\text{.}\)
    • The set of vectors \(\left\{ \left[\begin{array}{c} 1 \\ 3 \\ 4 \\ -4 \end{array}\right] , \left[\begin{array}{c} 0 \\ 1 \\ 3 \\ -3 \end{array}\right] , \left[\begin{array}{c} 3 \\ 11 \\ 18 \\ -18 \end{array}\right] , \left[\begin{array}{c} -2 \\ -7 \\ -11 \\ 11 \end{array}\right] \right\}\) is not a basis of \(\mathbb{R}^4\text{.}\)
  2. Explain how to determine which of these statements is true.
Solution.
The set of vectors \(\left\{ \left[\begin{array}{c} 1 \\ 3 \\ 4 \\ -4 \end{array}\right] , \left[\begin{array}{c} 0 \\ 1 \\ 3 \\ -3 \end{array}\right] , \left[\begin{array}{c} 3 \\ 11 \\ 18 \\ -18 \end{array}\right] , \left[\begin{array}{c} -2 \\ -7 \\ -11 \\ 11 \end{array}\right] \right\}\) is a basis of \(\mathbb{R}^4\) exactly when it is linearly independent and the set spans \(\mathbb{R}^4\text{.}\) If it is either linearly dependent, or the set does not span \(\mathbb{R}^4\text{,}\) then the set is not a basis.
To answer this, we compute
\begin{equation*} \mathrm{RREF}\, \left[\begin{array}{cccc} 1 & 0 & 3 & -2 \\ 3 & 1 & 11 & -7 \\ 4 & 3 & 18 & -11 \\ -4 & -3 & -18 & 11 \end{array}\right] = \left[\begin{array}{cccc} 1 & 0 & 3 & -2 \\ 0 & 1 & 2 & -1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]\text{.} \end{equation*}
We see that this set of vectors is linearly dependent, so therefore the set of vectors \(\left\{ \left[\begin{array}{c} 1 \\ 3 \\ 4 \\ -4 \end{array}\right] , \left[\begin{array}{c} 0 \\ 1 \\ 3 \\ -3 \end{array}\right] , \left[\begin{array}{c} 3 \\ 11 \\ 18 \\ -18 \end{array}\right] , \left[\begin{array}{c} -2 \\ -7 \\ -11 \\ 11 \end{array}\right] \right\}\) is not a basis.
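Since four vectors in \(\mathbb{R}^4\) give a square matrix, spanning and independence are both equivalent to invertibility, so the basis question reduces to a rank (or determinant) check. A SymPy sketch:

```python
from sympy import Matrix

A = Matrix([[ 1,  0,   3,  -2],
            [ 3,  1,  11,  -7],
            [ 4,  3,  18, -11],
            [-4, -3, -18,  11]])
assert A.rank() == 2   # fewer than 4 pivots: dependent and not spanning
assert A.det() == 0    # equivalently, the matrix is singular: not a basis
```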

Example B.1.10. EV6.

Consider the subspace
\begin{equation*} W = \vspan \left\{ \left[\begin{array}{c} 1 \\ -3 \\ -1 \\ 2 \end{array}\right] , \left[\begin{array}{c} 1 \\ 0 \\ 1 \\ -2 \end{array}\right] , \left[\begin{array}{c} 3 \\ -6 \\ -1 \\ 2 \end{array}\right] , \left[\begin{array}{c} 1 \\ 6 \\ 1 \\ -1 \end{array}\right] , \left[\begin{array}{c} 2 \\ 3 \\ 0 \\ 1 \end{array}\right] \right\} . \end{equation*}
  1. Explain how to find a basis of \(W\text{.}\)
  2. Explain how to find the dimension of \(W\text{.}\)
Solution.
  1. Observe that
    \begin{equation*} \RREF \left[\begin{array}{ccccc} 1 & 1 & 3 & 1 & 2 \\ -3 & 0 & -6 & 6 & 3 \\ -1 & 1 & -1 & 1 & 0 \\ 2 & -2 & 2 & -1 & 1 \end{array}\right] = \left[\begin{array}{ccccc} 1 & 0 & 2 & 0 & 1 \\ 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right] \end{equation*}
    If we remove the vectors yielding non-pivot columns, the resulting set spans the same subspace \(W\) while being linearly independent. Therefore
    \begin{equation*} \left\{ \left[\begin{array}{c} 1 \\ -3 \\ -1 \\ 2 \end{array}\right] , \left[\begin{array}{c} 1 \\ 0 \\ 1 \\ -2 \end{array}\right] , \left[\begin{array}{c} 1 \\ 6 \\ 1 \\ -1 \end{array}\right] \right\} \end{equation*}
    is a basis of \(W\text{.}\)
  2. Since this (and thus every other) basis has three vectors in it, the dimension of \(W\) is \(3\text{.}\)
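The pivot columns picked out above can be read off programmatically; a sketch assuming SymPy:

```python
from sympy import Matrix

A = Matrix([[ 1,  1,  3,  1, 2],
            [-3,  0, -6,  6, 3],
            [-1,  1, -1,  1, 0],
            [ 2, -2,  2, -1, 1]])
_, pivots = A.rref()
assert pivots == (0, 1, 3)              # original vectors 1, 2, and 4
basis = [A[:, j] for j in pivots]       # the basis of W found above
assert len(basis) == 3                  # so dim W = 3
assert basis[2] == Matrix([1, 6, 1, -1])
```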

Example B.1.11. EV7.

Consider the homogeneous system of equations
\begin{alignat*}{6} x_1 &\,+\,& x_2 &\,+\,& 3x_3 &\,+\,& x_4 &\,+\,& 2x_5 &=& 0\\ -3x_1 &\,\,& &\,-\,& 6x_3 &\,+\,&6 x_4 &\,+\,& 3x_5 &=& 0\\ -x_1 &\,+\,& x_2 &\,-\,& x_3 &\,+\,& x_4 &\,\,& &=& 0\\ 2x_1 &\,-\,& 2x_2 &\,+\,& 2x_3 &\,-\,& x_4 &\,+\,& x_5 &=& 0 \end{alignat*}
  1. Find the solution space of the system.
  2. Find a basis of the solution space.
Solution.
  1. Observe that
    \begin{equation*} \RREF \left[\begin{array}{ccccc|c} 1 & 1 & 3 & 1 & 2 & 0\\ -3 & 0 & -6 & 6 & 3 & 0\\ -1 & 1 & -1 & 1 & 0 & 0\\ 2 & -2 & 2 & -1 & 1& 0 \end{array}\right] = \left[\begin{array}{ccccc|c} 1 & 0 & 2 & 0 & 1 &0\\ 0 & 1 & 1 & 0 & 0 &0\\ 0 & 0 & 0 & 1 & 1 &0\\ 0 & 0 & 0 & 0 & 0&0 \end{array}\right] \end{equation*}
    Letting \(x_3=a\) and \(x_5=b\) (since those correspond to the non-pivot columns), this is equivalent to the system
    \begin{alignat*}{6} x_1 &\,\,& &\,+\,& 2x_3 &\,\,& &\,+\,& x_5 &=& 0\\ &\,\,& x_2 &\,+\,& x_3 &\,\,& &\,\,& &=& 0\\ &\,\,& &\,\,& x_3 &\,\,& &\,\,& &=& a\\ &\,\,& &\,\,& &\,\,& x_4 &\,+\,& x_5 &=& 0\\ &\,\,& &\,\,& &\,\,& &\,\,& x_5 &=& b \end{alignat*}
    Thus, the solution set is
    \begin{equation*} \setBuilder{\left[\begin{array}{c} -2a-b \\ -a \\ a \\ -b \\ b \end{array}\right]}{a,b \in \IR} . \end{equation*}
  2. Since we can write
    \begin{equation*} \left[\begin{array}{c} -2a-b \\ -a \\ a \\ -b \\ b \end{array}\right] = a \left[\begin{array}{c} -2 \\ -1 \\ 1 \\ 0 \\ 0 \end{array}\right] + b \left[\begin{array}{c} -1 \\ 0 \\ 0 \\ -1 \\ 1 \end{array}\right], \end{equation*}
    a basis for the solution space is
    \begin{equation*} \left \{ \left[\begin{array}{c} -2 \\ -1 \\ 1 \\ 0 \\ 0 \end{array}\right] , \left[\begin{array}{c} -1 \\ 0 \\ 0 \\ -1 \\ 1 \end{array}\right] \right\}. \end{equation*}
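Each claimed basis vector can be checked to lie in the solution space, and SymPy's `nullspace` confirms that the space is two-dimensional (a sketch):

```python
from sympy import Matrix

A = Matrix([[ 1,  1,  3,  1, 2],
            [-3,  0, -6,  6, 3],
            [-1,  1, -1,  1, 0],
            [ 2, -2,  2, -1, 1]])
b1 = Matrix([-2, -1, 1,  0, 0])
b2 = Matrix([-1,  0, 0, -1, 1])
for v in (b1, b2):
    assert A * v == Matrix.zeros(4, 1)   # each lies in the solution space
assert len(A.nullspace()) == 2           # and that space is 2-dimensional
```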

Example B.1.12. AT1.

Consider the following maps of polynomials \(S: \P \rightarrow \P\) and \(T:\P\rightarrow\P\) defined by
\begin{equation*} S(f(x))= 3xf(x) \text{ and }T(f(x)) = 3f'(x)f(x). \end{equation*}
Explain why one of these maps is a linear transformation, and why the other map is not.
Solution.
To show \(S\) is a linear transformation, we must show two things:
\begin{equation*} S\left(f(x)+g(x)\right)=S(f(x))+S(g(x)) \end{equation*}
\begin{equation*} S(cf(x)) = cS(f(x)) \end{equation*}
To show \(S\) respects addition, we compute
\begin{align*} S\left(f(x)+g(x)\right) &= 3x\left(f(x)+g(x)\right) & \text{by definition of } S\\ &= 3xf(x)+3xg(x) & \text{by distributing} \end{align*}
But note that \(S(f(x))=3xf(x)\) and \(S(g(x))=3xg(x)\text{,}\) so we have \(S(f(x)+g(x))=S(f(x))+S(g(x))\text{.}\)
For the second part, we compute
\begin{align*} S\left(cf(x)\right) &= 3x\left(cf(x)\right) & \text{by definition of } S\\ &= 3cxf(x) & \text{rewriting the multiplication.} \end{align*}
But note that \(cS(f(x))=c(3xf(x))=3cxf(x)\) as well, so we have \(S(cf(x))=cS(f(x))\text{.}\) Now, since \(S\) respects both addition and scalar multiplication, we can conclude \(S\) is a linear transformation.
  • (Solution method 1) As for \(T\text{,}\) we compute
    \begin{align*} T(f(x)+g(x))& =3 (f(x)+g(x))'(f(x)+g(x)) &\text{by definition of } T\\ &= 3(f'(x)+g'(x))(f(x)+g(x)) & \text{since the derivative is linear}\\ &= 3f(x)f'(x)+3f(x)g'(x)+3f'(x)g(x)+3g(x)g'(x) &\text{by distributing} \end{align*}
    However, note that \(T(f(x))+T(g(x))=3f'(x)f(x)+3g'(x)g(x)\text{,}\) which is not always the same polynomial (for example, when \(f(x)=g(x)=x\)). So we see that \(T(f(x)+g(x)) \neq T(f(x))+T(g(x))\text{,}\) so \(T\) does not respect addition and is therefore not a linear transformation.
  • (Solution method 2) As for \(T\text{,}\) we may choose the polynomial \(f(x)=x\) and scalar \(c=2\text{.}\) Then
    \begin{equation*} T(cf(x))=T(2x)=3(2x)'(2x)=3(2)(2x)=12x. \end{equation*}
    But on the other hand,
    \begin{equation*} cT(f(x))=2T(x)=2(3)(x)'(x)=2(3)(1)(x)=6x. \end{equation*}
    Since this isn’t the same polynomial, \(T\) does not preserve scalar multiplication and is therefore not a linear transformation.
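Both arguments can be replayed symbolically with SymPy's polynomial arithmetic (a sketch; the Python functions `S` and `T` mirror the maps defined in the text):

```python
from sympy import symbols, diff, expand

x = symbols("x")

def S(f):
    return 3*x*f              # S(f(x)) = 3x f(x)

def T(f):
    return 3*diff(f, x)*f     # T(f(x)) = 3 f'(x) f(x)

f = g = x                     # counterexample from the text: f(x) = g(x) = x
assert expand(S(f + g)) == expand(S(f) + S(g))   # S respects this addition
assert expand(S(2*f)) == expand(2*S(f))          # and this scaling
assert expand(T(f + g)) != expand(T(f) + T(g))   # T(2x) = 12x but 6x on the right
assert expand(T(2*f)) != expand(2*T(f))          # 12x versus 6x again
```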

Example B.1.13. AT2.

  1. Find the standard matrix for the linear transformation \(T: \IR^3\rightarrow \IR^4\) given by
    \begin{equation*} T\left(\left[\begin{array}{c} x \\ y \\ z \\ \end{array}\right] \right) = \left[\begin{array}{c} -x+y \\ -x+3y-z \\ 7x+y+3z \\ 0 \end{array}\right]. \end{equation*}
  2. Let \(S: \IR^4 \rightarrow \IR^3\) be the linear transformation given by the standard matrix
    \begin{equation*} \left[\begin{array}{cccc} 2 & 3 & 4 & 1 \\ 0 & 1 & -1 & -1 \\ 3 & -2 & -2 & 4 \end{array}\right]. \end{equation*}
    Compute \(S\left( \left[\begin{array}{c} -2 \\ 1 \\ 3 \\ 2\end{array}\right] \right) \text{.}\)
Solution.
  1. Since
    \begin{align*} T\left(\left[\begin{array}{c} 1 \\ 0 \\ 0 \end{array}\right]\right) &= \left[\begin{array}{c} -1 \\ -1 \\ 7 \\0\end{array}\right]\\ T\left(\left[\begin{array}{c} 0 \\ 1 \\ 0 \end{array}\right]\right) &= \left[\begin{array}{c} 1 \\ 3 \\ 1 \\0 \end{array}\right]\\ T\left(\left[\begin{array}{c} 0 \\ 0 \\ 1 \end{array}\right]\right) &= \left[\begin{array}{c} 0 \\ -1 \\ 3 \\ 0 \end{array}\right], \end{align*}
    the standard matrix for \(T\) is \(\left[\begin{array}{ccc} -1 & 1 & 0 \\ -1 & 3 & -1 \\ 7 & 1 & 3 \\ 0 & 0 & 0 \end{array}\right] \text{.}\)
  2. \begin{equation*} S\left(\left[\begin{array}{c} -2 \\ 1 \\ 3 \\ 2 \end{array}\right] \right) = -2S(\vec{e}_1)+S(\vec{e}_2)+3S(\vec{e}_3)+2S(\vec{e}_4) \end{equation*}
    \begin{equation*} = -2 \left[\begin{array}{c} 2 \\ 0 \\ 3 \end{array}\right] + \left[\begin{array}{c} 3 \\ 1 \\ -2 \end{array}\right] + 3 \left[\begin{array}{c} 4 \\ -1 \\ -2 \end{array}\right]+2\left[\begin{array}{c} 1 \\ -1 \\ 4 \end{array}\right] = \left[\begin{array}{c} 13 \\ -4 \\ -6\end{array}\right]. \end{equation*}
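Both parts can be checked with a short SymPy computation (a sketch; `T_vec` is our name for the formula defining \(T\)):

```python
from sympy import Matrix

def T_vec(x, y, z):
    return Matrix([-x + y, -x + 3*y - z, 7*x + y + 3*z, 0])

# Part 1: the columns of the standard matrix are T(e1), T(e2), T(e3)
std = Matrix.hstack(T_vec(1, 0, 0), T_vec(0, 1, 0), T_vec(0, 0, 1))
assert std == Matrix([[-1, 1, 0], [-1, 3, -1], [7, 1, 3], [0, 0, 0]])

# Part 2: applying S is multiplication by its standard matrix
S_std = Matrix([[2, 3, 4, 1], [0, 1, -1, -1], [3, -2, -2, 4]])
assert S_std * Matrix([-2, 1, 3, 2]) == Matrix([13, -4, -6])
```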

Example B.1.14. AT3.

Let \(T: \IR^4 \rightarrow \IR^3\) be the linear transformation given by
\begin{equation*} T\left(\left[\begin{array}{c}x\\y\\z\\w\end{array}\right] \right) = \left[\begin{array}{c} x+3y+2z-3w \\ 2x+4y+6z-10w \\ x+6y-z+3w \end{array}\right] \end{equation*}
  1. Explain how to find the image of \(T\) and the kernel of \(T\text{.}\)
  2. Explain how to find a basis of the image of \(T\) and a basis of the kernel of \(T\text{.}\)
  3. Explain how to find the rank and nullity of \(T\text{,}\) and why the rank-nullity theorem holds for \(T\text{.}\)
Solution.
  1. To find the image we compute
    \begin{equation*} \Im(T) = T\left(\vspan\left\{\vec{e}_1,\vec{e}_2,\vec{e}_3,\vec{e}_4\right\}\right) \end{equation*}
    \begin{equation*} = \vspan\left\{T(\vec{e}_1),T(\vec{e}_2),T(\vec{e}_3),T(\vec{e}_4)\right\} \end{equation*}
    \begin{equation*} = \vspan\left\{\left[\begin{array}{c}1 \\ 2 \\ 1 \end{array}\right], \left[\begin{array}{c} 3 \\ 4 \\ 6 \end{array}\right], \left[\begin{array}{c} 2 \\ 6 \\ -1 \end{array}\right], \left[\begin{array}{c} -3 \\ -10 \\ 3 \end{array}\right]\right\}. \end{equation*}
  2. The kernel is the solution set of the corresponding homogeneous system of equations, i.e.
    \begin{alignat*}{5} x &+& 3y &+& 2z &-& 3w &=& 0\\ 2x &+& 4y &+& 6z &-& 10w &=& 0\\ x &+& 6y &-& z &+& 3w &=& 0 . \end{alignat*}
    So we compute
    \begin{equation*} \RREF\left[\begin{array}{cccc|c} 1 & 3 & 2 & -3 & 0 \\ 2 & 4 & 6 & -10 &0 \\ 1 & 6 & -1 & 3 & 0 \end{array}\right] = \left[\begin{array}{cccc|c} 1 & 0 & 5 & -9 & 0 \\ 0 & 1 & -1 & 2 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right]. \end{equation*}
    Then, letting \(z=a\) and \(w=b\) we have
    \begin{equation*} \ker T = \setBuilder{\left[\begin{array}{c}-5a+9b \\ a-2b \\ a \\ b \end{array}\right]}{a,b \in \IR}. \end{equation*}
  3. Since \(\Im(T) = \vspan\left\{\left[\begin{array}{c}1 \\ 2 \\ 1 \end{array}\right], \left[\begin{array}{c} 3 \\ 4 \\ 6 \end{array}\right], \left[\begin{array}{c} 2 \\ 6 \\ -1 \end{array}\right], \left[\begin{array}{c} -3 \\ -10 \\ 3 \end{array}\right]\right\}\text{,}\) we simply need to find a linearly independent subset of these four spanning vectors. So we compute
    \begin{equation*} \RREF \left[\begin{array}{cccc}1 & 3 & 2 & -3 \\ 2 & 4 & 6 & -10 \\ 1 & 6 & -1 & 3 \end{array}\right] = \left[\begin{array}{cccc} 1 & 0 & 5 & -9 \\ 0 & 1 & -1 & 2 \\ 0 & 0 & 0 & 0\end{array}\right]. \end{equation*}
    Since the first two columns are pivot columns, they form a linearly independent spanning set, so a basis for \(\Im T\) is \(\setList{\left[\begin{array}{c}1\\2\\1 \end{array}\right], \left[\begin{array}{c}3\\4\\6 \end{array}\right]}.\)
    To find a basis for the kernel, note that
    \begin{equation*} \ker T = \setBuilder{\left[\begin{array}{c}-5a+9b \\ a-2b \\ a \\ b \end{array}\right]}{a,b \in \IR} \end{equation*}
    \begin{equation*} = \setBuilder{a \left[\begin{array}{c}-5 \\ 1 \\ 1 \\ 0 \end{array}\right]+b \left[\begin{array}{c} 9 \\ -2 \\ 0 \\ 1 \end{array}\right]}{a,b \in \IR} \end{equation*}
    \begin{equation*} = \vspan\left\{ \left[\begin{array}{c} -5 \\ 1 \\ 1 \\ 0 \end{array}\right], \left[\begin{array}{c} 9 \\ -2 \\ 0 \\ 1 \end{array}\right]\right\}. \end{equation*}
    So a basis for the kernel is
    \begin{equation*} \setList{\left[\begin{array}{c}-5 \\ 1 \\ 1 \\ 0 \end{array}\right], \left[\begin{array}{c}9 \\ -2 \\ 0 \\ 1 \end{array}\right]}. \end{equation*}
  4. The dimension of the image (the rank) is \(2\text{,}\) the dimension of the kernel (the nullity) is \(2\text{,}\) and the dimension of the domain of \(T\) is \(4\text{,}\) so we see \(2+2=4\text{,}\) which verifies that the sum of the rank and nullity of \(T\) is the dimension of the domain of \(T\text{.}\)
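The rank-nullity bookkeeping above can be reproduced computationally. The following sketch (an illustrative check using the sympy library, not part of the written solution) confirms the rank and nullity of the standard matrix of \(T\):

```python
import sympy as sp

# Standard matrix of T from the exercise
A = sp.Matrix([[1, 3, 2, -3],
               [2, 4, 6, -10],
               [1, 6, -1, 3]])

rank = A.rank()          # dimension of the image of T
nullity = A.cols - rank  # dimension of the kernel of T

# Rank-nullity: rank + nullity equals the dimension of the domain
print(rank, nullity, rank + nullity)
```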

Example B.1.15. AT4.

Let \(T: \IR^4 \rightarrow \IR^3\) be the linear transformation given by the standard matrix \(\left[\begin{array}{cccc} 1 & 3 & 2 & -3 \\ 2 & 4 & 6 & -10 \\ 1 & 6 & -1 & 3 \end{array}\right]\text{.}\)
  1. Explain why \(T\) is or is not injective.
  2. Explain why \(T\) is or is not surjective.
Solution.
Compute
\begin{equation*} \RREF\left[\begin{array}{cccc}1 & 3 & 2 & -3 \\ 2 & 4 & 6 & -10 \\ 1 & 6 & -1 & 3 \end{array}\right] = \left[\begin{array}{cccc} 1 & 0 & 5 & -9 \\ 0 & 1 & -1 & 2 \\ 0 & 0 & 0 & 0\end{array}\right]. \end{equation*}
  1. Note that the third and fourth columns are non-pivot columns, which means \(\ker T\) contains infinitely many vectors, so \(T\) is not injective.
  2. Since there are only two pivots, the image (i.e. the span of the columns) is a 2-dimensional subspace (and thus does not equal \(\IR^3\)), so \(T\) is not surjective.
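For readers who want to double-check the pivot counts, here is a small sympy sketch (illustrative only): injectivity corresponds to a pivot in every column, and surjectivity to a pivot in every row.

```python
import sympy as sp

# Standard matrix of T from the exercise
A = sp.Matrix([[1, 3, 2, -3],
               [2, 4, 6, -10],
               [1, 6, -1, 3]])

injective = A.rank() == A.cols   # pivot in every column?
surjective = A.rank() == A.rows  # pivot in every row?
print(injective, surjective)
```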

Example B.1.16. AT5.

Let \(V\) be the set of all pairs \((x,y)\) of real numbers together with the following operations:
\begin{align*} (x_1,y_1) \oplus (x_2,y_2) &= (2x_1+2x_2,2y_1+2y_2)\\ c\odot (x,y) &= (cx,c^2y) \end{align*}
  1. Show that scalar multiplication distributes over vector addition:
    \begin{equation*} c\odot \left((x_1,y_1) \oplus (x_2,y_2) \right) = c \odot (x_1,y_1) \oplus c \odot (x_2,y_2) \end{equation*}
  2. Explain why \(V\) nonetheless is not a vector space.
Solution.
  1. We compute both sides:
    \begin{align*} c \odot \left((x_1,y_1) \oplus (x_2,y_2) \right) &= c \odot (2x_1+2x_2,2y_1+2y_2)\\ &= (c(2x_1+2x_2),c^2(2y_1+2y_2))\\ &= (2cx_1+2cx_2,2c^2y_1+2c^2y_2) \end{align*}
    and
    \begin{align*} c\odot (x_1,y_1) \oplus c \odot (x_2,y_2) &= (cx_1,c^2y_1) \oplus (cx_2,c^2y_2)\\ &= (2cx_1+2cx_2,2c^2y_1+2c^2y_2) \end{align*}
    Since these are the same, we have shown that the property holds.
  2. To show \(V\) is not a vector space, we must show that it fails one of the 8 defining properties of vector spaces. We will show that scalar multiplication does not distribute over scalar addition, i.e., there are values such that
    \begin{equation*} (c+d)\odot(x,y) \neq c \odot(x,y) \oplus d\odot(x,y) \end{equation*}
    • (Solution method 1) First, we compute
      \begin{align*} (c+d)\odot(x,y) &= ((c+d)x,(c+d)^2y)\\ &= ( (c+d)x, (c^2+2cd+d^2)y). \end{align*}
      Then we compute
      \begin{align*} c\odot (x,y) \oplus d\odot(x,y) &= (cx,c^2y) \oplus (dx,d^2y)\\ &= ( 2cx+2dx, 2c^2y+2d^2y). \end{align*}
      Since \((c+d)x\not=2cx+2dx\) when \(c,d,x,y=1\text{,}\) the property fails to hold.
    • (Solution method 2) When we let \(c,d,x,y=1\text{,}\) we may simplify both sides as follows.
      \begin{align*} (c+d)\odot(x,y) &= 2\odot(1,1)\\ &= (2\cdot1,2^2\cdot1)\\ &=(2,4) \end{align*}
      \begin{align*} c\odot (x,y) \oplus d\odot(x,y) &= 1\odot(1,1)\oplus 1\odot(1,1)\\ &= (1\cdot1,1^2\cdot1)\oplus(1\cdot1,1^2\cdot1)\\ &= (1,1)\oplus(1,1)\\ &= (2\cdot1+2\cdot1,2\cdot1+2\cdot1)\\ &= (4,4) \end{align*}
      Since these ordered pairs are different, the property fails to hold.
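The failed axiom can also be checked numerically. The sketch below (illustrative, with hypothetical helper names `oplus` and `odot`) encodes the two operations and evaluates both sides at \(c,d,x,y=1\):

```python
# Hypothetical encoding of the operations on V; names are illustrative.
def oplus(u, v):
    """The 'addition' on V."""
    return (2*u[0] + 2*v[0], 2*u[1] + 2*v[1])

def odot(c, u):
    """The 'scalar multiplication' on V."""
    return (c*u[0], c**2 * u[1])

c, d, v = 1, 1, (1, 1)
lhs = odot(c + d, v)                 # (c+d) odot (x,y)
rhs = oplus(odot(c, v), odot(d, v))  # c odot (x,y) oplus d odot (x,y)
print(lhs, rhs)                      # the two sides disagree
```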

Example B.1.17. AT6.

  1. Given the set
    \begin{equation*} \left\{ x^{3} - 2 \, x^{2} + x + 2 , 2 \, x^{2} - 1 , -x^{3} + 3 \, x^{2} + 3 \, x - 2 , x^{3} - 6 \, x^{2} + 9 \, x + 5 \right\} \end{equation*}
    write a statement involving the solutions to a polynomial equation that’s equivalent to each claim below.
    • The set of polynomials is linearly independent.
    • The set of polynomials is linearly dependent.
  2. Explain how to determine which of these statements is true.
Solution.
The set of polynomials
\begin{equation*} \left\{ x^{3} - 2 \, x^{2} + x + 2 , 2 \, x^{2} - 1 , -x^{3} + 3 \, x^{2} + 3 \, x - 2 , x^{3} - 6 \, x^{2} + 9 \, x + 5 \right\} \end{equation*}
is linearly independent exactly when the polynomial equation
\begin{equation*} y_1\left( x^{3} - 2 \, x^{2} + x + 2 \right)+y_2\left( 2 \, x^{2} - 1 \right)+y_3\left( -x^{3} + 3 \, x^{2} + 3 \, x - 2 \right)+y_4\left( x^{3} - 6 \, x^{2} + 9 \, x + 5\right)=0 \end{equation*}
has no nontrivial (i.e. nonzero) solutions. The set is linearly dependent when this equation has a nontrivial (i.e. nonzero) solution.
To solve this equation, we distribute and then collect coefficients to obtain
\begin{equation*} \left(y_1-y_3+y_4\right)x^3+\left(-2y_1+2y_2+3y_3-6y_4\right)x^2+\left(y_1+3y_3+9y_4\right)x+\left(2y_1-y_2-2y_3+5y_4\right)=0\text{.} \end{equation*}
These polynomials are equal precisely when their coefficients are equal, leading to the system
\begin{equation*} \begin{matrix} y_1 & & &-&y_3 & +&y_4 & = & 0 \\ -2 y_1 & + & 2y_2 &+&3y_3 & -&6y_4 & = & 0 \\ y_1 & & &+&3y_3 & +&9y_4 & = & 0 \\ 2 y_1 & - & y_2 &-&2y_3 & +&5y_4 & = & 0 \end{matrix}\text{.} \end{equation*}
To solve this, we compute
\begin{equation*} \mathrm{RREF}\, \left[\begin{array}{cccc|c} 1 & 0 & -1 & 1 & 0\\ -2 & 2 & 3 & -6 & 0\\ 1 & 0 & 3 & 9 & 0\\ 2 & -1 & -2 & 5 & 0 \end{array}\right] = \left[\begin{array}{cccc|c} 1 & 0 & 0 & 3 & 0\\ 0 & 1 & 0 & -3 & 0\\ 0 & 0 & 1 & 2 & 0\\ 0 & 0 & 0 & 0 & 0 \end{array}\right] \end{equation*}
The system has (infinitely many) nontrivial solutions, so we see that the set of polynomials is linearly dependent.
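One way to corroborate this conclusion (outside the written solution) is to ask sympy for the null space of the coefficient matrix; any nonzero null space vector gives an explicit dependence among the four polynomials:

```python
import sympy as sp

# Coefficient matrix of the homogeneous system above
M = sp.Matrix([[ 1,  0, -1,  1],
               [-2,  2,  3, -6],
               [ 1,  0,  3,  9],
               [ 2, -1, -2,  5]])

dependence = M.nullspace()  # basis for the space of nontrivial solutions
print(dependence)
```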

Example B.1.18. MX1.

Of the following three matrices, only two may be multiplied.
\begin{align*} A &= \left[\begin{array}{cc} 1 & -3 \\ 0 & 1 \end{array}\right] & B&= \left[\begin{array}{ccc} 4 & 1 & 2 \end{array}\right] & C&= \left[\begin{array}{ccc} 0 & 1 & 3 \\ 1 & -2 & 5 \end{array}\right] \end{align*}
Explain which two may be multiplied and why. Then show how to find their product.
Solution.
\(AC\) is the only product that can be computed, since \(C\) corresponds to a linear transformation \(\mathbb{R}^3 \rightarrow \mathbb{R}^2\) and \(A\) corresponds to a linear transformation \(\mathbb{R}^2 \rightarrow \mathbb{R}^2\text{.}\) Thus the composition \(AC\) corresponds to a linear transformation \(\mathbb{R}^3 \rightarrow \mathbb{R}^2\) with a \(2\times 3\) standard matrix. We compute
\begin{align*} AC\left( \vec{e}_1 \right) &= A \left( \left[\begin{array}{c} 0 \\ 1 \end{array}\right] \right) = 0 \left[\begin{array}{c} 1 \\ 0 \end{array}\right] + 1\left[\begin{array}{c} -3 \\ 1 \end{array}\right] = \left[\begin{array}{c} -3 \\ 1 \end{array}\right] \\ AC\left( \vec{e}_2 \right) &= A \left( \left[\begin{array}{c} 1 \\ -2 \end{array}\right] \right) = 1 \left[\begin{array}{c} 1 \\ 0 \end{array}\right] -2\left[\begin{array}{c} -3 \\ 1 \end{array}\right] = \left[\begin{array}{c} 7 \\ -2 \end{array}\right] \\ AC\left( \vec{e}_3 \right) &= A \left( \left[\begin{array}{c} 3 \\ 5 \end{array}\right] \right) = 3 \left[\begin{array}{c} 1 \\ 0 \end{array}\right] + 5\left[\begin{array}{c} -3 \\ 1 \end{array}\right] = \left[\begin{array}{c} -12 \\ 5 \end{array}\right] \text{.} \end{align*}
Thus
\begin{equation*} AC = \left[\begin{array}{ccc} -3 & 7 & -12 \\ 1 & -2 & 5 \end{array}\right]. \end{equation*}
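As a sanity check (not part of the solution), the product can be confirmed with sympy, which also raises an error if the shapes are incompatible:

```python
import sympy as sp

A = sp.Matrix([[1, -3],
               [0,  1]])
C = sp.Matrix([[0,  1, 3],
               [1, -2, 5]])

AC = A * C  # a (2x2)(2x3) product yields a 2x3 matrix
print(AC)
```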

Example B.1.19. MX2.

Explain why each of the following matrices is or is not invertible by discussing its corresponding linear transformation. If the matrix is invertible, explain how to find its inverse.
\begin{equation*} \hspace{2em} D = \left[\begin{array}{cccc} -1 & 1 & 0 & 2 \\ -2 & 5 & 5 & -4 \\ 2 & -3 & -2 & 0 \\ 4 & -4 & -3 & 5 \end{array}\right] \hspace{2em} N = \left[\begin{array}{cccc} -3 & 9 & 1 & -11 \\ 3 & -9 & -2 & 13 \\ 3 & -9 & -3 & 15 \\ -4 & 12 & 2 & -16 \end{array}\right] \hspace{2em} \end{equation*}
Solution.
We compute
\begin{equation*} \mathrm{RREF}\left(D\right)=\left[\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right]\text{.} \end{equation*}
We see \(D\) is bijective, and therefore invertible. To compute the inverse, we solve \(D\vec{x}=\vec{e}_1\) by computing
\begin{equation*} \mathrm{RREF}\,\left[\begin{array}{cccc|c} -1 & 1 & 0 & 2 & 1\\ -2 & 5 & 5 & -4 & 0 \\ 2 & -3 & -2 & 0 & 0\\ 4 & -4 & -3 & 5 & 0 \end{array}\right]=\left[\begin{array}{cccc|c} 1 & 0 & 0 & 0 & 21 \\ 0 & 1 & 0 & 0 & 38\\ 0 & 0 & 1 & 0 & -36\\ 0 & 0 & 0 & 1 & -8 \end{array}\right]\text{.} \end{equation*}
Similarly, we solve \(D\vec{x}=\vec{e}_2\) by computing
\begin{equation*} \mathrm{RREF}\,\left[\begin{array}{cccc|c} -1 & 1 & 0 & 2 & 0\\ -2 & 5 & 5 & -4 & 1 \\ 2 & -3 & -2 & 0 & 0\\ 4 & -4 & -3 & 5 & 0 \end{array}\right]=\left[\begin{array}{cccc|c} 1 & 0 & 0 & 0 & 8 \\ 0 & 1 & 0 & 0 & 14\\ 0 & 0 & 1 & 0 & -13\\ 0 & 0 & 0 & 1 & -3 \end{array}\right]\text{.} \end{equation*}
Similarly, we solve \(D\vec{x}=\vec{e}_3\) by computing
\begin{equation*} \mathrm{RREF}\,\left[\begin{array}{cccc|c} -1 & 1 & 0 & 2 & 0\\ -2 & 5 & 5 & -4 & 0 \\ 2 & -3 & -2 & 0 & 1\\ 4 & -4 & -3 & 5 & 0 \end{array}\right]=\left[\begin{array}{cccc|c} 1 & 0 & 0 & 0 & 23 \\ 0 & 1 & 0 & 0 & 41\\ 0 & 0 & 1 & 0 & -39\\ 0 & 0 & 0 & 1 & -9 \end{array}\right]\text{.} \end{equation*}
Similarly, we solve \(D\vec{x}=\vec{e}_4\) by computing
\begin{equation*} \mathrm{RREF}\,\left[\begin{array}{cccc|c} -1 & 1 & 0 & 2 & 0\\ -2 & 5 & 5 & -4 & 0 \\ 2 & -3 & -2 & 0 & 0\\ 4 & -4 & -3 & 5 & 1 \end{array}\right]=\left[\begin{array}{cccc|c} 1 & 0 & 0 & 0 & -2 \\ 0 & 1 & 0 & 0 & -4\\ 0 & 0 & 1 & 0 & 4\\ 0 & 0 & 0 & 1 & 1 \end{array}\right]\text{.} \end{equation*}
Combining these, we obtain
\begin{equation*} D^{-1}=\left[\begin{array}{cccc} 21 & 8 & 23 & -2 \\ 38 & 14 & 41 & -4 \\ -36 & -13 & -39 & 4 \\ -8 & -3 & -9 & 1 \end{array}\right]\text{.} \end{equation*}
We compute
\begin{equation*} \mathrm{RREF}\left(N\right)=\left[\begin{array}{cccc} 1 & -3 & 0 & 3 \\ 0 & 0 & 1 & -2 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]\text{.} \end{equation*}
We see \(N\) is not bijective and thus is not invertible.
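The row reductions above are routine but lengthy, so a computational check is reassuring. This sympy sketch (illustrative only) recomputes \(D^{-1}\) and confirms that \(N\) fails to have full rank:

```python
import sympy as sp

D = sp.Matrix([[-1,  1,  0,  2],
               [-2,  5,  5, -4],
               [ 2, -3, -2,  0],
               [ 4, -4, -3,  5]])
N = sp.Matrix([[-3,  9,  1, -11],
               [ 3, -9, -2,  13],
               [ 3, -9, -3,  15],
               [-4, 12,  2, -16]])

Dinv = D.inv()   # exists because RREF(D) is the identity
print(Dinv)
print(N.rank())  # fewer than 4 pivots, so N is not invertible
```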

Example B.1.20. MX3.

Use a matrix inverse to solve the following matrix-vector equation.
\begin{equation*} \left[\begin{array}{ccc} 1& 2& 1\\ 0& 0& 2\\ 1& 1& 1\\ \end{array}\right] \vec{v} = \left[\begin{array}{c}4\\ -2 \\ 2 \end{array}\right] \end{equation*}
Solution.
Using the techniques from Section 4.3, and letting \(M = \left[\begin{array}{ccc} 1& 2& 1\\ 0& 0& 2\\ 1& 1& 1\\ \end{array}\right]\text{,}\) we find \(M^{-1} = \left[\begin{array}{ccc} -1& -1/2& 2\\ 1& 0& -1\\ 0& 1/2& 0\\ \end{array}\right]\text{.}\) Our equation can be written as \(M\vec{v} = \left[\begin{array}{c}4\\ -2 \\ 2 \end{array}\right]\text{,}\) and may therefore be solved via
\begin{equation*} \vec{v} = I\vec{v} = M^{-1}M\vec{v} = M^{-1}\left[\begin{array}{c}4\\ -2 \\ 2 \end{array}\right] = \left[\begin{array}{c}1\\ 2 \\ -1 \end{array}\right] \end{equation*}
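The same computation can be sketched with sympy (an illustrative check, not the textbook's method):

```python
import sympy as sp

M = sp.Matrix([[1, 2, 1],
               [0, 0, 2],
               [1, 1, 1]])
b = sp.Matrix([4, -2, 2])

v = M.inv() * b  # solve M v = b via the inverse
print(v)
```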

Example B.1.21. MX4.

Let \(A\) be a \(4\times4\) matrix.
  1. Give a \(4\times 4\) matrix \(P\) that may be used to perform the row operation \({R_3} \to R_3+4 \, {R_1} \text{.}\)
  2. Give a \(4\times 4\) matrix \(Q\) that may be used to perform the row operation \({R_1} \to -4 \, {R_1}\text{.}\)
  3. Use matrix multiplication to describe the matrix obtained by applying \({R_3} \to 4 \, {R_1} + {R_3}\) and then \({R_1} \to -4 \, {R_1}\) to \(A\) (note the order).
Solution.
  1. \(\displaystyle P=\left[\begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 4 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right]\)
  2. \(\displaystyle Q=\left[\begin{array}{cccc} -4 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1\end{array}\right]\)
  3. \(\displaystyle QPA\)
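To see concretely that \(QPA\) applies the two row operations in the stated order, one can compare it against direct row manipulation of a symbolic \(4\times 4\) matrix (an illustrative sympy sketch):

```python
import sympy as sp

# A generic 4x4 matrix with symbolic entries
A = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f'a{i}{j}'))

P = sp.eye(4); P[2, 0] = 4   # R3 -> R3 + 4 R1
Q = sp.eye(4); Q[0, 0] = -4  # R1 -> -4 R1

B = A.copy()
B[2, :] = B[2, :] + 4 * B[0, :]  # first operation (uses the original R1)
B[0, :] = -4 * B[0, :]           # second operation

print(Q * P * A == B)
```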

Example B.1.22. GT1.

Let \(A\) be a \(4 \times 4\) matrix with determinant \(-7\text{.}\)
  1. Let \(B\) be the matrix obtained from \(A\) by applying the row operation \(R_3 \to R_3+3R_4\text{.}\) What is \(\det(B)\text{?}\)
  2. Let \(C\) be the matrix obtained from \(A\) by applying the row operation \(R_2 \to -3R_2\text{.}\) What is \(\det(C)\text{?}\)
  3. Let \(D\) be the matrix obtained from \(A\) by applying the row operation \(R_3 \leftrightarrow R_4\text{.}\) What is \(\det(D)\text{?}\)
Solution.
  1. Adding a multiple of one row to another row does not change the determinant, so \(\det(B)=\det(A)=-7\text{.}\)
  2. Scaling a row scales the determinant by the same factor, so \(\det(C)=-3\det(A)=-3(-7)=21\text{.}\)
  3. Swapping rows changes the sign of the determinant, so \(\det(D)=-\det(A)=7\text{.}\)
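These three facts can be verified on any concrete \(4\times 4\) matrix. The sketch below uses a sample matrix of my own choosing (sympy, illustrative only):

```python
import sympy as sp

# A sample 4x4 matrix (any example works for this check)
A = sp.Matrix([[1, 2, 0, 3],
               [0, 1, 4, 1],
               [2, 0, 1, 5],
               [1, 3, 2, 0]])
detA = A.det()

B = A.copy(); B[2, :] = B[2, :] + 3 * B[3, :]  # R3 -> R3 + 3 R4
C = A.copy(); C[1, :] = -3 * C[1, :]           # R2 -> -3 R2
D = A.copy(); D.row_swap(2, 3)                 # R3 <-> R4

print(detA, B.det(), C.det(), D.det())
```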

Example B.1.23. GT2.

Show how to compute the determinant of the matrix
\begin{equation*} A = \left[\begin{array}{cccc} 1 & 3 & 0 & -1 \\ 1 & 1 & 2 & 4 \\ 1 & 1 & 1 & 3 \\ -3 & 1 & 2 & -5 \end{array}\right] \end{equation*}
Solution.
Here is one possible solution, first applying a single row operation, and then performing Laplace/cofactor expansions to reduce the determinant to a linear combination of \(2\times 2\) determinants:
\begin{align*} \det \left[\begin{array}{cccc} 1 & 3 & 0 & -1 \\ 1 & 1 & 2 & 4 \\ 1 & 1 & 1 & 3 \\ -3 & 1 & 2 & -5 \end{array}\right] &= \det \left[\begin{array}{cccc} 1 & 3 & 0 & -1 \\ 0 & 0 & 1 & 1 \\ 1 & 1 & 1 & 3 \\ -3 & 1 & 2 & -5 \end{array}\right] = (-1) \det \left[\begin{array}{ccc} 1 & 3 & -1 \\ 1 & 1 & 3 \\ -3 & 1 & -5 \end{array}\right] + (1) \det \left[\begin{array}{ccc} 1 & 3 & 0 \\ 1 & 1 & 1 \\ -3 & 1 & 2 \end{array}\right]\\ &= (-1) \left( (1) \det \left[\begin{array}{cc} 1 & 3 \\ 1 & -5 \end{array}\right] - (1) \det \left[\begin{array}{cc} 3 & -1 \\ 1 & -5 \end{array}\right] + (-3) \det \left[\begin{array}{cc} 3 & -1 \\ 1 & 3 \end{array}\right] \right) +\\ &\phantom{==} (1) \left( (1) \det \left[\begin{array}{cc} 1 & 1 \\ 1 & 2 \end{array}\right] - (3) \det \left[\begin{array}{cc} 1 & 1 \\ -3 & 2 \end{array}\right] \right)\\ % &= (-1)\left( (1)(-8)-(1)(-14)+(-3)(10) \right) + (1) \left( (1)(1)-(3)(5) \right)\\ &= (-1) \left( -8+14-30 \right) + (1) \left(1-15 \right)\\ &=10 \end{align*}
Here is another possible solution, using row and column operations to first reduce the determinant to a \(3\times 3\) matrix and then applying a formula:
\begin{align*} \det \left[\begin{array}{cccc} 1 & 3 & 0 & -1 \\ 1 & 1 & 2 & 4 \\ 1 & 1 & 1 & 3 \\ -3 & 1 & 2 & -5 \end{array}\right] &= \det \left[\begin{array}{cccc} 1 & 3 & 0 & -1 \\ 0 & 0 & 1 & 1 \\ 1 & 1 & 1 & 3 \\ -3 & 1 & 2 & -5 \end{array}\right] = \det \left[\begin{array}{cccc} 1 & 3 & 0 & -1 \\ 0 & 0 & 1 & 0 \\ 1 & 1 & 1 & 2 \\ -3 & 1 & 2 & -7 \end{array}\right]\\ &=-\det \left[\begin{array}{cccc} 1 & 3 & 0 & -1 \\ 1 & 1 & 1 & 2 \\ 0 & 0 & 1 & 0 \\ -3 & 1 & 2 & -7 \end{array}\right] = -\det \left[\begin{array}{ccc} 1 & 3 & -1 \\ 1 & 1 & 2 \\ -3 & 1 & -7 \end{array}\right]\\ &=-((-7-18-1)-(3+2-21))\\ &=10 \end{align*}
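Either computation can be confirmed directly (sympy sketch, illustrative only):

```python
import sympy as sp

A = sp.Matrix([[ 1, 3, 0, -1],
               [ 1, 1, 2,  4],
               [ 1, 1, 1,  3],
               [-3, 1, 2, -5]])

print(A.det())  # agrees with both hand computations
```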

Example B.1.24. GT3.

Explain how to find the eigenvalues of the matrix \(\left[\begin{array}{cc} -2 & -2 \\ 10 & 7 \end{array}\right] \text{.}\)
Solution.
Compute the characteristic polynomial:
\begin{equation*} \det(A-\lambda I) = \det \left[\begin{array}{cc} -2 - \lambda & -2 \\ 10 & 7-\lambda \end{array}\right] \end{equation*}
\begin{equation*} = (-2-\lambda)(7-\lambda)+20 = \lambda ^2 -5\lambda +6 = (\lambda -2)(\lambda -3) \end{equation*}
The eigenvalues are the roots of the characteristic polynomial, namely \(2\) and \(3\text{.}\)
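A quick check with sympy (illustrative, not part of the solution) confirms both the characteristic polynomial and its roots:

```python
import sympy as sp

A = sp.Matrix([[-2, -2],
               [10,  7]])

lam = sp.Symbol('lambda')
p = (A - lam * sp.eye(2)).det()  # characteristic polynomial
print(sp.factor(p))              # factors with roots 2 and 3
print(A.eigenvals())
```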

Example B.1.25. GT4.

Explain how to find a basis for the eigenspace associated to the eigenvalue \(3\) in the matrix
\begin{equation*} \left[\begin{array}{ccc} -7 & -8 & 2 \\ 8 & 9 & -1 \\ \frac{13}{2} & 5 & 2 \end{array}\right]. \end{equation*}
Solution.
The eigenspace associated to \(3\) is the kernel of \(A-3I\text{,}\) so we compute
\begin{equation*} \RREF(A-3I) = \RREF \left[\begin{array}{ccc} -7-3 & -8 & 2 \\ 8 & 9-3 & -1 \\ \frac{13}{2} & 5 & 2-3 \end{array}\right] = \end{equation*}
\begin{equation*} \RREF \left[\begin{array}{ccc} -10 & -8 & 2 \\ 8 & 6 & -1 \\ \frac{13}{2} & 5 & -1 \end{array}\right] = \left[\begin{array}{ccc} 1 & 0 & 1 \\ 0 & 1 & -\frac{3}{2} \\ 0 & 0 & 0 \end{array}\right]. \end{equation*}
Thus we see the kernel is
\begin{equation*} \setBuilder{\left[\begin{array}{c} -a \\ \frac{3}{2} a \\ a \end{array}\right]}{a \in \IR} \end{equation*}
which has a basis of \(\left\{ \left[\begin{array}{c} -1 \\ \frac{3}{2} \\ 1 \end{array}\right] \right\}\text{.}\)
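As a final check (sympy sketch, illustrative only), the eigenspace is the null space of \(A-3I\text{,}\) which sympy can compute directly:

```python
import sympy as sp

A = sp.Matrix([[-7, -8, 2],
               [8, 9, -1],
               [sp.Rational(13, 2), 5, 2]])

basis = (A - 3 * sp.eye(3)).nullspace()  # basis of the eigenspace for 3
print(basis)
```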