\(\newcommand{\circledNumber}[1]{\boxed{#1}}
\newcommand{\IR}{\mathbb{R}}
\newcommand{\IC}{\mathbb{C}}
\renewcommand{\P}{\mathcal{P}}
\renewcommand{\Im}{\operatorname{Im}}
\newcommand{\RREF}{\operatorname{RREF}}
\newcommand{\vspan}{\operatorname{span}}
\newcommand{\setList}[1]{\left\{#1\right\}}
\newcommand{\setBuilder}[2]{\left\{#1\,\middle|\,#2\right\}}
\newcommand{\unknown}{\,{\color{gray}?}\,}
\newcommand{\drawtruss}[2][1]{
\begin{tikzpicture}[scale=#1, every node/.style={scale=#1}]
\draw (0,0) node[left,magenta]{C} --
(1,1.71) node[left,magenta]{A} --
(2,0) node[above,magenta]{D} -- cycle;
\draw (2,0) --
(3,1.71) node[right,magenta]{B} --
(1,1.71) -- cycle;
\draw (3,1.71) -- (4,0) node[right,magenta]{E} -- (2,0) -- cycle;
\draw[blue] (0,0) -- (0.25,-0.425) -- (-0.25,-0.425) -- cycle;
\draw[blue] (4,0) -- (4.25,-0.425) -- (3.75,-0.425) -- cycle;
\draw[thick,red,->] (2,0) -- (2,-0.75);
#2
\end{tikzpicture}
}
\newcommand{\trussNormalForces}{
\draw [thick, blue,->] (0,0) -- (0.5,0.5);
\draw [thick, blue,->] (4,0) -- (3.5,0.5);
}
\newcommand{\trussCompletion}{
\trussNormalForces
\draw [thick, magenta,<->] (0.4,0.684) -- (0.6,1.026);
\draw [thick, magenta,<->] (3.4,1.026) -- (3.6,0.684);
\draw [thick, magenta,<->] (1.8,1.71) -- (2.2,1.71);
\draw [thick, magenta,->] (1.6,0.684) -- (1.5,0.855);
\draw [thick, magenta,<-] (1.5,0.855) -- (1.4,1.026);
\draw [thick, magenta,->] (2.4,0.684) -- (2.5,0.855);
\draw [thick, magenta,<-] (2.5,0.855) -- (2.6,1.026);
}
\newcommand{\trussCForces}{
\draw [thick, blue,->] (0,0) -- (0.5,0.5);
\draw [thick, magenta,->] (0,0) -- (0.4,0.684);
\draw [thick, magenta,->] (0,0) -- (0.5,0);
}
\newcommand{\trussStrutVariables}{
\node[above] at (2,1.71) {\(x_1\)};
\node[left] at (0.5,0.866) {\(x_2\)};
\node[left] at (1.5,0.866) {\(x_3\)};
\node[right] at (2.5,0.866) {\(x_4\)};
\node[right] at (3.5,0.866) {\(x_5\)};
\node[below] at (1,0) {\(x_6\)};
\node[below] at (3,0) {\(x_7\)};
}
\newcommand{\lt}{<}
\newcommand{\gt}{>}
\newcommand{\amp}{&}
\definecolor{fillinmathshade}{gray}{0.9}
\newcommand{\fillinmath}[1]{\mathchoice{\colorbox{fillinmathshade}{$\displaystyle \phantom{\,#1\,}$}}{\colorbox{fillinmathshade}{$\textstyle \phantom{\,#1\,}$}}{\colorbox{fillinmathshade}{$\scriptstyle \phantom{\,#1\,}$}}{\colorbox{fillinmathshade}{$\scriptscriptstyle\phantom{\,#1\,}$}}}
\)
Linear Algebra for Team-Based Inquiry Learning
2022 Edition
Steven Clontz, University of South Alabama
Drew Lewis, University of South Alabama
August 2, 2022
Section 5.3: Eigenvalues and Characteristic Polynomials (GT3)
Activity 5.3.1 (~5 min)
An invertible matrix \(M\) and its inverse \(M^{-1}\) are given below:
\begin{equation*}
M=\left[\begin{array}{cc}1&2\\3&4\end{array}\right]
\hspace{2em}
M^{-1}=\left[\begin{array}{cc}-2&1\\3/2&-1/2\end{array}\right]
\end{equation*}
Which of the following is equal to \(\det(M)\det(M^{-1})\text{?}\)
\(\displaystyle -1\)
\(\displaystyle 0\)
\(\displaystyle 1\)
\(\displaystyle 4\)
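One way to check a chosen answer afterwards is to compute each determinant directly from the \(2\times 2\) formula \(ad-bc\text{:}\)
\begin{equation*}
\det(M)=(1)(4)-(2)(3)=-2
\hspace{2em}
\det(M^{-1})=(-2)\left(-\tfrac{1}{2}\right)-(1)\left(\tfrac{3}{2}\right)=-\tfrac{1}{2}
\end{equation*}
so \(\det(M)\det(M^{-1})=(-2)\left(-\tfrac{1}{2}\right)=1\text{.}\)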
Fact 5.3.2
For every invertible matrix \(M\text{,}\)
\begin{equation*}
\det(M)\det(M^{-1})= \det(I)=1
\end{equation*}
so
\(\det(M^{-1})=\frac{1}{\det(M)}\text{.}\) Furthermore, a square matrix \(M\) is invertible if and only if \(\det(M)\not=0\text{.}\)
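As a quick illustration of the last claim (an editorial example, not from the text): the matrix \(\left[\begin{array}{cc}1&2\\2&4\end{array}\right]\) has determinant
\begin{equation*}
(1)(4)-(2)(2)=0
\end{equation*}
and so is not invertible, while \(M=\left[\begin{array}{cc}1&2\\3&4\end{array}\right]\) above has \(\det(M)=-2\not=0\) and is invertible.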
Observation 5.3.3
Consider the linear transformation \(A : \IR^2 \rightarrow \IR^2\) given by the matrix \(A = \left[\begin{array}{cc} 2 & 2 \\ 0 & 3 \end{array}\right]\text{.}\)
Figure 1. Transformation of the unit square by the linear transformation \(A\).
It is easy to see geometrically that
\begin{equation*}
A\left[\begin{array}{c}1 \\ 0 \end{array}\right] =
\left[\begin{array}{cc} 2 & 2 \\ 0 & 3 \end{array}\right]\left[\begin{array}{c}1 \\ 0 \end{array}\right]=
\left[\begin{array}{c}2 \\ 0 \end{array}\right]=
2 \left[\begin{array}{c}1 \\ 0 \end{array}\right]\text{.}
\end{equation*}
It is less obvious (but easily checked once you find it) that
\begin{equation*}
A\left[\begin{array}{c} 2 \\ 1 \end{array}\right] =
\left[\begin{array}{cc} 2 & 2 \\ 0 & 3 \end{array}\right]\left[\begin{array}{c}2 \\ 1 \end{array}\right]=
\left[\begin{array}{c} 6 \\ 3 \end{array}\right] =
3\left[\begin{array}{c} 2 \\ 1 \end{array}\right]\text{.}
\end{equation*}
Definition 5.3.4
Let \(A \in M_{n,n}\text{.}\) An eigenvector for \(A\) is a vector \(\vec{x} \in \IR^n\) such that \(A\vec{x}\) is parallel to \(\vec{x}\text{.}\)
In other words, \(A\vec{x}=\lambda \vec{x}\) for some scalar \(\lambda\text{.}\) If \(\vec x\not=\vec 0\text{,}\) then we say \(\vec x\) is a nontrivial eigenvector and we call this \(\lambda\) an eigenvalue of \(A\text{.}\)
Figure 2. The map \(A\) stretches out the eigenvector \(\left[\begin{array}{c}2 \\ 1 \end{array}\right]\) by a factor of \(3\) (the corresponding eigenvalue).
Activity 5.3.5 (~5 min)
Finding the eigenvalues \(\lambda\) that satisfy
\begin{equation*}
A\vec x=\lambda\vec x=\lambda(I\vec x)=(\lambda I)\vec x
\end{equation*}
for some nontrivial eigenvector
\(\vec x\) is equivalent to finding nonzero solutions for the matrix equation
\begin{equation*}
(A-\lambda I)\vec x =\vec 0\text{.}
\end{equation*}
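Written out, this equivalence comes from moving everything to one side and factoring out \(\vec x\text{:}\)
\begin{equation*}
A\vec x-\lambda\vec x=A\vec x-(\lambda I)\vec x=(A-\lambda I)\vec x=\vec 0\text{.}
\end{equation*}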
Which of the following must be true for any eigenvalue?
The kernel of the transformation with standard matrix \(A-\lambda I\) must contain the zero vector, so \(A-\lambda I\) is invertible.
The kernel of the transformation with standard matrix \(A-\lambda I\) must contain a non-zero vector, so \(A-\lambda I\) is not invertible.
The image of the transformation with standard matrix \(A-\lambda I\) must contain the zero vector, so \(A-\lambda I\) is invertible.
The image of the transformation with standard matrix \(A-\lambda I\) must contain a non-zero vector, so \(A-\lambda I\) is not invertible.
Fact 5.3.6
The eigenvalues \(\lambda\) for a matrix \(A\) are the values that make \(A-\lambda I\) non-invertible.
Thus the eigenvalues \(\lambda\) for a matrix \(A\) are the solutions to the equation
\begin{equation*}
\det(A-\lambda I)=0.
\end{equation*}
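This fact can also be checked with a computer algebra system. The following SymPy sketch (an editorial illustration, not part of the text) applies it to the matrix from Observation 5.3.3, whose eigenvectors \(\left[\begin{array}{c}1\\0\end{array}\right]\) and \(\left[\begin{array}{c}2\\1\end{array}\right]\) were found geometrically:

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[2, 2], [0, 3]])  # matrix from Observation 5.3.3

# Characteristic polynomial det(A - lambda I)
p = sp.expand((A - lam * sp.eye(2)).det())
print(p)  # lambda**2 - 5*lambda + 6

# Its roots are the eigenvalues: 2 and 3, matching the factors
# 2 and 3 by which the eigenvectors above were stretched.
eigenvalues = sp.solve(sp.Eq(p, 0), lam)
print(eigenvalues)
```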
Definition 5.3.7
The expression \(\det(A-\lambda I)\) is called the characteristic polynomial of \(A\text{.}\)
For example, when \(A=\left[\begin{array}{cc}1 & 2 \\ 3 & 4\end{array}\right]\text{,}\) we have
\begin{equation*}
A-\lambda I=
\left[\begin{array}{cc}1 & 2 \\ 3 & 4\end{array}\right]-
\left[\begin{array}{cc}\lambda & 0 \\ 0 & \lambda\end{array}\right]=
\left[\begin{array}{cc}1-\lambda & 2 \\ 3 & 4-\lambda\end{array}\right]\text{.}
\end{equation*}
Thus the characteristic polynomial of \(A\) is
\begin{equation*}
\det\left[\begin{array}{cc}1-\lambda & 2 \\ 3 & 4-\lambda\end{array}\right]
=
(1-\lambda)(4-\lambda)-(2)(3)
=
\lambda^2-5\lambda-2
\end{equation*}
and its eigenvalues are the solutions to
\(\lambda^2-5\lambda-2=0\text{.}\)
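Since this quadratic does not factor over the integers, the quadratic formula gives these eigenvalues explicitly:
\begin{equation*}
\lambda=\frac{5\pm\sqrt{(-5)^2-4(1)(-2)}}{2}=\frac{5\pm\sqrt{33}}{2}\text{.}
\end{equation*}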
Activity 5.3.8 (~10 min)
Let \(A = \left[\begin{array}{cc} 5 & 2 \\ -3 & -2 \end{array}\right]\text{.}\)
Part 1.
Compute \(\det (A-\lambda I)\) to determine the characteristic polynomial of \(A\text{.}\)
Part 2.
Set this characteristic polynomial equal to zero and factor to determine the eigenvalues of \(A\text{.}\)
Activity 5.3.9 (~5 min)
Find all the eigenvalues for the matrix \(A=\left[\begin{array}{cc} 3 & -3 \\ 2 & -4 \end{array}\right]\text{.}\)
Activity 5.3.10 (~5 min)
Find all the eigenvalues for the matrix \(A=\left[\begin{array}{cc} 1 & -4 \\ 0 & 5 \end{array}\right]\text{.}\)
Activity 5.3.11 (~10 min)
Find all the eigenvalues for the matrix \(A=\left[\begin{array}{ccc} 3 & -3 & 1 \\ 0 & -4 & 2 \\ 0 & 0 & 7 \end{array}\right]\text{.}\)