Steven Clontz and Drew Lewis, University of South Alabama
A linear equation is an equation of the variables \(x_i\) of the form \(a_1x_1+a_2x_2+\dots+a_nx_n=b\text{.}\)
A solution for a linear equation is a Euclidean vector \(\left[\begin{array}{c}s_1\\s_2\\\vdots\\s_n\end{array}\right]\) that satisfies the equation, that is, \(a_1s_1+a_2s_2+\dots+a_ns_n=b\text{.}\)
In previous classes you likely used the variables \(x,y,z\) in equations. However, since this course often deals with equations of four or more variables, we will often write our variables as \(x_i\text{,}\) and assume \(x=x_1,y=x_2,z=x_3,w=x_4\) when convenient.
A system of linear equations (or a linear system for short) is a collection of one or more linear equations.
Its solution set is the collection of all vectors that are simultaneously solutions to every equation in the system.
When variables in a large linear system are missing, we prefer to write the system in one of the following standard forms:
Original linear system:
Verbose standard form:
Concise standard form:
It will often be convenient to think of a system of equations as a vector equation.
By applying vector operations and equating components, it is straightforward to see that the vector equation \(x_1\vec v_1+x_2\vec v_2+\dots+x_n\vec v_n=\vec b\) has exactly the same solution set as its corresponding linear system.
A linear system is consistent if its solution set is non-empty (that is, there exists a solution for the system). Otherwise it is inconsistent.
All linear systems are one of the following:
Consistent with one solution: its solution set contains a single vector.
Consistent with infinitely-many solutions: its solution set contains infinitely many vectors.
Inconsistent: its solution set is empty.
All inconsistent linear systems contain a logical contradiction. Find a contradiction in this system to show that its solution set is the empty set.
Consider the following consistent linear system.
Find three different solutions for this system.
Let \(x_2=a\) where \(a\) is an arbitrary real number, then find an expression for \(x_1\) in terms of \(a\text{.}\) Use this to write the solution set \(\setBuilder { \left[\begin{array}{c} \unknown \\ a \end{array}\right] }{ a \in \IR }\) for the linear system.
Consider the following linear system.
Describe the solution set
Solving linear systems by graphing or substitution is reasonable for two-variable systems, but these simple techniques won't usually cut it for systems with more than two variables or more than two equations. For example,
The only important information in a linear system is its coefficients and constants.
Original linear system:
Verbose standard form:
Coefficients/constants:
A system of \(m\) linear equations with \(n\) variables is often represented by writing its coefficients and constants in an augmented matrix.
The corresponding augmented matrix for this system is obtained by simply writing the coefficients and constants in matrix form.
Linear system:
Augmented matrix:
Vector equation:
Two systems of linear equations (and their corresponding augmented matrices) are said to be equivalent if they have the same solution set.
For example, both of these systems share the same solution set \(\setList{ \left[\begin{array}{c} 1 \\ 1\end{array}\right] }\text{.}\)
Therefore these augmented matrices are equivalent, which we denote with \(\sim\text{:}\)
Following are seven procedures used to manipulate an augmented matrix. Label the procedures that would result in an equivalent augmented matrix as valid, and label the procedures that might change the solution set of the corresponding linear system as invalid.
The following three row operations produce equivalent augmented matrices.
Swap two rows, for example, \(R_1\leftrightarrow R_2\text{:}\)
Multiply a row by a nonzero constant, for example, \(2R_1\rightarrow R_1\text{:}\)
Add a constant multiple of one row to another row, for example, \(R_2-4R_1\rightarrow R_2\text{:}\)
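These operations are easy to experiment with using technology. Here is a hypothetical Octave sketch (the matrix is chosen only for illustration) that applies each of the three example operations in turn, with each line modifying the current matrix in place:

A = [1 3 2 ; 2 5 7]          % a hypothetical matrix to manipulate
A([1 2],:) = A([2 1],:)      % swap rows: R1 <-> R2
A(1,:) = 2*A(1,:)            % scale a row: 2 R1 -> R1
A(2,:) = A(2,:) - 4*A(1,:)   % add a multiple of a row: R2 - 4 R1 -> R2

Since each of these operations is reversible, each intermediate matrix is equivalent to the one before it.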
Consider the following (equivalent) linear systems.
A)
B)
C)
D)
E)
F)
Rank the six linear systems from most complicated to simplest.
We can rewrite the previous sequence of equivalent systems in terms of equivalences of augmented matrices
Determine the row operation(s) necessary in each step to transform the most complicated system's augmented matrix into the simplest.
A matrix is in reduced row echelon form (RREF) if it satisfies the following conditions:
The leading term (first nonzero entry) of each nonzero row is \(1\text{.}\) These entries are called pivots.
Each pivot is the only nonzero entry in its column.
Each pivot appears to the right of the pivots in the rows above it.
All rows consisting entirely of zeros appear at the bottom of the matrix.
Every matrix has a unique reduced row echelon form. If \(A\) is a matrix, we write \(\RREF(A)\) for the reduced row echelon form of that matrix.
Recall that a matrix is in reduced row echelon form (RREF) exactly when it satisfies the conditions listed above.
For each matrix, circle the leading terms, and label it as RREF or not RREF. For the ones not in RREF, find their RREF.
Recall that a matrix is in reduced row echelon form (RREF) exactly when it satisfies the conditions listed above.
For each matrix, circle the leading terms, and label it as RREF or not RREF. For the ones not in RREF, find their RREF.
In practice, if we simply need to convert a matrix into reduced row echelon form, we use technology to do so.
However, it is also important to understand the Gauss-Jordan elimination algorithm that a computer or calculator uses to convert a matrix (augmented or not) into reduced row echelon form. Understanding this algorithm will help us better understand how to interpret the results in many applications we use it for in Module V.
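As a rough illustration, here is a minimal Octave sketch of Gauss-Jordan elimination; the function name and structure are our own simplification of the algorithm, not the built-in rref:

function A = gauss_jordan(A)
  [m, n] = size(A);
  r = 1;                           % row where the next pivot belongs
  for c = 1:n                      % scan columns left to right
    [p, i] = max(abs(A(r:m, c))); % find a usable entry at or below row r
    if p == 0
      continue                     % no pivot available in this column
    end
    i = i + r - 1;
    A([r i],:) = A([i r],:);       % swap the pivot row into position
    A(r,:) = A(r,:) / A(r,c);      % scale so the pivot equals 1
    for j = [1:r-1, r+1:m]         % zero out the rest of column c
      A(j,:) = A(j,:) - A(j,c)*A(r,:);
    end
    r = r + 1;
    if r > m
      break                        % no rows remain for further pivots
    end
  end
end

For example, gauss_jordan([1 3 2 ; 2 5 7]) should agree with rref([1 3 2 ; 2 5 7]).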
Consider the matrix
Consider the matrix
Consider the matrix
Consider the matrix
Perform three row operations to produce a matrix closer to RREF.
Finish putting it in RREF.
Consider the matrix
Compute \(\RREF(A)\text{.}\)
Consider the matrix
Compute \(\RREF(A)\text{.}\)
Free browser-based technologies for mathematical computation are available online.
Type rref([1,3,2;2,5,7])
and then press the Evaluate button to compute the \(\RREF\) of \(\left[\begin{array}{ccc} 1 & 3 & 2 \\ 2 & 5 & 7 \end{array}\right]\text{.}\)
Since the vertical bar in an augmented matrix does not affect row operations, the \(\RREF\) of \(\left[\begin{array}{cc|c} 1 & 3 & 2 \\ 2 & 5 & 7 \end{array}\right]\) may be computed in the same way.
In the HTML version of this text, code cells are often embedded for your convenience when RREFs need to be computed.
Try this out to compute \(\RREF\left[\begin{array}{cc|c} 2 & 3 & 1 \\ 3 & 0 & 6 \end{array}\right]\text{.}\)
Consider the following system of equations.
Convert this to an augmented matrix and use technology to compute its reduced row echelon form:
Use the \(\RREF\) matrix to write a linear system equivalent to the original system. Then find its solution set.
Consider the vector equation
Convert this to an augmented matrix and use technology to compute its reduced row echelon form:
Use the \(\RREF\) matrix to write a linear system equivalent to the original system. Then find its solution set.
Consider the following linear system.
Find its corresponding augmented matrix \(A\) and use technology to find \(\RREF(A)\text{.}\)
How many solutions do these linear systems have?
Consider the simple linear system equivalent to the system from the previous activity:
Let \(x_1=a\) and write the solution set in the form \(\setBuilder { \left[\begin{array}{c} a \\ \unknown \\ \unknown \end{array}\right] }{ a \in \IR } \text{.}\)
Let \(x_2=b\) and write the solution set in the form \(\setBuilder { \left[\begin{array}{c} \unknown \\ b \\ \unknown \end{array}\right] }{ b \in \IR } \text{.}\)
Which of these was easier? What features of the RREF matrix \(\left[\begin{array}{ccc|c} \circledNumber{1} & 2 & 0 & 4 \\ 0 & 0 & \circledNumber{1} & -1 \end{array}\right]\) caused this?
Recall that the pivots of a matrix in \(\RREF\) form are the leading \(1\)s in each non-zero row.
The pivot columns in an augmented matrix correspond to the bound variables in the system of equations (\(x_1,x_3\) below). The remaining variables are called free variables (\(x_2\) below).
To efficiently solve a system in RREF form, assign letters to the free variables, and then solve for the bound variables.
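For example, for the \(\RREF\) matrix above, assigning \(x_2=a\) to the free variable yields \(x_1=4-2a\) and \(x_3=-1\text{,}\) so the solution set is \(\setBuilder{\left[\begin{array}{c}4-2a\\a\\-1\end{array}\right]}{a\in\IR}\text{.}\)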
Find the solution set for the system
The solution set to the system
Don't forget to correctly express the solution set of a linear system. Systems with zero or one solutions may be written by listing their elements, while systems with infinitely-many solutions may be written using set-builder notation.
Consistent with one solution: e.g. \(\setList{ \left[\begin{array}{c}1\\2\\3\end{array}\right] }\)
Consistent with infinitely-many solutions: e.g. \(\setBuilder { \left[\begin{array}{c}1\\2-3a\\a\end{array}\right] }{ a\in\IR }\)
Inconsistent: \(\emptyset\) or \(\{\}\)
Several properties hold for the real numbers, such as commutativity: \(a+b=b+a\) for all \(a,b\in\IR\text{.}\)
Consider each of the following properties of the real numbers \(\IR^1\text{.}\) Label each property as valid if the property also holds for two-dimensional Euclidean vectors \(\vec u,\vec v,\vec w\in\IR^2\) and scalars \(a,b\in\IR\text{,}\) and invalid if it does not.
\(\vec u+(\vec v+\vec w)= (\vec u+\vec v)+\vec w\text{.}\)
\(\vec u+\vec v= \vec v+\vec u\text{.}\)
There exists some \(\vec z\) where \(\vec v+\vec z=\vec v\text{.}\)
There exists some \(-\vec v\) where \(\vec v+(-\vec v)=\vec z\text{.}\)
If \(\vec u\not=\vec v\text{,}\) then \(\frac{1}{2}(\vec u+\vec v)\) is the only vector equally distant from both \(\vec u\) and \(\vec v\text{.}\)
\(a(b\vec v)=(ab)\vec v\text{.}\)
\(1\vec v=\vec v\text{.}\)
If \(\vec u\not=\vec 0\text{,}\) then there exists some scalar \(c\) such that \(c\vec u=\vec v\text{.}\)
\(a(\vec u+\vec v)=a\vec u+a\vec v\text{.}\)
\((a+b)\vec v=a\vec v+b\vec v\text{.}\)
A vector space \(V\) is any collection of mathematical objects with associated addition \(\oplus\) and scalar multiplication \(\odot\) operations that satisfy the following properties. Let \(\vec u,\vec v,\vec w\) belong to \(V\text{,}\) and let \(a,b\) be scalar numbers.
Vector addition is associative: \(\vec u\oplus (\vec v\oplus \vec w)= (\vec u\oplus \vec v)\oplus \vec w\text{.}\)
Vector addition is commutative: \(\vec u\oplus \vec v= \vec v\oplus \vec u\text{.}\)
An additive identity exists: There exists some \(\vec z\) where \(\vec v\oplus \vec z=\vec v\text{.}\)
Additive inverses exist: There exists some \(-\vec v\) where \(\vec v\oplus (-\vec v)=\vec z\text{.}\)
Scalar multiplication is associative: \(a\odot(b\odot\vec v)=(ab)\odot\vec v\text{.}\)
1 is a multiplicative identity: \(1\odot\vec v=\vec v\text{.}\)
Scalar multiplication distributes over vector addition: \(a\odot(\vec u\oplus \vec v)=(a\odot\vec u)\oplus(a\odot\vec v)\text{.}\)
Scalar multiplication distributes over scalar addition: \((a+ b)\odot\vec v=(a\odot\vec v)\oplus(b\odot \vec v)\text{.}\)
Every Euclidean vector space
The space of \(m \times n\) matrices
Every Euclidean space \(\IR^n\) is a vector space, but there are other examples of vector spaces as well.
For example, consider the set \(\IC\) of complex numbers with the usual definitions of addition and scalar multiplication, and let \(\vec u=a+b\mathbf{i}\text{,}\) \(\vec v=c+d\mathbf{i}\text{,}\) and \(\vec w=e+f\mathbf{i}\text{.}\) Then, for instance, \(\vec u+\vec v=(a+c)+(b+d)\mathbf{i}=(c+a)+(d+b)\mathbf{i}=\vec v+\vec u\text{,}\) verifying that vector addition is commutative.
All eight properties can be verified in this way.
The following sets are just a few examples of vector spaces, with the usual/natural operations for addition and scalar multiplication.
\(\IR^n\text{:}\) Euclidean vectors with \(n\) components.
\(\IC\text{:}\) Complex numbers.
\(M_{m,n}\text{:}\) Matrices of real numbers with \(m\) rows and \(n\) columns.
\(\P_n\text{:}\) Polynomials of degree \(n\) or less.
\(\P\text{:}\) Polynomials of any degree.
\(C(\IR)\text{:}\) Real-valued continuous functions.
Consider the set \(V=\setBuilder{(x,y)}{y=e^x}\) with operations defined by
Show that \(V\) satisfies the distributive property
Show that \(V\) contains an additive identity element satisfying
It turns out that \(V=\setBuilder{(x,y)}{y=e^x}\text{,}\) with the operations defined above, satisfies all eight of the vector space properties. Thus, \(V\) is a vector space.
Let \(V=\setBuilder{(x,y)}{x,y\in\IR}\) have operations defined by
Show that \(1\) is the scalar multiplication identity element by simplifying \(1\odot(x,y)\) to \((x,y)\text{.}\)
Show that \(V\) does not have an additive identity element by showing that \((0,-1)\oplus\vec z\not=(0,-1)\) no matter how \(\vec z=(z,w)\) is chosen.
Is \(V\) a vector space?
Let \(V=\setBuilder{(x,y)}{x,y\in\IR}\) have operations defined by
Show that scalar multiplication distributes over vector addition, i.e.
Show that vector addition is not associative, i.e.
Is \(V\) a vector space?
A linear combination of a set of vectors \(\{\vec v_1,\vec v_2,\dots,\vec v_m\}\) is given by \(c_1\vec v_1+c_2\vec v_2+\dots+c_m\vec v_m\) for any choice of scalar multiples \(c_1,c_2,\dots,c_m\text{.}\)
For example, we can say \(\left[\begin{array}{c}3 \\0 \\ 5\end{array}\right]\) is a linear combination of the vectors \(\left[\begin{array}{c} 1 \\ -1 \\ 2 \end{array}\right]\) and \(\left[\begin{array}{c} 1 \\ 2 \\ 1 \end{array}\right]\) since \(\left[\begin{array}{c}3 \\0 \\ 5\end{array}\right] = 2\left[\begin{array}{c} 1 \\ -1 \\ 2 \end{array}\right] + 1\left[\begin{array}{c} 1 \\ 2 \\ 1 \end{array}\right]\text{.}\)
The span of a set of vectors is the collection of all linear combinations of that set:
For example:
Consider \(\vspan\left\{\left[\begin{array}{c}1\\2\end{array}\right]\right\}\text{.}\)
Sketch \(1\left[\begin{array}{c}1\\2\end{array}\right]=\left[\begin{array}{c}1\\2\end{array}\right]\text{,}\) \(3\left[\begin{array}{c}1\\2\end{array}\right]=\left[\begin{array}{c}3\\6\end{array}\right]\text{,}\) \(0\left[\begin{array}{c}1\\2\end{array}\right]=\left[\begin{array}{c}0\\0\end{array}\right]\text{,}\) and \(-2\left[\begin{array}{c}1\\2\end{array}\right]=\left[\begin{array}{c}-2\\-4\end{array}\right]\) in the \(xy\) plane.
Sketch a representation of all the vectors belonging to \(\vspan\setList{\left[\begin{array}{c}1\\2\end{array}\right]} = \setBuilder{a\left[\begin{array}{c}1\\2\end{array}\right]}{a\in\IR}\) in the \(xy\) plane.
Consider \(\vspan\left\{\left[\begin{array}{c}1\\2\end{array}\right], \left[\begin{array}{c}-1\\1\end{array}\right]\right\}\text{.}\)
Sketch the following linear combinations in the \(xy\) plane.
Sketch a representation of all the vectors belonging to \(\vspan\left\{\left[\begin{array}{c}1\\2\end{array}\right], \left[\begin{array}{c}-1\\1\end{array}\right]\right\}=\setBuilder{a\left[\begin{array}{c}1\\2\end{array}\right]+b\left[\begin{array}{c}-1\\1\end{array}\right]}{a, b \in \IR}\) in the \(xy\) plane.
Sketch a representation of all the vectors belonging to \(\vspan\left\{\left[\begin{array}{c}6\\-4\end{array}\right], \left[\begin{array}{c}-3\\2\end{array}\right]\right\}\) in the \(xy\) plane.
The vector \(\left[\begin{array}{c}-1\\-6\\1\end{array}\right]\) belongs to \(\vspan\left\{\left[\begin{array}{c}1\\0\\-3\end{array}\right], \left[\begin{array}{c}-1\\-3\\2\end{array}\right]\right\}\) exactly when there exists a solution to the vector equation \(x_1\left[\begin{array}{c}1\\0\\-3\end{array}\right]+ x_2\left[\begin{array}{c}-1\\-3\\2\end{array}\right] =\left[\begin{array}{c}-1\\-6\\1\end{array}\right]\text{.}\)
Reinterpret this vector equation as a system of linear equations.
Find its solution set, using technology to find \(\RREF\) of its corresponding augmented matrix.
Given this solution set, does \(\left[\begin{array}{c}-1\\-6\\1\end{array}\right]\) belong to \(\vspan\left\{\left[\begin{array}{c}1\\0\\-3\end{array}\right], \left[\begin{array}{c}-1\\-3\\2\end{array}\right]\right\}\text{?}\)
A vector \(\vec b\) belongs to \(\vspan\{\vec v_1,\dots,\vec v_n\}\) if and only if the vector equation \(x_1 \vec{v}_1+\cdots+x_n \vec{v}_n=\vec{b}\) is consistent.
The following are all equivalent statements:
The vector \(\vec{b}\) belongs to \(\vspan\{\vec v_1,\dots,\vec v_n\}\text{.}\)
The vector equation \(x_1 \vec{v}_1+\cdots+x_n \vec{v}_n=\vec{b}\) is consistent.
The linear system corresponding to \(\left[\vec v_1\,\dots\,\vec v_n \,|\, \vec b\right]\) is consistent.
\(\RREF\left[\vec v_1\,\dots\,\vec v_n \,|\, \vec b\right]\) doesn't have a row \([0\,\cdots\,0\,|\,1]\) representing the contradiction \(0=1\text{.}\)
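For example, the membership claim from the earlier observation, \(\left[\begin{array}{c}3\\0\\5\end{array}\right]\in\vspan\left\{\left[\begin{array}{c}1\\-1\\2\end{array}\right],\left[\begin{array}{c}1\\2\\1\end{array}\right]\right\}\text{,}\) can be checked in Octave by row-reducing the corresponding augmented matrix:

M = [1 1 3 ; -1 2 0 ; 2 1 5]  % the columns are v1, v2, and b
rref(M)   % = [1 0 2 ; 0 1 1 ; 0 0 0]: no [0 0 | 1] row, so consistent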
Determine if \(\left[\begin{array}{c}3\\-2\\1 \\ 5\end{array}\right]\) belongs to \(\vspan\left\{\left[\begin{array}{c}1\\0\\-3 \\ 2\end{array}\right], \left[\begin{array}{c}-1\\-3\\2 \\ 2\end{array}\right]\right\}\) by solving an appropriate vector equation.
Determine if \(\left[\begin{array}{c}-1\\-9\\0\end{array}\right]\) belongs to \(\vspan\left\{\left[\begin{array}{c}1\\0\\-3\end{array}\right], \left[\begin{array}{c}-1\\-3\\2\end{array}\right]\right\}\) by solving an appropriate vector equation.
Does the third-degree polynomial \(3y^3-2y^2+y+5\) in \(\P_3\) belong to \(\vspan\{y^3-3y+2,-y^3-3y^2+2y+2\}\text{?}\)
Reinterpret this question as a question about the solution(s) of a polynomial equation.
Answer this equivalent question, and use its solution to answer the original question.
Does the polynomial \(x^2+x+1\) belong to \(\vspan\{x^2-x,x+1, x^2-1\}\text{?}\)
Does the matrix \(\left[\begin{array}{cc}3&-2\\1&5\end{array}\right]\) belong to \(\vspan\left\{\left[\begin{array}{cc}1&0\\-3&2\end{array}\right], \left[\begin{array}{cc}-1&-3\\2&2\end{array}\right]\right\}\text{?}\)
Reinterpret this question as a question about the solution(s) of a matrix equation.
Answer this equivalent question, and use its solution to answer the original question.
Any single non-zero vector/number \(x\) in \(\IR^1\) spans \(\IR^1\text{,}\) since \(\IR^1=\setBuilder{cx}{c\in\IR}\text{.}\)
How many vectors are required to span \(\IR^2\text{?}\) Sketch a drawing in the \(xy\) plane to support your answer.
\(\displaystyle 1\)
\(\displaystyle 2\)
\(\displaystyle 3\)
\(\displaystyle 4\)
Infinitely Many
How many vectors are required to span \(\IR^3\text{?}\)
\(\displaystyle 1\)
\(\displaystyle 2\)
\(\displaystyle 3\)
\(\displaystyle 4\)
Infinitely Many
At least \(n\) vectors are required to span \(\IR^n\text{.}\)
Choose any vector \(\left[\begin{array}{c}\unknown\\\unknown\\\unknown\end{array}\right]\) in \(\IR^3\) that is not in \(\vspan\left\{\left[\begin{array}{c}1\\-1\\0\end{array}\right], \left[\begin{array}{c}-2\\0\\1\end{array}\right]\right\}\) by using technology to verify that \(\RREF \left[\begin{array}{cc|c}1&-2&\unknown\\-1&0&\unknown\\0&1&\unknown\end{array}\right] = \left[\begin{array}{cc|c}1&0&0\\0&1&0\\0&0&1\end{array}\right] \text{.}\) (Why does this work?)
The set \(\{\vec v_1,\dots,\vec v_m\}\) fails to span all of \(\IR^n\) exactly when the vector equation \(x_1\vec v_1+\dots+x_m\vec v_m=\vec b\) is inconsistent for some vector \(\vec b\in\IR^n\text{.}\)
Note that this happens exactly when \(\RREF[\vec v_1\,\dots\,\vec v_m]\) has a non-pivot row of zeros.
Consider the set of vectors \(S=\left\{ \left[\begin{array}{c}2\\3\\0\\-1\end{array}\right], \left[\begin{array}{c}1\\-4\\3\\0\end{array}\right], \left[\begin{array}{c}1\\7\\-3\\-1\end{array}\right], \left[\begin{array}{c}0\\3\\5\\7\end{array}\right], \left[\begin{array}{c}3\\13\\7\\16\end{array}\right] \right\}\) and the question “Does \(\IR^4=\vspan S\text{?}\)”
Rewrite this question in terms of the solutions to a vector equation.
Answer your new question, and use this to answer the original question.
Consider the set of third-degree polynomials
Rewrite this question to be about the solutions to a polynomial equation.
Answer your new question, and use this to answer the original question.
Consider the set of matrices
Rewrite this as a question about the solutions to a matrix equation.
Answer your new question, and use this to answer the original question.
Let \(\vec{v}_1, \vec{v}_2, \vec{v}_3 \in \IR^7\) be three vectors, and suppose \(\vec{w}\) is another vector with \(\vec{w} \in \vspan \left\{ \vec{v}_1, \vec{v}_2, \vec{v}_3 \right\}\text{.}\) What can you conclude about \(\vspan \left\{ \vec{w}, \vec{v}_1, \vec{v}_2, \vec{v}_3 \right\} \text{?}\)
A subset of a vector space is called a subspace if it is a vector space on its own.
For example, the span of these two vectors forms a planar subspace inside of the larger vector space \(\IR^3\text{.}\)
Any subset \(S\) of a vector space \(V\) that contains the additive identity \(\vec 0\) satisfies the eight vector space properties automatically, since it is a collection of known vectors.
However, to verify that it's a subspace, we need to check that addition and scalar multiplication still make sense using only vectors from \(S\text{.}\) So we need to check two things:
The set is closed under addition: for any \(\vec{x},\vec{y} \in S\text{,}\) the sum \(\vec{x}+\vec{y}\) is also in \(S\text{.}\)
The set is closed under scalar multiplication: for any \(\vec{x} \in S\) and scalar \(c \in \IR\text{,}\) the product \(c\vec{x}\) is also in \(S\text{.}\)
Let \(S=\setBuilder{\left[\begin{array}{c} x \\ y \\ z \end{array}\right]}{ x+2y+z=0}\text{.}\)
Let \(\vec{v}=\left[\begin{array}{c} x \\ y \\ z \end{array}\right]\) and \(\vec{w} = \left[\begin{array}{c} a \\ b \\ c \end{array}\right] \) be vectors in \(S\text{,}\) so \(x+2y+z=0\) and \(a+2b+c=0\text{.}\) Show that \(\vec v+\vec w = \left[\begin{array}{c} x+a \\ y+b \\ z+c \end{array}\right]\) also belongs to \(S\) by verifying that \((x+a)+2(y+b)+(z+c)=0\text{.}\)
Let \(\vec{v}=\left[\begin{array}{c} x \\ y \\ z \end{array}\right]\in S\text{,}\) so \(x+2y+z=0\text{.}\) Show that \(c\vec v=\left[\begin{array}{c}cx\\cy\\cz\end{array}\right]\) also belongs to \(S\) for any \(c\in\IR\) by verifying an appropriate equation.
Is \(S\) a subspace of \(\IR^3\text{?}\)
Let \(S=\setBuilder{\left[\begin{array}{c} x \\ y \\ z \end{array}\right]}{ x+2y+z=4}\text{.}\) Choose a vector \(\vec v=\left[\begin{array}{c} \unknown\\\unknown\\\unknown \end{array}\right]\) in \(S\) and a real number \(c=\unknown\text{,}\) and show that \(c\vec v\) isn't in \(S\text{.}\) Is \(S\) a subspace of \(\IR^3\text{?}\)
Since \(0\) is a scalar and \(0\vec{v}=\vec{z}\) for any vector \(\vec{v}\text{,}\) a nonempty set that is closed under scalar multiplication must contain the zero vector \(\vec{z}\) for that vector space.
Put another way, you can check any of the following to show that a nonempty subset \(W\) isn't a subspace:
Show that \(\vec 0\not\in W\text{.}\)
Find \(\vec u,\vec v\in W\) such that \(\vec u+\vec v\not\in W\text{.}\)
Find \(c\in\IR,\vec v\in W\) such that \(c\vec v\not\in W\text{.}\)
If you cannot do any of these, then \(W\) can be proven to be a subspace by doing the following:
Prove that \(\vec u+\vec v\in W\) whenever \(\vec u,\vec v\in W\text{.}\)
Prove that \(c\vec v\in W\) whenever \(c\in\IR,\vec v\in W\text{.}\)
Consider these subsets of \(\IR^3\text{:}\)
Show \(R\) isn't a subspace by showing that \(\vec 0\not\in R\text{.}\)
Show \(S\) isn't a subspace by finding two vectors \(\vec u,\vec v\in S\) such that \(\vec u+\vec v\not\in S\text{.}\)
Show \(T\) isn't a subspace by finding a vector \(\vec v\in T\) such that \(2\vec v\not\in T\text{.}\)
Let \(W\) be a subspace of a vector space \(V\text{.}\) How are \(\vspan W\) and \(W\) related?
\(\vspan W\) is bigger than \(W\)
\(\vspan W\) is the same as \(W\)
\(\vspan W\) is smaller than \(W\)
If \(S\) is any subset of a vector space \(V\text{,}\) then since \(\vspan S\) collects all possible linear combinations, \(\vspan S\) is automatically a subspace of \(V\text{.}\)
In fact, \(\vspan S\) is always the smallest subspace of \(V\) that contains all the vectors in \(S\text{.}\)
Consider the two sets
We say that a set of vectors is linearly dependent if one vector in the set belongs to the span of the others. Otherwise, we say the set is linearly independent.
You can think of linearly dependent sets as containing a redundant vector, in the sense that you can drop a vector out without reducing the span of the set. In the above image, all three vectors lie in the same planar subspace, but only two vectors are needed to span the plane, so the set is linearly dependent.
Let \(\vec{v}_1,\vec{v}_2,\vec{v}_3 \) be vectors in \(\mathbb R^n\text{.}\) Suppose \(3\vec{v}_1-5\vec{v}_2=\vec{v}_3\text{,}\) so the set \(\{\vec{v}_1,\vec{v}_2,\vec{v}_3\}\) is linearly dependent. Which of the following is true of the vector equation \(x_1\vec{v}_1+x_2\vec{v}_2+x_3\vec{v}_3=\vec{0}\) ?
It is consistent with one solution
It is consistent with infinitely many solutions
It is inconsistent.
For any vector space, the set \(\{\vec v_1,\dots\vec v_n\}\) is linearly dependent if and only if the vector equation \(x_1\vec v_1+\dots+x_n\vec v_n=\vec{0}\) is consistent with infinitely many solutions.
Find
A set of Euclidean vectors \(\{\vec v_1,\dots\vec v_n\}\) is linearly dependent if and only if \(\RREF\left[\begin{array}{ccc}\vec v_1&\dots&\vec v_n\end{array}\right]\) has a column without a pivot position.
Compare the following results:
A set of \(\IR^m\) vectors \(\{\vec v_1,\dots\vec v_n\}\) is linearly independent if and only if \(\RREF\left[\begin{array}{ccc}\vec v_1&\dots&\vec v_n\end{array}\right]\) has all pivot columns.
A set of \(\IR^m\) vectors \(\{\vec v_1,\dots\vec v_n\}\) spans \(\IR^m\) if and only if \(\RREF\left[\begin{array}{ccc}\vec v_1&\dots&\vec v_n\end{array}\right]\) has all pivot rows.
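For instance, here is a quick Octave check on a small hypothetical set of \(\IR^3\) vectors:

V = [1 2 0 ; 2 4 1 ; 3 6 1]  % columns are the hypothetical vectors v1, v2, v3
rref(V)   % = [1 2 0 ; 0 0 1 ; 0 0 0]: column 2 has no pivot, so the set
          % is linearly dependent (indeed, v2 = 2 v1)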
Consider whether the set of Euclidean vectors \(\left\{ \left[\begin{array}{c}-4\\2\\3\\0\\-1\end{array}\right], \left[\begin{array}{c}1\\2\\0\\0\\3\end{array}\right], \left[\begin{array}{c}1\\10\\10\\2\\6\end{array}\right], \left[\begin{array}{c}3\\4\\7\\2\\1\end{array}\right] \right\}\) is linearly dependent or linearly independent.
Reinterpret this question as an appropriate question about solutions to a vector equation.
Use the solution to this question to answer the original question.
Consider whether the set of polynomials \(\left\{ x^3+1,x^2+2x,x^2+7x+4 \right\}\) is linearly dependent or linearly independent.
Reinterpret this question as an appropriate question about solutions to a polynomial equation.
Use the solution to this question to answer the original question.
What is the largest number of \(\IR^4\) vectors that can form a linearly independent set?
\(\displaystyle 3\)
\(\displaystyle 4\)
\(\displaystyle 5\)
You can have infinitely many vectors and still be linearly independent.
What is the largest number of
\(\displaystyle 3\)
\(\displaystyle 4\)
\(\displaystyle 5\)
You can have infinitely many vectors and still be linearly independent.
What is the largest number of
\(\displaystyle 3\)
\(\displaystyle 4\)
\(\displaystyle 5\)
You can have infinitely many vectors and still be linearly independent.
A basis is a linearly independent set that spans a vector space.
The standard basis of \(\IR^n\) is the set \(\{\vec{e}_1, \ldots, \vec{e}_n\}\) where
For \(\IR^3\text{,}\) these are the vectors \(\vec e_1=\hat\imath=\left[\begin{array}{c}1 \\ 0 \\ 0\end{array}\right], \vec e_2=\hat\jmath=\left[\begin{array}{c}0 \\ 1 \\ 0\end{array}\right],\) and \(\vec e_3=\hat k=\left[\begin{array}{c}0 \\ 0 \\ 1\end{array}\right] \text{.}\)
A basis may be thought of as a collection of building blocks for a vector space, since every vector in the space can be expressed as a unique linear combination of basis vectors.
For example, in many calculus courses, vectors in \(\IR^3\) are often expressed in their component form
Label each of the sets \(A,B,C,D,E\) as
SPANS \(\IR^4\) or DOES NOT SPAN \(\IR^4\)
LINEARLY INDEPENDENT or LINEARLY DEPENDENT
BASIS FOR \(\IR^4\) or NOT A BASIS FOR \(\IR^4\)
If \(\{\vec v_1,\vec v_2,\vec v_3,\vec v_4\}\) is a basis for \(\IR^4\text{,}\) that means \(\RREF[\vec v_1\,\vec v_2\,\vec v_3\,\vec v_4]\) doesn't have a non-pivot column, and doesn't have a row of zeros. What is \(\RREF[\vec v_1\,\vec v_2\,\vec v_3\,\vec v_4]\text{?}\)
The set \(\{\vec v_1,\dots,\vec v_m\}\) is a basis for \(\IR^n\) if and only if \(m=n\) and \(\RREF[\vec v_1\,\dots\,\vec v_n]= \left[\begin{array}{cccc} 1&0&\dots&0\\ 0&1&\dots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\dots&1 \end{array}\right] \text{.}\)
That is, a basis for \(\IR^n\) must have exactly \(n\) vectors and its square matrix must row-reduce to the so-called identity matrix containing all zeros except for a downward diagonal of ones. (We will learn where the identity matrix gets its name in a later module.)
Recall that a subspace of a vector space is a subset that is itself a vector space.
One easy way to construct a subspace is to take the span of a set, but a linearly dependent set contains “redundant” vectors. For example, only two of the three vectors in the following image are needed to span the planar subspace.
Consider the subspace of \(\IR^4\) given by \(W=\vspan\left\{ \left[\begin{array}{c}2\\3\\0\\1\end{array}\right], \left[\begin{array}{c}2\\0\\1\\-1\end{array}\right], \left[\begin{array}{c}2\\-3\\2\\-3\end{array}\right], \left[\begin{array}{c}1\\5\\-1\\0\end{array}\right] \right\} \text{.}\)
Mark the part of \(\RREF\left[\begin{array}{cccc} 2&2&2&1\\ 3&0&-3&5\\ 0&1&2&-1\\ 1&-1&-3&0 \end{array}\right]\) that shows that \(W\)'s spanning set is linearly dependent.
Find a basis for \(W\) by removing a vector from its spanning set to make it linearly independent.
Let \(S=\{\vec v_1,\dots,\vec v_m\}\text{.}\) The easiest basis describing \(\vspan S\) is the set of vectors in \(S\) given by the pivot columns of \(\RREF[\vec v_1\,\dots\,\vec v_m]\text{.}\)
Put another way, to compute a basis for the subspace \(\vspan S\text{,}\) simply remove the vectors corresponding to the non-pivot columns of \(\RREF[\vec v_1\,\dots\,\vec v_m]\text{.}\)
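For example, here is this procedure carried out on a hypothetical spanning set in Octave:

S = [1 2 0 ; 2 4 1 ; 1 2 1]  % hypothetical vectors v1, v2, v3 as columns
rref(S)   % = [1 2 0 ; 0 0 1 ; 0 0 0]: column 2 is non-pivot, so
          % {v1, v3} is a basis for span S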
Let \(W\) be the subspace of \(\IR^4\) given by
Let \(W\) be the subspace of \(\P_3\) given by
Let \(W\) be the subspace of \(M_{2,2}\) given by
Let
Find a basis for \(\vspan S\text{.}\)
Find a basis for \(\vspan T\text{.}\)
Even though we found different bases for them, \(\vspan S\) and \(\vspan T\) are exactly the same subspace of \(\IR^4\text{,}\) since
Any non-trivial vector space has infinitely-many different bases, but all the bases for a given vector space are exactly the same size.
For example,
The dimension of a vector space is equal to the size of any basis for the vector space.
As you'd expect, \(\IR^n\) has dimension \(n\text{.}\) For example, \(\IR^3\) has dimension \(3\) because any basis for \(\IR^3\text{,}\) such as the standard basis \(\{\vec e_1,\vec e_2,\vec e_3\}\text{,}\) contains exactly three vectors.
Find the dimension of each subspace of \(\IR^4\) by finding \(\RREF\) for each corresponding matrix.
Every vector space with finite dimension, that is, every vector space \(V\) with a basis of the form \(\{\vec v_1,\vec v_2,\dots,\vec v_n\}\text{,}\) is said to be isomorphic to the Euclidean space \(\IR^n\text{,}\) since there exists a natural correspondence between vectors in \(V\) and vectors in \(\IR^n\text{:}\)
We've already been taking advantage of the previous fact by converting polynomials and matrices into Euclidean vectors. Since \(\P_3\) and \(M_{2,2}\) are both four-dimensional, the polynomial \(ax^3+bx^2+cx+d\) and the matrix \(\left[\begin{array}{cc}a&b\\c&d\end{array}\right]\) may both be identified with the Euclidean vector \(\left[\begin{array}{c}a\\b\\c\\d\end{array}\right]\in\IR^4\text{.}\)
Suppose \(W\) is a subspace of \(\P_8\text{,}\) and you know that the set \(\{ x^3+x, x^2+1, x^4-x \}\) is a linearly independent subset of \(W\text{.}\) What can you conclude about \(W\text{?}\)
The dimension of \(W\) is 3 or less.
The dimension of \(W\) is exactly 3.
The dimension of \(W\) is 3 or more.
Suppose \(W\) is a subspace of \(\P_8\text{,}\) and you know that \(W\) is spanned by the six vectors
The dimension of \(W\) is 6 or less.
The dimension of \(W\) is exactly 3.
The dimension of \(W\) is 6 or more.
The space of polynomials \(\P\) (of any degree) has the basis \(\{1,x,x^2,x^3,\dots\}\text{,}\) so it is a natural example of an infinite-dimensional vector space.
Since \(\P\) and other infinite-dimensional spaces cannot be treated as an isomorphic finite-dimensional Euclidean space \(\IR^n\text{,}\) vectors in such spaces cannot be studied by converting them into Euclidean vectors. Fortunately, most of the examples we will be interested in for this course will be finite-dimensional.
A homogeneous system of linear equations is one of the form:
This system is equivalent to the vector equation:
Note that if \(\left[\begin{array}{c} a_1 \\ \vdots \\ a_n \end{array}\right] \) and \(\left[\begin{array}{c} b_1 \\ \vdots \\ b_n \end{array}\right] \) are solutions to \(x_1 \vec{v}_1 + \cdots+x_n \vec{v}_n = \vec{0}\text{,}\) then so is \(\left[\begin{array}{c} a_1 +b_1\\ \vdots \\ a_n+b_n \end{array}\right] \text{,}\) since \((a_1+b_1)\vec v_1+\cdots+(a_n+b_n)\vec v_n = (a_1\vec v_1+\cdots+a_n\vec v_n)+(b_1\vec v_1+\cdots+b_n\vec v_n) = \vec 0+\vec 0 = \vec 0\text{.}\)
Similarly, if \(c \in \IR\text{,}\) \(\left[\begin{array}{c} ca_1 \\ \vdots \\ ca_n \end{array}\right] \) is a solution. Thus the solution set of a homogeneous system is...
A basis for \(\IR^n\text{.}\)
A subspace of \(\IR^n\text{.}\)
The empty set.
Consider the homogeneous system of equations
Find its solution set (a subspace of \(\IR^4\)).
Rewrite this solution space in the form
Rewrite this solution space in the form
The coefficients of the free variables in the solution set of a linear system always yield linearly independent vectors.
Thus if
Consider the homogeneous system of equations
Find a basis for its solution space.
Consider the homogeneous vector equation
Find a basis for its solution space.
Consider the homogeneous system of equations
Find a basis for its solution space.
The basis of the trivial vector space is the empty set. You can denote this as either \(\emptyset\) or \(\{\}\text{.}\)
Thus, if \(\vec{0}\) is the only solution of a homogeneous system, the basis of the solution space is \(\emptyset\text{.}\)
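Here is a hypothetical Octave computation illustrating this workflow:

A = [1 2 0 -1 ; 0 0 1 4]  % hypothetical RREF of a homogeneous system
% x2 = a and x4 = b are free, so x1 = -2a + b and x3 = -4b; a basis for
% the solution space consists of the coefficient vectors of a and b:
v1 = [-2 ; 1 ; 0 ; 0]
v2 = [1 ; 0 ; -4 ; 1]
A*v1, A*v2   % both products equal the zero vector, as expected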
A linear transformation (also known as a linear map) is a map between vector spaces that preserves the vector space operations. More precisely, if \(V\) and \(W\) are vector spaces, a map \(T:V\rightarrow W\) is called a linear transformation if
\(T(\vec{v}+\vec{w}) = T(\vec{v})+T(\vec{w})\) for any \(\vec{v},\vec{w} \in V\text{.}\)
\(T(c\vec{v}) = cT(\vec{v})\) for any \(c \in \IR,\vec{v} \in V\text{.}\)
Given a linear transformation \(T:V\to W\text{,}\) \(V\) is called the domain of \(T\) and \(W\) is called the co-domain of \(T\text{.}\)
Let \(T : \IR^3 \rightarrow \IR^2\) be given by
To show that \(T\) is linear, we must verify...
Therefore \(T\) is a linear transformation.
Let \(T : \IR^2 \rightarrow \IR^4\) be given by
To show that \(T\) is not linear, we only need to find one counterexample.
Since the resulting vectors are different, \(T\) is not a linear transformation.
A map between Euclidean spaces \(T:\IR^n\to\IR^m\) is linear exactly when every component of the output is a linear combination of the variables of \(\IR^n\text{.}\)
For example, the following map is definitely linear because \(x-z\) and \(3y\) are linear combinations of \(x,y,z\text{:}\)
But this map is not linear because \(x^2\text{,}\) \(y+3\text{,}\) and \(y-2^x\) are not linear combinations (even though \(x+y\) is):
Recall the following rules from calculus, where \(D:\P\to\P\) is the derivative map defined by \(D(f(x))=f'(x)\) for each polynomial \(f\text{.}\)
What can we conclude from these rules?
\(\P\) is not a vector space
\(D\) is a linear map
\(D\) is not a linear map
Let the polynomial maps \(S: \P_4 \rightarrow \P_3\) and \(T: \P_4 \rightarrow \P_3\) be defined by
Compute \(S(x^4+x)\text{,}\) \(S(x^4)+S(x)\text{,}\) \(T(x^4+x)\text{,}\) and \(T(x^4)+T(x)\text{.}\) Which of these maps is definitely not linear?
If \(L:V\to W\) is linear, then \(L(\vec z)=L(0\vec v)=0L(\vec v)=\vec z\) where \(\vec z\) is the additive identity of the vector spaces \(V,W\text{.}\)
Put another way, an easy way to prove that a map like \(T(f(x)) = f'(x)+x^3\) can't be linear is to observe that it fails to preserve the additive identity: \(T(0)=0+x^3=x^3\not=0\text{.}\)
Showing \(L:V\to W\) is not a linear transformation can be done by finding an example for any one of the following.
Show \(L(\vec z)\not=\vec z\) (where \(\vec z\) denotes the additive identity of \(V\) and of \(W\text{,}\) respectively).
Find \(\vec v,\vec w\in V\) such that \(L(\vec v+\vec w)\not=L(\vec v)+L(\vec w)\text{.}\)
Find \(\vec v\in V\) and \(c\in \IR\) such that \(L(c\vec v)\not=cL(\vec v)\text{.}\)
Otherwise, \(L\) can be shown to be linear by proving the following in general.
For all \(\vec v,\vec w\in V\text{,}\) \(L(\vec v+\vec w)=L(\vec v)+L(\vec w)\text{.}\)
For all \(\vec v\in V\) and \(c\in \IR\text{,}\) \(L(c\vec v)=cL(\vec v)\text{.}\)
Note the similarities between this process and showing that a subset of a vector space is/isn't a subspace.
Continue to consider \(S: \P_4 \rightarrow \P_3\) defined by
Verify that
Verify that \(S(cf(x))\) is equal to \(cS(f(x))\) for all real numbers \(c\) and polynomials \(f\text{.}\)
Is \(S\) linear?
Let the polynomial maps \(S: \P \rightarrow \P\) and \(T: \P \rightarrow \P\) be defined by
Note that \(S(0)=0\) and \(T(0)=0\text{.}\) So instead, show that \(S(x+1)\not= S(x)+S(1)\) to verify that \(S\) is not linear.
Prove that \(T\) is linear by verifying that \(T(f(x)+g(x))=T(f(x))+T(g(x))\) and \(T(cf(x))=cT(f(x))\text{.}\)
Recall that a linear map \(T:V\rightarrow W\) satisfies
\(T(\vec{v}+\vec{w}) = T(\vec{v})+T(\vec{w})\) for any \(\vec{v},\vec{w} \in V\text{.}\)
\(T(c\vec{v}) = cT(\vec{v})\) for any \(c \in \IR,\vec{v} \in V\text{.}\)
In other words, a map is linear when vector space operations can be applied before or after the transformation without affecting the result.
Suppose \(T: \IR^3 \rightarrow \IR^2\) is a linear map, and you know \(T\left(\left[\begin{array}{c} 1 \\ 0 \\ 0 \end{array}\right] \right) = \left[\begin{array}{c} 2 \\ 1 \end{array}\right]\) and \(T\left(\left[\begin{array}{c} 0 \\ 0 \\ 1 \end{array}\right] \right) = \left[\begin{array}{c} -3 \\ 2 \end{array}\right] \text{.}\) Compute \(T\left(\left[\begin{array}{c} 3 \\ 0 \\ 0 \end{array}\right]\right)\text{.}\)
Suppose \(T: \IR^3 \rightarrow \IR^2\) is a linear map, and you know \(T\left(\left[\begin{array}{c} 1 \\ 0 \\ 0 \end{array}\right] \right) = \left[\begin{array}{c} 2 \\ 1 \end{array}\right]\) and \(T\left(\left[\begin{array}{c} 0 \\ 0 \\ 1 \end{array}\right] \right) = \left[\begin{array}{c} -3 \\ 2 \end{array}\right] \text{.}\) Compute \(T\left(\left[\begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right]\right)\text{.}\)
\(\displaystyle \left[\begin{array}{c} 2 \\ 1\end{array}\right]\)
\(\displaystyle \left[\begin{array}{c} 3 \\ -1 \end{array}\right]\)
\(\displaystyle \left[\begin{array}{c} -1 \\ 3 \end{array}\right]\)
\(\displaystyle \left[\begin{array}{c} 5 \\ -8 \end{array}\right]\)
Suppose \(T: \IR^3 \rightarrow \IR^2\) is a linear map, and you know \(T\left(\left[\begin{array}{c} 1 \\ 0 \\ 0 \end{array}\right] \right) = \left[\begin{array}{c} 2 \\ 1 \end{array}\right]\) and \(T\left(\left[\begin{array}{c} 0 \\ 0 \\ 1 \end{array}\right] \right) = \left[\begin{array}{c} -3 \\ 2 \end{array}\right] \text{.}\) Compute \(T\left(\left[\begin{array}{c} -2 \\ 0 \\ -3 \end{array}\right]\right)\text{.}\)
\(\displaystyle \left[\begin{array}{c} 2 \\ 1\end{array}\right]\)
\(\displaystyle \left[\begin{array}{c} 3 \\ -1 \end{array}\right]\)
\(\displaystyle \left[\begin{array}{c} -1 \\ 3 \end{array}\right]\)
\(\displaystyle \left[\begin{array}{c} 5 \\ -8 \end{array}\right]\)
Suppose \(T: \IR^3 \rightarrow \IR^2\) is a linear map, and you know \(T\left(\left[\begin{array}{c} 1 \\ 0 \\ 0 \end{array}\right] \right) = \left[\begin{array}{c} 2 \\ 1 \end{array}\right]\) and \(T\left(\left[\begin{array}{c} 0 \\ 0 \\ 1 \end{array}\right] \right) = \left[\begin{array}{c} -3 \\ 2 \end{array}\right] \text{.}\) What piece of information would help you compute \(T\left(\left[\begin{array}{c}0\\4\\-1\end{array}\right]\right)\text{?}\)
The value of \(T\left(\left[\begin{array}{c} 0\\-4\\0\end{array}\right]\right)\text{.}\)
The value of \(T\left(\left[\begin{array}{c} 0\\1\\0\end{array}\right]\right)\text{.}\)
The value of \(T\left(\left[\begin{array}{c} 1\\1\\1\end{array}\right]\right)\text{.}\)
Any of the above.
Consider any basis \(\{\vec b_1,\dots,\vec b_n\}\) for \(V\text{.}\) Since every vector \(\vec v\) can be written as a linear combination of basis vectors, \(x_1\vec b_1+\dots+ x_n\vec b_n\text{,}\) we may compute \(T(\vec v)\) as follows: \(T(\vec v)=T(x_1\vec b_1+\dots+x_n\vec b_n)=x_1T(\vec b_1)+\dots+x_nT(\vec b_n)\text{.}\)
Therefore any linear transformation \(T:V \rightarrow W\) can be defined by just describing the values of \(T(\vec b_i)\text{.}\)
Put another way, the images of the basis vectors determine the transformation \(T\text{.}\)
Since a linear transformation \(T:\IR^n\to\IR^m\) is determined by its action on the standard basis \(\{\vec e_1,\dots,\vec e_n\}\text{,}\) it's convenient to store this information in the \(m\times n\) standard matrix \([T(\vec e_1) \,\cdots\, T(\vec e_n)]\text{.}\)
For example, let \(T: \IR^3 \rightarrow \IR^2\) be the linear map determined by the following values for \(T\) applied to the standard basis of \(\IR^3\text{.}\)
Then the standard matrix corresponding to \(T\) is
Let \(T: \IR^4 \rightarrow \IR^3\) be the linear transformation given by
Let \(T: \IR^3 \rightarrow \IR^2\) be the linear transformation given by
Compute \(T(\vec e_1)\text{,}\) \(T(\vec e_2)\text{,}\) and \(T(\vec e_3)\text{.}\)
Find the standard matrix for \(T\text{.}\)
Because each component of a linear map \(T:\IR^n\to\IR^m\) is a linear combination of the variables \(x_1,\dots,x_n\text{,}\) the vector \(T(\vec e_i)\) yields exactly the coefficients of \(x_i\text{.}\) Thus the standard matrix for \(T\) is simply an ordered list of the coefficients of the \(x_i\text{:}\)
Let \(T: \IR^3 \rightarrow \IR^3\) be the linear transformation given by the standard matrix
Compute \(T\left(\left[\begin{array}{c} 1\\ 2 \\ 3 \end{array}\right] \right) \text{.}\)
Compute \(T\left(\left[\begin{array}{c} x\\ y \\ z \end{array}\right] \right) \text{.}\)
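Such computations may be checked with technology, since \(T(\vec v)\) is the linear combination of the columns \(T(\vec e_i)\) of the standard matrix weighted by the components of \(\vec v\text{.}\) A hypothetical Octave example:

A = [1 0 -1 ; 2 1 0 ; 0 3 1]  % hypothetical standard matrix
v = [1 ; 2 ; 3]
A*v   % = [-2 ; 4 ; 9], which is 1*T(e1) + 2*T(e2) + 3*T(e3)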
Compute the following linear transformations of vectors given their standard matrices.
Let \(T: V \rightarrow W\) be a linear transformation. The kernel of \(T\) is an important subspace of \(V\) defined by
Let \(T: \IR^2 \rightarrow \IR^3\) be given by
\(\displaystyle \setBuilder{\left[\begin{array}{c}a \\ a\end{array}\right]}{a\in\IR}\)
\(\displaystyle \setList{\left[\begin{array}{c}0\\0\end{array}\right]}\)
\(\displaystyle \IR^2=\setBuilder{\left[\begin{array}{c}x \\ y\end{array}\right]}{x,y\in\IR}\)
Let \(T: \IR^3 \rightarrow \IR^2\) be given by
\(\displaystyle \setBuilder{\left[\begin{array}{c}0 \\ 0\\ a\end{array}\right]}{a\in\IR}\)
\(\displaystyle \setBuilder{\left[\begin{array}{c}a \\ a\\ 0\end{array}\right]}{a\in\IR}\)
\(\displaystyle \setList{\left[\begin{array}{c}0\\0\\0\end{array}\right]}\)
\(\displaystyle \IR^3=\setBuilder{\left[\begin{array}{c}x \\ y\\z\end{array}\right]}{x,y,z\in\IR}\)
Let \(T: \IR^3 \rightarrow \IR^2\) be the linear transformation given by the standard matrix
Set \(T\left(\left[\begin{array}{c}x\\y\\z\end{array}\right]\right) = \left[\begin{array}{c}0\\0\end{array}\right]\) to find a linear system of equations whose solution set is the kernel.
Use \(\RREF(A)\) to solve this homogeneous system of equations and find a basis for the kernel of \(T\text{.}\)
Let \(T: \IR^4 \rightarrow \IR^3\) be the linear transformation given by
Find a basis for the kernel of \(T\text{.}\)
Let \(T: V \rightarrow W\) be a linear transformation. The image of \(T\) is an important subspace of \(W\) defined by
In the examples below, the left example's image is all of \(\IR^2\text{,}\) but the right example's image is a planar subspace of \(\IR^3\text{.}\)
Let \(T: \IR^2 \rightarrow \IR^3\) be given by
\(\displaystyle \setBuilder{\left[\begin{array}{c}0 \\ 0\\ a\end{array}\right]}{a\in\IR}\)
\(\displaystyle \setBuilder{\left[\begin{array}{c}a \\ b\\ 0\end{array}\right]}{a,b\in\IR}\)
\(\displaystyle \setList{\left[\begin{array}{c}0\\0\\0\end{array}\right]}\)
\(\displaystyle \IR^3=\setBuilder{\left[\begin{array}{c}x \\ y\\z\end{array}\right]}{x,y,z\in\IR}\)
Let \(T: \IR^3 \rightarrow \IR^2\) be given by
\(\displaystyle \setBuilder{\left[\begin{array}{c}a \\ a\end{array}\right]}{a\in\IR}\)
\(\displaystyle \setList{\left[\begin{array}{c}0\\0\end{array}\right]}\)
\(\displaystyle \IR^2=\setBuilder{\left[\begin{array}{c}x \\ y\end{array}\right]}{x,y\in\IR}\)
Let \(T: \IR^4 \rightarrow \IR^3\) be the linear transformation given by the standard matrix
Since \(T(\vec v)=T(x_1\vec e_1+x_2\vec e_2+x_3\vec e_3+x_4\vec e_4)\text{,}\) the set of vectors
spans \(\Im T\)
is a linearly independent subset of \(\Im T\)
is a basis for \(\Im T\)
Let \(T: \IR^4 \rightarrow \IR^3\) be the linear transformation given by the standard matrix
Since the set \(\setList{ \left[\begin{array}{c}3\\-1\\2\end{array}\right], \left[\begin{array}{c}4\\1\\1\end{array}\right], \left[\begin{array}{c}7\\0\\3\end{array}\right], \left[\begin{array}{c}1\\2\\-1\end{array}\right] }\) spans \(\Im T\text{,}\) we can obtain a basis for \(\Im T\) by finding \(\RREF A = \left[\begin{array}{cccc} 1 & 0 & 1 & -1\\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{array}\right]\) and only using the vectors corresponding to pivot columns:
Let \(T:\IR^n\to\IR^m\) be a linear transformation with standard matrix \(A\text{.}\)
The kernel of \(T\) is the solution set of the homogeneous system given by the augmented matrix \(\left[\begin{array}{c|c}A&\vec 0\end{array}\right]\text{.}\) Use the coefficients of its free variables to get a basis for the kernel.
The image of \(T\) is the span of the columns of \(A\text{.}\) Remove the vectors creating non-pivot columns in \(\RREF A\) to get a basis for the image.
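For a hypothetical standard matrix, both procedures can be carried out in Octave as follows:

A = [1 2 0 -1 ; 2 4 1 2 ; 3 6 1 1]  % hypothetical standard matrix
rref(A)   % = [1 2 0 -1 ; 0 0 1 4 ; 0 0 0 0], pivots in columns 1 and 3
% Kernel: the free variables x2 = a and x4 = b give x1 = -2a + b and
% x3 = -4b, so {[-2;1;0;0], [1;0;-4;1]} is a basis for the kernel.
% Image: the pivot columns of A itself, [1;2;3] and [0;1;1], form a basis.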
Let \(T: \IR^3 \rightarrow \IR^4\) be the linear transformation given by the standard matrix
Find a basis for the kernel and a basis for the image of \(T\text{.}\)
Let \(T: \IR^n \rightarrow \IR^m\) be a linear transformation with standard matrix \(A\text{.}\) Which of the following is equal to the dimension of the kernel of \(T\text{?}\)
The number of pivot columns
The number of non-pivot columns
The number of pivot rows
The number of non-pivot rows
Let \(T: \IR^n \rightarrow \IR^m\) be a linear transformation with standard matrix \(A\text{.}\) Which of the following is equal to the dimension of the image of \(T\text{?}\)
The number of pivot columns
The number of non-pivot columns
The number of pivot rows
The number of non-pivot rows
Combining these with the observation that the number of columns is the dimension of the domain of \(T\text{,}\) we have the rank-nullity theorem:
The dimension of the domain of \(T\) equals \(\dim(\ker T)+\dim(\Im T)\text{.}\)
The dimension of the image is called the rank of \(T\) (or \(A\)) and the dimension of the kernel is called the nullity.
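As a quick check of these counts in Octave, using the hypothetical matrix from the previous example:

A = [1 2 0 -1 ; 2 4 1 2 ; 3 6 1 1]  % hypothetical 3 x 4 standard matrix
r = rank(A)               % 2: the dimension of the image
nullity = size(A, 2) - r  % 2: the dimension of the kernel, and 2 + 2 = 4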
Let \(T: \IR^3 \rightarrow \IR^4\) be the linear transformation given by the standard matrix
Let \(T: V \rightarrow W\) be a linear transformation. \(T\) is called injective or one-to-one if \(T\) does not map two distinct vectors to the same place. More precisely, \(T\) is injective if \(T(\vec{v}) \neq T(\vec{w})\) whenever \(\vec{v} \neq \vec{w}\text{.}\)
Let \(T: \IR^3 \rightarrow \IR^2\) be given by
Yes, because \(T(\vec v)=T(\vec w)\) whenever \(\vec v=\vec w\text{.}\)
Yes, because \(T(\vec v)\not=T(\vec w)\) whenever \(\vec v\not=\vec w\text{.}\)
No, because \(T\left(\left[\begin{array}{c}0\\0\\1\end{array}\right]\right) \not= T\left(\left[\begin{array}{c}0\\0\\2\end{array}\right]\right)\)
No, because \(T\left(\left[\begin{array}{c}0\\0\\1\end{array}\right]\right) = T\left(\left[\begin{array}{c}0\\0\\2\end{array}\right]\right)\)
Let \(T: \IR^2 \rightarrow \IR^3\) be given by
Yes, because \(T(\vec v)=T(\vec w)\) whenever \(\vec v=\vec w\text{.}\)
Yes, because \(T(\vec v)\not=T(\vec w)\) whenever \(\vec v\not=\vec w\text{.}\)
No, because \(T\left(\left[\begin{array}{c}1\\2\end{array}\right]\right) \not= T\left(\left[\begin{array}{c}3\\4\end{array}\right]\right)\)
No, because \(T\left(\left[\begin{array}{c}1\\2\end{array}\right]\right) = T\left(\left[\begin{array}{c}3\\4\end{array}\right]\right)\)
Let \(T: V \rightarrow W\) be a linear transformation. \(T\) is called surjective or onto if every element of \(W\) is mapped to by an element of \(V\text{.}\) More precisely, for every \(\vec{w} \in W\text{,}\) there is some \(\vec{v} \in V\) with \(T(\vec{v})=\vec{w}\text{.}\)
Let \(T: \IR^2 \rightarrow \IR^3\) be given by
Yes, because for every \(\vec w=\left[\begin{array}{c}x\\y\\z\end{array}\right]\in\IR^3\text{,}\) there exists \(\vec v=\left[\begin{array}{c}x\\y\end{array}\right]\in\IR^2\) such that \(T(\vec v)=\vec w\text{.}\)
No, because \(T\left(\left[\begin{array}{c}x\\y\end{array}\right]\right)\) can never equal \(\left[\begin{array}{c} 1 \\ 1 \\ 1 \end{array}\right] \text{.}\)
No, because \(T\left(\left[\begin{array}{c}x\\y\end{array}\right]\right)\) can never equal \(\left[\begin{array}{c} 0 \\ 0 \\ 0 \end{array}\right] \text{.}\)
Let \(T: \IR^3 \rightarrow \IR^2\) be given by
Yes, because for every \(\vec w=\left[\begin{array}{c}x\\y\end{array}\right]\in\IR^2\text{,}\) there exists \(\vec v=\left[\begin{array}{c}x\\y\\42\end{array}\right]\in\IR^3\) such that \(T(\vec v)=\vec w\text{.}\)
Yes, because for every \(\vec w=\left[\begin{array}{c}x\\y\end{array}\right]\in\IR^2\text{,}\) there exists \(\vec v=\left[\begin{array}{c}0\\0\\z\end{array}\right]\in\IR^3\) such that \(T(\vec v)=\vec w\text{.}\)
No, because \(T\left(\left[\begin{array}{c}x\\y\\z\end{array}\right]\right)\) can never equal \(\left[\begin{array}{c} 3\\-2 \end{array}\right] \text{.}\)
As we will see, it's no coincidence that the \(\RREF\) of the injective map's standard matrix
Let \(T: V \rightarrow W\) be a linear transformation where \(\ker T\) contains multiple vectors. What can you conclude?
\(T\) is injective
\(T\) is not injective
\(T\) is surjective
\(T\) is not surjective
A linear transformation \(T\) is injective if and only if \(\ker T = \{\vec{0}\}\text{.}\) Put another way, an injective linear transformation may be recognized by its trivial kernel.
Let \(T: V \rightarrow \IR^5\) be a linear transformation where \(\Im T\) is spanned by four vectors. What can you conclude?
\(T\) is injective
\(T\) is not injective
\(T\) is surjective
\(T\) is not surjective
A linear transformation \(T:V \rightarrow W\) is surjective if and only if \(\Im T = W\text{.}\) Put another way, a surjective linear transformation may be recognized by its identical codomain and image.
Let \(T: \IR^n \rightarrow \IR^m\) be a linear map with standard matrix \(A\text{.}\) Sort the following claims into two groups of equivalent statements: one group that means \(T\) is injective, and one group that means \(T\) is surjective.
The kernel of \(T\) is trivial, i.e. \(\ker T=\{\vec 0\}\text{.}\)
The columns of \(A\) span \(\IR^m\text{.}\)
The columns of \(A\) are linearly independent.
Every column of \(\RREF(A)\) has a pivot.
Every row of \(\RREF(A)\) has a pivot.
The image of \(T\) equals its codomain, i.e. \(\Im T=\IR^m\text{.}\)
The system of linear equations given by the augmented matrix \(\left[\begin{array}{c|c}A & \vec{b} \end{array}\right]\) has a solution for all \(\vec{b} \in \IR^m\text{.}\)
The system of linear equations given by the augmented matrix \(\left[\begin{array}{c|c} A & \vec{0} \end{array}\right]\) has exactly one solution.
The easiest way to determine if the linear map with standard matrix \(A\) is injective is to see if \(\RREF(A)\) has a pivot in each column.
The easiest way to determine if the linear map with standard matrix \(A\) is surjective is to see if \(\RREF(A)\) has a pivot in each row.
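For instance, for a hypothetical \(3\times 2\) standard matrix:

A = [1 0 ; 0 1 ; 2 3]  % hypothetical standard matrix of a map R^2 -> R^3
rref(A)   % = [1 0 ; 0 1 ; 0 0]: every column has a pivot, so the map is
          % injective; the last row has no pivot, so it is not surjective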
What can you conclude about the linear map \(T:\IR^2\to\IR^3\) with standard matrix \(\left[\begin{array}{cc} a & b \\ c & d \\ e & f \end{array}\right]\text{?}\)
Its standard matrix has more columns than rows, so \(T\) is not injective.
Its standard matrix has more columns than rows, so \(T\) is injective.
Its standard matrix has more rows than columns, so \(T\) is not surjective.
Its standard matrix has more rows than columns, so \(T\) is surjective.
What can you conclude about the linear map \(T:\IR^3\to\IR^2\) with standard matrix \(\left[\begin{array}{ccc} a & b & c \\ d & e & f \end{array}\right]\text{?}\)
Its standard matrix has more columns than rows, so \(T\) is not injective.
Its standard matrix has more columns than rows, so \(T\) is injective.
Its standard matrix has more rows than columns, so \(T\) is not surjective.
Its standard matrix has more rows than columns, so \(T\) is surjective.
The following are true for any linear map \(T:V\to W\text{:}\)
If \(\dim(V)>\dim(W)\text{,}\) then \(T\) is not injective.
If \(\dim(V)<\dim(W)\text{,}\) then \(T\) is not surjective.
Basically, a linear transformation cannot reduce dimension without collapsing vectors into each other, and a linear transformation cannot increase dimension from its domain to its image.
But dimension arguments cannot be used to prove a map is injective or surjective.
Suppose \(T: \IR^n \rightarrow \IR^4\) with standard matrix \(A=\left[\begin{array}{cccc} a_{11}&a_{12}&\cdots&a_{1n}\\ a_{21}&a_{22}&\cdots&a_{2n}\\ a_{31}&a_{32}&\cdots&a_{3n}\\ a_{41}&a_{42}&\cdots&a_{4n}\\ \end{array}\right]\) is both injective and surjective (we call such maps bijective).
How many pivot rows must \(\RREF A\) have?
How many pivot columns must \(\RREF A\) have?
What is \(\RREF A\text{?}\)
Let \(T: \IR^n \rightarrow \IR^n\) be a bijective linear map with standard matrix \(A\text{.}\) Label each of the following as true or false.
\(\RREF(A)\) is the identity matrix.
The columns of \(A\) form a basis for \(\IR^n\)
The system of linear equations given by the augmented matrix \(\left[\begin{array}{c|c} A & \vec{b} \end{array}\right]\) has exactly one solution for each \(\vec b \in \IR^n\text{.}\)
The easiest way to show that the linear map with standard matrix \(A\) is bijective is to show that \(\RREF(A)\) is the identity matrix.
Let \(T: \IR^3 \rightarrow \IR^3\) be given by the standard matrix
\(T\) is neither injective nor surjective
\(T\) is injective but not surjective
\(T\) is surjective but not injective
\(T\) is bijective.
Let \(T: \IR^3 \rightarrow \IR^3\) be given by
\(T\) is neither injective nor surjective
\(T\) is injective but not surjective
\(T\) is surjective but not injective
\(T\) is bijective.
Let \(T: \IR^2 \rightarrow \IR^3\) be given by
\(T\) is neither injective nor surjective
\(T\) is injective but not surjective
\(T\) is surjective but not injective
\(T\) is bijective.
Let \(T: \IR^3 \rightarrow \IR^2\) be given by
\(T\) is neither injective nor surjective
\(T\) is injective but not surjective
\(T\) is surjective but not injective
\(T\) is bijective.
If \(T: \IR^n \rightarrow \IR^m\) and \(S: \IR^m \rightarrow \IR^k\) are linear maps, then the composition map \(S\circ T\) is a linear map from \(\IR^n \rightarrow \IR^k\text{.}\)
Recall that for a vector, \(\vec{v} \in \IR^n\text{,}\) the composition is computed as \((S \circ T)(\vec{v})=S(T(\vec{v}))\text{.}\)
Let \(T: \IR^3 \rightarrow \IR^2\) be given by the \(2\times 3\) standard matrix \(B=\left[\begin{array}{ccc} 2 & 1 & -3 \\ 5 & -3 & 4 \end{array}\right]\) and \(S: \IR^2 \rightarrow \IR^4\) be given by the \(4\times 2\) standard matrix \(A=\left[\begin{array}{cc} 1 & 2 \\ 0 & 1 \\ 3 & 5 \\ -1 & -2 \end{array}\right]\text{.}\)
What are the domain and codomain of the composition map \(S \circ T\text{?}\)
The domain is \(\IR ^2\) and the codomain is \(\IR^3\)
The domain is \(\IR ^3\) and the codomain is \(\IR^2\)
The domain is \(\IR ^2\) and the codomain is \(\IR^4\)
The domain is \(\IR ^3\) and the codomain is \(\IR^4\)
The domain is \(\IR ^4\) and the codomain is \(\IR^3\)
The domain is \(\IR ^4\) and the codomain is \(\IR^2\)
Let \(T: \IR^3 \rightarrow \IR^2\) be given by the \(2\times 3\) standard matrix \(B=\left[\begin{array}{ccc} 2 & 1 & -3 \\ 5 & -3 & 4 \end{array}\right]\) and \(S: \IR^2 \rightarrow \IR^4\) be given by the \(4\times 2\) standard matrix \(A=\left[\begin{array}{cc} 1 & 2 \\ 0 & 1 \\ 3 & 5 \\ -1 & -2 \end{array}\right]\text{.}\)
What size will the standard matrix of \(S \circ T:\IR^3\to\IR^4\) be? (Rows \(\times\) Columns)
Let \(T: \IR^3 \rightarrow \IR^2\) be given by the \(2\times 3\) standard matrix \(B=\left[\begin{array}{ccc} 2 & 1 & -3 \\ 5 & -3 & 4 \end{array}\right]\) and \(S: \IR^2 \rightarrow \IR^4\) be given by the \(4\times 2\) standard matrix \(A=\left[\begin{array}{cc} 1 & 2 \\ 0 & 1 \\ 3 & 5 \\ -1 & -2 \end{array}\right]\text{.}\)
Compute
Compute \((S \circ T)(\vec{e}_2) \text{.}\)
Compute \((S \circ T)(\vec{e}_3) \text{.}\)
Write the \(4\times 3\) standard matrix of \(S \circ T:\IR^3\to\IR^4\text{.}\)
We define the product \(AB\) of an \(m \times n\) matrix \(A\) and an \(n \times k\) matrix \(B\) to be the \(m \times k\) standard matrix of the composition map of the two corresponding linear functions.
For the previous activity, \(T\) was a map \(\IR^3 \rightarrow \IR^2\text{,}\) and \(S\) was a map \(\IR^2 \rightarrow \IR^4\text{,}\) so \(S \circ T\) gave a map \(\IR^3 \rightarrow \IR^4\) with a \(4\times 3\) standard matrix:
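This product can be computed with technology. Using the matrices from the activity above in Octave:

B = [2 1 -3 ; 5 -3 4]          % standard matrix of T
A = [1 2 ; 0 1 ; 3 5 ; -1 -2]  % standard matrix of S
A*B   % = [12 -5 5 ; 5 -3 4 ; 31 -12 11 ; -12 5 -5], the matrix of S o T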
Let \(S: \IR^3 \rightarrow \IR^2\) be given by the matrix \(A=\left[\begin{array}{ccc} -4 & -2 & 3 \\ 0 & 1 & 1 \end{array}\right]\) and \(T: \IR^2 \rightarrow \IR^3\) be given by the matrix \(B=\left[\begin{array}{cc} 2 & 3 \\ 1 & -1 \\ 0 & -1 \end{array}\right]\text{.}\)
Write the dimensions (rows \(\times\) columns) for \(A\text{,}\) \(B\text{,}\) \(AB\text{,}\) and \(BA\text{.}\)
Find the standard matrix \(AB\) of \(S \circ T\text{.}\)
Find the standard matrix \(BA\) of \(T \circ S\text{.}\)
Consider the following three matrices.
Find the domain and codomain of each of the three linear maps corresponding to \(A\text{,}\) \(B\text{,}\) and \(C\text{.}\)
Only one of the matrix products \(AB,AC,BA,BC,CA,CB\) can actually be computed. Compute it.
Let \(B=\left[\begin{array}{ccc} 3 & -4 & 0 \\ 2 & 0 & -1 \\ 0 & -3 & 3 \end{array}\right]\text{,}\) and let \(A=\left[\begin{array}{ccc} 2 & 7 & -1 \\ 0 & 3 & 2 \\ 1 & 1 & -1 \end{array}\right]\text{.}\)
Compute the product \(BA\) by hand.
Check your work using technology. Using Octave:
B = [3 -4 0 ; 2 0 -1 ; 0 -3 3]
A = [2 7 -1 ; 0 3 2 ; 1 1 -1]
B*A
Let \(A=\left[\begin{array}{ccc} 2 & 7 & -1 \\ 0 & 3 & 2 \\ 1 & 1 & -1 \end{array}\right]\text{.}\) Find a \(3 \times 3\) matrix \(B\) such that \(BA=A\text{,}\) that is,
The identity matrix \(I_n\) (or just \(I\) when \(n\) is obvious from context) is the \(n \times n\) matrix
For any square matrix \(A\text{,}\) \(IA=AI=A\text{:}\)
Tweaking the identity matrix slightly allows us to write row operations in terms of matrix multiplication.
Create a matrix that doubles the third row of \(A\text{:}\)
Create a matrix that swaps the second and third rows of \(A\text{:}\)
Create a matrix that adds \(5\) times the third row of \(A\) to the first row:
If \(R\) is the result of applying a row operation to \(I\text{,}\) then \(RA\) is the result of applying the same row operation to \(A\text{.}\)
Scaling a row: \(R= \left[\begin{array}{ccc} c & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right]\)
Swapping rows: \(R= \left[\begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{array}\right]\)
Adding a row multiple to another row: \(R= \left[\begin{array}{ccc} 1 & 0 & c \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right]\)
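These row operation matrices may be sanity-checked in Octave with a sample matrix of our own choosing; for example, multiplying by the swap matrix above exchanges the first two rows:

A = [1 2 3 ; 4 5 6 ; 7 8 9] % an arbitrary sample matrix
R = [0 1 0 ; 1 0 0 ; 0 0 1] % swaps rows 1 and 2
R*A % equals A with its first two rows exchanged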
Such matrices can be chained together to emulate multiple row operations. In particular,
Consider the two row operations \(R_2\leftrightarrow R_3\) and \(R_1+R_2\to R_1\) applied as follows to show \(A\sim B\text{:}\)
Express these row operations as matrix multiplication by expressing \(B\) as the product of two matrices and \(A\text{:}\)
Let \(T: \IR^n \rightarrow \IR^m\) be a linear map with standard matrix \(A\text{.}\) Sort the following items into three groups of statements: a group that means \(T\) is injective, a group that means \(T\) is surjective, and a group that means \(T\) is bijective.
\(A\vec x=\vec b\) has a solution for all \(\vec b\in\IR^m\)
\(A\vec x=\vec b\) has a unique solution for all \(\vec b\in\IR^m\)
\(A\vec x=\vec 0\) has a unique solution.
The columns of \(A\) span \(\IR^m\)
The columns of \(A\) are linearly independent
The columns of \(A\) are a basis of \(\IR^m\)
Every column of \(\RREF(A)\) has a pivot
Every row of \(\RREF(A)\) has a pivot
\(m=n\) and \(\RREF(A)=I\)
Let \(T: \IR^3 \rightarrow \IR^3\) be the linear transformation given by the standard matrix \(A=\left[\begin{array}{ccc} 2 & -1 & 0 \\ 2 & 1 & 4 \\ 1 & 1 & 3 \end{array}\right]\text{.}\)
Write an augmented matrix representing the system of equations given by \(T(\vec x)=\vec{0}\text{,}\) that is, \(A\vec x=\left[\begin{array}{c}0 \\ 0 \\ 0 \end{array}\right]\text{.}\) Then solve \(T(\vec x)=\vec{0}\) to find the kernel of \(T\text{.}\)
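A kernel computation like this may be checked in Octave; note that null returns an orthonormal basis for the null space, so its answer may differ from yours by scaling:

A = [2 -1 0 ; 2 1 4 ; 1 1 3]
rref(A) % the reduced form reveals the free variable
null(A) % a basis vector, which should be proportional to [1; 2; -1]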
Let \(T: \IR^n \rightarrow \IR^n\) be a linear map with standard matrix \(A\text{.}\)
If \(T\) is a bijection and \(\vec b\) is any \(\IR^n\) vector, then \(T(\vec x)=A\vec x=\vec b\) has a unique solution.
So we may define an inverse map \(T^{-1} : \IR^n \rightarrow \IR^n\) by setting \(T^{-1}(\vec b)\) to be this unique solution.
Let \(A^{-1}\) be the standard matrix for \(T^{-1}\text{.}\) We call \(A^{-1}\) the inverse matrix of \(A\text{,}\) so we also say that \(A\) is invertible.
Let \(T: \IR^3 \rightarrow \IR^3\) be the linear transformation given by the standard matrix \(A=\left[\begin{array}{ccc} 2 & -1 & -6 \\ 2 & 1 & 3 \\ 1 & 1 & 4 \end{array}\right]\text{.}\)
Write an augmented matrix representing the system of equations given by \(T(\vec x)=\vec{e}_1\text{,}\) that is, \(A\vec x=\left[\begin{array}{c}1 \\ 0 \\ 0 \end{array}\right]\text{.}\) Then solve \(T(\vec x)=\vec{e}_1\) to find \(T^{-1}(\vec{e}_1)\text{.}\)
Solve \(T(\vec x)=\vec{e}_2\) to find \(T^{-1}(\vec{e}_2)\text{.}\)
Solve \(T(\vec x)=\vec{e}_3\) to find \(T^{-1}(\vec{e}_3)\text{.}\)
Write \(A^{-1}\text{,}\) the standard matrix for \(T^{-1}\text{.}\)
We could have solved these three systems simultaneously by row reducing the matrix \([A\,|\,I]\) at once.
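For example, this simultaneous row reduction may be carried out (or checked) in Octave:

A = [2 -1 -6 ; 2 1 3 ; 1 1 4]
rref([A eye(3)]) % the right-hand 3x3 block is the inverse of A
inv(A) % built-in inverse, for comparison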
Find the inverse \(A^{-1}\) of the matrix \(A=\left[\begin{array}{cc} 1 & 3 \\ 0 & -2 \end{array}\right]\) by row-reducing \([A\,|\,I]\text{.}\)
Is the matrix \(\left[\begin{array}{ccc} 2 & 3 & 1 \\ -1 & -4 & 2 \\ 0 & -5 & 5 \end{array}\right]\) invertible? Give a reason for your answer.
An \(n\times n\) matrix \(A\) is invertible if and only if \(\RREF(A) = I_n\text{.}\)
Let \(T:\IR^2\to\IR^2\) be the bijective linear map defined by \(T\left(\left[\begin{array}{c}x\\y\end{array}\right]\right)=\left[\begin{array}{c} 2x -3y \\ -3x + 5y\end{array}\right]\text{,}\) with the inverse map \(T^{-1}\left(\left[\begin{array}{c}x\\y\end{array}\right]\right)=\left[\begin{array}{c} 5x+ 3y \\ 3x + 2y\end{array}\right]\text{.}\)
Compute \((T^{-1}\circ T)\left(\left[\begin{array}{c}-2\\1\end{array}\right]\right)\text{.}\)
If \(A\) is the standard matrix for \(T\) and \(A^{-1}\) is the standard matrix for \(T^{-1}\text{,}\) find the \(2\times 2\) matrix
\(T^{-1}\circ T=T\circ T^{-1}\) is the identity map for any bijective linear transformation \(T\text{.}\) Therefore \(A^{-1}A=AA^{-1}=I\) is the identity matrix for any invertible matrix \(A\text{.}\)
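For instance, reading the standard matrices off of the maps \(T\) and \(T^{-1}\) above, Octave confirms that both products yield the identity:

A = [2 -3 ; -3 5] % standard matrix of T
Ainv = [5 3 ; 3 2] % standard matrix of T^-1
A*Ainv % yields eye(2)
Ainv*A % yields eye(2)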
The image below illustrates how the linear transformation \(T : \IR^2 \rightarrow \IR^2\) given by the standard matrix \(A = \left[\begin{array}{cc} 2 & 0 \\ 0 & 3 \end{array}\right]\) transforms the unit square.
What are the lengths of \(A\vec e_1\) and \(A\vec e_2\text{?}\)
What is the area of the transformed unit square?
The image below illustrates how the linear transformation \(S : \IR^2 \rightarrow \IR^2\) given by the standard matrix \(B = \left[\begin{array}{cc} 2 & 3 \\ 0 & 4 \end{array}\right]\) transforms the unit square.
What are the lengths of \(B\vec e_1\) and \(B\vec e_2\text{?}\)
What is the area of the transformed unit square?
It is possible to find two nonparallel vectors that are scaled but not rotated by the linear map given by \(B\text{.}\)
The process for finding such vectors will be covered later in this module.
Notice that while a linear map can transform vectors in various ways, linear maps always transform parallelograms into parallelograms, and these areas are always transformed by the same factor: in the case of \(B=\left[\begin{array}{cc} 2 & 3 \\ 0 & 4 \end{array}\right]\text{,}\) this factor is \(8\text{.}\)
Since this change in area is always the same for a given linear map, it will be equal to the area of the transformed unit square (which begins with area \(1\)).
We will define the determinant of a square matrix \(B\text{,}\) or \(\det(B)\) for short, to be the factor by which \(B\) scales areas. In order to figure out how to compute it, we first figure out the properties it must satisfy.
The transformation of the unit square by the standard matrix \([\vec{e}_1\hspace{0.5em} \vec{e}_2]=\left[\begin{array}{cc}1&0\\0&1\end{array}\right]=I\) is illustrated below. What is \(\det([\vec{e}_1\hspace{0.5em} \vec{e}_2])=\det(I)\text{,}\) the area of the transformed unit square shown here?
0
1
2
4
The transformation of the unit square by the standard matrix \([\vec{v}\hspace{0.5em} \vec{v}]\) is illustrated below, where \(T(\vec{e}_1)=T(\vec{e}_2)=\vec{v}\text{.}\) What is \(\det([\vec{v}\hspace{0.5em} \vec{v}])\text{,}\) the area of the transformed unit square shown here?
0
1
2
4
The transformations of the unit square by the standard matrices \([\vec{v}\hspace{0.5em} \vec{w}]\) and \([c\vec{v}\hspace{0.5em} \vec{w}]\) are illustrated below. Describe the value of \(\det([c\vec{v}\hspace{0.5em} \vec{w}])\text{.}\)
\(\displaystyle \det([\vec{v}\hspace{0.5em} \vec{w}])\)
\(\displaystyle c\det([\vec{v}\hspace{0.5em} \vec{w}])\)
\(\displaystyle c^2\det([\vec{v}\hspace{0.5em} \vec{w}])\)
Cannot be determined from this information.
The transformations of unit squares by the standard matrices \([\vec{u}\hspace{0.5em} \vec{w}]\text{,}\) \([\vec{v}\hspace{0.5em} \vec{w}]\) and \([\vec{u}+\vec{v}\hspace{0.5em} \vec{w}]\) are illustrated below. Describe the value of \(\det([\vec{u}+\vec{v}\hspace{0.5em} \vec{w}])\text{.}\)
\(\displaystyle \det([\vec{u}\hspace{0.5em} \vec{w}])=\det([\vec{v}\hspace{0.5em} \vec{w}])\)
\(\displaystyle \det([\vec{u}\hspace{0.5em} \vec{w}])+\det([\vec{v}\hspace{0.5em} \vec{w}])\)
\(\displaystyle \det([\vec{u}\hspace{0.5em} \vec{w}])\det([\vec{v}\hspace{0.5em} \vec{w}])\)
Cannot be determined from this information.
The determinant is the unique function \(\det:M_{n,n}\to\IR\) satisfying these properties:
Note that these last two properties together can be phrased as “The determinant is linear in each column.”
The determinant must also satisfy other properties. Consider \(\det([\vec v \hspace{1em}\vec w+c \vec{v}])\) and \(\det([\vec v\hspace{1em}\vec w])\text{.}\)
The base of both parallelograms is \(\vec{v}\text{,}\) while the height has not changed, so the determinant does not change either. This can also be proven using the other properties of the determinant:
Swapping columns may be thought of as a reflection, which is represented by a negative determinant. For example, the following matrices transform the unit square into the same parallelogram, but the second matrix reflects its orientation.
The fact that swapping columns multiplies the determinant by \(-1\) may be verified by adding and subtracting columns.
To summarize, we've shown that the column versions of the three row-reducing operations on a matrix may be used to simplify a determinant in the following way:
Multiplying a column by a scalar multiplies the determinant by that scalar:
Swapping two columns changes the sign of the determinant:
Adding a multiple of a column to another column does not change the determinant:
The transformation given by the standard matrix \(A\) scales areas by \(4\text{,}\) and the transformation given by the standard matrix \(B\) scales areas by \(3\text{.}\) By what factor does the transformation given by the standard matrix \(AB\) scale areas?
\(\displaystyle 1\)
\(\displaystyle 7\)
\(\displaystyle 12\)
Cannot be determined
Since the transformation given by the standard matrix \(AB\) is obtained by first applying the transformation given by \(B\) and then applying the transformation given by \(A\text{,}\) it follows that
Recall that row operations may be produced by matrix multiplication.
Multiply the first row of \(A\) by \(c\text{:}\) \(\left[\begin{array}{cccc} c & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right]A\)
Swap the first and second row of \(A\text{:}\) \(\left[\begin{array}{cccc} 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right]A\)
Add \(c\) times the third row to the first row of \(A\text{:}\) \(\left[\begin{array}{cccc} 1 & 0 & c & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right]A\)
The determinants of row operation matrices may be computed by manipulating columns to reduce each matrix to the identity:
Scaling a row: \(\det \left[\begin{array}{cccc} c & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right] = c\det \left[\begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right] = c\)
Swapping rows: \(\det \left[\begin{array}{cccc} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right] = -1\det \left[\begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right] = -1\)
Adding a row multiple to another row: \(\det \left[\begin{array}{cccc} 1 & 0 & c & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right] = \det \left[\begin{array}{cccc} 1 & 0 & c-1c & 0\\ 0 & 1 & 0-0c & 0\\ 0 & 0 & 1-0c & 0 \\ 0 & 0 & 0-0c & 1 \end{array}\right] = \det(I)=1\)
Consider the row operation \(R_1+4R_3\to R_1\) applied as follows to show \(A\sim B\text{:}\)
Find a matrix \(R\) such that \(B=RA\text{,}\) by applying the same row operation to \(I=\left[\begin{array}{cccc}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{array}\right]\text{.}\)
Find \(\det R\) by comparing with the previous slide.
If \(C \in M_{3,3}\) is a matrix with \(\det(C)= -3\text{,}\) find
Consider the row operation \(R_1\leftrightarrow R_3\) applied as follows to show \(A\sim B\text{:}\)
Find a matrix \(R\) such that \(B=RA\text{,}\) by applying the same row operation to \(I\text{.}\)
If \(C \in M_{3,3}\) is a matrix with \(\det(C)= 5\text{,}\) find \(\det(RC)\text{.}\)
Consider the row operation \(3R_2\to R_2\) applied as follows to show \(A\sim B\text{:}\)
Find a matrix \(R\) such that \(B=RA\text{.}\)
If \(C \in M_{3,3}\) is a matrix with \(\det(C)= -7\text{,}\) find \(\det(RC)\text{.}\)
Recall that the column versions of the three row-reducing operations on a matrix may be used to simplify a determinant:
Multiplying columns by scalars:
Swapping two columns:
Adding a multiple of a column to another column:
The determinants of row operation matrices may be computed by manipulating columns to reduce each matrix to the identity:
Scaling a row: \(\left[\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & c & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right]\)
Swapping rows: \(\left[\begin{array}{cccc} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right]\)
Adding a row multiple to another row: \(\left[\begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & c & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right]\)
Thus we can also use row operations to simplify determinants:
Multiplying rows by scalars: \(\det\left[\begin{array}{c}\vdots\\cR\\\vdots\end{array}\right]= c\det\left[\begin{array}{c}\vdots\\R\\\vdots\end{array}\right]\)
Swapping two rows: \(\det\left[\begin{array}{c}\vdots\\R\\\vdots\\S\\\vdots\end{array}\right]= -\det\left[\begin{array}{c}\vdots\\S\\\vdots\\R\\\vdots\end{array}\right]\)
Adding multiples of rows to other rows: \(\det\left[\begin{array}{c}\vdots\\R\\\vdots\\S\\\vdots\end{array}\right]= \det\left[\begin{array}{c}\vdots\\R+cS\\\vdots\\S\\\vdots\end{array}\right]\)
So we may compute the determinant of \(\left[\begin{array}{cc} 2 & 4 \\ 2 & 3 \end{array}\right]\) by manipulating its rows/columns to reduce the matrix to \(I\text{:}\)
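This result may be double-checked with Octave's built-in determinant:

det([2 4 ; 2 3]) % yields (2)(3)-(4)(2) = -2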
We've seen that row reducing all the way into RREF gives us a method of computing determinants.
However, we learned in module E that this can be tedious for large matrices. Thus, we will try to figure out how to turn the determinant of a larger matrix into the determinant of a smaller matrix.
The following image illustrates the transformation of the unit cube by the matrix \(\left[\begin{array}{ccc} 1 & 1 & 0 \\ 1 & 3 & 1 \\ 0 & 0 & 1\end{array}\right]\text{.}\)
Recall that for this solid \(V=Bh\text{,}\) where \(h\) is the height of the solid and \(B\) is the area of its parallelogram base. So what must its volume be?
\(\displaystyle \det \left[\begin{array}{cc} 1 & 1 \\ 1 & 3 \end{array}\right]\)
\(\displaystyle \det \left[\begin{array}{cc} 1 & 0 \\ 3 & 1 \end{array}\right]\)
\(\displaystyle \det \left[\begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array}\right]\)
\(\displaystyle \det \left[\begin{array}{cc} 1 & 3 \\ 0 & 0 \end{array}\right]\)
If row \(i\) contains all zeros except for a \(1\) on the main (upper-left to lower-right) diagonal, then both column and row \(i\) may be removed without changing the value of the determinant.
Since row and column operations affect the determinant in the same way, the same technique works for a column of all zeros except for a \(1\) on the main diagonal.
Remove an appropriate row and column of \(\det \left[\begin{array}{ccc} 1 & 0 & 0 \\ 1 & 5 & 12 \\ 3 & 2 & -1 \end{array}\right]\) to simplify the determinant to a \(2\times 2\) determinant.
Simplify \(\det \left[\begin{array}{ccc} 0 & 3 & -2 \\ 2 & 5 & 12 \\ 0 & 2 & -1 \end{array}\right]\) to a multiple of a \(2\times 2\) determinant by first doing the following:
Factor out a \(2\) from a column.
Swap rows or columns to put a \(1\) on the main diagonal.
Simplify \(\det \left[\begin{array}{ccc} 4 & -2 & 2 \\ 3 & 1 & 4 \\ 1 & -1 & 3\end{array}\right]\) to a multiple of a \(2\times 2\) determinant by first doing the following:
Use row/column operations to create two zeroes in the same row or column.
Factor/swap as needed to get a row/column of all zeroes except a \(1\) on the main diagonal.
Using row/column operations, you can introduce zeros and reduce dimension to whittle down the determinant of a large matrix to a determinant of a smaller matrix.
Rewrite
Compute \(\det\left[\begin{array}{cccc} 2 & 3 & 5 & 0 \\ 0 & 3 & 2 & 0 \\ 1 & 2 & 0 & 3 \\ -1 & -1 & 2 & 2 \end{array}\right]\) by using any combination of row/column operations.
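Check your work using technology. In Octave:

M = [2 3 5 0 ; 0 3 2 0 ; 1 2 0 3 ; -1 -1 2 2]
det(M) % should agree with your row/column reduction (we expect -109)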
Another option is to take advantage of the fact that the determinant is linear in each row or column. This approach is called Laplace expansion or cofactor expansion.
For example, since \(\color{blue}{ \left[\begin{array}{ccc} 1 & 2 & 4 \end{array}\right] = 1\left[\begin{array}{ccc} 1 & 0 & 0 \end{array}\right] + 2\left[\begin{array}{ccc} 0 & 1 & 0 \end{array}\right] + 4\left[\begin{array}{ccc} 0 & 0 & 1 \end{array}\right]} \text{,}\)
Applying Laplace expansion to a \(2 \times 2\) matrix yields a short formula you may have seen:
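Namely, expanding along the first row, \(\det\left[\begin{array}{cc} a & b \\ c & d \end{array}\right] = a\det[d]-b\det[c] = ad-bc\text{.}\)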
There are formulas for the determinants of larger matrices, but they can be pretty tedious to use. For example, writing out a formula for a \(4\times 4\) determinant would require 24 different terms!
This is why we use either Laplace expansion or row/column operations directly.
Based on the previous activities, which technique is easier for computing determinants?
Memorizing formulas.
Using row/column operations.
Laplace expansion.
Some other technique (be prepared to describe it).
Use your preferred technique to compute \(\det\left[\begin{array}{cccc} 4 & -3 & 0 & 0 \\ 1 & -3 & 2 & -1 \\ 3 & 2 & 0 & 3 \\ 0 & -3 & 2 & -2 \end{array}\right] \text{.}\)
An invertible matrix \(M\) and its inverse \(M^{-1}\) are given below:
Which of the following is equal to \(\det(M)\det(M^{-1})\text{?}\)
\(\displaystyle -1\)
\(\displaystyle 0\)
\(\displaystyle 1\)
\(\displaystyle 4\)
For every invertible matrix \(M\text{,}\)
Furthermore, a square matrix \(M\) is invertible if and only if \(\det(M)\not=0\text{.}\)
Consider the linear transformation \(A : \IR^2 \rightarrow \IR^2\) given by the matrix \(A = \left[\begin{array}{cc} 2 & 2 \\ 0 & 3 \end{array}\right]\text{.}\)
It is easy to see geometrically that
It is less obvious (but easily checked once you find it) that
Let \(A \in M_{n,n}\text{.}\) An eigenvector for \(A\) is a vector \(\vec{x} \in \IR^n\) such that \(A\vec{x}\) is parallel to \(\vec{x}\text{.}\)
In other words, \(A\vec{x}=\lambda \vec{x}\) for some scalar \(\lambda\text{.}\) If \(\vec x\not=\vec 0\text{,}\) then we say \(\vec x\) is a nontrivial eigenvector and we call this \(\lambda\) an eigenvalue of \(A\text{.}\)
Finding the eigenvalues \(\lambda\) that satisfy
Which of the following must be true for any eigenvalue?
The kernel of the transformation with standard matrix \(A-\lambda I\) must contain the zero vector, so \(A-\lambda I\) is invertible.
The kernel of the transformation with standard matrix \(A-\lambda I\) must contain a non-zero vector, so \(A-\lambda I\) is not invertible.
The image of the transformation with standard matrix \(A-\lambda I\) must contain the zero vector, so \(A-\lambda I\) is invertible.
The image of the transformation with standard matrix \(A-\lambda I\) must contain a non-zero vector, so \(A-\lambda I\) is not invertible.
The eigenvalues \(\lambda\) for a matrix \(A\) are the values that make \(A-\lambda I\) non-invertible.
Thus the eigenvalues \(\lambda\) for a matrix \(A\) are the solutions to the equation
The expression \(\det(A-\lambda I)\) is called the characteristic polynomial of \(A\text{.}\)
For example, when \(A=\left[\begin{array}{cc}1 & 2 \\ 3 & 4\end{array}\right]\text{,}\) we have
Thus the characteristic polynomial of \(A\) is
Let \(A = \left[\begin{array}{cc} 5 & 2 \\ -3 & -2 \end{array}\right]\text{.}\)
Compute \(\det (A-\lambda I)\) to determine the characteristic polynomial of \(A\text{.}\)
Set this characteristic polynomial equal to zero and factor to determine the eigenvalues of \(A\text{.}\)
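Check your work using technology. Octave's eig computes eigenvalues numerically:

A = [5 2 ; -3 -2]
eig(A) % the roots of the characteristic polynomial (here -1 and 4)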
Find all the eigenvalues for the matrix \(A=\left[\begin{array}{cc} 3 & -3 \\ 2 & -4 \end{array}\right]\text{.}\)
Find all the eigenvalues for the matrix \(A=\left[\begin{array}{cc} 1 & -4 \\ 0 & 5 \end{array}\right]\text{.}\)
Find all the eigenvalues for the matrix \(A=\left[\begin{array}{ccc} 3 & -3 & 1 \\ 0 & -4 & 2 \\ 0 & 0 & 7 \end{array}\right]\text{.}\)
It's possible to show that \(-2\) is an eigenvalue for \(\left[\begin{array}{ccc}-1&4&-2\\2&-7&9\\3&0&4\end{array}\right]\text{.}\)
Compute the kernel of the transformation with standard matrix
Since the kernel of a linear map is a subspace of \(\IR^n\text{,}\) and the kernel obtained from \(A-\lambda I\) contains all the eigenvectors associated with \(\lambda\text{,}\) we call this kernel the eigenspace of \(A\) associated with \(\lambda\text{.}\)
Find a basis for the eigenspace for the matrix \(\left[\begin{array}{ccc} 0 & 0 & 3 \\ 1 & 0 & -1 \\ 0 & 1 & 3 \end{array}\right]\) associated with the eigenvalue \(3\text{.}\)
Find a basis for the eigenspace for the matrix \(\left[\begin{array}{cccc} 5 & -2 & 0 & 4 \\ 6 & -2 & 1 & 5 \\ -2 & 1 & 2 & -3 \\ 4 & 5 & -3 & 6 \end{array}\right]\) associated with the eigenvalue \(1\text{.}\)
Find a basis for the eigenspace for the matrix \(\left[\begin{array}{cccc} 4 & 3 & 0 & 0 \\ 3 & 3 & 0 & 0 \\ 0 & 0 & 2 & 5 \\ 0 & 0 & 0 & 2 \end{array}\right]\) associated with the eigenvalue \(2\text{.}\)
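For activities like these, answers may be checked in Octave with null(A - lambda*eye(n)); since null returns an orthonormal basis, your basis may differ by scaling. For the first activity above:

A = [0 0 3 ; 1 0 -1 ; 0 1 3]
null(A - 3*eye(3)) % a single basis vector, proportional to [1; 0; 1]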
In geology, a phase is any physically separable material in the system, such as various minerals or liquids.
A component is a chemical compound necessary to make up the phases; these are usually oxides such as Calcium Oxide (\({\rm CaO}\)) or Silicon Dioxide (\({\rm SiO_2}\)).
In a typical application, a geologist knows how to build each phase from the components, and is interested in determining reactions among the different phases.
Consider the 3 components
Geologists already know (or can easily deduce) that
To study this vector space, each phase may be represented as a Euclidean vector in \(\IR^3\) whose three entries record its amounts of the three components \(\vec c_1,\vec c_2,\vec c_3\text{.}\)
Determine if the set of phases is linearly dependent or linearly independent.
Geologists are interested in knowing all the possible chemical reactions among the 5 phases:
Set up a system of equations equivalent to this vector equation.
Find a basis for its solution space.
Interpret each basis vector as a vector equation and a chemical equation.
We found two basis vectors \(\left[\begin{array}{c} 1 \\ -2 \\ -2 \\ 1 \\ 0 \end{array}\right]\) and \(\left[\begin{array}{c} 0 \\ -1 \\ -1 \\ 0 \\ 1 \end{array}\right]\text{,}\) corresponding to the vector and chemical equations
Combine the basis vectors to produce a chemical equation among the five phases that does not involve \(\vec{p}_2 = {\rm CaMgSiO_4}\text{.}\)
In the picture below, each circle represents a webpage, and each arrow represents a link from one page to another.
Based on how these pages link to each other, write a list of the 7 webpages in order from most important to least important.
Links are endorsements. That is:
A webpage is important if it is linked to (endorsed) by important pages.
A webpage distributes its importance equally among all the pages it links to (endorses).
Consider this small network with only three pages. Let \(x_1, x_2, x_3\) be the importance of the three pages respectively.
This corresponds to the page rank system:
By writing this linear system in terms of matrix multiplication, we obtain the page rank matrix \(A = \left[\begin{array}{ccc} 0 & 1 & 0 \\ \frac{1}{2} & 0 & 1 \\ \frac{1}{2} & 0 & 0 \end{array}\right]\) and page rank vector \(\vec{x}=\left[\begin{array}{c} x_1 \\ x_2 \\ x_3 \end{array}\right]\text{.}\)
Thus, computing the importance of pages on a network is equivalent to solving the matrix equation \(A\vec{x}=1\vec{x}\text{.}\)
Thus, our $978,000,000,000 problem is what kind of problem?
Find a page rank vector \(\vec x\) satisfying \(A\vec x=1\vec x\) for the following network's page rank matrix \(A\text{.}\)
That is, find the eigenspace associated with \(\lambda=1\) for the matrix \(A\text{,}\) and choose a vector from that eigenspace.
Row-reducing \(A-I = \left[\begin{array}{ccc} -1 & 1 & 0 \\ \frac{1}{2} & -1 & 1 \\ \frac{1}{2} & 0 & -1 \end{array}\right] \sim \left[\begin{array}{ccc} 1 & 0 & -2 \\ 0 & 1 & -2 \\ 0 & 0 & 0 \end{array}\right]\) yields the basic eigenvector \(\left[\begin{array}{c} 2 \\ 2 \\1 \end{array}\right]\text{.}\)
Therefore, we may conclude that pages \(1\) and \(2\) are equally important, and both pages are twice as important as page \(3\text{.}\)
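A sketch of this computation in Octave:

A = [0 1 0 ; 1/2 0 1 ; 1/2 0 0]
rref(A - eye(3)) % yields [1 0 -2 ; 0 1 -2 ; 0 0 0]
% so x3 is free, and choosing x3 = 1 gives the page rank vector [2; 2; 1]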
Compute the \(7 \times 7\) page rank matrix for the following network.
For example, since website \(1\) distributes its endorsement equally between \(2\) and \(4\text{,}\) the first column is \(\left[\begin{array}{c} 0 \\ \frac{1}{2} \\ 0 \\ \frac{1}{2} \\ 0 \\ 0 \\ 0 \end{array}\right]\text{.}\)
Find a page rank vector for the given page rank matrix.
Which webpage is most important?
Since a page rank vector for the network is given by \(\vec x\text{,}\) it's reasonable to consider page \(2\) as the most important page.
Based upon this page rank vector, here is a complete ranking of all seven pages from most important to least important:
Given the following diagram, use a page rank vector to rank the pages \(1\) through \(7\) in order from most important to least important.
In engineering, a truss is a structure designed from several beams of material called struts, assembled to behave as a single object.
Consider the representation of a simple truss pictured below. All of the seven struts are of equal length, affixed to two anchor points applying a normal force to nodes \(C\) and \(E\text{,}\) and with a \(10000\,\mathrm{N}\) load applied to the node given by \(D\text{.}\)
Which of the following must hold for the truss to be stable?
All of the struts will experience compression.
All of the struts will experience tension.
Some of the struts will be compressed, but others will be tensioned.
Since the forces must balance at each node for the truss to be stable, some of the struts will be compressed, while others will be tensioned.
By finding vector equations that must hold at each node, we may determine many of the forces at play.
For example, at the bottom left node there are 3 forces acting.
Let \(\vec F_{CA}\) be the force on \(C\) given by the compression/tension of the strut \(CA\text{,}\) let \(\vec F_{CD}\) be defined similarly, and let \(\vec N_C\) be the normal force of the anchor point on \(C\text{.}\)
For the truss to be stable, we must have:
Using the conventions of the previous remark, and where \(\vec L\) represents the load vector on node \(D\text{,}\) find four more vector equations that must be satisfied for each of the other four nodes of the truss.
The five vector equations may be written as follows.
Each vector has a vertical and horizontal component, so it may be treated as a vector in \(\IR^2\text{.}\) Note that \(\vec F_{CA}\) must have the same magnitude (but opposite direction) as \(\vec F_{AC}\text{.}\)
To write a linear system that models the truss under consideration with constant load \(10000\) newtons, how many scalar variables will be required?
\(7\text{:}\) \(5\) from the nodes, \(2\) from the anchors
\(9\text{:}\) \(7\) from the struts, \(2\) from the anchors
\(11\text{:}\) \(7\) from the struts, \(4\) from the anchors
\(12\text{:}\) \(7\) from the struts, \(4\) from the anchors, \(1\) from the load
\(13\text{:}\) \(5\) from the nodes, \(7\) from the struts, \(1\) from the load
Since the angles for each strut are known, one variable may be used to represent each.
For example:
Since the angle of the normal force at each anchor point is unknown, two variables may be used to represent each.
The load vector is constant.
Each of the five vector equations found previously represents two linear equations: one for the horizontal component and one for the vertical.
Expand the vector equation given below using sine and cosine of appropriate angles, then compute each component (approximating \(\sqrt{3}/2\approx 0.866\)).
The full augmented matrix given by the ten equations in this linear system is given below, where the eleven columns correspond to \(x_1,\dots,x_7,y_1,y_2,z_1,z_2\text{,}\) and the ten rows correspond to the horizontal and vertical components of the forces acting at \(A,\dots,E\text{.}\)
This matrix row-reduces to the following.
Thus we know the truss must satisfy the following conditions.
In particular, the negative values \(x_1,x_2,x_5\) represent tension (forces pointing into the nodes), and the positive values \(x_3,x_4\) represent compression (forces pointing out of the nodes). The vertical normal forces \(y_2\) and \(z_2\) together counteract the \(10000\,\mathrm{N}\) load.
Consider the scalar system of equations
Show that
Explain why the matrix \(\left[\begin{array}{cccc} 1 & 1 & 0 & 3 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{array}\right]\) is not in reduced row echelon form.
Let \(V\) be the set of all pairs \((x,y)\) of real numbers together with the following operations:
Show that scalar multiplication distributes over vector addition:
Explain why \(V\) nonetheless is not a vector space.
Consider the statement
The vector \(\left[\begin{array}{c} 3 \\ -1 \\ 2 \end{array}\right] \) is a linear combination of the vectors \(\left[\begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right] \text{,}\) \(\left[\begin{array}{c} 3 \\ 2 \\ -1 \end{array}\right] \text{,}\) and \(\left[\begin{array}{c} 1 \\ 1 \\ -1 \end{array}\right] \text{.}\)
Write an equivalent statement using a vector equation.
Explain why your statement is true or false.
Consider the statement
The set of vectors \(\left\{\left[\begin{array}{c} 1 \\ 0 \\ 1 \end{array}\right] , \left[\begin{array}{c} 3 \\ 2 \\ -1 \end{array}\right] , \left[\begin{array}{c} 1 \\ 1 \\ -1 \end{array}\right] \right\}\) does not span \(\IR^3\text{.}\)
Write an equivalent statement using a vector equation.
Explain why your statement is true or false.
Consider the following two sets of Euclidean vectors.
Consider the statement
The set of vectors \(\left\{ \left[\begin{array}{c} 3 \\ 2 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} -1 \\ 1 \\ 2 \\ 3 \end{array}\right] , \left[\begin{array}{c} 0 \\ 1 \\ -1 \\ 1 \end{array}\right] , \left[\begin{array}{c} 2 \\ 5 \\ 1 \\ 5 \end{array}\right] \right\}\) is linearly dependent.
Write an equivalent statement using a vector equation.
Explain why your statement is true or false.
Consider the statement
The set of vectors \(\left\{ \left[\begin{array}{c} 3 \\ 2 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} -1 \\ 1 \\ 2 \\ 3 \end{array}\right] , \left[\begin{array}{c} 0 \\ 1 \\ -1 \\ 1 \end{array}\right] , \left[\begin{array}{c} 2 \\ 5 \\ 1 \\ 5 \end{array}\right] \right\} \) is a basis of \(\IR^4\text{.}\)
Write an equivalent statement in terms of other vector properties.
Explain why your statement is true or false.
Consider the subspace
Explain how to find a basis of \(W\text{.}\)
Explain how to find the dimension of \(W\text{.}\)
Consider the statement
The set of polynomials \(\setList{3x^3+2x^2+x,-x^3+x^2+2x+3,x^2-x+1,2x^3+5x^2+x+5}\) is linearly independent.
Write an equivalent statement using a polynomial equation.
Explain why your statement is true or false.
Consider the homogeneous system of equations
Find the solution space of the system.
Find a basis of the solution space.
Consider the following maps of polynomials \(S: \P \rightarrow \P\) and \(T:\P\rightarrow\P\) defined by
Find the standard matrix for the linear transformation \(T: \IR^3\rightarrow \IR^4\) given by
Let \(S: \IR^4 \rightarrow \IR^3\) be the linear transformation given by the standard matrix
Let \(T: \IR^4 \rightarrow \IR^3\) be the linear transformation given by
Explain how to find the image of \(T\) and the kernel of \(T\text{.}\)
Explain how to find a basis of the image of \(T\) and a basis of the kernel of \(T\text{.}\)
Explain how to find the rank and nullity of \(T\text{,}\) and why the rank-nullity theorem holds for \(T\text{.}\)
Let \(T: \IR^4 \rightarrow \IR^3\) be the linear transformation given by the standard matrix \(\left[\begin{array}{cccc} 1 & 3 & 2 & -3 \\ 2 & 4 & 6 & -10 \\ 1 & 6 & -1 & 3 \end{array}\right]\text{.}\)
Explain why \(T\) is or is not injective.
Explain why \(T\) is or is not surjective.
Of the following three matrices, only two may be multiplied.
Explain which two may be multiplied and why. Then show how to find their product.
Let \(A\) be a \(4\times4\) matrix.
Give a \(4\times 4\) matrix \(P\) that may be used to perform the row operation \({R_3} \to R_3+4 \, {R_1} \text{.}\)
Give a \(4\times 4\) matrix \(Q\) that may be used to perform the row operation \({R_1} \to -4 \, {R_1}\text{.}\)
Use matrix multiplication to describe the matrix obtained by applying \({R_3} \to 4 \, {R_1} + {R_3}\) and then \({R_1} \to -4 \, {R_1}\) to \(A\) (note the order).
Explain why the matrix \(\left[\begin{array}{ccc}1 & 3 & 2 \\ 2 & 4 & 6 \\ 1 & 6 & -1 \end{array}\right]\) is or is not invertible.
Show how to compute the inverse of the matrix \(A=\left[\begin{array}{cccc} 1 & 2 & 3 & 5 \\ 0 & -1 & 4 & -2 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 1 \end{array}\right]\text{.}\)
Let \(A\) be a \(4 \times 4\) matrix with determinant \(-7\text{.}\)
Let \(B\) be the matrix obtained from \(A\) by applying the row operation \(R_3 \to R_3+3R_4\text{.}\) What is \(\det(B)\text{?}\)
Let \(C\) be the matrix obtained from \(A\) by applying the row operation \(R_2 \to -3R_2\text{.}\) What is \(\det(C)\text{?}\)
Let \(D\) be the matrix obtained from \(A\) by applying the row operation \(R_3 \leftrightarrow R_4\text{.}\) What is \(\det(D)\text{?}\)
Show how to compute the determinant of the matrix
Explain how to find the eigenvalues of the matrix \(\left[\begin{array}{cc} -2 & -2 \\ 10 & 7 \end{array}\right] \text{.}\)
Explain how to find a basis for the eigenspace associated to the eigenvalue \(3\) in the matrix