Subsection 4.6.1 Additional homework
Homework 4.6.1.1.
Let \(A \in \mathbb{R}^{m \times n} \) and \(x \in \mathbb{R}^n \text{.}\) Then \(( A x )^T = x^T A^T\text{.}\)
Always/Sometimes/Never
ALWAYS
Why?
Entry \(j \) of the row vector \(x^T A^T \) is the dot product of \(x \) with column \(j \) of \(A^T \text{,}\) which is row \(j \) of \(A \text{.}\) That dot product is exactly entry \(j \) of \(A x \text{,}\) so \(( A x )^T = x^T A^T \) for all \(A \) and \(x \text{.}\)
Homework 4.6.1.2.
Our laff library has a routine laff_gemv( trans, alpha, A, x, beta, y ) with the following properties:
laff_gemv( 'No transpose', alpha, A, x, beta, y ) computes \(y := \alpha A x + \beta y \text{.}\)
laff_gemv( 'Transpose', alpha, A, x, beta, y ) computes \(y := \alpha A^T x + \beta y \text{.}\)
The routine works regardless of whether \(x \) and/or \(y \) are column and/or row vectors. Our library does NOT include a routine to compute \(y^T := x^T A \text{.}\) What call could you use to compute \(y^T := x^T A \) if \(y^T \) is stored in yt and \(x^T \) in xt?
laff_gemv( 'No transpose', 1.0, A, xt, 0.0, yt ).
laff_gemv( 'No transpose', 1.0, A, xt, 1.0, yt ).
laff_gemv( 'Transpose', 1.0, A, xt, 1.0, yt ).
laff_gemv( 'Transpose', 1.0, A, xt, 0.0, yt ).
The last call, laff_gemv( 'Transpose', 1.0, A, xt, 0.0, yt ), is the one we want: it computes \(y := A^T x \text{,}\) where \(y \) is stored in yt and \(x \) is stored in xt. To understand this, transpose both sides: \(y^T = ( A^T x )^T = x^T ( A^T )^T = x^T A \text{.}\) For this reason, our laff library does not need to include a routine to compute \(y^T := \alpha x^T A + \beta y^T \text{.}\) You will need this next week!
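The trick can be illustrated with a small Python sketch. The gemv function below is a stand-in written only for this illustration, not the actual laff_gemv routine; it merely mimics the calling convention described above, with matrices and vectors as plain Python lists.

```python
# Stand-in for laff_gemv (illustration only, not the actual laff routine):
# computes alpha * op(A) * x + beta * y, where op(A) = A or A^T.
def gemv(trans, alpha, A, x, beta, y):
    m, n = len(A), len(A[0])
    if trans == 'Transpose':
        Aop = [[A[i][j] for i in range(m)] for j in range(n)]  # rows of A^T
    else:
        Aop = A
    return [alpha * sum(Aop[i][j] * x[j] for j in range(len(x))) + beta * y[i]
            for i in range(len(Aop))]

A  = [[1, 2], [3, 4], [5, 6]]   # a 3 x 2 example matrix
xt = [1, -1, 2]                 # x^T, stored as a plain list
yt = [0, 0]                     # y^T; overwritten below since beta = 0.0

# y^T := x^T A  is computed as  (A^T x)^T, i.e. the 'Transpose' call:
yt = gemv('Transpose', 1.0, A, xt, 0.0, yt)
# yt is now [8.0, 10.0], which matches computing x^T A directly.
```

Since yt holds a plain list, the same call works whether you think of the result as a column vector \(y \) or a row vector \(y^T \text{:}\) only the interpretation changes, not the data.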
Homework 4.6.1.3.
Let \(A = \left( \begin{array}{r r} 1 \amp -1 \\ 1 \amp -1 \end{array} \right) \text{.}\) Compute
\(A^2 = \)
\(A^3 = \)
For \(k \gt 1 \text{,}\) \(A^k = \)
\(A^2 = \left( \begin{array}{r r} 0 \amp 0 \\ 0 \amp 0 \end{array} \right)\)
\(A^3 = \left( \begin{array}{r r} 0 \amp 0 \\ 0 \amp 0 \end{array} \right)\)
For \(k \gt 1 \text{,}\) \(A^k = \left( \begin{array}{r r} 0 \amp 0 \\ 0 \amp 0 \end{array} \right)\)
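These answers are quick to verify in plain Python; the matmul2 helper below is a throwaway written for this sketch, not a laff routine. Note that this \(A \) is nonzero yet \(A^2 = 0 \text{,}\) a fact that returns in Homework 4.6.1.6.

```python
# 2x2 matrix product; throwaway helper for this sketch.
def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, -1], [1, -1]]
Z = [[0, 0], [0, 0]]

A2 = matmul2(A, A)    # equals Z
A3 = matmul2(A2, A)   # equals Z
# Once A^2 = 0, all higher powers vanish: A^k = A^(k-2) A^2 = 0 for k > 2.
```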
Homework 4.6.1.4.
Let \(A = \left( \begin{array}{r r} 0 \amp 1 \\ 1 \amp 0 \end{array} \right) \text{.}\)
\(A^2 = \)
\(A^3 = \)
For \(n \geq 0 \text{,}\) \(A^{2n} = \)
For \(n \geq 0 \text{,}\) \(A^{2n+1} = \)
\(A^2 = \left( \begin{array}{r r} 1 \amp 0 \\ 0 \amp 1 \end{array} \right)\)
\(A^3 = \left( \begin{array}{r r} 0 \amp 1 \\ 1 \amp 0 \end{array} \right)\)
For \(n \geq 0 \text{,}\) \(A^{2n} = \left( \begin{array}{r r} 1 \amp 0 \\ 0 \amp 1 \end{array} \right)\)
For \(n \geq 0 \text{,}\) \(A^{2n+1} = \left( \begin{array}{r r} 0 \amp 1 \\ 1 \amp 0 \end{array} \right)\)
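These answers reflect that this \(A \) swaps the two entries of any vector it multiplies, so applying it twice restores the original. A quick check in plain Python (matmul2 is a throwaway helper for this sketch, not a laff routine):

```python
# 2x2 matrix product; throwaway helper for this sketch.
def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [1, 0]]   # swaps the two components of any vector
I = [[1, 0], [0, 1]]

P = I
powers = [I]           # powers[k] holds A^k
for k in range(1, 6):
    P = matmul2(P, A)
    powers.append(P)
# Even powers (A^0, A^2, A^4) equal I; odd powers (A^1, A^3, A^5) equal A.
```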
Homework 4.6.1.5.
Let \(A = \left( \begin{array}{r r} 0 \amp -1 \\ 1 \amp 0 \end{array} \right) \text{.}\)
\(A^2 = \)
\(A^3 = \)
For \(n \geq 0 \text{,}\) \(A^{4n} = \)
For \(n \geq 0 \text{,}\) \(A^{4n+1} = \)
\(A^2 = \left( \begin{array}{r r} -1 \amp 0 \\ 0 \amp -1 \end{array} \right)\)
\(A^3 = \left( \begin{array}{r r} 0 \amp 1 \\ -1 \amp 0 \end{array} \right)\)
For \(n \geq 0 \text{,}\) \(A^{4n} = \left( \begin{array}{r r} 1 \amp 0 \\ 0 \amp 1 \end{array} \right)\)
For \(n \geq 0 \text{,}\) \(A^{4n+1} = \left( \begin{array}{r r} 0 \amp -1 \\ 1 \amp 0 \end{array} \right)\)
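These answers make sense geometrically: this \(A \) rotates the plane counterclockwise by 90 degrees, and four quarter turns make a full turn, so the powers cycle with period four: \(A, -I, -A, I, \) and then repeat. A quick check in plain Python (matmul2 is a throwaway helper for this sketch):

```python
# 2x2 matrix product; throwaway helper for this sketch.
def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, -1], [1, 0]]  # rotates vectors counterclockwise by 90 degrees
I = [[1, 0], [0, 1]]

P = I
powers = [I]           # powers[k] holds A^k
for k in range(1, 9):
    P = matmul2(P, A)
    powers.append(P)
# The powers cycle with period four: A, -I, -A, I, A, -I, -A, I.
```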
Homework 4.6.1.6.
Let \(A \) be a square matrix with \(A A = 0 \text{.}\) (\(A A \) is often written as \(A^2 \text{.}\))
ALWAYS/SOMETIMES/NEVER: \(A \) is a zero matrix.
SOMETIMES
Why?
If \(A = 0 \) then certainly \(A^2 = 0 \text{.}\)
However, \(A \) need not be zero: the matrix \(A = \left( \begin{array}{r r} 1 \amp -1 \\ 1 \amp -1 \end{array} \right) \) from Homework 4.6.1.3 is nonzero, yet \(A^2 = 0 \text{.}\) Hence there are examples with \(A \neq 0 \) such that \(A^2 = 0 \text{.}\)
This may be counterintuitive since if \(\alpha \) is a scalar, then \(\alpha^2 = 0 \) only if \(\alpha = 0 \text{.}\) So, one of the points of this exercise is to make you skeptical about "facts" about scalar multiplication that you may try to transfer to matrix-matrix multiplication.
Homework 4.6.1.7.
TRUE/FALSE: There exists a real-valued matrix \(A \) such that \(A^2 = -I \text{.}\) (Recall: \(I \) is the identity.)
TRUE
Why?
The matrix \(A = \left( \begin{array}{r r} 0 \amp -1 \\ 1 \amp 0 \end{array} \right) \) from Homework 4.6.1.5 satisfies \(A^2 = -I \text{,}\) as computed there. (For real scalars, \(\alpha^2 = -1 \) has no solution; once again, intuition from scalars does not transfer to matrices.)
Homework 4.6.1.8.
TRUE/FALSE: There exists a matrix \(A \) that is not diagonal such that \(A^2 = I \text{.}\)
TRUE
Why?
An example of a matrix \(A \) that is not diagonal yet satisfies \(A^2 = I \text{:}\) \(A = \left( \begin{array}{r r} 0 \amp 1 \\ 1 \amp 0 \end{array} \right) \text{.}\)
This may be counterintuitive since if \(\alpha \) is a real scalar, then \(\alpha^2 = 1 \) only if \(\alpha = 1 \) or \(\alpha = -1 \text{.}\) Also, if a matrix is \(1 \times 1 \text{,}\) then it is automatically diagonal, so you cannot look at \(1 \times 1 \) matrices for inspiration for this problem.
