Subsection 4.4.4 Special shapes
Homework 4.4.4.1.
Let \(A =
\left( \begin{array}{r}
4
\end{array}
\right)\) and \(B =
\left( \begin{array}{r}
3
\end{array}
\right)
\text{.}\) Then \(A B = \)
Solution \(\left( \begin{array}{r} 12 \end{array}\right) \) or \(12 \text{.}\)
Homework 4.4.4.2.
Let \(A =
\left( \begin{array}{r}
1 \\
-3 \\
2
\end{array}
\right)\) and \(B =
\left( \begin{array}{r}
4
\end{array}
\right)
\text{.}\) Then \(A B = \)
Solution
\begin{equation*}
A B =
\left( \begin{array}{r}
1 \\
-3 \\
2
\end{array}
\right)
\left( \begin{array}{r}
4
\end{array}
\right)
=
4
\left( \begin{array}{r}
1 \\
-3 \\
2
\end{array}
\right)
=
\left( \begin{array}{c}
4 \times 1 \\
4 \times (-3) \\
4 \times 2
\end{array}
\right)
=
\left( \begin{array}{r}
4 \\
-12 \\
8
\end{array}
\right).
\end{equation*}
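This product can also be checked numerically. The following sketch uses Python/NumPy rather than the course's Matlab (an illustration only; the variable names mirror the exercise):

```python
import numpy as np

# A is a 3x1 matrix (a column vector), B is a 1x1 matrix.
A = np.array([[1], [-3], [2]])
B = np.array([[4]])

# The matrix-matrix product scales each entry of A by 4.
C = A @ B
```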
Homework 4.4.4.3.
Let \(A =
\left( \begin{array}{r}
4
\end{array}
\right)\) and \(B =
\left( \begin{array}{r r r}
1 \amp -3 \amp 2
\end{array}
\right)
\text{.}\) Then \(A B = \)
Solution
\begin{equation*}
A B =
\left( \begin{array}{r}
4
\end{array}
\right)
\left( \begin{array}{r r r}
1 \amp -3 \amp 2
\end{array}
\right)
=
\left( \begin{array}{r r r}
4 \cdot 1 \amp 4 \cdot (-3) \amp 4 \cdot 2
\end{array}
\right)
=
\left( \begin{array}{r r r}
4 \amp -12 \amp 8
\end{array}
\right).
\end{equation*}
Homework 4.4.4.4.
Let \(A =
\left( \begin{array}{r r r}
1 \amp -3 \amp 2
\end{array}
\right)\) and \(B =
\left( \begin{array}{r}
2 \\
-1 \\
0
\end{array}
\right)
\text{.}\) Then \(A B = \)
Solution
\begin{equation*}
A B =
\left( \begin{array}{r r r}
1 \amp -3 \amp 2
\end{array}
\right)
\left( \begin{array}{r}
2 \\
-1 \\
0
\end{array}
\right)
=
1 \cdot 2 + (-3) \cdot (-1) + 2 \cdot 0 = 2 + 3 + 0 = 5.
\end{equation*}
or
\begin{equation*}
A B =
\left( \begin{array}{r r r}
1 \amp -3 \amp 2
\end{array}
\right)
\left( \begin{array}{r}
2 \\
-1 \\
0
\end{array}
\right)
=
\left( 1 \cdot 2 + (-3) \cdot (-1) + 2 \cdot 0 \right)
=
\left( 2 + 3 + 0 \right)
=
\left( 5 \right).
\end{equation*}
Homework 4.4.4.5.
Let \(A =
\left( \begin{array}{r}
1 \\
-3 \\
2
\end{array}
\right)\) and \(B =
\left( \begin{array}{r r}
-1 \amp -2
\end{array}
\right)
\text{.}\) Then \(A B = \)
Solution
\begin{equation*}
A B =
\left( \begin{array}{r}
1 \\
-3 \\
2
\end{array}
\right)
\begin{array}{r}
\left( \begin{array}{r r}
-1 \amp -2
\end{array}
\right)
\\
\\
\end{array}
=
\left( \begin{array}{ c c }
1 \cdot (-1) \amp 1 \cdot (-2) \\
(-3) \cdot (-1) \amp (-3) \cdot (-2) \\
2 \cdot (-1) \amp 2 \cdot (-2)
\end{array}
\right)
=
\left( \begin{array}{ r r }
-1 \amp -2 \\
3 \amp 6 \\
-2 \amp -4
\end{array}
\right).
\end{equation*}
Homework 4.4.4.6.
Let \(a =
\left( \begin{array}{r}
1 \\
-3 \\
2
\end{array}
\right)\) and \(b^T =
\left( \begin{array}{r r}
-1 \amp -2
\end{array}
\right) \) and \(C = a b^T \text{.}\) Partition \(C \) by columns and by rows:
\begin{equation*}
C = \left( \begin{array}{c | c}
c_0 \amp c_1
\end{array}
\right)
\quad
\mbox{and}
\quad
C = \left( \begin{array}{c}
\widetilde c_0^T \\
\widetilde c_1^T
\\
\widetilde c_2^T
\end{array}
\right)
\end{equation*}
Then
\(c_0 =
(-1)
\left( \begin{array}{r}
1 \\
-3 \\
2
\end{array}
\right)
=
\left( \begin{array}{r}
(-1) \!\! \times \!\!
~~~(1)
\\
(-1) \!\! \times \!\!
(-3)
\\
(-1) \!\! \times \!\!
~~~(2)
\end{array}
\right)\) \mbox{~}\hfill True/False
\(c_1 =
(-2)
\left( \begin{array}{r}
1 \\
-3 \\
2
\end{array}
\right)
=
\left( \begin{array}{r}
(-2) \!\! \times \!\!
~~~(1)
\\
(-2) \!\! \times \!\!
(-3)
\\
(-2) \!\! \times \!\!
~~~ (2)
\end{array}
\right)\) \mbox{~}\hfill True/False
TRUE/FALSE: \(C =
\left( \begin{array}{r | r}
(-1) \!\! \times \!\!
~~~(1)
\amp
(-2) \!\! \times \!\!
~~~(1)
\\
(-1) \!\! \times \!\!
(-3)
\amp
(-2) \!\! \times \!\!
(-3)
\\
(-1) \!\! \times \!\!
~~~(2)
\amp
(-2) \!\! \times \!\!
~~~ (2)
\end{array}
\right)\)
TRUE/FALSE: \(\widetilde c_0^T =
~~(1)
\left( \begin{array}{r r}
-1 \amp -2
\end{array}
\right) =
\left( \begin{array}{r r}
~~(1) \!\! \times \!\! (-1) \amp ~~(1) \!\! \times \!\! (-2)
\end{array}
\right) \)
TRUE/FALSE: \(\widetilde c_1^T =
(-3)
\left( \begin{array}{r r}
-1 \amp -2
\end{array}
\right) =
\left( \begin{array}{r r}
(-3) \!\! \times \!\! (-1) \amp (-3) \!\! \times \!\! (-2)
\end{array}
\right) \)
TRUE/FALSE: \(\widetilde c_2^T =
~~(2)
\left( \begin{array}{r r}
-1 \amp -2
\end{array}
\right) =
\left( \begin{array}{r r}
~~(2) \!\! \times \!\! (-1) \amp ~~(2) \!\! \times \!\! (-2)
\end{array}
\right) \)
TRUE/FALSE: \(C =
\left( \begin{array}{r r}
(-1) \!\! \times \!\!
~~~(1)
\amp
(-2) \!\! \times \!\!
~~~(1)
\\ \hline
(-1) \!\! \times \!\!
(-3)
\amp
(-2) \!\! \times \!\!
(-3)
\\ \hline
(-1) \!\! \times \!\!
~~~(2)
\amp
(-2) \!\! \times \!\!
~~~ (2)
\end{array}
\right)\)
Solution The important thing here is to recognize that if you compute the first two results, then the third result comes for free. If you compute results 4-6, then the last result comes for free. Also, notice that the columns of \(C \) are just multiples of \(a \) while the rows of \(C \) are just multiples of \(b^T \text{.}\)
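The structure noted in the solution is easy to confirm numerically. In this illustrative NumPy sketch, each column of \(C = a b^T \) is a multiple of \(a \) and each row is a multiple of \(b^T \):

```python
import numpy as np

a = np.array([[1], [-3], [2]])   # 3x1 column vector
bT = np.array([[-1, -2]])        # 1x2 row vector

C = a @ bT                       # 3x2 outer product

# Column j of C equals beta_j times a; row i of C equals alpha_i times b^T.
col0 = bT[0, 0] * a              # (-1) * a
row1 = a[1, 0] * bT              # (-3) * b^T
```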
Homework 4.4.4.7.
Fill in the boxes:
\begin{equation*}
\left(
\begin{array}{c}
\fbox{\phantom{00}} \\
\fbox{\phantom{00}} \\
\fbox{\phantom{00}} \\
\fbox{\phantom{00}}
\end{array}
\right)
\left( \begin{array}[t]{r r r}
2 \amp
-1 \amp
3
\end{array}
\right)
=
\left( \begin{array}{r r r}
4 \amp \fbox{\phantom{00}} \amp \fbox{\phantom{00}}
\\
-2 \amp \fbox{\phantom{00}} \amp \fbox{\phantom{00}}
\\
2 \amp \fbox{\phantom{00}} \amp \fbox{\phantom{00}}
\\
6 \amp \fbox{\phantom{00}} \amp \fbox{\phantom{00}}
\end{array}
\right)
\end{equation*}
Solution
\begin{equation*}
\left(
\begin{array}{r}
2 \\
-1 \\
1 \\
3
\end{array}
\right)
\left( \begin{array}[t]{r r r}
2 \amp
-1 \amp
3
\end{array}
\right)
=
\left( \begin{array}{r r r}
4 \amp -2 \amp 6 \\
-2 \amp 1 \amp -3
\\
2 \amp -1 \amp 3
\\
6 \amp -3 \amp 9
\end{array}
\right)
\end{equation*}
Homework 4.4.4.8.
Fill in the boxes:
\begin{equation*}
\left(
\begin{array}{r}
2 \\
-1 \\
1 \\
3
\end{array}
\right)
\left( \begin{array}[t]{r r r}
\fbox{\phantom{00}} \amp
\fbox{\phantom{00}} \amp
\fbox{\phantom{00}}
\end{array}
\right)
=
\left( \begin{array}{r r r}
4 \amp -2 \amp 6 \\
\fbox{\phantom{00}} \amp \fbox{\phantom{00}} \amp \fbox{\phantom{00}}
\\
\fbox{\phantom{00}} \amp \fbox{\phantom{00}} \amp \fbox{\phantom{00}}
\\
\fbox{\phantom{00}} \amp \fbox{\phantom{00}} \amp \fbox{\phantom{00}}
\end{array}
\right)
\end{equation*}
Solution
\begin{equation*}
\left(
\begin{array}{r}
2 \\
-1 \\
1 \\
3
\end{array}
\right)
\left( \begin{array}[t]{r r r}
2 \amp
-1 \amp
3
\end{array}
\right)
=
\left( \begin{array}{r r r}
4 \amp -2 \amp 6 \\
-2 \amp 1 \amp -3
\\
2 \amp -1 \amp 3
\\
6 \amp -3 \amp 9
\end{array}
\right)
\end{equation*}
Homework 4.4.4.9.
Let \(A =
\left( \begin{array}{r r r}
0 \amp 1 \amp 0
\end{array}
\right)\) and \(B =
\left( \begin{array}{r r r}
1 \amp -2 \amp 2 \\
4 \amp 2 \amp 0 \\
1 \amp 2 \amp 3
\end{array}
\right)
\text{.}\) Then \(A B = \)
Solution \(\left( \begin{array}{r r r}
4 \amp 2 \amp 0
\end{array}
\right)\)
Homework 4.4.4.10.
Let \(e_i \in \mathbb{R}^m \) equal the \(i\)th standard basis vector and \(A \in \mathbb{R}^{m \times n} \text{.}\) ALWAYS/SOMETIMES/NEVER: \(e_i^T A = \row{a}_i^T \text{,}\) the \(i \)th row of \(A \text{.}\)
Answer ALWAYS.
Solution
\begin{equation*}
\begin{array}{l}
\left( \begin{array}{c c c c c c c}
0 \amp \cdots \amp 0 \amp 1 \amp 0 \amp \cdots \amp 0
\end{array}
\right)
\left( \begin{array}{c c c c}
\alpha_{0,0} \amp \alpha_{0,1} \amp\cdots \amp \alpha_{0,n-1} \\
\vdots \amp \vdots \amp \amp \vdots \\
\alpha_{i-1,0} \amp \alpha_{i-1,1} \amp\cdots \amp \alpha_{i-1,n-1} \\
\alpha_{i,0} \amp \alpha_{i,1} \amp\cdots \amp \alpha_{i,n-1} \\
\alpha_{i+1,0} \amp \alpha_{i+1,1} \amp\cdots \amp \alpha_{i+1,n-1} \\
\vdots \amp \vdots \amp \amp \vdots \\
\alpha_{m-1,0} \amp \alpha_{m-1,1} \amp\cdots \amp \alpha_{m-1,n-1}
\end{array}
\right)
\\
~~~~ =
\left( \begin{array}{c c c c}
\alpha_{i,0} \amp \alpha_{i,1} \amp\cdots \amp \alpha_{i,n-1}
\end{array}
\right) .
\end{array}
\end{equation*}
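A small NumPy sketch (with a concrete \(3 \times 3 \) matrix chosen purely for illustration) makes the same point:

```python
import numpy as np

A = np.array([[1., -2., 2.],
              [4.,  2., 0.],
              [1.,  2., 3.]])

i = 1
e_i = np.zeros((3, 1))
e_i[i, 0] = 1.0        # the i-th standard basis vector

# Multiplying by e_i^T on the left extracts row i of A.
row_i = e_i.T @ A
```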
Homework 4.4.4.11.
Get as much practice as you want with the Matlab script in LAFF-2.0xM/Programming/Week04/PracticeGemm.m
We now show that if one treats scalars, column vectors, and row vectors as special cases of matrices, then many (all?) of the operations we encountered previously simply become special cases of matrix-matrix multiplication. In the discussion below, consider \(C = A B \) where \(C \in \mathbb{R}^{m \times n} \text{,}\) \(A \in
\mathbb{R}^{m \times k} \text{,}\) and \(B \in \mathbb{R}^{k \times n} \text{.}\)
Subsubsection 4.4.4.1 \(m = n = k = 1 \) (scalar multiplication)
In this case, all three matrices are actually scalars:
\begin{equation*}
\left( \begin{array}{c}
\gamma_{0,0}
\end{array}
\right)
=
\left( \begin{array}{c}
\alpha_{0,0}
\end{array}
\right)
\left( \begin{array}{c}
\beta_{0,0}
\end{array}
\right)
=
\left( \begin{array}{c}
\alpha_{0,0} \beta_{0,0}
\end{array}
\right)
\end{equation*}
so that matrix-matrix multiplication becomes scalar multiplication.
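As a minimal NumPy sketch (illustrative values), the product of two \(1 \times 1 \) matrices is just the product of the two scalars they hold:

```python
import numpy as np

A = np.array([[3.0]])   # 1x1 matrix
B = np.array([[4.0]])   # 1x1 matrix

C = A @ B               # 1x1 matrix holding the scalar product 3.0 * 4.0
```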
Subsubsection 4.4.4.2 \(n = 1, k = 1 \) (vector times scalar)
Now the matrices look like
\begin{equation*}
\begin{array}{rcl}
\left( \begin{array}{c}
\gamma_{0,0} \\
\gamma_{1,0} \\
\vdots \\
\gamma_{m-1,0}
\end{array}
\right)
=
\left( \begin{array}{c}
\alpha_{0,0} \\
\alpha_{1,0} \\
\vdots \\
\alpha_{m-1,0}
\end{array}
\right)
\left( \begin{array}{c}
\beta_{0,0}
\end{array}
\right)
=
\left( \begin{array}{c}
\alpha_{0,0}
\beta_{0,0} \\
\alpha_{1,0}
\beta_{0,0} \\
\vdots \\
\alpha_{m-1,0}
\beta_{0,0}
\end{array}
\right)
=
\left( \begin{array}{c}
\beta_{0,0}
\alpha_{0,0} \\
\beta_{0,0}
\alpha_{1,0} \\
\vdots \\
\beta_{0,0}
\alpha_{m-1,0}
\end{array}
\right)
=
\beta_{0,0}
\left( \begin{array}{c}
\alpha_{0,0} \\
\alpha_{1,0} \\
\vdots \\
\alpha_{m-1,0}
\end{array}
\right) .
\end{array}
\end{equation*}
In other words, \(C \) and \(A \) are vectors, \(B \) is a scalar, and the matrix-matrix multiplication becomes scaling of a vector.
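Numerically, this says that multiplying an \(m \times 1 \) matrix by a \(1 \times 1 \) matrix is the same as scaling the vector. A NumPy sketch with randomly generated illustrative data:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((5, 1))   # m x 1 matrix (a column vector)
beta = np.array([[2.5]])          # 1 x 1 matrix (a scalar)

C = a @ beta                      # matrix-matrix product
scaled = 2.5 * a                  # ordinary scaling of the vector
```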
Subsubsection 4.4.4.3 \(m = 1, k = 1 \) (scalar times vector)
Now the matrices look like
\begin{equation*}
\begin{array}{rcl}
\left( \begin{array}{c c c c}
\gamma_{0,0} \amp
\gamma_{0,1} \amp
\cdots \amp
\gamma_{0,n-1}
\end{array}
\right)
\amp=\amp
\left( \begin{array}{c}
\alpha_{0,0} \\
\end{array}
\right)
\left( \begin{array}{c c c c}
\beta_{0,0} \amp
\beta_{0,1} \amp
\cdots \amp
\beta_{0,n-1}
\end{array}
\right) \\
\amp=\amp
\alpha_{0,0}
\left( \begin{array}{c c c c}
\beta_{0,0} \amp
\beta_{0,1} \amp
\cdots \amp
\beta_{0,n-1}
\end{array}
\right)\\
\amp = \amp
\left( \begin{array}{c c c c}
\alpha_{0,0}
\beta_{0,0} \amp
\alpha_{0,0}
\beta_{0,1} \amp
\cdots \amp
\alpha_{0,0}
\beta_{0,n-1}
\end{array}
\right).
\end{array}
\end{equation*}
In other words, \(C \) and \(B \) are just row vectors and \(A \) is a scalar. The vector \(C \) is computed by scaling the row vector \(B \) by the scalar \(A \text{.}\)
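The analogous NumPy sketch for this case (illustrative values):

```python
import numpy as np

alpha = np.array([[3.0]])             # 1 x 1 matrix (a scalar)
bT = np.array([[1.0, -2.0, 0.5]])     # 1 x n matrix (a row vector)

C = alpha @ bT                        # same as scaling the row vector by 3.0
```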
Subsubsection 4.4.4.4 \(m = 1, n = 1 \) (dot product)
The matrices look like
\begin{equation*}
\left( \begin{array}{c}
\gamma_{0,0}
\end{array}
\right)
=
\left( \begin{array}{c c c c}
\alpha_{0,0} \amp
\alpha_{0,1} \amp
\cdots \amp
\alpha_{0,k-1}
\end{array}
\right)
\left( \begin{array}{c}
\beta_{0,0} \\
\beta_{1,0} \\
\vdots \\
\beta_{k-1,0}
\end{array}
\right)
=
\sum_{p=0}^{k-1} \alpha_{0,p} \beta_{p,0}.
\end{equation*}
In other words, \(C \) is a scalar that is computed by taking the dot product of the one row that is \(A \) and the one column that is \(B \text{.}\)
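In NumPy this special case looks as follows (values borrowed from Homework 4.4.4.4, for illustration):

```python
import numpy as np

aT = np.array([[1.0, -3.0, 2.0]])      # 1 x k matrix (a row vector)
b = np.array([[2.0], [-1.0], [0.0]])   # k x 1 matrix (a column vector)

C = aT @ b                             # 1 x 1 matrix holding the dot product
dot = np.dot(aT.ravel(), b.ravel())    # the same value as a plain dot product
```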
Subsubsection 4.4.4.5 \(k = 1 \) (outer product)
\begin{equation*}
\begin{array}{rcl}
\left( \begin{array}{c c c c}
\gamma_{0,0} \amp \gamma_{0,1} \amp \cdots \amp \gamma_{0,n-1} \\
\gamma_{1,0} \amp \gamma_{1,1} \amp \cdots \amp \gamma_{1,n-1} \\
\vdots \amp \vdots \amp \ddots \amp \vdots \\
\gamma_{m-1,0} \amp \gamma_{m-1,1} \amp \cdots \amp \gamma_{m-1,n-1} \\
\end{array}
\right)
\amp=\amp
\left( \begin{array}{c}
\alpha_{0,0} \\
\alpha_{1,0} \\
\vdots \\
\alpha_{m-1,0}
\end{array}
\right)
\begin{array}{c}
\left( \begin{array}{c c c c}
\beta_{0,0} \amp
\beta_{0,1} \amp
\cdots \amp
\beta_{0,n-1}
\end{array}
\right) \\
\phantom{\beta_{0,n-1}}
\\
\phantom{\vdots}
\\
\phantom{\beta_{0,n-1}}
\end{array}
\\
\amp=\amp
\left( \begin{array}{c c c c}
\alpha_{0,0} \beta_{0,0} \amp \alpha_{0,0} \beta_{0,1} \amp \cdots \amp \alpha_{0,0} \beta_{0,n-1} \\
\alpha_{1,0} \beta_{0,0} \amp \alpha_{1,0} \beta_{0,1} \amp \cdots \amp \alpha_{1,0} \beta_{0,n-1} \\
\vdots \amp \vdots \amp \ddots \amp \vdots \\
\alpha_{m-1,0} \beta_{0,0} \amp \alpha_{m-1,0} \beta_{0,1} \amp \cdots \amp \alpha_{m-1,0} \beta_{0,n-1} \\
\end{array}
\right)
\end{array}
\end{equation*}
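A NumPy sketch of the outer product case (illustrative values); NumPy's `np.outer` computes exactly these entries:

```python
import numpy as np

a = np.array([[1.0], [-3.0], [2.0]])   # m x 1
bT = np.array([[-1.0, -2.0]])          # 1 x n

C = a @ bT                             # m x n matrix with entries alpha_i * beta_j
outer = np.outer(a, bT)                # NumPy's outer product; same result
```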
Subsubsection 4.4.4.6 \(n = 1 \) (matrix-vector product)
\begin{equation*}
\begin{array}{rcl}
\left( \begin{array}{c}
\gamma_{0,0} \\
\gamma_{1,0} \\
\vdots \\
\gamma_{m-1,0}
\end{array}
\right)
\amp = \amp
\left( \begin{array}{c c c c}
\alpha_{0,0} \amp \alpha_{0,1} \amp \cdots \amp \alpha_{0,k-1} \\
\alpha_{1,0} \amp \alpha_{1,1} \amp \cdots \amp \alpha_{1,k-1} \\
\vdots \amp \vdots \amp \ddots \amp \vdots \\
\alpha_{m-1,0} \amp \alpha_{m-1,1} \amp \cdots \amp \alpha_{m-1,k-1} \\
\end{array}
\right)
\left( \begin{array}{c}
\beta_{0,0} \\
\beta_{1,0} \\
\vdots \\
\beta_{k-1,0}
\end{array}
\right)
\end{array}
\end{equation*}
We have studied this special case in great detail. To emphasize how it relates to how matrix-matrix multiplication is computed, consider the following:
\begin{equation*}
\begin{array}{rcl}
\left( \begin{array}{c}
\gamma_{0,0} \\
\vdots \\ \hline
\multicolumn{1}{|c|}{\gamma_{i,0}} \\ \hline
\vdots \\
\gamma_{m-1,0}
\end{array}
\right)
\amp = \amp
\left( \begin{array}{c c c c}
\alpha_{0,0} \amp \alpha_{0,1} \amp \cdots \amp \alpha_{0,k-1} \\
\vdots \amp \vdots \amp \ddots \amp \vdots \\ \hline
\multicolumn{1}{|c}{\alpha_{i,0}} \amp \alpha_{i,1} \amp \cdots \amp
\multicolumn{1}{c|}{\alpha_{i,k-1}} \\ \hline
\vdots \amp \vdots \amp \ddots \amp \vdots \\
\alpha_{m-1,0} \amp \alpha_{m-1,1} \amp \cdots \amp \alpha_{m-1,k-1} \\
\end{array}
\right)
\left( \begin{array}{|c|} \hline
\beta_{0,0} \\
\beta_{1,0} \\
\vdots \\
\beta_{k-1,0} \\ \hline
\end{array}
\right)
\end{array}
\end{equation*}
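The highlighted row and vector can be mimicked in NumPy (random illustrative data): entry \(i \) of the result is the dot product of row \(i \) of \(A \) with the vector.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))   # m x k
x = rng.standard_normal((3, 1))   # k x 1

y = A @ x                         # matrix-vector product as a matrix-matrix product

# gamma_{i,0} is the dot product of row i of A with x.
i = 2
gamma_i = A[i, :] @ x[:, 0]
```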
Subsubsection 4.4.4.7 \(m = 1 \) (row vector-matrix product)
\begin{equation*}
\begin{array}{rcl}
\left( \begin{array}{c c c c}
\gamma_{0,0} \amp
\gamma_{0,1} \amp
\cdots \amp
\gamma_{0,n-1}
\end{array}
\right)
\amp = \amp
\left( \begin{array}{c c c c}
\alpha_{0,0} \amp
\alpha_{0,1} \amp
\cdots \amp
\alpha_{0,k-1}
\end{array}
\right)
\left( \begin{array}{c c c c}
\beta_{0,0} \amp \beta_{0,1} \amp \cdots \amp \beta_{0,n-1} \\
\beta_{1,0} \amp \beta_{1,1} \amp \cdots \amp \beta_{1,n-1} \\
\vdots \amp \vdots \amp \ddots \amp \vdots \\
\beta_{k-1,0} \amp \beta_{k-1,1} \amp \cdots \amp \beta_{k-1,n-1} \\
\end{array}
\right)
\end{array}
\end{equation*}
so that \(\gamma_{0,j} = \sum_{p=0}^{k-1} \alpha_{0,p} \beta_{p,j} \text{.}\) To emphasize how it relates to how matrix-matrix multiplication is computed, consider the following:
\begin{equation*}
\begin{array}{rcl}
\lefteqn{\left( \begin{array}{c c | c | c c} \cline{3-3}
\gamma_{0,0} \amp
\cdots \amp
\gamma_{0,j} \amp
\cdots \amp
\gamma_{0,n-1} \\ \cline{3-3}
\end{array}
\right) } \\
\amp = \amp
\left( \begin{array}{| c c c c c |} \hline
\alpha_{0,0} \amp
\alpha_{0,1} \amp
\cdots \amp
\alpha_{0,k-1} \\ \hline
\end{array}
\right)
\left( \begin{array}{c c | c | c c} \cline{3-3}
\beta_{0,0} \amp \cdots \amp \beta_{0,j} \amp\cdots \amp \beta_{0,n-1} \\
\beta_{1,0} \amp \cdots \amp \beta_{1,j} \amp\cdots \amp \beta_{1,n-1} \\
\vdots \amp \amp \vdots \amp \amp \vdots \\
\beta_{k-1,0} \amp \cdots \amp \beta_{k-1,j} \amp\cdots \amp \beta_{k-1,n-1} \\ \cline{3-3}
\end{array}
\right).
\end{array}
\end{equation*}