Definition. Given an \( n\times n \) matrix \( A \), the multiplicative inverse of \( A \) is a matrix \( A^{-1} \) such that \( A\cdot A^{-1} = A^{-1}\cdot A = I_n \).
Unlike the one-sided "inverses" we learned about in the previous lecture, \( A^{-1} \) must work as an inverse when multiplying on the left and on the right. We also learned that a matrix must be square for such a "two-sided" inverse to exist, and even when a matrix is square, an inverse is not guaranteed to exist.
Definition. If a square matrix \( A \) does not have an inverse, it is called singular. A matrix that does have an inverse is called invertible or nonsingular.
Example 1. Let \( A = \begin{bmatrix} -1 & 1 & 0 \\ 2 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix} \) and \( B = \begin{bmatrix} 1 & 1 & -1 \\ 2 & 1 & -1 \\ -2 & -1 & 2 \end{bmatrix} \). Verify that \( A \) and \( B \) are inverses of one another.
We compute: \[ AB = \begin{bmatrix} -1 & 1 & 0 \\ 2 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 & -1 \\ 2 & 1 & -1 \\ -2 & -1 & 2 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \] \[ BA = \begin{bmatrix} 1 & 1 & -1 \\ 2 & 1 & -1 \\ -2 & -1 & 2 \end{bmatrix} \begin{bmatrix} -1 & 1 & 0 \\ 2 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \] Since both \( AB = I_3 \) and \( BA = I_3 \), we see that \( A \) and \( B \) are inverses of one another. \( \Box \)
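Computations like this are easy to check numerically. Below is a minimal sketch in Python with NumPy (an illustration, not part of the lecture's formal development) that verifies both products equal \( I_3 \):

```python
import numpy as np

# The matrices from Example 1.
A = np.array([[-1, 1, 0],
              [ 2, 0, 1],
              [ 0, 1, 1]])
B = np.array([[ 1,  1, -1],
              [ 2,  1, -1],
              [-2, -1,  2]])

I3 = np.eye(3)

# Both products must equal the 3x3 identity for A and B to be inverses.
print(np.array_equal(A @ B, I3))  # True
print(np.array_equal(B @ A, I3))  # True
```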
In the case of a \( 2\times 2 \) matrix \( A \), we have a convenient way of determining whether \( A \) is invertible:
Definition. Let \( A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \), where \( a,b,c,d \in \mathbb R \). The determinant of \( A \) is \( \det A = ad-bc \).
Theorem. If \( A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \) and \( \det A \ne 0 \), then \( A \) is invertible and \( A^{-1} = \frac 1 {\det A} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} \). (Conversely, if \( \det A = 0 \), then \( A \) is not invertible, so the determinant completely settles the question for \( 2\times 2 \) matrices.)
Proof. We multiply to verify that \( A\cdot A^{-1} = I_2 \) and \( A^{-1}\cdot A = I_2 \): \[ A\cdot A^{-1} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \cdot \frac 1 {ad-bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} = \frac 1 {ad-bc} \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} = \frac 1 {ad-bc} \begin{bmatrix} ad-bc & 0 \\ 0 & ad-bc \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \] \[ A^{-1}\cdot A = \frac 1 {ad-bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} \cdot \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \frac 1 {ad-bc} \begin{bmatrix} ad-bc & 0 \\ 0 & ad-bc \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.\ \Box \]
Example 2. Let \( A = \begin{bmatrix} 7 & -3 \\ 4 & -2 \end{bmatrix} \). Find \( A^{-1} \).
Using the formula, we have \( \det A = (7)(-2)-(-3)(4) = -2 \). So, \( A^{-1} = \frac 1{-2} \begin{bmatrix} -2 & 3 \\ -4 & 7 \end{bmatrix} = \begin{bmatrix} 1 & -3/2 \\ 2 & -7/2 \end{bmatrix} \). \( \Box \)
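The theorem's formula translates directly into code. Here is a short sketch, again in Python with NumPy; the helper name `inv2x2` is ours, chosen for illustration. It reproduces the answer from Example 2:

```python
import numpy as np

def inv2x2(A):
    """Invert a 2x2 matrix via the determinant formula from the theorem."""
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c           # det A = ad - bc
    if det == 0:
        raise ValueError("matrix is singular")
    # A^{-1} = (1/det) [[d, -b], [-c, a]]
    return (1 / det) * np.array([[d, -b],
                                 [-c, a]])

A = np.array([[7, -3],
              [4, -2]])
print(inv2x2(A))      # [[ 1.  -1.5]
                      #  [ 2.  -3.5]]
print(A @ inv2x2(A))  # the 2x2 identity
```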
Invertible matrices are convenient because, once we know \( A^{-1} \), we can easily solve matrix equations involving \( A \).
Theorem (Invertible Matrix Equations). If \( A \) is an invertible \( n\times n \) matrix, then for each \( \bbm b\in \mathbb R^n \) the equation \( A \bbm x = \bbm b \) has a unique solution \( \bbm x = A^{-1} \bbm b \).
Proof. We first show that \( \bbm x = A^{-1} \bbm b \) is a solution for \( A \bbm x = \bbm b \). Since \( A(A^{-1}\bbm b) = (AA^{-1})\bbm b = I_n \bbm b = \bbm b \), the vector \( A^{-1} \bbm b \) is a solution to \( A \bbm x = \bbm b \).
To show that this solution is unique, suppose that \( \bbm u \) is any solution of \( A\bbm x = \bbm b\), which means that \( A\bbm u = \bbm b\). Multiplying both sides of this equation by \( A^{-1} \) gives \( A^{-1}(A\bbm u) = A^{-1}\bbm b \). Since \( A^{-1}(A\bbm u) = (A^{-1} A)\bbm u = \bbm u \), this shows that \( \bbm u \) must equal \( A^{-1} \bbm b \). \( \Box \)
Notice that this proof required both \( AA^{-1} = I \) and \( A^{-1}A = I \), demonstrating the importance of our inverses being "two-sided."
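In code, the theorem says we may solve \( A\bbm x = \bbm b \) by computing \( A^{-1}\bbm b \). A minimal sketch, reusing the matrix from Example 2 with an arbitrary right-hand side (in numerical practice, `np.linalg.solve` is preferred to forming the inverse explicitly):

```python
import numpy as np

A = np.array([[7., -3.],
              [4., -2.]])
b = np.array([1., 5.])  # an arbitrary right-hand side

# The unique solution guaranteed by the theorem: x = A^{-1} b.
x = np.linalg.inv(A) @ b
print(x)                      # the solution vector
print(np.allclose(A @ x, b))  # True: x really solves Ax = b

# Equivalent, and numerically preferable: solve without forming A^{-1}.
print(np.allclose(np.linalg.solve(A, b), x))  # True
```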
We have the following algebraic properties:
Theorem (Properties of Inverses). If \( A \) and \( B \) are invertible \( n\times n \) matrices, then: (1) \( A^{-1} \) is invertible and \( (A^{-1})^{-1} = A \); (2) \( AB \) is invertible and \( (AB)^{-1} = B^{-1}A^{-1} \); (3) \( A^T \) is invertible and \( (A^T)^{-1} = (A^{-1})^T \).
You might be thinking that \( (AB)^{-1} \) should equal \( A^{-1}B^{-1} \), but this does not hold in general. Why not? When we multiply \( (AB)(A^{-1}B^{-1}) \), we cannot "cancel" \( A \) against \( A^{-1} \) because there is a \( B \) in the way, and we cannot rearrange the factors because matrix multiplication is not commutative. Reversing the order of the inverses avoids the problem: \( (AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AIA^{-1} = AA^{-1} = I \), and likewise \( (B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}IB = B^{-1}B = I \). Since \( B^{-1}A^{-1} \) works as a two-sided inverse of \( AB \), we conclude \( (AB)^{-1} = B^{-1}A^{-1} \).
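A quick numerical check of this property, using two invertible \( 2\times 2 \) matrices chosen purely for illustration:

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 5.]])   # det = -1, so invertible
B = np.array([[2., 1.],
              [1., 1.]])   # det =  1, so invertible

inv = np.linalg.inv

# The correct rule: (AB)^{-1} = B^{-1} A^{-1} ...
print(np.allclose(inv(A @ B), inv(B) @ inv(A)))  # True

# ... while the "expected" rule fails in general.
print(np.allclose(inv(A @ B), inv(A) @ inv(B)))  # False
```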