Lecture 26B - Understanding Determinants

These notes contain more details about the properties of determinants that were discussed in Lecture 26. The goal of these notes is to give students insight into why these properties work. Formal proofs are generally omitted.

Recall the definition of the determinant:

Definition. Given an \( n \times n \) matrix \( A \), the determinant of \( A \) is computed by the following process:

  1. Choose any row or column of \( A \)
  2. Multiply each entry of the chosen row/column by the cofactor corresponding to that entry
  3. Add up the results of these products
As a formula, for any \( 1 \le k \le n \) we can write \( \det A = \sum_{i=1}^n a_{ik} C_{ik} = \sum_{j=1}^n a_{kj} C_{kj} \), where the first sum is an expansion along column \( k \) and the second is an expansion along row \( k \).
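
For example, expanding along the third column of the matrix below requires computing only one cofactor, since the other two entries of that column are zero:
\[ \det \begin{bmatrix} 1 & 2 & 0 \\ 3 & 4 & 0 \\ 5 & 6 & 7 \end{bmatrix} = 0 \cdot C_{13} + 0 \cdot C_{23} + 7 \cdot C_{33} = 7 \det \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = 7(1 \cdot 4 - 2 \cdot 3) = -14. \]
Choosing a row or column with many zero entries keeps the computation short.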

From this definition, we can see that if a matrix has a row or column of all zeroes, then the determinant of that matrix will equal zero: expanding along that row or column, every term in the sum is zero.

Determinant of an Elementary Matrix

Many of the properties of determinants rely on understanding the determinant of an elementary matrix. Recall from Lecture 24 that an elementary matrix is the result of applying a single row operation to the identity matrix \( I_n \). We consider each row operation individually.

Scaling. Suppose that \( E \) is the elementary matrix resulting from scaling a row by a factor of \( r\), for some nonzero scalar \( r\). What is \( \det E \)? Since \( E \) is triangular, its determinant is equal to the product of its diagonal entries. All these entries are 1, except for a single entry that equals \( r\). Thus, \( \det E = r \).
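
For instance, the \( 3 \times 3 \) elementary matrix that scales the second row by \( r \) satisfies
\[ \det \begin{bmatrix} 1 & 0 & 0 \\ 0 & r & 0 \\ 0 & 0 & 1 \end{bmatrix} = 1 \cdot r \cdot 1 = r. \]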

Replacement. Suppose that \( E \) is the elementary matrix resulting from replacing a row by the sum of itself and a scalar multiple of another row. What is \( \det E \)? In this case, \( E \) is either upper triangular or lower triangular, depending on whether the row being replaced is above or below the other row. The diagonal entries of \( E \) are all 1, so \( \det E = 1 \).
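
For instance, the \( 3 \times 3 \) elementary matrix that adds \( 5 \) times the first row to the third row is lower triangular, so
\[ \det \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 5 & 0 & 1 \end{bmatrix} = 1 \cdot 1 \cdot 1 = 1. \]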

Swapping. Suppose that \( E \) is the elementary matrix resulting from swapping two rows. What is \( \det E \)? We compute the determinant of \( E \) by expanding along a row that did not get swapped, say row \( i \). The only nonzero entry of this row is a 1 in column \( i \). So, \( \det E = \det E' \), where \( E' \) is the result of deleting row \( i \) and column \( i \) from \( E \). The matrix \( E' \) is another "swapping" elementary matrix that is one size smaller than \( E \). We continue expanding along non-swapped rows in this way until we reach the \( 2 \times 2 \) matrix \( \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \). Thus, \( \det E = \det \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = -1 \).
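
For instance, if \( E \) is the \( 3 \times 3 \) elementary matrix that swaps the first two rows, expanding along the third row gives
\[ \det \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} = 1 \cdot \det \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = -1. \]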

Using Elementary Matrices

If \( A \) is any \( n \times n\) matrix, it is possible to prove the following:

  1. Scaling a row of \( A \) by a nonzero scalar \( r \) multiplies \( \det A \) by \( r \)
  2. Replacing a row of \( A \) by the sum of itself and a scalar multiple of another row does not change \( \det A \)
  3. Swapping two rows of \( A \) multiplies \( \det A \) by \( -1 \)

It follows from this that if \( E \) is an elementary matrix, then \( \det (EA) = \det E \cdot \det A \).
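
As a quick check of this in the swapping case, if \( E = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \) and \( A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \), then
\[ \det (EA) = \det \begin{bmatrix} c & d \\ a & b \end{bmatrix} = cb - da = -(ad - bc) = \det E \cdot \det A. \]

From this, we can prove the following theorem.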

Theorem (Determinant of an Invertible Matrix). Let \( A \) be an \( n\times n\) matrix. Then, \( A \) is invertible if and only if \( \det A \ne 0 \).

Proof. First, suppose that \( A \) is not invertible. By the Invertible Matrix Theorem, an echelon form of \( A \) has a row of zeroes, and we saw above that such a matrix has determinant zero. So, for some sequence of elementary matrices \( E_1, E_2, \ldots, E_k \) we have \( \det (E_k E_{k-1}\cdots E_2 E_1 A) = 0 \). Applying \( \det (EA) = \det E \cdot \det A \) repeatedly, this means that \[ \det A = \frac 0 {\det E_k \cdot \det E_{k-1} \cdots \det E_2 \cdot \det E_1} = 0. \]

Now suppose that \( A \) is invertible. By the Invertible Matrix Theorem, \( A \) is row-equivalent to \( I_n \). So, for some sequence of elementary matrices \( E_1, E_2, \ldots, E_k \) we have \( \det (E_k E_{k-1}\cdots E_2 E_1 A) = 1 \). This means that \[ \det A = \frac 1 {\det E_k \cdot \det E_{k-1} \cdots \det E_2 \cdot \det E_1} \ne 0.\ \Box \]
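
For example, \( \det \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = 1 \cdot 4 - 2 \cdot 3 = -2 \ne 0 \), and this matrix is indeed invertible. On the other hand, \( \det \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix} = 1 \cdot 4 - 2 \cdot 2 = 0 \), and this matrix is not invertible, since its second row is a multiple of its first row.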

Determinant of a Matrix Product

We saw earlier that, if \( E \) is an elementary matrix, then \( \det (EA) = \det E \cdot \det A \). We can now prove the following, more general fact:

Theorem (Determinant of a Matrix Product). If \( A \) and \( B \) are \( n \times n \) matrices, then \( \det (AB) = \det A \cdot \det B \).
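
For example, if \( A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \) and \( B = \begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix} \), then
\[ \det (AB) = \det \begin{bmatrix} 2 & 3 \\ 4 & 7 \end{bmatrix} = 14 - 12 = 2 = (-2)(-1) = \det A \cdot \det B. \]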

Before we can prove this theorem, we need a "lemma," which is a minor theorem that helps us prove a bigger theorem.

Lemma. Let \( A \) and \( B \) be \( n\times n\) matrices. If \( A \) is singular, then \( AB \) is singular.

Proof of the Lemma. Let \( A \) be singular and assume that \( AB \) is not singular. We consider two cases: either \( B \) is singular, or \( B \) is invertible.

If \( B \) is singular, then by the Invertible Matrix Theorem there exists a nonzero vector \( \mathbf u \in \mathbb R^n \) for which \( B \mathbf u = \mathbf 0 \). Then \( (AB)\mathbf u = A(B\mathbf u) = A\mathbf 0 = \mathbf 0 \), so \( AB \) is singular. This is a contradiction.

If \( B \) is invertible, then \( A = (AB)(B^{-1}) \) is a product of two invertible matrices, which is invertible. This is a contradiction.

Thus, in either case we reach a contradiction, so \( AB \) cannot be invertible. Therefore, \( AB \) is singular. \( \Box \)
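
For example, with
\[ A = \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}, \quad AB = \begin{bmatrix} 3 & 2 \\ 6 & 4 \end{bmatrix}, \]
the matrix \( A \) is singular, and \( AB \) is also singular: its second row is twice its first row.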

Proof of the Determinant of a Matrix Product Theorem. If \( A \) is singular, then by the lemma \( AB \) is also singular. By the Determinant of an Invertible Matrix Theorem we have \( \det (AB) = 0 \) and \( \det A \cdot \det B = 0 \cdot \det B = 0 \), so the two sides are equal.

If \( A \) is invertible, then as we did in Lecture 24 we can write \( A^{-1} = E_k E_{k-1} \cdots E_2 E_1 \), where the \( E_i \) represent the row operations to reduce \( A \) to \( I_n \). Then \( A = E_1^{-1} E_2^{-1} \cdots E_k^{-1} \), and note that each \( E_i^{-1} \) is also an elementary matrix (representing the reverse row operation from \( E_i \)).
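
For instance, if \( E_i \) adds \( c \) times the first row to the second row, then \( E_i^{-1} \) subtracts \( c \) times the first row from the second row:
\[ \begin{bmatrix} 1 & 0 \\ c & 1 \end{bmatrix}^{-1} = \begin{bmatrix} 1 & 0 \\ -c & 1 \end{bmatrix}. \]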

Now, \[ \begin{eqnarray*} \det (AB) & = & \det (E_1^{-1} E_2^{-1} \cdots E_k^{-1} B) \\ & = & \det E_1^{-1} \cdot \det E_2^{-1} \cdots \det E_k^{-1} \cdot \det B \\ & = & \det (E_1^{-1} E_2^{-1} \cdots E_k^{-1}) \cdot \det B \\ & = & \det A \cdot \det B. \ \Box \end{eqnarray*} \]
