Linear Algebra


Determinants & Inverses

Understanding when transformations can be reversed.

Why Determinants Matter

The determinant of a matrix is a single number that captures something fundamental: does this transformation preserve the "full dimensionality" of space, or does it collapse some dimensions?

If the determinant is zero, the matrix is singular: it squashes space into a lower dimension, and there is no way to undo this. If the determinant is nonzero, the transformation is reversible, and we can compute the inverse.

The Central Question

Can we solve $Ax = b$ for any $b$? The answer is yes if and only if $\det(A) \neq 0$.

Geometric Interpretation

Geometrically, the determinant measures how a matrix scales areas (in 2D) or volumes (in 3D). For a 2×2 matrix with columns $v_1$ and $v_2$, $|\det(A)|$ is the area of the parallelogram spanned by those vectors.

$\det > 0$

Transformation preserves orientation. Vectors maintain their "handedness" (right-hand rule).

$\det < 0$

Transformation flips orientation. Like looking in a mirror.

$\det = 0$

Transformation collapses at least one dimension. The parallelogram has zero area (it is a line or a point). Information is lost irreversibly.

$$\det \begin{bmatrix} a & b \\ c & d \end{bmatrix} = ad - bc$$
The 2×2 determinant formula: product of the main diagonal minus product of the anti-diagonal.
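A minimal sketch of the formula in code, using a shear matrix as a worked example (the matrices here are illustrative, not taken from the demo below):

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]: ad - bc."""
    return a * d - b * c

# A shear [[1, 1], [0, 1]] slants the unit square but preserves its area.
print(det2(1, 1, 0, 1))   # 1
# Swapping the columns flips orientation, so the sign flips too.
print(det2(1, 1, 1, 0))   # -1
```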

Interactive: Determinant

Drag the sliders to change the matrix entries. Watch how the unit square transforms into a parallelogram, and see how the determinant tracks the signed area.

Example readout: det(A) = (1.5)(1.0) − (0.5)(0.2) = 1.40.
The red and green vectors are the transformed basis vectors, and the signed area of the shaded parallelogram is the determinant. If the vectors swap order (clockwise vs. counter-clockwise), the sign flips to negative.

Properties of Determinants

Determinants obey several useful rules that make computation efficient.

  1. Multiplicativity: $\det(AB) = \det(A)\det(B)$
  2. Transpose: $\det(A^T) = \det(A)$
  3. Inverse: $\det(A^{-1}) = 1/\det(A)$
  4. Row swap: Swapping two rows negates the determinant.
  5. Row scaling: Multiplying a row by kk multiplies det by kk.
  6. Triangular matrices: det = product of diagonal entries.
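All six rules can be spot-checked numerically. A quick sketch with NumPy on random matrices (a sanity check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# 1. Multiplicativity
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
# 2. Transpose
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))
# 3. Inverse
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A))
# 4. Swapping two rows negates the determinant
assert np.isclose(np.linalg.det(A[[1, 0, 2], :]), -np.linalg.det(A))
# 5. Scaling one row by k multiplies the determinant by k
A_scaled = A.copy()
A_scaled[0] *= 2.5
assert np.isclose(np.linalg.det(A_scaled), 2.5 * np.linalg.det(A))
# 6. Triangular matrix: determinant = product of diagonal entries
T = np.triu(A)
assert np.isclose(np.linalg.det(T), np.prod(np.diag(T)))
print("all six properties hold")
```

Random Gaussian matrices are invertible with probability 1, so rule 3 applies here without a special case.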

Computational Insight

Direct computation of a determinant via cofactor expansion is $O(n!)$. In practice, we use LU decomposition: factor $A = LU$, then $\det(A) = \det(L) \times \det(U)$, which is just the product of the pivots (times $-1$ for each row swap when pivoting is used). This is $O(n^3)$.
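The LU idea can be sketched without a factorization library: Gaussian elimination with partial pivoting, accumulating the product of the pivots and a sign flip for each row swap. A minimal, unoptimized version:

```python
import numpy as np

def det_via_elimination(A):
    """Determinant via Gaussian elimination with partial pivoting: O(n^3)."""
    U = np.array(A, dtype=float)
    n = len(U)
    det = 1.0
    for k in range(n):
        # Partial pivoting: bring the largest entry in column k to row k.
        p = k + np.argmax(np.abs(U[k:, k]))
        if U[p, k] == 0:
            return 0.0          # singular: column k is all zeros below row k
        if p != k:
            U[[k, p]] = U[[p, k]]
            det = -det          # each row swap flips the sign
        det *= U[k, k]          # determinant = product of pivots
        # Eliminate everything below the pivot.
        U[k+1:, k:] -= np.outer(U[k+1:, k] / U[k, k], U[k, k:])
    return det

A = np.array([[4.0, 3.0, 2.0], [6.0, 3.0, 1.0], [2.0, 1.0, 5.0]])
print(det_via_elimination(A))   # -28.0, matching np.linalg.det(A)
```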

Matrix Inverse

The inverse of a matrix $A$ is the matrix $A^{-1}$ such that $A A^{-1} = A^{-1} A = I$. It "undoes" the transformation performed by $A$.

$$A^{-1} = \frac{1}{\det(A)} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$
The 2×2 inverse formula for $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$: swap the diagonal entries, negate the anti-diagonal entries, divide by the determinant.
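A direct sketch of the formula, checked against the defining property $A A^{-1} = I$ (the sample matrix is illustrative):

```python
import numpy as np

def inv2(a, b, c, d):
    """Inverse of the 2x2 matrix [[a, b], [c, d]] via the adjugate formula."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix: no inverse exists")
    # Swap the diagonal, negate the anti-diagonal, divide by det.
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[1.5, 0.5], [0.5, 1.0]])
A_inv = inv2(1.5, 0.5, 0.5, 1.0)
print(A_inv)        # [[ 0.8 -0.4] [-0.4  1.2]]
print(A @ A_inv)    # the identity matrix, up to rounding
```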

Existence

$A^{-1}$ exists if and only if $\det(A) \neq 0$.

Uniqueness

When it exists, the inverse is unique.

Interactive: Inverse

Adjust the matrix and watch its inverse update in real time. Note: when det approaches 0, the inverse entries explode (become very large), a hallmark of numerical instability.

Example readout: with det(A) = 1.25, the inverse is $A^{-1} = \begin{bmatrix} 0.80 & -0.40 \\ -0.40 & 1.20 \end{bmatrix}$; the input $x = [1.0, 1.0]$ is mapped to the output $b = [2.0, 1.5]$, and $A^{-1}$ maps $b$ back to $x$.

Notice: $A^{-1}$ is calculated as $\frac{1}{\det(A)}\,\mathrm{adj}(A)$. When det approaches 0 (singular), the division creates massive numbers (instability), making it impossible to "undo" the transform reliably.
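The blow-up is easy to reproduce. In the sketch below, a hypothetical family of matrices has determinant exactly $\epsilon$, so the inverse entries grow like $1/\epsilon$:

```python
import numpy as np

# As det(A) -> 0, the entries of A^{-1} blow up like 1/det.
largest_entries = []
for eps in [1e-1, 1e-4, 1e-8]:
    A = np.array([[1.0, 1.0], [1.0, 1.0 + eps]])   # det(A) = eps
    largest = np.abs(np.linalg.inv(A)).max()
    largest_entries.append(largest)
    print(f"det ~ {eps:.0e}, largest entry of A^-1 ~ {largest:.1e}")
```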

Singular Matrices

A matrix with $\det = 0$ is called singular. Several equivalent statements all describe this condition:

  • det(A) = 0
  • A is not invertible
  • The columns of A are linearly dependent
  • The null space of A is nontrivial (more than just zero)
  • Ax = b has either no solution or infinitely many solutions
  • At least one eigenvalue is zero
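The equivalences above can be observed on a single concrete example, a matrix whose second column is twice its first:

```python
import numpy as np

# Rank-deficient by construction: column 2 = 2 * column 1.
A = np.array([[1.0, 2.0], [3.0, 6.0]])

print(np.linalg.det(A))            # 0 (up to rounding): not invertible
print(np.linalg.matrix_rank(A))    # 1, not 2: columns linearly dependent
print(np.linalg.eigvals(A))        # one eigenvalue is 0
# The null space is nontrivial: [2, -1] is sent to the zero vector.
print(A @ np.array([2.0, -1.0]))
```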

Numerical Reality

In practice, we rarely check if det = 0 exactly. Instead, we look at the condition number of A, which measures how close A is to being singular. High condition number means small changes in b cause large changes in x.
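A small comparison sketch: a well-conditioned diagonal matrix versus a nearly singular one (both matrices are illustrative):

```python
import numpy as np

# cond(A) measures how much A can amplify relative error in solving Ax = b.
well = np.array([[2.0, 0.0], [0.0, 1.0]])
near_singular = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-6]])

print(np.linalg.cond(well))            # 2: harmless
print(np.linalg.cond(near_singular))   # ~4e6: b barely moves, x jumps
```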

ML Applications

Covariance Matrices

In multivariate Gaussians, the det of the covariance matrix appears in the normalizing constant. A singular covariance means the distribution is degenerate (lives on a lower dimensional subspace).

Regularization

Ridge regression adds $\lambda I$ to $X^T X$ before inverting. This ensures the matrix is invertible even if $X$ has collinear features.
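A sketch of why this works, using a made-up design matrix with perfectly collinear columns:

```python
import numpy as np

# Two perfectly collinear features make X^T X singular...
X = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])   # column 2 = 2 * column 1
XtX = X.T @ X
print(np.linalg.det(XtX))   # ~0: the normal equations cannot be solved reliably

# ...but adding lambda * I shifts every eigenvalue up by lambda,
# guaranteeing invertibility (the ridge normal equations).
lam = 0.1
print(np.linalg.det(XtX + lam * np.eye(2)))   # clearly nonzero
```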

Jacobian Determinant

In normalizing flows (generative models), we transform a simple distribution through invertible layers. The log det of the Jacobian appears in the loss function.
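For a single linear layer this term is just $\log|\det W|$; in practice it is computed with `slogdet`, which returns the sign and log-magnitude separately to avoid overflow or underflow of the raw determinant. A sketch with a hypothetical 2×2 "layer":

```python
import numpy as np

# A hypothetical invertible linear layer z = W x: the change-of-variables
# formula contributes log|det W| to the log-density.
W = np.array([[2.0, 0.5], [0.0, 3.0]])   # det = 6
sign, logabsdet = np.linalg.slogdet(W)
print(sign, logabsdet)   # sign 1.0, logabsdet = log(6)
```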

PCA Feature Collapse

If data covariance is singular (det = 0), some principal components have zero variance. PCA then reduces dimensionality naturally.