Now that we’ve talked about vector spaces, the natural next step is: how do we actually move one vector into another? Sure, we can add and scale them, but writing something like \(w = c_1 v_1 + c_2 v_2\) over and over again is messy. What if we wanted an efficient way to say “take every vector and combine it with these same coefficients”? That’s basically why matrices exist. They’re a compact way of encoding those transformations. One caveat up front: unlike numbers, matrices don’t usually commute, so \(AB \neq BA\) in general.
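To see non-commutativity concretely, here’s a quick check with two shear matrices (numpy is my choice here, not something the post itself uses):

```python
import numpy as np

# Two small shears: A shears horizontally, B shears vertically.
A = np.array([[1, 1],
              [0, 1]])
B = np.array([[1, 0],
              [1, 1]])

AB = A @ B
BA = B @ A
print(AB)  # [[2 1]
           #  [1 1]]
print(BA)  # [[1 1]
           #  [1 2]]
```

Shearing then scaling is genuinely a different transformation from scaling then shearing, and the two product matrices record that difference.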
Linear Maps
A matrix is really just a concrete way to describe a linear map. A map \(\varphi: V \to W\) between vector spaces is linear if it respects two simple rules: it preserves addition, \(\varphi(u + v) = \varphi(u) + \varphi(v)\), and it preserves scaling, \(\varphi(cv) = c\,\varphi(v)\).
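Both rules are easy to verify numerically for the map \(v \mapsto Av\). A quick sketch (numpy assumed, random matrix chosen just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))  # any matrix defines a linear map v -> A v
u = rng.standard_normal(2)
v = rng.standard_normal(2)
c = 3.0

# Rule 1: the map preserves addition.
additive = np.allclose(A @ (u + v), A @ u + A @ v)
# Rule 2: the map preserves scaling.
homogeneous = np.allclose(A @ (c * u), c * (A @ u))
print(additive, homogeneous)  # True True
```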
Matrix Times a Vector
Take a \(2\times 2\) matrix \(A\) with columns \(a_1, a_2\) and a vector \(v = (v_1, v_2) \in \mathbb{R}^2\). Multiplying gives \(Av = v_1 a_1 + v_2 a_2\).
In words: the new vector is a combination of the columns of \(A\), weighted by the components of \(v\). So when you multiply \(A\) with every vector in \(\mathbb{R}^2\), what you’re really doing is sending the standard basis vectors to the columns of \(A\), and every other vector follows along by linearity.
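The “weighted combination of columns” reading is easy to check directly (numpy assumed, numbers picked arbitrarily):

```python
import numpy as np

A = np.array([[2, 1],
              [0, 3]])
v = np.array([4, 5])

# A v is the columns of A weighted by the components of v.
combo = v[0] * A[:, 0] + v[1] * A[:, 1]
print(A @ v)   # [13 15]
print(combo)   # [13 15]
```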
Matrix Multiplication
So what about multiplying two matrices together? It’s just composition: \((AB)v = A(Bv)\). First transform with \(B\), then with \(A\)—matrix multiplication compresses both steps into one matrix. Note the order: the matrix closest to the vector acts first.
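Here’s a sketch of composition with a rotation and a scaling (numpy assumed; the specific matrices are my own example):

```python
import numpy as np

A = np.array([[0, -1],
              [1,  0]])   # rotate 90 degrees counterclockwise
B = np.array([[2,  0],
              [0,  2]])   # scale everything by 2
v = np.array([1, 0])

# Applying the maps one after the other...
step_by_step = A @ (B @ v)
# ...matches multiplying by the single composed matrix A B.
one_matrix = (A @ B) @ v
print(step_by_step, one_matrix)  # [0 2] [0 2]
```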
Nullspace
The nullspace of a matrix \(A\) is the set of all vectors that get sent straight to zero when multiplied by \(A\): \(N(A) = \{v : Av = 0\}\).
It’s always a subspace—if \(Av = 0\) and \(Aw = 0\), then \(A(v + w) = 0\) and \(A(cv) = 0\) too—and it’s exactly the kernel of the map if you’re coming from an algebra background.
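A minimal sketch, assuming numpy and a matrix I chose to have a visible nullspace (its second column is twice its first):

```python
import numpy as np

# The columns are dependent, so A collapses a whole line to zero.
A = np.array([[1, 2],
              [2, 4]])
v = np.array([2, -1])   # one nonzero vector in the nullspace

print(A @ v)            # [0 0]
print(A @ (5 * v))      # [0 0] -- scalar multiples stay in the nullspace
```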
Column Space
The column space of \(A\) is just all linear combinations of its columns. Another way of saying it: it’s the span of the columns. Equivalently, it’s exactly the set of vectors \(b\) for which \(Ax = b\) has a solution.
Determinant
The determinant tells you the volume-scaling factor of the transformation. In 2D, it’s the signed area of the parallelogram formed by the columns; in higher dimensions, it’s the signed volume of the parallelepiped they span. A negative sign means the transformation flips orientation.
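A quick sketch of the area interpretation (numpy assumed, example matrices my own):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
# The unit square maps to a parallelogram with sides (2, 0) and (1, 3):
# base 2, height 3, area 6 -- and the determinant agrees.
print(np.linalg.det(A))   # 6.0 (up to floating point)

# A shear has determinant 1: it distorts shapes but preserves area.
S = np.array([[1.0, 5.0],
              [0.0, 1.0]])
print(np.linalg.det(S))   # 1.0 (up to floating point)
```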