The Basics

Vectors, linear combinations, span, and bare-bones matrix algebra. Scroll to begin.

Vectors

vector definition
$$ \vec{v} = \begin{bmatrix} v_1 \\ v_2\end{bmatrix}$$

In simple 2D, you can think of the components as the x-component and y-component...

$$ \vec{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n\end{bmatrix}$$

...but we can also go to as many dimensions as we want.
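If you'd like to poke at these ideas in code, here's a minimal sketch using NumPy (the library choice is mine, not part of this page): a vector is just an array of components, in 2D or in as many dimensions as we want.

```python
import numpy as np

# A 2D vector: components v1 and v2 (the x- and y-components).
v = np.array([3.0, 4.0])

# The same idea in 5 dimensions -- nothing changes but the length.
w = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

print(v.shape)  # (2,)
print(w.shape)  # (5,)
```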

Linear Combinations

If you have a collection of vectors, you call them a set. The only restriction here is no duplicates.

$$\{\vec{v},\vec{x},\vec{y},\vec{z}\}$$
set example

Another important term is a linear combination: a vector produced by scaling and/or adding vectors.

$$ c_1\vec{x} + c_2\vec{y} + ... + c_n\vec{z}$$
linear combination examples
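In code, a linear combination is exactly what it sounds like: scale each vector by its coefficient, then add everything up. A quick NumPy sketch (the specific vectors and coefficients are made up for illustration):

```python
import numpy as np

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

# Scale each vector by a coefficient, then add the results.
c1, c2 = 3.0, -2.0
combo = c1 * x + c2 * y

print(combo)  # [ 3. -2.]
```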

The span of a set is ALL possible linear combinations of its vectors. As an example, the span of a single vector is a line (you can scale it up and down to infinity).

span of vector x

So what about the span of two vectors? A plane? Not always. That depends on a little something called linear independence.

Linear Independence

These two vectors are linearly independent, so the span is a plane...

span of 2 linearly independent vectors

...but these two are not. They are linearly dependent because \(\vec{y}\) is a linear combination of \(\vec{x}\) (specifically, \(\vec{y}\) is a scaled copy of \(\vec{x}\), and vice versa).

span of 2 linearly dependent vectors

So what exactly is linear independence? Basically, it means no vector in the set is redundant: none of them can be built as a linear combination of the others.

There are a few ways to think about it:

The super formal definition is that a linear combination of the vectors equals zero if and only if ALL the coefficients are zero.

$$0= c_1\vec{x} + c_2\vec{y} + ... + c_n\vec{z}$$
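A practical way to check independence numerically is the rank of the matrix whose columns are your vectors (this rank-based check is a standard trick, not something this page spells out): the vectors are independent exactly when the rank equals the number of vectors.

```python
import numpy as np

# Stack the vectors as columns of a matrix.
independent = np.column_stack([[1.0, 0.0], [0.0, 1.0]])
dependent = np.column_stack([[1.0, 2.0], [2.0, 4.0]])  # second column is 2x the first

# Independent exactly when rank == number of vectors.
print(np.linalg.matrix_rank(independent))  # 2 -> independent
print(np.linalg.matrix_rank(dependent))    # 1 -> dependent
```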

Matrices

A matrix is a grid with \(m\) rows and \(n\) columns of elements (not components), labeled by their row, then their column.

For our sake, a matrix is a function that acts on a vector: input a vector, output a vector. The details of the calculation are not important here.

$$A_{m\times n} = \begin{bmatrix} a_{1,1} & a_{1,2} & ... & a_{1,n} \\ a_{2,1} & a_{2,2} & \\ \vdots & & \ddots \\ a_{m,1} &&& a_{m,n} \end{bmatrix} $$
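The "matrix as a function" idea is one line of code: multiply the matrix by a vector and you get a new vector out. A small sketch (the matrix here is an arbitrary example of mine):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
v = np.array([1.0, 1.0])

# Input a vector, output a vector.
print(A @ v)  # [3. 3.]
```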

Linearity

The important thing, though, is that matrices are linear operators! Hence the name linear algebra. That means vector addition and scalar multiplication can be moved through the operator:

$$ A(\vec{x}+\vec{y}) = A\vec{x} + A\vec{y} $$
$$ A(k\vec{x}) = kA\vec{x} $$

Yippee! This is a super useful property that doesn't happen very often. For example, the easy function \(f(x) = x^2\) is not linear, because \(f(2x) \neq 2f(x)\).
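You can verify both linearity properties numerically, and watch \(f(x) = x^2\) fail them (random test values are my own choice, not from the page):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
x = rng.standard_normal(2)
y = rng.standard_normal(2)
k = 5.0

# Addition and scalar multiplication move through the matrix...
assert np.allclose(A @ (x + y), A @ x + A @ y)
assert np.allclose(A @ (k * x), k * (A @ x))

# ...but not through f(x) = x**2: f(2x) != 2*f(x).
assert (2 * 3.0) ** 2 != 2 * (3.0 ** 2)
```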

And that's it for the basics!

Matrices as Transformations

Consider: a ball. Now consider: matrix warps ball. Let me explain.

Matrices stretch vectors in a certain direction. Let's look at some examples in 2D.

Working with One Vector

The identity matrix (\(0\) everywhere except \(1\)s on the diagonal) does nothing to the vector. Useless.

$$ \begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix} \begin{bmatrix}v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix}v_1 \\ v_2 \end{bmatrix} $$


By scaling the \(1\)s, we can manipulate how the matrix transforms our vector.

The Ball: a New Perspective

AKA, the unit circle. This includes all the vectors with magnitude \(1\) in every single direction (in 2D).

Working with Everything (in 2D)

We've been working mostly with one vector made of two components.

Now, we're going to think of every vector as a linear combination of two basis vectors, so we can see how a matrix stretches all vectors at once.

Scaling

$$ \begin{bmatrix} k_1 & 0 \\ 0 & k_2 \end{bmatrix} $$

Rotating

$$ \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} $$

Reflecting

$$ \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix} $$
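Here are the three transformations above applied to a single vector (the scale factors and angle are arbitrary picks of mine):

```python
import numpy as np

theta = np.pi / 2  # rotate 90 degrees counterclockwise

scale = np.array([[2.0, 0.0],
                  [0.0, 0.5]])
rotate = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
reflect = np.array([[-1.0, 0.0],
                    [0.0, 1.0]])  # flip across the y-axis

v = np.array([1.0, 0.0])
print(scale @ v)    # [2. 0.]
print(rotate @ v)   # approximately [0. 1.] (cos(pi/2) is ~1e-16, not exactly 0)
print(reflect @ v)  # [-1. 0.]
```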

Eigenvectors & Eigenvalues

Of course, we can combine all of these into one. Take a look at this matrix transformation:

You might notice that the bold vectors stay pointing in the same direction, even after they are transformed. They're only scaled up and down.

These are called eigenvectors \(\vec{u}\). The amount they're scaled by is called an eigenvalue \(\lambda\). Because the matrix just scales \(\vec{u}\) by \(\lambda\), we can say:

$$ A\vec{u} = \lambda \vec{u} $$

The spectral radius of a matrix is the largest eigenvalue magnitude (absolute value).
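NumPy can find the eigenvectors and eigenvalues for us, so we can check \(A\vec{u} = \lambda\vec{u}\) directly and read off the spectral radius (the example matrix is mine):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, -3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# Each eigenvector is only scaled by its eigenvalue: A u = lambda u.
for lam, u in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ u, lam * u)

# Spectral radius: the largest eigenvalue magnitude.
print(np.max(np.abs(eigenvalues)))  # 3.0
```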

For many matrices (specifically, diagonalizable ones), every vector \(\vec{v}\) can be written as a linear combination of the eigenvectors. In other words, we can think of the eigenvectors as our "components."

That means we can write the transformation of every vector \(A\vec{v}\) as a linear combination of the transformed eigenvectors. This is what allows us to warp the unit circle in the way we did above.

$$ \vec{v}= c_1\vec{u_1} + c_2\vec{u_2} + ... + c_n\vec{u_n}$$
$$ A\vec{v} = A(c_1\vec{u_1} + c_2\vec{u_2} + ... + c_n\vec{u_n}) = c_1A\vec{u_1} + c_2A\vec{u_2} + ... + c_nA\vec{u_n}$$

Eigendecomposition

Each symmetric matrix can be decomposed into 3 matrices using its eigenvectors and eigenvalues (for general diagonalizable matrices, \(Q^T\) is replaced by \(Q^{-1}\)):

$$ A_{n\times n} = QDQ^T=\begin{bmatrix} \uparrow & \uparrow & ... & \uparrow \\ \vec{u}_1 & \vec{u}_2 & ... & \vec{u}_n \\ \downarrow & \downarrow & ... & \downarrow \\ \end{bmatrix} \begin{bmatrix} \lambda_1 & 0 & ... & 0\\ 0 & \lambda_2 \\ \vdots & & \ddots \\ 0 & & & \lambda_n \\ \end{bmatrix} \begin{bmatrix} \leftarrow & \vec{u}_1 & \rightarrow \\ \leftarrow & \vec{u}_2 & \rightarrow \\ & \vdots \\ \leftarrow & \vec{u}_n & \rightarrow \\ \end{bmatrix} $$

\(D\) is a matrix with the eigenvalues along the diagonal and zeroes everywhere else.

\(Q\) is a matrix with the normalized eigenvectors as its columns. (\(Q^T\) is the transpose of \(Q\), meaning flipped along the diagonal.)

This decomposition is how the vectors and ellipses on the next page are calculated.
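You can verify the decomposition numerically for a symmetric matrix (the matrix is my own small example; `np.linalg.eigh` is NumPy's eigensolver for symmetric matrices):

```python
import numpy as np

# A symmetric matrix, so its eigenvectors are orthonormal and Q^T = Q^-1.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, Q = np.linalg.eigh(A)
D = np.diag(eigenvalues)  # eigenvalues on the diagonal, zeroes elsewhere

# Reassemble A from the three pieces: A = Q D Q^T.
assert np.allclose(Q @ D @ Q.T, A)
print(eigenvalues)  # [1. 3.]
```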

Now, those are big words, complicated matrices, and weird symbols. So I've created an interactive experience where you can click and drag eigenvectors, and see how it changes the matrix in real time.
