C. Hinsley

27 June 2021


What does $dx$ actually mean? You've seen this thing in calculus both in the context of differentiation (e.g., in $\displaystyle \frac{dy}{dx}$) and integration ($\displaystyle \int \cos x\ dx$), but attention is rarely given to what it means in isolation — you've generally just used it to keep track of which variables you're differentiating or integrating with respect to. It turns out that knowing what this object is and how to manipulate it will greatly simplify the more complicated theorems of calculus and allow us to differentiate and integrate along curves, surfaces, volumes, and other higher-dimensional manifolds using just a couple of intuitive techniques.

All you'll need to know beforehand is how to perform matrix multiplication and compute the determinant of a square matrix. Keep your eyes peeled for the 3 operators we'll be introducing: $d, \wedge, \partial$.

Vectors in $\mathbb{R}^n$  and covectors in its dual space, $(\mathbb{R}^n)^*$

You know what vectors in $\mathbb{R}^n$ look like. They're just vectors made up of $n$ real numbers. For instance, a vector in $\mathbb{R}^3$ might look like:

$$ \begin{bmatrix} 3 \\ -8 \\ 7\pi+\frac32 \end{bmatrix} $$

Similarly, the vector $[0]$ would be in $\mathbb{R}^1$.

Over the vector space $\mathbb{R}^n$ we have the standard operations of scalar multiplication and vector addition, as well as the inner product (i.e., the dot product of a pair of vectors). So we've encountered nothing unusual so far.
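If it helps to see these operations concretely, here's a minimal NumPy sketch (the particular vectors and scalar are made up for illustration):

```python
import numpy as np

# Two vectors in R^3 and a scalar, chosen arbitrarily for illustration
u = np.array([3.0, -8.0, 7 * np.pi + 1.5])
v = np.array([1.0, 2.0, -1.0])
c = 2.0

print(c * u)         # scalar multiplication: scales each component
print(u + v)         # vector addition: componentwise sum
print(np.dot(u, v))  # inner (dot) product: a single real number
```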

The critical thing to notice here is that I've chosen to represent vectors in $\mathbb{R}^n$  as column vectors — that is, as $n \times 1$ matrices. I could have just as easily chosen row vectors to represent elements of $\mathbb{R}^n$, but it's standard in mathematics to use column vectors for this.

Now, in this next bit, you're probably going to ask yourself where the hell this is coming from and why we're making these definitions. Just know that the reason this part seems so arbitrary and specific is that Leibniz came up with the calculus notation we're using back in the 1600s, while the modern theory did not come along until Élie Cartan developed it around the turn of the 20th century. So these tools had to be built up to give meaning to Leibniz's old notation and to extend what could be done with calculus.

Here it is: let's define a new vector space — this time consisting of row vectors — and call this new space $(\mathbb{R}^n)^*$, which we pronounce as the 'dual space' of $\mathbb{R}^n$. Now, don't be confused: there aren't two parts to the dual space; it's just a strange and unfortunate name. The dual space is just another space of vectors with $n$ components, and the usual operations on vectors — scalar multiplication and vector addition — apply here too. There are just two differences: we call vectors in the dual space covectors, and covectors can act as functions that "eat" a vector and "spit out" a real number. (If you're interested in why they're called covectors, it's because their components are covariant with changes of basis of $\mathbb{R}^n$, transforming the same way the basis does, while the components of ordinary vectors are contravariant, changing oppositely to the way the basis changes.)

This action of covectors in $(\mathbb{R}^n)^*$ on vectors in $\mathbb{R}^n$ is rather straightforward: because covectors are $1 \times n$ matrices and vectors are $n \times 1$ matrices, matrix multiplication allows us to produce a $1 \times 1$ matrix, which we can just interpret to be a single real number. If we take a look at a particular covector $\omega \in (\mathbb{R}^n)^*$, we can see that multiplying this covector with any vector $\vec{v} \in \mathbb{R}^n$ gives back a real number, so we can think of the covector as having the type $\omega : \mathbb{R}^n \to \mathbb{R}$ where $\omega(\vec{v}) \in \mathbb{R}$. Let's work through a concrete example.
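In code, this pairing is literally a $1 \times n$ by $n \times 1$ matrix product. Here's a minimal NumPy sketch of the idea (the particular covector and vector are made up for illustration):

```python
import numpy as np

# A covector as a 1 x 3 matrix (row vector) and a vector as a 3 x 1 matrix (column vector)
omega = np.array([[1.0, 0.0, -2.0]])  # shape (1, 3)
v = np.array([[5.0], [2.0], [3.0]])   # shape (3, 1)

result = omega @ v    # (1, 3) @ (3, 1) -> a (1, 1) matrix
print(result)         # [[-1.]]
print(result.item())  # interpret the 1 x 1 matrix as the real number -1.0
```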

Exercise 1

Let $\omega = \begin{bmatrix} 3 & -2 & 0 \end{bmatrix}$ and $\vec{v} = \begin{bmatrix} 4 \\ 1 \\ 3 \end{bmatrix}$. Compute $\omega(\vec{v})$.
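If you'd like to check your hand computation, the same NumPy pattern from above will do it:

```python
import numpy as np

# The covector and vector from Exercise 1
omega = np.array([[3.0, -2.0, 0.0]])
v = np.array([[4.0], [1.0], [3.0]])

print((omega @ v).item())  # prints omega(v)
```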

The dual basis: hello, old friend

If you've taken physics, you've seen the 3-dimensional basis vectors $\hat{i}, \hat{j}, \hat{k}$. Or, in linear algebra, you've seen $\vec{e}_1, \vec{e}_2, \dots, \vec{e}_n$ as a basis for $\mathbb{R}^n$. Recall that we can represent any vector $\vec{v}$ in $\mathbb{R}^n$ as a linear combination of basis vectors, so that