In my last post, I described a construction in Euclidean geometry which I claimed constitutes a vector space. In this post, I will discuss why it is a vector space. The basic idea was that the vector space is given by summing lines not passing through the origin of $\mathbb{R}^2$ as follows. Suppose $\ell_1$ and $\ell_2$ are lines not passing through the origin, and suppose for now that they are transverse. To sum them, translate $\ell_1$ and $\ell_2$ so that they pass through the origin. The four lines now form a parallelogram, and the sum $\ell_1+\ell_2$ is the line containing the diagonal which does not pass through the origin. One extends this to a full vector space structure by including a zero (the "line at infinity") and using continuity to sum parallel lines. See the last post for a precise description.
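For the computationally inclined, here is a minimal sketch of the construction in Python. It assumes nothing beyond the description above; the parametric representation of a line by a point and a direction, and the helper names `intersect` and `parallelogram_sum`, are choices made only for this sketch.

```python
import numpy as np

# A line is stored as (P, d): a point P on the line and a direction vector d.
# (This parametric representation is a bookkeeping choice for this sketch only.)

def intersect(line1, line2):
    """Return the intersection point of two transverse lines P + t*d."""
    (P1, d1), (P2, d2) = line1, line2
    # Solve P1 + t*d1 = P2 + s*d2 for (t, s).
    t, _ = np.linalg.solve(np.column_stack([d1, -d2]), P2 - P1)
    return P1 + t * d1

def parallelogram_sum(l1, l2):
    """Sum two transverse lines, neither through the origin, following the
    construction above.  Returns two points spanning the sum line l1 + l2."""
    origin = np.zeros(2)
    (_, d1), (_, d2) = l1, l2
    l1_0 = (origin, d1)      # l1 translated to pass through the origin
    l2_0 = (origin, d2)      # l2 translated to pass through the origin
    p = intersect(l1, l2_0)  # the two vertices of the parallelogram which
    q = intersect(l2, l1_0)  # do not lie on the diagonal through the origin
    return p, q              # the sum l1 + l2 is the line through p and q

l1 = (np.array([1.0, 0.0]), np.array([2.0, 1.0]))   # through (1,0), direction (2,1)
l2 = (np.array([0.0, 1.0]), np.array([1.0, -3.0]))  # through (0,1), direction (1,-3)
print(parallelogram_sum(l1, l2))
```

Returning two points rather than an equation keeps the sketch purely geometric; a coefficient description of the sum line will fall out of the discussion below.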
John Brown created a wonderful GeoGebra applet to play around with the construction. I have included a screenshot below, along with a description in the caption, so that you can play around with it.
In John's applet, the three black lines are the starting lines, and you can move the blue points to move those starting lines. The green and orange lines are the sums of two of the original starting lines, with the parallelogram involved in the addition law highlighted in grey. The third line is then added, either to the green line or to the orange line, and the corresponding parallelograms are highlighted in green and orange respectively. The blue line is the sum, and it passes through all four points it should: the two yellow points forming a diagonal of the green parallelogram, and the two black points forming a diagonal of the orange parallelogram.
Finally, the comments suggested the idea of inversion: under inversion, the affine lines become circles which do pass through the origin, and the addition law is just the sum of their centers. As I mentioned in the comments, I gave this answer half-credit.
The Solution
We recall the following notion from linear algebra. Let $V$ be a real vector space. Then the dual space $V^*$ is the vector space of covectors on $V$, also known as linear functionals $\phi \colon V \rightarrow \mathbb{R}$:
$$V^* = \mathrm{Hom}(V,\mathbb{R}).$$
The fact that this forms a vector space is relatively immediate: if $\lambda \in \mathbb{R}$ and $\phi,\psi \in V^*$, then we define the vector space structure by:
- Scalar Multiplication: $(\lambda \cdot \phi)(v) := \lambda \cdot \phi(v)$
- Addition: $(\phi + \psi)(v) := \phi(v) + \psi(v)$
(More generally, for any two vector spaces $V,W$, the space $\mathrm{Hom}(V,W)$ of linear maps is a vector space.)
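As a quick concrete illustration, here is a minimal sketch of these pointwise operations in Python, modelling covectors on $\mathbb{R}^2$ as plain functions; the helper names `add` and `scale` are ad hoc.

```python
# Covectors on R^2 modelled as plain Python functions v -> float, where v is
# a pair (x, y).  A minimal sketch of the pointwise vector space structure.

def add(phi, psi):
    """(phi + psi)(v) := phi(v) + psi(v)."""
    return lambda v: phi(v) + psi(v)

def scale(lam, phi):
    """(lam * phi)(v) := lam * phi(v)."""
    return lambda v: lam * phi(v)

phi = lambda v: v[0] + 2 * v[1]   # phi(x, y) = x + 2y
psi = lambda v: 3 * v[0] - v[1]   # psi(x, y) = 3x - y

sigma = add(phi, scale(2.0, psi))
print(sigma((1.0, 1.0)))          # phi(1,1) + 2*psi(1,1) = 3 + 2*2 = 7.0
```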
It is at this point that I rewind to Fall 2021, when a student asked me the following question:
"How can I picture the dual space?"
In my first lecture about dual spaces, I had already included a picture of what a nonzero covector looks like. The covector has a codimension 1 kernel, and it assigns to each affine subspace parallel to that kernel a single number, which scales linearly with the (signed) distance to the origin. Therefore, if you want to draw a covector $\phi$, it suffices to draw $\phi^{-1}(1)$, the affine hyperplane parallel to $\ker(\phi)$ along which $\phi$ takes the value $1$.
The level sets of a nonzero covector $\phi \in (\mathbb{R}^2)^*$. We may represent the data of $\phi$ by the affine line $H_{\phi} = \phi^{-1}(1)$. Translating to the origin gives the level set $K_{\phi}=\ker(\phi) = \phi^{-1}(0)$.
The student then asked a follow-up question which I had not discussed.
"How can I picture adding these lines?"
I hadn't thought about it before. I would say I'm embarrassed to admit it, but my sense is that I'm not alone. Even though I had a geometric picture in my head for covectors, their actual addition had been compartmentalized away in my "algebraic" understanding of linear algebra, far away from my "geometric" understanding. So it was a very good question; I took a minute and concluded the following.
Suppose $\phi, \psi \in V^*$ are generic (linearly independent) covectors represented by affine hyperplanes $H_{\phi}, H_{\psi}$. These hyperplanes are parallel to $K_{\phi} = \ker(\phi)$ and $K_{\psi} = \ker(\psi)$ respectively. Hence:
$$H_{\phi} \cap K_{\psi} = \{v \in V \mid \phi(v) = 1, \psi(v) = 0\}.$$
$$H_{\psi} \cap K_{\phi} = \{v \in V \mid \phi(v) = 0, \psi(v) = 1\}.$$
Therefore, both of these intersections consist of vectors for which $(\phi+\psi)(v) = 1$, and hence, by a dimension count, $H_{\phi+\psi}$ is the unique affine hyperplane (codimension 1) containing both of these affine subspaces (codimension 2)! In $\mathbb{R}^2$, these two intersections are the two vertices of the parallelogram which do not lie on the diagonal through the origin, so $H_{\phi+\psi}$ is precisely the diagonal from the parallelogram addition law. Hence...
The solution to the puzzle: We are doing nothing more than adding and scalar multiplying covectors in $(\mathbb{R}^2)^*$. Of course, we can only use the parallelogram law to add covectors which are linearly independent, but this extends uniquely by continuity, and the topological finagling of the last post is just the same as defining the natural topology on the dual space $(\mathbb{R}^2)^*$. (Every finite-dimensional real vector space has a unique natural topology.) The "line at infinity" is the zero covector. The tricky property of associativity of addition is automatic.
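For the skeptical, here is a quick numerical check of this identification, identifying a covector $\phi = (a, b) \in (\mathbb{R}^2)^*$ with the line $H_{\phi} = \{(x, y) \mid ax + by = 1\}$; the function name `parallelogram_sum` is mine, and the check is only a sanity test, not a proof.

```python
import numpy as np

# Identify a covector phi = (a, b) in (R^2)* with the affine line
# H_phi = { (x, y) : a*x + b*y = 1 } not passing through the origin.

def parallelogram_sum(phi, psi):
    """Sum the lines H_phi and H_psi by the parallelogram construction,
    returning the coefficients of the resulting line."""
    A = np.array([phi, psi], dtype=float)
    p = np.linalg.solve(A, [1.0, 0.0])  # the point H_phi ∩ K_psi
    q = np.linalg.solve(A, [0.0, 1.0])  # the point K_phi ∩ H_psi
    # The sum line passes through both p and q; recover its coefficients.
    return np.linalg.solve(np.array([p, q]), [1.0, 1.0])

rng = np.random.default_rng(0)
for _ in range(1000):
    phi, psi = rng.normal(size=2), rng.normal(size=2)
    # The geometric construction agrees with coordinatewise covector addition.
    assert np.allclose(parallelogram_sum(phi, psi), phi + psi)
print("parallelogram law == covector addition in (R^2)*")
```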
There are many wonderful things about this picture.
- It extends to arbitrary dimensions. You can now consider adding (codimension 1) affine hyperplanes not passing through the origin, and it is again not obvious from the geometry that addition is associative.
- The description is coordinate-invariant, so that we can work with a real vector space $V$ without a choice of basis.
- If $V$ comes with a choice of inner product, then there is a natural identification $V \cong V^*$ of vector spaces. If, for example, we pick the standard Euclidean inner product on $\mathbb{R}^2$, then we obtain the description suggested in the comments on the last post: apply polar inversion and add the centers (or really the points antipodal to the origin) of the corresponding circles. (A short computation spelling this out appears below.)
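To spell out why inversion works (assuming the inversion is taken in the unit circle): identify $\phi \in (\mathbb{R}^2)^*$ with the vector $w$ such that $\phi(v) = \langle v, w \rangle$, so that $H_{\phi} = \{v \mid \langle v, w \rangle = 1\}$. Inversion sends $v \mapsto u = v/|v|^2$, and substituting $v = u/|u|^2$ into $\langle v, w \rangle = 1$ gives
$$\langle u, w \rangle = |u|^2, \qquad \text{equivalently} \qquad \Big| u - \tfrac{w}{2} \Big| = \tfrac{|w|}{2}.$$
So $H_{\phi}$ inverts to the circle through the origin with center $w/2$, whose point antipodal to the origin is $w$ itself. Since the centers (respectively the antipodal points) are just $w/2$ (respectively $w$), adding them is literally adding the corresponding covectors.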
As one final geometric interpretation, we may always form the graph of a covector $\phi \in V^*$:
$$\Gamma(\phi) := \{(v,x) \in V \times \mathbb{R} \mid \phi(v) = x\}.$$
The hyperplane $H_{\phi}$ in $V$ is identified with the intersection of $V \times \{1\}$ with $\Gamma(\phi)$. Addition of the hyperplanes $\Gamma(\phi) \leq V \times \mathbb{R}$ is just the usual addition of graphs of functions, and we recover the parallelogram law by intersecting with $V \times \{1\}$.
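In symbols, with the fiberwise sum of graphs written out, this last identification reads
$$\Gamma(\phi) + \Gamma(\psi) := \{(v, \phi(v) + \psi(v)) \mid v \in V\} = \Gamma(\phi + \psi),$$
and intersecting with $V \times \{1\}$ gives $\Gamma(\phi + \psi) \cap (V \times \{1\}) = H_{\phi+\psi} \times \{1\}$, which is exactly the parallelogram sum from before.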