Physics for a Mathematician 02-III - Physics Background
III. Physics Background
It is worth pointing out one somewhat important detail which I am omitting: in order to phrase everything correctly, one would need to understand a path integral formulation of quantum mechanics (indeed, of supersymmetric quantum mechanics). We will be quite a bit more loosey-goosey, mostly because I'm still coming to grips with path integrals, but also because the physical principles are still present from a more naive viewpoint. The goal of this section is to describe, at a most basic level, the physical phenomena associated with Witten's paper, and to give some baseline story for what's going on. In the next (and final) post, we will finally see how the aspects presented here appear in the setting of Morse theory.
III.1: Classical Mechanics
Strictly speaking, this section is not necessary for the exposition, except to draw contrast with the quantum mechanical world. We include it anyway. For simplicity, I will discuss a particle moving on $\mathbb{R}$ instead of an arbitrary manifold $M$. The more general story is naturally described by symplectic geometry, which is my jam, so perhaps in a future blog post I will say more, especially since I will be teaching an introductory course on symplectic geometry next semester.
Let me begin with the classical picture. The particle moving along $\mathbb{R}$ has a well-defined position at any time $t$, or in other words, we may think of the particle as represented by a map $$\gamma \colon \mathbb{R} \rightarrow \mathbb{R}$$ where $\gamma(t)$ is simply the position of the particle at time $t$. Newton's laws tell you precisely how this particle moves when there are forces applied to it, and you can phrase all of this in terms of conservation of energy. Namely, typically one has that there is some potential energy on $\mathbb{R}$, represented as a function $V \colon \mathbb{R} \rightarrow \mathbb{R}$, where $V(\gamma(t_1))-V(\gamma(t_0))$ represents how much kinetic energy the particle loses from time $t_0$ to time $t_1$. (We assume $V$ is time-independent for simplicity.) Recall that the kinetic energy, for a particle of unit mass, is $T = \frac{1}{2}\left(\dot{\gamma}\right)^2$ (one uses a single dot to represent a time derivative), and we are phrasing classical mechanics in terms of the requirement that $$T+V = \frac{1}{2}(\dot{\gamma})^2 + V(\gamma(t))$$ is a constant function. Taking a derivative in time, one finds the standard equation of motion $$\ddot{\gamma}(t) = -\frac{\partial V}{\partial x}(\gamma(t)).$$ Phrased in the language of Newton's laws, the quantity $-\frac{\partial V}{\partial x}$ represents the force exerted on the particle. This equation has a unique solution, so long as we know the initial conditions $\gamma(0)$ and $\dot{\gamma}(0)$.
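To make this concrete, here is a minimal numerical sketch (my own addition, with an arbitrarily chosen potential, initial condition, and helper name `evolve`): we integrate $\ddot{\gamma} = -V'(\gamma)$ with a leapfrog scheme and check that $T+V$ stays constant.

```python
import numpy as np

def evolve(V_prime, x0, v0, dt=1e-3, steps=10_000):
    """Integrate x'' = -V'(x) (unit mass) with leapfrog steps."""
    x, v = x0, v0
    for _ in range(steps):
        v -= 0.5 * dt * V_prime(x)  # half kick from the force -V'(x)
        x += dt * v                 # drift at the updated velocity
        v -= 0.5 * dt * V_prime(x)  # second half kick
    return x, v

# Harmonic potential V(x) = x^2/2, so V'(x) = x; released from rest at x = 1.
x, v = evolve(lambda x: x, 1.0, 0.0, steps=int(2 * np.pi / 1e-3))
print(x, v)                     # ~ (1.0, 0.0): back to the start after one period
print(0.5 * v**2 + 0.5 * x**2)  # total energy T + V stays ~ 0.5
```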
Example: If $V(x) = c$ is constant, then $\ddot{\gamma} = 0$, which tells us that the particle just moves with constant velocity. This is Newton's first law of motion: a particle will travel with constant velocity when there are no forces applied to it.
Example: If $V(x) = gx$ for $g$ a constant, then $\ddot{\gamma} = -g,$ so $\gamma(t) = A + Bt - \frac{1}{2}gt^2$ gives parabolic motion. This is the standard first-semester physics understanding of the height of an object subjected to a constant gravitational field (of strength given by $g$). On Earth, $g \approx 9.8~m/s^2$.
Example: If $V(x) = \frac{1}{2}kx^2$ for $k > 0$ a constant, then $\ddot{\gamma} = -k\gamma$, and so we find that $\gamma(t) = A\cos(t\sqrt{k}-\phi_0)$ is sinusoidal motion. This is called the harmonic oscillator, and appears, for example, when one studies a particle attached to a spring. The constant $k$ is called the spring constant, and represents the stiffness of the spring. The equation of motion is sometimes called Hooke's law.
III.2: Quantum Mechanics
Witten's paper deals with the setting of supersymmetric quantum mechanics. Again, for simplicity, I'm going to pretend that the situation we are considering is a particle with mass $m=1$ constrained to live on the $1$-dimensional manifold $\mathbb{R}$. We can replace this whole story with a manifold $M$, but I'm really after a handful of fundamental ideas which already occur for this simple example. For quick review, I perused Griffiths's Introduction to Quantum Mechanics, since it's been about 8 years (Spring 2012) since I took the course for which I bought the textbook.
In quantum mechanics, the particle is instead represented by a wave function $\Psi(x,t)$, a complex-valued function with $\|\Psi(\cdot,t)\|_{L^2} = 1$ for each time $t$, where $|\Psi(x,t)|^2$ is the probability density for finding the particle at position $x$ at time $t$. So far, it seems as though the only information which matters is the absolute value $|\Psi(x,t)|$, but actually, the phase is important in determining the time evolution. In particular, the wave function of a particle evolves according to the Schrödinger equation (where we use the typical normalization $\hbar = 1$ to declutter the notation) $$i \frac{\partial \Psi(x,t)}{\partial t} = \left(-\frac{1}{2}\frac{\partial^2}{\partial x^2} + V(x) \right) \Psi(x,t).$$ One often writes this more simply as $$i\frac{\partial \Psi(x,t)}{\partial t} = H \Psi(x,t),$$ where $H = -\frac{1}{2}\frac{\partial^2}{\partial x^2} + V(x)$ is an operator on functions, called the Hamiltonian operator. The study of this partial differential equation for a variety of values of $V(x)$ takes up a large portion of the undergraduate physics syllabus, and can easily lead us astray.
Remark: If $\|\Psi(\cdot,0)\|_{L^2} = 1$, then $\|\Psi(\cdot,t)\|_{L^2} = 1$ for any $t$. The key input to the argument is that the Hamiltonian operator is self-adjoint with respect to the $L^2$ metric, so if we think of a solution as $\Psi(\cdot, t) = e^{-itH}\Psi(\cdot,0)$, since $H$ is self-adjoint, the operator $e^{-itH}$ will be unitary with respect to the $L^2$ metric.
Remark: Notice that if $\Psi(x,t)$ solves the Schrödinger equation, then so does $e^{i\theta}\Psi(x,t)$, the same wave function with a phase shift, and this preserves the $L^2$-normalization property. Hence, for our discussion, this overall phase shift is not important, and we will consider two wave functions to be the same up to such a phase. Note that in general, the phase may be coupled to some field; for example, the Aharonov-Bohm effect physically demonstrates that an electromagnetic field is coupled to a charged particle's phase.
Remark: On the large scale, most objects have wave functions very much localized near a single value $x = x_0$. We claim that such a localized wave function will evolve approximately according to Newton's laws, so that classical mechanics arises as a sort of limit of quantum mechanics.
One very helpful idea comes in the form of time-independence. Instead of solving the general problem, we study separable solutions to the Schrödinger equation, i.e. solutions of the form $$\Psi(x,t) = \psi(x)\phi(t).$$ If we substitute this in, we find $$i\frac{\dot{\phi}}{\phi} = \frac{H\psi}{\psi},$$ and the left hand side depends only on $t$ while the right hand side depends only on $x$, so both sides are some constant $E$. We find that $$\phi(t) = Ae^{-iEt}$$ for some constant $A$, whereas $\psi$ satisfies the time-independent Schrödinger equation $$H\psi := \left(-\frac{1}{2}\frac{d^2}{d x^2} + V(x)\right)\psi(x) = E\psi(x)$$ (where now we use the notation $d$ instead of $\partial$ since $\psi$ is a single-variable function). In other words, we look for eigenfunctions for the operator $H$. The value $E$ is called the energy: in this way, we should think of $H$ as the energy, so that $H = T+V$, where $T = -\frac{1}{2}\frac{d^2}{dx^2}$ is the kinetic energy operator. In particularly good situations, the set of values $E$ for which there exists a solution to the time-independent equation is countable, say $\{E_n\}$, each with a unique eigenfunction $\psi_n$, and any solution to the general Schrödinger equation, again in good circumstances, can be written as $$\Psi(x,t) = \sum_n c_n e^{-iE_n t} \psi_n(x).$$ In still pretty good situations, the set of $E$ is not discrete, but we may still write $$\Psi(x,t) = \int_{\mathbb{R}} c(E)e^{-iEt}\psi_E(x)~dE$$ where $\psi_E$ is a time-independent solution of energy $E$. There is quite a bit of subtlety to the point of writing $\Psi$ as a combination of separable solutions, but we will simply shoot first and ask questions later, where later means "not anytime soon."
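As a sanity check on this eigenfunction picture, here is a small numerical sketch (my own, with an arbitrary grid, cutoff, and test potential): we discretize $H$ on an interval, diagonalize it, and evolve an initial state by attaching the phase $e^{-iE_n t}$ to each mode, verifying along the way that the $L^2$ norm is preserved.

```python
import numpy as np

# Grid discretization of H = -(1/2) d^2/dx^2 + V(x) on [-L, L].
L, N = 10.0, 400
x = np.linspace(-L, L, N)
h = x[1] - x[0]

# Standard tridiagonal finite-difference stencil for the second derivative.
D2 = (np.diag(np.full(N - 1, 1.0), -1) - 2 * np.eye(N)
      + np.diag(np.full(N - 1, 1.0), 1)) / h**2
V = 0.5 * x**2                 # harmonic potential as a test case
H = -0.5 * D2 + np.diag(V)

E, psi = np.linalg.eigh(H)     # eigenvalues E_n, eigenfunctions psi[:, n]
print(E[:4])                   # ~ [0.5, 1.5, 2.5, 3.5]

# Expand an initial state in eigenfunctions and evolve each phase separately.
Psi0 = np.exp(-(x - 1.0) ** 2)              # some localized initial state
c = psi.T @ Psi0                            # coefficients c_n
Psi_t = psi @ (c * np.exp(-1j * E * 0.7))   # the state at time t = 0.7
print(np.sum(np.abs(Psi_t)**2) / np.sum(np.abs(Psi0)**2))  # ~ 1: norm preserved
```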
Exercise: If $V$ is bounded from below, i.e. $V(x) \geq V_0$ for all $x$ and some constant $V_0$, then the time-independent Schrödinger equation has no nonzero $L^2$ solutions whenever $E < V_0$.
Let's come back to the harmonic oscillator, with $V(x) = \frac{1}{2}x^2$. The time-independent Schrödinger equation now reads $$\frac{1}{2} \cdot \left(-\frac{d^2}{d x^2}+x^2\right)\psi = E\psi.$$ The idea is to try to factor the left-hand side, formally a difference of two squares, using the operators $$a_{\pm} := \frac{1}{\sqrt{2}}\left(\mp\frac{d}{d x}+x\right).$$ However, $a_+$ and $a_-$ don't quite commute, and so their products aren't quite $H$. Indeed, applying these operators to a test function, it turns out that $$a_+a_- = H - \frac{1}{2},~~~~~~~a_-a_+ = H + \frac{1}{2}$$ $$[a_-,a_+] := a_-a_+ - a_+a_- = 1.$$ Notice that automatically, you can compute $$[H,a_{\pm}] := Ha_{\pm} - a_{\pm} H = \pm a_{\pm},$$ or in other words, if $\psi$ satisfies the time-independent Schrödinger equation with eigenvalue $E$, then $$Ha_{\pm}\psi = \left(a_{\pm}H \pm a_{\pm}\right)\psi = (E \pm 1)a_{\pm}\psi.$$ So $a_{\pm}\psi$ is also an eigenfunction for $H$ if it is nonzero!
Notice that if $E < 0$, then there is no nonzero solution to the time-independent Schrödinger equation, by the exercise preceding this example. Suppose now there were an eigenfunction $\psi$ with $0 \leq E < 1$. Then $a_{-}\psi = 0$ since otherwise it would be an eigenfunction of $H$ with eigenvalue $E - 1 < 0$, which does not exist. So $a_+a_-\psi = 0$. Hence, $$0 = a_+a_-\psi = \left(H - \frac{1}{2}\right)\psi = \left(E-\frac{1}{2}\right)\psi,$$ and we see that we must have $E = 1/2$. Indeed, one finds a unique $L^2$-normalized eigenfunction (up to overall multiplication by a complex number of norm $1$) with $E = 1/2$, given by $$\psi_0(x) = \frac{1}{\pi^{1/4}}e^{-x^2/2}.$$
Exercise: Verify that this is the unique $L^2$-normalized eigenfunction (up to multiplication by a complex number of norm $1$) by solving the differential equation $a_-\psi_0 = 0$, a separable differential equation.
Now if $\psi$ is an eigenfunction with $1 \leq E < 2$, then so is $a_- \psi$ with eigenvalue $E-1$, so $a_-\psi = C\psi_0$. Applying $a_+$, we find $$Ca_+ \psi_0 = a_+a_-\psi = \left(H-\frac{1}{2}\right)\psi = \left(E-\frac{1}{2}\right)\psi,$$ so $\psi$ is a multiple of $a_+\psi_0$, and we may choose a unique constant (up to sign) to normalize it. We can continue this process up and up, and we find that all eigenfunctions are of the form $$\psi_n = C_n (a_+)^n \psi_0$$ with eigenvalue $E_n = n+\frac{1}{2}$, where we pick $C_n$ so that $\|\psi_n\|_{L^2} = 1$. Any solution to the (time-dependent) Schrödinger equation is then of the form $$\Psi(x,t) = \sum_{n=0}^{\infty} c_n \psi_n(x) e^{-iE_nt}.$$
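All of this can be verified symbolically; the following sketch (my own check, not from the original post) confirms that $a_-\psi_0 = 0$, that $H\psi_0 = \frac{1}{2}\psi_0$, and that $a_+\psi_0$ is an eigenfunction with eigenvalue $\frac{3}{2}$.

```python
import sympy as sp

x = sp.symbols('x', real=True)

def a_minus(f): return (sp.diff(f, x) + x * f) / sp.sqrt(2)   # a_- = (d/dx + x)/sqrt(2)
def a_plus(f):  return (-sp.diff(f, x) + x * f) / sp.sqrt(2)  # a_+ = (-d/dx + x)/sqrt(2)
def H(f):       return -sp.diff(f, x, 2) / 2 + x**2 * f / 2

psi0 = sp.pi ** sp.Rational(-1, 4) * sp.exp(-x**2 / 2)
print(sp.simplify(a_minus(psi0)))                       # 0: a_- kills the ground state
print(sp.simplify(H(psi0) - psi0 / 2))                  # 0: H psi0 = (1/2) psi0

psi1 = a_plus(psi0)                                     # first excited state, unnormalized
print(sp.simplify(H(psi1) - sp.Rational(3, 2) * psi1))  # 0: eigenvalue 3/2
```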
The key here is that we have some amount of algebraic machinery at our disposal, given by the three operators $a_+$, $a_-$, and $H$, satisfying the key commutation relations $[a_-,a_+] = 1$, $[H,a_+] = a_+$, and $[H,a_-] = -a_-$. The rest is just a formality of the fact that $H$ has a lowest possible eigenvalue.
Remark: The formality here can be phrased in terms of the representation theory of the Lie algebra spanned by $1$, $a_+$, $a_-$, and $H$ (a version of the Heisenberg algebra), in the same spirit as the lowest/highest weight theory for $\mathfrak{sl}_2(\mathbb{C})$.
If we work out the constants, then $$H\psi_n = \left(n+\frac{1}{2}\right)\psi_n,~~~~~a_+\psi_n = \sqrt{n+1} \cdot \psi_{n+1},~~~~~~a_-\psi_n = \sqrt{n}\psi_{n-1}.$$ One often writes this by thinking about the operators in question as shifting from one eigenspace of $H$ to another, as in the following diagram:
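In the basis $\{\psi_n\}$, these formulas say that $a_+$ and $a_-$ are shift matrices with entries $\sqrt{n+1}$ and $\sqrt{n}$. Here is a truncated-matrix illustration (my own, with an arbitrary cutoff $N$); the commutation relations hold exactly away from the truncation edge.

```python
import numpy as np

N = 6  # keep only the first N eigenstates psi_0, ..., psi_{N-1}
n = np.arange(N)
a_plus = np.diag(np.sqrt(n[1:]), -1)   # a_+ psi_n = sqrt(n+1) psi_{n+1}
a_minus = np.diag(np.sqrt(n[1:]), 1)   # a_- psi_n = sqrt(n) psi_{n-1}
H = np.diag(n + 0.5)                   # H psi_n = (n + 1/2) psi_n

# The relations hold away from the truncation edge:
print((a_minus @ a_plus - a_plus @ a_minus)[:N-1, :N-1])  # identity block: [a_-, a_+] = 1
print(H @ a_plus - a_plus @ H - a_plus)                   # zero matrix: [H, a_+] = a_+
```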
I thoroughly enjoy this animation from Wikipedia, which illustrates how the real and imaginary parts of $\psi_n(x)e^{-i(n+1/2)t}$ evolve over time for $n=0,1,2,3$, as well as examples of two states which are some combinations of these, especially the last, which shows that we have a nice approximation to the classical solution, in which the particle oscillates back and forth.
To finish this example, the takeaway is that in peeling back the physics to the underlying mathematical structure, quantum mechanics is actually a more general machine involving operators, and it doesn't really require that we study the Schrödinger equation as we have discussed. Instead, the main property we need is that the Hamiltonian operator, $H$, which may come in a variety of forms, is self-adjoint. From this general viewpoint, the Hamiltonian $H$ will act on some Hilbert space $\mathcal{H}$; in our case this was the Hilbert space of $L^2$ functions on $\mathbb{R}$ (or more generally $M$). Then we may study the resulting equation $$i\dot{\Psi} = H\Psi$$ where $\Psi \colon \mathbb{R} \rightarrow \mathcal{H}$ represents our state over time. A stationary state is just one of the form $\Psi(t) = \psi \cdot e^{-iEt}$ where $\psi$ is an eigenfunction for $H$ of eigenvalue $E$, and we can try to write all states as sums (or integrals if the set of eigenvalues $E$ is not discrete) of these stationary states. One can then use the algebraic machinery of related operators to say something about the time-independent solutions.
III.3: Tunneling in Quantum Mechanics
Perhaps more fundamental than the harmonic oscillator is the case of the free particle, with $V = 0$. The time-independent Schrödinger equation is quite standard in a differential equations class (see e.g. the exponential response formula), so we find the solution to the general Schrödinger equation, when we include the time back in, is $$\Psi_E(x,t) = Ae^{ik\left(x-vt\right)} + Be^{-ik\left(x+vt\right)}$$ where $k = \sqrt{2E}$ and $v = \sqrt{E/2} = k/2$. The first term is right-moving, in the sense that increasing $t$ just shifts the function over in the positive $x$ direction (with velocity $v$). The second term is similarly left-moving. So we think about this as a combination of two wave packets, one which is left-moving and one which is right-moving. Unfortunately, this is not $L^2$-normalized. Really, we should use appropriate functions $\phi_L(E)$ and $\phi_R(E)$ to write any solution to the Schrödinger equation as $$\Psi(x,t) = \int_{E \geq 0} \phi_L(E)e^{-ik(x+vt)}~dE + \int_{E \geq 0} \phi_R(E) e^{ik(x-vt)}~dE,$$ where $k$ and $v$ depend on $E$ as described, and we can hence obtain functions which have $L^2$-norm $1$. The first integral gives a wave packet which moves to the left, whereas the second gives a wave packet which moves to the right; notice that the velocity is no longer well defined, since $v$ is itself a function of $E$. (There is a concept of a group velocity, the macroscopic-scale velocity at which the wave packet travels, and this is what we see in practice with our eyes, which aren't fine-tuned enough to see the quantum mechanical detail.)

Suppose now we have $V(x) = a\delta(x)$ with $a > 0$, where $\delta$ is the Dirac delta function, $$\delta(x) = \left\{\begin{matrix} 0, & x \neq 0 \\ \infty, & x = 0 \end{matrix}\right.$$ with the condition that $$\int_{x \in \mathbb{R}} \delta(x) = 1.$$ This is not a proper function, but it is a distribution, and if we weaken our setting of differential equations to such distributions, we can still study solutions to the Schrödinger equation legitimately. (If you don't like this, we can take a sequence of functions which converge to $\delta$, and the solutions to the Schrödinger equation should converge, though we will not worry about such issues of convergence.) We think about this as an infinite potential barrier between $x < 0$ and $x > 0$, corresponding to placing an infinitely solid (but also infinitely thin) barrier at the origin. So suppose we have a wave packet which is localized to some $x < 0$ and travelling to the right. It will hit the barrier, and after the collision, there will be some new wave function. Classically, you would expect that the particle would bounce right back, so you might think the wave function will still be localized to $x < 0$. But this is not the case! It turns out that some of it goes through, and we obtain some right-moving component in the end. This phenomenon is referred to as tunneling (or scattering), where some part of a wave is transmitted into a region where, classically, it should not be allowed.
We will check this without worrying about normalization, so we'll send in some non-normalizable 'solution', one corresponding to constant $E$, and see what happens. If you're like me, this isn't an entirely comfortable idea, but you can believe that every solution can be written as some integral of these non-normalizable functions as discussed, so this story is close to true when we send in a wave packet which is localized near some energy level.
Let us look at the time-independent Schrödinger equation, which in our case reads $$-\frac{1}{2}\frac{d^2\psi}{dx^2} + a\delta(x)\psi(x) = E\psi(x).$$ Then we have that there are constants $A$, $B$, $C$, and $D$ such that $$\psi(x) = \left\{\begin{matrix}Ae^{ikx} + Be^{-ikx}, & x < 0 \\ Ce^{ikx} + De^{-ikx}, & x > 0\end{matrix}\right.$$ From the physical situation, we may say up to scaling that $A = 1$, representing the incoming wave, and $D = 0$, since for $x > 0$ there is only the wave transmitted past the barrier, which moves in the positive direction. Then $B$ represents the amount of the wave that is reflected, and $C$ represents the amount of the wave that is transmitted. In particular, we may take $$R = |B|^2,~~~~~T = |C|^2$$ which represent the probabilities that the particle was reflected and transmitted, respectively. They are called the reflection and transmission coefficients.
The quantum mechanical version of Gandalf the Grey wasn't nearly as bad-ass...
To solve for them we assume that $\psi$ is continuous at $0$, which I assume comes from studying a sequence of smooth potentials approaching the delta function. Then, by integrating the Schrödinger equation, $$0 = \lim_{\epsilon \rightarrow 0}\int_{x \in (-\epsilon,\epsilon)}\left(-\frac{1}{2}\frac{d^2\psi}{dx^2} + a\delta(x)\psi(x) - E\psi(x) \right)$$ $$= -\frac{1}{2} \cdot \left(\lim_{\epsilon \rightarrow 0^+}\frac{d\psi}{dx}(\epsilon) - \lim_{\epsilon \rightarrow 0^-}\frac{d\psi}{dx}(\epsilon)\right) + a\psi(0).$$ Continuity of $\psi$ at $0$ yields the equation $$1+B = C,$$ whereas the jump in $d\psi/dx$ at $0$ just discussed tells us that $$ikC - (ik - ikB) = 2a(1+B).$$ In other words, with $\beta = a/k$, we find $$B = \frac{-i\beta}{1+i\beta},~~~~~C = \frac{1}{1+i\beta}$$ and the reflection and transmission coefficients are $$R = \frac{\beta^2}{1+\beta^2},~~~~~T = \frac{1}{1+\beta^2}.$$ In particular, the higher the energy, the higher the value of $k = \sqrt{2E}$, and so the smaller $\beta$ and the more likely the transmission. On the macroscopic scale, by the way, it would take a ridiculous amount of energy to have even a 0.0001% chance of jumping through a solid door. Nonetheless, we now have devices which work based on quantum mechanical principles, so quantum tunneling is actually extremely useful.
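Here is a quick numerical check of these formulas (my own sketch; the helper name `delta_barrier` and the sample values of $a$ and $E$ are arbitrary): $R + T = 1$ always, and transmission increases with energy.

```python
import numpy as np

def delta_barrier(a, E):
    """Reflection/transmission coefficients for V(x) = a*delta(x), with m = hbar = 1."""
    k = np.sqrt(2 * E)
    beta = a / k
    B = -1j * beta / (1 + 1j * beta)   # reflected amplitude
    C = 1 / (1 + 1j * beta)            # transmitted amplitude
    return abs(B) ** 2, abs(C) ** 2

R, T = delta_barrier(a=1.0, E=2.0)
print(R, T, R + T)                      # 0.2, 0.8, and R + T = 1: probability conserved
print(delta_barrier(a=1.0, E=20.0)[1])  # higher E => smaller beta => T closer to 1
```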
In general, one can use a tool called the WKB approximation to study scattering in less idealized situations. To study this phenomenon, we write $\psi(x) = A(x)e^{i\phi(x)}$ in the time-independent Schrödinger equation, with $A(x) > 0$ and $\phi(x)$ real functions. Then the time-independent Schrödinger equation reads as two separate equations (coming from the real and imaginary parts) $$A'' - A(\phi')^2 = -2(E-V)A$$ $$(A^2\phi')' = 0.$$ The second one shows that we may write $A = C(\phi')^{-1/2}$, so it suffices to solve for $\phi'$. However, the first equation is quite difficult to solve. What we do is to assume that $A''/A$ is very small compared to all other terms which appear, so that we may simply solve $\phi'(x) = \sqrt{2(E-V(x))} =: p(x)$ to find the solution. I do not yet know the intricacies of how good this approximation is.
Let us apply it to the following situation: suppose that instead of a Dirac delta function potential, we have a potential function satisfying $$V(x) \left\{ \begin{matrix} = 0, & x < 0 \mathrm{~or~} x > a \\ \geq E_0, & 0 < x < a\end{matrix} \right.$$ with $E_0 > 0$. We will still have a tunneling phenomenon, in which a wave incoming from the left with $0 < E < E_0$ will partially reflect and partially transmit. The function $p(x)$ is purely imaginary on the region $0 < x < a$, and with $k = \sqrt{2E}$ as before, we may write $$\psi(x) \approx \left\{\begin{matrix}e^{ikx} + Be^{-ikx}, & x < 0 \\ \frac{C}{\sqrt{|p(x)|}}e^{\int_0^{x}|p(x')|~dx'} + \frac{D}{\sqrt{|p(x)|}}e^{-\int_0^{x}|p(x')|~dx'}, & 0 < x < a \\ Fe^{ikx}, & a < x \end{matrix}\right.$$
If we assume that the barrier is rather large, e.g. $E_0 \gg E$ or $a \gg (E_0-E)^{-1/2}$, then we must have $C \approx 0$, since otherwise the first term in the region $0 < x < a$ would blow up, which doesn't make physical sense. I am sure one can also use continuity of $\psi$ and $\psi'$ at the points $x = 0$ and $x = a$ to argue this more rigorously, but I haven't thought about it. Regardless, the drop in amplitude across the barrier is governed by the second term with coefficient $D$, and we find that the approximate ratio of amplitudes (up to some smaller factors) should be $$|F| \approx e^{-\int_0^a |p(x)|~dx}.$$ Writing $\gamma = \int_0^a|p(x)|~dx = \int_0^{a}\sqrt{2(V(x)-E)}~dx$, we find therefore that $$T = |F|^2 \approx e^{-2\gamma}.$$
Remark: Notice that $\gamma \geq a\sqrt{2(E_0-E)}$, so $T \lesssim e^{-2a\sqrt{2(E_0-E)}}$, which is very small under our assumption about having a large barrier.
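As a numerical illustration (my own sketch, with an arbitrarily chosen square barrier and helper name `wkb_transmission`), one can estimate $\gamma$ by quadrature and form $T \approx e^{-2\gamma}$:

```python
import numpy as np

def wkb_transmission(V, E, x0, x1, n=10_000):
    """WKB estimate T ~ exp(-2*gamma), gamma = integral of sqrt(2(V-E)) over [x0, x1]."""
    x = np.linspace(x0, x1, n)
    p = np.sqrt(np.maximum(2 * (V(x) - E), 0.0))  # |p(x)| inside the barrier
    gamma = np.mean(p) * (x1 - x0)                # simple Riemann estimate of the integral
    return np.exp(-2 * gamma)

# A square barrier of height E0 = 5 on (0, 1), probed at energy E = 1:
print(wkb_transmission(lambda x: 5.0 * np.ones_like(x), E=1.0, x0=0.0, x1=1.0))
# ~ exp(-2*sqrt(8)) ~ 3.5e-3, saturating the bound T <~ exp(-2a*sqrt(2(E0-E)))
```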
Remark: One part of the picture which I do not understand is how the WKB approximation fits in with the path integral formulation of quantum mechanics. This, I believe, is the biggest missing piece in the description of Witten's work to come. For us, we will take the transmission coefficient $T \sim e^{-2\gamma}$ as a sort of heuristic for the tunneling phenomenon.
As a final note, one may ask what quantum mechanics (and in particular the Schrödinger equation) means for 'particles' on a general manifold $M$. Recall that particles are really wave functions $\phi \colon M \rightarrow \mathbb{C}$ in quantum mechanics. As we mentioned, we should really be working on a Hilbert space of such functions, so we really want to take $L^2$ functions with the $L^2$ metric $$\|\phi\|^2_{L^2} = \int_M |\phi|^2~d\mathrm{vol}_g,$$ where we have to fix a metric on $M$ so that we have a volume form. (If we're doing physics, we really should be working on a Riemannian manifold anyway, since we'd like to be able to measure lengths.) The Hamiltonian is then given by $$H = -\frac{1}{2} \Delta_g + V,$$ where $\Delta_g$ is the Laplacian, and $V \colon M \rightarrow \mathbb{R}$ is the background potential energy. In the case where $V = 0$, the ground states (i.e. states with minimal energy $E=0$) are so-called harmonic functions with respect to $g$, so if $M$ is closed and connected, we obtain a single ground state, given by the constant functions. One can also make the Laplacian act on differential forms, as is standard in Hodge theory, and we may equally well call this a quantum mechanical system. We will describe some of the basics of Hodge theory next time, as it will be necessary for understanding Witten's paper.
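As a toy version of this (my own sketch, with a circle standing in for a closed manifold and unit grid spacing), one can discretize $\Delta_g$ as the graph Laplacian of a periodic grid; with $V = 0$, the unique ground state is the constant function, matching the claim for a closed connected manifold.

```python
import numpy as np

# Discretized -(1/2)*Laplacian on a circle (periodic grid of N sites), V = 0.
N = 100
shift = np.roll(np.eye(N), 1, axis=0)        # cyclic shift matrix
H = 0.5 * (2 * np.eye(N) - shift - shift.T)  # graph Laplacian, scaled by 1/2

E, psi = np.linalg.eigh(H)
print(E[0], E[1])         # ~ 0, then a small positive gap: a unique zero mode
print(np.ptp(psi[:, 0]))  # ~ 0: the ground state is (up to sign) constant
```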
III.4: Supersymmetry (SUSY)
Enter key physics concept number two. As I was preparing this post, I ended up reading through Notes on Supersymmetry by Deligne and Morgan (following Bernstein) in Volume 1 of Quantum Fields and Strings: A Course for Mathematicians. The idea of a superalgebra or supermanifold sounded relatively easy from what I knew of it beforehand, but it's actually surprisingly tricky. And despite the name, the notes of Deligne and Morgan (as far as I can tell) actually don't talk about supersymmetry at all. So the supersymmetry part of what I say may be a little distorted or hand-wavy.
Despite the tricky details, let me try to keep it simple. The adjective super just means that everything decomposes into even and odd parts, i.e. that all objects are $\mathbb{Z}_2$-graded (we will always mean the group $\mathbb{Z}/2\mathbb{Z}$, and not the $2$-adic integers; this has confused me once in the opposite direction, which makes me suspect that this caveat is important for algebraists out there). At the easiest level, there's the notion of a super vector space, which is just a vector space of the form $$V = V_0 \oplus V_1,$$ where we think of $V_0$ as consisting of the even elements and $V_1$ as consisting of the odd elements. An element $v$ of either $V_0$ or $V_1$ is called homogeneous and comes with a parity $p(v) = 0,1$ respectively. You can then try to take this idea and study all sorts of algebraic structures. For example, you can take tensor products $$(V \otimes W)_0 = (V_0 \otimes W_0) \oplus (V_1 \otimes W_1),~~~~~(V \otimes W)_1 = (V_0 \otimes W_1) \oplus (V_1 \otimes W_0).$$ A(n even) morphism of super vector spaces is just a linear map of vector spaces sending even elements to even elements and odd elements to odd elements. Actually, it will be useful to also consider an odd morphism of super vector spaces, which sends even elements to odd elements and vice versa. A super algebra is a super vector space $A$ with a multiplication $\mu \colon A \otimes A \rightarrow A$, which is commutative if $$\mu(x,y) = (-1)^{p(x)p(y)}\mu(y,x)$$ for homogeneous elements $x$ and $y$. There are sign considerations all over the place, but so long as you follow the Koszul sign rule relentlessly, including a sign every time you move objects past each other as in the definition of commutativity just stated, you should be OK. And it goes on and on. There's a tricky concept called the Berezinian, which is important since it generalizes the notion of determinant, and hence is pivotal in understanding how to integrate. We won't deal with it as seriously here, since we're taking a naive approach for now. Whenever a physicist requires computing some path integral in a supersymmetric quantum field theory, and they integrate over a so-called Grassmann (or fermionic) variable, they're really using the notions of a supermanifold and the Berezinian line bundle behind the scenes. As a piece of physics notation, even objects are often called bosons (or bosonic), whereas odd objects are often called fermions (or fermionic), especially if they actually correspond to physical particles. So don't be scared of those words like I was at some point!
To start, the idea of supersymmetry (also written as SUSY) is that the physics comes with an underlying symmetry which (and here's the super part) mixes the bosonic and fermionic states. This might sound like nothing at first: we just push the even parts into the odd parts and vice versa. But there's something fundamentally interesting about this, which comes from the sign rules which I'm hiding, and which tell us that the bosonic and fermionic states are actually fundamentally different, an idea which is codified in the spin-statistics theorem, which I don't claim to fully understand yet. Let me just provide a quick example of how bosonic and fermionic states are different. Given a super vector space $V$, one can form the symmetric product $\mathrm{Sym}^*V$. Typically, you would think of this as the tensor algebra of $V$ modulo the ideal generated by elements of the form $x \otimes y-y \otimes x$. But actually, we should apply the sign rule relentlessly, so we should actually be using the homogeneous ideal generated by elements of the form $x \otimes y - (-1)^{p(x)p(y)}y \otimes x$ for homogeneous elements $x$ and $y$. Explicitly, as a vector space $$\mathrm{Sym}^*V = \mathrm{Sym}^*V_0 \otimes \Lambda^*V_1,$$ and this yields a super vector space by giving the exterior algebra factor its natural $\mathbb{Z}_2$-grading. For example, if $V_0 = \mathbb{R} x$ and $V_1 = \mathbb{R}\theta$, then $\mathrm{Sym}^*V = \mathbb{R}[x,\theta]/(\theta^2)$ where $p(x) = 0$ and $p(\theta) = 1$. This is more than a formality: $\mathrm{Sym}^*V$ is a natural construction from the more abstract perspective of category theory, and hence the sign rule fundamentally distinguishes bosons and fermions.
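A low-tech computational model of this example (my own sketch): represent elements of $\mathbb{R}[x,\theta]/(\theta^2)$ symbolically and impose $\theta^2 = 0$, which is exactly the relation the sign rule forces, by hand.

```python
import sympy as sp

# Model Sym^*(V) = R[x, theta]/(theta^2) for V_0 = R*x (even), V_1 = R*theta (odd).
x, theta = sp.symbols('x theta')

def smul(u, v):
    """Multiply, then impose theta^2 = 0 (the relation forced by the Koszul sign rule)."""
    return sp.expand(u * v).subs(theta**2, 0)

u = x + theta
print(smul(u, u))          # x**2 + 2*x*theta: the theta^2 term has died
print(smul(theta, theta))  # 0: an odd variable squares to zero
```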
More formally, supersymmetry involves the action of a super Poincaré group, which, since we haven't discussed supermanifolds and hence super Lie groups, we won't define.
At this point, let us finally turn to supersymmetric quantum mechanics, which we should think of as the study of quantum mechanical systems with some underlying supersymmetry. The first fundamental piece of a supersymmetric quantum mechanical system is a super Hilbert space (a complete super vector space with an inner product) $\mathcal{V} = \mathcal{V}_0 \oplus \mathcal{V}_1$ with a Hamiltonian operator $H \colon \mathcal{V} \rightarrow \mathcal{V}$ which is an even morphism. Supersymmetry now requires also some kind of interchange of even and odd components, which means there are some odd self-adjoint morphisms $Q_i \colon \mathcal{V} \rightarrow \mathcal{V}$, and they need to satisfy some relations.
We will focus on the very simplest case for these relations:
- $Q_i H = H Q_i$, which is to say that $Q_i$ is a time-independent symmetry of the system, as $H$ generates how the state evolves in time.
- $Q_i Q_j + Q_j Q_i = 0$ for $i \neq j$, which is the proper way to say that $Q_i$ and $Q_j$ (graded) commute with each other. Indeed, notice that $Q_i$ and $Q_j$ are both odd, so this is the proper notion of the commutator relation $[Q_i,Q_j] = 0$. Physically, we should think that since these operators commute, the supersymmetries that they represent do not interfere with each other.
- $Q_i^2 = H$, which is a bit more mysterious to me. It tells us that each supersymmetry is somehow a square-root of time evolution. From what I heard from physicists, this is really a fundamental property, but I don't yet understand how to think about it. Nonetheless, it is important in the algebraic machinery which follows.
Remark: One often introduces the operator $(-1)^F$ which is the identity on $\mathcal{V}_0$ and is negation on $\mathcal{V}_1$. Then, for example, Witten writes the requirement $(-1)^FQ_i + Q_i(-1)^F = 0$. But this is just the condition that $Q_i$ is odd, which we were already implicitly assuming.
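All of these relations can be realized in a $2$-dimensional toy model (my own illustration, not an example from Witten's paper): take $\mathcal{V}_0 = \mathbb{C}$ bosonic, $\mathcal{V}_1 = \mathbb{C}$ fermionic, and supercharges given by Pauli matrices.

```python
import numpy as np

# V_0 = C*e1 (bosonic), V_1 = C*e2 (fermionic); sigma_x and sigma_y are
# odd, self-adjoint, and square to the same Hamiltonian H.
Q1 = np.array([[0, 1], [1, 0]], dtype=complex)   # sigma_x
Q2 = np.array([[0, -1j], [1j, 0]])               # sigma_y
F = np.diag([1.0, -1.0]).astype(complex)         # the operator (-1)^F

H = Q1 @ Q1
print(np.allclose(Q2 @ Q2, H))            # True: Q_1^2 = Q_2^2 = H
print(np.allclose(Q1 @ Q2 + Q2 @ Q1, 0))  # True: Q_1 Q_2 + Q_2 Q_1 = 0
print(np.allclose(Q1 @ H, H @ Q1))        # True: Q_1 commutes with H
print(np.allclose(F @ Q1 + Q1 @ F, 0))    # True: Q_1 is odd
```

Note that here $H$ is the identity, so the minimal energy is $1 > 0$ and no state is killed by the $Q_i$; in the language of the following paragraphs, supersymmetry is spontaneously broken in this toy model.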
For us, that will be enough to enter the world of Witten's paper. However, it is worthwhile to point out that these algebraic conditions do have some useful consequences in general (even if we will not really see them in this blog post).
First, we already know that the eigenstates of $H$ play an important role in understanding the time evolution of the quantum mechanical system. Since $H = Q_i^2$ with $Q_i$ self-adjoint, we find that $H$ has only non-negative eigenvalues: indeed, $\langle H\psi, \psi \rangle = \|Q_i\psi\|^2 \geq 0$. Hence, all physics occurs for such supersymmetric systems at non-negative energies.
Second, one phenomenon which physicists care very much about is determining whether a given symmetry of a system is unbroken or broken. If it is unbroken, then fermions and bosons exist in pairs of equal mass. Meanwhile, if it is broken, then any state of minimal energy, called a vacuum state, will not be fixed by the entire symmetry of the system; this phenomenon is known as spontaneous breaking. In our setting, the symmetry is unbroken exactly when there exists a state $\psi$ such that $Q_i\psi = 0$ for all $i$. (The operator $Q_i$ is actually an infinitesimal generator of the symmetry, so this equation tells us that the $1$-parameter family generated by $Q_i$ fixes $\psi$, and so $\psi$ is symmetry-invariant.) From the algebraic formalism, it is equivalent to ask simply that $H\psi = 0$, since indeed $H = Q_i^2$ with $Q_i$ self-adjoint, or even more simply, just that $Q_1\psi = 0$.
I think the Hilbert spaces arise from $L^2$ functions on a Lagrangian submanifold inside of a symplectic manifold $M$ rather than on $M$ itself. As a side remark, if you take $L^2(\mathbb{R})$, where $\mathbb{R}$ is the $x$-axis in $\mathbb{R}^2$, our theory is unitarily equivalent to taking any other Lagrangian subspace in $\mathbb{R}^2$.
If the symmetry is broken (in the SUSY section), this means that a vacuum state will have positive energy, right?
We're using the Lagrangian submanifold $M \subset T^*M$ (or in the example, $\mathbb{R} \subset \mathbb{R}^2$). This is because we're actually quantizing $T^*M$, the phase space, where Hamiltonian mechanics lives. So at least in the simple case of a single particle on a manifold $M$, your remark is already implicit in the main post.
More generally, you could try to quantize a general symplectic manifold, perhaps with integral symplectic form (which gives a well-defined line bundle $L$ which is where we should take sections up to a twist) and there's a whole story for this which I don't know well at all (so take the following with a grain of salt), but the intuition of taking $L^2$ functions on a Lagrangian submanifold is just a part of the idea of geometric quantization. So it's kind of right, but you have to be careful about global considerations. This is typically handled in the form of a polarization on the symplectic manifold, where one considers instead $L^2$ sections of $L$ which are polarization-invariant. In the example of $T^*M$, we may take our polarization so that polarization-invariance corresponds to invariance under translation in the fiber directions, so we recover the Hilbert space of $L^2$ functions on $M$ itself. In more general situations where the polarization has some monodromy associated to it, we do not simply obtain $L^2$ functions on a Lagrangian submanifold.
As for supersymmetry breaking, in the simple case described, your statement is certainly true. A physicist should correct me if I’m wrong, but I believe that even in more general situations (global SUSY), supersymmetry breaking occurs if and only if the vacuum states have non-zero (positive) energy.
Ah, I see. Thanks for clarifying. It seems that in physics, $T^*M$ is usually the symplectic manifold considered, and closed symplectic manifolds are considered less often, since they are locally but not globally phase spaces. I suppose one might take a torus, like $T^{12}$, to represent two objects in $\mathbb{R}^3$: we take $T^* \mathbb{R}^6$ and then mod out by translations, because maybe we just care about their relative positions to each other. But I wonder if physicists consider other symplectic manifolds which aren't just phase space modulo some symmetries.
Thanks for the response on the SUSY question.
One point about compact vs non-compact symplectic phase spaces is that the dimension of the quantum Hilbert space is given by the symplectic volume of the manifold (this is an integer, since a pre-quantum line bundle ensures that the symplectic form has integral periods). Compact phase spaces show up in Witten's quantization of Chern-Simons theory, where you quantize a compact space of flat connections on a Riemann surface, obtaining a finite dimensional space of quantum states!
Is the quantization of flat connections on a Riemann surface related to instanton Floer homology? I must admit, I don't really understand what quantization means; I vaguely think of it as introducing a map from classical observables to operators on a Hilbert space, which should represent quantum observables. The map should preserve some structure between the Poisson and Lie brackets, though not in the way that I would naively guess.