So not too long ago I read The Theoretical Minimum by Susskind and Hrabovsky. Although it mostly stuck to the usual presentation of potential energy, there were enough hints that I finally got a clue, and the concept is beginning to make sense to me.

Here’s how. Instead of Conservation of Energy or the Principle of Least Action, start with this notion:

The forces acting on an object come from conservative vector fields.

A conservative vector field is one in which the path integral between any two points is independent of the path taken. You get a conservative vector field whenever you take the gradient of a scalar field, and—given certain reasonable topological constraints—the converse is also true.
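Path independence is easy to check numerically. Here's a quick sketch of my own (not from the book; the potential \(V(x,y) = x^2 y\) and the two paths are arbitrary choices): integrating \(\nabla V\) along two different paths between the same endpoints gives the same answer, namely \(V(B) - V(A)\).

```python
import numpy as np

def grad_V(p):
    """Gradient of the example potential V(x, y) = x²·y."""
    x, y = p
    return np.array([2.0 * x * y, x * x])

def line_integral(path, n=20_000):
    """Midpoint-rule approximation of ∫ ∇V · dl along path: [0,1] -> R²."""
    t = np.linspace(0.0, 1.0, n)
    pts = np.array([path(ti) for ti in t])
    mids = 0.5 * (pts[:-1] + pts[1:])      # midpoints of each segment
    dl = np.diff(pts, axis=0)              # segment displacement vectors
    return float(sum(grad_V(m) @ d for m, d in zip(mids, dl)))

A, B = np.array([0.0, 0.0]), np.array([1.0, 2.0])
straight = lambda t: A + t * (B - A)                                 # straight line A -> B
detour = lambda t: A + t * (B - A) + np.array([np.sin(np.pi * t), 0.0])  # same endpoints, bowed out

I1, I2 = line_integral(straight), line_integral(detour)
print(I1, I2)  # both ≈ V(B) - V(A) = 1² · 2 - 0 = 2
```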

And that’s all the potential is: it’s a scalar field that, when you take the gradient, gives you the force: \(F = -\nabla V\)! (The reason for the sign change will be apparent in a minute.)

Since potential is an antiderivative, it’s only determined up to an additive constant: \(\nabla V = \nabla(V + c) \) for any constant \(c\). So the actual value of the potential is basically meaningless, only differences and differentials are important. In other words, it’s a torsor.
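The additive-constant point can also be checked numerically (again a sketch of my own; the field and sample point are arbitrary): shifting \(V\) by a constant leaves the gradient, and hence the force, untouched.

```python
import numpy as np

def num_grad(f, p, h=1e-6):
    """Central-difference gradient of a scalar field f at point p."""
    g = np.zeros_like(p)
    for i in range(len(p)):
        e = np.zeros_like(p)
        e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2 * h)
    return g

V = lambda p: p[0] ** 2 * p[1]           # same example potential as before
p = np.array([1.3, -0.4])
g1 = num_grad(V, p)
g2 = num_grad(lambda q: V(q) + 42.0, p)  # V shifted by an arbitrary constant
print(g1, g2)  # identical: the constant cancels in every difference
```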

Now if you have a scalar field \( V \) with units \( u \), the gradient \( \nabla V \) is a vector field (to be precise, a rank-1 covariant tensor field) with units “\( u \) per unit distance.” For example, if you have a temperature field in degrees Celsius, the temperature gradient is a vector field with units of degrees Celsius per meter.

So if \( \nabla V \) is force, which is mass times distance over time squared, then \( V \) must be mass times distance *squared* over time squared — that explains the weird units!
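Here's the same units argument as a toy calculation (my own sketch): represent a unit by its exponents of (mass, length, time), so multiplying units adds exponents.

```python
# Units as exponent tuples over (mass, length, time).
FORCE = (1, 1, -2)   # kg · m / s², from F = ma
LENGTH = (0, 1, 0)

# ∇V carries "units of V per unit distance", so if ∇V has the units
# of force, then V has the units of force × length:
V_UNITS = tuple(f + l for f, l in zip(FORCE, LENGTH))
print(V_UNITS)  # (1, 2, -2): kg · m² / s², i.e. mass times distance squared over time squared
```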

As for conservation of energy: this is the same as saying that the time derivative of the total energy (kinetic plus potential) is always zero, \( {d \over dt} (E + V) = 0 \).

Start with the potential energy term: \({d \over dt} V(x) = \nabla V \cdot \dot{x} \) — that’s just the vector version of the chain rule.

Next the kinetic energy term. Now velocity is a vector, so you can’t really square it, so postulate an inner product and use that instead: \(E = {1\over 2}m(v \cdot v)\). The dot product is commutative and obeys the Leibniz rule just like ordinary multiplication, so \( {d\over dt} (v \cdot v) = (\dot{v} \cdot v + v \cdot \dot{v}) = 2(\dot{v} \cdot v) \). (That sorta explains why \(1 \over 2\) appears in the definition of kinetic energy—it cancels out the factor of 2 you get when you differentiate velocity squared.)
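To reassure myself about that Leibniz step, here's a quick finite-difference check (my own sketch; the velocity curve is an arbitrary smooth function): the derivative of \(v \cdot v\) matches \(2(\dot{v} \cdot v)\).

```python
import numpy as np

def v(t):
    """Some arbitrary smooth velocity curve in R³."""
    return np.array([np.sin(t), t * t, np.exp(-t)])

def a(t, h=1e-6):
    """Its acceleration, by central difference."""
    return (v(t + h) - v(t - h)) / (2 * h)

t0, h = 0.7, 1e-6
# d/dt (v·v), by central difference:
lhs = (v(t0 + h) @ v(t0 + h) - v(t0 - h) @ v(t0 - h)) / (2 * h)
# 2 (v̇ · v), straight from the Leibniz rule:
rhs = 2 * (a(t0) @ v(t0))
print(lhs, rhs)  # the two agree to numerical precision
```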

Putting everything together, you get: \[ m\dot{v} \cdot v + \nabla V \cdot \dot{x} = 0 \]

Now make some substitutions:

- \(\dot{x} = v\) — the time derivative of position is velocity;
- \(\dot{v} = a\) — the time derivative of velocity is acceleration; and
- \(\nabla V = -F\), from the definition of potential energy.

Then shuffle things around to get: \[ ma \cdot v = F \cdot v \] And there you go: in a conservative vector field, conservation of energy is a consequence of good old Newton’s law!

\[ F = -\nabla V \land F = ma \implies \dot{E} + \dot{V} = 0 \]
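The punch line can be put to a numerical test too (my own sketch; the spring potential, mass, stiffness, and step size are all arbitrary choices): integrate \(F = ma\) with \(F = -\nabla V\) for \(V(x) = {1\over 2}k\,(x \cdot x)\) and watch \(E + V\) hold steady along the trajectory.

```python
import numpy as np

k, m, dt = 3.0, 2.0, 1e-4       # spring constant, mass, time step
x = np.array([1.0, 0.0])        # initial position
v = np.array([0.0, 0.5])        # initial velocity

def grad_V(x):
    """∇V for V(x) = ½ k (x·x), i.e. ∇V = k x, so F = -k x."""
    return k * x

def total_energy(x, v):
    return 0.5 * m * (v @ v) + 0.5 * k * (x @ x)

E0 = total_energy(x, v)
for _ in range(50_000):         # velocity-Verlet integration of F = ma
    v_half = v - 0.5 * dt * grad_V(x) / m
    x = x + dt * v_half
    v = v_half - 0.5 * dt * grad_V(x) / m

print(E0, total_energy(x, v))   # both ≈ 1.75: energy is conserved
```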

Looking back, I can see that Physics has been trying to tell me this all along. It just never seems to put everything together in one place. It occasionally mentions that “objects seek a position of lower potential,” but rarely if ever comes right out and says \(F = -\nabla V \). Whenever I ask about the weird units it either says “that’s just the way it is,” “Leibniz noticed it,” or “that’s because work is force times distance”—which just punts to the even less intuitive (to me anyway) concept of “work”. I had *never* seen anybody demonstrate \(F = ma \implies \dot{E} + \dot{V} = 0\) until reading Susskind and Hrabovsky (and they only do it for the scalar case).

But my biggest mental block was probably the mistaken notion that an object “has” potential energy in the same way that it “has” kinetic energy (or position, velocity, or mass). The total kinetic energy is simply the sum of the kinetic energy of the individual bodies in the system, but potential doesn’t work that way: it is strictly a function of the system as a whole.

As for the Lagrangian, that still doesn’t make any intuitive sense to me at all. It doesn’t appear to have any physical significance; as far as I can tell it’s nothing more than “a quantity which, when integrated and made stationary, gives you the answer you were looking for.”

My best guess is this: the calculus of variations isn’t taught much nowadays, but it used to be all the rage. Back in the day, natural philosophers were having great success applying variational principles to solve all sorts of previously intractable problems. I suspect that Lagrange just had a really cool hammer and \(\int (E - V)\,dt \) is the equation that makes Newtonian mechanics look like a nail.

The Hamiltonian formulation *looks* like it ought to make sense to me, but I haven’t quite grokked it yet. I guess it’s probably time to tackle SICM.