I can't tell if it is using the GPU, SIMD, multiple cores, memory compression/decompression, code rewriting (JIT), or other high-performance techniques.
The one application I know for 4D collision detection is 3D continuous collision detection. If two 3D objects are moving, you can parameterize their motion through time to obtain a 4D shape. If those motions are purely translational, and the two shapes are convex and have a support function, then testing for contact is easy (e.g. using GJK). However, if the motions include rotations, the resulting 4D shape will be concave, so interference detection may be hard.
I do not think anybody actually uses this method though.
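To make the translational case concrete, here is a rough sketch (not tied to any particular library; `Vec3`, `support3`, and `support4` are names I made up for the illustration) of the 4D support function of a convex shape swept along a constant velocity over t in [0, 1]:

```rust
// Sketch only: the 4D "shape through time" of a convex shape translating with
// constant velocity `vel` over t in [0, 1] is { (p + t*vel, t) : p in S }.
// Its support function splits into the shape's 3D support plus a choice of
// t = 0 or t = 1 depending on the sign of the time coefficient.

type Vec3 = [f64; 3];

fn dot(a: Vec3, b: Vec3) -> f64 {
    a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
}

/// Support point of the swept 4D shape in direction (dir, dir_t).
/// We maximize dot(p, dir) + t * (dot(vel, dir) + dir_t) independently in p and t.
fn support4(
    support3: &dyn Fn(Vec3) -> Vec3,
    vel: Vec3,
    dir: Vec3,
    dir_t: f64,
) -> (Vec3, f64) {
    let p = support3(dir);
    let t = if dot(vel, dir) + dir_t > 0.0 { 1.0 } else { 0.0 };
    ([p[0] + t * vel[0], p[1] + t * vel[1], p[2] + t * vel[2]], t)
}

fn main() {
    // Example: a unit sphere at the origin moving along +x.
    let sphere_support = |d: Vec3| {
        let n = dot(d, d).sqrt();
        if n == 0.0 { [0.0, 0.0, 0.0] } else { [d[0] / n, d[1] / n, d[2] / n] }
    };
    let (p, t) = support4(&sphere_support, [2.0, 0.0, 0.0], [1.0, 0.0, 0.0], 0.0);
    println!("support point {:?} at t = {}", p, t);
}
```

GJK between two such swept shapes would then use the support function of their Minkowski difference, exactly as in the 3D case, with the extra time coordinate along for the ride.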
Having worked through half of the Nature of Code book and implemented all of its physics examples, I realized that physics libraries are all approximations (not simulations) of the real world. To make the approximations as close as possible to what we see in the real world, you need many more fine-grained calculations. This limits the rate at which the simulation can run. You either run your simulation less accurately at 30 updates per second, or you run it with increased accuracy at 10 updates per second. When demoing physics libraries, authors usually want to showcase the small details that increase realism. This leads to more calculations, which leads to a greater slowdown.
I think gravity is the first force we notice when it's off, because we're very used to seeing objects fall. For gravity to feel right, the effective 9.8 N/kg has to match a specific framerate. If you update less often, you slow down the world. If you update much faster, objects fall faster. You might want to increase gravity to 19.6 N/kg if your fps is 50% slower to maintain the illusion of real speed, but that effectively means objects skip ahead 19.6 units per update instead of just 9.8, and that hurts collision accuracy. There are a lot of parameters like this that can be tweaked, but since many results are then used as inputs to other calculations, changing one parameter affects the entire simulation. Once you reach a certain level of realism, you don't want to re-tune those parameters for a different number of updates per second.
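As a toy illustration of that trade-off (this is not any real engine's integrator; `Body`, `step`, and `GRAVITY` are made up for the example): with a fixed-step integrator, the per-update step size controls both how fast things fall in wall-clock time and how far a body skips between collision checks.

```rust
// Toy fixed-step integrator: gravity is applied per update, so the update
// rate and step size directly set how fast things fall and how far a body
// jumps between collision checks.

const GRAVITY: f64 = 9.8; // N/kg, i.e. m/s^2

struct Body {
    y: f64,  // height in meters
    vy: f64, // vertical velocity in m/s
}

/// One semi-implicit Euler update with step `dt`.
fn step(body: &mut Body, dt: f64) {
    body.vy -= GRAVITY * dt; // velocity gains GRAVITY * dt per update
    body.y += body.vy * dt;  // bigger dt = bigger skip between collision checks
}

fn main() {
    // Same simulated second, two update rates: the end positions agree,
    // but the 30 Hz body covers each step in larger jumps.
    let mut a = Body { y: 100.0, vy: 0.0 };
    let mut b = Body { y: 100.0, vy: 0.0 };
    for _ in 0..60 { step(&mut a, 1.0 / 60.0); }
    for _ in 0..30 { step(&mut b, 1.0 / 30.0); }
    println!("60 Hz: y = {:.3}, 30 Hz: y = {:.3}", a.y, b.y);
}
```

If you instead keep the step size fixed and just update less often, the world runs in slow motion, which is the other half of the problem described above.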
In theory, this shouldn't happen if you run your physics library as a state machine that takes time as input and outputs the state of the system to your renderer. This lets your physics simulation run at 30 updates per second and your graphics rendering at 60 fps. In practice, they're all running on the same machine, along with any video capture, and are competing for resources. That leads to slowdowns, which you notice as slow motion.
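For what it's worth, here is the generic shape of that idea, a fixed-timestep loop driven by wall-clock time; `step_physics` and `render` are placeholders, not real nphysics calls:

```rust
use std::time::Instant;

// Minimal sketch of "physics as a state machine driven by time": the
// simulation advances in fixed 30 Hz steps no matter how fast the renderer
// runs, and catches up in bursts if rendering or video capture stalls.

const PHYSICS_DT: f64 = 1.0 / 30.0;

fn step_physics(state: &mut f64) {
    // stand-in for a real physics update
    *state += PHYSICS_DT;
}

fn render(state: &f64) {
    // stand-in for drawing a frame
    let _ = state;
}

fn main() {
    let mut state = 0.0;
    let mut accumulator = 0.0;
    let mut last = Instant::now();

    loop {
        let now = Instant::now();
        accumulator += now.duration_since(last).as_secs_f64();
        last = now;

        // Consume elapsed time in fixed steps; if a frame took too long,
        // several steps run in a row instead of the world slowing down.
        while accumulator >= PHYSICS_DT {
            step_physics(&mut state);
            accumulator -= PHYSICS_DT;
        }

        render(&state);
        std::thread::sleep(std::time::Duration::from_millis(16)); // ~60 fps pacing
        if state > 5.0 { break; } // stop the toy loop after ~5 simulated seconds
    }
}
```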
If anyone can explain this better, please do chime in.
Yes, the physics demo shown in the user guide was a bit slow (less than 60 fps for sure). That video was made about a year ago, using a version of nphysics that was at least 5 times slower than the current one.
Also, unless gravity is artificially boosted, things may still look a bit slow because of the scale issue described by aaronetz (in all the demos, most objects are around 1 meter tall, and the camera is far enough away to see them all). AFAIK, it is not uncommon in, e.g., video games to artificially increase gravity so that things look a bit more dynamic.
I think it's a matter of scale: the cubes are set to be quite large, so the camera is quite far from them and things seem to move slowly. Think of a pile of large crates vs. a pile of wooden toy blocks.