If you restrict yourself to certain types of objects and environments, you can use totally different techniques.
For a game dev class in uni, I wrote air hockey. This is straightforward because you only have circles and lines, and there's no gravity so objects can't rest on each other. I wrote an event-based physics engine, which exploits the fact that the equations to find the time t of a future collision are closed-form and you can just solve for t. If t is greater than the time to the next frame, just step everything forward; if it's less, step to that collision, then find the next one. Collision response is easy too: just choose a reference frame where one of the pucks is stationary, then solve conservation of momentum and energy.
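For two circles, that closed-form solve is just a quadratic in t. A minimal sketch (my own, not the actual class code):

```python
import math

def circle_collision_time(p1, v1, r1, p2, v2, r2):
    """Earliest future time t >= 0 at which two moving circles first
    touch, or None. Solves |(p1 - p2) + (v1 - v2) t| = r1 + r2,
    which is a quadratic in t."""
    dx, dy = p1[0] - p2[0], p1[1] - p2[1]
    wx, wy = v1[0] - v2[0], v1[1] - v2[1]
    R = r1 + r2
    a = wx * wx + wy * wy
    b = 2.0 * (dx * wx + dy * wy)
    c = dx * dx + dy * dy - R * R
    if a == 0.0:                  # no relative motion, never collide
        return None
    disc = b * b - 4.0 * a * c
    if disc < 0.0:                # paths never come within R of each other
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # smaller root = first touch
    return t if t >= 0.0 else None
```

If the returned t is less than the time to the next frame, you step straight to the collision, exactly as described above.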
This excludes some kinds of friction models, but if you choose one where deceleration due to friction is independent of speed (perfectly valid for lowish speeds), you're fine.
Another cool thing about an event-based system is that it makes it trivial to write an AI: just have the system try N moves toward the nearest puck, and run the simulation forward a few collisions to see if any of those moves results in a puck going in the opponent's goal.
Don't forget that game engines, in addition to "realism", have to handle the Hollywood physics required of them. Doors shall open with a bang, and then remain open.
Which adds energy to the system; if that energy gets pumped into a ragdoll corpse, it leads to funny reanimations.
His engine is doing a pretty good job. I looked at the code and can't figure out how he gets stable multipoint contact for the pile of balls out of that. There's no linear complementarity problem solver. There's no elaborate integrator. Yet it's resolving problems where balls are in contact with many other balls, and where many balls are constrained in multiple directions. Naive approaches to that usually go "sprong" and send something flying.
Maybe it's just using a really tiny physics time step. That allows forces to propagate through the whole pile over many time steps.
Objects might sink into each other a bit, but that is generally not very noticeable unless you make huge piles of spheres. However, it might quickly become a problem in 2D, since you only need a quadratic number of objects to get a "critical" pile, compared to a cubic number of spheres in 3D.
That makes it so nothing explodes, and then there is a second solver that does a small raw position correction after. So it's not lifelike realism, but it works to stop objects sinking into each other too much. There is still a small error though, I'd have to look into more iterations and all that.
Well understood, but still very complex to implement well, I think. Most of the novel physics-engine-type things I see in the robotics space use Bullet under the hood for detection, even as they implement their own constraint solver / whatever else.
I had to figure this out recently. A couple of things that really helped me were Erin Catto's GDC lectures and Allen Chou's game physics articles.
I think of a Jacobian as a way to transform the velocities from however they're stored for the interacting objects into linear world-space velocities.
For example, in 2D rigid body physics, rotational velocity is usually just a single number representing how fast something is spinning. Given a radius from the center of a spinning object (e.g. object origin -> a contact point in world space), the instantaneous linear velocity is the rotation speed times the tangent to that radius. Therefore, the Jacobian is just that tangent vector - multiplication by that vector transforms scalar rotational velocities into 2D linear velocities. The Jacobian for the object's linear velocity is just the identity matrix.
The rotation Jacobian is more complicated in 3D and 4D, but the principle is the same - find a matrix (at a particular point, usually the contact point or the joint location) to multiply with whatever represents rotational velocity to get the linear velocity.
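A minimal 2D sketch of that idea - the scalar angular velocity is mapped to a world-space linear velocity by the tangent vector, exactly as described above (hypothetical names):

```python
def contact_point_velocity(v, omega, r):
    """World-space velocity of a point on a 2D rigid body.

    v:     (vx, vy) linear velocity of the body's center
    omega: scalar angular velocity (radians/sec, CCW positive)
    r:     (rx, ry) vector from the center to the point of interest

    The rotation "Jacobian" here is the tangent perp(r) = (-ry, rx):
    multiplying the scalar omega by it yields a 2D linear velocity.
    The Jacobian for v itself is just the identity.
    """
    tangent = (-r[1], r[0])          # perpendicular to the radius
    return (v[0] + omega * tangent[0],
            v[1] + omega * tangent[1])
```

For a body spinning in place, a point one unit to the right of the center moves straight up, which is the sanity check to run against any implementation.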
I was writing my own 3D engine for some time also. I decided at some point I would not venture into physics, that is where I would draw the line.
Then I understood that even doing proper collision detection would mean implementing some kind of physics: you need frame-independent interpolation of objects moving through 3D space, both so that objects don't pass through each other when a movement step is too big, and to move objects properly at all. You can't just linearly move an object from one place to another, because framerates vary wildly, and if the frametime is too big for a particular frame, the object might tunnel straight through another.
That is where I just stopped, and decided that it was too much work to do anything proper by myself, and gave up on writing my own 3D engine.
I see many people going through this process, writing their own engines and libraries. Somebody told me when I started that I should use an existing engine, but I didn't listen. That was a good thing for learning, but at the same time it was time spent on something that could have been done with existing libraries or engines.
This is what I feel any time I see somebody write about reinventing the wheel, but I guess it's a process that many choose to go through for learning and fun.
Quite a few physics engines don't help much with this particular problem though. If you step the physics world in too large or variable intervals (e.g. once per frame), then small and fast objects can still tunnel through thin objects without registering a collision, you need to tweak the step size for your specific scenario (smallest and fastest objects), call the step function in fixed intervals several times each frame, and carry the last remaining 'time slice' over to the next frame.
And the obvious next question is: why don't all physics engines implement a "proper" continuous collision system? I guess because checking collisions in discrete steps is a lot cheaper, so it can be an acceptable tradeoff if you know the tunneling problem exists and how to work around it.
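The fixed-step loop with a carried-over time slice looks roughly like this (a sketch; `step_fn` stands in for whatever engine step call you use):

```python
def advance(step_fn, frame_dt, accumulator, fixed_dt=1.0 / 120.0):
    """Step physics in fixed-size slices, independent of frame rate.

    Runs step_fn(fixed_dt) for as many whole slices as fit into the
    accumulated frame time; the remainder is returned and carried over
    to the next frame. fixed_dt must be tuned for the smallest and
    fastest objects in the scene to avoid tunneling.
    """
    accumulator += frame_dt
    while accumulator >= fixed_dt:
        step_fn(fixed_dt)
        accumulator -= fixed_dt
    return accumulator
```

A slow frame simply runs more physics steps; a fast frame may run none and just grow the accumulator.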
I also designed my own 3D engine, but after attacking the physics engine I quickly (and have never regretted it!) switched to the open source Bullet engine instead, after going through its code. I would never have the stamina to write that, and all that code and that bunch of algorithms really are necessary for doing physics in "free world" games like mine.
Btw Bullet has continuous collision detection. It's not fundamentally magical - you extrude a hull of each shape along its velocity and collide the hulls instead of the shapes. This effectively solves the problem of bullets or other fast-moving objects going through walls, etc. (This is why the engine is called Bullet, btw.)
Actually the most difficult problem I had with the physics was the player physics, as it's usually not handled by making the player a normal physics body (for gameplay reasons). The player shape can easily get caught on infinitesimal seams between polygons and edges in the world around you. :)
> proper frame independent linear interpolation of objects moving in 3D space
Reminds me of another thing - back when I was trying to write my own 3D engine, I'd never heard of the concept of interpolating physics state between rendering frames. Something that's considered table stakes today.
From what I vaguely understand, the goal here is to reduce jerkiness if you render faster than you tick your physics. But I always thought that, beyond stability, the other main reason to make your physics time step independent of rendering is so that you can tick physics more often than you draw.
(Thinking about it now, I suppose this trick lets you update physics less frequently than drawing while faking fluid movement, and free up CPU time for other tasks.)
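The interpolation itself is just a blend of the two most recent physics states, weighted by how far render time has advanced past the last completed step - a sketch with hypothetical names:

```python
def interpolate_state(prev, curr, alpha):
    """Blend the two most recent physics states for rendering.

    alpha = accumulator / fixed_dt, in [0, 1): how far render time has
    advanced past the last completed physics step. Rendering this blend
    lags the simulation by up to one physics step, which is the price
    of smooth motion at render rates above (or below) the tick rate.
    """
    return tuple(p + (c - p) * alpha for p, c in zip(prev, curr))
```

Here `prev` and `curr` are flat tuples of positions; a real engine would also interpolate rotations (e.g. by slerping quaternions).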
I have huge admiration for anyone who manages to implement a physics engine. I've tried to do this twice now (to integrate with my 2D canvas library) and both times I've not been able to solve the collision response part of the system.
Given that I failed my 'A' level maths in spectacular fashion (I honestly don't get integration - it just doesn't compute for me), I'm obviously attempting the impossible here. But I live in hope that one day I will somehow stumble on a solution that looks like it's working on screen ... which will be enough for me.
I've tried combining my library with third party physics engines - but that, too, is hard work. The best I've ever managed is with the Matter.js engine but the amount, and complexity, of code needed to create a scene is too much for my tastes.
Maybe I'll go back for a third attempt if/when the second Covid wave hits.
The solution is to borrow ideas from those who have done it before.
One idea comes from Quake: for collision response, call a function recursively until there's no more collision.
Basically, it pushes you away from a given plane. So the function looks like pushAway(plane, you). If that pushes you into something else, you get pushed away again, recursively.
If you imagine a V shape, the result is that you end up "standing" on the V, because you're getting pushed away from both sides of the V equally.
I'm not sure my explanation is very good, so to simplify: just add recursion, lol. (You'll need to limit the max recursion depth, or else you'll get a frozen game.)
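A sketch of that recursive push-out for a circle against planes (hypothetical helper, not Quake's actual code; planes are (normal, offset) pairs with unit normals):

```python
def resolve(position, radius, planes, depth=0, max_depth=8):
    """Push a circle out of any plane it penetrates, recursively.

    A circle penetrates a plane when its center's signed distance to
    the plane is less than its radius. Each fix pushes the circle out
    along that plane's normal; if the push causes a new penetration,
    we recurse. Two planes forming a V leave the circle "standing" in
    the crease, pushed away from both sides equally.
    """
    if depth >= max_depth:          # bail out instead of freezing the game
        return position
    for (nx, ny), d in planes:
        dist = position[0] * nx + position[1] * ny - d
        if dist < radius:           # penetrating: push out along the normal
            push = radius - dist
            position = (position[0] + nx * push, position[1] + ny * push)
            return resolve(position, radius, planes, depth + 1, max_depth)
    return position
```

With the depth limit, an unsatisfiable configuration just gives up with a slightly wrong position instead of recursing forever.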
I used to try to solve things from first principles – and it's more fun to do that – but there's no shame in copying the answer. Carmack's solutions tended to be copied from the medical rendering field anyway. :)
> The solution is to borrow ideas from those who have done it before.
I've tried that. Open various libraries, go through the code, work out how/where they perform the collision response calculations, trace back the vars passed into those calculations to see where their values get calculated, etc, etc ... and clearly I keep missing some key step or other along the way. I'm either misunderstanding the collision normal, or misinterpreting the collision depth, or my understanding of torque is deeply flawed (very possible, that one) ... frustration builds. Then I blame the engine coders for being too clever with their code and try a different library.
I'll get there one day. Failure is not a disaster!
As somebody who also struggles with math, I feel the problem is that it's extremely unforgiving. For instance, when I was trying to write some simple 3D rotation matrices in Lua, I kept getting crazy results just because I'd invariably mix up one cosine with a sine, or a plus with a minus. I don't really have any thoughts about how to get around this, though.
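One mitigation is to pin each matrix down with a sanity check on a known input, which catches a swapped sine/cosine or flipped sign immediately. A sketch (in Python rather than Lua):

```python
import math

def rot_z(theta):
    """Rotation about the Z axis (right-handed, CCW looking down +Z)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c,  -s,  0.0],
            [s,   c,  0.0],
            [0.0, 0.0, 1.0]]

def apply(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

# Sanity check on a known input: rotating the X axis 90 degrees about Z
# must give the Y axis. Any swapped sine/cosine or sign fails this.
rotated = apply(rot_z(math.pi / 2), (1.0, 0.0, 0.0))
assert all(abs(a - b) < 1e-9 for a, b in zip(rotated, (0.0, 1.0, 0.0)))
```

The same trick works for the other axes: rotate Y into Z about X, and Z into X about Y.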
> Note the use of pointers, this forces other systems to take care of the actual storing of objects, leaving the physics engine to worry about physics, not memory allocation.
Based on the access patterns in the code (iterating through every physics object every step), keeping the physics objects behind a pointer is actively cache-hostile! If at all possible, try to keep these objects in contiguous memory.
Yeah, it's up to the person using it to store them sensibly. I use an ECS so they are mostly all contiguous. I think the pointer hurts the cache for the first iteration, but once the whole array is cached, I think the pointer becomes ok.
The naive integrator you're describing - just taking the differential equations and substituting a small, non-zero time step for dt - is known as the Euler integrator. As you say, this scheme does not keep energy invariant. In fact, energy may grow unbounded if you use it.
The simplest integrator that "preserves" energy is the Symplectic Euler integrator, which reads:
v <- v + F/m * dt
x <- x + v * dt
The key is that you use the updated velocity to update the position in the same time step. In this scheme, energy isn't exactly preserved, it varies a little bit from one time step to the next. However, you can guarantee that the energy fluctuations will remain bounded, if your time step is below a (fairly large) critical value.
Reading the code of this blog post: the first version of Step (at the beginning of the blog post) uses the Symplectic Euler scheme, and the second version (towards the end) uses the naive Euler scheme. Bullet, a popular physics engine, uses the Symplectic Euler method.
A popular, more sophisticated scheme is the Runge-Kutta integrator, which comes in variants that preserve energy and ones that don't. The MuJoCo simulator, used in robotics research, uses the Runge-Kutta method.
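The difference is easy to demonstrate on a spring (harmonic oscillator), where the exact solution keeps energy constant. A sketch comparing the two Euler variants:

```python
def spring_energy(steps, dt, symplectic):
    """Integrate a unit mass on a unit spring (F = -x) from x=1, v=0
    and return the final energy 0.5*v**2 + 0.5*x**2.

    The exact dynamics keep energy at 0.5 forever. Naive Euler gains
    a factor (1 + dt**2) of energy every step; Symplectic Euler keeps
    it bounded near 0.5.
    """
    x, v = 1.0, 0.0
    for _ in range(steps):
        if symplectic:
            v = v - x * dt       # update velocity first...
            x = x + v * dt       # ...then use the UPDATED velocity
        else:
            x_new = x + v * dt   # naive Euler: position uses old velocity
            v = v - x * dt       # and velocity uses old position
            x = x_new
    return 0.5 * v * v + 0.5 * x * x
```

After 1000 steps at dt = 0.1, the naive version has gained about four orders of magnitude of energy while the symplectic one is still hovering around the true value.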
Oh you're correct, I was trying to make the code more readable for the blog post and didn't even think of that!
Thanks for the heads up - I would never have caught that. And good info about the different integrators; I'm not going to pretend I knew all of it. It makes sense though, because the velocity used for the position would be a frame behind.
I was looking into the Runge-Kutta method, but it looked too complex for what I was doing.
Each displayed object was backed by a simplified "physics 3D model" (small number of points of mass, fully connected with edges), and after applying forces and moving points, we checked the length of each connecting link and made sure that they didn't stretch or compress too much. Rigs of Rods has a similar model: https://en.wikipedia.org/wiki/Rigs_of_Rods
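A sketch of that length check as an iterative constraint pass (equal masses assumed; my own sketch, not Rigs of Rods' actual code):

```python
import math

def relax_distance_constraints(points, edges, iterations=10):
    """Nudge connected point pairs back toward their rest lengths.

    points: list of [x, y] positions (mutated in place)
    edges:  list of (i, j, rest_length) connecting links

    Each pass moves both endpoints half the error along the link, so a
    stretched link contracts and a compressed one expands. Repeating
    the pass lets corrections propagate through the whole structure.
    """
    for _ in range(iterations):
        for i, j, rest in edges:
            dx = points[j][0] - points[i][0]
            dy = points[j][1] - points[i][1]
            dist = math.hypot(dx, dy)
            if dist == 0.0:
                continue                      # degenerate link, skip
            correction = 0.5 * (dist - rest) / dist
            points[i][0] += dx * correction
            points[i][1] += dy * correction
            points[j][0] -= dx * correction
            points[j][1] -= dy * correction
    return points
```

This is essentially the constraint step of a position-based (Verlet-style) soft body; stiffness comes from running more iterations rather than from explicit spring forces.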
The basic implementation of GJK is inherently branchy. Extracting the contact points is similarly branchy; I'm not sure offhand how that would translate into GPGPU-friendly code.
I'm also not sure it's worth it. If you have a small number of bodies, it doesn't make sense to attempt the collision detection on the GPU; the benefit of the GPU is really that it scales. If you _do_ have a large number of bodies, you're likely going to be using lots of your GPU for rendering anyway...
There's also the gameplay side of it - moving the bulk of the physics engine to the GPU would make things like contact filtering and scene querying difficult. Gameplay code is usually very straightforward, and adding complex or async callbacks would likely be met with resistance.