Yes, I'm quite happy with my collision detection engine. But could it be faster? Probably!
Here's an idea I had today. Obviously, I'm obsessed with spatial hashing. It's probably my favorite thing ever. Well, what about this: take a mesh, create a spatial grid of booleans (which can, of course, be represented super-efficiently as a bit vector), then loop over every triangle in the mesh. For each grid cell that the triangle intersects, the corresponding bit gets set to true. Once this has been done for all triangles, you're left with an array of bits representing the spatial buckets with which the mesh collides. Purely for the sake of being able to throw around cool and esoteric terminology, I'll call this array the "spatial signature" of the object.
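As a rough sketch of what I mean, here's how the signature might get built. Everything here is illustrative (the names `cell_size` and `grid_dims` are mine), and I'm cheating by marking every cell touched by a triangle's bounding box rather than doing an exact triangle-cell test, which is conservative but never misses a cell:

```python
def build_spatial_signature(triangles, cell_size, grid_dims):
    """triangles: list of three (x, y, z) tuples each.
    Returns a flat list of booleans, one per grid cell."""
    nx, ny, nz = grid_dims
    bits = [False] * (nx * ny * nz)
    for tri in triangles:
        # Conservative over-approximation: mark every cell overlapped
        # by the triangle's axis-aligned bounding box.
        lo = [min(v[i] for v in tri) for i in range(3)]
        hi = [max(v[i] for v in tri) for i in range(3)]
        for x in range(int(lo[0] // cell_size), int(hi[0] // cell_size) + 1):
            for y in range(int(lo[1] // cell_size), int(hi[1] // cell_size) + 1):
                for z in range(int(lo[2] // cell_size), int(hi[2] // cell_size) + 1):
                    if 0 <= x < nx and 0 <= y < ny and 0 <= z < nz:
                        bits[(z * ny + y) * nx + x] = True
    return bits
```

In a real implementation the list of booleans would, of course, be packed into an actual bit vector.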
Now, here's the fun part. In my previous technique, I looped over the vertices of one mesh, checking them against the density function of the second. The most expensive part of this check is, without a doubt, evaluating the density function. Well, instead, we could prune the possibilities by first checking the vertex against the spatial signature of the object! It's quite inexpensive to hash the vertex to a bucket in the same space as the one in which the spatial signature was created. Then, we simply check the corresponding index in the bit array. If the array indicates that no triangles intersect that spatial bucket, then there's absolutely no chance that the vertex could be colliding with the mesh. On the other hand, in the event that the bit is turned on, then we continue as before with the full-blown density check.
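The pruning step itself is only a few lines. This sketch assumes the signature is stored as a flat list of booleans indexed by grid cell, and that `density` (a stand-in name for the mesh's real density function) is negative inside the surface, which is just my assumed sign convention:

```python
def vertex_collides(p, bits, cell_size, grid_dims, density):
    """Cheap bit-array prune, then the expensive density check."""
    nx, ny, nz = grid_dims
    x, y, z = (int(p[i] // cell_size) for i in range(3))
    if not (0 <= x < nx and 0 <= y < ny and 0 <= z < nz):
        return False                      # outside the grid entirely
    if not bits[(z * ny + y) * nx + x]:
        return False                      # empty bucket: no triangles here
    return density(p) < 0.0               # full-blown density check
```

The win comes from how rarely the last line should execute: for most vertex/mesh pairs, nearly all vertices land in empty buckets.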
This is, of course, a completely untested idea, but it seems to me that the speed with which a bit array could be accessed would significantly cut down on the narrow-phase collision check, especially when dealing with complex density functions.
The war is over; the battle has been won. For now.
I merged the previously-discussed narrow-phase collision engine with the broad-phase spatial hashing engine that I wrote a few months ago, and the results couldn't have been more pleasing! Everything works just as I had wanted, and now the whole collision detection system is as simple as adding objects to the engine, then requesting collision reports in the update step. Everything else is handled automatically - and it doesn't require choosing a certain type of collision mesh or pre-building any special data for collision detection! All it requires is the density function that was used to generate the object's mesh. Furthermore, it's fast!
I also made a minor optimization to the spatial hashing engine that's worth mentioning, as it kicked my framerate back up to 60 after collision detection on the large, rotating space station in my demo had taken it down to around 10 fps. The idea is that, when adding an object to spatial buckets, it's a good idea to 'over-allocate,' so to speak. In other words, scale the bounding box by 150% or so, then add this enlarged box to the spatial buckets. It may waste a few checks in the broad-phase cycle, but it cuts down on the frequency with which you have to change the spatial buckets around, which I found to be a huge bottleneck for really large, moving objects (especially when the spatial grid is kept relatively small).
If you do the aforementioned, then when an object moves, you need only modify the spatial grid if the object's new bounding box isn't contained in the 'over-allocated' one. Now, if you've got a grid size of 1 and a battleship that's 50 x 50 x 50, suffice it to say that it's going to get really expensive, really fast to update the spatial grid every frame if the battleship is moving. So don't!
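A minimal sketch of that logic, assuming the same 150% scale factor mentioned above (the class and field names are illustrative, not from my actual engine):

```python
class HashedObject:
    """Tracks the inflated AABB that was actually added to the grid."""

    def __init__(self, aabb_min, aabb_max, scale=1.5):
        self.stored_min, self.stored_max = self._inflate(aabb_min, aabb_max, scale)

    @staticmethod
    def _inflate(lo, hi, scale):
        # Scale the box about its center by the given factor.
        center = [(a + b) * 0.5 for a, b in zip(lo, hi)]
        half = [(b - a) * 0.5 * scale for a, b in zip(lo, hi)]
        return ([c - h for c, h in zip(center, half)],
                [c + h for c, h in zip(center, half)])

    def needs_rebucket(self, new_min, new_max):
        # Only touch the spatial grid if the true box escapes the stored one.
        return any(n < s for n, s in zip(new_min, self.stored_min)) or \
               any(n > s for n, s in zip(new_max, self.stored_max))
```

With this in place, a slowly-drifting battleship triggers a re-bucketing only once it has moved a quarter of its own size or so, instead of every frame.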
A celebratory shot of me swooping triumphantly over the station, headed back toward the mothership:
But it took a while. The biggest problem I encountered in implementing my scheme was slight inaccuracies in the surfaces produced by the polygonization algorithm, which caused noticeable detection problems when meshes were scaled up. To fix it, I implemented a pseudo-binary search to ensure that the algorithm places vertices at points where the absolute value of the density function is no more than some very small error tolerance, keeping the mesh as true to the real surface as possible. Linear interpolation of the density function just isn't good enough once you introduce metaballs with variable power (i.e., to create rectilinear shapes), which can cause sharp, sudden variations in the density function.
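The pseudo-binary search is really just bisection along the cube edge. A sketch, assuming the density function changes sign between the two endpoints (the tolerance value here is arbitrary, not the one I actually use):

```python
def refine_vertex(p0, p1, density, tol=1e-4, max_iter=40):
    """Bisect the segment p0-p1 until |density| <= tol at the midpoint."""
    d0 = density(p0)
    for _ in range(max_iter):
        mid = tuple((a + b) * 0.5 for a, b in zip(p0, p1))
        dm = density(mid)
        if abs(dm) <= tol:
            return mid
        if (dm < 0.0) == (d0 < 0.0):
            p0, d0 = mid, dm      # midpoint is on p0's side of the surface
        else:
            p1 = mid
    return mid
```

Compared to a single linear interpolation, this costs a handful of extra density evaluations per vertex, but it pins the vertex to the true isosurface even where the field varies sharply.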
After increasing the precision of the polygonized surfaces, the detection worked like a charm! The time required to check two bodies is highly dependent on the resolution of the point set used for the first body. I found that it generally sufficed to use point sets generated from low-resolution meshes for the collision. With optimization, I can get the checking time down to around 0.1 ms per check.
I've been pondering narrow-phase collision detection a lot lately, as it's really the only thing that stands between me and a working space shooter demo. Unfortunately, it's not at all an easy problem, especially considering that my meshes are generated on the fly, so I can make no assumptions about the topology, nor can I rely on hand-constructed bounding trees.
I had been getting pretty excited about an idea I had that involved a two-phase check using AABB-trees for pruning and then spatial bucketing (a la spatial hashing) for triangle-to-triangle testing. Then it hit me: why am I running in circles? I started with a mathematical model of the mesh: an isosurface of a well-defined density function. I used a complex algorithm to convert the isosurface into a polygonized mesh. Now I was trying to go back to a mathematical representation that would be conducive to collision detection. Why??
Here's the punchline: why not use the original density field description for checking? After all...the density field is even more precise than the triangular mesh! All that collision detection really is, thanks to the nature of my engine, is a solver for the intersection of two isosurfaces, both of which can be completely described using formulae! Of course, I'm sure the math would get incredibly nasty for an analytic solution, and I'm not really up for exploring that option at the moment. But here's a better idea: use a set of points (the vertex positions generated by the polygonization algorithm) to describe one isosurface, then check each point against the other surface's density function. Then, if you ever find a sign change, you immediately know that there's a collision!
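The whole narrow-phase test then collapses to a few lines. A sketch, with the usual caveats: `density_b` stands in for the second body's real density function, negative-inside is my assumed sign convention, and `transform` is a hypothetical hook for mapping body A's vertices into body B's local space:

```python
def bodies_collide(points_a, density_b, transform=lambda p: p):
    """points_a: vertex positions sampled from body A's isosurface.
    Reports a collision as soon as any point lies inside body B."""
    return any(density_b(transform(p)) < 0.0 for p in points_a)
```

`any()` short-circuits, so the expensive density function stops being evaluated the moment the first interior point (i.e., sign change) is found.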
This is a pretty cool idea, since it only requires a set of points and a density function to work. It definitely avoids a lot of mess associated with computing BVHs. Most importantly, however, it just makes sense! If you're working with a precise, mathematical definition of a surface, then use it!
I'll probably be implementing a rough draft of this system quite soon, so we'll see if it measures up to my expectations. I would imagine that sampling the density function will be cheaper than other types of collision detection, but I'll need to do some extensive profiling to make sure that this idea is actually as clever as I believe it to be.
I've set aside terrains for a bit to explore more space-related content. In particular, I wanted to build a decent 3rd-person camera for use with all things space-related. The camera came out nicely; I just used simple damped spring physics. The scene that I'm using for testing, however, is actually turning out to be more interesting than the camera.
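For the curious, damped spring camera physics amounts to very little code. A minimal sketch in the spirit of what I described (the stiffness and damping constants here are illustrative, not the ones in my demo), using semi-implicit Euler integration:

```python
def spring_step(pos, vel, target, dt, stiffness=30.0, damping=8.0):
    """Pull `pos` toward `target` with a spring force, damped by velocity."""
    accel = [stiffness * (t - p) - damping * v
             for p, v, t in zip(pos, vel, target)]
    vel = [v + a * dt for v, a in zip(vel, accel)]   # integrate velocity first
    pos = [p + v * dt for p, v in zip(pos, vel)]     # then position
    return pos, vel
```

Stepping this every frame with the target set to a point behind the ship gives the camera a smooth, slightly lagging follow that settles without oscillating much.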
I slapped together an implicit model of some kind of blobby battleship or something, just to do some flybys and test out the camera. Of course, given how easy it is to tweak implicit models, I quickly got carried away and ended up adding a pretty cool interior - meaning that you can fly the fighter right into one of the side tunnels (BSG style :D) and park in one of the four garages inside the ship. Cool! It amused me for quite some time. And it only took about 45 minutes of experimentation to make it. I love implicit models!!
The only downside, as is evident in some of the pictures, is that the normals aren't well-behaved when you start getting really fancy with density functions (i.e., when you start having regions carved out of solid regions). The normals produced using my standard method of gradient estimation were complete garbage at some points on the mesh (especially inside the ship), so I had to discard them and use post-computed normals instead (the kind where you sum up all the planar normals for each vertex and compute the weighted average). This makes the normals a little nasty at points, since marching cubes tends to add slivers and such that screw with the normal computation.
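That fallback normal computation is the standard one: accumulate each face's normal into its three vertices, then normalize. A sketch (the raw cross product's length is proportional to the face's area, so summing unnormalized cross products gives the area weighting for free):

```python
import math

def vertex_normals(vertices, faces):
    """vertices: list of (x, y, z); faces: list of (i0, i1, i2) index triples."""
    normals = [[0.0, 0.0, 0.0] for _ in vertices]
    for i0, i1, i2 in faces:
        a, b, c = vertices[i0], vertices[i1], vertices[i2]
        u = [b[k] - a[k] for k in range(3)]
        v = [c[k] - a[k] for k in range(3)]
        # Unnormalized face normal: length = 2 * face area.
        n = [u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
        for idx in (i0, i1, i2):
            for k in range(3):
                normals[idx][k] += n[k]
    out = []
    for n in normals:
        length = math.sqrt(sum(c * c for c in n)) or 1.0  # guard degenerate verts
        out.append([c / length for c in n])
    return out
```

The sliver-triangle problem mentioned above shows up here because a sliver contributes a near-zero-area (hence near-zero-weight) but often wildly-oriented normal, and when a vertex is touched mostly by slivers the average goes bad.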
Oh well. On the whole, I'm really happy with how this demo is shaping up. Did I mention it's only 198 lines of code/28kb packed exe? Hooray for procedural content!