Tag Archives: collision detection

Fast, Accurate Collision Detection for Arbitrary Triangle Meshes

This is probably the single most exciting post in my three years of blogging. It's definitely what I feel to be my biggest technical achievement, at least in the usual sense of the term. At first I wrote an exceedingly long post, including a mini-history of my experiences with collision detection and the development of this new algorithm. But it was too long and boring, so I'm just going to get to the facts.

Tonight, frustrated by the poor performance of Bullet's concave-concave solver (GIMPACT), I've developed a general-purpose intersection algorithm that works for any kind of mesh (convex, concave, degenerate, ... pretty much whatever you want), allows any kind of world transform (including scaling, shearing, ... pretty much whatever you want), and is remarkably fast. Maybe it's not that great, maybe everyone already knows about it...but I feel like I would have heard about it by now. I mean, it takes less than 200 lines to implement this incredibly simple, powerful algorithm that blows GIMPACT out of the water. Am I missing something?? Maybe so. I need to do some more tests to make sure I'm not being premature, but initial tests show that it takes an absurdly small amount of time, even for insane meshes. For example, I tried checking two full-quality asteroids against one another...a test that should be prohibitively expensive (they're concave and roughly 50000 polys)...and it takes almost no time at all (concrete numbers to come in the future). Wow. I can run full physics on my full meshes now...! And best of all, I can quit worrying about this convexity nonsense, and also drop all the hacks that I had to use to get other physics engines to cooperate with scaling.

I've never been more excited about an algorithm than I am tonight. This may mean ditching the 3rd-party physics library entirely, which would be a dream come true!!! After three years, I've finally gotten collision detection right. I hope to write something on this algorithm soon, so that I can get it out there, and people can tell me whether it's actually novel or not.

If this algorithm is really as good as initial tests show it to be, my belief that everything is fundamentally simple with the right insights...will just be cemented into my mind beyond any shadow of a doubt. If concave-concave collision detection is actually trivial...then can there be anything that isn't?!?

Dust and Collision Detection

I made some great progress yesterday with volumetric effects, and am officially excited about the current look of dust clouds/nebulae! I've implemented a very cheap approximation to global illumination for volumes. Although the effect is really simple, it adds a lot to the realism. On top of illumination from background nebulae, there's also a fake illumination effect from stars. Here are some shots:

Definitely starting to feel like the game has a strong atmosphere!

The rest of my time lately has been devoted to finally getting real collision detection. I tried several options, including Bullet, OZCollide and Opcode. For now, I've opted (no pun intended...) to go with Opcode. I love the cleanliness of the interface, as well as the fact that it's really easy to use as just a narrow-phase collider, since I've already implemented my own broad-phase.

Once again, I started a fight that I shouldn't have started with someone way bigger than me. Luckily, thanks to the new, accurate collision detection, I was able to sneakily take refuge in a depression in this large asteroid:

Opcode is very fast, so I'm pleased with that. And I haven't even set up temporal coherence caches yet, which supposedly can yield 10x speedups. Unfortunately, I've already run into a very serious problem that seems to be universal to collision detectors: they don't support scaling in world transformations!! I'm not sure why...I guess there's some fundamental conflict between scaling and these collision detection algorithms that I'm not aware of. At any rate, this poses a big problem: I've got thousands of collidable asteroids drifting around systems. With scaling support, this wouldn't be a problem, because there are only about 15 unique meshes that I generate for a given system. With instancing and scaling, you can't tell that some of the asteroids are the same. Scaling is an absolutely critical part of the illusion! But to get the scaled asteroids to work with collision detectors, I have to bake the scale data into the meshes. I can't possibly generate a unique, scaled mesh and acceleration structure for each of the thousands of asteroids at load time. That would take way too long, and way too much memory.
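Baking the scale in is conceptually trivial; it's the per-object cost that hurts. Here's a minimal sketch of what it amounts to, using made-up mesh types (nothing here is from Opcode or my engine): copy the source mesh, multiply each vertex by the scale, and only then hand the result to the collision library to build its acceleration structure.

```cpp
#include <vector>

// Hypothetical minimal mesh types, just for illustration.
struct Vec3 { float x, y, z; };

struct Mesh {
    std::vector<Vec3>     vertices;
    std::vector<unsigned> indices;
};

// Bake a (possibly non-uniform) scale into a copy of the source mesh, so that a
// collision library which ignores scale in the world transform still sees the
// correctly-sized geometry. The acceleration structure must then be built from
// this scaled copy -- which is exactly the per-object cost I can't afford to pay
// for thousands of asteroids at load time.
Mesh bakeScale(const Mesh& source, const Vec3& scale)
{
    Mesh scaled = source;
    for (Vec3& v : scaled.vertices) {
        v.x *= scale.x;
        v.y *= scale.y;
        v.z *= scale.z;
    }
    return scaled;
}
```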

To solve this, I'm currently trying to anticipate collisions, launching another thread to build the acceleration structures for the objects that would be involved. To do so, I keep track of a "fattened" world-space AABB for all objects, and launch the builder thread when the fattened boxes intersect. The factor by which the world boxes are exaggerated affects how early the acceleration structures will be cached. So far, this method is working really well. Opcode is fast at building acceleration structures, so I haven't had any trouble with the response time. In theory, with a slow processor, collisions will be ignored if the other thread does not finish building the acceleration structure before the collision occurs. I've tested to see how far I can push the complexity of meshes before this starts happening. Indeed, if I use the full-quality asteroid meshes (~120k tris) for collision, my first missile flies right through them (admittedly, I also turned the speed way up on the missiles). But let's face it, 120k tris for a collision mesh is a little much! And the only real alternative is to stall and wait for the acceleration structure to be generated, which means the game would freeze for a moment as your missile is heading towards an asteroid or ship. I'd much prefer to run the risk of an occasional missed collision on a really low-end machine than to have the game regularly stall when I shoot at things.
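For the curious, the "fattened box" trigger is about as simple as it sounds. Here's a rough sketch of the idea (all names are mine, invented for illustration; this isn't the actual engine code): enlarge each object's world-space AABB, and when two enlarged boxes start overlapping, hand that pair off to the builder thread so the acceleration structures are (hopefully) ready before the real boxes ever touch.

```cpp
struct AABB {
    float min[3];
    float max[3];
};

// Enlarge a world-space box about its center by some factor (e.g. 1.5).
// The bigger the factor, the earlier the builder thread gets its head start.
AABB fatten(const AABB& box, float factor)
{
    AABB result;
    for (int i = 0; i < 3; ++i) {
        float center     = 0.5f * (box.min[i] + box.max[i]);
        float halfExtent = 0.5f * (box.max[i] - box.min[i]) * factor;
        result.min[i] = center - halfExtent;
        result.max[i] = center + halfExtent;
    }
    return result;
}

bool overlaps(const AABB& a, const AABB& b)
{
    for (int i = 0; i < 3; ++i)
        if (a.max[i] < b.min[i] || b.max[i] < a.min[i])
            return false;
    return true;
}

// In the update loop (pseudo-usage; accelerationCached and launchBuilderThread
// are hypothetical helpers):
//
//     if (overlaps(fatten(a.worldBox, 1.5f), fatten(b.worldBox, 1.5f)) &&
//         !accelerationCached(a, b))
//     {
//         launchBuilderThread(a, b);
//     }
```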

I'm very impressed with how easy multithreading is with SFML. The thread and mutex structures provided by SFML are completely intuitive and work perfectly! It took less than an hour to implement the aforementioned collision structure worker thread.
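For anyone curious what that looks like, here's a rough sketch along the lines of what I described, assuming SFML 2's sf::Thread, sf::Mutex, and sf::Lock (the work queue and the buildAccelerationStructures call are hypothetical stand-ins, not my actual code):

```cpp
#include <SFML/System.hpp>
#include <queue>

// Hypothetical work queue shared between the game thread and the builder thread.
std::queue<int> pendingPairs;   // IDs of object pairs that need acceleration structures
sf::Mutex       queueMutex;

void buildAccelerationStructures(int pairId);   // hypothetical: builds the Opcode trees

void builderThreadFunc()
{
    for (;;) {
        int pair = -1;
        {
            sf::Lock lock(queueMutex);          // RAII lock on the shared queue
            if (!pendingPairs.empty()) {
                pair = pendingPairs.front();
                pendingPairs.pop();
            }
        }
        if (pair >= 0)
            buildAccelerationStructures(pair);
        else
            sf::sleep(sf::milliseconds(1));     // nothing to do; yield briefly
    }
}

// On the main thread:
//     sf::Thread builderThread(&builderThreadFunc);
//     builderThread.launch();
// ...and when fattened boxes overlap, push the pair ID under the same mutex.
```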

Spatial Signatures: Even Faster Collision Detection?

Yes, I'm quite happy with my collision detection engine. But could it be faster? Probably!

Here's an idea I had today. Obviously, I'm obsessed with spatial hashing. It's probably my favorite thing ever. Well, what about this: take a mesh, create a spatial grid of booleans (which can, of course, be represented super-efficiently as a bit vector), then loop over every triangle in the mesh. For any grid cell that the triangle intersects, the corresponding hash bucket gets set to true. Once this has been done for all triangles, you are left with an array of bits that represents the spatial buckets with which the mesh collides. Purely for the sake of being able to throw around cool and esoteric terminology, I'll call this array the "spatial signature" of the object.
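A minimal sketch of how the signature might be built (my own made-up types; to keep it short, I mark every cell touched by a triangle's bounding box rather than doing an exact triangle-box test, which is conservative: it can only set extra bits, never miss one):

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

using Vec3 = std::array<float, 3>;

// "Spatial signature": one bit per grid cell, set if any triangle touches it.
struct SpatialSignature {
    std::vector<bool> bits;
    Vec3  origin{};          // minimum corner of the grid
    float cellSize = 1.0f;
    int   dims[3]  = {0, 0, 0};

    int index(int x, int y, int z) const { return (z * dims[1] + y) * dims[0] + x; }
};

// Build the signature from an indexed triangle mesh.
SpatialSignature buildSignature(const std::vector<Vec3>& verts,
                                const std::vector<unsigned>& indices,
                                const Vec3& origin, float cellSize,
                                int nx, int ny, int nz)
{
    SpatialSignature sig;
    sig.origin   = origin;
    sig.cellSize = cellSize;
    sig.dims[0] = nx; sig.dims[1] = ny; sig.dims[2] = nz;
    sig.bits.assign(static_cast<size_t>(nx) * ny * nz, false);

    for (size_t t = 0; t + 2 < indices.size(); t += 3) {
        const Vec3& a = verts[indices[t]];
        const Vec3& b = verts[indices[t + 1]];
        const Vec3& c = verts[indices[t + 2]];

        // Cell range covered by the triangle's bounding box, clamped to the grid.
        int lo[3], hi[3];
        for (int i = 0; i < 3; ++i) {
            float mn = std::min({a[i], b[i], c[i]});
            float mx = std::max({a[i], b[i], c[i]});
            lo[i] = std::max(0, (int)std::floor((mn - origin[i]) / cellSize));
            hi[i] = std::min(sig.dims[i] - 1, (int)std::floor((mx - origin[i]) / cellSize));
        }
        for (int z = lo[2]; z <= hi[2]; ++z)
            for (int y = lo[1]; y <= hi[1]; ++y)
                for (int x = lo[0]; x <= hi[0]; ++x)
                    sig.bits[sig.index(x, y, z)] = true;
    }
    return sig;
}
```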

Now, here's the fun part. In my previous technique, I looped over the vertices of one mesh, checking them against the density function of the second. The most expensive part of this check is, without a doubt, evaluating the density function. Well, instead, we could prune the possibilities by first checking the vertex against the spatial signature of the object! It's quite inexpensive to hash the vertex to a bucket in the same space as the one in which the spatial signature was created. Then, we simply check the corresponding index in the bit array. If the array indicates that no triangles intersect that spatial bucket, then there's absolutely no chance that the vertex could be colliding with the mesh. On the other hand, if the bit is turned on, we continue as before with the full-blown density check.
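Continuing the sketch above (same SpatialSignature and Vec3 types), the pruning step could look something like this; densityB stands in for the second object's density function, with the convention that a positive value means "inside":

```cpp
#include <cmath>

// Hash a point into the signature's grid and test the corresponding bit.
// A cleared bit (or a point outside the grid entirely) means no triangle of
// the mesh touches that cell, so the expensive density check can be skipped.
bool signatureHit(const SpatialSignature& sig, const Vec3& p)
{
    int cell[3];
    for (int i = 0; i < 3; ++i) {
        cell[i] = (int)std::floor((p[i] - sig.origin[i]) / sig.cellSize);
        if (cell[i] < 0 || cell[i] >= sig.dims[i])
            return false;
    }
    return sig.bits[sig.index(cell[0], cell[1], cell[2])];
}

// Usage sketch: prune the expensive density evaluations with the cheap bit test.
//
//     for (const Vec3& p : verticesOfA)                 // in B's local space
//         if (signatureHit(signatureOfB, p) && densityB(p) > 0.0f)
//             return true;                              // collision!
//     return false;
```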

This is, of course, a completely untested idea, but it seems to me that the speed with which a bit array could be accessed would significantly cut down on the narrow-phase collision check, especially when dealing with complex density functions.

Full-Blown Collision Detection: Working!

The war is over; the battle has been won. For now.

I merged the previously-discussed narrow-phase collision engine with the broad-phase spatial hashing engine that I wrote a few months ago, and the results couldn't have been more pleasing! Everything works just as I had wanted, and now the whole collision detection system is as simple as adding objects to the engine, then requesting collision reports in the update step. Everything else is handled automatically - and it doesn't require choosing a certain type of collision mesh or pre-building any special data for collision detection! All it requires is the density function that was used to generate the object's mesh. Furthermore, it's fast :D
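To give a sense of what that looks like in practice, here's a purely hypothetical interface sketch along those lines; none of these names come from the actual engine, they're just my guess at the shape of such an API: register objects (each one described only by its density function and the surface points from polygonization), call update every frame, and read back the collision reports.

```cpp
#include <array>
#include <functional>
#include <vector>

using Vec3      = std::array<float, 3>;
using DensityFn = std::function<float(const Vec3&)>;

// Hypothetical interface sketch (declarations only).
class CollisionEngine {
public:
    using ObjectId = int;
    struct Report { ObjectId a, b; };

    // An object is fully described by its density function and the surface
    // points produced by the polygonizer -- no special collision mesh needed.
    ObjectId addObject(DensityFn density, std::vector<Vec3> surfacePoints);

    void update();                               // broad phase + narrow phase
    const std::vector<Report>& reports() const;  // pairs that collided this step
};
```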

I also made a minor optimization to the spatial hashing engine that's worth mentioning, as it kicked my framerate back up to 60 after collision detection on the large, rotating space station in my demo had taken it down to 10 fps or so. The idea is that, when adding an object to spatial buckets, it's a good idea to 'over-allocate,' so to speak. In other words, scale the bounding box by 150% or so, then add this enlarged box to the spatial buckets. It may waste a few checks in the broad-phase cycle, but it cuts down on the frequency with which you have to change the spatial buckets around, which I found to be a huge bottleneck for really large, moving objects (especially when the spatial grid is kept relatively small).

If you do the aforementioned, then when an object moves, you need only modify the spatial grid if the object's new bounding box isn't contained in the 'over-allocated' one. Now, if you've got a grid size of 1 and a battleship that's 50 x 50 x 50, suffice it to say that it's going to get really expensive, really fast to update the spatial grid every frame if the battleship is moving. So don't!
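Here's a rough sketch of that trick, under my usual caveat that the types and names are invented for illustration: each object is registered in the spatial hash under an enlarged box, and its buckets are only recomputed when the true box escapes the enlarged one.

```cpp
struct AABB {
    float min[3];
    float max[3];
};

struct Object {
    AABB trueBox;    // tight world-space box, updated every frame
    AABB hashedBox;  // enlarged box currently registered in the spatial hash
};

// Enlarge a box about its center by some factor (e.g. 1.5 for 150%).
AABB enlarge(const AABB& box, float factor)
{
    AABB result;
    for (int i = 0; i < 3; ++i) {
        float center     = 0.5f * (box.min[i] + box.max[i]);
        float halfExtent = 0.5f * (box.max[i] - box.min[i]) * factor;
        result.min[i] = center - halfExtent;
        result.max[i] = center + halfExtent;
    }
    return result;
}

bool contains(const AABB& outer, const AABB& inner)
{
    for (int i = 0; i < 3; ++i)
        if (inner.min[i] < outer.min[i] || inner.max[i] > outer.max[i])
            return false;
    return true;
}

// Called once per frame per moving object. The spatial hash is only touched
// when the tight box has drifted outside the over-allocated one.
void updateBuckets(Object& obj /*, SpatialHash& hash -- hypothetical */)
{
    if (!contains(obj.hashedBox, obj.trueBox)) {
        obj.hashedBox = enlarge(obj.trueBox, 1.5f);
        // hash.remove(obj); hash.insert(obj, obj.hashedBox);
    }
}
```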

A celebratory shot of me swooping triumphantly over the station, headed back toward the mothership:

Narrow Phase Collision Detection Using Density Fields: Results

It works!

But it took a while. The biggest problem I encountered in implementing my scheme stemmed from slight inaccuracies in the construction of the surfaces by the polygonization algorithm, which ended up causing noticeable detection problems when meshes were scaled up. To fix it, I implemented a pseudo-binary search that places vertices at points where the absolute value of the density function is no more than some very small error tolerance, keeping the mesh as true to the real surface as possible. Linear interpolation of the density function just isn't good enough when you introduce metaballs with variable power (i.e., to create rectilinear shapes), which can cause sharp, sudden variations in the density function.
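The refinement itself is just bisection along the edge where the field changes sign; here's a minimal sketch of the kind of thing I mean (the names and the exact tolerance are placeholders, not my engine's actual values):

```cpp
#include <array>
#include <cmath>
#include <functional>

using Vec3      = std::array<float, 3>;
using DensityFn = std::function<float(const Vec3&)>;

// Refine a surface vertex by bisecting the edge (a, b) that straddles the
// isosurface, until |density| drops below a small tolerance. This replaces
// plain linear interpolation, which misplaces vertices when the field varies
// sharply (e.g., high-power metaballs used to build rectilinear shapes).
Vec3 refineVertex(const DensityFn& density, Vec3 a, Vec3 b,
                  float tolerance = 1e-4f, int maxIterations = 32)
{
    float da  = density(a);
    Vec3  mid = a;
    for (int i = 0; i < maxIterations; ++i) {
        for (int k = 0; k < 3; ++k)
            mid[k] = 0.5f * (a[k] + b[k]);
        float dm = density(mid);
        if (std::fabs(dm) <= tolerance)
            break;                          // close enough to the true surface
        if ((dm > 0.0f) == (da > 0.0f)) {
            a  = mid;                       // keep the sign change bracketed
            da = dm;
        } else {
            b = mid;
        }
    }
    return mid;
}
```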

After increasing the precision of the polygonized surfaces, the detection worked like a charm! The time required to check two bodies is highly dependent on the resolution of the point set used for the first body. I found that it generally sufficed to use point sets generated from low-resolution meshes for the collision. With optimization, I can get the checking time down to around 0.1 ms per check.

Narrow Phase Collision Detection Using Density Fields

I've been pondering narrow phase collision detection a lot lately, as it's really the only thing that stands between me and a working space shooter demo. Unfortunately, it's not at all an easy problem, especially considering that my meshes are generated on the fly, so I can make no assumptions about the topology, nor can I rely on hand-constructed bounding volume trees.

I had been getting pretty excited about an idea involving a two-phase check: AABB-trees for pruning, then spatial bucketing (a la spatial hashing) for triangle-to-triangle testing. Then it hit me: why am I running in circles? I started with a mathematical model of the mesh: an isosurface of a well-defined density function. I used a complex algorithm to convert the isosurface into a polygonized mesh. Now I was trying to go back to a mathematical representation that would be conducive to collision detection. Why??

Here's the punchline: why not use the original density field description for checking? After all...the density field is even more precise than the triangular mesh! All that collision detection really is, thanks to the nature of my engine, is a solver for the intersection of two isosurfaces, both of which can be completely described using formulae! Of course, I'm sure the math would get incredibly nasty for an analytic solution, and I'm not really up for exploring that option at the moment. But here's a better idea: use a set of points (the vertex positions generated by the polygonization algorithm) to describe one isosurface, then check each point against the other surface's density function. Then, if you ever find a sign change, you immediately know that there's a collision!
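In its simplest form, the whole narrow-phase test boils down to a few lines. Here's a sketch under the convention that the density field is positive inside the isosurface and negative outside (and assuming the sample points have already been transformed into the second body's local space):

```cpp
#include <array>
#include <functional>
#include <vector>

using Vec3      = std::array<float, 3>;
using DensityFn = std::function<float(const Vec3&)>;

// Test one body's surface sample points against the other body's density field.
// Any point with the "inside" sign means the surfaces overlap, so we can report
// a collision immediately.
bool intersects(const std::vector<Vec3>& surfacePointsOfA, const DensityFn& densityOfB)
{
    for (const Vec3& p : surfacePointsOfA)
        if (densityOfB(p) > 0.0f)   // positive = inside B's isosurface
            return true;
    return false;
}
```

Note that a check like this only sees A's sample points penetrating B, so presumably you'd want to run it in both directions (A's points against B's field, and B's points against A's field) to avoid missing shallow overlaps.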

This is a pretty cool idea, since it only requires a set of points and a density function to work. It definitely avoids a lot of mess associated with computing BVHs. Most importantly, however, it just makes sense! If you're working with a precise, mathematical definition of a surface, then use it!

I'll probably be implementing a rough draft of this system quite soon, so we'll see if it measures up to my expectations. I would imagine that sampling the density function will be cheaper than other types of collision detection, but I'll need to do some extensive profiling to make sure that this idea is actually as clever as I believe it to be.