Huge breakthrough today!
I've written a new algorithm that I call the "Crawling Cubes Algorithm," and it works extremely well and is ultra-efficient. Right now, it's looking about 4-5 times faster than my most heavily-optimized Marching Cubes, and roughly 20x faster than vanilla Marching Cubes. Crawling Cubes is nearly optimal algorithmically, meaning that it makes almost no unnecessary evaluations of the density field.
Crawling Cubes is specifically designed for metaball fields. It takes advantage of the fact that it can gain access to the positions of the metaballs. Using this information, Crawling Cubes is able to find a "starting point" cell on every disjoint isosurface. The algorithm then propagates over the surface until all cells containing the surface have been polygonized. Since any given cube that intersects an isosurface has perfect information about whether or not each of its neighbors intersects the isosurface, Crawling Cubes can propagate over the entire surface without wasting any evaluations on cells that don't intersect the surface.
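The post doesn't include the engine's code, but the crawling idea can be sketched roughly like this (all names, the density function, and the seed-finding strategy here are my own illustrative choices, not the engine's actual implementation). A cell "straddles" the surface when its corner densities fall on both sides of the isolevel; starting from a seed cell found by walking outward from a metaball's center, a breadth-first crawl visits only cells that actually contain surface:

```cpp
#include <cmath>
#include <queue>
#include <set>
#include <tuple>
#include <vector>

struct Ball { double x, y, z, r; };

struct Cell {
    int i, j, k;
    bool operator<(const Cell& o) const {
        return std::tie(i, j, k) < std::tie(o.i, o.j, o.k);
    }
};

// Classic inverse-square metaball density, summed over all balls.
static double density(const std::vector<Ball>& balls, double x, double y, double z) {
    double d = 0.0;
    for (const Ball& b : balls) {
        double dx = x - b.x, dy = y - b.y, dz = z - b.z;
        d += (b.r * b.r) / (dx * dx + dy * dy + dz * dz + 1e-12);
    }
    return d;
}

// A cell straddles the isosurface if its corner densities are not all on
// the same side of the isolevel.
static bool straddles(const std::vector<Ball>& balls, Cell c, double h, double iso) {
    bool above = false, below = false;
    for (int di = 0; di <= 1; ++di)
        for (int dj = 0; dj <= 1; ++dj)
            for (int dk = 0; dk <= 1; ++dk) {
                double d = density(balls, (c.i + di) * h, (c.j + dj) * h, (c.k + dk) * h);
                (d >= iso ? above : below) = true;
            }
    return above && below;
}

// Walk +x from a ball's center until the surface is crossed; metaball
// density falls toward zero with distance, so a crossing must occur.
static Cell findSeed(const std::vector<Ball>& balls, const Ball& b, double h, double iso) {
    Cell c{(int)std::floor(b.x / h), (int)std::floor(b.y / h), (int)std::floor(b.z / h)};
    while (!straddles(balls, c, h, iso)) ++c.i;
    return c;
}

// Breadth-first crawl: expand only into face neighbors that also straddle
// the surface. Returns the number of surface cells visited (a real
// implementation would polygonize each one as it is dequeued).
static int crawl(const std::vector<Ball>& balls, Cell seed, double h, double iso) {
    static const int nb[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
    std::queue<Cell> frontier;
    std::set<Cell> visited{seed};
    frontier.push(seed);
    int surfaceCells = 0;
    while (!frontier.empty()) {
        Cell c = frontier.front();
        frontier.pop();
        ++surfaceCells;
        for (const int* n : nb) {
            Cell next{c.i + n[0], c.j + n[1], c.k + n[2]};
            if (!visited.count(next) && straddles(balls, next, h, iso)) {
                visited.insert(next);
                frontier.push(next);
            }
        }
    }
    return surfaceCells;
}
```

Since every empty or fully-interior cell is rejected after a single corner test from an adjacent surface cell, the work done is proportional to the surface area rather than the volume of the grid.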
To be fair, I didn't come up with this idea on my own. I found the idea on this page and quickly realized how brilliant it was and how perfectly it would work for metaball meshes.
The algorithm can now handle that 60k mesh of 30 metaballs that I mentioned in my last post in well under a single second. Furthermore, Crawling Cubes never causes topological artifacts. Unlike my previous hacky optimizations, this optimization preserves all topology (at the expense of generality, since it works only for metaball meshes).
I'm really, really excited about the development of this new algorithm and can't wait to put it to good use with ultra-high-quality procedural models!
I've spent an insane amount of time over the past few days trying to optimize the Marching Cubes algorithm, since it's going to be such a pivotal part of my project. Ideally, I'd like all meshes to be parametrized in terms of isosurfaces for memory efficiency as well as for the ability to quickly prototype models.
I experimented both with octree culling and with optimizations to the original, brute-force algorithm. Surprisingly, the brute-force algorithm ended up outperforming the octree in all cases. Furthermore, I failed to find a reliable way of predicting whether or not an octree node would intersect the isosurface given just the density values at its corners - the corner densities can all sit on one side of the isolevel while the surface still passes through the node's interior - so reconstruction often produced meshes with holes in them.
Ultimately, I ended up cutting polygonization time down by a factor of four. I'm still using a brute-force technique. However, using some rather hacky methods to perform linear interpolation on selective parts of the density field, I've managed to avoid having to directly compute the entire field. This method basically involves computing a derivative of the field and then using a very conservative linear approximation of the density field to skip a few computations when the values are determined to be at a "safe" distance from the isolevel.
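The post doesn't spell out the skip test, but one way such a "safe distance" check can work is shown below, assuming a known conservative bound on the field's gradient (the bound and all names are my own, not the engine's):

```cpp
#include <cmath>

// If the field can change by at most maxGradient per unit of distance, and
// the cell-center density is farther from the isolevel than the field could
// climb or fall anywhere inside the cell, the whole cell is safely skipped.
static bool canSkipCell(double centerDensity, double iso,
                        double maxGradient, double cellSize) {
    double halfDiagonal = 0.5 * cellSize * std::sqrt(3.0);  // farthest corner
    return std::fabs(centerDensity - iso) > maxGradient * halfDiagonal;
}
```

The bound must genuinely hold everywhere, or the skip will cut holes in the mesh - which is presumably why the author calls the approximation "conservative."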
With this technique, the engine can build a 60k poly model (high quality) of a density field containing roughly 30 metaballs in less than 2 seconds. That's pretty fast! I'm going to continue to seek optimizations for this algorithm, but for now I'm much more pleased than I used to be with the speed. Isosurface extraction is starting to look very feasible for a game engine!
I experimented today with generating random metaball meshes. Surprisingly, the results were impressive! After an hour or so of tweaking the algorithm, I ended up with a pretty nice procedural building generator.
The algorithm used is fairly simple. It adds a variable number of metaballs to a metaball mesh in three different groups. The first group of balls makes up auxiliary features of the buildings and is asymmetric. The second group makes up exterior features and is reflected over the x axis, giving some of the building features a pleasant symmetry. The final group is simply distributed along the y axis to ensure that the building has a "spine."
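As a sketch, the three-group layout might look like the following - all counts, coordinate ranges, and names here are invented for illustration, and I've interpreted the reflection as mirroring across the x = 0 plane:

```cpp
#include <random>
#include <vector>

struct Ball { double x, y, z, r; };

// Three groups: asymmetric auxiliary balls, exterior balls mirrored for
// symmetry, and a column along the y axis forming the building's "spine."
static std::vector<Ball> makeBuilding(unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> pos(-2.0, 2.0), rad(0.3, 0.8);
    std::vector<Ball> balls;
    for (int i = 0; i < 4; ++i)          // group 1: auxiliary, asymmetric
        balls.push_back({pos(rng), pos(rng), pos(rng), rad(rng)});
    for (int i = 0; i < 4; ++i) {        // group 2: exterior, mirrored pairs
        Ball b{pos(rng), pos(rng), pos(rng), rad(rng)};
        balls.push_back(b);
        balls.push_back({-b.x, b.y, b.z, b.r});
    }
    for (int i = 0; i < 5; ++i)          // group 3: the spine
        balls.push_back({0.0, i * 1.0, 0.0, rad(rng)});
    return balls;
}
```

Feeding a fresh seed to the generator yields a new building each time, which is what makes this kind of parametrized metaball layout so cheap to iterate on.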
Here are a few of the better models:
A New Reality is looking good! I can't wait to see it when everything gets put together - the terrain, the buildings, the clouds, and the music. It's going to be quite a nice little demo.
It has been yet another inspiring night for the XDX Engine.
The engine now supports (somewhat inflexible) specular lighting as well as simple color blending. Combining the specular lighting with the triplanar texturing technique explained in the last post gives fantastic results (assuming "fantastic" is relative to the other so-called eye candy posted in this blog).
As evident in the above screenshot, I tweaked the previous spaceship model a bit.
My procedural techniques are finally starting to look decent!
And the best part of this? All of the assets used to create the above screenshot (including the actual engine executable) take up...
Not bad procedural content, not bad.
Finally, something genuinely cool to report in terms of progress!
A few weeks ago, I was pondering a disturbing problem: how does one go about texturing procedural models (i.e., models of arbitrary topology)? I came up with what seemed to be a rather simple but elegant solution: sample the surface texture three times (once on the XY plane, once on the XZ plane, and once on the YZ plane), then interpolate the samples based on model-space normals. Ideally, this means that the texture blends smoothly from one planar mapping to another, with the plane whose normal is closest to the model-space normal dominating the planar mapping of the texture onto the model.
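In the engine this blend would live in a shader, but the weighting itself is simple enough to sketch on the CPU (the structs and function names below are mine, not the engine's):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Blend weights from the absolute model-space normal, normalized to sum to
// one: the planar projection whose plane best faces the surface dominates.
static Vec3 triplanarWeights(Vec3 n) {
    Vec3 w{std::fabs(n.x), std::fabs(n.y), std::fabs(n.z)};
    double sum = w.x + w.y + w.z;
    return {w.x / sum, w.y / sum, w.z / sum};
}

// Interpolate the three planar samples (taken on the YZ, XZ, and XY planes)
// by those weights, channel by channel.
static Vec3 triplanarBlend(Vec3 sYZ, Vec3 sXZ, Vec3 sXY, Vec3 n) {
    Vec3 w = triplanarWeights(n);
    return {sYZ.x * w.x + sXZ.x * w.y + sXY.x * w.z,
            sYZ.y * w.x + sXZ.y * w.y + sXY.y * w.z,
            sYZ.z * w.x + sXZ.z * w.y + sXY.z * w.z};
}
```

A surface facing straight up gets its color entirely from the XZ projection; a 45° slope gets an even mix of two projections, which is exactly the smooth transition described above.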
Tonight I finally got a chance to code it. It was one of those glorious moments in my life where the stars aligned and the code just worked the first time through. Not only does it work, it looks quite good!
Yes, there are mirroring artifacts along the seams, but that's to be expected; I didn't use a tileable texture. Considering how little effort this type of mapping takes (it's literally "for free" if the model has correct normals), I'm really pleased with the results.
Unfortunately, I'm not sure what this type of mapping is called (I'm sure someone's done it before). I'll call it triplanar in the meantime.
My next big texturing project will be using metaball regions of influence along with multiple triplanar samples in order to smoothly blend multiple textures across an arbitrary topology.
Maybe I've gone overboard. But knowledge is a great thing, and what better way to prove you understand than to code? With that being said, in order that I may take complete control over my engine, I am rewriting the container classes that I will need.
So far, I've successfully written implementations for vector, map, and string (probably the three most used containers, at least in my engine). Building each was a learning process and I walked away with a far better understanding of the internals of containers, which is certainly a good thing.
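For flavor, here is a stripped-down sketch of the vector-style container - not the engine's actual code, just the core idea that makes such a container tick:

```cpp
#include <cstddef>

// Minimal growable array: push_back runs in amortized constant time because
// capacity doubles each time the buffer fills. Copying is disabled to keep
// the sketch short; a real container also needs copy/move semantics and
// proper handling of non-default-constructible element types.
template <typename T>
class SimpleVector {
public:
    SimpleVector() : data_(nullptr), size_(0), capacity_(0) {}
    ~SimpleVector() { delete[] data_; }
    SimpleVector(const SimpleVector&) = delete;
    SimpleVector& operator=(const SimpleVector&) = delete;

    void push_back(const T& value) {
        if (size_ == capacity_) grow();
        data_[size_++] = value;
    }
    T& operator[](std::size_t i) { return data_[i]; }
    std::size_t size() const { return size_; }

private:
    void grow() {
        std::size_t newCap = capacity_ ? capacity_ * 2 : 4;
        T* newData = new T[newCap];
        for (std::size_t i = 0; i < size_; ++i)
            newData[i] = data_[i];
        delete[] data_;
        data_ = newData;
        capacity_ = newCap;
    }
    T* data_;
    std::size_t size_, capacity_;
};
```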
It's nitty-gritty, it's low-level, and it's almost completely removed from what I'm actually trying to build (a new reality). But every tree starts from a seed.
I'm pretty bummed that I haven't had much time to work on ANR lately. A few days ago, however, I decided that I'm going to step up my game - I'm going to start treating small sub-tasks of project A New Reality like homework each day. Hopefully this will allow me to get big things done by taking them in small pieces.
This week I spent a ridiculous amount of time trying to get ray picking right. The engine now supports bounding-box ray picking. Individual triangle picking is coming soon. Once I get picking down, I should be able to create highly-interactive tools that will allow me to play with procedural modeling and texturing.
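The post doesn't say which intersection test the engine uses, but the standard choice for bounding-box picking is the "slab" test, sketched here (struct names are mine):

```cpp
#include <algorithm>
#include <limits>

struct Ray  { double ox, oy, oz, dx, dy, dz; };  // origin and direction
struct AABB { double min[3], max[3]; };

// Slab test: intersect the ray with the three pairs of axis-aligned planes
// and keep the overlapping parameter interval [tNear, tFar]. The ray hits
// the box iff that interval stays non-empty and in front of the origin.
// Degenerate rays that run exactly along a box face need extra care in
// production code.
static bool rayHitsBox(const Ray& r, const AABB& b) {
    double o[3] = {r.ox, r.oy, r.oz}, d[3] = {r.dx, r.dy, r.dz};
    double tNear = 0.0;
    double tFar = std::numeric_limits<double>::infinity();
    for (int axis = 0; axis < 3; ++axis) {
        double inv = 1.0 / d[axis];
        double t0 = (b.min[axis] - o[axis]) * inv;
        double t1 = (b.max[axis] - o[axis]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tNear = std::max(tNear, t0);
        tFar = std::min(tFar, t1);
        if (tNear > tFar) return false;  // slab intervals no longer overlap
    }
    return true;
}
```

A typical picking pass runs this cheap test against every object's bounding box first, then falls back to per-triangle intersection only for the boxes that pass - which matches the two-stage plan described above.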
Slowly but surely!