I've sure come a long way in one week. Again, I think the picture speaks for itself.
Everything written from scratch in XNA, including shaders. Last week, I didn't even know what a pixel shader or index buffer was.
In a rather old textbook I read recently (Texturing and Modeling: A Procedural Approach; though note that I actually have the 1991 edition), the author briefly mentions, at the end of the book, a type of fractal called the multifractal. The idea of a multifractal is to vary the fractal dimension at each point based on, guess what, another fractal map! In a sense, it adds another level of recursion within the fractal.
So why should we care? Well, if Perlin noise makes good terrains, multifractal Perlin noise makes them 100x better! The explanation is simple. Real terrain contains features like mountains, but it also contains features like plains. Mountains have a high fractal dimension, while plains have a low one. Perlin noise, in and of itself, has a roughly constant fractal dimension everywhere; multifractal noise doesn't. For this reason, it serves much better for the purpose of generating terrains!
Here's a sample multifractal Perlin noise heightmap generated with a home-brewed HLSL shader:
Notice that, in comparison to previous heightmaps, this terrain is far more varied - we can clearly see mountainous regions alongside flatter, plain-like areas.
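To make the idea concrete, here's a rough Python sketch of the technique (illustrative only, not my HLSL: the value-noise hash, the 0.05 control-map frequency, and the persistence range are all arbitrary choices). A second, low-frequency noise map modulates the persistence of an ordinary fractal sum, so some regions come out rough and others smooth.

```python
import math

def lattice_noise(x, y, seed=0):
    """Cheap deterministic hash noise on the integer lattice, in [0, 1]
    (value noise for illustration, not true gradient Perlin noise)."""
    n = x * 374761393 + y * 668265263 + seed * 1013904223
    n = (n ^ (n >> 13)) * 1274126177
    return (n & 0x7fffffff) / 0x7fffffff

def smooth_noise(x, y, seed=0):
    """Bilinearly interpolated lattice noise with a smoothstep fade."""
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    u = fx * fx * (3 - 2 * fx)
    v = fy * fy * (3 - 2 * fy)
    n00 = lattice_noise(x0,     y0,     seed)
    n10 = lattice_noise(x0 + 1, y0,     seed)
    n01 = lattice_noise(x0,     y0 + 1, seed)
    n11 = lattice_noise(x0 + 1, y0 + 1, seed)
    return (n00 * (1 - u) + n10 * u) * (1 - v) + (n01 * (1 - u) + n11 * u) * v

def fbm(x, y, octaves, persistence, seed=0):
    """Standard fractal sum: each octave doubles frequency and scales
    amplitude by 'persistence'; normalized back into [0, 1]."""
    total, amp, freq, norm = 0.0, 1.0, 1.0, 0.0
    for o in range(octaves):
        total += amp * smooth_noise(x * freq, y * freq, seed + o)
        norm += amp
        amp *= persistence
        freq *= 2.0
    return total / norm

def multifractal(x, y, seed=0):
    """Multifractal variant: a low-frequency control map varies the
    local roughness (persistence), so fractal dimension changes
    from point to point instead of staying constant."""
    roughness = smooth_noise(x * 0.05, y * 0.05, seed + 999)  # in [0, 1]
    persistence = 0.2 + 0.5 * roughness  # plains ~0.2, mountains ~0.7
    return fbm(x, y, octaves=6, persistence=persistence, seed=seed)
```

The only difference from plain fractal noise is that `persistence` is itself sampled from a noise map rather than fixed - that one change is what produces the mix of mountains and plains.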
I did it!!
After hours and hours of reading tutorials on HLSL coding, I finally managed to write a Perlin noise shader. The result? Unbelievable speed - at least 100x faster than the CPU-based implementation I coded a few months ago. It can easily display a new 512 by 512 heightmap at 60 FPS. Yes, that's 60 unique heightmaps in one second.
With this kind of raw power now available to me for procedural content generation, I know that great things lie ahead in terms of my work with virtual worlds.
For reference (since I lost the old one and had to rewrite from scratch), here's the noise function I used:
float Noise2D(float x, float y)
{
    float a = acos(cos(((x + SEED * 9939.0134) * (x + 546.1976) + 1) / (y * (y + 48.9995) + 149.7913)) + sin(x + y / 71.0013)) / PI;
    float b = sin(a * 10000 + SEED) + 1;
    return b; // result in [0, 2]; SEED and PI are constants defined elsewhere in the shader
}
Recently I've gotten detoured once more from algorithmic composition by Microsoft XNA, the first 3D engine I've found that I really, really feel comfortable with. Unlike the previous one, it won't do much of the work for me - no easy bloom/HDR, no easy model management. But it DOES expose powerful core functionality without having to jump through all the complicated hoops of Direct3D in C++.
In developing a terrain engine from scratch for my work in algorithmic world synthesis, I've begun testing different continuous level-of-detail (CLOD) systems. The easiest and most obvious CLOD system that comes to mind is quadtree-based terrain.
My initial quadtree CLOD engine is working pretty well, and can cut polygon counts by a factor of 10 to 100, making decent framerates possible even with 1024x1024 heightmaps. Some problems with the quadtree engine remain, though.
I'm thinking that some of the processing could be done in a shader to speed things up. If I can wrap my head around a few more HLSL tutorials, I may be able to try such a technique in the near future.
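The core patch-selection logic of a quadtree CLOD scheme is simple enough to sketch in a few lines of Python (the distance-based split criterion and constants below are illustrative - my engine's exact error metric differs):

```python
import math

class QuadNode:
    """Square terrain patch; children cover its four quadrants."""
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size

def select_lod(node, cam_x, cam_y, detail=2.0, min_size=16, out=None):
    """Recursively choose patches to render: keep splitting while a
    patch is large relative to its distance from the camera, a common
    distance-based LOD criterion."""
    if out is None:
        out = []
    cx = node.x + node.size / 2
    cy = node.y + node.size / 2
    dist = math.hypot(cam_x - cx, cam_y - cy)
    if node.size <= min_size or dist > detail * node.size:
        out.append(node)  # render this patch at its own resolution
    else:
        half = node.size // 2
        for dx in (0, half):
            for dy in (0, half):
                select_lod(QuadNode(node.x + dx, node.y + dy, half),
                           cam_x, cam_y, detail, min_size, out)
    return out
```

Patches near the camera subdivide down to `min_size` while distant ones stay coarse, which is where the 10-100x polygon savings come from.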
For now, a simple screenshot:
Finally, an update! I took a bit of a break from work after graduation in celebration of finishing high school. Now, it's time to get back on track.
As important as it is to have great core and generative modules, it is equally critical to have adequate rendering modules; otherwise, the composition cannot be formed into anything tangible. The renderer brings the composition to life, either through visuals or audio. I have used Glass Preview, my own visualization module, for quite a while now as my renderer of choice when I need quick visual previews of compositions without requiring audio.
Over the summer I plan to focus heavily on core modules. Unfortunately, it is difficult to visualize the abstract products of the core modules - which really just shape the output of other generative modules. That's where the second generation of Glass Preview comes in.
Glass Preview 2, like its predecessor, creates highly-detailed visual representations of compositions. Unlike the first installment, however, GP2 also visualizes abstractions of the composition process. Such abstractions include style classes, quantitative movement variables (such as intensity), and chord progressions. In the future, I will also include rhythmic information from the coordination module. In this way, I am no longer restricted to visualizing the output of generative modules.
Here's a reminder of what the old Glass Preview looks like:
And here's the new:
All the extra information really gives the viewer a holistic sense of the composition. I do believe that it will take far less work to develop high-quality core modules now that their output can be easily visualized in the context of the entire composition.
Here's a pretty good idea: what if, in the configuration interface for a drum module, you could tap a drum beat on the keyboard (to the beat of a metronome), which the interface would then analyze and translate into a pattern to be used with the module?
That sure would make it a LOT easier to input drum patterns.
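The analysis step could work something like this sketch (everything here - function names, the fixed grid resolution, the nearest-step quantization - is a hypothetical design, not an existing module):

```python
def taps_to_pattern(tap_times, bpm, subdivisions=4, beats=4):
    """Quantize key-press timestamps (in seconds, measured from the
    first metronome beat) onto a step grid, yielding a drum pattern
    of 0s and 1s - one bar of `beats` beats at `subdivisions` steps
    per beat (e.g. sixteenth notes when subdivisions=4)."""
    step = 60.0 / bpm / subdivisions          # seconds per grid step
    steps = subdivisions * beats
    pattern = [0] * steps
    for t in tap_times:
        slot = int(round(t / step)) % steps   # snap to nearest step, wrap the bar
        pattern[slot] = 1
    return pattern
```

For example, at 120 BPM each sixteenth-note step lasts 0.125 s, so slightly sloppy taps near 0.0, 0.5, 1.0, and 1.5 seconds all snap cleanly to the four downbeats. A fancier version could keep velocity from how hard the key run clusters, but snapping to the nearest step is the essential part.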
I've been having so much fun with mGen lately that I've neglected to post updates! The FL Studio rendering interface has been working for about five days now, and with great success. Unlike the previous attempt (which took place almost a year ago), this renderer is fast, accurate, and works on both XP and Vista (and Windows 7, soon enough). It also includes a preliminary configuration interface that allows selection of patches directly, which is nice.
Not having to manually render each composition really takes the enjoyability of the program to the next level. Now, it truly takes only a single click to hear a new composition (since the renderer includes a convenient option to automatically play the composition after it finishes). Unfortunately, all this excitement is taking a toll on my hard drive - I end up keeping way too many of the compositions because a lot of them are so unique.
With the renderer in action, I'm able to listen more, produce more, and, in general, get a lot more done. I look forward to a lot of exciting progress this month (especially since I graduate in a week)!