Yet another primitive noise type has joined my library: Worley noise. This type of noise is based directly on the concept of Voronoi diagrams. A Worley noise algorithm distributes a set of random feature points in 2-dimensional space, then, for each pixel, computes the distance to the nearest point. That distance determines the color value of the pixel.
Interestingly, Worley noise resembles heavily-eroded Perlin noise to some degree. It produces fairly convincing heightmaps with characteristic sharp slopes.
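The algorithm described above can be sketched in a few lines of C++. This is a minimal brute-force version (real implementations bucket the feature points into a grid so each pixel only checks neighboring cells); the function names are illustrative, not from my actual library code.

```cpp
#include <algorithm>
#include <cfloat>
#include <cmath>
#include <cstdlib>
#include <vector>

struct Point2 { float x, y; };

// Scatter feature points uniformly at random over the unit square.
std::vector<Point2> scatterFeaturePoints(int count, unsigned seed)
{
    std::srand(seed);
    std::vector<Point2> pts(count);
    for (auto& p : pts) {
        p.x = std::rand() / (float)RAND_MAX;
        p.y = std::rand() / (float)RAND_MAX;
    }
    return pts;
}

// Worley noise: the value at (x, y) is the distance to the nearest
// feature point; mapping it to a grayscale value gives the classic
// cellular look.
float worley(float x, float y, const std::vector<Point2>& pts)
{
    float best = FLT_MAX;
    for (const auto& p : pts) {
        float dx = x - p.x, dy = y - p.y;
        best = std::min(best, std::sqrt(dx * dx + dy * dy));
    }
    return best;
}
```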
Two weeks ago, I knew nothing about low-level DirectX graphics programming (aka Direct3D/D3D). After relentless hours of reading, coding, trial-and-error, and frustrating bug hunts, I finally feel powerful again...almost as powerful as I feel in XNA. Except this time, I know the power is real. This power doesn't require .NET, doesn't rely on CLR, and isn't managed - no, it's power in the most raw of forms.
As far as topics go, I've covered everything up through effects/shaders. I did skip to the last chapter in the book in order to get a head-start on render-to-texture, since I simply can't live without procedural textures.
Here's the final product of tonight's work, which included importing my previous ridged Perlin multifractal shader and tweaking it up a bit. I'm absolutely stunned by how fast it runs - significantly faster than XNA! I continue to be impressed by the power of native D3D.
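For reference, the ridged multifractal technique the shader is built on follows a standard pattern: invert the absolute value of each noise octave so valleys become sharp ridges, then weight each octave by the previous one's signal. A rough CPU-side sketch, assuming some signed gradient-noise function is available (the stand-in below is NOT real Perlin noise, just a deterministic placeholder):

```cpp
#include <cmath>

// Hypothetical stand-in for signed Perlin-style noise in [-1, 1];
// substitute any real gradient noise implementation here.
float noise2D(float x, float y)
{
    return std::sin(x * 12.9898f + y * 78.233f);
}

// Ridged multifractal: offset - |noise| turns valleys into ridges;
// squaring sharpens them; weighting by the previous octave's signal
// makes detail concentrate on the ridge lines.
float ridgedMultifractal(float x, float y, int octaves,
                         float lacunarity = 2.0f, float gain = 2.0f,
                         float offset = 1.0f)
{
    float sum = 0.0f;
    float freq = 1.0f;
    float weight = 1.0f;
    for (int i = 0; i < octaves; ++i) {
        float signal = offset - std::fabs(noise2D(x * freq, y * freq));
        signal *= signal * weight;               // sharpen and weight the ridge
        weight = std::fmin(1.0f, signal * gain); // feeds the next octave
        sum += signal / freq;                    // higher frequencies contribute less
        freq *= lacunarity;
    }
    return sum;
}
```

In the actual shader this loop runs per-pixel in HLSL, which is where the speed comes from.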
Having been through several high-level engines (Esenthel, Torque), several low-level engines (Irrlicht, Ogre3D), and a light C# DirectX wrapper (XNA), I've finally worked my way down to the bottom: native DirectX in C++. It's the way games were meant to be made. At every level, I find myself thirsting for more. More power, more control, more access. It's the nature of the types of applications I'm trying to develop. Procedural content isn't a task for the high-level engine. It requires tighter control than typical game content in order to achieve decent speed and memory management.
That being said, here we are, finally, with some results. It's taken two weeks and hundreds of pages of Frank Luna's excellent book to bring me this far.
It may not look like much. In fact, it isn't much. But 200 lines of code isn't a number to scoff at. That's the price you pay for tight control. It's a price that I'm completely and totally willing to pay.
From here on out, I'm committed to learning how to do things as thoroughly as possible, without using helper classes and frameworks like DXUT. My goal by the end of the summer: be able to write a simple 3D game in native DX starting with nothing but a blank project in VC++ and using no resources other than IntelliSense.
Crazy, I know. But talk about rewarding.
Here's an interesting little shader I wrote today, initially as an attempt to render the Mandelbrot Set in real-time. Unfortunately, my attempt failed somewhere along the line, because the output set only vaguely resembled the famous Mandelbrot image. Instead of quitting, I played around with the equation and came up with some interesting, real-time morphing fractals. Couple that with a little bloom filter and color transformation, and you've got a pretty nice little image.
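For comparison, here is the standard escape-time iteration the shader was originally meant to implement, as a CPU sketch; the per-pixel version in HLSL is the same loop with the iteration count mapped to a color:

```cpp
#include <complex>

// Standard Mandelbrot escape-time test: iterate z -> z^2 + c and count
// how many steps it takes |z| to exceed 2. Points that never escape
// within maxIter iterations are assumed to be in the set.
int mandelbrotIterations(std::complex<float> c, int maxIter)
{
    std::complex<float> z(0.0f, 0.0f);
    for (int i = 0; i < maxIter; ++i) {
        z = z * z + c;
        if (std::norm(z) > 4.0f) // |z|^2 > 4  <=>  |z| > 2
            return i;            // escaped: outside the set
    }
    return maxIter;              // assumed inside the set
}
```

Perturbing the `z * z + c` step (different exponents, extra terms) is exactly the kind of tweak that produced the morphing fractals above.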
Pixel shaders amaze me.
Well, the Workshop on Algorithmic Computer Music has finally come to a close. The presentation went well and I feel that my project was well-received.
WACM Presentation 2010
- More fine-tuning of the percussive L-system
- All probabilities and constants that affect the production rules and hit context rules are now located in compact lists, allowing for easy modification
- Wrote a humanization function that adds both random variation as well as sinusoidal accenting to arbitrary patterns
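The humanization idea in the last bullet can be sketched as follows. This is a generic illustration, not the project's actual API: per-hit random jitter is layered with a sinusoidal accent curve across the bar, so the pattern breathes instead of sounding mechanical.

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// Humanize a pattern of note velocities (0..1): add small random
// variation plus a sinusoidal accent that swells and recedes across
// the bar. Parameter names are illustrative.
std::vector<float> humanize(const std::vector<float>& velocities,
                            float jitter, float accentDepth, float accentPeriod)
{
    std::vector<float> out(velocities.size());
    for (size_t i = 0; i < velocities.size(); ++i) {
        float random = jitter * (2.0f * std::rand() / (float)RAND_MAX - 1.0f);
        float accent = accentDepth *
                       std::sin(2.0f * 3.14159265f * i / accentPeriod);
        float v = velocities[i] + random + accent;
        out[i] = std::fmax(0.0f, std::fmin(1.0f, v)); // clamp to valid range
    }
    return out;
}
```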
The project is just about finished! I'm extremely happy with how it turned out.
One of the WACM presentations today struck me with its similarity to some of my previous ideas concerning melodic coherence. In fact, Daniel seems to have already developed, in an impressive amount of detail, the very thing that I was trying to do with lateral movement in EvoSpeak.
Daniel breaks mid-level composition into two basic elements: the motive and the gesture. A motive can be thought of as a melodic idea or phrase. Though it does not represent an exact set of pitches or offsets, it must carry some amount of data that characterizes the idea. In principle, I can imagine many valid ways to represent a motive: a random stream, a set of intervals, or a set of abstract quantitative characteristics. Ideally, the motive would have explicit rhythmic characteristics, in keeping with my belief in the importance of strong rhythmic coherence. The gesture, on the other hand, is the function that takes a motive in its abstract, native form and maps it to precise pitch form. A gesture can be thought of as a transformation of the input data - the motive - into pitch space.
In his implementation of the idea, Daniel grounds his gesture types with analogies to natural language processing. I feel, however, that such analogies are not necessary. The important concept here is that of retaining an abstract object, representative of a musical idea, and presenting it in different ways over the course of a composition via the use of functional transformations.
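The concept can be made concrete with a small sketch. Here a motive is represented as a set of intervals (one of the several valid representations mentioned above), and a gesture is any function from that abstract data into pitch space; the two example gestures are my own illustrations, not Daniel's implementation:

```cpp
#include <functional>
#include <vector>

// A motive: an abstract set of intervals, in semitones.
using Motive = std::vector<int>;
// A gesture: any transformation from a motive (plus a root pitch)
// into concrete pitches.
using Gesture = std::function<std::vector<int>(const Motive&, int)>;

// One possible gesture: walk the intervals upward from the root.
std::vector<int> ascendingGesture(const Motive& m, int root)
{
    std::vector<int> pitches{root};
    for (int interval : m)
        pitches.push_back(pitches.back() + interval);
    return pitches;
}

// Another gesture over the same motive: mirror the intervals downward,
// presenting the same musical idea in a different guise.
std::vector<int> invertedGesture(const Motive& m, int root)
{
    std::vector<int> pitches{root};
    for (int interval : m)
        pitches.push_back(pitches.back() - interval);
    return pitches;
}
```

Applying different gestures to the same motive over the course of a composition is exactly the kind of functional re-presentation of a retained idea described above.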
- Designed a set of L-system production rules for creating interesting patterns
- Created a contextual hit system, wherein the output of the L-system changes meaning based on beat stress
- Allows for temporal coherence despite the L-system's blindness to time
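The two pieces above - string rewriting plus context-dependent interpretation - can be sketched like this. The production rules and hit mappings below are illustrative stand-ins, not the engine's actual rules:

```cpp
#include <map>
#include <string>
#include <vector>

// Minimal L-system: repeatedly rewrite each symbol by its production
// rule; symbols without a rule pass through unchanged.
std::string expand(std::string axiom,
                   const std::map<char, std::string>& rules, int iterations)
{
    for (int i = 0; i < iterations; ++i) {
        std::string next;
        for (char c : axiom) {
            auto it = rules.find(c);
            next += (it != rules.end()) ? it->second : std::string(1, c);
        }
        axiom = next;
    }
    return axiom;
}

// Contextual hit interpretation: the same symbol maps to a different
// hit strength depending on whether it lands on a stressed beat. This
// is what restores temporal coherence despite the L-system itself
// being blind to time. Velocities are MIDI-style (0..127).
std::vector<int> interpret(const std::string& pattern, int beatsPerBar)
{
    std::vector<int> hits;
    for (size_t i = 0; i < pattern.size(); ++i) {
        bool stressed = (i % beatsPerBar) == 0; // downbeat of each bar
        if (pattern[i] == 'A')
            hits.push_back(stressed ? 127 : 90); // accented vs. normal hit
        else
            hits.push_back(stressed ? 60 : 0);   // ghost note vs. rest
    }
    return hits;
}
```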
I'm already feeling good about the output of the percussive L-system engine. With more fine-tuning, this system may actually come to rival GGrewve in future weeks!