I swore that I wouldn't go to bed until I made a good-looking planet in ShaderToy. No complexity, no frills, no BS...just finding out what makes a planet look good. LT planets look pretty bad IMO.
Thankfully, I was successful. Not to say that this is the best-looking planet ever, or anything, but it's a definite improvement over my previous works. AND, I finally implemented "the real deal" atmospheric scattering, instead of the cheap hacks I've been using up to this point. It took many hours of staring at O'Neil's GPU Gems article. In the end, I was able to simplify his implementation to about 13 lines and achieve good results. It's nice to finally have this technique under my belt.
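For reference, the core of O'Neil-style scattering is a pair of nested density integrals: optical depth accumulated along a ray through an exponential atmosphere, then Beer-Lambert transmittance from that depth. Here's a toy Python sketch of just that piece; the constants and function names are mine for illustration, not O'Neil's actual code or my 13-line shader:

```python
import math

SCALE_HEIGHT = 0.25     # density falls off as exp(-h / SCALE_HEIGHT); illustrative value
PLANET_RADIUS = 1.0

def density(h):
    """Atmospheric density at altitude h above the surface."""
    return math.exp(-max(h, 0.0) / SCALE_HEIGHT)

def optical_depth(p0, p1, steps=16):
    """Integrate density along the segment p0 -> p1 (midpoint rule)."""
    seg = [(b - a) / steps for a, b in zip(p0, p1)]
    seg_len = math.sqrt(sum(c * c for c in seg))
    total = 0.0
    for i in range(steps):
        mid = [a + c * (i + 0.5) for a, c in zip(p0, seg)]
        h = math.sqrt(sum(c * c for c in mid)) - PLANET_RADIUS
        total += density(h) * seg_len
    return total

def transmittance(p0, p1, beta=4.0):
    """Fraction of light surviving the path (Beer-Lambert law)."""
    return math.exp(-beta * optical_depth(p0, p1))
```

The in-scattering loop in a full implementation repeats this same integral from each sample point toward the sun, which is where most of the cost (and most of the simplification opportunity) lives.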
Real-time Planet with Atmospheric Scattering
Not sure if this level of quality will be achievable in-game yet...but given that it's running at 60 FPS in just a pixel shader, I would say it will probably be fine.
Pretty happy with how this came out. Do not congratulate me; I am not the one who created a universe in which 29 lines of simple code can create such beauty. It is not my work.
Click to view it evolving in real time. Click here for full-size still.
It's difficult to grasp the fact that elegant simplicity can produce infinitely complex beauty. But this is a fact of our universe...incredible.
A one-line equation is at the heart of this beauty.
It is easy to lose oneself in such things.
(PS - My last visit to fractals looked like this. It is comforting to know that I am making progress.)
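The classic one-line equation behind escape-time fractals is the quadratic iteration z → z² + c; whether that is the exact formula behind this particular render is my assumption, not something stated above. A minimal sketch:

```python
# Escape-time iteration for the quadratic map z -> z*z + c.
# Assumed formula for illustration; the post doesn't name its equation.

def escape_time(c, max_iter=50):
    """Iterations until |z| exceeds 2, or max_iter if it stays bounded."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c   # the one-line equation
        if abs(z) > 2.0:
            return i
    return max_iter

inside = escape_time(0j)       # never escapes -> 50
outside = escape_time(2 + 0j)  # escapes on the second iteration -> 1
```

Coloring each pixel by its escape count is all it takes to turn this into an image.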
Here it is...a rendering of a terrain using a multifractal heightmap. I think the beauty of the terrain speaks for itself!
Notice the juxtaposition of the extremely flat region near the camera with the mountains in the background. Such extreme features can only happen in multifractals!
In a rather old textbook I read recently (Texturing and Modeling: A Procedural Approach; though note that I actually have the 1991 edition), the author briefly mentioned, at the end of the book, a type of fractal called the multifractal. The idea of a multifractal is to vary the fractal dimension at each point based on, guess what, another fractal map! In a sense, it adds another level of recursion within the fractal.
So why should we care? Well, if Perlin noise makes good terrains, multifractal Perlin noise makes them 100x better! The explanation is simple. In real terrain, we see features like mountains, but we also see features like plains. Mountains have a high fractal dimension, while plains have a low fractal dimension. Perlin noise, in and of itself, has a relatively constant fractal dimension. Multifractal noise doesn't. For this reason, it serves much better for the purpose of generating terrains!
Here's a sample multifractal Perlin noise heightmap generated with a home-brewed HLSL shader:
Notice that, in comparison to previous heightmaps, this terrain is far more varied - we can clearly see mountainous features as well as low features.
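The varying-dimension idea can be sketched in a few lines. In this Python sketch, cheap value noise stands in for Perlin noise, and a second low-frequency noise field drives the per-octave gain (low gain reads as plains, high gain as mountains). The control-field mapping is my own choice, not the book's formulation:

```python
import math

def _hash(ix, iy):
    """Deterministic pseudo-random value in [0, 1) for a lattice point."""
    n = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    n = ((n ^ (n >> 13)) * 1274126177) & 0xFFFFFFFF
    return ((n ^ (n >> 16)) & 0xFFFFFFFF) / 2**32

def _smooth(t):
    return t * t * (3.0 - 2.0 * t)   # smoothstep interpolant

def value_noise(x, y):
    """Bilinearly-smoothed lattice noise in [0, 1)."""
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    sx, sy = _smooth(fx), _smooth(fy)
    top = (1 - sx) * _hash(ix, iy)     + sx * _hash(ix + 1, iy)
    bot = (1 - sx) * _hash(ix, iy + 1) + sx * _hash(ix + 1, iy + 1)
    return (1 - sy) * top + sy * bot

def fbm(x, y, octaves=5, gain=0.5):
    """Ordinary fractional Brownian motion: constant gain per octave."""
    amp, freq, total = 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amp * value_noise(x * freq, y * freq)
        amp *= gain
        freq *= 2.0
    return total

def multifractal(x, y, octaves=5):
    """Gain varies per point, driven by a low-frequency control noise."""
    # control in ~[0.2, 0.8): low -> plains (fast falloff), high -> mountains
    control = 0.2 + 0.6 * value_noise(x * 0.1, y * 0.1)
    return fbm(x, y, octaves=octaves, gain=control)
```

Sampling `multifractal` over a grid instead of `fbm` is the only change needed to get the mixed flat/mountainous character described above.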
In keeping with the spirit of the recent drive to create a "generalized" algorithm system (starting with the XenCut fractal cutting engine mentioned a while ago), I'm beginning yet another grammar engine. The goal this time? Lighter and more portable than WordEngine. Easy applicability, easy conversions between words and soft patterns of notes, etc. The engine will also be more generalized. Words are simply comma-delimited strings now, and have no duration, velocity, or time data. To create full sets of note, velocity, time, and duration data, all one has to do is combine four words.
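The combine-four-words idea above can be sketched directly. In this hypothetical Python sketch, each "word" is a comma-delimited string and zipping a pitch word, velocity word, time word, and duration word yields complete note events; the field names and order are my guesses, not the engine's actual format:

```python
# Combine four comma-delimited "words" into full note events.
# Illustrative sketch of the post's idea, not the real engine code.

def parse_word(word):
    """Split a comma-delimited word into a list of numbers."""
    return [float(tok) for tok in word.split(",")]

def combine_words(pitches, velocities, times, durations):
    """Zip four words together into note-event dictionaries."""
    fields = map(parse_word, (pitches, velocities, times, durations))
    return [
        {"pitch": p, "velocity": v, "time": t, "duration": d}
        for p, v, t, d in zip(*fields)
    ]

notes = combine_words("60,64,67", "90,80,85", "0,1,2", "1,1,2")
```

Because a word carries no timing or velocity data on its own, any algorithm that emits comma-delimited strings (a fractal cutter, an L-system, a stochastic engine) can supply any of the four roles interchangeably.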
The key component that I want to emphasize in development is the interoperability of the algorithms. For example, the XenCut/generalized fractal engine could be used to create random words for the new grammar engine's dictionary. A Lindenmayer system could then be used to string words together into phrases, and a stochastic engine could control the distribution of phrases.
I'm hoping that this development of easy-to-use algorithms will lead to the creation of the ultimate hybrid plugin that will bring together the strengths of all methods as I originally envisioned when I first began the program.
Sadly, fractals are starting to get old. They've had a good run in mGen, and I don't anticipate them being topped by any other algorithm type anytime soon. Nonetheless, I'm getting tired of hearing them, and my standards for coherence are going up. At first, fractals sounded quite coherent and creative, but I think it's time to push for more.
Unfortunately, I don't know how to progress as far as algorithms go. I could try a grammar system layered on top of a fractal decision engine...but where will the grammar source come from? Would a fractal decision engine even sound good? If not, what else could I use to make choices (random functions are just too simple at this point in development)?
Looks like it's going to take some serious creative inspiration to get the next wave of development started.
With XenCut fully working, preliminary CrystalNet tests began tonight. The testing was really just limited to observing how the XenCut engine operates under the parameters required to generate a structure. The parameters differ greatly from those used to generate fractal melodies, since abstracting a structure to a fractal pattern is completely different from abstracting a melody to one. In particular, the engine had to work with blocks of much smaller size (a single unit represented an entire movement of the composition) and deal with the fact that the starting material consisted of a few very large blocks placed at the root block.
Judging from a graphical display of the results of the cutting, CrystalNet (and XenCut) already seems to be doing a good job of handling the structure. The results were similar to the diagrams I posted a while back that came with the original "random cutting structure module" idea.
My only concern is the depth of this abstraction. Taking a composition, splitting it into pieces, then treating each instrument's instructions like a physical block in space certainly doesn't seem as intuitive as treating the individual notes of a melody as blocks. Though the output looks nice on screen, it may result in incoherence and seemingly random part instructions.
I really won't know how well the fractal/random cutting method works for structures until CrystalNet is up and generating compositions. If all goes well, that should only take another week or so.
As I mentioned earlier, a new fractal/random cutting engine is under development. The goal of the new engine is to be quicker and easier to implement, more flexible, and more powerful than the Fraccut engine (the original cutting engine). This new engine will be called XenCut.
XenCut went through some preliminary tests tonight and is already performing well. Since it's based on the OS data framework, the code looks very readable and runs fast. But the real power of the engine is in the new cutting methods. The engine now has three cutting "modes": linear factor, divisor, and offset. The linear factor mode simply takes the cutting parameters to be coefficients (multipliers) of the original block widths. The divisor mode does the same but with the reciprocal of the parameters (i.e. division). Finally, the last mode, which is already coming in very handy for CrystalNet, simply takes the parameters to be offsets (additive constants). The third mode helps to avoid strange durations and still find valid cuts when the original block widths don't fall within the typical power-of-two ranges.
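The three cutting modes are easy to illustrate. In this Python sketch, a block is just an `(offset, width)` pair and one cut splits it in two; the function name and block representation are my own, not XenCut's actual code:

```python
# Illustrative reimplementation of the three cutting modes described above.

def cut_block(block, param, mode):
    """Split (offset, width) into two blocks according to the cutting mode."""
    offset, width = block
    if mode == "linear":      # param is a coefficient of the width
        first = width * param
    elif mode == "divisor":   # param divides the width
        first = width / param
    elif mode == "offset":    # param is an additive constant
        first = param
    else:
        raise ValueError("unknown mode: " + mode)
    if not 0 < first < width:
        raise ValueError("cut falls outside the block")
    return (offset, first), (offset + first, width - first)

a, b = cut_block((0.0, 8.0), 0.25, "linear")   # -> (0.0, 2.0) and (2.0, 6.0)
```

Note how the offset mode can still produce a clean cut of, say, a width-7 block (7 = 3 + 4), where a power-of-two-minded multiplier would yield awkward durations.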
I expect XenCut to find application in many more modules, and I hope it will provide a valuable source of fractal building blocks for future modules. CrystalNet will be the first module to make use of the XenCut engine. Given that I've found no other adequate way to do random structures, XenCut will really prove itself if it can power a working structure generator - that is, if CrystalNet comes through.
The diagrams given in the last post would seem to imply that making a structure module out of a fractal cutting engine would be easy. The truth is, however, that it's nowhere near as straightforward as making a melody.
Some of the central problems I'm trying to resolve at the moment:
- What blocks do you start with? Would slapping a bunch of root blocks that span the whole composition at a time offset of zero produce as good results as a more sophisticated method? Or does the cutting engine bring equal complexity out of both of these situations?
- How do you handle separate pieces/parts/movements (whatever you want to call the compositional "divisions")? It's not hard to conceptually see that the blocks need to span more than one movement in order to be interesting (otherwise what would make this method any different from random selection?), but how does one know where to create a division? What about the fact that blocks will inevitably overlap divisions? The data structure does not allow multiple part instructions per movement for the same generative module, so instructions must not overlap.
- Once and for all - how should cutting work? Should it be based on divisors or on linear multiplicative factors? Furthermore, should we allow the engine to invert the durations of the old and new block so that the cut includes its "complementary" cut (i.e. the same cut in the reverse order)?
And finally, there's still the question of tempo, time signature, and pretty much everything else dealing with time, all of which needs to be addressed. The relationship between the structure module and the coordination module still needs to be determined, and the structure module needs to become more influential over the others. After all, it crafts the entire composition; it should do more than say "start" and "stop."
Obviously there's a lot of work left to do with the core modules. Given the recent progress in generative modules, however, it's not surprising that the other modules are starting to look weak. That's the way it works with this project. A breakthrough in one thing causes everything else to look inadequate. Slowly but surely, the quality of each individual part rises to meet the expectations of the leading components.