No, seriously...this is by far the most epic - and by epic, I mean legendary, incredible, amazing - failure I've ever had. Don't ask me how or why, but, for some reason, changing the size of my diffuse texture to 1x1 caused...no, not a scene of solid-colored objects as I had hoped...no, not a failure message from D3D...no, not a crash...but rather, a mind-blowing scene of recursion. Of course, it should have been obvious to me that resizing a diffuse texture would...cause it to be replaced by the primary render target!?!?
This is by far the coolest glitch that has ever befallen me. Every surface displays an exact replica of that which is rendered on the screen (and it also happens that I had SSAO and DoF turned on at the time of the glitch...so it's a rather beautiful setting for a glitch!). Good example of recursion, yes? Also quite a trip.
But seriously...why didn't I ever think to do this intentionally???
Glide has come under a lot of fire recently for lack of both creativity and coherence. After extensive testing tonight, I came to the conclusion that randomly-generated dictionaries have very little uniqueness. In a series of many renders using specific dictionaries, I was unable to distinguish between compositions that used different dictionaries. However, upon constructing a custom dictionary, the difference immediately became apparent. There can be only one conclusion: the dictionary generator needs a lot of improvement.
But how does one create good dictionaries? That's really the matter at hand. The generator used by Glide was never meant to be the final version - in fact, as I recall, it was simply put there as a placeholder, but I ended up relying on it anyway to make dictionaries.
This problem is really just another manifestation of the "big" problem in algorithmic composition: how can we create data that is both original and coherent? A balance struck in the dictionary-creation process will be a balance struck during runtime and, by extension, a balance struck in composition.
Here are a few ideas for adequate dictionary generation:
- L-systems with random production rules
- Fractal cutting spaces
- Random permutations of a base dictionary (wait...do I smell recursion?)
- Root entropy
I hope to test each of these methods in the coming weeks in order to build a successful dictionary-synthesis tool for use with Glide.
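As a first experiment, here's roughly how I picture the L-system idea working, sketched in Python (the alphabet, rule shapes, and function names are placeholders, not anything Glide actually uses):

```python
import random

# Toy sketch: an L-system with randomly generated production rules,
# used to grow a motif "dictionary" from a tiny alphabet.
# Everything here is illustrative, not Glide's actual generator.

ALPHABET = ["u", "d", "r"]  # up a step, down a step, repeat note

def random_rules(seed=None):
    """Give each symbol a random expansion of 2-3 symbols."""
    rng = random.Random(seed)
    return {s: [rng.choice(ALPHABET) for _ in range(rng.randint(2, 3))]
            for s in ALPHABET}

def expand(axiom, rules, depth):
    """Apply the production rules `depth` times."""
    out = list(axiom)
    for _ in range(depth):
        out = [sym for s in out for sym in rules[s]]
    return out

def build_dictionary(n_entries, seed=0):
    """Each entry is one motif grown from a random axiom symbol."""
    rng = random.Random(seed)
    rules = random_rules(seed)
    return [expand([rng.choice(ALPHABET)], rules, rng.randint(1, 3))
            for _ in range(n_entries)]

dictionary = build_dictionary(4, seed=42)
for entry in dictionary:
    print("".join(entry))
```

The appeal is that random production rules give each dictionary its own "grammar," which is exactly the kind of uniqueness the current generator lacks.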
Well, I've encountered a rather nasty problem with the MainBlock structure tonight during my attempts to get the new renderer working. Here's how the structure works at the moment:
- Each generative module is allowed to output an unlimited number of generative tracks, each with its own patch setting
- The main structure keeps track of how many generative modules are loaded
- The main structure keeps track of how many generative tracks have been outputted (>= the number of generative modules)
Unfortunately, this (poorly-thought-out) system creates a problem: generative tracks are "orphaned" from their parent generative modules. That is, there is no way, unless the module-to-track mapping happens to be 1-to-1, to determine which generative track belongs to which module in a given data structure. Thus, the renderer cannot figure out how to assign instrument information to the tracks.
There are a few ways around this:
- Enforce strict patch output in modules
  - Limit the "instrument" field to a certain number of instruments (piano, guitar, etc.)
  - Default to piano
  - Eliminates the need to know the track's part
- For each module, record the number of tracks outputted
- For each track, record the ID of the parent module
The first fix, of course, is the most intuitive. Tracks should, ideally, be completely independent of the modules that created them. Unfortunately, this would require recompiling almost every plugin in existence, making sure that they all output proper patch information. The other two are rather simple but hacky fixes.
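For reference, the third fix (recording the parent module's ID on each track) might look something like this in sketch form; the names are stand-ins, since the real structures live in AHK:

```python
from dataclasses import dataclass, field

# Sketch of the parent-ID fix: each track records which module emitted
# it, so the renderer can recover the module-to-track mapping.
# GenerativeTrack/MainBlock are illustrative names, not the real code.

@dataclass
class GenerativeTrack:
    parent_module_id: int   # which module produced this track
    patch: str = "piano"    # patch/instrument hint, may be a default

@dataclass
class MainBlock:
    tracks: list = field(default_factory=list)

    def add_track(self, module_id, patch="piano"):
        self.tracks.append(GenerativeTrack(module_id, patch))

    def tracks_of(self, module_id):
        """No more orphans: look up all tracks a module outputted."""
        return [t for t in self.tracks if t.parent_module_id == module_id]

block = MainBlock()
block.add_track(0, "piano")
block.add_track(1, "guitar")
block.add_track(1, "bass")
print(len(block.tracks_of(1)))  # → 2
```

Hacky, yes, but it doesn't require recompiling a single plugin - the main structure can stamp the ID on each track as it comes in.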
Alas, Fraccut has purported to offer equality to its block slices, but has done so falsely all along. The equality parameter, as I just discovered, is completely broken. No wonder I haven't heard any lovely arpeggiations from Fraccut. I set the equality to 100% and the slices are still about as rhythmically regular as a tangent graph. Which isn't good.
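For the record, here's roughly the behavior I expect from the parameter once it's fixed - invented semantics, sketched in Python rather than the actual AHK:

```python
import random

# Sketch of what "equality" should mean for block slicing: blend each
# random slice duration toward the equal-division duration. At
# equality = 1.0 every slice is identical (arpeggiation-like rhythm);
# at 0.0 the cuts are fully random. Invented semantics for illustration.

def slice_block(duration, n_slices, equality, rng):
    raw = [rng.random() for _ in range(n_slices)]
    total = sum(raw)
    random_slices = [duration * r / total for r in raw]
    equal = duration / n_slices
    # linear blend; total duration is preserved at any equality setting
    return [equality * equal + (1 - equality) * s for s in random_slices]

rng = random.Random(0)
print(slice_block(4.0, 4, 1.0, rng))  # → [1.0, 1.0, 1.0, 1.0]
```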
A long night of bug-hunting awaits.
The diagrams given in the last post would seem to imply that making a structure module out of a fractal cutting engine would be easy. The truth is, however, that it's nowhere near as straightforward as making a melody.
Some of the central problems I'm trying to resolve at the moment:
- What blocks do you start with? Would slapping down a bunch of root blocks that span the whole composition at a time offset of zero produce results as good as a more sophisticated method? Or does the cutting engine bring equal complexity out of both of these situations?
- How do you handle separate pieces/parts/movements (whatever you want to call the compositional "divisions")? It's not hard to conceptually see that the blocks need to span more than one movement in order to be interesting (otherwise what would make this method any different from random selection?), but how does one know where to create a division? What about the fact that blocks will inevitably overlap divisions? The data structure does not allow multiple part instructions per movement for the same generative module, so instructions must not overlap.
- Once and for all - how should cutting work? Should it be based on divisors or on linear multiplicative factors? Furthermore, should we allow the engine to invert the durations of the old and new block so that the cut includes its "complementary" cut (i.e. the same cut in the reverse order)?
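To make the divisor question concrete, here's a toy sketch of divisor-based cutting with the optional complementary cut (everything here is illustrative, not Fraccut's actual engine):

```python
import random

# Toy divisor-based cutting: a block of duration d splits at d/k, and
# the "complementary" cut is the same two pieces in reverse order.

def cut(duration, divisor, complementary=False):
    a = duration / divisor
    b = duration - a
    return (b, a) if complementary else (a, b)

def fractal_cut(duration, depth, rng):
    """Recursively cut, randomly choosing divisor and orientation."""
    if depth == 0:
        return [duration]
    left, right = cut(duration, rng.choice([2, 3, 4]),
                      complementary=rng.random() < 0.5)
    return (fractal_cut(left, depth - 1, rng) +
            fractal_cut(right, depth - 1, rng))

rng = random.Random(7)
slices = fractal_cut(4.0, 3, rng)
print(slices)  # eight slices that sum back to 4.0
```

Allowing the complementary cut doubles the rhythmic vocabulary for free, which is part of why the question matters.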
And finally, there's still the question of tempo, time signature, and pretty much all other things dealing with time that needs to be addressed. The relationship between the structure module and the coordination module still needs to be determined, and the structure module needs to become more influential over the others. After all, it crafts the entire composition. It should do more than say "start" and "stop."
Obviously there's a lot of work left to do with the core modules. Given the recent progress in generative modules, however, it's not surprising that the other modules are starting to look weak. That's the way it works with this project. A breakthrough in one thing causes everything else to look inadequate. Slowly but surely, the quality of each individual part rises to meet the expectations of the leading components.
Finally, I found the culprit causing Fraccut's random behavior. Indeed, one tiny call to a pseudorandom generator got past my eye and was causing the entire process to change over the composition. Now Fraccut's ideas are static, as expected.
Why, one might ask, would I want the ideas to be static when the so-called "glitch" was allowing them to be dynamic? Well, in time I will make them dynamic. But I want the variation process to be controlled so that Fraccut's ideas evolve and mature over time like those of a real musician. Idea development, though it must ultimately be unpredictable, shouldn't be left to a totally random process that has no concept of development. I will start working on algorithms that slightly vary the seed input of the cutting engine, which should produce modified ideas that resemble the original ones.
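A first stab at what seed variation might look like, sketched in Python with an invented seed format:

```python
import random

# Sketch of controlled idea development: mutate a small fraction of the
# cutting engine's seed string so the varied idea stays recognizably
# related to the original. The numeric seed format is made up.

def variate_seed(seed, rate=0.1, rng=None):
    """Replace roughly `rate` of the digits in a numeric seed string."""
    rng = rng or random.Random()
    out = []
    for ch in seed:
        if rng.random() < rate:
            out.append(str(rng.randint(0, 9)))  # mutate this digit
        else:
            out.append(ch)                      # keep the original
    return "".join(out)

original = "314159265358979"
varied = variate_seed(original, rate=0.2, rng=random.Random(3))
print(original)
print(varied)
```

The rate knob is the point: a low rate should sound like a maturing idea, a high rate like a brand-new one.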
With the engine glitches a thing of the past, I can really start concentrating on allowing Fraccut to fulfill its potential.
I'm working on giving Fraccut a greater sense of continuity. What I'm really trying to do is expand Fraccut's attention span, so to speak. Rather than making large jumps back to parent blocks when chord changes and such occur, Fraccut should have the ability to continue on with a previous idea provided that the idea meshes with the new key. This is the fifth major controllable variable added to the configuration interface and it will, with any luck, really separate the melodies from other static modules or other Fraccut modules with low continuity settings.
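Roughly how I picture the continuity check working (placeholder names and numbers, not the real plugin code):

```python
import random

# Sketch of a "continuity" control: on a chord change, keep developing
# the current idea when its pitches fit the new key, otherwise jump
# back to the parent block. Higher continuity = longer attention span.

C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of the new key

def fits_key(idea, key_pitch_classes):
    return all(p % 12 in key_pitch_classes for p in idea)

def next_idea(current, parent, key_pitch_classes, continuity, rng):
    if fits_key(current, key_pitch_classes) and rng.random() < continuity:
        return current   # carry the previous idea across the change
    return parent        # make the large jump back to the parent block

rng = random.Random(1)
idea = [60, 64, 67]  # C-E-G, fits C major
print(next_idea(idea, [60], C_MAJOR, continuity=1.0, rng=rng))
# → [60, 64, 67]: the idea survives the chord change
```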
On the other hand, I'm still trying to figure out how on earth Fraccut is continuing to generate new, dynamic ideas when I'm restricting it to only one idea per composition at the moment. I'm having trouble with the randomness engine. It's really a very strange thing. There should be, at present, no way for randomness to enter the cutting process. The process is controlled completely by a string that is generated before the cutting is done. However, Fraccut's output is not static over measures, as one would expect it to be with this system in place and holding the input string constant. The plugin really does have a mind of its own at the moment. While it's cool that Fraccut is breaking my rules and running with its own ideas, I must ensure that the engine is operating exactly as I expect it to before continuing development of more advanced features. I need precise control over Fraccut's ideas so that I can distribute them properly through compositions.
Overall, however, the fractal experience continues to be a very rewarding one. I am hearing many original, coherent compositions that are only a few steps away from comparability with human compositions.
Alas, it seems that much of the creativity recently attributed to the two excellent performances of Fraccut may be the result of glitches rather than a well-written algorithm. More stress-testing of Fraccut has revealed some motifs that actually appear in numerous compositions, indicating a problem with the code (since the engine isn't supposed to inherently "like" anything).
New creativity methods have been implemented. I'm still using the pseudo-random generator approach, but in a different, more coherent way.
Fraccut is turning out to have many strange glitches nested within the engine. Strange durations and odd beat patterns are starting to emerge. It looks like the process of refining fractal cutting won't be as easy as it first appeared. But then again, that would have been all too easy.
The hardest part of the whole process will be developing helpful debugging tools. I'm going to have to design some interfaces to let me really see what Fraccut is thinking in a visual way so I can explore these strange glitches that are apparently more than just typos in the code.
I hope that, when Fraccut comes out clean, it will still sound as good as it did the first two times.
Several updates tonight. First of all, I got the new structure module Manual working. It'll be the structural equivalent of ProgLib - a simple library of structure choices in formats like ABABCBA, etc. I'm also experimenting with a new way of grouping instruments for issuing part instructions. As simple as the plugin is right now, I can't tell if it's actually doing anything. It more or less provides uniform movements with nothing going on, which is fine with me while I test the other modules. I guess that's all I really need for a while - a solid foundation.
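In sketch form, all Manual really has to do is expand a form string into timed movements - something like this toy Python version (measure counts are made up):

```python
# Toy sketch of a "Manual"-style structure module: expand a form
# string like "ABABCBA" into (section, start, end) movement spans.
# Fixed section length for simplicity; the real module would vary it.

def expand_form(form, section_measures=8):
    """Map each letter to a (section, start_measure, end_measure) span."""
    spans = []
    start = 0
    for letter in form:
        spans.append((letter, start, start + section_measures))
        start += section_measures
    return spans

for section in expand_form("ABABCBA", 8):
    print(section)  # e.g. ('A', 0, 8), ('B', 8, 16), ...
```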
I uncovered several problems in ProgLib that were keeping it from randomizing the progression properly. I also added a new library called "Typical," which will include the most basic progressions found in practically everything.
Output started sounding decent again but wasn't anything special. EvoSpeak's looping is getting kind of annoying; I need to make it dynamic as soon as possible. On the upside, I still have almost no complaints about GGrewve. I often forget that I've actually had great success on at least one plugin. GGrewve does its job so well that I forget it even exists; it's as if the drum tracks just happen magically.
Towards the end of the night a glitch in ProgLib actually contributed to a very nice assisted composition. I've started documenting interesting glitches now because the products can be quite awesome. In this glitch, ProgLib was failing to read one of the new progressions in the "Typical" library properly. It had the chords right and it had the durations right, but the fact that one duration was twice as long as the others caused something to go wrong internally (probably a duration counter), causing ProgLib to write a pleasing 5/4 progression that cycled. In other words, ProgLib would add an extra measure each cycle by doubling one of the single-measure duration chords. Very strange indeed, but it resulted in an inspiring composition.
There's nothing better than joy in things going wrong.
I've got to find a way out of this mGen rut. It's nice that I'm getting a lot of experience with new concepts like artificial intelligence and artificial neural networks. Naturally, it's great experience to have, especially if I go down the computer science career path. Unfortunately, both ideas are currently too far abstracted from my work in music. We have yet to see anything useful come from aoAIm other than toy problem solutions (because I still haven't got the stomach to tackle the heuristics problem). Neural nets look fantastic and make very pretty pictures, but I have my doubts about the musical potential of such output. Furthermore, it's great that I'm learning C++ again, as it's the standard as far as robust programming languages go. But I've already written over 23,000 lines in AHK and I obviously can't switch midway.
I need to focus on the immediate. I need to get Sample #4 out. I need to get a decent structure module done, even if it's just a manual one. I need to work with EvoSpeak to finally create a melody that evolves over the entire piece instead of remaining static, which should break the static feel of the current compositions. Speaking of static feel, I've got to get some progression changes happening in ProgLib or even better, ChillZone (if I can figure out how to recode a good algorithm).
I'm afraid I might be drifting towards the thing that I have been scorning since the beginning. I looked at the theses, the scholarly research, the fancy jargon, and I said to myself: "the reason they get nothing done is because they are too concerned with the complexities and unwilling to see the simplicity of music." I've gotten lost and tangled in A.I., ANNs, frameworks, data structures, programming languages, GUIs, algorithm types, grammars, and the like. I've forgotten what's at the heart of my project: music. If I don't have music to show at the final presentation, I won't have anything. My research is worth nothing if I can't prove that it works.
I've got to make it work. It doesn't have to be pretty. It doesn't have to have a sleek GUI. It doesn't have to have a fancy algorithm or neurons or aspirations. It just has to sound good. That's really all there is to it.