Finally, I found the culprit causing Fraccut's random behavior. Indeed, one tiny call to a pseudorandom generator got past my eye and was causing the entire process to change over the course of the composition. Now Fraccut's ideas are static, as expected.
Why, one might ask, would I want the ideas to be static when the so-called "glitch" was allowing them to be dynamic? Well, in time I will make them dynamic. But I want the variation process to be controlled so that Fraccut's ideas evolve and mature over time like those of a real musician. Idea development, while it must ultimately be unpredictable, shouldn't be left to a totally random process that has no concept of development. I will start working on algorithms that slightly vary the seed input of the cutting engine, which should produce modified ideas that resemble the original ones.
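As a rough illustration of what controlled variation might look like (a hypothetical Python sketch, not Fraccut's actual engine), one could keep the base seed fixed and let a second, equally deterministic stream replace only a few notes, so the variant resembles the original instead of being a completely new idea:

```python
import random

def idea(seed, length=8):
    """Base idea: a deterministic string of scale degrees from one seed."""
    rng = random.Random(seed)
    return [rng.randrange(7) for _ in range(length)]

def varied_idea(seed, variation, length=8, mutation_rate=0.2):
    """Keep the base idea, but let a second seeded generator replace a
    few positions. Same (seed, variation) pair -> same variant, every time."""
    base = idea(seed, length)
    vrng = random.Random(seed * 1000003 + variation)  # variation stream, also deterministic
    return [vrng.randrange(7) if vrng.random() < mutation_rate else note
            for note in base]
```

Because both streams are seeded, every variant is reproducible, which is exactly the kind of precise control over idea development described above.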
With the engine glitches a thing of the past, I can really start concentrating on allowing Fraccut to fulfill its potential.
I'm working on giving Fraccut a greater sense of continuity. What I'm really trying to do is expand Fraccut's attention span, so to speak. Rather than making large jumps back to parent blocks when chord changes and such occur, Fraccut should have the ability to continue on with a previous idea provided that the idea meshes with the new key. This is the fifth major controllable variable added to the configuration interface and it will, with any luck, really separate these melodies from those of static modules or of Fraccut modules with low continuity settings.
On the other hand, I'm still trying to figure out how on earth Fraccut is continuing to generate new, dynamic ideas when I'm restricting it to only one idea per composition at the moment. I'm having trouble with the randomness engine. It's really a very strange thing. There should be, at present, no way for randomness to enter the cutting process. The process is controlled completely by a string that is generated before the cutting is done. However, Fraccut's output is not static over measures, as one would expect it to be with this system in place and holding the input string constant. The plugin really does have a mind of its own at the moment. While it's cool that Fraccut is breaking my rules and running with its own ideas, I must ensure that the engine is operating exactly as I expect it to before continuing development of more advanced features. I need precise control over Fraccut's ideas so that I can distribute them properly through compositions.
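For illustration only, here is the kind of bug that can produce exactly this symptom in a supposedly seed-driven process (a hypothetical Python sketch, not Fraccut's code): everything is drawn from a generator seeded by the input string, except one stray call to the global, unseeded generator:

```python
import random

def cut_pattern(seed_string):
    """Intended to be fully determined by seed_string."""
    rng = random.Random(seed_string)
    pattern = [rng.randrange(4) for _ in range(8)]
    # BUG: one stray call to the *global* generator, which was never
    # reseeded -- enough to make the output change on every run even
    # though the input string is held constant.
    pattern[0] = random.randrange(4)
    return pattern
```

Auditing the engine for any call that bypasses the seeded generator is the usual way to hunt this down.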
Overall, however, the fractal experience continues to be a very rewarding one. I am hearing many original, coherent compositions that are only a few steps away from comparability with human compositions.
Most of the major glitches in Fraccut have been fixed. As I said earlier, I had serious concerns about whether or not the plugin would retain originality when the glitches were removed. Sure enough, it did.
After several hours of tweaking and fiddling with the code of the Fraccut engine, I'm starting to get addicted to listening. That's something that's never happened before. I've been impressed every now and then with the quality of compositions, but I've never heard such good pieces one after the next. The whole "infinite" thing is really sinking in. It takes the program 20 seconds to generate another beautiful piece.
"I swear, this is the last comp I'll generate tonight." And after listening, "well...actually...I really enjoyed that one. Maybe I'll listen to just one more." And that's just the beauty of it. I can listen to one more. And another, and another, and so on. It's infinite. I rendered three of the many compositions that I heard tonight. That's more than I've ever saved in a single night; I'm usually very selective about the compositions that I render and save to the archive. The program already has a 21 minute playlist on my mp3 player and it's very enjoyable. Of course, in the future, this playlist will be days, weeks, years, and finally, infinite.
Perhaps the best part of all this is that it's nowhere near full potential. I've still got loads of ideas for Fraccut. The plugin is still in its infancy and I intend to see it through to adulthood. As for mGen, it is a mere fetus in a womb of musical potential.
As I said, my journeys into the depths of infinite music are starting to get addictive.
Alas, it seems that much of the creativity recently attributed to the two excellent performances of Fraccut may be the result of glitches rather than a well-written algorithm. More stress-testing of Fraccut has revealed some motifs that actually appear in numerous compositions, indicating a problem with the code (since the engine isn't supposed to inherently "like" anything).
New creativity methods have been implemented. I'm still using the pseudo-random generator approach, but in a different, more coherent way.
Fraccut is turning out to have many strange glitches nested within the engine. Strange durations and odd beat patterns are starting to emerge. It looks like the process of refining fractal cutting won't be as easy as it first appeared. But then again, that would have been all too easy.
The hardest part of the whole process will be developing helpful debugging tools. I'm going to have to design some interfaces to let me really see what Fraccut is thinking in a visual way so I can explore these strange glitches that are apparently more than just typos in the code.
I hope that, when Fraccut comes out clean, it will still sound as good as it did the first two times.
It's been one of those nights that completely justifies pouring my life into algorithmic composition. Tonight, after having developed some theory over the past week about creating yet more coherence in the fractal cutting engine, I finally augmented Fraccut a bit. The premise is very simple. The coherence is drawn from the fact that using the same seed for a pseudorandom number generator will produce the same sequence of numbers. Of course, the implementation is way more complex than this. It's very neat how everything works. Unfortunately, I can't give away such pivotal secrets in a blog!
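The core premise is easy to demonstrate in a few lines (a generic Python sketch; the names are mine, and Fraccut's real implementation is, as noted above, far more involved):

```python
import random

def melody_numbers(seed, n=8):
    """Draw n pseudorandom scale degrees from a generator with a given seed."""
    rng = random.Random(seed)  # local generator; global state untouched
    return [rng.randrange(7) for _ in range(n)]

# The same seed always reproduces the same sequence -- that repeatability
# is where the coherence comes from.
same = melody_numbers(42) == melody_numbers(42)
```

Reusing a seed mid-composition replays a sequence exactly, which is what lets repeated material sound like a deliberate motif rather than a coincidence.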
Now about the composition. Once again, it was a dual Fraccut loadout accompanied by ChillZone Pianist for the background chords. The two Fraccut modules were loaded as-is (the settings automatically randomize on load, so the computer chose the behavior). Now to be honest, I wasn't sure if the whole seed premise would work. I've not messed much with random seeds in the past and from a mathematical standpoint it's a tricky thing to do. Changing the seed by a teeny amount completely alters the entire output. How do you get coherence from that? Well, there are ways. I ended up putting one of the Fraccut parts on a low cello and the other on a high flute. The harmony is beautiful. Best of all, it is quite clear that some motifs/phrases are repeated throughout the composition by each of the Fraccut modules, and yet the parts are far from static!
I'm getting closer and closer to the perfect creativity/coherence balance and I'm immensely pleased with Fraccut's work. I am in love with the new composition and am proud to say that for once, I would actually listen to this output on a normal occasion. I could definitely see this playing in the car or while doing homework.
There really seems to be no limit to what fractal cutting can do. I'm intensely excited (to say the least) to continue pushing the envelope with this new method of melody generation.
Fraccut made its debut today after only a few hours of coding to implement the Fraccut engine in plugin form. The addition of advanced module functions is making plugin coding a lot more straightforward.
The first composition, after debugging a few preliminary compositions that failed, featured two identical Fraccut modules on cello, accompanied by ChillZone with some background strings. The piece blew me away. The cello duet is simply astounding, unlike any previous mGen output. Just as I suspected, the cutting engine struck an almost perfect balance between creativity and coherence! At long last, a method seems to have hit close to the center of this scale!
The parts are obviously deliberate but at the same time unpredictable. This is because the cutting engine places parent blocks on root notes (in future versions it will also do 3rds, 5ths, etc.) and then does cuts. This results in numerous deviations from a simple root note line, but the deviations are all centered around the root note, which means coherence is very evident to the ear.
I couldn't have asked for a better first-try with random cutting. I anticipate great things from Fraccut in the future.
A new module has been in the conceptual stage for a while now. I'm really excited about the melodic possibilities with this new type of engine that I'm working on. It uses a method known as "random cutting," which falls under the category of fractal algorithmic composition methods. Without getting into the details, I think that random cutting will provide a way to strike the perfect balance between melodic "thought" or "deliberateness" and "randomness" or "creativity." This is because large melodic blocks can be placed on a certain note with the intention of emphasizing a particular tonality at that point in time (such as the root note), then the cutting engine can skew the blocks into smaller, fractal pieces that still hover around the original note but provide some originality at the same time.
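To make the idea concrete, here is a minimal, hypothetical sketch of random cutting (not Fraccut's actual engine): a parent block placed on the root note is recursively split, and each cut is allowed a small pitch drift, so the fragments hover around the original note while still deviating from it:

```python
import random

def cut(block, rng, depth):
    """Recursively cut a (pitch, start, length) block into smaller
    blocks whose pitches hover around the parent's pitch."""
    pitch, start, length = block
    if depth == 0 or length <= 1:
        return [block]
    split = rng.randint(1, length - 1)     # where to cut the block
    drift = rng.choice([-2, -1, 0, 1, 2])  # small deviation from the parent note
    left  = (pitch,         start,         split)
    right = (pitch + drift, start + split, length - split)
    return cut(left, rng, depth - 1) + cut(right, rng, depth - 1)

# Parent block on the root note (pitch 0), 16 beats long, cut to depth 3.
notes = cut((0, 0, 16), random.Random("seed-string"), 3)
```

Since the drift at each level is small, every fragment stays within a few steps of the root, which is the "deliberate but unpredictable" quality described above.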
The engine that I've created to perform all this is called Fraccut, short for, you guessed it, fractal cutting. I've finished the basic cutting engine and I'm looking at some of the output right now visually. I have yet to actually implement the engine in module format. The visual output, however, looks nice. I can see that the notes, as desired, hover around the main block from which they originated, meaning that emphasizing particular notes should be easy for the module, but the patterns are also very interesting.
On the other hand, it's critical for algorithmic composers to learn that there's a difference between visual beauty and musical beauty. I really won't know until I hear the output in the context of a composition. I'm certain, however, that the output will be far more interesting than current melodic techniques (yes, probably even better than EvoSpeak).
More important, perhaps, is the fact that the Fraccut engine will result in easy lateralization - that is, it will be easy to create dynamic parts. Creating coherence in these dynamic parts may be a bit trickier. Still, things are looking good for random cutting.
Although it strongly resembles the conscious model that was explored quite some time ago, the Partial Idea Stream model should bring some new potential to the program.
Here are some basic concepts defining the Partial Idea Stream model:
- A musical idea can be a string of pitches, a particular rhythm, an accent on a particular note, a play duration, or a combination of any of the aforementioned. Thus, an idea can be a distinct "melody" defined both by a string of pitches and a rhythm, or it can simply be a "motif," defined by only a loose string of pitches including wildcards, such that both the rhythm and phrasing of the idea change over the course of the composition
- Many such ideas float within a composer's consciousness while composing and/or performing
- Ideas may have probabilistic relationships to each other that can be described by stochastic means
- An idea need only be complete in the score or in performance, but not within the consciousness of the composer or performer
- "Partial" ideas undergo some kind of "fleshing-out" process when the composer or performer calls on them to be implemented
These concepts should lead me to an algorithm capable of serious coherence. It will not simply repeat patterns or phrases. Rather, it will repeat "partial ideas" that it holds in consciousness, applying new twists each time the idea is rendered. In this way, motifs, styles, and distinct feelings should emerge within compositions.
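A minimal sketch of that fleshing-out step (hypothetical code; the model itself is still in design): a partial idea keeps some pitches fixed and marks the rest as wildcards, which are filled differently each time the idea is rendered:

```python
import random

# A "partial" idea: fixed pitches plus wildcards (None) that are only
# decided when the idea is actually rendered.
MOTIF = [0, 2, None, 4, None, 0]

def flesh_out(partial, rng, scale=range(7)):
    """Render a partial idea into a complete pitch string, filling each
    wildcard with a freshly chosen scale degree."""
    return [rng.choice(list(scale)) if p is None else p for p in partial]

rng = random.Random(1)
first  = flesh_out(MOTIF, rng)  # same motif...
second = flesh_out(MOTIF, rng)  # ...new twist on the wildcards
```

The fixed positions are what make the motif recognizable across renderings; the wildcards are where the "new twists" enter.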
I have yet to choose a name for this model, as Partial Idea Stream seems a bit lacking, nor have I chosen the name of the plugin that will make use of the model. But I've started designing it already. The plugin may be a complex hybrid, as I plan on eventually using grammars to represent ideas and stochastics to represent idea dynamics/relationships.
I had a rather unexpected success today in implementing the "style pool" engine in GGrewve. It took far less coding than I had imagined and worked almost flawlessly on the very first try. There's little more that one can ask for!
The excellent style switching made the whole composition sound even more coherent and varied than it already did. GGrewve now pays more attention to intensity than any other plugin, actually changing its playing style based on the value. This, ideally, is how each plugin should act. GGrewve is really a role model for the other plugins and, once again, I am extremely proud of how well it performs.
A while ago I started building a generalized analysis engine for grammatical subdivision called WordSmith. The engine was meant to offer more analysis options such as time compression, variable word-length, non-mapped words (for tonal instruments), key detection, and cached word analysis. I never finished the engine. Now, however, I realize that I need these features, in particular the cached analysis, in order to implement the improvements in GGrewve.
Accordingly, I've gone back and finished the missing parts of WordSmith, making it compatible with percussive samples and changing the dictionary formats to comply with GGrewve. Now I simply have to recreate all the GGrewve style dictionaries with the new WordSmith engine and rewire GGrewve to use the new, optimized dictionary format.
Though I'm not sure the tonal capabilities of WordSmith will ever be used (as I don't recall being overly impressed by grammatical subdivision when applied to a bass guitar pattern), the percussive part has powerful features that make it the best analysis engine I have available for creating style dictionaries.