Finally, an update! I took a bit of a break from work after graduation in celebration of finishing high school. Now, it's time to get back on track.
As important as it is to have great core and generative modules, it is equally critical to have adequate rendering modules; otherwise, the composition cannot be formed into anything tangible. The renderer brings the composition to life, either through visuals or audio. I have used Glass Preview, my own visualization module, for quite a while now as my renderer of choice when I need quick visual previews of compositions without rendering audio.
Over the summer I plan to focus heavily on core modules. Unfortunately, it is difficult to visualize the abstract products of the core modules - which really just shape the output of the generative modules. That's where the second generation of Glass Preview comes in.
Glass Preview 2, like its predecessor, creates highly detailed visual representations of compositions. Unlike the first installment, however, GP2 also visualizes abstractions of the composition process. Such abstractions include style classes, quantitative movement variables (such as intensity), and chord progressions. In the future, I will also include rhythmic information from the coordination module. In this way, I am no longer restricted to visualizing the output of generative modules.
Here's a reminder of what the old Glass Preview looks like:
And here's the new:
All the extra information really gives the viewer a holistic sense of the composition. I do believe that it will take far less work to develop high-quality core modules now that their output can be easily visualized in the context of the entire composition.
I've been having so much fun with mGen lately that I've neglected to post updates! The FL Studio rendering interface has been working for about five days now, and with great success. Unlike the previous attempt (which took place almost a year ago), this renderer is fast, accurate, and works on both XP and Vista (and Windows 7, soon enough). It also includes a preliminary configuration interface that allows selection of patches directly, which is nice.
Not having to manually render each composition really takes the enjoyability of the program to the next level. Now, it truly takes only a single click to hear a new composition (since the renderer includes a convenient option to automatically play the composition after it finishes). Unfortunately, all this excitement is taking a toll on my hard drive - I end up keeping way too many of the compositions because a lot of them are so unique.
With the renderer in action, I'm able to listen more, produce more, and, in general, get a lot more done. I look forward to a lot of exciting progress this month (especially since I graduate in a week)!
Glide, a new plugin based on the recently explored method of melodic interpolation, will bring a new meaning to coherence in the mGen plugin library. As detailed in the previous post, this method leverages the coherence of a Fraccut-like block/space format, while leaning on an underlying grammar engine for variability.
If current performance is at all indicative of future payoffs, Glide is here to stay. It has already pumped out several extremely impressive compositions in which the melodies display a coherence unlike that obtainable with other plugins - even Fraccut!
The real long-term challenge with Glide will be extracting enough creativity while maintaining the underlying coherence. With only simple melodic interpolation, the algorithm is a mathematical system at best - which means little creativity. Building the blocks on top of grammar will certainly help. Still, it will no doubt be a challenge to get Glide to come up with highly original material. Unlike most of the other plugins, which always border on sounding aimless and incoherent, Glide is most in danger of sounding predictable and repetitive.
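To make the interpolation idea concrete, here's a minimal sketch (the names and the scale-snapping step are my own illustrative stand-ins, not Glide's actual internals): anchor pitches sit at block boundaries, and the notes between them are linearly interpolated and snapped to a scale. A grammar engine would then perturb lines like these to fight the predictability mentioned above.

```python
# Hypothetical sketch of melodic interpolation over a block/space layout.
# Anchor pitches (e.g. chord tones) mark block boundaries; in-between
# notes are interpolated, then snapped to the nearest scale degree.

def interpolate_melody(anchors, steps_per_block, scale=(0, 2, 4, 5, 7, 9, 11)):
    """Fill the gaps between anchor pitches with scale-snapped steps."""
    def snap(pitch):
        octave, degree = divmod(round(pitch), 12)
        nearest = min(scale, key=lambda s: abs(s - degree))
        return octave * 12 + nearest

    melody = []
    for a, b in zip(anchors, anchors[1:]):
        for i in range(steps_per_block):
            t = i / steps_per_block
            melody.append(snap(a + (b - a) * t))
    melody.append(snap(anchors[-1]))
    return melody

# Anchors drawn from a C-major progression: C4, G4, E4, C4
print(interpolate_melody([60, 67, 64, 60], steps_per_block=4))
```

The coherence comes cheaply here: every note lies on a straight path between anchors, which is exactly why a grammar layer is needed on top for originality.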
Long-term goals for Glide:
- Develop a method of reliably generating high-quality grammar dictionaries for use with the grammar underpinnings of the plugin
- Build distinctive style settings that differentiate melodies
  - Ideally, we shouldn't be able to tell that two melodies produced with different style settings were even created by the same plugin
  - This entails carefully monitoring any "artifacts" that crop up as a result of the mathematical nature of the algorithm, like those that did with Fraccut
- Create an evolutionary system that slowly evolves styles to give the plugin a dynamic feel over long periods of time
  - Rather than evolve as a function of executions, evolve as a function of absolute time to ensure that the plugin does not become stale quickly
  - Imagine putting mGen away for a month and then coming back to find that the same plugin has an entirely different feel!
Introducing Fraccut v3, the next generation in fractal cutting. Fraccut has always been a solid plugin. Unfortunately, it's also been a relatively slow plugin due to the volume of information it handles. AHK heavily bottlenecks the data transfer since the language lacks OOP capabilities. With the C++ library in place, it's time to give Fraccut a serious speed boost.
Beyond the performance gains, the third generation of the Fraccut engine will also leverage the power of the mGN Library's RandomSource abstract interface to draw random numbers. This should allow for much greater flexibility in configuring the random component of fractal cutting.
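To illustrate what an abstract random-source interface buys, here's a rough Python sketch of the idea (the actual mGN Library interface is C++, and every method name below is my own stand-in): the cutting engine asks an abstract source for numbers, and swapping the source changes the character of the cuts without touching the engine.

```python
# Illustrative abstract random-source interface (not the real mGN API).
from abc import ABC, abstractmethod
import random

class RandomSource(ABC):
    """Anything that can hand the cutting engine a number in [0, 1)."""
    @abstractmethod
    def next(self) -> float: ...

class UniformSource(RandomSource):
    """Plain seeded uniform randomness."""
    def __init__(self, seed):
        self._rng = random.Random(seed)
    def next(self):
        return self._rng.random()

class WalkSource(RandomSource):
    """A drifting source: successive draws stay near each other, which
    tends to make successive cuts more closely related."""
    def __init__(self, seed, step=0.1):
        self._rng = random.Random(seed)
        self._value = 0.5
        self._step = step
    def next(self):
        delta = self._rng.uniform(-self._step, self._step)
        self._value = min(1.0, max(0.0, self._value + delta))
        return self._value

def cut_position(source: RandomSource, lo, hi):
    """Pick a cut point inside a block, wherever the randomness comes from."""
    return lo + (hi - lo) * source.next()

print(cut_position(UniformSource(42), 0.0, 4.0))
```

The flexibility claim falls out of the design: configuring the random component becomes a matter of choosing which concrete source to plug in.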
Some additional features that Fraccut v3 will aim to support in the future include:
- Context-sensitive cutting parameters
  - Pay attention to phrasing
- Optional grammar conversion (use the cutting blocks as controllers for a grammar; see Fractal Grammar)
- Incorporation of a Markov engine for added stylistic coherence
  - Loading/saving Markov styles
  - Linkage with gMSE for advanced style options
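As a rough illustration of the Markov idea in the list above (purely hypothetical - nothing here reflects Fraccut v3's actual design): train pitch-to-pitch transition counts from example material, then bias generated choices toward transitions the style has already seen.

```python
# Toy Markov engine for stylistic coherence (illustrative only).
import random
from collections import defaultdict

def train(melodies):
    """Count pitch-to-pitch transitions across example melodies."""
    table = defaultdict(lambda: defaultdict(int))
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            table[a][b] += 1
    return table

def next_pitch(table, current, rng):
    """Draw the next pitch, weighted by how often the style made each move."""
    choices = table.get(current)
    if not choices:
        return current  # no data for this pitch: stay put
    pitches = list(choices)
    weights = [choices[p] for p in pitches]
    return rng.choices(pitches, weights=weights)[0]

style = train([[60, 62, 64, 62, 60], [60, 64, 67, 64, 60]])
rng = random.Random(5)
line = [60]
for _ in range(7):
    line.append(next_pitch(style, line[-1], rng))
print(line)
```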
Finally, having completed a basic implementation of Contour Grammar, I am getting to see what the idea sounds like in musical form. Indeed, some very interesting effects have been achieved thanks to the functional phrase language as well as the use of coherent random sources (which I will discuss at some point in another post). However, even this idea is lacking the coherence of Fraccut, and it bothers me enormously that such an elaborate idea as Contour Grammar can't even beat a relatively simple implementation of random fractal cutting, a rudimentary mathematical process.
Of course, it's very obvious why Fraccut still holds the record for best melodies. Since the cutting engine essentially starts with the chord progression as a seed, coherence is almost guaranteed. It's a powerful effect. Yet, the reality is that Fraccut is hardly aware of what it is doing. In a sense, the coherence generated by Fraccut is nothing more than a cheap trick stemming from the clever picking of the seed. Nonetheless, it undeniably works better than anything else. In the end, I am concerned only with the audible quality of the results, not with the route taken to achieve said results.
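A toy reconstruction of that chord-progression-as-seed effect (my own simplification, not Fraccut's code): if every block starts life as a chord tone and recursive cuts only drift a little from their parent's pitch, then every resulting note traces its ancestry back to the progression, and coherence follows almost for free.

```python
# Illustrative fractal cutting seeded by a chord progression.
import random

def cut(block, depth, rng):
    """Recursively halve a (onset, duration, pitch) block; the right half
    drifts a small interval away from the parent pitch."""
    onset, duration, pitch = block
    if depth == 0 or duration <= 0.5:
        return [block]
    half = duration / 2
    left = (onset, half, pitch)
    right = (onset + half, half, pitch + rng.choice([-2, -1, 0, 1, 2]))
    return cut(left, depth - 1, rng) + cut(right, depth - 1, rng)

progression = [60, 65, 67, 60]          # C F G C, one seed pitch per measure
rng = random.Random(11)
melody = []
for measure, root in enumerate(progression):
    melody += cut((measure * 4.0, 4.0, root), depth=3, rng=rng)
print(melody)
```

Note how little the algorithm "knows": it never analyzes context, yet the output clings to the progression because the seeds were chosen cleverly - exactly the cheap trick described above.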
That being said, the big question now is: how can Contour Grammar pay closer attention to context in order to achieve greater contextual coherence? It would be nice to somehow combine the internal consistency of Contour Grammar with the external, contextual coherence of Fraccut.
Grammatical fractal cutting, anyone?
It's been a long, bumpy road to the first functional mGen plugin written in C++. Nonetheless, tonight brought preliminary results from the work-in-progress contour grammar system, now embodied in C++.
Contour grammar is, essentially, a marriage of two previously brainstormed ideas: rhythmic shells and curve splicing. The name reflects the fact that the engine is grammatical at heart, using the method of rhythmic shells to handle words and phrases in context, while the words themselves are defined using basic contours like those described in the curve-splicing method.
Unlike the grammar engine of GGrewve or GrammGen, contour grammar is functional in form, meaning that a word is not necessarily limited to a certain length. Rather than using an array of pitch or rhythmic offsets to characterize a word, contour grammar uses an array of coefficients that determine the properties of an infinite contour. Rhythmic shells then give precise instructions for transforming sets of these infinite contours into concrete, finite note streams.
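Here's a minimal sketch of the functional-word idea (the coefficient basis and the shell format are my own stand-ins, not the plugin's actual representation): a word is a short list of coefficients defining a contour over all t, and a rhythmic shell decides where that contour gets sampled into concrete, finite notes.

```python
# Illustrative functional words: coefficients define an unbounded contour,
# and a rhythmic shell of (onset, duration) pairs samples it into notes.
import math

def contour(coeffs, t):
    """Evaluate an unbounded contour: a sum of sine terms, one per coefficient."""
    return sum(c * math.sin((k + 1) * t) for k, c in enumerate(coeffs))

def realize(word, shell, base_pitch=60):
    """Sample the word's contour at each onset the shell dictates."""
    notes = []
    for onset, duration in shell:
        pitch = base_pitch + round(contour(word, onset))
        notes.append((onset, duration, pitch))
    return notes

word = [7.0, 3.0]                                   # two coefficients, infinite contour
shell = [(0.0, 1.0), (1.0, 0.5), (1.5, 0.5), (2.0, 2.0)]
print(realize(word, shell))
```

The key property shows up immediately: the same word fills a shell of any length, so no length-checking is ever needed.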
Some expected advantages of contour grammar:
- Arbitrary note size makes words extremely flexible (any given word can fill up any given amount of space, so no length-checking is necessary)
- Rhythmic shells preserve rhythmic integrity of phrases
- Shells still allow for variation by providing direct access to the words upon which the shell is built
- Tweaking a single word's coefficients will change a specific part of the shell while preserving everything else, allowing for great coherence
- Object-oriented data structures provide means of easily manipulating high-level variables that result in subtle, low-level changes
- Very little conversion necessary between rhythmic shell and raw pattern
- Rhythmic shell -> Polystream
- Polystream -> Contextual Polystream (snap to key, duration *= quanta, etc.)
- Contextual Polystream -> Pattern
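The conversion steps above can be sketched as follows (field layouts and the quanta convention are assumptions on my part, chosen only to show how mechanical each stage is):

```python
# Hedged sketch of the polystream -> pattern conversion steps.
C_MAJOR = (0, 2, 4, 5, 7, 9, 11)

def snap_to_key(pitch, key=C_MAJOR):
    """Move a pitch to the nearest degree of the key."""
    octave, degree = divmod(pitch, 12)
    return octave * 12 + min(key, key=lambda d: abs(d - degree))

def contextualize(polystream, quanta=0.25, key=C_MAJOR):
    """Polystream -> contextual polystream: snap pitches, scale durations."""
    return [(onset * quanta, duration * quanta, snap_to_key(pitch, key))
            for onset, duration, pitch in polystream]

def to_pattern(contextual):
    """Contextual polystream -> pattern: ordered (start, end, pitch) events."""
    return sorted((onset, onset + duration, pitch)
                  for onset, duration, pitch in contextual)

polystream = [(0, 4, 61), (4, 2, 66), (6, 2, 64)]   # abstract units
print(to_pattern(contextualize(polystream)))
```

Each stage is a small, lossless-looking transformation, which is exactly the "very little conversion necessary" claim in practice.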
So, in summary, contour grammar provides a flexible, coherent, and easily variable data structure while still remaining concrete enough to require only a few minor steps to convert to pattern format. In general, abstractions such as those made by grammar engines suffer from either a lack of coherence or a difficulty in de-abstraction (returning them to the base format). Contour grammar, it seems, circumvents these drawbacks with a well-designed data structure.
Preliminary results with contour grammar already display some degree of coherence, a great deal of variability, and, most importantly, the potential for continued development and improvement. February will surely bring some new, interesting samples!
Finally, the reign of ChillZone Pianist has come to an end, at least in the provision of string parts, for which it was not even designed in the first place. Bowed, a generative module designed specifically for background string parts, will take its place.
Bowed has many functions that make it excellent for background parts played by sorrowful cellos, lush pads, and the like. It scatters individual string instruments over the various chord (or key) notes, locking some in place by snapping them to the chord, and allowing others to flow freely. This process makes inversions and other interesting chord variations a natural ability of Bowed. Bowed can also combine successive identical notes into a single note that spans multiple measures. The Bowed configuration allows a choice of how "thick" the strings should be - corresponding to how many string instruments Bowed places. Furthermore, Bowed follows structure module instructions carefully, increasing the thickness of parts as well as note velocities to complement the intensity of the movement.
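As an illustration of the note-combining behavior, here's a small sketch (the data layout is my guess, not Bowed's internals): when a single string voice holds the same pitch across consecutive measures, the repeats collapse into one long note.

```python
# Illustrative merging of successive identical notes in one string voice.

def merge_held_notes(measures):
    """measures: one pitch (or None for silence) per measure for one voice.
    Returns (start_measure, length_in_measures, pitch) spans."""
    spans = []
    for i, pitch in enumerate(measures):
        if spans and pitch is not None and spans[-1][2] == pitch \
                and spans[-1][0] + spans[-1][1] == i:
            start, length, _ = spans[-1]
            spans[-1] = (start, length + 1, pitch)   # extend the held note
        elif pitch is not None:
            spans.append((i, 1, pitch))              # start a new note
    return spans

# A cello holding G2 for three measures, then moving to A2:
print(merge_held_notes([43, 43, 43, 45]))   # -> [(0, 3, 43), (3, 1, 45)]
```

Running this over each scattered voice independently gives the sustained, overlapping texture that makes background strings sound lush rather than choppy.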
Overall, Bowed is a very solid background plugin and will most likely provide the strings for many samples to come.
Today two samples showcasing multiple instances of Entropic Root have made their way to the samples page. Unlike most other samples, these don't even require the assistance of ChillZone Pianist to provide supporting chords. Rather, a single generative plugin does all the generative work for samples thirteen and fourteen.
Here's to a bright future for Entropic Root!
After heavy development in ScratchPad, Entropic Root is now ready for full plugin status. As described in the previous post, Entropic Root uses a mathematical method to generate melodies. It extracts pitch and duration information from fractional power functions. The plugin also has several features that enhance variability and overall quality of the output.
Entropic Root, even in its infancy, is the most powerful generative melody plugin yet. A number of impressive compositions have already been written using only Entropic Root instances.
Several samples will be uploaded shortly.
With ScratchPad I've been rapidly working on new melody ideas. One particularly interesting one revolves around raising numbers to fractional powers. It's the first truly mathematical melody method that I've experimented with, and the results are turning out better than I would have expected.
The generation method works as follows. First, the user chooses a fractional power that will be used to generate the melody, as well as a digit depth that determines how far into the digits of the function's output the program should search. The plugin then creates a base number by multiplying a seed by certain compositional indices, such as the style class of the movement or the index of the chord rotation. It then raises the base number to the chosen fractional power, removes the decimal point, and digs into the digits of the resulting irrational number until it reaches the digit specified by the depth setting. This digit determines the pitch offset.
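The digit-extraction step can be sketched like so (the seed construction and the offset mapping are simplified stand-ins for the plugin's real settings):

```python
# Illustrative fractional-power digit extraction.

def root_digit(base, power, depth):
    """Raise base to a fractional power, drop the decimal point, and read
    the digit at the given depth into the resulting digit string."""
    value = base ** power
    digits = f"{value:.17g}".replace(".", "").lstrip("-")
    return int(digits[depth])

# sqrt(2) = 1.41421356... -> digit string "141421356...":
print([root_digit(2, 0.5, d) for d in range(6)])   # -> [1, 4, 1, 4, 2, 1]
```

Varying the base via compositional indices while holding the power fixed means every movement digs into a different stretch of the same irrational expansion.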
Such is only a fraction (get it?) of what actually happens during the generation - since duration must also be dealt with by similar means, and several other factors weigh in to improve the quality of the results. Nonetheless, it's cool to think about "music drawn from the very nature of the square root function."
The plugin already beats all other melody plugins, hands-down. Fraccut and Fraccut v2 were, of course, the closest contenders to the root method. In the end, though, the ease of development in ScratchPad led the new module to victory. The module is almost ready to bud off of ScratchPad and become an independent plugin. I think I'll call it Entropic Root, since it both sounds cool and hints at the use of the "entropy" of fractional power functions (of which roots are a convenient class).
Below is an image of four instances of the plugin loaded into mGen. The preview shows coherent, creative, and complex melodies: