An Irish Algorithm

I was recently quite surprised by the capabilities of mGen, which is always a nice thing.

The night before my thesis presentation, I was looking for a good sample with which to close out the presentation.  I decided I would have to run through many compositions to find one that would work nicely.  Much to my surprise, I loaded in an old project file and only had to hit "render" a single time before getting a really interesting piece.  It was another one of those fantastic moments in which the program truly surprised me.  The piece sounded distinctively Irish, which was, naturally, a feat that I gave it neither the means nor the motivation to accomplish.  And yet, it did!

I've uploaded the sample as "Sample 16."

Once again, the program has proven that it truly does have the ability to compose pieces so creative that they startle even the programmer himself.

gMSE Pays Off

To say that the new grammatical multi-state engine blows previous analysis engines clear out of the water is an understatement.

Here are a few of gMSE's numerous powerful features:

  • Variable timescale analysis (see the sketch after this list)
    • Adjustable quanta
    • Adjustable grouping width
    • Adjustable abstract counters
  • Power-based reconstruction
    • Single parameter controls variability
  • Relative chord analysis
    • Reads chord and key data from MIDI file
    • Records words in offset form
    • Records chord state data in parallel with the other analysis factors
    • Reconstructed patterns behave differently based on the progression
  • Absolute pitch analysis
    • Percussive instruments
    • Disregards chord and key information
  • Completely scalable and expandable analysis factors
    • n-th order analysis
    • Only limit on analysis factors is CPU time
  • Simultaneous multi-track analysis
    • Can perform analysis on an arbitrary number of tracks, storing information unique to each
    • Represents the first multi-channel plugin ever to grace mGen's plugin library
    • Great for multiple-hand piano parts, multiple-instrument auxiliary percussion, etc.
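
As an illustration of the variable timescale idea, here's roughly what adjustable quanta and grouping widths might look like. The names and arithmetic are my own sketch of the concept, not gMSE's actual code.

```cpp
#include <vector>

// Hypothetical timescale settings: how many MIDI ticks form one
// quantum, and how many quanta form one analysis group.
struct TimescaleConfig {
    int ticksPerQuantum;  // adjustable quantum size
    int quantaPerGroup;   // adjustable grouping width
};

// Snap a raw event time to the nearest quantum boundary.
int toQuantum(int ticks, const TimescaleConfig& cfg) {
    return (ticks + cfg.ticksPerQuantum / 2) / cfg.ticksPerQuantum;
}

// Determine which group (and thus which analyzed "word") an event
// belongs to. Changing either setting rescales the whole analysis.
int toGroup(int ticks, const TimescaleConfig& cfg) {
    return toQuantum(ticks, cfg) / cfg.quantaPerGroup;
}
```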

With all of this power in gMSE, it has become very clear to me that the bottlenecks in mGen are now the structure and progression modules, which are both, at the moment, fixed in a 4/4 time signature.  This glaring, repetitive time signature doesn't do justice to the aforementioned variable timescale capabilities.  These two plugin types will be my next target for rewriting in c++.  They will require expanding the new mGN library to allow saving of complete MainBlock data, rather than just loading it.

All in all, I'm very excited about gMSE's performance over the past week and have a strong feeling about its future as both a percussive and melodic plugin.

First Test of gMSE

Last night, gMSE was put to use for the first time.  The engine did a fantastic job and far exceeded my expectations!  I used it to analyze several drum loops similar to those that GGrewve used as training material.  The engine was able to reproduce the loops exactly.

Reproducing the loops may not seem like a big deal - since the same result could be achieved with a program that simply records the input pattern and spits it back out.  But it's actually a huge triumph, because that's not how gMSE works.  It breaks the pattern into small grammatical fragments, contextually analyzes each piece, and then reconstructs the pattern in the context of the composition.  This method allows great flexibility in modifying the reconstruction of fragments.

Even more exciting is the fact that the reconstruction function takes one parameter - called power - that controls how much the contextual score influences the output.  Setting the function to a high power, such as 5 or 6, causes the input pattern to be reconstructed with no variation.  A low power, like 1, causes the pattern to be very loosely reconstructed - with lots of (tasteful) variations.  Numbers in between, like 2.5 or 3, tend to give a nice output pattern that isn't a replica of the input pattern but doesn't go too crazy with variations.  In effect, the entire "tightness" of the engine can be controlled with a single variable!
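
To make the idea concrete, here is a minimal sketch of how a single power exponent can control variability. The names and the exact weighting scheme are my illustration of the concept, not gMSE's actual internals.

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// Hypothetical candidate: a grammatical fragment ("word") plus the
// contextual score it earned during analysis.
struct Candidate {
    int wordIndex;
    double score;  // contextual score, assumed positive
};

// Pick a word at random, weighting each candidate by score^power.
// A high power makes the top-scoring word dominate (tight, near-exact
// reconstruction); a power near 1 lets weaker candidates through
// (loose reconstruction with variations).
int pickWord(const std::vector<Candidate>& candidates, double power) {
    double total = 0.0;
    for (size_t i = 0; i < candidates.size(); ++i)
        total += std::pow(candidates[i].score, power);

    double r = std::rand() / (double(RAND_MAX) + 1.0) * total;
    for (size_t i = 0; i < candidates.size(); ++i) {
        r -= std::pow(candidates[i].score, power);
        if (r <= 0.0)
            return candidates[i].wordIndex;
    }
    return candidates.back().wordIndex;  // guard against rounding error
}
```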

gMSE is also fast.  Very fast. It performs a whole lot faster than I would have thought, considering how intensive the underlying MSE analysis is.  At first, the engine was taking 7-8 seconds to generate all parts for a composition.  After doing some serious optimizing of several core components of the MSE and mGN library, however, I managed to get the average runtime down to about 300 milliseconds - an extremely impressive feat!  This speed means that I will be able to add more complex analysis factors without bogging down the engine.

A month of work on MSE and gMSE has finally paid off!

Project a New Reality: Reading List 1

Creating an entire algorithmic reality is going to take a lot of research and work in a vast number of fields. I've purchased the following books and am absorbing them with my dream of A New Reality in mind.

My primary focus at the moment, other than algorithmic music, is algorithmic terrains (also known as procedural terrains).  I am exploring planetary geomorphology, the physical side of the field, as well as advanced terrain algorithms for implementing realistic techniques in code.  I am also highly interested in star systems, galaxies, and cosmic features in general.  Nothing makes a beautiful reality like the vastness of outer space.  On top of all that, I need the background knowledge in Direct3D and OpenGL to be able to bring everything to life.  Since I haven't decided which route to take on d3d vs. GL just yet, I figure I'll go ahead and learn both of them.  The more knowledge, the better.

I've also thrown in a random book about trees, just because I love trees and want to be able to algorithmically create them.  Having mulled the processes over in my head for a while and had several good ideas, I figure some nice pictures will really inspire me (though I still don't have quite enough background in 3D engines to implement the complexities just yet).

Loading & Saving Structures with Object System

Now that I've got the engines out of the way, there's really only one major thing standing in the way of a new GGrewve: data handling between sessions.  All these well-organized data structures look great inside a compiler...but one glaring downside of using object-oriented programming over my previous "hack-ish" data system is that loading and saving are no longer as easy as writing the data structure to a file - because such an operation isn't well-defined for arbitrary structures like it is for a data structure that consists of one large string (GDS and OS structures from the AHK code).  Loading and saving the structures from the c++ library requires a custom interface for each structure, which is, to put it plainly, no fun at all.

For backwards compatibility, I've chosen to write all structures in Object System's encoding format.  It's not nearly as efficient as binary data, but it takes a lot less work to handle the data in lesser languages like AHK.  After writing a c++ version of the OS data system, to which most of my day today was devoted, reading and writing arbitrary data structures now consists of implementing an OS conversion interface for each structure.  That is, each data structure must be given a function for converting its data to Object System format as well as a function for loading its data from an Object System structure.
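
In sketch form, the conversion interface looks something like the following. The type and method names here are hypothetical stand-ins, not the real mGN/OS library API.

```cpp
#include <string>
#include <vector>

// Minimal stand-in for an Object System structure: a named value plus
// nested child structures. The real OS format is string-encoded; this
// is only an illustrative in-memory shape.
struct OSNode {
    std::string name;
    std::string value;
    std::vector<OSNode> children;  // nested OS structures
};

// Each persistent structure implements both directions of the
// conversion. Nested structures delegate to their members' interfaces,
// which is exactly what makes deeply nested engines tedious by hand.
class OSSerializable {
public:
    virtual ~OSSerializable() {}
    virtual OSNode toOS() const = 0;              // structure -> OS data
    virtual void fromOS(const OSNode& node) = 0;  // OS data -> structure
};
```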

It sounds pretty straightforward, but in complex engines like gMSE in which structures are nested within structures that are, in turn, nested within superstructures, the task gets daunting.  Nonetheless, I've finished implementing loading/saving for all MSE structures and am in the process of writing the function for the gMSE space.

After loading and saving are out of the way, it should be relatively smooth sailing to the new GGrewve.

Grammar + MSE = gMSE

Both the multi-state engine and the Markov engine have been finished in the past week.  Now, to tackle a new implementation of GGrewve with maximum ease-of-use, I've dreamed up yet another type of engine.  It's a rather simple hybrid type that will combine grammar with multi-state analysis.  Not surprisingly, I will call it gMSE (grammatical multi-state engine).

One can think of gMSE as a valuation dictionary based on a plurality of states.  The gMSE answers queries of the form "What quantitative value would [word] receive based on past analysis and given that the current state is [state plurality]?"  Perhaps more importantly, gMSE can answer queries of the form "Which word would receive the highest/lowest quantitative score based on past analysis and given that the current state is [state plurality]?"  In this way, the grammatical multi-state engine embeds grammatical data in the analysis of state pluralities.

This new engine will, ideally, make a newer and better version of GGrewve quite easy to create.  Since GGrewve is based on probabilistic grammar analysis, it is easy to see how gMSE could accommodate the GGrewve engine plus added levels of depth thanks to the multi-state analysis.  All of this can be accomplished with just a single object: a gMSE space.  The gMSE space itself contains a grammar dictionary, an MSE space, and an MSE statestream, all wrapped into a single, easily-manageable object.
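
As a sketch of the two query forms described above, the space might expose an interface like this. The names are mine for illustration; the real object's layout may differ.

```cpp
#include <string>
#include <vector>

// A state plurality: the set of current state values, one per analysis
// factor. (Illustrative typedef - the real representation may differ.)
typedef std::vector<int> StatePlurality;

class GMSESpace {
public:
    // "What quantitative value would [word] receive based on past
    // analysis, given that the current state is [state plurality]?"
    double scoreWord(const std::string& word,
                     const StatePlurality& states) const;

    // "Which word would receive the highest quantitative score based
    // on past analysis, given the current state plurality?"
    std::string bestWord(const StatePlurality& states) const;
};
```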

Goals for March

Here's what I'd like to get accomplished with the rest of the month:
  • Finish Multi-State Engine in c++
  • Finish Markov Engine in c++
  • Rewrite the GGrewve engine in c++
    • Even more powerful analysis techniques based on the MSE engine
    • Smoother transitions
    • Mutation engines
    • More configuration options
  • Begin work on PolyMarkov Engine
    • Combination of MSE and Markov

Cranking out Engines

It's almost mid-March already.  I don't like the fact that the samples haven't improved appreciably in a while.  As I noted earlier, that's mostly because I've been upgrading internals rather than working on sound.  Still, it's time to step on it.

Over the past few weeks, I've been working like mad to re-code all the old engines in c++, taking advantage of the massive optimizations possible therein.  So far, the following engines are now at least partially functional as part of the new c++ library:

  • Artificial Neural Network Engine (was actually never implemented in AHK and has yet to be used in a plugin)
  • Contour Grammar
  • Evolutionary Engine
  • Fraccut
  • Markov Engine
  • Multi-State Engine

ALL of the new implementations are better than their predecessors, in terms of both efficiency and ease of use.  Certain complex engines, such as the Markov engine, may see speed increases of over a thousandfold thanks to the redesign.

By the end of the month, these myriad engines should be coming together to form some really powerful new plugins.  All it takes is code.

Random Sources

Part of the success of the Fraccut engine is the direct result of a somewhat overlooked detail in how it handled random calls: Fraccut used deterministic randomness.  In fact, it was the only engine to use a source of randomness other than direct calls to a pseudorandom generator.  Instead, it first stored large arrays of random numbers (drawn from the random generator) and then accessed them using an index that could be reset and incremented at will.  This resulted in subtle repetitions that made Fraccut seem far more coherent.  Since using random sources other than pseudorandom generators clearly provides benefits, I am exploring more general and flexible implementations of randomness for future plugins.

I have designed an abstract class for use with c++ plugins that will allow functions and objects to request random numbers without necessarily knowing what kind of source the numbers come from.  This way, an object such as a phrase (in the case of contour grammar) or a space (in the case of Fraccut) can be "bound" to a specific random source - a derivation of the abstract random source class - that will handle the details of randomness.  It is the object's responsibility to keep up with an index.  The object must provide an index to the random source when requesting a random number.  With this arrangement, the random source may choose to be deterministic, assigning a single, definite output to each input state.  The random source may even be coherent, meaning that indices close in value produce related output while indices far apart in value are nearly independent.  On the other hand, it is still, of course, possible to simply ignore the index and draw from a pseudorandom generator in the event that determinism is not required for the task.
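
A rough sketch of the idea, with class and method names of my own invention (a simple integer hash stands in for the stored random arrays):

```cpp
#include <cstdint>
#include <cstdlib>

// Abstract random source: callers pass an index; the source decides
// what, if anything, the index means.
class RandomSource {
public:
    virtual ~RandomSource() {}
    // Return a value in [0, 1) for the given index.
    virtual double get(uint32_t index) = 0;
};

// Deterministic source: the same index always yields the same value.
class DeterministicSource : public RandomSource {
public:
    explicit DeterministicSource(uint32_t seed) : seed(seed) {}
    double get(uint32_t index) {
        uint32_t h = index * 2654435761u ^ seed;
        h ^= h >> 16;  h *= 2246822519u;  h ^= h >> 13;
        return (h & 0xFFFFFFu) / double(0x1000000);
    }
private:
    uint32_t seed;
};

// Nondeterministic source: ignores the index entirely and simply
// draws from the pseudorandom generator.
class PseudorandomSource : public RandomSource {
public:
    double get(uint32_t) {
        return std::rand() / (double(RAND_MAX) + 1.0);
    }
};
```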

The trick in this technique lies in clever usage of indices.  For example, in Fraccut, a parent block may be given an index.  Now, when the engine cuts the block, it may choose to assign the new block(s) indices that correspond to the parent block's index modulated by a given amount.  This way, the new blocks will draw from an identical random stream (if the source is deterministic) as their parent, giving rise to relationships that can, as in the case of Fraccut, seem quite intricate.
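
Continuing the sketch above, a cut might hand each child an index derived from its parent's. Again, the names and the particular modulation are hypothetical:

```cpp
// Hypothetical fragment of the cutting step: children derive their
// indices from the parent's index, modulated by a fixed amount, so a
// deterministic source feeds related blocks from the same stream.
struct Block {
    uint32_t randIndex;
    // ... position, width, and other cutting data
};

Block cutChild(const Block& parent, uint32_t childNumber,
               RandomSource& source) {
    Block child;
    child.randIndex = parent.randIndex + 7 * childNumber;  // modulated index
    // A deterministic source now returns the same value every time this
    // child (or an identically indexed block in a repeat) asks for it.
    double r = source.get(child.randIndex);
    (void)r;  // r would drive the cut offset in the real engine
    return child;
}
```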

In future plugins, I will explore better ways to generate random numbers in hopes of gaining more coherence.

Fractal Grammar

For some time now I have been thinking about using random cutting as a "base" method for a hybrid engine, using the cutting space to control the behavior of another process. These thoughts came to a head a few nights ago when, despite an enormous amount of brainstorming and coding, contour grammar failed to deliver the immediate results for which I was looking. I wasn't joking when I suggested a "fractal grammar" in the last post. Indeed, having now toyed around with implementations of the idea, I believe that this hybrid technique will capitalize on both the contextual coherence of random cutting as well as the internal consistency of grammatical systems. I will refer to the method as fractal grammar from now on.

Fractal grammar is, to put it simply, a grammar system driven by a random cutting engine (which, as discussed previously, falls under the category of fractal methods - specifically, Brownian motion). The engine first performs the same preliminary steps used in Fraccut, placing cutting blocks on the roots (or another specified interval) of each chord, then performing random cutting to subdivide the blocks. Instead of mapping the subdivided blocks directly to pitch space, however, the fractal grammar engine maps block offsets to indexes of words.

Here's an overview of the basic fractal grammar process:

  1. Create a space for random cutting
  2. Map chord progression to blocks in the cutting space
  3. Perform random cutting on the space
  4. Create a phrase (in the style of Contour Grammar)
  5. Map the cutting space to the phrase
    1. Iterate through blocks, for each:
      1. Map the vertical position (traditionally pitch offset) to the index of a word and add the corresponding word to the active phrase
      2. Map the width (traditionally duration) to the duration of the specific element of the active phrase
  6. Convert the phrase into a polystream
  7. Apply final mappings (offset->pitch, time scaling, etc.)
  8. Convert the polystream to a pattern

Note that the first three steps are precisely the steps taken by Fraccut, while the last three steps are precisely those taken by Contour Grammar.  The middle steps, then, are the most important - it is the mapping between the fractal engine and the grammar system that is most crucial to the final product.
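
Here is a sketch of that crucial mapping (step 5), with hypothetical types standing in for the engine's real structures: each cut block's vertical offset picks a word, and its width becomes that word's duration in the active phrase.

```cpp
#include <vector>

struct CutBlock {
    int offset;  // vertical position (traditionally pitch offset)
    int width;   // horizontal size (traditionally duration)
};

struct PhraseElement {
    int wordIndex;
    int duration;
};

std::vector<PhraseElement> mapBlocksToPhrase(
    const std::vector<CutBlock>& blocks, int wordCount) {
    std::vector<PhraseElement> phrase;
    for (std::size_t i = 0; i < blocks.size(); ++i) {
        PhraseElement e;
        // Wrap the offset into a valid word index.
        e.wordIndex = ((blocks[i].offset % wordCount) + wordCount) % wordCount;
        e.duration  = blocks[i].width;
        phrase.push_back(e);
    }
    return phrase;
}
```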

Thankfully, fractal grammar has already produced some nice results.  Though they're not quite up to par with Fraccut's yet, I have no doubt that the fractal grammar method, when it reaches maturity, will far surpass the abilities of random cutting and contour grammar.

Sample 15, the first in three months, will come online shortly!