GDS Interface Working

Just a quick update on how quickly the c++ interface is progressing: the GDS method of data handling has already been rewritten in c++.  Even though GDS is rather old and obsolete, this functionality will allow newer code to continue communicating with older portions of the framework that still use the GDS handling standard.

That's already one very large obstacle out of the way!

Thus Begins the Overhaul

Well, I can avoid it no longer.  Not having the ability to create plugins in an object-oriented programming language is simply killing me.  Hence, I will begin writing a c++ library for interfacing with the main framework.  The library will primarily be concerned with the ability to load data passed by the main framework into objects which will allow for more efficient processing, as well as the ability to save the data back to a format from which the main framework can integrate the new data into the composition.
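To make the idea concrete, here is a minimal sketch of what such a load/save object might look like. This is purely hypothetical (the class name `FrameworkBlock` and the semicolon-delimited `key=value` format are my own illustrative assumptions, not the framework's actual format):

```cpp
#include <map>
#include <sstream>
#include <string>

// Hypothetical sketch: an object that loads a key=value field string
// passed by the framework and serializes it back for reintegration.
class FrameworkBlock {
public:
    // Parse a string of the (assumed) form "key1=val1;key2=val2;".
    void load(const std::string& raw) {
        std::istringstream in(raw);
        std::string pair;
        while (std::getline(in, pair, ';')) {
            auto eq = pair.find('=');
            if (eq != std::string::npos)
                fields_[pair.substr(0, eq)] = pair.substr(eq + 1);
        }
    }
    // Serialize back into the same delimited format.
    std::string save() const {
        std::ostringstream out;
        for (const auto& kv : fields_)
            out << kv.first << '=' << kv.second << ';';
        return out.str();
    }
    std::string& operator[](const std::string& key) { return fields_[key]; }

private:
    std::map<std::string, std::string> fields_;
};
```

Once data lives in objects like this, plugins can manipulate fields directly instead of re-parsing strings at every step, which is exactly the efficiency gain an object-oriented library should buy.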

Though the task is going to be rather daunting, thanks to the massive data structure specifications that I have created over time, it's also very exciting.  Building an object-oriented architecture just feels like it will offer immense benefits, and I have no doubt that it will speed up plugin-writing time, much like ScratchPad has done in recent months.

Little plugin coding progress has taken place this month thanks to my excursions in virtual worlds and artificial intelligence, and this trend will probably continue for the next few weeks.  Nevertheless, once the new library is complete, plugins will be back with a vengeance.  They'll be faster, easier, and more creative than ever before.

Revisiting aoAIm

To say that my recent efforts have been directed in nearly every direction imaginable would be an understatement.  Though part of me feels guilty for not having made any substantial progress in algorithmic composition in about two weeks, I have indeed accomplished a great amount of work.  Having written a full procedural particle-deposition terrain engine from scratch and finished part of the multithreading code to allow continuous terrain generation in real-time, I definitely feel good about the productivity of the month, even if mGen's code stagnated.

Now, with the rekindling of my interest in c++ as a viable platform for development (thanks to the discovery of some libraries that should drastically reduce development time), I am revisiting the component of my previous research that could have benefited most from an object-oriented language: aoAIm, the general artificial intelligence engine.

Previously, work on aoAIm, which was actually quite successful in many respects (see previous entries), halted due to the growing complexity of managing an object-oriented engine in a language with no object-oriented capabilities and little real intensive processing power.  It is for this reason that I am now rewriting the aoAIm engine in c++, taking advantage of the object-oriented nature of the language.  I've had a few good ideas concerning heuristic function evaluation as well, which should allow me to go further with the engine this time.

Once again, it is tempting to question how this relates to algorithmic composition.  I have previously provided rationales for pursuing a general artificial intelligence model based on the possibility of such a model introducing a sort of purpose or deliberateness into the music.  Now, considering the newly-widened scope of the project, it is clear that a general intelligence model would have great applicability in other algorithmic art fields as well.

A true model of intelligence is something from which I cannot forever hide.  At some point, mGen will be bottlenecked by a lack of intelligence, rather than a lack of algorithm choices.

A Shift in Tone: Algorithmic Art

I am afraid that the day has come that I can no longer restrain my creativity to a single field of algorithmic art. As faithful as I have been over the past two years to algorithmic composition, my love for music is not strong enough to keep my imagination from wanting to run boundlessly through the possibilities of other fields of algorithmic art. It is with this acceptance that I now broaden the scope of this blog. No longer will A New Music represent only the field of algorithmic composition.

From here on, any topic of algorithmic art will be open for contemplation and posting. From generative modeling to procedural worlds, algorithmic painting to virtual reality, and, of course, full-blown artificial intelligence.

Now, in broadening anything, one risks the loss of precision and quality in a single area. I do not intend for my work in algorithmic composition to diminish in any way. In order to counteract the broadening of the project's scope, I will extend the project's lifetime. I no longer expect to finish by this summer, nor do I even expect to finish by the end of college. No, my work in algorithmic art will take me nothing short of a lifetime. It will take me years and years to explore all territory of the beautiful landscapes of algorithms and still, I will never reach every crevice. I have no doubts, however, that I will someday arrive at the level of quality of which I currently dream.

For now, much of my effort has been diverted to algorithmically creating worlds.  In particular, I have turned my focus to procedural terrain generation using methods such as spectral synthesis, midpoint displacement, and particle deposition.  I hope to explore generative modeling in the near future.  Naturally, contour grammar is still in the back of my mind.  Wait...what if grammar could be applied to landscapes as well?  Oh the possibilities!

Henceforth, A New Music is no longer A New Music.  It is now A New Reality, an exploration of algorithmic art.

Here's to a bright future for algorithmic art.  I'll celebrate it with a beautiful world, created today by none other than my own computer:

Algorithmic Synthesis of Virtual Worlds

For a brief moment, I would like to deviate from the normal subject of algorithmic composition and discuss a recent endeavor of mine in an adjacent field of algorithmic art: algorithmic world building.

Having finally found a suitable 3D engine in c++, I was able to implement a long-standing dream of mine: an algorithmic terrain generation tool. The algorithms driving the terrain generation are spectral synthesis algorithms that use interference patterns of hundreds of oscillating functions in two variables to create interesting, three-dimensional landscapes. I wrote all the algorithms; they represent completely original work.
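A spectral-synthesis heightmap of the kind described above can be sketched in a few lines. This is an illustrative reconstruction rather than the author's actual code; the specific frequency ranges, amplitude damping, and random parameterization are my own assumptions:

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// Illustrative sketch: build a heightmap by summing many randomly-
// parameterized sinusoids in two variables. The interference pattern of
// the oscillators produces rolling, terrain-like surfaces.
std::vector<float> spectralTerrain(int w, int h, int oscillators, unsigned seed) {
    std::srand(seed);
    std::vector<float> height(static_cast<size_t>(w) * h, 0.0f);
    for (int o = 0; o < oscillators; ++o) {
        // Random frequency components and phase for this oscillator.
        float fx = (std::rand() % 100 + 1) * 0.001f;
        float fy = (std::rand() % 100 + 1) * 0.001f;
        float phase = (std::rand() % 628) * 0.01f;
        // Damp higher frequencies so large features dominate.
        float amp = 1.0f / (1.0f + 50.0f * (fx + fy));
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
                height[static_cast<size_t>(y) * w + x] +=
                    amp * std::sin(fx * x + fy * y + phase);
    }
    return height;
}
```

With a few hundred oscillators and a grid of reasonable size, the summed interference already reads as hills and valleys when rendered as a 3D mesh.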

Here are a few shots of various journeys through virtual algorithmic worlds:

With this post, I hope to drive home the point that algorithmic art is by no means a field of small potential. Algorithmic composition represents only a tiny fraction of what algorithms could be used for in the future. From algorithmically building virtual worlds to algorithmically generating paintings, anything can be accomplished with creative algorithms. Perhaps the most exciting possibility is that of a fusion of every possible algorithmic art. Imagine an algorithmic world populated by algorithmic persons, containing algorithmic art, architecture, music, etc. The implications of such a world would be numerous.

In short, this is only the beginning. Two years of work have seen the construction of an efficient and flexible tool for commanding music algorithms. A weekend of work has seen the completion of a simple world synthesis algorithm. One can only begin to imagine what the future holds for algorithmic art! Exciting, thrilling, and overwhelming is the world of infinity - the world of algorithms.

Rhythmic Shells

With the contour grammar engine in development, I am still searching for ways to simplify and optimize grammatical representations. At the moment, the most restrictive element of the engine is the separation between rhythm and pitch. In current engines, the two are completely separated into different streams driven by different grammar sets. Not only does this approach render it difficult to work with both streams at the same time, but it also negatively impacts the ability of the engine to produce coherence between pitch and rhythm.

A new idea I call "rhythmic shells," however, might promise an enormous simplification of the process, combining both streams into one while offering greatly improved coherence potential at the same time. The method focuses on stripping pitch words of all length dependency and transforming rhythm into a specification of pitch words. In essence, a rhythmic "shell" is selected, which contains a prescribed number of "slots," each with a distinct duration, and the slots are then filled with pitch words. The method requires that pitch words be functions rather than definitions - that is, they must be of arbitrary length and cannot be precomputed.

Here's a simple diagram of the method:

The shell stream can be specified with a simple data structure like this:


The number on the left side indicates the index of the rhythmic shell to be used, while the comma-delimited numbers indicate the indices of the pitch functions with which to fill each shell slot. Notice that the phrase produced by a shell can be varied by leaving one or more elements intact while changing the others, creating a different, but still somewhat similar, phrase. This should help guarantee greater coherence.
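The shell-filling process described above can be sketched as follows. The names (`PitchWord`, `realizeShell`) and the even subdivision of each slot are my own assumptions for illustration; the key point is that pitch words are functions of arbitrary length, invoked with a slot's duration:

```cpp
#include <functional>
#include <vector>

// Sketch of the rhythmic-shell idea: a shell is a list of slot durations,
// and each slot is filled by a pitch "function" that generates pitches
// for whatever duration it is handed.
struct Note { int pitch; double duration; };
using PitchWord = std::function<std::vector<int>(double /*slotDuration*/)>;

std::vector<Note> realizeShell(const std::vector<double>& shell,
                               const std::vector<PitchWord>& fillers) {
    std::vector<Note> phrase;
    for (size_t i = 0; i < shell.size(); ++i) {
        // Each slot's pitch word generates however many pitches it likes...
        std::vector<int> pitches = fillers[i % fillers.size()](shell[i]);
        // ...and the slot's duration is subdivided evenly among them.
        double step = shell[i] / pitches.size();
        for (int p : pitches) phrase.push_back({p, step});
    }
    return phrase;
}
```

Swapping out a single filler while keeping the shell and the other fillers fixed produces the "similar but different" variations mentioned above, since the rhythmic skeleton of the phrase never changes.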

It is worth noting that, given the functional nature of the pitch words in this description, the method of rhythmic shells is very much like a generalized method of curve-splicing, which was discussed several weeks ago!

In the next few days I will put a great deal of work into the contour grammar engine to try to implement the method of rhythmic shells. I also hope to continue finding new innovations for the simplification, unification, and overall improvement of grammar techniques.

Music, Grammar, and Information Theory

Spurred on by the idea of contour grammar, as explained in a recent post, I have been brainstorming grammar engines a good deal lately. I just finished drawing up a concept I call "unified grammar," which may present a promising method for abstracting units of pitch all the way up to phrases with contextual meaning. Of course, I wish to explore contour grammar a bit more before delving into a new grammar theory.

Thinking, however, has led me to an interesting proposition. I propose that much of the enjoyability of the music to which we listen stems from having a balanced amount of "information" contained within the music. This is not a novel theory, as others have explored it before - namely, Leonard Meyer. It struck me, as I analyzed one of my favorite orchestral pieces using my unified grammar method, that the melody of the piece could be represented, grammatically, by far fewer bytes of information than would be necessary to represent each pitch individually. That is, the melody had a sort of "compressibility" to it - one could compress the melody into several simple strings representing how values of different levels of a grammatical hierarchy change over time. It strikes me that the pieces of music I enjoy the most have some redundancy, but never too much.

Of course, it's obvious that music leans heavily on the concept of repetition, but information theory offers a new perspective in grammar applications. One could, for instance, limit the number of possible values for each level of the grammatical hierarchy, essentially placing an upper bound on the amount of information melodies could contain. This would prevent the piece from becoming too chaotic and uninterpretable. Likewise, one could place a lower bound on the number of values utilized for each level of the hierarchy, preventing the piece from becoming too monotonous. At the pitch level, this would mean that the melody must use between x and y unique pitches; at the word level, it would mean the melody must use between x and y different ways of grouping pitches; finally, at the phrase level, it would mean that the melody must use between x and y different ways of grouping words.

One final remark: placing such restrictions upon grammatical engines in no way ensures coherence in melody. At best, these restrictions can prevent monotony and chaos. They do not, however, guarantee meaningful music. For this reason, it is critical that the engine perform careful analysis when choosing how to group pitches and words in order to bring the grammar back down to a concrete level. Working with hierarchical abstractions always introduces a danger of becoming too removed. Consequence analysis must be performed at the base level - that is, at the pitch level - before decisions are made.


Finally, the reign of ChillZone Pianist has come to an end, at least in the provision of string parts, for which it was not even designed in the first place. Bowed, a generative module designed specifically for background string parts, will take its place.

Bowed has many functions that make it excellent for background parts played by sorrowful cellos, lush pads, and the like. It scatters individual string instruments over the various chord (or key) notes, locking some in place by snapping them to the chord, and allowing others to flow freely. This process makes inversions and other interesting chord variations a natural ability of Bowed. Bowed can also combine successive identical notes into a single note that spans multiple measures. The Bowed configuration allows a choice of how "thick" the strings should be - corresponding to how many string instruments Bowed places. Furthermore, Bowed follows structure module instructions carefully, increasing the thickness of parts as well as note velocities to complement the intensity of the movement.
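One of Bowed's behaviors, combining successive identical notes into a single sustained note, can be sketched briefly. This is an illustrative reconstruction, not Bowed's actual source; the `StringNote` representation is my own assumption:

```cpp
#include <vector>

// Sketch of Bowed's note merging: consecutive notes with the same pitch
// that abut in time are combined into one note spanning multiple measures.
struct StringNote { int pitch; double start; double length; };

std::vector<StringNote> mergeSustained(const std::vector<StringNote>& part) {
    std::vector<StringNote> merged;
    for (const StringNote& n : part) {
        if (!merged.empty() && merged.back().pitch == n.pitch &&
            merged.back().start + merged.back().length == n.start) {
            merged.back().length += n.length;   // extend the previous note
        } else {
            merged.push_back(n);
        }
    }
    return merged;
}
```

For sustained string pads, this merging is what turns a series of repeated chord tones into the long, unbroken bows that make the parts sound natural.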

Overall, Bowed is a very solid background plugin and will most likely provide the strings for many samples to come.

Redistributing Core Module Responsibilities

Much of the responsibility distribution to core modules took place during the very early days of the program.  With time, many problems related to the expansion and improvement of core module capabilities have cropped up.  With the recent creation of the first coordination module, the responsibilities of core modules can no longer go unexamined.  Tasks must be redistributed, particularly between the structure and coordination modules.

Here's a proposed distribution:

There are only a few major changes immediately evident. Perhaps the largest change suggested by this scheme lies in the reallocation of part-instruction writing to the coordination modules. I feel that it makes a good bit more sense for the coordination module to handle, well, "coordinating" the playing time of the instruments. Furthermore, part allocation seems slightly less abstract than general structure molding, and I am trying to keep strictly to a method of working from higher-order, abstract structures down to the lower-order details.

The structure module, meanwhile, picks up the task of providing tempo, time signature, and meter data. Combined, these pieces of information make up what I feel to be the largest gap in the current data structure, which provides for none of them. Listeners may have noticed that all the samples to date have been rendered at a tempo of 100 BPM. I feel that all of this information is absolutely crucial to the rest of the process. One certainly cannot adequately place effective accents or chords without knowing the time signature!

Finally, the above scheme proposes a minor change in order: coordination before progression. This idea came about as I toyed with schemes in which the coordination module provided time signature data - which would, of course, require the progression module to run after, rather than before it. Though I now believe that the structure module should handle the task of time signature writing, I still think this order swap is necessary. Chord-writing is essentially as close to real composition data as one can get without writing actual notes, which suggests placement immediately before the generative modules. Though the coordination and progression modules will, for now, remain independent of each other (neither utilizes the data provided by the other), I see this minor swap as a profitable one.

It is worth noting that, though the initial tempo instructions will be controlled by the structure module, all generative modules will, at some point in the future, have a way of overriding tempo data. This is necessary, for example, for a composition written with only a single piano part. While the core modules would still handle most higher-order aspects, the generative module would need access to tempo data to sync the tempo with the piano playing and achieve the realistic tempo undulations that occur in human performances (especially in solos).

2009: A Year in Reflection

Year 2009: the second, and by far the most eventful year of mGen's life has brought with it an enormous amount of progress.  The fledgling program grew from under 5,000 lines of code in late 2008 to a whopping 37,500 lines by the end of 2009.  Through untold hours of coding, numerous brainstorming sessions, and countless "inspirational walks" through the woods, new methods of creating great music have nestled into the collection of plugins powering the outer framework.  Some will survive the brutal increase in quality expectations of the coming year, and some will invariably lose their 64x64 tile in the plugin selection window.  With the life and death of each plugin, however, comes invaluable knowledge - knowledge of what works and what doesn't; triumphs of simple techniques and shattered delusions of over-decorated theoretical schemes.
A year in lists and a year in music samples, together, portray the greater part of the substance of that year.  Here is the former - a summary of the most outstanding developments of 2009:
  • Three new and fully-functional interfaces, the final one of which is enormously intuitive and efficient
  • Two data structure overhauls for the whole project - first to the standardized GDS block system, then a gradual movement to the optimized OS system
  • Six efficient, generalized algorithms implemented in a condensed library:
    • Fractal cutting engine
    • Grammar engine
    • Lindenmayer system
    • Evolutionary engine
    • Markov engine
    • Multi-state engine
  • Seven new, fully-functional generative plugins:
    • GGrewve
    • Fraccut
    • ChillZone Pianist
    • Bowed (yet to be announced)
    • Entropic Root
    • EvoSpeak
    • Spirit
  • Two new, fully-functional structure plugins:
    • Crystal Network
    • Manual
  • One fully-functional progression plugin:
    • ProgLib
  • Development of the new "coordination" module type and one experimental coordination plugin: x-y
  • Two new, fully-functional graphical renderers
    • Glass Preview
    • GridView
  • One external component of the experimental fxGen, an interface for virtual instrument randomization
    • FPC interface
  • A huge amount (~2000 lines worth) of work on a generalized artificial intelligence engine which, one day, may provide the solution to intelligent decision-making at the core of mGen plugins

2010, however, promises to bring even greater developments in the life of mGen, the next-generation algorithmic composition project.  Breakthroughs will be made, expectations will be met or shattered, and aspirations will grow ever loftier for the future of infinite music and, by extension, infinite art.

Here's to a bright future for algorithmic art!  Bring on 2010.