Here's a rough diagram of the process with the entities shown on the left and the process shown on the right.
I had a small breakthrough several minutes ago. Grammatical percussion is really all about subdividing time. It's about taking a given chunk of time and doing something with it. Unlike the melodic grammatical system, the percussive system should have "words" that correspond to different ways to subdivide a set amount of time. This way, one does not have to deal with overlapping words and the other problems that arise when the length of a word isn't absolutely defined.
The process of subdivision is recursive, so it might start at the level of a measure and ask, "OK, how do I want to subdivide this measure?" The response may be, "I want two equal subdivisions." The process will then make another decision: "Do I want to go ahead and replace this chunk of time (half a measure) with an 8-beat word? Or do I want to subdivide it further?" Let's say the computer decides to split the first chunk of time into two quarter notes (each 4 beats, or 1/4 of a measure). It decides to replace the second chunk with a word.
So the system really operates around two entities and one process: time "chunks," words, and subdivision, respectively.
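The recursive subdivision described above can be sketched in a few lines. This is a minimal illustration, not mGen's actual code; the vocabulary of "words" (here strings of hits `x` and rests `.`) and the split probability are invented for the example.

```python
import random

# Assumed vocabulary: rhythmic "words" keyed by their length in beats.
# 'x' is a hit, '.' is a rest; every word's length is absolutely defined.
WORDS = {
    16: ["x...x...x...x..."],
    8:  ["x...x...", "x..x..x."],
    4:  ["x...", "x.x.", "xx.."],
    2:  ["x.", "xx"],
}

def subdivide(beats, split_prob=0.5):
    """Return a pattern covering exactly `beats` beats.

    A chunk is either replaced by a word of that exact length, or split
    into two equal halves that are themselves subdivided recursively.
    (Assumes `beats` is a power of two present in WORDS.)
    """
    can_split = beats // 2 in WORDS
    if beats in WORDS and (not can_split or random.random() > split_prob):
        return random.choice(WORDS[beats])   # replace chunk with a word
    half = beats // 2
    return subdivide(half, split_prob) + subdivide(half, split_prob)

pattern = subdivide(16)
print(pattern)  # always exactly 16 characters, e.g. "x.x.x..x..x.x..."
```

Because every word's length is fixed, the recursion always returns a pattern of exactly the requested length, which is the whole point of the chunk-and-word design.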
Conceptually this is a big step forward for grammatical drumming. With any luck it will make the process a lot less painful.
I'm still trying to adapt the grammatical method to work with drums and other percussive instruments. The problem is that, while melodic instruments only require notes (and details such as velocity, panning, etc.), drums require notes that correspond to different drums. Thus, the notes are not related in the same way that melodic notes are. Drums have no "key" and do not play contoured riffs.
Representing drum patterns within a grammar will require a more flexible grammar that allows for multiple notes on the same beat. I'm thinking of using a very low-level grammar to describe individual hit patterns on the different drums, then combining these in higher-level grammars that describe riffs, fills, and grooves that finally combine into the highest-level grammar that describes an entire style of playing.
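A minimal sketch of that layered idea, with invented drum names and patterns: a low-level grammar supplies per-drum hit patterns, and a higher level stacks them, so multiple drums can sound on the same beat.

```python
import random

# Low-level grammar (hypothetical): candidate hit patterns per drum.
# 'x' is a hit, '.' is a rest; all patterns here span 8 beats.
HIT_GRAMMAR = {
    "kick":  ["x...x...", "x..x..x.", "x...x.x."],
    "snare": ["....x...", "..x...x."],
    "hihat": ["x.x.x.x.", "xxxxxxxx"],
}

def generate_groove():
    """Higher level: pick one pattern per drum; the 'groove' is the stack."""
    return {drum: random.choice(patterns)
            for drum, patterns in HIT_GRAMMAR.items()}

def hits_on_beat(groove, beat):
    """All drums sounding on a given beat -- simultaneity is allowed."""
    return [drum for drum, pat in groove.items() if pat[beat] == "x"]

groove = generate_groove()
for beat in range(8):
    print(beat, hits_on_beat(groove, beat))
```

Riffs, fills, and whole styles would then be further grammars over these grooves, but the key flexibility is already visible here: a single beat can carry any subset of the drums.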
After the recent data structure overhaul, adding new features to mGen feels like a breeze, which is a nice treat. Today I wrote a basic module instruction mechanism that allows the structure module to work with the generative modules to coordinate the composition at a higher level. This is, of course, essential to the coherence of the composition and is a feature that will require a lot of refining if I hope to get good material out of mGen.
Basically, the structure module can now coordinate, for instance, when the piano should come in, when the drums should make an entrance, when things should get softer, and when things should get heavier. There's now an overlaying set of general module instructions to help the generative modules achieve a greater coherence.
Also, the data structure now allows modules to effectively "see" each other even before they have generated any output. This was necessary at first because the structure module needed to be able to see the generative modules before it could start giving part instructions...you can't rely on a nonexistent pianist to start a song, nor a nonexistent drummer to get fancy with a solo! As a consequence, modules can also now see each other, which could conceivably be used for dynamic interactivity between them. Although there is no data structure in place yet to allow modules to communicate with each other, that may be a feature in the future. This could allow, for example, the drum and bass modules to "establish a groove" before they start generating the composition. Communication is essential in a real band, so it should be essential in mGen as well.
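The coordination idea can be illustrated with a small sketch. Everything here is invented for illustration (module names, field names, the staggered-entrance policy); the point is simply that a structure module which can see the generative modules up front can hand each one high-level instructions before any notes exist.

```python
# The structure module can "see" the generative modules before any
# output has been produced...
modules = ["piano", "drums", "bass"]

def plan_structure(modules):
    """...so it can assign each one an entrance point and a dynamic level.
    The staggered-entrance policy here is a made-up example."""
    plan = {}
    for i, name in enumerate(modules):
        plan[name] = {
            "enter_at": i * 4,                     # measure of first entrance
            "dynamic": "soft" if i == 0 else "medium",
        }
    return plan

instructions = plan_structure(modules)
# Each generative module would consult its own entry before composing.
print(instructions["drums"])  # {'enter_at': 4, 'dynamic': 'medium'}
```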
Lots of progress has been made these past few days. mGen's compositions are taking less and less of my intervention to sound good. I usually just plop in a synth doing some rhythm work or harmony and then lay down a drum groove. Throw in some nice mixer effects and it all sounds pretty darn impressive, or at least I think so. Soon enough mGen will be doing all of that autonomously. It's a scary thought. But it's the future of music.
Tonight was a long night for mGen. At the beginning of the night, the poor thing was told it had an obsolete internal data structure that lacked much structure to speak of. Furthermore, mGen was accused of inefficient data handling that was making other modules work harder than necessary.
After an immense surgery that lasted about four hours, mGen is now smiling with a brand-new, sparkly internal data structure. The structure now conforms to the data structures used by other modules and makes it much easier for all other components of the program to access information about the composition on the fly. The surgery has, however, rendered mGen incompatible with most of the previous plugins I wrote to accompany it. They too must schedule an appointment for surgery to become capable of taking advantage of mGen's new data structure.
In short, I rebuilt the internals of the mGen framework from scratch. It's an investment in the future, in which the program will handle data much more efficiently. It's really quite a huge change/improvement, as reflected by the five-hundred-line increase in code length.
I made the GUI look a little prettier today. I am still withholding screenshots from the blog, however, because I do not want it to be seen until it is ready.
I am also working on filling up the main panels of the program, since the border regions are populated with functional tools, but the center area is completely blank. I'm not sure what to put there.
Well, after coming back from a long break, I have some good progress to report.
I tried implementing a grammatical system for algorithmic composition over the break and had a good deal of success in my endeavors. Although I created only a rudimentary composition language and a very basic phrase generator, the results seem more natural and interesting than any other method explored thus far, which means progress!
After a great deal of thinking on the subject, I've decided that a grammatical system might be the key I've been looking for: a successful path to my goals. The trick is that I can use a grammatical system as the underlying paradigm for other methods. In other words, I could have an evolutionary model that uses an underlying higher-level grammar to generate phrases. In theory, this idea is really no different from using an evolutionary model to generate an equally abstract number that corresponds to a certain pitch (a MIDI note event). I could do the same thing with Markov chains and state-transition matrices.
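A rudimentary phrase generator of the kind described above can be sketched as a rewriting grammar. The symbols and rules below are invented examples (terminals are scale degrees as strings); any other method, evolutionary or Markov, could then operate on top of symbols like these instead of raw pitches.

```python
import random

# Hypothetical rewriting grammar: uppercase symbols rewrite into one of
# their alternatives; lowercase/numeric tokens are terminal "notes"
# (scale degrees here).
RULES = {
    "PHRASE":  [["MOTIF", "MOTIF"], ["MOTIF", "CADENCE"]],
    "MOTIF":   [["1", "3", "5"], ["5", "3", "1"], ["1", "2", "3"]],
    "CADENCE": [["5", "1"]],
}

def expand(symbol):
    """Recursively rewrite a symbol until only terminal notes remain."""
    if symbol not in RULES:
        return [symbol]                      # terminal: an actual note
    expansion = random.choice(RULES[symbol])
    return [note for part in expansion for note in expand(part)]

print(expand("PHRASE"))  # e.g. ['1', '3', '5', '5', '1']
```

Swapping the random choice in `expand` for a fitness-driven or state-transition-driven choice is exactly where an evolutionary model or Markov chain would plug in.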
There are a lot of places to go with grammatical algorithmic composition. I have a feeling I'm just scraping the surface of something big. Let's hope I'm not let down.
As the foundations upon which music is built, rhythm and meter will play an obvious and pivotal role in my program. Unfortunately, I have read very little on the topics, as Music, the Brain, and Ecstasy devoted only a single chapter to the subject in general. I need to delve deeper into the topic. To do so, I'll need some good sources.
Here are some books I'm looking at:
The first one looks extremely comprehensive and helpful.
A very interesting article that touches on some rhythmic similarities between language and music.