Well, the Workshop on Algorithmic Computer Music has finally come to a close. The presentation went well and I feel that my project was well-received.
The project is just about finished! I'm extremely happy with how it turned out.
One of the WACM presentations today struck me with its similarity to some of my previous ideas concerning melodic coherence. In fact, Daniel seems to have already developed, in an impressive amount of detail, the very thing that I was trying to do with lateral movement in EvoSpeak.
Daniel breaks up mid-level composition into two basic elements: the motive and the gesture. A motive can be thought of as a melodic idea or phrase. Though it does not specify an exact set of pitches or offsets, it must carry enough data to characterize the idea. In principle, I can imagine many valid ways to represent a motive: a random stream, a set of intervals, or a set of abstract quantitative characteristics. Ideally, the motive would have explicit rhythmic characteristics, as per my belief in the importance of strong rhythmic coherence. The gesture, on the other hand, is the function that takes a motive in its native form and maps it to concrete pitches. A gesture can be thought of as a transformation of the input data - the motive - to pitch space.
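To make the split concrete, here is a minimal sketch of the idea as I understand it. The names Motive and Gesture, and the particular gestures below, are my own illustrative choices, not Daniel's actual design: a motive carries intervals and a rhythmic profile but no absolute pitches, and a gesture is just a function that realizes it in pitch space.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Motive:
    """Abstract melodic idea: relative intervals plus a rhythmic profile."""
    intervals: List[int]    # relative motion in scale steps, not absolute pitches
    durations: List[float]  # rhythm in beats, one per resulting note

# A gesture maps a motive (plus a starting pitch) into concrete pitch space.
Gesture = Callable[[Motive, int], List[int]]

def literal_gesture(motive: Motive, root: int) -> List[int]:
    """Realize the motive literally, starting from a root pitch."""
    pitches = [root]
    for step in motive.intervals:
        pitches.append(pitches[-1] + step)
    return pitches

def inverted_gesture(motive: Motive, root: int) -> List[int]:
    """Realize the same motive with its melodic contour inverted."""
    pitches = [root]
    for step in motive.intervals:
        pitches.append(pitches[-1] - step)
    return pitches

motive = Motive(intervals=[2, 2, -1], durations=[1.0, 0.5, 0.5, 1.0])
print(literal_gesture(motive, 60))   # [60, 62, 64, 63]
print(inverted_gesture(motive, 60))  # [60, 58, 56, 57]
```

The point is that the motive stays fixed across the composition while different gestures present it in different ways, which is exactly the coherence-through-variation effect described above.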
In his implementation of the idea, Daniel grounds his gesture types with analogies to natural language processing. I feel, however, that such analogies are not necessary. The important concept here is that of retaining an abstract object, representative of a musical idea, and presenting it in different ways over the course of a composition via the use of functional transformations.
I'm already feeling good about the output of the percussive L-system engine. With more fine-tuning, this system may actually come to rival GGrewve in future weeks!
Well, my new direction is a much simpler one. Basically, I'm going to try to use L-systems for percussive pattern generation. In other words, an L-system drummer.
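As a sanity check on the idea, here is a minimal sketch of an L-system applied to drum patterns. The alphabet (K = kick, S = snare, h = hihat, . = rest) and the rewrite rules are my own placeholders, not the actual rules the engine uses:

```python
# Parallel-rewriting L-system over a toy percussion alphabet.
# These rules are illustrative only: each generation densifies the pattern.
RULES = {
    "K": "Kh",  # a kick grows a trailing hihat
    "S": "hS",  # a snare acquires a pickup hihat
    "h": "hh",  # hihats subdivide
    ".": ".",   # rests stay rests
}

def expand(axiom: str, depth: int) -> str:
    """Rewrite every symbol in parallel, depth times (standard L-system step)."""
    s = axiom
    for _ in range(depth):
        s = "".join(RULES.get(c, c) for c in s)
    return s

print(expand("K.S.", 2))  # Khhh.hhhS.
```

Each symbol in the output would then be assigned a fixed subdivision of the bar, so deeper expansions yield busier, self-similar grooves.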
Notes from Cope on EMMY
Something Paul said today really struck me. He talked about "transforming" rhythmic space such that durations and onsets map to a nonlinear space. The space can be chosen in such a way as to create the impression of some degree of rhythmic coherence even in the presence of random input data.
Essentially, consider snapping onset times and durations to a nonlinear grid. I imagine that this could have interesting consequences for plugins like Fraccut, which would lend themselves well to such a transformation.
The sonification project turned out really well. I was very happy with the results and could clearly hear the picture of the galaxy as intended. Things are going really well; now it's time to start hammering on the final project.