Motives and Gestures: Developing Abstract Ideas
One of the WACM presentations today struck me with its similarity to some of my previous ideas concerning melodic coherence. In fact, Daniel seems to have already developed, in an impressive amount of detail, the very thing that I was trying to do with lateral movement in EvoSpeak.
Daniel breaks mid-level composition into two basic elements: the motive and the gesture. A motive can be thought of as a melodic idea or phrase. Though it does not represent an exact set of pitches or offsets, it must carry enough data to characterize the idea. In principle, I can imagine many valid ways to represent a motive: a random stream, a set of intervals, or a set of abstract quantitative characteristics. Ideally, the motive would have explicit rhythmic characteristics, in keeping with my belief in the importance of strong rhythmic coherence. The gesture, on the other hand, is the function that takes a motive from its native form and maps it to precise pitch form. A gesture can be thought of as a transformation of the input data - the motive - into pitch space.
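To make the distinction concrete, here is a minimal sketch of how I might model it, not Daniel's actual implementation: a motive as abstract interval-and-rhythm data (the names `Motive`, `ascending`, and `inverted` are my own hypothetical choices), and gestures as functions that realize that data as concrete pitches.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Motive:
    """Abstract melodic idea: contour and rhythm, but no fixed pitches."""
    intervals: List[int]   # melodic contour in semitones
    rhythm: List[float]    # durations in beats, kept explicit for rhythmic coherence

# A gesture maps a motive plus a starting pitch into concrete pitch space.
Gesture = Callable[[Motive, int], List[int]]

def ascending(motive: Motive, start: int) -> List[int]:
    """Realize the motive literally from the starting pitch."""
    pitches = [start]
    for step in motive.intervals:
        pitches.append(pitches[-1] + step)
    return pitches

def inverted(motive: Motive, start: int) -> List[int]:
    """Realize the motive with its contour turned upside down."""
    pitches = [start]
    for step in motive.intervals:
        pitches.append(pitches[-1] - step)
    return pitches

# One abstract idea, presented two different ways over a piece.
motive = Motive(intervals=[2, 2, -1], rhythm=[1.0, 0.5, 0.5, 2.0])
print(ascending(motive, 60))  # [60, 62, 64, 63]
print(inverted(motive, 60))   # [60, 58, 56, 57]
```

The point of the sketch is that the motive itself never changes; only the gesture applied to it does, which is exactly the kind of functional transformation described above.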
In his implementation of the idea, Daniel grounds his gesture types in analogies to natural language processing. I feel, however, that such analogies are not necessary. The important concept here is that of retaining an abstract object, representative of a musical idea, and presenting it in different ways over the course of a composition via functional transformations.