
WACM: Day 10

Accomplishments

  • More fine-tuning of the percussive L-system
  • All probabilities and constants that affect the production rules and hit context rules are now located in compact lists, allowing for easy modification
  • Wrote a humanization function that adds both random variation and sinusoidal accenting to arbitrary patterns (a quick sketch follows)
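
For posterity, here's roughly the shape of that function. This is a minimal sketch rather than the actual WACM code: the (onset . velocity) pattern representation, the jitter range, and the accent depth are all assumptions.

;; Minimal sketch of the humanization idea: each hit's velocity gets
;; a small random perturbation plus a sinusoidal accent whose period
;; is one measure.  The (onset . velocity) representation and the
;; constants are assumptions, not the actual WACM code.
(defun humanize (pattern &key (jitter 5) (accent-depth 10) (beats-per-measure 4))
  "PATTERN is a list of (onset . velocity) pairs; onsets are in beats."
  (mapcar
   (lambda (hit)
     (let* ((onset (car hit))
            (velocity (cdr hit))
            ;; random variation in [-jitter, +jitter]
            (noise (- (random (+ (* 2 jitter) 1)) jitter))
            ;; sinusoidal accent peaking at the start of each measure
            (accent (* accent-depth
                       (cos (/ (* 2 pi onset) beats-per-measure)))))
       (cons onset
             (max 1 (min 127 (round (+ velocity noise accent)))))))
   pattern))

;; Example: half a bar of steady eighth notes at velocity 90.
;; (humanize '((0 . 90) (0.5 . 90) (1 . 90) (1.5 . 90)))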

The project is just about finished!  I'm extremely happy with how it turned out.

Motives and Gestures: Developing Abstract Ideas

One of the WACM presentations today struck me with its similarity to some of my previous ideas concerning melodic coherence.  In fact, Daniel seems to have already developed, in an impressive amount of detail, the very thing I was trying to do with lateral movement in EvoSpeak.

Daniel breaks up mid-level composition into two basic elements: the motive and the gesture.  A motive can be thought of as a melodic idea or phrase.  Though it does not represent an exact set of pitches or offsets, it must carry some amount of data that characterizes the idea.  In principle, I can imagine many valid ways to represent a motive: as a random stream, a set of intervals, or a set of abstract quantitative characteristics.  Ideally, the motive would have explicit rhythmic characteristics, in keeping with my belief in the importance of strong rhythmic coherence.  The gesture, on the other hand, is the function that takes a motive from its native form and maps it to precise pitch form.  A gesture can be thought of as a transformation of the input data - the motive - into pitch space.
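
To make the split concrete, here is one possible encoding.  This is purely my own sketch, not Daniel's implementation; the interval/duration representation and the scale-degree mapping are assumptions.

;; One possible encoding of the motive/gesture split.  A motive is
;; abstract interval-and-rhythm data; a gesture is a function that
;; realizes it in pitch space.
(defparameter *motive*
  '((0 . 1) (2 . 1) (1 . 2))   ; scale-step intervals with durations
  "An abstract melodic idea: relative steps, not concrete pitches.")

(defun make-gesture (scale root)
  "Return a gesture: a function mapping a motive to (pitch . duration) pairs.
Octave wrapping is deliberately ignored to keep the sketch small."
  (lambda (motive)
    (let ((degree 0))
      (mapcar (lambda (step)
                (incf degree (car step))
                (cons (+ root (nth (mod degree (length scale)) scale))
                      (cdr step)))
              motive))))

;; The same motive presented two ways via different gestures:
;; (funcall (make-gesture '(0 2 4 5 7 9 11) 60) *motive*)  ; C major
;; (funcall (make-gesture '(0 2 3 5 7 8 10) 57) *motive*)  ; A minor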

In his implementation of the idea, Daniel grounds his gesture types in analogies to natural language processing.  I feel, however, that such analogies are not necessary.  The important concept here is that of retaining an abstract object, representative of a musical idea, and presenting it in different ways over the course of a composition via functional transformations.

WACM: Day 9

Accomplishments

  • Designed a set of L-system production rules for creating interesting patterns
  • Created a contextual hit system, wherein the output of the L-system changes meaning based on beat stress (sketched below)
    • Allows for temporal coherence despite the L-system's blindness to time
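
Roughly, the contextual hit idea looks like this.  The symbol names and the stress profile below are placeholders, not the engine's actual alphabet or rules.

;; Sketch of the contextual hit idea: the same L-system symbol maps to
;; different drum hits depending on the metric stress of the beat it
;; lands on.  Symbols and the stress table are placeholder assumptions.
(defparameter *stress-profile* #(3 1 2 1)
  "Relative stress of each beat in a 4/4 measure.")

(defun interpret-symbol (symbol beat)
  "Map an L-system SYMBOL to a drum hit, given the BEAT it falls on."
  (let ((stress (aref *stress-profile* (mod beat 4))))
    (case symbol
      (a (if (>= stress 2) 'kick 'closed-hat))   ; strong beats get the kick
      (b (if (>= stress 2) 'snare 'ghost-snare))
      (t 'rest))))

(defun interpret-string (symbols)
  "Assign one symbol per beat and interpret each in its metric context."
  (loop for symbol in symbols
        for beat from 0
        collect (interpret-symbol symbol beat)))

;; (interpret-string '(a b a b a a b b))
;; => (KICK GHOST-SNARE KICK GHOST-SNARE KICK CLOSED-HAT SNARE GHOST-SNARE)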

I'm already feeling good about the output of the percussive L-system engine.  With more fine-tuning, this system may actually come to rival GGrewve in future weeks!

WACM: Day 8

Accomplishments

  • Played around extensively with the neural network/genetic algorithm combination
  • Developed a test for GA convergence (see the sketch below)
  • Decided that the genetic algorithm was NOT converging with neural net fitness functions... abandoned the whole darn project and got a new idea
  • Finished L-system engine
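
For reference, a minimal sketch of the kind of convergence test I mean: watch the best fitness over a sliding window and flag convergence when improvement stalls.  The window size and threshold here are arbitrary assumptions, not the values I actually used.

;; Plateau-style convergence check: if the best fitness hasn't moved
;; by more than THRESHOLD over the last WINDOW generations, call it
;; converged.  Constants are arbitrary assumptions.
(defun converged-p (fitness-history &key (window 20) (threshold 0.01))
  "FITNESS-HISTORY is a list of best-fitness values, newest first."
  (when (> (length fitness-history) window)
    (let ((newest (first fitness-history))
          (oldest (nth window fitness-history)))
      (< (abs (- newest oldest)) threshold))))

;; A flat history (no improvement) reads as converged:
;; (converged-p (make-list 30 :initial-element 0.42))  ; => T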

Well, my new direction is a much simpler one.  Basically, I'm going to try to use L-systems for percussive pattern generation.  In other words, an L-system drummer.
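
The core of an L-system engine is just iterated parallel rewriting.  Here's a minimal sketch, with a toy deterministic rule set standing in for the real (probabilistic) productions:

;; The heart of an L-system: apply production rules to every symbol in
;; parallel, then iterate.  The rule set below is a toy stand-in.
(defparameter *rules*
  '((a . (a b))     ; a -> a b
    (b . (a)))      ; b -> a
  "Deterministic productions; each symbol rewrites to a list of symbols.")

(defun rewrite (symbols rules)
  "Apply RULES to every symbol of SYMBOLS once, in parallel."
  (mapcan (lambda (symbol)
            (copy-list (or (cdr (assoc symbol rules))
                           (list symbol))))
          symbols))

(defun generate (axiom rules depth)
  "Iterate REWRITE on AXIOM for DEPTH generations."
  (if (zerop depth)
      axiom
      (generate (rewrite axiom rules) rules (1- depth))))

;; (generate '(a) *rules* 5)
;; => (A B A A B A B A A B A A B)  ; Fibonacci-word growth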

WACM: Day 7

Accomplishments
  • Began neural network engine

Notes from Cope on EMMY

  • Lazy voice-leading for high parts
  • Separate composition into “beats”
  • Make vertical lexicons
  • Look at next lexicon, use voice-leading
  • Essentially a first-order Markov analysis of grouped “lexicons” (see the sketch after these notes)
  • To give the impression of higher-level structure
    • Put more information into lexicons
    • Contextual information (where is the lexicon supposed to go?)
  • Roughly 2% of EMMY's code is actually related to music composition
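
The Markov-over-lexicons point is the crux, as I understand it.  A minimal sketch of the analysis-and-recombination step, with lexicons reduced to opaque symbols (the real thing would carry note content and contextual information):

;; First-order Markov idea: treat each vertical "lexicon" (a beat's
;; worth of simultaneous notes) as an opaque symbol, tabulate which
;; lexicon follows which in the corpus, then recombine by walking the
;; table.  Lexicon contents are elided in this sketch.
(defun build-transition-table (lexicon-sequence)
  "Map each lexicon to the list of lexicons observed to follow it."
  (let ((table (make-hash-table :test #'equal)))
    (loop for (current next) on lexicon-sequence
          while next
          do (push next (gethash current table)))
    table))

(defun recombine (table start length)
  "Generate LENGTH lexicons by a random walk over TABLE from START."
  (loop repeat length
        for current = start then (let ((followers (gethash current table)))
                                   (if followers
                                       (nth (random (length followers)) followers)
                                       start))   ; dead end: restart
        collect current))

;; (let ((table (build-transition-table '(l1 l2 l3 l1 l2 l1))))
;;   (recombine table 'l1 8))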

WACM: Day 6

Accomplishments
  • Finished and tested genetic algorithm engine
  • Reformatted the compositional data structure into a "general property list" that can be mutated by the GA (a toy version is sketched below)
  • Ran some initial tests on composing with individuals from a random genetic population
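
To illustrate the property-list idea, here's a toy version.  The parameter names and the mutation operator are my own stand-ins, not the engine's actual format: the point is that when every compositional parameter lives in one flat plist, a single generic operator can mutate any of them.

;; Toy "general property list" individual plus a generic mutator.
(defparameter *individual*
  '(:tempo 120 :density 0.5 :register 60 :syncopation 0.2))

(defun mutate (plist rate scale)
  "Return a copy of PLIST with each numeric value perturbed
with probability RATE by a uniform amount in [-SCALE, +SCALE]."
  (loop for (key value) on plist by #'cddr
        append (list key
                     (if (and (numberp value) (< (random 1.0) rate))
                         (+ value (- (random (* 2.0 scale)) scale))
                         value))))

;; (mutate *individual* 0.3 0.1)
;; => e.g. (:TEMPO 120 :DENSITY 0.468 :REGISTER 60 :SYNCOPATION 0.274)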

Something Paul said today really struck me.  He talked about "transforming" rhythmic space such that durations and onsets map to a nonlinear space.  The space can be chosen in such a way as to create the impression of some degree of rhythmic coherence even in the presence of random input data.

Essentially, consider snapping time and duration to a nonlinear grid.  I imagine this could have interesting consequences for plugins like Fraccut, which would lend themselves to such a paradigm.
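
Here's my reading of the idea as a sketch.  The particular grid below (strong beats plus a few off-beats) is an assumption; the point is that random onsets land in coherent-sounding places after snapping.

;; Snap onsets to a non-uniform grid weighted toward metrically strong
;; positions, so even random input acquires loose rhythmic coherence.
(defparameter *nonlinear-grid* '(0 1 1.5 2 3 3.5)
  "Allowed onset positions within one 4/4 measure, in beats.")

(defun snap (onset grid)
  "Snap ONSET (in beats) to the nearest position on GRID, per measure."
  (multiple-value-bind (measure phase) (floor onset 4)
    (let ((nearest (first grid)))
      (dolist (point grid (+ (* measure 4) nearest))
        (when (< (abs (- phase point)) (abs (- phase nearest)))
          (setq nearest point))))))

;; (mapcar (lambda (x) (snap x *nonlinear-grid*)) '(0.3 1.2 2.6 3.4 5.1))
;; => (0 1 3 3.5 5)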

WACM: Day 3

Accomplishments

  • Wrote a library for turning a bitmap into a composition (sonification project); a rough sketch of the idea appears below
  • Added some more functions to streamline the composition process
    • An entire piece can now be made by calling (composition->MIDI)
  • Began work on random streams
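
For flavor, here's a guess at the general shape such a bitmap-to-notes mapping might take; it isn't the actual library code.  Columns scan left to right as time, rows map to pitch, and sufficiently bright pixels become notes.

;; Guess at the shape of a bitmap sonification (not the actual
;; library): columns are time, rows are pitch, and a pixel brighter
;; than THRESHOLD becomes a note.
(defun bitmap->notes (bitmap &key (threshold 128) (base-pitch 36))
  "BITMAP is a 2D array of grayscale values (0-255).
Return a list of (onset pitch) pairs, onsets in beats."
  (let ((notes '()))
    (dotimes (col (array-dimension bitmap 1) (nreverse notes))
      (dotimes (row (array-dimension bitmap 0))
        (when (> (aref bitmap row col) threshold)
          ;; low rows = high pitches, as in the image itself
          (push (list col (+ base-pitch
                             (- (array-dimension bitmap 0) row 1)))
                notes))))))

;; (bitmap->notes #2A((0 255) (255 0)))
;; => ((0 36) (1 37))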

The sonification project turned out really well.  I was very happy with the results and could clearly hear the picture of the galaxy as intended.  Things are going really well; now it's time to start hammering on the final project.