
Ideas on the Quantification of Style

March 5, 2009 Ideas 0 Comments

Today I worked on developing a structural outline for a system capable of analyzing and quantifying the essence of musical "style." The analysis will consist primarily of Markov chains with criteria automatically developed and analyzed by the program in the style of a nodal or neural network. In this way, the program will learn autonomously what criteria best 'define' a style and thus learn to reproduce new music in this style.
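To make the idea concrete, here is a minimal sketch of the kind of Markov-chain learner described above. The class name and note-string representation are assumptions for illustration (not mGen's actual design): the "criteria" reduce here to observed transition counts, which the model then samples from to produce new material in the learned style.

```python
import random
from collections import defaultdict

class StyleMarkov:
    """Hypothetical first-order Markov chain over musical events."""

    def __init__(self):
        # transitions[prev][next] = how often `next` followed `prev`
        self.transitions = defaultdict(lambda: defaultdict(int))

    def train(self, sequence):
        # Learn the style by counting note-to-note transitions.
        for prev, nxt in zip(sequence, sequence[1:]):
            self.transitions[prev][nxt] += 1

    def generate(self, start, length, rng=random):
        # Reproduce the style by sampling transitions in proportion
        # to how often they occurred in the training material.
        out = [start]
        for _ in range(length - 1):
            options = self.transitions[out[-1]]
            if not options:
                break
            notes, weights = zip(*options.items())
            out.append(rng.choices(notes, weights=weights)[0])
        return out

model = StyleMarkov()
model.train(["C", "E", "G", "E", "C", "E", "G", "C"])
melody = model.generate("C", 8)
```

A network-style extension, as the post suggests, would learn *which* features (intervals, rhythms, dynamics) to build such chains over, rather than fixing them by hand.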

mGen Sample 1

March 2, 2009 General News 0 Comments

Well, here it is (the clip should be playing when you load the blog).

Eighty-nine hundred lines of code, embodied in a single audio clip. Yes, I know it's bad, probably pretty much intolerable by the standards of any of you guys. So am I wasting my time? Thousands of lines, and all you get is an overly simplistic, predictable drum beat and uninspiring arpeggiations layered on top of overly simplistic chord progressions? No, not at all. Most of the coding so far has been dedicated to creating the framework for mGen, not the actual generative modules. The real work so far was getting a progression, arpeggiations, and a drum beat, then pulling them all together into the same mp3 file and rendering everything automatically. (Everything in this sample was done automatically; I was literally one button click away from my rendered mp3 file.)

I'd say I'm pretty much on target for my goal. For starters, I've already achieved the one-click philosophy cited in my proposal: my system has now effectively demonstrated its ability to go from nothing to a finished mp3 file in a single button click. Also, when one considers the complexity of computer music, the fact that mGen's first sample sounds even remotely rhythmically and harmonically sound is quite an impressive feat. There are no chords that sound bad, and most of the progressions actually sound good. The arpeggiations sound relatively good too. Both of those items were non-deterministic, meaning mGen completely determined each chord in the progression and each note in the arpeggios (in fact, it also individually determined the notes in each chord of the progression).
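For readers curious what "non-deterministic" means here, the sketch below shows the general shape of that kind of generation: each chord is chosen at random from a small vocabulary, then arpeggiated note by note, so every run yields a different progression. The chord table and function names are illustrative assumptions, not mGen's actual modules.

```python
import random

# Illustrative chord vocabulary as MIDI note numbers (60 = middle C).
CHORDS = {
    "C":  [60, 64, 67],
    "F":  [65, 69, 72],
    "G":  [67, 71, 74],
    "Am": [57, 60, 64],
}

def generate_progression(length, rng=random):
    # Non-deterministic: each chord is picked at random,
    # so the output differs from run to run.
    return [rng.choice(list(CHORDS)) for _ in range(length)]

def arpeggiate(chord_name, pattern=(0, 1, 2, 1)):
    # Play the chord tones one at a time in an up-down pattern.
    notes = CHORDS[chord_name]
    return [notes[i] for i in pattern]

progression = generate_progression(4)
arpeggios = [arpeggiate(chord) for chord in progression]
```

In a real system, the random choice would of course be constrained by harmonic rules (for example, the interval observations mentioned elsewhere on this blog) rather than being uniform.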

So, in defense of my somewhat-lacking first sample, I have clearly shown that what I want to do is possible. I have not spent too much time coding the actual generative part of my program, so I didn't expect a diamond on my first try. I poured in a lot of effort, and I think I reaped a fair reward.

From here, onward to better-sounding samples.

Music, the Brain, and Ecstasy (5)

Music, the Brain, and Ecstasy (Robert Jourdain)
Chapter 5: Rhythm

  • Meter vs. Phrasing
  • Phrasing is a higher-level unit of perception than meter, and encompasses harmonic tension, contour, and dynamics
  • Pulse lies at the core of meter
  • Perception of meter is based on prime numbers
  • Polyrhythms are made by playing more than one meter at a time
  • Syncopation is created when beats are accentuated apart from the regular metrical pattern; often the offbeats are regular enough to anticipate
  • Memory vs. Anticipation
  • Memory recalls what has already happened, anticipation draws on memory to predict notes to come (usually only a beat or two in the future)
  • Importance of tempo: if music moves too slowly, the relations are not close enough to be intelligible; if music moves too quickly, the brain cannot keep up with the relation modeling and has to move to shallower relations, missing the nuances of the piece
  • Music needs some gradual changes in tempo; sounds unnatural without them
  • Harmonic complexity and metrical complexity have an inverse relationship
  • Most listeners and composers alike now opt for harmonic complexity since harmony information is parallel, while metric information is serial, thus more harmony information can be modeled in a shorter time
  • "Memory is music's canvas"
  • "In music, it is phrasing that reaches farthest across time to encompass the deepest relations"
  • "Composers gain maximum effect by interweaving the tensions created by music's various aspects"
  • "Tempo matters because the mechanics of music perception are exceedingly sensitive to the rate at which musical structures are presented to the brain"
  • "Most tempo fluctuations are made intentionally. Music just doesn't sound right without them"
  • "The more harmony wanders from its tonal center, the more it requires rhythmic buttressing"

Notes from Meeting with Machover

Although I was unable to write down many exact quotes from Professor Machover (thoughts were rolling quite fast), I snatched the general idea of a few of his important thoughts:

  • "I think music is more than entertainment"
  • Music should be made to touch people
  • Melodies have order but they are diverse, probably the most important part of a piece (relatively speaking)
  • Nobody has figured out a way to design a composition system that can determine the difference between pretty good and really good

Tod also directed me to the following sources:

Progress Report #7

March 1, 2009 General News 0 Comments

A Quick Off-Topic Note
I spent the past week in Boston visiting MIT (for those who aren't sure what that is, it's the Massachusetts Institute of Technology, widely known as the best engineering/mathematics university in the world; check it out at mit.edu). It was everything I expected and more...I'm simply in love with it. I met with many people there, including a professor of computer science and contributor to the field of cryptography who works closely with the National Security Agency, a physics graduate student helping design a massive laser to detect and prove the existence of gravitational waves as postulated by Einstein's General Theory of Relativity, a famous composer and opera writer, and a programmer responsible for an innovative music composition tool. What a week. Now the only problem: a 10% admissions rate. Ouch. Wish me luck.

Reflection
I actually did a lot this week. I took my laptop with me to Boston and worked on the program every night. I finished a progression module that works based on the interval observations I posted in my blog a few days ago. It turned out quite nicely, and I'm continuing to refine it.

By far the most productive thing I did this week, however, was meet with Tod Machover, a well-known composer, opera writer, digital composition innovator, and just about everything else you can imagine. After getting over my initial feeling of awe when I met him, I managed to talk with him for a good while and told him about my project. He was genuinely interested, gave me some suggestions, and overall seemed to think I was heading down a very promising path. He also introduced me to Peter Torpey, one of the programmers who helped make the Hyperscore software a reality. Peter also took a great interest in my project after I explained it to him in a fair amount of detail. He told me to keep in touch and let him know how the project goes. So I may have found some beta testers for my project! (That's the technical term for the people who test software before it gets released.) Peter also told me that a modularized structure (like the one I currently have built into my program) is definitely the way to go. He also said creative interfaces are important for making things intuitive to the public.

Goals

  1. Render a music sample
    Obviously this is a recurring goal for me. I have yet to achieve it, just because I don't want the first music sample to be embarrassingly bad. I know it won't be great, but I want it to at least sound somewhat like music. I think I'm getting pretty darn close to achieving this goal.
  2. Read more of Music, the Brain, and Ecstasy
    This is an important part of my research. I'm doing heavy note taking and annotating on this book.
  3. Organize everything into Noodlenotes
    In order to prepare for my research portfolio, I'll need to organize what I have into Noodlenotes. I should also start printing screen captures of the program's interface so that I can visually prove some of the work I've been doing. Ideally I could even have a sound clip as part of my portfolio.
  4. Expand the progression data structure
    This is a pretty specific goal. Right now the progression data structure stores everything in terms of which notes will play and how long they will play. I should at least separate the chord from the bass note so that I can have interesting splits between bass and treble.
  5. Add capabilities for post-processing
    I need to make the program capable of running post-processing modules after it finishes running generative modules. An example of a plugin that would fall under this category would be one that applies a swing groove to the piece, or perhaps various playing styles. This is done after the actual notes of the piece have been generated.
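Goals 4 and 5 above are concrete enough to sketch. The following is a hypothetical illustration, with invented names, of what they might look like: a progression entry that stores the bass note separately from the treble voicing, and a post-processing pass that applies a swing groove to already-generated note events.

```python
from dataclasses import dataclass

@dataclass
class ProgressionEntry:
    """One chord in the progression, with bass stored separately
    so treble and bass can be split (goal 4)."""
    chord: list      # treble voicing as MIDI note numbers
    bass: int        # bass note, independent of the voicing
    duration: float  # length in beats

def swing(notes, ratio=2/3):
    """Post-processing module (goal 5): delay every offbeat eighth
    note to `ratio` of the beat, giving a swing groove. Runs after
    the generative modules have produced the note events."""
    swung = []
    for start, pitch in notes:
        beat, frac = divmod(start, 1.0)
        if abs(frac - 0.5) < 1e-9:   # exactly on the offbeat eighth
            start = beat + ratio     # push it later in the beat
        swung.append((start, pitch))
    return swung

entry = ProgressionEntry(chord=[64, 67, 72], bass=48, duration=4.0)
events = swing([(0.0, 60), (0.5, 62), (1.0, 64)])
```

The key design point in both cases is separating *what* the notes are from *how* they are rendered, which is what lets a post-processor like `swing` be chained after any generative module.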