Don't Blame airframe

The first active tests of airframe today were basically all failures.  The coherence hasn't improved at all.  But I'm not sure why I ever expected that it would.  You can't take all the problems that you've been ignoring for so long and blame them all on one module.  It's not airframe's fault.  There's nothing wrong with the L-system.

The fact of the matter is, generative modules aren't musicians yet.  They don't act, they don't think, they don't respond.  They aren't dynamic like musicians are.  As I've said before, they shouldn't rely solely on the structure module.  They're too rigid right now.  I don't know how to fix this.  Obviously I need more variety in almost all the plugins.  But it's more than that: it's the ability to know when to play what.  It's one of those "essential questions" that you can't defeat with sheer complexity.

So my game plan?  Multifaceted.

FIRST, rework the part classification system so that it's no longer based on instrument name.  "Piano" and "Strings" don't help the structure module arrange the piece at all.  What would be a lot more useful is if parts were named according to their playing characteristics - "Melodic - Lead," "Melodic - Background," "Sustained," "Percussive," etc.  This way, the structure module's arrangements will make more sense.
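A rough sketch of the idea in Python (all the class and role names here are hypothetical, not the program's actual identifiers): the instrument name is kept only for playback, while the arranger sees the role.

```python
from enum import Enum

class PartRole(Enum):
    # Roles describe how a part plays, not what instrument plays it.
    MELODIC_LEAD = "Melodic - Lead"
    MELODIC_BACKGROUND = "Melodic - Background"
    SUSTAINED = "Sustained"
    PERCUSSIVE = "Percussive"

class Part:
    def __init__(self, instrument: str, role: PartRole):
        self.instrument = instrument  # still needed for playback
        self.role = role              # what the structure module arranges by

# A string pad and a piano pad would now look identical to the arranger.
strings_pad = Part("Strings", PartRole.SUSTAINED)
print(strings_pad.role.value)  # prints: Sustained
```

The point is that swapping "Strings" for "Choir" no longer changes anything the structure module can see.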

SECOND, completely nix the "part intensity" instruction.  Musicians can figure out when to change volume based on how the composer moves his hands; it doesn't require individual instructions beforehand.  In other words, the generative modules will be responsible for crafting their own volume instructions.  Instead, there will be three part instruction states: "On," "Off," and "Focus."  This will eliminate the quantitative junk from the recommendation system, which is totally unnecessary and I'm really getting fed up with it.  This new three-state system will tell modules whether to play, rest, or take focus.  The focus instruction isn't quite as obvious as the others, but it basically means that the module can draw attention to itself, via a shift in velocity or a more aggressive playing style.  It's not really a solo; the idea is to keep track of what the listener is focusing on so that modules can constantly shift the listener's attention and keep the song fresh.
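One minimal way to sketch this (the one-focus-at-a-time policy is my assumption about how focus would be handed out, not a stated rule):

```python
from enum import Enum

class PartState(Enum):
    OFF = 0    # rest
    ON = 1     # play normally
    FOCUS = 2  # draw the listener's attention

def assign_states(parts, focus_index):
    """Give exactly one part Focus; everyone else just plays.

    Rotating focus_index over time is what keeps shifting the
    listener's attention from part to part.
    """
    return {part: (PartState.FOCUS if i == focus_index else PartState.ON)
            for i, part in enumerate(parts)}

parts = ["lead", "pad", "drums"]
states = assign_states(parts, focus_index=1)  # the pad takes focus this section
```

Note there's no number anywhere: each module decides for itself what "focus" sounds like.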

THIRD, make the generative modules "dynamic" in their playing.  Changeable styles, evolving parts, etc.  The musician is responsible for giving his part direction.  Simple to say, very, very hard to actually do.  This will be the most difficult.
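To make "evolving" concrete, here's one toy approach: a module whose style parameters drift as a clamped random walk instead of staying fixed.  The parameter names (density, aggression) and the random-walk scheme are purely illustrative assumptions.

```python
import random

class DynamicModule:
    """Sketch of a generative module that gives its part direction
    by drifting its own style parameters each bar."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.density = 0.5     # hypothetical: how busy the part is
        self.aggression = 0.3  # hypothetical: accent/velocity tendency

    def step(self):
        # Small clamped nudges: the playing drifts rather than jumps,
        # so the part evolves instead of flickering.
        nudge = lambda v: min(1.0, max(0.0, v + self.rng.uniform(-0.1, 0.1)))
        self.density = nudge(self.density)
        self.aggression = nudge(self.aggression)
        return self.density, self.aggression

m = DynamicModule(seed=42)
history = [m.step() for _ in range(8)]  # eight bars of gradual change
```

A random walk obviously isn't real musical direction, but even this much would break the current rigidity.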

FOURTH, establish "real" qualitative and quantitative criteria for classifying structure parts.  "Intensity" is the only variable I use right now, even though some modules recognize "tension" and "fullness" as well.  These abstractions really mean very little to the program right now though, probably because I don't have a good understanding of them myself.  Maybe a single "emotion" state variable would be more appropriate?  At any rate, I feel that the current system just isn't adequate.
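If the single "emotion" variable turned out to be the right call, it might just bundle the existing abstractions and let the piece drift between moods.  The field names and 0-to-1 ranges below are assumptions for the sketch:

```python
from dataclasses import dataclass

@dataclass
class Emotion:
    intensity: float  # 0.0 quiet  .. 1.0 loud
    tension: float    # 0.0 resolved .. 1.0 dissonant
    fullness: float   # 0.0 sparse .. 1.0 dense

    def blend(self, other, t):
        """Interpolate toward another mood (t in [0, 1]), so the
        structure module can move the piece between emotional states
        gradually instead of switching abruptly."""
        def lerp(a, b):
            return a + (b - a) * t
        return Emotion(lerp(self.intensity, other.intensity),
                       lerp(self.tension, other.tension),
                       lerp(self.fullness, other.fullness))

calm = Emotion(intensity=0.2, tension=0.1, fullness=0.3)
climax = Emotion(intensity=0.9, tension=0.8, fullness=1.0)
mid = calm.blend(climax, 0.5)  # halfway to the climax
```

Whether one blended state is richer or poorer than three independent variables is exactly the open question.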

This stuff won't happen overnight, so let's hope I have the endurance.