Notes from Talk with Peter Torpey at MIT
- EchoNest is doing the "inverse" of what I'm doing
- Could use "cloud" processing to make small generative music devices feasible: instead of an on-board rendering engine, the device would send rendering parameters to a server, which renders the compositions on a large, specialized parallel-processing cluster
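A minimal sketch of the cloud-rendering idea above: a lightweight device packages generative parameters into a compact payload and ships it to a render server instead of synthesizing audio locally. The endpoint URL and all parameter names here are assumptions for illustration, not anything discussed in the talk.

```python
import json

# Hypothetical endpoint; a real system would point at the render service.
RENDER_ENDPOINT = "https://render.example.com/compose"

def build_render_request(tempo_bpm, scale, seed, duration_s):
    """Package generative-music parameters as a compact JSON payload.

    The device sends only these few bytes; the heavy synthesis work
    happens server-side on the parallel-processing cluster.
    """
    params = {
        "tempo_bpm": tempo_bpm,
        "scale": scale,
        "seed": seed,          # lets the server reproduce the exact composition
        "duration_s": duration_s,
    }
    return json.dumps(params)

# The device would POST this payload (e.g. with urllib) and stream back
# the rendered audio; only the parameter serialization is sketched here.
payload = build_render_request(96, "D_dorian", 42, 30)
print(payload)
```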
- Though there are no dedicated algorithmic composition projects going on at the Media Lab right now, it's definitely something that would fit in
More to come later when I remember the rest of the stuff we talked about!