mGen Sample #4?

I got really close to sample #4 tonight. The only thing stopping it from being sample #4 is that I had to do the production myself, because the new interface doesn't yet execute the (rather outdated) production module. But that's ALL I did. I just dragged the MIDI into FL Studio and loaded some typical plugins for the instruments.

The composition sounded much like a typical alternative rock song one would hear on the radio today. Of course it would only be the background for such a song. Nevertheless, it was a very good sound. The piece opened with the background chords and the melody, then launched into a verse with the melody dropping out and the drums coming in. It then went through a chorus, which involved all three of the loaded plugins, and repeated the cycle again. Very repetitive, as EvoSpeak, which was used for the melody, still isn't dynamic. Regardless, it was one of those "wow" moments that I haven't had in a while.

The experience wasn't quite as earth-shaking as the sample #3 incident, but it certainly gives me some more energy to push forward.

Although I've been saying it for two months, I think we're pretty darn close to seeing a sample #4.

Thus Begins the Sprint

Having finished my final day of work today, I'm feeling good. I'm ready to do some very serious work on mGen. I'm throwing code-counting to the wind and I'm going to focus on real, audible results. I've got thirteen days to finalize my work and get ready to show that I've done something useful this summer. There's no doubt that I've done heaps of research and insane amounts of work. Nonetheless, I have to make sure that everything, including music sample quality, reflects that.

Inspiring Glitch

Several updates tonight. First of all, I got the new structure module, Manual, working. It'll be the structural equivalent of ProgLib - a simple library of structure choices in formats like ABABCBA, etc. I'm also experimenting with a new way of grouping instruments for issuing part instructions. As simple as the plugin is right now, I can't tell if it's actually doing anything; it more or less provides uniform movements with nothing going on, which is fine with me while I test the other modules. I guess that's all I really need for a while - a solid foundation.
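
For my own notes, the whole idea of a structure library boils down to mapping section labels onto parts of a song and picking one string from a list. A toy sketch in C++ (hypothetical names; the real module is AHK):

    #include <iostream>
    #include <map>
    #include <string>

    int main()
    {
        // Map each section label to a named part of the song.
        std::map<char, std::string> sections;
        sections['A'] = "verse";
        sections['B'] = "chorus";
        sections['C'] = "bridge";

        // One entry picked from the structure library.
        std::string structure = "ABABCBA";
        for (int i = 0; i < (int)structure.size(); ++i)
            std::cout << structure[i] << " -> " << sections[structure[i]] << "\n";
        return 0;
    }

The "library" itself is then just a list of such strings to choose from.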

I uncovered several problems in ProgLib that were keeping it from randomizing the progression properly. I also added a new library called "Typical," which will include the most basic progressions found in practically everything.

Output started sounding decent again but wasn't anything special. EvoSpeak's looping is getting kind of annoying; I need to make it dynamic as soon as possible. On the upside, I still have almost no complaints about GGrewve. I often forget that I've actually had great success with at least one plugin. GGrewve does its job so well that I forget it even exists; I just assume the drum tracks happen magically.

Towards the end of the night, a glitch in ProgLib actually contributed to a very nice assisted composition. I've started documenting interesting glitches now, because the products can be quite awesome. In this glitch, ProgLib was failing to read one of the new progressions in the "Typical" library properly. It had the chords right and it had the durations right, but the fact that one duration was twice as long as the others caused something to go wrong internally (probably a duration counter), and ProgLib ended up writing a pleasing 5/4 progression that cycled. In other words, ProgLib would add an extra measure each cycle by doubling one of the single-measure chords. Very strange indeed, but it resulted in an inspiring composition.
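
Since I'm documenting glitches now, here's my best guess at the failure mode, reduced to a hypothetical C++ toy (I haven't confirmed this is the actual bug): a counter that assumes every chord lasts one measure will drift by an extra measure each cycle when one chord actually lasts two.

    #include <iostream>
    #include <vector>

    int main()
    {
        // One chord lasts two measures; the other three last one each.
        std::vector<int> durations;
        durations.push_back(1);
        durations.push_back(1);
        durations.push_back(2);
        durations.push_back(1);

        int written = 0;
        for (int cycle = 1; cycle <= 3; ++cycle) {
            for (int i = 0; i < (int)durations.size(); ++i)
                written += durations[i];                  // actually writes 5 measures per cycle
            int assumed = cycle * (int)durations.size();  // buggy counter assumes 4 per cycle
            std::cout << "cycle " << cycle << ": wrote " << written
                      << " measures, counter thinks " << assumed << "\n";
        }
        return 0;
    }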

There's nothing better than joy in things going wrong.

Code Counting is Lame

While my code-counting program inspired me to push out lots of work early in the process, it's now a little annoying. I simply can't compete with my previous ability to write loads of lines quickly, because things are now much more complicated. I'm going to miss the other two remaining code deadlines, and there's really nothing I can do about it unless I churn out a thousand lines a day - and if I did that, the code wouldn't be "real" code. I've gotten lots of lines out of A.I. and ANNs, which is probably why I experimented with them in the first place.

I shouldn't be driven by a code count. I should be driven by music samples. I hereby decree all future code deadlines null and void until further notice.

Remember the Focus

I've got to find a way out of this mGen rut. It's nice that I'm getting a lot of experience with new concepts like artificial intelligence and artificial neural networks. Naturally, it's great experience to have, especially if I go down the computer science career path. Unfortunately, both ideas are currently too far abstracted from my work in music. We have yet to see anything useful come from aoAIm other than toy-problem solutions (because I still haven't got the stomach to tackle the heuristics problem). Neural nets look fantastic and make very pretty pictures, but I have my doubts about the musical potential of such output. Furthermore, it's great that I'm learning C++ again, as it's the standard as far as robust programming languages go. But I've already written over 23,000 lines in AHK, and I obviously can't switch midway.

I need to focus on the immediate. I need to get Sample #4 out. I need to get a decent structure module done, even if it's just a manual one. I need to work with EvoSpeak to finally create a melody that evolves over the entire piece instead of remaining static, which should break the static feel of the current compositions. Speaking of static feel, I've got to get some progression changes happening in ProgLib or, even better, ChillZone (if I can figure out how to recode a good algorithm).

I'm afraid I might be drifting towards the thing that I have been scorning since the beginning. I looked at the theses, the scholarly research, the fancy jargon, and I said to myself: "the reason they get nothing done is because they are too concerned with the complexities and unwilling to see the simplicity of music." I've gotten lost and tangled in A.I., ANNs, frameworks, data structures, programming languages, GUIs, algorithm types, grammars, and the like. I've forgotten what's at the heart of my project: music. If I don't have music to show at the final presentation, I won't have anything. My research is worth nothing if I can't prove that it works.

I've got to make it work. It doesn't have to be pretty. It doesn't have to have a sleek GUI. It doesn't have to have a fancy algorithm or neurons or aspirations. It just has to sound good. That's really all there is to it.

Heading Towards C++

My investigations into neural networks have led me back to the conclusion that I can't hide from C++ forever. As well as AutoHotkey has served me over the past few years, I fear the power limits may finally be upon me. The fact of the matter is, C++ boasts a raw speed and power with which AHK can't compete. AHK wins for ease of use any day... but in algorithmic programming, the speed of C++ wins. As such, I'm bringing out the old C++ compilers again and learning my way around. I used to be pretty fluent in the language, but I've gotten rusty and grown too content with the ease of AHK programming.

I'm reworking the neural network engine in C++, looking to gain roughly a 100x performance increase (based on some numbers I've seen around the AHK forums). If there is justice in the world, that could mean 100x better function approximations, 100x larger neural nets, or even 100x better music. Who knows.
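
The hot spot is nothing exotic, either - it's the multiply-accumulate loop at the heart of every layer, exactly the kind of tight numeric loop an interpreted language grinds through slowly. A minimal sketch of that feedforward loop in C++ (illustrative only, not my actual engine code):

    #include <cmath>
    #include <cstdio>
    #include <vector>

    double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

    // One fully connected layer: weights[j][i] connects input i to neuron j.
    // The inner multiply-accumulate loop is where compiled code pays off.
    std::vector<double> forward(const std::vector<double>& in,
                                const std::vector< std::vector<double> >& weights,
                                const std::vector<double>& biases)
    {
        std::vector<double> out(weights.size());
        for (int j = 0; j < (int)weights.size(); ++j) {
            double sum = biases[j];
            for (int i = 0; i < (int)in.size(); ++i)
                sum += weights[j][i] * in[i];
            out[j] = sigmoid(sum);
        }
        return out;
    }

    int main()
    {
        std::vector<double> in(2);  in[0] = 0.5;  in[1] = -0.25;
        std::vector< std::vector<double> > w(2, std::vector<double>(2, 0.5));
        std::vector<double> b(2, 0.1);
        std::vector<double> out = forward(in, w, b);
        std::printf("%f %f\n", out[0], out[1]);
        return 0;
    }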

If C++ starts working for me again, I have a good compromise in mind: design the GUIs in AHK, because they generally don't have to do much complicated work - they just have to look friendly. Then do the actual plugin processing in C++ to get the speed advantage. That part doesn't have to look good.
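
The handshake between the two halves could stay dirt simple: the AHK GUI writes a request file, launches the C++ plugin, and reads back a result file when the process exits. Roughly this on the C++ side (the file names are made up for illustration):

    #include <fstream>
    #include <string>

    int main(int argc, char* argv[])
    {
        // The AHK GUI passes the work order as a file path argument.
        std::ifstream request(argc > 1 ? argv[1] : "request.txt");

        std::string line, result;
        while (std::getline(request, line)) {
            // ...heavy algorithmic work happens here, line by line...
            result += line + '\n';
        }

        // The GUI reads this file once the process exits.
        std::ofstream out("result.txt");
        out << result;
        return 0;
    }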

This should also solve another thing I've been worried about: protecting the code from theft. If I really do release mGen, it'd be reverse-engineered, cracked, and modified to the world's content by even the least adept hacker. AHK code is way too insecure because it's an interpreted language, which means the whole script is sitting right there in memory, waiting to be stolen. In C++, however, everything is compiled down to machine code, which would take way, way longer to reverse engineer. It could still be done. BUT, with C++, I could purchase obfuscation software, which would be the final step in preventing reverse engineering. It wouldn't keep out professionals that really want to crack my program, but obfuscated C++ would be about 1000x harder to reverse engineer than an AHK script (no obfuscation tools even exist for AHK). So that's something that's been sitting in the back of my mind.

Not to mention, it's nice to have an OOP language back. I've missed classes, and my mind has matured to the point that I can now understand all the basic features of C++, including classes, pointers and references, inheritance, etc. Hopefully my new neural net engine will prove it.
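
As a quick refresher for myself, here are those features in one tiny, totally artificial example - not actual mGen code:

    #include <iostream>

    // Base class: every plugin-like module exposes the same interface.
    class Module {
    public:
        virtual ~Module() {}
        virtual void run() = 0;   // pure virtual: derived classes must define it
    };

    // Inheritance: a DrumModule is-a Module and overrides run().
    class DrumModule : public Module {
    public:
        void run() { std::cout << "generating drums\n"; }
    };

    int main()
    {
        DrumModule drums;
        Module& m = drums;   // base-class reference to a derived object
        m.run();             // virtual dispatch calls DrumModule::run
        return 0;
    }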

Artificial Neural Networks (ANNs)

I've recently begun working with computational structures called artificial neural networks (ANNs). I have read many articles concerning them and am now working through several books covering the basics of ANNs. I'm generally not much for such micro-scale computation (ANNs comprise tiny, tiny computational blocks that must be assembled by the hundreds to perform real-world functions), but I figured I need to have experience working with each and every technique of algorithmic composition. My background in systems that learn has been relatively weak until now.
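
For the record, each of those tiny blocks is almost embarrassingly simple on its own: a weighted sum of inputs pushed through an activation function. The whole thing in C++ (an illustrative sketch, not anything from my experiments):

    #include <cmath>
    #include <cstdio>

    // One neuron: weighted sum of inputs plus a bias, squashed by a
    // sigmoid activation. Hundreds of these wired together form a net.
    double neuron(const double* in, const double* w, int n, double bias)
    {
        double sum = bias;
        for (int i = 0; i < n; ++i)
            sum += in[i] * w[i];
        return 1.0 / (1.0 + std::exp(-sum));
    }

    int main()
    {
        double in[] = {0.5, -1.0};
        double w[]  = {0.8,  0.2};
        std::printf("output: %f\n", neuron(in, w, 2, 0.1));
        return 0;
    }

The interesting behavior only emerges once many of these are assembled and the weights are trained.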

Surprisingly, I managed to achieve some very interesting graphical results with some rudimentary neural nets. Unfortunately, as is the case with fractals, interesting imagery does not necessarily translate into interesting music. I'll try not to get pulled into the trap of sacrificing musical quality for interesting computational structures.

Stuck.

I'm not really sure what to do. My creative motivation is way down and I'm not even sure why. But what's worse, I'm stuck. Here's my dilemma.

1. aoAIm has gotten to the point that it's huge and gives me a headache just thinking about programming more A.I. And yet there's still a LOT to do if I want it to be able to make music. I've got so many heuristic things to do it's crazy. And programming "intelligent" heuristics is not fun.

2. Within the heuristics, I really don't know how to approach a certain problem concerning the behavior of multivariable functions. So I'm at a standstill with that part of the coding, which means I'm at a standstill with aoAIm.

3. I'd like to work on EvoSpeak or some other generative plugin, but I'm tired of the bad structures, so I want to work on a structure module.

4. I can't make a decent structure module without A.I., which is why I started aoAIm in the first place - and I've already established that I'm at a standstill with that.

5. I'd program a manual structure module but I can't figure out what to do with the darn parts system, which is STILL giving me bad dreams (so to speak).

6. All of this is happening while the new interface isn't even fully built. And I got sidetracked with A.I., which means the new interface code is no longer cached in my brain's RAM (again, so to speak). Which means another hour of just looking through the code figuring out what everything does again.

So the question is: where do I start? I'll try to brainstorm some ideas:

- Implement some kind of forcing mechanism to ensure that there's a certain number of parts so that the structure module can actually know exactly what the instrumentation will be ahead of time

- Change up the entire functionality of the structure/generative modules: instead of letting the user choose the generative modules, allow the structure module to choose. Could give the structure a "pool" of plugins to draw from. It would be kind of similar to what I have now, but the structure module wouldn't be "forced" to fit in all modules (see the sketch below).
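
Here's roughly what I mean by the pool idea, as a hypothetical C++ toy (the real modules are AHK, and the selection logic would obviously be smarter than random):

    #include <cstdlib>
    #include <ctime>
    #include <iostream>
    #include <string>
    #include <vector>

    int main()
    {
        // The pool handed to the structure module; it can use some
        // plugins and simply ignore the rest.
        std::vector<std::string> pool;
        pool.push_back("EvoSpeak");
        pool.push_back("GGrewve");
        pool.push_back("ProgLib");

        std::srand((unsigned)std::time(0));

        // The structure module fills two parts for this section.
        for (int part = 0; part < 2; ++part)
            std::cout << "part " << part << ": "
                      << pool[std::rand() % pool.size()] << "\n";
        return 0;
    }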

Also, I'm thinking about unifying the architecture a little bit. If the A.I. pulls through, I think it'll be beneficial to bring things together so that the structure module (and the A.I.) will have more control. I'm still thinking about how to do this. It could mean a massive change in the architecture. Which would be nasty.

Close

Today I was supposed to have hit 23,000 lines of code. Of course, I knew this was going to be difficult. The rate of coding required to hit it would have been almost double that of my last goal (almost 400 lines per day!). But I got awfully close, at over 21,800 lines. Not bad at all.

But what does that mean in terms of sound? Unfortunately, still less than I would like it to mean. But I'm working on internal things that should help me get the external side cranking faster next time. If this A.I. pulls through, there's going to be a lot of stress floating off. There's a lot of code resting on aoAIm (over 2,000 lines). Having an intelligent composition engine would/will be amazing. I really hope it works.

aoAIm

Over the past week I've developed an artificial intelligence engine dubbed aoAIm: aspiration-oriented Artificial Intelligence model.

The goal of aoAIm is to create an A.I. engine capable of understanding any system into which it is placed and figuring out how to "solve" the system by achieving its aspirations. I'm very impressed already with what the A.I. is capable of. It's no doubt a lot more intelligent than any of the algorithmic systems I've made in the past.

So why artificial intelligence?  What does it have to do with music?

Here's the rationale: if I can create a system that is legitimately intelligent, rather than one that simply comprises complex algorithms, then I will be able to create a system capable of creating great music without having to code hundreds of lines' worth of rules and algorithms. I could simply set the A.I. in the "system" of music and tell it to solve the system for a good piece of music.

Given the complexity of the engine I've made so far (which is over two thousand lines already!), I believe that such a method of composition would strike the ultimate balance between originality and coherence, which is one of my fundamental goals. The system would be intelligent enough to create coherent pieces, but that intelligence wouldn't be coming directly from me! It would be coming from the generalized framework of intelligence that I created, but the specific application of that intelligence to music would be all original to the A.I. Thus, originality would be expected as well. It's like teaching a child how to read sheet music and ending up with a piano prodigy. I believe that would be quite rewarding.
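
If I had to compress the whole idea into a screenful of code, it would look something like this: an agent that knows nothing about the system except its state and a score for how close that state is to its aspiration. A completely hypothetical C++ miniature, nothing like the real two-thousand-line engine:

    #include <iostream>
    #include <vector>

    // The agent knows nothing about the system except its state and a
    // score for how close that state is to its aspiration.
    struct Agent {
        std::vector<double> state;
        double (*aspiration)(const std::vector<double>&);

        // Greedy step: nudge each variable up and down, keep the best move.
        void step(double delta) {
            double best = aspiration(state);
            std::vector<double> bestState = state;
            for (int i = 0; i < (int)state.size(); ++i) {
                for (int sign = -1; sign <= 1; sign += 2) {
                    std::vector<double> trial = state;
                    trial[i] += sign * delta;
                    if (aspiration(trial) > best) {
                        best = aspiration(trial);
                        bestState = trial;
                    }
                }
            }
            state = bestState;
        }
    };

    // A toy "system": the aspiration peaks when the variable reaches 3.0.
    double wantThree(const std::vector<double>& s) {
        double d = s[0] - 3.0;
        return -d * d;
    }

    int main() {
        Agent a;
        a.state.push_back(0.0);
        a.aspiration = wantThree;
        for (int i = 0; i < 100; ++i)
            a.step(0.1);
        std::cout << "solved state: " << a.state[0] << "\n";  // approaches 3.0
        return 0;
    }

Swap the one-variable toy system for the parameters of a composition and a score for "good music," and that's the dream, anyway.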