I'm halfway there with the new AI system. But that doesn't mean it will work. I set up the basic outline and functions that drive the AI and it's all looking pretty good. The only problem is that it's insanely complex. Then again, so is human behavior. The front end, however, is very easy to use. Writing the library will be the hard part. Implementation will be a piece of cake.
My test today was pretty simple. I had an AI by the name of "Josh" set up and initialized in a very basic "Reality." I gave him some basic actions. He had the ability to do the following things: "get out of the chair," "walk from the Pilot Plant Building to the Main Building," "walk from the Pilot Plant Building to the covered walkway," "walk from the covered walkway to the Main Building," and "climb the stairs in the Main Building." All the actions were set up accordingly so that they could only be performed in their respective locations and caused the consequence of changing location (or seated status, in the case of the first action).
I then gave "Josh" an aspiration: "get to the upstairs part of the Main Building." I initialized the Reality with Josh being seated in his chair in the Pilot Plant Building (the very location from where he was being programmed, oddly enough). To successfully achieve his aspiration, Josh would have to figure out that he needed to get out of his chair (because you can't walk around while seated), walk from the PPB to either the Main Building or the covered walkway (depending on the weather), walk from the covered walkway to the Main Building (if the weather is inclement), then climb the stairs to finally reach the upstairs. Of course Josh was not provided with any information indicating that this sequence would have the desired effect. It was the job of the AI to examine the Reality and actions available to it and determine the best course of action to achieve the aspirations.
Without flaw, Josh managed to perform the aforementioned sequence of actions. When I set the weather to sunny, Josh cut across the asphalt accordingly, and when I made it rain, he used the covered walkway to get to the Main Building. All of the implementation was accomplished in only about 50 lines of simple code.
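The test above amounts to a search over actions, each with a precondition and a consequence, until a sequence reaches the aspiration. The entry doesn't say what language or structures mGen actually uses, so this is only a minimal sketch of that kind of planner in Python; `Action`, `make_actions`, and `plan` are invented names, and the weather-dependent shortcut is modeled by simply omitting the direct route when it rains.

```python
from collections import deque
from typing import NamedTuple, Callable

class Action(NamedTuple):
    name: str
    possible: Callable  # state -> bool: is the precondition met?
    apply: Callable     # state -> state: the consequence of acting

# A state is (location, seated); the weather is fixed for a given run.
def make_actions(weather):
    acts = [
        Action("get out of the chair",
               lambda s: s[1],
               lambda s: (s[0], False)),
        Action("walk PPB -> covered walkway",
               lambda s: s[0] == "PPB" and not s[1],
               lambda s: ("walkway", False)),
        Action("walk covered walkway -> Main Building",
               lambda s: s[0] == "walkway" and not s[1],
               lambda s: ("Main", False)),
        Action("climb the stairs",
               lambda s: s[0] == "Main" and not s[1],
               lambda s: ("upstairs", False)),
    ]
    if weather == "sunny":  # cutting across the asphalt only works when dry
        acts.append(Action("walk PPB -> Main Building",
                           lambda s: s[0] == "PPB" and not s[1],
                           lambda s: ("Main", False)))
    return acts

def plan(start, goal, actions):
    """Breadth-first search for a shortest action sequence reaching the goal."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state[0] == goal:
            return path
        for a in actions:
            if a.possible(state):
                nxt = a.apply(state)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [a.name]))
    return None  # aspiration unreachable
```

Running `plan(("PPB", True), "upstairs", make_actions("sunny"))` yields the three-step direct route, while passing `"rain"` instead forces the four-step detour through the covered walkway, mirroring the behavior described above.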
Although it may seem trivial, this is definitely a success. The AI "intelligently" navigated its environment to achieve something it wanted. It's not fully AI yet because it's not smart enough to know how to achieve something if the achievement is too far away, and it doesn't know what to do if it gets stuck. But all of that will be filled in soon enough and I'll be able to test a more complex system (maybe even one that I can't predict).
I wonder if Josh will be able to make good music while he walks to the Main Building?
I can see the future of AI. I have the system written down. I have numerous diagrams. I have the networks, the nodes, the trees, the parsing structures. But I'm not sure if I can make it work.
It would be ambitious, to say the least. This new AI that I have dreamt would constitute intelligence, no doubt. It would create music because it wants to, not because that's all it can do. Programming it would be as simple as this:
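The listing this sentence points at is missing from this copy of the entry. As a stand-in, here is a hypothetical sketch, in Python, of what programming at that level of simplicity might look like; `Reality`, `AI`, and `add_aspiration` are invented names, not mGen's actual API.

```python
# Hypothetical reconstruction of the missing listing, going only by the
# surrounding description. Every name here is invented.
class Reality:
    """A world the AI can observe and act within."""
    def __init__(self):
        self.agents = []

class AI:
    """An agent driven entirely by the aspirations it is given."""
    def __init__(self, name, reality):
        self.name = name
        self.aspirations = []
        reality.agents.append(self)

    def add_aspiration(self, text):
        self.aspirations.append(text)

# One agent, one aspiration -- everything else would follow from the drive.
reality = Reality()
composer = AI("Composer", reality)
composer.add_aspiration("make the most pleasing composition possible")
```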
And there you have it, an AI system that has one and only one aspiration: to make the most pleasing composition possible. How would it achieve this aspiration? In the very same way that humans do: by breaking it into pieces, by envisioning the future, and by taking action to move towards the future. Just as I am envisioning this AI system right now, my AI system will envision the ultimate composition.
It's all very lofty and probably sounds completely impossible to most. But it can be done. I'm not sure I have the programming ability to complete the system that I have drawn, but I'm going to try. And it could be the future of AI, right here in my hands.
After finishing some more serious work on the new interface today, I took a moment to appreciate the distance that the program has come over such a short period of time. In my notes I have a screenshot of the main interface from April. At the time, it was cutting-edge. The new treeview made organizing the composition a lot easier, and the cute little banner up top gave the program some personality.
And now I look at what I've designed over the past week. It's come a long way. It's come from a shoddy, single-button proof-of-concept, to a complicated but functional panel, to a more organized treeview skeleton, and finally, to a user-friendly and ultra-sleek tile palette. Visual progress is the easiest to see, which is why it makes me happy to know how much mGen has evolved over time, even if the inner workings aren't leaps and bounds ahead of those that were in place months ago.
If nothing else, I have succeeded in creating a very unique and functional platform for algorithmic composition. That, in itself, is a success. Whether or not I will be able to furnish this platform with the functional modules it will require to make amazing music, I do not know. But I know that I've done something. I've contributed. And that's what I set out to do.
On a side note, I had a strange moment today when the new interface gave its first output (yes, the new renderer is already working...it's true, I've come a long way as a programmer - I did indeed make it in about 1/3 the amount of code as the last renderer). It was a simple test with only Easy Contoured Structure, ProgLib, GGrewve, and ChillZone pianist loaded. Everything seemed to be going fine. But then ChillZone freaked out. It started doing crazy inversions and removing and adding notes somewhat randomly. Weirdest of all, though, was that these notes were all perfectly acceptable. It was still playing in the key...but it was doing something I've never seen before. Exceptionally odd. The code hasn't been modified in over a month, as indicated by the timestamp! So what on earth is causing this "ghost in the machine"? It's a pretty cool bug, I admit (if a little creepy). But I'll get to the bottom of it.
I'm continuing work on the new interface and progress is being made very rapidly. I've almost finished the frontend. All I have left to do is apply some cosmetics to the render button and then add some menus for render settings. For the most part, however, the important work is finished. Module loading and changing works perfectly.
I'll probably tackle the data handling soon. Maybe tomorrow if I work fast. I've been working very fast lately though, and I'll have to keep up this pace if I hope to make my next deadline - 23,000 lines by the 15th. Looking at this now I know it's not possible. That's simply too steep an increase from the last deadline (17,000 on the 1st): another 6,000 lines in fifteen days works out to 400 lines per day, every day. Regardless, I can try to get as close as possible to this mark.
The emerging interface is both proof that hard work can accomplish anything and hope for those like me who thought interfaces couldn't be designed without artistic talent. With enough fooling around in Photoshop and the like, I managed to produce an interface that, to my standards, is extremely slick.
I hand-coded all the controls, so you'll find no standard buttons, listboxes, etc. in the new interface. Everything is custom. And the work paid off. It looks great.
Not only does the interface look great, it also works great. It will be a very powerful and streamlined interface when I finish it. I've already finished the plugin loading functions and the structure & progression tiles are already fully-functioning. Unlike in the last GUI, where the user had to double-click the correct treeview entry, select a text-only plugin name from an ugly listbox, then click an OK button, the new interface demands only two clicks to load a plugin: one on the tile into which the plugin will be loaded, and the second to choose the plugin. Furthermore, plugins are displayed as tiles instead of text, so finding the right one among many is as easy as identifying the correct icon. It's way faster (not to mention way more fun) to load and configure plugins now.
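The two-click flow described above boils down to a tiny bit of state: the first click remembers which tile is the target, and the second click loads the chosen plugin into it. This is only a hypothetical sketch of that logic in Python; `TilePalette` and its methods are invented names, and mGen's real GUI code is not shown in the entry.

```python
# Hypothetical sketch of the two-click plugin-loading flow; names invented.
class TilePalette:
    def __init__(self, plugins):
        self.plugins = plugins    # available plugins, keyed by name
        self.slots = {}           # slot tile -> name of loaded plugin
        self.pending_slot = None  # slot awaiting a plugin choice

    def click_slot(self, slot):
        """First click: remember which tile the plugin will be loaded into."""
        self.pending_slot = slot

    def click_plugin(self, name):
        """Second click: load the chosen plugin into the pending tile."""
        if self.pending_slot is not None and name in self.plugins:
            self.slots[self.pending_slot] = name
            self.pending_slot = None
```

Compared to the old double-click/listbox/OK sequence, all of the selection state lives in one field, which is what makes the interaction feel so much faster.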
As soon as I finish the frontend, which will probably be a few more days, I'll have to start grinding out the backend. I feel like the last interface's rendering functions were bloated, to say the least. I hope to be able to do it all over again in less than half the code. I think that's pretty reasonable given how much I've improved as a programmer since the time the original render code was written (seventeen thousand lines ago).
Things are definitely looking up for mGen's aesthetics...but the real question is: will these new aesthetics inspire more creative advances? Or will mGen just be a good-looking failure? I guess I'll have to keep pressing onward to find out.