
TNB Frames, Reflections, and Rogue AI

August 28, 2012

I've always wondered why people fatten up their vertex structures with an extra vector (tangent). It just seems unnecessary to me. I figured I was missing something. I guess I still am. Why not just generate a local tangent and bitangent from your normal? You can even make it continuous...and there you go, you have a TNB frame for normal mapping. I guess the issue is that the tangent and bitangent won't align with the direction of increasing texture coordinates. Which I suppose would be a problem if you really wanted everything to line up perfectly with your albedo. But hey, I just want some modulation in the normal! I don't need all this fancy math. So here's what I do:

A low-cost, continuous TNB frame in your vertex shader. Works for me...
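In sketch form, here's the idea in Python rather than shader code (this is the well-known branchless orthonormal-basis construction, continuous everywhere except near n = (0, 0, -1); treat it as an illustration, not my shader verbatim):

```python
def tangent_frame(n):
    """Build a tangent and bitangent from a unit normal alone.

    Branchless ONB construction: no per-vertex tangent needed, and the
    frame varies continuously with n (except near n = (0, 0, -1)).
    """
    sign = 1.0 if n[2] >= 0.0 else -1.0
    a = -1.0 / (sign + n[2])
    b = n[0] * n[1] * a
    tangent = (1.0 + sign * n[0] * n[0] * a, sign * b, -sign * n[0])
    bitangent = (b, sign + n[1] * n[1] * a, -n[1])
    return tangent, bitangent
```

The trade-off mentioned above still applies: the resulting tangent won't follow the direction of increasing texture coordinates, so the normal map's detail orientation is arbitrary (but consistent), which is fine for plain surface modulation.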

I spent several hours last night fighting my worst mathematical enemy: world-space position reconstruction. I'm not sure why I have such a hard time with it, because it's really not that hard...but every time I try to get world-space position using linear depth inside a shader, I inevitably screw up for hours before getting it right. After finally getting it, though, I'm in good shape to implement environment maps. For now, I'm just hacking with screen-space reflections, but even those (as incorrect as they are) add a lot to the graphics. Shiny things are definitely a must in space games!! 🙂
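For my own future reference, the core of it sketched in Python: reconstruct the view-space position by scaling the per-pixel view ray by linear depth, then take that through the inverse view matrix for world space (names and conventions here are illustrative, not my shader verbatim):

```python
import math

def view_ray(u, v, fov_y, aspect):
    # u, v in [0, 1] across the screen; view space looks down -z,
    # so the ray is normalized to z = -1
    t = math.tan(fov_y * 0.5)
    return ((2.0 * u - 1.0) * aspect * t,
            (2.0 * v - 1.0) * t,
            -1.0)

def reconstruct_view_pos(u, v, linear_depth, fov_y, aspect):
    # linear_depth is the view-space distance along -z (i.e. |z|),
    # NOT the raw nonlinear depth-buffer value
    rx, ry, rz = view_ray(u, v, fov_y, aspect)
    return (rx * linear_depth, ry * linear_depth, rz * linear_depth)
```

The part that always bites me is the depth convention: feed this a hardware depth-buffer value without linearizing it first and everything smears toward the near plane.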

The above image has an interesting story to accompany it. I was fooling around with AI again, and implemented a retaliatory AI strategy that runs in tandem with the movement strategy and detects when the AI's ship comes under attack, switching the necessary strategies to pursue and attack the assailant. So I flew up to a passive AI that was on his way to make a trade run, fired a missile, and started to run as he began to pursue me. Unfortunately for him, several other traders were nearby, and he wasn't a very good shot. The idiot ended up blasting several of his nearby AI friends with missiles in his attempts to hit me. And then came one of those lovely "emergent AI" moments. The friends immediately turned on the rogue gunman and started chasing and attacking him! Of course they did: there was nothing special in the code that specified that it had to be the player attacking; the strategy just told the AI to retaliate against any attacks. And so they did! But I hadn't even thought of this possibility, so it was a very cool thing to see. I watched from afar as an all-out space brawl erupted, with missiles streaking all over the place. Cool! It's nice to see that they've got my back.
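The mechanism really is that simple; a stripped-down sketch (all names illustrative, not the engine's actual classes):

```python
class Ship:
    def __init__(self, name):
        self.name = name
        self.strategy = "trade"     # movement strategy runs in tandem
        self.target = None

    def fire_at(self, target, splash=()):
        # A bad shot can hit bystanders; every victim gets the event.
        for victim in (target, *splash):
            victim.on_attacked(self)

    def on_attacked(self, attacker):
        # The retaliation strategy doesn't care who the attacker is --
        # player or AI, the response is identical. That's where the
        # emergent brawl came from: nothing special-cases the player.
        self.strategy = "pursue_and_attack"
        self.target = attacker

player = Ship("player")
rogue = Ship("rogue trader")
friends = [Ship(f"trader {i}") for i in range(3)]

rogue.on_attacked(player)               # I fire the first missile
rogue.fire_at(player, splash=friends)   # he misses me, hits his friends
```

After that last line, every splashed trader has the rogue as its target, exactly like the brawl I watched unfold.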

I'm a sucker for pretty asteroid fields, so here's another one, just for the records...

And finally, a shot of the real-time profiler that I made today to help me rein in my frame time. Always a pleasure trying to do graphics programming with an Intel HD 3000...
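The profiler itself is nothing fancy: conceptually it's just scoped timers feeding per-section accumulators that get displayed and reset each frame. A Python sketch of the shape (not the engine code):

```python
import time
from collections import defaultdict
from contextlib import contextmanager

class FrameProfiler:
    def __init__(self):
        self.times = defaultdict(float)   # section name -> ms this frame

    @contextmanager
    def section(self, name):
        start = time.perf_counter()
        try:
            yield
        finally:
            self.times[name] += (time.perf_counter() - start) * 1000.0

    def reset(self):
        self.times.clear()   # once per frame, after drawing the overlay

profiler = FrameProfiler()
with profiler.section("particles"):
    sum(range(100000))   # stand-in for the particle update
```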

Looks like my GPU particle system is really a problem. Particle allocation is just killing my performance! I'm not sure how to further optimize those TexSubImage2D calls; as far as I can tell, there's no other way to do it, and they have to be done every frame, which forces a premature CPU/GPU sync.


I'm not really sure what to do. My creative motivation is way down and I'm not even sure why. But what's worse, I'm stuck. Here's my dilemma.

1. aoAIm has gotten to the point that it's huge and gives me a headache just thinking about programming more A.I. And yet there's still a LOT to do if I want it to be able to make music. I've got so many heuristic things to do it's crazy. And programming "intelligent" heuristics is not fun.

2. Within the heuristics, I really don't know how to approach a certain problem concerning the behavior of multivariable functions. So I'm at a standstill with that part of the coding, which means I'm at a standstill with aoAIm.

3. I'd like to work on EvoSpeak or some other generative plugin, but I'm tired of the bad structures, so I want to work on a structure module.

4. I can't make a decent structure module without A.I. which is why I started aoAIm in the first place, and I've already established that I'm at a standstill with that.

5. I'd program a manual structure module but I can't figure out what to do with the darn parts system, which is STILL giving me bad dreams (so to speak).

6. All of this is happening while the new interface isn't even fully built. And I got sidetracked with A.I., which means the new interface code is no longer cached in my brain's RAM (again, so to speak). Which means another hour of just looking through the code figuring out what everything does again.

So the question is: where do I start? I'll try to brainstorm some ideas:

- Implement some kind of forcing mechanism to ensure that there's a certain number of parts so that the structure module can actually know exactly what the instrumentation will be ahead of time

- Change up the entire functionality of the structure/generative modules: instead of letting the user choose the generative modules, allow the structure module to choose. Could give the structure a "pool" of plugins to draw from. It would kind of be similar to what I have now, but the structure module wouldn't be "forced" to fit in all modules.
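That second idea, as a design doodle (every name here is hypothetical, not working plugin code):

```python
import random

class StructureModule:
    """Chooses generative modules from a pool instead of being handed them."""

    def __init__(self, pool, seed=None):
        self.pool = list(pool)          # generative plugins available to draw from
        self.rng = random.Random(seed)

    def plan_section(self, n_parts):
        # Draw only as many modules as the section actually calls for;
        # the structure is never forced to fit in every module the user loaded.
        n = min(n_parts, len(self.pool))
        return self.rng.sample(self.pool, n)

structure = StructureModule(["EvoSpeak", "DrumEngine", "PadGen", "BassGen"], seed=1)
verse = structure.plan_section(n_parts=2)
```

The appeal is that the instrumentation decision moves into the structure module, which also neatly answers the "know the instrumentation ahead of time" idea above.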

Also, I'm thinking about unifying the architecture a little bit. If the A.I. pulls through, I think it'll be beneficial to bring things together a bit so that the structure module (and the A.I.) will have more control. Still thinking about how to do this. That could mean a massive change in the architecture. Which would be nasty.


Over the past week I've developed an artificial intelligence engine dubbed aoAIm: aspiration-oriented Artificial Intelligence model.

The goal of aoAIm is to create an A.I. engine capable of understanding any system into which it is placed and figuring out how to "solve" the system by achieving its aspirations.  I'm very impressed already with what the A.I. is capable of.  It's no doubt a lot more intelligent than any of the algorithmic systems I've made in the past.

So why artificial intelligence?  What does it have to do with music?

Here's the rationale: if I can create a system that is legitimately intelligent, rather than one that simply comprises complex algorithms, then I will be able to create a system capable of creating great music without having to code hundreds of lines worth of rules and algorithms.  I could simply set the A.I. in the "system" of music, and tell it to solve the system for a good piece of music.

Given the complexity of the engine I've made so far (which is over two thousand lines already!), I believe that such a method of composition would have the ultimate balance between originality and coherence, which is one of my fundamental goals.  The system would be intelligent enough to create coherent pieces, but that intelligence wouldn't be coming directly from me!  It would be coming from the generalized framework of intelligence that I created, but the specific application of the intelligence to music would be all original to the A.I.  Thus, originality would be expected as well.  It's like teaching a child how to read sheet music and ending up with a piano prodigy.  I believe that would be quite rewarding.

You're Alive

I'm halfway there with the new AI system. But that doesn't mean it will work. I set up the basic outline and functions that drive the AI and it's all looking pretty good. The only problem is that it's insanely complex. Then again, so is human behavior. The front end, however, is very easy to use. Writing the library will be the hard part. Implementation will be a piece of cake.

My test today was pretty simple. I had an AI by the name of "Josh" set up and initialized in a very basic "Reality." I gave him some basic actions. He had the ability to do the following things: "get out of the chair," "walk from the Pilot Plant Building to the Main Building," "walk from the Pilot Plant Building to the covered walkway," "walk from the covered walkway to the Main Building," and "climb the stairs in the Main Building." All the actions were set up accordingly so that they could only be performed in their respective locations and caused the consequence of changing location (or seated status in the case of the first action).

I then gave "Josh" an aspiration: "get to the upstairs part of the Main Building." I initialized the Reality with Josh being seated in his chair in the Pilot Plant Building (the very location from where he was being programmed, oddly enough). To successfully achieve his aspiration, Josh would have to figure out that he needed to get out of his chair (because you can't walk around while seated), walk from the PPB to either the Main Building or the covered walkway (depending on the weather), walk from the covered walkway to the Main Building (if the weather is inclement), then climb the stairs to finally reach the upstairs. Of course Josh was not provided with any information indicating that this sequence would have the desired effect. It was the job of the AI to examine the Reality and actions available to it and determine the best course of action to achieve the aspirations.

Without flaw, Josh managed to perform the aforementioned sequence of actions. When I set the weather to sunny, Josh cut across the asphalt accordingly, and when I made it rain, he used the covered walkway to get to the Main Building. All of the implementation was accomplished in only about 50 lines of simple code.
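For the curious, the flavor of the test (though not the engine itself, which handles this generically) can be reconstructed as a small breadth-first search over actions with preconditions and consequences. All names below are my paraphrase, not the real code:

```python
from collections import deque

def make_actions(weather):
    """Actions as (name, precondition, effect) over states (location, seated)."""
    acts = [("get out of the chair",
             lambda s: s[1],                      # must be seated
             lambda s: (s[0], False))]

    def walk(name, src, dst, needs_dry=False):
        acts.append((name,
                     lambda s, src=src, nd=needs_dry:
                         (not s[1]) and s[0] == src
                         and (weather == "sunny" or not nd),
                     lambda s, dst=dst: (dst, s[1])))

    walk("walk PPB -> Main Building", "PPB", "Main Building",
         needs_dry=True)                          # cuts across open asphalt
    walk("walk PPB -> covered walkway", "PPB", "walkway")
    walk("walk walkway -> Main Building", "walkway", "Main Building")
    acts.append(("climb the stairs",
                 lambda s: (not s[1]) and s[0] == "Main Building",
                 lambda s: ("Main Building upstairs", s[1])))
    return acts

def plan(start, goal_loc, weather):
    """Breadth-first search for the shortest action sequence to the goal."""
    actions = make_actions(weather)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if state[0] == goal_loc:
            return steps
        for name, pre, eff in actions:
            if pre(state):
                nxt = eff(state)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None
```

Run it with `plan(("PPB", True), "Main Building upstairs", "sunny")` versus `"rainy"` and you get exactly the two behaviors described: the direct route across the asphalt when it's sunny, the covered walkway when it rains.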

Although it may seem trivial, this is definitely a success. The AI "intelligently" navigated its environment to achieve something it wanted. It's not fully AI yet because it's not smart enough to know how to achieve something if the achievement is too far away, and it doesn't know what to do if it gets stuck. But all of that will be filled in soon enough and I'll be able to test a more complex system (maybe even one that I can't predict).

I wonder if Josh will be able to make good music while he walks to the Main Building?

The Future of AI

I can see the future of AI.  I have the system written down.  I have numerous diagrams.  I have the networks, the nodes, the trees, the parsing structures.  But I'm not sure if I can make it work.

It would be ambitious, to say the least.  This new AI that I have dreamt would constitute intelligence, no doubt.  It would create music because it wants to, not because that's all it can do.  Programming it would be as simple as this:

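Something to this effect; a stand-in sketch, with every name below invented for illustration:

```python
class AI:
    """Stand-in for the planned engine; only the interface is sketched."""

    def __init__(self):
        self.aspirations = []

    def add_aspiration(self, goal):
        # The engine, not the programmer, figures out how to achieve this.
        self.aspirations.append(goal)

ai = AI()
ai.add_aspiration("create the most pleasing composition possible")
```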

And there you have it, an AI system that has one and only one aspiration: to make the most pleasing composition possible.  How would it achieve this aspiration?  In the very same way that humans do: by breaking it into pieces, by envisioning the future, and by taking action to move towards the future.  Just as I am envisioning this AI system right now, my AI system will envision the ultimate composition.

It's all very lofty and probably sounds completely impossible to most.  But it can be done.  I'm not sure I have the programming ability to complete the system that I have drawn, but I'm going to try.  And it could be the future of AI, right here in my hands.