
Beam Weapons and Epic Battles

October 15, 2012 General News 5 Comments

Beam weapons are now fully implemented (the previous implementation was just a hacky prototype). A nice detail of my current weapon implementation is that turrets on ships actually rotate to track their targets. It's pretty cool seeing those massive turrets on capital ships swivel toward an enemy ship, then rip it apart with a huge beam of energy.

I'm doing some stress tests with loads of ships in a free-for-all just to see where the bottlenecks in the engine are located. Not surprisingly, almost all of the time is spent in the code for pulse and beam weapons, performing the scene raycasts necessary to make collisions work. I've already accelerated this a lot with a spatial hashing structure, which enables fast raymarching for short rays (pulses and beams both qualify). Then there's a world-space AABB test and, finally, OPCODE does its thing with its AABB trees.
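
Here's roughly what that broad phase looks like in C++. To be clear, the grid interface, cell lookup, and half-cell step size are illustrative stand-ins rather than the engine's actual code (a real implementation would use a 3D DDA to visit each cell exactly once):

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    struct SpatialHash {
        float cellSize;
        // Returns the objects hashed into cell (cx, cy, cz); assumed to exist.
        const std::vector<int>& ObjectsAt(int cx, int cy, int cz) const;
    };

    // March a short ray through the grid, gathering candidate objects for the
    // narrow phase (world-space AABB test, then OPCODE) to examine.
    void GatherCandidates(const SpatialHash& grid, Vec3 origin, Vec3 dir,
                          float length, std::vector<int>& candidates)
    {
        // Step half a cell at a time; a short ray only ever touches a few cells.
        // (This revisits some cells and can clip corners; a DDA fixes both.)
        float step = grid.cellSize * 0.5f;
        for (float t = 0.0f; t <= length; t += step) {
            Vec3 p = { origin.x + dir.x * t,
                       origin.y + dir.y * t,
                       origin.z + dir.z * t };
            const std::vector<int>& cell = grid.ObjectsAt(
                (int)std::floor(p.x / grid.cellSize),
                (int)std::floor(p.y / grid.cellSize),
                (int)std::floor(p.z / grid.cellSize));
            candidates.insert(candidates.end(), cell.begin(), cell.end());
        }
    }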

Apparently I've done a pretty good job of optimizing so far, because I can handle about 50 ships in an all-out, epic war before my fps drops below 60. Still, I'd like to push that number to at least 100. I'm not going to settle for limiting the number of ships that can be in an engagement in this game - if you've got the cash to hire a hundred wingmen, you should be able to use them!

Coming Together

October 10, 2012 General News 0 Comments

The past few weeks have been a flurry of small changes, incremental improvements, and endless tweaking to hone the look of the game. Everything's starting to come together! With a solid ship algorithm, a good metal shader, and some decent procedural textures, things are starting to look pretty nice.

I honestly never thought it would end up looking this good. I guess I got carried away...then again, that's why I'm in graphics. It's just far too fun and gratifying, trying to make everything look as good as possible. Still, at a certain point, I am going to seriously shift my focus back to gameplay. For now, though, I need to impress for Kickstarter!

Metal and Ambient Occlusion

October 6, 2012 General News 0 Comments

I've spent a LOT of time on the look of metal lately. Finally, I've managed to make ships look like they're made of metal!!! This is the first time I've ever had any success with metal BRDFs (even in my offline rendering experiments). I think the main key I was missing is that the highlight distribution really needs to be exponential rather than power-based to look convincing; otherwise, the surface looks like plastic. Another key is to modulate the specular amplitude in some interesting way (even just modulating it with the albedo looks great).
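
For the curious, here's a rough C++ sketch of the difference, assuming a conventional half-vector setup; the exact falloff function and the "sharpness" constant are my own illustrative choices, not the exact shader:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // n = surface normal, h = normalized half-vector between light and view.
    // A power-based (Phong-style) lobe would be pow(Dot(n, h), exponent), which
    // tends to read as plastic; an exponential falloff in the half-angle gives
    // the tight, hot core that reads as metal.
    float MetalSpec(Vec3 n, Vec3 h, float sharpness)
    {
        float cosA = Dot(n, h);
        if (cosA <= 0.0f) return 0.0f;
        return std::exp(-sharpness * std::acos(cosA));
    }

    // Modulate the specular amplitude with the surface albedo to tint it.
    Vec3 SpecularTerm(Vec3 albedo, float spec)
    {
        Vec3 s = { albedo.x * spec, albedo.y * spec, albedo.z * spec };
        return s;
    }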

On top of good metal shaders, I now have new-and-improved SSAO in the engine. Together, the metal and AO are making my ships look better than ever before! In addition, I think I'm finally getting closer to a good procedural ship algorithm. It's taken forever, and I'm not there yet...but I'm getting closer.

Here's a cool shot that I took tonight. It has a bit of post-processing on it for dramatic effect; I wanted to use it as a wallpaper. Still, you can get a nice sense of the metallic surface.

Clouds

October 2, 2012 General News 1 Comment

They turned out way better than I expected! 🙂

Terrestrial-like planets: solved?

Specularity

September 28, 2012 General News 0 Comments

Turns out it makes a pretty big difference...

🙂

Better Planets

September 28, 2012 General News 0 Comments

I really don't want to spend all my time on planets, because I see how often these kinds of projects get wrapped up in planetary generation and then never come back...still, I guess I need to have good planets since pretty much everybody does procedural planets these days.

They're getting better, although I've constrained them to looking "Earth-like" for now. Instead of picking random colors like before, I think I'm going to have to constrain the generator to a list of palettes that I create, since some of the earlier planets just looked plain ugly...purple land and green water don't really go well together.
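
As a sketch of what I mean (in C++, with completely made-up placeholder values), the generator would pick a curated palette per planet rather than rolling raw random colors:

    struct Color { float r, g, b; };
    struct Palette { Color land, water, atmosphere; };

    // Hand-authored palettes known to work together; the values here are
    // placeholders, not the real ones.
    static const Palette kPalettes[] = {
        { {0.35f, 0.42f, 0.22f}, {0.08f, 0.22f, 0.38f}, {0.55f, 0.70f, 0.90f} },
        { {0.48f, 0.33f, 0.20f}, {0.15f, 0.25f, 0.30f}, {0.80f, 0.60f, 0.45f} },
    };

    // One curated palette per planet, chosen from the planet's seed.
    const Palette& PickPalette(unsigned seed)
    {
        return kPalettes[seed % (sizeof(kPalettes) / sizeof(kPalettes[0]))];
    }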

The lighting model still needs a lot of work. In particular, the bumps are too exaggerated (and are actually based on luminance instead of height), and the water doesn't react to light any differently than the ground, which is a terrible thing. Nonetheless, it's a step forward!

Blurring a Cubemap

September 26, 2012 General News 3 Comments

I was surprised by the lack of online resources about blurring a cubemap. I'll give a brief explanation of how to do it in case someone else with the same needs happens to stumble upon this.

First, it is a very reasonable thing to want to blur a cubemap. The most obvious reason to do so, at least in my mind, is to simulate a reflection map for a glossy surface. The cheap alternative is to use the mipmaps, but those will probably be generated using a box filter, which looks pretty bad as an approximation of a glossy reflection. So you might want to generate a nice gaussian-blurred version of your environment map. But how?

Doing so is straightforward with a 2D texture - we all know the algorithm. But a cubemap is a very different beast. Or is it? Here's the thing to keep in mind when approaching this problem: stop thinking in terms of six textures. Instead, think of your cubemap as the unit sphere. Which it is, right? Or at least, that's how you index into it: with a direction. Now, if you have a ray on that sphere, how might you find the "blurred" equivalent of that ray? The obvious answer is by summing contributions of "nearby" rays. What does "nearby" mean? Well, that totally depends on what kind of kernel you want! To simulate a gaussian blur, we will sample points on a disk around our ray, with the radius of each sample drawn from a gaussian distribution.

Suppose that e1 is the normalized direction that corresponds to the current cubemap pixel, and that e2 and e3 are the orthogonal vectors that form the tangent space of e1 (see the tangent/bitangent post from a while back if you need code for that). Also suppose that "samples" is an int corresponding to how many samples you want to take (I suggest at least 1000 for high quality, though this will depend on your SD), and that "SD" is the standard deviation of the blur. Finally, suppose that you have a function Noise2D(x, y, s) that returns a random float in [0, 1] given two coordinates and a seed (for this application, the noise doesn't even need to be continuous). Then, here you have it:
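
(A minimal C++ sketch; Vec3 and SampleCube are stand-ins for your engine's vector type and cubemap fetch, and I use a Box-Muller transform to draw the gaussian radius - the noise arguments are arbitrary.)

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3 operator+(Vec3 a, Vec3 b) { Vec3 r = { a.x + b.x, a.y + b.y, a.z + b.z }; return r; }
    Vec3 operator*(float s, Vec3 v) { Vec3 r = { s * v.x, s * v.y, s * v.z }; return r; }
    Vec3 Normalize(Vec3 v)
    {
        float l = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return (1.0f / l) * v;
    }

    float Noise2D(float x, float y, int s);   // random float in [0, 1], as described
    Vec3  SampleCube(const Vec3& dir);        // fetch from the source cubemap

    // Blur one cubemap texel: e1 = texel direction, (e2, e3) = its tangent frame.
    Vec3 BlurredSample(Vec3 e1, Vec3 e2, Vec3 e3, int samples, float SD, int seed)
    {
        const float TWO_PI = 6.2831853f;
        Vec3 sum = { 0.0f, 0.0f, 0.0f };
        for (int i = 0; i < samples; ++i) {
            // Stratify the angle: jitter one sample within each angular slice.
            float angle = TWO_PI * (i + Noise2D(e1.x, e1.y, seed + 3 * i)) / samples;
            // Draw the radius from a gaussian via Box-Muller (not stratified).
            float u1 = std::max(Noise2D(e1.y, e1.z, seed + 3 * i + 1), 1e-6f);
            float u2 = Noise2D(e1.z, e1.x, seed + 3 * i + 2);
            float radius = SD * std::sqrt(-2.0f * std::log(u1)) * std::cos(TWO_PI * u2);
            // Offset the ray within its tangent plane, renormalize back onto
            // the unit sphere, and accumulate.
            Vec3 dir = Normalize(e1 + (radius * std::cos(angle)) * e2
                                    + (radius * std::sin(angle)) * e3);
            sum = sum + SampleCube(dir);
        }
        return (1.0f / samples) * sum;
    }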

This is a great example of a Monte Carlo technique. We solved the problem in an elegant and asymptotically correct fashion with minimal math and minimal code! If you want better results, look up "stratified sampling" and play around with that (note that the code above is already performing stratification on the angle, just not on the radius). For my purposes, this simple sampling strategy was sufficient.

Dithering!

September 26, 2012 General News 1 Comment

I'm really excited about this post. If you look carefully at my previous screenshots, especially those in grayish/dusty environments, you might notice a dirty little secret that I've been refusing to talk about until now: color quantization (in the form of color banding). Color quantization isn't a problem that most games have to address. However, the dusty and monochromatic atmospheres that I've been producing as of late are particularly subject to it, since they use a very narrow spectrum of colors. Within this narrow spectrum, one starts to run into the problem that 32-bit color affords only 256 choices each for red, green, and blue intensity in a given pixel. When the entire screen is spanned by a narrow range of colors, the discrete steps between these 256 levels become visible, especially in desaturated colors (since there are only 256 total choices for gray).

As an example, here's a shot that demonstrates the problem:

Notice the visible banding in the lower left. If you open this image in Photoshop, you can verify that adjacent bands differ by only one unit in each color component - so we really can't do any better with 8 bits per component! The solution? A very old technique called dithering! Dithering adds an element of noise to break up the perfect borders of the discrete color steps, yielding a remarkably convincing illusion of color continuity.

OpenGL supposedly supports dithering, but it's not at all clear to me how it does so, and especially how that would interact with a deferred renderer. Luckily, it's actually easy to implement dithering in a shader. The appropriate time to do so is when you have a shader that you know will be performing color computations in floating point, but will then output to a render target with a lower bit depth (e.g., RGBA8). You'd like to somehow retain the extra information that you had while performing the floating-point color computations - that's where dithering comes in. Before you output the final color, just add a bit of screen-space noise to the result! In particular, take a random vector v = (x, y, z), where x, y, and z are uniformly distributed over [-1, 1], and where the uniform random function uses the fragment coordinates (or the texture coordinates) to compute a pseudorandom number (which need not be of high quality). Then add v / 256 to your result. Why v / 256? Because this allows the pixel to move up or down a single level of intensity in each component, assuming you're using an 8-bit-per-component format. In my experiments, this worked well.
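
In C++ terms, it looks like the following (the real thing would live at the end of the fragment shader, and the hash function here is an arbitrary stand-in for whatever cheap pseudorandom you prefer):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Cheap hash -> uniform float in [0, 1); quality genuinely doesn't matter.
    float Hash(float x, float y, float s)
    {
        float v = std::sin(x * 12.9898f + y * 78.233f + s * 37.719f) * 43758.5453f;
        return v - std::floor(v);
    }

    // Apply just before writing the final color to an 8-bit-per-component target.
    // fragX, fragY are the fragment (pixel) coordinates.
    Vec3 Dither(Vec3 color, float fragX, float fragY)
    {
        // One uniform sample in [-1, 1] per component...
        float x = Hash(fragX, fragY, 0.0f) * 2.0f - 1.0f;
        float y = Hash(fragX, fragY, 1.0f) * 2.0f - 1.0f;
        float z = Hash(fragX, fragY, 2.0f) * 2.0f - 1.0f;
        // ...scaled so each channel can move up or down exactly one 8-bit level.
        color.x += x / 256.0f;
        color.y += y / 256.0f;
        color.z += z / 256.0f;
        return color;
    }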

Right now, I've implemented this the lazy way: I switched my primary render targets to RGBA16, and I dither a single time immediately before presenting the result. If you think about it, this is pretty much equivalent to the above, but it requires twice the bandwidth. I'll switch to the more efficient way soon.

And here's the same scene, now with dithering:

As if by magic, the banding has totally disappeared, and we are left with the flawless impression of a continuous color space. Happy times 🙂

Mining and Beam Weapons

September 25, 2012 General News 2 Comments

I haven't given up! Not yet, at least. I've got a five-week plan to get a Kickstarter ready for this project. I hope to flesh out a lot of cool features between now and then, and to pull together some real cinematic footage to show off all the graphical polish I've been spending time on.

Right now, my priority is all things mining. Unfortunately, I'm still not positive how the mining system will be implemented. I know that ore/mineable things will actually be physical objects attached to asteroids. I think what will happen is that normal weapons will be able to break off shards of the ore, which can then be tractored in. Specialty mining lasers will also be available and will provide far more efficient extraction (i.e., a higher yield than just blasting away at the ore). Mining lasers will have built-in tractor capability, so they won't require a two-step mining process. Of course, all of this requires an implementation of beam weapons, which I've been avoiding.

Here's my first attempt at mining beams.

The beams look pretty cool, and are animated using coherent temporal noise that flows along the direction of the beam, which breaks up the perfect gradient and provides the illusion that the beam is pulsing into the target object.
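
Conceptually, it's just coherent noise evaluated in a frame that slides along the beam over time. A tiny C++ sketch (Noise1D is any coherent 1D noise, and the constants are illustrative):

    // Any coherent 1D noise returning a float in [0, 1] will do (assumed helper).
    float Noise1D(float x);

    // Intensity at a point 'dist' units along the beam. Sliding the noise
    // domain with time makes the pattern flow toward the target, breaking up
    // the perfect gradient so the beam appears to pulse into it.
    float BeamIntensity(float dist, float time, float flowSpeed, float noiseScale)
    {
        float n = Noise1D((dist - time * flowSpeed) * noiseScale);
        return 0.75f + 0.5f * n;   // modulate around full brightness (illustrative)
    }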

I can't wait to use this same functionality to slap some beam weapons on large ships and observe epic cap-ship battles (which, sadly, will require a good ship generator, which I'm still avoiding :/).

Dust and Collision Detection

September 3, 2012 General News 2 Comments

I made some great progress yesterday with volumetric effects, and am officially excited about the current look of dust clouds/nebulae! I've implemented a very cheap approximation to global illumination for volumes. Although the effect is really simple, it adds a lot to the realism. On top of illumination from background nebulae, there's also a fake illumination effect from stars. Here are some shots:

Definitely starting to feel like the game has a strong atmosphere!

The rest of my time lately has been devoted to finally getting real collision detection. I tried several options, including Bullet, OZCollide, and OPCODE. For now, I've opted (no pun intended...) to go with OPCODE. I love the cleanliness of the interface, as well as the fact that it's really easy to use as just a narrow-phase collider, since I've already implemented my own broad phase.

Once again, I picked a fight that I shouldn't have with someone way bigger than me. Luckily, thanks to the new, accurate collision detection, I was able to sneakily take refuge in a depression in this large asteroid:

OPCODE is very fast, so I'm pleased with that. And I haven't even set up temporal coherence caches yet, which can supposedly yield 10x speedups. Unfortunately, I've already run into a very serious problem that seems to be universal among collision detectors: they don't support scaling in world transformations!! I'm not sure why...I guess there's some fundamental conflict between scaling and collision detection algorithms that I'm not aware of. At any rate, this poses a big problem: I've got thousands of collidable asteroids drifting around systems. With scaling support, this wouldn't be a problem, because I only generate about 15 unique meshes for a given system; with instancing and scaling, you can't tell that some of the asteroids are the same. Scaling is an absolutely critical part of the illusion! But to get the scaled asteroids to work with collision detectors, I have to bake the scale data into the meshes, and I can't possibly generate a unique, scaled mesh and acceleration structure for each of the thousands of asteroids at load time. That would take far too long and use far too much memory.

To solve this, what I'm currently doing is trying to anticipate collisions, launching another thread to build the acceleration structures for the objects that would be involved. To do so, I keep track of a "fattened" world-space AABB for each object, and launch the builder thread when the fattened boxes intersect. The factor by which the world boxes are exaggerated determines how early the acceleration structures get cached. So far, this method is working really well. OPCODE is fast at building acceleration structures, so I haven't had any trouble with the response time.

In theory, on a slow processor, collisions will be ignored if the other thread doesn't finish building the acceleration structure before the collision occurs. I've tested to see how far I can push the complexity of the meshes before this starts happening. Indeed, if I use the full-quality asteroid meshes (~120k tris) for collision, my first missile flies right through them (admittedly, I also turned the missile speed way up). But let's face it, 120k tris is a little much for a collision mesh! And the only real alternative is to stall and wait for the acceleration structure to be generated, which means the game would freeze for a moment as your missile heads toward an asteroid or ship. I'd much rather risk an occasional missed collision on a really low-end machine than have the game regularly stall when I shoot at things.
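
Here's a minimal sketch of the scheme in C++ using SFML's threading primitives. The AABB/Object types and BuildAccelStructure are stand-ins for the engine's real pieces, and a production version would need a bit more care around synchronizing the "ready" flag:

    #include <SFML/System.hpp>
    #include <vector>

    // Stand-ins for engine types (assumptions, not the actual interfaces).
    struct AABB {
        float min[3], max[3];
        AABB Fattened(float factor) const;       // exaggerate the box by 'factor'
        bool Intersects(const AABB& other) const;
    };
    struct Object {
        AABB worldBox;
        bool hasAccel;                           // scaled OPCODE structure ready?
        void BuildAccelStructure();              // bake scale into mesh, build tree
    };

    sf::Mutex            queueMutex;
    std::vector<Object*> buildQueue;

    // Worker thread: drain the queue, building structures off the main thread.
    void BuilderThread()
    {
        for (;;) {
            Object* obj = 0;
            {
                sf::Lock lock(queueMutex);
                if (!buildQueue.empty()) {
                    obj = buildQueue.back();
                    buildQueue.pop_back();
                }
            }
            if (obj) obj->BuildAccelStructure();
            else     sf::sleep(sf::milliseconds(1));
        }
    }

    // Main thread, each frame: queue builds when the fattened boxes overlap.
    void AnticipateCollisions(std::vector<Object*>& objects, float fatten)
    {
        for (size_t i = 0; i < objects.size(); ++i)
            for (size_t j = i + 1; j < objects.size(); ++j) {
                Object* a = objects[i];
                Object* b = objects[j];
                if (!a->worldBox.Fattened(fatten)
                        .Intersects(b->worldBox.Fattened(fatten)))
                    continue;
                sf::Lock lock(queueMutex);
                if (!a->hasAccel) buildQueue.push_back(a);
                if (!b->hasAccel) buildQueue.push_back(b);
            }
    }

    // Launching the worker is a one-liner with SFML:
    //     sf::Thread builder(&BuilderThread);
    //     builder.launch();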

I'm very impressed with how easy multithreading is with SFML. The thread and mutex structures provided by SFML are completely intuitive and work perfectly! It took less than an hour to implement the aforementioned collision structure worker thread.