It's cheating, sure, but can give a lovely, artistic look. The eye really does love those gradients...
I guess I'm prepared to admit that, after finally getting everything tweaked properly...linear lighting does look darn good.
Kind of amazing how what amounts to little more than a sqrt in the lighting falloff is...so influential on the eye's perception of correctness..!
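For the curious: that "sqrt" is essentially gamma encoding. Here's a minimal sketch (my own illustration, not LT's shader code) of the idea - do the lighting math on linear intensities, and gamma-encode only at the very end:

```cpp
#include <cassert>
#include <cmath>

// Lighting computed in linear space, gamma-encoded for display at the end.
// A gamma of 2.0 (a plain sqrt) is a rough stand-in for the sRGB curve (~2.2).
float shadeLinear(float albedo, float lightIntensity, float ndotl) {
    float linear = albedo * lightIntensity * ndotl; // physical math, linear space
    return std::sqrt(linear);                       // ~gamma encode at the end
}

// The "wrong" way: treating already-gamma-encoded values as if they were linear.
float shadeGamma(float albedo, float lightIntensity, float ndotl) {
    return albedo * lightIntensity * ndotl;
}
```

The linear version keeps midtones brighter and makes falloff gradients far smoother, which is much of why it reads as "correct" to the eye.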
NOTE: Aberration is too aggressive.
A much more intuitive way to visualize the data. And appropriate, considering the game's context.
Area of "star" is proportional to memory footprint. The connectivity is reflective of the structure of the data. Cool stuff...someday I'll do the same with the code rather than just the data. That'll be fantastic.
Edit: and a few more just for show. This is a pretty deep visualization of a ship, going all the way down to the attached hardpoints (thrusters, weapons, power generator) and most of the data therein. Amazing how beautiful and structured the data is.
Hard not to be proud of a C++ engine that lets you look inside of it with no boilerplate code.
Really a shame that I didn't post about any of the recent graphical improvements. Gotta hop back on the bandwagon at some point, though!
Naturally, I can't leave well enough alone! Attempt IV was pretty cool, but it obviously lacked volume. No surprises there: it was 2D. Here's my first attempt at volumetric light inside of the same type of nebula as shown in the previous post. I'm sure it will get better over time, but already you can notice a much better sense of volume, softness/cloudiness, and of light transport. Light is correctly modeled using emission and absorption as it passes through the nebula. I'd say this is a pretty good amount of nebula-related progress for 2 days!!
This method is about as expensive as the current LT nebulae...but it looks way better...so I think it's safe to say this will be replacing them soon. I am very happy with these, and I think I would quite enjoy seeing them in the background! All the parameters - softness, brightness, feature size, absorption, wavelength-dependence of scattering, etc. - are easily tweakable to get a lot of different styles.
Oh, and looks like we also get clouds for free!
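The emission/absorption model mentioned above reduces to a surprisingly small loop. Here's a toy sketch of it (my own illustration with a stand-in density field, not LT's code):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Stand-in density field: a soft spherical blob. The real nebula field would
// be procedural noise; this is just enough to make the raymarch meaningful.
static float density(float x, float y, float z) {
    float r = std::sqrt(x*x + y*y + z*z);
    return std::max(0.0f, 1.0f - r);
}

// March a ray through the field, accumulating emitted light and attenuating it
// by absorption (Beer-Lambert) along the way -- light transport at its simplest.
float march(float ox, float oy, float oz, float dx, float dy, float dz) {
    const int   steps = 64;
    const float dt    = 0.05f;   // step length along the ray
    const float sigma = 2.0f;    // absorption coefficient
    float transmittance = 1.0f;  // fraction of light that still survives
    float radiance      = 0.0f;  // accumulated emitted light
    for (int i = 0; i < steps; ++i) {
        float t = i * dt;
        float d = density(ox + t*dx, oy + t*dy, oz + t*dz);
        radiance      += transmittance * d * dt;    // emission proportional to density
        transmittance *= std::exp(-sigma * d * dt); // absorption attenuates what's behind
    }
    return radiance;
}
```

Wavelength-dependent scattering just means running this with a different sigma per color channel - which is where a lot of the stylistic range comes from.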
Tonight, I feel like I have closed a chapter in my life. For almost three years, I have been trying, on and off, to understand nebulae. In particular, I've been trying to generate them procedurally. If you look back over the log, you'll find several attempts:
Arguably, I've been getting better over the years. As my understanding of math improves, so does my ability to craft these lovely things. Although 2013's nebulae are significantly better than the rest (and, arguably, some of the better procedural nebulae out there on the web), let's face it, they still don't look like nebulae. But tonight, tonight I think that I have discovered the secret of nebulae. After three years, I finally feel that I understand these things. And I'm proud to say that my nebulae...finally look like nebulae.
In yet another attempt to drive the lesson of simplicity into my mind, the universe has shown me that nebulae - in my opinion, some of the most gorgeous and complex objects out there - are actually simple. The image above was produced by 31 lines of code, which is far, far less than any of my previous attempts. The code that actually defines the nebula itself is about 20 lines. Dead simple.
It's surprisingly hard to find clean and simple code to do this, even though it's easy to do. Here it is, if anyone has ever wanted to write their own sound files in C++.
My code formatting in this blog is horrible, hence linking it as a separate file. Now all you do is call writeWAVData like this:
writeWAVData("mySound.wav", mySampleBuffer, mySampleBufferSize, 44100, 1);
Which would write a 44.1kHz mono WAV ("CD quality"). mySampleBuffer should be an array of signed shorts for 16-bit sound, floats for 32-bit, or unsigned chars for 8-bit. Since the function is templatized, it automatically detects the format and takes care of the relevant fields in the WAV header.
Oh, and this only works on a little-endian machine, since WAV data is expected to be little-endian...but that probably doesn't matter to anyone these days...you're all running little-endian...
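In case the linked file ever goes missing, here's a sketch of what such a templated writer looks like. This is my own reconstruction, not the linked code: the field layout follows the RIFF/WAVE spec, but the exact signature and the assumption that the buffer size is given in bytes are mine.

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <fstream>
#include <type_traits>

// Templated WAV writer: the sample type T determines the header's format
// fields. bufSize is in BYTES. Little-endian CPUs only (fields are written raw).
template <typename T>
void writeWAVData(const char* outFile, T* buf, size_t bufSize,
                  int sampleRate, short channels)
{
    std::ofstream f(outFile, std::ios::binary);
    short format        = std::is_floating_point<T>::value ? 3 : 1; // IEEE float : PCM
    short bitsPerSample = (short)(sizeof(T) * 8);
    short blockAlign    = (short)(channels * sizeof(T));
    int   byteRate      = sampleRate * blockAlign;
    int   fmtSize       = 16;
    int   dataSize      = (int)bufSize;
    int   riffSize      = 36 + dataSize;
    f.write("RIFF", 4);  f.write((const char*)&riffSize, 4);
    f.write("WAVE", 4);
    f.write("fmt ", 4);  f.write((const char*)&fmtSize, 4);
    f.write((const char*)&format,        2);
    f.write((const char*)&channels,      2);
    f.write((const char*)&sampleRate,    4);
    f.write((const char*)&byteRate,      4);
    f.write((const char*)&blockAlign,    2);
    f.write((const char*)&bitsPerSample, 2);
    f.write("data", 4);  f.write((const char*)&dataSize, 4);
    f.write((const char*)buf, dataSize);
}
```

The compile-time dispatch on `std::is_floating_point` is the whole trick: shorts and chars get format tag 1 (PCM), floats get tag 3 (IEEE float), and everything else in the header falls out of `sizeof(T)`.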
It had to happen sooner or later...audio synthesis (not composition, but actual synthesis) is one of the only realms of procedural generation that I have yet to touch...well, at least, that was true yesterday. But not anymore!
Yesterday I finally indulged myself in trying out audio synthesis. I read a few basic materials on the subject, but, much to my surprise, audio synthesis is incredibly simple! It seems like the audio people really enjoy making up their own fancy terminology for everything, but, honestly, it's dead simple: audio synthesis is the study of scalar functions of one variable. Period. That's literally all there is to it: a sound wave is f(t), nothing more, nothing less. Wow! That's great, because I'm already pretty darn familiar with Mr. f(t) from my work in other fields of procedural generation. Is it coincidence that it always comes back to pure math? I think not.
Here are a few sound effects that represent my first-ever endeavors into audio synthesis. I know, they're terrible, but they were made from scratch by someone who has only known how to synthesize audio for less than 24 hours, so maybe that makes them a little better. In the fancy audio terms, I guess you could say that I used "additive synthesis," combined with "subtractive synthesis," and even a little "frequency modulation" on top of that. But really, all I did was write some math. Sums of sines ("additive synthesis")...multiplications of those sums ("subtractive synthesis")...domain distortions ("frequency modulation"). All the usual procedural tricks. Heck, I even did the equivalent of fractal noise! Turns out it works just as well in audio. Those magical fractals
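To make "it's all just f(t)" concrete, here's a toy version of those three tricks in one function (my own example - the constants and the 220 Hz base frequency are arbitrary, not taken from the actual effects):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One second of a toy sound: "additive synthesis" (a sum of sine harmonics,
// octave-stacked like fractal noise), "frequency modulation" (a sine-based
// domain distortion of t), and a simple decay envelope multiplied on top.
std::vector<float> synth(int sampleRate = 44100) {
    const float PI = 3.14159265358979f;
    std::vector<float> buf(sampleRate);
    for (int i = 0; i < sampleRate; ++i) {
        float t  = (float)i / sampleRate;
        float tm = t + 0.002f * std::sin(2*PI*5.0f*t);  // FM: warp the domain
        float s = 0.0f, amp = 0.5f, freq = 220.0f;
        for (int octave = 0; octave < 4; ++octave) {    // "fractal" sum of sines
            s    += amp * std::sin(2*PI*freq*tm);
            amp  *= 0.5f;                               // halve amplitude...
            freq *= 2.0f;                               // ...double frequency
        }
        buf[i] = s * std::exp(-3.0f*t);                 // envelope: quick decay
    }
    return buf;
}
```

Feed the result straight into the WAV writer from the earlier post and you have a procedural sound effect, end to end, in pure math.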
Well, here they are. Someday they'll get better, but certainly not in time to have procedural sound effects for LT1. Oh well. Maybe next time.
Spent a good chunk of today upgrading the in-engine profiler, which had gotten so old and obsolete that I never really used it anymore, and preferred external tools for profiling. Well, that needed to change, because in-engine profiling is by far the most convenient. It also gave me a nice opportunity to break in my new interface system. Thanks to this week's overhaul of the interface engine, I was able to cook up what I think is a great-looking profiler with almost no effort!
More important than the looks, however, is the accuracy. I dramatically improved the accuracy, and now it is giving me what seem to be very good results! This will be invaluable as I continue to add complexity to the game. Man, this thing makes the performance optimizer in me so excited.
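For flavor, the core of a scoped in-engine profiler can be remarkably small. This is a generic sketch of the pattern, not LT's actual profiler (which would accumulate timings into a hierarchy rather than printing):

```cpp
#include <cassert>
#include <chrono>
#include <cstdio>

// RAII timer: construct at the top of a scope, and it reports on destruction.
struct ProfileScope {
    const char* name;
    std::chrono::steady_clock::time_point start;
    explicit ProfileScope(const char* n)
        : name(n), start(std::chrono::steady_clock::now()) {}
    double elapsedMs() const {
        auto now = std::chrono::steady_clock::now();
        return std::chrono::duration<double, std::milli>(now - start).count();
    }
    ~ProfileScope() { std::printf("%s: %.3f ms\n", name, elapsedMs()); }
};
```

Usage is just `{ ProfileScope p("UpdatePhysics"); /* ... work ... */ }` (the scope name here is hypothetical); the destructor fires when the block ends, so instrumented code stays almost boilerplate-free.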
Last time I talked about AO, but I left out a teensy little detail: although per-vertex AO is very easy to compute, and also extremely fast to render, it's extremely slow to compute during the pre-process. To get high-quality, noise-free AO requires somewhere in the vicinity of 1000 samples of the density field per vertex. Not exactly a cheap operation! On the CPU, it quickly becomes prohibitively expensive as either the complexity of the density field or the resolution of the mesh increases.
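To make that cost concrete, the brute-force idea looks something like this. This is my own illustration - the density function here is a trivial half-space stand-in for the real procedural field:

```cpp
#include <cassert>
#include <cstdlib>

// Stand-in field: solid below the z = 0 plane. The real field is the
// procedural function that defines the ship/asteroid geometry.
static float densityAt(float x, float y, float z) {
    (void)x; (void)y;
    return z < 0.0f ? 1.0f : 0.0f;
}

// Brute-force per-vertex AO: fire random rays into the hemisphere around the
// vertex normal and count how many probe points land inside matter.
float vertexAO(float px, float py, float pz,
               float nx, float ny, float nz, int samples) {
    int occluded = 0;
    for (int i = 0; i < samples; ++i) {
        float dx, dy, dz, len2;
        do {  // rejection-sample a random direction inside the unit sphere
            dx = 2.0f*(float)rand()/RAND_MAX - 1.0f;
            dy = 2.0f*(float)rand()/RAND_MAX - 1.0f;
            dz = 2.0f*(float)rand()/RAND_MAX - 1.0f;
            len2 = dx*dx + dy*dy + dz*dz;
        } while (len2 > 1.0f || len2 < 1e-6f);
        // flip the direction into the normal's hemisphere
        if (dx*nx + dy*ny + dz*nz < 0.0f) { dx = -dx; dy = -dy; dz = -dz; }
        const float dist = 0.5f;  // probe a fixed distance along the ray
        if (densityAt(px + dx*dist, py + dy*dist, pz + dz*dist) > 0.0f)
            ++occluded;
    }
    return 1.0f - (float)occluded / samples;  // 1 = fully open, 0 = buried
}
```

At 1024 samples for each of half a million vertices, that's over half a billion field evaluations per mesh - which is exactly why this had to move to the GPU.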
Today, I moved the computation to the GPU, and have once again been blown away by the computational abilities of modern GPUs. Now that I have every piece of the mesh computation process - the field evaluation, the gradient evaluation, and, finally, the AO evaluation - running on the GPU, it's simply mind-blowing how high I can push the quality of the mesh and the complexity of the density field.
Here's a mesh consisting of 50 unioned and subtracted round boxes (round boxes are very expensive compared to sharp-edged ones), contoured on a grid of 300 x 300 x 300 (that's an insane level of detail, FYI), resulting in half a million vertices, each of which takes 1024 AO samples. The GPU performs this work in ~3 seconds. Incredible.
But that's not even the most amazing part. The amazing thing is that, after profiling, it would seem that the GPU actually takes less than 1 second to complete this work. It is OpenGL's shader compiler (which, of course, is running on the CPU) that takes the majority of the time. This isn't too surprising, as the shaders to compute these things are massive, since I actually bake the field equation into the shader. I'm sure GL spends a long time analyzing and optimizing the equation, which is a good thing, because the shader runs absurdly fast.
Unfortunately, this brings up a few unwanted issues - I now have a CPU bottleneck that can't be easily offloaded to another thread. Since the bottleneck is inside GL, I will need to explore multithreaded GL contexts in order to compile the shaders in another thread while the game runs, because I can't have the game stalling every time a new asset enters the region and the corresponding shaders have to be compiled. Sadly, this probably won't be too easy, but I'm sure I'll learn a lot...!
Another, less-tractable problem is that the shader compiler flat-out crashes after a certain field complexity is hit. I will need to explore this some more. It might just be the fact that my field function dumps an incredibly ugly equation into the shader (it's literally a single line, with hundreds of functions wrapped together). Perhaps breaking it up will prevent the crash. Or maybe I've hit some kind of hard limit on the allowed complexity of pixel shaders. If that's the case, I could explore a solution that uploads the equation as a texture, and create a shader that understands how to parse an equation from a texture. But that would no doubt be significantly slower than baking the equation into the shader...probably at least an order of magnitude slower :/
But for now, I will allow myself to be happy with these results, and am most definitely looking forward to working on ships again with this technology in hand!