Posts In: Ideas

Top-Down

June 20, 2012

Last night I spent a lot of time thinking, on a more philosophical level, about how to approach game design. Interestingly, I came to a pretty big revelation: I've been doing it all wrong. After a few hours of considering my previous technique, I think I've figured out exactly what it boils down to.  The key is top-down vs. bottom-up development. To start, I'll briefly define what I mean by each.

Bottom-Up

You start by developing small subsystems, fleshing them out until they work really well by themselves. In this way, you make everything look beautiful, even out of context. For example, you might polish a GUI system to perfection before moving on to something else - but the GUI system is developed and tested as an isolated component. Maybe you decide to work on trees, and, again, you develop and test the tree generation algorithms as a standalone component. In this way, you continue to tackle and perfect each subsystem of the game until all subsystems are complete.

Top-Down

You start with the "big picture," using placeholders for each subsystem, focusing instead on tying the subsystems together into a coherent game. For example, you may use very rough text and simple boxes for your interface, and perhaps you use cylinders for trees, boxes for ships, and spheres for planets - all without texture. But the important thing is that you explore how the interface lets you interact with the world, how trees are distributed over planets, how ships land on planets, and so on - you are not developing the subsystems in isolation. Gradually, you refine each subsystem until it seems good enough in context. Your planets get texture, then dynamic tessellation, and finally an atmosphere.

If you think about it for a while, I think you'll agree that it's a very real distinction.  In some sense, it's comparable to the difference between a progressive (interlaced) and non-progressive image being loaded in your browser.  In the first case (progressive/top-down), you can see the entire image almost immediately, but it is very blurry, and gradually gets more detailed.  In the latter case (non-progressive/bottom-up), you can see individual pixels filling in row after row at full precision, but probably can't tell what the entire image is until a good many rows are loaded; even then, the full image is only revealed at the very end.

Now, historically, I have always approached things in the bottom-up fashion. If you look at the images posted on this blog, I think it's quite clear: almost every image I post highlights a single element - a terrain, a tree, a texture, etc.

Why it Matters

Here's what I realized last night: bottom-up development is wrong - at least for serious game projects involving only one developer. I can't generalize my conclusion to large companies, but I do feel that it should apply to independent gamedevs like myself. Here are some reasons that support my claim:

  • Subsystems are developed out of context (a subsystem working well in isolation does not guarantee that it will work well as part of the final product)
  • Gameplay and game subsystem interactions are de-emphasized
  • Easy to lose sight of the "big picture," since it is not visible until the very end; by extension, easy to lose focus

I think these are all rather self-evident.  If you've developed in a bottom-up fashion, you can probably think of personal examples of each problem occurring in development (I know I can).  Perhaps the biggest problem with bottom-up development, in my experience, is the third item.  When you're focused on a single element, it's easy to forget the grandeur of the project as a whole.  As a game developer, you come up with a fantastic game idea - something that really inspires you - and then you work to make it a reality.  But working for a week on a GUI system can make you forget all of the inspiration and beauty of the original idea, causing a loss of motivation and, ultimately, of development efficiency.

On the other hand, in top-down development, it's easy to keep a handle on the big picture, as it's really the focus of this flavor of development.  In that sense, it is easier to always remember the beauty of the idea, because it's right in front of you (albeit, in a coarse form).

Anyway, I don't usually post this kind of philosophical garbage, but it hit me so strongly yesterday that I felt the need to write about it, if only so that my future self will be able to remember.

Here's to a bright future of working down rather than up!

Spatial Signatures: Even Faster Collision Detection?

Yes, I'm quite happy with my collision detection engine. But could it be faster? Probably!

Here's an idea I had today. Obviously, I'm obsessed with spatial hashing. It's probably my favorite thing ever. Well, what about this: take a mesh, create a spatial grid of booleans (which can, of course, be represented super-efficiently as a bit vector), then loop over every triangle in the mesh. For any grid cell that the triangle intersects, the corresponding hash bucket gets set to true. Once this has been done for all triangles, you are left with an array of bits that represents the spatial buckets with which the mesh collides. Purely for the sake of being able to throw around cool and esoteric terminology, I'll call this array the "spatial signature" of the object.

Now, here's the fun part. In my previous technique, I looped over the vertices of one mesh, checking them against the density function of the second. The most expensive part of this check is, without a doubt, evaluating the density function. Well, instead, we could prune the possibilities by first checking each vertex against the spatial signature of the second object! It's quite inexpensive to hash the vertex to a bucket in the same space as the one in which the spatial signature was created. Then, we simply check the corresponding index in the bit array. If the array indicates that no triangles intersect that spatial bucket, then there's absolutely no chance that the vertex could be colliding with the mesh. On the other hand, if the bit is turned on, we continue as before with the full-blown density check.
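
To make the idea concrete, here's a minimal sketch - not the engine's actual code. The names (SpatialSignature, buildSignature) are mine, the grid is a uniform one laid over a known region, and, to keep things short, each triangle marks every cell overlapped by its bounding box - a conservative shortcut that may set a few extra bits but never misses a cell the triangle actually touches.

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

// One bit per cell of a uniform grid: the "spatial signature."
struct SpatialSignature {
    Vec3 origin;                 // minimum corner of the grid
    float cellSize;              // edge length of each cubic cell
    int nx, ny, nz;              // grid resolution
    std::vector<uint64_t> bits;  // packed bit vector, one bit per cell

    int cellIndex(int i, int j, int k) const { return (k * ny + j) * nx + i; }

    void set(int i, int j, int k) {
        int idx = cellIndex(i, j, k);
        bits[idx >> 6] |= uint64_t(1) << (idx & 63);
    }

    // Hash a point to its cell and test the corresponding bit.
    bool test(const Vec3& p) const {
        int i = int(std::floor((p.x - origin.x) / cellSize));
        int j = int(std::floor((p.y - origin.y) / cellSize));
        int k = int(std::floor((p.z - origin.z) / cellSize));
        if (i < 0 || i >= nx || j < 0 || j >= ny || k < 0 || k >= nz)
            return false;  // outside the grid: no triangles there
        int idx = cellIndex(i, j, k);
        return (bits[idx >> 6] >> (idx & 63)) & 1;
    }
};

SpatialSignature buildSignature(const std::vector<Vec3>& verts,
                                const std::vector<int>& tris,  // 3 indices per triangle
                                Vec3 origin, float cellSize,
                                int nx, int ny, int nz) {
    SpatialSignature sig;
    sig.origin = origin; sig.cellSize = cellSize;
    sig.nx = nx; sig.ny = ny; sig.nz = nz;
    sig.bits.assign((size_t(nx) * ny * nz + 63) / 64, 0);

    // Clamp a coordinate to a valid cell index along one axis.
    auto toCell = [&](float v, float o, int hi) {
        int c = int(std::floor((v - o) / cellSize));
        return std::max(0, std::min(c, hi - 1));
    };
    for (size_t t = 0; t + 2 < tris.size(); t += 3) {
        const Vec3& a = verts[tris[t]];
        const Vec3& b = verts[tris[t + 1]];
        const Vec3& c = verts[tris[t + 2]];
        // Mark every cell overlapped by the triangle's bounding box.
        int i0 = toCell(std::min({a.x, b.x, c.x}), origin.x, nx);
        int i1 = toCell(std::max({a.x, b.x, c.x}), origin.x, nx);
        int j0 = toCell(std::min({a.y, b.y, c.y}), origin.y, ny);
        int j1 = toCell(std::max({a.y, b.y, c.y}), origin.y, ny);
        int k0 = toCell(std::min({a.z, b.z, c.z}), origin.z, nz);
        int k1 = toCell(std::max({a.z, b.z, c.z}), origin.z, nz);
        for (int k = k0; k <= k1; ++k)
            for (int j = j0; j <= j1; ++j)
                for (int i = i0; i <= i1; ++i)
                    sig.set(i, j, k);
    }
    return sig;
}

The narrow phase then reads: if sig.test(vertex) is false, skip the vertex; only when the bit is on do we pay for the full density evaluation.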

This is, of course, a completely untested idea, but it seems to me that the speed with which a bit array can be accessed would significantly cut down on the cost of the narrow-phase collision check, especially when dealing with complex density functions.

Narrow Phase Collision Detection Using Density Fields

I've been pondering narrow phase collision detection a lot lately, as it's really the only thing that stands between me and a working space shooter demo. Unfortunately, it's not at all an easy problem, especially considering that my meshes are generated on the fly, so I can make no assumptions about the topology, nor can I rely on hand-constructed bounding trees.

I had been getting pretty excited about an idea involving a two-phase check using AABB trees for pruning and then spatial bucketing (a la spatial hashing) for triangle-to-triangle testing. Then it hit me: why am I running in circles? I started with a mathematical model of the mesh: an isosurface of a well-defined density function. I used a complex algorithm to convert the isosurface into a polygonized mesh. Now I was trying to go back to a mathematical representation that would be conducive to collision detection. Why??

Here's the punchline: why not use the original density field description for the checking? After all...the density field is even more precise than the triangular mesh! All that collision detection really is, thanks to the nature of my engine, is a solver for the intersection of two isosurfaces, both of which can be completely described using formulae! Of course, I'm sure the math would get incredibly nasty for an analytic solution, and I'm not really up for exploring that option at the moment. But here's a better idea: use a set of points (the vertex positions generated by the polygonization algorithm) to describe one isosurface, then check each point against the other surface's density function. If the density at any of those points comes back with the interior sign, you immediately know that there's a collision!
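
As a sketch of that check - assuming the convention that a positive density means "inside" the isosurface (isolevel zero), with all names being mine:

#include <functional>
#include <vector>

struct Vec3 { float x, y, z; };

// vertsA: vertex positions produced by polygonizing object A's isosurface,
// transformed into the space in which densityB is defined.
// densityB: object B's density function; by the convention assumed here,
// a positive value means the point lies inside B's isosurface.
bool meshIntersectsField(const std::vector<Vec3>& vertsA,
                         const std::function<float(const Vec3&)>& densityB) {
    for (const Vec3& v : vertsA) {
        // A sample point of A falling on the interior side of B's surface
        // means the two isosurfaces must intersect somewhere.
        if (densityB(v) > 0.0f)
            return true;
    }
    return false;
}

Note that this direction of the test can miss the degenerate case in which B sits entirely inside A (no vertex of A would then be inside B); running the same check with the roles swapped covers it.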

This is a pretty cool idea, since it only requires a set of points and a density function to work. It definitely avoids a lot of mess associated with computing BVHs. Most importantly, however, it just makes sense! If you're working with a precise, mathematical definition of a surface, then use it!

I'll probably be implementing a rough draft of this system quite soon, so we'll see if it measures up to my expectations. I would imagine that sampling the density function will be cheaper than other types of collision detection, but I'll need to do some extensive profiling to make sure that this idea is actually as clever as I believe it to be.

Long-Term Ideas

April 28, 2010

Multi-Module

  • A plugin that has its own plugin slots (nested plugins); see the interface sketch after this list
  • For example, a drum/bass multimodule could have the ability to load two plugins (one percussive, and one bass), and be able to have tighter control over coordinating them than the main framework would have if they were separate plugins
  • Think post-processing, except during the rendering process
  • Acts as an overseer
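
mGen's actual plugin API isn't spelled out here, so purely as an illustration of the nested-slot idea, a minimal interface might look like this (all names are hypothetical):

#include <memory>
#include <utility>
#include <vector>

// Hypothetical base interface for any mGen plugin.
class Plugin {
public:
    virtual ~Plugin() {}
    virtual void render() = 0;  // produce this plugin's musical output
};

// A multi-module is itself a plugin, but hosts child plugins in its own
// slots and coordinates them more tightly than the framework could.
class MultiModule : public Plugin {
public:
    void loadSlot(std::unique_ptr<Plugin> child) {
        slots.push_back(std::move(child));
    }
    void render() override {
        // The overseer decides how and when each child renders - e.g. a
        // drum/bass multi-module keeping its two children rhythmically locked.
        for (auto& child : slots)
            child->render();
    }
private:
    std::vector<std::unique_ptr<Plugin>> slots;
};

The drum/bass example above would then be a MultiModule with exactly two slots, free to coordinate its children internally.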

Post-Renderer

  • A macro that does something with rendered MP3s
  • Adds the song to the iTunes library, for example

Unified Core Module

  • Alternative to structure -> coordination -> progression workflow
  • Provides a unified core interface
  • Could be more coherent

Unified Production Module

  • Overarching module that replaces the unified core module as well as all generative modules
  • Decides (algorithmically) which plugins to call and how to call them
  • This is the final step in making mGen truly random - no more user configuration of modules
  • Does NOT replace the renderer/post-renderer

Ideas for Melodic Interpolation Engine

April 13, 2010

Here are a few ideas concerning melodic interpolation that I've been collecting in my mind over the past few days:

  • Certain blocks in the blockspace should have an ANCHORED flag so that they cannot be bumped vertically
    • I find that it is often the first and last notes of a phrase that really establish the tonality...perhaps they should both be "anchored" to an element of the chord?
    • Children of anchored blocks should not be anchored
  • Why not give blocks some inheritable properties (sketched in code after this list)? This will make the melodies less homogeneous (much like Fraccut, which tends, for example, to group short notes together)
    • BLOCKINTERPCHANCE - when calculating the chance of an interpolation, use BASEINTERPCHANCE + BLOCKINTERPCHANCE
    • BLOCKVARIANCE - when calculating y deviations, use BASEVARIANCE + BLOCKVARIANCE
    • Grammar probability distribution - when calculating grammatical replacements, maybe certain blocks should be more likely to be replaced by certain words? This would have the effect of grouping words together more often. Is this a desirable property?
  • Make sure that blocks will not split if a split would create a block with a width below MINIMUMWIDTH
  • A style should include the following:
    • A grammatical dictionary
    • A grammar probability distribution
    • Intrinsic probabilities and quantitative measures that aren't part of the configuration interface
      • Like what? Need some more variables to work with.
  • To mutate a style over time, simply make small changes to the probability distribution or the other quantitative measures
    • Probably not a good idea to mutate words
    • Maybe we could add new words to the dictionary
      • Add a new word
      • Slowly increase the probability of the new word
      • Slowly decrease the probability of another word (that will be phased out)
    • In this way, the dictionary undergoes slow, smooth, and continuous changes
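
To make the inheritance idea concrete, here is a minimal sketch: the flag and constant names (ANCHORED, BLOCKINTERPCHANCE, BLOCKVARIANCE, BASEINTERPCHANCE, BASEVARIANCE, MINIMUMWIDTH) come from the notes above, while the Block layout and helper functions are my own assumptions.

// Sketch of inheritable block properties for the interpolation engine.
struct Block {
    double x, y, width;        // time offset, pitch offset, duration
    bool anchored;             // ANCHORED: may not be bumped vertically
    double blockInterpChance;  // BLOCKINTERPCHANCE, added to the base chance
    double blockVariance;      // BLOCKVARIANCE, added to the base variance
};

const double BASEINTERPCHANCE = 0.5;
const double BASEVARIANCE = 1.0;
const double MINIMUMWIDTH = 0.25;

// Children inherit the parent's probabilities, but never its anchor.
Block makeChild(const Block& parent, double x, double y, double width) {
    Block child = parent;
    child.x = x; child.y = y; child.width = width;
    child.anchored = false;  // children of anchored blocks are not anchored
    return child;
}

// Effective interpolation chance for a block.
double interpChance(const Block& b) {
    return BASEINTERPCHANCE + b.blockInterpChance;
}

// A split is allowed only if both halves stay above the minimum width.
bool canSplit(const Block& b) {
    return b.width / 2.0 >= MINIMUMWIDTH;
}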

Melodic Interpolation: Simple but Surprising

April 5, 2010

It's amazing how the simplest methods can sometimes yield the most impressive results.  I have been experiencing this phenomenon all day.

While messing around with new methods, I actually discovered something very simple and very obvious: interpolation of melodies based on the chord progression.  It's something that I haven't used before - probably because it seems too obvious!  Here's the procedure (a code sketch follows the list):

  • Start by placing "blocks" in a space; place them such that they lie directly on top of the roots of the chords in the progression
  • Link the blocks such that each block is aware of the spatial indices of its two neighbors (in back and in front)
  • Iterate through the blocks
    • Calculate the mean y/pitch offset of block i and its front neighbor
    • If the mean offset is different from both constituent offsets
      • Split block i and position its child at the previously-calculated mean
      • Update the neighbor links for block i, the child block, and the front neighbor
  • Convert blocks to notes
    • x -> time offset
    • y -> pitch offset
    • width -> duration
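
Here's the promised sketch of a single interpolation pass. The names are my own, and a time-ordered vector stands in for the explicit neighbor links, which keeps the bookkeeping out of the sketch:

#include <vector>

struct Block {
    double x, y, width;  // time offset, pitch offset, duration
};

// One pass of melodic interpolation: wherever the mean pitch of a block and
// its front neighbor differs from both, split the block and place the child
// at that mean. Repeating the pass smooths ("blurs") the melody further.
std::vector<Block> interpolate(const std::vector<Block>& blocks) {
    std::vector<Block> result;
    for (size_t i = 0; i < blocks.size(); ++i) {
        Block b = blocks[i];
        if (i + 1 < blocks.size()) {
            double mean = (b.y + blocks[i + 1].y) / 2.0;
            if (mean != b.y && mean != blocks[i + 1].y) {
                // Parent keeps the first half of its duration; the child
                // fills the second half at the interpolated pitch.
                Block child = b;
                b.width /= 2.0;
                child.x = b.x + b.width;
                child.width = b.width;
                child.y = mean;
                result.push_back(b);
                result.push_back(child);
                continue;
            }
        }
        result.push_back(b);
    }
    return result;
}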

One could think of this as "blurring" the melody - removing sharp jumps where they exist.  It's really just interpolating the melody from the chord progression.  Now, throw in a grammatical foundation and some variation functions and we've got a very nice method.

Surprisingly, this simplistic technique sounds good!  It sounds both deliberate and coherent, with a dash of creativity thrown in.  By continuing to improve the method with a grammar, I believe that it will be possible to make the method of melodic interpolation a truly viable way of generating good melodies.

Additive Rhythmic Grammar

April 3, 2010

As part of a larger scheme for building more coherent melodies, I'm looking into a better grammatical method for representing rhythms.  The additive rhythmic grammar technique may pave the way for better rhythmic coherence in future melodic plugins.

The general idea is to create very simple rhythmic words.  Then, define probabilities for each word - for example, the probability that the word will occur in the first, second, etc. measure.  These probabilities should, for maximum coherence, be either very large (~1) or very small (~0).  Finally, when retrieving the total rhythmic format for a given measure, simply "add" each rhythmic word that has an adequate chance of occurring in the measure.  The key here is that we can treat the words as simple mathematical entities that can be summed and subtracted, and that support basic binary operations.  In this way, it is possible to create a complex rhythmic pattern using only simple building blocks.
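
As a sketch of the word arithmetic - assuming a sixteen-step grid per measure with one bit per potential onset; both the representation and the names are mine:

#include <cstdint>
#include <cstdlib>

// A rhythmic word: bit i set means an onset on step i of a 16-step measure.
typedef uint16_t RhythmWord;

// Simple building blocks.
const RhythmWord DOWNBEATS = 0x1111;  // onsets on beats 1, 2, 3, 4
const RhythmWord OFFBEATS  = 0x4444;  // onsets on the off-beats
const RhythmWord PICKUP    = 0x8000;  // onset on the final sixteenth

// "Adding" words is a bitwise OR; "subtracting" masks onsets out.
RhythmWord addWords(RhythmWord a, RhythmWord b)      { return a | b; }
RhythmWord subtractWords(RhythmWord a, RhythmWord b) { return a & ~b; }

// Total rhythmic format for one measure: include each word whose probability
// for that measure survives a random roll (probabilities near 1 or 0 give
// the most coherence, per the idea above).
RhythmWord buildMeasure(const RhythmWord* words, const float* probs, int count) {
    RhythmWord result = 0;
    for (int i = 0; i < count; ++i) {
        float roll = float(std::rand()) / float(RAND_MAX);
        if (roll < probs[i])
            result = addWords(result, words[i]);
    }
    return result;
}

For instance, addWords(DOWNBEATS, PICKUP) yields a measure with onsets on every beat plus the final sixteenth.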

Yes, in reality, it's a very simple idea.  Nonetheless, I've never tried such "additive grammar" before.  Anything is worth trying.

Random Sources

Part of the success of the Fraccut engine was the direct result of a somewhat overlooked detail in how it handled random calls: Fraccut used deterministic randomness.  In fact, it was the only engine to make use of a source of randomness other than direct calls to a pseudorandom generator.  Instead, it first stored large arrays of random numbers (drawn from the random generator) and then accessed them based on an index that could be reset and incremented at will.  This resulted in subtle repetitions that made Fraccut seem far more coherent.  Since the method of using random sources other than pseudorandom generators clearly provides benefits, I am exploring more general and flexible implementations of randomness for future plugins.

I have designed an abstract class for use with C++ plugins that will allow functions and objects to request random numbers without necessarily knowing from what kind of source the numbers come.  This way, an object such as a phrase (in the case of contour grammar) or a space (in the case of Fraccut) can be "bound" to a specific random source - a derivation of the abstract random source class - that will handle the details of randomness.  It is the object's responsibility to keep up with an index.  The object must provide an index to the random source when requesting a random number.  This way, the random source may choose to be deterministic, assigning a single, definite output to each input state.  The random source may even be coherent, meaning that indices close in value produce related output while indices far apart in value are nearly independent.  On the other hand, it is still, of course, possible to simply neglect the index and draw from a pseudorandom generator in the event that determinism is not required for the task.
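
The actual class isn't shown here, but as a sketch of the design - an abstract interface, a caller-supplied index, and one deterministic and one nondeterministic derivation (the names are mine):

#include <cstdlib>
#include <vector>

// Abstract source of randomness: the caller supplies its own index,
// and the source decides whether or not to honor it.
class RandomSource {
public:
    virtual ~RandomSource() {}
    virtual double get(unsigned index) = 0;
};

// Deterministic source: a pre-drawn table of random numbers, indexed
// directly, so the same index always yields the same value (as in Fraccut).
class DeterministicSource : public RandomSource {
public:
    explicit DeterministicSource(size_t size) : table(size) {
        for (size_t i = 0; i < size; ++i)
            table[i] = double(std::rand()) / RAND_MAX;
    }
    double get(unsigned index) { return table[index % table.size()]; }
private:
    std::vector<double> table;
};

// Plain source: neglects the index and just draws from the generator.
class PseudorandomSource : public RandomSource {
public:
    double get(unsigned) { return double(std::rand()) / RAND_MAX; }
};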

The trick in this technique lies in clever usage of indices.  For example, in Fraccut, a parent block may be given an index.  Now, when the engine cuts the block, it may choose to assign the new block(s) indices that are offsets of the parent block's index.  This way, the new blocks will draw from the same random stream as their parent (if the source is deterministic), giving rise to relationships that can, as in the case of Fraccut, seem quite intricate.

In future plugins, I will explore better ways to generate random numbers in hopes of gaining more coherence.

Functional Phrase Language for Contour Grammar

February 21, 2010

Unfortunately, most of my work over the past week has been devoted to the internals of Contour Grammar.  The format of the data structures will completely govern the quality of the plugin's future, so I wish to spend a great deal of time and thought on the layout of the grammar engine.

A minor breakthrough (which, oddly enough, occurred in an airport) did provide a boost of inspiration. One of the current problems with Contour Grammar lies in the ambiguity in defining phrases. How should phrases be picked to maximize coherence? My newest solution to the problem entails a sub-grammar that I call functional phrase language, since it uses functions to indicate derivations of new words from parent words. In this way, rather than filling phrases with a random array of words, the program will fill the phrase with a random array of functional instructions that should ensure greater coherence.

Here's an example of what a line of such instructions might look like:

1,2,1.up.up,2.invert,1.maxdown

And here's how the Contour Grammar engine would read and interpret the instructions (a rough interpreter sketch follows the list):

  1. Draw a random word from the active dictionary, label it "1," and add it to the active phrase
  2. Draw a random word from the active dictionary, label it "2," and add it to the active phrase
  3. Take the first word, translate all pitches up 1 step, then translate all pitches up another step, and add the resulting derived word to the active phrase
  4. Take the second word, invert the pitches (horizontal reflection), and add the resulting derived word to the active phrase
  5. Take the first word, determine the maximum pitch, then translate each occurrence of the maximum pitch down 1 step, and add the resulting derived word to the active phrase
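
The engine's real word format isn't shown here, so in the sketch below a "word" is just a vector of pitch offsets, and the operation set is limited to the three that appear in the example; everything else is an assumption.

#include <algorithm>
#include <cstdlib>
#include <sstream>
#include <string>
#include <vector>

typedef std::vector<int> Word;  // a word as a sequence of pitch offsets

// Apply one functional operation to a word.
Word applyOp(Word w, const std::string& op) {
    if (op == "up") {                 // translate all pitches up one step
        for (size_t i = 0; i < w.size(); ++i) ++w[i];
    } else if (op == "invert") {      // reflect pitches about a horizontal axis
        for (size_t i = 0; i < w.size(); ++i) w[i] = -w[i];
    } else if (op == "maxdown" && !w.empty()) {
        // Translate each occurrence of the maximum pitch down one step.
        int mx = *std::max_element(w.begin(), w.end());
        for (size_t i = 0; i < w.size(); ++i)
            if (w[i] == mx) --w[i];
    }
    return w;
}

// Interpret an instruction line such as "1,2,1.up.up,2.invert,1.maxdown".
std::vector<Word> interpretLine(const std::string& line,
                                const std::vector<Word>& dictionary) {
    std::vector<Word> phrase;   // the active phrase being built
    std::vector<Word> labeled;  // words addressable by label, in draw order
    std::stringstream tokens(line);
    std::string token;
    while (std::getline(tokens, token, ',')) {
        std::stringstream parts(token);
        std::string part;
        std::getline(parts, part, '.');         // the leading label
        size_t label = std::strtoul(part.c_str(), 0, 10);
        Word w;
        if (label > labeled.size()) {
            // A label seen for the first time: draw a random dictionary word.
            w = dictionary[std::rand() % dictionary.size()];
            labeled.push_back(w);
        } else {
            w = labeled[label - 1];             // derive from an earlier word
        }
        while (std::getline(parts, part, '.'))  // apply chained operations
            w = applyOp(w, part);
        phrase.push_back(w);
    }
    return phrase;
}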

If everything goes according to plan, Contour Grammar should be off the ground and getting cold, hard results by the end of the week. As I mentioned before, I really hope to have some unique samples up for February before it comes to a close. Audible progress is real progress!

Rhythmic Shells

January 14, 2010

With the contour grammar engine in development, I am still searching for ways to simplify and optimize grammatical representations. At the moment, the most restrictive element of the engine is the separation between rhythm and pitch. In current engines, the two are completely separated into different streams driven by different grammar sets. Not only does this approach render it difficult to work with both streams at the same time, but it also negatively impacts the ability of the engine to produce coherence between pitch and rhythm.

A new idea I call "rhythmic shells," however, might promise an enormous simplification of the process, combining both streams into one while offering greatly improved coherence potential at the same time. The method focuses on stripping pitch words of all length dependency and transforming rhythm into a specification of pitch words. In essence, a rhythmic "shell" is selected, which contains a prescribed number of "slots," each with a distinct duration, and the slots are then filled with pitch words. The method requires that pitch words be functions rather than definitions - that is, they must be of arbitrary length and cannot be precomputed.

The shell stream can be specified with a simple data structure like this:

13:2,4,5
15:2,8,6
13:2,4,6

The number on the left side indicates the index of the rhythmic shell to be used, while the comma-delimited numbers indicate the indices of the pitch functions with which to fill each shell slot. Notice that the phrase produced by a shell can be varied by leaving one or more elements intact while changing the others, creating a different, but still somewhat similar, phrase. This should go a long way toward guaranteeing coherence.
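
Parsing the format is trivial; here's a sketch, with names of my own choosing:

#include <cstdlib>
#include <sstream>
#include <string>
#include <vector>

struct ShellEntry {
    int shellIndex;               // which rhythmic shell to use
    std::vector<int> pitchFuncs;  // pitch function index for each slot
};

// Parse a shell stream in the "shell:func,func,..." format shown above.
std::vector<ShellEntry> parseShellStream(const std::string& text) {
    std::vector<ShellEntry> stream;
    std::istringstream lines(text);
    std::string line;
    while (std::getline(lines, line)) {
        if (line.empty()) continue;
        ShellEntry entry;
        std::istringstream fields(line);
        std::string field;
        std::getline(fields, field, ':');         // left side: the shell index
        entry.shellIndex = std::atoi(field.c_str());
        while (std::getline(fields, field, ','))  // right side: slot fillers
            entry.pitchFuncs.push_back(std::atoi(field.c_str()));
        stream.push_back(entry);
    }
    return stream;
}

So the first line above, 13:2,4,5, yields shellIndex 13 with slot fillers {2, 4, 5}.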

It is worth noting that, given the functional nature of the pitch words in this description, the method of rhythmic shells is very much like a generalized method of curve-splicing, which was discussed several weeks ago!

In the next few days I'll put a great deal of work into the contour grammar engine to try to implement the method of rhythmic shells. I also hope to continue finding new innovations for the simplification, unification, and overall improvement of grammar techniques.