Lightning Bolts

You’re flying your ship down a cavern, dodging and weaving through enemy fire.  It’s becoming rapidly apparent, however, that you’re outmatched.  So, desperate to survive, you flip The Switch.  Yes, that switch.  The one that you reserve for those…special occasions.  Your ship charges up and releases bolt after deadly bolt of lightning into your opponents, devastating the entire enemy fleet.

At least, that’s the plan.

But how do you, the game developer, RENDER such an effect?

Lightning Is Fractally Right

As it turns out, generating lightning between two endpoints is a deceptively simple thing.  It can be generated as an L-System (with some randomization per generation).  Some simple pseudo-code follows (note that this code, and really everything in this article, is geared towards generating 2D bolts; in general, that’s all you should need…in 3D, simply generate a bolt such that it’s offset relative to the camera’s view plane, or do the offsets in the full three dimensions; it’s your choice):

segmentList.Add(new Segment(startPoint, endPoint));
offsetAmount = maximumOffset; // the maximum amount to offset a lightning vertex.
for each generation (some number of generations)
  for each segment that was in segmentList when this generation started
    segmentList.Remove(segment); // This segment is no longer necessary.

    midPoint = Average(segment.startPoint, segment.endPoint);
    // Offset the midpoint by a random amount along the normal.
    midPoint += Perpendicular(Normalize(segment.endPoint - segment.startPoint))*RandomFloat(-offsetAmount,offsetAmount);

    // Create two new segments that span from the start point to the end point,
    // but with the new (randomly-offset) midpoint.
    segmentList.Add(new Segment(segment.startPoint, midPoint));
    segmentList.Add(new Segment(midPoint, segment.endPoint));
  end for
  offsetAmount /= 2; // Each subsequent generation offsets at most half as much as the generation before.
end for

Essentially, on each generation, subdivide each line segment into two, and offset the new point a little bit.  Each generation has half of the offset that the previous had.

So, for 5 generations, you would get:

[Images: the bolt after each of the five generations]

That’s not bad.  Already, it looks at least kinda like lightning.  It has about the right shape.  However, lightning frequently has branches: offshoots that go off in other directions.

To do this, occasionally when you split a bolt segment, instead of just adding two segments (one for each side of the split), you actually add three.  The third segment just continues in roughly the first segment’s direction (with some randomization thrown in):

direction = midPoint - segment.startPoint;
splitEnd = Rotate(direction, randomSmallAngle)*lengthScale + midPoint; // lengthScale is, for best results, < 1.  0.7 is a good value.
segmentList.Add(new Segment(midPoint, splitEnd));

Then, in subsequent generations, this, too, will get divided.  It’s also a good idea to make these splits dimmer.  Only the main lightning bolt should look fully-bright, as it’s the only one that actually connects to the target.

Using the same divisions as above (and using every other division), it looks like this:

[Images: the branched bolt after generations 2, 4, and 6]

Now that looks a little more like lightning!  Well…at least the shape of it.  But what about the rest?

Adding Some Glow

Initially, the system designed for Procyon used rounded beams.  Each segment of the lightning bolt was rendered using three quads, each with a glow texture applied (to make it look like a rounded-off line).  The rounded edges overlapped, creating joints.  This looked pretty good:

[Image: a bolt rendered with the rounded, overlapping glow segments]

…but as you can see, it tended to get quite bright.  It only got brighter, too, as the bolt got smaller (and the overlaps got closer together).  Trying to draw it dimmer presented additional problems: the overlaps became suddenly VERY noticeable, as little dots along the length of the bolt.  Obviously, this just wouldn’t do.  If you have the luxury of rendering the lightning to an offscreen buffer, you can render the bolts using max blending (D3DBLENDOP_MAX) to the offscreen buffer, then just blend that onto the main scene to avoid this problem.  If you don’t have this luxury, you can create a vertex strip out of the lightning bolt by creating two vertices for each generated lightning point, and moving each of them along the 2D vertex normals (normals are perpendicular to the average of the directions of the two line segments that meet at the current vertex).

That is, you get something like this:

[Image: the lightning bolt converted into a vertex strip]
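
If it helps to see that in code, here’s a minimal sketch of building the strip, assuming XNA’s Vector2, an ordered list of the generated bolt points (at least two of them), and a chosen half-width; this isn’t Procyon’s actual code:

// Builds two vertices per bolt point, pushed apart along the 2D vertex normal.
// Render the result as a triangle strip with the glow texture stretched across it.
static Vector2 Perpendicular(Vector2 v) { return new Vector2(-v.Y, v.X); }

static List<Vector2> BuildBoltStrip(IList<Vector2> points, float halfWidth)
{
  List<Vector2> strip = new List<Vector2>();
  for (int i = 0; i < points.Count; i++)
  {
    // Directions of the segments entering and leaving this vertex (clamped at the two ends).
    Vector2 dirIn  = points[Math.Max(i, 1)] - points[Math.Max(i - 1, 0)];
    Vector2 dirOut = points[Math.Min(i + 1, points.Count - 1)] - points[Math.Min(i, points.Count - 2)];

    // The 2D vertex normal: perpendicular to the average of the two segment directions.
    Vector2 normal = Perpendicular(Vector2.Normalize(Vector2.Normalize(dirIn) + Vector2.Normalize(dirOut)));

    // One vertex on each side of the bolt.
    strip.Add(points[i] + normal * halfWidth);
    strip.Add(points[i] - normal * halfWidth);
  }
  return strip;
}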

Animation

This is the fun part.  How do you animate such a beast?

As with many things in computer graphics, it requires a lot of tweaking.  What I found to be useful is as follows:

Each bolt is actually TWO bolts at a time.  In this case, every 1/3rd of a second, one of the bolts expires and is regenerated, but the two bolts’ cycles are 1/6th of a second apart (a sketch of this bookkeeping follows the list below).  That is, at 60 frames per second:

  • Frame 0: Bolt1 generated at full brightness
  • Frame 10: Bolt1 is now at half brightness, Bolt2 is generated at full brightness
  • Frame 20: A new Bolt1 is generated at full, Bolt2 is now at half brightness
  • Frame 30: A new Bolt2 is generated at full, Bolt1 is now at half brightness
  • Frame 40: A new Bolt1 is generated at full, Bolt2 is now at half brightness
  • Etc…
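
Here’s the promised sketch of that bookkeeping, in C#.  AnimatedBolt and GenerateBolt are hypothetical names (Segment and the generator come from the pseudo-code earlier), and MathHelper.Lerp is XNA’s; this is just one way to structure it:

class AnimatedBolt
{
  public const float Lifetime = 1f / 3f;   // each bolt lives for 1/3rd of a second

  public List<Segment> Segments;           // this bolt's current shape (see the pseudo-code above)
  public float Age;                        // seconds since this bolt was (re)generated

  public void Update(float dt, Vector2 start, Vector2 end)
  {
    Age += dt;
    if (Segments == null || Age >= Lifetime)
    {
      Segments = GenerateBolt(start, end); // regenerate a fresh, jagged bolt (routine not shown)
      Age %= Lifetime;
    }
  }

  // Full brightness when freshly generated, half brightness just before it expires.
  public float Brightness { get { return MathHelper.Lerp(1f, 0.5f, Age / Lifetime); } }
}

// Usage: keep two of these with their ages staggered by half a lifetime (1/6th of a second),
// so one bolt is always in the brighter half of its life:
//   boltA.Age = 0f;
//   boltB.Age = AnimatedBolt.Lifetime * 0.5f;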

Basically, they alternate.  Of course, just having static bolts fading out doesn’t work very well, so every frame it can be useful to jitter each point just a tiny bit (it looks fairly cool to jitter the split endpoints even more than that; it makes the whole thing look more dynamic).  This gives:

And, of course, you can move the endpoints around…say, if you happen to have your lightning targeting some moving enemies:

So that’s it!  Lightning isn’t terribly difficult to render, and it can look super-cool when it’s all complete.

Understanding Half-Pixel and Half-Texel Offsets

For those of you not using Direct3D 9 or XNA, you can safely ignore this post (OpenGL and Direct3D 10 are immune to this particular oddity).  However, if you are, it’s likely that you’ve had to deal with the dreaded half-texel offset.  Today, after I don’t know how many years of using Direct3D, I came to realize that I really didn’t understand what the source of the issue was.  Now that I’ve sort of gotten a handle on it, I figured I’d post it to my super new journal.  Consider it a test run.

Coordinate Spaces

The first thing to note is the basic coordinate space.  I’m going to be referring to texture space and clip space a lot, so I thought I’d just do a quick refresher here on what I mean (mostly to make sure you’re thinking with the same terminology that I’m using).

  • Clip space – the post-projection half-cube of space where X,Y in [-1..1] and Z in [0..1]
    • X = -1 is the far left edge of the screen
    • X =  1 is the far right edge
    • Y = -1 is the bottom edge
    • Y = 1 is the top
    • Z = 0 is near (the near plane)
    • Z = 1 is far (the far plane)
  • Texture space – the area where u,v in [0..1] on a texture map making up a single end-to-end repeat of the texture.
    • 0,0 represents the upper-left coordinate on the texture
    • 1,1 is the lower-right.

For simplification, we can ignore the Z direction in clip space, since this article is really about the screen-space aspects of it.

Texture Sampling

Texture sampling is pretty straightforward.

In this diagram (and future diagrams), I’m using a 4×4 texture.  The [0..1] range in both X and Y maps exactly to the range of the texture.  Note, here, that the center of the upper-left texel does not lie at 0,0; instead, 0,0 refers to the upper-left corner of that texel.  This is an important thing: if you are using bilinear filtering and you want to sample color data from a single texel, you need to sample from the center of the texel instead of the corner.  To do that, you’d sample from a half-texel in (that is, for this 4×4 texture, you’d want to sample from (0.125, 0.125), because a texel’s width in texture space is .25 (each texel is 1/4 of texture space), and half of that is 0.125).

A side effect of this is that, if you have bilinear sampling and wrapping on, sampling along the left edge gives you exactly the same data as sampling along the right edge at the same y coordinate.  That is, in both cases, you get a perfect blend of the texels on either end of the texture.  This is true for the borders between texels in general: sampling where the “dot” is on the image above is the only way to get a pure sample of a texel with bilinear filtering.  Anywhere else will be a blend between two.  Edges (the lines) are where the two texels neighboring the edge are weighted evenly.

The upshot of this is that the official center of a texel is located where the center SHOULD be: half of a texel from the edge.  However, as we’ll see in a moment, the rule for sampling texels does not apply to pixels.
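
To make that rule concrete, here’s a tiny helper (a hypothetical C# snippet, not from anything official) that computes the texture-space center of texel (i, j) in a w×h texture:

static Vector2 TexelCenter(int i, int j, int w, int h)
{
  // A texel covers 1/w by 1/h of texture space, so its center is half a texel in from its corner.
  return new Vector2((i + 0.5f) / w, (j + 0.5f) / h);
}

// For the 4x4 example above, TexelCenter(0, 0, 4, 4) returns (0.125, 0.125).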

Pixel Coordinates Are Weird And Unintuitive

When referring to “pixel coordinates” vs. “texel coordinates,” pixel coordinates are, for lack of a better way to explain it, the coordinates used when rendering to the screen (or a render target).  The pixels are the destination coordinates.

This is where clip space comes in: When you render your geometry, it goes through the gauntlet of matrices, ending with the projection matrices, and ends up in clip space (with x and y both being in [-1..1] for the visible area of the screen).

This diagram shows the way that clip space maps to pixels.  Note that the upper-left clip space coordinate (-1, +1) is actually RIGHT ON the pixel center.  This is the root cause of the whole offset problem.

Basically, say you have a screen-sized texture.  If you draw it to the screen using a full-screen quad (clip space from upper-left (-1,1) to lower-right (1, -1)), the upper-left pixel would sample the very upper-left corner of your quad, which would give you texture coordinates of (0,0).  But remember: that’s not the center of the texel!  It’s actually the upper-left corner of the texel.  With bilinear filtering, instead of getting a pure sample of your texture, you get a perfect blend of all 4 surrounding texels.

In simpler terms: the half-pixel/half-texel offset exists because there’s a discrepancy between how the centers of pixels and the centers of texels are computed.

To fix this, you could simply offset your texture coordinates by a half-texel before sampling (that is, when your shader gets its uv coordinate, you’d add a half-texel ((.125, .125) for our 4×4 example) before sampling).  That would give you a perfectly lined-up texture sample, and you’d get a great 1:1 mapping of pixels to texels.

However, there’s another problem.  What happens when you’re using multisample anti-aliasing (MSAA)?

Weird Pixel Centers and MSAA

So, we now know that the center of the origin pixel is actually the upper-left coordinate of clip space.  This causes problems when MSAA is turned on.  With MSAA, the geometry is sampled multiple times per pixel, using a set of points that exist within the pixel’s square (while a pixel is not really a square, but a point sample within a square, MSAA effectively treats a pixel as multiple samples in a square).

As an (important) aside: with MSAA on, textures are still only sampled once, from the pixel center.  MSAA does not affect the UV coordinates you get for a given pixel, it merely affects geometry coverage.

Continuing with our example, here’s a hypothetical, simple MSAA strategy wherein a pixel is sampled four times (4x MSAA) in an aligned grid:

This diagram illustrates the issue with the standard [-1..1] full screen quad, D3D’s choice of pixel center, and MSAA.  Even with a full-clip-space (and, thus, traditionally full-screen) quad, along the top and left edges, some samples no longer touch the quad, and thus the edges are not entirely filled in (a pure white quad drawn to a render target that had been cleared to black would have grey edges along the top and left).

You’ve likely seen this in a ton of games: when you turn on full-screen anti-aliasing, there’s a weird line down the side of the screen during fadeouts and the like.  This is the pixel offset problem rearing its head.

How do you fix this?  Turns out, it’s simple.  When drawing a full-screen quad, instead of adding a half-texel to the uv coordinates, you should instead SUBTRACT a half-pixel from the position (not the uv coordinates).  This shifts the quad so that it lies perfectly within the grid in the diagram, so that every MSAA sample hits the geometry.  As an added bonus, it means that the UV samples that you get will already be in the texel centers; no more adding a half-texel required!

Note that when I say “subtract a half-pixel from position,” what I mean is “move the position of the quad one half-pixel towards the upper-left.”  To do this, you actually add (-1/width, +1/height).  The reason for the signs (-, +) is that you’re moving left and up: in post-projection space, -x is left, and +y is up.  The reason that it’s 1/width instead of 0.5/width is that, in clip space, the coordinates range from -1 to 1 (width 2), and not from 0 to 1 (width 1) like they do in textures, so you need to double the movement to account for that.
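
As a concrete sketch of that shift (XNA-style C#; width and height are the render target’s dimensions in pixels, and the corner ordering is just illustrative):

static Vector2[] FullScreenQuadCorners(int width, int height)
{
  // Clip space spans 2 units per axis, so one pixel is 2/width by 2/height;
  // half a pixel is therefore 1/width and 1/height.  Move left (-x) and up (+y).
  Vector2 halfPixel = new Vector2(-1f / width, 1f / height);

  return new Vector2[]
  {
    new Vector2(-1f,  1f) + halfPixel, // upper-left
    new Vector2( 1f,  1f) + halfPixel, // upper-right
    new Vector2(-1f, -1f) + halfPixel, // lower-left
    new Vector2( 1f, -1f) + halfPixel, // lower-right
  };
}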

Of course, if you’re drawing world geometry and using its screen coordinates to index into a texture map (like, for instance, if you’re using light pre-pass rendering), you’ll still want to add the half-texel instead.  In fact, here is a great reference on how to handle this case.

Take Us Home, Article

The half-texel/half-pixel offset is a bizarre feature in Direct3D 9 (and, by extension, XNA).  In order to properly handle it when using full-screen-sized textures:

  • When rendering a full-screen (or otherwise screen-aligned) quad, subtract a half-pixel’s size from the output vertex position in your vertex shader.  This will ensure both that the texel centers line up with the pixel centers (for proper texture sampling) AND that the quad will play nice along the left and top edges of the screen with MSAA.
  • When rendering normal geometry, the geometry is already in the proper place (i.e. it already plays fine with MSAA).  Consequently, you should add a half-texel to the uv coordinates for your full-screen sample.   This will allow you to sample from the texel centers as desired.

While this article refers mostly to full-screen effects, this information is generally more useful when downsampling textures, as you need to know where your sample points on the source texture will actually hit when rendering (and you’ll want bilinear filtering for, for instance, a 4-tap 4×4 average downsample).

Hopefully, if you were as confused by the half-texel offset as I was, this helps clear things up.

Additional references from MSDN:

Directly Mapping Texels to Pixels (Direct3D 9)

Coordinate Systems (Direct3D 10)

Scrollathon 2008

This previous weekend, I was able to accomplish another major milestone in game development: The Scrolling Background (TM) (C) (R) (BBQ).


Click to enlarge

Requisite Scrolling Video (Xvid AVI, ~2MB)

The Skinny On Scrolling

The interesting thing about the scrolling method that I settled on is that it’s not based on any sort of overall world coordinate system. World coordinates don’t actually exist; the only true coordinate system is the screen coordinate system (with coordinates ranging from -16,-9 to +16,+9, for a delicious [and integer-tastic] 32:18 [2x 16:9] visible area).

So how does it work? Each level will be built out of tiles, in order. Each tile has the following data:

  • A model to render
  • Collision Data
  • The Camera Path

The camera path for a tile is currently just an input position and an output position. That is, the position at which the camera ENTERS the tile, and the position at which the camera EXITS the tile.

Now, here’s the trick: Say you have two of the same tile next to each other. Each has an input coordinate of (0,1) and an output coordinate of (4, 0). What the system does is move the second one so that its input coordinate sits in the same spot as the first one’s output coordinate (that is, the second one’s input coordinate effectively becomes (4,0), matching the first’s output coordinate, and, relative to that, the second’s output coordinate becomes (8, -1)).

However, actual world coordinates aren’t strictly necessary, so whichever tile the camera is currently in is considered the “origin” tile. That is, it is used as the basis by which all other visible tiles get their on-screen positioning.

Thus, the setup is easy: figure out where on-screen (given the camera’s position in the tile) the current tile should display, then position all of the visible tiles to the left and right relative to that.
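
In code, the chaining might look something like this sketch (Tile, EntryPoint, and ExitPoint are hypothetical names; the camera’s tile gets an offset of zero and everything else chains off of it, entry point to exit point):

static Vector2[] ComputeTileOffsets(IList<Tile> tiles, int cameraTileIndex)
{
  Vector2[] offsets = new Vector2[tiles.Count];
  offsets[cameraTileIndex] = Vector2.Zero; // the camera's tile is the "origin" tile

  // Walk right: each tile's entry point lines up with the previous tile's exit point.
  for (int i = cameraTileIndex + 1; i < tiles.Count; i++)
    offsets[i] = offsets[i - 1] + (tiles[i - 1].ExitPoint - tiles[i].EntryPoint);

  // Walk left: the mirror image of the above.
  for (int i = cameraTileIndex - 1; i >= 0; i--)
    offsets[i] = offsets[i + 1] - (tiles[i].ExitPoint - tiles[i + 1].EntryPoint);

  return offsets;
}

A point in tile i then lands on-screen at (point + offsets[i]) minus the camera’s position within its own tile, since the origin tile’s offset is zero.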

This is nice for a few reasons:

First off, if, for some reason, a level were RIDICULOUSLY long, I would never have to worry about accumulating floating point round-off error.

The big thing is that this allows me to have what is essentially a staple of the shoot-em-up game (and is actually quite visible in the video posted above): an endless loop of background.

These loops are especially useful for when fighting bosses. Say you’re zooming down a metallic corridor while scrapping with a boss that happens to be flying along with you. Rather than have to hope that the player finishes the fight before the camera hits the level’s end, you can just rely on the fact that the corridor will keep on looping until something triggers the loop’s end, signaling that the level should keep going (or end, assuming that there’s no more to the level).

This triggering system is not yet implemented, and I hope to get it done this weekend (though I have a ton of other, smaller items on the to-do list, so it may have to wait for the NEXT weekend).

Proximity Alert

One design element that was tricky was signaling to the player that the ship is too close to a wall. The obvious indicator is, of course, a shadow. However, standard shadows only cast in one direction, which would be great if all we cared about was distance to the floor. What we really need is “distance to any object.” This looks like a job for the existing lighting system!

A new type of “light” was designed: essentially a black light, which has a center, a length, and a radius (thus, the actual light is more like a line light than a point light). Consequently, the fakey shadow from the ship will “cast” onto any surrounding objects.


Click to enlarge

And, once again, that’s all we have time for on this week’s episode of “What Did Drilian Do Last Weekend”. Stay tuned next week, same Bat-Time, same Bat-Channel!

One Pass, Two Pass, Red Pass, Blue Pass

Here it is: another late-week journal update that pretty much chronicles my weekend accomplishments, only later.

But First, Beams!

First up, here’s a preview of the powered-up version of the final main weapon for the project:


Click to enlarge

The beam itself actually locks on to targets and continually damages them. Implementation-wise, it’s a quadratic bezier. Initially, I tried to calculate the intersection of a quadratic-bezier-swept sphere (i.e. a thick bezier curve) and a bounding sphere exactly. That’s all great, only it becomes a quartic equation (ax^4 + bx^3 + cx^2 + dx + e == 0), which is ridiculously difficult to compute programmatically (just check the javascript source on this page to see what I mean). So I opted for another solution:

I divided the curve up into a bunch of line segments, treated those segments as sphere-capped cylinders (capsules), and did much simpler intersection tests. PROBLEM SOLVED!
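
Here’s a hedged sketch of that approach in XNA-style C#; the names and the segment count are illustrative, not the project’s actual code:

// Quadratic Bezier: B(t) = (1-t)^2*p0 + 2(1-t)t*p1 + t^2*p2
static Vector3 QuadraticBezier(Vector3 p0, Vector3 p1, Vector3 p2, float t)
{
  float u = 1f - t;
  return u * u * p0 + 2f * u * t * p1 + t * t * p2;
}

static float DistancePointToSegment(Vector3 p, Vector3 a, Vector3 b)
{
  Vector3 ab = b - a;
  float lenSq = Vector3.Dot(ab, ab);
  if (lenSq < 1e-6f)
    return Vector3.Distance(p, a); // degenerate segment
  float t = MathHelper.Clamp(Vector3.Dot(p - a, ab) / lenSq, 0f, 1f);
  return Vector3.Distance(p, a + ab * t);
}

// True if the beam (radius beamRadius) hits a bounding sphere, approximating the curve
// with segmentCount sphere-capped cylinders (capsules).
static bool BeamHitsSphere(Vector3 p0, Vector3 p1, Vector3 p2, float beamRadius,
                           Vector3 sphereCenter, float sphereRadius, int segmentCount)
{
  Vector3 prev = QuadraticBezier(p0, p1, p2, 0f);
  for (int i = 1; i <= segmentCount; i++)
  {
    Vector3 next = QuadraticBezier(p0, p1, p2, i / (float)segmentCount);
    // Capsule-vs-sphere: compare the center's distance to the segment against the summed radii.
    if (DistancePointToSegment(sphereCenter, prev, next) <= beamRadius + sphereRadius)
      return true;
    prev = next;
  }
  return false;
}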

When Is Deferred Shading Not Deferred Shading?

I also implemented Light Pre-Pass Rendering, which is sort of a “Low-Calorie Deferred Shading” that Wolfgang Engel devised recently. Considering my original plan for lighting was to only allow right around 3 lights on-screen at a time, this gives me a much greater range of functionality. It’s a three-step process, as illustrated below.

Render Object Normals (Cut a hole in a box)

Step 1: render all of the objects’ normals and depth to a texture. Due to limitations (either related to XNA or to the fact that I want to allow a multisampled buffer; I’m not sure which), I can’t read from the depth buffer, so I have to render the object depth into the same texture that the normal map is rendered into.

Given two normal components, you can reconstruct the third (because the normal’s length is one). Generally, Z is the component that gets reconstructed, on the assumption that the Z component of the normal always points towards the screen. However, with bump mapping (and even with vertex normals), this is not a valid assumption, so just having the X and Y normal components is not enough. I decided to steal a bit from the blue channel to store the SIGN of the Z component. This leaves me with 15 bits of depth data, which, given the very limited (near-2D) range of important objects in the scene, is more than plenty for the lighting (as tested by the ugly-yet-useful “Learning to Love Your Z-Buffer” page).

Consequently, the HLSL code to pack and unpack the normals and depth looks like this:

float4 PackDepthNormal(float Z, float3 normal)
{
  float4 output;

  // High depth (currently in the 0..127 range)
  Z = saturate(Z);
  output.z = floor(Z*127);

  // Low depth 0..1
  output.w = frac(Z*127);

  // Normal (xy)
  output.xy = normal.xy*.5+.5;

  // Encode sign of normal.z in upper portion of high Z
  if(normal.z < 0)
    output.z += 128;

  // Convert to 0..1
  output.z /= 255;

  return output;
}

void UnpackDepthNormal(float4 input, out float Z, out float3 normal)
{
  // Read in the normal xy
  normal.xy = input.xy*2-1;

  // Compute the (unsigned) z normal
  normal.z = sqrt(saturate(1.0 - dot(normal.xy, normal.xy))); // unit-length normal: z = sqrt(1 - x^2 - y^2)
  float hiDepth = input.z*255;

  // Check the sign of the z normal component
  if(hiDepth >= 128)
  {
    normal.z = -normal.z;
    hiDepth -= 128;
  }

  Z = (hiDepth + input.w)/127.0;
}

And, it generates the following data:


Click to enlarge

That’s the normal/depth texture (alpha not visualized) for the scene. The normals are in world-space (converting from the stored non-linear Z to world position using the texcoords is basically a multiply by the inverse of the viewProjection matrix, a very simple operation).

Render Pure Lighting (Put your junk in that box)

Next step: using that texture (the object normals and positions), you can render the lights as a screen-space pass very inexpensively (the cost of a light no longer has anything to do with the number of objects it’s shining on; it’s now simply a function of the number of pixels it draws on). Bonus points: you can use a variation on instancing to render a bunch of lights of a similar type (i.e. a group of point lights) in a single pass, decreasing the cost-per-light even further.

The lighting data (pure diffuse lighting, in my case, though this operation can be modified in a number of ways to do more material lighting types and/or specular lighting if necessary, especially if you have a separate depth texture) is rendered into another texture, and it ends up looking as follows:


Click to enlarge

That’s a render with three small point lights (red, green and blue in different areas of the screen) as well as a white directional light.

Render Objects With Materials (Make her open the box)

Finally, you render the objects again, only this time you render them with materials (diffuse texture, etc). However, instead of doing any lighting calculations at this time, you load the lighting values from the texture rendered in step 2.

This ends up looking like this (pretty much what you expect from a final render):


Click to enlarge

And that’s how light pre-pass rendering works, in a nutshell. At least, it’s how it works in my case, which is very simplistic, but it’s all I need for the art style in my game. It’s a lot easier on the resources than deferred shading, while still separating the lighting from the objects.

Delicious!

Hopefully, in my next update, I’ll have an actual set of background objects (as that’s the next item on the list, but it does require the dreadful tooth-pulling that is “artwork,” so we’ll see how quickly I can pull this together).

Until next time: Never give up, never surrender!

Mork Calling Drilian, Come in, Drilian

I haven’t had nearly as much time to get stuff done at home as I’d like, as work has been a bit of a scramble recently. Working ridiculously hard at a code-related day job and then coming home and trying to code is…difficult. And recently, highly unsuccessful.

However, this last weekend I was able to get a few things done.

Silos Are Not Just For Grain and Missiles

First up, I decided to check out Nevercenter Silo, a 3D modeling program that I swear has to be the easiest-to-use modeling software I have ever SEEN. For some reason, this software just completely clicks with me.

Maybe it’s that it allows me to start with something as simple as a box and push/pull/extrude/warp/etc it slowly into the shape that I want, or maybe it’s that it has built-in support for symmetrical modeling (you basically model HALF of a model and the other half changes shape along with it). It’s hard to say. However, it feels more like sitting down with clay and slowly morphing it into the shape that I want vs. the usually-cumbersome task of modeling a 3D mesh.

Consequently, not terribly long after I got it, I modeled what will likely be the first ship for my new game!


Click to enlarge

I used subdivision surfaces to do the modeling, so the source geometry (pictured to the left above) is actually rather simple. But when subdivided, it forms (what I think is) a pretty cool-looking spacecraft.

S.S. Procedural Is Leaving Port

I also managed to get my procedural content (textures, mostly) generation ported over to run on XNA (from C++, so it was a bit of work). Since I’m aiming to release this game as part of the XNA dealy on the 360 (with an option for XBLA if I’m really, really lucky), I’ve been taking great care in ensuring that it is going to run well on the Xbox 360. So far, so good. I have background asset generation working, etc etc.

But more importantly, I have my ship model loading into the game.

AND it’s procedurally textured using my super-spiffo texture generation framework, so that’s awesome, too.

Check it: (Yes, that is a wireframe bezier patch in the background and yes, that means I’m also going to have procedural geometry generation)


Click to enlarge

So, to sum: not a lot of coding done (the overwhelming majority of that has been done at work), but I do finally have some pretty new screenshots!

Slow Progress Is Still Progress

I’ve gotten less done recently than I would have liked, due to a lack of time to sit down at my computer. However, I did implement tiling Perlin noise in-shader.

It uses the same basic technique that I used to set up the tiling noise (you can read about it in one of my earlier posts).

Basically, I can generate perlin noise that tiles at any given (integer) position.

Quite handy for generating tiling textures (because not everything I’m generating needs to be 3D).

Here are some examples. It’s hard to tell that they tile without actually tiling them yourself, but they do. So there.


Click to enlarge

So I used it to put together a seamless version of my earlier brick texture:


Click to enlarge

Finally, I did the same basic trick with the Worley (cellular) noise.

The first screenshot is a large-area repeating Worley; the second repeats at a very small interval, so it should be obvious how it tiles, even looking at the thumbnail:


Click to enlarge

Next up: Maybe I should actually do something useful with these things.

Jpeg Buoys Amidst a Sea of Text

So I put off working on this entry long enough that it’s now two entries worth of data in one.

Too Many Instructions: Cutting Down On the Noise

So, the implementation of Improved Perlin noise from GPU Gems 2 boils down to 48 pixel shader instruction slots (9 texture, 39 arithmetic). That’s one octave of noise. What I needed, desperately, was a faster implementation of noise, where the base quality doesn’t matter (especially useful for things such as fBm and the like).

In the FIRST GPU Gems, in the chapter on Improved Perlin Noise, Ken Perlin makes a quick note about how to make a cheap approximation of Perlin noise in the shader, using a volume texture.  The technique is straightforward, but it took me some effort to understand exactly what was supposed to go into the volume texture.

In my case, I ended up using a 32x32x32 volume texture to simulate an 8x8x8 looping sample of perlin noise space. Essentially, when sampling this texture, divide the world position by 8, and use that as the (wrapped) texcoord into the volume.

Crazy 8s: Modifying Perlin Noise To Loop At A Specified Location

The first trick is that it has to be LOOPING Perlin noise. But how do you generate such a thing?

Turns out, in the reference implementation of Improved Noise, there are a bunch of instances where there are +1s. For instance:

A = p[X  ]+Y;
AA = p[A]+Z;
AB = p[A+1]+Z;

B = p[X+1]+Y;
BA = p[B]+Z;
BB = p[B+1]+Z;

(Later, AA, AB, BA, and BB are also accessed with +1s).

Figuring out how to make the noise wrap at a specific value (in my case, 8), was a matter of rethinking those as follows:

A = p[X  ]; // note: no +Y here
AA = p[A+Y]  (+Z); // +Z in parens because it actually gets added later, like the Y does here
AB = p[A+(Y+1)] (+Z);

B = p[X+1]; // again, no +Y
BA = p[B+Y] (+Z);
BB = p[B+(Y+1)] (+Z);

So, really, the +1s are applied to the coordinate that was added earlier.
To make the noise wrap at a certain value, then, you take each of those (coordinate+1)s and change it into ((coordinate+1) % repeatLocation).

The final version of the texture shader that generates noise that loops at a specific location is as follows:

// permutation table
static int permutation[] = { 151,160,137,91,90,15,
131,13,201,95,96,53,194,233,7,225,140,36,103,30,69,142,8,99,37,240,21,10,23,
190, 6,148,247,120,234,75,0,26,197,62,94,252,219,203,117,35,11,32,57,177,33,
88,237,149,56,87,174,20,125,136,171,168, 68,175,74,165,71,134,139,48,27,166,
77,146,158,231,83,111,229,122,60,211,133,230,220,105,92,41,55,46,245,40,244,
102,143,54, 65,25,63,161, 1,216,80,73,209,76,132,187,208, 89,18,169,200,196,
135,130,116,188,159,86,164,100,109,198,173,186, 3,64,52,217,226,250,124,123,
5,202,38,147,118,126,255,82,85,212,207,206,59,227,47,16,58,17,182,189,28,42,
223,183,170,213,119,248,152, 2,44,154,163, 70,221,153,101,155,167, 43,172,9,
129,22,39,253, 19,98,108,110,79,113,224,232,178,185, 112,104,218,246,97,228,
251,34,242,193,238,210,144,12,191,179,162,241, 81,51,145,235,249,14,239,107,
49,192,214, 31,181,199,106,157,184, 84,204,176,115,121,50,45,127, 4,150,254,
138,236,205,93,222,114,67,29,24,72,243,141,128,195,78,66,215,61,156,180
};

// gradients for 3d noise
static float3 g[] = {
    1,1,0,
    -1,1,0,
    1,-1,0,
    -1,-1,0,
    1,0,1,
    -1,0,1,
    1,0,-1,
    -1,0,-1,
    0,1,1,
    0,-1,1,
    0,1,-1,
    0,-1,-1,
    1,1,0,
    0,-1,1,
    -1,1,0,
    0,-1,-1,
};

int perm(int i)
{
	return permutation[i % 256];
}

float3 texfade(float3 t)
{
	return t * t * t * (t * (t * 6 - 15) + 10); // new curve
//	return t * t * (3 - 2 * t); // old curve
}

float texgrad(int hash, float3 p)
{
  return dot(g[hash%16], p);
}

float texgradperm(int x, float3 p)
{
	return texgrad(perm(x), p);
}

float texShaderNoise(float3 p, int repeat, int base = 0)
{
	int3 I = fmod(floor(p), repeat);
	int3 J = (I+1) % repeat.xxx;
	I += base;
	J += base;

  p -= floor(p);

  float3 f = texfade(p);

	int A  = perm(I.x);
	int AA = perm(A+I.y);
	int AB = perm(A+J.y);

 	int B  =  perm(J.x);
	int BA = perm(B+I.y);
	int BB = perm(B+J.y);

  	return lerp( lerp( lerp( texgradperm(AA+I.z, p + float3( 0,  0,  0) ),
                                 texgradperm(BA+I.z, p + float3(-1,  0,  0) ), f.x),
                           lerp( texgradperm(AB+I.z, p + float3( 0, -1,  0) ),
                                 texgradperm(BB+I.z, p + float3(-1, -1,  0) ), f.x), f.y),
                     lerp( lerp( texgradperm(AA+J.z, p + float3( 0,  0, -1) ),
                                 texgradperm(BA+J.z, p + float3(-1,  0, -1) ), f.x),
                           lerp( texgradperm(AB+J.z, p + float3( 0, -1, -1) ),
                                 texgradperm(BB+J.z, p + float3(-1, -1, -1) ), f.x), f.y), f.z);

}

Whee!

Noise + Real Numbers + Imaginary Numbers == ???

So, the second trick: the texture actually needed to contain two values (R and G channels), to act as real and imaginary parts. Very simple: I added a base parameter (in the code above) so that I could offset into a different 8x8x8 cube of noise, and I drop a different 8x8x8 noise into the G channel.

Finally! We have a texture with 8x8x8 noise. But 8-cubed noise sucks, because it’s ridiculously repetitive. That’s where that weird imaginary part comes into play. You sample the 8-cube volume again, but at 9x scale (so it’s lower frequency). You then use the (real component of the) low-frequency sample as an angle (scaled by 2pi) and rotate the high-frequency sample’s real and imaginary parts by it, as the code below does.

float noiseFast(float3 p)
{
  p /= 8; // because the volume texture is 8x8x8 noise, divide the position by 8 to keep this noise in parity with the true Perlin noise generator.
  float2 hi = tex3D(noise3dSampler, p).rg*2-1; // High frequency noise
  half   lo = tex3D(noise3dSampler, p/9).r*2-1; // Low frequency noise

  half  angle = lo*2.0*PI;
  float result = hi.r * cos(angle) + hi.g * sin(angle); // Rotate the high-frequency sample's real and imaginary parts by the low-frequency angle (a 2D complex rotation).
  return result; // done!
}

And that’s it! Compare the instruction counts of the real Perlin noise to this fast fake:

Old (high-quality):  approximately 48 instruction slots used (9 texture, 39 arithmetic)
New (lower-quality): approximately 20 instruction slots used (2 texture, 18 arithmetic)

Essentially, wherever I don’t need the full quality noise, I can cut my noise-generation instruction count by more than half. Score!

Here’s a comparison: on the left, the weird confetticrete chair with the original noise, and on the right is the new faster noise:


Old (left) vs. New (right)
Click to enlarge

They look roughly the same; there are some artifacts on the new one (the diamond-shaped red blob on the upper-right of the new chair is due to the trilinear filtering), but it’s way faster.

Cellular Noise

Okay, I have some cool perlin noise stuff. But man cannot live on Perlin noise alone, so I decided to implement cellular noise, as well.

Turns out, there’s something called Worley noise which does exactly what I was hoping to do. Implementation was pretty simple.

void voronoi(float3 position, out float f1, out float3 pos1, out float f2, out float3 pos2, float jitter=.9, bool manhattanDistance = false )
{
  float3 thiscell = floor(position)+.5;
  f1 = f2 = 1000;
  float i, j, k;

  float3 c;
  for(i = -1; i <= 1; i += 1)
  {
    for(j = -1; j <= 1; j += 1)
    {
      for(k = -1; k <= 1; k += 1)
      {
        float3 testcell = thiscell  + float3(i,j,k);
        float3 randomUVW = testcell * float3(0.037, 0.119, .093);
        float3 cellnoise = perm(perm2d(randomUVW.xy)+randomUVW.z);
        float3 pos = testcell + jitter*(cellnoise-.5);
        float3 offset = pos - position;
        float dist;
        if(manhattanDistance)
          dist = abs(offset.x)+abs(offset.y) + abs(offset.z);
        else
          dist = dot(offset, offset);
        if(dist < f1)
        {
          f2 = f1;
          pos2 = pos1;
          f1 = dist;
          pos1 = pos;
        }
        else if(dist < f2)
        {
          f2 = dist;
          pos2 = pos;
        }
      }
    }
  }
  if(!manhattanDistance)
  {
    f1 = sqrt(f1);
    f2 = sqrt(f2);
  }
}

The gist is that each unit cube cell has a randomly-placed point in it. For each point being evaluated by the shader, you find the distance to the nearest of these points (a value called “F1”), the distance to the next-nearest (“F2”), etc. (to as many as you care about – though anything past F4 starts to look similar and uninteresting). Using linear combinations of these distances gives interesting results:


Left: F1 Right: F2
Click to enlarge


Left: F2-F1 Right: (F1+F2)/2
Click to enlarge

Something cool to do, also, is to use Manhattan distance instead of standard Euclidean distance. You end up with much more angular results. Here are the same 4 calculations, using Manhattan distance:



Click to enlarge

Considering that a few levels of my current project will take place in a metallic fortress, this will especially come in handy.

So, what can you do with these?

I, predictably, have made a few test textures:


Click to enlarge

Also, it still looks pretty cool if you use fBm on it. For instance:


4 octaves of F1 Worley noise

But I hear you asking “duz it wrok n 3deez, Drilian?!?!?!” Oh, I assure you it does!


Click to enlarge

And now I hear you asking “Can u stop typing nau? I is tir0d of reedin.” (or alternately, “I is tir0d uv looking @ imagez sparsely scattered thru the text taht I dun feel liek reedin.”) To this, I say: Sure, but it worries me that you’re asking your questions in some form of lolcat.

That’s all I got.

Short Skirt, Long Jacket

So yesterday I got the crack filling up and running.

Tonight, I improved the routine dramatically.

The Trouble With Texcoords

The problem was, the edge-expanding algorithm I used was detecting way more edges than it needed to. Here’s an image of a normal map generated using this (old, bad) method (I made it render ONLY the skirts, for illustration):


Click to enlarge

As you can see, way more edges through the UV charts were getting expanded than necessary. This was messing up the maps, because there were angles and edges where there didn’t need to be, and it was introducing artifacts, especially at lower mip levels.

The problem arose because each of those “extra” edges marked areas where the vertex positions were the same, but the texcoords were different. Since the original algorithm was using the vertex’s index as the identifying feature, each texcoord change meant that the indices for neighboring triangles were different, blah blah blah, you get the point.

UV: Vectors, Not Rays

Basically, the system was rewritten to glom together vertices with the same uv map coordinates, and treat them as one single vertex. All of those interior edges get discarded. Because a single “vertex” could actually be composed of multiple source vertices, the edge expanding code had to be modified to take that into account.

Here’s the old way again, followed by the NEW way (And then the new way completely filled in):


Click to enlarge

As you can see, they’re now proper outlines (not outandsometimesinlines), and the actual outer areas are much cleaner.

I Don’t Think That Clown Is Healthy

So, here’s a new render (and its diffuse map). I modified the concrete because I was sick of all of my pictures being grayscale, so here’s my artist’s rendition of “Gray Chair That A Clown Puked Onto”:


Click to enlarge

That’s all! I’m going to release the code that I’m using for all of this, but I want to clean it up just a bit, and add variable gutter width support (instead of the lame hardcoded way that I have it now).

But for now…away!

The Big Procedural Easy

I took it easy today, so I was barely near the computer, but I did make some awesome progress.

Last night, I was able to finally get a prototype of my texture caching setup going.

Diffusing the Procedural Situation Using Bad Puns

Right now, it’s a command-line tool that does the following:

  • Loads up a mesh and UV atlases it to get unique texture coordinates for the entire mesh (similar to what you’d do for lightmapping)
  • Loads a D3DX effect
  • Renders the mesh into a render target, using the UV atlas texcoords as positions and the actual model’s position/normal as shader inputs to generate the noise
  • Writes both the UV atlased mesh and the rendered texture to file

Simple enough. What I ended up with was as follows:


Click to enlarge

Not bad, but for two things:

  1. No normal mapping (per-vertex normals only)
  2. Cracks along the seams of the UV maps.

Both are solvable problems, and I opted to tackle the normal mapping first.

Returning to Normalcy

How does one generate a normal map with a procedural function?

In my case, I have the procedural function generate not only a color but also a height. Generating three heights in close proximity (using (pos), (pos+tangent*scaler), and (pos+bitangent*scaler)) gives me two edges that I can take the cross-product of to get a per-pixel normal. Adding this gave me some better shading (but didn’t fix the cracks):


Click to enlarge

The normal map generated is in object space (though it could easily be in world space, assuming a static object). This simplifies the lighting code (I simply transform the light position by the inverse world matrix before passing it to the shader) and eliminates the need for tangent and bitangent (yes, bitangent, not “binormal”) vectors.
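
For reference, that height-difference step might look something like this sketch (the tangent and bitangent are still used when generating the normal, just not when lighting; height(), scaler, and the other names are illustrative):

static Vector3 ProceduralNormal(Func<Vector3, float> height, Vector3 pos,
                                Vector3 baseNormal, Vector3 tangent, Vector3 bitangent,
                                float scaler)
{
  // Three nearby samples of the procedural height.
  float h0 = height(pos);
  float hT = height(pos + tangent * scaler);
  float hB = height(pos + bitangent * scaler);

  // Two edges of the displaced surface; their cross product is the object-space normal.
  Vector3 edgeT = tangent * scaler + baseNormal * (hT - h0);
  Vector3 edgeB = bitangent * scaler + baseNormal * (hB - h0);

  // Depending on the handedness of the tangent basis, this may need to be negated.
  return Vector3.Normalize(Vector3.Cross(edgeT, edgeB));
}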

Cracks are Unappealing on Plumbers AND Procedurally-Textured Models

Finally, it was time to solve the cracking problem. I decided to solve it by using skirts around the edges of the UV map sections. Essentially, they’re degenerate geometry in the actual mesh (the positions are the same), but the UV coordinates are expanded to fill in some of the gap.

Basically:

  • Use your favorite method to get a list of edges that are only used once
  • Use these edges to generate “UV normals” for each vertex (which has two edges, one leading in and one leading out); these are basically ( perpendicular[(edge+edge2)/2] ) (see the sketch after this list)
  • Duplicate each vertex and move its UV coordinate some distance along this UV normal
  • Create new strips of indices, using both the old and new vertices
  • Render these into the UV map first, before rendering the standard data
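
The UV-normal step might look something like this sketch; uvPrev, uv, and uvNext are a boundary vertex’s UV and its neighbors along the boundary loop, and gutter is how far (in texture space) to push the skirt out (all illustrative names):

static Vector2 SkirtUV(Vector2 uvPrev, Vector2 uv, Vector2 uvNext, float gutter)
{
  // Average the directions of the two boundary edges meeting at this vertex...
  Vector2 edgeIn  = Vector2.Normalize(uv - uvPrev);
  Vector2 edgeOut = Vector2.Normalize(uvNext - uv);
  Vector2 average = Vector2.Normalize(edgeIn + edgeOut);

  // ...then take the perpendicular as the "UV normal".  Depending on the winding of the
  // boundary loop, this may need to be negated so it points out of the chart.
  Vector2 uvNormal = new Vector2(-average.Y, average.X);

  // The skirt vertex keeps the original position but gets a pushed-out UV.
  return uv + uvNormal * gutter;
}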

This basically puffs out each procedurally-generated area, as you can (maybe) see here (Easier to see at full size):


Click to enlarge

Thus, when the UV coordinates along the edges of these areas either go out of bounds or blend with the no-man’s-land around the texture, it blends with data that’s very close to what it’s near, hiding the cracks.

The result:


Click to enlarge

And that’s “all” there is to it!

The UV atlasifying and skirt generation will be a pre-process, so all of the vertex (mesh) data will be ready for immediate rendering into the texture after load.

Woot!

Sometimes They Come Back

Return of the chairs!

I needed a quick test to make sure that the noise textures work well in 3D, since that’s their intended use, so I decided to run them on some chairs.

A few things to note:

  • These aren’t currently being written to texture first (which is the ultimate idea) so the chairs in the background have a certain amount of…sparkle.
  • Also, this means that this is SLOW. This scene brought my 8800GTX to its little silicon knees. The concrete shader is 1906 instructions (way over the sm3 guaranteed min spec of 512), 306 texture and 1600 arithmetic, so it’s a bit…intense.
  • The lighting looks a little weird. I’m not sure if it’s an artifact or if it’s right and I’m just imagining things, but there you go.


Click to enlarge

That’s all!

PS – broken finger: still sucks.