I’m Supposed To DRIVE on THAT?

Another week-plus break in journal entries. That’s starting to become a bad habit, but I don’t like updating when I don’t have anything to show.

Which brings us to today’s entry: I have something to show.

I created a new track editor. You may remember seeing screenshots of my previous track editor, way earlier in the journal. Those people who DO remember this are thinking to themselves, “That bonehead said that he would keep the existing editor because it works.” This is true. But I decided that it was slightly hard (and by “slightly hard” I mean “nigh impossible”) to do some of the things that I really wanted to do fairly frequently (like spiral curves and full loops, to name a few). Plus, as the previous system was written using bezier patches, it took many patches to do any one feature (a loop generally took about 16-20 patches). This was lame.
Since I am no longer using beziers as my basis for all terrain rendering, it’s no longer advantageous to use them exclusively in track creation. So I started over.


[Screenshot: the new track editor]

Each track segment gets the following attributes (which are relative to the track segment’s initial orientation):

  • Length – This represents the angle of the track relative to the in-game representation of the north star. That, or just the length of the track down the middle. I can never remember which is which.
  • Width – Determines the width at the end of this section of track (the initial width comes from the previous section).
  • Curve – The amount of curvature of the track. A negative curve is to the left, positive is to the right. A 360-degree curve is, as expected, a full rotation.
  • Horizontal Skew – This is how much the track strafes to the side (along a sinusoidal sort of path) across its length. It’s like a lane-change. Only the pavement is changing lanes, not the car.
  • Incline Curve – This is the circular incline of the track. A 360-degree value of this would be a full loop. Positive is up, negative is down.
  • Vertical Skew – Similar to the horizontal skew, only vertical. How’s THAT for descriptive?
  • Twist – This is how much the track rolls to one side or the other along its length, which modifies the orientation for the next segment.
  • Bank – This is similar to twist, except it raises one side instead of twisting around the middle, and the orientation remains unchanged in the next segment of track (this is to allow for banked curves, which I wouldn’t want to curve upwards because of the bank or something stupid like that).

Given those properties (and a semi-complicated way to combine them, which I won’t detail here), I can still generate a track in segments. But I can use fewer segments to do the job.
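For the curious, a segment boils down to roughly the following in code. This is just an illustrative sketch (the struct and field names are made up for this entry, not lifted from the actual editor):

    // Illustrative only: one track segment's attributes, all relative to the
    // segment's initial orientation.
    struct TrackSegment
    {
        float length;          // length of the track down the middle
        float width;           // width at the END of the segment (the start width comes from the previous one)
        float curve;           // degrees of curvature; negative = left, positive = right, 360 = a full circle
        float horizontalSkew;  // sideways "lane change" across the segment's length
        float inclineCurve;    // degrees of circular incline; positive = up, 360 = a full loop
        float verticalSkew;    // like the horizontal skew, only vertical
        float twist;           // roll along the length; carries into the next segment's orientation
        float bank;            // raises one side for banked curves; does NOT affect the next segment
    };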


[Screenshot: the adaptively subdivided track]

Once I got it generating, I decided to make it considerably more adaptive. It divides up the curves according to an error metric, which is why, in the last screenshot, some bits of track have fewer polygons (the straight bits only have one big one) and others have more.
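I won’t detail the exact error metric here, but the flavor of the subdivision is something like the sketch below: keep splitting a segment’s parameter range in half until the real curve stays close enough to the straight line between the endpoints. Everything in it (EvaluateTrackCenter, the placeholder quarter-circle, the tolerance) is made up for illustration, not pulled from the editor:

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    static float Distance(const Vec3& a, const Vec3& b)
    {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // Placeholder evaluation of the track centerline at parameter t in [0, 1];
    // here it is just a quarter circle of radius 50 in the XZ plane. The real
    // thing combines all of the segment attributes listed above.
    static Vec3 EvaluateTrackCenter(float t)
    {
        float angle = t * 1.5707963f;
        return { 50.0f * std::sin(angle), 0.0f, 50.0f * std::cos(angle) };
    }

    // Recursively split [t0, t1] until the midpoint of the curve lies within
    // maxError of the chord between the endpoints. Straight bits never split;
    // tight corners split a lot.
    static void Subdivide(float t0, float t1, float maxError, std::vector<float>& cuts)
    {
        float tm = 0.5f * (t0 + t1);
        Vec3 p0 = EvaluateTrackCenter(t0);
        Vec3 p1 = EvaluateTrackCenter(t1);
        Vec3 pm = EvaluateTrackCenter(tm);
        Vec3 chordMid = { 0.5f * (p0.x + p1.x), 0.5f * (p0.y + p1.y), 0.5f * (p0.z + p1.z) };

        if (Distance(pm, chordMid) <= maxError)
        {
            cuts.push_back(t1);   // flat enough; stop splitting this span
            return;
        }
        Subdivide(t0, tm, maxError, cuts);
        Subdivide(tm, t1, maxError, cuts);
    }

Seed the cut list with 0, call Subdivide(0.0f, 1.0f, tolerance, cuts), and you get back a sorted list of parameter values to build the cross-sections between.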

The last thing to do is add the ability to export the track as an actual mesh, so that I can load it up in a 3D modelling program and build the landscape around it.

But, for now, I can work on actually getting a car to drive on the track. Because now that I have a few levels, I want to find out what it’s like to drive on them!

WTS: Plane of +1 Infinity

So I finally got something interesting working. While there’s no pretty shading or other eye candy, what there IS is an infinite plane renderer built on a grid. The idea was to use this for water rendering, but there’s an easier, less complicated way to do it (a la Far Cry) that I’m going to use instead. I just thought I’d finish this up anyway because it’s moderately novel (at least to me; maybe it’s not).
A few pictures of it in action (Hooray for poorly-compressed JPGs and their many artifacts!):


[Screenshots: the infinite plane renderer in action]

What makes this interesting (to me at least, if nobody else) is that the geometry that gets rendered is (for the most part) only the visible portion of the plane. Also, the grid spacing is interpolated in post-projection space, so it’s a constant-ish LOD across the screen (which would have been a great help in rendering water waves, with the detail concentrated in the near waves rather than the far ones).
Here are some pictures of it with the gridlines:


[Screenshots: the plane with gridlines visible]

Advantages:

  • With the exception of the four 4-component vectors used as shader constants, nothing is transferred to the card on a per-frame basis. The vertex/index buffers are completely static.
  • Very little of the grid is off-screen, so the vertex transformations are spent almost entirely on visible geometry.
  • With the screen-space linear interpolation of the grid, detail is concentrated where it’s needed.

Disadvantages:

  • Complex. Finding the best four points for the on-screen representation of the grid wasn’t quite as easy as I had initially thought, especially with the 5-point and 6-point edge cases.
  • The vertex shader is ever-so-slightly more complex than a normal shader. Just a few instructions, but every little bit counts, right?
  • It is actually quite difficult to make it handle the variations in height necessary for a water wave renderer. In fact, I haven’t done that part yet (and since I’ve found an easier way of doing the same thing, I probably won’t). That is left as an exercise for the reader.

How does it work, you ask? Okay, nobody asked that, but I’m going to tell you anyway. Because that’s what I do.

First up is the on-CPU setup (a rough code sketch of the first few steps follows the list). Given a plane in the 4-vector form of [a, b, c, d] (i.e. ax + by + cz + d = 0):

  1. Project the plane into post-projection space. To transform a plane using the 4×4 transformation matrix T, you multiply the 4-vector plane representation by the matrix Transpose(Inverse(T)). I decided to do the plane clipping in post-projection space because the clipping against the frustum is easier there (as it’s a box instead of a sideways headless pyramid).
  2. Get the vertices of the intersection between the frustum and the plane.
    • This intersection is built from intersections of three planes at a time: the two frustum planes that meet at a given frustum edge, and the plane being rendered. However, since the frustum planes are axis-aligned in post-projection space, this simplifies to substituting the two known components (from the frustum planes) for that edge and solving for the third variable. For instance, for the edge where x = -1 and y = -1: z = -(a*x + b*y + d)/c.
    • Once you have the third component, check to make sure that it is within the valid half-cube range (Note: in Direct3D, visible post-perspective space is within the half-cube where x is in [-1, 1], y is in [-1, 1] and z is in [0, 1]). If it IS in range, add it to the list of edge points.
    • There can be at most six points generated by this set of checks (giving one polygonal edge per frustum plane. 6 edges = 6 points, see?)
  3. Ensure a clockwise winding order for the points. I used a gift-wrap sort of method: starting at the point nearest the screen, do a 2D check (ignoring Z) to find the “most clockwise” point at each step (i.e. the point such that, given the line from the current point to it, all other vertices are to the right of that line).
  4. This is the complicated part. We need to get the number of points down to exactly 4 (since the shader expects a quad):
    • Given 3 points, duplicate the one nearest the camera.
    • Given 5 points:
      • Find the diagonal edge that crosses from one side to an adjacent side (i.e. crosses from the top side to the right side, as opposed to left to right).
      • Look at the intersection between that diagonal line and the two sides that it currently doesn’t touch.
      • Choose the side whose intersection point is nearest the screen, and extend the corresponding end of the diagonal out to that intersection point.
      • Along the edge where that intersection landed, there are now three collinear points. Remove the one in the middle, which brings the total down to four points.
    • Given 6 points:
      • There are two diagonals that cross from one side to an adjacent one, so we’ll pick the one that represents the far plane intersection (the z coordinate of both of the points on this diagonal will be 1).
      • That diagonal gets extended to both of the sides that it doesn’t touch, similar to the 5-point case, except that it extends in both directions instead of just one.
      • Remove the two redundant vertices.
  5. Send the 4 points to the GPU (I pass them in via the world matrix slot, since they’re 4 float4 values).
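Here’s a rough C++ sketch of steps 2 and 3, since those are easier to show than to describe (the 4-point reduction in step 4 is fiddly enough that I’ll leave it in prose). The plane is assumed to already be in post-projection space via the Transpose(Inverse(T)) trick from step 1, and the type and function names are mine for illustration, not actual engine code:

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <utility>
    #include <vector>

    struct Vec3  { float x, y, z; };
    struct Plane { float a, b, c, d; };   // a*x + b*y + c*z + d = 0, already in post-projection space

    // Direct3D's visible half-cube: x in [-1, 1], y in [-1, 1], z in [0, 1].
    static const float kMin[3] = { -1.0f, -1.0f, 0.0f };
    static const float kMax[3] = {  1.0f,  1.0f, 1.0f };

    // Step 2: intersect the plane with every edge of the half-cube. Each edge
    // fixes two coordinates, so solve the plane equation for the remaining one
    // and keep the point if it stays inside the box. At most six points result.
    static std::vector<Vec3> IntersectPlaneWithFrustumBox(const Plane& p)
    {
        const float coeff[3] = { p.a, p.b, p.c };
        std::vector<Vec3> points;

        for (int freeAxis = 0; freeAxis < 3; ++freeAxis)
        {
            if (std::fabs(coeff[freeAxis]) < 1e-6f)
                continue;   // plane is parallel to the edges running along this axis

            int f0 = (freeAxis + 1) % 3;   // the two axes fixed by this set of edges
            int f1 = (freeAxis + 2) % 3;
            const float ends0[2] = { kMin[f0], kMax[f0] };
            const float ends1[2] = { kMin[f1], kMax[f1] };

            for (int i = 0; i < 2; ++i)
                for (int j = 0; j < 2; ++j)
                {
                    float v[3];
                    v[f0] = ends0[i];
                    v[f1] = ends1[j];
                    v[freeAxis] = -(coeff[f0] * v[f0] + coeff[f1] * v[f1] + p.d) / coeff[freeAxis];

                    if (v[freeAxis] >= kMin[freeAxis] && v[freeAxis] <= kMax[freeAxis])
                        points.push_back({ v[0], v[1], v[2] });
                }
        }
        return points;
    }

    // Step 3: gift-wrap the (convex) point set into clockwise order using only
    // x and y, starting from the point nearest the screen (smallest z).
    static std::vector<Vec3> SortClockwise(std::vector<Vec3> pts)
    {
        if (pts.size() < 3)
            return pts;

        std::swap(pts[0], *std::min_element(pts.begin(), pts.end(),
            [](const Vec3& a, const Vec3& b) { return a.z < b.z; }));

        for (std::size_t i = 1; i + 1 < pts.size(); ++i)
        {
            std::size_t best = i;
            for (std::size_t j = i + 1; j < pts.size(); ++j)
            {
                // A positive 2D cross product means pts[j] lies to the left of
                // the line from pts[i - 1] through pts[best], so pts[best] is
                // not the most clockwise choice; take pts[j] instead.
                float cross = (pts[best].x - pts[i - 1].x) * (pts[j].y - pts[i - 1].y)
                            - (pts[best].y - pts[i - 1].y) * (pts[j].x - pts[i - 1].x);
                if (cross > 0.0f)
                    best = j;
            }
            std::swap(pts[i], pts[best]);
        }
        return pts;
    }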

Okay, that was the worst of it. Now for the GPU-side bits (a plain-C++ mock of the whole thing follows the list):

  1. The mesh input is a grid of u,v coordinates ranging from 0 to 1. Linearly interpolate between the four post-projection planar intersection points passed in from the CPU, using the u,v values: worldSpace = lerp(lerp(inMatrix[0], inMatrix[1], in.u), lerp(inMatrix[3], inMatrix[2], in.u), in.v). The 3 and 2 are in that order because the vertices are given in clockwise order.
  2. Using the inverse viewproj matrix, project these points back into worldspace (remembering the homogeneous divide by w).
  3. The worldspace x and z can be used as texture coordinates (scaled, if you want. I use them verbatim right now).
  4. Reproject the worldspace coordinates back into projected space. This is necessary because the linearly interpolated points are not perspective-correct (causing the texture mapping to totally flip out. It was like a bad flashback to the original Playstation. I do not wish that upon others).
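Since the shader is short, here’s the whole GPU side written out as plain C++ rather than actual shader code, just to show the math. The helper types and names are made up for this post; the real version lives in the vertex shader, with the four corners sitting in the world matrix slot as described above:

    struct Vec4 { float x, y, z, w; };
    struct Mat4 { float m[4][4]; };

    // Row-vector * matrix, matching the usual Direct3D convention.
    static Vec4 Mul(const Vec4& v, const Mat4& M)
    {
        return { v.x * M.m[0][0] + v.y * M.m[1][0] + v.z * M.m[2][0] + v.w * M.m[3][0],
                 v.x * M.m[0][1] + v.y * M.m[1][1] + v.z * M.m[2][1] + v.w * M.m[3][1],
                 v.x * M.m[0][2] + v.y * M.m[1][2] + v.z * M.m[2][2] + v.w * M.m[3][2],
                 v.x * M.m[0][3] + v.y * M.m[1][3] + v.z * M.m[2][3] + v.w * M.m[3][3] };
    }

    static Vec4 Lerp(const Vec4& a, const Vec4& b, float t)
    {
        return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t,
                 a.z + (b.z - a.z) * t, a.w + (b.w - a.w) * t };
    }

    struct GridVertexOut
    {
        Vec4  clipPos;   // perspective-correct position handed to the rasterizer
        float texU, texV;
    };

    // corners[0..3]: the four post-projection quad corners from the CPU, in
    // clockwise order, with w = 1. (u, v) is the static grid vertex in [0, 1].
    static GridVertexOut TransformGridVertex(float u, float v, const Vec4 corners[4],
                                             const Mat4& invViewProj, const Mat4& viewProj)
    {
        // 1. Bilinear lerp of the corners in post-projection space. Indices 3
        //    and 2 are in that order because the quad is wound clockwise.
        Vec4 clip = Lerp(Lerp(corners[0], corners[1], u),
                         Lerp(corners[3], corners[2], u), v);

        // 2. Back into worldspace via the inverse viewproj, plus the divide by w.
        Vec4 world = Mul(clip, invViewProj);
        world = { world.x / world.w, world.y / world.w, world.z / world.w, 1.0f };

        GridVertexOut out;
        // 3. Worldspace x and z double as texture coordinates (verbatim, unscaled).
        out.texU = world.x;
        out.texV = world.z;

        // 4. Reproject into projected space so the position is perspective-correct.
        out.clipPos = Mul(world, viewProj);
        return out;
    }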

Math-heavy? Yep.
Poorly explained in this post? Probably.
Able to be cleared up by questions in the comments section? You betcha.

Hope this has been informative (though it’s more of a dry read than I would have liked).