This is an authoring “gotcha” in X-Plane 10:

Do not use a degenerate UV map if you are using bump maps.  If you use both techniques, your object will have black splotches in HDR mode.

Let me define what I mean:

A degenerate UV map is a UV map where the “U” and “V” axes (that is, the horizontal and vertical axes of your texture) do not vary in independent directions across the face of a single triangle.

The most common case is when all three vertices of a triangle are mapped to a single point in the texture, giving you a “solid color” triangle (that is, the whole triangle picks up the color of that single point of the texture).

You can also have this problem if you use “1-dimensional” texturing – that is, all three texture points for a triangle lie along a line in the UV map, or two of the three points are colocated.

If you need to use bump maps for your OBJ and you also need a “single color” spot, separate the UV coordinates of each vertex of the triangle by a little bit (even just one pixel).
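If you want to audit a model for this problem before it bites you, the check is just the area of each triangle in UV space: if that area is (nearly) zero, the UV map is degenerate for that triangle. Here is a minimal sketch of such a check – the helper names are mine, not part of X-Plane or any OBJ tool:

```python
# Detect triangles whose UV mapping is degenerate (zero area in UV space).
# Hypothetical helpers for illustration -- not X-Plane's actual code.

def uv_area(uv0, uv1, uv2):
    """Twice the signed area of the triangle in UV space (cross product)."""
    (u0, v0), (u1, v1), (u2, v2) = uv0, uv1, uv2
    return (u1 - u0) * (v2 - v0) - (u2 - u0) * (v1 - v0)

def find_degenerate_uvs(triangles, epsilon=1e-8):
    """triangles: one ((u,v), (u,v), (u,v)) tuple per triangle.
    Returns the indices of triangles whose UVs would break tangent-space math."""
    return [i for i, tri in enumerate(triangles)
            if abs(uv_area(*tri)) < epsilon]

# A "solid color" triangle (all three UVs at one point) and a healthy one:
tris = [((0.5, 0.5), (0.5, 0.5), (0.5, 0.5)),
        ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))]
print(find_degenerate_uvs(tris))  # -> [0]
```

Note that this catches all three flavors described above – the single-point case, the collinear “1-dimensional” case, and colocated pairs – because each of them collapses the UV triangle to zero area.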

You’ll know you have this problem when you see some of your polygons show up as pure black when HDR lighting spills on them.  Here the top of the APU/air conditioner on the jetway has a degenerate UV map.  In HDR mode it shows as a black area despite the spill light.  This kind of artifact will show through any geometry that should normally cover the affected area.

Why This Happens

(What follows is strictly for the computer graphics nerds – you don’t need to understand this to fix your OBJs).

In order to apply a tangent-space bump map, you need three basis vectors to create a 3-d coordinate space for the normal vectors; the tangent-space bump map is then essentially a perturbation of the regular normal vector.  (Or put another way, the tangent-space coordinate system defined by U, V, and N is used to transform the tangent-space bump map back into world space.  For the degenerate case of no bump map, the per-texel normal can be thought of as (0,0,1), and the transform is a no-op, returning the normal we would have had anyway.)

Some engines require that the U and V axes be passed explicitly per vertex.  X-Plane, however, reconstructs the U and V axes from the UV map of the model.  When the UV map of the model is degenerate (i.e. it doesn’t produce two distinct, non-zero-length basis vectors across the triangle), the coordinate system of the tangent-space normal map ends up degenerate and the output normals suffer from a divide-by-zero.
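The reconstruction itself is the standard tangent-from-UVs derivation: solve the triangle’s two edge vectors against its two UV deltas, which involves dividing by the determinant of the UV deltas.  A sketch of that math (illustrative names, not X-Plane’s code) shows exactly where the degenerate case blows up:

```python
# Standard tangent reconstruction from triangle positions + UVs.
# The determinant below is (twice) the triangle's area in UV space;
# a degenerate UV map makes it zero, and the division fails.

def tangent(p0, p1, p2, uv0, uv1, uv2):
    e1 = [b - a for a, b in zip(p0, p1)]   # world-space edge p0 -> p1
    e2 = [b - a for a, b in zip(p0, p2)]   # world-space edge p0 -> p2
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
    det = du1 * dv2 - du2 * dv1
    # On the GPU this IEEE division yields inf/NaN when det == 0;
    # Python raises instead, but the underlying math is equally undefined.
    return [(dv2 * x - dv1 * y) / det for x, y in zip(e1, e2)]

good = tangent((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0), (1, 0), (0, 1))
print(good)  # -> [1.0, 0.0, 0.0]

try:
    tangent((0, 0, 0), (1, 0, 0), (0, 1, 0),
            (0.5, 0.5), (0.5, 0.5), (0.5, 0.5))  # all UVs at one point
except ZeroDivisionError:
    print("degenerate UVs: tangent is undefined (NaN on the GPU)")
```

Nudging the UVs apart, as suggested above, makes the determinant non-zero again, which is why a one-pixel separation is enough to fix the artifact.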

The divide-by-zero is burned into the G-Buffer and the artifacts are thus persistent.

About Ben Supnik

Ben is a software engineer who works on X-Plane; he spends most of his days drinking coffee and swearing at the computer -- sometimes at the same time.

5 comments on “Avoid Degenerate UV Maps When Using Normal Maps”

  1. Will it blend? That is the question.

    I wonder. Wouldn’t hardware optimize cases like alpha=1? Or does it also know there are NaNs in the framebuffer? Even though it wouldn’t make much sense most of the time, as you can see here. Well, at least it might not bleed through any and all geometry.

    1. The floating point rules say that NaNs have to be treated as standard IEEE NaNs, that is, NaN * anything = NaN. Soo…once you burn a NaN into the framebuffer, I think the GPU is required to do the annoying thing, e.g. preserve the NaN.

      In this case there’s often no accum; some hardware may optimize out alpha=1 in the blender, but you’ll still end up with some hosed bits. The _shaders_ have to follow IEEE so no math optimization there.

      1. I had a look at some specs. OpenGL specifically doesn’t care about NaNs. An implementation can do whatever it wants with them, as long as it doesn’t crash. It appears Direct3D does care, probably in the blender as well. Also, the uniformity across the GPU must be a good thing (even if 0 * NaN is 0 most of the time anyway, mathematically speaking). I just didn’t really expect this.

        1. I can’t say that all GPUs have this problem – just at least one I have tested. I definitely see different results when I write crappy shaders – the annoying case is when the hardware I happen to be using accepts my bad code and I find out later.

          1. Yeah, that’s another factor. I got the impression that ATI/AMD is more tolerant. And it usually ends up with the ‘right’ results, which was my point about the usefulness of things. Kinda puts a damper on “Beware of bugs in the above code; I have only proved it correct, not tried it.” (Donald Knuth)
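The NaN-propagation rules discussed in the thread above follow directly from IEEE-754 and are easy to check with ordinary doubles, e.g. in Python:

```python
import math

nan = float("nan")
print(nan * 0.0)                     # nan -- NaN times anything is NaN
print(nan == nan)                    # False -- NaN compares unequal to everything
print(math.isnan(0.5 * nan + 0.5))   # True -- a blend can't wash a NaN out
```

This is why, once a NaN is burned into the G-Buffer, later blending passes preserve it rather than averaging it away.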

Comments are closed.