Tag: hardware

A New Broken Record

For years now I’ve been harping about ways to keep the number of batches down in your scenery. A batch is a single submission of triangles to the graphics card for drawing. Batches get rendered fast even if they contain a ton of triangles, but changing modes between batches is not very fast, so a few large batches are hugely better for performance than a large number of tiny batches.

To play that broken record one more time, there are two ways that you (a scenery designer) can cut down the number of batches:

  • Use a small number of larger textures instead of a large number of small textures, preferably sharing textures between similar scenery elements that are placed nearby. X-Plane will do its best to merge the content that uses those textures into single batches. We call this the “crayon rule.”
  • Use fewer attributes in your objects. Attributes usually require a new batch (after the graphics card mode has been changed due to the attribute). So if you’ve got 1000 attributes in your object, you’ve got a problem. (Both tips are sketched in the code after this list.)
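
To make both tips concrete, here is a minimal OpenGL sketch of one big batch versus many tiny ones. This is illustrative only – the texture and array names are made up, not X-Plane’s actual code, and a valid GL context is assumed:

    #include <GL/gl.h>

    // Illustrative names only.
    extern GLuint atlas_tex, element_tex[];
    extern GLint  first_vertex[], vertex_count[];
    extern int    num_elements, total_vertex_count;

    void draw_fast_and_slow()
    {
        // One shared texture atlas lets many scenery elements draw as one batch.
        glBindTexture(GL_TEXTURE_2D, atlas_tex);            // one state change...
        glDrawArrays(GL_TRIANGLES, 0, total_vertex_count);  // ...one big, fast batch

        // The slow path: a state change (and thus a new batch) per element.
        for (int i = 0; i < num_elements; ++i) {
            glBindTexture(GL_TEXTURE_2D, element_tex[i]);   // mode change = expensive
            glDrawArrays(GL_TRIANGLES, first_vertex[i], vertex_count[i]);
        }
    }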

Well, with X-Plane 9 I have a new broken record: avoid overdraw!

Overdraw is the process of drawing pixels on top of other pixels on screen. It happens any time we use blending to do translucency, and any time we use polygon offset to build the image in layers.

Overdraw is bad because, with X-Plane 9’s pixel shaders, most users are limited by the graphics card’s pixel fill rate (how fast it can fill in pixels), since those complex shaders run for every pixel. If you are at a screen res of 1200×1024 looking at the ground with no objects, that might be 1.2 million pixels to fill. But if there is an overlay polygon covering the ground, we have 2.4 million pixels to fill! That’s a huge framerate hit.
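
The arithmetic generalizes: every full-screen layer adds another screen’s worth of pixel-shader work. A trivial sketch of the math, using the numbers above:

    // Back-of-envelope fill cost: each full-screen layer of overdraw adds
    // another screen's worth of shaded pixels.
    double shaded_pixels(double width, double height, int layers)
    {
        return width * height * layers;
    }
    // shaded_pixels(1200, 1024, 1) is ~1.2 million;
    // shaded_pixels(1200, 1024, 2) is ~2.4 million.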

Right now there’s not much you can do about overdraw. Once MeshTool comes out I will post some guidelines on how you can limit overdraw.

We took a step in the v9 global scenery to limit overdraw: in X-Plane 8 the global scenery tried to hide the repetition of flat textures by drawing them over each other with offsets. In X-Plane 9 this is done in a pixel shader (the texture is analyzed and swizzled in the shader and then drawn once), cutting down the number of times we must draw.
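
To illustrate the idea (this is NOT the actual v9 shader, just a sketch of the technique, with made-up constants), a fragment shader can pick a pseudo-random offset per texture repeat, so the variation comes from one drawing pass instead of several blended layers. The GLSL is embedded as a C string, the usual way OpenGL apps ship shaders:

    // Sketch: hide texture repetition inside the fragment shader so the
    // terrain is drawn once instead of in multiple offset layers.
    const char* frag_src =
        "uniform sampler2D terrain;                                  \n"
        "varying vec2 uv;                                            \n"
        "void main() {                                               \n"
        "    vec2  tile = floor(uv);   // which repeat of the texture\n"
        "    float h = fract(sin(dot(tile, vec2(12.9898, 78.233)))   \n"
        "                    * 43758.5453);  // classic per-tile hash \n"
        "    vec2  jitter = vec2(h, fract(h * 7.0)) - 0.5;           \n"
        "    gl_FragColor = texture2D(terrain, uv + 0.05 * jitter);  \n"
        "}                                                           \n";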

If you toggle pixel shaders in a flat area like Kansas and compare screenshots, you can see the difference: the farm textures are more repetitive without shaders. This change gives everyone faster fps (with or without shaders) by eliminating overdraw.

Posted in Development, Scenery by | Comments Off on A New Broken Record

Beware the Forceware

Brett and a few other users pointed out to me that the nVidia ForceWare 169.21 release appears to hose video for X-Plane. If you have an nVidia card, don’t update, or roll back a version if you already have.

This kind of thing has happened in the past – hopefully a revised driver will come out fairly soon. (But I do not have contact with the nVidia driver team for Windows on this…)

Also some users are seeing corrupt startup screens with ATI hardware – apparently some configs don’t react well to us turning vsync off. Not quite sure what we’ll do about that yet, but hopefully a fix will be in the next few betas.

Posted in Development by | 3 Comments

Script Files and Options

Sometimes we find that users’ machines cannot run without hiding OpenGL driver features from X-Plane. That is, the computer says it can support VBOs, but when the sim asks for a VBO, something really bad happens. X-Plane (since mid-version 8) accepts a series of command-line options that cause the sim to ignore a given feature.

These kinds of bugs come and go as drivers are updated, the sim changes which technology it uses, and hardware cycles through the user base. The biggest one we’re seeing right now is that the new iMacs show runway lights as white squares unless the sim is run with the --no_sprites option.
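
For illustration only (these names are made up, not X-Plane’s actual internals), a feature-hiding option of this kind boils down to a flag check at startup:

    #include <cstring>

    static bool use_sprites = true;   // point sprites for runway lights

    // Hypothetical sketch of gating a driver feature behind a command-line
    // option, so the sim never touches the broken code path.
    void parse_options(int argc, char* argv[])
    {
        for (int i = 1; i < argc; ++i)
            if (std::strcmp(argv[i], "--no_sprites") == 0)
                use_sprites = false;  // work around the driver bug
    }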

We’re trying a new way to address these problems. In the past we would give users the command-line option; now we are building double-clickable script files that launch the sim with the appropriate options. Theoretically this…

  • Is less error prone for users.
  • Is quicker for users who may have to use the command-line option every time they launch the sim until a driver update becomes available.
  • Is quicker for us (in that we spend less time mailing out instructions and helping users who are unfamiliar with a command-line environment).

We did consider some other options, but this seemed like the least evil. The runners-up:

  • Just turning off the hardware feature permanently. We ruled this out because the performance hit would be significant and affect all users.
  • Attempt to identify and auto-turn-off known bad options. We do this in some cases, but it requires changes to the sim code, so it does no good if a new video card comes out after a given X-Plane release and introduces new problems; it’s not a total solution. Also, since a bug might be resolved in the field, if X-Plane auto-avoids certain configurations we have to patch the sim once the configuration starts to work again.
  • Provide a user interface to turn these options off inside the sim. This was ruled out for two reasons: first, some of these options cause the sim to crash before a user can ever get to the rendering settings. Second, turning off these kinds of options can really kill performance, so leaving the options in sight produces a whole new tech support problem. (A user tells us their framerate is awful at the lowest settings…perhaps they turned off the hardware acceleration options in an attempt to get the lowest settings…etc.)

We’ll see how well this approach works. So far it seems to be working better than handing out command-line options.

(Of course if we can address the problem by working around it with a change to our code, we almost always choose this option.)

Posted in Development by | Comments Off on Script Files and Options

NVidia: 2 Ben: 0

I found the root cause of another NVidia specific bug, and once again it’s my own stupid code. If you Google for driver bugs, you’ll find plenty of grumpy developers ranting about how card X does this wrong thing and card Y does that wrong thing…I figure it’s only fair to follow up and say “yep, that one was mine.”

Like the previous nVidia-only crash, this was a case where X-Plane was always doing something wrong, but only some drivers had problems with the behavior. So the crash was NVidia-specific, but X-Plane-caused.

I believe that this bug was manifesting itself either as a message that “scenery shift took more than 30 seconds” or as some kind of crash. One of the problems was that the diagnostics for this particular bit of code were really bad. So we’ve improved things a bunch…

  • There is more careful error checking during scenery shift, and those error messages are reported.
  • If the sim does crash, some new code will output a crash log on Windows that helps us isolate what actually happened.

Beta 12 will be out soon with the fix for the bug that caused problems on NV hardware, as well as the improved diagnostics. So you may find that the sim just works better, but if it does still crash or report errors, please tell us – now we’ll have log files that will let us diagnose the problem a lot faster!

Posted in Development by | Comments Off on NVidia: 2 Ben: 0

V-Sync – Problematic in Practice

To those who have sent performance info: thank you, but you probably won’t hear from me for a week. I’m up to my eyeballs in reports and it’s going to take a while to get through them.

I finally found the code that allows X-Plane to turn off V-Sync. This should help nVidia users who are having framerate problems.

The basic idea is this:

  • X-Plane tells the graphics card to draw a lot of stuff.
  • The video card accumulates this “todo” list and works on it while X-Plane runs.
  • X-Plane indicates that the entire frame is complete and tells the card to show the results.
  • Some time later, the card finishes the todo list and shows the result to the user.

V-Sync relates to the question of when this last step happens. When V-Sync (vertical sync) is off, the card shows the results as soon as it is done drawing.

But when V-Sync is on, when the card finishes drawing the world, it then waits until the monitor is done drawing its frame, and then shows the results. Without V-Sync we can have a situation where the top half of the monitor is showing a new frame and the bottom half is showing an old frame. (This is called “tearing”.)

So normally V-Sync is good because it prevents tearing. But the problem with V-Sync is that a frame can only be shown when the monitor refreshes. The video card has to wait until this happens, and this slows our framerate down.

In particular, most users have their monitors set to 60 Hz. If X-Plane can only produce frames at 50 Hz, the video card will have to further slow the framerate down to 30 Hz (one X-Plane frame for every two monitor refreshes). If X-Plane falls below 30 Hz, we end up with 20 Hz (one X-Plane frame for every three monitor refreshes), and if X-Plane goes below 20 Hz, we clamp at 15.
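
In other words, with V-Sync on, the displayed rate snaps down to the next whole-number divisor of the refresh rate. A small sketch of the math:

    #include <cmath>

    // Effective displayed frame rate under V-Sync: the card must wait for
    // the next monitor refresh, so the rate snaps to an integer divisor
    // of the refresh rate.
    double vsync_fps(double sim_fps, double refresh_hz)
    {
        return refresh_hz / std::ceil(refresh_hz / sim_fps);
    }
    // vsync_fps(50, 60) == 30, vsync_fps(25, 60) == 20, vsync_fps(19, 60) == 15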

So when V-Sync is on, there can be large framerate hits for small losses of performance in the sim, and a real risk of getting locked around 20 fps.

(The minimum framerate in X-Plane is intentionally set to 19 so that we won’t fog up if the video card clamps us at 20 fps.)

So when beta 11 comes out, you may get some framerate back if you haven’t already hacked your graphics card’s control panel settings. If you still want v-sync, you can always set it this way in the control panel. But most users I’ve talked to are happy to have it off.

On an only vaguely related note, one of the reasons to have a high frame rate is to have a smooth flight model. But Austin has now put a new setting in the operations-and-warnings dialog box: you can pick how many times the physics run per graphics frame. The normal ratio is 1:1, but fighter and aerobatic pilots might find that a higher ratio gives a nice feel even at lower fps (20-30).
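
A hedged sketch of the idea (the function names are illustrative, not X-Plane’s real code):

    // Hypothetical hooks; not X-Plane's real functions.
    void run_flight_model(double dt);
    void render_frame();

    // Running the flight model N times per rendered frame keeps the physics
    // time step small even when the graphics frame rate is low.
    void sim_frame(double frame_dt, int physics_per_frame)
    {
        const double dt = frame_dt / physics_per_frame;
        for (int i = 0; i < physics_per_frame; ++i)
            run_flight_model(dt);   // smaller dt = smoother, more stable feel
        render_frame();
    }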

Posted in Development by | 3 Comments

Two Truths of Hardware

I would have to say my track-record at predicting hardware developments in the sim world has been pretty poor. But these two factors seem like they won’t change for a bit:

  1. The amount of working data a system can crank through in one frame keeps increasing while the total amount of virtual address space stays the same (3 GB).
  2. The gap between the fastest and slowest systems in use at any given time keeps widening.

To elaborate on the first point: as video cards get faster (both GPU and internal bus/RAM speeds) and system RAM and buses get faster (graphics-slot changes are rare, but with enough VRAM this is relatively moot), the amount of texture and geometry data that X-Plane can tell the card to draw in a 30th of a second keeps going up. So users are running with more trees, roads, 3-d, etc. than in the past.

But all of this stuff lives in memory, and even if a user has 8 GB, X-Plane can’t load more than 3. Imagine what will happen when a graphics card can draw 3 GB of data in one frame: X-Plane will have to use all of its available memory for things you see right now, or it will waste graphics power. That would mean purging from memory anything that isn’t on screen!

On the second point, video card power doubles every, um, six to twelve months (perhaps more like 18, now that the card makers are hitting the same fabrication and power limits that the CPU makers have already hit, but this is all seat-of-my-pants). So even if we supported only the last three generations of cards (we support at least seven!), the gap in performance between the newest and oldest keeps doubling!

This means that every year it requires a more flexible rendering engine to make a sim with decent frame-rates for old computers and up-to-date graphics on new ones.

Posted in Development by | 9 Comments

X-Plane 9: The Absurdity of Pretending

There have been plenty of rumors and semi-official posts regarding the upcoming major revision of X-Plane (X-Plane 9). I have been trying to keep my mouth shut about it…the problem with pre-announcing anything is that the upside to us is small (at best we do what we said) and the downside is large (at worst we don’t do what we said and people get grumpy). No one complains if XP9 turns out to have no-pause scenery load and it’s a surprise…but plenty of people complain if we say “there won’t be pauses” and then there are.

But…the situation is becoming mildly absurd…plenty of info is out there, and saying “the upcoming major release”, etc. just feels political and weaselly. Austin would be disgusted.

So listen: I am going to try to provide some info on X-Plane 9. This info is subject to change. This is what we think is going to happen, to the best of our knowledge. The release is still a ways away and enormous changes will happen. When things change, do not bitch to me that “you promised X would happen.” I do not promise anything. This info is provided to help those making add-ons for X-Plane plan appropriately.

With that in mind…I will try to post some more details on the authoring environment in the next few days. For now, here’s some very basic guidance on compatibility and hardware requirements:

  • The hardware requirements will be at least as high as X-Plane 8. If your machine is gasping and wheezing on 8, it’s not going to be any better on 9.
  • X-Plane 9 will depend more heavily on pixel shaders. If your hardware doesn’t have pixel shaders (GeForce 2-4, Radeon 7000-9200) or has really crappy shaders (GeForce FX series) you will miss out on a lot of the cool stuff in v9, and possibly have the scenery look worse (but faster) than v8 (as we move features from the CPU to pixel shaders).
  • Scenery that opens in X-Plane 8 should open in X-Plane 9 unmodified – if the scenery works in 8 but not 9, report it as a bug!
  • Plugins that work in v8 should work in v9 without modification.

Finally, we are trying to finish up X-Plane 861…this is a bug-fix patch for version 8 – it contains no new features, but it does fix a few nasty bugs, some of which cause crashes. So if there is any new feature, it’s coming in 9, not 861. Version 8 has been out for a very long time, so I will accept no argument that v8 should have more features than it does now. It’s been a long run!

(One of my main goals with 861 is to try to fix any weird behavior for third party scenery add-ons, so that a scenery add-on looks the same in v8 and v9. If we left the bug in 861 and fixed it in v9, authors would have to hack the scenery to make it work with v8, and then remove the hack and republish for v9. By trying to fix the authoring bugs in v8, at least when possible, it lets authors publish one package for both versions. Of course, v9 will have new features, so I expect some v9-only scenery will emerge pretty quickly.)

Posted in Development, File Formats, Scenery by | 9 Comments

Growing Pains for Mac Users

We are now seeing two sets of Mac OpenGL driver-related bug reports: problems with the new MacBook Pros (with an nVidia GeForce 8600) and with the new iMacs (with an ATI Radeon HD2400 or HD2600).

This doesn’t surprise me for two reasons (both of which are speculation, btw…I am not privy to what goes on inside Apple):

  1. Both of these cards are “next-gen” in a major way: DX10-compatible hardware. That means they have a lot of cool new features. My speculation is that this larger jump in hardware functionality means more changes (and more bugs) in the drivers.
  2. We’re getting closer and closer to the next major OS upgrade (10.5), but this new hardware has to run 10.4. So I wouldn’t be surprised if everyone at Apple is fighting two fires at once, limiting resources.

So while this is frustrating to Mac users who have these machines, I would encourage patience. I don’t think we’ve yet seen a Mac that has been “left broken” — several machines have had driver problems when first introduced, and Apple usually fixes them as soon as possible. In the long term I believe these will both be really nice X-Plane machines.

Posted in Development by | 3 Comments

DX10: Why I’m Not an Early Adopter

Before I begin: X-Plane uses OpenGL, not Direct3D, as its interface to 3-d hardware. So when I talk about “X-Plane doesn’t utilize DX10,” isn’t that meaningless? After all, X-Plane has never supported any version of Direct3D.

But I like to use the term “DX9” and “DX10” anyway for this reason:

  • For all practical purposes, within the “games space”, most advances to the Direct3D and OpenGL APIs that I care about are created for the purpose of exposing new hardware capabilities to applications. That is, the point of DX10 (including the new Direct3D) is to allow games to use the newest video cards more efficiently.
  • OpenGL is revised by adding “extensions” – independent features that can be mixed and matched – while DirectX tends to have whole-API revisions. So I prefer the DirectX terms because they put a nice “number” on an entire set of technology. Since graphics cards are revised in generations as new GPUs are designed, the DirectX numbers match up reasonably well with the hardware generations.

So in this context, when I say “DX10” I really mean the very newest set of super-programmable cards, of which the GeForce 8800 is the first, and by “use them” I mean take advantage of some of these really great new features:

  • Instancing (the ability to draw a lot of objects with only one command to the card, which could relieve the CPU cost of huge numbers of objects; sketched in the code after this list).
  • Geometry shaders (the ability to do per-triangle and not just per-vertex calculations on the graphics card) which could move some of the logic for terrain generation to the graphics card. (We precompute this and save it in the DSF in X-Plane, so we use DVD space and RAM, while I believe MSFS does this kind of thing on the CPU.)
  • Better management of state changes (good for unloading the CPU).
  • A bunch of really interesting ways to work with data strictly on the card (don’t know what it’s good for yet, but it unlocks a lot of cool possibilities).
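
Of these, instancing is the easiest to show. A minimal sketch using the GL 3.x-era entry point (X-Plane does not do this; the variable names are illustrative and a valid GL context is assumed):

    #include <GL/gl.h>

    extern GLuint  object_vao;
    extern GLsizei n_indices, n_instances;

    // One call asks the card to draw n_instances copies of one object; a
    // shader uses the built-in instance ID (or per-instance attributes) to
    // position each copy, sparing the CPU one draw call per object.
    void draw_forest()
    {
        glBindVertexArray(object_vao);
        glDrawElementsInstanced(GL_TRIANGLES, n_indices, GL_UNSIGNED_INT,
                                0 /* index offset */, n_instances);
    }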

So why doesn’t X-Plane utilize all of these new features? Or rather, when will we?

Well, my goal in working on X-Plane’s rendering engine is to know these things are coming but not be an early adopter. The way I look at the economics of software development is: the amount of labor we can put into a release is somewhat constrained. If we put in more months between releases, we have fewer releases. If we have more programmers, we have to pay them more, and we have to charge more money. There’s a lot of things you can say (or people have told us) about our business model. But I tend to view these as the invariant conditions we have to work with.

So what I worry about is efficiency: if we are limited to exactly X man-months of work per release, how can we make the best of them? Is being an early adopter the most efficient use of limited programmer resources when developing X-Plane? (We have to consider opportunity cost: what features won’t be implemented because we spent time on early adoption of new graphics technologies.)

I think there are a few things going against early adoption, particularly for a small company like us where labor is at a premium. (Our list of things we can be doing is very long, so any new feature takes away from a lot of other good ideas.)

  • New graphics hardware isn’t widely distributed among our user base. If we adopt DirectX-10-style features now, this work helps a very small number of our users. Eventually everyone will have hardware like this, but we can cover that case by adopting the new technology later.
  • When new features come out, there is often vendor disagreement on how to code for them. It takes time to come up with cross-vendor standards. Consider that ATI hasn’t come out with their DX10 hardware, and the OpenGL extensions to use these are all nVidia proposals. My guess is that ATI will have their own extensions, and the real ones that get used will be a mix of each. If we adopt now, we’ll be “betting on the wrong horse” a few times — code that will have to be rewritten, for a total loss of efficiency.
  • New drivers can be buggy. It takes a while for support for new features to be both universal and reliable. The earlier we jump in, the buggier the environment we develop in, and thus the more difficult it is for us to develop.

Of course, there’s definitely a cost. The 8800 is capable of doing some amazing things, and X-Plane does not yet fully take advantage of it. But I do believe that in the long term we end up delivering more value to X-Plane users by taking a slower wait-and-see approach to new hardware.

(To GeForce 8800 users I can only say that the code we write now while waiting for the right environment may do some cool things too, so we’re not just taking a vacation! And the GeForce 8800 also delivers an overall speed boost to the entire system.)

EDIT: the same logic applies to operating systems like Vista and OS X 10.5 to some extent, but I hesitate to bring this up because a number of users are having problems with X-Plane and Vista, and it is not because we have delayed support for Vista. The real problem is that the graphics card drivers for Vista are still new and have some problems. I believe that the various Vista problems we’re seeing will be addressed by code changes from ATI and nVidia.

Posted in Development by | 1 Comment

I’m not a fan of SLI/CrossFire

When it comes to video cards, I’ve always been in the “don’t spend more than $200” school of thought. My logic is: video card technology moves so fast that paying a lot of money for the “first six months” of any new technology level is very expensive. Bless all of you who are early adopters – you’re helping keep nVidia and ATI humming, but it’s an expensive hobby to maintain an up-to-date machine.

This is one of my favorite tables (there is a similar one on Wikipedia for ATI). It shows the performance of graphics cards and when they came out. Compare the GeForce 7950 GX2 and the GeForce 8800 GTS. If you want 24,000 MT/s of fill rate, you could buy a top-of-the-line two-cards-in-one-via-SLI 7950, but if you waited six months, a SINGLE intermediate-speed 8800 would give you the same thing while supporting DX10 shaders (geometry shaders, instancing, and all that awesome stuff). The 7950 GX2 apparently retailed at $600+, which was a real discount compared to actually chaining two separate 7950s together (that’d get you up around $850). Look on newegg.com and you’ll see that GeForce 8800 prices aren’t that expensive (compared to an SLI combination). And the 7950 GX2 has come down a lot from what it used to cost.

For another data point, compare the GeForce 7600s to the GeForce 6800s. The 6800 was the monster card when it came out, putting nVidia back in the number-one spot. But the next generation’s intermediate-range cards can do what was top-end before. (The 7600 can be had for a little over $100. Compare that to several hundred for the 6800 Ultra about one year earlier.)

Simply put, you pay a huge premium to get a given performance level when it’s new and top-end. Wait one generation of cards (by buying last-year’s top end cards or this-year’s middle-range cards) and you save a lot.

It’s in this context that I don’t believe that SLI makes a lot of sense. In an environment where (IMO, and my opinion only) the top-end video cards are already expensive for what they do, SLI simply makes the situation worse, by allowing you to spend double what the already-high-end cards cost to get performance that will be available in one card in the next generation.

To do the math, does it make sense to spend double the price on your video card to extend its useful life by six months? Only if you intend to change cards every six months.

(nVidia makes an argument that SLI allows developers to preview the next-gen hardware, and this is true. My strategy is different: simply run X-Plane slowly and assume that the next-generation hardware will go faster.)

I don’t feel good about criticizing nVidia and ATI because overall I feel that their products provide extraordinary value at a very good price, and the growth of performance in video cards has been astounding. Today’s cards just hit it out of the park.

But SLI and CrossFire strike me as solutions looking for a problem. They solve the problem of making the most expensive cards more expensive, but I don’t think they’re the best way to spend money on a flight simulation system. (Better might be not to buy at the “SLI/CrossFire” level of video cards, meaning spending $700+ on them, but rather to go down a level and upgrade your motherboard/CPU more frequently.)

Some users email me asking for video card recommendations, in particular whether to buy an SLI/CrossFire configuration. The bottom line is, it depends on how much you value your money vs. your graphics card performance. If money is no object and you want maximum speed, SLI configurations will provide the fastest performance (by some marginal amount). I believe that the good value lies below $200.

On the other side of the equation, I do recommend that everyone spend at least $100 if you’re going to buy a video card at all. Below $100 the price cuts come from remaindering really old inventory and removing parts from the card to save cost. For the savings of $25 you might lose half your card’s performance or half of its VRAM when you get down to the really cheap cards.

The other thing I tell users is the truth: no one at Laminar Research has an SLI system, so the reports we get on SLI come from users. Some users have told us they’ve gotten some benefit at very high FSAA levels. But at this point a single 8800 will do the same thing. And SLI doesn’t address CPU speed at all. Consider this list of features – nothing on the CPU side will get even remotely faster with SLI.

And in full disclosure: my two Macs have a Radeon X1600 Mobility and a Radeon 9600, and a GeForce 5200 FX sits on the shelf for testing purposes. (This isn’t intentional bias toward ATI; it’s what Apple ships.)

Posted in Development by | 5 Comments