Identifying an underpowered video card is difficult. Video cards have simple, non-confusing names like the CryoTek GeForce FX 9999 XYZ. What the heck does all that stuff mean?

(The lists below will contain a number of “specs”. Do not panic! At the end I will show you where to look this stuff up on Wikipedia.)

A modern graphics card is basically a computer on a board, and as such, it has the following components that you might care about for performance:

  • VRAM. This is one of the simplest ones to understand. VRAM is the RAM on the graphics card itself. VRAM affects performance in a fairly binary way – either you have enough or your framerate goes way down. You can always get by with less by turning texture resolution down, but of course then X-Plane looks a lot worse.

    How much VRAM do you need? It depends on how many add-ons you use. I’d get at least 256 MB – VRAM has become fairly cheap. You might want a lot more if you use a lot of add-ons with detailed textures. But don’t expect adding more VRAM to improve framerate – you’re just avoiding a shortage-induced fog-fest here. (There’s a quick back-of-envelope VRAM estimate right after this list.)

  • Graphics Bus. The GPU is connected to your computer by the graphics bus, and if that connection isn’t fast enough, it slows everything down. But this isn’t really a huge factor in picking a GPU, because your graphics bus is part of your motherboard. You need to buy a GPU that matches your motherboard, and the GPU will slow down if it has to.

  • Memory Bus. This is one that gets overlooked – a GPU is connected to its own internal memory (VRAM) by a memory bus, and that memory bus controls how fast the GPU can really go. If the GPU can’t suck data from VRAM fast enough, you’ll have a slow-down.

    Evaluating the quality of the internal memory bus of a graphics card is beyond what I can provide as “buying advice”. Fortunately, the speed of the bus is usually paired with the speed of the GPU itself. That is, you don’t usually need to worry that the GPU is really fast but its bus is too slow. So what we need to do is pick a GPU, and the bus that comes with it should be decent. (A rough bandwidth calculation also follows this list, if you’re curious.)

  • Of course the GPU sits on the graphics card. The GPU is the “CPU” of the graphics card, and is a complex enough subject to start a new bullet list. (As if I wouldn’t start a new bullet list just because I can.)
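
Before we get to the GPU, here is the VRAM back-of-envelope estimate promised above. The texture sizes and counts below are made up for illustration, and real usage varies with texture compression and the sim’s own bookkeeping, but it shows why detailed add-ons eat memory so quickly:

```python
# Back-of-envelope VRAM estimate for a set of textures.
# Assumes uncompressed 32-bit RGBA textures plus ~1/3 overhead for mipmaps;
# real usage varies with texture compression and the sim's own accounting.

def texture_vram_mb(width, height, bytes_per_pixel=4, mipmaps=True):
    total = width * height * bytes_per_pixel
    if mipmaps:
        total = total * 4 // 3   # a full mipmap chain adds about one third
    return total / (1024 * 1024)

per_texture = texture_vram_mb(2048, 2048)   # roughly 21 MB each
print(f"One 2048x2048 texture: {per_texture:.1f} MB")
print(f"A hypothetical add-on with 40 of them: {40 * per_texture:.0f} MB")
print(f"The same 40 textures at 1024x1024: {40 * texture_vram_mb(1024, 1024):.0f} MB")
```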

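And here is the rough memory-bandwidth calculation mentioned under the memory bus item. The formula (bus width times effective memory clock) is the standard back-of-envelope figure; the two example cards are hypothetical, so check the spec tables for real numbers:

```python
# Rough peak memory bandwidth = bus width (in bytes) x effective memory clock.
# "Effective" clock already includes DDR/GDDR double-pumping, which is how
# most spec sheets quote it. Both example cards are hypothetical.

def bandwidth_gb_per_s(bus_width_bits, effective_clock_mhz):
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * effective_clock_mhz * 1e6 / 1e9

print(f"High-end: 384-bit bus @ 1800 MHz -> {bandwidth_gb_per_s(384, 1800):.0f} GB/s")
print(f"Low-end:   64-bit bus @  800 MHz -> {bandwidth_gb_per_s(64, 800):.1f} GB/s")
```
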
So to summarize, you want to look at how much VRAM your card has and make sure the bus interface matches your motherboard. What about the GPU? There are three things to pay attention to on a GPU:

  • Generation. Each generation of GPUs is superior to the previous generation. Usually the GPUs can create new effects, and often they can create old effects more cheaply.

    The generation is usually specified in the leading number, e.g. a GeForce 7xxx is from the GeForce 7 series, and a GeForce 8xxx is from the GeForce 8 series. You almost never want to buy a last-generation GPU if you can get a current-generation GPU for a similar price.

  • Clock Speed. A GPU has an internal clock, and faster is better. The benefit of clock speed is linear – that is, if you have the same GPU at 450 MHz and 600 MHz, the 600 MHz one will provide about 33% more throughput, usually.

    Most of the time, the clock speed differences are represented by that ridiculous alphabet soup of letters at the end of the card name. So for example, the difference between a GeForce 7900 GT and a GeForce 7900 GTO is clock speed – the GT runs at 450 MHz and the GTO at 650 MHz.*

  • Core Configuration. This is where things get tricky. For any given generation, the different card models have different numbers of shaders enabled – this is the “core configuration”. Basically GPUs are fast because they have more than one of all of the parts they need to draw (pixel shaders, etc.), and in computer graphics, many hands make light work. The core configuration is a measure of just how many hands your graphics card has.

    Core configuration usually varies with the model number, e.g. an 8800 has 96-128 shaders, whereas an 8600 has 32 shaders, and an 8500 has 16 shaders. In some cases the suffix matters too. (The sketch after this list shows how shader count and clock speed combine into a rough throughput estimate.)

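To see how clock speed and core configuration combine, here is a crude sketch that multiplies shader count by clock to get a relative throughput score. The shader counts and clocks are illustrative stand-ins, and the comparison is only meaningful within a single generation, for the reasons discussed below:

```python
# Very crude relative throughput: shader count x core clock.
# Only meaningful WITHIN one GPU generation -- the work each shader does
# per clock changes between generations. Numbers here are illustrative.

def relative_throughput(shaders, core_clock_mhz):
    return shaders * core_clock_mhz

cards = [
    ("high-end  (112 shaders @ 600 MHz)", relative_throughput(112, 600)),
    ("mid-range ( 32 shaders @ 540 MHz)", relative_throughput(32, 540)),
    ("low-end   ( 16 shaders @ 450 MHz)", relative_throughput(16, 450)),
]

baseline = cards[-1][1]   # compare everything against the low-end card
for name, score in cards:
    print(f"{name}: ~{score / baseline:.1f}x the low-end card")
```
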
How would you ever know your core configuration, clock speed, etc.? Fortunately Wikipedia is the source of all knowledge. Here are the tables for NVidia and ATI.

Important: You cannot compare clock speed or core configuration between different generations of GPU or different vendors! A 16-shader 400 MHz GeForce 6 series card is not the same as a 16-shader 400 MHz GeForce 7 series card. The GPU designers make serious changes to the card capabilities between generations, so the raw numbers aren’t comparable.

You can see this in the core configuration column – the number of different parts they measure changes! For example, starting with the GeForce 8, NVidia stopped building separate vertex and pixel shaders and moved to “unified shaders”. Apples to oranges…

Don’t Be Two Steps Down

This is my rule of thumb for buying a graphics card: don’t be two steps down. Here’s what I mean:

The most expensive, fanciest cards for a given generation will have the most shaders in their core config, and represent the fastest that generation of GPU will ever be. The lower models then have significantly fewer shaders.

Behind the scenes, what happens (more or less) is: NVidia and ATI test all of their chips. If all 128 shaders on a GeForce 8 GPU work, the GPU is labeled “GeForce 8800” and you pay top dollar. But what if there are defects and only some of the shaders work? No problem. NV disables the broken shaders – in fact, they disable so many shaders that you only have 32 and a “GeForce 8600” is born.

Believe me: this is a good thing. This is a huge improvement over the old days when low-end GPUs were totally separate designs and couldn’t even run the same effects. (Anyone remember the GeForce 4 Ti and MX?) Having “partial yield” on a chip is a normal part of microchip manufacturing; being able to recycle partially functional chips means NV and ATI can sell more of the chips they make, and that brings the cost of goods down. We wouldn’t be able to get a low-end card so cheaply if they couldn’t reuse the high-end parts.

But here’s the rub: some of these low-end cards are not meant for X-Plane users, and if you end up with one, your framerate will suffer. Many hands make light work when rendering a frame. If you have too few shaders, there aren’t enough hands, drawing takes too long, and your framerate suffers.

For a long time the X-Plane community was insulated from this, because X-Plane didn’t push a lot of work to the GPU. But this has changed over the version 9 run – some of those options, like reflective water and per-pixel lighting, finally start to put real work on the GPU, and framerate takes the hit. If you have a GeForce 8300 GS, you do not have a good graphics card. But you might not have realized it until you had the rendering options to really test it out.

So, “two steps down”. My buying advice is: do not buy a card where the core configuration has been cut down more than once. In the GeForce 8 series, you’ll see the 8800 with 96-128 shaders, then the first “cut” is the 8600 with 32 shaders, and then the 8500 brings us down to 16.

A GeForce 8800 is a good card. The 8600 is decent for the money. But the 8500 is simply underpowered.
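
If you’d like the rule of thumb in mechanical form, here is a small sketch that ranks the core configurations within a generation and flags anything more than one cut below the top. It uses the GeForce 8 shader counts from above; the helper name is mine, not anything official:

```python
# "Two steps down" check: within one generation, rank the distinct
# core configurations and flag anything more than one cut below the top.
# Shader counts follow the GeForce 8 examples in the text.

def steps_down(card_shaders, all_shader_counts):
    tiers = sorted(set(all_shader_counts), reverse=True)   # e.g. [128, 32, 16]
    return tiers.index(card_shaders)

geforce8 = {"8800": 128, "8600": 32, "8500": 16}

for model, shaders in geforce8.items():
    steps = steps_down(shaders, geforce8.values())
    verdict = "fine" if steps <= 1 else "two steps down -- avoid for X-Plane"
    print(f"GeForce {model} ({shaders} shaders): {steps} step(s) down, {verdict}")
```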

When you look at prices, I think you’ll find the cost savings of being “two steps down” is not a lot of money. But the performance hit can be quite significant. Remember, the lowest-end cards are targeted at users who will check their email, watch some web videos, and that’s about it. These cards are powerful enough to run the operating system’s window manager effects, but they’re not meant to run a flight simulator with all of the options turned on.

If you do have a “two step” card, the following things can help reduce GPU load:

  • Turn down or off full screen anti-aliasing.
  • Turn off per pixel lighting, or even turn off shaders entirely.
  • Reduce screen size.
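
To put a number on that last item: per-pixel work scales roughly with the number of pixels (and anti-aliasing samples) you ask the card to fill, so a smaller window with no FSAA is a bigger saving than you might expect. The resolutions below are just examples:

```python
# Per-pixel shader work scales roughly with the number of samples drawn,
# so a smaller window (or lower resolution) is a direct GPU saving.
# 4x anti-aliasing multiplies the samples again; both figures are rough.

def pixel_load(width, height, aa_samples=1):
    return width * height * aa_samples

full  = pixel_load(1920, 1200, aa_samples=4)   # big screen, 4x FSAA
small = pixel_load(1280, 800)                  # smaller window, no FSAA

print(f"1920x1200 with 4x FSAA: {full:,} samples per frame")
print(f"1280x800, no FSAA:      {small:,} samples per frame")
print(f"Reduction: ~{full / small:.0f}x less per-pixel work")
```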

* GT = Good Times, GTO = Good Times Overclocked? Only NVidia knows for sure…

About Ben Supnik

Ben is a software engineer who works on X-Plane; he spends most of his days drinking coffee and swearing at the computer -- sometimes at the same time.

One comment on “Is Your Video Card ‘Two Steps Down’?”

  1. Thanks for the useful post! I always love being corrected by people who are smarter than me. Seriously, I never knew a lot of these things about video cards. Sometimes I wish hardware companies would speak English!
