The CRJ Is Here!
Simon and Chip have a whole series of posts reviewing the plane, with lots of nice pictures.
If I could have a nickel for every time I get asked “should I buy X for X-Plane 10”, well, I’d at least have enough nickels to buy a new machine. But what new machine would I buy? What hardware will be important for X-Plane 10?
The short answer is: I don’t know; it’s too soon. It’s too soon because we have a lot of the new technology for version 10 running, but there’s still a lot of optimization left to do.
As I have posted before, the weakest link in your rendering pipeline is what limits framerate. But what appears to be the weakest link now in our in-development builds of X-Plane 10 might turn out not to be the weakest link once we optimize everything. I don’t want to say “buy the fastest clocked CPU you can” if it turns out that, after optimization, CPU is not the bottleneck.
One thing is clear: X-Plane 10 will be different from X-Plane 9 in how it uses your hardware. There has been a relatively straight line from X-Plane 6 to 7 to 8 of being bottlenecked on single-core CPU performance; GPU fill rate has stayed ahead of X-Plane pixel shaders (with the possible exception of massive multi-monitor displays on graphics cards that were never meant for this use). X-Plane 10 introduces enough new technology (instancing, significantly more complex pixel shaders, deferred rendering) that I don’t think we can extrapolate.
To truly tune a system for X-Plane 10, I fear you may need to wait until users are running X-Plane 10 and reporting back results. We don’t have the data yet.
I can make two baseline recommendations though, if you are putting together a new system and can’t wait:
Finally, please don’t ask me what hardware you need to buy to set everything to maximum; I’ve tried to cover that here and here.
I’m a bit behind on posting; I’ll try to post an update on scenery tools in the next few days. In the meantime, another “you see the strangest things when debugging pixel shaders” post.
(My secret plan: to drive down expectations by posting shader bugs. When you see X-Plane 10 without any wire-frames, giant cyan splotches, or three copies of the airplane, it’ll seem like a whole new sim even without the new features turned on!)
Hint: it might not be what you think! Vertex count isn’t usually the limiting factor on frame-rate (usually the problem is fill-rate, that is, how many pixels on screen get fiddled with, or CPU time spent talking to the GPU about changing attributes and shaders). But because vertex count isn’t usually the problem, it’s an area where an author might be tempted to “go a little nuts”. It’s fairly easy to add more vertices in a high-powered 3-d modeling program, and they seem free at first. But eventually, they do have a cost.
Vertex costs are divided into two broad categories based on where your mesh lives. Your mesh might live in VRAM (in which case the GPU draws the mesh by reading it from VRAM), or it might live in main memory (in which case the GPU draws the mesh by fetching it from main memory over the PCIe bus). Fortunately it’s easy to know which case you have in X-Plane: object (OBJ) meshes live in VRAM, and everything else lives in main system memory.
If a mesh is in VRAM, the cost of drawing it is relatively unimportant. My 4870 can draw just under 400 million triangles per second – and it’s probably limited by communication to the GPU. And ATI has created two new generations of cards since the 4870.
Furthermore, mesh draw costs are only paid when they are drawn, so with some careful LOD you can get away with the occasional “huge mesh” – the GPU has the capacity if not everyone tries to push a million vertices at once. (Obviously a million vertices in an autogen house that is repeated 500 times is going to cause problems.)
But there is a cost here: the VRAM itself! A mesh costs 32 bytes per vertex (plus 4 bytes per index), so our million-vertex mesh is going to eat at least 32 MB of VRAM. That’s not inconsequential; for a user with a 256 MB card, we just used up 1/8th of all VRAM on a single mesh.
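To make the arithmetic concrete, here is a quick back-of-envelope sketch in plain Python (purely illustrative, not X-Plane code), using the 32-bytes-per-vertex and 4-bytes-per-index figures above:

    # VRAM footprint of a mesh, using the per-vertex and per-index sizes quoted above.
    def mesh_vram_bytes(vertex_count, index_count=0):
        return vertex_count * 32 + index_count * 4

    verts = 1_000_000                                # the "million vertex" mesh
    print(mesh_vram_bytes(verts) / 1e6, "MB")        # -> 32.0 MB
    print(mesh_vram_bytes(verts) / (256 * 1e6))      # -> 0.125, i.e. 1/8 of a 256 MB card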
One note about LOD here: the vertex cost of drawing is a function of what is actually drawn, so if we have a million-vertex high LOD mesh and a thousand-vertex low LOD mesh, we only burn a (small) chunk of our vertex budget when the high LOD is drawn.
But the entire mesh must be in VRAM to draw either LOD! Only things actually drawn on screen have to be in VRAM, but textures and meshes go into VRAM as a whole, including all LODs. So we only save our 32 MB of VRAM by not drawing the object at all (e.g. when it is farther away than its farthest LOD).
For anything that isn’t an object, the mesh lives in main system memory and is transferred over the PCIe bus when it needs to be drawn. (This is sometimes called “AGP memory” because this technique first became possible when the AGP slot was invented.) Here we have a new limitation: we can run out of capacity to transfer data over the PCIe bus.
Let’s go back to our million-vertex mesh: it takes around 32 MB, and it has to be transferred over the bus each time we draw it. At 60 fps that’s over 1.8 GB of data per second. A PCIe 2.0 x16 slot has only 8 GB/second of total bandwidth from the computer to the graphics card, so we just ate about 25% of the bus with our one mesh! (In fact, the real situation is quite a bit worse; on my Mac Pro, even with simple performance test apps, I can’t push much more than 2.5 GB/second to the card, so we’ve really used about 75% of our budget.)
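For reference, here is the same math as a tiny Python sketch (again purely illustrative, not X-Plane code); the 8 GB/second and 2.5 GB/second figures are the theoretical and observed numbers quoted above:

    # Bus bandwidth consumed by streaming the mesh from system memory every frame.
    vertices         = 1_000_000
    bytes_per_vertex = 32
    fps              = 60

    bytes_per_second = vertices * bytes_per_vertex * fps   # vertex data pushed per second

    pcie2_x16_theoretical = 8e9      # ~8 GB/sec one-way, theoretical
    observed_throughput   = 2.5e9    # ~2.5 GB/sec measured with simple test apps

    print(bytes_per_second / 1e9)                     # -> 1.92 GB/sec
    print(bytes_per_second / pcie2_x16_theoretical)   # -> 0.24, about 25% of the bus
    print(bytes_per_second / observed_throughput)     # -> 0.768, about 75% of the real budget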
On the bright side, storage in main memory is relatively plentiful, so if we don’t draw our mesh, there’s not a huge penalty. Careful LOD can keep the total number of vertices emitted low.
I don’t want to say anything and risk Murphy’s Law, but it looks like the CRJ will see the light of day after all.
I always enjoy seeing third party add-ons that really show what the rendering engine is capable of. Also, it’s good to know that Javier brushes his teeth. 🙂
More dubious screen-shots of in-development pixel shaders gone bad. This one was taken while working on full-screen anti-aliasing for X-Plane’s deferred renderer.
Deferred renderers cannot use the normal hardware full screen accelerated anti-aliasing (FSAA) that you’re used to in X-Plane 9. (This problem isn’t specific to X-Plane – most new first person shooter games now use deferred rendering, so presentations from game conferences are full of work-around tricks.)
It looks like we will have a few anti-aliasing options for when X-Plane is running with deferred rendering (which is what makes global lighting possible): a 4x super-sampled image (looks nice, hurts fps), a cheaper edge-detection algorithm, and possibly also FXAA.
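To give a feel for what the 4x super-sampled option costs, here is a minimal sketch (NumPy, purely illustrative; this is not X-Plane’s shader code): the scene is rendered at twice the width and height and then box-filtered down to the display resolution, which is why four times as many pixels get shaded per frame.

    import numpy as np

    def resolve_4x_supersample(img):
        # img: a render at 2x width and 2x height, shape (2*H, 2*W, channels).
        # Each display pixel becomes the average of a 2x2 block of rendered samples.
        h2, w2, c = img.shape
        return img.reshape(h2 // 2, 2, w2 // 2, 2, c).mean(axis=(1, 3))

    hi_res  = np.random.rand(1536, 2048, 3)   # hypothetical 2x render of a 768x1024 frame
    display = resolve_4x_supersample(hi_res)
    print(display.shape)                      # -> (768, 1024, 3)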
No discussion of OSM would be complete without a discussion of licensing and copyright. OSM makes this particularly complex because the project is in the process of changing its license from CC BY-SA to ODbL. For a full discussion, I recommend the OSM Wiki, but the short version is:
OSM is CC BY-SA now and is working to switch to ODbL. Basically, their lawyers realized that CC BY-SA is great for images and text but doesn’t really hold up legally for databases (which is what OSM is). The ODbL will protect OSM for what it actually is: a database. Since OSM is a huge open project, the license change is going to take a long time, and lots of people will post lots of rants on lots of mailing lists in the process.
Here’s our plan: we are going to make the v10 global scenery abide by both the spirit and the legal requirements of both licenses. (At least, I hope we are going to try to do this. I am not a lawyer and wouldn’t mind if this was all a lot simpler.)
The requirement to share changes is particularly important for us because we do a lot of processing to the raw OSM data before we create the DSFs. Technically we are required by the ODbL to “give back” those changes, but the truth is that I don’t think anyone really wants the hundreds of GB of temporary files we create as we process. So instead we will give back the tools that do that processing, so people can recreate our processed database as desired. From my discussions with OSM community members, this is apparently an acceptable way to “give back” our changes.
I should say that nothing is guaranteed here. Heck, it’s even possible that OSM will change its license in a way that screws up the whole global scenery project before we ship. (This is highly unlikely – I’m just saying that there is legal uncertainty with OSM that we haven’t had to deal with when using other data sources.) But I think we’ll be okay; X-Plane’s use of OSM (to create a mash-up of OSM plus other data sources like SRTM elevation to create a derived copyrightable work like a DSF) is definitely one of the use cases that OSM wants to make possible.
In my previous post I provided a brief description of how we’re going to use OpenStreetMap data in X-Plane 10. How do you get involved? Map your area; improve the quality of OpenStreetMap where you live.
A brief note to users in the United States: the US is a little bit different from other OSM countries because we have more free data than Europe. As a result, the US OSM data has been “seeded” with imports of data like TIGER and NHD. (For more on this, I recommend Steve Coast’s SOTM.US keynote video.) The result is that while unmapped areas in other countries tend to be empty, unmapped areas in the US are often filled in with data that is present but not particularly good.
So if you live in the US, take a look at your home town. Some of the most common problems are: incorrect road types or incorrect one-way information, missing bridges, and missing water bodies. To meet the level of quality that OSM already has in Europe (have you seen what the Germans have mapped?!?!) the imported free data needs a scrubbing by real human beings who know the area.
In my previous post I announced that we are using OSM for our vector data for the version 10 global scenery. But what data are we using exactly?
The short answer is: we are using road/rail/powerline data for our road grid and coastline/lake/polygonal river data for water bodies. This data replaces our use of TIGER and VMAP0 for those vector features.
We are not planning on using OSM’s individual building data in the first cut of the global scenery; we don’t have the infrastructure for this yet. OSM has a lot of data, and it will take time to find ways to use it all in the global scenery. The distribution requirements of the global scenery also mean that we may not be able to use all of the data to its full potential, due to the limits of DSF size.
OSM data comes with a tag scheme: a given way (line) or node (point) comes with one or more key/value pairs. The tag scheme is not limited or controlled; anyone can put any tag on any piece of data. (This usually shocks experienced GIS users when they first see it…it certainly astonished me. You really have to think of OSM as more of a Wiki and less of a database.) We only use certain tags for the global scenery.
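As a toy illustration of the key/value scheme (a Python sketch of the data model, not our actual import pipeline; the set of “used” keys below is illustrative, not our real list):

    # A toy model of one OSM way: a list of node IDs plus free-form key/value tags.
    # Anyone can attach any tag; a consumer simply picks out the keys it understands.
    way = {
        "nodes": [1042, 1043, 1044],
        "tags": {
            "highway": "residential",    # road type
            "oneway":  "yes",
            "name":    "Main Street",    # ignored by the scenery generator
            "source":  "local survey",   # also ignored
        },
    }

    USED_ROAD_KEYS = {"highway", "oneway", "bridge"}    # illustrative only

    used = {k: v for k, v in way["tags"].items() if k in USED_ROAD_KEYS}
    print(used)   # -> {'highway': 'residential', 'oneway': 'yes'}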
The road tags we care most about are the tags that define:
We may someday get clever and start using width, speed limits, surface, and the other interesting tags, but for getting a first version of the global scenery done, those are the big ones.
We try to use any area-style waterbody (e.g. natural=land/water, landuse=lake, etc.) as well as the coastlines defined by the natural=coastline tag.
Our general strategy is: if you can see land/water and road type on the map, we try to use it. In other words, we try to use the same tags the maps show so that people can see what the effects of their editing are.
We may have to augment water data with other data sources; OSM is often missing data, and while an area without roads is no worse than the old VMAP0 data, we don’t want to lose water that was visible in version 9. I do not know precisely how we will cope with this, but we will probably have to mix in other water data to complete the global scenery.
(We had to do the same thing in version 9: we combined the SWBD and VMAP0 water data to make complete world water, a strategy that gave acceptable but not beautiful results.)
There’s a lot of other interesting data we can use, and I do not know how far we will get with it in the first version. For example, vector polygonal parks are notated via the leisure=park tag; this can include giant state forests, so we can’t simply make all of these areas green. But we might be able to use them as a “hint” that the land class data should be interpreted with parks in mind.
I see our work with OSM data for the version 10 global scenery as the first step in what will be an ongoing process to include more and more data in the global scenery. But given the huge amount of ‘interactivity’ we pick up from OSM (e.g. for the first time, anyone can change the data we use, and for the first time, anyone can get their own complete copy of the data we use) we’ll need to evolve the global scenery process.
If you have an Android device, check your market for the new update. The update allows users to purchase aircraft and scenery add-ons if they wish. This is an “a la carte” style system. We hope you’ll find it more cost effective as a user to buy the aircraft/scenery that you want instead of having to buy an entirely new product for $9.99 just to get at a couple of planes that you like.