I am still trying to dig out of my in-box this morning, but Chris tells me that there is a lot of concern over X-Plane 10’s framerate. For now I can only offer one bit of advice:
Don’t panic.
I have no doubt that many users are seeing terrible fps with version 10. But I think that we’re going to get this sorted out over the next few weeks and the end result will be really good.
How can I think that when so many users are seeing poor performance? Here’s why…
- With X-Plane 9, four years after its debut, you still can’t run it at extreme objects on a modern computer.
- With X-Plane 10, all of 1 day old, with very little performance tuning, I can run at extreme objects on my Core i5 and get > 20 fps in the demo area.
In other words, the hardware usage of v9 was never properly “fit” to the capabilities of a modern desktop. By comparison, version 10 is at least on the right curve, and we know how to tune for performance.
Please bear with me for a few days: the current state of chaos in our server download farm is a high priority right now, as are a few serious bugs. So if you send me a list of your hardware specs, settings, and some numbers right now, I can’t do anything useful with it yet.
When I do have time to look at performance (hopefully real soon) here’s how we will do it:
- I will post some standard fps-test command lines that you and other users can try.
- You can send me back the log.
Once I have that data, controlled for all rendering settings, across a variety of hardware, we’ll be able to understand performance, fix engine bugs, and provide work-arounds.
X-Plane 10 is only fast if it hits the fast path through the driver every time it needs to; there are all sorts of little things that can go wrong and kill fps without being a sign that the sim will suck long-term. We have worked through nasty performance issues before and we will again! It took two months to kill off the very last driver problem in X-Plane 9.0 – I certainly hope it won’t take that long to get the vast majority of users running well in version 10. But I do believe that the fps problems we see now may be as much a case of driver vs. engine quirks as a fundamental performance gap.
Happy Thanksgiving! X-Plane 10 is now out the door, and over the next few weeks I’ll post on a number of topics. But for tonight: that burning smell is our five download servers pushing 100 mbit each and not even coming close to meeting the demand for X-Plane 10 demos.
We are working on this now, and over the next few days we will get more servers into the pool. But first we are going to analyze traffic and try to find the most efficient way to deploy our available bandwidth given such high demand. Hopefully over the next few days we’ll be able to improve overall demo download times.
The demo is 2 GB and a lot of you have fast pipes, so it’s going to be slow going for a bit no matter how many servers we throw at the problem. If you can run BitTorrent, you may want to give it a try, but we are definitely working to make a straight demo-download a viable option.
Posted in News by Ben Supnik
I haven’t had much time to post, and it’s going to be very quiet for another two to three weeks. This is the calm before the storm. Everyone on the team is somewhere past redline trying to get as much as we possibly can into the initial release. We won’t get everything we want into 10.0 – we already knew that – we have too many ideas and there is way too much potential unlocked by some of the major changes to X-Plane. But we’re still going to try to stuff the release as much as we absolutely can right up until the clock runs out.
So: please bear with us for a few weeks. There is a ton to discuss, and once we get that DVD master burned, I’ll be in a position to answer more questions, describe some of the new features (which we are really excited about) and generally come out of hiding.
Back to code…BBIAB.
I’ve seen a lot of fretting in blog comments and forums to the tune of “I’m afraid my computer won’t be able to handle X-Plane 10.” And sometimes the response is “it will be more efficient than version 9.” I want to take a second to address both points.
First: don’t panic. Wait for version 10 to come out, then try it. You might be pleasantly surprised. If your computer is an absolute basket case on version 9 (or your graphics card is dropping below our minimum hardware requirements*) then version 10 won’t work for you, but if you can run 9, you may be able to run 10 acceptably. It’ll be a question of whether you’re happy with the graphics quality or you want more.
Second: yes, X-Plane 10 is more efficient at a number of very specific rendering tasks. (For example, we have a number of much faster OBJ paths, and improved memory use for draped meshes and forests.) But there are a few reasons why you might not see the win:
- In some cases, the efficiency only helps certain parts of the system. If we make forests use less memory, and you turn forests off, you get no memory savings compared to version 9. The performance work is specific to subsystems, not across-the-board.
- In some cases, we made the system more efficient and then promptly “spent” that efficiency by drawing more stuff. For example, we can draw a lot more cloud vertices per frame, but the new weather system makes more cloud puffs. So the system is more efficient but you need that efficiency to handle the more complex effects.
- A few of the new effects are always on. If something is expensive, I’ve tried to leave it out of the lowest settings, but there are some effects that don’t have an off switch. For example, X-Plane 10 always runs with a linear lighting model – and at the lowest setting this still costs a few percent of your GPU.
- The rendering settings don’t match up between version 9 and 10. “Tons” of objects doesn’t mean the same thing in versions 9 and 10. The art assets aren’t even remotely comparable. So if you send me a screenshot of your rendering settings in version 9 and 10 and ask why your framerate is different, I am going to politely direct you to what you actually see on the screen, which will be quite different.
This is all pretty normal for a major version. I try not to raise the cost of minimum rendering during a version run so that everyone can update without buying new hardware. So when a major version and hardware spec reset comes along, we have to normalize our configuration a bit.
Finally, I’ve said this before but: I am not going to answer any questions about “will my machine run well with X-Plane 10.” X-Plane 10 isn’t done, I don’t have your machine, and what defines good performance has everything to do with what rendering settings you select.
(See also the thread on the org where a user put in a big GPU upgrade and felt like he got nothing because he couldn’t increase object count. He could get better FSAA settings, but unlike some users, he just didn’t care. Many other users won’t run with anything less than 4x. The sim has too many options and user preferences are too varied to define “good”.)
* X-Plane 10 will require a graphics card that has programmable pixel shaders – if you are still nursing a fixed-function graphics card, you won’t be able to run version 10.
I’ll try to summarize some of our hardware findings for X-Plane 10 over the next few posts. But in my previous post I mentioned that the new MacBook Pros have only an 8x PCIe connection to the discrete GPU (that is, the nice GPU that isn’t built in to the CPU, the one you want to fly X-Plane with) and this got a bit of attention.
So it begs the question: what is this PCIe bus and why do we need to care all of a sudden?
The PCIe bus is the connection between the CPU/main memory and your graphics card (with its memory and GPU). It is the bottleneck through which all communications must flow – sometimes every frame, sometimes every now and then.
PCIe slots are named by the number of lanes (e.g. 16x means 16 lanes) – each lane has a fixed capacity (which is doubled in PCIe 2.0). So a graphics card in a 16x slot can drink data from your computer at double the rate of one in an 8x slot – it’s an extra-wide straw.
(Nerds: I realize this is about the worst description of the PCIe bus you will ever find. Go read Wikipedia!)
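If you want to put rough numbers on that, here’s a back-of-envelope sketch; the per-lane rates are the commonly quoted theoretical figures (about 250 MB/s per lane per direction for PCIe 1.x, doubled for 2.0), not anything measured in X-Plane.

```python
# Theoretical one-way PCIe bandwidth in MB/s.
# Per-lane rates here are the usual quoted figures, not measurements.
PER_LANE_MB_S = {1: 250, 2: 500}

def pcie_bandwidth_mb_s(lanes, gen):
    return lanes * PER_LANE_MB_S[gen]

print(pcie_bandwidth_mb_s(16, 2))  # a 16x PCIe 2.0 slot
print(pcie_bandwidth_mb_s(8, 2))   # an 8x slot: half the headroom
```

The straw analogy falls straight out of the arithmetic: same per-lane rate, twice the lanes, twice the throughput.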
What Do We Use the PCIe Bus For?
X-Plane needs the PCIe bus to:
- Send the instructions to draw each frame to the GPU.
- Transfer any textures, new OBJ meshes, and other data that will be held in VRAM. The data is born on the CPU, goes over the PCIe bus once, and then lives in VRAM.
- Send to the GPU anything that changes every frame. For example, smoke puffs and car headlights have to go over the PCIe bus every frame because they are constantly changing.
- Send to the GPU mountains, forests and other non-repeating geometry. This data gets sent every frame.
If the sum of all of the stuff on that list gets too big, your framerate drops as the CPU and GPU both wait for data to make it over the bus. In other words, the bus can at times be the bottleneck in terms of framerate.
If you set your rendering settings near the maximum that your computer can handle and get the occasional stutter, that may be X-Plane running out of PCIe bus bandwidth. As you fly to a region with new textures that haven’t been used before, the OpenGL driver will transfer our textures over the PCIe bus from system RAM to VRAM. If the PCIe bus is already nearly maxed out, the extra traffic of those textures is going to temporarily hurt framerate – sometimes in the form of a stutter or pause.
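As a worked example of why that shows up as a stutter: both the texture size and the amount of spare bus bandwidth below are made-up illustrative numbers, but they show how a single upload can blow an entire frame’s budget.

```python
# A hypothetical 2048x2048 uncompressed RGBA texture, pushed over an
# assumed 1 GB/s of *spare* PCIe bandwidth (both numbers illustrative).
tex_bytes = 2048 * 2048 * 4            # RGBA8: 4 bytes per pixel
spare_bytes_per_s = 1_000_000_000      # leftover bus capacity
upload_ms = tex_bytes / spare_bytes_per_s * 1000
frame_budget_ms = 1000 / 60            # one frame at 60 fps

# The upload alone eats more than one whole 60 Hz frame.
print(round(upload_ms, 1), round(frame_budget_ms, 1))
```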
Are You Sure You Know What You’re Doing?
At this point those of you who know some things about 3-d graphics are shouting at your monitors: why are you guys transferring the mountains and forests over the PCIe bus every frame? Why not just put them in VRAM, since they don’t change?
That’s a good question and if you have a better solution than the one we use, I’d love to hear it.
The problem is this: OpenGL doesn’t give us a good way to prioritize which meshes (VBOs) stay in VRAM and which ones are purged when we run out of VRAM. If we put every mesh in the sim into VRAM, framerate gets better (because we aren’t using the PCIe bus) right up until we run out of VRAM. At that point the OpenGL driver freaks out and starts throwing out textures to make room for meshes; the textures then have to be sent back over the PCIe bus, and we end up in a world of hurt – texture thrash, where we have too much “stuff” for VRAM and framerate falls off a cliff.
The real problem is this: X-Plane has no idea how much VRAM is available for its own use. Sure the card might have 256 MB, but how much is being used by the OS window manager for those translucent window effects, or by other applications? We can’t even add up how much VRAM we use with ultimate precision because we don’t know the granularity of allocation on the video card (there’s real overhead for VBOs being rounded up to the VM page size, for example) or whether side buffers like a hierarchical Z buffer have been allocated.
X-Plane works around this with a simple rule: all OBJs go to VRAM, because their geometry is likely to be repeated, and non-repeating geometry, like forests and mountains, stay in system RAM and go over the bus.
This heuristic actually works pretty well in X-Plane 9 – we have enough bandwidth to transfer all of that “stuff” once per frame, and we tend not to run out of VRAM and thrash.
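As a toy model (not X-Plane’s actual code), the placement rule boils down to a one-line decision per mesh; in OpenGL terms it maps roughly onto the GL_STATIC_DRAW vs. GL_STREAM_DRAW usage hints you’d pass to glBufferData.

```python
# Toy model of the placement heuristic described above.  In real
# OpenGL code the "VRAM" branch roughly corresponds to a VBO created
# with GL_STATIC_DRAW, the "system RAM" branch to GL_STREAM_DRAW
# (or client-side data re-sent each frame).
def placement(mesh_kind):
    repeated = {"obj", "autogen_building", "car"}   # drawn many times
    streamed = {"terrain", "forest"}                # drawn once, huge
    if mesh_kind in repeated:
        return "VRAM"        # uploaded once, reused across frames
    if mesh_kind in streamed:
        return "system RAM"  # re-sent over the PCIe bus every frame
    raise ValueError(mesh_kind)

print(placement("obj"), placement("terrain"))
```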
Why Does X-Plane 10 Want more PCIe Bus Bandwidth?
X-Plane 10 is hungrier for bus bandwidth for three reasons:
- The OBJ engine’s performance has been improved a lot. In the past, we’d run out of CPU capacity (to draw OBJs) long before we ran out of bus bandwidth. This isn’t always the case with X-Plane 10. The graphics are always held back by their weakest link. If you have a strong GPU (and low effects settings) and the OBJ engine is efficient, the PCIe bus is the weakest link.
- The art assets are more detailed and thus contain more vertices.
- Shadows.
When shadowing is on, we have to draw the entire world multiple times – once or more to build shadow maps, and once to draw the real world. So shadowing can double (or even triple or worse) our bus bandwidth usage. We didn’t have that kind of free capacity on the bus in the first place.
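A crude model of that multiplier: total bus traffic scales with one base pass plus however many shadow passes re-send the streamed geometry. The pass counts and megabyte figures below are illustrative, not X-Plane’s real numbers.

```python
# Rough bus-cost model: each shadow-map pass re-sends the streamed
# (non-resident) geometry, so traffic scales with (1 + shadow passes).
def bus_traffic(base_mb_per_frame, shadow_passes):
    return base_mb_per_frame * (1 + shadow_passes)

# Hypothetical 100 MB/frame of streamed geometry:
print(bus_traffic(100, 0))  # shadows off
print(bus_traffic(100, 1))  # one shadow pass: doubled
print(bus_traffic(100, 2))  # two shadow passes: tripled
```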
We’re still working on the engine, art assets, and performance, so my hope is that we’ll find ways to cut down bus use (especially with shadows). And there has to be one slowest part of the system – as of this writing, the PCIe bus is often it.
We maintain a live copy of the scenery tools code via GIT. But as of now it’s in a temporary coma – I made a CVS admin change to the original CVS repo to fix a screwed up checkin and the GIT bridge somehow got out of sync. Janos is trying to figure out how to fix things now.
If we need to release new builds of scenery tools before we fix this, I’ll post snapshots of the source code. If you need live access (e.g. you take source updates via GIT) ping me, or perhaps ping Janos directly. I’m hoping we’ll have this fixed in short order.
In my previous post I described some of our findings for X-Plane 10 with respect to GPUs. This begs the question we get asked all the time: what about SLI/Crossfire?
For those of you not familiar with SLI and CrossFire, they are technologies (from NVidia and AMD respectively) that use two graphics cards to share the load of drawing a 3-d scene. The idea is to double the shading power of your graphics system by having each card render every alternate frame.* You can get SLI/CrossFire with two GPUs and an appropriate motherboard, or by buying one of the monstrous “x2” GPUs.
Now to try to answer some often-asked questions:
Can X-Plane take advantage of Crossfire/SLI?
No. Neither X-Plane 9, nor 10.0, will be capable of running with Crossfire or SLI. From what I can tell, there are a few sites in the code that need to be reworked a bit to be ready for these technologies. If you have an SLI/Crossfire setup, I would expect you to get the same framerate with only one of the GPUs.
Will X-Plane 10 ever take advantage of Crossfire/SLI?
Maybe someday. If we can get our code clean enough to work with these technologies, I will post an update on this blog. But I can’t promise anything – for all I know we may someday hit some horrible show-stopping problem.
Would SLI/Crossfire be useful for X-Plane?
Not for X-Plane 9. X-Plane 9 can’t even max out a single GPU from the previous generation’s top-end cards, so there’s really no need for two of them.
For now there are resolutions where two GPUs would be necessary to get better frame-rate. If we can run 2560 x 1440 at 20-30 fps on one card, it would in theory be nice to run at 40-60 fps on two.
But the next generation of GPUs is on the way, so it may be that the next generation of cards will be fast enough on their own.
Should I Buy an SLI/Crossfire setup?
Not for X-Plane – we can’t take advantage of it right now.
Here’s another way to look at it.
- If you wanted 2400 GFLOPs on your GPU in August of 2008, you could buy the Radeon HD 4870 x2, which is basically two 4870s jammed together on a single card via Crossfire. This would set you back $549.
- 13 months later, in September of 2009, AMD released the 5870, which could put out 2720 GFLOPs as a single GPU for $400.
The performance curve for GPU power is really quite steep, and a dual-GPU card typically costs at least twice as much as its single-GPU counterpart. That’s a lot of money to pay for fill rate that will be available in about 18 months in single-card form.
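Running the numbers from those two cards makes the point:

```python
# Price/performance of the two cards cited above (GFLOPs and launch
# prices are the figures quoted in the post).
gflops_per_dollar_4870x2 = 2400 / 549   # Aug 2008, dual-GPU card
gflops_per_dollar_5870   = 2720 / 400   # Sep 2009, single GPU

print(round(gflops_per_dollar_4870x2, 2))  # what $1 bought in 2008
print(round(gflops_per_dollar_5870, 2))    # what $1 bought 13 months later
```

Thirteen months later, a single GPU delivered more raw compute per dollar than the dual-GPU flagship had.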
So the short version is: we don’t support SLI/Crossfire yet. Someday we may work with these technologies, but even if/when we do, they’ll only make sense if you really like high resolutions and framerates and money isn’t an object.
* SLI can operate in “split-frame” mode where each video card takes half of a frame, but this is not so good because a full frame of geometry must be sent to each card for each frame, effectively doubling PCIe bus use. In alternate-frame mode, each card only sees every other frame, and thus only needs half the bandwidth – both cards together use the same bandwidth as one card would have.
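The bandwidth argument in that footnote can be sketched as a tiny model, normalized so that a single card rendering every frame costs 1.0 units of geometry bandwidth:

```python
# Per-card geometry bandwidth under the two SLI modes, normalized so
# one card rendering every frame costs 1.0.
def per_card_bandwidth(mode):
    if mode == "alternate_frame":
        return 0.5   # each card only renders every other frame
    if mode == "split_frame":
        return 1.0   # full scene geometry must reach both cards
    raise ValueError(mode)

# Total bus traffic for a two-card setup:
print(2 * per_card_bandwidth("alternate_frame"))  # same as one card
print(2 * per_card_bandwidth("split_frame"))      # double the traffic
```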
As X-Plane 10 gets closer to shipping, we’re getting a better idea of what its performance characteristics are going to be. This will be the first of several posts on hardware and X-Plane 10. Spoiler alert: the buying advice I can give is at best fairly vague and obvious (don’t buy an old piece of junk DirectX 7.1 graphics card for $29) – once the sim ships and people start posting real performance comparisons, I suspect community data will be a much richer source of information for planning a new system (or a system upgrade). Please don’t ask whether your particular graphics card will work well with X-Plane 10.
Last week I had a chance to bring X-Plane 10 to ATI and run the sim on a few different modern GPUs. It’s the Mac team that is nearby, but ATI’s Mac drivers are good enough that the platform isn’t really an issue.
We ran X-Plane on a current-generation Mac Pro with a single Radeon HD 5870, driving a 2560 x 1440 display. The 5870 is a very solid card – to do better you have to spend a lot of money.
Here’s what we found. Bear in mind that the art asset set is still a partially completed one, and the shaders are still being debugged and optimized, so anything could change in both directions.
Geometry: How Much Stuff Can You See?
First: geometry and 3-d.
- I could turn up almost everything that creates geometry. To keep fps smooth I had to turn down something one notch – either world LOD, visibility, or autogen density – but only one notch. This is a big step forward from X-Plane 9. (But also remember this is a current generation CPU.)
- Shadows took a big bite out of the geometry budget – basically the sim has to draw geometry again to build the shadows, amplifying the cost of your objects and trees. Current generation hardware simply doesn’t have enough kick to max shadows out and max all of the 3-d out. Not yet.*
- PCIe 16x matters – we saw a significant reduction in performance on the newer 8x PCIe Thunderbolt Mac laptops. You definitely want PCIe 16x!
Geometry in X-Plane 10 is a little bit different from X-Plane 9 – it can be bottlenecked not only by your CPU, but also by PCIe bus bandwidth. Because CPU use is more efficient in version 10, we sometimes get stuck when there isn’t enough bandwidth to transfer geometry from the CPU to GPU.**
(Please note that the fast path for geometry in X-Plane 10 requires OBJs to be stripped of most attributes; if you load a version 9 DSF and put objects on insane, you’ll get high CPU use and low fps because the objects are going to hit the slow code path. The autogen in X-Plane 10 is being completely redone, so it should all hit the fast path.)
Hardware Advice: Make sure you have your video card in a PCIe x16 slot!
The new Thunderbolt Mac laptops have only 8x PCIe to their discrete GPUs; this may turn out to be the limiting factor for X-Plane.
Pixels: How Much Do Effects Cost?
How about shader effects? Some of the new effects in X-Plane 10 (atmospheric scattering, fade-out level of detail, FXAA, linear lighting, deferred rendering) use only pixel shading power on the GPU. We turned on pretty much every effect that isn’t CPU/bus based (that is, the GPU pixel effects) and found that we could run with all of them at 20-30 fps. Heck, we even turned on a few things that aren’t going to be ready for 10.0 but may come out during the version run.
This is pretty different from X-Plane 9. With X-Plane 9, you can turn on all of the GPU effects (there is really only one, “per-pixel lighting”), crank 16x FSAA, run at 2560 x 1024, and get 27 fps on my older Radeon 4870. In other words, X-Plane can’t even keep a previous generation GPU busy at very high settings.
(To compare, the HD 5870 has double the shader cores and a 13% faster clock, so it is at least double the GPU of the 4870 for fill and shading. In other words, the test machine we used would have run X-Plane 9 at perhaps 45 fps when completely maxed out on a huge display.)
In other words, X-Plane 9 couldn’t max out a GPU; X-Plane 10 can.
If you’ve been using a “medium” tier card (e.g. a GTX 560 instead of a GTX 580, or a 5650 instead of a 5870), this is going to mean less “stuff” in X-Plane 10: less eye candy, or less resolution, or less fps. With X-Plane 9, a top-end video card would be bored; with X-Plane 10 it might have something to do. I think this may surprise some users who have been buying cheaper video cards and “getting away with it” for a while now.
Hardware Advice: if you want the highest quality visuals at a high resolution, get a full powered video card!
I still concur with Austin’s advice to the news list to buy the second most expensive video card. Looking up the price and stats of the Radeon 6970 vs 6950, you pay a 40% premium to get about 19% more fill-rate. More importantly, the HD 6990 is actually just two video cards jammed together using internal CrossFire, at over double the price of one card. But since X-Plane does not yet leverage CrossFire, you’re paying a lot of money and you won’t see a fill-rate improvement.
The one case where you might want to go really crazy is if you want to drive an insane number of pixels, e.g. multiple huge monitors. Your framerate is going to be inversely proportional to the number of pixels on screen, so if you are getting 30 fps with one 2560 x 1440 monitor, you’re going to get 15 fps if you try to run at 5120 x 1440 on two monitors.
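That inverse-proportional rule is easy to sanity-check:

```python
# Fill-rate-bound fps scales inversely with the number of pixels drawn.
def scaled_fps(fps, old_px, new_px):
    return fps * old_px / new_px

one_screen = 2560 * 1440
two_screens = 5120 * 1440   # two such monitors side by side

print(scaled_fps(30, one_screen, two_screens))  # half the framerate
```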
Epilogue: Are We Winning?
Something to think about: is it good that X-Plane uses more GPU and less CPU? I think the answer is “yes” – and these performance results are the result of a steady process of modernizing our rendering engine.
You can max out X-Plane 9 with a cheaper video card; not so with X-Plane 10. But this isn’t because X-Plane 10 is more of a pig; it is because X-Plane 10 provides new options (options you can turn off) that X-Plane 9 did not have. With X-Plane 9, it was as if we had decided for you to turn off advanced rendering settings, leaving your GPU idle.
I believe that it must be the goal of the X-Plane 10 rendering engine to scale, meaning X-Plane should be able to max out a very powerful system (and provide increased visual quality in doing so), but also be able to run on a less powerful system by backing off on rendering quality.
The Radeon HD 2600 has between 144 and 192 GFLOPS of computing power; the 5870 has 2720 GFLOPS – that’s at least 14 times more computing power between an older iMac (but still one that can run HDR mode if you’re a little crazy) and the current maxed out Mac Pro. (If you have the HD 2400 in your iMac, that gap widens to 68x!) It is in the face of that performance gap between the high and low end system that we are trying to make X-Plane as scalable as possible.
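For what it’s worth, the quoted ratios check out; the ~40 GFLOPS figure for the HD 2400 below is inferred by working backwards from the 68x gap, since the post doesn’t state it directly.

```python
# GFLOPS figures: the HD 2600 and 5870 numbers are quoted in the post;
# the HD 2400 value (~40) is back-derived from the stated 68x gap.
hd2400, hd2600_high, hd5870 = 40, 192, 2720

print(round(hd5870 / hd2600_high, 1))  # "at least 14 times"
print(hd5870 // hd2400)                # the 68x gap
```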
* PCIe 3.0 will be out one of these days – I don’t know when, and I haven’t had a chance to run X-Plane on such beastly hardware, but it is my hope that we’ll get additional shadowing performance via the increased geometry budget.
** Why not keep all of the geometry in VRAM? That’s a long sad story about the OpenGL API that we’ll have to save for another day.
Full screen anti-aliasing is an effect where your graphics card renders everything at a larger size and scales it down. The result is that jagged 1-pixel stair-step edges on polygons become soft and blended. Without FSAA, mountains look jagged, and the edges of wings can “swim”.
The terminology “2x” FSAA or “4x” FSAA refers (roughly) to how many more pixels are used to draw polygons when FSAA is on. In other words, 4x FSAA is the equivalent of making your image twice as tall, twice as wide (that’s 4x the pixels, hence 4x) and then sizing it down by a factor of two. The result is that a stair-step edge will gain 4 levels of translucency to smooth it out. (In practice, the GPU vendors are very clever about not just doing 4x or 16x the work – so you might pick 16x FSAA and not pay a 16x fps cost.)
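In other words, the “Nx” naming is just pixel-count math, assuming the naive implementation with the same scale factor on each axis:

```python
import math

# "Nx" FSAA ~= rendering at sqrt(N) times the width and height,
# then downscaling (the naive model; real hardware is cleverer).
def supersampled_pixels(w, h, factor):
    s = math.isqrt(factor)      # 4x -> 2x per axis, 16x -> 4x per axis
    return (w * s) * (h * s)

base = 1920 * 1080
print(supersampled_pixels(1920, 1080, 4) // base)    # 4x the pixels
print(supersampled_pixels(1920, 1080, 16) // base)   # 16x the pixels
```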
There are a few things you should know about FSAA:
- It makes images look a lot better, to the point where most of you won’t run X-Plane without it.
- It happens entirely on the graphics card. We just say “make it so” and it happens.
- It’s really very fast for all of the work it does. The GPU companies have spent a lot of effort on improving FSAA performance – see the above comment about being clever.
- It doesn’t work with deferred rendering.
That last point is a bit of a problem for X-Plane 10, as well as pretty much every other modern game engine in the universe. Deferred rendering is what powers X-Plane’s new global lighting, and it has become the standard for real time rendering. Sadly, the two pass rendering scheme of a deferred renderer doesn’t work with hardware FSAA.
Fortunately, since this is a problem for everyone, and not just us, the GPU companies have started working on the problem, and they’ve come up with some very clever stuff. In particular, FXAA.
X-Plane 10 will have two optional ways to anti-alias a deferred scene:
- FXAA, which operates as a post-processing pass on the entire scene. What’s good about FXAA is: it anti-aliases everything – texture sampling and alpha as well as polygons, and it’s very fast.*
- 4x SSAA (super-sampling) – basically we just draw everything twice as big and then size it down. This will hit your fps – X-Plane is putting a 4x load on your GPU. The advantage of 4x SSAA is that it anti-aliases texture-alpha cut-outs, something that FXAA can’t do (because the image is already broken by the time FXAA runs).
Here is a comparison: this is an area around KSEA at sunset in X-Plane 10; it’s a debug build but I’ve turned off all of the 3-d to keep fps up. The fps you’re seeing are almost entirely a function of graphics card fill rate, so it will give you some idea of the relative costs of these algorithms.
Here we have no anti-aliasing – notice the jaggies on the mountains in the distance.
Here we have 4x SSAA. The mountains look better but not great – it’s only 4x.
Here we have FXAA. It does a pretty good job of smoothing out the mountains at very little frame-rate impact.
Full screen anti-aliasing aims to smooth out jaggy lines created by polygons. Normal full-screen anti-aliasing does not improve jaggy lines created by alpha-tested geometry (that is, geometry where the ‘shape’ is created by a binary keep/kill decision based on the alpha channel of a texture). So no amount of FSAA is going to help smooth vegetation and fences, for example.
This scene shows a fence – the fence is “cut” using alpha testing – that is, the links are not polygons, but rather alpha spaces in the texture. As a result, when we get far enough away that the cutouts are < 1 pixel, we get nasty aliasing. You can also see the railing on the top of the control tower missing part of its geometry due to aliasing.
In this picture, we are rendering with 4x SSAA. This means that while the cutouts are too small for us to see on screen (the cutouts are < 1 pixel), they are actually two pixels when originally rendered; the SSAA shrinks them down after the cutout is done, resulting in a much nicer look. The tower railing (since it was rendered at double the res) is also now visible.
This is the same image with FXAA. Since FXAA is a post process, FXAA can try to smooth out the results, but fundamentally the information about the fence cutouts was lost. The missing tower railing is smoothed out, but still missing – by the time FXAA post-processes the image, the railing is already gone.
In this side view of the fence, we can see aliasing as the fence gets farther away (and each hole in the fence is smaller due to perspective). Anisotropic filtering will not fix this because the aliasing is coming from the per-pixel alpha cutout, not from reading the texture itself.
Again, 4x SSAA gives us double the resolution for alpha operations, so the fence looks better over a longer distance.
With FXAA the aliasing is less ugly (as FXAA smooths the errors) but it cannot reconstruct the lost resolution.
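A one-dimensional toy of the fence problem shows why SSAA helps where a post-process can’t: at screen resolution, a single sample per pixel either hits a link or a hole, but rendering at 2x and box-filtering down preserves partial coverage that FXAA, running on the finished image, can never recover.

```python
# Box-filter a 2x supersampled row of alpha-test results down to
# screen resolution: pairs of samples are averaged into one pixel.
def downsample(samples):
    return [(a + b) / 2 for a, b in zip(samples[0::2], samples[1::2])]

# Fence at 2x resolution: link (1), hole (0), link, hole...
hi_res = [1, 0, 1, 0, 1, 0]

# At screen resolution every pixel becomes 50% covered instead of
# flickering between fully-on and fully-off.
print(downsample(hi_res))
```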
If 4x SSAA makes better alpha, why not use it all the time? The answer is expense: 4x SSAA uses 4x the VRAM for the deferred rendering buffers, and burns 4x the fill rate on all geometry and all lighting! 4x SSAA is one of the only settings that will seriously destroy framerate even for a high-end GPU. FXAA, by comparison, only costs a few fps on a decent graphics card.
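To make that VRAM cost concrete: assuming, purely for illustration, around 16 bytes of deferred-rendering buffers per pixel (the real G-buffer layout isn’t stated here), the buffers alone look like this at 2560 x 1440:

```python
# Illustrative G-buffer cost.  The 16 bytes/pixel figure is an
# assumption for the sake of the example, not X-Plane's real layout.
def gbuffer_mb(w, h, bytes_per_px=16, ssaa=1):
    return w * h * bytes_per_px * ssaa / (1024 * 1024)

print(round(gbuffer_mb(2560, 1440)))          # ~56 MB, no SSAA
print(round(gbuffer_mb(2560, 1440, ssaa=4)))  # ~225 MB with 4x SSAA
```

On a card with 512 MB of VRAM, quadrupling those buffers is a big bite out of the budget before a single texture is loaded.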
* In the screenshots, the setting that turns on FXAA is labeled “fake aa”. This is not meant to be an insult to FXAA, which is really an astoundingly nice piece of shader code. That control wires up my own home-rolled anti-aliasing algorithm post-processing which works very badly and looks ugly – I called it “fake” because I was just faking it. I then installed FXAA into the same setting but never re-labeled it.
My last few posts have tried to paint a picture as to where the various art assets and files will live for the new ATC and airport lego bricks in X-Plane 10. I want to specifically point out how the library interacts with these features because the library lets you do some very powerful things, and I suspect that it is often misunderstood.
First: if you want to use our art assets, you get them via the library. Almost every scenery art asset we provide in X-Plane sits in the library somewhere. This includes most of the art assets we use to build airport apt.dat layouts. So whether you want our lego brick terminals, ATC voices, taxiway centerline light strings, or autogen buildings, they’re all in the library and you can “just use them” in your custom scenery by referencing the library.
But second: since the default scenery goes to the library to get art assets, you can replace our art assets with your own using a scenery pack. Don’t like our runway lights? Make your own and put them in the library via a custom scenery pack. Want more variety of cars? Make a few more and put them in the library via a scenery pack.
Since the library can be customized in location specific ways, you can also make new themes. Replace our autogen with something appropriate for your own country, and put the pack in the library with the locations you want your art used for.
There are only a few files we intend to collect and redistribute – and they’re all files of data. But modifying the sim’s art assets is still really easy – if you want to add art assets to the default parts of the scenery system, just make a custom pack that puts your assets in the library – it’s as good as if it was built into X-Plane.