Tag: performance

Fact or Fiction

More ranting on the question of whether a file format is based on factual information or not. For the sake of taxonomy, let’s split file formats into two categories:

  • Factual. The file format aims to capture “real world” information. The file spec is thus written against real-world norms. Example: a runway is described by the location of its centerline at its thresholds, the type of approach lighting fixtures, and the material it is built out of. This is all fact that can be verified by going to the runway and measuring it (while trying to avoid 747s).
  • Artistic. The file format gives authors a creative platform to create “stuff”, e.g. an image or a model; the file format dictates how client applications interpret that “stuff”. Example: OBJs are artistic – the spec describes what effect the various bits of an OBJ file have on drawing.

Apt.dat is actually a hybrid format – most of it is factual, with one glaring exception: pavement surface areas.

Pavement surface areas are simply an overlapping pile of bezier polygons with holes. There are multiple ways to create a given layout, and you couldn’t make an argument that one is “more factually correct” than another.
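
To make that concrete, here is a rough sketch of the kind of data a pavement area carries – the structs and names are invented for illustration, not the real apt.dat 850 record layout. Note that nothing in it ties the shapes back to any real-world survey:

```cpp
// Illustrative only -- not the actual apt.dat 850 record layout.
#include <vector>

struct BezierNode {
    double lat, lon;            // node location
    double ctrl_lat, ctrl_lon;  // bezier control point (ignored for straight segments)
    bool   has_ctrl;
};

struct Ring {
    std::vector<BezierNode> nodes;   // closed loop of nodes/curves
};

struct PavementArea {
    int  surface_type;               // e.g. asphalt vs. concrete
    Ring outer_boundary;             // outline of this chunk of pavement
    std::vector<Ring> holes;         // grass islands, etc. cut out of it
};

// An airport's pavement is just a pile of these, possibly overlapping;
// nothing says which chunk is "taxiway B", so many different layouts can
// produce the same rendered result.
```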

Artistic file formats give us a way to be open-ended, and so they are particularly useful for problems that we cannot solve in a practical manner using factual file formats. When we worked on the apt.dat 850 format, I clung to a 100% factual approach for as long as I could, hoping to be able to truly describe “ground truth” about airport pavement. What I found in the end was that real-world airport pavement is so varied and weird that almost any factual approach would fail to correctly model important real-world airports. So we punted and simply said “put pavement wherever you want, make it look nice.”

The result of going artistic instead of factual is two-fold:

  1. The taxiway data in the apt.dat file is less broadly useful to a wide range of client applications; you might be able to infer some aspects of the real taxiways from the data, but the taxiway shape has very little structure to it.
  2. You can model just about anything you can dream of – there really aren’t any limits.

That taxiways are “artistic” will probably always bug me a little bit from a theoretical viewpoint, but I think there is no question that this was the only practical choice.

Final thought: factual file formats are usually not precomputed – that is, if we have a list of runways described by their real-world properties (and not modeled as a collection of textured triangles) then there is probably work that still needs to be done to make the file useful for X-Plane. (That work is done by X-Plane’s file loading code.)

Okay – I’m OOTO for a while – see you before thanksgiving!

/ben

Posted in File Formats by | Comments Off on Fact or Fiction

Precomputed Scenery – the Good and the Bad

This thread on X-Plane.org sparked off quite the discussion. A lot of it centers on when LR will have an overlay editor – there are a few overlay editing functions that Jonathan Harris’ excellent OverlayEditor apparently does not yet support, which is what started the thread.

(I am not saying that LR should rely on Jonathan to do an overlay editor. But I am saying that the complaints I hear about a lack of overlay editing go down when Jonathan’s overlay editor does everything that the file formats can do.)

But another part of the discussion focused on the problem of mesh editing. In particular, the basic terrain in a DSF is a fully baked output of a complex process that starts with higher level GIS input data. In other words, we start with a raster DEM, polygon coastline, apt.dat file, vector roads, and a bunch of config files and hit “bake” and a DSF comes out the other side, with a lot of processing.
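
Conceptually, the split looks something like this – a sketch with invented names and types, not the actual scenery-generation code:

```cpp
// Conceptual sketch only -- names and types are invented for illustration.
#include <vector>

struct RasterDEM  {};   // elevation grid
struct VectorData {};   // coastlines, roads, etc.
struct AptDat     {};   // airport layouts
struct Triangle   { float xyz[9]; int landuse; };
struct DSFTile    { std::vector<Triangle> tris; };

// Offline, once per scenery release -- free to take minutes per tile.
DSFTile bake_tile(const RasterDEM& dem, const VectorData& coast,
                  const VectorData& roads, const AptDat& apts)
{
    DSFTile out;
    // 1. build an irregular triangulated mesh from the DEM + coastline
    // 2. flatten terrain under airports, assign landuse textures
    // 3. burn roads and other vectors in
    // (all of the expensive GIS integration happens here)
    return out;
}

// In the sim at load time: just decode the baked triangles and hand them
// to the renderer -- no GIS processing while you fly.
void load_tile(const DSFTile& tile) { /* upload tile.tris to the GPU */ }
```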

This is very different from FS X, which integrates its data sources on the fly. Why did we choose a precomputed route for scenery? It has some pros and cons. (In understanding how we made these decisions, think back to what scenery was like with X-Plane 7 and ENVs and single-core machines.)

Performance

The main benefits of preprocessing scenery are performance related. When you process scenery data into final scenery while flying, that compute power is taken away from the rendering engine, cutting down fps. At some point you face a zero-sum game between the cost of loading scenery and the complexity of scenery integration; you have to pick very simple integration algorithms to keep fps up.

(This is less of an issue as more cores become available, but is still a factor.)

When pre-processing, we can use algorithms that take minutes per DSF without affecting framerate.

Similarly, there might be scenery processing algorithms that improve fps by optimizing the output triangles – but do we have time to run these algorithms during load? With preprocessing we have all the time in the world because it happens once before the DVDs are burned.

Preprocessing also breaks a similar zero-sum game between scenery data size and quality; the source data we use to make the scenery is a lot bigger than the 78 GB of DSFs we cut; if we had to ship the source data, we’d have to cut down the source data quality to hit our DVD limitations. With pre-baking we could use 500 GB of source data without penalty.

Format Flexibility and Stability

The second set of benefits to preprocessing are flexibility benefits. (Consider the file format churn of the ENV days.)

  • With a preprocessed scenery file, what the author creates is what the user sees – X-Plane does not go back later and perform subjective integrations on the scenery that might change how it looks for the worse.
  • There is no need to revise the scenery file formats to introduce new data sets, because new data sets and old are all processed into the same final DSF container format.
  • A wide variety of mesh generation techniques can be employed, because mesh generation is not built into X-Plane. This is a flexibility that I don’t think anyone has really utilized.
  • Changes of behavior in the scenery generation toolset can never affect existing scenery, because that scenery is already preprocessed; this helps compatibility of old file formats.

Integration Issues

There are some real limitations to a pre-processed format, and they are virtually all in the bucket of “integration issues” – that is, combining separate third party add-ons to improve scenery. In particular, in any case where we preprocess two data sources, we lose the opportunity for third parties to provide new scenery to replace one of those data sources and not the other.

Airports are the Achilles’ heel where this hurts us most; while airport layouts are overlays and can be added separately to the scenery system, the elevation of the base mesh below the airport needs to be preprocessed. This is something I am still investigating – a tolerable fix that others have proposed is to allow an overlay scenery pack to flatten a specific region regardless of the user setting (so an author can be assured of a flat base to work from).

Preprocessing does fundamentally limit the types of third party add-ons that can be done; with version 9 and overlay roads, we are getting closer to letting road add-ons be overlays (see this post).

It appears to me that integration isn’t the primary complaint about the scenery system (the primary complaint is lack of tools) but we’ll have to see once we have mesh editing tools (mesh recreation tools really) whether preprocessing still limits certain kinds of scenery.

Note that a lack of tools or a lack of tool capability is not an inherent limitation of pre-processed scenery. We have an incomplete tool set because I have not written the code for a complete tool set, not because it cannot be done.

(The complexity of writing base mesh editing tools is a function of the complexity of a vector-based base mesh – this is also not related to pre-processing per se.)

Tools

In the end, I think the question of tools is not directly tied to the question of pre-processing. Whether we have scenery that is processed by X-Plane or a preprocessing tool, we have the same issues:

  • Good tools require an investment in coding user interface.
  • The code to convert source data that users might want to edit (like a polygon that defines a lake) to data the simulator wants to use (like a list of 78,231 triangles) has to be written – see the sketch below.
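
For a sense of what that conversion involves, here is a toy sketch (convex polygons only, invented types). A real lake is concave, has islands, and must share edges with the surrounding terrain mesh, so the real job is far harder than this:

```cpp
#include <vector>

struct Vertex   { double lon, lat; };
struct Triangle { Vertex a, b, c; };

// Toy converter: turn a *convex* polygon into a triangle fan.
std::vector<Triangle> triangulate_convex(const std::vector<Vertex>& poly)
{
    std::vector<Triangle> out;
    for (size_t i = 1; i + 1 < poly.size(); ++i)
        out.push_back({poly[0], poly[i], poly[i + 1]});
    return out;
}
```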

I don’t think either option (pre-processing or in-simulator processing) reduces the amount of work to be done to create a good toolset.

As a final thought, using scenery file formats that are “easier to edit” (e.g. a file format that contains a polygon for water rather than triangles) doesn’t make the total code for scenery tools + simulator any easier; it just moves the task of “processing” the scenery from the tools to the simulator itself.

Posted in Development, File Formats, Scenery, Tools by | Comments Off on Precomputed Scenery – the Good and the Bad

Threaded FM – Probably Not

I always have to hesitate before posting a possible future direction to my blog – our future plans are a road map, a direction we intend to follow, but if circumstances change, our plans change. (This is one of the great powers of software: the ability to be flexible!) Unfortunately in the past, I’ve posted ideas, and then when we didn’t productize them, gotten back “but you promised X” from users. So now I’m a little bit gun-shy.

But let’s try the reverse: what about a feature that I am now pretty sure won’t go into the sim?

We were looking at running the flight model on a separate core from the rendering engine.  The idea is that the less work that must be done in series with the main rendering thread, the higher the total frame-rate.  But now it looks like it’s not worth it.  Here’s my logic:
  • The rendering engine now runs best on at least two cores, because all loading is done on a second core.  So unless you have a 4+ core machine, X-Plane is utilizing close to all of your hardware already.
  • The flight model isn’t very expensive – and the faster the machine, the smaller the percentage of time the flight model takes (because it does not become more expensive with higher rendering settings).
  • Therefore I must conclude: threading the flight model would only help framerate on hardware that doesn’t need the help – modern 4+ core machines.

So why not code it?  (Even if the improvement in framerate would be pretty low, it would be more than zero.)  Well, besides the opportunity cost of not coding something more useful, there’s one thing that makes a threaded flight model very expensive: plugins.

Plugins can run during various parts of the rendering engine, and they can write data into the flight model.  I bounced a number of ways of coping with this off of Sandy, Andy, and others, and I don’t see a good way to do it.  Basically every scheme involves some combination of a huge performance hit when a plugin writes data at render time, a lot of complexity, or both.
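
To make the problem concrete, here’s a minimal sketch – the names are invented, this is not the real X-Plane or plugin SDK code – of why a render-time dataref write poisons the scheme: the moment a plugin touches the flight model, the render thread has to stop and wait for the FM step (or vice versa), and the two threads are effectively back in series.

```cpp
#include <mutex>
#include <thread>

struct FlightModelState { double alt, vx, vy, vz; /* ... */ };

std::mutex       fm_lock;    // guards the in-flight FM computation
FlightModelState fm_state;

// Worker thread: advance the flight model for the next frame.
void fm_step(double dt)
{
    std::lock_guard<std::mutex> hold(fm_lock);
    // ... integrate forces, update fm_state ...
}

// Render thread: a plugin writes a dataref mid-frame.
void plugin_write_altitude(double alt_m)
{
    // Must block until the FM step is done, or the write lands in the
    // middle of an integration step -- either way we lose the parallelism
    // (or eat a big stall) every time a plugin does this.
    std::lock_guard<std::mutex> hold(fm_lock);
    fm_state.alt = alt_m;
}
```
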
So the simplest thing to do is to not try to thread the FM against the rendering engine, and instead continue to use more cores to improve the rendering engine.
This doesn’t apply to running more than one FM at the same time (e.g. AI planes and the main plane at the same time).  It’s the question of the FM vs. the rendering engine that I think now is not worth the benefit.
Posted in Development by | 1 Comment

The Future of Triangles Part 4: Pie in the Sky

Per-pixel lighting is something I hope to have in X-Plane soon.  A number of other features will take longer, and quite possibly might never happen.  This is the “pie in the sky” list – with this list, we’re looking at higher hardware requirements, a lot of development time, and potential fundamental problems in the rendering algorithm!

High Dynamic Range (HDR) Lighting
HDR is a process whereby a program renders its scene with super-bright and super-dark regions, using a higher-precision framebuffer to draw.  When it comes time to show the image, some kind of “mapping” algorithm then represents that image using the limited contrast available on a computer monitor.  Typical approaches include:
  • Scaling the brightness of the scene to mimic what our eyes do in dark or bright scenes.
  • Creating “bloom”, or blown out white regions, around very bright areas.

Besides creating more plausible lighting, the mathematics behind an HDR render would also potentially improve the look of lit textures when they are far away.  (Right now, a lit and dark pixel are blended to make semi-lit pixels when far away as the texture scales down.  If a lit pixel can be “super-bright” it will still look bright even after such blending.)
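
As a rough illustration of both points, here’s a Reinhard-style tone-map operator – chosen only because it’s a common example, not necessarily what we’d ship:

```cpp
// Map a high-dynamic-range luminance (0 .. very large) into a displayable 0..1.
// Simple Reinhard operator, used here purely for illustration.
float tone_map(float hdr_lum)
{
    return hdr_lum / (1.0f + hdr_lum);
}

// The far-away lit-texture point from above: averaging a lit texel with a
// dark one in 0..1 space gives a dull 0.5, but if the lit texel is allowed
// to be "super-bright" (say 8.0), the average is 4.0, which still tone-maps
// to ~0.8 -- it keeps looking lit.
const float ldr_blend = (1.0f + 0.0f) * 0.5f;             // 0.5: looks half-lit
const float hdr_blend = tone_map((8.0f + 0.0f) * 0.5f);   // ~0.8: still looks lit
```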

Besides development time, HDR requires serious hardware; drawing to a framebuffer with that extra range chews up a lot of GPU power, so HDR would be appropriate only for a card like the GeForce 8800.
While there aren’t any technical hurdles to stop us from implementing HDR, I must point out that, given a number of the “art” features of X-Plane like the sun glare, HDR might not be as noticeable as you’d think.  For example, our sun “glares” when you look at it (similar to an HDR trick), but this is done simply by us detecting the view angle and drawing the glare in.
Reflection Mapped Airplanes
Reflection maps are textures of the environment that are mapped onto the airplane to create the appearance of a shiny reflective surface.  We already have one reflection map: the sky and possibly scenery are mapped onto the water to create water reflections.
Reflection maps are very much possible, but they are also very expensive; we have to go through a drawing pass to prepare each one.  And reflection maps for 3-d objects like airplanes usually have to be done via cube maps, which means six environment maps!
There’s a lot of room for cheating when it comes to environment maps.  For example: rendering environment maps with pre-made images or with simplified worlds.
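
Conceptually, the per-frame cost looks like this – a sketch with invented names, shown only to illustrate where the work goes:

```cpp
// One cube map = six extra render passes of (some version of) the world.
enum CubeFace { POS_X, NEG_X, POS_Y, NEG_Y, POS_Z, NEG_Z, FACE_COUNT };

void update_reflection_map(/* Aircraft& acf, World& world */)
{
    for (int face = 0; face < FACE_COUNT; ++face) {
        // point a 90-degree camera out of the aircraft along this axis,
        // render the world (or a cheap stand-in) into that cube face
    }
    // The obvious cheats: render a simplified world, update only one face
    // per frame, or reuse a pre-made static environment image.
}
```
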
Shadows
Shadows are the biggest missing feature in the sim’s rendering path, and they are also by far the hardest to code.  I always hesitate to announce any in-progress code because there is a risk it won’t work.  But in this case I can do so safely:
I have already coded global shadow maps, and we are not going to enable them in X-Plane.  The technique just doesn’t work.  The code has been ripped out and I am going to have to try again with a different approach.
The problem with shadows is the combination of two unfortunate facts:
  • The X-Plane world is very, very big and
  • The human eye is very, very picky when it comes to shadows.

For reflections, we can cheat a lot — if we don’t get something quite right, the water waves hide a lot of sins.  (To work on the water, I have to turn the waves completely off to see what I’m doing!)  By comparison, anything less than perfect shadows really sticks out.

Shadow maps fail for X-Plane because it’s a technology with limited resolution in a very large world.  At best I could apply shadows to the nearest 500 – 1000 meters, which is nice for an airport, but still pretty useless for most situations.
(Lest someone send the paper to me, I already tried “TSM” (trapezoidal shadow maps) – X-Plane is off by about a factor of 10 in shadow map res; TSM gives us about 50% better texture use, which isn’t even close.)
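
The back-of-the-envelope arithmetic is unforgiving (illustrative numbers – the exact figures depend on map size and view):

```cpp
// Back-of-the-envelope texel coverage for a single global shadow map.
// Illustrative numbers only.
constexpr float map_texels    = 2048.0f;
constexpr float near_extent_m = 1000.0f;                      // ~1 km of shadowed world
constexpr float texel_m       = near_extent_m / map_texels;   // ~0.5 m per texel -- usable

// Stretch the same map over 10x the distance and each texel covers ~5 m of
// ground -- blocky, crawling shadow edges.  That is the "factor of 10" gap;
// a ~50% better use of the texels (roughly what TSM buys) doesn't close it.
```
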
A user mentioned stencil shadow volumes, which would be an alternative to shadow maps.  I don’t think they’re viable for X-Plane; stencil shadow volumes require regenerating the shadow volumes any time the relative orientation of the shadow caster and the light source changes; for a plane in flight, that is every single frame.  Given the complexity of the planes that are being created, I believe they would perform even worse than shadow maps; where shadow maps run out of resolution, stencil shadow volumes would bury the CPU and PCIe bus with per-frame geometry.  Stencil shadow volumes also have the problem of not shadowing correctly for alpha-based transparent geometry.
(Theoretically geometry shaders could be used to generate stencil shadow volumes; in practice, geometry shaders have their own performance/throughput limitations – see below for more.)
Shadows matter a lot, and I am sure I will burn a lot more of my developer time working on them.  But I can also say that they’re about the hardest rendering problem I’m looking at.
Dynamic Tessellation
Finally, I’ve spent some time looking at graphics-card based tessellation.  This is a process whereby the graphics card splits triangles into more triangles to make curved surfaces look more round.  The advantage of this would be lower triangle counts – the graphics card can split only the triangles that are close to the foreground for super-round surfaces.
The problem with dynamic tessellation is that the performance of the hardware is not yet that good.  I tried implementing tessellation using geometry shaders, and the performance is poor enough that you’d be better off simply using more triangles (which is what everyone does now).
I still have hopes for this; ATI’s Radeon HD cards have a hardware tessellator and from what I’ve heard its performance is very good.  If this kind of functionality ends up in the DirectX 11 specification, we’ll see comparable hardware on nVidia’s side and an OpenGL extension.
(I will comment more on this later, but: X-Plane does not use DirectX – we use OpenGL.  We have no plans to switch from OpenGL to DirectX, or to drop support for Linux or the Mac.  Do not panic!  I mention DirectX 11 only because ATI and nVidia pay attention to the DirectX specification and thus functionality in DirectX tends to be functionality that is available on all modern cards.  We will use new features when they are available via OpenGL drivers, which usually happens within a few months of the cards being released, if not sooner.)
Posted in Development, File Formats by | 2 Comments

The Future of Triangles Part 3: X-Plane 9

Before I post anything to my blog saying what might happen, standard disclaimers:

  • This blog represents my rambling about the directions I am considering for X-Plane’s rendering engine.
  • This blog is not a promise or commitment of any kind to deliver any particular feature.
  • If I say I am looking at doing feature X, and feature X does not materialize, either in the near or far future, or, like, ever, consider this to be one big fat “I told you so.”

With that in mind, I think the direction for lighting in version 9 is to introduce per-pixel lighting.

I don’t know what other set of features we’ll get with per-pixel lighting, but I am reviewing normal maps, specular maps, and the material attributes.  Per pixel lighting will mean smooth, round, shiny looking surfaces without using a huge number of triangles.
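
The math itself is the standard diffuse + specular shading equation; the only question is where it runs.  Here’s a sketch of the idea (generic Blinn/Phong-style terms, not X-Plane’s actual shader) – with per-pixel lighting this function effectively runs for every fragment instead of once per vertex:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Classic diffuse + specular term.  With per-vertex lighting this is
// evaluated at each vertex and the result interpolated across the triangle;
// with per-pixel lighting it is evaluated per fragment using an interpolated
// (or normal-mapped) normal -- so a flat, low-poly surface can still show a
// tight, round specular highlight.
float shade(Vec3 n, Vec3 light_dir, Vec3 half_vec, float shininess)
{
    float diffuse  = std::max(0.0f, dot(n, light_dir));
    float specular = std::pow(std::max(0.0f, dot(n, half_vec)), shininess);
    return diffuse + specular;
}
```
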
Now there are two sets of hardware that will not be able to support per-pixel lighting:
  • Cards without pixel shaders.  (GeForce 2, 3, 4; Radeon 7000-9200.)  You can tell your card does not have pixel shaders because the pixel-shader check box is not available in the rendering settings.
  • Cards with first generation shaders.  (This is the GeForce FX series and the Radeon 9500-9800 and X300-X600.)  These cards can actually perform per-pixel lighting, but they are so slow that per-pixel lighting will bring them below minimum frame-rate.

So unfortunately, there will be an authoring decision: add more triangles so that per-vertex lighting looks good, or use fewer triangles and rely on per-pixel lighting.  The decision will depend on what hardware you want to target at what performance level.  (For what it’s worth, hardware that cannot support per-pixel lighting usually isn’t very powerful, so there is something to be said for not having a lot of triangles on these lower end machines.)

Posted in File Formats, Scenery by | 3 Comments

The Future of Triangles Part 2: X-Plane 8

X-Plane 8 provides a useful baseline for rendering technology:

  • It is finished and unchanging.
  • Its use of shaders is very minimal, so even lower-end hardware can show the “X-Plane 8 model” of lighting.
  • X-Plane 8 rendering is completely supported in X-Plane 9.  (That is, turn off shaders, and OBJs should look the same in X-Plane 8 and 9.)

So what do we have, and is it any good?  Well, we have:

  • Per-vertex lighting.  Lighting is calculated per vertex, and interpolated between vertices.
  • Very limited materials.  Basically you can use attributes to set emissive lighting (so your day texture stays bright when back-lit, like taxiway signs) and shininess (to induce white specular hilites).  The shininess ratio isn’t very flexible, but it does match what the built-in ACF shiny property does.
  • Very fast vertex output within a batch.

I looked at some nice third party planes before writing this up, and one thing became clear: X-Plane can output a lot of vertices in an object if they are batched, and authors are using this aggressively.  The advantage of just using a lot of vertices is: curved surfaces look round, the errors that are induced by per-vertex lighting are less ugly, and the object looks the same everywhere (because this path isn’t dependent on having pixel shaders).

The big weakness of the current situation is that you have to burn a lot of vertices to get close to per-pixel lighting, particularly for very shiny surfaces.  I saw at least one plane (I do not recall who authored it) that just had more triangles in the engine nacelles than you could imagine.  They look beautiful even in X-Plane 8 – great specular hilites.  But that eats into your vertex budget pretty severely – it’s not a technique that you could use for every static airplane on a tarmac at LAX.
Posted in File Formats, Scenery by | 3 Comments

I Can’t Talk Now, I’m Flying a Plane!

Traditionally, a pilot’s priorities are: aviate, navigate, communicate.

But that might not be true for X-Plane for the iPhone.

It’s real! And it pretty much is X-Plane – there really are OBJs and DSFs in there, as well as an ACF model, all tuned for the iPhone.

In the next few posts I’ll blog a little bit about the impact of doing an iPhone port on scenery development. The iPhone is an embedded device; if you go digging for system specs you’ll see that it’s a very different beast from the desktop. The porting process really helped me understand the problems of the rendering engine a lot better, and some of the techniques we developed for the iPhone are proving useful for desktop machines as well.

Posted in Development, News, Scenery by | 11 Comments

OpenGL 3.0

A few people have asked me about OpenGL 3.0 – and if you read some of the news coverage of the OpenGL community, you’d think the sky was falling.  In particular, a bunch of OpenGL developers posted their unhappiness that the spec had prioritized compatibility over new features.  Here’s my take on OpenGL 3.0:

First, major revisions to the OpenGL specification simply don’t matter that much.  OpenGL grows by extensions – that is, incremental, à la carte additions to what OpenGL can do.  Eventually the more important ones become part of a new spec.  But the extensions almost always come before the spec.  So what really matters for OpenGL is: are extensions coming out quickly enough to support new hardware to its fullest capacity?  Are the extensions cross-vendor, so that applications don’t have to code to specific cards?  Are the real implementations of high quality?
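
In practice, “growing by extensions” means the application asks the driver at runtime what it can do and picks a code path accordingly – something like this classic (pre-GL3) extension-string check, simplified for illustration:

```cpp
#include <cstring>
#include <GL/gl.h>

// Classic way to probe for an a-la-carte feature at runtime.
// Simplified -- robust code should match whole tokens, not substrings.
bool has_extension(const char* name)
{
    const char* all = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return all != nullptr && std::strstr(all, name) != nullptr;
}

// e.g. use framebuffer objects only if the driver advertises them:
//   if (has_extension("GL_EXT_framebuffer_object")) { /* new path  */ }
//   else                                             { /* fallback */ }
```
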
So how are we doing with extensions?  My answer would be: “okay”.  When the GeForce 8800 first came out, the OpenGL extensions that provide DirectX 10-like functionality were NVidia-specific.  Since then, it has become clear that all of this functionality will make it into cross-platform extensions, the core spec, or some of each.  But for early adopters there was a difficult point where there was no guarantee that ATI and NVidia’s DirectX 10 features would be accessible through the same extensions.
(This was not as much of an issue for DX9-like features, e.g. the first generation of truly programmable cards.  NVidia had a bunch of proprietary additional extensions designed to make the GeForce FX series less slow, but the basic cross-platform shader interface was available everywhere.)
Of more concern to me is the quality of OpenGL implementations – and for what it’s worth, I have not found cases where a missing API is standing between me and the hardware.  A number of developers have posted concern that OpenGL drivers are made too complex (and thus too unreliable or slow or expensive to maintain) because the OpenGL spec has too many old features.  I have to leave that to the driver writers themselves to decide!  But when we profile X-Plane, we either see a driver that’s very fast, or a driver that’s being slow on a modern code path, in a way that is simply buggy.
Finally, I may be biased by the particular application I work on, but new APIs that replace the old ones don’t do me a lot of good unless they get me better performance.  X-Plane runs on a wide range of hardware; we can’t drop cards that don’t support the latest and greatest APIs.  So let’s imagine that OpenGL 3.0 contained some of the features whose absence generated such fury.  Now if I want to take advantage of these features, I need to code that part of the rendering engine twice: once with the new implementation and once with the old implementation.  If that doesn’t get me better speed, I don’t want the extra code and complexity and wider matrix of cases to debug and test.
In particular, the dilemma for anyone designing a renderer on top of modern OpenGL cards is: how to create an implementation that is efficient on hardware whose capabilities are so different.  I’ll comment on that more in my next post.  But for the purposes of OpenGL 3.0: I’m not in a position to drop support for old implementations of the GL, so it doesn’t bother me at all that the spec doesn’t drop support either.
The real test for OpenGL is not when a major revision is published; it is when the next generation of hardware comes out.
Posted in Development by | 2 Comments

MeshTool vs. Draped Polygons

An author asked me some questions that I think are so important that I’ll blog the answers:

  • The new texture paging system (LOAD_CENTER) works for both terrain textures (.ter files) and draped polygons (.pol files). You do not have to use draped polygons to get texture paging – you can use paging in a base mesh!
  • Orthophoto terrain via a (.ter) file is by far the preferred method for orthophoto sceneries – it is a vastly better option than draped polygons. Draped polygons are horribly wasteful of hardware resources, and should really only be used for tiny areas, e.g. airport surface areas. If you are using even a moderate amount of orthophotos, make a base mesh!
  • MeshTool is the future of photo scenery, and will continue to be the way to make high performance orthophoto meshes for X-Plane.

The future of MeshTool is bug fixes, a richer syntax, and some day maybe a UI front-end.

Posted in Scenery, Tools by | 2 Comments

ATTR_cockpit_region – Are We Confused Yet?

The choice of panels (2-d panel vs. 3-d panel) for your cockpit and the choice of OBJ commands (ATTR_cockpit vs. ATTR_cockpit_region) both affect how your 3-d cockpit looks.  Since these two techniques can both be varied, there are a lot of combinations, and 920RC2 does not have the right behavior.  (RC3 will fix this I think.)
2-d vs. 3-d Panel
The 3-d panel is a new flat panel whose purpose is to provide the image for ATTR_cockpit or ATTR_cockpit_region.  Building a new panel for 3-d has a few advantages:
  • The instruments can be packed together – no need for windows or other texture-wasting elements.  This can help reduce panel size — panel size is expensive when using ATTR_cockpit_texture.
  • The 3-d panel can be smaller than the 2-d panel; having a huge panel feed the 3-d object is slow.
  • Instruments that are drawn with perspective in the 2-d panel can be redrawn orthographically, which is more useful for texturing real 3-d overhead panels.
Because the 3-d panel is meant only to be used as part of a 3-d cockpit object, spot lights and flood lights are not available, nor is a night-lit alternative.  Why not?
  • Such customized 2-d lighting would not match the rest of the 3-d cockpit visually.
  • We will eventually have a more global lighting solution.
Basically I don’t want to provide features that will clash with the future implementation and eat framerate!  The 3-d panel is aimed at next-generation content.
ATTR_cockpit vs. ATTR_cockpit_region
ATTR_cockpit_region provides a new alternate panel texturing path that gets rid of legacy behavior for improved performance and image quality.
  • ATTR_cockpit_region requires the region be a power of 2, which saves VRAM.  (If your panel is 1280×1024, then ATTR_cockpit rounds it to 2048×1024.  Yuck!)
  • ATTR_cockpit_region grabs the lit and unlit elements of the panel separately, and can thus provide lighting that is consistent with the rest of the OBJ.
  • ATTR_cockpit_region does not preserve transparency (which isn’t a good way to model a 3-d cockpit performance wise) – removing the alpha feature improves framerate and saves VRAM.
  • ATTR_cockpit_region lets you pick out parts of a panel to texture only what you need.

This last point is less important now that we have 3-d panels (ATTR_cockpit_region came first) – it was meant to let you pick out a small subset of a large 2-d panel, skipping windows.  But if, for example, you need more than 1024×1024 pixels of panel texture, two cockpit regions are better than one 2048×1024 – some graphics cards hit a performance cliff when a cockpit or region exceeds 1024×1024.
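
The VRAM arithmetic drives both points – illustrative numbers, assuming 4 bytes per pixel and ignoring mipmaps:

```cpp
// Rough VRAM cost at 4 bytes per pixel (RGBA), mipmaps ignored -- illustrative only.
constexpr int BPP = 4;

// ATTR_cockpit with a 1280x1024 panel: rounded up to the next power of 2.
constexpr int attr_cockpit_bytes = 2048 * 1024 * BPP;       // 8 MB, ~37% of it padding

// ATTR_cockpit_region, grabbing only the 1024x1024 block of instruments you need:
constexpr int one_region_bytes   = 1024 * 1024 * BPP;       // 4 MB, no padding

// Need more than 1024x1024?  Two 1024x1024 regions cost the same 8 MB as one
// 2048x1024 region, but keep each texture at the size where more cards stay fast.
constexpr int two_regions_bytes  = 2 * 1024 * 1024 * BPP;   // 8 MB
```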

Expected Behaviors:
(Under all situations, the instrument brightness rheostats should be preserved correctly.)

ATTR_cockpit + 2-d panel:

  • The 3-d cockpit should look exactly like the 2-d cockpit.
  • The 2-d panel is used as source.
  • Panel transparency is preserved.
  • Spot/flood lighting effects are available and work.
  • Flood color is the forward flood color.
  • The panel texture and object texture may not look the same under some lighting conditions.
ATTR_cockpit + 3-d panel:
  • The 3-d panel is used as source.
  • Transparency is preserved.
  • Spot lights are not available, but flood lights work.
  • Flood color is the side flood color.
  • The panel texture and object texture may not look the same under some lighting conditions.
ATTR_cockpit_region + 2-d panel:
  • The 2-d panel is used as source.
  • Transparency is not available.
  • Spot and flood lights are not available.
  • Panel and object texture colors should match under all lighting conditions.

ATTR_cockpit_region + 3-d panel:

  • The 3-d panel is used as source.
  • Transparency is not available.
  • Spot and flood lights are not available.
  • Panel and object texture colors should match under all lighting conditions.

The Future

Basically both the 3-d panel and ATTR_cockpit_region are aimed at next-generation cockpits – they both strip legacy features to provide a clean platform for real 3-d cockpits.  The expectation is:
  • Global lighting will be applied to all 3-d geometry – panel texture and object texture. Non-emissive lighting (spot lights, flood lights) will apply to everything.
  • Windows will be built using geometry, not alpha.
  • The panel texture can be minimized by packing a 3-d panel and using regions.  Manipulators let you provide interaction to regular object geometry.

Posted in Aircraft, Cockpits, File Formats, Panels by | 1 Comment