On Tuesday Apple announced new Macs powered by Apple's M1 chip, a custom ARM system-on-a-chip (SoC) derived from the A-series SoCs used in the iPhone and iPad.

The rest of this post is probably only of interest to Mac users, but even for Windows users it's worth noting that the M1 chip is fast. It targets laptop and low-power use cases, not gamer-class hardware, and it does not include a discrete GPU. Here are Geekbench scores for my 27″ iMac – Intel says the i9 in it is a 95W part:

Single-core: 1265, Multi-core: 9414

and here’s a new M1-based MacBook Air, with 8 cores running at ten watts:

Single-core: 1732, Multi-core: 7545

That’s…a pretty high score for Apple’s first trip into desktop land. One more for perspective:

AMD’s new Ryzen 5900X, which is a great chip, with a 105W TDP:

Single-core: 1619, Multi-core: 13656

The takeaway here is that Apple doesn't just have fast chips for its new machines – it might have the fastest ones.

Now, how is this going to work with X-Plane and plugins?

X-Plane 11 is an x86_64 app, as are all plugins ever written for it. So if you run it on an Intel Mac, it just works, and if you run it on one of the new ARM Macs, it will run using Rosetta, which will translate the code as you fly.

In the future, we will have an X-Plane build that is "universal" – that is, it contains both ARM and x86_64 code – and we will have a plugin SDK that contains both ARM and x86_64 code. At that point, plugin authors can start recompiling plugins to contain both types of code as well. Users with ARM Macs will have the choice to (1) run 'natively' in ARM for higher performance and use only plugins that are universal or (2) continue to run x86_64 code under Rosetta, so that all plugins work.

(This option is available for any universal app on an ARM Mac – you turn "Open using Rosetta" on or off in the app's Get Info window in the Finder.)
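
If a plugin (or the sim itself) wants to know which way it ended up running, macOS exposes that through the documented "sysctl.proc_translated" sysctl key. Here's a minimal C sketch of the check – plain sysctl code, not an X-Plane SDK call, so treat it as an illustration rather than anything we ship:

    #include <stdio.h>
    #include <sys/sysctl.h>

    /* Returns 1 if this process is being translated by Rosetta 2,
       0 if it is running natively, and -1 if the key is unavailable
       (e.g. on macOS releases before Big Sur). */
    static int running_under_rosetta(void)
    {
        int    translated = 0;
        size_t size       = sizeof(translated);
        if (sysctlbyname("sysctl.proc_translated", &translated, &size, NULL, 0) == -1)
            return -1;
        return translated;
    }

    int main(void)
    {
        switch (running_under_rosetta())
        {
            case 1:  printf("x86_64 code translated by Rosetta 2\n"); break;
            case 0:  printf("running natively\n");                    break;
            default: printf("Rosetta status unknown\n");              break;
        }
        return 0;
    }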

This situation is exactly the same as the PPC->x86 transition we went through years ago.

Plugin developers: once Big Sur and the new Xcode are out and we have an ARM plugin SDK, you can add a new architecture to your project and that should be it, as long as you don't use any hand-written x86 assembly code in your add-on.
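
To make "add a new architecture" concrete: the only code-level gotcha is architecture-specific source – hand-written assembly or CPU intrinsics – which has to be fenced off per slice so each half of the universal binary compiles only its own path. A hedged sketch (horizontal_sum is a made-up helper for illustration, not an SDK function):

    #include <stdio.h>

    /* Hypothetical helper: sum four floats, with a per-architecture fast path.
       Each slice of a universal binary compiles only its own branch. */
    #if defined(__x86_64__)
      #include <immintrin.h>                 /* SSE intrinsics: x86_64 slice only */
      static float horizontal_sum(const float v[4])
      {
          __m128 x = _mm_loadu_ps(v);
          x = _mm_add_ps(x, _mm_movehl_ps(x, x));       /* -> v0+v2, v1+v3 */
          x = _mm_add_ss(x, _mm_shuffle_ps(x, x, 1));   /* + (v1+v3)       */
          return _mm_cvtss_f32(x);
      }
    #elif defined(__aarch64__)
      #include <arm_neon.h>                  /* NEON intrinsics: arm64 slice only */
      static float horizontal_sum(const float v[4])
      {
          return vaddvq_f32(vld1q_f32(v));
      }
    #else
      static float horizontal_sum(const float v[4])      /* portable fallback */
      {
          return v[0] + v[1] + v[2] + v[3];
      }
    #endif

    int main(void)
    {
        const float v[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
        printf("sum = %f\n", horizontal_sum(v));          /* 10.000000 on either slice */
        return 0;
    }

If every source file in an add-on compiles cleanly like this for both architectures, the Xcode side really is just adding arm64 to the architectures list in the build settings.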

About Ben Supnik

Ben is a software engineer who works on X-Plane; he spends most of his days drinking coffee and swearing at the computer -- sometimes at the same time.

50 comments on "ARM Macs"

  1. Why do you say Apple has the fastest chips when its multi-core score is almost half that of the referenced AMD chip?

    1. Because the single-core perf is faster, and the AMD chip is getting only 1.8x better multi-core throughput with 10x the power consumption, 1.5x the physical cores and 3x as many logical cores. And this is with some of Apple's cores being intentionally low power – only four of them are performance-oriented.

      We don’t know what will happen when Apple pushes into higher-end parts…it isn’t surprising that M1 aims at battery powered devices (where Apple’s perf-per-watt really matters) and we don’t know anything about whatever parts come next or whether they could run into die-size constraints. But based on the scalability they’ve shown, they should be competitive scaling up too.

      1. That's nice, but the ARM Mac GPUs are orders of magnitude weaker, so you have a fast CPU with a GPU that can barely render a potato. It's not worth the time…

        1. They replace Intel’s embedded GPUs and I expect them to be an upgrade in that regard. I’d love it if everyone had a 2080 or something but we’ll have users with embedded GPUs and we have to cope with them. I’ll take the improvements I can get. That they have brains in common with the iPhone is useful for us as a company.

          1. They are seriously an upgrade. One reviewer said they are 'much quicker than a 1050 Ti' (which is a low-end card these days, admittedly), but bearing in mind this is only the first gen of the new silicon, I expect great things.
            I had hoped that Apple had gone for RDNA 2, though – it would probably have resulted in drivers becoming available at some point for AMD's new desktop cards (for our Hackintoshes).

          2. For X-Plane 12, I'll say that you will have to look past integrated GPUs, as designing for them will seriously limit the graphics "base", which in turn will degrade the simming experience for anyone who has a dedicated GPU.
            That is by far the majority of X-Plane users – very few people sit with a laptop when they fly X-Plane, and none of those have flight simming as a serious hobby, so you should rather keep "mobile" for (all the…) mobile devices.

          3. We already share code between mobile and desktop. So in theory we can write rendering code for big discrete GPUs to maximize graphics, targeting NV and AMD cards, and we can write a second pile of rendering code that targets Apple GPUs for the ARM Macs and iPhone Xs. This would be an _intermediate_, not _low end_ renderer – we’ve seen from the iPhone X that it can do more than we ship on mobile, particularly if we take advantage of knowing at the API level that it is a tiler and has shared on-chip memory.

            If this seems like a lot of rendering code, remember that _right now_ we ship:
            – A forward and deferred renderer for desktop, targeting 3 driver APIs
            – These renderers have ‘steps’ in their quality via the settings, including an ultra-low-quality mode for hard-luck GPUs. (Set that FX setting to zero and things look pretty bad.)
            – A separate simpler forward renderer on mobile, also with several quality levels to cope with GPU differences.

            We need a range of levels on all of our products. In a world where we had Intel and Apple GPUs in the low-end bucket on desktop, we might not be able to give them special treatment. But since the Apple "A" chips on mobile are _so_ similar to the M1, we can cover our entire high-end* on mobile by targeting iPhones/iPads, reuse that code on the M1, and I think the results won't look low-end at all.

            In other words, in the bigger picture this represents less total fragmentation for us, in that we'll be seeing two classes of Metal GPUs and not three.

            * In the Android world the NVIDIA Tegras are very nice, but they just don't seem to have deep market penetration, and they're often not clocked up that high. Realistically, the Android side of the GPU world really means Malis, Adrenos, etc. It's hard to overstate Apple's raw hardware computing dominance on the mobile side – it's been like that for several iPhones now.

      2. This is a low-power laptop CPU versus the highest-performance parts AMD and Intel currently have to offer.

        I’m absolutely thrilled to see what Apple has to offer in iMacs or even Mac Pros.

  2. How does it compare to the new AMD 4000 series mobile chips? I know they’re quick and use less power, but not super low power like these Apple chips. I guess the AMD chips are also a known quantity with GPUs.

    1. Looks like Apple’s faster, although the mobile AMD chips are a gen old. The top-end 4000 series might be the same or better for mutli-core (it’s got more logical cores), definitely more wattage. But the other concern for Geekbench scores is that on the PC/Linux side you might have overclocked machines in there. That’s not necessarily invalid (in that you _can_ overclock your box, as long as you don’t tell me about it) but it implies even higher power use.

  3. I can confidently say that with this benchmark from a 10W CPU, they are gonna kill it.
    Ironically, it was X-Plane that stopped me being a Mac guy (well, for most things), because Macs don't currently (or may never?) support VR. I don't think having the fastest CPUs (and they *are* fast; anyone looking at the multi-core score and going 'meh' is blind) is going to bring me back. Unless they get VR. Then hell yes.

  4. Great news, thanks. Will that universal binary already come with X-Plane 11.x, or will that be something for X-Plane 12?
    Looking forward to it, in any case! 🙂
    Hope 3rd party developers will follow asap!

  5. Hello Ben,

    I am curious whether you tested X-Plane 11 under Rosetta 2 using a Mac mini DTK.
    If yes, can you share some benchmarks, just to get an idea of how it might run on an Apple M1?
    If not, will you get a Mac with an M1 and benchmark it?

    Thanks.

    Kind Regards,

    André

    1. DTKs are all covered by NDAs, so if we had hardware access we couldn’t post anything.

      I can say that Geekbench CPU benchmarks match our overall gut feeling of how well X-Plane does in terms of CPU-bound stuff.

  6. Those of us who develop primarily on Windows may not be aware of what's required to include the ARM architecture in our current Mac builds. Could you recap those requirements, to be sure something more convenient has not come about? Is there any situation or use case where someone would use x86 assembly code, or were you just kidding there? 😉

    It does sound as if we’ll be safe, if limited, if we do not include the ARM architecture and thus expect our plugins to be used under Rosetta. I wonder what the performance hit might be, in a general sense – mainly FPS.

    1. If you have the latest Xcode 12 on Catalina, you turn on ARM as an architecture in the build settings, and then you just need to have libraries that have done that too. If you don't know why you'd need x86 assembly, you probably didn't write any by hand. :-)

      1. LOFL. Yeah, I think that about sums it up. No cryptic, nearly impossible to read code here. One of the joys of being largely self-taught – you don’t get exposed to that scary stuff. But thanks for outlining the basics – simple enough, even though there are hardware limitations in there too. The price of progress.

      2. On further thought… one clarification, please:

        By libraries, are you referring to the X-Plane SDK libs that LR will release as an update? One would think that Xcode 12 would already have libraries enabled for ARM.

        1. "Libraries" refers to all of:
          – our frameworks (we have to release ARM versions) – that's on us to do,
          – system libraries (you get them with Xcode 12.2), and
          – third-party libs you may be using that are provided in binary form and not source – IF you have this last category, you have to go find a universal update. Not all plugins have this category.

      3. Did you mean Big Sur, Ben? I see that requirement in what appears to be an edit to the original article.

        Catalina is preferable – Big Sur means a system update if a dev has an older Mac that’s just used to build plugins. I think we’ve touched on this question before. 😉

        1. As far as we can tell, the tool chain to develop for ARM (the Big Sur SDK and Xcode 12.2) DOES work on Catalina, so this is a case where the new OS isn't needed to develop for the new OS.

  7. Judging by all the M1 reviews out at the moment raving over the Apple Silicon performance, it might be only a few years before the Windows community migrates towards an ARM-based processor, maybe produced by NVIDIA instead of Intel? Just think how similar the coding could then become for developers.

    1. Right – except it's not all ARM chips that are putting up the big numbers, just Apple's – they're way ahead of the other chip providers in the mobile space.

  8. What about OpenGL? Apple says it’s deprecated but “will be available on Apple silicon” (and they appear to have removed it from Big Sur). That’s going to break a lot of plugins.

    1. OpenGL is available on Big Sur. I would be very surprised if the M1 chip didn’t have OpenGL support, but I also don’t have the hardware here to actually test it.

      1. Ah, my bad: OpenGL library access was _changed_ on Big Sur: Python can no longer find it. The folks at python.org are working on a fix. So XPPython3 plugins can't use OpenGL either without a patch.
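
         For what it's worth, natively compiled plugins shouldn't be affected: Big Sur moved the system dylibs into the built-in dynamic linker cache, so the files are gone from disk (which appears to be what trips up Python's library lookup), but dlopen() by the usual framework path still resolves. A tiny C sketch of that check, using the standard OpenGL framework path:

             #include <dlfcn.h>
             #include <stdio.h>

             int main(void)
             {
                 /* On Big Sur this file no longer exists on disk, but dlopen()
                    still finds it in the shared dyld cache. */
                 void *gl = dlopen("/System/Library/Frameworks/OpenGL.framework/OpenGL", RTLD_LAZY);
                 if (gl) { printf("OpenGL framework loaded\n"); dlclose(gl); }
                 else     printf("dlopen failed: %s\n", dlerror());
                 return 0;
             }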

  9. To answer some questions concretely:

    I decided, just for yucks, to install XP Mac on my new M1 Mini. 16GB RAM/VRAM.
    It’s a non-native app, running on an “iGPU,” on the lowest-end version of Apple’s new architecture, on the 1.01 version of its operating system for it.

    Oh, and I’m on the ground at MisterX’s KSFO, with Orbx’s NorCal.
    With that in mind, here are my XP settings for 2560×1440:

    https://www.dropbox.com/s/4m2126mzgqwkuas/Screen%20Shot%202020-11-18%20at%202.41.46%20AM.png?dl=0

    Here are my frames from the cockpit:

    https://www.dropbox.com/s/11net3d8hits8ng/Screen%20Shot%202020-11-18%20at%202.42.31%20AM.png?dl=0

    And here’s an outside view:

    https://www.dropbox.com/s/3424fj1hirpkfxr/Screen%20Shot%202020-11-18%20at%202.43.11%20AM.png?dl=0

    This, ladies and gents, is just totally bonkers. Oh, and the fans never ramped up, and the chassis stayed cool to the touch. XP native, on, say, an ARM iMac, with more dedicated GPU cores, ought to be devastating. That said, my little experiment has already blown my mind.

    1. Right – the M1s are stronger _CPU_ machines than GPU machines in that around here we sort of expect big discrete GPUs for high res.

      But while X-Plane runs its CPU-side code under JIT emulation (Rosetta), the GPU code runs _natively_ because we're already a Metal app. So the good news is that we're a native app where we would have been most constrained. But this also does demonstrate that the M1 is screaming fast and Rosetta works pretty well.

      1. Done. Uninstalled it. It was only a science experiment, or perhaps a carnival act, as Brunner never saw fit to provide a version of their yoke driver for macOS. I did notice that cranking textures up to max did produce an outright Out Of VRAM message, and antialiasing was expensive too. So, clearly, it's not magic, just a fast chip with a nice Metal implementation on Laminar's part. Bodes very well for the future.

        1. This is the bigger issue: you're splitting 8 or 16GB of RAM between the CPU and GPU… it's likely only running with about 2GB of RAM at most for the GPU, maybe 4GB in the case of the 16GB option, so it's just not going to run well. And then it has to do this while sharing the bus with the CPU, and it's only a 128-bit bus on top of that. Better off just porting X-Plane Mobile to the ARM Macs.

          1. Clearly the M1 doesn’t have an NV or AMD-style super-wide memory bus…but since it’s strictly a tiling GPU, it’s not an Apples-to-Apples comparison for things like fill rate – a lot of rendering that is over-the-bus on an AMD card is on-chip on an M1 or the A-series.

    2. That’s comparing favourably with the performance I get on a 5,1 MacPro Server with dual, six core 2.66 Xeons, 64 GB of RAM, a Radeon 5770 and a 12 GB NVIDIA Titan X and an NVMe Drive.

  10. I am not a developer, just a pilot. I currently have a 2017 iMac I am using to run X-Plane 11, and a small NUC-type PC so I can use a second monitor as a touch screen with Air Manager.

    It sounds like there isn’t much to worry about as far as performance goes once the higher end chips are released in the next year or two. Hopefully the expectations pan out for the mid/high end systems.

    I am wondering how easy or hard it will be for those PC-centered programmers to create the ARM-format plugins and drivers needed for their cross-platform apps, such as Air Manager, or for physical Logitech hardware?

    Hopefully, this will increase the options those of us with Apple hardware have.

    1. I think if they can make Mac apps _at all_, the ARM part won't be that hard. The big change is going from one set of compilers/tools to another, having access to a Mac to compile on, and not using Win32 APIs. x86 Macs will be able to build "universal" add-ons. High-end perf will depend on what they do for a GPU in the 'big' machines, which is a total mystery right now.

        1. Unlikely they will go with their own GPUs based on what they have already. The point of ARM like this is so Apple has full in-house SoCs.

          1. Do you mean “Unlikely. They will go with their own…” or “Unlikely they will go with their own…”?

            Punctuation is important.

            Anyway, they still have to pay licence fees to ARM for their own CPUs, and guess what: ARM is being bought by NVIDIA. There is no way to completely get away from it.

      1. The bigger issue is that Apple has locked down ARM Macs to require that ALL code be signed… this means even freeware plugins will have to have signed code…

        1. Wat? You got a source for this? (I’d be particularly surprised that the x86 and ARM code signing requirements would differ, when they don’t represent differential security threats)

  11. I’ve been testing X-Plane on my new M1 MacBook Pro and the results are mind bending. I can’t wait to see what it’s like when X-Plane is actually compiled to run native on the M1 chip set. My main machine is a 12 core, 2019 Mac Pro with 96GB of RAM and an AMD Pro Vega II with 32GB of VRAM. Same aircraft, scenery, and settings, the little MacBook Pro, running in EMULATION is faster than the Mac Pro?! It’s unbelievable, and truly awesome what will be possible in the future. I have to say it even looks better somehow, seems like the colors, lighting, and shadow are a little smoother and better rendered. Even running with Max objects, max visual effects, max texture quality, antialiasing 2x SSAA + FXAA, Anisotropic filtering at 8x, and shadows, using XCODR’s pretty intense KDEN package and the Tolis A319, I’m getting 25 to 50 fps.

    1. For what it’s worth, you’d only see a CPU, not GPU win from going native. So if _pausing_ the sim doesn’t improve your FPS (e.g. you’re GPU bound), the native version won’t help a ton.
