• atzero@alien.topB · 10 months ago

    Awesome video! I’d never even thought about x86 applications using native Arm libraries but it makes sense.

    • Tman1677@alien.topB · 10 months ago

      It wasn’t possible until recently, but it’s a really interesting sleeper feature, and an incredibly smart one imo.

  • wtallis@alien.topB · 10 months ago

    Interesting topic, but a tragically shallow treatment of the subject. He tested with Handbrake and found Apple’s Rosetta 2 slightly better (lower overhead) than Microsoft’s solution, then tested with some of his own code and found Microsoft’s significantly better than Apple’s. But we never get any description of what that mystery code is doing, so it’s a largely worthless result and leaves him without enough justification to draw any solid conclusions (Handbrake is a good test to start with, but not enough on its own).

    I would have liked to see a comparison of which x86 instruction set extensions the respective compatibility layers support (especially the SIMD extensions); an overview of the general techniques each layer uses (i.e. ahead-of-time translation, JIT, instruction-by-instruction emulation, or a mix), including how much caching of translations affects first-run versus subsequent-run performance; and a comparison of both single-threaded and multi-threaded workloads (because ARM’s weaker memory model is a major challenge for translating multi-threaded x86 code with low overhead).

    • Pristine-Woodpecker@alien.topB · 10 months ago

      because ARM’s weaker memory model is a major challenge for translating multi-threaded x86 code with low overhead

      As someone else here has pointed out, Windows on ARM emulation ignores this by default and just accepts the possibility of wrong results or app crashes.

      So, you’re likely not going to see much performance difference, but…

  • MrMobster@alien.topB · 10 months ago

    This video entirely misses the point. The x86 emulator (technically a binary translator) is just part of the equation. The important thing is that Apple emulates some x86 behavior, notably its stronger memory ordering, in hardware. And that’s not something a software emulator can do, at least not without significant performance problems. As far as I know, current Windows on ARM “cheats” by pretending memory ordering is not a problem. This works until it doesn’t: some software will crash or produce incorrect results. Microsoft offers different emulation strictness levels, as described in their support docs, each of which comes with increasing performance cost. Since Qualcomm did not announce technology to safely emulate x86, I assume they don’t have it, which would mean the same clusterfuck of crashing and incompatible software that Windows on ARM has to deal with now.

    https://learn.microsoft.com/en-us/windows/arm/apps-on-arm-program-compat-troubleshooter

    • SentinelOfLogic@alien.topB · 10 months ago

      There is a post in r/hardware from just a few days ago showing that the hardware features only have a small impact on performance, not a “significant” one.

      • boredcynicism@alien.topB · 10 months ago

        One could enable the “very strict” option above and see which tests take a performance hit, and by how much.

    • Pristine-Woodpecker@alien.topB · 10 months ago

      As far as I know, current Windows on ARM “cheats” by pretending memory order is not a problem.

      Indeed, as your link confirms:

      These settings change the number of memory barriers used to synchronize memory accesses between cores in apps during emulation. Fast is the default mode, but the strict and very strict options will increase the number of barriers. This slows down the app, but reduces the risk of app errors. The single-core option removes all barriers but forces all app threads to run on a single core.

      Very good catch!

  • jocnews@alien.topB · 10 months ago

    Windows emulation is less likely to get rug-pulled out from under your feet by Microsoft, whereas history shows Apple will do just that in a few years and remove the translation layer to mess with people/devs (‘cattle loses discipline without flogging’, Apple style).

    So I’d say it wins on that ground.

    • iindigo@alien.topB · 10 months ago

      It’s mainly a difference in philosophy. Apple has long seen translation/compatibility layers as strictly transitional rather than something to be leaned on indefinitely, because even the best of that kind of technology comes at a cost to performance and efficiency. Not to mention that if you let it hang around for too long, you can end up with multiple compatibility modes in play, which compounds those losses.

      As such, they push devs to release updated binaries that run natively. This isn’t difficult for the vast majority of Apple platform apps, usually involving little more than ticking a new architecture checkbox in Xcode, thanks to AppKit/UIKit having long been architecture-agnostic. The software that suffers tends to be cross-platform, where major assumptions have been made (“I don’t need to think about anything but x86” and similar).