• Defaced@lemmy.world · 1 year ago

    Whoever can make a compatibility layer that successfully translates x86-64 to ARM (and vice versa) and make it widely available will be a major player in the market. Valve has already done something broadly similar with Proton, and Apple with Rosetta 2.
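Conceptually, such a translation layer can be sketched as a loop that decodes each source-ISA instruction, emits equivalent target-ISA instructions, and caches translated blocks so hot code is only translated once. This toy Python sketch uses entirely made-up "x86-like" and "arm-like" mnemonics purely for illustration; real translators such as Rosetta 2 operate on actual machine code.

```python
# Toy sketch of a dynamic binary translator. All mnemonics here are
# hypothetical; this only illustrates the translate-then-execute idea.

# A made-up "x86-like" basic block: (opcode, destination, source).
X86_BLOCK = (("mov", "a", 2), ("mov", "b", 3), ("add", "a", "b"))

# Source mnemonics mapped to their "arm-like" equivalents.
MNEMONIC_MAP = {"mov": "movz", "add": "add"}

_translation_cache = {}

def translate(block):
    """Translate a basic block once, then reuse the cached result."""
    if block not in _translation_cache:
        _translation_cache[block] = [(MNEMONIC_MAP[op], *args)
                                     for op, *args in block]
    return _translation_cache[block]

def execute(arm_block):
    """Interpret the translated 'arm-like' block."""
    regs = {}
    for op, dst, src in arm_block:
        if op == "movz":
            regs[dst] = src
        elif op == "add":
            regs[dst] += regs[src]
    return regs

regs = execute(translate(X86_BLOCK))
print(regs["a"])  # 5
```

Real translators also have to deal with condition flags, memory ordering, and self-modifying code, which is where most of the engineering effort goes.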

    • Spiderfarmer@lemmy.world · 1 year ago

      Apple developed it as a stopgap. In the Windows world, x86-64 will be around for a long, long time. Not sure anyone is willing to support something like that for the next 10 years.

      • AggressivelyPassive@feddit.de · 1 year ago

        It’s all a question of market share. If (big if) ARM gains a foothold in the Windows market, software vendors will simply offer two binaries, and/or Microsoft could provide tooling for easy porting.

        Apple’s real genius move, though, is not Rosetta but building x86 compatibility features into the M-series chips. That way the emulation is much faster.

        • ggppjj@lemmy.world · 1 year ago

          Windows does have a 32/64-bit x86 compatibility layer, and from my limited bashing around in VS2022, ARM already looks like a fairly well-supported build target.

    • mr_tyler_durden@lemmy.world · 1 year ago

      Proton and Rosetta 2 are two totally different beasts. One lets Windows programs run on non-Windows hosts; the other translates x86 to ARM.

      I’m not aware of Proton doing anything like Rosetta 2, and if it did, Valve would probably have used an ARM chip in the Steam Deck instead of an x86 one.

      Maintaining two-way compatibility doesn’t seem like an important goal. One way, x86 to ARM, sure, but not ARM to x86. Apple clearly sees x86 as a dead end for its own product lines, and we will see whether the rest of the industry follows suit over time. Of course there is a ton tied up in x86, but aside from legacy apps and games I don’t have much need for x86 in my life.

      Even the servers I run are trending towards ARM due to the power savings. AWS Graviton instances were roughly 25-30% cheaper than x86 last I looked.

      • Defaced@lemmy.world · 1 year ago

        I understand Proton isn’t the same thing; it was just an example of a compatibility layer. And how would a bidirectional compatibility layer not be beneficial? x86 in servers may be going away, and even that’s debatable, but x86 isn’t going anywhere in the consumer space. Graviton chips are great, but they’re useless if there’s no viable way to run legacy x86 applications on ARM without breaking the bank until your business is ready to transition the workload to ARM.

        Amazon was working on a compatibility layer for exactly this purpose, but I suspect they’ve given up: they’ve slowly added Intel and AMD chipsets back into newer generations of their general-purpose EC2 classes, and there hasn’t been a single word about x86 compatibility on Graviton beyond “just use ARM-based workloads.”

        You can’t just move to ARM because it’s cheaper; that’s not going to work. You need to make the effort to move away from x86 and adopt ARM-native applications before making that jump. With a compatibility layer, none of that matters, and that’s where the money is: if I can build a compatibility layer that translates an x86 binary to an ARM binary, I can move those workloads to the cheaper, more efficient server class.

        • Dark Arc@social.packetloss.gg · 1 year ago

          it was just an example of a compatibility layer

          It’s a different problem under the same name. You’re comparing apples to oranges by comparing WINE and Rosetta.

          WINE (as I understand it) leaves the program’s instructions untouched and basically just implements the Windows functions that are missing on the host.

          Rosetta actually swaps out every instruction (and presumably has to do a WINE-like thing as well to handle dynamically linked code, though in theory that’s more of a 1-to-1 mapping since you’re not swapping OSes).
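The WINE-style approach described above can be sketched as an API shim: the program’s own code runs natively, and only calls into missing OS libraries get redirected to host implementations. The table and program below are hypothetical illustrations (only the name `MessageBoxA` is a real Win32 entry point), not how WINE actually works internally.

```python
# Sketch of a WINE-style API shim: the program's instructions run
# as-is, and only calls to OS functions missing on the host are
# redirected to local implementations.

calls_seen = []

def host_message_box(text, title):
    """Host-side stand-in for the Win32 MessageBoxA the host lacks."""
    calls_seen.append((title, text))
    return 1  # IDOK, as the Windows API would return

# The shim: Win32 entry-point names mapped to host implementations.
WIN32_SHIM = {"MessageBoxA": host_message_box}

def windows_program(api):
    # The program's own logic is untouched; only its API calls go
    # through the lookup table.
    return api["MessageBoxA"]("Hello from a Windows app", "Demo")

assert windows_program(WIN32_SHIM) == 1
assert calls_seen == [("Demo", "Hello from a Windows app")]
```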

          if there’s no viable way to translate those x86 legacy applications over to ARM

          For a properly written application that doesn’t have a lot of third-party binary dependencies, this really shouldn’t be that big of a job. You just recompile the code on ARM, test, and fix any bugs.

          I’d speculate that most of the time this stuff comes up like: “this was written for Windows with some very specific Microsoft libraries/tech in mind and we want to run it on Linux and ARM as cost cutting measures.”

          Granted, it’s not unusual for a large enough code base to contain some code that wasn’t written in an architecture-neutral way or that does some shady stuff.
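A classic example of such non-neutral code is assuming a particular byte order when parsing binary data (only one of several portability traps, alongside type sizes and alignment). This Python sketch shows how the same bytes decode differently depending on the assumed order:

```python
import struct

# The same four bytes, decoded under two different byte orders.
data = b"\x01\x00\x00\x00"

little = struct.unpack("<I", data)[0]  # explicit little-endian
big = struct.unpack(">I", data)[0]     # explicit big-endian

print(little, big)  # 1 16777216

# Code that uses the platform's native order ("=I" or "@I") silently
# bakes in an assumption about the machine it runs on; being explicit
# keeps the parser portable across architectures.
```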

          • Defaced@lemmy.world · 1 year ago

            I’m not comparing Rosetta and proton…I’m just using them as examples of compatibility layers that make their products viable. I frankly don’t give a shit about the inner workings of Rosetta or proton, but thanks for clarifying I guess.

      • Troy@lemmy.ca · 1 year ago

        Gord forbid those binaries want to do any actual work though…

          • Troy@lemmy.ca · 1 year ago

            Windows on ARM will run x86 binaries. But if these binaries require any real processing power, they choke or run really really slowly.

            • IchNichtenLichten@lemmy.world · 1 year ago

              I guess it depends on how you define “real processing power”. I run Windows on Arm on my Mac Studio through Parallels. I installed Steam and played Civ 4 and it’s great. Sure, it’s an old game but it runs smoothly.

    • impiri@lemm.ee · 1 year ago

      It’s gotta be Microsoft building it into Windows. The Apple Silicon transition wouldn’t have been nearly as smooth if people had had to pay for, say, CrossOver or something to use their Intel apps. And the tepid response to the ARM Surface models makes me think it’s a must, despite the UWP dream.

      The good news is that Rosetta 2 shows it can be done extremely well!

      • merthyr1831@lemmy.world · 1 year ago

        I could be wrong, but that might be Linux-only. Windows and macOS both have their own proprietary compatibility layers, but Windows had shit support for theirs for years, which hurt its reputation badly.

  • N-E-N@lemmy.ca · 1 year ago

    The recent Snapdragon chips have been awesome, but the chip itself isn’t Apple’s magic; their x86-to-ARM translation layer is.

    • fuckwit_mcbumcrumble@lemmy.world · 1 year ago

      It’s not just the translation layer. There’s hardware built into the CPU to accelerate the translation.

      Pair that with the CPUs already being so incredibly efficient and you’ve got something that runs x86 programs as well as the old x86 hardware did.

    • Anomalous_Llama@lemmy.world · 1 year ago

      And the fact that they have silicon space dedicated to accelerating that translation layer specifically.

      Good software and good hardware married to make a kickass move to ARM.

    • atetulo@lemm.ee · 1 year ago

      Why do you want to use ARM in a desktop?

      Its main benefit is power saving.

        • atetulo@lemm.ee · 1 year ago

          More powerful than X86? Or are there other reasons to use it in a desktop?

          • blujan@sopuli.xyz · 1 year ago

            At least as powerful for less energy; being energy-efficient is a good thing even in desktops.

              • Dremor@lemmy.world · 1 year ago

                Both are instruction sets. They are part of the equation that gives a CPU its “power”, but they aren’t the only factor.

                What gives ARM its power-efficiency edge is its smaller instruction set, which translates to a smaller die for the same work. That is also its Achilles’ heel: workloads that use the missing instructions have to be translated either by hardware or by software, or they simply won’t run. Each option has its own downside (bigger die and lower energy efficiency, or more overhead and slower execution).
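That translation cost can be illustrated with a toy example (all instruction names invented): a CISC-style ISA can increment a value in memory with a single instruction, while a RISC-style ISA needs a load/modify/store sequence, so translating the former into the latter multiplies the instruction count.

```python
# One hypothetical CISC instruction vs. the RISC sequence it expands to.
cisc = [("inc_mem", 0)]                # mem[0] += 1 in a single instruction

risc = [("ldr", "r0", 0),              # r0 = mem[0]
        ("add", "r0", 1),              # r0 += 1
        ("str", "r0", 0)]              # mem[0] = r0

def run(program):
    """Interpret a toy program against fresh register/memory state."""
    mem, regs = {0: 41}, {}
    for op in program:
        if op[0] == "inc_mem":
            mem[op[1]] += 1
        elif op[0] == "ldr":
            regs[op[1]] = mem[op[2]]
        elif op[0] == "add":
            regs[op[1]] += op[2]
        elif op[0] == "str":
            mem[op[2]] = regs[op[1]]
    return mem[0]

assert run(cisc) == run(risc) == 42    # same result...
print(len(cisc), len(risc))            # ...but 1 vs. 3 instructions
```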

                But for workloads that do not use x86’s specificities, ARM is very competitive.

                • atetulo@lemm.ee · 1 year ago

                  But for workloads that do not use x86’s specificities, ARM is very competitive.

                  Yeah, but would those workloads be more performant if they used CISC features?

  • 👁️👄👁️@lemm.ee · 1 year ago

    It would be an instant buy for me, especially since it wouldn’t be as locked down and work better with Linux. I’ll deal with the software issues later, gimmie now!

    • erwan@lemmy.ml · 1 year ago

      Question is: is Qualcomm going to pull the same bullshit with drivers as on their mobile ARM processors, where you need to wait for their proprietary Linux driver to upgrade your kernel?

      • CalcProgrammer1@lemmy.ml · 1 year ago

        Hopefully the open-source Qualcomm drivers will support this chip. The SDM845 is pretty well supported in the mainline kernel; I’m typing this comment on an SDM845 device running the 6.5 kernel and postmarketOS. The Freedreno driver is probably the best ARM GPU driver in Mesa.

  • Possibly linux@lemmy.zip · 1 year ago

    Good luck making that work with Windows. If it does, it’s not going to be profitable, as Microsoft will eat your arm.

    I would love to see one of these running Linux, but I don’t see that happening realistically.

  • exu@feditown.com · 1 year ago

    I sure hope it will come with SystemReady or a similar UEFI-like standard, instead of the custom per-board images many ARM devices currently need.

  • AutoTL;DR@lemmings.world (bot) · 1 year ago

    This is the best summary I could come up with:


    Qualcomm’s annual “Snapdragon Summit” is coming up later this month, and the company appears ready to share more about its long-planned next-generation Arm processor for PCs.

    The company hasn’t shared many specifics yet, but yesterday we finally got a name: “Snapdragon X,” which is coming in 2024, and it may finally do for Arm-powered Windows PCs what Apple Silicon chips did for Macs a few years ago (though it’s coming a bit later than Qualcomm had initially hoped).

    But those chips have never quite been fast enough to challenge Intel’s Core or AMD’s Ryzen CPUs in mainstream laptops.

    Any performance deficit is especially noticeable because many people will run at least a few apps designed for the x86 version of Windows, code that needs to be translated on the fly for Arm processors.

    Even if Qualcomm delivers an Arm chip that’s significantly faster and more power-efficient than its current offerings, there are still software hurdles to overcome.

    In other words, they were negotiated based on Nuvia’s then-stated focus on server CPUs, rather than high-volume processors for consumer PCs.


    The original article contains 619 words, the summary contains 178 words. Saved 71%. I’m a bot and I’m open source!

  • Tiger Jerusalem@lemmy.world · 8 months ago

    An ARM Galaxy Book 360 with S Pen, running Photoshop and Illustrator while staying relatively cool. I would buy that in an instant.