• Balance-@alien.topB
    11 months ago

    CPU is impressive.

    Their GPU is fighting toe-to-toe with the Apple M2.

    Considering:

    • They use very fast 8533 MT/s memory, giving them 33% more memory bandwidth than Apple’s 6400 MT/s on the M2;
    • Apple’s TDP (20 W) is likely lower than both of Qualcomm’s configurations (23 W and 80 W “Device TDP”);
    • They chose the benchmarks;
    • The Snapdragon X Elite will launch mid-2024;
    • The Apple M2 will be 2 years old by then;
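    The 33% bandwidth figure falls straight out of the transfer rates, assuming both chips use the same bus width (128-bit is an assumption here, not something stated in this thread); a quick sketch:

    ```python
    # Rough LPDDR peak-bandwidth comparison. The 128-bit bus width is an
    # assumption for illustration; the bandwidth *ratio* holds for any
    # equal bus width, since it depends only on the transfer rates.
    def bandwidth_gbps(mt_per_s: int, bus_bits: int = 128) -> float:
        """Peak bandwidth in GB/s: transfers/s times bytes per transfer."""
        return mt_per_s * 1e6 * (bus_bits // 8) / 1e9

    x_elite = bandwidth_gbps(8533)  # ~136.5 GB/s
    m2 = bandwidth_gbps(6400)       # ~102.4 GB/s
    print(f"{x_elite:.1f} GB/s vs {m2:.1f} GB/s -> {x_elite / m2 - 1:.0%} more")
    ```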

    I’m not that impressed by the Snapdragon X Elite’s GPU.

    The M2 Pro already beats it hard (it has twice the GPU cores and twice the memory bandwidth of the M2), and the M3 will most likely beat it as well.

    Let alone the M3 Pro.

    Then AMD will release their Strix Point APU, also likely in the first half of 2024, increasing the GPU core count by 33% (from 12 to 16).

    Intel’s Meteor Lake’s iGPU, called Xe-LPG, also looks promising.

    So as Ryan said:

    Ultimately, the 6+ month gap until retail devices launch means that the competition for Qualcomm’s upcoming SoC isn’t going to be today’s chips such as the Apple M2 series or Intel’s various flavors of Alder/Raptor Lake. Almost everyone is going to have time to roll out a new generation of chips between now and then. So while Qualcomm’s SoC may be ready right now, we’ve yet to see what they’ll be competing against in premium devices. That doesn’t make today’s benchmark disclosure any less enlightening, but it means that Qualcomm is aiming at a moving target – beating Apple, AMD, or Intel today is not a guarantee that it’ll still be the case in 6 months.

    Let’s do some new benchmarks in 6 months!

    That being said, it’s great to see more competition in the laptop SoC market. I hope Qualcomm also pushes competitors on their wireless capabilities: 5G should be an option on almost every laptop.

    • UsefulBerry1@alien.topB
      11 months ago

      On the contrary, I am really excited for this chip. Sure, the M2 Pro beats it, but those start at $1.8k–$1.9k, and a laptop with a dGPU is a very different category (also, Qualcomm says the X Elite will support dGPUs). If anything, Windows on Arm, its software support, and its hardware options will get a huge boost. Also, I am not holding my breath for anything from Intel at this point. Every time they promise, but their efficiency is still the lowest. I was optimistic when they introduced the little-big arch, but it was meh 😑

  • msolace@alien.topB
    11 months ago

    We’ll see how it rolls later. Mac chips look strong, but they still can’t do 90% of the things I need from a chip, so it could be a 9000000000000000000000000000000 score and still be useless.

  • bazooka_penguin@alien.topB
    11 months ago

    2800 in Geekbench ST at 4 GHz doesn’t strike me as amazing, considering the Snapdragon 8 Gen 3/Cortex X4 leaks point to a Geekbench score of 2200–2300 at 3.3 GHz.

    • Vince789@alien.topB
      11 months ago

      Note that’s the GB6 ST score in Windows; it would be around 3030 on Linux, which is more comparable to Android.

      Still, I’d agree IPC isn’t amazing compared to Apple or Arm, which I guess sorta makes sense given NUVIA was originally targeting servers, where MT and efficiency are the main focus, not ST.

      IMO how quickly Qualcomm can iterate on the X Elite will be critical to their success (that and Microsoft pulling their weight on the software front)

      We’ve seen several companies release decently competitive custom Arm cores, but then fail to keep up with Arm’s rapid yearly development

      E.g., Samsung’s Exynos M, NVIDIA’s Denver, and Cavium’s ThunderX2 all started reasonably competitive with Arm, but fell further and further behind in their following iterations.

    • AlexLoverOMG@alien.topB
      11 months ago

      In the end, the ratio that matters is performance per watt. Within the same architecture, frequency is generally tied to power draw, but different architectures can have very different power/clock-speed curves.
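      To illustrate with made-up numbers (these are hypothetical, not measurements of any chip in this thread), a chip with the lower raw score can still win the ratio that matters:

      ```python
      # Toy perf-per-watt comparison with hypothetical (score, watts) pairs,
      # just to show why raw scores alone mislead across architectures.
      def perf_per_watt(score: float, watts: float) -> float:
          return score / watts

      chips = {
          "Chip A (high clock, 80 W)": (2800, 80),  # -> 35 points/W
          "Chip B (low clock, 20 W)":  (2300, 20),  # -> 115 points/W
      }
      for name, (score, watts) in chips.items():
          print(f"{name}: {perf_per_watt(score, watts):.0f} points/W")
      ```

      Chip B scores lower in absolute terms but delivers over three times the efficiency, which is the curve that decides battery life.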

    • basil_elton@alien.topB
      11 months ago

      Perf/clock in Geekbench ST is in line with the leaked Cortex X4 scores. Yeah, mobile phones don’t have 8533 MT/s LPDDR5X, but it is still really impressive.
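      A rough check with the numbers from this thread (the X4 score is a leak, so take it loosely; 2250 is just the midpoint of the 2200–2300 range):

      ```python
      # Perf/clock (points per GHz) from the GB6 ST figures quoted above.
      def per_ghz(score: float, ghz: float) -> float:
          return score / ghz

      x_elite = per_ghz(2800, 4.0)    # 700 points/GHz (Windows score)
      cortex_x4 = per_ghz(2250, 3.3)  # ~682 points/GHz (leaked, midpoint)
      print(f"X Elite: {x_elite:.0f}/GHz, Cortex X4 leak: {cortex_x4:.0f}/GHz "
            f"({x_elite / cortex_x4 - 1:+.1%})")
      ```

      So on the Windows score the two are within a few percent per clock, which is what “in line with” means here; the ~3030 Linux score mentioned above would widen the gap somewhat.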

  • dbcoopernz@alien.topB
    11 months ago

    Has there been any information about what process node they are using?

    Edit: Speculation from an earlier Anandtech article.

    https://www.anandtech.com/show/21105/qualcomm-previews-snapdragon-x-elite-soc-oryon-cpu-starts-in-laptops-

    Qualcomm is fabbing the chip on an unspecified 4nm process. Given their previous performance issues with Samsung’s 4nm line, it’s a very safe bet that they’re building this chip at TSMC – possibly using the N4P line. The silicon itself is a traditional monolithic die, so there is no use of chiplets or other advanced packaging here (though the wireless radios are discrete).

  • VankenziiIV@alien.topB
    11 months ago

    That’s fast; looks like they spent the transistor budget on the CPU and left the GPU hanging. Or they knew catching up to a dGPU on an SoC would be too expensive, as the M2 Max can lose to a 3050 in some apps, due largely to NVIDIA’s software and RT. Do they want to partner up with AMD or NVIDIA? Why would they?

    If they pair it with a dGPU, that reduces the point of Arm, since dGPUs will use considerable power.

    It comes out in 6–7 months or something, when it will face competition from the M3, Meteor Lake, an Ada refresh, Zen 5, and at the end of the year Arrow Lake.

    Anyways, good competition, but unfortunately for them, the competition won’t allow them to succeed.

  • letsgoiowa@alien.topB
    11 months ago

    Ehhhh, this is mostly more ARM-based benchmarks. Almost every major application I can think of is amd64 or ye olde x86 now, and I really want to see performance on that. Honestly, I would love to see how it does with an “office user” performance profile: we never got the ARM-based Surface because it simply couldn’t run our antivirus or endpoint management package. I’d also like to see what it does with AutoCAD stuff.

  • MuAlH@alien.topB
    11 months ago

    Really impressive scores. We are in for a huge change in the PC market if this succeeds; now we wait for the battery life benchmarks and performance on battery.

    • Son_of_Macha@alien.topB
      11 months ago

      Given that both NVIDIA and AMD have announced Arm-based chips for Windows, the market is changing dramatically whether QC is a success or not.

    • virtualmnemonic@alien.topB
      11 months ago

      I imagine a lot of optimization will be applied before release, seeing as it’s still months away. Hopefully, they’ll ditch the fan entirely in the lower watt model. Altogether, it’s good to have competition against Apple in battery life and sustained performance on battery.

  • undernew@alien.topB
    11 months ago

    Interestingly, the X Elite doesn’t even support hardware-accelerated ray tracing, which even the 8 Gen 2 supported.

    • Chromatinfish@alien.topB
      11 months ago

      Likely they’re either targeting laptops that don’t need GPU power, like productivity/business-class machines, or they expect OEMs to pair them with dGPUs if need be.

    • GodTierAimbotUser69@alien.topB
      11 months ago

      Doubt people will use ray tracing if they buy this product. Even the people who can use it don’t (myself included).

      • iDontSeedMyTorrents@alien.topB
        11 months ago

        In barely a year, hardware ray tracing will be part of virtually every x86 consumer CPU being sold. People do more than game with their computers.

      • undernew@alien.topB
        11 months ago

        Blender’s Cycles engine can make use of hardware-accelerated ray tracing, and this will hopefully help the M3 catch up with NVIDIA GPUs.

  • siazdghw@alien.topB
    11 months ago

    The more I look into the details, the more gotchas I see.

    LPDDR5X @ 8533 MT/s, which is going to be expensive and affects every benchmark run; even Cinebench 2024 is now memory sensitive. Likely no upgradeable SODIMM memory options for OEMs.

    Considerably higher Linux scores than Windows.

    GPU benchmarks perform better than the actual gaming demos they’ve shown (seen in other previews).

    Geekbench and Cinebench 2024 natively support Arm; very few Windows applications and games do, so they all have to be emulated from x86.

    The 80 W needed to edge out the competition is more than I thought these chips were using.

    It’s a very good showing, but I question whether it will actually be enough to convince people to use Windows on Arm when Meteor Lake and Zen 5 should be very competitive.