• zatagi@alien.top

    Watched this and their A17 video back to back. Their E core still uses 3 times more power and is 3 times weaker than Apple's E core, which is what I care about most for everyday tasks.

      • itsjust_khris@alien.top

        Probably a lot of them. I can only look at core usage on my MacBook but the E cores are in use way more than the P cores. That certainly adds up over time.

  • 31c0c3@alien.top

    The Adreno 750 is flat-out impressive; great job on that, Qualcomm. The Cortex-X4 is disappointing, drawing the same power as the A17 P core but delivering a lot less performance. Winning in multi-core with less power is good, though. Overall it's more impressive than the A17 to me.

    • theQuandary@alien.top

      Peak performance doesn’t mean anything. The FPS on the 8gen3 is nearly identical to the 8gen2 in their real-world test, as is the power consumption (which is somewhat shocking, because the 8gen3 has more shaders, which should provide higher performance at lower clockspeeds and improve PPW).

      The Apple comparisons were particularly interesting. Peak power consumption was higher on benchmarks, but the real-world performance-per-watt tests favor Apple (though Apple really needs to work on their heat dissipation). This shows that Apple hasn’t optimized their GPU to run at those crazy burst clockspeeds at all, but it does get good efficiency at reasonable clocks.

      As I’ve said over and over, the GPU peak performance metrics don’t mean anything unless we see these chips in a laptop or mini-PC. They exist only to lie to consumers about what they should expect from their phones.

    • undernew@alien.top

      For Geekbench 6 the power consumption is basically the same at higher frequencies and worse at lower frequencies.

      The reason the Snapdragon 8 Gen 3 does better in Geekbench 5 is that it has more cores. In Geekbench 5 each core works on a separate task, while in Geekbench 6 the cores work together to solve one task. This change translates better to how smartphones use multiple cores in real life, but it also means multiple cores don’t scale perfectly in the benchmark.

      (Sorry if this was mentioned in the video, didn’t get a chance to watch it with subtitles yet).
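      To make the scaling difference concrete, here is a toy model (my own sketch, not Geekbench's actual workloads): separate-task scoring scales linearly with core count, while the shared cooperative task is approximated with Amdahl's law, using an assumed 90% parallel fraction that is illustrative only.

      ```cpp
      // Toy contrast between GB5-style and GB6-style multi-core scoring.
      // GB5-style: an independent copy of the work per core (scales linearly).
      // GB6-style: one shared task, modeled with Amdahl's law and an assumed
      // (not measured) 90% parallelizable fraction.
      #include <cstdio>

      int main() {
          const double p = 0.90;  // assumed parallel fraction of the shared task

          for (int cores = 1; cores <= 8; ++cores) {
              double separate_tasks = cores;                       // GB5-style
              double shared_task = 1.0 / ((1.0 - p) + p / cores);  // GB6-style
              std::printf("%d cores: separate %.2fx, shared %.2fx\n",
                          cores, separate_tasks, shared_task);
          }
          return 0;
      }
      ```

      At 8 cores the separate-task score is 8x a single core, while the shared-task speedup is only about 4.7x, so extra cores pay off far more under GB5-style scoring.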

    • AlexLoverOMG@alien.top

      You want to power off as many components as possible at idle to save power, so a cache that can be partially shut down definitely helps.

  • SomeKindOfSorbet@alien.top (OP)

    TLDR:

    Snapdragon 8G3 is the first mobile SoC based on ARMv9.2 (64-bit-only). It still uses TSMC’s N4P node like the 8+G1 and 8G2. Besides using ARM’s brand-new core architectures (X4, A720, A520), it also moves to a 1+5+2 core configuration instead of the 8G2’s 1+4+3 or the more traditional 1+3+4 of the 8+G1.

    ARM’s new Cortex-X4 big core boasts some very decent improvements in ST performance (~20%) over the previous X3 in the likes of the Snapdragon 8G2. However, power consumption has also increased accordingly, so ST efficiency isn’t noticeably improved. This is mostly fine considering the big core is generally leveraged to handle short, bursty workloads in order to make the phone feel snappy and fast. Still noticeably behind Apple’s A16/A17 Pro in terms of ST performance/efficiency, though.

    The A720 medium core and A520 small core are mostly incremental upgrades both in terms of performance and efficiency over their predecessors. A720 does get a pretty sweet efficiency improvement in floating point compute, but nothing crazy.

    The swapping of yet another small core for a medium core results in some very nice MT performance improvements, now putting the Snapdragon 8G3 pretty much ahead of the A17 Pro’s perf/W in GB5 and matching it in GB6. Qualcomm has officially caught up to Apple in MT performance and efficiency. It does remain slightly less efficient at lower power levels, though.

    Apparently, this chipset is capable of partially turning off its L3 cache and SLC to save power, but we still don’t know much about how that works or whether it requires software-based scheduling.

    The GPU is yet again a noticeable upgrade over the SD 8G2, and the gains seem to mostly come from scaling up the compute, like the 8G2 did over the 8+G1. The 8G2 was already on top of the competition when it came to GPU performance, so this upgrade pretty much puts the 8G3 in a class of its own in both performance and efficiency on the GPU side. It completely overshadows every other smartphone chipset and even seems to match the performance of a Radeon 780M/GTX 1050 Ti with an overclock.

    Overall, the 8G3 is a very solid upgrade over the 8G2, which was already a beast of an SoC. None of the performance or efficiency improvements are groundbreaking by any means, but they are definitely there. 8G3 vs. 8G2 is a jump pretty much equivalent to 8G2 vs. 8+G1, both in CPU and GPU performance/efficiency, which is great. This really puts to shame Google’s Tensor G3, which is barely a performance/efficiency improvement over the notoriously bad Tensor G2 (Samsung Foundries at it again). The amount of competition Qualcomm is putting up against Apple is really exciting to see.

    • SkillYourself@alien.top

      ARM’s new Cortex-X4 big core boasts some very decent improvements in ST performance (~20%) over the previous X3 in the likes of the Snapdragon 8G2. However, power consumption has also increased accordingly, so ST efficiency isn’t noticeably improved.

      It’s actually considerably worse efficiency.

      +17.5% ST perf for +40% power in SPEC2017Int

      About as expected going from 6-wide to 10-wide decode and increasing clock speed…

      • Dudeonyx@alien.top

        Using +40% power for +17.5% performance doesn’t necessarily mean it’s less efficient.

        I highly doubt the previous gen can get that level of performance even with +60% power.

        Comparing efficiency properly requires measuring performance at the same power draw, or alternatively power draw at the same performance level.
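        As a sketch of what such a comparison could look like (my own toy model: it assumes each core sits on a perf = k * power^(1/3) curve, a crude voltage/frequency scaling approximation, anchored to the +17.5%/+40% figures quoted above):

        ```cpp
        // Toy iso-power comparison of the X3 and X4. Each core is assumed
        // (not measured) to follow perf = k * cbrt(power). Anchor points,
        // normalized to the X3's peak:
        //   X3: perf 1.000 at power 1.00;  X4: perf 1.175 at power 1.40.
        #include <cmath>
        #include <cstdio>
        #include <initializer_list>

        int main() {
            const double k_x3 = 1.000 / std::cbrt(1.00);  // = 1.000
            const double k_x4 = 1.175 / std::cbrt(1.40);  // ~= 1.050

            for (double power : {0.6, 0.8, 1.0}) {  // compare at equal power
                std::printf("power %.1fx: X3 perf %.3f vs X4 perf %.3f\n",
                            power, k_x3 * std::cbrt(power),
                            k_x4 * std::cbrt(power));
            }
            return 0;
        }
        ```

        Under this assumed curve the X4 comes out roughly 5% ahead at every matched power level, which is exactly why a single peak-vs-peak data point can't settle the efficiency question either way.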

        • theQuandary@alien.top

          This argument only applies to “race to sleep”, but that doesn’t look very good either. Taking ~17% longer while drawing ~29% less power still means the slower processor uses less energy overall.
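          Spelled out with SkillYourself's SPEC figures from above (energy is power times time):

          ```latex
          \frac{E_{X3}}{E_{X4}}
            = \frac{P_{X3}}{P_{X4}} \cdot \frac{t_{X3}}{t_{X4}}
            = \frac{1}{1.40} \times 1.175 \approx 0.84
          ```

          So the X3 finishes the same work with roughly 16% less energy, despite running longer.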

          You’d have a point if user interactions lagged because of the performance difference, but typical user interactions wouldn’t be noticeably different going from an X1 to an X4, let alone an X3 to an X4.

    • XavandSo@alien.top

      It completely overshadows every other smartphone chipset and even seems to match the performance of a Radeon 780M/GTX 1050 Ti with an overclock.

      Holy Christ, I want to upgrade my S23 Ultra for that fact alone. What a beast.

    • AlexLoverOMG@alien.top

      Haven’t seen it all yet, but since the G2 GPU was already going blow for blow with the A17, I assume this one is comfortably ahead. Apple’s MT and GPU leads are gone; all they retain is single-core, and that margin is narrowing substantially.

      I wonder if they can ever separate themselves from the pack again, or if this is like the Zen moment of CPU convergence, where AMD and Intel have stuck close ever since.

  • SkillYourself@alien.top

    Interesting how the GPU alone does so well in Wild Life Extreme, beating the A17 Pro by 30% in performance and efficiency, yet the A17 Pro runs Genshin at 720p/60 FPS at 30% lower power than the Gen 3.

    Just goes to reinforce how much the entire SoC and software (Metal API) matter to overall performance.

    • MMyRRedditAAccount@alien.top

      Genshin (and most other mobile games) is CPU-bound. That performance is probably due more to Apple’s better cores than to any API stuff.

      Also, isn’t Genshin using upscaling on iOS? It isn’t really an apples-to-apples comparison if so.

      • marxr87@alien.top

        I wonder if this potentially unlocks Xbox emulation. It looks like Xenia’s recent update now requires GPUs to have Direct3D 12’s Rasterizer-Ordered Views, but performance-wise it is right on the line.
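        For reference, this is roughly how an app can ask a Direct3D 12 driver whether ROVs are exposed (a minimal Windows-only sketch using the standard feature-support query; it is not Xenia's actual code):

        ```cpp
        // Query Rasterizer-Ordered Views support on the default D3D12 adapter.
        #include <d3d12.h>
        #include <wrl/client.h>
        #include <cstdio>
        #pragma comment(lib, "d3d12.lib")

        int main() {
            Microsoft::WRL::ComPtr<ID3D12Device> device;
            if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                         IID_PPV_ARGS(&device)))) {
                std::puts("No D3D12 device available");
                return 1;
            }

            D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
            if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                                      &opts, sizeof(opts)))) {
                std::printf("ROVsSupported: %s\n",
                            opts.ROVsSupported ? "yes" : "no");
            }
            return 0;
        }
        ```

        Whether a given Adreno driver actually reports ROV support (on Windows-on-Snapdragon, where D3D12 exists at all) is exactly the open question.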

      • Apophis22@alien.top

        They just introduced AI upscaling with the A17, and Genshin isn’t using it yet in its official versions.

        • SchighSchagh@alien.top

          AI upscaling

          AMD FSR probably works fine on Apple hardware? I have no idea if a mobile game would bother porting it, because I don’t think there’s any official support from AMD; but in principle you should be able to get decent upscaling without AI.

        • AlexLoverOMG@alien.top

          FWIW, MetalFX Upscaling was released, I think, two WWDCs ago for iOS and runs on everything back to the A13, but for some reason Mihoyo has so far targeted the A17; maybe it really needs the twice-as-fast Neural Engine to be a benefit.

    • bazooka_penguin@alien.top

      The Genshin devs also seem to prioritize iOS. Android still doesn’t have controller support from what I understand, whereas iOS has had it for a year or two. Seems like they’re favoring iOS to me.

  • Raikaru@alien.top

    It’s kinda seeming like, unless the A18 is a huge performance increase, Snapdragon will beat or match Apple’s A series in every metric, with better power efficiency as well, by next year.

    • SchighSchagh@alien.top

      Let’s ignore the fact that Apple improves their chips every year and fantasize that for some reason they’ve topped out at their current levels, while at the same time Qualcomm keeps chugging forward, so clearly Apple is getting smoked next year.

      What a weird take.

      • AlexLoverOMG@alien.top

        Of course they do too, but their gains slowing is what allowed these gaps to narrow so much: the MT lead is gone, the GPU lead is gone, and the single-core gap hasn’t been smaller in many, many years. And this was with Apple buying out the entire first run of 3nm. So it is fair to wonder whether they’ll ever make a big leap ahead again, or whether we’re just at chip convergence, like the Zen moment, after which AMD and Intel have stuck pretty close on performance. We’re used to Apple leading for the previous decade.

      • vlakreeh@alien.top

        Apple has been making minor clock-speed improvements with low-single-digit IPC increases every generation since the A14; it doesn’t take a rocket scientist to look at the trajectories of Qualcomm and Apple to see what will happen if these trends don’t change. Unless the A18 bucks that trend of minor improvements, Qualcomm will be ahead, especially considering they’ll be using Oryon-based cores.

      • IntrepidEast1@alien.top

        Apple’s had a couple of weak years, while Qualcomm’s Oryon cores will be in phones the following year. It’s entirely possible, but how well Oryon performs, especially in mobile, is a question yet to be answered.

    • undernew@alien.top

      The issue is likely that the 3nm node was delayed. The A16 and A17 have the same internal codename family (H15 and H15 Coll); it’s rumored that the A17 is what the A16 should have been if 3nm wasn’t delayed. Now they are basically a generation behind schedule; let’s see what happens next year. At least on the GPU side they have to improve power efficiency.

      • hwgod@alien.top

        it’s rumored that the A17 is what the A16 should have been if 3nm wasn’t delayed.

        There was the same “rumor” last year. Sounds like nonsense at this point. Apple’s had plenty of time to adjust.

        • undernew@alien.top

          Yes, last year they had to roll back the GPU changes due to heat issues / 3nm being delayed. This year they were able to implement what they initially planned for the A16. I’m not sure how what you are saying contradicts my comment.

        • theQuandary@alien.top

          This isn’t just an Apple thing. Intel’s 12000 series should have launched in 2017. Instead, it launched in 2021, FOUR YEARS late, because of all their 10nm issues. In the interim, Intel backported whatever they thought would actually be economically feasible to 14nm.

          Apple no doubt started designing what the A16 should have been way back in 2017-2018. When it became apparent TSMC would miss targets, the most they could do was rapidly backport a few features to the old node, which is basically what we got in the A16.

          Then, even though TSMC promised 1.7x scaling for non-SRAM transistors, the real scaling turned out to be around 1.2x (and they didn’t change the size of the significant cache blocks, so they should have gotten a lot more of a shrink).

          • hwgod@alien.top

            1. Apple is not Intel. They never tie their architectures to a particular node.

            2. Under no circumstances could Intel 12th gen have launched earlier than 2021. The process was more or less fixed by Tiger Lake in 2020. If they had something better, they would have shipped that instead.

  • wwbulk@alien.top

    Also, isn’t genshin using upscaling on iOS? It isn’t really an apples to apples comparison

    No, there is no upscaling. It would be nice if you fact-checked before posting.

    In the video they also said the iOS version actually runs at a slightly higher resolution, so the test results are biased in favor of the 8G3.

  • 1994_BlueDay@alien.top

    I’m no expert in the mobile CPU department, so please enlighten me:

    As far as I know, the benefit of these chips is limited to mobile gaming. Mid-range SoCs like the Dimensity 8200 or Snapdragon 870 are enough for everything 99% of users need, and the SD 8 Gen 2 is already fast. So what’s the use of such fast CPUs in a phone (except gaming)? (I’m not against progress, just interested in opinions on where mobile devices are heading; something like Samsung DeX, or more?)

    • BlacksmithMelodic305@alien.top

      You can use your Android phone as an emulation dock and run PS2 and Switch emulation, and if you want, there are Windows emulators for Android, like Box64 and Winlator, that let you play Windows games.