• 0 Posts
  • 9 Comments
Joined 10 months ago
Cake day: November 14th, 2023




  • Because they got a very good technical team from Intel, but not the other side of the equation in terms of telco-carrier infrastructure engagement.

    Even if the carrier-side of things is (mostly) true, the Intel division was anything but stuffed with competence, as they struggled hard with everything wireless mobile/modem. Their 3G modems were a hot mess, their LTE was even worse and drained batteries three times as fast as Qualcomm’s modems (while delivering half the throughput), and no one wanted the stuff.

    Apple went to Intel only to gain negotiating power over Qualcomm, and Intel never even came close to anything 5G, despite claiming the exact opposite (outright lying for years and promising Apple jam tomorrow), which put Apple in a VERY tough and costly spot with Qualcomm.

    Apple literally paid billions for Intel’s feigned competence (read: incompetence), only to crawl back to Qualcomm. They likely never would’ve engaged in any legal disputes with Qualcomm if Intel hadn’t assured them they could deliver some 5G, letting Apple finally ditch/avoid Qualcomm’s license fees by shipping Intel modems.


    Intel amassed over $20B of debt in their mobile wireless division for a reason before offloading it to Apple.
    Intel also never made a single cent of profit there: their modem business was outright uncompetitive to begin with, Apple was always their one lone customer, and Intel even needed to pay Apple to use their modems (on LTE, that is; on 3G, Motorola got paid about $380M to use Intel’s UMTS modems, IIRC).

    So picturing Intel’s Mobile & Wireless division as if it were even remotely as competent as Qualcomm, Huawei, Samsung, HiSilicon, MediaTek and others gives them way too much credit, to say the least.

    Also, that has nothing to do with Infineon: the business was profitable when Intel bought it from Germany’s Infineon (itself a Siemens spin-off).
    It was Intel’s typical in-house incompetence and outrageously impertinent style that made them claim they could do anything modem-related for the better part of a decade, while constantly failing along the way.
    Their infamous toxic work environment may have been another nail in the coffin, though.


  • That’s why it’s so bizarre that people support losing the ability to plug in headphones on their smartphones because the 3.5mm jack is “old”.

    Please stop using the term ‘old’! You won’t stop them from rejecting it that way. All you do is induce FOMO.

    It’s not old, it’s proven … proven to be sturdy, robust, long-serving and simply reliable technology.


  • Helpdesk_Guy@alien.topBtoHardware[More Than Moore] - 5 Years Late, Only #2
    10 months ago

    From the article …

    5 Years Late, Only #2
    Supercomputer Aurora Misses Targets
    […]
    The submission produced 0.585 ExaFLOPs, around a quarter of the expected performance, consuming 24.7MW, around half the power.

    It becomes a particularly poignant question when we compare the result to the world #1 supercomputer. This is Frontier, which was delivered 18 months earlier, and delivered 1.2 ExaFLOPs while consuming 22.7MW of power.
    So roughly double the performance, for the same power, but 18 months ago.
    […]
    The ratio of RMax to RPeak gives some insight into each system on the Top500 list as to how easy it is to extract performance from a system. Typically a system with a high RMax-to-RPeak ratio is a crown worth holding.
    Most accelerator-based systems on the list sit in the 65-75% range, and a select few sit above 80%. Anything lower than 60% sounds unoptimized or may have additional considerations.

    The Aurora submission only reaches a ratio of 55%.
    — Felix LeClair and Dr. Ian Cutress

    Still plenty of work to do, I guess.
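
    For what it’s worth, the quoted figures are easy to sanity-check. Here’s a minimal Python sketch (mine, not from the article); Aurora’s RPeak is back-solved from the quoted ~55% ratio, and Frontier’s ~71% ratio is my assumption based on public Top500 listings:

    ```python
    # Back-of-the-envelope check of the quoted Top500 comparison.
    # Rmax in ExaFLOPs, power in MW; both Rpeak values are assumptions (see above).
    systems = {
        "Aurora (partial)": {"rmax": 0.585, "mw": 24.7, "rpeak": 0.585 / 0.55},
        "Frontier":         {"rmax": 1.2,   "mw": 22.7, "rpeak": 1.2 / 0.71},
    }

    for name, s in systems.items():
        ratio = s["rmax"] / s["rpeak"]               # RMax/RPeak efficiency
        gflops_per_watt = s["rmax"] * 1e3 / s["mw"]  # ExaFLOPs/MW -> GFLOPS/W
        print(f"{name}: {ratio:.0%} efficiency, {gflops_per_watt:.1f} GFLOPS/W")
    ```

    That prints roughly 24 GFLOPS/W for Aurora versus 53 GFLOPS/W for Frontier, i.e. the ‘double the performance for the same power’ gap the article calls out.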


  • And it’s not like Intel has any plans to move away from heterogeneous designs anytime soon; even AMD is now doing them, and they have their own scheduler issues (X3D on 1/2 CCDs and Zen4+Zen4c).

    AMD isn’t really doing anything heterogeneous, pal.
    Correct me if I’m wrong here, but apart from the different clock-frequency properties, Zen4c cores are in fact *identical* to the usual full-grown Zen4 cores. A Zen4c core is little more than a compactly built and neatly rearranged Zen4 core without the micro-bumps for the 3D cache. The only downside is the lower max clocks, and that’s literally it.

    The main reason AMD introduced Zen4c cores at all was their increased core density (server space; muh, racks!), so solely for space savings and overall efficiency, and that’s it.
    Even the L2 cache is identical, isn’t it?

    → A Zen4c core is not an E-core, as it’s architecturally identical to any Zen4 core, with the same IPC.
    Same story for the X3D-enabled cores/chiplets: identical apart from a larger cache.

    So I don’t really know what you’re actually talking about when erroneously claiming AMD has also jumped on the heterogeneous hype-train. That statement of yours is utter nonsense.

    With AMD there’s no heterogeneous mixing of cores with different IPC or architectures that would need to be scheduled accordingly to run properly. Only Intel needs to rely on a heterogeneous-aware (and capable!) scheduler and depends on proper scheduling to NOT kill performance.

    Meanwhile, on any mix-and-match AMD Zen4/Zen4c CPU, it’s fundamentally irrelevant which core a thread runs on. In fact, the scheduler doesn’t even need to know which core is a usual Zen4 and which is a Zen4c.

    AMD’s designs are heterogeneous in terms of different chiplets/configs, yes.
    But the heterogeneity you’re talking about isn’t even remotely the same as heterogeneity in the sense of heterogeneous computing (a system [on a chip] that uses multiple types of computing cores with different architectures), as Intel uses in their hybrid SoCs. So no, no heterogeneity for you!
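
    To make the scheduling point concrete, here’s a minimal Linux-only Python sketch (my illustration, not anything AMD or Intel ships; the sysfs paths are an assumption for recent kernels): Intel hybrid parts register two separate PMU devices, cpu_core and cpu_atom, while a mixed Zen4/Zen4c part shows up as one uniform cpu device, so the scheduler has nothing architectural to tell apart.

    ```python
    import os

    # On Intel hybrid CPUs, recent kernels expose /sys/devices/cpu_core
    # (P-cores) and /sys/devices/cpu_atom (E-cores) as separate PMU devices;
    # homogeneous CPUs, including mixed Zen4/Zen4c parts, expose a single
    # /sys/devices/cpu device instead.
    pmus = sorted(d for d in os.listdir("/sys/devices")
                  if d == "cpu" or d.startswith("cpu_"))

    if {"cpu_core", "cpu_atom"} <= set(pmus):
        print("Hybrid topology (P+E): the scheduler must be core-type aware.")
    else:
        print(f"Uniform topology {pmus}: any core runs any thread equally well.")
    ```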


  • I think I read in some insight article about Aurora on NextPlatform.com a few years ago that today’s ‘Aurora’ is actually referred to internally as AuroraNext.

    To be fair, they bluntly sold a fairy tale: blueprints for some yet-to-be-engineered supercomputer hardware and arbitrary projected performance numbers, and that’s basically it.


    Ironically enough, Intel never made a single dime on anything Aurora…
    The ~$600M USD in contract penalties and compensation for delayed completion netted Intel a hefty loss on top of all the delaying mess. Intel had to pay ANL a fine of $299M USD, while Intel managed to blame-shift the remainder onto Cray (as if Cray could’ve done anything to prevent the actual Intel mess!).

    Since the whole Aurora contract was initially awarded at $200M USD, the fine alone netted Intel a roughly -$100M loss (on paper), not counting the years-long costs of billions of USD for hardware design, the excessively faulty and costly SPR and Ponte Vecchio prototyping, and the costs of final installation.

    Rumour had it back then that Intel had to guarantee ANL a two-year cost-free maintenance window post-installation (complete absorption of costs on Intel’s behalf, including the outrageous power bill), to keep ANL from angrily throwing in the towel after all the delays and switching completely to AMD/Nvidia once and for all.

    That’s why ANL, the very moment Aurora was supposed to be completed in 2021, immediately contracted another, smaller supercomputer and awarded AMD/Nvidia the Polaris system alongside Aurora (equipped with AMD’s Epyc CPUs and Nvidia’s A100 Tensor-Core GPUs), as an interim solution (and a threat towards Intel to hurry up).

    Ironically, Polaris, awarded in August 2021, was completed ahead of schedule in August 2022, well before Aurora itself …

    So in other words, ANL was so darn bold as to award and contract another testbed supercomputer in between, funded by the very money from the contractual penalty Intel and Cray had paid them ($600M USD), and to have it installed right in front of Intel’s Aurora, all while Intel and Cray paid for everything!

    Oh, and they got another supercomputer (Aurora itself) granted free of charge after it, just because.
    If that isn’t some absolute genius “F–ck you, Intel!”, I don’t know what is …


  • Actually, Aurora never really materialised, as the original Aurora (the Xeon Phi one; → Knights Hill) was cancelled before any hardware installation ever took place: Intel pulled the plug on everything Xeon Phi well before the installation of Aurora, which had been contracted in 2015 and was to be delivered complete by 2018.

    After that, Intel somehow sold ANL (Argonne National Laboratory, U.S. Department of Energy) the blueprint of yet another supercomputer, scheduled for installation in 2021, whose specs were *again* pretty much made up and whose hardware was conjured out of thin air (on both occasions, same as the original one), as Intel had neither any clue whether they’d be able to deliver as promised nor the supposed hardware at their disposal.

    Talking about delusion and bragging for a living …

    Cringeworthy enough, they hadn’t even engineered any of the hardware (Sapphire Rapids, Ponte Vecchio) that was supposed to make up Aurora in the first place, which is why it took so long to deliver anything.