• Knjaz136@alien.topB
    10 months ago

    Isn’t this basically a thread scheduler fix that makes E cores do what they are actually supposed to do?

    And they are reserving this fix for 14th gen only for, seemingly, no reason? With a good chance that they had this fix for a while, but management decided to reserve it for 14th gen?

    This is what I’m reading from their reply to HUB.

    • reddanit@alien.topB

      Well, it does look like it’s just a scheduler fix at the very surface level. On the other hand it does seem to need some firmware support and presumably there is some reason why it only supports 2 games. So maybe it is something more complicated?

  • aj0413@alien.topB

    This feature is a lot like DLSS 1: it was cool to follow, and for some of us early adopters to trial, but not something anyone should be basing any serious discussion or evaluation on.

    • from someone who upgraded every RTX gen specifically for DLSS
  • XenonJFt@alien.topB

    The insanity is that, three generations in, the Windows kernel still can’t prioritise P/E-core usage between games and background desktop tasks; parking the E-cores still gives better results. The AMD cache trade-off was kind of acceptable in the 7950X3D vs 7800X3D debate, because games can’t utilise that many cores anyway.

    And then there are all the BIOS and motherboard hoops you have to jump through to be compatible, for 2 titles.

    Intel has mostly abandoned ship on gaming competitiveness. The clock speeds and high TDP at least have their use in productivity workloads.

  • battler624@alien.topB

    I have no idea how it works, but it’s probably moving everything that isn’t the game itself off the P-cores and keeping the game restricted to the P-cores.

    • Knjaz136@alien.topB

      The question is why Windows doesn’t offer that option itself: instead of mere core affinity, restricting a set of cores to a manually defined task and forbidding everything else from running on them.
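Windows does expose per-process affinity (what Process Lasso drives via the Win32 API), just not an "exclusive reservation" that also keeps everything else off those cores. As a minimal sketch of the per-process half, here is the Linux analogue; this is illustrative only, and the "P-core half" selection is an invented assumption, since pure affinity never evicts other tasks (on Linux that takes cpuset cgroups or `isolcpus=`).

```python
import os

# Restrict the current process (pid 0) to a chosen subset of cores,
# roughly what per-process affinity tools do on Windows. Note this does
# NOT keep other tasks off those cores; exclusivity needs cpusets.
all_cpus = sorted(os.sched_getaffinity(0))
subset = set(all_cpus[: max(1, len(all_cpus) // 2)])  # pretend these are the P-cores
os.sched_setaffinity(0, subset)
print(sorted(os.sched_getaffinity(0)))
```

The missing OS feature the comment asks about is the inverse operation: banning every *other* process from the reserved cores, which no stock Windows setting provides.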

      • F9-0021@alien.topB

        Because Microsoft, the biggest software company in history, cannot make good software.

  • Due_Teaching_6974@alien.topB

    Intel’s E-cores doing what they’re supposed to in 2 games, 2 years after their debut, and only on their newest CPU lineup. Peak Intel engineering right here.

    • siazdghw@alien.topB

      It’s the exact opposite of what you’re saying.

      Intel’s E-cores + Thread Director work perfectly fine 98% of the time, but there are edge cases where the Windows scheduler can’t get it right, even with the hints from Thread Director, and that’s where APO comes in: it manually forces the correct scheduling.

      Also, let’s not pretend that AMD isn’t suffering scheduling issues themselves. The 7950X3D and 7900X3D are shunned because they have WORSE scheduling in games, as they rely on the Windows scheduler to just try and figure things out itself, and that doesn’t usually work with 2 CCDs where one has a higher frequency and the other more cache.

      • shopchin@alien.topB

        Importantly, do you think the fix will come for 12th/13th gen Intel? You seem to know what you’re talking about.

    • splerdu@alien.topB

      I mean unless you’re Apple and have full top to bottom control of your hardware and software stack it takes some time for software to catch up with the hardware.

      Took a while for games to use MMX, SSE, AVX. Stuff that uses AVX512 can probably be counted on one hand.

      Good ray traced games are becoming mainstream just now, two whole generations after GeForce 20 series.

      I do begrudge Intel for holding this back from 12th and 13th gen users though.

      • p3ngwin@alien.topB

        Took a while for games to use MMX

        Even Intel’s first iteration of MMX was a kludge, as it shared the floating-point unit’s registers, so you could use FP or MMX, but not both simultaneously o.O

        Took a while for those to be separated so you could get the benefits of both together.

        Intel also added 57 new instructions specifically designed to manipulate and process video, audio, and graphical data more efficiently.

        These instructions are oriented to the highly parallel and often repetitive sequences often found in multimedia operations.

        Highly parallel refers to the fact that the same processing is done on many different data points, such as when modifying a graphic image.

        The main drawbacks to MMX were that it only worked on integer values and used the floating-point unit for processing, meaning that time was lost when a shift to floating-point operations was necessary.

        These drawbacks were corrected in the additions to MMX from Intel and AMD.

        https://www.informit.com/articles/article.aspx?p=130978&seqNum=7

    • AgeOk2348@alien.topB

      And they refuse to let people buy CPUs without them; can’t let AMD win every benchmark that the vast majority of gamers will never use.

    • F9-0021@alien.topB

      More like peak Microsoft engineering, since this is something that was always supposed to be done by the operating system. Microsoft is so awful Intel had to do it themselves.

    • msolace@alien.topB

      Which just shows the scheduler is wrong, which people who cared to put in the effort already fixed manually with Process Lasso. The only missing piece is random kernel threads jumping onto the P-cores. AMD’s scheduler isn’t perfect either, and both companies are going big/little, so there’s plenty of room to keep improving.

      • CascadiaKaz@alien.topB

        correction: it shows that Intel Thread Director is wrong, and that the scheduler shouldn’t trust it.

        • SkillYourself@alien.topB

          Thread Director doesn’t do any directing; it’s a set of new registers the OS scheduler is supposed to read for feedback on how well a thread is running on a core. If APO can get it right, that means the scheduler is wrong.

          15.6 HARDWARE FEEDBACK INTERFACE AND INTEL® THREAD DIRECTOR

          Intel processors that enumerate CPUID.06H.0H:EAX.HW_FEEDBACK[bit 19] as 1 support Hardware Feedback Interface (HFI). Hardware provides guidance to the Operating System (OS) scheduler to perform optimal workload scheduling through a hardware feedback interface structure in memory.
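The enumeration check in the quoted SDM text is just a bit test. A trivial, hypothetical helper (the function name is mine) that applies it to a raw EAX value from CPUID leaf 06H:

```python
HW_FEEDBACK_BIT = 19  # CPUID.06H.0H:EAX bit 19, per the SDM text quoted above

def supports_hfi(eax_leaf6: int) -> bool:
    """Return True if a raw CPUID leaf 06H EAX value advertises the
    Hardware Feedback Interface (the plumbing behind Thread Director)."""
    return bool(eax_leaf6 & (1 << HW_FEEDBACK_BIT))

# An EAX with bit 19 set advertises HFI; zero does not:
print(supports_hfi(1 << 19), supports_hfi(0))  # True False
```

Actually issuing the CPUID instruction requires native code (e.g. a compiler intrinsic); this only shows how the advertised bit is interpreted.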

          • CascadiaKaz@alien.topB

            facepalm are you daft?

            how the scheduler gets information from ITD doesn’t change what ITD does.

  • GenZia@alien.topB

    From what I’m seeing, even with APO enabled, only 4 E-Cores are actually doing anything. The rest of the cluster is parked, doing absolutely nothing.

    Actually, that’s not quite true: they’re still consuming power, however minuscule it may be!

    And that’s one of the many reasons I don’t understand why Intel is stuffing so many E-Cores into their CPUs. For most users, their real-world practicality is mostly academic.

    A quad-core or - at most - an octa-core cluster of E-Cores should be more than enough for handling ‘mundane’ background activity while the P-Cores are busy doing all the heavy-lifting.

    Frankly, I just can’t help but feel like the purpose of this plethora of little cores is to artificially boost scores in multi-core synthetic benchmarks! After all, only a handful of ‘consumer-grade’ programs are parallel enough to actually make use of a CPU with 32 threads.

    Anyhow, fingers crossed for Intel’s mythical ‘Royal Core.’ A tile-based CPU architecture sans hyper-threading sounds pretty interesting… at least on paper.

    • soggybiscuit93@alien.topB

      More E cores aren’t for “mundane background tasks”. They’re to maximize MT performance in a given die space.

      It’s why 8+16 14900K competes with 7950X in MT applications, but would clearly lose if it was the alternative 12+0.

      Most people, myself included, would struggle to really utilize 32 threads. But the 7950X and 14900K exist for those that can or may be able to.

      • GenZia@alien.topB

        They’re to maximize MT performance in a given die space.

        And I never said otherwise.

        I explicitly mentioned that more E-Cores can boost scores in multi-threaded synthetic benchmarks and, in turn, any parallel workload.

    • liesancredit@alien.topB

      The 10900K was the last well-designed Intel CPU. Just straight up 10 powerful cores. That’s how a CPU should be.

      • dudemanguy301@alien.topB

        ah yes who could forget the absolute TRIUMPH of the same tired architecture recycled for the 4th time in a row, on the same tired process recycled for the 5th time in a row.

    • VankenziiIV@alien.topB

      You think e cores are only for synthetics? What if I show you 6p+6e or 6p+8e can defeat 8p in real world applications?

      • GenZia@alien.topB

        Well, applications have definitely been getting optimized for 8C/16T as of late, so that wouldn’t be all that surprising.

        Hyper-threaded logical cores can’t match an actual core by design, after all.

        However, I’m merely questioning the addition of 8+ E-Cores in Intel’s high-end SKUs. I believe I explicitly mentioned that I can see the potential of integrating 4 to 8 E-Cores into a CPU.

        • carpcrucible@alien.topB

          It’s perfectly reasonable for high-end SKUs.

          You either have single-threaded workloads or games that might use 6-8 threads at most. Or you have “embarrassingly parallel” workloads like rendering or all sorts of scientific computing that will use as many cores as you have.

          If you literally only game on your PC then I guess just disable the e-cores.

        • VankenziiIV@alien.topB

          What if I showed you Intel 12th 6p+6e was able to defeat amd’s 8p in real world applications 2 years ago?

          • GenZia@alien.topB

            A quad-core or - at most - an octa-core cluster of E-Cores should be more than enough for handling ‘mundane’ background activity while the P-Cores are busy doing all the heavy-lifting.

  • nohpex@alien.topB

    Has anyone else seen those videos where people change the frequency (I believe*) of how often Windows issues an interrupt request to check the system’s power state, in order to reduce overall system latency?

    For whatever reason, Windows checks this every 15ms, but people are changing it to the maximum setting of 5,000ms, which reduces latency for the CPU considerably… apparently fiddling with this setting is particularly bad for AMD’s X3D chips.

    What are the pros and cons to this? Has any reputable journalist looked into this?

    • veotrade@alien.topB

      It works. Set to 5000ms, which is the max value.

      It’s garbage that end users need to do any tweaking at all.

      A good number of tweaks are unproven and famously just bog down the system even more.

      As a casual user myself, I wouldn’t even know if changing one setting, let alone dozens of settings, makes a difference. I’m not qualified to test, so on some of these “fixes” I just blindly follow the advice of the tutorial.

      But disabling E-cores and changing the interval from 15ms to 5000ms have both helped me.

      I’ve also subscribed to the LatencyMon optimizations, like setting interrupt affinity masks for my GPU, ethernet, and USB host controller.
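For anyone wondering what those interrupt affinity masks actually are: just bitmasks where bit n selects CPU n, usually written in hex. A small hypothetical helper (name and example CPUs are mine) to build one:

```python
def cpus_to_affinity_mask(cpus):
    """Encode a list of CPU indices as an affinity bitmask (bit n = CPU n),
    the format interrupt-affinity tools expect in hex."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return mask

# Pinning a device's interrupts to CPUs 2 and 3:
print(hex(cpus_to_affinity_mask([2, 3])))  # 0xc
```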

  • No-Roll-3759@alien.topB

    12600K owner. I’m so frustrated. big.LITTLE has never delivered on the behavior they promised, and now I’m being locked out of the fix. Forcing me over to Windows 11 was not a fix, it was just aggravation.

    I early-adopted the new arch because I really wanted to use an Optane accelerator. Intel quietly software-locked 12th gen out of Optane support, so when I built my system I spent an hour poring through the BIOS trying to figure out how to get it running and wondering why Intel’s web instructions weren’t working for me.

    Overall it’s been a pretty bad experience, and one Intel curated for me. Based on my 12600K experience, I’ll be very reluctant to adopt Intel proprietary technologies in the future.

  • Gawdsauce@alien.topB

    Glad I went with AMD, I knew Intel would fuck that shit up one way or another, they don’t care about the consumer space, they care about the server market and nothing else.

  • advester@alien.topB

    The most interesting thing is that APO dropped the power from 190W to 160W while increasing the performance.

      • byGenn@alien.topB

        I mean, my 12700K can’t deliver a consistent 360+ FPS outside of the in-game benchmark, so any boost is nice. The 7800X3D still looks more appealing as an upgrade for me, though.

        • ramblinginternetgeek@alien.topB

          You’re probably fine with the 12700k for a bit longer.
          Might be worth jumping onto X3D version of Zen 5 though, but that’s likely 6-12 months out.

  • benefit420@alien.topB

    I can’t get this to work on an ASUS Z790-E board. I tried the ASUS DTT drivers, and someone suggested trying the ASRock DTT drivers. The ASRock ones installed just fine, but the APO app still says “failed to connect”.

  • Berengal@alien.topB

    To me this looks like it’s too early to draw any definite conclusions about APO. I get that it’s tempting to conclude that it only supports 14th gen CPUs as some sort of planned obsolescence scheme, but the fact that it also only works in two games really weakens that idea and makes the early-release explanation fit much better. So don’t judge them on the current state of APO; they may provide support for older gens in the future. But also don’t give them credit for it, or factor it into the value of the product, until APO becomes useful in practice rather than just a tech demo. This discussion is rather pointless at the moment; the technical details of how it works are much more interesting to discuss.

    • kasakka1@alien.topB

      If Intel in the response to HUB says “We have no plans to support previous generations for APO”, how else are you supposed to interpret it?

      Ok, plans may change, but it’s very possible Intel will simply keep this locked on 14th gen just to be able to sell them.

      For me as a 4K gamer, it doesn’t seem like APO brings anything to the table, but it’s still disappointing to see software feature gatekeeping without a technical requirement behind it.

      • siazdghw@alien.topB

        If Intel in the response to HUB says “We have no plans to support previous generations for APO”, how else are you supposed to interpret it?

        When a reviewer or journalist reaches out to a company, they usually get a response from someone who has no technical knowledge or insight into future products or changes, unless the inquiry is very serious, in which case it gets forwarded internally.

        I’m not saying this won’t stay exclusive to 14th gen and beyond, but this response is almost certainly from someone who has zero knowledge of how APO works, what the team working on APO is doing, whether it will come to older generations, or which games they are currently testing.

        • MdxBhmt@alien.topB

          When a reviewer or journalist reaches out to a company, they usually get a response from someone who has no technical knowledge or insight into future products or changes

          And that’s on them, not on journalists or consumers. It’s their job to keep messaging in line with the technical side of the business.

          Hell, if a PR team is making such explicitly stated messaging without consulting engineering, it’s frankly a dysfunctional corporate PR department inventing stuff on the spot. We should take them at their word and act accordingly. Eat the damn negative PR from a damn anti-consumer response. They could have stated it differently if they wanted some margin of interpretation.

  • DktheDarkKnight@alien.topB

    So APO is just Intel fixing the E-core issues. Whoa. I thought Intel had stumbled onto something special when they mentioned per-application optimization.