For much of the 2010s, PCs were stuck with mainly dual-core and quad-core CPUs. The arrival of Ryzen, however, shook the PC industry and triggered a rapid increase in core counts. At the time there was fervent discussion on the matter, with many questioning whether more cores were worth it, and how many cores would be enough.

So how do things stand today? The latest Intel and AMD consumer processors top out at 24 and 16 cores respectively. How much modern software can actually take advantage of all those cores? And which modern workloads are still bottlenecked by single-threaded performance?

    • crazyates88@alien.topB · 10 months ago

      People keep saying this, but does it though? I’ve seen multiple benchmarks where 8 slower cores are faster than 6 high-speed cores. I would say cache is more important than single-threaded performance in 2023.

      • yabucek@alien.topB · 10 months ago

        Really? Gaming benchmarks for Ryzen 7000 seem to be pretty much flat across the board, with minimal gains above the 6-core 7600. Which games are you thinking of?

      • 100GbE@alien.topB · 10 months ago

        Main threads in games are still limited by single-core performance.

      • Ketorunner69@alien.topB · 10 months ago

        One thing that gets constantly overlooked in these scenarios is that 8-core CPUs have more L3 cache than 6-core CPUs. So if a game uses 6 threads, an 8-core CPU of the same architecture and clock speed can potentially outperform its 6-core counterpart.

        • VenditatioDelendaEst@alien.topB · 10 months ago

          * Only on Intel, which builds its L3 out of slices attached to each P-core or E-core cluster (×4).

          AMD segregates its L3 at the CCX level, so every part made from the same die set has the same L3. There’s a bit of a complication with the 12- and 16-core parts: if all the threads are working on the same data, the L3 is effectively 1-CCD-sized, but if they’re working on different data (like with make -j, VMs, or some batch jobs), you get the benefit of both CCDs’ worth of L3.

        • einmaldrin_alleshin@alien.topB · 10 months ago

          That’s only true for Intel, which disables part of the L3 along with the cores. AMD, however, keeps the full L3 enabled on its 6-core parts.

          The marketing folks love to add up the L2 cache as well, but since that is not shared cache, each core still has the exact same amount of cache available to it, despite the lower total on the spec sheet.
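
          To put numbers on that with Zen 4’s published figures (1 MB private L2 per core, 32 MB L3 shared per CCD) — a back-of-envelope sketch, nothing more:

```python
# Back-of-envelope using Zen 4's public cache figures:
# 1 MB private L2 per core, 32 MB L3 shared per CCD.
L2_PER_CORE_MB = 1
L3_PER_CCD_MB = 32

def cache_visible_to_one_core():
    # A single thread sees its own private L2 plus the whole shared L3.
    return L2_PER_CORE_MB + L3_PER_CCD_MB

def spec_sheet_total(cores):
    # Marketing sums every core's private L2 into one big number.
    return cores * L2_PER_CORE_MB + L3_PER_CCD_MB

print(cache_visible_to_one_core())               # 33 MB on a 7600X or 7700X
print(spec_sheet_total(6), spec_sheet_total(8))  # 38 vs 40 "total" MB
```

          The spec-sheet totals differ (38 vs 40 MB), but what any one thread can actually use is identical.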

    • Prince-of-Ravens@alien.topB · 10 months ago

      And just in case the “experts” here crawl out of their swamp holes with the default excuses from back when AMD’s single-threaded performance was bad: just because a game uses 2-3 cores’ worth of CPU does not mean it isn’t single-thread limited. In the modern proliferation of cores, where you get them by the dozen, you are NEVER limited by MP throughput but always by the ST performance of the limiting critical thread.
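
      The critical-path point is easy to model. A toy sketch (all numbers hypothetical) of why piling on cores stops helping once the main thread is the longest pole in the tent:

```python
# Toy model: frame time is gated by the critical path (the main thread),
# no matter how many worker cores you add. Numbers are hypothetical.
def frame_time_ms(main_thread_ms, worker_ms_total, n_cores):
    # Workers split their load across the remaining cores;
    # the main thread keeps one core to itself.
    worker_ms = worker_ms_total / max(n_cores - 1, 1)
    return max(main_thread_ms, worker_ms)

for cores in (4, 8, 16, 24):
    print(f"{cores:2d} cores -> {frame_time_ms(10.0, 60.0, cores):.1f} ms/frame")
```

      Past 8 cores the frame time flatlines at the main thread’s 10 ms — that’s the ST limit in action.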

      • Geddagod@alien.topB · 10 months ago

        The average difference across the 12 games was like 5% between the 7700X’s and 7600X’s 1% lows.

        • VenditatioDelendaEst@alien.topB · 10 months ago

          1. Average doesn’t matter. If the game I play uses parallelism well, I don’t care about the ones that don’t.

          2. The difference in boost clock is only ~2%, so anything more than that is either due to core count or less (soft) thermal throttling from spreading the heat across more die area. And since they tested with a 360mm AIO, it’s probably not soft throttling.
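
          The clock math, for reference (boost clocks from the public spec sheets: 5.4 GHz for the 7700X vs 5.3 GHz for the 7600X; the ~5% figure is the gap cited upthread):

```python
# Rough decomposition of the 1%-low gap: how much do clocks alone explain?
clock_gain = 5.4 / 5.3 - 1   # 7700X vs 7600X boost clocks -> ~1.9%
observed_gap = 0.05          # ~5% 1%-low gap cited upthread
residual = observed_gap - clock_gain
print(f"clocks explain {clock_gain:.1%}; the other {residual:.1%} "
      f"is cores, cache, or thermals")
```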
