• Pamani_@alien.topB · 10 months ago

    https://youtu.be/yDEUOoWTzGw?t=731

    > The 7970X required 3.6 min making the 7980X [2.2 min] 39% faster for about 100% more money. You’re never getting value for those top of the line parts though.

    Except that’s not it. The 7980X’s speed is 1/2.2 = 0.45 renders/minute, which is 64% faster than the 7970X’s (1/3.6 = 0.28 renders/minute). A faster way to do the math is 3.6/2.2 = 1.64 --> 64% faster. What Steve did is 2.2/3.6 = 0.61 --> 1-0.61 = 0.39 --> 39% faster.

    It’s not the first time I’ve seen GN stumble on percentages when talking about inverse performance metrics (think graphs where “lower is better”). Sometimes it doesn’t matter much because the percentage is small, like with 1/0.90 = 1.11, where 11% ≈ 10%. But on bigger margins it gets very inaccurate.

    Another way to see this is by pushing the example to the extreme. Take the R7 2700 at the bottom of the chart, completing the test in 26.9 minutes. Using the erroneous formula (2.2/26.9 = 0.08 --> 1-0.08 = 0.92) we get that the 7980X is 92% faster than the 2700, which is obviously silly; in reality it’s 12x faster (26.9/2.2 ≈ 12.2).
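    The two formulas are easy to check in a few lines of Python (a quick sketch; the function names are mine, the render times are the ones from the video):

```python
def speedup(fast_time, slow_time):
    """Correct speedup for a "lower is better" metric: ratio of rates, minus 1."""
    return slow_time / fast_time - 1

def wrong_speedup(fast_time, slow_time):
    """The erroneous formula: 1 - fast/slow, which actually measures time saved."""
    return 1 - fast_time / slow_time

# 7980X (2.2 min) vs 7970X (3.6 min)
print(f"{speedup(2.2, 3.6):.0%} faster (erroneous: {wrong_speedup(2.2, 3.6):.0%})")
# 7980X (2.2 min) vs R7 2700 (26.9 min)
print(f"{speedup(2.2, 26.9):.0%} faster (erroneous: {wrong_speedup(2.2, 26.9):.0%})")
```

    The "time saved" number is capped at 100% no matter how big the gap gets, which is why it collapses 12x faster down to 92%.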

    • hieronymous-cowherd@alien.topB · 10 months ago

      > You’re never getting value for those top of the line parts though.

      Yeah, and saying “never” is not a good take either. Plenty of customers are willing to spend stiff upcharges to get the best performance because it works for their use case!

      • Zevemty@alien.topB · 10 months ago

        The take is that you’re never getting more perf/$ (aka value) from top-of-the-line parts compared to lower-tier ones. Whether you can utilise that extra, more expensive performance enough to make the worse value worth it for you is irrelevant to their take.

      • dern_the_hermit@alien.topB · 10 months ago

        Especially in professional settings, where that extra minute or so, added up over multiple projects/renders and team members, can mean a difference of thousands or even tens of thousands of dollars, if not more. Looking at things in terms of percentages is useful, but absolute values are important as well.

    • Exist50@alien.topB · 10 months ago

      This is the kind of stuff that GN would claim justifies a 3-part video series attacking another channel for shoddy methodology. But I guess they’ve never been shy about hypocrisy.

      • Gravityblasts@alien.topB · 10 months ago

        EXACTLY! I’ve been saying this since GN released their anti-LMG agenda, and no one believed me lol…

    • VenditatioDelendaEst@alien.topB · 10 months ago

      Blender seems like it should be pretty close to embarrassingly parallel. I wonder how much of the <100% scaling is due to clock speed, and how much is due to memory bandwidth limitation? 4 memory channels for 64 cores is twice as tight as even the 7950X.

      Eyeballing the graphs, it looks like ~4 GHz vs ~4.6 GHz average, which…

      4000*64 / (4600*32) = 1.739
      

      Assuming a memory-bound performance loss of x, we can solve

      4000*64*(1-x) / (4600*32) = 1.64
      

      for x = 5.7%.
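      Working that through numerically (a quick sketch; the clocks are the eyeballed averages above and 1.64 is the measured speedup from the render times):

```python
# Eyeballed average clocks (MHz) times core counts: ideal scaling ratio.
ideal = (4000 * 64) / (4600 * 32)  # 7980X (64c @ ~4.0 GHz) vs 7970X (32c @ ~4.6 GHz)
measured = 1.64                    # observed speedup: 3.6 min / 2.2 min

# ideal * (1 - x) = measured  =>  x = 1 - measured / ideal
x = 1 - measured / ideal
print(f"ideal ratio {ideal:.3f}, memory-bound loss x = {x:.1%}")
```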

    • Gravityblasts@alien.topB · 10 months ago

      I wonder if LMG should make a video calling GN out for their incorrect numbers…doubt Steve would like that very much lol