• 0 Posts
  • 72 Comments
Joined 11 months ago
Cake day: October 25th, 2023

  • Processors do wear out over time, but usually not this fast. It might be that the undervolt was only barely stable at one point, and maybe even unstable in some conditions you never tested. Now even the tiniest amount of wear has dropped it below the line.

    Could also be defective RAM. But that’s usually a defect from the factory, not wear-in.




  • RDNA1 and 2 were pretty successful. Vega was very successful in APUs; it just didn’t scale well for gaming, but was still successful in the data center. You can’t hit them all, especially when you have a fraction of the budget your competition has.

    Also, he ran graphics divisions, not a Walmart. People don’t fail upwards at these levels in these industries. When people fail upwards in other industries, they fail into middle management: somewhere you’re not in the spotlight and out of the public’s eye, where you don’t get to make final decisions. Somewhere to push you out of the way. Leading one of fewer than a handful of graphics divisions in the world is not where you land.



  • “Why? I am still learning, but my observations so far: the ‘purpose’ of purpose-built silicon is not stable. AI is not as static as some people imagined and trivialize [like] ‘it is just a bunch of matrix multiplies’.”

    But it is stable in a lot of cases, is it not? I mean, if you’re training a system for autonomous driving, or training a system for image generation, it seems pretty stable. But for gaming it certainly needs flexibility. If we want to add half a dozen features to games that rely on ML, it seems you need a flexible system.

    That does remind me of how Nvidia abandoned Ampere and Turing when it comes to frame generation, because they claim the optical flow hardware is not strong enough. What exactly is “Optical Flow”? Is it a separate type of machine learning hardware? Or is it not related to ML at all?



  • A 33% speed increase from a 12% memory bump makes no sense at all. Typically a 12% memory OC like you’re describing would not result in more than a 6% increase, and on average we’re likely talking less than 4%. Hell, even going to DDR5-6000 should not get you a 33% performance bump. Typical 4000 MT/s DDR4 kits have pretty loose timings, which makes them not much better than a 3600 kit with tight timings.
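    The intuition above can be sketched with a rough Amdahl’s-law-style model (my own back-of-envelope illustration, not a benchmark; the memory-bound fractions are assumed values):

    ```python
    # If only a fraction of frame time is memory-bandwidth-bound, a 12%
    # bandwidth bump speeds up only that fraction of the work.
    def est_speedup(mem_bound_fraction: float, bw_gain: float) -> float:
        """Estimated overall speedup when only part of the workload scales with bandwidth."""
        return 1.0 / ((1.0 - mem_bound_fraction) + mem_bound_fraction / (1.0 + bw_gain))

    # Even a heavily memory-bound game (say 50%) gains under 6% from a 12% OC.
    for frac in (0.25, 0.50, 1.00):
        print(f"{frac:.0%} memory-bound: {est_speedup(frac, 0.12) - 1:+.1%}")
    ```

    Only a workload that is 100% bandwidth-bound would scale the full 12%, which is why real-world gains land well below that.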


  • First of all, those are rumors, and given that the leaker doesn’t even know whether it’s 128-bit or 192-bit this late in the game, when it’s been in development for 3 years and is 10 months from release, the leaks are pretty much completely made up. RedGamingTech has a pretty bad leak accuracy record.

    That being said, if it’s targeting 7900 XT to 7900 XTX performance, and it’s using GDDR7, then 192-bit makes sense. It’s at about 7900 XT memory bandwidth in total if you work out the math at 34 Gbps. Currently GDDR7 is aiming for 32 to 36 Gbps.
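    Working out that math (the 7900 XT figures of 320-bit GDDR6 at 20 Gbps are its published specs, added here for comparison):

    ```python
    def mem_bandwidth_gb_s(bus_width_bits: int, gbps_per_pin: float) -> float:
        """Total memory bandwidth in GB/s: bus width * per-pin rate / 8 bits per byte."""
        return bus_width_bits * gbps_per_pin / 8

    rumored  = mem_bandwidth_gb_s(192, 34)  # rumored 192-bit GDDR7 @ 34 Gbps -> 816 GB/s
    rx7900xt = mem_bandwidth_gb_s(320, 20)  # 7900 XT: 320-bit GDDR6 @ 20 Gbps -> 800 GB/s
    print(rumored, rx7900xt)
    ```

    816 GB/s versus 800 GB/s is close enough that the 192-bit figure at least hangs together.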



  • I feel like Intel GPUs have such a bad reputation for drivers now that even Battlemage is going to fumble out of the gate, even if it’s good value, and even if the drivers by some miracle are better than what AMD has. On one hand they seem to be working hard on Arc to fix things, but I’m just super sceptical they can keep this up with less than 5% market share. They really needed Arc to be a success and give a good first impression.

    What are Nvidia’s plans with their ARM CPUs? I can’t imagine people will be using them for desktop anytime soon. Are they trying to get into tablets and Chromebooks?