• doneandtired2014@alien.topB
    11 months ago

    Radeon VII wasn’t a lower-tier die. Every Radeon VII ever sold was effectively a salvaged Instinct MI50 that couldn’t be validated for that particular market segment. It was, and remains, AMD’s only equivalent to NVIDIA’s (very dead) Titan line of products (all Titans were salvaged Quadros and Teslas).

    The jump from 14nm (which wasn’t really that much different from 16nm) to 7nm can’t be overstated. It was only slightly less of a leap than the one NVIDIA recently made from Samsung 8nm to TSMC 4N this generation (which was *massive*). VEGA 20 might be significantly smaller than VEGA 10 (331 mm² vs. 495 mm²), but it also packs roughly 6% more transistors into that smaller surface area. Additionally, the memory interface in VEGA 20 (4096-bit) is twice as wide as VEGA 10’s (2048-bit) because AMD doubled the HBM2 stacks from two to four. HBM2 was (and is) insanely expensive compared to GDDR5, GDDR5X, GDDR6, and GDDR6X modules, so much so that Radeon VII’s VRAM cost *by itself* was comparable to the entire BOM board partners were paying to manufacture a complete RX 580.

    All in all, it was an okay card. It wasn’t particularly good for gaming relative to its peers, but the same criticism could just as easily be made of VEGA 56 and 64. It was a phenomenal buy for content creators who needed gobs of VRAM but couldn’t afford the $2,500 NVIDIA was asking for the Titan V.

    • handymanshandle@alien.topB
      11 months ago

      I remember one of the primary driving factors of cost on the R9 Fury cards (Fury, Nano, and Fury X, as well as stuff like the Radeon Pro Duo) being the ridiculous cost of HBM manufacturing. Given that it’s, well, stacked memory with very little tolerance for defects, it was not cheap to produce.

      I want to say that this was also the primary reason the RX Vega cards (the Vega 56 and 64, more precisely) were cheaper than their Fury counterparts: fewer of those insanely expensive memory stacks means, well, a less expensive card. I could honestly see why AMD ended up dropping HBM for consumer graphics cards, as its ridiculous memory bandwidth advantage was heavily diminished by its buy-in cost and by GDDR5/6 memory becoming fast enough for gaming, even if that meant the cards consumed more power.
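      To put a number on that diminished advantage, here’s a minimal per-package bandwidth comparison, assuming typical data rates of the era (2.0 Gbps HBM2, 14 Gbps GDDR6; both figures are assumptions, not from the comment above):

      ```python
      # Bandwidth per memory package: interface width (bits) * data rate (Gbps) / 8.
      def hbm2_stack_gbs(data_rate_gbps: float = 2.0) -> float:
          return 1024 * data_rate_gbps / 8  # 1024-bit interface per HBM2 stack

      def gddr6_chip_gbs(data_rate_gbps: float = 14.0) -> float:
          return 32 * data_rate_gbps / 8    # 32-bit interface per GDDR6 chip

      print(hbm2_stack_gbs())  # 256.0 GB/s per stack
      print(gddr6_chip_gbs())  # 56.0 GB/s per chip; 8 chips on a 256-bit bus ~ 448 GB/s
      ```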

    • capn_hector@alien.topB
      10 months ago

      Radeon VII wasn’t a lower-tier die. Every Radeon VII ever sold was effectively a salvaged Instinct MI50 that couldn’t be validated for that particular market segment.

      Sure, but couldn’t they have made a bigger chip that performed even faster? Why did they shrink the flagship at all? Why not move to the smaller node and also keep the die size the same?

      Yeah, it’d take architectural changes to GCN (the design topped out at four shader engines and 64 CUs, so you couldn’t just scale it wider), but that’s not consumers’ problem; they’re buying products, not ideas.

      Isn’t that exactly what NVIDIA did with Ada? Shrink the node, but all the dies got much smaller, so an x80-tier product is now the same size as a 3060 or whatever?
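      For what it’s worth, commonly cited die areas bear that out; a quick sketch (the die-to-SKU pairings below are my own reading of the lineup, not from the thread):

      ```python
      # Commonly cited die areas in mm^2 for Ampere vs. Ada.
      die_mm2 = {
          "GA102 (RTX 3080/3090)": 628,
          "GA106 (RTX 3060)": 276,
          "AD103 (RTX 4080)": 379,
          "AD104 (RTX 4070 Ti)": 294,
      }

      # The x80-tier die shrank by ~40% between generations...
      print(1 - die_mm2["AD103 (RTX 4080)"] / die_mm2["GA102 (RTX 3080/3090)"])  # ~0.40
      # ...and AD104 is within ~7% of the old 3060 die.
      print(die_mm2["AD104 (RTX 4070 Ti)"] / die_mm2["GA106 (RTX 3060)"])        # ~1.07
      ```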