The RTX 3080 definitely could take advantage of extra VRAM; with just 10 GB it can struggle in some titles on Ultra settings at 4K.
At this point I guess AMD doesn’t want to put more expensive Zen 3 CPUs on AM4… Pricing would be an issue because the R9 5950X is at $434 (Amazon) while the R9 7900X is at $388 (Amazon), so unless they launch an R9 5950X3D for $400 I don’t see a reason to launch it (besides it needing workarounds like Game Bar to work properly).
Sure, the 4080 might not age very well, but it’s very likely its RT performance will become insufficient before VRAM becomes an issue; even Alan Wake II limits its use of path tracing at max settings and still relies on a decent amount of raster.
It really was a good GPU; sadly the first impression Vega gave wasn’t good, stacked with being a year late, overvolted, and barely able to reach the GTX 1080 at launch…
I would’ve gotten one if they weren’t so uncommon in my country; even Navi was a lot easier to find on the used market, so I ended up getting an RX 5700 that’s serving me very well!
Sadly they weren’t that impactful besides the Vega 56 - it competed very well with the GTX 1070, and Nvidia launched the GTX 1070 Ti because of it - they consumed too much power at stock because of overvolting, and they launched way too late…
Their last “high end” before the RX 6900 XT was the Radeon VII, and before that the Vega 64; yeah, those could only compete with the RTX 2080 and GTX 1080 respectively, but the same goes for the RX 7900 XTX, which can only compete with the RTX 4080…
PS: Also, the high-end AMD GPU before the Vega 64 was the R9 Fury X (the R9 390X was a 290X refresh that launched in the same period); it was quite competitive with the GTX 980 Ti, but its 4 GB of HBM and the necessity of being water cooled limited its sales…
Yes, from the looks of it Intel’s alternative to compete with 3D V-Cache is something akin to HBM but more efficient; I bet Intel would use it for their integrated graphics.
It is quite sad that HBM hasn’t decreased in price enough to be embedded in consumer APUs; even 2/4 GB wouldn’t be that bad if HBCC (which uses system RAM as VRAM while the HBM caches data) could be used.
Technically yes, but AMD has used monolithic dies for their APUs for years, and trying to stack cache on top of a piece of silicon bigger than a CPU chiplet would need a redesign at the very least; it would jack up prices quite significantly, and most likely we would only see a single SKU with it.
I’d guess a better alternative would be a chiplet-based APU similar to the Navi 31/32 design (a GPU die surrounded by cache chiplets). It would be quite flexible: you could add a Zen X/Zen XC chiplet (or a hybrid of both, with 3D V-Cache on top if you wish), a GPU chiplet, and a cache chiplet to keep more data close to the GPU so it’s less reliant on RAM bandwidth.
AMD may well do that in the near future with an RDNA 4/5 APU. The main problem is power consumption (transferring data between chiplets consumes more power than a monolithic chip with everything on a single die), and that would be a significant downside for mobile devices; that’s why they haven’t made a chiplet-based APU yet, and it’s easier to just recycle/resell any laptop APU as a desktop one.
Honestly, I would rather they focus on the VOPD implementation to increase overall performance in some scenarios; if they manage to get at least 10 to 20% more in gaming, it would be enough to make the 7900 XTX competitive with the 4090.
Thanks Steve!
Technically “yes”, but that’s some meticulous GPU modding; you’d need to be skilled enough at soldering and able to modify the GPU firmware to detect the extra memory, and even then it most likely wouldn’t work flawlessly… There was a guy who actually did that with an RTX 3070, but with some caveats.