• lrussell887@alien.topB · 10 months ago

    With how hot PCIe 5.0 M.2 drives get, using the GPU as a heatsink sure makes a lot of sense if this feature becomes mainstream.

  • Velcrowrath@alien.topB · 10 months ago

    I have 0 experience with electrical engineering, so this may be a dumb question. Is there a way to modify a GPU to accommodate additional VRAM chips? Or is there a limit in GPU architecture that prevents adding more RAM like this?

  • Joemarais@alien.topB · 10 months ago

    It’s supposed to be $100 more than the already overpriced 4060ti dual. Kind of useless when you can buy a board with 1 more nvme slot if you need one more. Or, if you have a case with a dedicated spot for vertical mounting like a hyte y40/y60, just get a $20 x16 to x8/x8 riser and use the second x8 with a $10 nvme adapter.

    • TheRealSeeThruHead@alien.topB · 10 months ago

      If you buy the mobo with the most nvme slots, this still gives you one more.

      Otherwise you’re wasting pcie 5.0 slots. Current gpus don’t need 16x pcie 4.0. Using 16x pcie 5.0 for a gpu is a waste on any motherboard

      • Stingray88@alien.topB · 10 months ago

        If you buy the mobo with the most nvme slots, this still gives you one more.

        If you’re already buying a motherboard with many NVMe slots, populating all of them, and still need more space… and your motherboard supports PCIe bifurcation… I highly doubt you’re buying a 4060ti just so you can get one more. It would be a lot easier to just grab one of the many PCIe NVMe adapters available and use that in one of your other PCIe slots.

        Otherwise you’re wasting pcie 5.0 slots. Current gpus don’t need 16x pcie 4.0. Using 16x pcie 5.0 for a gpu is a waste on any motherboard

        While that may be true, GPU manufacturers aren’t going to make an 8x version of a high end GPU just so you can fit an additional NVMe SSD on it. The only reason Asus did it with the 4060ti is because it’s already an 8x GPU.

        This is a weird niche we’re unlikely to see more of.

      • danielv123@alien.topB · 10 months ago

        It doesn’t give you an extra one unless you have an ITX board, since it just takes the 8 lanes from the second 16x slot and feeds 4 of them to the SSD.

        • TheRealSeeThruHead@alien.topB · 10 months ago

          i thought it would work like this
          without ssd on gpu
          slot 1: gpu (uses 8x lanes)
          slot 2: dual ssd bifurcation card (4x lanes per ssd)
          total 2 nvme ssd in 2 slots

          with ssd on gpu
          slot 1: gpu + ssd (4x lanes each)
          slot 2: dual ssd bifurcation card (4x lanes per ssd)
          total 3 nvme in 2 slots

          assuming that the gpu only needs 4x lanes of pcie 5.0

          you get an extra ssd in this scenario which would not be possible without a GPU that natively does PCIE 5.0 at 4x.
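
          here is a rough tally of that lane math in plain python, just as a sketch: the numbers are the assumptions from this comment (16 cpu lanes split across two slots, and a hypothetical gpu that is fine with 4x pcie 5.0), not the actual asus spec:

          ```python
          # Rough lane-budget tally for the two scenarios above.
          # Assumes 16 CPU lanes shared across two x16 slots, and a
          # (hypothetical) GPU that only needs 4 lanes of PCIe 5.0.
          def tally(slots):
              """Sum lanes and count SSDs across a list of slots."""
              lanes = sum(n for slot in slots for _, n in slot)
              ssds = sum(1 for slot in slots for dev, _ in slot if dev == "ssd")
              return lanes, ssds

          without_gpu_ssd = [
              [("gpu", 8)],              # slot 1: gpu alone on 8x
              [("ssd", 4), ("ssd", 4)],  # slot 2: dual ssd bifurcation card
          ]
          with_gpu_ssd = [
              [("gpu", 4), ("ssd", 4)],  # slot 1: gpu + on-card ssd, 4x each
              [("ssd", 4), ("ssd", 4)],  # slot 2: dual ssd bifurcation card
          ]

          for name, cfg in [("without", without_gpu_ssd), ("with", with_gpu_ssd)]:
              lanes, ssds = tally(cfg)
              print(f"{name} ssd on gpu: {lanes} lanes used, {ssds} nvme drives")
          # without ssd on gpu: 16 lanes used, 2 nvme drives
          # with ssd on gpu: 16 lanes used, 3 nvme drives
          ```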

          • danielv123@alien.topB · 10 months ago

            Nope. If you use the second 16x slot, the first only gets 8 lanes due to bifurcation, so no SSD on the GPU. If on the other hand we got 4x PCIe 5 GPUs, this would make sense. Sadly there are no PCIe gen 5 GPUs yet, so they all use more gen 4 lanes.
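
            To spell out why the on-card M.2 loses out, here is a quick sketch. It assumes the board splits its 16 lanes x8/x8 when both slots are populated, and a current gen 4 GPU that wants all 8 of those lanes; the lane counts are assumptions for illustration, not a spec:

            ```python
            # Why the card's M.2 gets nothing once the second x16 slot is used.
            # Assumptions: 16 CPU lanes that split x8/x8 across two slots,
            # a gen 4 GPU that needs x8, and an on-card M.2 slot that needs x4.
            GPU_LANES = 8
            M2_LANES = 4

            def slot1_lanes(second_slot_used):
                return 8 if second_slot_used else 16

            for second_slot_used in (False, True):
                budget = slot1_lanes(second_slot_used)
                leftover = budget - GPU_LANES
                fits = leftover >= M2_LANES
                print(f"second slot used: {second_slot_used} -> slot 1 has x{budget}, "
                      f"{leftover} lanes left, on-card M.2 {'works' if fits else 'gets nothing'}")
            # second slot used: False -> slot 1 has x16, 8 lanes left, on-card M.2 works
            # second slot used: True -> slot 1 has x8, 0 lanes left, on-card M.2 gets nothing
            ```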

    • danielv123@alien.topB · 10 months ago

      I wish there was an option for more gen4 lanes instead. The new Threadripper has moved away from the all-gen5 philosophy, providing 88 lanes, only 48 of which are gen5.

    • Stingray88@alien.topB · 10 months ago

      We’re pretty unlikely to see this on any GPU that uses 16 PCIe lanes. Even if it wouldn’t actually affect performance, it’s significantly more complicated than what ASUS is doing here. They’re basically just using otherwise unused lanes, and even then it only works on certain motherboards. Pretty niche.

      • TheRealSeeThruHead@alien.topB · 10 months ago

        throwing away bandwidth is pretty dumb

        maybe we will see gpus being plugged into pcie 5.0 x4 slots in the future, and more x4 slots showing up in motherboards.

        But if gpus continue to take up full 16x slots and not use the bandwidth, that’s just going to hurt the already horrible amount of pcie lanes we get on consumer platforms

        • Stingray88@alien.topB · 10 months ago

          throwing away bandwidth is pretty dumb

          I mean… it’s fine. You’re not maxing out the bandwidth on every single connection in your PC 24/7. No one is.

          I don’t know why you think this is dumb.

          no gpu needs 16 lanes of PCIE 5.0. And likely won’t for several generations

          Sure… but high end GPUs do see a significant difference between PCIe 3.0 16x and PCIe 4.0 16x. You know what’s the equivalent of PCIe 3.0 16x? PCIe 4.0 8x. Not everyone has PCIe 5.0 today; in fact most don’t. Many people buying high end GPUs are still on PCIe 4.0. I’m one such person, I have a 4090 with a PCIe 4.0 AM4 motherboard.
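
          Rough numbers behind that equivalence, if it helps. Per-lane throughput roughly doubles each PCIe generation; the figures below are the usual approximations after encoding overhead, not exact measurements:

          ```python
          # Approximate usable bandwidth per PCIe lane in GB/s (after 128b/130b encoding).
          GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}

          def bandwidth(gen, lanes):
              return GBPS_PER_LANE[gen] * lanes

          for gen, lanes in [(3, 16), (4, 8), (4, 16), (5, 8), (5, 4)]:
              print(f"PCIe {gen}.0 x{lanes}: ~{bandwidth(gen, lanes):.1f} GB/s")
          # PCIe 3.0 x16: ~15.8 GB/s
          # PCIe 4.0 x8:  ~15.8 GB/s  <- same ballpark as 3.0 x16
          # PCIe 4.0 x16: ~31.5 GB/s
          # PCIe 5.0 x8:  ~31.5 GB/s  <- same ballpark as 4.0 x16
          # PCIe 5.0 x4:  ~15.8 GB/s
          ```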

          You can’t make a GPU that uses 16 lanes only on PCIe 4.0 but drops down to 8 lanes on PCIe 5.0. That’s not how it works. And considering most people today don’t yet have PCIe 5.0, it’s very unlikely GPU manufacturers would be willing to create 8x only versions that are only meant to be used on PCIe 5.0 motherboards. The only reason Asus did this with the 4060ti is because that GPU is already an 8x GPU.

          maybe we will see gpus being plugged into pcie 5.0 x4 slots in the future, and more x4 slots showing up in motherboards.

          We absolutely will not.

          But if gpus continue to take up full 16x slots and not use the bandwidth,

          Again… they do use the bandwidth on older versions of PCIe. Not everyone is on the newest platform. You will always have a portion of the market pairing newer GPUs with older motherboards.

          By the time GPUs actually start to see a benefit on PCIe 5.0, we will have PCIe 6.0 and you’ll be making the same argument… but not everyone will be on PCIe 6.0 yet.

          that’s just going to hurt the already horrible amount of pcie lanes we get on consumer platforms

          This hasn’t been a real problem in years. Where do you think you’re being bottlenecked in terms of lanes on a consumer platform? What do you think you need them for?

    • Stevesanasshole@alien.topB · 10 months ago

      They don’t share any PCIe lanes - the card gets 8 and the SSD gets 4. The mobo needs to support single-slot bifurcation.
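
      If it helps to picture the split, here is a tiny sketch assuming the slot bifurcates into x8 for the GPU and x4 for the M.2 as described above, with the remaining lanes simply going unused:

      ```python
      # One physical x16 slot, bifurcated by the motherboard per the comment above:
      # the GPU and the on-card M.2 each get their own dedicated lanes.
      SLOT_LANES = 16
      allocation = {"gpu": 8, "m2_ssd": 4}  # assumed x8 + x4 split

      used = sum(allocation.values())
      assert used <= SLOT_LANES  # sanity check: the split fits in one slot
      print(allocation, "unused lanes:", SLOT_LANES - used)
      # {'gpu': 8, 'm2_ssd': 4} unused lanes: 4
      ```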