• Sleyeme@alien.topB · 1 year ago

    This is an interesting innovation. Curious how this plays out in the future; it's something I'm definitely interested in.

    • Dealric@alien.topB · 1 year ago

      Hopefully it will die.

      We shouldn't be getting PCIe x8 cards for hundreds of dollars. Really shouldn't.

    • 1mVeryH4ppy@alien.topB · 1 year ago

      It requires PCIe bifurcation to work, which I believe most mid- to low-end motherboards don't support.
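
      Not a definitive check (bifurcation support is really a BIOS/UEFI feature), but on Linux you can at least see what link width each PCIe device actually negotiated by reading standard sysfs attributes, e.g. to spot the GPU dropping from x16 to x8. A minimal sketch, assuming a Linux system:

      ```python
      # Sketch: print negotiated vs. maximum PCIe link width/speed per device.
      # Reads the standard sysfs attributes current_link_width, max_link_width
      # and current_link_speed; devices that don't expose them are skipped.
      import glob
      import os

      def read_attr(dev_path, name):
          try:
              with open(os.path.join(dev_path, name)) as f:
                  return f.read().strip()
          except OSError:
              return None  # attribute not exposed for this device

      for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
          width_now = read_attr(dev, "current_link_width")
          width_max = read_attr(dev, "max_link_width")
          speed_now = read_attr(dev, "current_link_speed")
          if width_now is None:
              continue
          print(f"{os.path.basename(dev)}: x{width_now} of x{width_max} @ {speed_now}")
      ```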

      • Berengal@alien.topB · 1 year ago

        I don’t think Intel supports anything other than 8x/8x on 12th gen and newer on any MB. AMD should support 4x/4x/4x/4x and 8x/4x/4x on the primary slot on any AM5 CPU and I think most AM4 CPUs too regardless of MB. Although I guess the MB manufacturer could disable that functionality if they really wanted.

        • benjiro3000@alien.topB · 1 year ago

          AMD should support 4x/4x/4x/4x and 8x/4x/4x on the primary slot on any AM5 CPU and I think most AM4 CPUs too regardless of MB.

          AMD is 4x/4x/4x/4x for all desktop CPUs EXCEPT the G (laptop-die APU) series. Those are 8x/4x/4x (and always with a lower PCIe version).

          AND it also depends on the motherboard manufacturer, because some do not implement 4x/4x/4x/4x. Forgot what board it was.

          Intel LGA 1700 is 8x/8x for sure. Alder Lake and its successors are hard-coded in the CPU to this. Sucks!

          • You want great idle power consumption => Intel, but you lose ECC and 4x/4x/4x/4x bifurcation. Yes, there are workstation boards that have ECC, but at that price you may as well just get a PCIe card with an active lane-splitter chip for NVMe. You'll probably come out cheaper.
          • You want ECC and 4x/4x/4x/4x => AMD, but you eat around 15 W higher power draw from the wall (chiplet issue).
          • You want low-power AMD? => G series, but then no ECC and only 8x/4x/4x, OR you need to find the insanely overpriced XX50G PRO versions for ECC support, with the same bifurcation limit. And the lower PCIe version issue will still be there.

          There is no "does it all" solution.
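
          To make the lane math behind those modes concrete, here's a tiny sketch; the mode lists are just the ones claimed in this thread, not a spec. Each bifurcated segment of the x16 slot is one downstream link, so every segment beyond the GPU's can host one NVMe drive.

          ```python
          # Sketch: lane budget of a single x16 CPU slot under the bifurcation
          # modes discussed above (as claimed in this thread, not an exhaustive
          # spec). The GPU takes the first segment; every remaining segment is
          # one downstream link that can host one NVMe drive.
          MODES = {
              "Intel 12th gen+": [(8, 8)],
              "AMD AM4/AM5 desktop": [(8, 8), (8, 4, 4), (4, 4, 4, 4)],
              "AMD G-series APU": [(8, 4, 4)],
          }

          for platform, modes in MODES.items():
              for mode in modes:
                  assert sum(mode) == 16, "only 16 lanes to split"
                  gpu, *rest = mode
                  layout = "/".join(f"x{w}" for w in mode)
                  print(f"{platform}: {layout} -> GPU at x{gpu}, "
                        f"{len(rest)} extra drive slot(s)")
          ```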

          • madn3ss795@alien.topB · 1 year ago

            AMD’s support depends on the motherboard. My B550 board supports all 3 modes (8x/8x, 8x/4x/4x and 4x/4x/4x/4x).

    • wtallis@alien.topB · 1 year ago

      No.

      This is an M.2 slot that fully bypasses the GPU. It’s not wired to the graphics chip at all; the PCIe link from the SSD goes to the CPU as usual.
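
      One way to see that for yourself on Linux is to resolve the SSD's sysfs symlink, which spells out the chain of root port and bridges it sits behind; the drive shows up under a CPU/chipset port rather than under the GPU. A minimal sketch (the device address is a placeholder; substitute your SSD's BDF from lspci):

      ```python
      # Sketch: print the PCIe bridge chain above a device by resolving its
      # sysfs symlink. DEV is a placeholder BDF - replace it with the SSD's
      # address as reported by lspci.
      import os

      DEV = "0000:01:00.0"  # placeholder, not a real assignment

      real = os.path.realpath(f"/sys/bus/pci/devices/{DEV}")
      # real looks like /sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0
      chain = [p for p in real.split(os.sep) if ":" in p]
      print(" -> ".join(chain))
      ```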

  • ecktt@alien.topB · 1 year ago

    Yeah, this falls under the “because you can do something doesn’t mean you should” category. If you really need that extra NVMe slot, you might need to rethink your storage strategy.

  • yvng_ninja@alien.topB · 1 year ago

    If this becomes mainstream, I wonder how it will affect watercooling. Would it make active backplates more relevant?

  • Justifiers@alien.topB · 1 year ago

    Now here’s something to consider:

    PCIe 5.0 cards, they claim, won’t need more than x8 anyway.

    Could be a way to add two extra Gen 5 M.2s without incurring motherboard or CPU lane limitations.
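
    Rough lane math behind that idea: PCIe 5.0 doubles the per-lane rate of 4.0, so a Gen 5 x8 link carries about as much raw bandwidth as a Gen 4 x16 link, and the eight freed lanes could feed two x4 M.2 drives. A quick back-of-the-envelope sketch (per-lane figures are approximate published rates per direction):

    ```python
    # Sketch: approximate PCIe bandwidth per direction, in GB/s per lane
    # (16 GT/s and 32 GT/s with 128b/130b encoding - ballpark figures).
    GBPS_PER_LANE = {"PCIe 4.0": 1.97, "PCIe 5.0": 3.94}

    def bandwidth(gen, lanes):
        return GBPS_PER_LANE[gen] * lanes

    print(f"Gen 4 x16 GPU link: {bandwidth('PCIe 4.0', 16):.1f} GB/s")
    print(f"Gen 5 x8  GPU link: {bandwidth('PCIe 5.0', 8):.1f} GB/s  (about the same)")
    print(f"Gen 5 x4  per M.2 : {bandwidth('PCIe 5.0', 4):.1f} GB/s, two drives possible")
    ```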

  • Simon_Paul_99@alien.topB · 1 year ago

    This feels cool, I guess, but it solves a total of one (1) minor issue (in case your motherboard lacks a spare M.2 slot and you need one) and straight up creates a big issue (requiring PCIe bifurcation), plus a potential whole host of compatibility issues. Still fun to see experimentation happen, though. I also don’t really see a future where we move M.2 drives to the GPU entirely, because most consumer computers don’t have a dedicated GPU.

    I actually thought the SSD was FOR the GPU, and that it would take advantage of fast NVMe speeds to use it as a slower secondary graphics memory. That’d be a good use for old high-end PCIe 4.0 drives once people start upgrading to PCIe 5.0 ones.

    • Slyons89@alien.topB · 1 year ago

      Going from main system memory when the VRAM is full would probably still be orders of magnitude faster than going from any storage device. But maybe it will work out somehow with DirectStorage.
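
      For a sense of scale, a rough comparison of sequential bandwidths (approximate theoretical peaks; the latency gap between DRAM and NVMe is larger still, and the RAM figure below is just one assumed configuration):

      ```python
      # Sketch: ballpark sequential bandwidth of the tiers a GPU could spill
      # into once VRAM is full. Approximate theoretical peaks, not benchmarks.
      RAM_GBPS = 96.0  # assumed dual-channel DDR5-6000, ~2 x 48 GB/s
      SSD_GBPS = {
          "PCIe 4.0 x4 NVMe": 7.9,
          "PCIe 5.0 x4 NVMe": 15.8,
      }

      for ssd, gbps in SSD_GBPS.items():
          print(f"{ssd}: ~{gbps} GB/s vs ~{RAM_GBPS} GB/s RAM "
                f"(RAM ~{RAM_GBPS / gbps:.0f}x faster)")
      ```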

  • Jeffy29@alien.topB · 1 year ago

    Well, that’s a first. I am shocked that Nvidia even allowed them to do this, given how much they have controlled the uniformity of their recent cards. A bit dubious about the application, since PCIe 4.0 SSDs are already pretty hot and slapping one on top of a GPU seems crazy, but the idea is pretty cool.

    • Donard80@alien.topOPB · 1 year ago

      There was a test somewhere, either Hardware Unboxed, Linus Tech Tips, Level1Techs, Gamers Nexus or der8auer, and temps were good. I don’t think it’d lower speed, as it’s basically just an extension of the PCIe lanes.

    • Donard80@alien.topOPB · 1 year ago

      Probably, as long as the motherboard supports PCIe bifurcation. Apart from that, you can just slap a PCIe-to-M.2 converter card in any spare PCIe slot and enjoy NVMe on old hardware. It’ll also probably cost less than the premium on this GPU.