• Actius@alien.topB · 10 months ago

    To recap: spend $400 for a 5-10% increase in performance, which in real-world terms takes you from 75 fps to 80 fps. How grand.

  • WH34TB01@alien.topB · 10 months ago

    I feel dumb for asking, but given these prices, is a 6800, 6750 XT, or 6700 XT the better buy? I’m coming from a 6 GB 1060, so they’re all a huge jump for 1440p, but I want this card to last a few years so I can upgrade my mobo and CPU next year.

  • galloway188@alien.topB · 10 months ago

    My 1080 is still running. The fans gave out, so I just replaced them with a 3D-printed fan mount and a couple of Noctua fans.

  • IIvoltairII@alien.topB · 10 months ago

    If I’m starting a brand-new build (my first one), should I go for a 3060 Ti or a 4060 Ti?

    Thanks in advance for the help.

  • cherubim02@alien.topB · 10 months ago

    I’m still running a 1060 on my i7-8700K rig. However, the tiny 3 GB of GPU RAM is becoming more and more of an issue. Any recommendations for a reasonable upgrade? I’m eyeing the 7700 XT, but €450 is a bit too steep for such an old system imo.

  • mrpoops@alien.topB · 10 months ago

    What’s the best cheap option for running smaller AI models?

    Like the GGUF’d Mistral 7B versions that are lighter on memory, for example. I need fast inference, and I don’t really feel like depending on OpenAI or paying them a bunch of money. I’ve fucked up and spent like $200 on API charges before, so I’m definitely trying to avoid that.

    I have a 980 Ti and it’s just too damn old. It works with some stuff, but it’s super hit-or-miss with any of the newer libraries.

    • YoloSwaggedBased@alien.topB · 10 months ago

      Consider something with a bit more VRAM, like a 2080 Ti or a 3080. The 10-11 GB on those cards gives you headroom for a higher quantisation precision (e.g. 8-bit instead of 4-bit). You need a bit over 14 GB to run a full 7B model without quantisation, so you’ll need quantisation either way.
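
      To put rough numbers on that, here’s a minimal sketch (assuming llama-cpp-python; the .gguf file name is just a placeholder for whichever quant you download):

      ```python
      # Back-of-envelope, weights-only VRAM for a 7B model; the KV cache
      # and activations add overhead on top of these figures.
      params = 7e9
      print(f"fp16 : {params * 2.0 / 1e9:.1f} GB")  # ~14 GB -> too big for 10-11 GB cards
      print(f"8-bit: {params * 1.0 / 1e9:.1f} GB")  # ~7 GB  -> fits a 2080 Ti / 3080
      print(f"4-bit: {params * 0.5 / 1e9:.1f} GB")  # ~3.5 GB -> fits even 8 GB cards

      # Loading a 4-bit GGUF quant with llama-cpp-python and offloading
      # every layer to the GPU. The model_path below is a placeholder.
      from llama_cpp import Llama

      llm = Llama(
          model_path="mistral-7b-instruct.Q4_K_M.gguf",  # placeholder file name
          n_gpu_layers=-1,  # -1 = offload all layers to the GPU
          n_ctx=4096,       # context window; larger values grow the KV cache
      )
      out = llm("Explain GGUF quantisation in one sentence.", max_tokens=64)
      print(out["choices"][0]["text"])
      ```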

    • cupatkay@alien.topB · 10 months ago

      Maybe a used RTX 30 series? They have tensor cores, which help a lot in running AI stuff. I got a used 3070 for $240 a few weeks ago.