• DarkeoX@alien.top · 11 months ago

    It’s gotten quite a lot better, but it’s still cumbersome in many respects. They need to get some dedicated maintainers into the main ML FOSS projects to make new ROCm versions available easily.

    Every time a new ROCm version is released, it takes ~2 months for all the stars to line up and for builds to become available for the most commonly used LLM stacks.

    Backported ROCm builds should be a thing too. It doesn’t help that there are ROCm 5.7 PyTorch 2.2 nightly builds when most projects still use PyTorch 2.1 and are stuck with ROCm 5.6 (especially since AMD devs essentially push you to upgrade to solve any problem/crash you may have).
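
    For reference, a minimal sketch of pinning a matching PyTorch/ROCm pair and checking what actually got installed; the version numbers are the ones implied above, and the wheel indexes are the per-ROCm-series ones PyTorch publishes:

    ```python
    # Installed with, e.g. (each ROCm series has its own wheel index):
    #   pip install torch==2.1.0 --index-url https://download.pytorch.org/whl/rocm5.6
    # Nightlies live under e.g. https://download.pytorch.org/whl/nightly/rocm5.7
    import torch

    print(torch.__version__)          # e.g. "2.1.0+rocm5.6"
    print(torch.version.hip)          # HIP/ROCm version the wheel was built against (None on CUDA builds)
    print(torch.cuda.is_available())  # ROCm builds expose AMD GPUs through the torch.cuda API
    ```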

    Also, although I completely understand the need to settle somewhere in terms of kernel/distro support on Linux, it’s too bad that their highest supported kernel is 6.2.x.

  • rW0HgFyxoJhYka@alien.top · 11 months ago

    > I’m writing this entry mostly as a reference for those looking to train a LLM locally and have a Ryzen processor in their laptop (with integrated graphics), obviously it’s not the optimal environment, but it can work as a last resort or for at-home experimenting.

    Oh.

  • Geoe0@alien.top · 11 months ago

    Wow, I only need to recompile everything 😂 I’m a big AMD fan, but SCNR (sorry, could not resist) in this case.

    • LoafyLemon@alien.top · 11 months ago

      Yep. I just did that the other day to run Stable Diffusion, as I couldn’t get it to install the required drivers any other way.
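
      For anyone hitting the same wall: once a ROCm build of PyTorch is working, the Stable Diffusion code itself needs no changes, because ROCm builds expose the GPU through the usual `cuda` device string. A minimal sketch with the `diffusers` library; the model name is illustrative, not from the comment:

      ```python
      import torch
      from diffusers import StableDiffusionPipeline

      # On ROCm, "cuda" maps to the AMD GPU; no AMD-specific code is needed.
      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      image = pipe("an astronaut riding a horse").images[0]
      image.save("out.png")
      ```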

  • PierGiampiero@alien.top · 11 months ago

    > as a reference for those looking to train a LLM locally

    It took me hours to fine-tune a small (by today’s standards) BERT model on an RTX 4090; I can’t imagine doing anything on chips like those referenced in the article, even inference.

    I wouldn’t do any training on anything less than a 7800/7900 XTX, if you can get them to work.
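
    For scale, a fine-tune of the kind described is roughly the following; a sketch assuming the Hugging Face `transformers` and `datasets` libraries, with an illustrative model and dataset rather than the ones the commenter used:

    ```python
    from datasets import load_dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2
    )

    # Tokenize a standard sentiment dataset; batches are padded by the Trainer's collator.
    dataset = load_dataset("imdb")
    encoded = dataset.map(lambda b: tokenizer(b["text"], truncation=True), batched=True)

    args = TrainingArguments(output_dir="bert-imdb", per_device_train_batch_size=16,
                             num_train_epochs=1, fp16=True)
    Trainer(model=model, args=args, train_dataset=encoded["train"],
            tokenizer=tokenizer).train()
    ```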

  • TeutonJon78@alien.top · 11 months ago

    Maybe when AMD supports ROCm on more than about 3 consumer cards at a time.

    CUDA supports cards going back many generations. AMD keeps cutting off cards that already had support: Polaris was cut a while ago, and Vega/Radeon VII either just got cut or will be cut in the next release.
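
    The usual community workaround for cards that fell off the support matrix is overriding the GPU ISA the ROCm runtime sees, so it loads kernels built for a nearby still-supported target. A hedged sketch; `10.3.0` is the value commonly quoted for RDNA2 parts, and it generally does not rescue Polaris:

    ```python
    import os

    # Must be set before torch initializes the ROCm/HSA runtime.
    # "10.3.0" tells the runtime to treat the GPU as gfx1030 (RDNA2).
    os.environ["HSA_OVERRIDE_GFX_VERSION"] = "10.3.0"

    import torch
    print(torch.cuda.is_available())      # True if the runtime accepted the override
    print(torch.cuda.get_device_name(0))  # the AMD GPU, exposed via the CUDA API
    ```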