https://www.youtube.com/watch?v=QEbI6v2oPvQ

I had a lot of trouble setting up ROCm and Automatic1111. I tried first with Docker, then natively, and failed many times. Then I found this video. It gives a good overview of the setup, plus a couple of critical bits that really helped me: reinstalling a compatible version of PyTorch, and how to test whether ROCm and PyTorch are working. I still hit a few of the Python problems that crop up when updating A1111, but a quick search in the A1111 bug reports turned up workarounds for those. A strange HIP hardware error also appeared at startup, but a simple reboot solved that.

He also says he couldn’t make it work with ROCm 5.7, but for me, two months later, ROCm 5.7 with a 7900 XTX on Ubuntu 22.04 worked.
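For reference, the reinstall-and-verify steps mentioned above look roughly like this. This is a sketch, assuming the A1111 virtual environment is active; the rocm5.7 wheel index matches the ROCm 5.7 setup described here, so adjust it for other ROCm versions:

```shell
# Replace the default (CUDA/CPU) PyTorch wheels with ROCm builds.
pip uninstall -y torch torchvision
pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm5.7

# Sanity check: on ROCm builds, torch.cuda is backed by HIP, so
# is_available() should print True and torch.version.hip a version string.
python -c 'import torch; print(torch.cuda.is_available(), torch.version.hip)'
```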

And coming from a Windows DirectML setup, the speed is heavenly.

  • CasimirsBlake@alien.topB
    10 months ago

    It should not be this involved. It is still a cluster of a process. But I hope some folks can get this to work.

    • DarkeoX@alien.topB
      10 months ago

      It could be way easier with proper Docker images. That’s what I tend to do for all these projects.

      The ROCm team had the good idea of releasing an Ubuntu image with the whole SDK & runtime pre-installed. But that’s simply not enough to conquer the market and gain trust. Ideally, they’d release images bundled with some of the most popular FLOSS ML tools ready to use, plus the latest stable ROCm version.
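      For what it’s worth, AMD does publish a `rocm/pytorch` image on Docker Hub that comes close to this. A rough sketch of using it (the device and group flags are the ones AMD’s container docs call for, so the GPU is visible inside the container):

      ```shell
      # Pull the ROCm PyTorch image and run a quick GPU check inside it.
      # /dev/kfd and /dev/dri expose the GPU; the video group grants access to it.
      docker run -it --rm \
          --device=/dev/kfd --device=/dev/dri --group-add video \
          rocm/pytorch:latest \
          python -c 'import torch; print(torch.cuda.is_available())'
      ```

      If that prints True, the container sees the GPU and an A1111-style tool installed on top of it should be able to use ROCm without any host-side PyTorch setup.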