• DevAnalyzeOperate@alien.topB · 10 months ago

    Pushing hard with ROCm?

    There are millions of developers who build for CUDA. Nvidia, I believe, has north of a thousand people (can't remember if it's closer to one or two thousand) working on CUDA, and CUDA is 17 years old. There is SO MUCH work already done in CUDA; Nvidia is legitimately SO far ahead, and I think people really underestimate this.

    If AMD hired, say, 2,000 engineers to work on ROCm, it would still take them maybe 5 years to get to where Nvidia is now, which would leave them 5 years behind. Let's not even get into the magnitudes more CUDA GPUs floating around out there compared to ROCm GPUs: CUDA GPUs started shipping earlier at higher volumes, and even really old hardware is still usable for learning or a home lab. As far as I know, AMD is hiring far fewer people; they just open-sourced ROCm and are hoping they can convince enough other companies to write software for it.

    I don't mean to diminish AMD's efforts here. Nvidia is certainly scared of ROCm, and I expect ROCm to make strides in the consumer market in particular as hobbyists try to get their cheaper AMD chips working with diffusion models and the like. When it comes to more enterprise-facing work, though, CUDA is very, very far ahead, the lead is WIDENING, and the only real threat to that status quo is that there literally are not enough Nvidia GPUs to go around.

    • itsjust_khris@alien.topB · 10 months ago

      CUDA's moat is being undone by things like OpenAI's Triton. Soon most ML code will be written against interfaces that let any supported hardware vendor run it. AMD doesn't have to replicate all of Nvidia's work, especially when the industry has multiple giants all working to undo Nvidia's software moat.
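      For a sense of what that hardware-agnostic layer looks like, here is a minimal vector-add kernel in Triton's Python DSL, a sketch only: it assumes the `triton` and `torch` packages and a supported GPU, and the point is that the same source compiles for either a CUDA or a ROCm backend without vendor-specific code.

```python
# Minimal Triton vector-add sketch. Requires the `triton` package and a
# CUDA- or ROCm-capable GPU; Triton JIT-compiles the same kernel source
# for whichever backend is present.
import torch
import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard the final, partially filled block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)


def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)  # one program per 1024-element block
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```

      Nothing in the kernel names a vendor; the dispatch to NVIDIA or AMD hardware happens inside Triton's compiler, which is exactly the kind of abstraction that erodes a CUDA-specific moat.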

      Nvidia's dominance won't last forever. They have the advantage today, but one day all of this AI hardware and software will be commoditized.