• bubblesort33@alien.topB
    10 months ago

    “Why? I am still learning, but my observations so far: the ‘purpose’ of purpose-built silicon is not stable. AI is not as static as some people imagined and trivialize [like] ‘it is just a bunch of matrix multiplies’.”

    But it is stable in a lot of cases, is it not? If you’re training a system for autonomous driving, or training a system for image generation, it seems pretty stable. Gaming, though, certainly needs flexibility: if we want to add half a dozen ML-based features to games, you need a flexible system.

    That reminds me of how Nvidia abandoned Ampere and Turing for frame generation, claiming their optical flow hardware isn’t strong enough. What exactly is “optical flow”? Is it a separate type of machine learning hardware, or is it not related to ML at all?

    • XYHopGuy@alien.topB
      10 months ago

      What changes is the model architecture, not the training objective (e.g. for a self-driving car).

      An optical flow accelerator computes the direction and speed of motion between successive video frames. It’s not ML hardware itself; its output is used as an input to ML models, so it’s related to ML in the same way a camera is.
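
      To make the idea concrete, here is a toy block-matching sketch in NumPy (my own illustration, not how Nvidia’s Optical Flow Accelerator actually works): for each block in one frame, search a small neighborhood of the next frame for the best-matching position, and the offset is that block’s motion vector. The hardware does a much faster, denser, and more robust version of this search, and frame generation then interpolates along the resulting motion vectors.

```python
import numpy as np

def block_motion(prev, curr, block=4, search=2):
    """Naive block-matching motion estimation: for each block of the
    previous frame, find the (dy, dx) shift into the current frame that
    minimizes the sum of absolute differences (SAD)."""
    h, w = prev.shape
    flow = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = prev[y:y + block, x:x + block].astype(int)
            best_sad, best = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue  # candidate block falls outside the frame
                    cand = curr[yy:yy + block, xx:xx + block].astype(int)
                    sad = np.abs(ref - cand).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            flow[by, bx] = best
    return flow

# Toy frames: a bright 4x4 patch that moves one pixel to the right.
prev = np.zeros((8, 8), dtype=np.uint8)
prev[2:6, 2:6] = 255
curr = np.zeros((8, 8), dtype=np.uint8)
curr[2:6, 3:7] = 255

flow = block_motion(prev, curr)
# flow[0, 0] is (dy, dx) == (0, 1): that block moved one pixel right.
```

      The flow field itself carries no learned parameters; like a camera frame, it is just another signal a network (or an interpolation pipeline) can consume.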