• Gittykitty@alien.topB · 10 months ago

    Good take. AI is, at the end of the day, a buzzword. Machine Learning is simply not the same as true AI, and while Machine Learning’s acceleration has been as impressive as it is frightening, it’s not the coming of Skynet. All the fearmongering about AI’s potential that deliberately references our cultural touchstone examples of dangerous AI is just part of the hype machine. It’s simply not there yet, and won’t be for a good chunk of time. But making the gullible public and idiotic investors think it’s that impressive? Well, that makes your company sound like a good investment!

    • jammsession@alien.topB · 10 months ago

      We also simply slap the AI badge on stuff we already did years ago but didn’t call AI back then. Like Airbus using machine learning to find better, lighter shapes for airplane parts.

      And for stuff like writing code, it turns out to be not as helpful as expected. It is impressive for coding noobs like me, but according to people I know who code for a living, it is not that big of a deal, just a nice addition that helps a little on some tasks.

      AI really shines in tasks where accuracy is not important. Like making up stories, drawing pictures, creating designs and logos, and writing buzzword PR. And of course rule 34!

      • Gittykitty@alien.topB · 10 months ago

        Yeah, like, I remember reading about Machine Learning practising StarCraft 2 back in like… 2014? Haha

      • moofunk@alien.topB · 10 months ago

        And for stuff like writing code, it turns out to be not as helpful as expected.

        That is not a fair judgment; it takes the ChatGPT user interface at face value.

        The issue is simply that LLMs like GPT-4 have not been allowed to use tools to write programs outside of lab conditions, so it’s the equivalent of you running code in your head based on what is already in your memory.

        Once an LLM has access to compilers or interpreters that actually run its code, it can feed its own mistakes back into the next prompt and arrive at working code. We already know that GPT-4 can work with Python, Bash and other interpreted languages simply by letting it use the tools and feed the results back into new prompts. It can also tell which tool to use based on the input.
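        Roughly what that feedback loop looks like, as a minimal sketch (ask_llm here is a hypothetical stand-in for whatever model API you call, not a real library function):

            import subprocess
            import tempfile

            def ask_llm(prompt: str) -> str:
                """Hypothetical model call that returns Python source for the prompt."""
                raise NotImplementedError("wire this up to your model of choice")

            def write_working_code(task: str, max_attempts: int = 5) -> str:
                """Ask the model for code, run it, and feed any error back into the next prompt."""
                prompt = f"Write a Python script that does the following:\n{task}"
                for _ in range(max_attempts):
                    code = ask_llm(prompt)
                    # Save the attempt to a temp file and actually run it with the interpreter.
                    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                        f.write(code)
                        path = f.name
                    result = subprocess.run(
                        ["python", path], capture_output=True, text=True, timeout=30
                    )
                    if result.returncode == 0:
                        return code  # the interpreter accepted it
                    # The feedback loop: the error output becomes part of the next prompt.
                    prompt = (
                        f"This script failed:\n{code}\n"
                        f"with this error:\n{result.stderr}\n"
                        f"Fix it so it completes the task:\n{task}"
                    )
                raise RuntimeError("no working script within the attempt budget")

        The point is only the loop structure: run the output, capture the failure, and hand it back to the model instead of trusting whatever it wrote in one shot.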

        Tool use for an LLM works much like it does for humans amplifying a specific ability, such as using a calculator for numerical computations or an SQL database to manage large tables of information.

        The tool use that ChatGPT allows today is simply prompting search engines or DALL-E and reading some webpages as input prompts, but there is no feedback loop that lets it fact-check itself.