• stoy@lemmy.zip · 6 days ago

I suspect this will happen all over the place within a few years: AI seemed good enough at first, but over time reality and the AI started drifting apart.

    • Kirp123@lemmy.world · 6 days ago

      AI is literally trained to produce the right answer, not to actually perform the steps that get you to the answer. It’s like the people who trained dogs to carry explosives and run under tanks: they thought they were doing great, until the first battle they used them in, when they realized the dogs would run under their own tanks instead of the enemy ones, because that’s what they had been trained on.

      • merc@sh.itjust.works · 5 days ago

        It’s not trained to get the right answer. It’s trained to know what sequence of words tends to follow another sequence of words, and then a little noise is added to that function to make it a bit creative. So, if you ask it to make a legal document, it has been trained on millions of legal documents, so it knows exactly what sequences of words are likely. But, it has no concept of whether or not those words are “correct”. It’s basically making a movie prop legal document that will look really good on camera, but should never be taken into court.
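
        To make that concrete, here’s a toy sketch of that sampling step. Everything in it is made up for illustration; the “noise” is the temperature parameter.

        ```python
        import math, random

        def sample_next(scores: dict[str, float], temperature: float = 0.8) -> str:
            # Turn raw "how likely does this word come next" scores into probabilities.
            scaled = [s / temperature for s in scores.values()]  # lower temperature = less noise
            biggest = max(scaled)
            weights = [math.exp(s - biggest) for s in scaled]    # numerically stable softmax
            # Pick a word at random, weighted by likelihood -- no notion of "correct" anywhere.
            return random.choices(list(scores), weights=weights, k=1)[0]

        # After "The party of the first part shall...", a model trained on piles of
        # legal text might score continuations like this (numbers invented):
        print(sample_next({"indemnify": 2.1, "hold": 1.9, "notwithstanding": 0.4}))
        ```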

    • Spezi@feddit.org · 6 days ago

      And then, the very same CEOs that demanded the use of AI in decision making will be the ones that blame it for bad decisions.

    • jj4211@lemmy.world · 6 days ago

      They haven’t drifted apart; they were never close in the first place. People have grown more confident in the models because the models have sounded more and more convincing, but the connection to reality has always been tenuous.

      • jballs@sh.itjust.works · 6 days ago

        Yeah it’s not even drift. It’s just smoke and mirrors that looks convincing if you don’t know what you’re talking about. It’s why you see writers say “AI is great at coding, but not writing” and then you see coders say “AI is great at writing, but not coding.”

        If you have any idea what good looks like, you can immediately recognize AI ain’t it.

        For a fun example, at my company we had a POC done by a very well-known AI company. It was supposed to analyze an MS Project schedule, compare the tasks in that schedule to various data sources related to those tasks, and then flag potential schedule risks. In the demo to the COO, they showed the AI looking at a project schedule and saying “Task XYZ could be at risk due to vendor quality issues or potential supply chain issues.”

        The COO was amazed. Wow it looked through all this data and came back with such great insight. Later I dug under the hood and found that it wasn’t looking at any data behind the scenes at all. It was just answering specifically “what could make a project task at risk?” and then giving a hypothetical answer.

        Anyone using AI to make any sort of decision is basically doing the equivalent of Googling their issue and taking the top response as gospel. Yeah, that might work for a few basic things, but anything important that requires any thought whatsoever is going to fail spectacularly.
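
        To put the gap in rough code terms, here’s the difference between what was pitched and what it actually did. All of the names, fields, and thresholds below are hypothetical.

        ```python
        def flag_risks_as_pitched(task, vendor_stats, supply_stats):
            """What the demo implied: check the task against the real data sources."""
            risks = []
            if vendor_stats.get(task["vendor"], {}).get("defect_rate", 0) > 0.05:
                risks.append("vendor quality issues")
            if supply_stats.get(task["part"], {}).get("lead_time_days", 0) > task["float_days"]:
                risks.append("potential supply chain issues")
            return risks

        def flag_risks_as_built(task, ask_llm):
            """What it actually did: ask a generic question and never touch the data."""
            return ask_llm(f"What could make a project task like '{task['name']}' be at risk?")

        task = {"name": "Install pump", "vendor": "Acme", "part": "P-100", "float_days": 10}
        print(flag_risks_as_pitched(task,
                                    {"Acme": {"defect_rate": 0.08}},
                                    {"P-100": {"lead_time_days": 30}}))
        # -> ['vendor quality issues', 'potential supply chain issues']
        ```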

        • jj4211@lemmy.world · 6 days ago

          It’s why you see writers say “AI is great at coding, but not writing” and then you see coders say “AI is great at writing, but not coding.”

          I’ve always thought of this as being just like Hollywood: if you have expertise in whatever field they’re presenting an expert in, it’s painful how off they are, but it looks fine to everyone outside that field.

        • merc@sh.itjust.works · 5 days ago

          It was just answering specifically “what could make a project task at risk?” and then giving a hypothetical answer

          It wasn’t even doing that. It was “looking” at its training data for what an analysis like that might look like, and then inventing a sequence of words that matched that training data. Maybe “vendor quality issues” is a phrase that appears in the training data, so it looks like a good thing to put in its output.