• hotdogcharmer@lemmy.zip · 14 hours ago

    This isn’t harmless though; I’d argue it’s actively harmful. Imagine becoming convinced that a loved one is trapped inside ChatGPT. We’ve already got plenty of reports of chatbot-induced psychosis, and a few suicides.

    • MudMan@fedia.io · 14 hours ago

      I imagine being convinced that a loved one is trapped inside ChatGPT the same way I imagine believing they’re trapped in the TV or the telephone. Sure, ChatGPT can generate text claiming that’s the case, but buying it ultimately requires a fundamental disconnect from the technology at play.

      I’m less concerned with the people in that situation and more with the current dynamic: corporate shills are pushing fictions around that idea, while the media and private opposition buy into the possibility and accept the other side’s wild narrative, because it’s more effective to oppose a corpse-trapping, semi-sentient robot that drives you mad than it is to educate people about a pretty good chatbot.

      The shills aren’t helping, the people who’ve made fearmongering about this online their entire personality aren’t helping, and the press sure as hell isn’t helping. This is mostly noise in the background of a pretty crappy state of the world in general, but it sure is loud.