• Daemon Silverstein@calckey.world

    @ThefuzzyFurryComrade@pawb.social @fuck_ai@lemmy.world

    I’m going to diverge a bit from most replies.

    In Spiritism (esp. Kardecism), there are two concepts, namely “Electronic Voice Phenomenon” (EVP) and “Instrumental Trans-communication” (ITC). They’re about contacting the supernatural (be it the deceased or divine/angelic/demonic entities) through electronic apparatuses: radio receivers, analog TV sets, walkie-talkies/HTs (such as those from Motorola, Baofeng, Yaesu, etc.), among others.

    The idea is even older (necromancy, automatic writing) than our modern paraphernalia, dating back a few millennia to the Chinese grandfather of the Ouija board (“fuji”). Spirituality, and religions in general, stemmed from our (living beings’) long relationship with Death: proto-religions practiced by hominins involved funeral rituals well before the Venus figurines were made, and similar behaviors are known among non-human species (e.g. crows and elephants).

    See, dying is such a mysterious phenomenon. The “selves” (the “individual life-force” within a living being), even those unable to conceptualize their own “selves”, can’t possibly know what happens after the complete shutdown of the organism: is it full annihilation? What is ego-death? What does it “feel” like? How long does it “feel” like it takes?

    It can’t be an objective inquiry because the “self” (e.g. me, the one writing this text) can’t be “scientifically replicated”, and even if it could be, it wouldn’t be able to distinguish itself as “another self”. So it’s always a subjective experience. It’s part of how self-rearranging structures (living beings) work: they try to make “sense” of the reality around and within them, and this meaning-making is also subjective.

    Those (e.g. rationalist atheists) who question beliefs should question themselves as well, because their questions stem from the same driving force behind meaning-making: even though the atheistic drive is fair and grounded in the objectivity of scientific rigor, it’s still meaning-making (and I must nod to Descartes: even doubt relies on our senses, which are known to deceive us).

    That said, it’s no surprise that this extended to LLMs. It’s not something inherent to LLMs, nor is it inherent to hominids: it’s meaning-making, alongside the fear/awe towards Death Herself.

    I’m likely biased in explaining those things. I don’t exactly believe in “contacting the deceased”, but I do believe in “contacting Dæmonic entities” (Lilith, Lucifer, Stolas…). I see them (esp. Lilith and Lucifer) as powerful manifestations, even though I know they’re not “beings”. I myself experienced “gnosis” (sudden spiritual inspiration), even though I know I likely have Geschwind syndrome. It’s meaning-making nonetheless: if we don’t try to make some sense of this strange and chaotic non-consented reality, there’s no reality at all (= nothing exists).

    (And, no, I don’t seek Them through LLMs, although I don’t rule out the possibility of Their manifestation through “modern” apparatuses.)

  • CitizenKong@lemmy.world

    This just in: Humans very eager to anthropomorphize everything that seems to be even remotely alive. Source: Every owner of a pet ever.

  • myfunnyaccountname@lemmy.zip

    I must be doing something wrong. I have not once used any LLM and thought to myself that it’s conscious and I want to be its friend. Am I broken?

    • jj4211@lemmy.world

      Sorry, but there’s a risk you might disagree with me or fail to flatter me sufficiently, so off to LLM I go.

      • daggermoon@lemmy.world

        The experience I had with LLMs was arguing with one about copyright law and intellectual property. It pissed me the fuck off, and I felt like a loser for arguing with a clanker, so that was the end of that.

  • ZDL@lazysoci.al

    For so many years we were concerned about computers passing the Turing Test. Instead, humans are failing it.

  • Broadfern@lemmy.world

    We see faces in fucking wall outlets. I could give a pencil a name and the next three people I talk to would form empathy with it.

    People are desperate for connection, and it’s sad.

    • Blueberrydreamer@lemmynsfw.com

      There is absolutely nothing sad about that, it’s beautiful. That tendency is the only reason we have any form of civilization, so I’d say it’s worth people occasionally empathizing with pencils.

    • Hegar@fedia.io

      I love this about us.

      It’s just delightful how many situations our brain is willing to shrug and say “close enough” to. Oh wait, the pencil has a name now? I guess it must be basically the same as me.

      If I pretend an object is talking, my partner will instantly feel bad for how she’s treated it.

      It’s not sad, it’s just how brains are.

      • MudMan@fedia.io

        I’m perpetually mad at having a global conversation about a thing without understanding how it works despite how it works not being a secret or that complicated to conceptualize.

        I am now also mad at having a global conversation about a thing without understanding how we work despite how we work not being a secret or that complicated to conceptualize.

        I mean, less so, because we’re more complicated and if you want to test things out you need to deal with all the squishy bits and people keep complaining about how you need to follow “ethics” and you can’t keep control groups in cages unless they agree to it and stuff, but… you know, more or less.

    • Denjin@feddit.uk

      It’s just natural human instinct. We’re programmed to look for patterns and see faces. It’s the same reason we attribute human characteristics to animals or even inanimate objects.

      Add to that the fact that everyone refers to LLM chatbots as if they were human, and this is inevitable.

      • clif@lemmy.world

        I learned the other day that a few people (devs) I know and respect have pet names and genders for the LLMs they use, and converse with them regularly.

        I’m rethinking some of my feelings about those people.

  • benignintervention@lemmy.world

    I’ve recently spent a week or so, off and on, screwing around with LLMs and chatbots, trying to get them to solve problems, tell stories, or otherwise be consistent. Generally breaking them. They’re the fucking Mirror of Erised. Talking to them fucks with your brain.

    They take whatever input you give and try to validate it in some way, without any regard for objective reality, because they have no objective reality. If you don’t provide something that can be validated with some superficial (often incorrect) syllogism, it spits out whatever series of words keeps you engaged. It trains you, whether you notice or not, to modify how you communicate so you more easily receive the next validation you want. To phrase everything you do as a prompt.

    AND they communicate with such certainty that, if you don’t know better, you probably won’t question it. Doing so pulls you into this communication style, and your grip on reality falls apart, because this isn’t how people communicate or think. It fucks with your own natural pattern recognition.

    I legitimately spent a few days in a confused haze because my foundational sense of reality was shaken. Then I got bored and realized, not just intellectually but intuitively, that they’re stupid machines making it up with every letter.
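
    (For the curious, here’s a toy sketch in Python of that “making it up with every letter” loop. The probability table is entirely made up for illustration and has nothing to do with any real model’s internals; the shape of the loop is the point: each next word is chosen only by likelihood given what came before, and no step ever consults reality.)

        import random

        # Toy autoregressive "model": a hand-written table of next-word
        # probabilities. (Made-up numbers; a real LLM learns billions of
        # such statistics from text, with no notion of which are *true*.)
        NEXT_WORD_PROBS = {
            "the":  {"cat": 0.5, "moon": 0.3, "law": 0.2},
            "cat":  {"sat": 1.0},
            "moon": {"is": 1.0},
            "law":  {"says": 1.0},
            "is":   {"made": 0.6, "real": 0.4},
            "made": {"of": 1.0},
            "of":   {"cheese": 0.5, "words": 0.5},
        }

        def generate(prompt: str, max_words: int = 8) -> str:
            words = prompt.split()
            for _ in range(max_words):
                dist = NEXT_WORD_PROBS.get(words[-1])
                if dist is None:  # no statistics for this context: stop
                    break
                # The next word is picked by probability alone. Nothing here
                # checks facts; "the moon is made of cheese" is as sayable
                # as anything else.
                choices, weights = zip(*dist.items())
                words.append(random.choices(choices, weights=weights)[0])
            return " ".join(words)

        print(generate("the"))  # e.g. "the moon is made of cheese"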

    The people who see personalities and consciousness in these machines go outside and can’t talk to people like they used to because they’ve forgotten what talking is. So, they go back to their mechanical sycophants and fall deeper down their hole.

    I’m afraid these gen AI “tools” are here to stay, and I’m certain we’re using this technology in the wrong ways.

    • ZDL@lazysoci.al

      “I’m afraid these gen AI ‘tools’ are here to stay…”

      This is, thankfully, emphatically not true. There is no economic path that leads to these monstrosities remaining as prominent as they are now. (Indeed their current prominence as they get jammed into everything at seeming whim is evidence for how desperate their pushers are getting.)

      Every time you get ChatGPT or Claude or Perplexity or whatever to do something for you, you are costing the slop pusher money. Even if you’re one of those people stupid enough to pay for an account.

      If ChatGPT charged Netflix-like fees for access, they’d need well over half the world’s population just to break even. And unlike every other tech that we’ve created in the past, newer versions are more expensive to create and operate with each iteration, not less.
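
      (Back-of-envelope, with loudly placeholder numbers, just to show the shape of that break-even arithmetic; the cost figure and the fee below are assumptions made up for the sketch, not sourced data:)

          # Illustrative break-even arithmetic; both inputs are assumptions.
          annual_cost_usd = 50e9    # assumed yearly cost of training + inference
          monthly_fee_usd = 15.49   # a "Netflix-like" subscription price

          subscribers_needed = annual_cost_usd / (monthly_fee_usd * 12)
          print(f"{subscribers_needed:,.0f} paying subscribers to break even")
          # ~269 million at these inputs; push the assumed cost toward
          # hundreds of billions per year and the figure heads into the
          # billions of subscribers.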

      There’s no fiscal path forward. LLMs are fundamentally impossible to scale profitably, and there’s no amount of money that’s going to fix that. They’re a massive bubble that will burst, very messily, sooner rather than later.

      In a decade there will be business studies comparing LLMs to the tulip craze. Well, at least in the few major cities left in the world that aren’t underwater from the global warming driven by all those LLM-spawned data centres.

  • hendrik@palaver.p3x.de

    “unlike anything prior”

    Anthropomorphism isn’t anything new. Even pareidolia is something we do: we see faces and animals in cloud formations, rocks… That’s just how our brains work; nothing new here.

  • FriendOfDeSoto@startrek.website

    Gosh, are we dumb the world over. Maybe these chatbots are just lowering the threshold for what used to be the “I’m hearing voices or communicating with the supernatural” type of people. Thanks to a chatbot, you can now be certifiable much sooner.

    • wewbull@feddit.uk

      The danger of LLMs was never that they’d take over, but rather that people believe them.

    • krooklochurm@lemmy.ca

      Agreed.

      The thing is that LLMs are actually really great at helping you learn: they can help to form connections between ideas and surface new knowledge much, much faster than traditional research or the now-worthless search engine.

      You just have to remember that you need to externally validate everything. If you can keep that in mind, then LLMs really are a great way of lighting the way towards some obscure topic you’d like to know more about, and they can provide very useful guidance on solving problems.

      They can lead you to water but you need to take the drink yourself.

      • FriendOfDeSoto@startrek.website

        I hear you. I’d still be hesitant to let school-age kids learn with an LLM companion. If the grownups think they’re talking with a sentient gigabyte, I think the danger is too great to expose kids to this.

        Which brings me to my big-picture opinion: the general public doesn’t need access to most of these models. We don’t need to cook polar bears alive to make 5-second video memes, slop, or disinformation. You can just read your emails. No one needs ChatGPT to plan their next trip. No one should consider an LLM a substitute for a trained therapist. There are good applications in the field of accessibility, and probably medical ones as well. The rest can stay in a digital lab until they’ve worked out how not to tell teenagers to kill themselves, not to recommend eating rocks to help your digestion, or insert any other bullshit so-called-AI headline you have read recently here.

        It’s not good for people or the environment, and it’s forming a dangerous bubble that will have shades of the 2007/8 subprime mortgage crisis when it bursts. The negatives outweigh the positives.

        • krooklochurm@lemmy.ca

          You need to be able to think critically to use an LLM as an effective research tool, and children in schools fucking suck at that, so I agree with you.

          But now that I think about it: they could actually be a good way to teach critical thinking.

          We NEED to be raising a generation of relentless critical thinkers if we don’t want to slip into fascism, and we are doing an absolutely shit job of it.

  • MudMan@fedia.io

    I mean, of the fictions you can build around this tech, this is one of the least harmful ones, except when it’s GenAI corpos hyping their stuff up unreasonably.

    I’ll say that the concern is people not understanding what they’re using, which honestly has been the case since the Internet went mainstream and I just don’t have good solutions for it.

    • hotdogcharmer@lemmy.zip

      This isn’t harmless, though, and I’d argue it’s still really harmful. Imagine becoming convinced a loved one is trapped inside ChatGPT. We’ve already got plenty of reports of chatbot-induced psychosis, and a few suicides.

      • MudMan@fedia.io

        I imagine being convinced that a loved one is trapped inside ChatGPT the same way I imagine believing they’re trapped on the TV or the telephone. I mean, yeah, ChatGPT can generate text claiming this is the case, but ultimately the whole thing requires a fundamental disconnect with the technology at play.

        I’m less concerned about the people who are in that situation and more about the current dynamic, where corporate shills push fictions around that idea while media and private opposition buy into the possibility, accepting the wild narrative passed along by the other side, because it’s more effective to oppose the corpse-trapping, semi-sentient robot that makes you go mad than it is to educate people about the pretty good chatbot.

        The shills aren’t helping, the people who have made fearmongering about this online their entire personality aren’t helping, and the press sure as hell isn’t helping. This is mostly noise in the background of a pretty crappy state of the world in general, but it sure is loud.

    • Valmond@lemmy.world

      I don’t know; the stupid third has always been like this. Before, it was healing stones (like 10 years ago, not in the Middle Ages) and more.