I’ve recently spent a week or so, off and on, screwing around with LLMs and chatbots, trying to get them to solve problems, tell stories, or otherwise be consistent. Generally breaking them. They’re the fucking Mirror of Erised. Talking to them fucks with your brain. They take whatever input you give and try to validate it in some way without any regard for objective reality, because they have no objective reality. If you don’t provide something that can be validated with some superficial (often incorrect) syllogism, they spit out whatever series of words keeps you engaged. They train you, whether you notice or not, to modify how you communicate to more easily receive the next validation you want. To phrase everything you do as a prompt. AND they communicate with such certainty that if you don’t know better you probably won’t question it. Doing so pulls you into this communication style and your grip on reality falls apart, because this isn’t how people communicate or think. It fucks with your own natural pattern recognition.
I legitimately spent a few days in a confused haze because my foundational sense of reality was shaken. Then I got bored and realized, not just intellectually but intuitively, that they’re stupid machines making it up with every letter.
The people who see personalities and consciousness in these machines go outside and can’t talk to people like they used to because they’ve forgotten what talking is. So, they go back to their mechanical sycophants and fall deeper down their hole.
I’m afraid these gen AI “tools” are here to stay and I’m certain we’re using this technology in the wrong ways.
This is, thankfully, emphatically not true. There is no economic path that leads to these monstrosities remaining as prominent as they are now. (Indeed, their current prominence, as they get jammed into everything seemingly at whim, is evidence of how desperate their pushers are getting.)
Every time you get ChatGPT or Claude or Perplexity or whatever to do something for you, you are costing the slop pusher money. Even if you’re one of those people stupid enough to pay for an account.
If ChatGPT charged Netflix-like fees for access, they’d need well over half the world’s population as subscribers just to break even. And unlike pretty much every other tech we’ve created, newer versions get more expensive to create and operate with each iteration, not less.
There’s no fiscal path forward. LLMs are fundamentally impossible to scale profitably and there’s no amount of money that’s going to fix that. They’re a massive bubble that will burst, very messily, sooner rather than later.
In a decade there will be business studies comparing LLMs to the tulip craze. Well, at least in the few major cities left in the world that aren’t underwater from the global warming driven by all those LLM-spawned data centres.
I hope you’re right, but also that’s really bleak. I understand that Nvidia, Microsoft, and OpenAI are essentially passing money in a circle, and I can only wonder how long they can keep it up. It’s not a lossless circuit.
You really don’t understand how LLMs work at all, do you?
They’re an iterative statistical process that predicts word order through mathematical context, via weight distributions learned from enormous pre-given data sets. I’m not entirely sure what you’re getting at.
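To make “predicts word order via weight distributions” concrete, here’s a toy sketch (purely my own illustration, not anyone’s actual model): count which word follows which in some pre-given text, then iteratively sample the next word in proportion to those counts. Real LLMs use neural networks over tokens instead of bigram counts, but the generate-one-piece-at-a-time loop is the same shape.

```python
# Toy bigram "language model" -- illustration only, vastly simpler than a real LLM.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count which word follows which. These counts are the weight distribution.
weights = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    weights[prev][nxt] += 1

def generate(start, length=8):
    """Iteratively sample the next word in proportion to its learned weight."""
    word, out = start, [start]
    for _ in range(length):
        followers = weights.get(word)
        if not followers:          # dead end: never saw anything follow this word
            break
        choices, counts = zip(*followers.items())
        word = random.choices(choices, weights=counts, k=1)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat ate the mat the cat sat on"
```

There’s no meaning anywhere in that loop, just weighted dice rolls over whatever the data happened to contain.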
Seems he just figured it out.
What do you mean?