We’re good at scamming investors into thinking that a room full of monkeys on typewriters can be “AI.” And all it takes to make that happen is to pour time, resources, lives, and money (ESPECIALLY money) into building an army of fusion-powered robots to beat the monkeys into working just a little bit harder.
Because that’s business’s solution to everything: work harder, not smarter.
Current generations of LLMs, from everything I’ve learned, are basically really, really, really large rooms of monkeys pounding on keyboards. The algorithm that sifts through that mess to find actual meaning isn’t even particularly new or revolutionary; we just never had databases large enough, and indexable fast enough, to actually find the emergent patterns and connections between fields.
If you pile enough libraries in front of you and can sift out the exact lines that you know will make you feel a certain way, you can arrange that pile of information in ways that will give you almost any result you want.
The thing that tricks a lot of us is that we’re never really conscious of what we want. We want to be tricked, though. We want to control and manipulate something that seems conscious for our own ends; that gives a feeling of power, so your brain validates the experience by telling you the story that it’s alive. You see pictures that look neat and depict the scenes you wanted to see in your mind, so your brain convinces you that the machine is inventing things out of nothing, and that it must be magically smart to be able to mash Pikachu together with Darth Vader.