- it’s not actually AI
- it’s just fancy autocomplete / glorified Markov chains
- it can’t reason, it’s just a pLagIaRisM MaChiNe
Now if I want to win the annoying Lemmy bingo I just need to shill extra hard for more restrictive copyright law!
Doing the Lord’s work in the Devil’s basement
Reasoning has nothing to do with knowledge though.
You should have asked ChatGPT to explain the comment to you, cause that’s not what they say
And certainly not as spooky as spectrography
Arch Linux is a good alternative to Linux and is a good choice for most use cases where you can use it for a variety of tasks and and it is a good fit to Linux and Linux.
Yeah, it always strikes me how religious extremism is framed. You rarely hear about Christian extremists, who operate in the open on all social networks.
Yet, you could argue that Christian extremists have done more harm to western societies in the last 20 years than any Islamic group.
That’s a nice hypothetical but the facts of this case are much simpler. Would you agree that a country is sovereign, and entitled to write its own laws? Would you agree that a company has to abide by a country’s laws if it wants to operate there? Even an American company? Even if it is owned by a billionaire celebrity?
Why can’t a woman take illegal drugs? Control of your own body is a philosophical concept, not a legal one.
Then you have to agree that piracy is theft and people pirating content should be sued.
Even if you were extremely generous and didn’t factor the scams into your analysis, the reality is that a blockchain solves problems 99.9% of people will never face. This breaks the whole imagined model: your product is ultra niche, but it relies on mass adoption for its security.
Alternative interpretation, cause I find i18n extremely boring and hate the indirection it adds to a codebase: you’re telling me I can start making an app without this hassle, and it will only cost me a 2 kloc PR some time in the future. That’s a totally manageable price to pay, and it makes the early dev experience much better (which can have a lot of impact on momentum).
If you like to write, I find that story boarding with stable diffusion is definitely an improvement. The quality of the images is what it is, but they can help you map out scenes and locations, and spot visual details and cues to include in your writing.
Just yesterday I got an ad for actual shrooms! The website even had a “how is this legal” section, but the legal theory in there was… Not very convincing…
But canonically, the Futurama dog waited like 10 minutes! Some time travel shenanigans overwrote the reality where he spends his life waiting. In the final timeline he got to live a few more happy years with his master 🥲
Then these models are stupid
Yup that is kind of the point. They are math functions designed to approximate human tasks.
These models should start out with basics of language, so they don’t have to learn it from the ground up. That’s the next step. Right now they’re just well read idiots.
I’m not sure what you’re pointing at here. How they do it right now, simplified, is you have a small model designed to cut text into tokens (“knowledge of syllables”), which are fed into a larger model which turns tokens into semantic information (“knowledge of language”), which is fed to a ridiculously fat model which “accomplishes the task” (“knowledge of things”).
The first two models are small enough that they can be trained on the kind of data you describe: classic books, movie scripts, etc. A couple hundred billion words, maybe. But the last one requires orders of magnitude more data, in the trillions.
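To make the pipeline above concrete, here’s a toy sketch of the three stages. Everything here is made up for illustration (the vocabulary, the greedy matching, the random vectors); real systems use learned BPE vocabularies, trained embedding matrices, and billion-parameter transformers.

```python
import random

# Stage 1: tokenizer — cuts text into token ids ("knowledge of syllables").
# Toy vocabulary; real tokenizers learn ~50k-100k pieces from data.
vocab = {"un": 0, "believ": 1, "able": 2, "!": 3}

def tokenize(text):
    # Greedy longest-match over the vocab, purely illustrative.
    ids, i = [], 0
    pieces = sorted(vocab.items(), key=lambda kv: -len(kv[0]))
    while i < len(text):
        for piece, tid in pieces:
            if text.startswith(piece, i):
                ids.append(tid)
                i += len(piece)
                break
        else:
            i += 1  # skip characters the toy vocab doesn't cover
    return ids

# Stage 2: embeddings — token ids become semantic vectors
# ("knowledge of language"). Random here; learned in a real model.
random.seed(0)
embeddings = [[random.random() for _ in range(4)] for _ in vocab]

# Stage 3: the big model would consume these vectors to
# "accomplish the task" ("knowledge of things").
token_ids = tokenize("unbelievable!")
vectors = [embeddings[t] for t in token_ids]
print(token_ids)  # [0, 1, 2, 3]
```

The point of the split is that stages 1 and 2 are cheap to train, while stage 3 is where the trillions of words go.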
That’s what smaller models do, but it doesn’t yield great performance because there’s only so much stuff available. To get to gpt4 levels you need a lot more data, and to break the next glass ceiling you’ll need even more.
I think there are a few tricks that still make it possible. First, nothing says that you have to, or really that you can simulate a universe 1:1. When you think of it we already simulate millions of universes in video games, but they are dramatically simpler than our reality. So, our parent reality could be much more complex than our own.
Consequently, physics could be vastly different from one layer to another. Maybe in the real reality, entropy isn’t that significant and quasi-perpetual motion is possible, making energy super cheap. Maybe the limits in our universe like the speed of light and Planck constants are just hardware caps to prevent us from using too much compute.
No, the article is badly worded. Earlier models already have reasoning skills with some rudimentary CoT, but they leaned more heavily into it for this model.
My guess is they didn’t train it on the 10-trillion-word corpus (which is expensive and has diminishing returns), but rather on a heavily curated RLHF dataset.