No, it doesn’t. Unless you can show me a paper demonstrating that literally any amount of synthetic data increases hallucinations, I’ll assume you simply don’t understand what you’re talking about.
what paper? no one in industry is gonna give you this shit, it’s literal gold
academics are still arguing about it, but save this and we can revisit in 6 months for a fat “i told you so” if you still care
ai is dead as shit for anything that matters until this issue is fixed
but at least we can enjoy soulless art while we wait for the acceleration
Yeah, that’s what I guessed. Try to look into the research first before making such grandiose claims.
i know the current research, i know it’s going to eat your lunch
Ah yes, and you can’t show us that research because it goes to another school? And all companies that train LLMs are simply too stupid to realize this fact? Their research showing the opposite (which has been replicated dozens of times over) was just a fluke?
no because this is literally in development, this isn’t some 60-year-old mature tech
algorithms sure, neural nets for some narrow topics yep great, not this bullshit though
there is already accessible academic research on LLM issues, of which the major concern is hallucinations, to the point where the word “bailout” is starting to make the rounds in the US from these very companies
the argument is whether you believe this is inherent or fixable, and a big focus is on the training
anyone listening to any ai company right now is a damn fool, given the obvious circular vendor bullshit going on
but you do you, if the market could be trusted to be sane i’d be timing it right now
Of course, you don’t have research supporting your position because it’s still in development. So obviously we can just ignore all the papers released over the last decade+ which show the opposite of what you’re claiming - convenient!
Yeah, as I expected - you literally don’t understand what this conversation is even about. Since you have a bone to pick with the industry, you make up random claims that you think make the industry look bad. But what you don’t understand is that you’re just making a fool of yourself by making subjective claims about topics you simply don’t understand. Critique the AI industry for the greedy, useless shit they’re doing and creating, not by making up wrong “facts” and ignoring all evidence against them.
And just to save us both time, I’ll try to list the positions you seem to think I hold, which I don’t:
I don’t think LLMs will ever get rid of hallucinations
I don’t think LLMs will get better and better by only training on output from previous LLMs
I don’t think LLMs are the path to AGI
I don’t think any of the marketing done by AI companies is truthful
If you choose to reply again and think I’m lying about not holding these positions, re-read the conversation until you understand it.
the focus i know about is literally on this issue, you keep acting like you’d actually expect anyone to give you details lol
but sure, please give me your facts and access to your research