• FooBarrington@lemmy.world · 1 day ago

    I mean - yeah, it is? This is a well-researched part of the data pipeline for any big model. Some companies even got into trouble because their models identified as other models whose outputs they were trained on.
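    To make that concrete, here's a minimal sketch of the kind of synthetic-text filtering such pipelines do. Purely illustrative: the marker list below is made up, and real pipelines rely on trained classifiers and provenance metadata rather than a hand-written regex list.

        import re

        # Hypothetical markers suggesting a document is another model's output.
        # Illustrative only - not any company's actual filter.
        MODEL_OUTPUT_MARKERS = [
            r"as an ai language model",
            r"i (was trained|was developed) by \w+",
            r"i am an ai assistant",
        ]
        _pattern = re.compile("|".join(MODEL_OUTPUT_MARKERS), re.IGNORECASE)

        def looks_model_generated(doc: str) -> bool:
            """Flag text that self-identifies as a model's output."""
            return _pattern.search(doc) is not None

        # Flagged documents get dropped, down-weighted, or tagged, so a model
        # doesn't learn another model's self-identification.
        corpus = ["As an AI language model, I cannot...", "Mitochondria are..."]
        clean = [d for d in corpus if not looks_model_generated(d)]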

    It seems you have a specific bone to pick that you attribute to such training, but denying such broadly understood results is just a weird approach…

      • FooBarrington@lemmy.world · 1 day ago

        No, it doesn’t. Unless you can show me a paper showing that literally any amount of synthetic data increases hallucinations, I’ll assume you simply don’t understand what you’re talking about.

        • baines@lemmy.cafe · 1 day ago

          what paper? no one in industry is gonna give you this shit, it’s literal gold

          academics are still arguing about it, but save this and we can revisit in 6 months for a fat ‘i told you so’ if you still care

          ai is dead as shit for anything that matters until this issue is fixed

          but at least we can enjoy soulless art while we wait for the acceleration

            • baines@lemmy.cafe · 1 day ago

              i know the current research, i know it’s going to eat your lunch

              • FooBarrington@lemmy.world · 23 hours ago

                Ah yes, and you can’t show us that research because it goes to another school? And all companies that train LLMs are simply too stupid to realize this fact? Their research showing the opposite (which has been replicated dozens of times over) was just a fluke?

                • baines@lemmy.cafe · 23 hours ago

                  no, because this is literally in development, this isn’t some 60-year-old mature tech

                  algorithms sure, nn for some narrow topics yep great, not this bullshit though

                  there is already academically accessible research on LLM issues, the major concern being hallucinations, to the point where the word bailout is starting to make the rounds in the us from these very companies

                  the argument is whether this is inherent or fixable, and a big focus is on the training

                  anyone listening to any ai company right now is a damn fool, given the obvious circular vendor bullshit going on

                  but you do you, if the market could be trusted to be sane i’d be timing it right now

                    • FooBarrington@lemmy.world · 23 hours ago

                    > no, because this is literally in development, this isn’t some 60-year-old mature tech

                    Of course, you don’t have research supporting your position because it’s still in development. So obviously we can just ignore all the papers released over the last decade+ which show the opposite of what you’re claiming - convenient!

                    > there is already academically accessible research on LLM issues, the major concern being hallucinations, to the point where the word bailout is starting to make the rounds in the us from these very companies
                    >
                    > the argument is whether this is inherent or fixable, and a big focus is on the training
                    >
                    > anyone listening to any ai company right now is a damn fool, given the obvious circular vendor bullshit going on

                    Yeah, as I expected - you literally don’t understand what this conversation is even about. Since you have a bone to pick with the industry, you make up random claims that you think make it look bad. But what you don’t realize is that you’re just making a fool of yourself by making subjective claims about topics you simply don’t understand. Critique the AI industry for the greedy, useless shit they’re doing and creating, not by making up wrong “facts” and ignoring all evidence against them.
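                    To make the actual disagreement concrete: the widely cited collapse results are about recursively replacing training data with model output, not about mixing a controlled amount of synthetic data into a real corpus. Here’s a toy sketch of the difference, with a Gaussian standing in for a model (purely illustrative, not a claim about any real training run):

                        import random
                        import statistics

                        def sample(mu, sigma, n):
                            # draw n points from the current "model" (a Gaussian stand-in)
                            return [random.gauss(mu, sigma) for _ in range(n)]

                        def fit(data):
                            # "train": estimate mean and spread from the data
                            return statistics.mean(data), statistics.stdev(data)

                        random.seed(0)
                        real = sample(0.0, 1.0, 1000)  # stand-in for human-written data

                        # Regime 1: each generation trains only on the previous one's output.
                        mu, sigma = fit(real)
                        for _ in range(300):
                            mu, sigma = fit(sample(mu, sigma, 20))
                        print(f"replace-only: sigma = {sigma:.4f}")  # shrinks toward 0

                        # Regime 2: synthetic data is added alongside the real data.
                        pool = list(real)
                        mu, sigma = fit(pool)
                        for _ in range(300):
                            pool += sample(mu, sigma, 20)
                            mu, sigma = fit(pool)
                        print(f"accumulate:   sigma = {sigma:.4f}")  # stays near 1.0

                    The first regime degrades; the second is what the replicated results describe. The open argument is about which regime real pipelines end up in, not about whether the first one is bad.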

                    And just to save us both time, I’ll try to list the positions you seem to think I hold, which I don’t:

                    • I don’t think LLMs will ever get rid of hallucinations
                    • I don’t think LLMs will get better and better by only training on output from previous LLMs
                    • I don’t think LLMs are the path to AGI
                    • I don’t think any of the marketing done by AI companies is truthful

                    If you choose to reply again and think I’m lying about not holding these positions, re-read the conversation until you understand it.