• 15 Posts
  • 87 Comments
Joined 3 years ago
Cake day: June 12th, 2023

  • Surely, the energy cost to verify the translation would be the same as translating it? If you’re struggling that much, why are you translating it at all? I cannot trust your translation.

    If you tell an LLM to generate reports, it will, regardless of the actual quality of the environment. It doesn’t know what’s secure and what isn’t. All you’ve shown is that it can convince a security analyst, one whose system is insecure enough to produce a lot of genuine reports, that the system is more secure than it actually is. Which is useless at best, detrimental at worst.

    It’s useless for translation. It’s useless for security analysis. It’s useless for rhyming (I notice you didn’t mention that one). You’re trying so hard to prove how useful it is, and your failure demonstrates how useless it is.

    You can’t condemn confident wrongness and defend LLMs. And you can’t defend the billionaire’s toxic Nazi plagiarism machine while questioning someone else’s morals. You can’t cherry-pick my argument and claim I’m the one fighting a strawman. …Well, not if you’re arguing in good faith.


  • If you know enough to verify a translation as accurate, or you have the tools to figure out an accurate translation through dictionaries or some such, then you know enough to do the translation yourself. If you don’t, then I cannot trust your translation.

    And if you can’t trust the output to be comprehensive or correct, then why would you trust something like system security to an LLM? Any security analyst who deserves their job would never take that risk. You don’t cut those corners.

    Quick reminder: rhyming dictionaries exist. LLMs solved a solved problem, but worse.

    Once again, even if the billionaire’s toxic Nazi plagiarism machine were useful, it is so morally repugnant that it should never be used, which makes it functionally useless. This is an absolute statement, but trying to “um actually” it makes you look like a boot-licker, a pollutant, a Nazi, a plagiarist, an idiot, or some combination of those.

    I would rather look like an absolutist. How about you?


    The only way to know if LLM output is accurate is to know what an accurate output should look like, and if you know that, you don’t need an LLM. If you don’t know what an accurate output should look like, an LLM is just as likely to confidently lie to you as to help you, making you dumber the more you use it. The only other situation is if you know what an accurate output should look like but you want an inaccurate one, which is a bad thing to encourage.

    “Demonstrably useful” is a lie. It’s a blatant and obvious lie. LLMs are so actively detrimental to their users, and society as a whole, that calling them useless is being generous. And even if they were the most beneficial thing on the planet, there is still no reason to use the billionaire’s toxic Nazi plagiarism machine.


  • Susaga@sh.itjust.works to Programmer Humor@programming.dev: Is Windows FOSS now?
    2 months ago

    Yes. You’re giving the companies WAY too much credit for owning the blender they throw stolen content into, and you’re even trying to give them ownership of what clearly doesn’t belong to them. I’m sure they’re just as eager to claim they did all the work and license the materials they use as you are.

    I try not to call people idiots in debates, so there’s really only one reason you’d be giving them so much support.


  • Susaga@sh.itjust.works to Programmer Humor@programming.dev: Is Windows FOSS now?
    2 months ago

    The AI company stole other people’s code, threw it into a blender, and is selling the output. They didn’t do any real work, and they don’t own the materials. They have no legal claim over the result. You do not own a car you built from stolen parts, no matter how many cars you stole those parts from.

    Stop trying to imply your buddies at AI companies have value.