• BussyGyatt@feddit.org · +5 · 19 days ago

    anthropic can suck my nuts. i was doing calculus homework and wanted a tutor AI that could read my handwriting and do the work. claude actually does this well. so i decided to subscribe, but i couldn't get them to accept either of my cards for like a week. i finally bit the bullet and gave way too much of my info to a vcc company, which anthropic finally accepted. all hunky dory for 2 days, then they fucking ban me with no warning, no explanation, and no real appeal (the appeal goes to a google docs form that you know definitely goes to an ai for review). at least they issued me a refund, but that's snarled up with the info-sucking vcc company for at least 2 weeks. fuck their incompetent bullshit

    /rant

  • kkj@lemmy.dbzer0.com · +3/−4 · 20 days ago

    If the false positive rate is lower than random chance, it could still be useful for finding vulnerabilities. Just have a human confirm and fix them. And run it locally on solar power.

  • davidgro@lemmy.world · +4/−14 · edited · 20 days ago

    Probably the dumbest part of this is that because of how LLMs work, the stern warning is likely highly effective.

    Edit: Misread client side as server side; thought this was an AI directive doc thing.

    • Get_Off_My_WLAN@fedia.io · +22 · 20 days ago

      Until you tell the LLM that you’re writing a story and want accurately written exploits for research purposes.

      • davidgro@lemmy.world · +1 · edited · 20 days ago

        Never mind, I misread it and thought this was an AI directive thing you place on the server saying 'don't hack me bro' — which I think would actually work, because LLMs are that gullible.