• Tar_Alcaran@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    4
    ·
    13 days ago

    Repeat after me: LLMs are never safe.

    If you can’t distinguish Data from Instructions, you are inherently doing it wrong, and it will never be fixed.