• pagenotfound@lemmy.world · 14 days ago

    We’re truly in a dystopian future when big tech nerds are doing mafia hits. Reminds me of the guy in Better Call Saul who hired Mike as a bodyguard.

  • Ganbat@lemmy.dbzer0.com · 14 days ago

    Police say it appears to be a suicide. Probably true, honestly, but that doesn’t mean he wasn’t driven to it.

  • calcopiritus@lemmy.world · 14 days ago

    When the working class kills a CEO, the FBI puts up a reward and the killer is found within a week. When a company does it, the world is silent.

  • elucubra@sopuli.xyz · 14 days ago

    Whistleblower deaths should get all of a company’s directors investigated by default. It may be that 99% of them are innocent, but just one or two who see their massively valuable stock and options in danger may be driven to such actions on their own.

    • Jack@slrpnk.net · 13 days ago

      The police are too busy shooting people’s pets and being scared of acorns to investigate something like this.

    • chingadera@lemmy.world · 14 days ago

      I didn’t even read the article. I just barely skimmed it and guess what I found within 2 seconds.

      “Balaji’s death comes three months after he publicly accused OpenAI of violating U.S. copyright law while developing ChatGPT, a generative artificial intelligence program that has become a moneymaking sensation used by hundreds of millions of people across the world.”

    • Jimmycakes@lemmy.world · 14 days ago

      He had hard proof that ChatGPT was trained on copyrighted work, opening them up to lawsuits from the copyright holders and basically collapsing the whole company.

    • phoneymouse@lemmy.world · 14 days ago

      You don’t even need “hard” proof. The mere fact that ChatGPT “knows” about certain things indicates that it ingested certain copyrighted works. There are countless examples. Can it quote a book you like? Does it know the plot details? There is no other way for it to have gotten that information.

      • zqps@sh.itjust.works · 14 days ago

        The issue is proving that it ingested the original copyrighted work, and not some hypothetical public copyleft essay.

      • sean@lemmy.wtf · 9 days ago (edited)

        Facts aren’t protected by copyright. Regurgitating facts about a thing is in no way illegal, even if it’s done by an AI and the facts came from ingested copyrighted material. I can legally make a website dedicated to stating only facts about Disney products (all other things being equal) in response to my users’ questions.

        • phoneymouse@lemmy.world · 9 days ago (edited)

          I think you’re missing the point. We are talking about whether it is fair use under the law for an AI model to even ingest copyrighted works and for those works to be used as a basis to generate the model’s output without the permission of the copyright holder of those works. This is an unsettled legal question that is being litigated right now.

          Also, in some cases, the models do produce verbatim quotes of original works. So it’s not even like we’re just arguing about whether the AI model stated some “facts.” We are also asking: can an AI model reproduce an actual copyrighted work verbatim? It’s settled law that humans cannot do that except in limited circumstances.

          • sean@lemmy.wtf · 9 days ago (edited)

            The mere fact that ChatGPT “knows” about certain things indicates that it ingested certain copyrighted works.

            This is the bit I’m responding to. The “mere fact” you propose is not copyright infringement, per the facts I’ve stated. I’m not making claims about any of your other statements.

            Verbatim reproduction may be copyright infringement, but that wasn’t your original claim that I quoted and am responding to (I didn’t make that clear earlier, that’s on me).

            “Apologies” for my autistic way of communicating (I’m autistic).

            • phoneymouse@lemmy.world · 9 days ago (edited)

              I think you’re using the word “fact” in two senses here.

              I am arguing that ChatGPT and other AI models were created using copyrighted works. My “proof” is the “fact” that they can reproduce those works verbatim, or state facts about them that could only have come from the original copyrighted work (or a derivative work that used the original under fair use).

              Now the question is: is it fair use under copyright law for AI models to be built with copyrighted materials?

              If it is considered fair use, I’m guessing it would have a chilling effect on human creativity: no creator can count on making a living if their style of work can be reproduced so cheaply without them once an AI has been trained on their works. It would then become necessary to revisit copyright law and redefine fair use so that we don’t discourage creators. AI can only really “remix” what it has seen before; if nothing new is being created because AI has killed the incentive to make new things, it will stagnate and degrade.

    • Grimy@lemmy.world · 14 days ago

      It was more of an opinion piece. They were already being sued and he didn’t bring any new info forward from what I understand.

  • werefreeatlast@lemmy.world · 14 days ago

    Obviously suicide, because it happened at his house/apartment. Because who else would suicide himself in his apartment, right? I wouldn’t go trying to figure out how it happened, like with fingerprinting surfaces or sniffing dogs or checking cameras. Why would sniffing a dog even help?