• wizardbeard@lemmy.dbzer0.com
    17 hours ago

    The only way AI training is even remotely like how humans learn is if you have a very limited understanding of both how humans learn and how AI model training works.

    Also, AI image generation is largely built off datasets of images that are classified and made useful for training by what is effectively slave labor, or at least considerably underpaid third world people.

    Please stop anthropomorphizing code. We have generations of study into how the brain forms connections, various aspects of how memories are formed and what keeps some salient and others not, how different people process information and learn differently, how people build skills over time and applied effort… the list goes on. And all of AI is built off absurdly complex math in application, but not as much so in concept.
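    To illustrate the "absurdly complex in application, but not as much so in concept" point: at its core, each unit in a neural network is just a weighted sum pushed through a nonlinearity. A minimal sketch, with made-up illustrative weights and inputs:

    ```python
    # A single artificial "neuron": a weighted sum plus a bias,
    # squashed through a sigmoid. Scale this up by billions of
    # parameters and you have the "complex in application" part.
    import math

    def neuron(inputs, weights, bias):
        # Weighted sum of inputs plus a bias term
        z = sum(x * w for x, w in zip(inputs, weights)) + bias
        # Sigmoid squashes the result into the range (0, 1)
        return 1.0 / (1.0 + math.exp(-z))

    out = neuron([0.5, -1.0, 2.0], [0.8, 0.2, -0.5], bias=0.1)
    ```

    The conceptual simplicity of that unit is exactly why "it's like a brain" is a stretch; real neurons are far messier.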

    What I’m getting at is that actual scholarly information sources are out there about how humans learn and develop skills, and there are likewise scholarly sources on how AI and the underlying algorithms work (albeit it’s harder to find unbiased papers amid the current hype bubble around AI).

    It doesn’t take much effort to expose yourself to enough of both to understand that comparing AI training to human learning is an incredibly lossy analogy that doesn’t hold up under even minimal scrutiny.

    • village604@adultswim.fan
      10 hours ago

      Bro, literally all I said was that the training dataset was the same.

      I’m well aware of how generative AI works. I’m an industry professional who attended the very first AI/ML training course by a major cloud provider back in 2023 and was instrumental in changing the way they operated the AI side of the house.

      But the parallels are there. Neural networks are loosely modeled on the human brain, and vector-driven databases aren’t super dissimilar to how neurons interact. A ton of human memory and processing is based on referential data, like gen AI.
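      The "referential data" idea above can be sketched in a few lines: a vector store holds items as embeddings, and a query retrieves the closest one by cosine similarity rather than exact match. The vectors below are invented purely for illustration; real systems use learned embeddings with hundreds or thousands of dimensions.

      ```python
      # Toy vector-similarity lookup: retrieval by closeness in
      # embedding space, not by exact key match.
      import math

      def cosine(a, b):
          # Cosine similarity: dot product divided by the product of magnitudes
          dot = sum(x * y for x, y in zip(a, b))
          mag = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
          return dot / mag

      # Hypothetical 3-dimensional "embeddings" for illustration
      store = {
          "cat": [0.9, 0.1, 0.0],
          "dog": [0.8, 0.2, 0.1],
          "car": [0.0, 0.1, 0.9],
      }

      def nearest(query):
          # Return the stored key whose vector is most similar to the query
          return max(store, key=lambda k: cosine(query, store[k]))

      print(nearest([0.85, 0.15, 0.05]))  # retrieves by similarity, not exact match
      ```

      Whether that mechanism is meaningfully "like neurons" is the point under dispute, but this is the retrieval pattern being referenced.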

      Yes, it might not be able to approximate human intelligence and the actual workings of the brain yet, but it’s on its way.

      People tend to forget how computers used to take up entire floors of office buildings and a simple hard drive was the size of an industrial washing machine. Gen AI is in its infancy, and while its current state is highly inefficient and flawed, that’s all the more reason to expect it to improve.

      Is current gen AI a solution for anything? Absolutely not. Is it a stepping stone towards true artificial intelligence? Absolutely.

      I swear, Lemmy users would have been the people complaining that Excel and Word would be the downfall of society. There are absolutely legitimate complaints about the technology, but to completely dismiss it without looking at the nuances of the situation is asinine.

      The truth is that it’s entirely possible for a business to ethically use an LLM. I know this for a fact because I was intimately involved in the implementation of one. The entire thing was trained on our proprietary dataset that had been built over 40 years of industry experience, on our servers which were powered by green energy.