• 0 Posts
  • 31 Comments
Joined 8 months ago
Cake day: March 20th, 2024


  • I recently read an article about a doctor who was making the case that the issue is not that those 1 in 5 are “neurodivergent”, but that our current society is causing harm. When he sees ADHD symptoms, his first “treatments” are proper nutrition, making sure patients feel like they’re doing meaningful things in life, getting enough exercise, etc…

    I’m also starting to wonder whether, in part, we’re just medicating people to “thrive” in a society that’s inhuman, rather than making society work for as many people as possible.

    But it’s of course a very complex & grey area, and let’s be honest, something as vague as ADHD probably encompasses a lot of different causes. It’ll probably take decades of research before we actually manage to split up everything that is lumped together today into separate conditions, each with its own proper treatment.


  • racemaniac@lemmy.dbzer0.com to Science Memes@mander.xyz · Honey · 18 days ago

    I’d say the issue is this: if honey isn’t vegan because you’re causing harm to bees, isn’t most modern vegetable agriculture at least equally harmful to bees & other insects, due to all the pesticides being used?

    Or is it just that directly involving bees is bad, while inflicting greater harm in a less direct way is acceptable?







  • He just means it’s been all over the tech internet lately, and he has a point.

    Of course not everyone knows everything, but this and the Humane AI Pin have been featured everywhere, as they’re the first companies bringing LLM-focused AI products to market. They generate a lot of hype, a lot of critical articles, and a lot of YouTube videos & investigations.

    Not hearing about the Rabbit R1 while following tech news over the past month was harder than winning Whamageddon during Christmas time. So I get his surprise, and I don’t think his reply was mean-spirited; it was genuinely hard to avoid hearing about it.






  • I’m kind of wondering what forums you visited.

    What is a recurrent issue with young people on forums, however, is asking questions that have already been answered a million times. On sites like Reddit & Discord that’s the norm: new content is needed all the time, and the 526th person asking the same thing just keeps the social media machine going.

    On forums, however, the etiquette is that you put in some effort yourself, and something that gets asked that often is covered either in a sticky or in a long-running thread with all the information you could possibly want (but you’ll need to invest some of your own time to dig it out). And if you then arrive on the forum, read nothing, and ask the same question… again… yeah… you won’t be welcomed with open arms.





  • I’m not saying that because chess engines became better than humans, LLMs will become conscious; I’m just using that example to say humans always have this bias of framing anything that is not human as inherently lesser, while it might not be. Chess engines don’t think like a human does, yet they play better. So for an AI to become conscious, it doesn’t need to think like a human either; it just needs some mechanism that ends up with a similar enough result.


  • racemaniac@lemmy.dbzer0.com to Fuck AI@lemmy.world · Timmy the Pencil · 5 months ago

    The problem I have with responses like yours is that you start from the principle “consciousness can only be consciousness if it works exactly like human consciousness”. Chess engines initially had the same stigma: “they’ll never be better than humans since they can only calculate; no creativity, no real analysis, no insight, …”.

    As the person you replied to said, we don’t even know what consciousness is. If you define it as “whatever humans have”, then yeah, a conscious AI is a loooong way off. However, even extremely simple systems, when executed on a large enough scale, can result in incredible emergent behaviors. Take Conway’s Game of Life: a very simple system of black/white dots in a grid that ‘reproduce and die’, with just 4 rules governing how the dots behave. By now we’ve got self-reproducing patterns in there, implemented Turing machines (meaning anything a computer can calculate can be calculated by a machine inside the Game of Life), etc… (see the little sketch at the end of this comment).

    Am I saying that GPT is conscious? Nope, I wouldn’t even know how to assess that. But responding with “it’s just a text predictor, it can’t be conscious” feels like you’re missing soooo much of how things work. Extremely simple systems at a large enough scale can result in insane emergent behaviors, so it being just a predictor doesn’t exclude consciousness.

    Even us as human beings, looking at our cells, our brains, … what else are we than also tiny basic machines that somehow at a large enough scale form something incomprehenisbly complex and consious? Your argument almost sounds to me like “a human can’t be aware, their brain just exists out of simple braincells that work like this, so it’s just storing data it experiences & then repeats it in some ways”.