It’s not as clear-cut as this. Many of the studies that suggest this don’t stand up to scrutiny; for example, some estimate more birds killed by cats than the total estimated bird population. There’s also the fact that the majority of cats’ kills are animals that are already weak or sick, animals that would die soon either way and not really affect the total population. The only studies that seem legitimate and suggest an effect on the population come from small islands.
ganryuu@lemmy.ca to Privacy@lemmy.dbzer0.com • Airlines Sell 5 Billion Plane Ticket Records to the US Government For Warrantless Searching • 2 • 17 days ago
It always warms my heart to see older memes still being relevant and shared.
I see it more as a step towards banning a ton of content they don’t like by claiming it is porn, or porn-adjacent (for example, any LGBTQ+ content).
ganryuu@lemmy.ca to Technology@lemmy.world • Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims (English) • 2 • 1 month ago
Very fair. Thank you!
ganryuu@lemmy.ca to Technology@lemmy.world • Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims (English) • 41 • 1 month ago
I agree with the part about unintended use: yes, an LLM is not a therapist and should never act as one. However, concerning your example with search engines: they will catch the suicide keyword and put help resources before any search result. Google does it, and so does DDG. I believe ChatGPT also starts with such resources on the first mention, but, as OpenAI themselves say, the safety features degrade with the length of the conversation.
About this specific case, I need to find out more, but other comments in this thread say not only that the kid was in therapy, suggesting that the parents were not passive about it, but also that ChatGPT actually encouraged the kid to hide what he was going through. Considering what I was able to hide from my parents as a teenager, without such a tool available, I can only imagine how much harder it would be to notice the depth of what this kid was going through.
In the end, I strongly believe the company should put in place much stronger safety features, and if it is unable to do so correctly, then my belief is that the product simply should not be available to the public. People will misuse tools, especially a tool touted as AI when it is actually a glorified autocomplete.
(Yes, I know that AI is a much broader term that also encompasses LLMs, but the actual limitations of LLMs are not well known to the public, nor communicated clearly enough by the companies to end users.)
ganryuu@lemmy.ca to Technology@lemmy.world • Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims (English) • 7 • 1 month ago
I’m honestly at a loss here; I didn’t intend to argue in bad faith, so I don’t see how I moved any goalposts.
ganryuu@lemmy.ca to Technology@lemmy.world • Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims (English) • 6 • 1 month ago
Can’t argue there…
ganryuu@lemmy.ca to Technology@lemmy.world • Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims (English) • 183 • 1 month ago
That seems much more like an argument against LLMs in general, don’t you think? If you cannot make it so that it doesn’t encourage you to commit suicide without ruining other uses, maybe it wasn’t ready for general use?
ganryuu@lemmy.ca to Technology@lemmy.world • Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims (English) • 81 • 1 month ago
I get the feeling that you’re missing one very important point about GenAI: it does not, and cannot (by design), know right from wrong. The only thing it “knows” is which word is statistically the most likely to appear after the previous ones.
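The “glorified autocomplete” idea can be sketched with a toy bigram model. This is purely illustrative (the corpus and function names here are made up, and real LLMs are transformers trained on vastly more context), but the core move of emitting the statistically most likely next word, with no notion of truth or safety, is the same:

```python
from collections import Counter, defaultdict

# Tiny stand-in for training data (illustrative only).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a bigram model.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most likely next word. Note there is
    no check for whether the continuation is true, safe, or sensible."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" (it followed "the" most often)
```

The model will happily complete any prompt it has statistics for; “right” and “wrong” never enter into it.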
Are you telling me that I should have diluted some bullet material, instead of trying to start by shooting myself with a small caliber and work up my immunity from that? All this work, wasted!
To add to your point, it used to be that the village idiot was just that: known for it, and shamed or shunned. Now that they can connect to other village idiots, they can find a community of like-minded idiots that reinforces their beliefs.
Probably why they talked about looking at a stack trace: you’ll see immediately that you made a typo in a variable’s name or a language keyword when compiling or executing.
ganryuu@lemmy.ca to Technology@lemmy.world • Schools are using AI to spy on students and some are getting arrested for misinterpreted jokes and private conversations (English) • 51 • 2 months ago
Even when we go per capita, the US stays a shithole; it’s not like they were trying to actively misinform people.
ganryuu@lemmy.ca to Technology@lemmy.world • Google Gemini struggles to write code, calls itself “a disgrace to my species” (English) • 2 • 2 months ago
I’d say it’s simply because most people on the internet (the dataset LLMs are trained on) say a lot of things with absolute confidence, whether or not they actually know what they’re talking about. So AIs talk confidently because most people do. It could also be something about how they’re configured.
Again, they don’t know whether they know the answer; they just say whatever is statistically the most probable thing to say given your message and their prompt.
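A minimal sketch of that point (the words and probabilities below are invented for illustration): greedy decoding always emits some token, whether the model’s next-word distribution is sharply peaked or nearly flat, and there is no built-in “I don’t know” option:

```python
# Invented next-word distributions, standing in for a model's output.
# Peaked: the context was seen a lot, so one continuation dominates.
known = {"Paris": 0.92, "Lyon": 0.05, "Nice": 0.03}
# Nearly flat: the model is effectively guessing.
guessing = {"1912": 0.26, "1913": 0.25, "1915": 0.25, "1921": 0.24}

def decode(dist):
    # Greedy decoding: always pick the highest-probability word.
    # Nothing here can abstain or flag uncertainty.
    return max(dist, key=dist.get)

print(decode(known))     # Paris
print(decode(guessing))  # 1912, stated just as fluently as Paris
```

Both outputs read equally confident to the user; the flatness of the second distribution, the model’s only hint of uncertainty, is thrown away at decoding time.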
ganryuu@lemmy.ca to Technology@lemmy.world • Google Gemini struggles to write code, calls itself “a disgrace to my species” (English) • 172 • 2 months ago
You’re giving way too much credit to LLMs. AIs don’t “know” things, such as “humans lie”. They are basically a very complex autocomplete backed by a huge amount of computing power. They cannot “lie” because they do not even understand what they are writing.
The way I heard this joke, it continues:
A real customer enters.
He asks where the toilets are.
The bar explodes.