I like to ask a variety of questions: sometimes silly, sometimes serious, sometimes strange. I never ask to pester anyone or as "just asking questions" bait.
I’m generally curious and/or trying to get a sense of people’s views.
Is this a rare car tree, upon which the cars grow?
Why is this all so convoluted and, seemingly, legal? Is this purposely convoluted to obfuscate illegal activity?
I think separating them improves the user experience for regular users, which I think counts as a real advantage. As I wrote in the body text:
As-is, seeing a comment indicator on a post only for the comment to turn out to be a bot's is slightly disappointing at best, and mildly confusing at worst when bot display has been disabled.
It’s a small detail, but small details add up when it comes to the user experience.
Can you imagine crimes of the wilderness?
I haven’t paid interest in over a decade and have made thousands from rewards.
I’m not too familiar with credit cards. Do you mean this in a literal money sense, or something more complex, e.g. the combined value of rewards and money?
Have you seen the !politicaldiscussion@lemmy.world community? This would be a good post there as well, I think!
What’s your purpose for doing so?
Curiosity, of course!
Could you provide an example image of the sort of tote bag you’re mildly confused by?
I had been publishing articles on my own website since 2003, but I did that mostly manually by writing whole HTML pages.
Huh, so literally raw HTML? I know it’s not too difficult, but I have occasionally wondered how many small websites may have been written that way.
Appreciate the reply! It’s a cool way to view it in individual terms. I was thinking in more social terms, however, which I’ve been fascinated to find seems to be a little atypical, judging from the replies so far.
This does seem to come closer to what I was wondering about when I originally posted, good eye!
OP asks about the real-life equivalent of being AFK, which, assuming you’re normally regularly online, only really corresponds to being high or sleeping.
The funny thing is, it didn’t occur to me how vague my question was until after I posted and started seeing the replies. That’s made it more fun, tbh, and interesting too: in this context (online vs. in real life), I hadn’t really thought of being online in such individualistic terms as this and some other replies suggest.
While Lemmy doesn’t have enough people for each product category yet, have you checked out the community !buyitforlife@slrpnk.net?
There’s also !recommendations@lemmy.world for broader discussion, but it hasn’t gained much traction yet.
Do you suppose most may only be half or quarter-reading too?
Was it a matter of some good timing that these casts were able to be made? That is, with enough time, wouldn’t the voids/cavities themselves likely collapse with the gradual shifting of the soil?
The fun part is, that article cites a paper voicing misgivings about the terminology: “AI Hallucinations: A Misnomer Worth Clarifying.” So at the very least I’m not alone on this.
Yeah, on further thought, and as I mention in other replies, my thinking is shifting toward the real bug being how it’s marketed in many cases (as a digital assistant/research aid) and, in turn, how it’s used, or attempted to be used, as marketed.
perception
This is the problem I have with this: there’s no perception in this software. It’s faulty, misapplied software when one tries to employ it to generate reliable, factual summaries and responses.
It’s not a bad article, honestly; I’m just tired of journalists and academics echoing the language of businesses and their marketing. “Hallucination” isn’t an accurate term for this form of AI. These are sophisticated generative text tools that, in my opinion, lack any qualities justifying all this fluff terminology personifying them.
Also, frankly, I think students have found a better application for large language model AIs than many adults have, even the adults trying to deploy them. Students use them to do their homework and generate their papers, which is exactly one of the basic points of these tools. Too many adults act as though these tools, in their present form, should be used as research aids, but their entire generative basis undermines their reliability for that. It’s using the wrong tool for the job.
You don’t want any of the generative capacities of a large language model AI for research help; you’d instead want whatever text processing it can do to assemble and provide accurate output.
Is this part of your sibling goofing routine?