

There was once a business facing problems that negatively impacted them.
“I know,” said the CEO. “Let’s use AI!”
Now they have two problems that negatively impact them.
My Dearest Sinophobes:
Your knee-jerk downvoting of anything that features any hint of Chinese content doesn’t hurt my feelings. It just makes me point and laugh, Nelson Muntz style, as you demonstrate time and again just how weak American snowflake culture really is.
Hugs & Kisses,
张殿李
P.S.:
There was once a business facing problems that negatively impacted them.
“I know,” said the CEO. “Let’s use AI!”
Now they have two problems that negatively impact them.
One further point: even if the prompt is a “poem” (since literally anything can be a poem these days¹), there is nothing preventing someone from reciting that poem out loud, or typing it into a conversational medium. Copyright stops others from making money from your work uncompensated; it doesn’t stop them from reciting it, otherwise every time you hummed the riff to “Iron Man” you’d get your ass sued.
¹ Case in point:
I addressed that elsewhere above. The prompt can be legally protected by copyright, except insofar as it might be viewed as too trivial to deserve that. (What can and cannot be protected by copyright is very slippery; there has to be an element of creativity in what’s being protected, so you can’t copyright “I’ll have the egg salad” and prevent people from ever using that phrase when ordering sandwiches.)
But what you generate with the prompt can’t be, because it’s not creative output from a human being. And if your prompt is sufficiently trivial, it can’t really be protected by copyright either. Courts have a tendency to go with what things are, not what they’re labelled as. In the USA, where courts tend to side with billionaires over actual human beings, the risk is higher that the courts will make a stupid ruling, but thankfully the USA’s laws aren’t extraterritorial, no matter how much they try to make them be.
I think the problem here is that we were vociferously agreeing. I was pointing out where the conflation lies but agreeing with you that LLMs are not “artificial intelligence” in the way that normal people think of the term.
The real problem with LLMs isn’t that they “passed” the Turing Test. The problem is that humans failed it.
Yes. The point being that primary sources can only be trusted when telling the truth is in their perceived interest.
ChatGPT understands absolutely nothing. If you think it does, I think the bias check may need to be on your side of the table.
Primary sources are unreliable too. Ask, say, Dole about its history of killing workers who strike for more money…
A very scary number of people do.
In Summer 2024 I was in Canada visiting family. On the way I stopped by Whistler and went up with my SO in the … whatever you call those suspended car things on wires that take you up mountains. (Too lazy to look it up.) A bunch of people in their 20s were asking ChatGPT about things to do in Ottawa (apparently their next stop) and calling out the things that it was recommending.
I recognized about 1/3 of the named restaurants, specifically, as places that had closed. But these people were asking ChatGPT about them and thus getting hallucinated information.
The problem is conflating the technical term “Artificial Intelligence”, which names a specific field of study that is just incredibly poorly named (the name came from the aspirational beliefs of the field’s early founders), with what common conversation means by artificial intelligence.
In a very technical sense of topical taxonomy it is correct to call LLMs “Artificial Intelligence”.
In any other use of language, calling LLMs Artificial Intelligence is falling for a scam.
ChatGPT didn’t get the information somewhere, because ChatGPT doesn’t have “information” in the first place. What ChatGPT has, in effect, is a large statistical model of sentence structures, and it follows weighted-random paths through that model to produce its output. Somewhere in its language structures “5%” sits near “fees”, and it happened to take that path this time.
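To make that concrete, here’s a toy sketch (Python, with completely made-up numbers; a real LLM has billions of learned weights and a far more sophisticated architecture, but the generation principle, a weighted-random walk over language statistics, is the same):

```python
import random

# Toy stand-in for a language model: bigram counts, i.e. "which token
# tended to follow which token". Every number here is invented purely
# for illustration; there are no facts in this table, only adjacency.
bigram_counts = {
    "transaction": {"fees": 8, "volume": 2},
    "fees": {"of": 6, "apply": 4},
    "of": {"5%": 5, "2%": 3, "10%": 2},
}

def next_token(token: str) -> str:
    """Weighted random choice over whatever followed this token."""
    candidates = bigram_counts[token]
    tokens, weights = zip(*candidates.items())
    return random.choices(tokens, weights=weights)[0]

token, output = "transaction", ["transaction"]
while token in bigram_counts:
    token = next_token(token)
    output.append(token)

# Some runs print "transaction fees of 5%"; others wander elsewhere.
# Nothing in here "knows" any fee schedule; "5%" merely sits near "fees".
print(" ".join(output))
```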
Ironically copyrighting the prompt is legally sound. The prompt is the only work of human creation in the process.
The output of the prompt should not be protected by copyright, though.
That was covered by the court’s generation of more images using the same model and prompt. A poem is an entire work; it is the artwork. If you type the poem twice you get the same poem:
偏见蒙眼盲
无知恨中华
若知龙之美
悔恨泪满颊
偏见蒙眼盲
无知恨中华
若知龙之美
悔恨泪满颊
See?
But if I write the same prompt into, say, Perplexity twice watch what happens: “Write me a jueju in Classical Chinese that criticises people who use AI slop and call it art. Make sure that the contrast point is in place. Then translate that jueju into English, focusing more on the poetic qualities than literal translation.”
First run:
## 绝句:讥AI伪艺
画屏新巧出机声,
点墨无魂似鬼行。
人手一挥真意在,
虚光何敢唤丹青。
### English Poetic Translation
On screens, new wonders birthed by code appear,
Yet soulless strokes drift ghostlike, insincere.
A human hand—one gesture, spirit shown;
How dare mere light call itself art alone?
Second run:
**绝句**
画意千年笔下生,
今朝机器乱涂成。
人心未动空称艺,
一片虚声满世鸣。
**English Translation**
For ages, art was born from living hands—
Now, soulless engines smear the canvas dry.
No heart has stirred, yet “art” is what they claim;
Empty applause, a hollow world’s reply.
There’s barely any resemblance between the two poems, despite the prompt being identical. And here’s the thing: that prompt is approximately the same size as the poem. Could you craft a prompt so precise that the output is a direct match for exactly what you intend to communicate? Maybe. (I personally doubt it, but I’m not going to come out and say it’s not possible.) Even if you could, though, creating a prompt that generates poems that are all very close to each other is, I suspect, a lot more work than, you know, just writing a fucking poem would be.
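If you want to watch the non-determinism happen yourself, here’s a minimal sketch. It assumes the `openai` Python client and an illustrative model name rather than whatever Perplexity runs under the hood; the point is only that any temperature above zero means the model samples among plausible next tokens instead of looking anything up, so the identical prompt yields different poems:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

prompt = (
    "Write me a jueju in Classical Chinese that criticises people who "
    "use AI slop and call it art. Make sure that the contrast point is "
    "in place. Then translate that jueju into English, focusing more "
    "on the poetic qualities than literal translation."
)

for run in (1, 2):
    reply = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any hosted chat model behaves alike
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # nonzero temperature: sampled, not deterministic
    )
    print(f"--- run {run} ---")
    print(reply.choices[0].message.content)
```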
Personally I think this is a really good acid test that the judges devised to decide how much “human input” there was in the work to see if it qualifies as “art”. Kudos to them!
When have voters ever had anything meaningful to say? I mean the system is built to minimize their input!
Prompt engineer.
It’s time to turn that ridiculous term into a slur.
But … “moron” is already right over there.
It very much depends on which specific front-end you use. In February of 2025 I had DeepSeek insist that any speculation about Donald Trump being president was “future speculation”, for example, because its training data cut off at … I think October 2024?
The ones that factor search into the prompt are better at keeping up to date, but even then I had Perplexity argue with me that there were no devastating fires in Jasper, so it was just fine to go travelling there.
I disagree with him. It is not like a heartless monster. It is a heartless monster.
I think we can safely assume that any woman with public photos who says anything that someone, somewhere gets offended by will be deepfaked with porn.
Whose statistics? Reviewed by whom?
I’ll need citations here, complete with methodology, questionnaires used, if any, and the raw data. Oh, and did you do a literature search to see if someone has already assessed and found wanting the conclusions of this purported study? I don’t want to reinvent the wheel after all.
Or, as is far more likely, did you find a “statistic” that suited your bias and stick with it?