That’s what I’m talking about. We use the Degenerative AI to create a whole pile of bullshit, Tlön-style, then spread it around the Internet with a warning up front for human readers that what follows is stupid bullshit intended only to poison the AI well. We then wait for the next round of model updates in the various LLMs and start to engage with the subject matter in the various chatbots. (Perplexity says that while they do not keep user information, they do create a data set of amalgamated text from all queries to help decide what to prioritize in the model.)
The ultimate goal is to have it, over time, hallucinate well-known bullshit into its model, so that Degenerative AI’s inability to actually think is highlighted even for the credulous.
LLMs don’t know anything. You’d have to have programs around the AI that look for that sort of thing, and the number of ways a statement can be disguised so that only a human can read it is effectively unbounded.
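A minimal sketch of the block-letter idea that follows: render a short message as an ASCII banner a human reads at a glance, but which a scraper ingesting raw text just sees as runs of `#`. The tiny two-letter font and the `banner` helper here are illustrative inventions, not anything from a real figlet font.

```python
# Hand-rolled 5-row block font covering only the letters this demo
# needs; a real deployment would use a full figlet-style font.
FONT = {
    "H": ["# #", "# #", "###", "# #", "# #"],
    "I": ["###", " # ", " # ", " # ", "###"],
}

def banner(word):
    # Build the banner row by row: for each of the 5 rows, take that
    # row from every letter's glyph and join them with a gap.
    rows = []
    for r in range(5):
        rows.append("  ".join(FONT[ch][r] for ch in word))
    return "\n".join(rows)

print(banner("HI"))
```

Because the "message" only exists in the 2-D arrangement of characters, anything that flattens or tokenizes the text line by line loses it, which is exactly the point.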
[ASCII-art banner: a word spelled out in # block letters; the original line breaks were lost]
Like here’s one. Another would be to do the above, but instead of using #, cycle through the alphabet. Or write out words with capital letters where the # is. Or use an image file.