Latest new exhibits in the LLM-Generated Garbage hall of shame
Machine-Generated Garbage Hall of Shame: “What these bots are designed to do is essentially a matter of statistical programming, and presenting them as reliable sources of information can be misguided, foolish, exploitative, or even dangerous, as demonstrated by the examples on this list.”
Similarly, AI Hallucination Cases: “This database tracks legal decisions in cases where generative AI produced hallucinated content – typically fake citations, but also other types of arguments.”
Not to be confused with cases that are themselves about AI hallucinations: “A solar firm in Minnesota is suing Google for defamation after the tech giant’s shoddy AI Overviews feature allegedly made up wild lies about the company — and significantly hurt its business as a result.”
“The unreliability and hallucinations themselves are the hook — the intermittent reward, to keep the user running prompts and hoping they’ll get a win this time. This is why you see previously normal techies start evangelising AI coding on LinkedIn or Hacker News like they saw a glimpse of God and they’ll keep paying for the chatbot tokens until they can just see a glimpse of Him again. And you have to as well. This is why they act like they joined a cult.”
“Executives and directors from around the world have called me to say that they can’t fund any projects if they don’t pretend there is AI in them. Non-profits have asked me if we could pretend to do AI because it’s the only way to fund infrastructure in the developing world. Readers keep emailing me to say that their contracts are getting cancelled because someone smooth-talked their CEO into believing that they don’t need developers.”
My website host, Siteground, has been trying to shove AI hype into their services lately. I can’t help wondering how many customers are actually asking for this, versus how many VCs and managers are insisting they’ve gotta be on the bandwagon. Especially given my fun new personal experience of bringing a problem to their customer-service LLM, where its very first response included a hallucination — advising me to change a nonexistent setting it just made up.