
Sunday, November 16, 2025, by SocraticDev
That's the message Leo Herzog wants to get across. He's an IT specialist at a college in Michigan, and his little website gives us major nostalgia for sarcastic web apps like LMGTFY and No Hello, the ones that gently teach people who commit a tech faux pas why their behavior is annoying and how to improve it to reduce friction in their communications with us.
The recently launched "Stop Citing AI" site scratches the same itch caused by naive users of new technologies. "Let me Google that for you" (lmgtfy) taught newbies to use a search engine to answer their questions before bothering a friend or coworker with something Google could easily answer. Same deal with "No Hello," which teaches people unfamiliar with asynchronous communication tools like Microsoft Teams or Slack that you shouldn't just ping a colleague with "hello" and then wait for them to respond—instead, write a complete message detailing your request right away and let them reply when they're free.
"Don't copy-paste text from a chatbot and send it to someone as if its response is authoritative."
"Stop Citing AI" takes things a step further into the realm of argumentation theory and epistemology. Since Large Language Models and interactive tools like ChatGPT and Claude became mainstream, we've all noticed the creeping arrival of an annoying form of intellectual laziness. Someone responding to a disagreement falls back on an LLM to get an answer that supports their position. Then they send that response to you, as if it's some kind of slam-dunk argument.
Leo Herzog's website explains, in simplified terms of course, how an LLM works: it produces a plausible-sounding response by relying on statistics learned from its training data to predict the next word, one word after another.
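To make that one-line description concrete, here is a deliberately naive sketch of "predict the next word from statistics." Real LLMs use neural networks over tokens and vastly more data; this toy bigram counter (the corpus and function names are invented for illustration) only shows the core idea of picking the statistically most likely continuation.

```python
from collections import Counter, defaultdict

# Toy corpus, invented for this example.
corpus = (
    "the model predicts the next word "
    "the model learns word statistics "
    "the next word is chosen by the model"
).split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("the"))   # prints "model" (its most frequent follower)
```

The point of the sketch is the absence of any notion of truth: the model outputs whatever continuation is most probable given its data, which is exactly why a fluent answer is not evidence.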
The degradation of critical discussion
The democratization of artificial intelligence seems to have amplified our natural tendency to "want to be right" rather than use our rationality to solve problems and disagreements optimally.
And "Stop Citing AI" is first and foremost a wake-up slap we give ourselves. Those first months with ChatGPT were often great for our egos—getting responses that confirm our opinions and beliefs is fire! Today, we realize that pretty much everyone with beliefs and opinions completely contrary to ours is also getting their positions reinforced. More importantly, by learning how LLMs are built and how they work, we give ourselves a little reality check and take their responses less seriously.
Will I now send the URL "https://stopcitingai.com" to someone trying to prove me wrong with a ChatGPT response?
Nope.
First, because it's sarcastic and probably won't help untangle our disagreement. Second, because it would reveal even greater intellectual laziness on my part and would probably just shut down the discussion.
For healthy argumentation, several factors need to be in place. Both parties must genuinely want to resolve their disagreement in the way that's best for everyone. They need to recognize, first, that their relationship matters more than their disagreement. And the adversarial framing in which there can only be a winner and a loser (a zero-sum game) must be avoided entirely.
I'd recommend first evaluating whether it's worth the effort. If it's not, and your conversation partner is showing intellectual laziness by using ChatGPT to support their position, then don't engage in the discussion.
Otherwise, you need to keep your cool and recognize that a simple copy-paste of a ChatGPT response can in no way be considered an argument. The mental effort required to read and understand an AI citation compared to the effort it took your conversation partner to generate it is wildly unbalanced—don't let them act in such an outrageously unfair way.
Slowing down the exchange and getting them to take responsibility for their side of the debate seems like the right strategy: "Can you summarize in your own words what you just pasted here?"
The study of argumentation theory teaches us an important lesson: the best option is to avoid getting into argumentative debates with just anyone about just anything.
First, it needs to be an issue that's worth debating.
Ultimately, if someone shows laziness by serving you a copy-pasted ChatGPT response, chances are it's not worth your time.
Translated from French by Sonnet 4.5
Sources
You've been sent here because you cited AI as a source to try to prove something.
