
Kopi Luwak Internet Theory

Published:  at  08:20 AM

I’m back from my unplanned hiatus of the past few months. Had something of a writer’s block that I’ve mostly recovered from.
Fair warning: I cut loose in this post and swear a few times. All opinions are my own and do not represent those of anyone associated with me in a personal or professional capacity.

As it does every so often, the topic of the internet’s enshittification came up in the group chat. It’s starting to feel like the internet is less dead and more embalmed in the vile, soulless glaze of AI-fuelled capitalism. We termed it “kopi luwak internet theory”. The internet has been digested by a fucking ugly-looking mammal and excreted into the woods to die in the cold leaf litter. The ugly mammal in this instance would be the grotesque cabal of generative AI frontier model providers and their twisted creations.

Bots now account for over 51% of internet traffic. Part of this rise has been AI content scrapers stealing the last remaining vestiges of original human thought to feed to their deus ex machina. Amoral critters devoid of any ethical consideration or basic human empathy are now publishing entirely AI written content in a torrential flood of chunky bin juice.
It’s drowning every place humans used to share their creativity online. The Kindle store is a cesspit of plagiarism and slop, book publishers have stopped taking submissions due to the tsunami of frothy excreta being forcefully squirted directly into their inboxes, and artists are being accused of passing off AI slop as their own (and how exactly do you prove a negative to clear your name? Every scrap of digital evidence you’d use can be generated with AI). At work, we are now dealing with a rising tide of unedited slop documents and slop code that makes one’s eyes bleed with rage.
In every gathering place online, the stench of AI writing, AI art, AI everything is like a C. diff fart in a crowded and locked elevator. It makes one envious of Helen Keller, both before and after death.

My student brings me their essay, which has been written by AI, & I plug it into my grading AI, & we are free!
— Slavoj Žižek

There is hope though - more and more people are using the machines to summarise the slop. So no-one’s writing with actual human thought anymore, which is fine, because no-one’s reading the soulless simulacra anyway. Finally, we are free. Unfortunately, what we’ve just freed ourselves from was not a trap, but a worthwhile wealth of activities that make life worth living.

The rise of AI-generated text, images, audio, and video has begun to poison the well of human attention and participation online. It’s doing so at a scale and speed where the resulting damage starts to resemble chugging hot Drano with the aid of a pump. No aspect of human life online is safe from the fetid, slimy tentacles of AI.

Even our evening meals aren’t safe from this toxic surge. Recipe bloggers have seen up to an 80% drop in traffic since the rise of LLMs.1 The odds are good that the slop recipes surging through social media don’t produce anything edible. After all, they lack the one thing that recipe blogs and books were actually selling - the assurance that a human had actually cooked the fucking food in the first place.

The vulgar beast that is AI is an all new type of predator prowling in the dark forest of the internet. This monstrous animal isn’t just slouching towards us for our money (like so many of the forest’s carnivores are). It’s coming to take your creativity, your work, your ability to connect with other people online, and your ability to think - a cognitive hazard preying on those of us who are actually putting in the effort to contribute something of value to the creative corpus of humanity.

Cognitive vampires

Like a vampire (Bela Lugosi or Christopher Lee, not some pathetic-ass glitter bomb casualty), using LLMs drains our ability to think. Your ability to marshal an argument, structure your thoughts, and communicate without looking like you’re cosplaying a role from Idiocracy depends on actually practicing these skills.
For any desirable creative enterprise, from art to writing, the difficult process of creation is the point. The talentless promptstitutes summoning slop from the bots have rifled through their own pockets, and thrown out the only thing of value in a creative practice (you know, the actual practice part). Some of these cognitively impoverished individuals pay for the privilege of being robbed.

We’ve had the current evolution of LLMs since November 2022, when Satan’s favourite finger puppet, Sam Altman, dropped ChatGPT on us. The studies on the effects of the bullshit hose are now starting to arrive in force, and it’s not looking good. Study after study is finding that LLM use is eating away at our ability to think and learn like a particularly virulent cancer.

Searching for information by asking an LLM has become a fairly common use case. It’s great, ain’t it - you don’t have to expend any effort at all, and the answers you want are just vomited into your lap. Except that the lack of effort is what’s most harmful about using an LLM versus actually doing some legwork to find what you need. A recent study found that “LLM use also suppressed participants’ reported depth of learning on the topic compared with traditional web search”.
It’s not just the participants’ reported depth of learning. The study asked participants to write a piece using what they’d learned. The result? “…content of advice written after searching through ChatGPT (vs. Google) contained linguistic markers suggestive of shallower learning”.2
LLM use is quite literally making us stupider. Instead of seeing Idiocracy as a cautionary tale, the techbros saw it as a challenge. They’ve been beavering away amid the brimstone and sulphurous air of Silicon Valley’s echo chamber to make it a reality.

It’s not just the artefacts of thinking that show signs of LLM-induced decay. It causes measurable changes in our wetware. An MIT Media Lab study on the effects of LLM use concluded (fairly logically) that “Cognitive activity scaled down in relation to external tool use”. The study used essay writing as the vehicle of cognitive effort and examined three groups - LLM-assisted, search-engine-only, and brain-only (aka no external tools).
It used not only the essays written by participants but also EEG scans to examine this effect. The final conclusion was bleak, to say the least - “Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels.”3

Now I let it fall back in the grasses.
I hear you. I know this life is hard now.
I know your days are precious on this earth.
But what are you trying to be free of?
The living?
The miraculous task of it?
Love is for the ones who love the work.

— Joseph Fasano, written for a student who used AI to write a paper

Personally, I enjoy the research and learning process too much to give it up. The journey you inevitably go on reveals more than just the answers you sought in the first place. This wealth of extra context on the surrounding subject area has informed my work and writing again and again.
Curiosity is not a sin, nor is it a waste of your time. By reading more than just the answer you sought, you will learn something that changes your question entirely (regardless of whether that question was “what to have for dinner” or “how do I set up Apache Kafka on AWS”). If you want to be outpaced by those of us who enjoy learning, by all means keep using an LLM. My job security is assured when my competition is paying hundreds of dollars to make themselves worse at their jobs. I even get the rare gift of being able to enjoy my work. Can you say the same?

Personal experimentation

“It is no measure of health to be well adjusted to a profoundly sick society”
— Krishnamurti

The cognitive effects of AI use are not just something I read in a study. My workplace pays for an LLM, and I’ve trialed it as a sledgehammer, a scalpel, and a research tool. The result is the same: it generates landmines. The code it writes looks superficially correct, but it’s hiding all kinds of trouble just waiting to be stepped on, usually by someone else.
I don’t enjoy reviewing the output of the tool. I enjoy the thinking, the creative effort involved in designing a system and implementing it. In all my usage of these tools, they have left a hollow feeling, a blank space in my memory where the knowledge and skill gained by actually doing my job would otherwise be inscribed. In short, it is a trade in which I am giving up the parts of my job that I enjoy and am good at for a hollow simulacrum of competence. To top that off, I’m still responsible for whatever the bot explosively sharted into the repository. Why in fuck would I want to use this again?

Look at us. Look at what they make you give.
— Jason Bourne, The Bourne Ultimatum (2007)

The fact that AI is currently freely available is not a signal of safety. Cocaine and morphine used to be freely available in cough syrup; something isn’t healthy just because you can buy it at the corner store. At this point, ChatGPT has been linked to multiple suicides and episodes of psychosis. It’s damaging students’ ability to learn. At what point do we stop and say “it’s too much”?

Escaping the vulgar beast

At this point, having painted a bleak picture, it’s good to remind ourselves that predators are not the only things in the forest. Glazed and embalmed in slop as it is, the open internet and the zaibatsu that run the five largest sites are not the only places people exist in the digital realm. Much like a forest is more than wolves and bears, so too the online world contains multitudes.


“Listen bro, if you send me slop again, I’m gonna light a cigarette and get medieval”

It’s possible to find and build spaces unmolested by the rough beast. Unfortunately, the wave of AI has moved with such speed that we’re having trouble catching our breath, let alone mounting a useful defence. The challenge for modern technologists is solving the problem of poisonous AI slop in spaces that should have been left sacred for human thought and creativity. A good start might be exploring the use of Dunbar’s number to build small group spaces where you have a higher degree of trust in the people in them.

A bandaid solution I’ve been using is to create a lens with kagi.com that restricts search results to those that existed pre-November 2022. It’s not something I use all the time, but it’s handy for finding information on designing and creating game AI without needing to wade through the sweaty drooling of the LLM fetishists. For general searching, I highly recommend Kagi as well; it’s the least shit search engine in the age of AI.
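The lens itself is configured through Kagi’s UI, but the core idea of the date cutoff can be sketched generically. A minimal sketch, assuming you have a list of results with known publication dates (the URLs and the `pre_llm_only` helper are made up for illustration; this is not Kagi’s implementation):

```python
from datetime import date

# Cutoff around ChatGPT's public release (November 2022).
CUTOFF = date(2022, 11, 30)

def pre_llm_only(results):
    """Filter (url, published) pairs to those published before the cutoff."""
    return [(url, pub) for url, pub in results if pub < CUTOFF]

pages = [
    ("https://example.com/boids-steering-writeup", date(2019, 5, 1)),
    ("https://example.com/ten-best-recipes", date(2024, 2, 10)),
]
print(pre_llm_only(pages))  # only the 2019 page survives the filter
```

It’s a crude heuristic - plenty of good writing postdates November 2022 - but as a coarse filter it reliably excludes the bulk of the slop wave.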

Immaturity is the inability to use one’s understanding without guidance from another. This immaturity is self-imposed when its cause lies not in lack of understanding, but in lack of resolve and courage to use it without guidance from another. Sapere Aude! “Have courage to use your own understanding!”—that is the motto of enlightenment.
— Immanuel Kant, What is Enlightenment?, 1784

I do know that the most immediate thing you can do is stop using AI, or be extremely discerning about when and how you use it. Such a thing is easier said than done. LLMs are like a slot machine: the siren’s call of “just one more prompt bro, trust me” to get a working result can be hard to quit, especially if you have a deadline. It’s worth the effort though, for your mind and your wallet. Sapere Aude, as the little professor said.

Concluding thoughts

I know this piece might read like the start of my villain arc. However, I’d like to be clear - what we can do with most of the latest batch of AI is nothing short of amazing. Star Trek-style voice commands (or better) are now a reality waiting to reach the mass market.
It’s now within our reach to create ambient interfaces that just work, removing stumbling blocks for the differently abled - those with low vision or hearing issues, for example. Imagine a future in which we can seamlessly patch over the cruelty of chronic illness and accident with wearable ambient tech. We are now able to leverage machine learning to make breakthroughs in medical research that will save lives.

It is also true that this new technology can be incredibly dangerous, for all the reasons I’ve expounded on in this post and more. The bright and rosy future I’ve just described is overwhelmingly not what we’re doing with the contents of Pandora’s Box. Instead, we’ve created an addictive bullshit fountain that replaces our ability to think and to connect, and deployed it wholesale simply because there’s a quick buck to be made. The Canadian media theorist Marshall McLuhan is credited with saying that “we shape our tools, and thereafter our tools shape us”.

What exactly are we letting ourselves be shaped into just so some upper-class American fascists can increase their dragon’s hoard of wealth by a few more percentage points?

References

Footnotes

  1. Alba, D. and Arroyo, C. (2025). AI Slop Recipes Are Taking Over the Internet — And Thanksgiving Dinner. [online] Bloomberg.com. Available at: https://www.bloomberg.com/news/articles/2025-11-25/ai-slop-recipes-are-taking-over-the-internet-and-thanksgiving-dinner [Accessed 5 Dec. 2025].

  2. Nataliya Kos’myna (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task – MIT Media Lab. [online] MIT Media Lab. Available at: https://www.media.mit.edu/publications/your-brain-on-chatgpt/.


