Imitation, I'm told, is the sincerest form of flattery. With that in mind, allow me to steal a trick from my friend and mentor (and boss), Ed Zitron, by including a soundtrack with this post. And, given the morose subject of this newsletter, what better than Caledonia's most depressed songwriter, Malcolm Middleton?
Story Time — Malcolm Middleton and David Shrigley
I've been feeling pretty disheartened lately. Do you ever fall into a black hole where everything feels terrible, and there's no obvious way that things could get better, and the only thing you're really able to say is "fuck it?"
Apathy isn't quite the right word. It's more like a feeling of resignation — that the world's brokenness, and its lack of fairness, is so great and so immutable that any screaming feels… well… like screaming. Impotent screaming that, while cathartic, is ultimately pointless.
The tech industry, I believe, directly contributes to those feelings of hopelessness.
Over the past few years, the tech industry's obsession with control, and its amoral pursuit of AI and outsourcing at the expense of society writ large, has virtually eviscerated the entry-level rungs of the career ladder. The tech industry is actively building products that are designed to be uncontrollable and incomprehensible, and that appeal to our worst impulses for cruelty.
Those are all, obviously, bad, and they all tie into one unavoidable truth — that the tech industry genuinely doesn't respect its users, or society, or human beings at large, and nowhere is that more obvious than in the proliferation of generative AI.
The Price of Contempt
If you could sum generative AI up in two words, it would be: "good enough." And before you reach for the little ‘x’ on the tab for this article, let me be clear — that isn’t a compliment. Quite the opposite.
Ask ChatGPT or Claude to write something for you, and it'll produce something that's ostensibly readable, but scratch beneath the surface and you'll find a shallow analysis peppered with hallucinated facts, figures, and citations. The writing style will be bland, generic, and flavorless — the unseasoned oatmeal of content.
Good enough, but not quite good.
Ask Midjourney or DALL-E to draw you something, and it'll kinda resemble the thing you asked for, but it'll be… weird, in some hard-to-place way. Maybe the fingers will be off. There'll be some writing in the background, except the letters will be all garbled — or won't look like characters from any known alphabet.
Good enough, but still, not actually good in any objective sense.
You're trying to focus, so you click on an instrumental jazz playlist on Spotify. After the tenth track, you notice something weird. All these songs are different — they may have different phrasing, or melodies, or tempos — but at the same time, they all sound the same. That's when you realize that all of the songs you’re listening to are AI-generated.
Those songs were good enough, insofar as they allowed you to tune out distractions and focus on the task at hand, but they were nonetheless soulless and artificial. You're not even sure if they count as "music."
Your Facebook Reels and TikTok feed have, overnight, become filled with semi-photorealistic videos produced by Google's new Veo3 model. These videos all make the same cheap, lazy, opportunistic jokes about disabled people, or women, or ethnic minorities that only a moron would find entertaining.
At first glance, these videos look real. They're good enough. But they're still fucking terrible.
It's 11PM and your ADHD meds have finally worn off. You find yourself in a Wikipedia rabbit hole, clicking link after link, until finally you stumble upon something that truly catches your attention. Your curiosity piqued, you open YouTube and type the name of the article into the search bar.
The first result is, by now, an all-too-familiar AI voice reading out a script that you’re certain was generated with ChatGPT. You click another video. It’s the same voice. The same fucking voice.
I believe that people deserve to listen to music, read words, and watch videos that are better than just "good enough." I believe that “good enough” is, in fact, pretty bad. And I believe that even the worst piece of shit created by a person has infinitely more soul than something created by a probabilistic AI model.
This is an opinion that, regrettably, isn't shared by the tech industry, which is spending hundreds of billions to shove generative AI down our collective throats, no matter what people actually think.
This is the ultimate insult. Big tech is so confident that you're stupid, that you're venal, that you're willing to settle for stuff that's only "good enough" in the most superficial of ways, that it's betting nation-state-sized piles of cash to prove it.
Last October, Microsoft said it plans to spend $80 billion during the 2025 financial year to construct the infrastructure required for the mass deployment of generative AI systems. Those figures are now in doubt, with Redmond having cancelled some major data center contracts — something that almost never happens — but whatever the figure ends up being, you can be certain it'll be a massive, multi-billion-dollar sum.
In February, Google put its planned generative AI capex at $75bn. Amazon said it'll spend around $100bn. Meta plans to spend $72bn. Oracle expects to spend $30bn.
If we add up all those numbers — $80bn, $75bn, $100bn, $72bn, and $30bn — we get $357bn before we even count the planned spend from smaller players like CoreWeave and Core Scientific. Call it north of $300bn, conservatively. Just for this year.
Big tech thinks you're so dumb, so easily satisfied, that it's willing to collectively bet a sum of money greater than the 2024 GDP of Portugal — a country that's an EU and NATO member nation, with a population of roughly 10.5 million people.
More than $300bn. That's how much these companies hold you in contempt.
Sloppy Seconds
Let me put my cards on the table. I firmly believe that generative AI is not the future of technology, or the workforce, or entertainment, or content, or anything. While it may persist in some form, it'll do so as a smaller, less exciting niche that caters to specific needs — most likely those of big enterprise clients — rather than something that we all use on a daily basis, and that plays a major role in our lives.
I believe that OpenAI will eventually die. I believe that Anthropic is actively dying. I think that Satya Nadella will eventually back away from generative AI, just like he did from the metaverse, but not before firing a bunch of people in the process to justify the company's insane investments in data center infrastructure.
Mark Zuckerberg will do the same, and he won't suffer any consequences, because he's literally impossible to fire. He'll move on to the next big thing, which he'll insist is the future, and Casey Newton and Kevin Roose will repeat whatever he says and applaud like sea lions, because they're fucking imbeciles who only believe whatever the nearest billionaire tells them at that moment.
Midjourney, Stability AI, Cursor, Replit — I believe that all of these companies are going to zero.
This is what gives me hope. Eventually, this will all end. And while you might be tempted to dismiss this as "copium," it's harder to argue with pure numbers. Ed Zitron's analyses of OpenAI and Anthropic are essential reading if you want to understand why generative AI is, fundamentally, a bad business. I won't repeat that analysis here, but I encourage you to read it in your own time.
Suffice it to say, generative AI is a sector that only exists through a kind of brain-dead corporate welfare. The moment that Microsoft, or Google, or Amazon, or Oracle, or SoftBank walks away is the moment this all ends. These companies cannot survive on their own.
So, why am I so unhappy? Because I'm all too aware of the damage that this shit is inflicting right now, and how it's cheapening our collective existence by forcing AI-generated slop upon us wherever we go.
There is no firewall. No untainted part of the web where humans still reign supreme. And the algorithm-centric nature of the online content ecosystem — where rather than showing you what you asked for, an AI model gives you what it thinks you want to see — plays an active role in foisting this shit upon us.
By now, we’re all used to seeing this garbage. We know the tell-tale signs of an AI-generated image, or a video created using Veo3. When I wrote about “the same fucking voice” earlier in this piece, you knew exactly what I meant, even without an example. You know what ChatGPT-written text looks like, and smells like, even without having to see a smoking gun, like a hallucinated fact or a made-up statistic.
And we know what AI-generated content looks like, not just because we've been exposed to so much of it, but because none of it passes the sniff test. Although these models are trained to mimic human output as accurately as possible, they still fail. Nobody writes like ChatGPT does, even when crafting the most vanilla business content. Most people know that a human hand has four fingers and a thumb.
This content is created in a matter of seconds, and even the most untrained observer dismisses it in seconds.
But fuck it, those seconds matter. It’s time that none of us are getting back. It’s time that, in an ideal world, would be spent watching videos we actually want to see, or reading content that’s written by a real human being and that informs us, or that teaches us something new, or that simply entertains us.
But more to the point, I think that the constant barrage of AI slop is bad for us on a fundamental, psychological level.
I’m not a therapist, or a psychiatrist, and I have no formal training in this space. I can only speak from my own personal experience. If everything you see online is shit, it makes it so much easier to think that everything is shit. The web — which felt genuinely magical when I first used it in the mid-1990s — starts to feel far less magical.
I guess what I'm describing here is a digital version of broken windows theory — an admittedly controversial theory of policing which holds that visible, low-level damage to an area's environment signals that nobody cares, creating a space where criminality flourishes and more serious crimes follow.
From the 1982 article in The Atlantic by James Q. Wilson and George L. Kelling that introduced broken windows theory:
Social psychologists and police officers tend to agree that if a window in a building is broken and is left unrepaired, all the rest of the windows will soon be broken. This is as true in nice neighborhoods as in rundown ones. Window-breaking does not necessarily occur on a large scale because some areas are inhabited by determined window-breakers whereas others are populated by window-lovers; rather, one un-repaired broken window is a signal that no one cares, and so breaking more windows costs nothing. (It has always been fun.)
Twitter, TikTok, Facebook, and YouTube are the neighborhoods of the digital era. And if our neighborhood is littered with low-effort, machine-generated slop, it's all too easy to conclude that our neighborhood sucks. It becomes harder to spot the things that truly inspire joy — the things created by a human being who actually gives a shit — and that might offset the existential insult of whatever AI-generated content you saw that day.
Fake, Fake News
Probably the biggest challenge with this newsletter is that I'm writing about tech — and how tech has gotten worse, and how that decline is directly ruining our lives — at a time when the non-tech world is even more screwed.
Talking about AI slop and News Feed algorithms feels trite in comparison, if not in poor taste, especially when you consider what’s happening in the Middle East, or in Ukraine, or in the United States, or even in my own home country, where the government just spent an insane amount of political capital on trying to strip disabled people of funds that they need to live dignified lives.
I've thought long and hard about that. Anyone writing about this stuff has to make the case for why it matters at a time when there's so much non-tech bleakness that also demands our attention.
The best answer I can muster is that tech is the primary way that we understand that bleakness. Tech is how we navigate through the non-tech world. These issues are intrinsically linked.
And yet, we're at a point where, rather than help us understand the world around us, tech is an impediment to it. Over the course of decades, YouTube, Twitter, Facebook, and Google created a digital public square unlike anything we’d ever seen in history. Twitter allowed politicians to engage with their constituents without the filter of the legacy media. Facebook created an ecosystem where political organizing could flourish.
As for YouTube: if we could designate websites as world wonders, or UNESCO World Heritage Sites, it would qualify. I credit YouTube with helping me understand things that I would otherwise be completely oblivious to.
To give you one example: because of one YouTube creator, FriendlyJordies, I learned about the Papua Conflict, the bad actors that turned Australia into the problem-gambling capital of the world, and the shady political dealings that led to the evisceration of huge swaths of the natural environment — which, in turn, pushed countless native species to the brink of extinction.
If I want to learn something new about programming, I turn to YouTube. If my washing machine is glitching and I need to fix it, I turn to YouTube. Despite its flaws, I credit this one site with making me a smarter, more aware person.
Even TikTok had its virtues, before the tidal wave of generative AI submerged it. It allowed me to learn about lifestyles and careers that I had never considered before. TikTok gave me the ability to spend a day in the life of a teacher, or a soldier, or a civil engineer, or someone working at a coffee shop.
Although some of these platforms started their spiral into enshittification before generative AI was a thing, it’s undeniable that generative AI massively accelerated the process.
Try as I might, I still don't understand how this isn't headline news every night. I don't know how people aren't as angry as I am.
Well, no. I do. People are resigned to the broken state of our information ecosystem because they've accepted that things are broken, that they're only getting more broken, and that they'll never be fixed.
Or, perhaps, they've convinced themselves that these platforms were always this bad, and that generative AI doesn't change the extent to which these platforms are bad, but rather the form that badness assumes.
They might argue that while generative AI is used to flood the zone with misinformation and disinformation, social media has always been a tool for misleading people, with Cambridge Analytica and AggregateIQ being obvious examples.
That argument falls apart, however, when you consider the motivations that facilitated the destruction of our online spaces.
I'm from the UK. Trust me, we know fake news. We had fake news before fake news was a thing. We're a country where the legacy media ecosystem is largely dominated by a handful of large corporations, each with their own specific political alignment, and with no scruples about lying.
Before Brexit — and before teenagers in Macedonia were writing blogs about Hillary Clinton's favorite way to prepare roast infant — the European Union was the preferred popinjay of the Daily Mail and the Daily Express, which would routinely publish articles about how meddling Brussels bureaucrats were dictating the curvature of our bananas, plotting to ban prawn cocktail crisps, or legislating how much cleavage a barmaid could show.
These stories were, obviously, bullshit, but they were depressingly effective in shaping public opinion, ultimately contributing to the UK's disastrous 2016 vote to leave the European Union. And, because these publications had a near-monopoly on the media, particularly the print media, any rebuttal would go unheard.
Perhaps my favorite example of UK legacy media shithousery came in 2014, when Ross Slater, a reporter for the Mail on Sunday, described how he was able to obtain an emergency food hamper from a food bank "no questions asked."
And then, in the third paragraph, the very same article said: "The woman, called Katherine, who was in her 60s, asked our reporter a series of questions about why the food bank vouchers were needed."
I'm not making that up. That's literally a verbatim quote. This article is almost eleven years old and I still remember how egregious it was. The headline contains a lie that was directly contradicted by the substance of the article only three paragraphs in, and even now — more than a decade later — the lie remains.
So, yeah, most Brits are used to living in a media ecosystem where accuracy and truth are, at best, a hypothetical nice-to-have — although I acknowledge that many publications from both sides of the political aisle, like The Guardian, The Times, The Independent, The Economist, and The New Statesman, generally strove to inhabit reality.
The only virtue of that era was that you could, at least, choose not to read those publications. The liars weren't massive tech companies, but rather individuals who had names and faces, and Twitter accounts where you could direct your scorn.
And you could understand that these liars were lying because they had an objective that, although reprehensible, was being served by those lies. The right-wing press spread anti-EU disinformation because it benefited their proprietors.
The reason why Ross Slater went to a food bank to scam some free food was that, at the time, the UK government was inflicting swingeing cuts on the welfare system — cuts that were increasingly unpopular and carried a massive human cost — and these publications were trying to build consent for them by framing welfare recipients as undeserving, a bit like Reagan's "welfare queens" in the 1970s.
They were bastards, sure, but they were bastards with an objective that you could understand — even if it was utterly fucking amoral.
The problem with Big Tech (particularly the new generative AI-flavor of Big Tech) isn't just that, in terms of volume, it's the most prolific misinformation bad actor the world has ever known. Nor is it that big tech is utterly unrepentant about the damage it's caused.
It's that, deep down, I don't believe that even the people working at Google, or YouTube, or Facebook, or OpenAI understand the damage they're causing, or why they're doing it, or its long-term implications. I'm not sure they even acknowledge that they're causing damage, and even if they did, I'm not sure they'd care.
There’s no real ideological motivation behind what these companies do, other than a sneering contempt for people, and a rapacious desire for self-enrichment at the expense of literally everything else.
What I’m describing is the difference between a terrorist and a mercenary.
If Google cared about truth, and honesty, and the quality of our information ecosystems — or even thought about those things — it wouldn't have released Veo3, and it certainly wouldn't have made it available to anyone who signs up for a free trial of Google AI Pro. It would have recognized the potential dangers of this technology, and the recklessness of releasing it to the public with no gatekeepers.
I can only assume that Google's reasoning for releasing Veo3 in its current form, and under its current model, comes down to one of three explanations:
1. It genuinely didn't anticipate how damaging Veo3 would be. That, to me, is a bit like Lockheed Martin selling F-35s with the expectation that they'll be used to fire rockets filled with food parcels. It doesn't make sense.
2. Google doesn't care how its technology will be used.
3. Google understands that Veo3 will be used to make content that makes people meaner, and dumber, and crueller, but it assumes that people want to watch this stuff, and so, it's addressing a commercial need.
Out of those three options — which, I admit, are not mutually exclusive — I believe that the second and third choices are the most likely.
Google doesn't give a shit. If Google cared, it wouldn't have crammed generative AI "answers" into every search query. Nor, for that matter, does it care what generative AI-created content is doing to us, because it genuinely believes that we're so vacuous, so stupid, that this is actually what we want.
I believe that Google, and Facebook, and OpenAI, and Anthropic, and Microsoft all think so little of their customers. So certain are they of our collective stupidity and cruelty that they feel confident enough to spend $300bn in a single year to accelerate our collective descent into further cruelty, and further stupidity.
This is tech’s biggest insult.
There’s No Going Back
In June, a former Cloudflare executive, John Graham-Cumming, launched a website called lowbackgroundsteel.ai, which serves as an archive of pre-AI content.
Low background steel, if you're curious, refers to steel that was forged before the first detonations of nuclear weapons. Steel produced after 1945 is indelibly contaminated with trace amounts of radioactive material, which makes it unsuitable for things like Geiger counters and particle detectors. This contamination continues, even to this day.
It's a powerful analogy. Generative AI has permanently polluted the web, just like the fallout from nuclear weapons permanently tainted the steel we produce. And, just like we're forced to obtain low-background steel from pre-Hiroshima shipwrecks, we're now trying to preserve our pre-AI web.
This is the ultimate tragedy of generative AI. The damage it has caused is, fundamentally, irreparable.
I take comfort in knowing that, eventually, this will all go away. There is no way that OpenAI survives. There is no way that Anthropic, or any other generative AI firm, survives. I believe that we’re witnessing a major backlash to generative AI that, eventually, these companies will have to reckon with.
I believe that these AI companies, in addition to being financially unviable, are vulnerable from a copyright perspective, too. It just takes one successful lawsuit, one injunction, to send this house of cards tumbling.
I believe that, eventually, investors will tire of subsidizing Anthropic and OpenAI.
I believe that, eventually, Sundar Pichai and Satya Nadella will realize that everyone fucking hates Copilot and Gemini, and resents having generative AI shoved into every nook and cranny of every app they use.
I believe that, eventually, this will all go away.
But that won't be much comfort to those freelance artists who lost contracts and clients because some dipshit thought an AI-generated picture of a man with six fingers on one hand and seven on the other was "good enough."
It won't wipe away the stain of AI-generated slop from our information ecosystem. That taint will never go away. We'll have to live with it, just like we live with the isotopes that still circulate in our atmosphere following the nuclear tests of the '50s and '60s, and just like we live with microplastics and PFAS chemicals in our water supply.
It won't bring back the developers that Microsoft fired so it could continue subsidizing generative AI.
Generative AI has broken the web, and while we can remediate the effects, there’s no such thing as a full recovery.
And so, we should never forgive the people who led us to this moment. We should revile the names Sam Altman, Sundar Pichai, Dario Amodei, Mark Zuckerberg, and Satya Nadella for as long as we live.
We should remember how these companies were so excited about the prospect of destroying the labor market, and so convinced that people could be satisfied with soulless, artless AI slop, that they committed to spending more than $300bn in a single year on data centers and GPUs.
The sheer existence of generative AI is an insult to all of us, and we should never, ever turn the other cheek.
Afterword:
A couple of notes:
Apologies for not getting a newsletter out last week. It was… a week. I’ll be returning to my originally stated schedule this week, and my next post (aiming for Wednesday or Thursday) is a follow-up to this newsletter.
If you haven’t already, check out Ed Zitron’s newsletter, as well as his podcast — sorry, I mean Webby award-winning podcast — Better Offline. A lot of the points I made in this newsletter were informed by his writing, and there’s nobody covering generative AI quite like him.
Thank you to everyone who’s subscribed, shared, liked, and commented so far — and especially those who have chucked me a few bucks and signed up for a paid subscription, even though I haven’t actually made any paid content yet. It means a lot. Genuinely.
If anyone wants to shoot me an email, I can be reached at me@matthewhughes.co.uk.