10 Comments
YakiUdon:

It is amusing that LLMs will continue to get worse because they pollute the internet corpus with generated slop. As passive consumption grows, the training set becomes less weighted towards meaningful human output, and the algorithms have no choice but to fill their gaping, ever-hungry, ever-scaling maws with their own produce.

Like a dog returning to its own vomit…
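(A toy illustration of the feedback loop described above, as a minimal Python sketch: a Gaussian "model" is repeatedly refit on samples of its own output. Everything here is an invented cartoon of this "model collapse" dynamic, not a claim about any real training pipeline.)

```python
import numpy as np

rng = np.random.default_rng(0)

# "Human" data: the original distribution the first model is fit on.
human_data = rng.normal(loc=0.0, scale=1.0, size=10_000)
mu, sigma = human_data.mean(), human_data.std()

# Each generation trains only on a finite sample of the previous
# generation's output, standing in for a corpus filling with slop.
for generation in range(1, 21):
    synthetic = rng.normal(mu, sigma, size=50)
    mu, sigma = synthetic.mean(), synthetic.std()
    print(f"gen {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")

# The fitted std tends to drift downward across generations: finite
# sampling plus refitting compounds estimation error, so diversity
# shrinks even though no single step looks catastrophic.
```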

Dr Brian:

Thanks for this analysis. Personally, I don’t think your observations (“AI generates slop, and slop is bad”) justify your conclusion (“all AI companies will die”). There is an alternative path: many human activities can get by on “good enough”. Especially in our ever-fleeting, attention-starved world, “good enough” often gets the point across, gets the likes, gets the “B+” grade, and we move on. I don’t think any of that is good for us cognitively, but our social media addiction has demonstrated that companies can thrive plenty fine without being good for us.

Matthew Hughes:

Hey! So, I should have been clearer: the shittiness of AI and the doomedness of generative AI companies are two totally different points, although I'd argue they're interwoven, because the inherent shittiness contributes (though not exclusively) to their being bad businesses.

OpenAI needs more money than any startup in history, it needs it constantly, and it has no pathway to profitability. If that funding ever stops, it's dead. Even if GenAI were good, this would be true.

Part of the problem is that genAI isn't a space where economies of scale work. The more successful these businesses are (in terms of users), the steeper the losses, and the more money they need to actually fund their expansion. Data centers and Blackwell GPUs are crazy expensive!

The problem is that genAI companies lose insane amounts of money (not one is actually profitable), and the products are too unreliable for them to actually be used for any real commercial stuff at any large, meaningful scale.
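(To make the "losses scale with success" point concrete, here is a back-of-the-envelope sketch in Python; every figure below is hypothetical, chosen only to show the shape of the problem, not taken from any company's actual financials.)

```python
# Back-of-the-envelope unit economics. Every number here is made up.
revenue_per_user_month = 20.00  # e.g. a d2c subscription price
cost_per_query = 0.04           # hypothetical inference cost (GPU time, power)
queries_per_user_month = 600    # a fairly heavy user

cost_per_user_month = cost_per_query * queries_per_user_month  # $24.00
margin_per_user = revenue_per_user_month - cost_per_user_month  # -$4.00

for users in (1_000_000, 10_000_000, 100_000_000):
    print(f"{users:>11,} users -> monthly margin: ${margin_per_user * users:,.0f}")

# If serving a user costs more than they pay, growth multiplies the
# loss: the inverse of economies of scale.
```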

Dr Brian:

Thanks for your response; I definitely misinterpreted. It’s hard to predict whether future efficiencies (model compression, etc.) will give them a pathway, but directionally I agree with your argument. Keep up the great writing!

Matthew Hughes:

Thank you!

Josh:

> Big tech thinks you're so dumb...

I think it's more like "Big tech knows you've become so addicted..."

Great article. I agree with the gist of it, that AI is a horrible invention that hurts all of us.

And while I sure hope you're right that AI will go away, I don't think it will. I sadly think it's here to stay, and it will continue to push humanness out of humanity without a care in the world. Your critique of how unnatural or obvious AI-generated content is ignores how rapidly the technology is progressing. Think about how bad AI-generated content was 10 years ago, 5 years ago, 3 years ago, 1 year ago, last week.

> I believe that Google, and Facebook, and OpenAI, and Anthropic, and Microsoft all think so little about their customers.

Are we their customers or their product?

Matthew Hughes:

I'd challenge you on the idea that these AI models are getting better. LLMs still hallucinate, and they're only getting worse, because LLMs don't actually "know" anything in the sense that a human knows something. These are probabilistic models using math to guess the right word in a sentence, and nothing else.

https://techcrunch.com/2025/04/18/openais-new-reasoning-ai-models-hallucinate-more/
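(A minimal sketch of what "guessing the right word" means mechanically: the model scores every candidate next token and samples from the resulting probability distribution. The tiny vocabulary and scores below are invented for illustration; a real LLM produces such scores from billions of learned parameters.)

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up model scores (logits) for the next word after the prompt
# "The cat sat on the".
vocab  = ["mat", "roof", "moon", "quarterly"]
logits = np.array([3.1, 1.4, 0.2, -2.0])

# Softmax turns raw scores into a probability distribution.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

for word, p in zip(vocab, probs):
    print(f"{word:>10}: {p:.3f}")

# Sampling picks a word in proportion to its probability. Nothing here
# "knows" whether the continuation is true; it only knows how likely
# each word is given the context. That gap is where hallucination lives.
print("sampled:", rng.choice(vocab, p=probs))
```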

Josh:

They definitely hallucinate, but I haven't seen any evidence that their hallucinations affect their adoption in any meaningful way. I'm not sure consumers have much say in the matter, and the market forces don't seem to care. I also wonder if hallucinating AIs are still better than the non-AI products consumers are provided, at least for some areas such as web searches and code generation.

Matthew Hughes:

I think the evidence is in the relatively slow adoption of these technologies, the high churn in some products, and the fact that API sales account for a small fraction of OpenAI’s revenue.

If genAI were actually delivering value, it would be in everything, and OpenAI’s revenue would skew heavily towards APIs rather than direct-to-consumer (d2c) subscriptions!

Francis Turner:

I mostly agree with the "AI is all mediocre slop" argument, but not entirely.

I know a fair number of people who use AI to generate the starting points for their art. They use Midjourney (or sometimes Grok) to generate some base images, which they then merge, combine, and enhance using other graphics tools. They say it saves a lot of time on tedious work that isn't exactly a great use of their time or creative talent.

And there are people like me who have more or less zero artistic talent but want a quick illustration, or want a chart or diagram made to look better. Assuming you check the output and are prepared to tweak it, using AI for this gets way better results, way quicker, than anything else. And no, I'm not going to pay a graphic designer's rate to do this; the budget is close enough to $0 that no graphic designer could make a living wage from spending the time needed to produce an equally good product, unless he used an AI tool himself.

There are other examples. Fundamentally it boils down to whether people care enough to spend a little extra time checking and tweaking or not. Sadly, evidence suggests most do not because most people seem to be lazy tossers who don't care about quality and who have found that search engines and the like don't either.

Until enough people make the effort to use alternative search engines and social media that aren't adtech-infested and AI-"enhanced", no one has any incentive to change this.
