Readers of this publication know that I work for Ed Zitron, author of the Where’s Your Ed At newsletter and the Better Offline podcast, and arguably one of the most high-profile critics of generative AI in the tech media.
Despite his polemical style, Ed’s actually a bit of a moderate. He’s eminently reasonable. His criticisms of generative AI stem, primarily, from the fact that large language models can’t do what their proponents claim, and that the businesses propagating them are, fundamentally, unsound. OpenAI is basically what happens when someone looks at the Greek government’s 2007 balance sheet and says “hold my beer.”
While Ed hates the environmental cost of generative AI, and the fact that it’s decimating the labor market (not because it can do anything, but because idiot CEOs think that it can, or because it’s a useful smokescreen for outsourcing and layoffs), he’s open to a world in which none of those things are true — where GenAI is actually good, and isn’t harmful. He’s a technologist at heart, and — like me — would love for the technology industry to make products that allow us to live better, more dignified lives. If genAI did that, he wouldn’t object.
I am not a moderate. I am not open-minded. I am not objective, and I make no apologies for that.
If there were a world in which Microsoft built Azure data centers that sucked up carbon dioxide and belched golden retriever puppies, and where LLMs didn’t hallucinate, and where any jobs displaced by GenAI were replaced by even better-paid jobs — or led to some kind of utopian, Star Trek world where nobody had to work, and where everyone’s needs were provided for through some form of UBI — I’d still hate it, purely on principle.
I’m a fundamentalist. I am not reasonable.
I want to make this clear upfront, for two main reasons:
First: while I’ll never lie to make a point, and while I’ll always strive for accuracy and to fairly represent points that diverge from my own, I want you to know that I am biased as hell. If you’re coming to this newsletter for impartiality, you’re in the wrong place.
Second: my objection to generative AI is not just that GenAI is objectively shitty (from an environmental perspective, from a business perspective, and from a content perspective), but that I feel as though generative AI is bad for us. As a species. As human beings.
This article is a follow-up to the newsletter I published on Sunday, where I talked about the intrinsic insult at the heart of generative AI — that tech companies are spending hundreds of billions of dollars building infrastructure for AI models whose output is inherently flawed, because they believe that people are willing to settle for content that’s “good enough.”
While we should be outraged at that, I also recognize that anger isn’t enough. We live in a world that’s filled with injustice, that has always been so, and that, if anything, is only getting worse with each passing day. While I believe that the rise of generative AI — and the assumptions at its heart — is its own pernicious type of injustice, I also believe that we need to make a positive case for humanity.
I’m as much pro-human as I am anti-genAI. It’s this belief in the potential of people — and the intrinsic goodness of people, however flawed — that drives me to write this newsletter. Generative AI, by contrast, is an expression of contempt towards people, one that considers them to be a commodity at best, and a rapidly depreciating asset at worst.
To support generative AI is to say that you don’t believe that people have value — as colleagues, as creatives, or as human beings. Whenever you use generative AI to write an essay, or an email, or to create a piece of AI “art” (which I’ve put in quotation marks for obvious reasons), you’re saying something about the recipient, making an unspoken value judgment about their worth or their intelligence.
Ironically, I also view the users of generative AI as victims. This technology has lowered their standards, both of other people and of themselves. By using AI to create a piece of “art,” or to build an app, you’re saying that you’re incapable of doing the very thing you’ve asked an AI model to do — even if said AI model ultimately produces something that's complete garbage, like a drawing of a person whose hands have too many fingers, or a “vibe coded” app that barely works and is full of security vulnerabilities.
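To make that concrete, here’s a minimal, hypothetical sketch of the single most common flaw in that genre: user input spliced straight into a SQL query. The table, the data, and the attacker string are all invented for illustration, and the example assumes the widely used sqlite3 npm package.

```typescript
import sqlite3 from "sqlite3"; // assumes the common `sqlite3` npm package

const db = new sqlite3.Database(":memory:");

db.serialize(() => {
  db.run("CREATE TABLE users (id INTEGER, name TEXT)");
  db.run("INSERT INTO users VALUES (1, 'Alice'), (2, 'Bob')");

  // The classic vibe-coded mistake: splicing user input directly into SQL.
  // An attacker-supplied id of "1 OR 1=1" dumps the whole table.
  const userId = "1 OR 1=1"; // imagine this arrived as a request parameter
  db.all(`SELECT * FROM users WHERE id = ${userId}`, (_err, rows) => {
    console.log("unsafe:", rows); // both rows, not just user 1
  });

  // What a developer learns by doing the work themselves: parameterized
  // queries, which treat input as data rather than as executable SQL.
  db.all("SELECT * FROM users WHERE id = ?", [userId], (_err, rows) => {
    console.log("safe:", rows); // no rows; the malicious string isn't a valid id
  });
});
```

The difference between those two queries is exactly the kind of understanding that never develops when the code is conjured rather than written.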
Generative AI cheats the user of the opportunity to get better. To actually learn something. To think, and to develop their own opinions and perspectives. It’s the digital equivalent of methanol, a kind of computer-inflicted cognitive decline that’s destroying our brains, and the worst part is that said destruction is entirely voluntary.
To use generative AI is to hate yourself, and to hate other people, and to place undue trust in companies that have consistently shown themselves to be untrustworthy, and to essentially surrender your own human potential to a soulless, unthinking magic box that promises the world, but delivers nothing but disappointment.
Wasted Potential
While planning this newsletter, my mind turned to Star Trek: Insurrection. Released in 1998, it’s arguably the weakest of the four feature-length films set in the Star Trek: The Next Generation universe, although it still has some charms.
The film is set around a planet populated by a race called the Ba’ku who, thanks to some strange cosmological anomaly, enjoy a state of perpetual youth. Although the Ba’ku were once a spacefaring race, with the same advanced technology as the Federation, they later came to reject it. When asked by Captain Jean-Luc Picard why, one of the Ba’ku responds:
“Our technological abilities are not apparent because we have chosen not to employ them in our daily lives. We believe that when you create a machine to do the work of a man, you take something away from the man.”
As someone who loves technology, and has occasionally indulged in techno-utopian thinking, I always found this line strange. If you could create humanoid androids like Lt. Commander Data, or use a replicator to produce everything you could possibly want — from food to clothing — and live a life of workless luxury, why wouldn’t you? It’s a perspective that I genuinely struggled to understand.
I imagine that some people believe generative AI to be a step towards the kind of world depicted in Star Trek. In some respects, they may be right. After all, what’s the difference between asking ChatGPT to explain how JavaScript’s prototype-based approach to object-oriented programming works, and Geordi La Forge creating a holographic version of Dr. Leah Brahms to help him solve a problem with the Enterprise’s warp engines?
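(For anyone who hasn’t wrestled with that particular corner of JavaScript, here’s a minimal sketch of the mechanism in question, written as TypeScript with names invented for illustration: objects delegate to other objects through a prototype chain, rather than instantiating classes.)

```typescript
// A plain object that will serve as the prototype.
const animal = {
  legs: 4,
  describe(): string {
    return `An animal with ${this.legs} legs`;
  },
};

// Object.create(animal) returns a new, empty object whose prototype is
// `animal`: property lookups that miss on `dog` fall through to `animal`.
const dog = Object.create(animal) as typeof animal;
console.log(dog.describe()); // "An animal with 4 legs", resolved via the chain

// Assignment creates an *own* property that shadows the prototype's value.
dog.legs = 3;
console.log(dog.describe()); // "An animal with 3 legs"
console.log(animal.legs);    // still 4; the prototype itself is untouched
```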
Perhaps there’s no difference. But I don’t think that matters. My objection to generative AI isn’t solely based on the quality of its outputs, or whether it does or doesn’t live up to its promises. OpenAI could create a perfectly accurate version of GPT and my argument — that generative AI, by definition, robs the user of something — would still stand.
Paraphrasing the Ba’ku villager from Star Trek: Insurrection: “I believe that when you create an AI model to do the work of a person, you take something away from the person.”
That might be something as fundamental as the ability to think. In May, MIT’s Media Lab published a study that examined how ChatGPT affected a person’s ability to think and critically examine a subject, and their overall — for lack of a better term — motivation to engage with it.
From Time’s coverage of the study:
The study divided 54 subjects—18 to 39 year-olds from the Boston area—into three groups, and asked them to write several SAT essays using OpenAI’s ChatGPT, Google’s search engine, and nothing at all, respectively. Researchers used an EEG to record the writers’ brain activity across 32 regions, and found that of the three groups, ChatGPT users had the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioral levels.” Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.
Now, it’s worth noting that there are some important caveats here. First, this was a single study with a relatively small sample size. I wouldn’t put that much trust in this individual study, largely because there’s a replication crisis in psychology and the social sciences, where many studies hailed as landmark discoveries about the human mind cannot be reproduced by other labs.
While I wouldn’t accept these findings as absolute truth until they’re reproduced elsewhere (and, ideally, with a larger cohort of participants), I’m generally inclined to believe them. It’s something that passes the “sniff test.” Something that, even without the gold standard of empirical verification, feels believable — if not obvious.
Because, let’s face it, if you ask ChatGPT to write an essay, or to do your “research,” which you’ll then re-word so that it sails through whatever plagiarism checker your university uses, you aren’t actually thinking about anything other than the mechanics of how you stitch together the outputs of ChatGPT.
This might get you a passing grade, but it won’t actually result in you learning something. By leaning on genAI, you’ve robbed yourself of an opportunity to expand your knowledge, or to improve at something.
We’re seeing something similar in programming. So far, the evidence suggests that the biggest productivity gains from generative AI are being felt disproportionately by entry-level developers. When you look at those with more career experience, those benefits start to drop precipitously.
When you consider that junior developers have less experience programming in a given language, or lower levels of familiarity with a framework, compared to someone who has been working as a professional developer for a decade or more, this makes sense. They’re using a GenAI model that has been trained on billions, or even trillions, of lines of code written in that particular language, or for that development framework.
That model is saving them the effort of having to remember the design patterns, or the exact APIs or libraries to call, or the syntax for using said libraries and APIs. Even if it produces a wrong answer, the junior developer can usually fiddle with it enough to get it to work — even if they don’t actually understand what’s happening behind the scenes.
The problem is that they aren’t actually learning anything. If experience is what distinguishes a junior developer from a senior developer, how are they supposed to progress in their careers if an AI model is doing all the hard work?
Additionally, being a developer isn’t just about hammering lines of code into a text editor. It’s about thinking about problems. It’s about reading documentation and books and blogs, learning how something works, or how someone else solved a problem. It’s about thinking about how to architect a product in a way that’s easy to maintain and improve upon.
Human cognition is core to being a good engineer, and that’s precisely what LLMs lack. They don’t think. They don’t understand anything. They’re probabilistic guessing machines, using complex mathematical models to predict the next set of characters in a sentence.
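To illustrate what that guessing looks like in principle, here’s a deliberately tiny sketch with invented scores (no real model works at this scale, and real models operate on tokens, not whole words): decoding amounts to squashing scores into probabilities with a softmax, then rolling weighted dice.

```typescript
// Turn raw model scores ("logits") into a probability distribution.
function softmax(logits: number[]): number[] {
  const max = Math.max(...logits); // subtract the max for numerical stability
  const exps = logits.map((l) => Math.exp(l - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Sample one continuation from the distribution: a literal weighted dice roll.
function sampleNextToken(vocab: string[], logits: number[]): string {
  const probs = softmax(logits);
  let r = Math.random();
  for (let i = 0; i < vocab.length; i++) {
    r -= probs[i];
    if (r <= 0) return vocab[i];
  }
  return vocab[vocab.length - 1]; // guard against floating-point drift
}

// "The cat sat on the ...": run this twice and you may get different answers.
const vocab = ["mat", "roof", "moon", "keyboard"];
const logits = [3.1, 1.2, 0.4, 0.2]; // invented scores for illustration
console.log(sampleNextToken(vocab, logits));
```

There is no comprehension anywhere in that loop. There’s arithmetic.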
The idea that we’re building technology with something that’s, quite literally, guessing is terrifying enough. But, by using these tools, we’re also actively discouraging people from thinking.
The Measure of a Man
One aspect of generative AI that I don’t believe gets enough attention is how genAI tools are warping our expectations of other people — and, thus, surreptitiously warping our interpersonal relationships, particularly those in the workplace.
On Tuesday, I had a phone conversation with a good friend of mine who, for the past decade or so, has run a successful PR agency in the Bay Area. What makes this agency so successful is that my friend is a passionate technologist at heart, but also someone who understands people.
Her clients tend to skew towards the complicated, and the developer-centric. As a result, whenever she onboards a new client, she has to think carefully about how to actually craft their messaging. How to translate the complexity of Linux software and developer tools into something that the reader — the journalist who receives her pitches — can understand, and can then convey to their audience.
This is hard work, and the kind of work that you don’t want to rush. In our call, she told me that she sometimes spends as much as one day thinking about this, experimenting with ways to phrase things in a way that balances accuracy with comprehensibility. It’s the kind of important but unglamorous graft that separates a good PR from a bad PR.
She also told me that, since the launch of ChatGPT, some of her clients — presumably those who have never worked with a publicist before — have begun to demand faster and faster turnarounds, with the expectation that she’ll simply automate the thinking she prides herself on doing, outsourcing the very thing that makes her good at her job to a magical, mathematical slop machine.
She didn’t say this, but from my perspective, it felt as though genAI is actively devaluing her work and her abilities. The immediacy that genAI provides is now making some clients expect her to act like an AI model — when, in fact, she’s a very good, very clever, and very experienced professional.
When I talk about genAI devaluing humans, this is part of what I mean. It reduces a person not to the quality of their work, or the effort behind it, or the thought and experience that guides it, but to how quickly they can do it. Everything is flattened into a single metric of “performance” and “efficiency,” and quality falls by the wayside.
To reiterate, I don’t believe that genAI is the future. I don’t believe that genAI models are capable of replacing human effort. Nor, for that matter, do I believe that our future workforce will consist of one person in an otherwise vacant office, issuing edicts to Salesforce Agents through a Slack window.
But let’s suppose that Marc Benioff isn’t just a bloated, bloviating idiot who looks like a cross between Jurassic Park’s Dennis Nedry and a mafia lawyer, and that we are actually careening toward a nightmare world where AI agents start to dominate the workforce. Are we prepared for a world where humans are, quite literally, substituted with AI models that superficially act like people — and that you talk to through an instant messenger, as though they actually are people?
Are we ready, as a society, for that level of dehumanization?
To be clear, the business world has long treated human beings as an interchangeable commodity to be used and discarded at will, and it celebrates the ghouls who act with the most indifference to their fellow human beings. Jack Welch — who, if hell exists, is surely working on his tan right now — is a great example of that, being regarded as some kind of business visionary rather than the person who turned GE from the crown jewel of America’s industrial base into a glorified spreadsheet that also makes jet engines.
This feels different, however. Agentic AI will crank that dehumanization to eleven. As much as I fear for those whose jobs may be lost, I also worry about those left behind. I can’t imagine it’s mentally healthy to have a job where your role involves barking orders at AI models in a chat window — and where you don’t need to show empathy, or exercise consideration, because your colleagues aren’t human.
I worry about how that dehumanization will, in turn, bleed into that person’s life outside of work, and how it’ll affect their relationships with their friends and family, their spouse and children.
I’d argue that the rise of AI has already crippled the ability of some to consider the feelings and circumstances of other people, and fundamentally damaged their scope for empathy.
A good example of this is Matt Turnbull, an Executive Producer at Microsoft Xbox Studios, who, in the wake of 9,000 workers being laid off, published a note to LinkedIn (of course he fucking posted it to LinkedIn), where he suggested that those fired employees turn to AI for solace. Here’s what he said:
“No AI tool is a replacement for your voice or your lived experience. But at a time when mental energy is scarce, these tools can help get you unstuck faster, calmer, and with more clarity.”
This is the kind of idiocy that anyone with an iota of self-awareness or emotional intelligence would never write. And it’s evidence of the impact that AI is having on the ability of (allegedly) normal people to understand other normal people, and to empathize with their plight — like having been laid off into the toughest job market the tech industry has known in years.
There are dark times ahead.
Flaws Are A Virtue
Readers of this newsletter know I have something of a prolix writing style — one that takes the reader on twists and turns, and veers into random tangents. I’m only five newsletters into this project, and I’ve yet to publish anything under 3,500 words. It’s one of those things that, while some may see it as a flaw, makes me who I am.
I’m a deeply flawed person. I’m perfectly imperfect. I mess up. I sometimes let my friends and family down. As a journalist, I’ve published stories that were, in some way, incorrect. As a developer, I’ve shipped buggy, sloppy code. I have mannerisms that annoy people, and the verbosity I mentioned earlier is one such example.
I, however, take solace in the knowledge that everyone else who has ever walked this earth has been equally flawed. That everyone has their triumphs and their failures. Their successes and their disappointments. Their idiosyncrasies that grate on some people, but are nonetheless a core part of what makes them as unique as their fingerprints.
Every person who has ever mattered to me — who has ever made a difference to my life, or anyone’s life — has been, in some way, flawed.
That’s the thing about people. We’re complicated. We have strengths and weaknesses, and these combine into what makes us unique, and what makes us special. All of us have the potential for self-improvement and self-actualization. Just like we don’t know what the lottery numbers are each week, we don’t know what we’ll do tomorrow — and whose lives we’ll touch, or what we’ll accomplish, or how we’ll grow.
I’ll take a flawed, imperfect, idiosyncratic human over an AI any day of the week, in part because I recognize that complexity. I recognize that human beings aren’t a single attribute, but the product of many attributes that are constantly fluctuating, and that have the potential to change over time.
Next week, I might be a better writer. Or a better friend. Or a better husband. Or a better human.
Conversely, I might be more verbose, publishing a newsletter that, over the course of ten thousand words, takes the reader on a journey that weaves and ducks through countless subjects, and may not make perfect sense.
I might have a really bad day and act like a total asshole. I might miss a deadline and let someone down.
That variability — that oscillation between fortitude and failure — is core to the human experience, and I’d argue that generative AI’s biggest sin is that it tries to engineer it away.
First, by flattening people to the sole metric of “performance,” thereby forcing human beings to compete against complex LLMs that run on GPUs costing more than most people spend on rent each year, while disregarding the things that really matter — like creativity, empathy, or simple human cognition.
Second, by presenting a phony substitute for humanity. LLMs are being framed by ghouls like Sam Altman, Marc Benioff, and Satya Nadella as a replacement for people, but without the trade-offs that people bring to the workforce. LLMs don’t get sick, or have bad weeks, or take vacation, or have kids, or demand a living wage.
If we believe the hype — like that from Dario Amodei, Anthropic CEO and suspected melted waxwork of Jonah Hill, who claimed that AI will replace half of all entry-level white-collar jobs within the next one to five years — AI is capable of effortlessly stepping into the shoes of a human being.
We can’t afford to believe that — not merely because it’s untrue, a line designed to make a technology seem more inevitable than it is, spoken by someone who stands to benefit from people believing said lie, and regurgitated by credulous dipshits in the tech media who are drunk on access — but because it inherently cheapens the value of human beings.
AI cannot replace a human being because it is not a human being. It’s as simple as that.
A Time To Stand
We’re at an inflection point. Right now, AI is being shoved into every touchpoint in the tech ecosystem. I’m not merely talking about Google Search, or Microsoft Copilot, or Slack, or Meta AI. Terabox — a Chinese alternative to Dropbox that, in my experience, is mostly used by people to pirate content — now touts its ability to write essays and create presentations for the user. Insane.
I believe this proliferation of genAI is down to three reasons. First, as Ed Zitron has argued so effectively in the past, this is the last Hail Mary of a tech industry that’s run out of ideas for new products, and has no more growth markets left.
Second, I believe that the people behind these companies genuinely believe that generative AI is the future — albeit not in any rational sense, but because they’re blinded by the prospect of being able to slash their workforces down to the bone, and to make an insane amount of money by helping other companies do the same.
Finally, I believe that the incursion of generative AI into every aspect of our digital lives is intended to convince the public that this technology is inevitable, and that it’ll be a part of what we do in our personal and professional lives, no matter what we think.
We must reject this idea with every breath. Generative AI is not essential, nor is it inevitable, and should it become so, it would be a tragedy. What we need at this moment is unfailing, unflinching moral clarity.
Here is what I believe.
Human beings have innate value.
People are more than the sum of their outputs, or how fast they produce something.
Quality matters more than velocity.
Empathy, generosity, and creativity are things worth valuing.
AI is physically incapable of replicating the characteristics that make human beings special.
By using generative AI, you rob yourself of the opportunity to improve yourself.
By using generative AI, you cheapen the world around you.
By using generative AI, you devalue yourself and other people.
Generative AI is inherently opposed to human worth.
A person on their worst day is still better than a generative AI model on its best day.
We shouldn’t celebrate a technology that people genuinely hope will allow them to push hundreds of millions onto the dole queue.
None of these beliefs are rooted in the stability of the companies building generative AI, or the actual capabilities of generative AI models. Again, OpenAI or Anthropic could produce a model that never hallucinates, and I’d still hate it for all the reasons I mentioned above.
None of this means I hate technology. My life has been shaped by technology. The computer was — and is — core to so many of my relationships. It has improved my life in ways too numerous to mention.
Which is just another reason why it’s worth standing against generative AI, because the ways in which this technology is ruining our world are, similarly, far too numerous to mention.
Generative AI is anti-person. And if we’re to stop this from ruining our lives, and our world, it’s up to people to stand up and find their voices.
Who’s with me?
https://www.youtube.com/watch?v=0fPUWSv2JCI