I launched this newsletter late last week, but if I were to tell you the truth, I’d say that it’s something that’s been in the works for a while now. I have a bunch of half-finished drafts on my laptop where I lost steam and never regained it, and even entire articles that I completed but was never quite happy with.
Part of the problem is that while I feel profoundly depressed about the direction of the tech industry, and feel as though the tech products I use in my daily life are fundamentally degraded, it’s tough to articulate why. It’s hard to look beyond the individual examples and identify a common thread.
You can point to obvious things — like the fact that Google is practically useless, or that Facebook is no longer really a social network, but rather an aggregator for the worst slop the Internet has to offer — but they seem like symptoms of a much more insidious disease.
Something feels profoundly wrong, but why? And what?
Those questions are, ultimately, what I hope this newsletter will answer. As I revisited the incomplete or abandoned drafts on my computer, I noticed one theme emerge: technology products have become increasingly algorithmic in how they work, and as a result, nobody really understands them. And because nobody understands them, nobody really has any control over the things they use.
Control. This is the biggest thing that the technology industry has stolen from us. A person’s intent, or their desire, has little-to-no influence on how something works, because our agency has been usurped by algorithms and AI which don’t work to advance our interests, but to mediate our interactions to produce the best financial outcome for the company that built them.
Sidenote: Forgive the interruption, but I just want to say a massive thank you to everyone who has read my introductory posts to this newsletter, who has shared them, or simply said nice things.
I started What We Lost as a labor of love. Its purpose was — and is — primarily therapeutic. I’m unhappy with tech, unhappy with the tech industry, and I wanted a space to express that displeasure. While I expected that some people would find my writings interesting, or useful, I genuinely didn’t expect the overwhelmingly positive response I got.
Every comment, every like, every subscription — and I’ve had over 250 so far — means the world to me.
Honestly, the most flattering thing was how, after just one proper post (I’m not counting the article where I explained why I started this project), four people decided to pledge money to this newsletter. To me. I still can’t really believe it.
I didn’t have monetization turned on at the time, and I had no idea you could even pledge support to non-monetized newsletters. I’ve since set it up, and if you’re so moved, feel free to chuck me a few bucks — although it’s not expected, and I’m going to work on this regardless of whether I get paid or not.
I’m planning to publish one article a week. That’s my aim, but it ain’t a guarantee. Things happen. Life gets crazy. Occasionally, you’re so tired and burned out, you need a break.
Still, one article a week — four articles a month — that’s the goal. And I’ve set a tentative publication date of Wednesday or Thursday for each post. Emphasis on the ‘tentative.’
Anyway, I’m rambling. Once again, thank you. If you like this, feel free to subscribe, to share, to drop me a message on Bluesky, or send me an email. My email address is me@matthewhughes.co.uk.
And back to our regularly scheduled programming.
To use technology in 2025 is to be impotent, powerless, and to exist at the whims of these algorithms. To use social media is to engage in a battle with the platform itself in order to see the content you want, or to speak to the people you care about. Finding a specific video on YouTube requires you to play a game of tricking the algorithm. The same is true for Google, Amazon, and a myriad of other places.
And as these companies rely on AI to power more aspects of their products, the websites and services we use will become even more incomprehensible, and even more hostile to users.
Anti-Social Media
In April, Mark Zuckerberg — the founder of Facebook — made a revealing admission during his testimony to the Federal Trade Commission, as part of a long-running antitrust lawsuit that may see the company broken up. Just 20 percent of the posts people see on Facebook, and 10 percent of the posts on Instagram, come from their connections — accounts made and operated by other human beings that the user has “friended.”
Social media, as we have long understood it, no longer exists. Facebook is not a social network. Neither is Instagram. They’ve transformed into something else, something that few people actually like.
The cause of this transformation is a matter for debate. Have people stopped posting because they’re just fatigued and don’t really see the value in posting snaps of their lunches and vacations, as they did in the early days of social media, when we felt empowered and encouraged to share every aspect of our lives to the public?
Are they wary of the personal and professional consequences of misspeaking about a contentious political issue, as one Reddit comment I found suggested? Is it because as they’ve gotten older, they no longer value having an artificially large social circle — which includes former colleagues, random strangers they met on nights out, and so on — and would prefer to direct their focus to their closest friends and family?
Perhaps. I’m open to the idea that everything I just said — and perhaps some factors I didn’t even list — are behind the trend that Zuckerberg identified in his FTC testimony. It’s entirely possible that social media as we knew it was just a passing phase that only existed — and could only exist — while the sector was still in its nascent stages of development.
Maybe. Possibly.
Or, perhaps it’s because Facebook and Instagram are now just shitty products that, over time, have completely stripped their users of any autonomy, and people don’t want to waste time posting life updates when the algorithm decides whether it’s worth showing them to their friends.
Do you think it’s because Facebook and Instagram just suck now, eh Mark?
I’m convinced that Mark Zuckerberg has created a self-reinforcing cycle of enshittification, where one original sin — an idiotic UX decision, or an algorithm change, intended to increase user activity — had, in the long run, the opposite effect. And so, to counteract those negative effects, Facebook doubled down.
Sidenote: Enshittification is Cory Doctorow’s term to describe the seemingly-inevitable degradation of tech products, which often happens when a company goes public and becomes accountable only to shareholders and markets, with every other concern, be they users or employees or the product itself, ranked second. It’s one of my favorite words, and you’ll see it a lot in this newsletter.
Here’s my theory. As Facebook and Instagram get worse, people are less likely to engage with the platform, because they’re not stupid, and they can tell that the sites and apps they once used and loved are broken.
As growth stalls, and on-site activity slumps, Facebook and Instagram have to depend more on algorithms to juice engagement. They deliberately make the platform more broken, so that you need to spend more time, exert more energy, and view more ads to do the things you want to do.
These tactics, in turn, make Facebook and Instagram even shittier places to spend time — and so, Meta does the only thing it knows how, which is to fill your feeds with more AI slop, more unsolicited posts from accounts you don’t follow, and cede control of more aspects of the app to growth-hacking algorithms. Rinse, repeat, and despair.
We’re all familiar with the sorry state of the News Feed — the first thing you see when you type “facebook.com” into your browser, which at one point contained things like baby pictures and life updates, but now is dominated by random posts from accounts you’ve never interacted with and don’t care about.
The News Feed is the most obvious example of the trend I identified earlier — things that are algorithmically-driven, and thus impossible to understand or control in any meaningful sense — and I’m going to spend a lot of time talking about it.
That said, I want to make it clear that it’s not the sole example of this phenomenon, either on Facebook, or elsewhere. I’ll talk about them later in this newsletter.
When I say the News Feed is “dominated” by digital detritus, I’m not exaggerating. As an experiment, I just opened Facebook on my laptop’s browser and decided to count how many posts I would have to scroll through before I saw something from a person I knew.
Seven. Seven posts. There were a couple of ads, one from the CIA, one from Facebook itself, and another from Rabbi Shmuley Boteach. No, I don’t know why either.
I opened Facebook on my phone and was similarly confronted with random posts — one from a Canadian MP somewhere in Ontario, another page that exists to mock people working in service jobs, and a random post from a group where people post pictures of the in-flight meals they’ve enjoyed.
Given that I am neither Canadian, nor a sociopath that derives pleasure from making fun of poor people, nor someone who particularly enjoys convection-heated lasagna served on a plastic tray, I have no idea why I was shown these posts.
My friend — and, I guess, boss — Ed Zitron has published an excellent historical retelling of how Facebook’s newsfeed transformed into its current state, and it has informed a lot of what you’re going to read next. You should check it out. It lays bare how Facebook doesn’t care about user value, and that there’s no limit to how shitty it’s prepared to make your experience.
If that shittiness includes stripping away any control, or autonomy, or influence on the products you use, that’s perfectly fine. Remember, this is a company where a senior figure said that if Facebook was used to facilitate a terrorist attack, or was responsible for cyberbullying, it would be an acceptable price to pay for its growth.
Seriously, this is what Andrew “Boz” Bosworth, Meta’s Chief Technology Officer, said in a memo leaked to BuzzFeed in 2018. This is the actual quote:
“We connect people. Period. That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in. The work we will likely have to do in China some day. All of it.
So we connect more people. That can be bad if they make it negative. Maybe it costs someone a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools.
And we still connect people.
The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is *de facto* good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.”
Fun fact: Andrew Bosworth is now a Lieutenant Colonel in the US Army, along with senior executives from OpenAI and Palantir, as part of Detachment 201 — which purportedly is about “[fusing] cutting-edge tech expertise with military innovation.” How incredibly reassuring!
Facebook created the News Feed in 2006, back when the company was still in its growth stage, and around the time when it opened membership to everyone — not just those enrolled at university. At first, it was a bit like a panopticon showing the activity of your friends. Every interaction, every post and comment, would be recorded and broadcast for all to see.
Naturally, this feature was incredibly unpopular, and Facebook was forced to backtrack. Zuckerberg even apologized — something that he’s loath to do — saying the company “really messed this one up.” Eventually, it morphed into a kind-of miniature Twitter, showing the kinds of personal updates we typically associate with the news feed.
For the first six years, Facebook didn’t really do much with the News Feed, besides one brief experiment with monetization between 2006 and 2008. That changed in 2012 when the company started inserting adverts between user posts. Although nobody really enjoys ads, this was, at the very least, understandable. Facebook is a business and it needs to make money.
But what came next went far beyond monetization, and instead saw Facebook engage in a process of experimentation and exploitation that continues to this day, and has culminated with its users being completely disconnected from their friends and family, and with no means to take control over what they see.
There is a villain in this story, and his name is Adam Mosseri.
When Adam Mosseri — who now runs Instagram, or should I say, is running Instagram into the ground — was elevated to the top job in Facebook’s News Feed division in 2012, he started looking at ways to augment the feed with content from other parts of the site in an attempt to juice the amount of time people spent on the platform. Zuckerberg said that this would turn Facebook into “the best personalized newspaper in the world.”
Spoiler alert: it did not.
There are two things that I think are important to note here. First, this algorithmic newsfeed has always been, to an extent, really unpopular. We know this because of the many times in which Facebook has been forced to announce, publicly, that it’s reprioritizing human-made content and human interactions.
In 2015, Facebook announced changes to the News Feed that, as Josh Constine of TechCrunch put it, “reprioritizes your real friends above pages.” It said the same thing again in 2016. In 2018, Facebook said the News Feed would emphasize “meaningful social interactions.” In 2019, Facebook tweaked the algorithm again to, as Constine put it, “promote worthwhile [and] close friend content.” In 2022, Facebook launched the Feeds tab — which showed posts from friends in chronological order — and then buried it so deep that nobody actually used it.
If the algorithmic News Feed was popular, or even good, or even only mildly offensive, Facebook wouldn’t have to constantly provide near-annual reassurances that it’s putting human interactions first. If you had any control — if your opinion carried any weight whatsoever — then Facebook would have probably reverted to the original algorithm, which didn’t hide posts from friends, and didn’t bombard you with content you don’t care about.
The second point that’s worth noting: Facebook has, by its own admission, used the News Feed to influence what people see, what opinions they’re exposed to, and who they interact with. It has long ceased to facilitate those human interactions that Andrew Bosworth waxed so lyrical about in his memo, where he said that (I’m paraphrasing) terrorism is cool if it means that more people can “poke” their friends.
In 2012, when the company had started its algorithmic curation in earnest, we learned that posts and updates were viewed, on average, by just 12 percent of a person’s friends. In 2015, the company published an academic paper where — in dry, scientific terms — it described how it wouldn’t show certain news stories to people based on their political leanings, even if those posts were shared by the person’s friends. One year prior, the company revealed that it had been changing the content of the News Feed as part of an experiment to see how emotions influence on-site activity.
That last experiment was roundly condemned as unethical, in no small part because the participants — people who used the site — did not consent to having their emotions fucked with, and because they had no knowledge of the experiment, they could not opt out.
You. Have. No. Control. Seriously, Facebook can literally perform unethical psychological experiments on hundreds of thousands of people, and there’s nothing you — or anyone else — can do about it.
And the most maddening part is that when you look at Facebook’s other features beyond the News Feed, you realize how powerless you actually are.
Let’s talk about notifications. You’d expect these would work… well, the same way that notifications work on any other application — by sending you a small alert when something happens, like when you get a message from someone, or a page you follow just posted a new update.
Nope! Facebook uses these as an algorithmic growth-hacking tool to increase user engagement, even if said engagement isn’t useful. You see this a lot with pages you follow, with Facebook issuing notifications for posts that are often several days — and sometimes even weeks — old.
I follow a page called 10 Ways that posts online shopping deals. These are, by their very nature, time-limited. A store may sell out of a certain item, or, on retailers that use dynamic pricing (like Amazon), the price may go up as people start buying it in large numbers. To get the best bargains, you have to be fast.
Facebook’s algorithm, in its infinite wisdom, thinks it’s useful to send me links to posts that are several days old — and where the deals have since expired.
While you can tell the algorithm that you’d like to see more posts from a given page, this isn’t treated as a clear instruction to notify you immediately when a new post goes live. Rather, the algorithm takes it under advisement, and while it might show you more posts from said page, and faster, the extent to which this happens isn’t under your control.
Similarly, on the rare occasion that Facebook actually recommends a post in your News Feed that you’re interested in, it still won’t let you see what people are saying about it. It’ll curate the comments based upon its own perception of “relevance.” If you click into a thread, it’ll tell you that some comments are hidden, and to see everything you have to tick another box.
And when you do so, it immediately unfurls every thread, changes the position in which each comment appears, and you lose your place — and, likely, your train of thought.
Ever tried to search for something on Facebook? It’s a nightmare. You can’t tell the algorithm that certain words should be grouped together by placing them within quotation marks, as you can with every other search tool throughout history. It just doesn’t work.
Instead, the search algorithm will return posts and videos that include these words, but arranged in a completely random order, and often the words themselves have no relation to each other.
You can’t tell the algorithm that a word should appear verbatim in results, again by using quotation marks. Instead, it’ll return content that features the word, but in a different conjugation or tense, or it’ll return posts with a synonym for the word in your query.
I’m not joking. From time to time, Facebook has been known to substitute the word “stream” with “creek.” This company, I hasten to add, offers a livestreaming service. Two, in fact. Facebook Live and Instagram Live.
It even does that with people’s names. If someone has a somewhat unusual name that’s kind-of like a traditional name, but with a “my parents were alternative” vibe, god help you. Facebook will automatically “correct” it to the traditional spelling, listing people with that name ahead of the person you’re actually searching for.
Oh, and Facebook has a nasty habit — as does pretty much every search product that exists right now — of treating certain words in a query string as optional. If you have more than one word in your search query, it’ll show you results that match some of those terms, with those partial matches often ranked higher than posts and videos that include every term in the query, arranged in the same order in which it was written.
While the latter point seems like a highly-specific complaint, it goes back to the point that these applications have now been engineered to eliminate any semblance of user control. You have no way of demonstrating intent anywhere — from search, to the News Feed, to notifications, to the comments section under a post. The algorithm handles that on your behalf, because you’re clearly too stupid to do it yourself.
Facebook — and, to be fair, a lot of other platforms — thinks it knows what you want better than you do. It thinks it knows the names of your friends better than you do. It’s patronizing, and offensive, and there’s literally no way to make it stop.
A Depressing New Normal
I’ve dedicated a lot of time in this newsletter to focusing on one company — Facebook (I refuse to call it Meta, just like I refuse to call Twitter ‘X,’ and refuse to spell Internet without a capital ‘I.’ Old habits die hard). It’s important to realize that Facebook isn’t an outlier here, but rather one company engaging in practices that are depressingly endemic within the technology industry.
Remember how I said that I wrote multiple (and since abandoned) posts for this newsletter before I actually launched it? One was about this very topic, and it was inspired by an incident where I couldn’t find a video on YouTube, despite knowing the actual title word-for-word.
Although I never actually published that post, I still remember the incident with perfect clarity — not least because it still (still!) fills me with rage.
The video in question had enjoyed a spell of viral notoriety, with 1.6 million views. It occasionally crops up on the frontpage of Reddit, and I’ve watched it several times. Oh, and it was still live on the site. It didn’t infringe upon any of YouTube’s content standards, nor did it violate any copyrights.
This should have been so straightforward. Instead, the first videos that YouTube presented in my results were of other creators reacting to the original video, delivering ten-minute-long monologues about a 23-second clip — presumably because longer videos are easier to monetize, or are otherwise favored by YouTube’s recommendations and search algorithms.
I scrolled down a bit and was confronted with YouTube’s People Also Watched category. The videos included a montage of clips from the TV series Parks and Recreation, a video from The Onion, and a seven-year-old news story from The Guardian that, incidentally, had nothing to do with my actual query.
I scrolled down further. YouTube showed me a list of videos I’d watched previously, and even more ten-minute-long videos reacting to the original 23-second-long clip.
Do you know how I eventually found the video I wanted to see? I removed a word from the title. That’s right. I had to search for something other than the video’s name.
You could make a case that this is just an example of YouTube’s search algorithm being bad, but I’m not sure that’s the case. Or, at least, I think there’s something motivating YouTube to create and maintain such a bad algorithm.
YouTube could show me the video I care about. I’d watch it — maybe see a single ad — and then move on with my life. Or, it could distract me with derivative videos that are longer, and might include ads both at the start of the video, and in the middle. It could try to recommend me content so that I stay on the site longer, endlessly scrolling and clicking, and watching more ads as a result.
In which scenario do you think YouTube makes more money?
This comes back to control. YouTube isn’t content with just controlling what you see. Its ambitions are much higher. It wants to control your time, and your attention span. It wants to manipulate you into watching, and consuming, and viewing ads — even if that activity doesn’t actually help you.
Let’s talk about Instagram. Hey, who runs Instagram these days? Oh, it’s Adam Mosseri, the same guy who destroyed Facebook!
Instagram similarly insists on selecting which posts from your friends you’re worthy of seeing, and has the nasty habit of inserting recommended posts into your feed based on what it deems “relevant.”
The enshittification of Instagram isn’t new, but even as the platform was deep in the throes of its terminal decline, it still had some value for users. If you were a new business trying to raise awareness of your products, or a creative trying to build an audience, Instagram provided the means to reach people you’d otherwise never connect with.
The main way people did this was through hashtags. There are hashtags for every conceivable niche, subculture, and interest group. By clicking on one of these, you could find other people like yourself. Instagram would show you an algorithmically-curated selection of posts, and by clicking on the “Recents” tab, it’d show you a list of posts that contained the hashtag, arranged in chronological order.
It was a great system. Was.
Around 2023, Instagram removed the “Recents” tab, replacing the complete and chronological list with an algorithmically-curated and incomplete selection of posts uploaded in the previous few weeks. And then it removed that, leaving you with just a random selection of posts that are not linked to any period of time. And so, if you search for a hashtag now, the first results may be from yesterday, or they may be from 2015.
According to Mosseri, this change was because hashtags were “a vector for problematic content.” That’s funny, because as someone who has used Instagram for well over a decade, and has covered social media in a professional capacity for a variety of publications, I’ve never heard anyone complain about that.
Ever.
Conversely, it’s not hard to find people complaining about the current hashtag system, and how it’s made it harder for them to find people with similar interests, or to market their business, or to run events and promotions, or to build an audience for their content.
These complaints have fallen on deaf ears. And I’d argue that it’s because, as a tool, hashtags were too powerful. They empowered people to promote their work, or their business, without any intermediaries.
By crippling hashtags, Instagram has effectively forced people to work directly with the company in order to do the thing they once did for free with hashtags, primarily by paying for ads. Stripping users of any agency is, I’d speculate, part of Instagram’s strategy to grow its ad business.
Instagram has made other changes that similarly serve to eliminate any semblance of discoverability. Search is undeniably broken. It’s not uncommon to be told that there are no results for a given query when, in reality, there are thousands, or tens of thousands, or even millions. You can’t search for a hashtag in combination with another word.
As part of this push to eliminate any sense of organic discoverability on Instagram, the company has reworked how hashtags work on a technical, mechanical level.
Previously, searching for a hashtag would only retrieve posts that included it. Now, it’ll include posts that include terms that vaguely resemble the hashtag. If you have a hashtag that includes two words, for example, it’ll return posts that include one or both of those words. This happens even if those words are found in different places within the caption.
Most insultingly, Instagram will highlight posts that include synonyms for the words in the hashtag. Not even the same word!
Zuckerberg claimed in his FTC testimony that only ten percent of the posts that people see on Instagram come from their friends. Again, I’ll ask the same question I asked at the start of this newsletter: Do you think that’s because Instagram sucks now, Mark?
I should wrap this up, in part because if I wrote a company-by-company indictment, I’d end up repeating myself. A lot of the sins I’ve described aren’t unique to a single tech giant, but found in multiple places, in multiple apps, because we’re currently trapped in an anti-user race to the bottom.
TikTok’s whole schtick is that an algorithm chooses what you see — and people don’t object to that because the algorithm is actually reasonably good at guessing what people want to see, and it’s not replacing something that worked perfectly fine previously. And yet, it makes it deliberately hard to find content through the search tool, treating the terms in queries as optional, using synonyms for terms, and showing potentially related posts in search results, even when those posts don’t contain any of the search terms.
Google is… well. It’s Google. I’ll hold fire here, in part because I’ve got an entire post about this dogshit company — and its dogshit search product — in the works. I will, however, encourage you to read Ed Zitron’s impeccable investigation into the cause of its decline — The Man Who Killed Google Search.
Taking Back Control
Social media provides a useful tool to frame this problem, as it’s something that most people use, and it’s something that most people are unhappy with. And yet, I think it’s important to stress that the trend I identified in this post — of tech companies stripping users and customers of their autonomy — is present across the entire tech industry.
It’s when Apple and other tech companies took away your ability to upgrade or repair your computer by soldering the RAM and storage to the motherboard.
It’s when a consumer tech company stops you from fixing your own products by linking the components to the device through software, or by refusing to provide you access with the parts and tools you need.
It’s when you get rejected for a job because the company uses an AI tool to filter out your application.
It’s when Apple stopped you from using software you didn’t download directly from the iPhone’s App Store, or a browser of your own choice, unless said browser is basically a re-skinned version of Safari.
It’s when you can’t use your PlayStation 4 because you need an official controller to initialize it, which Sony doesn’t make any more, and so you have to spend the original retail price buying one that’s second-hand.
It’s when you can’t use fast charging on your electric car because you went to a mechanic that wasn’t approved by the vendor, or because your car was written off and subsequently repaired.
It’s when you’re trying to speak to customer service, but first you have to wrestle with an AI chatbot before reaching a human.
It’s when a company stops you from installing alternative operating systems on the devices you already own.
It’s when a company strips away a feature you once used, making something that you own less functional and capable, and you can’t do anything about it.
It’s when you buy an IoT device and the company decides to shut down its servers, turning it into a paperweight, while also preventing you from modifying or otherwise hacking it.
It’s when a company announces that it’s using all your data to train AI models, and there’s nothing you can do about it because you agreed to it somewhere in a 300-page Terms of Service document.
It’s when Microsoft announced it was hiking the cost of your Office 365 subscription to pay for a bunch of AI features that you didn’t ask for, don’t want, and perhaps don’t even trust.
It’s when you bought an application and the company discontinued the activation servers, so now you can’t use it, or can’t install it on a new computer. And it won’t remove the DRM either, even though it doesn’t sell that product any more, so you’re forced to pay for the latest-and-greatest version.
It’s when Ubisoft shut down the servers for The Crew, meaning that people couldn’t even play the game in single-player mode, even if they’d paid for the game and owned a physical copy of the disc.
It’s when you’re using Facebook on your phone and you click a link, and it opens in the in-app browser rather than the default browser, all because Mark Zuckerberg wants to track your activity.
It’s when Microsoft sets arbitrary hardware requirements for Windows 11, so you’re forced to buy a new computer in order to use the latest operating system.
It’s when your printer only works with officially-licensed cartridges, which are sold at a massive markup compared to third-party cartridges.
The tech industry acts like Cartman from South Park, yelling “Whatever! Whatever! I’ll do what I want!” It doesn’t care about you, or your happiness, or whether you’re more productive, or whether you’re more connected to your friends and family, or whether you’re getting value-for-money from the tech you use.
It. Does. Not. Care.
The tech industry doesn’t care about value creation, or innovation, or its customers — it cares about value extraction. These companies behave like bottom-trawling fishing vessels, destroying everything that crosses their path — even if that thing is their own products, or their own customer base.
These companies can only do this if they’re unchallenged — if you, the user, are left completely impotent, stripped of any meaningful voice or control.
The only move we can make is to not play. To vote with our voices, and our wallets, and our attention, and our engagement, and to make our displeasure clear.
That’s easier said than done, and I wouldn’t judge anyone for using Facebook or Instagram, or for owning an iPhone or a MacBook. In fact, I do all of those things. One of the most insidious things that tech companies have done is to create ecosystems that are virtually impossible to divest yourself from.
It’s hard to walk away from your friends, and from the memories you’ve posted over the years, and from iMessage, and all the software you’ve bought on the App Store. You don’t want to deal with the hassle of moving from iCloud, or having to figure out how to use a different office suite, or to migrate your email from Gmail to your own server, or to an ethical company like ProtonMail.
But here’s the thing: even a small change is a change.
Start using Bluesky. It’s awesome. It shows you posts in chronological order, it’s easy to find smart people, and it doesn’t try to manipulate you into doing things that you don’t want to do — like insisting that links open in the in-app browser, where you can be tracked and surveilled.
Bluesky doesn’t have to be your default social network, but by shifting your time away from Facebook or Twitter, you send a message about the kind of things you value, and the experiences you care about.
Sidenote: I talked about creator discoverability earlier. If this is something you care about, you need to be on Bluesky. The bulk of the traffic on my last post (that wasn’t from Substack itself) came from Bluesky, a site where I have around 600 followers. Twitter, meanwhile, where I have just shy of 7,000 followers, gave me four clicks.
Four. I assume that’s because I don’t — and never will — pay for Twitter Blue.
Remember what I was saying about control earlier? Twitter under Elon Musk says that unless you pay a monthly fee, you’re invisible to everyone else. Given the type of people who currently dominate Twitter, I don’t necessarily think that’s a bad thing.
When it’s time to replace your laptop, or your phone, think about what choices respect your freedom. I currently use a first-generation M1 MacBook Pro and it’s reaching the end of its useful life — not least because, with only 8GB of RAM, it struggles with a lot of basic browsing tasks.
Sidenote: This isn’t related to what I’ve talked about, but I want to mention it anyway. Facebook is an absolute RAM-hungry nightmare. Here’s an experiment: Search for something and keep scrolling until your laptop starts to get hot, and your browser becomes almost unusable. It’s not uncommon for a single tab to consume as much as 3GB of memory.
When my finances permit, I’m probably going to get a new laptop. And, for the first time in a long time, I’m considering options beyond Apple. Framework’s machines — which are user-upgradable and user-repairable — are seriously tempting, and I’m very comfortable with Linux (although I believe they also work with Windows).
I might also go for some flavor of Dell or Lenovo, assuming they’re user-upgradable (and actually good).
You can vote with your wallet. By choosing products that respect user freedom and user control, you send a message to the companies that have long tried to dictate what we do with the physical products that we own, all for the purpose of making us spend more, and more often, than we’d choose to ourselves.
These tech companies want us to be confused, and to feel powerless. When we don’t understand how the things we use work, and when we’re confronted with barriers to the things we want to do — and used to do unhindered — they hope we’ll despair and simply accept that this is how things are.
Our best weapon — our only weapon — is to refuse to accept this state of affairs.
If tech wants to be Cartman, that’s fine. We’ll be Cesar Millan.
Tsst.