Breathe right and
you'll change everything

21/02/2026

N.B.: Throughout this essay, I use the "web" as shorthand for our entire digital lives, as opposed to websites exclusively.

The old internet

My grandpa was the first person in my family to get a computer. It was 1993 and I was 5 years old. He was 60 and an art teacher. It ran Windows 3.1.

I spent every weekend with my grandparents and grew up using that computer: slowly at first, but always freely. I was young enough that I wasn't scared of breaking it if I pressed the wrong button. Then, six years later, he got the internet. Suddenly, it was like my whole life was mapped out in front of me all at once. That's the year I decided I wanted to be a designer. It's the year I started to learn HTML. Grandpa enrolled in a web design evening class at the local art college, and when I came over for the weekend he would teach me what he'd learnt that week.

I'd like to tell you about that time, and specifically about the first website I ever visited.

The author aged five, grinning, sat at a table next to another child.

The author aged five.

So, the first website I ever visited? I actually don't remember. And that's the point. Back when the web (and I) was young (and both of us were a little ridiculous), you could jump from a page about medieval siege weapons to someone's fan fiction about Dragon Ball Z to a page about a local metal band covered with GIFs of spinning skulls. And none of it felt tiring. It all felt like discovery.

I want to be careful here, because "the internet used to be better" is the digital equivalent of “they don't make good music anymore” (though they don't). It's the complaint of someone who has mistaken nostalgia for evidence. The old web was a mess: largely inaccessible, predominantly white, and demanding a level of technical fluency that excluded most people. I'm not saying we should go back. We shouldn't.

But what we should do is ask:
What specifically did we lose, and was it necessary to lose it?

•••

What we lost (and how we lost it)

The first thing to acknowledge is that we didn't "lose" the old web. We sold it.

The early web was simple. Simple as in honest: open protocols, permission-less publishing, links that went where you clicked, and a basic respect for the idea that you were sharing this space with other people. CERN put the thing into the world for free, and the invitation was to participate. Not “ask permission to participate,” not “subscribe to participate,” not “agree to 47 pages to participate.” Just participate.

Then the capitalists arrived. They looked at the mess and thought, “I can make money from this”. And that's fine, I guess; everyone wants to get paid. The trouble is that they didn't just build products for people. They built platforms that treated people as the product: our time, our behaviour, our social lives fed in at one end, profit coming out the other. They weren't only selling services; they were monetising participation itself.

This, of course, isn't an original observation. Back in 2010, both Wired and The Economist commented on the beginning of this process. Then Shoshana Zuboff wrote a whole book about it, The Age of Surveillance Capitalism, in 2018. Speaking that year, she reminded us:

"The age of surveillance capitalism is a titanic struggle between capital and each one of us. It is a direct intervention into free will, an assault on human autonomy." Shoshana Zuboff, 2018

Cory Doctorow builds on this, calling the later stages of this process 'enshittification': the lifecycle of platforms that first attract users with genuine value, lock them in, then extract economic value (in all its forms) from them on behalf of advertisers and shareholders until the product is so hollowed out that even the people running it seem a little embarrassed. Examples of enshittified services include TikTok, Facebook, and Twitter (I refuse to call it by its new name). The pattern is so consistent it appears less like a series of decisions and more like a law of physics. Capital-led platforms enshittify. That's what they do.

Wired magazine cover from 2010 which declared that THE WEB IS DEAD.

Wired and The Economist covers from 2010 declaring the internet under threat.

These platforms don't start that way. Often they start optimistically, with naive, wide-eyed makers: people who have spotted a problem and want to fix it.

First, their creators look for a solution which seems to work for people: so-called 'product-market fit'. It's the stage that many of us experienced growing up, exploring the web along the way. This is when we formed our opinions about, and emotional attachment to, the web. We welcomed this time with open arms. It felt like a time of abundance. A place where everyone was equal: where everyone could have a voice.

For me this was the early days of Twitter. Before folks treated it as just another social media channel, Twitter was more like going to the pub and sitting at the table beside two experts having a chat. Previously you'd have had to work at the right company or be part of the right friend group. Joining as one of the first few thousand users, I, a design student in rural England, could be part of candid conversations between the people I looked up to. This is the stage Doctorow describes where products are delivering genuine value.

To get here, many founders turn to venture capital, which offers vast sums of money with often shockingly low barriers to entry. A prototype, a handful of users, or a nice pitch deck can be enough.

This is where the problems begin. Once a company has product-market fit, the investors realise they could see a return on their investment. It's important to note that the investors don't share the motivations of the business. Venture capitalists are essentially gamblers: they place bets by investing in companies, and they don't really care which of those companies succeeds, just that the few which do succeed do so in a big way. Their entire business model depends on it: over a 29-year period, just 6% of deals provided 60% of returns, whilst half of all deals resulted in a loss.

A chart from the VC firm Andreessen Horowitz stating only 6% of deals succeed.

To find those winners, investors put pressure on founders to make decisions which are bad for the business and bad for the people using the product, but good for investors looking to identify which bets will make it into that 6%.

That pressure often drives a familiar loop: chase aggressive growth targets, hire ahead of reality, watch costs surge, then demand more revenue to justify the burn; when genuine growth slows, the easiest lever is to squeeze the people already on the platform through price hikes, ads, fees and paywalls, all amounting to a worse experience for users. The result is a business increasingly optimised for investor timelines rather than customer value, and each round of pressure makes the next round more likely.

You can see why a technology like AI, which promises to eliminate the human costs but retain the profits, would be appealing to capitalists.

•••

Enter AI

These same capitalists are now peddling the idea that AI and its societal consequences are inevitable: that the die has been cast.

But there's a moment, after the die has been cast but before it hits the table. Breathe right, and you'll change the way it lands. Will Bailey, a character on The West Wing (not a real person)

And right now? If we do nothing? The same people who already own our governments, our media and our personal data will own our future. The people with the servers, the distribution, the app stores, and the “Oops We 'Accidentally' Changed the Algorithm and Your Democracy Evaporated” button.

The International Monetary Fund estimates that artificial intelligence will affect roughly 40% of jobs globally, with greater exposure in wealthy economies. The World Economic Forum projects enormous structural churn through 2030: jobs created, yes, but also jobs displaced, skills rendered obsolete, transitions that will be — to use the WEF's characteristically gentle language — 'the hard part'.

Whether AI actually delivers on all of this, I think, is largely irrelevant. Business leaders are laying off people in huge waves on the promise that it might.

Astra Taylor calls this "the automation charade." Her argument isn't that automation is fake. She argues that automation is both real and ideological. It's used as a story to tell workers that their displacement is as inevitable as the weather, when in fact it is a series of decisions made by specific people in specific boardrooms. Many "automated" systems, she points out, are actually work shifted onto customers or hidden from view. And this matters enormously for how we think about AI right now, because the question of who benefits from it is not a technological question. It's a political question.

The capitalist class pushing for this future derive their power from the platforms they own and the influence they hold over the people reliant on those platforms. So what happens if we build alternatives which ensure people, communities and governments aren't reliant on them?

Mark Carney, the Canadian Prime Minister, speaking at the World Economic Forum in Davos.

Mark Carney, the Canadian Prime Minister, speaking at Davos.

Mark Carney, former central banker turned Canadian Prime Minister, stood at Davos a few weeks ago and said the quiet part out loud. He gave a speech in which he described 'a rupture in the world order' and 'the end of a pleasant fiction'. He was referring to the fiction that global interdependence would always be governed by something like neutral rules, by institutions that were at least trying to be fair. That fiction, he said, is over. It's crucial to note that it was always a fiction. We in the West were discouraged from looking at it too closely, whilst the Global South and low- to middle-income countries felt it every day. What's actually over is the charade: the shared suspension of disbelief which allowed it to proliferate. What comes next is a world where what Carney calls the Middle Powers — countries that are not the United States, not China, not in the first tier of anything — will need to build what he describes as 'strategic autonomy.'

In real terms, this means the 'Middle Powers' building an alternative to AWS, an alternative to SWIFT, an alternative to YouTube, Instagram, Shopify, and everything else.

I find it genuinely strange to write the words from Mark Carney's Davos speech in an essay about the soul of the internet. And yet here we are. He is describing on the international stage something the citizens of the internet have been feeling for many years. Countries that have spent thirty years building their infrastructure (cloud computing, payment systems, social media, AI) on American capitalist platforms are feeling the same hollowed-out lack of reliability we've all felt individually for years now.

And that's important, because it's what creates the moment where we can change how the die will land. Where before there was only ideological interest in change from a small group of people, there is now societal and political interest from entire nations.

Cory Doctorow has been calling this moment an opportunity to deshittify the internet. To build a new enshittification-resistant internet, and I agree. But I don't believe there's anything inherently superior about a Middle Powers internet vs an American internet. The alternative platforms Carney describes will still be funded by private capital, and capitalists are capitalists no matter where they're from. If we build on top of them, we're ultimately leaving ourselves open to the same flaws from which the current enshittified internet suffers.

•••

But there's hope

The irony is that the same corporations I've spent several paragraphs dissecting, in their eagerness to eliminate humans from the profit-making equation entirely, have somewhat accidentally given us the tools to liberate ourselves.

I think AI is here to stay. I think it will genuinely change things, probably faster than we're ready for. But I also think the conversation about AI serves certain interests: the interest in paying workers less whilst extracting more labour; the interest in taking those profits and channelling them to shareholders, further deepening inequality; the interest in moving fast and breaking the social contracts that took decades to build. Our democratic systems of governance aim to slow these interests' pursuit of profit. However, if those interests stress these systems to the point of failure, it becomes easier to cast them as obsolete. And that opens the door to alternative forms of government which formally centralise power under the uber-wealthy.

The question for me isn't whether AI will transform society. It already is. The question is whether the transformation will channel power toward the already powerful, or whether we can use it to build something different.

Which leaves us with a choice: what do we build? And I think…

…the first thing we should build is alternatives to all the platforms we rely on every day.

Built on the founding values of the web, the web we all feel nostalgia for, but informed by the lessons of the past 20 years. This means open platforms with federated data storage, decentralised distribution layers and no gatekeepers. Platforms which enable democracy, instead of undermining it.

And I'm not just talking about large platforms. Every weather app, every workout app, every online shop, every game — these principles should apply there too. This should be our default mode of building digital products.

This isn't a new desire. Folks have been trying to build alternatives for decades. However, the biggest barrier to these movements has always been cost. Change of this scale takes time, deep expertise, vast capital and organisation. But generative AI dramatically reduces these costs.

The second biggest challenge is persuading people to actually make the move. By and large the people who have embraced the alternative internet have been driven by a principled view. This in turn has given them the willpower to overcome, or at least endure, janky and often deeply technical setup processes. Again, generative AI allows us to overcome this. Designers are now more empowered than ever before to own experiences and sweat the details.

•••
"The best minds of my generation are thinking about how to make people click ads. That sucks." Jeff Hammerbacher, 2011

I want to talk specifically to designers for a moment.

The reason the current web is the way it is isn't primarily technical. The technology for building something more open, interoperable, and human-scaled has existed for years. The reason we have dark patterns, infinite scroll, and notification systems designed to induce anxiety is that designers (talented, often well-intentioned designers, many of whom are friends of mine) were given metrics that rewarded engagement over dignity, retention over transparency, and growth over honesty. All in the pursuit of profit.

A comic by Design Thinking depicting a man on his deathbed continuing to obsess over social media.

The alternative web, the one I'm gesturing at, will not win because it is technically superior, or because a standards body endorsed it, or because European regulators mandated it. Nor did the capitalist internet win purely through scale. It won through convenience, smooth onboarding, glossy interfaces, social gravity, low friction, and the time-honoured effectiveness of “it just works”. This is where the bar has been set, and therefore our new internet needs to have these same qualities.

The technologies already exist, and the cost of production has been lowered dramatically: this is now, first and foremost, a design problem.

•••

So how do we get there?

With every new model release, and each update to Cursor, I see a wave of designers experimenting with generative AI. It's exciting. There's a nascent sense of that exploration we've missed from the web for years.

I would like to suggest some shared principles to consider whilst exploring.

  1. Build for collective wellbeing

    For too long we've been forced by unhealthy business models to build for shipping faster or growing bigger at the expense of the individual and collective dignity of the people using these products.

    "We knew full well [that] technology can have deep social impacts. We just assumed those impacts would always be positive." Cennydd Bowles, 2020

    Vibe coding makes it easy to move fast and follow the timeworn paths we've made through day-to-day work, but this moment is an opportunity to step back and reassess where we can do better. Some products shouldn't exist, and we should all dedicate more time to considering the longer-term impacts of our product decisions.

  2. Individual data sovereignty

    To me this means keeping data on-device (or on a user's personal server) as much as possible. Yes, you could use that integration with Supabase to vibe code a cloud server. But do you need to? Are there alternative ways to provide the same product vision? If you're building a feature which utilises AI, allow the user to select which model provider is used — or to even use on-device models so their data doesn't get sent anywhere.

    This comes with the added benefit of reducing your own infrastructure costs, which in turn means you have more freedom to make product choices you believe in as opposed to choices which are necessary to keep the lights on.

  3. Sustainability as a feature

    Hollowing out the web for commercial interests has required vast amounts of energy. Each TikTok, every analytics event and every cat GIF lives on physical infrastructure somewhere, and together they consume an enormous amount of power. There are many solutions to this problem: some are physical, some are technical, and some are design-led.

    As someone designing a product, there's a simple formula you can follow to get started.

    Less stuff moved + less work done = less energy used

    If you can keep data on-device, then you should. If you don't need to track something, then you shouldn't. Features like infinite scroll (which encourages people to overconsume), auto-playing videos, and data-heavy dashboards that update in real time all lead to more stuff moved + more work done = more energy used.

    The best sustainability moves usually also make products faster, cheaper to run, more accessible, and less annoying. It's one of those rare times in life where virtue lines up with good engineering and good design.

  4. Team up!

    The autonomy that generative models give us is a heady drug. I see folks posting in wonder about the bright future that lies ahead: a future where they don't need to go ten rounds with an Engineer or a PM just to dial in the padding on a button. We can now build entire products alone and be our own boss.

    But just because you can vibe code something alone, doesn't mean you should vibe code something alone. There's a real danger here that we build a bunch of AI slop code, which at worst can introduce critical security vulnerabilities and at best will undermine reliability and trust.

    Producing high-quality software still requires deep technical expertise that is very rare to find in a single person. You'll need to be great at design, systems architecture and technical quality assurance. And then you need to know them all well enough to steer the agent towards good outcomes.

    So I encourage you to find some teammates, share the load. Making things with other people is one of the true joys in life, and we shouldn't look to lose it now.

A screenshot of a Threads post where a designer rejoices at cutting engineers out of the loop when implementing a UI.

A happy designer.
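Principle 2 above can even be sketched in code. Here's a minimal TypeScript sketch, under assumptions of my own: `ModelProvider`, `OnDeviceProvider` and `CloudProvider` are hypothetical names, not a real library. The idea is simply to model "where does my data go?" as an explicit property, and default to whichever option keeps data local.

```typescript
// Hypothetical sketch: model providers declare whether they send data
// off-device, and the product defaults to the one that doesn't.
interface ModelProvider {
  name: string;
  sendsDataOffDevice: boolean;
  complete(prompt: string): Promise<string>;
}

// Illustrative on-device provider. A real app would call a local model
// runtime here; this just echoes so the sketch stays self-contained.
class OnDeviceProvider implements ModelProvider {
  name = "on-device";
  sendsDataOffDevice = false;
  async complete(prompt: string): Promise<string> {
    return `local reply to: ${prompt}`;
  }
}

// Illustrative cloud provider; the endpoint is a placeholder.
class CloudProvider implements ModelProvider {
  name = "cloud";
  sendsDataOffDevice = true;
  constructor(private endpoint: string) {}
  async complete(prompt: string): Promise<string> {
    return `reply from ${this.endpoint} to: ${prompt}`;
  }
}

// Honour the user's explicit choice; otherwise prefer whichever
// provider keeps data on the device.
function pickProvider(
  available: ModelProvider[],
  userChoice?: string
): ModelProvider {
  const chosen = available.find((p) => p.name === userChoice);
  if (chosen) return chosen;
  return available.find((p) => !p.sendsDataOffDevice) ?? available[0];
}
```

The point of the sketch is the default: the user never has to opt in to privacy, only out of it.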
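The formula in principle 3 (less stuff moved + less work done = less energy used) often comes down to something as unglamorous as caching. A minimal sketch, with hypothetical names of my own, of a time-to-live cache that skips the fetch entirely when a fresh copy already exists:

```typescript
// Hypothetical sketch of a TTL cache: if a value was fetched recently,
// reuse it -- nothing moved, no work done. Only refetch once it expires.
class TtlCache<T> {
  private store = new Map<string, { value: T; expires: number }>();

  constructor(
    private ttlMs: number,
    private now: () => number = Date.now // injectable clock, handy for tests
  ) {}

  get(key: string, fetcher: () => T): T {
    const hit = this.store.get(key);
    if (hit && hit.expires > this.now()) {
      return hit.value; // cache hit: no transfer, no computation
    }
    const value = fetcher(); // cache miss: do the work once
    this.store.set(key, { value, expires: this.now() + this.ttlMs });
    return value;
  }
}
```

A weather app built this way fetches the forecast once an hour instead of on every screen refresh, and the same pattern makes the app faster and cheaper to run, which is the virtue-meets-engineering point above.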

•••

Yes, we're more able to sweat the details now. It's fun! But I urge you to look further. We get to define the ethics and structures of the platforms we design now too, and that's what sweating the details really means.

Henry Desroches recently wrote a great essay called A Website to Destroy All Websites, in which he talks about the importance of embracing “…tools & methodologies like POSSE (Publish On your Own Site, Syndicate Elsewhere), ActivityPub, and ATProto …” to build your own corner of the internet, where you own your data. And I agree that's massively important, but to do so requires time and technical knowledge which many people don't have and never will have.

There are huge numbers of people who won't be able to participate in this way, for any number of reasons, and it's important that we bring them along with us. We can't have a free internet if the majority of folks remain on compromised corporate platforms. We bring them along by building these principles into the platforms themselves, and then making those platforms wonderful to use.

•••

There's a line I keep returning to…

…despite it being a little grandiose.

Building great products and giving them away for free is an act of resistance.

Not because “free” is morally pure. But because common infrastructure reduces switching costs. It makes power harder to centralise. It gives everyone a way out. When the global system is designed to exploit everyone as a core aim, building tools that let people exit (a greatly under-appreciated human right) and that are auditable, forkable, self-hostable, and composable is not naive. It's essential.

For decades, the biggest barrier to building those alternatives wasn't ideology. The barrier was cost. The expertise, the capital, the sheer organisational mass required to build something that could compete. The capitalist platforms won, in large part, because they had rooms full of engineers and billions of dollars and the kind of momentum that gathers around money.

Generative AI changes that equation in ways that I don't think we've fully reckoned with yet. The costs are collapsing and the expertise gap is narrowing. Someone who six months ago couldn't write a single line of backend code can now build and ship a working federated social platform in a weekend. That is a genuinely extraordinary thing.

We are in the brief window (perhaps the only window) where that capability exists but the new power structures haven't yet calcified around it.

The people who currently own your attention, your data, your democracy, and in some cases your government are moving fast. But, for once, we can too.

Use these tools to build the web we were promised. Build it on open protocols. Build it federated, so no single entity can enshittify it into submission. Build it with data sovereignty baked in, not bolted on as an afterthought. Make it wonderful. Make it so easy and so delightful that people don't need principles to use it. Make it so that choosing the open platform isn't an act of sacrifice, but an act of preference.

That idea is worth fighting for. And right now, for a brief, strange, unrepeatable moment, we have the tools to fight for it.

The die has been cast, and we sure as hell better breathe right.