The God That Didn't Show Up
A middle finger to AGI, and why the people selling it to you might not get there.
I'm a fan of AI. Let me get that out of the way. What I'm not a fan of is the hype — this half-baked sci-fi fantasy people have pulled from The Matrix and aimed at a future they insist is already happening. It isn't. The world is changing fast, sometimes absurdly so, but AI isn't replacing us. At least not in the way people think.
I watched the AI doc on Prime last night. The director was clearly working through some pre-parental anxiety layered on top of the general anxiety of the times, and he was interviewing a rotating cast of "experts" who gave him the usual spread — optimism, pessimism, cautious somewhere-in-between. The questions were ethical and philosophical, which are fine questions, but they're the kind you can hand-wave your way through. Nobody asked anything technical. Nobody had to.
This post is my attempt to do the thing the doc didn't, with a middle finger pointed squarely at AGI. I'm not saying AGI is impossible. I'm saying two things: we haven't made a serious case for why we're trying to build it, and even if we do build it, it almost certainly won't be the thing people are picturing.
The elephant
Let's cut through it. The elephant in the room, the one the industry refuses to look at, is sustainability. We have data centers the size of small cities. We're restarting shuttered nuclear plants from the 70s to keep them fed. And the actual workload running on all of that compute? A huge share of it is people asking for help writing an email, asking a homework question, or talking to a chatbot because they're lonely or curious or bored. Anthropic's own Economic Index shows coding dominating Claude usage at roughly 36% of conversations, and after that it's writing help and everyday Q&A. OpenAI's published usage patterns look similar. HBR published a ranking of top generative AI use cases, and "therapy and companionship" came in at number one.
None of this requires AGI. None of it requires anywhere close to AGI. Most of it doesn't even require frontier-scale models. We are building a nuclear-powered industrial apparatus to run GPT-4-tier chat traffic, and the pitch is that if we just keep scaling, something qualitatively different happens at the end.
Forget the existential stuff for a second. Forget job replacement — that's happening with or without AGI, and would have happened with or without LLMs. The question I keep coming back to is simpler and more boring: is this practical? Are we actually building the thing we say we're building, or are we burning the planet down to ship slightly better autocomplete?
Who actually benefits
Here's what actually bothers me. The mega-corps leading the charge are selling humanity on the idea that AGI will be good for us. And maybe it will be. It's already accelerating research in real, measurable ways — protein folding, materials science, drug discovery. There will be genuine downstream benefits. I'm not going to pretend otherwise.
But there will be an equal weight of negative consequences, and the people making the decisions about how much power to burn and how many ecosystems to flatten aren't weighing those consequences. They can't. The incentive structure doesn't let them. The easy read is that the driving factor in this race is investors and billionaires getting extremely fucking rich. That's partially true. The money is real. But it's downstream of something harder to talk about, which I'll get to.
When a hyperscaler plants a data center in a rural county, they don't spend five minutes thinking about the watershed, the local culture, or what happens to the community when their power bills double. The framing is always "jobs." We're bringing jobs to the region. Which is the grand irony of the whole thing — the same companies telling a town they're bringing jobs are simultaneously telling the rest of us that AGI will mean we don't have to work anymore. Pick one.
We aren't worker bees
The deeper problem is the assumption baked into that pitch. The idea that work is a burden we're trying to escape, and that liberating humans from labor is obviously good. It treats us like worker bees — units of productivity whose highest destiny is to be replaced by something more efficient.
But we're not worker bees. We're social animals. Work is one of the main ways we're social. It's one of the main ways we build meaning. The struggle is most of the point. A life without anything to push against isn't a utopia — it's a nursing home.
And AI, as it's currently being deployed, isn't just removing the labor. It's removing the social fabric around the labor. The coworker you vent to. The customer you built a rapport with. The craft you got good at because you had to. We're automating those away and calling it progress.
The existential bit
The documentary kept circling the idea of AGI as an existential threat to humanity. Maybe it is. But let's be honest about our baseline. We are all born with a terminal life sentence. Every living thing on this rock eventually dies. We are on a spherical rock moving tens of thousands of miles an hour through a vacuum, orbiting a giant fireball, in a galaxy that will eventually collide with another one, in a universe that is, as far as we can tell, indifferent to all of it.
Existential threat is the default condition. It's not something AGI is going to introduce. It's something humans have spent the last few thousand years building religions, art, families, and entire civilizations to cope with.
So when a billionaire in a Patagonia vest tells me AGI is the existential question of our time, what I hear is a guy who has confused his portfolio with the human condition.
The part that's harder to say
Here's the irony I can't get past.
I spent a whole section arguing that work gives humans meaning and that the AGI pitch treats us like we'd be better off without it. The uncomfortable flip side is that the people building AGI are the ones who most need the work to mean something. They aren't cynics cashing checks. A lot of them are true believers. And their belief is load-bearing.
If AGI doesn't show up, their life's project was a very expensive chatbot. A decade of money, ecosystems, political capital, and personal identity spent chasing a tool instead of a god. That's not a thing you can admit halfway through. So the rhetoric escalates instead of moderating as the evidence gets messier. Every plateau becomes a pause before the next leap. Every missed timeline becomes a sign we're closer than we thought. The goalposts move because they have to move. Stopping means being wrong about the thing you built your life around.
Which closes the loop on the whole argument. The people telling you that AGI is the meaning-giving event of human history are doing exactly what they say the rest of us will no longer need to do. They're finding purpose through struggle and work. Their work just happens to be a planet-scale resource extraction project aimed at a goal nobody can define.
Cynics can be negotiated with. True believers can't. That's the part that worries me more than the technology.
They might not get there
Here's the part nobody selling you AGI wants to say out loud. They might not get there.
The current approach is transformer architectures scaled up on more data with more compute. It has worked remarkably well. It has also started to show its seams. The returns on scale are diminishing. The training data is running out: the labs have essentially ingested the public internet and are now resorting to synthetic data, which carries its own well-documented risks, model collapse chief among them. The jumps between model generations are getting smaller and more expensive. GPT-4 to GPT-5 was not the leap GPT-3 to GPT-4 was, and it cost a hell of a lot more to get there.
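To put a number on "diminishing," here's a toy sketch using the power-law loss fit published in the Chinchilla paper (Hoffmann et al., 2022). The fitted constants are theirs; the model and data sizes are round numbers I picked for illustration, not anyone's actual training runs.

```python
# Toy illustration of diminishing returns under a power-law scaling fit.
# Constants are the published Chinchilla estimates (Hoffmann et al., 2022):
#   L(N, D) = E + A / N^alpha + B / D^beta
# The (params, tokens) pairs below are round illustrative numbers,
# not any lab's real configurations.

E, A, B = 1.69, 406.4, 410.7   # irreducible loss and fitted coefficients
ALPHA, BETA = 0.34, 0.28       # fitted exponents for parameters and tokens

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss for a model with N params trained on D tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

prev = None
for n, d in [(1e9, 2e10), (1e10, 2e11), (1e11, 2e12), (1e12, 2e13)]:
    l = predicted_loss(n, d)
    gain = f"  (gain over previous 10x: {prev - l:.2f})" if prev else ""
    print(f"{n:.0e} params / {d:.0e} tokens -> loss {l:.2f}{gain}")
    prev = l
```

Under these constants, each 10x jump in both parameters and data roughly halves the absolute improvement (about 0.45, then 0.22, then 0.11) while the compute bill for that jump goes up roughly a hundredfold. That asymmetry is the whole plateau argument in a few lines of arithmetic.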
And that's just the model side. Even if you assume a frontier lab cracks something genuinely new tomorrow, you still have the embodied intelligence problem. An AGI locked in a server rack isn't an AGI in any meaningful civilizational sense — it's a very smart chatbot. To make it matter in the physical world, you need robotics, sensors, actuators, and real-world feedback loops that are nowhere close to where the language models are. That's not a compute problem. That's a decades-long hardware and materials problem, and throwing more nuclear plants at it doesn't move it faster.
The real question isn't whether AGI is possible. It's whether it arrives before the physical constraints tighten. Water, power, grid capacity, political will in the counties being asked to host all of this. Those are real limits with real calendars. The believers are betting that capability compounds faster than the constraints close in. That's a specific, testable bet. Nobody knows who wins, but the clock is running on both sides.
The quiet end
So here's my actual prediction. Somebody claims AGI within a few years. The claim gets contested. The definition shifts. The race continues under new metrics. Meanwhile the models get very good, then they plateau. The capex bubble pops. A handful of companies that bet everything on transcendence have to explain to their shareholders — and more painfully to themselves — why they lit a decade of capital on fire chasing a god that didn't show up. And the rest of us are left with what's actually in our hands — really good tools.
Tools people use to write code, make music, learn things, build businesses, talk to each other about ideas. The same things humans have always used tools for. The worker bees won't be freed from labor, because we were never worker bees. We'll just have better instruments. And the meaning, the struggle, the social fabric, the reason any of this is worth doing in the first place — that stays with us. It was never theirs to take.
That's my hope. Not that AGI saves us. That it doesn't arrive, and we get to keep being human anyway.