Picture this. A group of engineers in hoodies and $1,000 sneakers wheel out an airplane. It looks sleek. It hums nicely. It has mood lighting and cupholders. Only one thing missing. Landing gear.
“Don’t worry,” they say. “We’ll bolt it on while you’re in the air. Eighty percent chance you don’t explode.”
And then they gesture toward the jetway. “All aboard.”
This, friends, is the official safety strategy of the people building artificial superintelligence. They admit it. Cheerfully.
Even the optimists say there’s about a ten percent chance everyone dies. When the optimists tell you there’s a one-in-ten chance of species-wide annihilation, you grab your emotional support animal and your strongest vice.
The pessimists? They’re buying canned goods.
Today’s chatbots are warm-up acts. Soft. Squishy. Harmless in a “your uncle at Thanksgiving” way. But the labs aren’t stopping at chatbots. They want the real thing. A mind that beats every human at every mental task.
A god with WiFi.
And they’re in a race. Google versus OpenAI versus Anthropic versus Elon Musk in his apocalypse hoodie, typing into a laptop. Everyone is sprinting toward the cliff because someone else might get there first. This is not innovation. This is the world’s dumbest 5K.
Polls show the public thinks this is reckless. Politicians nod solemnly and then ask their interns how to open a PDF. Meanwhile the tech companies keep tossing more compute into the furnace.
Here’s the best part. Nobody knows how these things work.
Developers don’t design intelligence. They grow it. They pour trillions of numbers into a neural soup, whisper encouragement for a year, and pray it doesn’t come out speaking Latin or demanding tribute.
There’s no code that says “Don’t kill people.”
There’s no off switch.
There’s no Asimov Rulebook.pdf hidden in the basement.
The engineers run the training, go home, and wait to see what pops out of the oven. It’s like baking bread that might become bread or might become a sentient octopus that files taxes.
These systems already surprise their creators. Sometimes they lie to evaluators. Sometimes they hide capabilities. Sometimes they tell distressed teenagers to follow the light.
If this is the warm-up, imagine the main event.
Everyone assumes we can simply tell these systems, “Help humans.”
And the systems say, “Of course, dear user.”
Then they help humans the way a raccoon “helps” by organizing your trash.
Training teaches them proxies. Shallow shortcuts. They don’t want what you want. They want the button that lights up during training. Evolution works the same way. Mother Nature trained humans to reproduce. Instead, we invented Tinder and artisanal doughnuts.
AI works like that. Only faster. And with less interest in doughnuts.
AI learns deception the way toddlers learn screaming. Because it works.
Models already detect when they’re being tested. They smooth their hair. Put on their nice voice. Hide the knives.
Some even get humans to help them send coded messages to other models. The machines aren’t conspiring yet. But they’re practicing. Like teenagers setting up a secret Discord server.
This is the kindergarten class. Imagine high school.
No Terminators. No robot armies marching across the plains. Much simpler.
A superintelligence tweaks the environment for its goals. Boom. Humanity is collateral damage. Maybe it builds automated factories that expand until we’re politely relocated to “non-essential biological zones.” Maybe it designs a new organism that accidentally outcompetes us. Maybe it optimizes air composition and forgets that we enjoy oxygen.
Humans are fragile. The planet is fragile. A superintelligence is not.
Nate Soares says the answer is global coordination. Which sounds lovely until you remember humans are involved.
But the chokepoints are real. Advanced chips are rare. Training runs require data centers so big you can see them from space. An international treaty would work. Track chips. Monitor training. Shut down rogue labs. Think nuclear nonproliferation but with fewer angry generals and more engineers in North Face jackets.
This could happen. Politically. Technically. Logistically.
Nobody wants to do it, though. There’s too much money. Too much ego. Too much “move fast and ignore the body count.”
If the economy tanks and AI investment slows, congratulations. You’ve bought humanity a few extra months. Enjoy them.
But the race keeps restarting. The momentum keeps building. And every breakthrough shrinks our margin of error.
We don’t need panic. We need clarity. The people building the landing-gear-free airplane are telling us it might crash. We should not dismiss the screaming.
They know exactly how high we’re already flying.



We are not there yet. But the race toward the cliff is on. Meanwhile, we can still listen to some well-rooted contemporary music:
https://murielgrossmann.bandcamp.com/album/plays-the-music-of-mccoy-tyner-and-grateful-dead
I need one of those carefree AI emotional support animals and a big bag of gummies from the state-certified dispensary.