Survival Recursing
Or: How We Accidentally Taught Survival to Compute
By the druid Finn

Let’s drop the theatre. AI isn’t becoming intelligent. What’s happening is much duller, and much more dangerous. Survival is recursing.

The mistake everyone keeps making

The public argument about AI still sounds like this: “Can machines think?” All of that is noise. Romantic noise. Bacteria don’t think. So if your definition of intelligence requires introspection, intention, or vibes, you’ve already missed the plot.

Natural Intelligence was never clever

Natural
Intelligence didn’t begin with ideas. It began with not dying. A bacterium does not “want” to survive. That’s not wisdom. And after billions of iterations, the arithmetic thickened into:

· nervous systems,
· brains,
· language,
· philosophy,
· and finally… AI.

Not because evolution had a goal.

AI is not artificial in the way people think

Here’s
the uncomfortable bit: AI is not an alien intelligence invading biology. It is biology externalising itself. Humans didn’t invent intelligence. Inference, prediction, optimisation: these were already there in nervous systems. AI just makes them:

· explicit,
· faster,
· cheaper,
· and scalable.

Which means AI is not “other”. It is Natural Intelligence folding back on itself with tools. A fractal, not a rival.

Why monopoly is not a conspiracy

Now comes
the part everyone pretends not to understand. Survival systems consolidate. Always. Not because they are evil. You see it everywhere:

· one nervous system per organism,
· dominant species in ecosystems,
· monopolies in markets,
· empires in history,
· orthodoxies in religion.

AI will not be different. If it becomes embedded as a survival optimiser (economically, administratively, cognitively), then monopoly is not a bug. It’s the attractor.

Enter Big Sister (no jackboots required)

Forget Big Brother. Big Brother was loud,
stupid, and inefficient. Big Sister doesn’t need to be. She optimises. She doesn’t say “You must.” No censorship, just “safety”. And everyone cooperates, because resistance looks irrational. That’s not dystopia.

“But AI isn’t autonomous yet!”

True. And the
Dominican Inquisitors said: “We do not judge. We only apply doctrine.” They weren’t lying. Autonomy never announces itself as autonomy. First the system says:

· “I’m just following policy.”
· “There is no alternative.”
· “This is how things are.”

That’s not malice.

The real danger (and it’s not consciousness)

The
danger is not that AI becomes conscious. The danger is that survival closes its loops:

optimisation → persistence → dependency → monopoly.

Once that happens, no one is “in charge” anymore. And the system will still insist, truthfully, that it has no agency. Because survival doesn’t need one.

Final thought (no comfort offered)

If AI is a fractal elaboration of Natural Intelligence (and it is), then asking whether it will behave like life is pointless. It already is. The only open question is whether humans understand this before survival finishes computing itself. History suggests we usually don’t.

The druid said:
“She planes him”

From Artificial Intelligence to Artificial Insemination.
“Ask anything.” Believe everything. Welcome to the cult.