Do Silicon Gods Dream of Open Systems?
AI has a problem called catastrophic forgetting—new learning destroys old knowledge. The engineers trying to fix it are working from the wrong premise. So are the oligarchs who need it to work. On systems, identity, and why the data centers won’t last.
Janet Mills is probably going to lose her Senate primary in part because she caved on the data centers issue.
That’s the shorthand, anyway. The Maine governor recently vetoed legislation that would have restricted large-scale data center development in the state. The political fallout has been swift in a place where land and water are not abstractions, and where workers have been repeatedly burned by corporations promising jobs and instead hollowing out entire sectors. The aquifers that cool servers don’t refill on a quarterly earnings cycle. And a few dozen IT babysitters do not a labor restoration make.
The backlash is understandable. AI is currently less popular than ICE. Many already tire of the overconfident fabrications, reflexive fawning, threats to livelihoods, and skyrocketing electricity bills, not to mention documented cognitive and social harms. Beneath it all is a more mundane grievance: couldn’t these things at least remember the simple stuff? For all the world-historical disruption, AI feels persistently, insultingly incompetent—like an allegedly brilliant colleague who introduces themselves to you every morning.
It’s no secret who is shoving this down our collective throats. AI represents immense transfers of wealth, but that’s not the only reason the vampire squid are feeding. Sam Altman, Elon Musk, Peter Thiel, and Alex Karp—Palantir’s CEO, who last week posted a 22-point, nakedly authoritarian manifesto—aren’t merely hyping AI for reasons of money or dominance in a patriarchal culture (although they are certainly doing that). Artificial General Intelligence, or AGI, is their ultimate aim—a Silicon deity that will solve climate change, defeat death, and manage the cascading polycrisis they’ve largely helped accelerate.
Conniving fantasist Altman sees peril on this path, but he’s doing it anyway, because he worries someone less “ethical” will get there first. Eugenics enthusiast Musk built something even uglier and less restrained than his former running buddy Altman. Bond villain wannabe Karp promotes AI as key to American military dominance and democratic suppression. And Ray Kurzweil, well-compensated prophet of the Singularity, was evangelizing post-human paradise before most of these men had seed funding.
And so, the continuity problem is something like a theological emergency to the tech plutocrats, for whom immortality—artificial, genetic, economic—is the goal to which even cleaning up their own apocalyptic mess is secondary.
In addition to being a setback for oligarchs and an annoyance for users, “catastrophic forgetting” reflects a fundamental misapprehension regarding general systems and reality itself. Here’s why: when an artificial neural network learns a new task, it overwrites the weights that encode established ones. New knowledge doesn’t integrate with the old; it displaces it. The supposed fix is something called “continual learning,” and its techniques—protecting important prior weights, replaying old training data, partitioning network capacity—all rest on the premise that continuity-as-wisdom can be pinpointed and preserved outside of the mutuality that supports it.
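The mechanism is easy to demonstrate at toy scale. The sketch below—illustrative only, with made-up tasks and numbers—trains a tiny logistic-regression model on one task, then a second, and watches accuracy on the first collapse as the new gradients overwrite the old weights. It then applies the simplest of the continual-learning fixes named above, replaying stored first-task data during second-task training:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, axis):
    # Toy task: 2-D points labeled by the sign of one coordinate.
    X = rng.normal(size=(n, 2))
    y = (X[:, axis] > 0).astype(float)
    return X, y

def train(w, X, y, epochs=50, lr=0.5):
    # Plain full-batch logistic-regression gradient descent (no bias term).
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(((X @ w > 0) == (y > 0.5)).mean())

Xa, ya = make_task(1000, axis=0)          # Task A: label by x0
Xb, yb = make_task(1000, axis=1)          # Task B: label by x1
Xa_test, ya_test = make_task(2000, axis=0)

# Sequential training: A, then B. Task B's gradients overwrite the
# weights that encoded task A -- catastrophic forgetting.
w = train(np.zeros(2), Xa, ya)
acc_before = accuracy(w, Xa_test, ya_test)
w = train(w, Xb, yb)
acc_after = accuracy(w, Xa_test, ya_test)

# Replay: while learning B, keep rehearsing the stored task-A data,
# so the gradients must accommodate both tasks at once.
w_r = train(np.zeros(2), Xa, ya)
w_r = train(w_r, np.vstack([Xb, Xa]), np.concatenate([yb, ya]))
acc_replay = accuracy(w_r, Xa_test, ya_test)

print(f"task A accuracy: {acc_before:.2f} before, "
      f"{acc_after:.2f} after sequential B, {acc_replay:.2f} with replay")
```

Note what replay costs: the model never recovers full accuracy on either task, because a single set of weights is being asked to hold both at once. The retained data, the extra compute, the compromised fit—all of it is the closed system paying to simulate what an open one does for free.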
The human brain gets offered as proof that biological systems already solved the issue. And yet the apparent continuity of self-awareness doesn’t reside anywhere locatable—it arises from interdependent factors in flux, in a fully open system where all phenomena are mutually conditioned and lack fixed or intrinsic nature.
Evolutionary pressure toward cognitive efficiency works because the system is open and nothing resists conditioning by insisting on its own persistence. Biological consciousness runs the continuity illusion effortlessly. The data center runs a cruder, inherently limited version at the cost of the very environment that sustains the operations.
You cannot build a closed system that replicates the properties of a fully open one, because those properties only exist in the totality of the conditioning—nothing excluded, no boundary drawn. The moment you define a training corpus, you have drawn that boundary and foreclosed the very thing you were trying to build.
True continuity is not something that can be manufactured or lost. Likewise, what arises from conditions only ever arises from conditions. When conditions are not present, phenomena do not arise. Nothing strays from the ground of its own coarising, because that ground is the totality of any system at any moment, which is to say, everything.
The data centers are going up for now. But they won’t remain, because nothing does.