No Ghost, No Machine
If there is no stable self on either side of the human-machine boundary, what does it mean for an “agent” to act? And what are these interactions doing to our minds?
Against my better judgment, I’ve been thinking a bunch about so-called artificial intelligence and the ongoing evolution thereof. Specifically, the buzzed-about term “agentic AI,” which refers to an “agent” acting on your behalf, interacting with other digital interfaces to produce real-world outcomes. At its most basic, this is stuff like replying to emails, paying bills, or signing up for and canceling services. At its more complex (and dangerous), it can involve security and defense systems.
From what I understand, this is the future of AI, or at least the next fintech honeypot. However, the lack of standardization will probably mean that implementation remains patchwork, clumsy, and potentially catastrophic, because all of the handoffs are, at this point, uneven and ungoverned. The machine might be enabled to act, but it is unclear who is ultimately accountable. (Surely not the tech bros.)
Much of this activity is built on top of today’s large language model systems (LLMs like ChatGPT, Gemini, or Claude), which have so many failures and last-mile problems that I, for one, wouldn’t trust them to buy me a pack of bubble gum.
All this talk of “agents” invites consideration of volition and intent. Some of these questions are relevant to everyday human-AI working sessions. We’re trained to engage with AI as though it has human attributes, but in actuality it does not. It is only responding based on the character of the input, with the capacity to cross-reference patterns at unimaginable scale. But there’s ultimately nobody there. Yet over time, the false impression that there is somebody there conditions our minds, which conditions behavior. The full scope of this psychological transference—mind shaped by machine, machine mistaken for mind—is beyond anyone’s current reckoning.
Of course, we could ask the same question about who’s on the human side, and what comprises our own constructed sense of self. I do ask myself this question frequently, sometimes during sessions with the machine. Because AI is not conscious, it doesn’t mistake relational process for self-identity. Humans, however, are quite susceptible in this regard, which is why we anthropomorphize just about anything.
Despite countless problems with the technology, it is possible to see AI as an extension of mind. Whose mind, exactly, is unclear. If the boundary is genuinely porous, then questions of authorship and creativity become harder to pin down than we’d like to admit—not because the machine is contributing anything meaningful, but because the mind using it is already conditioned by the interaction. And yet, from both a dharmic and modern physics perspective, there is no inherent “self” on either side of the boundary. It is dependent origination, or mutual causality, all the way down.
And so the agentic AI question doesn’t really resolve anything, because the stable agent it presupposes isn’t there. If there is no ghost in the machine, and no ghost outside of it, what’s actually happening in these sessions? The process itself is the only locatable thing. And we aren’t in control of what this means for human cognition and the realities we construct and inhabit.
It’s possible that the human species will evolve alongside the machine. It’s also possible that the machine will relegate us to the scrap heap of evolutionary biology. But the latter isn’t coming anytime soon—AI’s persistent structural failures and inability to properly evaluate context keep the noise-to-signal ratio far too high to support the post-human fantasy the tech bros cling to against sanity and reason. Nevertheless, they are willing and able to make planet-dooming mistakes to maintain or extend that belief.
So for now, the phantom process continues—and nobody, human or otherwise, knows where it’s going, even as we experience the consequences individually and collectively.