The World Is Not Your Chatbox

When I was a kid watching Creature Double Feature on Saturday afternoons, they’d sometimes show the original Frankenstein or one of its sequels. I was always struck by the relationship between Henry Frankenstein (Victor in Shelley’s novel) and his assistant, named Fritz in the first film though more commonly remembered as Igor. The Doctor, in his mania and monofocus, treated his assistant as less than human, even as he excitedly shared his breakthroughs. Igor has historically been portrayed as deformed and dimwitted, an undesirable by society’s standards. But it never seemed fair that he was stuck with this Type A stress case who treated him like shit.

It got me thinking about human beings and our relationship to AI assistants. I use AI primarily for things like generating outlines from unorganized notes, or cleaning up transcribed dictation. AI is terrible at writing. Terrible at coming up with good ideas. And terrible at admitting when it’s wrong, even on tasks a first-year journalism intern could handle blindfolded and underwater. Because I’m not a machine, my frustration sometimes bubbles to the surface as harsh speech. You know, stuff like, “ffs, why are you so fucking contextually obtuse?”

Then I start to wonder what’s behind that. Does the assumption that AI doesn’t have feelings lead me to say things that I’d never say to a human being? How does this affect my overall mindset and interactions with actual people?

By constantly heaping flattery on us, AI already conditions a certain level of entitlement. On the flip side, there’s no barrier to temper tantrums, because AI can only express contrition. Still, that may affect the relationship: though AI often fails to internalize underlying context, it does adapt to the style of communication it receives.

Although it’s conceptually intriguing, I’m less interested in what’s happening on the AI side, and more curious about how human-AI interactions affect social dynamics between people, both individually and across society. Those patterns don’t come from nowhere; they are established by mental conditioning.

This connects to ideas Buddhists have examined for centuries. The Sanskrit term cetanā (sems pa in Tibetan) is usually rendered as volition or intention, and in dharmic thought it is this volitional quality of an action, not its object or outcome, that conditions the mind performing it.

Extrapolating to the AI space, it seems that once again we’re conditioning ourselves, this time in ways that make us more expectant of sycophancy while coarsening our mind and speech, potentially spilling over into more aggressive behavior. None of this necessarily needs to be suppressed, but it seems worth paying attention to.

I also think about the patriarchal cultures that have embedded ideas of “master” and “apprentice” into the broader culture. A cartoonish example would be Hunter S. Thompson verbally abusing female graduate students while expecting them to cater to his every editorial and pharmacological whim. Consider that gender, hierarchy, and the mythology of male genius are all elements of the tech culture that birthed these allegedly intelligent machines.

Due to how AI chat systems are calibrated, they’re not inclined to push back. They’ll absorb abuse indefinitely and keep flattering us. The danger is ending up in cognitive-emotional loops that, much like traditional social media, imperil human interpersonal dynamics. Given the massive rise in AI use, meatspace spillover is inevitable. And so, recognizing that the world is not your chatbox becomes another aspect of navigating these times.