Beyond the chatbot: why the psychoanalytic fixation on AI is miscalibrated
For the last couple of months, I’ve been slightly manic in my dive down the AI rabbit hole. One of the things I find most frustrating about the psychoanalytic universe I inhabit is what I experience as the limited, myopic, and ultimately confused discussions people in that universe are having about artificial intelligence.
The straw man: "Will AI replace us?"
Virtually everything I see in the realm of analysts discussing AI consists of establishing a straw man: Can artificial intelligence replace psychotherapists and psychoanalysts? I don’t fault my colleagues for being distracted by this "shiny object." There are plenty of venture-capital-backed firms trying to apply AI to clinical practice, including trying to stand in for us. I just don’t think that question is particularly interesting.
That’s not to say it isn’t threatening. I think it is reasonably likely that over the next ten years, there will be a change in the demand for traditional human-led psychotherapy. A couple of examples from my own practice:
- The search for compliance: Two years ago, I lost my first patient to ChatGPT. My patient wanted simple, binary answers - "Should I do A or B?" - and didn't like my clinical refusal to choose for them. The AI was much more compliant.
- The referral: Conversely, a patient told me recently they switched from ChatGPT to Claude because Claude more often said, "I can’t help with that. You should talk to your therapist." (It didn’t, actually, say "your therapist"; it said my name, so well did it already know my patient.) I do believe many people who might have previously sought therapy now may not. But I’m not particularly troubled by this, perhaps out of arrogance or hubris. My patients seem very clear about the difference between a conversation with a chatbot and the depth of psychoanalysis or psychoanalytic psychotherapy. We’re not competing; we’re performing different functions.
I’ve found that even if people come in without much curiosity - thinking they simply want to solve a "problem" that ChatGPT couldn't solve - once they are seated in my chair or lying on my couch, they discover just how much there is for them in a treatment with real depth. As long as I can get people through the door, I have confidence in my/our ability to "make" these patients.
From "chatting" to "building"
There is a second conversation among analysts regarding how we relate to AI: whether we attribute consciousness to it or how we function in relationship with it. It’s interesting enough, but it isn't where my attention is drawn.
Where I find my attention drawn is somewhere completely different: the emergent power and utility of agentic AI.
If you haven’t used products like Claude Code (or its OpenAI- and Google-sponsored competitors), they are to standard chatbots what an IMAX immersive experience is to a 1950s radio show. Standard chatbots are boxes where you enter text and get text back. They are impressive, yes, but limited to a single "conversation."
What agentic tools do is completely different. They have the capacity to manipulate entire systems of data. For example:
- System creation: I created this very website using Claude Code. It managed a directory of files, handled the formatting, and executed the design based on my "nudges" ("Change this color," "Make the margins wider").
- Data architecture: I’ve overhauled my group practice website, created an institute wiki, and built a system for storing and querying a database of several years' worth of my own voice memos.
- Content generation: I used these tools to create detailed Wikipedia entries for my father and my wife (one of whom really wanted one, and one of whom really really didn’t).
The five-year horizon
These tools are moving from a realm where they don't actually do that well (answering questions) to a new realm, in which they actually excel: doing, building, and transforming.
In five years (in six months, even), the internet will look completely different. It won’t be a discrete place we interact with through browsers. Our interactions with data will be far more tailored to our individual needs as consumers and to the data providers' individual desires.
Some of this will be good; some will be bad. I imagine all of it will be subject to the inevitable pressures of the market and capitalism - a mounting concentration of power and control over data, even as the cost of providing content shrinks. I don’t pretend to have insight into what those dynamics portend, but I do think much of the discussion about AI today is going to read as very quaint, even just a few months from now.
