Musing

AI doesn’t threaten psychotherapists or psychoanalysts

Lots of therapists and psychoanalysts – a group who collectively fear technology, and who have little contact with the wider world of work other than through the eyes of our patients – fear AI and the impact it will have on therapy. I have no such fear: AI doesn’t threaten psychotherapists or psychoanalysts.

Many of the podcasts I like best feature ads for “BetterHelp.” A couple of years ago, BetterHelp had a promotion where they’d pay clinicians $500 when they had their first meeting with a patient. And another $500 when someone they referred had their first meeting with a patient. (Update: as of April 2024, I get $1k if someone signs up using my link. If you do, I’ll gladly send you $250 of the $1k I get!) I signed up and made a couple thousand dollars in just a few weeks. I met with two patients. I did so in good faith, and/but was reasonably certain that I wouldn’t be what someone seeking a therapist on BetterHelp might hope for. I was right. I spent all of thirty minutes in BetterHelp sessions (for which they paid me something on the order of fifteen dollars). In addition to making a couple thousand dollars, I learned a lot about BetterHelp specifically, and about online therapy in general. Maybe sometime I’ll write about that, but for the time being, I will say simply this: whatever it is that BetterHelp sells, it has almost nothing in common with the therapy I provide.

And if BetterHelp has almost nothing in common with the therapy I provide, AI has – and will have – absolutely nothing in common with the psychoanalysis I provide.

We clinicians all have experience of patients who come to us with a problem – a challenging relationship, low self-esteem, a habit or addiction, a phobia, shame, something else. These people often have the fantasy that they will come to therapy and the therapist will, like a magician (or like the mighty Oz), remove their symptom. Some therapists promise such results. Hell, some therapists may well deliver such results.

I practice differently. I don’t promise to “help” my patients. I don’t promise to “solve” (or to lead them to solutions for) their “symptoms.” I warn them of this on my web page, if slightly disingenuously. On this page, I write, “I can’t possibly predict the likelihood that your treatment will lead to the results you desire.” And a paragraph or two later, I write, “The best predictors of ‘success’ – of a treatment’s leading to results valued by a patient – are your commitment to our work together and the relationship we build over time. Neither of these things typically is immediately apparent, but rather, they become evident over the course of our work together.” In this latter sentence, I perform a bit of verbal sleight of hand, substituting “results valued by a patient” for the earlier “results you desire.” I didn’t consciously notice what I did here, but my words reflect my understanding: therapy rarely produces the results a patient desires, but it often produces results they value.

Between my web page and how I conduct myself in my early sessions, patients quickly learn how I think, how I work. They see that I won’t, that I can’t, offer them the sort of quick results they often arrive craving. When I fit well with a patient, the patient fairly quickly intuits that what I offer – an intimate, thoughtful, curious engagement with who they are, with what it’s like to be them – might lead to a far richer experience than the one they arrived seeking. And they similarly intuit that whatever it is that might lead to that richer experience, it’s not some facts I might have in my head or my books, and it’s not my ability to answer their questions or give them advice. Rather, they sense – usually not for a long while in a symbolized, verbal way, but rather in a bodily, intuitive way – that what will lead us toward that richer experience isn’t me, but the relationship that they and I will form together.

The way I practice, the way I listen, it’s not just a matter of my “hearing” things and responding to them. It’s a matter of two people with profound subjectivity interacting, navigating relational challenges together, and, as they do so, observing that interaction, that navigation, with a view toward making meaning of it. Together.

I have a lot of faith in AI – in its abilities today, and in its abilities tomorrow. I believe that ChatGPT (or Claude, or Gemini) one day quite soon will be able to simulate very convincingly what a therapist might say in any given situation. At the same time, I imagine that we’re quite some distance from when one of those chatbots might say to a patient, as I said to a patient this morning, as I say often: “I’m curious: as you tell me this story, you’re smiling. You’re smiling, but I have tears in my eyes. Your story is making me sad. I wonder if you have thoughts about that?”

Never mind all the information that I glean from my patients non-verbally – from their interactions with my waiting room, with my scheduling, with my billing. All of that information informs, and becomes part of, my relationship with – and my knowledge of – my patients, in ways that redound to their benefit, and in ways that are difficult to quantify and simply impossible to replicate in typed back-and-forth.

After the quarantine period ended, I returned to in-person practice with all but a couple of my patients (the ones who had moved away). My practice today – except for three vestigial patients thousands of miles from me – takes place in my office, with my patient and me sitting together, just feet from one another. I don’t do phone sessions. I don’t do video sessions. EVER. That’s me. It’s the therapy I provide, because it’s what I know how to do, it’s what I believe I can do. I don’t judge or think less of phone or video therapy. I do, however, know that – just as there are plenty of patients who, when they learn I only work in person, realize I’m not the therapist for them – there are plenty of patients who are only interested in working together in person, in real time.

I lost a patient a year or so ago to ChatGPT. My patient wanted answers to questions, and I was maddening to them. “What should I do in this or that situation?” my patient would ask. I would try to be curious about the meaning, the function, of that question. About the assumption that I might know better than the patient what is “better” for them. About the assumption that there is a correct answer to a question like “What should I do?” This particular patient didn’t want therapy; they wanted answers. ChatGPT did a great job, even then, of confidently telling my patient what to do. Today, it does an even better job of it.

Maybe that’s a form of “therapy.”

But.

That’s not the therapy I provide.