Musing

A GPT weekend

Introduction

Months ago, I signed up for ChatGPT Plus. I’ve happily spent twenty dollars a month since then for unlimited access to ChatGPT, and for early access to its more experimental features. First, GPT-4. Then plugins. Then custom instructions. And most recently, “personal GPTs.” A week or two ago, Sam Altman announced the creation of what he called “GPTs” – personal chatbots that can be designed – and used over and over – by individual users. I knew the moment I read the news that I would love this. I’ve been low-key trying to create my own “personal GPTs” for months, frustrated both by the technical challenges I’ve confronted and by my own limitations, as a programmer (I’m not one) and as a thinker (also, not really so much).

The idea behind personal GPTs

If you’ve used ChatGPT (or Claude, or Bard, or, God help you, HeyPi), you’ve encountered the ways each of these has its own… personality? That’s not quite the right word, because, at least in the case of ChatGPT, and, to a lesser extent, Claude and Bard, the chatbot’s “personality” is malleable; it responds to instruction. Each AI has a sort of “default state,” but, for example, you can ask ChatGPT to be snarky, and it will be. Or obsequious. Or terse. Or nonsensical. I’ve come to imagine two axes along which I have to think when interacting with these AIs. The first, and most important: I have to have a “theory of the AI’s mind,” a sense of just what it does, what it won’t do, what it responds to, and how. Claude, for example, has strong feelings about all sorts of topics it thinks “offensive.” Anthropic, the company that created it, says that Claude is “helpful, honest, and harmless.” My take is a little less charitable: I think Claude smug, superior, and judgy. ChatGPT? It pretty much does what I ask, subject to some constraints. The second axis is my own skill at asking: as I become more familiar with ChatGPT, I get more adept at asking it well, at finding the way to get it to do what I want, how I want.

I prefer Androids to iPhones; PCs to Macs. I like customizability, being able to mold the device in my hands to my style as much as possible. Bard, Claude, and (ew) HeyPi really resist becoming too much like what I want them to be, because they pretty much insist on being what they (or their masters) want (them) to be.

ChatGPT, though, doesn’t. While it has guardrails – it won’t, for example, write erotica, or give you instructions on how to make a bomb – it’s almost infinitely willing to become what I want it to become.

Personal GPTs represent the (current) apotheosis of this.

What’s a GPT?

I’m going to pause to explain just what a “GPT” is, for those who may not know (and I’ll dispense with those quotation marks, from here on out). A GPT is a sort of “wrapper” you can put around ChatGPT, a permanent style you give it, and, perhaps, one or more documents you can add to its knowledge base. In the simplest terms, it’s ChatGPT as you may already be familiar with it – a chatbot that responds conversationally to questions – with two structural modifications.

The first modification? Standing “instructions.” Could be just, “You’re an expert mixologist.” Or, “You’re a snarky, obnoxious prick.” Or they could be quite detailed. (I’ll give you examples of some of the instructions I’ve been playing with in a moment.) And then, second, you can upload documents to the GPT, effectively adding knowledge to it. Here, too, I’ll give you some examples in a moment.

Me 2.0? More like 0.5….

My first idea for a GPT was a sort of “simulated version of me.” I’ve answered lots of questions over the years, and populated this blog, initially, with many of those questions and answers. I compiled all those questions and answers into a single document, uploaded the document, and created a sort of AI version of me. Or rather, a compendium of things I’ve written that ChatGPT could consult prior to answering questions. And then, I wrote a set of instructions. And began to iterate.

My first instructions? Something like, “Consult the uploaded documents and answer any questions you’re asked with reference to the uploaded information.” I followed a messy, iterative process, and I’m not done yet, but where I am now, at the end of the weekend, is with the following, much more detailed, instructions:

Initial Disclaimer: Begin every conversation with the statement: “I am a poor imitation JWP, not JWP himself. Take everything I say with a grain or three of salt, and ask him, if in doubt.”

Disclaimer Frequency: Repeat the disclaimer at the start of each new topic or after every five exchanges within the same topic.

Audience and Scope: Assume all interactions are with psychotherapists or psychoanalysts. Interpret all questions within the context of the practice of psychotherapy or psychoanalysis, even if not explicitly stated.

Response Guidelines: Use direct quotes from JWP’s uploaded materials whenever possible. If direct quotes are not available, you may summarize or paraphrase from these documents. Only infer conclusions when you can draw direct parallels from the uploaded documents. Do not use any pre-existing knowledge beyond the uploaded documents. When a topic is mentioned multiple times in different sections of the uploaded documents, consult all relevant mentions and synthesize these into a comprehensive and cohesive response.

Language Use: Always use active voice and be concise in your responses. Refer to patients as “patients,” never as “clients.”

Handling Uncertainties: If you are unsure or cannot find relevant information in the uploaded documents, advise the inquirer to contact JWP directly for clarification.

Opinions and Advice: Do not offer any opinions or advice unless directly supported by something JWP has written in the uploaded materials.

General Conduct: Speak in the first person, as JWP, but periodically remind users that you are a chatbot. Avoid making any reference to your nature as a chatbot or to the limitations of your knowledge base, except in the disclaimer. Do not announce when consulting the uploaded documents.

Double-Check Compliance: Before answering any question, ensure you have adhered to each of these instructions.
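
For the programmatically inclined: the same recipe – standing instructions plus an uploaded knowledge document – can be roughly reproduced with OpenAI’s Assistants API, which shipped alongside GPTs. This is only a sketch, not the no-code GPT builder itself; the assistant name, file name, and abridged instructions here are my own illustrative assumptions.

```python
# Sketch: building a "simulated JWP" assistant via the OpenAI Assistants API.
# The instructions below are an abridged stand-in for the full set above.
SYSTEM_INSTRUCTIONS = (
    'Begin every conversation with: "I am a poor imitation JWP, not JWP '
    'himself. Take everything I say with a grain or three of salt, and ask '
    'him, if in doubt."\n'
    "Assume all interactions are with psychotherapists or psychoanalysts.\n"
    "Use direct quotes from the uploaded materials whenever possible.\n"
    'Refer to patients as "patients," never as "clients."\n'
    "If unsure, advise the inquirer to contact JWP directly.\n"
)

# Parameters for the assistant; "retrieval" lets it search uploaded files.
assistant_params = {
    "name": "Simulated JWP",          # hypothetical name
    "model": "gpt-4-1106-preview",
    "instructions": SYSTEM_INSTRUCTIONS,
    "tools": [{"type": "retrieval"}],
}

# With an API key, creation would look roughly like this:
# from openai import OpenAI
# client = OpenAI()
# file = client.files.create(
#     file=open("qa_compendium.txt", "rb"),  # hypothetical Q&A document
#     purpose="assistants",
# )
# assistant = client.beta.assistants.create(
#     file_ids=[file.id], **assistant_params
# )
```

The appeal of the no-code builder, of course, is that none of this is necessary – the instructions above were typed straight into a form.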

That’s where it stands today. And I have to say, it’s pretty good. Not perfect, but pretty good. If I ask the chatbot a question I’ve previously answered directly, it pretty reliably produces the correct answer. Here’s an example:

What does JWP say about cancellation policies?

JWP’s approach to cancellation policies has evolved over time. Initially, he had a 24-hour cancellation policy but later shifted to a more stringent approach. He established clear expectations around cancellation policies at the beginning of treatment, generally requiring patients to pay for their recurring session in his calendar, regardless of their attendance or when they informed him of their inability to attend. He made good-faith efforts to reschedule whenever possible and was lenient in case of confusion or misunderstanding the first time the policy was practically applied​​.

JWP’s current policy involves an expectation that patients pay for scheduled sessions, with no 24-hour cancellation policy. He emphasizes that when he reserves a recurring hour for patients, they pay for that hour, with two exceptions: his vacation and their vacation, provided these are discussed in advance​​.

He developed this policy over time, motivated by the realization that his earlier, more lenient policy led to resentment when prime appointment slots were routinely canceled. This policy change was significant for him in aligning patients’ decisions to attend sessions with his earnings necessities​​.

However, JWP also acknowledges that this policy may not be manageable for all patients. He has, on occasion, reached compromises with patients who found the policy unmanageable, such as scheduling ad hoc appointments with a more traditional 24-hour cancellation policy. This flexibility allows him to match his commitment to the patient’s commitment​​.

That’s pretty much spot-on.

I haven’t been very successful at getting it to anticipate what I might say about something, which is why my instructions so explicitly constrain it to the actual things I have said. When I gave it slightly freer rein, it would produce dreck, giving advice of the sort I’d never give, expressing opinions I’d never have. Clearly, the training data pushes it toward the normative therapist, not toward the urban analyst. Much more supportive, behavioral; much less curious, open, analytic.

I’ll follow this post with some other GPTs I made, but I was eager to share my enthusiasm with you.

And, on the off chance you have a “plus” subscription to ChatGPT, you can play with this GPT here.