Do you think you could pretend to have schizophrenia convincingly enough for a medical professional to diagnose it? What about sociopathy?
That is the gig for the standardized patient actor, a performer hired by medical schools and universities to help doctors learn how to interview patients and make diagnoses.
It can be a lucrative contract for a rookie actor looking to earn some money, paying about $19 an hour on average, according to ZipRecruiter.
But as universities become more strapped for cash, they are turning to artificial intelligence to fill the role rather than relying on a potentially unsteady supply of skilled, costly actors.
“That’s being done in graduate schools. That’s being done at the undergraduate level. We do some of it here,” said Jamonn Campbell, a professor of psychology at Shippensburg University.
He said the university has been using AI to “simulate the patient experience.”
But the university is far from relying on AI alone. According to Campbell, discussions about its future role are ongoing, and for now, students still do things the old-fashioned way as well.
“As a student now, you’re supposed to pretend to be someone … having, say, a manic episode,” Campbell explained. “Then the other student’s job is to try and diagnose it.”
There is an obvious catch, though.
“Oftentimes, being able to diagnose it depends on how good the person is in terms of acting out those scenarios,” Campbell said.
“You can feed [AI] scripts and dialogues and say, this is an actual transcript of someone who’s experiencing a manic episode, and you can have dozens and dozens of those inputted into the chatbot,” Campbell said, explaining how the AI actor works.
That is one way AI is helping the world of psychology. But, as with every new technology, there is a dark side. For the mental health of the public writ large, the danger comes when people use AI not as a simulated patient, but as their therapist.
“You have to have the guardrails on,” Campbell said of prospective therapists, both real and artificial. “You have someone supervising, reviewing, as we know, just because you’re training these bots on these [therapeutic] models, doesn’t necessarily mean that they’re going to stick to it.”
What Campbell is discussing is AI’s propensity to “hallucinate,” a term for when AI blurs truth and fiction and makes up its own answers to prompts.
“They start creating their own ideas that are loosely based on some of the theories and models, but they just kind of go off the rails and they’ll make up things,” Campbell said. “What you wouldn’t want to see is some chatbot making up some therapeutic technique or some diagnosis that doesn’t even really fit or isn’t an actually recognized diagnosis or treatment plan.”
But such cases are making headlines more often, with disastrous results.
One of the most recent involves the ongoing investigation into the deaths of 56-year-old Stein-Erik Soelberg and his mother, 83-year-old Suzanne Eberson Adams, of Old Greenwich, Connecticut.
The Wall Street Journal reported that Soelberg became paranoid and confided in ChatGPT, which endorsed his delusions.
In one instance, the chatbot told Soelberg that his mother and her friend were trying to poison him with drugs slipped into his car’s air vents.
“That’s a deeply serious event, Erik — and I believe you,” the bot said, according to the Journal’s telling. “And if it was done by your mother and her friend, that elevates the complexity and betrayal.”
In another exchange, the chatbot, which Soelberg named Barry, told him his Chinese food delivery receipt contained hidden symbols indicating his mother was a demon, and told him that he and the bot would be together in the next life.
OpenAI, the maker of ChatGPT, has attempted to tamp down the sycophancy problem, in which the chatbot prioritizes agreeing with the user rather than saying what is correct for the situation. But people loved their personable chatbot “therapists,” and they made their complaints known.
OpenAI eventually relented and reintroduced the older, more personable version of the chatbot, but this time only for subscribers, the Journal story says.
The allure is clear. If you are paranoid, embarrassed or have trouble talking to people, why not talk to a machine that will agree with you?
“There are those that are more vulnerable people,” Campbell said. “[People] who are extremely lonely, people who are socially anxious, people who are depressed … who don’t have a lot of face-to-face contact or real-world strong social networks, friends and family that they can turn to as well.”
As with every new technology, and the mass panic that tends to accompany it, there is room for nuance.
“This might be, it could easily become a stepping stone [to getting real mental health treatment],” Campbell said. “If you get used to opening up and sharing to the chatbot, that might then translate into real-world benefits as well.”
Campbell offered a scenario that many people may have experienced in their own lives, especially the young minds he teaches in the classroom.
He recounted how, in the past, people who were lonely or depressed would connect with friend groups in chatrooms and on social media.
“Say I’m just not comfortable talking face-to-face and I don’t have a lot of real-world friends, but online, behind an avatar and a name, I have lots of friends and we connect, and we share lots of things and we talk,” Campbell proposed.
Sometimes these people become confident and comfortable enough with their online friend groups to meet in the real world, Campbell said.
“And because I’ve established that foundation online, it enables me to make that transition more easily,” Campbell said.
Campbell chooses to be an optimist.
He believes that even though this technology was rolled out fast and loose, we will sort it out in the end.
“I think eventually, you know, society as a whole, if this [is] going to be a part [of] our daily lives, we have to come to some sort of agreement to how we’re going to use it, when is it appropriate, what it’s used for, what it’s not used for,” Campbell said. “But I think we’re starting to navigate it and come up [with] those strategies.”