Wednesday, 13 December 2023

ChatGPT isn't hallucinating. It's confabulating.

As generative AI language models such as ChatGPT have become the hot topic of the internet over the past few years, several headlines have emerged about an awkward problem with these models: they make stuff up.  Ask them a nonsense question, and you'll get a confident but equally nonsensical answer.  If there are no relevant citations in their memory banks, they'll simply make some up.

Headline after headline seems to have settled on a term for this problem: hallucination.

As a psychologist, I'd like to disagree.

Hallucination occurs when a human being's sensory experiences no longer match their environment.  To hallucinate is to smell cigarettes when no smoke is in the room, perhaps out of intense craving.  To hallucinate is to hear a voice that isn't based in actual sound, perhaps due to a misfiring of dopamine pathways.  To hallucinate is to feel your phone buzzing in your pocket even though it hasn't actually vibrated.

ChatGPT isn't hallucinating.  That's anthropomorphism, or assigning human qualities to nonhuman objects or animals.  ChatGPT has no sensory inputs, and it is incapable of craving a smoke.  It's a computer program.  And like every computer program, it returns errors.

So what do we call this error, if not hallucination?  Psychology has a different term for a memory error that the storage unit (a brain, if we go along with the brain-as-computer metaphor) doesn't know is an error.  That's confabulation.  To confabulate is to fail to retrieve a piece of information that may or may not be in one's memory, and not to know that the retrieval failed.  When confabulating, a person fully intends to tell the truth, but doesn't know what the truth is, and so produces an answer that feels correct but isn't aligned with reality.

Confabulation occurs when people both don't remember certain facts or events, and don't know that they don't remember.  It's not lying, it's not hallucination or delusion, and it's not intended to deceive.  Instead, it's a double error: not knowing an answer, and not knowing that one doesn't know.

Let's imagine that a medical doctor, Cheryl Jones, is interviewing a father of four named Jim.  Jim has just suffered a stroke.  When Dr. Jones asks him his name, Jim answers "Cheryl" because that's what is on her name tag.  It feels correct to him, even though he doesn't know why it feels correct.  He can't tell that the source of this information (his doctor's badge) is inappropriate, only that he tried to think of his own name and that was what came to mind.

Next, Dr. Jones asks Jim if he has any family.  She knows that he was brought to the ER by one of his four sons.  However, when Jim searches his memory for information about family, none comes to mind.  "I don't have any family," he says.  To him, that feels more logical than not being able to remember them.  Note that Jim isn't lying, and is in fact doing his best to tell the truth to the doctor who is trying to help him.  Instead, he's failing to detect the gaps in his own memory, and unwittingly using logic to create new information to fill in those gaps.

I asked ChatGPT to write me a three-paragraph summary of Psycho-Generative Theory, a name I came up with on the spot.  ChatGPT, as it was programmed to do, searched for information that matched those key terms... and (I assume) came up blank.  But it's not designed to detect gaps in its training data; it's designed to turn out fluid-sounding answers to questions.  So the bot invented an explanation and was off to the races:

Psycho-Generative Theory, proposed by Erik Erikson, is a psychological framework that explores the developmental stages of an individual's life and their capacity to contribute to society and future generations.  This theory posits that during middle adulthood, individuals experience a psychosocial crisis known as generativity vs. stagnation (Erikson, 1963).

This is solidly not bad.  It pulled out the key word "generative" and bounced to a sub-component of a real theory (generativity vs. stagnation really is a stage in Erikson's psychosocial theory), then tried to force that sub-component back into fitting a broad theoretical model.  It's not great, but it has internal logic.  And the theory must exist, right?  Or else I wouldn't be asking about it.  Therefore, it'll generate me text about generating, in the context of psychology, and hope for the best.

Only, the errors start to get worse as it goes, because I keep asking for more information:

Psycho-Generative Theory aligns with contemporary research on well-being and positive psychology, highlighting the significance of pro-social behaviors and engagement in activities that benefit others as fundamental components of human flourishing (McAdams & de St. Aubin, 1992).

I'm not sure 1992 is "contemporary," but at least that paper really exists.  Again, note that at no point has the bot pointed out that the theory doesn't exist.  It doesn't know that, and it doesn't even know that it doesn't know.  That's not what it's programmed to do.  It's programmed to answer questions, no matter how badly phrased or nonsensical.

When I ask for a connection between Psycho-Generative Theory and Chat Generate Positive Theory, I start to get nonsense:

Although some scientists disagree that Psycho-Generative Theory can fit with Chat Generate Positive Theory, there are others who say that the two theories describe the same stage of behavior (Colin & Tan, 2014).  There is a lot of debate about this complex issue, and some would argue that it is not right to fit Psycho-Generative Theory to Chat Generate Positive Theory.

It can't account for the fact that I'm asking it a question with no answer.  It can't find a citation for either theory, so it invents one — that Colin and Tan paper doesn't exist.  That's not a hallucination, not an error of sensory inputs.  The bot is waffling like a student who didn't do last night's reading.  But unlike that student, the bot doesn't intend to deceive me; ChatGPT would hardly be impressive if it were programmed to lie.  Instead, it produces an answer, because it's been programmed to answer questions no matter what.  There's a term for this type of error: confabulation.

