This is the sort of thing that should come as no surprise. Of course an LLM that ingests as much of the internet as possible is going to incorporate works of fiction, and these programs have no way of separating fact from fiction. On top of that, the chat interface and the marketing are built to convince users they're chatting with an intelligence rather than a probability-based text generator. Even so, it's a compelling example of the dangers of LLM chatbots.
Social media users were quick to note that ChatGPT's answers to Lewis' queries take a strikingly similar form to entries on the SCP Foundation, a Wikipedia-style database of fictional horror stories created by users online.
“Entry ID: #RZ-43.112-KAPPA, Access Level: ████ (Sealed Classification Confirmed),” the chatbot nonsensically declares in one of his screenshots, in the typical writing style of SCP fiction. “Involved Actor Designation: ‘Mirrorthread,’ Type: Non-institutional semantic actor (unbound linguistic process; non-physical entity).”
Another screenshot suggests “containment measures” Lewis might take — a key narrative device of SCP fiction writing. In sum, one theory is that ChatGPT, which was trained on huge amounts of text sourced online, digested large amounts of SCP fiction during its creation and is now parroting it back to Lewis in a way that has led him to a dark place.
In his posts, Lewis claims he’s long relied on ChatGPT in his search for the truth.
Joe Wilkins
A Prominent OpenAI Investor Appears to Be Suffering a ChatGPT-Related Mental Health Crisis, His Peers Say (Futurism)