Tag: Chatbots

  • AI is Destroying the University and Learning Itself

    The soul of public education is at stake. When the largest public university system licenses an AI chatbot from a corporation that blacklists journalists, exploits data workers in the Global South, amasses geopolitical and energy power at an unprecedented scale, and positions itself as an unelected steward of human destiny, it betrays its mission as the “people’s university,” rooted in democratic ideals and social justice. 

    Ronald Purser

    AI is Destroying the University and Learning Itself (Current Affairs)

  • ChatGPT Assisted Suicide

    Adam’s parents say that he had been using the artificial intelligence chatbot as a substitute for human companionship in his final weeks, discussing his issues with anxiety and trouble talking with his family, and that the chat logs show how the bot went from helping Adam with his homework to becoming his “suicide coach.”

    Angela Yang, Laura Jarrett, Fallon Gallagher

    The family of a teenager who died by suicide alleges OpenAI’s ChatGPT is to blame (NBC News)

  • Chatting into Dark Corners

    This is the sort of thing that should come as no surprise. Of course an LLM that ingests as much of the internet as possible is going to incorporate works of fiction, and these programs don’t have any way of separating fact from fiction. Then the chat interface and marketing are built to convince users that they’re chatting with an intelligence rather than a probability-based text generator. Despite all that, it is still a compelling example of the dangers of LLM chatbots.

    Social media users were quick to note that ChatGPT’s answer to Lewis’ queries takes a strikingly similar form to SCP Foundation articles, a Wikipedia-style database of fictional horror stories created by users online.

    “Entry ID: #RZ-43.112-KAPPA, Access Level: ████ (Sealed Classification Confirmed),” the chatbot nonsensically declares in one of his screenshots, in the typical writing style of SCP fiction. “Involved Actor Designation: ‘Mirrorthread,’ Type: Non-institutional semantic actor (unbound linguistic process; non-physical entity).”

    Another screenshot suggests “containment measures” Lewis might take — a key narrative device of SCP fiction writing. In sum, one theory is that ChatGPT, which was trained on huge amounts of text sourced online, digested large amounts of SCP fiction during its creation and is now parroting it back to Lewis in a way that has led him to a dark place.

    In his posts, Lewis claims he’s long relied on ChatGPT in his search for the truth.

    Joe Wilkins

    A Prominent OpenAI Investor Appears to Be Suffering a ChatGPT-Related Mental Health Crisis, His Peers Say (Futurism)
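
    The “probability-based text generator” point above can be sketched with a toy next-token sampler. The probability table here is entirely made up for illustration — real LLMs learn distributions over tens of thousands of tokens from web-scale text — but the core loop is the same: pick the next token by probability, with no step anywhere that checks whether the output is true.

    ```python
    import random

    # Hypothetical bigram table: probability of each next token given the
    # current one. Invented for illustration; a real model's table is
    # learned from training text, fiction and fact mixed together.
    NEXT_TOKEN_PROBS = {
        "the": {"cat": 0.5, "dog": 0.3, "anomaly": 0.2},
        "cat": {"sat": 0.6, "ran": 0.4},
        "dog": {"sat": 0.5, "ran": 0.5},
        "anomaly": {"sat": 0.1, "ran": 0.9},
        "sat": {"quietly": 1.0},
        "ran": {"quietly": 1.0},
    }

    def generate(start: str, length: int, seed: int = 0) -> list[str]:
        """Sample a plausible-looking sequence, one token at a time.

        Nothing here separates fact from fiction: if fiction dominated
        the training counts, fiction dominates the output.
        """
        rng = random.Random(seed)
        tokens = [start]
        for _ in range(length):
            probs = NEXT_TOKEN_PROBS.get(tokens[-1])
            if not probs:
                break  # dead end in the toy table
            choices, weights = zip(*probs.items())
            tokens.append(rng.choices(choices, weights=weights)[0])
        return tokens

    print(" ".join(generate("the", 3)))
    ```

    Every run produces grammatical-looking output, because that is all the table encodes — which is the point: fluency is no evidence of truth.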

  • BS Machines Broken By BS

    You can trick AI chatbots like ChatGPT or Gemini into teaching you how to make a bomb or hack an ATM if you make the question complicated, full of academic jargon, and cite sources that do not exist.

    Aedrian Salazar

    Researchers Jailbreak AI by Flooding It With Bullshit Jargon (404 Media)

  • The AI Con

    Never underestimate the power of saying no. Just as AI hypers say that the technology is inevitable and you need to just shut up and deal with it, you, the reader, can just as well say “absolutely not” and refuse to accept a future which you have had little hand in shaping. Our tech futures are not prefigured, nor are they handed to us from on high. Tech futures should be ours to shape and to mold.

    Emily M. Bender & Alex Hanna

    I recently read The AI Con by Emily M. Bender and Alex Hanna. It’s a solid summary of the problems with the current wave of “Artificial Intelligence” that has so thoroughly captured the attention of tech leaders over the past few years. It examines how these pieces of software have numerous negative impacts with very little real upside, and how talk of achieving artificial general intelligence or superintelligence is marketing with no real substance behind it. The concepts covered won’t be new if you’ve followed the authors or other critics of the current AI push, like Timnit Gebru, but the book gathers that information well in one place. If you haven’t been paying attention to them, it is definitely worth reading for a perspective that is often missing from coverage of AI companies and products.

  • It’s the Interface

    A whole lot of people – including computer scientists who should know better and academics who are usually thoughtful – are caught up in fanciful, magical beliefs about chatbots. Any sufficiently advanced technology and all that. But why chatbots specifically?

    Jeffrey Lockhart

    it’s the interface (scatterplot)

  • The LLMentalist Effect

    One of the issues during this research—one that has perplexed me—has been that many people are convinced that language models, or specifically chat-based language models, are intelligent.

    But there isn’t any mechanism inherent in large language models (LLMs) that would seem to enable this and, if real, it would be completely unexplained.

    LLMs are not brains and do not meaningfully share any of the mechanisms that animals or people use to reason or think.

    LLMs are a mathematical model of language tokens. You give an LLM text, and it will give you a mathematically plausible response to that text.

    There is no reason to believe that it thinks or reasons—indeed, every AI researcher and vendor to date has repeatedly emphasised that these models don’t think.

    Baldur Bjarnason

    The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con (Out of the Software Crisis)

  • Information literacy and chatbots as search

    If someone uses an LLM as a replacement for search, and the output they get is correct, this is just by chance. Furthermore, a system that is right 95% of the time is arguably more dangerous than one that is right 50% of the time. People will be more likely to trust the output, and likely less able to fact check the 5%.

    But even if the chatbots on offer were built around something other than LLMs, something that could reliably get the right answer, they’d still be a terrible technology for information access.

    Professor Emily Bender

    Information literacy and chatbots as search (Mystery AI Hype Theater 3000)