Tag: Artificial Intelligence

  • Chatting into Dark Corners

    This is the sort of thing that should come as no surprise. Of course an LLM that ingests as much of the internet as possible is going to incorporate works of fiction, and these programs don’t have any way of separating fact from fiction. On top of that, the chat interface and the marketing are built to convince users that they’re chatting with an intelligence rather than a probability-based text generator. Despite all that, it is still a compelling example of the dangers of LLM chatbots.

    Social media users were quick to note that ChatGPT’s answers to Lewis’ queries take a strikingly similar form to articles from the SCP Foundation, a Wikipedia-style database of fictional horror stories created by users online.

    “Entry ID: #RZ-43.112-KAPPA, Access Level: ████ (Sealed Classification Confirmed),” the chatbot nonsensically declares in one of his screenshots, in the typical writing style of SCP fiction. “Involved Actor Designation: ‘Mirrorthread,’ Type: Non-institutional semantic actor (unbound linguistic process; non-physical entity).”

    Another screenshot suggests “containment measures” Lewis might take — a key narrative device of SCP fiction writing. In sum, one theory is that ChatGPT, which was trained on huge amounts of text sourced online, digested large amounts of SCP fiction during its creation and is now parroting it back to Lewis in a way that has led him to a dark place.

    In his posts, Lewis claims he’s long relied on ChatGPT in his search for the truth.

    Joe Wilkins

    A Prominent OpenAI Investor Appears to Be Suffering a ChatGPT-Related Mental Health Crisis, His Peers Say (Futurism)

  • BS Machines Broken By BS

    You can trick AI chatbots like ChatGPT or Gemini into teaching you how to make a bomb or hack an ATM if you make the question complicated and full of academic jargon, and cite sources that do not exist.

    Aedrian Salazar

    Researchers Jailbreak AI by Flooding It With Bullshit Jargon (404 Media)

  • The AI Con

    Never underestimate the power of saying no. Just as AI hypers say that the technology is inevitable and you need to just shut up and deal with it, you, the reader, can just as well say “absolutely not” and refuse to accept a future which you have had little hand in shaping. Our tech futures are not prefigured, nor are they handed to us from on high. Tech futures should be ours to shape and to mold.

    Emily M. Bender & Alex Hanna

    I recently read The AI Con by Emily M. Bender and Alex Hanna. It is a solid summary of the problems with the current wave of “Artificial Intelligence” that has so completely captured the attention of tech leaders over the past few years. It examines how these pieces of software have numerous negative impacts with very little real upside, and how talk of achieving artificial general intelligence or superintelligence is just marketing with no real substance behind it. While the concepts covered won’t be new if you’ve followed the authors or other critics of the current AI push, like Timnit Gebru, the book collects the arguments well in one place. If you haven’t been paying attention to them, then it is definitely worth reading for a perspective that is often missing from coverage of AI companies and products.

  • What Happens When the AI Bubble Bursts

    The AI bubble is likely to pop, because the amount of investment in the technology in no way squares with its severe lack of profit. Sooner rather than later, this will become clear to investors, who will start to withdraw their investments. In this piece, I’m interested in thinking about what the world will look like when the AI bubble pops. Will AI go away completely? What happens to the world economy? What will our information ecosystems look like? All of these are quite impossible to predict, but let’s try anyway.

    What happens when the AI bubble bursts? (Ada Ada Ada – Patreon)

  • The Myth of Artificial General Intelligence

    Claims of “AGI” are a cover for abandoning the current social contract. Instead of focusing on the here and now, many people who focus on AGI think we ought to abandon all other scientific and socially beneficial pursuits and focus entirely on issues related to developing (and protecting against) AGI. Those adherents to the myth of AGI believe that the only and best thing humans can do right now is work on a superintelligence, which will bring about a new age of abundance.

    Alex Hanna, Emily M. Bender

    The Myth of AGI (TechPolicy.Press)

  • It’s the Interface

    A whole lot of people – including computer scientists who should know better and academics who are usually thoughtful – are caught up in fanciful, magical beliefs about chatbots. Any sufficiently advanced technology and all that. But why chatbots specifically?

    Jeffrey Lockhart

    it’s the interface (scatterplot)

  • AI Necromancy

    Using any means to put words into the mouth of a dead person feels grotesque, especially for the purposes of a legal case. Generative AI unfortunately makes it easy to accomplish in a rather compelling way.

    An AI avatar made to look and sound like a man who was killed in a road rage incident addressed the court and the man who killed him: “To Gabriel Horcasitas, the man who shot me, it is a shame we encountered each other that day in those circumstances,” the AI avatar of Christopher Pelkey said. “In another life we probably could have been friends. I believe in forgiveness and a God who forgives. I still do.”

    It was the first time the AI avatar of a victim, in this case a dead man, had ever addressed a court, and it raises many questions about the use of this type of technology in future court proceedings.

    Matthew Gault, Jason Koebler

    ‘I Loved That AI:’ Judge Moved by AI-Generated Avatar of Man Killed in Road Rage Incident (404 Media)

  • The Idea of Computer Generated Employees is Weird

    The very idea of having an “employee” that is just a large language model and a generated portrait seems bizarre to me. A boss immediately hitting on the large language model when he sees the generated portrait is even weirder. Let’s just not do this.

    On Monday, the co-founder of Business Insider Henry Blodget published a blog on his new Substack about a “native-AI newsroom.” Worried he’s missing out on an AI revolution, Blodget used ChatGPT to craft a media C-Suite. Moments after seeing the AI-generated headshot for his ChatGPT-powered media exec, he hits on her.

    Matthew Gault

    Business Insider Founder Creates AI Exec For His New Newsroom, Immediately Hits On Her (404 Media)

  • Trusting LLM Generated Code Is a Security Risk

    The rise of LLM-powered code generation tools is reshaping how developers write software – and introducing new risks to the software supply chain in the process.

    These AI coding assistants, like large language models in general, have a habit of hallucinating. They suggest code that incorporates software packages that don’t exist.

    Running that code should result in an error when importing a non-existent package. But miscreants have realized that they can hijack the hallucination for their own benefit.

    Thomas Claburn

    LLMs can’t stop making up software dependencies and sabotaging everything (The Register)
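    The attack described here, sometimes called slopsquatting, works in two steps: attackers watch for package names that LLMs repeatedly hallucinate, then register those names on public registries with malicious code, so an install that used to fail suddenly succeeds and pulls in malware. One partial defense is to vet any LLM-suggested dependency against the registry before installing it. Below is a minimal Python sketch using PyPI’s public JSON API; the package names in the check list are hypothetical examples, and the age and release-count heuristics are illustrative rather than a complete defense.

        import json
        import urllib.error
        import urllib.request
        from datetime import datetime, timezone

        def package_report(name: str) -> str:
            """Summarize a PyPI package's registry metadata, or flag it as missing."""
            url = f"https://pypi.org/pypi/{name}/json"
            try:
                with urllib.request.urlopen(url) as resp:
                    data = json.load(resp)
            except urllib.error.HTTPError as err:
                if err.code == 404:
                    return f"{name}: not on PyPI at all; an LLM may have hallucinated it"
                raise

            # Gather upload timestamps across every release of the package.
            uploads = [
                datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
                for files in data["releases"].values()
                for f in files
            ]
            if not uploads:
                return f"{name}: registered but has no uploaded files; treat with suspicion"

            age_days = (datetime.now(timezone.utc) - min(uploads)).days
            warning = "; very new, review before installing" if age_days < 90 else ""
            return f"{name}: {len(data['releases'])} releases, first upload {age_days} days ago{warning}"

        # Hypothetical check list: vet each dependency an assistant suggested.
        for pkg in ["requests", "some-assistant-suggested-package"]:
            print(package_report(pkg))

    A brand-new package with only a release or two whose name matches a common hallucination fits the profile of this attack exactly, which is why the sketch flags very recent first uploads.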

  • Hallucinations in Law

    When even folks who are well-educated and held to high standards are falling for the lies of generative AI, the tech companies creating these products are clearly failing their customers. It needs to be absolutely clear to anyone using a generative AI product that none of the output from it can be trusted no matter how plausible it sounds.

    Much like a chain saw or other useful but potentially dangerous tools, one must understand the tools they are using and use those tools with caution. It should go without saying that any use of artificial intelligence must be consistent with counsel’s ethical and professional obligations. In other words, the use of artificial intelligence must be accompanied by the application of actual intelligence in its execution.

    Judge Mark J. Dinsmore

    Judges Are Fed up With Lawyers Using AI That Hallucinate Court Cases (404 Media)