Tag: AI

  • AI, Search, and the Internet

    I wish I could say this is not a sustainable model for the internet, but honestly there’s no indication in Pew’s research that people understand how faulty the technology that powers Google’s AI Overview is, or how it is quietly devastating the entire human online information economy that they want and need, even if they don’t realize it.

    Emanuel Maiberg

    Google’s AI Is Destroying Search, the Internet, and Your Brain (404 Media)

  • Chatting into Dark Corners

    This is the sort of thing that should come as no surprise. Of course an LLM that ingests as much of the internet as possible is going to incorporate works of fiction, and these programs don’t have any way of separating fact from fiction. Then the chat interface and marketing are built to convince users that they’re chatting with an intelligence rather than a probability-based text generator. Despite all that, it is still a compelling example of the dangers of LLM chatbots.

    Social media users were quick to note that ChatGPT’s answer to Lewis’ queries takes a strikingly similar form to SCP Foundation articles, a Wikipedia-style database of fictional horror stories created by users online.

    “Entry ID: #RZ-43.112-KAPPA, Access Level: ████ (Sealed Classification Confirmed),” the chatbot nonsensically declares in one of his screenshots, in the typical writing style of SCP fiction. “Involved Actor Designation: ‘Mirrorthread,’ Type: Non-institutional semantic actor (unbound linguistic process; non-physical entity).”

    Another screenshot suggests “containment measures” Lewis might take — a key narrative device of SCP fiction writing. In sum, one theory is that ChatGPT, which was trained on huge amounts of text sourced online, digested large amounts of SCP fiction during its creation and is now parroting it back to Lewis in a way that has led him to a dark place.

    In his posts, Lewis claims he’s long relied on ChatGPT in his search for the truth.

    Joe Wilkins

    A Prominent OpenAI Investor Appears to Be Suffering a ChatGPT-Related Mental Health Crisis, His Peers Say (Futurism)

  • BS Machines Broken By BS

    You can trick AI chatbots like ChatGPT or Gemini into teaching you how to make a bomb or hack an ATM if you make the question complicated, full of academic jargon, and cite sources that do not exist.

    Aedrian Salazar

    Researchers Jailbreak AI by Flooding It With Bullshit Jargon (404 Media)

  • The AI Con

    Never underestimate the power of saying no. Just as AI hypers say that the technology is inevitable and you need to just shut up and deal with it, you, the reader, can just as well say “absolutely not” and refuse to accept a future which you have had little hand in shaping. Our tech futures are not prefigured, nor are they handed to us from on high. Tech futures should be ours to shape and to mold.

    Emily M. Bender & Alex Hanna

    I recently read The AI Con by Emily M. Bender and Alex Hanna. It was a solid summary of problems with the current wave of “Artificial Intelligence” that has so completely captured the attention of tech leaders over the past few years. It examines how these pieces of software have numerous negative impacts with very little real upside, and how talk of achieving artificial general intelligence or superintelligence is just marketing with no real substance behind it. While the concepts covered won’t be new if you’ve followed the authors or other critics of the current AI push, like Timnit Gebru, it was a great collection of the information in one place. If you haven’t been paying attention to them, then it is definitely worth reading for a perspective that is often missing from coverage of AI companies and products.

  • What Happens When the AI Bubble Bursts

The AI bubble is likely to pop, because the amount of investment in the technology in no way correlates with its severe lack of profit. Sooner rather than later, this will become clear to investors, who will start to withdraw their investments. In this piece, I’m interested in thinking about what the world will look like when the AI bubble pops. Will AI go away completely? What happens to the world economy? What will our information ecosystems look like? All of these are quite impossible to predict, but let’s try anyway.

    What happens when the AI bubble bursts? (Ada Ada Ada – Patreon)

  • The Myth of Artificial General Intelligence

    Claims of “AGI” are a cover for abandoning the current social contract. Instead of focusing on the here and now, many people who focus on AGI think we ought to abandon all other scientific and socially beneficial pursuits and focus entirely on issues related to developing (and protecting against) AGI. Those adherents to the myth of AGI believe that the only and best thing humans can do right now is work on a superintelligence, which will bring about a new age of abundance.

    Alex Hanna, Emily M. Bender

    The Myth of AGI (TechPolicy.Press)

  • The LLMentalist Effect

One of the issues during this research—one that has perplexed me—has been that many people are convinced that language models, or specifically chat-based language models, are intelligent.

    But there isn’t any mechanism inherent in large language models (LLMs) that would seem to enable this and, if real, it would be completely unexplained.

    LLMs are not brains and do not meaningfully share any of the mechanisms that animals or people use to reason or think.

LLMs are a mathematical model of language tokens. You give an LLM text, and it will give you a mathematically plausible response to that text.

    There is no reason to believe that it thinks or reasons—indeed, every AI researcher and vendor to date has repeatedly emphasised that these models don’t think.

    Baldur Bjarnason

    The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con (Out of the Software Crisis)

  • Tricked by A.I.

A good video by a content creator who accidentally included some AI-generated footage in one of their recent videos. It takes a look at how it ended up there and how the internet is being filled with generated slop that you need to be increasingly vigilant to avoid.

    I do try to be thorough. So how didn’t I spot the molten nonsense coins, so obvious to the viewer? Well, brains are funny. Eyes too. What we see depends a great deal on what we expect to see.

    Pillar of Garbage

    I Was Tricked by A.I. (And It’s Big Tech’s Fault) (Pillar of Garbage)

  • Stop Using Generative AI as a Search Engine

    I know people are sick of talking about glue on pizza, but I find the large-scale degradation of our information environment that has already taken place shocking. (Just search Amazon if you want to see what I mean.) This happens in small ways, like Google’s AI wrongly saying that male foxes mate for life, and big ones, like spreading false information around a major news event. What good is an answer machine that nobody can trust?

    Elizabeth Lopatto

    Stop using generative AI as a search engine (The Verge)

  • Killer robots hiding in plain sight

    As more and more decisions about human fates are made by algorithms, a lack of accountability and transparency will elevate heartless treatment driven by efficiency devoid of empathy. Humans become mere data shadows.

    Per Axbom

    Killer robots hiding in plain sight (Axbom)