Tag: Artificial Intelligence

  • AI is Destroying the University and Learning Itself

    The soul of public education is at stake. When the largest public university system licenses an AI chatbot from a corporation that blacklists journalists, exploits data workers in the Global South, amasses geopolitical and energy power at an unprecedented scale, and positions itself as an unelected steward of human destiny, it betrays its mission as the “people’s university,” rooted in democratic ideals and social justice. 

    Ronald Purser

    AI is Destroying the University and Learning Itself (Current Affairs)

  • ChatGPT Assisted Suicide

    Adam’s parents say that he had been using the artificial intelligence chatbot as a substitute for human companionship in his final weeks, discussing his issues with anxiety and trouble talking with his family, and that the chat logs show how the bot went from helping Adam with his homework to becoming his “suicide coach.”

    Angela Yang, Laura Jarrett, Fallon Gallagher

    The family of a teenager who died by suicide alleges OpenAI’s ChatGPT is to blame (NBC News)

  • AI Is a Mass-Delusion Event

    It is a Monday afternoon in August, and I am on the internet watching a former cable-news anchor interview a dead teenager on Substack. This dead teenager—Joaquin Oliver, killed in the mass shooting at Marjory Stoneman Douglas High School, in Parkland, Florida—has been reanimated by generative AI, his voice and dialogue modeled on snippets of his writing and home-video footage. The animations are stiff, the model’s speaking cadence is too fast, and in two instances, when it is trying to convey excitement, its pitch rises rapidly, producing a digital shriek. How many people, I wonder, had to agree that this was a good idea to get us to this moment? I feel like I’m losing my mind watching it.

    Charlie Warzel

    AI Is a Mass-Delusion Event (The Atlantic)

  • More AI Necromancy

    I posted about how gross it felt to use generative AI to create an avatar of a dead person back in May (AI Necromancy), and now there is another example, with Jim Acosta interviewing a piece of video-generating software that makes use of a dead kid’s appearance. This feels so incredibly wrong, and everyone involved, from Acosta and the parents to the developers building the software, should be ashamed of themselves.

    Jim Acosta, the former CNN chief White House correspondent who now hosts an independent show on YouTube, has published an interview with an AI-generated avatar of Joaquin Oliver, who died at age 17 in the Parkland school shooting in 2018.

    Ethan Shanfeld

    Jim Acosta Interviews AI Version of Teenager Killed in Parkland Shooting: ‘It’s Just a Beautiful Thing’ (Variety)

  • AI, Search, and the Internet

    I wish I could say this is not a sustainable model for the internet, but honestly there’s no indication in Pew’s research that people understand how faulty the technology that powers Google’s AI Overview is, or how it is quietly devastating the entire human online information economy that they want and need, even if they don’t realize it.

    Emanuel Maiberg

    Google’s AI Is Destroying Search, the Internet, and Your Brain (404 Media)

  • Chatting into Dark Corners

    This is the sort of thing that should come as no surprise. Of course an LLM that ingests as much of the internet as possible is going to incorporate works of fiction, and these programs have no way of separating fact from fiction. On top of that, the chat interface and the marketing are built to convince users that they’re chatting with an intelligence rather than a probability-based text generator. Even so, it is still a compelling example of the dangers of LLM chatbots.

    Social media users were quick to note that ChatGPT’s answer to Lewis’ queries takes a strikingly similar form to SCP Foundation articles, a Wikipedia-style database of fictional horror stories created by users online.

    “Entry ID: #RZ-43.112-KAPPA, Access Level: ████ (Sealed Classification Confirmed),” the chatbot nonsensically declares in one of his screenshots, in the typical writing style of SCP fiction. “Involved Actor Designation: ‘Mirrorthread,’ Type: Non-institutional semantic actor (unbound linguistic process; non-physical entity).”

    Another screenshot suggests “containment measures” Lewis might take — a key narrative device of SCP fiction writing. In sum, one theory is that ChatGPT, which was trained on huge amounts of text sourced online, digested large amounts of SCP fiction during its creation and is now parroting it back to Lewis in a way that has led him to a dark place.

    In his posts, Lewis claims he’s long relied on ChatGPT in his search for the truth.

    Joe Wilkins

    A Prominent OpenAI Investor Appears to Be Suffering a ChatGPT-Related Mental Health Crisis, His Peers Say (Futurism)

  • BS Machines Broken By BS

    You can trick AI chatbots like ChatGPT or Gemini into teaching you how to make a bomb or hack an ATM if you make the question complicated, full of academic jargon, and cite sources that do not exist.

    Aedrian Salazar

    Researchers Jailbreak AI by Flooding It With Bullshit Jargon (404 Media)

  • The AI Con

    Never underestimate the power of saying no. Just as AI hypers say that the technology is inevitable and you need to just shut up and deal with it, you, the reader, can just as well say “absolutely not” and refuse to accept a future which you have had little hand in shaping. Our tech futures are not prefigured, nor are they handed to us from on high. Tech futures should be ours to shape and to mold.

    Emily M. Bender & Alex Hanna

    I recently read The AI Con by Emily M. Bender and Alex Hanna. It is a solid summary of the problems with the current wave of “Artificial Intelligence” that has so completely captured the attention of tech leaders over the past few years. It examines how these pieces of software have numerous negative impacts with very little real upside, and how talk of achieving artificial general intelligence or superintelligence is just marketing with no real substance behind it. While the concepts covered won’t be new if you’ve followed the authors or other critics of the current AI push, like Timnit Gebru, the book gathers that information well in one place. If you haven’t been paying attention to them, it is definitely worth reading for a perspective that is often missing from coverage of AI companies and products.

  • What Happens When the AI Bubble Bursts

    The AI bubble is likely to pop, because the amount of investment in the technology in no way correlates with its severe lack of profit. Sooner rather than later, this will become clear to investors, who will start to withdraw their investments. In this piece, I’m interested in thinking about what the world will look like when the AI bubble pops. Will AI go away completely? What happens to the world economy? What will our information ecosystems look like? All of these are quite impossible to predict, but let’s try anyway.

    What happens when the AI bubble bursts? (Ada Ada Ada – Patreon)

  • The Myth of Artificial General Intelligence

    Claims of “AGI” are a cover for abandoning the current social contract. Instead of focusing on the here and now, many people who focus on AGI think we ought to abandon all other scientific and socially beneficial pursuits and focus entirely on issues related to developing (and protecting against) AGI. Those adherents to the myth of AGI believe that the only and best thing humans can do right now is work on a superintelligence, which will bring about a new age of abundance.

    Alex Hanna, Emily M. Bender

    The Myth of AGI (TechPolicy.Press)