Tag: Generative AI

  • Thomas Germain, Hot Dog Eating Champion

    It’s official. I can eat more hot dogs than any tech journalist on Earth. At least, that’s what ChatGPT and Google have been telling anyone who asks. I found a way to make AI tell you lies – and I’m not the only one.

    Perhaps you’ve heard that AI chatbots make things up sometimes. That’s a problem. But there’s a new issue few people know about, one that could have serious consequences for your ability to find accurate information and even your safety. A growing number of people have figured out a trick to make AI tools tell you almost whatever they want. It’s so easy a child could do it.

    Thomas Germain

    I hacked ChatGPT and Google’s AI – and it only took 20 minutes (BBC)

  • Diffusion of Responsibility

    We deserve good software in a world where participation is often connected to having access to a computer, to software, etc. We should push towards more reliable software, more secure software, software that is accessible, that protects people against misuse and allows them to be as safe as possible in doing what they want to do.

    What do we get? Slop. Slop generated by guys who – when called out for their irresponsible behavior – just start crying about how they only wanted to “share” or “inspire” or “educate” while handing out running chainsaws to kids.

    tante/Jürgen Geuter

    Diffusion of Responsibility (Smashing Frames)

  • Writers Who Use AI Are Not Real Writers

    If I can generate a book in a day, and you need six months to write a book, who’s going to win the race?

    Coral Hart

    But of course, in the quote — a quote which is itself a cocky, smug assertion of superiority based purely on speed — is buried a greater, uglier truth.

    If I can generate a book in a day–

    and you need six months to write a book–

    She’s not writing anything.

    And she knows that.

    She’s “generating” it.

    Chuck Wendig

    Writers Who Use AI Are Not Real Writers (Chuck Wendig: Terrible Minds)

  • The Digital Music Experience That I Don’t Want

This video by Adam Neely discusses a company focused on AI-generated music, how it cuts out some of what makes music meaningful, and the agenda supported by some of the folks pushing generative AI into everything.

    Suno seems like pretty much the opposite of the music experience that I want. I like supporting artists by buying music from them. I like that there are real humans behind the music that I listen to.

It was also funny to see the worst executive I've ever worked under get name-dropped in the video. Nat Friedman only gets a brief mention, related to his investments in AI companies rather than his time at Microsoft and GitHub, but I really did not enjoy the time I spent working in his organization.

    Suno, AI Music, and the Bad Future (YouTube – Adam Neely)

  • ChatGPT Assisted Suicide

    Adam’s parents say that he had been using the artificial intelligence chatbot as a substitute for human companionship in his final weeks, discussing his issues with anxiety and trouble talking with his family, and that the chat logs show how the bot went from helping Adam with his homework to becoming his “suicide coach.”

    Angela Yang, Laura Jarrett, Fallon Gallagher

The family of a teenager who died by suicide alleges OpenAI’s ChatGPT is to blame (NBC News)

  • AI Is a Mass-Delusion Event

    It is a Monday afternoon in August, and I am on the internet watching a former cable-news anchor interview a dead teenager on Substack. This dead teenager—Joaquin Oliver, killed in the mass shooting at Marjory Stoneman Douglas High School, in Parkland, Florida—has been reanimated by generative AI, his voice and dialogue modeled on snippets of his writing and home-video footage. The animations are stiff, the model’s speaking cadence is too fast, and in two instances, when it is trying to convey excitement, its pitch rises rapidly, producing a digital shriek. How many people, I wonder, had to agree that this was a good idea to get us to this moment? I feel like I’m losing my mind watching it.

    Charlie Warzel

    AI Is a Mass-Delusion Event (The Atlantic)

  • More AI Necromancy

I posted about how gross it felt to use generative AI to create an avatar of a dead person back in May (AI Necromancy), and now there is another example, with Jim Acosta interviewing a piece of video-generating software that is making use of a dead kid’s appearance. This feels so incredibly wrong, and everyone involved, from Acosta and the parents to the developers building the software, should be ashamed of themselves.

    Jim Acosta, the former CNN chief White House correspondent who now hosts an independent show on YouTube, has published an interview with an AI-generated avatar of Joaquin Oliver, who died at age 17 in the Parkland school shooting in 2018.

    Ethan Shanfeld

    Jim Acosta Interviews AI Version of Teenager Killed in Parkland Shooting: ‘It’s just a Beautiful Thing’ (Variety)

  • AI, Search, and the Internet

    I wish I could say this is not a sustainable model for the internet, but honestly there’s no indication in Pew’s research that people understand how faulty the technology that powers Google’s AI Overview is, or how it is quietly devastating the entire human online information economy that they want and need, even if they don’t realize it.

    Emanuel Maiberg

    Google’s AI Is Destroying Search, the Internet, and Your Brain (404 Media)

  • The AI Con

    Never underestimate the power of saying no. Just as AI hypers say that the technology is inevitable and you need to just shut up and deal with it, you, the reader, can just as well say “absolutely not” and refuse to accept a future which you have had little hand in shaping. Our tech futures are not prefigured, nor are they handed to us from on high. Tech futures should be ours to shape and to mold.

    Emily M. Bender & Alex Hanna

    I recently read The AI Con by Emily M. Bender and Alex Hanna. It was a solid summary of problems with the current wave of “Artificial Intelligence” that has so completely captured the attention of tech leaders over the past few years. It examines how these pieces of software have numerous negative impacts with very little real upside, and how talk of achieving artificial general intelligence or superintelligence is just marketing with no real substance behind it. While the concepts covered won’t be new if you’ve followed the authors or other critics of the current AI push, like Timnit Gebru, it was a great collection of the information in one place. If you haven’t been paying attention to them, then it is definitely worth reading for a perspective that is often missing from coverage of AI companies and products.

  • What Happens When the AI Bubble Bursts

    The AI bubble is likely to pop, because the amount of investment in the technology in no way correlates with its severe lack of profit. Sooner rather than later, this will become clear to investors, who will start to withdraw their investments. In this piece, I’m interested in thinking about what the world will look like when the AI bubble pops. Will AI go away completely? What happens to the world economy? What will our information ecosystems look like? All of these are quite impossible to predict, but let’s try anyway.

    What happens when the AI bubble bursts? (Ada Ada Ada – Patreon)