Tag: Generative AI

  • The Myth of Artificial General Intelligence

    Claims of “AGI” are a cover for abandoning the current social contract. Instead of focusing on the here and now, many people who focus on AGI think we ought to abandon all other scientific and socially beneficial pursuits and focus entirely on issues related to developing (and protecting against) AGI. Those adherents to the myth of AGI believe that the only and best thing humans can do right now is work on a superintelligence, which will bring about a new age of abundance.

    Alex Hanna, Emily M. Bender

    The Myth of AGI (TechPolicy.Press)

  • The Who Cares Era

    In the Who Cares Era, the most radical thing you can do is care.

    Dan Sinker

    The Who Cares Era (Dan Sinker /blog)

  • It’s the Interface

    A whole lot of people – including computer scientists who should know better and academics who are usually thoughtful – are caught up in fanciful, magical beliefs about chatbots. Any sufficiently advanced technology and all that. But why chatbots specifically?

    Jeffrey Lockhart

    it’s the interface (scatterplot)

  • AI Necromancy

    Using any means to put words into the mouth of a dead person feels grotesque, especially for the purposes of a legal case. Generative AI unfortunately makes it easy to accomplish in a rather compelling way.

    An AI avatar made to look and sound like a man who was killed in a road rage incident addressed the court and the man who killed him: “To Gabriel Horcasitas, the man who shot me, it is a shame we encountered each other that day in those circumstances,” the AI avatar of Christopher Pelkey said. “In another life we probably could have been friends. I believe in forgiveness and a God who forgives. I still do.”

    It was the first time an AI avatar of a victim—in this case, a dead man—had addressed a court, and it raises many questions about the use of this type of technology in future court proceedings.

    Matthew Gault, Jason Koebler

    ‘I Loved That AI:’ Judge Moved by AI-Generated Avatar of Man Killed in Road Rage Incident (404 Media)

  • The Idea of Computer Generated Employees is Weird

    The very idea of having an “employee” that is just a large language model and a generated portrait seems bizarre to me. A boss immediately hitting on the large language model when he sees the generated portrait is even weirder. Let’s just not do this.

    On Monday, Business Insider co-founder Henry Blodget published a blog post on his new Substack about a “native-AI newsroom.” Worried he was missing out on an AI revolution, Blodget used ChatGPT to craft a media C-Suite. Moments after seeing the AI-generated headshot for his ChatGPT-powered media exec, he hit on her.

    Matthew Gault

    Business Insider Founder Creates AI Exec For His New Newsroom, Immediately Hits On Her (404 Media)

  • Trusting LLM Generated Code Is a Security Risk

    The rise of LLM-powered code generation tools is reshaping how developers write software – and introducing new risks to the software supply chain in the process.

    These AI coding assistants, like large language models in general, have a habit of hallucinating. They suggest code that incorporates software packages that don’t exist.

    Running that code should result in an error when importing a non-existent package. But miscreants have realized that they can hijack the hallucination for their own benefit.

    Thomas Claburn

    LLMs can’t stop making up software dependencies and sabotaging everything (The Register)
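    The hijacked-hallucination attack described above works because developers install whatever package a suggested `import` names. One pragmatic guard, sketched here in Python (an illustrative assumption, not a technique from the article), is to statically check which top-level modules in generated code actually resolve in your environment before running or installing anything. Note this only flags unresolvable names; a malicious package that a miscreant has already published under a hallucinated name would still resolve.

```python
# Sketch: vet import names in LLM-generated code before acting on them.
# Unresolvable names are candidate hallucinations that should be reviewed,
# not auto-installed from a public index.
import ast
import importlib.util


def unresolved_imports(source: str) -> set[str]:
    """Return top-level module names imported by `source` that do not
    resolve in the current environment (possible hallucinations)."""
    tree = ast.parse(source)
    names: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names.add(node.module.split(".")[0])
    return {n for n in names if importlib.util.find_spec(n) is None}


generated = "import json\nimport totally_made_up_pkg\n"
print(unresolved_imports(generated))  # {'totally_made_up_pkg'}
```

    A check like this belongs in review tooling or CI, where a flagged name prompts a human to verify the package exists and is the one they intend, rather than letting `pip install` resolve it blindly.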

  • Hallucinations in Law

    When even folks who are well-educated and held to high standards are falling for the lies of generative AI, the tech companies creating these products are clearly failing their customers. It needs to be absolutely clear to anyone using a generative AI product that none of the output from it can be trusted no matter how plausible it sounds.

    Much like a chain saw or other useful but potentially dangerous tools, one must understand the tools they are using and use those tools with caution. It should go without saying that any use of artificial intelligence must be consistent with counsel’s ethical and professional obligations. In other words, the use of artificial intelligence must be accompanied by the application of actual intelligence in its execution.

    Judge Mark J. Dinsmore

    Judges Are Fed up With Lawyers Using AI That Hallucinate Court Cases (404 Media)

  • OpenAI Furious DeepSeek Might Have Stolen All the Data OpenAI Stole From Us

    It is, as many have already pointed out, incredibly ironic that OpenAI, a company that has been obtaining large amounts of data from all of humankind largely in an “unauthorized manner,” and, in some cases, in violation of the terms of service of those from whom they have been taking from, is now complaining about the very practices by which it has built its company.

    Jason Koebler

    OpenAI Furious DeepSeek Might Have Stolen All the Data OpenAI Stole From Us (404 Media)

  • The LLMentalist Effect

    One of the issues during this research—one that has perplexed me—has been that many people are convinced that language models, or specifically chat-based language models, are intelligent.

    But there isn’t any mechanism inherent in large language models (LLMs) that would seem to enable this and, if real, it would be completely unexplained.

    LLMs are not brains and do not meaningfully share any of the mechanisms that animals or people use to reason or think.

    LLMs are a mathematical model of language tokens. You give an LLM text, and it will give you a mathematically plausible response to that text.

    There is no reason to believe that it thinks or reasons—indeed, every AI researcher and vendor to date has repeatedly emphasised that these models don’t think.

    Baldur Bjarnason

    The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con (Out of the Software Crisis)
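    Bjarnason’s point that an LLM is “a mathematical model of language tokens” can be made concrete with a toy sketch (mine, not his): a bigram model that returns the statistically most plausible next token. The mechanism is pure counting; nothing in it represents meaning, belief, or reasoning, and real LLMs differ in scale and architecture, not in this basic character.

```python
# Toy "language model": count which token follows which, then emit the
# most frequent follower. Plausibility here is literally just frequency.
from collections import Counter, defaultdict


def train_bigrams(corpus: str) -> dict:
    """Map each token to a Counter of the tokens that follow it."""
    counts: dict = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts


def most_plausible_next(counts: dict, token: str):
    """Return the most frequent follower of `token`, or None if unseen."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]


corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigrams(corpus)
print(most_plausible_next(model, "the"))  # 'cat' (follows 'the' most often)
```

    The output looks like a sensible continuation, yet the model manifestly does not “know” anything about cats, which is the gap the psychic’s-con analogy is pointing at.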

  • Never Forgive Them

    The people running the majority of internet services have used a combination of monopolies and a cartel-like commitment to growth-at-all-costs thinking to make war with the user, turning the customer into something between a lab rat and an unpaid intern, with the goal to juice as much value from the interaction as possible. To be clear, tech has always had an avaricious streak, and it would be naive to suggest otherwise, but this moment feels different. I’m stunned by the extremes tech companies are going to in order to extract value from customers, but also by the insidious way they’ve gradually degraded their products.

    Edward Zitron

    Never Forgive Them (Where’s Your Ed At?)