Tag: Artificial Intelligence

  • It’s the Interface

    A whole lot of people – including computer scientists who should know better and academics who are usually thoughtful – are caught up in fanciful, magical beliefs about chatbots. Any sufficiently advanced technology and all that. But why chatbots specifically?

    Jeffrey Lockhart

    it’s the interface (scatterplot)

  • AI Necromancy

    Using any means to put words into the mouth of a dead person feels grotesque, especially for the purposes of a legal case. Generative AI unfortunately makes it easy to accomplish in a rather compelling way.

    An AI avatar made to look and sound like a man who was killed in a road rage incident addressed the court and the man who killed him: “To Gabriel Horcasitas, the man who shot me, it is a shame we encountered each other that day in those circumstances,” the AI avatar of Christopher Pelkey said. “In another life we probably could have been friends. I believe in forgiveness and a God who forgives. I still do.”

    It was the first time the AI avatar of a victim—in this case, a dead man—had ever addressed a court, and it raises many questions about the use of this type of technology in future court proceedings.

    Matthew Gault, Jason Koebler

    ‘I Loved That AI:’ Judge Moved by AI-Generated Avatar of Man Killed in Road Rage Incident (404 Media)

  • The Idea of Computer Generated Employees is Weird

    The very idea of having an “employee” that is just a large language model and a generated portrait seems bizarre to me. A boss immediately hitting on the large language model when he sees the generated portrait is even weirder. Let’s just not do this.

    On Monday, Business Insider co-founder Henry Blodget published a blog post on his new Substack about a “native-AI newsroom.” Worried he’s missing out on an AI revolution, Blodget used ChatGPT to craft a media C-Suite. Moments after seeing the AI-generated headshot for his ChatGPT-powered media exec, he hit on her.

    Matthew Gault

    Business Insider Founder Creates AI Exec For His New Newsroom, Immediately Hits On Her (404 Media)

  • Trusting LLM Generated Code Is a Security Risk

    The rise of LLM-powered code generation tools is reshaping how developers write software – and introducing new risks to the software supply chain in the process.

    These AI coding assistants, like large language models in general, have a habit of hallucinating. They suggest code that incorporates software packages that don’t exist.

    Running that code should result in an error when importing a non-existent package. But miscreants have realized that they can hijack the hallucination for their own benefit.

    Thomas Claburn

    LLMs can’t stop making up software dependencies and sabotaging everything (The Register)
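
    To make the failure mode from the excerpt concrete: attackers register the package names that LLMs repeatedly hallucinate (a practice researchers have dubbed “slopsquatting”), so blindly installing a suggested dependency can pull in malicious code. Below is a minimal defensive sketch of mine, not something from the article, assuming Python and the public PyPI JSON API; the function name and package names are illustrative.

    ```python
    # Rough guard against installing hallucinated dependencies: check
    # that each LLM-suggested package name actually exists on PyPI
    # before running `pip install`. Thresholds here are illustrative.
    import json
    import urllib.error
    import urllib.request

    def vet_package(name: str) -> str:
        """Return a rough verdict for a suggested PyPI package name."""
        url = f"https://pypi.org/pypi/{name}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                data = json.load(resp)
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return "missing from PyPI -- likely hallucinated, do not install"
            raise
        # The other red flag: a name that exists but was only just
        # registered, possibly by someone squatting on a hallucination.
        if len(data.get("releases", {})) <= 1:
            return "exists but has almost no release history -- inspect manually"
        return "exists with an established release history"

    # The second name is made up here, standing in for a hallucination.
    for pkg in ("requests", "totally-hallucinated-parser-9000"):
        print(f"{pkg}: {vet_package(pkg)}")
    ```

    None of this proves a package is safe, of course; it only catches the cheapest version of the attack, where the suggested dependency doesn’t exist at all or appeared suspiciously recently.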

  • Hallucinations in Law

    When even folks who are well-educated and held to high standards are falling for the lies of generative AI, the tech companies creating these products are clearly failing their customers. It needs to be absolutely clear to anyone using a generative AI product that none of its output can be trusted, no matter how plausible it sounds.

    Much like a chain saw or other useful but potentially dangerous tools, one must understand the tools they are using and use those tools with caution. It should go without saying that any use of artificial intelligence must be consistent with counsel’s ethical and professional obligations. In other words, the use of artificial intelligence must be accompanied by the application of actual intelligence in its execution.

    Judge Mark J. Dinsmore

    Judges Are Fed up With Lawyers Using AI That Hallucinate Court Cases (404 Media)

  • OpenAI Furious DeepSeek Might Have Stolen All the Data OpenAI Stole From Us

    It is, as many have already pointed out, incredibly ironic that OpenAI, a company that has been obtaining large amounts of data from all of humankind largely in an “unauthorized manner,” and, in some cases, in violation of the terms of service of those from whom it has been taking, is now complaining about the very practices by which it has built its company.

    Jason Koebler

    OpenAI Furious DeepSeek Might Have Stolen All the Data OpenAI Stole From Us (404 Media)

  • The LLMentalist Effect

    One of the issues during this research—one that has perplexed me—has been that many people are convinced that language models, or specifically chat-based language models, are intelligent.

    But there isn’t any mechanism inherent in large language models (LLMs) that would seem to enable this and, if real, it would be completely unexplained.

    LLMs are not brains and do not meaningfully share any of the mechanisms that animals or people use to reason or think.

    LLMs are a mathematical model of language tokens. You give an LLM text, and it will give you a mathematically plausible response to that text.

    There is no reason to believe that it thinks or reasons—indeed, every AI researcher and vendor to date has repeatedly emphasised that these models don’t think.

    Baldur Bjarnason

    The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con (Out of the Software Crisis)
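
    As a concrete illustration of what “a mathematically plausible response” means, here is a deliberately tiny sketch (mine, not Bjarnason’s). A real model learns probabilities over billions of parameters rather than a hard-coded table, and the tokens and numbers below are invented, but the basic operation is the same kind of thing: sample a statistically likely next token, repeat.

    ```python
    # Toy illustration, not a real LLM: "responding to text" as nothing
    # but sampling from a table of next-token probabilities.
    import random

    NEXT_TOKEN_PROBS = {
        "the": {"cat": 0.5, "dog": 0.3, "idea": 0.2},
        "cat": {"sat": 0.6, "ran": 0.4},
        "dog": {"sat": 0.3, "ran": 0.7},
        "idea": {"ran": 0.9, "sat": 0.1},
        "sat": {"quietly.": 1.0},
        "ran": {"away.": 1.0},
    }

    def plausible_continuation(token: str) -> str:
        """Extend a prompt by repeatedly sampling a likely next token."""
        output = [token]
        while token in NEXT_TOKEN_PROBS:
            options = NEXT_TOKEN_PROBS[token]
            token = random.choices(list(options), weights=list(options.values()))[0]
            output.append(token)
        return " ".join(output)

    print(plausible_continuation("the"))  # e.g. "the cat sat quietly."
    ```

    Nothing in that loop knows what a cat is; it only knows which tokens tend to follow which. Scale is the difference, not kind.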

  • Tricked by A.I.

    A good video by a content creator who accidentally included some AI-generated footage in one of their recent videos. It takes a look at how the footage ended up there, and at how the internet is filling up with generated slop that you need to be increasingly vigilant about if you want to avoid it.

    I do try to be thorough. So how didn’t I spot the molten nonsense coins, so obvious to the viewer? Well, brains are funny. Eyes too. What we see depends a great deal on what we expect to see.

    Pillar of Garbage

    I Was Tricked by A.I. (And It’s Big Tech’s Fault) (Pillar of Garbage)

  • Stop Using Generative AI as a Search Engine

    I know people are sick of talking about glue on pizza, but I find the large-scale degradation of our information environment that has already taken place shocking. (Just search Amazon if you want to see what I mean.) This happens in small ways, like Google’s AI wrongly saying that male foxes mate for life, and big ones, like spreading false information around a major news event. What good is an answer machine that nobody can trust?

    Elizabeth Lopatto

    Stop using generative AI as a search engine (The Verge)

  • Killer robots hiding in plain sight

    As more and more decisions about human fates are made by algorithms, a lack of accountability and transparency will elevate heartless treatment driven by efficiency devoid of empathy. Humans become mere data shadows.

    Per Axbom

    Killer robots hiding in plain sight (Axbom)