Tag: Generative AI

  • The Idea of Computer-Generated Employees Is Weird

    The very idea of having an “employee” that is just a large language model and a generated portrait seems bizarre to me. A boss immediately hitting on the large language model when he sees the generated portrait is even weirder. Let’s just not do this.

    On Monday, Business Insider co-founder Henry Blodget published a blog on his new Substack about a “native-AI newsroom.” Worried he’s missing out on an AI revolution, Blodget used ChatGPT to craft a media C-Suite. Moments after seeing the AI-generated headshot for his ChatGPT-powered media exec, he hit on her.

    Matthew Gault

    Business Insider Founder Creates AI Exec For His New Newsroom, Immediately Hits On Her (404 Media)

  • Trusting LLM Generated Code Is a Security Risk

    The rise of LLM-powered code generation tools is reshaping how developers write software – and introducing new risks to the software supply chain in the process.

    These AI coding assistants, like large language models in general, have a habit of hallucinating. They suggest code that incorporates software packages that don’t exist.

    Running that code should result in an error when importing a non-existent package. But miscreants have realized that they can hijack the hallucination for their own benefit.

    Thomas Claburn

    LLMs can’t stop making up software dependencies and sabotaging everything (The Register)
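
    The mechanism described above is easy to probe for yourself. Below is a minimal sketch in Python, assuming PyPI’s public JSON API; the package names in `suggested` are hypothetical stand-ins for an assistant’s output, not packages from the article.

    ```python
    # Sanity-check LLM-suggested dependencies against PyPI before installing.
    # The names in `suggested` are hypothetical examples for illustration.
    import json
    import urllib.error
    import urllib.request

    def pypi_metadata(package):
        """Return PyPI metadata for `package`, or None if it was never published."""
        url = f"https://pypi.org/pypi/{package}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                return json.load(response)
        except urllib.error.HTTPError:
            return None  # 404: no such package on the index

    suggested = ["requests", "fastjson-utils"]  # second name is invented
    for name in suggested:
        meta = pypi_metadata(name)
        if meta is None:
            print(f"{name}: not on PyPI; likely hallucinated, do not install")
        else:
            releases = meta.get("releases", {})
            print(f"{name}: registered, {len(releases)} release(s)")
    ```

    Note the limitation, which is the article’s whole point: once attackers register a hallucinated name, mere existence on the index proves nothing. Release history, maintainers, and download counts are worth checking before installing, and pinning dependencies with hashes limits the blast radius.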

  • Hallucinations in Law

    When even folks who are well-educated and held to high standards are falling for the lies of generative AI, the tech companies creating these products are clearly failing their customers. It needs to be absolutely clear to anyone using a generative AI product that none of the output from it can be trusted no matter how plausible it sounds.

    Much like a chain saw or other useful but potentially dangerous tools, one must understand the tools they are using and use those tools with caution. It should go without saying that any use of artificial intelligence must be consistent with counsel’s ethical and professional obligations. In other words, the use of artificial intelligence must be accompanied by the application of actual intelligence in its execution.

    Judge Mark J. Dinsmore

    Judges Are Fed up With Lawyers Using AI That Hallucinate Court Cases (404 Media)

  • OpenAI Furious DeepSeek Might Have Stolen All the Data OpenAI Stole From Us

    It is, as many have already pointed out, incredibly ironic that OpenAI, a company that has been obtaining large amounts of data from all of humankind largely in an “unauthorized manner,” and, in some cases, in violation of the terms of service of those from whom they have been taking, is now complaining about the very practices by which it has built its company.

    Jason Koebler

    OpenAI Furious DeepSeek Might Have Stolen All the Data OpenAI Stole From Us (404 Media)

  • The LLMentalist Effect

    One of the issues during this research—one that has perplexed me—has been that many people are convinced that language models, or specifically chat-based language models, are intelligent.

    But there isn’t any mechanism inherent in large language models (LLMs) that would seem to enable this and, if real, it would be completely unexplained.

    LLMs are not brains and do not meaningfully share any of the mechanisms that animals or people use to reason or think.

    LLMs are a mathematical model of language tokens. You give an LLM text, and it will give you a mathematically plausible response to that text.

    There is no reason to believe that it thinks or reasons—indeed, every AI researcher and vendor to date has repeatedly emphasised that these models don’t think.

    Baldur Bjarnason

    The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con (Out of the Software Crisis)
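
    Bjarnason’s point about “mathematically plausible” text can be made concrete with a toy model. The sketch below is my own illustration, not anything from the essay: a bigram model that picks each next word purely from counted frequencies. Production LLMs are enormously larger neural networks, but the principle (next-token probabilities, with no comprehension anywhere) is the same.

    ```python
    # Toy bigram language model: generates "plausible" continuations by
    # sampling the next word from observed frequencies. It has no model
    # of meaning at all, only of which token tends to follow which.
    import random
    from collections import Counter, defaultdict

    corpus = (
        "the model predicts the next token "
        "the model has no idea what a token means "
        "the next token is chosen by probability"
    ).split()

    # Count how often each word follows each other word.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def generate(start, length=8):
        """Sample a statistically plausible continuation of `start`."""
        words = [start]
        for _ in range(length):
            counts = follows.get(words[-1])
            if not counts:
                break
            choices, weights = zip(*counts.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))  # e.g. "the model has no idea what a token means"
    ```

    The output often reads like a sensible sentence about language models, yet nothing here “knows” anything; scaled up by many orders of magnitude, that is the gap the psychic’s-con framing points at.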

  • Never Forgive Them

    The people running the majority of internet services have used a combination of monopolies and a cartel-like commitment to growth-at-all-costs thinking to make war with the user, turning the customer into something between a lab rat and an unpaid intern, with the goal to juice as much value from the interaction as possible. To be clear, tech has always had an avaricious streak, and it would be naive to suggest otherwise, but this moment feels different. I’m stunned by the extremes tech companies are going to in order to extract value from customers, but also by the insidious way they’ve gradually degraded their products.

    Edward Zitron

    Never Forgive Them (Where’s Your Ed At?)

  • Tricked by A.I.

    A good video by a content creator who accidentally included some AI-generated footage in one of their recent videos. It looks at how the footage ended up there, and at how the internet is filling up with generated slop that you need to be increasingly vigilant about if you want to avoid it.

    I do try to be thorough. So how didn’t I spot the molten nonsense coins, so obvious to the viewer? Well, brains are funny. Eyes too. What we see depends a great deal on what we expect to see.

    Pillar of Garbage

    I Was Tricked by A.I. (And It’s Big Tech’s Fault) (Pillar of Garbage)

  • Stop Using Generative AI as a Search Engine

    I know people are sick of talking about glue on pizza, but I find the large-scale degradation of our information environment that has already taken place shocking. (Just search Amazon if you want to see what I mean.) This happens in small ways, like Google’s AI wrongly saying that male foxes mate for life, and big ones, like spreading false information around a major news event. What good is an answer machine that nobody can trust?

    Elizabeth Lopatto

    Stop using generative AI as a search engine (The Verge)

  • Information literacy and chatbots as search

    If someone uses an LLM as a replacement for search, and the output they get is correct, this is just by chance. Furthermore, a system that is right 95% of the time is arguably more dangerous than one that is right 50% of the time. People will be more likely to trust the output, and likely less able to fact check the 5%.

    But even if the chatbots on offer were built around something other than LLMs, something that could reliably get the right answer, they’d still be a terrible technology for information access.

    Professor Emily Bender

    Information literacy and chatbots as search (Mystery AI Hype Theater 3000)