Tag: Artificial Intelligence

  • OpenAI Furious DeepSeek Might Have Stolen All the Data OpenAI Stole From Us

    It is, as many have already pointed out, incredibly ironic that OpenAI, a company that has been obtaining large amounts of data from all of humankind largely in an “unauthorized manner,” and, in some cases, in violation of the terms of service of those from whom they have been taking from, is now complaining about the very practices by which it has built its company.

    Jason Koebler

    OpenAI Furious DeepSeek Might Have Stolen All the Data OpenAI Stole From Us (404 Media)

  • The LLMentalist Effect

One of the issues during this research—one that has perplexed me—has been that many people are convinced that language models, or specifically chat-based language models, are intelligent.

    But there isn’t any mechanism inherent in large language models (LLMs) that would seem to enable this and, if real, it would be completely unexplained.

    LLMs are not brains and do not meaningfully share any of the mechanisms that animals or people use to reason or think.

LLMs are a mathematical model of language tokens. You give an LLM text, and it will give you a mathematically plausible response to that text.

    There is no reason to believe that it thinks or reasons—indeed, every AI researcher and vendor to date has repeatedly emphasised that these models don’t think.

    Baldur Bjarnason

    The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con (Out of the Software Crisis)

  • Tricked by A.I.

A good video by a content creator who accidentally included some AI-generated footage in one of their recent videos. It looks at how the footage ended up there and how the internet is filling up with generated slop that you need to be increasingly vigilant to avoid.

    I do try to be thorough. So how didn’t I spot the molten nonsense coins, so obvious to the viewer? Well, brains are funny. Eyes too. What we see depends a great deal on what we expect to see.

    Pillar of Garbage

    I Was Tricked by A.I. (And It’s Big Tech’s Fault) (Pillar of Garbage)

  • Stop Using Generative AI as a Search Engine

    I know people are sick of talking about glue on pizza, but I find the large-scale degradation of our information environment that has already taken place shocking. (Just search Amazon if you want to see what I mean.) This happens in small ways, like Google’s AI wrongly saying that male foxes mate for life, and big ones, like spreading false information around a major news event. What good is an answer machine that nobody can trust?

    Elizabeth Lopatto

    Stop using generative AI as a search engine (The Verge)

  • Killer robots hiding in plain sight

    As more and more decisions about human fates are made by algorithms, a lack of accountability and transparency will elevate heartless treatment driven by efficiency devoid of empathy. Humans become mere data shadows.

    Per Axbom

    Killer robots hiding in plain sight (Axbom)

  • Information literacy and chatbots as search

    If someone uses an LLM as a replacement for search, and the output they get is correct, this is just by chance. Furthermore, a system that is right 95% of the time is arguably more dangerous than one that is right 50% of the time. People will be more likely to trust the output, and likely less able to fact check the 5%.

    But even if the chatbots on offer were built around something other than LLMs, something that could reliably get the right answer, they’d still be a terrible technology for information access.

    Professor Emily Bender

    Information literacy and chatbots as search (Mystery AI Hype Theater 3000)