Category: Technology

  • It Matters

    “Who cares? It doesn’t matter anyway.” I’ve come to expect these words in my social media replies to my own work, and elsewhere in response to other journalists doing critical reporting on the abuses of the Trump regime.

    And these aren’t just a few social media responses; they’re expressions of a much broader resignation I’m seeing on- and offline: That caring is somehow naive. That documenting the truth is pointless. That hope is for fools.

    Let me be clear: It fucking matters. Truth matters. Documentation matters. Fighting corruption matters. That accountability seems out of reach right now doesn’t change that.

    Molly White

    It matters. I care. (Citation Needed)

  • The Myth of Artificial General Intelligence

    Claims of “AGI” are a cover for abandoning the current social contract. Instead of focusing on the here and now, many AGI proponents think we ought to abandon all other scientific and socially beneficial pursuits and focus entirely on issues related to developing (and protecting against) AGI. Those adherents to the myth of AGI believe that the only and best thing humans can do right now is work on a superintelligence, which will bring about a new age of abundance.

    Alex Hanna, Emily M. Bender

    The Myth of AGI (TechPolicy.Press)

  • An All-American Surveillance System

    Jacob Ward discusses how the US government is contracting Palantir to process information on residents, and compares the effort to Estonia’s digital identification system. A key difference: Estonia is focused on keeping the data siloed, while a goal of the US effort seems to be removing barriers to data access.

    An All-American Surveillance System Is Coming (The Rip Current)

  • The Internet is Shrinking

    This isn’t just nostalgia talking. It’s about power. While we scroll through sanitized feeds and click through curated content, a handful of companies are quietly reshaping humanity’s digital destiny. The real question is: are we okay with letting them?

    Joan Westenberg

    The Internet is Shrinking (Westenberg)

  • Why Adam Became a Crypto Shill

    Adam Conover, whose videos I generally like, made an embarrassing mistake and accepted an offer to make a video about Sam Altman’s cryptocurrency company, World, and its Orb biometric scanning device. After folks complained and pointed out how working with World went against his values, he turned down the money and made this video about the situation.

    Why I Became a Crypto Shill (Adam Conover – YouTube)

  • The Who Cares Era

    In the Who Cares Era, the most radical thing you can do is care.

    Dan Sinker

    The Who Cares Era (Dan Sinker /blog)

  • It’s the Interface

    A whole lot of people – including computer scientists who should know better and academics who are usually thoughtful – are caught up in fanciful, magical beliefs about chatbots. Any sufficiently advanced technology and all that. But why chatbots specifically?

    Jeffrey Lockhart

    it’s the interface (scatterplot)

  • AI Necromancy

    Using any means to put words into the mouth of a dead person feels grotesque, especially for the purposes of a legal case. Generative AI unfortunately makes it easy to accomplish in a rather compelling way.

    An AI avatar made to look and sound like a man who was killed in a road rage incident addressed the court and the man who killed him: “To Gabriel Horcasitas, the man who shot me, it is a shame we encountered each other that day in those circumstances,” the AI avatar of Christopher Pelkey said. “In another life we probably could have been friends. I believe in forgiveness and a God who forgives. I still do.”

    It was the first time an AI avatar of a victim, in this case a dead man, had addressed a court, and it raises many questions about the use of this type of technology in future court proceedings.

    Matthew Gault, Jason Koebler

    ‘I Loved That AI:’ Judge Moved by AI-Generated Avatar of Man Killed in Road Rage Incident (404 Media)

  • The Idea of Computer Generated Employees is Weird

    The very idea of having an “employee” that is just a large language model and a generated portrait seems bizarre to me. A boss immediately hitting on the large language model when he sees the generated portrait is even weirder. Let’s just not do this.

    On Monday, the co-founder of Business Insider Henry Blodget published a blog on his new Substack about a “native-AI newsroom.” Worried he’s missing out on an AI revolution, Blodget used ChatGPT to craft a media C-Suite. Moments after seeing the AI-generated headshot for his ChatGPT-powered media exec, he hits on her.

    Matthew Gault

    Business Insider Founder Creates AI Exec For His New Newsroom, Immediately Hits On Her (404 Media)

  • Trusting LLM Generated Code Is a Security Risk

    The rise of LLM-powered code generation tools is reshaping how developers write software – and introducing new risks to the software supply chain in the process.

    These AI coding assistants, like large language models in general, have a habit of hallucinating. They suggest code that incorporates software packages that don’t exist.

    Running that code should result in an error when importing a non-existent package. But miscreants have realized that they can hijack the hallucination for their own benefit.

    Thomas Claburn

    LLMs can’t stop making up software dependencies and sabotaging everything (The Register)
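
    The attack works because installers trust package names uncritically: if an LLM hallucinates a dependency and an attacker registers that name, `pip install` happily fetches the malicious package. A minimal sketch of one mitigation, screening AI-suggested dependencies against a project’s own vetted allowlist before install, is below. The allowlist contents and the package names are illustrative assumptions, not drawn from the article.

    ```python
    # Illustrative defense against hallucinated ("slopsquatted") dependencies:
    # only install packages a human has already vetted; flag everything else
    # for manual review instead of passing it straight to the installer.

    # Hypothetical allowlist; a real project might derive this from a lockfile.
    VETTED_PACKAGES = {"requests", "numpy", "flask"}

    def screen_requirements(suggested):
        """Split suggested requirement strings into (approved, needs_review)."""
        approved, needs_review = [], []
        for requirement in suggested:
            # Strip any "==version" pin to get the bare package name.
            base = requirement.split("==")[0].strip().lower()
            if base in VETTED_PACKAGES:
                approved.append(requirement)
            else:
                needs_review.append(requirement)
        return approved, needs_review

    # "numpyy" stands in for a plausible-looking hallucinated name.
    approved, flagged = screen_requirements(["requests==2.32.0", "numpyy", "flask"])
    ```

    The point of the sketch is the review gate, not the lookup: even a check against the live PyPI index would not help here, since the whole attack is that the hallucinated name *does* exist once an attacker registers it.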