Trusting LLM-Generated Code Is a Security Risk

The rise of LLM-powered code generation tools is reshaping how developers write software – and introducing new risks to the software supply chain in the process.

These AI coding assistants, like large language models in general, have a habit of hallucinating. They suggest code that incorporates software packages that don’t exist.

Running that code should result in an error when importing a non-existent package. But miscreants have realized that they can hijack the hallucination for their own benefit: by publishing malicious packages under those invented names, they turn what would have been a failed import into a successful install of their own code.

Thomas Claburn

LLMs can’t stop making up software dependencies and sabotaging everything (The Register)
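
A first, partial line of defense against the scenario described above is simply refusing to install dependency names an assistant made up. The sketch below is illustrative only and not from the article: it assumes the public PyPI JSON API at https://pypi.org/pypi/<name>/json (which returns HTTP 404 for unknown packages), and the package names in it are hypothetical placeholders.

```python
"""Illustrative check: flag assistant-suggested dependencies missing from PyPI.

Assumes the public PyPI JSON API (https://pypi.org/pypi/<name>/json), which
responds with HTTP 404 for package names that are not registered.
"""
import urllib.error
import urllib.request


def exists_on_pypi(package: str) -> bool:
    """Return True if PyPI knows this package name, False on a 404."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other HTTP errors (rate limits, outages) need a human look


if __name__ == "__main__":
    # Hypothetical names an assistant might emit; replace with real suggestions.
    suggested = ["requests", "totally-made-up-http-lib"]
    for name in suggested:
        if not exists_on_pypi(name):
            print(f"WARNING: {name!r} is not on PyPI -- likely hallucinated.")
        else:
            print(f"{name!r} exists, but existence alone does not mean it is safe.")
```

Note that a name resolving on PyPI proves very little. The attack works precisely because someone can register a package under a commonly hallucinated name, so a check like this only catches suggestions that would otherwise fail with an import error; it does nothing about a package that installs cleanly because a squatter got there first.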