If someone uses an LLM as a replacement for search and the output they get is correct, that is just by chance. Furthermore, a system that is right 95% of the time is arguably more dangerous than one that is right 50% of the time: people will be more likely to trust the output, and less able to fact-check the 5% that is wrong.
But even if the chatbots on offer were built around something other than LLMs, something that could reliably get the right answer, they’d still be a terrible technology for information access.
Professor Emily Bender
"Information literacy and chatbots as search" (Mystery AI Hype Theater 3000)