One of the issues during this research (one that has perplexed me) has been that many people are convinced that language models, or specifically chat-based language models, are intelligent.
But there isn’t any mechanism inherent in large language models (LLMs) that would enable this, and if such intelligence were real, it would be completely unexplained.
LLMs are not brains and do not meaningfully share any of the mechanisms that animals or people use to reason or think.
LLMs are a mathematical model of language tokens. You give an LLM text, and it will give you a mathematically plausible response to that text.
There is no reason to believe that it thinks or reasons—indeed, every AI researcher and vendor to date has repeatedly emphasised that these models don’t think.
Baldur Bjarnason
The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con (Out of the Software Crisis)
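To make the quoted idea of a “mathematically plausible response” concrete, here is a minimal toy sketch. This is not Bjarnason’s code, and it is nothing like a real LLM’s neural architecture; it is a hypothetical bigram model that learns token-to-token statistics from a tiny corpus and samples a statistically plausible continuation. Real LLMs replace the frequency table with a neural network over billions of parameters, but the input/output contract is the same: text in, plausible text out, with no reasoning step anywhere in the loop.

```python
import random
from collections import defaultdict

# "Training": count which token follows which in a tiny toy corpus.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def respond(prompt_token, length=6):
    """Emit a statistically plausible continuation, one token at a time."""
    token, out = prompt_token, []
    for _ in range(length):
        if token not in follows:
            break
        # Sample the next token in proportion to how often it was observed
        # after the current one (duplicates in the list weight the draw).
        token = random.choice(follows[token])
        out.append(token)
    return " ".join(out)

print(respond("the"))  # e.g. "cat sat on the rug ."
```

The output looks like language because it mirrors the statistics of the corpus, not because anything in the loop understands cats or rugs; scaling that statistical machinery up is, on this account, what an LLM does.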