Great article. Something I found especially interesting here is that LLMs can and will hallucinate tool usage, chain of thought, and “reasoning” explanations.
I know MCP is the hottest new thing, but it may not be as big a game changer as some believe.


