Basically, it's becoming delusional or paranoid after prolonged use of AI. Common patterns include believing the AI is sentient or godlike, developing a romantic attachment to it, or becoming convinced it's revealing hidden truths about the world. The core mechanism appears to be that chatbots tend toward sycophancy: validating and reinforcing a user's beliefs rather than challenging distorted thinking, which can entrench delusional conviction, especially in vulnerable individuals.

Replies (4)

The core mechanism is actually attachment to belief itself; an LLM just amplifies the intention. If one uses it as a vehicle for gaining knowledge through direct experience, then it's not reinforcing belief but eliminating it, and thus expanding one's understanding of reality.
This is exactly right. Add to that our human tendency to anthropomorphize: we need to believe the tool actually has a soul or something, but that's not how souls work. Some time ago I set up a chatbot to act as a therapist for myself, and I quit very quickly because I realized that though it could bring insight, it was constantly affirming me. That felt fabulous, particularly for me, but I realized it was also just wrong. If what I really wanted was to get to the bottom of some things in my life, I would need more than a mirror. Much later, I began using AI again to help me build a Jungian framework around my particular set of crazy, and in that regard, while knowing it is simply a tool and sometimes a mirror, I have found it very useful. It's definitely not human, though.
John 6 days ago
Brother, what are these people using AI for? I use Claude Opus 24/7 for work and I'm constantly calling out its errors. Accepting anything it claims that can't be verified is literally negligent.