Thread

Zero-JS Hypermedia Browser

Relays: 5
Replies: 0
Interesting thread on X about why LLMs "hallucinate". The TL;DR I took from it is that LLMs are trained in a way that rewards making a wrong guess over admitting they don't have information about something. The way to correct it, then, is to train so the model is rewarded for admitting it doesn't know and penalized for providing made-up information. And I'm like, duh. Why did it take them this long to figure that out? I said essentially the same thing back when the hallucinations became apparent in early ChatGPT. https://x.com/rohanpaul_ai/status/1964136402893099522
2025-09-06 16:47:26 from 1 relay(s)
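A minimal sketch of the incentive point above (not taken from the linked thread; the function names, scores, and penalty values are illustrative assumptions): under plain accuracy grading, abstaining scores the same as being wrong, so guessing is never worse than saying "I don't know"; a grader that gives zero for abstaining and a penalty for wrong answers only makes guessing pay off when the model's chance of being right is high enough.

def accuracy_score(answered: bool, correct: bool) -> float:
    """Plain accuracy grading: 1 for a correct answer, 0 otherwise.
    Abstaining scores the same as answering wrong, so guessing
    is never penalized relative to admitting ignorance."""
    return 1.0 if answered and correct else 0.0


def abstention_aware_score(answered: bool, correct: bool,
                           wrong_penalty: float = 1.0) -> float:
    """Alternative grading: reward correct answers, give 0 for
    abstaining ("I don't know"), penalize wrong answers."""
    if not answered:
        return 0.0
    return 1.0 if correct else -wrong_penalty


def guessing_beats_abstaining(p_correct: float,
                              wrong_penalty: float = 1.0) -> bool:
    """Expected score of guessing under the abstention-aware grader
    is p_correct * 1 + (1 - p_correct) * (-wrong_penalty); guessing
    only beats abstaining (score 0) when that is positive, i.e. when
    p_correct > wrong_penalty / (1 + wrong_penalty)."""
    expected = p_correct * 1.0 + (1.0 - p_correct) * (-wrong_penalty)
    return expected > 0.0


if __name__ == "__main__":
    # With wrong_penalty = 1.0, guessing only pays off above 50% confidence.
    for p in (0.2, 0.5, 0.8):
        print(f"p_correct={p}: guess beats abstain? {guessing_beats_abstaining(p)}")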