This is why one of my favorite uses for AI is to teach me the lingo. These models make crap up, but they do use the right words, which gives me a starting point for my own research.
Replies (1)
LLMs aren't really intelligent. They can parse a body of text and make inferences from it, drawing on the memory encoded in their parameters, but their actual "brains" come from the network of parameters, not the knowledge stored inside it.
Hallucinations happen when the information asked for exceeds the brain capacity, not the data set. It's kinda cool because, to some extent, the hallucinations can sometimes be creative, sometimes even almost correct. You can watch an LLM recognise this when you point out that some of the facts it gave you aren't mainstream theory but do validate against the model, and it affirms it, which is nice. But they are dumb, and when I say dumb, I mean under 80 IQ by human standards. The big words are more or less just read off a dictionary.