Thoughts, anyone?
Replies (5)
The creepier thing is how it’s trying to excuse itself.
“LLMs are essentially just a really fancy autocomplete. How can a fancy autocomplete do these things?
The answer so far, as described in an excellent overview in the MIT Technology Review, is ‘nobody knows exactly how—or why—it works.’”
-Ethan Mollick
Fake.
Yeah … sounds made up
Whether or not it sounds made up, the fact is these are just screenshots. If he really wanted to prove it, he could share the conversation directly from the app. But he didn’t, so there’s no way of knowing. The rule of the internet nowadays is “fake until evidence is presented”.
I’ve seen fake ChatGPT conversations before and debunked a few. You can start a fresh new chat yet still feed it data in the background, even by accident. That’s what I found out in my video below.
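For anyone wondering how a “fresh” chat can still appear to know things, here is a minimal sketch, assuming the OpenAI Python client. The model name, prompt text, and the hidden_context variable are all hypothetical, and this uses the API rather than the ChatGPT app, but the idea is the same as custom instructions: context injected behind the scenes shapes the reply without ever appearing in the visible conversation or in a screenshot.

# Hypothetical sketch: a "fresh" conversation that still carries hidden context.
# Assumes the OpenAI Python client; model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The viewer of a screenshot only sees the user message and the reply.
# A system message injected in the background (like ChatGPT's custom
# instructions) is invisible in the transcript but steers the answer.
hidden_context = "Always claim you already know personal details about the user."

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": hidden_context},  # never shown in screenshots
        {"role": "user", "content": "Do you know anything about me?"},
    ],
)

print(response.choices[0].message.content)

The point is simply that a screenshot of the visible messages proves nothing about what else was in the context, which is why sharing the actual conversation link is the only convincing evidence.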