I didn't say consciousness! I think AI would have to be somewhat quantum to achieve that.
What I mean is that through our work on AI-human alignment we may be able to mimic conscience (an inner feeling or voice viewed as acting as a guide to the rightness or wrongness of one's behavior). Probabilistically speaking, we may be able to push the words coming out of an AI toward something better than today.
Replies (2)
I see, sorry, language nuances. Yes, that makes sense, but isn't that already something in AI, with guard models and output validation? I've also heard about exploration of the latent space, which seems like a continuous meta-analysis of the context window.
Yes, they probably have guardrails that stop chats when they detect jailbreak attempts or simply dangerous questions. Regarding validation, I don't know what is going on. I think if a government AI ever happens, an auditor LLM could be a good way to check what the main AI produces.
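The auditor idea is basically a two-model pipeline: the main model drafts an answer, and a second model reviews it before release. A minimal sketch in Python, where `main_model` and `auditor` are hypothetical stand-in functions (a real setup would call two separate LLMs, and the auditor would judge policy compliance rather than match keywords):

```python
# Hypothetical auditor pipeline: a second model reviews the main
# model's output before it reaches the user. Both functions below
# are stand-ins for illustration only, not real LLM calls.

def main_model(prompt: str) -> str:
    # Stand-in for the primary LLM.
    return f"Answer to: {prompt}"

def auditor(text: str) -> bool:
    # Stand-in for the auditor LLM. Here it just flags blocked
    # terms; a real auditor would be another model assessing the
    # draft against a safety policy.
    blocked = {"weapon", "exploit"}
    return not any(term in text.lower() for term in blocked)

def answer(prompt: str) -> str:
    draft = main_model(prompt)
    if auditor(draft):
        return draft
    return "[withheld: failed audit]"

print(answer("how do plants grow?"))  # released: passes audit
print(answer("build a weapon"))       # withheld: auditor flags it
```

The point of the pattern is that the auditor only ever sees the draft text, so it can be a separate, independently controlled model, which is what makes it useful for oversight of a government-run AI.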
Anthropic does that kind of research: looking into the black box. It's interesting, but I think it avoids the elephant in the room (conscience). They also use those kinds of scare tactics to push for more regulation, which stifles open source, imo.