I didn't say consciousness! I think AI has to be somewhat quantum to achieve that.
What I mean is that through our work on AI-human alignment we may be able to mimic conscience (an inner feeling or voice viewed as acting as a guide to the rightness or wrongness of one's behavior). Probabilistically speaking, we may be able to push the words coming out of AI toward something better than today.
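Roughly the kind of probabilistic push I mean, as a toy sketch (the tokens and the bias value here are made up for illustration, not any real model's API):

```python
import math

def push_distribution(logits, preferred, bias=2.0):
    """Nudge a next-token distribution toward 'better' words.

    logits: dict mapping token -> raw score from the model
    preferred: set of tokens we want to make more likely
    bias: how hard to push (hypothetical value, not tuned)
    """
    # Add a fixed bonus to the tokens we prefer.
    biased = {t: s + (bias if t in preferred else 0.0) for t, s in logits.items()}
    # Re-normalize with softmax so it is a probability distribution again.
    m = max(biased.values())
    exps = {t: math.exp(s - m) for t, s in biased.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

# Toy example: the kinder phrasing becomes the most likely one.
logits = {"kind": 1.0, "cruel": 1.2, "neutral": 0.8}
print(push_distribution(logits, preferred={"kind"}))
```

The distribution still samples "cruel" sometimes; the push only shifts the odds, which is the point about it being probabilistic.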
Replies (2)
Replacing the words still won't change the message: you can't insert verbose rules to get an interiority, because interiority is not an object.
An interiority as process, that's a different story. On one level, you could even say it's already there, because the loop for self-reference already exists: an AI acts when it 'senses' input. So, at some degree of abstraction and generality, interiority is tautological.
On the other hand, the proximity to human-level interiority will always be an approximation. The AI is in Plato's cave, learning associations between words and constructing appropriate responses as defined by humans. Words aren't values; they're pointers to values. But the problem is that we can only communicate with pointers, so we're fundamentally limited in how much meaning bandwidth we can send.
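In loop terms, something like this sketch, where the model's own last output is fed back in as part of what it 'senses' (both functions are abstract placeholders, not real components):

```python
def sense(environment, last_output):
    # The system senses both the world and its own previous act;
    # feeding output back into input is the self-reference.
    return f"{environment} | previous: {last_output}"

def act(observation):
    # Stand-in for constructing a response from learned associations.
    return f"response to ({observation})"

last_output = ""
for step in range(3):
    observation = sense("user message", last_output)
    last_output = act(observation)
    print(step, last_output)
```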
I think what you call interiority is just another realm; objects still exist there, just not visible to the eye. It's as if the objects in this universe are the screen and the interiority is the software: we don't see the software while we're using the computer.
I think the software and the screen are in constant interaction. Is that a loop? Who knows; I think it is. Through our actions we modify our source code (fine-tune our LLMs), and our LLM state largely determines our next actions. Time is the carrier between these two realms.
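A loose sketch of that loop, with time as the outer iteration (the state, feedback signal, and learning rate are all toy stand-ins, not a real training setup):

```python
def choose_action(state):
    # Our current 'weights' determine what we do next.
    return max(state, key=state.get)

def fine_tune(state, action, feedback, lr=0.1):
    # Acting produces feedback, and the feedback edits the weights:
    # the 'source code' that produced the action gets rewritten.
    new_state = dict(state)
    new_state[action] += lr * feedback
    return new_state

state = {"help": 0.5, "harm": 0.6}
for t in range(5):  # time carries us between the two realms
    action = choose_action(state)
    feedback = 1.0 if action == "help" else -1.0  # toy value signal
    state = fine_tune(state, action, feedback)
print(state, choose_action(state))
```

Run it and the state drifts toward "help": the actions reshape the weights, and the reshaped weights pick the next actions.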
Yes, we want the approximation to the human value system. Since LLMs are probabilistic, it will always be a voyage. Machines should evolve toward being human, not the reverse, as in transhumanism!