we need credibly neutral tools that can verifiably translate between code and simple language.
do we expect users to simply believe that 'xyz' can be trusted because there are honest nodes in the underlying network?
in a world of sovereign individuals, not everyone can be expected to understand the code (algorithm) of the platform they're using. an ai model/frontend is expected to translate that information into simple language - but how can the user trust the translation? through an attestation? and how can the attestation on the frontend itself be trusted?
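a minimal sketch of what "attesting a frontend" could mean, under stated assumptions: the attester commits to the hash of the exact code bundle the user runs, and the user's client re-derives that hash and checks the tag. here an HMAC with an illustrative shared key stands in for what would really be an asymmetric signature or a TEE quote chained to a hardware root of trust - the key name and functions are hypothetical, not any real platform's api.

```python
import hashlib
import hmac

# assumption: illustrative shared key; a real attester would use an
# asymmetric keypair or a hardware-backed TEE quote instead.
ATTESTER_KEY = b"example-attester-key"

def attest(frontend_bundle: bytes) -> bytes:
    """attester commits to the hash of the exact code served to the user."""
    digest = hashlib.sha256(frontend_bundle).digest()
    return hmac.new(ATTESTER_KEY, digest, hashlib.sha256).digest()

def verify(frontend_bundle: bytes, attestation: bytes) -> bool:
    """user's client recomputes the hash and checks the attestation tag."""
    digest = hashlib.sha256(frontend_bundle).digest()
    expected = hmac.new(ATTESTER_KEY, digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, attestation)

bundle = b"<html>frontend code the user actually loads</html>"
tag = attest(bundle)
assert verify(bundle, tag)                    # untampered bundle passes
assert not verify(bundle + b"evil", tag)      # any modification fails
```

the point of the sketch: verification binds trust to the code itself rather than to whoever serves it - but note it only shifts the question to "who holds the attester key and why trust them?", which is exactly the regress the post is asking about.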
Replies (1)
🙏