My first rule of AI is that you should assume you have no privacy. Even with local models, data can be mishandled or leaked if the software connects to the internet. That said Moltbot is open source so you can audit the code but there's still risks. Someone lost their entire set of emails because of the AI picked up a prompt injection while clawdbot was trying to manage their email.
ooof, that's awful. i've seen a couple of approaches to ai privacy and still wonder what is the way forward: encrypted servers (maple), all local or routstr, ppq, venice.