Signal’s Meredith Whittaker on the privacy nightmares that agentic AI introduces.

Replies (42)

Manuel 9 months ago
Is the full talk available?
j_humphr3y 9 months ago
Does something like Maple running on your own server alleviate some of this? I mean, at a technical level it still has to have these permissions to accomplish the task, but if it's self-hosted and not sharing with third parties, does that mitigate some of the risk?
Maybe. However, your own security practices would have to dwarf anything a cloud provider is doing, and even then, the best hackers can drive a truck right through the defenses.
Can't I simply not give it access to Signal and do that part myself? And I could give it access to my Brave browser to buy me the ticket, but not to my Firefox browser where my bitcoin nudity memes are.
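In other words, something like a per-app allowlist with default-deny. A purely hypothetical sketch of what that scoping could look like (the AgentPermissions class and app names are invented for illustration, not taken from any real agent framework):

```python
# Hypothetical sketch: an allowlist-based permission model for an agent,
# where each capability is granted per app rather than device-wide.

class AgentPermissions:
    def __init__(self):
        self.granted = set()  # (app, capability) pairs the user approved

    def grant(self, app: str, capability: str):
        self.granted.add((app, capability))

    def is_allowed(self, app: str, capability: str) -> bool:
        # Default-deny: anything not explicitly granted is refused.
        return (app, capability) in self.granted


perms = AgentPermissions()
perms.grant("brave", "open_url")   # the agent may buy the ticket in Brave
perms.grant("brave", "fill_form")

print(perms.is_allowed("brave", "open_url"))        # True
print(perms.is_allowed("firefox", "open_url"))      # False - Firefox stays off-limits
print(perms.is_allowed("signal", "send_message"))   # False - you handle Signal yourself
```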
I may be stupid, but I think the problem is not that big. I have actually had two smartphones most of the time, and this would be an argument for having two again. Problem fixed: I can use the AI on one and keep my privacy on the other.
I think of the comparison to a human assistant. The super-wealthy, celebrities, etc., all certainly have people around them who have root access to their devices, to do all of the kinds of "agent" tasks that Meredith describes here. So how do they solve that "trust" problem, and does agentic AI really change the underlying dynamic? Is an agentic AI more or less motivated to misbehave than (say) a hired housekeeper or nanny? Perhaps self-hosting, or at least "end-to-end encrypted" AI, is the answer here.... is an example of this.
Yes, as long as you're guaranteed that it does not share that information with third parties, which IMO is impossible to do. The issue comes when you have to trust the agent. These models are at best black boxes; even the open-source models are trained on data we don't have access to. We've come to trust traditional computing devices because they are largely dumb and we understand them very well: they do what they're instructed to do and only that. We can't say we understand these AI models to the same degree yet. Maybe if these systems incorporated something like homomorphic encryption, then we wouldn't have to trust them.
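To give a feel for what that would mean, here is a toy sketch of additively homomorphic encryption: a simplified Paillier scheme with deliberately tiny, insecure parameters (a real system would use a vetted library and far larger keys). The point is that an untrusted party can add two values without ever being able to read them.

```python
# Toy Paillier-style homomorphic encryption, for illustration only.
from math import gcd
import secrets

# -- key generation (tiny primes, NOT secure) --
p, q = 293, 433
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1
mu = pow(lam, -1, n)

def encrypt(m: int) -> int:
    while True:
        r = secrets.randbelow(n - 1) + 1
        if gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

# The untrusted server multiplies ciphertexts, which adds the plaintexts
# underneath, without ever decrypting either value.
a, b = 17, 25
c_sum = (encrypt(a) * encrypt(b)) % n2
print(decrypt(c_sum))  # 42
```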
db 9 months ago
privacy is a human right
1) Self-hosting solves half of this. 2) Using an agent with Lightning or Cashu with a budget per week or per day solves the credit card part. 3) I don't see why you would want an AI agent impersonating you on Signal. It's called a bot for a reason, and Signal just needs to make it possible to build them into groups.
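For point 2, the enforcement could be as simple as a spending window the agent cannot exceed, independent of which payment rail (Lightning, Cashu, etc.) sits underneath. A rough, hypothetical sketch (the WalletBudget class and its methods are made up for illustration; amounts are in sats):

```python
# Hypothetical sketch: cap what an agent can spend per week.
import time

class WalletBudget:
    def __init__(self, weekly_limit_sats: int):
        self.weekly_limit = weekly_limit_sats
        self.window_start = time.time()
        self.spent = 0

    def try_spend(self, amount_sats: int) -> bool:
        # Reset the running total once a week has elapsed.
        if time.time() - self.window_start > 7 * 24 * 3600:
            self.window_start = time.time()
            self.spent = 0
        if self.spent + amount_sats > self.weekly_limit:
            return False  # refuse: the agent has hit its allowance
        self.spent += amount_sats
        return True


budget = WalletBudget(weekly_limit_sats=50_000)
print(budget.try_spend(20_000))  # True  - the ticket purchase goes through
print(budget.try_spend(40_000))  # False - over the weekly allowance, blocked
```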