Chad Lupkes 1 year ago
Not sure I agree on the harassment issue. The protocol layer is dedicated to providing a stable foundation where anyone can post anything, and I don't see harassment changing that, because whether a note or reply is harassment is a value judgement. It might not be intentional, and there's a spectrum that needs to be considered. The protocol layer can't really deal with that level of complexity.

I think the functionality you're looking for lives in network logistics: we could be given the option of seeing whether an npub is a close connection of someone we've already muted, with a single click to mute that npub as well. That would work well at the client level, if the client has the kind of network analytics I'm thinking about. That kind of development work is coming, but right now I haven't seen anything.

The second issue, the "random jerks", also comes down to network analytics. We can already see a list of Nostr Highlights in our feeds; maybe a client could dig a bit and provide a list of "most muted npubs". If someone decides to be a complete jerk to anyone and everyone, they can post whatever they want as long as they're in a silo that protects the rest of the network from the abuse. So put them on a list and give people the option to mute jerks sight unseen.
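The two client-side ideas above (suggesting mutes based on closeness to already-muted npubs, and aggregating a "most muted" list) can be sketched in a few lines. This is a hypothetical illustration, not any existing client's code; the graph shape, the `threshold` parameter, and the npub names are all assumptions for the example.

```python
from collections import Counter

def most_muted(mute_lists):
    """Rank npubs by how many users have muted them.

    mute_lists: iterable of sets, one per user, each holding that
    user's muted npubs (a hypothetical aggregation a client could run).
    """
    counts = Counter()
    for muted in mute_lists:
        counts.update(muted)
    return counts.most_common()

def suggest_mutes(follows, my_mutes, threshold=2):
    """Suggest npubs that are close connections of already-muted npubs.

    follows: dict mapping npub -> set of npubs it follows.
    An npub is suggested when it follows at least `threshold` npubs
    the user has already muted; the client would then offer a
    one-click mute for each suggestion.
    """
    suggestions = set()
    for npub, followees in follows.items():
        if npub in my_mutes and followees:
            continue
        if npub not in my_mutes and len(followees & my_mutes) >= threshold:
            suggestions.add(npub)
    return suggestions
```

A real client would build `follows` from contact-list events it has already fetched, so the analytics stay local and optional, in keeping with the user's feed being under their own control.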

Replies (1)

Yeah, I was mistaken about the protocol layer as the place where mutes of replies and mentions should happen. Based on some conversations today, it makes the most sense for clients and relays to screen for these.

In terms of filtering out the random jerks: other protocols are experimenting with mute lists, similar to what you suggest above, and I think it's a good option for Nostr as well. A user could subscribe to a mute list of their choice based on whatever criteria they have, whether that's words/content or people. Another framing is feed curation. The beauty of Nostr at its most basic level is that the user, rather than an algorithm, is in control of their feed. Sorting through the harassment challenges makes that control truly possible.
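Applying a subscribed mute list at the client could look something like the sketch below: drop events from muted pubkeys and events whose content matches muted words, leaving everything else visible. This is a simplified stand-in, assuming events are dicts with `pubkey` and `content` fields (real Nostr notes carry more, and Nostr's own mute lists are specified in NIP-51); the field names and matching rules here are illustrative assumptions.

```python
def filter_feed(events, muted_npubs, muted_words):
    """Return only the events the user wants to see.

    events: list of dicts with "pubkey" and "content" keys,
    a simplified stand-in for Nostr note events.
    muted_npubs: set of pubkeys from a subscribed mute list.
    muted_words: set of words/phrases to screen out of content.
    """
    visible = []
    for event in events:
        # People-based muting: skip anyone on the subscribed list.
        if event["pubkey"] in muted_npubs:
            continue
        # Content-based muting: case-insensitive substring match.
        content = event["content"].lower()
        if any(word.lower() in content for word in muted_words):
            continue
        visible.append(event)
    return visible
```

Because the filtering runs in the client against a list the user chose to subscribe to, swapping or unsubscribing the list restores the raw feed, which is the feed-curation framing: the user, not an algorithm, decides what gets screened.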