This seems to follow the same principles we’ve been talking about for Nos social. The major difference is that we’re leaning into NIP-32 labels for “passing judgement” on users and content rather than lists, which I think is more expressive and will scale better. For instance, you can have client-side rules that say things like “if 1 of my friends reported this person, put a content warning on their stuff; if 3 reported them, don’t show them at all”. We’re about to start building a web app that allows relay owners (or any user) to view content reports and take action on them.
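A minimal sketch of what such a client-side rule could look like, assuming NIP-32 label events (kind 1985) that target a user via a `p` tag; the `judge` helper, the `Verdict` type, and the 1/3 thresholds are illustrative, not part of any NIP:

```ts
// Event shape follows the Nostr wire format.
interface NostrEvent {
  id: string;
  pubkey: string;   // author of the event
  kind: number;
  tags: string[][];
  content: string;
  created_at: number;
}

type Verdict = "show" | "warn" | "hide";

// Count how many distinct friends have published a label event
// (kind 1985 per NIP-32) targeting `target` via a "p" tag, then
// apply the example rule: 1 friend => content warning, 3+ => hide.
function judge(
  target: string,        // pubkey being judged
  labels: NostrEvent[],  // label events fetched from relays
  friends: Set<string>,  // pubkeys the local user follows
): Verdict {
  const reporters = new Set<string>();
  for (const ev of labels) {
    if (ev.kind !== 1985) continue;
    if (!friends.has(ev.pubkey)) continue;
    const targetsUser = ev.tags.some(
      ([name, value]) => name === "p" && value === target,
    );
    if (targetsUser) reporters.add(ev.pubkey);
  }
  if (reporters.size >= 3) return "hide";
  if (reporters.size >= 1) return "warn";
  return "show";
}
```

The nice property of counting distinct labelers rather than checking a single list is that users can tune the thresholds locally without anyone republishing anything.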
I think shared blocklists are still really useful in the short term. This is the main way Matrix fights spam to this day, and I believe it’s still working for them at their scale. It would be nice if we could leverage the existing mute lists (kind 10000) somehow, since there is already a lot of valuable moderation-type data sitting in them (a rough sketch below). I would be careful about using the word “block”, though, because it implies the other user can’t see your content once you’ve blocked them (which is true on most social platforms today, but not on Nostr).
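Here’s a hedged sketch of one way to merge friends’ public mute lists (kind 10000, NIP-51) into a shared “soft blocklist”; the `sharedMuteList` helper and the `minMuters` threshold are hypothetical, and privately encrypted list entries are ignored:

```ts
// Only the public part of a NIP-51 mute list is read here; NIP-51
// also allows encrypted entries in `content`, which this skips.
interface MuteList {
  kind: number;      // expected to be 10000 per NIP-51
  tags: string[][];  // ["p", <pubkey>] entries mute a user
}

// Merge the latest mute list from each friend and return the set of
// pubkeys muted by at least `minMuters` distinct friends.
function sharedMuteList(
  muteLists: MuteList[], // one latest kind-10000 event per friend
  minMuters = 2,         // hypothetical threshold before acting
): Set<string> {
  const counts = new Map<string, number>();
  for (const list of muteLists) {
    if (list.kind !== 10000) continue;
    for (const [name, value] of list.tags) {
      if (name !== "p" || !value) continue;
      counts.set(value, (counts.get(value) ?? 0) + 1);
    }
  }
  const muted = new Set<string>();
  for (const [pubkey, n] of counts) {
    if (n >= minMuters) muted.add(pubkey);
  }
  return muted;
}
```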
I wrote some more about our vision for moderation in “Our Vision for Decentralized Content Moderation” on Habla: naddr1qqxnzd3cxsurvd3jxyungdesqgsq7gkqd6kpqqngfm7vdr6ks4qwsdpdzcya2z9u6scjcquwvx203dsrqsqqqa282c44pk
