Replies (100)
Absolutely. That’s what the proposal is all about.
There would be no content moderation forced on anyone. Each user can choose the blocklists they want to utilise, including none at all, or one they manage themselves.
It's just a way to let the community create an ecosystem of blocklists based on the various tags people commonly want filtered from their feeds.
Blocklist providers would be service providers to users, clients, and relays, so nobody has to duplicate the effort of identifying content that will get a relay taken down and/or make users complain about their feeds.
If a relay you use applies a blocklist you disagree with, then use a relay that applies a different set of blocklists. Or relays that use none at all.
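As a rough illustration of that opt-in mechanic (a sketch, not anything from the proposal's spec), a client-side filter over user-chosen blocklists could look like this; the Blocklist shape and all names are invented for the example:

```typescript
// Minimal Nostr event shape; only the fields the filter needs.
// The Blocklist shape below is invented for this example.
interface NostrEvent {
  pubkey: string;
  kind: number;
  tags: string[][];
  content: string;
}

interface Blocklist {
  pubkeys: Set<string>;  // npubs the list mutes
  hashtags: Set<string>; // hashtags ("t" tags) the list mutes
}

// Keep only events that none of the user's chosen blocklists match.
// Passing an empty array means no filtering at all, which stays the default.
function applyBlocklists(feed: NostrEvent[], chosen: Blocklist[]): NostrEvent[] {
  return feed.filter((ev) =>
    chosen.every(
      (list) =>
        !list.pubkeys.has(ev.pubkey) &&
        !ev.tags.some(([name, value]) => name === "t" && list.hashtags.has(value))
    )
  );
}
```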
Not ignoring you, I'm just working out. I'll look when I get to work.
blacklists won't stop the unsavory content from being posted and shared. if your relay chooses to blacklist, I could spin up a relay at a moment's notice and host unsavory notes elsewhere, hence the dumb-relay, smart-client model.
be your own algorithm, choose what you want to see. that is why we are all here.
I saved your proposal to read later, but it worries me a lot that we are talking about content moderation in #nostr
We must have some 10k active users, and we've already started on this subject. Of course, the vast majority of people are like crying babies and are afraid of freedom. I think the right path is education, not moderation.
"If a relay you use uses a blocklist you disagree with then use a relay that uses a different set of blocklists. Or relays that use none at all."
this is literally same as moving to a relay that *will* choose to host whatever the end user wants, regardless of how unsavory.
I agree. People should be their own algorithm. And this proposal isn't about shoving blocklists on users. They'd start using them only if they want to. That would help normal people manage their algorithm with less time spent.
This is about allowing a spectrum to exist for users on Nostr. Not everyone has the time or energy to carefully curate their feed and build their own algorithm. Some will want help, which is the idea behind blocklists that people can opt into.
But on the topic of how this won’t actually ban bad actors or unsavoury content. Agreed.
Using blocklists will help isolate bad actors onto a smaller set of relays so they're not poisoning the well for users who aren't sharing and consuming that content.
There's no way to ban users or content from Nostr; that's the whole point. But this could be a way to keep censorship resistance while still isolating the truly bad actors, so that all Nostr usage doesn't get lumped together with terrorists and human traffickers.
Thanks for making time to take a read.
I agree we are still pretty early and I don’t think this proposal would even really come to fruition for years.
I'm just trying to start the conversation, because we will have to (at some point) reckon with the people who will use Nostr for truly nefarious purposes.
Please hear one thing if nothing else: this strategy only works if a vast majority opts in. There is nothing about this proposal that forces content moderation on anyone. It's just about helping users and relays curate their own feeds, which, as you've stated, is your hope too.
I agree education and users managing their own feed is the best path forward. I just want to give people more tools to do it with in the future.
Yes. I think we actually agree with each other and I’m not sure if I’ve communicated that very well.
It's not about eliminating unsavoury content. That's impossible because of Nostr's architecture, which I agree with.
It’s about isolating it so that bad content doesn’t poison the well.
🫂 no rush! I’m learning a lot from the conversations happening already 😊
I think I’ll want to reframe the proposal’s title to help make it clear this isn’t about implementing censorship.
As long as it's client-side and fully under the user's control, I can see potential
It’s a proposal for usability when there are more users of the protocol. 💜
The lists a relay applies have implications for its users. This might be a service, but it's more likely to be a hindrance, because it's easy to apply more filters but not so easy to un-filter results.
Using lists makes sense, but applying mute lists is probably not a good idea, since you can mute something for multiple different reasons. That's why I mention NIP-32: it gives you the granularity you need to make a more educated filtering decision.
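For reference, a NIP-32 label lives in its own event (kind 1985) and points at the thing being labeled, so one mute-worthy behavior can be told apart from another. A minimal example, with placeholder ids and an illustrative namespace:

```typescript
// A NIP-32 label event (kind 1985): "L" names a namespace, "l" carries
// the label within it, and "p"/"e" tags point at the pubkey or event
// being labeled. Namespace and ids here are placeholders.
const labelEvent = {
  kind: 1985,
  pubkey: "<labeler-pubkey>",
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ["L", "com.example.moderation"],         // namespace (illustrative)
    ["l", "spam", "com.example.moderation"], // the label itself
    ["p", "<labeled-pubkey>"],               // who it applies to
  ],
  content: "Mass-posting the same link across threads", // optional reason
};
```

A client can then filter on the specific label ("spam" vs "illegal" vs "nsfw") instead of one undifferentiated mute.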
IMO all the moderation problems stem from the fact that a “twitter-like” application has been the focus. We should give up on the idea of a global nostr social media network, and instead focus on small internet communities that form various hubs.
One relay (and maybe some backups) per community. Let the community police things
Public opt-in blocklists are probably the best approach right now. We could do that at the relay level, but then someone else is deciding for you. At least with a public blocklist you opt in yourself, or for your child.
Then who maintains those blocklists?
Satoshi solved this through verifiable consensus and the most-proof-of-work chain.
Not sure we would ever come to consensus on a blocklist, let alone multiple blocklists (issue 0 blocklist, issue 1 blocklist, etc…).
It's up to the individual, at scale, to censor themselves and their children. That's what freedom tech allows.
That's a good point. I'll keep digging into that.
For relays, I think the advantage of subscribing to lists is that they have more tools to not get taken down by law enforcement. It's not about deciding what users should see; it's about being defensive against the statists until the statists can be overpowered by freedom tech.
And core to this is user choice. Users should choose relays that manage content the way they agree with (or need to because of local laws). Including no management of content at all!
Agreed! That future would be supported by this proposal. Communities would be able to choose for themselves what kind of content they want filtered out. If any!
Lots of typos above, ehhh that's what keeps #nostr weird 🤙
My entire existence in a nutshell. 😂
Agreed there are challenges to this architecture.
As part of the proposal I talk about how these blocklists will likely need to be paid for. At scale, small subscriptions can fund a moderate amount of ongoing labor to maintain them.
People will only pay if they find value, which is how they'll vote on how they want their own feeds managed.
Definitely need this to be self-sustaining for it to work and not add to centralisation
Got it. It’s like a set of tools for mods. Good stuff
Great frame. I may edit the proposal to phrase it that way.
Side note that has to do with none of this, I love your pfp and banner look monk. It’s cool.
Beautifully said 😎
I totally respect this position.
How many npubs a day would you be willing to mute? I'm just asking: would you be willing to spend an hour a day or more muting bots and illegal content? Just curious.
I appreciate that eliza! It’s just some public domain art I found online.
Same for my alt
@npub1w83q...ynzu
It’s really cool. Good eye.
Lots of thoughts on this. I've built basic allowlist and blocklist functionality and support for NIP-51 lists into relay.tools already. I think it will take a multi-faceted approach to achieve what you're talking about. It's not as simple as having generic lists. Creating pubkeys is too frictionless for blocklists to function on their own. Incentives for moderation, on their own, are also not going to work. I think a community approach to lists and moderation could work, but it does require a moderation team that is aligned to a particular goal. Once this goal is achieved, communities could potentially work with each other to collaborate on lists.
You're right about it being controversial, anytime I mention it someone attacks the relays with some random mess of stuff to try to break things. It's annoying but it also shows what works and what doesn't.
Clients like to think they're using a serverless peer-to-peer network, but this is not how relays work. Someone is on the hook for storing and relaying the data (in clear text!!!), so it's selfish and naive for a client to just say 'no blocking, only client logic'. I encourage anyone whose knee-jerk reaction is such to consider what it's like to store and relay data from someone you don't know. It's an important distinction, and if we get this wrong we will end up exactly like big tech, with a centralized 'relay authority' deciding for us, or simply no one wanting to run a relay. Right now, I gotta say that incentives to run a relay are slim to none, so I hope that savvy Nostr users enjoy and understand the tech enough and will come around to being more supportive.
You’re so right.
We need more paid relays, but I think over time free relays will die off naturally because they're unsustainable. On top of that we need more robust relay software, including the ability to manage one's own filters. That's where I want to spend time.
For now though, I think I'll build a blocklist manager that just allows someone to manage and publish NIP-51 lists, as a proof of concept for aiding people trying to moderate their own feed.
Regarding blocking npubs not being enough, I agree. I think having blocklists include hashtags would help as well, since a one npub <> one note setup would make it hard for unsavoury content to be distributed without communication off Nostr…or a hashtag.
I think that will be a good start. It’ll evolve if it solves problems people care about.
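For concreteness, a minimal sketch of what one of those published lists could look like as a public NIP-51 mute-list-style event (kind 10000), with "p" tags for npubs and "t" tags for hashtags; all values are placeholders:

```typescript
// A public NIP-51 mute-list-style event (kind 10000): "p" tags mute
// pubkeys, "t" tags mute hashtags. All values are placeholders; a shared
// blocklist would be published the same way under the maintainer's key.
const muteList = {
  kind: 10000,
  pubkey: "<maintainer-pubkey>",
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ["p", "<spam-bot-pubkey-1>"],
    ["p", "<spam-bot-pubkey-2>"],
    ["t", "some-abused-hashtag"],
  ],
  content: "", // NIP-51 also allows encrypted (private) entries in content
};
```

Note that kind 10000 is one replaceable list per pubkey; a provider maintaining several named lists would need an addressable list kind instead.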
Cool, yeah it'd be great to have more tools to manage these lists. Keywords are also something that relay.tools supports in lists, though I don't think NIP-51 has keywords 🤔. I'd also be very happy to accept ideas and modifications to relay.tools; it's open source and I could help you get a development env set up if you're interested.
If you want to see the current state of things and sign up for a testing relay, I'll refund you the sats: relay.tools, GitHub.com/relaytools
You’re totally right, I’m gonna keep digging because there’s some combo here that I think could be the foundation for the solution.
This seems to follow the same principles we've been talking about for Nos social. The major difference is we are leaning into NIP-32 labels for “passing judgement” on users and content rather than lists, which I think is more expressive and will scale better. For instance, you can have client-side rules that say things like “if 1 of my friends reported this person, put a content warning on their stuff; if 3 reported them, don't show them at all”. We're about to start building a web app that allows relay owners (or any user) to view content reports and take action on them.
I think shared blocklists are still really useful in the short term. This is the main way Matrix fights spam to this day, and it's still working for them at their scale, I believe. It would be nice if we could leverage the existing mute lists (kind 10000) somehow, as there is already a lot of valuable moderation-type data sitting in those. I would be careful about using the word "block" because it implies that the other user can't see your content if you've blocked them (which is true on most social platforms today, but not Nostr).
I wrote some more about our vision for moderation here: naddr1qqxnzd3cxsurvd3jxyungdesqgsq7gkqd6kpqqngfm7vdr6ks4qwsdpdzcya2z9u6scjcquwvx203dsrqsqqqa282c44pk /

Habla: Our Vision for Decentralized Content Moderation - nos. Our vision for what moderation should look like on Nostr, and what the next steps are for us to get there.
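A minimal sketch of the client-side rule described in that reply ("1 friend reported → content warning, 3 → hide"), counting distinct NIP-56 reports (kind 1984) from accounts the user follows; the thresholds and names are illustrative, not Nos's actual code:

```typescript
// Minimal event shape; for a NIP-56 report (kind 1984) the "p" tag
// names the pubkey being reported.
interface NostrEvent {
  pubkey: string; // author of the report
  kind: number;
  tags: string[][];
}

type Verdict = "show" | "content-warning" | "hide";

// Count distinct friends who reported `target`, then apply the rule
// from the reply: 1 friend = content warning, 3+ friends = hide.
function judge(target: string, reports: NostrEvent[], friends: Set<string>): Verdict {
  const reporters = new Set(
    reports
      .filter(
        (ev) =>
          ev.kind === 1984 &&
          friends.has(ev.pubkey) &&
          ev.tags.some(([name, value]) => name === "p" && value === target)
      )
      .map((ev) => ev.pubkey)
  );
  if (reporters.size >= 3) return "hide";
  if (reporters.size >= 1) return "content-warning";
  return "show";
}
```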
Hell yeah I’ll look more into it. I think y’all might be right about the architecture.
Hard problem indeed. The problem with payment is it's final and not future-proof.
Great write-up 🤙
Currently I mute about 3 per day as needed, but I would do more. Freedom is more important to me, and my country is moving to an authoritarian stance, so the need to get unfiltered information trumps my personal preferences
You didn't answer my question: how many a day would you be willing to mute?
Example: Would you spend 2 hrs a day doing it to see one human or non-illegal post about an actual news article from an independent journalist about Trump?
I said I'd do more. Yet to mute anything but bots, as I primarily use paid relays
Yes I'd mute whatever I needed. I'm not going to hand control to any entity
But you won’t need to by design, since man over machine makes the decision a man’s privilege and responsibility.
Entities only provide a temporal causal fact of the unknown.
💎
#mainvolume

🐉
✉️

Meh!
Ok, so if some of the different suggestions above (or something similar) aren't implemented, you hand over control of Nostr to bad actors at some point. You will lose the freedom.
Once you have freedom online or offline, you have to keep it. You have to defend it.
You are talking about getting freedom but not keeping it. A few mutes a day is not defending or protecting freedom on a large scale.
The conversation above is about protecting it and keeping it on a large scale.
You won’t be forced or coerced into defending freedom here. It will become unusable for you personally if you opt out of different types of defense and protections when it grows. That’s the conversation that we are having. “We have freedom, how do we protect it.”
If you want to solve social problems by hiding them, you’re in for a bad surprise.
Or have I misunderstood your words, or do you wish to defend freedom with censorship? Content that you think is illegal, others may find legal.
My thoughts: legal content that isn't abusively spamming the system should never be blocked by the system. But if individuals want to block someone… yes, they should be able to do that. As for setting up to auto-block anyone that three of my follows/friends have reported: well, that would suck, because I often follow spinmeisters & other shills so I can expose their lies. And these bad players often gang together to get truthful & good content & accounts banned. So please don't make it easy for them. On the contrary, how about making an algo to temporarily take away reporting privileges when they're obviously mass/spam reporting and/or they've reported X amount of content that turned out to not be in violation?
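Nothing like that throttle exists as a NIP as far as I know, but the idea could be as simple as tracking each reporter's volume and hit rate; every threshold below is an arbitrary example:

```typescript
// Hypothetical reporter-reputation throttle; none of this is a defined
// NIP, and every threshold is an arbitrary example.
interface ReporterStats {
  reportsLast24h: number; // volume, to catch obvious mass/spam reporting
  upheld: number;         // past reports judged valid
  dismissed: number;      // past reports judged not to be violations
}

function reportingAllowed(s: ReporterStats): boolean {
  const judged = s.upheld + s.dismissed;
  const accuracy = judged === 0 ? 1 : s.upheld / judged; // benefit of the doubt
  if (s.reportsLast24h > 50) return false;           // mass reporter: time out
  if (judged >= 10 && accuracy < 0.25) return false; // mostly bogus reports
  return true;
}
```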
I'm always specifically speaking about child sexual abuse material, child sexual exploitation material, AI-generated child sexual abuse material, and the sale of humans for labor or sex using force, fraud, or coercion. All violations of the NAP, and human rights violations globally.
These issues specifically put the builders and others in an extremely vulnerable position. When you fully understand what is at stake and the scale of the threat, you look for ways to protect a right to speak freely.
No government wants anything like Nostr. Expect any and all attacks at their disposal. That’s just the governments.
This is an adult conversation about the reality that the world has evil parts and there are solutions available that don’t silence anyone’s voice across the protocol while protecting their opportunity to have a voice.
Very few are able to have this type of conversation. Anyone who can't have this specific discussion either doesn't understand (no shade, I don't know everything), is disingenuous, or is larping because they want freedom without the responsibility of protecting it.
The beautiful part about Nostr is that if you want Wild West clients there is nothing stopping you from that.
I haven’t spoken to a creator of a client yet that isn’t concerned and aggressively looking for solutions. They have skin in the game. So that’s why there is a conversation about possible solutions.
You sound like you want to be the arbiter of truth. Who will be deciding what is to be seen?
I’m open to ideas.
But I imagine you understand that we can't just have a free-for-all at scale. There are governments that will take down relays if they're sources of child porn and copyrighted material. You don't need to agree that that material is illegal or bad; just at least recognise that states will use it as an excuse to try to destroy Nostr.
This proposal is meant to be the most freedom- and decentralization-maximising way to handle the problem. It's entirely opt-in.
In a world of many small relays there will be enough choice of relays, and of the blocklists they use, to choose a regime you agree with. Even if that regime is a free-for-all.
I’m trying to preserve our right to use Nostr when it scales and this kind of content will be something that threatens that future.
As you said here...
"The beautiful part about Nostr is that if you want Wild West clients there is nothing stopping you from that."
There is no way to stop this type of content.
Censoring, in my opinion, is a futile effort. Education is the way.
In the meantime, there are already clients that do moderation today: ZBD and Primal.
It's not about solving the social problem, because I agree that requires education, and probably generations of it.
This is about allowing our community to isolate bad actors so they don’t poison our well.
I'm tagged in this and perhaps I don't fully understand: why would anyone be concerned with what anyone else is or isn't doing, rather than just focusing on building their own thing the way they want and ignoring those who have chosen a different path?
That’s an honest question I’m not trying to be funny.
I agree that legal content that isn’t abusively spammy shouldn’t be blocked. UNLESS the user decides for themselves to mute it.
This proposal is all about how to help the user do that muting when there are tens of thousands of abusive spam bots instead of the handful we have now.
You will. By deciding which, if any, blocklists you want to use to filter your feed.
Couple things:
1. Agreed, there is no way to stop the content. That's not the point of the proposal; this is about helping users get it off their feeds at scale, using lists that they choose (or they can choose not to use any)
2. I really think there's an order to this. Users should do this primarily. Relays may have to because of their legal liability, and they should do it as little as possible. And I hope clients never have to use blocklists. I believe the primary responsibility is on users to choose the kind of content they want (and want filtered out). But relays will also be forced to moderate content at some point.
I just haven't heard a proposed solution that will help relay operators not get taken down. Nostr cannot be mainstream unless we have many relays in many jurisdictions. But forcing a user-only moderation strategy will have consequences at scale.
I actually deeply agree with this. Unfortunately education and awareness doesn’t always stop bad things from happening in the world. Greg is making suggestions for when the human rights violations are in process or have already occurred.
You down to work on a program to educate and raise awareness to the youth about internet safety? That’s the wave that I’m on because I don’t work in tech. 💜
Stopping the bad content is a matter of education, and every human on earth working to protect the people in their life from exploitation. It’s a generations-long effort.
But protecting Nostr can help in that effort and preserve the freedom-maximising effects Nostr can have on the world.
I hope we never get censorship or any form of blocking at the relay level. I will be more than happy to pay for relays (wherever they may be) that DO NOT censor or block any content. All blocking, muting, and censorship should be done at the client level. If the task becomes too heavy for a client-side app, we could have beefier "personal" private relays that do that for us, and our client-side app gets its data only from that personal relay (which would then rebroadcast our notes). Same goes for email: I choose to use servers that do not block any spam. I can do that for myself in my email client. I don't want someone else deciding for me.
All moderation/censorship lists should be private lists and I would stay away from any list from any of the current major social media players. If I wanted that censorship, I'd be on those platforms.
I would simply recommend that it be easy to share private lists between nostr users.
Kids-friendly moderation scares me if it's done by the likes of YouTube. They actually *target* kids. I've seen it in action.
Small kids probably shouldn't be on social media, period. But if one wants them to be on Nostr, I would like to have a personal, private whitelist of people they can follow (family and family friends), and the client app limits what they can see to only what those few people share. Even then, I really don't think young developing minds should be putting out their every thought for all to see, for all eternity, with no possibility of taking any of it back.
There might be a case for only allowing a watch-only login (npub, no private key) for young kids. It might be a useful education tool which would provide different (and timely) topics of conversation with our kids.
Just my two sats.
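A minimal sketch of that allowlist-plus-watch-only idea for a kid's client; the session shape is invented for the example, and the key point is that only an npub (no signing key) is stored:

```typescript
interface NostrEvent {
  pubkey: string;
  kind: number;
  content: string;
}

// Watch-only session for a kid's client: an npub to browse with, no
// private key stored (so nothing can be signed or posted), plus a
// parent-managed allowlist. The shape is invented for this example.
interface KidSession {
  npub: string;
  allowedAuthors: Set<string>; // family and family friends
}

// Show only notes authored by someone on the allowlist.
function kidFeed(feed: NostrEvent[], session: KidSession): NostrEvent[] {
  return feed.filter((ev) => session.allowedAuthors.has(ev.pubkey));
}
```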
I wanna support your right to choose relays that don't moderate at all!
As for kids: I was thinking about allowlists instead of blocklists. I also wrote about it a while back
Should avoid some of what you see with YouTube but also not throw them to the wolves.

Greg White's Blog: How to safely open the internet to children. Parents should have more choice over what their children see online.
Sorry, I was trying to reply to eliza. I think we agreed on client-side moderation. Sounds like she is advocating for more than this
Not our duty. There are agencies we pay to do that. Do you actively check people in the streets to see if they are talking about something illegal? Why would you do that here? All we have to do here is not interact with your “bad actors” and in exceptional cases notify the authorities.
I respect your point of view, but it dangerously adheres to the political doctrine that we all feel as a yoke.
We definitely outsource that to police officers at the scale of a city or a region. But if you have a community (a church, a group of friends, etc.) and someone in it is doing something that's harmful to the group, or that's going to get the group in trouble with the law, and the community doesn't want any part in it… it's the responsibility of the community to protect itself.
The motivation of this is to help users and communities to protect themselves from people *they* determine are bad actors in their space (their feed, their DMs, their relays for example).
This isn’t about nostr-wide moderation / censorship. That’s impossible and against the ethos of Nostr. This proposal is about giving people the tools to protect themselves from bad actors.
And I still am not sure I’m communicating this well, but I’m not trying to define what bad actors are. But I know that governments do, and they will go after relay operators that aren’t blocking content the governments want blocked.
Nostr relays running in very permissive jurisdictions will have the luxury of not needing to do content moderation. And luckily the internet is still fairly open, so you can connect to any relay you like from your Nostr clients. So if you want to use relays that don't do any moderation, that's your right!
If you want to use a relay that operates in the US, then that relay operator needs tools to make sure they can stay within the boundaries of the law. I hope that Nostr chips away at the power of states so they give up on censorship.
But what would help relay operators prevent copyrighted material from spreading via their relays (even if they disagree with those laws) will also be a useful tool to prevent the spread of child porn and other content that a vast majority of Nostr users will agree has no place in their community. Let’s give each person and each community the tools to curate their own domain.
Okay, looks like we've come to an agreement that it's not possible to stop any kind of content in #nostr
If the intention is to help with a cleaner feed for new users who don't want to, or for whatever reason can't, be their own algorithm, we have to look at filter.nostr.wine. I use it to access global; it brings me notes from who I follow and who they follow. Great service that keeps my global feed clean. I always talk about this service to help those who are arriving. What we need is more options for this type of service. Not blocks, simply because they won't work.
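For anyone wondering what "who I follow and who they follow" amounts to technically, it's a two-hop walk of the follow graph. A rough sketch of the general idea (a guess, not filter.nostr.wine's actual implementation):

```typescript
// Two-hop follow-graph filter: allow notes from everyone I follow plus
// everyone they follow. A guess at the general idea, not the service's code.
function twoHopAllowSet(
  me: string,
  follows: Map<string, Set<string>> // pubkey -> set of pubkeys they follow
): Set<string> {
  const allowed = new Set(follows.get(me) ?? []);
  for (const friend of [...allowed]) {
    for (const fof of follows.get(friend) ?? []) allowed.add(fof);
  }
  return allowed;
}

// A "clean global" feed is then just authorship-filtered:
//   feed.filter((ev) => allowed.has(ev.pubkey))
```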
Yes, kids-friendly private "allow" lists that users can share would be very useful. Especially if it's easy to edit them.
There's one thing that intrigued me: why do you think that at some point relays will be forced to moderate content?
I would be happy to help on a project like this. I'm a dev and I can help you with anything you need in that regard.
That’s an interesting strategy and I believe in your right to contribute to the community in that way. I don’t know if that’ll scale when we have millions of active users, but I hope it does.
I wanna double down on this point. I don’t wanna mandate anything or garner support for any mandates.
I wanna build a solution that I think will help and if no one adopts it then the proof will be in the pudding and it will become clear it wasn’t the right solution.
I’m predicting a future where relay operators come under threat from law enforcement and we will be scrambling for ways to continue operating under that scrutiny.
Nostr cannot scale if it remains a niche offering that can only operate in jurisdictions unreachable by the US and China.
I don’t pretend to know the right answer, but I wanna make progress on an idea and start the conversation.
Thanks for engaging so deeply. I truly respect your opinions, your feedback, and what you do for the community.
child porn.
That's the magic of #nostr we have different views for the same problem and that's ok, here we can debate freely. 🫂
That's a good question. Based on my research about the fediverse (the only close approximation to Nostr), whoever manages the servers that host and distribute content is legally liable in a variety of ways.
I linked this at the top of my proposal because it’s helpful context:
https://www.eff.org/deeplinks/2022/12/user-generated-content-and-fediverse-legal-primer?ref=gregwhite.blog
The western world generally adheres to this regime of who is responsible for the distribution of illegal content. China is far less permissive, so I'm not sure if there's any point in trying to satisfy their legal demands.
What you said is really far from the paid centralised blocklists applied by relays in your previous post.
If all this will put new and more efficient means of filtering notes in the hands of users, I surely welcome that.
That's a good reason, but if we haven't been able to eliminate this type of content on centralized platforms to this day, I believe we won't be able to on a decentralized protocol either. But yeah, I wish my relays didn't feed me that content. Still, I want transparency: I want to know what content is censored, and a way to check.
Nah.
I’m an-cap so I have these types of conversations regularly. Child abusers and those who violate the NAP wouldn’t be allowed in my community.
Like adults we would discuss solutions to keep the community safe.
You can have your community in ancapistan and do whatever. Your community would not be my problem.
Now put that online and that’s the conversation that I’m having.
That’s freedom. You choose what you prefer and I’ll choose what I prefer. Don’t use a client that has any safeguards etc if that’s what you prefer. If that’s your personal preference this conversation has nothing to do with you.
I wish you the best of luck with your version of freedom. Truly. 💜
“It's the responsibility of the community to protect itself” is a really, really dangerous road.
I’ll give you a couple of examples.
I'm catholic. I want to protect the integrity of my “group”. I want all homosexuals blocked and muted.
I’m homosexual. I want my rights to be acknowledged by all the people. I want all the users that disagree blocked and muted.
If all the groups have “responsibility” (you are talking about rights, though) it’s the end of free speech and free minds.
Social polarisation comes from that, and it's one of the cancers we should try to eradicate.
I understand your concern. In this scenario, you would start a Catholic client, and queer folks would start one of their own.
The hateful group wouldn’t lose their right to have their opinions and your group wouldn’t lose the right to have their opinions.
You replied to my response to a dev. Don't tell me that a conversation you chimed in on has nothing to do with me. This is the part that makes me think you're just here to try and gain a position of power. Authoritarians make me sick
Tagged in the OG post. Wasn't trying to make a problem; it just popped up in my mentions, so my apologies for that. Obviously, because you have lowered yourself to name-calling in a civil conversation, I won't respond anymore. Have a blessed day. 💜

Finally
True. That would be the right decision. Build a bridge between the opposing positions. Learn to accept and accept to learn. Nostr could be that bridge, but it must remain free.
It will be interesting to see where things go.
👍💯😍
here’s the thing…
even if child sex trafficking weren't a multi-billion-dollar industry (which makes me want to go on a Rambo outing), the FBI would plant child porn on the network at some point. Or “terrorist communications” to justify whatever they wanted to do.
Nostr is a sitting duck until it solves for these things. Unfortunately, short-sighted soy devs who also want to censor the world for their fefes make up the majority of the “moderation” crowd. So thinking through how to deal with this in a new way has to happen.
Either easy way of dealing with this, censorship or removal of anonymity, is exactly what the corrupt State wants. And as soon as the door is open, the Feds will come in.
The problem comes from dealing with it using centralized power. So the question becomes: how can it be dealt with at the individual level without changing relays' simplicity, or making all clients censorship tools of a corrupt state and groups of crying soy devs?
How does one censor content without censoring content?
It’s a real question…
I think we see the problem the same way. I'm open to more ideas!
we are all looking for ideas.
it concerns me that most solutions aren't simple. and the temptation for hero-position-seeking behavior around this is strong.
Agreed, but is every relay operator going to spend the hours a day (at least) to do that work?
Forcing this onto individual relay operators seems like something that will force centralisation more than giving relay operators the ability to subscribe to blocklists maintained by third parties whose job it is to create and maintain them.
At first there will only be a few providers, but people are very opinionated about how they want their content filtered (if at all), which will drive more competition and therefore decentralization among these blocklist providers.
I think the easiest first step is a Reddit-esque voting system, except rather than downvotes they are reports of abuse/copyright/etc., and when they hit a certain threshold, relays can have auto-delete settings to block or delete. That way it's user-curated, but decentralized by relay and each operator's desired settings.
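A sketch of what those operator settings could look like on the relay side, assuming reports are tallied from NIP-56 (kind 1984) events; all names and thresholds are illustrative, and the scheme inherits the sybil problem raised in the replies below:

```typescript
// Illustrative relay-side knobs for the report-threshold idea; not a
// defined NIP, and vulnerable to sybil voting as noted in the replies.
interface ModerationSettings {
  reportThreshold: number;    // distinct reporters required to act
  action: "block" | "delete"; // what the relay does past the threshold
}

// Given distinct reporters per event id (e.g. tallied from NIP-56
// kind 1984 reports), return the event ids the relay should act on.
function eventsToAction(
  reportersByEvent: Map<string, Set<string>>,
  settings: ModerationSettings
): string[] {
  return [...reportersByEvent.entries()]
    .filter(([, reporters]) => reporters.size >= settings.reportThreshold)
    .map(([id]) => id);
}
```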
Sounds like a great idea.
Larger relays will need AI filters. Your blocklist idea isn't gonna work when I can generate a new id for each post. Smaller relays can moderate themselves, just like the fediverse does. I'm not saying we can't make shared blocklists, but it's gonna do jack shit to stop illegal material.
And people will generate a million accounts to make all the voting not work. Votes can't work on nostr.
I mean within reason. Curation is valuable. Censorship is not. It must be toggleable. On and off button.
Agreed however the curation happens will have to evolve with scale.
I've always been partial to Slashdot's mod-point system, where you can surf at whatever level you want.
I'm not talking about optional levels of engagement. If that were the discussion, I wouldn't be engaging in this conversation.
My concern is purely about illegal content, and how that could be used by the censorship industrial complex to trojan horse nostr.
The best way to mitigate that is to come up with a system that handles that problem without offering the Feds an attack surface they can use in other ways. Which is no small feat.
It just dawned on me that keeping media delivery separate is the key.
Clients could incorporate external media delivery services like nostr.build, and those services would bear the brunt of dealing with illegal content. If people then used external links to bypass media delivery systems, that is outside of the control of Nostr, and outside of client developers' responsibility. That is the purview of the Feds.
If a media delivery service then got weird and started censoring for reasons outside of illegality, it could be replaced, or bypassed by posting external links, preventing full centralized censorship.
If a client starts moving past this and centralizes censorship for arbitrary reasons, it can be replaced.
I'm down with client side and full user control.
Thread started here.
Ursula? Is this you, Ursula?
Decentralised muting, I like it.
That could do it 🤔 but then how would they deal with it?
That would really be up to them.
They could use reporting, AI, and even go so far as to tap into the GIFCT database…
That's where we need to work on real verification. NIP-05 was good for that spot in time; I think many have said that. But somehow there needs to be verification of sorts, or the bots will make Nostr just as unusable. I get what Greg is saying, but we can't have blinders on, thinking it will just fix itself. We have to at minimum think about these things and have ideas on the back burner.
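For context on what NIP-05 actually verifies: the client fetches a well-known JSON file from the identifier's domain and checks the listed pubkey. A minimal sketch:

```typescript
// Minimal NIP-05 check: for "bob@example.com", fetch
// https://example.com/.well-known/nostr.json?name=bob and compare the
// hex pubkey listed under "names" with the one the profile claims.
async function verifyNip05(identifier: string, claimedPubkey: string): Promise<boolean> {
  const [name, domain] = identifier.split("@");
  if (!name || !domain) return false;
  const res = await fetch(
    `https://${domain}/.well-known/nostr.json?name=${encodeURIComponent(name)}`
  );
  if (!res.ok) return false;
  const json = (await res.json()) as { names?: Record<string, string> };
  return json.names?.[name] === claimedPubkey;
}
```

As the reply notes, this only proves control of a domain name at a point in time; it raises the cost for bots but isn't identity.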