GA Nostr. I wonder what the eventual implications of this are for folks running personal relays like HAVEN. I mean, I run a pretty clean and heavily moderated set of relays myself… no porn, self-harm content, or anything of the sort. One good thing about Nostr is that nobody has ever tagged me in any such content here (for those who say I’m unfair to Nostr: this has happened to me on the Fediverse… once… but still, it did). If I were ever tagged by someone posting that kind of material, they'd be booted from my WoT pronto, and their content would of course be deleted from my relays. Still, the question remains: do I need to implement any form of age checking right now? And even if not, if the UK government (or any other) decides that personal relays need to enforce age verification, what should we actually do? Only the owner of the relay can write to the Outbox relay, so that should be fine (I think? Right?). Just don’t post stupid stuff. But what about the Inbox? How do we “age check” a bunch of bots and maybe a couple dozen people, most of whom are using pseudonyms and, let’s be honest, are unlikely to cooperate? Should I just give users a flag to disable the Inbox relay? I have absolutely no idea what to do, or even whether this affects personal relays at all. #GM #UKOnlineSafetyAct #haven #relay #personalRelay #ageVerification
Silberengel
We didn't choose this battle, Nostriches, but it might choose us. Godspeed. 🫡 >> The age-verification rule isn't aimed solely at sex sites, but at any digital entity where racy content or other "harmful" speech could be found. In addition to Bluesky, Reddit, X, Discord, and Grindr "have now announced they will deploy age assurance" schemes, Ofcom says. Services had until last week to start complying or face serious financial consequences. On Bluesky, this means submitting credit card information or submitting to a facial scan.<< https://reason.com/2025/07/28/the-age-gated-internet-is-here/

Replies (64)

Tor and the average commercial VPN like PIA or ExpressVPN will do close to nothing for relay operators if an enforcement agent comes for them, unfortunately. (And I'm not talking out of my arse, by the way… end users need to keep this in mind: their anonymity is mostly an illusion, and most folks who think they're "off the grid" are actually doxxing themselves in a myriad of ways every single day.) I'm still in the "Is there anything to comply with? And if so, what and how?" stage. I assume most people on Nostr won’t comply, and at worst, they'll just take their personal relays offline. But assuming there is an option to comply (involving relays), I need to take my "the resistance" hat off for a moment and try to understand whether it's even possible to provide people with a compliance path to begin with. Ultimately, each relay operator should make the call, as they're the ones with skin in the game.
Niel Liesmons 5 months ago
Is it the domain name that you tie to your person (i.e. the legal identity you choose to use) that is the legal attack vector? If we use #pubky servers or similar for hosting our Community and Personal content, what attacks are we talking about then?
In terms of what would hold up in court, I really can't say. I don’t want to sound too relaxed, nor do I want to paint too much of a dystopian picture… so I’ll focus more on the technical feasibility of deanonymising someone. Yes, DNS is a major one that projects like pkarr, Onion Services, etc., try to address (pkarr is still pretty new, and Tor is no silver bullet). But for personal users, there are a gazillion other layers that can expose them, starting with the software running on their mobile or desktop (keyboard apps, AI tools users have "authorised" to learn from their behaviour, shady background daemons like the ones Meta was using, the OSes themselves…). Then there's your ISP if you're self-hosting, CAs if you're using HTTPS, CDNs, and the VPS or cloud provider you're renting hardware from, all owning bits and pieces of personal information and metadata that can be pieced together to paint a picture. Then there are all sorts of fingerprinting techniques… and about a gazillion other possible deanonymisation vectors. Again, I can’t say what would hold up in court, but I’d work with the assumption that, for the vast majority of people exposed to Nostr, the authorities can figure out who’s behind an npub or operating a relay fairly easily.
Agreed. Morals are what emerge beyond fear of repression… Either when you're genuinely beyond repression (which is rare), or when you're fully aware of the consequences and still act according to your beliefs. I don't think I need to state where I stand morally, or I wouldn’t be maintaining Haven as my most expensive hobby, nor encouraging people to self-host in the first place. If we had 20,000 people self-hosting relays (be it Haven or any other relay software), the picture would look very different compared to just a few hundred. And if Nostr blows up like torrents did back in the day, that would again be a very different scenario. I strongly believe that the default path on Nostr will be one of non-compliance (again, assuming there’s even anything to comply with). Still, my take is that healing happens when, given the choice, people voluntarily choose correctly (which, as you're suggesting, might be doing nothing). I don't believe in imposing compliance, but I also don't believe in imposing non-compliance. That doesn’t mean I can’t hold strong opinions about it, of course.
You're doing great. I took notes on all of your questions, as well as on feedback from other users. Hopefully other devs will join in here, since the number one asks from users so far are installers, packages for their ecosystems, and an admin UI. And you folks really, really don't want to use any UI built by me 🤣🤣🤣.
What I decided to do with chorus is that the inbox events are visible only to the people that the relay serves (which in my case is only 3 people) and to the event's author… until I actively approve them for public view. This prevents illegal content from being disseminated from my relay (even if it might arrive and get stored). The downside is that some people watching the conversation don't see it in real time (if, however, they follow the stranger, they will see it from the stranger's relay). Sometimes I may go many days without moderating events on the relay; I've been pretty lax about it.
This is an interesting solution. Thanks for sharing! Haven could sort of do the same for the Inbox relay, for example, provide an optional admin interface where folks could release or deny events they’re tagged in (all behind a flag; by default, it would simply auto-release all events). Then it’s up to each relay operator to "validate" the npubs however they see fit. If they really want to go out of their way to procure people’s documents, face scans, or whatever, that’s totally up to them and outside Haven’s scope. I'm getting so many requests and ideas that require an admin page… Haven really, really needs a frontend person, or you folks will soon have to deal with my HTMX-powered "lazy backend" UI 🤣.
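To make the idea concrete, here's a minimal Go sketch of what such a hold-and-release inbox could look like. Everything here is hypothetical (the names `InboxQueue`, `Ingest`, `Release`, etc. are illustrative, not Haven's or chorus's actual API): events from strangers are stored but not served until an operator releases them, unless an auto-release flag (the proposed default) is set.

```go
package main

import (
	"fmt"
	"sync"
)

// Event is a minimal stand-in for a Nostr event (illustrative only).
type Event struct {
	ID     string
	Pubkey string
	Kind   int
}

// InboxQueue holds tagged-in events until the operator releases them.
// With AutoRelease set (the proposed default), events skip the queue entirely.
type InboxQueue struct {
	mu          sync.Mutex
	AutoRelease bool
	pending     map[string]Event
	released    map[string]Event
}

func NewInboxQueue(autoRelease bool) *InboxQueue {
	return &InboxQueue{
		AutoRelease: autoRelease,
		pending:     make(map[string]Event),
		released:    make(map[string]Event),
	}
}

// Ingest stores an incoming event; it is only publicly readable once released.
func (q *InboxQueue) Ingest(ev Event) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if q.AutoRelease {
		q.released[ev.ID] = ev
		return
	}
	q.pending[ev.ID] = ev
}

// Release is what an admin-page "approve" button would call.
func (q *InboxQueue) Release(id string) bool {
	q.mu.Lock()
	defer q.mu.Unlock()
	ev, ok := q.pending[id]
	if !ok {
		return false
	}
	delete(q.pending, id)
	q.released[id] = ev
	return true
}

// Deny drops a pending event without ever serving it.
func (q *InboxQueue) Deny(id string) bool {
	q.mu.Lock()
	defer q.mu.Unlock()
	if _, ok := q.pending[id]; !ok {
		return false
	}
	delete(q.pending, id)
	return true
}

// IsVisible answers read queries: only released events are served publicly.
func (q *InboxQueue) IsVisible(id string) bool {
	q.mu.Lock()
	defer q.mu.Unlock()
	_, ok := q.released[id]
	return ok
}

func main() {
	q := NewInboxQueue(false) // moderation queue enabled
	q.Ingest(Event{ID: "abc", Pubkey: "npub1stranger", Kind: 1})
	fmt.Println(q.IsVisible("abc")) // false: held for review
	q.Release("abc")
	fmt.Println(q.IsVisible("abc")) // true: operator approved it
}
```

The point of keeping this in the storage layer (rather than the client) is exactly what's described above: the illegal content may arrive and be stored, but it is never disseminated from the relay.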
So are you speaking from the relay operator's perspective or from the developer's responsibility? I would expect operators to comply with legislation for the region they operate in, exactly how typical web2 applications operate. I would expect to be forced to comply with cease-and-desist or take-down orders if you fall under the jurisdiction of the region summoning you. If you operate a relay in the EU and the EU government slaps you with a take-down request, I'd expect you to comply. These are things we've discussed at GitCitadel; we agree we don't intend to operate out of compliance. However, we will do everything we can to empower customers to operate their own equipment to avoid government action. Dealing with feds in any country is no joke; they have the power to turn your life upside down with the click of a pen. I personally believe a single company or founder of a company should not die on the hill of non-compliance; there are ways (for now) to keep yourself on the outside of a prison cell. IMO we are way more useful outside one than inside.
That's the reasonable take: comply and promote freedom. Silly idealism combined with low technical skills is far too common and far too dangerous.
so, let's just give up now? they are going to shut down the mobile devices first, because of the duopoly of google and apple. for this reason a more reasonable response is to stick with open platforms anyway. it's already a waste of time trying to put nostr on iOS as it is, just ask will.
force them to overreach and try to shut down the entire linux industry. that will be highly successful i'm sure. it also puts a big roadblock up against the "but muh adoption" meme of all the influencoors (and even some not-so-influencoors) who think that adoption is everything. we can't get people to adopt stuff that the government has blocked from normies. the end. we target linux and web browsers, and if that means we rule out 90% of the addressable market, so be it. there isn't a normie in the world who would even care to fight the government on this. and there isn't a freak who wouldn't.
also, let's be clear about this: gitcitadel is not about attracting hordes of normie social media dopamine fiends. it's about providing back end infrastructure, and applications to use it, to private businesses. the government can't push those people around as easily as they can lock you out of google play and the apple store. they are already routinely using all kinds of technologies that are completely bespoke and have nothing directly to do with social media. they also can't stop private associations from using them. they can't stop private citizens from using open protocols on their open operating systems. the inevitable conclusion from all of this is that this whole meme about "mUh AdOpTiOn" needs to die right here and now. it's not gonna happen, ok? they won't let it happen. people will only learn if they have friends who are doing it. as it is, i'm virtually shut out of reddit already. i'm shut out of X because they shadow ban me automatically, if not explicitly suspend my account within 5 of my posts. this is their cow paddock and they won't have any goats in it.
Yeah, widespread adoption could only be short-lived. We've always seen Nostr as something thinly connecting LANs and similar. I think that's what the next Internet will look like: no more global stage, just people moving between local stages.
Answering here since the conversation below went in a different direction. In Anthony’s Wishful Thinking Universe, there’s basically no distinction. That is, users running relays on their mobile devices or Raspberry Pi are operators just like the big ones, such as @someone, @cloud fodder, and others. Folks running Haven, for instance, are storing and serving notes from people in their WoT when those people tag them. (I’m serving your note above from my Inbox relay; it’s small scale, of course, but people running personal relays are still relay operators.) In practice, I expect law enforcement to go after the big players (Primal, Damus, nos.lol, nostr.mom) first, due to their scale and the higher likelihood of finding adult content on their relays. But we also need to provide tools for the smaller-scale operators to self-moderate as well. Regarding government compliance (e.g. facial recognition, ID checks for age verification), I don’t think it’s up to me or other relay software devs to bake compliance directly into the relay software. However, moderation tools should be designed in a way that allows operators to build their own ad hoc compliance solutions. For example, @Mike Dilger ☑️ suggested implementing an approval queue. Then it's up to the relay operator to verify the user’s age before releasing a post, however they see fit, especially if they’re operating in the UK. Of course, I wouldn’t impose this on all Haven users / operators. There should be an auto-approval flag for folks in countries with less strict compliance requirements. Still, the option to enable an approval queue should be there, and it’s definitely jumping the queue on my list of side quests (That said, it’s not something that’ll be ready tomorrow, unfortunately).
Agreed... Client devs, especially mobile client devs, may be low-hanging fruit. And I guess that “My client just provides a WebSocket between the client and the relay” will work about as well as the old “Demonoid is just an indexer and search engine” excuse. Still, as someone almost strictly on the relay side of things, I don’t think pushing all responsibility to clients with an "I’m just running some JSON backend thing" stance is going to hold up either. At a bare minimum, relay software should offer proper tools to manage Kind 0, Kind 1, and NIP-23 content. And folks running additional things like Blossom, NIP-96, etc., need solid ways to moderate media. At the moment, Haven has neither. Well... the moderation tools are basically the file system and a database client 😅. I haven’t played with relay.tools myself, but given the scale of some relays running on it, I assume you already have some moderation tooling in place. I think both client and relay software devs have to work with operators (and you and I are both playing both roles at the moment).
Lol, got it. To be honest, I don’t have much access to client devs (and for the most part, I don’t even know where they hang out to begin with). I started this thread mostly with relays in mind, since the “Other things” folks seem to know what they’re doing. I really can’t say what they’re discussing at the moment—but I do wonder if there’s a plan. @General Ned Ludd, @hzrd149, @verbiricha, @Cesar Dias, @Cody, @Vitor Pamplona, you’re some of the cool client devs I’ve had the chance to interact with. Some of you even reply to me occasionally 🤣. Are you planning any changes to your respective clients in terms of moderation and compliance with the whole age verification thing?
> Are you planning any changes to your respective clients in terms of moderation and compliance with the whole age verification thing? I thought about adding it as a joke... because there is no way I could verify a user's age. That said, I agree we really need more moderation tools for relays and blossom servers. I'm just not good at building backend applications 😞
To be fair, I think noStrudel is not in the worst position, given that it doesn't rely on app stores, etc. But I assume that sooner or later all layers will have to do something about it. I really can't imagine noStrudel doing face verification or asking for a photo of my ID, but I do wonder what any clients planning to comply will do. I hope you all don't just block the UK (as you'd soon have to do the same for several other countries).
Amethyst is a 17+ app on the Play Store because we couldn't get rid of porn safely enough. For Haven users, I do think we need a LOT of tooling. You should be running an AI that parses every type of post for illegal stuff, and another one to process every image in your Blossom server. And if those tools can't decide, they need a way to send you a message so that you can decide whether to keep something dubious or not. Relay ops development is one of the worst-funded areas in Nostr, unfortunately. What I know is that clients alone will not be able to solve this.
I would even say that we need two layers here: one to release all events signed by a certain npub you trust, and another to review events signed by anons one by one. Blog-comment-moderation style.
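That two-layer split could look something like the Go sketch below (again hypothetical names, not any relay's real API): a trust set of npubs whose events pass straight through, and everything else landing in a per-event review queue.

```go
package main

import "fmt"

// Policy implements the two layers: trusted npubs publish straight through,
// everyone else goes into a blog-comment-style review queue.
type Policy struct {
	trusted map[string]bool // npubs whose events auto-release
	queue   []string        // event IDs awaiting one-by-one review
}

func NewPolicy(trustedNpubs ...string) *Policy {
	t := make(map[string]bool)
	for _, n := range trustedNpubs {
		t[n] = true
	}
	return &Policy{trusted: t}
}

// Admit returns true if the event can be served immediately;
// otherwise the event is queued for manual review.
func (p *Policy) Admit(eventID, authorNpub string) bool {
	if p.trusted[authorNpub] {
		return true
	}
	p.queue = append(p.queue, eventID)
	return false
}

// Trust promotes an npub so their future events skip the queue.
func (p *Policy) Trust(npub string) { p.trusted[npub] = true }

func main() {
	p := NewPolicy("npub1friend")
	fmt.Println(p.Admit("ev1", "npub1friend"))   // true: trusted layer
	fmt.Println(p.Admit("ev2", "npub1stranger")) // false: queued for review
	fmt.Println(len(p.queue))                    // 1
}
```

The trusted set could be seeded from the operator's WoT or follow list, which would keep the manual queue small in practice.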
Niel Liesmons 4 months ago
I'm building npub-validation into Zapchat. If we find a way to do this with events, you don't need the admin panel. For that part at least.
Exodia38 4 months ago
My opinion is that you should analyze the root of the problem in order to find appropriate solutions
How do we handle the waiting room, though? Since signed event dates can't be changed, if a user publishes one day and the note is only released a few days or even hours later, it will get buried by any client that sorts chronologically, right? So if it takes an operator days to respond, any social note is basically unseen. I say this because it just seems like either you get on the whitelist or you don't bother posting to said relay, unless the operator responds quickly. I could see this being more useful for content that doesn't require timing.
Exodia38 4 months ago
A little hint is this: how do you deal with a black hole of lies that wants absolute control
I suppose, but I'd also assume that clients use the note time to display notes, not relying on the relay to return notes in any known ordering. So clients displaying chronological notes would bury old notes.
Right, but then why bother with the grey-list? Just whitelist or don't share the notes, I guess. No one is going to check whether their notes are propagating on relays; they just hit send and hope they get reach, but sitting in a greylist you'd just be left wondering why your note never got reach. Grey-list meaning it requires manual human intervention; otherwise it's still just a whitelist/blacklist with spam filters. How many relay operators will be minute-level attentive for social feeds? We both know that Nostr is highly time-based right now. There are all kinds of dates and times that are optimal for getting even just your closer friends to see your notes.
Think of this from the relay operator’s perspective as well. You care less about immediately releasing someone else's content and more about filtering out bots and weird stuff, but there may be new people writing interesting content worth spreading around. If they do it often enough, you’ll eventually whitelist their npub. But if there’s no grey middle ground, you’re forced to choose between allowing by default (and risking the spread of awful content) or blacklisting by default (and creating a bubble). Another way to look at it is the GitHub-like PR model: send enough helpful PRs and you may eventually gain write privileges to my repo. IMO, this is a fair middle ground for moderation. I agree with you that this should be mentioned somewhere to make users aware that their notes might remain in limbo for a while… maybe via NIP-11. As for making sure operators don’t simply forget about notes, maybe daily reminder emails? That’s what Mastodon offers.
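The PR-model promotion described above could be sketched like this in Go (a toy illustration; the threshold, the `Greylist` type, and its methods are all made up for this example): each manual release increments a per-npub counter, and once enough releases accumulate, the npub is auto-promoted to the whitelist.

```go
package main

import "fmt"

// Greylist tracks manual approvals per npub; after Threshold approvals,
// the npub is promoted to the whitelist and future notes skip review,
// like earning write privileges to a repo by sending good PRs.
type Greylist struct {
	Threshold int
	approvals map[string]int
	whitelist map[string]bool
}

func NewGreylist(threshold int) *Greylist {
	return &Greylist{
		Threshold: threshold,
		approvals: make(map[string]int),
		whitelist: make(map[string]bool),
	}
}

// IsWhitelisted reports whether notes from this npub auto-release.
func (g *Greylist) IsWhitelisted(npub string) bool { return g.whitelist[npub] }

// Approve records one manual release; enough of them earn promotion.
func (g *Greylist) Approve(npub string) {
	g.approvals[npub]++
	if g.approvals[npub] >= g.Threshold {
		g.whitelist[npub] = true
	}
}

func main() {
	g := NewGreylist(3)
	for i := 0; i < 3; i++ {
		fmt.Println(g.IsWhitelisted("npub1new")) // false until the third approval lands
		g.Approve("npub1new")
	}
	fmt.Println(g.IsWhitelisted("npub1new")) // true: promoted to the whitelist
}
```

This keeps the grey middle ground temporary: the operator only reviews a new npub a handful of times before the system stops asking.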
I think it's going to take me a bit to get comfortable with the usefulness of this proposition. I agree with the argument, but I'm still not sure it works in the benefit of the user any more than simply begging the operator to whitelist them, in which case the operator could look at their past notes stored on another relay and choose to authorize them. Not to mention, we're simply discussing non-paid public relays, I suppose. This also only works if the number of notes to review is low enough to interact with. I think we're just discussing a human high-level filter.

I guess my final thought is: does my opinion matter here? This sounds like a relay feature/implementation detail. The relay should allow for this to be enabled/disabled and offer an admin interface for implementing it. My whole software dev MO is optional features: build it, but let me turn it on/off.

> As for making sure operators don’t simply forget about notes, maybe daily reminder emails? That’s what Mastodon offers.

Also, who am I to say what the operator _should_ do? What I do know is what I would do, and that's check it whenever I have a few seconds of "free" time. It would probably never happen, so that's my bias!
On my side, I was thinking more about personal or small community relays. But honestly, even at scale, it’s either a big team of human reviewers or a mix of that and algorithms à la big tech social media (and given the current state of things, I’d much prefer to go back to the army of mods). My take is: give operators the tools and let them decide. Want to whitelist everyone by default? Fine. Want to blacklist everybody but yourself by default? Also fine. Want a moderation queue? Here you go. Want to be notified about new items to review? No problem, just let me know the level of granularity you want for notifications. This is what we can do on the relay development side without imposing our own views.
Niel Liesmons 4 months ago
Deleting from the community relay = revoking already. Good point tho, I didn't make that clear.
Niel Liesmons 4 months ago
Second way of revoking is removing the badge from the profile label (or badge awarding event) of that profile.
Niel Liesmons 4 months ago
Of course, but then I need an API so that people who already have a NIP-05 can create (pending) NIP-05s from within my apps. Or I need an event for a NIP-05 invite or something. But then the relay has to look at events as well, and then the Badge solution wins.
Niel Liesmons 4 months ago
Members need to be able to invite others by directly handing them write access, but only one level deep. With NIP-05 that gets complex fast.
You're right, it is a problem. I've always felt bad about this aspect, that I was breaking other people's experience. But I can't think of another way to be sure that I'm not going to get in trouble. We need more thinking in this area.
I just want to see replies from strangers, without letting them use the relay to send material that would get me in trouble. Maybe I never 'release' their notes. How bad would it be? All of their followers will still see their reply (posted to their outbox as well as my inbox). Only my followers would not see it. Unless of course I wanted to reply to it, in which case I would release it and reply.