s3x_jay
s3x_jay@s3x.social
npub1veeq...38wt
I'm the guy behind https://s3x.social and a bunch of other sites. For ~15 years I've focused on building & running #gay sites. I'm also one of the Community Ambassadors at https://XBiz.net - the #porn industry's leading B2B discussion forum. #LGBT #NYC #Harlem
s3x_JAY 2 years ago
Nostr clients need more support for `a` tags & `naddr` / `nevent` / `nprofile` IDs with embedded relay info. They're incredibly powerful. Situations like in the pic shouldn't happen. [image]
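For anyone building a client: nostr-tools has NIP-19 helpers for all three of these. A quick sketch of how the relay hints travel with the ID (the keys, IDs, and relay URLs below are placeholders, and I'm assuming a recent nostr-tools - check the repo):

```typescript
import { nip19 } from 'nostr-tools'

// Placeholder values - substitute real hex keys/IDs.
const pubkey = 'f'.repeat(64)   // 32-byte hex pubkey
const eventId = 'e'.repeat(64)  // 32-byte hex event id
const relays = ['wss://relay.example.com']

// nprofile: pubkey + relay hints, so clients know where to find the profile.
const nprofile = nip19.nprofileEncode({ pubkey, relays })

// nevent: event id + relay hints (author/kind are optional extras).
const nevent = nip19.neventEncode({ id: eventId, relays })

// naddr: points at a replaceable event by kind:pubkey:identifier.
// Kind 30023 (long-form content) is just an example here.
const naddr = nip19.naddrEncode({
  identifier: 'my-article',
  pubkey,
  kind: 30023,
  relays,
})

// The matching `a` tag a referencing event would carry,
// with a relay hint in the third position:
const aTag = ['a', `30023:${pubkey}:my-article`, relays[0]]

// Decoding returns the relay hints, so the client never has to
// guess which relay holds the referenced event.
console.log(nip19.decode(naddr))
```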
s3x_JAY 2 years ago
Yesterday @rabble added a new "NIP-68" and a redraft of NIP-69 to the PR that was opened two weeks ago.

NIP-68 deals with labeling. It can be used for everything from reviews to scientific classification to stock ticker symbols, and it allows both structured and unstructured labels on _any_ applicable event. With NIP-68, authors can update and correct the labeling of their events after initial publication, and third parties can add labels too. (The expectation is that client apps will restrict visibility of third-party labels to people in the labeler's "network" or otherwise trusted.)

NIP-69 was largely rewritten and is now based on NIP-68 labels. It specifies two "vocabularies" for content moderation: one fairly fixed and rigid, covering the moderation issues most likely to arise on Nostr; the other completely organic and open, intended for things like regional moderation issues (e.g. insulting the Thai king). Client apps can use as much or as little of the vocabularies as they like.

NIP-69 tries to establish a model where content moderation isn't black and white, but has many shades of gray: people can recommend anything from showing the content, to content warnings, to hiding the content, to actual deletion. Another "shades of gray" factor is that this approach assumes everyone can be a moderator - it's just that some moderators are trusted by more people than others. Moderators trusted by relay owners will obviously have the biggest impact, since only relays can actually delete events. It's a bottom-up approach where people pick their own moderators. (The next step will be a NIP for "Trust Lists" so people can specify whose reports can filter their feed - see the sketches below.)

Given that censorship is an act of power and control where someone imposes their preferences on someone else, this approach to content moderation is highly censorship-resistant, since it's a voluntary, opt-in scenario. @fiatjaf @jb55 @npub1h52v...fjkq @primal @Sirius @Kieran @Fabian @hodlbod @Matt Lorentz @nos
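To make the NIP-68 part concrete, here's roughly what a third-party label event could look like. The kind number, tag names, and namespace below are my assumptions based on the draft, not final spec:

```typescript
// Hypothetical third-party label event in the spirit of the NIP-68 draft.
// Kind number, 'L'/'l' tag names, and the namespace are all assumptions.
const moderatorPubkey = 'a'.repeat(64) // placeholder hex pubkey
const targetEventId = 'b'.repeat(64)   // placeholder hex event id

const labelEvent = {
  kind: 1985,                                      // assumed label kind
  pubkey: moderatorPubkey,
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ['L', 'com.example.moderation'],               // structured label namespace
    ['l', 'nudity', 'com.example.moderation'],     // the label itself
    ['e', targetEventId, 'wss://relay.example.com'], // event being labeled
  ],
  content: 'Optional free-text note - the unstructured side of labeling.',
}
```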
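And the "everyone can be a moderator" model is basically a client-side filter over labels from pubkeys you trust, with graded outcomes instead of a binary. A rough sketch - all names invented, and the action tiers are just the shades-of-gray idea:

```typescript
// Client-side moderation sketch: only labels from trusted pubkeys count,
// and the result is graded rather than all-or-nothing. Illustrative only.
type Label = { targetId: string; labeler: string; value: string }
type Action = 'show' | 'warn' | 'hide'

// Pubkeys whose reports you accept - your personal "trust list".
const trustList = new Set<string>(['a'.repeat(64)])

function moderationAction(eventId: string, labels: Label[]): Action {
  const trusted = labels.filter(
    (l) => l.targetId === eventId && trustList.has(l.labeler),
  )
  if (trusted.some((l) => l.value === 'illegal')) return 'hide' // strongest tier
  if (trusted.length > 0) return 'warn'                         // content warning
  return 'show'                                                 // default: show it
}
```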
s3x_JAY 2 years ago
Bluesky is getting a few things right… The compete-to-get-accepted thing is brilliant. Clubhouse did that and it was really hot - until it wasn't. I've heard no onboarding complaints. Then again, it seems to be mirroring the experience of corporate social media - for better and for worse.