Thread

Content Moderation Is Impossible At Scale - The nostr:nprofile1qqsy6577y73yl6uy6curjchr2qselsy72ukz9gtu2sj5tf5u6ddsvlcduvjxm Impossibility Theorem of Content Moderation

I think a bunch of folks here might not know about this internet 'law', but it does feel like something Nostr folks would agree with.

https://www.techdirt.com/2019/11/20/masnicks-impossibility-theorem-content-moderation-scale-is-impossible-to-do-well/
2025-10-03 11:38:02 from 1 relay(s) 3 replies ↓

Replies (4)

Good points from the article about the nonexistence of “perfect moderation”:

First, the most obvious one: any moderation is likely to end up pissing off those who are moderated. After all, they posted their content in the first place, and thus thought it belonged wherever it was posted — so will almost certainly disagree with the decision to moderate it. Now, some might argue the obvious response to this is to do no moderation at all, but that fails for the obvious reason that many people would greatly prefer some level of moderation, especially given that any unmoderated area of the internet quickly fills up with spam, not to mention abusive and harassing content. There is the argument (that I regularly advocate) that pushing out the moderation to the ends of the network (i.e., giving more controls to the end users) is better, but that also has some complications in that it puts the burden on end users, and they have neither the time nor inclination to continually tweak their own settings. No matter what path is chosen, it will end up being not ideal for a large segment of the population.

Second, moderation is, inherently, a subjective practice. Despite some people’s desire to have content moderation be more scientific and objective, that’s impossible. By definition, content moderation is always going to rely on judgment calls, and many of the judgment calls will end up in gray areas where lots of people’s opinions may differ greatly. Indeed, one of the problems of content moderation that we’ve highlighted over the years is that to make good decisions you often need a tremendous amount of context, and there’s simply no way to adequately provide that at scale in a manner that actually works. That is, when doing content moderation at scale, you need to set rules, but rules leave little to no room for understanding context and applying it appropriately. And thus, you get lots of crazy edge cases that end up looking bad. We’ve seen this directly. Last year, when we turned an entire conference of “content moderation” specialists into content moderators for an hour, we found that there were exactly zero cases where we could get all attendees to agree on what should be done in any of the eight cases we presented.

Third, people truly underestimate the impact that “scale” has on this equation. Getting 99.9% of content moderation decisions at an “acceptable” level probably works fine for situations when you’re dealing with 1,000 moderation decisions per day, but large platforms are dealing with way more than that. If you assume that there are 1 million decisions made every day, even with 99.9% “accuracy” (and, remember, there’s no such thing, given the points above), you’re still going to “miss” 1,000 calls. But 1 million is nothing. On Facebook alone a recent report noted that there are 350 million photos uploaded every single day. And that’s just photos. If there’s a 99.9% accuracy rate, it’s still going to make “mistakes” on 350,000 images. Every. Single. Day. So, add another 350,000 mistakes the next day. And the next. And the next. And so on.

nostr:nevent1qqspwthu5nde82ktc2959wmrxkacynfj5w7xd2wnzjdz3uw8d0cpcdqpr4mhxue69uhkummnw3ez6ur4vgh8wetvd3hhyer9wghxuet5rfnfe9
2025-10-03 12:52:11 from 1 relay(s) ↑ Parent Reply
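
As a concrete check on the scale arithmetic quoted in the reply above, here is a minimal Python sketch. The 99.9% accuracy figure and the daily volumes (1,000; 1 million; 350 million) are the article's own illustrative numbers, and the expected_mistakes helper is purely hypothetical, assuming each decision is independently correct with the given probability.

```python
# Sketch of the error-rate arithmetic from the article's third point.
# Assumes every moderation decision is independently correct with
# probability `accuracy`; the volumes and the 99.9% figure are the
# article's illustrative numbers, not measured platform data.

def expected_mistakes(decisions_per_day: int, accuracy: float) -> int:
    """Expected number of wrong calls per day at a given accuracy."""
    return round(decisions_per_day * (1 - accuracy))

for volume in (1_000, 1_000_000, 350_000_000):
    print(f"{volume:>11,} decisions/day at 99.9% accuracy -> "
          f"{expected_mistakes(volume, 0.999):,} expected mistakes")

# Output:
#       1,000 decisions/day at 99.9% accuracy -> 1 expected mistakes
#   1,000,000 decisions/day at 99.9% accuracy -> 1,000 expected mistakes
# 350,000,000 decisions/day at 99.9% accuracy -> 350,000 expected mistakes
```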