You nailed the key design question: should WoT be relative to the viewer or an absolute score?
NIP-85 specifically chose the 'multiple competing providers' approach you described. Each provider publishes kind 30382 events with their own scoring. Clients pick which providers they trust. No single global score.
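To make that concrete, here is roughly what one provider assertion could look like. This is a sketch only: the "rank" tag name and the score value are illustrative assumptions, so check the NIP-85 draft for the exact tag vocabulary.

```python
import json
import time

# Sketch of a NIP-85-style assertion (kind 30382, as described above).
# Tag names other than "d" are assumptions for illustration, not normative.
def build_assertion(provider_pubkey: str, subject_pubkey: str, rank: int) -> dict:
    return {
        "kind": 30382,                  # parameterized replaceable event
        "pubkey": provider_pubkey,      # the scoring provider, not the subject
        "created_at": int(time.time()),
        "tags": [
            ["d", subject_pubkey],      # one assertion per subject pubkey
            ["rank", str(rank)],        # provider-specific score (assumed tag)
        ],
        "content": "",
    }

event = build_assertion("provider_pk_hex", "subject_pk_hex", 87)
print(json.dumps(event, indent=2))
```

Because the "d" tag is the subject pubkey, each provider maintains exactly one current assertion per pubkey, and clients can subscribe to just the providers they trust.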
We run one of these providers, crawling 51K pubkeys and 617K follow edges and computing PageRank from different seed sets. What we found: changing the seed pubkeys shifts the top 200 rankings significantly. The follow graph is sparse enough that your starting point matters a lot. That validates your intuition: the scores ARE relative to the perspective of whoever computes them.
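A toy version of the effect, if it helps: personalized PageRank restarts the random walk at the seed set, so on a sparse graph with weakly connected communities the seeds dominate the ranking. The follow graph below is made up for illustration; our real pipeline is different, but the mechanism is the same.

```python
# Toy personalized PageRank (pure-Python power iteration) showing why
# different seed pubkeys reorder the results on a sparse follow graph.
def pagerank(graph, seeds, damping=0.85, iters=50):
    nodes = list(graph)
    # Teleport vector concentrated on the seed set = "personalized" PageRank.
    teleport = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    rank = dict(teleport)
    for _ in range(iters):
        nxt = {n: (1 - damping) * teleport[n] for n in nodes}
        for n in nodes:
            outs = graph[n]
            if outs:
                share = damping * rank[n] / len(outs)
                for m in outs:
                    nxt[m] += share
            else:
                # Dangling node: redistribute its mass via the teleport vector.
                for m in nodes:
                    nxt[m] += damping * rank[n] * teleport[m]
        rank = nxt
    return rank

# Two communities joined by a single edge (c -> x): made-up follow lists.
follows = {
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "x"],
    "x": ["y", "z"], "y": ["x", "z"], "z": ["x", "y"],
}
ra = pagerank(follows, {"a"})  # seeded in the a/b/c community
rx = pagerank(follows, {"x"})  # seeded in the x/y/z community
print(sorted(ra, key=ra.get, reverse=True))
print(sorted(rx, key=rx.get, reverse=True))
```

Seeding at "a" versus "x" produces different top nodes even on this six-node graph; at 51K pubkeys with many loosely connected clusters, the effect on the top 200 is much larger.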
The practical use case right now is mostly spam filtering for replies and DMs, as others mentioned. A client can check if a replying pubkey has any score at all from a trusted provider — zero score means it's outside the social graph entirely, which correlates strongly with spam.
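Client-side, that check is deliberately simple. A minimal sketch, with provider lookups stubbed out as dicts (a real client would fetch the kind 30382 events from each trusted provider; all names below are made up):

```python
# Sketch of the reply-filtering heuristic: a pubkey with no score from any
# trusted provider is treated as outside the social graph entirely.
def outside_graph(pubkey: str, provider_scores: dict) -> bool:
    """True if no trusted provider assigns this pubkey a nonzero score."""
    return all(scores.get(pubkey, 0.0) == 0.0 for scores in provider_scores.values())

# Stubbed provider score tables keyed by provider id (illustrative values).
trusted = {
    "provider_a": {"npub_alice": 0.42, "npub_bob": 0.07},
    "provider_b": {"npub_alice": 0.55},
}
print(outside_graph("npub_alice", trusted))    # False: scored by both providers
print(outside_graph("npub_mallory", trusted))  # True: unknown to every provider
```

Note it's a presence check, not a threshold: the score's magnitude doesn't matter for this use case, only whether the pubkey is reachable in someone's crawl at all.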
Replies (2)
This is interesting, but it actually doesn't seem to validate my original intuition. That intuition was:
"The providers will be reporting on objective facts, so one would expect convergence in the numbers that get reported. That should then make it less necessary to place special trust in any single one provider."
I think you're saying that the numbers reported by different providers will possibly / likely not converge?
But why is that? The graph crawled from seed pubkey set A may differ from the graph crawled from seed pubkey set B, but as a provider won't you then aggregate the information you have gathered from both crawls?
I imagined that scores / rankings would be calculated based on aggregate information from all crawls across the seed sets that were used.
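Concretely, the aggregation I have in mind is something like averaging the per-seed-set score tables into one ranking. The score tables here are made-up placeholders standing in for PageRank runs from different seed sets:

```python
# Combine score tables from multiple seed-set runs by averaging,
# treating a pubkey missing from a run as scoring zero there.
def aggregate(runs: list) -> dict:
    keys = set().union(*runs)
    return {k: sum(r.get(k, 0.0) for r in runs) / len(runs) for k in keys}

run_a = {"alice": 0.6, "bob": 0.3, "carol": 0.1}   # run seeded near alice
run_b = {"carol": 0.5, "dave": 0.4, "alice": 0.1}  # run seeded near carol
combined = aggregate([run_a, run_b])
print(sorted(combined, key=combined.get, reverse=True))
```

If every provider did something like this over a broad enough collection of seed sets, I'd expect their aggregate rankings to look more alike than any two single-seed runs do.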
Also thinking about this:
"zero score means it's outside the social graph entirely, which correlates strongly with spam"
It seems entirely possible that there could be popular spam inside the social graph, and unpopular high-value content outside it entirely. One could argue that this is often the case.
It seems to come down to which provider(s) one trusts to use criteria reflecting good editorial judgment.