An idea to help Nostr relays scale more realistically to the size of Twitter:
- Define a minimum period for a post to gain traction. This is the minimum amount of time that a post would typically take to be seen, go viral, etc.
- If this period passes and the post has seen little interaction, it becomes a candidate for removal.
- Define a minimum interaction threshold for a post to remain on the network. In other words, how many clicks/likes/comments per hour or day does a post need to stay hosted? (See the sketch below.)
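Here's a minimal version of that check, assuming a relay keeps a running interaction count per post; the window and threshold values are placeholders I made up, not anything specified above:

```python
import time
from dataclasses import dataclass

# Hypothetical parameters, not values from the idea above
TRACTION_WINDOW_SECS = 48 * 3600     # time a post gets to gain traction
MIN_INTERACTIONS_PER_DAY = 25        # clicks/likes/comments per day needed to stay hosted

@dataclass
class PostStats:
    created_at: float    # unix timestamp of the post
    interactions: int    # clicks + likes + comments seen so far

def should_keep(post: PostStats, now: float | None = None) -> bool:
    """Keep a post while it is still inside its traction window,
    or once its interaction rate meets the minimum threshold."""
    now = now if now is not None else time.time()
    age_secs = now - post.created_at
    if age_secs < TRACTION_WINDOW_SECS:
        return True                              # still has time to gain traction
    rate_per_day = post.interactions / (age_secs / 86400)
    return rate_per_day >= MIN_INTERACTIONS_PER_DAY
```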
This should be technically possible as well. I’m not sure whether this holds for Nostr relays, but at least in the case of IPFS, posts are “content-addressable” rather than “location-addressable”.
When I want to find a post, I don’t say “go to this DNS address to find the post”. Instead I say “this is the hash of the post I’d like to see”, and ask the network whether anyone has it. To my understanding, this is also how BitTorrent works.
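Roughly, the difference looks like this; the sketch below just hashes raw bytes with SHA-256 to stand in for an IPFS CID or BitTorrent info-hash, and the URL is a made-up placeholder:

```python
import hashlib

def content_id(post_bytes: bytes) -> str:
    """Content addressing: the identifier is derived from the bytes themselves,
    so any node holding identical bytes can serve the request."""
    return hashlib.sha256(post_bytes).hexdigest()

# Location-addressed: "fetch whatever lives at this address"
# location = "https://relay.example.com/posts/123"   # hypothetical URL

# Content-addressed: "ask the network for whoever has these exact bytes"
post = b'{"content": "hello nostr", "created_at": 1700000000}'
wanted = content_id(post)
# Any relay or peer that stores `post` can answer a query for `wanted`,
# and the client can verify the response simply by re-hashing it.
assert content_id(post) == wanted
```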
If the post is hosted across multiple relays, each relay can announce to the network how much traffic it has seen for that post. In this way, the entire network should have a global picture of how much interaction a post is getting.
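For example, if relays gossiped per-post interaction counts (the announcement format below is purely hypothetical), any node could sum them into a network-wide figure:

```python
from collections import defaultdict

# Hypothetical gossip messages: (relay_id, post_hash, interactions_seen_last_hour)
announcements = [
    ("relay-a", "f3ab12cd", 120),
    ("relay-b", "f3ab12cd", 45),
    ("relay-c", "f3ab12cd", 0),
]

def global_interaction_counts(announcements):
    """Sum the traffic each relay reports per post to get a
    network-wide view of how much interaction each post is seeing."""
    totals = defaultdict(int)
    for _relay_id, post_hash, count in announcements:
        totals[post_hash] += count
    return dict(totals)

print(global_interaction_counts(announcements))   # {'f3ab12cd': 165}
```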
This would offer an alternative to a blanket “delete every post after 30 days” rule: posts that never gained traction could be deleted earlier than that deadline to save space, while posts that users still show high demand for could be kept around.
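Put together, the policy could look something like this sketch, where a demand-driven time-to-live replaces the flat 30-day deadline; all thresholds are invented for illustration:

```python
# All thresholds below are invented for illustration
MIN_RATE = 25           # interactions/day needed to count as "gained traction"
HIGH_DEMAND_RATE = 200  # interactions/day that justify keeping a post longer
DEFAULT_TTL_DAYS = 30   # the flat deadline this policy would replace
EARLY_DELETE_DAYS = 3   # early cutoff for posts that never gained traction

def ttl_days(interactions_per_day: float) -> float:
    """Demand-driven time-to-live instead of a flat 30 days."""
    if interactions_per_day < MIN_RATE:
        return EARLY_DELETE_DAYS      # never gained traction: free the space early
    if interactions_per_day >= HIGH_DEMAND_RATE:
        return float("inf")           # still in high demand: keep hosting it
    return DEFAULT_TTL_DAYS           # otherwise behave like the default rule
```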
Greg
npub1k9sx...gdk7
Reading through peercuration a bit, and it seems interesting. Users have a web of peers (similar to a Web of Trust), and they rely on those peers to help curate content. If their peers rate content highly, then that content is more likely to appear on the user’s feed. The user is also responsible for rating content, so that their peers can be recommended content too.
If a user consistently rates content highly that their peers disagree with (imagine a bot, or someone trying to spam), then those peers may decide to drop that user from their web of peers. In other words, a user is incentivized to rate content honestly, so as to maintain their network of peers.
Still, every approach to curating content for users requires some sort of input from the user. In this case, peercuration requires the user to send their peers a “score” for content they consume. These scores are averaged across all of a user’s peers; this way the user gets an idea of what their friends think.
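A toy version of that averaging step, assuming each peer reports a numeric score per post (the data layout and names here are mine, not peercuration’s actual format):

```python
from statistics import mean

# Hypothetical peer ratings (0.0 to 1.0) for each post on the user's feed
peer_scores = {
    "post-abc": {"alice": 0.9, "bob": 0.7, "carol": 0.8},
    "post-xyz": {"alice": 0.1, "bob": 0.2},
}

def ranked_feed(peer_scores: dict[str, dict[str, float]]) -> list[tuple[str, float]]:
    """Average each post's scores across the user's peers,
    then rank the feed by that average."""
    averaged = {post: mean(scores.values()) for post, scores in peer_scores.items()}
    return sorted(averaged.items(), key=lambda item: item[1], reverse=True)

# post-abc (rated highly by all three peers) ranks ahead of post-xyz
print(ranked_feed(peer_scores))
```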
But this is just unrealistic. We cannot expect a user to rate every post they come across, especially posts they have no interest in. If they never click on a post, how does it get rated? At the very least, the scoring system should take into account every post that appeared on the user’s feed but was never clicked.
For the posts that were clicked, the user might like the post, comment on it, or bookmark it. In the case of a comment, AI could be used to understand what the comment means. In the case of a like, does it really convey just how much the user liked the post? A binary input surely doesn’t capture this. A bookmark may be given a higher score than a like, but again these are binary signals rather than analog ones.
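One way to fold those implicit and explicit signals into a single analog score might look like the sketch below; the weights are invented, and `comment_sentiment` is a stub standing in for whatever AI model would interpret the comment:

```python
def comment_sentiment(text: str) -> float:
    """Stub for an AI/sentiment model returning a value in [0, 1]."""
    return 0.5   # a real implementation would call a model here

def engagement_score(shown: bool, clicked: bool, liked: bool,
                     bookmarked: bool, comment: str | None = None) -> float:
    """Fold implicit and explicit signals into one analog score in [0, 1].
    The weights are invented for illustration, not tuned values."""
    if shown and not clicked:
        return 0.05                  # appeared on the feed but was ignored
    score = 0.2                      # a click alone signals mild interest
    if liked:
        score += 0.3
    if bookmarked:
        score += 0.4                 # treated as stronger intent than a like
    if comment is not None:
        score += 0.1 * comment_sentiment(comment)
    return min(score, 1.0)
```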
Maybe there’s a unique way of accepting input from a user: say, dragging your thumb across the screen in a circular motion, where the number of revolutions corresponds to just how much you like the post? It could produce haptic feedback for a satisfying experience, etc.
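Mapping that gesture to a score is trivial on the software side; a sketch, with an arbitrary cap on the number of revolutions:

```python
MAX_REVOLUTIONS = 5.0   # arbitrary cap on the gesture

def gesture_score(revolutions: float) -> float:
    """Map thumb revolutions to an analog score in [0, 1]."""
    return min(revolutions, MAX_REVOLUTIONS) / MAX_REVOLUTIONS
```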
It seems that if we can define an input that is realistic (one the user will actually use, and that is not too demanding), then the problem of content recommendation is effectively solved.