As I see it, the issue is about UX. Subscribing to authors, or blocking their events, is simple enough, but it only sort of assigns trust or distrust. The problem is how to impute trust without burdening the user with explicit input on every single follow just to attach at least one bit of trust/distrust value.

I think the solution is to analyse the user's interactions and assign each follow a score based on which of them the user responds to the most. That gives a much better proxy for trust without adding any complexity to the UI.

It also has analytical uses: you can define a filter that selects events from some specified set of users, modulates that by the user's interaction levels with them, and sets a threshold or a probability of including a given event in the feed. That group can itself be derived from the user's own follow and interaction history. If there is one algorithm I would like to have on my feed, that would be it.
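A minimal sketch of that interaction-weighted scoring, assuming interactions are simply a list of author ids the user has replied to (the function names, the normalization against the most-interacted-with follow, and the probabilistic inclusion rule are all my own illustration, not a spec):

```python
import random
from collections import Counter

def trust_scores(follows, interactions):
    """Derive per-author trust weights from interaction counts.

    follows: set of author ids the user follows
    interactions: iterable of author ids, one entry per reply/reaction
                  the user has made
    Returns a dict mapping each followed author to a weight in [0, 1],
    normalized so the most-interacted-with follow scores 1.0.
    """
    counts = Counter(a for a in interactions if a in follows)
    if not counts:
        return {a: 0.0 for a in follows}
    top = max(counts.values())
    return {a: counts.get(a, 0) / top for a in follows}

def include_in_feed(event_author, scores, threshold=0.0, rng=random.random):
    """Probabilistically admit an event: higher trust -> more likely.

    threshold cuts off low-trust authors entirely; below that, the
    trust weight is used directly as an inclusion probability.
    """
    p = scores.get(event_author, 0.0)
    return p > threshold and rng() < p
```

For example, a user who replied twice to `a` and once to `b` gets `a` weighted at 1.0 and `b` at 0.5, while a follow they never engage with scores 0.0 and is filtered out.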

Replies (1)

Also, people can be perverse and follow and interact with someone purely to troll them. That is another layer of building such an algorithm, and a much harder one to evaluate: it would probably take an LLM classifier to judge the positive/negative nature of these interactions, adding a third dimension to the algorithm.