Thread


Relays: 5
Replies: 26

Replies (26)

Gossip for my follow list usually connects to about 56 relays at peak. For RSS we had feed aggregators. They don't have to be in the client, they can be a middle box. I like gossip on desktop to continue to do the feed aggregation, but on a phone I might have looked at a different architecture... which I presume is what you were doing before this "full outbox" version?
2025-06-05 23:21:03 from 1 relay(s)
A couple of suggestions: add a slider so that folks can select the max number of outbox relays per contact, a la Nosostros. Set the default to 2 or 3. Also, you are probably already doing this, but aggressively time out and remove unresponsive relays (with some sort of exponential backoff strategy where misbehaving relays have to wait longer and longer). This is how I'm optimising import/inbox subscriptions on Haven, where users often add 300 relays (1/3 or more of which may be unresponsive or dead).
2025-06-05 23:27:46 from 1 relay(s)
Another slightly more involved heuristic. Assuming that, for a given npub, all outbox relays contain all of that user's notes (a strong and likely incorrect assumption, but useful for optimization), you can write a reasonable "greedy" relay selection algo as follows: create a Map<Relay, List<Pubkey>> and prioritize (working) relays that are common to the most "uncovered" users (this can get more complex if you want to compute a minimum set, but a greedy heuristic is good enough). This works well for general things like the following timeline, as it minimizes the number of relays you need to connect to. For less general tasks, such as when someone clicks to view a specific profile, you can open a separate pool containing only that user's outbox relays to catch any notes that might not have made it to the "general" pool. "Borrow" existing open connections from the general pool so that you don't keep reconnecting to the same relays. At least with go-nostr this is working quite well. I'm not sure if things are much different with Kotlin libraries and mobile device/Android resource limitations.
2025-06-05 23:50:27 from 1 relay(s)
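The greedy selection described above is essentially greedy set cover: repeatedly pick the relay that covers the most not-yet-covered pubkeys. A minimal Go sketch under the same "every outbox relay has all the user's notes" assumption; the function name and sample relay URLs are illustrative:

```go
package main

import "fmt"

// selectRelays picks a small set of relays covering all pubkeys,
// using a greedy heuristic (not a guaranteed minimum set cover).
// relayUsers maps a relay URL to the pubkeys listing it as an
// outbox relay, i.e. the Map<Relay, List<Pubkey>> from the post.
func selectRelays(relayUsers map[string][]string) []string {
	// Start with every followed pubkey uncovered.
	uncovered := map[string]bool{}
	for _, users := range relayUsers {
		for _, u := range users {
			uncovered[u] = true
		}
	}

	var picked []string
	for len(uncovered) > 0 {
		// Pick the relay covering the most still-uncovered pubkeys.
		best, bestGain := "", 0
		for relay, users := range relayUsers {
			gain := 0
			for _, u := range users {
				if uncovered[u] {
					gain++
				}
			}
			if gain > bestGain {
				best, bestGain = relay, gain
			}
		}
		if bestGain == 0 {
			break // remaining pubkeys have no known working relay
		}
		picked = append(picked, best)
		for _, u := range relayUsers[best] {
			delete(uncovered, u)
		}
	}
	return picked
}

func main() {
	relays := map[string][]string{
		"wss://relay.a": {"alice", "bob", "carol"},
		"wss://relay.b": {"bob"},
		"wss://relay.c": {"dave"},
	}
	fmt.Println(selectRelays(relays)) // [wss://relay.a wss://relay.c]
}
```

In practice you would filter `relayUsers` down to relays that passed the timeout/backoff check first, and re-run the selection when relay lists or relay health change.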
A very large number of relays are already replicating data across many other relays. It would be useful if relays could tell you this, so you could eliminate, based on your follows' relay lists, the ones a note wouldn't need to reach. It's a general problem for nostr, and one that nobody has bothered to fix. It also raises the problem that if people have auth-required inboxes, how can you auth to them through another relay? You literally have to do the fan-out from your client, like you are just now noticing. Welcome to the club of the guys that noticed stuff.
2025-06-06 00:06:03 from 1 relay(s)
You didn't think about the fact that it's going to be publishing them to users' inboxes as well? That could literally be dozens of relays or more for many users who don't understand the implications of it. Anyway, funny. I'm sure you are never going to get it that Amethyst is a bandwidth pig; it has blown through my mobile data several times in the last year.
2025-06-06 00:07:53 from 1 relay(s)
I did think about it. That's already implemented in the new version. Amethyst is bound to consume bandwidth just from the sheer number of features that must be loaded at the same time. That's why I keep saying that micro apps should be able to beat us. But they never do. So...
2025-06-06 00:12:35 from 1 relay(s)
Outbox is getting out of hand. Communities will constitute a good middle ground, or even a substitute in many cases. nostr:nevent1qqs297nzg6m68frxw6ymjl9lwl8zp3h6ldnvrxa8ss83k73mw5lqtjgpzamhxue69uhhyetvv9ujuurjd9kkzmpwdejhgtczyprqcf0xst760qet2tglytfay2e3wmvh9asdehpjztkceyh0s5r9cqcyqqqqqqghrxqa5
2025-06-06 03:05:56 from 1 relay(s)