Thread

Zero-JS Hypermedia Browser

Relays: 5
Replies: 14
Generated: 16:06:33
relays scale well. we're dealing with small JSON blobs and read-heavy workloads, which lends itself extremely well to caching and horizontal scaling. the hard part is the clients. in a permissionless network, clients will be bad actors. for your client to work well it must use sane data access patterns. concretely:
- get the client's websocket connection mgmt in order
- get the client's data access patterns in order: batch queries, cache locally, etc (a sketch follows this post)
- harden your relays with rate limiting to stomp out bad clients
- implement good caching for your client's needs
nostr:nevent1qqs2ztln6vaff7jq34c7ys67vwp8qpj87rxncrqf64hv9nry65tykscpr9mhxue69uhk2umsv4kxsmewva5hy6twduhx7un89ujnp7pf
2025-11-16 18:19:44 · from 1 relay(s) · 4 replies
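A minimal Go sketch of the "batch queries, cache locally" pattern. Event, Filter, and queryRelay are hypothetical stand-ins for one websocket REQ/EOSE round trip, not a real Nostr library API:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Event and Filter approximate Nostr event/filter shapes (placeholders).
type Event struct {
	ID, Pubkey, Content string
}

type Filter struct {
	Authors []string
	Limit   int
}

type Client struct {
	mu      sync.Mutex
	cache   map[string]Event // events already fetched, keyed by ID
	pending []string         // authors queued for the next batched query
}

// RequestAuthor queues a lookup and flushes after a short window, so
// fifty profile fetches become one query with fifty authors instead of
// fifty separate subscriptions.
func (c *Client) RequestAuthor(pubkey string) {
	c.mu.Lock()
	c.pending = append(c.pending, pubkey)
	first := len(c.pending) == 1
	c.mu.Unlock()
	if first {
		time.AfterFunc(50*time.Millisecond, c.flush)
	}
}

func (c *Client) flush() {
	c.mu.Lock()
	authors := c.pending
	c.pending = nil
	c.mu.Unlock()
	for _, ev := range queryRelay(Filter{Authors: authors, Limit: 500}) {
		c.mu.Lock()
		c.cache[ev.ID] = ev // repeat views served locally, not from the relay
		c.mu.Unlock()
	}
}

// queryRelay stands in for a single relay round trip.
func queryRelay(f Filter) []Event { return nil }

func main() {
	c := &Client{cache: make(map[string]Event)}
	for _, pk := range []string{"alice", "bob", "carol"} {
		c.RequestAuthor(pk) // three lookups, one round trip
	}
	time.Sleep(100 * time.Millisecond)
	fmt.Println("cached events:", len(c.cache))
}
```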

Replies (14)

you cannot trust the clients in a decentralized network. that is why the relay must use common techniques to prevent abuse. BUT the client must also implement sane data access and connection patterns to work at all. in this particular case I will guarantee you the vibe-coded client is doing terrible things with websocket connections and queries. fixing that is step one to making the client work.
2025-11-16 18:28:30 · from 1 relay(s) · 1 reply
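For "common techniques to prevent abuse", a per-connection token bucket is the usual starting point. A minimal Go sketch, with illustrative rates:

```go
package main

import (
	"fmt"
	"time"
)

// bucket is a hand-rolled token bucket: capacity bounds the burst,
// refill sets the sustained queries-per-second a connection may send.
type bucket struct {
	tokens   float64
	max      float64
	refill   float64 // tokens added per second
	lastSeen time.Time
}

func (b *bucket) allow() bool {
	now := time.Now()
	b.tokens += now.Sub(b.lastSeen).Seconds() * b.refill
	if b.tokens > b.max {
		b.tokens = b.max
	}
	b.lastSeen = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	// burst of 5 queries, refilling at 2/s: a scraping client gets
	// throttled quickly while a sane batched client never notices.
	b := &bucket{tokens: 5, max: 5, refill: 2, lastSeen: time.Now()}
	for i := 0; i < 8; i++ {
		fmt.Println("query", i, "allowed:", b.allow())
	}
}
```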
Another issue with using websockets rather than plain HTTP in this case is that it defers abuse protection entirely to the application servers (or custom proxies) while hogging whole TCP streams in load balancers. With existing HTTP infra, load balancing and resource-abuse protection can happen in multiple tiers, sparing the application software (and, more specifically, relay developers). That said, I'd argue almost all web abuse protection is handled well before the client request even makes it to the application server. And it's done very quickly.
2025-11-16 18:33:21 · from 1 relay(s) · 1 reply
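One way to read "handled way before the client request even makes it to the application server": a sketch of a per-IP connection cap applied at the HTTP tier, before any websocket upgrade reaches the relay. Names and limits are invented for illustration:

```go
package main

import (
	"net"
	"net/http"
	"sync"
)

// perIPLimiter caps concurrent connections per client IP so excess
// connections are refused before the relay ever sees them.
type perIPLimiter struct {
	mu    sync.Mutex
	conns map[string]int
	max   int
}

func (l *perIPLimiter) wrap(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ip, _, _ := net.SplitHostPort(r.RemoteAddr)
		l.mu.Lock()
		over := l.conns[ip] >= l.max
		if !over {
			l.conns[ip]++
		}
		l.mu.Unlock()
		if over {
			http.Error(w, "too many connections", http.StatusTooManyRequests)
			return
		}
		// the handler runs for the connection's lifetime on an upgraded
		// websocket, so the count is released when it closes.
		defer func() { l.mu.Lock(); l.conns[ip]--; l.mu.Unlock() }()
		next.ServeHTTP(w, r)
	})
}

func main() {
	lim := &perIPLimiter{conns: make(map[string]int), max: 4}
	http.ListenAndServe(":8080", lim.wrap(http.NotFoundHandler()))
}
```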
totally agree. websockets are annoying. but tcp connections are cheap, and as you said you can quickly handle bad actors once the channel is open. to solve this specific problem i would do something like (sketched below):
- one relay for reads, with heavy caching that reflects the data access patterns in the vine client
- one relay for search
- a single api gateway tier that fans out to the proper relay
- scale every layer independently
- write a sane client
2025-11-16 18:37:42 · from 1 relay(s) · 1 reply
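A sketch of that fanout, assuming two hypothetical upstream relay addresses: a single gateway tier that routes reads and searches to tiers that scale independently:

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
)

// proxyTo builds a reverse proxy to one upstream (addresses below are
// placeholders, not real hosts).
func proxyTo(raw string) http.Handler {
	u, _ := url.Parse(raw)
	return httputil.NewSingleHostReverseProxy(u)
}

func main() {
	mux := http.NewServeMux()
	// read tier: heavy caching tuned to the client's access patterns
	mux.Handle("/read/", proxyTo("http://read-relay.internal:7447"))
	// search tier: index-backed, scaled on its own
	mux.Handle("/search/", proxyTo("http://search-relay.internal:7447"))
	http.ListenAndServe(":8080", mux)
}
```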
> tcp connections are cheap

I would disagree, but we could be thinking at different scales. That's at least two extra OS handles and at least 16k (usually 64k) of kernel-space buffers (x2 for rx and tx), plus often userspace (usually shared) buffers, per connection, per LB. L7 LBs generally multiplex tcp connections and dramatically cut down on memory consumption, though that can be a bit aggressive.

For every websocket that's opened in my http server, tuned to 16k buffers, that's about 256k after the upgrade is established (because websockets are usually fully buffered), and the http buffers are only freed in userspace. For 5000 connections that's over 1.2G of committed system memory, minimum, just hanging about. I would expect software like nginx to be more optimized in comparison, but it still hogs up LBs, and I've heard other sysadmins share their stories of websocket overhead. L7 LBs multiplex these http/1.1 connections, generally cutting down to double-digit upstream connections to service 5-6 digit ingress traffic.

I agree with the basic architecture, that is, a highly functional cache with a "popularity decay" model. I can't remember the scientific name.
2025-11-16 18:55:46 · from 1 relay(s) · 2 replies
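A back-of-envelope check of those numbers, assuming the poster's 16k buffer tuning (real kernels and servers vary):

```go
package main

import "fmt"

func main() {
	const (
		kernelBuf  = 16 * 1024 // per direction, per socket
		userBuf    = 16 * 1024 // per direction after the upgrade
		directions = 2         // rx and tx
	)
	// ~64 KiB is the floor; the post cites ~256 KiB per connection once
	// fully-buffered websocket framing is counted.
	perConn := directions * (kernelBuf + userBuf)
	fmt.Printf("floor per conn: %d KiB\n", perConn/1024)
	fmt.Printf("5000 conns at 256 KiB each: %.1f GiB\n",
		5000*256.0/1024/1024) // ~1.2 GiB, matching the post
}
```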
Yes, but that's not a great model. LRU breaks down in pure form here: a single user pulling an old file forces fresher content out. TTLs have to be attached to the content and respected by each tier. There is an official name for this model, I just don't remember it. As files age their TTL decreases, which avoids polluting caches.
2025-11-16 23:51:16 · from 1 relay(s) · 1 reply
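A minimal sketch of age-dependent TTLs, with invented breakpoints, showing how an old one-off fetch expires quickly instead of evicting fresh content:

```go
package main

import (
	"fmt"
	"time"
)

// ttlFor assigns shorter cache lifetimes to older events; each cache
// tier honors the stamped TTL. Breakpoints are illustrative only.
func ttlFor(createdAt time.Time) time.Duration {
	age := time.Since(createdAt)
	switch {
	case age < time.Hour:
		return 10 * time.Minute
	case age < 24*time.Hour:
		return time.Minute
	default:
		return 5 * time.Second // barely cached: one read won't pollute the tier
	}
}

func main() {
	fmt.Println(ttlFor(time.Now().Add(-30 * time.Minute))) // fresh: 10m
	fmt.Println(ttlFor(time.Now().Add(-48 * time.Hour)))   // stale: 5s
}
```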
Some analytics from Nostr.land show that many old events are rarely requested. I am considering a dynamic tiering strategy where the age, access frequency, and "position" of the event (relative to similar ones) are used to send it off to archival or (more likely) to zstd-compress it. I do not cache indexes due to the high complexity and low benefit.
2025-11-17 21:57:00 · from 1 relay(s)
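A hypothetical sketch of that tiering decision; the thresholds are invented, and the zstd step is represented only as a tier label:

```go
package main

import (
	"fmt"
	"time"
)

type tier int

const (
	hot        tier = iota // uncompressed, served from cache
	compressed             // stored zstd-compressed, decompressed on read
	archive                // shipped to cold storage
)

// classify picks a storage tier from age and observed access frequency.
func classify(age time.Duration, hitsPerDay float64) tier {
	switch {
	case hitsPerDay > 1 || age < 7*24*time.Hour:
		return hot
	case hitsPerDay > 0.01:
		return compressed
	default:
		return archive
	}
}

func main() {
	names := []string{"hot", "compressed", "archive"}
	fmt.Println(names[classify(2*time.Hour, 50)])            // hot
	fmt.Println(names[classify(90*24*time.Hour, 0.1)])       // compressed
	fmt.Println(names[classify(2*8760*time.Hour, 0.0001)])   // archive
}
```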
Yeah, but there is still a more specific model I'm trying to remember the acronym for. It literally describes exactly how to label content with TTLs based on the frequency of the exact content being hosted. Something one of the big tech companies published. Basically, given that the site is hosting images to be shared on social media, it's known that the images will see their highest request frequency immediately after being shared, then decay at a known rate. It can even get as accurate as saying: given an image of a puppy, you can apply TTLs that model how the image of the puppy should be cached. Substitute "puppy" here for some known content class, like: my users usually upload X content. In the case of diVine, users will only be publishing short looping videos designed for user attention, which might frequently feature images of cute animals...
2025-11-17 23:17:50 · from 1 relay(s)
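A worked sketch of the decay idea, with a made-up half-life: the TTL tracks an expected request rate that peaks at publish time and decays exponentially at a known per-content-class rate:

```go
package main

import (
	"fmt"
	"math"
	"time"
)

// decayTTL scales a base TTL by 2^(-age/halfLife), so a clip's cache
// lifetime halves every half-life after it is shared.
func decayTTL(base, age, halfLife time.Duration) time.Duration {
	factor := math.Exp2(-age.Hours() / halfLife.Hours())
	return time.Duration(float64(base) * factor)
}

func main() {
	base := time.Hour         // TTL at the moment of sharing
	halfLife := 6 * time.Hour // invented "known decay rate" for short videos
	for _, age := range []time.Duration{0, 6 * time.Hour, 24 * time.Hour} {
		fmt.Println(age, "->", decayTTL(base, age, halfLife).Round(time.Minute))
	} // prints 1h0m, 30m, ~4m
}
```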
testing zaps for this note… we made six attempts to ⚡zap this note, at ben@northwest.io, over a period of 3 minutes. all six attempts were successful. please check on your end to be sure you received them. average zap time was 14253ms (14.3 seconds). we consider this zap time slow... generally, zaps should complete in under two seconds. (other nostr users might think your zaps are broken and might not zap you again.) if you wanted to fix this, you could try getting a free Rizful lightning address -- https://rizful.com ... if you get it set up, please reply here so we can do this ⚡zap test again.
2025-11-18 17:13:30 · from 1 relay(s)