This is an interesting conversation to watch from the sidelines. We see bitcoin services that only prioritize user experience being attacked by stupid regulations, like the Travel Rule, or kicked out of specific countries (WoS...). Primal's architecture is prone to similar attacks, more so compared to a "smart client, dumb server" architecture like what Amethyst does. When the service becomes too big (like Coinbase), it will be kneeled on by all governments around the planet. If Primal gets big, the same will happen. What will Primal do then?
miljan
Primal clients read from the caching service and write directly to the user’s relays. We chose a set of tradeoffs based on what we are trying to accomplish: best UX possible. We’ve been very transparent about it from Day 1. See my blog post from March 13, 2023 - the day we launched Primal.

I still think that caching services are not only great for UX, but also a legitimate way to help scale Nostr once we hit millions of users. They could even improve censorship resistance, since anyone can stand them up and create more copies of Nostr events.

Having said all that, the Primal stack is evolving and becoming more capable on the client as well. One can imagine peer-to-peer transfers between clients that have client-side databases, like Primal for Android. I think Nostr will have it all: relays, indexers, caching services, client p2p transfers. It will be very hard to stop.

Claiming that there is only one way to properly build Nostr clients and that everyone must choose exactly the same set of tradeoffs is silly. For example, gossip/outbox purists might take issue with how Damus works.

Everything we build at Primal is open sourced under the most permissive MIT license. I believe we offer the only open source indexer for Nostr (someone please correct me if I’m wrong). Anyone can stand up and run their own caching instance. Other projects have done so in the past. Primal users hold their keys and can move to another client at any time if they don’t like how our product evolves.

On a personal note Will, you constantly fud Primal. You tried to cancel us before, joining semisol’s cEnSoRsHiP nonsense campaign. Your latest initiative - trying to impose rules on what can be called a Nostr client - is also an attempt at cancelling. I don’t know what to make of it because you are always very friendly in person. We spent a considerable amount of time together, and you never raised these issues with me face-to-face. Why not?
On the contrary, you always seem to have kind words for Primal when we talk. I’ve never said a bad word about Damus or any other project. I want to be on good terms with all Nostr builders, but you are making it hard with posts like this.

Replies (4)

I'm not a fan of Primal's lightning wallet KYC, and wouldn't recommend it to people, but in terms of how they're handling performance as a Nostr client, I might be doing the same thing if I'm understanding them right. On my end, everything so far is pretty much on Nostr; however, there will be a normal backend/database server running alongside it, not caching the Nostr event data itself but saving event addresses, for the purpose of quickly identifying and loading people's posts and increasing reliability overall. This would run in parallel with what's currently there. This supporting traditional server is not needed for the client to work, it only enhances it, and it'd be lower in cost than if it were holding all the data. I think this approach is the best way to go about it: it would provide the best possible experience while still functioning even if its developers aren't around anymore. (I'm building it with this in mind as well: even if anyone tries to take the project down, be it a person or an entity of power, even if development stops right now, or even if the team developing it, myself included, is gone from the face of this earth, everything would still be functional. As a result, and while not desired or pursued, it's being built with the chance of me not receiving anything in return, only having the satisfaction of seeing it still being used and being unstoppable.)
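The "saving event addresses, not the event data itself" idea above could be sketched roughly like this. All names here (EventPointer, EventAddressIndex) are illustrative, not from any real codebase: the index stores only event IDs plus relay hints keyed by author, and the client then fetches the actual events from those relays.

```typescript
// Sketch: an index that stores pointers to Nostr events (id + relay hints)
// rather than the events themselves, so the backend stays cheap and is
// never the authoritative copy of any content.

type EventPointer = {
  id: string;        // hex event id
  relays: string[];  // relay URLs where the event was seen
};

class EventAddressIndex {
  private byAuthor = new Map<string, EventPointer[]>();

  // Record where an author's event can be found, without storing its content.
  add(pubkey: string, pointer: EventPointer): void {
    const list = this.byAuthor.get(pubkey) ?? [];
    list.push(pointer);
    this.byAuthor.set(pubkey, list);
  }

  // Return pointers for an author; the client then fetches the full
  // events directly from the listed relays.
  lookup(pubkey: string): EventPointer[] {
    return this.byAuthor.get(pubkey) ?? [];
  }
}

// Usage: index one event pointer and look it up.
const index = new EventAddressIndex();
index.add("npub1alice", {
  id: "e1f2a3b4",
  relays: ["wss://relay.damus.io", "wss://nos.lol"],
});
console.log(index.lookup("npub1alice").length); // 1
```

If the index server disappears, the client falls back to querying relays directly, which matches the "enhances it, not needed for it to work" property described in the reply.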
I think having a caching layer is OK for performance, but that layer becomes an attack vector as soon as your application gets bigger or the layer gets smarter. A slightly different approach is to use independent caching services or providers: ones you have no direct relationship with, but that your app/architecture simply uses. Even better if users can actually select them somehow. This is of course much harder to implement, but on the other hand it's presumably more censorship resistant. To be clear, I appreciate Primal and I'm using Primal right now on desktop (I have not found a better client on desktop), but I'm afraid the architecture and incentives may push Primal in the wrong direction in the future.
That's why it's not reliant on it. If it gets attacked, then that's that; the site is still running, just not in pristine condition. That's about it. And yeah, that can work too. In the app's settings page there'd be:

Caching Server [button: add]
- Official servers: caching.degmods.com [button: unset]
- Recommended servers: caching.example1.com [button: set], caching.example2.com [button: set], caching.example3.com [button: set]
- Custom/user servers: input-field [button: add], caching.exampleCustom.com [button: set]

This could be a NIP too, now that I think about it.
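The settings page described above could be modeled like this. The server URLs and tiers (official/recommended/custom) come from the post; the fallback logic (user's pick, else the official default) and all class/method names are assumptions for illustration only:

```typescript
// Sketch: user-selectable caching server setting, matching the
// official / recommended / custom tiers described in the reply.

type ServerTier = "official" | "recommended" | "custom";

interface CachingServerEntry {
  url: string;
  tier: ServerTier;
}

class CachingServerSettings {
  private entries: CachingServerEntry[] = [
    { url: "caching.degmods.com", tier: "official" },
    { url: "caching.example1.com", tier: "recommended" },
  ];
  private selected: string | null = null;

  // "Custom/user servers: input-field [button: add]"
  addCustom(url: string): void {
    this.entries.push({ url, tier: "custom" });
  }

  // "[button: set]" - pick one of the known servers.
  set(url: string): void {
    if (!this.entries.some(e => e.url === url)) {
      throw new Error(`unknown caching server: ${url}`);
    }
    this.selected = url;
  }

  // "[button: unset]" - revert to the default.
  unset(): void {
    this.selected = null;
  }

  // Active server: the user's pick, else the first official entry.
  active(): string {
    return this.selected ?? this.entries.find(e => e.tier === "official")!.url;
  }
}

// Usage: default is the official server until the user sets another.
const cfg = new CachingServerSettings();
console.log(cfg.active()); // caching.degmods.com
cfg.addCustom("caching.exampleCustom.com");
cfg.set("caching.exampleCustom.com");
console.log(cfg.active()); // caching.exampleCustom.com
```

Standardizing this shape of list (as the reply suggests, via a NIP) would let different clients share the same user-chosen caching servers.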