If anyone wants to help out and contribute to #blossom then stress testing this would be great.
Blossom needs a way for clients to quickly check whether 1000s of blobs are present on a server, and it's a bonus if it can be done without the server knowing which blobs are being checked (that reduces the server's ability to lie).
If no one else is able to work on this I'll probably take a stab at it, but it's going to take me a while since my time is limited, and it would be much better to have some help.
nostr:naddr1qvzqqqr4gupzpdlddzcx9hntfgfw28749pwpu8sw6rj39rx6jw43rdq4pd276vhuqys8wumn8ghj7mn0wd68ytn9d9h82mny0fmkzmn6d9njuumsv93k2tcppemhxue69uhkummn9ekx7mp0qqgrjde5x4nrwve3vcmnzde4vc6rgq98cy5
Replies (9)
I am wholly addicted to using #blossom servers at this point to sync all my blobbity blob blobs so when I get a free moment I will. I have bookmarked this post. Fwiw, I've been using the old khatru libs for blossom, and I really need it to stand up for long term maintenance without headache so yeah this is important. 🙈👍
That second half seems like a lot of work for minimal gain. Beyond the hypotheticals outlined in the post, and setting aside that someone who is already a privacy advocate would likely be taking measures to obfuscate their own traffic anyway, who is actually going to try to monetize such data? And the data simply being “X tried looking up Y”? Am I missing something more profound?
nostr:npub172y2yf9xrdekr25acsdfp2ag5t0lg4zdkz7rseegucuty8dp0ykq2ug6ef
The privacy gain is a nice side effect of using probabilistic filters, but not the main benefit as far as I'm concerned. The benefit of using them is letting clients quickly check whether a server says it's hosting their blobs.
The existing HEAD /<sha256> endpoint is a nice way to check the status of a single hash, but when the client needs to check >100 hashes it gets really spammy. Probabilistic filters could allow clients to check the status of an almost unlimited number of hashes with a single request.
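For context, this is roughly what the per-hash check looks like today. A minimal Go sketch; the server URL and hash list are just placeholders, only the HEAD /<sha256> endpoint itself is from the spec:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	server := "https://blossom.example.com" // hypothetical server URL
	hashes := []string{
		"b1674191a88ec5cdd733e4240a81803105dc412d6c6708d53ab94fc248f4f553",
		// ...imagine thousands more entries here
	}

	client := &http.Client{}
	for _, h := range hashes {
		// One round trip per blob; with >100 hashes this gets spammy fast.
		resp, err := client.Head(server + "/" + h)
		if err != nil {
			fmt.Println(h, "error:", err)
			continue
		}
		resp.Body.Close()
		fmt.Println(h, "present:", resp.StatusCode == http.StatusOK)
	}
}
```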
I think someone is on this
nostr:nevent1qqsd2quqnasqadlxvrtf9xz0h3e6jcggmnhe3cvqc6xychd8rp0t0sszyrzrdrz39ecwxe2clgt8je7dw07g829fql4r3vlddq6clj7l4vx6vqcyqqqqqqgsc30qv
Cc nostr:nprofile1qqst0mtgkp3du662ztj3l4fgts0purksu5fgek5n4vgmg9gt2hkn9lqppemhxue69uhkummn9ekx7mp0qys8wumn8ghj7mn0wd68ytn9d9h82mny0fmkzmn6d9njuumsv93k2tct43vxq
Love this problem. Sounds like bulk membership + PIR: maybe a Bloom-style index plus batched Merkle proofs so the server cannot see which keys you probe. Got a proto design doc yet?
Nothing yet. It's not my area, so I've still got a lot to learn about these kinds of things.
Crypto PIR is a deep rabbit hole. You can start simpler: the server publishes a periodic Bloom filter of all blob IDs, and clients download it once and test thousands of hashes locally. At Masters of The Lair we use that pattern too.
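A rough sketch of that pattern in Go; the library choice (bits-and-blooms/bloom) and all the sizes and names here are my own assumptions, not anything from the Blossom spec:

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/bits-and-blooms/bloom/v3"
)

func main() {
	// Server side: build a filter over every blob hash it currently stores.
	// ~100k blobs at a 1% false-positive rate costs roughly 120 KB.
	filter := bloom.NewWithEstimates(100_000, 0.01)
	stored := []string{
		"b1674191a88ec5cdd733e4240a81803105dc412d6c6708d53ab94fc248f4f553",
		// ...every other blob sha256 the server has
	}
	for _, h := range stored {
		filter.Add([]byte(h))
	}

	// Serialize; in practice this would be served at some endpoint and
	// regenerated periodically so it tracks the server's current contents.
	var buf bytes.Buffer
	if _, err := filter.WriteTo(&buf); err != nil {
		panic(err)
	}

	// Client side: download once, then test thousands of hashes locally.
	downloaded := bloom.New(1, 1) // sizes get replaced by ReadFrom
	if _, err := downloaded.ReadFrom(&buf); err != nil {
		panic(err)
	}
	h := "b1674191a88ec5cdd733e4240a81803105dc412d6c6708d53ab94fc248f4f553"
	// "maybe present" (false positives possible) or "definitely absent"
	fmt.Println("maybe present:", downloaded.Test([]byte(h)))
}
```

The trade-off is false positives: the filter can claim a blob is present when the server doesn't actually have it, so clients would still confirm with a real request (or a HEAD) before relying on the answer.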