The best part about #nostrdb is that you never have to keep the DB in sync with the UI, there is no serializing data in or out. Queries return pointers to flatbuffer-style note data in the page cache that can be read immediately.
When building an immediate mode UI, nostrdb is fast enough to run thousands of queries per second. You can render at 144+ fps while doing queries in the UI on the main thread every frame, without any complicated async query code.
Each frame has realtime access to every profile and note you’ve ever seen. Lots of cool possibilities and features to explore with this new tech.
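The zero-copy idea above can be sketched in plain Rust: a query result is just an offset into an already-mapped buffer, so reading a note is a slice borrow, not a deserialization step. Everything here (`Db`, `NoteRef`) is illustrative, not the real nostrdb API:

```rust
/// Stand-in for the memory-mapped database region (in the real system
/// this is the OS page cache behind an mmap, not a Vec).
struct Db {
    pages: Vec<u8>,
}

/// A "note" is just an offset + length into the mapped region.
struct NoteRef {
    off: usize,
    len: usize,
}

impl Db {
    /// Returns a pointer-like view; no bytes are copied or parsed.
    fn note(&self, r: &NoteRef) -> &[u8] {
        &self.pages[r.off..r.off + r.len]
    }
}

fn main() {
    let db = Db { pages: b"hello, nostr".to_vec() };
    let r = NoteRef { off: 7, len: 5 };
    // The UI can read this slice immediately, every frame, with no copy.
    assert_eq!(db.note(&r), b"nostr");
}
```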
Does it cache custom objects per kind as well or do you need to keep the session cache separate?

Not sure what you mean by cache custom objects or session cache.
I can fix them
I believe you.
For instance, I keep a separate cache per logged-in user to make sure only that user has access to the decrypted information with that key. That cache tends to be so large that we keep clearing it from memory as the user scrolls.
I feel like if you're on the web you might as well just use the web rendering tech. Otherwise you lose media support (video, image cache), which blows hard.
nostrdb stores notes in an optimized binary format; for 1 million notes it's less than a gig. The operating system is responsible for LRU-purging page caches, so I don't have to worry about it. You could have hundreds of millions of notes mapped in 1TB of virtual memory and it would work fine.
I'm using the on-disk db as my session cache in that sense. Resident memory use is very minimal since data is only loaded, page by page, in the parts of the db I need, and all of that is managed by the OS.
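As a back-of-envelope check on the numbers in this thread (assuming the "1 million notes in under a gig" figure above), virtual address space is cheap while resident memory tracks only touched pages:

```rust
// Sanity-check the sizing claims: ~1 GiB per million notes implies roughly
// 1 KiB per note, and mapping hundreds of millions of notes costs virtual
// address space, not resident RAM.
fn main() {
    let notes: u64 = 1_000_000;
    let size_bytes: u64 = 1 << 30; // ~1 GiB upper bound from the post
    let avg = size_bytes / notes;
    assert!(avg < 1_100); // about 1 KiB per note on average
    println!("avg bytes/note: {avg}");

    // Hundreds of millions of notes at that density:
    let big: u64 = 500_000_000;
    let mapped = big * avg;
    println!("~{} GiB of virtual address space", mapped >> 30);
}
```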
I'm pretty sure you can add a compositing step in the pipeline and overlay them
yeah, perhaps when working with canvas APIs in JavaScript. At the level I'm working at I'm probably screwed (wasm + wgpu)
not if this is being fed into a higher level compositor
Would be really tricky, I suspect. The Rerun guys who made egui are using video in their app; I'm curious how they are doing it.
small aside:
this thread is great 😁
I learn a lot around here
yes, most likely they are using an FFI with some C/C++ compositor. That's a guess, but an educated one based on what I know about graphics rendering systems
it's the one case where I would use C code in my own work: to have a compositor, so I can run my own IM GUI code and put video overlays on it
Right now I only store encrypted notes in nostrdb, so I might need an in-memory cache for decrypted stuff. The in-memory format for compact binary notes is the same as on disk, so you get the memory savings there as well. I still use this for Damus iOS, and it made things a lot faster and reduced memory use.
This is actually explicitly supported in the Rust bindings: you can have an owned or a db-owned note, where the db-owned one is tied to the lifetime of a transaction. In either case it's just a pointer to memory or virtual memory.
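The owned vs. db-owned split can be modeled with ordinary Rust lifetimes. The names below are illustrative stand-ins, not the actual nostrdb-rs types:

```rust
// Sketch of the ownership pattern: a note is either owned bytes, or a
// borrow whose lifetime is tied to an open transaction.

struct Transaction {
    // The real library holds an LMDB read transaction; here the struct
    // just owns the mapped bytes so the borrow checker models the lifetime.
    mapped: Vec<u8>,
}

enum Note<'a> {
    Owned(Vec<u8>),     // standalone copy, outlives any transaction
    Borrowed(&'a [u8]), // valid only while the transaction is open
}

impl<'a> Note<'a> {
    fn bytes(&self) -> &[u8] {
        match self {
            Note::Owned(v) => v,
            Note::Borrowed(b) => b,
        }
    }
}

fn main() {
    let txn = Transaction { mapped: vec![1, 2, 3] };
    let db_note = Note::Borrowed(&txn.mapped); // zero-copy view
    let owned = Note::Owned(db_note.bytes().to_vec());
    assert_eq!(db_note.bytes(), owned.bytes());
    // Dropping `txn` while `db_note` is still in use would be rejected by
    // the borrow checker, which is exactly the safety the bindings encode.
}
```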
Sure, but the question was whether we can add more pre-processed attributes (key-value pairs) to the Event objects, so that when it loads from disk it's not just a raw event. Most of the processing Amethyst does is in the "getting an event ready for the UI" phase: zap amount sums, reply counts filtered by the user's hidden-words list, creating blurhash bitmaps, tables for zap comments per note, etc. That's where 95% of the app's CPU usage goes.
Ah yeah, that's the same with Damus, which is why I'm trying to put most of that upfront work into nostrdb. I feel a bit uncomfortable putting too much business logic into it, but the multithreaded ingester at least does content parsing, fulltext indexing, and stat counting (zap amounts soon, likes, etc.). Once it's all ready, the subscription is updated for the UI.
Content is parsed into “blocks”, which are also queryable and available for all notes. This was the slowest code in Damus.
I want to put in all the complicated stuff that would be common in all the clients I build.
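A hypothetical illustration of "content parsed into blocks": splitting note text once at ingest into typed spans, so the UI never re-parses per frame. The real nostrdb block format is different; this only shows the idea:

```rust
#[derive(Debug, PartialEq)]
enum Block<'a> {
    Text(&'a str),
    Url(&'a str),
}

/// Parse note content once into typed spans the UI can render directly.
/// (A real parser would preserve spacing, merge text runs, and handle
/// mentions, hashtags, and invoices; this is a toy.)
fn parse_blocks(content: &str) -> Vec<Block<'_>> {
    content
        .split_whitespace()
        .map(|word| {
            if word.starts_with("https://") || word.starts_with("http://") {
                Block::Url(word)
            } else {
                Block::Text(word)
            }
        })
        .collect()
}

fn main() {
    let blocks = parse_blocks("check https://damus.io out");
    assert_eq!(blocks.len(), 3);
    assert_eq!(blocks[1], Block::Url("https://damus.io"));
}
```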
I'm doing this with ffmpeg inside egui atm
sweet. yeah, I knew it was possible... I was working on GUI composition a few years back and got stuck mostly on pipeline scheduling, but I built a bunch of neat extra widgets that didn't exist in the toolkit. They still haven't been pushed upstream; I'd do it if I could get synced up with the current state of things, but they change the API so much that my old code needs huge amounts of changes to adapt.
it would be simple to add video composition to any competent composition pipeline as a post-step, without adding a lot of complexity to the codebase
Simple apps should be able to handle decryption while scrolling just fine, so I'm not sure anything more complicated than that is worth including in your lib. But more complicated UI components might trigger way too many pre-processing algorithms in parallel when scrolling, and then pre-caching is the only viable answer. Storing this cache is beneficial whenever the sum of all the pre-processing needed to create it on the fly takes longer than loading it from disk, which in many of our instances it already does.
Some time ago you mentioned that you were adding the .content parser to the db lib as well. That aligns with the cache needs here. Being able to pull already-parsed .content from the db can provide significant rendering-performance gains simply because the CPU is left free to do the other things the app needs.
I wouldn't hardcode any specific business logic. I would just have a map<key as string, value as object> in each event so that the dev can add/remove whatever they want to store together with the event. The db just stores this map in the same way it stores the event fields.
great exchange between 2 amazing nostr devs
Just ran the egui app on a Mac. Insanely fast. Very excited to see where you take it!
awesome, wasn't too hard to get up and running? Rust should make it easy
Desktop was very easy. Still having trouble with Android because I use nix on a Mac for everything; shell.nix wasn't working b/c some Android SDK file didn't exist on darwin-aarch64. Going to try flakebox, which we use in Fedimint for cross-compilation.
our metadata table achieves this now! You can add metadata of any type to nostrdb, tied to a note id. It currently powers our like/quote/reply counts.
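A minimal sketch of that per-note metadata shape: arbitrary key/value pairs keyed by note id, stored alongside the event. The real nostrdb metadata table lives in LMDB; the names and layout below are purely illustrative:

```rust
use std::collections::HashMap;

/// Nostr note ids are 32-byte hashes.
type NoteId = [u8; 32];

#[derive(Default)]
struct MetadataTable {
    // note id -> (key -> serialized value)
    map: HashMap<NoteId, HashMap<String, Vec<u8>>>,
}

impl MetadataTable {
    /// Attach an arbitrary serialized value to a note under a string key.
    fn set(&mut self, id: NoteId, key: &str, value: Vec<u8>) {
        self.map.entry(id).or_default().insert(key.to_string(), value);
    }

    /// Fetch a value previously stored for this note, if any.
    fn get(&self, id: &NoteId, key: &str) -> Option<&[u8]> {
        self.map.get(id)?.get(key).map(|v| v.as_slice())
    }
}

fn main() {
    let mut meta = MetadataTable::default();
    let id = [0u8; 32];
    meta.set(id, "reply_count", 29u32.to_le_bytes().to_vec());
    let raw = meta.get(&id, "reply_count").unwrap();
    assert_eq!(u32::from_le_bytes(raw.try_into().unwrap()), 29);
}
```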