You are not avoiding it. You are building an entire DB to mimic garbage collection because you don't have garbage collection. I feel like I am the one avoiding everything you went through to build it, and I am happy you are doing it, I really am. All I am asking in this discussion is for performance indicators that allow us to compare the complete performance with other stacks. If I compare my cache with your DB already in memory, the performance is exactly the same. Mine might be slower because it is fully thread-safe for thousands of reads and writes in the same millisecond, but that's it. Basically it's the difference between B+ trees and hash tables. So, if I follow 500 people and use 5 Nostr apps, do you truly think duplicating 500 profiles in memory in each of those apps is an "efficient" use of memory?
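To make the duplication point concrete, here is a back-of-the-envelope sketch. The per-profile size is a made-up assumption for illustration; the 500 profiles and 5 apps come from the example above.

```python
# Back-of-the-envelope memory arithmetic -- the per-profile size is an
# assumed placeholder, not a measured value.
profiles = 500          # profiles followed (from the example above)
apps = 5                # independent Nostr apps on the same device
profile_size_kb = 2     # ASSUMED average cached profile size, in KB

per_app_kb = profiles * profile_size_kb     # one app's cache footprint
duplicated_kb = per_app_kb * apps           # every app keeps its own copy
shared_kb = per_app_kb                      # one shared store holds it once

print(duplicated_kb)    # 5000 KB across all apps
print(shared_kb)        # 1000 KB if the data lives in one place
```

Even with these toy numbers, the per-process-cache approach costs `apps` times the memory of a single shared store; the real gap depends on actual profile sizes and how much of each cache overlaps.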

Replies (1)

In your example it is not, but in the notedeck use case (a Nostr browser), where potentially hundreds of apps run in a single process, it is very efficient, and faster than each process keeping its own in-memory cache of duplicate data pulled from a local relay.