I mean, if the DB is already in memory, then it's not the real startup time, right? It feels like we are comparing apples to oranges. Sure, if users have enough memory to play with all their apps and come back, it works well. But, at least on mobile, users LOVE to kill apps (swipe up) every time they leave them. And that's for a reason. The performance hit of having multiple apps running at the same time is very visible to users, even though most apps are not doing anything in the background. You might argue that if users kill apps all the time, there is more memory for nostrdb pages to stay live, but what happens if 3-5 apps have embedded nostrdb at the same time? Are they all going to stay? What happens if it's 10-20 apps using nostrdb? This is very similar to how Amethyst works and why Amethyst is fast: we just keep the memory live when the app goes to the background. When the users come back, all the tabs, all the feeds are already pre-loaded. The strategy works until everybody starts doing the same, and then no app stays live and everything is slow.

Replies (1)

> You might argue that if users kill apps all the time, there is more memory for nostrdb pages to stay live, but what happens if 3-5 apps have embedded nostrdb at the same time? Are they all going to stay? What happens if it's 10-20 apps using nostrdb?

From what I understand, the page cache maintains an LRU cache of pages (4096-byte slices of the db). Even if you have lots of nostrdbs, the OS can intelligently evict pages that haven't been touched in a while. This is the great thing about virtual memory mapping: the OS makes sure pages that are frequently accessed stay available, while pages that don't get touched often can be reclaimed. You don't need all of the data in a DB all the time; the OS will make sure there is enough memory for apps that need it based on access patterns. The OS should always use all available memory for the page cache. On my desktop, the ARC (a ZFS thing) is using 16 GB right now.
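The lazy-faulting behavior described above is easy to see for yourself. Here is a minimal Python sketch (not nostrdb's actual code, which is C on top of LMDB) showing that mapping a file costs nothing up front: only the pages you actually touch get faulted in, and `madvise` can mark them reclaimable again.

```python
import mmap
import os
import tempfile

PAGE = mmap.PAGESIZE  # typically 4096 bytes, matching the page size discussed above

# Create a sparse file far larger than anything we'll touch.
fd, path = tempfile.mkstemp()
os.truncate(fd, PAGE * 1024)  # ~4 MiB logical size, zero blocks on disk

with os.fdopen(fd, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # Reading one byte faults in only the page containing it,
    # not the whole file; unread pages consume no physical RAM.
    first_byte = mm[0]  # sparse regions read back as zeros
    # Hint that these pages can be evicted immediately; this is the
    # mechanism that lets the kernel reclaim "cold" db pages under
    # memory pressure. (Unix-only, Python 3.8+.)
    if hasattr(mm, "madvise") and hasattr(mmap, "MADV_DONTNEED"):
        mm.madvise(mmap.MADV_DONTNEED)
    mm.close()

os.remove(path)
print(first_byte)  # 0
```

The point of the sketch: ten apps each mapping a multi-gigabyte db do not need ten multi-gigabyte resident sets. Residency tracks access patterns, not file size, so the kernel arbitrates between them page by page.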