Considering defaulting to prepared statements on the database side for the relay. I spent a lot of time trying to dynamically construct queries client-side and will probably keep that as a fallback for compatibility, but it's really just the same 10 queries over and over again.
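For the curious, this is roughly what I mean. A minimal sketch assuming an asyncpg/Postgres backend; the table, columns, and DSN here are illustrative, not the actual nostpy schema:

import asyncio
import asyncpg

# One of the "same 10" filter queries, parameterized instead of string-built.
EVENTS_BY_AUTHOR = """
    SELECT id, pubkey, kind, created_at, content
    FROM events
    WHERE pubkey = $1
    ORDER BY created_at DESC
    LIMIT $2
"""

async def main():
    conn = await asyncpg.connect("postgresql://relay:relay@localhost/nostr")
    # Prepare once; the server caches the plan, and every request that maps
    # to this shape just binds new parameters instead of re-parsing SQL.
    stmt = await conn.prepare(EVENTS_BY_AUTHOR)
    rows = await stmt.fetch("<hex pubkey>", 50)
    for row in rows:
        print(row["id"], row["kind"])
    await conn.close()

asyncio.run(main())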
Release time!
Nostpy relay now supports Web of Trust and Tor.
Join the relay below:
wss://relay.nostpy.lol
ws://pemgkkqjqjde7y2emc2hpxocexugbixp42o4zymznil6zfegx5nfp4id.onion
Or host a relay yourself!
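If you'd rather poke at it from code first, here's a quick way to pull a few recent notes over the clearnet endpoint (assumes the websockets package; the filter is just an example):

import asyncio
import json
import websockets

async def main():
    # Subscribe to the 5 most recent text notes (kind 1), then close the
    # subscription once the relay signals end-of-stored-events (EOSE).
    async with websockets.connect("wss://relay.nostpy.lol") as ws:
        await ws.send(json.dumps(["REQ", "test-sub", {"kinds": [1], "limit": 5}]))
        while True:
            msg = json.loads(await ws.recv())
            if msg[0] == "EVENT":
                print(msg[2]["content"])
            elif msg[0] == "EOSE":
                await ws.send(json.dumps(["CLOSE", "test-sub"]))
                break

asyncio.run(main())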
Nothing like wrestling with a good old rebase and merge conflicts to remind yourself that you don't actually know how to use git as well as you think you do.
On a totally related note, new nostpy release dropping later today.
Getting live hurricane updates from family in the Sarasota area. They've obviously lost power at this point, but cell service is still working, which is encouraging to hear. Gonna be a long night.
Made some more progress in the memory leak investigation for the metadata updater. Apparently the process below is the culprit:
python3.12 -c from multiprocessing.spawn import spawn_main; spawn_main(tracker_fd=14, pipe_handle=16) --multiprocessing-fork
I fully expect that process to use a lot of memory while the script is running, but it holds onto it after the run has finished. Oddly enough, the exact same script returns to sane memory levels immediately after a run when it's instrumented with a continuous profiler.
Any Python folks out there have ideas about why this process won't let go of the memory when it isn't being profiled? I've tried manual garbage collection in the script, but it doesn't affect this at all.
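In the meantime, the workaround I'm leaning toward is fencing the heavy pass into a short-lived spawned child so the OS reclaims everything when it exits. Rough sketch only; run_update is a stand-in name for the real updater entry point, not its actual name:

import multiprocessing as mp

def run_update():
    # Stand-in for the real metadata-update pass.
    ...

def run_update_isolated():
    # Do the heavy work in a short-lived spawned child. Even if the
    # allocator inside that child never hands freed memory back, the OS
    # reclaims all of it when the child exits, so the parent's RSS stays
    # flat between runs.
    ctx = mp.get_context("spawn")
    p = ctx.Process(target=run_update)
    p.start()
    p.join()

if __name__ == "__main__":
    run_update_isolated()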
#asknostr #python