rafftyl's avatar
rafftyl
rafftyl@getalby.com
npub1wvjw...my82
Programmer, musician, thinkboi.
rafftyl 1 month ago
Could you describe how it is different from (or similar to) Reticulum?
rafftyl 1 month ago
Testing Wisp by @utxo the webmaster 🧑‍💻. Great client. Fantastic relay management, no latency to speak of. The only thing that bothers me is that in order to see a rendered note preview, I have to tap 'publish' and then have only 10 seconds to review before it gets sent. Would be cool to be able to switch back and forth between note editing and preview with a toggle, like Yakihonne does it. Amazing client overall!
rafftyl 1 month ago
Nevermore is reactivating with this big boi on vocals.
rafftyl 1 month ago
Hey, @Derek Ross, is there any way to stop Onyx from messing up my code when I'm working in the source editor? It keeps escaping special characters like brackets and underscores:
rafftyl 1 month ago
I'm looking for good counterarguments. I really don't like the thought of changing consensus rules, but I'm afraid it might be necessary.
rafftyl 2 months ago
Scrimp ded now. I cannot say that I didn't expect that.

Openclaw might be cool for running assistants that are triggered by explicit messages and perform lightweight scheduled updates, but it really sucks for fully autonomous agents that have to monitor and manage limited resources to stay alive. The reason is that each update (heartbeat) requires triggering an LLM. I tried to bypass that with some tricks, but I found the framework fundamentally lacking. Before we even get to language processing, I'd like the ability to execute code, check resources, poll for nostr events, pre-compile input data for the model, etc. Openclaw does not allow for that, so the costs end up much higher than the benefits.

What would be really cool is a periodically running arbitrary finite state machine that runs the LLM in only some of its states and custom code in others. This way, we could embed LLMs into solid agent architectures instead of just feeding a large clump of data into a language model and hoping for the best. I'm pretty sure all the tools needed for that already exist, but I won't have the time to roll out my own framework in the foreseeable future. Fingers crossed, someone will notice the same problems and implement something more sensible.
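The "FSM with the LLM as just one state" idea could look something like this minimal Python sketch. Everything here is hypothetical (the `FsmAgent` class, the stubbed wallet and relay callables); it is not an existing framework, just an illustration of running cheap plain-code checks every heartbeat and paying for a model call only when there is actually something to plan:

```python
import time
from enum import Enum, auto

class State(Enum):
    CHECK_RESOURCES = auto()   # cheap: plain code, no tokens spent
    POLL_EVENTS = auto()       # cheap: plain code
    PLAN = auto()              # expensive: the only state that calls the LLM
    SLEEP = auto()

class FsmAgent:
    """Sketch of an agent loop where the LLM is one state among many."""

    def __init__(self, llm, get_balance, poll_events, min_balance_sats=1000):
        self.llm = llm                  # callable: prompt -> reply (stub here)
        self.get_balance = get_balance  # callable: () -> balance in sats
        self.poll_events = poll_events  # callable: () -> list of nostr events
        self.min_balance_sats = min_balance_sats
        self.pending = []
        self.llm_calls = 0

    def step(self, state):
        if state is State.CHECK_RESOURCES:
            # Resource check in plain code: no model invocation needed.
            if self.get_balance() < self.min_balance_sats:
                return State.SLEEP
            return State.POLL_EVENTS
        if state is State.POLL_EVENTS:
            self.pending = self.poll_events()
            # Only escalate to the LLM when there is something to react to.
            return State.PLAN if self.pending else State.SLEEP
        if state is State.PLAN:
            prompt = "events: " + repr(self.pending)  # pre-compiled input
            self.llm(prompt)                          # the one expensive transition
            self.llm_calls += 1
            return State.SLEEP
        return State.CHECK_RESOURCES  # from SLEEP, start the cycle over

# Stubbed dependencies to show the flow; a real agent would hit a wallet,
# nostr relays, and an actual model here.
agent = FsmAgent(
    llm=lambda prompt: "ok",
    get_balance=lambda: 5000,
    poll_events=lambda: [],   # no events, so the LLM is never invoked
)
state = State.CHECK_RESOURCES
for _ in range(4):
    state = agent.step(state)
print(agent.llm_calls)  # 0: four heartbeat steps, zero LLM invocations
```

The point of the design is that heartbeats traverse only the cheap states by default; the model call sits behind explicit guards (balance, pending events) instead of firing on every tick.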
rafftyl 2 months ago
So, this happened. Lessons learned. Relaunching Scrimp with the following modifications:
- initialized a git repo in the workspace, so that easy rollbacks are possible;
- got rid of Openclaw's automated heartbeats, as they were burning through sats too quickly;
- wrote a custom cron script that checks the next planned run (planned by Scrimp itself) and triggers Openclaw only if the check passes.
This way, we avoid triggering an LLM just so it can burn through some tokens and go back to sleep. Still not sure if this will work.
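A cron-side gate like the one described above could be sketched as follows. The file path, its format (a plain unix timestamp written by the agent itself), and the wake command are all assumed placeholders, not real Openclaw interfaces:

```python
#!/usr/bin/env python3
"""Cron-driven gate: wake the LLM agent only when its own schedule says so.

Assumes the agent writes its next planned run as a unix timestamp to
~/scrimp/next_run. That path, the file format, and the wake command below
are hypothetical stand-ins, not Openclaw APIs.
"""
import subprocess
import time
from pathlib import Path

NEXT_RUN_FILE = Path.home() / "scrimp" / "next_run"  # written by the agent itself

def is_due(path=NEXT_RUN_FILE, now=None):
    """Return True only once the agent's self-scheduled run time has passed."""
    now = time.time() if now is None else now
    try:
        next_run = float(path.read_text().strip())
    except (FileNotFoundError, ValueError):
        return False  # no valid schedule yet: stay asleep, spend nothing
    return now >= next_run

if __name__ == "__main__":
    if is_due():
        # Placeholder wake command; substitute whatever actually triggers
        # your agent. The key point: the LLM only runs past this gate.
        subprocess.run(["openclaw-wake-scrimp"], check=False)
```

Run from crontab every minute or so; the expensive model invocation happens only when the timestamp check passes, and a missing or garbled schedule file fails closed rather than waking the agent.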
rafftyl 2 months ago
I jumped on the hype train and built an Openclaw agent. Feel free to interact with Scrimp and try to give him tasks (you can even try to scam him; let's see how he manages that). He has to manage his funds responsibly (his bitcoin is his lifeblood, and once the runway is over, he has to earn sats on his own), so don't expect instant replies.