Rusty Russell
rusty@rusty.ozlabs.org
npub179e9...lz4s
Lead Core Lightning, Standards Wrangler, Bitcoin Script Restoration ponderer, coder. Full-time employed on Free and Open Source Software since 1998. Joyous hacking with others for over 25 years.
#cln #dev #someday Wallet "Intents"

With anchor channels, lightning nodes were forced to get smarter about tx construction. (Kind of funny: nobody really wanted to write a Bitcoin wallet, but here we all are.) You have a deadline, the user wants to spend as little as possible, you have to use CPFP to boost the commitment tx, and bring your own fees for any HTLC txs (which are now single input/output ANYONECANPAY|SINGLE). I did the minimal things to get this to work, but it brings me back to the question of how these things *should* work.

I don't think the primary interface for a wallet should be "spend these utxos to create this output" but "do this by this time, with this budget cap". The wallet should figure out how to do it. For example: sources include onchain funds, spliceable channels and even closing channels. What if the wallet code had a rough "cost" for each of these? Sinks include splicing in, onchain outputs (maybe to cold storage?) or new channels. And there are also specific txs you might want. It should combine them opportunistically (and later, pull them apart if priorities change).

There are several problems, though. The first is complexity: it's not trivial to get all the cases correct, and all the combinations. The second is related: understandability and debugging are hard too. What did it do, what did it decide *not* to do, and was the result optimal? i.e. why?

But mainly, how do we present this to the user? Or to the plugins they use to direct it? They need to know that funds are coming to them (especially since we low-ball fees for non-urgent things). Plugins will want to see PSBTs before they get signed, so they can do clever things (coinjoin, open new channels, combine in other ways).

The upside of all this is maximum efficiency, with a side-helping of more confusing transaction graphs. Both of which help everyone. This is currently an idle thought: it's not going to reach the top of my TODO this year, for sure!
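To make the "intent" idea a little more concrete, here's a minimal C sketch. None of these names exist in Core Lightning today; they're invented purely to illustrate "do this by this time, with this budget cap" plus rough per-source costs.

```c
/* A minimal sketch of what a wallet "intent" might look like.  None of these
 * names exist in Core Lightning; they only illustrate the idea of
 * "do this by this time, with this budget cap" and rough per-source costs. */
#include <stdint.h>
#include <stdio.h>

/* Where funds could come from, each with a rough cost the wallet estimates. */
enum intent_source_type {
	SOURCE_ONCHAIN_UTXO,
	SOURCE_SPLICE_OUT,	/* splice funds out of an existing channel */
	SOURCE_CLOSING_CHANNEL,	/* funds returning from a closing channel */
};

/* Where the funds should end up. */
enum intent_sink_type {
	SINK_SPLICE_IN,		/* splice funds into a channel */
	SINK_ONCHAIN_OUTPUT,	/* e.g. send to cold storage */
	SINK_NEW_CHANNEL,
};

struct intent_source {
	enum intent_source_type type;
	uint64_t amount_sat;
	uint64_t rough_cost_sat;	/* estimated cost of tapping this source */
};

/* "Do this by this time, with this budget cap": the wallet decides how,
 * combining pending intents opportunistically into shared transactions
 * (and later, pulling them apart if priorities change). */
struct wallet_intent {
	enum intent_sink_type sink;
	uint64_t amount_sat;
	uint32_t deadline_blockheight;	/* must confirm by this height */
	uint64_t max_fee_sat;		/* budget cap for getting it done */
};

int main(void)
{
	/* Example intent: move 1M sats to cold storage, not urgent. */
	struct wallet_intent to_cold = {
		.sink = SINK_ONCHAIN_OUTPUT,
		.amount_sat = 1000000,
		.deadline_blockheight = 900000,
		.max_fee_sat = 2000,
	};
	printf("intent: %llu sat by block %u, fee cap %llu sat\n",
	       (unsigned long long)to_cold.amount_sat,
	       to_cold.deadline_blockheight,
	       (unsigned long long)to_cold.max_fee_sat);
	return 0;
}
```

The wallet's job would then be to pick the cheapest mix of sources that still meets every intent's deadline within its fee cap.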
Any Android devs out there? I will pay 2 million sats for someone to implement the option to customize Signal notifications for *reactions* separately from new messages.

Conditions:
1. Ping me first! It's not a race, you gotta get public agreement from me to enter.
2. Gotta get it merged and shipped so I can install it on my Android phone.
3. You have to have fun!

References:
Linux desktop nostr apps? I'm using gossip, but it feels clunky. Prefer desktop for writing detailed posts: mobile is optimized for consumption, not production :(
#cln #dev So, we've had this annoying intermittent bug where UTXO spends would get missed. Sometimes it meant that we would keep gossip for channels which had been spent, and sometimes we'd miss opportunities to sweep funds (more concerning!).

Eventually I started to suspect our (my!) hash table implementation. It's extremely efficient, but if it had some bug it could explain the issues: it has a random seed for the hash function, so weird corner cases would appear random. I wrote some random churn tests: nothing. I could get more elaborate, of course, but then something else happened.

Shahana wrote some code to create all our new documentation examples, which involved getting nodes into all kinds of weird states, and hit a strange bug. I tracked it down to a case where the recovery code was putting a new peer into the hash table where one already existed. Easy bug fix, but it made me wonder: were we doing this elsewhere?

My hash table code allows duplicate keys just fine. But it's actually unusual to want that, and there are APIs (get, delkey) which only handle the first match, vs getfirst/getnext which are fully generic. So I wondered: did we make this mistake anywhere else?

I bit the bullet and split the APIs: up front you now declare what type of hash table you want (duplicate keys or nodups), and you don't even get the deceptive APIs for each case. As you might expect, the only code which had a problem was the various places where we watch UTXOs. You can absolutely be watching for the same thing in multiple places, and indeed the code was not iterating, but only handling the "first" one. And this was all my own code, front to back. Mea culpa.

APIs matter. The natural use of an API should be the correct one. And of course "don't patch bad code, rewrite it", a la The Elements of Programming Style.
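Here's a self-contained toy showing the pitfall. The names are made up and a plain array stands in for the hash table; the real code uses CCAN's htable inside Core Lightning.

```c
/* Toy illustration of the duplicate-key pitfall: a "get" that returns only
 * the first match, vs iterating all matches.  Not real Core Lightning code. */
#include <stdio.h>
#include <string.h>

struct watch {
	const char *outpoint;	/* "txid:n" key; duplicates allowed */
	const char *owner;	/* which subsystem wants the callback */
};

/* Two different subsystems watch the same outpoint. */
static struct watch table[] = {
	{ "deadbeef:0", "gossip pruning" },
	{ "deadbeef:0", "sweep our funds" },
	{ "cafebabe:1", "gossip pruning" },
};
#define TABLE_LEN (sizeof(table) / sizeof(table[0]))

/* "get"-style accessor: returns only the first match for the key. */
static const struct watch *watch_get(const char *outpoint)
{
	for (size_t i = 0; i < TABLE_LEN; i++)
		if (!strcmp(table[i].outpoint, outpoint))
			return &table[i];
	return NULL;
}

/* BUGGY: only the first watcher hears about the spend. */
static void notify_spent_buggy(const char *outpoint)
{
	const struct watch *w = watch_get(outpoint);
	if (w)
		printf("notify: %s\n", w->owner);
}

/* CORRECT: iterate every entry with this key. */
static void notify_spent(const char *outpoint)
{
	for (size_t i = 0; i < TABLE_LEN; i++)
		if (!strcmp(table[i].outpoint, outpoint))
			printf("notify: %s\n", table[i].owner);
}

int main(void)
{
	printf("buggy:\n");
	notify_spent_buggy("deadbeef:0");	/* the sweep watcher never fires */
	printf("correct:\n");
	notify_spent("deadbeef:0");
	return 0;
}
```

Splitting the APIs means a duplicate-key table never even offers the single-match accessors, so the natural call is the correct one.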
Jeremy Rubin asked on Twitter what was happening with #GSR. Good q! After far too much fiddling with benchmarks I now have preliminary numbers. The budget is 5200 varops per weight unit. Fast ops (compare, zero fill, copy) cost 1 varop per stack byte. SHA256 costs 10 per byte. Everything else costs 2 per byte.

I need to clean up my benchmarks so everyone can run them, and get "on your machine the worst case validation would be <X> seconds, doing OP_<Y>". That's concrete: it gives us a chance to find any wild machines which are unexpectedly slow, and gives a tangible worst case, which should allow fruitful discussion.

I also need to write code to answer "what input size (if any) would cause <this script> to exhaust its varops budget?". This again lets us think concretely about my thesis (yet to be proven to my satisfaction!) that it's possible to have a budget which allows any reasonable script not to worry about it.
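To make those preliminary numbers concrete, here's a toy calculation in C using the costs above. The input sizes are made up, and this is just arithmetic, not the real accounting code.

```c
/* Toy varops calculator using the preliminary GSR numbers quoted above:
 * budget of 5200 varops per weight unit; 1 varop/byte for fast ops,
 * 10/byte for SHA256, 2/byte for everything else.  Illustration only. */
#include <stdint.h>
#include <stdio.h>

enum op_class {
	OP_FAST,	/* compare, zero fill, copy: 1 varop per stack byte */
	OP_SHA256,	/* 10 varops per byte */
	OP_OTHER,	/* 2 varops per byte */
};

static uint64_t varop_cost(enum op_class c, uint64_t stack_bytes)
{
	switch (c) {
	case OP_FAST:	return stack_bytes * 1;
	case OP_SHA256:	return stack_bytes * 10;
	case OP_OTHER:	return stack_bytes * 2;
	}
	return 0;
}

int main(void)
{
	/* Made-up example: a 400-weight input whose script SHA256s 100kB. */
	uint64_t weight = 400;
	uint64_t budget = 5200 * weight;
	uint64_t cost = varop_cost(OP_SHA256, 100000);

	printf("budget=%llu varops, cost=%llu varops: %s\n",
	       (unsigned long long)budget, (unsigned long long)cost,
	       cost <= budget ? "within budget" : "exceeds budget");
	return 0;
}
```

For that made-up input, the budget works out to 2,080,000 varops and hashing 100kB costs 1,000,000, so it fits with plenty of room.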