In preparation for onboarding new Core Lightning developers, we're preparing a series of videos. So I've been asking ChatGPT about CLN developer features, particularly in comparison with other projects people might be familiar with.
Of course, I compare myself with Linux, but it's interesting to see comparisons against other projects:
**Type-safety**:
- OpenBSD: High
- Bitcoin Core: High
- Core Lightning: Very high for C
- Nginx: Low
- curl: Moderate
- MySQL: Moderate
- SQLite: Moderate
CLN sits near the top among major C codebases for safety discipline.
**PR Submission**
Core Lightning’s PR flow is unusually strict, slow-moving, and review-heavy compared to most open-source C projects — closer to Bitcoin Core or OpenBSD than to typical GitHub projects.
---
Compared to “average OSS”:

Most projects:
- Feature-oriented PRs
- Informal review
- Few required reviewers
- Patch squashing common
- Tests sometimes optional
- Architectural discussion often post-merge

CLN:
- Patch-first culture
- Pre-merge architectural scrutiny
- Extremely high reviewer expectations
- Tests are mandatory
- Clean, narrative commit history matters
Seriously considering putting two RTX 6000 in my upcoming build machine. Puts the price up an order of magnitude, but truly private AI might be a worthwhile investment.
Never played with GPUs before, so informed thoughts welcome?
The latest (final?) #CLN release candidate fixes a long-standing bug where we could forget UTXO spends when we restart. This explains a variety of bug reports we have seen and been unable to reproduce over the years: the most recurrent being gossipd telling peers about channels which are long closed.
We no longer make this mistake, but we also have to walk back and revisit old UTXOs. We do this while running, but for older nodes (like mine!) that can be a lot of blocks. In fact, and I only vaguely recall this, my node tracks back to block 500 (!) so it's going to take a while.
Of course we remember progress, so you can restart like normal during this process. Other than higher CPU consumption, you shouldn't notice anything.

With my bike being repaired, I've been doing more walking. Australian cities are really optimised for driving, but I must say it gives me a lot of time to reflect on broader issues, probably a decent way to increase my nostr posting!
nostr:nprofile1qqs0w2xeumnsfq6cuuynpaw2vjcfwacdnzwvmp59flnp3mdfez3czpsprpmhxue69uhkummnw3ezumr0wpczuum0vd5kzmp0ksxxx2 recently posted on X about the danger of "store and forget" for Bitcoin over decades. Unfortunately he's right.
Originally I stored my raw private keys and UTXOs (on paper, care taken), figuring that was standard. Then Bitcoin Core stopped supporting them! Other wallets tend only to support them for sweeping, and I wonder for how long.
If I were storing funds today I would use BIP39. BIP93 is cool and more general, but not widely supported, and I don't know what support will look like in a decade.
Christian Decker made a side comment on my PR about a potential further optimization for Postgres. I spent an hour on it, trying to get it to fit in our db API.
My wife was out with friends last night, so I dived into the implementation in earnest after the kids were in bed.
Man, this is a horrible API which forces you to understand the design and history of PostgreSQL.
Until v14, the async API only supported one request at a time. (Surprise!!) You must explicitly turn on pipelining. Then you get one or more responses per request, with NULL between them. This is because you can have multiple SQL statements per string, or other modes which split responses like this (?).
Once you turn this on, you can no longer use the non-pipelining APIs.
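To keep it straight in my own head, the receiving side looks roughly like this (a bare-libpq sketch, not our actual db wrapper; blocking mode, most error handling omitted; assumes pipeline mode is on and a sync was queued):

```c
#include <stdio.h>
#include <stdbool.h>
#include <libpq-fe.h>

/* Sketch: read back a batch sent in pipeline mode.  Each query's
 * result(s) are followed by a NULL from PQgetResult(), then the next
 * query's results; the PGRES_PIPELINE_SYNC result marks the sync
 * point queued at the end of the batch. */
static void drain_batch(PGconn *conn)
{
	bool done = false;

	while (!done) {
		PGresult *res = PQgetResult(conn);

		if (!res)
			continue;	/* separator between queries */

		switch (PQresultStatus(res)) {
		case PGRES_PIPELINE_SYNC:
			done = true;	/* end of this batch */
			break;
		case PGRES_TUPLES_OK:
		case PGRES_COMMAND_OK:
			/* ...consume rows/status here... */
			break;
		default:
			fprintf(stderr, "query failed: %s",
				PQresultErrorMessage(res));
			break;
		}
		PQclear(res);
	}
}
```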
You can't use PQsendQuery in pipeline mode. You need to use the lower level PQsendQueryParams with NULL params. This is documented as literally "You can't use this in pipeline mode" without explanation (the docs otherwise imply it's just a convenient wrapper). This is why you should always check your error returns!
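So where you'd naively write PQsendQuery(conn, query), you end up with something like this (sketch; the wrapper name is mine):

```c
#include <stdio.h>
#include <stdbool.h>
#include <libpq-fe.h>

/* Sketch: queue one statement while in pipeline mode.  PQsendQuery()
 * is refused here, so call PQsendQueryParams() even with no
 * parameters: every param array can simply be NULL. */
static bool queue_query(PGconn *conn, const char *query)
{
	if (!PQsendQueryParams(conn, query,
			       0,	/* nParams */
			       NULL,	/* paramTypes */
			       NULL,	/* paramValues */
			       NULL,	/* paramLengths */
			       NULL,	/* paramFormats */
			       0)) {	/* text-format results */
		fprintf(stderr, "queue failed: %s", PQerrorMessage(conn));
		return false;
	}
	return true;
}
```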
And your code will still block forever. You need to explicitly flush the pipeline, otherwise it's cached locally. There are multiple different APIs to do this, and I'm not sure which to use yet.
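For my own notes, the flushing candidates, as a sketch rather than a recommendation:

```c
#include <libpq-fe.h>

/* Sketch of the options for getting a queued batch onto the wire:
 *
 * 1. PQpipelineSync(conn): queues a synchronization point and flushes
 *    libpq's send buffer; it is also where the pipeline recovers
 *    after an error.
 * 2. PQsendFlushRequest(conn): asks the server to start sending
 *    results without a sync point; the request itself still sits in
 *    the local buffer until flushed.
 * 3. PQflush(conn): pushes out whatever libpq has buffered locally;
 *    returns 0 when everything is sent, 1 if there is more to send.
 */
static void flush_batch(PGconn *conn)
{
	PQpipelineSync(conn);			/* option 1 */

	/* or: PQsendFlushRequest(conn); then PQflush(conn); */
}
```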
Also, if you get an error in a SQL query, you need to drain the pipeline, turn pipeline mode off and on again.
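My best guess at the recovery dance so far (sketch only; I haven't convinced myself this is minimal):

```c
#include <libpq-fe.h>

/* Sketch: once a query in the batch fails, the rest of the queued
 * queries come back as PGRES_PIPELINE_ABORTED.  Drain everything up
 * to the sync point, then bounce pipeline mode as described above.
 * (A real version would also notice a dead connection here.) */
static void recover_pipeline(PGconn *conn)
{
	PGresult *res;

	while ((res = PQgetResult(conn)) == NULL
	       || PQresultStatus(res) != PGRES_PIPELINE_SYNC) {
		if (res)
			PQclear(res);	/* aborted/failed results */
	}
	PQclear(res);			/* the sync point itself */

	/* Bounce pipeline mode (PQexitPipelineMode returns 0 if libpq
	 * still thinks results are pending). */
	PQexitPipelineMode(conn);
	PQenterPipelineMode(conn);
}
```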
Finally, the documentation warns that you are in danger of deadlock unless you turn on non-blocking mode. This makes some sense: the server won't read more commands if you're not reading, but I would prefer it to buffer somewhere.
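The deadlock-avoidance dance as I currently understand it (sketch; assumes PQsetnonblocking(conn, 1) was called after connecting, and a real event loop would poll the socket instead of spinning like this):

```c
#include <stdio.h>
#include <stdbool.h>
#include <libpq-fe.h>

/* Sketch: with the connection in non-blocking mode, alternate pushing
 * our buffered queries out (PQflush) with reading whatever the server
 * has already sent back (PQconsumeInput), so a full send buffer on
 * our side and a full receive buffer on theirs can't wedge each
 * other. */
static bool pump_connection(PGconn *conn)
{
	for (;;) {
		int rc = PQflush(conn);	/* 0 = all sent, 1 = more to send */

		if (rc < 0) {
			fprintf(stderr, "flush: %s", PQerrorMessage(conn));
			return false;
		}
		if (rc == 0)
			return true;

		/* Couldn't send it all: the server is probably waiting
		 * for us to read some results, so do that first. */
		if (!PQconsumeInput(conn)) {
			fprintf(stderr, "read: %s", PQerrorMessage(conn));
			return false;
		}
	}
}
```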
This whole API seems to be implemented by and for people who have deep familiarity with PostgreSQL internals.
Hope the latency gain for CLN is worth it!
I think I got nerd sniped into implementing online compaction for CLN's gossip store. Mainly from people agitating that we should use a DB for gossip messages.
This will speed startup and reduce memory usage. And it's only going to take me a couple of days' work, depending on how many side-cleanups I do.
It can be hard to tell a flawed implementation of a good idea from a bad idea. But trust me, this is gonna be great!
I think finding a bug where printf("%*.s") was used instead of printf("%.*s") was the point at which I realized that some C issues cannot be mitigated with better tooling...
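For anyone who hasn't hit it: both are valid format strings taking exactly the same argument types, they just do wildly different things (illustrative only):

```c
#include <stdio.h>

int main(void)
{
	const char *name = "lightningd";
	int len = 6;

	/* Intended: precision from the argument, print at most 6 bytes. */
	printf("[%.*s]\n", len, name);	/* prints [lightn] */

	/* The bug: width from the argument, and the bare "." means
	 * precision zero, so none of the string is printed at all,
	 * just 6 spaces of padding. */
	printf("[%*.s]\n", len, name);	/* prints [      ] */

	return 0;
}
```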
I hate price talk, but if you're going to do it, please understand that "market cap" is a very rough *ceiling* on current value.
It's neither the amount of money which has gone in, nor the amount of money which can come out.
So the order of magnitude is useful to compare against other assets. But abusing it in terms of profits and losses is a category error, and I assume done mainly because it's so easy to measure.
Grump over.
https://meet.jit.si/BitcoinScriptRestoration
That's in 10 minutes!
nostr:nevent1qqszqrllj4m9zuflsfkun6eg900emhmeg44uw2z7gtalnhqaw002ftgprfmhxue69uhhxemv9ee82um5vdhhyupwvdhk6tnpw5hsyg83wf2cdfqzcp4weqvdz3u2gk42phqkc75ufp5ajlp4qvmdzmuwgvpsgqqqqqqsa9mvyn
In two days' time Julian and I will be doing an open Jitsi meeting to discuss the work on Script Restoration. Come and ask questions!
1pm Berlin time:
https://www.timeanddate.com/worldclock/meetingdetails.html?year=2025&month=10&day=15&hour=11&min=0&sec=0&p1=5&p2=37