> If a node is blind to a large segment of the real mempool, wouldn't it be slower to detect a sudden spike in the fee market, potentially causing it to fall behind in a fee-bumping war?
A fee-bumping war? I think I need more context. I am not aware of any real-world software that requires users to competitively bump their fees. Are you saying there *is* such a protocol? Are you perhaps referring to the LN attack where the perp repeatedly RBF's an input to a tx that creates old state?
Even there, the would-be victim doesn't have to repeatedly RBF anything. Instead, he is expected to repeatedly modify his justice tx to use the txid of whatever transaction the perp creates after *he* (the perp) performs an RBF. The victim *does* have to set feerate each time, but his feerate does not compete with his opponent's, as his opponent is RBF'ing the input to the justice transaction's *parent,* whereas the victim simply sets the feerate of the justice transaction *itself,* and he is expected to simply set it to whatever the going rates are.
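To make that concrete, here's a rough Python sketch of the victim's side (the types and the helper are made up for illustration; real LN software builds these with full script and witness handling):

```python
from dataclasses import dataclass

@dataclass
class Outpoint:
    txid: str   # txid of the transaction being spent
    vout: int   # output index within that transaction

@dataclass
class JusticeTx:
    prevout: Outpoint  # the revoked-state output being swept
    feerate: float     # sat/vB the victim chooses for *his own* tx

def rebuild_justice_tx(old: JusticeTx, new_parent_txid: str,
                       going_rate: float) -> JusticeTx:
    # The perp RBF'd the parent, so its txid changed. The victim only
    # points his justice tx at the new txid and pays the current going
    # rate; he never bids against the perp's feerate.
    return JusticeTx(
        prevout=Outpoint(txid=new_parent_txid, vout=old.prevout.vout),
        feerate=going_rate,
    )
```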
Moreover, as mentioned above, I think it's absurd to expect a real-world scenario where Knots reports a too-low feerate for 2016 blocks in a row, despite getting information about the current rates from each of those blocks *as well as* its mempool. For that to happen, spam transactions would have to be broadcast at a completely absurd, constantly-increasing rate, for 2016 blocks in a row, with bursts of *yet further* increased speed right after each block gets mined (otherwise the fee estimator would know the new rate because it shows up in the most recent block), and the mempool would *also* have to go practically unused by anything else (otherwise the fee estimator would know the new rate when it shows up in the non-spam transactions that compete for blockspace with the spam transactions).
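For the intuition in code: here's a toy estimator (not Core's or Knots's actual bucket-based algorithm) showing why confirmed blocks alone keep a node roughly current on feerates, no matter what its mempool looks like:

```python
def feerate_from_recent_blocks(blocks, percentile=0.5):
    """Toy estimator: take a percentile of the feerates confirmed in
    recent blocks. Because every mined block reveals which feerates
    actually cleared, even a node with a heavily filtered mempool
    learns a new going rate at most one block after it changes.

    `blocks` is a list of blocks; each block is a list of the
    feerates (sat/vB) of its confirmed transactions.
    """
    rates = sorted(r for block in blocks for r in block)
    if not rates:
        return None
    idx = min(int(len(rates) * percentile), len(rates) - 1)
    return rates[idx]

# e.g. the median cleared feerate over the last 6 blocks:
# feerate_from_recent_blocks(last_6_blocks, percentile=0.5)
```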
> On the other points we are also left with the problem that the network communication is breaking down because more nodes are rejecting the very transactions that miners are confirming in blocks.
This communication problem can be summarized as, "compact blocks are delayed when users have to download more transactions." I think driving down that delay is a worthwhile goal, but Core's strategy for achieving it is, I think, worse than the disease: Core's users opt to relay spam transactions as if they were normal transactions so that they don't have to download them when they show up in blocks. If you want to do that, have at it, but it looks to me like a huge number of former Core users are saying "This isn't worth it," and they are opting for software that simply doesn't do that. I expect this trend to continue.
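The delay itself is easy to quantify. Here's a simplified sketch of the BIP 152 accounting (real compact blocks use short transaction IDs rather than full txids):

```python
def reconstruction_cost(block_txids, mempool_txids):
    """Compact-block relay, simplified: a peer announces a block's
    transactions by id; anything already in our mempool needs no
    download, anything missing costs an extra getblocktxn round trip.
    Returns (number of missing txs, whether a round trip is needed).
    """
    missing = [txid for txid in block_txids if txid not in mempool_txids]
    return len(missing), bool(missing)

# A filtering node that rejected N of a block's txs pays one extra
# round trip plus the bandwidth for those N txs at block time; a node
# that relayed them paid that bandwidth earlier, at mempool-accept time.
```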
Replies (3)
I run BitcoinKnots myself. Great interview btw the other day on the @Guy Swann show.
Fair point regarding the fee estimation, and I appreciate the detailed breakdown. I gladly accept that fee estimation is robust enough even with a partial view of the mempool.
> Core's strategy to achieve that goal is, I think, worse than the disease: Core's users opt to relay spam transactions as if they were normal transactions, that way they don't have to download them when they show up in blocks.
The choice isn't between a cure and a disease (purity vs. efficiency); it's about upholding network neutrality. The Core policy relays any valid transaction that pays the fee, without making a value judgment.
The alternative - active filtering - is a strategic dead end for a few reasons:
- It turns node runners into network police, forcing them to constantly define and redefine what constitutes "spam."
- This leads to an unwinnable arms race. As we've seen throughout Bitcoin's history, the definition of "spam" is a moving target. Data-embedding techniques will simply evolve to bypass the latest filters.
- The logical endgame defeats the purpose. The ultimate incentive for those embedding data is to make their transactions technically indistinguishable from "normal" financial ones, rendering the entire filtering effort futile.
> It turns node runners into network police
It doesn't turn them into "network police" because they aren't policing "the network" (other people's computers) but only their own. I run spam filters because I don't want spam in *my* mempool. If other people want it, great, their computer is *their* responsibility.
> constantly define and redefine what constitutes spam
It doesn't constantly need redefinition. Spam is a transaction on the blockchain containing data such as literature, pictures, audio, or video. A tx on the blockchain is not spam if, in as few bytes as possible, it does one of only two things, and nothing else: (1) transfers value on L1 or (2) sweeps funds from an HTLC created while trying to transfer value on an L2. By "value" I mean the "value" field in L1 BTC utxos, and by "transferring" it I mean reducing the amount in that field in the sender's utxos and increasing it in the recipient's utxos.
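Stated as code, that definition is a fixed two-branch check, not a moving target. The two predicates here are hypothetical placeholders for real script-level and size checks, but the shape of the rule never changes:

```python
def transfers_value_minimally(tx) -> bool:
    # Placeholder: true iff the tx only moves the `value` field of L1
    # utxos from sender to recipient, in as few bytes as possible.
    raise NotImplementedError

def sweeps_l2_htlc_minimally(tx) -> bool:
    # Placeholder: true iff the tx only sweeps funds from an HTLC
    # created while transferring value on an L2.
    raise NotImplementedError

def is_spam(tx) -> bool:
    """A tx is NOT spam iff it does exactly one of the two allowed
    things, and nothing else."""
    return not (transfers_value_minimally(tx) or sweeps_l2_htlc_minimally(tx))
```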
> Data-embedding techniques will simply evolve to bypass the latest filters
And filters will simply evolve to neutralize the latest bypass. The filterers cannot lose this race if they are more diligent than the spammers.
> [What if they] make their transactions technically indistinguishable from "normal" financial ones
Then we win, because data that is technically indistinguishable from normal financial transactions cannot be automatically deciphered either, and data that cannot be automatically deciphered cannot be used in a metaprotocol. The spammers lose if their software clients cannot automatically decipher the spam.
If the spammers develop some technique for embedding spam that can be automatically deciphered, we add that method to our filters, and now they cannot use that technique in the filtering mempools. If they make a two-stage technique where they have to publish a deciphering key, then they either have to publish that key on chain -- which allows us to detect and filter it -- or they have to publish it off-chain, which is precisely what we want: now their protocol requires an off-chain database, and all of their incentives call for using that database to store more and more data.
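As a concrete example of how cheap the filtering side of this race is: the inscription "envelope" begins with a fixed byte sequence (OP_FALSE, OP_IF, then a 3-byte push of "ord"), so detecting it in a transaction's witness data is a substring scan:

```python
# OP_FALSE (0x00), OP_IF (0x63), push-3-bytes (0x03), then b"ord".
ORD_ENVELOPE_PREFIX = bytes([0x00, 0x63, 0x03]) + b"ord"

def witness_has_ord_envelope(witness_elements) -> bool:
    """Scan a tx input's witness stack (an iterable of bytes) for the
    known embedding marker. When a new technique is published, adding
    its pattern here is all a filtering mempool needs to neutralize it."""
    return any(ORD_ENVELOPE_PREFIX in element for element in witness_elements)
```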