I get the concern, but the framing misses what really matters. The block size stays fixed, so storage growth is predictable. Relay bandwidth and mempool churn are transient; nodes can throttle, prune, and drop transactions as needed. Yes, the UTXO set is 12 GB, but it has been stable since the inscription boom cooled off. And serious node runners already spec for SSDs; slow disks were phased out by cost and necessity years ago.

As for OP_RETURN: raising the default cap doesn't force nodes to relay or index anything. It just removes a soft bottleneck that hasn't meaningfully filtered "spam" in years. If the data pays fees and doesn't violate consensus, it's Bitcoin-native, ugly or not. The "legal risk" argument leans speculative: nodes aren't archiving OP_RETURN, and the Matzutt paper points to edge cases that haven't been borne out under real-world pressure. Let's not legislate policy on theoretical terror. If Bitcoin is truly neutral, let the market express value, whether in art, text, hashes, or coin. Censorship via knob may feel clean, but it's just control with a prettier name. Knots-style nodes may give some the illusion of filtering, but they don't prevent inclusion; they simply delay it. The data still hits mempools, still gets mined, and still lives on-chain.

And even if some aren't fans of art on Bitcoin, or of preserving fragments of human culture through permanent inscription, I'd argue we should at least look at what projects like Bitmap represent. When we fix money, we'll need territory, cyber territory, and it won't be built on Solana or Arweave. It can only anchor on Bitcoin; no other foundation is equally censorship-resistant, decentralized, or economically sound. In my view, Bitmap is the most compelling project ever built using satoshis themselves. Without digital land, Bitcoin's vision of self-sovereignty is incomplete. That's not hype; that's a long view of where all this is going. All in my opinion, as someone who believes Bitcoin is more than just sound money. It's the base layer of our future reality.
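To make the "knob" concrete: the cap everyone is arguing about is relay policy, not a consensus rule. Here is a deliberately simplified Python sketch of that distinction. The option names -datacarrier and -datacarriersize are Core's real settings; the check itself is a toy approximation (the real code inspects the whole scriptPubKey, not just the payload length), and the payload sizes are made up.

```python
# Toy model of the OP_RETURN relay-policy check, loosely mirroring Core's
# -datacarrier / -datacarriersize knobs. Simplified on purpose.

DEFAULT_DATACARRIERSIZE = 83  # historical default cap, in bytes

def would_relay(payload: bytes,
                datacarrier: bool = True,
                datacarriersize: int = DEFAULT_DATACARRIERSIZE) -> bool:
    """True if a node with this *policy* would relay the transaction.

    Policy only: a transaction that fails this check is still consensus-valid
    and will be accepted in a block if it reaches a miner by any other path.
    """
    if not datacarrier:
        return False          # node refuses to relay any OP_RETURN data
    return len(payload) <= datacarriersize

blob = b"\x00" * 9_000        # hypothetical ~9 kB inscription-style payload
print(would_relay(blob))                            # False under default policy
print(would_relay(blob, datacarriersize=100_000))   # True under loosened policy
```

Raising the default is just changing the second call into the first; no node is forced to follow along, and no block becomes invalid either way.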

Replies (1)

ESE 7 months ago
Two key assumptions behind your comfort level don't align with Core's behavior.

First, "nodes can just throttle or drop big transactions." The per-transaction trickle code was ripped out years ago because it broke compact-block sync; when a node tries to withhold a large tx, it simply forces a slower fallback download, using more bandwidth, not less. The only bandwidth cap left (-maxuploadtarget) is off by default, so almost every Core node forwards any standard tx immediately. In other words, raising the size limit means most nodes will move those bigger payloads for free.

Second, "raising the cap doesn't matter because storage is cheap and pruning exists." Pruning helps the disk after the fact but does nothing for the live relay hit or the RAM needed to hold the UTXO set. That set is already too large to fit in entry-level memory; every extra gigabyte forces more disk seeks, even on SSDs. Cheap terabytes don't fix cache misses.

Legal risk isn't theoretical either: illicit images and links are already on the chain. An unlimited OP_RETURN lets an entire file ride in one clean chunk, while a small cap forces it into thousands of random shards. That difference matters to hobby operators who can't lawyer up or geofence their nodes.

A modest default cap with the config knob intact doesn't censor anyone. It simply makes large, non-monetary payloads pay their real network cost and leaves each node free to tighten or loosen policy without patching code.
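To put rough numbers on those two costs (every input below is an assumption or a rounded default, so treat it as a sketch, not a measurement):

```python
# Back-of-envelope for the relay and RAM points above. All figures are
# assumptions or rounded defaults, not measurements.

# 1) Relay: with -maxuploadtarget unset (the default), a listening node serves
#    the full transaction to every peer that requests it after the announcement.
payload_bytes = 100_000   # hypothetical ~100 kB OP_RETURN payload
peers_served  = 10        # assumed peers that fetch the tx from this one node
txs_per_day   = 1_000     # assumed daily volume of such payloads

upload_gb = payload_bytes * peers_served * txs_per_day / 1e9
print(f"extra upload: ~{upload_gb:.1f} GB/day")          # ~1.0 GB/day

# 2) UTXO cache: Core's default -dbcache is 450 MiB, while the UTXO set cited
#    above is on the order of 12 GB, so most lookups on a default node miss the
#    in-memory cache and go to disk.
dbcache_gb  = 450 / 1024
utxo_set_gb = 12
print(f"cache covers ~{dbcache_gb / utxo_set_gb:.0%} of the UTXO set")  # ~4%
```

None of that bankrupts anyone, but it lands on every default-config node, paid for by people who never asked to host the payloads.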