Considering the delta to an old-style SPV client: there, you only care about your own coins, so (modulo the privacy issues, BIP37, etc.) you can just request proofs for the few things you care about.
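To make that concrete, here's a minimal sketch (in Python, with illustrative names, not any real client's API) of what checking one of those proofs involves: the client holds only block headers and verifies a merkle branch, like the one a BIP37 `merkleblock` message carries, against the header's merkle root.

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_branch(txid: bytes,
                         branch: list[tuple[bytes, bool]],
                         merkle_root: bytes) -> bool:
    """branch: (sibling_hash, sibling_is_on_right) pairs, leaf upward.
    All hashes in internal byte order (reversed from display hex)."""
    h = txid
    for sibling, sibling_on_right in branch:
        # Bitcoin's merkle tree hashes the concatenation of the two
        # 32-byte children with double-SHA256 to get the parent node.
        h = double_sha256(h + sibling if sibling_on_right
                          else sibling + h)
    return h == merkle_root
```

The branch is only a few hundred bytes per transaction, which is why watching just your own coins is cheap.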
But to take part in p2p block propagation (and indeed to know quickly whether to accept new blocks), wouldn't it cause big problems to have to scoop up the necessary proofs "after the event", so to speak?
Just speaking in generalities here; I'm no expert. Might be worth digging up the old discussions on SPV clients from the 2012-2015 era.
Replies (1)
> wouldn't it cause big problems to sort of scoop up the necessary proof "after the event"
I wonder if there's a near-ideal setting where you dramatically reduce the size of your UTXO set without greatly slowing down block validation. For example, spam outputs of the "fake pubkey" variety would eventually get pruned as they age, because most of them can never be spent -- other than the few proof-of-concept ones where someone used grinding to store data in a valid pubkey's first few bytes.
You could probably even optimize by whitelisting known exceptions and saying "don't prune these." I suspect there's a way to use techniques like these to prune something like 99.9% of "fake pubkey" outputs without significantly risking a detrimental slowdown of block validation.
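A rough sketch of how that pass could look, assuming a UTXO snapshot keyed by outpoint and a hand-maintained whitelist (all names and thresholds here are illustrative): a "pubkey" that isn't even a valid secp256k1 point can never satisfy a signature check, so once such an output is old enough it's a prune candidate unless whitelisted.

```python
P = 2**256 - 2**32 - 977  # the secp256k1 field prime

def is_valid_pubkey(pk: bytes) -> bool:
    """True iff pk parses as a point on secp256k1 (y^2 = x^3 + 7)."""
    if len(pk) == 33 and pk[0] in (2, 3):      # compressed encoding
        x = int.from_bytes(pk[1:], "big")
        if x >= P:
            return False
        y_sq = (pow(x, 3, P) + 7) % P
        # x is on the curve iff x^3 + 7 is a square mod P (Euler's
        # criterion: a^((P-1)/2) is 1 for squares, P-1 otherwise).
        return pow(y_sq, (P - 1) // 2, P) in (0, 1)
    if len(pk) == 65 and pk[0] == 4:           # uncompressed encoding
        x = int.from_bytes(pk[1:33], "big")
        y = int.from_bytes(pk[33:], "big")
        return x < P and y < P and (y * y - pow(x, 3, P) - 7) % P == 0
    return False

def prune_candidates(utxos, tip_height, min_age=52_560,
                     whitelist=frozenset()):
    """Yield outpoints of old outputs whose keys are all unspendable.

    utxos: {outpoint: (confirm_height, [pubkey bytes, ...])} -- a
    hypothetical snapshot shape, not Core's actual coins-db layout.
    min_age of 52,560 blocks is roughly a year; pick your own number.
    """
    for outpoint, (height, pubkeys) in utxos.items():
        if outpoint in whitelist:
            continue  # known exception: never prune
        if tip_height - height < min_age:
            continue  # too young to write off yet
        if pubkeys and all(not is_valid_pubkey(pk) for pk in pubkeys):
            yield outpoint
```

Note this on-curve test deliberately lets the ground-to-be-valid proof-of-concept keys mentioned above through: they pass, stay in the set, and cost nothing extra.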
Similar techniques can probably be applied to prune most of the 330-sat outputs associated with ordinals, which, once they pass a certain age, seem unlikely ever to be consolidated. Even if many of them *do* get consolidated someday, it wouldn't "stop" you from validating new blocks; it would just slow you down during the consolidation period. The code could even be updated afterwards to exempt those outputs so the slowdown doesn't affect future users doing IBD.
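The "slow down, don't stop" behaviour falls out naturally if pruned coins are handled as a two-tier lookup. A sketch under the same assumptions as above; `slow_lookup` stands in for some hypothetical recovery path, such as fetching the coin plus an inclusion proof from an archival peer against whatever commitment scheme you've kept.

```python
DUST_VALUE = 330        # sats, typical of inscription-bearing outputs
DUST_MIN_AGE = 52_560   # blocks (~1 year); again purely illustrative

def is_dust_prune_candidate(value_sats: int, confirm_height: int,
                            tip_height: int) -> bool:
    """Old, never-moved 330-sat outputs are candidates for pruning."""
    return (value_sats == DUST_VALUE
            and tip_height - confirm_height >= DUST_MIN_AGE)

def resolve_input(outpoint, utxos, slow_lookup):
    """Fast path: the pruned local UTXO set. Slow path: everything else.

    Spending a pruned-but-real coin doesn't fail validation; it just
    pays slow_lookup's latency. A consolidation wave means hitting the
    slow path often -- the temporary slowdown described above.
    """
    coin = utxos.get(outpoint)
    if coin is not None:
        return coin
    return slow_lookup(outpoint)
```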