Perhaps, I'm not sure. But after posting this, I thought of something I find very interesting: it's not actually necessary for bitcoin nodes to participate in this protocol or run anything special, because I realized that electrum servers already do something equivalent to what I want.
What I mean is, a node that wants to follow this protocol -- let's call it the Pruning Node -- can just pretend to be a "normal" bitcoin node. It can prune its utxo set without telling anyone and peer with "normal" bitcoin peers; there is no need for them to signal anything special, because the Pruning Node gets the missing utxo data from electrum servers, as I will explain in a moment. The Pruning Node just requests blocks as usual from its normal, standard bitcoin peers, as every node does, and does not act special in any way.
Now suppose one of its peers gives it a new block, and that block contains a transaction P that claims to spend utxo Q, which our Pruning Node pruned. How does it validate transaction P? Instead of asking its *bitcoin peers* for a proof that utxo Q got spent, it asks a random set of "electrum servers" for that proof. Electrum servers can already supply the requisite proof via the following procedure:
Step A. Get the txid and vout of utxo Q. You can get them from the transaction that tries to spend it, i.e. transaction P, because every bitcoin transaction must state, in its inputs section, the txid and vout of every output it tries to spend.
Step B. Use the txid and vout of utxo Q to query electrum servers for two merkle proofs, both of which you should validate. (Note that all electrum servers have methods that return a merkle proof for any transaction you throw at them -- that's a huge part of what they are *for*.) The first merkle proof proves that utxo Q was created; the second proves that it was spent. The latter should just be a proof that the transaction you are trying to validate is contained in the block you are currently validating -- but if a server instead provides a proof that Q was spent in some *earlier* block, you are done, because that means the transaction you are validating is invalid. (A minimal sketch of the query follows below.)
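To make Step B concrete, here is a rough sketch of the query. The `blockchain.transaction.get_merkle` method is part of the standard Electrum server protocol; the server address is hypothetical, and the txid/height values are placeholders. Note that `get_merkle` needs the confirmation height, which a real client could learn first via `blockchain.scripthash.get_history` on Q's scriptPubKey:

```python
import json
import socket

def electrum_request(host, port, method, params):
    """Send one JSON-RPC request to an Electrum server over its plaintext
    port (real deployments should prefer TLS on 50002) and return the
    result. Electrum's transport is newline-delimited JSON."""
    with socket.create_connection((host, port), timeout=10) as sock:
        payload = json.dumps({"id": 0, "method": method, "params": params})
        sock.sendall(payload.encode() + b"\n")
        buf = b""
        while not buf.endswith(b"\n"):
            chunk = sock.recv(4096)
            if not chunk:
                break
            buf += chunk
    reply = json.loads(buf)
    if reply.get("error"):
        raise RuntimeError(reply["error"])
    return reply["result"]

# Step A: the txid/vout of utxo Q come straight from transaction P's input.
funding_txid = "..."   # placeholder: txid of the transaction that created Q
funding_height = 0     # placeholder: block height at which it confirmed

# Step B: ask a server to prove the funding transaction is in a block.
proof = electrum_request("electrum.example.org", 50001,  # hypothetical server
                         "blockchain.transaction.get_merkle",
                         [funding_txid, funding_height])
# proof looks like {"merkle": [...hex hashes...], "block_height": h, "pos": n}
```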
Note that there is an excellent reason to have confidence that *some* electrum server will give you the data you need if utxo Q was created at some point in the past and then spent in the current block (or a prior one). This is the assumption outlined in Number 1 of my idea: someone in your peer group will always supply you with the block data you need, otherwise bitcoin itself doesn't work; here, we are treating electrum servers as a kind of peer. So if no server gives you the data, utxo Q was either never created or was not spent in the current block, which means the transaction you want to validate is invalid. Whereas if they *do* give you the data you expect, utxo Q was created in the past and not spent until the current block -- it was available to spend -- so the block is not invalid on those grounds, and you can continue validating.
Voila! Electrum servers already have methods that let you check whether a utxo was unspent. If you treat them as bitcoin peers, then per bitcoin's standard trust assumptions you can have confidence that at least one of them will provide the data you need, which you can then validate yourself. If the data checks out, your node is a fully validating bitcoin node that does not need to store the utxo set: it gets the requisite merkle proofs from electrum servers as needed, validates them against its local copy of the blockchain headers, and then adds the now-validated block to its chain.
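For the validation step, something like the following would check an Electrum-style merkle branch against the merkle root in a locally stored header. This is a sketch; Electrum returns hashes in the usual reversed-hex display order, hence the byte flips:

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin's double-SHA256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_proof(txid: str, branch: list, pos: int,
                        merkle_root: str) -> bool:
    """Fold the branch returned by blockchain.transaction.get_merkle up
    to the root and compare it with the merkle root field of the block
    header we already store locally."""
    h = bytes.fromhex(txid)[::-1]          # to internal little-endian order
    for sibling_hex in branch:
        sibling = bytes.fromhex(sibling_hex)[::-1]
        if pos & 1:                        # our hash is the right child here
            h = dsha256(sibling + h)
        else:                              # our hash is the left child here
            h = dsha256(h + sibling)
        pos >>= 1
    return h[::-1].hex() == merkle_root    # back to display order to compare
```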
I believe Bitcoin Core maintains 10 outbound connections by default: 8 "full relay" peers and 2 "block relay only" peers. It thus has an implicit trust assumption that at least one of them will provide data for your node to validate. I suspect the same N can be used in this protocol without any change in the trust assumptions: your only expectation is that at least one of your peers will provide the data (the proof) if it exists, and you then validate the proof yourself.
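As a sketch of that fan-out, reusing `electrum_request` and `verify_merkle_proof` from above (the server list and N are illustrative):

```python
import random

def fetch_proof_from_any(servers, txid, height, expected_root, n=10):
    """Query up to n randomly chosen Electrum servers and accept the
    first merkle proof that validates against our own stored header.
    This mirrors the usual one-honest-peer assumption: a single good
    answer suffices, and bad answers are cheap to detect and discard."""
    for host, port in random.sample(servers, min(n, len(servers))):
        try:
            proof = electrum_request(host, port,
                                     "blockchain.transaction.get_merkle",
                                     [txid, height])
        except (OSError, RuntimeError, ValueError):
            continue  # unreachable or misbehaving server; try the next one
        if verify_merkle_proof(txid, proof["merkle"], proof["pos"],
                               expected_root):
            return proof
    return None  # no server could prove it: treat the claimed utxo as bogus
```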
Considering the delta to an old-style SPV client: there, you only care about your own coins, so (modulo the privacy issues, bip37 etc) you can just request proofs for the few things you care about.
But to take part in p2p block propagation (and indeed to know quickly whether to accept new blocks), wouldn't it cause big problems to sort of scoop up the necessary proof "after the event", so to speak?
Just speaking in generalities here -- I'm no expert. Might be worth digging up the old discussions on SPV clients from the 2012-2015 era.
> wouldn't it cause big problems to sort of scoop up the necessary proof "after the event"
I wonder if there's a nearly ideal setting where you dramatically reduce the size of your utxo set without greatly slowing down block validation. For example, spam outputs of the "fake pubkey" variety will eventually get pruned as they age, because most of those can never be spent -- other than a few proof-of-concept ones where someone used grinding to store data in a pubkey's first few bytes.
You could probably even optimize by whitelisting known exceptions and saying "don't prune these." I suspect techniques like these could prune something like 99.9% of "fake pubkey" outputs without meaningfully risking a slowdown of block validation.
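As an illustration of what such a check might look like, here is a sketch of an on-curve test for alleged pubkeys. Uncompressed "pubkeys" stuffed with data essentially never satisfy the curve equation; compressed ones with a valid prefix byte pass roughly half the time, and the ground proof-of-concept keys mentioned above pass by construction -- which is exactly why a whitelist and a conservative age threshold still help:

```python
P = 2**256 - 2**32 - 977  # the secp256k1 field prime

def is_valid_pubkey(pk: bytes) -> bool:
    """True only if pk encodes a point actually on the secp256k1 curve
    (y^2 = x^3 + 7 mod P). A filter, not a verdict: random data in a
    compressed-key slot still passes about half the time."""
    if len(pk) == 33 and pk[0] in (2, 3):      # compressed encoding
        x = int.from_bytes(pk[1:], "big")
        if x >= P:
            return False
        y2 = (pow(x, 3, P) + 7) % P
        y = pow(y2, (P + 1) // 4, P)           # candidate square root
        return (y * y) % P == y2               # on-curve iff y2 is a square
    if len(pk) == 65 and pk[0] == 4:           # uncompressed encoding
        x = int.from_bytes(pk[1:33], "big")
        y = int.from_bytes(pk[33:], "big")
        return x < P and y < P and (y * y - pow(x, 3, P) - 7) % P == 0
    return False
```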
Similar techniques can probably be applied to prune most of the 330-sat outputs associated with ordinals, which, once they pass a certain age, seem unlikely to ever be consolidated. Even if many of them *do* get consolidated someday, that wouldn't "stop" you from validating new blocks; it would just slow you down during the consolidation period, and the code could even be updated afterwards to exempt those outputs so the slowdown doesn't affect future users doing IBD.
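Putting the heuristics together, a pruning policy along these lines might look like the sketch below, reusing `is_valid_pubkey` from above. Everything here is hypothetical -- the `Utxo` fields, the age threshold, and the 330-sat cutoff are illustrative stand-ins for whatever tuning real measurement would suggest:

```python
from dataclasses import dataclass

@dataclass
class Utxo:
    outpoint: str      # "txid:vout"
    height: int        # block height at which it was created
    value_sats: int
    kind: str          # e.g. "p2pk", "bare_multisig", "p2wpkh", ...
    pubkey: bytes      # only meaningful for pubkey-bearing kinds

def should_prune(utxo: Utxo, tip_height: int, whitelist: set,
                 age_threshold: int = 210_000) -> bool:
    """Hypothetical policy from the discussion above: drop old outputs
    that can almost certainly never be spent, and never touch anything
    on an explicit whitelist of known exceptions."""
    if utxo.outpoint in whitelist:
        return False                       # "don't prune these"
    if tip_height - utxo.height < age_threshold:
        return False                       # too young to judge
    # fake-pubkey spam: the alleged key is not even on the curve
    if utxo.kind in ("p2pk", "bare_multisig") \
            and not is_valid_pubkey(utxo.pubkey):
        return True
    # dust-sized ordinal-style outputs that have sat untouched for years
    if utxo.value_sats <= 330:
        return True
    return False
```

And remember that a wrong guess here is not fatal under this scheme: a pruned utxo that does get spent later just triggers the electrum-server proof path above, costing latency rather than correctness.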