the thing that i think will make the halving schedule stick indefinitely is its simplicity. you can see from the consensus function that sets the block reward that it's extremely simple - even a non-programmer would have a decent chance of understanding it - and that also means that when new clients are built for the protocol, errors in consensus are unlikely. integer math that approximates transcendental functions is easy to get wrong; a simple bit shift is not.

at the hardware level, dividing by a power of two reduces to a bit shift - the case is trivial to recognize, since only one bit of the divisor is set - so whether the code says x >> 1 or x / 2, no iterative division ever has to run. general division is the most time-expensive basic arithmetic operation a processor does, because there are very few shortcuts around its iterative nature. i once created a proof of work that was optimized for CPUs by using division on very large numbers (kilobyte-sized numbers). there would be no point building an ASIC for such a proof, because the circuit that already has this operation most heavily optimized is the arithmetic divider in a CPU - the divider takes up a surprisingly large share of a core's arithmetic circuitry, which gives you a sense of how expensive the operation is.

hash functions use little if any division; mostly they mix a lot of multiplication, addition and bit rotation so that bits overflow and are discarded, and those lost bits are what create the "trap door" that makes them easy to compute but very difficult, if not impossible, to reverse. asymmetric cryptography (signatures) exploits a similar one-way property, and supposedly this is the big deal about quantum computers: they can run an algorithm that recovers the secret key a public key was generated from in far fewer steps than brute force, by attacking the discrete log problem directly.

which brings me back to the power-law / exponential-decay pattern for smoothing out the block reward. it is a complex and expensive computation, involving a lot of division. complex algorithms are more likely to hide bugs that only manifest in rare cases - specific values or factors that trigger a problem and then lead to chain splits. even using fixed-point numbers, the operation could be badly coded and mostly work, until it doesn't. so, yeah, i agree: the halving cycle is probably better, because it achieves a form of exponential decay (based on powers of 2) with an algorithm that is practically impossible to get wrong.
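
to make the comparison concrete, here is a minimal python sketch of the halving logic - it mirrors the consensus function in bitcoin core (GetBlockSubsidy), though the real thing is written in C++ and pulls its constants from the chain parameters:

```python
COIN = 100_000_000            # satoshis per bitcoin
HALVING_INTERVAL = 210_000    # blocks between halvings, roughly four years

def block_subsidy(height: int) -> int:
    """block reward in satoshis for a block at the given height."""
    halvings = height // HALVING_INTERVAL
    # after 64 halvings a right shift would be undefined in C++; the reward has
    # long since shifted down to zero satoshis well before then anyway
    if halvings >= 64:
        return 0
    # one right shift: divide the initial 50 BTC reward by 2**halvings
    return (50 * COIN) >> halvings
```

and, for contrast, a hypothetical sketch of what a "smoothed" fixed-point decay might look like - none of these names or constants come from any real proposal, the point is just how many more decisions there are that every client has to reproduce bit-for-bit (the scale, the rounding on every block, even how the decay constant itself is derived):

```python
SCALE = 1 << 64  # Q64 fixed point

# per-block decay factor so the subsidy halves every 210,000 blocks. deriving it
# from floating point, as done here for brevity, is itself a consensus hazard:
# a spec would have to pin down the exact integer instead.
FACTOR = int(round(SCALE * 2 ** (-1 / 210_000)))

def smooth_subsidy(height: int, initial: int = 50 * 100_000_000) -> int:
    """hypothetical smoothly decaying subsidy (not any real protocol's rule)."""
    subsidy_q64 = initial << 64
    for _ in range(height):
        # every client must truncate in exactly the same way here, on every
        # block, or the chains split
        subsidy_q64 = (subsidy_q64 * FACTOR) >> 64
    return subsidy_q64 >> 64
```

smooth_subsidy(210_000) lands within a satoshi or so of 25 btc, but exactly which satoshi depends on how the constant and the rounding were specified - which is the whole problem. block_subsidy(210_000) is 25 btc, full stop.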

Replies (4)

the hash function trapdoor concept is bringing this into better focus for me. actually, taking your note, i queried grok about a few of the paragraphs, because the question i couldn't get past was: what sets the random number in the first place, and could whatever sets it hand it to a favored miner? grok's answer: "The trapdoor property ensures that hash functions are secure for proof-of-work systems. In Bitcoin, miners must find a nonce (a random number) that, when hashed with the block’s data, produces a hash below a target value. Because of the trapdoor, there’s no shortcut to finding this nonce—you have to try many inputs (brute force), which is what makes mining computationally intensive."
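
so nothing hands the nonce out - each miner just counts through candidates on its own, and the trapdoor means there is no faster route than trying them. a minimal python sketch of that loop, with the real details simplified (an actual header is an 80-byte serialized structure and the target comes from the nBits field inside it):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header_without_nonce: bytes, target: int):
    """brute-force a 32-bit nonce whose block hash falls at or below the target."""
    for nonce in range(2**32):
        candidate = header_without_nonce + nonce.to_bytes(4, "little")
        if int.from_bytes(double_sha256(candidate), "little") <= target:
            return nonce
    # nonce space exhausted: a real miner changes something else in the block
    # (timestamp, coinbase extranonce) and starts counting again
    return None
```

with a deliberately easy target like 1 << 240 this finds a nonce in a fraction of a second; the real network target is unimaginably smaller, which is where all the electricity goes.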
Right. The only way to get it wrong is to fail to increase the precision over the next hundred years, and even that wouldn't be a tragedy unless the purchasing power of a single sat by then were as much as the purchasing power of several bitcoin today.
But the precision will almost certainly be increased: to allow relatively small virtual UTXOs to be realized as regular UTXOs with reasonable fees and amounts when self-custody exiting a federation and the like, and to allow miners that little bit of extra profit.