Here is a little article about what was broken with NIP-03 and how I fixed it.
Unfortunately the Blossom server gods seem to hate me, so the beefy Android APK and zapstore release will have to wait. But you can always build it yourself from the repository. Links to the repository and the two NIPs are in the article. Devs, please have a look, especially @utxo the webmaster 🧑‍💻, because I abused wisp to make the client :). And @Vitor Pamplona, because I think Amethyst is the only client that uses NIP-03 anyway.
I'm going for a walk
FYI, Inkan makes extensive use of OTS proofs. They are a central element of a key rotation system. Inkan timestamps a concatenation of event_id + signature.
I do find it useful to publish the OTS info through Nostr events. If the OTS info is signed by a known pubkey (possibly by the same pubkey that signed the reference event or one's own pubkey), this can send trust signals which make it less necessary to independently audit the OTS info, absent anything suspicious.
I also find it useful to include info about the reference event in the OTS event, in particular the reference event's pubkey, signature and created_at. Moreover, the lowest BTC block in which the proof is anchored and its timestamp should be included.
I also think that OTS events should be addressable events -- each signing pubkey should get only one opinion regarding the lowest BTC timestamp that is available for a given reference event.
I've included a sample OTS event from Inkan to illustrate the shape that I've found most useful in practice:
{
"kind": 31045,
"id": "9b5e4b154cb30ece6edc94d87c14dad8dec47cacc958227d5b21d36abfa9f6af",
"pubkey": "d1a61498f4b1dc26df0becd1a3e9e88f355fb614b771e8b5b277d0ff99a82a23",
"created_at": 1777422076,
"tags": [
["d", "6cf8a174c3e1bc6cb4eda11f5b93a04413776d044c5aed006fef080f336fdfb6"],
["p", "d1a61498f4b1dc26df0becd1a3e9e88f355fb614b771e8b5b277d0ff99a82a23"],
["s", "429eb4f965e5c2d14bbdb54370e0be63c2388e29d4077d3286349ae7fc7302b61a5e506a05834acb50053908e20ed6a36ae8c09f4fd3b0cef4d9f0a6cec308cf"],
["b", "947055"],
["t", "1777400959"],
["alt", "Complete Bitcoin Timestamp"],
["c", "1777391199"]
],
"content": "AE9wZW5UaW1lc3RhbXBzAABQcm9vZgC/ieLohOiSlAEIjqT9pJouNM2LV5xdUIHPi9Zu6ah3DlaMq9GqPd2+/FPwEBzbLgerPtDTFWt/rrsP8fsI8AjH08Luv0xfnQjwIOmb09izZmrXfwDHXH6vLsaVvma8oeREeiQOLbJImkdpCPEgACuUPZl913m7H4qWT/2YhxOVlbka0KiazVt7kzXV2akI8BCF7U/JXsqU0O9VE9DPXB7HCPEg07lLgYAMs6+D2+ORmzf9/d/71YMLY+uBFGVMy+eThgcI8QRp8NZv8AiuncE7Mwzcx/8Ag9/jDS75DI4uLWh0dHBzOi8vYWxpY2UuYnRjLmNhbGVuZGFyLm9wZW50aW1lc3RhbXBzLm9yZwjwIIBZ5c9lUtDMmO7pXcINbK0MqK4Js2QHfT4K4rGbuExCCPEgChH41skC/7/FT43MICR+zkrh+fZqyDOtjBGUm5+J3u8I8CBlagb7N54NttzgBMPxChcSw24B53EEvBvOxt9bcCZAvAjxIBOI7rPbgMnzwbX5UJT9S4IOirX19BPU04e0ezzwciOfCPEgXTMEQ0icWDuAylKEW4Ivavdci/YAWIEv0SQ0HFf8s/8I8SAYqkXh8NU36+YUesLH4g+TZvEKML2qGJuGX12mysWbxAjwICUOb2N4RDtZPvVUQc8OzhJK3nfWKDkqpSMqLyNWKNYuCPAg1Ul7hArE1q+Tm+9DMh4ob4eddklRRzS9vbRe3S/m+LoI8CBB7WZXGAgPrgdPZajtwHqp4shCiRNdLuuqltdwlytvWQjxIA6SEUIReSAFC2lA/fG22ZHsshjG9wDl0e/Vep/w1ZOnCPAgIyzpQHNa8zyJuTq+7/rOc4/s5oSpnFAD0IVnN3t/q70I8CA5WvfpGvzFYuUuP2LeJXNQRyfpOo4KEImq6E1dYmTmsAjwIBZmjRHxHAIzBRHOK0rQdxB05gCyphpM5brZAX1jjUKCCPAgeKPdBwpp9OLqYuovrcr0AS0bpnvvbFMGLD65yW0HMQEI8VkBAAAAAeVW0FXcs6j9W7UDf0nGA5i9VbQgUNcNxz9s4oOT1oBzAAAAAAD+////Ato7AAAAAAAAFgAUOcSOFLmE5Y5EAImKTk3+0fLTg74AAAAAAAAAACJqIPAEbnMOAAgI8CC6wj6NCPQhynQQaP7HQ6HVFBJTjOgO668bU/C/DJdSmAgI8SCBtYeuXSbsPZ4/aKAg7H8HP6Oh6jaqWYUNUF7FDeV1nwgI8CCZkFEui2Dkh95uFfC53qJnx70m1m4oWubWoWKQQ3aGoAgI8SCCoAOlqcfh5VCJYa+qgkh+EhMT8vjbwWhgjrkH3B4GYwgI8SBoOHnDQvSjamiihpyFRDSnrQ2bjIQqR3JIPjk/+3RWLwgI8SAMoPz2gaKaiCf5CMaYcnz2YRZUXnW/ZtNs0LUKEUcjoAgI8CCIn0U6v7sfmjXH42/9RTgcx7YCOrjS9KcBA6KzK51eBggI8CCDFzGJB3m8HFbI0gbGHAMP5awZJhaSujZ9ipyHD1ExDAgI8CCgoGzBbt5C6fWHxYgE3VbvywUE/slwVJjaoI0uYBJhQggI8CCtSTgUbBKvsVRpwvs0f7RpLiROko8EnX1eNZiVKVIrJggI8CBNZ2rcpTevhVy4Y5WrBXrmWfDB1QgllPr9J+Mmh/VvUwgI8SAqFoRrrr2B+iAuhkx2PrFC+6la03Z/wFOfYlu1aRCGoggIAAWIlg1z1xkBA+/mOQ==",
"sig": "7a7a57afba9ec2417e07604e41f63e657472e82ffad66e4f632c6ca954b67dba184c5100faf207e7b36f5f61852409d79f8980006f557f208a89fee59a8b117e"
}
Not verifying an OTS proof is, honestly, inexcusable. I've played around with header syncing while building all of this, and it is trivial. The entire header set is less than 80 MB at the moment, and for anything Nostr-related you can throw away anything more than 4 years old anyway. If shipping 80 MB with the app is too much, a header sync from a list of checkpoints also takes only seconds. The only issue is web clients, because they can't really do the p2p thing of connecting to Bitcoin nodes, but there are plenty of alternatives there (ultimately, getting the data is a trivial problem, and verifying it is trivial as well).
Second, within this standard you don't need to look for proofs; they are right there in the event, assuming the relay serves them as such (the policy event lets relays signal this, along with a bunch of other factors, among them the delta between the event's created_at and the blockheight of the OTS proof).
The only reason for publishing separate events for OTS proofs is indexing, which is kind of pointless because it does not matter who publishes these index events, or at what time, etc. Better to just have a server you can ask whether it happens to have a proof for the target digest. I've built a purpose-built server for this and it's super efficient.
I've thought about index events; they could still be done, ideally using Blossom to store the proofs. Those index events could then simply be re-optimized by relays or whoever, combining multiple such index events into one. It's just that I think what I have here currently should be enough.
I hope you take a good look at this. If you have any questions, I'd love to answer them.
Thanks, I'll definitely have to take a look at this. I've been using these 31045 events with the thought that these might eventually get replaced by direct verification, but I have no idea how to go about verifying these proofs in real-time.
I basically need every regular event (say kind 1, kind 6, kind 7, etc.) that comes into the web client to have a valid OTS proof, meaning that the OTS proof must have a BTC timestamp that falls within some user-set recency window after the created_at of the reference event (e.g. 4 hours). Any event that doesn't have such a valid OTS proof must get filtered out and not displayed, as if it had never existed at all.
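The recency-window filter described above can be sketched as follows. This is a hypothetical illustration, not any client's actual API; the field name `ots_btc_time` (the timestamp of the BTC block that anchors the proof) and the helper function are assumptions.

```python
# Sketch of the recency-window filter: an event is shown only if its OTS
# proof's BTC block timestamp falls within `window` seconds after the
# event's created_at. Field names here are hypothetical.
WINDOW_SECONDS = 4 * 3600  # user-set recency window, e.g. 4 hours

def has_valid_recency(created_at: int, ots_btc_time: int,
                      window: int = WINDOW_SECONDS) -> bool:
    return created_at <= ots_btc_time <= created_at + window

events = [
    {"id": "a", "created_at": 1_700_000_000, "ots_btc_time": 1_700_005_000},
    {"id": "b", "created_at": 1_700_000_000, "ots_btc_time": 1_700_100_000},
    {"id": "c", "created_at": 1_700_000_000, "ots_btc_time": None},  # no proof
]

# Events without a proof, or whose proof lands outside the window,
# are dropped as if they never existed.
visible = [e for e in events
           if e["ots_btc_time"] is not None
           and has_valid_recency(e["created_at"], e["ots_btc_time"])]
# only event "a" survives the 4-hour window
```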
I can't just throw away anything more than 4 years old. This needs to reach back into the past indefinitely.
If there is anything I can use for this kind of OTS-based real-time filtering, I'd be very interested to hear about it.
No, I mean that in terms of Nostr, the Bitcoin header chain before 2022 is irrelevant, so you could throw away 2009 through 2022 of the header chain, reducing that 80 MB even further.
What is the problem with running verification? It is so quick and cheap to do. Even the browser versions I made shred through things.
Maybe there is no problem and I just don't know how to do it as I don't know anything about blockchains or how to work with them. But I may be beginning to see what you are talking about. It seems to require having a copy of something called the "headerchain" available somewhere, sort of locally, right?
ah, ok that explains a lot haha.
If you want to understand this stuff, you will have to learn how merkle trees and merkle proofs work (this is how OTS and OTS proofs work), and you have to understand how Nakamoto Consensus (the key innovation behind Bitcoin) works.
You don't really need to know a lot about Bitcoin, just the blockchain/linked-list part and the proof-of-work/difficulty-adjustment part. I'll give a brief explanation, and then you can look up terms if anything is unclear.
A Bitcoin block has two parts: the blockheader and the transactions.
The blockheader is the meta-data of the block, and it contains:
- The time at which the block was supposedly created (this is provided by the miner that created the block, so it is not really trustworthy, but the network has rules/margins, based on averages, within which a miner is able to screw around with that timestamp);
- The difficulty target (how much PoW the hash of this particular blockheader has to have in order to be valid);
- A nonce: a variable data field that lets miners keep trying different hashes of the header until they find one that meets the difficulty target;
- A merkle root of all the transaction data;
- The previous blockhash (which is the hash of the previous blockheader);
- A version number.
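The field layout above can be sketched in code. This is a minimal illustration of the standard 80-byte Bitcoin header wire format (fields are little-endian; the blockhash is the double SHA-256 of the raw header); the sample header is synthetic, not a real mainnet block.

```python
import hashlib
import struct

def parse_header(raw: bytes) -> dict:
    """Split an 80-byte blockheader into its six fields."""
    assert len(raw) == 80
    version, = struct.unpack_from("<I", raw, 0)
    prev_hash = raw[4:36][::-1].hex()      # stored little-endian on the wire
    merkle_root = raw[36:68][::-1].hex()
    time_, bits, nonce = struct.unpack_from("<III", raw, 68)
    return {"version": version, "prev_hash": prev_hash,
            "merkle_root": merkle_root, "time": time_,
            "bits": bits, "nonce": nonce}

def header_hash(raw: bytes) -> str:
    """The blockhash is the double SHA-256 of the 80 raw header bytes."""
    digest = hashlib.sha256(hashlib.sha256(raw).digest()).digest()
    return digest[::-1].hex()              # displayed big-endian by convention

# Synthetic example: version 2, zeroed prev-hash and merkle root.
raw = (struct.pack("<I", 2) + b"\x00" * 64
       + struct.pack("<III", 1777400959, 0x17034219, 42))
fields = parse_header(raw)
```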
You can link all these blockheaders into a header chain, because it is a linked list (each header references the previous one). The difficulty is adjusted over time based on how fast the chain was created: too fast = higher difficulty, too slow = lower difficulty, targeting 10 minutes per block.
In order to validate this header chain there are three parts, but for the sake of OTS you can arguably get away with only two of them.
First is the linked list and its hashes: does each header indeed reference the previous one? Say we have 2 headers: the second header references the hash of the first, so we hash the first header and check that the result matches what is stated in the second.
Ok great, that's the linked list.
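The linked-list check described above amounts to a short loop. This is a sketch over synthetic 80-byte blobs, not real headers; it only checks the linkage, not the difficulty.

```python
import hashlib

def dbl_sha256(raw: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(raw).digest()).digest()

def check_linkage(headers: list[bytes]) -> bool:
    """Each header's prev-hash field (bytes 4..36) must equal the
    double SHA-256 of the previous raw header."""
    for prev, cur in zip(headers, headers[1:]):
        if cur[4:36] != dbl_sha256(prev):
            return False
    return True

# Build a tiny valid chain of three synthetic 80-byte headers:
# 4 bytes version + 32 bytes prev-hash + 44 bytes of remaining fields.
h0 = bytes(80)
h1 = b"\x00" * 4 + dbl_sha256(h0) + bytes(44)
h2 = b"\x00" * 4 + dbl_sha256(h1) + bytes(44)
assert check_linkage([h0, h1, h2])
assert not check_linkage([h0, h2, h1])  # wrong order breaks the links
```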
The second thing you have to account for is the difficulty (the short way to describe this is the number of leading 0's in the hash). First, each header has a target, and the hash of that header has to meet that target (i.e. have at least the required number of leading 0's); this is a simple check. Second, the difficulty is adjusted every 2016 blocks/headers: 2016 blocks × 10 minutes per block = 2 weeks. Using a formula that looks at the timestamps of that 2016-block period, the network determines what the next difficulty should be.
This means that in order to validate the header chain, you need to do it at least in these chunks of 2016 headers (otherwise you can't verify whether the difficulty adjustment happened correctly).
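The per-header target check is straightforward to sketch. The header carries the target in the compact 4-byte "bits" encoding (1 exponent byte, 3 mantissa bytes); a header is valid only if its hash, read as a 256-bit integer, is at or below the expanded target. This sketch covers that check only, not the 2016-block retargeting formula.

```python
def bits_to_target(bits: int) -> int:
    """Expand the compact 'bits' field into the full 256-bit target."""
    exponent = bits >> 24
    mantissa = bits & 0x00FFFFFF
    return mantissa * (1 << (8 * (exponent - 3)))

def meets_target(header_hash_hex: str, bits: int) -> bool:
    """The big-endian hash must not exceed the target; a low target is
    what forces all those leading zeros."""
    return int(header_hash_hex, 16) <= bits_to_target(bits)

# The genesis-era value 0x1d00ffff expands to 0xffff shifted up 208 bits.
assert bits_to_target(0x1D00FFFF) == 0xFFFF << 208
assert meets_target("00" * 32, 0x1D00FFFF)      # all-zero hash trivially passes
assert not meets_target("ff" * 32, 0x1D00FFFF)  # no leading zeros: fails
```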
The third part, which you don't really have to check, is whether the transactions inside the block are valid. "Technically" you should, but on the other hand it is kind of silly to verify something that a giant network of miners burning gigawatts of electricity has already decided was correct, by virtue of including it in the blockchain they are working on. If you use Bitcoin as a currency you do need to look at this stuff, because you need to replay its history to figure out its current state; but we are not interested in any of that. We are only interested in one particular transaction inside some block, and we only care that it is there. This is the transaction the OTS calendar made that commits to the merkle root of all the things it is timestamping. Because the calendar packs everything it is proving into a merkle tree and puts that root in the blockchain, and because the block itself puts all its transactions in a merkle tree and puts that root inside the header, you are able to build a merkle proof that goes all the way from the hash of your document (or in our case, the hash of eventID+signature) to the transaction merkle root inside the blockheader.
So, to verify an OTS timestamp/proof, we need the proof, and the associated blockheader (the proof tells us which blockheight we should be looking at). And to verify that blockheader, we should have verified the header chain (which is done in these 2016 block chunks).
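The merkle-proof step can be sketched as a fold from the leaf up to the root. This is a generic illustration of the technique (Bitcoin hashes tree nodes with double SHA-256), not the actual OTS proof serialization, which also interleaves append/prepend and SHA-256 operations; the two-leaf tree below is synthetic.

```python
import hashlib

def dbl_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def climb(leaf: bytes, path: list[tuple[bytes, str]]) -> bytes:
    """Fold the leaf up the tree: at each step, concatenate with the
    sibling on the stated side and re-hash. The result should equal
    the known root (e.g. the tx merkle root in a verified header)."""
    node = leaf
    for sibling, side in path:
        pair = sibling + node if side == "left" else node + sibling
        node = dbl_sha256(pair)
    return node

# Tiny two-leaf tree: root = H(H(a) || H(b)).
a, b = dbl_sha256(b"txid-a"), dbl_sha256(b"txid-b")
root = dbl_sha256(a + b)
assert climb(a, [(b, "right")]) == root  # prove leaf a, sibling b on its right
assert climb(b, [(a, "left")]) == root   # prove leaf b, sibling a on its left
```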
Each blockheader is 80 bytes... the entire header chain up until this point is not even 80 MB. You can just connect to the Bitcoin p2p network to fetch that data (all Bitcoin nodes have it). You can take some shortcuts to speed up the download (the biggest being just providing all the header data directly), because verifying the header chain is really fast anyway.
So if you have this tiny database of blockheaders verified and ready, you already know the time of each blockheight (because that data is in the header), and verifying a proof is extremely quick and easy; you have everything you need to know locally.
I've included a nice illustration of the proof in the app to make things clear:
As you can see, the thing we are proving is at the bottom, the blockheader hash with all those leading 0's is at the top, and in between is the OTS proof that connects our thing to the blockheader.
So here is the point:
Don't trust, verify
Hope this helps; if you have any questions, let me know.
Oh, this OTS database I made has a nice animated version :P (so does the Android app, but you have to scroll down to see it and by that time the animation is already done)
Thanks. I'll need a bit to digest, but it looks like this will be super helpful.
The ability of clients to quickly and efficiently filter out events that don't have a valid OTS timestamp (where "valid" means an OTS timestamp that falls within a specified recency window after the event's created_at) should be a fundamental part of the Nostr protocol. It's about as important as the ability to filter out events that don't have a valid signature in the first place.
I'll likely have questions after I've had a chance to think about your explanation ...
Yeah, clients can run this type of verification really easily. To optimize things (i.e. not having to fetch a bunch of stuff that might not be relevant, or doesn't have proofs, etc.), my proposal is to leverage relays.
Those relays can publish their policy on all the metrics you can imagine with respect to OTS proofs; so if you pick the right relays, they have done a lot of the pre-filtering for you.
Yes, it would be great if relays could pre-filter. As I mentioned, I think this stuff is really important infrastructure. I just wrote up an explanation of my experiments with key rotation, which includes a section on the role of OTS timestamps --- see below.