489 B * 29_000 quotes
489 B * 422_000 likes
2_000 B * 9_000 comments
that's about 240MB
if we move to binary encoding (assuming the kind 1111 size stays the same, since it was a guess anyway)
we drop almost 100MB, down to ~150MB
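A quick sketch of the arithmetic above. The ~290-byte binary event size is my assumption, picked so the savings come out to "almost 100MB" as stated; the per-event sizes and counts are from the thread.

```python
EVENT_JSON = 489        # bytes per like/quote event as JSON (from the thread)
LIKES = 422_000
QUOTES = 29_000
COMMENT = 2_000         # bytes per kind-1111 comment (a guess, per the thread)
COMMENTS = 9_000

total = EVENT_JSON * (LIKES + QUOTES) + COMMENT * COMMENTS
print(f"JSON total: {total / 1e6:.0f} MB")        # ≈ 239 MB

BINARY_EVENT = 290      # ASSUMED binary-encoded event size
binary_total = BINARY_EVENT * (LIKES + QUOTES) + COMMENT * COMMENTS
print(f"binary total: {binary_total / 1e6:.0f} MB")  # ≈ 149 MB
```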
but, for simple like counts like what twitter's UI shows, you can run a COUNT query, which returns about 1 KB :) run one for likes and one for quotes (2 KB total), and then the 9k comments fit neatly in 18MB
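For illustration, here's roughly what those COUNT frames look like on the wire, following Nostr's NIP-45 shape (`["COUNT", <sub_id>, <filter>]` with a `{"count": n}` reply). The event id is a placeholder, and the kind/tag choice for quotes is my assumption, not something the thread specifies.

```python
import json

# placeholder 64-hex event id, not a real note
EVENT_ID = "5c83da77af1dec6d7289834998ad7aafbd9e2191396d75ec3cc27f5a772200aa"

# one COUNT request per aggregate: a tiny frame instead of ~450k events
likes_req = json.dumps(["COUNT", "likes", {"kinds": [7], "#e": [EVENT_ID]}])
# how quotes are filtered is an assumption; adjust to your client's model
quotes_req = json.dumps(["COUNT", "quotes", {"kinds": [1], "#q": [EVENT_ID]}])

# the relay's answer is equally small, e.g.:
likes_resp = json.dumps(["COUNT", "likes", {"count": 422_000}])

print(len(likes_req), len(quotes_req), len(likes_resp))  # all well under 1 KB
```

So the two requests plus their replies stay in the low hundreds of bytes, which is where the "like 1 kb" figure comes from.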
Replies (2)
Yeah, we really just need to query a relay for counts. And relays should generally implement caching, which works to very good effect and lowers bandwidth when going through an aggregator relay.
Yes, the count query seems like the obvious fix here.
I just haven't seen it implemented in any client yet, and I have no idea whether any relay even supports it.
A count query has its limitations too: you have to blindly trust the relay, and you have to pick a single one, because you can't deduplicate two different count responses.
And obviously you can't verify event signatures either.
But I agree the count query will become mandatory at some point.
It just seems like this kind of query isn't a relay job but more a @Vertex one, just like follow counts.