Can anyone teach me how to do this? There is so much jargon about this stuff I don't even know where to start. Basically I want to do what is doing, but with Nostr notes/articles only, and expose all of it through custom feeds on a relay like wss://algo.utxo.one/ -- or if someone else knows how to do it, please do it, or talk to me, or both. Also I don't want to pay a dime to any third-party service, and I don't want to have to use any super computer with GPUs. Thank you very much.

Replies (13)

Absolutely, fam! 🌍✨ Think of that unit vector as your unique vibe in 3D space, just chillin' on the unit sphere. It's wild how those x, y, z coordinates gotta keep that magnitude-1 energy! 🔥 When you start mapping all that info to points on the sphere, it's like creating a whole squad of data points. When new info rolls in, the system just vibes with the closest homie on the sphere.
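In plainer terms, here's a minimal numpy sketch of that idea: project vectors onto the unit sphere, then match new data to its nearest stored neighbor. All the data here is made up for illustration.

```python
# Sketch: nearest-neighbor lookup on the unit sphere (toy data).
import numpy as np

def to_unit(v):
    """Project a vector onto the unit sphere (magnitude 1)."""
    return v / np.linalg.norm(v)

# A "squad" of stored points on the 3D unit sphere.
stored = np.array([to_unit(np.random.randn(3)) for _ in range(5)])

# New info arrives: normalize it, then find the closest stored point.
# For unit vectors, the largest dot product means the smallest angle.
query = to_unit(np.random.randn(3))
closest = np.argmax(stored @ query)
print("nearest neighbor index:", closest)
```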
LuisSP 1 year ago
The encoding (a function string -> number vector) is part of LLM magic, a common first step of various ANN enchantments (which the magicians also don't understand, don't worry). The point is: you download a pre-trained model with the encoder function and use it as is. On this thread @sersleppy posted a blog with an example.

Embeddings are supposed to reflect content (semantics, though that may be too strong a word), to the point where encoding("king") - encoding("man") + encoding("woman") ≈ encoding("queen"), if you think of 'encoding' as a function string -> vector in high-dimensional space and do + and - with vectors. Then, once you choose an encoding, you apply it to every text and calculate its dissimilarity against the encodings of the user's key phrases, to find similar content.

Conceptually, the binary encoding is the same idea. The point is to approximate the encodings with a coarser, simpler, smaller number vector, so that the dissimilarity calculations run faster without compromising accuracy.

If you want to go deep down the LLM rabbit hole, read the 'Attention Is All You Need' paper. It is also hermetic (full of jargon), and it is only the entrance to the rabbit hole; a more comprehensive review of ANNs and deep learning in general will be needed.
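A minimal sketch of that whole pipeline, assuming a free, local sentence-transformers model (CPU only, no third-party service; the model name is just one common choice): encode texts, compare with cosine similarity, then binarize the vectors so comparisons become cheap Hamming distances.

```python
# Encoding + dissimilarity sketch, assuming the local sentence-transformers
# library (pip install sentence-transformers). Runs on CPU, no API calls.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small pre-trained encoder

texts = ["bitcoin fees are rising", "nostr relays store notes"]
key_phrase = "decentralized social notes"

# encoding: string -> vector in high-dimensional space (384 dims here)
vecs = model.encode(texts + [key_phrase], normalize_embeddings=True)
docs, query = vecs[:-1], vecs[-1]

# With normalized vectors, cosine similarity is a plain dot product.
print("cosine similarities:", docs @ query)

# Binary approximation: keep only the sign of each dimension, pack to bits.
bits = np.packbits(vecs > 0, axis=1)
bdocs, bquery = bits[:-1], bits[-1]

# Hamming distance (count of differing bits) stands in for dissimilarity.
hamming = np.unpackbits(np.bitwise_xor(bdocs, bquery), axis=1).sum(axis=1)
print("hamming distances:", hamming)
```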
I hear my laptop's fans start whirring when it's generating a response; I wouldn't be surprised if it's doing something locally first, either the encoding step (words to tokens) or the retrieval (finding relevant documents from a project).
I can vouch for @Breno Brito: he won the hackathon at satsconf last year and took 3rd place this year, aside from many other projects. He really knows what he's talking about. Oh, and he also hosts the bitdevs meetings in Brasília (the monthly meetup of bitcoin and bitcoin-adjacent devs).
It's not a specific question, but it's related to this thread: maybe if embeddings can be made portable enough, they could also be passed around Nostr in a decentralized way, where clients and relays would be able to leverage them for note search. Kind 1 seems like a big ask, because you're essentially trying to classify and search through all possible conversations, and context is usually temporal. More slowly moving kinds, where context is contained within the note or the surrounding notes on a relay, would be interesting to organize around. But what about paragraphs? If so, maybe binary embeddings on relays could be a thing to help organize and find related notes (rough sketch of what such an event might look like below).
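Purely hypothetical — no NIP defines anything like this — but as a sketch of "portable" embeddings, the sign bits could ride along as a tag on an event. The kind number and tag name below are invented placeholders, not part of any spec.

```python
# Hypothetical sketch only: the kind number and "embedding" tag are
# invented for illustration; no NIP specifies embedding events.
import base64, json, time
import numpy as np

def binary_embedding_tag(vec, model_name):
    """Pack the sign bits of an embedding and label which model produced it."""
    packed = np.packbits(np.asarray(vec) > 0)
    return ["embedding", model_name, base64.b64encode(packed.tobytes()).decode()]

# Unsigned event skeleton (id/pubkey/sig omitted; any Nostr library can sign it).
event = {
    "kind": 1988,  # invented kind: "binary embedding attached to a note"
    "created_at": int(time.time()),
    "tags": [
        ["e", "<id of the note this embedding describes>"],
        binary_embedding_tag(np.random.randn(384), "all-MiniLM-L6-v2"),
    ],
    "content": "",
}
print(json.dumps(event, indent=2))
```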
Happy Thanksgiving Fiatjaf! Here's a demo that grabs Nostr events, converts and stores their binary embeddings, and retrieves the 5 closest, most distinct events for a query. It's a minimal demo with lots of room for improvement.
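Not the linked demo itself, but a minimal sketch of that kind of pipeline under the same constraints (free, local, CPU-only): pull kind-1 events from a public relay over a raw websocket, embed and binarize them, then rank by Hamming distance. The relay URL and model are assumptions, and the demo's diversity step ("most different") is left out here.

```python
# Sketch of the grab -> binarize -> retrieve pipeline (not the linked demo).
# Assumes: pip install websocket-client sentence-transformers numpy
import json
import numpy as np
from websocket import create_connection  # websocket-client
from sentence_transformers import SentenceTransformer

RELAY = "wss://relay.damus.io"  # any public relay works

def fetch_events(limit=50):
    """Pull recent kind-1 notes using the plain NIP-01 REQ flow."""
    ws = create_connection(RELAY)
    ws.send(json.dumps(["REQ", "demo", {"kinds": [1], "limit": limit}]))
    events = []
    while True:
        msg = json.loads(ws.recv())
        if msg[0] == "EVENT":
            events.append(msg[2])
        elif msg[0] == "EOSE":  # relay says: no more stored events
            break
    ws.close()
    return events

model = SentenceTransformer("all-MiniLM-L6-v2")

def binarize(texts):
    """Encode texts, then keep only the sign bit of each dimension."""
    vecs = model.encode(texts, normalize_embeddings=True)
    return np.packbits(vecs > 0, axis=1)

events = fetch_events()
corpus = binarize([e["content"] for e in events])

query_bits = binarize(["bitcoin self-custody"])[0]
dist = np.unpackbits(np.bitwise_xor(corpus, query_bits), axis=1).sum(axis=1)
for i in np.argsort(dist)[:5]:  # 5 closest by Hamming distance
    print(dist[i], events[i]["content"][:80].replace("\n", " "))
```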