yes, of course. would i be talking about tackling this if i wasn't confident i can do it? https://git.mleku.dev/mleku/algebraic-decomposition/src/branch/dev/ALGEBRAIC_DECOMPOSITION.md
i understand the general concepts well enough that i think i can actually build according to the steps laid out in the later part of that document. i didn't refer to any specs or papers when i designed the vertex tables that orly now uses to accelerate some of the nip-01 queries that touch graph relations; i independently invented it. i'm not even sure if anyone else has done it that way, but i think so, because claude seemed to indicate that this technique of bidirectional references in tables is used for graph traversal without the cost of table intersection (something adjacency-related, maybe an adjacency list). see, i don't remember exact nomenclature, i only remember concepts, and usually visually.
also, i'm pretty sure that this deterministic, algebraic knowledge graph stuff will be implemented precisely with these kinds of vertex tables.
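roughly what i mean by bidirectional references, as a python sketch (the names and structure here are my own toy illustration, not orly's actual schema):

```python
# toy "vertex table" with bidirectional references: every edge is written
# twice, once keyed by its source and once by its target, so traversal in
# either direction is a single lookup, with no intersection/join of tables.
from collections import defaultdict

class VertexTable:
    def __init__(self):
        self.out_edges = defaultdict(set)  # vertex -> vertices it points at
        self.in_edges = defaultdict(set)   # vertex -> vertices pointing at it

    def add_edge(self, src, dst):
        # one logical edge, two physical entries
        self.out_edges[src].add(dst)
        self.in_edges[dst].add(src)

    def follows(self, v):
        # forward traversal: a single lookup
        return self.out_edges[v]

    def followers(self, v):
        # reverse traversal: also a single lookup, no join needed
        return self.in_edges[v]

g = VertexTable()
g.add_edge("alice", "bob")
g.add_edge("carol", "bob")
print(sorted(g.followers("bob")))  # prints ['alice', 'carol']
```

the storage cost is doubled, but a reverse query never has to scan or intersect the forward table, which is the whole point.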
I think I’m the same way as you, Mleku - often I’ll have concepts that are well defined in my head but I don’t have the nomenclature for them. This can facilitate creative thinking but can inhibit communication with other people, for obvious reasons.
A relevant example: only recently did the nomenclature crystallize in my mind for the distinction between connectionist AI (LLMs, neural networks, machine learning, etc) versus symbolic AI, aka GOFAI (good old fashioned AI). These two distinct concepts formed in my head as an undergrad in electrical engineering forever ago but didn’t have a name or resurface in my mind until mid or late 2025, when a friend asked me if I had ever heard of “symbolic AI.”
I don’t understand the math of connectionist AI, or the math of what you’re doing, well enough to connect what you and asyncmind are talking about to what I’m doing with the knowledge graph. But some of what y’all are discussing definitely catches my attention. I’m wondering whether continuous versus discrete (quantized) is the organizing principle. Connectionist AI deals with continuous spaces where we use tools like gradient descent to arrive at local minima. GOFAI, symbolic AI, and graphs are discrete. Could it be that the basic rules and features of the knowledge graph (most notably: class threads) are an emergent property of well-trained LLMs? I conjecture the answer is yes, as evidenced by things like the proximity of hypernyms (animal type) and hyponyms (dog, cat) in embedding space.
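The hypernym/hyponym proximity point can be illustrated with cosine similarity. The vectors below are hand-made toy values, not real embeddings, so this only shows the shape of the argument, not evidence for it:

```python
# toy illustration of the proximity conjecture: in a real embedding space,
# hyponyms like "dog" and "cat" should sit closer to their hypernym
# "animal" than to an unrelated word. these vectors are invented by hand.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

emb = {
    "animal": [0.9, 0.8, 0.1],
    "dog":    [0.8, 0.9, 0.2],
    "cat":    [0.85, 0.85, 0.15],
    "truck":  [0.1, 0.2, 0.9],
}

# the hyponym is far closer to its hypernym than to the unrelated word
assert cosine(emb["dog"], emb["animal"]) > cosine(emb["dog"], emb["truck"])
assert cosine(emb["cat"], emb["animal"]) > cosine(emb["cat"], emb["truck"])
```

If the conjecture holds, this geometric closeness is exactly the kind of structure a discrete graph edge (dog → animal) could record losslessly.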
Suppose we want to take an embedding and compress it into a smaller file size. Could it be that a graph is the ideal way to represent the compressed file? If so, can we read straight from the graph without the need to decompress the graph and rebuild the embedding space? If so, then we have to know how to query the graph, which means we have to know the rules that organize and give structure to the graph, and the class threads rule seems like a great contender for the first (maybe the only) such rule.
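A sketch of what “reading straight from the graph” could look like, if class threads are chains of hyponym-to-hypernym edges (both the rule and the data here are my own illustration): an is-a query becomes a walk up the chain, with no embedding-space math at all.

```python
# a "class thread" modeled as a chain of hyponym -> hypernym edges;
# the chain here is invented for illustration.
parent = {"dog": "canine", "canine": "mammal", "mammal": "animal"}

def is_a(node, ancestor):
    # answer the query by walking up the thread: no decompression,
    # no reconstruction of an embedding space.
    while node in parent:
        node = parent[node]
        if node == ancestor:
            return True
    return False

print(is_a("dog", "animal"))  # prints True
```

Even this one rule gives a usable query language; the open question is whether it is the only rule needed, or just the first.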