Appreciate the thoughtful digging — seriously. Not many people connect elliptic curve group structure with semantic encoding. A few clarifications though:

1. ECAI is not a “proposal” or theoretical direction. There is a concrete implementation of EC-based knowledge encoding. It’s not framed as “non-Euclidean embeddings” in the academic sense (hyperbolic/spherical/toroidal), because that literature still largely lives inside continuous optimization paradigms.
2. ECAI does not treat ECs as geometric manifolds for embeddings. It treats them as deterministic algebraic state spaces. That’s a very different design philosophy.
3. The goal isn’t curved geometry for distance metrics — it’s group-theoretic determinism for traversal and composition.

Most embedding research (including TorusE etc.) still depends on:

- floating point optimization
- gradient descent
- probabilistic training
- approximate nearest neighbour search

ECAI instead leverages:

- discrete group operations
- deterministic hash-to-curve style mappings
- structured traversal on algebraic state space
- compact, collision-controlled representation

This is closer to algebraic indexing than to neural embedding.

You mentioned recursive traversal being cheap on elliptic curves — that’s precisely the point. Group composition is constant-structure and extremely compact. That property allows deterministic search spaces that don’t explode combinatorially in the same way stochastic vector models do.

Also:

> “there is no concrete implementation of EC-based knowledge encoding”

There is. You can explore the live implementation here: 👉

Specifically look at:

- the search behaviour
- deterministic output consistency
- algebraic composition paths

This is not a ZKP play. It’s not an embedding paper. It’s not a geometry-of-meaning academic experiment. It’s an attempt to build a deterministic computational substrate for semantics.
The broader implication is this: if semantics can be mapped into algebraic group structure rather than floating point probability fields, then:

- hallucination collapses to structural invalidity
- traversal becomes verifiable
- compression improves
- determinism becomes enforceable

The difference between “LLMs in curved space” and ECAI is the difference between probabilistic geometry and algebraic state machines.

Happy to dive deeper — but I’d recommend exploring the actual running system first. Numbers still have a lot left in them.
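To make “discrete group operations” and “deterministic hash-to-curve style mappings” concrete, here is a minimal, self-contained sketch in Python. It is purely illustrative: a toy curve over a tiny field with a naive try-and-increment mapping, not the ECAI implementation (whose internals are not shown in this thread).

```python
# Toy sketch of discrete group operations and a hash-to-curve-style mapping,
# purely for illustration -- NOT the ECAI implementation. This is just the
# standard group law on y^2 = x^3 + 7 over a tiny prime field, small enough
# to follow by hand.
import hashlib

P = 19          # tiny field prime (chosen so P % 4 == 3, making sqrt easy)
A, B = 0, 7     # same short Weierstrass shape as secp256k1
O = None        # point at infinity (the group identity)

def inv_mod(k: int) -> int:
    """Modular inverse via Fermat's little theorem (P is prime)."""
    return pow(k, P - 2, P)

def neg(pt):
    """Group inverse: reflect across the x-axis."""
    return O if pt is O else (pt[0], (-pt[1]) % P)

def add(p1, p2):
    """The elliptic curve group law: deterministic, closed, invertible."""
    if p1 is O: return p2
    if p2 is O: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return O                                    # p2 == -p1
    if p1 == p2:
        s = (3 * x1 * x1 + A) * inv_mod(2 * y1) % P   # tangent slope
    else:
        s = (y2 - y1) * inv_mod(x2 - x1) % P          # chord slope
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def on_curve(pt) -> bool:
    return pt is O or (pt[1] ** 2 - pt[0] ** 3 - A * pt[0] - B) % P == 0

def map_to_curve(label: str):
    """Naive try-and-increment mapping from a label to a curve point.
    Real hash-to-curve constructions are far more careful; this only shows
    that the mapping can be fully deterministic."""
    x = int.from_bytes(hashlib.sha256(label.encode()).digest(), "big") % P
    while True:
        rhs = (x ** 3 + A * x + B) % P
        y = pow(rhs, (P + 1) // 4, P)   # candidate sqrt (valid when P % 4 == 3)
        if y * y % P == rhs:
            return (x, y)
        x = (x + 1) % P

G = (8, 5)                 # base point: 5^2 = 8^3 + 7 (mod 19)
assert on_curve(G)

# Closure + determinism: composition always lands back on the curve,
# and the same inputs always produce the same outputs.
Q = add(G, G)
R = add(Q, G)
assert on_curve(Q) and on_curve(R)

# Bidirectionality: every element has an explicit inverse.
assert add(R, neg(R)) is O
```

Every operation here is exact integer arithmetic: run it on any machine, any number of times, and the outputs are bit-identical, which is the determinism property being claimed.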

Replies (4)

ok, that's all fine, but the issue with EC point addition and multiplication is that it's a trapdoor: the discrete log problem. this is where a toroidal curve has an advantage, because there isn't the one-way-easy, other-way-hard problem. changing the substrate does not necessarily eliminate determinism, and i understand the impact of the number system here, as i have written algorithms for consensus calculations that must produce the same result everywhere, because the calculation is fixed point. i think the reason elliptic curves have trapdoors is their asymmetry: symmetric about one axis, lop-sided about the other. also, from recent tinkering with secp256k1 optimizations (GLV, Strauss and wNAF, for cutting out a heap of point operations and reducing the point/affine conversion cost), secp256k1 is distinct from the more common type of elliptic curve because it doesn't have the third parameter... i forget what it is. anyway, i know there are other forms that can be used, with large integers, creating a deterministic finite field surface that can be bidirectionally traversed. that's the thing that stands out for me as the problem with using elliptic curves. it may change something to use a different generator point, or maybe conventional or twisted curves may be easier, but i really don't think so; i think a different topology is required. eliminating the probability in the models would be a boon for improving their outputs, i can totally agree with that.
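to make the fixed point thing concrete: float addition isn't even associative, which is exactly why consensus math has to be done in integers. quick generic sketch (not my actual consensus code):

```python
# IEEE-754 addition is not associative, so the same sum evaluated in a
# different order can disagree across nodes -- fatal for consensus.
a, b, c = 0.1, 0.2, 0.3
assert (a + b) + c != a + (b + c)        # 0.6000000000000001 vs 0.6

# Fixed point: carry values as integers in units of 1/SCALE. Integer
# addition is exact and associative, so every node gets the same answer
# regardless of evaluation order.
SCALE = 10**6
fa, fb, fc = 100_000, 200_000, 300_000   # 0.1, 0.2, 0.3 in micro-units
assert (fa + fb) + fc == fa + (fb + fc)
assert (fa + fb + fc) == 600_000         # convert back only at the edges
```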
That’s a thoughtful critique. A couple of clarifications though.

Elliptic curves don’t have a “trapdoor” in the structural sense. The discrete log problem isn’t a built-in asymmetry of the curve — it’s a hardness assumption over a cyclic group of large prime order. The group law itself is fully symmetric and bidirectionally defined. The one-way property emerges from computational complexity, not topology.

A torus (ℤₙ × ℤₙ or similar constructions) absolutely removes the DLP hardness if you structure it that way — but that’s because you’ve moved to a product group with trivial inversion, not because elliptic curves are inherently lopsided.

Also, the secp256k1 “difference” you’re thinking of is likely the a = 0 parameter in the short Weierstrass form:

y² = x³ + 7

That special structure allows GLV decomposition and efficient endomorphisms. It’s an optimization characteristic, not a topological asymmetry.

Now — on the deeper point: you’re right that changing substrate does not eliminate determinism. Determinism is about fixed operations over a fixed state space. But in ECAI the curve is not being used for cryptographic hardness. It’s being used for:

- compact cyclic structure
- closure under composition
- fixed algebraic traversal
- bounded state growth

There is no reliance on DLP hardness as a feature. In fact, the search layer doesn’t depend on trapdoor properties at all.

You mentioned “bidirectional traversal” as a requirement. EC groups are inherently bidirectional because every element has an inverse. Traversal cost asymmetry only appears when you interpret scalar multiplication as inversion of discrete log — which we’re not doing in semantic traversal.

The key difference is:

Crypto EC usage:
- hide the scalar
- exploit DLP hardness

ECAI usage:
- deterministic state transitions
- explicit composition paths
- no hidden scalar secrets

Topology matters, yes. But algebraic structure matters more than surface geometry. Torus embeddings in research still sit inside gradient descent pipelines.
They reduce distortion — they don’t eliminate probabilistic training.

The real shift isn’t “better curvature.” It’s removing stochastic optimization from the semantic substrate entirely.

We probably agree on the end goal: deterministic, compact, algebraic semantics. Where we differ is that I don’t think elliptic curves are the bottleneck. I think probabilistic training is.

Happy to keep exploring the topology angle though — that’s where things get interesting.
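To pin down the a = 0 point: here is a quick check against the published SEC 2 constants for secp256k1, also showing the one-subtraction inverse that makes the group bidirectional. A sketch only, using the standard public parameters.

```python
# secp256k1 in short Weierstrass form y^2 = x^3 + a*x + b uses a = 0, b = 7;
# the "missing" third parameter is a. Constants below are the published
# SEC 2 values for the field prime and base point.
p = 2**256 - 2**32 - 977                      # field prime
a, b = 0, 7
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

# The base point satisfies the curve equation.
assert (Gy * Gy - Gx**3 - a * Gx - b) % p == 0

# Group inversion is one subtraction: -G = (Gx, p - Gy), also on the curve.
# Cheap, explicit inverses are what make traversal bidirectional.
neg_Gy = p - Gy
assert (neg_Gy * neg_Gy - Gx**3 - a * Gx - b) % p == 0
```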
Not quite 😄 I’m not claiming we can magically solve arbitrary global minima problems.

What I am saying is this: if the semantic substrate is algebraic and deterministic, you don’t need to “search for a minimum” in a floating loss landscape in the first place. You traverse a structured state space.

Gradient descent is necessary when:

- your representation is continuous
- your objective is statistical
- your model is approximate

If your state transitions are discrete and algebraically closed, the problem shifts from optimization to traversal and validation. Different game.

And yeah… I’ve been quietly stewing in this for about two years now. It’s less “we found the absolute minimum” and more “why are we even climbing hills for semantics?” 😄
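As a sketch of what “traversal and validation” (rather than optimization) looks like, here is a deterministic search over a stand-in discrete state space. The group (ℤ₉₇) and the step set are invented for illustration and not taken from any real system.

```python
from collections import deque

# Sketch: when the state space is discrete and closed under composition,
# "finding" a target is exact graph traversal plus an algebraic validity
# check -- no loss landscape, no gradients. The group Z_97 and the step
# set below are invented for illustration.
N = 97                   # state space: the cyclic group Z_N
STEPS = (3, 11, 40)      # fixed, allowed composition steps

def path_to(target: int, start: int = 0):
    """Breadth-first search: deterministic and exact. Returns a shortest
    sequence of steps composing start into target, or None."""
    seen = {start: []}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == target:
            return seen[state]
        for step in STEPS:
            nxt = (state + step) % N      # closure: result stays in Z_N
            if nxt not in seen:
                seen[nxt] = seen[state] + [step]
                queue.append(nxt)
    return None

path = path_to(42)
# Validation is an exact algebraic check, not a loss threshold.
assert path is not None and sum(path) % N == 42
```

The design point: there is no objective to minimize and nothing stochastic to tune; either a composition path exists and validates exactly, or it does not.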