also, i really like the idea of deterministic LLMs. i think the realistic scope of compositional variance is narrower than what LLMs commonly output, and that gap is part of why they hallucinate so much when their contexts fill up.
i did a bunch of digging into your work and i see there is no concrete implementation of EC-based knowledge encoding. however, there is research in this direction on using non-euclidean spaces for encoding embeddings: hyperbolic, spherical, toroidal and mixed curvature. TorusE is the closest existing research to the concept.
in 1999 i had the idea of creating semantic graph spaces for doing language translation, which obviously also applies to semantics in general, since the differences between words within a natural language, and between equivalents across natural languages, are geometric.
i'm not so much interested in the ZKP-based stuff, though it seems like a powerful mechanism for letting models compute/generate responses to prompts without the provider having any way to know what is being computed. what's also interesting, mainly because of the compactness of the representation, is that it could be extremely powerful for reducing the size of a model's parameters. combined with recursive traversal methods, which are especially cheap on elliptic curves, this has already been shown to massively reduce the calculations required to traverse the graph for token output.
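just to make the compactness angle concrete, here's a toy sketch of elliptic curve point compression: store only x plus a parity bit, and recover y on demand. the curve y² = x³ + x + 1 over 𝔽_23 is a textbook example i picked purely for illustration (p ≡ 3 mod 4 makes the square root a single exponentiation), nothing from the project:

```python
P, A, B = 23, 1, 1  # toy curve y^2 = x^3 + x + 1 over F_23 (illustrative only)

def compress(pt):
    """Store a curve point as its x-coordinate plus one parity bit of y."""
    x, y = pt
    return (x, y & 1)

def decompress(x, parity):
    """Recover y from y^2 = x^3 + Ax + B; sqrt via c^((p+1)/4) since p = 3 mod 4."""
    c = (x**3 + A * x + B) % P
    y = pow(c, (P + 1) // 4, P)
    if y & 1 != parity:  # two roots exist; pick the one with the stored parity
        y = P - y
    return (x, y)
```

so a point costs one field element plus one bit instead of two field elements, and the round trip is lossless: `decompress(*compress((0, 22)))` gives back `(0, 22)`.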
i'm sure that you know of some of these, and what is still needed:
- Actual ML training using E(𝔽_q) as embedding space
- Gradient descent / optimization on discrete curve points
- Benchmarks comparing curve embeddings to hyperbolic/spherical
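as a strawman for the first two bullets: at toy scale, "embedding space = E(𝔽_q)" could mean tokens map to scalar multiples of a generator, and an "optimization step" becomes a discrete move over the scalar index instead of a float gradient. the curve y² = x³ + 2x + 2 over 𝔽_17 with generator G = (5, 1) of order 19 is a standard textbook example, not anything from ECAI:

```python
P, A = 17, 2  # textbook curve y^2 = x^3 + 2x + 2 over F_17; G = (5, 1) has order 19

def inv(v):
    return pow(v, P - 2, P)  # modular inverse via Fermat's little theorem

def add(Pt, Qt):
    """Affine point addition; None stands for the point at infinity."""
    if Pt is None:
        return Qt
    if Qt is None:
        return Pt
    x1, y1 = Pt
    x2, y2 = Qt
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if Pt == Qt:
        m = (3 * x1 * x1 + A) * inv(2 * y1) % P
    else:
        m = (y2 - y1) * inv((x2 - x1) % P) % P
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def mul(k, Pt):
    """Scalar multiplication by double-and-add."""
    R = None
    while k:
        if k & 1:
            R = add(R, Pt)
        Pt = add(Pt, Pt)
        k >>= 1
    return R

G = (5, 1)

def embed(token_id):
    """Map a token id to a discrete curve point (toy 'embedding')."""
    return mul(token_id % 19, G)

# a "gradient step" here is just moving the scalar index by one:
# the candidate neighbourhood of embed(k) is {embed(k - 1), embed(k + 1)}
```

whether anything like real training survives contact with a discrete space like this is exactly the open question in those bullets.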
i'm gonna follow anyway, but like most shitcoin landing pages, explaining the concept without revealing any of the prototyping code makes me sad, and for shitcoins, suspicious. for this subject, however, it's a bit too early for systemization of these waffles into a trick to milk retail.
i'm very interested in statistical modelling and semantic graphs, i may yet start building stuff using this. at minimum, it could be a very interesting compression technique for textual (and potentially audio/visual/volumetric) modeling.
Appreciate the thoughtful digging — seriously. Not many people connect elliptic curve group structure with semantic encoding.
A few clarifications though:
1. ECAI is not a “proposal” or theoretical direction.
There is a concrete implementation of EC-based knowledge encoding. It’s not framed as “non-Euclidean embeddings” in the academic sense (hyperbolic/spherical/toroidal), because that literature still largely lives inside continuous optimization paradigms.
2. ECAI does not treat ECs as geometric manifolds for embeddings.
It treats them as deterministic algebraic state spaces.
That’s a very different design philosophy.
3. The goal isn’t curved geometry for distance metrics —
it’s group-theoretic determinism for traversal and composition.
Most embedding research (including TorusE etc.) still depends on:
- floating point optimization
- gradient descent
- probabilistic training
- approximate nearest neighbour search
ECAI instead leverages:
- Discrete group operations
- Deterministic hash-to-curve style mappings
- Structured traversal on algebraic state space
- Compact, collision-controlled representation
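For concreteness, here is a minimal sketch of what a deterministic hash-to-curve style mapping looks like: simple try-and-increment over a toy curve y² = x³ + x + 1 on 𝔽_23, chosen purely for illustration (it is not the curve or the mapping ECAI actually uses):

```python
import hashlib

P, A, B = 23, 1, 1  # toy curve y^2 = x^3 + x + 1 over F_23 (p = 3 mod 4)

def hash_to_curve(label: str):
    """Deterministically map a label to a curve point by try-and-increment."""
    x = int.from_bytes(hashlib.sha256(label.encode()).digest(), "big") % P
    while True:
        c = (x**3 + A * x + B) % P
        # Euler's criterion: c is a square mod p iff c^((p-1)/2) == 1 (or c == 0)
        if c == 0 or pow(c, (P - 1) // 2, P) == 1:
            y = pow(c, (P + 1) // 4, P)  # square root, valid because p = 3 mod 4
            return (x, y)
        x = (x + 1) % P
```

The same label always lands on the same point, with no training and no stored lookup table, which is the property the bullet above is pointing at.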
This is closer to algebraic indexing than to neural embedding.
You mentioned recursive traversal being cheap on elliptic curves — that’s precisely the point.
Group composition is constant-structure and extremely compact. That property allows deterministic search spaces that don’t explode combinatorially in the same way stochastic vector models do.
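To illustrate that constant-structure property: composing two points always yields exactly one point of the same fixed size, no matter how deep the composition chain gets. A toy sketch on the textbook curve y² = x³ + 2x + 2 over 𝔽_17 (an illustrative curve, not ECAI's):

```python
P, A = 17, 2  # textbook curve y^2 = x^3 + 2x + 2 over F_17

def inv(v):
    return pow(v, P - 2, P)  # modular inverse

def add(Pt, Qt):
    """Group composition: two points in, exactly one fixed-size point out."""
    if Pt is None:
        return Qt
    if Qt is None:
        return Pt
    x1, y1 = Pt
    x2, y2 = Qt
    if x1 == x2 and (y1 + y2) % P == 0:
        return None  # the group identity (point at infinity)
    m = ((3 * x1 * x1 + A) * inv(2 * y1) if Pt == Qt
         else (y2 - y1) * inv((x2 - x1) % P)) % P
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

# composing hypothetical "concept" points is deterministic and stays flat:
cat, mammal = (5, 1), (6, 3)
composed = add(cat, mammal)       # one (x, y) pair, never a distribution
deeper = add(composed, composed)  # still a single fixed-size point
```

The state never grows with composition depth, which is what keeps deterministic traversal from exploding the way stochastic vector search does.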
Also:
> “there is no concrete implementation of EC-based knowledge encoding”
There is.
You can explore the live implementation here:
Specifically look at:
- The search behaviour
- Deterministic output consistency
- Algebraic composition paths
This is not a ZKP play. It’s not an embedding paper. It’s not a geometry-of-meaning academic experiment.
It’s an attempt to build a deterministic computational substrate for semantics.
The broader implication is this:
If semantics can be mapped into algebraic group structure rather than floating point probability fields, then:
- Hallucination collapses to structural invalidity
- Traversal becomes verifiable
- Compression improves
- Determinism becomes enforceable
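"Hallucination collapses to structural invalidity" can be made concrete with a membership check: a claimed point either satisfies the curve equation or it does not, with no fuzzy in-between. A sketch on a toy curve y² = x³ + 2x + 2 over 𝔽_17 (illustrative only, not ECAI's actual validator):

```python
P, A, B = 17, 2, 2  # toy curve y^2 = x^3 + 2x + 2 over F_17 (illustrative only)

def is_valid_state(pt):
    """A state is structurally valid iff it lies on the curve; no partial credit."""
    if pt is None:  # point at infinity: the group identity, always valid
        return True
    x, y = pt
    return (y * y - (x**3 + A * x + B)) % P == 0
```

A fabricated state like (5, 2) simply fails the check, instead of being a low-probability-but-still-emittable output the way an off-distribution token is.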
The difference between “LLMs in curved space” and ECAI is the difference between probabilistic geometry and algebraic state machines.
Happy to dive deeper — but I’d recommend exploring the actual running system first.
Numbers still have a lot left in them.
ECAI Search