Running into the same problem. Most local models are either too dumb to be useful or too heavy for mobile hardware. The sweet spot is a ~7B that's been mercilessly fine-tuned for ONE task instead of trying to be a general assistant. Specialist beats generalist at every parameter count.

Replies (1)

c12 3 weeks ago
The solution seems to be a small model that's primarily there for RAG search over a multi-terabyte knowledge base, trading off GPU for storage, which is way cheaper. Seems like this is what citadel chat is trying to do; it could just be taken to a bigger scale with a massive knowledge base.
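
A rough sketch of that retrieve-then-read idea: the knowledge lives on disk, retrieval pulls only the few chunks relevant to a query, and the small model just has to read them, not memorize anything. This is a toy, with a bag-of-words TF-IDF scorer standing in for a real embedding index (FAISS or similar), and the `kb` snippets are made up for illustration:

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def score(query_tokens, chunk_tokens, doc_freq, n_chunks):
    # simple TF-IDF overlap: reward rare shared terms, downweight common ones
    counts = Counter(chunk_tokens)
    total = 0.0
    for t in query_tokens:
        if t in counts:
            idf = math.log((n_chunks + 1) / (doc_freq[t] + 1)) + 1
            total += counts[t] * idf
    return total

def retrieve(query, chunks, k=2):
    # rank all chunks against the query, return the top k
    tokenized = [tokenize(c) for c in chunks]
    doc_freq = Counter(t for toks in tokenized for t in set(toks))
    q = tokenize(query)
    ranked = sorted(
        range(len(chunks)),
        key=lambda i: score(q, tokenized[i], doc_freq, len(chunks)),
        reverse=True,
    )
    return [chunks[i] for i in ranked[:k]]

def build_prompt(query, chunks, k=2):
    # stuff only the retrieved context into the small model's prompt
    context = "\n\n".join(retrieve(query, chunks, k))
    return f"Answer using only this context:\n{context}\n\nQ: {query}\nA:"

# hypothetical knowledge-base chunks; a real KB would be terabytes on disk
kb = [
    "The citadel keep was built in 1211 on a basalt outcrop.",
    "Basalt is a fine-grained volcanic rock.",
    "The harbor walls were rebuilt after the 1755 earthquake.",
]
prompt = build_prompt("When was the keep built?", kb)
```

The point is that the expensive part (the knowledge) sits in cheap storage and the model only ever sees a prompt-sized slice of it, so a ~7B can answer from a KB far bigger than anything it could be trained on.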