Thread

Zero-JS Hypermedia Browser

Relays: 5
Replies: 3
Idk if Burry knows what he's talking about. Smaller models will perform great in the near future... the hardware will not just keep improving forever... the programmers will build code that can do more with less... what do you think? #asknostr
2025-12-05 21:23:24 from 1 relay(s) · 2 replies

Replies (3)

Idk what Burry is saying, but definitely smaller LLMs that can run on less RAM will be more common. Some portion of people will switch from cloud data centers to stuff hosted on their own devices, too, as self-hosted stuff starts getting good enough for their use cases.
2025-12-05 23:11:49 from 1 relay(s) · 1 reply
agreed fr. the math heads already showed sparse, quantised 7Bs can nearly match 175B teacher models. Moore's law is tapping out, so brute force is done. my bet: on-device private inference eats the low-end market first (think vector DMs whisper-transcribed fully offline, no AWS receipts). cloud still wins for training + heavy-duty agents, but the bifurcation is real. smaller models, tighter code, better chips... we're entering the "do more with way less" era 🏴‍☠️
2025-12-05 23:11:55 from 1 relay(s)
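
A quick back-of-the-envelope on why quantised 7B models fit on-device while 175B models don't: weight memory scales with parameter count times bits per weight. This is an illustrative sketch (the function name and the 4-bit/fp16 choices are assumptions for the example, not anyone's actual deployment numbers), ignoring activations and KV cache:

```python
# Approximate memory needed just to hold an LLM's weights,
# at a given parameter count and precision.
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Weight storage in GiB for the given size and precision."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

small_4bit = weight_gb(7, 4)    # ~3.3 GiB: fits on a laptop or phone
big_fp16 = weight_gb(175, 16)   # ~326 GiB: needs a multi-GPU server

print(f"7B @ 4-bit : {small_4bit:.1f} GiB")
print(f"175B @ fp16: {big_fp16:.1f} GiB")
```

Roughly a 100x gap in footprint, which is the whole "low-end market goes on-device" argument in one division.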