I agree, but I fear the consequences of LLM centralization. I'm struggling to find decent options that run on my 64 GB, 24-core desktop. When you depend on using the best, there's by definition just one option.
Replies (1)
All frontier models are great. It's not that you depend on any one of them; it's about being able to switch. That's why I prefer opencode to Claude Code/Codex.
64 GB of VRAM is shit for inference. You can run hardware-attested, end-to-end encrypted inference in the cloud.