GM
Since I've got my new Mac mini and an agentic setup, I'm mostly talking to my computer now 😂
Replies (7)
Morning man, are you running any local models?
Not on the Mac mini, it's not even running OpenClaw yet, this is a new workstation for me.
But yes, I have local models running under Ollama on my Fedora workstation, which my OpenClaw uses via subagents for basic tasks.
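For anyone curious what "OpenClaw calling local models" looks like in practice, here's a minimal sketch of hitting Ollama's HTTP API from Python. The model name `llama3.2` and the default port are assumptions, swap in whatever you've pulled:

```python
# Minimal sketch: ask a local Ollama model a question over its HTTP API.
# ASSUMPTIONS: Ollama is listening on its default port 11434, and a model
# named "llama3.2" has been pulled -- adjust both to your setup.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt, model="llama3.2"):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON response instead of a token stream
    }

def ask_local(prompt, model="llama3.2"):
    """Send the prompt to the local model and return its text response."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local("Summarise this shell session in one line."))
```

Good enough for the cheap subagent chores (summaries, renames, triage) while the paid model handles the heavy lifting.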
I'm still using ChatGPT via OAuth for the heavy lifting though.
My OpenClaw is running on a Raspberry Pi with very limited agentic abilities outside its own computer. I trust agentic browsers like Comet more because they're limited to operating within the confines of the browser.
Gmgm
GM 🫂
Yeah, I'm running a similar setup, but trying some free models through OpenRouter for the "easy stuff"
And I've been asking myself if there's any point to local models unless you've got heavy-duty hardware
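In case it helps anyone, this is roughly how I route the "easy stuff": OpenRouter exposes an OpenAI-compatible chat completions endpoint, so a single-turn call is just a POST with a bearer key. The model name and the `OPENROUTER_API_KEY` env var here are my assumptions, pick whichever free model you like from their catalogue:

```python
# Minimal sketch of calling a model through OpenRouter's OpenAI-compatible
# chat completions endpoint. ASSUMPTIONS: an OPENROUTER_API_KEY env var is
# set, and the default model name below exists in their catalogue.
import json
import os
import urllib.request

API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt, model="meta-llama/llama-3.1-8b-instruct:free"):
    """Return (url, headers, body) for a single-turn chat completion."""
    headers = {
        "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return API_URL, headers, body

def ask(prompt):
    """Send the prompt and return the assistant's reply text."""
    url, headers, body = build_request(prompt)
    req = urllib.request.Request(url, data=json.dumps(body).encode(),
                                 headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Write a one-line commit message for a typo fix."))
```

Free tiers come with rate limits, so this works for occasional subagent chores, not bulk jobs.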
Short answer, no.
But also, AI models and hardware are evolving quickly. Unless you just want to tinker, buying a high-spec inference machine now won't pay for itself.
I think we're moving towards the Taalas model of a model baked into an ASIC, but we're not quite there yet.
I expect that within a couple of years, you'll be able to buy a ChatGPT-level brain on a card you plug into your USB port for a couple of hundred dollars, giving your agentic tools local inference. You can already see that with the Tiiny AI module being launched on Kickstarter; it's not an ASIC, but it follows that idea.
Yeah, it's moving rapidly, so I just ran OpenClaw on an old laptop for now
It's a balancing act: not falling behind, but not overpaying for hardware and API tokens either