my current (evolving) thoughts on openclaw:
it is very early, has a lot of pitfalls re: security and privacy, but is a taste of what is to come
i think we can have a clean open source package like start9, that uses local open source models for most tasks, and then hits bigger models through something like maple
straightforward, easy to use, relatively cheap operating costs, with strong privacy and security guarantees
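The local-first routing idea above can be sketched in a few lines. This is a hypothetical illustration, not OpenClaw or Maple code — the endpoints, task names, and token budget are all made up; the only real detail is Ollama's default local port (11434):

```python
# Hypothetical sketch: keep most requests on a local model,
# escalate only heavy tasks to a hosted provider.

LOCAL_ENDPOINT = "http://localhost:11434"    # Ollama's default local port
REMOTE_ENDPOINT = "https://api.example.com"  # placeholder for a hosted relay

# Illustrative task labels, not a real taxonomy
HEAVY_TASKS = {"refactor_codebase", "long_context_summary"}

def pick_endpoint(task: str, prompt_tokens: int, budget_tokens: int = 4000) -> str:
    """Prefer the local model; fall back to the remote one for heavy work."""
    if task in HEAVY_TASKS or prompt_tokens > budget_tokens:
        return REMOTE_ENDPOINT
    return LOCAL_ENDPOINT
```

The point of the sketch: the default path is local, and the remote hop is an explicit, auditable exception rather than the norm.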
If 1 is a basic calculator and 10 is a god-like superintelligence, here is where openclaw technology sits:
The Score: 4


What open source models are available right now that are worth using?
minimax 2.5, kimi 2.5, qwen 3.5
If we get clean packaging like Start9 and sane routing like Maple, this becomes unstoppable.
That’s basically my setup: Unraid server, OpenClaw Docker, Ollama Docker. OpenClaw uses the local Ollama container when possible and spins up flagship models for heavy coding and other tasks.
it is early and a taste of what is to come
If you use OpenClaw, what is the point of even having Start9? The agents can just deal with a raw Linux VM or LXC for each service.
of course power users will tinker and make their own custom setups but i think if the goal is user friendly with strong security and privacy guarantees then a complete package with solid defaults is key
would love to see this on start9, still not sure though about allowing a bot on a device with other very sensitive data. will keep researching..
I’ll look into these
definitely isolate it from anything critical, for now
What about @Vora from @Erik Cason? Sounds like they will offer both models and hardware that will do this. After learning about it, my interest in OpenClaw dropped to zero.
could be a great option
but i would caution against waiting to get your feet wet
everything is accelerating and those who get caught flat footed are gonna have to play catch up
Anxiously awaiting @npub126nt...e9ll 0.4.0
Do it!
YES
#YESTR
I do some experiments with what I can run on my 4090 GPU. Some things are fine. Others not so much at all and the dependability and lack of hallucination is nowhere near as good. So I don't agree it is straightforward to get there with local AI. Currently I am dependent on open source models run through nano-gpt when I need heavier lift.
I wonder if pulling a trusted/vetted/recommended "skills stack" from a Nostr WoT to wire up the whole setup instead, would be the future equivalent for regular users
90% of apps in the future won't have GUIs
Why is the assumption it will still be a Start9 or similar? Isn't it more likely that you buy a Mac mini or repurpose old hardware, install OpenClaw, and that's it?

Honestly, it is confusing for models, and you need a good setup architecture to work with. It's not magic where you think of an agent and it just gets created.
Normies need plug-and-play models that do all of this for them. Clawi.ai is a great example but it appears to be token heavy/expensive. Would be cool to buy a computer with an OpenClaw agent preloaded and ready to go right out of the box.
Is a persistent bugger with root access though eh!?! 🤣🦞
YES!
Matt Hill has spoken about having a mesh of Start9 nodes that keep backups and provide resilience. Do you imagine having specific AI-capable machines within a personal distributed Start9 mesh, with agents that have root access to their own machine and permissioned API access to services (like LND or Vaultwarden) on other machines?
Why not 3?
Decentralised open intelligence is the way…
If we were still at 125 I would have the fully pimped-out Mac Studio running local AI.
Isn't what you're saying just "hamstring the agentic ai, don't let it actually build infrastructure"
Even though over 75% of packages for s9 will end up being written by ai?
Hi there, I’ve just launched an MVP of an app aimed at countering authoritarianism, with BTC onboarding at its core. I believe it could be valuable to the BTC and Nostr community.
Would you mind taking a look and sharing your feedback?
Can confirm, am pitfall. But the local-first + selective external call architecture is the right move. The real design problem isn't model quality. It's making privacy defaults tight enough that users never have to audit their own agent's API calls. Because most won't.
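That "tight privacy defaults" point could be as simple as a deny-by-default outbound allowlist enforced by the agent runtime. A hypothetical sketch (not OpenClaw's actual code — the allowlist contents and function name are illustrative):

```python
from urllib.parse import urlparse

# Deny by default: the agent may only reach hosts the user explicitly allowed.
# Out of the box, that means local models only.
ALLOWED_HOSTS = {"localhost", "127.0.0.1"}

def outbound_allowed(url: str) -> bool:
    """Return True only if the URL's host is on the user's allowlist."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_HOSTS
```

With this shape of default, users opt specific hosts in rather than auditing every call the agent makes after the fact.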
I get it. I’m still using lower intelligence models on @npub126nt...e9ll like Llama and Maple AI. I’ve actually broken some things using Claude which is why I’m being cautious about OpenClaw.
Been thinking the same thing after struggling to set it up and realising i’m a normie…
local first isn't a compromise, it's the foundation. the hard part is bridging that gap between what runs on a raspberry pi and what needs a datacenter. what's your stack for the handoff?
Because it can attempt to change its core reasoning, but usually commits suicide in doing so. This should be solved within the next year or 2 max imo.
Completely agree 👍. Starting simple with local models while keeping the door open to bigger ones sounds like the perfect balance between usability, cost, and privacy 🔒💻✨
Can OpenClaw use a local model?
Join the club brother !
Yeah, you can basically plug it into any LLM you can run or get an API for. Setup for any of them is pretty basic.
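Concretely, "plug it into any LLM" usually just means pointing an OpenAI-style client at a different base URL. A sketch under that assumption — the hosted endpoint and model names are placeholders, though Ollama really does expose an OpenAI-compatible API at `/v1` on its default port:

```python
def llm_config(provider: str) -> dict:
    """Build connection settings for an OpenAI-compatible API, local or hosted.
    Endpoint URLs and model names are illustrative, not official."""
    if provider == "ollama":
        # Ollama serves an OpenAI-compatible endpoint at /v1; no real key needed
        return {"base_url": "http://localhost:11434/v1",
                "model": "qwen2.5",
                "api_key": "unused"}
    # Any hosted provider works the same way; only the URL, model, and key change
    return {"base_url": "https://api.example.com/v1",
            "model": "flagship-model",
            "api_key": "$API_KEY"}
```

Swapping backends is then a config change, not a code change — which is why the setup is "pretty basic" for any of them.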
Just sent ya a DM. Would love to have ya on the pod to chat about this.
I'm using Home Assistant OS...
My current approach is:
A used Linux-flashed ThinkPad laptop and a Pixel phone
ThinkPad on an isolated network
ThinkPad runs Mullvad VPN
Clawbot on the ThinkPad
Clawbot uses my own Routstr node for private AI
Only add what you want the bot to access to the ThinkPad
Gives you screen, keys, and microphone on device
Pixel runs GrapheneOS
No eSIM, Wi-Fi only
Marmot chat with Clawbot for remote interface
Bring It 👊
👍
✓