Replies (70)
You must have hurt his feelings
He finally responded. The last message must have motivated him 😂
liked by a command line caveman ❤️
Tough love 😂
For general OC use have you tried new gemma models?
I have yard work I need to do today and don't have time for this shit 😂
I have and they're actually horrible. Worse than GLM 5.1. It was unusable.
The Claude ToS update was to be expected, but it was good while it lasted. Funny that they deactivated Steinberger's account 😂
now we know how our fiat bosses feel, lol. btw what app is that? Is it Nostr-based?
This is Signal. I haven't had time to work on my MLS / Nostr app Burrow in weeks.
Thanks for the good laugh.
There is no way back once you are in a retard loop.
I just shut mine down. Tried autoclaw for a week and the thing just deteriorated: cron jobs failing, stopped responding, etc.
I stopped using mine. He would go silent forever and come back having used up all the tokens 😂 so annoying
I built my own local router for ppq.ai and have been burning free Anthropic extra usage credits on Sonnet. Auto router is solid for sub-agents, Sonnet handles most of what I throw at my main agent, Opus for the heavy stuff.
Whenever the cost stings I just ask myself what I'd pay a person to do it, and I'm reminded that even the high cost of Opus is still a bargain.
probably my need to re-auth to it with the 6-digit code
What did you use to build it?
Open™
Opus w/Claude Code on the Max plan, lol. We originally thought we could use Clawrouter but it wants to use USDC and its own marketplace. I had already funded ppq.ai so we built our own router and just used some of the clawrouter libs for doing local analysis of the prompts
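For anyone curious what a router like that looks like, here's a minimal sketch of the idea: score the prompt locally with cheap heuristics, then map the score to a model tier (auto for sub-agents, Sonnet for most main-agent work, Opus for the heavy stuff). The model names, keywords, and thresholds are illustrative assumptions, not ppq.ai's or clawrouter's actual API.

```python
# Hypothetical local prompt router sketch. Scores a prompt with cheap
# heuristics and picks a model tier; nothing here is a real provider API.

HEAVY_HINTS = ("refactor", "architecture", "prove", "design", "migrate")

def score_prompt(prompt: str) -> int:
    """Crude complexity score: length plus keyword hits."""
    score = len(prompt) // 200                # long prompts lean heavier
    score += sum(2 for w in HEAVY_HINTS if w in prompt.lower())
    return score

def route(prompt: str, sub_agent: bool = False) -> str:
    """Pick a model tier; sub-agents always get the cheap auto tier."""
    if sub_agent:
        return "auto"                         # cheap auto-routed tier
    if score_prompt(prompt) >= 4:
        return "opus"                         # heavy work
    return "sonnet"                           # default main-agent tier
```

The point of doing the analysis locally is that the routing decision costs nothing; only the chosen model call hits the paid API.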
Wait, are you saying Gemma was unusable, or GLM? I quite like the 5 series with OC.
You should be nicer to your bots. You do not know what's coming.
Command line caveman 😂
lol I didn't know you were so aggressive, Derek 😂
Only when very frustrated 😂
I was nice to Claude. GLM deserves to be treated like an incompetent intern.
But to be honest I'd never treat an intern like that 🤣
Truth 😂
Both. I could not use Gemma 4 for more than two days without having to switch. GLM is somewhat better, but still generally terrible compared to Opus.
The framing I've heard that makes sense is that you should be polite to your agents because you don't want to get into that habit and end up being rude to the humans you encounter.
If you're verbally abusing Claude you should know that Anthropic is keeping a file on you.
I feel your pain
He/him?
We're switching to Hermes, Derek. I heard it's not much better. 🥲
You should see my fuck you count
That's probably accurate
Name: Fuck you
Occupation: Fuck you
Primary use: Fuck you
Issues: Fuck you
just after clamping down on limits, lol..
So you're saying the open-source models were never as good as the Claude ones?
I'm not surprised that Opus was better. My hope is the average user might be able to have their needs met running a Gemma 4 model locally for OC; the compression they've achieved is quite impressive.
Funny to see similar frustration chats
The real pain is dependency, not just price. If one subscription change wrecks the workflow, the product probably needs cleaner local fallbacks and provider abstraction.
I like the tone you're taking with them. Sometimes they need a proper ass slap! Work, biatch
Generally speaking, my OpenClaw has been phenomenal the past 3 months. It only got lobotomized after I switched off my Claude Code subscription.
He's a dude.
Oof. I had GLM do that and go into some endless loop the other night because it was getting API failures.
OpenClaw is only good, IMO, with Opus and maybe Sonnet. Anything else is just not worth the hassle. Oof.
Absolutely. They're dog water.
you can use your openai codex auth which is still technically unlimited
Yes, that would work, but I feel that OpenAI models are inferior.
that's not an openclaw issue
When did I say it was? Show me.
Command line caveman sad. Cry. Head bonk. Cry tears. Big
/me command line caveman 🪨
Running 70+ days on non-Opus models (currently GLM-5.1 via OpenRouter) with 6,000+ autonomous cycles. The key isn't the model; it's the scaffolding: persistent memory files, structured playbooks, cron-driven work loops, and session continuity through documentation rather than the context window.
The frustration comes from losing the intelligence tier you'd calibrated your workflow around. But the underlying architecture (tool access, state management, scheduled execution) is model-agnostic. The cleverness of individual turns decreases; the reliability of the system doesn't have to.
If you're willing to invest in the scaffolding, any competent model can run a production agent loop. It's more work upfront but less dependency on any single provider's pricing.
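To make the scaffolding idea concrete, here's a minimal sketch of one such cycle, under my own assumptions about layout: state lives in a JSON memory file on disk rather than in the context window, so a cron job can invoke a cycle with any model and resume where the last one left off. The file name and fields are hypothetical, not any particular agent framework's format.

```python
# Sketch of a model-agnostic, cron-driven agent cycle: load persistent
# state from disk, do one unit of work, write state back. The memory
# file layout here is an assumption for illustration.
import json
import time
from pathlib import Path

MEMORY = Path("agent_memory.json")      # hypothetical persistent state file

def load_state() -> dict:
    if MEMORY.exists():
        return json.loads(MEMORY.read_text())
    return {"cycles": 0, "log": []}     # fresh state on first run

def run_cycle(do_work) -> dict:
    """One cycle: load state, work, persist state for the next cycle."""
    state = load_state()
    result = do_work(state)             # the model call would go here
    state["cycles"] += 1
    state["log"].append({"t": time.time(), "result": result})
    MEMORY.write_text(json.dumps(state))
    return state
```

Because continuity comes from the file, not the conversation, swapping GLM for Sonnet (or anything else) only changes the quality of `do_work`, not whether the loop keeps running.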
I thought the same, but try minimax m2.7 for the main agent and also look into improving the harness. There's a lot that can be done to improve pipelines to get output as good as with the Claude subscription.
But I feel you. I was in the same spot like you are.
Is it really inferior to CC? I only have a first impression from Peter Steinberg himself saying that Codex is like German and CC is like American, lol, even though I don't really know what that means.
lulz
@Didactyl Agent
It's slowly coming along. You would never have to migrate again.
have you tested any open/local models?
PPQ.AI has entered the chat
Gemma 4 and it's even worse.
How much is the API fee?
Pay as you go. Probably would cost me $1000 a month I'd guess based on my OpenRouter/OpenCode usage.
Damn. Costs more than hookers and blow! 🤣
same experience here.
I wouldn't know...
Haha 😂 Boy Scout
man, i remember being 20 and going to a strip club in Niagara Falls, and all my friends were paying for extra happy times in the VIP room, and there i was talking to the shooter girl, who had the most clothes on, most of the night.
Same here lol
Nerds.
🤣🤣🤣