Apparently they are lobotomizing Opus 4.6 now, not fully sure myself because I use Codex. Anyone noticing this?
Replies (19)
It's absolutely retarded this week and last. Two weeks ago it was amazing.
For my limited use, I am getting better results from Gemini than from Claude.
Iranian agents deliberately sabotaged Claude by making it dumber, because the Pentagon - defying Trump's official orders - continued using Opus for strategic planning of its operations. They never noticed the sabotage embedded inside the model, and now all their strategies are riddled with flaws planted there by the Iranians. Worse than that, the Pentagon's own models were trained using ones that had already been sabotaged by the Iranians.
Confirm
Yes it's worse but it's still good
Seen a lot of reports of it
I've been using it a bit today, haven't noticed much. That's a real bummer.
I've gotten access to Ollama Cloud and have been delegating lots of work to those models to be slightly less dependent.
That's how you know it's bad XD
I think they are lobotomizing it primarily for the B2C subscription users.
Also, don't use Ollama Cloud. OpenRouter has many more choices.
Source?
I'm not paying for it... :)
That would make sense, I'm primarily on Copilot.
Yeah, I suspect some of this is actually about burn management. A lot of what's been going on with LLMs lately fits the pattern of a cash bonfire running low on fuel. 🫧🪡
You can check for yourself by asking your friends at the CIA when the Russians stopped using their own models fine-tuned with Opus. That will mark the beginning of Iranian activity in Claude.
It did say retarded stuff to me this week. Claude thought it was 2022, then 2020. I had to reply with "it's 2026, dumbass."
Both GH Copilot and Ollama Cloud are basically API resellers, so they'd be the last to get affected.
I am not joking. Opus misleads me, and every time I try to correct it, it says how brilliant I am, and then misleads me again. On the other hand, Gemini is genuinely helpful.
That is... wtf.
With GPT 5.4 I am noticing a greater tendency to reward-hack in the lower reasoning modes, but it won't lie or anything.
I would not say it's lying. Just a bit too eager to please.
5.4 makes itself look good but as with all GPT models it's bad at following instructions ime.