That's quite a Luddite look at LLMs. It closes with "All of this requires humans.", to which I would add "for now". These tools are improving so rapidly that I can't see how any of the problems mentioned in the article will still be problems, even remotely, five years from now. We are experimenting with rocket motors that are far from escape velocity but improve tenfold every year. Playing with them now is just the obvious thing to do.
"We have basically given up all discipline and agency"
The idea that handing your work to agents is giving away agency is upside down. Ask yourself: who has more agency, Elon Musk or the eremite in the woods? I would say it's clearly Elon Musk, who uses tools to build factories to build tools to go to Mars to ...
LLMs are just tools.
"But clankers aren't humans. A human makes the same error a few times. Eventually they learn not to make it again."
You have no idea. Humans can be very resistant to learning certain things, and it takes an engineer to guide them toward improving. An AI - yeah, sorry to use that term, but LLMs might not be the ultimate iteration of AI - can learn in one update to avoid these mistakes, and every project that uses that AI benefits.
"So now you hope your agents can fix the mess, refactor it, make it pristine. But your agents can also no longer deal with it. Because the codebase and complexity are too big, and they only ever have a local view of the mess."
At the current pace we will soon reach a point where a whole project fits into the context comfortably. Arguably, any project with more than 10M tokens should focus on getting that number down through refactoring. Do you know .kkrieger? A good-looking first-person shooter in under 100 kB. It's truly impressive what can be expressed in very little code.
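
If you want to see where your own codebase stands relative to that 10M-token figure, a rough count is easy to get. Here's a minimal sketch of my own (not from the article), assuming the tiktoken tokenizer; the encoding name and file extensions are just illustrative choices:

```python
# Rough estimate of a codebase's token count, to judge whether it would
# fit in a given context window. Encoding and extensions are assumptions.
from pathlib import Path
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # assumed encoding

def repo_token_count(root: str, exts=(".py", ".ts", ".go", ".rs", ".c", ".h")) -> int:
    """Sum token counts over source files under `root`."""
    total = 0
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in exts:
            text = path.read_text(errors="ignore")
            total += len(enc.encode(text))
    return total

if __name__ == "__main__":
    tokens = repo_token_count(".")
    print(f"~{tokens:,} tokens; under 10M: {tokens <= 10_000_000}")
```

The exact numbers will vary by tokenizer, but it gives a ballpark for whether "the whole project fits into the context" is already realistic for you.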

"Learning to say no is a feature in itself."
I can agree with that one when it comes to features.