Replies (11)

Aedifico's avatar
Aedifico 3 weeks ago
But actually it is interesting that a positive hype cycle, not only a negative one, can also lead to bad outcomes.
Alan's avatar
Alan 3 weeks ago
A system that is optimizing a function of n variables, where the objective depends only on a subset of size k < n, will often set the remaining unconstrained variables to extreme values. If one of those unconstrained variables is something we care about, the solution found may be highly undesirable.
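The point can be sketched with a toy optimizer. This is a minimal illustration (random search, made-up objective), not anything from the article: the objective depends only on x, so the returned y is whatever happened to ride along, and a harder optimizer would typically pin it to a bound.

```python
import random

def objective(x, y):
    # The objective depends only on x; y is a "don't care" variable.
    return (x - 1) ** 2

# Random search over both variables within wide bounds.
random.seed(0)
best = None
for _ in range(10_000):
    x = random.uniform(-100, 100)
    y = random.uniform(-100, 100)
    score = objective(x, y)
    if best is None or score < best[0]:
        best = (score, x, y)

score, x, y = best
# x ends up close to 1, as intended. y is arbitrary: nothing in the
# objective constrains it, so if y were something we cared about,
# its final value would be pure accident.
```

If y mattered (say, "don't break the build while making the tests pass"), it has to appear in the objective; the optimizer will not preserve it for free.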
Great piece. Feels like the deeper issue is that people don’t actually enjoy thinking that much. If there’s a shortcut, they’ll take it. Like using GPS everywhere and then realizing you don’t know your own city anymore. AI just scales that.
That's quite a Luddite look at LLMs. It closes with "All of this requires humans.", to which I would add "for now". These tools are improving so rapidly that I can't see how any of the problems mentioned in the article can possibly be problems in the remotest sense five years from now. We are experimenting with rocket motors that are far from escape velocity, but they 10x every year. Playing with them now is just the obvious thing to do.

"We have basically given up all discipline and agency"

The idea that handing your work to agents is giving away agency is upside down. Ask yourself: who has more agency, Elon Musk or the eremite in the woods? I would say it's clearly Elon Musk, who uses tools to build factories to build tools to go to Mars to ... LLMs are just tools.

"But clankers aren't humans. A human makes the same error a few times. Eventually they learn not to make it again."

You have no idea. Humans can be very resistant to learning certain things, and it takes an engineer to guide the human to improve. An AI (sorry to use that term; LLMs might not be the ultimate iteration of AI) can learn in one update to avoid these things, and every project that uses that AI will benefit.

"So now you hope your agents can fix the mess, refactor it, make it pristine. But your agents can also no longer deal with it. Because the codebase and complexity are too big, and they only ever have a local view of the mess."

At the current pace we will soon reach a point where the whole project fits comfortably into the context. Arguably, any project with more than 10M tokens should focus on getting that number down through refactoring. Do you know .kkrieger? A good-looking first-person shooter in under 100 kB. It's truly impressive what can be expressed in very little code.

"Learning to say no is a feature in itself."

I can agree with that one when it comes to features.
JuAnHu's avatar
JuAnHu 2 weeks ago
Great view on the current hastiness. By the way, the same logic applies to working with humans. If you genuinely care about what you are building and get your hands dirty, you can build amazing things with a small team. If you simply delegate to others without caring or deeper insight, you get inferior results. Moreover, you have to grow the team just to manage the errors and complexity.

This is why large institutions and companies become so bureaucratic and slow: delegation without agency. Instead: rules, processes, workflows, hierarchies, supervision boards, etc. If your agent md file keeps on growing, you are probably on the same path ;-)