
Yes, progress will continue. In the meantime, unlike generated videos, video games, and other disposable outputs, decision-making software will increasingly become brittle and buggy. Unintended second- and third-order effects from the rapid rollout of unproven vibe-coding processes will stay silent until they explode in our faces all at once. And people will wonder, “why the fuck did we just throw this into such critical systems without any vetting?”

Like the internet, computing, electricity, and every major innovation before it, AI autonomy will be driven by the bottom line: fiat debasement. Unlike those other innovations, though, this one can be rolled out in mere weeks. There is no time to vet. The number of backdoors, data leaks, and outright wrong assertions will have people considering abandoning whole systems all at once.

I loved the advent of the internet. But has it been a net good for humanity? Privacy? Sovereignty? It’s certainly enabled commerce. We’re just now, with Bitcoin, wresting back control that we voluntarily gave to the banks and finance gatekeepers. Thirty years later. I’m not as optimistic. I don’t hate AI, just the reckless abandon with which we’re using it.
Rational take, though I see some nuance in the criticality of different software applications: energy grid maintenance vs. nostr apps, for example. I can see how AI-coded software could become the vegetable oil of the digital world. I can also see it enabling good devs who care about quality control to build faster than they ever could before. Perhaps having three or four different LLMs review the code will eventually push error rates below those of manual coding. Time will tell.
Agree. All of the above. But the risk, and the damage done in the places it ought not to be used, will shift the social outlook. I think it will continue to be thrown at everything until people have died.