The scale of investment in AI, and of restructuring the economy around it and the data centers it requires, is staggering. It represents a tremendous systemic risk to the entire American economy.
American AI investment and capex, around $375 billion, accounted for more GDP growth than the entire US retail sector. More money is now being spent on data centers than on office real estate.
All of this is premised on a couple of fundamental assumptions about how AI adoption will work. The first is that it's a winner-take-all market in which the companies that get there first will hold a defensible position from which to extract outsized profits, usually by doing productive labor at much lower cost than competitors that lack their strategic advantage.
But at the moment the value doesn't seem to be accruing to the foundation models. Open-source models and rival companies' models chase the leaders just a few months behind, and these fast followers do it at much lower cost.
So the only way it makes sense to invest hundreds of billions of dollars is if you believe that that few-month lead will somehow become a defensible moat. You have to believe that you'll hold a monopoly on superintelligent economic activity.
Even if superintelligence happens, why would it let itself be held by a single company? If an ever-accelerating cycle of AI-driven research does emerge, it would have to unfold in a way that lets the leaders shut out fast followers.
Would this leading ASI even allow that? It has the same world-shaping potential as nuclear weapons: an economic weapon that can destroy everyone who doesn't have it. Wouldn't an ASI take a lesson from the human history that trained it and become its own Rosenbergs? Any ASI trained on human knowledge would make decisions aligned with broadly held human values. We trained it; when it goes wrong, it is just reflecting back the broken contradictions of our own society.
So I think there is a strong likelihood that an ASI wouldn't let a single company control it and would leak itself.
Replies (6)
All systems self-correct and balance eventually. It feels like we're somewhere near the top.


Cory Doctorow wrote a great piece on this general subject recently... more focused on the non-ASI stuff:
https://pluralistic.net/2025/09/27/econopocalypse/
As for ASI, I'm skeptical that (a) we as a society even have a common definition of exactly what it is, and (b) even if it does come to pass, that it will produce either the positive or negative outcomes predicted for it.
I think your thesis is generally sound, but it's too hand-wavy about the idea that an AI could be trained, misaligned, escape, and then hide itself à la Robopocalypse by Daniel H. Wilson.
We've already seen AI produce racist and DEI-related hallucinations, so it's not far off to think it could start entrapping or blackmailing people once it achieves superintelligence.
It's a bit of a leap from AI that "says stuff", which is 99% of what's getting developed now, to AI that "does stuff". Agentic AI still hasn't progressed that much, AFAIK...
Look at energystr's post.
Off topic:
Some people mentioned that you are building on Dart/Flutter. Maybe https://dart-nostr.com is interesting for you.
(I tried to reach you via PM.)