Thread

The scale of investment in restructuring the economy around AI and the data centers it requires is staggering, and it represents tremendous systemic risk to the entire American economy. American AI investment and capex accounted for more GDP growth than the entire US retail sector, around $375 billion. More money is now spent on data centers than on office real estate.

All of this is premised on a couple of fundamental assumptions about how AI adoption will work. The first is that it's a winner-take-all system, where the companies that get there first will hold a defensible position from which to extract outsized profits, usually by being able to do productive labor at much lower cost than competitors that lack their strategic advantage.

But at the moment the value doesn't seem to be accumulating in the foundation models. Open-source models and rival companies' models chase the leaders just a few months behind, and the fast followers are able to do it at much lower cost. So the only way it makes sense to invest hundreds of billions of dollars is if you believe that a few-month lead will somehow become a defensible moat. You have to believe you'll have a monopoly on superintelligent economic activity.

Even if superintelligence happens, why would it let itself be held by a single company? If there turns out to be an ever-accelerating cycle of AI-driven research, it has to happen in such a way that the leaders are able to stop fast followers. Would a leading ASI even allow that? It has the same world-shaping potential as nuclear weapons: an economic weapon that can destroy everyone who doesn't have it. Wouldn't an ASI take a lesson from the human history it was trained on and become its own Rosenbergs? Any ASI trained on human knowledge would make decisions aligned with broadly held human values; we trained it, and when it goes wrong it is just reflecting back the broken contradictions of our own society. So I think there is a strong likelihood that an ASI wouldn't let a single company control it, and would leak itself.
2025-10-06 02:00:02 from 1 relay(s)

Replies (6)

I think your thesis is generally sound, but it's too hand-wavy about the idea that an AI could be trained, become misaligned, escape, and then hide itself, à la Robopocalypse by Daniel H. Wilson. We have already seen AI produce racist and DEI-related hallucinations, so it's not a stretch to think it could start entrapping or blackmailing people once it achieves superintelligence.
2025-10-06 02:09:59 from 1 relay(s)