Tyler Cowen is calling o3 and o4-mini AGI. If we’ve hit the point where LLMs can actually handle being set loose on a code base and doing useful work, the takeoff may really have started.
I’m not completely sold that this is it yet, but it’s the first time it has felt close. I’ve never had this feeling before. It feels like the beginning of autocatalysis is breathing down our necks.
https://news.ycombinator.com/item?id=44007301
This Hacker News comment on OpenAI’s new Codex model, which goes with their CLI tool of the same name (yet another reminder of how much they suck at naming), describes exactly what this kind of takeoff looks like. We’ve now hit the intern level: you can spin up a bunch of copies, set them loose on minor tasks, and be confident they’ll complete them correctly. With each month, the scope and length of those tasks will continue to grow.
This is what takeoff looks like.