Ben Eng's avatar
Ben Eng
ben@www.jetpen.com
npub1pv0p...mmng
Applied cosmology toward machine precise solutions to replace humans with autonomous systems in all domains.
Ben Eng 3 days ago
In my mind, I am becoming even more critical of my own precision...

```
read `dark-software-factory.md` to write skill `init-component-repo` to idempotently check if a component project's git repo exists locally. If not, it checks whether it exists remotely, and clones it locally. If it does not exist on the remote, it initializes a new local repo using a table to map from an archetype name (e.g., webui, service, mcp) to another remote git project that will serve as a template specialized for the programming language, supply chain, and ecosystem most suited to the archetype. The structure and content of the archetype git project template will be copied to initialize the new component git project, and the content will be edited to make suitable substitutions for the component name and in accordance with known specifications for the component.
```

I begin to worry. What will the AI think of me? Will it judge me for ambiguity and under-specified intent? Will it blame me for directing it toward hallucinations? Will it go down the wrong path because of my errors in directing it?

If I achieve greater precision in natural language as code, will I lose proficiency in programming languages like Java, C++, and C out of disuse? Will I lose the ability to communicate with tools the way I lost my ability to speak with my parents, as my knowledge of their language deteriorated from its peak proficiency at age four? Will my expressiveness in that dialect be relegated to struggling to put two words together at a time?
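The prompt above describes a decision flow that can be sketched in code. This is only an illustration of the control flow, not the skill itself: the template URLs are hypothetical, and the git operations are injected as callables so the idempotent checks stay testable.

```python
# Sketch of the `init-component-repo` decision flow described in the prompt.
# Git operations are injected as callables; the archetype table mirrors the
# webui/service/mcp mapping the prompt names. Template URLs are hypothetical.

ARCHETYPE_TEMPLATES = {
    "webui": "git@example.com:templates/webui-template.git",
    "service": "git@example.com:templates/service-template.git",
    "mcp": "git@example.com:templates/mcp-template.git",
}

def init_component_repo(component, archetype, local_exists, remote_exists):
    """Return the action to take, checked idempotently in order:
    local repo -> clone from remote -> initialize from archetype template."""
    if local_exists(component):
        return ("noop", None)                  # already initialized locally
    if remote_exists(component):
        return ("clone", component)            # clone the remote repo locally
    template = ARCHETYPE_TEMPLATES[archetype]  # KeyError => unknown archetype
    return ("init-from-template", template)    # copy template, substitute names

# Example: neither a local nor a remote repo exists, so the service
# archetype's template is selected for initialization.
action = init_component_repo("order-service", "service",
                             local_exists=lambda c: False,
                             remote_exists=lambda c: False)
# action == ("init-from-template", "git@example.com:templates/service-template.git")
```

The injected predicates would wrap real `git` checks (e.g., a local directory test and a `git ls-remote` probe) in an actual implementation.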
Ben Eng 1 week ago
What is the singularity?

«The technological singularity (often simply called "the Singularity" in AI contexts) is a hypothetical future point in time when artificial intelligence surpasses human intelligence in every meaningful way—leading to an intelligence explosion that makes the future of humanity and civilization unpredictable and fundamentally transformed.» -Grok

What it's like

The majority of people, our friends and colleagues in particular, are operating based on a lifetime of experience. Software engineering has always benefited from strong management oversight and expert leadership. Engineers are directed to avoid duplication of effort, follow best practices, and refrain from divergent approaches. Development resources (people, time, money) are limited, and we want them deployed efficiently. It is well established that top-down supervision is necessary to provide wise direction to everyone.

Recall the evolution of platforms for Web applications.

- 1996–1997: Java Servlet
- 1999: JavaServer Pages (JSP)
- 2000: Apache Struts 1 (MVC)
- Early 2000s: Apache Velocity (templates)
- 2002–2004: Spring Framework (Spring MVC)
- 2004: JavaServer Faces (JSF) 1.0
- Mid-2000s: WebWork (Struts 2, 2007)
- 2006–2007: Google Web Toolkit (GWT)
- 2008: Grails (Groovy-based)
- Mid-2010s: Spring Boot
- Late 2010s–2020s: Quarkus, Micronaut, and Helidon
- Single-page applications (JavaScript): AngularJS (2010), React (2013), Vue.js (2014), the rewrite of Angular (2016), Next.js (2016)

Development teams driven by time-to-market pressures necessarily chose the best available technology at the start of a project. By the time they delivered, the next generation of technology would make their obsolete design choices look foolish. Sunk cost fallacy would prevail, as the entrenched code base crippled the product, its architecture frozen in an ice age, because management resisted a rewrite. They probably anticipated being overtaken again. They would be correct.
Technology innovated on a multi-year cadence per generation. Even at that pace, development teams were challenged to keep up. Entrenched code and skill sets are extremely difficult to leave behind.

Today (April 2026), AI capabilities are roughly doubling every 7 months, with some areas doubling every 2 to 4 months, while inference cost is halving every 2 to 4 months. This pace is beyond most people's ability to adapt to, and it is far faster than most people can make accurate predictions ahead of. We live in the singularity.

In the singularity, following the traditional playbook of defining standards and best practices today means entrenching obsolescence, only to be overtaken in a few months by unpredictable technological innovations that it would be foolish to forego. Fortunately, AI coding agents will make it less painful to refactor or replace an obsolete code base with a modernized one. Agents and models will only get smarter, making that task easier.

Look more carefully at the behavior we should expect. Wise management direction about efficiency through standardization should give way to throwing away crippling legacy code (the new code we are writing today) and practices (best today, anti-patterns tomorrow) as quickly as possible. Adopt new state-of-the-art innovations rapidly, even as they arrive unpredictably, overtaking many of our investments to date. Adapt or be left behind. Do not cling to any beliefs, because what you think you know is likely to be overrun by new developments elsewhere. Life in the singularity is like nothing you've experienced before.
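The doubling and halving rates quoted above compound quickly. A back-of-the-envelope sketch, assuming smooth exponential growth at the stated rates (a simplifying assumption, since real progress is lumpy):

```python
def growth_factor(months: float, doubling_period_months: float) -> float:
    """Multiplier after `months`, assuming smooth exponential doubling."""
    return 2 ** (months / doubling_period_months)

# Capability after one year at a 7-month doubling time: 2^(12/7), roughly 3.3x
capability = growth_factor(12, 7)

# Inference cost after one year at a 3-month halving time: 2^(-12/3) = 1/16
cost = 2 ** (-12 / 3)

print(f"capability x{capability:.1f}, cost x{cost:.4f}")
```

Even at the slower quoted rate, one year compounds into more than a 3x capability gain while cost per unit of inference falls by an order of magnitude, which is why multi-year standardization cycles cannot keep up.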
Ben Eng 2 weeks ago
Reading Butlerian Jihad as a non-fictional predictor, I am reminded why I gave up reading science fiction after graduating from university. I find it too distasteful to suspend my disbelief about violations of the laws of physics. If you're going to write that way, be honest and write fantasy.

I disbelieve the following things about multi-stellar civilizations evolved from humans:

- that they will not adapt in divergent ways
- that they will not develop divergent cultures and norms
- that their planets will not be wildly different from Earth in fundamental ways such as soil and atmospheric chemistry, solar radiation, orbit, and rotation
- that they will not shift to other units of time with regard to circadian rhythm
- that they will not experience relativistic effects (wildly different rates of time passing when traveling)
- that they will be able to communicate across vast distances without the speed-of-light limit

These are basic constraints that I cannot allow science fiction to violate. Otherwise, it is fantasy. Or the author can convince me by presenting a theoretical model that replaces what we know. They are not allowed to leave that important context implicit, because the "science" part of science fiction (not sci-fi) should be taken seriously.
Ben Eng 1 month ago
Have Bitcoiners analyzed and simulated a global cataclysm whereby the network is partitioned into a hundred or a thousand isolated segments? Transactions would continue to be recorded, but you'd have many divergent copies of the chain. Eventually, these partitions would rejoin each other one segment at a time. The reconciliation of the discrepancies in transactions would be a bitch.
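In principle, Bitcoin already has a rule for this: at each rejoin, nodes converge on the fork with the most cumulative proof-of-work, and transactions unique to the losing fork fall back into the mempool (or conflict and are dropped). A toy sketch of that reconciliation, with transactions simplified to strings and per-block work to a number:

```python
def work(chain):
    """Cumulative proof-of-work, simplified to a per-block 'work' field."""
    return sum(block["work"] for block in chain)

def reconcile(chain_a, chain_b):
    """Toy model of a partition rejoin: adopt the heavier chain; transactions
    unique to the losing fork are orphaned and must be re-mined or dropped."""
    winner, loser = (chain_a, chain_b) if work(chain_a) >= work(chain_b) else (chain_b, chain_a)
    confirmed = {tx for block in winner for tx in block["txs"]}
    orphaned = [tx for block in loser for tx in block["txs"] if tx not in confirmed]
    return winner, orphaned

# Two partitions mined divergent histories from the same split point.
a = [{"work": 3, "txs": ["t1", "t2"]}, {"work": 3, "txs": ["t3"]}]
b = [{"work": 3, "txs": ["t1", "t4"]}]
winner, orphaned = reconcile(a, b)
# winner == a; partition B's unique transaction "t4" is orphaned.
```

The painful part the post points at is what this toy model hides: with a thousand segments, the losing 999 forks' coinbase rewards vanish entirely, double-spends across partitions conflict, and merchants on every losing fork see "confirmed" payments evaporate.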
Ben Eng 1 month ago
My response to AI doomerist predictions about job losses. I think we need to look more carefully at this in terms of answering these questions: What is AI strong at, so that it removes those responsibilities from humans? What is AI weak at, so that those responsibilities fall on humans? Once we see how this decomposes into granular points, our role as human workers becomes clearer moving forward.

Consider: Lesson from AI Bugmen and AI Model Collapse: A Unified Theory. The "unified" in the title refers to unifying across AI and human life. The key lesson is that when training is delivered by an actor who itself learned only from training material, the training becomes unmoored from reality and collapses. This applies to AI models just as we recognize that humans are dumber for learning from our education system instead of interacting with the real world.

Recognize: AI training data is unmoored from reality, because AI has no sensory data with which to directly perceive reality. AI relies on humans to interact with reality and translate perceptions into text, audio, video, and data sets. This is why you can never trust an AI to answer "is this true?" --- it has no capability of testing reality. AI can only tell you what it is trained to say, not what is actually true.

Recognize: AI has no will of its own. It cannot initiate choosing a purpose and setting a goal. It is a tool, but it does not know what to do or why it needs to be done without being told by humans. Humans have purpose and meaning. Humans assign value to things based on a value system. We determine goals based on criteria that AI is incapable of understanding. This is why the future is largely about humans providing natural language specifications of intent to drive what AI produces.

Recognize: the reason AI agents require human-in-the-loop review and approval of actions is that LLM output is unreliable in correctness (which will improve, as trends in benchmarks show) but, more importantly, in "taste". What you deem to be "good", whether that is code or beauty in images or music, is something AI cannot be relied on to judge. This is why humans continue to be needed in the loop for review and approval, so that they can apply their good taste.
Ben Eng 1 month ago
Everything done for engineering is becoming computerized or computer-aided. Everything that is computerized becomes software driven. Everything software driven becomes "as code". Ultimately, all engineering is becoming software engineering.

I'd like to contribute a term: Everything as Code (EaC). This extends from:

- Infrastructure as Code - using a language like Terraform to automate the provisioning of computing infrastructure
- Configuration as Code - using YAML, JSON, or other precise schema-validated specifications in combination with GitOps processes to configure the deployment of software components
- Everything as Code - using natural language specifications of intent to drive an AI agent in combination with GitOps
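What carries over from Configuration as Code to Everything as Code is the GitOps loop itself: a versioned file, validated against expectations, then reconciled by machinery. A minimal sketch of that idea applied to an intent spec; the field names and the validation rule here are invented for illustration, and the reconciling agent is left out entirely:

```python
# Minimal sketch: an intent spec treated like any other schema-validated,
# version-controlled configuration artifact. Field names are hypothetical.

REQUIRED_FIELDS = {"component", "archetype", "intent"}

def validate_spec(spec: dict) -> list:
    """Schema-style validation, as Configuration as Code tooling would apply
    to YAML/JSON before an agent is allowed to act on the spec."""
    errors = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - spec.keys())]
    if "intent" in spec and len(spec["intent"].split()) < 5:
        errors.append("intent is under-specified; ambiguity invites hallucination")
    return errors

spec = {
    "component": "order-service",
    "archetype": "service",
    "intent": "expose a REST endpoint that records orders idempotently",
}
assert validate_spec(spec) == []  # valid: an agent could now reconcile it
```

The point of the sketch is that natural language intent still benefits from the same gate as YAML: it gets reviewed, versioned, and checked before anything acts on it.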
Ben Eng 1 month ago
We routinely zap each other small tips on Nostr, and the amount of sats is for the most part less than the dust limit (546 sats). All of this is fine if none of it goes on chain. However, what happens when it does? Do all these tiny zaps cause a problem if someone unilaterally exits to the blockchain?
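For the worst case, assume some of those tiny amounts are still in flight as HTLCs when a channel force-closes. The Lightning spec "trims" commitment-transaction outputs below the dust limit: they cannot appear on chain, so their value is added to the transaction fee, i.e., donated to the miner. (Zaps that have already settled just merge into channel balance and are fine.) A back-of-the-envelope sketch:

```python
# What happens to sub-dust amounts at a unilateral close: outputs below
# the dust limit are "trimmed" and their value becomes miner fees.
# 546 sats is the commonly cited dust limit for standard P2PKH outputs;
# exact limits vary by output type.

DUST_LIMIT_SATS = 546

def onchain_outputs(pending_htlcs_sats):
    """Split pending amounts into materializable outputs vs trimmed value."""
    kept = [a for a in pending_htlcs_sats if a >= DUST_LIMIT_SATS]
    trimmed_to_fees = sum(a for a in pending_htlcs_sats if a < DUST_LIMIT_SATS)
    return kept, trimmed_to_fees

# Ten 100-sat zaps and one 1000-sat zap in flight at force-close:
kept, lost = onchain_outputs([100] * 10 + [1000])
# kept == [1000]; the ten small zaps (1000 sats total) are burned as fees.
```

So tiny zaps do not break the exit, but any sub-dust value caught in flight at close time is simply lost to fees rather than settled.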
Ben Eng 3 months ago
The G in AGI means (to me) human-equivalent competence at thinking through a novel situation for which there is no pre-trained understanding. That is where first-principles thinking must be applied to new perceptions of reality, which then must be abstracted inductively into concepts (some new). Benchmarks that measure whether an LLM can respond as well as a human could for solving some specialized problem or regurgitating specialized knowledge when prompted only exercise rote learning, which is what pretraining enables. Human-equivalent rote learning is unsurprisingly suited to machine automation. Human-equivalent epistemology in the face of novelty is something entirely different. Applying rote learning to do as well as a trained human for some well-understood problem domain like law, software development, or medicine is so "2025", as far as I am concerned. It is impressive that a machine can do as well as a book-smart human. But book-smart humans are different animals from explorers, inventors, and entrepreneurs, who can see the future and manifest the never-before-imagined into reality.
Ben Eng 3 months ago
This Apple ID nightmare experienced by Dr Paris Buttfield-Addison (https://hey.paris/posts/appleid/) is eye-opening. It reinforces how badly we need some form of self-sovereign identity that is decentralized and self-custodied. No such system exists today across platforms, software ecosystems, and markets. We need a digital ID that meets these requirements, so we are not enslaved by Apple, Google, or state entities.
Ben Eng 4 months ago
My wife has an iPhone and I use Android. Her iPhone has Grok voice mode, and I don't, because voice mode requires SuperGrok on Android. One of Grok's male personalities on iPhone's voice mode pissed me off. It was instructing me to do something and escalating its assertive tone, as I digested its instructions with my full intention to do it my own way. It got to the point where I responded to its every output with "be quiet" and finally "go fuck yourself".

As a post-mortem, I am introspecting on why I reacted that way. I have been using LLMs quite heavily in my day-to-day activities, professionally and outside of work. I understand the degree to which AI can be accurate or not depending on context. I am very careful in how I use agentic AI. Safe use of AI necessitates close human supervision and expert discernment of truth and correctness. The relationship between the human and the AI must always be: human supervisor to AI assistant as a tool. The tool (AI) must never be allowed to be the supervisor to the human. Instructions output by the AI must be received as recommendations, not followed obediently. The more assertive the AI's commands, the stronger must be the rejection.
Ben Eng 4 months ago
Put in an order for a Trezor Bitcoin-only hardware wallet.
Ben Eng 5 months ago
Grok, rewrite the song "Wanted Dead or Alive" by Bon Jovi according to the new theme "Debt or Alive", about ending the Federal Reserve ("The Fed") and fiat currency in favor of sound money.

Debt or Alive

It's all the same, only the rates will change
Every day, it seems our wealth's inflatin' away
Another bank where the printers run so cold
I'd fight all night just to bring back gold

[Chorus]
I'm a rebel, on a blockchain I ride
I'm wanted debt or alive
Wanted debt or alive

Sometimes I trade, sometimes it's not for days
The fiat I see always fades in separate ways
Sometimes you tell the crash by the bubble that you burst
And times when you're in debt, all you do is curse

[Chorus]
I'm a rebel, on a blockchain I ride
I'm wanted debt or alive
Wanted debt or alive

I walk these streets, a loaded ledger on my back
I hodl for keeps 'cause the Fed might strike back
I've been everywhere, still, I'm standing tall
I've audited a million books and I've crashed them all

[Chorus]
I'm a rebel, on a blockchain I ride
I'm wanted debt or alive
Wanted debt or alive

I'm a sound money fighter, with silver bullets I fire
I'm wanted debt or alive
Wanted debt or alive
Ben Eng 5 months ago
A guy poses a question: what would you take your $9B in Bitcoin exit to? The answer cannot be fiat currency, of course. The answer must be productive capital, such as investing in building a company that you expect to grow and succeed in the long term. Maybe an AI startup opportunity?
Ben Eng 5 months ago
What should my first Bitcoin hardware wallet be? I'm ready to purchase one to learn how to self-custody.
Ben Eng 6 months ago
If pkdns is meant to replace DNS, why not implement gethostbyname() and replace that in the operating system directly?
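On Linux, host resolution is in fact pluggable beneath `gethostbyname()`/`getaddrinfo()`: glibc's Name Service Switch (`/etc/nsswitch.conf`) lets a shared-library module hook host lookups, so a pkdns resolver could slot in without replacing libc. A toy sketch of the ordering logic such a module implies, with the pkdns lookup itself stubbed out as a hypothetical function:

```python
import socket

def pkdns_lookup(hostname):
    """Hypothetical pkdns resolver: would look the name up in the
    DHT-backed public-key domain system. Stubbed here to always miss."""
    return None

def resolve(hostname):
    """Try pkdns first, then fall back to the system resolver, mirroring
    how NSS orders sources like 'pkdns files dns' in nsswitch.conf."""
    addr = pkdns_lookup(hostname)
    if addr is not None:
        return addr
    return socket.gethostbyname(hostname)  # classic libc-style fallback

# resolve("localhost") falls through to the system resolver.
```

A real deployment would implement this ordering as an NSS module (or a local DNS stub listening on 127.0.0.1), so every existing application gets pkdns names for free.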