Dustin Dannenhauer


dustind@dtdannen.github.io
npub1mgvw...pdjc
DVM maximalist. Building DVMDash, a monitoring and debugging tool for DVMs: https://dvmdash.live. Live DVM stats: https://stats.dvmdash.live. Hacking on ezdvm, a Python library for making DVMs: https://github.com/dtdannen/ezdvm

Notes (9)

Bitchat but it’s only music so you can see what everyone in the gym is listening to
2025-09-05 03:11:05 from 1 relay(s)
Tried to use a Lime scooter today in a major US city and it asked me to upload my driver's license all of a sudden. Took an Uber instead and got a notification that I'm "being recorded for safety". LinkedIn wants me to upload my government ID to be "verified", which is hilarious because I already pay them $70 for premium via a credit card, but apparently that isn't enough verification.
2025-08-13 19:32:37 from 1 relay(s)
Is there any nostr client that would let me make edits to a long form article?
2025-08-01 05:50:43 from 1 relay(s)
When people think about decentralized systems, they often think of Nostr + Bitcoin, but there's a massive decentralized support system for addiction/drinking in the form of Al-Anon and Alcoholics Anonymous: groups spread all over the globe that have been running for decades and have helped millions of people. Each group is local and self-operating, but all follow strict rules regarding operation, self-sufficiency, anonymity, and not getting entangled with public issues like politics, so they can focus on their mission.
2025-07-30 18:28:21 from 1 relay(s)
The alignment narrative in AI is scamming you into believing these models are smarter than they are, and deflecting the failures of AI researchers who are betting big on neural-network-based approaches.

If I tell my model (aka ChatGPT) to be an experienced software developer and write me code to run on a webpage, and it doesn't work, I blame the model not being trained well, a lack of data, bad reward signals if it's a reasoning model, etc. I don't say it's being deceptive. These AI models have no self, no identity, nothing! They only follow prompts to the best of their ability, which is the whole point of fine-tuning them to be good instruction followers. If anyone ever says an AI model is lying, walk away, because that person has no idea how these systems work. Lying requires intention, and these models have no self that intends anything! They are probabilistic word calculators!

An alignment failure where a model blackmails users to prevent being turned off is literally the exact same problem as the model writing bad code when you tell it not to. AI researchers who have worked on non-neural-network and non-reinforcement-learning approaches are not surprised by this. Neural networks and reinforcement learning have decades of exactly these kinds of failures, where researchers hope the system learns one thing but it instead learns something else.

If an AI model doesn't do what I want it to, either my instructions (prompts) are bad or the model isn't trained well. This includes any "deceptive" behavior. If someone tries selling you a narrative that their model is deceptive, ask them why they trained it to be that way.
2025-07-23 18:01:53 from 1 relay(s)