npub1jlrs...ynqn
I've heard people talk about the technique for agentic coding where you have a bunch of agents build the same thing in parallel 100x then pick the best implementation. I'm sort of doing the same thing with pomade right now — instead of building one implementation of the protocol, I'm building three (one in typescript, one in rust, and one in go). But what's neat is I don't have to choose one, because the whole idea is to have multiple separate unrelated custodians, each person can run an entirely separate codebase.
This article says a lot of what I wanted to say about LLMs but couldn't find the words for. I don't agree with his conclusion that leaning in to intellectual property rights and source citation is the solution, though it's an interesting thought. But there are some great sections, particularly in the first half. Here are some highlights:

> LLMs do something very specific: they allow individuals to make forgeries of their own potential output, or that of someone else, faster than they could make it themselves.
>
> Experienced veterans who turn to AI are said to supposedly fare better, producing 10x or even 100x the lines of code from before. When I hear this, I wonder what sort of senior software engineer still doesn't understand that every line of code they run and depend on is a liability.
>
> One of the most remarkable things I've heard someone say was that AI coding is a great application of the technology because everything an agent needs to know is explained in the codebase. This is catastrophically wrong and absurd, because if it were true, there would be no actual coding work to do.
>
> It's also a huge tell. The salient difference here is whether an engineer has mostly spent their career solving problems created by other software, or solving problems people already had before there was any software at all. Only the latter will teach you to think about the constraints a problem actually has, and the needs of the users who solve it, which are always far messier than a novice would think.
Just added this to my opencode build prompt:

> You are in a docker sandbox, which means timestamps on files are often incorrect. To get around this, always touch a file before editing it.

🙄
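For what it's worth, the workaround the prompt describes amounts to something like this (file name is illustrative):

```shell
# Refresh the file's mtime first so timestamp-checking edit
# tools in the sandbox accept the write, then make the edit.
touch notes.md
printf 'edit\n' >> notes.md
```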
Me: follows the directions for setting up NanoClaw on a fresh VPS

NanoClaw:

```
lsof /var/lib/dpkg/lock-frontend 2>&1
kill -9 19631 19750 2>&1; sleep 1; rm -f /var/lib/dpkg/lock-frontend /var/lib/dpkg/lock /var/cache/apt/archives/lock 2>/dev/null; dpkg --configure -a 2>&1
```
Vibe coding is the death of abstraction. Why use the visitor pattern or transducers when the LLM will just scatter `if` statements everywhere anyway?
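For contrast, here's a minimal sketch of the two styles (shape hierarchy and names are illustrative, not from any real codebase):

```typescript
// A small discriminated union of shapes.
type Circle = { kind: "circle"; r: number };
type Square = { kind: "square"; side: number };
type Shape = Circle | Square;

// Scattered-if style: every new operation re-implements the dispatch.
function areaIf(s: Shape): number {
  if (s.kind === "circle") return Math.PI * s.r * s.r;
  if (s.kind === "square") return s.side * s.side;
  throw new Error("unknown shape");
}

// Visitor style: the dispatch is written once, and each new
// operation is just a new visitor object.
interface ShapeVisitor<T> {
  circle(c: Circle): T;
  square(sq: Square): T;
}

function visit<T>(s: Shape, v: ShapeVisitor<T>): T {
  switch (s.kind) {
    case "circle":
      return v.circle(s);
    case "square":
      return v.square(s);
  }
}

const area: ShapeVisitor<number> = {
  circle: (c) => Math.PI * c.r * c.r,
  square: (sq) => sq.side * sq.side,
};
```

The visitor keeps the type-dispatch in one place, which is exactly what gets lost when each generated patch adds its own `if` chain.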
```
docker sandbox save opencode-flotilla my-opencode:v1.0.1
Snapshotting image in sandbox ...
Reading image from sandbox ...
Save complete.
To use the image: docker sandbox create --load-local-template -t my-opencode:v1.0.1 [...]
```

```
docker sandbox create --load-local-template -t my-opencode:v1.0.1
unknown flag: --load-local-template

Usage:  docker sandbox create [OPTIONS] AGENT WORKSPACE

Run 'docker sandbox create --help' for more information
```

great, thank you docker
docker sandbox is clearly vibecoded. Half the flags don't work, and half the commands advertise flags that don't exist.
Spent the day fiddling with agent isolation. At first I went down the rabbit hole of setting up a dev environment on an old macbook and accessing it over wireguard, but the latency was annoying. Then I tried matchlock, which was promising but had weird build and control character issues. Finally, I went with docker sandbox, which is good enough, although I had to use a very dumb hack to get my config into the container. This is a massive product opportunity.
Now that I'm using agents more extensively, I'm thinking about moving my development environment to a VPS to make sure the agents don't send any important data (like my ssh keys) to my provider. Am I being paranoid, or has anyone else done this?