Vibe coders claim that the mental model is disposable; what's important is the code you produce. But what if it were the opposite? That the code is disposable, and what's important is the mental model you produce?
Replies (16)
Yeah, I'm 100% on board with the latter sentiment. The "best case scenario" for AI coding is that it pushes the required skill set up the stack to architecture / UI / design, rather than who can type the fastest while remembering to put semicolons in the right place.
But I think that probably puts the utility of LLMs at a much lower level — at the level of syntax, or at most functionality. Naming functions is crucial to developing a nomenclature for a project, so LLMs can't really be useful at that level or higher (except for functions you don't care about — in which case, why not just write a big procedural blob).
I don’t know who claims that, but the mental model is definitely the most important part. Tools are evolving now to make things much more deterministic. It’s just early days.
Yes... correct? I'm much more AI-skeptic than most people here, apparently, so I'd agree with the implied premise here: LLMs shouldn't be deployed for any higher-level reasoning, only for things that are rote, procedural, and where there's a clear "manual" or "spec" for what they're writing.
For instance, while LLMs are very useful for certain things in science, their utility from my perspective is mostly limited to synthesizing large bodies of literature, occasionally making relevant connections between disparate fields (which can be very valuable), and of course writing code for simulations or math models etc.
😎
Getting into metaphysical Idealism I see! That ideas are more real than the material they refer to.
Yes. Buildings collapse if they have been built on a faulty mental model, regardless of how sturdy their base material is.
I keep trying, and the LLMs are getting better and better at producing the illusion of a codebase. But sometimes a single line exhibits a complete lack of comprehension of anything related to what it's supposed to be doing. It's so tantalizing.
Haha yes, that sounds like me
well there's your problem bro, you're actually reading the code... just tell them "do it again and don't mess up this time!" 😆
But yeah this is the thing people have talked about where... it can be 99% accurate per-step, but then if it has to do 50 steps... you eventually get down to 0 pretty quick. "Compound failure rate"
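The compounding is just multiplication of per-step success rates, and it's worth noting the actual numbers: at 99% per step, 50 steps still leaves you around 60%, and it takes hundreds of steps to really approach zero. A quick sketch (the numbers are illustrative):

```rust
// Success over a chain of independent steps compounds multiplicatively:
// P(all succeed) = p^n for per-step success probability p and n steps.
fn compound_success(per_step: f64, steps: u32) -> f64 {
    per_step.powi(steps as i32)
}

fn main() {
    // 99% per-step accuracy over 50 steps leaves ~60% end-to-end:
    println!("{:.3}", compound_success(0.99, 50)); // 0.605
    // Hundreds of steps is where it genuinely hits ~zero:
    println!("{:.3}", compound_success(0.99, 500)); // 0.007
}
```

The "independent steps" assumption is doing a lot of work here; in practice errors in agentic runs are correlated, which can make things either better (one fix repairs many steps) or worse (one early mistake poisons everything after it).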
Yeah, still figuring out how I can integrate it more into my life. Interesting things happen when you take this inverted frame seriously, like dead serious. Like when you take the idea of love as literal while recognizing the fight as metaphorical, and then compare that to how you'd live if you truly aligned with the opposite.
Think this is why I’ve found agentic coding to be a natural fit for me. Over my career I’ve spent a lot more time developing mental models and writing less code
Do you use spec driven development or what? I'm currently experimenting with "writing" the app in excruciating detail in markdown (but using real function signatures and names) and having the AI fill in the details. For example:
```
## `async fn create_relay(...) -> Response`
- Serves `POST /relays/:id`
- Authorizes admin and relay owner
- Creates a new relay using `self.repo.create_relay`
- If relay is a duplicate by subdomain, return a `422` with `code=subdomain-exists`
- Return `data` is a single relay struct. Use HTTP `201`.
```
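To make the "fill in the details" step concrete, here's a minimal sketch of the handler logic that spec describes. Everything here is an assumption standing in for the real project: `Response`, `Repo`, and `App` are stub types (a real version would be an async handler in a real web framework, with a real database behind `self.repo`), but the status-code branches follow the spec above.

```rust
use std::collections::HashSet;

// Stub response type; a real framework's Response would carry a body too.
#[derive(Debug, PartialEq)]
struct Response {
    status: u16,
    code: Option<&'static str>,
}

// Stub repo keyed by subdomain; stands in for `self.repo` in the spec.
struct Repo {
    subdomains: HashSet<String>,
}

impl Repo {
    // Err(()) signals a duplicate subdomain.
    fn create_relay(&mut self, subdomain: &str) -> Result<(), ()> {
        if self.subdomains.insert(subdomain.to_string()) {
            Ok(())
        } else {
            Err(())
        }
    }
}

struct App {
    repo: Repo,
}

impl App {
    // Sync stand-in for the spec's `async fn create_relay(...) -> Response`.
    fn create_relay(&mut self, subdomain: &str, is_authorized: bool) -> Response {
        // "Authorizes admin and relay owner" collapsed into one flag here.
        if !is_authorized {
            return Response { status: 403, code: None };
        }
        match self.repo.create_relay(subdomain) {
            // "Return data is a single relay struct. Use HTTP 201."
            Ok(()) => Response { status: 201, code: None },
            // "If relay is a duplicate by subdomain, return a 422
            //  with code=subdomain-exists"
            Err(()) => Response { status: 422, code: Some("subdomain-exists") },
        }
    }
}

fn main() {
    let mut app = App { repo: Repo { subdomains: HashSet::new() } };
    assert_eq!(app.create_relay("foo", true).status, 201);
    let dup = app.create_relay("foo", true);
    assert_eq!(dup.status, 422);
    assert_eq!(dup.code, Some("subdomain-exists"));
    assert_eq!(app.create_relay("bar", false).status, 403);
}
```

The nice property of spec-first is visible even in the stub: every branch in the handler traces back to a bullet in the markdown, so reviewing the generated code is mostly checking that mapping.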
Yes, I use a lot of spec-driven development. Before that I'll usually talk with Claude in a chat window for a good day or two about the whole idea and get into the nitty gritty. I produce a document from that, which I review before adding to a repo with CC, then spend time refining it at a technical level with CC.
For the coding I let it write the code, but in discrete enough blocks that I can easily review and direct.
I like your approach to giving it the structure. One of the things I’ve disliked is that you’re dealing with code that doesn’t “feel” like yours. Defining the signatures must really help with the mental model
I'm also wondering if bad software is less about bad code and more about the developer being unclear on the ultimate need or goal?