I guess what I'm saying is, how could an LLM know whether a particular sentence is something I don't want anyone else reading? Even if it claimed to know, I couldn't be sure unless I read every sentence myself - and if I'm going to demand 100% coverage, I might as well do the whole thing myself to begin with.

Replies (1)

What I'm saying is, it's faster to skim something and recognize errors (editing) than it is to produce it (authoring). Assuming there's even a vaguely categorical difference between personal and public, you can probably have an LLM separate the two. Each time it misclassifies something, explain why it's wrong and run it again. If nothing else, it's a valuable experiment in the capabilities of current systems. I've found Claude far exceeds my expectations on tasks like these.
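If you wanted to try that loop, a rough sketch might look like the following. `llm_classify` here is a hypothetical stand-in, not a real API; a real version would send the sentence plus the accumulated feedback notes to whatever model you use. The toy version just keys on marker phrases so the feedback loop itself can be demonstrated.

```python
def llm_classify(sentence: str, feedback: list[str]) -> str:
    """Stand-in for an LLM call: returns 'private' or 'public'.

    A real implementation would include the feedback notes in the prompt;
    this stub treats notes like "private: salary" as new private markers.
    """
    markers = ["password", "diary"]
    for note in feedback:
        if note.startswith("private:"):
            markers.append(note.removeprefix("private:").strip())
    return "private" if any(m in sentence.lower() for m in markers) else "public"


def triage(sentences: list[str], feedback: list[str]) -> dict[str, str]:
    """Classify every sentence with the current feedback applied."""
    return {s: llm_classify(s, feedback) for s in sentences}


sentences = [
    "The password to the router is taped underneath it.",
    "The meeting moved to Thursday.",
    "My salary came up at dinner.",
]

feedback: list[str] = []
print(triage(sentences, feedback))

# You skim the output, spot that the salary sentence was marked public,
# explain why it's wrong, and run it again:
feedback.append("private: salary")
print(triage(sentences, feedback))
```

The point of the sketch is the shape of the workflow: you only author short feedback notes, and the classifier absorbs each correction on the next pass - skimming and correcting instead of reviewing every sentence from scratch.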