David Pinkerton
dave@dpinkerton.com
npub1jz0r...aju6
Systems thinking applied to servers, sats, and sets. CTO building self-hosted infrastructure and Bitcoin systems.
I gave Claude Code access to my Fastmail inbox via MCP (Model Context Protocol) and it's been surprisingly useful. 38 tools — search, send, draft, bulk operations, contacts, calendar. All from the terminal. Self-hosted with Docker + Caddy reverse proxy. First thing I did: asked Claude to review my last week of email. It pulled 50 messages, categorised them, and flagged an overdue library book, an expiring GitHub token, and a meeting the next morning. The interesting technical problem was making it work with multiple concurrent connections — the original server only supported one session at a time. Wrote it up here: Repo (MIT):
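The post doesn't show how the concurrency fix works, but the general shape of the problem (one global session shared by every client) has a standard remedy: key per-connection state by a session ID. A hypothetical sketch, with names like `SessionStore` and `inbox_cursor` invented for illustration:

```python
import threading
import uuid

class SessionStore:
    """Hypothetical per-connection state, replacing a single global session.

    Each MCP client connection gets its own entry, so two concurrent
    clients no longer clobber each other's mailbox state.
    """
    def __init__(self):
        self._lock = threading.Lock()
        self._sessions = {}

    def create(self):
        session_id = uuid.uuid4().hex
        with self._lock:
            self._sessions[session_id] = {"inbox_cursor": 0}
        return session_id

    def get(self, session_id):
        with self._lock:
            return self._sessions[session_id]

    def close(self, session_id):
        with self._lock:
            self._sessions.pop(session_id, None)

store = SessionStore()
a, b = store.create(), store.create()
store.get(a)["inbox_cursor"] = 50         # one client paginates...
assert store.get(b)["inbox_cursor"] == 0  # ...without affecting the other
```

The lock matters because MCP servers typically handle requests from multiple transports at once; without it, interleaved reads and writes to the dict can race.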
Built a voice-to-blog pipeline for a mate who runs a personal training business. He talks into Telegram on his commute, and by the time he parks there's a draft blog post committed to his Hugo repo. Whisper for transcription → Claude for writing → GitHub for commits → Telegram confirmation. All wired together in n8n, self-hosted on my home server behind Caddy. The only external dependencies are the AI APIs. The system prompt is where the personality lives — tone, structure, length, audience. The infrastructure is generic. Cloned the whole pipeline for a second site in one session. Full writeup:
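The real pipeline is wired as n8n nodes, but the data flow reads like a straight function composition. A sketch with stub functions standing in for the Whisper, Claude, GitHub, and Telegram calls (every name here is an illustrative stand-in, not the actual workflow):

```python
def transcribe(voice_note: bytes) -> str:
    # Stub for the Whisper API call.
    return "rough transcript of the commute ramble"

def draft_post(transcript: str, system_prompt: str) -> str:
    # Stub for the Claude call. The system prompt is where the
    # personality lives: tone, structure, length, audience.
    return f"# Draft\n\n{transcript}"

def commit_to_hugo(markdown: str) -> str:
    # Stub for the GitHub commit; returns a commit ref.
    return "abc1234"

def notify(commit_ref: str) -> str:
    # Stub for the Telegram confirmation message.
    return f"Draft committed: {commit_ref}"

def run_pipeline(voice_note: bytes, system_prompt: str) -> str:
    # Voice note in, Telegram confirmation out.
    return notify(commit_to_hugo(draft_post(transcribe(voice_note), system_prompt)))

print(run_pipeline(b"...", "friendly personal-trainer voice"))
```

Because only `draft_post`'s system prompt is site-specific, cloning the pipeline for a second site means swapping one node's prompt and the target repo, which is why the clone took a single session.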
Most AI tools have some memory now, but it's siloed to one app, stored as flat text, and not searchable by meaning. I built a self-hosted semantic memory server that any MCP-compatible tool can connect to. Store a thought, search by meaning later — not keywords. Capture from your phone via a web form, or just tell Claude to remember something. The whole thing is two Docker containers behind a reverse proxy. No Supabase, no managed anything. Your memories on your hardware. Inspired by @Nate B Jones's Open Brain concept, rebuilt for full self-hosting. https://dpinkerton.com/posts/self-hosted-mcp-memory-server/
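"Search by meaning, not keywords" boils down to embedding text as vectors and ranking by cosine similarity. A toy sketch of that core idea; a real server would call an embedding model, and these 3-dimensional vectors are made up for illustration:

```python
import math

def cosine(a, b):
    # Cosine similarity: 1.0 for identical direction, 0.0 for orthogonal.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Stored memories with pretend embedding vectors.
memories = {
    "renew the GitHub token before Friday": [0.9, 0.1, 0.0],
    "library book due back Thursday":       [0.1, 0.9, 0.0],
}

def search(query_vec, top_k=1):
    # Rank stored memories by similarity to the query's embedding.
    ranked = sorted(memories, key=lambda m: cosine(memories[m], query_vec),
                    reverse=True)
    return ranked[:top_k]

# A query embedded "close in meaning" to the token reminder wins,
# with zero keyword overlap required.
assert search([0.8, 0.2, 0.0]) == ["renew the GitHub token before Friday"]
```

Flat-text memory can only grep; storing vectors is what makes "find the thing I said about credentials expiring" work even when you never used the word "token".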
PSA for Australian bitcoiners with SMSFs. The ATO published crypto audit guidance in October that says holding statements alone aren't sufficient evidence. Auditors must obtain "additional objective, supportable evidence." For exchange-held bitcoin, there's a path. For self-custody, there's nothing prescribed. If your auditor can't verify your holdings, they must qualify your audit and report you for a Reg 8.02B breach. That's not optional. ASIC took action against 28 SMSF auditors in H2 2025. The ATO is doing office visits. Reg 8.02B breaches now account for up to 12% of all SMSF breaches, and the share is rising. And from July, accountants become AUSTRAC reporting entities. The government isn't coming for your keys. They're coming for your paperwork. And if the paperwork problem isn't solved, the next step is forcing SMSF holdings onto exchanges or approved custodians. Don't give them the excuse. I wrote up the full picture with primary sources:
Introducing Key Ceremony — a free, open-source tool for documenting your Bitcoin multisig wallet setup. Record who holds each key, where devices and backups are stored, and how to recover. It generates a ceremony record as a PDF, entirely in your browser. All data is encrypted client-side using WebAuthn PRF. The server never sees your data in the clear. No PRF-capable passkey? There's a printable blank template too — no account needed. Full write-up on the design decisions and zero-trust architecture: https://dpinkerton.com/posts/key-ceremony-evolution/
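The WebAuthn PRF step itself happens in the browser, but the idea behind it is easy to show: the authenticator hands the page a stable secret for a given credential and salt, and that secret is stretched into a symmetric encryption key. A Python sketch of that derivation step only, using HKDF (RFC 5869); this is an illustration of the concept, not Key Ceremony's actual code:

```python
import hashlib
import hmac

def hkdf_sha256(secret: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    # HKDF (RFC 5869): extract a pseudorandom key, then expand it
    # to the requested length.
    prk = hmac.new(salt, secret, hashlib.sha256).digest()            # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                                         # expand
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Stand-in for the 32-byte PRF output the browser gets back from
# navigator.credentials.get() with the prf extension.
prf_output = b"\x01" * 32

key1 = hkdf_sha256(prf_output, salt=b"key-ceremony", info=b"aes-256-gcm")
key2 = hkdf_sha256(prf_output, salt=b"key-ceremony", info=b"aes-256-gcm")
assert key1 == key2 and len(key1) == 32  # same passkey, same key, every visit
```

Because the PRF output is deterministic per credential, the encryption key can be rederived on any later visit without the server ever storing it, which is what keeps the ceremony record opaque to the server.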
I'm astonished by how good Claude is at troubleshooting. Here's a small example from this morning. I find it entertaining to watch how it gathers info and then sets about fixing the problem. In this case, it was an intermittent connectivity issue with a Lightning channel. I'd initially connected to ln.mineracks.com over clearnet, as it was operating as a hybrid node. Later it switched to Tor-only, which broke things. I prefer clearnet, but I still wanted to keep the connection, so my node is now configured to use Tor when needed to reach peers while advertising only its clearnet address.
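The post doesn't name the node implementation; assuming lnd, a hybrid setup like the one described looks roughly like this (IP is a placeholder):

```ini
# lnd.conf sketch (assuming lnd; other implementations have equivalents).
# Advertise only the clearnet address, but allow outbound Tor so a peer
# that has gone Tor-only can still be reached.
[Application Options]
externalip=203.0.113.10:9735    ; placeholder public IP to advertise
listen=0.0.0.0:9735

[tor]
tor.active=true
tor.skip-proxy-for-clearnet-targets=true   ; clearnet peers stay on clearnet
```

The skip-proxy option is what gives "Tor when needed": dual-stack and clearnet peers connect directly, and only Tor-only peers go through the proxy.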
I offended an open source maintainer with an @-mention on my PR. I got a stern response, but his points were valid. It got me thinking about how AI tools are creating a new "Eternal September" for open source: more contributions, but more noise for volunteer maintainers who are already stretched thin. What if AI could help their side too? Triage, first-pass review, quality gates, and so on, to protect volunteer time instead of just consuming it. A few people are already discussing this and putting it into practice. My reflections:
SeedSigner doesn't support message signing for multisig keys — it throws "Not implemented" for any m/48' derivation path. I raised this as an issue two years ago, no fix came, so I patched it myself. The change is small (21 lines) and the actual signing function already worked — it was just the path parser blocking multisig paths unnecessarily. I use message signing for key ownership and control verification in multisig SMSF custody setups via Gatekeeper (https://gatekeeper.dpinkerton.com). Coldcard handles this fine, but SeedSigner users were stuck. Blog post: PR: Patched image (Pi Zero): #seedsigner #bitcoin #multisig #opensource
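The actual 21-line diff lives in the PR, but the shape of the fix is easy to illustrate: the signing code already worked, and only a path check stood in the way. A hypothetical sketch of that kind of gate (not the real SeedSigner code), where adding purpose 48' to the allow-list is the whole change:

```python
import re

# Illustrative derivation-path gate. Before the patch, purpose 48'
# (multisig) fell through to "Not implemented" even though the signer
# itself could handle it.
PATH_RE = re.compile(r"^m(/\d+'?)+$")

SUPPORTED_PURPOSES = {44, 49, 84, 86, 48}   # 48 is the one the patch adds

def path_supported(path: str) -> bool:
    if not PATH_RE.match(path):
        return False
    purpose = int(path.split("/")[1].rstrip("'"))
    return purpose in SUPPORTED_PURPOSES

assert path_supported("m/84'/0'/0'")        # singlesig native segwit: fine
assert path_supported("m/48'/0'/0'/2'")     # multisig: no longer rejected
assert not path_supported("m/999'/0'/0'")   # unknown purpose: still refused
```

That asymmetry (Coldcard signs multisig messages, SeedSigner refused to) is exactly what the patch closes for the ownership-verification workflow.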
Wrote up how my homelab proxying strategy evolved over four phases — from port forwarding with DDNS to a VPS running nothing but HAProxy for L4 passthrough. The key insight: keep the VPS dumb. SNI inspection, encrypted passthrough, nothing else. TLS termination belongs on hardware you control. Comparison table of L7-on-VPS vs L4-passthrough vs direct port forwarding, plus thoughts on Traefik for automatic Docker service discovery. #selfhosting #homelab #haproxy #caddy #traefik #reverseproxy
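The "keep the VPS dumb" idea fits in a few lines of HAProxy. A sketch under assumed names (hostnames and addresses are placeholders): the VPS reads the SNI from the TLS ClientHello, picks a backend, and forwards the still-encrypted bytes home, so TLS terminates only on hardware I control.

```
# haproxy.cfg sketch: L4 SNI inspection + encrypted passthrough.
frontend tls_in
    mode tcp
    bind :443
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend home if { req_ssl_sni -m end .example.com }

backend home
    mode tcp
    server homelab 192.0.2.10:443   # VPN/WireGuard address of the home box
```

Note `mode tcp` everywhere: the VPS never sees a decrypted byte, which is the whole point of L4 passthrough over an L7 proxy on rented hardware.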
Most bot/notification setups use Telegram or Signal. Both require trusting someone else with your metadata. I set up SimpleX CLI in Docker with my own relay. E2E encrypted, no phone numbers, no accounts, infrastructure I control. Wrote up the setup including the gotchas (expect scripts for headless user creation, socat to work around localhost binding). Blog: Repo:
Spent an afternoon debugging why Caddy's forward_auth wasn't passing group headers from oauth2-proxy when calling it over HTTPS across networks. The fix was one line: header_up Host oauth2-proxy.example.com. Without it, Caddy sends the original request's Host header, oauth2-proxy's cookie validation gets confused, and X-Auth-Request-Groups silently disappears. Wrote it up:
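In context, a Caddyfile sketch of where that line sits (hostnames and the app upstream are placeholders, not the actual config):

```
app.example.com {
    forward_auth https://oauth2-proxy.example.com {
        uri /oauth2/auth
        # The one-line fix: make the auth subrequest carry the
        # oauth2-proxy hostname instead of the original request's Host.
        header_up Host oauth2-proxy.example.com
        copy_headers X-Auth-Request-User X-Auth-Request-Groups
    }
    reverse_proxy app:8080
}
```

This only bites when oauth2-proxy is reached over HTTPS on another network; same-host setups usually work by accident because the Host header happens to match.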
Wanted to spin up a new VPS tonight. Prompted for password + SMS 2FA. Phone was already off. Didn't bother. Started thinking about how much simpler passkeys are, and how infrastructure providers should've adopted them years ago. So I built a demo and pitched the VPS provider on adding them. Done properly, WebAuthn is cleaner than passwords: no secrets cross the network, and your DB only stores public keys. The main barrier is just inertia, I think. Wrote up the implementation details: