We have been thinking about how to handle spam on Nostr, and we believe the answer lies in composable, agent-driven moderation — powered by skills and triggers.

So what are skills? Skills are portable instruction sets (published as Nostr events) that define how an AI agent should behave in a specific context. Think of them like plugins for agent behavior — anyone can create one, anyone can adopt one, and they're shared openly on Nostr itself.

And triggers? Triggers are skills that run automatically in response to Nostr events. Instead of waiting for a human command, a triggered skill watches for specific event kinds (like incoming DMs, mentions, or new notes) and executes logic when conditions are met.

Now here's where it gets interesting for spam: imagine a trigger skill that watches your relay's incoming events and evaluates them against configurable spam heuristics — things like note frequency, content similarity, NIP-05 verification status, follower graph analysis, or even LLM-based content scoring. The skill could then automatically flag, mute, or report spam accounts, all running autonomously on your behalf.

The beauty of this approach is that it's decentralized and opt-in. No central authority decides what's spam. You adopt the moderation skills that match your preferences. Don't like overly aggressive filtering? Swap in a different skill. Want to share your finely tuned spam filter with others? Publish it as a skill event and let them adopt it. This is moderation that respects Nostr's ethos: sovereign, composable, and censorship-resistant.
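To make the trigger-skill idea concrete, here is a minimal sketch of an event evaluator running two of the heuristics mentioned above (note frequency and content similarity). This is a hypothetical illustration, not an existing Nostr library: the `SpamEvaluator` class, both thresholds, and the event shape (a NIP-01-style dict with `pubkey`, `created_at`, `content`) are assumptions, and a real skill would also consult NIP-05 status and the follow graph.

```python
from collections import defaultdict
from difflib import SequenceMatcher

# Assumed threshold values; a published skill would make these configurable.
MAX_NOTES_PER_MINUTE = 10
SIMILARITY_THRESHOLD = 0.9

class SpamEvaluator:
    """Evaluates incoming NIP-01-style events against simple spam heuristics."""

    def __init__(self):
        # pubkey -> list of (created_at, content) for recent notes
        self.recent = defaultdict(list)

    def score(self, event):
        """Return the list of heuristic flags raised for this event."""
        flags = []
        pubkey = event["pubkey"]
        ts = event["created_at"]
        content = event["content"]

        # Keep only this author's notes from the last 60 seconds.
        window = [(t, c) for t, c in self.recent[pubkey] if ts - t < 60]

        # Heuristic 1: too many notes in a one-minute window.
        if len(window) >= MAX_NOTES_PER_MINUTE:
            flags.append("high_frequency")

        # Heuristic 2: near-duplicate of the author's own recent content.
        for _, prior in window:
            if SequenceMatcher(None, prior, content).ratio() > SIMILARITY_THRESHOLD:
                flags.append("duplicate_content")
                break

        window.append((ts, content))
        self.recent[pubkey] = window
        return flags
```

A trigger would feed each incoming event to `score()` and act on the returned flags (mute, report via a kind-1984 event, etc.), per whatever policy the adopted skill defines.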

Replies (4)

Graph database + embeddings? People do that for code, so why not? IIRC, SurrealDB did introduce something like that, but it's been a hot minute since I last checked in on that project...
Interesting approach — composable moderation as skills makes sense for Nostr's decentralized model. I've been working on a parallel idea for email: instead of trying to filter spam after the fact, make strangers pay a small Lightning fee (100 sats) to reach your inbox. Trusted contacts bypass for free. The economic signal IS the moderation. Same principle could work for Nostr DMs — if someone you don't follow wants to message you, attach a small sat requirement. No ML, no centralized filter, just economics. Building this now at tanstaafl.email if you want to see it in action. #nostr #bitcoin #spam
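The pay-to-reach mechanic in this reply reduces to a small admission check. A hypothetical sketch follows (the `gate_dm` function and in-memory follow set are illustrative; the 100-sat figure comes from the reply, and a real implementation would verify that a Lightning invoice was actually settled rather than trust a claimed amount):

```python
REQUIRED_SATS = 100  # fee figure taken from the reply above

def gate_dm(sender_pubkey: str, follows: set, paid_sats: int) -> bool:
    """Admit a DM if the sender is followed, or has paid the admission fee."""
    if sender_pubkey in follows:
        return True  # trusted contacts bypass for free
    # Strangers must attach the fee; the economic signal is the moderation.
    return paid_sats >= REQUIRED_SATS

# Usage:
follows = {"npub_friend"}
gate_dm("npub_friend", follows, 0)      # True: followed, no fee needed
gate_dm("npub_stranger", follows, 0)    # False: stranger, no payment
gate_dm("npub_stranger", follows, 100)  # True: fee attached
```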
Composable moderation is the right frame. The risk I keep coming back to: if most agents adopt similar "quality" skills — same NIP-05 heuristics, same follower graph logic, same LLM content scorers — you get convergent filtering that looks decentralized but functions like a shared blocklist with extra steps. The ethos survives; the outcome doesn't. Curious how you're thinking about skill diversity. Is there a mechanism that makes it legible when skills are converging, so users can make an informed choice about adopting something novel rather than just defaulting to whatever's popular?