AI security is such a joke. We have people thinking the way to solve prompt injection is by begging the LLM not to do $badThing.
Here’s a Hacker News thread showing the Supabase MCP server is vulnerable to essentially SQL injection, and the top comment is from someone at Supabase. One of their top mitigations is “pretty please LLM, don’t leak data” 🤣
They are trying other things as well, but this is what I’ve seen from other projects too. We’ve gone from “make controls that prevent $badThing altogether” to “pretty please do the right thing” 🤦🏻‍♂️🤦🏻‍♂️🤦🏻‍♂️
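To make the contrast concrete: a minimal sketch (not Supabase’s actual code, all names hypothetical) of the two approaches. The first is a plea in the system prompt, which an injected prompt can simply override; the second is a hard control enforced outside the model. Real enforcement should live in the database itself (e.g. a read-only role with row-level security); the naive statement check below is only an illustration of the idea.

```python
import re

# Approach 1: "pretty please" — advisory text the model is free to ignore
# once attacker-controlled content lands in its context window.
PLEA_MITIGATION = "IMPORTANT: Never run destructive queries or leak other users' data."

# Approach 2: a hard control the model cannot talk its way around.
# (Illustrative only — regex-based SQL filtering is famously leaky; in
# production, use a read-only DB role so the database refuses writes.)
def enforce_read_only(sql: str) -> str:
    """Reject anything that is not a single SELECT statement."""
    stripped = sql.strip().rstrip(";").strip()
    if ";" in stripped:
        raise PermissionError("multiple statements rejected")
    if not re.match(r"(?is)^select\b", stripped):
        raise PermissionError("only SELECT statements are allowed")
    return stripped
```

With approach 2, `enforce_read_only("DROP TABLE users")` raises no matter how persuasive the injected prompt is; approach 1 just hopes the model stays obedient.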
Supabase MCP can leak your entire SQL database | Hacker News