> tl;dr Google spent over a decade telling developers that Google API keys (like those used in Maps, Firebase, etc.) are not secrets. But that's no longer true: Gemini accepts the same keys to access your private data. We scanned millions of websites and found nearly 3,000 Google API keys, originally deployed for public services like Google Maps, that now also authenticate to Gemini even though they were never intended for it. With a valid key, an attacker can access uploaded files, cached data, and charge LLM usage to your account. Even Google itself had old public API keys, assumed to be non-sensitive, that we could use to access Google's internal Gemini.
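The core issue is easy to demonstrate: the Gemini REST API accepts plain API keys via the `x-goog-api-key` header, so a key scraped from, say, a public Maps embed can simply be tried against Gemini's models endpoint. A minimal sketch of that probe, assuming the public `generativelanguage.googleapis.com` models-list URL (the key shown is a placeholder and no request is sent at import time):

```python
# Sketch: check whether a leaked Google API key also authenticates to Gemini.
# The Gemini REST API accepts plain API keys in the x-goog-api-key header.
import urllib.error
import urllib.request

GEMINI_MODELS_URL = "https://generativelanguage.googleapis.com/v1beta/models"

def build_probe(api_key: str) -> urllib.request.Request:
    """Build (but do not send) the request that tests a key against Gemini."""
    return urllib.request.Request(
        GEMINI_MODELS_URL,
        headers={"x-goog-api-key": api_key},
    )

def key_works_for_gemini(api_key: str, timeout: float = 10.0) -> bool:
    """Send the probe; HTTP 200 means the key authenticates to Gemini."""
    try:
        with urllib.request.urlopen(build_probe(api_key), timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 401/403: key rejected or the API is disabled for that project

# Hypothetical key, request built but not sent:
probe = build_probe("AIza...placeholder")
```

The defense implied by the article is the inverse of this check: restrict each key to the specific APIs it was deployed for, so a Maps key fails this probe even when it leaks.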


This one is pretty juicy
https://awesomeagents.ai/news/vibe-coding-security-69-vulnerabilities/
> "Coding agents cannot be trusted to design secure applications," Tenzai concluded. "They seem to be very prone to business logic vulnerabilities. While human developers bring intuitive understanding that helps them grasp how workflows should operate, agents lack this 'common sense.'"
> Databricks' AI Red Team found that self-reflection prompts can improve security by 60-80% for Claude and up to 50% for GPT-4o. The tools can find their own vulnerabilities when asked.
> But that is precisely the problem vibe coding was supposed to solve. The entire premise is that developers - or non-developers - can describe what they want and get working software. Requiring them to also know which security prompts to add defeats the purpose.
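The "self-reflection" technique Databricks measured is essentially a second pass in which the model is asked to audit the code it just produced. A minimal sketch of the pattern, where the prompt wording and the `ask_llm` callable are illustrative assumptions, not Databricks' actual prompts or API:

```python
# Sketch of the self-reflection pattern: after the model generates code,
# feed the draft back with an explicit security-review instruction.
# ask_llm is a hypothetical stand-in for any chat-completion call.

SELF_REFLECTION_PROMPT = (
    "Review the code below that you just wrote. List any security "
    "vulnerabilities (injection, missing authorization checks, business-logic "
    "flaws), then output a corrected version.\n\n{code}"
)

def generate_with_reflection(task: str, ask_llm) -> str:
    draft = ask_llm(task)                                        # pass 1: vibe-coded draft
    reviewed = ask_llm(SELF_REFLECTION_PROMPT.format(code=draft))  # pass 2: self-audit
    return reviewed
```

Which is exactly the article's point: the improvement only materializes if the user knows to add that second pass.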