zk
zk_@nsec.app
npub1mm8q...gtfj
> 🌐 https://zkwallet.unstoppable
zk yesterday
Malicious Google Chrome extensions have stolen large language model (LLM) conversations and browser data from hundreds of thousands of users. Application security vendor Ox Security detailed the campaign in a recent research blog: malicious Chrome extensions posing as legitimate extensions from a company called AItopia, whose product adds a sidebar to websites for chatting with popular LLMs like ChatGPT and DeepSeek. Ox researchers found two extensions that copied the legitimate app's functionality while also exfiltrating user conversations and browsing data to a command-and-control (C2) server. One, titled "ChatGPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI," had more than 600,000 users and a Google Chrome "Featured" badge; the other, "AI Sidebar with Deepseek, ChatGPT, Claude and more," had over 300,000. Stay alert: according to Ox Security the extensions have already been removed, but I would not be surprised if new ones are already in the store...
zk 3 weeks ago
"You'd have to be braindead to believe WhatsApp is secure in 2026. When we analyzed how WhatsApp implemented its "encryption," we found multiple attack vectors" -- Pavel Durov, co-founder of the Telegram messenger. #WhatsApp
zk 2 months ago
ℹ️ For those who reside outside the USA and think that using Amazon, Google, Microsoft, or any other US cloud service is secure and private for them: the US CLOUD Act of 2018 allows the US Government (and therefore its partners) to access your data regardless of where it is stored. Be smart: self-host your data, and if you insist on doing it wrong, at least encrypt your data before you upload it anywhere.
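A minimal sketch of the "encrypt before you upload" advice, using OpenSSL's symmetric encryption. The filenames and the passphrase are placeholders for illustration; in practice use a strong passphrase (or a tool like gpg or age) and never store the passphrase alongside the file.

```shell
# Demo file standing in for your real data (placeholder name).
printf 'sensitive notes\n' > backup.txt

# Encrypt locally before any upload; the passphrase is a placeholder.
openssl enc -aes-256-cbc -pbkdf2 -iter 100000 -salt \
    -in backup.txt -out backup.txt.enc \
    -pass pass:change-this-passphrase

# Only backup.txt.enc should ever leave your machine.
# To decrypt after downloading it back:
openssl enc -d -aes-256-cbc -pbkdf2 -iter 100000 \
    -in backup.txt.enc -out backup.restored.txt \
    -pass pass:change-this-passphrase
```

The `-pbkdf2 -iter 100000` flags derive the key with a slow hash so the ciphertext resists brute-force guessing far better than OpenSSL's legacy default key derivation.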
zk 3 months ago
Not that most of you give two cents about it since most don't care about privacy, but if you are one of those rare special individuals, stay away from ChatGPT. #ChatGPT In yet another "your chatbot may be leaking" moment, researchers have uncovered multiple weaknesses in OpenAI's ChatGPT that could allow an attacker to exfiltrate private information from a user's chat history and stored memories.