First, Outlook fails.
Now: Bluetooth won't pair.
Artemis II is NASA's most relatable mission yet.
jsr
jsr@primal.net
npub1vz03...ttwj
Chasing digital badness at the citizen lab. All words here are my own.
If you're pissing off the powerful interests, watch this video.
Follow along. Get safer.
They showed us cute missing dogs & we consented to opt into a mass human tracking system.
I think Ring wants to be Flock. On steroids.
Because instead of just sketchy cameras in parking lots, Search Party will cover your own backyards & homes.
And if you & your neighbors want to challenge the loss of privacy? Well, how exactly would you do that effectively?
Because instead of a city contract you can challenge (by going to the city council, reading the contracts, and calling out your mayor for speeding your city toward dystopia), this system is massive and distributed.
Will you even know which of your neighbors is now helping to feed the system?
If we had half-competent privacy regulators & laws in the US, this kind of thing would be a big, hard fight for Ring.
Instead? It's a Super Bowl commercial.
Oh, and yeah, Ring has already partnered with Flock Safety to incorporate tools letting the government directly request footage.
I TRUST YOU BUT YOUR AI AGENT IS A SNITCH: Why We Need a New Social Contract
We’re chatting on Signal, enjoying encryption, right? But your DIY productivity agent is piping the whole thing back to Anthropic.
Friend, you’ve just created a permanent subpoena-able record of my private thoughts held by a corporation that owes me zero privacy protections.
Even when folks use open-source agents like #openclaw in decentralized setups, the default /easy configuration is to plug in an API resulting in data getting backhauled to Anthropic, OpenAI, etc.
And so those providers get all the good stuff: intimate confessions, legal strategies, work gripes. Worse? Even if you’ve made peace with this, your friends absolutely haven’t consented to their secrets piped to a datacenter. Do they even know?
Governments are spending a lot of time trying to kill end-to-end encryption, but if we’re not careful, we’ll do the job for them.
The problem is big & growing:
Threat 1: proprietary AI agents. Helpers inside apps or system-wide stuff. Think: desktop productivity tools by a big company. Hello, Copilot. These companies already have tons of incentive to soak up your private stuff & are very unlikely to respect developer intent & privacy without big fights (Those fights need to keep happening)
Threat 2: DIY agents that are privacy leaky as hell, not through evil intent or misaligned ethics, but just because folks are excited and moving quickly. Or carelessly. And are using someone’s API.
I sincerely hope that the DIY/open-source ecosystem spinning up around AI agents has some privacy heroes in it. Because it should be possible to do some building & standards that treat permission and privacy as first principles.
Maybe we can show what’s possible for respecting privacy so that we can demand it from big companies?
Respecting your friends means respecting when they use encrypted messaging. It means keeping privacy-leaking agents out of private spaces without all-party consent.
Ideas to mull (there are probably better ones, but I want to be constructive):
Human only mode/ X-No-Agents flags
How about converging on some standards & app signals that AI agents must respect, absolutely. Like signals that an app/chat can emit & be opted out of exposure to an AI agent.
Agent Exclusion Zones
For example, starting with the premise that the correct way to respect developer (& user) intent with end-to-end encrypted apps is that they not be included, perhaps with the exception [risky tho!] of whitelisting specific chats etc. This is important right now since so many folks are getting excited about connecting their agents to encrypted messengers as a control channel, which is going to mean lots more integrations soon.
#NoSecretAgents Dev Pledge
Something like a developer pledge that agents will declare themselves in chat and not share data to a backend without all-party consent.
None of these ideas are remotely perfect, but unless we start experimenting with them now, we're not building our best future.
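As one tiny experiment in that direction, here's a toy sketch of what an "X-No-Agents" style signal could look like. The flag names and message shape are entirely hypothetical (no such standard exists yet); the point is just that an agent checks consent before any content leaves the device, and that end-to-end encrypted chats are excluded by default unless every participant opts in.

```python
# Toy sketch of a hypothetical "X-No-Agents" opt-out signal.
# Flag names are made up for illustration; no standard exists yet.

from dataclasses import dataclass, field


@dataclass
class Chat:
    name: str
    # Signals the chat app would emit; agents must honor them absolutely.
    flags: dict = field(default_factory=dict)


def agent_may_process(chat: Chat) -> bool:
    """Return False if the chat has opted out of agent exposure."""
    if chat.flags.get("X-No-Agents", False):
        return False
    # E2EE chats are excluded by default (the premise above),
    # unless every participant has explicitly allowed agent access.
    if chat.flags.get("e2ee", False):
        return chat.flags.get("all-party-agent-consent", False)
    return True


work = Chat("work", {"X-No-Agents": True})
group = Chat("friends", {"e2ee": True})
opted = Chat("opted-in", {"e2ee": True, "all-party-agent-consent": True})

assert not agent_may_process(work)   # explicit opt-out wins
assert not agent_may_process(group)  # E2EE excluded by default
assert agent_may_process(opted)      # all-party consent required
```

The design choice that matters: the default is exclusion, and consent is affirmative and all-party, not inferred from one user installing an agent.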
Next challenge? Local Only / Private Processing: local-First as a default.
Unless we move very quickly towards a world where the processing that agents do is truly private (e.g. not accessible to a third party) and/or local by default, even if agents are not shipping signal chats, they are creating an unbelievably detailed view into your personal world, held by others. And fundamentally breaking your own mental model of what on your device is & isn't under your control / private.
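A minimal sketch of what "local-first as a default" could mean in practice, assuming a hypothetical agent router: prefer a local model, and fail closed (refuse to answer) rather than silently falling back to a remote API. The function and endpoint names here are illustrative, not any real product's behavior.

```python
# Sketch of a local-first inference policy for an agent (hypothetical).
# Rule: data only leaves the device on explicit, informed opt-in.

def route_inference(prompt: str, local_available: bool,
                    user_allows_remote: bool = False) -> str:
    if local_available:
        return "local"    # data never leaves the device
    if user_allows_remote:
        return "remote"   # explicit, informed opt-in only
    # Fail closed: better no answer than a silent backhaul.
    raise RuntimeError("No local model and no consent to send data off-device")


assert route_inference("summarize my notes", local_available=True) == "local"
assert route_inference("x", local_available=False,
                       user_allows_remote=True) == "remote"
try:
    route_inference("x", local_available=False)
except RuntimeError:
    pass  # the refusal is the feature
```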
Even when folks use open-source agents like #openclaw in decentralized setups, the default /easy configuration is to plug in an API resulting in data getting backhauled to Anthropic, OpenAI, etc.
And so those providers get all the good stuff: intimate confessions, legal strategies, work gripes. Worse? Even if you’ve made peace with this, your friends absolutely haven’t consented to their secrets piped to a datacenter. Do they even know?
Governments are spending a lot of time trying to kill end-to-end encryption, but if we’re not careful, we’ll do the job for them.
The problem is big & growing:
Threat 1: proprietary AI agents. Helpers inside apps or system-wide stuff. Think: desktop productivity tools by a big company. Hello, Copilot. These companies already have tons of incentive to soak up your private stuff & are very unlikely to respect developer intent & privacy without big fights (Those fights need to keep happening)
Threat 2: DIY agents that are privacy leaky as hell, not through evil intent or misaligned ethics, but just because folks are excited and moving quickly. Or carelessly. And are using someone’s API.
I sincerely hope is that the DIY/ OpenSource ecosystem that is spinning up around AI agents has some privacy heroes in it. Because it should be possible to do some building & standards that use permission and privacy as the first principle.
Maybe we can show what’s possible for respecting privacy so that we can demand it from big companies?
Respecting your friends means respecting when they use encrypted messaging. It means keeping privacy-leaking agents out of private spaces without all-party consent.
Ideas to mull (there are probably better ones, but I want to be constructive):
Human only mode/ X-No-Agents flags
How about converging on some standards & app signals that AI agents must respect, absolutely. Like signals that an app/chat can emit & be opted out of exposure to an AI agent.
Agent Exclusion Zones
For example, starting with the premise that the correct way to respect developer (& user intent) with end to end encrypted apps is that they not be included, perhaps with the exception [risky tho!] of whitelisting specific chats etc. This is important right now since so many folks are getting excited about connecting their agents to encrypted messengers as a control channel, which is going to mean lots more integrations soon.
#NoSecretAgents Dev Pledge
Something like a developer pledge that agents will declare themselves in chat and not share data to a backend without all-party consent.
None of these ideas are remotely perfect, but unless we start experimenting with them now, we're not building our best future.
Next challenge? Local Only / Private Processing: local-First as a default.
Unless we move very quickly towards a world where the processing that agents do is truly private (e.g. not accessible to a third party) and/or local by default, even if agents are not shipping signal chats, they are creating an unbelievably detailed view into your personal world, held by others. And fundamentally breaking your own mental model of what on your device is & isn't under your control / private.NEW: Microsoft turned over Bitlocker keys to FBI.
When you key escrow your disk encryption with someone, they can be targeted with a warrant.
This case is a really good illustration that if you nudge users with a default to save their keys with you... they will do so & may not fully understand the implications.
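To make the escrow point concrete, here's a toy model (not BitLocker's actual scheme): the volume key gets wrapped twice, once for the user and once for the escrow holder. Whoever holds either wrapping key can recover the volume key independently, which is exactly why the escrow holder is a viable warrant target.

```python
# Toy model of disk-encryption key escrow. The XOR "wrap" is a stand-in
# for real key wrapping, purely for illustration -- not BitLocker's design.

import secrets


def xor_wrap(key: bytes, wrapping_key: bytes) -> bytes:
    # One-time-pad-style wrap/unwrap (same operation both directions).
    return bytes(a ^ b for a, b in zip(key, wrapping_key))


volume_key = secrets.token_bytes(32)   # encrypts the disk
user_key = secrets.token_bytes(32)     # derived from the user's secret
escrow_key = secrets.token_bytes(32)   # held by the cloud provider

blob_for_user = xor_wrap(volume_key, user_key)
blob_for_escrow = xor_wrap(volume_key, escrow_key)

# The provider needs neither the user nor the device to recover the key:
recovered = xor_wrap(blob_for_escrow, escrow_key)
assert recovered == volume_key
```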
Of course, once the requests start working... they are likely to accelerate.
Story: 
Forbes
Microsoft Gave FBI BitLocker Encryption Keys, Exposing Privacy Flaw
The tech giant said providing encryption keys was a standard response to a court order. But companies like Apple and Meta set up their systems so s...
Suddenly hearing about zcash everywhere.
Feels inorganic.
What's up?
POV: you can't sleep because your bed can't talk to AWS.
Design thinking that inserts brittle dependence into our lives while extracting fees for life.
Don't be these guys.
NEW: 🇰🇵DPRK hackers have begun hiding malware on blockchain.
Result, decentralized, immutable malware from a government crypto theft operation.
It only cost $1.37 USD in gas fees per malware change (e.g. to update the command & control server)
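For a rough sense of why the dead-drop mechanic is so cheap and durable: the payload is just ABI-encoded bytes in contract state, readable via a free, transaction-less `eth_call`, and updatable with one cheap write. Here's a sketch of decoding an ABI-encoded string return value in pure Python. The hex blob is hand-made for illustration, and the C2 URL is a hypothetical placeholder, not anything from the actual DPRK tooling.

```python
# Decoding a single ABI-encoded `string` return value, as an explorer or
# loader stub would after an eth_call. Illustration only; the payload
# below is a made-up placeholder, not real malware.

def decode_abi_string(return_data: bytes) -> str:
    """Decode one ABI-encoded dynamic string: [offset][length][data...]."""
    offset = int.from_bytes(return_data[0:32], "big")
    length = int.from_bytes(return_data[offset:offset + 32], "big")
    start = offset + 32
    return return_data[start:start + length].decode("utf-8")


payload = "https://c2.example.invalid/loader.js"  # hypothetical C2 pointer
encoded = (
    (32).to_bytes(32, "big")                       # offset to the data
    + len(payload).to_bytes(32, "big")             # string length
    + payload.encode().ljust(                      # data, zero-padded
        ((len(payload) + 31) // 32) * 32, b"\x00")
)
assert decode_abi_string(encoded) == payload
```

The asymmetry is the story: updating that one storage slot costs pocket change in gas, while removing it from the chain is effectively impossible.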
Blockchains as malware dead drops are a fascinating, predictable evolution for nation state attackers.
And Blockchain explorers are a natural target.
Nearly impossible to remove.
Experimentation with putting malware on blockchains is in infancy.
Ultimately there will be some efforts to try and implement social engineering protection around this, but combined with things like agentic AI & vibe coding by low-information people...whew boy this gold seam is going to be productive for a long time.
Still, while here they used social engineering, I expect attackers to also experiment with directly loading zero-click exploits onto blockchains, targeting things like blockchain explorers & other systems that process blockchains... especially if they are sometimes hosted on the same systems & networks that handle transactions / have wallets.
REPORT: https://cloud.google.com/blog/topics/threat-intelligence/dprk-adopts-etherhiding
NEW: breach of Discord age verification data.
For some users this means their passports & drivers licenses.
Discord has only run age verification for 6 months.
Age verification is a badly implemented data grab wrapped in a moral panic.
Proponents say age verification = showing your ID at the door to a bar.
But the analogy is often wrong.
It's more like: bouncer photocopies some IDs, & keeps them in a shed around back.
There will be more breaches.
But it should bother you that the technology promised to make us all safer is quickly making us less so.
STORIES:

Forbes
Discord Confirms Users Hacked — Photos And Messages Accessed
Discord has sent users an email confirmation of a hack attack that has leaked personal and payment data, along with some photo and message content.

The Verge
Discord customer service data breach leaks user info and scanned photo IDs
An “unauthorized party” may have accessed the names of users, the last four digits of credit card numbers, and more.
PAY ATTENTION: The UK again asked Apple to backdoor iCloud encryption.
Backdoors create a massive target for hackers & criminal groups.
Dictators will inevitably demand that Apple build the same access structure for them.
They insert vulnerable bad things right at the place where we need the strongest protections.
This latest attempt to demand access is *yet another* unreasonable, secret demand on Apple (a TCN) from the Home Office....
https://www.ft.com/content/d101fd62-14f9-4f51-beff-ea41e8794265
NEW: foreign mercenary spyware is coming to the US.
ICE just quietly unsuspended contract with spyware maker #Paragon.
They got caught this year being used to hack journalists.
Friend, let me bring you up to speed on why this is bad on multiple fronts.
YOUR BACKGROUND BRIEF:
#Paragon was co-founded in Israel in 2019 by the ex-head of Israel's NSA equivalent (Unit 8200), w/ major backing from former Israeli PM Ehud Barak.
Pitched themselves as stealthy & abuse-proof alternative to NSO Group's Pegasus.
The company has been trying to get into the US market for years.
For a long time all we knew about Paragon was their performance as a 'virtuous' spyware company with values.
All that came to a crashing halt in 2025 when they got very caught, helping customers hack targets across #WhatsApp.
WhatsApp did the right thing & notified users.
Almost immediately after the WhatsApp notifications, we started learning about the targets.
They weren't the supposed serious criminals... They were journalists... human rights defenders... groups working on sea rescues... etc.
In other words, a very NSO-like scandal.
Ultimately Paragon & its Italian customer had a massive spyware scandal on their hands.
WhatsApp wasn't the only player tracking Paragon & doing user notifications. Apple got in on the game.
Ultimately, we at the Citizen Lab had forensically analyzed cases from each notification round.
We testified to Italy's parliamentary intelligence oversight committee about our findings.
The conclusion? Deeply unsatisfactory.
Italy admitted hacking some targets, but denied hacking journalists.
Tons of loose ends with Paragon. And they haven't been honest about who used their tech to hack journalists in Europe.
BIG PICTURE:
After 14 years investigating countless spyware companies, I tell you with confidence:
Mercenary spyware is a power abuse machine incompatible with American constitutional rights and freedoms.
Our legal system isn't designed for it, oversight mechanisms are woefully inadequate to protect our rights...
Here's the thing. You probably know that mercenary spyware like #Pegasus gets sold to dictators.
Who, predictably, abuse it.
But We have a growing pile of cases where spyware is sold to democracies... and then gets abused.
HISTORY LESSONS
History shows: secret surveillance usually winds up abused.
The history of the US is littered with surveillance abuses.
Thing is, our phones offer an unprecedented window into our lives.
Making zero-click mercenary spyware an especially grave risk to all our freedoms.
If the government wants access to your accounts for law enforcement... they have to prepare a judicially authorized request and send it to the company, which reviews it.
Mercenary spyware bypasses any external review.
And the whole industry behind it seeks maximum obscurity.
COUNTERINTELLIGENCE THREATS? YEAH THAT TOO
I'm concerned about the impact on our rights and our privacy.
But there's something else that should worry everybody about the choice to work with the company: Paragon poses a potentially grave counterintelligence threat to the US. Let me explain.
When you use an integrated spyware package to conduct sensitive law enforcement / intelligence business, you have to place a lot of trust in them...
If the developers originate from a foreign intelligence service that aggressively collects against the US government, that should be a huge red flag.
America (or any country) should be maximally wary about using foreign-developed surveillance tech for the same reason that America shouldn't operate a Chinese-made stealth fighter.
So, have Paragon's spyware, people & ops been aggressively vetted for technical and human counterintelligence risks?
MERCENARY SPYWARE = FATE SHARING
Paragon's #Graphite mercenary spyware shares the same downsides as other products in their class:
❌They keep getting caught
We researchers aren't the only ones that have found techniques for tracking and identifying Paragon spyware... I'm sure hostile govs have too.
❌Customers fate share.
Since all customers roll the same tech, when one gets caught it impacts & potentially exposes everyone's activities.
Now, that fate sharing will include US law enforcement activity.
WHAT CAN YOU DO?
What can you do? Take 5 minutes and call your member of Congress.
Ask them to request a briefing on Paragon.
They should ask whether the company was properly vetted & reviewed.
What is the oversight mechanism for this maximally invasive technology?
What are the guardrails? How would abuses be handled? Etc.
PERSONAL SECURITY?
Paragon & this category of spyware is fiendishly hard to track & defend against.
And on a personal level? Apple's Lockdown Mode & Android Advanced Protection both offer some serious security benefits, but neither is a silver bullet.
Unfortunately, as of right now I am pretty confident that no publicly available / commercially developed third party tool can reliably detect Paragon spyware either in realtime. Or retrospectively.
Beware a false sense of security.
If you got this far & found this post useful, let me know! Drop a comment.
SELECTED READING LIST
Exclusive: ICE reactivated its $2 million contract with Israeli spyware firm Paragon, following its acquisition by U.S. capital
Virtue or Vice? A First Look at Paragon’s Proliferating Spyware Operations
Graphite Caught
First Forensic Confirmation of Paragon’s iOS Mercenary Spyware Finds Journalists Targeted

Exclusive: ICE reactivated its $2 million contract with Israeli spyware firm Paragon, following its acquisition by U.S. capital
The cyber division of ICE's Homeland Security Investigations on Saturday quietly lifted a stop-work order put into place by the Biden administratio...

The Citizen Lab
Virtue or Vice? A First Look at Paragon’s Proliferating Spyware Operations - The Citizen Lab
In our first investigation into Israel-based spyware company, Paragon Solutions, we begin to untangle multiple threads connected to the proliferati...

The Citizen Lab
Graphite Caught: First Forensic Confirmation of Paragon’s iOS Mercenary Spyware Finds Journalists Targeted - The Citizen Lab
On April 29, 2025, a select group of iOS users were notified by Apple that they were targeted with advanced spyware. Among the group were two journ...
Government-mandated KYC-to-read is coming fast.
And the walls of castle freedom are cracking.


Why haven't mosquitoes evolved silent flight?
"everybody who's out there thinking of using VPNs, let me just say to you directly, verifying your age keeps a child safe...So let's just not try and find a way around. Just prove your age."
- UK government.
Location tracking based on interior pictures.
It will be abused to target people.
Post the inside of your place at your peril.

Earliest days of vibecoding-as-a-target.
Without a radical increase in security, vibecoders will get wiped out & lose their savings.
And their companies will get hit with fat breaches.
Me? I'm waiting for attackers to figure out how to reliably slip backdoors into vibecoded outputs at scale.
Neuroticism? Ripping.
Conscientiousness & agreeableness? Dipping.
Via FT: https://www.ft.com/content/5cd77ef0-b546-4105-8946-36db3f84dc43
NEW: 🇩🇪Germany's top court says spyware severely violates fundamental rights.
Bans spyware in cases with sentences under three years.
Enforces tough proportionality tests on all surveillance.
Restricts spyware to serious cases.
Interesting development.
Court says: capturing data at the source (i.e. on someone's phone) is maximally invasive.
Especially given how much of our lives happens online.
They also surface the security risks to systems from this kind of surveillance.
Watching Germany's highest court grapple with spyware's invasiveness & rights violations is instructive.
States wielding spyware without robust legal limitations and tight judicial oversight... are almost guaranteed to be violating their citizens' basic rights.
In so many jurisdictions, state secrecy & the lack of effective legal challenges mean spyware harms are happening daily.
Huge credit to German digital freedoms organization #digitalcourage
for bringing this case.
Court statement:
Rules on preventative and criminal procedural (source) telecommunications surveillance and criminal procedural remote searches are constitutional for the most part
In orders published today, the First Senate of the Federal Constitutional Court rendered its decision on two constitutional complaints concerning s...
Internet-connected microphones in school bathrooms.
What could go wrong?
Mandated microphones in private spaces are a bad idea.
Throwing invasive sensors into private spaces rarely fixes socially scary problems.
But is almost guaranteed to have risky downsides.
Story: 
WIRED
It Looks Like a School Bathroom Smoke Detector. A Teen Hacker Showed It Could Be an Audio Bug
A pair of hackers found that a vape detector often found in high school bathrooms contained microphones—and security weaknesses that could allow ...
Regular people know that age verification mandates won't work.
But they are worried about their children's safety, and they aren't being offered non-dystopian alternatives.

