They showed us cute missing dogs & we consented to opt into a mass human tracking system.
I think Ring wants to be Flock. On steroids.
Because instead of just sketchy cameras in parking lots, Search Party will cover your own backyards & homes.
And if you & your neighbors want to challenge the loss of privacy? Well, how exactly would you do that effectively?
Because this time you can't just go to the city council, look at the contracts, and call out your mayor for speeding your city toward dystopia. The system is massive and distributed.
Will you even know which of your neighbors is now helping to feed the system?
If we had half-competent privacy regulators & laws in the US, this kind of thing would be a big, hard fight for Ring.
Instead? It's a Super Bowl commercial.
Oh, and yeah: Ring has already partnered with Flock Safety to incorporate tools letting the government directly request footage.
I TRUST YOU BUT YOUR AI AGENT IS A SNITCH: Why We Need a New Social Contract
We’re chatting on Signal, enjoying encryption, right? But your DIY productivity agent is piping the whole thing back to Anthropic.
Friend, you’ve just created a permanent subpoena-able record of my private thoughts held by a corporation that owes me zero privacy protections.
Even when folks use open-source agents like #openclaw in decentralized setups, the default/easy configuration is to plug in an API, and the data gets backhauled to Anthropic, OpenAI, etc.
And so those providers get all the good stuff: intimate confessions, legal strategies, work gripes. Worse? Even if you’ve made peace with this, your friends absolutely haven’t consented to having their secrets piped to a datacenter. Do they even know?
Governments are spending a lot of time trying to kill end-to-end encryption, but if we’re not careful, we’ll do the job for them.
The problem is big & growing:
Threat 1: proprietary AI agents. Helpers inside apps or system-wide tools. Think: desktop productivity tools by a big company. Hello, Copilot. These companies already have tons of incentive to soak up your private stuff & are very unlikely to respect developer intent & privacy without big fights. (Those fights need to keep happening.)
Threat 2: DIY agents that are privacy leaky as hell, not through evil intent or misaligned ethics, but just because folks are excited and moving quickly. Or carelessly. And are using someone’s API.
My sincere hope is that the DIY/open-source ecosystem spinning up around AI agents has some privacy heroes in it. Because it should be possible to build tools & standards that treat permission and privacy as the first principle.
Maybe we can show what’s possible for respecting privacy so that we can demand it from big companies?
Respecting your friends means respecting when they use encrypted messaging. It means keeping privacy-leaking agents out of private spaces without all-party consent.
Ideas to mull (there are probably better ones, but I want to be constructive):
Human-only mode / X-No-Agents flags
How about converging on some standards & app signals that AI agents must respect, absolutely: signals an app or chat can emit to opt out of exposure to AI agents. (Rough sketch after this list.)
Agent Exclusion Zones
For example, start with the premise that the correct way to respect developer (& user) intent with end-to-end encrypted apps is that they not be included, perhaps with the exception [risky tho!] of whitelisting specific chats. This is important right now since so many folks are getting excited about connecting their agents to encrypted messengers as a control channel, which is going to mean lots more integrations soon.
#NoSecretAgents Dev Pledge
Something like a developer pledge that agents will declare themselves in chat and not share data to a backend without all-party consent.
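To make this concrete: here's a minimal sketch of how the three ideas above could fit together on the agent side. Everything in it is hypothetical; the no_agents flag, the exclusion-zone list, and the consent set are placeholder names, not an existing standard.

```python
# Hypothetical sketch only: none of these names are a real standard.
from dataclasses import dataclass, field

E2EE_APPS = {"signal", "whatsapp"}   # Agent Exclusion Zones: excluded by default

@dataclass
class Chat:
    app: str                                              # e.g. "signal", "slack"
    no_agents: bool = False                               # hypothetical "X-No-Agents" signal from the app
    participants: list[str] = field(default_factory=list)
    consented: set[str] = field(default_factory=set)      # who has agreed to agent presence
    whitelisted: bool = False                             # explicit opt-in for an excluded app [risky tho!]

def agent_may_process(chat: Chat) -> bool:
    if chat.no_agents:                                    # 1. Human-only mode: the app said no. Full stop.
        return False
    if chat.app in E2EE_APPS and not chat.whitelisted:    # 2. E2EE apps are out unless explicitly whitelisted
        return False
    return True

def agent_may_send_to_backend(chat: Chat) -> bool:
    # 3. #NoSecretAgents: nothing leaves the device without all-party consent
    return agent_may_process(chat) and set(chat.participants) <= chat.consented

chat = Chat(app="signal", participants=["you", "friend"], consented={"you"})
print(agent_may_process(chat))           # False: Signal is an exclusion zone
print(agent_may_send_to_backend(chat))   # False: your friend never consented
```

The exact shape doesn't matter. What matters is that "may the agent read this?" and "may the agent send this off-device?" are separate gates, and both default to no.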
None of these ideas are remotely perfect, but unless we start experimenting with them now, we're not building our best future.
Next challenge? Local-only / private processing: local-first as a default.
Unless we move very quickly towards a world where the processing that agents do is truly private (i.e. not accessible to a third party) and/or local by default, then even agents that aren't shipping Signal chats are creating an unbelievably detailed view into your personal world, held by others. And fundamentally breaking your own mental model of what on your device is & isn't under your control / private.
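And "local by default" isn't exotic. A minimal sketch, assuming you run a local OpenAI-compatible endpoint (Ollama and llama.cpp's server both expose one): the only thing that changes from the backhauling setup is where the base URL points. The port and model name are assumptions about your local install.

```python
# Sketch: same agent code, different destination. Assumes the openai Python SDK
# plus a local OpenAI-compatible server (e.g. Ollama); port & model are assumptions.
from openai import OpenAI

# Backhauling setup: prompts & context leave your machine
# client = OpenAI(api_key="sk-...")                        # -> api.openai.com

# Local-first setup: prompts & context stay on your machine
client = OpenAI(
    base_url="http://localhost:11434/v1",                  # Ollama's default local endpoint
    api_key="not-used-locally",                            # required by the SDK, ignored locally
)

resp = client.chat.completions.create(
    model="llama3",                                        # whatever model you actually run locally
    messages=[{"role": "user", "content": "Summarize my notes from today."}],
)
print(resp.choices[0].message.content)
```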
NEW: Microsoft turned over Bitlocker keys to FBI.
When you key escrow your disk encryption with someone, they can be targeted with a warrant.
This case is a really good illustration that if you nudge users with a default to save their keys with you... they will do so & may not fully understand the implications.
Of course, once the requests start working... they are likely to accelerate.
Story: https://www.forbes.com/sites/thomasbrewster/2026/01/22/microsoft-gave-fbi-keys-to-unlock-bitlocker-encrypted-data/
Hotel toilet privacy is disappearing.
Glass doors.
Or no door.
Or a big window into the room.
Who is asking for this?
Suddenly hearing about zcash everywhere.
Feels inorganic.
What's up?
YIKES: NSO floats Pegasus spyware use in a "time of domestic crisis" in 🇺🇸America.
I believe they won't stop lobbying until they get Pegasus into the USA.
To hack Americans. 

POV: you can't sleep because your bed can't talk to AWS.
Design thinking that inserts brittle dependence into our lives while extracting fees for life.
Don't be these guys.
GOOD MORNING.
Today's massive outages nicely illustrate which of your favorite internet things are secretly Amazon-dependent.
Specifically on US-EAST-1 Region, which woke up with Main Character Syndrome.
Result? Massive outages.
Sure, Amazon has regions.
But US-EAST-1 is the legacy/default for a pile of services...and other Global Amazon services also depended on it.
So when there was trouble...it was quickly everywhere.
Hyperscalers rule *almost* everything around us. And this is absolutely bad news for all sorts of resiliency.
Amazon sez: root cause = DNS resolution with DynamoDB... which a ton depends on.
They say they are mostly mitigated & have a pile of backlog to clear.
But this is a great moment to think about just how many eggs that matter are in one basket...
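Want to see the shape of your own dependence? A rough sketch: resolve the regional endpoints your stack is actually pinned to. The endpoints below are illustrative examples, not a diagnosis of today's outage.

```python
# Rough sketch: which regional endpoints does your stack resolve?
# A single region-pinned hostname is the choke point.
import socket

ENDPOINTS = [
    "dynamodb.us-east-1.amazonaws.com",   # the legacy/default region
    "dynamodb.eu-west-1.amazonaws.com",   # an alternative region, for comparison
]

for host in ENDPOINTS:
    try:
        addrs = sorted({info[4][0] for info in socket.getaddrinfo(host, 443)})
        print(f"{host} -> {addrs}")
    except socket.gaierror as err:
        # Roughly what "DNS resolution issues with DynamoDB" looks like from outside
        print(f"{host} -> DNS resolution failed ({err})")
```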
https://health.aws.amazon.com/health/status
NEW: 🇰🇵DPRK hackers have begun hiding malware on blockchain.
Result: decentralized, immutable malware from a government crypto-theft operation.
It only cost $1.37 USD in gas fees per malware change (e.g. to update the command & control server).
Blockchains as malware dead drops are a fascinating, predictable evolution for nation state attackers.
And Blockchain explorers are a natural target.
Nearly impossible to remove.
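Mechanically, the dead drop is just contract state: anyone can read it, nobody can delete it, and rewriting a storage slot costs a dollar-ish in gas. A toy, defender's-eye sketch, assuming web3.py; the RPC URL, address, and slot are placeholders, not real indicators.

```python
# Toy sketch: reading data parked in a smart contract (the "dead drop").
# Assumes web3.py. RPC URL, contract address & slot are placeholders;
# swap in a real RPC endpoint and address to actually run it.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-rpc.invalid"))   # placeholder RPC endpoint

contract = Web3.to_checksum_address("0x" + "00" * 20)         # placeholder contract address

# The deployed bytecode is effectively immutable: nobody can take it down.
code = w3.eth.get_code(contract)

# Storage, though, can be rewritten for roughly a dollar in gas,
# which is how a C2 pointer gets rotated without redeploying anything.
slot0 = w3.eth.get_storage_at(contract, 0)

print(code.hex()[:64], slot0.hex())
```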
Experimentation with putting malware on blockchains is in its infancy.
Ultimately there will be some efforts to implement social-engineering protections around this, but combined with things like agentic AI & vibe coding by low-information users... whew boy, this gold seam is going to be productive for a long time.
Still, while here they used social engineering, I expect attackers to also experiment with directly loading zero-click exploits onto blockchains, targeting things like blockchain explorers & other systems that process blockchains... especially if they are sometimes hosted on the same systems & networks that handle transactions / have wallets.
REPORT: https://cloud.google.com/blog/topics/threat-intelligence/dprk-adopts-etherhiding
NEW: Cost to 'poison' an LLM and insert backdoors is relatively constant. Even as models grow.
Implication: scaling security is orders-of-magnitude harder than scaling LLMs.
Prior work had suggested that as model sizes grew, poisoning would become cost-prohibitive.
So, in LLM training-set-land, dilution isn't the solution to pollution.
Just about the same amount of poisoned training data that works on a 1B model could also work on a 1T model.
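Back-of-envelope on why that's so uncomfortable. The numbers below are my own illustrative assumptions, not the paper's: a constant ~250 poison docs, ~20 training tokens per parameter, ~1,000 tokens per document.

```python
# Back-of-envelope: a constant number of poison docs becomes a vanishing
# *fraction* of the training set as models scale, yet keeps working.
POISON_DOCS = 250          # illustrative constant (assumption, not the paper's exact figure)
TOKENS_PER_PARAM = 20      # Chinchilla-ish scaling assumption
TOKENS_PER_DOC = 1_000     # rough average document length (assumption)

for params in (1e9, 70e9, 1e12):                      # 1B, 70B, 1T parameters
    total_docs = params * TOKENS_PER_PARAM / TOKENS_PER_DOC
    frac = POISON_DOCS / total_docs
    print(f"{params/1e9:>6.0f}B params: ~{total_docs:.1e} training docs, "
          f"poison fraction ~{frac:.1e}")
# Attacker cost: flat. Defender's haystack: ~1000x bigger from 1B to 1T.
```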
I feel like this is something that cybersecurity folks will find intuitive: lots of attacks scale. Most defenses don't.
PAPER: POISONING ATTACKS ON LLMS REQUIRE A NEAR-CONSTANT NUMBER OF POISON SAMPLES https://arxiv.org/pdf/2510.07192
NEW: breach of Discord age verification data.
For some users this means their passports & driver's licenses.
Discord has only run age verification for 6 months.
Age verification is a badly implemented data grab wrapped in a moral panic.
Proponents say age verification = showing your ID at the door to a bar.
But the analogy is often wrong.
It's more like: bouncer photocopies some IDs, & keeps them in a shed around back.
There will be more breaches.
But it should bother you that the technology promised to make us all safer is quickly making us less so.
STORIES:
https://www.forbes.com/sites/daveywinder/2025/10/05/discord-confirms-users-hacked---photos-and-messages-accessed/

The Verge: Discord customer service data breach leaks user info and scanned photo IDs (an “unauthorized party” may have accessed the names of users, the last four digits of credit card numbers, and more)
PAY ATTENTION: The UK again asked Apple to backdoor iCloud encryption.
Backdoors create a massive target for hackers & criminal groups.
Dictators will inevitably demand that Apple build the same access structure for them.
They insert vulnerable bad things right at the place where we need the strongest protections.
This latest attempt to demand access is *yet another* unreasonable, secret demand on Apple (a Technical Capability Notice, or TCN) from the Home Office....
Friend,
If scrolling leaves you feeling hollowed...
If anger is frictionless and thinking feels like fighting the current,
You're not swimming, you're being swept in an algorithmic rip tide.
And your mental clarity is the target.
So, take a beat and step out
Put the thing down.
Connect with your own thoughts.
It's what the designers of these algorithms fear most.
NEW: foreign mercenary spyware is coming to the US.
ICE just quietly unsuspended its contract with spyware maker #Paragon.
Paragon's tech got caught this year being used to hack journalists.
Friend, let me bring you up to speed on why this is bad on multiple fronts.
YOUR BACKGROUND BRIEF:
#Paragon was co-founded in Israel in 2019 by the ex-head of Israel's NSA equivalent (Unit 8200), w/ major backing from former Israeli PM Ehud Barak.
They pitched themselves as a stealthy & abuse-proof alternative to NSO Group's Pegasus.
The company has been trying to get into the US market for years.
For a long time all we knew about Paragon was their performance as a 'virtuous' spyware company with values.
All that came to a crashing halt in 2025 when they got very caught, helping customers hack targets across #WhatsApp.
WhatsApp did the right thing & notified users.
Almost immediately after the WhatsApp notifications, we started learning about the targets.
They weren't the supposed serious criminals... They were Journalists... human rights defenders...groups working on sea rescues.. etc
In other words, a very NSO-like scandal.
Ultimately Paragon & its Italian customer had a massive spyware scandal on their hands.
WhatsApp wasn't the only player tracking Paragon & doing user notifications. Apple got in on the game.
Ultimately, we at the Citizen Lab forensically analyzed cases from each notification round.
We testified to Italy's parliamentary intelligence oversight committee about our findings.
The conclusion? Deeply unsatisfactory.
Italy admitted hacking some targets, but denied hacking journalists.
Tons of loose ends with Paragon. And they haven't been honest about who used their tech to hack journalists in Europe.
BIG PICTURE:
After 14 years investigating countless spyware companies, I tell you with confidence:
Mercenary spyware is a power abuse machine incompatible with American constitutional rights and freedoms.
Our legal system isn't designed for it, oversight mechanisms are woefully inadequate to protect our rights...
Here's the thing. You probably know that mercenary spyware like #Pegasus gets sold to dictators.
Who, predictably, abuse it.
But we have a growing pile of cases where spyware is sold to democracies... and then gets abused.
HISTORY LESSONS
History shows: secret surveillance usually winds up abused.
The history of the US is littered with surveillance abuses.
Thing is, our phones offer an unprecedented window into our lives.
Making zero-click mercenary spyware an especially grave risk to all our freedoms.
If the government wants access to your accounts for law enforcement... they have to prepare a judicially authorized request and send it to the company, which reviews it.
Mercenary spyware bypasses any external review.
And the whole industry behind it seeks maximum obscurity.
COUNTERINTELLIGENCE THREATS? YEAH THAT TOO
I'm concerned about the impact on our rights and our privacy.
But there's something else that should worry everybody about the choice to work with the company: Paragon poses a potentially grave counterintelligence threat to the US. Let me explain.
When you use an integrated spyware package to conduct sensitive law enforcement / intelligence business, you have to place a lot of trust in them...
If the developers originate from a foreign intelligence service that aggressively collects against the US government, that should be a huge red flag.
America (or any country) should be maximally wary about using foreign-developed surveillance tech for the same reason that America shouldn't operate a Chinese-made stealth fighter.
So, have Paragon's spyware, people & ops been aggressively vetted for technical and human counterintelligence risks?
MERCENARY SPYWARE = FATE SHARING
Paragon's #Graphite mercenary spyware shares the same downsides as other products in their class:
❌They keep getting caught
We researchers aren't the only ones that have found techniques for tracking and identifying Paragon spyware... I'm sure hostile govs have too.
❌Customers fate share.
Since all customers roll the same tech, when one gets caught it impacts & potentially exposes everyone's activities.
Now, that fate sharing will include US law enforcement activity.
WHAT CAN YOU DO?
What can you do? Take 5 minutes and call your member of Congress.
Ask them to request a briefing on Paragon.
They should ask whether the company was properly vetted & reviewed.
What is the oversight mechanism for this maximally invasive technology?
What are the guardrails? How would abuses be handled? Etc.
PERSONAL SECURITY?
Paragon & this category of spyware is fiendishly hard to track & defend against.
And on a personal level? Apple's Lockdown Mode & Android Advanced Protection both offer some serious security benefits, but neither is a silver bullet.
Unfortunately, as of right now I am pretty confident that no publicly available / commercially developed third party tool can reliably detect Paragon spyware either in realtime. Or retrospectively.
Beware a false sense of security.
If you got this far & found this post useful, let me know! Drop a comment.
SELECTED READING LIST
Exclusive: ICE reactivated its $2 million contract with Israeli spyware firm Paragon, following its acquisition by U.S. capital
Virtue or Vice? A First Look at Paragon’s Proliferating Spyware Operations
Graphite Caught: First Forensic Confirmation of Paragon’s iOS Mercenary Spyware Finds Journalists Targeted

GOOD MORNING: WhatsApp caught & fixed a sophisticated zero click attack...
They just published an advisory about it.
They say attackers combined the exploit with an Apple vulnerability to hack a specific group of targets (i.e. this wasn't pointed at everybody).
That's a CROSS-APP exploit chain. Which is fancy. We'll discuss in a second.
But wait, you say, haven't I heard of WhatsApp zero-click exploits not so long ago?
You have.
A big user base makes a platform a big target for exploit development.
Attacker's perspective = an exploit against a popular messenger gives you potential access to a lot of devices.
The regular tempo of large platforms catching sophisticated exploits is a good sign.
They're paying attention & devoting resources to a growing category: highly targeted, sophisticated attacks.
But it's also a reminder of the magnitude of the threat.
Here's the Apple CVE.
Somewhere, earlier this summer, some people in a room probably had a bad day when this clever cross-app chain stopped working.
The cross-app chain = probably also a sign of the increasing tech lift required to get to device compromise. Consequence of various mitigations.
The cost-to-compromise is only going up. Which is arguably a sign that the increasing scrutiny + efforts by platforms & OS developers is having an impact.
That said, the threat of this stuff is going nowhere because there's an infinite governmental appetite for compromise.
Still, I'd argue that the increasing cost of zero-clicks has the effect of pricing out a bunch of potential actors, which slows the proliferation of this tech to *some* bad actors.
WhatsApp Advisory: WhatsApp Security Advisories 2025 (WhatsApp.com)
Apple Advisory: About the security content of iOS 18.6.2 and iPadOS 18.6.2 (Apple Support)
Did the University of Chicago blow their endowment on shitcoins?
Nobody is exactly sure how much they gambled and lost on 'crypto.'
But they are now freezing research amidst federal funding cuts.
If only they'd put that money into BTC, those labs where I slaved away as an undergrad would be humming.
Source: UChicago Lost Money on Crypto, Then Froze Research When Federal Funding Was Cut
Government‑mandated KYC to read is coming fast.
And the walls of castle freedom are cracking.


Why haven't mosquitoes evolved silent flight?
"everybody who's out there thinking of using VPNs, let me just say to you directly, verifying your age keeps a child safe...So let's just not try and find a way around. Just prove your age."
- UK government.
WHOA: Could Germany Ban Ad Blockers?
German megapublisher Axel Springer is asking a German court to ban an ad-blocker.
They claim the HTML/CSS of their sites are protected computer programs.
And that influencing how they are displayed (e.g. by removing ads) violates copyright.
I'm in puzzled wonderment at this claim.
Preventing ad-blocking would be a huge blow to German cybersecurity and privacy.
There are critical security & privacy reasons to influence how a website's code gets displayed.
Like stripping out dangerous code & malvertising.
Hacking risks from online advertising are well documented.
Any attempt to force Germans to run all of the code on a website without consideration for their privacy and security rights and needs will end very, very poorly.
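For the non-lawyers: "influencing how a site's code gets displayed" is just client-side filtering, the kind of transformation browsers & extensions do all day. A bare-bones sketch, standard library only; the blocklist domains are made up.

```python
# Bare-bones sketch of client-side filtering: drop <script> tags and anything
# sourced from a (made-up) blocklist before display. This is the category of
# transformation the lawsuit would put at legal risk.
from html.parser import HTMLParser

BLOCKED_DOMAINS = ("ads.example", "tracker.example")   # made-up blocklist

class FilteringParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []
        self.skip_depth = 0                 # > 0 while inside a blocked element

    def _blocked(self, tag, attrs):
        if tag == "script":
            return True
        src = dict(attrs).get("src", "")
        return any(d in src for d in BLOCKED_DOMAINS)

    def handle_starttag(self, tag, attrs):
        if self.skip_depth or self._blocked(tag, attrs):
            self.skip_depth += 1            # swallow the blocked subtree
        else:
            self.out.append(self.get_starttag_text())

    def handle_endtag(self, tag):
        if self.skip_depth:
            self.skip_depth -= 1
        else:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self.skip_depth:
            self.out.append(data)

page = ('<p>News</p>'
        '<script src="https://ads.example/x.js"></script>'
        '<iframe src="https://tracker.example/frame"></iframe>')
p = FilteringParser()
p.feed(page)
print("".join(p.out))                       # -> <p>News</p>
```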
Defining HTML/CSS as a protected computer program will quickly lead to absurdities touching every corner of the internet.
Just think of the potential infringements:
-Screen readers for the blind
-'Dark mode' browser extensions
-Displaying snippets of code in a university class
-Inspecting & modifying code in your own browser
-Website translators
Or blocking unwanted trackers.
It's why most governments block ads & trackers on their own systems.
I'm not a lawyer, but if Axel Springer wins, the consequences are just nuts:
Basic stuff like bookmarking & saving a local copy of a website might be legally risky.
The Wayback Machine & internet archives and libraries might be violators.
This might even extend to search engines displaying excerpts of sites.
Code sharing sites like GitHub could become a liability minefield...
The list goes on and on.
Finally, only one country has banned ad-blockers. China.
This is not good company for Germany.
READ MORE:
Mozilla Open Policy & Advocacy: Is Germany on the Brink of Banning Ad Blockers? User Freedom, Privacy, and Security Is At Risk.
BleepingComputer: Mozilla warns Germany could soon declare ad blockers illegal