What if all your posts were ingested by an ML engine and it was asked to create 1,000 profiles, each with a history of posts targeting topics you care about or your existing contacts do. They can post daily and rehash other content.
Then it was asked to connect with as many of your contacts as possible, using topics they are passionate about. Perhaps work, environment, politics, hobbies or activities. Perhaps recent news topics and events you care about - things you liked, reposted, reported, shared or wrote.
Now your network has been infiltrated with fake profiles that all appear to be like-minded individuals, each leading their own lives and sharing information that informs you. They could make up 10-30% of your daily content.
The information you now ingest can be targeted and malicious - yet hidden and subliminal. You have let your guard down and you are now suckling the teat of deception and malice. A botnet 2.0 targeting individuals instead of network congestion.
You’ve been infiltrated. Your web of trust network has been corrupted. You are both a victim and pawn.
I’ve not just described online advertising today, Reddit and Twitter, but also the hyper-targeted content that’s coming. We currently have no clear defence - technical, social, biological or legal.
Governments seem to think laws against ‘misinformation’ prevent this. They do not. They will fail - if only because definitions of misinformation vary by content and individual, and government moderation or censorship is doomed to be rejected.
How do I know this is possible? Because with patience (and a lack of morals), I could build this. If I can, someone else can.
Are you real?
Another critical attack is the same as above - however, once they make up a significant portion of the content you see daily, they bait and switch.
This attack is malicious too: all those environmental people you follow and listen to - the profiles have flipped and are now either harassing you directly, or pitching you content to swing your opinion, like nuclear is bad and coal can be clean.
It doesn’t have to be a hard cut-over either. It can happen over time and be sneakier.
Anyway.. online and social is about to become 1,000,000+ to 1 machines to humans - and since text content doesn’t help machines, the content the machines spit out will exist solely to control you with ulterior motives - run by other humans.
1+ 7 == 42
I've spent some time thinking about this in the past. I did come up with a solution that involves creating a trusted network of humans verifying other humans via un-spoofable proximity to each other. You would build up your verifications with other humans by doing a Bluetooth handshake in person. It would *only* allow verifications in person. Bad actors can be pruned from the tree easily if they try to create fake verifications, because the tree of verifications will always lead back to a human somewhere that allowed the infiltration. Snip snip, they're gone. Meanwhile humans can take back control over knowing who is a human and who is not. A mesh network of proximity and interests.
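Here's a rough sketch of how that pruning could look - purely illustrative, assuming verifications are stored as a simple who-verified-whom graph; the names and structure are made up:

```python
# Hypothetical sketch: a web-of-trust as a graph of in-person verifications,
# where cutting one bad verification drops the whole subtree of identities
# that only exist because of it.
from collections import defaultdict

class TrustGraph:
    def __init__(self, root):
        self.root = root                      # e.g. your own key
        self.verified_by = defaultdict(set)   # person -> set of verifiers

    def verify(self, verifier, person):
        """Record an in-person (e.g. Bluetooth handshake) verification."""
        self.verified_by[person].add(verifier)

    def reachable(self):
        """Everyone still connected to the root via at least one verification."""
        seen, stack = {self.root}, [self.root]
        while stack:
            current = stack.pop()
            for person, verifiers in self.verified_by.items():
                if person not in seen and current in verifiers:
                    seen.add(person)
                    stack.append(person)
        return seen

    def snip(self, verifier, person):
        """Remove one bad verification; orphaned subtrees fall out of reach."""
        self.verified_by[person].discard(verifier)

# Usage: grandma verified a bad actor who then "verified" a swarm of bots.
g = TrustGraph("me")
g.verify("me", "grandma")
g.verify("grandma", "bad_actor")
for i in range(3):
    g.verify("bad_actor", f"bot_{i}")

g.snip("grandma", "bad_actor")
print(g.reachable())   # {'me', 'grandma'} - the bots are no longer reachable
```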
Counter argument: this may be expensive - and how do the botnet operators know that you are not a bot? And if there are heuristics to figure that out, those will be available to you, too. And to relay operators. Look e.g. at @brugeman‘s spam detection.
Some good thoughts.
Perhaps a similar approach is staked Bitcoin that doesn’t move (why similar? It adds a cost). However, there would then be a market selling UTXOs of hodlers. Even renting those would be 10-100x cheaper than buying that Bitcoin.
The issue with web of trust is that your weakest link is always the easiest route. Computers don’t care if it takes months to progress. And the next issue is even closed/small highly verified networks don’t cater for the wider audience social media we largely use today. What if it was your grandma who trusted the bad actor? Bye grandma?
The next issue is that, just like Worldcoin using eyeballs to ‘verify humans’, people will sell their human verification - or identities of the dead get to live again.
The expense of machines - and thus of content - will keep trending down in cost while growing in power. Human brains, however, are effectively stable and have physical limits that aren’t increasing.
I can imagine even with a 10% hit rate, perhaps using public records like birth certificates or similar, there will still be an economic benefit for the bots/targeted ML. Whatever a human can do in the digital world, a simulation can do better, faster, and cheaper.
I’ve read and appreciate brugeman’s approach and research. Spam almost always has a call to action - however, when you can instead hyper-target, again it’s death by 1,000 cuts.. most or all of the profiles don’t have a link or product they are trying to sell you. What they are selling is a way of thinking, a reality they benefit from (‘they’ being whoever breaks even or makes money).
I have a Nostr relay too and my spam ML engine hasn’t had its training updated for 5 months now. It has 99.999% accuracy, or thereabouts. Spam is different from targeted content.
Consider a Wikipedia page with two versions: one promotes euthanasia in the death section for suicides, and the other condemns it and highlights that the doctors are mass murderers. Imagine Wikipedia shows you the version it wants you to side with. It’s a very different problem from spam detection.
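To make that difference concrete, here’s a toy sketch (not my relay’s actual engine) of the kind of signal a spam filter leans on - a link or call to action - and why opinion-shaping content sails straight past it:

```python
# Toy illustration only. Spam tends to carry an explicit call to action
# (a link, a product, "buy"/"claim" language), which is exactly the signal
# hyper-targeted opinion content doesn't have.
import re

CALL_TO_ACTION = re.compile(
    r"https?://|buy now|claim|free|limited offer|dm me", re.IGNORECASE
)

def looks_like_spam(text: str) -> bool:
    return bool(CALL_TO_ACTION.search(text))

print(looks_like_spam("Claim your free sats here: https://example.com"))  # True
print(looks_like_spam("Honestly, nuclear has problems - modern coal plants "
                      "are cleaner than people think."))                   # False
```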
I’m not suggesting KYC or human verification.
I’m suggesting that we need to add a cost for machines, to increase the expense of computer-generated content. Humans would have to incur that same cost too - however at scale, humans post less, so their expense is an order of magnitude lower.
Or another solution. I don’t have a clear picture of exactly what will work best.
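Rough arithmetic with made-up numbers on why a flat per-post cost hits a bot operation much harder than a human:

```python
# Made-up numbers: a flat per-post cost (PoW time, sats, whatever) scales
# with volume, and volume is where the bot operation lives.
cost_per_post = 1              # hypothetical: 1 unit of cost per post

human_posts_per_day = 10
botnet_profiles = 1_000
bot_posts_per_profile = 20     # the "1,000 profiles posting daily" scenario

human_daily_cost = human_posts_per_day * cost_per_post
botnet_daily_cost = botnet_profiles * bot_posts_per_profile * cost_per_post

print(human_daily_cost)        # 10
print(botnet_daily_cost)       # 20000 - three orders of magnitude more
```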
Yeah, so say grandma is trying to use the thing like normal, but someone convinced her to verify them in person and then proceeded to create the web of bots. All grandma has to do is cut that one verification off the tree and it would eliminate the bots that were connected to that bad contact. Reputation can be boosted or diminished based on the number of verifications from others in your tree, so in theory, the bad actors would be easier to spot because they'd be stuck in their own little tree and only able to verify via a human actor that would give them a verification.
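A hypothetical sketch of that reputation weighting - just counting how many of your own trusted contacts have verified someone:

```python
# Hypothetical reputation scoring: a bad actor verified by only one person
# in your tree scores far lower than someone with many independent verifiers.
def reputation(person, verified_by, trusted):
    """Count how many of *your* trusted contacts have verified this person."""
    return len(verified_by.get(person, set()) & trusted)

verified_by = {
    "alice":     {"me", "bob", "carol"},   # well-connected, verified by many
    "bad_actor": {"grandma"},              # hangs off a single weak link
}
trusted = {"me", "bob", "carol", "grandma"}

print(reputation("alice", verified_by, trusted))      # 3
print(reputation("bad_actor", verified_by, trusted))  # 1 - easy to spot
```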
They would influence you to some degree..
There's already a NIP around a small POW for each event sent to a relay. Seems like a sufficiently sound spam prevention mechanism, assuming most relays implement it.
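A minimal sketch of the idea behind that NIP (NIP-13, if I recall correctly): difficulty is counted as leading zero bits of the event id, and the poster grinds a nonce until the target is met. Details simplified here:

```python
# Simplified PoW sketch in the spirit of NIP-13: difficulty = leading zero
# bits of the event id. Real events hash a serialized JSON array; a bare
# string is enough to show the cost curve.
import hashlib

def leading_zero_bits(hex_id: str) -> int:
    return 256 - int(hex_id, 16).bit_length() if int(hex_id, 16) else 256

def mine(content: str, target_bits: int) -> tuple[str, int]:
    nonce = 0
    while True:
        event_id = hashlib.sha256(f"{content}:{nonce}".encode()).hexdigest()
        if leading_zero_bits(event_id) >= target_bits:
            return event_id, nonce
        nonce += 1

event_id, nonce = mine("hello nostr", 16)   # ~65k hashes on average
print(event_id, nonce)
```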
If I look at my LinkedIn today, I’d wager 20% bots. I can’t see Twitter as I stopped using it - likely similar.
I think a single-verification weak link is the best case, and dozens is more the reality. My Nostr network reach is 20,000 (2nd degree) and my 2nd degree following is 7,000. Each of those has their own 10,000s of possible weak links.
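Back-of-the-envelope with made-up probabilities: even if each contact rarely mis-verifies anyone, across thousands of links the chance of at least one weak link approaches certainty:

```python
# Made-up numbers: if each of N contacts has a small independent chance p of
# mis-verifying a bad actor, the chance of at least one weak link in the
# network is 1 - (1 - p)**N.
def prob_at_least_one_breach(n_contacts: int, p_misverify: float) -> float:
    return 1 - (1 - p_misverify) ** n_contacts

for n in (100, 7_000, 20_000):
    print(n, round(prob_at_least_one_breach(n, 0.001), 4))
# 100 -> ~0.095, 7000 -> ~0.9991, 20000 -> ~1.0
```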
For small networks like Scuba Divers Egypt, sure, it can work. For larger social networks it seems unlikely to work - especially when it can happen over months or even years - computers have infinite patience.
The root issue is still that I can be impacted so significantly by mis-steps from my trusted network. And climbing a trust network is actually very trivial if there are incentives.
Think of a Google Maps business that gives a small gift or special discount and asks for 5 stars or photos of your activity. It can all be gamed - and trust ranks breached by manipulation. NPS is also easily manipulated by businesses when they have KPIs and bonuses tied to it.
A counterpoint could be that we can just use machines/ML for good, or to identify these types of attacks.
However there is an economic issue - what’s the monetary incentive to fund the automated identification? And secondly, any automated identification will be biased and statistical - no real difference from social feed algorithms.
I’m unsure it will be suitable even if you control your own ML model.. perhaps it could work for some time as portable compute and local personal ML models shrink.