JackTheMimic 2 months ago
Outside of the influencer yada yada: are we saying that the read-only containers AWS hosts AREN'T scanned for malware? Because that would be a very big attack surface for them. If they are scanned, couldn't the contiguous bytes of a malware file fit the fingerprint of the fuzzy-hash digest they check against? Meaning, rightly or wrongly, they would assume the transaction being verified is a malware executable and kill the VM? Is that truly not an attack vector? I don't even understand how that's not within the realm of possibility. Again, I AM NOT SAYING THE MALWARE IS EXECUTABLE. Just that the fingerprint could read as such to the hypervisor watchdog.
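[Editor's note: the fuzzy-hash mechanism debated here can be made concrete with a toy sketch. Real scanners use tools like ssdeep or TLSH; the simplified context-triggered piecewise hash below is a hypothetical stdlib-only illustration of how two byte streams get compared by the chunk digests they share, with no execution involved.]

```python
import hashlib

def toy_ctph(data: bytes, window: int = 7, modulus: int = 64) -> list[str]:
    """Toy context-triggered piecewise hash: cut the byte stream wherever a
    rolling sum over a small window hits a trigger value, then hash each
    piece. Real tools (ssdeep, TLSH) are far more sophisticated."""
    pieces, start = [], 0
    for i in range(window, len(data)):
        if sum(data[i - window:i]) % modulus == 0:  # trigger point
            pieces.append(hashlib.sha256(data[start:i]).hexdigest()[:8])
            start = i
    pieces.append(hashlib.sha256(data[start:]).hexdigest()[:8])
    return pieces

def similarity(a: bytes, b: bytes) -> float:
    """Fraction of piecewise digests the two inputs share (Jaccard index).
    A scanner would flag input `a` if its score against a known sample's
    digest list clears some threshold, regardless of executability."""
    ha, hb = set(toy_ctph(a)), set(toy_ctph(b))
    return len(ha & hb) / max(len(ha | hb), 1)
```

The point of the sketch: the comparison is purely over byte content, so inert bytes that reproduce enough of a known sample's chunk digests could score as a match even though nothing is executable.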

Replies (26)

Steganography and data obfuscation to avoid malware detection are well-understood concepts in cyber security. Red teams across the globe easily hide malware in all kinds of arbitrary places. And if you're deploying a malware payload, it doesn't need to be contiguous. Llama 3 reply below about non-contiguous data and malware:

"Yes, there have been several historic malware campaigns that leveraged non-contiguous data to deliver a payload and bypass detection. Here are a few examples:

- Stuxnet (2010): Stuxnet was a highly sophisticated computer worm that targeted industrial control systems, particularly those used in Iran's nuclear program. It used a technique called "data hiding" to conceal its payload within non-contiguous areas of the hard drive, making it difficult to detect.
- Duqu (2011): Duqu was a malware campaign that used a similar technique to Stuxnet, hiding its payload in non-contiguous areas of the hard drive. It also used a "dropper" file to load the malware into memory, making it harder to detect.
- Flame (2012): Flame was a highly complex malware campaign that used a technique called "fragmented malware" to deliver its payload. It broke its payload into smaller, non-contiguous pieces and stored them in different locations on the infected system, making it difficult to detect.
- Havex (2013): Havex was a malware campaign that targeted industrial control systems, particularly those used in the energy and manufacturing sectors. It used a technique called "data fragmentation" to break its payload into smaller, non-contiguous pieces and store them in different locations on the infected system.
- BlackEnergy (2015): BlackEnergy was a malware campaign that targeted industrial control systems, particularly those used in the energy sector. It used a technique called "non-contiguous memory allocation" to load its payload into memory, making it harder to detect.
- NotPetya (2017): NotPetya was a highly destructive malware campaign that used a technique called "kernel-mode rootkit" to hide its payload in non-contiguous areas of the hard drive. It also used a "dropper" file to load the malware into memory, making it harder to detect.

These malware campaigns demonstrate the use of non-contiguous data to deliver a payload and bypass detection. They often employed techniques such as:

- Data hiding: concealing the payload within non-contiguous areas of the hard drive or memory.
- Data fragmentation: breaking the payload into smaller, non-contiguous pieces and storing them in different locations.
- Non-contiguous memory allocation: loading the payload into non-contiguous areas of memory.
- Dropper files: using a separate file to load the malware into memory, making it harder to detect.

These techniques made it challenging for traditional signature-based detection methods to identify the malware, and highlighted the need for more advanced detection methods, such as behavioral analysis and anomaly detection."
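[Editor's note: the fragmentation techniques listed above reduce to one idea that a short, harmless sketch can show: a payload scattered across non-contiguous offsets never presents a single contiguous byte run for a signature scanner to match, yet remains trivially reassemblable from an offset index. All names here are hypothetical and the "payload" is an arbitrary byte string, not malware.]

```python
def fragment(payload: bytes, carrier_size: int,
             offsets: list[int]) -> tuple[bytearray, list[tuple[int, int]]]:
    """Scatter slices of `payload` at the given offsets inside a larger
    zero-filled carrier. Returns the carrier plus an index of
    (offset, length) pairs needed to put it back together."""
    carrier = bytearray(carrier_size)
    index, chunk = [], len(payload) // len(offsets) + 1
    for n, off in enumerate(offsets):
        piece = payload[n * chunk:(n + 1) * chunk]
        carrier[off:off + len(piece)] = piece
        index.append((off, len(piece)))
    return carrier, index

def reassemble(carrier: bytes, index: list[tuple[int, int]]) -> bytes:
    """Rebuild the original byte string from the scattered slices."""
    return b"".join(carrier[off:off + ln] for off, ln in index)
```

A contiguous-signature scan of the carrier finds nothing, because the full byte sequence never appears in one run; only something holding the index can reconstruct it.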
JackTheMimic 2 months ago
Right... Again, I am not talking about activation or deployment. The point is FOR the malware to be found. THAT is the attack: the reflexive response. I don't care about embedded data, steganographic or otherwise. You literally CAN'T put enough contiguous bytes together to trip malware detectors now because of the PUSHDATA limit (unless side-channeled, obviously).
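[Editor's note: the "PUSHDATA limit" referenced here appears to correspond to Bitcoin Core's MAX_SCRIPT_ELEMENT_SIZE of 520 bytes, the cap on a single pushed script element, which forces anyone embedding arbitrary data to split it across many pushes. A minimal sketch of that chunking, assuming only the 520-byte cap:]

```python
MAX_SCRIPT_ELEMENT_SIZE = 520  # Bitcoin Core's cap on one pushed element

def split_pushes(data: bytes) -> list[bytes]:
    """Split arbitrary data into push-sized chunks. No chunk, and hence no
    contiguous byte run delivered by a single push, exceeds 520 bytes."""
    return [data[i:i + MAX_SCRIPT_ELEMENT_SIZE]
            for i in range(0, len(data), MAX_SCRIPT_ELEMENT_SIZE)]
```

This is why the thread's dispute matters: a byte-exact contiguous signature longer than 520 bytes cannot land in one push, so any match would have to come from fuzzy or piecewise comparison over the reassembled or adjacent chunks.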
JackTheMimic 2 months ago
Also, can we not throw AI outputs at each other? Especially when it completely misses the point the other person is making?
JackTheMimic 2 months ago
Because roughly 29% of BTC nodes are hosted on AWS. This signature detection would kill the VMs running Core on those servers, meaning 29% of the network suddenly goes offline.
JackTheMimic 2 months ago
And my point was that it IS relevant to the point I was making. Which, again, is not "malware activating due to blocks having packages embedded in them." My point is: if you want to shut the airport down, you don't hide the gun, you wave it around so everyone can see it.
JackTheMimic 2 months ago
Absolutely. But there's ownership risk, and then there's intentional disruption. I mean, if someone found an exploit to target node runners through their specific ISP *cough* Shinobi *cough*, that would also be bad and tough to mitigate.
JackTheMimic 2 months ago
I am not talking existential. I am talking adoption progress.
JackTheMimic 2 months ago
For exchanges that use them for feerate estimation, for economic nodes for transaction broadcast, for miners for gossip relay. Kind of a lot of things.
JackTheMimic 2 months ago
It absolutely does. I have pulled their docs many times to show their guard-dog service kills VMs if malware is signature-identified. I feel like you may be thinking of first-order effects and not secondary and tertiary effects. I swear I am not as dumb as I look, and I don't take Luke, Mechanic, Murch, Antoine, Voskuil, or any other dev or talking head at face value. I take what they say and check it for validity.
These docs? And we've gone over the whack-a-mole with malware signatures. My previous company worked red team for the DoD. I promise you don't understand the cloud like you think you might. I also welcome all AWS bitcoin nodes failing. My sats remain safe.