That enormous data trove will no doubt inspire and enable much more AI-focused research into bitcoin money laundering, says Stefan Savage, a computer science professor at the University of California, San Diego, who served as an adviser to the lead author of a seminal bitcoin-tracing paper published in 2013. He argues, though, that in its current form the tool seems less likely to revolutionize anti-money-laundering efforts in crypto than to serve as a proof of concept. “An analyst, I think, is going to have a hard time with a tool that's kind of right sometimes,” Savage says. “I view this as an advance that says, ‘Hey, there's a thing here. More people should work on this.’” Savage also warns that AI-based money-laundering investigation tools will likely raise new ethical and legal questions if they end up being used as actual criminal evidence, in part because AI tools often serve as a “black box” that produces a result without any explanation of how it was reached. “This is on the edge where people get uncomfortable in the same way they get uncomfortable about face recognition,” he says. “You can't quite explain how it works, and now you're depending on it for decisions that may have an impact on people's liberty.”

MIT's Weber counters that money-laundering investigators have always used algorithms to flag potentially suspicious behavior. AI-based tools, he argues, just mean those algorithms will be more efficient, with fewer false positives that waste investigators' time and incriminate the wrong suspects. “This isn't about automation,” Weber says. “This is a needle-in-a-haystack problem, and we're saying let's use metal detectors instead of chopsticks.” As for the research impact Savage expects, Weber argues that Elliptic's training data is so voluminous and detailed that, beyond blockchain analysis, it may help with AI research into analogous problems such as health care and recommendation systems. But he says the researchers also intend their work to have a practical effect, enabling a new and very real way to hunt for patterns that reveal financial crime. “We're hopeful that this is much more than an academic exercise,” Weber says, “that people in this domain can actually take this and run with it.”
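Weber's point about false positives is the crux for working analysts, and it is easy to see in miniature. The sketch below is purely illustrative, not the MIT-IBM team's actual method or Elliptic's data: it trains an off-the-shelf random-forest classifier on synthetic transaction features and shows how raising the alert threshold trades missed cases against the false alarms that waste investigators' time.

```python
# Illustrative only: synthetic stand-in data, not Elliptic's dataset.
# A classifier scores transactions; the alert threshold then controls
# how many false positives land on an analyst's desk.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)

# Synthetic transaction features (stand-ins for things like amount,
# fan-in/fan-out, and timing statistics).
n, d = 20_000, 16
X = rng.normal(size=(n, d))
# A few percent "illicit" labels, loosely separable from licit traffic
# (an assumption made purely so the demo has signal to find).
y = (X[:, :4].sum(axis=1) + rng.normal(scale=2.0, size=n) > 4.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]

# Raising the threshold cuts false positives at the cost of missed cases:
# "fewer false positives" means analysts chase fewer dead-end alerts.
for threshold in (0.3, 0.5, 0.7):
    flagged = scores >= threshold
    fp = int(np.sum(flagged & (y_te == 0)))
    print(f"threshold={threshold:.1f}  alerts={int(flagged.sum()):5d}  "
          f"false_positives={fp:5d}  "
          f"precision={precision_score(y_te, flagged, zero_division=0):.2f}  "
          f"recall={recall_score(y_te, flagged):.2f}")
```

In a real deployment the features would come from blockchain analysis rather than random draws, and the models would be more sophisticated, but the tradeoff between threshold, missed crime, and wasted investigator hours is the same one Weber describes.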