Your weekly dose of Seriously Risky Business news is written by Tom Uren and edited by Patrick Gray. It's supported by Lawfare with help from the William and Flora Hewlett Foundation. This week's edition is sponsored by Tines. You can hear a podcast discussion of this newsletter by searching for "Risky Business News" in your podcatcher or subscribing via this RSS feed.
Last week, Anthropic revealed a real-world, AI-orchestrated cyber espionage campaign. There's a real speed and scale benefit here for malicious actors that care more about hacking everything than flying under the radar. Western governments, however, will likely stick to the tried and tested method of "slowly, slowly, catchy monkey".

In the report, Anthropic detailed its discovery of the campaign, which used AI "not just as an advisor, but to execute the cyberattacks themselves". Anthropic believes the threat actor was a Chinese state-sponsored group whose goals align with those of the Chinese Ministry of State Security. The group attempted to infiltrate "roughly thirty" typical victims: large tech companies, financial institutions, chemical manufacturing companies and government agencies. It succeeded in a small number of cases.

The attackers built what Anthropic calls an "autonomous attack framework". It used Claude Code to support reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, data analysis, and exfiltration operations. The framework broke the attack lifecycle into discrete tasks that were carried out by Claude sub-agents. Claude Code orchestrated the entire system, aggregated results from these discrete sub-tasks and then kicked off more jobs based on its analysis of the information it discovered.

This was all done "largely autonomously". Once an operator set the wheels in motion, the system would execute "about 80-90% of tactical operations independently". This occurred far faster than would be possible for human hands, and the system was able to hack several targets in parallel.

Human operators had specific responsibilities, per Anthropic's full report:

Human responsibilities centered on campaign initialization and authorization decisions at critical escalation points.
Human intervention occurred at strategic junctures including approving progression from reconnaissance to active exploitation, authorizing use of harvested credentials for lateral movement, and making final decisions about data exfiltration scope and retention.
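The orchestration pattern Anthropic describes, a central controller that decomposes a campaign into discrete tasks, hands them to sub-agents, aggregates the results and queues follow-up work, is a generic task-queue design. A minimal, harmless sketch of that control loop (all names and types are ours for illustration; nothing here touches a network or performs any offensive activity):

```rust
// Hypothetical sketch of an orchestrator/sub-agent loop: tasks are pulled
// from a queue, handled by a "sub-agent" (here a plain function), and each
// result can enqueue follow-up tasks -- mirroring how the reported framework
// escalated from reconnaissance to analysis based on what it found.

#[derive(Debug)]
enum Task {
    Recon(String),   // enumerate a (fictional) target
    Analyze(String), // analyze recon output for that target
}

#[derive(Debug)]
struct Finding {
    target: String,
    note: String,
}

// A "sub-agent" handles one discrete task and may propose follow-up tasks.
fn run_subagent(task: &Task) -> (Vec<Finding>, Vec<Task>) {
    match task {
        Task::Recon(t) => (
            vec![Finding { target: t.clone(), note: "recon complete".into() }],
            vec![Task::Analyze(t.clone())], // escalate to analysis
        ),
        Task::Analyze(t) => (
            vec![Finding { target: t.clone(), note: "analysis complete".into() }],
            vec![], // terminal task: no follow-ups
        ),
    }
}

// The orchestrator: drain the queue, aggregate findings centrally,
// and kick off new jobs produced by each sub-task.
fn orchestrate(initial: Vec<Task>) -> Vec<Finding> {
    let mut queue = initial;
    let mut findings = Vec::new();
    while let Some(task) = queue.pop() {
        let (found, followups) = run_subagent(&task);
        findings.extend(found);
        queue.extend(followups);
    }
    findings
}

fn main() {
    let findings = orchestrate(vec![Task::Recon("target-a".into())]);
    for f in &findings {
        println!("{}: {}", f.target, f.note);
    }
}
```

The point of the design is that the human only supplies the initial task list and reviews the aggregated findings; everything between is mechanical, which is where the speed and parallelism come from.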
In other words, Claude carried out the "hands on keyboard" operations while humans were left with what we'd call management-level decisions: accepting risk, or triaging and prioritising intelligence collection.

There's been a surprising amount of skepticism from security researchers about Anthropic's report. One element of this criticism focussed on the threat actor's use of open source tooling. The argument is that this should be relatively straightforward for defenders to detect, compared to custom malware. But the innovation here is in the framework, rather than in whether any particular hacking tool or AI model was used.

To us, this campaign doesn't look like a regular intelligence operation so much as a research project on how to take advantage of AI in cyber espionage campaigns. The use of open source tools is exactly what we'd expect.

The head of Anthropic's threat intelligence team, Jacob Klein, told CyberScoop the threat actor invested a significant amount of time and effort to build the attack framework. It was the "hardest part of this entire system", Klein said. "That's what was human intensive". Once the framework was created, however, it allowed single individuals to carry out a lot more hacking. Klein suggested that one person using the attack framework could achieve what he thinks "would have taken a team of about ten folks".

Of course, it is not all roses. Anthropic writes:

Claude frequently overstated findings and occasionally fabricated data during autonomous operations, claiming to have obtained credentials that didn't work or identifying critical discoveries that proved to be publicly available information.
Lol. Bad Claude.

For threat actors with a high risk appetite, or without a focus on specific targets, having an AI hacking buddy that stuffs up occasionally is a win if it allows you to do a whole lot more hacking. Think ransomware actors and even state-backed groups hacking for intellectual property. Even if they, and their AI companions, screw up on one promising victim, there are plenty of other fish in the sea.

For threat actors focussed on difficult, security-conscious targets, that trade-off is far more risky. There's only one Vladimir Putin, for example. One meticulous operation is a safer bet than scaled-up attacks with a multitude of mostly-automated Claude-brained schemes.

From a state perspective, at least in the short term, we imagine AI-powered hacking campaigns will be adopted by countries that run broad-based hacking campaigns with a high risk appetite: here's looking at you, North Korea and China! For Western intelligence agencies that are more tightly focussed on specific targets, it's a different story. AI will be useful as an aide, and increasingly so, but the speed and scale benefits of automated attacks will be offset by the potential for errors.

Google's Legal Disruption Campaign Kicks Off

Last week, Google announced it had filed litigation to disrupt the Lighthouse phishing-as-a-service kit. Per the company's announcement:

Our legal action is designed to dismantle the core infrastructure of this operation. We are bringing claims under the Racketeer Influenced and Corrupt Organizations Act, the Lanham Act, and the Computer Fraud and Abuse Act to shut it down, protecting users and other brands.
The company's lawsuit takes aim at 25 unnamed individuals who are believed to be responsible for the service and living in China. Lighthouse is used to send text messages that phish victims for payment card data, as described by Krebs On Security. Google's lawsuit claims Lighthouse has hit more than one million victims across 120 countries.

Surprisingly, given that the operators are suspected to be in China, the lawsuit appeared to have an immediate impact. The next day, Google's general counsel Halimah DeLaine Prado said that Lighthouse's operations had been shut down, which she called "a win for everyone". The disruption was confirmed by other security researchers too. Lighthouse domains were reported to have disappeared, and the group's Telegram channels were taken down due to Terms of Service violations.

That's all good news. But Google's goals go beyond short-term disruption. DeLaine Prado told Wired one aim of the lawsuit was to enable further action by other organisations too:

"Filing a case in the US actually allows us to have a deterrent impact outside of the US borders," DeLaine Prado says. Rulings in the company's favor would also allow it to "go to other platforms that are hosting vectors or aspects" of the Lighthouse network and ask them to take them down, she says. "It enables others to do the same as well. That court order can be used for good to help dismantle the actual infrastructure of the operation," she adds.
Back in September, Sandra Joyce, vice president of Google's Threat Intelligence group, announced the company was starting a cyber "disruption unit" that would look for legal and ethical opportunities to disrupt threat actors. A single lawsuit won't stop the people behind Lighthouse for good. But we are optimistic that this lawsuit is a pretty good start to an enduring campaign.

Memory Safe Languages Are Safer And Faster

Adopting the memory safe Rust language in Android is not only safer than using C or C++, it is also more efficient from a software delivery perspective, according to Google.

Memory safety bugs are a class of vulnerabilities related to how computers read, write and store memory. They are particularly problematic, Google says, because they "tend to be significantly more severe, more likely to be remotely reachable, more versatile, and more likely to be maliciously exploited than other vulnerability types".

In previous editions of this newsletter we've covered how, even in large established projects, writing new code in memory safe languages such as Rust or Go is a quick win. We also praised the Biden administration's efforts to encourage the adoption of memory safe languages.

Last week, Google reported that, from a security perspective, its adoption of Rust was going swimmingly. Based on data from the last few years, it reckons that, per line of code, Rust is 1000-fold less likely to contain a memory safety vulnerability than C/C++. This year, for the first time, more new code was written in Rust than in C/C++. As a result, Android memory safety vulnerabilities have declined from nearly 80% of total vulnerabilities in 2019 to less than 20% this year.

Not only that, Google has crunched the numbers and found that Rust is easier and faster to deploy because it is quicker to review and more likely to be correct. When changes are pushed out, the rollback rate for Rust is about four-fold lower than for C++. What's not to like?
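Google's numbers are about vulnerability density, but the mechanism is easy to illustrate. A minimal sketch (ours, not from Google's post) of the kind of bug Rust rules out: in C, reading past the end of a buffer compiles fine and yields undefined behaviour, while in Rust indexing is bounds-checked and the fallible accessor returns an `Option` instead of whatever happens to be in memory.

```rust
// Out-of-bounds reads: the classic memory safety bug class.
// Rust's slice `get` returns None on a bad index rather than
// performing a wild read, so the error is handled, not exploited.

fn read_byte(buf: &[u8], i: usize) -> Option<u8> {
    buf.get(i).copied() // None when i is out of bounds
}

fn main() {
    let buf = [0xde, 0xad, 0xbe, 0xef];
    assert_eq!(read_byte(&buf, 2), Some(0xbe));
    // The equivalent C read (buf[10]) would be undefined behaviour;
    // here it is just a None that the caller must deal with.
    assert_eq!(read_byte(&buf, 10), None);
    println!("no out-of-bounds access possible");
}
```

Direct indexing (`buf[10]`) would still be caught, as a deterministic panic at runtime rather than silent memory corruption, which is the difference Google's severity numbers reflect.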
Watch Amberleigh Jack and Tom Uren discuss this edition of the newsletter:

Three Reasons to Be Cheerful This Week:

- US creates a Scam Center "Strike Force": The US Department of Justice announced the strike force has been established to target the transnational crime organisations in Southeast Asia running cryptocurrency-related fraud schemes. The effort involves the DoJ, the FBI, and the US Secret Service and is "seeking to use all government tools available". It has already rolled out financial sanctions and seizure warrants for Starlink terminals used by scammers.
- Kicking goals against the North Korean worker scheme: The US government announced that five people have pleaded guilty to facilitating the fraudulent North Korean IT worker scheme. They helped the scheme by providing false identities or by hosting corporate laptops at residences within the US, for example. In the same announcement, the government said that it had seized more than USD$15 million worth of the USDT stablecoin that had been stolen by North Korea's APT38.
- Dutch police seize 250 bulletproof hosting servers: The servers were located in the cities of The Hague and Zoetermeer and were being used by what the Dutch police described as a "rogue hosting company". The 250 servers were used to host "thousands" of virtual servers.
In this