|
The global threat landscape is evolving faster than organizations can secure it. In 2025, businesses are either overestimating their readiness or admitting outright that they're unprepared. As AI adoption accelerates across industries, the risk of sensitive data exposure is amplified by poor visibility, weak governance, and fragmented third-party oversight. The report reveals a stark reality:
|
|
• Supply chain scale correlates with slower breach discovery. Organizations managing 5,000+ supply chain partners take three or more months, often longer, to detect a breach. Prolonged detection windows directly correlate with higher litigation costs and legal exposure, leaving companies blindsided by compounding financial and regulatory damage.
• AI oversight is critically lacking. Only 17% of surveyed companies say they have technical AI governance frameworks in place. In a world where AI workflows increasingly interact with sensitive data, the remaining 83% lack the guardrails needed to govern it.
• Visible compliance hides costly inefficiencies. For every $1 invested in compliance activities organizations can see, an additional $2.33 is burned in hidden costs—including stalled innovation, missed opportunities, audit fatigue, and delayed security modernization. That’s billions lost annually to compliance drag, not improved resilience.
• Breach frequency is outpacing defense budgets. Companies working with 5,000+ supply chain partners reported 10+ full ecosystem breaches per year and an average of $3 million+ annually in litigation costs alone, not including breach response, fines, or reputational damage.
Organizations that achieve unified visibility—across third-party ecosystems, AI usage, and compliance risk posture—will outperform, out-innovate, and out-defend the rest.
Kiteworks enables this advantage through secure content collaboration, AI-aware governance, and transparency across extended partner networks. By eliminating blind spots and hidden compliance waste, businesses gain proactive breach detection, stronger regulatory outcomes, and the freedom to innovate without compromising sensitive data.
|
#226: Ransomware has had an upgrade...
Exploring the new horizons for adversarial activity
|
|
If you want more, you know what you need to do: sign up for premium and get access to everything we have on offer. Click the link above to visit our Substack and sign up there!

Cheers,
Austin Miller
Editor-in-Chief
|
|
|
What happens when machines become your buyers? HUMAN’s Guide to Adopting Agentic Commerce in 2025 explores how merchants can build readiness for this new agent-driven economy.
At its center is AgenticTrust, HUMAN’s trust-layer technology that classifies and governs AI agents in real time. We’re inviting cybersecurity and AI influencers to engage their audiences with this critical question: How do we build trust when the customer is no longer human?
|
|
The world is entering the era of Agentic Commerce, a new era in which autonomous AI agents act as buyers, browsers, and decision-makers across digital platforms. HUMAN’s Agentic Commerce Guide explores how this shift is transforming digital interactions, and why visibility and trust are becoming the new foundation of online business. HUMAN’s AgenticTrust platform delivers that foundation, providing real-time visibility and governance over AI agent activity on your site: enabling automation without sacrificing security or business logic.
|
|
|
The concept of Ransomware 3.0 is introduced in research as a proof of concept illustrating how large language models could automate ransomware creation and execution. Want to find out what that looks like? Click the link below.
The U.S. has been cutting cyber defenses as AI boosts attacks: According to federal sources and cybersecurity experts, U.S. cyber-defense capabilities — including staffing at the Cybersecurity and Infrastructure Security Agency (CISA) and leadership at agencies like the National Security Agency (NSA) — have been significantly reduced despite a rising wave of AI-enhanced cyberattacks. Experts warn this mismatch weakens national readiness just as adversaries leverage AI for automated, large-scale cyber operations.
London councils hit by suspected cyber attacks — National Cyber Security Centre (NCSC) called in: Several London borough councils — including Kensington & Chelsea, Westminster, Hackney, and Hammersmith & Fulham — reported disrupted services (including phone-line outages) after suspected cyber-attacks. The NCSC is now involved in remediation efforts. The disruption underscores rising threats to local government infrastructure.
Black Friday shopping scams surge as fraudulent domains proliferate: As holiday shopping ramps up, researchers flagged a significant increase in malicious domain registrations mimicking legitimate retailers — many created to lure holiday shoppers into scams. Nearly 1 in 11 of the new “Black Friday”-themed domains was found to be malicious. The use of generative AI to speed up scam site creation was also highlighted as a concern.
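The lookalike-domain idea lends itself to a quick sketch. The brand watchlist, threshold, and normalization tricks below are our own illustrative assumptions, not the researchers' methodology; real monitoring pipelines work from zone-file feeds and far richer heuristics:

```python
from difflib import SequenceMatcher

# Hypothetical brand watchlist; a real deployment would load the brands a
# company wants monitored plus newly registered domains from a zone feed.
BRANDS = ["amazon", "walmart", "target", "bestbuy"]

def normalize(label: str) -> str:
    """Undo common typosquatting tricks (hyphens, digit-for-letter swaps)."""
    return label.replace("-", "").replace("0", "o").replace("1", "l")

def flag_lookalikes(new_domains, threshold=0.7):
    """Flag registrations whose first label closely mimics a watched brand."""
    hits = []
    for domain in new_domains:
        label = normalize(domain.split(".")[0])   # "amaz0n-deals" -> "amazondeals"
        for brand in BRANDS:
            score = SequenceMatcher(None, label, brand).ratio()
            if label != brand and score >= threshold:
                hits.append((domain, brand, round(score, 2)))
                break
    return hits

print(flag_lookalikes(["amaz0n-deals.shop", "weather-report.com"]))
# -> [('amaz0n-deals.shop', 'amazon', 0.71)]
```

The exact-match exclusion (`label != brand`) keeps the legitimate domain itself off the report; anything close-but-not-identical is the suspicious zone.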
Global agencies push to shut down “bulletproof” hosting and launch AI-risk framework: International cyber-agencies are urging internet service providers to crack down on so-called “bulletproof” hosts that shelter cybercriminal activity. Meanwhile, the Cybersecurity and Infrastructure Security Agency (CISA) introduced a new “agentic-AI risk framework” to assess emerging threats from autonomous AI-driven tools — reflecting growing focus on AI as a security risk.
Qilin ransomware conducts major supply-chain attack against South Korean MSPs: The ransomware group Qilin — working via a supply-chain compromise of managed service providers — carried out a high-impact campaign dubbed the “Korean Leaks,” hitting at least 28 victims, primarily in South Korea’s financial sector. Qilin was noted to be among the most active Ransomware-as-a-Service (RaaS) groups this year.
RomCom malware: First-ever delivery via SocGholish JavaScript-loader observed: Researchers from Arctic Wolf Labs uncovered a novel attack where the malware family Mythic Agent — associated with the “RomCom” threat group — was distributed via SocGholish, a JavaScript-loader technique commonly used in browser-based malware campaigns. This marks the first known use of SocGholish for a RomCom payload, signaling an evolution in distribution tactics.
New “Telemetry Complexity Attacks” break anti-malware analysis pipelines: A research team demonstrated a new class of attacks — dubbed “Telemetry Complexity Attacks” (TCAs) — that exploit how anti-malware platforms process telemetry (e.g., logs, events). By spawning deeply nested, oversized telemetry, the attackers cause failures in serialization/storage/visualization, leading to denial-of-analysis (DoA): malicious behavior executes but isn't recorded or alerted. Several commercial and open-source EDR and malware-analysis tools were shown vulnerable.
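The denial-of-analysis mechanism is easy to picture with a toy pipeline. The sketch below (all names ours, not from the paper) shows why naive JSON serialization chokes on deeply nested telemetry, and one defensive pattern: measure depth iteratively and reject pathological events before they reach the serializer:

```python
import json

def nested_event(depth: int) -> dict:
    """Build a telemetry event whose payload is nested `depth` levels deep,
    mimicking the oversized records a Telemetry Complexity Attack emits."""
    payload: dict = {"leaf": True}
    for _ in range(depth):
        payload = {"child": payload}
    return {"event": "process_start", "payload": payload}

def measure_depth(obj, limit: int = 64) -> int:
    """Iteratively measure nesting depth up to `limit` (no recursion, so the
    check itself cannot be blown up by the attack)."""
    depth, frontier = 0, [obj]
    while frontier and depth <= limit:
        nxt = []
        for item in frontier:
            if isinstance(item, dict):
                nxt.extend(item.values())
            elif isinstance(item, list):
                nxt.extend(item)
        frontier = nxt
        depth += 1
    return depth

def safe_serialize(event: dict, max_depth: int = 64) -> str:
    """Reject pathological events instead of letting json.dumps hit a
    RecursionError deep inside the analysis pipeline."""
    if measure_depth(event) > max_depth:
        return json.dumps({"event": "telemetry_rejected", "reason": "depth"})
    return json.dumps(event)

# A 10,000-level event would crash json.dumps with RecursionError; here it
# is rejected up front and the pipeline keeps running.
print(safe_serialize(nested_event(10_000)))
```

The key design point is that the guard itself is iterative: a depth check written recursively would fail in exactly the way the attack intends.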
UK MPs propose new economic-security regime to counter cyber and related threats: In light of rising cyber risks, including state-sponsored attacks and infrastructure vulnerabilities, UK MPs are pushing for a new economic-security framework. This proposal aims to integrate cybersecurity threats across economic, supply-chain, and national-security planning — reflecting growing recognition that cyber risk is not just an IT problem.
New Mirai-derived IoT botnet emerges: “ShadowV2” tested exploits during AWS disruptions: Security researchers observed a new Mirai-derived botnet variant, dubbed “ShadowV2”, testing exploits against vulnerable IoT devices during October’s AWS outage — apparently to probe impact on availability and botnet propagation in unstable networks. The experiment raises alarms over IoT insecurity and the rising use of botnets exploiting cloud-service disruptions.
|
|
|
Shai‑Hulud 2.0: How Cortex Detects and Blocks the Resurgent npm Worm: This post details a major supply-chain attack dubbed “Shai-Hulud 2.0,” where a malicious worm compromised thousands of npm packages. It explains how the malware spreads, steals credentials, establishes persistent backdoors, and compromises developer environments — and outlines how the provider’s security tools (Cortex Cloud, XDR, Prisma Cloud) can detect and block such attacks.
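A minimal defensive sketch, not the vendor's tooling: scanning a `package-lock.json` against a list of compromised package versions. The package names and versions here are made-up placeholders; real indicator lists for the campaign come from vendor advisories:

```python
import json

# Hypothetical indicators of compromise -- example entries only; real IoC
# feeds for the "Shai-Hulud 2.0" campaign are published by security vendors.
COMPROMISED = {
    "some-utils": {"4.2.1"},
    "color-strings": {"1.0.9", "1.1.0"},
}

def audit_lockfile(lockfile_text: str):
    """Return (name, version) pairs from a package-lock.json v2/v3
    'packages' map that match the compromised-version list."""
    lock = json.loads(lockfile_text)
    findings = []
    for path, meta in lock.get("packages", {}).items():
        name = path.split("node_modules/")[-1]   # "node_modules/foo" -> "foo"
        version = meta.get("version")
        if version in COMPROMISED.get(name, set()):
            findings.append((name, version))
    return findings

sample = json.dumps({
    "packages": {
        "node_modules/some-utils": {"version": "4.2.1"},
        "node_modules/left-pad": {"version": "1.3.0"},
    }
})
print(audit_lockfile(sample))   # -> [('some-utils', '4.2.1')]
```

Pinning exact versions in the lockfile is also what makes this kind of audit possible at all: a worm that publishes poisoned patch releases is invisible to a scan of `package.json` ranges alone.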
AI & Security: Revolutionizing Cybersecurity in the Digital Age: This article explores how artificial intelligence (AI) is transforming cybersecurity — shifting defences from reactive to proactive. It examines use-cases where AI helps detect and mitigate threats, analyzes the challenges of integrating AI into security strategies, and highlights how organizations can leverage modern AI/ML to improve their security posture.
When Artificial Intelligence Becomes the Battlefield: This post dives into the darker side of AI — describing how attackers are weaponizing AI for ransomware, phishing, browser-based exploits, AI-native malware, and “vibe-hacking” (emotionally targeted phishing/extortion). It outlines real-world incidents and warns of systemic weaknesses in AI governance, urging more robust controls and oversight for AI deployments.
Multi‑Dimensional Threat Intelligence Analysis: Looking for AI Adversaries: This analysis recounts how a security team monitored 427 blocked IP addresses over a short period to evaluate whether emerging AI-powered adversarial techniques were in use. The conclusion: no AI-adaptive threats detected — yet. But the report highlights infrastructure evolution (bulletproof hosting, “brand-weaponization”) and warns that adversaries may shift once detection evasion becomes easier. Offers a practical view on real-world threat-intelligence operations.
Turning Kubernetes Last Access to Kubernetes Least Access Using KIEMPossible: This recent post explains how identity and permissions inside Kubernetes environments often become sprawling, giving threat actors excessive attack surface. It shows how the tool/approach “KIEMPossible” can help organisations audit, trace, and reduce permissions to enforce least-privilege — significantly reducing risk for cloud workloads.
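The least-privilege idea behind such tools reduces to a simple set difference: permissions granted minus permissions observed in use. A toy sketch with invented service accounts and verbs (a real audit would parse RoleBindings and the API-server audit log, which is what KIEMPossible-style tooling automates):

```python
# Granted verbs per service account (as from RBAC bindings) and verbs
# actually observed in audit logs over a review window. All data invented.
GRANTED = {
    "ci-deployer": {"get", "list", "create", "delete", "patch"},
    "metrics-reader": {"get", "list", "watch", "create"},
}
OBSERVED = {
    "ci-deployer": {"get", "create", "patch"},
    "metrics-reader": {"get", "list", "watch"},
}

def least_access_report(granted, observed):
    """Verbs each identity holds but never used: candidates to revoke."""
    return {
        sa: sorted(verbs - observed.get(sa, set()))
        for sa, verbs in granted.items()
        if verbs - observed.get(sa, set())
    }

print(least_access_report(GRANTED, OBSERVED))
# -> {'ci-deployer': ['delete', 'list'], 'metrics-reader': ['create']}
```

Unused verbs are exactly the excess attack surface the post describes: a compromised `ci-deployer` token that never needed `delete` should not be able to delete.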
Is Your Snowflake Data at Risk? Find and Protect Sensitive Data with DSPM: This post targets cloud data security risks for organisations using data platforms (specifically Snowflake). It covers how sensitive data may be exposed, how data-security posture management (DSPM) can help spot and remedy exposures, and why companies should proactively use DSPM tools to safeguard critical data — especially relevant as cloud data stores grow.
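At its core, DSPM-style classification is pattern matching over sampled data. A deliberately tiny sketch with illustrative regexes only (real products ship far richer detectors and validate matches, e.g. Luhn checks for card numbers):

```python
import re

# Illustrative detectors; not any vendor's actual rule set.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify_rows(rows):
    """Tag which sensitive data types appear among each column's sampled values."""
    findings = {}
    for column, values in rows.items():
        kinds = {k for v in values for k, rx in PATTERNS.items() if rx.search(v)}
        if kinds:
            findings[column] = sorted(kinds)
    return findings

sample = {
    "contact": ["alice@example.com", "n/a"],
    "notes": ["ssn 123-45-6789 on file"],
    "sku": ["A-1001"],
}
print(classify_rows(sample))
# -> {'contact': ['email'], 'notes': ['us_ssn']}
```

The interesting finding in practice is usually the `notes`-style column: free-text fields nobody declared as sensitive, which is precisely what posture-management scans surface.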
|
SoK: Frontier AI's Impact on the Cybersecurity Landscape: This paper examines how advances in “frontier AI” (very capable AI systems) affect the cybersecurity landscape, particularly how such AI can both enable new attack vectors and challenge defenders. It categorizes risks, analyses current and potential future impacts, and issues concrete recommendations — e.g., building fine-grained benchmarks, security mechanisms for hybrid AI–software systems, pre-deployment testing, and better transparency. (Wenbo Guo, Yujin Potter, Tianneng Shi, Zhun Wang, Andy Zhang & Dawn Song)
Red Teaming with Artificial Intelligence-Driven Cyberattacks: A Scoping Review: This review analyses how AI techniques are being used to carry out or simulate cyberattacks (red-teaming). From 470 screened records, 11 were selected to illustrate how AI can automate penetration, data exfiltration, or social-engineering attacks — targeting everything from credentials to social-media accounts. It highlights how versatile and dangerous AI-assisted attack tools are, and calls for deeper understanding and defensive planning. (Mays Al-Azzawi, Dung Doan, Tuomo Sipola, Jari Hautamäki & Tero Kokkonen)
Zero Trust Architecture: A Systematic Literature Review: This is a systematic lit-review (2016–2025) of research on the Zero Trust Architecture (ZTA). It synthesizes how ZTA is applied, what enabling technologies support it, and the obstacles to its adoption. The paper also traces how ZTA research evolved over time, offering a taxonomy of application domains and critical challenges — making it a go-to resource for both researchers and practitioners assessing ZTA deployment. (Muhammad Liman Gambo & Ahmad Almulhem)
Analysing Cyber Attacks and Cyber Security Vulnerabilities in the University Sector: This empirical study reviews recent cyberattacks on universities (with a focus on the UK sector), producing a timeline of notable incidents, classifying them by confidentiality/integrity/availability (CIA triad) and attack type. It highlights challenges including: lack of full disclosure after attacks (hindering community learning), over-reliance on third-party service providers, and persistent threats like phishing, ransomware and insider risk (e.g. from students or staff lacking training). The paper recommends improved incident reporting, better training, supplier oversight, and adoption of common security standards. (Harjinder Singh Lallie, Andrew Thompson, Elżbieta Titis & Paul Stephens)
|
|
|
Copyright (C) 2025 Packt Publishing. All rights reserved. Our mailing address is: Packt Publishing, Grosvenor House, 11 St Paul's Square, Birmingham, West Midlands, B3 1RB, United Kingdom
Want to change how you receive these emails? You can update your preferences or unsubscribe. |
|
|
|