AI: cybercrime's unwitting accomplice

Generative AI has supercharged cybercrime by lowering the technical barrier to sophisticated attacks. What the developer community dubbed ‘vibe coding’, where AI writes functional code from plain-English prompts, has evolved into ‘vibe hacking’, where criminals steer AI agents to plan, write and execute operations end to end. The barrier to entry has fallen dramatically. The blast radius has grown exponentially.

The Claude AI story: A new phase in cybercrime

Anthropic’s August 2025 Threat Intelligence update reveals a chilling evolution in AI-assisted cybercrime. A threat actor designated GTG-2002 used Claude Code to orchestrate attacks against at least 17 organisations across healthcare, emergency services, government agencies and religious institutions. This was not traditional cybercrime with AI assistance. It was AI embedded throughout the entire attack lifecycle.

The attacker leveraged Claude to automate reconnaissance across thousands of VPN endpoints, harvest credentials, penetrate networks systematically, analyse stolen financial data to determine appropriate ransom amounts and generate psychologically targeted extortion notes demanding payments sometimes exceeding $500,000. What would traditionally require an entire hacking team with years of specialised training was executed by a single actor enhanced by AI.

Anthropic banned the accounts and introduced new detections, but the signal is unmistakable: agentic AI can now execute complex attack chains for non-experts. This represents a fundamental shift in threat economics. Sophisticated cybercrime capabilities are now accessible to anyone with determination, not simply those with deep technical skills.

Why this changes everything

AI’s transformative power lies in its accessibility and speed. It writes convincing phishing emails for non-native speakers, debugs malicious code for novices and automates multi-stage attacks at the speed of a prompt. Unlike traditional threats that move at human speed, AI-enhanced attacks can adapt to defensive countermeasures in real time, analyse exfiltrated data to optimise extortion strategies and pivot through networks faster than human analysts can respond.

The GTG-2002 incidents demonstrate that threat actors have embedded AI throughout their operations, while many organisational defences remain focused on traditional attack vectors. This asymmetry creates dangerous vulnerabilities in our collective security posture.

“Attackers only need to succeed once, while defenders must stop every attack. Defence must now evolve from reactive blocking to proactive, AI-driven, layered defence systems. Organisations will need multiple layers of defence working together with smart automation to respond at the speed of machines, while keeping humans in control to make the critical decisions that AI cannot.”
Joani Green, Chief Information Officer at SRM

The SCC View: Resilience through preparation

AI is a tool. Powerful, intelligent, widely accessible, but still a tool. In the wrong hands, catastrophic damage becomes trivial to inflict. We cannot stop bad people being bad people. What we can do is build organisational resilience, reduce paths to impact, cut dwell time and recover with reputation intact.

You cannot magically become 100% secure overnight, but you can absolutely be prepared for interruption. This requires rehearsed playbooks that assume breach, clear communication strategies and resilient data governance that limits damage when perimeters are inevitably breached.

Ten foundational practices for the AI era

Based on NCSC’s 10 Steps to Cyber Security, enhanced for AI-scale threats

  • Risk Management
    Implement a board-owned risk framework with appetite, metrics and quarterly review. Make AI risks explicit, including exposure through AI models, data and agents. Traditional risk assessments miss AI-accelerated attack timelines and adaptive threat behaviours.
  • Engagement & Training
    Deploy ongoing cyber hygiene training that includes AI-generated phishing scenarios and vibe-hacking simulations. Standard security awareness programmes fail against psychologically tuned, AI-crafted social engineering that adapts to individual targets.
  • Asset Management
    Maintain live inventory of devices, identities, applications and data, including shadow SaaS and unsanctioned AI tools. AI-enhanced attackers systematically probe for forgotten assets and unmonitored access points.
  • Architecture & Configuration
    Deploy default-secure builds, zero-trust segmentation and hardened baselines. Codify secure-by-design controls for any AI components and ensure segmentation can contain rapid lateral movement automated by AI.
  • Vulnerability Management
    Implement continuous scanning with prioritised patch SLAs, supplemented by continuous discovery through vulnerability disclosure programmes and bug bounties. AI helps both attackers and defenders find vulnerabilities faster. Put ethical hackers to work under published disclosure rules.
  • Identity & Access Management
    Enforce MFA everywhere, conditional access and just-in-time admin privileges. Implement least privilege to blunt AI-scaled credential abuse that can systematically harvest and exploit access across entire environments.
  • Data Security
    Classify data and apply encryption, DLP and lifecycle controls. Ensure personnel access only what they need. AI can analyse exfiltrated data to optimise extortion strategies, making over-privileged access especially dangerous.
  • Logging & Monitoring
    Centralise telemetry and tune detections for agentic behaviour: fast multi-stage actions, scripted lateral movement and automated data-packaging for extortion (see the detection sketch after this list). Traditional detection methods miss AI-driven attack patterns that operate at machine speed.
  • Incident Management
    Pre-draft communications for staff, customers and regulators. Define decision rights and rehearse recovery scenarios including AI-enhanced extortion. Maintain human-in-the-loop controls for all automated security responses. AI threats require human judgement for critical decisions.
  • Supply-Chain Security
    Set supplier standards, map critical dependencies and require disclosure channels with timely patching. AI-enhanced threats can systematically probe entire supply chains for the weakest link.
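
To make the Logging & Monitoring step concrete, the sketch below shows one way to hunt for agentic, machine-speed behaviour: a single identity completing several distinct attack stages within minutes rather than days. The event shape, stage labels and thresholds are illustrative assumptions rather than an SCC or NCSC specification; in practice this logic would live in your SIEM or MXDR analytics rules rather than standalone Python.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical event shape: (timestamp, identity, stage), where 'stage' is a
# coarse label your SIEM already assigns, e.g. "recon", "credential_access",
# "lateral_movement", "exfiltration".
STAGES_OF_CONCERN = {"recon", "credential_access", "lateral_movement", "exfiltration"}
WINDOW = timedelta(minutes=10)   # illustrative threshold: tune to your estate
MIN_DISTINCT_STAGES = 3          # several attack stages inside one short window

def flag_agentic_behaviour(events):
    """Flag identities that complete multiple attack stages at machine speed.

    `events` is an iterable of (timestamp, identity, stage) tuples sorted by
    timestamp. Returns (identity, first_seen, stages) tuples for analyst review.
    """
    per_identity = defaultdict(list)
    alerts = []
    for ts, identity, stage in events:
        if stage not in STAGES_OF_CONCERN:
            continue
        per_identity[identity].append((ts, stage))
        # Keep only events inside the sliding window.
        per_identity[identity] = [(t, s) for t, s in per_identity[identity] if ts - t <= WINDOW]
        distinct = {s for _, s in per_identity[identity]}
        if len(distinct) >= MIN_DISTINCT_STAGES:
            alerts.append((identity, per_identity[identity][0][0], sorted(distinct)))
            per_identity[identity] = []  # alert once per burst, then reset
    return alerts

if __name__ == "__main__":
    start = datetime(2025, 8, 1, 9, 0)
    demo = [
        (start, "svc-vpn-01", "recon"),
        (start + timedelta(minutes=2), "svc-vpn-01", "credential_access"),
        (start + timedelta(minutes=5), "svc-vpn-01", "exfiltration"),
    ]
    for identity, first_seen, stages in flag_agentic_behaviour(demo):
        print(f"Review {identity}: stages {stages} within minutes of {first_seen}")
```

A human operator would struggle to chain three attack stages in ten minutes; an AI agent does it routinely, which is why the window here is measured in minutes, not hours.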

How SCC counters AI-enhanced threats

Protect + Detect + Respond: MXDR operated from our UK SOC
Our round-the-clock Managed XDR runs from our CREST-accredited SOC in the UK, purpose-built to monitor, prevent, detect, investigate and respond across customers’ attack surfaces, and tuned for AI-speed threat timelines.

Detect & respond: AI & Automation inside MXDR
AI and automation inside MXDR accelerate triage, enrichment and escalation while our analysts stay in control of every decision. Budgets are tight and teams are stretched, so automation must save hours without adding risk.

Monitor & govern: SCC Pulse and Vision™
SCC Pulse is an AI and automation platform. It powers our managed services, filtering, enriching and triaging your IT signals at speed. It then pushes that information into SCC Vision™, your single pane of glass for total clarity. Pulse enables your managed services to run faster, smarter and sharper, letting you focus on what matters most.

  • Noise Reduction: Cut through alert floods to surface only what matters
  • Rapid Response: Accelerate triage and action so nothing slows you down
  • One Source of Truth: Add context and accuracy for a reliable view of events
  • Smarter Operations: AI and automation work together to improve decisions and outcomes
  • Seamless Visibility: Insights flow into SCC Vision, your single portal for multi-cloud clarity

Assure & comply: Enterprise Cloud by SCC
Enterprise Cloud by SCC provides assured hosting aligned to NCSC principles with UK-resident staff, PSN certification and ISO 27001. Use it when assurance is paramount.

Three immediate actions for AI-era defence

  • Implement AI-augmented vulnerability discovery
    Launch vulnerability disclosure programmes that welcome AI-augmented security researchers. This widens coverage under clear scope and rules while leveraging AI’s ability to find more vulnerabilities faster than traditional methods. Make this a standing programme, not a one-off initiative.
  • Enforce human-in-the-loop security controls
    Make human-in-the-loop a mandatory control objective for all security automation. Allow AI to triage alerts and propose actions, but require human validation before material changes (see the sketch after this list). AI threats demand human judgement for complex decisions and strategic responses.
  • Establish continuous red-teaming programmes
    Treat red-teaming and bug bounty programmes as continuous capabilities, not periodic exercises. AI helps ethical hackers find vulnerabilities faster. Put them to work on your estate under published disclosure rules before malicious actors discover the same weaknesses.
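
As a sketch of the human-in-the-loop action above, the pattern below keeps a named approver between AI-generated response proposals and anything that materially changes the environment: low-risk actions run automatically, material actions wait for a recorded human decision. The severity labels, function names and approval hook are hypothetical placeholders, not an SCC Pulse or Microsoft Sentinel API.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Optional

class Severity(Enum):
    LOW = 1        # e.g. tag an alert, add a watchlist entry
    MATERIAL = 2   # e.g. isolate a host, disable an account, block a subnet

@dataclass
class ProposedAction:
    description: str
    severity: Severity
    execute: Callable[[], None]          # the automation that would carry out the action
    approved_by: Optional[str] = None    # recorded for audit once a human signs off

def run_with_human_gate(action: ProposedAction,
                        request_approval: Callable[[ProposedAction], Optional[str]]) -> str:
    """Execute low-risk actions automatically; hold material ones for a named approver.

    `request_approval` stands in for whatever your ticketing or chat-ops
    integration provides; it returns the approver's identity, or None if declined.
    """
    if action.severity is Severity.LOW:
        action.execute()
        return "auto-executed"
    approver = request_approval(action)
    if approver is None:
        return "declined by human reviewer"
    action.approved_by = approver
    action.execute()
    return f"executed with approval from {approver}"

if __name__ == "__main__":
    # Illustrative only: the execute callable and approval flow are placeholders.
    isolate_host = ProposedAction(
        description="Isolate workstation WKS-0042 from the network",
        severity=Severity.MATERIAL,
        execute=lambda: print("isolating WKS-0042 ..."),
    )
    print(run_with_human_gate(isolate_host, request_approval=lambda a: "duty-analyst"))
```

The audit trail matters as much as the gate itself: every material action records who approved it, so post-incident review can show that humans made the critical calls.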

AN URGENT CALL TO ACTION

The barrier to sophisticated cybercrime has collapsed. Prepare for interruption, reduce paths to impact and recover quickly. Act on the fundamentals and use SCC where it accelerates the outcome.


Book a series of fully funded SecOps Pathfinder sessions. In four hours across five weeks we will analyse your requirements, show how Microsoft Sentinel works with SCC MXDR to strengthen detection and response, and agree a production roadmap for Modern SecOps with clear next steps for your first 90 days.

If your team is stretched, Pathfinder keeps effort low and outcomes clear.
E-mail [email protected] or call 0121 766 7000 to book your slot.
