
AI-powered cyberattacks are growing so fast that researchers observed as many as 36,000 malicious scans per second across the internet last year, a signal that attackers are weaponizing automation at scale. That volume matters because speed and quantity let attackers find and exploit weak links before defenders can respond. Businesses must treat AI-assisted attacks as a present, operational risk, not a future worry.
Key takeaways
- AI is a force multiplier for attackers. It speeds reconnaissance, tailors social engineering, and automates multi-stage campaigns.
- Deepfakes and AI impersonation are real, high-value threats. Firms have lost millions to convincing audio/video scams.
- Shadow AI is a rising blind spot. Unvetted internal AI use increases breach impact and cost.
- Quick wins exist. Enforcing MFA, patching exposed services, and performing a shadow-AI inventory cut most near-term risk.
- Defense must shift to identity, behavior, and governance. Signature rules alone are no longer enough.
What’s Changed: The Nature of AI-Assisted Attacks
Attackers needed two things in the past: time and skilled people. Generative models and agentic AI remove both constraints. Today, someone with basic intent can use widely available tools to automate reconnaissance, craft highly targeted messages, and run many variations until one succeeds. That reduces the cost per attack and raises the sheer number of attempts aimed at companies.
There are three practical differences defenders must internalize:
- Scale. Automated scanning and probing happen continuously and at high volume; a newly exposed service can be discovered in minutes rather than weeks. That shortens defenders’ windows to patch or harden services.
- Contextual precision. AI crafts messages that reflect public profiles, recent company events, and role-specific language. That makes phishing and vishing harder to detect by instinct alone.
- Autonomy and chaining. Researchers have shown that LLMs can be structured to plan and coordinate multi-step attacks in controlled settings, meaning an attacker can program a sequence and supervise rather than manually perform every step. This raises the risk that tools could be repurposed for offensive use.
Finally, the problem now includes internal risk: employees and teams using third-party AI without oversight can leak prompts, data, or credentials. IBM and others point to “shadow AI” as a contributor to breach complexity and cost. Treat internal AI usage the same as you’d treat any service that touches sensitive data.
AI-Powered Cyberattacks and Threats You Should Worry About
Below are the attack types you’ll see more often, and real incidents that show why they matter.
AI-Driven Phishing and Vishing
Attackers scrape public profiles, press releases, and job descriptions, then build tailored emails that use the right role names, phrases, and timing. The same models can produce short, convincing voice clips. Combined, email and voice make urgent payment or credential requests feel routine, and employees comply. These attacks are now widely reported.
Deepfake Impersonation and High-Value Fraud
Several well-documented cases show large losses after employees acted on AI-generated audio/video that appeared to come from senior leaders. A notable example involved a large engineering firm that lost tens of millions following a deepfake video conference that impersonated executives. These are not edge cases; they are headline incidents that show how trust signals can be forged.
Automated Reconnaissance, Credential Theft, and Weaponized DDoS
Automated scans identify exposed services, and attackers use credential stuffing and other automated methods to exploit weak authentication. At the same time, AI is being integrated into botnets and attack orchestration tools that can coordinate larger DDoS campaigns more efficiently. The result is more frequent reconnaissance and higher-impact denial-of-service events.
Attacks on AI Systems and Data Leakage From Shadow AI
Misconfigured models, lax vendor controls or unmanaged integrations can leak sensitive prompts or training data. Attackers can target these systems to extract intellectual property, customer data, or to poison models. Reports and industry guidance now emphasize that “AI governance” is a critical part of cyber risk.
These scenarios are not rare thought experiments. Organizations are seeing them in the wild. The practical result is that risk management must cover both the outward-facing threats created by attackers who use AI and the inward-facing risks created by uncontrolled AI use inside organizations.
Short, High-Impact Actions to Take
You don’t need a major program to block most current AI-powered cyberattacks. Do these five things immediately; they’re pragmatic and measurable.
- Mandatory MFA for administrators, cloud, and remote access: Turn it on and require hardware or app-based tokens for high-risk roles. SMS-only MFA is second-best; prefer modern authenticators or keys. MFA blocks the majority of credential-based intrusions.
- Shadow-AI inventory (quick): Within seven days, ask each team to list any AI service they use for work and what data is shared. Log the tools and flag any that process customer data or IP (a flagging sketch follows this list). Require a short vendor checklist for flagged tools (encryption, retention, contact for incidents).
- Patch and firewall public services: Run a focused scan of externally accessible assets and prioritize RDP, VPNs, legacy web apps, and admin panels (a minimal port-check sketch follows this list). Remove unnecessary public exposure or add strict access controls.
- One realistic demo for staff (phishing + vishing): Show a short, redacted AI-generated email and a synthesized voice clip; teach the verification rule: stop, verify, escalate. Make the verification channel independent of the original request (e.g., a known phone line, an in-person check, or a recorded video from the executive).
- Turn on behavior alerts in identity and cloud logs: Detect logins from unusual IPs, multiple device enrollments, large data exports, and rare privilege escalations (see the anomaly-flagging sketch below). Behavior alerts catch attacks that signature rules miss.
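To make the shadow-AI inventory step concrete, here is a minimal sketch that flags declared tools for vendor review. It assumes a hypothetical CSV (shadow_ai_inventory.csv) with team, tool, and data_shared columns; the sensitive-data labels are assumptions to match against whatever your teams actually report.

```python
import csv

# Data categories that should trigger a vendor review. These labels are
# assumptions; adapt them to how your teams describe what they share.
SENSITIVE = {"customer data", "source code", "credentials", "financials", "pii"}

def flag_tools(path: str) -> list[dict]:
    """Return inventory rows whose declared data sharing looks sensitive."""
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            shared = {s.strip().lower() for s in row["data_shared"].split(";")}
            if shared & SENSITIVE:
                flagged.append(row)
    return flagged

if __name__ == "__main__":
    for row in flag_tools("shadow_ai_inventory.csv"):
        print(f"REVIEW: {row['team']} uses {row['tool']} with: {row['data_shared']}")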
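For the patching step, a quick reachability check of commonly abused ports can triage external assets before a full scanner run. This is a rough sketch, not a replacement for a proper vulnerability scanner, and the host names are placeholders.

```python
import socket

# Commonly abused services to check first (a starting list, not exhaustive).
RISKY_PORTS = {3389: "RDP", 22: "SSH", 445: "SMB", 8080: "alt HTTP / admin panel"}

def check_host(host: str, timeout: float = 2.0) -> list[str]:
    """Return a finding for each risky port that accepts a TCP connection."""
    findings = []
    for port, service in RISKY_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                findings.append(f"{host}:{port} ({service}) is reachable")
        except OSError:
            pass  # closed, filtered, or unreachable
    return findings

if __name__ == "__main__":
    for host in ["vpn.example.com", "app.example.com"]:  # your external assets
        for finding in check_host(host):
            print("EXPOSED:", finding)
```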
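And the core of a behavior alert is simple enough to sketch: keep a small per-user baseline and flag deviations. The record shape (user, country, bytes_exported) and the 5 GB export threshold are assumptions; tune both against your own logs.

```python
from collections import defaultdict

# Two cheap flags over identity-log records: a login from a country not seen
# before for that user, and an unusually large data export.
EXPORT_LIMIT = 5 * 1024**3  # ~5 GB; an assumed threshold, tune to your baseline

def flag_events(events: list[dict]) -> list[str]:
    seen_countries: dict[str, set] = defaultdict(set)
    alerts = []
    for e in events:
        user, country = e["user"], e["country"]
        if seen_countries[user] and country not in seen_countries[user]:
            alerts.append(f"{user}: login from new country {country}")
        seen_countries[user].add(country)
        if e.get("bytes_exported", 0) > EXPORT_LIMIT:
            alerts.append(f"{user}: large export of {e['bytes_exported']} bytes")
    return alerts

demo = [
    {"user": "alice", "country": "US"},
    {"user": "alice", "country": "RO", "bytes_exported": 6 * 1024**3},
]
print("\n".join(flag_events(demo)))
```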
These moves increase attackers’ cost of success and buy time for deeper changes.
Building a Resilient, Longer-Term Posture
Defending against AI-powered cyberattacks is not just a technology problem. It’s identity, observability, governance and human process, in that order of urgency.
1. Identity and access: Identity is the gatekeeper. Enforce least privilege, require conditional access (device health, geolocation, anomaly checks) for sensitive workflows, and make admin privileges ephemeral and audited. Identity controls reduce blast radius when attackers get a foothold. Use strong session logging to enable fast forensic work later.
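The ephemeral-admin pattern fits in a few lines: grants carry an expiry, and every grant and check lands in an audit trail. A real deployment would lean on your identity provider’s just-in-time access features; this sketch only illustrates the pattern.

```python
import time

GRANT_TTL = 3600  # seconds of admin access per grant; tune to your workflows

grants: dict[str, float] = {}  # user -> expiry timestamp
audit_log: list[str] = []

def grant_admin(user: str, approver: str) -> None:
    grants[user] = time.time() + GRANT_TTL
    audit_log.append(f"{time.ctime()}: {approver} granted admin to {user} for {GRANT_TTL}s")

def has_admin(user: str) -> bool:
    active = grants.get(user, 0.0) > time.time()
    audit_log.append(f"{time.ctime()}: admin check for {user} -> {active}")
    return active

grant_admin("alice", approver="secops-oncall")
print(has_admin("alice"))         # True while the grant is live
grants["alice"] -= 2 * GRANT_TTL  # simulate the hour elapsing
print(has_admin("alice"))         # False: the privilege lapsed on its own
```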
2. Detection and behavior: Attackers using AI vary payloads constantly, so signature-based systems will miss novel content. Invest in EDR/XDR and cloud monitoring that correlate identity, endpoint, and network signals, and tune alerting to focus on cross-system anomalies: for example, a new device enrolling, followed by an unusual file share and a remote login from a new country.
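A toy version of that correlation, assuming a simplified event shape rather than any real SIEM API, might look like this:

```python
from datetime import datetime, timedelta

# The risky sequence from the example above, matched per user inside a window.
SEQUENCE = ["device_enrolled", "unusual_file_share", "foreign_login"]
WINDOW = timedelta(hours=6)

def correlate(events: list[dict]) -> list[str]:
    """Return an alert for each user whose events match SEQUENCE within WINDOW."""
    alerts = []
    by_user: dict[str, list[dict]] = {}
    for e in sorted(events, key=lambda e: e["time"]):
        by_user.setdefault(e["user"], []).append(e)
    for user, evts in by_user.items():
        idx, start = 0, None
        for e in evts:
            if start and e["time"] - start > WINDOW:
                idx, start = 0, None  # window elapsed; start matching over
            if e["kind"] == SEQUENCE[idx]:
                start = start or e["time"]
                idx += 1
                if idx == len(SEQUENCE):
                    alerts.append(f"{user}: {' -> '.join(SEQUENCE)} within {WINDOW}")
                    break
    return alerts

demo = [
    {"user": "bob", "kind": "device_enrolled", "time": datetime(2025, 1, 1, 9, 0)},
    {"user": "bob", "kind": "unusual_file_share", "time": datetime(2025, 1, 1, 10, 0)},
    {"user": "bob", "kind": "foreign_login", "time": datetime(2025, 1, 1, 11, 0)},
]
print(correlate(demo))
```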
Practice detection by running red team scenarios that include AI-assisted phishing and deepfake vishing. Those scenarios reveal where human process fails and where tooling needs to improve. Evidence from recent industry reports shows behavioral detection shortens dwell time and reduces breach cost.
3. AI governance and vendor controls: Inventory every model, plugin or AI service that touches company data. For vendors, require a short security attestation that covers data handling, retention, access controls and incident notification. Internally, limit which roles can send sensitive data to external models; anonymize inputs when possible and retain prompts and outputs for a limited time to support audits.
When an AI tool is used to automate business processes, require human review of outputs for high-risk actions (payments, account modifications, privileged configuration changes).
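One way to wire that review gate, sketched under assumed action kinds and a simple JSONL audit store, is to hold any high-risk action until a named human approves it, while retaining the prompt and output for audit:

```python
import json
import time

# Action kinds that must never execute straight from model output. The kinds,
# file name, and record shape here are assumptions; adapt to your tooling.
HIGH_RISK = {"payment", "account_modification", "privileged_config_change"}

def submit_action(kind: str, prompt: str, output: str,
                  approved_by: str | None = None) -> bool:
    """Log the proposed action; return True only if it is safe to execute."""
    record = {"ts": time.time(), "kind": kind, "prompt": prompt,
              "output": output, "approved_by": approved_by}
    with open("ai_action_audit.jsonl", "a") as f:  # retained to support audits
        f.write(json.dumps(record) + "\n")
    if kind in HIGH_RISK and approved_by is None:
        print(f"HELD: {kind} requires human review before execution")
        return False
    return True

# An AI-proposed payment is held until a named reviewer signs off.
submit_action("payment", prompt="pay invoice 4411", output="transfer $12,000")
submit_action("payment", prompt="pay invoice 4411", output="transfer $12,000",
              approved_by="finance-lead")
```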
Practical governance beats dogma. Treat AI tools as software services with the same lifecycle: inventory, assess, approve, monitor, and retire.
What to Tell Your Board and Customers
Boards want three things: the facts, the impact, and the ask.
- The facts: Attackers now use AI to automate discovery, tailor scams, and perform multi-step attacks quickly. This increases attempt frequency and makes social engineering more convincing.
- The impact: Incidents involving AI tools or deepfakes have led to multi-million-dollar losses and more complex remediation.
- The ask: Fund a 90-day posture sprint: (1) enforce MFA across critical accounts, (2) complete a shadow-AI inventory, (3) upgrade logging and behavior detection for core services, and (4) run tabletop exercises that include deepfake scenarios.
Frame requests in measurable terms: percent of critical accounts on MFA, days to finish shadow-AI inventory, and expected reduction in exposed services after patching.
Prioritizing investment
When funds are limited, allocate them where they stop the most likely attacks.
- Identity & access: 40% of initial budget. MFA, conditional access, least privilege. These reduce most credential-based and impersonation attacks.
- Visibility & detection: 25% of initial budget. Centralized logging, EDR/XDR, and SIEM rules for behavior correlation.
- Training & process: 15% of initial budget. Targeted exercises for finance, HR, and vendor owners.
- AI & vendor governance: 10% of initial budget. Inventory tools, apply vendor checklists, and limit data exposure.
- Network hygiene: 10% of initial budget. Patching, segmentation, and removing unnecessary public services.
This split is a starting point; adjust it by risk profile and industry (finance and healthcare should skew more toward visibility and governance).
Incident Readiness Checklist
Before an incident, confirm these items:
- Asset inventory covering cloud and on-prem systems.
- Privileged accounts segmented and audited.
- Mandatory MFA for administrative and remote accounts.
- Centralized logs and the ability to run cross-system queries quickly.
- A communications plan that includes steps for deepfake/impersonation incidents.
- A legal and regulatory contact list for notifications.
- A red-team playbook that includes AI-driven scenarios and recovery steps.
If you can’t answer “yes” to most of these, treat the top three (identity, logging, communications) as immediate priorities.
Pragmatic, Not Panicked
AI makes attacks faster and more convincing, but it also gives defenders new tools for detection and response. The sensible strategy is simple: lock down identity, find and patch exposed services, and practice responses that include AI-powered cyberattacks. That combination reduces the attacker’s advantage and keeps you in control.
References for Further Reading
- Fortinet: Threat Landscape / 2025 Global Threat Landscape Report (scans per second, automated scanning trends).
- IBM: Cost of a Data Breach Report 2025 (shadow AI, breach costs, and guidance).
- World Economic Forum: Global Cybersecurity Outlook 2025 (social engineering and sector trends; Arup deepfake case coverage).
- TechRadar (Pro): reporting on LLMs/agentic models and autonomous attack research.
- ITPro: DDoS trends and AI-assisted attack orchestration.
- The Guardian: reporting and analysis on the Arup deepfake fraud and other deepfake incidents.