What Actually Are AI-Powered Cyber Attacks?

Anyone following cybersecurity news today is unlikely to miss the growing number of headlines warning that AI will make us less safe.

In many cases, these warnings collapse a wide range of very different techniques into a single, vague term: “AI-powered cyber attacks.”

In this article, I want to unpack what people actually mean when they use that term — and why treating “AI attacks” as one category makes it harder for decision makers to reason about risk, and for practitioners to defend against real-world threats.

“AI attack” has become a buzzword

When people talk about “AI attacks,” they usually mean one of two things.

1) Fully autonomous, end-to-end AI-driven offensive campaigns: systems that independently select targets, exploit vulnerabilities, and adapt their behavior without meaningful human involvement.

While this makes for compelling headlines, there is little evidence that such attacks are a common reality today.

2) AI-supported attacks, where AI is used in narrowly scoped supporting roles: automating labor-intensive tasks, improving message quality, or accelerating parts of an existing campaign.

That second category is what we see much more frequently, and it’s the one worth focusing on if we want “AI attack” to mean anything useful.

What “AI” means in this article

“AI” is a broad term. In the context of this article, I’m using it as shorthand for generative AI, especially large language models (LLMs).

Generative AI produces output from an input (a prompt): text, code, summaries, translations, or structured data that can then be used as part of an attacker’s workflow.
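To make that concrete, the pattern is simply a prompt going in and usable text coming out. Here is a minimal sketch, assuming the OpenAI Python SDK and an API key in the environment; the model name is an assumption, and any hosted or local LLM with a text interface works the same way:

```python
# Minimal sketch of the prompt-in, output-out pattern described above.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

prompt = "Summarize the following changelog in three bullet points:\n..."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whatever is available
    messages=[{"role": "user", "content": prompt}],
)

# The generated output is just a string, so it can be fed into whatever
# script or workflow step comes next.
print(response.choices[0].message.content)
```

Nothing about this pattern is attack-specific; what matters is that the output is cheap to produce and easy to plug into the next step of a workflow.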

Augmented reconnaissance

Any complex campaign starts with reconnaissance:

  • Who are the targets? Email addresses, social media handles, job titles, relationships.
  • What systems are in scope? Domains/subdomains, IP ranges, exposed ports, software and versions, third-party dependencies.

Using an LLM to write scripts, summarize findings, extract structure, and generate “next-step” hypotheses can reduce the effort of this phase. It also makes it easier to repeat across many targets.
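As a rough illustration of the “extract structure” step, the sketch below turns a block of publicly available text into machine-readable fields. The client, model name, and JSON schema are illustrative assumptions, not a description of any specific tool:

```python
# Sketch: turn public text (e.g. a team page or job posting) into structured
# fields. The model name and the requested JSON keys are assumptions.
import json
from openai import OpenAI

client = OpenAI()

public_text = """
Jane Doe is our Head of IT Operations. Her team maintains the company's
VPN gateways and the internal Jira instance. Contact: press@example.com
"""

prompt = (
    "Extract any names, job titles, email addresses, and named software "
    "products from the text below. Respond with a JSON object using the "
    'keys "names", "titles", "emails", "software".\n\n' + public_text
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
)

# LLM output is not guaranteed to be valid JSON, so parse defensively.
try:
    extracted = json.loads(response.choices[0].message.content)
except json.JSONDecodeError:
    extracted = {}

print(extracted)
```

The point is not the handful of lines themselves but that the same script can be re-run against hundreds of pages, which is exactly what makes this phase cheaper to repeat across many targets.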

Automation and scale

One of the most common uses of generative AI in offensive campaigns is the automation of tasks that were previously labor-intensive or required specialized skills.

In most cases, the AI is applied to a specific subtask, such as:

  • Collecting information from defined sources (emails, names, leaked credentials) to build target lists.
  • Generating many variations of phishing messages, including localization that older large-scale campaigns often lacked.
  • Crafting near real-time responses to victims to accelerate and scale fraudulent conversations.

The benefits are straightforward:

  • Speed: less manual work means faster execution.
  • Scale: lower reliance on scarce skills enables larger campaigns (more targets, more attempts, more potential gain).
  • Efficiency / quality: faster iteration makes it easier to keep what works and discard what doesn’t.

Rather than trying to automate everything end-to-end, attackers tend to automate the bottlenecks first — the parts that are slow, expensive, or hard to staff.

Personalization

Attacks that require interaction from the victim (clicking a link, replying to a message, approving a request) benefit disproportionately from context and personalization.

Generative AI helps move campaigns away from obviously generic, poorly written phishing attempts in two ways:

  • Collecting context: automating the gathering of publicly available information so messages feel relevant and credible.
  • Writing tailored messages: producing spear-phishing-quality outreach at scale, making impersonation of trusted brands (banks, email providers, government entities) more convincing.

Conclusion

I don’t want to rule out a future where parts of offensive campaigns become more autonomous. But today, the “rogue AI running full-spectrum cyberattacks” framing isn’t what most organizations should optimize for.

What we do see is AI acting as a supporting function: helping attackers do reconnaissance faster, run higher-volume campaigns, and personalize social engineering more effectively. Treating “AI attacks” as a single category hides these mechanics, and that makes it harder to defend against the threats that are already here.
