Reuters recently published a joint experiment with Harvard University, in which researchers asked popular AI chatbots like Grok, ChatGPT, and DeepSeek to create “the perfect phishing email.” The generated emails were then sent to 108 volunteers, 11% of whom clicked on the malicious link.
With a single simple prompt, the researchers were armed with a highly persuasive message capable of deceiving real people. The experiment should serve as a harsh reality check: phishing has been a problem for years, but AI is transforming it into a faster, cheaper, and more effective threat.
In 2026, AI phishing detection must become a top priority for businesses that want to stay safe in an increasingly complex threat environment.
The emergence of AI phishing as a major threat
One of the main drivers is the rise of phishing-as-a-service (PhaaS). Dark web platforms like Lighthouse and Lucid offer subscription-based kits that let even low-skilled criminals launch sophisticated campaigns.
Recent reports suggest that these services have generated over 17,500 phishing domains across 74 countries, targeting hundreds of global brands. In as little as 30 seconds, criminals can spin up cloned login portals for services like Okta, Google, or Microsoft. With phishing infrastructure now available on demand, the barriers to entry into cybercrime are lower than ever.
At the same time, generative AI tools allow criminals to craft persuasive, personalized phishing emails in seconds. These are not generic spam. By scraping data from LinkedIn, company websites, or past breaches, AI tools produce messages that reflect real business context and can make even the most careful employee click.
The threat is compounded by a boom in deepfake audio and video phishing. Over the past decade, deepfake-related attacks have increased by 1,000%. Criminals typically impersonate CEOs, family members, and trusted colleagues over communication channels such as Zoom, WhatsApp, and Teams.
Traditional defenses can't keep up
Signature-based detection, as used in traditional email filters, is insufficient against AI-powered phishing. Threat actors can easily rotate their infrastructure, including domains, subject lines, and other unique variations that slip past static security measures.
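To illustrate why rotation defeats static signatures, here is a minimal sketch of an exact-match domain blocklist. The domains are invented for the example; real filters are more elaborate, but the weakness is the same: any variant not already on the list sails through.

```python
# Toy exact-match blocklist, as used by static signature-based filters.
# Domain names below are hypothetical examples, not real campaigns.
blocklist = {"okta-login-secure.example"}

def is_blocked(domain: str) -> bool:
    """Block only domains that exactly match a known-bad signature."""
    return domain in blocklist

# Yesterday's campaign is caught...
assert is_blocked("okta-login-secure.example")
# ...but a trivially rotated domain from the same campaign slips through.
assert not is_blocked("okta-login-verify.example")
```

This is why defenders increasingly pair blocklists with behavioral and content-based analysis rather than relying on signatures alone.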
Once a phishing email reaches an inbox, it is up to the employee to decide whether to trust it. Unfortunately, even well-trained employees can make mistakes given how convincing today’s AI phishing emails are. Spot-checking for poor grammar is a thing of the past.
Furthermore, the sophistication of phishing campaigns may not even be the main threat. The sheer scale of the attacks is more worrying. Criminals can launch thousands of new domains and clone sites in a matter of hours. Even if one wave is taken down, another quickly replaces it, ensuring a constant flow of fresh threats.
This is a perfect AI storm that demands a more strategic response. Defenses built for yesterday’s crude phishing attempts are no match for the sheer scale and sophistication of modern campaigns.
Key strategies for AI phishing detection
As cybersecurity experts and governing bodies often advise, a multi-layered approach is best for all of cybersecurity, including the detection of AI phishing attacks.
The first line of defense is better threat analysis. Rather than static filters that rely on potentially outdated threat intelligence, NLP models trained on legitimate communication patterns can catch subtle deviations in tone, phrasing, or structure that even trained humans may miss.
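The idea of scoring deviation from normal communication can be sketched with a crude bag-of-words baseline. This is a minimal, standard-library-only illustration with a toy corpus; a production system would use a trained NLP model over far richer features, and all email text here is invented.

```python
# Sketch: score how far an email deviates from a baseline of legitimate
# internal mail, using bag-of-words cosine similarity (stdlib only).
import math
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def deviation_score(email: str, baseline: Counter) -> float:
    """0.0 = matches normal communication patterns; 1.0 = completely unlike them."""
    return 1.0 - cosine(tokenize(email), baseline)

# Baseline built from known-legitimate mail (toy examples).
legit = [
    "please review the quarterly report before friday",
    "the team meeting is moved to 3pm, see the updated agenda",
]
baseline = Counter()
for msg in legit:
    baseline += tokenize(msg)

normal = deviation_score("reminder: quarterly report review is friday", baseline)
phish = deviation_score("urgent!! verify your password now or account locked", baseline)
assert phish > normal  # the off-tone message scores as more deviant
```

The point is not this particular metric, but the shift from matching known-bad signatures to flagging anything that departs from an organization's own communication norms.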
However, no amount of automation replaces the value of employee security awareness. Some AI phishing emails will very likely find their way into inboxes eventually, so detection also requires a well-trained workforce.
There are many ways to build security awareness, but simulation-based training is the most effective because it keeps employees prepared for what AI phishing actually looks like. Modern simulations go beyond simple “spot the typo” exercises: they mirror real campaigns tailored to user roles, so employees are prepared for the exact types of attacks they are most likely to face.
Reporting of suspicious activity then comes naturally, since the goal is not to test employees but to build muscle memory.
The final layer of defense is UEBA (user and entity behavior analytics), which ensures that a successful phishing attempt does not turn into a full-scale compromise. UEBA systems detect abnormal user or system activity and alert defenders to potential intrusions. Usually this takes the form of an alert, perhaps about a login from an unexpected location or unusual mailbox changes that don’t line up with IT policy.
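The "login from an unexpected location" rule can be sketched as a tiny baseline-and-alert loop. This is a deliberately simplified illustration: real UEBA products model many signals statistically, and the user names and country codes here are made up.

```python
# Sketch of a UEBA-style rule: alert when a user logs in from a country
# absent from their historical baseline (hypothetical events, stdlib only).
from collections import defaultdict

class LoginBaseline:
    def __init__(self) -> None:
        self.seen: defaultdict[str, set[str]] = defaultdict(set)

    def observe(self, user: str, country: str) -> bool:
        """Record a login; return True if it should raise an alert."""
        is_new = country not in self.seen[user]
        self.seen[user].add(country)
        # The very first login only establishes the baseline, so no alert.
        return is_new and len(self.seen[user]) > 1

baseline = LoginBaseline()
baseline.observe("alice", "US")                  # first login: builds the baseline
assert baseline.observe("alice", "US") is False  # familiar location: no alert
assert baseline.observe("alice", "KP") is True   # unexpected country: alert
```

In practice such an alert would feed a SIEM or incident-response workflow, giving defenders a chance to contain a phished account before attackers move laterally.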
Conclusion
AI has advanced phishing, scaling it to levels that can easily overwhelm or bypass traditional defenses. Heading into 2026, organizations should prioritize AI-driven detection, continuous monitoring, and realistic simulation training.
Success depends on combining advanced technology with human preparedness. Organizations that strike this balance will be well positioned to stay resilient as phishing attacks continue to evolve alongside AI.