
The Rise of AI-Generated Phishing: What Training Needs to Change in 2025

AI-generated phishing is rising in 2025. Learn how training must adapt to protect employees and organizations.
Published on September 29, 2025
Category: Cybersecurity Training

AI-Powered Phishing: A New Era of Deception

Phishing has long been the most common cyberattack method, but in 2025 it has entered a daunting new phase powered by artificial intelligence. Criminals are now using advanced generative AI models (like GPT-4 and custom rogue tools) to craft highly convincing, personalized scam messages at scale. Unlike the clumsy “Nigerian prince” emails of the past, these AI-generated phishing emails arrive with perfect grammar, authentic corporate tone, and details tailored to the recipient, making them far harder to recognize as fraudulent. The U.S. FBI has officially warned that attackers leveraging AI can “craft highly convincing voice or video messages and emails” to enable fraud, resulting in “devastating financial losses [and] reputational damage” if unsuspecting employees are duped. In short, phishing scams have become more believable than ever.

This AI boost isn’t just making phishing better; it’s making it more prevalent. Security analysts observed an explosion of phishing volume correlated with generative AI’s rise: one report noted a staggering 1,265% surge in phishing attacks attributed to AI tools, as cybercriminals rapidly scaled up their campaigns. Because AI can automate what once required human effort, attackers can now launch thousands of unique phishing attempts in the time it used to take to craft one. For example, IBM researchers found that with only five well-crafted prompts, an AI model generated a polished phishing email in just 5 minutes, a task that would normally take expert humans 16 hours. This near-instant production means attackers can cast a much wider net with minimal effort.

Equally worrying, AI enables new tactics beyond text. Modern scammers use AI to scrape personal data from the web and learn a target’s job role, contacts, and writing style, allowing phishing messages to reference real projects or colleagues and avoid generic tell-tale signs. AI can also generate deepfake audio and video, impersonating voices or faces of trusted people. All of this adds up to an unprecedented threat. Even vigilant employees might find it difficult to spot AI-generated scams, since the usual red flags (poor English, unknown sender, etc.) are no longer reliable. In fact, controlled studies show AI-crafted phishing emails can trick roughly 50% of people, matching the success rate of emails written by seasoned human scammers and over three times the success rate of generic phishing attempts. Attackers have essentially leveled up their social engineering playbook with machine efficiency and creativity.

Bottom line: AI is supercharging phishing in 2025, enabling more frequent, sophisticated attacks that exploit human trust. Business leaders must recognize that the threat landscape has changed dramatically. The next phishing email that lands in an employee’s inbox could be indistinguishable from a legitimate message, painstakingly personalized by AI, or the next voice call from “the CEO” could actually be a convincing deepfake. To protect organizations, training our people to handle this new breed of phishing is just as critical as deploying the latest technical email filters. The remainder of this article examines what exactly is different about AI-generated phishing, why our traditional training approaches are struggling to keep up, and how security awareness training must adapt for this AI-driven threat environment.

AI-Generated Phishing: Why It’s Different

Traditional phishing relied on generic lures and human error, but AI-generated phishing is a game changer in several ways. First, it dramatically increases the scale and speed of attacks. With automation, criminals can instantly produce countless unique phishing messages, each tailored to a different target, instead of mass-sending one clumsy form email. This makes detection harder: even if one email is flagged, thousands of other variants may slip through. As noted earlier, researchers demonstrated that an AI can generate in minutes a phishing campaign that would take human experts many hours to build. This efficiency means attackers can afford to cast a much wider net. Unsurprisingly, many organizations have observed a sharp rise in phishing volumes since AI tools became easily accessible.

Second, AI enables unprecedented personalization and polish. Modern language models can mimic writing styles or organizational jargon, producing emails that read exactly like an internal memo from your company or a message from your business partner. These emails come free of the spelling mistakes or awkward phrasing that often signaled a scam in the past. They may even include personal or company-specific details, for instance, referencing a project you’re working on or a recent client meeting, which greatly boosts credibility. One recent study found that hyper-personalized spear-phishing emails crafted with AI were far more effective at deceiving people than the generic training examples employees are used to. In essence, AI lets attackers tailor their bait for each victim, making it feel authentic and relevant.

Finally, AI has expanded phishing beyond email into a multimedia, multi-channel threat. Sophisticated attackers use AI voice cloning to conduct phone scams (vishing) and AI video synthesis to fake video calls. This means an employee might get a phone call that sounds exactly like their CEO, urgently requesting a funds transfer, or even join a video conference and see what appears to be their CFO’s face giving instructions, when in reality both the voice and image are AI-generated frauds. These deepfake techniques exploit our strongest trust cues (hearing a familiar voice or seeing a known face) to override skepticism. As an FBI alert emphasized, criminals are increasingly using AI-generated voices and videos to impersonate trusted individuals and “exploit the trust” of their targets. Most people are not yet trained to doubt what they hear or see in a live call, which is exactly why this tactic is so dangerous.

In summary, AI-generated phishing is different because it is faster, more scalable, more convincing, and more diverse than traditional phishing. By leveraging AI, attackers can tailor each attack to the individual, avoid the usual signs of fraud, and even step outside of email to hit targets through calls and video chats. It truly is a new breed of phishing, which means our defenses, especially human-focused defenses like employee training, need a serious upgrade to cope with these changes.

Real Examples of AI-Enhanced Scams

It’s one thing to discuss AI phishing in theory, but real-world cases already illustrate just how damaging these AI-powered scams can be. Here are a couple of high-profile examples that underscore the impact and urgency of the threat:

  • Deepfake Voice Scam, $243,000 Stolen: One of the first known AI voice impersonation attacks occurred in 2019, when the CEO of a UK-based energy firm received an urgent phone call that sounded exactly like his boss. The caller (actually a criminal using an AI-generated voice) instructed the CEO to transfer funds to a supplier. Trusting the voice, he dutifully transferred €220,000 (~$243,000), only later realizing he had been conned by a fake voice recording. This audio deepfake exploit shows how even a savvy executive can be fooled when the social engineering comes via what seems like a familiar person’s voice. With AI, criminals can now clone voices with frightening accuracy, removing one more barrier to fraud.
  • Deepfake Video Conference, $25 Million Heist: In 2024, an employee at a company in Hong Kong was tricked into one of the largest phishing scams on record, a HK$200 million (≈$25M USD) transfer to fraudsters, after being duped by a deepfake video meeting. The scammers set up a video call in which the employee saw and heard what appeared to be several of her senior executives asking her to execute a confidential financial transaction. The fake video was so convincing (the participants looked and sounded like her actual bosses) that she proceeded to make 15 bank transfers to the attackers’ accounts. By the time the ruse was uncovered, the money was gone. This case demonstrates the extreme lengths to which AI allows attackers to go, staging an entire fake meeting, and how effective such a ploy can be if employees aren’t prepared to question what they see on screen.
  • AI-Generated Email Campaigns, Outperforming Humans: Security training firms have also run controlled experiments pitting AI against human “red team” phishers. In one ongoing study, a custom AI agent was tasked with creating phishing email simulations to trick employees, while an expert human team did the same. By 2025, the AI’s phishing emails were 24% more effective at getting clicks than the ones written by human experts. Similarly, an academic study found AI-written phishing emails achieved a 54% click-through rate, equaling the success of emails by professional penetration testers and vastly surpassing the ~12% click rate of generic phishing attempts. These results, although from simulations, confirm that AI can craft very compelling phish. In the wrong hands, such capability means higher success rates for real attacks.

Each of these examples drives home a key point: seeing is no longer believing in the age of AI. A message or call can appear entirely legitimate and come from a known person, yet still be a sophisticated fake. Employees who were once taught to look for obvious signs of fraud (strange sender addresses, broken English, implausible scenarios) can no longer rely on those simple cues. An AI-enhanced scam may have none of the usual red flags, until you realize too late that the person you were corresponding with wasn’t a person at all. This is why it’s so critical for organizations to update their training and awareness; staff must learn new ways to detect and respond to subtle, AI-assisted deception. The next section examines how most companies’ current training measures are lagging behind in this regard.

Why Traditional Training Falls Short

Many organizations’ security awareness training programs were developed in an era of basic phishing emails and annually refreshed content. Not surprisingly, these traditional training approaches are struggling to keep up with agile, AI-driven threats. Here are the major shortcomings that need to be addressed:

  • Infrequent and Slow to Update: A common practice is to train employees on phishing once a year or perhaps with quarterly modules. Unfortunately, AI-enhanced attack techniques are emerging and evolving month by month. By the time the next annual training rolls around, scammers may have invented entirely new deepfake schemes or phishing tricks. This gap leaves employees unprepared. As one expert noted, annual or quarterly training can’t keep pace; new AI phishing tactics (like voice clones) might already be in circulation long before staff hear about them in the next scheduled session. In short, the cadence of training is too slow relative to the fast-evolving threat.
  • Outdated, Unrealistic Content: Traditional phishing training often relies on examples like obvious fake bank emails or poorly written “password reset” requests to illustrate what to avoid. These might have been relevant a decade ago, but they no longer reflect today’s threats. Employees who only see simplistic, error-ridden phishing examples in training may develop a false sense of confidence: they think, “I’d never fall for a Nigerian prince scam.” Meanwhile, they remain vulnerable to hyper-personalized phishing emails that look professional and reference real internal details. Research shows targeted spear-phishing (precisely the kind AI excels at) is significantly more effective at fooling people. If training scenarios don’t evolve to include these modern tactics, employees will keep getting blindsided by convincing scams that look nothing like the training slides.
  • Focused Only on Email: Most awareness programs still focus almost entirely on email phishing. They might teach how to hover over links or spot a fake sender address. But, as we’ve seen, phishing now spans multiple channels, including phone calls, text messages, and even video calls, often in combination (e.g. an email followed by a confirming phone call from an AI-cloned voice). Traditional training rarely covers vishing (voice phishing) or deepfake scenarios. This leaves a huge blind spot: employees are ill-prepared to handle a suspicious phone call or a fraudulent video meeting. Attackers know this and exploit it. When an employee who’s only trained to spot phishing emails receives an urgent voicemail that sounds like their CFO, they may not realize it could be fake. Thus, sticking to email-only training ignores the multi-channel nature of modern social engineering.
  • Lack of Hands-On Simulation: Many training programs are lecture- or video-based, telling employees what to watch out for. But telling isn’t the same as experiencing. Facing a real-looking phishing attempt in a safe training simulation is far more impactful than reading a bullet list of tips. Traditional programs have limited ability to simulate advanced attacks like deepfakes or AI-personalized emails. Without realistic practice, employees won’t build the reflexes to handle an actual incident. For example, do they know how to respond if a video call doesn’t feel right? Most won’t, because they’ve never encountered one before. This experience gap is a critical failure in legacy training. It’s akin to a fire drill: you can’t just read about using an extinguisher; you need to practice under realistic conditions to be ready for a real fire. The same goes for phishing: practice needs to match the new level of sophistication.

In summary, traditional security training tends to be too static, infrequent, and narrow. It hasn’t kept pace with the threat. Employees are often left with outdated advice (like “look for typos”) that doesn’t apply to AI-generated messages, and they have no exposure to threats outside the email inbox. This mismatch is dangerous. As a result, many organizations are finding that even after completing standard awareness courses, their people still fall victim to well-crafted phishing scams. Clearly, it’s time to rethink and modernize training. The next section will outline exactly what needs to change in 2025 to build a workforce that can outsmart AI-enabled attackers.

How Security Training Must Evolve in 2025

To counter AI-generated phishing, companies need to radically update their training approaches. The goal is to transform employees from easy targets into a resilient last line of defense, a human firewall, even as scams become more sophisticated. Below are key strategies for overhauling security awareness training in 2025:

  • Move to Continuous Learning: Swap out annual check-the-box training for a continuous training model. This means delivering security awareness content in regular, bite-sized doses throughout the year, for example, quick monthly refreshers or even brief weekly tips. Frequent micro-learning keeps cyber threats fresh in employees’ minds and allows the program to update rapidly as new AI attack techniques emerge. If deepfake scams start trending, a quick training module or company-wide alert that month can address it immediately, rather than waiting many months. Continuous training ensures employees are never far from their last refresher, which is crucial when facing ever-changing tactics.
  • Incorporate Realistic Simulations (Including Deepfakes): Training must go beyond slides and actually simulate the types of attacks employees will encounter. This includes traditional phishing email tests and new scenarios like simulated voice phishing calls or fake video meetings. Cutting-edge security awareness platforms now offer the ability to send employees AI-crafted phishing emails and even play them deepfake voicemails or videos as tests. By experiencing these threats in a safe setting, employees can learn to recognize them and practice the correct response. For instance, an employee might receive a training call that uses a cloned voice of an executive; after the scenario, they get feedback on red flags they missed. These lifelike exercises build the muscle memory to respond calmly and correctly to real incidents. Companies should aim to run phishing simulations frequently (e.g. monthly surprise test emails) and include a variety of channels, so no type of attack feels completely foreign to staff. When a genuine deepfake attempt happens, it won’t be the first time employees have had to deal with something like it.
  • Update Phishing “Indicators” Taught: Revise the typical advice given to employees on how to spot a phish. Instead of focusing on superficial indicators (bad grammar, unknown sender), training should emphasize contextual and behavioral red flags that even AI can’t mask. For example, teach staff to be alert to timing and tone inconsistencies: an email from the CEO sent at 3 AM, or a message that is too brief or formal for that sender. AI might write flawless sentences, but it can slip up on human details like timing, tone, or procedure. Highlight these subtle cues in training sessions (the first sketch after this list illustrates how such contextual checks might look in practice). Likewise, educate employees on new signs of deepfake content, such as slight audio glitches or video lag, unnatural eye movements on a call, or voices that don’t perfectly match known accents and cadence. Keeping employees informed on the latest markers of fraud is an ongoing effort. In essence, the “textbook” signs of phishing need to be continuously revised for the AI era.
  • Teach Verification as a Standard Practice: Perhaps the most important skill to ingrain is a habit of verification for any request that involves sensitive data or money, especially if there’s urgency. In 2025, employees should be trained that any high-stakes request, even if it appears to come from a familiar colleague via email, phone, or video, must be verified through a second channel. For example, if you get an email from your CFO to wire funds, pause and verify by calling them on a known number (not one provided in the email) or confirming in person. Similarly, if a voice on the phone asks for confidential info, have a procedure to authenticate the caller (like calling them back through the company directory). Training should include role-playing these verification steps, so employees feel confident executing them under pressure. Make it clear that it’s not only okay but expected to double-check identities, even of bosses, before acting. This “trust but verify” mindset can stop an AI-enabled fraud in its tracks. Many companies now institute policies like requiring verbal confirmation from two different managers for large fund transfers; employees must be made aware of and comfortable with these protocols (the second sketch after this list shows what such an out-of-band check boils down to). Verifying through a second factor is one thing AI can’t easily bypass when done properly.
  • Foster an Open, No-Blame Security Culture: To support all the above, leadership and HR should cultivate a culture where security is everyone’s responsibility and reporting suspicious activity is encouraged, without stigma. Employees need to know they will be praised (not punished) for spotting a phishing attempt or even for reporting when they think they might have clicked something wrong. This is especially important as AI phishing becomes harder to spot; mistakes may happen, and swift reporting can greatly reduce damage. Use positive reinforcement in training: celebrate those who correctly identify simulation tests or who alert the company to new phishing emails they received. Also, ensure there’s an easy, well-known process (like a one-click “Report Phish” button or a dedicated hotline) for employees to ask, “Is this request legitimate?” and get quick guidance. When people feel supported, they are more likely to stick to the training under real conditions. Senior executives should openly talk about these threats and even consider sharing stories of times they themselves verified a hoax email or call; this shows that nobody is above the protocol when it comes to security.
  • Leverage AI for Defense and Training: Just as attackers use AI, defenders can too. Companies should explore using AI-driven email filters and anomaly detection tools to catch phishing attempts that humans miss. Some advanced email security systems use machine learning to flag subtle deviations in writing style or unusual sending patterns that might indicate an email isn’t really from the purported sender. While technical controls aren’t foolproof, they add a safety net. Moreover, AI can assist in training personalization, for instance by analyzing which users are most at risk (those who click on simulations) and then delivering tailored training content or extra simulations to those users to improve their awareness. AI-based training coaches can even provide real-time feedback; one example is an email plugin that warns users if an incoming message looks suspicious or if their email reply contains sensitive info. Embracing these tools amplifies the effectiveness of your security awareness program. According to industry data, organizations that extensively use AI and automation in their security see breaches detected and contained nearly 100 days faster, and save millions in incident costs, compared to those that don’t leverage such technology. The takeaway for training is that human vigilance coupled with smart technology creates a much stronger defense than either alone.
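
To make those contextual red flags concrete, here is a minimal, hypothetical Python sketch of the kind of checks a training module (or a simple mail add-in) might step through. The Email structure, keyword lists, and working-hours threshold are illustrative assumptions, not any vendor’s API or a production filter:

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Email:
    sender: str
    body: str
    sent_at: datetime

def contextual_red_flags(msg: Email, usual_hours=(7, 19)):
    """Return contextual warning signs for a message.

    These mirror the cues described above: odd send times, pressure
    language, and requests to bypass procedure. A teaching sketch only.
    """
    flags = []
    # Timing: sent far outside the sender's normal working hours.
    if not (usual_hours[0] <= msg.sent_at.hour < usual_hours[1]):
        flags.append(f"sent at {msg.sent_at:%H:%M}, outside usual hours")
    # Urgency plus payment/credential language is a classic pressure tactic.
    pressure_words = ("urgent", "immediately", "wire", "gift card", "password")
    if any(word in msg.body.lower() for word in pressure_words):
        flags.append("contains pressure or payment language")
    # Procedure: a request to stay quiet or skip checks is always suspect.
    hush_phrases = ("keep this confidential", "don't call", "do not tell")
    if any(phrase in msg.body.lower() for phrase in hush_phrases):
        flags.append("asks to bypass normal verification")
    return flags

# Example: a 3 AM "CEO" email demanding a quiet wire transfer.
msg = Email(
    sender="ceo@example.com",
    body="Urgent: wire $40,000 today. Keep this confidential.",
    sent_at=datetime(2025, 9, 29, 3, 12),
)
print(contextual_red_flags(msg))  # all three flags fire

Even a toy scorer like this makes the training point: the reliable signs are behavioral (timing, pressure, bypassed procedure), not spelling or grammar.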

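The second sketch illustrates the out-of-band verification habit as a simple decision rule. The directory, threshold, and function names are assumptions for teaching purposes; the principle is that confirmation must travel over a channel the requester did not supply:

# Trusted contact details come from an internal directory,
# never from the suspicious message itself.
COMPANY_DIRECTORY = {
    "cfo@example.com": "+1-555-0100",
}

TRANSFER_THRESHOLD_USD = 10_000  # illustrative policy threshold

def verify_request(requester_email: str, amount_usd: float) -> str:
    """Decide how a high-risk payment request must be verified."""
    if amount_usd < TRANSFER_THRESHOLD_USD:
        return "proceed with standard approval workflow"
    known_number = COMPANY_DIRECTORY.get(requester_email)
    if known_number is None:
        return "halt: requester not in directory; escalate to security"
    # Call back on the directory number (not a number supplied in the
    # message) and require a second manager's sign-off before release.
    return f"halt: call back on {known_number} and obtain dual approval"

print(verify_request("cfo@example.com", 40_000))
# -> halt: call back on +1-555-0100 and obtain dual approval

In practice this logic lives in policy and habit rather than code, but spelling it out this way makes the rule unambiguous and easy to drill in role-play exercises.
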
By implementing these changes (continuous education, realistic multi-channel simulations, updated content on new scam techniques, ingrained verification habits, a supportive culture, and AI-assisted defenses), companies can significantly bolster their resilience against AI-generated phishing. The training approach becomes not just about “awareness” but about empowering employees with practical skills and a security mindset. HR professionals and business leaders have a pivotal role here: allocating time and resources for regular training, endorsing these practices from the top down, and making security awareness a core part of the organizational values. When done right, your workforce becomes an active sensor network that can catch and defuse phishing attempts, rather than the weakest link. In the final section, we’ll wrap up with some closing thoughts on preparing your organization for the challenges ahead.

Final Thoughts: Empowering Employees in the AI Era

The rise of AI-generated phishing is a prime example of technology’s double-edged sword. The same advances that can benefit business are being twisted by criminals to prey on human trust at an unprecedented scale. For organizations across all industries, 2025 is a watershed moment: security awareness can no longer rely on the old playbook. Phishing training needs to evolve from a periodic formality into a dynamic, continual effort that keeps employees one step ahead of ever-more-sophisticated scams. Enterprise leaders and HR professionals should recognize that investing in modern training is not optional; it’s as critical as deploying a firewall or antivirus software. After all, when an employee is about to click a convincing AI-crafted email or approve a request on a fake video call, no technology can save the day; only that employee’s savvy and caution can.

By adopting the training changes discussed, from frequent micro-trainings and deepfake simulations to cultivating a verify-before-trusting culture, companies can turn their people into a robust line of defense. Empowered employees, armed with up-to-date knowledge and hands-on practice, become much harder targets for even the cleverest AI-driven phish. They will know how to pause and verify unusual requests, how to spot subtle signs of fraud, and will feel confident reporting anything suspicious without delay. This empowerment extends beyond just following rules; it creates a mindset where employees take personal ownership of protecting themselves and the business. In an era when phishing emails might be indistinguishable from real emails and a voice on the phone might not be who it claims, such vigilance and skepticism are priceless.

Enterprise leaders should also remember that security is a shared responsibility. IT can implement the best filters, and security teams can run simulations, but the commitment of every department, especially HR in reinforcing training and executives in modeling good behavior, is what truly weaves security into the company’s fabric. When leadership visibly prioritizes cybersecurity (for example, undergoing the same training, or instituting policies like mandatory call-backs on any funds request), it sends a powerful message that thwarts social engineering: attackers can no longer exploit “human nature” if a culture of security awareness has reshaped what normal human behavior is in the workplace.

In conclusion, the threat of AI-generated phishing will likely continue to grow, as AI tools get more powerful and accessible. But this is not cause for despair; rather, it’s a call to action. Organizations that proactively adapt their training and awareness now will be far better positioned to thwart these attacks. By changing how we train our employees, making it continuous, immersive, and aligned with the latest threats, we change the game on the attackers. The end result is a more cyber-resilient workforce that can confidently spot and stop even AI-enabled cons. In the face of relentless innovation by cybercriminals, our best countermeasure is to equally innovate in how we educate and prepare our people. With the right training changes in 2025, every employee can become a cyber defender, and together we can meet the challenge of AI-generated phishing head on.

FAQ

What makes AI-generated phishing more dangerous than traditional phishing?

AI-generated phishing is faster, more scalable, and highly personalized. Attackers can create flawless, convincing messages and even use deepfake voices or videos, making scams far harder to detect.

Can AI phishing attacks really fool experienced employees?

Yes. Studies show AI-crafted phishing emails achieve click rates of around 50%, on par with emails written by expert human scammers and much higher than traditional phishing attempts.

Why do traditional security awareness programs fail against AI threats?

Most programs are infrequent, focus only on email, and use outdated examples. They don’t address multi-channel attacks like voice or video deepfakes and lack realistic simulations.

How should phishing training evolve in 2025?

Training should shift to continuous micro-learning, realistic phishing simulations (including deepfakes), updated fraud indicators, and teaching verification habits for all sensitive requests.

What role should leadership and HR play in tackling AI phishing?

Leaders and HR must foster a supportive culture where reporting is encouraged, reinforce new verification policies, and ensure employees receive regular, practical training that matches evolving threats.

References

  1. Federal Bureau of Investigation (FBI). FBI Warns of Increasing Threat of Cyber Criminals Utilizing Artificial Intelligence. FBI Press Release; 2024. https://www.fbi.gov/contact-us/field-offices/sanfrancisco/news/fbi-warns-of-increasing-threat-of-cyber-criminals-utilizing-artificial-intelligence
  2. Heiding F, Lermen S, Kao A, et al. Evaluating Large Language Models’ Capability to Launch Fully Automated Spear Phishing Campaigns: Validated on Human Subjects. arXiv preprint arXiv:2412.00586; 2024. https://arxiv.org/abs/2412.00586
  3. Damiani J. A Voice Deepfake Was Used To Scam A CEO Out Of $243,000. Forbes; 2019. https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/
  4. Milmo D. Company worker in Hong Kong pays out £20m in deepfake video call scam. The Guardian; 2024. https://www.theguardian.com/world/2024/feb/05/hong-kong-company-deepfake-video-conference-call-scam
  5. Carruthers S. AI vs. human deceit: Unravelling the new age of phishing tactics. IBM Security Intelligence (X-Force); 2023. https://www.ibm.com/think/x-force/ai-vs-human-deceit-unravelling-new-age-phishing-tactics