In July 2024, executives at luxury carmaker Ferrari received WhatsApp messages, and then a phone call, from someone posing as their CEO. The voice on the line matched their chief executive’s tone and accent convincingly, yet it was a sophisticated fake: the impostors had used artificial intelligence (AI) to clone the CEO’s voice and almost tricked staff into authorizing a fraudulent transaction. This real-world incident highlights a growing truth: cybercriminals are leveraging AI to craft incredibly convincing scams, from deepfaked voices to flawless phishing emails. At the same time, organizations are beginning to deploy AI to bolster their own defenses, fueling a new cybersecurity “arms race.” The stakes are enormous: global cybercrime is projected to cost the world a staggering $10.5 trillion annually by 2025. Yet alarmingly, 90% of companies today lack the maturity to counter advanced AI-driven threats effectively. In this era of intelligent attacks and defenses, how should businesses respond? This article explores what’s changing in security awareness amid the rise of AI, and how leaders can prepare their people and processes to stay safe.
AI has ushered in a dramatic shift in the cybersecurity threat landscape. Attackers now systematically integrate AI into their tactics, making attacks faster, more scalable, and more convincing than ever. Phishing is one of the clearest examples. For years, employees were taught to spot phishing emails by tell-tale signs like poor grammar or generic greetings. That era is over: modern attackers can use AI language models to generate phishing messages that are grammatically flawless, contextually tailored, and highly persuasive. In fact, phishing campaigns tied to generative AI have surged by over 1,200%, according to recent analyses. These AI-crafted emails are not only more numerous but also dangerously effective: studies found that AI-generated phishing lures convinced roughly 60% of recipients to engage, compared to about 12% for traditional phishing attempts. In other words, an email composed by AI can be five times more successful at deceiving an employee than the classic badly written scam. And because AI automates the work of crafting and customizing these messages, attackers can launch targeted phishing campaigns at massive scale with minimal effort, achieving an estimated 95% reduction in the cost and time per attack.
Another game-changing threat is the rise of deepfakes: AI-generated audio or video that convincingly mimics real people. We already saw how Ferrari foiled a deepfake voice scam. Unfortunately, many organizations have not been as lucky. In 2019, the CEO of a British energy firm was tricked by an AI-cloned voice of his parent company’s chief executive into transferring $243,000 to fraudsters. And in one alarming 2024 case at the global engineering firm Arup, criminals used AI-generated video on a conference call to impersonate the company’s CFO and other colleagues, persuading an employee to send roughly $25 million to fraudulent accounts. These incidents underscore how AI-powered social engineering can defeat the usual verification steps: seeing a face or hearing a familiar voice is no longer proof you’re dealing with the right person. Deepfake scams are exploding in frequency: in just the first quarter of 2025, 179 deepfake incidents were recorded, exceeding the total for all of 2024. Such AI-enabled fraud could lead to tremendous financial losses; one analysis by Deloitte projects that generative AI-driven fraud may reach $40 billion in the U.S. by 2027, up from $12.3 billion in 2023.
Attackers are also leveraging AI to bolster malware and other attack techniques. For instance, AI can automatically generate new variants of malicious code (so-called polymorphic malware), constantly changing its shape to evade detection. Recent reports indicate that over 70% of major breaches now involve polymorphic malware, often created or enhanced by AI tools. Furthermore, underground hacker forums have started offering malicious AI-as-a-service: in 2023, rogue AI chatbots named “WormGPT” and “FraudGPT” emerged, giving criminals one-click tools to generate convincing phishing emails or malware code. In short, AI has lowered the barrier to entry for cybercrime: it lets attackers launch more attacks, of greater sophistication, with less effort. This evolving threat landscape means that old habits and defenses are no longer sufficient. As one report succinctly put it, when scammers can use AI to fool even savvy professionals, it “proves beyond doubt that yesterday’s protection measures will certainly not work in 2025 and beyond.”
On the positive side, the same AI advancements can be turned toward strengthening security. Just as attackers are arming themselves with AI, defenders are increasingly using AI to fortify cyber defenses. AI-powered security systems can monitor networks and user behavior 24/7, analyze vast amounts of data, and flag anomalies or threats far faster than a human could. Machine learning models excel at spotting patterns: they can detect the subtle signs of a network intrusion or phishing attempt in real time, enabling a swift response before damage is done. In fact, modern AI-based detection systems have demonstrated threat detection rates as high as 98% in some tests. This speed and precision are crucial at a time when attack volume is high. By automating routine security monitoring and initial incident response, AI frees human analysts to focus on complex or strategic tasks, significantly reducing breach costs and response times in many cases.
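To make the anomaly-detection idea concrete, here is a minimal sketch using scikit-learn’s IsolationForest, an unsupervised model that flags data points that deviate from learned normal behavior. The login-telemetry features and numbers are invented for illustration; a real system would draw on far richer signals:

```python
# Minimal sketch: flagging anomalous login behavior with an unsupervised model.
# Feature names and values are illustrative assumptions, not a production design.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical telemetry per login: [hour of day, failed attempts, MB downloaded]
normal_logins = np.column_stack([
    rng.normal(10, 2, 500),   # most logins happen mid-morning
    rng.poisson(0.2, 500),    # the occasional failed attempt
    rng.normal(50, 15, 500),  # typical download volume
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_logins)

# A 3 a.m. login with many failures and a huge data pull stands out.
suspicious = np.array([[3, 6, 900]])
print(model.predict(suspicious))  # -1 means "anomaly" in scikit-learn
```

Because the model learns what “normal” looks like rather than matching known attack signatures, it can surface novel behavior, which is exactly the property defenders need against fast-mutating, AI-generated threats.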
Enterprise security leaders often describe this dynamic as an “AI arms race.” The key to survival is to fight fire with fire, leveraging AI tools to counter AI-empowered attackers. For instance, AI-driven email filters and user behavior analytics can catch phishing emails or account takeovers that would slip past traditional defenses. Some organizations are implementing AI-based fraud detection that can recognize deepfake voices or unusual transaction requests by analyzing acoustic or behavioral cues. AI can also aid in predictive threat intelligence: by crunching global data on emerging threats, AI systems can warn defenders about new attack techniques (say, a novel malware strain or scam) before they hit the organization. However, adopting AI for defense is not as simple as buying a new gadget; it requires the right strategy and governance. Security teams must ensure their AI systems are properly trained, vetted, and continuously updated, since attackers will try to outsmart them. There is also the risk of “shadow AI,” where employees or departments adopt AI tools without the security team’s knowledge, potentially introducing vulnerabilities. To prepare, many enterprises are establishing AI governance frameworks (such as NIST’s AI Risk Management Framework) to manage these risks and use AI responsibly.
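As an illustration of the email-filtering idea, the sketch below trains a tiny text classifier to separate phishing-style messages from routine ones. The handful of sample emails is invented purely for demonstration; a real filter trains on large labeled corpora and weighs many additional signals such as headers, sender reputation, and embedded URLs:

```python
# Minimal sketch of an ML-based phishing email filter: a TF-IDF text model.
# The four training emails are made up; real filters need far more data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Urgent: verify your payroll account now or lose access",
    "Your invoice is attached, wire payment today to the new account",
    "Agenda for Thursday's project sync, see attached notes",
    "Reminder: benefits enrollment closes at the end of the month",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(emails, labels)

# Score a new message; urgency plus a payment demand pushes it toward "phishing".
print(clf.predict(["Immediate action required: confirm wire transfer"]))
```

The same pattern of scoring behavior against a learned baseline underlies the behavioral-analytics and deepfake-detection tools mentioned above, just with acoustic or transactional features in place of word frequencies.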
Most importantly, even the best AI defenses must be combined with human vigilance. Technology alone is no silver bullet: if employees fall prey to an AI-crafted scam, the best firewall or AI filter can be bypassed. That is why experts emphasize a hybrid approach: use AI to reinforce security, but also invest in training people to recognize and respond to threats. Real-world incidents show that while AI-driven tools are critical, the ultimate defense still relies on a well-trained, vigilant workforce.
Forward-thinking organizations are addressing this through dedicated Cybersecurity Training programs that teach employees how to identify AI-generated threats and practice secure digital behavior.
In the next section, we’ll discuss how security awareness training itself is evolving in the AI era.
Given the novel threats we’ve outlined, organizations must adapt their security awareness training programs to keep pace. Many companies still rely on static, annual training modules or quarterly phishing email drills. These traditional approaches are rapidly becoming outdated. A recent industry survey found that while 75% of organizations require employees to complete security training at least quarterly, much of it is treated as a checkbox exercise to satisfy compliance requirements. The result is often stale content and disengaged employees. When training is infrequent or irrelevant, employees tend to tune it out, which is dangerous when attackers are constantly updating their methods.
To truly defend against AI-enhanced threats, experts recommend moving toward continuous, dynamic, and personalized learning. One promising concept is Just-in-Time (JIT) training: delivering bite-sized security education to employees at the moment they need it. For example, if an employee is about to click a suspicious link or has just failed a phishing test, an AI-driven platform can instantly pop up a short lesson or warning. This way, the training is timely and highly relevant to real behavior. Such adaptive training was previously hard to scale, but AI tools now make it feasible. In fact, nearly 99% of organizations in a 2025 survey said they favor using AI in future security awareness efforts, for tasks like automatically generating training content, creating individualized phishing simulations, and even providing conversational coaching via chatbots. Imagine an AI “cyber coach” that answers employees’ security questions on demand, or auto-generates a fake phishing email tailored to an employee’s role to test their response. These are no longer far-fetched ideas but emerging capabilities.
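As a rough sketch of how a JIT trigger might work, the snippet below maps risky user events to instant micro-lessons. The event names, lesson text, and notify() delivery hook are hypothetical placeholders, not features of any particular product:

```python
# Minimal sketch of the just-in-time training idea: react to a risky user
# event with an immediate micro-lesson instead of waiting for annual training.
from dataclasses import dataclass

@dataclass
class SecurityEvent:
    user: str
    kind: str  # e.g. "clicked_suspicious_link", "failed_phishing_sim"

# Illustrative lesson catalog; real platforms would generate or select content.
MICRO_LESSONS = {
    "clicked_suspicious_link": "Hover over links to preview the real URL before clicking.",
    "failed_phishing_sim": "Check the sender's address and verify unusual requests by phone.",
}

def notify(user: str, message: str) -> None:
    # Placeholder delivery channel (pop-up, chat message, email, etc.).
    print(f"[coach -> {user}] {message}")

def handle(event: SecurityEvent) -> None:
    lesson = MICRO_LESSONS.get(event.kind)
    if lesson:  # teach at the moment of risk, not months later
        notify(event.user, lesson)

handle(SecurityEvent(user="alice", kind="failed_phishing_sim"))
```

The value of this pattern is timing: the lesson arrives while the risky behavior is still fresh, which is what makes the training feel relevant rather than like a compliance chore.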
Here are some key ways businesses can upgrade their security awareness training for the AI era:

- Train continuously, not annually: replace static yearly modules with short, frequent lessons and simulations that keep pace with fast-evolving threats.
- Make simulations realistic: use AI-generated phishing lures, and even deepfake-style voice or video scenarios, so employees practice against the attacks they will actually face.
- Personalize the learning: tailor content and phishing tests to each employee’s role, department, and track record, rather than sending everyone the same generic exercise.
- Deliver lessons just in time: trigger micro-training at the moment of risky behavior, as described above.
- Measure and iterate: track simulation failure rates, incident reporting rates, and improvement over time, and refine the program based on the data.
Crucially, executive support and company culture play a huge role in training success. Employees are more likely to take security seriously if top leaders champion the program. It is wise to identify “security champions” across departments (not just IT, but also HR, finance, and beyond) who can advocate for good practices and keep colleagues engaged. And instead of framing awareness training as a mundane compliance task, leadership should communicate its importance in protecting the organization’s mission and customers. This brings us to the broader point of culture in the AI-threat era.
Technology and training solutions alone will not fully safeguard an organization; what’s needed is a strong security-aware culture that pervades every level of the company. This is especially true as AI-blended threats target human trust and decision-making. HR professionals, CISOs, business owners, and other enterprise leaders all have a part to play in cultivating this culture. Here are some focal areas for leadership:

- Lead by example: executives should follow the same security practices they expect of staff and discuss security openly and often.
- Encourage reporting without fear: employees who click a bad link or notice something odd should feel safe flagging it immediately rather than hiding the mistake.
- Enforce verification policies: require out-of-band confirmation (such as a call back to a known number) for payment requests and other sensitive actions, no matter who appears to be asking.
- Keep people informed: share timely, plain-language updates on emerging threats like deepfake scams so awareness stays current.
It’s worth noting that virtually all executives recognize the need to strengthen security awareness in this new era. In one global survey, 96% of executives agreed that more organization-wide training and awareness would help reduce cyberattacks. Many leaders are already taking action: nearly 96% have initiatives or plans focused on mitigating AI-related threats as part of their cyber incident response strategies. And importantly, these efforts pay off: about 89% of organizations reported measurable improvements in their security posture after implementing comprehensive awareness programs.
Finally, keep the human factor front and center. Even as we deploy cutting-edge AI tools and processes, remember that attackers ultimately aim to exploit human psychology, be it through fear, urgency, curiosity, or trust. By building a culture where employees are aware of these manipulation tactics and feel empowered to question odd situations, you create a human firewall that complements your technical firewalls. After all, a recent report found 99% of organizations had security incidents linked to avoidable human error. Reducing those errors through education and culture is one of the most powerful defenses available, AI or not.
The AI era presents a classic double-edged sword for security. On one side, we face attackers who use AI to supercharge their scams, flooding inboxes with convincing phishing emails, mimicking voices and videos to impersonate trusted people, and probing for weaknesses at a scale unimaginable in the past. On the other side, we have unprecedented tools at our disposal to detect and deter these threats, from intelligent monitoring systems to AI-driven training programs. Organizations that succeed in this new landscape will be those that embrace AI’s advantages while staying vigilant about its pitfalls. This means investing in smart defenses and employee education, continuously updating strategies as threats evolve, and fostering a company-wide ethos of caution and verification.
For HR professionals, CISOs, and business leaders alike, the mission is clear: make security awareness an ongoing priority. In the AI era, security can no longer be seen as just an IT problem or a yearly checkbox. It’s a living, breathing part of organizational culture and risk management. By preparing your people, through knowledge, practice, and the right tools, you empower them to be the strongest link in the security chain rather than the weakest. The challenges are formidable, but not insurmountable. With a blend of human awareness and artificial intelligence on our side, we can navigate the threats of the AI age and continue to protect our enterprises in this rapidly changing digital world.
What are AI-powered cyber threats?
AI-powered cyber threats use artificial intelligence to enhance attack methods like phishing, deepfakes, and polymorphic malware. These threats are faster, more convincing, and harder to detect than traditional cyberattacks.

How is AI used in cybersecurity defense?
AI is used in cybersecurity defense to monitor networks 24/7, detect anomalies, block phishing attempts, and identify deepfakes. It enables faster responses and predictive threat intelligence to stop attacks before they cause damage.

Why is traditional security training no longer enough?
Traditional training, such as annual modules, cannot keep up with evolving AI-driven threats. Modern attacks require continuous, personalized, and interactive training to keep employees alert and prepared.

What is just-in-time (JIT) security training?
JIT security training delivers timely, bite-sized lessons when employees encounter risky behavior, such as clicking on suspicious links. This makes learning relevant and improves retention.

How can companies build a security-aware culture?
Companies can build a security-aware culture by leading by example, encouraging reporting without fear, enforcing strict verification policies, and keeping employees informed about new threats.