
Security Awareness in the AI Era: What’s Changing and How to Prepare?

Discover how AI is reshaping cybersecurity, from deepfake scams to AI-driven defenses, and learn how leaders can protect their organizations.
Published on July 2, 2025
Category: Cybersecurity Training

Navigating Security in the Age of AI

In July 2024, executives at luxury carmaker Ferrari received WhatsApp messages and even a phone call from someone posing as their CEO. The voice on the line matched their chief executive’s tone and accent perfectly, yet it was a sophisticated fake. The impostors had used artificial intelligence (AI) to clone the CEO’s voice and nearly tricked finance staff into authorizing a fraudulent transaction. This real-world incident highlights a growing truth: cybercriminals are leveraging AI to craft incredibly convincing scams, from voice deepfakes to realistic phishing emails. At the same time, organizations are beginning to deploy AI to bolster their defenses, setting off a new cybersecurity “arms race.” The stakes are enormous: global cybercrime is projected to cost the world a staggering $10.5 trillion annually by 2025. Yet alarmingly, 90% of companies today lack the maturity to counter advanced AI-driven threats effectively. In this era of intelligent attacks and defenses, how should businesses respond? This article explores what’s changing in security awareness amid the rise of AI, and how leaders can prepare their people and processes to stay safe.

The AI-Powered Threat Landscape

AI has ushered in a dramatic shift in the cybersecurity threat landscape. Attackers are now systematically integrating AI into their tactics to make attacks faster, more scalable, and more convincing than ever. One of the clearest examples is phishing. For years, employees were taught to spot phishing emails by tell-tale signs like poor grammar or generic greetings. That era is over: modern attackers can use AI language models to generate phishing messages that are grammatically flawless, contextually tailored, and highly persuasive. In fact, phishing campaigns tied to generative AI have surged by over 1,200%, according to recent analyses. These AI-crafted emails are not only more numerous but also dangerously effective: studies found AI-generated phishing lures can convince roughly 60% of recipients to engage, compared to about 12% for traditional phishing attempts. In other words, an email composed by AI can be five times more successful at deceiving an employee than the classic badly written scam. And because AI automates the work of crafting and customizing these messages, attackers can launch targeted phishing campaigns at massive scale with minimal effort, achieving an estimated 95% reduction in cost and time per attack.

Another game-changing threat is the rise of deepfakes: AI-generated audio or video content that convincingly mimics real people. We already saw how Ferrari foiled a deepfake voice scam. Unfortunately, many organizations have not been as lucky. In 2019, for example, the CEO of a British energy firm was tricked by an AI-cloned voice into transferring $243,000 to fraudsters. And in one alarming case at the global engineering firm Arup, criminals used an AI-generated video call to impersonate the CFO and other colleagues, persuading an employee to send roughly $25 million to fraudulent accounts. These incidents underscore how AI-powered social engineering can defeat the usual verification steps: seeing a face or hearing a familiar voice is no longer proof you’re dealing with the right person. Deepfake scams are exploding in frequency: in just the first quarter of 2025, 179 deepfake incidents were recorded, exceeding the total for all of 2024. Such AI-enabled fraud could lead to tremendous financial losses; one analysis by Deloitte projects that generative AI-driven fraud may reach $40 billion in the U.S. by 2027 (up from $12.3 billion in 2023).

Attackers are also leveraging AI to bolster malware and other attack techniques. For instance, AI can automatically generate new variants of malicious code (so-called polymorphic malware), constantly changing its shape to evade detection. Recent reports indicate that over 70% of major breaches now involve polymorphic malware, often created or enhanced by AI tools. Furthermore, underground hacker forums have started offering malicious AI-as-a-service: in 2023, rogue AI chatbots named “WormGPT” and “FraudGPT” emerged, providing criminals with one-click tools to generate convincing phishing emails or malware code. In short, AI has lowered the barrier to entry for cybercrime: it lets attackers launch more attacks, of greater sophistication, with less effort. This evolving threat landscape means that old habits and defenses are no longer sufficient. As one report succinctly put it, when scammers can use AI to fool even savvy professionals, it “proves beyond doubt that yesterday’s protection measures will certainly not work in 2025 and beyond.”

AI in Cyber Defense: Fighting Fire with Fire

On the positive side, the same AI advancements can be turned toward strengthening security. Just as attackers are arming themselves with AI, defenders are increasingly using AI to fortify cyber defenses. AI-powered security systems can monitor networks and user behaviors 24/7, analyze vast amounts of data, and flag anomalies or threats much faster than a human could. For example, machine learning models excel at spotting patterns: they can detect the subtle signs of a network intrusion or phishing attempt in real time, enabling a swift response before damage is done. In fact, modern AI-based detection systems have demonstrated threat detection rates as high as 98% in some tests. This speed and precision are crucial at a time when attack volume is high. By automating routine security monitoring and initial incident response, AI frees human analysts to focus on complex or strategic tasks, significantly reducing breach costs and response times in many cases.
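
To make this concrete, here is a minimal sketch of how ML-based anomaly detection on user behavior might look, using scikit-learn’s IsolationForest on hypothetical login telemetry. The features, contamination rate, and data are illustrative assumptions, not a production design.

```python
# Minimal anomaly-detection sketch: flag login events that deviate from
# a learned baseline of normal behavior (illustrative features only).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per login: [hour_of_day, MB_downloaded, failed_attempts]
normal_logins = np.column_stack([
    rng.normal(10, 2, 500),   # mostly business-hours logins
    rng.normal(50, 15, 500),  # typical download volume
    rng.poisson(0.2, 500),    # the occasional mistyped password
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_logins)

# A 3 a.m. login that pulls 900 MB after 6 failed attempts should stand out.
suspicious = np.array([[3, 900, 6]])
print(model.predict(suspicious))            # -1 means "anomaly"
print(model.decision_function(suspicious))  # lower score = more anomalous
```

In a real deployment the flagged event would feed an alerting pipeline for analyst triage rather than being printed, but the core idea is the same: learn “normal,” then surface deviations fast.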

Enterprise security leaders often describe this dynamic as an “AI arms race.” The key to survival is to fight fire with fire, leveraging AI tools to counter AI-empowered attackers. For instance, AI-driven email filters and user behavior analytics can help catch phishing emails or account takeovers that would slip past traditional defenses. Some organizations are implementing AI-based fraud detection that can recognize deepfake voices or unusual transaction requests by analyzing acoustic or behavioral cues. Additionally, AI can aid in predictive threat intelligence: by crunching global data on emerging threats, AI systems might warn defenders about new attack techniques (say, a novel malware strain or scam) before they hit the organization. However, adopting AI for defense is not as simple as buying a new gadget: it requires the right strategy and governance. Security teams must ensure their AI systems are properly trained, vetted, and continuously updated, since attackers will try to outsmart them. There’s also the risk of “shadow AI,” where employees or departments start using AI tools without the knowledge of security teams, potentially introducing vulnerabilities. To prepare, many enterprises are establishing AI governance frameworks (such as NIST’s AI Risk Management Framework) to manage these risks and use AI responsibly.
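
As a simplified illustration of the email-filtering idea, the sketch below trains a toy text classifier to flag phishing-style wording. The six-message corpus is invented for demonstration; real filters learn from millions of messages and use far richer signals than raw text.

```python
# Toy ML phishing filter: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Urgent: verify your account now or it will be suspended",
    "Your invoice for last month is attached, please review",
    "CEO here, I need you to wire funds immediately, keep this quiet",
    "Team lunch is moved to Thursday at noon",
    "Reset your password using this link within 24 hours",
    "Minutes from yesterday's project meeting attached",
]
labels = [1, 0, 1, 0, 1, 0]  # 1 = phishing, 0 = legitimate

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(emails, labels)

test = ["Please wire $40,000 to this new supplier account urgently"]
print(clf.predict(test))        # expect [1]: wording overlaps with known phish
print(clf.predict_proba(test))  # per-class confidence scores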

Most importantly, even the best AI defenses should be combined with human vigilance. Technology alone is not a silver bullet; if employees fall prey to an AI-crafted scam, even the best firewall or AI filter may be bypassed. That’s why experts emphasize a hybrid approach: use AI to reinforce security, but also invest in training people to recognize and respond to threats. Real-world incidents show that while AI-driven tools are critical, the ultimate defense still relies on a well-trained, vigilant workforce.

Forward-thinking organizations are addressing this through dedicated Cybersecurity Training programs that teach employees how to identify AI-generated threats and practice secure digital behavior.

In the next section, we’ll discuss how security awareness training itself is evolving in the AI era.

Rethinking Security Awareness Training

Given the novel threats we’ve outlined, organizations must adapt their security awareness training programs to keep pace. Many companies still rely on static, annual training modules or quarterly phishing email drills. These traditional approaches are rapidly becoming outdated. A recent industry survey found that while 75% of organizations require employees to complete security training at least quarterly, much of it is treated as a checkbox exercise to satisfy compliance requirements. The result is often stale content and disengaged employees. When training is infrequent or irrelevant, employees tend to tune it out, which is dangerous when attackers are constantly updating their methods.

To truly defend against AI-enhanced threats, experts recommend moving toward continuous, dynamic, and personalized learning. One promising concept is Just-in-Time (JIT) training: delivering bite-sized security education to employees at the moment they need it. For example, if an employee is about to click a suspicious link or has just failed a phishing test, an AI-driven platform could instantly pop up a short lesson or warning. This way, the training is timely and highly relevant to real behavior. Such adaptive training was previously hard to scale, but AI tools now make it feasible. In fact, nearly 99% of organizations in a 2025 survey said they are in favor of using AI in future security awareness efforts, for tasks like automatically generating training content, creating individualized phishing simulations, and even providing conversational coaching via chatbots. Imagine an AI “cyber coach” that can answer employees’ security questions on demand, or auto-generate a fake phishing email tailored to an employee’s role to test their response. These are no longer far-fetched ideas but emerging capabilities.
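
For a flavor of what JIT delivery logic might look like, here is a minimal sketch in Python. The event names, lesson texts, and delivery hook are hypothetical; a real platform would push the lesson into the user’s browser or chat client.

```python
# Just-in-Time training sketch: map risky events to instant micro-lessons.
from dataclasses import dataclass

MICRO_LESSONS = {
    "clicked_suspicious_link": (
        "Pause: this link failed our reputation checks. Hover over links to "
        "inspect the real destination before clicking, and report anything odd."
    ),
    "failed_phishing_simulation": (
        "That was a simulated phish. Notice the mismatched sender domain and "
        "the artificial urgency, two common red flags."
    ),
}

@dataclass
class RiskEvent:
    user: str
    kind: str  # e.g. "clicked_suspicious_link"

def deliver_jit_training(event: RiskEvent) -> str | None:
    """Return a micro-lesson matching the event, or None if nothing applies."""
    lesson = MICRO_LESSONS.get(event.kind)
    if lesson:
        # In practice this would trigger a pop-up notification; here we
        # simply return the coaching text.
        return f"[Security coach -> {event.user}] {lesson}"
    return None

print(deliver_jit_training(RiskEvent(user="j.doe", kind="clicked_suspicious_link")))
```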

Here are some key ways businesses can upgrade their security awareness training for the AI era:

  • Focus on Emerging Threats: Update training content to cover AI-driven attack techniques. Employees should learn about deepfakes (e.g., how a voice or video can be faked), AI-generated phishing, and other new scams. For example, staff can be taught verification steps: if a “CEO” asks for an unusual wire transfer, don’t rely on voice alone; always verify identity through a second channel. Drilling this habit is vital, since simple verification procedures have foiled deepfake scams at companies like WPP and Ferrari. Training should emphasize healthy skepticism: just because a message looks authentic or a caller sounds like an executive doesn’t mean it is.
  • Increase Frequency and Interactivity: Instead of one-off yearly seminars, make training a continuous effort. Use short, frequent modules or exercises that keep security top-of-mind. Many organizations are shifting to monthly phishing simulations or micro-learning videos. AI can help generate fresh scenarios so content doesn’t become predictable. The goal is to reinforce good habits through regular practice rather than a once-a-year information dump.
  • Personalize and Contextualize Learning: Leverage AI to tailor training to each user’s role, behavior, and threat profile. A finance employee might receive training scenarios about CEO fraud and invoice scams, whereas a developer might get modules on secure coding. Modern security platforms can use machine learning to identify which topics a particular employee struggles with (say, recognizing phishing emails) and then provide additional coaching on that topic. Personalized, relevant training is far more engaging than generic lessons. Security leaders note that training shouldn’t be one-size-fits-all; it works best when it feels directly applicable to one’s day-to-day work.
  • Embed Training into the Workflow: Aim to integrate awareness reminders into employees’ regular tools and routines. For instance, an email system might warn users when an incoming message looks suspicious (banner notifications), turning each risky moment into a learning opportunity; a minimal sketch of this idea follows this list. Collaboration platforms could have built-in security tips or AI assistants that users can quickly consult (“Is this request legitimate?”). The idea is to make security awareness a seamless part of work life, not an isolated classroom activity.
  • Measure and Reinforce: Take advantage of analytics (often powered by AI) to track training effectiveness and adjust accordingly. Identify which departments or topics carry higher risk and double down on those. Celebrate successes: if phishing click rates drop or an employee smartly avoids a deepfake trap, share that story (anonymously if needed) to reinforce positive behavior. Moreover, continuously review and update the program content. Cyber threats evolve quickly, so the training curriculum should be revisited at least annually (if not more often) to incorporate the latest attack trends and lessons learned from any security incidents.
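
As mentioned in the “Embed Training into the Workflow” point above, here is a minimal sketch of heuristic banner warnings for incoming email. The trusted-domain list, lookalike list, and urgency patterns are illustrative assumptions; real secure email gateways combine many more signals, increasingly scored by ML models.

```python
# Banner-warning sketch: prepend a caution notice when simple heuristics fire.
import re

TRUSTED_DOMAINS = {"example.com"}                     # hypothetical corporate domain
LOOKALIKE_DOMAINS = {"examp1e.com", "exarnple.com"}   # hypothetical spoofs of it
URGENCY_WORDS = re.compile(r"\b(urgent|immediately|wire|gift cards?)\b", re.IGNORECASE)

def banner_for(sender: str, subject: str, body: str) -> str | None:
    """Return a caution banner for the message, or None if nothing triggers."""
    domain = sender.rsplit("@", 1)[-1].lower()
    warnings = []
    if domain not in TRUSTED_DOMAINS:
        warnings.append("external sender")
    if domain in LOOKALIKE_DOMAINS:
        warnings.append("possible lookalike of a trusted domain")
    if URGENCY_WORDS.search(subject + " " + body):
        warnings.append("urgent payment language")
    if warnings:
        return "CAUTION: " + "; ".join(warnings) + ". Verify through a second channel before acting."
    return None

print(banner_for("ceo@examp1e.com", "Urgent wire needed", "Please send funds immediately."))
```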

Crucially, executive support and company culture play a huge role in training success. Employees are more likely to take security seriously if top leaders champion the program. It’s wise to identify “security champions” across departments (not just IT, but also HR, finance, and beyond) who can advocate for good practices and keep colleagues engaged. And instead of framing awareness training as a mundane compliance task, leadership should communicate its importance in protecting the organization’s mission and customers. This brings us to the broader point of culture in the AI-threat era.

Building a Cybersecurity Culture in the AI Era

Technology and training solutions alone will not fully safeguard an organization; what’s needed is a strong security-aware culture that pervades all levels of the company. This is especially true as AI-enabled threats target human trust and decision-making. HR professionals, CISOs, business owners, and other enterprise leaders all have a part to play in cultivating this culture. Here are some focal areas for leadership:

  • Lead by Example: Leaders should visibly practice good security hygiene and follow the same policies they expect employees to follow. For instance, if multi-factor authentication and verification callbacks are policy, executives must follow them too (no “exceptions” for convenience). When staff see management being vigilant, double-checking an unusual email or personally verifying a request, it reinforces that security is everyone’s responsibility.
  • Encourage Vigilance and Reporting: Create an environment where employees feel comfortable reporting suspicious activities or admitting mistakes without fear of punishment. In the AI era, even a well-trained employee might momentarily be fooled by a very convincing scam. What’s important is that they report it immediately. Quick reporting can prevent an incident from escalating. Leadership can foster this by responding to reports with gratitude and support, not blame. A proactive reporting culture is essential to catch AI-enabled fraud early.
  • Establish Clear Verification Policies: Given the prevalence of deepfake impersonation and fraudulent communications, organizations should implement strict verification protocols for sensitive requests. For example, any request to transfer funds, change bank details, or release sensitive data should be verified through a second channel (a known phone number, an in-person confirmation, etc.), as illustrated in the sketch after this list. These procedures must be well-communicated and ingrained so that employees follow them even when a request comes with urgency or authority. As one CEO advised after experiencing a deepfake attempt, “Just because the account has my photo doesn’t mean it’s me.” Making this mindset part of the company DNA can stop AI scams in their tracks.
  • Stay Informed and Share Knowledge: The threat landscape is continually evolving. CISOs and security teams should stay up-to-date on the latest AI-related threats and regularly brief the rest of the organization. This could be in the form of monthly newsletters, internal webinars, or quick tips in team meetings. When a notable incident hits the news (say, a deepfake scam at another company), use it as a teachable moment: discuss how such a tactic might appear and remind staff of how to respond. Over time, this keeps awareness high. HR and department heads can integrate such content into ongoing professional development or staff onboarding, ensuring new hires also absorb the security-first culture.
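
As noted in the verification-policy point above, the sketch below encodes a two-channel confirmation rule for sensitive requests: high-risk actions stay blocked until confirmed through an independently known channel. The action names, threshold, and channel labels are illustrative assumptions.

```python
# Two-channel verification sketch: sensitive requests require out-of-band
# confirmation before approval, regardless of how convincing the requester is.
from dataclasses import dataclass, field

SENSITIVE_ACTIONS = {"wire_transfer", "change_bank_details", "release_data"}

@dataclass
class Request:
    requester: str
    action: str
    amount: float = 0.0
    confirmed_channels: set = field(default_factory=set)

def is_approved(req: Request) -> bool:
    """Approve only if a sensitive request was confirmed out-of-band."""
    if req.action not in SENSITIVE_ACTIONS and req.amount < 10_000:
        return True  # low-risk requests pass through
    # Require confirmation on a channel other than the one the request
    # arrived on, e.g. a callback to a phone number already on file.
    return bool(req.confirmed_channels & {"known_phone_callback", "in_person"})

urgent = Request(requester="'CEO' via WhatsApp", action="wire_transfer", amount=250_000)
print(is_approved(urgent))  # False: no out-of-band confirmation yet

urgent.confirmed_channels.add("known_phone_callback")
print(is_approved(urgent))  # True: verified through a second channel
```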

It’s worth noting that virtually all executives recognize the need to strengthen security awareness in this new era. In one global survey, 96% of executives agreed that more organization-wide training and awareness would help reduce cyberattacks. Many leaders are already taking action: nearly 96% have initiatives or plans focused on mitigating AI-related threats as part of their cyber incident response strategies. And importantly, these efforts pay off: about 89% of organizations reported measurable improvements in their security posture after implementing comprehensive awareness programs.

Finally, keep the human factor front and center. Even as we deploy cutting-edge AI tools and processes, remember that attackers ultimately aim to exploit human psychology, be it through fear, urgency, curiosity, or trust. By building a culture where employees are aware of these manipulation tactics and feel empowered to question odd situations, you create a human firewall that complements your technical firewalls. After all, a recent report found 99% of organizations had security incidents linked to avoidable human error. Reducing those errors through education and culture is one of the most powerful defenses available, AI or not.

Final Thoughts: Embracing AI with Vigilance

The AI era presents a classic double-edged sword for security. On one side, we face attackers who use AI to supercharge their scams, flooding inboxes with convincing phishing emails, mimicking voices and videos to impersonate trusted people, and probing for weaknesses at a scale unimaginable in the past. On the other side, we have unprecedented tools at our disposal to detect and deter these threats, from intelligent monitoring systems to AI-driven training programs. Organizations that succeed in this new landscape will be those that embrace AI’s advantages while staying vigilant about its pitfalls. This means investing in smart defenses and employee education, continuously updating strategies as threats evolve, and fostering a company-wide ethos of caution and verification.

For HR professionals, CISOs, and business leaders alike, the mission is clear: make security awareness an ongoing priority. In the AI era, security can no longer be seen as just an IT problem or a yearly checkbox. It’s a living, breathing part of organizational culture and risk management. By preparing your people, through knowledge, practice, and the right tools, you empower them to be the strongest link in the security chain rather than the weakest. The challenges are formidable, but not insurmountable. With a blend of human awareness and artificial intelligence on our side, we can navigate the threats of the AI age and continue to protect our enterprises in this rapidly changing digital world.

FAQ

What are AI-powered cyber threats?

AI-powered cyber threats use artificial intelligence to enhance attack methods like phishing, deepfakes, and polymorphic malware. These threats are faster, more convincing, and harder to detect than traditional cyberattacks.

How is AI used in cybersecurity defense?

AI is used in cybersecurity defense to monitor networks 24/7, detect anomalies, block phishing attempts, and identify deepfakes. It enables faster responses and predictive threat intelligence to stop attacks before they cause damage.

Why is traditional security awareness training no longer enough?

Traditional training, such as annual modules, cannot keep up with evolving AI-driven threats. Modern attacks require continuous, personalized, and interactive training to keep employees alert and prepared.

What is Just-in-Time (JIT) security training?

JIT security training delivers timely, bite-sized lessons when employees encounter risky behavior, such as clicking on suspicious links. This makes learning relevant and improves retention.

How can companies build a security-aware culture in the AI era?

Companies can build a security-aware culture by leading by example, encouraging reporting without fear, enforcing strict verification policies, and keeping employees informed about new threats.

References

  1. Chipeta C. "7 Deepfake Attacks Examples: Deepfake CEO Scams." Eftsure Blog. https://www.eftsure.com/blog/cyber-crime/these-7-deepfake-ceo-scams-prove-that-no-business-is-safe/
  2. Khalil M. "AI Cybersecurity Threats 2025: How to Survive the AI Arms Race." Deepstrike. https://deepstrike.io/blog/ai-cybersecurity-threats-2025
  3. Rashotte R. "3 Key Factors to Make Your Cybersecurity Training a Success." World Economic Forum. https://www.weforum.org/stories/2024/10/3-key-factors-to-make-your-cybersecurity-training-a-success/
  4. Abnormal Security. "Abnormal AI Reveals Gaps and Opportunities in Security Awareness Training Programs in New Report" (press release). Abnormal AI Newsroom. https://abnormal.ai/about/news/state-of-security-awareness-training
  5. Galletti S, Pani M. "How Ferrari Hit the Brakes on a Deepfake CEO." MIT Sloan Management Review. https://sloanreview.mit.edu/article/how-ferrari-hit-the-brakes-on-a-deepfake-ceo/
