
TOP STORY
Apr 7, 2025
Cybersecurity expert Bruce Lee urges corporate directors to rethink oversight in an AI-driven threat landscape.
In front of an audience of board directors gathered by the National Association of Corporate Directors (NACD) Research Triangle chapter, cybersecurity expert and Zscaler CXO Advisor Bruce Lee delivered a simple message: “Cybersecurity isn’t just technical, it’s strategic. Your brand, your balance sheet, your board reputation: it’s all on the line.”

Zscaler VP Cybersecurity Advocacy Rob Sloan and CXO Advisor Bruce Lee share crucial AI and cybersecurity advice with corporate boards of directors
It is worth acknowledging the significant steps boards have taken in improving cyber risk oversight—establishing committees, improving cyber literacy, and integrating security into enterprise risk management. All these have contributed to leaps forward, not only in the board’s awareness, acknowledgment, and ownership of cybersecurity, but in the actual security of the companies themselves. That’s real progress.
But as we all know, risks are never static. Even the best-prepared companies find themselves facing new threats they hadn’t anticipated. One risk in particular is complicating efforts to protect data and systems: artificial intelligence (AI). AI is a double-edged sword. On one hand, it has the potential to supercharge organizations’ cyber defenses, but on the other, attackers have embraced AI far more quickly than defenders and gained the upper hand. Directors must respond.
During the discussion, Bruce captured five key takeaways that boards must consider and take action on.
Review the company’s cyber policy and ensure it is expanded to include AI risks
Bruce asked: “When was your cyber policy last reviewed? More importantly, does it say anything about AI?” This isn’t just a paperwork update; it’s about confronting the fact that generative AI has altered the threat landscape and can manipulate digital content with unnerving ease. Today’s cyber risks go beyond viruses and ransomware. Generative AI can synthesize convincing emails, voices, and even video. If your company’s policy doesn’t address AI-specific risks and controls, it’s already out of date.
Zero trust: ask the right questions
“Ask your CISO and CIO not just if they’ve heard of Zero Trust, but whether they’ve implemented it, and how far they’ve gone.” Bruce explained that traditional networks were designed to allow everything to talk to everything else, a design attackers exploit. The zero trust model assumes no implicit trust and requires that every request to access data or applications be screened to confirm the user has permission to connect. Directors need to understand this, because a modern zero trust architecture simplifies networks and reduces risks in ways that firewalls and VPNs cannot. It can also have a direct financial impact; companies that deploy zero trust may benefit from lower cyber insurance premiums and lower network infrastructure costs.
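The contrast Bruce drew, allow-by-default networks versus per-request screening, can be illustrated with a toy sketch (the names `AccessRequest` and `POLICY` are hypothetical illustrations, not any vendor's implementation):

```python
# Minimal sketch of the zero trust idea: every request is evaluated against
# policy, and the default answer is deny. Network location grants nothing.
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    user: str
    role: str
    device_compliant: bool
    resource: str

# Hypothetical policy: which roles may reach which applications.
POLICY = {
    "payroll-app": {"finance"},
    "source-repo": {"engineering"},
}

def screen(request: AccessRequest) -> bool:
    """Screen a single request; unknown resources or roles are denied."""
    allowed_roles = POLICY.get(request.resource, set())
    return request.device_compliant and request.role in allowed_roles
```

The point of the sketch is the default-deny posture: a finance user on a compliant device reaches the payroll application, while the same user on an unmanaged device, or any user outside the permitted role, is refused, regardless of where on the network the request originates.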
Scrutinize social engineering risk and insurance gaps
Bruce warned that socially engineered fraud, such as a well-crafted email that tricks employees into transferring funds (commonly known as Business Email Compromise, or BEC), remains one of the most common and costly threats. According to FBI data, BEC fraud caused losses of over $2.9 billion in the U.S. in 2023. He said: “The fraudsters aren’t just attacking the tech. They’re attacking your people.” In an era when a convincingly fake email or voice message can trigger a million-dollar transfer, boards need to ensure that process controls and insurance coverage are adequate, and tested. Directors, executives, and finance employees in particular must understand they are at a higher risk of being targeted and know how to report suspicious communications.
Don’t just test systems—test people
Cybersecurity testing often focuses on system vulnerabilities, but Bruce emphasized that attackers are increasingly targeting human behavior. “You can’t just test for viruses anymore. Test for humans being human.” He urged directors to ensure that cyber drills reflect these more subtle, manipulative attack vectors. If employees can’t spot a fake message generated by AI today, how will they fare in a year? Organization-wide phishing tests are a good starting point, but focus on giving additional training to higher-risk employees.
Practice a deepfake crisis—before it happens
Awareness is a good starting point for any risk, but, short of a real incident, gaps in preparedness can only truly be identified during a simulation. Bruce suggested a tabletop exercise involving the executive leadership team and board focused on a deepfake scenario. “Imagine your next board-level crisis scenario involves an AI-generated deepfake—of your CEO, announcing a merger, or making a controversial statement,” he said. Deepfakes can create chaos, manipulate markets, or tarnish reputations in minutes, and the technology is rapidly becoming available to anyone. Executives with online profiles that share video and audio, essentially every executive nowadays, are at risk of being targeted.
The importance of ongoing awareness
Ultimately, the exponential increase in data volumes and the ever-rising sophistication of attackers mean companies cannot afford to shun the latest technologies that can supercharge their defenses. The only way to fight AI is with AI. As Bruce put it: “The attackers are using AI. So should you.”
Bruce reminded directors that it isn’t realistic to expect them to become cyber or AI experts, nor is it necessary, but they do need to evolve their oversight and stay abreast of the latest threats and risks.
********
Zscaler is a proud partner of NACD’s Research Triangle and Northern California chapters. We are here as a resource for directors to answer questions about cybersecurity or AI risks, and are happy to arrange dedicated board briefings. Please email Rob Sloan (rsloan[@]zscaler.com), VP Cybersecurity Advocacy at Zscaler, to learn more.