
Apr 8, 2025
Many enterprise organizations are seeking to balance AI deployment with rapidly emerging global regulations.
For many organizations, adopting artificial intelligence (AI) is proving to be a difficult balancing act. The World Economic Forum (WEF) reports that while 66% of organizations expect AI to significantly impact cybersecurity within the next year, only 37% currently have processes in place to assess the security of AI use prior to deployment.
This mismatch is concerning as governments worldwide introduce more AI-related regulations and frameworks to address critical issues like user privacy, intellectual property protection, ethical use of AI, and national security. The WEF's Global Cybersecurity Outlook 2025 warns that the rapid adoption of technologies like AI, combined with stricter regulation, is creating a major compliance burden for organizations.
Zscaler ThreatLabz recently found that enterprises are blocking almost 60% of AI/ML transactions. This suggests that security concerns and the challenge of keeping pace with expanding regulations are driving CISOs to be overly restrictive with this traffic.
It falls to CISOs and their CXO colleagues to steer their organizations through these choppy waters. They must ensure AI use complies with a growing array of laws, not just for homegrown solutions but for third-party tools as well. At the same time, they must find new ways AI can empower their workforces.
Understanding AI cybersecurity regulations in 2025
Broader implications of AI use continue to emerge. Accordingly, regulatory bodies are shifting focus to include AI cybersecurity and accountability protections in their frameworks to mitigate risks like:
- AI-enabled data loss, including intellectual property entering the public domain
- Malicious data poisoning and adversarial prompts
- Algorithmic biases, ethics violations, and a lack of transparency
Concerns over these issues have driven lawmakers to pass regulations aimed at increasing AI transparency. This includes requirements to disclose the specifics of AI models in use, governance rules for AI deployments, and restrictions on certain uses of AI, such as in law enforcement.
The situation is further complicated by differences between jurisdictions and the need to align with non-AI-specific frameworks and regulations like GDPR, NIST, and CCPA across borders. These considerations are significant: according to IT services firm Capgemini, 77% of CISOs say AI compliance challenges delay cybersecurity innovation within their organizations.
Examining relevant global legislation and regulatory frameworks
As organizations strive to comply with AI data protections, a handful of the most stringent mandates are likely to dictate CISO priorities.
General Data Protection Regulation (GDPR)
Aimed at providing a high degree of privacy to EU citizens, the GDPR acknowledges that while some AI is trained only on anonymous data, certain applications like large language models (LLMs) may contain personal information and are therefore subject to its authority.
CNIL, France's data protection authority, has recommended that when organizations train AI models on personal data, the individuals concerned must be notified. European regulations also grant individuals the rights to "access, rectify, object and delete their personal data." Given the difficulty of knowing whether training data contains sensitive information, CNIL recommends that training data be anonymized wherever possible.
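To make that recommendation concrete, here is a minimal Python sketch of pattern-based redaction applied to text before it enters a training corpus. The patterns and the `redact` helper are illustrative assumptions, not part of any CNIL guidance; real anonymization typically requires entity recognition and formal privacy checks well beyond regexes.

```python
import re

# Minimal sketch: strip common PII patterns from text before it enters an
# AI training corpus. The patterns below are illustrative assumptions only;
# real anonymization needs entity recognition and formal privacy techniques.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched PII with typed placeholders such as [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(redact("Contact jane.doe@example.com or +33 1 23 45 67 89."))
# -> Contact [EMAIL] or [PHONE].
```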
EU Artificial Intelligence Act
The EU AI Act ranks AI applications according to the risk they present to EU citizens and the organizations that do business with them. These risk levels span from prohibited uses, such as predictive systems for criminal offenses, to minimal-risk categories like AI-enabled spam filters. Businesses engaging with EU citizens should understand where their AI applications fall along this risk spectrum.
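As a sketch of what that exercise might look like internally, the snippet below tags a hypothetical AI inventory with the Act's four risk tiers. The application names and tier assignments are invented for illustration; actual classification requires legal review against the Act's annexes.

```python
from enum import Enum

# Illustrative only: the four tiers follow the EU AI Act's risk structure,
# but the example applications and their assignments are hypothetical.
class AIActRiskTier(Enum):
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high risk (strict obligations)"
    LIMITED = "limited risk (transparency duties)"
    MINIMAL = "minimal risk"

ai_inventory = {
    "resume-screening-model": AIActRiskTier.HIGH,
    "customer-support-chatbot": AIActRiskTier.LIMITED,
    "email-spam-filter": AIActRiskTier.MINIMAL,
}

for app, tier in ai_inventory.items():
    print(f"{app}: {tier.name} ({tier.value})")
```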
California Consumer Privacy Act (CCPA)
In January 2025, California lawmakers updated the CCPA to state that AI-generated data can be treated as personal data. While California’s law is narrower in its application than the GDPR, it too states that AI capable of responding with personal information gives users the same rights over that data as if it were collected any other way.
NIST AI Risk Management Framework
Organizations in North America are strongly encouraged to review the National Institute of Standards and Technology (NIST) best practices for AI implementation and governance. Created in cooperation with the public and private sectors, this robust framework offers detailed guidance on risk identification and mitigation strategies for deploying AI tools.
While the standards are not legally binding, adhering to them can demonstrate a commitment to responsible use of AI. This, in turn, could insulate an organization from the most severe breach penalties.
How AI cybersecurity solutions can facilitate compliance
Even as most jurisdictions move to regulate AI usage, organizations can use certain capabilities of AI to support compliance.
These capabilities include:
- Real-time data monitoring: AI tools can track LLMs across an IT ecosystem, categorize data according to its sensitivity, and block prompts that violate organizational policies (see the sketch after this list).
- Automated consent management: AI tools simplify compliance workflows by automating adherence to data handling consent rules like those of the GDPR. They can also create audit trails that capture all users, prompts, responses, and apps involved.
- Bias detection in AI models: The “black box” nature of many LLMs can make detecting bias difficult. AI-driven bias detection tools can help pinpoint unfair classifiers in AI models.
- Risk prediction and mitigation: Powered by predictive analytics, these solutions identify potential compliance gaps in cybersecurity frameworks and anticipate threats based on existing controls.
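To illustrate the real-time monitoring idea, here is a minimal Python sketch that classifies an outbound LLM prompt and blocks it when it exceeds an allowed sensitivity level. The patterns, labels, and threshold are assumptions for illustration; a production DLP engine would use far richer detection.

```python
import re

# Minimal sketch of real-time prompt monitoring: classify an outbound LLM
# prompt and block it if it carries data above the allowed sensitivity.
# The rules and labels below are hypothetical, not a real DLP ruleset.
SENSITIVITY_RULES = [
    ("restricted", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),          # SSN-like
    ("confidential", re.compile(r"(?i)\b(internal only|trade secret)\b")),
]

def classify(prompt: str) -> str:
    for label, pattern in SENSITIVITY_RULES:
        if pattern.search(prompt):
            return label
    return "public"

def allow_prompt(prompt: str, max_allowed: str = "confidential") -> bool:
    order = ["public", "confidential", "restricted"]
    return order.index(classify(prompt)) <= order.index(max_allowed)

print(allow_prompt("Summarize our Q3 roadmap (internal only)."))   # True
print(allow_prompt("Check SSN 123-45-6789 against our records."))  # False
```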
How CISOs can strategically integrate AI-based regulatory solutions
Given the productivity, innovation, and cyber resilience benefits AI offers, taking the "sledgehammer approach" and denying all use of these tools is simply not feasible. Nor should the decision fall on the CISO alone, as it has significant implications for business operations. For instance, a company allowing 65% of AI/ML transactions may gain a competitive advantage over one that allows 60%. It's therefore critical for organizations' leaders to carefully consider AI governance and to implement and enforce security guardrails that protect against compliance violations.
Actionable steps for CISOs include:
- Adopting zero trust for AI systems: Many zero trust principles are directly applicable to generative AI. Least-privileged and role-based access are zero trust best practices that should extend to the AI apps your organization permits. Approved LLMs should be placed behind a single sign-on identity platform protected by multifactor authentication. Identities should be continuously verified and user behavior anomalies should result in restricted use or step-up authentication.
- Embedding AI in data governance: AI-enabled real-time data monitoring can be a powerful tool for governance. AI-assisted data discovery, classification, and loss prevention help ensure a robust defense against the misuse of LLMs, prevent leakage of intellectual property, and guard against compliance violations such as training AI models on user data without consent.
- Expanding incident response protocols: Integrate AI-driven compliance tools to streamline post-breach regulatory reporting, ensuring timelines like GDPR's 72-hour breach notification or the SEC's four-business-day disclosure rule are attainable (a minimal deadline sketch follows this list).
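As a minimal sketch of tracking those two deadlines, the snippet below computes a 72-hour GDPR notification deadline and a four-business-day SEC disclosure deadline from a given timestamp. The helper names are ours, and the business-day logic ignores holidays for simplicity.

```python
from datetime import datetime, timedelta

# Minimal sketch: GDPR allows 72 hours from awareness of a breach; the SEC
# rule allows four business days from the materiality determination.
def gdpr_deadline(aware_at: datetime) -> datetime:
    return aware_at + timedelta(hours=72)

def sec_deadline(determined_at: datetime, business_days: int = 4) -> datetime:
    deadline = determined_at
    added = 0
    while added < business_days:
        deadline += timedelta(days=1)
        if deadline.weekday() < 5:  # Mon-Fri; holidays ignored in this sketch
            added += 1
    return deadline

aware = datetime(2025, 4, 8, 9, 30)
print("GDPR notification due:", gdpr_deadline(aware))
print("SEC disclosure due:", sec_deadline(aware))
```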
The future of AI cybersecurity regulation
AI accountability frameworks are poised to expand worldwide, bringing new layers of complexity to the regulatory landscape. In the coming years, organizations should expect stricter requirements around algorithmic explainability and bias detection to address growing concerns about fairness and transparency.
Without the assistance of AI, trying to keep up with these laws across every jurisdiction in which today's enterprises operate would be a fool's errand. CISOs should begin investigating AI-enabled compliance solutions now, well before a regulatory issue arises.
While AI adoption is inevitable for most organizations, ensuring compliance with evolving AI regulations requires careful planning and strategic investment.