Enabling the Secure Use of AI in Government
Artificial intelligence (AI) and machine learning (ML) tools hold the potential to transform governmental and public-sector organizations. Indeed, such technologies promise to reshape how public sector entities deliver services, drive innovation, and even address societal challenges at large. For many governmental organizations, the initial AI charge has centered on popular tools like ChatGPT, the most widely used AI application in the public sector.
At the same time, generative AI has become something of a double-edged sword: even as AI/ML tools enable rapid innovation and increase productivity, they also come with several key risks. For public-sector organizations, these risks typically fall into two categories: the internal challenges of preventing sensitive data leakage, securely enabling generative AI tools, and providing high-quality training data; and the external risks posed by threat actors and nation-state-backed groups who are leveraging AI tools to launch cyberattacks at unprecedented speed, sophistication, and scale.
To get a pulse on these opportunities and challenges with GenAI, Zscaler recently published the Zscaler ThreatLabz 2024 AI Security Report, which examined more than 18 billion AI/ML transactions across the Zscaler Zero Trust Exchange™, the world’s largest inline security cloud. The report details key AI usage trends among large organizations, provides best-practice guidance on securing AI adoption, and dives into the AI threat landscape, with real-world AI attack case studies.
AI tool usage is surging across the board
Across every sector, ThreatLabz saw AI/ML transactions grow by nearly 600% from April 2023 to January 2024. Here, ChatGPT led the charge, representing more than half of all AI/ML transactions over this period.
Paired with this surge in AI adoption, organizations are taking proactive measures to protect their critical data and secure the use of generative AI. Most often, enterprises take the critical first step of blocking risky transactions to particular AI applications and domains. Indeed, ThreatLabz observed a 577% rise in blocked transactions over nine months, indicative of growing data security concerns. Overall, ChatGPT was the most-blocked application, despite (or perhaps because of) its high visibility and deep market penetration.
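To make the blocking step concrete, the underlying policy logic can be as simple as matching outbound traffic against a deny list of AI application domains. The Python sketch below is purely illustrative, assuming a hypothetical deny list and a simple URL check rather than any real product’s policy engine:

```python
# A minimal sketch of deny-list filtering for AI application traffic.
# The domain list and matching logic are illustrative assumptions,
# not an actual security product's configuration.
from urllib.parse import urlparse

# Hypothetical deny list of AI application domains an organization might block.
BLOCKED_AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "bard.google.com",
}

def is_blocked(url: str) -> bool:
    """Return True if the request targets a blocked AI domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == domain or host.endswith("." + domain)
               for domain in BLOCKED_AI_DOMAINS)

if __name__ == "__main__":
    for url in ("https://chat.openai.com/chat", "https://example.gov/services"):
        print(("BLOCK" if is_blocked(url) else "ALLOW") + ": " + url)
```

In practice, controls like this are enforced inline and are often scoped per user group or paired with data loss prevention rules, rather than blocking all AI traffic outright.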
Key public sector AI trends
Amid this AI surge, government entities across the world are integrating AI and ML tools into their operations to improve service delivery and policymaking. Overall, applications like ChatGPT and Drift are seeing the highest adoption by government organizations. These applications, in particular, can help public entities engage more effectively with citizens and deliver more impactful programs. For instance, AI chatbots and virtual assistants can make it significantly easier for citizens from all walks of life to access essential information and services.
Yet, in some ways, government AI adoption reveals a security contradiction: despite being a top 10 driver of AI/ML transactions in the Zscaler cloud, this vertical blocks just 6.75% of AI transactions. In other words, the government sector has been significantly more permissive around AI tool use than other verticals. The global average for blocked AI/ML transactions is 18.5%, for instance, while the finance and insurance sector blocks a full 37.2% of transactions. It will be essential for governments worldwide to establish AI regulatory frameworks and governance mechanisms to help organizations navigate these challenges while enabling responsible AI development. Indeed, many policymakers across the globe are already taking concrete steps to address these concerns.
At the same time, the education sector has rapidly adopted AI as a valuable learning tool. Much like the government sector, however, educational institutions block a comparatively low 2.98% of AI/ML transactions. In both sectors, as AI adoption grows, data privacy concerns are likely to escalate, prompting organizations to implement more robust, technology-based data protection measures.
AI is driving public sector innovation… and new cyber threats
As public-sector organizations align around the transformative value of AI, new use cases abound: AI chatbots can provide citizens with faster, more intuitive access to critical services and information, particularly across sectors like public transportation, public health, and education. Meanwhile, AI-driven data analysis can help governmental employees, researchers, and policymakers alike make better data-driven decisions and glean faster insights from data at scale.
At the same time, cybercriminals and state-sponsored threat groups are leveraging AI-driven techniques to orchestrate highly sophisticated attacks at unprecedented speed and scale. As AI-driven threats like deepfakes and vishing attacks make international headlines, the proliferation of many kinds of AI-powered cyberthreats poses particular cybersecurity challenges for the public sector. These range from end-to-end phishing campaigns, to AI-driven reconnaissance and automated code exploitation, to polymorphic ransomware, and more. To avoid the substantial economic ramifications and supply chain risks of these attacks, public sector organizations must take proactive measures to safeguard critical infrastructure and sensitive information while enabling the safe use of AI.
Governments and enterprises alike are responding accordingly: policymakers worldwide are actively developing regulatory frameworks to govern the responsible and secure use of AI. In light of these trends, the U.S. Justice Department appointed its first Chief AI Officer. At the same time, AI is becoming a cornerstone of cybersecurity strategies and spending, empowering organizations to deliver greater protection and improve their ability to adapt to a dynamic threat landscape.
Enabling the secure use of AI in government
As public-sector organizations harness the transformative power of generative AI, it will be essential to prioritize the protection of critical infrastructure, sensitive data, citizens' information, and students' privacy. Here, institutions will need to follow established best practices to secure the use of generative AI with robust data protection measures. Meanwhile, the government sector must also work to leverage AI as part of cyber defense strategies, as nation-backed threat groups and cybercriminals use these same technologies to launch more targeted attacks at scale.
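As one example of such a data protection measure, organizations can screen prompts for sensitive patterns before they ever reach a generative AI tool. The sketch below is a minimal illustration, assuming a few US-centric regex patterns and a hypothetical scan_prompt helper; real deployments rely on full inline DLP inspection rather than simple pattern matching:

```python
# A minimal sketch of a pre-submission DLP check for generative AI prompts.
# The patterns and the scan_prompt helper are illustrative assumptions,
# not a real product's detection engine.
import re

# Illustrative patterns for common sensitive data types (US-centric examples).
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive data types detected in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = "Summarize this case: claimant SSN 123-45-6789, contact j.doe@agency.gov"
    findings = scan_prompt(prompt)
    if findings:
        print("Blocked: prompt contains " + ", ".join(findings))
    else:
        print("Prompt allowed")
```

A check like this would typically run at an inline inspection point, so that flagged prompts can be blocked, redacted, or routed for review before data leaves the organization.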
For the complete findings and best practices to securely embrace AI transformation, download the ThreatLabz 2024 AI Security Report today.