Zscaler Blog


Enabling the Secure use of AI in Government

ADAM FORD
April 29, 2024 - 5 min read

Artificial intelligence (AI) and machine learning (ML) tools hold the potential to transform governmental and public-sector organizations. Indeed, such technologies promise to reshape how public sector entities deliver services, drive innovation, and even address societal challenges at large. For many governmental organizations, the initial AI charge has centered around popular tools like ChatGPT, the AI application most heavily used by the public sector. 

 

At the same time, generative AI has become something of a double-edged sword: even as AI/ML tools enable rapid innovation and increase productivity, they also come with several key risks. For public-sector organizations, these risks typically fall into two categories: internal risks, such as leaking sensitive data, enabling generative AI tools insecurely, and relying on low-quality training data; and external risks posed by threat actors and nation-state-backed groups who are leveraging AI tools to launch cyberattacks at unprecedented speed, sophistication, and scale. 

To get a pulse on these opportunities and challenges with GenAI, Zscaler recently published the Zscaler ThreatLabz 2024 AI Security Report, which examined more than 18 billion AI/ML transactions across the Zscaler Zero Trust Exchange™, the world’s largest inline security cloud. The report details key AI usage trends among large organizations, provides best-practice guidance on securing AI adoption, and dives into the AI threat landscape, with real-world AI attack case studies.

AI tool usage is surging across the board 

Across every sector, ThreatLabz saw AI/ML transactions grow by nearly 600% from April 2023 to January 2024. Here, ChatGPT led the charge, representing more than half of all AI/ML transactions over this period. 

Paired with this surge in AI adoption, organizations are taking proactive measures to protect their critical data and secure the use of generative AI. Most often, enterprises take the critical first step of blocking risky transactions to particular AI applications and domains. Indeed, ThreatLabz observed a 577% rise in blocked transactions over nine months, indicative of growing data security concerns. Overall, ChatGPT was the most-blocked application — despite, or perhaps because of, its high level of visibility and deep market penetration. 
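To make the idea of domain-level blocking concrete, here is a minimal, generic sketch of how a policy engine might decide whether to allow or block a transaction to an AI application. This is an illustration only, not Zscaler's implementation; the deny-list contents and function name are hypothetical examples.

```python
# Illustrative sketch of a domain-based block policy for AI/ML traffic.
# The domain list below is a hypothetical example, not a recommendation.
from urllib.parse import urlparse

# Hypothetical deny-list of generative AI domains an organization might block
BLOCKED_AI_DOMAINS = {"chat.openai.com", "gemini.google.com"}

def policy_decision(url: str) -> str:
    """Return 'block' if the request targets a denied AI domain, else 'allow'."""
    host = urlparse(url).hostname or ""
    # Match the exact host or any subdomain of a blocked domain
    if any(host == d or host.endswith("." + d) for d in BLOCKED_AI_DOMAINS):
        return "block"
    return "allow"

print(policy_decision("https://chat.openai.com/c/abc"))  # block
print(policy_decision("https://example.com/page"))       # allow
```

In practice, inline security platforms apply far richer context (user, application, data classification) before blocking, but the core allow/block decision per destination follows this pattern.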

AI is driving public sector innovation… and new cyber threats 

As public-sector organizations align around the transformative value of AI, new use cases abound: AI chatbots can provide citizens with faster, more intuitive access to critical services and information, particularly across sectors like public transportation, public health, and education. Meanwhile, AI-driven data analysis can help governmental employees, researchers, and policymakers alike to make better data-driven decisions and glean faster insights from data at scale. 

 

At the same time, cybercriminals and state-sponsored threat groups are leveraging AI-driven techniques to orchestrate highly sophisticated attacks at an unprecedented speed and scale. As AI-driven threats like deepfakes and vishing attacks make international headlines, the proliferation of many kinds of AI-powered cyberthreats poses particular cybersecurity challenges for the public sector. These range from end-to-end phishing campaigns, to AI reconnaissance and automated code exploitation, to polymorphic ransomware, and more. To avoid the substantial economic ramifications and supply chain risks of these attacks, public-sector organizations must take proactive measures to safeguard critical infrastructure and sensitive information while enabling the safe usage of AI.

Governments and enterprises alike are responding accordingly: policymakers worldwide are actively developing regulatory frameworks to govern the responsible and secure use of AI. The U.S. Justice Department, meanwhile, appointed its first Chief AI Officer in light of these trends. At the same time, AI is becoming a cornerstone of cybersecurity strategies and spending, empowering organizations to deliver greater protection and improve their ability to adapt to a dynamic threat landscape.

Enabling the secure use of AI in government

As public-sector organizations harness the transformative power of generative AI, it will be essential to prioritize the protection of critical infrastructure, sensitive data, citizens' information, and students' privacy. Here, institutions will need to follow established best practices to secure the use of generative AI with robust data protection measures. Meanwhile, the government sector must also work to leverage AI as part of cyber defense strategies, as nation-backed threat groups and cybercriminals use these same technologies to launch more targeted attacks at scale. 

For the complete findings and best practices to securely embrace AI transformation, download the ThreatLabz 2024 AI Security Report today. 
