New AI Insights: Explore Key AI Trends and Risks in the ThreatLabz 2024 AI Security Report

Key ThreatLabz AI Findings

  • Explosive AI growth: Enterprise AI/ML transactions surged by 595% between April 2023 and January 2024.
  • Concurrent rise in blocked AI traffic: Even as enterprise AI usage accelerates, enterprises block 18.5% of all AI transactions, a 577% increase in blocked transactions that signals rising security concerns.
  • Primary industries driving AI traffic: Manufacturing accounts for 21% of all AI transactions in the Zscaler security cloud, followed by finance and insurance (20%) and services (17%).
  • Clear AI leaders: The most popular AI/ML applications for enterprises by transaction volume are ChatGPT, Drift, OpenAI, Writer, and LivePerson.
  • Global AI adoption: The top five countries generating the most enterprise AI transactions are the US, India, the UK, Australia, and Japan.
  • A new AI threat landscape: AI is empowering threat actors in unprecedented ways, including AI-driven phishing campaigns, deepfakes and social engineering attacks, polymorphic ransomware, enterprise attack surface discovery, exploit generation, and more.


Enterprise decision point: when to allow AI apps, when to block them, and how to mitigate ‘shadow AI’ risk 

One key theme of the report is that, to realize the full transformative potential of AI, enterprises must work to securely enable it: minimizing the risks of integrating and developing AI tools while devising strategies to prevent or curtail the spread of unapproved AI tools in the enterprise, a trend dubbed 'shadow AI'.

In general, enterprises can think about these risks as falling into three broad categories: 

  1. Protecting sensitive data: Generative AI tools can inadvertently leak sensitive and confidential information, making data protection measures crucial. In fact, sensitive information disclosure is number six on the Open Worldwide Application Security Project (OWASP) Top 10 for Large Language Model (LLM) Applications. Apart from adversarial threats like prompt injection attacks or malware, the biggest risks can stem from well-meaning users who inadvertently expose sensitive or proprietary data to LLMs. Enterprise users may do this unknowingly in numerous ways, such as an engineer asking a gen AI tool to optimize or refactor proprietary code, or a sales team member asking an AI to forecast future pipeline from historical sales figures.

    Enterprises should implement robust AI policy guidelines and technology-based data loss prevention (DLP) measures to prevent accidental data leaks and breaches. They should also gain deep visibility into AI app usage to prevent or mitigate shadow AI, with granular access controls that ensure users leverage only approved AI applications (see the illustrative sketch after this list).

  2. Data privacy and security risks of AI apps: Not all AI applications offer the same level of data privacy and security. Terms, conditions, and policies vary greatly, and enterprises should consider whether their data will, for example, be used to train language models, mined for advertising, or sold to third parties. Enterprises must assess and assign security risk scores to the AI applications they use, considering factors like data protection and the security practices of the companies behind them (the sketch after this list includes a toy scoring example).
  3. Data quality and poisoning concerns: The quality and scale of the data used to train AI applications directly impact the reliability of AI outputs. Enterprises should carefully evaluate data quality when selecting an AI solution and establish a strong security foundation to mitigate risks like data poisoning.
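
To make the first two categories concrete, here is a minimal, hypothetical sketch of a prompt-level DLP gate and a toy vendor risk score. The patterns, app hostnames, and scoring weights are illustrative assumptions, not Zscaler's implementation; real DLP engines and vendor risk assessments are far richer than a few regexes and weights.

```python
import re

# Hypothetical patterns a DLP layer might flag before a prompt leaves the enterprise.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

# Hypothetical allowlist: users may only reach vetted AI applications.
APPROVED_AI_APPS = {"chatgpt.example.com", "writer.example.com"}

def dlp_findings(prompt):
    """Return the names of sensitive-data patterns found in an outbound prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

def allow_ai_request(app_host, prompt):
    """Allow a request only to approved apps and only if no DLP findings."""
    if app_host not in APPROVED_AI_APPS:
        return False  # shadow AI: unapproved tool is blocked
    return not dlp_findings(prompt)

def app_risk_score(profile):
    """Toy weighted risk score for an AI vendor (higher = riskier)."""
    score = 0
    score += 40 if profile.get("trains_on_customer_data") else 0
    score += 30 if not profile.get("soc2_certified") else 0
    score += 30 if profile.get("shares_data_with_third_parties") else 0
    return score

print(allow_ai_request("chatgpt.example.com", "Refactor this sorting function"))  # True
print(allow_ai_request("chatgpt.example.com", "Use key sk-Abc123Def456Ghi789"))   # False
print(allow_ai_request("random-ai.example.net", "Summarize this memo"))           # False
print(app_risk_score({"trains_on_customer_data": True, "soc2_certified": True}))  # 40
```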


The new era of AI-driven threats

The risks of AI are bi-directional: from outside enterprise walls, businesses face a continuous wave of threats that now includes AI-driven attacks. Virtually every type of existing threat can be aided by AI, which translates to attacks launched with unprecedented speed, sophistication, and scale. Meanwhile, the future possibilities are limitless, meaning that enterprises face an unknown set of unknowns when it comes to AI-driven cyberattacks.


Still, clear attack patterns are emerging. In the 2024 AI Security Report, ThreatLabz provides insights into numerous evolving threat types, including:


  • AI impersonation: deepfakes, sophisticated social engineering attacks, misinformation, and more.
  • AI-generated phishing campaigns: end-to-end campaign generation, along with a ThreatLabz case study in creating a phishing login page using ChatGPT, in seven simple prompts.
  • AI-driven malware and ransomware: how threat actors are leveraging AI automation across numerous stages of the attack chain.
  • Using ChatGPT to generate vulnerability exploits: ThreatLabz shows how easy it is to create exploit PoCs, in this case for Log4j (CVE-2021-44228) and Apache HTTP Server path traversal (CVE-2021-41773).
  • Dark chatbots: diving into the proliferation of dark web GPT models, like FraudGPT and WormGPT, that lack security guardrails.
  • And much more…

Best practices for secure AI transformation and layered AI + zero trust cyber defense

The transformative power of AI is undeniable. To reap its enormous potential, enterprises must overcome the bi-directional set of risks that AI creates, namely:


  1. Securely enabling AI: protecting enterprise data while ushering in transformative productivity changes.
  2. Using AI to fight AI: harnessing the power of enterprise security data to drive AI threat prevention across the attack chain, deliver real-time security insights, and fast-track zero trust.


To that end, the Zscaler ThreatLabz 2024 AI Security Report offers key guidance, including:


  • How to securely enable ChatGPT: a best practice case study for securing generative AI tools, in five steps.
  • AI best practices and AI policy guidelines: AI frameworks and best practices that any enterprise can adopt.
  • How Zscaler uses AI to stop cyber threats: leveraging AI detections across each stage of the attack chain, with holistic visibility into enterprise cyber risk.
  • How Zscaler enables secure AI transformation: the key capabilities that enterprises require to securely embrace gen AI and ML tools (sketched in the example below), including:
      • Full visibility into AI tool usage
      • Granular access policy creation for AI
      • Granular data security for AI applications
      • Powerful controls with browser isolation
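
To make these four capabilities concrete, here is a minimal, hypothetical sketch of how a granular AI access policy might be expressed and evaluated. The app names, policy fields, and actions are illustrative assumptions, not Zscaler product configuration.

```python
from dataclasses import dataclass

# Illustrative policy actions: allow, block, or open the app in an isolated browser.
ALLOW, BLOCK, ISOLATE = "allow", "block", "isolate"

@dataclass
class AIAppPolicy:
    app: str              # AI application (full visibility: every app is cataloged)
    allowed_groups: set   # user groups permitted to use it (granular access policy)
    dlp_inspect: bool     # apply data security / DLP inspection to prompts
    fallback_action: str  # action for users outside allowed_groups

# Hypothetical per-app policies an enterprise might define.
POLICIES = {
    "ChatGPT": AIAppPolicy("ChatGPT", {"engineering", "marketing"}, True, ISOLATE),
    "UnvettedChatApp": AIAppPolicy("UnvettedChatApp", set(), False, BLOCK),
}

def evaluate(app, user_group):
    """Return the action to take for a user's request to an AI app."""
    policy = POLICIES.get(app)
    if policy is None:
        return ISOLATE  # unknown app: render in browser isolation (e.g., no uploads)
    if user_group in policy.allowed_groups:
        return ALLOW    # allowed, with DLP inspection if policy.dlp_inspect is set
    return policy.fallback_action

print(evaluate("ChatGPT", "engineering"))      # allow (with DLP inspection)
print(evaluate("ChatGPT", "finance"))          # isolate
print(evaluate("UnvettedChatApp", "finance"))  # block
```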


Of course, AI begins and ends with the power of data. To dive deeper, download your copy of the Zscaler ThreatLabz 2024 AI Security Report, or register for our live session with Zscaler CSO Deepen Desai, Navigating the AI Security Horizon: Insights from the Zscaler ThreatLabz 2024 AI Security Report.


Meanwhile, if you want more information on how Zscaler is harnessing the power of AI, register for our innovation launch, The First AI Data Security Platform.
