Generative AI: How Enterprises Can Mitigate AI-Powered Threats and Risks


Want to learn more about how to mitigate risks associated with generative AI? Join us for our on-demand webinar, AI vs. AI: Harnessing AI Defenses Against AI-Powered Risks.

We’re out of the frying pan and into the fire when it comes to the widespread adoption of generative AI tools. From IT to finance, marketing, engineering, and beyond, there’s a broad consensus that AI-powered tools are unleashing waves of productivity and efficiency across the business. The hype is real and, evidently, warranted: more than two-thirds of IT leaders are prioritizing generative AI tools for their business over the next year, while 80% of those who consider the tech “over-hyped” agree that it will help them better serve their customers, according to Salesforce research.

Yet, if the value is real, so are the risks. According to a Gartner survey of senior enterprise risk executives, the widespread availability of generative AI tools was the second most-reported risk for Q2 2023, appearing in the top 10 for the first time. 

The risks of AI are many, and for enterprises, they largely fall into two buckets: employee use of AI tools, and the threat of cybercriminals enabled by generative AI. Let’s take a look at the latter first. 

 

5 ways AI helps cybercriminals deploy known and unknown threats

Just as generative AI tools empower employees, they also enable cybercriminals to launch more potent cyberattacks at higher volumes. This, in all likelihood, is ushering in an era of greater unknown threats, where enterprises must defend against what they don’t know and, in many cases, can’t see. The Cloud Security Alliance (CSA) recently released the Security Implications of ChatGPT report, which analyzes in depth the risks that generative AI tools bring. Here are five ways the CSA identifies threat actors using AI to enhance their toolsets.

  1. Enumeration. AI tools make it significantly easier to gather information, such as open ports for HTTPS traffic or widely used applications, which can then be used to gain further insight into an organization’s security posture and potential vulnerabilities.
  2. Foothold assistance (or initial access). AI tools can help discover vulnerabilities and simplify or automate the process of exploiting them—making it easy to gain a foothold and further infiltrate the system. 
  3. Phishing. Natural language tools make it easy, particularly for non-native speakers, to generate legitimate-seeming and grammatically correct emails, texts, and messages at industrial scale. 
  4. Social engineering. AI tools allow attackers to simplify or automate the process of gathering comprehensive corporate and victim information—trawling the web, social media, corporate directories, and more, in what was once a painstaking process. 
  5. Polymorphic malware. Using generative AI, it’s become significantly easier to create malware that automatically adapts its code structure and ‘appearance’—its internal content or signature—to evade detection by traditional security measures and extend its dwell time inside the organization (see the short illustration after this list).
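To make the polymorphism point concrete, here is a minimal Python sketch (an illustration only, not drawn from the CSA report) of why static, signature-based detection struggles with this class of threat: two functionally equivalent payloads whose bytes differ even trivially produce completely different hashes, so a signature keyed on the first never matches the second.

    import hashlib

    # Two functionally equivalent payloads; the second only adds a comment and
    # renames a variable, the kind of trivial mutation a generator can automate.
    payload_v1 = b"data = read_secret(); send(data)"
    payload_v2 = b"# noop\nstolen = read_secret(); send(stolen)"

    sig_v1 = hashlib.sha256(payload_v1).hexdigest()
    sig_v2 = hashlib.sha256(payload_v2).hexdigest()

    # The two digests share nothing in common, so a blocklist keyed on sig_v1
    # misses payload_v2; catching both requires behavioral or AI-assisted analysis.
    print(sig_v1)
    print(sig_v2)
    print("signature match:", sig_v1 == sig_v2)  # False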

 

Understanding corporate ChatGPT risk

Next, there is the risk of AI tool use by employees. Zscaler ThreatLabz recently released new data around generative AI trends. Unsurprisingly, tools like ChatGPT and Drift dominate employee usage—but these transactions also undergo significant scrutiny, and traffic to AI/ML URLs is frequently blocked. 

Enterprises want to thread the needle in terms of enabling AI innovation for employees while mitigating the risk of data loss. Apart from the potential dangers of AI tools providing inaccurate or “hallucinated” information to users—such as fabricated code—or providing outputs based on biased AI models, companies are presented with two broad elements of risk:

1. The release of intellectual property and non-public information

Imagine three employee prompts to ChatGPT. In the first, a sales team member inputs the prompt, “Can you help me create sales trends based on the following Q2 pipeline data?” In the second, a PR specialist asks, “Can you take my M&A notes from this upcoming merger and turn them into a press release?” Finally, a backend developer requests, “Can you take my [source code for a new product] and optimize it?”

In all three cases, the employee has potentially leaked non-public information or IP to the AI tool, meaning that data is now outside the organization’s control and may be discoverable by anyone.
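As a hedged illustration of the guardrail this scenario calls for (the patterns and function names below are assumptions for illustration, not a description of any particular product), a simple pre-submission check could flag prompts that appear to contain non-public material before they ever reach an external AI tool:

    import re

    # Hypothetical markers of non-public material; a real deployment would rely on
    # centrally managed DLP dictionaries and classifiers, not a hard-coded list.
    SENSITIVE_PATTERNS = [
        r"\bM&A\b",
        r"\bmerger\b",
        r"\bQ[1-4] pipeline\b",
        r"\bdef \w+\(",        # pasted Python source code
        r"\bconfidential\b",
    ]

    def prompt_is_risky(prompt: str) -> bool:
        """Return True if the prompt appears to contain non-public information."""
        return any(re.search(p, prompt, re.IGNORECASE) for p in SENSITIVE_PATTERNS)

    prompt = ("Can you take my M&A notes from this upcoming merger "
              "and turn them into a press release?")
    print("blocked" if prompt_is_risky(prompt) else "allowed")  # blocked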

2. The data privacy and security risks of AI applications themselves

Not all AI applications are created equal. Terms and conditions vary widely among the hundreds of AI/ML applications in popular use. Will your queries be used to further train a large language model (LLM)? Will they be mined for advertising purposes? Will they be sold to third parties?

Similarly, the security posture of these applications varies widely, both in terms of how data is secured and the overall security maturity of the company behind them. Enterprises must be able to account for each of these factors, assigning risk scores across hundreds of AI/ML applications, to secure their use.
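One simple way to think about that risk scoring is sketched below. The attributes, weights, and thresholds are assumptions for illustration, not any vendor’s methodology: each application is scored on a handful of privacy and security attributes, and the weighted total maps to an allow, caution, or block decision.

    # Hypothetical attributes and weights for scoring an AI/ML application's risk.
    WEIGHTS = {
        "trains_on_user_data": 0.35,     # provider trains models on your queries
        "shares_with_third_parties": 0.25,
        "no_enterprise_tenant": 0.20,    # no dedicated enterprise data controls
        "no_recent_security_audit": 0.20,
    }

    def risk_score(app_attributes: dict) -> float:
        """Weighted risk score in [0, 1]; higher means riskier."""
        return sum(WEIGHTS[attr] for attr, flagged in app_attributes.items() if flagged)

    def policy(score: float) -> str:
        if score >= 0.6:
            return "block"
        if score >= 0.3:
            return "caution"   # allow, but coach the user or isolate the session
        return "allow"

    app = {
        "trains_on_user_data": True,
        "shares_with_third_parties": False,
        "no_enterprise_tenant": True,
        "no_recent_security_audit": False,
    }
    score = risk_score(app)
    print(round(score, 2), policy(score))  # 0.55 caution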

 

Harnessing AI for cyberthreat and data protection

In this AI and LLM-driven arms race, enterprises have many ways to harness AI to protect their data, applications, and users. And while cybersecurity vendors are seeking to drive hype around newly-spun AI tools, enterprise security teams should focus on grounded, practical ways that AI can advance their mission, particularly around solutions where AI/ML innovation has been a core competency for many years. 

 

3 AI best practices for cyberthreat protection

Between threats like AI-enabled advanced phishing campaigns—where new phishing sites are already generated and taken down in less than a day—and the rise of polymorphic malware, the risk of breaches and data theft grows. How can enterprises secure what they don’t know?

Here, enterprises should adopt a multi-layered defense that leverages AI-based defenses. We can map these defenses to the four stages of attack:

They find you. They compromise you. They move laterally. They steal your data.

For our purposes, we’ll focus on the latter stages. Here are three critical best practices enterprises can adopt to harness AI for cyber defense. 

  • Prevent compromise (and enable productivity) with AI-driven sandboxing. In the realm of unknown threats, sandbox technology is the tip of the spear for identifying malicious files like malware. Absent AI, however, existing tools have a key weakness: they largely accept patient-zero risk. That is, users are allowed to download files before those files are detonated in a sandbox, because traditional sandbox analysis can take nearly ten minutes, and holding every file that long makes full coverage an unsustainable hit to productivity.

    By leveraging AI, modern sandbox tools can detect potentially malicious files, even zero-day threats, almost instantly—allowing enterprises to place them inline rather than out-of-band, thus immediately quarantining and isolating suspicious files for full sandbox analysis. Such tools can even leverage browser isolation to safely stream the pixels of risky files to the user—enabling them to remain productive—while a verdict is returned. Meanwhile, the vast majority of files can be instantly and safely ruled as benign.
     
  • Eliminate lateral movement with AI-based app segmentation. Enabling granular user-to-application segmentation, driven by AI, should be a best practice that every enterprise seeks. Enterprise access policies tend to be overly permissive—we often see examples of tens of thousands of users with access to a finance app, for instance, where fewer than 100 people truly require access. Here, AI-driven segmentation can help enterprises quickly and easily identify the right application segments with intelligent access policies. The result is a dramatically reduced internal attack surface, preventing lateral movement for any attacker.
     
  • Prevent data loss with ML-driven data classification and DLP. The first step in preventing data theft is to understand and categorize your data. Yet robust data fingerprinting can be a significant challenge, with sensitive data spread across dozens or hundreds of applications and data stores. Here, ML-driven data classification can help enterprises discover and classify their most critical data, like personally identifiable information (PII), credit card numbers, intellectual property, and more. Then, data categories can be translated into data loss prevention (DLP) policy, so that sensitive data never leaves your organization—whether by attackers or by well-meaning employees using AI/ML tools (a simplified classification sketch follows this list).
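To ground the classification step in the last bullet, here is a minimal sketch of tagging text with DLP categories: a regex candidate match plus a Luhn checksum for credit card numbers, and a simple pattern for email-style PII. It is illustrative only; production classification relies on trained models and exact or partial data matching rather than a few regexes.

    import re

    def luhn_valid(number: str) -> bool:
        """Luhn checksum used to validate candidate credit card numbers."""
        digits = [int(d) for d in number][::-1]
        total = 0
        for i, d in enumerate(digits):
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")
    EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

    def classify(text: str) -> set:
        """Return the set of DLP categories found; categories map to DLP policy."""
        labels = set()
        for match in CARD_CANDIDATE.finditer(text):
            if luhn_valid(re.sub(r"[ -]", "", match.group())):
                labels.add("credit_card")
        if EMAIL.search(text):
            labels.add("pii_email")
        return labels

    print(sorted(classify("Card 4111 1111 1111 1111, contact jane.doe@example.com")))
    # ['credit_card', 'pii_email']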

 

5 key questions for securing ChatGPT in the enterprise

According to a Battery Ventures survey of enterprise tech buyers, 59% plan to deploy AI/LLMs in the next six months, compared to 32% six months ago. Enterprises must get ahead of the game to secure the use of AI tools—establishing both internal AI policies and precise controls over the employee usage of AI tools. This includes, for instance, granular access policies that determine which AI apps are allowed, by which departments, teams, and users. But that’s not all. A robust enterprise approach to securing tools like ChatGPT should take these five key questions into account. 

  1. Do I have deep visibility into employee AI app usage? Enterprises must have complete visibility into the AI/ML tools in use, as well as the corporate traffic and transactions flowing to these applications.
  2. Can I allow access to only certain AI apps? Can I specify access at the department, team, and user levels? Enterprises should use URL filtering to broadly block access to unsafe or unwanted AI/ML tools. Conversely, they should be able to allow access to specific applications—for the company at large, and even for only certain teams with particular use cases. Moreover, enterprises should think about the ability to allow ‘cautioned’ access—where users may use a specific tool but are coached on its risks and limitations (a simplified policy sketch follows this list).
  3. Which AI applications protect private data? How secure are these apps? There are hundreds of AI tools in everyday use. To appropriately configure access policies, enterprises must understand which of these applications will protect their data, and how secure the organizations running them are.
  4. Is DLP enabled to protect key data from being exfiltrated? Can I restrict risky actions like copy and paste? Enterprises may want to safely use ChatGPT—while not outright blocking it—by placing secure guardrails around its use. Here, it is critical to configure fine-grained DLP controls to prevent sensitive data from leaking. Note that, in many cases, this will require enterprises to inspect TLS/SSL traffic as sites like ChatGPT are accessed over HTTPS. 

    Enterprises may want to block risky actions like copy and paste—one of the easiest ways for well-intentioned users to accidentally leak sensitive data in a query. By placing user AI sessions in browser isolation, enterprises can allow user prompts while restricting uploads, downloads, and clipboard actions like copy/paste. See an example of browser isolation in action with ChatGPT here.
  5. Do I have appropriate logging of AI prompts and queries? Finally, enterprises will want to collect detailed logs that provide visibility into how their teams are using AI tools—including the prompts and data being used in tools like ChatGPT.
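The granular, per-group control described in question 2 can be expressed as a small ordered rule set. The sketch below is hypothetical: the application names, groups, and actions are assumptions rather than a product configuration, and the first matching rule decides whether a user’s request for an AI app is allowed, cautioned, isolated, or blocked.

    from dataclasses import dataclass

    @dataclass
    class Rule:
        app: str      # "*" matches any AI/ML application
        group: str    # "*" matches any department or team
        action: str   # allow | caution | isolate | block

    # Hypothetical policy: first matching rule wins, with a default block at the end.
    POLICY = [
        Rule(app="ChatGPT Enterprise", group="engineering", action="allow"),
        Rule(app="ChatGPT", group="marketing", action="caution"),
        Rule(app="ChatGPT", group="*", action="isolate"),
        Rule(app="*", group="*", action="block"),
    ]

    def evaluate(app: str, group: str) -> str:
        """Return the action for a user's request, based on the first matching rule."""
        for rule in POLICY:
            if rule.app in ("*", app) and rule.group in ("*", group):
                return rule.action
        return "block"

    print(evaluate("ChatGPT", "finance"))                 # isolate
    print(evaluate("ChatGPT Enterprise", "engineering"))  # allow
    print(evaluate("SomeUnvettedAITool", "sales"))        # block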

Want to learn more about securing generative AI tools while harnessing AI/ML innovation? See our page on Securing Generative AI.
