Overcoming the Five Headwinds to Microsegmentation

PETER SMITH
November 18, 2021 - 6 min read

In the past, corporate networks were flat, open places with unfettered access. Anything connected to the corporate LAN was assumed to be inherently good. Sure, there were firewalls between the “inside” and the “outside” of the network (the “perimeter”), but generally, once an attacker infiltrated one host on the network, they could move throughout the network without much interference. As time went on and networks grew more complex, architects began to realize that one perimeter boundary was not enough to secure traffic and systems.

Over the years, network architects have done a better job of “limiting the blast radius” of an attack by using segmentation. With traditional segmentation, barriers are created—normally using a firewall—to minimize the ability to jump from one area of the network to another. Policies are then put in place to block unnecessary traffic between subnets.
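
To make this concrete, the following is a minimal sketch, in Python, of the kind of coarse zone-to-zone decision a traditional segmentation firewall makes. The zone names and subnet ranges are hypothetical, purely for illustration:

```python
# Illustrative sketch of traditional, subnet-level segmentation.
# Zone names and address ranges below are hypothetical.
from ipaddress import ip_address, ip_network

ZONES = {
    "user_lan": ip_network("10.10.0.0/16"),
    "app_tier": ip_network("10.20.0.0/16"),
    "database": ip_network("10.30.0.0/16"),
}

# A handful of coarse rules govern traffic between whole zones.
ALLOWED_ZONE_FLOWS = {
    ("user_lan", "app_tier"),  # users may reach the application tier
    ("app_tier", "database"),  # the app tier may reach the database
}

def zone_of(ip: str):
    addr = ip_address(ip)
    return next((name for name, net in ZONES.items() if addr in net), None)

def is_allowed(src_ip: str, dst_ip: str) -> bool:
    """Permit traffic only between explicitly allowed zones."""
    return (zone_of(src_ip), zone_of(dst_ip)) in ALLOWED_ZONE_FLOWS

print(is_allowed("10.10.5.7", "10.20.1.1"))  # True: user LAN -> app tier
print(is_allowed("10.10.5.7", "10.30.1.1"))  # False: user LAN -> database is blocked
```

Note that everything inside a zone can still talk freely; the rules only constrain traffic that crosses a zone boundary.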

Microsegmentation, which has recently become a popular security buzzword, takes this practice one step further, isolating each workload on the network. The process applies fine-grained access controls in an attempt to allow only the network traffic required for the workload to function.
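
As a rough sketch of the difference in granularity (again in Python, with hypothetical workload names and ports), microsegmentation flips the default to deny and enumerates each flow a workload actually needs:

```python
# Illustrative sketch of per-workload microsegmentation policy.
# Workload names and ports are hypothetical.
ALLOWED_FLOWS = {
    ("web-frontend", "orders-api", 8443),
    ("orders-api", "orders-db", 5432),
    ("orders-api", "payments-api", 443),
}

def is_allowed(src_workload: str, dst_workload: str, dst_port: int) -> bool:
    """Default deny: only flows explicitly required by the application pass."""
    return (src_workload, dst_workload, dst_port) in ALLOWED_FLOWS

print(is_allowed("web-frontend", "orders-api", 8443))  # True: a required flow
print(is_allowed("web-frontend", "orders-db", 5432))   # False: the frontend never talks to the database directly
```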

Few organizations, though, are achieving the success they want with microsegmentation. Why? While great in theory, it runs into a number of practical problems:

  • Microsegmentation means translating from application speak to network speak. Networks are complex places that have grown up over years, with bits bolted on here and there. Consequently, it’s difficult to know exactly what traffic can be blocked without impacting the business. Creating policies that allow only the traffic needed requires knowing a great deal about how the application works and how that translates into network policies (a simplified sketch of this translation follows this list). This involves huge amounts of back and forth between application developers and the IT team. Unless you have a really well-oiled, truly integrated DevOps organization, this can be really tough, if not impossible.
     
  • Microsegmentation can be a policy management nightmare. Application interactions can be highly complex with many interdependencies. Some microsegmentation solutions add a layer of abstraction, describing policies in terms of the application and then providing the translation to the network layer. The net result, though, is still thousands of policies. Validating that the resulting controls are correct is beyond the capacity of all but the largest organizations.
     
  • Prioritizing how to migrate applications to microsegmentation is a major challenge. Networking teams can’t just decide one day to implement microsegmentation in a “big bang” way. Organizations must prioritize vulnerable applications that would have the biggest impact on the business if they were to be compromised. The problem is that practitioners have limited data to understand the current state of risk, and are therefore unable to prioritize deployments based on concrete risk assessments. Without a risk-aligned plan, most organizations will opt for the status quo, and the project stalls.
     
  • Understanding the risks and benefits of microsegmentation is tough. Before microsegmentation is deployed, organizations need to be able to convince peers that risk will, indeed, be reduced. This includes weighing the security risk against the risk of breaking the application. Again, most practitioners struggle to accurately measure the operational risk of deploying these complex policies. As before, without the ability to articulate the risks and benefits, the status quo wins.
     
  • Software development organizations are leery of microsegmentation. By and large, most software developers view anything that smacks of strict policy controls as an impediment to their ability to introduce new features. Also, anything that might impact velocity—and in turn, business agility—is a non-starter for most CIOs.
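
The translation burden described in the first point above is easier to appreciate with a small sketch. Assuming hypothetical service names, instance addresses, and a single port, one line of application intent already fans out into many address-based rules, each of which must be revisited whenever instances are added, moved, or retired:

```python
# Illustrative sketch: translating one application-level dependency into
# address-based firewall rules. Names, IPs, and the port are hypothetical.

# What the application team knows: one logical dependency.
app_dependency = {"src": "orders-api", "dst": "payments-api", "port": 443}

# What the network team has to work with: the instances behind each service,
# which change as workloads scale, move, or get redeployed.
instances = {
    "orders-api": ["10.20.4.11", "10.20.4.12", "10.20.4.13"],
    "payments-api": ["10.20.9.21", "10.20.9.22"],
}

# One logical dependency becomes a rule for every instance pair.
firewall_rules = [
    (src_ip, dst_ip, app_dependency["port"])
    for src_ip in instances[app_dependency["src"]]
    for dst_ip in instances[app_dependency["dst"]]
]

print(len(firewall_rules))  # 6 address-based rules for a single dependency
```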

Microsegmentation evolved out of a need to stop the progression of network-borne attacks inside the cloud and data center, but still falls short of allowing IT operations and security teams to address today’s networking problems. Policy complexity and the inability to prioritize protection based on exposure risk can introduce management headaches. Even for organizations with sufficient resources to reduce friction, the benefits of deploying microsegmentation are difficult to justify against the age-old argument for business agility and velocity. Risk reduction, especially in an organization that hasn’t yet experienced a major security event, is hard to quantify. Needless to say, an alternative is necessary.

Eliminating the challenges of microsegmentation to improve security

After learning what microsegmentation is and the limitations and challenges that accompany it, it’s easy to see why many microsegmentation initiatives never get off the ground. Between the operational overhead and the organizational resistance created by its complex deployment process, many organizations cannot prove or justify the ROI needed to take on the migration.

If you cannot rely on the supposed “latest and greatest” networking technology, though, you’re probably wondering what comes next—how to improve network policies so that applications communicating on your network are known-secure, and attack progression in your cloud and data center is stopped. Zscaler has created an entirely new class of security control called Zscaler Workload Segmentation to accomplish just that.

Zscaler Workload Segmentation abandons the address-centric network model used by microsegmentation and is instead based on the secure identity of the communicating application software. As a result, organizations are able to dramatically reduce complexity and improve security. The advantages of trusted application networking include:

  • Superior security and visibility with application-centric controls. First, Zscaler provides visibility into how applications communicate by automatically mapping the application topology. Then, Zscaler uses application-level language to define and enforce policies based on application components rather than IP addresses, protocols, and other underlying infrastructure elements. This allows the machine learning-generated policy recommendations to be validated by application developers, bridging the gap between application-speak and network-speak.
     
  • Simpler policy management with automatic policy recommendations. Zscaler machine learning examines application communications to summarize 99.99 percent of network activity and devise a set of policies that can reduce 98 percent of the network attack surface. Those policies are built in language that security, operations, and applications teams can understand, making it easier for those teams to validate or modify the policies as circumstances dictate.
     
  • More clarity on the risks and benefits with fewer controls to comprehend. Zscaler’s use of application-level language results in a policy set that is orders of magnitude smaller than the comparable address-based policies on a firewall. Thus, policies are more easily understandable, and able to evolve with application needs.
     
  • Better prioritization of security initiatives based on measurable risk metrics. Zscaler Workload Segmentation analyzes and visualizes application activities and defines recommended defensive controls. In addition, Zscaler measures the potential impact of applying controls on risk exposure and quantifies a confidence level of applying those controls. This results in measurable metrics that can be communicated to executives to help them understand the business benefit and risk of security controls.
     
  • Adapts to DevOps and cloud environments with flexible and intelligent policy. Policy is enforced based upon dozens of different attributes of each application. The Zscaler machine learning engine selects a subset of stable attributes to uniquely identify software in your environment. Also, as software changes, Zscaler adapts; if an application undergoes change, ZWS will automatically recognize it and continue to enforce controls (a simplified sketch of this identity-based idea follows this list). This allows you to enforce fine-grained access controls while maintaining the agility to deploy rapid changes to your critical business infrastructure.
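
The identity-based idea referenced in the last point can be sketched roughly as follows. This is an illustration of the general concept, not Zscaler’s actual mechanism; the attributes, fingerprinting scheme, and names are hypothetical:

```python
# Illustrative sketch of identity-based workload policy. The attribute names,
# fingerprinting approach, and values are hypothetical, not Zscaler's implementation.
import hashlib

def software_identity(attrs: dict) -> str:
    """Derive a fingerprint from stable attributes of the software itself."""
    canonical = "|".join(f"{key}={attrs[key]}" for key in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()

orders_api = software_identity(
    {"service": "orders-api", "publisher": "ExampleCorp", "binary_hash": "ab12..."}
)
payments_api = software_identity(
    {"service": "payments-api", "publisher": "ExampleCorp", "binary_hash": "cd34..."}
)

# Policy is expressed between verified identities, not IP addresses.
ALLOWED = {(orders_api, payments_api)}

def is_allowed(src_identity: str, dst_identity: str) -> bool:
    return (src_identity, dst_identity) in ALLOWED

# The rule keeps working when either workload is redeployed on new addresses,
# while software that does not match the expected fingerprint is denied.
print(is_allowed(orders_api, payments_api))                                         # True
print(is_allowed(software_identity({"service": "unknown-binary"}), payments_api))   # False
```

Because the rule is keyed to the verified identity rather than the address, it survives redeployments and IP changes without being rewritten.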

Moreover, Zscaler does not require any changes to applications or infrastructure, and won’t have any impact on network or operations monitoring tools. This results in smoother collaboration between the IT organization and the business.

Zscaler Workload Segmentation is a simpler, more streamlined approach to ensuring that only trusted applications are allowed to communicate over approved network paths. The organizational barriers that exist with microsegmentation are absent with Zscaler Workload Segmentation, and managing policies becomes a breeze. Finally, Zscaler provides demonstrable risk metrics that enable operations, security teams, and the business to collaborate more effectively and prioritize security controls where they are most needed, increasing security and aiding compliance efforts.
