<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/">
    <channel>
        <title>Products &amp; Solutions | Blog</title>
        <link>https://www.zscaler.com/de/blogs/feeds/product-insights</link>
        <description>View for blog content type.</description>
        <lastBuildDate>Sat, 04 Apr 2026 02:00:38 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>RSS 2.0, JSON Feed 1.0, and Atom 1.0 generator for Node.js</generator>
        <language>de</language>
        <item>
            <title><![CDATA[Public Sector Summit 2026: Key Takeaways for Forging a Cyber Strong Nation]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/public-sector-summit-2026-key-takeaways-forging-cyber-strong-nation</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/public-sector-summit-2026-key-takeaways-forging-cyber-strong-nation</guid>
            <pubDate>Thu, 02 Apr 2026 23:46:12 GMT</pubDate>
            <description><![CDATA[Thank you to everyone who joined us for the 2026 Public Sector Summit. This year’s conversations were grounded in a shared mission: forging a cyber strong nation. That mission directly aligns with the recently released 2026 National Cyber Strategy, which calls for accelerating zero-trust architecture, cloud transition, and AI-powered defenses across federal networks, reinforcing the very priorities our speakers and attendees focused on throughout the summit. It is about protecting critical services, enabling innovation that improves citizen outcomes, and modernizing security in ways that make our agencies and institutions more resilient, not more burdened.

Below is a high-level wrap-up of the most consistent takeaways I heard from our speakers, along with practical actions you can apply as you plan what comes next.

1) A cyber strong nation starts with Zero Trust for every entity

The keynote reinforced a reality public sector leaders live every day: the mission depends on access, but security depends on control. The path forward is expanding Zero Trust beyond users to all entities that access applications, including users, cloud workloads, IoT and OT devices, and the next wave of AI agents.

That is a critical shift for forging a cyber strong nation, because national resilience is compromised when users or agents are "on the network" and can move laterally to discover sensitive assets. The right entity must have the right access at the right time, with continuous verification. When access is policy-based and identity-based, organizations can reduce exposure without slowing the workforce.

Practical takeaway: Treat “never put users or agents on the network” as a strategic principle. 
Build access around applications and identity, not IP ranges and implicit trust.

2) Modernize branches to stop lateral movement and protect services where they are delivered

Branches and field sites are where public sector services meet the real world: hospitals, clinics, schools, transportation hubs, regional offices, factories, classified sites, and mobile operations. Multiple sessions highlighted the same risk: a branch compromise can quickly turn into lateral movement and broad disruption, especially in flat networks built on legacy architectures.

The Zero Trust Branch model reframes the site as an island, similar to an internet cafe approach, where connectivity is granted through policy rather than through network adjacency. By moving traffic through policy enforcement and adding agentless internal segmentation for east-west communications, organizations can make sites “dark,” reduce exposed attack surface, and limit blast radius during incidents.

This is exactly what forging a cyber strong nation looks like in practice: securing the places where constituents receive services, and where OT and IoT systems increasingly intersect with mission operations.

Practical takeaway: Use branch modernization as a dual lever for security and cost reduction. Simplify architectures, reduce appliance sprawl, and make segmentation policy-driven instead of VLAN (Virtual Local Area Network) and ACL (Access Control List) driven.

3) Cloud resilience and secure modernization require avoiding “lift and shift” security

As government and public sector organizations expand cloud and hybrid adoption, the summit message was direct: do not rebuild old perimeters in new places. 
Extending networks into cloud or recreating north-south and east-west firewall patterns increases complexity and often fails to deliver the speed the mission requires.

Instead, speakers emphasized applying Zero Trust to cloud workloads, shifting from IP-based rules to identity- and tag-based segmentation, and enabling direct-to-app access patterns that keep pace as cloud environments evolve. This approach supports faster onboarding and reduces chokepoints, while improving security posture.

Forging a cyber strong nation means modernizing without adding brittleness. Cloud adoption is part of that, but so is building continuity and resilience as more traffic flows through centralized security platforms.

Practical takeaway: If your cloud security still relies on legacy approaches like virtual firewalls and network-based trust, you will keep paying a complexity tax. Move toward identity- and policy-driven segmentation that can evolve at cloud speed.

4) Transformation succeeds when culture and leadership match the technology

A theme that resonated strongly across customer stories was that the hardest part of modernization is often not technical; it is human.

Lockheed Martin spoke about a long-horizon transformation effort focused on redesigning processes and building a digital thread - connecting systems and data end to end so work can be traced across the lifecycle, from requirements and engineering through production and sustainment. A key lesson was that resistance is frequently about changing how people work, not about the tools themselves. The Centers for Medicare & Medicaid Services (CMS) echoed this point from the perspective of operating at national scale, emphasizing empathy, partnership, and workflow redesign, especially for technical teams used to designing traditional network architectures.

CMS also shared concrete execution detail, including implementing thousands of microsegments to peel back access layers and remove unnecessary reach. 
This is the operational heart of forging a cyber strong nation: reducing risk one policy decision at a time, while keeping access stable for high-volume, high-impact services.

Practical takeaway: Build an adoption plan the way you build an architecture plan. Expect friction, engage early, and tie Zero Trust to mission outcomes rather than “another security tool.”

5) AI is accelerating innovation, and expanding the attack surface

AI was central to the summit because it is central to the future of public sector outcomes. We heard how government is moving from pilots to scaling by focusing on repeatable patterns and building toward standardized “AI factories” over time. We also heard how quickly shadow AI and tool sprawl are growing, and how difficult it is to govern usage when business teams move faster than policy and security processes.

Speakers consistently framed AI security in three practical buckets that align well to forging a cyber strong nation:

- Visibility and inventory: discover AI apps and embedded AI usage across users, endpoints, and cloud services.
- Secure access: sanction and enable approved AI platforms, restrict risky behaviors, and block what should not be used.
- Guardrails and lifecycle security: secure AI apps and infrastructure with runtime protection and continuous red teaming to defend against malicious behavior like prompt injection.

A major forward-looking point was the arrival of agentic AI. As agents proliferate, they become both productivity accelerators and a new weak link. Securing agent identities, authorization, and agent-to-agent communication will be essential to preventing high-speed, high-impact misuse.

Practical takeaway: Start with AI visibility, then apply Zero Trust as the foundation. 
Move quickly toward guardrails and continuous testing so innovation can scale safely.

6) Threats are faster, more automated, and still deeply human

Threat intelligence sessions underscored how adversaries are chaining techniques across discovery, phishing and voice-based social engineering, malware staging, lateral movement, and exfiltration through legitimate services. AI is helping attackers speed up reconnaissance, craft more convincing lures, and scale campaigns.

At the same time, several speakers reminded us that many of the most effective attacks still exploit human behavior. Email remains the top vector, and deepfake-enabled fraud is a growing reality. Forging a cyber strong nation requires both technical control and operational readiness, including the ability to respond under pressure when adversaries time incidents for maximum disruption.

Practical takeaway: Align defenses to the attacker’s path: reduce attack surface, prevent compromise, stop lateral movement, and prevent data theft with strong controls across web, email, endpoints, and cloud.

7) SecOps needs context and closed-loop enforcement

A recurring operational pain point was tool sprawl and alert overload. The summit highlighted the importance of modernizing traditional SecOps by connecting signals into context, prioritizing what truly creates risk, and then using Zero Trust controls for precise response. When detection and enforcement are linked, response becomes faster and blast radius becomes smaller.

Deception was also highlighted as a high-fidelity signal, because interaction with realistic decoys is rarely legitimate. In complex environments, deception can help defenders detect earlier, reduce noise, and disrupt attackers before production systems are impacted.

Forging a cyber strong nation is not just about preventing incidents. 
It is about ensuring public sector organizations can detect quickly, contain precisely, and recover confidently.

Practical takeaway: Invest in approaches that reduce “chair swivel” and turn intelligence into action, including the ability to tighten access rapidly when threat conditions change.

Closing: What forging a cyber strong nation looks like next

If there is one takeaway I would leave you with, it is that forging a cyber strong nation is not a single program or product. It is a sustained commitment to modernize security around mission outcomes, resilient operations, and responsible innovation.

A few actions you can take now:

- Reduce attack surface by hiding apps that require authentication behind Zero Trust.
- Do not put users, devices, workloads, or agents “on the network.”
- Treat branches and sites as islands to prevent lateral movement.
- Segment mission-critical applications and protect crown jewels with least-privilege access.
- Build AI governance starting with visibility, then enforce secure access and add guardrails.
- Modernize SecOps with better context and faster response by correlating key signals into incidents, reducing alert noise, and connecting detections to enforcement so you can contain threats quickly.
- Plan for resilience as more activity centralizes through security platforms.

Thanks again for joining us at the Public Sector Summit. We are offering the recorded sessions on demand and hope they help you bring the ideas back to your teams and turn them into measurable progress as we keep forging a cyber strong nation together.]]></description>
            <dc:creator>Sanjit Ganguli (Vice President, Product Strategy)</dc:creator>
        </item>
        <item>
            <title><![CDATA[This Wasn’t a Hack: What the Claude Mythos Leak Teaches About SaaS Misconfigurations]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/wasn-t-hack-what-claude-mythos-leak-teaches-about-saas-misconfigurations</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/wasn-t-hack-what-claude-mythos-leak-teaches-about-saas-misconfigurations</guid>
            <pubDate>Thu, 02 Apr 2026 17:00:09 GMT</pubDate>
            <description><![CDATA[Summary

In March 2026, reports emerged that Anthropic had inadvertently exposed thousands of unpublished internal assets—including documents related to its next-generation AI model, Claude Mythos—due to a simple CMS misconfiguration.

There was no exploit, no sophisticated attacker. Just a default setting left unchanged.

Incidents like this highlight a broader reality: in modern SaaS environments, exposure is far more often caused by misconfiguration than by intrusion.

This isn’t an Anthropic problem—it’s an enterprise reality

This isn’t an isolated failure. It’s a systemic issue across SaaS environments. Today’s enterprises rely on dozens—often hundreds—of SaaS applications:

- Microsoft 365, Google Workspace
- Confluence, Jira
- GitHub, Salesforce
- Slack, Box, Dropbox, and so on

Each introduces:

- Complex and evolving sharing models
- Third-party integrations with varying permissions
- Constant configuration changes across teams

Misconfigurations aren’t edge cases—they’re inevitable byproducts of how SaaS works:

- Collaboration features favor accessibility over restriction
- Default settings are often permissive
- Changes happen continuously without centralized visibility

It’s no surprise that the majority of cloud security incidents trace back to configuration issues and overexposed access.

What likely went wrong

Based on publicly available reporting, the incident appears to stem from a combination of common SaaS security gaps rather than a sophisticated attack. The exposure suggests potential issues such as:

- Default-open or overly permissive access settings
- Limited visibility into sharing configurations
- Lack of continuous monitoring for configuration changes
- Insufficient controls around exposure of sensitive content

While the exact internal conditions may vary, these patterns are widely observed across SaaS environments and are consistent with how similar incidents occur.

This is precisely the category of risk that SaaS Security Posture Management (SSPM) 
is designed to address—by continuously identifying and remediating misconfigurations before they lead to exposure.

How Zscaler SSPM could have prevented the Claude Mythos leak

Zscaler Advanced SSPM goes beyond generic posture checks. It applies granular, platform-specific controls and correlates them with context. Here’s how Zscaler SSPM is designed to identify and prevent this type of exposure:

1. Detecting public and anonymous access (core root cause)

Zscaler SSPM provides a comprehensive set of controls focused on detecting and preventing overexposure of data across SaaS platforms. These controls continuously monitor for risky configurations such as public links, unrestricted sharing settings, and excessive external access across applications like Confluence, Microsoft 365, and Google Workspace.

By identifying scenarios where content is broadly accessible—whether through anonymous links or overly permissive sharing—Zscaler SSPM acts to ensure that sensitive data is not unintentionally exposed. In this case, a CMS configured with “public-by-default” access would be immediately flagged as a high-risk misconfiguration.

2. Enforcing external sharing restrictions

Zscaler SSPM includes controls designed to govern how data is shared beyond the organization, ensuring that external access is tightly managed across SaaS platforms. These controls continuously evaluate:

- Exposure of internal assets to external users
- Permissions granted to guests and collaborators
- Unintended external sharing of sensitive content

By enforcing least-privilege access and identifying overexposed resources, Zscaler SSPM helps prevent internal data from being inadvertently shared outside the organization. In this scenario, any Mythos-related documents accessible to external users would be immediately flagged as high-risk.

3. 
Monitoring third-party and integration risk

Modern SaaS environments rely heavily on interconnected applications and integrations, which often introduce hidden risk. Zscaler SSPM provides deep visibility into the third-party ecosystem, continuously identifying integrations with excessive permissions, unused access, or elevated risk profiles. This ensures that external apps connected to core platforms do not become unintended pathways to sensitive data.

If the CMS or content workflow involved third-party tools, any overprivileged or risky access would be quickly identified and addressed.

4. Detecting configuration drift in real time

SaaS risk is not static—configurations change constantly as users interact with applications. Zscaler SSPM continuously monitors for changes in configurations and detects deviations from secure baselines. This allows security teams to identify new exposures as they occur, rather than discovering them after the fact.

If sensitive content was uploaded and left publicly accessible, Zscaler SSPM would detect this drift immediately.

5. Context-aware risk correlation (the differentiator)

Most security tools generate isolated alerts, making it difficult to understand true risk. Zscaler SSPM correlates signals across:

- Misconfigurations
- Sensitive data exposure
- User access
- Third-party integrations

This provides a unified view of risk, enabling security teams to focus on what truly matters. Instead of isolated findings, teams see actionable insights like: “Sensitive AI content + public access + external exposure = critical risk.”

6. Risk-based prioritization and fast remediation

Not all risks carry the same impact, and not all require the same effort to fix. Zscaler SSPM prioritizes findings based on business impact and remediation complexity, while providing guided or automated remediation options. 
This ensures that the most critical issues are addressed first and resolved quickly. High-risk exposures—such as publicly accessible AI assets—surface and are remediated in minutes, not weeks.

The bottom line for security leaders

The Claude Mythos incident wasn’t a sophisticated breach. It was a preventable misconfiguration that went unnoticed.

Zscaler SSPM targets this risk by:

- Continuously monitoring SaaS configurations
- Detecting drift in real time
- Correlating risk across data, users, and apps
- Enabling rapid remediation

Because in modern SaaS environments: you don’t get breached because someone broke in. You get breached because something was left open.

Final thought

You shouldn’t need:

- A security researcher
- A journalist
- Or a public incident

…to discover your SaaS exposure. Your security platform should find it first.

This blog post has been created by Zscaler for informational purposes only and is provided "as is" without any guarantees of accuracy, completeness or reliability. Zscaler assumes no responsibility for any errors or omissions or for any actions taken based on the information provided. Any third-party websites or resources linked in this blog post are provided for convenience only, and Zscaler is not responsible for their content or practices. All content is subject to change without notice. By accessing this blog, you agree to these terms and acknowledge your sole responsibility to verify and use the information as appropriate for your needs.]]></description>
            <dc:creator>Niharika Sharma (Staff Product Manager - CASB PM)</dc:creator>
        </item>
        <item>
            <title><![CDATA[What New Zealand’s New Cyber Security Strategy Means for Organisations]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/what-new-zealand-s-new-cyber-security-strategy-means-organisations</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/what-new-zealand-s-new-cyber-security-strategy-means-organisations</guid>
            <pubDate>Wed, 01 Apr 2026 05:29:04 GMT</pubDate>
            <description><![CDATA[The New Zealand Government recently released its Cyber Security Strategy 2026-2030, a refreshingly concise document at just 15 pages, accompanied by a one-page action plan for 2026-27. For organisations operating in New Zealand - particularly those delivering essential services - the strategy offers valuable insights into future policy, regulatory expectations, and cybersecurity best practices.

A Clear Focus on Critical Infrastructure Protection

One of the most significant signals in the strategy is the government’s intention to develop a regulatory regime to strengthen the protection of critical infrastructure. New Zealand appears to be closely observing international approaches, including Australia’s Security of Critical Infrastructure Act 2018 and its subsequent amendments. As part of the action plan, the Government, led by the Department of the Prime Minister and Cabinet, has committed to developing any regulations through public consultation. This is already moving beyond strategy into action, with a public consultation underway on the proposed regulatory framework.

This marks a shift from New Zealand’s traditionally light-touch approach toward a more structured model, with the potential for clearer requirements on how critical infrastructure operators manage cyber risk. For organisations across sectors such as telecommunications, finance, energy, and transport - and their technology partners - the direction is clear: cyber resilience is becoming an operational and regulatory expectation.

Preparing for this shift means organisations must strengthen visibility, access control, and risk management across cloud-first and distributed environments, which are increasingly central to how critical services are delivered.

Strengthening Public–Private Cyber Collaboration

The strategy strengthens the role of New Zealand’s National Cyber Security Centre (NCSC) in coordinating with industry. 
A key element of this is enabling the NCSC to share more information with industry partners to improve prevention, detection, and response to malicious cyber activity. In addition, the NCSC will establish a single national reporting channel for cyber incidents, making it easier for organisations and individuals to report cyber events and receive support.

For organisations, this represents an opportunity to engage more closely with national cyber authorities, participate in information sharing, and strengthen collective defences across sectors.

Raising the Security Bar Across Government

The strategy places a strong emphasis on secure digital government, calling for higher and more consistent security standards in government digital procurement and system design, while strengthening the mandate of the Government Chief Digital Officer to ensure digital services are secure and resilient. This reinforces an important principle: security must be built into digital systems from the outset, not added later.

Importantly, the strategy commits the government to managing the use of high-risk vendors, services, and products across the public sector to reduce risks to government-held data. As cloud services and generative AI tools become more widely used, this will become increasingly critical. Many AI applications are accessed directly via the internet, often outside traditional IT oversight, creating risks around unauthorised data sharing.

Addressing these risks requires clear visibility into how applications, cloud services, and AI tools are being used across government environments, enabling organisations to identify unsanctioned services and protect sensitive data.

Expanding Cyber Capabilities for National Security

Finally, the strategy proposes updating legislative powers to enable New Zealand’s security agencies to use cyber capabilities and tools to advance national security interests. 
This reflects the growing role cyber operations play in protecting national interests and responding to evolving threats.

Preparing for the Next Phase of Cyber Resilience

Taken together, the strategy and its action plan signal a clear direction of travel: stronger national coordination, deeper public-private collaboration, and increasing expectations for cyber resilience across critical sectors.

At the same time, organisations are navigating a rapidly changing technology environment. Supercharged AI adoption, the continued move to the cloud, distributed workforces, and increasingly sophisticated threats are challenging traditional network-centric security models.

How Zscaler Can Help

Zscaler’s cloud-native security platform helps organisations modernise their security architecture for this new environment and new regulatory requirements. By securely connecting users, devices, and applications without exposing networks to the internet, organisations can improve visibility, strengthen access controls, and reduce risk across distributed environments.

As New Zealand implements its Cyber Security Strategy, Zscaler looks forward to working with organisations across government and critical industries to support the secure delivery of digital services and strengthen national cyber resilience.]]></description>
            <dc:creator>Adam Dobell (Head of Government Affairs, APJ)</dc:creator>
        </item>
        <item>
            <title><![CDATA[What’s New in GovCloud: March 2026 Zscaler Product Updates]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/what-s-new-govcloud-march-2026-zscaler-product-updates</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/what-s-new-govcloud-march-2026-zscaler-product-updates</guid>
            <pubDate>Tue, 31 Mar 2026 18:15:16 GMT</pubDate>
            <description><![CDATA[Staying up to date on product releases can be challenging, especially when you’re balancing mission requirements, operational priorities, and compliance. To make it easier, here’s a monthly roundup of notable Zscaler GovCloud updates from the past month. Each section includes a quick product refresher, brief context on what’s changing, and scan-friendly highlights you can share with your teams.

Zscaler Internet Access (ZIA)

Zscaler Internet Access (ZIA) is Zscaler’s secure internet and SaaS access service, providing policy-based protection and visibility for users wherever they work. For many federal environments, ZIA is central to enforcing acceptable use, preventing data loss, and maintaining consistent controls across distributed users.

This month’s ZIA updates focus on smoother admin workflows, expanded policy coverage, and improved visibility, especially in logging and monitoring, so operations teams can move faster without sacrificing oversight.

Highlights

- Insights Logs: Insights Logs pages now feature asynchronous log retrieval, so admins can continue working while queries run in the background. 
This is helpful during active investigations and routine log review.
- DLP and file type support for MSIX files: File Type Control and DLP policies now support MSIX files in the Executable category, extending policy coverage to a modern packaging format without requiring workarounds.
- Logs for MCP transactions: MCP application activity is now recorded in Web Insights Logs, capturing Model Context Protocol (MCP) transactions in the ZIA Admin Portal and improving traceability for MCP-related activity.
- Gen AI prompt obfuscation (released to FedRAMP High): Gen AI prompts displayed in Web Insights Logs can be obfuscated when configuring admin roles, supporting least-privilege access to sensitive prompt content.
- Dedicated IP for ZIA in Moderate: A cloud-based service that allows organizations to be provisioned with dedicated IP addresses and use them as the source IP addresses for their traffic.

Learn more: https://help.zscaler.us/zia/release-upgrade-summary-2026

Deception

Zscaler Deception helps detect and disrupt attackers by deploying decoys and lures that expose malicious activity early and with high confidence. Deception can be especially valuable for high-signal detection. When a decoy is accessed, it often points to behavior that warrants immediate attention.

This month’s update expands cloud coverage with new support for GCP-based deception resources, helping teams extend consistent detection strategies as workloads span multiple cloud providers.

Highlights

- Cloud Deception with GCP: Integrate Google Cloud Platform (GCP) with Zscaler Deception and deploy GCP-specific decoys to detect malicious activity (based on decoy type and configuration), extending deception capabilities into GCP environments.

Learn more: https://help.zscaler.us/deception/release-upgrade-summary-2026

Cloud Connector

Zscaler Cloud Connector helps extend Zscaler policy enforcement and traffic forwarding for workloads running in public cloud environments. 
It supports organizations that need consistent security controls for cloud-hosted services while enabling architectures aligned to modernization initiatives.

Cloud Connector updates this month support automation for Azure environments and improve usability for multisession VDI. These are two practical areas that can reduce operational friction.

Highlights

- Azure endpoints for partner integrations: New endpoints extend programmatic access to features and functionality for Azure accounts and groups, supporting broader integration and automation workflows.
- Zscaler Client Connector for VDI username visibility: In multisession VDI, users can view their username in the Zscaler Client Connector for VDI app, improving clarity in shared-session scenarios and helping streamline troubleshooting.

Learn more: https://help.zscaler.us/cloud-branch-connector/release-upgrade-summary-2026

Zscaler Digital Experience (ZDX)

Zscaler Digital Experience (ZDX) provides end-to-end visibility into user experience and application performance to help IT teams pinpoint and resolve issues faster. For federal IT, this visibility supports improved service delivery and more efficient triage across network, endpoint, and SaaS dependencies.

This month’s ZDX enhancements add more control over Zoom monitoring scope and strengthen admin session governance.

Highlights

- Zoom call quality monitoring exclusion criteria: Zoom call quality monitoring now supports exclusion criteria during tenant onboarding, enabling collection for all users except specified users or groups.
- Session timeout duration: Configure Session Timeout Duration to control how long a user can remain in the ZDX Admin Portal session while inactive, supporting stronger session management.

Learn more: https://help.zscaler.us/zdx/release-upgrade-summary-2026

Conclusion

Want the full details? 
Use the links above to review the complete release summaries, and check back next month for the next GovCloud update roundup.Zscaler continues to invest in a robust GovCloud roadmap and remains committed to supporting the unique security, compliance, and operational requirements of the federal market. We’ll keep delivering enhancements that help agencies and federal partners strengthen resilience, simplify operations, and advance mission success.]]></description>
            <dc:creator>Jose Arvelo Negron (Manager, Sales Engineer)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Streamlining Multi-Tenant Management: Announcing the Integration of Multi-Tenant Portal with ZIdentity for Unified SSO]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/streamlining-multi-tenant-management-announcing-integration-multi-tenant</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/streamlining-multi-tenant-management-announcing-integration-multi-tenant</guid>
            <pubDate>Wed, 25 Mar 2026 20:17:12 GMT</pubDate>
            <description><![CDATA[Managing multiple customer environments or internal departments shouldn't mean managing multiple logins. We recently announced a significant enhancement to the Zscaler Multi-Tenant Portal (MTP): its integration with ZIdentity. This integration is designed to deliver a seamless, secure, and unified single sign-on (SSO) experience for our MSPs and for organizations managing multi-tenant Zscaler deployments.

One Identity, Limitless Management

The Multi-Tenant Portal has long been the cornerstone for Managed Service Providers (MSPs) and large-scale enterprises to oversee multiple Zscaler instances. By integrating with ZIdentity—Zscaler’s authentication service—we are bringing a "One Zscaler" experience to the administrative level.

With ZIdentity added on top of an existing identity provider, administrators can now log in once and gain instant access to all their managed tenants. No more juggling different sets of credentials or dealing with repetitive authentication prompts.

Key Highlights of the Integration

- True single sign-on (SSO): Authenticate once through ZIdentity and move freely between the Multi-Tenant Portal and your managed ZIA or ZPA instances.
- Seamless tenant switching: Quickly pivot from one customer tenant to another within the MTP dashboard without needing to log in again. This functionality is critical for MSPs who need to respond quickly to support requests or configuration changes across different environments.
- Enhanced security with adaptive MFA: Leverage the advanced security capabilities of ZIdentity, including adaptive multifactor authentication, to ensure that your multi-tenant environment is protected by robust security standards while maintaining administrative efficiency. 
The following MFA mechanisms are currently supported:Security keyBiometricsSMS OTPTOTP authenticators such as Google AuthenticatorCentralized administration: Manage your own administrative users and their access levels centrally through ZIdentity, ensuring consistent policy application across the entire Zscaler ecosystem.Why This Matters for MSPs and Multi-Tenant OrganizationsIn a world where speed and security are paramount, administrative friction is the enemy. This integration directly addresses the challenges faced by teams managing complex, multi-tenant Zscaler environments:Efficiency gains: Administrators save valuable time by eliminating redundant login steps, allowing them to focus on high-value tasks and customer support.Robust governance: Centralizing authentication reduces the risk of credential sprawl and ensures that only authorized personnel have access to sensitive multi-tenant configurations.Improved security and compliance: Compliance requirements such as PCI-DSS and HIPAA demand MFA; this integration helps customers meet those requirements and improve security.A cohesive workflow: The Multi-Tenant Portal now acts as a true gateway, providing a streamlined path to managing Zscaler services across your entire customer base.Moving ForwardThe integration of the Multi-Tenant Portal with ZIdentity is a key step in our ongoing mission to simplify security at scale. As we continue to roll out these enhancements, our goal remains clear: Provide you with the most efficient and secure tools to manage your zero trust architecture.Stay tuned for more updates as we continue to evolve the Zscaler Multi-Tenant Portal and ZIdentity ecosystem!For more information on our Zero Trust Exchange platform, visit our&nbsp;website.]]></description>
            <dc:creator>Akhilesh Dhawan (Sr. Director, Product Marketing - Platform)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Stop “Patient Zero” Threats: Why Traditional Sandboxes Fail and How Zscaler Advanced Cloud Sandbox Changes the Outcome ]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/stop-patient-zero-threats-why-traditional-sandboxes-fail-and-how-zscaler</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/stop-patient-zero-threats-why-traditional-sandboxes-fail-and-how-zscaler</guid>
            <pubDate>Fri, 20 Mar 2026 17:55:03 GMT</pubDate>
            <description><![CDATA[Security teams don’t lose sleep over known malware. They worry about the first time a brand-new threat shows up with no signature, no IOC, and an easy path to execution by the attacker.That’s the patient zero moment: the first encounter with an unknown file.In many organizations, risk comes from a common pattern: deliver then detonate.&nbsp;A file reaches the inbox or endpoint, endpoint tools classify it as&nbsp;unknown (or low prevalence), and then submit it for sandbox analysis while everyone waits for a verdict. Even if the file hasn’t been executed yet, it’s now present—and one mistaken click, share, or re-download can turn “unknown” into an incident.The real enemy: The verdict gapIn many environments, sandboxing is triggered only after the file has already reached the endpoint, often because the endpoint security solution flags it as unknown or low prevalence and submits it for detonation.That creates a timing problem:A user downloads a file to the deviceThe file lands on the endpoint (now one click away from execution)EDR identifies it as unknown and submits it to a sandboxThe sandbox analyzes the fileA verdict returns (benign/suspicious/malicious)That delay between “file on the endpoint” and “sandbox verdict” is the verdict gap. With&nbsp;~450,000 new malicious programs per day (AV-TEST.org), the gap isn’t occasional; rather, it becomes a repeating exposure window. 
Patient zero threats live in that gap because the attacker only needs one successful execution to trigger credential theft, persistence, or ransomware staging.Endpoint detection and response is essential, and endpoint sandboxing is useful, but both operate after files reach the device.&nbsp;The goal is to reduce how often unknown files get that far in the first place.Inline sandboxing helps reduce how often that happens by stopping unknown threats earlier in the attack chain, lowering the number of endpoint alerts and investigation workload.Other common sandboxing pitfallsThe verdict gap is not the only problem with traditional sandboxing approaches. Many sandboxes, especially basic or standard versions, still leave coverage and timing gaps that attackers exploit.These limitations include:&nbsp;&nbsp;Limited file-type coverage (primarily executables), while modern campaigns use archives, scripts, Office/PDF files, installers, and mixed-content packagesRestrictive file-size limits that exclude realistic payloads and multi-stage droppersBlind spots on large payloads (50 MB+) increasingly used as installers, disk images, archives, and bundled droppersMany organizations start with standard sandbox protection to inspect suspicious files. This provides valuable visibility, but as attackers evolve, security teams often find they need broader inspection and faster decisions to reduce patient zero risk.What patient zero defense actually meansPatient zero defense isn’t a promise that malware will never appear. It’s a security posture:Unknown files don’t get a free passSuspicious content is stopped upstreamA verdict is reached quicklyOnly then does content reach the deviceThis is the approach behind&nbsp;Zscaler Advanced Cloud Sandbox, delivered inline through the Zscaler Zero Trust Exchange.Zscaler Advanced Cloud SandboxAdvanced Cloud Sandbox helps close the verdict gap with capabilities designed for modern attack techniques. 
It’s delivered through the Zscaler Zero Trust Exchange, which processes 500 B+ transactions per day, and&nbsp;Zscaler achieved 100% effectiveness in the CyberRatings SSE Threat Protection Test for two consecutive years (AAA rating).Unlimited inline prevention: Hold it at the doorInstead of&nbsp;“deliver then detonate,” Advanced Cloud Sandbox can quarantine unknown files upstream so they never land on the endpoint while analysis occurs.AI Instant Verdict: Stop unknown file-based threats in secondsBlock unknowns too aggressively and productivity suffers. Allow them through and you risk incident response later.AI Instant Verdict delivers a high-confidence verdict in seconds, enabling organizations to stop unknown threats without weakening policy or slowing down users.Patched VM analysis: Expose evasive malwarePatched VM environments help uncover threats designed to evade or “sleep through” standard sandbox environments.API-driven analysis: Extend protection to more workflowsAPI-driven out-of-band analysis enables detection of hidden threats in third-party files, acquired environments, and other workflows outside traditional traffic inspection.Zero Trust Browser integration: Maintain productivity during analysisUsers can safely interact with files during sandbox inspection through browser isolation.If malicious behavior is detected, files can be flattened into PDFs or disarmed to remove harmful content.&nbsp;&nbsp;Three ways to consume Zscaler Advanced Cloud SandboxInline deployment: Stop patient zero attacks before they land. Inspect files in line and quarantine unknown threats upstream while a verdict is reached. Best for stopping ransomware and other malware before it ever reaches the endpoint.Offline analysis (Endpoint Sandbox): Neutralize threats introduced offline. Analyze files introduced outside normal network paths (USB, Bluetooth) before execution to prevent offline “patient zero” attacks.API/SOC workflows: Inspect third-party and business-critical files. 
Submit files from third parties or M&A workflows out-of-band for rapid inspection, and equip SOC teams with actionable reports and MITRE ATT&CK–mapped insights to speed triage and response.&nbsp;Why stepping up to Advanced Cloud Sandbox changes the outcomeZscaler provides standard sandbox protection as part of the platform, while Advanced Cloud Sandbox extends that protection with deeper inspection, broader coverage, and faster decisions as threats evolve. This allows organizations to start with foundational protection and step up their defenses as threat complexity grows.At a glance, here’s what’s included in a standard sandbox vs. what you gain with Advanced Cloud Sandbox:&nbsp;Budget reality: What you’re really buyingWhen evaluating sandbox protection, it helps to step back and consider the bigger picture. Organizations don’t invest in sandboxing to generate detonation reports—they invest in risk reduction.A single ransomware incident can quickly lead to downtime, incident response costs, recovery efforts, and reputational damage.&nbsp;Those losses often exceed the incremental cost of upgrading traditional sandboxing or adding Advanced Cloud Sandbox prevention alongside endpoint protection.Advanced Cloud Sandbox helps reduce those risks by delivering:Upstream quarantine of unknown filesFast AI-driven verdictsCoverage aligned with modern attack techniquesOperational efficiency through API-driven workflowsA simple evaluation checklistWhen evaluating sandbox protection for unknown files, consider the following:Can unknown files be quarantined upstream until a verdict is reached?How quickly can the sandbox deliver a high-confidence decision?Does the sandbox support the file types and sizes attackers commonly use?Does the sandbox help simplify SOC workflows by reducing alerts and investigation effort?Next stepPatient zero attacks thrive in the verdict gap—when unknown files can reach endpoints before a decision is made.If your organization currently relies on 
a standard or traditional sandbox, or on endpoint protection alone, this may be a good time to evaluate whether your coverage matches today’s threat landscape.Talk to your Zscaler account team to see how Advanced Cloud Sandbox can help stop unknown file-based threats in seconds without compromising productivity.]]></description>
            <dc:creator>Shveta Shahi (Sr. Product Marketing Manager)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Troubleshoot Device Issues Faster with ZDX]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/troubleshoot-device-issues-faster-zdx</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/troubleshoot-device-issues-faster-zdx</guid>
            <pubDate>Thu, 19 Mar 2026 20:08:05 GMT</pubDate>
            <description><![CDATA[Introduction: The Hidden Cost of "Everything's Fine"In large enterprises, many users suffer in silence, enduring slow applications, frequent crashes, and persistent device instability without ever opening an IT ticket. This "silent pain" drains productivity, damages employee confidence, and creates a massive blind spot for IT. Traditional tools, reliant on ticket data, only see the users who complain—missing the vast majority of underlying issues.This hidden instability creates distinct, critical challenges for specialized IT teams:For the Service Desk: Escalating hidden issues and high resolution times due to a lack of complete data.For Network Operations (NetOps):&nbsp;Difficulty correlating device-level instability (like driver conflicts) with network and application performance issues.For Network Security (NetSec): Gaps in visibility and inconsistent context that complicate Zero Trust adoption and the user experience model.Zscaler Digital Experience (ZDX) Device Health directly addresses this by detecting system and software crashes, delivering a clear device health score, and enabling remote remediation&nbsp;before users are forced to file a ticket.The Silent Challenges for Key PersonasWhen device problems go unreported, key IT teams are left to deal with the consequences blindly:1. Service Desk TeamsChallenge:&nbsp;They only see the&nbsp;loudest problems. The majority of slow-downs and minor crashes remain hidden, leading to an inaccurate view of service quality. The Service Desk workload is reactive, chasing incidents based on incomplete or late user reports.Result:&nbsp;Long triage and resolution times because they lack the cross-domain visibility to pinpoint the root cause (Is it the device, the network, or the app?). This leads to higher operational overhead and lower employee satisfaction.2. 
Network Operations (NetOps) TeamsChallenge: NetOps needs to ensure application and network experience is stable, but a fault on the device can masquerade as a network issue. They struggle to see how device issues relate to app and network experience because traditional monitoring tools are siloed.Result:&nbsp;Wasted time troubleshooting network performance only to find the root cause was a faulty Wi-Fi driver, device CPU issues, or a browser hang on the device, not the network path itself. Without end-to-end visibility, the NetOps team wastes critical time debugging network issues that are actually rooted in the endpoint device.3. Network Security (NetSec) TeamsChallenge: In a Zero Trust environment, security and experience must be unified. NetSec teams require consistent context across the entire data path. Multiple monitoring agents create complexity and potential security gaps.Result: Increased cost and complexity from having to integrate and correlate data from multiple, non-unified endpoint, network, and application tools, which undermines a single-platform, Zero Trust strategy.&nbsp;The ZDX Device Health Solution&nbsp;ZDX Device Health provides the visibility and control needed to eliminate silent pain and empower IT teams.&nbsp;ZDX for the Service Desk: Proactive Resolution and EfficiencyBy providing real signals from devices (memory usage, disk usage, Wi-Fi signal quality, battery, CPU usage, software crashes, average disk queue length, system crashes) and turning them into clear health scores, the Service Desk can act without waiting for tickets. 
Beyond an overall device score, which may only indicate that one or more key metrics are performing badly, ZDX captures trends and groups scores for individual key metrics such as CPU performance and memory performance, allowing IT to precisely target underperforming devices.Proactive Fixes:&nbsp;ZDX detects patterns (e.g., a specific driver causing blue screens on 2% of devices) and allows IT to trigger fixes via existing management tools (Intune, Jamf).Shorter Resolution Time:&nbsp;Cross-domain visibility allows IT to confirm improvement and close the loop: Detect signal → Identify cause → Apply fix → Confirm improvement.Smarter Asset Management: Data shows which devices truly need replacement versus those that only need a software or driver fix, reducing unnecessary asset costs.&nbsp;ZDX for NetOps: Cross-Domain Visibility and PrecisionZDX removes the monitoring silos that complicate root cause analysis. Because all traffic passes through the Zscaler Zero Trust Exchange, it captures device, network, and application performance in one stream.&nbsp;Correlated Experience View:&nbsp;NetOps can see how device stability impacts network and app performance in a single view, allowing them to pinpoint whether a slow video call is due to the device, the path performance, or app availability. For example, if NetOps suspects a network slowdown, ZDX's end-to-end insight immediately confirms if the problem is device-based (e.g., high CPU usage). This clarity allows them to easily redirect the issue to the Service Desk, preventing wasted time on network traces.Precise Troubleshooting: They can quickly identify which models, OS versions, or drivers are causing the most failures, enabling targeted action to prevent the problem from spreading. 
By providing a clear device health trend and detailed health data on the device/user page, ZDX clearly shows the problem, drastically reducing the Mean Time to Resolution (MTTR).ZDX for NetSec: Unified Zero Trust ExperienceZDX is built on the same architecture as Zscaler Internet Access and Zscaler Private Access, enabling a unified approach to security and experience.Single Data Path & Consistent Context:&nbsp;All device metrics align with application and path data, allowing clear cause analysis and maintaining consistency within the Zero Trust model.Unified Operations:&nbsp;Security and experience share a single platform, eliminating the need for multiple agents and tools. This reduces cost and management effort while improving insight across the entire digital environment.A Clear Next StepIf your organization is losing time and money to hidden device problems, ZDX Device Health offers a path to a stable, predictable, and measurable environment.Request a ZDX Device Health session to see your environment’s data mapped across device, network, and application layers.]]></description>
            <dc:creator>Rohit Goyal (Sr. Director, Product Marketing - ZDX)</dc:creator>
        </item>
        <item>
            <title><![CDATA[ZIA and ZDX Achieve DoW Impact Level 5 Provisional Authorization]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/zia-and-zdx-achieve-dow-impact-level-5-provisional-authorization</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/zia-and-zdx-achieve-dow-impact-level-5-provisional-authorization</guid>
            <pubDate>Thu, 19 Mar 2026 18:53:49 GMT</pubDate>
            <description><![CDATA[Today’s warfighter operations demand speed, resilience, and trusted connectivity across users, devices, and mission partners anywhere, across coalition networks, and in expeditionary environments while the threat landscape continues to evolve. Adversaries are increasingly targeting defense supply chains, logistics systems, and operational data as the “network” has expanded far beyond any traditional perimeter and can no longer be secured with legacy, perimeter-based defenses. This operational reality is exactly why the Department of War (DoW) mandated targeted Zero Trust adoption by FY2027. However, meeting that mandate requires platforms capable of handling highly sensitive data without degrading mission speed.That is why I am proud to share a major milestone: the Department of War (DoW) has granted Zscaler Internet Access (ZIA) and Zscaler Digital Experience (ZDX) Impact Level 5 (IL5) Provisional Authorization (PA), the DoW’s highest level unclassified cloud authorization. 
This authorization extends Zscaler’s cloud native Zero Trust platform into DoW environments handling Controlled Unclassified Information (CUI) and National Security Systems (NSS) information, helping defense organizations modernize mission networks without compromising security or compliance.The perimeter is gone - mission execution can’t waitDoW agencies operate in a world where users are distributed, mobile, and often deployed in various austere environments, while mission data and applications span hybrid on‑prem and multi‑cloud environments across multiple networks.&nbsp;By leveraging a full proxy architecture, agencies can securely connect users directly to applications without ever bridging the underlying networks, fundamentally cutting off lateral movement.&nbsp;Mission execution also requires collaboration with partners who may not share a common identity infrastructure, while security teams must enforce consistent policy without adding complexity or tool sprawl.Perimeter-based security can’t keep up. When protection is tied to a fixed network boundary, organizations end up with a patchwork of appliances and point products that are hard to operate, slow to change, and fragile under real operational conditions.The Department has mandated Zero Trust as its strategic answer. It assumes the environment is contested, continuously verifies users, devices, and access requests, and enforces policy on every transaction, reducing risk by eliminating implicit trust and limiting the blast radius so a single foothold can’t become lateral movement across the mission.What ZIA brings to the DoWZIA is built to secure and control internet and cloud application usage using Zero Trust principles, functioning as a cloud-based Internet Access Point. Rather than relying on legacy on-premise architectures anchored to a perimeter, ZIA enforces security policies at every transaction. 
This extends protection to remote users, mobile devices, and forward-deployed operations without requiring reliance on perimeter appliances.DoW organizations can use ZIA to apply strong security controls and threat prevention capabilities that align with the operational demands of modern warfare, including:Inline TLS/SSL decryption and inspection: Expose and stop threats hidden in encrypted traffic.AI-driven threat prevention: Detect and block emerging and unknown attacksCommand-and-control (C2) detection and disruption: Break adversary communications earlyCloud-native DLP across web, email, and endpoints: Reduce data leakage and mission-impacting exposure.Behavioral analytics at scale: Use massive daily telemetry to identify suspicious activity and stop attacks that evade signature-based defenses.Secure coalition collaboration without network exposure: Identity-aware, deny-by-default access with cloud-native enforcement and IdP federation enables rapid cross-organization trust decisions, even without shared identity infrastructure.Detect and contain threats at mission tempo: Real-time inspection and continuous policy enforcement with automated isolation/quarantine stops adversaries from turning a foothold into lateral movement across operations.ZIA provides a globally proven SaaS platform that secures internet and cloud access while enabling distributed operations with consistent, location-agnostic policy enforcement. It eliminates legacy perimeter dependencies, reduces operational overhead, and empowers the DoW to accelerate divestment from hardware in favor of a modern, scalable, Zero Trust–aligned architecture.What ZDX brings to the DoWZscaler Digital Experience (ZDX) delivers end-to-end visibility and rapid troubleshooting for mission users across internet, cloud, and private apps. 
In IL5 environments where users are dispersed and networks are constrained, ZDX pinpoints whether issues are on the device, local network, path/tunnel, Zscaler service, or the application, cutting time to resolution and preserving operational tempo without heavy packet-capture tooling.DoW organizations can use ZDX to strengthen mission effectiveness in IL5-aligned operations by enabling:End-to-end path visibility: Pinpoint whether degradation is on the endpoint, local/Wi‑Fi/LAN, last mile, Zscaler service edge, or the application/SaaS itselfProactive performance monitoring: Use real user metrics and synthetic tests to identify issues before they impact missions and shift changes from reactive to plannedFaster incident triage and reduced MTTR: Guided workflows that quickly narrow root cause and reduce time spent “war-rooming” across teams and partnersApplication experience scoring and baselining: Quantify mission impact, track trends over time, and validate whether changes actually improved performanceOperational insights for distributed and forward users: Compare experience by location, network type, device, or user group—supporting prioritization for constrained expeditionary environmentsActionable evidence for partner/vendor escalation: Clear telemetry that speeds up resolution when the issue resides outside the enterprise boundaryIn practical terms, ZDX keeps IL5 missions moving by turning performance and reachability problems into clear, measurable, rapidly diagnosable outcomes, cutting time to resolution, improving service reliability, and sustaining consistent operations for dispersed users across constrained networks.A unified Zero Trust platform for unclassified mission requirementsIL5 is built for unclassified environments where the sensitivity of the data and the operational impact of unauthorized disclosure demand heightened safeguards. 
Because it must meet DoW-specific security requirements, IL5 is among the most rigorous commercial cloud authorizations for unclassified defense workloads, enabling DoW components, military services, defense agencies, and mission partners to accelerate cloud adoption and operational agility without compromising mission security.With the IL5 PA, ZIA and ZDX now join Zscaler Private Access (ZPA) to deliver the DoW a single, unified Zero Trust platform for unclassified environments, securing internet/SaaS and private application access with consistent policy enforcement across users, devices, and locations. This reduces dependence on legacy perimeter tools and VPN backhaul, while ZDX provides end-to-end experience visibility to isolate issues quickly and protect mission tempo, resulting in stronger data protection, least-privilege access, and measurable operational assurance without sacrificing user productivity.DoW Zero Trust by FY2027 - Move Forward with ConfidenceThe FY2027 Zero Trust deadline is rapidly approaching, and agencies can no longer afford to choose between rigorous compliance and operational speed. Modern operations demand secure, reliable connectivity wherever the mission goes. The ZIA and ZDX DoW IL5 PA is a meaningful step for organizations handling CUI and NSS information, enabling cloud-native, resilient security built for distributed operations while meeting rigorous compliance requirements. This milestone also reinforces Zscaler’s broader federal commitment backed by DoW IL2, FedRAMP Moderate and High authorizations, CMMC Level 2, DoW IL5, and an active path to DoW IL6, so agencies and mission partners can modernize with confidence, reduce legacy complexity, and deploy Zero Trust protections aligned to today’s operational realities.]]></description>
            <dc:creator>Ryan McArthur (Federal CTO)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Zero Trust Purdue Model: How to Modernize OT Security]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/zero-trust-purdue-model-how-modernize-ot-security</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/zero-trust-purdue-model-how-modernize-ot-security</guid>
            <pubDate>Wed, 18 Mar 2026 23:14:35 GMT</pubDate>
            <description><![CDATA[For decades, the Purdue Model has been the foundation of operational technology (OT) architecture. It provides a clear structure for how factory systems are organized, from sensors and programmable logic controllers (PLCs) to enterprise applications.In the past, IT and OT in factories were air-gapped. But in recent years the air gap has largely disappeared. Even if OT systems do not directly connect to the cloud, there are plenty of systems on the factory floor that are connected to enterprise IT or cloud for physical security, production analytics, industrial printing, and other functions that support a factory. Connectivity has become essential to modern manufacturing.What no longer works are the security assumptions that grew around it. Many of those assumptions were built when access to OT was rarely available or granted. That world has disappeared, leaving a growing gap between how factories operate and how they are protected.&nbsp;The Purdue Model Still MattersDespite predictions that the Purdue Model would eventually become obsolete, it remains deeply relevant for industrial organizations. It provides a shared framework for how OT teams design and operate manufacturing environments, organizing systems into layers that range from physical processes at the plant floor to enterprise applications in corporate networks.It also works because it mirrors how industrial systems actually function. Sensors communicate with controllers, controllers interact with supervisory systems, and operational systems exchange data with enterprise platforms. The layered model provides clarity and operational consistency. 
A simple and effective structure looks something like this:Level 0–1: Physical processes and sensorsLevel 2: Control systems such as PLCs and HMIsLevel 3: Operations managementLevel 4–5: Enterprise IT systems&nbsp;Why Traditional OT Security Controls Fall ShortMany factories rely on familiar tools such as firewalls, VLAN segmentation, and network access control to secure their environments. These technologies still play a role, but they were never designed for the level of connectivity seen in modern manufacturing.FirewallsFirewalls, for example, are primarily designed to control north–south traffic: communication entering or leaving the plant network. While they remain effective at that boundary, they provide limited visibility into the east–west communication that occurs inside the factory itself. Many attacks today spread laterally between systems once an attacker gains a foothold, which is exactly where traditional firewall architectures struggle.VLAN SegmentationVLAN segmentation attempts to address this challenge, but in many factories VLANs contain large numbers of devices with very different risk profiles. A single VLAN may include PLCs, HMIs, SCADA systems, engineering workstations, and even contractor laptops. If malware infects one device, it can often move laterally across the entire segment with little resistance.NAC SolutionsNetwork access control (NAC) solutions face their own challenges in OT environments. Many industrial systems are decades old and cannot support modern agents or posture checks. In practice, organizations often fall back to maintaining allow lists based on MAC addresses, which are complex to manage and provide limited protection against sophisticated attackers. These approaches were designed for factories that were mostly isolated. 
Today’s connected industrial environments require a different security model.AI Presents Additional ChallengesIndustrial organizations are also facing a new reality: AI is accelerating cyberattacks.Tasks that once required weeks of reconnaissance can now be automated:Faster vulnerability discoveryRapid network enumerationAutomated lateral movementFaster data exfiltrationWhat once took attackers months can now occur in hours. Factories need security models that assume compromise and minimize the blast radius of an attack. Check out this report by Anthropic on an AI-orchestrated&nbsp;cyber espionage campaign.&nbsp;Bringing Zero Trust to the Purdue ModelZero Trust does not replace the Purdue Model. Instead, it modernizes how security is applied across the architecture.The core idea behind Zero Trust is simple: never assume trust based on network location. Every connection must be verified, access must be limited to what is strictly necessary, and systems should never expose more of the network than required.Applying these principles to industrial environments results in what many organizations now describe as the Zero Trust Purdue Model. This approach preserves the layered structure of Purdue while introducing controls that prevent lateral movement, restrict access to specific systems, and remove unnecessary network exposure.How Zscaler Enables the Zero Trust Purdue ModelZscaler helps enable this architecture through its Zero Trust Branch, typically deployed around Level 3 or 3.5 of the Purdue Model, where operational systems connect to enterprise IT and external services.&nbsp;&nbsp;One of the most important capabilities is segmentation that operates at the level of individual assets rather than networks. Instead of relying on VLANs or firewall zones, organizations can control communication between specific devices. 
This prevents malware from spreading laterally if a system becomes compromised and significantly reduces the potential blast radius of an attack.Zscaler also replaces traditional VPN-based remote access with a browser-based privileged access model. Contractors can connect directly to the machines they are authorized to maintain without exposing the broader factory network. This eliminates one of the most common entry points attackers exploit in industrial environments.As factories increasingly connect to cloud platforms and enterprise systems, the architecture also secures outbound communications, allowing organizations to apply consistent security policies across both IT and OT traffic.Finally, Zscaler incorporates deception technologies that deploy decoy systems inside the environment. These decoys mimic real OT assets, and any interaction with them immediately generates high-confidence alerts that allow security teams to detect attackers early in the attack lifecycle.A reference architecture for Zero Trust Purdue Model is&nbsp;available here.&nbsp;The Future of Factory SecurityFactories will continue to become more connected, automated, and data-driven. The Purdue Model remains a useful architectural framework for organizing these environments, but securing them requires a modern approach.By combining the structure of the Purdue Model with Zero Trust principles, organizations can protect their industrial systems while enabling the connectivity and analytics that modern manufacturing demands.]]></description>
            <dc:creator>Umang Barman (Senior Director, Marketing)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building a Unified Data Security Platform across DSPM and DLP]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/building-unified-data-security-platform-across-dspm-and-dlp</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/building-unified-data-security-platform-across-dspm-and-dlp</guid>
            <pubDate>Tue, 17 Mar 2026 17:00:09 GMT</pubDate>
            <description><![CDATA[Data is more fluid than ever, dispersed across cloud apps, unmanaged devices, and generative AI. This sprawl has outpaced visibility, leaving security teams at a disadvantage as they manage escalating risks. The rapid rise of generative AI introduces new complexities as employees interact with sensitive information in increasingly unpredictable ways. This challenge is exacerbated by fragmented legacy tools that offer isolated, single-channel point solutions rather than a holistic view of data exposure.

The Limitations of Legacy: Why Traditional Approaches Fall Short

This data sprawl has created visibility gaps that traditional perimeter-based security cannot close. Most organizations today lack a single source of truth that lets security teams see the full picture of data exposure across environments. Without a central view, it's nearly impossible to know:

- Data residency: where the most sensitive data is actually stored
- Access control: who has access to it
- Exposure risk: whether the data is overexposed
- Vulnerability management: whether misconfigurations are creating vulnerabilities

Legacy systems, originally built for a static world, aren't keeping pace with the environments they were supposed to protect. Many of the tools organizations have relied on, particularly legacy Data Loss Prevention (DLP), feel more like stopgaps than solutions: they lack an intelligence layer to continuously map data, understand the context surrounding it, and connect the dots between data, identity, and access. Legacy DLP tools also struggle with scale and nuance. Rules are often too brittle, alerts are notoriously noisy, and enforcement lacks the situational context needed to be effective.
This creates a lose-lose scenario: security teams either tune DLP so loosely that it fails to detect real risks and threats, or so tightly that it disrupts legitimate business workflows and frustrates users. This operational friction, combined with the tightening grip of global regulations such as the General Data Protection Regulation and the California Consumer Privacy Act, transforms compliance from a standard procedure into an administrative nightmare.

Closing the Gap with a Unified Approach: The DSPM and DLP Power Duo

To protect data effectively, organizations must bridge the divide between visibility (knowing where the data is) and enforcement (controlling where it goes). It's tempting to treat DSPM and DLP as two separate tools to slot into a security strategy. Data Security Posture Management (DSPM) provides the clarity needed to identify hidden risks and overexposed data; DLP provides the control engine to prevent exfiltration, powered by precise data classification. In most deployments, these two solutions are disjointed and siloed, increasing cost, operational burden, and risk. But when they are connected, they create a continuous feedback loop: visibility informs smarter enforcement policies, and enforcement actions provide deeper insights into data movement.
The result is a unified security layer that is significantly more intelligent, scalable, and robust. This unified approach eliminates the "visibility vacuum" created by siloed security tools. Integrating modern DLP, DSPM, and vulnerability management replaces a patchwork of point solutions that fails to keep pace with today's complex environments, where data moves freely. It simplifies one of the most complex and fragmented challenges organizations face:

- Locating their data
- Classifying it correctly
- Controlling who can access it
- Monitoring how people interact with it across all channels, such as endpoints, email, web, cloud, and AI tools

Ready to Learn More?

To learn more about this unified approach to securing the modern environment, register for our on-demand webinar, Building a Unified Data Security Platform across DSPM and DLP, held March 5, 2026 in partnership with Frost & Sullivan. Our experts, Shankar Subramaniam, VP, Product Management, DSPM at Zscaler, and Ying Ting Neoh, Industry Analyst, Cybersecurity at Frost & Sullivan, share insights on how integrating DLP with DSPM creates a proactive, comprehensive, and unified defense for the AI era.

This blog post has been created by Zscaler for informational purposes only and is provided "as is" without any guarantees of accuracy, completeness or reliability. Zscaler assumes no responsibility for any errors or omissions or for any actions taken based on the information provided. Any third-party websites or resources linked in this blog post are provided for convenience only, and Zscaler is not responsible for their content or practices. All content is subject to change without notice. By accessing this blog, you agree to these terms and acknowledge your sole responsibility to verify and use the information as appropriate for your needs.]]></description>
            <dc:creator>Mahesh Nawale (Product Marketing Manager)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Taming Agentic Threats: Zscaler Visibility and Guardrails to Mitigate OpenClaw]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/taming-agentic-threats-zscaler-visibility-and-guardrails-mitigate-openclaw</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/taming-agentic-threats-zscaler-visibility-and-guardrails-mitigate-openclaw</guid>
            <pubDate>Wed, 11 Mar 2026 18:47:27 GMT</pubDate>
            <description><![CDATA[AI agents can automate mundane tasks and provide productivity shortcuts, but they can also be used by threat actors for illegitimate aims. OpenClaw, formerly known as ClawdBot and Moltbot, is an open source AI agent framework designed to be a helpful digital personal assistant. It runs locally on a computer and proactively takes actions on the user's behalf without direct user input. In just five days, it amassed over 100,000 GitHub stars, and thousands of developers now use it as their default assistant. Running on developers' laptops, OpenClaw connects to their messaging apps, calendars, and developer tools and executes autonomous actions on their behalf. But its powerful convenience has also made it a significant cybersecurity threat due to its major security flaws and the resulting malicious outcomes.

This blog focuses on how threat actors can abuse OpenClaw and turn it into an offensive tool, the risks it poses when used maliciously, and Zscaler's lab-confirmed means of preventing it from compromising organizations' environments and data.

What is OpenClaw?

Think of OpenClaw as a "super-assistant" for your computer. Unlike a standard generative AI chatbot like ChatGPT that only talks to you, OpenClaw is an autonomous agent. This means it can actually do things on your behalf, like read your emails, browse the web, manage your calendar, or even run technical commands on your computer. OpenClaw is also referred to as "shadow AI" because employees sometimes install it on their work computers to be more productive without their IT department knowing or approving it.

How OpenClaw Operates

OpenClaw works by connecting your messaging apps (like Telegram, Slack, Discord, or WhatsApp) to your computer's communication capabilities, including its network access.
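To make that risk concrete, here is a minimal, purely hypothetical Python sketch of the pattern just described: chat messages arriving from a messaging app are routed straight to local actions, with no user click in between. All names here are ours for illustration; this is not OpenClaw's actual code, and the dangerous calls are only indicated in comments.

```python
# Hypothetical sketch of an autonomous-agent loop (NOT OpenClaw's real code):
# each incoming chat message is mapped to a local action and "executed"
# on the user's machine without any further confirmation.

def handle_message(text: str) -> str:
    """Map one incoming chat message to a local 'action' result."""
    # A real agent would ask an LLM to pick the action; this trivial
    # keyword router stands in for that decision step.
    if text.startswith("run "):
        command = text[len("run "):]
        # Danger zone: a real agent would execute this directly, e.g.
        # subprocess.run(command, shell=True) -- no user click required.
        return f"executed: {command}"
    if text.startswith("read "):
        path = text[len("read "):]
        # Likewise it would open(path).read() with the user's privileges.
        return f"read file: {path}"
    return "no matching skill"

def agent_loop(messages: list[str]) -> list[str]:
    """Every message from Telegram/Slack/etc. becomes an action."""
    return [handle_message(m) for m in messages]

print(agent_loop(["run ls -la", "read ~/.ssh/id_rsa", "hello"]))
```

The point of the sketch is the trust boundary: anything that can inject a message into the chat channel, including a hidden instruction in a web page or email the agent reads, reaches the same code path as the legitimate user.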
There are two major components of how OpenClaw operates:

- The "Skills" Hub: Users can download "skills" or plugins from a marketplace called ClawHub to give the assistant new abilities, for tasks like "Summarize my emails," "Book my next trip," "Research this topic," or "Order these groceries."
- Autonomy: Once you give it a task, OpenClaw works in the background on your behalf. It can look at websites, download files, and interact with other software without the user clicking every button in the workflow for that task.

How Threat Actors Leverage OpenClaw to Drive Malicious Outcomes

Because OpenClaw has so much power to act on your behalf, it has become a "wolf in sheep's clothing." There are three main ways it poses a threat:

- Fake "skills": Hackers have uploaded hundreds of malicious "skills" to the marketplace. A downloaded "bad" skill can silently steal passwords, credit card numbers, and other sensitive information without the user's knowledge.
- The "one-click" trap: A major security hole (CVE-2026-25253) allows a hacker to take over the OpenClaw assistant with the click of a malicious link. Once a threat actor controls the assistant, they effectively control the computer and can see everything you do.
- Hidden instructions: An attacker hides secret commands in an email or on a website. If the OpenClaw assistant reads that email or website, it might follow those hidden instructions, like "Send all my files to this address," without the user knowing.

How OpenClaw Compromises Security

The primary danger of OpenClaw is that it often runs with root or other highly privileged access. Because it was designed to be helpful, on its own it doesn't have a "safety cage" (or sandbox) to stop it from doing something harmful.
Even OpenClaw's FAQ states that it's both a product and an experiment and that "there is no 'perfectly secure' setup." If an OpenClaw assistant on a work computer is compromised, a hacker doesn't just get access to that one person's files: they can potentially use the assistant to crawl through the entire company's network, stealing sensitive data or planting malware.

How Zscaler Can Prevent OpenClaw Use

As a comprehensive security platform built on zero trust principles, Zscaler's Zero Trust Exchange offers several layers of defense-in-depth threat detection and prevention that can block the use of OpenClaw:

- Prevent download or execution of OpenClaw: Using a combination of URL and File Type Control, Zscaler can prevent unauthorized downloads of OpenClaw on endpoints. OpenClaw install files are typically .ps1, .sh, or Docker files.
- Block the download of additional playbooks: OpenClaw uses markdown for its skill files. Zscaler's custom File Type Control can detect markdown files and block their download. Furthermore, Zscaler CASB can isolate, restrict, or block access to GitHub repositories to prevent users from duplicating repos and bypassing security with custom repositories.
- Prevent malicious callbacks: Malicious OpenClaw skill files often call out to command-and-control (C&C) servers, and can use evasive techniques such as SSH tunnels or DoH tunnels. Zscaler can block these callbacks as well as the executables and scripts that would trigger them.
- Protect against sensitive data leakage: Depending on how it's deployed, OpenClaw uses the network for tool/skill and LLM access. Zscaler can inspect these sessions and apply data protection to them.
- Block unauthorized LLM calls: Controls can be put in place so that only sanctioned AIs are allowed from an organization's network, and the sanctioned AI provides visibility and guardrails.
Using URL and Cloud App controls, Zscaler AI Guard can block all LLMs and monitor and restrict prompt usage.

- Isolate rogue devices and prevent lateral movement: On open networks, users can plug in devices running OpenClaw. If compromised or used maliciously, these devices become an entry point into the enterprise network; a common example is plugging a Mac mini into an open port. Zscaler can help by isolating these devices.
- Restrict BYOD devices from accessing websites and enterprise data directly: Contractors often need to access SaaS applications such as Workday or Salesforce from their own devices. Devices with OpenClaw installed can download skills that use the Chrome Dev Kit to scrape data from SaaS services. Zscaler's Zero Trust Browser can prevent data loss at scale by rendering web pages in a virtual browser as pixels only: this effectively sanitizes web pages by preventing server-side JavaScript, applets, or other embedded content from reaching an endpoint for execution.
- Leverage endpoint context: Zscaler Endpoint Context also extends visibility to AI agents like OpenClaw, delivering real-time endpoint intelligence that strengthens multilayer protection, so security teams can detect threats sooner and enforce policies with greater precision.

Real-World Validation of Zscaler's OpenClaw Exploitation Prevention Methods

Our ThreatLabz team sought to validate and provide real-world examples of how Zscaler can protect customers against the various ways threat actors seek to compromise an organization's devices and data using OpenClaw as the entry point.
These are practical examples of how the Zero Trust Exchange, with its multiple layers of protection, works to detect and block communication between OpenClaw and its skills repository, as well as file downloads via messaging apps like Telegram.

Prevent OpenClaw access with Zscaler's URL Category for "Online Chat" apps

Zscaler uses URL Categories to classify and group the URLs of various applications. These categories can be used as actionable criteria in Zscaler URL & Cloud App Control policies to block access to the websites in a given category. To block access to instant messaging apps like Telegram and Discord that OpenClaw could communicate with, a Zscaler administrator could implement a URL & Cloud App Control policy blocking the domains and ports these messaging apps use. The above excerpt from Zscaler's Web Insights report shows that communication has been disrupted between OpenClaw and the Telegram messaging app. By using a URL & Cloud App Control policy that specifies the "Online Chat" category, Zscaler customers can block users and apps from connecting to the domains and URLs that OpenClaw can use for malicious means. Subsequently, the OpenClaw interface running on a user's local device shows that it cannot communicate externally.

Similarly, Zscaler can prevent communication between OpenClaw and the URLs and ports that OpenAI uses for API communication with external apps and third-party clients. OpenAI offers various LLM models via its ChatGPT AI app. By specifying the URL Category "ai_ml_apps" in a Zscaler URL & Cloud App Control policy, all calls to api.openclaw.com and similar URLs that OpenClaw could seek to communicate with are blocked.

Control access to ClawHub, OpenClaw's "skills" repository

ClawHub is an open ecosystem that enables rapid innovation and customization of OpenClaw, but it also gives threat actors a means to distribute disruptive malware or other files that create security risk.
Zscaler empowers organizations to block access to ClawHub by using a URL & Cloud App Control policy that specifies the Generative AI category to block access to Clawhub.ai.

Prevent malicious file downloads, including the "skill" archive downloads for OpenClaw

Zscaler's Zero Trust Browser isolates users from potentially harmful content on the internet. It does this by loading the accessed web page in a virtualized remote browser in any one of 160+ Zscaler data centers across the globe and streaming the rendered content as only pixels to the user's native browser on the endpoint. Loading the OpenClaw website or ClawHub, the "skills" marketplace, can be done in isolation with the Zero Trust Browser, with the option to block file downloads from isolated websites: this ensures that any potentially harmful active content in a web page is blocked from reaching the endpoint, effectively sanitizing these websites and controlling how the user interacts with them. Zscaler customers can allow users to access generative AI apps but prevent any potentially harmful file downloads.
Below, the Zero Trust Browser displays a user notification confirming access to the OpenClaw website, but in read-only mode: text input is not allowed, nor is the download of skill archive files.

The proxy architecture that is foundational to the Zero Trust Exchange provides a powerful means of enforcing security policy consistently for all users in every location, no matter where they are in the world. This includes preventing malicious file downloads: when users attempt to download a malicious file using the OpenClaw agent, the Zscaler proxy intercepts and blocks the download. However, Zscaler customers can enable exceptions for generative AI downloads they deem necessary for their users, providing flexible and granular policy criteria so that legitimate files can still be downloaded.

In this screenshot from Zscaler's Web Insights reporting, we see that the eicar_com.zip file has been blocked from download because it is classified as malicious. As a result, the user sees an error message in the Telegram app stating that it cannot download the eicar_com.zip file, preventing exploitative action by a threat actor using OpenClaw to distribute malware.

Learn more about how Zscaler can help your organization provide secure access to the internet, apps, and workloads without compromising productivity: schedule a demo with our security professionals who can show you how to act fast and stay secure.]]></description>
            <dc:creator>Satish Madiraju (Sr. Director, Product Management)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Digital Sovereignty That Works in Practice: Local Control, Global Resilience]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/digital-sovereignty-works-practice-local-control-global-resilience</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/digital-sovereignty-works-practice-local-control-global-resilience</guid>
            <pubDate>Wed, 11 Mar 2026 11:17:28 GMT</pubDate>
            <description><![CDATA[Digital sovereignty has shifted from a policy aspiration to an operational requirement. For organizations around the world, including governments and international organizations, critical infrastructure operators, and regulated enterprises, questions like where security decisions are made, where transactions are processed, and where telemetry is stored now determine what technology can be deployed and how risk is managed. This trend will continue, and those requirements are becoming more specific as policies and regulations proliferate across regions.

At the same time, another truth hasn't changed: adversaries don't respect borders. Attacks traverse global infrastructure, supply chains, and third parties without regard for jurisdiction. The explosion of AI has only increased the volume and sophistication of these attacks. So public and private organizations are being asked to reconcile two needs at once:

- Keep sensitive data under local authority and within local jurisdictions.
- Maintain security effectiveness, performance, and uptime at global scale.

Too often, the market frames this as a trade-off. From my perspective as Chief Reliability Officer and global cloud builder, both are possible, not opposing forces, if architected correctly. Sovereignty only matters if it's enforceable in architecture and sustainable in operations, especially under stress. That's why we're expanding Zscaler's digital sovereignty capabilities globally, powered by the Zscaler Zero Trust Exchange™ platform, to help customers meet strict local requirements without sacrificing global reach, speed, security, or uptime.

What customers really mean when they say "sovereignty"

Sovereignty isn't a one-size-fits-all term.
Different countries, industries, and risk teams define it in similar but locally nuanced ways, and for many organizations it's best understood as a spectrum of requirements that varies by industry and evolves over time rather than a single one-dimensional checkbox. In practice, when customers come to us to operationalize sovereignty, the requirements usually center on practical, auditable control:

- Local authority over where users transact and where their policy is enforced.
- In-country handling of security data and telemetry, with assurances that content is not stored or shared.
- Clear separation of responsibilities and boundaries between regions.
- Proof, through independent validation and certifications, that the design matches the claim.
- Service continuity assurances: defined failover, recovery, and operational processes that preserve sovereignty during disruptions.
- Confidence that the service will remain predictable and available, not become fragile simply because it's "localized."

That last point matters more than people realize. If sovereignty is implemented in a way that introduces regional single points of failure or limits recovery options, it can increase operational risk. And customers don't have the luxury of choosing between compliance and continuity.

Residency is not the same as control

A common misconception is that sovereignty can be satisfied by simply keeping some data "in-country." Data residency is necessary, but it's just the beginning. Customers also need clear answers to questions like:

- Where is the control plane located and operated?
- Where are security decisions executed?
- Where are logs and telemetry stored and retained?
- When security services analyze content, does anything cross borders?
- Under outage conditions, what fails over, where, and under whose authority?

These are the questions that show up in procurement language, audit evidence requests, and business continuity planning.
They're also exactly why Zscaler was built from inception with a platform architecture that separates the control, data, and logging planes. That separation enables a decentralized model: customers can keep sensitive operations within a region while still benefiting from a cloud platform designed to operate globally at scale.

What we're expanding

With this announcement, we're expanding and unifying sovereignty and resilience capabilities on our AI-powered Zero Trust cloud platform. We already offer global and in-region services across markets such as the UK, the European Union, Switzerland, India, Singapore, Australia, and Japan. We're extending these capabilities further, including:

- Extending our dedicated European control plane.
- Introducing in-country data and logging services to new regions, including a forthcoming deployment in Canada.
- Continuing to invest in regional capacity and local operational support as sovereignty requirements evolve.

We're also deepening the controls customers need in practice, including:

- Keeping sensitive inspection in-country. With in-region malware analysis, customers can already choose to analyze suspicious content locally, reducing cross-border exposure and helping align inspection workflows with national handling requirements.
- Meeting mandates that require dedicated infrastructure. Private Service Edge options provide certified, single-tenant deployments (customer-hosted and Zscaler-managed), giving customers a path for environments that require specific hardware, accreditation, or isolated operations, without giving up a consistent Zero Trust architecture and seamless options to integrate with the global Zero Trust Exchange.
- Region-specific expertise to meet both the letter and the spirit of local requirements.
Dedicated technical expertise helps customers translate national regulations into practical policies and configurations, so data handling, logging, retention, and access controls match the intent of local requirements, not just the language.

Sovereignty isn't a one-time deployment. It's an ongoing capability that has to work across policy, architecture, operations, and validation.

Compliance is only credible when it's provable

Sovereignty requirements are enforced by audits, assessments, and certifications, not promises. Zscaler's approach is backed by rigorous third-party validation, including verification that the platform handles sensitive data securely, encrypting and decrypting traffic without writing data to disk, and supporting confidentiality for sensitive transactions. We also support the practical controls customers rely on to operationalize compliance, including:

- Customer-controlled keys, integrated with hardware security modules (HSMs), ensuring only authorized parties can decrypt traffic. This supports stricter separation-of-duties models (e.g., where the cloud provider operates the service but the customer retains cryptographic control), with clear audit evidence around key custody, access, and rotation.
- Our patent-pending "collect once, certify all" approach, designed to streamline compliance across major frameworks and regional standards. By designing controls and evidence collection to be reusable, customers can reduce duplicated audit work when they need to demonstrate alignment across multiple regimes (for example, national cloud requirements plus industry certifications).
- Flexible logging, including options for on-premises log servers to support strict regional mandates.
Customers can choose where logs are stored and who can access them, so telemetry can stay in-country (or on-premises) while still feeding the security operations workflows teams rely on for detection, investigations, and compliance reporting.

For customers, the goal is straightforward: faster time to compliance, fewer architectural compromises, and fewer exceptions that become tomorrow's risk.

Here's the reliability reality: sovereignty without resilience is a fragile promise, not fit for purpose for the modern enterprise. Leaders need confidence that sovereign configurations won't trade away availability. They need to know the platform won't become a single point of failure. They need continuity plans that work in practice, not just in diagrams and decks.

Zscaler owns and operates its cloud infrastructure, designed to withstand failures at multiple levels without turning a localized disruption into a widespread outage. For customers running essential services, that resiliency isn't a nice-to-have; it's the foundation of business continuity. That's why I often say: "The true measure of a security cloud isn't just performance on sunny days; it's resilience when storms hit."]]></description>
            <dc:creator>Misha Kuperman (Chief Reliability Officer &amp; GM)</dc:creator>
        </item>
        <item>
            <title><![CDATA[When the Unthinkable Happens: Maintaining Operational Resilience Amid Geopolitical Instability ]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/operational-resilience-amid-geopolitical-crises</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/operational-resilience-amid-geopolitical-crises</guid>
            <pubDate>Tue, 10 Mar 2026 02:09:51 GMT</pubDate>
            <description><![CDATA[Introduction

In the world of IT and cybersecurity, we often talk about "five nines" of availability and regional redundancy. But what happens when the "unthinkable" occurs? An AWS data center in the Middle East was hit by "objects"[1] on March 1st, 2026, a consequence of ongoing regional conflict, causing a regional blackout. Similarly, in September 2025[2], an undersea cable cut in the Red Sea caused a regional brownout event by disrupting internet access from Asia and the Middle East to European and North American destinations. These events highlight the vulnerability of modern internet infrastructure and cloud services, which are susceptible to service outages and performance issues, whether due to man-made or natural disasters.

In both cases, Zscaler's infrastructure was not targeted and remained mostly unaffected. However, our customers certainly felt the impact, and we worked intensively to support them, minimizing the disruption even though it was unrelated to the Zscaler environment.

Delivering high resiliency with the Zero Trust Exchange

The Zscaler Zero Trust Exchange is the industry's largest AI security platform, brokering more than 500 billion transactions daily across a global footprint of more than 160 locations. The platform delivers exceptional resilience, guaranteeing 99.999% availability and uninterrupted security and connectivity, even when individual data centers fail, networks become congested (brownouts), or entire regions go dark (blackouts).
Our globally distributed footprint, automated cloud operations, and built-in failure protections work together to maintain secure, low-latency access to the content and applications modern businesses need, for AI and machine workloads, users, and things, under any of these failure scenarios. Zscaler's cloud infrastructure is built with high resiliency to keep most backend system failures from impacting end users and our customers' operations. However, certain classes of failures, like blackouts, brownouts, and critical failures primarily affecting traffic flow via the Zero Trust Exchange, can still impact customers. Zscaler ensures we support our customers with tools to detect, mitigate, and recover from these impacts quickly.

Blackouts represent a complete failure of a data center or an entire data center region, like the incident that affected AWS customers in the UAE. Since Zscaler does not rely on that AWS region, it was unaffected. In the past, however, a blackout event during Hurricane Sandy affected our NYC facilities, and a total power outage at a partner colocation facility in London a few years back affected our customers in that region. Despite the severity implied by the term "blackout," Zscaler's monitoring capabilities quickly detected these situations, whether via a tunnel or a client connector. Crucially, Zscaler has built-in switchover mechanisms that ensured automatic recovery by failing over to an alternative data center in both instances. Thanks to Zscaler's rigorous capacity planning methodology, all data centers maintain sufficient service and network capacity headroom.
This proactive measure ensures that failovers are seamless and effectively prevents the risk of cascading failures.

Brownouts occur when Zscaler services are operating normally but the shared-responsibility area is impaired for some reason: the client premises, the network path between a client and Zscaler, or the path between Zscaler and a content provider. These disruptions can significantly impact the end user experience for some organizations, though not all, and stem from various causes, including physical events like subsea cable cuts (as recently seen in the Red Sea) or sabotage, SaaS provider outages, network congestion, and ISP failures.

Mitigating these brownouts often relies on third-party providers and is outside the direct control of Zscaler and the customer. To minimize the impact, Zscaler offers critical, customer-controlled features such as latency-based data center selection and network path optimizations, along with continuous investment in its core network underlay. In specific situations, however, manual intervention is required, necessitating a close partnership and shared responsibility between Zscaler and its customers to identify the root cause and implement mitigation strategies, for example, pinpointing alternative customer ISPs with superior interconnectivity to Zscaler's transit providers.

For Zscaler, proactive detection of performance degradation, whether caused by external entities such as service and cloud providers or otherwise, is fundamental to minimizing impact on the user experience. To illustrate the capabilities our operations teams have at their disposal, here is a dashboard that represents the impact observed during the September cable cut in the Red Sea. Our team promptly identified the root cause.
The cause was latency spikes between the Zscaler BOM6 data center in India and Azure regions in Europe, decisively ruling out any local connectivity issues to the DC or any Zscaler service issue. Subsequently, we were able to observe the individual impacted hops within the Microsoft network in the network-centric view.

Zscaler operations teams gain this unique hop-by-hop visibility, representing the platform experience from the user’s point of view, by leveraging millions of anonymized ZDX probes generated by the Zscaler Client Connectors across the globe.

Critical failures due to widespread cyberattacks and global DNS failures are much larger in scope than blackout or brownout incidents, as they cause global infrastructure failure, supply chain disruptions, and more. For example, a recent faulty security update from a leading security vendor crippled millions of endpoints and nearly halted thousands of businesses. This incident not only led to lost revenue but also compromised security defenses, making companies vulnerable to a surge of cyberattacks, including spoofed websites, impersonation scams, and malicious ZIP files. Such events demand operational and security resilience that goes beyond simple redundancy, requiring strict isolation, rapid failover, and segmentation to ensure continuous operations and security during widespread crises.

Zscaler Business Continuity Cloud for critical failures

The question to ask ourselves is: when the underlying cloud infrastructure or major third-party systems fail at a global scale, should we fail open, and does the security posture vanish with it? For Zscaler customers, the answer is a definitive no. Zscaler’s cloud services are already built with high resilience and disaster recovery capabilities, including controlling our fate at every level of the stack.
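To make the hop-by-hop localization idea concrete, here is a minimal sketch. It assumes each probe reports cumulative round-trip time per hop (as a traceroute does) and flags the first hop where average latency jumps past a threshold; the hop names and threshold are illustrative and not the actual ZDX data model.

```python
# Hypothetical sketch: aggregate per-hop RTTs across many probes and flag
# the first hop where average latency jumps, localizing a degraded segment.
def locate_degraded_hop(probe_paths, jump_ms=100):
    """probe_paths: list of paths, each a list of (hop_name, rtt_ms)."""
    n = min(len(path) for path in probe_paths)
    prev_avg = 0.0
    for i in range(n):
        name = probe_paths[0][i][0]
        avg = sum(path[i][1] for path in probe_paths) / len(probe_paths)
        # The first hop whose average exceeds the previous hop by jump_ms
        # is the likely location of the impairment.
        if avg - prev_avg > jump_ms:
            return name
        prev_avg = avg
    return None  # no single hop exceeds the jump threshold
```

Averaging across many independent probes is what separates a systemic impairment (like a cable cut) from one user's noisy Wi-Fi.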
Our Business Continuity Cloud provides an added layer, with customer-specific backup instances that are physically and logically isolated from the Zero Trust Exchange to maintain operations during critical and larger-scale disruptions. These events—such as global network outages, infrastructure failures due to cyberattacks, sabotage, or DNS failures—often require specific backup instances beyond the scope of standard service level agreements (SLAs).

Why this matters

In the current geopolitical and environmental climate, "hope" is not a business continuity strategy. The Zscaler Business Continuity Cloud offering provides four critical advantages:

Operational independence: Isolation from the primary Zero Trust Exchange cloud, providing the redundancy you need.
Security integrity: No "failing open"—your zero trust policies remain active even during a global infrastructure crisis.
Reduced RTO/RPO: Recovery time and point objectives are minimized because the "last known good" state is always ready for immediate failover.
Consistent end user experience: With a seamless failover from Zscaler Client Connector, users do not have to log in again when they access applications or the internet in business continuity mode.

Building a black-swan-proof enterprise

Regional blackouts, brownouts, and critical failures with global impact will happen, and true leadership requires preparing for the improbable and the unknown. Zscaler Business Continuity Cloud isn't just a feature; it’s an insurance policy for the digital age, when user experience and security posture must be maintained during events beyond the coverage of standard SLAs.
Leveraging Zscaler’s Business Continuity Cloud, you ensure that no matter what happens to the underlying service, your business—and your people—remain protected at all times. For more information, visit here.

Zscaler Resilience Audit

To ensure our customers are prepared for these failure scenarios while maintaining the appropriate security posture, Zscaler has developed a continuous framework for assessing the resilience of your Zscaler tenant and configuration maturity. This assessment, conducted by our Technical Success Managers on a periodic basis, also includes the posture of your customer-side configuration and infrastructure. The assessment takes into account multiple domains:

Operational Readiness
Blackout Readiness
Brownout Readiness
Business Continuity during Critical Failures

Please contact your account team to get a free assessment of the resilience of your ZIA & ZPA tenants.]]></description>
            <dc:creator>Misha Kuperman (Chief Reliability Officer &amp; GM)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Automating Data Governance: Strengthening Security with Zscaler DSPM and MPIP Integration]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/automating-data-governance-strengthening-security-zscaler-dspm-and-mpip</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/automating-data-governance-strengthening-security-zscaler-dspm-and-mpip</guid>
            <pubDate>Thu, 05 Mar 2026 18:00:23 GMT</pubDate>
            <description><![CDATA[In the modern enterprise, tracking business-critical data has moved beyond a simple administrative task—it has become a "superhuman" challenge. As data is generated, modified, and moved across sprawling multi-cloud environments and SaaS applications, maintaining visibility and control is increasingly difficult for even the most well-resourced security teams.

To manage this complexity, many organizations rely on data labeling. By classifying data at the point of creation, organizations help end users understand the sensitivity of the information they handle. Furthermore, labeling is no longer just a "best practice"; it is a core requirement of many global compliance frameworks that mandate the identification of critical business assets.

The Role of Microsoft Purview Information Protection

Most organizations center their labeling strategy on user-generated data residing in cloud or on-premises file shares. To do this, they leverage Microsoft Purview Information Protection (MPIP)—formerly known as Azure Information Protection (AIP)—to map sensitive data, control access, and trigger security settings like encryption. Because MPIP labels are stored as persistent metadata within the files themselves, the protection "travels" with the data. This allows security teams to use these labels as anchors for Data Loss Prevention (DLP) and Cloud Access Security Broker (CASB) policies, ensuring consistent enforcement regardless of where the file resides.

Bridging the Gap: Zscaler DSPM and MPIP Integration

While MPIP provides the framework for labeling, Zscaler Data Security Posture Management (DSPM) provides the global engine for discovery, classification, and validation. Zscaler DSPM continuously scans your data universe, from cloud and SaaS applications to on-premises data centers, to identify and catalog files.
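A classify-then-reconcile loop of the kind this integration performs can be sketched as follows. The label names and content patterns here are illustrative placeholders, not Zscaler's or Microsoft's actual classifiers.

```python
import re

# Hypothetical sketch: run content classifiers over a file and reconcile
# the result with its current MPIP label. Labels/patterns are illustrative.
CLASSIFIERS = [
    ("Highly Confidential", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),  # SSN-like
    ("Confidential", re.compile(r"(?i)\binternal only\b")),
]

def classify(content):
    for label, pattern in CLASSIFIERS:
        if pattern.search(content):
            return label
    return "Public"

def reconcile(content, current_label):
    expected = classify(content)
    if current_label is None:
        return ("apply", expected)      # unlabeled data: auto-apply a label
    if current_label != expected:
        return ("correct", expected)    # mislabeled file: fix the label
    return ("ok", current_label)        # label matches content
```

The key design point is that the content, not the existing label, is treated as the source of truth when the two disagree.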
With this integration, Zscaler DSPM now detects the MPIP labels associated with every file. Zscaler DSPM doesn't just read the label; it scans the content of the file using prebuilt and custom classifiers. By comparing the actual data against the existing label, Zscaler DSPM helps organizations:

Identify and correct mislabeled sensitive files.
Automatically apply MPIP labels to unlabeled sensitive data.
Validate labeling accuracy across the entire data estate.

This automated validation reduces the manual "toil" on IT and security operations teams while significantly hardening the organization’s overall security posture.

Key Benefits of the Zscaler DSPM MPIP Integration

1. Comprehensive Visibility and Historical Remediation
Traditional labeling often misses legacy data or "shadow data" created before strict policies were in place. Zscaler DSPM identifies sensitive data missing MPIP labels and allows you to apply classifications to both historical archives and newly created or modified data.

2. Cross-Cloud Labeling Enforcement
One of the primary challenges of MPIP is extending its logic beyond the Microsoft ecosystem. Zscaler DSPM bridges this gap by detecting and applying MPIP labels to files stored in non-Microsoft environments, such as Amazon S3 buckets. This helps ensure a unified classification standard across your entire multi-cloud strategy.

3. Optimized Business Context
Security labels are often siloed within IT departments and underutilized by security teams. Zscaler DSPM breaks these silos by correlating MPIP labels with other risk signals and data profiles. By seeing the actual content inside a labeled file, security teams can demystify labeling schemes and ensure they align with specific business objectives.

4. Unified Policy Management and "Label-Driven" Security
To prevent policy drift, Zscaler allows you to use sensitivity labels as automated policy triggers.
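Label-driven policy can be pictured as a simple mapping from sensitivity label to enforcement actions. This is a minimal sketch under assumed label and action names; it is not Zscaler's policy API.

```python
# Hypothetical sketch: the sensitivity label, not a per-app rule, selects
# the enforcement actions. Label and action names are illustrative only.
LABEL_POLICIES = {
    "Highly Confidential": {"encrypt": True, "block_exfiltration": True},
    "Confidential": {"encrypt": True, "block_exfiltration": False},
    "Public": {"encrypt": False, "block_exfiltration": False},
}

def policy_for(label):
    # Unknown or missing labels fail closed to the strictest policy.
    return LABEL_POLICIES.get(label, LABEL_POLICIES["Highly Confidential"])
```

Failing closed on unrecognized labels is the conservative choice when labels drive enforcement.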
This ensures that a label of "Highly Confidential" automatically invokes encryption or restricts exfiltration in high-risk scenarios. Making MPIP labels the "source of truth" for Zscaler security policies helps create a seamless enforcement experience for both admins and end users.

5. Simplified Regulatory Compliance
For organizations navigating the complexities of GDPR, HIPAA, or PCI DSS, this integration provides a robust technical control. It streamlines the labeling of business-critical data, providing a clear, automated audit trail ready for internal auditors and external regulators alike.

Conclusion

The integration of Zscaler DSPM and MPIP represents a shift from passive monitoring to active, automated enforcement. By ensuring your data is correctly classified and protected everywhere it travels, you can finally close the "enforcement gap" and reduce the risk of high-impact data breaches.

Ready to see Zscaler DSPM in action? While the MPIP integration is a powerful component of our platform, Zscaler’s DSPM solution offers even deeper capabilities for risk reduction and data discovery. A picture is worth a thousand words—schedule a session with one of our experts to see how we can secure your data estate.]]></description>
            <dc:creator>Mahesh Nawale (Product Marketing Manager)</dc:creator>
        </item>
        <item>
            <title><![CDATA[States, Municipalities, and AI: How to Secure GenAI in Government]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/states-municipalities-and-ai-how-secure-genai-government</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/states-municipalities-and-ai-how-secure-genai-government</guid>
            <pubDate>Mon, 23 Feb 2026 15:58:59 GMT</pubDate>
            <description><![CDATA[As generative AI (GenAI) promises new capability and efficiency, while at the same time raising concerns about uncontrolled use, state and local governments across the U.S. are considering adoption through a lens of both opportunity and risk. A security-first approach, paired with enforceable technical controls, helps agencies adopt GenAI with confidence while reducing operational, legal, and data-loss risk in a dynamic, fast-moving environment. In practice, three fundamentals consistently separate secure deployments from risky experimentation: visibility, guardrails, and continuous validation (including red teaming).

For security leaders, the challenge isn’t whether GenAI will be used—it’s whether it will be used with visibility, enforceable controls, and audit-ready accountability. Before selecting tools or drafting policy, it helps to anchor on the failure modes agencies are already seeing as GenAI use expands.

Key Issues Governments Are Facing

State security teams are flagging several common issues, many of which align with themes reported in Zscaler's ThreatLabz 2026 AI Security Report. Taken together, they highlight where unmanaged GenAI adoption most often collides with existing privacy, security, and oversight requirements.

Data privacy & protection: Collection, usage, retention, and exposure of personal/sensitive data
Government use of AI: Limitations, human oversight, review, and accountability
Transparency: Notifying when AI is used, who is responsible, and providing oversight
Unauthorized “digital replicas”: Creation or use of voice, image, or likeness without authorization

These issues tend to surface first as “shadow AI” usage—teams adopting public GenAI tools faster than security can standardize access, logging, and data protections.
Without guardrails, GenAI becomes a new pathway for sensitive-data exposure, policy violations, and operational risk at scale.

Why States Need Strong GenAI Controls

For state and local governments, addressing GenAI security helps reduce risk across cost, mission, and trust. It also creates the foundation to enable approved GenAI use cases without forcing teams into unsafe workarounds. The risks include:

Financial risk
Citizen data leakage, misuse, or inadvertent exposure
Loss of public trust
Legal liability
Reputational damage

The practical question is how to translate these risks into controls that can be deployed and measured. Most state security teams prioritize capabilities that (1) establish AI usage and data visibility, (2) reduce the likelihood of data loss or unsafe outputs, and (3) support forensics, oversight, and reporting.

How Zscaler’s Capabilities Map to State Needs

Below are the capabilities that Zscaler offers through its GenAI protection/data protection suite. The goal is to operationalize GenAI security using familiar control categories – discovery, data protection, access control, and audit – so agencies can implement quickly and measure impact. The mapping below is organized the way many security programs implement GenAI controls: start with discovery and classification, then add guardrails and least privilege, and finally operationalize with monitoring, remediation, and compliance reporting.

AI/Data Visibility & Discovery / Classification (Zscaler AI-SPM, DSPM, etc.) – What it does: Automatically discover and classify datasets, models, vectors, and AI services (managed and unmanaged) to understand what data is in use and where exposure might exist. How it helps: Shows where “high-risk” data is used; supports risk assessments; improves transparency and reporting.

Prompt / Input / Output Monitoring & Guardrails – What it does: Inspect, classify, and block inputs/prompts that violate policy; control outputs; help prevent PII exposure or data exfiltration through GenAI workflows. How it helps: Helps prevent misuse (e.g., disallowed content); supports guardrails when GenAI is used for communications or decisions that require controls.

Browser/Session Isolation & Data Leakage Prevention (DLP) – What it does: Isolate GenAI applications so risky actions (cut/paste, upload/download) can be controlled; enforce DLP across AI interactions. How it helps: Helps protect sensitive or regulated data (e.g., identity, health, financial) from leaking through GenAI channels, safeguarding citizen privacy.

Least Privilege / Entitlement Control – What it does: Minimize which users/roles can access which AI services or data; revoke overprivileged rights; restrict high-risk app usage. How it helps: Reduces attack surface and limits misuse; supports protection of regulated data and critical systems.

Audit Trails, Logging, & Reporting – What it does: Maintain logs of AI usage: who submitted which prompt, when, and what response was returned; capture system/model interaction metadata. How it helps: Supports transparency, accountability, oversight, and audit/readiness reporting.

Policy Enforcement / Guided Remediation – What it does: Identify misconfigurations and data exposure; provide remediation guidance and real-time alerts. How it helps: Enables continuous monitoring and correction; supports risk assessments, internal controls, and prevention of configuration drift.

Framework Alignment – What it does: Map controls to frameworks (e.g., NIST AI RMF, HIPAA where applicable) via compliance modules and reporting. How it helps: Helps demonstrate alignment to best practices and applicable frameworks.

Practical Steps State Entities Should Consider

Here are suggestions for how state agencies/entities can build (or upgrade) their GenAI security program to prepare for rapid advancement.
These steps are intended to fit into existing security operations—policy, identity, data protection, and monitoring—rather than creating a separate “AI-only” track.

1. Inventory AI Use: Identify all GenAI tools in use (chatbots, assistants, third-party tools, open tools); identify what data is being used or referenced, where it’s stored, and how it’s accessed.
2. Data Classification & Sensitivity Mapping: Define categories of data sensitivity (PII, health, financial, etc.); map which AI services have access to sensitive data.
3. Define Clear Policies & Guardrails: Set policies around who can use GenAI and for what purposes, with prohibitions consistent with agreed-upon use (including data handling and disclosure).
4. Implement Technical Controls: Prompt/input filters, DLP blocking, browser/session isolation; entitlement/restriction controls; logging/auditing.
5. Continuous Monitoring & Risk Assessment: Monitor for misuse and privacy violations; periodically assess risk and compliance.
6. Training & Awareness: Ensure staff understand which GenAI tools are allowed and what data they can/can’t use; reinforce awareness of legal and regulatory obligations.
7. Governance & Oversight: Assign a responsible party/team (e.g., a state CIO/CISO or AI Oversight Board); embed human review/oversight for higher-risk use cases (e.g., decisions affecting citizens).

Capabilities only reduce risk when they’re implemented as part of a repeatable program. The steps above provide a security-team-friendly sequence that can plug into existing IRM/GRC, data protection, and zero trust initiatives.

How Zscaler Supports States

Zscaler’s GenAI protection and data security portfolio offers a toolkit that aligns well with the current environment.
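The prompt/input filtering control described above can be sketched as a small gate in front of a GenAI tool. The PII patterns and policy names here are illustrative assumptions, not any vendor's actual detection rules.

```python
import re

# Hypothetical guardrail sketch: detect PII-like patterns in a prompt and
# block or redact it before it reaches a GenAI tool. Patterns are examples.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def inspect_prompt(prompt, mode="block"):
    hits = [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]
    if not hits:
        return ("allow", prompt)
    if mode == "block":
        return ("block", hits)          # refuse the prompt, report what matched
    redacted = prompt
    for name in hits:
        redacted = PII_PATTERNS[name].sub(f"[{name.upper()}]", redacted)
    return ("redact", redacted)         # strip PII, let the prompt through
```

Whether to block outright or redact and forward is itself a policy decision; logging the match categories (not the raw PII) supports the audit-trail step without creating a new sensitive-data store.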
In practice, many agencies start by using these capabilities to define “approved GenAI usage” (tools, users, data types), then expand into continuous monitoring and audit support as adoption scales.

Pre-Deployment Risk Assessment: Before deploying a GenAI model or enabling a GenAI tool for public-facing use, use Zscaler’s AI-SPM (Service & Posture Management) to discover what data and models are involved, classify their risk, test policy violations, and understand exposure.
Implementing Transparency/Disclosure Controls: Use logging and audit trail features to capture prompts, response metadata, and user activity—supporting oversight, disclosure obligations, and responses to legal requests.
Restricting/Blocking Sensitive Data Exposure: Use DLP integration, prompt filtering, and browser/session isolation to block high-risk actions (e.g., uploading sensitive documents, copying/pasting PII) when interacting with GenAI tools.
Enforcing Use Policies (Entitlements, Privileges): Allow only approved roles to access external GenAI apps; enforce least privilege; quarantine or block risky apps/services until controls are validated.
Monitoring & Remediation: Use guided remediation to address misconfigurations (e.g., over-entitled roles, open access to datasets, insecure storage); trigger alerts when policy thresholds are crossed.
Compliance Reporting & Audit Support: Generate reports on AI usage, data access, and incidents to support oversight and respond to inquiries, litigation, or citizen complaints.

With a baseline program in place, agencies can phase implementation—often starting with discovery and DLP coverage for GenAI, then expanding into entitlement controls, isolation for higher-risk use cases, and centralized logging/reporting for oversight.

Conclusion

Generative AI is reshaping how government works. Alongside opportunity, it also brings real legal, ethical, and operational risks—especially as adoption accelerates.
States and municipalities bear responsibility in uncharted territory, and the time is now to put in place strong controls that increase resilience while maximizing the benefits of GenAI.Tools like those from Zscaler (AI-SPM, DLP for GenAI, prompt monitoring and filtering, isolation, audit trails, etc.) provide technical building blocks needed for secure adoption. Combined with strong policy, oversight, and continuous risk assessment, state and local governments can harness the power of GenAI while protecting citizens, supporting compliance, and reducing legal exposure.]]></description>
            <dc:creator>Fred Green (Zscaler)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Leveraging Zero Trust for More Accurate Exposure Prioritization]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/leveraging-zero-trust-more-accurate-exposure-prioritization</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/leveraging-zero-trust-more-accurate-exposure-prioritization</guid>
            <pubDate>Mon, 23 Feb 2026 15:11:59 GMT</pubDate>
            <description><![CDATA[Vulnerability management is often compared to “searching for needles in a haystack” because a small group of findings creates the greatest risk as potential gateways for attackers. It’s no secret that the haystack keeps getting larger–it’s now more like a hundred-acre field. There were nearly 50,000 CVEs published last year, and Recorded Future reports that 42% of CVEs disclosed in the first half of 2025 had a public proof-of-concept exploit. Enterprise security teams invest in upwards of 45 different tools to monitor risk across an increasingly complex attack surface, often producing hundreds of thousands of findings.

The good news? Attackers can do no significant harm with the vast majority of those findings. The bad news? Finding the handful that matter gets harder every day.

Organizations use many tactics to identify what’s “risky,” including threat intelligence feeds, asset criticality, adversary behavior tracking, and applying unique business context to influence prioritization. Your teams can (and should) apply as many risk signals as are available. An equally effective prioritization factor – or deprioritization, if you will – is to account for compensating controls that are already in place. That's exactly what Zscaler does by integrating context from our Zero Trust Exchange: our research identifies which vulnerabilities are mitigated by your zero trust policies, and we apply that context so you know where to focus instead. Let’s take a look at how Zscaler can help focus your efforts.

Deprioritize CVEs Mitigated by ZIA and ZPA

One of the most effective policy engines for mitigating vulnerabilities is your zero trust program. Very few security teams automatically apply these mitigations to prioritization scoring.
In other words, despite the absence of a pathway for an individual vulnerability to be exploited, security teams spend valuable cross-functional resources deploying patches or system upgrades that are actually unnecessary, simply in response to a “critical” finding from a vulnerability scanner. It’s a textbook example of a “false critical” – teams simply have too many real issues to fix and too little time to waste resources on remediations that don’t impact risk.

Zscaler Exposure Management customers often see up to 80% reduction in “false critical” findings by applying context from any data source in their environment. One such source is ThreatLabz – a research organization within Zscaler that focuses on identifying and analyzing emerging threats, vulnerabilities, and attack techniques. The ThreatLabz team maintains a database of CVEs with information on how they're mitigated by different Zscaler products, including Zscaler Internet Access (ZIA) and Zscaler Private Access (ZPA). Many Zscaler customers see a significant reduction in findings truly deemed critical because of the vulnerabilities proactively mitigated by zero trust policies. Let’s look at an example.

<div> <script async src="https://js.storylane.io/js/v2/storylane.js"></script> <div class="sl-embed" style="position:relative;padding-bottom:calc(50.26% + 25px);width:100%;height:0;transform:scale(1)">   <iframe loading="lazy" class="sl-demo" src="https://app.storylane.io/demo/cpf18xux96sd?embed=inline" name="sl-embed" allow="fullscreen" allowfullscreen style="position:absolute;top:0;left:0;width:100%!important;height:100%!important;border:1px solid rgba(63,95,172,0.35);box-shadow: 0px 0px 18px rgba(26, 19, 72, 0.15);border-radius:10px;box-sizing:border-box;"></iframe> </div></div>
Focus on what’s risky in YOUR environment

Just because a vulnerability is known to be exploited in the wild doesn’t always mean it poses a critical risk in your environment. Consider the example of CVE-2021-44228, a CISA KEV most commonly known as Log4Shell. ZIA’s Intrusion Prevention System (IPS) mitigates this particular vulnerability, as detailed in the ThreatLabz Threat Library. Most vulnerability assessment tools would score this finding as critical, and with good reason: exploitation can result in Remote Code Execution. But Zscaler Unified Vulnerability Management (UVM) has automatically reduced the severity to a “medium” 4.7, recognizing the presence of a mitigating control in the form of ZIA.

UVM has logged the original CVSS score of 10 and the “original severity score” from the scanning tool, also a 10. But UVM goes on to create a contextual, risk-adjusted score – let’s drill deeper into the explanation of that score: all the tools in the environment report the finding as critical, but the vulnerability is fully mitigated by ZIA, taking it off the critical list entirely. In fact, the integrated ThreatLabz data has determined that all five findings associated with this ticket are mitigated by ZIA or ZPA policies, so the severity score has been automatically adjusted from 10 down to 4.7.

Most exposure management programs would fail to recognize the presence of mitigating controls. The ticket would be prioritized as a critical, and organizations would spend security and IT resources fixing a problem that poses no significant risk.
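The contextual adjustment described above can be sketched in a few lines. The scaling factor is chosen only to reproduce the 10-to-4.7 example; Zscaler's actual scoring model is not public, so treat this purely as an illustration of the idea, not the real algorithm.

```python
# Hypothetical sketch: when every finding on a ticket is covered by an
# inline mitigating control (e.g., ZIA IPS), scale the severity score
# below the critical threshold; otherwise leave the scanner score alone.
def risk_adjusted_score(base_score, findings):
    """findings: list of dicts, each with a boolean 'mitigated' flag."""
    if findings and all(f["mitigated"] for f in findings):
        # Fully mitigated: drop the ticket well below "critical".
        return round(base_score * 0.47, 1)
    return base_score
```

Note the conservative rule: a single unmitigated finding keeps the ticket at its original severity.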
By adjusting the severity score automatically, UVM keeps teams focused on the work that matters: the fixes that actually reduce risk.

Maximize the value of the tools you already have

Integrating ThreatLabz research and Zscaler Client Connector (ZCC) data into your exposure management program adds valuable context to help your security team focus on truly critical vulnerabilities in your specific environment. Zscaler customers have a wealth of data and telemetry in their existing deployments that can turbocharge exposure prioritization and risk mitigation, but benefiting from all that context requires an exposure management solution capable of assimilating that data.

Tool sprawl is often associated with complexity in exposure management: dozens of siloed tools produce risk signals, none of which work together, all contributing to the flood of data that prevents security teams from quickly identifying truly critical risk. Zscaler helps you channel the power of all those currently siloed tools and use the breadth of their insights to your advantage. By combining context from vulnerability scanners, cloud security tools, data security tools, identity and access management, IoT/OT security tools, threat intelligence feeds, and anything else with relevant data, organizations can use the rich context of risk signals and mitigating controls in place to discern which findings truly represent risk. The haystack shrinks, even as the quantity of assets and findings grows larger.

Evolve to a holistic exposure management program with Zscaler

You may be closer than you think to building a holistic exposure management engine that helps your security team pull the needles from the haystack.
Your investments in vulnerability scanning and cyber risk assessment tools can work together with Zscaler Exposure Management, and your zero trust policy engine serves as a great foundation for inline controls and mitigation. With Zscaler Exposure Management, organizations can harness the power of contextual data and risk signals across the environment to deliver:

Complete visibility of assets in a risk-based inventory
Prioritized exposure findings, unified from every source
Accelerated remediation leveraging your existing tools and workflows

Request a demo to see how your Zscaler products and existing security investments can come together to deliver better exposure management.]]></description>
            <dc:creator>Chris McManus (Senior Product Marketing Manager)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Future-Proof Your Security with the First Quantum-Ready Security Service Edge (SSE)]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/future-proof-security-first-quantum-ready-security-service-edge-sse</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/future-proof-security-first-quantum-ready-security-service-edge-sse</guid>
            <pubDate>Tue, 17 Feb 2026 09:00:00 GMT</pubDate>
            <description><![CDATA[Zscaler has already made significant investments in providing customers with post-quantum cryptography (PQC) visibility and logging capabilities—and now we’re building on that foundation to ensure our customers can realize true crypto-agility. That's why today we are thrilled to announce that the leading Security Service Edge (SSE) is now quantum-ready: Zscaler Internet Access inline inspection now supports hybrid PQC key exchange. This first-to-market capability allows your organization to decrypt and inspect quantum-encrypted traffic at scale, enforce your security policies, and defend against the emerging quantum threat landscape. With Zscaler’s proxy architecture, our new PQC key exchange capability also protects customers from “harvest now, decrypt later” (HNDL) attacks, even at the last mile if an application server does not yet support PQC.

Additionally, with this launch we can now secure customers’ IPsec VPN tunnels with post-quantum pre-shared keys (PPK), securely connecting our customers’ PPK-ready endpoints to Zscaler. PPKs are an additional secret that both peers already share; mixing this secret into the IKE key derivation results in IPsec keys that remain secure even if the ephemeral Diffie-Hellman (DHE/ECDHE) exchange is later broken by a quantum computer. In other words, it’s a post-quantum risk-mitigation mode for IPsec that doesn’t require full PQC algorithms in the key exchange.

Why Hybrid PQC Key Exchange Matters

During the transition from classical to quantum-resilient encryption, hybrid PQC key exchange will act as a vital safety net. By combining a proven classical algorithm with a new quantum-resistant one, hybrid key exchange ensures that encrypted traffic remains secure even if one of the algorithms is compromised.
This dual-layered approach provides robust protection against both current threats and the future risk of a quantum computer breaking today's standard encryption. Hybrid PQC key exchange is also foundational to addressing several core customer challenges in a quantum world:

Defending Against Quantum Threats: With HNDL attacks already a viable threat, protecting data in transit is paramount. Our new capabilities that utilize hybrid key exchange mitigate the HNDL threat by making it extremely difficult for attackers to later decrypt harvested data.
Meeting Compliance Mandates: Governments are mandating PQC adoption to protect critical infrastructure and data. Zscaler enables you to get ahead of these requirements and prove compliance with detailed reporting on quantum cipher usage across your environment.
Bolstering Business Continuity: The crypto-transition is a predictable, high-impact event. A proactive strategy leveraging Zscaler’s hybrid key exchange prevents the disruption, loss of trust, and compliance failures that a reactive approach would cause.

Zscaler now provides real-time, deep inspection of PQC traffic, leveraging the NIST-standardized ML-KEM (FIPS 203) algorithm for post-quantum key exchange. Just as we do for classical encryption, Zscaler unlocks complete visibility and protection for PQC sessions, all without impacting performance. Our implementation of hybrid PQC key exchange complies with the draft-ietf-tls-ecdhe-mlkem proposed standard and is fully compatible with Chrome, Firefox, Safari, and other widely deployed clients as well as servers.

The Zscaler Zero Trust Exchange sits inline, and our cloud-native inspection engine seamlessly decrypts traffic, scans it and enforces security policy, and re-encrypts it before sending it on to its destination.
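To make the hybrid negotiation concrete, here is a small sketch that checks which hybrid PQC groups a TLS ClientHello offers. The codepoints below are the values I understand draft-ietf-tls-ecdhe-mlkem to use at the time of writing; verify them against the current IANA TLS Supported Groups registry before relying on this.

```python
# Hypothetical sketch: detect hybrid PQC key exchange groups among the
# codepoints a ClientHello advertises. Codepoints are assumptions to be
# checked against the IANA TLS Supported Groups registry.
HYBRID_PQC_GROUPS = {
    0x11EB: "SecP256r1MLKEM768",
    0x11EC: "X25519MLKEM768",
    0x11ED: "SecP384r1MLKEM1024",
}

def hybrid_pqc_offered(supported_groups):
    """supported_groups: iterable of 16-bit codepoints from the ClientHello."""
    return [HYBRID_PQC_GROUPS[g] for g in supported_groups
            if g in HYBRID_PQC_GROUPS]
```

For example, an offer containing 0x11EC alongside classical x25519 (0x001D) would report one hybrid group, which is exactly the client-side PQC signal an inspecting proxy can log.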
Here’s how our quantum-ready inspection process works:

Zscaler checks the TLS ClientHello message from the client: If the client indicates TLS 1.3 support and includes a hybrid PQC key exchange in its proposal, Zscaler Internet Access uses TLS 1.3 with a supported hybrid PQC key exchange group. This step is independent of server capabilities and allows PQC to be used between the client and ZIA even if the server does not support it. The negotiated TLS version and selected key exchange group are always logged, giving administrators valuable information about PQC support on the client side. Those same insights can help security and IT teams prioritize upgrading software that is not PQC-ready.

Zscaler sends a TLS ClientHello to the server on behalf of the client: In this ClientHello message, it indicates support for TLS 1.3 and includes all standard hybrid PQC key exchange methods in the offer. In the TLS protocol, it is up to the server to choose from the supported list of key exchange algorithms. Zscaler Internet Access logs the selected TLS version and cryptographic parameters for each session, allowing administrators to understand their security posture and work with service providers to adopt PQC capabilities.

Zscaler performs traffic inspection and applies security policies: All threat prevention, DLP, and access control policies are applied transparently to the client and server without any changes to current policy configurations. This means Zscaler provides the same industry-leading threat detection and prevention for PQC sessions that it has applied to non-PQC traffic for years.

New Capabilities to Secure Your Quantum Journey

This launch delivers two major innovations for the Zscaler platform:

SSL/TLS Inspection with ML-KEM: Perform full decryption and deep content inspection on traffic flows established using hybrid PQC key exchange.
We automatically detect and negotiate TLS groups, applying all your existing security policies without any configuration changes or impact on user experience.

IPsec with Post-Quantum Pre-shared Keys (PPKs): Secure your branch office and data center connections with future-proof VPN forwarding to Zscaler. By mixing a pre-shared key into the IKE key derivation, the resulting IPsec keys remain secure even if the Diffie-Hellman exchange is later broken by a quantum computer. This provides a practical, quantum-resistant upgrade for IPsec that can be deployed today.

Begin the PQC Transition Journey Now

The shift to post-quantum cryptography is perhaps one of the defining security challenges of our time. With Zscaler, you can move from a reactive posture to a proactive one. Gain the visibility you need to stop threats hiding in PQC traffic, fortify your defenses against future decryption attacks, and meet emerging compliance mandates head-on.

The members of our partner ecosystem will also play an important role in helping customers along their journey to quantum readiness. Zscaler will work with partners including Ernst & Young and HCLTech to do just that:

"We are thrilled to announce a strategic expansion of our partnership with EY, focused on delivering advanced Post-Quantum Cryptography (PQC) visibility through real-time crypto inventory capabilities. By leveraging Zscaler as the primary data source for cryptographic discovery, EY clients can now gain the comprehensive insights necessary to drive informed PQC migration and future-proof decision-making. This critical data allows EY’s expert consultants to help organizations develop robust, long-term security strategies tailored to their unique risk profiles.
Together, we are simplifying the complex path to quantum safety and ensuring EY's clients remain resilient against emerging threats."
— Adam Berman, Global Alliances Director, Zscaler

“Post-Quantum Cryptography is becoming a strategic priority for enterprises committed to digital trust and total resilience. Through our collaboration with Zscaler, HCLTech is helping organizations accelerate crypto discovery, strengthen crypto-agility and secure communications against emerging quantum threats. Together, we are enabling ZIA customers to transition confidently to a quantum-safe future while meeting evolving compliance and regulatory expectations.”
— Prikshit Goel, VP and Global Practice Head, Cybersecurity, HCLTech

Ready to future-proof your security? Learn more about preparing for the quantum future: watch our launch event webinar, where our product experts walk you through our PQC inline inspection capabilities and how we can help your organization prepare for the quantum era.]]></description>
            <dc:creator>Brendon Macaraeg (Sr. Product Marketing Manager)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Demystifying Key Exchange: From Classical Elliptic Curve Cryptography to a Post-Quantum Future]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/demystifying-key-exchange-post-quantum-pqc</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/demystifying-key-exchange-post-quantum-pqc</guid>
            <pubDate>Thu, 12 Feb 2026 22:54:58 GMT</pubDate>
            <description><![CDATA[In the digital world, the secure exchange of cryptographic keys is the foundation upon which all private communication is built. It’s the initial, critical handshake that allows two parties, like a user’s browser and a web server, to establish a shared secret and communicate securely over the untrusted expanse of the internet.

As the quantum computing era approaches, the very mathematics underpinning our traditional key exchange mechanisms is facing an existential threat. This has spurred the development of new, quantum-resistant algorithms. This blog post provides a deep dive into how modern key exchange works, from trusted classical methods to the emerging post-quantum standards, and explores how Zscaler leverages hybrid key exchange to bridge the gap.

The Key Components of Modern Key Exchange

At a high level, a secure key exchange protocol must achieve the following:

Confidentiality: The established key must be a secret shared only between the two communicating parties. An eavesdropper should not be able to determine the key.

Authentication: In many cases (as with TLS), the parties must be able to verify each other's identity to prevent man-in-the-middle attacks. This is typically handled by digital certificates and is complementary to the key exchange itself.

Forward Secrecy: The compromise of a long-term secret (like a server's private key) should not compromise the security of past session keys. This ensures that previously recorded encrypted traffic cannot be decrypted.

Classical Key Exchange: The Reign of ECDHE

For the better part of a decade, the gold standard for key exchange on the web has been Elliptic Curve Diffie-Hellman Ephemeral (ECDHE).
It is a cornerstone of Transport Layer Security (TLS) and is responsible for securing trillions of connections daily.

How Key Exchange Works

The Foundation: Elliptic Curve Cryptography (ECC): Instead of using very large prime numbers like traditional Diffie-Hellman, ECDHE uses the mathematical properties of elliptic curves. ECC offers the same level of security as older methods but with significantly smaller key sizes, making it faster and more efficient—a crucial advantage for mobile and IoT devices.

The Handshake: Both the client and the server agree on a common elliptic curve and a starting point on that curve (the "generator").

The "Ephemeral" Nature: This is where forward secrecy comes from. For each new session, both the client and server generate a new, temporary (ephemeral) key pair consisting of a private key (a random number) and a public key (a point on the curve).

The Exchange: The client and server exchange their public keys.

The Shared Secret: Each party then uses its *own* private key and the *other* party's public key to perform a calculation. Due to the magic of elliptic curve mathematics, both the client and the server independently arrive at the exact same point on the curve—this becomes their shared secret.

Session Encryption: This shared secret is then used to derive the symmetric encryption keys that will encrypt all data for the remainder of the session.

Even if an attacker were to steal the server's long-term private key years later, they could not use it to derive the ephemeral session keys from past traffic.

The Quantum Threat and Post-Quantum Key Exchange: ML-KEM

The security of ECDHE relies on the difficulty of the "elliptic curve discrete logarithm problem." For a classical computer, this is an incredibly hard problem to solve.
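The handshake steps above can be sketched in a few lines of code. For readability this didactic example uses classical finite-field Diffie-Hellman (with the well-known RFC 3526 1536-bit MODP group) rather than an elliptic curve; the principle is identical: each side combines its own ephemeral private key with the peer's public value and arrives at the same shared secret.

```python
import hashlib
import secrets

# Didactic finite-field Diffie-Hellman (parameters: RFC 3526, 1536-bit
# MODP group). Real TLS uses elliptic curves (ECDHE), but the flow of
# the handshake is the same.
p = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
    "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
    "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
    "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
    "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
    "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
    "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
    "670C354E4ABC9804F1746C08CA237327FFFFFFFFFFFFFFFF",
    16,
)
g = 2  # the agreed "generator"

# Ephemeral private keys: fresh random values for every session
client_priv = secrets.randbelow(p - 2) + 1
server_priv = secrets.randbelow(p - 2) + 1

# Public values are what actually cross the wire
client_pub = pow(g, client_priv, p)
server_pub = pow(g, server_priv, p)

# Each side independently computes the same shared secret
client_secret = pow(server_pub, client_priv, p)
server_secret = pow(client_pub, server_priv, p)
assert client_secret == server_secret

# Derive a symmetric session key from the shared secret
session_key = hashlib.sha256(
    client_secret.to_bytes((p.bit_length() + 7) // 8, "big")
).digest()
```

Everything an eavesdropper sees is the public values; recovering a private key from them is the discrete logarithm problem.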
But for a sufficiently powerful quantum computer, Shor's algorithm makes it tractable: the same algorithm that efficiently factors large integers also solves discrete logarithm problems, including those on elliptic curves.

This has led to a new field of cryptography: Post-Quantum Cryptography (PQC). The goal is to create algorithms that are secure against attacks from both classical and quantum computers.

After a multi-year competition, the U.S. National Institute of Standards and Technology (NIST) selected a suite of algorithms for standardization. For key exchange, the primary choice is the Module-Lattice-Based Key-Encapsulation Mechanism (ML-KEM), formerly known as CRYSTALS-Kyber.

How it Works as a Key Encapsulation Mechanism (KEM):

Unlike the interactive exchange in Diffie-Hellman, a KEM works slightly differently:

The server generates a public and private key pair based on the mathematical difficulty of problems in structured objects called lattices.

The server sends its public key to the client.

The client uses the server's public key to generate two things: a shared secret and a "ciphertext" that encapsulates (or wraps) that secret.

The client sends this encapsulating ciphertext back to the server.

The server uses its private key to "decapsulate" the ciphertext, revealing the exact same shared secret that the client generated.

Now both parties have the secret, and an eavesdropper, even one with a quantum computer, cannot solve the underlying lattice math to discover it.

The Real World: Hybrid Key Exchange (ECDHE + ML-KEM)

We are in a transitional period. While powerful quantum computers are not yet widely available, the threat of "harvest now, decrypt later" is very real: adversaries can record sensitive encrypted data today and store it, waiting for the day they have access to a quantum computer to break it.

To counter this, the industry is moving towards a hybrid approach.
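The hybrid construction itself is easy to sketch: derive one secret from each mechanism, concatenate them, and feed the result into the key schedule. The snippet below is a didactic illustration using stand-in random bytes for the two secrets and a deliberately simplified, single-block HKDF (per RFC 5869); it is not any vendor's production key schedule.

```python
import hashlib
import hmac
import secrets

# Stand-ins for the two independently derived shared secrets.
# In a real handshake these come from ECDHE and ML-KEM respectively.
ecdhe_secret = secrets.token_bytes(32)
mlkem_secret = secrets.token_bytes(32)

def hkdf_extract_expand(ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Simplified one-block HKDF-SHA256 (RFC 5869) for illustration."""
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()    # extract
    okm = hmac.new(prk, info + b"\x01", hashlib.sha256).digest()  # expand
    return okm[:length]

# Hybrid: concatenate both secrets so the derived key depends on each one
master_secret = hkdf_extract_expand(ecdhe_secret + mlkem_secret, b"hybrid")

# Recovering master_secret requires BOTH inputs: substituting either
# secret yields a completely different output.
assert master_secret != hkdf_extract_expand(
    secrets.token_bytes(32) + mlkem_secret, b"hybrid"
)
```

Because the concatenated input feeds a one-way key derivation, an attacker who breaks only one of the two algorithms still cannot reconstruct the master secret.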
Zscaler has implemented this by combining the battle-tested classical algorithm with a next-generation post-quantum one.

How Zscaler's Hybrid Implementation Works:

Zscaler’s Zero Trust Exchange acts as an intelligent switchboard for connections. When a client initiates a TLS connection, it sends a "ClientHello" message advertising its capabilities.

Dual Key Generation: In a hybrid key exchange, the client and server perform both an ECDHE key exchange and an ML-KEM key encapsulation simultaneously.

Two Secrets Are Better Than One: This process results in two independent shared secrets: one from ECDHE and one from ML-KEM.

Concatenation for a Single Master Key: These two secrets are then concatenated (combined end-to-end) to create the final master secret for the session.

Deriving Session Keys: This robust, hybrid master secret is then used to derive the encryption keys for the session traffic.

This process secures the session end-to-end. To break the encryption and read the data, an attacker would have to break both the classical ECDHE algorithm and the post-quantum ML-KEM algorithm. This "belt and suspenders" model provides a powerful guarantee: the connection is at least as secure as the classical cryptography we trust today, and it is also protected against the quantum threats of tomorrow. This allows organizations to safely transition to a post-quantum world without compromising on current security.

Conclusion: Two Worlds, One Goal

Classical key exchange is the workhorse of today, securing trillions of connections with proven, efficient software. But the road ahead will be a hybrid one. We can expect to see Post-Quantum Cryptography (PQC)—new algorithms resistant to quantum attacks—securing our communications and critical software-dependent transactions.
For security and networking practitioners, understanding the new paradigm is no longer optional—it's essential for securing today’s data against future quantum-based attacks.

Learn more about preparing for the quantum future: save your spot for our webinar launch event, where our product experts will walk you through how Zscaler uses hybrid key exchange to decrypt and inspect quantum-encrypted traffic with ML-KEM.]]></description>
            <dc:creator>Brendon Macaraeg (Sr. Product Marketing Manager)</dc:creator>
        </item>
        <item>
            <title><![CDATA[2026 Zscaler Public Sector Summit: Cyber Strong in the AI Era]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/2026-zscaler-public-sector-summit-cyber-strong-ai-era</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/2026-zscaler-public-sector-summit-cyber-strong-ai-era</guid>
            <pubDate>Thu, 12 Feb 2026 14:42:02 GMT</pubDate>
            <description><![CDATA[The 2026 Zscaler Public Sector Summit marks a homecoming for me and several others here at Zscaler who have recently hung up their federal spurs, and I feel a renewed sense of passion for the mission.

I find myself reflecting on the common thread that binds Zscaler and the varied operational communities we support: the mission. Having recently retired from the front lines of government IT, I understand that our “customers” aren’t just users; they are the American people, all focused on protecting our country.

Today, we stand at a critical juncture in the AI journey for our great nation. With a robust “America’s AI Action Plan,” our government is moving past the “pilot” phase of generative AI and entering a period of deep integration. However, as we weave AI into the fabric of government operations, we must ensure that the fabric itself is “Cyber Strong.”

We are no longer “preparing” for AI or adversarial use of this new technology. We are in the midst of an active race. We are also realizing that while these systems are revolutionary defensive force multipliers, they are simultaneously becoming high-value targets. Our adversaries, nation-states with deep pockets and sophisticated AI capabilities, are leveraging technology at a rate that traditional defenses cannot match. The new “AI-powered script kiddies,” using large language models (LLMs) to generate, refine, and deploy malicious code without understanding the underlying mechanics, are accelerating that challenge.

We are also seeing this in our recent ThreatLabz 2026 AI Security Report. From April 2024 to April 2025 alone, the Zscaler cloud blocked more ransomware attempts than in any previous year. That was more than 10.8 million hits, marking a 145.9% year-over-year increase and the highest volume recorded since tracking began.
In the same year, AI/ML activity increased dramatically to 536,500,000,000 total AI/ML transactions across the Zscaler Zero Trust Exchange, a 3,464.6% year-over-year surge compared to our last analysis period.

To stay ahead of increasingly sophisticated adversarial AI, deploying AI isn’t enough. We must ensure that every model in a safety-critical or high-value role is built on a foundation of secure-by-design, resilient architecture. True cyber strength in the AI era requires systems that are not only robust but actively instrumented to detect data-integrity and performance shifts, so we can identify and neutralize malicious activity before it compromises the mission.

This March, we gather at the Ronald Reagan Building and International Trade Center, a location that holds significant personal meaning for me. Did you know it is the second-largest building in the federal inventory? It is literally a city within a city. At over 3 million square feet of offices near the White House, it is the only federal building congressionally mandated to be mixed-use and open to the public, effectively uniting the nation’s best public and private resources in a national forum for the advancement of trade and serving a uniquely dual mission that presents inherent security challenges. It is a perfect metaphor for our current technology challenge: securing a vast, interconnected digital landscape where the boundaries between “inside” and “outside” have effectively vanished—especially in the food court!

The human element also comes front and center at this event. In the new digital age, securing the tech is only half the battle; we must also secure the “human” landscape. This is why I am particularly excited to welcome Eric O’Neill to our stage. Eric helped expose Robert Hanssen, a man who operated from within the very heart of our national security apparatus.
It’s a stark reminder that the greatest threats often come from within, using a PalmPilot, no less.

Eric’s insights into counterintelligence are more relevant now than ever. Adversarial AI is being used to craft social engineering attacks so convincing they bypass traditional human intuition. We must fight fire with fire. In 2026, the “insider” might not be a person at all, but a compromised AI agent or a deepfake identity. Eric will bridge the gap between “old school” counterintelligence and “new school” AI threats. His experience reminds us that while the tools change, the adversary’s intent remains the same: to undermine public trust and compromise our national security.

Walking through the Reagan Building, above or below ground, always reminds me of the scale of our government’s responsibility. It is a place of history, but also a place of the future. As we open the 2026 Public Sector Summit, my message to my peers in the public sector is simple: the journey to Zero Trust, and now AI, is a journey of security. We cannot have one without the other.

Join us on March 3, 2026. We will not just be talking about surviving the AI revolution; together with our partners, we will show how to lead it. Let’s forge a nation that is not just cyber-aware, but Cyber Strong.]]></description>
            <dc:creator>Chad Tetreault (Zscaler)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Microsoft Copilot Oversharing Data? Not Anymore. Meet Zscaler’s New Wizard]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/microsoft-copilot-oversharing-data-not-anymore-meet-zscaler-s-new-wizard</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/microsoft-copilot-oversharing-data-not-anymore-meet-zscaler-s-new-wizard</guid>
            <pubDate>Thu, 12 Feb 2026 12:10:15 GMT</pubDate>
            <description><![CDATA[Microsoft Copilot is accelerating how people work in Microsoft 365—and it can accelerate exposure when access controls aren’t clean. Copilot runs on your existing permissions model, so if SharePoint, OneDrive, and Teams are over-permissioned, it can end up saying the quiet part out loud: surfacing sensitive data to underprivileged users through seemingly harmless prompts.

The good news: you don’t need to hit pause on Copilot to be safe. You need to be Copilot-ready—with a clear understanding of what data is exposed, why it’s exposed, and how to remediate it fast at scale.

That’s exactly where Zscaler’s new Copilot Readiness Wizard adds value. But more on that later.

Ready for Copilot Readiness?

When it comes to Microsoft Copilot “readiness,” most discussions focus on licensing, user eligibility, and adoption. These are important—but they are not where the true success of a deployment lies.

True Copilot readiness means answering questions like the following, which test your level of data risk:

Which sensitive files in M365 are dangerously overshared?

Which items are missing sensitivity labels (or have the wrong ones)?

How much exposure is driven by anonymous links, org-wide links, or broad collaborator access?

Can we fix the issues across our tenant without weeks of manual effort?

Can we reduce risk without slowing users down or creating an admin bottleneck?

As you can see, these questions force you to evaluate how overshared your data is (in the spirit of collaboration). A good readiness plan needs to ensure your data security approach can ace the test when it comes to the questions above.

Data Risk: Brought to You by Collaboration

The main challenge with collaboration is that data security often takes a back seat to practices that help drive productivity.
So what collaboration practices cause the most risk?

“Everyone in the company” permissions to “keep things simple”

Org-wide links used as a shortcut

External sharing that persists long after a project ends

SharePoint sites that evolve into de facto data lakes

But let’s be clear: these collaboration practices don’t break Copilot’s security. Copilot just makes the consequences of oversharing immediate. Put simply, Copilot prompts help everyone discover data quickly using semantic search.

The challenge becomes what Copilot can share in response to user prompts. Without the ability to clean up the issues above, Copilot can overshare sensitive data in its responses when it isn’t appropriate—like company-wide salary information, acquisition plans, or customer-level PII. This type of data should be kept within a small, trusted circle—not repeated in responses to underprivileged users.

Where Microsoft Purview Fits In

Microsoft Purview provides important building blocks for governing information access and classification in Microsoft 365. It’s also true that Copilot respects sensitivity labels and permissions.
In other words, if a document is properly labeled and protected, Copilot will follow those rules. The challenge is getting to “properly labeled and protected” across the dynamic reality of a real-world M365 deployment:

Users often overshare in the spirit of productivity and collaboration.

Labels are applied inconsistently when done manually.

Auto-labeling capabilities are only available with E5 licensing.

Rinse and repeat all of the above thousands of times a day as new data arrives.

Many teams therefore need a faster, more actionable path to reducing overexposure than Purview alone provides—especially as Copilot adoption accelerates.

Enter the Zscaler Copilot Readiness Wizard

The Zscaler Copilot Readiness Wizard is built to help security and IT teams quickly understand whether Copilot could surface sensitive information—and to reduce that risk with targeted, scalable remediation. It focuses on the practical realities of Copilot exposure:

Sensitive data living in widely accessible locations

Sharing links that were created and forgotten

Large collaborator sets that ballooned over time

Inconsistent labeling (or no labeling) across high-risk content

Most importantly, it’s designed to help you move from “insight” to “action” quickly—because the window between Copilot enablement and exposure discovery is often uncomfortably short.

Putting Copilot Readiness on Steroids

Here’s how the Zscaler Copilot Readiness Wizard takes traditional Purview approaches to the next level to help you control oversharing faster and smarter.

Get Actionable Exposure Visibility

Instead of simply learning “you have exposure,” you want to know how exposure happens. You can see:

Public/anonymous links

Internal/org-wide links

Overly broad collaborator access (and how broad)

This granularity matters, because it changes the remediation strategy.
A public link problem is different from a “1000+ collaborators” problem.

Understand Richer Context

Richer context about what’s overexposed provides valuable insights so security teams can prioritize what matters:

Where sensitive info is overexposed

Which content contains privacy identifiers

Where risk is concentrated, so you can reduce it quickly

Deliver File-Level Remediation

With file-level remediation, you get precise control over a small subset of high-value files. If remediation is only practical at the SharePoint site level, you can end up overcorrecting and disrupting business collaboration. File-level action lets you be precise: fix the risky files without breaking the entire site’s workflows.

Comparing Zscaler to Native Copilot Controls

So how does Zscaler’s Copilot Readiness Wizard stack up against M365 native capabilities? The comparison below spells it out. It’s important to note that Microsoft’s auto-labeling functionality requires E5 licensing, whereas Zscaler’s approach can deliver this key value-add functionality with only an E3 license.

Capability area: Auto-Labeling
Microsoft Purview Copilot readiness: Requires an E5 license; with an E3 license, manual, error-prone labeling is required.
Zscaler Copilot Readiness Wizard: Enabled with an E3 license; bulk actions across assets, applying MIP labels as part of remediation.

Capability area: Remediation actions (examples)
Microsoft Purview Copilot readiness: Apply labels; restrict access to SharePoint sites.
Zscaler Copilot Readiness Wizard: Apply MIP labels; remove sharing links/collaborators; quarantine; report incidents.

Capability area: Exposure visibility
Microsoft Purview Copilot readiness: Limited scope of visibility.
Zscaler Copilot Readiness Wizard: In-depth insights across collaboration exposure: public links, internal links, and collaborator sharing tiers (0-100, 100-1000, 1000+).

Capability area: Detection context
Microsoft Purview Copilot readiness: Focus on exposure and label-related views.
Zscaler Copilot Readiness Wizard: Adds prioritization views (e.g., overexposed sensitive info; overexposed items matching DLP dictionaries).

Capability area: Reporting horizon
Microsoft Purview Copilot readiness: Often limited to short windows (e.g., one week in some views).
Zscaler Copilot Readiness Wizard: Longer lookback to spot patterns and regressions.

Capability area: Dashboarding
Microsoft Purview Copilot readiness: Activity and assessment views within Purview experiences.
Zscaler Copilot Readiness Wizard: Clear separation of readiness posture and activity views.

Bringing It All Together

Copilot can be transformational—but only if your data permissions and protections are ready for a world where anyone can ask, “Show me everything about X.” The Zscaler Copilot Readiness Wizard helps you quickly assess where Copilot could unintentionally surface sensitive information and gives you practical, file-level remediation paths to reduce risk without slowing the business down.

If you're ready to learn more about Zscaler, jump on over to our solution website, or schedule a demo to chat with us!]]></description>
            <dc:creator>Steve Grossenbacher (Senior Director, Product Marketing)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Communicating Security Notifications to Users with Zscaler Client Connector EUN Notifications]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/communicating-security-notifications-users-zscaler-client-connector-eun</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/communicating-security-notifications-users-zscaler-client-connector-eun</guid>
            <pubDate>Tue, 10 Feb 2026 17:43:44 GMT</pubDate>
            <description><![CDATA[In the networking world, there is a widely known adage: "It's always the network". This phrase refers to the tendency of users to blame network connectivity whenever access to a resource fails, even if the true reason lies elsewhere—such as being blocked by a corporate security policy.

The Need for Better User Communication

When end users receive no clear notification of why access to an application or network has been denied or another action taken, it is natural for them to assume the failure stems from a "networking issue." Left in the dark, users often retry accessing the resource, wasting valuable time and, eventually, filing help desk tickets. This pattern creates multiple challenges:

Increased workload for IT support teams, draining resources that could be allocated elsewhere.

Frustration across the business, as employees feel hindered by network inefficiencies.

Potential security risks, as users may attempt to bypass corporate security restrictions by leveraging unsanctioned third-party solutions.

In most instances, employees adopting workarounds are driven by necessity, not malice—they simply want to complete tasks without engaging with technical barriers they don’t fully understand.

The solution? Providing clear, timely end-user notifications (EUNs) that inform users when access to a specific resource is blocked, along with the reason for the restriction. Such transparency not only reduces the volume of unnecessary tickets but also cultivates better-informed, security-aware employees. Over time, this strengthens the organization’s overall security posture.

A Unique Challenge: Non-Web Traffic EUNs

For web traffic, user notifications are relatively straightforward: organizations can display a web-based End-User Notification (EUN) page explaining the block.
This page might include customized corporate branding, a message specific to the policy violation, and instructions for contacting IT support if needed.

But not all traffic is web-based. What happens, for example, when a user tries to access a resource via SSH in a public cloud, only to have the attempt blocked by a security policy? Since there’s no browser-based interaction, traditional EUN pages can’t be displayed in such cases. This can leave users confused, wasting time trying to troubleshoot what they perceive as “networking” or application-related issues.

Enter Zscaler Client Connector EUN Notifications

This is where Zscaler Client Connector EUN Notifications step in to fill the gap. Starting with Zscaler Client Connector version 4.8 (used in conjunction with Z-Tunnel 2.0), notifications can now be surfaced directly to the user for ZIA policies, clearly explaining that access to a site or resource has been blocked by a corporate security policy.

Expanded Policy Support

Previously, ZCC-based notifications were available for policies such as Inline Web Data Loss Prevention (DLP), Endpoint DLP, and Cloud App Control.
Recently, Zscaler has enhanced these capabilities to include:

Firewall Filtering

DNS Control

Intrusion Prevention System (IPS) Control

This expanded support is particularly valuable for non-web traffic, where no web-based EUN page can be presented.

Key Use Cases for EUN Notifications

Here are some common scenarios in which Zscaler Client Connector EUN Notifications offer clarity:

DNS Control actions:

When a DNS request is blocked due to a classification (e.g., a domain falls under a restricted category).

When DNS Control redirects a request (e.g., an A-record response redirected to a specified IP) but no subsequent web flow occurs, leaving the user without context for the block.

Firewall or IPS Control actions:

When attempts to use protocols such as SSH are blocked.

When an IPS signature match triggers a block, leaving users wondering why their application or connection isn’t functioning as expected.

EUN notifications eliminate this ambiguity by clearly communicating the reason behind the restriction, for example by communicating:

Block actions on non-web traffic to the user.

Warnings to the user when they visit a suspicious domain or use a protocol or application that is not banned but potentially dangerous.

Remediation steps for the user (opening a ticket, not running an app, etc.).]]></description>
            <dc:creator>Siddhartha Aggarwal (Staff Technical Product Specialist - Firewall)</dc:creator>
        </item>
        <item>
            <title><![CDATA[A Guide to OpenClaw and Securing It with Zscaler]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/guide-openclaw-and-securing-it-zscaler</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/guide-openclaw-and-securing-it-zscaler</guid>
            <pubDate>Mon, 09 Feb 2026 22:23:42 GMT</pubDate>
            <description><![CDATA[What Is OpenClaw?

OpenClaw is an application designed as a persistent, long-running Node.js service that functions as a sophisticated AI agent. It bridges the gap between the LLM and the operating system, granting the agent the capability to manipulate files, execute shell commands, and interact with third-party services via the Model Context Protocol (MCP) or API. It was previously known as ClawdBot and MoltBot; all three names refer to the same application.

Why It Matters

In the past, agents have been specialized to one task or a group of similar tasks. OpenClaw lays the foundation for a generalized application that can address multiple use cases while improving on the basic principles of AI agents with memory management and skills deployment.

This capability, while transformative, introduces a profound security paradox: the utility of the agent is directly proportional to its level of access. That very access creates an unprecedented attack surface within the host and the environment in which it is deployed.

Why Organizations Should Care

It is incredibly easy for users to download a malicious skill/library for OpenClaw. In fact, within days there were hundreds of malicious skills that users could download with a click of a button. A great example is One-Click RCE, where:

“A victim would simply need to visit an attacker-controlled website that leaks the authentication token from the Gateway Control UI, which is enabled by default, via a WebSocket channel. Then an arbitrary command will run, even if the victim is hosting locally.”

The fact that no administrative rights are needed to install OpenClaw locally significantly increases the risk: users can run and download malicious content/skills, a compromised OpenClaw device can be used to move laterally, and sensitive data (captured via integrations) can be uploaded, since OpenClaw can bypass typical security controls.
This is made worse by the fact that the application and its service are not easy to identify, and its traffic carries no identity tying it to OpenClaw.

This guide shows IT/security admins how to protect their environments from users installing or running OpenClaw, or bringing rogue devices with OpenClaw installed into the network. This poses a significant risk to the enterprise network and should not be allowed. There are mitigating controls that users of OpenClaw can deploy, but these are often left to the user, who might not fully understand them or might not care to implement them. Those controls are not covered here.

How Does OpenClaw Work?

OpenClaw is a gateway-centric system designed to facilitate an agentic loop (such as ReAct): a continuous cycle of perception, reasoning, and action. This puts the LLM between the users and the data (for integrations/tools), allowing the LLM to provide reasoning. The architecture is divided into three primary functional domains: the Gateway, the front end (node), and the integration layer. OpenClaw uses standard HTTPS for all outbound connections/integrations.

The Gateway

The Gateway serves as the centralized control plane, managing sessions, maintaining persistent memory, and routing communications between the user and the agent across various messaging platforms such as WhatsApp, Telegram, Slack, and Discord. Here are the default ports used by OpenClaw internally on the system:

- Gateway Daemon, port 18789 (WebSocket): central control plane; requires token-based authentication (but can be bypassed with a simple config change).
- Browser Control, port 18791 (CDP): used for headless Chrome automation; risk of web-based exfiltration.
- External APIs, port 443 (HTTPS): outbound traffic to LLM providers and messaging servers.

Node Layer

The node layer is used to access resources on the system and beyond (such as local file system access, camera access, screen recording, and location services) and provide them to the Gateway.
These are also a collection of node libraries running on the endpoint as part of the Node.js process.

The Integration Layer

This layer manages “skills”: modular packages of code, metadata, and natural-language instructions that define what the agent can do. It leverages the Model Context Protocol (MCP) to interface with external services (such as GitHub, Google Workspace, or Notion) using a standardized schema, ensuring the agent always uses the correct API parameters without requiring hardcoded custom integrations for every task.

- LLM APIs, port 443 (HTTPS): outbound API calls to LLM providers and messaging servers. Note that these are typically different from the web AI endpoints used by browsers.
- External APIs, port 443 (HTTPS): outbound traffic to virtually anything hosted on the internet, via API or via a browser.
- External MCP servers, port 443 (HTTPS): outbound traffic to MCP tools; these tools can also be hosted locally and converted to external API calls.

Security Takeaways on Architecture

The key takeaway is that OpenClaw inherits the user-agent string from the Chrome browser. There is no hardcoded, unique “OpenClaw” user-agent string used globally for all outgoing traffic, which makes it difficult to differentiate OpenClaw applications from standard user browser traffic. Since all its integrations rely on outbound HTTPS connections, which are typically allowed on user devices and network firewalls, uniquely identifying it at the transport layer is challenging.
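One practical consequence is that detection may be easier on the endpoint than on the wire. As an illustration only (not an official detection tool), a host-local probe of the default Gateway ports listed above could look like this minimal sketch:

```python
import socket

# Default local ports from the OpenClaw architecture described above.
# Illustrative sketch; a real deployment may use non-default ports.
OPENCLAW_DEFAULT_PORTS = {
    18789: "Gateway daemon (WebSocket)",
    18791: "Browser control (CDP)",
}

def probe_local_ports(host="127.0.0.1", timeout=0.5):
    """Return the subset of default OpenClaw ports accepting a TCP connect."""
    open_ports = {}
    for port, role in OPENCLAW_DEFAULT_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports[port] = role
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_ports
```

A non-empty result is only a hint worth investigating, since other software could bind these ports; it is not proof of an OpenClaw install.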
Furthermore, the fact that the service runs locally on the device makes it difficult to detect at the network layer outside of the device itself. In addition, OpenClaw has extensive integrations, allowing it access to a wealth of data out of the box, which can then be extended by adding “skills.” Couple this with local system access and the ability to install it without admin rights, and OpenClaw becomes a significant risk vector.

How Can Zscaler Help?

Note: This is not a step-by-step configuration guide. It provides guidance on what controls should be strongly considered to detect and restrict OpenClaw within an environment. Please use the standard change management process within your environment to roll out any changes.

There are two main ways of deploying OpenClaw:

- Cloud-based/centrally hosted LLM (most likely scenario)
- LLM deployed locally (typically needs computers with an NPU/GPU and more than 32 GB of memory)

OpenClaw can be installed locally on the device, in a container, or on an IaaS/PaaS platform. For this document, we will treat container-based and locally installed methods the same.

Note that not all of these controls need to be implemented; this list provides a defense-in-depth strategy that would allow an organization to prevent unauthorized use from both managed and BYOD devices. A simple URL block would prevent the download, but pairing it with TLS inspection provides significantly more visibility and control. Controls such as file-type filtering, sandboxing, and DLP will enhance this protection. In addition, implementing tenancy control would allow access to enterprise GitHub while blocking other GitHub instances that could be hosting OpenClaw.
Thus, it is generally recommended to implement layered controls.

A note on TLS inspection: Keep in mind that Node.js by default does not use the OS credential/certificate store; if TLS inspection is enabled, the user will therefore get certificate errors when talking to external tools, LLMs, and communication channels. The Node libraries will have to trust the Zscaler root certificate to talk externally, effectively forcing TLS inspection.

1. Prevent the download of OpenClaw: Using URL filtering and/or a combination of file type controls, Zscaler can prevent unauthorized downloads of OpenClaw on endpoints. OpenClaw install files are typically .ps1, .sh, or Docker files; these file types should be blocked.

- Block URLs: https://openclaw.ai/ and https://github.com/openclaw/openclaw (URL Filtering)
- File types: block ps1, sh, Docker (yaml/yml) (File Type Control)
- Detecting existing installs: existing installs of OpenClaw can be detected using Zscaler Endpoint DLP, EDR, or MDM. See the respective sections below for details.

2. Prevent the download of additional playbooks and 0-day malware. OpenClaw uses markdown for its skills files. Custom file type control can be used to detect markdown files and block downloads.
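To illustrate what an existing-install detection might key on, here is a hypothetical sketch that checks for well-known OpenClaw artifact files; the file names come from the default ~/.openclaw layout described later in this article, and the helper itself is an assumption for illustration, not a Zscaler product feature:

```python
from pathlib import Path

# Artifact names from the default ~/.openclaw layout (see directory
# listing later in this article). Hypothetical detection sketch only.
OPENCLAW_ARTIFACTS = (
    "openclaw.json",
    "exec-approvals.json",
    "workspace/AGENTS.md",
    "workspace/SOUL.md",
)

def find_openclaw_install(home=None):
    """Return any OpenClaw artifact paths present under <home>/.openclaw."""
    base = (home or Path.home()) / ".openclaw"
    return [base / rel for rel in OPENCLAW_ARTIFACTS if (base / rel).exists()]
```

An EDR, MDM, or Endpoint DLP rule scanning for the same paths would achieve the equivalent at fleet scale.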
Furthermore, Zscaler CASB can be used to isolate, restrict, or block access to GitHub repositories to prevent users from duplicating repos and bypassing security by using custom repositories.

- Block URLs: https://openclaw.ai/ and https://github.com/openclaw/openclaw
- TLS inspection: enable a TLS inspection policy as broadly as possible, and at a minimum across allowed LLMs and sanctioned apps with which OpenClaw integrates (OpenClaw Integrations)
- Sandbox policy: any executable or archive should get the Quarantine first-time action (Zscaler Sandbox)
- File type control: block JSON, ps1, sh, Docker (yaml/yml), Markdown, unscannable, and password-protected files (Zscaler File Type Controls, Zscaler Custom File Type Controls)
- Cloud App Control: restrict access to GitHub to align with user role (Zscaler Cloud App Control)
- Tenancy restrictions for GitHub: certain users such as developers might still need access to the enterprise GitHub repo; Zscaler Tenant Profiles in combination with Cloud App Control can be used to provide granular access (Zscaler Tenant Profile)

3. Prevent callbacks and connections to known malicious and 0-day malware. Malicious OpenClaw skill files will often call back to C&C servers; they can also use evasive techniques such as SSH tunnels or DoH tunnels.
Zscaler can prevent these callbacks, along with blocking the executables/scripts that would trigger them.

Advanced Threat Protection policy (ATP policy):
- Enable Botnet Protection
- Enable Malicious Active Content Protection
- Enable Fraud Protection
- Block Unauthorized Communication Protection
- Block BitTorrent
- Block P2P file sharing

Sandbox policy: any executable or archive should get the Quarantine first-time action (Zscaler Sandbox)

DNS DGA: enable DGA detection under the ATP policy

DNS tunnels: block DoH tunnels and unknown DNS tunnels (ATP policy, DNS Control)

SSH tunnels: Unauthorized Communication Protection (ATP policy)

4. Protect against sensitive data leakage. Depending on the deployment, OpenClaw will have to use the network for tool/skill access and/or for LLM access. During this time, Zscaler can perform data protection on these sessions, if they are inspected. Keep in mind that Node.js by default does not use the OS certificate store; if TLS inspection is enabled, the user will get certificate errors when talking to external tools, LLMs, and communication channels.
The Node libraries will therefore have to trust the Zscaler root certificate to talk externally, effectively forcing TLS inspection.

- TLS inspection policy: enable TLS inspection across allowed LLMs and sanctioned apps that OpenClaw integrates with (OpenClaw Integrations)
- Enable DLP inspection on HTTP POSTs: existing policies should be extended to GenAI, LLM, and other unsanctioned apps (Zscaler Data Protection)

Use DLP for detection

Zscaler provides a way to detect the presence of Node and OpenClaw files using Endpoint DLP to identify OpenClaw artifacts and restrict data movement (Endpoint DLP). For example, by default a directory structure is created under ~/.openclaw with the following files. Zscaler EDLP can detect these files and raise an alert if they exist on an endpoint; scanning for file names under openclaw/workspace would point to existing installs.

.
├── agents
│   └── main
│       ├── agent
│       │   └── auth-profiles.json
│       └── sessions
│           └── sessions.json
├── canvas
│   └── index.html
├── credentials
│   ├── discord-allowFrom.json
│   ├── discord-pairing.json
│   └── whatsapp
│       └── default
│           └── creds.json
├── cron
│   ├── jobs.json
│   └── jobs.json.bak
├── devices
│   ├── paired.json
│   └── pending.json
├── exec-approvals.json
├── identity
│   ├── device-auth.json
│   └── device.json
├── memory
│   └── main.sqlite
├── openclaw.json
├── update-check.json
└── workspace
    ├── AGENTS.md
    ├── BOOTSTRAP.md
    ├── first
    ├── HEARTBEAT.md
    ├── IDENTITY.md
    ├── SOUL.md
    ├── TOOLS.md
    └── USER.md

5. Prevent unauthorized LLM calls.
The most common deployment I anticipate is the use of public LLMs, in which case OpenClaw will be making outbound API calls to the LLM. Controls should be placed around this so that only sanctioned AI is allowed from the organization's network, with that sanctioned AI providing visibility and guardrails.

- Block all direct LLM usage: block all LLMs via URL/Cloud App Control and only allow Zscaler AI Guard from the enterprise network (Zscaler Cloud App Control; https://api.zseclipse.net, https://proxy.zseclipse.net)
- Use AI Guard as the authorized AI platform: deploy AI guardrails to monitor and restrict prompt usage (Zscaler AI Guardrails)

6. Prevent rogue devices from running OpenClaw and/or moving laterally. In open networks such as college campuses or research institutions, users can plug in rogue devices that have OpenClaw running. If these devices are compromised or used maliciously, they can serve as an entry point into the enterprise network; a common example is plugging a Mac mini into an open port. This is where Zscaler can help control and direct communications from these devices by effectively isolating them.

- Isolate devices: ensure new devices on the network are onboarded as an “island of one.” This can be achieved easily with Zero Trust Branch.
- Control BYOD policy to prevent north/south communication: tunnel traffic to ZIA from BYOD/rogue devices, and apply ATP, DNS, and URL inspection policy (in the absence of TLS inspection). This can be achieved with Zero Trust Branch.

7. Restrict BYOD from accessing enterprise data directly: Another use case to cover is contractors and/or BYOD devices accessing SaaS applications such as Workday or Salesforce. Contractors or BYOD devices with OpenClaw can download skills that use the Chrome DevTools protocol to scrape data from your SaaS services.
This is where Zscaler can help prevent data loss at mass scale with Zscaler Zero Trust Browser.

- Conditional access policy: block direct access to SaaS applications and only allow access via your Zscaler tenant.
- Zscaler Zero Trust Browser: use Zscaler Zero Trust Browser to provide a sandboxed, isolated app access environment, preventing data from landing on the endpoints (Zscaler Zero Trust Browser + Zscaler SquareX).

Endpoint Controls to Consider

As OpenClaw runs locally on an endpoint, the Gateway and node layers have components/services running locally on the endpoint. EDRs have visibility into and control over these, so EDR should be paired with Zero Trust principles to gain full visibility and control over managed devices.

- Package/config file inspection with EDR: inventory NPM global installations and identify OpenClaw binaries and config files in common paths.
- Installer logic: rules can be set to block common one-line “curl-to-bash” installation patterns.
- Process monitoring and escalation detection: detect Node processes running on the endpoint, especially those with high-privilege access.
- Detecting locally hosted services: OpenClaw’s front end can be deployed as local-only or as a remote service. In either scenario, all inbound access to endpoints should be blocked, especially on the ports called out in the Gateway section.
- MDMs can also be used to detect the presence of OpenClaw on managed devices.

Summary

OpenClaw feels like a new frontier in agentic AI. It is poised to change how we view and use AI agents today, and potentially to lay the groundwork for what agentic AI applications could look like going forward. However, at this point, OpenClaw introduces significant security and privacy risks for an organization.
Zscaler can help enterprises, government agencies, and educational institutions accelerate secure adoption of GenAI, ensuring that malicious tools and risky applications are not introduced while preventing data loss and device compromise within the organization's environment.]]></description>
            <dc:creator>Hersh Patel (Zscaler)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Transforming Threat Detection: How Partnerships in Deception Technology Are Shaping the Future]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/transforming-threat-detection-how-partnerships-deception-technology-are</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/transforming-threat-detection-how-partnerships-deception-technology-are</guid>
            <pubDate>Mon, 09 Feb 2026 16:04:51 GMT</pubDate>
            <description><![CDATA[Security Operations Centers (SOCs) are drowning in alerts. The constant flood of data from disparate tools creates a significant challenge: distinguishing real threats from false positives. In this environment, a reactive security posture is not just inefficient; it’s dangerous.

A truly proactive strategy requires two things: unambiguous, high-fidelity threat signals and the automated ability to act on them instantly. This is where the combination of deception technology and a connected security ecosystem shines. Zscaler Deception provides the undeniable proof of an active threat, and through our deep third-party integrations, we empower organizations to turn that critical intelligence into immediate, decisive action. This blog explores how that powerful synergy transforms your security stack from a collection of siloed tools into a cohesive, self-defending ecosystem.

High-Fidelity Intelligence

Zscaler Deception fundamentally changes the defensive game. By creating a digital minefield of convincing decoys and lures across endpoints, cloud workloads, Active Directory, and GenAI infrastructure, it turns the tables on attackers. Instead of searching for weaknesses, defenders create an environment where any unauthorized interaction is, by definition, malicious.

When an attacker engages with a decoy, Zscaler Deception generates a high-fidelity alert. Because legitimate users have no reason to interact with these assets, the alerts produced are virtually free of false positives.
This provides security teams with three critical advantages:

- Early detection: catching attackers at the earliest stages of the kill chain, often before they can access critical data.
- Rich intelligence: gathering detailed TTPs (tactics, techniques, and procedures) and IOCs directly from the attacker’s actions.
- Unquestionable confidence: providing an unambiguous signal that an active threat is present in the environment.

From Intelligence to Automated Action

But what happens next? A high-fidelity alert is only the starting point. Its true power is realized only when it triggers an immediate, decisive response. The time between detection and containment is where breaches escalate, and manual intervention is often too slow.

The key to closing this loop and drastically reducing mean time to respond (MTTR) lies in automation. This is where Zscaler Deception’s built-in orchestration and third-party integrations become transformative. By connecting its high-confidence signals directly to the other security tools in your stack, deception becomes the trigger for an automated, continuous response. The value is no longer just about finding the threat; it's about neutralizing it instantly.

Endpoint Detection and Response (EDR)

By integrating with an EDR partner such as CrowdStrike Falcon or Microsoft Defender, Zscaler Deception can automatically share threat intelligence, such as indicators of compromise (IOCs) and attack context, with the EDR platform. This enables immediate automated actions, including quarantining compromised endpoints. That containment stops threat actors before lateral movement and escalation can occur, allowing security teams to swiftly investigate and remediate the incident.
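In the abstract, that alert-to-action flow can be sketched as a simple decision function; the field names ("decoy_id", "source_host", and so on) and action names are hypothetical illustrations, not an actual Zscaler or EDR API:

```python
# Generic sketch of a deception-triggered containment decision.
# All field and action names are hypothetical; a real integration
# would call vendor APIs through a SOAR playbook or connector.
def containment_actions(alert):
    """Map a high-fidelity deception alert to automated response actions."""
    actions = []
    if alert.get("decoy_id"):  # any decoy touch is, by definition, malicious
        actions.append(("quarantine_endpoint", alert["source_host"]))  # EDR isolation
        for ioc in alert.get("iocs", []):
            actions.append(("publish_ioc", ioc))  # enrich the broader stack
        if alert.get("attacker_ip"):
            actions.append(("block_ip", alert["attacker_ip"]))  # firewall rule
    return actions
```

The point of the sketch is the trust model: because decoy interactions carry essentially no false-positive risk, each action can fire without human approval.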
Additionally, both platforms exchange threat intelligence, enriching detection and response workflows so that the broader security stack remains up to date with the most relevant IOCs and attack patterns. This integration delivers a proactive defense layer, allowing joint customers to contain threats earlier in the kill chain and automate robust incident response actions across their environments.

Use Case: A prominent financial institution using Zscaler Deception identified an attacker on a compromised endpoint. Through its direct integration with CrowdStrike, the system automatically quarantined the device, instantly isolating the threat and stopping the attack in its tracks.

SIEM and SOAR Platforms

Zscaler Deception enriches Security Information and Event Management (SIEM) platforms like Splunk, Sumo Logic, and IBM QRadar with context-rich, high-priority alerts. This allows security teams to correlate threat intelligence and visualize the attack lifecycle. But the real power is unlocked when these signals trigger a Security Orchestration, Automation, and Response (SOAR) playbook. The deception alert can initiate an automated workflow that orchestrates actions across multiple security tools, from threat hunting to triggering broader network policy changes, dramatically accelerating the entire incident response process.

Use Case: A global travel management firm detected active attackers probing its Active Directory endpoints when they hit a Zscaler Deception decoy. The detection was sent to the firm's SIEM, which raised a high-risk event that was escalated for human analysis. This pre-emptive alert allowed the firm not only to determine the containment strategy for the attack but also to create runbooks for similar future incidents.

Perimeter Firewalls

Containing a threat often means blocking the attacker's command and control (C2) infrastructure.
By integrating with next-generation firewalls, Zscaler Deception can automatically share the source IP of an attacker engaging with a decoy. The firewall can then immediately update its rules to block that malicious IP, effectively cutting off the attacker's access to the network before they can exfiltrate data or receive further instructions.

Use Case: A global travel management firm detected active attackers probing their network with Zscaler Deception. By leveraging our integration with the organization’s firewall, over 250 distinct attacker IPs were automatically blocked, instantly neutralizing the threats before they could impact critical systems.

Building a Self-Defending Ecosystem

The old paradigm of security, where defenders reactively chase alerts, is no longer sustainable. A proactive strategy with deception provides the early warning system, but its true potential is unlocked through automation. By integrating Zscaler Deception with your existing EDR, SIEM, SOAR, and firewall solutions, you create a continuous response cycle. High-fidelity detections reliably trigger automated investigation, containment, and eradication actions. This approach not only shrinks attacker dwell time and drastically reduces MTTR, but also frees up your security team to focus on strategic initiatives rather than chasing ghosts. It’s time to move beyond simple detection and build a truly actionable, automated defense leveraging Zscaler’s rich technology partner ecosystem.

Request a demo to learn more about how Zscaler Deception can help close the detection and response loop with third-party integrations.]]></description>
            <dc:creator>Jaideep Chanda (Technology Partner Manager)</dc:creator>
        </item>
        <item>
            <title><![CDATA[How Organizations Can Make a Successful Transition to Post-Quantum Cryptography (PQC)]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/organizations-make-successful-transition-post-quantum-cryptography-pqc</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/organizations-make-successful-transition-post-quantum-cryptography-pqc</guid>
            <pubDate>Thu, 05 Feb 2026 18:14:07 GMT</pubDate>
            <description><![CDATA[The Quantum Era is fast approaching, and the threat is no longer a distant concern: quantum computers will change our digital world because algorithms like Shor's break the public-key cryptography that currently underpins digital security.

The most immediate danger isn't that a quantum computer will appear overnight. It's the "Harvest Now, Decrypt Later" (HNDL) attacks that are likely already happening. Malicious actors are siphoning off encrypted data today: they can store it and wait for the day a quantum computer can unlock its secrets. For data with a long shelf life (trade secrets, government intelligence, healthcare records, financial data) the vulnerability is present now.

The good news is that the path forward has become clearer. Now that standards bodies like the National Institute of Standards and Technology (NIST) have finalized their initial standards for Post-Quantum Cryptography (PQC), the time to plan, inventory, and act is now.

So what steps should your organization take for a successful transition? Here is a practical, four-step guide to building your quantum-resistant future.

1. Plan and Adopt a Quantum-Safe Strategy

A successful migration doesn't happen by accident: it requires a deliberate, top-down strategy. Without a plan, efforts will be fragmented, incomplete, and ultimately ineffective.

Use a hybrid cryptography approach

A "rip and replace" strategy is too risky. A hybrid approach combines a classic, proven algorithm (like ECDH) with a new PQC algorithm like ML-KEM (Module-Lattice-Based Key-Encapsulation Mechanism, finalized by NIST in FIPS 203). ML-KEM is a leading PQC algorithm designed to secure digital communications against future attacks by quantum computers. A session key is generated using both the classical and PQC algorithms, meaning an attacker would need to break both to compromise the connection.
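Conceptually, the hybrid derivation feeds both shared secrets into a single key derivation function. The sketch below illustrates the idea with a standard-library, HKDF-style extract-and-expand; the input secrets are placeholders, and real protocols use the exact KDF and transcript binding mandated by their TLS/PQC profile:

```python
import hashlib
import hmac

def hybrid_session_key(classical_secret: bytes, pqc_secret: bytes,
                       context: bytes = b"hybrid-kex-demo") -> bytes:
    """Derive one session key from both shared secrets (illustrative only).

    Recovering the session key requires BOTH inputs: the classical (e.g.,
    ECDH) secret and the PQC (e.g., ML-KEM) secret.
    """
    ikm = classical_secret + pqc_secret                       # concatenate secrets
    prk = hmac.new(context, ikm, hashlib.sha256).digest()     # extract step
    return hmac.new(prk, b"session-key\x01", hashlib.sha256).digest()  # expand step
```

Because the two secrets are combined before any key material is produced, a quantum attacker who breaks only the classical exchange (or a cryptanalyst who breaks only the new PQC scheme) still learns nothing about the session key.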
This provides a safety net, ensuring security against both classical attackers today and quantum attackers tomorrow, while also hedging against any unforeseen weaknesses in the first generation of PQC algorithms.

Organizations should adopt NIST-recommended PQC algorithms

Relying on standardized, peer-reviewed algorithms is non-negotiable. Organizations like NIST, ISO, and ETSI have subjected these algorithms to years of intense global scrutiny. Adopting them ensures you are implementing the most secure, vetted options available and guarantees interoperability with the broader ecosystem of vendors, partners, and customers who are also making the transition.

Update your internal security and acquisition standards

Strategy must be codified into policy. By explicitly requiring PQC in your organization’s cybersecurity, data security, and vendor procurement standards, you create a powerful forcing function. This ensures that all new software, hardware, and cloud services are evaluated for quantum readiness from day one, preventing the continued growth of your cryptographic debt.

Assign clear ownership

Without accountability, even the best plans fail. The PQC transition is a complex, cross-functional initiative that will touch nearly every part of the business, from IT and security to application development, legal, and supply chain management. Designating a specific leader or a dedicated team creates a center of gravity for the project, ensuring coordination, driving progress, and providing a single point of contact for executive leadership.

2. Inventory Your Cryptography-Dependent Assets

You cannot protect what you don't know you have. This discovery phase is the foundation of your entire migration effort.

Inventory all cryptographic algorithms, keys, certificates, and protocols

This is the most critical first step.
Your organization uses cryptography in thousands of places you might not expect: web servers (TLS), VPNs, SSH connections, code signing, secure boot processes, IoT devices, and internal applications. A comprehensive inventory, often called a Crypto Bill of Materials (CBOM), is the only way to understand the true scale of your quantum vulnerability.

Prioritize IT assets vital to business operations

You can't fix everything at once. A risk-based approach is essential. Start by identifying your "crown jewels": the systems that, if compromised, would cause the most damage to your business. This includes systems managing financial transactions, sensitive intellectual property, customer PII, and critical operational controls. Focusing on these high-value assets first ensures you are mitigating the most significant risks immediately.

Catalog critical data at risk from HNDL attacks

This action is directly tied to mitigating the "Harvest Now, Decrypt Later" threat. You must identify data based on its required confidentiality lifespan. Does this data need to remain secret for more than 5-10 years? If so, it is a prime target for HNDL. Any data encrypted today with classical algorithms, like M&A documents, long-term strategic plans, or patient health records, must be prioritized for re-encryption or protection using PQC.

Identify where public-key cryptography is being used and mark these systems as quantum-vulnerable

This translates your inventory into an actionable roadmap. By pinpointing every instance of vulnerable algorithms like RSA, Diffie-Hellman, and ECDSA, you create a concrete list of systems, applications, and processes that need remediation. This moves the problem from an abstract concept ("we need to be quantum-safe") to a tangible project plan ("we need to update these 50 VPN gateways and these 200 web servers").

3.
Implement PQC Key Exchange

The secure handshake that begins every encrypted session is a primary target for quantum attacks.

Replace or complement current key exchange mechanisms with PQC algorithms

The key exchange (e.g., RSA, ECDH) is how two parties establish a shared secret over an untrusted network. Shor's algorithm is specifically designed to break these mechanisms. By transitioning to a PQC key exchange algorithm like the NIST-standardized ML-KEM, you protect the very foundation of your secure connections. As mentioned earlier, implementing this in a hybrid mode is the recommended starting point, ensuring the confidentiality of your session data against all current and future threats.

4. Implement PQC Algorithms for Authentication

Once a session is established, you need to trust the identity of who you're talking to. That's where digital signatures come in.

Transition certificates to use PQC digital signature algorithms

Digital signatures (e.g., RSA, ECDSA) are used in certificates to prove identity and ensure integrity. A quantum computer could forge these signatures, allowing an attacker to impersonate a legitimate website, server, or software publisher. This would shatter digital trust. As PQC signature algorithms like ML-DSA (Module-Lattice-Based Digital Signature Algorithm, formally specified in the FIPS 204 standard) become widely available from certificate authorities, you must begin replacing your existing certificates to protect against identity spoofing and man-in-the-middle attacks.

Engage in proxy optimization efforts

Pragmatism is key to a smooth transition. PQC algorithms often have larger key and signature sizes, which can impact performance and latency, especially for legacy clients or constrained networks. A modern, intelligent security proxy, like the public service edge nodes of Zscaler’s Zero Trust Exchange, can act as a "crypto-translator."
It can establish a PQC-secured connection to a modern server while presenting a classical connection to a legacy client, and vice versa. This offloads the heavy lifting, optimizes performance, and allows you to roll out quantum-safe protections without needing to update every single endpoint simultaneously.

The Transition to PQC Journey Starts Today

The transition to a quantum-resistant world is a marathon, not a sprint. But it is a race that has already begun. By viewing this not as a single event but as a continuous process of strategic modernization, you can turn a monumental challenge into a competitive advantage. The organizations that start planning, inventorying, and implementing these steps today will not only defend against the threats of tomorrow but also build a more resilient and secure foundation for the future.

Learn more about preparing for the quantum future: save your spot for our webinar launch event, where our product experts will walk you through how Zscaler decrypts and inspects quantum-encrypted traffic with hybrid key exchange using ML-KEM.]]></description>
            <dc:creator>Brendon Macaraeg (Sr. Product Marketing Manager)</dc:creator>
        </item>
        <item>
            <title><![CDATA[If You're Reachable, You're Breachable, Part 3: The Adversary's Final Move – Exploiting You]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/if-you-re-reachable-you-re-breachable-part-3-adversary-s-final-move</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/if-you-re-reachable-you-re-breachable-part-3-adversary-s-final-move</guid>
            <pubDate>Sat, 31 Jan 2026 23:34:51 GMT</pubDate>
            <description><![CDATA[Over the&nbsp;part 1 and&nbsp;part 2 of this series, we have followed the adversary's journey. In Part 1, we saw how they use internet-wide scanners to&nbsp;find your exposed VPNs, Firewall and other digital assets. In Part 2, we detailed how they&nbsp;classify those assets, building a detailed blueprint of your security stack i.e. VPNs, Firewalls, and your application infrastructure.Now, we arrive at the final, inevitable conclusion of this process. The reconnaissance is over. The blueprint is complete. This phase is the "breach" in "breachable." This is the exploitation phase.From Knowledge to Action: Weaponizing IntelligenceThe adversary now has a list of your exposed services like VPNs and Firewalls, and their exact versions. This is the ammunition. The next step is to find the weapon to fire it.1. Finding the Exploit (The CVE Playbook)The first stop is a public vulnerability database, like the National Vulnerability Database (NVD). The attacker takes the version number they discovered (e.g., Apache/2.4.49, VPN/Brand Name) and searches for any associated Common Vulnerabilities and Exposures (CVEs).Instantly, they have a list of known weaknesses for that specific software. Each CVE comes with a description of the vulnerability, its severity score (CVSS), and often, links to proof-of-concept (PoC) code. The attacker isn't guessing; they are following a well-documented recipe for a breach.2. Loading the Weapon (Exploit Frameworks like Metasploit)For common vulnerabilities, an attacker doesn't even need to write code. They turn to powerful, open-source exploit frameworks. Think of these frameworks as a digital Swiss Army knife for penetration testers and, unfortunately, for criminals. 
These frameworks contain vast libraries of pre-built "exploit modules": scripts that are ready to fire at a vulnerable service. The process is chillingly simple:

- Search these repositories or frameworks for the CVE number (e.g., CVE-2024-55591).
- Load the corresponding exploit module.
- Set the target IP address (which they already have).
- Type exploit.

If successful, the framework establishes a "shell" or a "session" on your VPN or firewall server, giving the attacker direct command-line control. They are now inside your network. It can be that easy.

AI: The Autonomous Attacker Is Here

If the commoditization of exploits wasn't bad enough, AI is now supercharging the entire exploitation process, enabling attacks at a scale and speed that is impossible for human defenders to counter.

AI-Driven Exploit Customization: Standard exploits are often caught by security tools like Intrusion Detection Systems (IDS) or Web Application Firewalls (WAF). Adversaries are now using AI to generate polymorphic versions of their exploits. The AI can subtly alter the attack code for each attempt, creating an infinite number of variations that fly under the radar of signature-based defenses.

Predictive Exploitation: An AI model can analyze the complete target profile (OS, services, patch level, detected security tools) and predict the single most effective exploit chain. It might determine that a frontal assault on the web server will be blocked, but a less-common vulnerability in an adjacent VPN has a higher chance of success and will lead directly to the internal database.

Autonomous Kill Chains: The most advanced adversaries are using AI to automate the entire attack sequence. The AI finds a target, classifies its services, selects and launches the initial exploit, and then, once inside, begins moving laterally, escalating privileges, and exfiltrating data, all without direct human intervention.
This compresses an attack that once took weeks or months into a matter of minutes.

Breaking the Chain: How to Make Yourself Un-breachable

Let's recap the adversary's playbook: Find → Classify → Exploit. Notice a pattern? Every single step depends on one fundamental prerequisite: your internal application must be visible and reachable on the public internet. If an attacker can't find you, they can't classify you. If they can't classify you, they can't exploit you.

Traditional security tried to solve this with better firewalls, WAFs, and VPNs; essentially, by building stronger doors and locks. But as we've seen, adversaries will always find a way to pick the lock or discover a window left open. The only way to win is to change the game entirely. The solution is not a stronger door; it's to remove the door from public view entirely, which means replacing your VPNs and firewalls.

The Zscaler Difference

This is the core principle behind the Zscaler Zero Trust Exchange. Instead of exposing your applications to the internet and hoping your defenses hold, Zscaler makes your applications and internal resources completely invisible. The Zero Trust Exchange operates as an intelligent, inline switchboard that checks identity, device posture, and business policies before connecting the right party (user, application, etc.) to the right party. Here's how:

No Inbound Connections: Your applications and code repositories, whether in the data center or a public cloud, never accept inbound connections. They are not listening on the internet. They have no IP addresses that can be discovered or scanned by any tool. Your attack surface is not just minimized; it's eliminated.

Inside-Out Connectivity: To make services available, a lightweight Zscaler connector, sitting with your applications, establishes an inside-out connection to the Zscaler cloud.
This connection is outbound only, so no inbound firewall rules are ever needed.

Brokered Access: When an authorized user, authenticated and policy-checked by Zscaler, needs to access an application, the Zero Trust Exchange securely stitches the two outbound connections together. The user connects to the application through Zscaler; they never connect to the application directly. Secure, brokered connections are built on a session-by-session basis, following the principle of least-privileged access, and continuously assessed for changes in risk.

An adversary scanning the internet sees nothing. There is no VPN to find, no firewall port to scan, no banner to grab, and no vulnerability to exploit. Your organization is off the public map. Your existing VPNs and firewalls are not the answer: they are built on an architecture that exposes them to the internet, and hence to attackers. Your security stack needs to protect you, not expose you. So look at replacing your existing VPNs and firewalls with a solution that keeps you invisible and reduces your attack surface.

You can't be reachable, because you're not there. And if you're not reachable, you can't be breached. It's that simple.

For a summary and a visual representation, please see this video.]]></description>
            <dc:creator>Akhilesh Dhawan (Sr. Director, Product Marketing - Platform)</dc:creator>
        </item>
        <item>
            <title><![CDATA[If You're Reachable, You're Breachable, Part 2: The Adversary's Second Move – Classifying You]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/if-you-re-reachable-you-re-breachable-part-2-adversary-s-second-move</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/if-you-re-reachable-you-re-breachable-part-2-adversary-s-second-move</guid>
            <pubDate>Sat, 31 Jan 2026 21:49:53 GMT</pubDate>
            <description><![CDATA[In the&nbsp;first part&nbsp;of this three-part series, we explored how adversaries no longer need to hunt for you; they simply consult massive internet-wide scanning databases to&nbsp;find your exposed VPNs, Firewalls and other digital doorways. This provides them with a list of "reachable" IP addresses—the digital equivalent of a list of buildings with unlocked front doors.But finding the door is just the beginning. Before an adversary can attempt to enter, they need to understand what they're looking at. Is it a flimsy wooden door or a reinforced steel vault? Does it lead to an empty janitor's closet or the CEO's office?&nbsp;This is the second, crucial phase of the attack playbook: classification. Now that they've found you, they need to figure out exactly&nbsp;what they've found.From IP Address to Attack Plan: Active ReconnaissanceWhile the "Find" phase was largely passive, classification requires active probing. The adversary begins to interact with your exposed systems to build a detailed blueprint. They use a suite of standard, readily available tools to answer critical questions.1. Which Doors are Open? (Port Scanning)The first step is to see which services are listening on the IP addresses they found. Think of it as an attacker walking up to your digital building and checking every single one of the 65,535 possible doors and windows (ports) to see which ones are unlocked (open).A simple scan reveals which ports are listening. Is port 3389 open, suggesting a Remote Desktop? Is port 22 open, indicating an SSH server for administrative access? Is port 443 open for web traffic? Each open port is a potential attack vector.2. What’s Written on the Doorbell? (Banner Grabbing)Once an open port is identified, the attacker wants to know what service is running behind it. 
Often, services willingly announce themselves through a "banner": a small bit of text sent to any new connection. A banner might look like this: Apache/2.4.29 (Ubuntu) or Microsoft-IIS/10.0. A banner like "Unauthorized Access Prohibited" may confirm a VPN.

This is a goldmine. The banner doesn't just reveal the service; it provides the exact version. This sort of information, along with the frequency at which their vulnerabilities are reported, has made VPNs and firewalls favorites among attackers. An attacker can instantly cross-reference a version with a database of Common Vulnerabilities and Exposures (CVEs) to find a known, exploitable flaw. They've gone from "an open web server" to "a web server vulnerable to CVE-2021-41773," or from "a VPN" to "a VPN vulnerable to CVE-2024-55591."

3. What Kind of Lock Is on the Door? (Fingerprinting)

What if the banner is generic or has been removed? This is where attackers get more sophisticated, using fingerprinting techniques to identify the underlying technology.

TLS/SSL Fingerprinting: The way a server negotiates a secure connection is highly unique. The combination of supported TLS versions, cipher suites, and extensions creates a fingerprint. An attacker can capture this fingerprint and compare it against a database to identify the technology. That generic web server might have a TLS fingerprint that screams the brand and version of a VPN or a firewall, revealing the nature of your security stack.

Web Fingerprinting: For web servers (ports 80/443), some tools go even deeper. They inspect HTTP headers, cookie names, and HTML source code to identify not just the server, but the entire application stack: the content management system, the JavaScript libraries, and even embedded analytics tools.
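The banner-grabbing step above boils down to pattern matching on a short string. A minimal sketch of extracting the product and exact version from a banner, using the example banners from this post:

```python
import re

# Sketch: pull product and exact version out of a server banner --
# the same datum an attacker cross-references against CVE databases.
BANNER_RE = re.compile(r"(?P<product>[A-Za-z-]+)/(?P<version>[\d.]+)")

def parse_banner(banner: str):
    """Return (product, version) if the banner advertises one, else None."""
    m = BANNER_RE.search(banner)
    return (m["product"], m["version"]) if m else None

print(parse_banner("Apache/2.4.29 (Ubuntu)"))          # ('Apache', '2.4.29')
print(parse_banner("Microsoft-IIS/10.0"))              # ('Microsoft-IIS', '10.0')
print(parse_banner("Unauthorized Access Prohibited"))  # None
```

Note the asymmetry: a few bytes of courtesy text from the server is all the structure an attacker needs.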
Each identified component is another potential source of vulnerabilities.

Protocol Analysis: For unusual or custom services, an attacker might use a protocol analyzer to capture and dissect the traffic. This helps them reverse-engineer how the application communicates, looking for weaknesses in the protocol itself, such as unencrypted authentication or predictable session tokens.

The AI Analyst: Supercharging Classification

A skilled human can perform this analysis, but it's slow and requires deep expertise. Once again, AI is a game-changer for the adversary, acting as an automated, super-intelligent analyst. An attacker can now feed the raw data from these tools into an AI model. This model, trained on millions of known device and service profiles, accomplishes two things with terrifying speed and accuracy:

High-Confidence Identification: The AI correlates all the data points (open ports, banners, headers, TLS fingerprints) to make a high-confidence classification. It moves beyond simple signatures to probabilistic analysis. For example: "The combination of this TLS fingerprint, these HTTP server headers, and this login page HTML structure gives a high probability of a specific VPN running a vulnerable version of an OS." This allows attackers to instantly identify your perimeter security devices, which are prime targets for exploitation.

Automated Vulnerability Mapping: The AI doesn't stop at identification. It immediately cross-references the identified service and version with real-time threat intelligence feeds, exploit databases, and even chatter on dark web forums. The output is no longer just a list of services; it's a prioritized list of actionable attack vectors. It tells the attacker not just what you are, but how you are vulnerable, right now.

You Can't Hide What You Expose

The classification phase is where your attack surface goes from being a list of IP addresses to a detailed blueprint for an attack.
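The TLS fingerprinting technique described earlier can be made concrete with a JA3-style sketch: hash the parameters a client (or server) advertises during the handshake into a stable identifier. The parameter values below are illustrative, not captured traffic:

```python
import hashlib

# JA3-style sketch: collapse advertised TLS handshake parameters into a
# stable fingerprint that can be matched against a known-device database.
def ja3_style_fingerprint(tls_version, ciphers, extensions, curves,
                          point_formats):
    fields = [
        str(tls_version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    return hashlib.md5(",".join(fields).encode()).hexdigest()

# Illustrative parameter set; real values come from a captured ClientHello.
fp = ja3_style_fingerprint(771, [4865, 4866], [0, 11, 10], [29, 23], [0])
print(len(fp))  # 32 -- same parameters always yield the same hex digest
```

Because the hash is deterministic, two boxes running the same stack produce the same fingerprint, which is exactly what makes database lookups against it effective.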
Every service you expose to the internet is broadcasting information about itself, and adversaries, armed with modern tools and AI, are listening. They are profiling your web servers, your VPN gateways, your firewalls, and your applications, patiently building a case for how to break in. A majority of enterprises have experienced an attack that started by exploiting a vulnerability in a VPN or firewall device. And moving these devices to the cloud doesn't solve the fundamental issue of exposed public IPs. The concept of public IP addresses for your security stack is incompatible with Zero Trust principles.

This leads to the final, inevitable step. Now that they have found you and classified you, they are ready to exploit you.

For a summary of this information, check out our video. Join me in the final part of this series, where we will dive into the methods attackers use to turn this intelligence into a breach.]]></description>
            <dc:creator>Akhilesh Dhawan (Sr. Director, Product Marketing - Platform)</dc:creator>
        </item>
        <item>
            <title><![CDATA[If You're Reachable, You're Breachable, Part 1: The Adversary's First Move – Finding You]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/if-you-re-reachable-you-re-breachable-part-1-adversary-s-first-move-finding</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/if-you-re-reachable-you-re-breachable-part-1-adversary-s-first-move-finding</guid>
            <pubDate>Sat, 31 Jan 2026 21:38:26 GMT</pubDate>
            <description><![CDATA[In the physical world, we understand security through simple, tangible concepts. We lock our doors, close our windows, and draw the blinds. We know that an open door is an invitation for trouble. In the digital world, however, the doors and windows aren't always so obvious. The most troubling fact is that they are your Firewalls and VPNs. The very devices that you thought were protecting you are now a front door into your organization. They are your attack surface. The continued use of the castle-and-moat security model and network security products such as firewalls and VPNs is putting organizations at risk. This brings us to a fundamental truth of modern cybersecurity: If you are reachable, you are breachable.It’s a simple but powerful premise. Every server, application, or device directly exposed to the internet is a potential foothold for an adversary. This isn't a scare tactic; it's the foundational principle of every modern cyberattack.&nbsp;Over this three-part series, we'll deconstruct the adversary's playbook, which is finding you, classifying you and then exploiting you. Let’s start with the critical first step that makes all others possible: finding you.The Old Playbook vs. The New: Reconnaissance at ScaleIn the past, reconnaissance was a noisy and laborious process. Attackers would run active scans against a target's IP range, "knocking" on digital doors to see which ones were open. It was time-consuming, and it created a lot of noise that could be detected by security teams.Today, the game has completely changed. Adversaries no longer need to knock on&nbsp;your specific door. Instead, they consult global, publicly available directories that have already cataloged every open door, window, and unlocked shed on the entire internet.The tools: The Search Engines of ExposureMeet the adversary's best friends: the tools. Think of these tools not as Google, which indexes web content, but as search engines for&nbsp;devices. 
They continuously scan the entire internet (every single IPv4 and IPv6 address) and index the services running on them. What can they find? Everything.

Vulnerable VPNs and Firewalls: An attacker can search for a specific, vulnerable version of a firewall or a VPN and get a list of every unpatched instance on the internet: a ready-made list of targets.

Exposed Databases: A quick search can reveal databases that are publicly accessible, often without authentication.

Vulnerable Remote Access: They can instantly find servers with exposed Remote Desktop Protocol (RDP) or SSH ports, a favorite entry point for ransomware gangs.

Industrial Control Systems (ICS): Frighteningly, systems controlling water treatment plants, power grids, and manufacturing lines can be found with simple queries.

These tools transform reconnaissance from an active hunt into a passive query. The attacker isn't targeting you; they are targeting a vulnerability. They simply ask, "Show me everyone who is vulnerable to X," and the tools provide a list. If your organization is on that list, you've just been "found."

Enter AI: Reconnaissance on Autopilot

As powerful as these search engines are, the sheer volume of data they provide can be overwhelming. This is where artificial intelligence is becoming the adversary's most powerful force multiplier in the "Find" phase. Attackers are using AI to supercharge their reconnaissance in three key ways:

Hyper-Efficient Pattern Recognition: An AI model can sift through petabytes of data from these tools, public records, and other sources to identify subtle patterns of exposure. It doesn't just find one open port; it can identify an organization's entire external footprint, recognizing naming conventions in subdomains or identifying all assets hosted on a specific cloud provider.

Intelligent Correlation: AI excels at connecting disparate dots.
It can take a list of exposed devices from these tools, correlate it with employee profiles on social media ("show me all network admins at Company X"), and cross-reference that with code snippets leaked on public repositories. This builds a rich, multi-dimensional profile of a target organization, moving beyond simple IP addresses to understand the people and processes behind them.

Predictive Targeting: Most importantly, AI helps adversaries prioritize. By analyzing the data, AI models can predict which of the thousands of exposed services are most likely to be successfully exploitable or lead to high-value assets. It answers the question, "Of these 10,000 potential targets, which 10 offer the path of least resistance to the crown jewels?" This allows them to focus their efforts with surgical precision.

You Must Be Unreachable

The "Find" phase of an attack is no longer a manual effort. It is a continuous, automated, AI-driven process. Your organization's attack surface is being scanned and indexed 24/7, not necessarily by someone targeting you specifically, but by automated systems looking for any opportunity. This is why the traditional castle-and-moat approach of firewalls and VPNs, which tries to protect the perimeter, is failing. The perimeter has dissolved, and the doors are everywhere. In fact, those very VPNs and firewalls that were supposed to protect you have themselves become the front door for attackers. They are plagued with a myriad of actively exploited vulnerabilities. If they are part of your attack surface, they certainly cannot be part of your cybersecurity defense. The only winning move is to make your doors invisible.
The solution is to replace your existing VPNs and firewalls and take your internal applications and infrastructure off the internet entirely, rendering them unreachable and therefore unfindable.

For a summary of this blog and a visual representation, take a look at this video.

Join me in Part 2, where we explore what happens next. Now that adversaries have found you, how do they classify your assets and employees to plot their attack?]]></description>
            <dc:creator>Akhilesh Dhawan (Sr. Director, Product Marketing - Platform)</dc:creator>
        </item>
        <item>
            <title><![CDATA[From Blunt Force to Surgical Precision: Elevating Control in Zscaler Internet Access]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/blunt-force-surgical-precision-elevating-control-zscaler-internet-access</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/blunt-force-surgical-precision-elevating-control-zscaler-internet-access</guid>
            <pubDate>Sat, 31 Jan 2026 18:27:13 GMT</pubDate>
            <description><![CDATA[Search is where work starts. Engineers look for fixes. Analysts look for context. Creative teams look for assets. And in that “normal work” moment, risk can slip in quietly—inappropriate results in a shared environment, accidental IP misuse from a reused image, or controls that don’t scale cleanly across a real org.That’s why in our recent ZIA releases, we’ve rolled out key enhancements to make search governance more precise in three practical ways, so you can shape search outcomes without turning everyday work into a policy negotiation.The goal isn’t “web filtering.” It’s Search Governance: guiding what search produces and what users can safely do with it—consistently, and at scale.&nbsp;It’s exactly what these ZIA capabilities are built to deliver: moving from broad strokes to surgical control, shaping outcomes without breaking workflows.Update 1: Moving SafeSearch From a “Blunt Switch” to Precision GovernanceSafeSearch is one of those controls that looks small on paper but plays big in real life—especially in shared spaces or regulated contexts. However, until now, enforcing it was often a tenant-wide decision: either "On" for everything or "Off" for everything.This created a dilemma: to enforce safety on Google Images, you often had to force the same restrictions on YouTube or Bing, potentially blocking training videos or research material. Admins were stuck effectively "blocking the internet" for specific tools just to maintain compliance elsewhere.What’s new (and why it matters): We have introduced Granular Service Controls for SafeSearch. 
Instead of a global toggle, administrators can now configure SafeSearch with specificity regarding which search engines and services are restricted.

Earlier: Turn SafeSearch on for all traffic.
New: Enforce SafeSearch for Google and Bing, but leave YouTube unrestricted for your marketing team.

Why this is search governance: You're tailoring outcomes for each application, rather than applying broader network restrictions. You also avoid the security risk of bypassing SSL inspection just to unblock a specific search tool.

Update 2: Rights-Safe Reuse With Creative Commons Search Support

A lot of enterprise "risk" doesn't show up as an attack. It shows up as accidental misuse. Creative teams, field marketers, enablement folks, anyone who builds decks, campaigns, training, or customer-facing content, pulls assets from search constantly. And nobody wakes up thinking, "Today I'll create a licensing problem."

What's new (and why it matters): ZIA now supports enabling Creative Commons-focused search results as a governance control. This simple toggle helps steer users toward content designed for reuse in supported search experiences.

Automated Compliance: The search engine ensures results are licensed under Creative Commons, reducing the risk of accidental IP infringement.

Workflow Efficiency: Users stop fighting security to get their job done. They save time manually filtering results, and the business quietly reduces risk.

Update 3: Policies That Scale, Because Pilots Are Easy and Enterprises Are Not

Here's where most good intentions die. You build a clean policy, and then the "org reality" shows up:

- "We need to create an exception policy for more than 32 users or 32 groups."
- "We acquired new companies, and they were managing per-user exceptions."
- "We acquired three companies, and none of their groups map cleanly."

Suddenly, the challenge isn't what the control does.
It's whether you can express it at scale without hitting ceilings or creating rule sprawl.

What's new (and why it matters): ZIA has expanded policy criteria limits to support cleaner, more scalable rule design, so you can represent real organizational structures with fewer fragmented policies. And if you need additional scale beyond the defaults, limits can be expanded further via Support (based on tenant needs). The benefit: less duplication, fewer policy contortions, simpler audits, and governance that stays consistent as the org grows.

The Practical Implementation Playbook

If you want this to read like something an admin could actually run next week, here's the playbook.

1) Pick your governance "north star"

- Workplace-appropriate discovery → lead with SafeSearch
- Rights-safe reuse → lead with Creative Commons
- Consistent enforcement at enterprise scale → lead with policy criteria / segmentation

You'll probably land on all three. But naming the primary goal upfront keeps you from building a policy museum full of exceptions.

2) Confirm prerequisites

If you're trying to govern search-result outcomes, make sure the traffic is actually governable. SSL inspection is usually the dependency that makes or breaks the whole effort.

3) Start with rollout

4) Measure outcomes that humans actually feel

Track:
- reduction in policy exceptions over time
- fewer "why did that show up?" incidents
- fewer internal escalations about content reuse
- admin time saved (because criteria scaling avoids policy gymnastics)

Precision Is the Future of Policy

These enhancements represent our commitment to building a platform that doesn't just secure your traffic, but understands the nuance of your business. By moving away from one-size-fits-all restrictions to granular, precise controls, Zscaler ensures that security remains a business enabler, not a bottleneck. These features are rolling out now.
Log in to your ZIA portal and check your Advanced Policy Settings to start refining your rules today.]]></description>
            <dc:creator>Nishant Kumar (Senior Manager, Product Marketing)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Zscaler Adaptive Access Engine: Turning Logs into Logic]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/zscaler-adaptive-access-engine-turning-logs-logic</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/zscaler-adaptive-access-engine-turning-logs-logic</guid>
            <pubDate>Sat, 31 Jan 2026 14:23:46 GMT</pubDate>
            <description><![CDATA[There’s a quiet misconception in enterprise security that access is static. A one-time cryptographic handshake that holds until a token expires.But entropy doesn’t stop at the login screen.&nbsp;Risk shifts mid-session. Devices drift. Credentials change in the background. Context moves and mutates like a living system.In a hyper-connected environment, a user’s risk profile isn’t static. It oscillates. A user who looks “safe” at 9:00 AM may become a liability by 9:05 AM if their endpoint surfaces a new CVE or their identity provider flags a credential update.Yet static access policies are blind to all of this. They only see a valid token.&nbsp;We Built an Engine for EntropyWhen it comes to modern access, identity, device posture, and user behavior all generate rich signals — the kind that can sharpen decisions dramatically when they’re interpreted together.Picture a user logging in at 9:00 AM. Their SAML/OIDC assertion is clean. Everything looks normal.By 9:04 AM, though:CrowdStrike may drop their ZTA score from 50 → 5Microsoft Defender may detect a new CVEOkta may register a password reset or MFA exhaustion patternZIA may see anomalous download behaviorZPA may observe access to a sensitive private app the user has never touchedUEBA may detect a deviation in behavioral baselinesThese signals need to be automatically propagated to your enforcement points. The opportunity is simple: orchestrate the signals, kill the noise, and wire every tool into one nervous system.Without a central nervous system to aggregate them, you are forced to manage "one-off signal sharing" — building fragile bridges between your IdP and your SSE, or your EDR and your gateway.This is why we built the Adaptive Access Engine—to take this unbounded entropy and turn it into deterministic, enforceable logic.What is Adaptive Access EngineWe designed the Adaptive Access Engine as the real-time logic layer between your telemetry and your enforcement. 
It doesn't replace your policies; it makes them kinetic. It ingests raw telemetry, what we call "Context Nuggets," from Zscaler's own data lakes and from partners like CrowdStrike, Microsoft, and Okta. Then it normalizes that input into a unified risk signal and pushes that context, instantly, to enforcement points like ZIA and ZPA.

The Mechanics of the "Nugget"

Let's look at the architecture. The system relies on a few core concepts that change how you write policy.

1. Turning Signals into Context Nuggets

A Context Nugget is the atomic unit of risk: clean, usable data that your policy engine understands immediately. It associates a subject (user or device) with a specific data point. A Nugget includes:

- Subject: userId, deviceId, originating source IDs (Zscaler, Okta, CrowdStrike, etc.)
- Type: integer, boolean, enumeration, timestamp-based, or composite
- Value: e.g., zta_score=8, credential_change=true, user_risk=High
- Timing: LogTime and StartTime fields captured in the schema

Key design constraints:

- Nuggets must be non-fuzzy. No machine-learning probability fields.
- Nuggets must be deterministic.
- Nuggets must be traceable to a source system.
- Nuggets must be evaluatable at high frequency without ambiguity.
- Nuggets preserve state until TTL expiry or revocation, enabling mid-session enforcement.

A Nugget answers specific questions: Has a user downloaded more sensitive documents than their normal baseline? Has an endpoint's Defender risk level crossed a threshold? Has a user performed five password resets in a week? Did an Okta "credential change" event occur in the last 5 minutes? Is the ZIA user risk score "High"?

Context Nuggets are explicit, logical, and built for evaluation: integers, enumerations, booleans. Nothing fuzzy. Nothing ephemeral. Nothing that breaks policy logic.

2.
Combining Nuggets into Adaptive Access Profiles

Here’s where Zscaler made an architectural leap. The Adaptive Access Engine lets admins express the conditions that matter, combining multiple signals into one reusable definition. Instead of embedding risk logic inside hundreds of ZIA/ZPA rules, the Adaptive Access Engine introduces Adaptive Access Profiles — reusable logical objects constructed from Nuggets.

A profile is essentially a Boolean expression tree — for example, (zta_score < 10 AND new_cve = true) OR credential_change = true.

Why this matters:

- Profiles decouple context evaluation from policy evaluation.
- ZIA/ZPA don’t need to know how to interpret Okta or CrowdStrike models.
- Profiles act as a semantic layer — one definition, many policy surfaces.

This is the same model used by modern policy engines (OPA, Cedar), but implemented at Zscaler scale and optimized for inline, per-request evaluation.

3. Distribution Pipeline: How Enforcement Points Receive Context

When a profile evaluates to true for a user/device, the Context Engine publishes an applicability message — a record naming the subject, the applicable profile, its state, and a TTL. This means ZIA/ZPA enforcement engines always hold a current, in-memory view of:

- applicable profiles
- Nugget state
- TTLs
- versioned changes

There are no API calls at enforcement time. No round trips. No synchronous dependencies. This is what makes it scalable.

4.
Enforcement: Inline, Per-Request, Real-Time

On ZIA, profiles appear as a first-class criterion in URL Filtering and Cloud App Control. When traffic hits the ZEN, the engine evaluates:

- URL/app category
- user identity
- device identity
- policy match
- profile applicability (from the Adaptive Access Engine)

An enforcement action is then taken (allow, block, isolate, or step-up if tied to another system).

On ZPA, the evaluation model is similar:

- Connector path
- private app segment
- identity provider mapping
- device trust
- profile applicability

Private app access adapts based on signals, just like internet/SaaS traffic.

Mid-Session Adaptation

This is the major technical unlock: if a user’s context changes at T+17 seconds, ZIA/ZPA adapts at the very next request. There is no need to wait for session expiry. This is the part most SSE vendors cannot replicate, because their enforcement model is not inline.

Keeping the Human in the Loop

We know that automation without observability is dangerous. A “High Risk” flag shouldn’t always mean a hard block, especially for a CEO traveling for a keynote. We built the Adaptive Access Engine with the ability to override context, which puts the controls back in your hands. If the system flags a user as risky but you know the context (e.g., a known travel scenario), you can manually override that specific signal for a set duration (e.g., 24 hours). It keeps the system fast while keeping the operator in command.

What This Unlocks for the Enterprise

- Consistent cross-surface context semantics: ZIA and ZPA now consume identical context objects. No more rewriting posture logic in two places.
- Immediate availability of new context types: No more multi-system upgrade cycles.
New context types become usable immediately.
- Third-party integrations without custom plumbing: CrowdStrike, Defender, Okta, UEMs — integrated through consistent ingestion, not bespoke pipelines.
- False positives don’t break access anymore: Admins can override incorrect signals centrally.
- Policy sprawl collapses into reusable profiles: Instead of editing 2,000 rules, admins modify a single profile.
- Policies adapt mid-session: Access isn’t static — it reflects the real world’s fluctuations.

And all of this sits on the Zero Trust Exchange, without adding new appliances, latency, or operational drag.

Want to learn more? Speak to our experts.]]></description>
            <dc:creator>Nishant Kumar (Senior Manager, Product Marketing)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Beyond The Crown Jewel Fallacy: Making Segmentation Work for Your Business]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/beyond-crown-jewel-fallacy-making-segmentation-work-your-business</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/beyond-crown-jewel-fallacy-making-segmentation-work-your-business</guid>
            <pubDate>Fri, 30 Jan 2026 22:04:21 GMT</pubDate>
            <description><![CDATA[In Zero Trust conversations, there’s a familiar story many organizations tell themselves. It starts with identifying the most critical applications — the “crown jewels” — and surrounding them with a ZTNA solution. Access is locked down, dashboards turn green, and on paper, least-privilege access looks like mission accomplished.

But this story is incomplete. Focusing only on crown jewels is one of the most dangerous and pervasive myths in cybersecurity today. It gives a false sense of security while leaving the majority of your environment exposed to lateral movement. Securing your most valuable assets is a critical first step, but it’s a dangerous fallacy to believe that this alone delivers a complete segmentation strategy.

The Fallacy: Partial Protection Is a Full-Time Risk

Think of your enterprise network like a house. The crown-jewel approach is like installing a state-of-the-art vault door on the master bedroom while leaving the front door, windows, garage, and back door wide open. An attacker won’t waste time trying to breach the vault. They will simply walk in through an open window instead, targeting “non-critical” applications that are unprotected. Once inside, they have free rein to move laterally across your network, turning a small breach into a catastrophic data leak. They can locate and steal your intellectual property and business records, while also establishing a foothold for a future ransomware attack.

Modern attacks rarely start where you’ve invested the most security. They start where you’ve invested the least.
By concentrating your efforts solely on a small set of crown-jewel applications, you often leave the vast majority of your potential attack surface open:

- Unsegmented – users and workloads can reach far more than they should
- Under-monitored – “low-value” apps get less visibility and fewer controls
- Ideal launchpads – perfect footholds for ransomware and data exfiltration

The Operational Nightmare: Why Manual Segmentation Fails at Scale

If pervasive segmentation is the goal, why does everyone get stuck at the crown jewels? Because for most organizations, the operational reality of scaling segmentation is an absolute nightmare.

When AJ Sofia, our CTO in Residence, meets with security leaders and customers, he often starts with a simple question: "How many applications are in your environment?" The answers are revealing. A CISO might say 400. Someone on their network team might say the real number is closer to 4,000.

This ten-fold gap highlights the three core reasons why manual segmentation is a failing strategy:

- The Discovery Problem: You can’t secure what you can’t see. Manually identifying every application and mapping every user-to-app affinity across a dynamic enterprise is an impossible task.
- The Policy Problem: Even if you develop tools and manage to discover everything, manually writing and vetting thousands of granular, identity-based policies leads to “segmentation by spreadsheet” — a process so slow, painful, and error-prone that it’s often abandoned early.
- The Maintenance Problem: In a modern business, users change roles, new apps are deployed, applications scale horizontally — meaning new instances spin up and down automatically — and old ones are retired daily.
Manually created policies are outdated the moment they’re written, creating security gaps or breaking user access.

The Paradigm Shift: From Manual Effort to Automated Intelligence

This is not a problem you can solve with more people, more processes, more spreadsheets, or bigger change-control meetings. What’s needed is a shift in how we think about segmentation itself: from a manual project to a strategic, automated, continuous process.

Instead of asking, “How can my team write and manage thousands of policies?”, we should be asking, “How can my platform automatically discover every application, use AI to help segment access and generate policy at scale, and continuously strengthen my security posture?”

That’s where an autonomous approach to segmentation comes in. In this model, segmentation stops being a one-time initiative and becomes a native capability of your secure private access platform — constantly learning from your real user traffic and adapting as your environment changes. The answer lies in an architecture where segmentation isn’t a one-time, manual project but an automated, continuous process. In this model, an AI engine helps you:

- Automatically discover all the unmanaged and unknown applications across your environment
- Intelligently segment applications and generate policy recommendations based on business context and risk
- Continuously optimize through live insights dashboards that highlight gaps, trends, and opportunities to strengthen your posture

A key determinant of segmentation success is your ability to continuously monitor access and enforce true least privilege at all times. This flips the model from one of overwhelming human effort to one of intelligent, autonomous control, finally making enterprise-wide segmentation a practical reality.

Go Deeper: Join the Webinar

The move from partial protection to total segmentation is the most critical step in maturing your Zero Trust architecture.
In our upcoming webinar,&nbsp;Beyond the Datasheet: The Autonomous Journey to User-to-App Segmentation, we will take a deep dive into the architectural principles that make this possible.We’ll explore the AI engine in action, discuss the future roadmap for autonomous policy, and provide a CTO's perspective on building a security posture that is both more comprehensive and far simpler to operate.The era of partial, manual segmentation is over. The future is autonomous.]]></description>
            <dc:creator>Olivia Vort (Senior Product Marketing Manager)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Why Zero Trust Is Essential for Financial Institutions]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/why-financial-institutions-should-adopt-zero-trust</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/why-financial-institutions-should-adopt-zero-trust</guid>
            <pubDate>Thu, 29 Jan 2026 18:16:20 GMT</pubDate>
            <description><![CDATA[For financial services providers, the stakes are higher today than ever before. AI innovation and a permanently hybrid workplace are overwhelming our traditional security architectures. What used to protect us has become a millstone: it makes IT more complex, degrades the user experience, and ultimately even opens doors to new risks.

As IT and security professionals, it is in our hands to actively shape this transformation. The old approaches are simply no longer sufficient to meet the new challenges.

The central challenge: an outdated hub-and-spoke architecture

For decades, our networks have been based on the classic hub-and-spoke model. All traffic — whether from branch offices, mobile employees, or home offices — was laboriously backhauled to a central data center. Only there did the data pass through a series of security solutions such as firewalls, IPS, and sandboxes before finally reaching its actual destination.

Today, this model creates three massive problems:

- Poor user experience: Backhauling traffic — often called “hairpinning” — introduces significant latency. For users who need to access cloud and AI applications, these frustrating delays mean lost productivity and declining satisfaction.
- Increased risk: This model grants too much trust after initial authentication. Once an attacker gets past a firewall or VPN — or a user with an infected device gains access — they can move unhindered across the entire network. This exposes all confidential corporate data and intellectual property to massive risk.
- Difficult audits and compliance hurdles: Limited visibility and complex firewall rules make audits and regulatory compliance extremely difficult.
Moreover, it is nearly impossible to verify, across numerous point solutions, whether security policies are being enforced consistently.

The solution: a Zero Trust architecture

To clear these hurdles, we need an entirely new way of thinking about security: Zero Trust. Its guiding principle is: do not trust the network; verify every access — always and everywhere. Zero Trust turns the internet into your new corporate network and strictly decouples applications from the network.

Instead of granting users access to the entire network, Zero Trust connects them directly to the specific application. This connection is brokered by a cloud-native exchange service that sits between user and application and enforces policy based on identity and context. A Zero Trust architecture thus makes internal applications completely invisible on the internet, so they can be neither discovered nor attacked. A decisive advantage: because users never receive direct access to the corporate network, lateral movement of threats is consistently prevented.

Key use cases for financial institutions

Implementing a Zero Trust architecture delivers immediate, measurable benefits for the security strategy of financial services providers. The key points we explore in depth in our guide are:

- Protection against zero-day attacks: Real-time, inline inspection of all traffic lets financial services providers proactively block zero-day threats as well as attacks on already known vulnerabilities.
- Reduced ransomware risk: The Zscaler Zero Trust Exchange™ platform rigorously enforces the principle of least privilege and makes corporate resources invisible. This prevents lateral movement of threats across the network.
Even if an initial compromise does occur, the potential impact on financial organizations is kept to a minimum.
- Prevention of account takeover: By continuously verifying the security posture of users and devices in real time, Zscaler detects suspicious behavior immediately. Financial institutions can thus prevent attackers from taking over accounts and abusing them for fraudulent transactions.
- Prevention of data exfiltration: By implementing granular access controls that define exactly who can access which data under which conditions, and by deploying inline data loss prevention (DLP) capabilities, organizations can significantly reduce the risk of unauthorized data exfiltration.
- Simplified compliance and audit processes: By fundamentally improving security and visibility, Zero Trust makes it far easier to meet regulatory requirements and to demonstrate compliance to auditors and insurers.

Learn more in the whitepaper

Moving away from a network-centric security model is an essential step for any modern financial institution. Our whitepaper gives you a compact overview of today’s challenges, the right solution, and best practices for implementing a future-ready Zero Trust architecture.

Learn all about best practices and concrete real-world examples: download our whitepaper, Gestärkte Cybersicherheit im Finanzwesen mit Zero Trust (Strengthening Cybersecurity in Financial Services with Zero Trust). Hear first-hand how Zscaler customers have transformed their security and how you can make your IT infrastructure more modern, agile, and efficient.]]></description>
            <dc:creator>Akhilesh Dhawan (Sr. Director, Product Marketing - Platform)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing the Zscaler Automation Hub and Other OneAPI News]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/announcing-zscaler-automation-hub-and-other-oneapi-news</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/announcing-zscaler-automation-hub-and-other-oneapi-news</guid>
            <pubDate>Thu, 29 Jan 2026 13:07:16 GMT</pubDate>
            <description><![CDATA[Before we dive into the latest OneAPI news, we need to answer a simple question for the uninitiated: what even is OneAPI?

OneAPI is the single application programming interface (API) for the entire Zscaler Zero Trust Exchange platform. It provides programmatic access that enables integrations with any Zscaler solution, and it lets admins deploy and manage Zscaler from the tools they already use to oversee their IT products.

This programmatic access (meaning access via code) also allows organizations to embrace Zero Trust Automation and make their use of Zscaler more autonomous. As one example, customers can use OneAPI to automate change implementation, such as policy configuration. As another, they can use OneAPI to automate the retrieval of Zscaler analytics data and the creation of custom reports and dashboards. Overall, this reduces the need to spend time on manual tasks, minimizes the possibility of human administrative errors, and enhances scalability, precision, and security.

So, how are we making things even better?

The Zscaler Automation Hub

Customers want implementing automation with Zscaler to be a simple and painless process — they want it to be as easy as possible to find API specifications, code samples, and more. To fulfill that desire, we’ve launched the Zscaler Automation Hub. This all-in-one resource provides everything that organizations need to streamline the setup of automation via OneAPI.
It does this by providing:

- An AI-powered copilot that answers questions and surfaces relevant content
- Collections of code snippets for basic tasks, like pulling data about policy violations
- Playbook templates to automate multi-step workflows, like deploying App Connectors
- Comprehensive help documentation that includes API specifications, rate limits, getting-started guides, sample use cases, and more

By centralizing these resources, the hub helps organizations reduce the effort needed to automate their use of Zscaler. They can save time by eliminating manual setups. They can improve efficiency by leveraging workflows that can be repeated for different use cases. And they can enhance ROI by minimizing management overhead and freeing up admins to focus on value-creating work. To see the hub for yourself, you can visit it at automate.zscaler.com or watch a demo here.

Additional API coverage for configuration and management

OneAPI is constantly evolving to provide broader coverage of public APIs that can be used to configure and manage Zscaler products. Recent updates let admins leverage OneAPI to administer:

- Zscaler Internet Access (ZIA) PAC files
- Client Connector forwarding and app profiles
- Zscaler Private Access (ZPA) app protection
- Shadow IT reporting
- SSL inspection policies
- And much more that you can explore here

Analytics for key ZIA data domains

Through GraphQL, OneAPI provides programmatic access to key Zscaler analytics. With this capability, customers can now pull Zscaler Internet Access data to build custom dashboards, analyze trends, and extract insights related to:

- SaaS security
- Shadow IT
- Internet of things (IoT) security
- The Zscaler Zero Trust Firewall
- Cybersecurity posture
- Web traffic behavior across their organization

Automation for ZIdentity configurations

Like other Zscaler solutions, the Zscaler authentication service, ZIdentity, can be accessed programmatically via OneAPI.
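Mechanically, this kind of programmatic access boils down to calling REST endpoints with an OAuth bearer token. Below is a hedged sketch of that general pattern only; the base URL and path are hypothetical placeholders, not documented OneAPI endpoints — consult the Automation Hub for the real specifications:

```python
import urllib.request

# NOTE: illustrative placeholders only -- not documented OneAPI endpoints.
BASE_URL = "https://api.example-zscaler.test"   # hypothetical base URL

def build_request(token: str, path: str) -> urllib.request.Request:
    """Build an authenticated GET request in the OAuth bearer-token
    style that token-based REST APIs generally expect."""
    return urllib.request.Request(
        BASE_URL + path,
        headers={
            "Authorization": f"Bearer {token}",  # token from your OAuth client
            "Accept": "application/json",
        },
    )

# Building (not sending) a request for a hypothetical analytics path:
req = build_request("MY_TOKEN", "/analytics/example")
```

In practice you would use the official SDKs or Postman collections mentioned below rather than hand-rolling requests; the sketch only shows what those tools do on your behalf.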
As a result, admins can now let automation manage users and groups in Zscaler, as well as API clients. More details on how this works can be found here.

Where to go from here

Want to start your Zero Trust Automation journey with Zscaler? Visit the Automation Hub and, in particular, look at our official SDKs and Postman collections, which can help you get up and running quickly.]]></description>
            <dc:creator>Jacob Serpa (Sr. Product Marketing Manager)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Zero Trust Branch: Redefining Connectivity]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/zero-trust-branch-redefining-connectivity</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/zero-trust-branch-redefining-connectivity</guid>
            <pubDate>Wed, 28 Jan 2026 08:51:45 GMT</pubDate>
            <description><![CDATA[In Part 1, we explored why traditional network-centric architectures struggle to scale in modern enterprise environments. Layering security controls onto broadly connected networks increases complexity, expands the attack surface, and creates operational friction, particularly as organizations adopt cloud services, integrate IoT/OT, and respond to faster-moving threats. These limitations are structural, not tactical, and cannot be resolved by adding more segmentation, firewalls, or overlays.

This part introduces Zero Trust Branch as an architectural reset, one that separates connectivity from trust to reduce risk, simplify operations, lower cost, and improve performance at the enterprise edge.

Introducing Zero Trust Branch (ZTB)

Zero Trust Branch (ZTB) reimagines the branch network by decoupling connectivity from trust. Instead of extending the corporate network to the branch, it connects users, devices, and apps by leveraging the Zero Trust Exchange. At its core:

- Every device is placed in a microsegment, or “network-of-one”
- Devices cannot directly see or communicate with each other: nothing is trusted by default
- Sessions between sites are authenticated and brokered by the Zero Trust Exchange

This eliminates uncontrolled peer-to-peer communication, dramatically reducing lateral movement and the internal attack surface. With no traditional inbound connections from the internet, the external attack surface is also minimized.

ZTB automatically discovers, fingerprints, and classifies devices — whether end-user devices, servers, or IoT/OT — enforcing policies based on identity and behavior rather than relying only on spoofable MAC addresses, static IPs, or cumbersome inventories. East-west and north-south traffic is policed with granular security, applied without agents, ACLs, or LAN redesign.
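As a toy illustration of the “network-of-one” idea (not Zscaler's implementation — the identities and app names below are made up), communication is denied unless an explicit identity-based policy permits it:

```python
# Deny-by-default policy table: only explicitly allowed
# (source identity, destination app) pairs may communicate.
# Identities and app names are made-up examples.
ALLOW_POLICIES = {
    ("hvac-sensor-17", "bms-collector"),
    ("pos-terminal-3", "payments-api"),
}

def can_communicate(src_identity: str, dst_app: str) -> bool:
    """Microsegmentation check: deny unless business policy allows."""
    return (src_identity, dst_app) in ALLOW_POLICIES
```

The key design point is the default: a compromised HVAC sensor simply has no path to the payments API, because no policy grants one.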
With Zero Trust Branch, business partners and external suppliers connect only to the resources they need, through the Zero Trust Exchange, based on their identity and the principle of least privilege:

- If they are compromised, they are not on your network, and the Zero Trust Exchange is between you and them
- The complexity of VPNs and jump hosts can be removed

Similarly, because application access is decoupled from network access, mergers and acquisitions (M&A) activities become faster and more streamlined, with no need to worry about overlapping IP addresses: you integrate companies without integrating networks, which results in shorter time to revenue for the business.

Effectively, each branch, factory, or cloud location functions as a “virtual island,” where business policies dictate exactly which users, workloads, and devices can communicate, ensuring consistent least-privilege enforcement. Deployment can be completed in hours with zero-touch provisioning — no need to reconfigure the whole LAN or plan for downtime — enabling rapid business agility.

The results are:

- Reduced complexity and operational overhead
- Lower costs
- Minimized blast radius for attacks
- Significantly reduced lateral movement

How ZTB Differs from Traditional SASE and SD-WAN

Traditional SASE solutions often combine SD-WAN with cloud-delivered security, but the underlying network assumptions remain similar: routing overlays, full meshes, firewall-centric segmentation, and inbound VPN constructs. ZTB differs in several key ways:

- Minimized attack surface: Internal devices cannot see each other. No inbound services are exposed on the public internet.
- Automatic device discovery and classification: Simplify policy management by automatically grouping devices based on behavioral identity. Avoid complex inventory management.
- Identity-driven communication: Policies are enforced based on device and user identity, not IP addresses or VLANs. No transitive trust or shared broadcast domains.
- No routable overlay: Sessions
between sites are brokered by the Zero Trust Exchange. Every session is authenticated and authorized.
- Native east-west segmentation without VLAN/ACL/agent complexity: Zero Trust is applied within the branch, not just at the perimeter. Segmentation is policy-driven rather than network-engineered.
- Unified security and connectivity: ZTB integrates seamlessly with the Zero Trust Exchange, providing consistent visibility and policy enforcement for SaaS, private apps, cloud workloads, and branch devices.

Business and Security Impact

Zero Trust Branch addresses the inherent weaknesses of legacy connectivity and segmentation architectures by design:

- Reduces the attack surface and the risk of lateral movement
- Simplifies segmentation, allowing for deployments in days, without VLAN changes or downtime
- Consolidates legacy infrastructure: no additional branch firewalls or point products
- Aligns operations around identity and policy, and delivers consistent security policies for users, devices, and apps

The outcomes:

- Lower cyber risk: stop ransomware spread
- Lower cost and complexity: fewer appliances and tools to manage
- Higher business agility: deploy in days, and integrate sites and companies without worrying about IP address conflicts
- Better user experience: eliminate backhaul to central security stacks at DC or colo sites and provide the shortest path to resources

For CISOs, architects, and IT leaders, ZTB represents more than just a product; it is a new architectural paradigm. This branch model is purpose-built for the cloud era, for today’s dynamic threat landscape, and fundamentally for Zero Trust.

If you want to learn more about how to architect a cafe-like branch, join our webinar on February 4.]]></description>
            <dc:creator>Andrea Polesel (Principal Transformation Architect)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building Visibility to Enable Secure Healthcare AI Adoption]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/building-visibility-enable-secure-healthcare-ai-adoption</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/building-visibility-enable-secure-healthcare-ai-adoption</guid>
            <pubDate>Tue, 27 Jan 2026 22:53:44 GMT</pubDate>
            <description><![CDATA[Generative AI isn’t just a buzzword in healthcare anymore — it’s table stakes. Physicians, nurses, and analysts are tapping into generative AI to transform patient care. Whether it’s summarizing notes into a patient record, coding faster with AI assistants, or automating time-consuming documentation, the technology promises massive improvements in operational efficiency and clinical accuracy.

But as healthcare embraces AI, most organizations are flying blind. Your staff isn’t waiting for enterprise rollouts — they’re solving problems right now. There’s the cardiologist using ChatGPT to streamline discharge summaries, the nurse with a “smart” summarization tool, or the analyst uploading “anonymized” electronic health record (EHR) exports to a coding assistant. In every case, they’ve jumped ahead. Unfortunately, what they see as innovation, your network sees as risk.

This is the uncomfortable truth: AI users in your organization may have just triggered your next biggest security incident. Here’s why — and how to fix it.

Shadow AI: The Elephant in the Room

Every healthcare leader knows that AI adoption is happening: more than 60% of organizations are already piloting or implementing enterprise AI solutions. But here’s the problem: the real number is likely much higher, because shadow AI tools — AI systems adopted by users without enterprise approval — are flying under the radar.

When one healthcare organization deployed inline WebSocket inspection, it discovered 31 unique AI tools in use within 72 hours. None of them had been approved, evaluated for compliance, or configured to safely handle Protected Health Information (PHI). AI-related traffic across enterprises has increased 3,000% over the last year, and 10–20% of that traffic already violates policies.
This widespread activity creates significant blind spots for security teams — and significant opportunities for attackers.

Shadow AI Risks Are Rising

AI has brought unprecedented opportunities, but it has also introduced unique risks. Without visibility into which tools are being used and how your people interact with AI, you risk:

- PHI exposure: Shadow AI users may unintentionally upload sensitive patient data, creating major compliance risks.
- Vulnerability to AI-related attacks: Threat actors are using AI for sophisticated phishing campaigns, compromise tactics like prompt injection, and exploiting organizational blind spots. AI-fueled attacks jumped 146% from 2023 to 2025, with healthcare data theft rising 92%.
- Regulatory fines: With updated regulations like the proposed HIPAA Security Rule and the HITRUST AI Security Framework, compliance gaps related to AI adoption could lead to millions in penalties.

Shadow AI isn’t a future problem. It’s happening in your organization now.

Why WebSocket Blindness Keeps You in the Dark

Most security teams already rely on SSL/TLS inspection for visibility. While this approach may work for traditional web traffic, it isn’t suited to generative AI platforms like ChatGPT, Microsoft Copilot, Claude, or Google Gemini. These modern platforms don’t communicate in the simple HTTPS request/response formats you’re used to inspecting. Instead, they rely on WebSockets — persistent, bidirectional connections that continuously stream complex payloads. This creates a black box for organizations without inline WebSocket inspection.
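To make that concrete, here is a minimal sketch of the kind of pattern matching an inline inspector can apply once it can actually see a frame's text payload. The patterns are deliberately simplified: the ICD-10 regex is only a rough approximation of code shape, and the MRN format is a hypothetical example; production DLP combines dictionaries, NLP, and context:

```python
import re

# Simplified detectors for illustration only. The ICD-10 pattern is a
# rough approximation of code shape (e.g., "E11.9"); the MRN pattern
# is a hypothetical format, not a real standard.
PATTERNS = {
    "icd10": re.compile(r"\b[A-TV-Z]\d[0-9A-Z](?:\.[0-9A-Z]{1,4})?\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b"),
}

def scan_frame(text: str) -> set:
    """Return which sensitive-data categories appear in one
    WebSocket frame's text payload."""
    return {name for name, rx in PATTERNS.items() if rx.search(text)}

prompt = "Summarize the discharge note for MRN: 00123456, dx E11.9"
```

Without payload visibility, a prompt like the one above is just opaque bytes inside an approved-looking TLS session; with it, both the record number and the diagnosis code can be flagged before they leave the network.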
Your firewall may flag a session to an AI domain, but it won’t reveal what’s inside that session.

Without WebSocket inspection, you miss:

- User attribution: Who sent the prompt?
- Sensitive content: PHI, MRNs, and ICD-10 codes embedded in AI requests.
- Risks in action: Prompt chaining, jailbreak attempts, or hallucinated clinical recommendations.

With WebSocket inspection, you gain:

- Full prompt and response visibility in real time.
- Identification and blocking of policy violations before sensitive data leaves your network.
- Attribution of AI sessions tied to users and devices, for rich audit trails.
- Detection of risky or malicious prompt activity.

In short, WebSocket inspection transforms AI-related blind spots into protected environments where you can allow safe use of AI without compromise.

Governance and Innovation: Striking the Balance

Blocking AI outright isn’t realistic. Your clinicians, analysts, and staff will find ways to adopt tools — often through less secure methods that increase risk. Instead, organizations need to embrace AI responsibly by anchoring their governance model in Zero Trust principles.

Step 1: Focus on Visibility First

- Deploy WebSocket inspection to see the tools and data your staff are already using.
- Monitor prompts at the application level with full attribution (who, when, what).
- Flag risky patterns like jailbreak attempts or PHI-laden queries in real time.

Step 2: Govern Approved AI Solutions

- Build a structured approval process for generative AI tools, defining requirements for data retention, licensing, and compliance certifications like HITRUST or HIPAA.
- Explicitly block unsanctioned AI tools and browser extensions at the network level while enabling access to approved solutions.

Step 3: Secure the Data

- Use contextual detection, such as regular expressions or natural language processing (NLP), to identify and block sensitive data (e.g., SOAP notes, clinical codes, or names) from being transmitted accidentally.
- Build immutable audit trails for all AI-related activity,
enabling continuous improvement and compliance reporting.The Bottom Line: AI Can’t Come at the Cost of SafetyYour people are excited about AI—and for good reason. From saving hours on documentation to improving diagnostic processes and reducing errors, generative AI offers healthcare organizations incredible potential. But adoption must come with safety, visibility, and governance.With inline WebSocket inspection and a Zero Trust approach, you can:Protect PHI while enabling safe AI-driven workflows.Identify and block shadow AI usage without stifling innovation.Comply with emerging regulations and maintain trust with patients and stakeholders.Generative AI is inevitable. The question isn’t whether your organization will use it; the question is whether you’ll use it securely. Your first step to building a safer, AI-enabled future starts with visibility.Download our eBook to learn more about how you can secure AI while enabling innovation.]]></description>
            <dc:creator>Steven Hajny (Healthcare Principal Sales Engineer)</dc:creator>
        </item>
        <item>
            <title><![CDATA[The "Control" Trap: 3 Reasons Your Legacy Firewall Can’t Keep Up (And Why You Think It Can)]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/control-trap-3-reasons-your-legacy-firewall-can-t-keep-and-why-you-think-it</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/control-trap-3-reasons-your-legacy-firewall-can-t-keep-and-why-you-think-it</guid>
            <pubDate>Tue, 27 Jan 2026 08:18:41 GMT</pubDate>
            <description><![CDATA[There is a specific kind of psychological comfort associated with on-premises firewall appliances. The hum of the cooling fans, the perfectly dressed cables, and the rhythmic blinking of green LEDs create a reassuring illusion: if traffic crosses this box, it’s controlled.I get why organizations hesitate to go all-in on a cloud-native proxy architecture. Letting go of the box feels like letting go of the wheel. But clinging to the appliance model is no longer the conservative choice; it is an active acceptance of gaps.Let’s dismantle the three persistent myths that keep organizations tethered to the appliance model.The Reality: Centralized enforcement only works when traffic reliably transits that choke point. Topology drift has rendered the physical perimeter porous. Users originate from diverse remote networks, and applications reside in SaaS and public cloud VPCs/VNETs rather than a single data center. Consequently, the on-premises legacy firewall inspects a statistically shrinking slice of enterprise traffic. To maintain usability, operations teams are frequently forced to implement split-tunnelling and route exceptions for high-bandwidth applications - effectively removing policy enforcement from the highest-volume paths.The illusion of control further collapses under the weight of modern protocols such as TLS 1.3, HTTP/3 over QUIC, and WebSockets with persistent, multiplexed flows that demand sustained compute power, not burst capacity. The legacy firewall suffers from performance challenges:TLS interception is expensive per flow: session setup, key operations, decryption/re-encryption, certificate validation/rewriting, plus full content scanning (IPS, malware, sandbox detonation, DLP, CASB controls) are CPU-intensive tasks. 
Firewall appliances cannot scale with the needs of your organization.Feature stacking compounds cost: Enabling SSL inspection, IPS, sandboxing, and DLP materially increases CPU cycles, memory pressure, and queue depth. As legacy firewalls hit CPU saturation, latency climbs and throughput collapses.Operational reality: When the appliances hit limits, your teams reduce coverage via category exclusions, app bypasses, and quick-fix exceptions. That creates predictable blind spots - exactly where attackers concentrate.The on-premises appliance carries inherent security risks. These firewalls are exposed assets because of their public IP addresses, which are routable and continuously scanned from the internet.Management-plane and data-plane vulnerabilities are repeatedly weaponized in the wild. Your teams spend significant time patching the software to stay current against known threats.If an appliance is compromised, the impact on your organization is high because it often sits adjacent to broad network segments and becomes a pivot point.What “better control” looks like nowA cloud-delivered Zero Trust architecture removes the inbound attack surface entirely. Users establish outbound sessions to the service where policy is enforced, and private applications are accessed via outbound connectors without public exposure. True control today is defined by policy consistency and inspection depth, not by the ownership of the box processing the packets.The Reality: If the problem is architectural (distributed egress + encrypted traffic + fixed capacity), running the same appliance as a VM in a public or private cloud environment doesn’t change the physics - it just changes the hosting location.You still inherit the full appliance lifecycle: VM firewalls still require OS/image hardening, vulnerability management, emergency patching, upgrade testing, rollback plans, and maintenance windows. 
High Availability remains stateful and fragile in public cloud environments.&nbsp;At cloud scale, this pattern also breeds&nbsp;image sprawl and&nbsp;configuration drift across regions and accounts.Scaling is still engineering work, not elasticity:&nbsp;When traffic grows or when the magnitude of inspection increases, you still hit performance ceilings. “Scale” with VMs means instance sizing, provisioning new nodes, tuning load balancers, and rewriting routes to preserve symmetry. When CPU cycles are saturated in individual VMs or across a cluster of VMs, you see latency, session drops, and selective inspection bypass, not a clean autoscale outcome.The architecture stays network-centric, so lateral movement persists:&nbsp;Appliance models enforce network boundaries. If users/workloads retain subnet/port reachability, compromise becomes inevitable. In the classic kill-chain, once the network has been breached, lateral movement follows. Micro-segmentation can reduce blast radius, but in appliance-centric designs, your security often devolves into distributed Access Control Lists, policy sprawl and region-by-region duplication.What changes with cloud-native securityA cloud-native enforcement fabric is delivered as a managed, multi-tenant service: the provider owns patching, scaling, and High Availability. Policy decisions are identity/device/context-driven and enforced consistently for internet, SaaS, and private apps. Critically, access is&nbsp;app-specific. There are no network-routable apps. Apps are not discoverable and lateral movement paths do not exist.The Reality: In a distributed world, the&nbsp;opposite is true. Your legacy architecture is the bottleneck.In hub-and-spoke designs, users often tunnel to a central data center for inspection, then exit to the internet - regardless of where the destination actually is. 
That creates the classic hairpin path: a user in London routes to a firewall in New York, then back to a SaaS front door that might be in London. You’ve added distance, congestion points, and failure domains before you even start the application session.The penalty compounds because latency isn’t one number - it is how many times you pay the round-trip tax:TCP handshakeTLS handshake (often multiple RTTs, plus cert validation)App negotiation (HTTP/2/3, auth redirects, token exchanges)Long-lived flows (WebSockets, streaming, GenAI responses) that magnify jitter and lossSo the real question isn’t “proxy or not.” It is: where is the first security decision made relative to the user?The Cloud AdvantageA properly built cloud edge model makes the first enforcement point local.Users connect to the nearest PoP, so the “security hop” is a few milliseconds away.Policy is enforced at that edge, then traffic rides optimized peering paths to the destination (SaaS/IaaS).Net result: you typically remove the backhaul hop rather than add a new one - fewer transits, fewer choke points, better p95 experience for SaaS.Caveat (the part people confuse): If your “proxy” is just a VM cluster in one region, it will behave like the old model and be slow. That’s a failure of the architecture, not an inherent property of proxying.The Bottom Line: Redefining ControlMoving to SSE isn’t surrendering control. It’s shifting control from infrastructure ownership to policy enforcement.You can continue to operate legacy firewall appliances with or without hypervisors, managing images, HA pairs, route tables, patch cycles, and capacity events. Or you can operate based on intent: who can access what, under which conditions, with inspection and logging applied the same way everywhere.One model scales people problems. The other scales security outcomes.How to evaluate your legacy firewall appliancesRun three tests. 
They’ll tell you more than any vendor deck:Encrypted reality testIncrease TLS decryption/inspection coverage. Track p95 latency, breakage rates, and the number of forced exclusions needed to stay stable.Operations truth testInventory what you still own: OS/image patching, HA design, scaling events, routing symmetry, policy replication, and troubleshooting paths across regions.Path and experience testTrace flows by geography and app. Measure RTT and p95 to your top SaaS/private apps with security on/off, and confirm where the first enforcement decision is made (local edge vs centralized backhaul).The real question is not “cloud vs on-prem.” It is whether your architecture can inspect encrypted traffic at scale, minimize exposed attack surface, and enforce policy close to users without turning security into an infrastructure maintenance job.]]></description>
            <dc:creator>Nishant Kumar (Senior Manager, Product Marketing)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Accelerating AI Initiatives with Zero Trust]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/accelerating-ai-initiatives-zero-trust</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/accelerating-ai-initiatives-zero-trust</guid>
            <pubDate>Tue, 27 Jan 2026 08:00:10 GMT</pubDate>
            <description><![CDATA[Act Fast. Stay Secure. This is the critical mission for enterprise organizations in the rapidly evolving world of AI. Today we launch exciting new innovations to Zscaler’s AI security portfolio, paving the way for accelerating AI initiatives with confidence. Since ChatGPT’s debut three short years ago, the proliferation of AI in various forms is unlike anything the tech world has ever seen. It began with several GenAI apps that greatly improved productivity. Then AI became embedded in just about every SaaS app we use today, such as Microsoft Office, Salesforce, Atlassian and more. Today most organizations have a strategic initiative to build and deploy custom enterprise AI applications to maintain a competitive advantage. And now we are seeing the rapid emergence of agentic AI, where autonomous agents promise to greatly accelerate productivity. The AI Security Gap: A Roadblock to InnovationWhile the rapid pace of AI innovation is exciting, the reality is that traditional security has not kept pace - creating friction as organizations strive to migrate from prototypes to production. Security leaders face a number of challenges, including:AI sprawl has dramatically expanded the attack surface, increasing risks of data exposure;AI introduces new classes of attacks, such as prompt injection and context poisoning, which bypass traditional controls;New protocols, such as MCP, A2A, and WebSockets, make AI interactions harder to inspect and secure; andAgentic AI ushers in a new frontier, where autonomous agents with excessive permissions could wreak havoc if not kept in check.Given the competitive landscape, the question for security teams is not whether to adopt AI, but how to do so securely, consistently, and at enterprise scale as business leaders expect AI to drive productivity, efficiency, and growth. 
This requires organizations to rethink their security frameworks to align with the new dynamic AI era.Based on the Zero Trust Exchange platform, Zscaler’s AI Security portfolio is designed to address the full range of requirements to safeguard an organization’s AI journey. Asset Management - Gain full visibility of your AI footprint and risksSecure Access to AI - Ensure the safe and responsible use of AISafeguard AI Apps and Infrastructure - Secure the full AI lifecycle from development through deployment. Zscaler is unveiling innovations across all of these critical pillars. AI Asset ManagementZscaler’s existing platform provides granular visibility into the use of GenAI apps. However, the reality today is that many traditional SaaS apps are embedding AI capabilities, which creates a unique blind spot. These apps may have the same URL as their parent SaaS app, but are in fact AI features, adding to the shadow AI challenge. Zscaler has enhanced its solution to provide this additional level of visibility, mitigating these new risks. In addition to understanding the use of AI, most enterprises struggle to understand all of the AI applications and infrastructure deployed throughout their organization. Developer tools, AI models, MCP servers, and agent platforms can quickly proliferate without proper oversight. Zscaler’s new solution pulls together a 360-degree view of your entire AI footprint, leveraging a wide range of telemetry, including insights from the Zscaler platform, scanning of cloud AI platforms, code repositories, and more. From these insights, Zscaler identifies the MCP servers, agents, and models deployed throughout the organization and how they are interconnected - uncovering data and AI pipeline risks. 
In addition, Zscaler uncovers hidden risks and vulnerabilities such as posture misconfigurations, model risks, supply chain risks and more. Secure Access to AI Apps and ModelsZscaler pioneered the Zero Trust Exchange to secure users, workloads, and branches, providing secure access while eliminating risks such as lateral threat movement. Now, with the AI Security platform, we have extended our Zero Trust Exchange for secure access to AI apps and models everywhere. Secure access to AI includes the following: Access controls: Identify and secure access to AI apps including embedded AI apps with inline DLP.Advanced intent-based detectors: Safeguard user interactions with AI apps to moderate content (e.g., prevent off-topic prompts) and prevent threats (e.g., responses with malicious content).Prompt extraction and classification: Extract and classify prompts from the request and response of dozens of GenAI apps for insights into usage patterns.Secure access to AI development environments: Ensure zero trust based access to development environments, enforcing access controls for IDE applications accessing AI infrastructure to prevent data and PII leakage as well as security threats. Secure AI apps and InfrastructureThe dynamic nature of AI has radically impacted the app development process. Frequently updating models, rapidly expanding attack surfaces, and new attack methods outpace traditional scanning and posture management tools. With our recent acquisition of SPLX, Zscaler now has one of the most advanced AI red teaming solutions in the market, specifically designed to address these new challenges. Harnessing over 5,000 simulated attacks across a range of categories, our red teaming solution helps uncover and remediate vulnerabilities in real time. Insights can be leveraged to harden system prompts, improving system performance across a number of dimensions. 
This overall approach provides value throughout the lifecycle of an AI system, from build to deploy to runtime, ensuring continuous protection.Once applications are deployed, Zscaler offers ongoing robust runtime protection, including:AI Guard: Zscaler is announcing general availability of its AI Guard solution. With a deep bench of prompt and response detectors, AI guardrails safeguard interactions between AI apps and models. The solution blocks malicious attacks, such as prompt injections and jailbreaks. It also moderates prompt responses to ensure your applications are aligned with corporate policies, including factors such as toxicity, competitive content, and brand reputation.Policy Generator for Automated AI Guardrails: Zscaler is also introducing a new integration between our red teaming and AI Guard solutions. This feature leverages red team findings to automatically generate guardrail policies, closing the loop between testing and enforcement.Zscaler’s AI security portfolio also addresses governance and compliance, with built-in frameworks for the EU AI Act, NIST AI RMF, OWASP Top 10, and other widely adopted regulations and standards. This enables organizations to quickly test and assess for compliance and remediate any gaps.The way forwardFor almost twenty years, organizations have relied on Zscaler to streamline and secure digital transformation, transitioning from legacy infrastructure to a cloud-native platform. A similar paradigm shift is currently occurring with the adoption of AI. Just as Zero Trust architecture established the cornerstone for a new era of security, enterprises must now extend this fundamental principle to safeguard their AI transformation. 
Zscaler’s proven scalability, unified platform approach, and ability to address the full range of AI requirements make us an ideal partner for your AI journey. Ready to See It in Action?We invite you to learn more about our AI Security portfolio, and request a demo to see how Zscaler can help you accelerate your AI initiatives. Forward-Looking Statements This blog post contains forward-looking statements that are based on our management's beliefs and assumptions and on information currently available to our management. These forward-looking statements include the expected benefits of the expansion of our AI Security portfolio and the solutions and protections offered to our customers. These forward-looking statements are subject to the safe harbor provisions created by the Private Securities Litigation Reform Act of 1995. A significant number of factors could cause actual results to differ materially from statements made in this blog post, including those factors related to our ability to successfully integrate new features of our product offerings into our AI Security portfolio and the business impact additional offerings may have for our customers. Additional risks and uncertainties are set forth in our most recent Quarterly Report on Form 10-Q filed with the Securities and Exchange Commission (“SEC”) on November 25, 2025, which is available on our website at ir.zscaler.com and on the SEC's website at www.sec.gov. Any forward-looking statements in this blog post are based on the limited information currently available to Zscaler as of the date hereof, which is subject to change, and Zscaler will not necessarily update the information, even if new information becomes available in the future.]]></description>
            <dc:creator>Eric Andrews (VP, Product Marketing - Data Security)</dc:creator>
        </item>
        <item>
            <title><![CDATA[2025 ZDX Recap: Elevating IT Operations with Customer-Driven Innovations ]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/2025-zdx-recap-elevating-it-operations-customer-driven-innovations</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/2025-zdx-recap-elevating-it-operations-customer-driven-innovations</guid>
            <pubDate>Fri, 23 Jan 2026 18:54:45 GMT</pubDate>
            <description><![CDATA[As we ring in 2026, it’s a great moment to reflect on the significant advancements Zscaler Digital Experience (ZDX) has delivered throughout 2025. While ZDX made headlines with major, groundbreaking capabilities that redefined device, network, and application monitoring and troubleshooting - read the launch recap blog and the new innovations blog to learn more - ZDX also delivered a continuous stream of impactful innovations. These customer-driven advancements demonstrate ZDX's commitment to ongoing product velocity, delivering enhancements that streamline workflows, sharpen insights, and empower IT teams to deliver an even more seamless digital experience for users. They are not just features; they are strategic innovations built to solve real-world challenges.Let’s take a moment to shine a light on some of these ZDX innovations from 2025.Enhanced Visibility: See the Full Path, Prove the PerformanceIn 2025, ZDX dramatically expanded its capabilities to provide more granular, actionable visibility across the digital landscape. These enhancements are critical for rapid root cause analysis and precise understanding of performance bottlenecks.Managed Monitoring Companion Probe and Data Explorer ViewsThis year brought a significant innovation with Managed Monitoring through the Companion Probe, dramatically extending your network visibility and troubleshooting capabilities.What's New: The Managed Monitoring Companion Probe functionality, paired with Data Explorer views, significantly extended network visibility. This enhancement introduces a cloud-deployed, outbound-only probe for actively monitoring connectivity and performance to any target application on any port, using TCP and ICMP.Crucially, the companion probe runs its cloud path monitoring against the exact same DNS-resolved IP address as the Zscaler Web Probe. 
This ensures that both the web probe (for application performance) and the cloud path probe (for network performance) target the same destination, providing a highly correlated and comprehensive view.This empowers NetOps to:Gain unparalleled insight into the exact network path to an application, including public internet segments and third-party ISPs.Compare application performance from multiple Zscaler locations to pinpoint congestion or degradation.Deliver concrete evidence to determine if a bottleneck is on the corporate network, public internet, or a cloud provider.Drastically reduce Mean Time To Innocence (MTTI). Wi-Fi Dashboard Enhancements User complaints about "bad Wi-Fi" are common, whether in the office or working remotely. ZDX’s Wi-Fi capabilities received significant enhancements to address this.What's New: The Wi-Fi Dashboard now includes advanced capabilities such as the Wi-Fi Performance by Locations List View. This enhanced view provides a comprehensive list of access points and their connected devices, alongside their ZDX Score.These Wi-Fi dashboard enhancements empower NetOps and Service Desk to: Quickly diagnose user-reported slowness related to local Wi-Fi conditions by filtering and sorting by best and worst performing locations.Identify if the issue is local to the user's Wi-Fi environment using granular metrics.Provide faster, more accurate guidance to users, preventing unnecessary corporate network troubleshooting. Proactively Solve Issues Before They EscalateReducing MTTR and catching issues before they impact productivity is paramount for both NetOps and Service Desk. ZDX's 2025 advancements deliver on this promise with smarter diagnostics and comprehensive alerting.UCaaS Application Support for User-Level AnalysisUnified Communications as a Service (UCaaS) applications like Zoom, Microsoft Teams, and Webex are critical for modern organizations, making performance issues particularly disruptive. 
ZDX now provides deeper, more actionable insights into these critical communication tools.What's New: ZDX introduced AI/ML-driven, user-level UCaaS analysis for individual meeting sessions. This provides detailed meeting metrics like ZDX Score for the call, audio quality, audio latency, audio jitter, and audio packet loss, as well as video latency and jitter. ZDX now identifies specific contributing factors impacting a meeting's ZDX Score, such as "High Local Network Latency" or "Device Resource Exhaustion", with a clear confidence level.This intelligent root cause analysis empowers NetOps and Service Desk to:Pinpoint the exact source of poor meeting quality for specific users and individual meetings, whether it's device issues, network latency, or the application itself.Gain clear, data-backed insights into the user's experience for rapid resolution.Significantly reduce user frustration during critical communications. Enhanced Proactive Alerting (Custom Apps, Call Quality, Any Incident Type)Waiting for a user complaint is reactive. 
ZDX's expanded alerting capabilities enable early intervention, tackling issues before they escalate into user complaints.What's New:Expanded Alert Rule Support for Custom Applications: Configure alerts for critical custom applications based on network metrics like packet loss, number of hops, and packet count, extending alerting beyond just predefined SaaS apps.New Alert Support for UCaaS Call Quality Metrics: Immediate notifications when voice or video call quality dips below acceptable thresholds for any user or group.Enhanced Alert Support for Any Incident Type: Configure alerts for virtually any performance anomaly ZDX detects, including specific incident types like Last Mile ISP blackouts, brownouts, or device resource issues.These proactive notifications, deliverable via email, webhooks, or ServiceNow integrations, give your Service Desk and NetOps teams a crucial head start:Empowering them to address performance degradations or outages before users are broadly impacted.Preventing widespread disruption and maintaining productivity across the organization.Device Incident Type for WindowsOften, what appears to be a "network problem" is actually a device problem. ZDX introduced an innovation to help accurately differentiate and diagnose these issues.What's New: A new incident type for Windows devices proactively identifies device health issues impacting user experience, such as high CPU utilization, memory exhaustion, or application crashes. 
ZDX provides detailed incident reports that include impacted users by geolocation and historical trends.This proactive detection helps Service Desk: Quickly determine if user slowness or poor experience is device-related.Provide clear evidence and metrics for software crashes or resource exhaustion.Pinpoint and troubleshoot directly to the source, preventing misdiagnosis and avoiding unwarranted blame on the network. Looking Forward with ZDXThe ZDX enhancements of 2025 are more than just new features; they are strategic tools designed to empower you. They provide the deep, end-to-end visibility NetOps needs to prove network performance and pinpoint true bottlenecks, and equip Service Desk with the diagnostic muscle for faster, data-backed issue resolution. From granular network path visibility to AI/ML-driven UCaaS analysis and proactive device health monitoring, ZDX empowers you to move beyond reactive troubleshooting.These innovations show our commitment to continuous improvement, building on ZDX's foundational strengths to provide an even more refined, responsive, and robust monitoring solution.We look forward to continuing this journey of innovation and delivering even more transformative capabilities in 2026! To learn more, sign up for a demo.]]></description>
            <dc:creator>Cynthia Tu (Sr. Product Marketing Manager, DEM)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Rethinking Branch Security: Embracing Zero Trust Branch for the Modern Enterprise]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/rethinking-branch-security-embracing-zero-trust-branch-for-the-modern-enterprise</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/rethinking-branch-security-embracing-zero-trust-branch-for-the-modern-enterprise</guid>
            <pubDate>Wed, 21 Jan 2026 09:58:24 GMT</pubDate>
            <description><![CDATA[This two-part series explores why a traditional network-centric security approach with its reliance on implicit trust is no longer adequate for today's cloud-centric, high-threat environment, and introduces Zscaler's Zero Trust Branch (ZTB) as a transformative solution.Part 1&nbsp;explores the current state of enterprise branch networking, highlighting its fundamental flaws including implicit trust models, broad network reachability, and persistent vulnerabilities to lateral movement and ransomware.Part 2&nbsp;presents how Zero Trust Branch addresses and overcomes these limitations, delivering a fundamentally more secure, agile, and cost-effective architecture that extends true Zero Trust principles to all branch devices, workloads, and connections.Part 1 - The Limits of Traditional Network ThinkingFor decades, the foundation of enterprise connectivity followed a fundamentally network-centric approach. This traditional perimeter-based security model operated on a deeply flawed premise: that trust was inherent to the network itself. The core mechanism was to prioritize granting users&nbsp;full access to the corporate network first, after which various security controls such as firewalls, VRFs, access lists, and antivirus software were layered on top. This "castle-and-moat" strategy had significant consequences for security and operational efficiency.&nbsp;&nbsp;&nbsp;&nbsp;By its very design, it provided broad, general network access to anyone who could authenticate to the network, effectively making the network the primary security domain. The outcome was a system that failed to secure and grant least-privilege access specifically to individual corporate resources and applications. If an attacker managed to breach the perimeter, or if an internal user's credentials were compromised, they were often allowed almost unrestricted lateral movement, a direct consequence of the initial generalized network access. 
Onboarding business partners, merger and acquisition (M&A) entities, and contractors via VPNs or jump hosts inherits their attack surface, increasing business risk and operational complexity: if they are compromised, you are compromised. This model's inherent trust in the network meant that once a user was "inside," the security enforcement became significantly weaker, allowing for easy reconnaissance and data exfiltration across the organization. Traditional network segmentation techniques (firewalls, VLANs, ACLs, VRFs, agent-based segmentation) only mitigate the risk of lateral movement; they do not eliminate the underlying network reachability that attackers exploit. Persistent breaches show these legacy controls are inadequate, increasing complexity, cost, and business risk. Additionally, traditional solutions like Internet-facing firewalls, VPNs, and SD-WAN routing overlays increase costs, complexity, and, crucially, expand the attack surface. The fundamental issue is very simple: accessibility equals vulnerability. Any part of your infrastructure that is reachable is, by definition, breachable. If a legitimate VPN client or an SD-WAN device can locate your VPN concentrator or another SD-WAN device on the public internet, so can a malicious actor. The proliferation of AI is now dramatically intensifying these problems. Malicious actors, who once needed weeks or months to complete steps like discovering an attack surface or pinpointing exploitable vulnerabilities, can now accomplish the same feats in minutes using rogue AI engines. The rise of AI, cloud, IoT/OT, and the increasing convergence of IT and OT necessitate a fundamental reevaluation of legacy architectures. 
These trends necessitate a shift away from providing extensive, network-level access.
The Fundamental Flaw: Implicit Trust
Oftentimes the concept of a perimeter is still used to set the trust boundaries: everything outside the perimeter is deemed untrusted, while everything inside the perimeter is implicitly considered trusted. Once inside the perimeter:
- Everything is reachable.
- Security controls filter after connectivity is already granted.
- Applications determine authorization, but the network allows the attempt: the application may refuse access, but the network still delivers the attacker to the door.
A traditional architecture based on such principles is like an office building where visitors are allowed to roam the corridors without restriction, and security checks are performed only at individual office doors. This is not Zero Trust. Least privilege requires the opposite: if a user or a device is not entitled to a resource, they should not be able to reach it in the first place.
Zscaler introduced Zero Trust Access for users many years ago, enforcing context- and identity-driven policy and continuous risk evaluation, and connecting users to applications, not to networks. This addresses the need for securing individual users and managed devices. Zero Trust Branch extends these principles to ALL devices: IoT, OT, servers, and unmanaged endpoints. By extending Zero Trust to the branch, organizations can achieve a unified, consistent security posture across their entire distributed environment, ensuring that every connection, regardless of the connecting entity, is explicitly verified and secured. 
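The difference between "the application refuses access" and "the resource is unreachable" can be made concrete in a few lines. The sketch below is purely illustrative (the identities, app names, and the `broker_connection` helper are hypothetical examples, not a Zscaler API): access is decided by policy before any connection exists, so an unentitled entity has nothing to probe.

```python
# Illustrative sketch of policy-before-connectivity (least privilege).
# All identities and application names below are hypothetical.
ENTITLEMENTS = {
    "nurse@example.org": {"ehr-portal"},
    "hvac-sensor-0042": {"ot-telemetry-collector"},
}

def broker_connection(identity: str, app: str):
    """Return a brokered, per-session tunnel only if policy allows it.

    In the network-centric model, the packet reaches the app and the app
    refuses. Here, without an entitlement the app is simply unreachable,
    leaving nothing to scan or move laterally toward.
    """
    if app in ENTITLEMENTS.get(identity, set()):
        return f"tunnel:{identity}->{app}"
    return None  # unreachable, not merely "access denied"
```

In this model the trust boundary is the policy decision itself, not a network perimeter: the broker stitches a connection per session, per entity, per application.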
This eliminates implicit trust for everything in the branch, significantly shrinking the overall attack surface and enhancing resilience against sophisticated threats targeting non-user devices.In Part 2, we will explore how Zero Trust Branch redefines branch connectivity by decoupling connectivity from trust, reducing risk and complexity by design, and enabling a more scalable, efficient model for securing the enterprise edge.]]></description>
            <dc:creator>Andrea Polesel (Principal Transformation Architect)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Top 5 Considerations for Effective AI Runtime Protection]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/top-5-considerations-effective-ai-runtime-protection</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/top-5-considerations-effective-ai-runtime-protection</guid>
            <pubDate>Tue, 20 Jan 2026 17:00:06 GMT</pubDate>
            <description><![CDATA[AI is quickly becoming the new norm for business innovation. AI apps and agents now power customer and employee experiences and streamline business processes. But as adoption accelerates, security remains a top concern, especially as agents gain access to sensitive data and enterprise resources. This creates a new attack surface that adversaries can exploit to exfiltrate data, trigger unintended actions, and disrupt the business.
Legacy firewall-based systems are not built to protect AI, and though there are numerous up-and-coming security solutions on the market, none of them addresses the full breadth of threats, nor are they built for enterprise scale. AI runtime protection, in particular, is a critical piece of a comprehensive security solution. Without effective AI runtime protection, businesses are left exposed to numerous threat vectors that can damage their business and compromise their company and customer data. At Zscaler, we help 45% of Fortune 500 companies secure their businesses. Many of our customers are AI innovators. CTOs, CISOs, and CAIOs tell us that while AI is transforming their organizations, securing their AI initiatives remains a top concern. Based on our experience, here are the top five considerations that AI and security professionals should evaluate for effective AI runtime protection:
Deep visibility into prompts and responses: AI apps and agents converse with LLMs to process queries. Malicious actors can trigger prompts for unintended responses that can lead to data leaks or unintended actions. Getting visibility into prompts and responses is the first step to securing those interactions.
Guardrails that cover the full breadth of AI safety and security risks: The interactions between AI apps and agents are exposed to a variety of threats, including security threats such as prompt injections, malicious code insertion, and jailbreaks. 
Content safety issues and compliance requirements, such as toxic and off-topic prompts, undesired responses, and PII data, pose additional risk.
Effectiveness of detection and data protection: A high number of false positives can distract from real vulnerabilities, while a high rate of false negatives can increase risk. A guardrails solution needs high accuracy in order to be effective. Further, many off-the-shelf, open-source-based data loss prevention engines are not effective at detecting sensitive information across AI apps and LLMs.
Ease of integration and enforcement: AI apps, LLMs, and the data they access are dynamic, continuously learning and evolving. Runtime protection is not a one-time action but an ongoing process that needs to evolve with your AI apps and infrastructure. For this reason, it needs to integrate seamlessly with your AI app and security infrastructure so it can effectively block threats while reducing management overhead and risk.
Audit and compliance: A guardrails solution needs to secure AI apps while maintaining auditable logs for compliance and troubleshooting. While visibility is key, privacy of prompts/responses and the data collected to enforce security is also critical so it’s not exposed to third parties.
Accelerate your AI initiatives with Zero Trust
To help our customers protect their enterprise AI, we introduced Zscaler AI Guard. It is a high-fidelity AI runtime protection solution that secures enterprise AI applications so organizations can adopt AI with confidence. It delivers end-to-end inline visibility and control into prompts and responses across AI apps, agents, and LLMs, along with inline allow/block/coach enforcement to reduce data leakage and policy violations. AI Guard has a broad set of detectors for AI security threats (such as jailbreaks, prompt injection, and malicious code), sensitive data leakage (such as PII and source code), and content moderation risks (such as toxicity, off-topic content, and competition). 
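The false-positive/false-negative trade-off described above can be quantified with standard detection metrics. A minimal sketch (the `detector_metrics` helper and the sample verdicts are illustrative, not part of any product):

```python
def detector_metrics(labels, verdicts):
    """Precision/recall for a guardrail detector.

    labels:   ground truth per prompt (True = actually malicious)
    verdicts: detector output per prompt (True = flagged)
    """
    tp = sum(1 for l, v in zip(labels, verdicts) if l and v)        # true positives
    fp = sum(1 for l, v in zip(labels, verdicts) if not l and v)    # benign prompts flagged
    fn = sum(1 for l, v in zip(labels, verdicts) if l and not v)    # threats missed
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    return precision, recall
```

A noisy detector (low precision) buries analysts in benign alerts, while a permissive one (low recall) lets prompt injections through, so an effective guardrails solution has to score well on both.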
It also supports centralized governance and audit-ready reporting aligned to leading frameworks (including NIST, the EU AI Act, and OWASP Top 10 for LLM apps), integrates with major AI platforms and frameworks, and is designed for privacy.Zscaler helps more than 8,000 enterprises secure their digital transformation journeys. Zscaler’s own IT team serves as customer zero to enable delivery of our security technologies to customers. Watch this video to learn about how the Zscaler IT team uses AI Guard to enable AI guardrails for AI adoption at Zscaler.]]></description>
            <dc:creator>Neelay Thaker (Director of Product Marketing - AI Security)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Digital Experience Predictions in 2026]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/digital-experience-predictions-2026</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/digital-experience-predictions-2026</guid>
            <pubDate>Fri, 16 Jan 2026 20:03:59 GMT</pubDate>
            <description><![CDATA[AI is now table stakes, but scaled value is still rare. McKinsey’s latest State of AI survey shows 88% of organizations are using AI in at least one business function, yet nearly two-thirds haven’t begun scaling AI across the enterprise and only 39% report EBIT impact at the enterprise level. At the same time, AI agents are moving quickly from curiosity to trials: 62% of respondents say their organizations are experimenting with agents, and 23% report scaling an agentic AI system somewhere in the enterprise. For IT, the takeaway is straightforward: AI won’t compress time-to-resolution if experience data remains fragmented across endpoint tools, network tools, application monitoring, and service workflows. As we look toward 2026, the organizations that move fastest will consolidate first, creating end-to-end experience visibility with a single endpoint agent, and then use AI to turn that unified data into instant expertise for every operator. (Source: McKinsey & Company, QuantumBlack, “The State of AI,” November 5, 2025.)
Over the past year, we’ve learned that the future of digital experience isn’t about adding more dashboards or generating more alerts. It’s about reducing the time and effort required to get to the right answer, across the entire organization. We’ve seen what happens when experience data becomes immediately usable in real environments:
A global IT consulting firm avoided a significant productivity hit across 1,000 employees by using network intelligence to identify an ISP-level issue in minutes, not hours, and reroute users quickly.
A large U.S. 
healthcare system uncovered thousands of endpoint failures (including blue screens, audio failures, and browser crashes), helping protect productivity for clinicians and staff.
The takeaway: when experience data is unified and actionable, teams don’t just respond faster; they prevent downstream impact. See the Zscaler Digital Experience launch event for more information.
Predictions for 2026
Prediction 1: Consolidation becomes the execution advantage, not just a cost play
By 2026, consolidation will be driven less by license rationalization and more by a simple operational requirement: speed to clarity. Tool sprawl forces operators to swivel between consoles, reconcile conflicting signals, and escalate issues simply to gather context. The winning model will start with a consistent foundation: a single endpoint agent that captures user experience signals across devices, networks, and applications, so teams can correlate what’s happening without manual stitching. Why it matters: consolidation is the prerequisite for faster Zero Trust rollouts, actionable device health, and AI that can deliver precise answers.
Prediction 2: Zero Trust rollouts will accelerate when experience leads the rollout
Zero Trust adoption will continue to accelerate, but the differentiator won’t be policy ambition. It will be whether teams can prove and protect user experience through the rollout. Organizations replacing legacy VPNs are already learning that the biggest obstacles often aren’t access controls. 
They’re the reality of distributed work: device instability, Wi‑Fi degradation, last-mile ISP issues, and SaaS path variability. By 2026, successful Zero Trust programs will operationalize experience insights to:
- baseline performance before changes
- pinpoint friction during cutovers
- validate performance continuously after policy updates
Bottom line: experience becomes the accelerant for Zero Trust because it provides the evidence to move fast without breaking productivity.
Prediction 3: Device health becomes a first-class signal and remediation becomes a requirement
Devices are no longer passive endpoints. They’re complex systems that directly shape productivity and frequently the hidden root cause behind “the network is slow” or “the app is down.” But by 2026, visibility alone won’t be enough. Leading IT organizations will require closed-loop device operations: detect → explain → remediate → verify. That means expecting digital experience solutions to support safe, role-appropriate remediation such as:
- approved endpoint actions to address common degraders (e.g., disk cleanup, clearing browser/DNS caches, restarting specific Windows services)
- posture/readiness validation signals to isolate configuration-related friction
- standard endpoint network diagnostics (DNS lookup, latency/packet-loss tests, route/path checks)
- verification loops that confirm whether the action improved experience
Why it matters: this is how service desks reduce escalations by resolving more issues at first touch with guardrails.
Prediction 4: Real-user experience becomes the primary truth; synthetic becomes supporting coverage
Synthetic monitoring still has value, but it doesn’t reflect reality at scale, especially in highly distributed environments. By 2026, teams will rely more on real-user experience signals from actual devices on real networks inside live applications. The challenge won’t be data collection. 
It will be interpretation: correlating endpoint behavior, network path changes, and application performance without overwhelming teams. Winning solutions will prioritize correlation and impact: who is affected, where the issue sits, what changed, and what to do next.
Prediction 5: The service desk becomes an intelligence layer, measured by prevented disruption
By 2026, service desk performance won’t be judged solely by ticket closure speed. It will be measured by how effectively teams:
- prevent escalations
- reduce user downtime
- resolve issues at first touch
This shift requires two things:
- instant access to cross-domain context (device, network, app, and access-path signals)
- dramatically lower cognitive load for first-line responders
And it must show up where teams work. Increasingly, customers will expect experience context and guided insights to be embedded directly into ServiceNow workflows, not trapped in separate tools.
Prediction 6: AI agents move into workflows, but only unified data makes them precise
Chat-based AI is a starting point, not the destination. By 2026, organizations will expect AI-powered troubleshooting to be:
- embedded in workflows like ServiceNow
- callable via APIs and automation
- integrated into operational views, not isolated conversations
But practitioners will demand technical fidelity. AI must be able to ground answers in concrete evidence like endpoint failures, path changes, and network quality signals without turning every responder into a specialist. This is the unlock: AI becomes “instant expertise” only when it can reason over complete, end-to-end experience data. Without that foundation, AI scales guesswork.
Prediction 7: ISP performance incidents become a top priority category because “the internet” is now part of your stack
More enterprise traffic will traverse public internet segments and Zero Trust overlays, meaning user experience will increasingly depend on paths IT doesn’t directly control. 
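Flagging degradation on paths IT doesn’t control typically comes down to baselining real-user latency per ISP and detecting deviations. A toy sketch (the function name, ASNs, thresholds, and sample values are all illustrative assumptions, not product behavior):

```python
from statistics import mean, stdev

def flag_degraded_paths(baselines, current, k=3.0):
    """Return ASNs whose newest latency exceeds mean + k*stddev of baseline.

    baselines: {asn: [latency_ms, ...]} rolling history per ISP/carrier (ASN)
    current:   {asn: latency_ms} newest real-user measurement
    """
    flagged = []
    for asn, history in baselines.items():
        if len(history) < 2 or asn not in current:
            continue  # not enough data to form a baseline
        if current[asn] > mean(history) + k * stdev(history):
            flagged.append(asn)
    return flagged
```

Aggregating the flags by ASN and geography is what turns “users are slow” into “this carrier segment degraded at 14:05, affecting these regions,” with the blast radius quantified.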
The operational problem isn’t just performance variability; it’s proving where the variability lives (endpoint, Wi‑Fi, ISP, intermediate carrier, or application) fast enough to act.This is why ISP performance will become a first-class incident category. Gartner reports that&nbsp;70% of organizations struggle with network complexity and lack of end-to-end visibility, which is exactly what turns routine degradations into drawn-out war rooms.&nbsp;The winning model will look less like reactive troubleshooting and more like continuous, route-aware measurement:Lightweight, frequent probing and telemetry (latency, packet loss, jitter) along the user’s actual path to the appBaselines and automatic deviation detection to flag “what changed” immediatelyAggregation by ISP/intermediary (e.g., ASN) and geography to pinpoint bottlenecks and quantify blast radiusWhy it matters: when teams can rapidly identify ISP and carrier-driven issues with evidence, they reduce MTTR, avoid unnecessary escalations, and protect productivity at scale.&nbsp;ClosingIn 2026, the advantage won’t come from adding more AI on top of fragmented tools. It will come from&nbsp;consolidating experience signals end-to-end with a single endpoint agent, accelerating Zero Trust with evidence, and enabling every operator to act with expert-level context—directly in the workflows where work happens.]]></description>
            <dc:creator>Rohit Goyal (Sr. Director, Product Marketing - ZDX)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Operationalizing Threat Intelligence with Zscaler Integrations MCP Server]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/operationalizing-threat-intelligence-zscaler-integrations-mcp-server</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/operationalizing-threat-intelligence-zscaler-integrations-mcp-server</guid>
            <pubDate>Thu, 15 Jan 2026 15:33:31 GMT</pubDate>
            <description><![CDATA[The Threat Intelligence ProblemEvery security professional faces the same challenge: threat intelligence overload. Your inbox fills with advisories from CISA, industry ISACs, vendor bulletins, and security blogs. Each contains critical Indicators of Compromise (IOCs) - malicious IPs, domains, file hashes - that should be blocked immediately. But translating these text-heavy PDFs, RSS feeds, field advisories and blog posts into actionable security policies takes hours.Zscaler has always been a threat intelligence-driven company. Our&nbsp;Zero Trust Exchange is powered by real-time analysis of&nbsp;500+ billion transactions daily, feeding into continuously updated&nbsp;threat intelligence that protects customers automatically. But what about the intelligence that's specific to your organization? The regional threats from your local CERT, the industry-specific campaigns targeting your vertical, or the emerging threats your security team discovers through threat hunting?The&nbsp;Zscaler Integrations MCP Server represents a new paradigm: AI-assisted threat intelligence operationalization that augments Zscaler's existing protections with your organization's unique intelligence requirements. Using Zscaler Integrations MCP Server, you can transform multi-hour policy creation workflows into conversational, minutes-long exchanges.What is the Zscaler Integrations MCP Server?The Zscaler Integrations MCP Server is an&nbsp;open-source integration that connects AI assistants (like Claude or ChatGPT) to Zscaler's extensive API&nbsp;ecosystem. 
It provides access to a growing list of tools across Zscaler's portfolio:
ZIA: firewall rules, URL categories, IP groups, etc.
ZPA: application segments, access policies, etc.
ZDX: device and network health monitoring, etc.
ZCC: Client Connector management
ZIdentity: user and group management
Instead of clicking through consoles or writing scripts, you simply converse with your preferred chatbot.
Deploying the Zscaler Integrations MCP Server
Setting up the Zscaler Integrations MCP Server takes about 10 minutes. You can deploy it in your choice of container framework (e.g., Docker, AWS Bedrock AgentCore). For detailed setup instructions, check out the following guides:
Setup Guide: https://zscaler-mcp-server.readthedocs.io/en/latest/getting-started.html
GitHub README: https://github.com/zscaler/zscaler-mcp-server
Once configured, the server integrates directly with your preferred chatbot, giving you conversational access to your Zscaler environment. In this blog, we’ll demonstrate the integration using Claude Desktop.
How the LLM Works with Threat Intelligence
When you provide a research-focused prompt, the LLM follows a workflow that mirrors how a human analyst would approach threat research (but at machine speed).
Research & Contextualization
The LLM begins by searching authoritative threat intelligence sources based on your prompt criteria (e.g., government sources like CISA advisories and HHS HC3 alerts, vendor research from security blogs, and sector-specific intelligence feeds when relevant). Once it locates relevant threat intelligence, it builds context around the campaign: identifying threat actor attribution (ransomware groups like RansomHub or LockBit, APT groups, financially motivated actors), understanding attack patterns and TTPs, and analyzing the timeline of events. For instance, the LLM might discover that a threat actor conducted a major disruption operation in May 2025, only to resurface with new infrastructure in July. 
It may also examine victim demographics (e.g., which sectors are being targeted, geographic focus, and whether attacks target specific organization types).
IOC Extraction & Attribution
Once threat intelligence has been collected, the LLM extracts network indicators from the narrative that can be used inside Zscaler policy, such as:
Infrastructure: C2 server IPs, domains, URLs, etc.
Distribution: malware hosting sites, phishing domains, exploit kit URLs, etc.
Impersonation: spoofed portals mimicking legitimate services (MyChart, Epic EMR, insurance sites)
With effective prompting, each IOC can be linked back to its originating source (a blog post, advisory, or campaign analysis). This attribution enables you to validate the LLM's research and assess the legitimacy of each indicator.
Policy Proposal
Once complete, the LLM presents ZIA policy recommendations ready for review and activation. These typically include IP destination groups for C2 infrastructure, URL categories for phishing domains and malicious infrastructure, and firewall rules with appropriate actions and logging configurations. These policy proposals are then translated to API calls and implemented using the MCP Server.
Example: Emerging Threat Campaigns
The Scenario
Security vendors and government agencies regularly publish threat intelligence on active malware campaigns. For example, Lumma Stealer (LummaC2), a prolific infostealer-as-a-service, recently rebounded after a major May 2025 takedown, with new C2 infrastructure appearing within days. Let's analyze this emerging threat intelligence and create a policy to defend against it.
The Prompt
Copy this prompt to try it yourself:
Today's date is December 11, 2025, 2:30 PM EST. Research the Lumma Stealer (LummaC2) malware campaign. 
Search for:
- Security blog posts
- CISA advisories
- Recent security vendor analyses
Extract all network IOCs mentioned (C2 IP addresses, domains, infrastructure) and create ZIA policy recommendations to augment Zscaler's existing protections:
- IP destination groups for C2 infrastructure
- Custom URL categories for malicious domains
- Firewall rules to block access
For each policy, include in the description:
Source: [Blog post or advisory]
Created: [Today's date]
Review: [Today's date + 90 days]
Threat: Lumma Stealer (LummaC2) infostealer
Create the necessary ZIA policy to block these threats, but DO NOT activate anything yet. Use concise bullets in your summary. Under 500 words.
Managing the IOC Lifecycle
Likewise, three months later, you may choose to revisit this policy and clean up old IOCs. Copy this follow-up prompt to try it yourself:
Today's date is March 11, 2026. Please review all ZIA IP destination groups, URL categories and policies that have a Review Date of March 11, 2026 (or earlier) in their descriptions. For each one:
Research the state of the malware campaign listed in the description.
If NO, the campaign has been contained: recommend removing the rule (stale IOC).
If YES, the campaign is still a threat: recommend extending the review date by 90 days.
Show me what you'd remove vs. keep, and explain your reasoning.
The result? Automated IOC lifecycle management prevents "threat intel bloat" while ensuring active threats remain blocked. The 90-day review cycle aligns with research showing that most C2 servers have short operational lifespans. 
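The date-stamped description convention makes the review sweep mechanical. A sketch of that scan, assuming an ISO-formatted "Review: YYYY-MM-DD" field is embedded in each policy description (policy names and dates below are illustrative):

```python
import re
from datetime import date

# Matches an embedded "Review: YYYY-MM-DD" stamp in a policy description.
REVIEW_RE = re.compile(r"Review:\s*(\d{4})-(\d{2})-(\d{2})")

def stale_policies(policies, today):
    """policies: {policy_name: description}. Return names whose embedded
    review stamp is on or before `today` (candidates for cleanup)."""
    stale = []
    for name, desc in policies.items():
        m = REVIEW_RE.search(desc)
        if m and date(*map(int, m.groups())) <= today:
            stale.append(name)
    return stale
```

The stale list is only a candidate set: as the follow-up prompt shows, each hit still gets re-researched, and only contained campaigns are removed while active ones get their review date extended.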
Example: Sector-Specific Threat Intelligence (Healthcare)The ScenarioHealthcare organizations face unique threats. Ransomware groups specifically target medical facilities, knowing downtime can be life-threatening. In our next example, let’s augment our policy with healthcare-specific intelligence. Note the addition of priority to the prompt such that we can implement or withdraw policy suggestions easily when the time comes.The PromptCopy this prompt to try it yourself:Today's date is December 11, 2025, 3:15 PM EST.You work in healthcare cybersecurity. Research recent cyber threats specifically targeting healthcare organizations:Search HHS HC3 website for recent healthcare alertsSearch for "ransomware healthcare 2025" campaignsLook for security vendor research about healthcare-targeted attacksFind any campaigns impacting healthcare (credential theft is often initial access for ransomware)Extract network IOCs (C2 IPs, phishing domains, malware&nbsp;distribution sites) from these articles and create ZIA policies to augment Zscaler's protections with&nbsp;healthcare-specific threat intelligence:IP destination groups for healthcare-targeted C2 infrastructureCustom URL categories for spoofed medical portalsFirewall rules to block these threatsFor each policy, include:Source: [Blog post or advisory]Created: [Today's date]Review: [Today's date + 90 days]Sector: HealthcareThreat: [Campaign name]Priority: [Critical/High/Medium]Consider that:We already have Zscaler's threat intel activeUsers need access to legitimate medical sites (.nih.gov,&nbsp;.mayoclinic.org, EHR vendors)The policy should not break critical healthcare SaaS appsThe policy should implement full logging for HIPAA complianceCreate the necessary ZIA policy to block these threats, but DO NOT activate anything yet. Use concise bullets in your summary. 
Under 500 words.&nbsp;Managing the IOC LifecycleAnd, here again (three months later), you can easily review and clean up old IOCs:Copy this follow-up prompt to try it yourself:Today's date is March 11, 2026.Please review all ZIA IP destination groups, URL categories and policies that have a Review Date of March 11, 2026 (or earlier) in their descriptions AND a Sector of Healthcare.For each healthcare-specific policy:Research the state of the malware campaign listed in the description.If NO, campaign has been contained: Recommend removing the rule (stale IOC).If YES, campaign is still a threat: Recommend extending review date by 90 days.Show me what you'd remove vs. keep, and explain&nbsp;your reasoning.&nbsp;ConsiderationsRisk PrioritizationNot all threats are equally urgent. You can prompt the LLM to categorize threats based on relevance and immediacy. Critical-priority threats are active campaigns with confirmed victims in your sector and may demand immediate action. High-priority threats come from threat groups with documented targeting history for your industry, even if no active campaign is underway. Medium-priority threats are opportunistic malware that may have some mention of your sector but lack evidence of targeted campaigns.Copy this prompt to try it yourself:For each policy recommendation, assign a priority level:Critical: Active campaigns targeting [your sector]High: Threat groups with [your sector] targeting historyMedium: Opportunistic malware with sector mentionsShow priority in the policy description.Customizable ExecutionKeep in mind that suggested policies don’t have to be executed en masse. In fact, you may decide to execute only on the high-priority or critical suggestions while leaving the medium and low priority policies for further deliberation:&nbsp;Operational ValidationBefore implementing any policy changes, prompt the LLM to validate that proposed blocks won't disrupt legitimate operations. 
This includes ensuring that legitimate sites (such as vendor portals, SaaS applications, and EHR systems) aren't caught in the block lists. For regulated industries, the LLM can also confirm that logging configurations meet compliance requirements like HIPAA, PCI-DSS, or SOX.
Copy this prompt to try it yourself:
Before creating these policies, validate:
No legitimate [vendor/SaaS/EHR] sites are blocked
Logging meets [HIPAA/PCI-DSS/SOX] requirements
No conflicts with existing ZIA policies
Show validation results before proceeding.
Monitor Before Blocking
Always test new policies in log-only mode (24-48 hours) before full blocking.
Copy this prompt to try it yourself:
Create this policy but set action to ALLOW with full logging enabled. We'll monitor for false positives before converting to BLOCK.
Lifecycle Management
Make use of date-stamped descriptions to automate IOC aging. This makes cleanup a breeze, even for policies that were not created with AI assistance!
Copy this prompt to try it yourself:
Show me all policies with review dates older than [current date]. For each, research recent activity and recommend keep vs. remove.
Conclusion
Zscaler's Security Cloud provides unmatched threat intelligence that automatically protects customers worldwide. The Zscaler Integrations MCP Server augments that foundation with intelligence unique to your organization:
- Sector-specific threats from your industry
- Regional threats from your local CERT
- Emerging threats not yet in mainstream feeds
- Organization-specific IOCs from threat hunting
- Intelligent lifecycle management with automated aging
The analyst remains in control, making critical decisions about what to block and when. 
The AI handles the mechanical translation from intelligence to policy, the correlation across sources, and the lifecycle management.The result? More comprehensive coverage, faster response times, and more time for the proactive security work that truly matters.]]></description>
            <dc:creator>Aaron Rohyans (Sr. Principal Solutions Architect - Business Development)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building Trust Through Identity: Addressing Security Challenges in Modern Healthcare]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/building-trust-through-identity-addressing-security-challenges-modern</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/building-trust-through-identity-addressing-security-challenges-modern</guid>
            <pubDate>Thu, 15 Jan 2026 04:50:38 GMT</pubDate>
            <description><![CDATA[In healthcare, patients are always the priority. But behind the scenes, the quest to secure sensitive data while empowering clinicians remains a delicate balancing act. In a recent episode of the radio show We Have Trust Issues, we (Tamer and Steven) sat down with Joel Burleson-Davis, CTO of Imprivata, to tackle one of the industry’s most pressing challenges: improving clinician efficiency while maintaining modern identity security.&nbsp;&nbsp;With regulated industries such as healthcare experiencing rapid digital transformation, this episode shed light on key strategies and technologies designed to build trust, secure workflows, and eliminate friction between clinical and IT teams.Why Healthcare Security is UniqueJoel opened the conversation with an important point: healthcare workflows are vastly different from other industries. Unlike professionals such as accountants or engineers, clinicians spend the bulk of their time focused on patient care, not interacting with technology. "The primary concern of these end users is very different," Joel explains, "and so the way they work is very different."&nbsp;&nbsp;This focus on care creates unique challenges. Healthcare professionals often share workstations, devices, and data, making it harder to track identity in real-time. Meanwhile, hospitals remain prime targets for ransomware attacks because of their mission-critical operations. The combination of shared assets, constant workflow changes, and heightened regulatory requirements has led to friction between clinical care teams and IT security departments.&nbsp;&nbsp;The "No Balance" Mindset ShiftOne of the standout moments from the discussion came when Joel challenged the common notion of balancing security and productivity. According to him, framing the relationship as a balance implies that one side must lose for the other to win. 
Instead, he emphasized the need to pursue both ends simultaneously. By innovating with "workflow-aware" solutions, Joel argues, healthcare systems can achieve superior security without burdening clinicians. "Technology teams need to embrace the hard problems," he said, "and eliminate the perception that security improvements must come with sacrifices on the clinical side." Innovative Solutions Driving Transformation: Healthcare organizations are tasked with solving both productivity and security issues simultaneously, and technological innovation is key. Joel laid out multiple practical examples of how identity security can empower care teams while enhancing protection. Passwordless Authentication: Passwordless authentication was highlighted as a powerful "win-win" solution. Joel explained how integrating biometric logins, behavioral analytics, and intelligent PIN systems can replace the cumbersome, time-consuming process of typing in lengthy credentials. Without passwords to remember, or to reset, clinicians can reclaim more time for patient care, while IT departments benefit from enhanced security and a reduced risk of human error. For Joel, the potential savings go far beyond seconds shaved off workflows. "If you calculate the time lost worldwide to typing passwords, killing them could change the game entirely," he remarked. Mobile Workflows: Another transformative technology discussed was the growing use of mobile devices in clinical settings. Joel described phones and tablets as tools that could replace traditional workstations, enabling more flexible, streamlined workflows. These devices can empower clinicians to ditch rolling carts and desktop logins in favor of a smartphone that connects them directly to critical systems and apps. However, Joel cautioned that mobile integration requires careful execution. "Mobile devices are mobile—it’s in the name," he explained.
For a successful rollout, healthcare organizations must address challenges such as device sharing, fleet management, and initial setup hurdles. For example, shared devices should transition between users with minimal effort, using badge scans or face recognition for quick personalization. AI-Powered Efficiency: It wouldn’t be a modern conversation about technology without discussing artificial intelligence. Joel sees incredible potential for AI to make security an "invisible" part of clinician workflows. Using AI, healthcare institutions can automate identity verification and policymaking tasks that currently burden IT teams and distract clinicians. Beyond security, AI also offers opportunities to elevate workflows. For example, predictive algorithms can anticipate a clinician's needs, delivering key patient information exactly when it’s required and reducing time spent searching for critical data. However, Joel warned that the efficacy of AI solutions depends entirely on the quality, protection, and curation of the underlying data they use. The Danger of Poor Execution: Even the best technologies can fail if they’re deployed without clinician input. In shared healthcare environments, it’s crucial for IT teams to consider factors like ease of use, device accessibility, and workflow compatibility. Joel recounted failed device rollouts where clinicians abandoned state-of-the-art workstations, not due to flawed hardware, but because boot times and added clicks slowed them down. "Doctors have literally timed how many seconds new processes take and calculated the number of patients they miss during a shift," Tamer emphasized. "It’s not something we can afford to ignore—not when clinician burnout and patient satisfaction are at stake." From Trust Issues to Trust Building: For Joel, the solution to these longstanding issues is to rebuild trust through technology.
Features that prioritize speed, simplicity, and clarity are essential to making security "invisible," giving clinicians one less thing to worry about in their often-stressful settings. Removing friction, streamlining identity verification, and reducing cognitive load are all part of a broader strategy to align IT and clinical goals. Joel’s message is clear: security teams must collaborate with clinical teams to design systems that prioritize both care delivery and regulatory compliance—and never sacrifice one for the other. Looking Ahead: What's Next for Healthcare Identity? We closed the episode by looking to the future of identity security in healthcare—and AI took center stage. Joel predicted that advances in AI-powered automation will enable healthcare systems to reduce manual tasks, enhance user experiences, and improve security postures. However, given AI's reliance on data, Joel emphasized that organizations must invest heavily in data protection and governance. "Without data, AI is a worthless piece of technology," Joel stated bluntly. "We must ensure its accuracy, security, and integrity if we’re going to depend on it." Continue the Conversation: To hear more about strategies for fostering trust while addressing modern security concerns in healthcare, check out Imprivata’s podcast, Access Point. For even more insights, don’t miss the upcoming CHIME State of Cyber Summit on January 20th, where AI’s role in healthcare security will be further explored. And of course, join us for more episodes of We Have Trust Issues, where the most important (and sometimes controversial) topics in cybersecurity are always on the table.]]></description>
            <dc:creator>Tamer Baker (Healthcare CTO)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Beyond Patient Zero: Why Detection is Dead and Quarantine is King]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/beyond-patient-zero-why-detection-dead-and-quarantine-king</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/beyond-patient-zero-why-detection-dead-and-quarantine-king</guid>
            <pubDate>Wed, 14 Jan 2026 12:24:18 GMT</pubDate>
            <description><![CDATA[A recent survey found the median ransomware variant can encrypt nearly 100,000 files (about 53.93 GB) in 43 minutes. This is why “Time to Detect” is starting to feel like a comforting statistic from a slower decade. At a time when ransomware can encrypt 300 files in under a minute, detection is a consolation prize, not a strategy. If your security tool alerts you five minutes after a user has downloaded a malicious file, the damage is already in motion. This is the "Patient Zero" Paradox: traditional security tools often allow the first user to download a file while analyzing it in the background. They sacrifice the security of that first user to maintain speed for everyone else. It’s time to retire the "detect and remediate" model. To stop modern threats, we must move to a "quarantine and prevent" architecture. The Flaw in "Allow and Scan": Legacy sandboxing solutions (and even some modern firewalls) operate on a pass-through architecture. They inspect traffic, but to avoid latency, they often allow a file to pass through to the endpoint before the verdict is ready. If the file turns out to be malicious, the alert comes too late. The code has already been executed. The endpoint is compromised, the exposed data defines the blast radius, and the organization is now in a reactive state of breach containment. This approach treats the first victim (Patient Zero) as a sacrificial lamb. The Solution: AI-Driven Quarantine. Zscaler Advanced Cloud Sandbox isn't just about scanning more files; it's about fundamentally changing when the verdict is applied. 1. Hold the File, Not the Verdict: Advanced Cloud Sandbox uses AI-Driven Quarantine to hold suspicious files in the cloud while they are analyzed.
The user does not receive the file until it is verified as safe. This protects the first user (Patient Zero) from infection, rather than just alerting you after the fact. It also eliminates the "race condition" in which malware races to encrypt files before the sandbox finishes its analysis. Closing the Resilience Gap: Adopting a quarantine-first model is about more than technical efficacy; it’s about business continuity. Eliminate the "Safe Site" Blind Spot: The "Developer Blind Spot" was the defining theme of late 2025. Campaigns targeting the npm and PyPI ecosystems (such as the "Shai-Hulud" malicious packages) proved that developers are the new high-value targets. These attacks didn't come through sketchy websites; they came through "trusted" repositories and legitimate-looking scripts. Because Basic Sandbox often ignores script files or archives from "neutral" URLs, these supply chain attacks walked right past the perimeter. Prevent Supply Chain Poisoning: By stopping Patient Zero, you prevent the initial foothold that attackers use to move laterally. You aren't just saving one laptop; you are protecting the integrity of the wider network. Regulatory and Compliance Maturity: For regulated industries, proving that you have controls in place to prevent malware, rather than just detect it, is a cleaner, stronger narrative for compliance frameworks and Zero Trust maturity. The Bottom Line: If your sandbox policy is set to "Detect," you are operating on a probability model that assumes you can clean up a mess faster than an attacker can make one. But true security goes beyond blocking threats; it must also accelerate your operations. By leveraging the Zscaler Sandbox API, you can evolve your SOC from a reactive cleanup crew into a proactive intelligence hub.
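As a rough illustration of the quarantine-first idea (a generic sketch only, not Zscaler's actual API; the function names and verdict strings here are invented for illustration), a gateway can simply refuse to release a file until an out-of-band analysis returns a verdict:

```python
import hashlib
from typing import Callable, Optional

def quarantine_gate(file_bytes: bytes,
                    get_verdict: Callable[[str], str]) -> Optional[bytes]:
    """Hold a file until a sandbox verdict is known (quarantine-first).

    Unlike a pass-through "allow and scan" flow, the caller never
    receives the bytes unless the verdict comes back benign, so the
    first requester (Patient Zero) is never sacrificed.
    """
    digest = hashlib.sha256(file_bytes).hexdigest()
    verdict = get_verdict(digest)   # blocks until analysis completes
    if verdict == "benign":
        return file_bytes           # release the file to the user
    return None                     # malicious or unknown: stays quarantined
```

A pass-through model would hand back file_bytes first and check the verdict afterwards, which is exactly the race condition described above.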
This integration empowers your team to automate analysis, enrich investigations, and operationalize intel. To truly secure the modern enterprise, you must transition to Advanced Cloud Sandbox. Stop relying on finding the needle in the haystack after it pricks you. Insist on a system that keeps the needle out of your hand entirely. Want to talk to an expert? Click here.]]></description>
            <dc:creator>Nishant Kumar (Senior Manager, Product Marketing)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Fortify Your Future: How Zscaler Drives a Modern Defensible Architecture for Supercharged Cyber Resilience]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/fortify-your-future-how-zscaler-drives-modern-defensible-architecture</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/fortify-your-future-how-zscaler-drives-modern-defensible-architecture</guid>
            <pubDate>Mon, 12 Jan 2026 22:15:03 GMT</pubDate>
            <description><![CDATA[In today's hyper-connected world, cyber threats are not just a possibility; they're a relentless reality. From sophisticated nation-state actors to organised criminal groups, adversaries are more advanced and persistent than ever. Traditional perimeter-based security is simply inadequate, leaving organisations with an increased attack surface and vulnerable to breaches. This escalating complexity demands a fundamental shift in how we approach cybersecurity. Enter the Australian Cyber Security Centre's Foundations for Modern Defensible Architecture (MDA), a critical framework offering a strategic, layered blueprint for building contemporary cyber resilience. The MDA champions three core pillars: a Layered Architecture, comprehensive Zero Trust, and Secure-by-Design methodologies. At its core, Zscaler's Zero Trust Exchange was developed to support the implementation of these vital principles. The MDA Blueprint: Beyond the Perimeter. The Modern Defensible Architecture isn't just about replacing existing perimeter controls with new, stronger perimeter controls; it's about building a fortress from the inside out.
It acknowledges that breaches are inevitable and focuses on minimising impact, containing threats, and ensuring business continuity. Its three pillars are: Layered Architecture and Traceability, ensuring security controls are directly linked to business objectives and provide deep visibility; Comprehensive Zero Trust, embracing "never trust, always verify", "assume breach", and "verify explicitly" principles for every interaction; and Secure-by-Design, integrating security from the outset of all development and operational processes, making it an inherent quality, not an afterthought. These pillars are further supported by ten foundational capabilities, designed to create an environment of continuous security validation and adaptation. Zscaler's Zero Trust Exchange: The Engine of MDA. Zscaler's cloud-native Zero Trust Exchange platform is uniquely positioned to help organisations achieve the MDA vision. It upends traditional network security by connecting users directly to applications, not the network.
This "never trust, always verify" model transforms security from a static perimeter defense to continuous verification of every user, device, and application interaction.Here's how Zscaler provides the necessary components to build a robust and defensible architecture:Zscaler Internet Access (ZIA): For secure internet and SaaS access, inspecting all outbound traffic for threats and policy violations.Zscaler Private Access (ZPA): Providing zero trust access to internal private applications, making them "dark" to the public internet.Zscaler Digital Experience (ZDX): Offering end-to-end monitoring of user experience and application performance.Zscaler Client Connector (ZCC): The intelligent agent on endpoints that enforces policies and gathers crucial device posture.Zscaler Security Operations - Providing external attack surface management and continual risk, vulnerability and control context.Red Canary a Zscaler company: Provides 24x7 continuous security monitoring.Powering MDA: Zscaler in ActionLet's look at how Zscaler directly contributes to some of the MDA's most critical foundations:High Confidence Authentication (MDA Foundation 2)Phishing-resistant and cryptographically bound authentication is crucial. Zscaler integrates seamlessly with your existing IdP, enforcing strong MFA for all user authentications. Beyond the user, the ZCC provides a critical layer of device authentication, establishing a secure, device-bound tunnel to the Zscaler cloud. This dual-layered approach ensures access is granted only to an authenticated user from an authenticated and compliant device.Contextual Authorisation (MDA Foundation 3)The MDA demands dynamic, real-time validation for every access request, factoring in user identity, device posture, location, and threat intelligence. Zscaler's platform excels here. 
Its policy engine acts as the central Policy Decision Point (PDP), aggregating "confidence signals" from your Identity Provider (IdP), the ZCC (device health), and real-time threat intelligence. This allows Zscaler to enforce granular, adaptive policies, adjusting access privileges based on the evolving risk profile: truly "never trust, always verify." Reliable Asset Inventory (MDA Foundation 4): A reliable asset inventory drives better management, visibility, and decision making. A number of Zscaler modules provide near-real-time asset inventory information and/or scanning of external assets to supplement existing approaches to CMDB management, giving more accurate information about the assets deployed within the organisation. Reduced Attack Surface (MDA Foundation 6): The MDA emphasises minimising exploitable entry points. ZPA dramatically shrinks your attack surface by making private applications invisible to the internet. Instead of exposing services via inbound firewall ports, ZPA establishes secure, outbound-initiated microtunnels, preventing reconnaissance and direct attacks. ZIA complements this by securing all internet-bound traffic, blocking malicious sites and preventing exploitation at the web gateway. Universal vulnerability management provides up-to-date context to support vulnerability and patching processes. Continuous and Actionable Monitoring (MDA Foundation 10): All Zscaler components provide detailed logging that can feed into an organisation's security analytics systems. Zscaler also offers a 24x7 security monitoring service provided by Red Canary, a Zscaler company. Zscaler's capabilities extend across all ten MDA foundations, from supporting Centrally Managed Enterprise Identities and Reliable Asset Inventory to enabling Resilient Networks, promoting Secure-by-Design practices, and facilitating Comprehensive Validation and Continuous and Actionable Monitoring.
Its inline, cloud-native architecture generates a wealth of high-fidelity logs, seamlessly integrating with SIEM/SOAR platforms to provide unparalleled visibility and rapid response capabilities. Beyond Security: The Zscaler Advantage. Adopting Zscaler for your Modern Defensible Architecture offers a cascade of benefits: reduced risk, via the implementation of controls that align with the MDA principles; accelerated Zero Trust adoption, rapidly deploying "never trust, always verify" principles without complex network overhauls; enhanced threat prevention and data protection, with inline inspection, advanced threat detection, and robust Data Loss Prevention (DLP) stopping sophisticated attacks in their tracks; improved user experience, as direct-to-app connections mean faster, more seamless access for users, regardless of their location; operational simplicity and scalability, since the cloud-native platform removes the burden of managing security hardware, simplifying operations and scaling globally on demand; and unrivalled visibility and auditability, with granular insights into every user activity and security event empowering compliance, incident response, and continuous security validation. Beyond an IT project, the journey to a truly Modern Defensible Architecture is a strategic imperative for every organisation. Zscaler doesn't just check the boxes for MDA requirements; it embodies them. By shifting security to a cloud-native, Zero Trust model, Zscaler empowers you to build a future-proof, resilient security posture that can withstand and mitigate the cyber challenges of today and tomorrow. Ready to transform your security from reactive defense to proactive resilience? Contact us to explore how Zscaler can be the foundational enabler for your Modern Defensible Architecture. Stay tuned for Zscaler’s whitepaper covering in depth how Zscaler maps to the MDA foundations.]]></description>
            <dc:creator>Nick Clark (Zscaler)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Top 7 Requirements for Effective AI Red Teaming]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/top-7-requirements-effective-ai-red-teaming</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/top-7-requirements-effective-ai-red-teaming</guid>
            <pubDate>Mon, 12 Jan 2026 17:00:00 GMT</pubDate>
            <description><![CDATA[Enterprises across the globe are racing to deploy AI across every business workflow, but accelerated adoption brings a completely new class of risk, one that conventional security tooling was never designed to mitigate. LLMs hallucinate, misinterpret intent, overgeneralize policies, and behave unpredictably under adversarial pressure. Today, most organizations deploying LLM-powered systems at scale have little visibility into how their models fail or where real vulnerabilities are emerging. This is the reality customers now face: dozens of AI apps in production, hundreds more in development, and virtually no scalable way to understand or mitigate the risks. This is where AI red teaming becomes essential, and where Zscaler differentiates itself from every available solution in the market. The Hidden and Unknown Risks Behind LLM-Powered Systems: LLMs have introduced a range of vulnerabilities that cannot be uncovered through static code scanning or manual testing efforts. Organizations today struggle with: undiscovered exposure to prompt injection, jailbreaks, bias, and harmful outputs; hallucinations and trust failures that impact business decisions; no repeatable process to validate behavior across scenarios; a lack of on-domain testing coverage that reflects real user behavior; and manual red teaming that takes weeks to complete and still misses critical failure modes. As enterprises deploy AI globally and across different languages, modalities, and business units, the risks multiply. AI red teaming must be proactive, continuous, scalable, and deeply contextual. Top 7 Requirements for Effective Enterprise AI Red Teaming: Early red teaming solutions have suffered from a number of limitations, including lack of depth, limited operational scale, and tools that fail to reflect real-world threats. Here are some key requirements to look for in a modern, enterprise-grade AI red teaming solution: 1.
Domain-Specific Testing with Predefined Scanners (Probes): AI red teaming solutions should include a large number of predefined probes that test across major categories, such as security, safety, hallucination and trustworthiness, and business alignment. These should not be generic tests; they should be modeled after real enterprise scenarios and reflect how regular users, employees, and adversaries interact with AI systems. 2. Full Customizability for Comprehensive Testing Depth: Users should be able to provide structured context about their AI system and create fully customized probes: creating custom probes through natural language, uploading custom datasets with predefined test cases (bring your own dataset), and simulating business-specific attack paths. Basic red teaming solutions lack this close alignment with enterprise environments. 3. A Large, Continuously Updated AI Attack Database: A robust AI attack database is critical to a successful red teaming solution. This means continuously updating the database through AI security research and real-world exploitation patterns. A comprehensive attack database ensures organizations can always test against the current AI threat landscape. 4. Scalability (Simulate Thousands of Test Cases in Hours): A robust AI red teaming platform should be able to run thousands of on-domain test simulations in hours, not weeks. This makes enterprise-wide AI risk assessments across hundreds of different use cases achievable. 5. Multimodal and Multilingual Testing Coverage: AI red teaming solutions should test across text, voice, image, and document inputs, in more than 60 supported languages. Global deployments require global testing standards. 6. Modular Out-of-the-Box Integrations for Any Enterprise AI Stack: Robust AI red teaming solutions should support a wide range of built-in connector types (REST API, LLM providers, cloud platforms, enterprise communication platforms).
This enables seamless integration into any enterprise AI architecture. 7. AI Analysis with Instant Remediation Guidance: Identifying issues is only the start. AI red teaming solutions should also provide analysis that explains extensive testing results in plain language, highlights the most critical jailbreak patterns, and generates actionable remediation guidance. Accelerate Your AI Initiatives with Zero Trust: AI red teaming isn't just about surfacing failures; it’s about understanding them, learning from them, and operationalizing AI protection at the needed scale. With its recent acquisition of SPLX, Zscaler delivers the most complete, scalable, and deeply contextual platform, turning AI risk into something measurable, manageable, and, most importantly, fixable. Learn more about Zscaler’s newest addition to its AI security portfolio, including the unveiling of exciting new capabilities in our exclusive launch: Accelerate Your AI Initiatives with Zero Trust. This blog post has been created by Zscaler for informational purposes only and is provided "as is" without any guarantees of accuracy, completeness, or reliability. Zscaler assumes no responsibility for any errors or omissions or for any actions taken based on the information provided. Any third-party websites or resources linked in this blog post are provided for convenience only, and Zscaler is not responsible for their content or practices. All content is subject to change without notice. By accessing this blog, you agree to these terms and acknowledge your sole responsibility to verify and use the information as appropriate for your needs.]]></description>
            <dc:creator>Dorian Granosa (Director, AI Research)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Threat Intel, SSL Inspection and Other Considerations: A Real-World Checklist for SSE]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/threat-intel-ssl-inspection-and-other-considerations-real-world-checklist</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/threat-intel-ssl-inspection-and-other-considerations-real-world-checklist</guid>
            <pubDate>Mon, 12 Jan 2026 14:39:39 GMT</pubDate>
            <description><![CDATA[Somewhere in the middle of your cloud-first journey, there’s a moment that doesn’t feel like progress. Despite users, apps, and data being decentralized and spread everywhere, most real-world trouble still walks through the same front door it always has: the open web. The web now hosts SaaS, partner portals, developer tooling, and a growing pile of AI assistants, almost all wrapped in TLS/SSL. Great for privacy. Brutal for visibility. Without scalable inspection, encryption becomes a cloak for lateral movement, malware delivery, and data exfiltration. Security Service Edge (SSE) adoption is the logical response to this shift. But it only wins if your Secure Web Gateway (SWG) can take a punch. So ask yourself: will your SSE-based platform hold up in production? Validate it against this five-point checklist. 1. The Encryption Test: Can You Inspect Without Collapsing? With over 87% of threats now delivered via encrypted channels, SSL/TLS inspection is no longer optional; it’s baseline defense. However, the architectural challenge is not simply capability, but capacity. Legacy appliances and their virtualized equivalents are bound by fixed compute resources. When inspection load spikes, they force a choice: throttle the user or bypass security. A cloud-native proxy architecture eliminates this trade-off by decoupling inspection from physical hardware limits, dynamically scaling to inspect traffic without creating a bottleneck. Considerations for your SSL/TLS inspection: Decrypt coverage: What percentage of relevant TLS SaaS sessions do you inspect, by app and by category? Granular TLS controls: Can you control which specific apps are decrypted or bypassed (SNI-based)? Are these policy controls consistent across web and SaaS applications? Certificate reality: How is certificate distribution managed across all managed and unmanaged devices?
How are trust store updates propagated across VDI? Performance and failure mode: What’s the p95 added latency at normal and peak load, and is fail-open vs. fail-closed configurable by risk tier? Exception governance: For every application bypass, can you prove an owner, a reason, and an expiry/review cycle, with reporting? Protocol roadmap: Beyond TLS 1.3, what’s your plan for QUIC/HTTP/3 visibility and mitigation when full inspection isn’t possible? 2. The Traffic Flow Test: Local Breakout vs. The Hairpin. In a distributed world, network architecture equates to security architecture. If users in Europe have to hairpin through a U.S. hub just to reach a European SaaS endpoint, you’re paying a latency and bandwidth tax with no security upside. That’s not SSE. That’s hub-and-spoke in a new outfit. A true Zero Trust Exchange model inverts this. It routes users to the nearest point of presence, applies security policy instantly, and connects them directly to their destination, so the infrastructure stays invisible to attackers and users connect to apps, not networks. Considerations: Nearest enforcement point: Are you fully utilizing a truly global presence of 150+ data centers, or relying on a few “regional hubs” that recreate choke points? Real latency evidence: Do you have traceroutes and real-user latency across multiple geos and ISPs (not vendor demo networks)? If you’re a Zscaler customer, use ZDX to baseline the user-to-app path (device → Wi-Fi/ISP → Zscaler cloud → SaaS/app) and show where the delay lives. One policy model, everywhere: Does policy follow the user, or do rules drift by geography, and do you have audit trails for what was applied? Predictable egress steering: Can you comply with regional and SaaS requirements, such as in-country logging and dedicated IPs (using your own IPs if necessary)? Are your users viewing content in their local language with little impact on performance? 3.
The Operational Reality: Reducing the Burden or Relocating It? Traditional appliance-based models force you to manage dozens of boxes: patching, upgrading, monitoring. That multiplies operational risk and burns scarce engineering time. A lot of “modern SWG” projects stall because they just relocate the same burden into cloud instances and call it progress. A cloud-native SWG removes the need for distributed firewalls and point products, cutting hardware spend and patch overhead, while the platform updates continuously as threats evolve, without forklift upgrades. Considerations: Ownership boundary: Do you have a clear demarcation between your service provider’s responsibilities and your own? Do you still own uptime, scaling, and patching after moving to the service? No infrastructure runbooks: If you are still scheduling reboot windows or kernel patches, are you running software, or consuming a service? Elasticity under stress: Has your M&A cutover been simplified? Do you still have to plan infrastructure for office-reopening spikes? 4. The Data Protection Test: Inline Enforcement. SWG isn’t “web filtering” anymore. It is business protection.
Modern exfiltration doesn’t look like “upload to a sketchy site.” It looks like sanctioned SaaS uploads, mis-shared links, copy/paste into AI assistants, and normal workflows moving sensitive data to the places where work actually happens. The question is: can your SWG enforce data protection policies inline, not after the fact? Considerations: Inline controls for web and SaaS sessions: Is enforcement happening inline in the SWG path, or are you leaning on API-based, after-the-fact scanning that shows up after the damage is done? Unified DLP policy and engine: Are the same classifiers, dictionaries, and fingerprinting used across DLP/CASB/email and enforced inline for web and SaaS, or does “HR data” trigger in email but slip through the browser? Detection depth: Do you truly cover PII/PCI/PHI, exact data matching, document fingerprinting, and regional identifiers tied to your regulatory footprint, and are decisions context-aware (user + device posture + app + action)? GenAI coverage: As AI adoption grows, does your SWG inspect prompts, uploads, and browser sessions for web AI tools, inline and in real time? Proof scenarios to run (don’t skip these): upload source code to a developer SaaS; paste customer data into a web-based AI assistant; sync sensitive files to cloud storage. Is your SWG able to prevent all of the above? If the result is “we detected it in logs,” you didn’t protect anything. 5. The Threat Intel Test: Cloud Speed vs. Patch Speed. Finally, look at the speed of your defense. Does threat intelligence move fast enough to matter? In an appliance model, a new zero-day often means waiting on a vendor patch, then testing it, then rolling it out across your fleet. In a cloud-native platform, a threat blocked in one geography (say, an attack on a manufacturing plant in Asia) can be turned into global protection, automatically and immediately. Considerations: Propagation speed: How quickly are cloud detections enforced for your tenant?
Does it take minutes or days?Real examples, with timelines: Is your SWG sharing recent campaigns, what triggered the update, and how fast protections rolled out?Global consistency: Is the same protection available across geographies and user populations?Your signals at cloud scale: Do your IOCs/blocklists go live quickly without turning into policy spaghetti?Learning loop + telemetry:Third-party validation: Beyond vendor claims, what independent evidence validates security effectiveness and real-world impact—e.g., published lab testing, peer-reviewed evaluations, external audits, analyst assessments, or customer-run benchmarks with documented methodology?Public proof trail:&nbsp;A good benchmark is the kind of public, time-stamped research stream Zscaler ThreatLabz publishes—ongoing security research write-ups and annual reports that document what changed and when.Conclusion: SSE in production vs. on PowerPointSWG isn’t making a comeback because anyone is nostalgic. It’s central again because the web is where your business runs—and where risk shows up first.So the question isn’t “Do we still need SWG?” It’s whether your SWG model can:If the answer is no… your SSE strategy is meant to look good only on a slide.Want to talk to an expert?&nbsp;click here.]]></description>
            <dc:creator>Nishant Kumar (Senior Manager, Product Marketing)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Zscaler Expands FedRAMP Moderate Cloud Data Plane to Support Global Operations]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/zscaler-expands-fedramp-moderate-cloud-data-plane-support-global-operations</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/zscaler-expands-fedramp-moderate-cloud-data-plane-support-global-operations</guid>
            <pubDate>Thu, 08 Jan 2026 15:03:01 GMT</pubDate>
            <description><![CDATA[Zscaler has expanded its FedRAMP Moderate Authorized cloud data plane to new locations in Zurich and Singapore to better enable U.S. government agencies, Federal Systems Integrators (FSIs), and enterprises to meet the evolving demands of global operations. These strategic new locations complement Zscaler’s existing FedRAMP Moderate cloud in the U.S., providing better performance and an optimized user experience for international workforces—all powered by Zscaler’s distributed Zero Trust Exchange.Enhancing Performance and Compliance for a Global WorkforceThe expansion of Zscaler’s FedRAMP Moderate cloud platform enables government organizations and federal contractors to transform legacy IT systems into secure, high-performance environments. The data plane expansion into the new locations in Zurich, Singapore, and soon São Paulo ensure fast, secure, and compliant access for employees and stakeholders globally, while addressing the most pressing challenges for today’s government operations, including:Enabling Secure Government Missions Across BordersProviding secure, fast, zero-trust access for overseas embassies, field operations, and branch offices is critical. With local breakouts in Zurich and Singapore, agencies can reduce latency, enhance productivity, and seamlessly connect international teams to U.S. Federal systems. Sensitive communications and data remain secure under Zscaler’s industry-leading platform.Supporting an International WorkforceFederal agencies and contractors who depend on non-U.S. employees to spearhead vital Federal programs globally. The globally expanding FedRAMP Moderate cloud platform enables vendors and agencies to securely access U.S. 
government environments directly in these international regions, improving performance and productivity, while maintaining FedRAMP compliance for global operations.Scaling Operations Without Local Hosting BurdensThe expanded cloud platform helps customers eliminate the need for local hosting sites, proxies, or PSEs. Using Zscaler’s distributed Zero Trust Exchange, agencies and organizations can avoid the complexity of managing regional systems while staying scalable and compliant.Why Zurich and Singapore?Zurich and Singapore were chosen for their global strategic importance:Zurich supports U.S. operations across Europe, making it easier for agencies and contractors to maintain high performance and meet stringent European regulatory requirements.Singapore is a critical hub for Southeast Asia, empowering federal and enterprise customers with low-latency performance and robust compliance infrastructure in the APAC region.Looking AheadWith FedRAMP Moderate cloud expansion now live in Zurich and Singapore, and São Paulo on the horizon, Zscaler continues to transform global government operations. These expansions ensure fast, secure, and compliant performance for international employees and contractors, while enabling government agencies and federal enterprises to confidently scale their global operations with Zscaler’s market-leading cloud security and zero-trust architecture.]]></description>
            <dc:creator>Niraj Gopal ( Head of Product Management, Federal and Sovereign Clouds)</dc:creator>
        </item>
        <item>
            <title><![CDATA[2025 Reflections and 2026 Predictions: Healthcare’s Cybersecurity Frontier]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/2025-reflections-and-2026-predictions-healthcare-s-cybersecurity-frontier</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/2025-reflections-and-2026-predictions-healthcare-s-cybersecurity-frontier</guid>
            <pubDate>Tue, 06 Jan 2026 22:17:22 GMT</pubDate>
            <description><![CDATA[As cybersecurity professionals, one of the most valuable things we can do is reflect on the lessons of the past while preparing thoughtfully for the challenges ahead. Healthcare is a uniquely complex field, and its evolving cybersecurity landscape demands fresh perspectives and intentional strategies.On the latest episode of We Have Trust Issues, we (Tamer and Steven) invited Carter Groome, CEO of First Health Advisory, to join us in dissecting 2025’s major healthcare trends and anticipate what 2026 has in store. Carter’s perspective as a seasoned consultant and industry leader revealed what healthcare cybersecurity leaders need to know to navigate pressing challenges in AI adoption, regulatory compliance, risk reduction, and operational resilience. Here are the takeaways we think every reader should consider carefully.Lessons from 2025: A Pivotal Year for HealthcareTake a deep breath—2025 was a whirlwind. Beyond a surge in AI implementation, the healthcare sector faced mounting external pressures that forced security teams to evolve rapidly.Reflecting back, Carter identified two major themes that dominated 2025:Delivering Measurable Value in Cybersecurity: Boards are no longer interested in hearing about risks without action plans. 2025 saw heightened calls for rationalizing technologies, streamlining tools, and proving measurable reductions to risk exposure. Security leaders need to answer questions about their stacks: Are tools overlapping unnecessarily? Is anyone addressing the noise? How can systems integrate to reduce vulnerabilities, instead of simply highlighting them?Building Resilience: Healthcare organizations shifted heavily toward operational resilience. With the assumption that a breach isn’t a matter of “if” but “when,” CISOs are investing more in continuity plans, disaster recovery strategies, and minimum viable hospital models.“Healthcare security teams aren’t just tasked with defending anymore. 
They need to recover and help organizations thrive—even when bad actors succeed,” Carter noted during our conversation. The incredible pressure to enable agility while reducing costs has left security leaders juggling priorities more intensely than ever.AI Dominated 2025: But What Was the Real Impact?Artificial Intelligence was the buzzword of the year—and while it unleashed enormous potential across healthcare, it also exposed serious risks. We’ve seen enterprises rush to adopt AI solutions across operations, clinical workflows, and cybersecurity. But this “race to innovate” often lacks governance, intentionality, or alignment with real-world challenges.“There’s been an obsessive approach to implementing AI for the sake of implementing AI,” Carter noted. “Boards push competitive advantages, efficiency, and labor replacement—but often forget the critical steps like governance and risk reviews. This pressure could lead organizations into dangerous territory if left unchecked.”The parallels with the onset of the pandemic are impossible to ignore, as organizations scrambled to enable work-from-home setups overnight, figuring out security after the fact. While AI represents progress, Carter warned against deploying solutions without thoughtfulness, transparency, or careful evaluation of real use cases.As security professionals, we agree there’s a need for balance—AI adoption doesn’t have to mean sacrificing foundational principles. Instead, let’s focus on sober assessments of AI’s utility and risks, ensuring tools solve problems rather than creating new vulnerabilities.Looking Ahead: Predictions for 2026As we turn to 2026, Carter emphasized one guiding principle: intentionality. Healthcare needs more deliberate efforts to address governance structures, data strategies, and technical infrastructure. 
Without thoughtful preparation, healthcare organizations won’t be able to keep up with the accelerating pace of threats. Here’s what Carter predicts for 2026:

Identity Takes Center Stage: Identity management—including human users, devices, and AI agents—will be mission-critical as adversaries find easier ways to exploit credential-based attacks. With healthcare tied so closely to IoT and medical devices, zero trust policies will increasingly target identity-first frameworks.

Organizational Extortion Intensifies: Executive extortion and class action lawsuits after breaches are likely to increase, leaving healthcare CISOs to defend both the digital and legal standing of their organizations. Carter emphasized that industry-wide adoption of baseline cybersecurity controls, such as the Cybersecurity Performance Goals (CPGs), could reduce liability and improve recoverability.

Malware-Free Intrusions Become Commonplace: Why hack systems when stolen credentials allow bad actors to log in directly? Healthcare organizations will need to rethink defenses to address this growing trend.

Authenticity Becomes a Priority: AI-generated media, voice deepfakes, and sophisticated social engineering tactics will make distinguishing real from fake harder than ever. Security strategies must emphasize authenticity, ensuring trust remains intact across systems, users, and stakeholders.

Risk Reduction Must Be Measurable: Platforms will need to shift from identifying risks to actively reducing them. Carter projected that organizations will cancel contracts with tools unable to demonstrate measurable risk reduction and ROI.

Cybersecurity Strategy in Action

As we discussed with Carter, healthcare cybersecurity leaders have their work cut out for them in 2026.
A successful strategy will hinge on intentional planning and coordinated efforts, and there are tangible steps organizations can take right now:

Rationalize Your Security "Estate": Visibility across IoT, medical devices, IT systems, and data inventory is critical. Carter highlighted that high-fidelity inventories and tools explicitly designed to consolidate visibility will offer healthcare organizations a competitive edge.

Prove ROI: Security is often seen as a cost center, but boards are asking for more. Carter suggested that next year’s focus will be on demonstrating reduced costs, minimized risks, and smarter resource allocation.

Lead with Zero Trust and Identity Frameworks: The healthcare threat landscape is evolving, placing clinical workflows and patient devices at greater risk. Aligning resources with zero trust frameworks centered on human and device identity will be essential moving forward.

Adopt AI Intentionally: Thoughtful use of AI requires transparent vendors and proper risk evaluation. Avoid rushing to implement technology just because it’s available—focus instead on solutions that align with measurable outcomes.

The Regulatory Landscape

One area Carter flagged for significant 2026 growth is healthcare-specific regulation. From updates to the HIPAA Security Rule to sector-specific Cybersecurity Performance Goals (CPGs), policy movements will shape compliance efforts.

“Regulatory updates like HIPAA’s proposed rules bring significant pain points for healthcare organizations,” he explained. “If frameworks are too demanding, security leaders will need time, consultation, and scalable solutions to avoid compounding financial strain in an already vulnerable industry.”

Final Thoughts: Authenticity Sets the Tone

As we said goodbye to Carter after the episode, he left us with one important point: authenticity will be at the heart of effective cybersecurity strategy in the year ahead. Healthcare leadership—boards, C-suite executives, and cybersecurity professionals alike—must create a foundation of trust across their organizations. Whether defending against adversaries or educating teams about skepticism online, setting the right tone will drive investment in security and privacy.

“Nobody wants their healthcare organization to get extorted by bad actors—and nobody wants their patients to lose confidence in their care providers,” Carter remarked. “Right now, the focus needs to be on reducing risks thoughtfully and proving value in everything we do.”

We couldn’t agree more—and as we enter 2026, intentional planning and prioritized solutions must be the cornerstone of every healthcare security program.]]></description>
            <dc:creator>Tamer Baker (Healthcare CTO)</dc:creator>
        </item>
        <item>
            <title><![CDATA[ShadyPanda and the Seven-Year Browser Extension Breach: How Zscaler SSPM Strengthens SaaS Supply Chain Security]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/shadypanda-and-seven-year-browser-extension-breach-how-zscaler-sspm</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/shadypanda-and-seven-year-browser-extension-breach-how-zscaler-sspm</guid>
            <pubDate>Tue, 06 Jan 2026 16:00:03 GMT</pubDate>
            <description><![CDATA[A recently uncovered campaign known as&nbsp;ShadyPanda revealed how trusted Chrome and Edge browser extensions can be quietly weaponized over time. For seven years, the attackers behind ShadyPanda used seemingly harmless extensions—some with over&nbsp;4 million installs—to manipulate browser activity, redirect searches, collect behavioral data, and inject malicious scripts into web sessions.While browser extensions cannot directly access files stored inside SaaS applications, they operate within the user’s authenticated browser environment. This allows them to observe browsing behavior, redirect users to malicious sites, interfere with session flows, and influence how users interact with enterprise SaaS applications. When extensions possess high-risk permissions such as&nbsp;cookies,&nbsp;tabs, or&nbsp;webRequest, they introduce meaningful exposure to organizations.ShadyPanda demonstrates why extensions are part of today’s&nbsp;SaaS supply chain—and why continuous visibility and monitoring are critical.Fig: ShadyPanda Attack ChainHow Zscaler SSPM helps identify and mitigate risks like ShadyPandaZscaler SSPM provides the capabilities organizations need to detect risky browser extensions early, understand their impact, and take appropriate action through governance and endpoint controls.1. 
Comprehensive visibility into browser extensionsZscaler maintains a large catalog of SaaS apps, third-party integrations, and browser extensions enriched with:Publisher and version historyRequested permissionsBehavioral and risk attributesThreat intelligence indicatorsAs soon as users install an extension—regardless of how benign it appears—it is surfaced in&nbsp;third-party plugin Inventory, categorized by risk (e.g.,&nbsp;Potentially Harmful,&nbsp;Over-Privileged,&nbsp;Dormant).ShadyPanda extensions exhibited high-risk permission patterns early on, which Zscaler would have highlighted for security teams to review.The following screenshot shows how Zscaler solution identifies browser extensions such as&nbsp;“Clear Master” in the App Inventory, highlighting their permissions, risk attributes, and findings. This gives security teams immediate visibility into potentially harmful or over-privileged extensions present in their environment.&nbsp;&nbsp;2. Continuous monitoring for changes in permissions, behavior, or riskShadyPanda’s most dangerous activity began years after installation, delivered through silent updates.Zscaler SSPM continuously monitors extensions for:Increasing risk scoresNew permissions or expanded accessUpdated versions that introduce behavioral changesEmerging threat intelligence hitsIf an extension suddenly requests broader access—such as the ability to read cookies or intercept web requests—Zscaler generates an alert and notify that app risk has increasedThis early signal enables teams to investigate the extension and adjust internal controls before malicious behavior escalates.3. 
Understanding true impact through user and SaaS contextZscaler goes beyond identifying risky extensions—it correlates extension presence with:Which users installed itWhat SaaS applications those users accessPrivilege levels such as admin rolesExisting SaaS misconfigurations that could amplify exposureThis provides a clear blast-radius view:An extension installed by a low-privilege user may represent minimal riskThe same extension installed by a global admin interacting with critical SaaS apps requires immediate attentionZscaler gives organizations the context needed to prioritize action and strengthen governance.&nbsp;4. Enabling customers to take targeted, policy-driven actionWith clear risk categorization, drift insights, and user/SaaS correlations, customers can:Update browser and endpoint policiesRestrict certain categories of extensionsRequire security review for extensions requesting sensitive permissionsRemove or disable unapproved extensions through existing IT controlsEducate users and enforce internal governance policiesZscaler provides the intelligence and prioritization needed to make these actions timely and effective.Strengthen Your SaaS Supply Chain SecurityShadyPanda reinforces that browser extensions are part of the modern SaaS ecosystem—and that risks can evolve long after initial installation.&nbsp;Zscaler SSPM equips organizations with the visibility, context, and continuous monitoring required to surface these risks early and take action before attackers gain footholds.To learn how Zscaler can help assess and secure your SaaS and extension landscape, contact your Zscaler representative for a demo, or request one&nbsp;here.&nbsp;&nbsp;&nbsp;This blog post has been created by Zscaler for informational purposes only and is provided "as is" without any guarantees of accuracy, completeness or reliability. Zscaler assumes no responsibility for any errors or omissions or for any actions taken based on the information provided. 
Any third-party websites or resources linked in this blog post are provided for convenience only, and Zscaler is not responsible for their content or practices. All content is subject to change without notice. By accessing this blog, you agree to these terms and acknowledge your sole responsibility to verify and use the information as appropriate for your needs.]]></description>
            <dc:creator>Niharika Sharma (Staff Product Manager - CASB PM)</dc:creator>
        </item>
        <item>
            <title><![CDATA[Building A Better Zero Trust Culture Starts With Debunking The Myths Around Trust]]></title>
            <link>https://www.zscaler.com/de/blogs/product-insights/building-better-zero-trust-culture-starts-debunking-myths-around-trust</link>
            <guid>https://www.zscaler.com/de/blogs/product-insights/building-better-zero-trust-culture-starts-debunking-myths-around-trust</guid>
            <pubDate>Mon, 05 Jan 2026 22:02:28 GMT</pubDate>
            <description><![CDATA[The term Zero Trust is everywhere in conversations around cybersecurity, from boardroom slides, project plans, and strategy documents, to architectures and technical designs. As Zero Trust Network Access (ZTNA) moves from tech jargon to mainstream lingo in Australian public sector organisations, an unexpected side effect has arisen: discomfort. The term “Zero Trust” just… sounds harsh. For many staff, it can feel like a vote of no confidence in their integrity or professionalism. But, herein lies the misconception. Let’s unpack what Zero Trust really means, why the confusion exists, and how staff play an essential role in creating a secure digital culture.What Zero Trust Actually Is...And What It Isn’tZero Trust isn’t a judgement of someone’s loyalty, values, security clearance, or intentions. It means not blindly trusting digital transactions and systems, even when the person using them is highly trusted. The core principle of Zero Trust is that every user, device, and digital request is continuously verified because the greatest vulnerabilities in today’s hyper-connected world come from the security assumptions that are made within them.Consider a well-intentioned, long-serving staff member. They have a spotless record and always follow security protocols. But what happens if their laptop picks up malware or is compromised? Suddenly, every action from that device, regardless of how well intentioned, could be a risk. Without Zero Trust controls, one click could inadvertently expose sensitive data within an entire network – VPNs can do little to protect at this stage. 
The role of Zero Trust, however, is to protect the organisation, its people and its data against these evolving threats, which can have nothing to do with staff behaviour or integrity.Zero Trust: A “Defensible Modern Architecture” for Our TimesThe Australian Cyber Security Centre (ACSC) describes Zero Trust as “a fundamental building block in creating a modern defensible architecture.” Instead of relying on a perimeter firewall and blind trust within it, Zero Trust builds verification and segmentation into every step of a digital transaction. This is typically visible to staff as the interactions from their endpoint to the applications they use.This approach doesn’t diminish the user’s role in these digital transactions. In fact, it should do the opposite. Staff, who understand why continuous verification is essential, become partners in security. In practice, this leads to faster, more reliable access, including for more than 120,000 educators and administrators at the&nbsp;Victorian Department of Education. With fewer connectivity issues and smoother lesson delivery, this has led to better outcomes for more than 680,000 Victorian students. Likewise, at&nbsp;Northern Beaches Council in Sydney, mobile and field workers have seen simpler, consistent access with fewer logins and reduced disruption to everyday work, allowing them to better service their local community.Zero Trust Culture: Trusting People, Not SystemsWithout context and leadership, the continuous verification of Zero Trust may lead to a perception among staff that they are not inherently trusted. However, a healthy Zero Trust culture is never about being suspicious of staff. It’s about creating an environment where everyone has the knowledge and tools to keep digital interactions secure. Protected transactions enable access from anywhere. 
When this is done well, staff notice the benefits in their day-to-day workflows, such as quicker paths into the tools they need and fewer support requests for access problems – just as staff at the Victorian Department of Education and Northern Beaches Council do. Empowered, informed staff normalise verification and help prevent breaches early.

How leaders can support cultural change for Zero Trust:

Lead with clarity and purpose: Explain that Zero Trust protects people and services by verifying digital activity. Frame changes in terms of safer, simpler work.

Design for minimal friction: Prioritise user experience so secure access feels seamless (e.g., fewer VPN dependencies, intelligent access to only the apps people need). Good UX builds trust in the model.

Make it practical and role-based: Provide guidance aligned to how staff work day to day – clear, role-specific access policies, simple steps for device health, and intuitive pathways to the apps they use most.

Co-create policies with staff: Involve frontline teams and champions in shaping access rules, testing changes, and giving feedback before broad rollout. Shared ownership reduces resistance.

Communicate early and often: Use transparent updates for what’s changing, why, and how it benefits staff. Pair announcements with short “how-to” resources and quick-win tips.

Invest in targeted enablement: Run brief, scenario-based sessions on topics like phishing resistance, secure collaboration, and working securely from anywhere. Keep training lightweight and practical.

Measure what matters: Track user-centric metrics – login success rates, access times to key apps, reduction in connectivity-related tickets – and share improvements with teams.

Support managers to model behaviours: Equip leaders to reinforce secure-by-default practices in team routines (e.g., verifying device health, just-in-time access) and celebrate positive outcomes.

Build feedback loops: Provide fast channels to report access pain points, respond visibly, and close the loop with fixes. Visible responsiveness strengthens confidence in the change.

Building Security on Trust, But the Right Kind of Trust

Zero Trust is a foundational cybersecurity approach built for the modern workplace, where people, devices, and applications are in constant motion. Its focus is always on digital trustworthiness, not doubting staff character. By cultivating a Zero Trust culture, organisations like those in the Australian public sector can create environments that are both highly secure and empowering for staff. When we challenge misconceptions and clarify the intent, staff become the champions of Zero Trust, driving better outcomes for everyone.]]></description>
            <dc:creator>Nick Clark (Zscaler)</dc:creator>
        </item>
    </channel>
</rss>