How a universal physical constant shapes the future of cloud and security
Predicting future technological performance is tricky business: We anticipate linear growth, but we experience something much different. Because we instinctively extrapolate from a straight line, the exponential nature of technological progress is hard to anticipate. That gap between expectation and reality holds all of us back as change accelerates.
Futurists frequently apply Moore’s Law -- the observation that processing power doubles roughly every two years -- to technological advancements. In April 2020, Zscaler announced that the cloud-based Zscaler Zero Trust Exchange was processing more than 100 billion transactions per day. Eighteen months later, the Zscaler Zero Trust Exchange was processing more than 200 billion transactions per day. (Thanks, Gordon!)
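For the curious, here is a quick back-of-the-envelope sketch in Python, using only the transaction figures above, of what that growth implies:

```python
import math

# Zscaler Zero Trust Exchange transaction volume (figures from the text)
start_per_day = 100e9   # transactions/day, April 2020
end_per_day = 200e9     # transactions/day, roughly 18 months later
months = 18

# Implied doubling time in months: elapsed time / log2(growth factor)
doubling = months / math.log2(end_per_day / start_per_day)
print(f"Observed doubling time: {doubling:.0f} months")
# -> 18 months, a shade faster than Moore's two-year cadence
```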
Moore's Law and Neven's Law are defining the trajectory of the technology revolution
Moore’s prediction has defined the trajectory of the technology revolution. But within the next ten years, Moore’s Law will confront physical limitations. Neven’s Law, a newer postulate, holds that quantum computers are gaining computational power at a doubly exponential rate. Neven’s Law could theoretically supplant Moore’s Law as the model for accurately predicting technology evolution.
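To get a feel for what "doubly exponential" means, consider this toy comparison. The numbers are purely illustrative, not a claim about any particular processor:

```python
# Illustrative only: exponential (Moore) vs. doubly exponential (Neven) growth.
# After n doubling periods, Moore-style capability scales as 2**n,
# while Neven-style capability scales as 2**(2**n).
for n in range(1, 7):
    print(f"period {n}: Moore x{2**n:>3}, Neven x{2**(2**n):,}")
```

By the sixth period, the Moore-style curve has grown 64-fold, while the Neven-style curve has grown by a factor of more than 10^19.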
Smartphone development aligns with Moore’s Law: we will continue to see smaller, more powerful devices with more memory and computational power. The same holds for networking bandwidth. But when it comes to network latency, no such luck. Latency improvements -- when they happen -- come in small increments.
Compute, storage, memory, and bandwidth capacity will keep growing at an accelerating pace. So how do we deal with latency? Transmitting data faster than the speed of light is, well, presumably impossible. (I had thought leveraging quantum entanglement to transmit data might address this challenge, but that doesn’t seem to be the case, at least for the foreseeable future.)
Latency’s limit: When traveling at the speed of light isn’t fast enough
Moore’s Law correlates loosely with the exponential growth in transactions processed on the Zscaler platform. But that same growth-forecasting model breaks down when it comes up against physical limits.
Some physics: Light travels at 299,792,458 meters per second, covering a kilometer in 3.33 microseconds. The light you see from that beautiful sunrise took more than eight minutes to reach Earth from the sun, some 152 million kilometers away. That eight-minute delay is latency in its purest form: the speed of light is an absolute boundary in the physical world. As much as we might want to, we can’t go faster than that.
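The arithmetic is easy to verify. A few lines of Python, using the distance quoted above, reproduce the delay:

```python
C = 299_792_458              # speed of light in a vacuum, meters/second

one_km_us = 1_000 / C * 1e6  # microseconds for light to cover one kilometer
print(f"One kilometer: {one_km_us:.2f} microseconds")   # ~3.34 us

sun_km = 152e6               # approximate Earth-sun distance from the text
minutes = sun_km * 1_000 / C / 60
print(f"Sunlight's journey: {minutes:.1f} minutes")     # ~8.5 minutes
```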
When moving through a physical medium, such as a fiber optic cable, light slows down, and latency increases accordingly. Light takes around 4.9 microseconds to cover each kilometer of fiber -- about 47 percent longer than in a vacuum.
While 4.9 microseconds of latency may not sound like much, it adds up over distance, and that accumulation is particularly significant in the world of networking. For example, a direct fiber cable laid in a straight line from Copenhagen, Denmark to Auckland, New Zealand would stretch 17,500 kilometers. The roundtrip signal travel time? About 172 ms. That’s direct, mind you. Real-world routing includes hops, routers, and suboptimal routing protocols along the way, all of which lengthen the travel distance and add latency: total latency is more like 300 ms.
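Here is that Copenhagen-to-Auckland arithmetic worked through, using the 4.9-microseconds-per-kilometer figure from above:

```python
FIBER_US_PER_KM = 4.9    # one-way latency per kilometer of fiber (from above)
distance_km = 17_500     # hypothetical straight-line Copenhagen-Auckland run

one_way_ms = distance_km * FIBER_US_PER_KM / 1_000
print(f"One way: {one_way_ms:.0f} ms, round trip: {2 * one_way_ms:.0f} ms")
# -> One way: 86 ms, round trip: 172 ms -- before hops, routers, and detours
```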
Why latency matters
Latency -- in all of its combined forms -- reduces enterprise network throughput, creates performance problems for collaboration platforms, and affects any application that requires connectivity. It’s the bane of application performance, leading to reduced productivity and even lost profit.
Emerging technologies like 5G, IoT/OT, VR/AR, “smart city” applications, and even autonomous vehicles demand near real-time connectivity performance. Vendors of those technologies often promote the associated latency and response-time improvements.
But no matter how those technologies are promoted, there will always be some element of latency. (Beyond the speed of light itself, compute processing and protocol overheads contribute significantly to network and application latency.)
The fixed baseline for latency: Why we can't ignore the flat line
Latency improvements -- in protocols, TCP handshaking, DNS response, and more -- all converge toward an absolute baseline: the speed of light.
Figure 1. Computational exponential growth vs. slow convergence toward the speed of light for application latency (Note: speed of light not to scale).
While networking protocol overheads tend to add the most latency, other aspects also slow connectivity performance. To ensure an optimal path for data traffic, IT leaders seek to reduce built-in infrastructure latency, particularly when they shift to fog and edge computing. When security is centralized, it adds still more latency: user traffic travels long distances over backhauled MPLS networks and then moves single-file through stacked appliances. Placing security processing at the cloud edge, in a distributed fashion, improves performance by shortening the travel distance and -- at least in the case of the Zscaler Zero Trust Exchange -- removing linear security processing. That security must be automated and software-defined to ensure scalability and simple policy enforcement.
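To make the difference concrete, here is a minimal latency model. The distances and per-engine delays are hypothetical, chosen only to illustrate the shape of the tradeoff:

```python
FIBER_US_PER_KM = 4.9  # one-way fiber latency per kilometer

def path_latency_ms(distance_km, engine_delays_ms, serial=True):
    """Round-trip latency: fiber propagation plus security processing.

    serial=True models traffic moving single-file through stacked
    appliances (delays add up); serial=False models a single pass in
    which all engines scan at once (only the slowest engine counts).
    """
    propagation = 2 * distance_km * FIBER_US_PER_KM / 1_000
    processing = sum(engine_delays_ms) if serial else max(engine_delays_ms)
    return propagation + processing

# Hypothetical per-engine delays (firewall, proxy, DLP, sandbox), in ms
engines = [5, 8, 12, 6]

# Backhauled: 2,000 km detour to a central datacenter, serial appliance stack
print(f"Backhauled: {path_latency_ms(2_000, engines, serial=True):.0f} ms")
# Cloud edge: 50 km to the nearest edge node, single-scan parallel engines
print(f"Cloud edge:  {path_latency_ms(50, engines, serial=False):.0f} ms")
```

Even under these generous assumptions, the backhauled path pays twice: once in propagation distance and again in serial processing.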
The next evolution in connectivity acceleration: Putting compute near everyone
New advances in digital telecommunications are disrupting traditional connectivity. We already see this in the deployment of 5G networks: companies can connect more directly and more often with employees, customers, and partners, with compute occurring closer to users and devices. Crucially, data travels a shorter distance, offering the promise of faster performance.
The telco companies behind 5G are moving away from legacy in-house, monolithic solutions and toward massively scalable, cloud-first, and (importantly!) highly distributed enterprise designs. They are refactoring infrastructure to be centrally managed but dynamically implemented at the cloud edge, nearer to onramps and consumption points.
An internet future: Security Service Edge (SSE) to the rescue
Latency -- in its many forms -- complicates the delivery of effective cybersecurity over traditional networking infrastructure. This cloud-first, device-agnostic, work-from-anywhere world demands a mindset change in the way we architect security into the organization: We must protect users, devices, and workloads no matter where those resources reside. We must ensure policy is user-, device-, and workload-centric, not network-centric. And enforcement must be architected for speed, leveraging single-scan, multiple-action technology to accelerate performance.
We may never be able to travel faster than the speed of light. But there is something we can do to reduce the time it takes for data to travel from A to B: bring security to the users and devices rather than expecting the users and devices to travel to the security. Achieving that requires distributed, cloud-edge-delivered security -- specifically, a Security Service Edge (SSE) architecture that ensures secure connectivity while minimizing data travel.
The future will be judged on user experience. The way we interact with technology demands security that delivers speed at the edge. Businesses will not survive without it.
What to read next
How bandwidth obsession masks what truly matters: quality of experience
Security Service Edge (SSE) reflects a changing market: what you need to know