What Is Multiprotocol Label Switching (MPLS)?

Multiprotocol label switching (MPLS) is a wide area networking (WAN) technique that routes traffic using labels rather than network addresses to determine the shortest possible path for packet forwarding. Rather than sending each data packet from router to router through conventional packet switching, it labels the packet and controls the path it follows. It's intended to minimize downtime, improve quality of service (QoS), and ensure traffic moves as quickly as possible.

How Does Routing Work in MPLS?

At the surface level, MPLS functionality turns routers into switches by giving traffic a predetermined path to take based on labels. In this sense, MPLS connections are more predictable and reliable than traditional packet- or circuit-switched connections.

MPLS allows IP packets to be forwarded at the OSI layer 2 (data link) switching level without being passed up to layer 3, the network or routing level. Traditional internet protocol (IP) routing sends traffic on a lengthy path with multiple stops, but with MPLS, traffic is given an MPLS label, inserted between the layer 2 and layer 3 headers, and sent along a label-switched path (LSP). With this method, routers only need to interpret the MPLS label of the traffic, not its full IP header.
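
To make the idea concrete, here is a minimal, hypothetical sketch in Python of how a label switch router might forward a packet: it looks up only the incoming label, swaps it for an outgoing label, and hands the packet to the next hop without ever parsing the IP header. The table entries and router names are invented for illustration.

    # Hypothetical label-swapping table: incoming label -> (outgoing label, next hop)
    LABEL_TABLE = {
        100: (200, "router-B"),
        200: (300, "router-C"),
    }

    def forward(packet):
        # The forwarding decision uses only the label, never the IP header.
        out_label, next_hop = LABEL_TABLE[packet["label"]]
        packet["label"] = out_label
        return next_hop, packet

    next_hop, labeled_packet = forward({"label": 100, "payload": b"ip-packet-bytes"})
    print(next_hop, labeled_packet["label"])  # router-B 200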

There are two types of MPLS routers: label edge routers (LERs), which sit at the edges of the MPLS network to attach labels to incoming packets (ingress) and remove them as traffic exits (egress), and label switch routers (LSRs), which sit along the path and forward packets by swapping one label for another and passing them to the next hop.

These routers combine packets with similar characteristics into the same forwarding equivalence class (FEC) so they can be given the same label and sent down the same LSP. In a corporate context, this can greatly reduce the number of distinct forwarding decisions the network has to make, which helps reduce latency.
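
As a rough illustration of FEC assignment, an ingress LER can be pictured as mapping packet characteristics (for example, destination prefix and traffic type) to a single label, so everything in the same class follows the same LSP. The prefixes, classes, and label numbers below are made up.

    # Hypothetical FEC table at an ingress label edge router (LER).
    FEC_LABELS = {
        ("10.1.0.0/16", "voice"): 100,  # latency-sensitive traffic
        ("10.1.0.0/16", "bulk"): 110,   # file transfers, backups
    }

    def assign_label(dest_prefix, traffic_class):
        # Packets in the same FEC get the same label and take the same LSP.
        return FEC_LABELS.get((dest_prefix, traffic_class), 999)  # 999 = default FEC

    print(assign_label("10.1.0.0/16", "voice"))  # 100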

What Is MPLS Used For?

MPLS works by creating point-to-point paths that act as circuit-switched connections, but deliver layer 3 IP packets. In this sense, it is best for organizations that have remote branch offices in a large number of widespread locations that need data center access. At least, this was what it was best used for when workers were still going to offices.

Many organizations would run MPLS within a virtual private network (VPN), either to create the aforementioned point-to-point paths or for a private LAN service. It was flexible in that it could be deployed regardless of the underlying network protocol (Ethernet, SDH, ATM, and so on); the forwarding decision was unaffected because, again, all that mattered was whether the label matched.

What Does MPLS Consist Of?

The MPLS header consists of four fields that improve connection quality and stability (illustrated in the sketch after this list):

  1. The label value: The identifier routers use to make forwarding decisions; there would be no MPLS without a label attached to each packet.
  2. Traffic class field: This component prioritizes packets for quality of service (QoS).
  3. Bottom-of-stack flag: This tells a router that it has reached the last label in the stack and the IP header comes next.
  4. Time-to-live (TTL): This is the number of hops a packet can make before it's discarded.
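
These four fields are packed into a single 32-bit MPLS header that sits between the layer 2 and layer 3 headers: a 20-bit label value, a 3-bit traffic class, a 1-bit bottom-of-stack flag, and an 8-bit TTL. The Python sketch below shows that packing with example values.

    # Pack the four MPLS header fields into one 32-bit word (example values only).
    def pack_mpls_header(label, traffic_class, bottom_of_stack, ttl):
        assert label < 2**20 and traffic_class < 2**3 and ttl < 2**8
        return (label << 12) | (traffic_class << 9) | (int(bottom_of_stack) << 8) | ttl

    header = pack_mpls_header(label=100, traffic_class=5, bottom_of_stack=True, ttl=64)
    print(f"{header:032b}")  # 20-bit label | 3-bit TC | 1-bit S | 8-bit TTL

Because the label is a fixed-length field, routers can look it up with a simple exact match instead of a longest-prefix IP lookup.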

These four fields make MPLS easier to manage than other, less rigid methods of traffic forwarding. It can be likened to tracking a shipment by its tracking number rather than having to guess, for example, the license plate or VIN of the delivery truck.

Benefits of MPLS

Many consider MPLS a somewhat dated approach: it still holds advantages over traditional IP routing, but it is struggling against more agile and flexible options such as SD-WAN. Nevertheless, it has a number of strengths.

Namely, it's more scalable than plain packet or circuit switching, offers high levels of performance, reduces network congestion, and delivers a better end user experience. It also eliminates the need to perform a routing table lookup at every hop. And, as mentioned earlier, it's a forwarding technique that can be deployed no matter which underlying network protocol your organization uses, which increases flexibility.

How MPLS Networks Work for Cloud Adoption

To enable MPLS to work in the cloud, you can supplement it with a number of technologies, including:

  • Virtual routing services: By using a cloud router on top of an MPLS appliance, you can leverage a software-defined network (SDN) to establish MPLS cloud connections.
  • Offloading: A direct-to-internet connection lets you offload web traffic so the MPLS circuit carries only the traffic headed to the office, unlocking spare capacity.
  • SD-WAN: SD-WAN augments MPLS with low-cost broadband internet links, or replaces it with internet connectivity entirely, so designs can be based on application and bandwidth needs.

MPLS can become a cloud-enabled technology if you pull the right strings, but as the cloud becomes ubiquitous, MPLS starts to phase itself out due to its legacy architecture. More on this in the next section.

Disadvantages of MPLS

MPLS comes with a number of issues that are accentuated as you adopt remote work and the cloud:

Increased Complexity

Deploying and managing routers at every location is time-consuming, leads to security compromises, and limits your ability to respond to changing needs.

Poor User Experience

Backhauling traffic to centralized security appliances that weren't designed to handle cloud app demands leaves users unproductive and unhappy.

A Lack of Security

When users drop off your network and MPLS VPN, your security policies go blind and risk increases. You need consistent protection no matter how users connect.

What’s more, MPLS doesn’t offer encryption, which is a major issue as operations move toward the cloud. Today, with cloud services driving up organizations' bandwidth demands and many regulations around protecting sensitive data—especially as it’s sent to and from the cloud—MPLS has become more difficult to justify.

MPLS vs. SD-WAN

MPLS labeling can provide advantages over less-refined traffic routing methods of the past, but SD-WAN uses software-defined policies to select the best path to route traffic to the internet, cloud applications, and the data center. This makes it more useful for real-time applications such as UCaaS, VoIP, business intelligence, and so on.
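
As a simplified, purely illustrative example of what those software-defined policies look like, an SD-WAN controller might compare each available link against an application's latency and loss requirements and steer real-time traffic to whichever link meets them. The link names, metrics, and thresholds here are invented.

    # Hypothetical SD-WAN path selection based on live link metrics.
    LINKS = {
        "broadband-1": {"latency_ms": 35, "loss_pct": 0.1},
        "lte-backup": {"latency_ms": 80, "loss_pct": 1.5},
    }

    POLICY = {"voip": {"max_latency_ms": 50, "max_loss_pct": 0.5}}

    def select_link(app):
        reqs = POLICY[app]
        candidates = [
            name for name, m in LINKS.items()
            if m["latency_ms"] <= reqs["max_latency_ms"] and m["loss_pct"] <= reqs["max_loss_pct"]
        ]
        # Prefer the lowest-latency link that satisfies the application policy.
        return min(candidates, key=lambda n: LINKS[n]["latency_ms"]) if candidates else None

    print(select_link("voip"))  # broadband-1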

SD-WAN provides simpler provisioning and a broader range of traffic engineering configurations thanks to its software-defined construction. By the same token, SD-WAN offers much improved security over MPLS: software-defined policies established and enforced via the cloud help you secure network traffic wherever it's coming from or going to.

SD-WAN offers a bevy of benefits over MPLS, but to gain a full, cloud-delivered networking and security stack that provides great experiences and tight security for users wherever they are, what you really need is secure access service edge (SASE).

A SASE Approach

To cope with the evolving challenges of the cloud, many organizations are moving toward cloud-based infrastructure for both networking and security, known as SASE. This framework provides secure access to all applications alongside full visibility and inspection of traffic across all connections.

SASE offers:

Reduced IT costs and complexity

Rather than focusing on a secure perimeter, SASE focuses on entities, such as users. Based on the concept of edge computing—processing of information close to the people and systems that need it—SASE services push security and access close to users. Using your security policies, SASE dynamically allows or denies connections to applications and services.
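
As a purely hypothetical sketch (not any product's actual API), a policy decision of this kind weighs who the user is, which application they're requesting, and context such as device posture before allowing or denying the connection.

    # Hypothetical identity- and context-aware access policy (illustrative only).
    RULES = [
        {"group": "finance", "app": "erp", "require_managed_device": True},
        {"group": "any", "app": "email", "require_managed_device": False},
    ]

    def allow_connection(user_group, app, managed_device):
        for rule in RULES:
            if rule["app"] == app and rule["group"] in (user_group, "any"):
                return managed_device or not rule["require_managed_device"]
        return False  # default deny: no matching rule, no access

    print(allow_connection("finance", "erp", managed_device=False))  # False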

Fast, seamless user experience

SASE calls for security to be enforced close to what needs securing: instead of sending the user to the security, it sends security to the user. SASE security is delivered from the cloud, intelligently managing connections at internet exchanges in real time as well as optimizing connections to cloud applications and services to ensure low latency.

Reduced risk

As a cloud native solution, SASE is designed to address the unique challenges of risk in the new reality of distributed users and applications. A key component of the SASE framework is zero trust network access (ZTNA), which provides mobile users, remote workers, and branch offices with secure application access while eliminating the attack surface and the risk of lateral movement on the network.

Zscaler SD-WAN and SASE

Zscaler leverages a diverse network of partners to help your organization quickly adopt and implement SD-WAN. These partners trust our cloud-based network architecture because it’s built according to Gartner’s vision of SASE, which has been proven to be the optimal framework for cloud-delivered networking and security through SD-WAN.

The Zscaler Zero Trust Exchange™ is a cloud native SASE platform built for performance and scalability. As a globally distributed platform, it ensures users are always a short hop from their applications. Through peering with hundreds of partners in major internet exchanges worldwide, it offers optimal routing, performance, and reliability for your users.

Discover how our SASE architecture can help you reduce IT cost and complexity while improving security and user experience.

