
Thursday, January 29, 2015

MPLS (Multiprotocol Label Switching)

“The key thing to understand about MPLS is that it’s a technique, not a service.”

The fundamental concept behind MPLS is that of labeling packets. In a traditional routed IP network, each router makes an independent forwarding decision for each packet based solely on the packet’s network-layer header. Thus, every time a packet arrives at a router, the router has to “think through” where to send the packet next.

With MPLS, when a packet first enters the network, it is assigned to a specific forwarding equivalence class (FEC), indicated by appending a short bit sequence (the label) to the packet. Each router in the network has a table indicating how to handle packets of a specific FEC type, so once the packet has entered the network, routers don't need to perform header analysis. Instead, subsequent routers use the label as an index into a table that tells them the next hop and the new label for that packet.

MPLS Basic Architecture
This gives the MPLS network the ability to handle packets with particular characteristics (such as coming from particular ports or carrying traffic of particular application types) in a consistent fashion. Packets carrying real-time traffic, such as voice or video, can easily be mapped to low-latency routes across the network — something that’s challenging with conventional routing. The key architectural point with all this is that the labels provide a way to “attach” additional information to each packet — information above and beyond what the routers previously had.

Multiprotocol Label Switching (MPLS) enables Enterprises and Service Providers to build next-generation intelligent networks that deliver a wide variety of advanced, value-added services over a single infrastructure.

MPLS can encapsulate packets of various network protocols. MPLS supports a range of access technologies, including T1/E1, ATM, Frame Relay, and DSL.

MPLS provides these beneficial applications:

  • Virtual Private Networking (VPN)
  • Traffic Engineering (TE)
  • Quality of Service (QoS)
  • Any Transport over MPLS (AToM)

Layer 2 or Layer 3?

MPLS is best summarized as a “Layer 2.5 networking protocol”.

There’s been a lot of confusion over the years about whether MPLS is a Layer 2 or Layer 3 service. But MPLS doesn’t fit neatly into the OSI seven-layer hierarchy. In fact, one of the key benefits of MPLS is that it separates forwarding mechanisms from the underlying data-link service. MPLS can be used to create forwarding tables for ATM or frame relay switches (using the existing ATM or DLCI header) or for plain old IP routers by appending MPLS tags to IP packets.

The bottom line is that network operators can use MPLS to deliver a wide variety of services. The two most popular implementations of MPLS are layer 3 BGP/MPLS-VPNs (based on RFC 2547) and Layer 2 (or pseudowire) VPNs.

MPLS allows most packets to be forwarded at Layer 2 (the switching level) rather than having to be passed up to Layer 3 (the routing level). Each packet gets labeled on entry into the service provider's network by the ingress router. All subsequent label-switching routers forward the packet based only on those labels; they never look as far as the IP header. Finally, the egress router removes the label(s) and forwards the original IP packet toward its final destination.

There are several flavors of Layer 2 MPLS services, but what they have in common is that a Layer 2 packet (or ATM cell or Frame Relay frame) is encapsulated in an MPLS header and forwarded through the MPLS core. When it reaches the other side, the packet's labels are removed, and the packet arrives at its ultimate destination exactly as it entered the MPLS network. Thus, Layer 2 MPLS services effectively extend services such as Ethernet or Frame Relay across an IP WAN.

How Does MPLS Work?

MPLS works by tagging the traffic (in this example, packets) with an identifier (a label) to distinguish the LSPs. When a packet is received, the router uses this label (and sometimes also the link over which it was received) to identify the LSP. It then looks up the LSP in its own forwarding table to determine the best link over which to forward the packet, and the label to use on this next hop.

A different label is used for each hop, and it is chosen by the router or switch performing the forwarding operation. This allows the use of very fast and simple forwarding engines, which are often implemented in hardware.

Ingress routers at the edge of the MPLS network classify each packet potentially using a range of attributes, not just the packet's destination address, to determine which LSP to use. Inside the network, the MPLS routers use only the LSP labels to forward the packet to the egress router.

How MPLS Works
The diagram above shows a simple example of forwarding IP packets using MPLS, where the forwarding is based only on packet destination IP address. LSR (Label Switched Router) A uses the destination IP address on each packet to select the LSP, which determines the next hop and initial label for each packet (21 and 17). When LSR B receives the packets, it uses these labels to identify the LSPs, from which it determines the next hops (LSRs D and C) and labels (47 and 11). The egress routers (LSRs D and C) strip off the final label and route the packet out of the network.
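
To make this label-swapping example concrete, here is a minimal Python sketch of the same forwarding tables. The label values and LSR names come from the diagram description above; the destination prefixes, table layout, and function names are illustrative assumptions, not any router's actual data structures.

# Minimal sketch of the forwarding example above; prefixes and names are illustrative.

# Ingress FEC table at LSR A: destination prefix -> (next hop, label to push)
INGRESS_FEC_TABLE = {
    "10.1.0.0/16": ("LSR B", 21),   # first flow in the example
    "10.2.0.0/16": ("LSR B", 17),   # second flow in the example
}

# Label forwarding table (LFIB) at LSR B: incoming label -> (next hop, action, outgoing label)
LFIB_B = {
    21: ("LSR D", "swap", 47),
    17: ("LSR C", "swap", 11),
}

# Egress LFIBs at LSRs D and C: strip the final label and route the IP packet normally
LFIB_D = {47: (None, "pop", None)}
LFIB_C = {11: (None, "pop", None)}

def ingress_forward(dest_prefix):
    """LSR A: classify on the destination and push the initial label."""
    return INGRESS_FEC_TABLE[dest_prefix]

def label_forward(lfib, incoming_label):
    """Transit or egress LSR: look up only the label, never the IP header."""
    return lfib[incoming_label]

print(ingress_forward("10.1.0.0/16"))   # ('LSR B', 21)
print(label_forward(LFIB_B, 21))        # ('LSR D', 'swap', 47)
print(label_forward(LFIB_D, 47))        # (None, 'pop', None): label stripped, packet IP-routed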

The above is only one use of MPLS. Since MPLS uses only the label to forward packets, it is protocol-independent, hence the term "Multi-Protocol" in MPLS. It can be used to carry any content (not only packets) over any link technology (using different label encoding for each layer 2 link type).

Forwarding Equivalence Class (FEC)

An FEC is a group of IP packets that are forwarded in the same manner, over the same path, and with the same forwarding treatment. An FEC might correspond to a destination IP subnet, but it might also correspond to any traffic class that the Edge-LSR considers significant. For example, all traffic with a certain value of IP precedence might constitute an FEC.
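
As a small illustration of this definition, the sketch below classifies packets into FECs either by IP precedence or by destination subnet. The FEC names, prefixes, and precedence value are made-up examples, not anything prescribed by MPLS.

from ipaddress import ip_address, ip_network

# Illustrative FEC definitions: an FEC can be a destination subnet or any
# traffic class the Edge-LSR considers significant (e.g. an IP precedence value).
FEC_BY_PREFIX = {
    ip_network("192.0.2.0/24"): "FEC-A",
    ip_network("198.51.100.0/24"): "FEC-B",
}
FEC_BY_PRECEDENCE = {5: "FEC-VOICE"}   # all traffic with IP precedence 5

def classify(dst_ip, ip_precedence):
    """Return the FEC for a packet; precedence-based FECs take priority here."""
    if ip_precedence in FEC_BY_PRECEDENCE:
        return FEC_BY_PRECEDENCE[ip_precedence]
    for prefix, fec in FEC_BY_PREFIX.items():
        if ip_address(dst_ip) in prefix:
            return fec
    return "FEC-DEFAULT"

print(classify("192.0.2.10", 0))   # FEC-A
print(classify("203.0.113.7", 5))  # FEC-VOICE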

MPLS Operation

MPLS works by prefixing packets with an MPLS header containing one or more labels. This is called a label stack. Each label stack entry contains four fields (a minimal encoding sketch follows this list):
  • A 20-bit label value. A label with the value of 1 represents the router alert label.
  • A 3-bit Traffic Class field (formerly the EXP field), used for QoS (quality of service) priority and ECN (Explicit Congestion Notification).
  • A 1-bit bottom-of-stack flag. If this is set, it signifies that the current label is the last in the stack.
  • An 8-bit TTL (time to live) field.
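
Here is a minimal, illustrative sketch of how those four fields pack into a 32-bit label stack entry. The function names and example values are assumptions for illustration only.

def encode_label_entry(label, traffic_class, bottom_of_stack, ttl):
    """Pack one 32-bit MPLS label stack entry:
    20-bit label | 3-bit Traffic Class | 1-bit bottom-of-stack | 8-bit TTL."""
    assert 0 <= label < 2**20 and 0 <= traffic_class < 8 and 0 <= ttl < 256
    return (label << 12) | (traffic_class << 9) | (int(bottom_of_stack) << 8) | ttl

def decode_label_entry(entry):
    """Unpack a 32-bit label stack entry into its four fields."""
    return {
        "label": entry >> 12,
        "traffic_class": (entry >> 9) & 0x7,
        "bottom_of_stack": bool((entry >> 8) & 0x1),
        "ttl": entry & 0xFF,
    }

entry = encode_label_entry(label=21, traffic_class=0, bottom_of_stack=True, ttl=64)
print(hex(entry))                 # 0x15140
print(decode_label_entry(entry))  # {'label': 21, 'traffic_class': 0, 'bottom_of_stack': True, 'ttl': 64}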

MPLS-labeled packets are switched based on a label lookup and swap instead of a lookup in the IP routing table. As mentioned above, when MPLS was conceived, label lookup and label switching were faster than a routing table or RIB (Routing Information Base) lookup because they could take place directly within the switching fabric rather than in the CPU.

The presence of such a label, however, has to be indicated to the router/switch. In the case of Ethernet frames, this is done through the use of EtherType values 0x8847 and 0x8848, for unicast and multicast traffic respectively.

MPLS Label Stacking

MPLS labels can also be stacked multiple times.
  • The top label is used to control the delivery of the packet.
  • When the destination is reached, the top label is removed (or “popped”), and the second label takes over to direct the packet further.

Some common stacking applications are:
  • VPN/Transport services, which use an inner label to map traffic to specific interfaces, and an outer label to route through the network.
  • “Bypass” LSPs, which protect a bundle of other LSPs so that traffic can be redirected quickly in the event of a router failure, without having to completely re-signal every LSP.

MPLS Label Stacking
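
The stacking behavior described above can be sketched as follows: an inner service label (for example a VPN label) is pushed first, a transport label is pushed on top of it, transit routers swap only the top label, and popping the transport label lets the inner label take over. All label values here are arbitrary examples.

# Illustrative label stack operations; label values are arbitrary examples.

def push(stack, label):
    """Push a new top label (index 0 is the top of the stack)."""
    return [label] + stack

def swap(stack, new_top_label):
    """Swap only the top label; inner labels are untouched in transit."""
    return [new_top_label] + stack[1:]

def pop(stack):
    """Pop the top label; the next label (if any) takes over."""
    return stack[1:]

stack = []
stack = push(stack, 3001)   # inner label, e.g. mapping traffic to a VPN or interface
stack = push(stack, 21)     # outer transport label, used to route across the core
print(stack)                # [21, 3001]
stack = swap(stack, 47)     # a transit LSR swaps only the outer label
print(stack)                # [47, 3001]
stack = pop(stack)          # the transport label is popped near the far end...
print(stack)                # [3001]  ...and the inner label directs the packet further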

MPLS Traffic Engineering

In MPLS traffic engineering, all configuration is done on a specific network node called the headend or ingress node. This is where all tunnels and constraints are created, and the tunnel destination address is also specified here. For example, if an MPLS traffic engineering tunnel is to be set up between R2 and R6 in Figure 1, all the definitions are done at R2. The tunnel destination is called the tailend or egress node.

MPLS traffic engineering tunnels are unidirectional, and the forward and return paths are not necessarily congruent. This means that if one tunnel is created to carry traffic from R2 to R6, the return tunnel from R6 to R2 is not created automatically. The reverse tunnel must be created separately, this time with R6 as the headend and R2 as the tailend. The tailend requires no configuration.

MPLS Traffic Engineering
Four steps are required for MPLS traffic engineering to take place:
  1. Link-state protocols carry link attributes in their link-state advertisements (LSAs) or link-state packets (LSPs).
  2. Based on the constraints defined, the traffic path is calculated with the help of Constrained Shortest Path First (CSPF).
  3. The path is signaled by Resource Reservation Protocol (RSVP).
  4. Traffic is then sent to the MPLS traffic engineering tunnel.

Let's take a look at these steps in detail:

1. By default, link-state protocols send only connected interface addresses and metric information to their neighbors. Based on this information, the Shortest Path First (SPF) algorithm creates a tree and builds the topology of the network. MPLS traffic engineering allows us to add some constraints. In Figure 1 above, let's assume the R2-R5 link is 5 Mbit/s; R5-R6 is 10 Mbit/s; and all the interfaces between the bottom routers are 6 Mbit/s.

If we want to set up a 6-Mbit/s tunnel, SPF will not even take the R2-R5-R6 path into consideration, because the link from R2 to R5 does not satisfy the minimum requirement.
In addition, we could assign an administrative attribute, also called a "color," to the link. For example, the R2-R5-R6 interfaces could be designated blue, and the R2-R3-R4-R6 route could be assigned red. At the headend, the constraint can then specify whether to use a path that contains a red or blue color.

The color/affinity information, as well as each link's available, reserved, and unreserved bandwidth, is carried within the link-state packet. In order to carry this information, extensions have been added to the link-state protocols: Open Shortest Path First (OSPF) carries it in the Opaque LSA (Type 10 LSA), and Intermediate System to Intermediate System (IS-IS) uses TLVs 22 and 135 for traffic engineering information.

2. As we stated earlier, SPF is used to calculate the path for destinations. For traffic engineering, a slightly modified version of SPF is used, called constrained SPF (CSPF). With the extensions to link state protocols that Opaque LSAs and TLVs provide, a traffic engineering database is created that is only accessible by CSPF.

CSPF can understand that the link from R2 to R5 is 5 Mbit/s and does not satisfy the 6 Mbit/s tunnel constraint. So it will not take that path into consideration in its calculation.
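
A hedged sketch of the CSPF idea follows: prune from the traffic engineering database every link that cannot satisfy the constraint (here 6 Mbit/s of bandwidth), then run an ordinary shortest-path calculation on what is left. The bandwidth figures match the example above and the per-link cost of 15 matches the path-selection example later in this post; the data structures and function names are illustrative.

import heapq

# Traffic engineering database: (node, node) -> {"cost": ..., "bw_mbps": available bandwidth}
# Figures follow the example: R2-R5 is 5 Mbit/s, R5-R6 is 10 Mbit/s, bottom links are 6 Mbit/s.
TE_DATABASE = {
    ("R2", "R5"): {"cost": 15, "bw_mbps": 5},
    ("R5", "R6"): {"cost": 15, "bw_mbps": 10},
    ("R2", "R3"): {"cost": 15, "bw_mbps": 6},
    ("R3", "R4"): {"cost": 15, "bw_mbps": 6},
    ("R4", "R6"): {"cost": 15, "bw_mbps": 6},
}

def cspf(te_db, src, dst, required_bw):
    """Constrained SPF: drop links below required_bw, then run Dijkstra on the rest."""
    adjacency = {}
    for (a, b), attrs in te_db.items():
        if attrs["bw_mbps"] >= required_bw:      # the constraint: prune unusable links
            adjacency.setdefault(a, []).append((b, attrs["cost"]))
            adjacency.setdefault(b, []).append((a, attrs["cost"]))
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, link_cost in adjacency.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return None   # no path satisfies the constraint

print(cspf(TE_DATABASE, "R2", "R6", required_bw=6))  # (45, ['R2', 'R3', 'R4', 'R6'])
print(cspf(TE_DATABASE, "R2", "R6", required_bw=4))  # (30, ['R2', 'R5', 'R6'])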

3. If there is an appropriate path, it is signaled by RSVP. Previously used to provide Integrated Services QoS, RSVP was extended with new objects in its Path and Resv (reservation) messages to enable MPLS traffic engineering. Label information is carried within the Resv messages.

4. Once a path is signaled, traffic is put into the tunnel. This can be accomplished via many methods including static routing, policy-based routing, class-of-service-based tunnel selection (CBTS), policy-based tunnel selection (PBTS), autoroute, and forwarding adjacency. I'll discuss these methods in detail in a future post.

Path Selection In MPLS Traffic Engineering

First, let's have a look at a classic example of traffic engineering.

Path Selection
There are two paths you could take to get from Router 2 (R2) to Router 6 (R6):
  • R2-R5-R6 with the cost of 15+15=30
  • R2-R3-R4-R6 with the cost of 15+15+15=45

Since MPLS traffic engineering works only with the link-state protocols Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (IS-IS), all our examples use link-state protocols unless otherwise specified.

Link-state protocols use the Shortest Path First (SPF) or Dijkstra algorithm to calculate the route from point A to point B. In this example, they will choose the path R2-R5-R6, because the total cost is less than the cost for R2-R3-R4-R6.

The bottom path will not be used until the primary path fails, because link-state protocols traditionally don't support unequal-cost multipath load sharing, although enhancements have been proposed at the IETF to change this. Source routing and policy-based routing (PBR) can be used to force traffic onto the bottom path. However, these are complex from a configuration point of view and open to administrative mistakes.

In the above example, R5 is connected only to R6. If PBR is used, only R2 needs to be configured. For a different topology, you may need to implement PBR at each router to send the traffic through the intended path.

MPLS traffic engineering helps to send selected traffic to alternate paths, which may not be the best paths from the interior gateway protocol point of view. To accomplish this, a traffic engineering tunnel is configured at the headend to create a point-to-point traffic engineering label-switched path (LSP).

There are two approaches to creating an LSP: strategic (also called proactive) and tactical (reactive). Strategic is the systematic approach, in which a traffic matrix is identified between each ingress and egress node and a traffic engineering tunnel reservation is made based on the requirements. This is the long-term solution for an MPLS traffic engineering LSP.

Alternatively, the tactical approach can be used as a short-term solution to fix a sudden peak traffic load. The LSP can be created over a less-utilized path for a short time until the traffic issue on the primary path is resolved. As an example, a link might become congested after a major news announcement, such as Orhan Ergun's appointment as CEO of Cisco, causes a large surge in media traffic. Some LSPs over the primary link might then be shifted to less-utilized links.

Fast Reroute In MPLS Traffic Engineering

Before explaining how fast reroute is used in the context of MPLS traffic engineering, you'll need to understand the basics of fast reroute.

There are two paths between Router 2 (R2) and Router 6 (R6). If we assume that Open Shortest Path First (OSPF) is used in this topology, then based on end-to-end total link cost, the R2-R5-R6 path would be chosen. The information for the R2-R3-R4-R6 link is also kept in the OSPF link-state database table. If the R2-R5-R6 path fails, the SPF algorithm runs on every router in the same area, and R2 selects R3 as the next hop. It puts this information into the routing table, and if the router supports separated control and data planes, the routing information is distributed into a forwarding information base.

Fast Reroute
The detection of a link failure, the propagation of that information to every device in the flooding domain, and the calculation and installation of new paths into the routing and forwarding tables all take some time. Interior gateway protocol parameters for detection and propagation can be tuned, and convergence time can be reduced to even less than one second. But for some applications, such as voice, this may not be enough.

We may need traffic to be rerouted in less than 100 or 200 ms to avoid adverse effects. MPLS traffic engineering can often provide a backup path within 50 ms, because the alternate path is calculated and installed into the routing and forwarding information bases before the failure happens.

Fast Reroute

MPLS traffic engineering fast reroute is a local protection mechanism. There are two modes of local protection: link protection and node protection. If the R2-R5 link fails and we need to protect that link, we call that link protection. A backup, pre-signaled path can be created from R2 through R3 to R5, so that if the R2-R5 link fails, traffic is automatically redirected to the backup path. Because the failure is local to R2, this is called local protection.

It's also possible for R5 to fail. In this case, the R2-R3-R5 backup path will not work, so we need to bypass R5 completely. An R2-R3-R4-R6 pre-signaled path could be created for node protection purposes, because in this case we want to protect the node rather than the link.

Path protection would come into play if we had the path R1-R2-R5-R6 between R1 and R6 and we wanted to protect that path from end to end.
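
As a rough sketch of how local protection uses pre-signaled backups, the table below maps a protected resource (a link or a node) to a bypass path computed and installed in advance; on failure, the point of local repair splices traffic onto the bypass and then back onto the primary path at the merge point. The topology names follow the example; the data structures and function names are illustrative.

# Pre-signaled bypass paths, installed before any failure happens.
# Keys name the protected resource; values are the backup paths (illustrative).
BYPASS_LSPS = {
    ("link", "R2-R5"): ["R2", "R3", "R5"],          # link protection (next-hop backup)
    ("node", "R5"):    ["R2", "R3", "R4", "R6"],    # node protection (next-next-hop backup)
}

PRIMARY_PATH = ["R2", "R5", "R6"]

def reroute(primary, failed_resource):
    """Point of local repair: switch onto the pre-signaled bypass immediately,
    without waiting for the IGP to reconverge."""
    bypass = BYPASS_LSPS.get(failed_resource)
    if bypass is None:
        return None                      # no protection configured for this failure
    merge_point = bypass[-1]
    # Follow the bypass to the merge point, then continue along the primary path.
    tail = primary[primary.index(merge_point) + 1:] if merge_point in primary else []
    return bypass + tail

print(reroute(PRIMARY_PATH, ("link", "R2-R5")))  # ['R2', 'R3', 'R5', 'R6']
print(reroute(PRIMARY_PATH, ("node", "R5")))     # ['R2', 'R3', 'R4', 'R6']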

Creating label-switched paths between all the nodes in the domain can be cumbersome, so the automesh and autotunnel features can streamline path creation and protection.

MPLS Protection Schemes

There are two different ways to provide LSP protection:
  • One-to-One Protection / Detour
      • An individual backup path is fully signaled through RSVP for every LSP, at every point where protection is provided (i.e., every node).
      • The label depth remains at 1, but this can involve a huge number of reservations and can cause significant overhead.
  • Many-to-One Protection / Facility Backup
      • A single bypass LSP is created between two nodes to be protected.
      • During a failure, multiple LSPs are rerouted over the bypass LSP.

There are also different types of failures that can be protected against:
  • Link Protection / Next-Hop Backup
      • A bypass LSP is created for every possible link failure.
  • Node Protection / Next-Next-Hop Backup
      • A bypass LSP is created for every possible node (router) failure.

MPLS With No Protection

MPLS Link Protection

MPLS Node Protection

MPLS Link and Node Protection

Advantages of Label Switching

  • Cost Savings- Depending on the specific mix of applications, and network configuration, MPLS-based services can reduce costs by 10% to 25% over comparable data services (frame relay and ATM). As companies add voice and video traffic, cost savings can rise to as much as 40% networkwide.
  • QoS Enablement- One of the primary benefits of MPLS-based services is the ability to support QoS, which is particularly key for companies that are rolling out voice and video.
  • Improved Performance- Because of the any-to-any nature of MPLS services, network designers can reduce the number of “hops” between network points, which translates directly to reduced response times and improved application performance.
  • Disaster Recovery- MPLS-based services improve disaster recovery in a variety of ways. First and foremost, data centers and other key sites can be connected in multiply redundant ways to the cloud (and thus to other sites on the network). Secondly, remote sites can quickly and easily reconnect to backup locations if needed (unlike with ATM and frame networks, in which either switched or backup permanent-virtual-circuits are required). That’s why several benchmark participants listed “flexibility for business recovery” as a key justifier behind their MPLS rollouts.
  • Futureproofing the Network- Most companies have come to the conclusion that MPLS represents “the wave of the future.” Investment in legacy WAN services (ATM, frame) has pretty much come to a standstill: Virtually no companies plan to invest in ATM or frame services within the next six to 12 months. As a result, companies increasingly say they’re planning to migrate to MPLS primarily to avoid being left behind.

Downsides of MPLS

• One major drawback is that MPLS hides the suboptimal internal topology from BGP, which matters where multiple exits exist for the same route.

For example:

• Say you peer with a major network in San Jose and Los Angeles.

• Traffic coming from Chicago would normally go directly to San Jose.

• But because of a capacity issue, the LSP is forced to go via Los Angeles first.

• In an IP network, the packet would probably be diverted to the local Los Angeles peer as it passes through Los Angeles.

• But MPLS hides the suboptimal topology: the packet will continue to San Jose, because that's what Chicago saw as the best exit.

• This can be a good or a bad thing depending on your goals.

MPLS and Traceroute

MPLS can also let you hide traceroute hops.
• Since you aren't actually doing IP forwarding, there is no need to decrement the IP TTL field as you forward the packet over MPLS.

• And if you don’t, the LSP shows up as a single hop in traceroute.

• Some networks prefer this behavior, as it hides the internals of their network, and makes for shorter / prettier traceroutes.

• Some networks also run MPLS-only cores, which carry no IP routes.

• This presents a problem: if they did want to show the hops in traceroute, a core router can't do IP routing to return the ICMP TTL Exceeded message.

• To solve this problem, an “icmp tunneling” feature was implemented.

• If an ICMP message is generated inside an LSP, the ICMP message is carried all the way to the end of the LSP before being routed back.

• This can make traceroute look really weird, since you see all the hops along the LSP, but they all appear to have the same latency as the final hop. This causes much end-user confusion.
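
A small sketch of the TTL decision behind this traceroute behavior: at label imposition the ingress either copies the (decremented) IP TTL into the MPLS TTL, so every LSR appears in traceroute, or writes a large fixed value such as 255, so the whole LSP looks like a single hop. The function and parameter names are illustrative, not any vendor's configuration syntax.

def impose_label(ip_ttl, propagate_ttl=True):
    """Return (new_ip_ttl, mpls_ttl) when the ingress LSR pushes a label."""
    new_ip_ttl = ip_ttl - 1
    if propagate_ttl:
        mpls_ttl = new_ip_ttl      # LSP hops decrement this and show up in traceroute
    else:
        mpls_ttl = 255             # LSP hops stay hidden; the LSP looks like one hop
    return new_ip_ttl, mpls_ttl

print(impose_label(ip_ttl=5, propagate_ttl=True))    # (4, 4)   core hops visible
print(impose_label(ip_ttl=5, propagate_ttl=False))   # (4, 255) core hops hidden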

Comparisons

MPLS can make use of existing ATM network or Frame Relay infrastructure, as its labeled flows can be mapped to ATM or Frame Relay virtual-circuit identifiers, and vice versa.

  • With Frame Relay

Frame Relay aimed to make more efficient use of existing physical resources, which allow for the underprovisioning of data services by telecommunications companies (telcos) to their customers, as clients were unlikely to be utilizing a data service 100 percent of the time. In more recent years, Frame Relay has acquired a bad reputation in some markets because of excessive bandwidth overbooking by these telcos.

Telcos often sell Frame Relay to businesses looking for a cheaper alternative to dedicated lines; its use in different geographic areas depended greatly on governmental and telecommunication companies' policies.

Many customers are likely to migrate from Frame Relay to MPLS over IP or Ethernet within the next two years, which in many cases will reduce costs and improve manageability and performance of their wide area networks.
  • With ATM

While the underlying protocols and technologies are different, both MPLS and ATM provide a connection-oriented service for transporting data across computer networks. In both technologies, connections are signaled between endpoints, connection state is maintained at each node in the path, and encapsulation techniques are used to carry data across the connection. Excluding differences in the signaling protocols (RSVP/LDP for MPLS and PNNI, the Private Network-to-Network Interface, for ATM), there still remain significant differences in the behavior of the technologies.

The most significant difference is in the transport and encapsulation methods. MPLS is able to work with variable length packets while ATM transports fixed-length (53 byte) cells. Packets must be segmented, transported and re-assembled over an ATM network using an adaptation layer, which adds significant complexity and overhead to the data stream. MPLS, on the other hand, simply adds a label to the head of each packet and transmits it on the network.

Differences exist, as well, in the nature of the connections. An MPLS connection (LSP) is unidirectional, allowing data to flow in only one direction between two endpoints. Establishing two-way communication between endpoints requires a pair of LSPs. Because two LSPs are required for connectivity, data flowing in the forward direction may use a different path from data flowing in the reverse direction. ATM point-to-point connections (virtual circuits), on the other hand, are bidirectional, allowing data to flow in both directions over the same path (both SVC and PVC ATM connections are bidirectional; see ITU-T I.150, section 3.1.3.1).

Both ATM and MPLS support tunneling of connections inside connections. MPLS uses label stacking to accomplish this, while ATM uses virtual paths. MPLS can stack multiple labels to form tunnels within tunnels. The ATM virtual path identifier (VPI) and virtual channel identifier (VCI) are both carried together in the cell header, limiting ATM to a single level of tunneling.
The biggest advantage that MPLS has over ATM is that it was designed from the start to be complementary to IP. Modern routers are able to support both MPLS and IP natively across a common interface allowing network operators great flexibility in network design and operation. ATM's incompatibilities with IP require complex adaptation, making it comparatively less suitable for today's predominantly IP networks.

Evolution

MPLS was originally proposed to allow high-performance traffic forwarding and traffic engineering in IP networks. However, it evolved into Generalized MPLS (GMPLS), which allows the creation of label-switched paths (LSPs) in non-native IP networks as well, such as SONET/SDH networks and wavelength-switched optical networks.

Competitors

MPLS can exist in both an IPv4 and an IPv6 environment (using appropriate routing protocols). The major goal of MPLS development was to increase routing speed. That goal is less relevant now that routers forward plain IP packets in hardware at comparable speeds; today, the main application of MPLS is to implement limited traffic engineering and Layer 3 / Layer 2 “service provider type” VPNs over IPv4 networks.

Besides GMPLS, the main competitors to MPLS are Shortest Path Bridging (SPB), Provider Backbone Bridges (PBB), and MPLS-TP. These also provide services such as service provider layer 2 and layer 3 VPNs. L2TPv3 has been suggested as a competitor, but has not reached any wider success.

Types

As far as we know, these are the MPLS types:
  • MPLS VPN
  • LV1 MPLS
  • LV2 MPLS

