MPLS Core network configuration

Introduction
Christmas break – ah, with time on my hands between catching up with family, thinking about new year resolutions and feasting, it is time to write the Multiprotocol Label Switching (mpls) article I have been planning for this month.

Cisco MPLS configuration is best understood by separating the configuration tasks into
– the core of the network
– the edge of the network.

This article describes an mpls configuration for the core devices. I will describe how to configure the (mpls) edge network in the next post.

Let’s start with a recap – ‘Why mpls?’.

Often there is a requirement to accommodate overlapping address space and to enforce route table containment in ip networks (usually for security reasons).
To achieve this, engineers create vrfs (virtual routing and forwarding instances) to contain and virtualise the routing instances.

For very small networks with a few routers, Cisco offers a “vrf lite” configuration to provide this functionality. The drawback of vrf lite is that it requires manual configuration at every router hosting the vrf, which makes it operationally inefficient and non-scalable for most enterprise-wide networks.
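To illustrate the per-router effort: a vrf lite setup means defining the vrf and binding interfaces to it on every router in the path. A sketch, with placeholder vrf name, rd and addressing:

ip vrf BLUE
 rd 65000:1
!
interface FastEthernet0/0
 ip vrf forwarding BLUE
 ip address 192.168.1.1 255.255.255.0

Repeat that (plus per-vrf routing) on each router carrying the vrf and the operational overhead becomes clear.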

Network designers faced with a business requirement to support different groups of users or business applications, while overcoming addressing overlaps and route containment issues or providing multi-tenancy, can turn to mpls as a solution.

MPLS is also extensively deployed in the Service Provider space to support addressing overlaps and route domain segregation for customers.
MPLS deployments typically provide L2 vpls, mpls traffic engineering and L3 vpns.

By far the most common implementation seen in the enterprise space is mpls delivering L3 VPNs to support vrfs within the organisation.

MPLS building blocks
MPLS came out of Cisco tag switching and has since been ratified as an IETF standard – the MPLS architecture is defined in RFC 3031, and the BGP/MPLS L3 vpns built on top of it in RFC 4364.
MPLS is an encapsulation technology where an extra (shim) header is inserted between the Layer 2 header and the IP header, as depicted in the figure below.

MPLS forwards traffic by switching on labels rather than by referencing L3 route tables, which take longer to parse and look up as they grow.
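For reference, the label itself is a 32-bit shim with the following fields (standard mpls label encoding):

Label (20 bits) | EXP / Traffic Class (3 bits) | S, bottom-of-stack (1 bit) | TTL (8 bits)

Labels can be stacked, which is how an L3 vpn carries a transport label for the core plus a vpn label identifying the customer route.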

To understand mpls configuration we must first review some mpls terminology and device roles.
MPLS classifies routers into Label Switch Routers (LSRs) and Label Edge Routers (LERs).

Engineers generally refer to LSRs as P routers and LERs as PE routers.

The function of a P (or provider) router is to run the label distribution protocol and efficiently switch labelled frames as they are received.
The function of a PE (or provider edge) router is to serve as the “gateway” into an MPLS network.

A PE router
– connects to the customer (CE) router and exchanges routes
– maps routes to labels
– adds a label when sending a frame to a P router and strips the label when sending a frame to the CE router

P routers
– behave as label switches, i.e. efficiently forward traffic based on the label markings

Configuring mpls
Here is a diagram of the mpls core network.

Configuring an mpls core is straightforward.
– enable reachability between all P and PE routers by configuring an igp (route domain)
– enable label distribution protocol (ldp) on P and PE routers
– enable bgp on PE routers by including all PE routers (which need to exchange routes with each other) in the same bgp AS

For brevity, I have assumed proficiency with Cisco igp and bgp configuration.
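Still, as an orientation, here is the skeleton of those three building blocks on a single PE router. The ospf process number, bgp AS number and interface name mirror the lab below; the loopback and peer addresses are placeholders.

! 1. igp reachability between P and PE routers
router ospf 1
 router-id <loopback0 address>
 ! plus network statements for the core links and loopback
! 2. ldp on every core-facing interface
interface FastEthernet1/0
 mpls ip
! 3. ibgp between the PE loopbacks
router bgp 100
 neighbor <peer loopback> remote-as 100
 neighbor <peer loopback> update-source Loopback0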

IGP
An mpls core must run an interior routing protocol so that all P and PE routers can talk to each other.

I will use ospf as the IGP to establish routes between all the P and PE routers shown.

There is nothing fancy, just standard ospf configuration.
All P and PE routers are part of area 0.

Reminder – use an ospf router-id to make it easy to read output when you troubleshoot the network.
On this network I am making the router-id reflect the configured loopback0 ip address.
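The ospf configuration itself is not shown in the captures, but based on the addresses visible on P1 later in this post, it would look roughly like this (the /24 masks on the core links are an assumption):

interface Loopback0
 ip address 1.1.1.1 255.255.255.255
!
router ospf 1
 router-id 1.1.1.1
 network 1.1.1.1 0.0.0.0 area 0
 network 10.0.14.0 0.0.0.255 area 0
 network 11.0.12.0 0.0.0.255 area 0

With that in place on every router, the adjacencies come up as shown below.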

PE4#sh ip ospf neigh

Neighbor ID Pri State Dead Time Address Interface
1.1.1.1 1 FULL/BDR 00:00:35 10.0.14.1 FastEthernet1/0

P1#sh ip ospf neigh

Neighbor ID Pri State Dead Time Address Interface
4.4.4.4 1 FULL/DR 00:00:36 10.0.14.2 FastEthernet1/0
2.2.2.2 1 FULL/DR 00:00:35 11.0.12.2 FastEthernet0/0

P2#sh ip ospf neigh

Neighbor ID Pri State Dead Time Address Interface
5.5.5.5 1 FULL/DR 00:00:37 10.0.25.2 FastEthernet1/1
3.3.3.3 1 FULL/DR 00:00:35 11.0.32.1 FastEthernet1/0
1.1.1.1 1 FULL/BDR 00:00:33 11.0.12.1 FastEthernet0/0

On completing the ospf configuration, run ping checks to verify end-to-end reachability between the PE routers.

PE6(config-if)#do ping 5.5.5.5 source 6.6.6.6
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 5.5.5.5, timeout is 2 seconds:
Packet sent with a source address of 6.6.6.6
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 46/199/330 ms

PE6(config-if)#do ping 4.4.4.4 source 6.6.6.6
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 4.4.4.4, timeout is 2 seconds:
Packet sent with a source address of 6.6.6.6
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 52/220/329 ms

Enable ldp
The command to enable ldp is an interface command. This makes sense because label imposition, and the exchange of labels with the neighbour, happen per link.
On each interface connecting P-to-P and P-to-PE routers, run the ‘mpls ip’ command.

e.g. enabling mpls on P2 and PE5:

P2(config-if)#int f1/1
P2(config-if)#mpls ip
PE5(config-if)#int f1/1
PE5(config-if)#mpls ip
*Dec 28 12:06:45.355: %LDP-5-NBRCHG: LDP Neighbor 2.2.2.2:0 (1) is UP
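One optional tidy-up, assumed here rather than shown in the original captures: pin the ldp router id to loopback0 (a global configuration command) so the ldp identifiers in the outputs below line up with the ospf and bgp router ids.

mpls ldp router-id Loopback0 force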

Once the ldp configuration is completed, check that ldp sessions have established between P-to-P and P-to-PE routers.

e.g. on the P2 router

P2#sh mpls int
Interface IP Tunnel BGP Static Operational
FastEthernet0/0 Yes (ldp) No No No Yes
FastEthernet1/0 Yes (ldp) No No No Yes
FastEthernet1/1 Yes (ldp) No No No Yes

e.g. on a PE router

PE4#sh mpls int
Interface IP Tunnel BGP Static Operational
FastEthernet1/0 Yes (ldp) No No No Yes

You can safely ignore the Tunnel column showing ‘No’, as that feature applies to mpls traffic engineering rather than an L3 vpn setup.

Enable bgp on PE routers that need to share routes
The final step to complete the mpls core is to run bgp on all PE routers that share routing information.
Again, nothing fancy, just the standard bgp setup.

To make bgp output easier to read, it makes sense to define the bgp router-id.

For example, on PE6:

PE6#sh run | s bgp
router bgp 100
bgp router-id 6.6.6.6
bgp log-neighbor-changes
neighbor 4.4.4.4 remote-as 100
neighbor 4.4.4.4 update-source Loopback0
neighbor 5.5.5.5 remote-as 100
neighbor 5.5.5.5 update-source Loopback0
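The other PE routers mirror this. PE4’s bgp stanza is not shown, but from the bgp summaries below it would look something like:

router bgp 100
 bgp router-id 4.4.4.4
 bgp log-neighbor-changes
 neighbor 5.5.5.5 remote-as 100
 neighbor 5.5.5.5 update-source Loopback0
 neighbor 6.6.6.6 remote-as 100
 neighbor 6.6.6.6 update-source Loopback0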

Here’s the usual output for verifying bgp.

PE5(config-router)#do sh ip bgp summ
BGP router identifier 5.5.5.5, local AS number 100
BGP table version is 1, main routing table version 1
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd
4.4.4.4 4 100 19 19 1 0 0 00:14:16 0
6.6.6.6 4 100 17 15 1 0 0 00:12:21 0

PE6#sh ip bgp summ
BGP router identifier 6.6.6.6, local AS number 100
BGP table version is 1, main routing table version 1
Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd
4.4.4.4 4 100 14 13 1 0 0 00:10:59 0
5.5.5.5 4 100 14 16 1 0 0 00:11:22 0

Checking integrity of your mpls core
Run the ‘show mpls ldp neigh’ command to confirm ldp peer discovery and session establishment between P-to-P and P-to-PE routers in the mpls core.

Neighbors are discovered via hellos on udp port 646 and, once discovered, the ldp session runs over tcp port 646.

P1#sh mpls ldp neigh
Peer LDP Ident: 2.2.2.2:0; Local LDP Ident 1.1.1.1:0
TCP connection: 2.2.2.2.29292 - 1.1.1.1.646
State: Oper; Msgs sent/rcvd: 41/42; Downstream
Up time: 00:24:35
LDP discovery sources:
FastEthernet0/0, Src IP addr: 11.0.12.2
Addresses bound to peer LDP Ident:
11.0.12.2 11.0.32.2 10.0.25.1 2.2.2.2
Peer LDP Ident: 4.4.4.4:0; Local LDP Ident 1.1.1.1:0
TCP connection: 4.4.4.4.27115 - 1.1.1.1.646
State: Oper; Msgs sent/rcvd: 14/14; Downstream
Up time: 00:00:30
LDP discovery sources:
FastEthernet1/0, Src IP addr: 10.0.14.2
Addresses bound to peer LDP Ident:
4.4.4.4 10.0.14.2

P2#sh mpls ldp neigh
Peer LDP Ident: 1.1.1.1:0; Local LDP Ident 2.2.2.2:0
TCP connection: 1.1.1.1.646 - 2.2.2.2.29292
State: Oper; Msgs sent/rcvd: 43/42; Downstream
Up time: 00:26:07
LDP discovery sources:
FastEthernet0/0, Src IP addr: 11.0.12.1
Addresses bound to peer LDP Ident:
11.0.12.1 10.0.14.1 1.1.1.1
Peer LDP Ident: 3.3.3.3:0; Local LDP Ident 2.2.2.2:0
TCP connection: 3.3.3.3.33454 - 2.2.2.2.646
State: Oper; Msgs sent/rcvd: 41/41; Downstream
Up time: 00:24:24
LDP discovery sources:
FastEthernet1/0, Src IP addr: 11.0.32.1
Addresses bound to peer LDP Ident:
10.0.36.1 11.0.32.1 3.3.3.3
Peer LDP Ident: 5.5.5.5:0; Local LDP Ident 2.2.2.2:0
TCP connection: 5.5.5.5.18144 - 2.2.2.2.646
State: Oper; Msgs sent/rcvd: 22/22; Downstream
Up time: 00:07:23
LDP discovery sources:
FastEthernet1/1, Src IP addr: 10.0.25.2
Addresses bound to peer LDP Ident:
10.0.25.2 5.5.5.5

PE4#sh mpls ldp neigh
Peer LDP Ident: 1.1.1.1:0; Local LDP Ident 4.4.4.4:0
TCP connection: 1.1.1.1.646 - 4.4.4.4.27115
State: Oper; Msgs sent/rcvd: 15/15; Downstream
Up time: 00:01:32
LDP discovery sources:
FastEthernet1/0, Src IP addr: 10.0.14.1
Addresses bound to peer LDP Ident:
11.0.12.1 10.0.14.1 1.1.1.1
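If an ldp session fails to come up, two further checks worth knowing (outputs not shown here) are:

show mpls ldp discovery
show mpls forwarding-table

The first lists the interfaces sending and receiving ldp hellos; the second confirms that labels are actually being allocated and installed for the igp prefixes. With the igp, ldp and PE-to-PE bgp sessions all verified, the mpls core is ready; the next post covers the vrfs and the mpls edge configuration.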