Saturday, June 27, 2009

QoS - Hardware Queue


Here I look at QoS starting from the ground up.

First, consider traffic in the outbound direction. Each interface has a hardware queue, also known as the tx-ring or transmit ring. This queue is always serviced FIFO.

The size of this queue can be viewed as follows:

Router_1#show controllers fa0/0 | inc tx_lim
tx_limited=0(256)

In this example the default size is 256 packets. This can be adjusted; below I reduce the size to 50 packets.

config-if#tx-ring-limit 50

If the hardware queue becomes full then the output software queue is used to buffer traffic. Queueing mechanisms such as PQ, CBWFQ and CQ adjust the logic used to empty this software queue.
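For example, attaching an MQC policy with 'service-policy output' replaces the default FIFO draining of the software queue with CBWFQ/LLQ. This is only an illustrative sketch - the class-map name, match criteria and bandwidth values below are made up and not part of this lab:

class-map match-all VOICE
 match ip dscp ef
!
policy-map WAN_OUT
 class VOICE
  priority 128
 class class-default
  fair-queue
!
interface FastEthernet0/0
 service-policy output WAN_OUT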

The size of the software queue can be seen using the standard show interface command. By default it has a size of 40 packets.

Router_1#show int fa0/0 | inc Output queue
Output queue: 0/40 (size/max)


The size of the queue can be adjusted using the following command:

config-if#hold-queue 20 out

N.B. The hold-queue size applies when default FIFO queueing is in use on the interface. When other queuing methods are in use this command does not apply and the software queue sizes are set by the relevant queuing commands.


Now Input queueing....

Packets in the inbound direction are handled immediately by the interface driver, router CPU, etc. If buffering is needed due to high throughput or router load, the input queue is used.

The size of this queue is 75 packets by default and this can be viewed using the show interface command.

Router_1#show int fa0/0 | inc Input queue
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0


This can be adjusted as follows

config-if#hold-queue 20 in

Saturday, June 20, 2009

PIM - SSM

PIM SSM, or Source Specific Multicast, unlike PIM BiDir, does NOT require or use the PIM shared tree. No RPs are required, and RP discovery protocols such as Auto-RP or BSR are not needed. With SSM the SPT is always used.

Like BiDir, PIM SSM configuration is pretty straightforward.

The range of multicast groups that use SSM signalling must be specified on all routers in the mcast domain.

To enable PIM SSM on the default range of 232.0.0.0/8:

#conf t
config#ip pim ssm range default
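If the lab instead calls for a non-default SSM range, an ACL can be referenced - a sketch, where the ACL number and group range below are purely illustrative:

config#access-list 10 permit 239.232.0.0 0.0.255.255
config#ip pim ssm range 10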


Note that for groups in the SSM range no shared trees are allowed, and any (*,G) joins will be dropped.

The final step is to enable IGMP version 3 on the receiver-facing interfaces.


For SW1 to join 232.8.8.8 for source 150.1.5.5:

config-if#ip igmp version 3
config-if#ip igmp join-group 232.8.8.8 source 150.1.5.5

PIM - BIDIR


Bidirectional PIM can be used when most receivers of mcast traffic are also senders at the same time. It is an extension to PIM sparse mode that only uses the shared tree for multicast distribution. Packets flow to and from the RP only.

It is relatively easy to configure, although the BiDir configuration example on the Cisco web site doesn't quite give the full picture, as it only shows configuration on a single router.

BiDir PIM must be enabled on all multicast routers, and the relevant multicast groups need to be configured as BiDir. This can be done using static RP, Auto-RP or BSR.

I use the simple router topology SW1 ----- R3 ------ R5

On each router I enable BiDir PIM.

conf t
ip pim bidir-enable


The RP (R5 in this case) must specify which bidir groups it services.

ip access-list st 45
permit 238.0.0.0 0.255.255.255


For BSR
ip pim rp-candidate lo0 group-list 45 bidir

For AUTO-RP
ip pim send-rp-announce lo0 scope 16 group-list 45 bidir

For static RP
ip pim rp-address 150.1.5.5 45 bidir

On SW1 I join the bidir mcast group 238.0.0.1:

conf t
int fa0/0
ip igmp join-group 238.0.0.1


On R3 I examine the mroute table:

Router_3#s ip mroute 238.0.0.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 238.0.0.1), 00:16:10/00:02:22, RP 150.1.5.5, flags: BC
Bidir-Upstream: Serial2/0, RPF nbr 155.1.0.5
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:15:52/00:02:34
Serial2/0, Bidir-Upstream/Sparse, 00:16:10/00:00:00


From R5 I verify the solution with a ping to multicast group 238.0.0.1:

Router_5#ping 238.0.0.1

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 238.0.0.1, timeout is 2 seconds:

Reply to request 0 from 155.1.0.3, 88 ms
Reply to request 0 from 155.1.37.7, 228 ms
Reply to request 0 from 155.1.0.3, 88 ms

Monday, June 15, 2009

Multicast - Rate Limiting

SW1 -------- R3 ---------- R5

Here I make use of the multicast rate-limiting function on R3 to control the amount of multicast traffic allowed to reach SW1.

First SW1 joins multicast groups 225.0.0.1 and 225.0.0.3

SW1
conf t
int fa0/0
ip igmp join-group 225.0.0.1
ip igmp join-group 225.0.0.3



On R3 the requirement is to limit the mcast traffic to 225.0.0.1 to 1 kbps and to 225.0.0.3 to 3 kbps. The aggregate multicast traffic rate must not exceed 5 kbps.

This requirement can be achieved via the multicast rate-limit function. Multiple rate-limit statements can be applied to an interface and they are processed in a linear, top-down fashion, so careful consideration must be given to the order in which the statements are applied (see the mis-ordered example after the configuration below).

First I use ACLs to define the mcast groups:

R3
ip access-list standard GROUP_1
permit 225.0.0.1
ip access-list standard GROUP_3
permit 225.0.0.3


Then I apply the rate-limit statements:

interface FastEthernet0/0
ip multicast rate-limit out group-list GROUP_1 1
ip multicast rate-limit out group-list GROUP_3 3
ip multicast rate-limit out 5
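Order matters here. Had the aggregate statement been listed first it would match every group, so the per-group 1 kbps and 3 kbps limits would never be evaluated. A hypothetical mis-ordering, shown only for contrast:

interface FastEthernet0/0
 ip multicast rate-limit out 5
 ip multicast rate-limit out group-list GROUP_1 1
 ip multicast rate-limit out group-list GROUP_3 3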


After applying the configuration, the mcast routes can be viewed and the bandwidth limits are shown:

Router_3#s ip mroute 225.0.0.1
(*, 225.0.0.1), 00:13:28/00:02:33, RP 150.1.5.5, flags: SJC
Incoming interface: Serial2/0, RPF nbr 155.1.0.5
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:13:28/00:02:51, limit 1 kbps

Router_3#s ip mroute 225.0.0.3
(*, 225.0.0.3), 00:07:34/00:02:51, RP 150.1.5.5, flags: SJC
Incoming interface: Serial2/0, RPF nbr 155.1.0.5
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:07:34/00:02:51, limit 3 kbps


I then test the rate limiting via ping tests from R5. First I try a ping whose rate conforms to the bandwidth limit.

Router_5#pin 225.0.0.1 size 100 repeat 2

Type escape sequence to abort.
Sending 2, 100-byte ICMP Echos to 225.0.0.1, timeout is 2 seconds:

Reply to request 0 from 155.1.37.7, 184 ms
Reply to request 0 from 155.1.37.7, 184 ms
Reply to request 1 from 155.1.37.7, 212 ms
Reply to request 1 from 155.1.37.7, 212 ms


Now I try a ping with an increased data size that exceeds the 1 kbps limit and, as expected, the traffic is dropped.

Router_5#pin 225.0.0.1 size 200 repeat 2

Type escape sequence to abort.
Sending 2, 200-byte ICMP Echos to 225.0.0.1, timeout is 2 seconds:
..



I repeat the above test on the mcast group 225.0.0.3, which has a higher bandwidth limit of 3 kbps.

Router_5#pin 225.0.0.3 size 360 repeat 2

Type escape sequence to abort.
Sending 2, 360-byte ICMP Echos to 225.0.0.3, timeout is 2 seconds:

Reply to request 0 from 155.1.37.7, 208 ms
Reply to request 0 from 155.1.37.7, 212 ms
Reply to request 1 from 155.1.37.7, 268 ms
Reply to request 1 from 155.1.37.7, 268 ms
Router_5#pin 225.0.0.3 size 400 repeat 2

Type escape sequence to abort.
Sending 2, 400-byte ICMP Echos to 225.0.0.3, timeout is 2 seconds:
..
Router_5#

Sunday, June 14, 2009

PIM - IGMP GROUP LEAVE TIMERS


If a host on the LAN leaves a group it sends an IGMP leave message (assuming IGMP version 2). Upon receipt the elected IGMP querier sends out an IGMP last-member group query to ascertain whether there are still other hosts on the LAN segment that are members of the group. If no hosts reply after the query has been repeated the configured number of times, the IGMP querier removes the (*,G) mroute.

The timers/counters involved in this exchange are

ip igmp last-member-query-count (default 2)

ip igmp last-member-query-interval (default 1000ms)
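To reduce leave latency these can be tuned per interface, for example (the values here are purely illustrative):

config-if#ip igmp last-member-query-count 1
config-if#ip igmp last-member-query-interval 100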

PIM - IGMP QUERIER TIMEOUT


The router elected as IGMP querier sends out an IGMP query every configured interval. If the non-querier routers on the same LAN segment do not hear any IGMP queries within the querier timeout, they will attempt to take over the IGMP querier role.

The timers used in this exchange are as follows

ip igmp query-interval (default 60 seconds)

ip igmp querier-timeout (default 120 seconds, i.e. 2 x the ip igmp query-interval)
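Both are interface-level commands, for example (values illustrative - they happen to match the 20/40-second timers seen in the 'show ip igmp interface' output further down this page):

config-if#ip igmp query-interval 20
config-if#ip igmp querier-timeout 40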

PIM - IGMP GROUP QUERY TIMERS



The IGMP querier sends out an IGMP group membership query to check group membership on the connected LAN segment. Normally an IGMP membership report will be received in response. If no report is received then the (*,G) mroute is removed.

In this message exchange the following timers can be influenced.

ip igmp query-interval (default 60 seconds)

ip igmp query-max-response-time (default 10 seconds).
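The query interval can be tuned as shown above; the max response time advertised in the query is also set per interface, for example (value illustrative):

config-if#ip igmp query-max-response-time 4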

Saturday, June 13, 2009

PIM/IGMP Elections


On a shared LAN segment, amongst the PIM-enabled routers, a selected router must assume the responsibility for i) sending any PIM register/prune messages to the RP and for ii) sending IGMP query messages.

I was until recently under the misunderstanding that the PIM DR router performed both of these functions - wrong!! These functions are completely decoupled and in fact they have different election processes and selection criteria.

First the Querier Election Process.
At start-up each router sends a query message to the all-systems group 224.0.0.1 from its own interface address. The router with the lowest IP address is elected IGMP querier.

Second the PIM DR Election Process
The router with the highest IP address is elected as PIM DR. This selection can also be influenced by configuring a PIM DR priority: by default all routers have priority 1, hence the highest IP address wins; however, if DR priorities are configured then the highest DR priority wins.

The 'show ip igmp interface' command can be used to show the elected DR and querier. Here 155.1.148.1 is elected querier (lowest IP address on the LAN segment) and 155.1.148.6 is elected DR (highest IP address on the LAN segment).

Router_1(config)#do s ip igmp int fa0/0
FastEthernet0/0 is up, line protocol is up
Internet address is 155.1.148.1/24
IGMP is enabled on interface
Current IGMP host version is 2
Current IGMP router version is 2
IGMP query interval is 20 seconds
IGMP querier timeout is 40 seconds
IGMP max query response time is 4 seconds
Last member query count is 2
Last member query response interval is 1000 ms
Inbound IGMP access group is not set
IGMP activity: 0 joins, 0 leaves
Multicast routing is enabled on interface
Multicast TTL threshold is 0
Multicast designated router (DR) is 155.1.148.6
IGMP querying router is 155.1.148.1 (this system)
No multicast groups joined by this system

Friday, June 12, 2009

PIM - BSR load balancing

With BSR, if multiple RPs are defined to service the same multicast groups then the groups are load-balanced amongst these RPs. This is done using a hash algorithm based on the hash mask length advertised by the BSR; the longer the hash mask length, the more granular (effectively more random) the group-to-RP assignment.

Based on the following config, the RP assignment can be examined on the routers.


R1
ip pim rp-candidate Lo0

R3
ip pim rp-candidate Lo0

R5
ip pim bsr-candidate Lo0 32


Here I examine the RP for groups 238.1.1.1 and 237.1.1.1. The hash has assigned R3 (150.1.9.9) as the RP for 238.1.1.1 and R1 (150.1.7.7) as the RP for 237.1.1.1.


R1#show ip pim rp-hash 238.1.1.1
RP 150.1.9.9 (?), v2
Info source: 150.1.5.5 (?), via bootstrap, priority 0, holdtime 150
Uptime: 00:00:48, expires: 00:01:46
PIMv2 Hash Value (mask 255.255.255.255)
RP 150.1.7.7, via bootstrap, priority 0, hash value 377749190
RP 150.1.9.9, via bootstrap, priority 0, hash value 1884030652
R1#show ip pim rp-hash 237.1.1.1
RP 150.1.7.7 (?), v2
Info source: 150.1.5.5 (?), via bootstrap, priority 0, holdtime 150
Uptime: 00:01:12, expires: 00:01:39
PIMv2 Hash Value (mask 255.255.255.255)
RP 150.1.7.7, via bootstrap, priority 0, hash value 1501822662
RP 150.1.9.9, via bootstrap, priority 0, hash value 860620476

PIM - BSR


PIM BSR (Bootstrap Router) - the basics

The BSR mechanism is a non-proprietary method of defining RPs that can be used with third-party routers. No configuration is necessary on every router separately (except on candidate-BSRs and candidate-RPs). The candidate-RPs are analogous to Auto-RP candidate RPs and the candidate-BSRs are analogous to the Auto-RP mapping agent.

These can be defined as follows.

R1
ip pim rp-candidate Loopback0

R3
ip pim rp-candidate Loopback0

R5
ip pim bsr-candidate Loopback0 31


Router_5#show ip pim rp-hash 224.1.1.1
RP 150.1.7.7 (?), v2
Info source: 155.1.37.7 (?), via bootstrap, priority 0, holdtime 150
Uptime: 00:15:07, expires: 00:02:21
PIMv2 Hash Value (mask 255.255.255.254)
RP 150.1.7.7, via bootstrap, priority 0, hash value 1852227743
RP 150.1.9.9, via bootstrap, priority 0, hash value 800581801

Thursday, June 11, 2009

PIM - Multicast Boundary

The multicast boundary feature, when used with a standard ACL, can filter (S,G) and (*,G) join messages towards the RP as well as mcast traffic destined to a multicast group. Note it does not filter PIM register messages, as these are sent as unicast from the PIM DR to the PIM RP.

Consider the following configuration

access-list 5 deny 232.0.0.0 7.255.255.255
access-list 5 permit 224.0.0.0 15.255.255.255

int fa0/0
ip multicast boundary 5 filter-autorp



This configuration filters multicast traffic for the range 232.0.0.0/5. This includes any traffic in this range plus any advertised group ranges that overlap with it.

The addition of the filter-autorp keyword ensures the filtering is applied to Auto-RP announcements as well as multicast traffic.

For example the downstream switch SW2 was receiving announcements for the following groups.

SW2#s ip pim rp map
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/5
RP 150.1.7.7 (?), v2v1
Info source: 150.1.1.1 (?), elected via Auto-RP
Uptime: 00:19:45, expires: 00:02:33
Group(s) 224.0.0.0/4
RP 150.1.9.9 (?), v2v1
Info source: 150.1.1.1 (?), elected via Auto-RP
Uptime: 00:19:15, expires: 00:01:34
Group(s) (-)224.50.50.50/32
RP 150.1.9.9 (?), v2v1
Info source: 150.1.1.1 (?), elected via Auto-RP
Uptime: 00:19:15, expires: 00:02:36
Group(s) 232.0.0.0/5
RP 150.1.9.9 (?), v2v1
Info source: 150.1.1.1 (?), elected via Auto-RP
Uptime: 00:19:15, expires: 00:01:37


After the multicast boundary statement is applied on the upstream neighbor, the RP announcements for 232.0.0.0/5 and 224.0.0.0/4 are both removed.

SW2#s ip pim rp map
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/5
RP 150.1.7.7 (?), v2v1
Info source: 150.1.1.1 (?), elected via Auto-RP
Uptime: 00:23:58, expires: 00:02:20
Group(s) (-)224.50.50.50/32
RP 150.1.9.9 (?), v2v1
Info source: 150.1.1.1 (?), elected via Auto-RP

Wednesday, June 10, 2009

PIM - MA Placement in a Frame Relay network




When placing the mapping agent in a frame relay hub-and-spoke environment, always aim to locate it at the hub or behind it.

PIM by default assumes NBMA interfaces are broadcast capable. However, when a spoke sends a multicast packet the hub will not replicate it to the other spokes, obeying the split-horizon rule. This can partly be solved by placing the 'ip pim nbma-mode' command on the hub. It is worth noting, however, that this only fixes sparse-mode traffic; dense-mode traffic will NOT be replicated.
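A minimal sketch of enabling this on the hub (it assumes PIM sparse or sparse-dense mode is already configured on the interface; the interface name is taken from the hub output later in this post but treat the snippet as illustrative):

interface Serial2/0
 ip pim nbma-mode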

Hence this poses a problem for Auto-RP information, which is flooded in dense mode to mcast groups 224.0.1.39 and 224.0.1.40.

If the RP and mapping agent are placed on a spoke then Auto-RP messages will only reach the hub node. If the mapping agent is on the hub then RPs can be located on the spokes, as long as the announcements reach the hub.

There are a couple of resolutions to this problem: first, use sub-interfaces on the hub (see the sketch below); second, create multicast-enabled tunnels between the spokes.
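A sketch of the sub-interface option on the hub - the DLCIs and addressing below are made up for illustration; each spoke sits on its own point-to-point sub-interface so split horizon no longer applies:

interface Serial2/0
 encapsulation frame-relay
 no ip address
!
interface Serial2/0.103 point-to-point
 ip address 155.1.103.5 255.255.255.0
 ip pim sparse-dense-mode
 frame-relay interface-dlci 103
!
interface Serial2/0.105 point-to-point
 ip address 155.1.105.5 255.255.255.0
 ip pim sparse-dense-mode
 frame-relay interface-dlci 105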

The tunnel config for the spokes is shown here:

Router_1#
interface Tunnel0
ip address 155.1.20.20 255.255.255.0
ip pim sparse-mode
tunnel source Loopback1
tunnel destination 150.1.3.3
tunnel mode ipip

Router_3#
interface Tunnel0
ip address 155.1.20.21 255.255.255.0
ip pim sparse-mode
tunnel source Loopback0
tunnel destination 150.1.1.1
tunnel mode ipip


Another caveat: if the tunnel is not included in the IGP then static multicast routes pointing at the tunnel will be required to ensure RPF checks don't fail.

R3
ip mroute 150.1.1.1 255.255.255.255 tu0


Note: the problem with the dissemination of traffic to mcast group 224.0.1.40 can be seen on the hub (R5), as the frame relay serial interface S2/0 is missing from the outgoing interface list (OIL).


(150.1.1.1, 224.0.1.40), 01:14:19/00:02:34, flags: LT
Incoming interface: Serial2/0, RPF nbr 155.1.0.1
Outgoing interface list:
Loopback0, Forward/Sparse, 01:14:19/00:00:00
Serial2/1, Forward/Sparse, 01:14:19/00:00:00
FastEthernet0/0, Forward/Sparse, 01:14:19/00:00:00

PIM misc - dense mode reqd in sparse-dense region

Suppose the lab requires that one mcast range ONLY operates in dense mode, whereas the rest of the domain should operate in sparse mode.

This can be achieved by making use of the 'deny' statement in the ACL used to denote the mcast groups serviced by the candidate RP.

SW3#s access-list 11
Standard IP access list 11
40 deny 224.50.50.50
20 permit 232.0.0.0, wildcard bits 7.255.255.255
30 permit 224.0.0.0, wildcard bits 15.255.255.255

When examining the RP mappings, the 'denied' range is shown with a minus sign:

Group(s) (-)224.50.50.50/32
RP 150.1.9.9 (?), v2v1
Info source: 150.1.9.9 (?), elected via Auto-RP
Uptime: 00:01:31, expires: 00:02:27
RP 150.1.7.7 (?), v2v1
Info source: 150.1.7.7 (?), via Auto-RP
Uptime: 00:01:07, expires: 00:02:50

PIM RP Load Balancing and Redundancy

Here I look at achieving load balancing and redundancy of multicast traffic between RPs.

First, load balancing.

Auto-RP is being used. SW1 is configured as RP for 224.0.0.0 - 231.255.255.255 and
SW3 is RP for 232.0.0.0 - 239.255.255.255.

SW1
ip pim send-rp-announce Loopback0 scope 16 group-list 11
ip access-list st 11
permit 224.0.0.0 7.255.255.255

SW3
ip pim send-rp-announce Loopback0 scope 16 group-list 11
ip access-list st 11
permit 232.0.0.0 7.255.255.255

Now, examining the RP mappings on the mapping agent (R5), we can see the load is balanced between SW1 (150.1.7.7) and SW3 (150.1.9.9).

Router_5>s ip pim rp map
PIM Group-to-RP Mappings
This system is an RP (Auto-RP)
This system is an RP-mapping agent (Loopback0)

Group(s) 224.0.0.0/5
RP 150.1.7.7 (?), v2v1
Info source: 150.1.7.7 (?), elected via Auto-RP
Uptime: 00:00:06, expires: 00:02:53
Group(s) 224.0.0.0/4
RP 150.1.5.5 (?), v2v1
Info source: 150.1.5.5 (?), elected via Auto-RP
Uptime: 00:13:55, expires: 00:02:11
Group(s) 232.0.0.0/5
RP 150.1.9.9 (?), v2v1
Info source: 150.1.9.9 (?), elected via Auto-RP
Uptime: 00:00:31, expires: 00:02:24



The next step is to achieve redundancy: SW1 should back up SW3 should it fail, and vice versa.

This can be achieved by having each candidate RP announce the same group range. The mapping agent will select the RP with the highest IP address.

So on SW1 and SW3 I update access list 11 as follows:

ip access-list st 11
permit 224.0.0.0 15.255.255.255

Router_5>show ip pim rp map 224.0.0.0
PIM Group-to-RP Mappings
This system is an RP (Auto-RP)
This system is an RP-mapping agent (Loopback0)

Group(s) 224.0.0.0/5
RP 150.1.7.7 (?), v2v1
Info source: 150.1.7.7 (?), elected via Auto-RP
Uptime: 00:07:38, expires: 00:02:21
Group(s) 224.0.0.0/4
RP 150.1.9.9 (?), v2v1
Info source: 150.1.9.9 (?), elected via Auto-RP

Uptime: 00:02:01, expires: 00:01:56
RP 150.1.7.7 (?), v2v1
Info source: 150.1.7.7 (?), via Auto-RP
Uptime: 00:01:38, expires: 00:02:18
RP 150.1.5.5 (?), v2v1
Info source: 150.1.5.5 (?), via Auto-RP
Uptime: 00:21:27, expires: 00:02:46

The mapping agent shows SW3 as the winning candidate RP for the 224.0.0.0/4 range. On other routers only the winning RP will be shown in the RP mapping table.


Router_6>s ip pim rp map

PIM Group-to-RP Mappings
Group(s) 224.0.0.0/4
RP 150.1.9.9 (?), v2v1
Info source: 150.1.5.5 (?), elected via Auto-RP
Uptime: 00:04:10, expires: 00:02:42


Note: when selecting ranges to advertise, the mapping agent will always advertise the longest-match mcast range.

Monday, June 8, 2009

PIM DR


On a multi-access network there may be multiple PIM/IGMP-enabled routers. It is the responsibility of one of these routers to send any PIM join messages towards the RP.

If no PIM DR priority is explicitly configured, the PIM router with the highest IP address is elected as the DR and will send the join. The PIM DR priority can be used to influence which router is elected to forward the PIM join messages.

In the above scenario, without any DR priorities configured, R6 is elected DR as it has the highest IP address.

Router_1>S IP PIM NE
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
S - State Refresh Capable
Neighbor Interface Uptime/Expires Ver DR
Address Prio/Mode
155.1.148.6 FastEthernet0/0 00:01:26/00:01:17 v2 1 / DR S
155.1.148.4 FastEthernet0/0 00:02:16/00:01:18 v2 1 / S
155.1.0.5 Serial2/0 00:00:21/00:01:24 v2 1 / DR S


If the lab requirement states that R1 should be the DR for this segment, this can be achieved with the 'ip pim dr-priority' command.

config#int fa0/0
config-if#ip pim dr-priority 100


With the above config applied I re-examine the PIM neighbors, and R1 has pre-empted the DR position.

Router_4#s ip pim ne
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
S - State Refresh Capable
Neighbor Interface Uptime/Expires Ver DR
Address Prio/Mode
155.1.148.6 FastEthernet0/0 00:02:14/00:01:28 v2 1 / S
155.1.148.1 FastEthernet0/0 00:02:35/00:01:28 v2 100/ DR S
155.1.46.5 Serial2/1 00:02:49/00:01:22 v2 1 / S

From R1 this can be seen as well using the 'show ip pim interface fa0/0' command.

Router_1#s ip pim interface fa0/0
Address Interface Ver/ Nbr Query DR DR
Mode Count Intvl Prior
155.1.148.1 FastEthernet0/0 v2/SD 2 30 100 155.1.148.1

In summary, the PIM DR controls upstream PIM joins, while (from my previous post) the PIM assert mechanism controls downstream forwarding of multicast traffic.

Sunday, June 7, 2009

Controlling access to RP

PIM has the functionality to specify the multicast groups for which an RP will accept joins.

This allows central control over the mcast groups serviced by the RP.

The following config will only allow joins to mcast groups 224.11.11.11 and 224.111.111.111 for the RP 150.1.5.5. This can be enabled on the RP itself, or alternatively on routers on the path to the RP.

ip access-list st 5
permit 224.11.11.11
permit 224.111.111.111

ip pim accept-rp 150.1.5.5 5



With 'debug ip pim' enabled, failed attempts to join the RP are logged:

*Jun 8 07:03:13.039: PIM(0): Join-list: (*, 224.20.20.20),, ignored, invalid RP
150.1.5.5 from 155.1.58.2

PIM Assert


The PIM assert mechanism is used to shut off duplicate flows onto the same multi-access network. Routers detect this condition when they receive an (S,G) packet on a multi-access interface that is already in the (S,G) OIL. This causes the routers to send assert messages.

In this scenario the workstation attached to R6 has joined group 239.6.6.6. A multicast feed is started and both R1 and R4 begin forwarding the mcast onto the segment.

With 'debug ip pim' enabled on R1 and R4, it can be seen that a PIM assert exchange is initiated between them.

On R1
*Jun 8 06:18:49.419: PIM(0): Send v2 Assert on FastEthernet0/0 for 239.6.6.6, source 155.1.58.2, metric [80/65]
*Jun 8 06:18:49.423: PIM(0): Assert metric to source 155.1.58.2 is [80/65]
*Jun 8 06:18:49.423: PIM(0): We win, our metric [80/65]


On R4
*Jun 8 06:18:49.359: PIM(0): Received v2 Assert on FastEthernet0/0 from 155.1.1
48.1
*Jun 8 06:18:49.367: PIM(0): Assert metric to source 155.1.58.2 is [80/65]
Router_4#
*Jun 8 06:18:49.371: PIM(0): We lose, our metric [90/2172416]

The winner of the assert exchange is the router with best (AD,Metric). In the above case, R1 has an AD of 80 and R4 has an AD of 90. R1 wins!

As a result R4 prunes the (S,G) entries in its mroute table:


Router_4#s ip mroute 239.6.6.6
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.6.6.6), 00:04:43/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Serial2/1, Forward/Sparse-Dense, 00:04:43/00:00:00
FastEthernet0/0, Forward/Sparse-Dense, 00:04:43/00:00:00

(150.1.8.8, 239.6.6.6), 00:00:56/00:02:05, flags: PT
Incoming interface: Serial2/1, RPF nbr 155.1.46.5
Outgoing interface list:
FastEthernet0/0, Prune/Sparse-Dense, 00:00:56/00:02:03

(155.1.58.2, 239.6.6.6), 00:00:56/00:02:05, flags: PT
Incoming interface: Serial2/1, RPF nbr 155.1.46.5
Outgoing interface list:
FastEthernet0/0, Prune/Sparse-Dense, 00:00:56/00:02:03


On R1 the (S,G) entries remain, with an 'A' next to them denoting Assert winner!

Router_1#s ip mroute 239.6.6.6
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.6.6.6), 00:05:01/stopped, RP 0.0.0.0, flags: D
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Serial2/0, Forward/Sparse-Dense, 00:05:01/00:00:00
FastEthernet0/0, Forward/Sparse-Dense, 00:05:01/00:00:00

(150.1.8.8, 239.6.6.6), 00:01:14/00:01:46, flags: T
Incoming interface: Serial2/0, RPF nbr 155.1.0.5
Outgoing interface list:
FastEthernet0/0, Forward/Sparse-Dense, 00:01:14/00:00:00, A

(155.1.58.2, 239.6.6.6), 00:01:14/00:01:46, flags: T
Incoming interface: Serial2/0, RPF nbr 155.1.0.5
Outgoing interface list:
FastEthernet0/0, Forward/Sparse-Dense, 00:01:14/00:00:00, A