Sunday, May 31, 2009

SNMP version 3

SNMP version 3 incorporates security enhancements into the SNMP protocol.

To utilise this new functionality, SNMP groups with associated usernames and passwords must be created.

The first step is to specify an ACL defining the hosts permitted to access the group:

config#ip access-list standard 1
config-std-nacl#permit 130.1.1.1


config#snmp-server group IELAB v3 auth access 1

The second step is to specify the usernames and passwords:

config#snmp-server user rich IELAB v3 auth md5 CISCO


Verification can be done with the command

R4#s snmp user
User name: rich
Engine ID: 800000090300CA0309800000
storage-type: nonvolatile active


When defining the SNMP host, the authentication method can then be specified:

config#snmp-server host 154.1.3.100 version 3 auth IELAB
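Pulling the three steps together, the whole configuration amounts to just five lines:

config#ip access-list standard 1
config-std-nacl#permit 130.1.1.1
config#snmp-server group IELAB v3 auth access 1
config#snmp-server user rich IELAB v3 auth md5 CISCO
config#snmp-server host 154.1.3.100 version 3 auth IELAB

The group side can be checked with show snmp group, in the same way show snmp user verifies the user.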

Thursday, May 21, 2009

Multicast - Shared Trees



I thought I would write about the multicast 'shared tree' and how it is built. Understanding this has certainly helped me with multicast, and with troubleshooting along the way.

This is a two-stage process: the server 'registers' with the RP and the client 'joins' the RP. These two processes are independent of each other, and they are the same regardless of the underlying PIM RP selection protocol, e.g. Auto-RP, static RP, or BSR.

In this example I use multicast servers attached to R6 and clients attached to R4.
The RP is 150.1.5.5 on R5.
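For reference, the base multicast configuration behind all of this is small. A minimal sketch assuming static RP selection (the interface name is a placeholder; sparse-dense mode matches the Forward/Sparse-Dense entries in the outputs below):

ip multicast-routing

interface FastEthernet0/0
ip pim sparse-dense-mode

ip pim rp-address 150.1.5.5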

First, the registration process.
A server at 204.12.1.254 starts sending to the multicast address 224.4.4.4. The PIM router on the local LAN segment receives the multicast packet and sends a PIM 'register' to the RP, 150.1.5.5 in this example. This message is actually encapsulated as a unicast message to 150.1.5.5.

When the RP receives this register message it acknowledges receipt with a Register-Stop (no client has joined the group yet, so R6 is told to stop registering).

The output of debug ip pim on R6 (the local PIM router) and R5 (the RP) shows this...

R6#
*May 21 06:35:16.647: PIM(0): Check RP 150.1.5.5 into the (*, 224.4.4.4) entry
*May 21 06:35:16.655: PIM(0): Send v2 Register to 150.1.5.5 for 204.12.1.254, group 224.4.4.4
*May 21 06:35:17.143: PIM(0): Received v2 Register-Stop on FastEthernet0/0 from 150.1.5.5
*May 21 06:35:17.147: PIM(0): for source 204.12.1.254, group 224.4.4.4
*May 21 06:35:17.147: PIM(0): Clear Registering flag to 150.1.5.5 for (204.12.1.254/32, 224.4.4.4)

R5#
*May 21 06:35:16.483: PIM(0): Received v2 Register on Serial2/0 from 192.10.1.6
*May 21 06:35:16.487: for 204.12.1.254, group 224.4.4.4
*May 21 06:35:16.491: PIM(0): Check RP 150.1.5.5 into the (*, 224.4.4.4) entry
*May 21 06:35:16.495: PIM(0): Send v2 Register-Stop to 192.10.1.6 for 204.12.1.254, group 224.4.4.4

Following this registration process, entries are placed in the mroute tables on R6 and R5, but not on any intervening routers in the unicast path between R6 and R5.

On R6 two mroute entries are created: the (*,G) entry and the (S,G) entry. At this stage both entries have a null outgoing interface list, as no client has yet joined this multicast feed. The (S,G) entry denotes that a server is sending to the multicast group.

R6#s ip mroute 224.4.4.4
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.4.4.4), 00:11:12/stopped, RP 150.1.5.5, flags: SPF
Incoming interface: FastEthernet0/0, RPF nbr 192.10.1.1
Outgoing interface list: Null


(204.12.1.254, 224.4.4.4), 00:11:12/00:02:51, flags: PFT
Incoming interface: FastEthernet1/0, RPF nbr 0.0.0.0
Outgoing interface list: Null


On R5, the RP, two similar entries are created.

R5#s ip mroute 224.4.4.4
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.4.4.4), 00:10:16/stopped, RP 150.1.5.5, flags: SP
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list: Null

(204.12.1.254, 224.4.4.4), 00:10:16/00:02:47, flags: P
Incoming interface: Tunnel53, RPF nbr 154.1.0.3, Mroute
Outgoing interface list: Null


This completes the register process.


Second, the client 'join' process.
A multicast client is connected to R4 and sends an IGMP join for the multicast group 224.4.5.6. Upon receipt of the IGMP join, R4 sends a PIM join message towards the RP (R5).
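In a lab with no real client attached, this join is commonly simulated on the client-facing router interface itself (interface name assumed here):

config#interface FastEthernet0/0
config-if#ip igmp join-group 224.4.5.6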

Debug output on R5 shows receipt of the join message.

R5#
*May 21 06:59:48.519: PIM(0): Received v2 Join/Prune on Tunnel53 from 154.1.0.3, to us
*May 21 06:59:48.523: PIM(0): Join-list: (*, 224.4.5.6), RPT-bit set, WC-bit set, S-bit set
*May 21 06:59:48.527: PIM(0): Add Tunnel53/154.1.0.3 to (*, 224.4.5.6), Forward state, by PIM *G Join


On the RP (R5) a (*,G) entry is created in the mroute table

R5#s ip mroute 224.4.5.6
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.4.5.6), 00:02:47/00:02:58, RP 150.1.5.5, flags: SJC
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Tunnel53, Forward/Sparse-Dense, 00:01:48/00:02:40
FastEthernet1/0, Forward/Sparse-Dense, 00:02:47/00:02:58


Notice that this time the entry has a populated outgoing interface list. With the join process, all PIM-enabled routers in the path to the RP also build such an entry in their mroute tables.

This completes the join process.

Tying 'register' and 'join' together
The RP ties the join and register processes together. I initiate a server multicast feed to the multicast address 224.4.5.6.

I start a ping to 224.4.5.6

ping 224.4.5.6 repeat 10000
Reply to request 23 from 154.1.0.4, 552 ms
Reply to request 24 from 154.1.0.4, 676 ms
Reply to request 25 from 154.1.0.4, 788 ms
Reply to request 26 from 154.1.0.4, 964 ms
Reply to request 27 from 154.1.0.4, 824 ms

This again initiates a register from the local PIM router to the RP for this multicast group.

I examine the mroute table on the RP with this multicast ping in progress.

R5#s ip mroute 224.4.5.6
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.4.5.6), 00:03:55/00:03:29, RP 150.1.5.5, flags: SJC
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Tunnel53, Forward/Sparse-Dense, 00:02:56/00:03:29
FastEthernet1/0, Forward/Sparse-Dense, 00:03:55/00:02:50

(204.12.1.254, 224.4.5.6), 00:00:22/00:02:59, flags: T
Incoming interface: Tunnel53, RPF nbr 154.1.0.3, Mroute
Outgoing interface list:
FastEthernet1/0, Forward/Sparse-Dense, 00:00:22/00:02:50


As before, the RP contains both the (*,G) and (S,G) entries. Since a client has joined this feed, both entries also contain an interface in the outgoing interface list (OIL).


I examine the mroute table on R6 (the PIM router connected to the multicast server). Similarly there are two entries for the multicast group 224.4.5.6. Note the (S,G) entry has a populated OIL; however, the (*,G) entry does not.

Only those routers in the path between the multicast client and the RP will have a populated OIL for the (*,G) entry.


R6#s ip mroute 224.4.5.6
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.4.5.6), 00:03:13/stopped, RP 150.1.5.5, flags: SPF
Incoming interface: FastEthernet0/0, RPF nbr 192.10.1.1
Outgoing interface list: Null

(204.12.1.254, 224.4.5.6), 00:03:13/00:03:25, flags: FT
Incoming interface: FastEthernet1/0, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/0, Forward/Sparse-Dense, 00:03:12/00:03:16

Sunday, May 17, 2009

BGP preferred path



In this scenario R3 is advertising network 154.1.5.0/24 to BGP peers R1 and R2. The lab requirement is for AS 300 to be configured so that the R1-to-R3 link is the preferred path to reach this network.

In such a scenario there are two usual candidates to meet this requirement: AS-path prepending and MED. In this scenario AS-path prepending is not allowed.

So the configuration for MED on R3 is as follows (a lower MED is preferred, so R1 is sent the better metric):

ip prefix-list VLAN10 permit 154.1.5.0/24
route-map R1 permit 10
match ip address prefix-list VLAN10
set metric 100
route-map R1 permit 20


route-map R2 permit 10
match ip address prefix-list VLAN10
set metric 200
route-map R2 permit 20


router bgp 300
neighbor 154.1.13.1 route-map R1 out
neighbor 154.1.23.2 route-map R2 out
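One practical point: outbound policy changes do not take effect until the sessions are refreshed, so after attaching the route-maps a soft clear is needed (assuming the peers support route refresh, this does not tear the sessions down):

R3#clear ip bgp * soft out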


The route-maps above were my first configured solution. I examined the BGP table on R2 to verify my results:-

R2#s ip bgp
   Network          Next Hop      Metric LocPrf Weight Path
*>i154.1.5.0/24     154.1.13.3       100    100      0 300 400 i
*                   154.1.23.3       200             0 300 400 i

As expected, R2 had two paths to network 154.1.5.0/24. However, R2's next hop for both learned routes was R3! I had missed one vital configuration element in terms of meeting the lab requirement. R1's advertisement of 154.1.5.0/24 to R2 was NOT adjusting the next hop. This is the correct behaviour, since R1 learned the route from its EBGP peer R3, and by default the next hop is passed unchanged to IBGP peers.

To ensure traffic from R2 destined to R3 goes via R1, it is necessary for R1 to set the next hop of EBGP-learned routes to itself.

R1
router bgp 200
neighbor 192.10.1.2 next-hop-self


Once applied, I examined the BGP table on R2 again...

Rack1R2#s ip bgp
   Network          Next Hop      Metric LocPrf Weight Path
*>i154.1.5.0/24     192.10.1.1       100    100      0 300 400 i
*                   154.1.23.3       200             0 300 400 i


Now R2's preferred route to reach 154.1.5.0/24 is via R1. Job done!

Saturday, May 16, 2009

srr-queue commands - part IV

The final part in my look at the srr queues covers how DSCP- or CoS-marked packets are assigned to the srr queues.

First the default settings

DSCP (default values 0-63):

0-15 queue 2
16-31 queue 3
32-39, 48-63 queue 4
40-47 queue 1

CoS (default values 0-7):

0,1 queue 2
2,3 queue 3
4,6,7 queue 4
5 queue 1

Within each queue, marked packets can be placed in one of three thresholds. By default all packets are placed in threshold 1, which is typically configured with the lowest tolerance to WTD.

The above default settings can all be adjusted, depending on the requirements to be met, with the following commands:

mls qos srr-queue output dscp-map queue {queue-id} threshold {threshold-id} {dscp1} ... {dscp8}
mls qos srr-queue output cos-map queue {queue-id} threshold {threshold-id} {cos1} ... {cos8}

To ensure higher-priority DSCP or CoS values are not dropped first, they can be assigned to a threshold-id with a higher value (2 or 3). By default the higher threshold-id values have a higher tolerance to WTD.
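For example, to keep EF-marked voice traffic in queue 1 (where DSCP 46 already sits by default) but move it to the more tolerant threshold 3:

config#mls qos srr-queue output dscp-map queue 1 threshold 3 46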

Finally, to review the assignments, use the show mls qos maps command.

That's it for srr-queues. In my opinion, an absolute beast of a subject. Good to have an understanding of the configurable parameters, but the doc CD will be my friend should this come up.

Friday, May 15, 2009

srr-queue commands - part III

Before I write about how traffic is allocated to queues, I realised there is another important piece to the srr-queue puzzle, namely how buffers are allocated and managed on the 4 srr queues.

This in itself appears to be a science best approached in a dark room!:-)

Buffers can be set up in advance and mapped to a queue-set. Two queue-sets are available. An interface is then assigned to a queue-set, thus applying the required buffer settings. By default an interface uses queue-set 1.

An interface is assigned a queue-set as follows
config-if#queue-set 2
or
config-if#queue-set 1



As we know already there are 4 srr queues. A number of values can be set for each of these queues.

1) Buffer allocation
In percentage terms, how much of the available interface buffer space is mapped to this queue. The allocations for the 4 srr queues must total 100%.

2) Buffer thresholds - of which there are 4
2 drop WTD (weighted tail drop) thresholds
1 reserved threshold
1 maximum threshold

First, buffer allocation:
mls qos queue-set output {1-2} buffers {%1} {%2} {%3} {%4}

e.g. mls qos queue-set output 1 buffers 30 30 30 10
This sets the buffer allocation for srr queue 1 to 30%, queue 2 to 30%, queue 3 to 30% and queue 4 to 10%. N.B. if this command is not used, the default allocation is 25% for each queue.

Second, buffer thresholds:
As mentioned there are 4 thresholds per queue. If none are explicitly set then the following percentage defaults apply to the available buffer space:

        wtd-1  wtd-2  reserved  maximum
queue 1   100    100        50      400
queue 2   200    200        50      400
queue 3   100    100        50      400
queue 4   100    100        50      400

e.g. for queue 1
100 wtd threshold 1
100 wtd threshold 2
50 reserved threshold
400 maximum threshold


So, bringing the above all together in one example:

mls qos queue-set output 1 buffers 30 20 30 20
mls qos queue-set output 1 threshold 1 40 60 100 200
mls qos queue-set output 1 threshold 2 40 60 100 200
mls qos queue-set output 1 threshold 3 40 60 100 200
mls qos queue-set output 1 threshold 4 40 60 100 200
int gi1/1
queue-set 1

i) srr buffer allocation for queues 1-4 is 30%, 20%, 30% and 20% respectively
ii) the srr queue thresholds are set identically for all 4 queues to 40%, 60%, 100% and 200%
iii) all the above config is applied to queue-set 1, which is then applied to interface gi1/1
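To verify, the configured queue-set values and the buffer usage on the interface can be reviewed (command forms from memory, so treat the exact syntax as an assumption):

show mls qos queue-set 1
show mls qos interface gi1/1 buffers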

As mentioned at the start, when I first looked at this it appeared to be another science in itself. I know I had to read the Cisco doc at least a couple of times to get it straight - or maybe that's just me :-)

Monday, May 11, 2009

srr-queue commands - part II

In this post I look at the srr-queue shape and share commands, what they do and how they interact.

There are 4 interface queues serviced by SRR. Each queue can be configured for either shaping or sharing, but not both. If shaping is configured then this takes precedence.
( The way i remember this is that shaping comes alphabetically before sharing )

Shaping guarantees a percentage of the bandwidth and limits the traffic to the configured amount. Conversely, sharing allocates the bandwidth amongst the sharing queues according to the configured ratios, but does NOT limit them to that level.

Shaped and shared settings are configured using

config-if#srr-queue bandwidth shape {n} {n} {n} {n}
config-if#srr-queue bandwidth share {n} {n} {n} {n}


If the values are not set then the following default values apply

config-if#srr-queue bandwidth shape 25 0 0 0
config-if#srr-queue bandwidth share 25 25 25 25


Bandwidth allocation for a 10 Mbps link can be calculated as follows:-

SHAPED Q
Bandwidth allocated = 1/25 * BW
Hence for a 10 Mbps interface, the BW for queue 1 would be 400 kbps

SHARED Qs
10 Mbps - 400 kbps = 9.6 Mbps
Hence for queues 2, 3 and 4, BW = 25/(25+25+25) * 9.6 Mbps = 3.2 Mbps each


Supposing a lab requirement was to guarantee queue 1 and queue 2 each 2 Mbps, with queues 3 and 4 sharing the remainder; this could be achieved with the following configuration:

config-if#srr-queue bandwidth shape 5 5 0 0
config-if#srr-queue bandwidth share 0 0 25 25
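Working that through for the same 10 Mbps interface: each shaped queue is guaranteed 1/5 * 10 Mbps = 2 Mbps, leaving 6 Mbps to be shared 25:25 between queues 3 and 4, i.e. 3 Mbps each (although sharing will not cap them there). The applied weights can be checked per interface (interface name assumed):

show mls qos interface gi1/1 queueing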


In the next post I look at how traffic is mapped to the srr queues.

Saturday, May 9, 2009

srr-queue commands - part I

The lab requirement states that the maximum output usage on fa0/1 should not exceed 75 percent of the maximum line rate.

This can be achieved by making use of the srr-queue bandwidth limit command (the configurable range is 10 to 90 percent). For the above requirement to be met, configure the following:-

config#interface fa0/1
config-if#srr-queue bandwidth limit 75


For verification use the show mls qos int fa0/1 queueing command.

N.B. A prerequisite is the global configuration command mls qos.

Thursday, May 7, 2009

OSPF routing part XII - filter-list


Consider the above scenario, where router 2 is an ABR between areas 0, 1 and 2. All adjacencies are up and a full exchange of routes has taken place.

A new requirement is that area 2 is deemed confidential. Area 1 must not have access to any routes originating from Area 2.

This can be achieved by making use of the OSPF filter-list functionality. First a prefix-list must be defined that indicates which routes are to be filtered (and which allowed!).


R2
ip prefix-list AREA2 deny 150.1.24.0/24
ip prefix-list AREA2 deny 150.1.40.40/32
ip prefix-list AREA2 permit 0.0.0.0/0 le 32



N.B. the last entry in the prefix-list is essential to ensure that all routes other than the explicitly denied ones are allowed through.

On router 2 the filter-list can then be applied


R2
router ospf 1
area 1 filter-list prefix AREA2 in


When the routing table on R1 is subsequently examined, the 150.1.24.0/24 and 150.1.40.40/32 routes are no longer present. Nice.
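For completeness, the filtering could instead be applied as the prefixes leave area 2, re-using the same prefix-list. Note this is a stronger filter - it hides the routes from every other area, not just area 1 - so it only fits if area 0 is also allowed to lose them:

R2
router ospf 1
area 2 filter-list prefix AREA2 out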

Tuesday, May 5, 2009

OSPF routing part XI - preferred neighbor

A hub router in an OSPF topology has two neighbors connected over a single serial interface. The hub router is learning the same routes from both neighbors. How can the hub router be configured to prefer routes from a particular neighbor?

The solution to this problem brought to light a configuration parameter that had previously escaped my attention. When using the OSPF 'neighbor' command, a cost can also be applied to routes learned from that neighbor.

Hence, to achieve the requirement laid out, this parameter simply needs to be set accordingly.

i.e. below I set a lower cost for neighbor 2. This could be useful in a scenario where neighbor 2 has a connection with greater bandwidth or reliability.

router ospf 1
neighbor {ip address 1} cost 200
neighbor {ip address 2} cost 100
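One caveat, as far as I recall from the documentation: the cost option on the neighbor statement is only accepted when the interface runs the OSPF point-to-multipoint non-broadcast network type. A concrete sketch, with hypothetical neighbor addresses:

interface Serial0/0
ip ospf network point-to-multipoint non-broadcast

router ospf 1
neighbor 154.1.0.2 cost 200
neighbor 154.1.0.3 cost 100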

OSPF routing part X - non broadcast FR

I recently came across a scenario where the requirement was to run OSPF over a frame-relay network. Normally this would be a straightforward routing configuration. As ever with the CCIE, the interesting part was that no broadcast keywords had been placed on the frame-relay map statements, and none were allowed.

If there are no broadcast capabilities on the underlying network, this is not necessarily a problem for OSPF, as long as a suitable OSPF non-broadcast network type is chosen.

I set the OSPF network type first to 'non-broadcast' and then to 'point-to-multipoint non-broadcast', and both worked fine as long as the required neighbor statement was configured on the hub router.
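To round it off, a minimal hub-side sketch (the addressing and the DLCI here are hypothetical):

interface Serial0/0
ip address 154.1.0.5 255.255.255.0
encapsulation frame-relay
ip ospf network point-to-multipoint non-broadcast
frame-relay map ip 154.1.0.1 105

router ospf 1
network 154.1.0.0 0.0.0.255 area 0
neighbor 154.1.0.1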