Friday, January 23, 2009

Reliable static routing

Static routing is just that: static. If an interface goes down, the route remains in the table and potentially black-holes traffic.

Reliable static routing gets around this problem by installing a static route only when a tracked object is up. With this feature, static routes in effect become dynamic!


R1 --------------- R3

On R1 I create an SLA monitor to track the reachability of the loopback on R3:

ip sla monitor 1
type echo protocol ipIcmpEcho 3.3.3.3
timeout 900
frequency 1
ip sla monitor schedule 1 life forever start-time now


I then create a tracking object:

track 1 rtr 1

Router_1#s track
Track 1
Response Time Reporter 1 state
State is Up
5 changes, last change 00:00:02
Latest operation return code: OK
Latest RTT (millisecs) 1
Tracked by:
STATIC-IP-ROUTING 0

I then create a reliable static route based on the status of tracked object 1:

ip route 5.5.5.5 255.255.255.255 10.0.0.3 55 track 1

The route only remains in the routing table as long as the loopback on R3 remains reachable.

Nice.

Wednesday, January 21, 2009

Configuring Voice VLAN

Before enabling a voice VLAN, Cisco recommends that QoS is enabled (by entering the mls qos command) and that the port trust state is set on the interface (by entering the mls qos trust cos command).


You can configure a port to carry voice traffic in one of two ways:

•Configure the port to carry Voice Traffic in IEEE 802.1Q Frames
or
•Configure the port to carry Voice Traffic in IEEE 802.1p Priority-Tagged Frames

DOT1Q
To configure a port to carry voice traffic in IEEE 802.1Q frames for a specific VLAN:

mls qos
Interface Fa0/X
mls qos trust cos
switchport access vlan X
switchport voice vlan Y
(Tells the phone to use a dot1q header for VLAN Y)

N.B. The voice VLAN must be present and active on the switch when the phone uses its own VLAN.

DOT1P
To configure a port to instruct the IP phone to send voice traffic in IEEE 802.1p priority-tagged frames on the native data VLAN:

mls qos
interface Fa0/X
mls qos trust cos
switchport access vlan X
switchport voice vlan dot1p
(Tells the phone to send priority-tagged frames on the native data VLAN)

By default voice traffic is sent with a CoS value of 5 and data traffic with a CoS value of 0.

Use the following commands to instruct the phone to change the CoS value of data packets:

mls qos trust device cisco-phone
switchport priority extend cos 1
(Tells the phone to mark data traffic with CoS 1)

Sunday, January 18, 2009

HSRP - Tracking


In my experience it is often more useful to track an end-to-end connection, rather than the local serial connection (which may not go down when end-to-end connectivity is lost).

In the above diagram R1 should be the active HSRP router as long as it maintains its Frame Relay connection to R3. The trouble here is that the serial interface on R1 will remain 'up' even if the serial interface on R3 goes down.

One way around this is to configure a tunnel connection between R1 and R3 and run IP keepalives over this. Then with the HSRP configuration track the status of the tunnel connection. In this instance the tunnel connection goes down when the end to end connection over the frame relay cloud goes down.

R3
int tu0
ip unnumbered lo0
tunnel source 149.1.123.3
tunnel destination 149.1.123.1
keepalive 10 3


R1
int tu0
ip unnumbered lo0
tunnel source 149.1.123.1
tunnel destination 149.1.123.3
keepalive 10 3


int fa0/0
standby ip 149.1.127.254
standby priority 110
standby preempt
standby track tu0 11
(Decrements the priority by 11, dropping R1 below R2's 100, if Tunnel0 goes down)


R2
int fa0/0
standby ip 149.1.127.254
standby priority 100
standby preempt

banner tokens

There are three banner commands: motd, login and exec. The motd banner is displayed upon connecting to the router, login when prompted for login, and exec upon reaching the exec prompt. Pretty self explanatory.

Commands to enter the banners are
i) banner motd
ii) banner login
iii) banner exec

The message is then delimited by the control character of your choice.

However, did you know there are a few dynamic values or 'tokens' that can be added to the message?


$(hostname) Displays the host name for the router.

$(domain) Displays the domain name for the router.

$(line) Displays the vty or tty (asynchronous) line number.

$(line-desc) Displays the description attached to the line.


By way of an example
Router_1(config)#banner login @
Enter TEXT message. End with the character '@'.
you are on router $(hostname)
@


So telnetting onto the router, the output is as follows :

Router_1>1.1.1.1
Trying 1.1.1.1 ... Open

you are on router Router_1


User Access Verification

Password:

Saturday, January 17, 2009

Multicast - Auto RP Filtering

In this post I detail Auto-RP filtering. This can be done by the mapping agent in the Auto-RP domain.

I had some difficulty getting this feature to work as required. In essence the concept is simple: the mapping agent denotes which RPs are allowed to advertise which multicast groups. However, I discovered there are some limitations around how this works.

Consider the following

R1 -------- R2 -------- R3

R1 is the RP for groups 230.0.0.0/8 and 231.0.0.0/8

R2 is the mapping agent

R3 is the router initiating pings to the multicast groups.


In the above situation the requirement is for the mapping agent to allow R1 to be the RP for 231.0.0.0/8 and NOT 230.0.0.0/8. The config might be applied as follows:

R1
int lo0
ip address 1.1.1.1 255.255.255.0
access-list 1 permit 230.0.0.0 1.255.255.255
ip pim autorp listener
ip pim send-rp-announce lo0 scope 16 group-list 1



R2
access-list 1 permit 1.1.1.1
access-list 2 permit 231.0.0.0 0.255.255.255
ip pim autorp listener
ip pim send-rp-discovery lo0 scope 16
ip pim rp-announce-filter rp-list 1 group-list 2


R3
ip pim autorp listener



I applied the above config and tried a ping from R3 to both multicast groups. To my surprise I was not able to ping either group!

Some research and head scratching later I realised that R2 was filtering both multicast groups from R1. The reason is that R1 advertises itself as the RP for both groups in a single ACL statement. If the mapping agent blocks any group within this advertised range, ALL groups within the advertised range are denied.

In summary, the mapping agent can only filter at the same granularity as the multicast groups are advertised by the RP.

Hence to achieve the requirement in the above scenario, R1 must first advertise itself as the RP for these multicast groups separately.

If I change the config on R1 to the following then all is well.

R1
access-list 1 permit 230.0.0.0 0.255.255.255
access-list 1 permit 231.0.0.0 0.255.255.255



Router_3#p 231.31.31.31 repeat 3

Type escape sequence to abort.
Sending 3, 100-byte ICMP Echos to 231.31.31.31, timeout is 2 seconds:

Reply to request 0 from 142.1.13.1, 96 ms
Reply to request 1 from 142.1.13.1, 156 ms
Reply to request 2 from 142.1.13.1, 168 ms
Router_3#p 230.30.30.30 repeat 3

Type escape sequence to abort.
Sending 3, 100-byte ICMP Echos to 230.30.30.30, timeout is 2 seconds:
...

Note to myself: watch out for this feature/anomaly :-)

Narrowing an ACL

By narrowing an ACL I mean matching the required traffic in the minimal number of ACL lines.

As an example, consider the following addresses, and express them as a one-line ACL!

200.0.1.2
200.0.3.2
200.0.3.10
200.0.1.18
200.0.3.26
200.0.1.10
200.0.3.18
200.0.1.26

To break this down, consider the variable portions of the addresses in bit notation.
Then decide which bits can be either a zero or a one without allowing any further address combinations through the filter.

            3rd Octet   4th Octet
200.0.1.2 0000 0001 0000 0010
200.0.3.2 0000 0011 0000 0010
200.0.3.10 0000 0011 0000 1010
200.0.1.18 0000 0001 0001 0010
200.0.3.26 0000 0011 0001 1010
200.0.1.10 0000 0001 0000 1010
200.0.3.18 0000 0011 0001 0010
200.0.1.26 0000 0001 0001 1010

0000 00*1 000* *010

Hence the one-line ACL can be represented as follows....

permit 200.0.1.2 0.0.2.24
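As a sanity check, the mask logic can be verified with a few lines of Python (just a sketch - the matches helper and variable names are my own):

```python
def matches(addr: str, base: str, wildcard: str) -> bool:
    """True if addr matches base under the ACL wildcard mask
    (wildcard 1-bits are 'don't care', 0-bits must match)."""
    to_octets = lambda s: [int(o) for o in s.split(".")]
    return all(
        (a & ~w & 0xFF) == (b & ~w & 0xFF)
        for a, b, w in zip(to_octets(addr), to_octets(base), to_octets(wildcard))
    )

hosts = ["200.0.1.2", "200.0.3.2", "200.0.3.10", "200.0.1.18",
         "200.0.3.26", "200.0.1.10", "200.0.3.18", "200.0.1.26"]

# all eight listed addresses pass the filter...
assert all(matches(h, "200.0.1.2", "0.0.2.24") for h in hosts)

# ...and the 3 wildcard bits allow exactly 2**3 = 8 addresses: the 8 above
# and nothing else in 200.0.0.0/16
allowed = [f"200.0.{i}.{j}" for i in range(256) for j in range(256)
           if matches(f"200.0.{i}.{j}", "200.0.1.2", "0.0.2.24")]
assert sorted(allowed) == sorted(hosts)
```

So the mask lets through the eight required addresses and no extras.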

Thursday, January 15, 2009

Multicast - BSR


I recently came across a lab scenario where 3 routers were operating in PIM Sparse mode. The question stated that an interface on one router should join a multicast group and all other routers should be able to ping this address.

The sting in the tail was that the configuration must be achieved on only 1 router and the 'ip pim autorp listener' command could NOT be used.

The obvious answer would have been to configure Auto-RP. However, a prerequisite for this to work in a PIM sparse mode environment is that the 'ip pim autorp listener' command be configured on all routers in the domain.

The way around this problem is to configure BSR (Bootstrap Router). This can operate in sparse mode without the 'autorp listener' prerequisite.

Therefore, on one of the routers, place the following configuration:

ip pim rp-candidate lo0
ip pim bsr-candidate lo0

Sunday, January 11, 2009

MRM - Multicast Routing Monitor

A tool designed to notify network administrators of any multicast routing problems.

There are 3 components to MRM
-Manager
-Sender
-Receiver

The configuration is laid out on the doc cd under 'Using IP Multicast Tools' (well, apart from the fact that it neglects to tell you how to start the test!).

I implement MRM on the same topology used in the previous MSDP post.

The test sender is on R1

R1
int s1/0
ip mrm test-sender


R2
access-list 1 permit 10.0.0.1
access-list 2 permit 150.50.5.69
ip mrm manager test1
manager fa0/0 group 239.1.1.1
senders 1
receivers 2 sender-list 1



R4
int s1/0
ip mrm test-receiver




And so to the crucial piece of config NOT on the doc cd.
On R2 the manager

mrm test1 start

PGM - Pragmatic General Multicast.

PGM is a reliable multicast transport protocol. It guarantees that a receiver in a multicast group receives all data packets (in comparison to normal multicast which is a best effort protocol).

Configuration is very straightforward.

config-if#ip pgm router

That's it!

Saturday, January 10, 2009

MSDP - Multicast Source Discovery Protocol




Consider the above scenario where there are two separate multicast domains. In the left hand side domain the RP is statically configured as 22.22.22.22. In the right hand side domain the RP is statically configured as 33.33.33.33.

On the left the workstation has joined multicast group 224.99.99.99
From R2
Router_2#p 224.99.99.99

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 224.99.99.99, timeout is 2 seconds:

Reply to request 0 from 10.0.0.1, 120 ms


On the right the workstation has joined multicast group 225.0.0.1
Router_3#p 225.0.0.1

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 225.0.0.1, timeout is 2 seconds:

Reply to request 0 from 150.50.5.69, 80 ms




Now I join the two multicast domains using MSDP. The MSDP configuration reminds me of BGP neighbor configuration.

R2
ip msdp peer 3.3.3.3 connect-source Loopback0

R3
ip msdp peer 2.2.2.2 connect-source Loopback0
Router_2#s ip msdp summary
MSDP Peer Status Summary
Peer Address AS State Uptime/ Reset SA Peer Name
Downtime Count Count
3.3.3.3 ? Up 00:00:17 2 0 ?

Router_3#show ip msdp summary
MSDP Peer Status Summary
Peer Address AS State Uptime/ Reset SA Peer Name
Downtime Count Count
2.2.2.2 ? Up 00:00:08 2 0 ?
Router_3#


Now to test the power of MSDP!! From R3 on the right hand side I try to ping the host on the left hand side that is a member of 224.99.99.99.

Router_3#p 224.99.99.99

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 224.99.99.99, timeout is 2 seconds:

Reply to request 0 from 10.0.0.1, 232 ms

Success!!!!

From R2 on the left hand side I try to ping the host on the right hand side that is a member of 225.0.0.1.

Router_2#p 225.0.0.1

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 225.0.0.1, timeout is 2 seconds:

Reply to request 0 from 150.50.5.69, 208 ms

Success!!!




Now to take things one step further and introduce MSDP Anycast. MSDP Anycast, in the words of Cisco, is ‘an intradomain feature that provides redundancy and load-sharing capabilities’. In my own words, this feature allows two separate multicast domains to be configured with RPs sharing the same IP address. Should the RP in one domain become unavailable, the RP in the other domain transparently takes over.

In the given scenario I change the RP on the right hand side to share the same ip address (22.22.22.22) as the left hand side domain.

From R3 I can still ping both multicast hosts. To test the redundancy I take down the loopback interface on R3 that has the RP address.

Router_3(config)#int lo1
Router_3(config-if)#shut

I now check I can still ping the multicast host on the left hand side (even though there is no longer an active RP in the original right hand side multicast domain).

Router_3#p 224.99.99.99

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 224.99.99.99, timeout is 2 seconds:

Reply to request 0 from 10.0.0.1, 276 ms

Success!!

Friday, January 9, 2009

Multicast - Bidirectional

Bidir PIM is designed for multicast apps that have a many-to-many architecture (as opposed to the typical one-to-many architecture of traditional multicast).

Configuring Bidir PIM is very similar to traditional PIM. To enable bidir-PIM on a router

Config#ip pim bidir-enable

And for the RPs, just use the normal RP selection commands with the addition of the bidir option:

STATIC RP: ip pim rp-address {address} bidir
AUTO RP: ip pim send-rp-announce {interface} scope {ttl} bidir
BOOTSTRAP RP: ip pim rp-candidate {interface} bidir

Use the same pim verification commands used for standard pim.

Multicast Routing - Misc

In this post I detail some of the plethora of multicast commands that may be of use...

config-if#ip igmp access-group {acl}

Interface command that restricts the multicast groups that hosts are allowed to join.

config-if#ip igmp limit {n}
interface command that limits the number of groups that users on an interface may join

config-if#ip igmp join-group
joins multicast group with process switching

config-if#ip igmp static-group
joins multicast group with fast switching

ip pim spt-threshold {n}
specifies the threshold that must be reached before moving to the spt

ip multicast rate-limit {in | out} group-list {x} {y}
Controls the transmission rate TO a multicast group
where x is the ACL matching the multicast groups
where y is the rate in kbps

ip multicast boundary {acl}
Implements a bidirectional boundary for multicast traffic.
The following example shows how to set up a boundary for all administratively scoped addresses:

access-list 1 deny 239.0.0.0 0.255.255.255
access-list 1 permit 224.0.0.0 15.255.255.255
interface ethernet 0
ip multicast boundary 1


ip multicast cache-headers
Enables history of multicast traffic through the router.
Can be viewed with show ip mpacket

Router_2#s ip mpacket
IP Multicast Header Cache
11 packets received over 00:00:24, cache size: 1024 entries
Key: id/ttl timestamp (name) source group

0015/254 180.476 (?) 1.1.1.1 225.0.0.1
0071/15 181.336 (?) 5.5.5.5 224.0.1.39
0016/254 182.472 (?) 1.1.1.1 225.0.0.1

SSM - Source Specific Multicast

A few key notes after reviewing the cisco doc cd...

Requires IGMP version 3

Reserved IANA range 232.0.0.0 - 232.255.255.255 (232/8)

Operates in PIM sparse mode or sparse dense mode

Enabled by the ip pim ssm {default | range} global command

When enabled PIM operations in the SSM address range are treated as PIM SSM.

ip pim ssm default
int x
ip pim sparse-mode | ip pim sparse-dense-mode
ip igmp version 3


Enabling PIM SSM may cause problems for legacy PIM operations in the reserved SSM address range. The following config enables the RP to refuse registers from sources in the SSM range:

ip pim accept-register list no-ssm-range
ip access-list extended no-ssm-range
deny ip any 232.0.0.0 0.255.255.255
permit ip any any

Thursday, January 8, 2009

Multicast Routing - Part V Tunnel


The multicast tunnel is a feature that can be used to join two multicast routers separated by a 'non-multicast' region.

In the above scenario I create a tunnel between the two multicast routers as follows...

R2
interface Tunnel0
ip address 20.20.20.2 255.255.255.252
ip pim sparse-dense-mode
tunnel source Loopback0
tunnel destination 5.5.5.5
ip pim send-rp-announce Loopback0 scope 16
ip pim send-rp-discovery scope 16

R5
interface Tunnel0
ip address 20.20.20.1 255.255.255.252
ip pim sparse-dense-mode
tunnel source Loopback0
tunnel destination 2.2.2.2
end
ip pim send-rp-announce Loopback0 scope 16



The destination PC joins mcast group 225.0.0.1.

I then try pinging across the tunnel from the source PC.

Router_2#p 225.0.0.1 repeat 3

Type escape sequence to abort.
Sending 3, 100-byte ICMP Echos to 225.0.0.1, timeout is 2 seconds:
...

Failure!!!


If I enable multicast debugging on R5 the reason becomes clear:

debug ip mpacket
Router_5#
*Jan 9 07:03:40.263: IP(0): s=2.2.2.2 (Tunnel0) d=225.0.0.1 id=19, ttl=254, pro
t=1, len=100(100), RPF lookup failed for source or RP
*Jan 9 07:03:40.275: IP(0): s=2.2.2.2 (Tunnel0) d=225.0.0.1 id=19, ttl=253, pro
t=1, len=100(100), RPF lookup failed for source or RP

Whilst the multicast regions are now successfully joined, the multicast path differs from the unicast path. As indicated in the debug, the reverse path check is failing i.e. the path back to the multicast source does NOT match the unicast path to the source.

To resolve this issue I configure a static multicast route on R5 so that the RPF check for all multicast sources resolves to the tunnel:

Router_5(config)#ip mroute 0.0.0.0 0.0.0.0 tu0

I now ping again from R2 and success!!

Router_2#p 225.0.0.1 repeat 3

Type escape sequence to abort.
Sending 3, 100-byte ICMP Echos to 225.0.0.1, timeout is 2 seconds:

Reply to request 0 from 14.0.0.6, 484 ms
Reply to request 0 from 14.0.0.6, 484 ms
Reply to request 1 from 14.0.0.6, 180 ms
Reply to request 1 from 14.0.0.6, 180 ms
Reply to request 2 from 14.0.0.6, 296 ms
Reply to request 2 from 14.0.0.6, 308 ms

OSPF routing part IX - distance

Suppose R1 has two OSPF neighbors, R2 and R3, and the lab requires all routes to be preferred via R2. This can be achieved by marking all routes from OSPF neighbor R3 with an admin distance of 111 (worse than the default of 110).

If the OSPF router ID of R3 is 3.3.3.3, this can simply be achieved as follows...

ip access-list standard 1
permit any

router ospf 1
distance 111 3.3.3.3 0.0.0.0 1

Monday, January 5, 2009

Director Response Protocol

As far as I can tell this is one of those oddball subjects that just may rear its head on the CCIE lab.

In this post I just denote
i) what it is
ii) where to find it on the doc cd
iii) a basic config.

Director Response Protocol.
This is a Cisco proprietary protocol. The DRP server agent is used to communicate with the DistributedDirector platform. As a lab requirement, I guess configuration of the server agent is a possibility.

On the doc cd look in 12.2 IP Configuration Guide. Then under Configuring IP Services. There is a reasonable description on configuring the server agent there.

The only configuration required to enable DRP is the global configuration command 'ip drp server'. Over and above that, the DRP server agent can be configured to only allow certain Directors to communicate with it:


access-list 10 permit 185.28.8.1
access-list 10 permit 104.12.8.1

ip drp access-group 10



Additionally, authentication can be configured between the server agent and the Directors:

key chain RICH
key 1
key-string CISCO
!
ip drp authentication key-chain RICH



I must confess DRP still remains a bit of a 'black box' subject to me, but I figure the above will be enough to cover off the basics.

NTP Broadcast Setup

Most of the scenarios I have encountered with NTP involve setting up NTP relationships with the 'ntp server' or 'ntp peer' commands. In this post I am not going to detail that setup.

An interesting scenario I came across was a lab requirement to establish NTP WITHOUT using the 'ntp peer' or 'ntp server' commands.

To meet this requirement, use NTP in broadcast mode. The config is straightforward (as always, if you know the answer :-})

R1
config-if#ntp broadcast client

R2
config-if#ntp broadcast

That's it!

As with standard unicast NTP, the setup can be verified via 'show ntp associations'.

Sunday, January 4, 2009

Frame Relay Traffic Shaping - Part III

A short post just to clarify a couple of frame relay traffic shaping features...

The minimum time interval (Tc) on a Frame Relay interface is 10 ms, or 1/100 of a second. Hence to set the interval on the interface to the minimum, simply set the Bc to the CIR divided by 100.

As a further note... if there is a requirement to ensure that a single packet cannot take more than one interval to be transmitted, divide the Bc by 8 to get the number of bytes and set the fragment size to this. For example:

map-class frame-relay DLCI_304
frame-relay cir 256000
frame-relay bc 2560
frame-relay fragment 320
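The arithmetic behind those values can be checked in a few lines of Python (variable names are my own):

```python
cir = 256_000       # CIR in bits per second
bc = cir // 100     # minimum Tc is 10 ms, so Bc = CIR * 0.01 = CIR / 100 bits
fragment = bc // 8  # bytes that fit in one Tc interval -> the fragment size

print(bc, fragment)  # 2560 320
```

This matches the map-class above: bc 2560 and fragment 320.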

Saturday, January 3, 2009

Multicast Routing - Part III Controlling Access Part 2

In my fist post on controlling multicast access i described the 'ip igmp access-group' command.

As noted, this can be useful for controlling access to specified multicast address spaces. In a further lab I encountered a multicast access scenario that required multicast traffic to be restricted in both directions i.e. not only preventing multicast feeds being accepted from an interface, but also preventing multicast feeds being sent out of an interface.

In such a situation, where multicast access control is required in both directions, the 'multicast boundary' functionality can be used. This provides more stringent access control.

Access can be controlled in a granular fashion by utilising the access-list parameter.

For example... use this to prevent access to the administratively scoped address space:

Router(config)# access-list 1 deny 239.0.0.0 0.255.255.255
Router(config)# access-list 1 permit 224.0.0.0 15.255.255.255
Router(config-if)# ip multicast boundary 1



Whilst researching multicast boundaries I then realised there was a third option to control multicast access :-)...

The 'ip igmp access-group' command works perfectly for L3 interfaces. However if required to restrict access on a L2 interface this command will NOT cut the mustard.

This is where 'igmp profiles' can be used on an L2 access port.

int f0/1
switchport mode access
switchport access vlan 7
ip igmp filter 1
!
ip igmp profile 1
deny
range 239.0.0.0 239.255.255.255

Friday, January 2, 2009

BGP snippets

BGP Default Router

R1 -------- R2

In this scenario R1 must advertise a BGP default route to R2. NO other BGP routes are allowed.

R1

router bgp 1
neighbor {a.b.c.d} default-originate
neighbor {a.b.c.d} prefix-list DEFAULT out

ip prefix-list DEFAULT permit 0.0.0.0/0






BGP - Non-transit AS

How do you ensure a BGP AS is not used as a transit AS? One answer is to ensure it only advertises routes originated in its own AS.

ip as-path access-list 1 permit ^$
router bgp 1
neighbor {a.b.c.d} filter-list 1 out





Redistribution from BGP

router ospf 1
redistribute bgp 1


With the statement above, all BGP EXTERNAL routes will be propagated into OSPF. If there is a requirement for BGP internal routes also to be propagated into OSPF, then the following command is required at the BGP router prompt.

router bgp 1
bgp redistribute-internal





Redistribution of a single BGP AS

If redistribution from BGP is required but only from a single BGP AS, this can be achieved via a route-map in conjunction with an as-path access-list.


router ospf 1
redistribute bgp 1 route-map BGP2OSPF
!
ip as-path access-list 1 permit _54_
!
route-map BGP2OSPF permit 10
match as-path 1




That's it :-)

OSPF Routing Problem


Consider the following scenario.

Router 1 is sending a lot of traffic to the 164.1.5.0 subnet on Router 2. The preferred route (by default) is via R3, as this is an intra-area route. The route between R1 and R2 is an inter-area route so will NOT be considered for the traffic flow - regardless of the metric cost value.

The question is how to get R1 to prefer the direct, faster route to R2! As mentioned, altering the route cost will have NO bearing, as the intra-area vs inter-area distinction is considered before any cost comparison. Intra-area routes always win regardless of cost.

The way around this problem is to build a virtual link between R1 and R2. Once this is up, the route to 164.1.5.0 will become preferable.

Router 4
router ospf 100
area 1 virtual-link 150.1.5.5

Router 5
router ospf 100
area 1 virtual-link 150.1.4.4

Thursday, January 1, 2009

EIGRP metric calculation

If the standard bandwidth and delay parameters are used within EIGRP the formula for calculating the metric is as follows...

(10,000,000 / bandwidth(kbit) + delay(usec) / 10) * 256

As an example examine the following entry from the eigrp topology table.


Router_1#s ip eigrp top 164.1.26.0 255.255.255.0
IP-EIGRP (AS 100): Topology entry for 164.1.26.0/24
State is Passive, Query origin flag is 1, 1 Successor(s), FD is 2684416
Routing Descriptor Blocks:
164.1.13.3 (Serial2/1), from 164.1.13.3, Send flag is 0x0
Composite metric is (3026432/2514432), Route is Internal
Vector metric:
Minimum bandwidth is 1280 Kbit
Total delay is 40100 microseconds
Reliability is 255/255
Load is 1/255
Minimum MTU is 1500
Hop count is 2

Using the formula....

(10,000,000/1280 + 40100/10) * 256 = (7812.5 + 4010) * 256
= 11822 * 256
= 3026432!!

N.B. Before moving the figure outside of the brackets, truncate (do not round) it to 0 decimal places.
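The calculation above can be sketched as a quick Python helper (the function name is my own; this is the classic metric with default K values, where only bandwidth and delay count, truncating as described above):

```python
def eigrp_metric(bw_kbit: int, delay_usec: int) -> int:
    """EIGRP classic metric with default K values (K1 = K3 = 1):
    truncate the bracketed sum to an integer before multiplying by 256."""
    return int(10_000_000 / bw_kbit + delay_usec / 10) * 256

# the topology entry above: minimum bandwidth 1280 Kbit, total delay 40100 usec
print(eigrp_metric(1280, 40100))  # 3026432
```

This reproduces the composite metric of 3026432 seen in the topology table.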

Understanding this metric calculation may seem a bit like overkill - it did to me! However it could feasibly come into play if a question asked for EIGRP load balancing to be performed according to a certain ratio. In this instance it may be necessary to adjust the EIGRP metric to achieve the desired balancing.