Wednesday, October 27, 2010

Amazon EC2 IPsec tunnel to Cisco IOS router - Update!

This is an update to the Amazon EC2 IPsec tunnel to Cisco IOS router post I made several weeks ago.  Amazon has changed the offering a bit: some of the commands I previously used no longer work, and the distribution is no longer available.

Launch an Instance
  • Click the "Launch Instance" button
  • Choose "Basic 32-bit Amazon Linux AMI 1.0"
  • Set "Micro" instance type
  • Download a new SSH key (or use an existing one)
  • Configure a security group (this is the firewall service) like this:
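At a minimum, the group needs to permit inbound SSH plus the two IPsec UDP ports; how tightly you restrict sources is up to you, but something like this should do (the IPsec rules only need to admit the home router's public IP):
tcp 22 (SSH, for managing the instance)
udp 500 (ISAKMP/IKE, from the home router's public IP)
udp 4500 (IPsec NAT-T, from the home router's public IP - we set forceencaps below, so ESP rides inside UDP/4500)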


Configure OpenSwan on the EC2 Instance
  • Connect to the instance using the directions found here.
  • Install IPsec packages:
sudo yum -y update
sudo yum -y install openswan openswan-doc ipsec-tools
  • Set some variables that will be useful later
# The private IP address assigned to your EC2 instance.
EC2PRIVATE=`/sbin/ifconfig eth0|grep Bcast|cut -d: -f 2|cut -d\  -f 1`

# The public IP address assigned to your EC2 instance.
EC2PUBLIC=`curl -s http://169.254.169.254/latest/meta-data/public-ipv4`

# The public IP address of the home router
HOMEPUBLIC=1.2.3.4

# The private address space in use at home
HOMEPRIVATE=192.168.0.0/16

# Generate a secret key
PSK=`< /dev/urandom tr -dc a-zA-Z0-9_ | head -c30`
  •  Configure the 'home' openswan connection.  The leading whitespace is important here.
echo "conn home" > /tmp/home.conf
echo "  left=%defaultroute" >> /tmp/home.conf
echo "  leftsubnet=$EC2PRIVATE/32" >> /tmp/home.conf
echo "  leftid=$EC2PUBLIC" >> /tmp/home.conf
echo "  right=$HOMEPUBLIC" >> /tmp/home.conf
echo "  rightid=$HOMEPUBLIC" >> /tmp/home.conf
echo "  rightsubnet=$HOMEPRIVATE" >> /tmp/home.conf
echo "  authby=secret" >> /tmp/home.conf
echo "  ike=aes128-sha1-modp1024" >> /tmp/home.conf
echo "  esp=aes128-sha1" >> /tmp/home.conf
echo "  pfs=yes" >> /tmp/home.conf
echo "  forceencaps=yes" >> /tmp/home.conf
echo "  auto=start" >> /tmp/home.conf
  •  Configure the 'home' preshared key:
echo "$EC2PUBLIC $HOMEPUBLIC: PSK \"$PSK\"" > /tmp/home.secrets
  • Enable the IPsec service:
sudo sed 's!^#\(include /etc/ipsec.d/\*.conf\)!\1!' /etc/ipsec.conf > /tmp/ipsec.conf
sudo chmod 600 /tmp/home.* /tmp/ipsec.conf
sudo chown root:root /tmp/home.* /tmp/ipsec.conf
sudo mv /tmp/home.* /etc/ipsec.d
sudo mv /tmp/ipsec.conf /etc
sudo chkconfig ipsec on
sudo service ipsec start
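Once the IOS end (below) is configured, something like this should show the 'home' connection established, assuming the stock openswan tooling:
sudo service ipsec status
sudo ipsec whack --status | grep home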

Configure the IOS end of the tunnel
We'll need one more variable to build the IOS configuration:
HOMEEXTIF=FastEthernet0/0
Paste the following text into the EC2 command line.  It should spit out IPsec configuration for your IOS device:
cat > /tmp/IOS.cfg << EOF
crypto isakmp policy 10
 encr aes
 authentication pre-share
 group 2
 lifetime 86400
crypto isakmp key $PSK address $EC2PUBLIC no-xauth
crypto ipsec security-association lifetime seconds 1800
ip access-list extended AMAZON-CRYPTO-ACL
 permit ip any host $EC2PRIVATE
crypto ipsec transform-set AMAZON-TRANSFORM-SET esp-aes esp-sha-hmac
crypto map INTERNET-CRYPTO 10 ipsec-isakmp
 description Amazon EC2 instance
 set peer $EC2PUBLIC
 set transform-set AMAZON-TRANSFORM-SET
 set pfs group2
 match address AMAZON-CRYPTO-ACL
interface $HOMEEXTIF
 crypto map INTERNET-CRYPTO
EOF
clear
cat /tmp/IOS.cfg
That's it!  Now I can ping the private ($EC2PRIVATE) address of the EC2 instance from one of my internal machines at home.  This works in my environment because the 10.x.x.x address assigned by Amazon happens to fall within the default route in use by my home gateway.  You may need to add a static route if you're pushing the 10/8 block elsewhere in your environment.
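If you do need that static route, it's a one-liner on the IOS side; the next-hop address here is purely hypothetical:
! point Amazon's 10/8 at the Internet-facing next hop
ip route 10.0.0.0 255.0.0.0 203.0.113.1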

Being able to talk securely to the private address is preferable to using the public one because of applications (SIP, FTP) that embed IP address information into their application payload.  These don't NAT well, and now they don't have to.

If you want to be able to talk securely to the public address of an EC2 instance, that can probably be done with a dummy interface on the EC2 end.  I'll work on that later.

Tuesday, October 19, 2010

Enterprise IPv4 Multicast - MSDP Peering Diagram

This is part 3 in the IPv4 multicast series.  Part 1 covered scoping and address assignment.  Part 2 covered scoping and RP placement.

The traffic scopes we've defined are: building, campus, region and enterprise.  This article explains the strategy we're going to use when bolting these various scopes together with MSDP.

Our hypothetical enterprise has nine sites, arranged according to the following diagram.
There's a lot going on in this diagram.  It's all meaningful, and I'm going to talk through a good portion of the minutiae, but I want to begin with the following:  The lines on the diagram represent MSDP peering.  MSDP is a multihop protocol, so the lines have nothing to do with layer 1/2/3 topology, nor with PIM neighbor relationships.  This is just the map of how routers around the enterprise share multicast metadata among themselves.



The small colored circles represent individual routers which act as RPs. Their color indicates the scope of data for which the RP holds top-level responsibility.

Each RP is in a building (blue oval). Buildings are in a campus (green oval). Campuses are in a region (red oval). All regions are within the enterprise (no boundary shown).

The lines connecting between RPs indicate the scope of MSDP source-active (SA) advertisements flowing between those routers. The colored lines never cross their same-colored scope boundaries:
  • Blue lines represent building-scope SAs, so they never cross out of a building (blue oval)
  • Green lines represent campus-scope SAs, so they never cross out of a campus (green oval)
  • Red lines represent region-scope SAs, so they never cross out of a region (red oval)
Our enterprise has two data centers in the Chicago campus: Building 1 and Building 2. They're responsible for propagating enterprise scope data everywhere, as well as for the various smaller scopes they happen to fall in. The RPs in Building 3 and Building 4 share only building scope data among themselves. They exchange SA messages for all larger scopes only with their upstream RPs in the data centers.

The Tokyo office is the only site in the Asia-Pac region, so the Tokyo RPs share building, campus and regional multicast data only among themselves, as explained in Part 2. Enterprise scope data is shared only with the RPs in Chicago.

Finally, I want to call your attention to the pair of RPs in Paris. They share building and campus data among themselves, and share regional data with their upstream RPs in London and Madrid. But with whom do they talk about enterprise flows? Rather than going straight to the source (Chicago 1 and 2), they go to their upstream in-region peers. That's the model for all of this peering: Every router may peer only with his direct neighbor, or with a router one layer above or below in the hierarchy. This isn't the only way to do it, but it's the model I've chosen for this hypothetical deployment.
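In IOS terms, each line on the diagram boils down to an MSDP peering plus SA filters that enforce the scope boundary. Here's a sketch of a Paris RP's peering toward London, with hypothetical addresses and the group ranges established in Part 1:

ip msdp peer 10.254.0.1 connect-source Loopback0
ip msdp sa-filter out 10.254.0.1 list MSDP-TO-LONDON
!
! advertise only region and enterprise scope sources upstream
ip access-list extended MSDP-TO-LONDON
 permit ip any 239.194.0.0 0.0.255.255
 permit ip any 239.195.0.0 0.0.255.255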

Monday, October 18, 2010

VMware runs in promiscuous mode?

I've recently discovered that VMware runs its physical NICs in promiscuous mode.  At least, I think I have made that discovery.

There's a lot of chat out there about VMware and promiscuity, but it's usually devoted to the virtual host side of the vswitch.  On that side of the vswitch, things are usually pretty locked-down:


  • No dynamic learning of MAC addresses (don't need to learn when you know)
  • No forging of MAC addresses allowed (for the same reason)
On a physical switch, we might accomplish the same thing with:
interface type mod/port
  switchport port-security mac-address mac-address
  switchport port-security maximum 1
This leads to frustration for people trying to deploy sniffers, intrusion detection, layered virtualization and the like within VMware, and it's not what I'm interested in talking about here.


I'm interested in something much more rudimentary, something which has always been with us, but which has begun to vanish.


History Lesson 1
On a truly broadcast medium (like an Ethernet hub), all frames are always delivered to all stations.  Passing a frame from the NIC up to the driver requires an interrupt, which is just as disruptive as it sounds.  Fortunately, NICs know their hardware addresses, and will only pass certain frames up the stack:
  • Frames destined for the burned-in hardware address
  • Frames destined for the all-stations hardware address
You're probably aware that it's possible to change the MAC address on a NIC.  It's possible because the burned-in address just lives in a register on the NIC.  The contents of that register can be changed, and in all likelihood were loaded there by the driver at initialization time anyway.  The driver can load a new address into this register.


In fact, most NICs have more than one register for holding unicast addresses which must be passed up the stack, allowing you to load several MAC addresses simultaneously.


History Lesson 2
Multicast frames have their own set of MAC addresses.  If you switch on a multicast subscriber application, a series of steps happen which culminate in the NIC unfiltering your desired multicast frames and passing them up the stack.  This use case is much more common than loading multiple unicast addresses, and hardware designers accommodated it long before they allowed for multiple reconfigurable unicast addresses.


This mechanism works in much the same way that an EtherChannel balances load among its links:  Deterministic address hashing.  But it uses a lot more buckets, and works something like this:
  1. Driver informs the NIC about the multicast MAC that an upstream process is interested in receiving.
  2. The NIC hashes the address to figure out which bucket is associated with that MAC.
  3. The NIC disables filtering for that bucket.
  4. All frames that hash into the selected bucket (not just the ones we want) get passed up the stack.
  5. Software (the IP stack) filters out packets which made it through the hardware filtering, but which turn out to be unwanted.
Modern implementations
Surprisingly, nothing here has changed.  I reviewed data sheets and driver development guides for several NIC chipsets that are currently being shipped by major label server vendors.  Lots of good "server class" NICs include 16 registers for unicast addresses and a 4096-bucket (65536:1 overlap) multicast hashing scheme.

And VMware fits in how?
Suppose you're running 20 virtual machines on an ESX server.  Each of those VMs has unique IP and MAC addresses associated with it.  But the physical NIC installed in that server can only do filtering for 16 addresses!

The only thing VMware can do in this case is to put the NIC (VMware calls them pNIC) into promiscuous mode, then do the filtering in software, where hardware limitations (registers chiseled into silicon) aren't a problem.

It's good news for the VMware servers that they're (probably) not plugged into a hub, because the forwarding table in the physical switch upstream will protect them from traffic that they don't want.

Promiscuity in NICs is widely regarded as suspicious, performance impacting, and a problem.  ...and 101-level classes in most OS and network disciplines cover the fact that NICs know their address and filter out all others.  The idea that this notion is going away came as a bit of a surprise to me, and makes a strong argument for:
  • Stable L2 topologies (STP TCN messages will "unprotect" VMware on the switches)
  • IGMP snooping (without it, switches won't protect VMware from multicast frames)
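On a Catalyst switch, those two bullets amount to leaving IGMP snooping alone and keeping edge ports from generating TCNs. A sketch (the interface range is hypothetical):
! IGMP snooping is on by default; make sure nobody has turned it off
ip igmp snooping
! portfast ports don't generate topology change notifications when they flap
interface range GigabitEthernet1/0/1 - 48
 spanning-tree portfast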
How close to the limit?
Okay, so 16 addresses per NIC isn't quite so dire.  A big VM server running dozens of guests probably has at least a handful of NICs, so the ratio of guests (plus ESX overhead) to pNICs might not be higher than 16 in most cases.

VMware could handle this by using unicast slots one-by-one until they're all full, and only then switching to promiscuous mode.

I've only found one document that addresses this question directly.  It says:
Each physical NIC is put in promiscuous mode
  • Need to receive frames destined to all VMs
  • Not a issue at all on a modern Ethernet network

Friday, October 15, 2010

You CAN connect a switch to a Nexus Fabric Extender

I didn't mention it in the data center design writeup, but it is possible to connect a switch to a Nexus fabric extender.

Search queries that have led readers here, conversations with customers, and various blog posts, have made it clear that people are running full-speed into an interesting feature of the Cisco Nexus Fabric Extenders:  FEX ports always run bpduguard.  You can't turn it off.

For the uninitiated, Bridge Protocol Data Units (BPDUs) are the packets used by bridges (switches) to build a loop-free topology.  BPDU Guard is a Cisco interface feature that immediately disables an interface if a BPDU arrives there.  It's appropriate on interfaces where you never expect to plug in a switch.  If you've ever plugged a switch into your cubicle jack at work, you may have experienced this feature firsthand.

BPDU Guard tends to go hand-in-hand with portfast, a feature that makes switch interfaces ready for use as soon as they link up, instead of forcing these fresh links to jump through the Spanning Tree Protocol (STP) loop prevention hoops.

If you're going to use portfast, you must use bpduguard.

Nexus Fabric Extenders (2148, 2248, 2232) run bpduguard on their interfaces all the time.  It can't be disabled.

Who cares?
BPDU Guard means that you can't hang a switch from a FEX.  If you've adopted Cisco's vision of the modern data center, this can become a problem because the 2148 fabric extenders can only do gigabit.  No 10/100 capability here.  It turns out there's a lot of stuff in a modern data center that still can't do gigabit, but lives out in the general population that's intended to be served by the Fabric Extenders:
  • HP server iLO interfaces
  • Terminal server appliances
  • Power strips
  • Environmental monitors
  • KVM equipment
The best option for these small, far-flung clients might be the installation of a small, 10/100 capable switch nearby.  I selected the WS-C2960-24TC-S for this purpose in a recent build because it's super cheap (list price is $725) and because it has dual-purpose uplink ports.  The natural inclination is to try to uplink directly into the nearby 2148T fabric extender.  But, as soon as you do, the 2960 sends a BPDU, and the 2148 shuts down the interface.

Now what?
You could disable spanning-tree on the 2960, but that's asking for trouble, and makes redundancy impossible.  You could disable spanning tree on just the uplink interface because it's safer, but still doesn't accomplish redundancy.  You could link the small switch directly to the distribution layer, but that's a lot of buck for relatively little bang.

Another possible answer is flexlinks, a long-forgotten uplink redundancy mechanism that probably predates stable STP operation.  I had assumed so, but it doesn't look like flex links is quite that old.  I don't know why this feature was introduced, but it's useful here.

Enabling flexlinks is a one-liner.  In the interface configuration context for GigabitEthernet0/1, do:
switchport backup interface GigabitEthernet 0/2
Now, Gig0/2 backs up Gig0/1.  Spanning tree protocol is disabled on these two ports, so BPDU Guard won't cause a problem, and there's no risk of these ports creating a topology loop because only one of them will be in forwarding mode.
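Fleshed out, the 2960's uplink pair might look something like this (interface numbers are hypothetical), and 'show interfaces switchport backup' will tell you which link is forwarding:
interface GigabitEthernet0/1
 description uplink to FEX A
 switchport mode trunk
 switchport backup interface GigabitEthernet0/2
!
interface GigabitEthernet0/2
 description uplink to FEX B
 switchport mode trunk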

Plug both interfaces into a fabric extender and you're in business.

The interfaces can be trunks or they can be access ports.  If they're trunks, you can even balance the vlans across the uplinks if you're so inclined (personally, I don't care for it, but this is a very common strategy).

A topology change will flood bogus packets which appear to be sourced from your client systems so that upstream switches update their forwarding tables.  ...A process that can be made even quicker with the 'switchport backup mmu' mechanism - but I doubt the Nexus supports that anyway.

Going forward:
100Mb/s is the only real requirement.  I can only think of two devices limited to 10Mb/s in any of the networks I work on, and both of those are in my basement, which has not yet been migrated to the Nexus platform.  Fortunately, the Nexus 2248 Fabric Extenders can do 100/1000, so you'll be much less inclined to try to hang a switch off of them.  Plus they're cheaper, and have a few other benefits over the 2148T.  As far as I'm aware, there's not a single technical reason to prefer a 2148 over the 2248, so stop buying 2148s.

Wednesday, October 13, 2010

Load balancing IP Multicast flows

Unicast traffic in Cisco IOS devices is usually forwarded by the CEF mechanism, which by default will load balance traffic across multiple equal-cost paths.

Multicast traffic is not so lucky.  Multicast flows are attracted to receivers by routers in the path.  If a subscription needs to be sent upstream toward a multicast source, and multiple equal-cost paths exist, PIM will send the subscription to the upstream router with the highest IP address.

Load Balancing Multicast Flows
In the scenario below, R1 has two equal-cost paths to the 192.168.10.0/24 network, but R1 will send all PIM joins for all 8 flows in the direction of R3, because of R3's higher IP address.  The result will be that all 8 multicast flows traverse the R3-R1 link, and the R2-R1 link sits idle.


The fix to balance multicast traffic across both links is to enable one of the ECMP multicast multipath mechanisms.  The simple ip multicast multipath directive in global configuration mode will enable load sharing using the S-hash algorithm.  Much like load sharing in EtherChannels, this is a deterministic hashing mechanism, one that considers the source IP (for (S,G) joins) or the RP IP (for (*,G) joins) when performing RPF calculations.  Unlike the EtherChannel case (where determinism maintains the ordered delivery LAN invariant), determinism is required here because the RPF calculation performed at PIM-join time must select the same upstream interface as the one used for the RPF check on incoming multicast data packets in the future.

In our case, with eight groups evenly distributed across four servers, we're done.  But what if there's only one server talking on many groups?  Load balancing with S-hash would force all subscriptions for that one server onto one upstream link even though multiple links are available.  The next step in load balancing is ip multicast multipath s-g-hash basic.  It load balances RPF decisions by taking both the Source IP and the multicast Group address into account, and will satisfactorily balance the few-producers-many-groups scenario.

Polarization
Consider a multi-tier network like the one depicted below.  R1, R2 and R3 each have two choices (interfaces) when performing RPF lookups for sources on the 192.168.10.0/24 network.  For simplicity, I've labeled them "0" and "1".


It doesn't matter whether we use the s-hash or the s-g-hash algorithm in this example.  Assume we've selected one, and applied it to all seven routers.  R1 balances the load beautifully: half of the flows are subscribed via upstream link "0" to R2, and the other half are subscribed via upstream link "1" to R3.

What will R2 and R3 do?  Remember that the hashing scheme is deterministic.  This means that R2 will request all multicast flows from R4.  Determinism:  Every flow going through R2 is a "link 0 flow", so R2 will always choose R4, because R2's RPF lookup is using the same criteria as R1's.  Likewise, R3 will send all join requests to R7.  The R5-R2 and R6-R3 links will sit idle.

Polarization Fix
To balance the load equally we need to use different path selection criteria at each routing tier, and ECMP has a mechanism to do this.  We can add the next-hop router address to the hashing mix to re-balance the subscription load at each tier.  This works because each router in the topology has a unique perspective on the next-hop address.  This is implemented with ip multicast multipath s-g-hash next-hop-based in global configuration mode.
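For reference, the whole progression looks like this; each command replaces the previous one, and all of them live in global configuration:
! S-hash: balance on source (or RP) address only
ip multicast multipath
! S,G-hash: mix the group address in, too
ip multicast multipath s-g-hash basic
! hash on source, group and next-hop to defeat polarization
ip multicast multipath s-g-hash next-hop-based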

Nexus 7000 : startup-config

Nexus 7000 has a surprising feature when it comes to installing startup configurations:  You can install any startup-config you want, as long as it's the running-config.


The Nexus 7000 platform stores the startup configuration in some sort of binary or compiled state, not a flat ASCII file like you'd find on an IOS device.  I think that when you 'copy running-config startup-config' on the Nexus, your running config gets nvgened, compiled and written, rather than just nvgened and written as it would be on IOS.


Frustratingly, you can't copy to the startup-config from any source other than running-config:




N7K# copy bootflash:my-config startup-config
This command is deprecated. To obtain the same results, please use
the sequence 'write erase' + 'reload' + 'copy  running-config' +
'copy running-config startup-config'.


There is some documentation which indicates otherwise, but it's been updated.  The only supported way to load a fresh configuration on a Nexus 7000 is the one recommended in the error message above.
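Spelled out, the recommended sequence looks like this, with bootflash:my-config standing in for wherever your intended configuration lives:

N7K# write erase
N7K# reload
! ...wait for the box to boot, then, from the console:
N7K# copy bootflash:my-config running-config
N7K# copy running-config startup-config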


Thank goodness for terminal servers.

Tuesday, October 12, 2010

Mapping multicast groups to MAC addresses

When an Ethernet station builds a frame for an IP packet, it needs to know what destination address to put on that frame.

For a unicast IP packet, the sending station uses the destination node's unique MAC address, which it learns through the ARP mechanism.

For a broadcast IP packet, the broadcast MAC address (ff:ff:ff:ff:ff:ff) is used.

But what about multicast IP packets?  A unicast MAC address isn't appropriate, because there might be several stations on the segment which are interested in receiving the packet.  And a broadcast frame isn't appropriate either, because we'd be bothering systems that don't want to process the packet.

Sensibly, multicast IP packets get encapsulated into multicast Ethernet frames, using a block of addresses from 01:00:5e:00:00:00 through 01:00:5e:7f:ff:ff.  RFC 1112 has all the details.

Most network folks have seen this process, and then forgotten it.  The times it's come up at work, I've found that people think it's much uglier than it really is.  It's a little ugly, but worth learning, and luckily, there's an interesting story behind it.

IP multicast group numbers look like IP addresses.  They fit in the "Class D" space from 224.0.0.0 through 239.255.255.255.  There are 2^28 unique multicast groups in that range.  Unfortunately, there are only 2^23 unique multicast MAC addresses, so there's some overlap which needs to be taken into consideration when handing out multicast groups to applications.

I'm going to cover two historical points here.  They're both interesting tidbits that make the multicast mapping rules make sense.

Ethernet frames are structured to make things easy on stations and bridges.
An Ethernet frame doesn't really begin with the destination MAC address.  It starts with the preamble, which can be thought of as a way to "wake up" stations on a shared media segment, and get them ready to receive an incoming frame.  I think of it like a rumble strip you'd encounter before a highway toll plaza, because it serves a similar function.  And because it looks like one.  The preamble, along with its partner the start-of-frame delimiter (SFD), comprises a 64-bit pattern of alternating ones and zeros ending with an errant one: 101010....101011

That pattern-breaking '11' at the end of the SFD indicates that the destination address will begin in the next bit.  If you're a bridge, you're going to use the next 6 bytes to make a forwarding decision.  If you're a station, you'll use these 6 bytes to decide whether to process the frame or ignore it.  The Ethernet designers did this so that the receiving NIC can quickly determine whether the frame is worthy of processing.

But that's not all.  The very first bit in those 6 bytes, the bit that comes immediately after the '11' in the SFD, is special:  Bits within each byte on Ethernet are transmitted in little-endian order, so the first bit to arrive is the least significant bit in the first byte of the address, otherwise known as the individual/group bit.  If it is a '1', a bridge knows immediately (only one bit into the frame!) that this frame will need to be flooded out all ports.  Nifty, and makes very speedy cut-through bridging decisions possible.

If you look at the various hardware addresses in one of your device's mac-address or arp tables, you shouldn't find any stations where the first byte is an odd number because stations must use unicast addresses.  An odd numbered first-byte would mean that the individual/group bit is set.  The broadcast (all-ones) Ethernet address, appropriately enough, has the bit set.  Along with all of the other bits.


Somebody else's tight budget can become your forwarding problem.
The story goes that when Steve Deering was putting RFC 1112 together, he wanted to purchase 16 Ethernet OUIs.  Each OUI allows for 2^24 unique addresses, so 16 of them would be required to cover the whole 28-bit IP multicast space.  But the budget wouldn't cover 16 OUIs.  The budget wouldn't even cover one OUI.  Instead, he was able to procure only half of an OUI.  So, that's why we map 28 bits of multicast group into 23 bits.


Armed with these two bits of information, we know more than two thirds of the resulting multicast frame.  Here's how all 6 bytes of the multicast frame are derived:
  1. Must be an odd number (multicast/broadcast bit is set), happens to be "01".  That should be easy to remember now.
  2. Always "00".  Memorize it.
  3. Always "5E".  Memorize it.
  4. Mapped from the multicast group, keeping in mind that Dr. Deering only procured 7 of the 8 bits in this byte.
  5. Mapped directly from the multicast group.
  6. Mapped directly from the multicast group.
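Here's a worked example, using group 239.195.1.1:
239.195.1.1 in binary = 1110 1111 . 1100 0011 . 0000 0001 . 0000 0001
keep the low 23 bits  =              100 0011 . 0000 0001 . 0000 0001
prepend 01:00:5e      = 01:00:5e:43:01:01
The five discarded bits are why 32 different groups (239.67.1.1 and 224.195.1.1 among them) land on each multicast MAC address.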



I find that knowing the origin story behind these things makes them much easier to remember than:
The least significant bit of the most significant byte is the multicast/broadcast flag.
The budgetary reasoning behind this technical decision, and the long term implications it has for filtering multicast at L2 is a real bummer.

Sunday, October 10, 2010

An EtherChannel using dissimilar hardware

I had to build a 2x 10Gb/s aggregate link with one member on a 6704 CFC card and the other member on a 6716 DFC3 card. Not ideal, but it was what I had to work with.
I knew going in that I'd need to configure no mls qos channel-consistency on the port channel interface because these cards have different hardware queueing capabilities. Somehow I forgot this detail immediately prior to the implementation.
The etherchannel interface was up with just the 6704, and I was adding the new link to the mix. My plan was to do show run int tengig x/y on the existing member link, and then copy/paste that configuration onto the new member link.
Everything went fine until I got to switchport trunk encapsulation dot1q, which earned me a %Unrecognized command in reply. I've noticed that on some platforms you can't tell a tagging interface what type of encapsulation to use because you don't have multiple encapsulation options anymore: ISL is obsolete, and support for it is drying up. ...What I hadn't noticed until right then is that support for ISL varies on a per-module basis within a platform: The 6704 could do ISL and required the encapsulation type directive, but the 6716 can't do ISL, and wouldn't accept the encapsulation command. Huh. Encapsulation/decapsulation is a hardware feature. It makes perfect sense, but I'd never considered it before.
The next thing I noticed is that the 6716 wasn't joining the aggregation. I puzzled over this for a while, comparing the running configuration of the two intended link members. With the exception of the 'encapsulation' directive, they were identical, but the 6716 wouldn't join the aggregation.
My problem, of course, was the QoS consistency check. EtherChannel doesn't care whether the configuration on each interface looks the same. What matters is whether the interfaces run the same, and with different queueing hardware on these two cards, they didn't.
Once I disabled the QoS consistency check, the 6716 link joined the 6704 link, and everything was fine. Of course, I didn't remember to disable the consistency check until immediately after I'd downed the 6716 interface, ripped it out of the channel-group, set the STP port path cost ridiculously high, and brought up the second link as an STP-blocked alternate path.
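For the record, the fix that belonged in my execution plan is a one-liner under the port-channel (interface number hypothetical):
interface Port-channel1
 no mls qos channel-consistency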
No change is so simple that it doesn't deserve a detailed execution plan.

Enterprise IPv4 Multicast - Rendezvous Points

Having established a scheme for assigning applications to multicast groups, we need to figure out what to do about RPs.

In a vanilla PIM deployment, every router knows the one router that serves as RP for any given multicast group.  You can have a single active RP (serving all groups), or many RPs, each one serving a different range of groups.

PIM routers can learn the RP address for a given group through one of several mechanisms:

Method: static
  Syntax: ip pim rp-address rp-address
  Pros:   Simple
  Cons:   Not robust

Method: Auto-RP
  Syntax: ip pim autorp listener
          ip pim send-rp-announce scope ttl-value
          ip pim send-rp-discovery scope ttl-value
  Pros:   Automagic
  Cons:   Cisco proprietary.  Complicated when running multiple overlapping scopes.

Method: BSR
  Syntax: ip pim bsr-candidate interface-type interface-number
          ip pim rp-candidate interface-type interface-number
  Pros:   Like Auto-RP, but standards based
  Cons:   Not widely used.  Complicated when running multiple overlapping scopes.

Method: Anycast PIM
  Syntax: N/A
  Pros:   Standards based, scalable, awesome.
  Cons:   Cisco only supports it on NX-OS, not IOS.

Method: Anycast RP + MSDP
  Syntax: ip pim rp-address rp-address, plus a mess of MSDP configuration
  Pros:   Scalable, awesome, interoperable.
  Cons:   ?

Anycast RP + MSDP
Anycast RP is far simpler than the election-based mechanisms, and lets us do lots of nifty scoping tricks fairly simply.  It also saves us from the pain of running multiple different RPs for different purposes. If you're not familiar with anycast, it's a simple concept: Run the same service on the same IP address in multiple different places in a network, and advertise those IPs into your IGP. Routing protocols will deliver your packets to the closest instance of that service (IP address). It works great for connectionless services (like PIM or DNS), where it doesn't matter if every packet you send hits exactly the same server.

In the case of anycast RP, we just spin up a loopback interface on every participating router, using the same IP address on each of them.  Be careful if the anycast address would be the numerically highest address among loopback interfaces: router IDs tend to get chosen on that basis, and a duplicated address makes a lousy router ID.  Then manually configure the anycast IP as the RP on all leaf routers in the network, just like you would for static RP configuration.
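A minimal sketch of the RP side, using a hypothetical anycast address:

! on each participating RP
interface Loopback1
 description anycast RP address - identical on every RP
 ip address 10.254.255.1 255.255.255.255
 ip pim sparse-mode
!
! on every router in the network
ip pim rp-address 10.254.255.1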

Leaf routers will now send PIM traffic to the closest RP.  But there's a missing bit here:



The missing piece is synchronization between RPs, and that's where Multicast Source Discovery Protocol (MSDP) fits in.  MSDP was designed to share information about active multicast sources between the (presumably single) RPs in different administrative domains.  You might use MSDP peering with your ISP to learn about active multicast flows out in the Internet, or with a business partner in order to attract flows from their network, because once your RP knows about an active source, it can send subscription requests (PIM joins) in the direction of the active source.

A clever use of MSDP is peering between your own Anycast RPs, so that each of the many simultaneously active RPs will know about all active flows in your network:
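The peering itself is just a couple of lines on each RP (addresses hypothetical). The originator-id detail matters with anycast: SA messages must carry each RP's unique address, not the shared anycast one:

! on RP "A", peering with RP "B"; Loopback0 holds this router's unique address
ip msdp peer 10.254.1.2 connect-source Loopback0
ip msdp originator-id Loopback0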



Anycast RP + MSDP is a great solution.  Anycast PIM (available on NX-OS) lacks rich filtering capability, which is key to the enterprise scoping scheme.  You can do interesting combinations of anycast RP with Auto-RP, but it gets complicated quickly, and I don't see a compelling use case for it.

Where do RPs belong?
So, having established that we're going to run lots of RPs, where do they belong exactly?  Lots of people spend lots of time thinking about exactly where RPs should go relative to sources and receivers.  Someplace between them is ideal, if you can manage it.  ...But it probably doesn't matter much.  With the default PIM sparse configuration, data won't flow through the RPs, except for a brief moment when subscribers initially come online.  If your packet rates are high enough, and your subscribers transient enough, then you should place RPs carefully.

So, where exactly do we need RPs?  Start with the smallest routable scope.  In our case, it's "building".  If there will be intra-building multicast flows, then by definition there must be an RP in each building.  And if you need one RP, then you probably need a second one for redundancy.  So, that settles it:  Two RPs in each building.  With wild, crazy and carefully planned MSDP peering to bring them all together.

Thursday, October 7, 2010

Enterprise IPv4 Multicast - Addressing

This is the first in a series of posts about deploying IPv4 multicast within an enterprise.  I'm starting with allocation of multicast group addresses because the way groups get laid out will impact other aspects of the design.


This exercise assumes a large global enterprise running sparse mode PIM with multicast everywhere, and lots of different multicast applications with different relevant scopes.  Some applications will never reach beyond the local link, some will multicast between continents, and the others fall somewhere in between.  This design is a scalable framework.  You won't be deploying this whole scheme all at once, but it's helpful to get all of these bits and pieces into place early, to allow room to grow without having to rip things apart later.


I assume we don't have the luxury of using Source Specific Multicast (SSM), a mechanism in which the receivers (or, alternatively, the leaf routers) know the address of the originating endpoints, and ask for them by source IP.  Instead, we'll plan for Any Source Multicast (ASM), which requires the placement, configuration and peering of PIM Rendezvous Points (RPs) to bring together active data flows and interested receivers.


IPv4 Multicast falls into the 224/4 address block: 224.0.0.0 - 239.255.255.255.  Within this range there are sub-ranges used for various purposes, which will drive our assignment decisions:


RFC2365 Administratively Scoped IP Multicast
RFC2365 gives us the "IPv4 Local Scope" 239.255/16.  The only restrictions on the use of this scope are that it not be further subdivided, and that it be the smallest scope in use.  It's perfect for Link-Local multicast applications that won't be routed off-net.  Things like application heartbeat traffic, server load balancing coordination, pricing application backends, etc...  Because this traffic won't be routed, the group addresses can be re-used on different subnets.  Depending on your perspective, this re-use can make your life easier (production and disaster recovery instances of an application can run with the same backend configuration), or more complicated (trying to keep track of exactly what is using 239.255.5.5 on each subnet).  Proceed with caution.


RFC2365 also prescribes the 239.192/14 block for private use within an organization.  The block breaks nicely into four /16s, which is exactly the number of routable scopes I'm going to present.  If you need more scopes, you can dip into the expansion range described by section 6.2.1 (not recommended), or you can slice the /14 into smaller chunks.


Don't use x.0.0.x or x.128.0.x
There are thirty-two /24s that should be avoided at all costs.  These are a byproduct of RFC 1112 Section 6.4 and RFC 3171 Section 3.  These multicast groups map into MAC addresses 01:00:5E:00:00:XX.  L2 flooding suppression mechanisms don't work on these groups.  They will always flood to all ports in a broadcast domain unless you're using very new and expensive equipment which can restrain L2 multicast traffic based on information in the L3 header.


Internet groups
Most of the 224/4 block is registered space.  The applications here could conceivably be delivered to you over The Internet, but more likely will arrive on dedicated circuits from vendors and business partners.  A common example of this sort of traffic is market pricing data in financial firms.  You won't likely be talking PIM with your ISP, but you might see multicast using registered space at your B2B network edge.

The Scopes
239.255.0.0/16 - Link local: Used for non-routable traffic only.  Group addresses are universally re-usable.

239.192.0.0/16 - Building local scope: Used for applications where the sender and receivers live in the same building. A building-wide public address system might use this scope. Group addresses can be re-used in each building: perhaps the same PA system is in each building. If you use the same addresses, the application owners won't need a special configuration for each building.

239.193.0.0/16 - Campus local scope: Works just like the building scope, but has wider reach. Perhaps you're pulling MPEG2 HDTV out of the air and multicasting it onto your LAN. You probably don't want to put these fat streams on the wide area links, so you duplicate the multicasts in each campus. By re-using group addresses, you'll only need a single TV guide for the whole enterprise. Users who tune into 239.193.1.1 will find their local ABC affiliate (for example), no matter which office they're in.

239.194.0.0/16 - Region local scope: Works just like the Campus and Building scopes, but with national or continental scale.

239.195.0.0/16 - Enterprise local scope: For application streams that will be used enterprise-wide.  These group addresses are not reusable.

224.0.0.0/4 - Internet scope: Subsets of this /4 are registered space.  You probably won't be multicasting to or from The Internet anytime soon, but might find yourself forwarding registered applications that arrive on private circuits.

One final detail about these scopes:  Let's say you have offices in London, Madrid, Tokyo, Los Angeles, and a multi-building campus in Chicago.


The Building scopes are easy to identify:  Each building is a scope!


The Chicago campus obviously constitutes a campus scope, but what about those one-office cities?  They're campuses too.  One-building campuses.  As applications roll out in LA, pretend you have multiple buildings there, and assign addresses accordingly.  If you assign KTLA to a Building-scope multicast group, you'll have to reconfigure things when a new office opens so that those folks won't miss watching any live police chases.


Accordingly, the Tokyo office represents a Building, Campus and Region, all by itself.  When the Osaka office opens, the Region-scoped Japanese music-on-hold multicast stream that you deployed in Tokyo will be available for use in Osaka.


PIM BiDir
Finally, we need to carve those scopes up one more time.  Bidirectional PIM is a mechanism where multicast traffic flows in both directions between end stations.  Everybody is a sender and a receiver at the same time.  BiDir PIM doesn't use a shortest-path-tree like ASM, so it's good to set aside an address block for it, even if we're not going to use it right away.  We'll take the top half of each local routable /16 for BiDir.
Link Local
  Block 1: 239.255.1.0 - 239.255.127.255
  Block 2: 239.255.129.0 - 239.255.255.255

Building
  ASM:   239.192.1.0 - 239.192.127.255
  BiDir: 239.192.129.0 - 239.192.255.255

Campus
  ASM:   239.193.1.0 - 239.193.127.255
  BiDir: 239.193.129.0 - 239.193.255.255

Region
  ASM:   239.194.1.0 - 239.194.127.255
  BiDir: 239.194.129.0 - 239.194.255.255

Enterprise
  ASM:   239.195.1.0 - 239.195.127.255
  BiDir: 239.195.129.0 - 239.195.255.255

Friday, October 1, 2010

vPC failure scenario

Cisco Nexus vPC operation is well documented all over the 'net, so I won't be covering the basics here.  Instead, I want to focus on a particular failure scenario, in which the vPC safety mechanisms can indefinitely prolong downtime when other failures have occurred.

Consider the following topology:

Nothing fancy is going on here.  Nexus 5000s have 3 downstream (south-facing?) vPC links and a redundant vPC peer-link.  The management interface on each Nexus is doing peer-keepalive duty through a management switch.

My previous builds have been somewhat paranoid about redundancy of the peer-keepalive traffic, but I no longer believe that's helpful, and I'll be doing keepalive over the non-redundant mgmt0 interface going forward.

Each Nexus knows to bring up its vPC member links because the peer-link is up, so the activity can be coordinated between chassis.  If the peer-link fails, the two switches can still coordinate their vPC forwarding behavior by communicating state over the peer-keepalive management network.

If a management link (or the whole management switch) were to fail, then no problem.  It's the state-only backup to the peer-link, and not required to forward traffic.

If a whole Nexus switch fails, the surviving peer will detect the complete failure of his neighbor, and continue forwarding traffic for the vPC normally.

When the failed Nexus comes back up, he waits until he's established communication with the survivor before bringing up the vPC member links, because it wouldn't be safe to bring up aggregate link members without knowing what the peer chassis is doing.

...And that brings us to the interesting failure scenario:  Imagine that a power outage strikes both Nexus 5Ks, but only one of them comes back up.  The lone chassis can't reach his peer over the peer link or the peer-keepalive link.  He's got no way of knowing whether it's safe to bring up the vPC member links, so they stay down.

If this happened to me in production, I'd probably do two things to bring it back online:

  1. Take steps to ensure the failed box can't come back to life.  How do you kill a zombie switch, anyway?
  2. Remove the vpc statement from each vPC port channel interface
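On NX-OS, that second step looks something like this (port-channel number hypothetical):
interface port-channel 10
 no vpc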
Nortel had a similar problem with their RSMLT mechanism, but that deadlock centered around keeping track of who owns the first-hop gateway address (not HSRP/VRRP/GLBP).  They solved it by recording responsibility for the gateway address into NVRAM (flash? spinning rust?  wherever it is that Nortel records such things).