Friday, July 30, 2010

Rollover cables for terminal server

In the previous post, I detailed configuration of a Lantronix SLC box.

The serial ports on this device are pinned exactly the same as a Cisco router's console port. You also get the same result with the chrome DB-25 and DE-9 connectors shipped by Sun, and the DE-9 adapters that Cisco used to ship.

They all look like this:
PIN Signal
1 Don't Care
2 Don't Care
3 Transmit
4 Ground
5 Ground
6 Receive
7 Don't Care
8 Don't Care

Those 'Don't Care' pins usually do something related to hardware flow control, but unless you're dealing with an old XL series Catalyst, you probably won't need to think about it.

For these devices to talk to one another, you'll need to connect Transmit to Receive, and Ground to Ground. Looking at the chart, it should be obvious why a "rollover" cable is important here.
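
A rollover cable simply reverses the conductor order end-to-end, which lines everything up:

Local pin          Remote pin
1 (Don't Care)     8 (Don't Care)
2 (Don't Care)     7 (Don't Care)
3 (Transmit)       6 (Receive)
4 (Ground)         5 (Ground)
5 (Ground)         4 (Ground)
6 (Receive)        3 (Transmit)
7 (Don't Care)     2 (Don't Care)
8 (Don't Care)     1 (Don't Care)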

I've seen customers use common straight-through Ethernet cables with lots of rollover dongles like this:
Those things work perfectly fine, but they're ugly (you might have 48 of them in 1RU), and never seem to be available when needed. And check out the price! Per-port, they cost almost as much as the SLC itself!

You could also use the traditional console cable like the ones that Cisco used to ship:


The problem with these is that they tend to retain their shape, and only bend in two directions. High-density cable management is awful.

My preference is to get a bag of 8P8C ends, a cheap crimper, and a big roll of this stuff. Then I leave all of it right alongside the terminal server.

It's cheap, it's super easy to make a cable (the conductors cooperate, unlike when making an Ethernet cable), and the resulting cable is always exactly the right length. Also, the two-pair stuff is both small and flexible. It's a cable management dream.
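
For what it's worth, here's why the conductors cooperate: crimp an 8P8C plug onto each end of the flat cable with the latch facing the same face of the cable (and no half-twist in between), center the four conductors so they land on pins 3 through 6, and the flat cable reverses them for you. A quick sketch, assuming four-conductor flat cable:

conductor:    1       2       3       4
end A pin:    3       4       5       6
end B pin:    6       5       4       3

That's exactly the Transmit/Receive and Ground/Ground swap from the chart above.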

Friday, July 23, 2010

Managing gear with a Lantronix SLC

Out of band management is a must for an enterprise networker, and where possible, I like to deploy a robust RS-232 based terminal server. It must be the UNIX admin in me that's so gung-ho about console access, because I've encountered lots of network administrators with a take-it-or-leave-it attitude on this issue. Me? I never consider a deployment complete without some sort of console access to every device.

Strategies I've seen vary; variations on the terminal server theme include (a quick client-side sketch of the first two follows the list):
  • Mapping serial ports to TCP/IP ports for telnet or SSH access
  • Mapping serial ports to IP addresses for telnet or SSH access
  • Running terminal server management software on a central console host for convenience, logging and access management
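
Here's a rough client-side sketch of the first two styles. The TCP port number and the names are placeholders for illustration, not defaults:

ssh -p 3003 sysadmin@mylantronix      # serial port mapped to a TCP port on the terminal server's main address
ssh sysadmin@mydevice-console         # serial port mapped to its own IP address (and DNS name)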

The Lantronix SLC product is widely deployed for this purpose, and will get the job done. Don't mistake that for a ringing endorsement.

Here's what I usually do when deploying a Lantronix SLC terminal server:
  1. Set defaults using the front-panel controls. The password is probably 999999.
  2. Set IP/Mask/Gateway via the front-panel controls or the console interface.
  3. Upgrade to the latest OS version (5.4). HTTP is the easiest way to do this. The default login is sysadmin/PASS.
  4. Log in via SSH and set the admin password with 'passwd'.
  5. The rest of the configuration follows.
Basic configuration:
set network host mylantronix domain mydomain
set network dns 1 ipaddr 8.8.8.8
set network dns 2 ipaddr 8.8.4.4
set datetime timezone UTC
set ntp localserver1 clock1.mydomain.com
set ntp localserver2 clock2.mydomain.com
set ntp localserver3 clock3.mydomain.com
set services location building suchandsuch rack suchandsuch
set services snmp enable
set services rocommunity BigSeekr1t
set services rwcommunity BiggerSeekr1t
set services telnet disable

Per-Port Addressing:
I like to give an IP address to each serial port. This way I can SSH directly to a device console, by name. At 3 AM, it's useful to have little details like remembering how to get to the console server taken care of, so I create a DNS record ending with '-console' for each piece of gear. It points at the IP address assigned to the serial port where that console can be found. Usually, it's a CNAME, pointing at one of the A records associated with the Lantronix. Multiple supervisors complicate things. A snippet of the DNS zone file might look like:

myswitch                 IN A      192.168.255.1   ; the loopback address of a piece of network gear
mylantronix              IN A      192.168.50.10
mylantronix-port1        IN A      192.168.50.11
mylantronix-port2        IN A      192.168.50.12
mylantronix-port3        IN A      192.168.50.13
mylantronix-port4        IN A      192.168.50.14
mylantronix-port5        IN A      192.168.50.15
mylantronix-port6        IN A      192.168.50.16
mylantronix-port7        IN A      192.168.50.17
mylantronix-port8        IN A      192.168.50.18
myswitch-console-slot3   IN CNAME  mylantronix-port1
myswitch-console-slot4   IN CNAME  mylantronix-port2
myswitch-console         IN CNAME  myswitch-console-slot3
The '-console' names are the only ones that I expect to type during the course of most of my work. I quickly forget just how many terminal servers are out there, and where they live, but a quick nslookup of any '-console' name clues me in about where stuff is plugged in, if I need to know. Also, if myswitch-console lands me on a standby supervisor, nslookup makes it easy to figure out that myswitch-console-slot4 is where I should be headed.

Note that there's a CNAME pointing to another CNAME here. It works, but the RFC says it should be avoided. In this case, it's never going to loop, and it's handy.
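
In practice, the handful of commands I actually type looks something like this (names from the zone snippet above):

ssh myswitch-console              # console of whatever supervisor lives in slot 3
nslookup myswitch-console         # follow the CNAME chain to see which SLC and port that really is
ssh myswitch-console-slot4        # the other supervisor, if slot 3 turns out to be the standby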

To make those DNS records useful, set up the per-port addressing on the SLC:

set deviceport port 1-8 sshin enable
set deviceport port 1 ipaddr 192.168.50.11
set deviceport port 2 ipaddr 192.168.50.12
set deviceport port 3 ipaddr 192.168.50.13
set deviceport port 4 ipaddr 192.168.50.14
set deviceport port 5 ipaddr 192.168.50.15
set deviceport port 6 ipaddr 192.168.50.16
set deviceport port 7 ipaddr 192.168.50.17
set deviceport port 8 ipaddr 192.168.50.18

Label the ports:

set deviceport port 1 name myswitch-console-slot3
set deviceport port 2 name myswitch-console-slot4
set deviceport port 3 name core-router-A
set deviceport port 4 name core-router-B
set deviceport port 5 name firewall-A
set deviceport port 6 name firewall-B
set deviceport port 7 name edge-router-A
set deviceport port 8 name edge-router-B

Syslog Servers:

set services syslogserver1 syslog1.mydomain.com
set services syslogserver2 syslog2.mydomain.com


TACACS+
Frustratingly, it doesn't seem to be possible to use a different secret with each server.
set tacacs+ server1 tacacs1.mydomain.com
set tacacs+ server2 tacacs2.mydomain.com
set tacacs+ server3 tacacs3.mydomain.com
set tacacs+ secret BigSeekr1t
set tacacs+ permissions ad,nt,sv,dt,lu,ra,um,dp,rs,fc,dr,sn,wb,sk,do
set tacacs+ group admin
set tacacs+ state enable
I can't remember what these next few commands do; the lackluster documentation provides syntax, but no clue as to their meaning. I knew once, and decided they were a good idea. I think I figured out that they were helpful through the GUI.
set services authlog info
set services devlog info
set services clicommands enable
set services includesyslog enable

NFS:
I like to have the SLC spool every byte that crosses my serial ports off to an NFS filesystem. It's pretty easy if you own the fileserver, somewhat trickier if you have to interface with the storage dudes. You'll need a share set up for you, with write access for UID 0 from all of the Lantronix IP addresses. This is one of the ways the box is flaky: traffic originated by the SLC might come from any of its IP addresses, even those assigned to serial ports. Ugh.

Make a directory for each SLC that will be writing to the share. Ensure that UID 0 can write within those directories.
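
If you control a Linux fileserver, a minimal export might look like the sketch below. The path and client range are placeholders, and no_root_squash is what grants the UID 0 write access mentioned above. Allow the whole SLC subnet, since traffic may be sourced from any of the box's addresses:

# /etc/exports on myfileserver
/path/to/share    192.168.50.0/24(rw,sync,no_root_squash)

Then, on the SLC: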
set nfs mount 1 remdir myfileserver:/path/to/share/mylantronix locdir /nfs1 rw enable mount enable
set deviceport port 1-8 nfsdir /nfs1
set deviceport port 1-8 nfslogging enable
set deviceport port 1-8 nfsmaxsize 250000
set deviceport port 1-8 nfsmaxfiles 4
The share is at myfileserver:/path/to/share and it contains a directory called mylantronix. Presumably you'll have lots of SLCs, so it's nice to have a directory for each one.
Within the mylantronix directory, the SLC will create a file for each serial port. As each file reaches 250KB, a second file will be created, and so on, with a maximum of 4 files (1MB) per port. 1MB is a lot of console data at 9600 baud!
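
Back-of-the-envelope, the sizing works out like this per port:

# 4 files x 250,000 bytes  = ~1MB of retained console history
# 9600 baud, 8N1           = ~960 bytes per second when a console is firehosing
# 1,000,000 / 960          = roughly 17 minutes of non-stop output to fill all four files

Real consoles sit idle most of the time, so 1MB covers a long stretch of actual events.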

Broken: Interface Bonding
The SLC has two NICs, but until recently, there wasn't much you could do with the second NIC. Redundancy boiled down to:
  • Assign the secondary NIC an IP on a different subnet than the primary NIC
  • Configure an alternate default gateway on the secondary subnet
  • Configure a healthcheck IP, to be pinged through the primary NIC/gateway
  • If the healthcheck IP fails to ping, the SLC will swap to its alternate default gateway
  • You'd better remember how to get to the box via its alternate NIC
  • Per-port addressing is impossible
Now, the situation is slightly improved: Interface bonding is supported. Sort of. The command to configure bonding is simple:

set network bonding active-backup

Don't do it! Per-port addressing and interface bonding are a deadly combination. It doesn't matter which you set first. When you enable the second of those features, the box goes completely offline. You can't even get any output from the SLC's own console port! The only option is a front-panel reset. Lantronix support hasn't shown much interest in fixing the issue. I suspect they're too busy dealing with their PCMCIA supplier crisis. They have told me that they might enhance the documentation, CLI and GUI to warn the administrator who dares such a configuration <eyeroll>. This is my single biggest issue with these boxes. I really wish it were fixed. Lantronix, are you listening? I'm pretty sure that opengear is listening!


Nexus Data Center Switching Design - Part 2

The previous post detailed a data center design using Cisco's Nexus 5020 and Nexus 2148T Fabric Extender (FEX).

I revisited that design recently, with a new environment in mind. The new environment's requirements are pretty much the same, with the following changes:
  • There will be a small population of 10 Gigabit servers
  • The average count of server NICs will be way down
Otherwise, it's not much different. NIC density will be going down because the servers with 9 NICs will be replaced by servers with redundant 10Gb/s connections (plus a copper iLO).

Since that last build, Cisco has added to the Nexus lineup. The most interesting additions are the 2248TP fabric extender and the Fabric Extender Transceiver (FET).

The 2248TP has a bunch of advantages over the 2148T, including QoS features, downstream etherchannel support, some local ACL processing capability, and lower price! But the best parts are support for 100Mb/s clients, and its support for the FET.

Last year I put the Nexus 5020s into the server row, using the inexpensive twinax cables for 5000->2000 downlink. That decision saved 10% of the project budget, and seemed so cutting edge at the time. But the FET has made that layout obsolete. The 5020s get centralized now.

FET?
The FET looks just like any other SFP fiber transceiver, but can only be used between the Nexus 5000 (update: or 7000!) and the new fabric extenders, allowing them to be up to 100m away. It's not an Ethernet transceiver, but a fabric extender transceiver, so it won't interoperate with SR, LR, or any other Ethernet optics. It costs around 10% of what those other optics cost.

List pricing on the old and new FEXes is about the same:
  • $10,000 for the 2148T, plus around $1,000 worth of twinax cables.
  • $11,000 for the 2248TP, which comes bundled with 8 FETs (enough to build 4 uplinks).
Last year, I called for 48 little Catalysts to support 100Mb/s interfaces (mostly iLO ports). The 2248TP supports 100Mb/s gear, so those cheapie 2960s are gone.

I really liked the original 2148T Fabric Extender. Now it's dead to me.

Design Changes
So, what does the FEX update do to last year's design?
  • The little Catalysts at the top of every rack are gone.
  • 15% of the Nexus 5020 ports were previously dedicated to the little Catalysts.  Now they're available for servers.
  • The Nexus 5020s move from center of row to somewhere near the Nexus 7000 core.
  • The eight distributed Lantronix boxes managing 64 switches become one centralized Lantronix box managing 16 switches.
  • NO COPPER is required to be pulled into the server row.
Other changes:
  • Due to the new 10Gb/s server requirement, I'll be adding ports to the 5020s
  • Those 10Gb/s servers will be doing IP storage, instead of SAN, so I'll be looking for additional uplink bandwidth.
  • Last year's design included a dedicated EtherChannel for vPC keepalive traffic. It's overkill, and now I need the ports. vPC keepalive traffic will move to the Nexus management (mgmt0) interfaces.
  • By putting the 5020s in close proximity to the core, we'll be able to use twinax cables for these links in the near future. This can save around $330K by turning $3000 SR links into $400 copper links. There are 128 of these links (the arithmetic follows the list). The options here are:
    • Passive twinax cables connected to OneX Converters installed in the N7K-M108X2-12L cards in the Nexus 7000. This should work with the "Delhi" release of NX-OS, and is the option I've selected for this exercise.
    • Passive twinax cables connected to the un-announced N7K-M132XP-12L cards in the Nexus 7000.
    • Active twinax cables connected to the N7K-M132XP-12 cards in the Nexus 7000. This should work with the "Cairo" release of NX-OS.
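
That $330K figure is just the per-link delta times the link count:

128 links x ($3,000 SR - $400 twinax) = 128 x $2,600 = $332,800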

The result looks like this:


And each row is wired up like this:


What have we gained?
  • Flexible layout: We're no longer constrained to a 6-rack row length by the 5000->2000 CX1 cables. FEXes can live anywhere in the data center.  I've left the layout alone for a straightforward comparison.
  • No overhead copper runs. (unspecified $)
  • Less overhead fiber runs due to fewer SAN clients (unspecified $)
  • 144 additional Rack Units for servers
  • 55 fewer devices to manage (48 Catalysts and 7 Lantronix)
  • Support for 64 dual-homed 10-gig servers
  • 2X more throughput: It's 80Gb/s in both scenarios, but last year's design was oversubscribed 2:1

Pricing

Each row houses the following gear, which together list for $161,280:

Part Number          Description                                              Quantity
N5020P-6N2248TF-B    Nexus 5020P/6x2248TP/48xFET Bundle                       2
SFP-H10GB-CU1M       10GBASE-CU SFP+ Cable 1 Meter                            4
SFP-H10GB-CU3M       10GBASE-CU SFP+ Cable 3 Meter                            8
CAB-C13-C14-JMPR     Recessed receptacle AC power cord 27                     4
CAB-C13-C14-2M       Power Cord Jumper, C13-C14 Connectors, 2 Meter Length    24
N2K-C2248TP-BUN      Nexus 2248TP for N5K/N2K Bundle                          12
N5K-C5020P-BUN-E     Nexus 5020P in N5020P-N2K Bundle                         2
FET-10G              10G Line Extender for FEX                                96
N5020-ACC-KIT        Nexus 5020 Accessory Kit, Option                         2
N5K-M1-BLNK          N5000 1000 Series Expansion Module Blank                 4
N5K-PAC-750W         Nexus 5020 PSU module, 100-240VAC 750W                   4
N5KUK9-421N1.1       Nexus 5000 Base OS Software Rel 4.2(1)N1(1)              2

Additionally, we'll need 8 N7K-M108X2-12L cards, and 64 OneX converters. Together, that's $364,800.

Throw in a Lantronix SLC-16, and the grand total is $1,656,740.
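
For the curious, that total breaks down roughly like this (the SLC-16 figure is just the remainder, not a quote):

8 rows x $161,280 per row                =  $1,290,240
8 x N7K-M108X2-12L + 64 OneX converters  =  $  364,800
Lantronix SLC-16                         = ~$    1,700
                                            -----------
                                            $1,656,740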

Thanks to the new FEX and FET, the whole build, with simplified wiring, no overhead copper, twice as much bandwidth, more server space, less gear to manage, and 128 10-gig server ports costs almost 10% less than last year's design.


Not Enough Bandwidth?
...By doubling up on Nexus 7000 modules, OneX converters, twinax cables and adding ports to the Nexus 5020s, we can scale the environment up to line-rate 160 Gb/s in each row (with 80Gb/s crossconnect), now with 192 10-gig server ports.

We've reached $2,212,580. It's a big number, but only 23% bigger than last year's design, and it sports 4 times the bandwidth, 192 new 10-gig server ports, and reduced cabling and maintenance costs, all thanks to a couple of seemingly tiny details: 100Mb/s support on the new FEX, and the new FET module.

I think this might be a reasonable project to build right now for somebody with lots of legacy servers, no place to put them, and an idea that they might want some 10-gig in the near future. But I don't expect to be trotting this design out twelve months from now. 10GBASE-T is right around the corner, and it will have a big impact on the ratio of 10G/1G servers in new data centers. If I were building the environment sketched here, it would have lots of space dedicated for future 10-gig-only top-of-rack switching.

Upcoming features (CX-1 support on the N7K, FEX support on the N7K) will likely change things again soon.

Wednesday, July 21, 2010

Nexus Data Center Switching Design - Part 1

This post details how I might have built a Cisco Nexus 5000/2000 switching environment last year. All of the pricing and product availability information is late 2009 vintage.

Recently, I was running through this same design exercise again with new fabric extenders and some updated requirements in mind. Subtle differences in the newest Nexus gear changed the result much more than I expected. This series of posts will detail the designs, with the intent of highlighting the new 2248TP fabric extender. A follow-up post will detail the new gear, and how it changes everything.


Last Year's Design

Requirements
  • Several rows of top-of-rack 1Gb/s copper switching
  • Full redundancy, including redundant supervisors (in the FEX world, we'll use vPC FEX uplinks to satisfy this requirement)
  • Between 3 and 9 links on each server (one link on each server is an iLO module)
  • Multiple 10Gb/s upstream links from each row
  • Copper stays in-cabinet wherever possible
  • Limited intra-row cabling is okay
  • Keep an eye on the budget
Hardware
The resulting design splits the environment into 6-rack rows, with the following gear in each row:
  • Two Nexus 5020s (rack 3 and rack 4)
  • Twelve Nexus 2148T (two per rack)
  • Six Catalyst 2960s (one per rack)
  • One Lantronix SLC
The result looked something like this:

Cabling
Inter-row connectivity consists of:
  • 216 multimode fiber pairs (36 pairs in each rack). These are mostly used by servers for storage, but also by the Nexus 5020s for uplink to a Nexus 7000 vPC pair. Open cassette positions in each panel allow for growth.
  • One 24-pair copper bundle (an AMP MRJ21 cable) terminates on the copper panel in the third rack of every row. There was no good way to avoid inter-row copper altogether because of the Nexus mgmt0 interfaces and the 8 console connections in each row. It's a 6-port panel with 5 connections in use: Each Nexus 5020 uses one, and the Lantronix uses two for Ethernet plus a third for its own console. Lantronix interface redundancy is buggy, so only one of the NICs works correctly. More about that in a future post.
Inter-rack connectivity within each row consists of:

  • 48 TwinAx cables of various lengths connect the FEXes to the Nexus 5020s.
  • 6 TwinAx cables connect between the Nexus 5020 pair.
  • vPC copper uplink from the Catalyst 2960s to the Nexus 5020 pair using GLC-T modules (because the first 16 ports on the Nexus can do 1Gb/s).
  • Copper management interfaces on the 5020s and Lantronix connect to a central management net via the patch panel in rack 3.
Every server NIC and HBA is patched straight to the top of rack Nexus, Catalyst or fiber panel for SAN. Server cabling within the racks is very clean and straightforward.

Here's how some of these design elements came together:
The Nexus 5020s are in-row because of the pricing of 10Gb/s twinax cables. Last year, building a 10Gb/s link on multimode fiber required $1800 SR transceivers... On both ends of the cable! The price of those transceivers has since come down to $1500, but a twinax cable doing the same job lists for as little as $150. The problem with twinax is their length limitation: 5m maximum. Moving the 5020s out of close proximity to the fabric extenders would cost around $160,000 using SR transceivers!

The 6-rack row length came from a design requirement and a FEX limitation. The requirement was that "everything be dual-sup'ed". With redundant supervisors in a chassis switch, all line cards and ports continue to operate after a supervisor failure. The closest analogy in the FEX environment is to uplink the FEXes to a vPC Nexus 5020 pair. I don't love the vPC FEX layout, but let's assume that it's a layer-8 requirement that can't be worked around. Both 5020s in each row connect to all 12 FEXes. ...And that's the limitation. A Nexus 5000 can only address 12 FEX modules. There's a rumor that the limit may be raised.
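
For reference, the dual-homed FEX arrangement looks roughly like this on each 5020. The interface numbers, FEX ID and keepalive addresses are placeholders, and the syntax is from memory, so treat it as a sketch rather than a config to paste:

feature fex
feature vpc

vpc domain 1
  peer-keepalive destination 192.168.60.2 source 192.168.60.1

interface port-channel1
  switchport mode trunk
  vpc peer-link

interface ethernet 1/1-2
  switchport mode fex-fabric
  fex associate 100
  channel-group 100

interface port-channel100
  switchport mode fex-fabric
  fex associate 100
  vpc 100

Both 5020s carry a matching stanza for each FEX, which is what lets the FEX keep forwarding traffic when one 5020 goes away.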

The 2960s are here because iLO interfaces on the server do 10/100 Mb/s, and the 2148T can only do gigabit. The iLO ports need to plug in somewhere! Also, the port density was getting kind of high: As it is, each rack can support at most 12 of those 9-NIC servers. Fortunately the average NIC density is less than 6 NICs per server, so the racks will run out of space shortly before the network runs out of ports.
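
The per-rack arithmetic behind that 12-server figure:

2 x 2148T x 48 ports   =  96 gigabit ports per rack
1 x 2960 x 24 ports    =  24 10/100 ports per rack (for iLO)
worst-case server      =  8 gigabit NICs + 1 iLO
96 / 8                 =  12 such servers before the FEXes run out of ports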

I'd considered ways to reduce the iLO switch management footprint by using the 4506-E bundle (great pricing on that bundle) at the center of the row, or a Catalyst 3750 stack braided across the top of the row:
The added expense of those solutions just didn't make sense. The Catalyst 2960s (that's a plural -- not the new 2960-S) are so cheap, and their configuration so simple, that managing 48 little switches for 10/100 connections seemed worthwhile. Note that while the 2960s are linked in-row to the Nexus 5020s, the WS-C2960-24TC-S has dual-purpose uplink ports. Should the need to free up 5020 ports ever arise, the 2960s can be homed elsewhere via the in-rack multimode panels.
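
The 2960 end of that uplink is about as simple as switch configuration gets. Something like the sketch below, assuming the WS-C2960-24TC-S dual-purpose ports show up as Gi0/1-2; a static channel ('mode on') is shown for simplicity, and VLAN details are omitted:

interface range GigabitEthernet0/1 - 2
 description GLC-T uplinks, one to each Nexus 5020
 switchport mode trunk
 channel-group 1 mode on
!
interface Port-channel1
 description uplink to the Nexus 5020 vPC pair
 switchport mode trunk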

The single copper panel would have been nice to avoid, but it just wasn't possible without resorting to hokey media converters for the Nexus and Lantronix management ports. During an early stage of the build, I had these copper links running to the 2960s, but that creates a chicken-and-egg problem (the egg came first, BTW): I planned to use the Lantronix SLC to put the initial configuration on all of the other devices, but couldn't get to the SLC until the Nexus 5000 and Catalyst 2960 were configured! The cabling folks eventually showed up with the copper, but not until after I'd spent a long time dangling from a blue cable.

Each 8-port Lantronix SLC unit is fully populated: 2 Nexus 5020s and 6 Catalysts. Additionally, each SLC's own console port (for managing the SLC) is patched back to a central SLC. The 25-pair cable feeding the copper patch panel includes enough pairs to carry all console and management traffic, but it would have required funny wiring to handle both Ethernet and console connections. An in-row SLC is going to be easier to manage in the long run.


Airflow
The Nexus 5000 and 2000 are configured with front-to-back airflow, which in this case means that the exhaust is on the same end as the interfaces. Intake, accordingly, is on the other end. That's great for the Nexus 5000: it's 30 inches long, so it will be inhaling air from the cold side of the rack. The 2148T fabric extender, on the other hand, is only 20 inches deep, putting its air intake in the middle of the rack. The Catalyst 2960 draws air in on its sides and exhausts backwards into the middle of the rack. The Lantronix SLC does something equally unfortunate, though the specifics elude me. So, it's a bit of a challenge to keep the FEX, Catalyst and SLC cool. Putting the first server at the top of the rack (rather than the bottom) will help isolate this gear, and will prevent hot server exhaust from pooling in the open cavity at the top of the rack. Installing blanking panels on the hot side of the rack (rather than the aesthetically pleasing cool side) will help too. Fortunately, the FEX fans move enough air to keep stuff up there from getting too hot. No, they don't. Consider your cabinet's airflow requirements carefully, and consider adding baffles to prevent hot air from getting to the FEX intake. Blanking panels at the cold side are no help. Blanking panels at the hot side might not be enough if air can rise up the sides of the cabinet between the front and rear mounting rails.

Wiring Detail
Here's how the Nexus gear in each row is wired together.

Get out your wallet
This is an expensive build, but cost was one of the design drivers. For the pricing exercise and comparison with the new products, it's important to know that these rows uplink to a Nexus 7000 via SR optics. The 7000 is populated with N7K-M132XP-12 cards, and they're oversubscribed 2:1.

A single row, including the downlink optics installed in the core consists of the following, which together list for $188,350:


Part Number          Description                                                  Quantity
N5020P-N2K-BEC       Cisco Nexus 2148T and Nexus 5020 Bundle with Twinax cables   2
SFP-H10GB-CU1M=      10GBASE-CU SFP+ Cable 1 Meter                                14
SFP-H10GB-CU3M=      10GBASE-CU SFP+ Cable 3 Meter                                16
SFP-10G-SR=          10GBASE-SR SFP Module                                        4
WS-C2960-24TC-S      Catalyst 2960 24 10/100 + 2 T/SFP LAN Lite Image             6
GLC-T=               1000BASE-T SFP                                               12

Additionally, we'll need 4 N7K-M132XP-12 cards to install in the core switches. These list for $70,000 each.

Oh, and the Lantronix SLCs cost around $1,200 each.

Altogether, this build provides 1728 RU of server space and 4608 gigabit switchports. It's oversubscribed 2:1 at the core, 6:1 at the 5020 layer, and 1.2:1 at the FEX (assuming all ports get plugged in). Spanning tree isn't blocking any links.

List price: $1,796,400 including the Nexus 7000 parts, but not the Nexus 7000 itself.