Thursday, November 24, 2011

Dirt Cheap Terminal Servers

My job puts me in contact with lots of first-rate network gear.  I like much of that shiny stuff, but there are some relics for which I continue to have a soft spot.

One of those relics remains relevant today, but few seem to know about it.

It's the Xyplex MaxServer line of terminal servers.  When deployed for out-of-band console access, these things are capable of doing essentially the same job as a Cisco router with asynchronous serial ports, but at a tiny fraction of the price, and without the annoying octopus cable that's required for most Cisco async ports.

MX-1640 on top, MX-1620 below

Price
The main thing that's so great about these guys is the price.  I bought that 40-port unit for US $26 including delivery to my door.  That's only $0.65 per serial port.

They're cheap enough that I've got a flock of these things at home.  Everything with a serial port in my house is wired up.  I can take them to customer sites for labbing purposes and don't care if I get them back.

For comparison, the cheapest NM-16A (16-port asynchronous serial module) on eBay right now is $184.  Figure about $20 each for a couple of octal cables, plus $30 for a 2600 series router to run it all, and the cheapest Cisco alternative runs over $250, nearly $16 per serial port.
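
Running the numbers, with a throwaway Python sketch using the prices quoted above:

# Per-port cost comparison using the eBay prices quoted above.
xyplex_cost = 26.00              # 40-port MX-1640, delivered
xyplex_ports = 40

cisco_cost = 184 + 2 * 20 + 30   # NM-16A + two octal cables + a 2600
cisco_ports = 16

print(f"Xyplex: ${xyplex_cost / xyplex_ports:.2f}/port")  # $0.65/port
print(f"Cisco:  ${cisco_cost / cisco_ports:.2f}/port")    # $15.88/port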

I know that people are buying Cisco 2511s and NM-16As for their home labs, and I'm guessing that ignorance of cheaper options is why the Cisco stuff bids up to such ridiculous levels on eBay.

In an effort to spread the love for these relics, I'm sharing what I know about them.

Identification

The same basic product is available as both Xyplex MaxServer and MRV MaxServer.  I've seen an MRV unit once, but never configured one.  It looks identical, but may behave differently.

There is an older model MaxServer 1600 that has 8 or 16 ports, a rounded face with integrated rack ears, and no onboard 10BASE-T transceiver.  You'll need an AUI transceiver for 10BASE-T to use one of these older units.  I haven't used one since 1999 or 2000, so I don't remember whether they behave exactly the same, and I can't recommend them.

The 1620 and 1640 units have a label on the bottom indicating that their "Model" is MX-1620-xxx or MX-1640-xxx.  I'm not sure what the xxx suffix means, but I've got units with suffixes 002, 004 and 014, and I can't tell any difference between them.

Flash Cards
Sometimes these boxes come with a hard-to-find PCMCIA card.  If you have the card, then these guys can run totally independently.  Without the card, a TFTP server is required for boot and configuration.  The TFTP server can be anywhere on the network.  It doesn't need L2 adjacency to the MaxServer.  If one unit has the card, it can serve other units without cards.

The card has to be an ancient, small-capacity "linear" flash card, and not just any linear flash card will do.  I've tried lots of cards from different vendors, and only found a single third-party card that's recognized by the Xyplex (pictured below).  They run fine without the cards, so don't sweat it if you can't find one.



Rack Mounting
OEM rack ears are tough to find.

But the screws and ears from a Cisco 2600 work fine with the addition of a couple holes to the MaxServer chassis:
Cisco 2600, MaxServer with Cisco ear installed, unmolested MaxServer
Use a 1/16" bit and chuck it deep in the drill so that it won't reach more than about 1/4" into the chassis when it punches through.

Software
You'll need a file called xpcsrv20.sys.  The md5sum of the file I use is:
b7252070000988ce47c62f1d46acbe11
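
If you want to verify a copy you've found, here's a quick Python check (nothing Xyplex-specific about it, just hashlib):

# Verify the boot image against the MD5 above before serving it via TFTP.
import hashlib

with open("xpcsrv20.sys", "rb") as f:
    digest = hashlib.md5(f.read()).hexdigest()

assert digest == "b7252070000988ce47c62f1d46acbe11", "wrong xpcsrv20.sys?"
print("image checks out")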

Drop this file on the root of your TFTP server.  While you're there, if your TFTP server doesn't allow remote file creation (most don't), create a couple of empty files for your server to use for its configuration.  The format is:

x<last-6-of-MAC>.prm and x<last-6-of-MAC>.bck

So, I've got files named x0eb6b1.prm and x0eb6b1.bck that are used by my MaxServer with hardware address 0800.870e.b6b1.  The address is printed on a label near the Ethernet port.
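
If you'd rather not work the filenames out by hand, here's a little Python sketch (the helper name is mine, nothing official) that derives them from the hardware address:

# Derive the .prm/.bck filenames from a Xyplex hardware address.
def xyplex_param_files(mac):
    # Keep only the hex digits, take the last six, and lowercase them.
    digits = "".join(c for c in mac if c.isalnum()).lower()[-6:]
    return f"x{digits}.prm", f"x{digits}.bck"

print(xyplex_param_files("0800.870e.b6b1"))
# ('x0eb6b1.prm', 'x0eb6b1.bck')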

Serial Pinout
Connecting these guys to a Cisco console requires a rollover cable.  I like to make my own using 2-pair cable for reasons I've previously explained.

Configuration
Here's how I go about configuring one of these guys for:
  • Static IP addressing
  • TFTP download of the system software
  • TFTP download and storage of configuration elements
  • Telnet access to the serial ports
  • Telnet access for admin functions
The first step is to reset the box to factory defaults.

Connect a terminal to the first serial port using the same gear you'd use on a Cisco router's console port.  9600,8,N,1

With the system powered on, use a paperclip to manipulate the button behind the hole near the console LED on the front panel.
  1. Press the button
  2. Release the button
  3. Press the button
  4. The numbered LEDs should scan back and forth, then stop with two LEDs lit.
  5. Release the button.
Hit Enter on the terminal a couple of times.  You should see something like:
Terminal Server, Type 92, Rev G.00.00
Ethernet address 08-00-87-0B-DD-43
Configuration in progress.  Please wait
Type access and hit Enter.  The letters won't echo on screen.  You'll get a configuration menu.  The following text shows the steps I follow for the initial configuration purge.  Bold text is stuff that I've typed.
Terminal Server Configuration Menu
   1. Display unit configuration
   2. Modify unit configuration
   3. Initialize server and port parameters
   4. Revert to stored configuration
   S. Exit saving configuration changes
   X. Exit without saving configuration changes
 
Enter menu selection [X]: 2
Modify Unit Configuration Menu
   1. Initialization record #1 (Enabled)
   2. Initialization record #2 (Disabled)
   3. Initialization record #3 (Disabled)
   M. Miscellaneous unit configuration
   D. Set unit configuration to defaults
   X. Exit to main menu
The following series of answers sets everything to default.
Enter menu selection [X]: 1
Set Initialization record #1 to defaults (Y,N) [N]? Y
Enable initialization record #1 (Y,N) [N]? Y
Enter menu selection [X]: 2
Set Initialization record #2 to defaults (Y,N) [N]? Y
Enable initialization record #2 (Y,N) [N]? N
Enter menu selection [X]: 3
Set Initialization record #3 to defaults (Y,N) [N]? Y
Enable initialization record #3 (Y,N) [N]? N
Enter menu selection [X]: M
Display load status messages (Y,N) [Y]? Y
Total installed memory in megabytes (4,6,8) [4]: 4
Enter menu selection [X]: D
Initialize ALL configuration data for this unit to defaults (Y,N) [N]? Y
Still at the Modify Unit Configuration Menu, configure the first initialization record.  We have to configure the IP information twice: this first instance is used by the pre-boot environment (the bootloader), and the second instance, set later with define commands, is the IP information used by the running system.  I'm putting the terminal server at 192.168.15.11/24, and its TFTP server is at 10.122.218.33.

Enter menu selection [X]: 1
Set Initialization record #1 to defaults (Y,N) [N]? N
Enable initialization record #1 (Y,N) [Y]? Y
Enable ALL methods for image loading (Y,N) [N]? N
Toggle (CARD,DTFTP,XMOP,MOP,BOOTP,RARP) load methods [C,X,M,B,R]: C
Toggle (CARD,DTFTP,XMOP,MOP,BOOTP,RARP) load methods [X,M,B,R]: D
Toggle (CARD,DTFTP,XMOP,MOP,BOOTP,RARP) load methods [D,X,M,B,R]: X
Toggle (CARD,DTFTP,XMOP,MOP,BOOTP,RARP) load methods [D,M,B,R]: M
Toggle (CARD,DTFTP,XMOP,MOP,BOOTP,RARP) load methods [D,B,R]: B
Toggle (CARD,DTFTP,XMOP,MOP,BOOTP,RARP) load methods [D,R]: R
Toggle (CARD,DTFTP,XMOP,MOP,BOOTP,RARP) load methods [D]: 
Enable ALL methods for parameter loading (Y,N) [Y]? N
Toggle (NVS,XMOP,MOP,BOOTP,RARP) load methods [N,X,M,B,R]: X
Toggle (NVS,XMOP,MOP,BOOTP,RARP) load methods [N,M,B,R]: M
Toggle (NVS,XMOP,MOP,BOOTP,RARP) load methods [N,B,R]: B
Toggle (NVS,XMOP,MOP,BOOTP,RARP) load methods [N,R]: R
Toggle (NVS,XMOP,MOP,BOOTP,RARP) load methods [N]:
Enable ALL methods for dumping (Y,N) [Y]? N
Toggle (XMOP,MOP,BOOTP,RARP) load methods [X,M,B,R]: X
Toggle (XMOP,MOP,BOOTP,RARP) load methods [M,B,R]: M
Toggle (XMOP,MOP,BOOTP,RARP) load methods [B,R]: B
Toggle (XMOP,MOP,BOOTP,RARP) load methods [R]: R
Toggle (XMOP,MOP,BOOTP,RARP) load methods []: 
Enter unit IP address [0.0.0.0]: 192.168.15.11
Enter host IP address [0.0.0.0]: 10.122.218.33
Enter gateway IP address [0.0.0.0]: 192.168.15.1
Enter TFTP image filename (64 characters max.) []: xpcsrv20.sys
Enter menu selection [X]: X
Now we're back at the Terminal Server Configuration Menu.  We'll Initialize server and port parameters and then Exit saving configuration changes.

Enter menu selection [X]: 3
Should default server and port parameters be used (Y,N) [Y]? Y
Enter menu selection [X]: S
Save changes and exit (Y,N) [Y]? Y
At this point, the system should grab xpcsrv20.sys from the TFTP server.  Give this a minute to complete. Next you'll get a prompt like this:
               Welcome to the Xyplex Terminal Server.


Enter username>

With the way I use these boxes, it doesn't matter what username you type (at this prompt and at later prompts).  Just type something in.  I suppose it would matter if you configured RADIUS authentication, but I've never done that.  The default password is system.


Enter username> foo
Xyplex -901- Default Parameters being used

Xyplex> set priv
Password> system (doesn't echo)

Welcome to the Xyplex Terminal Server.
For information on software upgrades contact your local representative,
or call Xyplex directly at

in USA: (800) 435 7997
in Europe: +44 181 564 0564
in Asia: +65 225 0068

Xyplex>>


The prompt with >> indicates that we're in privileged user mode.  Configuration parameters are entered with either set or define commands.  set commands take effect immediately but are not persistent across reboots.  define commands don't take effect until after reboot.  Issuing set server change enabled causes define commands to take effect immediately, in addition to persisting across reboots, so I start with that one.
set server change enabled
define server change enabled
define server name My-Xyplex
define server internet name My-Xyplex.home.marget.com
define server welcome "My Xyplex"
define login password "myloginpassword"
define priv password "myenablepassword"
define server login prompt "passwd "
define server internet address 192.168.15.11
define server internet subnet mask autoconfigure disabled
define server internet subnet mask 255.255.255.0
define server internet broadcast address 192.168.15.255
define server internet primary gateway address 192.168.15.1
define server internet gateway auto discovery disabled
Port 0 is like line vty 0: it's the in-band management interface.  Ordinarily it listens on TCP/2000.  I move it to TCP/23.
define port 0 telnet echo remote
define port 0 telnet remote 23
define port 0 prompt "My-Xyplex"
define port 0 idle timeout 30
This next bit of configuration is the stuff I apply to the serial ports so that I can connect to them remotely.
define port 1-20 autoconnect disabled
define port 1-20 autobaud disabled
define port 1-20 access remote
define port 1-20 flow disable
define port 1-20 telnet transmit immediate
define port 1-20 telnet binary session mode passall
define port 1-20 default session mode passall
define port 1-20 telnet newline nothing
define port 1-20 telnet echo character
define port 1-20 speed 9600
Accessing the ports
Once things are configured you access these ports in pretty much the same way that you would with a Cisco box configured for "reverse telnet".  Each serial port is listening on a TCP port: (portnum * 100) + 2000

So, port 1 is listening on TCP/2100, port 2 on TCP/2200, etc...
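
In script form, for the sake of clarity (the helper is just illustrative):

# Map a Xyplex serial port number to the TCP port it listens on.
# Port 0 (management) is the degenerate case: TCP/2000 by default.
def tcp_port(serial_port):
    return serial_port * 100 + 2000

for p in (1, 2, 20):
    print(f"serial port {p} -> telnet 192.168.15.11 {tcp_port(p)}")
# serial port 1 -> telnet 192.168.15.11 2100
# serial port 2 -> telnet 192.168.15.11 2200
# serial port 20 -> telnet 192.168.15.11 4000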

Useful commands
The only reason I log into the Xyplex directly is to kill off a stuck telnet session. Log in and access a privileged session like this:

chris$ telnet 192.168.15.11
Trying 192.168.15.11...
Connected to 192.168.15.11.
Escape character is '^]'.

passwd myloginpassword

My-Xyplex



Enter username> blah
My-Xyplex> set priv
Password> myenablepassword
My-Xyplex>>
To kill an existing session, use:
My-Xyplex>> kill port X sessions all
To reload the Xyplex, use:
My-Xyplex>> init delay 0

Monday, November 21, 2011

10GBASE-T X2 module shipping soon?

I noticed this morning that a new X2 module appeared in the Cisco docs last week:
Cisco X2-10GB-T

The Cisco 10GBASE-T Module supports link lengths of up to 100m on CAT6A or CAT7 copper cable.

Unfortunately, while the compatibility guide lists this new transceiver, it doesn't indicate that it's supported by any current switches.

As far as I'm aware, this module is the 4th 10GBASE-T device in Cisco's lineup, having been preceded by:
  • 6716/6816-10T card for the 6500 (these are the same card with different DFCs)
  • 4908-10G-RJ45 half-card for the 4900M
  • 2232TM Fabric Extender
This new module isn't showing up in the pricing tool yet.

I'm not too excited about 10GBASE-T connections (and Greg Ferro hates them!), but I've seen a couple of use cases where having just a single port or two on a stackable switch would have been helpful.

It'll be interesting to see how these get deployed.  A 10GBASE-T SFP+ module would be much more helpful these days, but I suspect that power, packaging or both are standing in the way of a 10gigabit version of the GLC-T.

Update 11/30/2011 -- I missed a 10GBASE-T offering (or maybe it just appeared).  There's also the C3KX-NM-10GT, a dual-port 10GBASE-T network module for the 3560-X and 3750-X switches.

Thursday, November 17, 2011

Nexus 5K Layout for 10Gb/s Servers - Part 4

My first couple of blog posts were about building 1Gb/s top of rack switching using the Nexus 5000 product line.  This is a new series comparing some options for 10Gb/s top of rack switching with Nexus 5500 switches.

These posts assume a brand-new deployment and attempt to cover all of the bits required to implement the solution.  I'm including everything from the core layer's SFP+ downlink modules through the SFP+ ports for server access.  I'm assuming that the core consists of SFP+ interfaces capable of supporting twinax cables, and that each server will provide its own twinax cable of the appropriate length.

Each scenario supports racks housing sixteen 2U servers.  Each server has a pair of SFP+ based 10Gb/s NICs and a single 100Mb/s iLO interface.

Option 4 - Top of Rack Nexus 2232 + Catalyst 2960

This is a 3-rack pod consisting of two Nexus 5548 switches in a central location near the core, and Nexus 2232 and Catalyst 2960 deployed at the top of rack.
Option 4

Each Nexus 2232 uplinks to a single Nexus 5548, and the Catalyst 2960s are uplinked via vPC to the fabric extenders.  Connecting the 2960 to the FEX requires some special consideration.  In this case, there are two reasonably safe ways to do it:
  1. Configure BPDUfilter on the uplink EtherChannel and BPDUguard on all the field ports.  Put each 2960 into its own VLAN.  Any cable mistakenly linked between racks should be killed off by BPDUguard immediately, and the 2960s can't form a loop because they're in different VLANs anyway.
  2. Configure the lanbase-routing feature on the 2960 and run each 2960 uplink as a /31 routed link.  BPDUfilter will still be required on the 2960 (routed interfaces aren't supported by lanbase-routing, so you have to use SVI+VLAN), but a loop cannot form because each 2960 (uplink and downlink) is in a different VLAN.  This might not be possible with this specific model of 2960; upgrading to a WS-C3560V2-24TS-S (about $3K) buys the limited L3 features required.
Even if a loop does form, the 5K and 7K should be able to move the 2960's 6.5Mpps maximum capacity with no problem :-)  If we're not comfortable with all of this, the 2960s can uplink to the 5548s for an additional $3600 (list) and twelve strands of fiber.

48 servers have 960Gb/s of access bandwidth with 2:1 oversubscription inside the pod.  The pod's uplink is oversubscribed by 6:1, same as Options 1 and 3.
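
Those ratios are easy to sanity-check in Python (the 16 core-facing 10Gb/s uplinks are inferred from the stated 6:1 ratio rather than counted off the diagram):

# Sanity-check the oversubscription math for this 3-rack pod.
servers = 48
access_gbps = servers * 2 * 10             # two 10Gb/s NICs each = 960G

fex_server_gbps = 16 * 10                  # 16 server NICs per 2232
fex_uplink_gbps = 8 * 10                   # 8 FET uplinks per 2232
print(fex_server_gbps / fex_uplink_gbps)   # 2.0 -> 2:1 inside the pod

core_uplink_gbps = 16 * 10                 # inferred from the stated 6:1
print(access_gbps / core_uplink_gbps)      # 6.0 -> 6:1 pod uplink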

Because the Nexus 5548 are installed in a central location (not in the server row), management connections (Ethernet and serial) do not require any special consideration.  Only multimode fiber needs to be installed into the server row.

The 2960 console connections are less critical. My rule on this is: If the clients I'm supporting can't be bothered to provision more than a single NIC, then they can't be all that important. Redundancy and supportability considerations at the network layer may be compromised.

The advantages of this configuration include:
  • Plenty of capacity for adding servers, because each 10Gb/s FEX is only half-full (oversubscription would obviously increase).
  • Use of inexpensive twinax connections for 7K<-->5K links.  There are more boxes here, but the overall price is lower because of this change.
  • Centralized switch management - serial and Ethernet management links are all in one place.
  • This model translates directly to 10GBASE-T switching.  When servers begin shipping with onboard 10GBASE-T interfaces, we switch from Nexus 2232PP to 2232TM, and the architecture continues to work.  This isn't possible with top-of-rack Nexus 5500s right now.
Basically, it's the same topology as Option 3, but I've swapped out the 2224s in favor of 2960s to save a few bucks.  Spending $25,000 for 100Mb/s iLO ports drove me a little crazy.

Here's the resulting bill of materials with pricing:
Line / Part# / Description / List Price / Qty. / Discount(s) / Unit Price / Extended Price
1.0 N5548UPM-4N2232PF Nexus 5548UP/Expansion Module/4xN2232PP/64xFET 78,000.00 1 0%  78,000.00 78,000.00
1.1 N55-DL2 Nexus 5548 Layer 2 Daughter Card 0.00 1 0%  0.00 0.00
1.2 N55-M16UP-B Nexus 5500 Series Module 16p Unified 0.00 1 0%  0.00 0.00
1.3 N5KUK9-503N2.1 Nexus 5000 Base OS Software Rel 5.0(3)N2(1) 0.00 1 0%  0.00 0.00
1.4 DCNM-L-NXACCK9 DCNM for LAN Advanced Edition for N1/2/4/5K 0.00 1 0%  0.00 0.00
1.5 GLC-T 1000BASE-T SFP 395.00 3 0%  395.00 1,185.00
1.6 SFP-H10GB-CU1M 10GBASE-CU SFP+ Cable 1 Meter 150.00 5 0%  150.00 750.00
1.7 SFP-H10GB-CU3M 10GBASE-CU SFP+ Cable 3 Meter 210.00 8 0%  210.00 1,680.00
1.8 N5548P-FAN Nexus 5548P and 5548UP Fan Module, Front to Back Airflow 0.00 2 0%  0.00 0.00
1.9 N55-PAC-750W Nexus 5500 PS, 750W, Front to Back Airflow(Port-Side Outlet) 0.00 2 0%  0.00 0.00
1.10 CAB-C13-CBN Cabinet Jumper Power Cord, 250 VAC 10A, C14-C13 Connectors 0.00 2 0%  0.00 0.00
1.11 CAB-C13-C14-2M Power Cord Jumper, C13-C14 Connectors, 2 Meter Length 0.00 8 0%  0.00 0.00
1.12 N2K-C2232PP-BUN Standard airflow/AC pack: N2K-C2232PP-10GE, 2AC PS, 1Fan 0.00 4 0%  0.00 0.00
1.13 N5K-C5548UP-BUN Nexus 5548UP in N5548UP-N2K Bundle 0.00 1 0%  0.00 0.00
1.14 FET-10G 10G Line Extender for FEX 0.00 64 0%  0.00 0.00
1.15 N5548-ACC-KIT Nexus 5548 Chassis Accessory Kit 0.00 1 0%  0.00 0.00
2.0 N5K-C5548UP-FA Nexus 5548 UP Chassis, 32 10GbE Ports, 2 PS, 2 Fans 25,600.00 1 0%  25,600.00 25,600.00
2.1 N5548P-FAN Nexus 5548P and 5548UP Fan Module, Front to Back Airflow 0.00 2 0%  0.00 0.00
2.2 N55-PAC-750W Nexus 5500 PS, 750W, Front to Back Airflow(Port-Side Outlet) 0.00 2 0%  0.00 0.00
2.3 CAB-C13-CBN Cabinet Jumper Power Cord, 250 VAC 10A, C14-C13 Connectors 0.00 2 0%  0.00 0.00
2.4 GLC-T 1000BASE-T SFP 395.00 3 0%  395.00 1,185.00
2.5 SFP-H10GB-CU1M 10GBASE-CU SFP+ Cable 1 Meter 150.00 5 0%  150.00 750.00
2.6 SFP-H10GB-CU3M 10GBASE-CU SFP+ Cable 3 Meter 210.00 8 0%  210.00 1,680.00
2.7 N55-DL2 Nexus 5548 Layer 2 Daughter Card 0.00 1 0%  0.00 0.00
2.8 N55-M16UP Nexus 5500 Unified Mod 16p 10GE Eth/FCoE OR 16p 8/4/2/1G FC 11,200.00 1 0%  11,200.00 11,200.00
2.9 N5KUK9-503N2.1 Nexus 5000 Base OS Software Rel 5.0(3)N2(1) 0.00 1 0%  0.00 0.00
2.10 DCNM-L-NXACCK9 DCNM for LAN Advanced Edition for N1/2/4/5K 0.00 1 0%  0.00 0.00
2.11 N5548-ACC-KIT Nexus 5548 Chassis Accessory Kit 0.00 1 0%  0.00 0.00
3.0 N2K-C2232PF-10GE Nexus 2232PP with 16 FET (2 AC PS, 1 FAN (Std Airflow)) 14,000.00 2 0%  14,000.00 28,000.00
3.1 CAB-C13-C14-2M Power Cord Jumper, C13-C14 Connectors, 2 Meter Length 0.00 4 0%  0.00 0.00
3.2 FET-10G 10G Line Extender for FEX 0.00 32 0%  0.00 0.00
4.0 WS-C2960-24TC-S Catalyst 2960 24 10/100 + 2 T/SFP LAN Lite Image 725.00 3 0%  725.00 2,175.00
4.1 CAB-C13-C14-AC Power cord, C13 to C14 (recessed receptacle), 10A 0.00 3 0%  0.00 0.00

List price for this layout is $152,205, or $50,735 for each 16-server cabinet.

Nexus 5K Layout for 10Gb/s Servers - Part 3

My first couple of blog posts were about building 1Gb/s top of rack switching using the Nexus 5000 product line.  This is a new series comparing some options for 10Gb/s top of rack switching with Nexus 5500 switches.

These posts assume a brand-new deployment and attempt to cover all of the bits required to implement the solution.  I'm including everything from the core layer's SFP+ downlink modules through the SFP+ ports for server access.  I'm assuming that the core consists of SFP+ interfaces capable of supporting twinax cables, and that each server will provide its own twinax cable of the appropriate length.

Each scenario supports racks housing sixteen 2U servers.  Each server has a pair of SFP+ based 10Gb/s NICs and a single 100Mb/s iLO interface.

Option 3 - Top of Rack Nexus 2232

This is a 3-rack pod consisting of two Nexus 5548 switches in a central location near the core, and Nexus 2232 and 2224 deployed at the top of rack.

Option 3
Each Nexus 2232 uplinks to a single Nexus 5548, and the Nexus 2224s are uplinked via vPC.

48 servers have 960Gb/s of access bandwidth with 2:1 oversubscription inside the pod.  The pod's uplink is oversubscribed by 6:1, same as Option 1.

Because the Nexus 5548 are installed in a central location (not in the server row), management connections (Ethernet and serial) do not require any special consideration.  Only multimode fiber needs to be installed into the server row.

The advantages of this configuration include:
  • Plenty of capacity for adding servers, because each 10Gb/s FEX is only half-full (oversubscription would obviously increase)
  • No iLO switches to manage - switching for iLO/PDU/etc... is handled by the Nexus now.
  • Use of inexpensive twinax connections for 7K<-->5K links.  There are more boxes here, but the overall price is lower because of this change.
  • Centralized switch management - serial and Ethernet management links are all in one place.
  • This model translates directly to 10GBASE-T switching.  When servers begin shipping with onboard 10GBASE-T interfaces, we switch from Nexus 2232PP to 2232TM, and the architecture continues to work.  That isn't possible with top-of-rack Nexus 5500s right now.
Here's the resulting bill of materials with pricing:
Line / Part# / Description / List Price / Qty. / Discount(s) / Unit Price / Extended Price
1.0 N5548UPM-4N2232PF Nexus 5548UP/Expansion Module/4xN2232PP/64xFET 78,000.00 1 0%  78,000.00 78,000.00
1.1 N55-DL2 Nexus 5548 Layer 2 Daughter Card 0.00 1 0%  0.00 0.00
1.2 N55-M16UP-B Nexus 5500 Series Module 16p Unified 0.00 1 0%  0.00 0.00
1.3 N5KUK9-503N2.1 Nexus 5000 Base OS Software Rel 5.0(3)N2(1) 0.00 1 0%  0.00 0.00
1.4 DCNM-L-NXACCK9 DCNM for LAN Advanced Edition for N1/2/4/5K 0.00 1 0%  0.00 0.00
1.5 SFP-H10GB-CU1M 10GBASE-CU SFP+ Cable 1 Meter 150.00 10 0%  150.00 1,500.00
1.6 N5548P-FAN Nexus 5548P and 5548UP Fan Module, Front to Back Airflow 0.00 2 0%  0.00 0.00
1.7 N55-PAC-750W Nexus 5500 PS, 750W, Front to Back Airflow(Port-Side Outlet) 0.00 2 0%  0.00 0.00
1.8 CAB-C13-CBN Cabinet Jumper Power Cord, 250 VAC 10A, C14-C13 Connectors 0.00 2 0%  0.00 0.00
1.9 CAB-C13-C14-2M Power Cord Jumper, C13-C14 Connectors, 2 Meter Length 0.00 8 0%  0.00 0.00
1.10 N2K-C2232PP-BUN Standard airflow/AC pack: N2K-C2232PP-10GE, 2AC PS, 1Fan 0.00 4 0%  0.00 0.00
1.11 N5K-C5548UP-BUN Nexus 5548UP in N5548UP-N2K Bundle 0.00 1 0%  0.00 0.00
1.12 FET-10G 10G Line Extender for FEX 0.00 64 0%  0.00 0.00
1.13 N5548-ACC-KIT Nexus 5548 Chassis Accessory Kit 0.00 1 0%  0.00 0.00
2.0 N5K-C5548UP-FA Nexus 5548 UP Chassis, 32 10GbE Ports, 2 PS, 2 Fans 25,600.00 1 0%  25,600.00 25,600.00
2.1 N5548P-FAN Nexus 5548P and 5548UP Fan Module, Front to Back Airflow 0.00 2 0%  0.00 0.00
2.2 N55-PAC-750W Nexus 5500 PS, 750W, Front to Back Airflow(Port-Side Outlet) 0.00 2 0%  0.00 0.00
2.3 CAB-C13-CBN Cabinet Jumper Power Cord, 250 VAC 10A, C14-C13 Connectors 0.00 2 0%  0.00 0.00
2.4 N55-DL2 Nexus 5548 Layer 2 Daughter Card 0.00 1 0%  0.00 0.00
2.5 N55-M16UP Nexus 5500 Unified Mod 16p 10GE Eth/FCoE OR 16p 8/4/2/1G FC 11,200.00 1 0%  11,200.00 11,200.00
2.6 N5KUK9-503N2.1 Nexus 5000 Base OS Software Rel 5.0(3)N2(1) 0.00 1 0%  0.00 0.00
2.7 DCNM-L-NXACCK9 DCNM for LAN Advanced Edition for N1/2/4/5K 0.00 1 0%  0.00 0.00
2.8 N5548-ACC-KIT Nexus 5548 Chassis Accessory Kit 0.00 1 0%  0.00 0.00
3.0 N2K-C2232PF-10GE Nexus 2232PP with 16 FET (2 AC PS, 1 FAN (Std Airflow)) 14,000.00 2 0%  14,000.00 28,000.00
3.1 CAB-C13-C14-2M Power Cord Jumper, C13-C14 Connectors, 2 Meter Length 0.00 4 0%  0.00 0.00
3.2 FET-10G 10G Line Extender for FEX 0.00 32 0%  0.00 0.00
4.0 N2K-C2224TF Nexus 2224TP with 4 FET, choice of airflow/power 8,000.00 3 0%  8,000.00 24,000.00
4.1 CAB-C13-C14-2M Power Cord Jumper, C13-C14 Connectors, 2 Meter Length 0.00 6 0%  0.00 0.00
4.2 N2224TP-FA-BUN Standard airflow pack: N2K-C2224TP-1GE, 2AC PS, 1Fan 0.00 3 0%  0.00 0.00
4.3 FET-10G 10G Line Extender for FEX 0.00 12 0%  0.00 0.00

List price for this layout is $168,300, or $56,100 for each 16-server cabinet.

Nexus 5K Layout for 10Gb/s Servers - Part 2

My first couple of blog posts were about building 1Gb/s top of rack switching using the Nexus 5000 product line.  This is a new series comparing some options for 10Gb/s top of rack switching with Nexus 5500 switches.

These posts assume a brand-new deployment and attempt to cover all of the bits required to implement the solution.  I'm including everything from the core layer's SFP+ downlink modules through the SFP+ ports for server access.  I'm assuming that the core consists of SFP+ interfaces capable of supporting twinax cables, and that each server will provide its own twinax cable of the appropriate length.
Each scenario supports racks housing sixteen 2U servers.  Each server has a pair of SFP+ based 10Gb/s NICs and a single 100Mb/s iLO interface.

Option 2 - Top of Rack Nexus 5548
This is a 2-rack pod consisting of two Nexus 5548 switches and two Catalyst 2960s for iLO/PDU/environmental/etc...
Option 2

32 servers have 640Gb/s of access bandwidth with no oversubscription inside the pod.  The pod's uplink is oversubscribed by 4:1.

The 5548 management connections (Ethernet and serial) will require the installation of some structured wiring (copper) so that they can reach their management network and terminal server.

The 2960 console connections are less critical.  My rule on this is: If the clients I'm supporting can't be bothered to provision more than a single NIC, then they can't be all that important.  Redundancy and supportability considerations at the network layer may be compromised.

Here's the resulting bill of materials with pricing:
Line / Part# / Description / List Price / Qty. / Discount(s) / Unit Price / Extended Price
1.0 N5K-C5548UP-FA Nexus 5548 UP Chassis, 32 10GbE Ports, 2 PS, 2 Fans 25,600.00 2 0%  25,600.00 51,200.00
1.1 N5548P-FAN Nexus 5548P and 5548UP Fan Module, Front to Back Airflow 0.00 4 0%  0.00 0.00
1.2 N55-PAC-750W Nexus 5500 PS, 750W, Front to Back Airflow(Port-Side Outlet) 0.00 4 0%  0.00 0.00
1.3 CAB-C13-CBN Cabinet Jumper Power Cord, 250 VAC 10A, C14-C13 Connectors 0.00 4 0%  0.00 0.00
1.4 GLC-T 1000BASE-T SFP 395.00 4 0%  395.00 1,580.00
1.5 SFP-10G-SR 10GBASE-SR SFP Module 1,495.00 32 0%  1,495.00 47,840.00
1.6 SFP-H10GB-CU1M 10GBASE-CU SFP+ Cable 1 Meter 150.00 6 0%  150.00 900.00
1.7 N55-DL2 Nexus 5548 Layer 2 Daughter Card 0.00 2 0%  0.00 0.00
1.8 N55-M16UP Nexus 5500 Unified Mod 16p 10GE Eth/FCoE OR 16p 8/4/2/1G FC 11,200.00 2 0%  11,200.00 22,400.00
1.9 N5KUK9-503N2.1 Nexus 5000 Base OS Software Rel 5.0(3)N2(1) 0.00 2 0%  0.00 0.00
1.10 DCNM-L-NXACCK9 DCNM for LAN Advanced Edition for N1/2/4/5K 0.00 2 0%  0.00 0.00
1.11 N5548-ACC-KIT Nexus 5548 Chassis Accessory Kit 0.00 2 0%  0.00 0.00
2.0 WS-C2960-24TC-S Catalyst 2960 24 10/100 + 2 T/SFP LAN Lite Image 725.00 2 0%  725.00 1,450.00

List price for this layout is $125,370, or $62,685 for each 16-server cabinet.

The 5548s are maxed out.  All of the other configurations use an 80Gb/s vPC peer-link, but these guys can only accommodate 40Gb/s, which is probably fine.  Moving the vPC peer-keepalive traffic and Catalyst 2960 uplinks onto some other network would make it possible to max out the peer-link, but I don't believe it matters.

Nexus 5K Layout for 10Gb/s Servers - Part 1

My first couple of blog posts were about building 1Gb/s top of rack switching using the Nexus 5000 product line.  This is a new series comparing some options for 10Gb/s top of rack switching with Nexus 5500 switches.

These posts assume a brand-new deployment and attempt to cover all of the bits required to implement the solution.  I'm including everything from the core layer's SFP+ downlink modules through the SFP+ ports for server access.  I'm assuming that the core consists of SFP+ interfaces capable of supporting twinax cables, and that each server will provide its own twinax cable of the appropriate length.

Each scenario supports racks housing sixteen 2U servers.  Each server has a pair of SFP+ based 10Gb/s NICs and a single 100Mb/s iLO interface.

Option 1 - Top of Rack Nexus 5596

This is a 3-rack pod consisting of two Nexus 5596 switches and three Catalyst 2960s for iLO/PDU/environmental/etc...
Option 1

48 servers have 960Gb/s of access bandwidth with no oversubscription inside the pod.  The pod's uplink is oversubscribed by 6:1.

The 5596 management connections (Ethernet and serial) will require the installation of some structured wiring (copper) so that they can reach their management network and terminal server.

The 2960 console connections are less critical. My rule on this is: If the clients I'm supporting can't be bothered to provision more than a single NIC, then they can't be all that important. Redundancy and supportability considerations at the network layer may be compromised.

Here's the resulting bill of materials with pricing:
Line / Part# / Description / List Price / Qty. / Discount(s) / Unit Price / Extended Price
1.0 N5K-C5596UP-FA Nexus 5596UP 2RU Chassis, 2PS, 4 Fans, 48 Fixed 10GE Ports 36,800.00 2 0%  36,800.00 73,600.00
1.1 CAB-C13-CBN Cabinet Jumper Power Cord, 250 VAC 10A, C14-C13 Connectors 0.00 4 0%  0.00 0.00
1.2 GLC-T 1000BASE-T SFP 395.00 10 0%  395.00 3,950.00
1.3 SFP-10G-SR 10GBASE-SR SFP Module 1,495.00 32 0%  1,495.00 47,840.00
1.4 SFP-H10GB-CU3M 10GBASE-CU SFP+ Cable 3 Meter 210.00 10 0%  210.00 2,100.00
1.5 N55-M16UP Nexus 5500 Unified Mod 16p 10GE Eth/FCoE OR 16p 8/4/2/1G FC 11,200.00 2 0%  11,200.00 22,400.00
1.6 N55-M16UP Nexus 5500 Unified Mod 16p 10GE Eth/FCoE OR 16p 8/4/2/1G FC 11,200.00 2 0%  11,200.00 22,400.00
1.7 N5KUK9-503N2.1 Nexus 5000 Base OS Software Rel 5.0(3)N2(1) 0.00 2 0%  0.00 0.00
1.8 DCNM-L-NXACCK9 DCNM for LAN Advanced Edition for N1/2/4/5K 0.00 2 0%  0.00 0.00
1.9 N55-M-BLNK Nexus 5500 Module Blank Cover 0.00 2 0%  0.00 0.00
1.10 N55-PAC-1100W Nexus 5500 PS, 1100W, Front to Back Airflow 0.00 4 0%  0.00 0.00
1.11 N5596-ACC-KIT Nexus 5596 Chassis Accessory Kit 0.00 2 0%  0.00 0.00
1.12 N5596UP-FAN Nexus 5596UP Fan Module 0.00 8 0%  0.00 0.00
2.0 WS-C2960-24TC-S Catalyst 2960 24 10/100 + 2 T/SFP LAN Lite Image 725.00 3 0%  725.00 2,175.00
2.1 CAB-C13-C14-AC Power cord, C13 to C14 (recessed receptacle), 10A 0.00 3 0%  0.00 0.00

List price for this layout is $174,465, or $58,155 for each 16-server cabinet.

There is sufficient capacity on the Nexus 5596 (only 69 ports are in use) to hang a fourth cabinet from it for an additional $23,915.  The price includes an expansion module for each 5596, a 2960 and a couple of GLC-T modules.  That brings the per-cabinet cost down to $49,595 but raises the oversubscription ratio to 8:1.
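
The arithmetic behind those per-cabinet figures, for the record:

# Per-cabinet list price before and after adding the fourth cabinet.
base_price = 174_465      # 3-cabinet pod, list
expansion = 23_915        # expansion modules, a 2960, and a couple of GLC-Ts

print(base_price / 3)                  # 58155.0 -> $58,155 per cabinet
print((base_price + expansion) / 4)    # 49595.0 -> $49,595 per cabinet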