Thursday, November 17, 2011

Nexus 5K Layout for 10Gb/s Servers - Part 1

My first couple of blog posts were about building 1Gb/s top of rack switching using the Nexus 5000 product line.  This is a new series comparing some options for 10Gb/s top of rack switching with Nexus 5500 switches.

These posts assume a brand-new deployment and attempt to cover all of the bits required to implement the solution.  I'm including everything from the core layer's SFP+ downlink modules through the SFP+ ports for server access.  I'm assuming that the core consists of SFP+ interfaces capable of supporting twinax cables, and that each server will provide its own twinax cable of the appropriate length.

Each scenario supports racks housing sixteen 2U servers.  Each server has a pair of SFP+ based 10Gb/s NICs and a single 100Mb/s iLO interface.

Option 1 - Top of Rack Nexus 5596

This is a 3-rack pod consisting of two Nexus 5596 switches and three Catalyst 2960s for iLO/PDU/environmental/etc...
[Diagram: Option 1]

48 servers have 960Gb/s of access bandwidth with no oversubscription inside the pod.  The pod's uplink is oversubscribed by 6:1.
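
If you want to check my math, here's a quick back-of-the-napkin sketch. The numbers come straight from the description above, except the uplink count: I'm assuming the 6:1 ratio is built from 16 x 10GE uplinks out of the pod (8 per 5596), which is what 160Gb/s of uplink capacity implies.

    # Option 1 bandwidth math (the uplink count is my assumption, see above)
    servers = 16 * 3                  # sixteen 2U servers in each of three racks
    access_gbps = servers * 2 * 10    # two 10Gb/s NICs per server
    uplink_gbps = 16 * 10             # assumed 8 x 10GE uplinks per 5596

    print(access_gbps)                # 960 -- matches the figure above
    print(access_gbps / uplink_gbps)  # 6.0 -- the 6:1 oversubscription ratio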

The 5596 management connections (Ethernet and serial) will require the installation of some structured wiring (copper) so that they can reach their management network and terminal server.

The 2960 console connections are less critical. My rule on this is: if the clients I'm supporting can't be bothered to provision more than a single NIC, then they can't be all that important, so redundancy and supportability considerations at the network layer may be compromised for them.

Here's the resulting bill of materials with pricing:
Line Item | Part# | Description | List Price | Qty. | Discount | Unit Price | Extended Price
1.0 | N5K-C5596UP-FA | Nexus 5596UP 2RU Chassis, 2PS, 4 Fans, 48 Fixed 10GE Ports | 36,800.00 | 2 | 0% | 36,800.00 | 73,600.00
1.1 | CAB-C13-CBN | Cabinet Jumper Power Cord, 250 VAC 10A, C14-C13 Connectors | 0.00 | 4 | 0% | 0.00 | 0.00
1.2 | GLC-T | 1000BASE-T SFP | 395.00 | 10 | 0% | 395.00 | 3,950.00
1.3 | SFP-10G-SR | 10GBASE-SR SFP Module | 1,495.00 | 32 | 0% | 1,495.00 | 47,840.00
1.4 | SFP-H10GB-CU3M | 10GBASE-CU SFP+ Cable 3 Meter | 210.00 | 10 | 0% | 210.00 | 2,100.00
1.5 | N55-M16UP | Nexus 5500 Unified Mod 16p 10GE Eth/FCoE OR 16p 8/4/2/1G FC | 11,200.00 | 2 | 0% | 11,200.00 | 22,400.00
1.6 | N55-M16UP | Nexus 5500 Unified Mod 16p 10GE Eth/FCoE OR 16p 8/4/2/1G FC | 11,200.00 | 2 | 0% | 11,200.00 | 22,400.00
1.7 | N5KUK9-503N2.1 | Nexus 5000 Base OS Software Rel 5.0(3)N2(1) | 0.00 | 2 | 0% | 0.00 | 0.00
1.8 | DCNM-L-NXACCK9 | DCNM for LAN Advanced Edition for N1/2/4/5K | 0.00 | 2 | 0% | 0.00 | 0.00
1.9 | N55-M-BLNK | Nexus 5500 Module Blank Cover | 0.00 | 2 | 0% | 0.00 | 0.00
1.10 | N55-PAC-1100W | Nexus 5500 PS, 1100W, Front to Back Airflow | 0.00 | 4 | 0% | 0.00 | 0.00
1.11 | N5596-ACC-KIT | Nexus 5596 Chassis Accessory Kit | 0.00 | 2 | 0% | 0.00 | 0.00
1.12 | N5596UP-FAN | Nexus 5596UP Fan Module | 0.00 | 8 | 0% | 0.00 | 0.00
2.0 | WS-C2960-24TC-S | Catalyst 2960 24 10/100 + 2 T/SFP LAN Lite Image | 725.00 | 3 | 0% | 725.00 | 2,175.00
2.1 | CAB-C13-C14-AC | Power cord, C13 to C14 (recessed receptacle), 10A | 0.00 | 3 | 0% | 0.00 | 0.00

List price for this layout is $174,465, or $58,155 for each 16-server cabinet.

There is sufficient capacity on the Nexus 5596 (only 69 ports are in use) to hang a fourth cabinet from it for an additional $23,915.  The price includes an expansion module for each 5596, a 2960 and a couple of GLC-T modules.  That brings the per-cabinet cost down to $49,595 but raises the oversubscription ratio to 8:1.
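
Here's the same sort of sketch for the fourth-cabinet expansion, using the 160Gb/s pod uplink assumed above:

    # Fourth-cabinet expansion math (same 160Gb/s uplink assumption as above)
    total_list = 174465 + 23915       # original BOM plus the expansion items
    print(total_list / 4)             # 49595.0 -- per-cabinet list price

    access_gbps = 4 * 16 * 2 * 10     # 4 cabinets, 16 servers, 2 x 10Gb/s NICs
    print(access_gbps / 160)          # 8.0 -- the new 8:1 oversubscription ratio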

11 comments:

  1. $50k+ per rack? Madness. No wonder Cisco's stock is getting killed.

  2. Is $50K (list) too much for a high-performance and high-density 10Gb/s access layer like this?

    Honestly I have no idea. People are definitely buying/building topologies like this, but they're not choosing and locating components as carefully as I am, so they're paying more. Sometimes a lot more.

    If you have some alternative to suggest, I'd be curious to see how it compares.

  3. Just shooting from the hip but you could equip each rack with a high performance 24-port 10GE switch such as the Dell Force10 S2410 for roughly $23K per rack list price (including the optic uplinks). No mess of cross cabling between racks either. All server cabling stays in the rack, nice and clean.

    I agree with "Anonymous", $58K per rack is way too much. What's the incremental value justifying such a premium?

    Cheers,
    Brad

  4. Hey Brad, thank you for your input.

    The topology in these examples includes 32 server interfaces in each cabinet, so we'd need two of those switches per cabinet right? Plus Ci$co optics for the core. It starts to sound an awful lot like $50K list again, doesn't it?

    Maybe you meant that $23K would buy two switches per rack?

    If the discount rates are wildly different, then that might sway things again.

    I'm with you on the inter-rack cabling. Awful.

    This topology isn't the one I'd choose to deploy. My preference is for options #3 and #4. I wrote the blog to explore the different Nexus-based strategies after reading Pete Welcher's blog and the discussion we had in the comments: http://www.netcraftsmen.net/resources/blogs/the-new-datacenter.html

  5. Hey Chris,
    Yeah, good catch, I forgot about the 2nd in-rack switch :-)

    Probably a more comparable design would be (2) 48-port Dell Force10 S4810 switches which would connect all (3) racks. At about $25K per S4810 (list) we're back to about $17K per rack ;-)

    Cheers,
    Brad

  6. 3 cabinets include 96 server NICs. Two 48-port switches would only suffice if we skip the uplinks.

    The Force10 S4810 solution sounds a lot like option 2 (two racks and two 48-port 5548s). List price for the 48-port 5548 is around $37K (chassis + 16 port module).

    Does the Force10 solution include MLAG? How wide?

    I didn't address your question about incremental value (though the price gap isn't as wide as we initially thought).

    The answer is:

    I'm not here to sell the gear, nor to defend it. I just wanted to compare some of the different ways this gear can be used to solve a single problem. I wanted to explore the tradeoffs associated with different design decisions.

    Each of these choices has different requirements in terms of management, copper and fiber structured cabling, etc... Quoting the cost of "a switch" as we've done here in the comments doesn't really tell the whole story, and I wanted to consider every facet of the deployment to find the surprises.

    Given the number of google searches for "connect ilo to nexus 2148T" and "connect switch to FEX" that lead people here, it's clear that half-baked solutions abound :-)

    Here are four fully baked solutions (though I may have made unfortunate power cord choices) presented for the purpose of exploring the small differences between them.

  7. Chris,
    The Dell Force10 S4810 has 48 SFP+ ports and 4 QSFP ports. Therefore you can deploy it as 48 ports of 10GE + 4 ports of 40GE, or 64 ports of 10GE. So you have 48 ports for server connections and 160Gb/s remaining for uplinks, for 3:1 oversubscription out of the rack.

    Yes, the S4810 switch is capable of MLAG through a feature called VLT. How wide? Two switches, just like vPC.

    Not trying to spam your comments, just thought I'd chime in when you asked if $50K per rack was too much. ;-)

    Cheers,
    Brad

  8. Brad,
    It's hard for me to imagine thinking of your comments as spam. Vendor vs. Vendor wasn't what I intended for this series, but it's interesting and I really appreciate the input.

    Okay, I'm following you on the S4810. 48 *access* ports does mean that a pair of them can support 3 of these server cabinets. Cool.

    When I asked "how wide?" I was thinking of uplink bandwidth, rather than chassis count.

    I'm under the impression that 802.1AX (LACP) limits an aggregation to eight active link members and that vPC is being (ahem) clever in the way it supports sixteen members.

    But using the Force10 uplinks as 40Gb/s makes the width question moot. With just four links per switch, the Force10 can produce an LACP-compliant uplink that's double the capacity currently available with the Nexus 5K.

  9. Login via Google account seems broken, so posting as anonymous again.

    You can achieve the same oversubscription ratio with two Arista 7050T-64 switches, QSFP+ twinax copper for MLAG, and QSFP+ fiber for uplinks. 10GBASE-T to the servers. Keep your 2960 management network with dual uplinks per switch:

    Item|Qty|Unit|Total
    Arista 7050T-64 switches|2|$25,000|$50,000
    Cat 6A patch|96|$5|$480
    Arista CAB-Q-Q-1M (MLAG)|2|$200|$400
    Arista QSFP-SR4|4|$2,000|$8,000
    Cisco 2960 switches|3|$725|$2,175
    Cisco GLC-T 1000BASE-T SFP|6|$395|$2,370

    Total|$63,425
    per-rack|$21,142

    You can achieve similar with Force10, HP, whatever. The Arista prices are not discounted heavily as far as I can tell, but I do not have access to their list pricing.

  10. Well this is a very good series of posts you have put together here Chris! Sometimes people don't want to dig through a 500 page design guide just to get some simple and accurate design guidance like you have done here.

    The Nexus 5K is a good switch. God knows I've sold a lot of them in my days at Cisco :-)

    Cheers,
    Brad

  11. Brad, only problem with the S2410 is that it runs 'SFTOS' and from what I was told, this isn't going to change. FTOS is great, but I feel sorry for anyone that has to manage a data center filled with SFTOS based switches. :)

    And when will VLT (MLAG style) be officially supported? Even today VLT is not officially supported or widely in use by customers. Force10 customers have to stack switches, and the failover from primary to standby stack unit members takes 30-40 seconds. Less than optimal compared to an MLAG-style solution.
