Friday, July 23, 2010

Nexus Data Center Switching Design - Part 2

The previous post detailed a data center design using Cisco's Nexus 5020 and Nexus 2148T Fabric Extender (FEX).

I revisited that design recently, with a new environment in mind. The new environment's requirements are pretty much the same, with the following changes:
  • There will be a small population of 10 Gigabit servers
  • The average count of server NICs will be way down
Otherwise, it's not much different. NIC density will be going down because the servers with 9 NICs will be replaced by servers with redundant 10Gb/s connections (plus a copper iLO).

Since that last build, Cisco has added to the Nexus lineup. The most interesting additions are the 2248TP fabric extender and the Fabric Extender Transceiver (FET).

The 2248TP has a bunch of advantages over the 2148T, including QoS features, downstream etherchannel support, some local ACL processing capability, and a lower price! But the best parts are its support for 100Mb/s clients and for the FET.

Last year I put the Nexus 5020s into the server row, using the inexpensive twinax cables for 5000->2000 downlink. That decision saved 10% of the project budget, and seemed so cutting edge at the time. But the FET has made that layout obsolete. The 5020s get centralized now.

FET?
The FET looks just like any other SFP fiber transceiver, but can only be used between the Nexus 5000 (update: or 7000!) and the new fabric extenders, allowing them to be up to 100m away. It's not an Ethernet transceiver, but a fabric extender transceiver, so it won't interoperate with SR, LR, or any other Ethernet optics. It costs around 10% of what those other optics cost.

List pricing on the old and new FEXes is about the same:
  • $10,000 for the 2148T, plus around $1,000 worth of twinax cables.
  • $11,000 for the 2248TP, which comes bundled with 8 FETs (enough to build 4 uplinks).
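Counted per FEX with the uplink cabling included, the two generations land in roughly the same place. A quick tally (a sketch in Python, using the list prices above):

    # Effective per-FEX list cost, old generation vs. new.
    old_fex = 10000 + 1000      # 2148T plus roughly $1,000 of twinax uplinks
    new_fex = 11000             # 2248TP bundle, 8 FETs (4 uplinks) included
    print(old_fex, new_fex)     # 11000 vs. 11000 -- about the same per FEX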
Last year, I called for 48 little Catalysts to support 100Mb/s interfaces (mostly iLO ports). The 2248TP supports 100Mb/s gear, so those cheapie 2960s are gone.

I really liked the original 2148T Fabric Extender. Now it's dead to me.

Design Changes
So, what does the FEX update do to last year's design?
  • The little Catalysts at the top of every rack are gone.
  • 15% of the Nexus 5020 ports were previously dedicated to the little Catalysts.  Now they're available for servers.
  • The Nexus 5020s move from center of row to somewhere near the Nexus 7000 core.
  • The eight distributed Lantronix boxes managing 64 switches become one centralized Lantronix box managing 16 switches.
  • NO COPPER needs to be pulled into the server row.
Other changes:
  • Due to the new 10Gb/s server requirement, I'll be adding ports to the 5020s.
  • Those 10Gb/s servers will be doing IP storage, instead of SAN, so I'll be looking for additional uplink bandwidth.
  • Last year's design included a dedicated EtherChannel for vPC keepalive traffic. It's overkill, and now I need the ports. vPC keepalive traffic will move to the Nexus management (mgmt0) interfaces.
  • By putting the 5020s in close proximity to the core, we'll be able to use twinax cables for these links in the near future. This can save around $330K by turning $3000 SR links into $400 copper links. There are 128 of these links (see the quick arithmetic after this list). The options here are:
    • Passive twinax cables connected to OneX Converters installed in the N7K-M108X2-12L cards in the Nexus 7000. This should work with the "Delhi" release of NX-OS, and is the option I've selected for this exercise.
    • Passive twinax cables connected to the unannounced N7K-M132XP-12L cards in the Nexus 7000.
    • Active twinax cables connected to the N7K-M132XP-12 cards in the Nexus 7000. This should work with the "Cairo" release of NX-OS.
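The $330K figure is just the per-link price delta multiplied across all 128 uplinks. A quick back-of-the-envelope check (a sketch in Python, using the list prices quoted above):

    # Savings from using twinax instead of SR optics on the 5020 -> 7000 uplinks.
    links = 128                 # 5020 -> 7000 uplinks across the environment
    sr_link = 3000              # 10GBASE-SR link, list price quoted above
    twinax_link = 400           # copper twinax link, list price quoted above
    print(links * (sr_link - twinax_link))   # 332800 -- "around $330K"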

The result looks like this:


And each row is wired up like this:


What have we gained?
  • Flexible layout: We're no longer constrained to a 6-rack row length by the 5000->2000 CX1 cables. FEXes can live anywhere in the data center.  I've left the layout alone for a straightforward comparison.
  • No overhead copper runs. (unspecified $)
  • Fewer overhead fiber runs due to fewer SAN clients (unspecified $)
  • 144 additional Rack Units for servers
  • 55 fewer devices to manage (48 Catalysts and 7 Lantronix)
  • Support for 64 dual-homed 10-gig servers
  • 2X more throughput: It's 80Gb/s in both scenarios, but last year's design was oversubscribed 2:1

Pricing

Each row houses the following gear, which together lists for $161,280:

Part Number        | Description                                            | Quantity
N5020P-6N2248TF-B  | Nexus 5020P/6x2248TP/48xFET Bundle                     | 2
SFP-H10GB-CU1M     | 10GBASE-CU SFP+ Cable 1 Meter                          | 4
SFP-H10GB-CU3M     | 10GBASE-CU SFP+ Cable 3 Meter                          | 8
CAB-C13-C14-JMPR   | Recessed receptacle AC power cord 27                   | 4
CAB-C13-C14-2M     | Power Cord Jumper, C13-C14 Connectors, 2 Meter Length  | 24
N2K-C2248TP-BUN    | Nexus 2248TP for N5K/N2K Bundle                        | 12
N5K-C5020P-BUN-E   | Nexus 5020P in N5020P-N2K Bundle                       | 2
FET-10G            | 10G Line Extender for FEX                              | 96
N5020-ACC-KIT      | Nexus 5020 Accessory Kit, Option                       | 2
N5K-M1-BLNK        | N5000 1000 Series Expansion Module Blank               | 4
N5K-PAC-750W       | Nexus 5020 PSU module, 100-240VAC 750W                 | 4
N5KUK9-421N1.1     | Nexus 5000 Base OS Software Rel 4.2(1)N1(1)            | 2

Additionally, we'll need 8 N7K-M108X2-12L cards and 64 OneX converters. Together, that's $364,800.

Throw in a Lantronix SLC-16, and the grand total is $1,656,740.
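That grand total is a straightforward sum. A quick tally (a sketch in Python; the eight-row count is my reading of the sixteen 5020s at two per row, per the Lantronix note above):

    # Rough tally of the quoted grand total.
    rows = 8
    per_row_gear = 161280        # row bundle priced in the table above
    n7k_extras = 364800          # 8x N7K-M108X2-12L cards + 64 OneX converters
    grand_total = 1656740        # figure quoted above

    row_total = rows * per_row_gear                # 1290240
    lantronix = grand_total - row_total - n7k_extras
    print(row_total, lantronix)                    # the SLC-16 accounts for the ~1700 remainder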

Thanks to the new FEX and FET, the whole build, with simplified wiring, no overhead copper, twice as much bandwidth, more server space, less gear to manage, and 128 10-gig server ports, costs almost 10% less than last year's design.


Not Enough Bandwidth?
...By doubling up on Nexus 7000 modules, OneX converters, twinax cables and adding ports to the Nexus 5020s, we can scale the environment up to line-rate 160 Gb/s in each row (with 80Gb/s crossconnect), now with 192 10-gig server ports.

We've reached $2,212,580. It's a big number, but only 23% bigger than last year's design, and it sports 4 times the bandwidth, 192 new 10-gig server ports, and reduced cabling and maintenance costs, all thanks to a couple of seemingly tiny details: 100Mb/s support on the new FEX, and the new FET module.
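The two year-over-year comparisons hang together, too. Treating last year's total as the unknown (a sketch in Python, using only the figures quoted in this post):

    # Sanity check on the comparisons to last year's (unstated) total.
    fet_design = 1656740           # this design, 80Gb/s per row
    scaled_design = 2212580        # the line-rate 160Gb/s variant

    implied_last_year = scaled_design / 1.23       # "23% bigger" -> ~1.80M baseline
    discount = 1 - fet_design / implied_last_year  # ~0.08, i.e. "almost 10% less"
    upgrade_delta = scaled_design - fet_design     # ~556K to double the row bandwidth
    print(round(implied_last_year), round(discount, 2), upgrade_delta)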

I think this might be a reasonable project to build right now for somebody with lots of legacy servers, no place to put them, and an idea they might want some 10-gig in the near future. But I don't expect to be trotting this design out twelve months from now. 10GBASE-T is right around the corner, and it will have a big impact on the ratio of 10G/1G servers in new data centers. If I were building the environment sketched here, it would have lots of space dedicated to future 10-gig-only top-of-rack switching.

Upcoming features (CX-1 support on the N7K, FEX support on the N7K) will likely change things again soon.

6 comments:

  1. I came across this blog while investigating whether it is possible to use regular SFP-10G-SR simple copper twinax to connect between the 5000 and 2000. I'd appreciate any information you can share.

  2. SFP-10G-SR is not copper twinax. It's a multimode fiber module. You can use the following modules for that interconnection:
    SR (multimode)
    LR (singlemode)
    CU (twinax up to 10m)
    FET (multimode for 22xx only - not supported by 2148T)

    The best choice is the FET unless you need to run over 100m. If you use twinax, you'll learn to hate it quickly.

  3. Great article! Would you advise against using Twinax and sticking with the FET transceivers?

  4. I wouldn't use anything other than FET to uplink a fabric extender these days.

    TwinAx is still a good (cost effective) choice for many non-FEX interconnections.

  5. why do you hate twinax? seems to work just fine?

  6. I guess I don't *hate* twinax. My August 27 comment was worded a bit strongly :-)

    When we're talking 5K<--->FEX links, TwinAx offers no benefit over fiber and has the following limitations:
    - short reach
    - difficult to work with
    - bulky
    - limited availability (compared to MMF patch cords)
    - requires direct runs (can't use patch panels)
