10Gb/s Servers? Rack your Nexus 5500 with the core switches. Connect servers to Nexus 2232s.
I know of several networks that look something like this:
[Figure: Top Of Rack Nexus 5500]

Instead, what about this:

[Figure: Centralized Nexus 5500]
We save lots of money on optics by moving the Nexus 5500 out of the server rack and into the vicinity of the Nexus 7000 core. Then we spend those savings on Nexus 2232s, FETs, and TwinAx. These two deployments cost almost exactly the same amount.
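To see where the money moves, here's a minimal back-of-the-napkin sketch (Python). The part names are real, but every price and port count below is a placeholder assumption, not a number from this post; substitute figures from your own quote:

```python
# Rough per-pod cost comparison: top-of-rack 5500 vs. centralized 5500 + FEX.
# Every price here is a PLACEHOLDER for illustration -- use your own quotes.
PRICE = {
    "sr_optic": 1000,  # 10GBASE-SR module (hypothetical)
    "fet": 250,        # Fabric Extender Transceiver (hypothetical)
    "twinax": 100,     # SFP+ direct-attach cable (hypothetical)
    "n2232": 10000,    # Nexus 2232 fabric extender (hypothetical)
}

def tor_design(core_links):
    """Top-of-rack 5500: every link to the distant 7000 core needs an
    SR optic at both ends."""
    return core_links * 2 * PRICE["sr_optic"]

def centralized_design(core_links, fex_count, fex_uplinks):
    """Centralized 5500: short TwinAx to the core, plus 2232s in the rack
    and a FET at each end of every FEX fabric uplink."""
    return (core_links * PRICE["twinax"]
            + fex_count * PRICE["n2232"]
            + fex_count * fex_uplinks * 2 * PRICE["fet"])

print(tor_design(core_links=16))
print(centralized_design(core_links=16, fex_count=2, fex_uplinks=8))
```

With these made-up prices the two totals land within a few thousand dollars of each other; the point is the structure of the trade, not the exact figures.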
The pricing is pretty much a wash, but we end up with the following advantages:
- The ability to support 10GBASE-T servers - I expect this to be a major gotcha for some shops in the next few months.
- Inexpensive (a relative term) 1Gb/s ports at top of rack for low-speed connections
- Greater flexibility for oversubscription (these servers are unlikely to need line rate connections)
- Greater flexibility for equipment placement (drop an inexpensive FEX wherever you need it)
- Look at all those free ports! 5K usage has dropped from 24 ports to 8 ports each! Think of how inexpensive the next batch of 10Gig racks will be if we only have to buy 2232s. And the next. And the next...
It's not immediately apparent, but oversubscription is an advantage of this design. With a top-of-rack 5500, you can't oversubscribe the thing; you must dedicate a 10Gb/s port to every server whether that's sensible or not. With FEXes you get to choose: oversubscribe them, or don't.
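A trivial sketch of that choice (Python; the 32-host-port / 8-fabric-uplink figures are the real Nexus 2232 port counts, the loadings are illustrative):

```python
def oversubscription(host_ports, uplink_ports, port_gbps=10):
    """Host-facing bandwidth divided by uplink bandwidth."""
    return (host_ports * port_gbps) / (uplink_ports * port_gbps)

# A Nexus 2232 offers 32 host ports and up to 8 fabric uplinks.
print(oversubscription(32, 8))  # 4.0 -> fully loaded: 4:1 oversubscribed
print(oversubscription(20, 8))  # 2.5 -> the ratio used in this example
print(oversubscription(8, 8))   # 1.0 -> line rate, if that's what you need
```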
The catches with this setup are:
- The core has to be able to support TwinAx cables: the first-generation 32-port line cards must use the long "active" cables, and the M108 cards require OneX converters, which list for $200 each. And check your NX-OS version.
- You need to manage the oversubscription.
Inter-pod (through the core) oversubscription is identical at 2.5:1 in both examples. Intra-pod oversubscription rises from 1:1 to 2.5:1 with the addition of the FEX. Will it matter? Maybe. Do you deploy applications carefully so that server traffic tends to stay in-pod or in-rack, or do your servers get installed without regard to physical location (an "any port / any server" mentality), with VMware DRS moving workloads around?
We can cut oversubscription in this example down to 1.25:1 for just $4000 in FETs and 16 fiber strands by adding links between the 5500s and the 2232s. This is a six-figure deployment, so that should be a drop in the bucket. You wouldn't factor the cost of the 5500 interfaces into this comparison because we're still using fewer of them than in the first example.
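For the curious, here's the arithmetic behind those numbers as a sketch (Python). The two-FEX / ten-server allocation is my assumption, chosen because it reproduces the figures above, and ~$250 per FET is simply $4000 divided by 16:

```python
def oversub(hosts, uplinks):
    # Every port runs at 10Gb/s, so the ratio reduces to a port-count ratio.
    return hosts / uplinks

# Assumed allocation (not spelled out in the post): two 2232s in the pod,
# 10 servers each, fabric uplinks doubled from 4 to 8 (a 2232 tops out at 8).
print(oversub(10, 4))      # 2.5  -> intra-pod oversubscription before
print(oversub(10, 8))      # 1.25 -> after doubling the FEX uplinks

added_links = 2 * 4        # two FEXes x four new uplinks each
fets = added_links * 2     # a FET at each end of every link -> 16 FETs
strands = added_links * 2  # each link is one duplex fiber pair -> 16 strands
print(fets, strands)       # 16 16; 16 FETs x ~$250 = ~$4000
```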
I recognize that this topology isn't perfect for everybody, but I believe it's a better option for many networks. It Depends. But it's worth thinking about, because it might cost a lot less and be a lot more flexible in the long run.