Introduction to the New Cisco UCS 6296UP Fabric Interconnect and 2204XP I/O Module

May 18th, 2012

Today Cisco began shipping two brand new additions to the Cisco UCS fabric portfolio – a new Fabric Interconnect (6296UP) and a new I/O module (2204XP).

In 2011, Cisco began shipping the UCS 6248UP Fabric Interconnect. This year, Cisco augments the Fabric Interconnect portfolio with the Cisco UCS 6296UP Fabric Interconnect. You can think of the 6296UP Fabric Interconnect as a 6248UP Fabric Interconnect on steroids (they haven't banned Vitamin S for blades yet). The 6296UP provides up to 96 unified fabric ports, 1.92 Terabits of switching capacity, 2.0 µs latency, and support for up to 20 UCS B Series chassis (or up to 160 UCS servers – either B Series or C Series).
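If you're curious where the 1.92 Terabit figure comes from, it falls straight out of the port count. Here's a quick sanity check of the arithmetic (a sketch, assuming capacity is quoted full duplex, which is the usual convention for switch specs):

```python
# Sanity check on the quoted 6296UP switching capacity.
ports = 96            # unified fabric ports on the 6296UP
port_speed_gbps = 10  # each port runs at 10 Gigabit Ethernet
duplex_factor = 2     # capacity quoted full duplex (tx + rx)

capacity_tbps = ports * port_speed_gbps * duplex_factor / 1000
print(capacity_tbps)  # 1.92
```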

 

In 2009, Cisco launched the Unified Computing System (UCS) with an I/O module (a.k.a. FEX or fabric extender) called the UCS 2104XP. The 2104XP provided 4x uplinks and 8x downlinks to the B Series blade servers. In 2011, Cisco launched the 2208XP I/O module, which provided 8x uplinks and 32x downlinks. This year, Cisco adds the 2204XP to the portfolio. You can think of the 2204XP as a 2104XP on steroids (you're seeing a theme here, huh?). The 2204XP provides the same number of uplinks as the 2104XP, but it cuts the latency almost in half to ~500 ns, doubles the server downlinks to 16x 10GE unified fabric ports (2 ports per server slot in the 5108 blade chassis), doubles the CoS queues to 8, adds 64 policers per 8 ports, and adds support for port channeling on both the uplinks and the downlinks.
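The uplink/downlink counts above also tell you the worst-case oversubscription each FEX generation implies inside a chassis. A minimal sketch of that arithmetic, using the port counts from the article (all ports assumed 10GE):

```python
# Uplink/downlink counts for the three UCS I/O module (FEX) generations,
# and the worst-case within-chassis oversubscription each implies.
fex_models = {
    # model: (uplinks, downlinks) -- all 10GE ports
    "2104XP": (4, 8),
    "2208XP": (8, 32),
    "2204XP": (4, 16),
}

for model, (up, down) in fex_models.items():
    ratio = down / up
    print(f"{model}: {up}x up, {down}x down -> {ratio:.0f}:1 oversubscription")
```

Note the ratio is worst case: it only bites when every downlink is driven at line rate simultaneously, and port channeling on the 2204XP/2208XP spreads flows across the uplinks.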

Now that all the boring marketing stuff is out of the way, let’s talk about how the addition of a new Fabric Interconnect really helps UCS customers. Primarily, the size of a Fabric Interconnect is driven by the need for speed…well, actually, bandwidth, but I just wanted to use a cliché. Customers do NOT choose between the Cisco UCS 6248UP and the Cisco UCS 6296UP because of the number of chassis they support… both Fabric Interconnects support the same number: 20 chassis or 160 servers (blades or rack servers). Customers choose between the Fabric Interconnects because of the need to provide additional bandwidth to their UCS servers… plain and simple. And the great thing about UCS is how flexible it is – the CUSTOMER gets to choose how big of a Fabric Interconnect to purchase AND how many ports to throw at downlink bandwidth vs. uplink bandwidth. Alternate solutions force the customer to dedicate at least two 10GE uplinks and two FC uplinks for every 16 servers…whether you need it or not…because that’s how the mini-rack was designed over a decade ago.

Fortunately for UCS customers, they don’t live by the old mini-rack rules anymore. They get to choose to add ports for the blade servers for only one reason… they need more bandwidth… and they get to choose to purchase those ports ONLY when they need to use them. In other words, a customer can purchase the 6296UP without having to incur the cost of all 96 ports up front. Cisco licenses the ports on the Fabric Interconnect so the customer can choose to “buy” them only when they need them. Fibre Channel has been doing that for years and most customers prefer that cost model.

Did you know you can design a UCS deployment so that the A side fabric and the B side fabric have ZERO over-subscription between servers (east-west traffic) and the whole system has a near 2 to 1 over-subscription rate on the uplinks to the core (north-south traffic)? Yeah, you can…if you REALLY want to. However, most customers simply DO NOT need that kind of bandwidth per server. In fact, I’ve been working with UCS for 3.5 years and I’ve presented to several hundred customers…not one has asked for it. But, for those of you who just want to see that it’s possible, I present the deployment diagram below. The example assumes all 64 servers are simultaneously transmitting at a full 20 Gigabit each (a very unlikely scenario). You’ll notice that the new 6296UP provides enough ports to allow all FEXs (2208XP in this example) to provide up to 160 Gigabit of bandwidth to each chassis (or 20 Gigabit of bandwidth to each half-width server) and STILL provide up to 56x 10GE uplinks for northbound connectivity. You’ll also notice charts that show a full mapping of every 2208XP to every 2208XP and the amount of bandwidth (80 Gig each) and over-subscription (1:1 or no over-subscription) between them. The third chart shows the amount of chassis uplink bandwidth available for each 10 GE blade server NIC. Basically, every server has two 10GE NICs and every NIC has a full 10 Gigabit of bandwidth leaving the chassis.
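The “near 2 to 1” north-south figure falls out of the numbers above. Here’s a back-of-the-envelope sketch, using only figures from this example (64 half-width servers, 20 Gigabit of chassis uplink bandwidth per server, 56x 10GE northbound uplinks):

```python
# Back-of-the-envelope math for the example deployment above.
servers = 64
gbps_per_server = 20       # 2x 10GE NICs, one per fabric, no oversubscription
northbound_uplinks = 56    # 10GE uplinks left over on the pair of 6296UPs
uplink_speed_gbps = 10

east_west_gbps = servers * gbps_per_server                  # server-facing bandwidth
north_south_gbps = northbound_uplinks * uplink_speed_gbps   # core-facing bandwidth
oversub = east_west_gbps / north_south_gbps
print(f"{east_west_gbps}G server-facing vs {north_south_gbps}G northbound "
      f"-> {oversub:.2f}:1")
```

That works out to 1280 Gigabit server-facing against 560 Gigabit northbound, or roughly 2.29:1 – the “near 2 to 1” rate quoted above, and only under the unlikely everyone-at-line-rate scenario.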

 

Example Deployment of UCS Using 6296UP

 

Interestingly enough, I still hear lots of FUD (Fear, Uncertainty, & Doubt) from UCS’s competitors regarding UCS’s networking capabilities. I always laugh when I see/hear it because that’s the LAST thing someone should criticize Cisco for. You know, Cisco’s shipped a couple of networking products before, so I’m guessing they have at least a little experience at it. Criticizing Cisco’s networking ability is like criticizing an Irishman’s drinking ability… (but I digress). I’ll cover the competitive comparison and fight the FUD with facts in a future blog article.

 

In summary, the new additions to the UCS fabric portfolio (especially the 6296UP) provide yet more options for UCS customers and further extend the flexibility and extensibility of the UCS platform.
