UCS 2.0: Cisco Stacks the Deck in Las Vegas

July 15th, 2011

This week at CiscoLive 2011 in Las Vegas, Cisco announced new additions to the Cisco UCS fabric architecture. In addition to the existing UCS fabric hardware, UCS customers now have a choice of a new Fabric Interconnect, a new chassis I/O module, and a new Virtual Interface Card.  The 6248UP Fabric Interconnect delivers double the throughput, almost half the latency, and more than quadruple the virtual interfaces per downlink, while the new 2208XP chassis I/O module delivers double the chassis uplink bandwidth and quadruple the server downlinks.  Last but not least, the 1280 Virtual Interface Card (VIC) server adapter provides quadruple the fabric bandwidth for UCS blade servers by delivering two active 40 Gbps paths per server.

Did I mention these new announcements were additions to the UCS product portfolio, not replacements? I’m not sure I did, so I’ll repeat it… UCS customers now have three Fabric Interconnects, two chassis I/O modules, two Virtual Interface Cards, and multiple traditional network adapters to choose from – and they’re all interoperable.

In addition to the new fabric devices, the soon-to-be-released UCS 2.0 firmware adds several features for existing and future UCS customers: support for disjoint Layer 2 networks, UCS Service Profile support for iSCSI boot, and support for VM-FEX on Red Hat KVM.

 

Additions to the UCS Fabric Portfolio

The UCS 6248UP Fabric Interconnect

The UCS 6248UP Fabric Interconnect, similar to the Nexus 5548 platform, provides up to 48 Unified Ports in a single Rack Unit (1 RU). Unified Ports are ports that accept either Ethernet or Fibre Channel transceiver (SFP+/SFP) modules. As such, the 6248UP can provide practically any distribution of Ethernet and Fibre Channel uplinks needed to meet a customer’s design and bandwidth requirements.

Don’t let the tiny package fool you… While the 6248UP occupies the same single rack unit as the UCS 6120 Fabric Interconnect, it delivers double the throughput, almost half the latency, more than quadruple the virtual interfaces per downlink, and quadruple the VLAN capacity. Here’s a chart comparing the three Fabric Interconnects.

The UCS 2208XP Blade Chassis I/O Module (Fabric Extender)

The UCS 2208XP Blade Chassis I/O module is a new choice for UCS blade customers. Compared to the existing UCS 2104 blade chassis I/O module, the 2208 doubles the chassis uplinks and quadruples the server downlinks. Using dual 2208XP modules provides up to 160 Gbps of uplink bandwidth per chassis and up to 80 Gbps of downlink bandwidth to each half-slot server. In addition, the 2208XP provides eight class-of-service queues.
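The bandwidth figures above are simple multiplication; here’s a quick sketch using the port counts from this post (8 fabric uplinks per 2208XP, 4 downlinks per half-width slot per 2208XP, two IOMs per chassis):

```python
# Sanity check of the per-chassis bandwidth figures quoted above.
# Assumes the numbers from this post: each 2208XP has 8x 10GE fabric
# uplinks and 4x 10GE downlinks per half-width slot, and a chassis
# runs two IOMs (one per fabric).

GBPS_PER_PORT = 10
IOMS_PER_CHASSIS = 2

uplinks_per_2208 = 8
downlinks_per_slot_per_2208 = 4

chassis_uplink_bw = uplinks_per_2208 * GBPS_PER_PORT * IOMS_PER_CHASSIS
slot_downlink_bw = downlinks_per_slot_per_2208 * GBPS_PER_PORT * IOMS_PER_CHASSIS

print(chassis_uplink_bw)  # 160 Gbps of uplink bandwidth per chassis
print(slot_downlink_bw)   # 80 Gbps of downlink bandwidth per half slot
```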

The UCS 1280 Virtual Interface Card (VIC)

The 1280 VIC is the world’s first 8-port 10GE adapter for an x86 blade server, providing up to 40 Gbps of throughput on each of its dual fabrics. The 1280 VIC consists of two groups of 4x 10GE ports that are automatically port channeled when paired with a UCS 2208XP chassis I/O module. All eight blade server slots in the UCS 5108 blade chassis can be equipped with the 1280 VIC, giving every server dual active 40 Gbps paths.

Using the Service Profile in UCS Manager, a user defines the number of NICs and HBAs that should be visible to the operating system, up to a maximum of 116 virtual interfaces per 1280 VIC (a software-imposed limit at FCS). The operating system or hypervisor host “sees” the Service Profile-defined NICs and HBAs, not the two 40 Gbps paths. The Service Profile also lets the user set each NIC’s QoS settings and speed.
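As a purely illustrative sketch of that model (the names and structures below are hypothetical, not the actual UCS Manager API), the Service Profile’s adapter definition boils down to a list of virtual interfaces with per-interface policy:

```python
# Purely illustrative model of what a Service Profile defines for a
# 1280 VIC -- NOT the UCS Manager API; all names here are hypothetical.
MAX_VIFS_PER_1280_VIC = 116  # software-imposed limit at FCS, per the post

def define_adapter(vnics, vhbas):
    """Return the virtual interfaces the OS will 'see' on the adapter."""
    vifs = vnics + vhbas
    if len(vifs) > MAX_VIFS_PER_1280_VIC:
        raise ValueError("exceeds the 116-VIF limit per 1280 VIC")
    return vifs

# The OS sees these NICs/HBAs, not the two 40 Gbps fabric paths.
adapter = define_adapter(
    vnics=[{"name": "eth0", "fabric": "A", "qos": "gold", "speed_gbps": 10},
           {"name": "eth1", "fabric": "B", "qos": "silver", "speed_gbps": 10}],
    vhbas=[{"name": "fc0", "fabric": "A"},
           {"name": "fc1", "fabric": "B"}],
)
print(len(adapter))  # 4 virtual interfaces presented to the OS
```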

 

Here is a diagram that shows the existing UCS fabric components on the left and the new additions to the UCS portfolio on the right.

 

Here is the full selection of UCS fabric components now available to UCS customers. Just to reiterate what I said at the beginning of this post, the new hardware announced this week at CiscoLive is an addition to the UCS fabric portfolio, not a replacement of any hardware in the portfolio.

 

Competitive Comparison: Compared to an HP BladeSystem, a UCS B200 blade server with the UCS 1280 VIC plus the UCS 2208 IOM would be similar to deploying an HP BL4xx series server with 2-port 10GE FlexFabric LOMs, a 2-port 10GE FlexFabric mezzanine card in Mezz 1, a 4-port 10GE FlexFabric mezzanine card (which does not exist) in Mezz 2, plus 8 Virtual Connect FlexFabric modules (at $18,499 each). In other words, $147,992 per HP enclosure just for the interconnects connecting 16 HP blade servers, which is several times the cost of two UCS chassis with dual 2208 I/O modules connecting 16 UCS blade servers.
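The dollar figure quoted above is straightforward arithmetic on the list price given in this post:

```python
# Check of the HP interconnect cost figure quoted above, using the
# $18,499 list price per Virtual Connect FlexFabric module from the post.
flexfabric_module_price = 18_499
modules_per_enclosure = 8

interconnect_cost = flexfabric_module_price * modules_per_enclosure
print(interconnect_cost)  # 147992 -> $147,992 per enclosure for 16 blades
```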

 

Support for Chassis Uplink Port Channeling and Server Adapter Port Channeling

In addition to the increased bandwidth, one of the most beneficial features of the next-generation UCS hardware is the ability to port channel both the 2208 IOM chassis fabric uplinks and the server adapter port uplinks on the 1280 VIC. The major benefit of this feature is better distribution of server traffic load across more links – across both the server ports and the fabric uplink ports.

For the fabric uplinks, the user can choose whether to use discrete mode (today’s behavior of pinning blade slots to specific chassis uplinks) or port channel mode (creating a single virtual chassis uplink that’s shared by all blade servers in the chassis). In port channel mode, users can use 4, 8, or 16 uplinks depending on the amount of bandwidth needed per chassis.
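The difference between the two modes can be sketched as follows (the slot-to-uplink pinning rule shown for discrete mode is a simplified illustration, not the exact UCSM pinning table):

```python
# Illustrative contrast of the two chassis-uplink modes described above.
# The modulo pinning rule for discrete mode is an assumption for
# illustration only, not Cisco's actual pinning algorithm.

def discrete_mode_uplink(slot, num_uplinks):
    """Discrete mode: each blade slot is statically pinned to one uplink."""
    return slot % num_uplinks

def port_channel_mode_uplinks(num_uplinks):
    """Port channel mode: every blade shares the whole virtual uplink."""
    return set(range(num_uplinks))

# Discrete: slot 3 always rides one fixed 10GE uplink.
print(discrete_mode_uplink(slot=3, num_uplinks=4))
# Port channel: slot 3's flows may use any member of the channel.
print(port_channel_mode_uplinks(num_uplinks=4))  # {0, 1, 2, 3}
```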

Fabric uplink port channels require both the 6248 fabric interconnect and the 2208XP I/O module.

At the server port level, port channeling between the 1280 VIC and the 2208 is configurationlessable (SIC – I’m working on my presidential credentials by making up words). In other words, they are channeled automagically so that, in effect, the 1280 VIC behaves as a 2 port 40 Gbps server adapter. Server adapter port channels require both the 2208XP I/O module and the 1280 VIC server adapter.

NICs or HBAs (or VMs when using VM-FEX) created in the Service Profile have their traffic load balanced across all ports on the fabric to which they’re pinned. This means that any single NIC has access to up to 40 Gbps of bandwidth for transmits or receives (39.6 Gbps per the throughput test, to be exact). The traffic is load balanced across the channel based on MAC, IP, and TCP/UDP information, so that multiple conversations from any particular NIC/HBA/VM are load balanced across multiple 10GE ports. Either side A or side B (diagram below) can burst up to 40 Gbps, but the two sides together cannot exceed the PCIe Gen 2 x16 bus limitation of 64 Gbps. I can already hear the competitive FUD coming… 😉 If you need more than 64 Gbps of bandwidth in a half slot server or more than 128 Gbps of bandwidth in a full slot server… PLEASE let me know. I’d really like to buy you a beer and find out what you’re doing with all that bandwidth.
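The flow-based distribution described above can be sketched with a toy hash. The actual hardware hash algorithm differs, but the behavior is the same: each conversation maps consistently to one member link, and many conversations from the same vNIC spread across all members:

```python
# Illustrative sketch of per-flow hashing across a 4-member port channel
# (the 4x 10GE links between a 1280 VIC and a 2208XP on one fabric).
# The real ASIC hash differs; the point is that each flow (MAC/IP/port
# tuple) consistently maps to one member link.
import hashlib

CHANNEL_MEMBERS = 4

def member_for_flow(src_mac, dst_ip, dst_port):
    digest = hashlib.md5(f"{src_mac}|{dst_ip}|{dst_port}".encode()).digest()
    return digest[0] % CHANNEL_MEMBERS

# Different conversations from the same vNIC can land on different links:
flows = [("00:25:b5:00:00:1f", "10.0.0.9", p) for p in range(5000, 5008)]
links = {member_for_flow(*f) for f in flows}
print(links)  # some subset of {0, 1, 2, 3}
```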


Backwards Compatibility and Interoperability

As you’ve probably already figured out, it’s very important to me to emphasize that Cisco’s announcement this week is about NEW ADDITIONS to the UCS fabric portfolio. I also want to specifically point out that the new and existing UCS fabric products are backwards compatible and interoperable (to the least common denominator of features). For example, if you own 6120s, 2104s, and M81KR CNAs today and decide to replace the 6120s with 6248s… it’s supported and works just fine. You want to swap the 2104s for 2208s and keep the 6120s/6140s and M81KRs? Great! We’ll support it. Would you like to start deploying the new 1280 VIC but keep the existing 6120s and 2104s? Perfect! Supported! Would you like to deploy your new chassis with the 2208 but not retrofit your existing chassis, with all of them connected to the same pair of 6120s, 6140s, or 6248s? Excellent! Supported! Last but not least, we’ll support mixed combinations of 6120s/40s + 6248s during upgrades, for example, but not as a permanent production deployment.

One thing to keep in mind, though, is that some features require the new hardware. For example, without both the 2208s and a 1280 VIC, you can’t port channel the ports on the server adapter. And without both the 6248s and 2208s, you can’t port channel the chassis fabric uplinks.

 

UCS 2.0 Platform Software Features

In addition to the new hardware discussed above, the UCS 2.0 firmware will add several platform or software features that will be available across both existing and future UCS deployments.

Support for Layer 2 Disjoint Networks in End Host Mode

UCS adds support for flexible VLAN configurations on Fabric Interconnect uplink ports while using End Host Mode. This feature allows extremely flexible UCS deployments that support almost any combination of upstream network configurations.

 

iSCSI Boot Support in UCS Service Profile

iSCSI boot is now supported in the UCS Service Profile for several of the UCS Converged Network Adapters – including the M81KR, 1280 VIC, and others. In addition to NICs and HBAs, users can now create iSCSI NICs for UCS servers. Unlike other solutions (ahem, HP FlexFabric), the UCS fabric is more flexible, allowing the creation of FC HBAs, iSCSI NICs, and Ethernet NICs all on the same server.

Support for VM-FEX in Red Hat KVM

Cisco’s Fabric Extender (FEX) technology can be deployed at several levels (see graphic below) – Rack FEX (like a Nexus 5000 + 2000), Chassis FEX (UCS 6248 + 2208), Adapter FEX (M81KR or 1280 VIC) and VM-FEX (M81KR/1280 VIC plus hypervisor integration). VM-FEX allows using the UCS Virtual Interface Cards (M81KR or 1280 VICs) as remote switch line cards (FEXs) inside of a hypervisor and directly assigning an independent NIC to each and every Virtual Machine. This allows each VM to have its own logical switch port on the upstream Fabric Interconnect – complete with its own configuration, statistics, etc.

Until UCS 2.0, VM-FEX deployments were available only for VMware environments. VM-FEX is now supported for Red Hat KVM environments as well.

Logical Operation of Cisco's Four Deployments of Fabric Extenders

 

Frequently Asked Questions

Q1: With the new announcement, does this mean UCS customers have to go through a forklift upgrade?
A1: Absolutely not. If a user doesn’t need the additional bandwidth or any of the other new features (like port channeling), none of the new hardware is needed for future deployments. A user is more than welcome to continue buying UCS 6120 or 6140 Fabric Interconnects, UCS 2104 IOMs, and UCS M81KR VICs. Future UCS software releases will support existing and new hardware for the foreseeable future.

Q2: When can I get it?
A2: General availability is scheduled for Q3CY2011 (3rd quarter of this year).

Q3: Do I have to buy a new chassis to use the new 2208 IOM or the new 1280 VIC and get increased server bandwidth?
A3: Absolutely not. Cisco planned for the future and for providing customers with investment protection… the UCS 5108 chassis midplane provides 4 lanes to each of the eight server slots, and each lane is 10GBASE-KR rated. As such, inserting a 2208 IOM into an existing chassis immediately provides 4x 10GE downlinks to each of the eight server slots. Adding a UCS 1280 VIC to an existing server then allows the server to utilize all eight of its 10GE downlinks.
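The investment-protection math above comes down to midplane lane counts. A small sketch, assuming (per this post) 4x 10GBASE-KR lanes per slot per IOM, with the 2104 lighting up one lane per slot and the 2208 lighting up all four:

```python
# Sketch of the 5108 midplane math from the answer above: 4x 10GBASE-KR
# lanes per server slot per IOM were there all along. The 2104 used one
# lane per slot; the 2208 lights up all four.
LANES_PER_SLOT_PER_IOM = 4
GBPS_PER_LANE = 10

lanes_used = {"2104": 1, "2208": 4}  # lanes driven per slot, per IOM

for iom, lanes in lanes_used.items():
    per_fabric = lanes * GBPS_PER_LANE
    dual_fabric = per_fabric * 2  # one IOM per fabric, two per chassis
    print(f"{iom}: {per_fabric} Gbps per fabric, {dual_fabric} Gbps with dual IOMs")
```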

Q4: Do I have to buy new servers?
A4: Absolutely not. The 1280 VIC works in all shipping servers today.

Q5: Are you End of Life-ing (EOLing) the existing UCS fabric hardware?
A5: Not only no, but _ _ _ _ no. 🙂 Choices, folks, choices.

Q6: Do I have to have new hardware to get Port Channeling capabilities?
A6: Yes. If you want port channeling between the chassis and the fabric interconnect, you’ll need the UCS 6248UP and the UCS 2208 IOM. If you want port channeling between the IOM and the Server VIC, you’ll need the UCS 2208 IOM and the UCS 1280 VIC.

Q7: I have the 6120 or 6140 Fabric Interconnects (or the M81KR VIC or the 2104 IOMs). Can I use the new software features like disjoint L2, iSCSI boot, etc?
A7: Yes. Once the UCS Manager 2.0 software upgrade releases in Q3, you can upgrade your existing UCS environment and get those new software features (no new hardware required and no software license fees).
