UCS 2.0: Cisco Stacks the Deck in Las Vegas

July 15th, 2011

This week at CiscoLive 2011 in Las Vegas, Cisco announced new additions to the Cisco UCS fabric architecture. In addition to the existing UCS fabric hardware, UCS customers now have a choice of a new Fabric Interconnect, a new chassis I/O module, and a new Virtual Interface Card.  The 6248UP Fabric Interconnect delivers double the throughput, almost half the latency, and more than quadruple the virtual interfaces per downlink, while the new 2208XP chassis I/O module delivers double the chassis uplink bandwidth and quadruple the server downlinks.  Last but not least, the 1280 Virtual Interface Card (VIC) server adapter provides quadruple the fabric bandwidth for UCS blade servers by delivering two active 40 Gbps paths per server.

Did I mention these new announcements were additions to the UCS product portfolio, not replacements? I’m not sure I did, so I’ll repeat it… UCS customers now have three Fabric Interconnects, two chassis I/O modules, two Virtual Interface Cards, and multiple traditional network adapters to choose from – and they’re all interoperable.

In addition to the new fabric devices, the soon-to-be-released UCS 2.0 firmware adds several features for existing and future UCS customers: support for disjoint Layer 2 networks, UCS Service Profile support for iSCSI boot, and support for VM-FEX on RedHat KVM.

 

Additions to the UCS Fabric Portfolio

The UCS 6248UP Fabric Interconnect

The UCS 6248UP Fabric Interconnect, similar to the Nexus 5548 platform, provides up to 48 Unified Ports in a single Rack Unit (1 RU). Unified Ports are ports that accept either Ethernet or Fibre Channel transceiver (SFP+/SFP) modules. As such, the 6248UP can provide practically any distribution of Ethernet or Fibre Channel uplinks needed to meet a customer’s design and bandwidth requirements.

Don’t let the tiny package fool you… While the 6248UP is the same size as the UCS 6120 Fabric Interconnect (1 rack unit), the 6248UP delivers double the throughput, almost half the latency, more than quadruple the virtual interfaces per downlink, and quadruple the VLAN capacity. Here’s a chart comparing the three Fabric Interconnects.

The UCS 2208XP Blade Chassis I/O Module (Fabric Extender)

The UCS 2208XP Blade Chassis I/O module is a new choice for UCS blade customers. The 2208 doubles the uplinks and quadruples the downlinks in comparison to the existing UCS 2104 blade chassis I/O module. Using dual 2208XP modules provides up to 160 Gbps of uplink bandwidth per chassis and up to 80 Gbps of downlink bandwidth to each half-slot server. In addition, the 2208XP provides eight class-of-service queues.
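For readers who like to see where those per-chassis numbers come from, here’s the back-of-the-envelope math (a quick sketch using the figures quoted above, not an official spec sheet):

```python
# Back-of-the-envelope bandwidth math for a UCS 5108 chassis with dual 2208XP IOMs.
# All figures come from the paragraph above; every link runs at 10 Gbps.

LINK_GBPS = 10

uplinks_per_iom = 8              # 2208XP fabric uplinks (double the 2104's four)
downlinks_per_slot_per_iom = 4   # 10GE lanes from each IOM to each half-width slot
ioms_per_chassis = 2

chassis_uplink_gbps = uplinks_per_iom * ioms_per_chassis * LINK_GBPS
server_downlink_gbps = downlinks_per_slot_per_iom * ioms_per_chassis * LINK_GBPS

print(chassis_uplink_gbps)    # 160 Gbps of uplink bandwidth per chassis
print(server_downlink_gbps)   # 80 Gbps of downlink bandwidth per half-slot server
```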

The UCS 1280 Virtual Interface Card (VIC)

The 1280 VIC is the world’s first 8-port 10GE adapter for an x86 blade server, providing up to 40 Gbps of throughput to each of the two fabrics. The 1280 VIC consists of two groups of 4x 10GE ports that are automatically port channeled when paired with a UCS 2208XP chassis I/O module. All eight blade server slots in the UCS 5108 blade chassis can be equipped with the 1280 VIC, and all eight will have dual active 40 Gbps paths.

Using the Service Profile in UCS Manager, a user defines the number of NICs and HBAs that should be visible to the Operating System, up to a maximum of 116 virtual interfaces (a software-imposed limit at FCS) per 1280 VIC. The operating system or hypervisor host “sees” the Service Profile-defined NICs and HBAs, not the two 40 Gbps paths. The Service Profile also lets the user set each NIC’s QoS settings and speed.
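If it helps to picture that model, here’s a toy sketch (hypothetical Python classes, not the actual UCS Manager API or object model) of a Service Profile as a list of virtual interface definitions checked against the per-VIC limit:

```python
# Toy illustration only -- these classes are hypothetical, not the UCS Manager API.
# The idea: the OS sees whatever vNICs/vHBAs the Service Profile defines, and the
# fabric caps the number of virtual interfaces per 1280 VIC (116 at FCS).

from dataclasses import dataclass, field

FCS_VIF_LIMIT_PER_VIC = 116   # software-imposed limit at first customer ship

@dataclass
class VirtualInterface:
    name: str
    kind: str           # "vnic" or "vhba"
    speed_gbps: int     # speed presented to the operating system
    qos_policy: str     # QoS policy applied by the fabric

@dataclass
class ServiceProfile:
    name: str
    interfaces: list = field(default_factory=list)

    def add(self, vif: VirtualInterface) -> None:
        if len(self.interfaces) >= FCS_VIF_LIMIT_PER_VIC:
            raise ValueError("per-VIC virtual interface limit reached")
        self.interfaces.append(vif)

profile = ServiceProfile("esx-host-01")
profile.add(VirtualInterface("eth0", "vnic", speed_gbps=10, qos_policy="gold"))
profile.add(VirtualInterface("fc0", "vhba", speed_gbps=8, qos_policy="fc"))
# The OS/hypervisor enumerates eth0 and fc0 -- not the two 40 Gbps fabric paths.
```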

 

Here is a diagram that shows the existing UCS fabric components on the left and the new additions to the UCS portfolio on the right.

 

Here is the full selection of UCS fabric components now available to UCS customers. Just to reiterate what I said at the beginning of this post, the new hardware announced this week at CiscoLive is an addition to the UCS fabric portfolio, not a replacement of any hardware in the portfolio.

 

Competitive Comparison: When compared to an HP BladeSystem, a UCS B200 blade server with the UCS 1280 VIC plus the UCS 2208 IOM would be similar to deploying an HP BL4xx series server with 2-port 10GE FlexFabric LOMs, a 2-port 10GE FlexFabric mezzanine card in Mezz 1, a 4-port 10GE FlexFabric mezzanine card (non-existent) in Mezz 2, plus 8 Virtual Connect FlexFabric modules (at $18,499 each). In other words, $147,992 per HP enclosure just for the interconnects connecting 16 HP blade servers is several times the cost of two UCS chassis with dual 2208 I/O modules connecting 16 UCS blade servers.
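(For what it’s worth, the $147,992 figure above is just the quoted module price multiplied out:)

```python
# Interconnect cost for the HP comparison, using the list price quoted above.
vc_flexfabric_module_price = 18_499   # per Virtual Connect FlexFabric module
modules_per_enclosure = 8

total = vc_flexfabric_module_price * modules_per_enclosure
print(f"${total:,} per enclosure")    # $147,992 just for the interconnects
```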

 

Support for Chassis Uplink Port Channeling and Server Adapter Port Channeling

In addition to the increased bandwidth, one of the most beneficial features of the next generation UCS hardware is the ability to port channel both 2208 IOM chassis fabric uplinks and the server adapter port uplinks on the 1280 VIC. The major benefit of this feature is better distribution of server traffic load across more links – across both the server ports and the fabric uplink ports.

For the fabric uplinks, the user can choose whether to use discrete mode (today’s behavior of pinning blade slots to specific chassis uplinks) or port channel mode (creating a single virtual chassis uplink that’s shared for all blade servers in the chassis). In Port Channel mode, users can use 4, 8 or 16 uplinks depending on the amount of bandwidth per chassis needed.
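Here’s a conceptual sketch (plain Python, not UCS Manager configuration) of the difference between the two modes; the link names and the pinning rule are purely illustrative:

```python
# Conceptual sketch of the two chassis-uplink modes described above.
# Link names and the pinning rule are illustrative, not Cisco's actual implementation.

def discrete_mode_uplink(blade_slot: int, uplinks: list) -> str:
    """Discrete mode: each blade slot is pinned to one specific chassis uplink."""
    return uplinks[blade_slot % len(uplinks)]

def port_channel_mode_uplinks(uplinks: list) -> list:
    """Port channel mode: all uplinks form one virtual uplink shared by every blade."""
    return uplinks   # any blade's traffic may use any member link in the bundle

uplinks = [f"fabric-uplink-{n}" for n in range(1, 9)]        # e.g. 8 uplinks on one IOM
print(discrete_mode_uplink(blade_slot=3, uplinks=uplinks))   # one pinned link
print(port_channel_mode_uplinks(uplinks))                    # the whole bundle
```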

Fabric uplink port channels require both the 6248 fabric interconnect and the 2208XP I/O module.

At the server port level, port channeling between the 1280 VIC and the 2208 is configurationlessable (SIC – I’m working on my presidential credentials by making up words). In other words, they are channeled automagically so that, in effect, the 1280 VIC behaves as a 2 port 40 Gbps server adapter. Server adapter port channels require both the 2208XP I/O module and the 1280 VIC server adapter.

NICs or HBAs (or VMs when using VM-FEX) created in the Service Profile have their traffic load balanced across all ports on the fabric to which they’re pinned. This means that any single NIC has access to up to 40 Gbps of bandwidth for transmit or receive (39.6 Gbps per the throughput test, to be exact). The traffic is load balanced across the channel based on MAC, IP, and TCP/UDP information so that multiple conversations for any particular NIC/HBA/VM are load balanced across multiple 10GE ports. Side A and side B (diagram below) can each burst up to 40 Gbps but, simultaneously, they cannot exceed the PCIe Gen 2 x16 bus limitation of 64 Gbps. I can already hear the competitive FUD coming… ;) If you need more than 64 Gbps of bandwidth in a half-slot server or more than 128 Gbps of bandwidth in a full-slot server… PLEASE let me know. I’d really like to buy you a beer and find out what you’re doing with all that bandwidth.
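If it helps to visualize how a single vNIC’s conversations spread across the channel, here’s a rough sketch of the idea (a generic flow hash, not necessarily the hash the VIC/IOM hardware actually uses):

```python
# Rough illustration of flow-based load balancing across port channel member links.
# Generic hash sketch only; the actual VIC/IOM hash algorithm may differ.

import hashlib

def pick_member_link(src_mac, dst_mac, src_ip, dst_ip, src_port, dst_port, num_links):
    """Hash the flow identifiers so all packets of one conversation stay on one
    member link, while different conversations spread across all the links."""
    key = f"{src_mac}{dst_mac}{src_ip}{dst_ip}{src_port}{dst_port}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % num_links

# Two different TCP conversations from the same vNIC can land on different 10GE links:
print(pick_member_link("aa:bb", "cc:dd", "10.0.0.1", "10.0.0.2", 49152, 443, num_links=4))
print(pick_member_link("aa:bb", "cc:dd", "10.0.0.1", "10.0.0.3", 49153, 80, num_links=4))
```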


Backwards Compatibility and Interoperability

As you’ve probably already figured out, it’s very important to me to emphasize that Cisco’s announcement this week is about NEW ADDITIONS to the UCS fabric portfolio. I also want to specifically point out that the new and existing UCS fabric products are backwards compatible and interoperable (down to the least-common-denominator feature set). For example, if you own 6120s, 2104s, and M81KR CNAs today and decide to replace the 6120s with 6248s… it’s supported and works just fine. You want to replace just the 2104s with 2208s and keep the 6120s/6140s and M81KRs? Great! We’ll support it. Would you just like to start deploying the new 1280 VIC but keep the existing 6120s and 2104s? Perfect! Supported! Would you like to deploy your new chassis with the 2208 but not retrofit your existing chassis, and have all of them connected to the same pair of 6120s, 6140s or 6248s? Excellent! Supported! Last but not least, we’ll support mixed combinations of 6120s/40s + 6248s during upgrades, for example, but not as a permanent production deployment.

One thing to keep in mind, though, is which features require the new hardware. For example, without both 2208s and a 1280 VIC, you can’t port channel the ports on the server adapter. And without both the 6248s and 2208s, you can’t port channel the chassis fabric uplinks.

 

UCS 2.0 Platform Software Features

In addition to the new hardware discussed above, the UCS 2.0 firmware will add several platform or software features that will be available across both existing and future UCS deployments.

Support for Layer 2 Disjoint Networks in End Host Mode

UCS adds support for flexible VLAN configurations on Fabric Interconnect uplink ports while using End Host Mode. This feature enables extremely flexible UCS deployments that support almost any combination of upstream network configurations.

 

iSCSI Boot Support in UCS Service Profile

iSCSI boot is now supported in the UCS Service Profile for several of the UCS Converged Network Adapters – including the M81KR, the 1280 VIC, and others. In addition to NICs and HBAs, users can now create iSCSI NICs for UCS servers. Unlike other solutions (ahem, HP FlexFabric), the UCS fabric is more flexible and allows the creation of FC HBAs, iSCSI NICs, and Ethernet NICs all on the same server.

Support for VM-FEX in RedHat KVM

Cisco’s Fabric Extender (FEX) technology can be deployed at several levels (see graphic below) – Rack FEX (like a Nexus 5000 + 2000), Chassis FEX (UCS 6248 + 2208), Adapter FEX (M81KR or 1280 VIC) and VM-FEX (M81KR/1280 VIC plus hypervisor integration). VM-FEX allows using the UCS Virtual Interface Cards (M81KR or 1280 VICs) as remote switch line cards (FEXs) inside of a hypervisor and directly assigning an independent NIC to each and every Virtual Machine. This allows each VM to have its own logical switch port on the upstream Fabric Interconnect – complete with its own configuration, statistics, etc.

Until UCS 2.0, VM-FEX deployments were available only for VMware environments. VM-FEX is now supported for RedHat KVM environments as well.

Logical Operation of Cisco's Four Deployments of Fabric Extenders

 

Frequently Asked Questions

Q1: With the new announcement, does this mean UCS customers have to go through a forklift upgrade?
A1: Absolutely not. If a user doesn’t need the additional bandwidth or any of the other new features (like port channeling), none of the new hardware is needed for future deployments. A user is more than welcome to continue buying UCS 6120 or 6140 Fabric Interconnects, UCS 2104 IOMs, and UCS M81KR VICs. UCS software releases will continue to support both existing and new hardware for the foreseeable future.

Q2: When can I get it?
A2: General availability is scheduled for Q3CY2011 (3rd quarter of this year).

Q3: Do I have to buy a new chassis to use the new 2208 IOM or the new 1280 VIC and get increased server bandwidth?
A3: Absolutely not. Cisco planned for the future and for providing customers with investment protection… the UCS 5108 chassis mid-plane provides four lanes to each of the eight server slots, and each lane is 10GBASE-KR rated. As such, inserting a 2208 IOM into an existing chassis immediately provides 4x 10GE downlinks to each of the eight server slots. Adding a UCS 1280 VIC to an existing server then allows the server to utilize all eight of the 10GE downlinks.

Q4: Do I have to buy new servers?
A4: Absolutely not. The 1280 VIC works in all shipping servers today.

Q5: Are you End of Life-ing (EOLing) the existing UCS fabric hardware?
A5: Not only no, but _ _ _ _ no. :) Choices, folks, choices.

Q6: Do I have to have new hardware to get Port Channeling capabilities?
A6: Yes. If you want port channeling between the chassis and the fabric interconnect, you’ll need the UCS 6248UP and the UCS 2208 IOM. If you want port channeling between the IOM and the Server VIC, you’ll need the UCS 2208 IOM and the UCS 1280 VIC.

Q7: I have the 6120 or 6140 Fabric Interconnects (or the M81KR VIC or the 2104 IOMs). Can I use the new software features like disjoint L2, iSCSI boot, etc?
A7: Yes. Once the UCS Manager 2.0 software upgrade releases in Q3, you can upgrade your existing UCS environment and get those new software features (no new hardware required and no software license fees).

Jul 15th, 2011 | Posted in Cisco UCS
  • RedSneakers18

Really great blog article – nice to see mobility in the development of a fairly new product.

  • Harm De Haan

wow!!! where is the competition? UCS is for hardware what VMware is for virtualization; years ahead. iSCSI boot!!! Unified storage rocks!

    • Chris_Donohoe

      These types of comments continue to confuse me.  Can you please help me understand why everyone keeps calling UCS a hardware equivalent of VMware?  Does it accomplish a stateful mirror of a blade like VMware does with a VM?  So if a blade fails, do I not have to provision another and boot into the new blade?

    • Chris_Donohoe

      Nevermind.  I answered my own question with some research.  I know it will look like I’m throwing rocks, but Cisco UCS looks like it’s just another blade offering that doesn’t supply much additional functionality (if any).  Considering that, I don’t know why anyone would want to consider Cisco UCS over HP Bladesystem unless it was for a very niche purpose at a great price.  Even then, I’m risking my environment on a first generation product as opposed to a seventh generation product.  I’ve used Cisco as a network solution for a very long time, but I don’t know if I’ll ever be ready to consider them for a server solution.  It’s like going to McDonalds and asking for a chicken sandwich.

      • http://www.mseanmcgee.com M. Sean McGee

        Hi Chris,
        You sound just like many of our biggest UCS customers before they had a chance to get into the details of UCS and see it in action. :) I highly recommend a face-to-face meeting with Cisco – and I’d be more than glad to help arrange it if you’d be open to it.  I won’t promise that UCS will solve all of your problems or that it’s the best solution for you. I will promise that a face-to-face meeting will help you understand UCS much better than any blog or online research can.

        If UCS was just another blade offering, I don’t think we’d see the kind of success that it’s had in the industry.  Data center teams (server/lan/san teams) aren’t adopting UCS just because it’s a Cisco product. In fact, many times we have to help server teams get past that issue. :) They find out that UCS is a solution like no other and most find that it helps them solve real problems in their environment.

        I’d love to at least help you understand UCS a little better so that you can make an informed decision as to whether it’s the right solution for you or not.  Please reach out if you’d like to chat (seanmcge at cisco dot com).

        Best regards,
        -sean

        • Difernan

          Hey Sean, if you need help let me know, AS UCA practice difernan at cisco dot com

  • Pingback: UCS 2.0: New Innovation » TJs Thoughts

  • http://twitter.com/vMackem David owen

Top Post! Can’t wait to get my hands on 2.0 and the new kit also. Only one way UCS is going and that is ^

  • Dan

    Confused on the VIC stats.
All the slideware says 256 but your text says 116.
    Am I missing something?

    • http://www.mseanmcgee.com M. Sean McGee

      Hi Dan,
      Great question and I’ll get to it in a sec. First, I want to say that it’s ok that you’re asking questions about UCS even though you seem to be from HP (posting from IP address 15.195.201.91/zswa01cs006-da01.atlanta.hp.com). I don’t mind answering anyone’s questions – even questions from a competitor. I do prefer that everyone is upfront about being a customer, a partner, or a vendor, though. Transparency keeps everyone honest in this open forum.

In regards to your question… There are two numbers: A) the number of logical interfaces supported by the VIC hardware (256), and B) the number of logical interfaces the Fabric Interconnect will allow the VIC to instantiate (116 at FCS). Since the VIC is effectively a remote line card to the Fabric Interconnect (FI), the FI ‘owns’ the virtual port once it’s created and controls the number of interfaces that each VIC can create.

      At FCS, UCS Manager will limit (in software) the number of usable/definable virtual interfaces per VIC to 116.  As customer demand requires that number to grow, we’ll increase the supported number of interfaces up to the VIC hardware limit of 256.

      Again, great question and I hope I was able to clearly answer it for you.

      Best regards,
      -sean

      • Cford

Hi Sean, Cam from Xsigo here….disclosure out of the way.

Seems like there are always at least 2 numbers for everything Cisco announces. There are usually the numbers they announce….and the numbers that ship…..and they are rarely in the ballpark of each other. Here are a few areas I came across perusing your docs….maybe you can clear this up for all of us.

The pics above say that the 6248 doubles the bandwidth. The 6140 already has over 1 Tb of bw, and the new 6248 actually drops the bandwidth down to 960 Gb…..seems like you are going the wrong direction. You do gain 1U per box, though…..for those of you in need of an extra 2U in your rack.

A picture above shows the new 6248 as supporting 4096 VLANs, but the spec sheet only states 1024….I am assuming this is standard Cisco marketing……hw supports what the software cannot deliver…correct?…..kind of like the VIC numbers.

I didn’t see the power specs for the 2208… Can you help? Seems to me that with the new 4x10G lag, the interior ports will need to be 32x10G with another 8 ports for uplink….so this is a 40 port 10G switch…..plus all of the chassis management hardware. So it is roughly equivalent to the 6248 switch from a hardware perspective. The 6248 is showing peak power at 600W. Are these blades also 600W? What will the price be for this 40 port 10G switch? Gonna need to be around 25k or so to make those 70% Cisco margins.

Let’s talk about your new VIC. Sounds like an impressive engineering feat to squeeze 8 ports of CNA into a single ASIC. Bet that puppy is pretty hot. The old Palo adapter was listed at 18W…..and this new adapter is listed at 18W “typical”……I find that hard to believe. Your 6248 switch says 600W peak and 350W typical…..that comes out to about 7W per port for typical and 12W peak. NICs and especially CNAs are usually hotter than switch ports….but even at these numbers, they don’t add up. Even at 7W per port, this sucker should be over 50W.

Let’s also nip this 80G in the bud. I believe your adapter typically works in active passive mode with hardware failover….correct? So you are really talking about 40G of throughput per your numbers. The PCI bus is x16, which really can only support about 52G of actual packet throughput…..so your 40G adapter should be fine as demonstrated. Also, with the lag, any single stream can only use 10G…..but if you have multiple flows per NIC, they can spread across the multiple interfaces…is this correct?

What about storage bandwidth? The old Palo adapter could only push about 4G of FCoE…..what will this new adapter support? Does the lag spread the FCoE traffic as well or treat it as a single flow?…gotta think reordering FCoE is not a great idea…so I imagine a single HBA is always a single flow….besides it will be pinned to an outbound 8G port.

Switching bw…..all switching between servers happens in the 6248…. Correct? So I can switch full 40G across any 2 servers on the same chassis….but that’s about it. Also, in order to use that bandwidth, I would need all 8 uplinks available….that means 16 cables for each blade chassis….or 2 cables per server…..how is that cable consolidation……might as well use rack servers and save a lot of money.

With 8 uplinks to a 6248 per chassis….looks like I top out at 4 chassis…..not very scalable is it? Speaking of scalability…..the old 6140 claimed support for 40 chassis….but the new 6248 only claims 20 chassis…..you guys going backwards……or is this just another case of the actual product not living up to the marketing hype…..per most Cisco products.

Last one….seems like this stuff is “ready” for a lot of stuff…..L3 ready, FabricPath ready, 40G ready….so maybe you can clarify for us exactly what this stuff will actually do when it actually ships….and when that might actually be.

        Thanks…looking forward to the straight facts…..or maybe just some more marketing hype….your blog….your choice

        • http://www.mseanmcgee.com M. Sean McGee

          Hi Cam,

Wow, you’ve really outdone yourself this time. This year I think you’ll definitely have a good shot at the Oscar for Best Competitive FUD Rant. Only one problem is that competitive Fear, Uncertainty and Doubt (FUD) is usually based on at least mostly CORRECT information. So, that might hurt your chances. You REALLY need to consider attending a UCS training class if you’re going to keep coming back to “educate the masses” on my blog. As Xsigo’s Director of Product Management, it’s exposing how much you guys truly don’t understand about Cisco’s solution or our capabilities. That has to impact your credibility with would-be customers. Let me know if you’re interested in a UCS class and I can help find one for you. It would really save me a lot of time in responses. ;)

          For example:
- no, the 6248 doesn’t “drop the bandwidth” (since it’s comparable to the 6120, as the article plainly states)
          - no, Palo is not limited to 4G
- no, PCIe gen 2 x16’s limit is not 52G
          - no, we don’t have thermal issues
          - no, the 2104/2208 is not a switch
          - no, our adapter doesn’t work in “active passive mode”
          - no, we’re not “reordering FCoE” frames
          - your power numbers are way off
- your $$ estimates are even farther off
          - no, there’s nothing wrong with being open and honest with the customer about today’s software capabilities vs. future software capabilities (based on hardware limits or being “ready” for future capabilities)
          - etc.

In all seriousness, it’s really unfortunate when a conversation so rapidly deviates from healthy competitive discourse and heads headlong into disingenuous competitive mudslinging. I honestly think it doesn’t do anyone any good – not the vendors, not the customers. Any chance we can bring it up a level? Maybe not all the way up to the “high road” (I realize that would be a stretch) but at least up to the feeder road and out of the gutter? I really don’t mind answering serious questions, like the post from Egenera, but your post is in a completely different category.

          Sincerely and Respectfully,
          Sean

          • Cford

Ok, as you suggest….let’s uplevel it and actually answer the questions…..

            What is the power?
            What is the price?
            What is the bandwidth?

Just saying no doesn’t add any credibility to your story either…..

            • http://www.mseanmcgee.com M. Sean McGee

              Now those are good questions, Cam.  Glad to see we’re finally on the same page.

              The power and price questions will be answered publicly closer to product launch.  The bandwidth question was answered in detail in my response to Egenera below.

              Best regards,
              Sean

              • Jason

                Hi Sean,
                I love the blog and am happy to hear about the UCS advancements.  I’m not a vendor, but a customer, not a Server guy, but a Network guy.  I was, however, interested in an answer to Cam from xsigo’s (albeit loaded) question about FCoE throughput across a VIC 1280 port-channel.  Does the VIC similarly port-channel the FCoE traffic from each server so that the practical FC throughput from each server will exceed the 4 Gbps/fabric available in the Palo adapter? (that would be very advantageous in my opinion)

  • Buzz

    This doesn’t add up……8 x 10Gb downlink ports on each VIC 1280?
The VIC 1280 adapter is specified as a PCIe 2.1 x16 interface. This is the interface to the motherboard and processor/memory complex. This PCIe 2.1 x16 interface has a maximum throughput of 64Gb/s (encoded it is 80Gb/s, but typically the un-encoded figure is used). The VIC 1280 is also specified as having 8 x 10Gb ports, which results in 160Gb/s of bi-directional throughput. The interface to the motherboard would need to support 160Gb/s to service the 8 x 10Gb ports at full throughput. So, it seems that although the VIC 1280 has 80Gb/s of potential throughput, the PCIe x16 would limit the total for the VIC 1280 to 64Gb/s – across 8 ports… that’s actually 8x 4Gb/s ports (8Gb/s bi-directional times 8 = 64Gb/s).

    And, the Cisco UCS B230 blade specification says “One dual-port mezzanine slot for up to 20 Gbps of redundant I/O throughput”….optimistically, this may mean 40Gb/s…..if the motherboard interface is limited to 40Gb/s, then the VIC 1280’s increased throughput does not seem like it can be realized? Maybe there are new blades coming that will allow tapping this performance, but it doesn’t seem that the current crop of blades will allow tapping this new throughput.

    • http://www.mseanmcgee.com M. Sean McGee

      Hi Buzz@Egenera.com,
      Thanks for stopping by.

      1. In the article, I specifically said “Either side A or side B (diagram below) can burst up to 40 Gbps each, but, simultaneously, cannot exceed the PCIe Gen 2 x16 bus limitation of 64 Gbps.”. Did you miss that section? I tried to make that abundantly clear – I’m an engineer, not a marketeer. I have to stand in front of customers and defend what I write, so, it serves me no purpose to over-promise and under-deliver. I even included a comment about anticipating competitive FUD. With my obvious ability to predict the future, my stock portfolio should look better than it does… :)

      2. Our testing shows that a customer can achieve up to 39.6 Gbps on a single fabric (side A or side B). That means that while a single VIC’s total throughput is limited by the PCI Gen 2 x16 bandwidth capability of 64 Gbps (after accounting for 8b/10b encoding overhead), either side can burst up to almost the full 40 Gbps.

3. The PCIe 2.0 x16 speed is 64 Gbps… *per direction*. (http://www.pcisig.com/news_room/faqs/pcie3.0_faq/#EQ3). See the quick arithmetic sketch after this list.

4. Don’t make the mistake of assuming that the total bus bandwidth is equally divided across all 8 ports and no port can ever exceed its share of the 64 Gbps per direction. In data center networking, hardly any high-throughput traffic pattern is sustained. As such, our design provides lots of throughput for sustained flows across all 8 ports (subject to PCIe 2’s limitation) or periodic bursting on a single port or single fabric up to the interface’s theoretical maximum.

      5. Yes, both Fabrics (both 4x 10 Gbps paths) can be active at the same time, subject to the total throughput limitation plainly called out in the article and in 1 & 2 above.

      6. If you want more than 64 Gbps on a single server, deploy a full width UCS server (B250 or B440) that can accommodate up to 2 x VICs. Let me know if you have an interested customer and I’ll discuss it with’em. =]

7. It’s obvious that Cisco has engineered an adapter ASIC in anticipation of PCIe 3.0 (http://www.pcisig.com/news_room/faqs/pcie3.0_faq/), which doubles gen 2’s throughput. Don’t hold it against us for being prepared early…

8. When the specs on the B230 were released, the only available adapters were dual 10 Gbps. Therefore, the Product Manager said it was limited to 20 Gbps. That limit was caused by the adapter, not the B230’s capabilities. We believe in “truth in advertising”. :) If the PM had said the B230 was capable of up to 64 Gbps but we only had a 2x 10 Gbps card available at the time, we’d have been called out on that instead.

      Once it ships, installing a 1280 VIC in a B230 would obviously provide more than 20 Gbps of throughput. See 1 & 2 above.

      9. Do you know of any customer pushing more than 64 Gbps of sustained bandwidth on a 2 proc Nehalem platform? Most likely not. Like I said in the article, let me know and I’ll buy them a beer. In other words, this whole conversation regarding 64 Gbps vs. 80 Gbps from a single adapter is really a moot point for customers. Let’s call it “NIC-picking (sic) from UCS competitors”. ;)

      10. I really want to have a tenth point. I hate ending on an odd number. However, I already feel like we’ve beat this horse to death and I’m at risk of triggering some folk’s narcolepsy.
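Since the 64 Gbps per-direction figure keeps coming up, here’s the quick arithmetic behind it (standard PCIe 2.0 math, nothing UCS-specific – sketched in Python just for clarity):

```python
# PCIe 2.0 bandwidth for an x16 slot, per direction.
GT_PER_SEC_PER_LANE = 5.0      # PCIe 2.0 signaling rate: 5 GT/s per lane
ENCODING_EFFICIENCY = 8 / 10   # 8b/10b encoding: 8 data bits per 10 bits on the wire
LANES = 16

usable_gbps_per_direction = GT_PER_SEC_PER_LANE * ENCODING_EFFICIENCY * LANES
print(usable_gbps_per_direction)   # 64.0 Gbps per direction for PCIe Gen 2 x16
```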

      Bottom line is that the 1280 VIC provides the most throughput of any single x86 blade mezzanine Ethernet adapter. The 1280 VIC eliminates the need to deploy a ton of mini-rack (http://bit.ly/chQYnw) switch modules in a single blade chassis just to get more 10 Gbps lanes. Cisco UCS customers now have it as an option in addition to all the other options in the UCS CNA portfolio.

      Again, I do sincerely appreciate you taking the time to post. It helps educate our current and future UCS customers on misunderstandings caused by my poor writing skills. ;)

      Sincerely,
      -sean

  • Chatinbox

Do you have any information on the management capabilities & design of UCS Manager? Looking at the buzz for UCS, I wanted to learn if there are any innovations in UCSM also.

    • http://www.mseanmcgee.com M. Sean McGee

      Hi Chatinbox,
UCS Manager added a few features also – iSCSI boot, disjoint L2 connectivity, and VM-FEX for RedHat KVM.

      Thanks for stopping by!
      -sean

  • Pingback: Internets of Interest:22 Jul 2011 — My Etherealmind

  • http://twitter.com/JeffSaidSo Jeff Allen

    I think it’s safe to say I enjoyed the comments almost as much as the article. The Xsigo/Egenera gang  certainly keep life interesting. It’s got to be tough having to teach people how to pronounce your company name at the start of each preso…

  • http://www.facebook.com/profile.php?id=1848532876 Ian Erikson

Support for Layer 2 Disjoint Networks in End Host Mode… this is great for our setup – I don’t have to use two different chassis to combine our DMZ into one UCS domain, or change to switching mode and lose my bandwidth.

  • Pingback: Cisco delivers new hardware for UCS soon! | Marcels Blog

  • http://twitter.com/juniperbill Bill Graham

    Hello Sean, Full disclosure, I’m a former Juniper SE in the service provider space but now a potential UCS customer.  I’m serious about finding the best of breed elements to build a private cloud computing solution.  I’ve inherited a small investment in an IBM BladeCenter/BNT solution and was pinning my hopes on swapping out BNT and FC pass-through w/Brocade FC ToR for the BladeCenter integrated Brocade Converged Switch.  Unfortunately, the thing runs two different CLIs despite being branded a “converged switch”.  Need I say more?  Just a little background on why I’m down the UCS path.

    What are your thoughts about UCS 2.0 for a Q1 2012 greenfield deployment?  Are there going to be any advantages to using the 1st generation hardware and 1.4 software given my timeline?  I’m not familiar with the UCS software development cycle.  I’ve seen scenarios, unrelated to UCS, where the hardware starts development based on v1.3 of software (current at dev time) but then 1.4 is released while the hardware is still in development or systest.   So, the new hardware ends up at FRS/FCS running 2.0 with all features up to v1.3, new features specific to the new hardware, but must wait for a later 2.x release to catch-up with 1.4 features.  I’m sorry for such a long wind-up to ask: Any feature parity gaps when using UCS 2.0 hardware at FRS?  Thanks in advance.  Bill

  • Pingback: Updates to the Cisco UCS « Andrew Travis's Blog

  • Pingback: 8 Cool Features you may not know about in UCS Manager | Jeff Said So

  • Pingback: Server Networking With gen 2 UCS Hardware — Define The Cloud

  • Pingback: Cisco UCS 6248 Unified Port Configuration | Jeremy Waldrop's Blog

  • Pingback: tout savoir sur les dernières nouveautés de l’UCS « Blog Cisco Data center

  • Azzafir

    Hi Guys,
I was wondering if I can do away with FC ports on the UCS Fabric Interconnect and use pure FCoE (10G cable) to support my SAN and LAN needs. Right now I have 2 connections to my Nexus 5548UP-L3 with Storage License.
    1 connection is the 10G Ethernet for my LAN and the other is an 8Gb FC which I also connect to the Nexus 5548UP for my SAN. I was wondering if I could consolidate and just use a single FCoE 10Gbps link to support my LAN and SAN needs.

Thanks guys. Went thru many documents but it seems that they use separate FC and 10G links for the SAN and LAN when it comes to UCS. I am not going for an iSCSI solution :)

    Thanks all.
    Azzafir Patel.

  • Anonymous

    Hi Sean,

    I need a clarification regarding bandwidth.
Each server has 80 Gbps (64 Gbps because of the PCIe bus limitation), which is a very high value, so the total downlink capability of the IOMs is 640 Gbps.
    The two IOMs have 160 Gbps of uplink capability, but if you consider that 0% of the traffic is resolved internally because of the FEX way of working (it seems that switching is done at the FI level), the 160 Gbps is a limitation. Considering that all traffic must exit and re-enter the IOM, if all servers are using the network the real throughput is 20 Gbps for each server.
    This is due to the large difference between uplink and downlink, plus the FEX pass-through way of working.

  • Berend Schotanus

    Hi Sean,

Very interesting stuff. I have a question about maximums. Can you tell me anything about the total number of vNICs that is supported by UCS Manager 2.0? I have read the Cisco VM-FEX Best Practices for VMware ESX Environment Deployment Guide, but is there a way to calculate this?

    For instance when I have 1 5108 chassis with 2 uplinks per 2208 to the 6248 using 4 B200M2 blades with M81KR VIC using the normal VM-FEX mode.

    We will be using vSphere 5.0

  • Pingback: UCS Boot-from-SAN Troubleshooting with the Cisco VIC (Part 2) | Jeff Said So

  • Pingback: New Cisco UCS Fabric and Management products » TJs Thoughts

  • http://twitter.com/vMackem David Owen

    Sean,

Do you know if there is a slide (3rd image down) that includes the 6296UP comparison in existence?