Cisco’s Stocking Stuffer for UCS Customers: Firmware Release 1.4(1)

December 20th, 2010

Santa came early this year for Cisco UCS customers. Today, Cisco released UCS firmware version 1.4(1).  This release is the single most impressive feature enhancement release I’ve seen in all my 11 years of working on blade servers.  Allow me to walk you through this list of new features and provide a deeper dive into some of the details behind each one.

Note: The Release Notes are posted here: http://www.cisco.com/en/US/partner/docs/unified_computing/ucs/release/notes/OL_24086.html


Server Platform and Management Enhancements:

  • Support for new B230 server blade
    See “The Cisco UCS B230 – the Goldilocks Blade Server” for more details on this new server blade.
  • UCS C-Series Rack server integration into UCS Manager – Unified Management for the entire UCS portfolio

    Yes, you read that right – Cisco is the first server vendor to integrate rack server management into the same management interface used for blade servers and blade chassis, so that a single tool configures and monitors both your blade and rack servers. This initial release includes support for the C200, C210, and C250 Cisco UCS rack servers. Support for additional Cisco UCS rack servers will be added in the near future.

    UCS Manager features extended to C-Series Rack servers include: Service Profiles, Service Profile migration between compatible B-Series and C-Series servers, automated server discovery, fault monitoring, firmware updates, and more.

  • Chassis and multi-Chassis power capping for UCS B-Series Blade Servers

    Cisco has enhanced the facility manager’s control over UCS blade server power consumption by adding Group-Level Power Capping, Dynamic Intra-Chassis Power Redistribution, and Service Profile Priorities. Within the data center, power should be distributed to a blade chassis or groups of blade chassis, not to individual blade servers. If a server is “statelessly” moved using a Service Profile from one chassis to another, a statically defined power cap per server is mostly useless. What if you moved a bunch of servers with static power caps (in watts) to the same power distribution unit (PDU) – the sum of which exceeded the PDU’s capacity? Not very intelligent or practical, huh? A server’s power cap needs to be relative to the power cap of the blade chassis or blade chassis group so that the infrastructure’s maximum power draw can be guaranteed.

    Cisco’s approach allows facilities administrators to define “power groups”, made up of one or more physical blade chassis, and a “power cap” for the group based on the size of the power circuit the chassis are connected to in the data center. Power groups, and their associated power caps, are decoupled from the physical blade servers. Server administrators use Service Profiles to assign individual workloads (OS + applications) a “priority” relative to other workloads contained within the same power group. A workload can move between power groups and maintain its priority. The facilities manager never has to worry about power exceeding the defined power cap per group, no matter how the server administrators move servers/workloads around. This feature brings “change ready” and “facilities on demand” to UCS customers.

    To summarize: Cisco decoupled the power problem (facilities power cap) from the workload importance (priority). Facility admins control the infrastructure’s power cap and the server admins control a workload’s (server) priority within that infrastructure.
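
    To make that division of a group cap a little more concrete, here is a rough sketch of how a fixed group budget could be split across blades by priority. This is illustrative only – the weighting scheme, priority values, and wattage numbers are my own assumptions, not Cisco’s actual algorithm:

      # Illustrative only: split a chassis-group power cap across blades by priority.
      # The weighting scheme and all numbers are assumptions, not Cisco's algorithm.
      def split_group_cap(group_cap_watts, blade_priorities):
          """Give higher-priority blades (lower priority number) a larger share of the cap."""
          weights = {blade: 1.0 / prio for blade, prio in blade_priorities.items()}
          total = sum(weights.values())
          return {blade: round(group_cap_watts * w / total) for blade, w in weights.items()}

      # Example: an 8,000 W circuit feeding one power group of four blades.
      caps = split_group_cap(8000, {'blade1': 1, 'blade2': 1, 'blade3': 5, 'blade4': 10})
      print(caps)  # {'blade1': 3478, 'blade2': 3478, 'blade3': 696, 'blade4': 348}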

  • Blade Chassis support increased to 20 chassis in UCS Manager

    You can now deploy, configure and manage up to 160 physical blade servers under a single UCS Manager interface. In other words, a single UCS Manager interface now replaces the equivalent of up to 10x HP Onboard Administrator interfaces, up to 10x Virtual Connect Manager interfaces, Virtual Connect Enterprise Manager, and most of the popular functions of HP Systems Insight Manager.

    Cisco UCS now needs only 3 infrastructure management IP addresses for 160 servers, compared to HP’s need for up to 7 management IP addresses per chassis – or 70 management IP addresses for 160 servers (up to 50 IPs if using FlexFabric).
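
    Putting rough numbers on that comparison (a back-of-the-envelope sketch using the per-chassis counts quoted above; the UCS breakdown of one address per Fabric Interconnect plus the cluster virtual IP is the typical design):

      # Management IPs needed for 160 blade servers (10 chassis), per the counts above.
      ucs_ips = 3                  # one mgmt IP per Fabric Interconnect (x2) + the cluster VIP
      hp_ips = 7 * 10              # up to 7 per c7000 chassis, times 10 chassis
      hp_flexfabric_ips = 5 * 10   # up to 5 per chassis when using FlexFabric
      print(ucs_ips, hp_ips, hp_flexfabric_ips)  # 3 70 50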


  • Service Profile Deployment Scheduling
    Ever wanted to make one or more changes to one or more Service Profiles, but had to wait until your change window began to do the work? No more. With Service Profile Deployment Scheduling, you can queue up Service Profile changes for one or more servers, doing the work up front, and then schedule the changes to take effect during the next change window.

    Highlights are:

    • Service Profile changes to hardware are scheduled for future maintenance windows instead of taking effect immediately.
    • Scheduling is centrally managed via Maintenance Policies.
    • Hardware resources are held/reserved until Service Profile deployment.
  • IP address for CIMC (remote KVM) added to UCS Service Profile
    To further the concept of “server statelessness”, Cisco has added a CIMC (remote KVM) IP address to the UCS Service Profile. Now, the physical server blade owns a CIMC IP address and, optionally, the Service Profile owns a CIMC IP address. If the additional CIMC IP address in the Service Profile is used, the server admin can reach the KVM console no matter which physical blade the Service Profile is assigned to.

    Prior to this feature, if Service Profile A moved from Physical Server Blade 1 to Physical Server Blade 2, the CIMC IP address changed and required the server admin to track down the new KVM console IP address. By using this optional feature, the server admin can always use the Service Profile-owned CIMC IP address (for example, 10.21.32.46) and will always reach UCS Service Profile A no matter which physical server it’s assigned to.

  • Service Profile “pre-flight” checks (impact analysis BEFORE committing changes)

    This feature allows a customer to run a pre-flight check on a physical server before attempting to apply a Service Profile to it. In cases where the Service Profile requires certain hardware (like a Cisco VIC “Palo” CNA), the pre-flight check will alert the server administrator BEFORE going through Service Profile assignment. In addition, the Service Profile will “remember” the hardware it was associated with, and if the new hardware has meaningful differences, UCS Manager will warn the user.

  • SNMP GET support for ALL UCS components

    SNMP query (GET) support has been extended to cover all UCS components – Fabric Interconnects and Fabric Extenders, Blade Chassis, Blade Servers, and Rack Servers. The 58 new MIBs are available here: http://www.cisco.com/public/sw-center/netmgmt/cmtk/mibs.shtml
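
    A quick way to exercise the new read-only SNMP access is a simple walk against a Fabric Interconnect’s management address. The sketch below uses the pysnmp library with a placeholder host and community string (both are assumptions – substitute your own, and pull the exact UCS object IDs from the MIBs linked above):

      # Minimal SNMP walk sketch against a UCS Fabric Interconnect (pysnmp, SNMPv2c).
      # The IP address, community string, and starting OID are placeholders.
      from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                                ContextData, ObjectType, ObjectIdentity, nextCmd)

      for err_ind, err_status, err_index, var_binds in nextCmd(
              SnmpEngine(),
              CommunityData('public', mpModel=1),           # SNMPv2c community string
              UdpTransportTarget(('192.0.2.10', 161)),      # FI management IP (placeholder)
              ContextData(),
              ObjectType(ObjectIdentity('1.3.6.1.4.1.9')),  # Cisco enterprise subtree
              lexicographicMode=False):
          if err_ind or err_status:
              print(err_ind or err_status.prettyPrint())
              break
          for vb in var_binds:
              print(vb.prettyPrint())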

  • Syslog enhancements

    UCS’s syslog functionality now supports categorization by component, additional filtering capabilities per syslog server destination, and more descriptive syslog messages.

  • UCS 6100 Licensing Enforcement and Warnings

    Well, some may claim this is a feature enhancement for “Cisco” rather than for the “customer”. ;) In reality though, this is a nice feature for customers that honestly want to stay in compliance with Cisco licensing requirements (e.g. Fabric Interconnect port licensing). A new GUI-based license management interface and licensing warning messages are part of the “usability enhancements” of this feature.

    • UCS Manager can assign or revoke licenses
    • Port licenses are based on the number of fixed ports in use (no need to assign licenses to individual ports)
    • Expansion ports (GEM – Gateway Expansion Modules) don’t require port licenses
  • UCS Manager Usability Enhancements
    Several UCS Manager usability enhancements are also included in release 1.4(1). These include:

  1. Firmware Upload using local file system instead of FTP/TFTP
    Yes, we finally added it! Upload your firmware directly to UCS via your local desktop! FTP/TFTP are no longer required. This feature is especially useful for demo or lab environments where FTP/TFTP servers are not readily available.

  2. Enhanced UCS Firmware Descriptions
    UCS Manager now provides better descriptions for firmware images. The descriptions allow you to quickly identify which hardware product a firmware image is intended for.

  3. Service Profile Aliases
    Server administrators can now add a free-form (any character is legal) description to a Service Profile for quick identification of the Service Profile object they want to work on. The description is displayed at the end of a Service Profile name on the Server tab in the left panel of UCS Manager.

  • Enhanced integration with Microsoft Active Directory

    UCS Manager now supports the ability to map Active Directory (AD) groups to user roles within UCS Manager. UCS Manager looks up AD user groups and allows the UCS Domain admin to assign UCS roles to the AD User Groups. This eliminates the per-user role assignment within UCS Manager that was required before 1.4(1).
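
    As a toy illustration of the concept (not UCS Manager’s implementation – the group and role names below are made up), the mapping behaves like a lookup from a user’s AD group memberships to the UCS roles they are granted:

      # Toy illustration of AD-group-to-UCS-role mapping; all names are hypothetical.
      GROUP_TO_ROLE = {
          'CN=UCS-Admins,OU=Groups,DC=example,DC=com':    'admin',
          'CN=UCS-Operators,OU=Groups,DC=example,DC=com': 'server-profile',
          'CN=UCS-ReadOnly,OU=Groups,DC=example,DC=com':  'read-only',
      }

      def roles_for(user_groups):
          """Return the UCS roles granted by a user's AD group memberships."""
          return sorted({GROUP_TO_ROLE[g] for g in user_groups if g in GROUP_TO_ROLE})

      print(roles_for(['CN=UCS-Operators,OU=Groups,DC=example,DC=com']))  # ['server-profile']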

  • Simultaneous Support for all authentication methods (local, TACACS+, RADIUS, and LDAP/Active Directory) in UCS Manager

    When UCS initially launched in 2009, it supported authentication via local users, TACACS+, RADIUS, or LDAP/AD servers. However, UCS Manager only supported a single authentication method at a time. With release 1.4(1), UCS Manager now supports all authentication methods simultaneously.

    A user selects their authentication domain during login.


  • Support for authentication to multiple Active Directory Domains

    In addition to the support for multiple authentication methods discussed above, UCS Manager now also allows authenticating against multiple Active Directory domains. This is a key feature for multi-tenant environments with multiple AD domains, or for environments with a separate AD domain per region.

  • Multi-user CIMC Enhancements

    Cisco UCS’s remote KVM feature, provided by the Cisco Integrated Management Controller or CIMC, now offers enhancements for multi-user access. The first user accessing the KVM gets read-write privileges to the session, while subsequent users must be granted access by the first user and join as read-only by default. It also includes the ability for a UCS admin to force termination of a user’s KVM session.


  • UCS “Server Packs”
    Support for server and adapter hardware can now be delivered independently of support for the infrastructure components. This allows customers to load supporting firmware packages for new server hardware and adapter hardware without having to upgrade their Fabric Interconnect or UCS Manager software at the same time.

    Server and Adapter Packs, or bundles, will be provided anytime new server or adapter hardware is released. These Server Packs or Adapter Packs can then be loaded into the “infrastructure” to provide immediate support of the new server or adapter hardware without upgrading the infrastructure firmware in UCS Manager or the Fabric Interconnects.


Ethernet and Fibre Channel (FC) Networking Enhancements:

  • New Fabric Interconnect Port Types: Ethernet Appliance, FC Target, and FCoE Target
    In addition to Ethernet and FC monitoring ports covered later, UCS release 1.4(1) introduces three new port types for the Fabric Interconnect uplinks:

    Ethernet Appliance: When a Fabric Interconnect uplink is configured as an Appliance port, a user can connect several types of “appliances” directly to the UCS Fabric Interconnects. Such appliances could be NFS/NAS/iSCSI storage targets, security appliances, Nexus 1010 appliances, etc. You can even use port channeling to increase the “pipe” to the appliance if needed. Prior to version 1.4(1), appliances could be directly connected to the Fabric Interconnect, but only when “switch mode” was used. Release 1.4(1) adds support for “appliances” in “End Host Mode” as well. This is a key feature since Cisco’s usual recommendation is to use End Host Mode instead of Switch Mode.

    FC and FCoE Target ports: UCS users can now directly connect FC targets and FCoE targets to the UCS Fabric Interconnects. Only the default zoning configuration is supported for now, but the Fabric Interconnect will inherit the zoning configuration from an upstream MDS switch (if necessary).

  • Support for 1024 VLANs per Fabric Interconnect
    Up to 1024 VLANs per Fabric Interconnect are supported. Prior to release 1.4(1), only 512 VLANs were supported per Fabric Interconnect.
  • SPAN (port monitoring) support for both Ethernet and Fibre Channel
    Cisco has added support for SPAN to release 1.4(1). SPAN, or Switch Port Analyzer, provides selective traffic mirroring from a source (one or more server ports or vNICs) to a destination (Ethernet or Fibre Channel uplink). Up to four simultaneous sessions are supported – two on each Fabric Interconnect. In addition, both the LAN and the SAN administrators have the ability to define their own SPAN sessions via the LAN or SAN tab, respectively, in UCS Manager.

    In addition to traditional monitoring (NIC -> Ethernet analyzer or HBA -> FC analyzer), users can now monitor both vNIC and vHBA traffic when SPANed to an Ethernet destination uplink. Also, when using the Cisco Palo CNA with interface virtualization, each individual vNIC can be monitored/SPANed separately. If the vNICs are used with Passthrough Switching in VMware, this allows monitoring traffic from every individual VM. When Palo is used with a bare-metal OS install, this feature allows each NIC port presented to the OS to be monitored independently.

  • Private VLAN (Isolated Access Port) Support
    Without Private VLAN (PVLAN) support, network administrators would be required to use separate VLANs to maintain Layer 2 separation between physical or virtual servers. This method of secure separation doesn’t scale well. Instead, Private VLANs can be used to enforce a Layer 2 boundary between physical or virtual servers assigned to the same VLAN. UCS release 1.4(1) adds isolated PVLAN support for physical server access ports and for Palo CNA vNIC ports.

    Example: all three hosts (one bare-metal server and two VMs) are in the same VLAN A and assigned IP addresses in the same subnet. All three hosts can communicate with the same devices external to UCS; however, none of the three hosts can communicate with each other. They are all separated/isolated from each other at Layer 2.

  • FabricSync
    Instead of reinventing the wheel and spending the time to define this feature, I’ll refer you to a blog (link below) written by an esteemed colleague of mine named Brad Hedlund. Brad explains how Fabric Failover works for ‘implicit’ MAC addresses and how Fabric Failover works with ‘learned’ MAC addresses (as of release 1.4(1)). The synchronizing of ‘learned’ MAC addresses between Fabric Interconnects is referred to as “FabricSync” now (even though Brad doesn’t use the ‘FabricSync’ feature name in his blog article).

    http://bradhedlund.com/2010/09/23/cisco-ucs-fabric-failover/

    P.S. This is one of my favorite new features because I helped come up with the name – FabricSync. <insert humility here> :)

  • Support for FET-10G FEX transceivers
    UCS Fabric Interconnects and Fabric Extenders (FEX) now support FET-10G FEX transceivers. FET stands for “Fabric Extender Transceiver”. These transceivers are based on multimode fiber and support distances of 25 or 100 meters between the FEX and Fabric Interconnect. In addition, the FET-10G transceivers are low power (~1W per transceiver) and extremely low latency (~0.01 ms).
  • Management Interface Monitoring and Failover
    VIP, or Virtual IP, is the equivalent of a cluster IP address for UCS Manager. The VIP needs to be reachable via whichever management port is available on either Fabric Interconnect. If the management port on Fabric Interconnect A is the ‘active’ port and it fails, the VIP needs to fail over to the management port on Fabric Interconnect B so that users can still access UCS Manager.

    As of this release, 1.4(1), Cisco has augmented VIP availability so that the management ports are actively monitored not only for link failure but also for connectivity to a pingable ARP target and a pingable gateway target. After a failure, the VIP address is failed over to the new active management port. The CIMC (remote KVM)/IPMI/SSH sessions to each blade server are also failed over to the new active management port.

    Note: After a failover of the management instance you will need to re-authenticate to the new instance.



  • FC Port Channeling on FC uplinks
    Port Channeling is now supported on Fibre Channel uplinks. The main benefit of FC port channeling is that host logins assigned to a failed FC uplink in a port channel can be quickly moved to another FC uplink in the same port channel without re-logging the host into the upstream fabric.
  • FC VSAN Trunking on FC uplinks
    Fibre Channel VSAN trunking is similar to VLAN trunking on an Ethernet port – a single physical port (or port channel) can carry multiple VSANs.

In summary, this new release by the Cisco UCS development team absolutely blew my socks off. The ability of our development, test, beta, services, support, and field sales teams to work together to once again deliver a whole slew of new features based on customer requests would be impressive even for one of the legacy server vendors – much more so for a team that is completing its second year of shipping server products. No existing or potential UCS customer should doubt Cisco’s commitment to this product line or doubt the technical ability of our people. They’re top-notch and they’ve outdone themselves once again.

Dec 20th, 2010 | Posted in Cisco UCS
  • Guest

    Can you all please give TAC a Christmas present and hold off your upgrades until the New Year? :)

  • Interested

    Awesome article! So the UCS will now support end-to-end FCoE without the use of a Nexus 5k? Being able to plug the FCoE array directly into the Fabric Interconnects?

    • http://www.mseanmcgee.com M. Sean McGee

      Correct. Thanks for reading!

  • Guest

    Hi Sean, Great blog. :)

    Just thought I would add a clarification on this bit:

    “Management Interface Monitoring and Failover” – Also, the CIMC (remote KVM)/IPMI/SSH sessions to each blade server are also failed over to the new active management port.

    * The configurations required to establish the KVM/IPMI/SSH connections are failed over from the affected FI to the other. Just thought I would explicitly mention that active sessions would be disconnected and the user will need to reconnect.
    The difference from before is that the connection paths to these endpoints would previously have been lost (until the mgmt interface state recovered). Now, the “configurations are failed over” on detecting a failure, so that connections can be re-established right away.

    • http://www.mseanmcgee.com M. Sean McGee

      Good clarifications. Thanks for contributing!


  • Guest

    So on my UCS, I’m now seeing the following license message for both FIs. I only have 6 ports active (4 to a single chassis and 2 uplinks to an N5k) on each FI, and it’s my understanding that the FI comes from Cisco with 8 ports activated by default. According to the licensing screen, I’ve now consumed 3 days of a 120-day grace license.

    warning F067093835 sys/license/feature-ETH_PORT_ACTIVATION_PKG-cisco-1.0/inst-A/fault-F0670 license-graceperiod-entered license for ETH_PORT_ACTIVATION_PKG on fabric-interconnect A has entered into the grace period.

    That’s a “Usability” enhancement to me!

    • http://www.mseanmcgee.com M. Sean McGee

      Yes, that was an erroneous message. 1.4(1j) fixes that message for you.

      Thanks for reading!

  • http://BladesMadeSimple.com/ Kevin Houston

    Great article, Sean with great info. Since you know it so well, I’ve posted an article linking to your site. Appreciate the hard work!

    • http://www.mseanmcgee.com M. Sean McGee

      Thanks for the plug on your blog, Kevin!

  • Mike_j_roberts

    Nice post, Sean.
    I work for Dell, by the way. I don’t understand the “new” feature of being able to manage up to 20 chassis via UCS Manager. I thought the message out of the gate for this product was that it could manage 40 chassis/320 servers? Was that never actually possible before?

    • http://www.mseanmcgee.com M. Sean McGee

      Hi Mike,
      Thanks for the disclosure. I appreciate it.

      Yes, you are correct. In the past, you may have seen “marketing” messaging discussing the 40 chassis/320 servers. 40 chassis/320 servers is an architectural maximum configuration… meaning, Cisco *can* support up to that many in the future if customer demand drives it. However, today the supported limit is 20 chassis/160 servers.

      Hope that clarifies things. Thanks for reading and I appreciate your question!

  • Craig Weinhold

    I’ve been playing with Appliance ports and can’t quite grasp how they’re supposed to work.

    For iSCSI multipathing, the iSCSI appliance would have one NIC to each fabric, each on a fabric-local VLAN with a different subnet. No problem. That should work very well.

    But for every other appliance (NFS, Nexus 1010, etc), you’d connect NICs to each fabric and configure them as an active/failover NIC team or with SMAC pinning. That’d work, but since each appliance IP/MAC is local to only one fabric, unlucky traffic from the other fabric must exit UCS and traverse the northbound switches. That’ll hurt appliance performance, especially when 1GE uplinks are used.

    I’d hoped that inter-fabric links configured as appliance ports could carry the appliance VLANs, but they don’t (errDisable).

    Am I missing something fundamental about how Appliance ports are designed to work? Right now, my conclusion is that they’re great for iSCSI, but everything else should continue using switch mode.

    • http://www.mseanmcgee.com M. Sean McGee

      Hi Craig,
      Most appliances should contain some form of NIC teaming/NIC bonding for Layer 2 redundancy. As such, my “recommended practice” is to configure the components (i.e. the vSwitch uplinks or the Nexus 1010 uplinks in your example) so that the active NIC (uplink) in the NIC team/bond has ‘affinity’ for the same Fabric Interconnect. This design avoids any east/west traffic having to traverse an external switch. If the ‘active’ NICs for both the Nexus 1000v control/packet traffic and the ESX hosts are all connected to the same Fabric Interconnect (FI), all Layer 2 traffic stays within the FI.

      Thanks for reading, Craig.

      Regards,
      -sean

  • http://pulse.yahoo.com/_LCMD4XDWB7O2S75MOWPG2OEQW4 youngec2000

    What happened to the 1.4(1) firmware release? It appears that it was pulled. Is there any timeline for it being re-released? Thank you.

  • Netsonar

    Disclaimer: I work for a neutral third-party consulting group where a large majority of our services are for global banks and telecoms. The opinions below are my own and by no means represent my firm’s beliefs.

    After seeing multiple POCs head to head with UCS and HP, I firmly believe that Cisco is well ahead of the curve both with technology and management. However, even if we assume Cisco is equal with HP regarding technology, the management capability of UCS vs. HP is a clear decision leaning heavily in favor of UCS. Recently I was asked to fast track a hybrid DC architecture design, and I have come to realize that the Cisco UCS solution can be stood up in days as opposed to weeks compared to the c7000 and Matrix. Much of the time in my experience has been on SRDF data replication in this example; otherwise the UCS solution could have been stood up in hours. Everything we had to plan for and design with Cisco was straightforward, and the Palo card delivering DMA capabilities for ULL applications is far superior to any Flex technology HP offers. In fact, one client has been forced to include additional WEEKS into move planning due to HP “white glove” service providing very slow turnaround times for firmware and the building of management consoles. Further, the customer feedback has been: why in the world are we using HP at all, considering all of our delays are caused by a single vendor? Finally, at another client I saw a senior executive ask an HP rep multiple times about the EVA vs. 3PAR strategy, with no clear answer or strategy for a client that spends tens of millions with HP. I believe the reason we see the FUD from the HP camp is simple: they have no clear strategy, Matrix is a pipedream, and support and local talent are almost nonexistent for enterprise accounts.

    Typed on my iPad while commuting; sorry for grammar and spelling errors.

    • Craig

      I have done multiple head-to-head POCs with HP at client sites; they were really worried by some of the capabilities we demonstrated, which gave them a tough time.

  • Randy

    After multiple issues with Virtual Connect Flex-10 that caused both VCMs in a C7000 enclosure to reset at the same time (taking out connections in two enclosures), and with HP support taking months to address the issue, we are moving away from HP to UCS. HP just does not seem to have enough knowledgeable people to support a technology that frankly needs to be completely re-architected.

  • Matt

    Excellent news! For a smaller deployment this makes much more sense than investing in Nexus upstream, especially since Layer 3 is not there with the 5548s just yet. The only 10Gig ports I need upstream are for NAS/SAN, as the whole point of UCS is to replace all the rack metal elsewhere in my environment.

    One question though … if I put the NetApp Unified Target adapters in my storage array, will I be able to connect those to the 6100s in unified mode? In other words, is there a port mode that is a combination of Appliance Mode and FCoE Mode, that runs DCB between the appliance UTA and the 6100? I’d love it if there is, as it would mean (amongst other things) that I could SAN boot my UCS blades from NetApp – it seems so old-fashioned to have to have local drives in the blades just to boot because iSCSI boot is not supported with Palo.

    BTW – for the Cisco/HP debate, declaration of interest: I’m an existing HP customer with 100% HP server infrastructure, about 85% of it on c-class blades. Our major architectural review at the end of last year decided that HP is not a defensible solution for a virtualized infrastructure at the moment, so it’s being phased out in favour of UCS. The only two tipping points were the lack of a Nexus/Fabric Extender in the chassis, and the ‘unique’ approach to bandwidth management on the blade CNAs. Without that we would have gone for a hybrid HP/Cisco solution.

    • http://www.unifiedcomputingblog.com Dave Alexander

      Matt – as of today, the ports are either Appliance or FCoE. You could (at the expense of ports on both the 6100s and your NetApp) connect some ports as Appliance for iSCSI and some as FCoE for FC.
