Cisco’s Stocking Stuffer for UCS Customers: Firmware Release 1.4(1)

December 20th, 2010

Santa came early this year for Cisco UCS customers. Today, Cisco released UCS firmware version 1.4(1). This is the single most impressive feature-enhancement release I’ve seen in all my 11 years of working on blade servers. Allow me to walk you through the list of new features and provide a deeper dive into some of the details behind each one.

Note: The Release Notes are posted here: http://www.cisco.com/en/US/partner/docs/unified_computing/ucs/release/notes/OL_24086.html


Server Platform and Management Enhancements:

  • Support for new B230 server blade
    See “The Cisco UCS B230 – the Goldilocks Blade Server” for more details on this new server blade.
  • UCS C-Series Rack server integration into UCS Manager – Unified Management for the entire UCS portfolio

    Yes, you read that right – Cisco is the first server vendor to integrate rack server management into the blade server and blade chassis management interface, so that a single management tool configures and monitors both your blade and rack servers. This initial release includes support for the C200, C210, and C250 Cisco UCS rack servers. Support for additional Cisco UCS rack servers will be added in the near future.

    UCS Manager features extended to C-Series rack servers include: Service Profiles, Service Profile migration between compatible B-Series and C-Series servers, automated server discovery, fault monitoring, firmware updates, and more.

  • Chassis and multi-Chassis power capping for UCS B-Series Blade Servers

    Cisco has enhanced the facility manager’s control over UCS blade server power consumption by adding Group Level Power Capping, Dynamic Intra-chassis Power Redistribution, and Service Profile Priorities. Within the data center, power should be distributed to a blade chassis or groups of blade chassis, not to individual blade servers. If a server is “statelessly” moved using a Service Profile from one chassis to another, a statically defined per-server power cap is mostly useless. What if you moved a bunch of servers with static power caps (in watts) to the same power distribution unit (PDU), and the sum exceeded the PDU’s capacity? Not very intelligent or practical, huh? A server’s power cap needs to be relative to the power cap of the blade chassis or blade chassis group so that the infrastructure’s maximum power draw can be guaranteed.

    Cisco’s approach allows facilities administrators to define “power groups”, comprised of one or more physical blade chassis, and a “power cap” for the group based on the size of the power circuit the chassis are connected to in the data center.  Power groups, and their associated power cap, are decoupled from the physical blade servers. Server administrators use Service Profiles to assign individual workloads (OS + applications) a “priority” relative to other workloads contained within the same power group. A workload can move between power groups and retains its priority. The facilities manager never has to worry about power exceeding the defined power cap per group, no matter how the server administrators move servers/workloads around. This feature brings “change ready” and “facilities on demand” to UCS customers.

    To summarize: Cisco decoupled the power problem (facilities power cap) from the workload importance (priority). Facility admins control the infrastructure’s power cap and the server admins control a workload’s (server) priority within that infrastructure.
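
    To make the division of a group-level cap concrete, here is a minimal Python sketch of a priority-weighted allocation. This is illustrative only, not Cisco’s actual algorithm; the blade names, priorities, and wattages are invented, and it assumes a higher number means a higher priority.

      # Illustrative sketch: split a power group's cap among blades by priority.
      # Not Cisco's algorithm; assumes a higher priority value = a bigger share.

      def allocate_power(group_cap_watts, blade_priorities):
          """Return a per-blade power cap, weighted by workload priority."""
          total = sum(blade_priorities.values())
          return {
              blade: group_cap_watts * prio / total
              for blade, prio in blade_priorities.items()
          }

      # Example: a 5000 W circuit feeding one power group of four blades.
      caps = allocate_power(5000, {"blade1": 4, "blade2": 2, "blade3": 2, "blade4": 1})
      print(caps)  # the individual caps sum to the group cap, never above it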

  • Blade Chassis support increased to 20 chassis in UCS Manager

    You can now deploy, configure and manage up to 160 physical blade servers under a single UCS Manager interface. In other words, a single UCS Manager interface now replaces the equivalent of up to 10x HP Onboard Administrator interfaces, up to 10x Virtual Connect Manager interfaces, Virtual Connect Enterprise Manager, and most of the popular functions of HP Systems Insight Manager.

    Cisco UCS now needs only 3 infrastructure management IP addresses for 160 servers, compared to HP’s need for up to 7 management IP addresses per chassis – or up to 70 management IP addresses for 160 servers (up to 50 IPs if using FlexFabric).

  • Service Profile Deployment Scheduling
    Ever want to make one or more changes to one or more Service Profiles but had to wait to do all the work until your change window began? No more. With Service Profile Deployment Scheduling, you can queue up Service Profile changes for one or more servers, doing the work up front, and then schedule the changes to take effect during the next change window.

    Highlights are:

    • Service Profile changes to hardware are scheduled for future maintenance windows instead of taking effect immediately.
    • Scheduling is centrally managed via Maintenance Policies
    • Hardware resources are held/reserved until Service Profile deployment
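
    The queued-change pattern behind this feature can be sketched in a few lines of Python. This is a conceptual model only, not the UCS Manager API; the class, change strings, and window time are all invented for illustration.

      # Conceptual model of deferred Service Profile deployment (not the UCSM API):
      # changes are staged immediately but applied only once the window opens.
      import datetime

      class MaintenanceWindow:
          def __init__(self, opens_at):
              self.opens_at = opens_at   # datetime when changes may take effect
              self.pending = []          # staged Service Profile changes

          def stage(self, change):
              """Queue a change now; it is deferred until the window opens."""
              self.pending.append(change)

          def apply_if_open(self, now):
              if now >= self.opens_at:
                  for change in self.pending:
                      print("Applying:", change)
                  self.pending.clear()

      window = MaintenanceWindow(datetime.datetime(2010, 12, 24, 2, 0))
      window.stage("SP-web01: change boot order")       # staged ahead of time
      window.stage("SP-web02: new firmware package")
      window.apply_if_open(datetime.datetime(2010, 12, 24, 2, 5))  # both apply
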
  • IP address for CIMC (remote KVM) added to UCS Service Profile
    To further the concept of “server statelessness”, Cisco has added a CIMC (remote KVM) IP address to the UCS Service Profile. Now, the physical server blade owns a CIMC IP address and, optionally, the Service Profile owns a CIMC IP address. If the additional CIMC IP address in the Service Profile is used, the server admin can reach the KVM console no matter which physical blade the Service Profile is assigned to.

    Prior to this feature, if Service Profile A moved from Physical Server Blade 1 to Physical Server Blade 2, the CIMC IP address changed and the server admin had to track down the new KVM console IP address. With this optional feature, the server admin can always use the Service Profile-owned CIMC IP address (for example, 10.21.32.46) and will always reach UCS Service Profile A no matter which physical server it’s assigned to.
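
    A trivial sketch of why this matters for tooling: scripts and bookmarks can key off the Service Profile’s CIMC IP rather than the blade’s. The blade-owned address below is invented; 10.21.32.46 is the example from above.

      # Conceptual: the profile-owned CIMC IP is stable across blade moves,
      # so tooling can bookmark it. The blade-owned IP below is invented.
      PROFILE_KVM_IP = {"Service Profile A": "10.21.32.46"}  # follows the profile
      BLADE_KVM_IP = {"chassis1/blade2": "10.21.32.101"}     # stays with hardware

      def kvm_url(profile_name):
          """Reach a workload's KVM without knowing which blade it runs on."""
          return "https://" + PROFILE_KVM_IP[profile_name]

      print(kvm_url("Service Profile A"))  # same URL before and after migration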

  • Service Profile “pre-flight” checks (impact analysis BEFORE committing changes)

    This feature allows a customer to run a pre-flight check on a physical server before attempting to apply a Service Profile to it. In cases where the Service Profile requires certain hardware (like a Cisco VIC “Palo” CNA), the pre-flight check will alert the server administrator BEFORE going through Service Profile assignment. In addition, the Service Profile will “remember” the hardware it was associated with, and if the new hardware has meaningful differences, UCS Manager will warn the user.

  • SNMP GET support for ALL UCS components

    SNMP query (GET) support has been extended to cover all UCS components – Fabric Interconnects and Fabric Extenders, Blade Chassis, Blade Servers, and Rack Servers. The 58 new MIBs are available here: http://www.cisco.com/public/sw-center/netmgmt/cmtk/mibs.shtml
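
    As a quick illustration, a standard SNMP GET against a Fabric Interconnect’s management IP might look like the following. This sketch uses the third-party pysnmp library and the generic sysDescr OID; the IP address and community string are placeholders, and the UCS-specific OIDs come from the MIBs linked above.

      # Sketch: SNMP GET of sysDescr from a Fabric Interconnect (pysnmp library).
      from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                                ContextData, ObjectType, ObjectIdentity, getCmd)

      error_indication, error_status, error_index, var_binds = next(getCmd(
          SnmpEngine(),
          CommunityData('public'),                   # v2c community (placeholder)
          UdpTransportTarget(('10.21.32.10', 161)),  # FI management IP (made up)
          ContextData(),
          ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)),
      ))

      if error_indication:
          print(error_indication)
      else:
          for name, value in var_binds:
              print(name.prettyPrint(), "=", value.prettyPrint())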

  • Syslog enhancements

    UCS’s syslog functionality has been enhanced to support categorization for different components, additional filtering capabilities per syslog server destination, and more descriptive syslog messages.

  • UCS 6100 Licensing Enforcement and Warnings

    Well, some may claim this is a feature enhancement for “Cisco” rather than for the “customer”. 😉 In reality though, this is a nice feature for customers that honestly want to stay in compliance with Cisco licensing requirements (e.g. Fabric Interconnect port licensing). A new GUI-based licensing management interface and licensing warning messages are part of the “usability enhancements” of this feature.

    • UCS Manager can assign or revoke licenses
    • Port licenses are based on the number of fixed ports in use (no need to assign licenses to ports individually)
    • Expansion ports (GEM – Gateway Expansion Modules) don’t require port licenses
  • UCS Manager Usability Enhancements
    Several UCS Manager usability enhancements are also included in release 1.4(1). These include:

  1. Firmware Upload using local file system instead of FTP/TFTP
    Yes, we finally added it! Upload firmware to UCS directly from your local desktop! FTP/TFTP servers are no longer required. This feature is especially useful for demo or lab environments where FTP/TFTP servers are not readily available.

  2. Enhanced UCS Firmware Descriptions
    UCS Manager now provides better descriptions for firmware images. The descriptions allow you to quickly identify which hardware product a firmware image is intended for.

  3. Service Profile Aliases
    Server administrators can now add a free-form (any character is legal) description to a Service Profile for quick identification of the Service Profile object they want to work on. The description is displayed at the end of the Service Profile name on the Servers tab in the left panel of UCS Manager.

  • Enhanced integration with Microsoft Active Directory

    UCS Manager now supports mapping Active Directory (AD) groups to UCS Manager user roles. UCS Manager looks up AD user groups and allows the UCS domain admin to assign UCS roles to the AD user groups. This eliminates the per-user role assignment within UCS Manager that was required before 1.4(1).
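
    Conceptually, the feature works like a lookup table from AD group DNs to UCS roles. A minimal Python sketch (the group DNs and role names below are invented; UCS Manager performs this mapping internally):

      # Conceptual sketch of AD-group-to-role mapping. The DNs and role names
      # are invented; UCS Manager does this lookup internally.
      ROLE_MAP = {
          "CN=UCS-Admins,OU=Groups,DC=example,DC=com": "admin",
          "CN=UCS-Operators,OU=Groups,DC=example,DC=com": "operations",
          "CN=UCS-ReadOnly,OU=Groups,DC=example,DC=com": "read-only",
      }

      def roles_for(user_group_dns):
          """Roles granted by AD group membership; unmapped groups are ignored."""
          return {ROLE_MAP[dn] for dn in user_group_dns if dn in ROLE_MAP}

      print(roles_for([
          "CN=UCS-Operators,OU=Groups,DC=example,DC=com",
          "CN=Staff,OU=Groups,DC=example,DC=com",   # not mapped, no role granted
      ]))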

  • Simultaneous Support for all authentication methods (local, TACACS+, RADIUS, and LDAP/Active Directory) in UCS Manager

    When UCS initially launched in 2009, it supported authentication via local users, TACACS+, RADIUS, or LDAP/AD servers. However, UCS Manager only supported a single authentication method at a time. With release 1.4(1), UCS Manager now supports all authentication methods simultaneously.

    A user selects their authentication domain during login.
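
    For example, the XML API lets a login name its authentication domain by prefixing the username. A hedged Python sketch (the host, domain name, and credentials are placeholders; aaaLogin and the /nuova endpoint are part of the documented UCS Manager XML API):

      # Sketch: selecting an auth domain at login via the UCSM XML API.
      # The "ucs-<domain>\user" prefix picks the domain; values are placeholders.
      import ssl
      import urllib.request

      body = '<aaaLogin inName="ucs-RADIUS\\jsmith" inPassword="s3cret" />'
      req = urllib.request.Request(
          "https://ucsm.example.com/nuova",      # UCSM XML API endpoint
          data=body.encode(),
          headers={"Content-Type": "text/xml"},
      )
      ctx = ssl._create_unverified_context()     # UCSM often uses a self-signed cert
      with urllib.request.urlopen(req, context=ctx) as resp:
          print(resp.read().decode())            # success returns an outCookie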

  • Support for authentication to multiple Active Directory Domains

    In addition to the support for multiple authentication methods discussed above, UCS Manager now also allows authenticating against multiple Active Directory domains. This is a key feature for multi-tenant environments with multiple AD domains, or for environments with a separate AD domain per region.

  • Multi-user CIMC Enhancements

    Cisco UCS’s remote KVM feature, provided by the Cisco Integrated Management Controller (CIMC), now includes enhancements for multi-user access. The first user accessing the KVM gets read-write privileges to the session, while subsequent users join as read-only by default and must be granted access by the first user. It also includes the ability for the UCS admin user to force termination of a user’s KVM session.

  • UCS “Server Packs”
    Support for server and adapter hardware can now be delivered independently of support for the infrastructure components. This allows customers to load supporting firmware packages for new server hardware and adapter hardware without having to upgrade their Fabric Interconnect or UCS Manager software at the same time.

    Server and Adapter Packs, or bundles, will be provided anytime new server or adapter hardware is released. These Server Packs or Adapter Packs can then be loaded into the “infrastructure” to provide immediate support of the new server or adapter hardware without upgrading the infrastructure firmware in UCS Manager or the Fabric Interconnects.

Ethernet and Fibre Channel (FC) Networking Enhancements:

  • New Fabric Interconnect Port Types: Ethernet Appliance, FC Target, and FCoE Target
    In addition to Ethernet and FC monitoring ports covered later, UCS release 1.4(1) introduces three new port types for the Fabric Interconnect uplinks:

    Ethernet Appliance: When a Fabric Interconnect uplink is configured as an Appliance port, a user can connect several types of “appliances” directly to the UCS Fabric Interconnects. Such appliances could be NFS/NAS/iSCSI storage targets, security appliances, Nexus 1010 appliances, etc. You can even use port channeling to increase the “pipe” to the appliance if needed. Prior to version 1.4(1), appliances could be directly connected to the Fabric Interconnect, but only when “switch mode” was used. Release 1.4(1) adds support for “appliances” in “End Host Mode” as well. This is a key feature since Cisco’s usual recommendation is to use End Host Mode instead of Switch Mode.

    FC and FCoE Target ports: UCS users can now directly connect FC targets and FCoE targets to UCS Fabric Interconnects. While the default zoning configuration is all that is supported for now, the Fabric Interconnect will inherit the zoning configuration from an upstream MDS switch (if necessary).

  • Support for 1024 VLANs per Fabric Interconnect
    Up to 1024 VLANs per Fabric Interconnect are supported. Prior to release 1.4(1), only 512 VLANs were supported per Fabric Interconnect.
  • SPAN (port monitoring) support for both Ethernet and Fibre Channel
    Cisco has added support for SPAN to release 1.4(1). SPAN, or Switch Port Analyzer, provides selective traffic mirroring from a source (one or more server ports or vNICs) to a destination (Ethernet or Fibre Channel uplink). Up to four simultaneous sessions are supported – two on each Fabric Interconnect. In addition, both the LAN and the SAN administrators have the ability to define their own SPAN sessions via the LAN or SAN tab, respectively, in UCS Manager.

    In addition to traditional monitoring (NIC -> Ethernet analyzer or HBA -> FC analyzer), users can now monitor both vNIC and vHBA traffic when SPANed to an Ethernet destination uplink. Also, when using the Cisco Palo CNA with interface virtualization, each individual vNIC can be monitored/SPANed separately. If the vNICs are used with Passthrough Switching in VMware, this allows monitoring traffic from every individual VM. When Palo is used with a bare-metal OS install, this feature allows each NIC port presented to the OS to be monitored independently.

  • Private VLAN (Isolated Access Port) Support
    Without Private VLAN (PVLAN) support, network administrators would be required to use separate VLANs to maintain Layer 2 separation between physical or virtual servers, a method of secure separation that doesn’t scale well. Instead, Private VLANs can be used to enforce a Layer 2 boundary between physical or virtual servers assigned to the same VLAN. UCS release 1.4(1) provides isolated PVLAN support for physical server access ports and for Palo CNA vNIC ports.

    Example: all three hosts (one bare metal server and two VMs) are in the same VLAN A and assigned IP addresses in the same subnet. All three hosts can communicate with devices external to UCS; however, none of the three hosts can communicate with each other. They are all separated/isolated from each other at Layer 2.
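
    The reachability rule of an isolated PVLAN is easy to model. A conceptual Python sketch (host names echo the example above; “promiscuous” is the standard PVLAN term for ports, such as the uplink, that every host may reach; the router name is invented):

      # Conceptual model of isolated-PVLAN reachability (not a switch config).
      # Isolated ports reach only promiscuous ports, never each other.
      PORT_MODE = {
          "bare-metal": "isolated",
          "vm1": "isolated",
          "vm2": "isolated",
          "upstream-router": "promiscuous",   # invented name for the uplink
      }

      def can_communicate(a, b):
          """Layer 2 reachability under isolated-PVLAN rules."""
          return "promiscuous" in (PORT_MODE[a], PORT_MODE[b])

      print(can_communicate("vm1", "vm2"))              # False: isolated peers
      print(can_communicate("vm1", "upstream-router"))  # True: external traffic flows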

  • FabricSync
    Instead of reinventing the wheel and spending the time to define this feature, I’ll refer you to a blog post (link below) written by an esteemed colleague of mine, Brad Hedlund. Brad explains how Fabric Failover works for ‘implicit’ MAC addresses and how, as of release 1.4(1), it also works with ‘learned’ MAC addresses. The synchronizing of ‘learned’ MAC addresses between Fabric Interconnects is now referred to as “FabricSync” (even though Brad doesn’t use the ‘FabricSync’ feature name in his article).

    http://bradhedlund.com/2010/09/23/cisco-ucs-fabric-failover/

    P.S. This is one of my favorite new features because I helped come up with the name – FabricSync. <insert humility here> 🙂

  • Support for FET-10G FEX transceivers
    UCS Fabric Interconnects and Fabric Extenders (FEX) now support FET-10G transceivers. FET stands for “Fabric Extender Transceiver”. These transceivers use multimode fiber and support distances of 25 or 100 meters between the FEX and the Fabric Interconnect. In addition, the FET-10G transceivers are low power (~1W per transceiver) and extremely low latency (~0.01 ms).
  • Management Interface Monitoring and Failover
    VIP, or Virtual IP, is the equivalent of a cluster IP address for UCS Manager. The VIP needs to be available via whichever management port is active on either Fabric Interconnect. If the management port on Fabric Interconnect A is the ‘active’ port and it fails, the VIP needs to fail over to the management port on Fabric Interconnect B so that users can still access UCS Manager.

    As of this release, 1.4(1), Cisco has augmented VIP availability so that the management ports are actively monitored not only for link failure but also for connectivity to a pingable ARP target and a pingable gateway target. After a failure, the VIP is failed over to the new active management port. The CIMC (remote KVM)/IPMI/SSH sessions to each blade server are also failed over to the new active management port.

    Note: After a failover of the management instance you will need to re-authenticate to the new instance.
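
    The monitoring logic can be sketched as a simple health check. This is a conceptual model of the behavior, not UCS internals; the port names and target IP are invented, and the ping invocation assumes Linux-style flags.

      # Conceptual sketch of management-port monitoring (not UCS internals).
      # The VIP should live on the first port whose targets all answer a ping.
      import subprocess

      def reachable(ip):
          """One ICMP echo with a 1-second timeout (Linux 'ping' flags)."""
          return subprocess.call(["ping", "-c", "1", "-W", "1", ip],
                                 stdout=subprocess.DEVNULL) == 0

      def pick_active_port(ports):
          for name, targets in ports.items():
              if all(reachable(t) for t in targets):
                  return name
          return None   # no healthy port; in UCS this would raise a fault

      # Each management port is checked against its gateway/ARP targets (made up).
      ports = {"FI-A mgmt0": ["10.0.0.1"], "FI-B mgmt0": ["10.0.0.1"]}
      print("VIP should live on:", pick_active_port(ports))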



  • FC Port Channeling on FC uplinks
    Port Channeling is now supported on Fibre Channel uplinks. The main benefit of FC port channeling is that host logins assigned to a failed FC uplink in a port channel can be quickly moved to another FC uplink in the same port channel without re-logging the host into the upstream fabric.
  • FC VSAN Trunking on FC uplinks
    Fibre Channel VSAN trunking is similar to VLAN trunking on an Ethernet port – a single physical port (or port channel) can carry multiple VSANs.

In summary, this new release by the Cisco UCS development team absolutely blew my socks off. The ability of our development, test, beta, services, support, and field sales teams to work together to once again deliver a whole slew of new features based on customer requests would be impressive even for one of the legacy server vendors – much more so for a team that is completing its second year of shipping server products. No existing or potential UCS customer should doubt Cisco’s commitment to this product line or the technical ability of our people. They’re top-notch and they’ve outdone themselves once again.
