The Cisco UCS B230 – the Goldilocks Blade Server

Sept 14, 2010

As the fairy tale goes, Goldilocks thought the chairs were either too big or too small. Unfortunately, she could have also been describing many blade servers out there today – they’re either too big (taking up too many expensive blade chassis/mini-rack slots) or they’re too small (providing too little memory). In a direct response to this problem, Cisco has developed the first blade server that isn’t too big or too small, but “just right”…

On Tuesday, September 14th, Cisco announced the 13th server in the UCS portfolio – the Cisco UCS B230 M1. This innovative blade delivers a form factor and memory density unique in the industry: the highest memory density and up to 16 cores in a single-slot blade. A 6U blade chassis can now hold up to 256 DIMMs and 128 cores. That means more blades, more memory, and more cores in a smaller space, reducing cost and complexity.

May 2011 update: Cisco has just announced the B230 M2. The M2 supports the Intel Xeon E7-2800 processor family and adds support for 16 GB DIMMs (512 GB per server, or 28 TB per rack):
http://www.cisco.com/en/US/products/ps10280/prod_models_comparison.html

To truly understand how very cool this server is, we need to compare it to the latest blade servers from Cisco’s competitors.

As you can see in the chart above, the Cisco B230s pack an impressive punch in a single 42U rack – 56 servers delivering 896 cores and 14+ TB of RAM.
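For readers who want to check the math, here's a minimal Python sketch of how those rack-level numbers fall out. The per-blade and per-chassis figures come from the article; the 7-chassis, 42U layout is my assumption:

```python
# Back-of-the-envelope density math for a 42U rack of UCS B230 blades.
# Assumed layout: 7 x 6U chassis per rack, 8 half-width blades per chassis,
# 2 sockets x 8 cores and 32 DIMM slots per blade (figures from the article).

RACK_U, CHASSIS_U = 42, 6
BLADES_PER_CHASSIS = 8
CORES_PER_BLADE = 2 * 8          # two 8-core Nehalem-EX sockets
DIMMS_PER_BLADE = 32

chassis = RACK_U // CHASSIS_U                          # 7 chassis per rack
blades = chassis * BLADES_PER_CHASSIS                  # 56 blades per rack
print(BLADES_PER_CHASSIS * DIMMS_PER_BLADE, "DIMMs and",
      BLADES_PER_CHASSIS * CORES_PER_BLADE, "cores per chassis")  # 256, 128

for model, dimm_gb in [("M1", 8), ("M2", 16)]:
    ram_tb = blades * DIMMS_PER_BLADE * dimm_gb / 1024
    print(f"B230 {model}: {blades} blades, {blades * CORES_PER_BLADE} cores, "
          f"{ram_tb:.0f} TB RAM per rack")
# -> B230 M1: 56 blades, 896 cores, 14 TB RAM per rack
# -> B230 M2: 56 blades, 896 cores, 28 TB RAM per rack
```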

Here’s a closer look at the B230 M1:

The B230 will ship supporting the following processors:

The B230 will ship supporting the following mezzanine-based network adapters:

The B230 has achieved two performance benchmarking world records:

  • SPECjbb2005
    #1 2-socket x86 world record: Intel Xeon X7560 scored 1,015,802 BOPS on Cisco UCS B230 M1
    (http://www.spec.org/jbb2005/results/jbb2005.html)
  • SPECjAppServer2004
    #1 dual-node world record: Intel Xeon X7560 scored 11,283.80 JOPS on Cisco UCS B230 M1
    (http://www.spec.org/jAppServer2004/results/jAppServer2004.html)

 

2011-05-06 Update: B230 M2 Information Added
2010-09-20 Update: B230 Benchmark Achievements Added

  • Pingback: Blades Made Simple » Blog Archive » Cisco Announces 32 DIMM, 2 Socket Nehalem EX UCS B230-M1 Blade Server

  • http://BladesMadeSimple.com/ Kevin Houston

    Great write-up! Great details. I’m glad to be able to link to it in my own short post.

  • TT

    Cisco’s unproven technology will require far more than specs to compete with something like the IBM HX5.

    (Posted from IP address: 129.42.208.174)

    • http://www.mseanmcgee.com M. Sean McGee

      Hi TT,
      Thanks for stopping by to read the blog, and I appreciate you taking the time to post your comment. I’ll assume that you are an IBMer, since you posted from an IBM-owned IP address (129.42.208.174). Let’s all try to disclose our employers so that readers understand our perspectives – whether vendor, partner, or customer.

      In regards to your comment, I don’t understand what you mean by “unproven technology”. The Cisco B230 uses off-the-shelf Intel Nehalem-EX processors and the Intel-designed memory layout of 32 DIMMs (16 per socket). The B230 doesn’t need any magic Cisco pixie dust; just an efficient use of real estate within the space of a single blade slot – more efficient than IBM, Dell, or HP.

      In contrast, the IBM HX5+MAX5 is proprietary in how it achieves its memory density. It’s a design/layout that IBM came up with and no other vendor uses. Since this design is brand new and only used by one vendor (IBM), this seems to more accurately fit your description of “unproven technology”.

      Can you elaborate on exactly what you mean by “Cisco’s unproven technology”? Which part is “unproven”? Intel EX? 16 DIMMs per socket?

      Thanks for the discussion,
      -sean

    • Mirage

      Like a solid management platform and CNAs that actually work? I have a customer with a very large HX5 install that has had nothing but problems – various hardware failures and issues with the Emulex CNAs, mostly firmware-related. Getting the firmware updated is about a 90-minute process per blade, and if you multiply that out across the install, it comes to almost a full year of updating firmware.

      NOTE: I work for a Cisco partner that sells UCS, but we are also an IBM partner.

  • Pingback: links for 2010-09-17 : Bob Plankers, The Lone Sysadmin

  • Anonymous

    Looks neat. But what about cost? Aren’t those 6500 and 7500 series a lot more expensive than the 5600 series?
    We’re an HP shop, but I am going to a Cisco UCS bootcamp, since they look very interesting. I’m just afraid it’s more expensive than the HP gear, which we can buy refurbished for a fraction of the price. Even HP’s “converged infrastructure” is far from worth it when you can, for example, get older 4 Gbit FC switches that will do the job just as well for less than 1/10 the price of “FlexFabric”. Yes, you do get rid of a few cables and save a few watts, but who really cares about that?

    • http://www.mseanmcgee.com M. Sean McGee

      Hi Taliz,
      Thanks for stopping by and taking time to comment.

      As with any technology, the new stuff usually costs more than the old stuff. That’s because the new stuff usually does MORE than the old stuff. For example, using fewer (but more expensive) 7500 procs to service the same number of VMs might actually be cheaper than using more (but less expensive) 5600 procs. Buying fewer 7500s and fewer hypervisor licenses might be cheaper than buying more 5600s and more hypervisor licenses. To illustrate:
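      A minimal sketch – every price and VM-density figure below is made up purely for illustration, not a quote:

```python
import math

# Hypothetical TCO comparison: fewer high-end sockets vs. more low-end ones.
# All prices and VM densities are illustration values only.

def total_cost(vms_needed, vms_per_server, server_cost, license_per_server):
    servers = math.ceil(vms_needed / vms_per_server)
    return servers, servers * (server_cost + license_per_server)

# Scenario A: 7500-class servers -- pricier box, higher VM density.
a_servers, a_cost = total_cost(400, vms_per_server=40,
                               server_cost=30_000, license_per_server=8_000)
# Scenario B: 5600-class servers -- cheaper box, lower VM density.
b_servers, b_cost = total_cost(400, vms_per_server=20,
                               server_cost=18_000, license_per_server=8_000)

print(f"A: {a_servers} servers, ${a_cost:,}")   # A: 10 servers, $380,000
print(f"B: {b_servers} servers, ${b_cost:,}")   # B: 20 servers, $520,000
```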

      I recommend discussing your needs with one of Cisco’s many server engineers and allowing them to quote a design that meets those needs. We have many internally-created ROI calculators and Compute Sizing utilities that can provide very accurate recommendations for our customers. Based on that information, you’ll be able to compare your options.

      Best regards,
      -sean

  • http://twitter.com/Tschokko Tschokko

    I like the hot-plug SSD solution. :) That’s a really cool idea, and it takes up only a little space. :)

    But what about heat? I think this blade can produce a lot of heat!

    Kind regards
    Tschokko

  • Djarzynka

    Great article! I added it to my UCS blog as a link, and I extend my kudos to you!

    • http://www.mseanmcgee.com M. Sean McGee

      Hi Djarzynka,
      Thanks for stopping by to read, comment, and cross-link the article. Much appreciated!

      Best regards,
      -sean

  • Tell

    What company manufactures these blade servers for Cisco?

    • http://www.mseanmcgee.com M. Sean McGee

      Hi Tell,
      Cisco designs its own blade servers; they are not OEMed from another vendor. That’s why UCS provides so many features that no other blade server in the industry has.

      Regards,
      -sean

  • MArcos

    I just updated the firmware on four UCS domains (256 B200 M2 blades) in three hours – fabric interconnects, BIOS, CNAs, BMCs. I remember my IBM/HP days: one disk for the BIOS, another for the CNA, another for the Broadcom, another for the RSA/iLO – and each reboot on an IBM machine takes more than 5 minutes. I just don’t want to remember those times… nights spent upgrading firmware.

    • http://www.mseanmcgee.com M. Sean McGee

      Yes, I really like the centralized management of infrastructure firmware… It just makes sense, doesn’t it?
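      To put numbers on it, a back-of-the-envelope comparison using MArcos’s 3-hour figure and the ~90-minutes-per-blade figure from Mirage’s comment above (the one-blade-at-a-time assumption is mine):

```python
# Rough comparison of the two update experiences described in this thread.
blades = 256
centralized_hours = 3              # MArcos: four whole domains, centrally
serial_hours = blades * 1.5        # Mirage: ~90 minutes per blade, serially
print(f"{serial_hours / centralized_hours:.0f}x longer "
      f"(~{serial_hours / 24:.0f} days of nonstop updating)")
# -> 128x longer (~16 days of nonstop updating)
```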

  • Pingback: Cisco’s Stocking Stuffer for UCS Customers: Firmware Release 1.4(1) | M. Sean McGee

  • Royc621

    How much power does a chassis with eight 2P B230s draw? Is ~5 kVA about right? If so, I need a 35 kVA rack to fit the 7 enclosures in your chart… marketing, what a great pastime
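Royc621’s rack-level arithmetic, spelled out (the ~5 kVA per chassis is the commenter’s estimate, not a measured or published figure):

```python
# Rack power under the commenter's assumed per-chassis draw.
kva_per_chassis = 5        # assumed fully loaded 8-blade chassis (estimate)
chassis_per_rack = 7       # 7 x 6U chassis in a 42U rack
print(kva_per_chassis * chassis_per_rack, "kVA per rack")  # 35 kVA
```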

  • NATrevains

    Hi Sean,

    Great write-up, great blog. Just a couple of quick thoughts: how do we power and cool this in a traditional datacenter, and would a typical datacenter raised floor really support a 42U rack with 7 fully populated chassis of these blades? I’m an Executive Technology Consultant for Firefly (although these comments are my own), and we try to steer clear of absolute compute density. Keen to get your thoughts.

    Great blog – always recommend it to any of my class attendees!

    Kind regards,

    Neil

  • http://twitter.com/trekkie Tom Boucher

    How about power consumption? Typically I’ve found that most datacenters that weren’t designed in the last 5 years can’t handle the maximum rack density that blade servers offer. Is there a power-per-core number or some other measurement? Just curious.

  • Lookin-at-ucs

    Have you had any updates on how many cores per blade, per chassis, and per rack on the HP?

    • http://twitter.com/JeffSaidSo Jeff Allen

      HP’s “equivalent” blade to the B230 is the BL620c. As of today, it’s a full-width blade providing 32 servers per rack and 64 cores total. Compare that with the B230, which provides 48 blades per rack with 96 cores total. Cisco supports the higher-wattage (130W) processors and HP currently does not – not sure why. Both blades accommodate 32 DIMM slots, but HP currently supports 16 GB DIMMs while Cisco only supports 8 GB today – emphasis on “today” :) .

  • Colin

    Hi Sean
    What’s your opinion: does the B230 M2 with 16 GB DIMMs (512 GB) in a dual-socket, half-width blade negate the capacity benefit offered by EMT on the B250 (384 GB in a full-width blade)? I appreciate that you still get the benefit of using a large amount of smaller, cheaper DIMMs in the 48 DIMM slots on the B250, but those chassis slots are at a bit of a premium, and I can’t see myself speccing up a B250 anymore on the sole premise of using EMT to gain memory capacity versus number of sockets, when I can now have a B230 M2 with 512 GB – albeit with much more expensive 16 GB DIMMs.

    • http://www.mseanmcgee.com M. Sean McGee

      Hi Colin,
      Well, I hate to say it, but…. it depends. :)

      I wouldn’t say memory capacity is the end-all-be-all in all cases. Memory costs aside, as you say, the amount of network throughput (40 Gbps on the B250 vs. 20 Gbps on the B230), memory bus speed (1333 MHz on the B250 vs. 1066 MHz on the B230), and single-core speed/single-threaded performance (higher on EP) are all examples of “sizing” attributes that may sway you toward the B250 over the B230. For example, engineers on my team have found that in many configurations the B250 allows for higher VDI density than other blades – and that can easily negate the “slot tax” in large deployments. On the pure capacity question, the slot math looks like this:
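      A rough sketch (DIMM counts from this thread; treat it as illustration, not sizing guidance):

```python
# GB per chassis slot: B250 (full-width, EMT, 48 slots) vs. B230 M2 (half-width).
b250_gb = 48 * 8         # 384 GB of 8 GB DIMMs across 2 chassis slots
b230_gb = 32 * 16        # 512 GB of 16 GB DIMMs in 1 chassis slot
print(b250_gb / 2, "GB per slot (B250)")      # 192.0
print(b230_gb / 1, "GB per slot (B230 M2)")   # 512.0
```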

      Sorry I can’t provide a definitive answer. Please reach out to us if you have any sizing opportunities and you’d like us to assist. We have a whole army of server engineers here to help.

      Best regards,
      -sean

      • Frank

         Hey Sean,

        With the new B230 M2 E7 chip and the ability to run 1333 MHz DIMMs, have you seen any good results in the field using this for VDI?