EqualLogic Multipathing Extension Module V1.1 for VMware® vSphere Early Production Access Kit (EPA)

If you’re like me and you manage EqualLogic arrays in a VMware vSphere environment, you may have noticed that you couldn’t just upgrade to vSphere 5 if you had installed the EqualLogic Multipathing Extension Module (MEM plugin) in vSphere 4.

Today, Dell finally released the much-anticipated update to their EqualLogic MEM plugin, which is now compatible with vSphere 5. Note, however, that when upgrading from ESX 4.x to 5.0, third-party modules are not carried over to the new operating system. After you have completed the operating system upgrade, you can install MEM 1.1.0, which supports ESX 5.0.

Here are some documents regarding the new release!
Multipath Extension Module Release Notes V1.1
Installation and User Guide V1.1
Fix List V1.1

And here are the notes from the download itself outlining the features gained when installing the MEM plugin.

The EqualLogic Multipathing Extension Module (EqualLogic MEM) provides the following enhancements to the standard VMware vSphere multipathing functionality:

  • Automatic connection management
  • Automatic load balancing across multiple active paths
  • Increased bandwidth
  • Reduced network latency

New EqualLogic Arrays: PS4100, PS6100, FS7500

I wanted to post a quick update for those looking for info on the new series of EqualLogic arrays, which will integrate with Dell’s Fluid Data architecture.


Product Overview

The EqualLogic PS4100 and PS6100 continue the EqualLogic tradition of performance scaling linearly with capacity. With up to 60% IOPS improvements on typical workloads over previous-generation EqualLogic arrays, the new arrays are designed to grow with data demands, all while providing simple management and seamless expansion using innovative Fluid Data™ technology. PS Series arrays are based on a virtualized modular storage architecture, allowing customers to purchase only the storage they need, when they need it. PS Series arrays include SAN configuration features and capabilities that sense network connections, automatically build RAID sets, and conduct system health checks. With the planned August 22, 2011 announcement of the EqualLogic PS4100 and PS6100 Series arrays, customers are offered a vast array of choices that scale linearly with capacity to meet their needs using Dell Fluid Data™ technologies.


EqualLogic PS6100 and PS4100

The EqualLogic PS4100 and PS6100 continue the EqualLogic tradition of performance scaling linearly with capacity, providing you with a vast array of choices to meet your customers’ needs. With up to 67% IOPS improvements on typical workloads over previous-generation EqualLogic arrays, the new arrays are designed to grow with data demands, all while providing simple management and seamless expansion using innovative Fluid Data™ technology. PS Series arrays are based on a virtualized modular storage architecture, allowing your customers to purchase from you only the storage they need, when they need it.

The PS4100 and PS6100 represent the latest in the line of EqualLogic iSCSI storage arrays, both offering significant advances in capacity, performance, and reliability over their predecessors, the PS4000 and PS6000. These new platforms deliver:

  • Up to 67% IOPS improvements
  • Up to 3x storage pool performance
  • Up to 800% density improvement
  • Improved network bandwidth and reliability
  • Support for 2.5” drives


Hardware Differences

New to the EqualLogic line are 2U and 4U form factors, each providing unique benefits:

  • 2U, 24 × 2.5” 10K and 15K SAS drives – available with the PS4100X, XV, and the PS6100X, XV, XS, and S
  • 2U, 12 × 3.5” NL-SAS and 15K SAS drives – available with the PS4100XV* and E
  • 4U, 12 × 3.5” NL-SAS and 15K SAS drives – available with the PS6100XV* and E

Also new is a departure from the standard Xyratex rails in favor of Rapid Rail and Versa Rail options. Both are now available for square-hole and round-hole rack customers.

Single, non-redundant controller configurations are generally available in each model’s lowest capacity configuration.

Here’s a video briefly describing the new arrays!

VMware vSphere 5 vRAM Entitlement Changes

Earlier today I received an email from VMware regarding some new changes they’ve made to their new vSphere 5 licensing model.

As I’m sure you’re all aware, VMware created quite the stir in the enterprise market by announcing last month that they would be changing the way vSphere 5 is licensed.

They are migrating to more of a service-model license, which I personally don’t have a problem with. I would much rather IT departments take ownership of these systems and charge internal company departments for usage. It’s a much more efficient way of deploying resources when HR, WEB, SALES, etc. don’t own physical pieces of equipment.

But alas I digress.
Prior to vSphere 5, when most companies designed their virtual infrastructures, they would load hosts with as much memory as they could possibly afford at the time. Why? Because VMware licensed vSphere 4 on a per-CPU basis. While you were only allowed 12 cores per processor with an Enterprise Plus license, there was no restriction on how much RAM you could allocate to the host. And if you have any enterprise virtualization experience, you’ll know that RAM is the one resource that goes the quickest.

So in comes vSphere 5, with features everyone has been waiting for, like Storage I/O Control, up to 32 vCPUs per VM, and more. But now there’s a catch. Instead of licensing only per CPU, you now have to watch how much RAM you have assigned to powered-on virtual machines. This is what’s referred to as vRAM. Now, when you buy a socket license, a certain amount of vRAM entitlement is included with each license. If you had a server with 256 GB of RAM and wanted virtual machines on that host to use 100% of your available physical memory, then under the original vRAM entitlement numbers you’d have to purchase 6 licenses, whereas with vSphere 4 you would have only had to purchase 2. Now you see why the blogosphere was up in arms?
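To make that arithmetic concrete, here’s a quick back-of-the-napkin sketch in Python. The function name and structure are my own illustration, not anything from VMware; the 48 GB figure is the original Enterprise Plus entitlement, and the host is the 2-socket, 256 GB example from above:

```python
import math

def licenses_needed(sockets, powered_on_vram_gb, entitlement_gb_per_license):
    """You need one license per CPU socket, but your pooled vRAM
    entitlement must also cover all RAM assigned to powered-on VMs."""
    by_socket = sockets
    by_vram = math.ceil(powered_on_vram_gb / entitlement_gb_per_license)
    return max(by_socket, by_vram)

# 2-socket host, 256 GB assigned to powered-on VMs
print(licenses_needed(2, 256, 48))  # original 48 GB entitlement -> 6 licenses
print(licenses_needed(2, 256, 96))  # revised 96 GB entitlement -> 3 licenses
```

Under the old per-CPU model the same host needed only 2 licenses, which is exactly the jump people were upset about.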

Well today, VMware has come out and made things a little easier to swallow.

Here is a table of the new vRAM entitlements.

vSphere edition        Previous vRAM entitlement   New vRAM entitlement
vSphere Enterprise+    48 GB                       96 GB
vSphere Enterprise     32 GB                       64 GB
vSphere Standard       24 GB                       32 GB
vSphere Essentials+    24 GB                       32 GB
vSphere Essentials     24 GB                       32 GB

They’ve also changed how vRAM is calculated.

• We’ve capped the amount of vRAM we count in any given VM, so that no VM, not even the “monster” 1TB vRAM VM, would cost more than one vSphere Enterprise Plus license. This change also aligns with our goal to make vSphere 5 the best platform for running Tier 1 applications.
• We’ve adjusted our model to be much more flexible around transient workloads and the short-term spikes that are typical in test & development environments, for example. We will now calculate a 12-month average of consumed vRAM rather than tracking the high-water mark of vRAM.
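That second change matters more than it might look. Here’s a toy illustration of the difference between the two counting methods, assuming twelve hypothetical monthly samples of pooled vRAM consumption with a single test/dev spike (the numbers are made up for illustration, not VMware’s):

```python
# Hypothetical monthly samples of pooled vRAM consumption, in GB,
# with one short-lived test/dev spike in month four.
monthly_vram_gb = [200, 210, 205, 500, 220, 215, 210, 205, 200, 210, 215, 210]

# Old method: you were billed against the highest point you ever hit.
high_water_mark = max(monthly_vram_gb)

# New method: a rolling 12-month average smooths out transient spikes.
twelve_month_avg = sum(monthly_vram_gb) / len(monthly_vram_gb)

print(high_water_mark)               # 500
print(round(twelve_month_avg, 1))    # 233.3
```

One brief spike no longer sets your entitlement requirement for the whole year.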

Overall, I’m glad VMware was able to admit to their mistake, and try to make it right by their partners and customers by increasing entitlements for all editions.

I still don’t know where they got their initial figures for the vRAM entitlements; I can only guess that they pulled them out of a hat. I mean really, 48 GB per CPU socket is all you’re going to give me, when I can purchase a Dell R910 with 1 TB of RAM capacity and 4 CPU sockets? And as I mentioned before, it’s vRAM, not physical RAM. But a large number of my customers are using 80% or more of their current physical RAM capacity, and without this most recent change they were looking at pretty hefty license increases for vSphere 5.

If you have any questions feel free to post them in the comments!

AOC-USAS-H4iR firmware download

Yes, this is a post about being able to download updated firmware for the Supermicro AOC-USAS-H4iR.

Recently I re-purposed a craptacular storage array from a company called Wasabi Systems into a Windows Storage Server 2008 R2 array.

But in my quest to do so, I wanted to make sure I had all of the updated drivers and such for the miscellaneous components inside. The LSI-based AOC-USAS-H4iR was by far the biggest pain in my ass to track down downloads for.

So without further ado, I give to you, SAS MegaRAID Firmware Release for MegaRAID 1078 Controllers – 11.0.1-0039 (EF-P25) : http://www.mattlestock.com/Downloads/LSI-11.0.1-0039_SAS_FW_Image_APP-1.40.252-1113.zip


Deleting VMware Snapshots is Slow

So in the land of VMware, and most other hypervisors, we have a great technology called “snapshots”.
You’ve probably landed on this page because when you try to delete a snapshot, it’s taking forever.
If that’s the case, I have to ask… How long have you had this snapshot? And how much data has been written since it was taken?

To understand how snapshots operate, it’s important to understand the composition of your average virtual machine. To be fair, various virtualization architectures exist, but VMware’s is fairly straightforward. Every virtual machine consists of two parts: a *.vmx (the configuration file) and a *.vmdk (the virtual disk). You’ll fairly frequently see other components, but in the end, if you do not have a *.vmx and a *.vmdk, you don’t have a virtual machine. As we dive a little deeper, the *.vmdk consists of two parts: