Work Related Posts

Migrate RADIUS config from Windows 2003 IAS to Windows 2008 R2 NPS

Recently I was in the process of replacing a fair number of Windows domain controllers for a customer when an interesting issue was raised: how do we migrate from our existing Windows 2003 IAS-based RADIUS install to a new Windows 2008 R2-based NPS?  The problem was that they had about two dozen different devices authenticating against this particular RADIUS server, couldn't remember the secrets they had used for those devices, and didn't want to reconfigure all of the clients.

Enter the solution… iasmigreader.exe (built into Windows Server 2008 R2 and later), a command-line tool that exports the configuration settings of IAS on a computer running Windows Server 2003 to an Ias.txt file. This Ias.txt file is in a format that can be imported on an NPS server running Windows Server 2008 R2 with the command netsh nps import <path>\ias.txt. Cool, huh?!  Here's a step-by-step!

  1. Copy the iasmigreader.exe file from the following folder on your Windows Server 2008 R2 server:
    C:\Windows\winsxs\x86_microsoft-windows-n..n_service_migreader_31bf3856ad364e35_6.1.7600.16385_none_64707cf9c089e26b
  2. Paste the file onto the computer that is running Windows Server 2003 with IAS (the IAS server).
  3. On the IAS server, run the iasmigreader.exe file. (NOTE: if you've recently made a change to the IAS server's configuration, wait 5 minutes before running iasmigreader.exe.) This creates an Ias.txt file in the %windir%\system32\ias folder. If you are running a 64-bit version of Windows Server 2003, the Ias.txt file is created in the %windir%\syswow64\ias folder instead.
    Note: The exported Ias.txt file contains all of the shared secrets from the configuration, so make sure the file is stored in a secure location.
  4. Copy the Ias.txt file to a location on your Windows Server 2008 R2 NPS server.
  5. On the NPS server, run the netsh nps import command and pass the Ias.txt file you copied from the IAS server as the parameter. For example, at a command prompt, type: netsh nps import <path>\ias.txt (a consolidated sketch of the whole sequence is shown below).
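
For reference, here's roughly what the whole sequence looks like from the command line. Treat it as a sketch rather than something to paste verbatim: the winsxs folder name is the long one from step 1, and the C:\temp path and IAS-SERVER name are placeholders I've made up for illustration.

    rem On the Windows Server 2008 R2 server: grab the migration tool
    copy "C:\Windows\winsxs\<long folder name from step 1>\iasmigreader.exe" \\IAS-SERVER\c$\temp\

    rem On the Windows Server 2003 IAS server: run the tool to export the configuration
    cd /d C:\temp
    iasmigreader.exe
    rem The export is written to %windir%\system32\ias\ias.txt (%windir%\syswow64\ias on 64-bit)

    rem Back on the 2008 R2 NPS server, after copying ias.txt over:
    netsh nps import C:\temp\ias.txt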

Now when you open up the NPS MMC snap-in, you should see all of your configuration migrated!  The great thing is that all that's left to do is point your RADIUS clients to their new location, and everything should just work, because the secrets and individual device settings were all contained in that Ias.txt file.  Once you've confirmed the conversion is correct, remember to delete the Ias.txt file.
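
If you'd rather do a quick sanity check from the command line instead of clicking through the MMC, the same netsh context you used for the import can also dump the running configuration, which lists the RADIUS clients and policies so you can confirm they came across:

    netsh nps show config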

Hope this helps someone out!

Making the Business Case for Virtualization

I speak with a number of people every year who have never heard of virtualization, and quite frankly this is a point that saddens me as a technology enthusiast.  How can everyone in the world not know about the glorious wonders that await them if they virtualized their infrastructure?!  Haha, ok maybe that’s going a bit too far, but I think you get my point.

To date, I'd have to say that of the business decision makers I meet, only about 30 percent know what virtualization is.  On the flip side, of the people who are down in the trenches managing systems on a day-to-day basis, nearly 75 percent have either used some method of virtualization or at least heard of it.  My question is, why is there such a disparity between these two groups when it comes to something that can be so beneficial to an organization?

Well, that's what this post is about: educating decision makers about the numerous business benefits that virtualization solutions provide.


Virtualization History Lesson

Virtualization, as a noun, refers to technologies designed to provide a layer of abstraction between computer hardware systems and the software running on them. By providing a logical view of computing resources rather than a physical one, virtualization solutions make it possible to do a couple of very useful things: they can allow you, essentially, to trick your operating systems into thinking that a group of servers is a single pool of computing resources, and they can allow you to run multiple operating system installations simultaneously on a single machine, thereby greatly increasing the utilization of any one piece of hardware.

Virtualization has its origins in partitioning, which divides a single physical server into multiple logical servers. Once the physical server is divided, each logical server can run an operating system and applications independently. In the 90s, virtualization was used primarily to re-create end-user environments on a single piece of mainframe hardware. If you were an IT administrator who wanted to roll out new software but wanted to see how it would behave on, say, a Windows NT or a Linux machine, you used virtualization technologies to create the various user environments.

But with the advent of the x86 architecture and inexpensive PCs, virtualization faded and seemed to be little more than a fad of the mainframe era. It's fair to credit the recent rebirth of virtualization on the x86 architecture to the founders of the current market leader, VMware. However, VMware couldn't have done it alone, and I often credit Moore's Law with helping computing power reach a point where virtualization was once again a viable solution in the enterprise.

EqualLogic Multipathing Extension Module V1.1 for VMware® vSphere Early Production Access Kit (EPA)

FINALLY!
If you're like me and you manage EqualLogic arrays in a VMware vSphere environment, you may have noticed that you couldn't just upgrade to vSphere 5 if you had installed the EqualLogic Multipathing Extension Module (MEM plugin) under vSphere 4.

Today, Dell has finally released the much-anticipated update to their EqualLogic MEM plugin, which is now compatible with vSphere 5. Note, however, that when upgrading from ESX 4.x to 5.0, third-party modules are not carried over to the new operating system. After you have completed the operating system upgrade, you can install MEM 1.1.0, which supports ESX 5.0.
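
If you're wondering what the install actually looks like on an ESXi 5 host, here's a rough sketch using the generic offline-bundle method. The datastore path and bundle filename below are placeholders rather than the real names from Dell's package (which, if I recall correctly, also includes a setup.pl installer script), so treat the Installation and User Guide linked below as the authoritative procedure.

    # Install the MEM offline bundle on the ESXi 5 host (placeholder path and filename)
    esxcli software vib install -d /vmfs/volumes/datastore1/dell-eql-mem-offline-bundle.zip

    # Afterwards, confirm the EqualLogic path selection plugin shows up in the PSP list
    esxcli storage nmp psp list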

Here are some documents regarding the new release!

  • Multipath Extension Module Release Notes V1.1
  • Installation and User Guide V1.1
  • Fix List V1.1

And here are the notes from the download itself, outlining the features gained by installing the MEM plugin.

The EqualLogic Multipathing Extension Module (EqualLogic MEM) provides the following enhancements to the standard VMware vSphere multipathing functionality:

  • Automatic connection management
  • Automatic load balancing across multiple active paths
  • Increased bandwidth
  • Reduced network latency

New EqualLogic Arrays: PS4100, PS6100, FS7500

I wanted to post a quick update for those looking for info on the new series of EqualLogic arrays, which will integrate with Dell's Fluid Data Architecture.


Product Overview

The EqualLogic PS4100 and PS6100 continue the EqualLogic tradition of performance scaling linearly with capacity.  With up to 60% IOPS improvement on typical workloads over previous-generation EqualLogic arrays, the new arrays are designed to grow with data demands, all while providing simple management and seamless expansion using innovative Fluid Data™ technology.  PS Series arrays are based on a virtualized modular storage architecture, allowing customers to purchase only the storage they need, when they need it.  PS Series arrays also include SAN configuration features and capabilities that sense network connections, automatically build RAID sets, and conduct system health checks.  With the planned August 22, 2011 announcement of the EqualLogic PS4100 and PS6100 Series arrays, customers are offered a vast array of choices that scale linearly with capacity to meet their needs using Dell Fluid Data™ technologies.

 

EqualLogic PS6100 and PS4100

The EqualLogic PS4100 and PS6100 continue the EqualLogic tradition of performance scaling linearly with capacity, providing you with a vast array of choices to meet your customers' needs. With up to 67% IOPS improvement on typical workloads over previous-generation EqualLogic arrays, the new arrays are designed to grow with data demands, all while providing simple management and seamless expansion using innovative Fluid Data™ technology. PS Series arrays are based on a virtualized modular storage architecture, allowing your customers to purchase from you only the storage they need, when they need it.

The PS4100 and PS6100 represent the latest in the line of EqualLogic iSCSI storage arrays, both offering significant advances in capacity, performance, and reliability over their predecessors, the PS4000 and PS6000. These new platforms deliver:

  • Up to 67% IOPS improvements
  • Up to 3x storage pool performance
  • Up to 800% density improvement
  • Improved network bandwidth and reliability
  • Support for 2.5” drives


Hardware Differences

New to the EqualLogic line are 2U and 4U form factors, each providing unique benefits:

  • 2U qty 24 2.5” SAS 10k and 15k drives – available with PS4100X, XV, PS6100X, XV, XS, and S
  • 2U qty 12 3.5” NLSAS and SAS 15k drives – available with PS4100XV* and E
  • 4U qty 12 3.5” NLSAS and SAS 15k drives – available with PS6100XV* and E

Also new is the departure from the standard Xyratex rails in favor of Rapid and Versa rail options, both of which are now available for square-hole and round-hole rack customers.

A single, non-redundant controller option is generally available in each model's lowest-capacity configuration.

Here’s a video briefly describing the new arrays!

VMware vSphere 5 VRAM Entitlement Changes

Earlier today I received an email from VMware regarding some changes they've made to the new vSphere 5 licensing model.

As I'm sure you're all aware, VMware created quite the stir in the enterprise market by announcing last month that they would be changing the way vSphere 5 is licensed.

They are migrating to more of a service-model license, which I personally don't have a problem with.  I would much rather see IT departments take ownership of these systems and charge internal company departments for usage.  It's a much more efficient way of deploying resources when HR, WEB, SALES, etc. don't each own physical pieces of equipment.

But alas, I digress.
Prior to vSphere 5, when companies were designing their virtual infrastructures, they would load hosts with as much memory as they could possibly afford at the time.  Why?  Because VMware licensed vSphere 4 on a per-CPU basis.  While you were only allowed 12 cores per processor with an Enterprise Plus license, there was no restriction on how much RAM you could put in the host.  And if you have any enterprise virtualization experience, you'll know that RAM is the one resource that runs out the quickest.

So in comes vSphere 5, with features everyone has been waiting for like Storage I/O Control, up to 32 vCPUs per VM, and more.  But now there's a catch.  Instead of licensing only per CPU, you now have to watch how much RAM you have assigned to powered-on virtual machines.  This is what's referred to as VRAM.  Now, when you buy a socket license, a certain amount of VRAM entitlement is included with that license.  If you have a two-socket server with 256 GB of RAM and want to use 100% of your available physical memory for virtual machines on that host, then under the original VRAM entitlement numbers you'd have to purchase six Enterprise Plus licenses (256 GB ÷ 48 GB per license ≈ 5.3, rounded up to 6), whereas with vSphere 4 you would only have had to purchase two, one per socket.  Now you see why the blogosphere was up in arms?

Well today, VMware has come out and made things a little easier to swallow.

Here is a table of new VRAM entitlements.

vSphere edition        Previous vRAM entitlement   New vRAM entitlement
vSphere Enterprise+    48 GB                       96 GB
vSphere Enterprise     32 GB                       64 GB
vSphere Standard       24 GB                       32 GB
vSphere Essentials+    24 GB                       32 GB
vSphere Essentials     24 GB                       32 GB

They’ve also changed how VRAM is calculated.

  • We've capped the amount of vRAM we count in any given VM, so that no VM, not even the "monster" 1 TB vRAM VM, would cost more than one vSphere Enterprise Plus license. This change also aligns with our goal to make vSphere 5 the best platform for running Tier 1 applications.
  • We've adjusted our model to be much more flexible around transient workloads and the short-term spikes that are typical in test & development environments, for example. We will now calculate a 12-month average of consumed vRAM rather than tracking the high-water mark of vRAM.

Overall, I'm glad VMware was able to admit their mistake and try to make it right with their partners and customers by increasing entitlements for all editions.

I still don't know where they got their initial figures for the VRAM entitlements; I can only guess that they pulled them out of a hat.  I mean really, 48 GB per CPU socket is all you're going to give me when I can purchase a Dell R910 with 1 TB of RAM capacity and 4 CPU sockets?  And as I mentioned before, it's VRAM, not physical RAM, but a large number of my customers are using 80% or more of their current physical RAM capacity, and without this most recent change they were looking at pretty hefty license increases for vSphere 5.

If you have any questions feel free to post them in the comments!