Shared Nothing Live Migration: Goodbye Shared Storage?

This article was originally written for the Canadian IT Pro Connection.

Many smaller companies and individuals with home labs see shared storage – usually a SAN (Storage Area Network) device – as the impediment to Live Migration.  In April of 2011 Microsoft released the iSCSI Software Target 3.3 as a free (and supported) download.  At the time Pierre and I wrote a series of articles in this space as guest bloggers (The Microsoft iSCSI Software Target is now free, All for SAN and SAN for All!, Creating a SAN using Microsoft iSCSI Software Target 3.3, Creating HA VMs for Hyper-V with Failover Clustering using FREE Microsoft iSCSI Target 3.3).  It seems that those articles were so well liked that Pierre and I are now the resident technical bloggers for this space!

Ok, but seriously… Software SANs make life easier for smaller companies with smaller environments.  The fact that you can now build a failover environment without investing in an expensive SAN is a great advancement for IT Professionals, and especially for those who want to do Live Migration.  Windows Server 2012 now includes the iSCSI Software Target out of the box, and IT Pros are taking full advantage.

Now let’s go one step further.  You have started to play with Hyper-V… or maybe you have a small environment built on a single host.  You get to the point where you are going to add a second host, but you are still not ready to create shared storage.  Are you stuck with two segregated hosts? Not anymore!

Shared Nothing Live Migration allows you to have VMs stored on local (direct attached) storage, and still be able to migrate them between hosts.  With absolutely no infrastructure other than two Hyper-V hosts (and the appropriate networking) you can now live migrate virtual machines between hosts.

Requirements

Any live migration, whether on Hyper-V or any other platform, has a number of requirements in order to work.

  • Both hosts must have processors in the same family – that is, you cannot live migrate from an Intel host to an AMD host or vice-versa.  If the processors are similar enough (i7 to i5 is fine, i7 to Core2 Duo is not) then no action is necessary.  If you do have dissimilar processors (newer and older, but still within the same family) then you have to configure your virtual machine’s CPU compatibility, as outlined in the article Getting Started with Hyper-V in Server 2012 and Windows 8.
  • If your virtual machine is connected to a virtual switch, then you need an identically named virtual switch on the destination host.  If not, your migration will be paused while you specify which switch to use on the destination server.
  • The two virtualization hosts must be connected by a reliable network.
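If you prefer the command line, the CPU compatibility requirement can also be handled from PowerShell.  This is a sketch that assumes the Hyper-V module on Windows Server 2012 and a VM named SWMI-DC2 (the name is taken from the screenshots below); the VM must be powered off first:

```powershell
# Enable processor compatibility mode so the VM can migrate between
# newer and older processors of the same family (VM must be off)
Set-VMProcessor -VMName "SWMI-DC2" -CompatibilityForMigrationEnabled $true

# Verify the setting took effect
Get-VMProcessor -VMName "SWMI-DC2" |
    Select-Object VMName, CompatibilityForMigrationEnabled
```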

Settings (Host)

In order to perform Live Migration you have to configure it in the Hyper-V Settings.

1) In Hyper-V Manager click Hyper-V Settings… in the Actions Pane.

image

2) In the Hyper-V Settings for the host, click on the Live Migrations tab on the left.  In the details pane ensure that the Enable incoming and outgoing live migrations box is checked, and that you have selected an option under Incoming live migrations.  In this screenshot you will see that I have left the default 2 Simultaneous live migrations, and that I selected the option to Use any available network for live migration.  Depending on your network configuration and bandwidth availability you can adjust these as you like.

image

NOTE: These steps must be performed on both hosts, although the configuration options do not have to be the same.
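The same host settings can be scripted, which is handy since they have to be applied on both hosts.  A sketch, assuming an elevated PowerShell session with the Hyper-V module on each host:

```powershell
# Enable incoming and outgoing live migrations on this host
Enable-VMMigration

# Allow 2 simultaneous live migrations (the default shown in the
# screenshot) and use any available network for live migration
Set-VMHost -MaximumVirtualMachineMigrations 2 -UseAnyNetworkForMigration $true
```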

Migrating a VM

Performing a Live Migration is easy.

1) In the Hyper-V Manager right-click on the virtual machine that you want to migrate and click Move…

NOTE: In this screenshot I am managing both hosts from the same MMC console.  This is NOT a requirement.

image

2) On the Before You Begin screen click Next>.

3) On the Choose Move Type screen select Move the virtual machine and click Next>.

4) On the Specify Destination Computer screen enter the name of the destination host and click Next>.  You also have the option to browse other hosts in Active Directory.

image

5) On the Choose Move Options screen select what you want to do with the virtual machine’s items (see screen capture).  I usually select the option Move the virtual machine’s data to a single location.  This option allows you to specify one location for all of the VM’s items, including configuration files, memory state, and virtual storage.  Click Next>.

image

6) On the Choose a new location for virtual machine screen enter (or browse to) the location on the destination host where you would like to move the VM.  This screen will also tell you how big your files are (note the Source Location in the screen capture says 9.5 GB).  Click Next> then on the Summary screen click Finish.

image

Now that your virtual machine migration is in progress you can watch it in two places: the progress bar on the Performing the Move screen, and the Status column in the Hyper-V Manager.

image

The one place where you would not be able to watch the progress is from within the virtual machine.  There is nothing to see.  If you are in the VM while the migration is happening there is no indication of it, and you (and all of your processes and networking) will be able to continue as normal.  The operating system within the VM itself has no concept that it is virtualized, and therefore has no concept that it is being moved.  Should the live migration fail (as has been known to happen) the VM would experience… nothing.  It would continue to work on the source host as if nothing had happened.  In fact the only time it ceases to work on the source host is when it is fully operational on the destination host.

image

Notice that the virtual machine SWMI-DC2, which we moved from SWMI-HOST5 to SWMI-HOST6, is now running as normal on the destination host.  You will see that the Uptime has been reset – that is because the uptime is tied to the VM on the host, and not to the uptime of the guest OS.
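The entire wizard above can also be collapsed into a single PowerShell command.  This sketch uses the host and VM names from the screenshots; the destination path is hypothetical:

```powershell
# Shared Nothing Live Migration in one step: move the running VM and
# all of its storage (config, memory state, VHDs) to the other host
Move-VM -Name "SWMI-DC2" -DestinationHost "SWMI-HOST6" `
        -IncludeStorage -DestinationStoragePath "D:\VMs\SWMI-DC2"
```

The -IncludeStorage and -DestinationStoragePath parameters correspond to the Move the virtual machine’s data to a single location option in the wizard.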

VIDEO!

Now that you understand how it works, why not watch the video of me performing a Shared Nothing Live Migration?  For the sake of good TV I cut out the three minutes of waiting while the migration ran, but everything else is in real time.  Check it out here:

Shared Nothing Live Migration–real time, minus a few minutes cut out of the Watching it happen phase.

Conclusion

Whether you have a small infrastructure and want to be able to live migrate between a couple of hosts, or you have a large infrastructure but still have VMs stored on direct-attached storage, Shared Nothing Live Migration is one of the new features in Windows Server 2012 that will make your virtualization tasks easier.  Remember that it is not a license to get rid of your SAN devices, but is a great (and easy) way to migrate DAS-attached VMs between hosts without any downtime.

When I’m Sixty-Four… TERABYTES!

Hard Disk Spindle (Photo credit: Fr3d.org)

Okay, I am asking for a show of hands: How many of you remember 100MB hard drives? 80? 40?  While I remember smaller, my first hard drive was a 20 Megabyte Seagate drive.  Note that I didn’t say Gigabytes…

Way back then the term Terabyte might have been coined already as a very theoretical term, but in the mid-80s most of us did not even have hard drives – we were happy enough if we had dual floppy drives to run our programs AND store our data.  We never thought that we could ever fill a gigabyte of storage, but were happier with hard drives than with floppies because they were less fragile (especially with so many magnets about).

Now of course we are in a much more enlightened age, where most of us need hundreds of gigabytes, if not more.  With storage requirements growing exponentially, the 2TB drives that we used to think were beyond the needs of all but the largest companies are now available to consumers, and corporations need to put several of those massive drives into SAN arrays to support their ever-growing database servers.

As our enterprise requirements grow, so must the technologies that we rely on.  That is why we were so proud to announce the new VHDX file format, Microsoft’s next-generation virtual hard disk format, which has by far the largest capacity of any virtualization technology on the market – a whopping 64 Terabytes.

Since Microsoft made this announcement a few months ago several IT Pros have asked me ‘Why on earth would I ever need a single drive to be that big?’  A fair question, and one that reminds me of the old quote attributed to Bill Gates, that none of us would ever need more than 640KB of RAM in our computers.  The truth is that big data is becoming the rule and not the exception.

Now let’s be clear… it may be a long time before you need 64TB on a single volume.  However, rather than questioning the new limit, let’s look at the previous one – 2TB.  Most of us likely won’t need 64TB any time soon; over the last couple of years, though, I have come across several companies who did not think they could virtualize their database servers because of 2.2TB databases.

Earlier this week I got an e-mail from a customer asking for help with a virtual to physical migration.  Knowing who he reached out to, this was an obvious cry for help.

‘Mitch we have our database running on a virtual machine, and it is running great, but we are about to outgrow our 2TB limitation on the drive, and we have to migrate onto physical storage.  We simply don’t have any other choice.’

As a Technical Evangelist my job is to win hearts and minds, as well as educate people about new technologies (as well as new ways to use the existing technologies that they have already invested in).  So when I read this request I had several alternate solutions for them that would allow them to maintain their virtual machine while they burst through that 2TB ‘limit’.

  1. The new VHDX file format shatters the limit, as we said.  In an upcoming article I will explain how to convert your existing VHD files to VHDX.  The one caveat: if you are using Boot from VHD with a Windows 7 (or Server 2008 R2) base, VHDX files are not supported.
  2. Storage Pools in Windows Server 2012 allow you to pool disks (physical or virtual) to create large drives.  They are easy to create and to add storage to on the fly.  I expect these will be among the most popular new features in Windows Server 2012.
  3. The iSCSI Software Target is now a built-in feature of Windows Server 2012, which means that not only can you create larger disks for the VM, you can also create large Storage Area Networks (SANs) on the host, adding VHDs as needed and presenting them as BitLocker-encrypted Cluster Shared Volumes (CSVs) – another new capability of the platform.
  4. New in Windows Server 2012, you can now create a virtual connection from a VM to a real Fibre Channel SAN LUN.  Your limit is as large a volume as you can create on the SAN – in other words, if you have the budget, your limit would be petabytes!
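As a teaser for that upcoming VHD-to-VHDX article, the conversion itself is a one-liner in PowerShell.  A sketch, with hypothetical file names; the VM using the disk must be shut down first:

```powershell
# Convert a legacy VHD to the new VHDX format (the source VM must be off)
Convert-VHD -Path "D:\VMs\Database.vhd" -DestinationPath "D:\VMs\Database.vhdx"

# Then, if needed, grow the new file past the old 2TB ceiling
Resize-VHD -Path "D:\VMs\Database.vhdx" -SizeBytes 4TB
```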

With all of these options available to us, the sky truly is the limit for our virtualization environments… Whether you opt for a VHDX file, Storage Pool, Software- or Hardware-SAN, Hyper-V on Windows Server 2012 has you covered.  And if none of these are quite right for you, then migrating your servers into an Azure VM in the cloud offers yet more options for the dynamic environment, without the capital expenses required for on-premises solutions.

Knowing all of this, there really is no longer any reason to do a V2P migration, although of course there are tools that can do that.  There is also no longer a good reason to invest in third-party virtualization platforms that limit your virtual hard disks to 2TB.

Adaptable storage the way you want it… just one more reason to pick Windows Server 2012!

Virtual Hard Disks: Best Practices for dynamic / thin provisioned disks and other stories…


Virtual environments offer us many new options with regard to storage.  Whether you are using Hyper-V or VMware, a few of the options (and recommendations) are the same, even when the terms are different.

One of the options I am often asked about is dynamically expanding hard drives.  My first reaction is that I am against them… and I have very good reasons for that, although there are caveats that can make them acceptable for a production environment.

Hyper-V has dynamically expanding disks; vSphere has thin-provisioned disks.  They both do the same thing – rather than creating a file equal in size to the disk you are creating (say, 20GB on disk for a 20GB VHD), the hypervisor creates a tiny file that grows as data is written to it, up to the limit set in the configuration.

Dynamically expanding disks make our VMs much more easily transported… it is quicker and easier to move a 6GB file than it is a 20GB file.  However in our server environments transportability is seldom a factor.  Performance and stability are.

The performance issue is not really a factor anymore… although the dynamic expansion of a file must on some level take resources, both Microsoft and VMware have made their virtual disks so efficient that the difference is negligible.  As for stability, I rarely see virtual disks corrupting because of expansion… unless they expand onto corrupt sectors of the physical drive, and even then the drives and operating systems are so good these days that it seldom happens.
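For reference, both disk types are created with the same PowerShell cmdlet in Windows Server 2012 – the difference is a single switch.  A sketch, with hypothetical paths:

```powershell
# Dynamically expanding: the file starts tiny and grows toward 20GB
New-VHD -Path "D:\VMs\Dynamic.vhdx" -SizeBytes 20GB -Dynamic

# Fixed (thick/static): the full 20GB is allocated up front
New-VHD -Path "D:\VMs\Fixed.vhdx" -SizeBytes 20GB -Fixed
```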

There are two main reasons I provision my virtual disks as thick/static disks:

  1. Storage is cheap… at least, it is cheaper than downtime.  If you need 5TB of storage for your virtual environment, you should buy it.  That way you know what you have, and you don’t have to worry about a situation where a virtual disk file grows and fills the volume on which it is stored – which would prevent it (and any other dynamically expanding disks on the same volume) from growing, causing the virtual machines that depend on them to crash.
  2. Files fragment as they grow.  Fragmentation is bad.  Need I say more?

Now both of these issues do have mitigations that would, when necessary, allow you to implement dynamic/thin disks without risking these problems.  In fact, one of these mitigations will actually solve both issues:

  1. Under-provision your storage.  While this sounds a lot like the ‘Hey Doc, it hurts when I do this…’ ‘So don’t do that!’ discussion, it is true: if you have two virtual disks that can each expand to 35GB residing on a 72GB volume, there is no threat of them filling the space.  This does not address the fragmentation issue, though.
  2. Put each virtual disk file onto its own volume.  This mitigates both issues.  Unfortunately it also means that you are limiting the number of virtual disks you have to the number of hard drives (or LUNs) that you have.  Fortunately there is a mitigation for that too: partition! Bad for Quebec, but good for maximizing the number of virtual disks you can store segregated on a single drive.

When speed is a factor…

The partition plan does not address one more issue – performance.  Mitch’s Rule #11 is very clear: more spindles = more speed = more better.  I have said this in class and on stage a thousand times, and I firmly believe it.  Every virtual disk that you spin up on a hard drive must share that drive’s RPMs (and therefore its speed) with all of the other virtual disks on that drive.  Partitioning the drive does not change the fact that the multiple partitions still exist on the same spindles.

There are two answers to this: more spindles, or fewer virtual disks.  In larger organizations storage costs are easily justified and therefore accepted.  Smaller organizations very often have to make do with what they have, or often buy the cheapest solution (external USB drives, etc…).  This is never a good idea, but if you are going to use external drives make sure you invest in the fastest bus available – in other words, USB 2.0 is not a good option, and USB 3.0 or eSATA are better bets. 

What other options do I have?

Of course, we have not discussed two great solutions, but they are going to be a little costlier:

  • Storage Area Networks (SANs).  SANs are great because they allow us to take several disks in a unit and partition the total rather than the individual drives, and then we can share them across multiple hosts.  Absolutely every organization considering virtualization, no matter how small, should invest in a SAN device. 

I say invest because they are not cheap… a decent entry-level SAN costs around five thousand dollars ($5,000).

  • Raw Device Mapping (RDM) / Pass-through disks (PTD).  This is a great functionality that ensures a virtual machine gets the best performance available from a drive by giving it exclusive access to it.  It eliminates the worry of performance degradation due to resource sharing.

The downside is that you cannot partition the drive, and while you can create an RDM/PTD to a SAN LUN (Logical Unit Number), that does not change the fact that you are limiting the number of drives you can have.  This functionality can also limit the portability of your virtual machines – neither Microsoft nor VMware prevents you from live migrating a VM with an RDM/PTD, but there is an added layer of complication.

Conclusion

While planning for resources is important in a physical environment, it is infinitely more so in a virtual environment, where resources have to be shared and virtual machines have to co-exist on the same hardware.  The ability to plan your storage strategy begins with a good understanding of the technologies, as well as an understanding of best practices and recommendations from others who have been there before.  I still wouldn’t use dynamically expanding disks on my servers, but with more and more people asking me about them I wanted to make my reasons clear, as well as outline the steps I would take if I were in a position where I had no choice.

Now it’s up to you!