Cloning with Customization Specifications

Being back in a VMware environment, there are a few differences I need to remember from Hyper-V and System Center.  It is not that one is better or worse than the other, but they are certainly different.

Customization Specifications are a great addition to cloning virtual machines in vCenter.  They allow you to name the VM, join domains, and in short set the OOBE (Out of Box Experience) of Windows.  They just make life easier.

The problem is, they do a lot of the same things as Microsoft’s deployment tools… but they do them differently.  We have to remember that Microsoft owns the OS, so when you use Microsoft’s deployment tools, they inject a lot of the information into the OS before first boot.  Customization Specifications work just like answer files… they require a boot-up (or two) to run their scripts… and while those boots are interactive sessions, you should be careful about what you do in them.  They will allow you to do all sorts of things, but then, when they are ready, they will perform the next step: a reboot.
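
For context, here is what a clone with a Customization Specification looks like from the command line.  This is a minimal PowerCLI sketch, assuming VMware PowerCLI is installed; the vCenter, template, spec, and host names are all hypothetical:

Connect-VIServer -Server vcenter.contoso.com
# Clone a new VM from a template and apply an existing Customization Specification;
# the spec’s scripts run during the first boot(s) of the new VM.
New-VM -Name Web01 -Template (Get-Template -Name 'Win2012R2-Template') -OSCustomizationSpec (Get-OSCustomizationSpec -Name 'JoinDomain-Spec') -VMHost esxi01.contoso.com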

I am not saying that you shouldn’t use Customization Specifications… I love the way they work, and will continue to use them.  Just watch out for those little hiccoughs before you go 🙂

A response to a VMware article… written by someone I respect.

While he may not be very well known to the Microsoft community, Mike Laverick is a legend in VMware circles.  Mike owns a blog called RTFM Education, a source of white papers on VMware technology, although he did start out as a Microsoft Certified Trainer.  He now works for VMware as a Senior Cloud Infrastructure Evangelist.  I was very happy to read on his blog that he has decided to try learning Hyper-V and Microsoft’s Private Cloud.  Unfortunately, from what I can tell, he was still thinking way too VMware, rather than trying to learn the Microsoft way of doing things.

(To read the article, follow this link: http://www.mikelaverick.com/2013/10/i-cant-get-no-validation-windows-hyper-v-r2eality-fail-over-clustering/)

This is a problem that I see all the time, and it goes both ways.  When I was teaching vSphere Infrastructure classes, my Microsoft-focused students had a hard time getting out of the Microsoft mindset.  When I teach Microsoft courses, my VMware students have the same problem going the other direction.  It would be much easier if people would open their minds and just let the technology flow… but then I have been a Star Wars fan for too long, so I believe in that sort of thing.

I found several points of the article quite amusing.  Mike opens the article with a picture and quote from the book Windows NT Microsoft Cluster Server.  The first words that he actually types are ‘Mmm, so much has changed since then or has it?’  I am sorry Mike, but to even insinuate that Microsoft Clustering in Windows Server 2012 R2 is anywhere near the disaster that was clustering in Windows NT (or Windows 2000, or Windows Server 2003) is a joke.  Yes, you have to have the proper pieces in place, and yes, you have to configure it properly.  You even have to spend a little time learning Microsoft Clustering and how it works.  If you were to spend thirty minutes with someone like me, I’d say you’d be good.

Also, I know you don’t like that you have to install the Failover Clustering feature on all of the servers before you can create your cluster.  However, please remember that unlike a pure hypervisor, Windows Server is an operating system that does many things for many people.  Installing all of the possible features out of the box would be a ridiculous notion: for one thing, it would triple the footprint and multiply exponentially the attack surface of Windows Server… to say nothing of running code you don’t need, which consumes resources.

To save time, I recommend the following PowerShell cmdlets:

Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName MyServer1
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName MyServer2
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName MyServer3
New-Cluster -Name MyCluster -Node MyServer1, MyServer2, MyServer3 -StaticAddress 172.17.10.5

(You could run the install against a whole list of servers in one line; -ComputerName does not take wildcards, but piping the names into ForEach-Object works, as the sketch below shows.  That is not the point of the article, though.)
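
For the curious, a minimal sketch of that one-liner, using the same three server names:

# Pipe the server names through ForEach-Object, installing the feature on each in turn.
'MyServer1', 'MyServer2', 'MyServer3' | ForEach-Object { Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName $_ }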

The point of this article is not to tear Mike’s article apart – for one thing, he is probably doing better on Microsoft technology than I would have when I was new to VMware; for another, I have great respect for him, both as a person and as an IT Pro.  I just find it amusing that a VMware evangelist is struggling to learn Hyper-V and System Center, just as so many of the Microsoft evangelists have been struggling to learn VMware.  There is a huge learning curve to be sure… no matter which way you go.

While I am reasonably fluent and certified in both technologies, there is no question that I favour Microsoft… just as Mike favours VMware.  I am glad to see that he is trying to learn Microsoft though… even though some of the ways he is going about it may be questionable.

The one thing that I will point out though is that Mike is right… there are two ways of building a Microsoft cluster – you can use the Failover Cluster Manager, or you can use System Center VMM.  Mike points out that these technologies would do well to communicate better.  I agree, and recommend that users pick one or the other.  I would also like to point out that in vCenter Server you can create a cluster, but if you are only using ESXi (VMware’s hypervisor) without vCenter Server there is no way to create a cluster… the technology is simply not supported unless you pay for it.  Score one for Microsoft.

Mike, on a personal note, I would love to sit with you and show you the vastness of System Center and Microsoft’s Private Cloud one day.  Geography seems to work against us, as you are (I believe) in Scotland, and I am in Japan.  There is a catch though… I will gladly teach you Microsoft’s virtualization stack from top to bottom… but I want you to do the same for me with the vSphere stack.  I know the technology and am certified, but I would cherish the opportunity to relearn it from you, as I have followed your articles with reverence for many years.

If you ever do care to take me up on the offer Mike, my email address is mitch@garvis.ca.  Drop me a line, we’ll figure it out.  I suspect that we would both be able to write some great articles following those sessions, and we would both have newfound respect for the other’s technology of choice.

It’s Coming… Can we now compare Hyper-V with vSphere as both new products prepare for launch?

On July 5th I published an article titled A Response to VMware’s ‘Get the Facts’ page comparing vSphere to Hyper-V & System Center. In the seven weeks since it went live it has become the 4th most read article I have ever published (in seven years as a blogger), as well as being by far the most commented on, discussed, and shared article I have ever written.

André Andriolli, a former VMware field engineer and now a Systems Engineer Manager with VMware in Brazil, responded very well.  One of the first points he made was:

we should start by comparing what’s in the market TODAY with what’s in the market today: I mean vSphere 5 versus Hyper-V 2, or vSphere 5.1 with Hyper-V 3. Since vSphere 5.1 news are not in the street yet, we should go with the first. Comparing a future MSFT release with what VMware customers are running for over 1 year now is simply not fair, to me at least.

While I did not entirely agree with this at the time, I accept that it is a valid point.  I am looking forward to hearing comments in the next few weeks though… as Windows Server 2012 (with Hyper-V 3.0) becomes generally available on September 4th, and vSphere 5.1 becomes available on September 11th.

My opinion is simple… VMware still makes a great product, but so does Microsoft; the benefits of the former, in my opinion (and that of many VMware customers I have spoken with), simply are not worth the difference in cost over the latter.  While it will be a relief that VMware is abandoning their Virtual Memory Entitlements (commonly referred to as the Memory Tax), I think the last year will have left a sour note with a lot of their customers, and given them an opportunity to see for themselves just how good Hyper-V really is.

I do like the fact that both platforms are being released at the same time though; I once made a comment (which I regretted right away) that of course one would always be ahead of the other: one would come out with a new feature, the other would take that feature and include it in its next release along with whatever else it was planning, and that would continue on.  For the next year the two will be compared as equals.

Now, this is one place where VMware has a slight advantage… insofar as they have a one-year product cycle, while Windows Server has a three-year product cycle.  This was adjusted last year when Microsoft took the rare step of adding new (and major) functionality into Service Pack 1 of Windows Server 2008 R2.  For now, frankly, I am not sure that pound for pound Hyper-V (with System Center) is not already the better product.  I guess we will find out what the market says though…

If you are in Toronto, we would love for you to join us for the Windows Server 2012 Launch Event on September 5th, or if you are in another city across Canada, later in the month.  Check out Ruth Morton’s blog to see the dates, and to click to register.  We hope to see you there!

Virtual Hard Disks: Best Practices for dynamic / thin provisioned disks and other stories…

Virtual environments offer us many new options with regard to storage.  Whether you are using Hyper-V or VMware, a few of the options (and recommendations) are the same, even when the terms are different.

One of the options I am often asked about is dynamically expanding hard drives.  My first reaction is that I am against them… and I have very good reasons for that, although there are caveats that can make them acceptable for a production environment.

Hyper-V has dynamically expanding disks, vSphere has thin provisioned disks.  They both do the same thing – rather than creating a file equal in size to the disk you are defining (say, 20GB on disk for a 20GB VHD), the platform creates a tiny file that grows as data is written to it, up to the limit set in the configuration.
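
In Hyper-V terms the difference comes down to a single switch on the cmdlet; a minimal sketch, assuming the Hyper-V PowerShell module (the paths are hypothetical):

# Dynamically expanding: the file starts tiny and grows toward 20GB as data is written.
New-VHD -Path D:\VHDs\DynamicDisk.vhdx -SizeBytes 20GB -Dynamic
# Fixed (the thick/static equivalent): the full 20GB is allocated up front.
New-VHD -Path D:\VHDs\FixedDisk.vhdx -SizeBytes 20GB -Fixed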

Dynamically expanding disks make our VMs much more easily transported… it is quicker and easier to move a 6GB file than a 20GB file.  However, in our server environments, transportability is seldom a factor.  Performance and stability are.

The performance issue is not really a factor anymore… although the dynamic expansion of a file must on some level take resources, both Microsoft and VMware have made their virtual disks so efficient that the difference is negligible.  As for stability, I rarely see virtual disks corrupting because of expansion… unless they expand onto corrupt sectors on the physical drive, and even then the drives and operating systems are so good these days that this seldom happens.

There are two main reasons I provision my virtual disks as thick/static disks:

  1. Storage is cheap… at least, it is cheaper than downtime.  If you need 5TB of storage for your virtual environment, you should buy it.  That way you know what you have, and don’t have to worry about a situation where your virtual disk file grows and fills the volume on which it is stored, which would prevent it (and any other dynamically expanding disks on the same volume) from growing, causing the virtual machines that depend on them to crash.
  2. Files fragment as they grow.  Fragmentation is bad.  Need I say more?

Now both of these issues do have mitigations that would, when necessary, allow you to implement dynamic/thin disks without risking these problems.  In fact, one of these mitigations will actually solve both issues:

  1. Under-provision your storage.  While this sounds a lot like the old ‘Hey Doc, it hurts when I do this…’ ‘So don’t do that!’ joke, it is true.  If you have two virtual disks that can each expand to 35GB residing on a 72GB volume, there is no threat of them filling the space (and it is easy to sanity-check, as the sketch after this list shows).  This does not address the fragmentation issue though.
  2. Put each virtual disk file onto its own volume.  This mitigates both issues.  Unfortunately it also means that you are limiting the number of virtual disks you have to the number of hard drives (or LUNs) that you have.  Fortunately there is a mitigation for that too: partition! Bad for Quebec, but good for maximizing the number of virtual disks you can store segregated on a single drive.
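
As promised, here is a minimal sketch of that under-provisioning sanity check.  It assumes the Hyper-V PowerShell module and a D: volume holding your .vhdx files in D:\VHDs (both assumptions, adjust to taste):

# Compare the fully expanded size of every virtual disk on the volume
# against the capacity of the volume itself.
$volume = Get-Volume -DriveLetter D
$maxSize = (Get-ChildItem D:\VHDs\*.vhdx | ForEach-Object { Get-VHD -Path $_.FullName } | Measure-Object -Property Size -Sum).Sum
if ($maxSize -gt $volume.Size) { Write-Warning 'The dynamic disks on D: could outgrow the volume.' }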

When speed is a factor…

The partition plan does not address one more issue – the performance issue.  Mitch’s Rule #11 is very clear: more spindles = more speed = more better.  I have said this in class and on stage a thousand times, and I firmly believe it.  Every virtual disk that you spin up on a hard drive must share that drive’s RPMs (and therefore its speed) with all of the other virtual disks on that drive.  Partitioning the drive does not change the fact that the multiple partitions still exist on the same spindles.

There are two answers to this: more spindles, or fewer virtual disks.  In larger organizations storage costs are easily justified and therefore accepted.  Smaller organizations very often have to make do with what they have, or often buy the cheapest solution (external USB drives, etc…).  This is never a good idea, but if you are going to use external drives make sure you invest in the fastest bus available – in other words, USB 2.0 is not a good option, and USB 3.0 or eSATA are better bets. 

What other options do I have?

Of course, we have not discussed two great solutions, but they are going to be a little costlier:

  • Storage Area Networks (SANs).  SANs are great because they allow us to take several disks in a unit and partition the total rather than the individual drives, and then we can share them across multiple hosts.  Absolutely every organization considering virtualization, no matter how small, should invest in a SAN device. 

I say invest because they are not cheap… a decent entry-level SAN should cost around Five Thousand Dollars ($5,000.00).

  • Raw Device Mapping (RDM) / Pass-through disks (PTD). This is great functionality that ensures a virtual machine gets the best performance available from a drive by giving it exclusive access to it.  It eliminates the worry of performance degradation due to resource sharing.

The downside to this is that you cannot partition the drive, and while you can create an RDM/PTD to a SAN LUN (Logical Unit Number), that does not change the fact that you are limiting the number of drives you can have.  This functionality can also limit the portability of your virtual machines – neither Microsoft nor VMware prevents you from live migrating a VM with an RDM/PTD, but there is an added layer of complication.
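
On the Hyper-V side, attaching a pass-through disk comes down to handing the VM a physical disk number; a minimal sketch, again assuming the Hyper-V PowerShell module (the VM name and disk number are hypothetical):

# The physical disk must be offline on the host before a VM can own it exclusively.
Set-Disk -Number 2 -IsOffline $true
# Attach physical disk 2 directly to the VM, bypassing the virtual disk layer entirely.
Add-VMHardDiskDrive -VMName SQLVM01 -ControllerType SCSI -DiskNumber 2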

Conclusion

While planning for resources is important in a physical environment, it is infinitely more so in a virtual environment, where resources have to be shared and virtual machines have to co-exist on the same hardware.  The ability to plan your storage strategy begins with a good understanding of the technologies, as well as an understanding of best practices and recommendations from others who have been there before.  I still wouldn’t use dynamically expanding disks on my servers, but with more and more people asking me about it I wanted to make my reasons clear, as well as outline the steps I would take if I were in a position where I had no choice.

Now it’s up to you!

Virtualization Lessons–Both Positive and Negative!

As I sit in the back of the room for Microsoft Canada’s Virtualization Boot Camp Challenge today, I see that the lab environments that we are providing to the attendees actually mimic the setup I use for my Virtual Partner Technology Advisor (vPTA) sessions.  As such, I am seeing a lot of potential for attendees to learn a lot of great technologies, but there are a few lessons that they should know.  I outlined these in an article last year called ‘vPTA: What NOT to take away from my 1-day virtualization training.’  I urge all of the attendees (as well as all of you!) to click on the link and read the article. While a lot of the practices we use are fine for a test/lab environment, you should be aware of them before you try to implement them in your production environment!

I have written a bunch of other articles that are pertinent to the discussion… here are just some of those links:

How to get a head start on the NEW Management and Virtualization Competency

Layer 1 or Layer 2 Hypervisor? A common misconception of Hyper-V, and a brief explanation of the Parent Partition

Virtualization Infrastructure: Which platform is right for you?

Microsoft Virtualization Learning Resources

Hyper-V Training – 10215AE is now available in E-Learning!

Real Help in A Virtual World

Busting the Myth: You cannot cluster Windows Small Business Server

A follow-up to my article on configuring iSCSI initiator in Server Core & Hyper-V Server

A brief response to the vSphere 5 vs. Hyper-V question…

Gartner agrees with me… Hyper-V is for real!

Do you have your Virtcerts?

MCITP: Virtualization Administrator 2008 R2 (and other R2 Virt Certs)