Virtual environments offer us many new options with regard to storage. Whether you are using Hyper-V or VMware, a few of the options (and recommendations) are the same, even when the terms are different.
One of the options I am often asked about is dynamically expanding hard drives. My first reaction is that I am against them… and I have very good reasons for that. There are, however, caveats that can make them acceptable for a production environment.
Hyper-V has dynamically expanding disks; vSphere has thin-provisioned disks. They both do the same thing – rather than creating a file equal in size to the disk you are provisioning (say, 20GB on disk for a 20GB VHD), the hypervisor creates a tiny file that grows as data is written to it, up to the maximum size set in the configuration.
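You can see the same grow-as-you-write idea at the filesystem level with a sparse file. This is only a minimal sketch of the concept on a POSIX system (the file name is hypothetical), not how Hyper-V or VMware actually implement their formats:

```python
import os

path = "demo_dynamic.vhd"   # hypothetical file name, for illustration only

# Declare a 20GB "disk" without writing 20GB of data: seek to the last
# byte and write a single zero. On a POSIX filesystem this creates a
# sparse file -- the same grow-as-you-write idea behind dynamic/thin disks.
size = 20 * 1024**3
with open(path, "wb") as f:
    f.seek(size - 1)
    f.write(b"\0")

st = os.stat(path)
print(f"Apparent size:  {st.st_size / 1024**3:.1f} GB")       # 20.0 GB
print(f"Space consumed: {st.st_blocks * 512 / 1024:.0f} KB")  # a few KB

os.remove(path)   # clean up the demo file
```

The file claims 20GB but consumes almost nothing until real data lands in it, which is exactly the trade the virtual disk formats make.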
Dynamically expanding disks make our VMs much more easily transported… it is quicker and easier to move a 6GB file than a 20GB file. However, in our server environments transportability is seldom a factor. Performance and stability are.
The performance issue is not really a factor anymore… although the dynamic expansion of a file must on some level consume resources, both Microsoft and VMware have made their virtual disks so efficient that the difference is negligible. As for stability, I rarely see virtual disks corrupt because of expansion… unless they expand onto corrupt sectors on the physical drive, and even then drives and operating systems are so good these days that this seldom happens.
There are two main reasons I provision my virtual disks as thick/static disks:
- Storage is cheap… at least, it is cheaper than downtime. If you need 5TB of storage for your virtual environment, you should buy it. That way you know what you have, and you don't have to worry about a virtual disk file growing until it fills the volume on which it is stored, preventing it (and any other dynamically expanding disks on the same volume) from growing any further and causing the virtual machines that depend on them to crash.
- Files fragment as they grow. Fragmentation is bad. Need I say more?
Now both of these issues do have mitigations that would, when necessary, allow you to implement dynamic/thin disks without risking these problems. In fact, one of these mitigations will actually solve both issues:
- Under-provision your storage. While this sounds a lot like the old 'Hey Doc, it hurts when I do this…' 'So don't do that!' routine, it is true: if you have two virtual disks that can each expand to 35GB residing on a 72GB volume, there is no threat of them filling the space (the quick check sketched after this list makes the math explicit). This does not address the fragmentation issue, though.
- Put each virtual disk file onto its own volume. This mitigates both issues. Unfortunately it also means that you are limiting the number of virtual disks to the number of hard drives (or LUNs) that you have. Fortunately there is a mitigation for that too: partition! Bad for Quebec, but good for maximizing the number of virtual disks you can store segregated on a single drive.
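The under-provisioning math is simple enough to check in a few lines. A minimal sketch using the figures from the example above (the disk sizes and volume capacity are illustrative assumptions):

```python
GB = 1024**3

# Illustrative assumptions from the example above: two dynamic/thin disks
# with a 35GB maximum each, stored on a 72GB volume.
volume_capacity = 72 * GB
disk_max_sizes = [35 * GB, 35 * GB]

committed = sum(disk_max_sizes)
if committed <= volume_capacity:
    print("Under-provisioned: even fully expanded, the disks cannot fill the volume.")
else:
    over_gb = (committed - volume_capacity) / GB
    print(f"Overcommitted by {over_gb:.0f}GB: the disks can out-grow the volume.")
```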
When speed is a factor…
The partition plan does not address one more issue – the performance issue. Mitch's Rule #11 is very clear: more spindles = more speed = more better. I have said this in class and on stage a thousand times, and I firmly believe it. Every virtual disk that you spin up on a hard drive must share that drive's spindle (and therefore its speed) with all of the other virtual disks on that drive. Partitioning the drive does not change the fact that the multiple partitions still sit on the same spindles.
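To put rough numbers on Rule #11, assume a single 7200 RPM spindle delivers on the order of 100 random IOPS; that is an illustrative ballpark, not a benchmark:

```python
SPINDLE_IOPS = 100   # assumed ballpark for one 7200 RPM drive, not a benchmark

# Every virtual disk on a drive shares that drive's spindle; partitioning
# does not add speed, it just carves up the same budget.
for vdisks in (1, 2, 4, 8):
    print(f"{vdisks} virtual disk(s) on one spindle: ~{SPINDLE_IOPS / vdisks:.0f} IOPS each")

# More spindles = more speed: the budget scales with the number of drives.
for spindles in (2, 4, 8):
    print(f"{spindles} spindles in an array: ~{spindles * SPINDLE_IOPS} IOPS total")
```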
There are two answers to this: more spindles, or fewer virtual disks. In larger organizations storage costs are easily justified and therefore accepted. Smaller organizations very often have to make do with what they have, or buy the cheapest solution (external USB drives, etc.). This is never a good idea, but if you are going to use external drives, make sure you invest in the fastest bus available – in other words, USB 2.0 is not a good option; USB 3.0 or eSATA are better bets.
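A quick back-of-the-envelope comparison shows why the bus matters. The bus figures are the published theoretical signaling rates; the drive throughput is an assumed round number for a typical modern 7200 RPM drive:

```python
# Published theoretical signaling rates, in megabits per second.
buses_mbit = {"USB 2.0": 480, "eSATA (SATA II)": 3000, "USB 3.0": 5000}
DRIVE_MBPS = 100   # assumed sustained MB/s for a typical 7200 RPM drive

for name, mbit in sorted(buses_mbit.items(), key=lambda kv: kv[1]):
    mbps = mbit / 8   # megabits/s -> megabytes/s, ignoring protocol overhead
    verdict = "bottlenecks the drive" if mbps < DRIVE_MBPS else "keeps up"
    print(f"{name}: ~{mbps:.0f}MB/s theoretical -- {verdict}")
```

USB 2.0 tops out around 60MB/s before protocol overhead, below what a single modern drive can sustain, let alone several virtual disks sharing it.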
What other options do I have?
Of course, we have not discussed two great solutions, but they are going to be a little costlier:
- Storage Area Networks (SANs). SANs are great because they allow us to take several disks in a unit, partition the total capacity rather than the individual drives, and then share that storage across multiple hosts. Absolutely every organization considering virtualization, no matter how small, should invest in a SAN device. I say invest because they are not cheap… a decent entry-level SAN will cost around five thousand dollars ($5,000).
- Raw Device Mapping (RDM) / Pass-through disks (PTD). This is great functionality that ensures a virtual machine gets the best performance available from a drive by giving it exclusive access, which eliminates any worry of performance degradation due to resource sharing. The downside is that you cannot partition the drive, and while you can create an RDM/PTD to a SAN LUN (Logical Unit Number), that does not change the fact that you are limiting the number of drives you can have. This functionality can also limit the portability of your virtual machines – neither Microsoft nor VMware prevents you from live migrating a VM with an RDM/PTD, but there is an added layer of complication.
Conclusion
While planning for resources is important in a physical environment, it is infinitely more so in a virtual environment, where resources have to be shared and virtual machines have to co-exist on the same hardware. A good storage strategy begins with a solid understanding of the technologies, as well as of the best practices and recommendations of others who have been there before. I still wouldn't use dynamically expanding disks on my servers, but with more and more people asking me about them, I wanted to make my reasons clear, as well as outline the steps I would take if I were in a position where I had no choice.
Now it’s up to you!