Tag Archives: Hyper-V 2012
As a subject matter expert (SME) on virtualization, I was neither excited nor intimidated when Microsoft announced their new exam, 74-409: Server Virtualization with Windows Server Hyper-V and System Center. Unlike many previous exams I did not rush out to be the first to take it, nor was I going to wait forever. I actually thought about sitting the exam in Japan in December, but since I had trouble registering there and then got busy, I simply decided to use my visit to Canada to schedule the exam.
This is not the first exam that I have gone into without so much as a glance at the Overview or the Skills Measured section of the exam page on the Internet. I did not do any preparation whatsoever for the exam… as you may know I have spent much of the last five years living and breathing virtualization. This attitude very nearly came back to bite me in the exam room at the Learning Academy in Hamilton, Ontario Wednesday morning.
Having taught every Microsoft server virtualization course ever produced (and having written or tech-reviewed many of them) I should have known better. Virtualization is more than installing Hyper-V; it’s more than just System Center Virtual Machine Manager (VMM) and Operations Manager (OpsMgr). It is the entire Private Cloud strategy… and if you plan to sit this exam you had better have more than a passing understanding of System Center Service Manager (ServMgr), Data Protection Manager (DPM), and Orchestrator. Oh, and your knowledge should extend beyond a single, simple Hyper-V host.
I have long professed to my students that while DPM is Microsoft’s disaster recovery solution, when it comes down to it just make sure that your backup solution does everything that you need, and make sure to test it. While I stand behind that statement for production environments, it does not hold water when it comes to Microsoft certification exams. When two of the first few questions were on DPM I did a little silent gulp to myself… maybe I should have prepared a little better for this.
I do not use Service Manager… It’s not that I wouldn’t – I have a lot of good things to say about it. Heck, I even installed it as recently as yesterday – but I have not used it beyond a passing glance. The same used to be true of System Center Orchestrator, but over the last year that has changed a lot… I have integrated it into my courseware, and I have spent some time learning it and using it in production environments for repetitive tasks. While I am certainly not an expert in it, I am at least more than just familiar with it. That familiarity may have helped me on one exam question. Had I taken the time to review the exam page on the Microsoft Learning Experience website I would have known that the word Orchestrator does not appear anywhere on the page.
Here’s the problem with Microsoft exams… especially the newer ones that do not simply cover a product, but an entire solution across multiple suites. Very few of us will use and know every aspect covered on the exam. That is why I have always professed that no matter how familiar you may be with the primary technology covered, you should always review the exam page and fill in your knowledge gaps with the proper studying. You should even spend a few hours reviewing the material that you are pretty sure you do know. As I told my teenaged son when discussing his exams, rarely will you have easy exams… if you feel it was easy it just means you were sufficiently prepared. Five questions into today’s exam I regretted my blasé attitude towards it – I may be a virtualization expert, but I was not adequately prepared.
As I went through the exam I started to get into a groove… while there are some aspects of Hyper-V that I have not implemented, those are few and far between. The questions about VHDX files, Failover Clustering, Shared VHDX, Generation 2 VMs, and so many more came around and seemed almost too easy, but like I told my son, it just means I am familiar with the material. There were one or two questions which I considered to be very poorly worded, but I reread the questions and the answers and gave my best answer based on my understanding of them.
I have often described the time between pressing ‘End Exam’ and the appearance of the Results screen to be an extended period of excruciating forced lessons in patience. That was not the case today – I was surprised that the screen came up pretty quickly. While I certainly did not ace the exam, I did pass, and not with the bare minimum score. It was certainly a phew moment for a guy who considers himself pretty smart in virtualization.
Now here’s the question… is the exam a really tough one, or was I simply not prepared and thus considered it tough? And frankly, how tough could it have been if I didn’t prepare, and passed anyways? I suppose that makes two questions. The answer to both is that while I did not prepare for the exam, I am considered by many (including Microsoft) a SME on Hyper-V and System Center, and I can say with authority that it was a difficult exam. That leads to the next question: is it too tough? While I did give that some thought as I left the exam (my first words to the proctor were ‘Wow, that was a tough exam!’) I do not think it is unreasonably so. It will require a lot of preparation – not simply watching the MVA Jump Start videos (which are, by the way, excellent resources, and should be considered required watching for anyone planning to sit the exam). You will need to build your own environment, do a lot of reading and research, and possibly more.
If you do plan to sit this exam, make sure you visit the exam page first by clicking here. Make sure you expand and review the Overview and Skills Measured sections. If you review the Preparation Materials section it will refer you to a five-day course releasing next week from Microsoft Learning Experience – 20409A: Server Virtualization with Windows Server Hyper-V and System Center (5 Days). I am proud to say that I was involved with the creation of that course, and that it will help you immensely, not only with the exam but with your real-world experience.
Incidentally, passing the exam gives you the following cert: Microsoft Certified Specialist: Server Virtualization with Hyper-V and System Center.
Good luck, and go get em!
The IT Pro Evangelism team, Microsoft Learning and the Microsoft Virtual Academy are pleased to announce the next FREE & PUBLIC event, Live Q&A: Introduction to Hyper-V, on Wednesday April 3rd, from 8:30 am – 10:30 am PST with virtualization experts Jeff Woolsey & Symon Perriman.
Ask your customers to join this live online event designed for IT professionals that have questions about Microsoft virtualization and want to learn about Windows Server 2012 Hyper-V. Register here: http://aka.ms/MVAf-HyperV. If you cannot make the live event, sign up anyway so you can receive a notification when the recording is published on the Microsoft Virtual Academy.
Topics and demos may include:
· Introduction to Microsoft Virtualization
· Hyper-V Infrastructure
· Hyper-V Networking
· Hyper-V Storage
· Hyper-V Management
· Hyper-V High Availability and Live Migration
· Integration with System Center 2012 Virtual Machine Manager
· Integration with Other System Center 2012 Components
Microsoft has released a poster diagramming virtual networking in Hyper-V 2012. Much of it revolves around Virtual Machine Manager, and is actually branded System Center 2012 SP1. If you are building or managing datacenters – even smaller ones – you should download this document and review it. We all have something to learn from it!
The VMM networking poster is available for download here.
Now: If you are going to be at MMS, I am told that the Windows Server team will be giving out printed copies – I had one of the original Hyper-V environment posters and wore it out – it was my most referenced document for months!
If you are interested in evaluating Windows Server or System Center 2012 you can do so by clicking here.
In October, 2011 I posted an article called vPTA: What NOT to take away from my 1-day virtualization training! It was only partly tongue-in-cheek on the environment that I have been using for several years to demonstrate server virtualization from a pair of laptops. A few months later Damir Bersinic took that list and made some modifications, and published it on this blog as Things NOT To Take Away from the IT Virtualization Boot Camp. Because we spend so much time in our IT Camps demonstrating similar environments, I decided it was a good time to rewrite that article.
Normally when I revisit an article I would simply republish it. There are two reasons that I decided to rewrite this one from scratch:
- The improvements in Windows Server 2012, and
- My more official position at Microsoft Canada
Since writing that original article I have tried to revise my writing style so as to not offend some people… I am trying to be a resource to all IT Professionals in Canada, and to do that I want to eliminate a lot of the sarcasm that my older posts were replete with. At the same time there are points that I want to reinforce because of the severity of the consequences.
Creating a lab environment equivalent to Microsoft Canada’s IT Camps, with simple modifications:
1. In our IT Camps we provide the attendees with hardware to use for their labs. Depending on the camp attendees will work in teams on either one or two laptops. While this is fine for the Windows 8 camps, please remember that in your environment – even in a lab where possible – you should be using actual server hardware. With virtualization it is so simple to create a segregated lab environment on the same server as your production environment, using virtual switches and VLAN tagging. In environments where System Center 2012 has already been deployed it is easy enough to provision private clouds for your test/dev environments, but even without that it is a good idea. The laptops that we use for the IT Camps are great for the one- or two-day camps, but run them any longer than that and you risk a plethora of crashes that are easy enough to anticipate.
2. You should always have multiple domain controllers in any environment, production or otherwise. Depending on whom you ask, many professionals will tell you that at least one domain controller in your domain should be on a physical box (as opposed to a virtual machine). I am still not convinced that this does not fall into the category of ‘Legacy Thinking’ but there is certainly an argument to be made for this. Whether you go physical or virtual, you should never rely on a single domain controller. Likewise your domain controllers should be dedicated as such, and should not also be file or application servers.
3. I strongly recommend that shared storage for your virtualization hosts be implemented on Storage Area Networks (SANs). SAN devices are a great method of sharing data between nodes in a failover cluster. In Windows Server 2012 we have included the iSCSI Software Target that was previously an optional download (The Microsoft iSCSI Software Target is now free). While this is still not a good replacement for physical SANs, it is a fully supported solution for Windows Failover Cluster Services, including for Hyper-V virtual machine environments. It is even now recognized as an option for System Center 2012 private clouds. As well, the Storage Pools feature in the new Server is compelling to consider. However there are some caveats:
A. Both iSCSI software targets and Storage Pools rely on virtual storage (VHDX files) for their LUNs and Pools. While VHDX files are very stable, putting one VHDX file into another VHDX file is a bad idea… at least for long-term testing and especially for production environments. If you are going to use a software target or Storage Pool (which are both fully supported by Microsoft for production environments) it is strongly recommended that you put them onto physical hardware.
B. While Storage Pools are supported on any available drive architecture (including USB, SATA, etc…) the only architectures supported for clustered environments are iSCSI and SAS (Serial Attached SCSI). Do not try to build a production (or long-term test) cluster on inexpensive USB or SATA drives.
C. In our labs we use a lot of thin-provisioned (dynamically expanding, storage-on-demand) disks. While these are fully supported, it is not necessarily a best practice. Especially on drives where you may be storing multiple VHDX files you are simply asking for fragmentation issues.
4. If you are building a lab environment on a single host, you may run into troubles when trying to join your host to the domain. I am not saying that it will not work – as long as you have properly configured your virtual network it likely will – but there are a couple of things to remember. Make sure that your virtual domain controller is configured to Always Start rather than Always start if it was running when the service stopped. As well it is a good idea to configure a static IP address for the host, just in case your virtual DHCP server fails to start properly, or in a timely fashion.
5. Servers are meant to run. Shutting down your servers on a daily basis has not been a recommended practice for many years, and the way we do things – at the end of the camp we re-image our machines, pack them into a giant case and ship them to the next site – is a really bad idea. If you are able I strongly recommend leaving your lab servers running at all times.
6. While it is great to be able to demo server technologies, when at all possible you should leave your servers connected (and turned on) in one place. If you are able to bring your clients to you for demos that is ideal, but it is so easy these days to access servers remotely on even the most basic of Internet connections. If your company does not have a static IP address I would recommend using a dynamic DNS service (such as dyndns.com) with proper port-forwarding configured in your gateway router to access them remotely.
7. I am asked all the time how many network adapters you need for a proper server environment. I always answer ‘It depends.’ There are many factors to consider when building your hosts, and in a demo environment there are concessions you can make. However unless you have absolutely no choice it should be more than one. For a proper cluster configuration (excluding multi-pathing and redundancy) you should have a production network, a storage network, and a heartbeat network… and that is three just for the bare minimum. Some of these can share networks and NICs by configuring VLANs, but again, preferably only in lab environments. Before building your systems consider what you are willing to compromise on, and what is absolutely required. Then build your architectural plan and determine what hardware is required before making your purchase.
7a. While on the subject of networks, in our demo environment the two laptop-servers are connected to each other by a single RJ-45 cable. BUY SWITCHES… and the ones that are good enough for you to use at home are usually not good enough for your production environment!
8. When it is at all possible your storage network should be physically segregated from your production network. When physical segregation is not possible then at least separating the streams by using vLANs is strongly recommended. The first offers security as well as bandwidth management, the second only security.
9. Your laptop and desktop hardware are not good-enough substitutes for server-grade hardware. I know we mentioned this before, but I still feel it is important enough to state again.
10. In Windows Server 2008 R2 we were very adamant that snapshots, while handy in labs and testing, were a bad idea for your production environment. With the improvements to Hyper-V in Windows Server 2012 we can be a little less adamant, but remember that you cannot take a snapshot and forget about it. When you delete or apply a snapshot it will now merge the VHDX and AVHDX files live… but snapshots can still outgrow your volume so make sure that when you are finished with a snapshot you clean up after yourself.
11. Breaking any of these rules in a production environment is not just a bad idea, it would likely result in an RGE (Resume Generating Event). In other words, some of these can be serious enough for you to lose your job, lose customers, and possibly even get you sued. Follow the best practices though and you should be fine!
This article was originally published on the Canadian IT Pro Connection.
Some veteran IT Pros hear the term ‘Microsoft Clustering’ and their hearts start racing. That’s because once upon a time Microsoft Cluster Services was very difficult and complicated. In Windows Server 2008 it became much easier, and in Windows Server 2012 it is now available in all editions of the product, including Windows Server Standard. Owing to these two factors you are now seeing all sorts of organizations using Failover Clustering that would previously have shied away from it.
The service that we are seeing clustered most frequently in smaller organizations is Hyper-V virtual machines. That is because virtualization is another feature that is really taking off, and the low cost of virtualizing using Hyper-V makes it very attractive to these organizations.
In this article I am going to take you through the process of creating a failover cluster from two virtualization hosts that are connected to a single SAN (storage area network) device. However in Windows Server 2012 these are far from the limits. You can actually cluster up to sixty-four servers together in a single cluster. Once they are joined to the cluster we call them cluster nodes.
Failover Clustering in Windows Server 2012 allows us to create highly available virtual machines using a method called Active-Passive clustering. That means that your virtual machine is active on one cluster node, and the other nodes are only involved when the active node becomes unresponsive, or if a tool that is used to dynamically balance the workloads (such as System Center 2012 with Performance and Resource Optimization (PRO) Tips) initiates a migration.
In addition to using SAN disks for your shared storage, Windows Server 2012 also allows you to use Storage Pools. I explained Storage Pools and showed you how to create them in my article Storage Pools: Dive Right In! I also explained how to create a virtual SAN using Windows Server 2012 in my article iSCSI Storage in Windows Server 2012. For the sake of this article, we will use the simple SAN target that we created together in that article.
Step 1: Enabling Failover Clustering
Failover Clustering is a feature on Windows Server 2012. In order to enable it we will use the Add Roles and Features wizard.
1. From Server Manager click Manage, and then select Add Roles and Features.
2. On the Before you begin page click Next>
3. On the Select installation type page select Role-based or feature-based installation and click Next>
4. On the Select destination server page select the server onto which you will install the role, and click Next>
5. On the Select server roles page click Next>
6. On the Select features page select the checkbox Failover Clustering. A pop-up will appear asking you to confirm that you want to install the MMC console and management tools for Failover Clustering. Click Add Features. Click Next>
7. On the Confirm installation selections page click Install.
NOTE: You could also add the Failover Clustering feature to your server using PowerShell. The script would be:
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
If you want to install it to a remote server, you would use:
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName <servername>
That is all that we have to do to enable Failover Clustering in our hosts. Remember though, it does have to be done on each server that will be a member of our cluster.
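Since the feature has to be enabled on every future cluster node, a short loop from one console saves a few round trips. This is a minimal sketch – the node names are placeholders for your own servers, and it assumes remote management is enabled on each of them:

```powershell
# Enable Failover Clustering on every future cluster node from one console.
# HOST1 and HOST2 are placeholders - substitute your own server names.
$nodes = "HOST1", "HOST2"

foreach ($node in $nodes) {
    # -IncludeManagementTools also installs Failover Cluster Manager
    Install-WindowsFeature -Name Failover-Clustering `
        -IncludeManagementTools -ComputerName $node
}
```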
Step 2: Creating a Failover Cluster
Now that Failover Clustering has been enabled on the servers that we want to join to the cluster, we have to actually create the cluster. This step is easier than it ever was, although you should take care to follow the recommended guidelines. Always run the Validation Tests (all of them!), and allow Failover Cluster Manager to determine the best cluster configuration (Node Majority, Node and Disk Majority, etc…)
NOTE: The following steps have to be performed only once – not on each cluster node.
1. From Server Manager click Tools and select Failover Cluster Manager from the drop-down list.
2. In the details pane under Management click Create Cluster…
3. On the Before you begin page click Next>
4. On the Select Servers page enter the name of each server that you will add to the cluster and click Add. When all of your servers are listed click Next>
5. On the Validation Warning page ensure the Yes. When I click Next, run configuration validation tests, and then return to the process of creating the cluster radio is selected, then click Next>
6. On the Before You Begin page click Next>
7. On the Testing Options page ensure the Run all tests (recommended) radio is selected and then click Next>
8. On the Confirmation page click Next> to begin the validation process.
9. Once the validation process is complete you are prompted to name your cluster and assign an IP address. Do so now, making sure that your IP address is in the same subnet as your nodes.
NOTE: If you are not prompted to provide an IP address it is likely that your nodes have their IP Addresses assigned by DHCP.
10. On the Confirmation page make sure the checkbox Add all eligible storage is selected and click Next>. The cluster will now be created.
11. Click on Finish. In a few seconds your new cluster will appear in the Navigation Pane.
Step 3: Configuring your Failover Cluster
Now that your failover cluster has been created there are a couple of things we are going to verify. The first is in the main cluster screen. Near the top it should say the type of cluster you have.
If you created your cluster with an even number of nodes (and at least two shared drives) then the type should be Node and Disk Majority. A Microsoft cluster stays healthy as long as a majority (50% + 1) of votes can be counted, and every node has a vote. This means that if you have an even number of nodes (say ten) and half of them (five) go offline, your cluster goes down. With ten nodes you would have long since taken action, but imagine you have two nodes and one of them goes down… your entire cluster would go down with it. So Failover Clustering uses Node and Disk Majority – it takes the smallest drive shared by all nodes (I usually create a 1GB LUN) and configures it as the Quorum drive, giving it a vote… so if one of the nodes in your two-node cluster goes down, you still have a majority of votes, and your cluster stays on-line.
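You can confirm which quorum model the cluster chose without leaving PowerShell. A quick sketch – the cluster name is the example used elsewhere in this walkthrough, not a requirement:

```powershell
# Show the quorum configuration the cluster selected
Get-ClusterQuorum -Cluster "DemoCluster"

# List each node and its current state - every node shown here has a vote
Get-ClusterNode -Cluster "DemoCluster" | Format-Table Name, State
```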
The next thing that you want to check is your nodes. Expand the Nodes tree in the navigation pane and make sure that all of your nodes are up.
Once this is done you should check your storage. Expand the Storage tree in the navigation pane, and then expand Disks. If you followed my articles you should have two disks – one large one (mine is 140GB) and a small one (mine is 1GB). The smaller disk should be marked as assigned to Disk Witness in Quorum, and the larger disk will be assigned to Available Storage.
Cluster Shared Volumes (CSVs) were introduced in Windows Server 2008 R2. A CSV creates a contiguous namespace for your SAN LUNs on all of the nodes in your cluster. In other words, rather than having to ensure that all of your LUNs have the same drive letter on each node, CSVs create a link – a portal if you will – on your C: drive under the directory C:\ClusterStorage. Each LUN gets its own subdirectory – C:\ClusterStorage\Volume1, C:\ClusterStorage\Volume2, and so on. Using CSVs also means that you are no longer limited to a single VM per LUN, so you will likely need fewer LUNs.
CSVs are enabled by default, and all you have to do is right-click on any drive assigned to Available Storage, and click Add to Cluster Shared Volumes. It will only take a second to work.
NOTE: While CSVs create directories on your C drive that are completely navigable, it is never a good idea to use them for anything other than Hyper-V. No other use is supported.
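Adding a disk to Cluster Shared Volumes can likewise be scripted. The disk name below is whatever Failover Cluster Manager assigned to your large LUN – ‘Cluster Disk 1’ is only an assumption, so check yours first:

```powershell
# Promote the disk currently sitting in Available Storage to a CSV.
# 'Cluster Disk 1' is an example name - confirm yours with Get-ClusterResource.
Add-ClusterSharedVolume -Name "Cluster Disk 1"

# The LUN now appears on every node under C:\ClusterStorage
Get-ClusterSharedVolume
```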
Step 4: Creating a Highly Available Virtual Machine (HAVM)
Virtual machines are no different to Failover Cluster Manager than any other clustered role. As such, that is where we create them!
1. In the navigation pane of Failover Cluster Manager expand your cluster and click Roles.
2. In the Actions Pane click Virtual Machines… and click New Virtual Machine.
3. In the New Virtual Machine screen select the node on which you want to create the new VM and click OK.
The New Virtual Machine Wizard runs just like it would in Hyper-V Manager. The only thing you would do differently here is change the file locations for your VM and VHDX files. In the appropriate places ensure they are stored under C:\ClusterStorage\Volume1.
At this point your highly available virtual machine has been created, and can be failed over without delay!
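For those who prefer the console, the same HAVM can be sketched in PowerShell – create the VM directly on the Cluster Shared Volume, then hand it to the cluster. The VM name, memory, and disk size are illustrative only:

```powershell
# Create the VM with all of its files on the Cluster Shared Volume.
# Name, memory, and disk size are examples - adjust to taste.
New-VM -Name "DemoVM" -MemoryStartupBytes 1GB `
    -Path "C:\ClusterStorage\Volume1" `
    -NewVHDPath "C:\ClusterStorage\Volume1\DemoVM\DemoVM.vhdx" `
    -NewVHDSizeBytes 40GB

# Register it with the cluster so it becomes highly available
Add-ClusterVirtualMachineRole -VirtualMachine "DemoVM"
```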
Step 5: Making an existing virtual machine highly available
In all likelihood you are not starting from the ground up, and you probably have pre-existing virtual machines that you would like to add to the cluster. No problem… However before you go, you need to put the VM’s storage onto shared storage. Because Windows Server 2012 includes Live Storage Migration it is very easy to do:
1. In Hyper-V Manager right-click the virtual machine that you would like to make highly available and click Move…
2. In the Choose Move Type screen select the radio Move the virtual machine’s storage and click Next>
3. In the Choose Options for Moving Storage screen select the radio marked Move all of the virtual machine’s data to a single location and click Next>
4. In the Choose a new location for virtual machine type C:\ClusterStorage\Volume1 into the field. Alternately you could click Browse… and navigate to the shared file location. Then click Next>
5. On the Completing Move Wizard page verify your selections and click Finish.
Remember that moving a running VM’s storage can take a long time – the VHD or VHDX file could theoretically be huge, depending on the size you selected. Be patient; once it is done you can continue with the following steps.
6. In Failover Cluster Manager navigate to the Roles tab.
7. In the Actions Pane click Configure Role…
8. In the Select Role screen select Virtual Machine from the list and click Next>. This step can take a few minutes… be patient!
9. In the Select Virtual Machine screen select the virtual machine that you want to make highly available and click Next>
NOTE: A great improvement in Windows Server 2012 is the ability to make a VM highly available regardless of its state. In previous versions you needed to shut down the VM to do this… no more!
10. On the Confirmation screen click Next>
…That’s it! Your VM is now highly available. You can navigate to Nodes and see which server it is running on. You can also right-click on it, click Move, select Live Migration, and click Select Node. Select the node you want to move it to, and you will see it move before your very eyes… without any downtime.
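The whole of Step 5 collapses to two cmdlets if you prefer scripting it. A hedged sketch, with ‘DemoVM’ standing in for your existing machine:

```powershell
# Live Storage Migration: move the running VM's files onto shared storage
Move-VMStorage -VMName "DemoVM" `
    -DestinationStoragePath "C:\ClusterStorage\Volume1\DemoVM"

# Then make it highly available - no shutdown required in Server 2012
Add-ClusterVirtualMachineRole -VirtualMachine "DemoVM"
```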
What? There’s a Video??
Yes! We wanted you to read through all of this, but we also wrote it as a reference guide that you can refer to when you try to build it yourself. However, to make your life slightly easier, we also created a video for you and posted it online. Check it out!
For Extra Credit!
Now that you have added your virtualization hosts as nodes in a cluster, you will probably be creating more of your VMs on Cluster Shared Volumes than not. In the Hyper-V Settings you can change the default file locations for both your VMs and your VHDX files to C:\ClusterStorage\Volume1. This will prevent your having to enter them each time.
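Changing those defaults is a one-liner per host. A sketch – the paths assume your first CSV:

```powershell
# Point the host's default VM and VHDX locations at the first CSV
Set-VMHost -VirtualMachinePath "C:\ClusterStorage\Volume1" `
    -VirtualHardDiskPath "C:\ClusterStorage\Volume1"
```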
As well, the best way to create your VMs will be in the Failover Cluster Manager and not in Hyper-V Manager. FCM creates your VMs as HAVMs automatically, without your having to perform those extra steps.
Over the last few weeks we have demonstrated how to Create a Storage Pool, perform a Shared Nothing Live Migration, Create an iSCSI Software Target in Windows Server 2012, and finally how to create and configure Failover Clusters in Windows Server 2012. Now that you have all of this knowledge at your fingertips (Or at least the links to remind you of it!) you should be prepared to build your virtualization environment like a pro. Before you forget what we taught you, go ahead and do it. Try it out, make mistakes, and figure out what went wrong so that you can fix it. In due time you will be an expert in all of these topics, and will wonder how you ever lived without them. Good luck, and let us know how it goes for you!
This article was originally written for the Canadian IT Pro Connection.
Many smaller companies and individuals with home labs see shared storage – usually a SAN (Storage Area Network) device – as the impediment to Live Migration. In April of 2011 Microsoft released the iSCSI Software Target 3.3 as a free (and supported) download. At the time Pierre and I wrote a series of articles in this space as guest bloggers (The Microsoft iSCSI Software Target is now free, All for SAN and SAN for All!, Creating a SAN using Microsoft iSCSI Software Target 3.3, Creating HA VMs for Hyper-V with Failover Clustering using FREE Microsoft iSCSI Target 3.3). It seems that those articles were so well liked that Pierre and I are now the resident technical bloggers for this space!
Ok, but seriously… Software SANs make life easier for smaller companies with smaller environments. The fact that you can now build a failover environment without investing in an expensive SAN is a great advancement for IT Professionals, and especially for those who want to do Live Migration. Windows Server 2012 now includes the iSCSI Software Target out of the box, and IT Pros are taking full advantage.
Now let’s go one step further. You have started to play with Hyper-V… or maybe you have a small environment built on a single host. You get to the point where you are going to add a second host, but you are still not ready to create shared storage. Are you stuck with two segregated hosts? Not anymore!
Shared Nothing Live Migration allows you to have VMs stored on local (direct attached) storage, and still be able to migrate them between hosts. With absolutely no infrastructure other than two Hyper-V hosts (and the appropriate networking) you can now live migrate virtual machines between hosts.
Any live migration, whether on Hyper-V or any other platform, has a number of requirements in order to work.
- Both hosts must have chipsets in the same family – that is, you cannot live migrate from an Intel host to an AMD host or vice-versa. If the processors are similar enough (i7 to i5 is fine, i7 to Core2 Duo is not) then no action is necessary. In the event that you do have dissimilar processors (newer and older, but still within the same family), then you have to configure your virtual machine’s CPU compatibility, as outlined in the article Getting Started with Hyper-V in Server 2012 and Windows 8.
- If your virtual machine is connected to a virtual switch then you need to have an identically named virtual switch on the destination host. If not your migration will be paused while you specify which switch to use on the destination server.
- The two virtualization hosts must be connected by a reliable network.
In order to perform Live Migration you have to configure it in the Hyper-V Settings.
1) In Hyper-V Manager click Hyper-V Settings… in the Actions Pane.
2) In the Hyper-V Settings for the host, click on the Live Migrations tab on the left. In the details pane ensure that the Enable incoming and outgoing live migrations box is checked, and that you have selected an option under Incoming live migrations. In this screenshot you will see that I have left the default 2 Simultaneous live migrations, and that I selected the option to Use any available network for live migration. Depending on your network configuration and bandwidth availability you can adjust these as you like.
NOTE: These steps must be performed on both hosts, although the configuration options do not have to be the same.
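If you prefer the command line, the same configuration shown in the Hyper-V Settings dialog can be sketched in PowerShell. As with the GUI steps, remember to run this on both hosts:

```powershell
# Enable incoming and outgoing live migrations on this host.
Enable-VMMigration

# Use any available network for migration, and keep the default
# of 2 simultaneous live migrations.
Set-VMHost -UseAnyNetworkForMigration $true -MaximumVirtualMachineMigrations 2
```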
Migrating a VM
Performing a Live Migration is easy.
1) In the Hyper-V Manager right-click on the virtual machine that you want to migrate and click Move…
NOTE: In this screenshot I am managing both hosts from the same MMC console. This is NOT a requirement.
2) On the Before You Begin screen click Next>.
3) On the Choose Move Type screen select Move the virtual machine and click Next>.
4) On the Specify Destination Computer screen enter the name of the destination host and click Next>. You also have the option to browse other hosts in Active Directory.
5) On the Choose Move Options screen select what you want to do with the virtual machine’s items (see screen capture). I usually select the option Move the virtual machine’s data to a single location. This option allows you to specify one location for all of the VM’s items, including configuration files, memory state, and virtual storage. Click Next>.
6) On the Choose a new location for virtual machine screen enter (or browse to) the location on the destination host where you would like to move the VM. This screen will also tell you how big your files are (note the Source Location in the screen capture says 9.5 GB). Click Next> then on the Summary screen click Finish.
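The whole wizard can be condensed into a single PowerShell command. This sketch moves the running VM and all of its items (configuration files, memory state, and virtual storage) to one folder on the destination host – the names and path are examples from my lab:

```powershell
# Shared Nothing Live Migration of a running VM, moving all of its
# items to a single location on the destination host.
Move-VM -Name "SWMI-DC2" -DestinationHost "SWMI-HOST6" `
        -IncludeStorage -DestinationStoragePath "D:\Hyper-V\SWMI-DC2"
```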
Now that your virtual machine migration is in progress you can watch the progress in two places: in the Performing the Move dialog box, and in Hyper-V Manager under Status.
The one place where you would not be able to watch the progress is from within the virtual machine. There is nothing to see. If you are in the VM while the migration is happening there is no indication of it, and you (and all of your processes and networking) will be able to continue as normal. The operating system within the VM itself has no concept that it is virtualized, and therefore has no concept that it is being moved. Should the live migration fail (as has been known to happen) the VM would experience… nothing. It would continue to work on the source host as if nothing had happened. In fact the only time it ceases to work on the source host is when it is fully operational on the destination host.
Notice that the virtual machine SWMI-DC2, which we moved from SWMI-HOST5 to SWMI-HOST6, is now running as normal on the destination host. You will see that the Uptime is reset – that is because uptime is tracked for the VM on the host, and is not the uptime of the guest OS.
Now that you understand how it works, why not watch the video of me performing a Shared Nothing Live Migration. For the sake of good TV I cut out the three minutes of waiting while the migration ran, but everything else is in real time. Check it out here:
Whether you have a small infrastructure and want to be able to live migrate between a couple of hosts, or you have a large infrastructure but still have VMs stored on direct-attached storage, Shared Nothing Live Migration is one of the new features in Windows Server 2012 that will make your virtualization tasks easier. Remember that it is not a license to get rid of your SAN devices, but is a great (and easy) way to migrate DAS-attached VMs between hosts without any downtime.
Okay, I am asking for a show of hands: How many of you remember 100MB hard drives? 80? 40? While I remember smaller, my first hard drive was a 20 Megabyte Seagate drive. Note that I didn’t say Gigabytes…
Way back then the term terabyte may already have been coined as a purely theoretical measure, but in the mid-80s most of us did not even have hard drives – we were happy enough if we had dual floppy drives to run our programs AND store our data. We never thought that we could ever fill a gigabyte of storage, but we were happier with hard drives than with floppies because they were less fragile (especially with so many magnets about).
Now of course we are in a much more enlightened age, where most of us need hundreds of gigabytes, if not more. With storage requirements growing exponentially, the 2TB drives that we used to think were beyond the needs of all but the largest companies are now available for consumers, and corporations are needing to put several of those massive drives into SAN arrays to support the ever-growing database servers.
As our enterprise requirements grow, so must the technologies that we rely on. That is why we were so proud to announce the new VHDX file format, Microsoft’s next-generation virtual hard drive format, which has by far the largest capacity of any virtualization technology on the market – a whopping 64 terabytes.
Since Microsoft made this announcement a few months ago several IT Pros have asked me ‘Why on earth would I ever need a single drive to be that big?’ A fair question, and one that reminds me of the old quote attributed to Bill Gates, that none of us would ever need more than 640KB of RAM in our computers. The truth is that big data is becoming the rule and not the exception.
Now let’s be clear… it may be a long time before you need 64TB on a single volume. However, rather than questioning the new limit, let’s look at the previous one – 2TB. Most of us likely won’t need 64TB any time soon; however over the last couple of years I have come across several companies who did not think they could virtualize their database servers because those databases had grown past 2.2TB.
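To put the new format in perspective, creating a virtual disk well past the old ceiling is a one-liner. A minimal sketch – the path and size here are examples:

```powershell
# Create a 10 TB dynamically expanding VHDX – far beyond the 2 TB
# limit of the older VHD format.
New-VHD -Path "D:\VHDs\BigData.vhdx" -SizeBytes 10TB -Dynamic
```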
Earlier this week I got an e-mail from a customer asking for help with a virtual to physical migration. Knowing who he reached out to, this was an obvious cry for help.
‘Mitch we have our database running on a virtual machine, and it is running great, but we are about to outgrow our 2TB limitation on the drive, and we have to migrate onto physical storage. We simply don’t have any other choice.’
As a Technical Evangelist my job is to win hearts and minds, and to educate people about new technologies (as well as new ways to use the technologies they have already invested in). So when I read this request I had several alternate solutions for them that would allow them to maintain their virtual machine while they burst through that 2TB ‘limit’.
- The new VHDX file format shatters the limit, as we said. In an upcoming article I will explain how to convert your existing VHD files to VHDX. The one caveat: if you are using Boot from VHD from a Windows 7 (or Server 2008 R2) base then the VHDX files are not supported.
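While the full conversion walkthrough is coming in that upcoming article, the short version in PowerShell looks like this (the paths are examples, and the VHD must not be attached to a running VM):

```powershell
# Convert an existing VHD file to the VHDX format. The original
# file is left in place; delete it once you have verified the copy.
Convert-VHD -Path "D:\VHDs\Database.vhd" -DestinationPath "D:\VHDs\Database.vhdx"
```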
- Storage Pools in Windows Server 2012 allow you to pool disks (physical or virtual) to create large drives. They are easy to create and to add storage to on the fly. I expect these will be among the most popular new features in Windows Server 2012.
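Creating a pool really is that easy. Here is a sketch that pools every eligible disk and then carves a large resilient virtual disk out of it – the friendly names are placeholders:

```powershell
# Gather all physical disks that are eligible for pooling.
$disks = Get-PhysicalDisk -CanPool $true

# Create the pool on the local Storage Spaces subsystem.
New-StoragePool -FriendlyName "DataPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem -FriendlyName "*Spaces*").FriendlyName `
    -PhysicalDisks $disks

# Carve a mirrored virtual disk out of the pool, using all available space.
New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "DataDisk" `
    -UseMaximumSize -ResiliencySettingName Mirror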
- Software iSCSI Target is now a feature of Windows Server 2012, which means that not only can you create larger disks on the VM, you can also create large Storage Area Networks (SANs) on the host, adding VHDs as needed and giving access as BitLocker-encrypted Cluster Shared Volumes (CSVs), another new functionality of the new platform.
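Standing up that software SAN is a few commands once the feature is installed. A sketch, assuming an example initiator IQN, target name, and path:

```powershell
# Install the iSCSI Target Server feature, then create a target,
# back it with a virtual disk, and map the disk to the target.
Add-WindowsFeature FS-iSCSITarget-Server
New-IscsiServerTarget -TargetName "SQLTarget" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:sql01.contoso.com"
New-IscsiVirtualDisk -Path "D:\iSCSI\SQLData.vhd" -SizeBytes 4TB
Add-IscsiVirtualDiskTargetMapping -TargetName "SQLTarget" -Path "D:\iSCSI\SQLData.vhd"
```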
- New in Windows Server 2012, you can now create a virtual connection to a real Fibre Channel SAN LUN. Your limit is whatever volume size you can create on the SAN – in other words, if you have the budget, your limit would be petabytes!
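Virtual Fibre Channel is configured per host and per VM. A minimal sketch – the SAN name and World Wide Name values here are placeholders, and the host needs a physical Fibre Channel HBA:

```powershell
# Define a virtual SAN mapped to the host's Fibre Channel HBA,
# then give the VM a virtual HBA connected to that SAN.
New-VMSan -Name "ProductionSAN" -WorldWideNodeName "C003FF0000FFFF00" `
          -WorldWidePortName "C003FF5778E50002"
Add-VMFibreChannelHba -VMName "SWMI-DC2" -SanName "ProductionSAN"
```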
With all of these options available to us, the sky truly is the limit for our virtualization environments… Whether you opt for a VHDX file, Storage Pool, Software- or Hardware-SAN, Hyper-V on Windows Server 2012 has you covered. And if none of these are quite right for you, then migrating your servers into an Azure VM in the cloud offers yet more options for the dynamic environment, without the capital expenses required for on-premises solutions.
Knowing all of this, there really is no longer any reason to do a V2P migration, although of course there are tools that can do that. There is also no longer a good reason to invest in third-party virtualization platforms that limit your virtual hard disks to 2TB.
Adaptable storage the way you want it… just one more reason to pick Windows Server 2012!