Tag Archives: Servers
Now that I am back in a VMware environment, there are a few differences from Hyper-V and System Center that I need to remember. It is not that one is better or worse than the other; they are simply different.
Customization Specifications are a great companion in vCenter to cloning virtual machines. They allow you to name the VM, join a domain, and in short configure the OOBE (Out of Box Experience) of Windows. They just make life easier.
The problem is, they do a lot of the same things as Microsoft’s deployment tools… but they do them differently. We have to remember that Microsoft owns the OS, so when you use Microsoft’s deployment tools, they inject a lot of the information into the OS for first boot. Customization Specifications work just like answer files… they require a boot-up (or two) to run their scripts… and while those boots are interactive sessions, you should be careful about what you do in them. They will let you do all sorts of things, but when they are ready they will perform the next step – a reboot.
I am not saying that you shouldn’t use Customization Specifications… I love the way they work, and will continue to use them. Just watch out for those little hiccoughs before you go 🙂
This article was originally published on the Canadian IT Pro Connection.
Some veteran IT Pros hear the term ‘Microsoft Clustering’ and their hearts start racing. That’s because once upon a time Microsoft Cluster Services was very difficult and complicated. In Windows Server 2008 it became much easier, and in Windows Server 2012 it is now available in all editions of the product, including Windows Server Standard. Owing to these two factors you are now seeing all sorts of organizations using Failover Clustering that would previously have shied away from it.
The service that we are seeing clustered most frequently in smaller organizations is Hyper-V virtual machines. That is because virtualization is another feature that is really taking off, and the low cost of virtualizing using Hyper-V makes it very attractive to these organizations.
In this article I am going to take you through the process of creating a failover cluster from two virtualization hosts that are connected to a single SAN (storage area network) device. However, two nodes is far from the limit in Windows Server 2012: you can actually cluster up to sixty-four servers together in a single cluster. Once they are joined to the cluster we call them cluster nodes.
Failover Clustering in Windows Server 2012 allows us to create highly available virtual machines using a method called Active-Passive clustering. That means that your virtual machine is active on one cluster node, and the other nodes are only involved when the active node becomes unresponsive, or if a tool that is used to dynamically balance the workloads (such as System Center 2012 with Performance and Resource Optimization (PRO) Tips) initiates a migration.
In addition to using SAN disks for your shared storage, Windows Server 2012 also allows you to use Storage Pools. I explained Storage Pools and showed you how to create them in my article Storage Pools: Dive Right In! I also explained how to create a virtual SAN using Windows Server 2012 in my article iSCSI Storage in Windows Server 2012. For the sake of this article, we will use the simple SAN target that we created together in that article.
Step 1: Enabling Failover Clustering
Failover Clustering is a feature on Windows Server 2012. In order to enable it we will use the Add Roles and Features wizard.
1. From Server Manager click Manage, and then select Add Roles and Features.
2. On the Before you begin page click Next>
3. On the Select installation type page select Role-based or feature-based installation and click Next>
4. On the Select destination server page select the server onto which you will install the role, and click Next>
5. On the Select server roles page click Next>
6. On the Select features page select the checkbox Failover Clustering. A pop-up will appear asking you to confirm that you want to install the MMC console and management tools for Failover Clustering. Click Add Features. Click Next>
7. On the Confirm installation selections page click Install.
NOTE: You could also add the Failover Clustering feature to your server using PowerShell. The script would be:
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
If you want to install it to a remote server, you would use:
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName <servername>
That is all that we have to do to enable Failover Clustering in our hosts. Remember though, it does have to be done on each server that will be a member of our cluster.
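If you have several hosts to prepare, a short sketch like the following installs the feature on each of them in one pass. The node names HOST1 and HOST2 are placeholders for illustration:

```powershell
# Install the Failover Clustering feature on each prospective cluster node.
# HOST1 and HOST2 are placeholder names - substitute your own servers.
$nodes = 'HOST1','HOST2'

foreach ($node in $nodes) {
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName $node
}
```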
Step 2: Creating a Failover Cluster
Now that Failover Clustering has been enabled on the servers that we want to join to the cluster, we have to actually create the cluster. This step is easier than it ever was, although you should take care to follow the recommended guidelines. Always run the Validation Tests (all of them!), and allow Failover Cluster Manager to determine the best cluster configuration (Node Majority, Node and Disk Majority, and so on).
NOTE: The following steps have to be performed only once – not on each cluster node.
1. From Server Manager click Tools and select Failover Cluster Manager from the drop-down list.
2. In the details pane under Management click Create Cluster…
3. On the Before you begin page click Next>
4. On the Select Servers page enter the name of each server that you will add to the cluster and click Add. When all of your servers are listed click Next>
5. On the Validation Warning page ensure the Yes. When I click Next, run configuration validation tests, and then return to the process of creating the cluster radio is selected, then click Next>
6. On the Before You Begin page click Next>
7. On the Testing Options page ensure the Run all tests (recommended) radio is selected and then click Next>
8. On the Confirmation page click Next> to begin the validation process.
9. Once the validation process is complete you are prompted to name your cluster and assign an IP address. Do so now, making sure that your IP address is in the same subnet as your nodes.
NOTE: If you are not prompted to provide an IP address it is likely that your nodes have their IP Addresses assigned by DHCP.
10. On the Confirmation page make sure the checkbox Add all eligible storage is selected and click Next>. The cluster will now be created.
11. Click on Finish. In a few seconds your new cluster will appear in the Navigation Pane.
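The wizard steps above can also be sketched in PowerShell using the FailoverClusters module cmdlets. The node names, cluster name, and IP address below are placeholders; use values appropriate to your environment:

```powershell
# Validate the prospective nodes first - the equivalent of running all
# of the wizard's validation tests.
Test-Cluster -Node HOST1,HOST2

# Create the cluster. The name and static IP are examples; the address
# must be on the same subnet as the nodes.
New-Cluster -Name CLUSTER1 -Node HOST1,HOST2 -StaticAddress 192.168.1.50
```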
Step 3: Configuring your Failover Cluster
Now that your failover cluster has been created there are a couple of things we are going to verify. The first is on the main cluster screen: near the top it should display the quorum configuration of your cluster.
If you created your cluster with an even number of nodes (and at least two shared drives) then the configuration should be Node and Disk Majority. A Microsoft cluster maintains quorum as long as a majority (50% + 1) of the votes are present, and every node has a vote. This means that if you have an even number of nodes (say ten) and half of them (five) go offline, your cluster goes down. With ten nodes you would have long since taken action, but imagine you have two nodes and one of them goes down… your entire cluster would go down with it. So Failover Clustering uses Node and Disk Majority: it takes the smallest drive shared by all nodes (I usually create a 1GB LUN), configures it as the Quorum drive, and gives it a vote… so if one of the nodes in your two-node cluster goes down, you still have a majority of votes and your cluster stays on-line.
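If you prefer to confirm (or explicitly set) the quorum configuration from PowerShell, something like the following should work. The disk name is an assumption and will vary in your environment:

```powershell
# Show the current quorum configuration (e.g. Node and Disk Majority).
Get-ClusterQuorum

# Explicitly configure Node and Disk Majority with a specific witness disk.
# "Cluster Disk 1" is a placeholder for your small (e.g. 1GB) LUN.
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"
```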
The next thing that you want to check is your nodes. Expand the Nodes tree in the navigation pane and make sure that all of your nodes are up.
Once this is done you should check your storage. Expand the Storage tree in the navigation pane, and then expand Disks. If you followed my articles you should have two disks – one large one (mine is 140GB) and a small one (mine is 1GB). The smaller disk should be marked as assigned to Disk Witness in Quorum, and the larger disk will be assigned to Available Storage.
Cluster Shared Volumes (CSVs) were introduced in Windows Server 2008 R2. A CSV creates a consistent namespace for your SAN LUNs on all of the nodes in your cluster. In other words, rather than having to ensure that all of your LUNs have the same drive letter on each node, CSVs create a link – a portal, if you will – on your C: drive under the directory C:\ClusterStorage. Each LUN gets its own subdirectory – C:\ClusterStorage\Volume1, C:\ClusterStorage\Volume2, and so on. As well, using CSVs means that you are no longer limited to a single VM per LUN, so you will likely need fewer LUNs.
CSVs are enabled by default, and all you have to do is right-click on any drive assigned to Available Storage, and click Add to Cluster Shared Volumes. It will only take a second to work.
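From PowerShell the same step is a one-liner. The disk name below is a placeholder for whichever disk appears under Available Storage in your cluster:

```powershell
# List the cluster's physical disk resources to identify the one
# currently assigned to Available Storage.
Get-ClusterResource | Where-Object ResourceType -eq 'Physical Disk'

# Promote it to a Cluster Shared Volume; it will then appear as a
# subdirectory under C:\ClusterStorage.
Add-ClusterSharedVolume -Name "Cluster Disk 2"
```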
NOTE: While a CSV creates a directory on your C: drive that is completely navigable, it is never a good idea to use it for anything other than Hyper-V. No other use is supported.
Step 4: Creating a Highly Available Virtual Machine (HAVM)
Virtual machines are no different to Failover Cluster Manager than any other clustered role. As such, that is where we create them!
1. In the navigation pane of Failover Cluster Manager expand your cluster and click Roles.
2. In the Actions Pane click Virtual Machines… and click New Virtual Machine.
3. In the New Virtual Machine screen select the node on which you want to create the new VM and click OK.
The New Virtual Machine Wizard runs just like it would in Hyper-V Manager. The only thing you would do differently here is change the file locations for your VM and VHDX files. In the appropriate places ensure they are stored under C:\ClusterStorage\Volume1.
At this point your highly available virtual machine has been created, and can be failed over without delay!
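For reference, a rough PowerShell equivalent of creating a highly available VM looks like this. The VM name, memory size, and VHDX size are illustrative values only:

```powershell
# Create the VM with its configuration and disk on the Cluster Shared Volume.
New-VM -Name 'TestVM' -MemoryStartupBytes 2GB `
       -Path 'C:\ClusterStorage\Volume1' `
       -NewVHDPath 'C:\ClusterStorage\Volume1\TestVM\TestVM.vhdx' `
       -NewVHDSizeBytes 60GB

# Register the VM as a clustered role, making it highly available.
Add-ClusterVirtualMachineRole -VMName 'TestVM'
```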
Step 5: Making an existing virtual machine highly available
In all likelihood you are not starting from the ground up; you probably have pre-existing virtual machines that you would like to add to the cluster. No problem… however, before you do, you need to move the VM’s storage onto shared storage. Because Windows Server 2012 includes Live Storage Migration, that is very easy to do:
1. In Hyper-V Manager right-click the virtual machine that you would like to make highly available and click Move…
2. In the Choose Move Type screen select the radio Move the virtual machine’s storage and click Next>
3. In the Choose Options for Moving Storage screen select the radio marked Move all of the virtual machine’s data to a single location and click Next>
4. In the Choose a new location for virtual machine screen type C:\ClusterStorage\Volume1 into the field. Alternately you could click Browse… and navigate to the shared file location. Then click Next>
5. On the Completing Move Wizard page verify your selections and click Finish.
Remember that moving a running VM’s storage can take a long time; the VHD or VHDX file could be huge, depending on the size you selected. Be patient, it will get there. Once it is done you can continue with the following steps.
6. In Failover Cluster Manager navigate to the Roles tab.
7. In the Actions Pane click Configure Role…
8. In the Select Role screen select Virtual Machine from the list and click Next>. This step can take a few minutes… be patient!
9. In the Select Virtual Machine screen select the virtual machine that you want to make highly available and click Next>
NOTE: A great improvement in Windows Server 2012 is the ability to make a VM highly available regardless of its state. In previous versions you needed to shut down the VM to do this… no more!
10. On the Confirmation screen click Next>
…That’s it! Your VM is now highly available. You can navigate to Nodes and see which server it is running on. You can also right-click on it, click Move, select Live Migration, and click Select Node. Select the node you want to move it to, and you will see it move before your very eyes… without any downtime.
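The whole Step 5 workflow can also be sketched in PowerShell. 'ExistingVM' is a placeholder for your own virtual machine’s name:

```powershell
# Move the running VM's storage onto the Cluster Shared Volume - this is
# the Live Storage Migration that the Move wizard performs.
Move-VMStorage -VMName 'ExistingVM' `
               -DestinationStoragePath 'C:\ClusterStorage\Volume1\ExistingVM'

# Then make the VM highly available - no shutdown required in
# Windows Server 2012.
Add-ClusterVirtualMachineRole -VMName 'ExistingVM'
```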
What? There’s a Video??
Yes! We wanted you to read through all of this, but we also wrote it as a reference guide that you can refer to when you try to build it yourself. However, to make your life slightly easier, we also created a video for you and posted it online. Check it out!
For Extra Credit!
Now that you have added your virtualization hosts as nodes in a cluster, you will probably be creating more of your VMs on Cluster Shared Volumes than not. In the Hyper-V Settings you can change the default file locations for both your VMs and your VHDX files to C:\ClusterStorage\Volume1. This will save you from having to enter them each time.
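Those default locations can be set from PowerShell as well; this sketch assumes you run it on each host:

```powershell
# Point the host's default VM and VHDX locations at the Cluster Shared Volume
# so new VMs land on shared storage without any extra typing.
Set-VMHost -VirtualMachinePath 'C:\ClusterStorage\Volume1' `
           -VirtualHardDiskPath 'C:\ClusterStorage\Volume1'
```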
As well, the best way to create your VMs is in Failover Cluster Manager rather than in Hyper-V Manager. FCM creates your VMs as HAVMs automatically, without your having to perform those extra steps.
Over the last few weeks we have demonstrated how to Create a Storage Pool, perform a Shared Nothing Live Migration, Create an iSCSI Software Target in Windows Server 2012, and finally how to create and configure Failover Clusters in Windows Server 2012. Now that you have all of this knowledge at your fingertips (Or at least the links to remind you of it!) you should be prepared to build your virtualization environment like a pro. Before you forget what we taught you, go ahead and do it. Try it out, make mistakes, and figure out what went wrong so that you can fix it. In due time you will be an expert in all of these topics, and will wonder how you ever lived without them. Good luck, and let us know how it goes for you!
This post was originally written for the Canadian IT Pro Connection blog, and can be seen there at http://blogs.technet.com/b/canitpro.
Server Core may look boring – there’s nothing to it except the command prompt – but to an IT Pro it is really exciting for several reasons:
- It requires fewer resources, so in a virtualization environment you can optimize your environment even more than previously;
- It has a smaller attack surface, which makes it more secure;
- It has a smaller patch footprint, which means less work for us on Patch Tuesdays; and
- We can still use all of the familiar tools to manage it remotely, including System Center, MMC Consoles, and PowerShell.
Despite all of these advantages, in my experience a lot of IT Pros did not adopt Server Core. Simply stated, they like the GUI (Graphical User Interface) manageability of the full installation of Windows Server. Many do not like command lines and scripting, and frankly many are just used to the full install and did not want to learn something new. I have even met some IT Pros who simply click the defaults when installing the OS, so they always ended up with the full install.
As you can see in this screenshot, the default installation is now Server Core. This is not done to confuse people, but going forward most servers are going to be either virtual hosts or virtual machines, and either way Server Core is (more often than not) a great solution.
Of course, if you do this and did not want Server Core you are still in good shape, because new in Windows Server 2012 is the ability to add (or remove) the GUI on the fly. You can actually switch between Server Core and the Full (GUI) install whenever you want, making it easier to manage your servers.
There are a couple of ways to install the GUI from the command prompt, although both use the same underlying tool – DISM (Deployment Image Servicing and Management). When you are doing it for a single (local) server, the command is:
Dism /online /enable-feature /featurename:ServerCore-FullServer /featurename:Server-Gui-Shell /featurename:Server-Gui-Mgmt
While the Dism tool works fine, one of the features that will make you want Windows Server 2012 on all of your servers now is the ability to manage them remotely, and script a lot of the jobs. For that Windows PowerShell is your friend. The script in PowerShell would be nearly as simple:
Enable-WindowsOptionalFeature -Online -FeatureName ServerCore-FullServer,Server-Gui-Shell,Server-Gui-Mgmt
It takes a few minutes, but once you are done you can reboot and presto, you have the full GUI environment.
While that in and of itself is pretty amazing, we are not done yet. There is a happy medium between Server Core and Full GUI.
MinShell (the Minimal Server Interface) offers the administrator the best of both worlds. You have the GUI management tools (Server Manager) but no actual desktop shell, which means that you are still saving resources, have a smaller attack surface and a smaller patch footprint, AND full manageability!
What the product development team has done is simple: they made the GUI tools a Server Feature… in fact, they made it three separate features (see graphic). Under User Interfaces and Infrastructure there are three options that allow the server administrator to customize the visual experience according to his needs.
The Graphical Management Tools and Infrastructure feature is Server Manager, along with the other GUI tools that we use every day to manage our servers. It also includes the Windows PowerShell Integrated Scripting Environment (ISE), which gives administrators an easier way to create and manage their PowerShell scripts.
The Desktop Experience gives the administrator the full desktop experience – similar to the Windows 8 client OS – including features such as Picture and Video viewers.
The Server Graphical Shell is exactly that: the GUI. In other words we can turn the GUI on or off by using the Add Roles and Features Wizard (and the Remove Roles and Features Wizard).
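To drop a full GUI installation down to MinShell from PowerShell, removing only the shell while keeping the management tools, something like this should do it:

```powershell
# Remove only the graphical shell; Server Manager and the other GUI
# management tools (Server-Gui-Mgmt-Infra) remain installed.
Uninstall-WindowsFeature -Name Server-Gui-Shell -Restart
```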
Now there are a number of catches to remember:
First of all, when you go down to MinShell the Add Roles and Features Wizard is still available, but in Server Core it is not. Make sure you have this article on hand if you do go down to Server Core.
Next, if you install the full GUI and then remove the components, re-adding them isn’t a problem; however, if you install Server Core from the outset the GUI (and management) bits are not copied to the drive, which means that if you want to add them later you will need to have the installation media handy.
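In that case you can point Install-WindowsFeature at the installation media with the -Source parameter. The drive letter and image index below are assumptions that depend on your media:

```powershell
# Re-add the GUI on a Server Core installation, pulling the bits from the
# install media. D: and image index 4 are examples - adjust for your media
# (Get-WindowsImage can list the indexes inside install.wim).
Install-WindowsFeature -Name Server-Gui-Mgmt-Infra,Server-Gui-Shell `
    -Source wim:D:\sources\install.wim:4 -Restart
```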
Hard drive space is pretty cheap, so it is easy to decide to install the full GUI every time and then remove it (so that the bits will be there when you want them). However, remember that with Hyper-V in Windows Server 2012 the limits are pretty incredible – it is entirely possible that you will have up to 1,024 VMs on a host – and that means the space required for the GUI bits on each one could add up.
Whether you opt for the Server Core, Full GUI, or the MinShell compromise, Windows Server 2012 is definitely the easiest Server yet to manage, either locally or remotely, one-off commands or scripts. What I expect admins will be most excited about is the choices. Run your servers your way, any way!
- Memory Limits in Windows 8 & Windows Server 2012 (garvis.ca)
- Default File Locations in Hyper-V (garvis.ca)
- Client-Side Hyper-V: How Microsoft is changing the game (garvis.ca)
I made what I thought was a reasonably innocuous statement in front of an audience a few months ago, and couldn’t believe the pushback I got.
Our job as the IT providers – whether as in-house providers or as contractors – is not to make decisions. In fact, people are often amazed by how few decisions we have to make.
There was a chorus of objections from this group of high-level systems administrators who protested that they made decisions all of the time, with regard to licenses, solutions, whose hardware to buy, what password policies to implement, and so much more. They wanted to assure me that they made important decisions all of the time that would affect the user experience of everyone in their organizations.
As a service provider, and I hope that by now we can all agree that in most organizations IT is indeed a service provider, it is not our job to make decisions, it is our job to implement the decisions of others. Our job is not to be decision makers, it is to be trusted business advisors. That is an important distinction that we can never forget.
We don’t tell our clients what they need to do; they know what they need to do. We simply advise them how they can use different technologies to do it, and then they make the decision. It is our job to let them know what tools we can make available to them to facilitate their jobs.
Electronic communications is a great example of this. A few short years ago it was our job to tell our organizations that they could better communicate with their customers, suppliers, and everyone if they would start using e-mail. Then we often had to make a business case for using our own domain name – email@example.com – rather than a public cloud (although we didn’t call them that) free address such as firstname.lastname@example.org. Of course it usually made business sense, but we so often had to make the case anyways. From there it was servers – should our mail servers be in-house, or should we rely on our ISP (or another third party) for that service. I even remember having to convince one boss that his e-mail address should be printed on his business card.
In the entire process above, I didn’t make a single decision. I made recommendations, but it was the boss, the board, the committee that made the decisions.
So when this decision was made – our company will host our own mail servers – at least I could make the decision as to what mail servers to buy, right?
If I was an honest and trusted business advisor I would research what was available, cost out different solutions weighing in such factors as cost, reliability, features, and ease of use. I would then present a number of options to the board (often at this point an IT Committee), and they would make the ultimate decision. Again, I would make my recommendations, but the decisions were someone else’s.
Fast forward to 2012, the world is moving into the cloud. Private Cloud or Public Cloud? Whose solution? I present my customers with recommendations. I make my recommendations based on several factors, including operational expenses versus capital expenses, bandwidth requirements, service level agreements (SLAs), and so many other factors. Most of the time, because of my reputation as a trusted business advisor, my clients (and students) follow my advice. However in the end they are free to make their own decisions.
I was in an interview with a potential client recently who came to me because they need to replace their current service provider, and we sat down for a great conversation. Near the end of the chat he said to me:
Mitch, you obviously have the requisite skills and staff to do what we need, and I hope we can continue to work together going forward. But you have a lot of very strong opinions. What would you do if we disagree? You tell me we should do <A>, I say that I want to do <B>. What do you do then?
It was an almost obvious question that I had never been asked before. I told him honestly ‘Mark, if we disagree on what to do then I am going to do my best to convince you that I am right. I will make every proposal and reasonable argument, and will do everything I can to sway you to my side. If I cannot do that, then the simple answer is that you are paying the bills, and that makes the decision yours. In almost every case I will do what you ask me to do, because they are your servers and your infrastructure.’
Wait a minute… you said ‘almost’? Why the qualifier?
‘Very simple. If you ask me to do something that will compromise the security of your organization’s systems then you will have to ask someone else to do it. I compromise on everything else, but not on security.’
That, really, is the only major decision we can make… the decision to walk away when our customer (or boss) won’t take our advice. Sure, others can delegate the details to us – what version of what server to use on what hardware – but the real decisions belong to others.
While this may be (to some) a bruise to our egos, the reality is we should be relieved; we have enough as IT administrators on us without having to shoulder the burden of those major decisions. We are responsible for so much – and seldom get the credit we deserve for the jobs we do. We are responsible not only for keeping our systems working, but also for giving the people who do make the decisions the best advice and suggestions.
Let someone else make the decisions