Server Core on VMware

When I was a Virtual Technical Evangelist for Microsoft Canada I spent a lot of time telling you why you should use Server Core… especially if you were on Hyper-V.  Why?  You save resources.

It is now over two years since I turned in my Purple Badge, and I still think Server Core rocks.  In fact, when Windows Server 2016 comes out I will probably spend a lot of time telling you about the new Nano Server option that they are including in that version.  More on that to come.

Of course, I still like Hyper-V, but as an independent consultant I recognize (as I did quietly when I was with the Big Blue Machine) that the vast majority of the world is still running VMware for their enterprise-level server virtualization needs.  That does not change my opinion of Server Core… it still rocks, even on VMware.

Of course, in order to get the full benefits of the virtualized environment, a VMware machine requires the installation of the VMware Tools (as Hyper-V requires the installation of Integration Services).  With a Server with a GUI that is easy to do… but since Server Core is missing many of the hooks of the GUI, it has to be done from the command line.  Here’s how:

1. As you would with any other server, click Install VMware Tools


2. Connect to and log on to the virtual machine.  You will have to do this with Administrator credentials.

3. Navigate to the mounted ISO (if you only have a single hard drive attached, it will usually be D:)

4. Type in the following command: setup64.exe /S /v "/qn reboot=Y"


Once you have done this, the VMware tools will install, and your server will reboot.  Nothing to it!
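For reference, here is roughly what steps 3 and 4 look like together from the Server Core command prompt (assuming the ISO mounted as D:):

D:

setup64.exe /S /v "/qn reboot=Y"

The /S switch runs the installer silently, and /v passes the quoted options through to the Windows Installer.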


How To Cheat With PowerShell

Admit it… you are a crappy coder.  You may be a pretty fair IT Professional, but you cannot script your way out of a paper bag.  There’s a support group for you.

Hi, my name is Mitch, and I’m a lousy scripter. 

Admittedly I have never been to an AA or NA meeting; I have never really done well with support groups, and the only addiction I ever had I kicked without anyone’s help.  However I’ve seen plenty of AA meetings on TV and in movies, and that’s how they usually start.

Recently I wrote an article called iSCSI Virtual Disks: If you have to make a bunch… Use PowerShell! Thanks to one of my loyal readers (who despite or because of their loyalty are always quick to point out when I make a mistake) I realized that despite saying that it did, the script did not actually connect the virtual disks to the iSCSI Target… and I had to find a way to do that before looking stupid for too long.

Here’s the problem… I’m not really a PowerShell guru, just a regular IT guy who realizes the amazing power of the tool.  And as was written in an article that went live this week (guest-written by a colleague and friend), while using Google to find samples of scripts is great, there are two spectacular tools to help you on your way. 

The first such tool is called Get-Help.  You can type that in PowerShell to find out about any cmdlet.  Cool!
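For example, to see a cmdlet’s description, parameters, and some sample usage (using a cmdlet that features later in this article):

PS C:\> Get-Help Add-IscsiVirtualDiskTargetMapping -Examples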

However what do you do if you know there is probably a cmdlet, but you don’t know what it is?  Well, the second one is the Integrated Scripting Environment (ISE).  PowerShell’s ISE is the easiest way to build your scripts, whether they are simple, single-line cmdlets, or large, vast, flowing scripts that take up pages and pages.

Step 1: Run PowerShell ISE.  This is pretty easy, and if you haven’t figured it out, just click on the Start menu and type ISE.

Step 2: Select your Module.


The PowerShell ISE window is generally divided into three parts: A live PowerShell window, a scripting window, and the Commands list.  The Commands section is literally a list of every command and cmdlet in PowerShell… thousands of them.  However let’s say you know the command you are looking for has to do with iSCSI Targeting… select that Module from the drop-down list, and all of a sudden your thousands of commands turn to twenty-six.

What I want to do is to map a previously created iSCSI Virtual Disk to an iSCSI Virtual Target… so the first command in the filtered list (Add-IscsiVirtualDiskTargetMapping) sounds pretty spot-on. I’ll click on it, and if this is the first time clicking on a command for this module, I will get the following message:

To import the “iSCSITarget” module and its cmdlets, including “Add-IscsiVirtualDiskTargetMapping”, click Show Details.


When I click Show Details, I am presented with several options.  These will differ for every cmdlet, and they will correspond to the optional (and required) command-line switches that I might need.

The Path is going to be the full name and path of the previously created iSCSI Virtual Disk.  In my case I created several, but they all look like q:\iSCSIVirtualDisks\Disk1.vhdx.  That is what I am going to enter there.

The TargetName is the name of the target I created… in this case it might look like Target1.

It is important that you pay attention to the ComputerName box because as you saw in my previous article, I might name the iSCSI Virtual Disks (my VHDX files) the same thing on each host.  When I enter the ComputerName TargetServer1 PowerShell knows to look for Target1 on that server.  If you do not enter a ComputerName then it will assume that it should look on the local server… and that could be disastrous, especially if those VHDX files are already otherwise mapped and in use.

The Credential box is exactly what it sounds like… If your user account does not have credentials to execute a command on a remote system, you can use this box to specify alternate credentials.

The Lun box allows you to set the LUN (Logical Unit Number) of the virtual disk.  If you are not concerned by this, the default is for the lowest available LUN number to be assigned automatically.

If you want more help, notice that there is a blue circle with a ? right in the window.  Click on that, and a much more detailed Help dialog pops up than you would get by typing Get-Help Add-IscsiVirtualDiskTargetMapping in the PowerShell window.  If you don’t believe me, try them both!


See? I told you!

So let’s go ahead and populate those fields the way I said.


Once you populate them, there are three buttons at the bottom of the Commands console that you can use:

Run does exactly what you would think… it runs the command with the appropriate switches.

Insert puts the command and switches into your PowerShell (blue) window, but does not execute the command.

Copy is also pretty self-explanatory… it copies it to the clipboard for you to put in the scripting (white) window… or anywhere else you might want to insert it with a Ctrl-V.
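To give you an idea, with the fields populated the way I describe above, the command that Run, Insert, or Copy produces would look something like this (using my sample names):

PS C:\> Add-IscsiVirtualDiskTargetMapping -ComputerName TargetServer1 -TargetName Target1 -Path q:\iSCSIVirtualDisks\Disk1.vhdx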

So I don’t really know how to script, but I know what I want to accomplish… PowerShell ISE takes me from base-camp to the goal like a Sherpa guiding me on Everest.  Yet another way to love PowerShell… and get to know it better!

iSCSI Virtual Disks: If you have to make a bunch… Use PowerShell!

I don’t mean to sound like a broken record but when you have to do things over and over again, it simply doesn’t make sense to do it manually.  Sure, the GUI (Graphical User Interface) in Windows is great… but if you have to click through the same wizard ten times to create ten of something… well, I guess all I am saying is that PowerShell makes a lot more sense.

Last month I went through a relatively time-consuming exercise to create three LUNs on each of three Software SANs on Windows Server 2012 R2.  Ok great… but I then discovered that for my project I couldn’t use three LUNs of 1.5TB each… rather I needed to create nine LUNs of 500GB each.  What a royal pain!  By the way, seeing as I have to do this on three separate servers, my workload just tripled from doing it 9 times to doing it 27 times!  This does not sound like fun.

Fortunately, I can do it all in PowerShell, which means I can save a whole lot of clicking.  We are going to do this all on three different servers, named TargetServer1, TargetServer2, and TargetServer3.  Let’s look at how:

Parameters

a) We are going to create three iSCSI Target Servers called TargetServer1, TargetServer2, and TargetServer3.

b) We are going to present the targets to five servers called InitServer1, InitServer2, InitServer3, InitServer4, and InitServer5.

c) We are going to create nine 500GB drives on each server, plus three 1GB drives on each server.  In case you can’t tell, these drives will be used for nine different Failover Clusters, and the 1GB drives will be the witnesses.

d) We are going to attach all of the iSCSI Virtual Disks to the appropriate Targets.

Let’s Go!

1) Before we do anything, we want to create a session that will repeat the same tasks on each computer.

PS C:\> $session=New-PSSession –ComputerName TargetServer1,TargetServer2,TargetServer3

That will save us having to do a few things over again, even though we could have done it with a simple ‘<Up-Arrow> <Backspace>’ or two.

2) We have to install the iSCSI Target Server feature on all of these servers. So:

PS C:\> Invoke-Command –Session $session {Install-WindowsFeature –Name FS-iSCSITarget-Server}

3) The next thing we are going to do is actually create the iSCSI Targets on the three servers.  By doing this with the $session that we created, we will end up with three targets with the same name.  I trust you will go back and fix that by hand later on.  If you prefer to avoid that step, though, we could bypass the $session and use the manual-PowerShell way:

PS C:\> Invoke-Command –Session $session {New-IscsiServerTarget –TargetName Target1 –InitiatorIds "DNSNAME:InitServer1","DNSNAME:InitServer2","DNSNAME:InitServer3","DNSNAME:InitServer4","DNSNAME:InitServer5"}

or…

PS C:\> New-IscsiServerTarget –ComputerName TargetServer1 –TargetName Target1 –InitiatorIds "DNSNAME:InitServer1","DNSNAME:InitServer2","DNSNAME:InitServer3","DNSNAME:InitServer4","DNSNAME:InitServer5"

PS C:\> New-IscsiServerTarget –ComputerName TargetServer2 –TargetName Target2 –InitiatorIds "DNSNAME:InitServer1","DNSNAME:InitServer2","DNSNAME:InitServer3","DNSNAME:InitServer4","DNSNAME:InitServer5"

PS C:\> New-IscsiServerTarget –ComputerName TargetServer3 –TargetName Target3 –InitiatorIds "DNSNAME:InitServer1","DNSNAME:InitServer2","DNSNAME:InitServer3","DNSNAME:InitServer4","DNSNAME:InitServer5"

4) Now that we have created the Targets, we have to create the disks.  Unlike the Targets (whose names will be used outside of their own servers), I don’t mind if the names of the actual disks are the same on each server.

Invoke-Command –session $session {

New-IscsiVirtualDisk –Path q:\iSCSIVirtualDisks\Disk1.vhdx –SizeBytes (500GB) –UseFixed

New-IscsiVirtualDisk –Path q:\iSCSIVirtualDisks\Disk2.vhdx –SizeBytes (500GB) –UseFixed

New-IscsiVirtualDisk –Path q:\iSCSIVirtualDisks\Disk3.vhdx –SizeBytes (500GB) –UseFixed

New-IscsiVirtualDisk –Path q:\iSCSIVirtualDisks\Disk4.vhdx –SizeBytes (500GB) –UseFixed

New-IscsiVirtualDisk –Path q:\iSCSIVirtualDisks\Disk5.vhdx –SizeBytes (500GB) –UseFixed

New-IscsiVirtualDisk –Path q:\iSCSIVirtualDisks\Disk6.vhdx –SizeBytes (500GB) –UseFixed

New-IscsiVirtualDisk –Path q:\iSCSIVirtualDisks\Disk7.vhdx –SizeBytes (500GB) –UseFixed

New-IscsiVirtualDisk –Path q:\iSCSIVirtualDisks\Disk8.vhdx –SizeBytes (500GB) –UseFixed

New-IscsiVirtualDisk –Path q:\iSCSIVirtualDisks\Disk9.vhdx –SizeBytes (500GB) –UseFixed

New-IscsiVirtualDisk –Path q:\iSCSIVirtualDisks\Disk1W.vhdx –SizeBytes (1GB) –UseFixed

New-IscsiVirtualDisk –Path q:\iSCSIVirtualDisks\Disk2W.vhdx –SizeBytes (1GB) –UseFixed

New-IscsiVirtualDisk –Path q:\iSCSIVirtualDisks\Disk3W.vhdx –SizeBytes (1GB) –UseFixed}

Warning: This script is going to take a ridiculously long time.  That is because when creating the virtual disks, PowerShell is zeroing out all of the bits.  This is the safer way to do things if you are re-using your disks.  If they are brand new, clean disks, then you can add the switch –DoNotClearData to your statements.  However, unless you are in a real hurry, I would take the extra time.
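Incidentally, if typing twelve nearly identical lines offends your inner scripter, a simple loop does the same thing.  Here is a minimal sketch, assuming the same paths and sizes as above:

Invoke-Command -Session $session {

1..9 | ForEach-Object { New-IscsiVirtualDisk -Path "q:\iSCSIVirtualDisks\Disk$_.vhdx" -SizeBytes 500GB -UseFixed }

1..3 | ForEach-Object { New-IscsiVirtualDisk -Path "q:\iSCSIVirtualDisks\Disk$($_)W.vhdx" -SizeBytes 1GB -UseFixed }

}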

5) Our disks have been created, but we have to attach them to the Targets.  So:

Invoke-Command –session $session {

Add-IscsiVirtualDiskTargetMapping –Path q:\ISCSIVirtualDisks\Disk1.vhdx –TargetName Target1

Add-IscsiVirtualDiskTargetMapping –Path q:\ISCSIVirtualDisks\Disk2.vhdx –TargetName Target1

Add-IscsiVirtualDiskTargetMapping –Path q:\ISCSIVirtualDisks\Disk3.vhdx –TargetName Target1

Add-IscsiVirtualDiskTargetMapping –Path q:\ISCSIVirtualDisks\Disk4.vhdx –TargetName Target1

Add-IscsiVirtualDiskTargetMapping –Path q:\ISCSIVirtualDisks\Disk5.vhdx –TargetName Target1

Add-IscsiVirtualDiskTargetMapping –Path q:\ISCSIVirtualDisks\Disk6.vhdx –TargetName Target1

Add-IscsiVirtualDiskTargetMapping –Path q:\ISCSIVirtualDisks\Disk7.vhdx –TargetName Target1

Add-IscsiVirtualDiskTargetMapping –Path q:\ISCSIVirtualDisks\Disk8.vhdx –TargetName Target1

Add-IscsiVirtualDiskTargetMapping –Path q:\ISCSIVirtualDisks\Disk9.vhdx –TargetName Target1

Add-IscsiVirtualDiskTargetMapping –Path q:\ISCSIVirtualDisks\Disk1W.vhdx –TargetName Target1

Add-IscsiVirtualDiskTargetMapping –Path q:\ISCSIVirtualDisks\Disk2W.vhdx –TargetName Target1

Add-IscsiVirtualDiskTargetMapping –Path q:\ISCSIVirtualDisks\Disk3W.vhdx –TargetName Target1}
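The same loop trick from step 4 works here too, if you prefer.  A sketch, assuming the same disk names as above:

Invoke-Command -Session $session {

1..9 | ForEach-Object { Add-IscsiVirtualDiskTargetMapping -Path "q:\iSCSIVirtualDisks\Disk$_.vhdx" -TargetName Target1 }

1..3 | ForEach-Object { Add-IscsiVirtualDiskTargetMapping -Path "q:\iSCSIVirtualDisks\Disk$($_)W.vhdx" -TargetName Target1 }

}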

So if we did it properly, we should now have three software SANs (Targets), each hosting nine 500GB virtual disks plus three 1GB ‘Quorum Disks’, all mapped to that server’s iSCSI target… which is in turn presented to five iSCSI initiators.

In my next article, I will show you what needs to be done on the initiator side to get these all going for your Failover Clusters.  Until then… Happy scripting!

Server Core: Every bit the Full Server as the GUI is!

Microsoft introduced Server Core with Windows Server 2008, which means that it was the same kernel as Windows Vista.  Now, nobody is going to stand up and sing the praises of Vista, but Server 2008 was a very solid OS.

You may (or may not) remember that there was a campaign around Vista called ‘The WOW starts NOW!’ Catchy, huh?  Well, because Server Core was literally taking the ‘bling’ out of Windows Server, there was an internal joke at the time that ‘The Wow STOPS Now.’

While Server Core never looked very exciting for end users, for IT Admins, and especially those who were building virtualized environments, Server Core was a godsend. Let’s go through this one more time to demonstrate why:

  • The Windows Graphical User Interface (GUI), which is the difference between Server Core and the full installation, takes resources.  How much?  Well, depending on the server it might be as much as 3-4GB on the hard drive and as much as 350MB of RAM.  Neither of these is a big deal in a world where servers have 128GB of RAM and terabytes of storage on them, right?  Well on a virtualization host that may have on average 100 virtual machines running simultaneously, that translates to 400GB of storage and a ridiculous 35GB of RAM… Ouch.
  • Every component that is installed in Windows has to be patched from time to time.  The fewer components you have installed, the less patching that has to be done.
  • The more you have installed in Windows, the larger your attack surface.  By removing components, you can minimize this, making your computer more secure.

In Windows Server 2008, here’s what we saw when we initiated the installation… a menu with all three editions (Standard, Enterprise, Datacenter) as a Full Installation, and the same three editions as a Server Core Installation.

I have been singing the praises of Server Core for as long as it has been available, but often to deaf ears.  I always assumed this was because most IT Admins liked the GUI.  Recently I was in a presentation given by Jeffrey Snover, who gave me another perspective on it… the terminology in Server 2008 was part of it.  You see, people look at the options ‘Full Server’ versus ‘Server Core’ and they immediately think ‘power & performance.’ A Full Server must do more than a server core server… why?  It is FULL!

Of course, in Server 2008 it didn’t help that Server Core actually was a hobbled version of Server… there were a few roles that worked on it, but not too many.

As with so many Microsoft products, that got better in 2008 R2, and even better in Server 2012 and 2012 R2.  Today you would be amazed at what can run on Server Core… in fact, nearly everything that you do on a server can run on Server Core.  So there is little wonder that Microsoft made a change to the terms…

No longer is it a question of FULL versus CORE… Now our options are Server Core Installation and Server with a GUI.

There are two differences to notice in this screen shot… the first is that there are only four options because Microsoft eliminated the Enterprise SKU.  The second is that the default option (a la next – next – next installations) is Server Core.  While some admins might say ‘Yeah I wasn’t paying attention so I ended up with Server Core and had to reinstall,’ the reality is that most of us, once we understand the benefits and the manageability options, will want to install Server Core instead of the GUI servers.

Of course, there are admins who will still be afraid of the command line… but because most of the ongoing administration of our servers (the things we usually do with MMC consoles) can be done remotely, Server Core, or at the very least MinShell, will make our lives easier.  MinShell removes most of the GUI, but leaves the MMC consoles.

But what if I wanted to use the GUI to configure the system, and then get rid of it completely?  We can definitely do that.  One method of doing it is to use the Server Manager’s Remove Roles and Features option.  (The GUI is a feature, and is listed under User Interfaces and Infrastructure – Server Graphical Shell)  This will uninstall the components and save the RAM… but it will not free up your hard disk space.  To do that, use the following PowerShell cmdlet:

Uninstall-WindowsFeature –Name Server-Gui-Mgmt-Infra,Server-Gui-Shell –ComputerName <Name> -Remove -Restart

The -ComputerName option allows you to do this to remote computers, and the -Remove option actually removes the bits from the hard drive.

What can you do with Server Core? I won’t say everything… but nearly so.  It is no longer just your Hyper-V hosts… it is your domain controllers, SQL Servers, Web Servers, and so much more.  As long as you are able to learn a little bit of PowerShell… and how to enable remote management on your servers.

Now go forward and save your resources!

Where am I? HELP!

My colleague created a virtual machine for me in our datacentre a few weeks ago.  (Thanks Michael!)  Earlier this week I needed to create a second virtual machine to cluster with it, and I felt that the best way to maximize my resources would be to create another virtual machine identical to the first.  Okay, all I had to do was pop open the Settings window for the virtual machine and copy it.

We have 25 physical host servers in the lab environment in question, and no Virtual Machine Manager.  Crap.

I could, if I had to, connect to each host one by one looking for the virtual machine in question, but that would be a waste of time… not to mention that as a one-off solution it could work, but it is a bad habit to get into.  I needed a better solution.

If you ever find yourself in that position, here’s a tip: as long as you have the Integration Services installed, there is a registry key in the virtual machine that gives you your answer.  So open Regedit and navigate to:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Virtual Machine\Guest\Parameters

See? There it is, right there in black and white.  In fact, it’s there three times – under HostName, PhysicalHostName, and PhysicalHostNameFullyQualified.   I no longer need a map, I no longer need to go looking by hand.

But Mitch, isn’t there a way to do this in PowerShell?

I’m glad you asked.  Sure, here it is:

(Get-ItemProperty –path “HKLM:\SOFTWARE\Microsoft\Virtual Machine\Guest\Parameters”).PhysicalHostName

Of course, if you are a stickler about it, you can change the last bit to PhysicalHostNameFullyQualified, but that’s up to you.
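And if you want to see all three values at once, this should do it:

Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Virtual Machine\Guest\Parameters" | Select-Object HostName, PhysicalHostName, PhysicalHostNameFullyQualified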

Now that you know where you are… keep going!

Let’s Spread the Action Around… With NLB! (Part 1)

**AUTHOR’S NOTE: I have written hundreds of articles on this blog over the past decade.  Until recently I spent a lot of time taking screen shots of GUI consoles for my how-to articles.  For the time being, as I try to force myself into the habit, I will be using Windows PowerShell as much as possible, and thus will not be taking screen shots, but instead giving you the cmdlets that I use.  I hope this helps you as much as it is helping me! –MDG

I have written at length about Failover Clusters for Active-Passive services.  Let’s move away from that for a moment to discuss Network Load Balancing (NLB) – the tool that we can use to create Active-Active clusters for web sites (and other static-information services).

While NLB does, after a fashion, cluster services, it is not a failover service… it is in fact a completely different service.  For my use case, it is usually installed on a server running IIS.  Start by installing it:

PS C:\> Install-WindowsFeature NLB –IncludeManagementTools –ComputerName Server1

Of course, having a single server NLB cluster is like juggling one ball… not very impressive at all.  So we are going to perform the same function for at least a couple of hosts…

PS C:\> Invoke-Command –ComputerName Server1,Server2,Server3 {Install-WindowsFeature NLB –IncludeManagementTools}

By the way, notice that I am referring to the servers as hosts, and not nodes.  Even the terminology is different from Failover Clusters.  This is going to get confusing at a certain point, because some of the PowerShell cmdlets and switches will refer to nodes.

Now that the feature is installed on all of our servers, we are almost ready to create our NLB Cluster.  Before we do, we have to determine the following:

  • Ethernet Adapter name
  • Static IP Address to be assigned to the Cluster

You are on your own for the IP address… it is up to you to pick one and to make sure it doesn’t conflict with another server or DHCP Server.

However with regard to the Ethernet Adapter name, there’s a cmdlet for that:

PS C:\> Invoke-Command –ComputerName Server1 –ScriptBlock {Get-NlbClusterNodeNetworkInterface}

Notice that I am only doing this, for the time being, against one server.  That is because I am going to create the cluster on a single server, then add my hosts to it afterward.

So now that we have the information we need, let’s go ahead and create an NLB Cluster named WebCluster, on Server1, with the Interface named Ethernet 2, and with an IP Address of 172.16.10.199:

PS C:\> New-NlbCluster –HostName Server1 –InterfaceName “Ethernet 2” –ClusterName WebCluster –ClusterPrimaryIP 172.16.10.199 –OperationMode Multicast

It will only take a minute, and you will get a response table listing the name, IP Address, Subnet Mask, and Mode of your cluster.

Now that we’ve done that, we can add another host to the NLB Cluster.  We’ll start by checking the NIC name on the second server, then we will add that server to the NLB Cluster:

PS C:\> Invoke-Command –ComputerName Server2 –ScriptBlock {Get-NlbClusterNodeNetworkInterface}

PS C:\> Get-NlbCluster –HostName Server1 | Add-NlbClusterNode –NewNodeName Server2 –NewNodeInterface “Ethernet”

Notice that in the first part of the command we are getting the NLB Cluster by the Host Name, and not the Cluster Name.

This part may take a few minutes… Don’t worry, it will work.  When it is done you will get a response table listing the name, State, and Interface name of the second host.

You can repeat this across as many hosts as you like… For the sake of this series, I will stick to two.
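If you do want more hosts, a quick loop saves some copying and pasting.  A sketch, assuming the extra hosts (my hypothetical Server3 and Server4) all name their NIC "Ethernet"… check each one with Get-NlbClusterNodeNetworkInterface first:

PS C:\> "Server3","Server4" | ForEach-Object { Get-NlbCluster -HostName Server1 | Add-NlbClusterNode -NewNodeName $_ -NewNodeInterface "Ethernet" }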

In the next article of the series, we will figure out how to publish our web sites to the NLB Cluster.

Help! My Servers Aren’t Being Monitored!

This isn’t right… I have System Center Operations Manager monitoring all of my servers for me, but this morning I noticed that several of my servers are in a warning state, and they are greyed out (which implies that they aren’t reporting in properly).  What do I do?

This is not uncommon, especially in smaller organizations where you may have a single IT Professional running everything.  While it is not a good practice, some IT Pros will use their own credentials (which are obviously going to be Domain or Enterprise Admin accounts) to make things work.  Here’s the problem… you set up your credentials in System Center Operations Manager as a Run As account… and then at some later date you changed your password.

It is never a good idea to use an individual’s credentials as a Run As account.  It is also never a good idea to provide Domain Admin credentials to a program, but that is another issue that I will tackle later on.  What you should do, when configuring System Center Operations Manager, is create action (or Service) accounts in Active Directory.  Use ridiculously long and impossible to guess passwords (Jean MacDonald Kennedy was the 23rd Queen of Tahiti) and change them on a less frequent basis… say, when you change the batteries in your smoke detectors.

So now we have a bunch of computers that are being monitored… oh wait, no they aren’t.  They only look like they are being monitored.  We’d better fix that, and pronto!

We have to figure out what servers this account applies to.  We cannot simply delete the RunAs account, because it is going to be associated with a profile.  So let’s start by figuring out what profile that is.

1) In the Administration workspace navigate to Run As Configuration – Accounts and locate the errant account in the list of action accounts.  Right-click on it, and click Properties.

2) In the Properties window click on Where is this credential used?  For the sake of this article, the only profile listed is Default Action Account.  Close Account Usage and Run As Account Properties.

3) Navigate to Run As Configuration – Profiles and locate the profile.  Right-click on it and click Properties.

4) In the Run As Profile Wizard navigate to Run As Accounts.

5) In the list of Run As accounts find all instances where the user account is listed.


6) One by one, click Edit… In the Add a Run As Account window change the account to your Service Account.  Click OK.


7) When you have done this for all instances (remember, you may need to scroll down) click Save.

** IMPORTANT NOTE: If you get error messages preventing you from saving the profile, you can either break your back trying to troubleshoot the SQL errors… or if there aren’t too many systems using the offending account, you can delete those servers from SCOM, and when you have resolved the issue, go back and re-discover them.

Once this is done, you can now delete the Run As account:

8) Navigate to Run As Configuration – Accounts

9) Right-click on the offending account and click Delete. (Accept any warning).
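If you want to double-check your work from PowerShell, the Operations Manager Shell (in SCOM 2012 and later) can list the Run As accounts that remain:

PS C:\> Get-SCOMRunAsAccount | Sort-Object Name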

That should do it!  Go forth and manage, and remember… an unmanaged server can work great and save you all sorts of time… until it stops working and you have no idea why, or even that it did stop working.

Insanity Is…

Insanity: Doing the same thing over and over again and expecting different results.

We have all heard this quote before… and it is exactly true.  However, in your server environment, when you want things identical, we would turn this phrase around:

Insanity: Doing things manually over and over and expecting identical results.

I have not spent a great deal of time learning PowerShell… but whenever I have a task to do, such as installing a role or a feature, I try to do it with PowerShell.  I actually leverage another of Einstein’s great axioms:

Never memorize something that you can look up.

The Internet is a great tool for this… I can look up nearly anything I need, especially with regard to PowerShell. 

So previously, when I wanted to install a role on multiple servers I would run a series of cmdlets:

PS C:\>Install-WindowsFeature Failover-Clustering –IncludeManagementTools –ComputerName Server1

PS C:\>Install-WindowsFeature Failover-Clustering –IncludeManagementTools –ComputerName Server2

PS C:\>Install-WindowsFeature Failover-Clustering –IncludeManagementTools –ComputerName Server3

Of course, this would work perfectly.  However recently I was looking up one of the cmdlets I needed on the Internet and stumbled across an easier way to do it… especially when I want to run a series of identical cmdlets across multiple servers.  I can simply create a multi-server session.  Watch:

PS C:\>$session=New-PSSession –ComputerName Server1,Server2,Server3

PS C:\>Invoke-Command –session $session {Add-WindowsFeature Failover-Clustering –IncludeManagementTools}

Two lines instead of three doesn’t really make my life a lot easier… but let’s say I was doing more than simply adding a role… this could save me a lot of time and, more importantly, ensure uniformity across my servers.

Creating a PSSession is great for things like initial configuration of servers… think of all of the tasks you perform on every server in your organization… or even just every web server, or file server.  This will work for Firewall rules, and any number of other settings you can think of.
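For example, here is a sketch of enabling a firewall rule group over that same session… the group name here is just an illustration, so substitute whatever rules you actually need:

PS C:\> Invoke-Command -Session $session {Enable-NetFirewallRule -DisplayGroup "Remote Event Log Management"}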

Try it out… It will save you time going forward!

Broken Cluster? Clear it up.

Three years ago I wrote an article about cleaning up nodes of clusters that had been corrupted and destroyed (See Cluster Issues… how to clean out cluster nodes from destroyed clusters). 

Unfortunately the cluster command has been deprecated in Windows Server 2012 R2, so we need to go to PowerShell… which frankly is where we should be going anyways!

PS C:\> Clear-ClusterNode –Cluster Toronto –Force

In this example we had a cluster named Toronto that is no longer accessible.  Unfortunately one of the nodes was off-line when the cluster was destroyed, so it didn’t ‘get the message.’  As such, when we try later to join it to a new cluster we get an error that the server is already a node in another cluster.

The cmdlet only takes a minute to run, and when you do run it you are all set… you will immediately be able to join it to another cluster.

Funnily enough, I have not yet figured out how to (natively) run this cmdlet against a remote server, so you can either do it by connecting to each server or…

Invoke-Command –ComputerName Server1 –ScriptBlock {Clear-ClusterNode –Cluster Toronto –Force}

I covered this option in a previous article (Do IT Remotely) which shows how to run cmdlets (or any script) against a remote server.

Now go forth and script!

Cluster-Aware Updates: Be Aware!

When I started evangelizing Windows Server 2012 for Microsoft, there was a long list of features that I was always happy to point to.  There are a few of them that I have never really gone into detail on, that I am currently working with.  Hopefully these articles will help you.

Cluster Aware Updates (CAU) is a feature that does exactly what it says – it helps us to update the nodes in a Failover Cluster without having to manually take them down, put them into maintenance mode, or whatever else.  It is a feature that works in conjunction with our patch management servers as well as our Failover Cluster.

I have written extensively about Failover Clusters before, but just to refresh, we need to install the Failover Clustering feature on each server that will be a cluster node:

PS C:\> Install-WindowsFeature –Name Failover-Clustering –IncludeManagementTools –ComputerName <ServerName>

We could of course use the Server Manager GUI tool, but if you have several servers it is easier and quicker to use Windows PowerShell.

Once this is done we can create our cluster.  Let’s create a cluster called Toronto with three nodes:

PS C:\> New-Cluster –Name Toronto –Node Server1,Server2,Server3

This will create our cluster for us and assign it a dynamic IP address.  If you are still skittish about dynamic IP you can add a static IP address by modifying your command like this:

PS C:\> New-Cluster –Name Toronto –Node Server1,Server2,Server3 –StaticAddress 10.10.10.201

Great, you have a three-node cluster.  So now onto the subject at hand: Cluster Aware Updates.

You would think that CAU would be a default behaviour.  After all, why would anyone NOT want to use it? Nonetheless, you have to actually enable the role feature.

PS C:\> Add-CauClusterRole –EnableFirewallRules

Notice that we are not using the –ComputerName switch.  That is because we do not install the role service to the servers but to the actual cluster.  You will be asked: Do you want to add the Cluster-Aware Updating clustered role on cluster “Toronto”? The default is YES.

By the way, in case you are curious the Firewall Rules that you need to enable is the ‘Remote Shutdown’ rule.  This enables Cluster-Aware Updating to restart each node during the update process.
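The –EnableFirewallRules switch takes care of this for you, but if you ever need to turn the rule group on by hand, something like this should work:

PS C:\> Invoke-Command -ComputerName Server1,Server2,Server3 {Enable-NetFirewallRule -DisplayGroup "Remote Shutdown"}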

Okay, you are ready to go… In the Failover Cluster Manager console right-click on your cluster, and under More Actions click Cluster-Aware Updating.  In the window Failover – Cluster-Aware Updating click Apply updates to this cluster.  Follow the instructions, and your patches will begin to apply to each node in turn.  Of course, if you want to avoid the management console, all you have to do (from PowerShell) is run:

PS C:\> Invoke-CauRun

However be careful… you cannot run this cmdlet from a server that is a cluster node.  So from a remote system (I use my management client that has all of my RSAT tools installed) run:

PS C:\> Invoke-CauRun –ClusterName Toronto

You can watch the PowerShell progress of the update… or you can go out for ice cream.  Just make sure it doesn’t fail in the first few seconds… a full run will take some time.

Good luck, and may the cluster force be with you!

Keep Up: How to configure SCOM to monitor the running state of services and restart them when they stop

Windows runs on services.  Don’t believe me?  Open your Services console and count just how many are running at any given time.  Of course, some of them are more important than others… especially when you are talking about servers that are critical to your organization.

A new customer recently called me for a DEAR Call (emergency visit) because their business critical application was not working, and they couldn’t figure it out.  I logged into the server, and at first glance there didn’t appear to be anything wrong on the application server.  However I knew that the application used SQL Server, and I did not see any SQL instances on the machine.  A quick investigation revealed that there was an external SQL Server running on another server, and it only took a few seconds to see why the application was failing.


Very simply put, the service was not started.  I selected it, clicked Start the service, and in a few seconds the state changed.


A quick look showed that their business critical application (in this case SharePoint 2010) was working properly again.

My customer, who was thrilled to be back in business, was also angry with me.  ‘We spent tens of thousands of dollars on System Center Operations Manager so that we could monitor our environment, and what good does it do me?  I have to call you in when things stop working!’

Yell as much as you like, I told him, but please remember the old truism… if you think it is expensive hiring professionals, try hiring amateurs.  After he had learned about the benefits of implementing a proper monitoring solution, he told his IT guy to install it… and that is exactly what he did.

System Center Operations Manager (SCOM) is a monitoring framework, and really quite a good one.  In fact, if Microsoft included the tools within the product itself to monitor every component that it is capable of monitoring, it would have to come in a much bigger box.  Instead, what it gives you is the ability to import or create Management Packs (MPs) to monitor aspects of your IT environment.  It is up to you to then implement those MPs so that SCOM can monitor the many components of your infrastructure… and take the appropriate action when things go wrong.

Of course, there are much more in-depth MPs for monitoring Microsoft SQL Server, but for those IT generalists who do not need the in-depth knowledge of what their SQL is doing, simply knowing that the services are running is often good enough… and monitoring those services is the exact same step you would take to monitor the DNS Server service.

Although it is long, following these relatively simple steps will do exactly what you need.

1) Open the Operations Manager console.

2) In the Operations Manager console open the Authoring context.

3) In the navigation pane expand Management Pack Objects and click on Monitors.


4) Right-click on Monitors and select Create a Monitor – Unit Monitor…

5) At the bottom of the Create a unit monitor window select the Management Pack you are going to save this to.  I never save to the default management packs – create your own, it is safer (and easier to recover when you hork something up).

6) In the Select the type of monitor to create section of the screen expand Windows Services and select Basic Service Monitor.  Click Next.


7) In the General Properties window name your monitor.  Make sure you name it something that you will recognize and remember easily.

8) Under Monitor target click Select… From the list select the target that corresponds to the service you will be monitoring.  Click OK.

9) Back in the General Properties window uncheck the Monitor is enabled checkbox.  Leaving this enabled will try to monitor this service on every server, not just the one where it resides.  Click Next.

10) In the Service Details window click the ellipsis button (…) next to Service Name.

11) In the Select Windows Service window either type the name of the target server, or click the ellipsis button and select the computer from the list.  Then select the service you wish to monitor from the list under Select service.  Click OK.


12) Back in the Service Details window the Service name window should be populated.  Click Next.

13) In the Map monitor conditions to health states window accept the defaults… unless of course you want to make sure that a service is NEVER started, at which point you can change that here.  Click Next.


14) In the Alert settings window select the Generate alerts for this monitor checkbox.  You can also put in a useful description of the alert in the appropriate box.  Click Create.

The saving process may take a minute or two, but when it is done search for it in the Monitors list.

15) Right-click on your custom monitor.  Select Overrides – Override the Monitor – For a specific object of class: <Name of the product group>


16) In the Select Object window select the service you are monitoring and click OK.

17) In the Override Properties window, under the Override-controlled parameters list, scroll to the parameter named Enabled and make the following changes:

a) Select the Override checkbox.

b) Change the Override Value to True.

c) Click Apply

d) Click Show Monitor Properties…

18) In the Monitor Properties window click the Diagnostic and Recovery tab.

19) Under Configure recovery tasks click Add… and when it appears click Recovery for critical health state.


20) Under the Create Recovery Task Wizard click Run Command and click Next.

21) In the Recovery Task Name and Description window:

a) Enter a Recovery name (Re-Start Service works for me!).

b) Select the checkbox Recalculate monitor state after recovery finishes.

c) Click Next.

22) In the Configure Command Line Execution Settings window enter the following information:

Full path to file: %windir%\System32\Net.exe

Parameters: start <service name>

Working directory: %windir%

Timeout (in seconds): 120

23) Click Create.

24) Close the Monitor Properties window.

25) In the Override Properties window click Apply, then OK.

The doing is done, but before you pat yourself on the back, you have to test it.  I always recommend running these tests during off-hours for non-redundant servers.

1) Open the services.msc console.

2) Right-click on Services (Local) and click Connect to another computer…

3) Connect to the server where your monitored service is running.

4) Right-click on the service and click Stop Service.
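If you would rather not open a console at all, you can stop the service remotely from PowerShell instead… the server and service names here are just my example:

PS C:\> Get-Service -Name MSSQLSERVER -ComputerName Server1 | Stop-Service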

It may take a couple of minutes, but if you get up and go for a walk, maybe make a cup of coffee or tea… by the time you get back, the service should be restarted.

There seems to be a reality in the world of IT that the more something costs, the less it is likely to do out of the box.  It is great to have a monitoring infrastructure in place, but without configuring it to properly monitor the systems you have, it can be a dangerous tool, because you will have a false sense that your systems are protected when they really aren’t.  Make sure that the solution you have is properly configured and tested, so that when something does go wrong you will know about it immediately… otherwise it will just end up costing you more.

End Of Days 2003: The End is Nigh!

In a couple of days we will be saying goodbye to 2014 and ringing in the New Year 2015.  Simple math should show you that if you are still running Windows Server 2003, it is long since time to upgrade.  However here’s more:

When I was a Microsoft MVP, and then when I was a Virtual Technical Evangelist with Microsoft Canada, you might remember my tweeting the countdown to #EndOfDaysXP.  While we had some pushback from people who were not going to migrate, I think we were all thrilled by the positive response and the overwhelming success we had in getting people migrated onto either Windows 8, or at least Windows 7.  We did this not only by tweeting, but also with blog articles and in-person events (including a number of national tours) helping people understand a) the benefits of the modern operating system, and b) how to plan for and implement a deployment solution that would facilitate the transition.  All of us who were on the team during those days – Pierre, Anthony, Damir, Ruth, and I – were thrilled by your response.

Shortly after I left Microsoft Canada, I started hearing from people that I should begin a countdown to #EndOfDaysW2K3.  Of course, Windows Server 2003 was over a decade old, and while it would outlast Windows XP, support for that hugely popular platform would end on July 14th, 2015 (I have long wondered if it was a coincidence that it would end on Bastille Day).  Depending on when you read this article it might be different, but as of right now the countdown is around 197 days.  You can keep track yourself by checking out the website here.

It should be said that with Windows 7 there was an #EndOfDaysXP Countdown Gadget for the desktop, and when I migrated to Windows 8 I used a third party app that sat in my Start Menu.  One friend suggested I create a PowerShell script, but that was not necessary.  I don’t remember exactly which countdown timer I used, but it would work just as well for Windows Server 2003 – just enter the date you are counting down to, and it tells you every day how much time is left.

The point is, while I think that migrating off of Server 2003 is important, it was not at that point (nor is it now) an endeavour that I wanted to take on.  To put things in perspective, I was nearing the end of a 1,400 day countdown during which I tweeted almost every day.  I was no longer an Evangelist, and I was burnt out.

Despite what you may have heard, I am still happy to help the Evangelism Team at Microsoft Canada (although I think they go by a different name now).  So when I got an e-mail on the subject from Pierre Roman, I felt it important enough to share with you.  As such, here is the gist of that e-mail:

1) On July 14, 2015 support for Windows Server 2003 will come to an end.  It is vital that companies be aware of this, as there are serious dangers inherent in running unsupported platforms in the datacenter, especially in production.  As of that date there will be no more support and no more security updates.

2) The CanITPro team has written (or re-posted) several articles that will help you understand how to migrate off your legacy servers onto a modern Server OS platform, including:

3) The Microsoft Virtual Academy (www.microsoftvirtualacademy.com) also has great educational resources to help you modernize your infrastructure and prepare for Windows Server 2003 End of Support, including:

4) Independent researchers have come to the same conclusion (IDC Whitepaper: Why You Should Get Current).

5) Even though time is running out, the Evangelism team is there to help you.  You can e-mail them at cdn-itpro-feedback@microsoft.com if you have any questions or concerns surrounding Windows Server 2003 End of Support.

Of course, these are all from them.  If you want my help, just reach out to me and if I can, I will be glad to help!  (Of course, as I am no longer with Microsoft or a Microsoft MVP, there might be a cost associated with engaging me.)

Good luck, and all the best in 2015!

Do IT Remotely

A few days ago I was in with my boss and he asked me to perform a particular bit of maintenance during off-hours.  ‘Just come into the office tonight after Taekwondo and do it… it shouldn’t take you very long.’  He was right, and I almost did… and then I remembered that in 2014 there is seldom a reason to have to do anything on-site.  So after Taekwondo I went home, showered, then sat down at my computer in my pajamas for a couple of hours and did what I had to do.  No sweat.

Then one morning this week he asked me to make a particular change to all of the servers in a particular group, and report back that it was done.  No problem, I thought… I can do that from my desk using PowerShell.

The change was simple… set the Regional Settings to Canada (the default is, of course, United States). No problem, the PowerShell cmdlet is Set-Culture… so against the local computer I ran:

Set-Culture en-CA

Success.  I then started to run it against other servers using:

Set-Culture en-CA –ComputerName <ServerName>


Uhh… who woulda thunk it?  The Set-Culture cmdlet does not support the –ComputerName parameter.  Crap.  Does that mean I have to actually log on to every one of my servers manually to do it?

No.  Fortunately the guys who wrote PowerShell knew that some of us would want to run legacy commands across systems, and gave us a way to do it anyways. 

Invoke-Command –ComputerName Oak-FS-03 –ScriptBlock {Set-Culture en-ca}

I suspect the original intent was to use it to run old DOS commands, but it works for PowerShell cmdlets too.

So here you go… Invoke-Command allows you to run the –ScriptBlock against a remote server, whether that is PowerShell or not.
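And since Invoke-Command accepts a list of computer names, the whole job for a group of servers becomes a one-liner… something like this, with my hypothetical server names:

PS C:\> Invoke-Command -ComputerName Server1,Server2,Server3 -ScriptBlock {Set-Culture en-CA}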

It should be noted, by the way, that Windows Server does not by default allow scripts to be run against it remotely… you have to go into Server Manager and enable Remote management. 


Of course, you could also do it in PowerShell… simply run the cmdlet:

Enable-PSRemoting –Force

Of course, you cannot run that one remotely… that would defeat the point of the security!

So go forth and be lazier than you were… no more logging onto every machine individually for you!

Expand your knowledge on Windows Server 2012!

Okay, we know that you are probably upset that Windows Small Business Server is being retired.  Fortunately Windows Server 2012 R2 will do you well… but do you know everything you will ever need to know about Windows Server 2012 R2 for the SMB space?  Probably not… but that’s okay, because we are here to help!  Microsoft Canada is offering a free webinar with a colleague of mine that will really help.

Join Sharon Bennett, Microsoft’s SMB Technology Advisor, to learn about the key benefits of Windows Server 2012.  Topics include:

  • How to upgrade from Windows Server 2003 to Windows Server 2012
  • SBS migration path
  • ROK – Reseller Option Kit
  • CALs – Client Access Licenses

Register early as spots are limited. You will also have a chance to receive an exciting giveaway during the webinar!

Date: Feb 24, 2014

Time: 2-3pm EST

Register here: https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032577602&Culture=en-CA&community=0

Another tough exam…

As a subject matter expert (SME) on virtualization, I was neither excited nor intimidated when Microsoft announced their new exam, 74-409: Server Virtualization with Windows Server Hyper-V and System Center.  Unlike many previous exams I did not rush out to be the first to take it, nor was I going to wait forever.  I actually thought about sitting the exam in Japan in December, but since I had trouble registering there and then got busy, I simply decided to use my visit to Canada to schedule the exam.

This is not the first exam that I have gone into without so much as a glance at the Overview or the Skills Measured section of the exam page on the Internet.  I did not do any preparation whatsoever for the exam… as you may know I have spent much of the last five years living and breathing virtualization.  This attitude very nearly came back to bite me in the exam room at the Learning Academy in Hamilton, Ontario Wednesday morning.

Having taught every Microsoft server virtualization course ever produced (and having written or tech-reviewed many of them) I should have known better.  Virtualization is more than installing Hyper-V.  It’s more than just System Center Virtual Machine Manager (VMM) and Operations Manager (OpsMgr).  It is the entire Private Cloud strategy… and if you plan to sit this exam you had better have more than a passing understanding of System Center Service Manager (ServMgr), Data Protection Manager (DPM), and Orchestrator.  Oh, and your knowledge should extend beyond more than one simple Hyper-V host.

I have long professed to my students that while DPM is Microsoft’s disaster recovery solution, when it comes down to it you should just make sure that your backup solution does everything that you need, and make sure to test it.  While I stand behind that statement for production environments, it does not hold water when it comes to Microsoft certification exams.  When two of the first few questions were on DPM I did a little silent gulp to myself… maybe I should have prepared a little better for this.

I do not use Service Manager… It’s not that I wouldn’t – I have a lot of good things to say about it.  Heck, I even installed it as recently as yesterday – but I have not used it beyond a passing glance.  The same used to be true of System Center Orchestrator, but over the last year that has changed a lot… I have integrated it into my courseware, and I have spent some time learning it and using it in production environments for repetitive tasks.  While I am certainly not an expert in it, I am at least more than just familiar with it.  That familiarity may have helped me on one exam question.  Had I taken the time to review the exam page on the Microsoft Learning Experience website, I would have known that the word Orchestrator does not appear anywhere on the page.

Here’s the problem with Microsoft exams… especially the newer ones that do not simply cover a product, but an entire solution across multiple suites.  Very few of us will use and know every aspect covered on the exam.  That is why I have always professed that no matter how familiar you may be with the primary technology covered, you should always review the exam page and fill in your knowledge gaps with the proper studying.  You should even spend a few hours reviewing the material that you are pretty sure you do know.  As I told my teenaged son when discussing his exams, rarely will you have easy exams… if you feel it was easy it just means you were sufficiently prepared.  Five questions into today’s exam I regretted my blasé attitude towards it – I may be a virtualization expert, but I was not adequately prepared.

As I went through the exam I started to get into a groove… while there are some aspects of Hyper-V that I have not implemented, those are few and far between.  The questions about VHDX files, Failover Clustering, Shared VHDX, Generation 2 VMs, and so many more came around and seemed almost too easy, but like I told my son, it just means I am familiar with the material.  There were one or two questions which I considered to be very poorly worded, but I reread the questions and the answers and gave my best answer based on my understanding of them.

I have often described the time between pressing ‘End Exam’ and the appearance of the Results screen to be an extended period of excruciating forced lessons in patience.  That was not the case today – I was surprised that the screen came up pretty quickly.  While I certainly did not ace the exam, I did pass, and not with the bare minimum score.   It was certainly a phew moment for a guy who considers himself pretty smart in virtualization.

Now here’s the question… is the exam a really tough one, or was I simply not prepared and thus considered it tough?  And frankly, how tough could it have been if I didn’t prepare, and passed anyways?  I suppose that makes two questions.  The answer to both is that while I did not prepare for the exam, I am considered by many (including Microsoft) a SME on Hyper-V and System Center.  I can say with authority that it was a difficult exam.  That then leads to the next question: is it too tough?  While I did give that some thought as I left the exam (my first words to the proctor were ‘Wow, that was a tough exam!’), I do not think it is unreasonably so.  It will require a lot of preparation – not simply watching the MVA Jump Start videos (which are, by the way, excellent resources, and should be considered required watching for anyone planning to sit the exam).  You will need to build your own environment, do a lot of reading and research, and possibly more.

If you do plan to sit this exam, make sure you visit the exam page first by clicking here.  Make sure you expand and review the Overview and Skills Measured sections.  If you review the Preparation Materials section it will refer you to a five day course that is releasing next week from Microsoft Learning Experience – 20409A: Server Virtualization with Windows Server Hyper-V and System Center (5 Days).  I am proud to say that I was involved with the creation of that course, and that it will help you immensely, not only with the exam but with your real-world experience.

Incidentally, passing the exam gives you the following cert: Microsoft Certified Specialist: Server Virtualization with Hyper-V and System Center.

Good luck, and go get em!