Windows Server 2016: A pet peeve

Over the next few weeks, as I do my first production infrastructure implementation based on Windows Server 2016 and System Center 2016, I am sure this list will grow longer.  In the meantime, I have uncovered my first pet peeve in the new version.

Don’t get me wrong, overall I like Server 2016… but finding out that it is no longer possible to install Windows Server with a GUI (Graphical User Interface) and then later uninstall the GUI (see article for Windows Server 2012) is fairly annoying.

Throughout the launch of Windows Server 2012 I was with the Evangelism Team at Microsoft Canada and I traveled the country – first for the launch events, and then evangelizing and teaching that platform.  I spent a lot of time talking about Server Core because of the benefits for security, as well as for the reduced resource requirements (which, in a virtualized infrastructure, can be staggering).

Of course, Server Core looks a lot like where we started out… if you were a server administrator back in the 1980s and most of the 1990s, you were using command-line tools to do your job.  However, that was a long time ago, and the vast majority of admins today were not admins back then.  So I was able to discuss a compromise… install Windows Server with the GUI, and when you were done doing whatever it was you needed the GUI for (or thought you did), you could uninstall it… or at the very least, switch to MinShell.

I showed up at my client site this week and was handed a series of brand new servers on which to work.  They all had the GUI installed.  So I went to work, and typed in that familiar PowerShell cmdlet to remove the GUI.  I was greeted by that too-familiar red text which meant I had done something wrong.  I will spare you the boring details, and after several minutes of research I discovered that Microsoft had removed the ability to remove the GUI in Windows Server 2016.

I understand that the product team has to make difficult decisions when developing the server, but this was one that I wish they had not made.  Confirmation comes directly from the product group in this article, in which they write:

Unlike some previous releases of Windows Server, you cannot convert between Server Core and Server with Desktop Experience after installation. If you install Server Core and later decide to use Server with Desktop Experience, you should do a fresh installation.

I wish it weren’t so, but it is.  Once you install the GUI you are now stuck with it… likewise, if you opted for Server Core when you first installed, you are committed as well.


Scheduling Server Restarts

If you manage servers you have likely come to a point where you finished doing work and got a prompt ‘Your server needs to reboot.  Reboot now?’  Well you can’t reboot now… not during business hours.  I guess you’ll have to come back tonight… or this weekend, right?

Wrong.  Scheduling a reboot is actually pretty easy in Windows.  Try this:

  1. Open Task Scheduler (taskschd.msc).
  2. In the Actions pane click Create Basic Task…
  3. Name the task accordingly… Reboot System, for example.
  4. On the Task Trigger page select the One Time radio button.
  5. On the One Time page enter the date and time when you want the server to reboot.
  6. On the Action page select Start a program.
  7. On the Start a Program page enter the program name shutdown.exe.  In the Add arguments box enter /f /r /t 0.  This will force running programs to close, restart the server (instead of just shutting it down), and set the delay to 0 seconds.
  8. Once you have done this your server will reboot at the precise time you want it to, and will come back up.
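If you prefer the command line, schtasks.exe can create the same one-shot task.  Since I am weaning myself off screen shots, here is a small Python sketch that simply builds the command string (the task name and date are examples, and the /SD date format depends on your system's locale settings, so check it before trusting it):

```python
from datetime import datetime

def schtasks_once(task_name, when):
    # Build a one-shot schtasks.exe command that runs the same
    # shutdown.exe /f /r /t 0 action the wizard steps above configure.
    return (
        'schtasks /Create /TN "{name}" /SC ONCE /SD {sd} /ST {st} '
        '/TR "shutdown.exe /f /r /t 0"'
    ).format(
        name=task_name,
        sd=when.strftime("%m/%d/%Y"),  # reboot date (locale-dependent format)
        st=when.strftime("%H:%M"),     # 24-hour start time
    )

cmd = schtasks_once("Reboot System", datetime(2017, 4, 14, 20, 45))
print(cmd)
```

Paste the resulting line into an elevated command prompt on the server and you are done.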

**NOTE: Don’t forget to check!  It is not unheard of in this world for servers to go down and not come back up as they are supposed to.

Do it in PowerShell!

Using PowerShell to script this will allow you not only to save the script, but also to run it on remote servers.  From Justin Rich’s blog article I found this script:

Register-ScheduledJob -Name systemReboot -ScriptBlock {
    Restart-Computer -ComputerName $server -Force -Wait
    Send-MailMessage -From -To -Subject "Rebooted" -SmtpServer
} -Trigger (New-JobTrigger -At "04/14/2017 8:45pm" -Once) -ScheduledJobOption (New-ScheduledJobOption -RunElevated) -Credential (Get-Credential)

Note that $server, as well as the -From, -To, and -SmtpServer values, need to be filled in for your environment.


Have fun!

Remotely Enable RDP

Like most IT Managers I manage myriad servers, most of which are both remote and virtual.  So when I configure them initially I make sure that I can manage them remotely… including in most cases the ability to connect via RDP (Remote Desktop).

But what happens if you have a server that you need to connect to, but that does not have RDP enabled?  Using PowerShell it is rather simple to enable the RDP feature remotely:

Enter-PSSession -ComputerName <ServerName> -Credential domain\username
Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Terminal Server' -Name "fDenyTSConnections" -Value 0
Enable-NetFirewallRule -DisplayGroup "Remote Desktop"
Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -Name "UserAuthentication" -Value 1

That should get you going.  Good luck!

Since When…?

Those of us who have been in the IT industry for a while remember the heady days of never having to reboot a server… otherwise known as ‘The days before Windows Server.’  Those days are long gone, and even non-Windows servers need to be patched and restarted.

But how do you know when it last happened?  If you have a proper management and monitoring infrastructure then you can simply pull up a report… but many smaller companies do not have that, and even in larger environments you may want to figure out up-time without having to go through the entire rigmarole of pulling up your reports. So here it is:

  1. Open a Command Prompt
  2. Type in net statistics server

There will be a line that says Statistics since m/dd/yyyy… That is when your server last rebooted.

If you want to shorten it, you can also just type Net Stats SRV.  It provides the same results.
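If you ever want to grab that timestamp programmatically (say, across output you have collected from a number of machines), the parsing is trivial.  Here is a sketch in Python, run against a trimmed-down, illustrative sample of the command's output (not a verbatim capture):

```python
def stats_since(net_stats_output):
    # Find the "Statistics since ..." line and return just the timestamp.
    for line in net_stats_output.splitlines():
        line = line.strip()
        if line.lower().startswith("statistics since"):
            return line[len("Statistics since"):].strip()
    return None

# Abbreviated, illustrative output of 'net statistics server':
sample = """Server Statistics for \\\\OAK-MGT-01

Statistics since 4/14/2017 8:45:21 PM

Sessions accepted                  1
"""
last_boot = stats_since(sample)
```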


Incidentally, while the command specifically states Server, it works for workstations too.

…And now you know.

Server Core on VMware

When I was a Virtual Technical Evangelist for Microsoft Canada I spent a lot of time telling you why you should use Server Core… especially if you were on Hyper-V.  Why?  You save resources.

It is now over two years since I turned in my Purple Badge, and I still think Server Core rocks.  In fact, when Windows Server 2016 comes out I will probably spend a lot of time telling you about the new Nano Server option that they are including in that version.  More on that to come.

Of course, I still like Hyper-V, but as an independent consultant I recognize (as I did quietly when I was with the Big Blue Machine) that the vast majority of the world is still running VMware for their enterprise-level server virtualization needs.  That does not change my opinion of Server Core… it still rocks, even on VMware.

Of course, in order to get the full benefits of the virtualized environment, a VMware machine requires the installation of the VMware Tools (as Hyper-V requires the installation of Integration Services).  With a Server with a GUI that is easy to do… but since Server Core is missing many of the hooks of the GUI, it has to be done from the command line.  Here’s how:

1. As you would with any other server, click Install VMware Tools


2. Connect to and log on to the virtual machine.  You will have to do this with Administrator credentials.

3. Navigate to the mounted ISO (if you only have a single hard drive attached it will usually be D:).

4. Type in the following command: setup64.exe /S /v "/qn reboot=Y"


Once you have done this, the VMware Tools will install, and your server will reboot.  Nothing to it!

SQL Server: How to tame the beast!

One of the benefits of virtualization is that you can segregate your SQL Servers from your other workloads.  Why?  Because if you don’t, Microsoft SQL Server will hoard every last bit of resources on your machine, leaving scant crumbs for other workloads.


Seriously… when you start Microsoft SQL Server you will immediately see your memory usage jump through… or more accurately, to the roof.  That is because SQL Server is actually designed to take up all of your system’s memory.  Actually that is not entirely true… out of the box, Microsoft SQL Server is configured to use up to 2TB of RAM, which is in all likelihood a lot more memory than your computer actually has.

So assuming you have been listening to me for all of these years, you are not installing anything else on the same computer as your SQL Server.  You are also making sure that the virtual machine that your SQL Server is installed on (remember I told you to make sure to virtualize all of your workloads?) has its memory capped (Hyper-V sets the default Maximum RAM to 64GB).  You are doing everything right… so why is SQL performing slowly?

It’s simple really… your computer does not have 2TB of RAM to give SQL Server… and even if it did have 2TB of RAM, the operating system (remember Windows?) still needs some of it.  So the fact that SQL wants more than it can have can make it a little… grumpy.  Imagine a cranky child throwing a tantrum because he can’t have dessert.

Fortunately there is an easy fix to this one (unlike the cranky child).  What we are going to do is limit the amount of RAM that SQL actually thinks it wants… and when it has everything that it wants, it will stop misbehaving.

1) Determine how much RAM the server on which SQL Server is installed has.

2) Open Microsoft SQL Server Management Studio with administrative credentials.

3) Connect to the database engine (if you have multiple SQL instances on the same server, see the note below).

4) In the navigation pane right-click on the actual SQL Server (the topmost item) and click Properties

5) In the Server Properties page navigate to Memory

6) Figure out how much 90% of your server’s RAM would be, in megabytes.  For example:

1GB: 1024 × 0.90 = 921.6 (round to 922)

8GB: 8192 × 0.90 = 7372.8 (round to 7373)

7) In the Maximum server memory (in MB) field type that number, then click OK.

That’s it!
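To make the arithmetic in step 6 concrete, here it is as a quick sketch (Python, though any calculator will do).  Remember, the 90% split is this article's rule of thumb, not a SQL Server default:

```python
def max_server_memory_mb(total_ram_gb, os_reserve=0.10):
    # Leave ~10% of physical RAM for the OS; SQL Server gets the rest,
    # expressed in whole megabytes for the Maximum server memory field.
    return round(total_ram_gb * 1024 * (1 - os_reserve))

one_gb = max_server_memory_mb(1)    # 1024 * 0.90 = 921.6, rounds to 922
eight_gb = max_server_memory_mb(8)  # 8192 * 0.90 = 7372.8, rounds to 7373
```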

**Note: The math we are using here allocates 90% of the total RAM to SQL Server.  In the event that you have multiple SQL Server instances running on the same box you will have to do a bit of calculating to determine how much each instance should use… and that can be a challenge.

If you only have the one database engine on your box, you should immediately notice marked improvements.  This breathing room does not mean that it is now time to pour more workloads onto the server… only that your SQL Server should be running a little better!

UNC Path Nightmare

Anyone who has taken a basic networking course will understand that UNC (Universal Naming Convention) paths are one of the common ways we in IT access file shares across our local networks.  They will usually look like this: \\oak-mgt-01\Sharename.  Of course, you can see all of the shares on a particular server by just entering the server name (\\oak-mgt-01).  Once upon a time Windows Explorer would show you that path in the address bar, but in this era of simplification of everything (i.e., dumbing it down) it makes it prettier by showing > Network > oak-mgt-01 > Sharename.  This changes nothing; it is the same under the hood.

Users are not the only ones who use these UNC paths.  In fact, it is our servers and applications that use them far more frequently than we do, because under the hood that is what they use to connect to any network resource.
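Since so much depends on these paths, it is worth being precise about their anatomy: two backslashes, a server name, then a share name.  Here is a quick sketch (Python, purely for illustration) of pulling a UNC path apart:

```python
def split_unc(path):
    # Strip the leading backslashes; the first element is the server name
    # and the second (if present) is the share name.
    parts = path.lstrip("\\").split("\\")
    server = parts[0]
    share = parts[1] if len(parts) > 1 else None
    return server, share

server, share = split_unc(r"\\oak-mgt-01\Sharename")
```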

But what happens when UNC paths stop working?

A client called me recently to tell me that none of their UNC paths were working, and because of it their production applications were down.  I checked, and sure enough a particular server could access the Internet just fine, and it could ping every internal resource it wanted, but when you tried to navigate to any UNC path, the result was a very unfriendly and generic one:


Not only was it not working, it was not even giving me a descriptive error code.  I started down a troubleshooting rabbit hole that would haunt me for hours before I found the solution.

The first thing we confirmed was that while we were pretty sure this was a networking issue, it was contained within the specific server.  How did we determine this?  We discovered that we got the same result when we tried to navigate to \\localhost.  Localhost is the trusty loopback address that exists in every network device, and is the equivalent of \\… which of course we tried as well.  Because we know that localhost lives within the server, we knew that it was not an external issue.

Before we went out to the Internet for other ideas, we tried all of the obvious things.  We changed the NIC, we verified DNS, WINS, and even NetBIOS over TCP/IP.  We reset the TCP/IP stack (netsh int ip reset c:\resetlog.txt).  Nothing doing.

We went out to the Internet and followed the advice of several people who had been in our spot before.  We uninstalled and then reinstalled several networking components.  We deleted phantom devices, we ran network reset tools.  No joy.

When I came across Rick Strahl’s Web Log I thought I was close… he had experienced exactly the same symptoms, and after reading his article UNC Drive Mapping Failures: Network name cannot be found I was hopeful that when I re-ordered the Network Provider Order in the registry (HKLM\SYSTEM\CurrentControlSet\Control\NetworkProvider\Order) I would be home free.  Unfortunately I was wrong… but I was in the right place.

When Rick’s solution didn’t work, I was disheartened.  I was about to close it out and look at the next ‘solution’ on-line.  My instinct however told me to look again… to look closer.


There was a typo… but you have to really know what you are looking at to see it.  In fact, even if you really know what you are looking at, it is easy enough to miss.  Take a look… do you see it?  Look closer… The entry LanmanWorkstation is right there, clear as day, right?

Nobody would blame you for not noticing that there is an S at the end of the string… because S is so common at the end of words; it just makes them plural, right?  Well, computers, and especially the Windows Registry, do not know English grammar; they know binary… and the difference between LanmanWorkstation and LanmanWorkstations is the difference between working… and not working.

When I made the change the effect was instant: no reboot was required, and the server application started working immediately.  A big sigh of relief permeated the office.
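In hindsight, a sanity check against the expected provider names would have caught this in seconds.  Here is a sketch of that idea (the provider list shown reflects the defaults I have seen on recent builds, so verify it against your own registry before trusting it):

```python
EXPECTED_PROVIDERS = {"RDPNP", "LanmanWorkstation", "webclient"}  # verify against your own build

def suspect_entries(provider_order, known=EXPECTED_PROVIDERS):
    # Flag any comma-separated ProviderOrder entry that is not an exact
    # match for a known provider name.
    return [p for p in provider_order.split(",") if p.strip() not in known]

bad = suspect_entries("RDPNP,LanmanWorkstations,webclient")  # the typo from this story
```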

The server in question is one that several people were working on when it stopped working, and nobody is entirely sure how it happened… was it human error, or did a rogue process cause it?  We will look in our logs and figure that out later.  For the moment though, our UNC paths are back, and my client is back at work.

OEM Servers: Myths vs. Realities

In a recent conversation I realized that there are still a lot of misconceptions about OEM (Original Equipment Manufacturer) operating system rights with regard to Windows Server. While I am not here to say who is right and who is wrong (whether one should or should not buy OEM operating systems), I still think it is important to understand the facts.

Myth #1: OEM licensing is limited, and cannot be upgraded.

An OEM license is indeed tied to the physical hardware for which it was purchased. This is a distinct disadvantage to purchasing Volume Licenses (VLs). However when you buy an OEM operating system you have thirty (30) days to add Software Assurance to it. Any license with Software Assurance attached to it can be upgraded as new versions are released. However there is one important bit to understand… when decommissioning that server, the SA can be detached from the license and attached to another… but the OS itself cannot.


Myth #2: Virtualization rights are unclear on OEM licenses.

I hear this from people all the time, and although I have tried to explain it to many of them, sometimes I simply have to shrug my shoulders and walk away from it. There is nothing murky or unclear about virtualization licensing. Whether your host (hypervisor) is an OEM license, VL license, or FPP (Full Package Product) license, your virtualization rights are the same, and they depend not on how you bought the license, but what tier you bought (Standard vs. Datacenter).

The OEM license is applied to the host, and must be tied to that host. However the guest VMs (2 on Standard, unlimited on Datacenter) do not have any restrictions. Like any guest VM on any other license, they can be migrated to any other host, as long as the destination host has allowance – so if the destination host is Windows Server Standard Edition, it cannot host a third guest VM, but if the destination host is Windows Server Datacenter Edition, the only limitation is based on the available resources (CPUs, RAM, storage).


Myth #3: There are things you can do with OEM Editions that you cannot do with VL Editions.

While this is a less common complaint, it is still out there. I am told (and I have not really looked into this) that with Windows Server OEM versions (let’s take the HP ROK as an example) you can modify the image to show your logo during the boot-up process. While this is true, I have two points to make:

1) If you know what you are doing you can customize the boot process of any Windows Server installation, regardless of the edition or version.

2) Folks, it’s a server… how often are you rebooting it? Most of my servers (especially virtualization hosts) don’t reboot for months at a time. When they do get rebooted, it either happens at night (when I have scheduled patches) or remotely, when I am not sitting and watching the POST process. I can’t imagine there are too many customers who sit and watch their servers either…


Myth #4: When a reseller consultant sells OEM licenses there is more room for profit.

I am usually very saddened to hear this, but that is mostly because I am not the sort of consultant who makes a lot of money off products; I would rather make my money off my time, and that is what I do. I don’t like hearing that there are resellers who buy a cheaper (and less versatile) option but resell it for the same price as the full version. Aside from the previous point also applying, I am always certain that my customer will find out and call me on it, and I will lose their trust. It is just not worth it to me. That doesn’t mean it isn’t a legitimate issue for some.



There is nothing wrong with OEM licenses, and they are certainly less expensive than other ways of purchasing the operating system. They are just as versatile as non-OEM licenses, but not especially more versatile. If you replace (not upgrade or add more) servers often then they are likely not a good option for you, especially since they don’t add value to the physical server if you resell it. However if you keep your servers for more than a couple of years (as most companies will) then the cost savings might make it worthwhile, and if you do the cost benefit comparison, you might just come out ahead… and that’s CONFIRMED!


Server Core: Every bit the Full Server as the GUI is!

Microsoft introduced Server Core with Windows Server 2008, which means that it was the same kernel as Windows Vista.  Now, nobody is going to stand up and sing the praises of Vista, but Server 2008 was a very solid OS.

You may (or may not) remember that there was a campaign around Vista called ‘The WOW starts NOW!’ Catchy, huh?  Well, because Server Core was literally taking the ‘bling’ out of Windows Server, there was an internal joke at the time that ‘The Wow STOPS Now.’

While Server Core never looked very exciting for end users, for IT Admins, and especially those who were building virtualized environments, Server Core was a godsend. Let’s go through this one more time to demonstrate why:

  • The Windows Graphical User Interface (GUI), which is the difference between Server Core and not, takes resources.  How much?  Well, depending on the server it might be as much as 3-4GB on the hard drive and as much as 350MB of RAM.  Neither of these is a big deal in a world where servers have 128GB of RAM and terabytes of storage on them, right?  Well on a virtualization host that may have on average 100 virtual machines running simultaneously, that translates to 400GB of storage and a ridiculous 35GB of RAM… Ouch.
  • Every component that is installed in Windows has to be patched from time to time.  The fewer components you have installed, the less patching that has to be done.
  • The more you have installed in Windows the larger your attack surface.  By removing components, you can minimize this, making your computer more secure.
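The first bullet's math scales brutally.  Using the rough per-VM figures from that bullet (about 4GB of disk and 350MB of RAM per GUI), the overhead across a virtualization host looks like this:

```python
def gui_overhead(vm_count, disk_gb_per_vm=4, ram_mb_per_vm=350):
    # Total disk (GB) and RAM (MB) consumed by the GUI across all VMs,
    # using the rough per-VM estimates from the bullet above.
    return vm_count * disk_gb_per_vm, vm_count * ram_mb_per_vm

disk_gb, ram_mb = gui_overhead(100)  # 400 GB of disk, 35000 MB (~35 GB) of RAM
```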

In Windows Server 2008 here’s what we saw when we initiated the installation… a menu with all three editions (Standard, Enterprise, Datacenter) as a Full Installation, and the same three editions as a Server Core Installation.

I have been singing the praises of Server Core for as long as it has been available, but often to deaf ears.  I always assumed this was because most IT Admins liked the GUI.  Recently I was in a presentation given by Jeffrey Snover, who gave me another perspective on it… the terminology in Server 2008 was part of the problem.  You see, people look at the options ‘Full Server’ versus ‘Server Core’ and they immediately think ‘power & performance.’  A Full Server must do more than a Server Core server… why?  It is FULL!

Of course, in Server 2008 it didn’t help that Server Core actually was a hobbled version of Server… there were a few roles that worked on it, but not too many.

As with so many Microsoft products, that got better in 2008 R2, and even better in Server 2012 and 2012 R2.  Today you would be amazed at what can run on Server Core… in fact, nearly everything that you do on a server can run on Server Core.  So there is little wonder that Microsoft made a change to the terms…

No longer is it a question of FULL versus CORE… now our options are Server Core Installation and Server with a GUI.

There are two differences to notice in this screen shot… the first is that there are only four options because Microsoft eliminated the Enterprise SKU.  The second is that the default option (a la next – next – next installations) is Server Core.  While some admins might say ‘Yeah I wasn’t paying attention so I ended up with Server Core and had to reinstall,’ the reality is that most of us, once we understand the benefits and the manageability options, will want to install Server Core instead of the GUI servers.

Of course, there are admins who will still be afraid of the command line… but because most of the ongoing administration of our servers (the things we usually do with MMC consoles) can be done remotely, Server Core, or at the very least MinShell, will make our lives easier.  MinShell removes most of the GUI, but leaves the MMC consoles.

But what if I wanted to use the GUI to configure the system, and then get rid of it completely?  We can definitely do that.  One method of doing it is to use the Server Manager’s Remove Roles and Features option.  (The GUI is a feature, and is listed under User Interfaces and Infrastructure – Server Graphical Shell)  This will uninstall the components and save the RAM… but it will not free up your hard disk space.  To do that, use the following PowerShell cmdlet:

Uninstall-WindowsFeature -Name Server-Gui-Mgmt-Infra,Server-Gui-Shell -ComputerName <Name> -Remove -Restart

The -ComputerName option allows you to do this to remote computers, and the -Remove option actually removes the bits from the hard drive.

What can you do with Server Core? I won’t say everything… but nearly so.  It is no longer just for your Hyper-V hosts… it is your domain controllers, SQL Servers, web servers, and so much more… as long as you are able to learn a little bit of PowerShell, and how to enable remote management on your servers.

Now go forward and save your resources!

Let’s Spread the Action Around… With NLB! (Part 1)

**AUTHOR’S NOTE: I have written hundreds of articles on this blog over the past decade.  Until recently I spent a lot of time taking screen shots of GUI consoles for my how-to articles.  For the time being, as I try to force myself into the habit, I will be using Windows PowerShell as much as possible, and thus will not be taking screen shots, but instead giving you the cmdlets that I use.  I hope this helps you as much as it is helping me! –MDG

I have written at length about Failover Clusters for Active-Passive services.  Let’s move away from that for a moment to discuss Network Load Balancing (NLB) – the tool that we can use to create Active-Active clusters for web sites (and other static-information services).

While NLB does, after a fashion, cluster services, it is not a failover service… it is in fact a completely different service.  For my use case, it is usually installed on servers running IIS.  Start by installing it:

PS C:\> Install-WindowsFeature NLB -ComputerName Server1

Of course, having a single server NLB cluster is like juggling one ball… not very impressive at all.  So we are going to perform the same function for at least a couple of hosts…

PS C:\> Install-WindowsFeature NLB -ComputerName Server1,Server2,Server3

By the way, notice that I am referring to the servers as hosts, and not nodes.  Even the terminology is different from Failover Clusters.  This is going to get confusing at a certain point, because some of the PowerShell cmdlets and switches will refer to nodes.

Now that the feature is installed on all of our servers, we are almost ready to create our NLB Cluster.  Before we do, we have to determine the following:

  • Ethernet Adapter name
  • Static IP Address to be assigned to the Cluster

You are on your own for the IP address… it is up to you to pick one and to make sure it doesn’t conflict with another server or DHCP Server.

However with regard to the Ethernet Adapter name, there’s a cmdlet for that:

PS C:\> Invoke-Command -ComputerName Server1 -ScriptBlock {Get-NlbClusterNodeNetworkInterface}

Notice that I am only doing this, for the time being, against one server.  That is because I am going to create the cluster on a single server, then add my hosts to it afterward.

So now that we have the information we need, let’s go ahead and create an NLB Cluster named WebCluster, on Server1, with the interface named Ethernet 2, and with the static IP address you selected earlier:

PS C:\> New-NlbCluster -HostName Server1 -InterfaceName "Ethernet 2" -ClusterName WebCluster -ClusterPrimaryIP <ClusterIP> -OperationMode Multicast

It will only take a minute, and you will get a response table listing the name, IP Address, Subnet Mask, and Mode of your cluster.

Now that we’ve done that, we can add another host to the NLB Cluster.  We’ll start by checking the NIC name on the second server, then we will add that server to the NLB Cluster:

PS C:\> Invoke-Command -ComputerName Server2 -ScriptBlock {Get-NlbClusterNodeNetworkInterface}

PS C:\> Get-NlbCluster -HostName Server1 | Add-NlbClusterNode -NewNodeName Server2 -NewNodeInterface "Ethernet"

Notice that in the first part of the script we are retrieving the NLB Cluster by the host name (Server1), and not by the cluster name.

This part may take a few minutes… Don’t worry, it will work.  When it is done you will get a response table listing the name, State, and Interface name of the second host.

You can repeat this across as many hosts as you like… For the sake of this series, I will stick to two.

In the next article of the series, we will figure out how to publish our web sites to the NLB Cluster.

Keep Up: How to configure SCOM to monitor the running state of services and restart them when they stop

Windows runs on services.  Don’t believe me?  Open your Services console and count just how many are running at any given time.  Of course, some of them are more important than others… especially when you are talking about servers that are critical to your organization.

A new customer recently called me for a DEAR Call (emergency visit) because their business critical application was not working, and they couldn’t figure it out.  I logged into the server, and at first glance there didn’t appear to be anything wrong on the application server.  However I knew that the application used SQL Server, and I did not see any SQL instances on the machine.  A quick investigation revealed that there was an external SQL Server running on another server, and it only took a few seconds to see why the application was failing.


Very simply put, the service was not started. I selected it, clicked Start the service, and in a few seconds the state changed:


A quick look showed that their business critical application (in this case SharePoint 2010) was working properly again.

My customer, who was thrilled to be back in business, was also angry with me.  ‘We spent tens of thousands of dollars on System Center Operations Manager so that we could monitor our environment, and what good does it do me?  I have to call you in when things stop working!’

Yell as much as you like I told him, but please remember the old truism… if you think it is expensive hiring professionals, try hiring amateurs.  After he had learned about the benefits of implementing a proper monitoring solution he told his IT guy to install it… and that is exactly what he did.

System Center Operations Manager (SCOM) is a monitoring framework, and really quite a good one.  In fact, if Microsoft included the tools within the product itself to monitor every component that it is capable of monitoring, it would have to come in a much bigger box.  Instead, what it gives you is the ability to import or create Management Packs (MPs) to monitor aspects of your IT environment.  It is up to you to then implement those MPs so that SCOM can monitor the many components of your infrastructure… and take the appropriate action when things go wrong.

Of course, there are much more in-depth MPs for monitoring Microsoft SQL Server, but for those IT generalists who do not need the in-depth knowledge of what their SQL is doing, simply knowing that the services are running is often good enough… and monitoring those services is the exact same step you would take to monitor the DNS Server service.

Although the list of steps is long, they are relatively simple, and following them will do exactly what you need.

1) Open the Operations Manager console.

2) In the Operations Manager console open the Authoring context.

3) In the navigation pane expand Management Pack Objects and click on Monitors.


4) Right-click on Monitors and select Create a Monitor – Unit Monitor…

5) At the bottom of the Create a unit monitor window select the Management Pack you are going to save this to.  I never save to the default management packs – create your own, it is safer (and easier to recover when you hork something up).

6) In the Select the type of monitor to create section of the screen expand Windows Services and select Basic Service Monitor.  Click Next.


7) In the General Properties window name your monitor.  Make sure you name it something that you will recognize and remember easily.

8) Under Monitor target click Select… From the list select the target that corresponds to the service you will be monitoring.  Click OK.

9) Back in the General Properties window uncheck the Monitor is enabled checkbox.  Leaving this enabled will try to monitor this service on every server, not just the one where it resides.  Click Next.

10) In the Service Details window click the ellipsis button (…) next to Service Name.

11) In the Select Windows Service window either type the name of the target server, or click the ellipsis button and select the computer from the list.  Then select the service you wish to monitor from the list under Select service.  Click OK.


12) Back in the Service Details window the Service name field should be populated.  Click Next.

13) In the Map monitor conditions to health states window accept the defaults… unless of course you want to make sure that a service is NEVER started, in which case you can change that here.  Click Next.


14) In the Alert settings window select the Generate alerts for this monitor checkbox.  You can also put in a useful description of the alert in the appropriate box.  Click Create.

The saving process may take a minute or two, but when it is done search for it in the Monitors list.

15) Right-click on your custom monitor and select Overrides – Override the Monitor – For a specific object of class: <Name of the product group>


16) In the Select Object window select the service you are monitoring and click OK.

17) In the Override Properties window, under the Override-controlled parameters list, scroll to the parameter named Enabled and make the following changes:

a) Select the Override checkbox.

b) Change the Override Value to True.

c) Click Apply.

d) Click Show Monitor Properties…

18) In the Monitor Properties window click the Diagnostic and Recovery tab.

19) Under Configure recovery tasks click Add… and when it appears click Recovery for critical health state.


20) In the Create Recovery Task Wizard click Run Command and click Next.

21) In the Recovery Task Name and Description window:

a) Enter a Recovery name (Re-Start Service works for me!).

b) Select the checkbox Recalculate monitor state after recovery finishes.

c) Click Next.

22) In the Configure Command Line Execution Settings window enter the following information:

Full path to file: %windir%\System32\Net.exe

Parameters: start <service name>

Working directory: %windir%

Timeout (in seconds): 120

23) Click Create.

24) Close the Monitor Properties window.

25) In the Override Properties window click Apply then OK.
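Once everything is saved, you can confirm the monitor exists from the Operations Manager Shell instead of scrolling through the console.  This is just a sketch: the management server name and monitor display name below are placeholders for whatever you used.

```powershell
# Requires the OperationsManager module, which installs with the SCOM console.
Import-Module OperationsManager

# "SCOM-MS-01" is a hypothetical management server name - use your own.
New-SCOMManagementGroupConnection -ComputerName "SCOM-MS-01"

# Find the custom monitor by the display name you gave it in step 7.
Get-SCOMMonitor | Where-Object { $_.DisplayName -eq "DNS Server Service Monitor" }
```

If the monitor comes back in the results, it was saved to the management group and you are ready to test it.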

The doing is done, but before you pat yourself on the back, you have to test it.  I always recommend running these tests during off-hours for non-redundant servers.

1) Open the services.msc console.

2) Right-click on Services (Local) and click Connect to another computer…

3) Connect to the server where your monitored service is running.

4) Right-click on the service and click Stop.

It may take a couple of minutes, but if you get up and go for a walk, maybe make a cup of coffee or tea… by the time you get back, the service should be restarted.
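If you would rather not open services.msc at all, the same test can be run from PowerShell.  The server and service names below are placeholders… substitute your own:

```powershell
# Stop the monitored service on the remote server to trigger the recovery task.
# "Oak-DC-01" and "DNS" are hypothetical - use your own server and service names.
Get-Service -ComputerName "Oak-DC-01" -Name "DNS" | Stop-Service

# ...go make that cup of coffee, then check whether SCOM restarted it:
Get-Service -ComputerName "Oak-DC-01" -Name "DNS"
```

If the second command shows the service Running again, your monitor and its recovery task are doing their job.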

There seems to be a reality in the world of IT that the more something costs, the less it is likely to do out of the box.  It is great to have a monitoring infrastructure in place, but without configuring it to properly monitor the systems you have, it can be a dangerous tool: you will have a false sense that your systems are protected when they really aren’t.  Make sure that the solution you have is properly configured and tested, so that when something does go wrong you will know about it immediately… otherwise it will just end up costing you more.

End Of Days 2003: The End is Nigh!

In a couple of days we will be saying goodbye to 2014 and ringing in the New Year 2015.  Simple math should show you that if you are still running Windows Server 2003, it is long since time to upgrade.  However, here’s more:

When I was a Microsoft MVP, and then when I was a Virtual Technical Evangelist with Microsoft Canada, you might remember my tweeting the countdown to #EndOfDaysXP.  While we had some pushback from people who were not going to migrate, I think we were all thrilled by the positive response and the overwhelming success we had in getting people migrated onto either Windows 8, or at least Windows 7.  We did this not only by tweeting, but also with blog articles and in-person events (including a number of national tours) helping people understand a) the benefits of the modern operating system, and b) how to plan for and implement a deployment solution that would facilitate the transition.  All of us who were on the team during those days – Pierre, Anthony, Damir, Ruth, and I – were thrilled by your response.

Shortly after I left Microsoft Canada, I started hearing from people that I should begin a countdown to #EndOfDaysW2K3.  Of course, Windows Server 2003 was over a decade old, and while it would outlast Windows XP, support for that hugely popular platform would end on July 14th, 2015 (I have long wondered if it was a coincidence that it would end on Bastille Day).  Depending on when you read this article it might be different, but as of right now the countdown is around 197 days.  You can keep track yourself by checking out the website here.

It should be said that with Windows 7 there was an #EndOfDaysXP Countdown Gadget for the desktop, and when I migrated to Windows 8 I used a third party app that sat in my Start Menu.  One friend suggested I create a PowerShell script, but that was not necessary.  I don’t remember exactly which countdown timer I used, but it would work just as well for Windows Server 2003 – just enter the date you are counting down to, and it tells you every day how much time is left.
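For what it’s worth, that suggested PowerShell script really would have been a one-liner.  A rough sketch:

```powershell
# Days remaining until Windows Server 2003 end of support (July 14, 2015).
$daysLeft = ((Get-Date "2015-07-14") - (Get-Date)).Days
Write-Host "$daysLeft days until #EndOfDaysW2K3!"
```

Swap in any date you like and it works as a general-purpose countdown… which is exactly what those third-party timers were doing anyway.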

The point is, while I think that migrating off of Server 2003 is important, it was not at that point (nor is it now) an endeavour that I wanted to take on.  To put things in perspective, I was nearing the end of a 1,400-day countdown during which I tweeted almost every day.  I was no longer an Evangelist, and I was burnt out.

Despite what you may have heard, I am still happy to help the Evangelism Team at Microsoft Canada (although I think they go by a different name now).  So when I got an e-mail on the subject from Pierre Roman, I felt it important enough to share with you.  As such, here is the gist of that e-mail:

1) On July 14, 2015 support for Windows Server 2003 will come to an end.  It is vital that companies be aware of this, as there are serious dangers inherent in running unsupported platforms in the datacenter, especially in production.  As of that date there will be no more support and no more security updates.

2) The CanITPro team has written (or re-posted) several articles that will help you understand how to migrate off your legacy servers onto a modern Server OS platform, including:

3) The Microsoft Virtual Academy also has great educational resources to help you modernize your infrastructure and prepare for Windows Server 2003 End of Support, including:

4) Independent researchers have come to the same conclusion (IDC Whitepaper: Why You Should Get Current).

5) Even though time is running out, the Evangelism team is there to help you.  You can e-mail them if you have any questions or concerns surrounding Windows Server 2003 End of Support.

Of course, these are all from them.  If you want my help, just reach out to me and if I can, I will be glad to help!  (Of course, as I am no longer with Microsoft or a Microsoft MVP, there might be a cost associated with engaging me.)

Good luck, and all the best in 2015!

Do IT Remotely

A few days ago I was meeting with my boss, and he asked me to perform a particular bit of maintenance during off-hours.  ‘Just come into the office tonight after Taekwondo and do it… it shouldn’t take you very long.’  He was right, and I almost did… and then I remembered that in 2014 there is seldom a reason to have to do anything on-site.  So after Taekwondo I went home, showered, then sat down at my computer in my pajamas for a couple of hours and did what I had to do.  No sweat.

Then one morning this week he asked me to make a particular change to all of the servers in a particular group, and report back that it was done.  No problem, I thought… I can do that from my desk using PowerShell.

The change was simple… set the Regional Settings to Canada (the default is, of course, United States). No problem, the PowerShell cmdlet is Set-Culture… so against the local computer I ran:

Set-Culture en-CA

Success.  I then started to run it against other servers using:

Set-Culture en-CA –ComputerName <ServerName>


Uhh… who woulda thunk it?  The Set-Culture cmdlet does not support the –ComputerName parameter.  Crap.  Does that mean I have to actually log on to every one of my servers manually to do it?

No.  Fortunately the guys who wrote PowerShell knew that some of us would want to run legacy commands across systems, and gave us a way to do it anyway.

Invoke-Command –ComputerName Oak-FS-03 –ScriptBlock {Set-Culture en-CA}

I suspect the original intent was to let us run old DOS commands, but it works for PowerShell cmdlets too.

So here you go… Invoke-Command allows you to run the –ScriptBlock against a remote server, whether that is PowerShell or not.
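Even better, –ComputerName accepts a list, so the whole group of servers can be done in one shot.  The servers.txt file here is hypothetical… just a text file with one server name per line:

```powershell
# Run the same command against every server listed in servers.txt.
# The file path is a placeholder - point it at your own list of server names.
Invoke-Command -ComputerName (Get-Content .\servers.txt) -ScriptBlock { Set-Culture en-CA }
```

One line, the whole group done, and you can report back to the boss without ever leaving your desk.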

It should be noted, by the way, that Windows Server does not allow scripts to be run against it remotely by default… you have to go into Server Manager and enable Remote management.


Of course, you could also do it in PowerShell… simply run the cmdlet:

Enable-PSRemoting –Force

Of course, you cannot run that one remotely… that would defeat the point of the security!

So go forth and be lazier than you were… no more logging onto every machine individually for you!

Windows 8.1 Bits (RTM)!

This is cut and pasted directly from the TechNet blog:

Based on the feedback from you and our partners, we’re pleased to announce that we will be making available our current Windows 8.1 and Windows 8.1 Pro RTM builds (as well as Windows Server 2012 R2 RTM builds) to the developer and IT professional communities via MSDN and TechNet subscriptions. The current Windows 8.1 Enterprise RTM build will be available through MSDN and TechNet for businesses later this month. For developers, we are also making available the Visual Studio 2013 Release Candidate, which you can download here. For more on building and testing apps for Windows 8.1, head on over to today’s blog post from Steve Guggenheimer.

The advantages of selling Office 365 and Windows Server

Many small and midsize businesses today are considering the use of cloud-based software applications for the ease, accessibility, and cost benefits they can offer. At the same time, many still need an on-site platform for a range of needs from hosting applications, to print sharing, to storing sensitive financial data.

As our valued Office 365 partner, we would love to tell you more about how both of these products have enabled many partners to provide valuable and cost-effective solutions to their customers. We will also have Sharon Bennett, a Microsoft Small Business Specialist and Microsoft Certified Trainer, join us to speak about deploying Windows Server 2012 with Office 365 and how you can help grow your business with these products.

Learn about key resources that will enable your organization to deliver these solutions, and about a special offer available to get you selling today!

Join the one hour webinar on September 16th, 2013 from 10:00 AM – 11:00 AM (EST)!