Category Archives: Server

The (Solar) Winds of Change…

I used to love System Center.  Simply put, if you were a systems administrator / engineer / architect, it did… everything.  It monitors, automates, protects, virtualizes, scripts, patches, deploys, and integrates… everything in your environment.  It is, in a word, comprehensive.

It is also big.  There was a time (prior to System Center 2012) when you could pick and choose the components you wanted to buy – if you only wanted monitoring then all you bought was System Center Operations Manager (SCOM).  If all you wanted was virtualization management then all you bought was System Center Virtual Machine Manager (SCVMM).

When Microsoft announced in 2012 that all of the pieces would now be sold as a single package, I thought it was a good decision for Microsoft, but not necessarily a good one for the customer.  Certainly it would increase their market share for components such as System Center Data Protection Manager (DPM) – probably from a 0.1% market share to something somewhat higher – but that was not what customers wanted.  I wanted a reasonably simple monitoring tool that could be deployed (and purchased) independently of everything else; I could then use the backup tool that I wanted, the deployment tools that I wanted, and the anti-malware tools that I wanted.

So when I got an e-mail from a representative of SolarWinds asking if I would try out their product (Server & Application Monitor, or SAM) I decided to give it a try.  After all, I knew SolarWinds by reputation, and due to the non-invasive nature of the tool I could easily deploy it alongside my existing SCOM environment and monitor the same servers without risk.

The Good

The first thing I noticed about SolarWinds was the ease with which it installed.  Compared to SCOM – which was a bit of an ordeal even to simply install (see article) – it was a simple install: it did not take long, and was pretty straight-forward.

While the terminology was a little different than SCOM's, it was easy to understand the differences, and I suspect it would be pretty easy for a junior sysadmin to pick up.  At the top of the Main Settings & Administration page the first option is Discovery Central, which allows SAM to search your entire environment for servers.

The Alerts & Reports option helps you set up your mail account that sends alert & notification e-mails to the admins based on the current environment and issues.  It is just as easy to send these e-mails to individuals as to groups, and configuring what is sent to whom is relatively simple.

Fortunately SAM is completely Active Directory integrated, so I can just authorize my Domain Admins and other groups to access the data they need in SAM, and grant individuals and groups granular permissions to see and/or change what they are allowed to.

The dashboard is easy to read and understand, as well as customize.  I want my graphs to be at the top, and I want to know anything critical up front.  As with any good monitoring tool, Green=Good, Red=Bad.  All of my alerts are hyper-linked so if I see something Red I can just click and go right to it.


Actions, not words… ‘If this happens, then do that’ is a requirement in this day and age.  Of course it is great if my monitoring tool can notify me that a service is down… but how much better if it can also bring that service back up for me at the same time.  That can be as simple or as complicated as you need, but the fact that certain conditions can trigger actions, and not just alerts, is key for me.  This was a simple task in SAM.
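Conceptually, condition-triggered actions are just health checks paired with remediations.  Here is a minimal sketch of the pattern in Python – the service names, the ‘restart’ logic, and the dictionary standing in for real service state are all invented for illustration; SAM does this through its own alert-action configuration, not a script like this:

```python
# Minimal sketch of "if this happens, then do that" monitoring.
# Everything here is hypothetical -- this shows the pattern, not SAM's API.

def run_checks(checks):
    """Run each (name, check, action) triple; return names whose action fired."""
    triggered = []
    for name, check, action in checks:
        if not check():      # condition failed (e.g. service is down)...
            action()         # ...so fire the remediation (e.g. restart it)
            triggered.append(name)
    return triggered

# A dictionary stands in for real service state.
state = {"print_spooler": "stopped", "web_server": "running"}

checks = [
    ("print_spooler",
     lambda: state["print_spooler"] == "running",
     lambda: state.update(print_spooler="running")),   # the 'restart' action
    ("web_server",
     lambda: state["web_server"] == "running",
     lambda: state.update(web_server="running")),
]

print(run_checks(checks))   # → ['print_spooler']
```

The value of doing this in a monitoring product rather than a script is that the checks, the actions, and the alert trail all live in one place.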

Of course it is important to realize that some system admins will not be as comfortable learning a tool this powerful on their own, and the fact that SolarWinds offers scores of free training resources is key.  The Customer Portal has more than just videos; they offer live classes and expert sessions with their engineers and experts which you can attend live or watch later.  They have on-demand recordings of everything you might want to learn.  Their Virtual Classroom is an amazing resource for customers who need help – whether that is learning a simple tidbit in a few minutes, or going from zero to hero over the course of a few days.

My initial impression of SolarWinds SAM was that it would be a great tool for smaller businesses; that impression changed drastically, and reasonably quickly.  Yes, I installed SAM in one of my 100-server environments in Q3 2015, and it performed brilliantly.  However, as I learned about it and got to know the product I was convinced it was definitely Enterprise-class, and by the end of Q1 2016 I also had it installed at a client with 19,000 users and thousands of servers.

The Bad and the Ugly…

There is really only one aspect of SolarWinds that irked me, and that is the licensing model.  With some monitoring tools if you have 200 servers you know you need 200 licenses.  With SolarWinds a single server may require 100 licenses, depending on what you are monitoring.  That is not to say that SAM will be more expensive than other tools… it is just a different way of looking at the calculations that I needed to wrap my head around.  A small thing to be sure, but it is certainly an issue for me.
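For anyone trying to budget, the difference between the two models is easy to sketch – the component counts below are made up for illustration, and SolarWinds’ actual license tiers and prices are not reproduced here:

```python
# Hypothetical comparison: per-server vs. per-component license counting.
servers = [
    {"name": "FILE01", "components": 3},    # e.g. CPU, memory, one volume
    {"name": "SQL01",  "components": 40},   # databases, jobs, services...
    {"name": "WEB01",  "components": 12},   # sites, app pools, certificates...
]

per_server_licenses = len(servers)                              # one per box
per_component_licenses = sum(s["components"] for s in servers)  # one per element

print(per_server_licenses)     # → 3
print(per_component_licenses)  # → 55
```

Same three servers, very different license counts – which is exactly the mental adjustment described above.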


I was offered a trial period with SAM to try it out in my environment, and when that trial period ended I decided to renew.  SolarWinds has a great tool here, but more important to me is the support that I have been able to get from the company, which has extended beyond simple ‘how do I…’ questions.  Their engineers have gotten on-line with me to help solve a couple of custom issues that arose, and they were happy to do it.

The product offering is a home run for system admins who want a monitoring and reporting tool and do not want to break the bank… or change out all of their other management tools to drink Microsoft’s Kool-Aid.

Small environment or large, SolarWinds is worth it.  Contact them at www.solarwinds.com for more information, and a demo of their offerings!

The Perils of a Manual Environment

I am not going to lie to you and say that every environment that I manage or have managed is an optimized Secure, Well-Managed IT Environment.  It’s just not true.

In a secure, well-managed IT environment we monitor to make sure that things are working the way they are supposed to.  When we spin up a new server, for example, the proper agents are installed for anti-malware and monitoring without our lifting a finger.  Tuesday evening a new server is spun up, Wednesday morning it is already letting us know how well it is running.

But what about the other environments?  Many smaller environments do not have automated deployment infrastructures that make sure every new server is built to spec.  What do we do for those?

The answer is simple… where automation is lacking we have to be more vigilant in our processes.  When a new server (virtual or otherwise) is created, we not only install an operating system… we also make sure we add the monitoring agent and the anti-virus agent, and we schedule proper backups – because if we don’t, it will all be for naught when everything goes down.

So the answer is to make my environment completely automated, right?

Well, yes of course it is… in an ideal world.  In the real world there are plenty of reasons why we wouldn’t automate everything.  The cost of such systems might outweigh the benefits, for example… or maybe we do not have an IT Pro managing it, just the office computer guy.  Ideally we would get that guy trained and certified in all of the latest and greatest… but if you work in small business you know that might not always be the reality.

So what IS the answer?

Simple.  I have a friend who has made a fortune telling people around the world how to make checklists.  I am not the guru that Karl is, and you don’t have to be either.  But if you do have a manual environment, spend the time to make a checklist for how you build out systems – make one for servers, one for desktops, and probably one for each specific type of server.  You don’t have to do it from memory… the next time you build a machine, write down (or type!) every step you take: 1) Create virtual machine. 2) Customize virtual machine. 3) Install operating system… and so on.  When you are satisfied that your system is built the way you want it (every time), try it again… but rather than working from what you know, follow the checklist.

These checklists, I should mention, should not be written in stone.  There are ten rules that were so written, and that’s enough.  Thou shalt not murder is pretty unambiguous.  Thou shalt install Windows 8.1 may change when you decide to upgrade to Windows 10.  So make sure that every time you use the checklist you do so with a critical eye, looking for a way to improve upon the process.  The Japanese word for this is kaizen.  They are pretty good at a lot of things, from what I have seen.
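If it helps to picture it, a checklist really can be as simple as a versioned list of steps that you revise with every use – the steps and revision date below are invented examples:

```python
# A build checklist as a living document (the steps are invented examples).
checklist = {
    "name": "Virtual server build-out",
    "revised": "2015-06-01",   # bump this every time you improve a step
    "steps": [
        "Create virtual machine",
        "Customize virtual machine",
        "Install operating system",
        "Install monitoring agent",
        "Install anti-malware agent",
        "Schedule backups",
    ],
}

def walk(cl):
    """Print each step with its number so nothing gets skipped."""
    for i, step in enumerate(cl["steps"], start=1):
        print(f"{i}) {step}")

walk(checklist)
```

Keeping a ‘revised’ date on the document is a cheap way to spot a checklist that nobody has looked at critically in a year.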

True story: I gave this advice to a colleague once who thought it was great.  He started creating checklists, and had his employees and contractors follow them.  One day he invited me for a drink and told me a funny story.  His client had been using System Center Operations Manager (SCOM) to monitor all of their servers.  He had a checklist that included installing the SCOM agent in all servers.  One day the client decided to switch from SCOM to SolarWinds (a great product!) and after several weeks he decommissioned his SCOM infrastructure.  Six months later the client (a pretty big small business) complained that since they switched from SCOM to SW all of their new servers kept reporting a weird error.  It seems that the IT Pro who was following the checklists had continued installing the SCOM Agent into their servers, and since it could not find a SCOM server to report to, it was returning an error.  As I said, these checklists should be living documents, and not set in stone.


There is no one right or wrong answer for every environment.  What is a perfect inexpensive solution for one company might be cost prohibitive for another.  The only thing you have to do is use your mind, keep learning, use common sense, and keep reading The World According to Mitch!

Welcome to What’s Next…

There is irony in the title of this post… What’s next.

I posted on Friday that it was my last day working full time at Yakidoo.  I really enjoyed my time there, and am glad that my next venture will allow me to stay on there on a limited basis.

This afternoon I am meeting a colleague at the airport in Seattle, and that will begin my first day at my new gig.  I will talk more about it in a few weeks, even though today will be my first billable day.  That is What’s Next.

However, the reason he and I will be in Seattle – Bellevue/Redmond, actually – is the Airlift for Windows Server, System Center (WSSC), and Windows Azure vNext… the next generation of datacenter and cloud technologies that Microsoft is ‘showing off’ to select Enterprise customers several months prior to launching them.  It will be a week of deep-dive learning, combined with the usual Microsoft marketing machine.  How do I know?  It’s not my first kick at the can.

It is, of course, not my first such Airlift.  The first one I attended was for System Center Configuration Manager (SCCM) 2007, back in November of that year.  A consulting firm had sent me, in advance of my heading off to Asia to teach the product.  I have since been to a couple of others, each time either as a consultant, a Microsoft MVP, or a Virtual Technology Evangelist for Microsoft.  I had not given this a lot of thought before, but this will be my first Airlift / pre-launch event that I am attending as a customer.  It will be interesting to see if and how they treat me differently.

I suspect that the versions of WSSC that I will learn about this week will be the first that I will not be involved in presenting or evangelizing in any way dating back to Windows Server 2003.  I will not be creating content, I will not be working the Launch Events, and I will not be touring across Canada presenting the dog and pony show for Microsoft.  I will not be invited by the MVP Program to tour the user groups presenting Hyper-V, System Center, or Small or Essential Business Servers.  I will not be fronting for Microsoft showing off what is new, or glossing over what is wrong, or explaining business reasons behind technology decisions.  It is, in its way, a liberating feeling.  It is also a bit sad.

Don’t get me wrong… I will still be blogging about it.  Just because Microsoft does not want me in their MVP program does not mean that I will be betraying my readers, or the communities that I have helped to support over the years.  I will be writing about the technologies I learn about over the next week (I do not yet know if there will be an NDA or publication embargo) but at some point you will read about it here.  I will also, if invited, be glad to present to user groups and other community organizations… even if it will not be on behalf of (or sponsored by) Microsoft.  I was awarded the MVP because I was passionate about those things and helping communities… it was not the other way around.

What else can I say?  I am at the airport in Toronto, and my next article will be from one of my favourite cities in North America… see you in Seattle!

Onboard SAN… Issues.

A client of mine is a small business with a couple of physical servers and a couple of virtualization hosts.  One of the physical servers, a Lenovo ThinkServer, has been acting as a file server, so it has really been very under-used.  It is a good server that has never been used to its potential (like myself) but has been nonetheless a very important file server.  It has eight hard drives in it, managed by the on-board RAID controller.

When the server rebooted for no discernible reason last week, we were concerned.  When it didn’t come up again, and did not present any hard drives… we realized we had a problem.

I was relieved to discover that it was still under warranty from Lenovo, with next-business-day (NBD) on-site support.  I called them, and after the regular questions they determined that there might be a problem with the system board.  They dispatched one to me, along with a technician, for the next morning.  Their on-site service is still done by IBM, and in my career I have never met an unprofessional IBM technician.  These guys were no exception: they were very professional and very nice.  Unfortunately they weren’t able to resolve the problem.

Okay, in their defense, here is what everyone (including me) expected to happen:

1) Replace the system board.

2) Plug in all of the devices (including the hard drives).

3) Boot it up, and during the POST get a message like ‘Foreign drive configuration detected.  Would you like to import the configuration?’

4) We answer YES, the configuration rebuilds, and Windows boots up.

Needless to say, this is NOT what happened.  Why?  Let’s start with the fact that low-end on-board RAID controllers apparently suck.  Is it possible that a procedure was not properly followed?  I am not sure, and I am not judging.  I know that I watched most of what they did, and did not see them do anything that I felt was overtly wrong.

The techs spent six hours on-site, a lot of that spent in consultation with the second level support engineer at Lenovo, who had the unenviable task of telling me, at the end of the effort, that all was lost, and I would have to restore everything from our backup.

I should mention at this point that we did have a backup… but because of maintenance we were doing to that system over the December holidays the most recent successful backup was twelve days old.


Okay, we’ll go ahead and do it.  In the meantime, the client and I went to rebuild the RAID configuration.  We decided that although we were going to bolster the server – including a new RAID controller – we were going to try to rebuild the array configuration exactly as it had been, and see what happened.

Let me be clear… even the Lenovo Engineer agreed that this was a futile effort, that there was no way that this was going to work.  Of course it would work as a new array, we just weren’t going to recover anything.  I agreed… but we tried it anyways.

…and the server booted into Windows.

To say that we were relieved would be an understatement.  We got it back up and running exactly as it had been, with zero data loss.  We were not going to leave it this way of course… I spent the next day migrating data into new shares on redundant virtual servers.  But nothing was lost, and we all learned something.

I want to thank Jeff from Lenovo, as well as Luke and Brett from IBM who did their best to help.  Even though we ended up resolving it on our own (and that credit goes mostly to my client), they still did everything they could to make it right.

So my client has a new system board in their server, and hopefully with a new RAID controller, some more memory, and an extra CPU this server can enjoy a long and productive new life as a vSphere host in the cluster.

…But I swear to you, I will never let a customer settle for on-board ‘LSI Software RAID Mega-RAID’ type devices again!

Happy week-end.

End Of Days 2003: The End is Nigh!

In a couple of days we will be saying goodbye to 2014 and ringing in the New Year 2015.  Simple math should show you that if you are still running Windows Server 2003, it is long since time to upgrade.  However here’s more:

When I was a Microsoft MVP, and then when I was a Virtual Technical Evangelist with Microsoft Canada, you might remember my tweeting the countdown to #EndOfDaysXP.  While we had some pushback from people who were not going to migrate, I think we were all thrilled by the positive response and the overwhelming success we had in getting people migrated onto either Windows 8, or at least Windows 7.  We did this not only by tweeting, but also with blog articles and in-person events (including a number of national tours) helping people understand a) the benefits of the modern operating system, and b) how to plan for and implement a deployment solution that would facilitate the transition.  All of us who were on the team during those days – Pierre, Anthony, Damir, Ruth, and I – were thrilled by your response.

Shortly after I left Microsoft Canada, I started hearing from people that I should begin a countdown to #EndOfDaysW2K3.  Of course, Windows Server 2003 was over a decade old, and while it would outlast Windows XP, support for that hugely popular platform would end on July 14th, 2015 (I have long wondered if it was a coincidence that it would end on Bastille Day).  Depending on when you read this article the number might be different, but as of right now the countdown is at around 197 days.  You can keep track yourself by checking out the website here.

It should be said that with Windows 7 there was an #EndOfDaysXP Countdown Gadget for the desktop, and when I migrated to Windows 8 I used a third party app that sat in my Start Menu.  One friend suggested I create a PowerShell script, but that was not necessary.  I don’t remember exactly which countdown timer I used, but it would work just as well for Windows Server 2003 – just enter the date you are counting down to, and it tells you every day how much time is left.
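All of those countdown timers were doing the same trivial date arithmetic; a few lines of Python (using the dates from this article, with the ‘today’ assumed to be when this post was written) reproduce the number:

```python
from datetime import date

# Windows Server 2003 end of support, and roughly when this post was written.
end_of_support = date(2015, 7, 14)
today = date(2014, 12, 29)

days_left = (end_of_support - today).days
print(days_left)   # → 197
```

Swap in `date.today()` and the same two lines of arithmetic will count down any deadline you like.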

The point is, while I think that migrating off of Server 2003 is important, it was not at that point (nor is it now) an endeavour that I wanted to take on.  To put things in perspective, I was nearing the end of a 1,400 day countdown during which I tweeted almost every day.  I was no longer an Evangelist, and I was burnt out.

Despite what you may have heard, I am still happy to help the Evangelism Team at Microsoft Canada (although I think they go by a different name now).  So when I got an e-mail on the subject from Pierre Roman, I felt it important enough to share with you.  As such, here is the gist of that e-mail:

1) On July 14, 2015, support for Windows Server 2003 will come to an end.  It is vital that companies be aware of this, as there are serious dangers inherent in running unsupported platforms in the datacenter, especially in production.  As of that date there will be no more support and no more security updates.

2) The CanITPro team has written (or re-posted) several articles that will help you understand how to migrate off your legacy servers onto a modern Server OS platform, including:

3) The Microsoft Virtual Academy (www.microsoftvirtualacademy.com) also has great educational resources to help you modernize your infrastructure and prepare for Windows Server 2003 End of Support, including:

4) Independent researchers have come to the same conclusion (IDC Whitepaper: Why You Should Get Current).

5) Even though time is running out, the Evangelism team is there to help you.  You can e-mail them at cdn-itpro-feedback@microsoft.com if you have any questions or concerns surrounding Windows Server 2003 End of Support.

Of course, these are all from them.  If you want my help, just reach out to me and if I can, I will be glad to help!  (Of course, as I am no longer with Microsoft or a Microsoft MVP, there might be a cost associated with engaging me.)

Good luck, and all the best in 2015!

Obvious? No…

I learned a lesson today, after I thought I had heard it all.  It seems that when there is mould in your server room, it is important to specify to everyone involved, at every level, that the equipment in there – often valued at hundreds of thousands of dollars, not to mention the potential for lost productivity – is extremely sensitive to the elements, and that pneumatic pressure hoses are not to be used in this room for anything, ever, ever, ever.

You know, I always thought that there were some things that were so blatantly obvious that you just didn’t have to say anything.  I was reminded today that I was wrong about that.  So: For those of you who may ever be asked to clean a Server Room: NO PRESSURE HOSES.

That’s all.

Server Core: Save money.

I remember an internal joke floating around Microsoft in 2007, about a new way to deploy Windows Server.  There was an ad campaign around Windows Vista at the time that said ‘The Wow Starts Now!’  When they spoke about Server Core they joked ‘The Wow Stops Now!’

Server Core was a new way to deploy Windows Server.  It was not a different license or a different SKU, or even different media.  You simply had the option during the installation of clicking ‘Server Core’ which would install the Server OS without the GUI.  It was simply a command prompt with, at the time, a few roles that could be installed in Core.

While Server Core would certainly save some resources, it was not really practical in Windows Server 2008, or at least not for a lot of applications.  There was no .NET, no IIS, and a bunch of other really important services could not be installed on Server Core.  In short, Server Core was not entirely practical.

Fast forward to Windows Server 2012 (and R2) and it is a completely different story.  Server Core is a fully capable Server OS, and with regard to resources the savings are huge.  So when chatting recently with the owner of a cloud services provider (with hundreds of physical and thousands of virtual servers), I asked what percentage of his servers were running Server Core, and he answered ‘zero’.  I could not believe my ears.

The cloud provider is a major Microsoft partner in his country, and is on the leading edge (if not the bleeding edge) on every Microsoft technology.  They recently acquired another datacentre that was a VMware vCloud installation, and have embarked on a major project to convert all of those hosts to Hyper-V through System Center 2012.  So why not Server Core?

The answer is simple… When Microsoft introduced Server Core in 2008 they tried it out, and recognizing its limitations decided that it would not be a viable solution for them.  It had nothing to do with the command line… the company scripts and automates everything in ways that make them one of the most efficient datacentres I have ever seen.  They simply had not had the cycles to re-test Server Core in Server 2012 R2 yet.

We sat down and did the math.  The graphical user interface (GUI) in Windows Server 2012 takes about 300MB of RAM – a piddling amount when you consider the power of today’s servers.  However in a cloud datacentre such as this one, in which every host contains 200-300 virtual machines running Windows Server, that 300MB of RAM adds up quickly – a host with two hundred virtual machines requires 60GB of RAM just for GUIs.  If we assume that the company was not going to go out and buy more RAM for its servers simply for the GUI, it meant that, on average, a host comfortably running 200 virtual machines with the GUI would easily run 230 virtual machines on Server Core.

In layman’s terms, the math in the previous paragraph means that the datacentre’s capacity could increase by fifteen percent by converting all of the VMs to Server Core.  If the provider has 300 hosts running 200 VMs each (60,000 VMs), then an increased capacity of 15% translates to 9,000 more VMs.  With the full GUI, hosting those would require forty-five more hosts (let’s conservatively say $10,000 each), or an investment of nearly half a million dollars – and that is before you consider all of the ancillary costs: real estate, electricity, cooling, licensing, etc.  Server Core can save all of that.
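The back-of-the-envelope numbers above are easy to verify; the host count, VM density, per-GUI RAM, and $10,000-per-host price below are the same rough assumptions used in the text:

```python
# Back-of-the-envelope savings from removing the GUI from every guest VM.
gui_ram_mb = 300          # approximate RAM used by the GUI per VM
vms_per_host = 200        # comfortable density with the GUI installed
hosts = 300
cost_per_host = 10_000    # conservative hardware price

ram_freed_per_host_gb = vms_per_host * gui_ram_mb // 1000    # 60 GB per host
extra_vms_per_host = int(vms_per_host * 0.15)                # ~15% more density
extra_vms_total = extra_vms_per_host * hosts

hosts_avoided = extra_vms_total // vms_per_host    # servers you don't buy
capex_avoided = hosts_avoided * cost_per_host

print(ram_freed_per_host_gb)  # → 60
print(extra_vms_total)        # → 9000
print(capex_avoided)          # → 450000
```

And again, that $450,000 is hardware alone, before real estate, power, and cooling.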

Now here’s the real kicker: Had we seen this improvement in Windows Server 2008, it still would have been a very significant cost to converting servers from GUI to Server Core… a re-install was required.  With Windows Server 2012 Server Core is a feature, or rather the GUI itself is a feature that can be added or removed from the OS, and only a single reboot is required.  While the reboot may be disruptive, if managed properly the disruption will be minimal, with immense cost savings.

If you have a few servers to uninstall the GUI from, then Server Manager is the easy way to do it.  However if you have thousands or tens of thousands of VMs to remove it from, then you want to script it.  As usual, PowerShell provides the easiest way to do this… the cmdlet would be:

Uninstall-WindowsFeature Server-Gui-Shell -Restart

There is also a happy medium between the GUI and Server Core called MinShell… you can read about it here.  However remember that in your virtualized environment you will be doing a lot more remote management of your servers, and there is a reason I call MinShell ‘the training wheels for Server Core.’

There’s a lot of money to be saved, and the effort is not significant.  Go ahead and try it… you won’t be disappointed!
