Out of Band Security Updates

If you run Windows Server this is very important.  Microsoft released today a number of out-of-band security updates for Microsoft Windows.  From what I have read, these patches (one of my servers has 14 applicable updates since 3am) will be applied to Windows clients as well as Windows Servers, but the vulnerability they protect against exists only in Windows Server.  I have a bit more information, but because it is the middle of a busy work day I cannot go into it… if you are a server admin I strongly recommend you take some time to look at these patches, test them, and apply them ASAP… the two-week deadline setting in WSUS is probably not good enough for these ones 😉

Microsoft is not a company that does anything out-of-band without good reason… if it has gone to the trouble of releasing these patches, I suspect they protect against something pretty serious, so make sure you look into them – you can be certain that the hackers are!

Hyper-V 2008 R2: Still good enough?

I manage a vSphere environment at work, and it is a real change from the last few years, when I spent all of my time talking about Hyper-V.  I want to be clear – it is not better or worse, it is just… different.  We have a number of virtualization hosts, plus a physical domain controller, and one physical server running Windows Server 2008 R2 (Enterprise) that runs an app which precludes us from changing the OS.  The app hardly uses any memory, so most of the server’s RAM was wasted.

While my physical server does not have a lot of RAM (8GB) it has a ridiculous amount of internal storage… I mean terabytes and terabytes of it.  I asked my boss about it, and he said it was there for something that they no longer use the server for… but it’s there… wasted as well… for now.

A few weeks ago I proposed a project that would require use of that space, and it was tentatively approved.  The problem is that the existing application and the proposed application are not supposed to co-exist on the same server.  I would have to come up with a way to segregate them.  No problem… I would install the Hyper-V role onto the physical server, and then create a new virtual machine for my purposes.

Once I explained to my boss that no extra licensing was required – because the physical server is licensed for Windows Server 2008 R2 Enterprise Edition, we could build as many as four virtual machines on the same license on that host – he got excited, and asked the usual ‘what else can we do?’ questions.

‘Can we cluster the virtual machine?’

No.  I mean, we could, but it would require having a second Hyper-V host which we do not have.  There is nothing we can do about that without incurring extra costs… and the purpose of the exercise is to do it for zero dollars.

‘Can we use Storage Spaces?’

No.  Storage Spaces is a great technology – one that I really loved talking about when I was working with Microsoft.  However it is a feature that was only introduced in Windows Server 2012, and we are only on Server 2008 R2.

‘Can we create the VM using 64TB .vhdx drives?’

No.  Again, .VHDX files were only introduced in Windows Server 2012.  We are limited to 2TB .VHD files… which is more than enough for our actual needs anyways.

‘How about UEFI Boot on the VM?’

Nope.  Generation 2 hardware was introduced in Windows Server 2012 R2, so we are stuck with Generation 1 hardware.

So after he struck out on all of these questions, he asked me the question I was expecting… ‘Then why bother?’

I became a fan of Hyper-V as soon as it was released in Windows Server 2008.  Yes, the original.  I was not under any delusions that it was as good as or better than ESX, but it was free, it required nothing extra to install… and if you knew Windows then you didn’t need to learn much more to manage it.

Of course it got much better in Windows Server 2008 R2, and better still in the SP1 release… and then in Windows Server 2012 it broke through: in some ways it was almost as good as vSphere, in some ways it was better, and on balance it came out even.  Windows Server 2012 R2 improved it further.  When I spent three years with Microsoft Canada – first as a Virtual Partner Technology Advisor and then as a Virtual Evangelist – criss-crossing the country (and the US and the globe) evangelizing Hyper-V in Windows Server 2012, I was confident when I said that at last Microsoft Virtualization was on a par with VMware.

I would never have said that about Hyper-V in Windows Server 2008 R2. Sorry Microsoft, it was good… but vSphere was better.

However in this case we are not comparing Microsoft versus VMware… we are not deciding which platform to implement, because VMware is not an option. We are not even comparing the features of vOld versus vNew… because vNew is still not an option.

All we are deciding is this: Is the version of Hyper-V that is available to us good enough for the needs of this project? Let’s review:

  • We need to create a virtual machine with 4GB of RAM. YES.
  • We need that VM to support up to 4TB of storage. YES. (We cannot do it on a single volume, but that is not a requirement)
  • We need the VM to be able to join a domain with FFL and DFL of Windows Server 2008 R2. YES.
  • We need the virtual machine to be backed up on a nightly basis using the tools available to us. YES.

That’s it… we have no other requirements. All of our project needs are met by Hyper-V on Windows Server 2008 R2. Yes, Microsoft would love for us to pay to upgrade the host operating system, but they got their money for this server when we bought the license in 2011, and unless they are willing to give us a free upgrade (there is no Software Assurance on the existing license) and pay to upgrade the existing application to work on Server 2012 R2, there is nothing that we can do for them. And frankly, if we were ever in the position of having to redeploy the whole server, it would be on VMware anyways, because that is what our virtualization environment runs on.

I spent two years evangelizing the benefits of a hybrid virtualization environment, and how well it can be managed with System Center 2012 R2… and that is what we are going to have. I have purchased the System Center licenses and am thrilled that I will be able to manage both my vSphere and my Hyper-V from one console… and for those of you who were paying attention that is what I spent the last three years recommending.

I can hold my head up high because I am running my environment exactly how I recommended all of you run yours… so many of my audience complained (when I was with Microsoft) that my solutions were not real-world because the real world was not exclusively Microsoft. That was never what I was recommending… I was recommending that the world does not need to be entirely VMware either… the two can coexist very well… with a little bit of knowledge and understanding!

Free ebook: Introducing Microsoft System Center 2012 R2

Folks you will not want to miss this!  Microsoft Press is giving away the ebook Introducing Microsoft System Center 2012 R2: Technical Overview.  It is written by Mitch Tulloch, Symon Perriman, and the System Center team… and is a great way to get up to speed on Microsoft’s private cloud!

Check it out at http://blogs.msdn.com/b/microsoft_press/archive/2013/12/16/free-ebook-introducing-microsoft-system-center-2012-r2.aspx.

Become a Virtualization Expert!

For those who missed the virtualization jump start, the entire course is now available on demand, as is the link to grab a free voucher for exam 409.  This is a single-exam virtualization specialist certification.  I would encourage you to take the exam soon, before all the free spots are booked.  Full info at http://borntolearn.mslearn.net/btl/b/weblog/archive/2013/12/17/earn-your-microsoft-certified-specialist-server-virtualization-title-with-a-free-exam.aspx

Server Core: Save money.

I remember an internal joke floating around Microsoft in 2007, about a new way to deploy Windows Server.  There was an ad campaign around Windows Vista at the time that said ‘The Wow Starts Now!’  When they spoke about Server Core they joked ‘The Wow Stops Now!’

Server Core was a new way to deploy Windows Server.  It was not a different license or a different SKU, or even different media.  You simply had the option during the installation of clicking ‘Server Core’ which would install the Server OS without the GUI.  It was simply a command prompt with, at the time, a few roles that could be installed in Core.

While Server Core would certainly save some resources, it was not really practical in Windows Server 2008 for a lot of applications.  There was no .NET Framework (and therefore no ASP.NET on IIS), and a number of other really important services could not be installed on Server Core.  In short, it was a niche option.

Fast forward to Windows Server 2012 (and R2) and it is a completely different story.  Server Core is a fully capable Server OS, and with regard to resources the savings are huge.  So when chatting recently with the owner of a cloud services provider (with hundreds of physical and thousands of virtual servers) I asked what percentage of his servers were running Server Core, and he answered ‘Zero.’  I could not believe my ears.

The cloud provider is a major Microsoft partner in his country, and is on the leading edge (if not the bleeding edge) on every Microsoft technology.  They recently acquired another datacentre that was a VMware vCloud installation, and have embarked on a major project to convert all of those hosts to Hyper-V through System Center 2012.  So why not Server Core?

The answer is simple… When Microsoft introduced Server Core in 2008 they tried it out, and recognizing its limitations decided that it would not be a viable solution for them.  It had nothing to do with the command line… the company scripts and automates everything in ways that make them one of the most efficient datacentres I have ever seen.  They simply had not had the cycles to re-test Server Core in Server 2012 R2 yet.

We sat down and did the math.  The graphical user interface (GUI) in Windows Server 2012 takes about 300MB of RAM – a piddling amount when you consider the power of today’s servers.  However in a cloud datacentre such as this one, in which every host contained 200-300 virtual machines running Windows Server, that 300MB of RAM added up quickly – a host with two hundred virtual machines required 60GB of RAM just for GUIs.  If we assume that the company was not going to go out and buy more RAM for its servers simply for the GUI, it meant that, on average, a host comfortably running 200 virtual machines with the GUI would easily run 230 virtual machines on Server Core.

In layman’s terms, the math in the previous paragraph means that the datacentre’s capacity could increase by fifteen percent by converting all of its VMs to Server Core.  If the provider has 300 hosts running 200 VMs each (60,000 VMs), then an increased capacity of 15% translates to 9,000 more VMs.  With the full GUI those extra VMs would require forty-five more hosts (let’s conservatively say $10,000 each), or an investment of nearly half a million dollars.  And that is before you consider all of the ancillary costs – real estate, electricity, cooling, licensing, etc…  Server Core can save all of that.

Now here’s the real kicker: had we seen this improvement in Windows Server 2008, converting servers from GUI to Server Core would still have carried a very significant cost… a re-install was required.  With Windows Server 2012 the GUI itself is a feature that can be added to or removed from the OS, and only a single reboot is required.  While the reboot may be disruptive, if managed properly the disruption will be minimal, with immense cost savings.

If you have a few servers to uninstall the GUI from then the Server Manager is the easy way to do it.  However if you have thousands or tens of thousands of VMs to remove it from, then you want to script it.  As usual PowerShell provides the easiest way to do this… the cmdlet would be:

Uninstall-WindowsFeature Server-Gui-Shell -Restart
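At scale you would feed that cmdlet a list of machines.  Here is a minimal sketch (servers.txt is a hypothetical file with one server name per line; note that removing Server-Gui-Shell alone leaves the graphical management tools behind, while removing Server-Gui-Mgmt-Infra as well takes you all the way down to Server Core):

# Read the server names, then remove the GUI from each one remotely
$servers = Get-Content .\servers.txt

foreach ($server in $servers) {
    # -Restart reboots each server once the removal completes
    Uninstall-WindowsFeature -Name Server-Gui-Shell, Server-Gui-Mgmt-Infra -ComputerName $server -Restart
}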

There is also a happy medium between the GUI and Server Core called MinShell… you can read about it here.  However remember that in your virtualized environment you will be doing a lot more remote management of your servers, and there is a reason I call MinShell ‘the training wheels for Server Core.’

There’s a lot of money to be saved, and the effort is not significant.  Go ahead and try it… you won’t be disappointed!

A response to a VMware article… written by someone I respect.

English: VMware vSphere in the Enterprise

While he may not be very well known to the Microsoft community, Mike Laverick is a legend in VMware circles.  Mike runs a blog called RTFM Education, a source of white papers for VMware technology, although he did start out as a Microsoft Certified Trainer.  He now works for VMware as a Senior Cloud Infrastructure Evangelist.  I was very happy to read on his blog that he has decided to try learning Hyper-V and Microsoft’s Private Cloud.  Unfortunately, from what I can tell he is still thinking way too VMware, rather than trying to learn the Microsoft way of doing things.

(To read the article follow this link:

http://www.mikelaverick.com/2013/10/i-cant-get-no-validation-windows-hyper-v-r2eality-fail-over-clustering/)

This is a problem that I see all the time, and going both ways.  When I was teaching vSphere Infrastructure classes my Microsoft-focused students had a hard time getting out of the Microsoft mindset.  When I teach Microsoft courses, my VMware students have the same problem going the other direction.  It would be much easier if people would open their minds and just let the technology flow… but then I have been a Star Wars fan for too long so I believe in that sort of thing.

I found several points of the article quite amusing.  Mike opens the article with a picture and quote from the book Windows NT Microsoft Cluster Server.  The first words that he actually types are ‘Mmm, so much has changed since then or has it?’  I am sorry Mike, but to even insinuate that Microsoft Clustering in Windows Server 2012 R2 is anywhere near the disaster that was clustering in Windows NT (or Windows 2000, or Server 2003) is a joke.  Yes, you have to have the proper pieces in place, and yes, you have to configure it properly.  You even have to spend a little time learning Microsoft Clustering and how it works.  If you were to spend thirty minutes with someone like me I’d say you’d be good.

Also, I know you don’t like that you have to install the Failover Clustering feature on all of the servers before you can create your cluster.  However please remember that unlike a pure hypervisor, Windows Server is an operating system that does many things for many people.  To install every possible feature out of the box is a ridiculous notion – for one thing, it would triple the footprint and multiply exponentially the attack surface of Windows Server… to say nothing of running code you don’t need, consuming resources along the way.

To save time, I recommend the following PowerShell cmdlets:

Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName MyServer1
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName MyServer2
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName MyServer3
New-Cluster -Name MyCluster -Node MyServer1, MyServer2, MyServer3 -StaticAddress 172.17.10.5

(There are certainly tidier ways to do that at scale – a loop is sketched below – but that is not the point of the article.)
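For instance, a minimal sketch using the same placeholder names as above:

# Install the feature on every prospective node, then build the cluster once
$nodes = 'MyServer1', 'MyServer2', 'MyServer3'

foreach ($node in $nodes) {
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName $node
}

New-Cluster -Name MyCluster -Node $nodes -StaticAddress 172.17.10.5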

The point of this article is not to take Mike’s article apart – for one thing, he is probably doing better with Microsoft technology than I would have done when I was new to VMware; for another, I have great respect for him, both as a person and as an IT Pro.  I just find it amusing that a VMware evangelist is struggling to learn Hyper-V and System Center, just as so many Microsoft evangelists have been struggling to learn VMware.  There is a huge learning curve to be sure… no matter which way you go.

While I am reasonably fluent and certified in both technologies, there is no question that I favour Microsoft… just as Mike favours VMware.  I am glad to see that he is trying to learn Microsoft though… even though some of the ways he is going about it may be questionable.

The one thing that I will point out though is that Mike is right… there are two ways of building a Microsoft cluster – you can use the Failover Cluster Manager, or you can use System Center VMM.  Mike points out that these technologies would do well to communicate better.  I agree, and recommend that users pick one or the other.  I would also like to point out that in vCenter Server you can create a cluster, but if you are only using ESXi (VMware’s hypervisor) without vCenter Server there is no way to create a cluster… the technology is simply not available unless you pay for it.  Score one for Microsoft.

Mike, on a personal note, I would love to sit with you and show you the vastness of System Center and Microsoft’s Private Cloud one day.  Geography seems to work against us, as you are (I believe) in Scotland, and I am in Japan.  There is a catch though… I will gladly teach you Microsoft’s virtualization stack from top to bottom… but I want you to do the same for me with the vSphere stack.  I know the technology and am certified, but I would cherish the opportunity to relearn it from you, as I have followed your articles with reverence for many years.

If you ever do care to take me up on the offer Mike, my email address is mitch@garvis.ca.  Drop me a line, we’ll figure it out.  I suspect that we would both be able to write some great articles following those sessions, and we would both have newfound respect for the other’s technology of choice.

Failover Clustering: Let’s spread the Hyper-V love across hosts!

This article was originally published on the Canadian IT Pro Connection.

Some veteran IT Pros hear the term ‘Microsoft Clustering’ and their hearts start racing.  That’s because once upon a time Microsoft Cluster Services was very difficult and complicated.  In Windows Server 2008 it became much easier, and in Windows Server 2012 it is now available in all editions of the product, including Windows Server Standard.  Owing to these two factors you are now seeing all sorts of organizations using Failover Clustering that would previously have shied away from it.

The service that we are seeing clustered most frequently in smaller organizations is Hyper-V virtual machines.  That is because virtualization is another feature that is really taking off, and the low cost of virtualizing using Hyper-V makes it very attractive to these organizations.

In this article I am going to take you through the process of creating a failover cluster from two virtualization hosts that are connected to a single SAN (storage area network) device.  However in Windows Server 2012 these are far from the limits.  You can actually cluster up to sixty-four servers together in a single cluster.  Once they are joined to the cluster we call them cluster nodes.

Failover Clustering in Windows Server 2012 allows us to create highly available virtual machines using a method called Active-Passive clustering.  That means that your virtual machine is active on one cluster node, and the other nodes are only involved when the active node becomes unresponsive, or if a tool that is used to dynamically balance the workloads (such as System Center 2012 with Performance and Resource Optimization (PRO) Tips) initiates a migration.

In addition to using SAN disks for your shared storage, Windows Server 2012 also allows you to use Storage Pools.  I explained Storage Pools and showed you how to create them in my article Storage Pools: Dive Right In! I also explained how to create a virtual SAN using Windows Server 2012 in my article iSCSI Storage in Windows Server 2012.  For the sake of this article, we will use the simple SAN target that we created together in that article.

Step 1: Enabling Failover Clustering

Failover Clustering is a feature on Windows Server 2012.  In order to enable it we will use the Add Roles and Features wizard.

1. From Server Manager click Manage, and then select Add Roles and Features.

2. On the Before you begin page click Next>

3. On the Select installation type page select Role-based or feature-based installation and click Next>

4. On the Select destination server page select the server onto which you will install the role, and click Next>

5. On the Select server roles page click Next>

6. On the Select features page select the checkbox Failover Clustering.  A pop-up will appear asking you to confirm that you want to install the MMC console and management tools for Failover Clustering.  Click Add Features.  Click Next>

7. On the Confirm installation selections page click Install.

NOTE: You could also add the Failover Clustering feature to your server using PowerShell.  The script would be:

Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

If you want to install it to a remote server, you would use:

Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName <servername>

That is all that we have to do to enable Failover Clustering in our hosts.  Remember though, it does have to be done on each server that will be a member of our cluster.

Step 2: Creating a Failover Cluster

Now that Failover Clustering has been enabled on the servers that we want to join to the cluster, we have to actually create the cluster.  This step is easier than it ever was, although you should take care to follow the recommended guidelines.  Always run the Validation Tests (all of them!), and allow Failover Cluster Manager to determine the best cluster configuration (Node Majority, Node and Disk Majority, etc…)

NOTE: The following steps have to be performed only once – not on each cluster node.

1. From Server Manager click Tools and select Failover Cluster Manager from the drop-down list.

2. In the details pane under Management click Create Cluster…

3. On the Before you begin page click Next>

4. On the Select Servers page enter the name of each server that you will add to the cluster and click Add.  When all of your servers are listed click Next>

5. On the Validation Warning page ensure the Yes. When I click Next, run configuration validation tests, and then return to the process of creating the cluster radio is selected, then click Next>

6. On the Before You Begin page click Next>

7. On the Testing Options page ensure the Run all tests (recommended) radio is selected and then click Next>

8. On the Confirmation page click Next> to begin the validation process.

9. Once the validation process is complete you are prompted to name your cluster and assign an IP address.  Do so now, making sure that your IP address is in the same subnet as your nodes.

NOTE: If you are not prompted to provide an IP address it is likely that your nodes have their IP Addresses assigned by DHCP.

10. On the Confirmation page make sure the checkbox Add all eligible storage is selected and click Next>.  The cluster will now be created.

11. Click on Finish.  In a few seconds your new cluster will appear in the Navigation Pane.
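NOTE: Validation and cluster creation can also be scripted.  A minimal PowerShell sketch (HV01 and HV02 are hypothetical node names, and the address must be in the same subnet as the nodes):

# Run all of the validation tests against the prospective nodes
Test-Cluster -Node HV01, HV02

# Create the cluster with a name and a static IP address
New-Cluster -Name HVCluster -Node HV01, HV02 -StaticAddress 192.168.1.50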

Step 3: Configuring your Failover Cluster

Now that your failover cluster has been created there are a couple of things we are going to verify.  The first is in the main cluster screen.  Near the top it should say the type of cluster you have.

If you created your cluster with an even number of nodes (and at least two shared drives) then the type should be Node and Disk Majority.  A Microsoft cluster stays healthy only as long as a majority of the votes are present, and every node has a vote.  This means that if you have an even number of nodes (say 10) and half of them (5) go offline, your cluster goes down – five out of ten is not a majority.  With ten nodes you would have long since taken action, but imagine you have two nodes and one of them goes down… your entire cluster would go down.  So Failover Clustering uses Node and Disk Majority – it takes the smallest drive shared by all nodes (I usually create a 1GB LUN) and configures it as the Quorum drive, giving it a vote… so if one of the nodes in your two-node cluster goes down, you still have a majority of votes, and your cluster stays on-line.
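If you prefer to verify this from PowerShell, there is a one-liner for it (run it on any node, or add -Cluster <clustername> from a management machine):

# Show the quorum type and witness disk for the cluster
Get-ClusterQuorum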

The next thing that you want to check is your nodes.  Expand the Nodes tree in the navigation pane and make sure that all of your nodes are up.

Once this is done you should check your storage.  Expand the Storage tree in the navigation pane, and then expand Disks.  If you followed my articles you should have two disks – one large one (mine is 140GB) and a small one (mine is 1GB).  The smaller disk should be marked as assigned to Disk Witness in Quorum, and the larger disk will be assigned to Available Storage.

Cluster Shared Volumes (CSVs) were introduced in Windows Server 2008 R2.  A CSV creates a contiguous namespace for your SAN LUNs on all of the nodes in your cluster.  In other words, rather than having to ensure that all of your LUNs have the same drive letter on each node, CSVs create a link – a portal if you will – on your C: drive under the directory C:\ClusterStorage.  Each LUN gets its own subdirectory – C:\ClusterStorage\Volume1, C:\ClusterStorage\Volume2, and so on.  Using CSVs also means that you are no longer limited to a single VM per LUN, so you will likely need fewer LUNs.

CSVs are enabled by default, and all you have to do is right-click on any drive assigned to Available Storage, and click Add to Cluster Shared Volumes.  It will only take a second to work.
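If you would rather script it, it is a one-liner.  A sketch (the disk name is whatever Failover Cluster Manager shows under Available Storage – ‘Cluster Disk 2’ here is an assumption):

# Move the disk from Available Storage into Cluster Shared Volumes
Add-ClusterSharedVolume -Name 'Cluster Disk 2'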

NOTE: While CSVs create directories on your C: drive that are completely navigable, it is never a good idea to use them for anything other than Hyper-V.  No other use is supported.

Step 4: Creating a Highly Available Virtual Machine (HAVM)

Virtual machines are no different to Failover Cluster Manager than any other clustered role.  As such, that is where we create them!

1. In the navigation pane of Failover Cluster Manager expand your cluster and click Roles.

2. In the Actions Pane click Virtual Machines… and click New Virtual Machine.

3. In the New Virtual Machine screen select the node on which you want to create the new VM and click OK.

The New Virtual Machine Wizard runs just like it would in Hyper-V Manager.  The only thing you would do differently here is change the file locations for your VM and VHDX files.  In the appropriate places ensure they are stored under C:\ClusterStorage\Volume1.

At this point your highly available virtual machine has been created, and can be failed over without delay!
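If you prefer PowerShell, the same thing takes two cmdlets.  A minimal sketch (the VM name, memory size, and paths are all hypothetical):

# Create the VM with its configuration and disk on the Cluster Shared Volume...
New-VM -Name NewVM -MemoryStartupBytes 4GB -Path C:\ClusterStorage\Volume1 -NewVHDPath C:\ClusterStorage\Volume1\NewVM\NewVM.vhdx -NewVHDSizeBytes 100GB

# ...then register it with the cluster as a highly available role
Add-ClusterVirtualMachineRole -VMName NewVM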

Step 5: Making an existing virtual machine highly available

In all likelihood you are not starting from the ground up, and you probably have pre-existing virtual machines that you would like to add to the cluster.  No problem… however before you begin, you need to move the VM’s storage onto shared storage.  Because Windows Server 2012 includes live storage migration, this is very easy to do:

1. In Hyper-V Manager right-click the virtual machine that you would like to make highly available and click Move

2. In the Choose Move Type screen select the radio Move the virtual machine’s storage and click Next>

3. In the Choose Options for Moving Storage screen select the radio marked Move all of the virtual machine’s data to a single location and click Next>

4. In the Choose a new location for virtual machine screen, type C:\ClusterStorage\Volume1 into the field.  Alternately you could click Browse… and navigate to the shared file location.  Then click Next>

5. On the Completing Move Wizard page verify your selections and click Finish.

Remember that moving a running VM’s storage can take a long time.  The VHD or VHDX file could theoretically be huge… depending on the size you selected.  Be patient… it will get there.  Once it is done you can continue with the following steps.

6. In Failover Cluster Manager navigate to the Roles tab.

7. In the Actions Pane click Configure Role…

8. In the Select Role screen select Virtual Machine from the list and click Next>.  This step can take a few minutes… be patient!

9. In the Select Virtual Machine screen select the virtual machine that you want to make highly available and click Next>

NOTE: A great improvement in Windows Server 2012 is the ability to make a VM highly available regardless of its state.  In previous versions you needed to shut down the VM to do this… no more!

10. On the Confirmation screen click Next>

…That’s it! Your VM is now highly available.  You can navigate to Nodes and see which server it is running on.  You can also right-click on it, click Move, select Live Migration, and click Select Node.  Select the node you want to move it to, and you will see it move before your very eyes… without any downtime.
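Both halves of this step can be scripted as well.  A minimal sketch (the VM name and destination path are assumptions):

# Live-migrate the VM's storage onto the Cluster Shared Volume...
Move-VMStorage -VMName AppVM -DestinationStoragePath C:\ClusterStorage\Volume1\AppVM

# ...then make the VM highly available
Add-ClusterVirtualMachineRole -VMName AppVM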

What? There’s a Video??

Yes, we wanted you to read through all of this, but we also wrote it as a reference guide that you can come back to when you build it yourself.  However, to make your life slightly easier, we also created a video for you and posted it online.  Check it out!

Creating and configuring Failover Clustering for Hyper-V in Windows Server 2012

For Extra Credit!

Now that you have added your virtualization hosts as nodes in a cluster, you will probably be creating more of your VMs on Cluster Shared Volumes than not.  In the Hyper-V Settings you can change the default file locations for both your VMs and your VHDX files to C:\ClusterStorage\Volume1.  This will prevent your having to enter them each time.

As well, the best way to create your VMs will be in the Failover Cluster Manager and not in Hyper-V Manager.  FCM creates your VMs as HAVMs automatically, without your having to perform those extra steps.

Conclusion

Over the last few weeks we have demonstrated how to Create a Storage Pool, perform a Shared Nothing Live Migration, Create an iSCSI Software Target in Windows Server 2012, and finally how to create and configure Failover Clusters in Windows Server 2012.  Now that you have all of this knowledge at your fingertips (Or at least the links to remind you of it!) you should be prepared to build your virtualization environment like a pro.  Before you forget what we taught you, go ahead and do it.  Try it out, make mistakes, and figure out what went wrong so that you can fix it.  In due time you will be an expert in all of these topics, and will wonder how you ever lived without them.  Good luck, and let us know how it goes for you!

Getting Certified: Things have really changed!

This article was originally published to The Canadian IT Pro Connection.

Boy has it been an exciting year… Microsoft’s busiest release year ever!  On the IT Pro side we have System Center 2012 (a single product now, but truly seven distinct products for managing your environment!), Windows Server 2012, Windows 8… we have Windows Azure (which for the first time is really beginning to show true relevance to the IT Pro and not just the devs), and of course the new Office (both on-prem with Office 2013, and in the cloud with Office 365).  There is of course Windows Phone 8, Windows Intune, and the list goes on.

With all of these new versions out many IT Pros will be looking to update their certifications to remain current, while many more will be looking for their first certs.  For the first time in six years Microsoft Learning has completely changed the way you will be looking at certifications going forward.  If you are like me (and so many others) and do want to get certified in the latest and greatest, then you will need to know what is out there, and how certifications have changed with the newest product cycles.

Solutions-based Certifications

In the last few years Microsoft Learning focused on what they referred to as task-based certifications (MCTS) and job-based certifications (MCITP).  However IT Pros started to see more and more components in learning and exams that were not actually in the product – so for example an exam on Windows Server might have included a question on the Security Compliance Manager (SCM) and System Center.  Although it made sense to the SMEs writing the questions, the unprepared found themselves facing questions that they couldn’t answer, and a resounding chorus of ‘we didn’t realize we would be tested on that!’ was to be heard across the blogosphere.

This year the new certifications have been revamped to be solutions-based.  That means you are not focusing on a role or a product, but rather on the solution as a whole, which will very often include technologies that are not part of the product but are complementary to it.  Microsoft’s Solution Accelerators are a good example of this.  The Solution Accelerators are a series of free tools available from Microsoft – including the Security Compliance Manager, the Microsoft Deployment Toolkit, the Microsoft Virtual Machine Converter, and others – that may not be required knowledge for everyone, but every IT Pro should know about them because they really do come in handy.

Additionally you are going to see a strong interdependence between Windows Server 2012, System Center 2012, and Windows 8.  After all very few companies have only one of these, and in fact in any organization of a certain size or larger it would be rare to not find all three.

Of course it is also likely you are going to see questions that ask about previous versions of all of these technologies. ‘Your company has 25 servers running Windows Server 2003 R2 Enterprise Edition and 5000 desktops running Windows Vista Business Edition…’ sorts of questions will not be uncommon.  This will make some of us scour our archived memory banks for the differences between editions, and may seem unfair to IT Pros who are new to the industry.  Remember that every certification exam and course lists recommended prerequisites for candidates, and 2-3 years of experience is not an uncommon one.  To that I remind you that you do not need a perfect score to pass the exams… do your best!

What was old is new again

In 2005 Microsoft announced the retirement of the MCSE and MCSA certifications, to be replaced by the MCTS/MCITP certs.  During a recent keynote delivered by a guest speaker from Redmond I heard him say that this was actually Canada’s fault, and unfortunately he is partly right.  The Quebec Order of Engineers won their lawsuit regarding the usage of the word engineer in the cert.  While it may have made their lives better, it complicated the certification landscape for a lot of IT Pros and hiring managers who never quite got used to the new model.


In April, 2012 Microsoft Learning announced that things were changing again… we would again be able to earn our MCSA and MCSE certs, but they would now stand for Microsoft Certified Solutions Associate and Microsoft Certified Solutions Expert.  In fact they thought it was a good enough idea that although they were intended as next-generation certs, they would be ported backward one generation… if you were/are an MCITP: Server Administrator or MCITP: Enterprise Administrator on Windows Server 2008 you immediately became an MCSA: Windows Server 2008.  You were also immediately only two exams away from earning your MCSE: Private Cloud certification.

Microsoft Learning bills the MCSA certification as ‘the foundation for your professional career.’  I agree with this because it is the basic cert on the operating system, and from there you can jump into the next stage (there are several MCSE programs available, all of which require the base MCSA).

Of course now that Windows Server 2012 has been released, so too have the new certifications.  If you want to earn your MCSA: Windows Server 2012 credential then you are only three exams away:

Exam # | Title | Aligned course
70-410 | Installing and Configuring Windows Server 2012 | 20410
70-411 | Administering Windows Server 2012 | 20411
70-412 | Configuring Advanced Windows Server 2012 Services | 20412

Instead of taking all three of these exams, you could choose to upgrade any of the following certifications with a single upgrade exam:

MCSA: Windows Server 2008

MCITP: Virtualization Administrator on Windows Server 2008 R2

MCITP: Enterprise Messaging Administrator 2010

MCITP: Lync Server Administrator 2010

MCITP: SharePoint Administrator 2010

MCITP: Enterprise Desktop Administrator on Windows 7

The upgrade exam is called Upgrading Your Skills to MCSA Windows Server 2012, and is exam number 70-417.

Microsoft Learning calls the MCSE certification ‘the globally recognized standard for IT professionals.’  It demonstrates that you know more than just the basics – that you are an expert in the technologies required to provide a complete solution for your environment.

The first IT Pro MCSE cert announced focused on virtualization and the System Center 2012 product.  Microsoft Certified Solutions Expert: Private Cloud launched first because System Center 2012 was released earlier in the year, and the Private Cloud cert could use either Server 2012 or Server 2008 certs as its baseline.  If you already have a qualifying MCSA certification (such as the one outlined above, or the MCSA: Windows Server 2008) then you would only require two more exams to complete your MCSE:

Exam # | Title | Aligned course
70-246 | Monitoring and Operating a Private Cloud with System Center 2012 | 10750
70-247 | Configuring and Deploying a Private Cloud with System Center 2012 | 10751
70-659* | TS Windows Server 2008: Server Virtualization | 10215A

* This exam can be taken instead of exam 70-247 until January 31, 2013 to count towards the Private Cloud certification.

The next new-generation MCSE cert for the IT Pro is the MCSE: Server Infrastructure.  Like the first one, the basis for this cert is the MCSA.  Unlike the Private Cloud cert, the MCSA must be in Windows Server 2012.  The required additional exams are:

Exam # | Title | Aligned course
70-413 | Designing and Implementing a Server Infrastructure | 20413
70-414 | Implementing an Advanced Server Infrastructure | 20414

Are you starting to worry that your current Server 2008 certs aren’t helping you toward your goal?  Never fear… the following certifications are upgradeable by taking three exams:

MCITP: Virtualization Administrator on Windows Server 2008 R2

MCITP: Enterprise Messaging Administrator 2010

MCITP: Lync Server Administrator 2010

MCITP: SharePoint Administrator 2010

MCITP: Enterprise Desktop Administrator on Windows 7

Which exams?  I’m glad you asked.  The upgrading IT Pro needs to take:

Exam # | Title | Aligned course
70-413 | Designing and Implementing a Server Infrastructure | 20413
70-414 | Implementing an Advanced Server Infrastructure | 20414
70-417 | Upgrading Your Skills to MCSA Windows Server 2012 | 20417

In other words, you will be upgrading your pre-existing cert to MCSA: Windows Server 2012, and then taking the remaining exams required for the MCSE.

The third MCSE that will be of interest to IT Pros is the MCSE: Desktop Infrastructure cert.  As with the others it requires the candidate to earn the MCSA: Windows Server 2012, and then take the following exams:

Exam # | Title | Aligned course
70-415 | Implementing a Desktop Infrastructure | 20415
70-416 | Implementing Desktop Application Environments | 20416

If you previously held the MCITP: Enterprise Desktop Administrator on Windows 7 then you can upgrade by taking the following exams:

Exam # | Title | Aligned course
70-415 | Implementing a Desktop Infrastructure | 20415
70-416 | Implementing Desktop Application Environments | 20416
70-417 | Upgrading Your Skills to MCSA Windows Server 2012 | 20417
70-417 Upgrading Your Skills to MCSA Windows Server 2012 20417

There are actually five other MCSE paths, which are:

MCSE: Messaging

MCSE: Data Platform

MCSE: Business Intelligence

MCSE: Communication

MCSE: SharePoint

That I do not discuss these is not a judgment; they are simply outside of my wheelhouse, as it were… If you would like more information about any of these, visit Microsoft Learning’s MCSE landing page.

The Unfinished Pyramid

You will notice that the MCSA and MCSE pyramids that we use are progressive… the MCSA has one level finished, the MCSE has two levels finished.  That is because there is another level of certifications above these, which is now called the Microsoft Certified Solutions Master.  This is the highest certification that Microsoft Learning offers, and only a few individuals will qualify.  It is a real commitment but if you think you are ready for it, I would love to point you in the right direction.  Personally I am happy with my MCSE: PC and don’t expect I will ever be a Master.

At present there are four MCSM tracks:

MCSM: SharePoint

MCSM: Data Platform

MCSM: Communication

MCSM: Messaging

It should be noted that of these only the MCSM: Data Platform is currently available; the others will be made available in 2013.

Also at the very top of the pyramid there is one more level – the Microsoft Certified Architect (MCA).  There are currently four MCA certifications:

MCA: Microsoft Exchange Server

MCA: Microsoft SharePoint Server

MCA: Microsoft SQL Server

MCA: Windows Server: Directory

Achieving the MCA requires a lot more than just exams.  It is a long and grueling process which in the end will likely leave you drained, but with the highest certification that Microsoft offers.

I should tell you that these last two senior certs are not for most people.  They are only for the very top professionals with in-depth experience designing and delivering IT solutions for enterprise customers, and even then only for those who possess the technical and leadership skills that truly differentiate them from their peers.

Keep it up!

Several years ago Microsoft Learning tried to retire older MCSEs – Windows NT and such.  They were unsuccessful because had they done so they would have breached the terms of the original certification.  In other words, because they never told candidates in advance that they would retire them, they couldn’t retire them.  It is not uncommon for me to hear from someone who is an MCSE, but they haven’t taken an exam since the 1990s.  In fact the logo for MCSE on Windows NT is the same logo as for MCSE on Windows Server 2003, and those MCSEs will be allowed to use that logo forever.

In 2006 they made it a little easier to differentiate.  Not only would certifications be tied to a technology (MCITP: Enterprise Administrator on Windows Server 2008), but they would, in theory, be retired along with support for that technology.  So an MCITP on Windows Vista would not be able to use the cert past a certain date.  Unfortunately I found that people did not refer to their entire cert; they would simply say ‘I am an MCITP!’  In other words, without some clarification it was pretty difficult to determine what technology they really knew.  Additionally it is not uncommon for some pros to have several MCITP certs, making them quite difficult to list on a business card or e-mail signature.

Now Microsoft Learning has really made an improvement to this issue.  The new MCSE certifications will require that you show continued understanding of the latest versions of the technology area by taking a recertification exam every three years.  While there was some talk of this with the MCITP program it did not come to fruition.  Today however this recertification requirement is clearly outlined on the MCSE pages.

While recertifying may seem like a bother to some, as we discussed earlier it is something we choose to do every three years to remain current anyways.  For those of us who do want to always remain current it is nice to know that we don’t have to start from scratch with every new product cycle.  Those for whom remaining current is not as important will always be able to say ‘I was an MCSE, but I let my certs lapse.’  It shows that they do know the technology, just not necessarily the most current version.  This should be sufficient for a lot of people, who often tell me ‘my clients don’t need the latest, and are not going to upgrade every three years!’

What About Small Biz?

I spent several years specializing in SMBs.  The first time I took a certification exam I remember coming out of it upset about questions that started ‘You are the administrator for a company with 500 servers…’  No I am not!  At the time I couldn’t even fathom what that would be like.  So when Microsoft Learning started writing exams for SBS I was glad – not because I wanted to limit myself (I didn’t, and am glad of that today) but because I knew that there are lots of IT Pros out there who do work exclusively on smaller networks.

I do not know what will become of SMB-focused certifications now that Windows Small Business Server 2011 is to be the last SBS release.  I do not have any insight into whether there will be exams around Windows Server Essentials, but could envision a cert around the tying of that product with Windows 8 and Office 365.  I have not been asked, but it would make sense.  However I have heard from a lot of SMB IT Pros that certifications are not as important to them and their clients as we feel they are in the enterprise, and I accept that; the needs of the larger do not necessarily align with the needs of the smaller.  However only time will tell if Microsoft Learning will address this market.

So in the end, should I get certified?

I have long been of the opinion that certifications are key for any IT Professional who is serious about his or her profession.  It shows that they have enough respect for their profession to be willing to prove not just that they know how to do it, but that they do IT right.  Certifications are not for IT hobbyists, or people who dabble.  They are for the professionals who earn their living in IT, and who wish to differentiate themselves from other candidates for jobs, contracts, or promotions.

Whether you have been working in IT for years, or are fresh out of school and looking to embark on a career in IT, there are likely scores if not hundreds of candidates who will be competing with you for every job.  Why not take this opportunity to distinguish yourself?  No matter how much some people will denigrate their relevance, I have spoken to many hiring managers who have confirmed for me time and again that they are a key indicator of a candidate’s suitability to technical positions.

The Haiku Goes On…


Yesterday I began my journey to improve the world through haikus about Windows Server 2012.  Here is my second Windows Server 2012 haiku!  And remember… if you like the poem, you will love the product!  Download your evaluation copy today!

Also… if you have a poem about Windows Server, Hyper-V, Windows 8, or Office 2013 I will publish it here for you!

Hyper virtual

V will lead the industry

Competitors cringe

When I’m Sixty-Four… TERABYTES!


Okay, I am asking for a show of hands: How many of you remember 100MB hard drives? 80? 40?  While I remember smaller, my first hard drive was a 20 Megabyte Seagate drive.  Note that I didn’t say Gigabytes…

Way back then the term Terabyte might have been coined already as a very theoretical term, but in the mid-80s most of us did not even have hard drives – we were happy enough if we had dual floppy drives to run our programs AND store our data.  We never thought that we could ever fill a gigabyte of storage, but were happier with hard drives than with floppies because they were less fragile (especially with so many magnets about).

Now of course we are in a much more enlightened age, where most of us need hundreds of gigabytes, if not more.  With storage requirements growing exponentially, the 2TB drives that we used to think were beyond the needs of all but the largest companies are now available for consumers, and corporations are needing to put several of those massive drives into SAN arrays to support the ever-growing database servers.

As our enterprise requirements grow, so must the technologies that we rely on.  That is why we were so proud to announce the new VHDX file format, Microsoft’s next-generation virtual hard drive format, which has by far the largest capacity of any virtualization technology on the market – a whopping 64 terabytes.

Since Microsoft made this announcement a few months ago several IT Pros have asked me ‘Why on earth would I ever need a single drive to be that big?’  A fair question, one that reminds me of the old (probably apocryphal) Bill Gates quote that none of us would ever need more than 640KB of RAM in our computers.  The truth is that big data is becoming the rule and not the exception.

Now let’s be clear… it may be a long time before you need 64TB on a single volume.  However rather than questioning the limit, let’s look at the previous limit – 2TB.  Most of us likely won’t need 64TB any time soon; however over the last couple of years I have come across several companies who did not think they could virtualize their database servers because of 2.2TB databases.

Earlier this week I got an e-mail from a customer asking for help with a virtual to physical migration.  Knowing who he reached out to, this was an obvious cry for help.

‘Mitch we have our database running on a virtual machine, and it is running great, but we are about to outgrow our 2TB limitation on the drive, and we have to migrate onto physical storage.  We simply don’t have any other choice.’

As a Technical Evangelist my job is to win hearts and minds, as well as educate people about new technologies (as well as new ways to use the existing technologies that they have already invested in).  So when I read this request I had several alternate solutions for them that would allow them to maintain their virtual machine while they burst through that 2TB ‘limit’.

  1. The new VHDX file format shatters the limit, as we said.  In an upcoming article I will explain how to convert your existing VHD files to VHDX (a quick sketch follows this list).  The one caveat: if you are using Boot from VHD from a Windows 7 (or Server 2008 R2) base then VHDX files are not supported.
  2. Storage Pools in Windows Server 2012 allow you to pool disks (physical or virtual) to create large drives.  They are easy to create and to add storage to on the fly.  I expect these will be among the most popular new features in Windows Server 2012.
  3. Software iSCSI Target is now a feature of Windows Server 2012, which means that not only can you create larger disks on the VM, you can also create large Storage Area Networks (SANs) on the host, adding VHDs as needed and giving access as BitLocker-encrypted Cluster Shared Volumes (CSVs), another new functionality of the new platform.
  4. New in Windows Server 2012, you can now create a virtual connection to a real Fibre Channel SAN LUN.  As large a volume as you can create on the SAN is your limit – in other words if you have the budget your limit would be petabytes!
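As promised in point 1, here is a quick sketch of the conversion (the paths are hypothetical, and the VM should be shut down first):

# Convert the existing .vhd file to the .vhdx format
Convert-VHD -Path D:\VMs\Data.vhd -DestinationPath D:\VMs\Data.vhdx

# Once converted, the disk can grow past the old 2TB ceiling
Resize-VHD -Path D:\VMs\Data.vhdx -SizeBytes 4TB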

With all of these options available to us, the sky truly is the limit for our virtualization environments… Whether you opt for a VHDX file, Storage Pool, Software- or Hardware-SAN, Hyper-V on Windows Server 2012 has you covered.  And if none of these are quite right for you, then migrating your servers into an Azure VM in the cloud offers yet more options for the dynamic environment, without the capital expenses required for on-premises solutions.

Knowing all of this, there really is no longer any reason to do a V2P migration, although of course there are tools that can do that.  There is also no longer a good reason to invest in third-party virtualization platforms that limit your virtual hard disks to 2TB.

Adaptable storage the way you want it… just one more reason to pick Windows Server 2012!

From Server Core to GUI to… MinShell?

This post was originally written for the Canadian IT Pro Connection blog, and can be seen there at http://blogs.technet.com/b/canitpro.

In Windows Server 2008 we were introduced to a revolutionary way to install Windows Server: Server Core.

Server Core may look boring – there’s nothing to it except the command prompt – but to an IT Pro it is really exciting for several reasons:

  • It requires fewer resources, so in a virtualization environment you can optimize your environment even more than previously;
  • It has a smaller attack surface, which makes it more secure;
  • It has a smaller patch footprint, which means less work for us on Patch Tuesdays; and
  • We can still use all of the familiar tools to manage it remotely, including System Center, MMC Consoles, and PowerShell.

Despite all of these advantages, in my experience a lot of IT Pros did not adopt Server Core.  Simply stated, they like the GUI (Graphical User Interface) manageability of the full installation of Windows Server.  Many do not like command lines and scripting, and frankly many are just used to the full install and did not want to learn something new.  I have even met some IT Pros who simply click the defaults when installing the OS, so they always ended up with the full install.

In Windows Server 2012 the default installation option is now Server Core.  This is not done to confuse people; going forward most servers are going to be either virtualization hosts or virtual machines, and either way Server Core is (more often than not) a great solution.

Of course, if you accept that default and did not want Server Core you are still in good shape, because new in Windows Server 2012 you can add (or remove) the GUI on the fly.  You can actually switch between Server Core and the full (GUI) install whenever you want, making it easier to manage your servers.

There are a couple of ways to install the GUI from the command prompt, although both use the same tool – DISM (Deployment Image Servicing and Management).  When you are doing it for a single (local) server, the command is:

Dism /online /enable-feature /featurename:ServerCore-FullServer /featurename:Server-Gui-Shell /featurename:Server-Gui-Mgmt

While the Dism tool works fine, one of the features that will make you want Windows Server 2012 on all of your servers now is the ability to manage them remotely, and script a lot of the jobs.  For that Windows PowerShell is your friend.  The script in PowerShell would be nearly as simple:

Import-Module Dism
Enable-WindowsOptionalFeature -Online -FeatureName ServerCore-FullServer, Server-Gui-Shell, Server-Gui-Mgmt


It takes a few minutes, but once you are done you can reboot and presto, you have the full GUI environment.
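For what it’s worth, the ServerManager module offers an equivalent route back to the full GUI (a sketch – if the GUI bits were never installed on the machine you may also need -Source pointing at the installation media, as discussed below):

# Add the graphical management infrastructure and the shell, then reboot
Install-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart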

While that in and of itself is pretty amazing, we are not done yet.  There is a happy medium between Server Core and Full GUI.

MinShell (Minimum Shell) offers the administrator the best of both worlds.  You have the GUI management tools (Server Manager) but no actual GUI, which means that you are still saving resources, have a smaller attack surface, less of a patch footprint, AND full manageability!

What the product development team has done is simple: they made the GUI tools a Server Feature… in fact, they made it three separate features.  Under User Interfaces and Infrastructure there are three options that allow the server administrator to customize the visual experience according to his needs.

The Graphical Management Tools and Infrastructure feature is the Server Manager, along with the other GUI tools that we use every day to manage our servers.  It also includes the Windows PowerShell Integrated Scripting Environment (ISE), which gives administrators an easier way to create and manage their PowerShell scripts.

The Desktop Experience gives the administrator the full desktop experience – similar to the Windows 8 client OS – including features such as Picture and Video viewers.

The Server Graphical Shell is exactly that: the GUI.  In other words we can turn the GUI on or off by using the Add Roles and Features Wizard (and the Remove Roles and Features Wizard).
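For reference when scripting these changes, this is how I understand the three options map to PowerShell feature names (a quick way to check their install state):

# User Interfaces and Infrastructure features and their PowerShell names:
#   Graphical Management Tools and Infrastructure -> Server-Gui-Mgmt-Infra
#   Desktop Experience                            -> Desktop-Experience
#   Server Graphical Shell                        -> Server-Gui-Shell
Get-WindowsFeature Server-Gui-Mgmt-Infra, Desktop-Experience, Server-Gui-Shell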

Now there are a number of catches to remember:

First of all, when you go down to MinShell the Add Roles and Features Wizard is still available; in Server Core it is not.  Make sure you have this article on hand if you do go down to Server Core.

Next, if you install the full GUI and then remove the components, re-adding them isn’t a problem; however if you choose a Server Core installation from the outset then the GUI (and management) bits are never copied to the drive, which means that if you want to add them later you will need to have the installation media handy.

Hard drive space is pretty cheap, so it is tempting to install the full GUI every time and then remove it (so that the bits will be there when you want them).  However remember that with Hyper-V in Windows Server 2012 the limits are pretty incredible, and it is entirely possible that you will have up to 1,024 VMs on a host; at that scale the megabytes required for the GUI bits in each VM could add up.

Whether you opt for the Server Core, Full GUI, or the MinShell compromise, Windows Server 2012 is definitely the easiest Server yet to manage, either locally or remotely, one-off commands or scripts.  What I expect admins will be most excited about is the choices.  Run your servers your way, any way!