
Category Archives: Windows Server 2012

Help! My free storage cannot be used for an iSCSI Virtual Disk!

I have written at length in the past about both Storage Spaces (Storage Pools: Dive right in!) and Software iSCSI Targets (iSCSI Storage in Windows Server 2012), as well as Failover Clusters (See list).  These are all great technologies that IT Professionals should know.

Recently I got an e-mail from a reader who had all of the pieces in place, but he couldn’t create the iSCSI virtual disk.  “…Even though I have plenty of space available, the New iSCSI Virtual Disk Wizard is showing no eligible servers available.  What am I doing wrong?”

There is a caveat that I am sure is documented somewhere, but I haven’t seen it.  You cannot create an iSCSI Virtual Disk on a server that has Failover Clustering installed.  Now, this doesn’t actually mean that you cannot have an iSCSI Target Server that is clustered… but unlike the chicken and the egg, we absolutely know which has to come first here… You have to create your iSCSI Virtual Disks before you install Failover Clustering.
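For those who like to check before clicking, here is a rough PowerShell sketch of that order of operations. The server name, path, and size are placeholders, and the cmdlets assume the iSCSI Target Server role and the Windows Server 2012 R2 iSCSITarget module are already in place:

```powershell
# If Failover Clustering is already installed, the New iSCSI Virtual Disk
# Wizard will show no eligible servers
Get-WindowsFeature -Name Failover-Clustering -ComputerName Server1

# Create the iSCSI virtual disks FIRST...
New-IscsiVirtualDisk -Path C:\iSCSIVirtualDisks\LUN1.vhdx -SizeBytes 40GB

# ...and only then install Failover Clustering
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName Server1
```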

I hope this clears things up!

Step by Step: Building a Scale-Out File Server (SoFS) on Windows Server 2012 R2

In several presentations since Windows Server 2012 was released I have heard Microsofties claim that SAN devices are a thing of the past.  I have a hard time getting on board with that, but have nonetheless told many an audience that if they are planning on throwing their SANs out because Microsoft said so then just let me know where they are doing it so I can come collect them.

That is not to say that Windows Server 2012 (and now 2012 R2) has not changed the storage game significantly.  I have lectured extensively on Storage Pools (also known as Storage Spaces), and Cluster Shared Volumes are huge.

However since the world is going virtual, perhaps the most important storage use right now is the storage of virtual machines… and if I may borrow the motto of the Olympic Games, the order of this era of computing is “Citius, Altius, Fortius…” or Faster, Higher, Stronger. The storage on which we trust our virtual machines must be faster than ever, with higher availability than ever, and more resilient than ever.  In short, we are trying to deliver perfection… just like an Olympian.

So how are we going to architect our Olympian storage solution for our virtual machines?

Scale-Out File Servers (SoFS).

SoFS is a redundant, Active/Active clustered file server based on SMB (Server Message Block) 3.0.  You aren’t going to build an SoFS cluster for your normal file servers – there are plenty of great technologies available for that, ranging from DFS-R to Clustered File Servers.  Rather, SoFS is designed for high usage, always-open files, like virtual machines and SQL Server databases.


If you are building this out in a lab you can get away with less, but in my experience, in production you need a cluster that is separate from your Hyper-V cluster.  I also prefer building my SoFS on physical hardware rather than virtual, but this is negotiable.

To get started you have to make sure you have storage that can be added to a Failover Cluster.  This can be done with Storage Pools, but I’ll do it with virtual disks.

Step 1: Build your Failover Cluster. I have already written about how to build a Failover Cluster.  Follow Steps 1 & 2 in this article and you will be good to go.  You can even go through the observational components of Step 3 to verify quorum and the like, but do not do the storage part.

Step 2: Configure Storage.

This is the part that tripped me up the first few times I tried to get it to work, mostly because I didn’t RTFM.  In my defense, though, a lot of what tripped me up initially was not spelled out very clearly in the reading material.

In order for disks to be added to the cluster, they must be shared by all of the nodes of the cluster.  If you are building a Software SAN on Windows Server 2012 R2 you can follow the instructions in this article.  Finding the SAN LUNs from Windows Server just requires the iSCSI Initiator.  Ensure that all nodes of your cluster are connected.

Incidentally, having the shared storage may be enough to add it to your cluster, but to make this work the LUNs must have a formatted partition on them.  This was one of the gotchas I discovered along the way.
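If you prefer to script it, here is a hedged sketch of connecting a node to the target and laying down that formatted partition. The portal address and volume label are placeholders:

```powershell
# Point the initiator at the target portal and connect (run on each node)
New-IscsiTargetPortal -TargetPortalAddress 10.0.0.50
Get-IscsiTarget | Connect-IscsiTarget

# The cluster will only use LUNs that carry a formatted partition,
# so initialize and format any raw disks that just appeared
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'SoFS-Data'
```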

**VERY IMPORTANT NOTE:  If your storage is ready to go when you create the cluster then you are fine, but remember that any time you make changes to your cluster you need to re-run the Validation Tests.  As long as these tests come back as ‘Suitable for Clustering’ you are fine; if they do not, Microsoft will not support you in your time of need.
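The validation tests can also be re-run from PowerShell; this one-liner assumes a cluster named Toronto:

```powershell
# Re-run the full validation suite after any change to the cluster
Test-Cluster -Cluster Toronto
```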

So in Failover Cluster Manager if you expand <Cluster Name> – Storage – Disks you should see in the main window a list of all of your drives.


In this screenshot we see that the disk we want to use (Cluster Disk 5) is assigned to Available Storage.  That is the disk we are going to use.  Right-click on it, and click Add to Cluster Shared Volumes.
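The same step in PowerShell, assuming the disk shows up as ‘Cluster Disk 5’ as in the screenshot:

```powershell
# Confirm the disk is sitting in Available Storage...
Get-ClusterResource | Where-Object ResourceType -eq 'Physical Disk'

# ...then promote it to a Cluster Shared Volume
Add-ClusterSharedVolume -Name 'Cluster Disk 5'
```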

A Note about Cluster Shared Volumes (CSVs)

A Cluster Shared Volume is a pointer.  It takes my LUN and puts it on my C Drive… okay, not really.  However if I want to access my LUN from my server, I could refer to it as iqn.1991-05.com.microsoft:FS-fc-03-target-01-target:T0:L3, or I could assign it a drive letter.

The problem with assigning it a drive letter is that in a failover cluster multiple nodes need to access the same LUN, and drive letters are assigned by the server, not by the cluster.  To ensure proper functionality I would have to give each LUN the same drive letter across all nodes in my cluster… which is simple, as long as the storage configuration (including hard drives & partitions, RAID arrays, CD/DVDs, and yes, even USB keys) is identical across every node in the cluster.  If not, then it’s a hassle.

What CSVs do for me is take the hassle out by assigning a pointer to each LUN on my C drive… under C:\ClusterStorage\ each CSV gets its own directory, which is really a portal to my LUN.  So the one we created in the previous step is called C:\ClusterStorage\Volume2.


**IMPORTANT NOTE: While it may work, do not use your CSVs for anything other than Hyper-V and Failover Clusters.  It will bite you eventually, and hard.

So now that I have my storage in place, let’s go ahead and build our Scale-Out File Server.

Step 3: Creating the Scale-Out File Server

SoFS is a clustered role.

1. In the Navigation Pane of Failover Cluster Manager right-click Roles and click Configure Role…

2. In the Before You Begin page read the notes and click Next.

3. In the Select Role page select File Server and click Next.

4. In the File Server Type page select the radio button Scale-Out File Server for application data and click Next.  Note the warnings that SoFS does not support the NFS protocol, DFS Replication, or File Server Resource Manager.

5. In the Client Access Point page type a name for your SoFS and click Next.  Note that the name must be NetBIOS compliant.

6. On the Confirmation page ensure your information is correct and click Next.

When you are done click Close and then navigate to Roles.  You should see your role all ready.
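Or, if you would rather skip the wizard, the whole role can be created with a single cmdlet; the role and cluster names here are examples:

```powershell
# Create the Scale-Out File Server role; the name must be NetBIOS compliant
Add-ClusterScaleOutFileServerRole -Name SoFS01 -Cluster Toronto
```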


Step 4: Creating a File Share

This is where all of the steps we went through before are important… you can only create a File Share for your SoFS on high availability storage.

1. Right-click on your SoFS role and click Add File Share. (If you get an error see this article, wait, then try again in a few minutes.)

2. In the Select the profile for this share window select SMB Share – Applications and click Next.

3. In the Select the server and path for this share window select your SoFS by name in the list of servers.  Ensure the radio button Select by volume is selected.  Select the disk you want to create it on, and then click Next.

**NOTE: Notice that the volumes available are actually your CSVs, and that the File System listed is CSVFS.

4. In the Specify share name window type the name of your share, along with any notes you wish.  Note that the remote path to the share will be \\SoFSName\ShareName.  Click Next.

5. In the Configure Share settings window notice that several options are greyed out, including the Enable continuous availability option, which is forced.  Your only choice here is whether to Encrypt data access, which you can do for security.  Click Next.

6. In the Specify permissions to control access window you can modify the permissions, but remember that it is the Hyper-V hosts that will need access.  Click Next.

7. On the Confirm selections page ensure your settings are correct, then click Create.


8. On the View results page ensure all steps are marked Completed, then click Close.
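The wizard steps above can also be collapsed into PowerShell; this is a sketch only, with the share name, CSV path, role name, and Hyper-V host group all placeholders:

```powershell
# Create the folder on the CSV, then share it scoped to the SoFS role
New-Item -ItemType Directory -Path C:\ClusterStorage\Volume2\Shares\VMs
New-SmbShare -Name VMs -Path C:\ClusterStorage\Volume2\Shares\VMs `
    -ScopeName SoFS01 -ContinuouslyAvailable $true `
    -FullAccess 'CONTOSO\Hyper-V-Hosts'
```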

We’re done… Your Scale-Out File Server is ready to go.  All you have to do is start migrating your VMs from where they were to (in this case) \\ServerName\PDisks. You can click on your role, and at the bottom of the screen select the Shares tab, and there it is… or in the case of this system, there they are, because yes, you can have multiple file shares on a single SoFS role.

Caveat Admin

Microsoft and Hyper-V, along with a lot of guidance from people like the author, have made virtualization available to anyone.  With Failover Clusters they have made high availability easier than ever.  However, Icarus, please remember that the solid foundation on which these tools are built depends on the integrity of your waxen wings; just because you are able to create something does not mean you know how to maintain it.  If you don’t believe me, go ask any single mother with a dead-beat absent father.  The fact that Windows Server 2012 R2 makes these tasks so easy does not change the fact that this is still 400-level stuff, and proper education and certifications are always recommended before you bite off more than you can chew.  All of the resources you need are available; you just have to look for them.  Start at http://www.microsoftvirtualacademy.com, and go from there.

Insanity Is…


Insanity: Doing the same thing over and over and expecting different results.

We have all heard this quote (usually attributed to Einstein) before… and it is exactly true.  However in your server environment, when you want things identical, we would turn the phrase around:

Insanity: Doing things manually over and over and expecting identical results.

I have not spent a great deal of time learning PowerShell… but whenever I have a task to do, such as installing a role or a feature, I try to do it with PowerShell.  I actually leverage another of Einstein’s great axioms:

Never memorize something that you can look up.

The Internet is a great tool for this… I can look up nearly anything I need, especially with regard to PowerShell.

So previously, when I wanted to install a role on multiple servers I would run a series of cmdlets:

PS C:\> Install-WindowsFeature Failover-Clustering -IncludeManagementTools -ComputerName Server1

PS C:\> Install-WindowsFeature Failover-Clustering -IncludeManagementTools -ComputerName Server2

PS C:\> Install-WindowsFeature Failover-Clustering -IncludeManagementTools -ComputerName Server3

Of course, this would work perfectly.  However recently I was looking up one of the cmdlets I needed on the Internet and stumbled across an easier way to do it… and especially when I want to run a series of identical cmdlets across the multiple servers.  I can simply create a multi-server session.  Watch:

PS C:\> $session = New-PSSession -ComputerName Server1,Server2,Server3

PS C:\> Invoke-Command -Session $session {Add-WindowsFeature Failover-Clustering -IncludeManagementTools}

Two lines instead of three doesn’t really make my life a lot easier… but let’s say I was doing more than simply adding a role… this could save me a lot of time and, more importantly, ensure uniformity across my servers.

Creating a PSSession is great for things like initial configuration of servers… think of all of the tasks you perform on every server in your organization… or even just every web server, or file server.  This will work for Firewall rules, and any number of other settings you can think of.
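As a sketch of that idea, here is how a single session object can push identical configuration to every node; the firewall rule group and service shown are examples only:

```powershell
# One session object fans every command out to all three servers
$session = New-PSSession -ComputerName Server1,Server2,Server3

Invoke-Command -Session $session {
    # Everything in this block runs identically on every node
    Enable-NetFirewallRule -DisplayGroup 'File and Printer Sharing'
    Set-Service -Name W32Time -StartupType Automatic
}

# Clean up the sessions when finished
Remove-PSSession $session
```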

Try it out… It will save you time going forward!

Cluster-Aware Updates: Be Aware!

When I started evangelizing Windows Server 2012 for Microsoft, there was a long list of features that I was always happy to point to.  There are a few of them that I have never really gone into detail on, that I am currently working with.  Hopefully these articles will help you.

Cluster Aware Updates (CAU) is a feature that does exactly what it says – it helps us to update the nodes in a Failover Cluster without having to manually take them down, put them into maintenance mode, or whatever else.  It is a feature that works in conjunction with our patch management servers as well as our Failover Cluster.

I have written extensively about Failover Clusters before, but just to refresh, we need to install the Failover Clustering feature on each server that will be a cluster node:

PS C:\> Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName <ServerName>

We could of course use the Server Manager GUI tool, but if you have several servers it is easier and quicker to use Windows PowerShell.

Once this is done we can create our cluster.  Let’s create a cluster called Toronto with three nodes:

PS C:\> New-Cluster -Name Toronto -Node Server1, Server2, Server3

This will create our cluster for us and assign it a dynamic IP address.  If you are still skittish about dynamic IP you can add a static IP address by modifying your command like this:

PS C:\> New-Cluster -Name Toronto -Node Server1, Server2, Server3 -StaticAddress <IPAddress>

Great, you have a three-node cluster.  So now onto the subject at hand: Cluster Aware Updates.

You would think that CAU would be a default behaviour.  After all, why would anyone NOT want to use it?  Nonetheless, you have to enable the clustered role yourself.

PS C:\> Add-CauClusterRole -EnableFirewallRules

Notice that we are not using the -ComputerName switch.  That is because we do not install the role to the individual servers but to the cluster itself.  You will be asked: Do you want to add the Cluster-Aware Updating clustered role on cluster “Toronto”? The default is YES.

By the way, in case you are curious, the firewall rule that you need to enable is the ‘Remote Shutdown’ rule.  This enables Cluster-Aware Updating to restart each node during the update process.
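If you ever need to enable that rule by hand, something like this should do it; the node names are placeholders:

```powershell
# Add-CauClusterRole -EnableFirewallRules does this for you; shown here
# is the manual equivalent on each node
Invoke-Command -ComputerName Server1,Server2,Server3 {
    Enable-NetFirewallRule -DisplayGroup 'Remote Shutdown'
}
```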

Okay, you are ready to go… In the Failover Cluster Manager console right-click on your cluster, and under More Actions click Cluster-Aware Updating.  In the window Failover – Cluster-Aware Updating click Apply updates to this cluster.  Follow the instructions, and your patches will begin to apply to each node in turn.  Of course, if you want to avoid the management console, all you have to do (from PowerShell) is run:

PS C:\> Invoke-CauRun

However be careful… you cannot run this cmdlet from a server that is a cluster node.  So from a remote system (I use my management client that has all of my RSAT tools installed) run:

PS C:\> Invoke-CauRun -ClusterName Toronto

You can watch the PowerShell progress of the update… or you can go out for ice cream.  Just make sure it doesn’t fail in the first few seconds, because it will take some time to run.
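When you get back from ice cream, these cmdlets will tell you how the run went (the cluster name is an example):

```powershell
# Peek at a run in progress...
Get-CauRun -ClusterName Toronto

# ...or review the results of the most recent run in detail
Get-CauReport -ClusterName Toronto -Last -Detailed
```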

Good luck, and may the cluster force be with you!

Expand your knowledge on Windows Server 2012!

Okay, we know that you are probably upset that Windows Small Business Server is being retired.  Fortunately Windows Server 2012 R2 will serve you well… but do you know everything you will ever need to know about Windows Server 2012 R2 for the SMB space? Probably not… but that’s okay, because we are here to help!  Microsoft Canada is offering a free webinar with a colleague of mine that will really help.

Join Sharon Bennett, Microsoft’s SMB Technology Advisor, to learn about the key benefits of Windows Server 2012.  Topics include:

  • How to upgrade from Windows Server 2003 to Windows Server 2012
  • SBS migration path
  • ROK – Reseller Option Kit
  • CALs – Client Access Licenses

Register early as spots are limited. You will also have a chance to receive an exciting giveaway during the webinar!

Date: Feb 24, 2014

Time: 2-3pm EST

Register here: https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032577602&Culture=en-CA&community=0

Server Core Address Woes

From the files of “What the F@rk?!”

Here’s a little gotcha that I’ve been wrestling with all afternoon.  I hope this post can save some of you the frustration (exacerbated by jetlag) that I have been experiencing.

I am configuring a bunch of virtual machines as domain controllers for the company I am consulting for in Japan.  Things are going really smoothly on the project, but we wanted to spin up half a dozen DCs for the new environment, so I figured I’d just spend a few minutes on it.  Then I had to configure the IP addresses… something I have done thousands of times, both in Server Core and the GUI.  I have never encountered THIS before.

Server 1: Done.

Server 2: Done

Server 3: NO

Server 4: Done

…and so on.  I went back to Server 3 figuring there was a bit of a glitch, and sure enough, it had an APIPA (Automatic Private IP Addressing) address assigned.

I loaded up the sconfig menu and set the IP address by hand.  The weirdest thing happened… it replaced my IPv6 address with the Class A address I assigned, and left the APIPA address in place.

I went down to the command line… netsh interface ipv4 set address name=”Ethernet” static 10.x.y.z…  and it still gave me an APIPA address.

I was getting frustrated… something was simply not going right.  And then it occurred to me… someone else was playing on my network.  Sure enough, he had already assigned that address.  Instead of giving me a warning, Windows simply would not duplicate an address that already existed.
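A quick sanity check before assigning a static address can save an afternoon; this is a minimal sketch with a placeholder address:

```powershell
# If something already answers on the address, don't assign it
if (Test-Connection -ComputerName 10.1.2.3 -Count 2 -Quiet) {
    Write-Warning '10.1.2.3 is already in use on this network!'
}
```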

Now if I had already implemented my monitoring solution, this would never have happened!

What Have You Got?

With Windows 8.1 less than three weeks from GA, and Windows XP less than 200 days from end of support (#EndOfDaysXP on Twitter), I thought it would be a good time to write about the Microsoft Assessment and Planning Toolkit again, but only in the context of Windows 8 Readiness and maintaining a software and hardware inventory of the machines within your organization.

I used to work for a man who said that if you cannot measure it, you cannot manage it.  These are words I have lived by ever since.

The problem is it gets difficult to keep track of what you have in your IT environment, especially in environments where users are allowed to install their own software.  Don’t forget that software extends far beyond the major packages like Microsoft Office, it also includes things like readers and players.  Many driver packages will also install their own software, whether you realize it or not.

So how do you keep track?  The simple solution is to use a tool like the Microsoft Assessment and Planning Toolkit.  The MAP Toolkit is a Microsoft Solution Accelerator that will take an inventory of all of your machines.  Of course it does a lot more than that, like planning for virtualization and private/public clouds, but if you simply want to know what software you have installed, run the toolkit.

Downloading and Installing

The MAP Toolkit is a free tool from Microsoft, and can be downloaded from www.microsoft.com/solutionaccelerators.  The current iteration is MAP 8.5, and it is a 74 MB download.

Before you install it, you will need to have the .NET Framework 4.0, plus the 4.0.2 update.  If you are installing on Windows 8.1 it is there, but if you are on Windows 7 then you will need to download them.  The links are on the MAP Toolkit download page under System Requirements.

The installation is a PhD (Press here, Dummy!) installer… just keep pressing Next.  You do have to either opt in or out of the CEIP (Customer Experience Improvement Program), and you do have to agree to the license terms.

The installer will install Microsoft SQL Server Express LocalDB if you do not have SQL Server installed (most of us do not have it on our laptops).

Getting Started

Before you begin you have to either create an inventory database, or use an existing one.  Let’s assume you don’t have one already, and name your database.  I usually name it after the company where I am consulting, as you can run the tool for multiple companies on the same machine.

In the MAP Toolkit 8.5 there are eight scenarios you can choose from.


For the sake of this article we are going to stick with the second (Desktop) option, although you can experiment with the others as you wish.  In the navigation bar select the third tab (Desktop).

In order to do anything we need to collect the inventory.  In the Desktop screen at the top click Collect inventory data.

Because Microsoft realizes that there are a few non-Windows computers out there, you can select both Windows computers and Linux/UNIX computers in the Inventory Scenarios window and click Next.  (Note: If you are only doing Windows it will use WMI; if you are doing Linux as well it uses SSH.)

In the Discovery Methods window you have to determine which method you will use to discover computers.  The default is to use Active Directory.  You can also use other Windows networking protocols, SCCM, Scan an IP range, Manually enter computer names and credentials, or import computer names from a file.  Select your option then click Next.

On the next screen you have to enter the domain name, plus credentials.  This is the first of two places where you will be asked; for this time it is only to scan the Active Directory for the next step.  If you are not a domain admin then this is where you have to go ask someone who is for their assistance.  Once the information is entered click Next.

On the Active Directory Options screen you can determine whether you want to scan the entire domain (including sub-domains), or only a segment.  In a large organization the second option is probably smarter.  Once done click Next.

On the All Computer Credentials screen you need to create accounts that will actually be able to scan the computers themselves.  You may want to create multiple users (one for Active Directory, one for Linux, for example) for different types of systems.  Also if there are systems in different OUs and Domain Admin does not have access, you can create multiple accounts.

In the Credentials Order screen you can select which credentials to try first.  If you have thousands of AD computers and only a few Linux machines it makes sense that WMI is first; once a credential authenticates the tool will not try to use others.

On the Connection Properties screen you can change the TCP port that SSH uses to authenticate; by default it is Port 22.

On the Summary screen you can review your choices, then click Finish!  Your inventory is ready to run.


The Inventory and Assessment window will begin detecting machines on the network.  Depending on the number of machines it can take quite some time, so be patient.  These numbers will continue counting up (Machines Inventoried) and down (Collections Remaining) until they are all counted.

Getting to and using the data

Once the data is all collected you will get a screen with five different scenarios pertaining to the desktop:

  • Windows 8 Readiness
  • Windows 7 Readiness
  • Office 2010 Readiness
  • Office 2013 Readiness
  • Internet Explorer Discovery

These boxes should display what percentage (and how many) of your devices are ready for each.  However you can drill down and get more information, which is where the inventory component comes into play.  Simply click on the Windows 8 Readiness box and the screen will display the Details page.  It will also (in the upper right corner) allow you to Generate Windows 8 Readiness Report & Proposal.  Click on that button and the MAP Toolkit will create two files for you: A Word document that you can customize with your logo and name to give to the client or to your boss, and an Excel spreadsheet with a detailed inventory of all of your hardware and software.  These files will be located in the %username%\My Documents\MAP\CustomerName directory.

If you are going to use these files for upgrade readiness, then you will appreciate that the 3rd tab along the bottom of the spreadsheet has three very helpful columns: Reasons Not Meeting, After Hardware Upgrades, and Reasons Not Upgradeable.  You won’t be left wondering what is wrong with your systems, you will know why they can’t be upgraded (and what must be done to mitigate that).  I found this very helpful when I was deploying Windows 7 to my son’s school several years ago; rather than replacing 25 computers I replaced 25 video cards and memory chips, and the deployment went smoothly after that.

The complete list of information provided by this spreadsheet is as follows:


Windows 8 Readiness

  • Before Hardware Upgrades
  • After Hardware Upgrades

Assessment Values

  • Settings
  • CPU (GHz)
  • Memory (MB)
  • Free Disk (GB)
  • Flag Not Ready Video
Client Assessment

  • Computer Name
  • Current Windows 8 Category
  • Reasons Not Meeting
  • After Hardware Upgrades
  • Reasons Not Upgradeable
  • Notes
  • WMI Status
  • IP Address
  • Subnet Mask
  • Current Operating System
  • Service Pack Level

After Upgrades

  • Computer Name
  • IP Address
  • CPU
  • Memory
  • Hard Disk Free Space
  • Video Controller

Device Summary

  • Device Model
  • Manufacturer
  • Number of Computers with

Device Details

  • Computer Name
  • Device Model
  • Manufacturer

Discovered Applications

  • Application
  • Software Version
  • Number of Installed Copies
    The Word Document will also be a tremendous help… not because it contains more data than the spreadsheet, but because it explains it in terms than any CxO will understand, with charts and graphs and summaries, without having to review all of the raw data.  The document is written well enough to present proudly, and can be modified with your corporate logo and your name on it easily.


The MAP Toolkit is a useful tool for collecting inventory data, as well as for analyzing upgrade readiness, without needing any costly management tools (although it works very well in conjunction with System Center 2012 R2).  Aside from saving you tremendous amounts of time in the collection of data, it also provides handy spreadsheets and documents so that you can use the data most efficiently.  I have long said that it is one of the best free products on the market, and I stand by that assessment.

In this article we covered only a fraction of what the tool can do.  See what you can do with it for server virtualization and more!