The Harsh Realities of the Exam Room

A couple of weeks ago I wrote about how tough I found Exam 74-409 in my article Another Tough Exam.  I also mentioned that Microsoft exams are meant to be tough, and that going into an exam unprepared can (and usually will) come back to bite you.

Last week I decided to bite the bullet and try to take home at least three certifications in a single marathon day of exams… I was hoping to achieve my MCSA: Windows 8, MCSA: Windows Server 2012, and my MCSE: Desktop Infrastructure in a single bound by passing three exams:

70-416: Implementing Desktop Application Environments
70-417: Upgrading Your Skills to MCSA Windows Server 2012
70-688: Managing and Maintaining Windows 8

The goal was lofty, but I felt I was up to the challenge.  I was wrong… but not terribly so.

Before going on I should mention that I am no dummy… I am just very busy, and taking the time to sit exams one at a time is a bit of a pain for me – I would rather, when I have to, simply write two or three in a single day.  Of course, this greatly reduces my chances of passing all of them, but because of the Microsoft and Prometric Second Shot Free offer for Microsoft Certified Trainers (see article) there is less of a risk – MCTs get a discount on the cost of exams, as well as a Second Shot.  My financial gamble on this day was minimal.  I have, by the by, passed three exams in a single day once before: on May 3, 2011 I passed three MCTS exams on Windows Server 2008.  If I could do it once, I could surely do it again.

Wrong.

Passing three exams in a single day was not easy, but they were all on the same general technology – Windows Server 2008.  On this silly day I went after three exams – one on Windows 8 (which I would have been surprised had I failed), one on Desktop Application environments (Windows 8 applications with a healthy dose of Windows Server, Remote Desktop Services, App-V, Group Policy, Microsoft Office, and several deployment tools), and one on Windows Server 2012…kinda.

Upgrade Exams

Thinking back to my early days of certification marathons, I remember hearing about the horrors of Upgrade exams.  Essentially you are taking three exams in one.  The first Upgrade exam I sat was 70-292: Managing and Maintaining a Microsoft Windows Server 2003 Environment for an MCSA Certified on Windows 2000.  My success with this exam could be summed up with the old adage: third time’s a charm.  I passed it in June of 2006… over a year after my first attempt.

Although I did have success with the MCDST (Desktop Support Technician) upgrade exam 70-621: Upgrading your MCDST Certification to MCITP Enterprise Support, I did not fare nearly as well on the server side – 70-648 TS: Transition from Windows Server 2003 MCSA to Windows Server 2008 and TS: Transition from Windows Server 2003 MCSE to Windows Server 2008 (both of which I sat as beta exams and, coincidentally, on the same day) were not my finest hours.  I decided to sit all of the individual exams for those certifications rather than go the upgrade path again.

In hindsight, had I thought of that when scheduling the exams, I would not have done it.  Three exams in one day is mentally tough enough… add to that the fact that one of them is actually three exams in one, and even I wouldn’t have tried it.

I never got into a rhythm for the exam, and did not notice that it was not one exam with one block of time – it was actually three sections, each with its own sub-block of time.  Unfortunately I only realized this when, with ten unanswered questions in Section 1, a pop-up warned me that I had two minutes to complete the section.  Without reading anything I clicked through and selected an answer for as many as I could (four) before being forced to leave six questions unanswered.

Now that I knew this was the case, I managed my time for the remaining sections much better… but four blind darts and six blanks doomed me.

You did not pass the exam.

I do not remember the actual wording of it, but that’s what it said… I had felt pretty good going into that last ‘Are You Sure?!’ button, which is why I was heartbroken when it came up.  Damn damn damn.

Wait a minute… I did a double-take when I noticed that my score was below 600.  583?  No way, I know I did better than that, there MUST BE SOME MISTAKE!  I don’t know the procedures for challenging an exam result (nor do I know if there is such a procedure) but at the end of the day when I collected my score reports I was going to find out.

Okay, that was only one of the exams… the server exam, which I could re-sit next week sometime.  I got my mindset into the application environment.  It was a really tough exam, but I passed it with a pretty respectable score.  I then went on to the Managing Windows 8 exam, which after the ordeal of the two previous exams was like a walk in the park.  I am not saying that any end user – or an IT Pro who isn’t intimately familiar with Windows 8 – could pass without a lot of preparation, but I have lived Windows 8 every day for the last 2.5 years, and even though that last ‘Are You Sure?!’ button is always nerve-wracking, I passed very respectably.

Okay, good.  At least I could hold my head high with the knowledge that I would walk away with two Windows certifications today… MCSA: Windows 8, and MCSE: Desktop Infrastructure.  Now I could go look at the score report and go give someone a piece of my mind!

Wrong.

First the good news… I am not as much of a Windows Server bonehead as I thought.  I did not realize that for the Upgrade exam each section is marked as a complete exam… the score report actually comes out like this:

70-410: Installing and Configuring Windows Server 2012: 800
70-411: Administering Windows Server 2012: 583
70-412: Configuring Advanced Windows Server 2012 Services: 766

Aha… while the results of certification exams are really binary – Pass/Fail – I felt a lot better knowing that had they averaged my scores for the three sections I would have passed, and the abysmal score displayed on screen was just that of the lowest section – quite obviously the one on which I answered only two thirds of the questions.  Alright, I feel better about that, and now that I know, the next time I sit the exam I can manage my time properly (I’ll bet you if you scour my blog you will see that advice for exam takers) and pass with authority.

I was wrong about something else on this day though… Although I thought the prerequisites for the MCSE: Desktop Infrastructure were my MCSA: Windows 8 and the 70-416 exam, it turns out that the first prerequisite is actually my MCSA: Windows Server 2012… alas, I would only be walking away with one certification today, and not two as I was hoping and expecting.  With that said, if/when I do pass my 70-417 Upgrade exam, with one pass I will earn two senior certifications… and that ain’t all bad, as they say.

Conclusion

The old expression says that the shoemaker’s children go barefoot.  I got bitten by not following my own advice.  Fortunately Microsoft and Prometric have my back, and I can come back and re-sit the exam for free.  That is one piece of advice I did listen to – make sure you check for any offers such as the Second Shot before you register for your exam.  Although I have registered for several exams with previous similar offers, this is the first time I will need the safety net.  Just because you are confident does not mean you should be stupid… take any offer they will give you, and save your money.  I am glad I did!


Step by Step: Adding the GUI to Windows Server Core

HELP! Mitch, you told me that I should learn Server Core and I am trying, but you also told me that it wasn’t a problem to add the GUI back into a Server Core machine if I really needed it.  How do I do that?

This is a question I have gotten a few times from readers and students over the past year.  There are a couple of ways to do it, and depending on your situation you may need to try both of them.

Method 1: No problem!

You installed Windows Server with the full GUI previously, and then you removed the GUI.  This is the simplest scenario for our problem.  Here goes:

  1. Open PowerShell (powershell.exe)
  2. Run the cmdlet: Install-WindowsFeature Server-Gui-Mgmt-Infra,Server-Gui-Shell -Restart

Now, if you are really deathly afraid of the command line, you can connect to a server with Server Manager and use the Add Roles and Features wizard.  Either way will work just fine.  However here’s the catch… both of them depend on the bits for the GUI being on the server’s hard drive.  If you never installed the GUI then they won’t be.  At this point you have to move on to…

Method 2: Still no problem 🙂

You dove in head first and decided to get right into Server Core.  That’s just how you roll.  Unfortunately you discovered something that made you backpedal.  No problem, many fine IT Pros have made worse false starts than this.  It won’t be difficult, all you have to do is add the GUI features.  However since the bits are not on the drive, you have to add a source.  Follow these steps and you’ll be on your way!

      1. Create a folder on the C drive: MD c:\mount
      2. Check the index number for Server Datacenter (must be performed in a Command Prompt with elevated privileges): Dism /get-wiminfo /wimfile:<drive>:\sources\install.wim
      3. Mount the WIM file to the previously created directory using this command at the same elevated command prompt: Dism /mount-wim /WimFile:<drive>:\sources\install.wim /Index:<#> /MountDir:c:\mount /readonly
      4. Start PowerShell and run this cmdlet: Install-WindowsFeature Server-Gui-Mgmt-Infra,Server-Gui-Shell -Restart -Source c:\mount\windows\winsxs

(For the fun of it, PowerShell will accept your Command Prompt commands, so you can do all of the above in a PowerShell window.)
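Putting the Method 2 steps together, here is a minimal end-to-end sketch you could paste into a single elevated PowerShell window (the drive letter and index number are illustrative – check yours with the /get-wiminfo step above):

MD c:\mount
Dism /mount-wim /WimFile:d:\sources\install.wim /Index:2 /MountDir:c:\mount /readonly
Install-WindowsFeature Server-Gui-Mgmt-Infra,Server-Gui-Shell -Restart -Source c:\mount\windows\winsxs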

Again, if you have been soooo spooked by Server Core that you cannot bear to do this in the command prompt, do the following:

  1. Connect to a GUI-based server (or Windows 8.1 system with RSAT Tools) and open Server Manager.
  2. Right-click All Servers and click Add Servers.
  3. Find and add your server, ensuring that it reports as On-line.
  4. Click Manage and from the drop-down menu select Add Roles and Features.
  5. On the Before you begin page click Next.
  6. On the Select installation type page click Next.
  7. On the Select destination server page select your Server Core machine from the list and click Next.
  8. On the Select server roles page click Next.

9. In the Select features page scroll down to User Interfaces and Infrastructure.  Expand the selection, then select Graphical Management Tools and Infrastructure and Server Graphical Shell.  Click Next.

10. In the Confirm installation selections page click on Specify an alternate source path.

11. In the Specify Alternate Source Path page enter the path to the installation media, then click OK.

12. In the Confirm installation selections page select the checkbox marked Restart the destination server automatically if required.

13. Click Install.

That’s it… your server will reboot with the full GUI.  Honestly, I don’t expect you will be doing this very often – I truly feel that Server Core is the way to go for the vast majority of servers going forward.  However isn’t it nice to know that you have the option should you really need it?

…Oh, and please, for G-d’s sake, if you are re-installing the GUI at least try the PowerShell method!

Become a Virtualization Expert!

For those who missed the virtualization jump start, the entire course is now available on demand, as is the link to grab a free voucher for Exam 74-409.  This is a single-exam virtualization specialist certification.  I would encourage you to take the exam soon, before all the free spots are booked.  Full info at http://borntolearn.mslearn.net/btl/b/weblog/archive/2013/12/17/earn-your-microsoft-certified-specialist-server-virtualization-title-with-a-free-exam.aspx

Server Core: Save money.

I remember an internal joke floating around Microsoft in 2007, about a new way to deploy Windows Server.  There was an ad campaign around Windows Vista at the time that said ‘The Wow Starts Now!’  When they spoke about Server Core they joked ‘The Wow Stops Now!’

Server Core was a new way to deploy Windows Server.  It was not a different license or a different SKU, or even different media.  You simply had the option during the installation of clicking ‘Server Core’ which would install the Server OS without the GUI.  It was simply a command prompt with, at the time, a few roles that could be installed in Core.

While Server Core would certainly save some resources, it was not really practical in Windows Server 2008, or at least not for a lot of applications.  There was no .NET and no IIS, and a number of other really important services could not be installed on Server Core.

Fast forward to Windows Server 2012 (and R2) and it is a completely different story.  Server Core is a fully capable Server OS, and with regard to resources the savings are huge.  So when chatting recently with the owner of a cloud services provider (with hundreds of physical and thousands of virtual servers), I asked what percentage of his servers were running Server Core, and he answered ‘Zero.’  I could not believe my ears.

The cloud provider is a major Microsoft partner in his country, and is on the leading edge (if not the bleeding edge) of every Microsoft technology.  They recently acquired another datacentre that was a VMware vCloud installation, and have embarked on a major project to convert all of those hosts to Hyper-V through System Center 2012.  So why not Server Core?

The answer is simple… When Microsoft introduced Server Core in 2008 they tried it out, and recognizing its limitations decided that it would not be a viable solution for them.  It had nothing to do with the command line… the company scripts and automates everything in ways that make them one of the most efficient datacentres I have ever seen.  They simply had not had the cycles to re-test Server Core in Server 2012 R2 yet.

We sat down and did the math.  The graphical user interface (GUI) in Windows Server 2012 takes about 300MB of RAM – a piddling amount when you consider the power of today’s servers.  However in a cloud datacentre such as this one, in which every host contained 200-300 virtual machines running Windows Server, that 300MB of RAM added up quickly – a host with two hundred virtual machines required 60GB of RAM (200 × 300MB) just for GUIs.  If we assume that the company was not going to go out and buy more RAM for its servers simply for the GUI, and that a typical VM is assigned roughly 2GB of memory, then the reclaimed 60GB is enough for about thirty more VMs – in other words, a host comfortably running 200 virtual machines with the GUI would easily run 230 virtual machines on Server Core.

In layman’s terms, the math in the previous paragraph means that the datacentre’s capacity could increase by fifteen percent by converting all of its VMs to Server Core.  If the provider has 300 hosts running 200 VMs each (60,000 VMs), then an increased capacity of 15% translates to 9,000 more VMs.  With the full GUI, hosting those would take forty-five more hosts (9,000 ÷ 200 VMs per host; let’s conservatively say $10,000 each), or an investment of nearly half a million dollars.  Of course that is before you consider all of the ancillary costs – real estate, electricity, cooling, licensing, etc…  Server Core can save all of that.

Now here’s the real kicker: Had we seen this improvement in Windows Server 2008, it still would have been a very significant cost to converting servers from GUI to Server Core… a re-install was required.  With Windows Server 2012 Server Core is a feature, or rather the GUI itself is a feature that can be added or removed from the OS, and only a single reboot is required.  While the reboot may be disruptive, if managed properly the disruption will be minimal, with immense cost savings.

If you have a few servers to uninstall the GUI from, then Server Manager is the easy way to do it.  However if you have thousands or tens of thousands of VMs to remove it from, then you want to script it.  As usual PowerShell provides the easiest way to do this… the cmdlet would be:

Uninstall-WindowsFeature Server-Gui-Shell -Restart
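If you are facing the thousands-of-VMs scenario, a minimal sketch using PowerShell remoting might look like this (the file of server names is hypothetical, and WinRM must be enabled on the targets):

# Remove the graphical shell from every server listed in the file; the bits stay on disk so it can be re-added later
Invoke-Command -ComputerName (Get-Content C:\Scripts\CoreCandidates.txt) -ScriptBlock { Uninstall-WindowsFeature Server-Gui-Shell -Restart }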

There is also a happy medium between the GUI and Server Core called MinShell… you can read about it here.  However remember that in your virtualized environment you will be doing a lot more remote management of your servers, and there is a reason I call MinShell ‘the training wheels for Server Core.’

There’s a lot of money to be saved, and the effort is not significant.  Go ahead and try it… you won’t be disappointed!

What’s New in Windows Server 2012 R2 Lessons Learned Week 1

Dan Stoltz asked me to republish this article, and it is well worth it!  Check out all of the links – a lot of great material! -MDG

It has been an incredible start to the Windows Server 2012 R2 Launch Series.  Here is a brief summary of what we have covered so far…

  1. Windows Server 2012 R2 Launch Blog Series Index #WhyWin2012R2 – In the series opening and index page we learned that from Oct 18th, and every day until Thanksgiving, we should visit http://aka.ms/2012r2-01 to learn all about Windows Server 2012 R2. You can also follow the excitement on Twitter at #WhyWin2012R2. Download the calendar .ICS to populate your calendar here.  This post started the new launch series in which Microsoft platform experts cover why Windows Server 2012 R2 is important; how to deploy, manage, and configure any number of components in Windows Server 2012 R2; how the new OS capabilities stack up against competitors; how R2 integrates with and leverages cloud services like Windows Azure; and many, many more categories. This series is deep technical content with lots of How-To’s and Step-By-Step instructions. You will learn about storage, cloud integration, RDS, VDI, Hyper-V, virtualization, deduplication, replica, DNS, AD, DHCP, high availability, SMB, backup, PowerShell and much, much more!
  2. Why Windows Server 2012 R2 Rocks! #WhyWin2012R2 – You are probably like most people and realize that Windows Server 2012 was a very substantial upgrade over Windows Server 2008 R2. What would you say to Microsoft doing it again, and even better? WOW! That is exactly what Windows Server 2012 R2 has done. In this post we will look at some of the coolest additions and improvements to Windows Server 2012 R2. Regardless of which of the four pillars of focus (Enterprise-Class, Simple and Cost-Effective, Application Focused, User Centric) you are most interested in, you will find plenty in this post to appreciate! @ITProGuru will show you as he counts the top 10 biggest, most relevant and/or most differentiated new features in Windows Server 2012 R2.
  3. Where Are All The Resources For Windows Server 2012 R2? – We learned where to go to get free resources for Windows Server 2012 R2, including downloading a Free Trial of Windows Server 2012 R2, Free online cloud servers, a Free EBook on Windows Server 2012 R2, Free Posters, Free Online Training from Microsoft Virtual Academy, and much more.
  4. Implementing Windows Server 2012 R2 Active Directory Certificate Services Part 1 &
  5. Implementing Windows Server 2012 R2 Active Directory Certificate Services Part 2 – PKI is heavily employed in cloud computing for encrypting data and securing transactions. As Windows Server 2012 R2 is developed as a building block for cloud solutions, there is an increasing demand for IT professionals to acquire proficiency in implementing PKI with Windows Server 2012 R2. This two-part blog post series is to help those who, like me, perhaps do not work with Active Directory Certificate Services (AD CS) every day, yet every so often need to implement a simple PKI for assessing or piloting solutions, better understand and become familiar with the process.
  6. Step-by-Step: Automated Tiered Storage with Storage Spaces in R2 – Windows Server 2012 R2 includes a number of exciting storage virtualization enhancements, including automated storage tiering, scale-out file server re-balancing and performance tuning for high-speed 10Gbps, 40Gbps and 56Gbps storage connectivity.  IT Pros with whom I’ve spoken are leveraging these new enhancements to build cost-effective SAN-like storage solutions using commodity hardware.  In this article, we’ll begin part 1 of a two-part mini-series on storage.  I’ll provide a technical comparison of Windows Server 2012 R2 storage architecture to traditional SAN architecture, and then deep-dive into the new Storage Spaces enhancements for storage virtualization.  At the end of this article, I’ll also include Step-by-Step resources that you can use to build your own Storage Spaces lab.  In part 2 of this mini-series, we’ll finish our storage conversation with the new improvements around Scale-Out File Servers in Windows Server 2012.
  7. iSCSI Target Server – Super Fast Mass Server Deployment! – #WhyWin2012R2 – There have been some significant updates to Windows Server 2012 with the R2 release. One of these updates helps IT Pros deal with a growing problem: how do I deploy a large number of servers quickly, at scale, without adding massive amounts of storage?  The updates to the iSCSI Target Server technologies allow admins to share a single operating system image stored in a centralized location and use it to boot large numbers of servers from a single image. This improves efficiency, manageability, availability, and security. iSCSI Target Server can boot hundreds of computers by using a single operating system image!
  8. Why Windows Server 2012 R2: Reducing the Storage Cost for your VDI Deployments with VHD De-duplication for VDI – Windows Server 2012 introduced data deduplication for storage workloads, and customers saw phenomenal storage reduction.  Windows Server 2012 R2 deduplication now supports live VHDs for VDI, which means that data deduplication can now be performed on open VHD/VHDX files on remote VDI storage with CSV volume support.  Remote VHD/VHDX storage deduplication allows for increased VDI storage density, significantly reducing VDI storage costs, and enabling faster read/write of optimized files and advanced caching of duplicated data.
  9. Importing & Exporting Hyper-V VMs in Windows Server 2012 R2 – One of the biggest benefits of server virtualization is the ability to back up or restore entire systems easily and quickly.  Though they are infrequently used features, Hyper-V import and export are very fast, versatile, and easy to use.  In Windows Server 2012 R2 these features get even better.  I will take a look at how this functionality works and why it is useful.  I’ll also discuss how they are very different from the commonly used checkpoints in Hyper-V, and how you can automate this process.

Keep plugged in to the series to continue learning about Windows Server 2012 R2.

– See more at: http://itproguru.com/expert/2013/10/whats-new-in-windows-server-2012-r2-lessons-learned-week-1/

Building the IT Camp with PowerShell Revisited

I always said I am not hard to please… I only need perfection.  So when I wrote my PowerShell script to build my environment the other day I was pleased with myself… until I realized a huge flaw in it.  Generation 1.

Actually to be fair, there is nothing wrong with Generation 1 virtual machines in Hyper-V; they have served us all well for several years.  However how could I claim to live on the bleeding edge (Yes, I have made that claim many times) and yet stay safe with Generation 1?

In the coming weeks Windows Server 2012 R2 will become generally available.  One of the huge changes that we will see in it is Generation 2 virtual machine hardware.  Some of the changes in hardware levels include UEFI, Secure Boot, Boot from SCSI, and the elimination of legacy hardware (including IDE controllers and Legacy NICs).

Of course, since Generation 1 hardware is still fully supported, we need to specify, when we create the VM, which generation it will be – and this cannot be changed later.

I had forgotten about this when I created the script (of which I was quite proud).  It was only a few hours later, as I was simultaneously installing nine operating systems, that I noticed in the details pane of my Hyper-V Manager that all of my VMs were actually Gen1.

Crap.

Remember when I said a couple of paragraphs ago that the generation level cannot be changed?  I wasn’t kidding.  So rather than living with my mistake I went back to the drawing board.  I found the proper cmdlet switches, and modified my script accordingly.

As there is a lot of repetition in it, I am deleting six of the nine VMs from the listing below.  You are not missing out on anything, I assure you.

# Script to recreate the infrastructure for the course From Virtualization to the Private Cloud (R2).
# This script should be run on Windows Server 2012 R2.
# This script is intended to be run within the Boot2VHDX environment created by Mitch Garvis
# All VMs will be created as Generation 2 VMs (except the vCenter VM for which it is not supported).
# All VMs will be configured for Windows Server 2012 R2
# System Center 2012 R2 will be installed.

# Variables

$ADM = "Admin"                # VM running Windows 8.1 (for Administration)
$ADMMIN = 512MB                # Minimum RAM for Admin
$ADMMAX = 2GB                # Maximum RAM for Admin
$ADMVHD = 80GB                # Size of Hard Drive for Admin

$SQL = "SQL"                # VM (SQL Server)
$SQLMIN = 2048MB            # Minimum RAM assigned to SQL
$SQLMAX = 8192MB            # Maximum RAM assigned to SQL
$SQLCPU = 2                # Number of CPUs assigned to SQL
$SQLVHD = 200GB                # Size of Hard Drive for SQL

$VCS = "vCenter"             # VM (vSphere vCenter Cerver) (Windows Server 2008 R2)
$VCSMIN = 2048MB             # Minimum RAM assigned to vCenter
$VCSMAX = 4096MB             # Maximum RAM assigned to vCenter
$VCSCPU = 2                 # Number of CPUs assigned to vCenter
$VCSVHD = 200GB                # Size of Hard Drive for vCenter

$VMLOC = "C:\HyperV"            # Location of the VM and VHDX files

$NetworkSwitch1 = "CorpNet"        # Name of the Internal Network

$W81 = "E:\ISOs\Windows 8.1 E64.iso"            # Windows 8.1 Enterprise
$WSR2 = "E:\ISOs\Windows Server 2012 R2.iso"        # Windows Server 2012 R2
$W2K8 = "E:\ISOs\Windows Server 2008 R2 SP1.iso"     # Windows Server 2008 R2 SP1

# Create VM Folder and Network Switch
MD $VMLOC -ErrorAction SilentlyContinue
$TestSwitch1 = Get-VMSwitch -Name $NetworkSwitch1 -ErrorAction SilentlyContinue; if ($TestSwitch1.Count -EQ 0){New-VMSwitch -Name $NetworkSwitch1 -SwitchType Internal}

# Create & Configure Virtual Machines
New-VM -Name $ADM -Generation 2 -Path $VMLOC -MemoryStartupBytes $ADMMIN -NewVHDPath $VMLOC\$ADM.vhdx -NewVHDSizeBytes $ADMVHD -SwitchName $NetworkSwitch1
Set-VM -Name $ADM -DynamicMemory -MemoryMinimumBytes $ADMMIN -MemoryMaximumBytes $ADMMAX
Add-VMDvdDrive $ADM | Set-VMDvdDrive -VMName $ADM -Path $W81

New-VM -Name $SQL -Generation 2 -Path $VMLOC -MemoryStartupBytes $SQLMIN -NewVHDPath $VMLOC\$SQL.vhdx -NewVHDSizeBytes $SQLVHD -SwitchName $NetworkSwitch1
Set-VM -Name $SQL -DynamicMemory -MemoryMinimumBytes $SQLMIN -MemoryMaximumBytes $SQLMAX -ProcessorCount $SQLCPU
Add-VMDvdDrive $SQL | Set-VMDvdDrive -VMName $SQL -Path $WSR2

New-VM -Name $VCS -Path $VMLOC -MemoryStartupBytes $VCSMIN -NewVHDPath $VMLOC\$VCS.vhdx -NewVHDSizeBytes $VCSVHD -SwitchName $NetworkSwitch1
Set-VM -Name $VCS -DynamicMemory -MemoryMinimumBytes $VCSMIN -MemoryMaximumBytes $VCSMAX -ProcessorCount $VCSCPU
Set-VMDvdDrive -VMName $VCS -Path $W2K8

#Start Virtual Machines
Start-VM $ADM
Start-VM $SQL
Start-VM $VCS

In the script you can see a few differences between my original script (in the article) and this one.  Firstly, on all machines that are running Windows 8.1 or Windows Server 2012 R2 I have added the switch -Generation 2.  That is simple enough.

Adding the virtual DVD was a little trickier; with Generation 1 hardware there was a ready IDE port to connect the .ISO file to.  In Gen 2 it is all about SCSI, so you have to use the Add-VMDvdDrive cmdlet, and then connect the .ISO file (Set-VMDvdDrive -VMName <Name> -Path <ISO Path>).  Not only for simplicity but also to demonstrate that you can, I have put these two cmdlets on a single line, connected with a pipe (the | character).
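Incidentally, if you want to verify what you actually built before nine operating systems are half-installed, the VM objects expose their generation directly, so a quick check (Hyper-V module assumed) is:

Get-VM | Select-Object Name, Generation

Had I run that one-liner right after the script, I would have caught my Gen1 mistake immediately.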

I want to thank a couple of colleagues for helping me out with the Generation 2 hardware and DVD issues… especially Sergey Meshcheryakov, who was quick to answer.  The exact cmdlet switches were not easy to track down!

…and remember, if I can learn it, so can you!  Even the great Sean Kearney once did not know anything about PowerShell… and now look at him!

Creating a New AD Forest in Windows Server Core (Revisited)

Several years ago Steve Syfuhs and I sat down and figured out how to create a new Active Directory forest in Windows Server Core.  It was an interesting experience, and even though I later gave rights to that article to the Canadian IT Pro Team (at the time it was Damir Bersinic), when you search Bing.com for the term ‘Create AD Forest Server Core’ my article still comes up first.

I have gotten a bit more adept with the command prompt of late (especially with my diving into Windows PowerShell recently, but even before), so when I had the need to create a new AD Forest for a courseware environment I am building, I decided to revisit this topic and see what changes I could make.

In 2009 I had to create an answer file, or at least I believed I did.  It turns out that now I can get away with one command line string, which is as follows:

dcpromo /InstallDNS:yes /NewDomain:forest /NewDomainDNSName:alpineskihouse.com /DomainNetBIOSName:SKI /ReplicaOrNewDomain:domain /RebootOnCompletion:yes /SafeModeAdminPassword:P@ssword

For the record this is all one line, even if it wraps on your screen.
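Worth noting: on Windows Server 2012 and R2, dcpromo is deprecated in favour of the ADDSDeployment PowerShell module, so a rough equivalent of the one-liner above (same sample domain and password) would be:

Install-WindowsFeature AD-Domain-Services
Install-ADDSForest -DomainName alpineskihouse.com -DomainNetbiosName SKI -InstallDns:$true -SafeModeAdministratorPassword (ConvertTo-SecureString 'P@ssword' -AsPlainText -Force) -Force

The -Force switch suppresses the confirmation prompts, including the one for the automatic reboot.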

Warnings:

The first time I ran this command it failed.  I suspect this is because I had a DHCP address assigned.  Before embarking on this trip, I suggest you assign a static IP address to your Server Core box.  While it is simpler to do it with the sconfig text-mode configuration menu tool, you can also use the following netsh command:

netsh interface ipv4 set address name="Local Area Connection" source=static address=172.16.0.10 mask=255.255.0.0 gateway=172.16.0.1
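On Windows Server 2012 and later you can also make the same assignment natively in PowerShell (the interface alias is illustrative – check yours with Get-NetAdapter):

New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 172.16.0.10 -PrefixLength 16 -DefaultGateway 172.16.0.1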

At this point you should be ready to go… remember that with Windows Server 2012 (and R2) once you have the OS installed it is easy to manage it remotely using either PowerShell or the Server Manager console.  Just make sure you have the right credentials, and you are good to go!

Counting Down the Classics with the US IT Evangelists

 

“On the first day of Christmas my true love gave to me…”

“Ninety-nine bottles of beer on the wall…”

“Thirty-five articles on Virtualization…”

All of these are great sing-along songs, whether for holidays, camping, bus-rides, or comparing virtualization technology.  Each one is a classic.

Wait… you’ve never heard the last one? That’s okay, we are happy to teach it to you.  It has a pretty catchy tune – the tune of cost savings, lower TCO, higher ROI, and a complete end-to-end management solution.

Even if you can’t remember the lyrics, why don’t you open up the articles – each one written by a member of Microsoft’s team of IT Pro Evangelists in the United States.

You can read along at your own pace, because no matter how fast or slow you read, as long as you are heading in the right direction then you are doing it right! –MDG

The 35 Articles on Virtualization:

Date Article Author
12-Aug-13 Series Introduction Kevin Remde – @KevinRemde
13-Aug-13 What is a “Purpose-Built Hypervisor”? Kevin Remde – @KevinRemde
14-Aug-13 Simplified Microsoft Hyper-V Server 2012 Host Patching = Greater Security and More Uptime Chris Avis – @ChrisAvis
15-Aug-13 Reducing VMware Storage Costs WITH Windows Server 2012 Storage Spaces Keith Mayer – @KeithMayer
16-Aug-13 Does size really matter? Brian Lewis – @BrianLewis_
19-Aug-13 Let’s talk certifications! Matt Hester – @MatthewHester
20-Aug-13 Virtual Processor Scheduling Tommy Patterson – @Tommy_Patterson
21-Aug-13 FREE Zero Downtime Patch Management Keith Mayer – @KeithMayer
22-Aug-13 Agentless Protection Chris Avis – @ChrisAvis
23-Aug-13 Site to Site Disaster Recovery with HRM Keith Mayer – @KeithMayer
25-Aug-13 Destination: VMWorld Jennelle Crothers – @jkc137
26-Aug-13 Get the “Scoop” on Hyper-V during VMworld Matt Hester – @MatthewHester
27-Aug-13 VMWorld: Key Keynote Notes Kevin Remde – @KevinRemde
28-Aug-13 VMWorld: Did you know that there is no extra charge? Kevin Remde – @KevinRemde
29-Aug-13 VMWorld: A Memo to IT Leadership Yung Chou – @YungChou
30-Aug-13 Moving Live Virtual Machines, Same But Different Matt Hester – @MatthewHester
02-Sep-13 Not All Memory Management is Equal Dan Stolts – @ITProGuru
03-Sep-13 Can I get an app with that? Matt Hester – @MatthewHester
04-Sep-13 Deploying Naked Servers Matt Hester – @MatthewHester
05-Sep-13 Automated Server Workload Balancing Keith Mayer – @KeithMayer
06-Sep-13 Thoughts on VMWorld Jennelle Crothers – @jkc137
09-Sep-13 Shopping for Private Clouds Keith Mayer – @KeithMayer
11-Sep-13 Dynamic Storage Management in Private Clouds Keith Mayer – @KeithMayer
12-Sep-13 Replaceable? or Extensible? What kind of virtual switch do you want? Chris Avis – @ChrisAvis
13-Sep-13 Offloading your Storage Matt Hester – @MatthewHester
16-Sep-13 VDI: A Look at Supportability and More! Tommy Patterson – @Tommy_Patterson
17-Sep-13 Agentless Backup for Virtual Environments Special Guest Chris Henley – @ChrisJHenley
19-Sep-13 How robust is your availability? Kevin Remde – @KevinRemde
20-Sep-13 VM Guest Operating System Support Brian Lewis – @BrianLewis_
23-Sep-13 How to license Windows Server VMs Brian Lewis – @BrianLewis_
24-Sep-13 Comparing vSphere 5.5 and Windows Server 2012 R2 Hyper-V At-A-Glance Keith Mayer – @KeithMayer
25-Sep-13 Evaluating Hyper-V Network Virtualization as an alternative to VMware NSX Keith Mayer – @KeithMayer
26-Sep-13 Automation is the Key to Happiness Matt Hester – @MatthewHester
27-Sep-13 Comparing Microsoft’s Public Cloud to VMware’s Public Cloud Blain Barton – @BlainBar
30-Sep-13 What does AVAILABILITY mean in YOUR cloud? Keith Mayer – @KeithMayer

…and as for me? Well it’s pretty simple… just go to www.garvis.ca and type Virtualization into the search bar.  You’ll see what I have to say too!

Windows Server 2012: More than Virtualization!

Since it was in pre-release I have been evangelizing Windows Server 2012.  I have gone from sea to shining sea talking about it at launch events, at partner showcases, in IT Camps, and at user groups – about how much better it is than Windows Server 2008 and, more importantly, about the improvements to Hyper-V over previous versions and how it (and System Center 2012) compares to VMware’s vSphere 5.1 and vCenter Server.

While all of that is true, to say that virtualization is the only benefit to Windows Server 2012 is doing it a disservice.  Don’t get me wrong, Hyper-V officially rocks; but if virtualization was the only benefit to the new Server, couldn’t companies simply deploy the new version on their host hardware, and leave their virtual machines running Windows Server 2008 R2?

Going forward when someone asks me what is new and exciting in Windows Server, I am going to start with the improvements to Hyper-V… but then we can go into the real meat of the product, and see where it takes us.  Improvements such as:

Storage Spaces (or Storage Pools), which I have equated to software RAID after ten generations of improvement.  With Storage Spaces you can build your volume from multiple disks of equal or disparate size, on similar or disparate architecture.  Imagine having three SAS disks of 450GB, 146GB, and 72GB combined into a single volume of 668GB… or a 146GB SAS disk, a 500GB SATA disk, and a 2TB USB disk combined into a volume of roughly 2.6TB.  Add to that the ability to hot-add drives on the fly (in a recent demo I added two disks in under 30 seconds), and to have your volume protected by mirroring or parity.  All of this is built into Windows Server 2012, and we have written about it extensively.  Try it for yourself by following my article here.
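To give you a feel for how little ceremony is involved, here is a minimal PowerShell sketch (the friendly names are made up, and it pools every poolable disk, which you would not do blindly outside a lab):

# Pool all disks that are eligible for pooling, then carve a mirrored space out of the pool
New-StoragePool -FriendlyName "DemoPool" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
New-VirtualDisk -StoragePoolFriendlyName "DemoPool" -FriendlyName "DemoSpace" -ResiliencySettingName Mirror -UseMaximumSize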

Data Deduplication is built into the operating system.  A capability that storage-conscious companies previously paid third-party vendors thousands of dollars for is now a check box away when creating your volume.  Once it is enabled on your volume you can use either the GUI tool or, if you are efficient, Windows PowerShell to schedule your dedup or to run the job immediately, on local or remote systems.
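For the PowerShell-inclined, a minimal sketch of both options just mentioned (the drive letter is illustrative, and the Data Deduplication feature must already be installed):

Enable-DedupVolume -Volume "E:"                 # the check box equivalent
Start-DedupJob -Volume "E:" -Type Optimization  # run the job now instead of waiting for the schedule
Get-DedupStatus -Volume "E:"                    # see how much space you are saving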

Software iSCSI Target was exclusively a feature of Microsoft Storage Server until April of 2010, when Microsoft released it as a fully supported free download.  Now integrated into Server 2012, it gives you the ability to create a software SAN device on your server with all of the functionality of most hardware SANs, but at a fraction of the cost.  While I would still not replace my hardware SAN devices in large organizations, it brings that functionality to smaller businesses without the budget for the extra hardware.  Couple this feature with Storage Spaces and Data Dedup and you have yourself a real ballgame!  To get started check out our article here.

MinShell is the new ‘compromise’ step between the full GUI Server installation and the Server Core installation.  It allows you to have a sort of ‘safety net’ of the GUI management tools, without actually having the Windows GUI environment installed.  You will save tons of resources across your virtualized environment because you no longer need the GUI on hundreds of virtual machines, as we wrote about here.

Server Manager was introduced in Windows Server 2003 R2 with all of the ho-hum yawning that it deserved.  Okay, a lot of our tasks were brought into one app, but that was about it.  That is why I was so surprised that the modern Server Manager in Server 2012 blew me away with its true multi-server management, the Dashboard functionality that gives the administrator a bird’s-eye view of the health of all of his or her systems, and the ability to manage… well, everything from one console.  Install roles and features on your local or remote servers with the same ease.  Manage multiple servers from the same console – add them by simply right-clicking the All Servers context, and then without any more work see that all of the services running on that (or those) remote server(s) are instantly added to your Dashboard.  I recorded a video of some of the great functionality in Server Manager for our blog here.

PowerShell 3.0 is the breakout version of this already incredible scripting environment, with nearly ten times as many cmdlets available out of the box as before.  Add to that the Integrated Scripting Environment (ISE) and you have a powerful scripting environment that is even easier to learn and use than before!

Active Directory Administrative Center is a new all-encompassing tool for Active Directory management.  No longer will admins have to open one of several different consoles depending on what they want to do; the ADAC is it… plain and simple!

Active Directory Recycle Bin was introduced in Windows Server 2008 R2, and is now even easier to use.  Enable it in the ADAC (remember that once enabled it cannot be disabled).  To learn how to enable it read our article here, and then to use it to restore an object we have another article here.
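If you prefer to flip that switch in PowerShell rather than in the ADAC, a minimal sketch (the forest name is illustrative – and again, this is one-way):

Enable-ADOptionalFeature -Identity 'Recycle Bin Feature' -Scope ForestOrConfigurationSet -Target 'corp.contoso.com'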

Windows PowerShell History Viewer records the underlying Windows PowerShell commands when action is taken in the Active Directory Administrative Center so that the admin can copy and reuse the scripts.  This is also a great way for admins to start learning PowerShell!

Cloning and Snapshotting Domain Controllers, along with DCs that are fully aware of virtualization, mean that we no longer need to maintain a physical domain controller in our fully virtualized (or cloud-based) organization.  I can rapidly deploy new domain controllers (either in an existing or a new domain), and quickly and easily restore business continuity during disaster recovery.  I can rapidly provision test environments and quickly meet increased capacity needs in branch offices.  Our virtualized domain controllers will detect snapshot restoration and non-authoritatively synchronize the delta of changes for Active Directory and the SYSVOL, making DC virtualization safer.

Fine-Grained Password Policies in Active Directory allow me to have better security for my infrastructure: users with no access to sensitive information can be given more lenient password policies, while stricter policies are enforced for users with more access and for service accounts.  While everyone will still have to have password awareness, this will see a marked decrease in Post-It Note Security Violations.
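As a hedged illustration (the names and values here are invented), a stricter Password Settings Object for service accounts might be created and applied like this:

New-ADFineGrainedPasswordPolicy -Name "ServiceAccountsPSO" -Precedence 10 -MinPasswordLength 24 -ComplexityEnabled $true
Add-ADFineGrainedPasswordPolicySubject -Identity "ServiceAccountsPSO" -Subjects "Service Accounts"

The -Precedence value decides which PSO wins when a user falls under more than one.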

Dynamic Access Control is a new way of securing your information, whether on file shares, in SharePoint Document Libraries, or even in e-mail.  It works with Rights Management Server using Central Access Policies to verify who is accessing what information from where (what device).  The expression-based access policies determine before decrypting the content that both the user and the device are trusted.  If you have highly sensitive information that should only be accessed on corporately managed devices this is going to be a great new security feature available to you!

DirectAccess was introduced in the 2008 era with a plethora of complex requirements and prerequisites needed to implement it.  In 2009 Rodney Buike wrote a great explanation of DirectAccess on our blog, which can be read here.  In Server 2012 it is so much simpler to plan for, deploy, and use.  Anthony Bartolo recently wrote an article about what it is, what it needs, and what it does, and you can read that article here.

…and the list just keeps going and going.  I urge you to download the evaluation software and try it out by clicking on the appropriate link:

Windows Server

System Center 2012

Windows 8

In addition to downloading the software and reading our articles, you could have a chance at winning your lab computer by participating in the free Microsoft Virtual Academy.  For a chance to win an HP EliteBook Revolve, and two chances to win 400 Microsoft Points, enter here.  Complete two TechNet evaluations, and take the selected Microsoft Virtual Academy courses for your chance at a $5,000 grand prize!

What not to Learn… Revisited for 2013!

In October 2011 I posted an article called vPTA: What NOT to take away from my 1-day virtualization training!  It was an only partly tongue-in-cheek look at the environment that I have been using for several years to demonstrate server virtualization from a pair of laptops.  A few months later Damir Bersinic took that list, made some modifications, and published it on this blog as Things NOT To Take Away from the IT Virtualization Boot Camp.  Because we spend so much time in our IT Camps demonstrating similar environments, I decided it was a good time to rewrite that article.

Normally when I revisit an article I would simply republish it.  There are two reasons that I decided to rewrite this one from scratch:

  • The improvements in Windows Server 2012, and
  • My more official position at Microsoft Canada

Since writing that original article I have tried to revise my writing style so as to not offend some people… I am trying to be a resource to all IT Professionals in Canada, and to do that I want to eliminate a lot of the sarcasm that my older posts were replete with.  At the same time there are points that I want to reinforce because of the severity of the consequences.

Creating a lab environment equivalent to Microsoft Canada’s IT Camps, with simple modifications:

1. In our IT Camps we provide the attendees with hardware to use for their labs.  Depending on the camp attendees will work in teams on either one or two laptops.  While this is fine for the Windows 8 camps, please remember that in your environment – even in a lab where possible – you should be using actual server hardware.  With virtualization it is so simple to create a segregated lab environment on the same server as your production environment, using virtual switches and VLAN tagging.  In environments where System Center 2012 has already been deployed it is easy enough to provision private clouds for your test/dev environments, but even without that it is a good idea.  The laptops that we use for the IT Camps are great for the one- or two-day camps, but for longer than that you are going to risk running into a plethora of crashes that are easy enough to anticipate.

2. You should always have multiple domain controllers in any environment, production or otherwise.  Depending on who you speak to many professionals will tell you that at least one domain controller in your domain should be on a physical box (as opposed to a virtual machine).  I am still not convinced that this does not fall into the category of ‘Legacy Thinking’ but there is certainly an argument to be made for this.  Whether you are going to do this in physical or virtual, you should never rely on a single domain controller.  Likewise your domain controllers should be dedicated as such, and should not also be file or application servers.

3. I strongly recommend that shared storage for your virtualization hosts be implemented on Storage Area Networks (SANs).  SAN devices are a great method of sharing data between clustered nodes in a failover cluster.  In Windows Server 2012 we have included the iSCSI Software Target that was previously an optional download (The Microsoft iSCSI Software Target is now free).  While this is still not a good replacement for physical SANs, it is a fully supported solution for Windows Failover Cluster Services, including for Hyper-V virtual machine environments.  It is even now recognized as an option for System Center 2012 private clouds.  As well, the Storage Pools feature in the new Server is compelling to consider.  However there are some caveats:

A. Both iSCSI software targets and Storage Pools rely on virtual storage (VHDX files) for their LUNs and Pools.  While VHDX files are very stable, putting one VHDX file into another VHDX file is a bad idea… at least for long-term testing and especially for production environments.  If you are going to use a software target or Storage Pool (which are both fully supported by Microsoft for production environments) it is strongly recommended that you put them onto physical hardware.

B. While Storage Pools are supported on any available drive architecture (including USB, SATA, etc…), the only architectures that are supported for clustered environments are iSCSI and SAS (Serial Attached SCSI).  Do not try to build a production (or long-term test environment) cluster on inexpensive USB or SATA drives.

C. In our labs we use a lot of thin-provisioned (dynamically expanding, storage-on-demand) disks.  While these are fully supported, it is not necessarily a best practice.  Especially on drives where you may be storing multiple VHDX files you are simply asking for fragmentation issues.

4. If you are building a lab environment on a single host, you may run into troubles when trying to join your host to the domain.  I am not saying that it will not work – as long as you have properly configured your virtual network it likely will – but there are a couple of things to remember.  Make sure that your virtual domain controller is configured to Always Start rather than Always start if it was running when the service stopped.  As well it is a good idea to configure a static IP address for the host, just in case your virtual DHCP server fails to start properly, or in a timely fashion.

5. Servers are meant to run.  Shutting down your servers on a daily basis has not been a recommended practice for many years, and the way we do things – at the end of the camp we re-image our machines, pack them into a giant case and ship them to the next site – is a really bad idea.  If you are able I strongly recommend leaving your lab servers running at all times.

6. While it is great to be able to demo server technologies, when at all possible you should leave your servers connected (and turned on) in one place.  If you are able to bring your clients to you for demos that is ideal, but it is so easy these days to access servers remotely on even the most basic of Internet connections.  If your company does not have a static IP address I would recommend using a dynamic DNS service (such as dyndns.com) with proper port-forwarding configured in your gateway router to access them remotely.

7. I am asked all the time how many network adapters you need for a proper server environment.  I always answer ‘It depends.’  There are many factors to consider when building your hosts, and in a demo environment there are concessions you can make.  However unless you have absolutely no choice it should be more than one.  For a proper cluster configuration (excluding multi-pathing and redundancy) you should have a production network, a storage network, and a heartbeat network… and that is three just for the bare minimum.  Some of these can share networks and NICs by configuring VLANs, but again, preferably only in lab environments.  Before building your systems consider what you are willing to compromise on, and what is absolutely required.  Then build your architectural plan and determine what hardware is required before making your purchase.

7a. While on the subject of networks, in our demo environment the two laptop-servers are connected to each other by a single RJ-45 cable.  BUY SWITCHES… and the ones that are good enough for you to use at home are usually not good enough for your production environment! 🙂

8. Whenever possible your storage network should be physically segregated from your production network.  When physical segregation is not possible, then at least separating the streams by using VLANs is strongly recommended.  The first offers security as well as bandwidth management; the second, only security.

9. Your laptop and desktop hardware are not good-enough substitutes for server-grade hardware.  I know we mentioned this before, but I still feel it is important enough to state again.

10. In Windows Server 2008 R2 we were very adamant that snapshots, while handy in labs and testing, were a bad idea for your production environment.  With the improvements to Hyper-V in Windows Server 2012 we can be a little less adamant, but remember that you cannot take a snapshot and forget about it.  When you delete or apply a snapshot it will now merge the VHDX and AVHDX files live… but snapshots can still outgrow your volume, so make sure that when you are finished with a snapshot you clean up after yourself (see the sketch after this list).

11. Breaking any of these rules in a production environment is not just a bad idea, it would likely result in an RGE (Resume Generating Event).  In other words, some of these can be serious enough for you to lose your job, lose customers, and possibly even get you sued.  Follow the best practices though and you should be fine!
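Speaking of forgotten snapshots, a quick hedged way to hunt them down in PowerShell (the seven-day threshold is arbitrary):

# List every snapshot older than a week across all VMs on this host
Get-VM | Get-VMSnapshot | Where-Object { $_.CreationTime -lt (Get-Date).AddDays(-7) }

Pipe the result to Remove-VMSnapshot once you are certain nothing on the list is still needed.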

Converting VHDs to VHDX and other questions…

Many of the articles I write for both The World According to Mitch and the Canadian IT Pro Connection come directly from people I meet through my travels.  They send me questions about technology by e-mail and rather than simply replying to them, if I feel the questions are relevant, I write them up as articles.  So if you meet me at one of my sessions and you ask me a question, do not be surprised if I ask you to e-mail it to me… oftentimes I will need to research the answer, but sometimes it is because I think that it would make for an interesting write-up.

I have known Betty for as long as I have been going to her home town, and while she loves to give me grief I know that she is always attentive and learns from my presentations.  She recently sent me an e-mail with two very good questions on Hyper-V following my IT Camp on Windows Server 2012.

QUESTION 1:

I have several virtual machines that were created on Server 2008R2, and I would like to convert them to VHDX to take advantage of all the new features on Windows 2012. Is this possible?

The process for exporting the virtual machine from Hyper-V on Windows Server 2008 R2 and then importing it as a virtual machine onto a host running Windows Server 2012 is fairly simple: Export, then Import.  However as I am sure you realize this does not convert the disk file format… ViVo in this case stands for VHD in, VHD out.  However the Edit Disk Wizard in the new Hyper-V is your friend here.

  1. Ensure that your virtual machine is powered down (or better yet disconnected).
  2. From the Actions Pane of the Hyper-V Manager click Edit Disk…
  3. On the Before You Begin page click Next.
  4. On the Locate Virtual Hard Disk page navigate to the location of your VHD file (use Browse if you like!).  Click Next.
  5. On the Choose Action page select the radio marked Convert and click Next.
  6. On the Convert Virtual Hard Disk page select the radio marked VHDX and click Next.
  7. On the second Convert Virtual Hard Disk page select the disk format you prefer (Fixed or Dynamically Expanding) and click Next.
  8. On the third Convert Virtual Hard Disk page enter the name and location of your new VHDX file and click Finish.
    Depending on the size of your source disk it may take a few minutes to create the new file; for larger disks you might want to run the Edit Disk Wizard to compact it before proceeding.  However once you are done you will have both the Source and the Destination disks, and all you have to do is edit the settings of your VM and attach the new drive, and you are ready to rock!
  • Notice that your new file is about 145 MB larger than the original.  That is perfectly normal and nothing to be concerned about.

PowerShell: I’ve Got The Power!!

Thanks to folks like Ed Wilson and our very own Sean Kearney it is once again cool to use the command line… or rather, the cmdlet.  Nearly anything that you can do in the GUI can also be done in PowerShell, which allows us to create scripts to use at various clients or sites.  If you want to convert your VHD to VHDX in PowerShell, here’s how:

Convert-VHD -Path C:\ClusterStorage\Volume1\VHDs\VM-1.vhd -DestinationPath C:\ClusterStorage\Volume1\VHDs\VM-1.vhdx


Again, it is important to remember a) that your virtual hard disk must be offline (or disconnected), and b) that once you have created the new VHDX file you must attach it to the VM before spinning it back up.  As well, you will notice the difference in file size – nothing to be concerned about.

(This cmdlet can also be used to convert VHDX files back to VHD files.)

QUESTION 2:

Do the virtual machines have to be Server 2012 for me to take advantage of the new features of Hyper-V in Windows Server 2012, and especially the new .VHDX file format?

Of course not.  Remember that the host and the guest have no real conception that the other is there; as long as you can install it on x86 hardware, you can install it in a Hyper-V virtual machine.  With that being said, there is a difference between can and is supported.  Remember that your Windows NT, 2000, DOS 3.3 and OS/2 Warp VMs are not supported by Microsoft… even though they will work just fine 😉

For Bonus Points:

What is possible technologically is not always allowed legally.  It is important to make sure that all of the operating systems in your VMs are licensed on that host.  I have seen too many companies perform P2V migrations of physical servers that had OEM licenses attached to them, only to discover during an audit that they were out of compliance.  Make sure you have verified all of your licensing so that nobody will get their nose out of joint 🙂

Failover Clustering: Let’s spread the Hyper-V love across hosts!

    This article was originally published on the Canadian IT Pro Connection.

    Some veteran IT Pros hear the term ‘Microsoft Clustering’ and their hearts start racing.  That’s because once upon a time Microsoft Cluster Services was very difficult and complicated.  In Windows Server 2008 it became much easier, and in Windows Server 2012 it is now available in all editions of the product, including Windows Server Standard.  Owing to these two factors you are now seeing all sorts of organizations using Failover Clustering that would previously have shied away from it.

    The role that we see clustered most frequently in smaller organizations is Hyper-V virtual machines.  That is because virtualization is another technology that is really taking off, and the low cost of virtualizing with Hyper-V makes it very attractive to these organizations.

    In this article I am going to take you through the process of creating a failover cluster from two virtualization hosts that are connected to a single SAN (storage area network) device.  However these are far from the limits of Windows Server 2012: you can cluster up to sixty-four servers together in a single cluster.  Once they are joined to the cluster we call them cluster nodes.

    Failover Clustering in Windows Server 2012 allows us to create highly available virtual machines using a method called Active-Passive clustering.  That means that your virtual machine is active on one cluster node, and the other nodes are only involved when the active node becomes unresponsive, or if a tool that dynamically balances workloads (such as System Center 2012 Virtual Machine Manager with Performance and Resource Optimization (PRO) tips) initiates a migration.

    In addition to using SAN disks for your shared storage, Windows Server 2012 also allows you to use Storage Pools.  I explained Storage Pools and showed you how to create them in my article Storage Pools: Dive Right In! I also explained how to create a virtual SAN using Windows Server 2012 in my article iSCSI Storage in Windows Server 2012.  For the sake of this article, we will use the simple SAN target that we created together in that article.

    Step 1: Enabling Failover Clustering

    Failover Clustering is a feature in Windows Server 2012.  In order to enable it we will use the Add Roles and Features wizard.

    1. From Server Manager click Manage, and then select Add Roles and Features.

    2. On the Before you begin page click Next>

    3. On the Select installation type page select Role-based or feature-based installation and click Next>

    4. On the Select destination server page select the server onto which you will install the role, and click Next>

    5. On the Select server roles page click Next>

    6. On the Select features page select the checkbox Failover Clustering.  A pop-up will appear asking you to confirm that you want to install the MMC console and management tools for Failover Clustering.  Click Add Features.  Click Next>

    7. On the Confirm installation selections page click Install.

    NOTE: You could also add the Failover Clustering feature to your server using PowerShell.  The command would be:

    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

    If you want to install it on a remote server, you would use:

    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName <servername>

    That is all that we have to do to enable Failover Clustering on our hosts.  Remember though, it does have to be done on each server that will be a member of our cluster.
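    Since the feature must be enabled on every node, you could also push it out in one pass.  A minimal sketch, assuming two hypothetical hosts named HOST1 and HOST2:

    # Enable Failover Clustering on each future cluster node (hypothetical names)
    $nodes = "HOST1", "HOST2"
    foreach ($node in $nodes) {
        Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName $node
    }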

    Step 2: Creating a Failover Cluster

    Now that Failover Clustering has been enabled on the servers that we want to join to the cluster, we have to actually create the cluster.  This step is easier than it ever was, although you should take care to follow the recommended guidelines.  Always run the Validation Tests (all of them!), and allow Failover Cluster Manager to determine the best cluster configuration (Node Majority, Node and Disk Majority, etc…)

    NOTE: The following steps have to be performed only once – not on each cluster node.

    1. From Server Manager click Tools and select Failover Cluster Manager from the drop-down list.

    2. In the details pane under Management click Create Cluster…

    3. On the Before you begin page click Next>

    4. On the Select Servers page enter the name of each server that you will add to the cluster and click Add.  When all of your servers are listed click Next>

    5. On the Validation Warning page ensure the Yes. When I click Next, run configuration validation tests, and then return to the process of creating the cluster radio is selected, then click Next>

    6. On the Before You Begin page click Next>

    7. On the Testing Options page ensure the Run all tests (recommended) radio is selected and then click Next>

    8. On the Confirmation page click Next> to begin the validation process.

    9. Once the validation process is complete you are prompted to name your cluster and assign an IP address.  Do so now, making sure that your IP address is in the same subnet as your nodes.

    NOTE: If you are not prompted to provide an IP address it is likely that your nodes have their IP Addresses assigned by DHCP.

    10. On the Confirmation page make sure the checkbox Add all eligible storage is selected and click Next>.  The cluster will now be created.

    11. Click on Finish.  In a few seconds your new cluster will appear in the Navigation Pane.
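    If you prefer PowerShell, the FailoverClusters module can run the same validation and creation.  A minimal sketch, where the node names, cluster name, and IP address are all assumptions you would replace with your own:

    # Run the validation tests, then create the cluster (hypothetical names and address)
    Test-Cluster -Node HOST1, HOST2
    New-Cluster -Name CLUSTER1 -Node HOST1, HOST2 -StaticAddress 192.168.1.100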

    Step 3: Configuring your Failover Cluster

    Now that your failover cluster has been created there are a couple of things we are going to verify.  The first is on the main cluster screen: near the top it should state the type of cluster you have.

    If you created your cluster with an even number of nodes (and at least two shared drives) then the type should be Node and Disk Majority.  A Microsoft cluster stays healthy as long as a majority (50% + 1) of the votes are present, and every node has a vote.  This means that if you have an even number of nodes (say ten) and half of them (five) go offline, your cluster goes down.  With ten nodes you would have long since taken action, but imagine you have two nodes and one of them goes down… your entire cluster would go down.  So Failover Clustering uses Node and Disk Majority: it takes the smallest drive shared by all nodes (I usually create a 1GB LUN), configures it as the Quorum drive, and gives it a vote… so if one of the nodes in your two-node cluster goes down you still have a majority of votes, and your cluster stays on-line.
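    You can also verify the quorum configuration from PowerShell with the FailoverClusters module:

    # Display the current quorum model and witness resource
    Get-ClusterQuorum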

    The next thing that you want to check is your nodes.  Expand the Nodes tree in the navigation pane and make sure that all of your nodes are up.

    Once this is done you should check your storage.  Expand the Storage tree in the navigation pane, and then expand Disks.  If you followed my articles you should have two disks – one large one (mine is 140GB) and a small one (mine is 1GB).  The smaller disk should be marked as assigned to Disk Witness in Quorum, and the larger disk will be assigned to Available Storage.

    Cluster Shared Volumes (CSVs) were introduced in Windows Server 2008 R2.  They create a consistent namespace for your SAN LUNs on all of the nodes in your cluster.  In other words, rather than having to ensure that all of your LUNs have the same drive letter on each node, CSVs create a link (a portal, if you will) on your C: drive under the directory C:\ClusterStorage.  Each LUN gets its own subdirectory: C:\ClusterStorage\Volume1, C:\ClusterStorage\Volume2, and so on.  As well, using CSVs means that you are no longer limited to a single VM per LUN, so you will likely need fewer LUNs.

    CSVs are enabled by default, and all you have to do is right-click on any drive assigned to Available Storage, and click Add to Cluster Shared Volumes.  It will only take a second to work.

    NOTE: While CSVs create directories on your C drive that is completely navigable, it is never a good idea to use it for anything other than Hyper-V.  No other use is supported.
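    The same move can be scripted.  A minimal sketch, assuming the disk resource still carries a default name like Cluster Disk 1 (check the output of Get-ClusterResource for yours):

    # List the physical disk resources, then promote one to a Cluster Shared Volume
    Get-ClusterResource | Where-Object ResourceType -eq "Physical Disk"
    Add-ClusterSharedVolume -Name "Cluster Disk 1"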

    Step 4: Creating a Highly Available Virtual Machine (HAVM)

    To Failover Cluster Manager, virtual machines are no different from any other clustered role.  As such, that is where we create them!

    1. In the navigation pane of Failover Cluster Manager expand your cluster and click Roles.

    2. In the Actions Pane click Virtual Machines… and click New Virtual Machine.

    3. In the New Virtual Machine screen select the node on which you want to create the new VM and click OK.

    The New Virtual Machine Wizard runs just like it would in Hyper-V Manager.  The only thing you would do differently here is change the file locations for your VM and VHDX files.  In the appropriate places ensure they are stored under C:\ClusterStorage\Volume1.

    At this point your highly available virtual machine has been created, and can be failed over without delay!
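    For reference, here is a minimal PowerShell sketch of the same outcome; the VM name, memory, and disk size are assumptions:

    # Create a VM with its files on the Cluster Shared Volume, then make it highly available
    New-VM -Name "VM-2" -MemoryStartupBytes 1GB -Path "C:\ClusterStorage\Volume1" -NewVHDPath "C:\ClusterStorage\Volume1\VM-2\VM-2.vhdx" -NewVHDSizeBytes 40GB
    Add-ClusterVirtualMachineRole -VMName "VM-2"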

    Step 5: Making an existing virtual machine highly available

    In all likelihood you are not starting from the ground up, and you probably have pre-existing virtual machines that you would like to add to the cluster.  No problem… however, before you do, you need to move the VM’s storage onto shared storage.  Because Windows Server 2012 includes Live Storage Migration it is very easy to do:

    1. In Hyper-V Manager right-click the virtual machine that you would like to make highly available and click Move

    2. In the Choose Move Type screen select the radio Move the virtual machine’s storage and click Next>

    3. In the Choose Options for Moving Storage screen select the radio marked Move all of the virtual machine’s data to a single location and click Next>

    4. In the Choose a new location for virtual machine type C:\ClusterStorage\Volume1 into the field.  Alternately you could click Browse… and navigate to the shared file location.  Then click Next>

    5. On the Completing Move Wizard page verify your selections and click Finish.

    Remember that moving a running VM’s storage can take a long time; the VHD or VHDX file could be huge, depending on the size you selected.  Be patient.  Once it is done you can continue with the following steps.

    6. In Failover Cluster Manager navigate to the Roles tab.

    7. In the Actions Pane click Configure Role…

    8. In the Select Role screen select Virtual Machine from the list and click Next>.  This step can take a few minutes… be patient!

    9. In the Select Virtual Machine screen select the virtual machine that you want to make highly available and click Next>

    NOTE: A great improvement in Windows Server 2012 is the ability to make a VM highly available regardless of its state.  In previous versions you needed to shut down the VM to do this… no more!

    10. On the Confirmation screen click Next>

    …That’s it! Your VM is now highly available.  You can navigate to Nodes and see which server it is running on.  You can also right-click on it, click Move, select Live Migration, and click Select Node.  Select the node you want to move it to, and you will see it move before your very eyes… without any downtime.
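    Here is the same flow as a minimal PowerShell sketch; the VM and node names are assumptions:

    # Move the storage to the CSV, make the VM highly available, then live migrate it
    Move-VMStorage -VMName "VM-1" -DestinationStoragePath "C:\ClusterStorage\Volume1"
    Add-ClusterVirtualMachineRole -VMName "VM-1"
    Move-ClusterVirtualMachineRole -Name "VM-1" -Node HOST2 -MigrationType Live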

    What? There’s a Video??

    Yes, we wanted you to read through all of this, but we also wrote it as a reference guide that you can come back to when you build it yourself.  However, to make your life slightly easier, we also created a video for you and posted it online.  Check it out!

    Creating and configuring Failover Clustering for Hyper-V in Windows Server 2012

    For Extra Credit!

    Now that you have added your virtualization hosts as nodes in a cluster, you will probably be creating more of your VMs on Cluster Shared Volumes than not.  In the Hyper-V Settings you can change the default file locations for both your VMs and your VHDX files to C:\ClusterStorage\Volume1.  This will save you from having to enter them each time.
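    A minimal sketch of the same change in PowerShell (the path is an assumption):

    # Point the default VM and VHD locations at the Cluster Shared Volume
    Set-VMHost -VirtualMachinePath "C:\ClusterStorage\Volume1" -VirtualHardDiskPath "C:\ClusterStorage\Volume1"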

    As well, the best way to create your VMs will be in the Failover Cluster Manager and not in Hyper-V Manager.  FCM creates your VMs as HAVMs automatically, without your having to perform those extra steps.

    Conclusion

    Over the last few weeks we have demonstrated how to Create a Storage Pool, perform a Shared Nothing Live Migration, Create an iSCSI Software Target in Windows Server 2012, and finally how to create and configure Failover Clusters in Windows Server 2012.  Now that you have all of this knowledge at your fingertips (or at least the links to remind you of it!) you should be prepared to build your virtualization environment like a pro.  Before you forget what we taught you, go ahead and do it.  Try it out, make mistakes, and figure out what went wrong so that you can fix it.  In due time you will be an expert in all of these topics, and will wonder how you ever lived without them.  Good luck, and let us know how it goes for you!

    Installing NetFx3 on Windows Server 2012


    Okay… I am installing SQL Server 2012 on a Windows Server 2012 box, there shouldn’t be any problems.  Everything is proceeding normally until I get this message:

    Error while enabling Windows feature : NetFx3, Error Code : -2146498298 , Please try enabling Windows Feature : NetFx3 from Windows management tools and then run setup again.

    No problem… I know how to install Windows features; I start the Add Roles and Features Wizard and go looking for NetFx3… it’s not there.

    Problem.

    It turns out that Windows Server 2012 does not include the NetFx3 (.NET Framework 3.5) payload by default.  That doesn’t mean it is gone, but it does have to be installed separately from your installation media.  Here’s what you do:

    1) Insert your Windows Server 2012 media.  As I was installing SQL Server in a Hyper-V VM I ejected the SQL media and attached my Windows Server 2012 ISO.  I then checked to see what drive letter it was (D:).

    2) I opened a Command Prompt with administrative credentials.  From the Start Screen I typed CMD but instead of clicking on it or pressing ENTER I right-clicked, and at the bottom clicked on Run As Administrator.

    3) From the Command Prompt I typed the following command:

    dism /online /enable-feature /featurename:netfx3 /all /source:d:\sources\sxs


    The Deployment Image Servicing and Management tool is one of the easiest ways to install features in Windows when the GUI fails you. 
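    If you would rather stay in PowerShell, the same feature can be enabled with Install-WindowsFeature; NET-Framework-Core is the Server Manager name for the .NET Framework 3.5 feature, and the -Source path assumes your media is mounted as D: as above:

    Install-WindowsFeature NET-Framework-Core -Source D:\sources\sxs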

    Note: Unfortunately, if you encounter this error you will have to restart your installation of SQL Server.  That doesn’t mean you should cancel out at this point… what I did was leave the error message on the screen while I resolved the NetFx3 issue, and then let it resume.  The SQL Server installation completed, though several features failed.  I then went back and re-installed SQL on top of the old installation with the features that I needed.  It worked just fine for me, and it should for you.

    Storage Pools… Dive right in!

    This article was originally posted to the Canadian IT Pro Connection.

    Storage Pools are a new feature in Windows Server 2012 that, at first glance (at the terminology), may look like the software RAID arrays that have been around for years, but they are really a new concept, or at least several generations of advancement on the old one.  They give us the ability to use disks of different sizes and bus types to create a single ‘pooled disk.’

    While Storage Pools are easy to create and use, the technology under the hood is quite complex, and certainly years ahead of anything we had seen before.  Storage Pools leverage the power of virtual hard disks and ‘thin provisioning’ in order to deliver ‘on demand’ storage.

    Let’s create a scenario in which we see the true value of Storage Pools:

    Someone in your organization is working on a virtual server that will start small but will necessarily grow over time.  They request 185GB of storage for their VM.  Because of the importance of the project they request the fastest solid-state drives (SSDs) available.  You have one 64GB drive available immediately, but the rest are on backorder and will take several weeks to arrive.

    Rather than simply installing the disk into a server and provisioning the VM onto that disk, you connect it, create a Storage Pool, and add the disk.  You then create a virtual disk on the Storage Pool, and then create a volume on that virtual disk.  You should now have a volume of about 63GB (formatted capacity) ready to allocate to the VM.  The project proceeds.

    A few weeks later you receive your new SSDs, and not a minute too soon because the VM is growing.  You install the new disks into the server, and from the Storage Pools screen in Server Manager you add the new drives to the pool, expand the virtual disk, and then extend the volume.  Within minutes you have the full 185GB volume (on SSDs) that was requested.

    Let’s extend beyond the single server though.  You may need an iSCSI SAN, but do not have the budget for it.  Rather than make do without, you take a NAS (Network-Attached Storage) or JBOD (Just a Bunch Of Disks) appliance, both of which are much less expensive, and create your Storage Pool using those disks.  Then from within Windows Server 2012 you use the iSCSI Software Target to start creating LUNs on the appliance, thus creating the SAN device you couldn’t otherwise afford.

    The hardware

    I will preface this by saying that for servers I always strongly recommend server-grade hardware.  However sometimes we do not have the budget for the best hardware, and we have to use what is available.  Storage Spaces are supported on any type of drive you can connect to your computer, be it SATA or IDE, SCSI or SAS, iSCSI, or USB.  With that being said, if you are going to use your Storage Space for failover, only SAS and iSCSI are supported by Microsoft.  However it is even possible to create a Storage Pool of USB keys, as long as they are connected to your computer.

    Creating your Storage Pool


    1. From within Server Manager click on the File and Storage Services workspace.
    2. In the navigation pane select the Storage Pools context.
    3. To the top-right of the Storage Pools workspace click on the TASKS drop-down and click New Storage Pool…
    4. In the Specify a storage pool name and subsystem window name your pool, and select the group of available disks that you will use and click Next.
      5. Select the physical (or in this case virtual) disks that you would like to add to your pool and click Next.


    6. On the Confirm selections page click Create.


    It will not take very long, and you will get a message that you have successfully created a Storage Pool.  Before you close the dialogue box, notice that near the bottom there is a checkbox asking if you want to create a virtual disk when the wizard closes.  Select this checkbox and then click Close.  The New Virtual Disk Wizard will come up automatically.

    7. In the New Virtual Disk Wizard select your newly created Storage Pool onto which to create the virtual disk, and then name the disk as you like.

    8. In the Select the storage layout screen you are asked to select between Simple, Mirror, or Parity.

    Simple: data is striped across the disks, maximizing the capacity and increasing throughput, but without offering any redundancy thus decreasing reliability.  You are not protected from disk failures.

    Mirror: data is duplicated on two (or three) disks which increases reliability, but reduces capacity.  A mirror requires at least two disks to protect from a single failure, and five disks to protect from two simultaneous disk failures.

    Parity: data and parity information are striped across the disks, increasing reliability but reducing capacity.  It requires at least three disks, and cannot be used in a failover cluster.

    9. In the Specify provisioning type screen you can choose either thin-provisioned (your virtual disk starts small and grows as needed) or fixed-provisioned (your virtual disk is created as the fully provisioned file).


    10. In the Specify the size of the virtual disk page enter the size of disk, and from the drop-down list select the unit of measurement – megabytes, gigabytes, or terabytes.  Click Next.

    11. On the Confirm selections screen verify that your settings are right and click Create.  This process should not take very long.


    Once again, at the bottom of the View results page we have a checkbox, this time asking if we want to create a volume.  Leaving this checked will bring up the New Volume Wizard.

    The wizard will look a little different than it did in Server 2008, owing to the fact that you can now provision storage both locally and remotely.  On the first screen you select the server and the disk; on the second screen you select the volume size (which cannot exceed the size of the disk); you then assign a drive letter or, if you prefer, a directory to mount it to (or assign no letter at all); finally you select the file system, allocation unit size, and volume label.  On the last screen you confirm your selections and click Create.


    The volume will not take long to create, and you are now done.  You can navigate to Computer in Windows Explorer and your newly provisioned drive is ready to use!
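    For those who prefer the cmdlets, here is a minimal sketch of the pool and virtual disk creation in PowerShell; the friendly names, resiliency setting, and size are all assumptions, and the New Volume wizard (or New-Partition and Format-Volume) can finish the job:

    # Gather the disks that are eligible for pooling (the Primordial pool)
    $disks = Get-PhysicalDisk -CanPool $true

    # Create the pool on the Storage Spaces subsystem
    $subsystem = Get-StorageSubSystem -FriendlyName "Storage Spaces*"
    New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName $subsystem.FriendlyName -PhysicalDisks $disks

    # Carve a thin-provisioned mirrored virtual disk out of the pool
    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VDisk1" -ResiliencySettingName Mirror -ProvisioningType Thin -Size 100GB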

    Growing your Storage Pool

    Creating it is one thing, but let’s now see how easy it is to extend the volume by adding drives.


    When we navigate back to our Storage Pools workspace in Server Manager we see that our newly created pool is there; we also see (under Physical Disks) that we have two 64 GB disks that are unused, and therefore still listed in the Primordial pool.


    1. Right-click on your storage pool and click Add Physical Disk…
    2. Your available disks will be listed.  Select the ones you wish to add and click Next.
    3. In the VIRTUAL DISKS context on the Storage Pools workspace right-click on your virtual disk and click Extend Virtual Disk…
    4. In the Extend Virtual Disk window enter the desired new size and click OK.
    5. Now change the context to the Volumes workspace.  Right-click on the volume that you created and click Extend Volume.  Notice that when you click on the volume the disk is listed as belonging to a Microsoft Storage Space Device.  It lists the capacity, both allocated and unallocated, as well as the status and virtual disk name.
    6. The Extend Volume window looks identical to the Extend Virtual Disk window.  Enter the new size and click OK.
    7. Extending the volume only takes a few seconds, and when you are done you will see that the capacity has been extended.

    If you want to double-check, navigate to Computer in Windows Explorer and (once you hit refresh) your newly extended drive is ready to use!
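    The same growth operations map onto the Storage module cmdlets.  A minimal sketch, where the friendly names, drive letter, and size are assumptions you would replace:

    # Add the newly arrived disks to the pool
    $new = Get-PhysicalDisk -CanPool $true
    Add-PhysicalDisk -StoragePoolFriendlyName "Pool1" -PhysicalDisks $new

    # Grow the virtual disk, then extend the partition to its new maximum
    Resize-VirtualDisk -FriendlyName "VDisk1" -Size 185GB
    $max = (Get-PartitionSupportedSize -DriveLetter E).SizeMax
    Resize-Partition -DriveLetter E -Size $max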

    Conclusion

    Storage Spaces are going to revolutionize the way we (as administrators) think about storage.  We can now hot-add drives to volumes and extend them in seconds and not hours, and because there is no downtime involved we will not have to do any of this after hours.

    Going forward we are going to stop thinking about the disk as the main unit of storage in our environment; rather, it will be one piece of the equation.  Our volume sizes will not be limited to the size of a disk, but by what we need, whether that is measured in gigabytes or terabytes.

    Add to that the fact that we are not tied to any specific architecture, and you will see very quickly that our storage costs and complexity will drop, even as we add features like mirroring, failover disks, and parity.