Help! Where is my Client Access Point?

So you are building a Scale Out File Server (SoFS).  You are all happy because you read one of my articles (or POSSIBLY someone else’s… but really, why would you hurt me like that?) and you know you are good to go.  You have your cluster, you have your drives, and you have created your SoFS role.  Now all that’s left to do is to add a file share to the role.


Huh?  What did you do wrong?

Relax… you didn’t do anything wrong.  When you create the SoFS role in Failover Cluster Manager, it takes some time for the Client Access Point to propagate through Active Directory and DNS.  How long?  That depends on how large your network topology is; it can take a little time.  Just go for lunch, a smoke, maybe get out and stretch your legs, go for a jog… when you come back you should be ready to go!
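If you would rather check on the propagation than just wait, a couple of quick PowerShell checks will tell you when the Client Access Point is ready.  This is a minimal sketch – the role name SOFS is a placeholder, so substitute whatever you named yours:

```powershell
# "SOFS" is a hypothetical role name – substitute the name you gave your
# Scale-Out File Server role.

# Has the Distributed Network Name made it into DNS yet?
Resolve-DnsName -Name SOFS -ErrorAction SilentlyContinue

# And is the cluster resource itself online?  (Run this on a cluster node.)
Get-ClusterResource | Where-Object {$_.ResourceType -like "*Distributed Network Name*"}
```

When the first command returns an address and the resource reports Online, your file share wizard should be happy.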


Another tough exam…

As a subject matter expert (SME) on virtualization, I was neither excited nor intimidated when Microsoft announced their new exam, 74-409: Server Virtualization with Windows Server Hyper-V and System Center.  Unlike many previous exams I did not rush out to be the first to take it, nor was I going to wait forever.  I actually thought about sitting the exam in Japan in December, but since I had trouble registering there and then got busy, I simply decided to use my visit to Canada to schedule the exam.

This is not the first exam that I have gone into without so much as a glance at the Overview or the Skills Measured section of the exam page on the Internet.  I did not do any preparation whatsoever for the exam… as you may know I have spent much of the last five years living and breathing virtualization.  This attitude very nearly came back to bite me in the exam room at the Learning Academy in Hamilton, Ontario on Wednesday morning.

Having taught every Microsoft server virtualization course ever produced (and having written or tech-reviewed many of them) I should have known better.  Virtualization is more than installing Hyper-V.  It’s more than just System Center Virtual Machine Manager (VMM) and Operations Manager (OpsMgr).  It is the entire Private Cloud strategy… and if you plan to sit this exam you had better have more than a passing understanding of System Center Service Manager (ServMgr), Data Protection Manager (DPM), and Orchestrator.  Oh, and your knowledge should extend beyond a single, simple Hyper-V host.

I have long professed to my students that while DPM is Microsoft’s disaster recovery solution, when it comes down to it just make sure that your backup solution does everything that you need, and make sure to test it.  While I stand behind that statement for production environments, it does not hold water when it comes to Microsoft certification exams.  When two of the first few questions were on DPM I did a little silent gulp to myself… maybe I should have prepared a little better for this.

I do not use Service Manager… It’s not that I wouldn’t – I have a lot of good things to say about it.  Heck, I even installed it as recently as yesterday – but I have not used it beyond a passing glance.  The same used to be true of System Center Orchestrator, but over the last year that has changed a lot… I have integrated it into my courseware, and I have spent some time learning it and using it in production environments for repetitive tasks.  While I am certainly not an expert in it, I am at least more than just familiar with it.  That familiarity may have helped me on one exam question.  Had I taken the time to review the exam page on the Microsoft Learning Experience website I would have known that the word Orchestrator does not appear anywhere on the page.

Here’s the problem with Microsoft exams… especially the newer ones that do not simply cover a product, but an entire solution across multiple suites.  Very few of us will use and know every aspect covered on the exam.  That is why I have always professed that no matter how familiar you may be with the primary technology covered, you should always review the exam page and fill in your knowledge gaps with the proper studying.  You should even spend a few hours reviewing the material that you are pretty sure you do know.  As I told my teenaged son when discussing his exams, rarely will you have easy exams… if you feel it was easy it just means you were sufficiently prepared.  Five questions into today’s exam I regretted my blasé attitude towards it – I may be a virtualization expert, but I was not adequately prepared.

As I went through the exam I started to get into a groove… while there are some aspects of Hyper-V that I have not implemented, those are few and far between.  The questions about VHDX files, Failover Clustering, Shared VHDX, Generation 2 VMs, and so many more came around and seemed almost too easy, but like I told my son it just means I am familiar with the material.  There were one or two questions which I considered to be very poorly worded, but I reread the questions and the answers and gave my best answer based on my understanding of them.

I have often described the time between pressing ‘End Exam’ and the appearance of the Results screen as an extended period of excruciating forced lessons in patience.  That was not the case today – I was surprised at how quickly the screen came up.  While I certainly did not ace the exam, I did pass, and not with the bare minimum score.  It was certainly a ‘phew’ moment for a guy who considers himself pretty smart in virtualization.

Now here’s the question… is the exam a really tough one, or was I simply not prepared and thus considered it tough?  And frankly, how tough could it have been if I didn’t prepare, and passed anyways?  I suppose that makes two questions.  The answer to both is that while I did not prepare for the exam, I am considered by many (including Microsoft) a SME on Hyper-V and System Center.  I can say with authority that it was a difficult exam.  That then leads to the next question: is it too tough?  While I did give that some thought as I left the exam (my first words to the proctor were ‘Wow, that was a tough exam!’) I do not think it is unreasonably so.  It will require a lot of preparation – not simply watching the MVA Jump Start videos (which are, by the way, excellent resources, and should be considered required watching for anyone planning to sit the exam).  You will need to build your own environment, do a lot of reading and research, and possibly more.

If you do plan to sit this exam, make sure you visit the exam page first by clicking here.  Make sure you expand and review the Overview and Skills Measured sections.  If you review the Preparation Materials section it will refer you to a five day course that is releasing next week from Microsoft Learning Experience – 20409A- Server Virtualization with Windows Server Hyper-V and System Center (5 Days).  I am proud to say that I was involved with the creation of that course, and that it will help you immensely, not only with the exam but with your real-world experience.

Incidentally, passing the exam gives you the following cert: Microsoft Certified Specialist: Server Virtualization with Hyper-V and System Center.

Good luck, and go get ’em!

Step by Step: Adding the GUI to Windows Server Core

HELP! Mitch, you told me that I should learn Server Core and I am trying, but you also told me that it wasn’t a problem to add the GUI back into a Server Core machine if I really needed it.  How do I do that?

This is a question I have gotten a few times from readers and students over the past year.  There are a couple of ways to do it, and depending on your situation you may need to try both of them.

Method 1: No problem!

You installed Windows Server with the full GUI previously, and then you removed the GUI.  This is the simplest scenario for our problem.  Here goes:

  1. Open PowerShell (powershell.exe)
  2. Run the cmdlet: Install-WindowsFeature Server-Gui-Mgmt-Infra,Server-Gui-Shell -Restart

Now, if you are really deathly afraid of the command line, you can connect to a server with Server Manager and use the Add Roles and Features wizard.  Either way will work just fine.  However here’s the catch… both of them depend on the bits for the GUI being on the server’s hard drive.  If you never installed the GUI then they won’t be.  At this point you have to move on to…
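Not sure which method applies to you?  A quick check of the feature install state will tell you whether the GUI payload is still on the disk before you pick one – a small sketch:

```powershell
# "Available" or "Installed" means Method 1 will do; "Removed" means the
# payload is gone from the disk and you will need Method 2 with a -Source.
Get-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra |
    Select-Object Name, InstallState
```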

Method 2: Still no problem 🙂

You dove in head first and decided to get right into Server Core.  That’s just how you roll.  Unfortunately you discovered something that made you backpedal.  No problem, many fine IT Pros have made worse false starts than this.  It won’t be difficult; all you have to do is add the GUI features.  However since the bits are not on the drive, you have to add a source.  Follow these steps and you’ll be on your way!

  1. Create a folder on the C drive: MD c:\mount
  2. Check the index number for Server Datacenter (must be performed in a Command Prompt with elevated privileges): Dism /Get-WimInfo /WimFile:<drive>:\sources\install.wim
  3. Mount the WIM file to the previously created directory using this command at the same elevated command prompt: Dism /Mount-Wim /WimFile:<drive>:\sources\install.wim /Index:<#> /MountDir:c:\mount /ReadOnly
  4. Start PowerShell and run this cmdlet: Install-WindowsFeature Server-Gui-Mgmt-Infra,Server-Gui-Shell -Restart -Source c:\mount\windows\winsxs

(For the fun of it, PowerShell will accept your Command Prompt commands, so you can do all of the above in a PowerShell window.)
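Put together, and run from a single elevated PowerShell window, the whole thing looks something like this.  The drive letter and index number here are assumptions – use whatever your media and the Get-WimInfo output actually show:

```powershell
MD c:\mount
Dism /Get-WimInfo /WimFile:D:\sources\install.wim
# Say the Get-WimInfo output showed Server Datacenter at Index 4:
Dism /Mount-Wim /WimFile:D:\sources\install.wim /Index:4 /MountDir:c:\mount /ReadOnly
Install-WindowsFeature Server-Gui-Mgmt-Infra,Server-Gui-Shell -Restart -Source c:\mount\windows\winsxs
```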

Again, if you have been soooo spooked by Server Core that you cannot bear to do this in the command prompt, do the following:

  1. Connect to a GUI-based server (or Windows 8.1 system with RSAT Tools) and open Server Manager.
  2. Right-click All Servers and click Add Servers.
  3. Find and add your server, ensuring that it reports as Online.
  4. Click on Manage and from the drop-down menu select Add Roles and Features.
  5. On the Before you begin page click Next.
  6. On the Select installation type page click Next.
  7. On the Select destination server page select your Server Core machine from the list and click Next.
  8. On the Select server roles page click Next.
  9. On the Select features page scroll down to User Interfaces and Infrastructure.  Expand the selection, then select Graphical Management Tools and Infrastructure and Server Graphical Shell.  Click Next.
  10. On the Confirm installation selections page click Specify an alternate source path.
  11. On the Specify Alternate Source Path page enter the path to the installation media, then click OK.
  12. On the Confirm installation selections page select the checkbox marked Restart the destination server automatically if required.
  13. Click Install.

That’s it… your server will reboot with the full GUI.  Honestly, I don’t expect you will be doing this very often – I truly feel that Server Core is the way to go for the vast majority of servers going forward.  However isn’t it nice to know that you have the option should you really need it?

…Oh, and please, for G-d’s sake, if you are re-installing the GUI at least try the PowerShell method!

Server Core: Save money.

I remember an internal joke floating around Microsoft in 2007, about a new way to deploy Windows Server.  There was an ad campaign around Windows Vista at the time that said ‘The Wow Starts Now!’  When they spoke about Server Core they joked ‘The Wow Stops Now!’

Server Core was a new way to deploy Windows Server.  It was not a different license or a different SKU, or even different media.  You simply had the option during the installation of clicking ‘Server Core’ which would install the Server OS without the GUI.  It was simply a command prompt with, at the time, a few roles that could be installed in Core.

While Server Core would certainly save some resources, it was not really practical in Windows Server 2008, or at least not for a lot of applications.  There was no .NET, no IIS, and a bunch of other really important services could not be installed on Server Core.  In short, Server Core was not entirely practical.

Fast forward to Windows Server 2012 (and R2) and it is a completely different story.  Server Core is a fully capable server OS, and with regard to resources the savings are huge.  So when chatting with the owner of a cloud services provider recently (with hundreds of physical and thousands of virtual servers) I asked what percentage of his servers were running Server Core, and he answered ‘Zero’.  I could not believe my ears.

The cloud provider is a major Microsoft partner in his country, and is on the leading edge (if not the bleeding edge) of every Microsoft technology.  They recently acquired another datacentre that was a VMware vCloud installation, and have embarked on a major project to convert all of those hosts to Hyper-V through System Center 2012.  So why not Server Core?

The answer is simple… When Microsoft introduced Server Core in 2008 they tried it out, and recognizing its limitations decided that it would not be a viable solution for them.  It had nothing to do with the command line… the company scripts and automates everything in ways that make them one of the most efficient datacentres I have ever seen.  They simply had not had the cycles to re-test Server Core in Server 2012 R2 yet.

We sat down and did the math.  The Graphical User Interface (GUI) in Windows Server 2012 takes about 300MB of RAM – a piddling amount when you consider the power of today’s servers.  However in a cloud datacentre such as this one, in which every host contained 200-300 virtual machines running Windows Server, that 300MB of RAM added up quickly – a host with two hundred virtual machines required 60GB of RAM just for GUIs.  If we assume that the company was not going to go out and buy more RAM for its servers simply for the GUI, it meant that, on average, a host comfortably running 200 virtual machines with the GUI would easily run 230 virtual machines on Server Core.

In layman’s terms, the math in the previous paragraph means that the datacentre’s capacity could increase by fifteen percent by converting all of its VMs to Server Core.  If the provider has 300 hosts running 200 VMs each (60,000 VMs), then an increased workload of 15% translates to 9,000 more VMs.  With the full GUI that would mean forty-five more hosts (let’s conservatively say $10,000 each), or an investment of nearly half a million dollars.  Of course that is before you consider all of the ancillary costs – real estate, electricity, cooling, licensing, etc…  Server Core can save all of that.
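For the skeptics, the arithmetic in the last two paragraphs works out like this:

```powershell
$Hosts      = 300
$VMsPerHost = 200
$RamPerHost = $VMsPerHost * 300MB         # 60GB of RAM per host spent on GUIs
$TotalVMs   = $Hosts * $VMsPerHost        # 60,000 VMs today
$ExtraVMs   = $TotalVMs * 0.15            # 9,000 more VMs on the same iron
$HostsSaved = $ExtraVMs / $VMsPerHost     # 45 hosts you don't have to buy
$Savings    = $HostsSaved * 10000         # $450,000 before real estate, power, cooling…
```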

Now here’s the real kicker: had we seen this improvement in Windows Server 2008, converting servers from GUI to Server Core would still have carried a very significant cost… a re-install was required.  With Windows Server 2012, Server Core is a feature – or rather, the GUI itself is a feature that can be added or removed from the OS, and only a single reboot is required.  While the reboot may be disruptive, if managed properly the disruption will be minimal, with immense cost savings.

If you have a few servers to uninstall the GUI from then the Server Manager is the easy way to do it.  However if you have thousands or tens of thousands of VMs to remove it from, then you want to script it.  As usual PowerShell provides the easiest way to do this… the cmdlet would be:

Uninstall-WindowsFeature Server-Gui-Shell -Restart
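If you have hundreds or thousands of these to do, a sketch using PowerShell remoting would look something like the following – the server list file is a hypothetical, so build yours from AD, VMM, or wherever you like:

```powershell
# Remove the GUI shell from every server in a (hypothetical) list file,
# rebooting each one as required.
$Servers = Get-Content C:\Scripts\ServerList.txt
Invoke-Command -ComputerName $Servers -ScriptBlock {
    Uninstall-WindowsFeature Server-Gui-Shell -Restart
}
```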

There is also a happy medium between the GUI and Server Core called MinShell… you can read about it here.  However remember that in your virtualized environment you will be doing a lot more remote management of your servers, and there is a reason I call MinShell ‘the training wheels for Server Core.’

There’s a lot of money to be saved, and the effort is not significant.  Go ahead and try it… you won’t be disappointed!

What’s New in Windows Server 2012 R2 Lessons Learned Week 1

Dan Stoltz asked me to republish this article, and it is well worth it!  Check out all of the links – a lot of great material! -MDG

It has been an incredible start to the Windows Server 2012 R2 Launch Series.  Here is brief summary of what we covered so far…

  1. Windows Server 2012 R2 Launch Blog Series Index #WhyWin2012R2 the series, opening and index page we learned that from Oct 18th and every day until Thanksgiving we should visit http://aka.ms/2012r2-01 to learn all about Windows Server 2012 R2. You can also follow the excitement on twitter at #WhyWin2012R2. Download the calendar .ICS to populate your calendar here.  This post started the new launch series where Microsoft platform experts would cover  why Windows Server 2012 R2 is important, how to deploy, manage, configure any number of components in Windows Server 2012 R2, how the new OS capabilities stack up against competitors, how R2 integrates with and leverages cloud services like Windows Azure and many, many more categories. This series is deep technical content with lots of How To’s and Step-By-Step instructions. You will learn about storage, cloud integration, RDS, VDI, Hyper-V, virtualization, deduplication, replica, DNS, AD, DHCP, high availability, SMB, backup, PowerShell and much, much more!
  2. Why Windows Server 2012 R2 Rocks! #WhyWin2012R2 – You are probably like most people and realize that Windows Server 2012 was a very substantial upgrade over Windows Server 2008 R2. What would you say to Microsoft doing it again, and even better? WOW! That is exactly what Windows Server 2012 R2 has done. In this post we will look at some of the coolest additions and improvements to Windows Server 2012 R2. Regardless of which of the four pillars of focus (Enterprise-Class, Simple and Cost-Effective, Application Focused, User Centric) you are most interested in, you will find plenty in this post to appreciate! @ITProGuru will show you as he counts the top 10 biggest, most relevant and/or most differentiated new features in Windows Server 2012 R2.
  3. Where Are All The Resources For Windows Server 2012 R2? – We learned where to go to get free resources for Windows Server 2012 R2, including downloading a Free Trial of Windows Server 2012 R2, Free online cloud servers, a Free EBook on Windows Server 2012 R2, Free Posters, Free Online Training from Microsoft Virtual Academy, and much more.
  4. Implementing Windows Server 2012 R2 Active Directory Certificate Services Part 1 &
  5. Implementing Windows Server 2012 R2 Active Directory Certificate Services Part 2PKI is heavily employed in cloud computing for encrypting data and securing transactions. While Windows Server 2012 R2 is developed as a building block for cloud solutions, there is an increasing demand for IT professionals to acquire proficiency on implementing PKI with Windows Server 2012 R2. This two-part blog post series is to help those who, like me, perhaps do not work on Active Directory Certificate Services (AD CS) everyday while every so often do need to implement a simple PKI for assessing or piloting solutions better understand and become familiar with the process.
  6. Step-by-Step: Automated Tiered Storage with Storage Spaces in R2 – Windows Server 2012 R2 includes a number of exciting storage virtualization enhancements, including automated storage tiering, scale-out file server re-balancing and performance tuning for high-speed 10Gbps, 40Gbps and 56Gbps storage connectivity.  IT Pros with which I’ve spoken are leveraging these new enhancements to build cost-effective SAN-like storage solutions using commodity hardware.  In this article, we’ll begin part 1 of a two-part mini-series on storage.  I’ll provide a technical comparison of Windows Server 2012 R2 storage architecture to traditional SAN architecture, and then deep-dive into the new Storage Spaces enhancements for storage virtualization.  At the end of this article, I’ll also include Step-by-Step resources that you can use to build your own Storage Spaces lab.  In part 2 of this mini-series, we’ll finish our storage conversation with the new improvements around Scale-Out File Servers in Windows Server 2012.
  7. iSCSI Target Server – Super Fast Mass Server Deployment! – #WhyWin2012R2 – There have been some significant updates to Windows Server 2012 with the R2 release. One of these updates helps IT Pros deal with a growing problem – How do I deploy a large number of servers quickly, at scale, without adding massive amounts of storage?  The updates to the iSCSI target server technologies allow admins to share a single operating system image stored in a centralized location and use it to boot large numbers of servers from a single image. This improves efficiency, manageability, availability, and security. iSCSI Target Server can boot hundreds of computers by using a single operating system image!
  8. Why Windows Server 2012 R2: Reducing the Storage Cost for your VDI Deployments with VHD De-duplication for VDI – Windows Server 2012 introduced data deduplication for storage workloads, and customers saw phenomenal storage reduction.  Windows Server 2012 R2 deduplication now supports live VHDs for VDI, which means that data de-duplication can now be performed on open VHD/VHDX files on remote VDI storage with CSV volume support. Remote VHD/VHDX storage de-duplication allows for increased VDI storage density, significantly reducing VDI storage costs, and enabling faster read/write of optimized files and advanced caching of duplicated data.
  9. Importing & Exporting Hyper-V VMs in Windows Server 2012 R2 – One of the biggest benefits of server virtualization is the ability to backup or restore entire systems easily and quickly.  Though they are infrequently used features, Hyper-V import and export are very fast, versatile, and easy to use.  In Windows Server 2012 R2 these features get even better.  I will take a look at how this functionality works and why it is useful.  I’ll also discuss how they are very different from the commonly used checkpoints in Hyper-V, and how you can automate this process.

Keep plugged in to the series to continue learning about Windows Server 2012 R2

– See more at: http://itproguru.com/expert/2013/10/whats-new-in-windows-server-2012-r2-lessons-learned-week-1/

Building the IT Camp with PowerShell Revisited

I always said I am not hard to please… I only need perfection.  So when I wrote my PowerShell script to build my environment the other day I was pleased with myself… until I realized a huge flaw in it.  Generation 1.

Actually to be fair, there is nothing wrong with Generation 1 virtual machines in Hyper-V; they have served us all well for several years.  However how could I claim to live on the bleeding edge (Yes, I have made that claim many times) and yet stay safe with Generation 1?

In the coming weeks Windows Server 2012 R2 will become generally available.  One of the huge changes that we will see in it is Generation 2 virtual machine hardware.  Some of the changes in hardware levels include UEFI, Secure Boot, Boot from SCSI, and the elimination of legacy hardware (including IDE controllers and Legacy NICs).

Of course, since Generation 1 hardware is still fully supported, we need to specify which Generation a VM will be when we create it – and this cannot be changed later.

I had forgotten about this, and when I created the script (of which I was quite proud) I did not think of this.  It was only a few hours later, as I was simultaneously installing nine operating systems, that I noticed in the details pane of my Hyper-V Manager that all of my VMs were actually Gen1.

Crap.

Remember when I said a couple of paragraphs ago that the generation level cannot be changed?  I wasn’t kidding.  So rather than living with my mistake I went back to the drawing board.  I found the proper cmdlet switches, and modified my script accordingly.

As there is a lot of repetition in it, I am deleting six of the nine VMs from the list.  You are not missing out on anything, I assure you.

# Script to recreate the infrastructure for the course From Virtualization to the Private Cloud (R2).
# This script should be run on Windows Server 2012 R2.
# This script is intended to be run within the Boot2VHDX environment created by Mitch Garvis
# All VMs will be created as Generation 2 VMs (except the vCenter VM for which it is not supported).
# All VMs will be configured for Windows Server 2012 R2
# System Center 2012 R2 will be installed.

# Variables

$ADM = "Admin"                # VM running Windows 8.1 (for Administration)
$ADMMIN = 512MB                # Minimum RAM for Admin
$ADMMAX = 2GB                # Maximum RAM for Admin
$ADMVHD = 80GB                # Size of Hard Drive for Admin

$SQL = "SQL"                # VM (SQL Server)
$SQLMIN = 2048MB            # Minimum RAM assigned to SQL
$SQLMAX = 8192MB            # Maximum RAM assigned to SQL
$SQLCPU = 2                # Number of CPUs assigned to SQL
$SQLVHD = 200GB                # Size of Hard Drive for SQL

$VCS = "vCenter"             # VM (vSphere vCenter Server) (Windows Server 2008 R2)
$VCSMIN = 2048MB             # Minimum RAM assigned to vCenter
$VCSMAX = 4096MB             # Maximum RAM assigned to vCenter
$VCSCPU = 2                 # Number of CPUs assigned to vCenter
$VCSVHD = 200GB                # Size of Hard Drive for vCenter

$VMLOC = "C:\HyperV"            # Location of the VM and VHDX files

$NetworkSwitch1 = "CorpNet"        # Name of the Internal Network

$W81 = "E:\ISOs\Windows 8.1 E64.iso"            # Windows 8.1 Enterprise
$WSR2 = "E:\ISOs\Windows Server 2012 R2.iso"        # Windows Server 2012 R2
$W2K8 = "E:\ISOs\Windows Server 2008 R2 SP1.iso"     # Windows Server 2008 R2 SP1

# Create VM Folder and Network Switch
MD $VMLOC -ErrorAction SilentlyContinue
$TestSwitch1 = Get-VMSwitch -Name $NetworkSwitch1 -ErrorAction SilentlyContinue
if ($TestSwitch1.Count -eq 0) {New-VMSwitch -Name $NetworkSwitch1 -SwitchType Internal}

# Create & Configure Virtual Machines
New-VM -Name $ADM -Generation 2 -Path $VMLOC -MemoryStartupBytes $ADMMIN -NewVHDPath $VMLOC\$ADM.vhdx -NewVHDSizeBytes $ADMVHD -SwitchName $NetworkSwitch1
Set-VM -Name $ADM -DynamicMemory -MemoryMinimumBytes $ADMMIN -MemoryMaximumBytes $ADMMAX
Add-VMDvdDrive -VMName $ADM -Passthru | Set-VMDvdDrive -Path $W81

New-VM -Name $SQL -Generation 2 -Path $VMLOC -MemoryStartupBytes $SQLMIN -NewVHDPath $VMLOC\$SQL.vhdx -NewVHDSizeBytes $SQLVHD -SwitchName $NetworkSwitch1
Set-VM -Name $SQL -DynamicMemory -MemoryMinimumBytes $SQLMIN -MemoryMaximumBytes $SQLMAX -ProcessorCount $SQLCPU
Add-VMDvdDrive -VMName $SQL -Passthru | Set-VMDvdDrive -Path $WSR2

New-VM -Name $VCS -Path $VMLOC -MemoryStartupBytes $VCSMIN -NewVHDPath $VMLOC\$VCS.vhdx -NewVHDSizeBytes $VCSVHD -SwitchName $NetworkSwitch1
Set-VM -Name $VCS -DynamicMemory -MemoryMinimumBytes $VCSMIN -MemoryMaximumBytes $VCSMAX -ProcessorCount $VCSCPU
Set-VMDvdDrive -VMName $VCS -Path $W2K8

#Start Virtual Machines
Start-VM $ADM
Start-VM $SQL
Start-VM $VCS

In the script you can see a few differences between my original script (in the article) and this one.  Firstly, on all machines that are running Windows 8.1 or Windows Server 2012 R2 I have set the switch -Generation 2.  That is simple enough.

Adding the virtual DVD was a little trickier; with Generation 1 hardware there was a ready IDE port for you to connect the .ISO file to.  In Gen 2 it is all about SCSI, so you have to use the Add-VMDvdDrive cmdlet, and then connect the .ISO file (Set-VMDvdDrive -VMName <Name> -Path <ISO Path>).  Not only for simplicity but also to demonstrate that you can, I have put these two cmdlets on a single line, connected with a pipe (the | character).

I want to thank a couple of colleagues for helping me out with the Generation 2 hardware and DVD issues… especially Sergey Meshcheryakov, who was quick to answer.  The exact cmdlet switches were not easy to track down!

…and remember, if I can learn it, so can you!  Even the great Sean Kearney once did not know anything about PowerShell… and now look at him!