Why We Test…

It is Friday morning in Tokyo, and there is a line out the door.  If you didn’t know any better you would think that they were lined up to get an autograph from the latest pop icon.

However, if you look at the sign on the door it does not say ‘Tokyo Arena’ or ‘Tokyo Hilton.’  It says IT Service Desk, and the throng lined up is made up of users, each with a laptop in hand.  It seems they are all having similar problems: either they cannot log in at all, or Outlook crashes when they receive HTML-based e-mail.

If I were a Help Desk Technician I might be thinking right now that this was a bad day to get out of bed.  If I were an IT Director I would be (figuratively) screaming for answers, needing my team to find the root cause… Is it malware?  Are we under attack?  Was there just some massive incompetence that killed our systems?

It wouldn’t be long before I discovered the answer.  Are we under attack?  No.  Is it malware?  No… at least, not in the most commonly accepted definition of the term.  What we were facing was a patch from Microsoft that was causing our myriad issues.  Patch KB3097877, part of the November 10 patch rollout cycle, was to blame.

With that knowledge, as an IT Director, I would be setting forth the following plan:

  1. Train the Support Counter techs to resolve the issue (as found in this article from Microsoft);
  2. Ensure the patch was immediately removed from WSUS; and
  3. Once the ‘crisis’ was over, I would bring the interested parties into a room and do a post-mortem… that is, figure out what went wrong, and how to prevent it from happening in the future.

The second point is easy.  Once you know which patch it is, all you have to do is have a WSUS admin mark it as DECLINED.  The first point is stressful for the support techs, but they are well trained and will handle it.
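
If your WSUS server runs Windows Server 2012 or later you can even do this from PowerShell rather than the console.  A minimal sketch, assuming the UpdateServices module is available and that the update title contains the KB number:

# Find the offending update by its KB number and decline it on the WSUS server
Get-WsusUpdate -Classification All | Where-Object { $_.Update.Title -match "KB3097877" } | Deny-WsusUpdate

Run it on the WSUS server itself (or pass a server object with -UpdateServer), then confirm in the console that the update shows as Declined.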

It is during the third point – the post mortem – that I would be looking at my team and wanting them all to simultaneously burst into flames.  Because someone – one of these people whom I trust with my infrastructure, and therefore with the ability for the entire company to work – would have to look at me and say ‘We accept and push out all patches immediately without testing them.’

If I am an extremely diligent IT Director I will know that in our IT Department Policy and Procedures Statement there is a policy about applying patches, and likely it says that patches should be applied only after proper testing.  If we are a less stringent company the policy might read that patches should be applied only after a reasonable delay has passed, and the appropriate forums and blogs on the Internet have claimed they were okay.

If there is no such policy then the blame lies with me.  I can glare at the others, I can even yell if I am a bad leader.  However the buck stops here.

If however there is such a policy, I would be looking at the team and asking them why the policy hadn’t been followed.  I imagine they would look at me quizzically and someone would say ‘This is just what we do… it’s never caused problems before!?’

I might look at the admin who said that and ask if he wears a seat belt when he drives a car.  I might ask if he wears a life vest when he goes boating.  Chances are if you don’t, nothing will happen.  You wear them to be safe and increase your chances of survival if something does happen.  It is the reason we test patches (or let others test them) before we apply them.

The mistake caused by the admins neglecting to test patches might cause hundreds of thousands of dollars in lost productivity… and yet it is almost certain that nobody will lose their job.  They probably won’t even get a reprimand.  None of that is necessary.  What is necessary is that we learn from this.  Patches do not break things very often, but we have to remember that they can, and because of that we must take the proper steps – do our due diligence – to make sure we don’t get hit.

Out of Band Security Updates

If you run Windows Server this is very important.  Microsoft released today a number of out-of-band security updates for Microsoft Windows.  From what I have read, these patches (one of my servers has had 14 applicable updates since 3am) will be applied to Windows clients as well as Windows Servers, but the vulnerability they protect against is only in Windows Server.  I have a bit more information, but because it is the middle of a busy work day I cannot go into it… if you are a server admin I strongly recommend you take some time to look at these patches, test them, and apply them ASAP… the two-week deadline setting in WSUS is probably not good enough for these ones 😉

Microsoft is not a company that does anything out-of-band for no good reason… if it has gone to the trouble of releasing these patches I suspect they are protecting something pretty serious so make sure you look into them – you can be certain that the hackers are!

Cloud-Based VDI!!! No.

I was having a conversation this week with a colleague about his plans to create a hybrid-cloud environment by moving many of his datacenter workloads onto Windows Azure. After all, it makes plenty of sense – it eliminates new capital expenses and reduces ongoing operational expenses.

“And once we have tested it, we plan to roll out a thousand pooled VDI clients running on Windows Azure. It is great!”

No, I’m afraid it is not. Unfortunately, while there is no technological reason why you couldn’t do this, there is a legal reason.  There is no license for the Windows Client (not even Enterprise Edition) that you can deploy in someone else’s datacenter.  In order to legally deploy VDI you must own the physical hardware on which it is installed.

By the way, let me be clear, that is not only an Azure thing, and it is not only a Remote Desktop Services issue. The same licensing limitation is true of Citrix XenDesktop and VMware Horizon.  It is true of Azure, Amazon Web Services, Rackspace, and Joe’s Datacenter Rental.  If you do not own the hardware you can install Windows Server… but not Windows 8.1 (or 8, or 7, or XP for that matter).

I had this conversation with the VP of Sales for a major Microsoft partner in Ontario recently, and I was so flabbergasted that I went back and looked it up. Sure enough, he was right.  So when I spoke with my colleague the other day I was able to save him a lot of time, effort, money, and frustration.  Unfortunately I forgot to turn on the meter, so he got the advice for free.  Oh well, I’m sure he’ll remember around the holidays 🙂

Consultants, I want you to remember this lesson: Your customers may not always like the news you have to tell them… but you do have to tell them.  Of course, this is one of those places where good communication skills will help you out – don’t just say ‘Wow, you are scroo-ooed!’  Tell them what they need to hear and offer alternative solutions for them to accomplish what they are trying to do.

WSUS: Watch out!

Here’s a great way to waste time, network bandwidth, and storage space: download excess patches that you do not need.  For bonus points, download languages you don’t support. 

Windows Server Update Services (WSUS) is a great solution that has come a long way since it was introduced.  However it gives a lot of us functionality that we don’t need (and that will cost us).  Here’s an example: I support an environment where people speak English, Spanish, Urdu, and Hindi.  Between us we probably speak another six languages, but those are the mother tongues in this office.  So when the WSUS configuration screen asks what languages I want to support, it is easy to forget that every operating system in the joint is English…

Imagine you have to download 10GB of patches.  That could quickly become 10GB of patches per language.  The time, the bandwidth, the storage… not to mention that you should be testing them all… it’s just not worth it.  What language are your servers in?  Mine are in English.  My workstations are also English, but we might have to account for a few French workstations – especially in Quebec.  That’s it.  Don’t go overboard, and your bandwidth will thank me!
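
If you would rather script the cleanup than click through the console, the WSUS API exposes the language settings.  A minimal sketch, assuming the UpdateServices PowerShell module on Server 2012 or later and that English is the only language you need:

# Connect to the local WSUS server and fetch its configuration object
$wsus = Get-WsusServer
$config = $wsus.GetConfiguration()
# Synchronize English only, then save the change
$config.AllUpdateLanguagesEnabled = $false
$config.SetEnabledUpdateLanguages(@("en"))
$config.Save()

The next synchronization will only pull English updates, and the Server Cleanup Wizard can then reclaim the space from content you no longer need.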

1-2-3-4-5 BitLocker 9-8-7-6-5

BitLocker Drive Encryption (Photo credit: Wikipedia)

I was sitting in a planning meeting with a client recently in which we were discussing ways of protecting end-user machines, especially laptops that were in and out of the office.  The previous convention relied on BIOS locks that were proprietary to the hardware manufacturer, and required the end user to either enter two passwords or swipe their fingerprint on a sensor.  As the company planned to migrate away from the dedicated hardware provider and toward a CYOD (Choose Your Own Device) type of environment this would no longer be a viable solution.

As the discussion started about what they were planning to use to provide a second layer of protection from unauthorized access to systems, I asked if the company was still intending to use BitLocker to encrypt the hard drives for these machines.  When it was confirmed that they would, I presented the hardware agnostic solution: adding a PIN (Personal Identification Number) to BitLocker.

BitLocker is a disk encryption tool that was introduced with Windows Vista, and has been greatly improved upon since.  It ties in to the TPM (Trusted Platform Module) in your computer (included mostly in Enterprise-class systems) and encrypts the protected hard drives so they cannot simply be pulled from the machine and read elsewhere.  Most people configure it and leave it there… which means that it is ‘married’ to the physical computer with the TPM chip.  However there are a few protections you can add.

Authentication has not changed much in the last few thousand years.  It is usually based on a combination of something you have and something you know.  Beyond that it is just levels of complexity and degrees of encryption.  So our TPM chip is something we have… but assuming the hard drive is in the computer, they go together.  So we need another way of protecting our data.  Smart cards and tokens are great, but they can be stolen or lost… and you have to implement the infrastructure, which comes at a cost (although with AuthAnvil from ScorpionSoft the cost is low and it is relatively easy to do).

Passwords work great… as long as you make them complex enough that they are difficult to hack, ensure people change them often enough to stymie hackers… and don’t write them down, and so on.  However even with all of that, operating system passwords are still going to be reasonably easy to crack – for the knowledgeable and determined.  Hardware-level passwords, on the other hand, are a different beast altogether.  The advent of TPM technology (and its inclusion in most enterprise-grade computer hardware) means that encryption tied to the TPM will be more secure… and adding a PIN to it makes it even more so.  Even though the default setting in Windows is to not allow passwords or PINs on local drives, it is easy enough to enable.

  1. Open the Group Policy Editor (gpedit.msc).
  2. Expand Computer Configuration – Administrative Templates – Windows Components – BitLocker Drive Encryption – Operating System Drives.
  3. Right-click the policy called Require additional authentication at startup and click Edit.
  4. Select the Enabled radio button.
  5. Select the drop-down Configure TPM startup PIN: and click Require startup PIN with TPM.

At this point, when you (or your user) enable BitLocker, you will be prompted to enter a PIN.

NOTE: This policy applies when enabling BitLocker on a drive for the first time.  A drive that is already encrypted will not fall within the scope of this policy.

By the way, while I am demonstrating this on a local computer, it would be the same steps to apply to an Active Directory GPO.  That is what my client will end up doing for their organization, thereby adding an extra layer of security to their mobile devices.
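
By the way, if you prefer the command line, Windows 8 and Server 2012 (and later) include a BitLocker PowerShell module that can do the same job once the policy is in place.  A minimal sketch – the drive letter and the prompt are only placeholders:

# Prompt for the startup PIN as a SecureString
$pin = Read-Host -Prompt "Enter the BitLocker startup PIN" -AsSecureString
# Enable BitLocker on the operating system drive with a TPM + PIN protector
Enable-BitLocker -MountPoint "C:" -TpmAndPinProtector -Pin $pin

On a drive that is already encrypted you would add the protector instead, with Add-BitLockerKeyProtector and the same parameters, so treat this as a sketch for new deployments.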

Windows To Go: Disk Behaviour

BitLocker Drive Encryption (Photo credit: Wikipedia)

Recently I was explaining Windows To Go at a client site.  We had a few interesting discussions about the power as well as the limitations of the security features.

One attendee asked a couple of good questions:

1) Is there any way to block the ‘on-lining’ of your Windows To Go key in other installations of Windows?

2) Is there a way to block users from bringing local disks on-line from within Windows To Go?

While I did not have the answers off the top of my head, after some consideration they are actually quite simple.

1) Windows To Go is the equivalent of any hard drive.  Because the machines that you are meant to use them on will be unmanaged, it is impossible to prevent this.  However Microsoft does provide several different levels of protection:

  • The WTG drive is off-line by default;
  • When building the WTG key you can enable BitLocker; and
  • Although BitLocker on the WTG key cannot be tied to a TPM chip, it will have a password associated with it.

In other words, in order to compromise the key from another installation of Windows, someone would have to bring the WTG key on-line and unlock it with its BitLocker password.  It comes down to whether you trust the person to whom you gave the key… and if you don’t, he probably should not be on your systems in the first place.

The second answer is probably a happier one.  Because Windows to Go is (or can be) a managed environment (including domain membership, Group Policy, and even System Center management) the key can be locked down as you see fit.  How you would do it depends on which of the tools you have at your disposal… but yes, this can be done.
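
As a concrete (and hedged) illustration of what that looks like, the Storage cmdlets in Windows 8 let you see and control the on-line state of the host’s internal disks from within the Windows To Go workspace.  The disk number below is only an example – check the output of Get-Disk before you touch anything:

# List the disks visible to the WTG workspace and their on-line status
Get-Disk | Select-Object Number, FriendlyName, Size, IsOffline, IsBoot
# Force a specific internal disk off-line (never your own boot disk!)
Set-Disk -Number 1 -IsOffline $true

Combine that with Group Policy delivered through the domain (removable storage restrictions, for example) and you have a reasonably tight workspace.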

I hope this helps you to make your environment more secure using Windows To Go!

Managing Hyper-V Virtual Machines with Windows PowerShell

Warning: The following post was written by a scripting luddite.  The author readily admits that he would have difficulty coding his way out of a paper bag, and if the fate of the world depended on his ability to either write code or develop software then you had better start hoarding bottled water and cans of tuna.  Fortunately for everyone, there are heroes to help him!

I love the Graphical User Interface (GUI).  I use it every day in both the Windows client and Windows Server operating systems.  It makes my life easier on a day to day basis.

With that being said, there are several tasks that administrators must do on a regular basis.  There is no simple and reliable way to create repetitive task macros in the GUI.  Hence we can either work harder, or we can learn to use scripting tools like Windows PowerShell.

Along the way I have gotten some help from some friends.  Ed Wilson’s books have provided a wealth of information for me, and Sean Kearney has been my go-to guy when I need help.  There was a time when I was teaching a class and was asked ‘Can PowerShell do that?’  I replied by saying that if I asked Sean Kearney to write a PowerShell script to tie my shoes, I was reasonably sure he could do it because PowerShell can do ANYTHING.  Well one of my students posted that comment on Twitter, and got the following reply from Sean (@EnergizedTech):

Get-Shoe | Invoke-Tie

It makes sense too…because PowerShell works with a very simple Verb-Noun structure, and if you speak English it is easy to learn.

I may be a scripting luddite, but I do know a thing or two about virtualization, and especially Hyper-V.  So it only stands to reason that if I was going to start learning (and even scarier, teaching) PowerShell, I would start with the Hyper-V module.  As a good little Microsoft MVP and Community Leader, it only makes sense that I would take you along for the ride 🙂

Most of what can be done in PowerShell can also be done in the GUI.  If I want to see a list of the virtual machines on my system, I simply open the Hyper-V Manager and there it is.


PowerShell is almost as simple… Type Get-VM.


By the way you can filter it… if you only want virtual machines that start with the letter S, try:

Get-VM S*

One of the advantages of PowerShell is that it allows you to manage remote servers; rather than having to log into them, you can simply run scripts against them.  If you have a server called SWMI-Host1, you can simply type:

Get-VM -ComputerName SWMI-Host1

Starting and stopping virtual machines is simple…

Start-VM Admin

Stop-VM VMM

Again, your wildcards will work here:

Start-VM O*

This command will start all VMs that start with the letter O.

If you want to check how much memory you have assigned to all of your virtual machines (very useful when planning as well as reporting) simply run the command:

Get-VMMemory *


I did mention that you could use this command for reporting… to make it into an HTML report run the following:

Get-VMMemory * | ConvertTo-HTML | Out-File c:\VMs\MemReport.htm

To make it into a comma separated values (CSV) file that can easily be read in Microsoft Office Excel, just change the command slightly:

Get-VMMemory * | ConvertTo-CSV | Out-File c:\VMs\MemReport.csv

The report created is much more detailed than the original screen output, but not so much so as to be unusable.

Making Changes

So far we have looked at VMs, we have started and stopped them… but we haven’t actually made any changes to them. Let’s create a new virtual machine, then make the changes we would make in a real world scenario.

New-VM -Name PSblog -MemoryStartupBytes 1024MB -NewVHDPath c:\VHDs\PSblog.vhdx -NewVHDSizeBytes 40GB -SwitchName CorpNet

With this simple script I created a virtual machine named PSblog with 1024MB of RAM, a new virtual hard disk called PSblog.vhdx that is 40GB in size, and connected it to CorpNet.

Now that will work, but you are stuck with static memory.  Seeing as one of the great features of Hyper-V is Dynamic Memory, let’s use it with the following script:

Set-VMMemory -VMName PSblog -DynamicMemoryEnabled $true -MinimumBytes 512MB -StartupBytes 1024MB -MaximumBytes 2048MB

Now we’ve enabled dynamic memory for this VM, setting the minimum to 512MB, the maximum to 2048MB, and of course the startup RAM to 1024MB.

For the virtual machine we are creating we might need multiple CPUs, and because some of our hosts may be newer and other ones older we should set the compatibility mode on the virtual CPU to make sure we can Live Migrate between all of our Intel-based hosts:

Set-VMProcessor -VMName PSblog -Count 4 -CompatibilityForMigrationEnabled $true

At this point we have created a new virtual machine, configured processor, memory, networking, and storage (the four food groups of virtualization), and are ready to go.
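
To round things out, here is a short sketch – using the same PSblog example – that double-checks the configuration and powers the machine on:

# Confirm the configuration of the new virtual machine
Get-VM -Name PSblog | Select-Object Name, State, ProcessorCount, MemoryStartup
# Power it on and check its state again
Start-VM -Name PSblog
Get-VM -Name PSblog | Select-Object Name, State, CPUUsage, MemoryAssigned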

I will be delving deeper into Hyper-V management with PowerShell over the next few weeks, so stay tuned!

NOTE: While nothing in this article is plagiarized, I do want to thank a number of sources, on whose expertise I have leaned rather heavily.  Brien Posey has a great series of articles on Managing Hyper-V From the Command Line on www.VirtualizationAdmin.com which is definitely worth reading.  He focuses on an add-on set of tools called the Hyper-V Management Library (available from www.Codeplex.com) so many of the scripts he discusses are not available out of the box, but the articles are definitely worth a read.  Rob McShinsky has an article on SearchServerVirtualization (a www.TechTarget.com property) called Making sense of new Hyper-V 2012 PowerShell cmdlets which is great, and links to several scripts for both Server 2008 R2 and Server 2012.  Thanks to both of them for lending me a crutch… you are both worthy of your MVP Awards!

Getting Started With Hyper-V in Server 2012 and Windows 8

You all know by now that I am a huge Hyper-V fan… I have been using it since 2008, but with the latest release I am unabashedly loving Microsoft’s Layer 1 hypervisor.  The fact that it has been included in Windows 8  – as in, no different from the virtualization platform I use in my servers – is just the icing on the cake.

It is true that almost any IT Pro would be able to install and use Hyper-V on either the server or client platform without much guidance.  However when you do start out – either with Hyper-V in general, or on a new system – there are a few things that you should know before you go.  Here are some of my tips, in no particular order of importance.

1) Change the default file locations!

The default file locations for virtual hard disks and virtual machines are a bit obscure.  I like to change them right out of the gate.  Depending on which drive I want to store them on (in Windows 8 it is usually the C drive, while on proper servers it will usually not be) I will store them both right off the root… x:\VHDs and x:\VMs.  That way I do not have to navigate to the defaults whenever I want to copy a file.  I find x:\VHDs much easier than c:\Users\Public\Documents\Hyper-V\Virtual Hard Disks and c:\ProgramData\Microsoft\Windows\Hyper-V.

If I am going to use Failover Clustering with Cluster Shared Volumes the defaults will be different, but for standalone servers these defaults suit me fine.

STEPS:

  1. In Hyper-V Manager click on Hyper-V Settings… in the Actions Pane.
  2. Under the Server context, click on Virtual Hard Disks and change the default location.  You will have to create the directory before going ahead.
  3. Still under the Server context, click on Virtual Machines and change the default location.  Again, you will have to create the directory first.

It’s as easy as that.  Of course, VMs that are already there will not be moved, but going forward all VMs will be placed in the proper directory.
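
If you prefer to script it (handy when you are standing up several hosts), the Hyper-V PowerShell module exposes the same settings.  A minimal sketch, with the D: paths as placeholders:

# Create the root-level folders and point Hyper-V's defaults at them
New-Item -ItemType Directory -Path D:\VHDs, D:\VMs -Force | Out-Null
Set-VMHost -VirtualHardDiskPath "D:\VHDs" -VirtualMachinePath "D:\VMs"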

2) Create your Virtual Switch!

When you start creating virtual machines there will be nowhere for them to go and nobody for them to talk to… that is, unless you create a virtual switch (previously called virtual network) to connect them to.  Depending on your server and your environment this might be simple or complex, and may require more planning.  However the long and the short of it is you have to make three decisions when creating a virtual switch:

  • Is the network going to be External (can communicate beyond the physical host), Internal (can only communicate with other VMs on the same host, plus with the host), or Private (can only communicate with other VMs on the same host)?
  • If External, what physical NIC (uplink) will it be bound to?
  • Can the Management OS (on the host) use the same NIC?

STEPS:

  1. In the Action Pane of Hyper-V Manager click Virtual Switch Manager…
  2. In the navigation pane click New virtual switch
  3. In the right screen select External, Internal, or Private and click Create Virtual Switch
  4. In the Virtual Switch Properties window delete New Virtual Switch and name the switch something that you will understand (e.g.: CorpNet).
  5. Click OK to close the Virtual Switch Manager.

Again, this is all there is to it.  Plain and easy, no fuss, no muss.
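
The same switch can also be created in PowerShell.  A hedged one-liner, assuming your physical NIC is named ‘Ethernet’ (check Get-NetAdapter first):

# Create an External switch bound to the physical NIC, shared with the management OS
New-VMSwitch -Name "CorpNet" -NetAdapterName "Ethernet" -AllowManagementOS $true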

3) Configure Dynamic Memory

When you create a virtual machine there are a few defaults that Hyper-V thinks are a good idea… which I don’t.  The main one that comes to mind is the Dynamic Memory option (per VM).  When you configure Dynamic Memory the defaults are going to be:

Startup RAM: 512 MB

Minimum RAM: 512 MB

Maximum RAM: 1048576 MB

Ok, for a lot of our virtual machines 512MB may be a fine minimum… but unless you are driving a BAWL (Big @$$ Work Load) server on the VM you will nearly never need a terabyte of RAM.  Granted, it is nice that we have that ability, but it is not going to be the norm.  On the other hand, not setting a realistic maximum means that if your VM were to place a huge memory demand – say, because of an unchecked memory leak or a compromised server, or even something as simple as an Exchange Server grabbing as much physical (ahem… virtual) RAM as it can – that memory would be taken at the expense of the other virtual machines on the same host, which would no longer have it available.

My recommended best practice is to pick minimums and maximums that are reasonable to you for each server (and those will be different from VM to VM, depending on the load expectations).  You will be able to tweak these up or down as needed, but the point is you will have reasonable limits.  For many of my servers I set limits such as these:

Startup RAM: 512 MB

Minimum RAM: 512 MB

Maximum RAM: 4096 MB

These settings allow the VM to consume up to 4 GB of RAM when needed and available, but no more than that.  If I discover the VM workload needs more then I will tweak it up incrementally.  I am not letting resources go to waste, and I am making sure that my VMs work within their means – i.e.: as efficiently as they can.

STEPS:

  1. Within Hyper-V Manager click on the VM in question and then in the Action Pane (VM Name) click Settings…
  2. In the Navigation Pane click Memory
  3. In the Memory window change the Minimum and Maximum RAM as needed.
  4. Click OK.

4) …and Hard Disks!

By default the virtual hard disks that are created for us in the New Virtual Machine Wizard will be 127 GB.  But do they really need to be that big?  Actually, in a lot of cases they do.  In many cases they should be bigger.  Sometimes they should be smaller.  If you are creating your disks this way then you should right-size them in the wizard.

With that being said, the one question that the wizard does not ask you is ‘what type of disk would you like to create?’  In Server 2012 there are actually three questions that you should be asked, and they are only asked when you create your disks using the New Virtual Hard Disk Wizard:

a) Would you like to create a VHD file (with backward compatibility, and limited to 2 TB in size) or a VHDX file (which adds resilience to consistency issues that might occur from power failures, and is limited to 64 TB in size, but offers no backward compatibility)?

b) Would you like the disk to be Fixed size (pre-provisioned storage), Dynamically expanding (storage on demand), or a Differencing disk (which has a parent-child relationship with another disk)?

c) Would you like to create the new VHD(X) file based on the contents of an attached physical drive?

The solution is to pre-create your VHD(X) files to spec, and then point to them from the New Virtual Machine Wizard.  While dynamically expanding disks are fine for labs and offer greater portability, I never recommend them in a production environment.  Also if you think you might need to port your VMs back to Server 2008 (or Windows 7) then VHD will be required.

STEPS:

    1. From the Hyper-V Manager console in the Actions Pane click New > Hard Disk…
    2. Go through the wizard and select the options you want.
    3. From the New Virtual Machine Wizard click the radio button Use an existing virtual hard disk and point to the right file.
    4. Click Finish.


Alternately, you could select the radio button Attach a virtual hard disk later and create your VMs, then create your VHD(X) files, and then attach them.  It seems like more work to me though…
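
Pre-creating the disk is also a natural fit for PowerShell.  A minimal sketch – the path, size, and VM name are only examples – that builds a fixed-size VHDX and attaches it to an existing virtual machine:

# Create a 40 GB fixed-size VHDX file
New-VHD -Path "C:\VHDs\AppServer1.vhdx" -SizeBytes 40GB -Fixed
# Attach it to an existing VM (the name here is hypothetical)
Add-VMHardDiskDrive -VMName "AppServer1" -Path "C:\VHDs\AppServer1.vhdx"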

5) …and CPUs!

There are a few more settings in the Processor tab of Hyper-V VMs than there used to be.  Not only can you set the number of virtual processors (to the lesser of either a maximum of 64, or the number of physical cores in your computer), but you can also set the VM reserve, the percent of total system resources, the VM limit, and the relative weight.  These are all set in the main screen of the Processor Settings page.

What a lot of people do not realize is that there are two subsections to the Processor tab: Compatibility and NUMA.  In order to access these you need to expand the + next to Processor in the navigation pane.

NUMA stands for Non-Uniform Memory Access, and essentially means that a single VM can use memory that is assigned to different physical CPUs.

Compatibility in this context refers to CPU families, and is a very handy option indeed.  In virtualization there is no way to live-migrate a VM from a host running on AMD CPUs to a host running on Intel CPUs.  This is not a limitation of Hyper-V, but rather of the architecture of the CPUs, and it is identical in VMware.  However CPU family is not the only limiting factor for live migrations; CPU features are a factor too, and because of advancements in the technology it would generally not be possible to live migrate a VM between hosts with older and newer CPUs of the same family.

A few years ago VMware saw this as an issue, and along with Intel developed a technology called EVC, or Enhanced vMotion Compatibility.  What EVC does is mask the newer CPU features (generally newer instruction set extensions and the like) from the VM, so that all of a sudden you can migrate between older and newer hosts (say, an Intel i7 to an Intel Core Duo).  In VMware this is assigned at the Cluster level.

Of course the technology is simple enough, but the intellectual property is not.  EVC has the word vMotion (a trademark) in the title.  Microsoft cannot use the term vMotion.  As such their compatibility mechanism (which works the same way) is called Migrate to a physical computer with a different processor version (MTPCWDPV).  The name is not nearly as sexy as EVC, but they compensated by assigning it to the VM instead of the cluster.  It is a simple checkbox that you check (or uncheck) under the Compatibility Configuration screen.

If you are going to be using Live Migration between hosts with potentially incompatible CPU then follow these steps:

STEPS:

      1. Within Hyper-V Manager click on the VM in question and then in the Action Pane (VM Name) click Settings…
      2. In the Navigation Pane click Processor, then click the + next to Processor to expand the tree.
      3. Click on Compatibility
      4. Click the checkbox Migrate to a physical computer with a different processor version.
      5. Click OK.

Following these simple best practices will not make you an expert in Hyper-V by any means, but it is a good start… what they will allow you to do is get started comfortably and play with the technology without hitting some of the more common stumbling blocks that beginners seem to run into.  As your needs grow you will be comfortable enough with the technology to try new things and explore new possibilities.  Before long you will be a virtualization expert, ready to tackle concepts such as Shared Nothing Live Migration, Failover Clustering, Cluster Shared Volumes, and much much more.

In the meantime dip your toes into the virtualization waters… it’s warm and inviting, the hazards are not too dangerous, and the rewards are incredible.  In no time you will be ready to get certified… but even if that is not your goal you have already taken the first steps to becoming a virtual wiz!

Refresh Your PC – Save your bacon!

Thursday morning I did something to my main laptop that I really should not have done, and the results were disastrous.  I succeeded in completely wrecking my installation of Windows 8.  I was able to boot into the OS, but as soon as I tried to launch any application my system went into an endless flash-loop, and was completely unusable.

I want to be clear that Windows 8 is a very solid and stable platform – it is built on the foundation of Windows 7, which most people agree was the most stable OS that Microsoft had ever released.  Unfortunately when you start to play under the hood (where the vast majority of users would never be) things can go wrong… and indeed that is what happened to my system.

Normally under these circumstances I would simply reformat the laptop, or at the very least re-install Windows on the existing partition (so as to not wipe my data).  However because my system is protected with BitLocker I would have had to extract the BitLocker Recovery Key, which I have on file… somewhere.

Because my laptop has a Microsoft corporate image on it I could have gone to the IT Help Desk at the office and had them work it out with me… but it was Thursday, I wasn’t going to be in my office until Monday, and I had several presentations to do over the course of the week-end… not to mention blog articles, e-mail, and whatever else I might have had to do.

Since I was able to boot into Windows 8 I decided to try to Refresh my PC.  This is a new feature of the OS, found under Settings – Change PC Settings – General, that refreshes the PC without affecting any files.  Essentially it reinstalls the OS in place, which repairs anything that I would have messed up – and I know just how badly I messed it up.  However it retains my data and settings for all users – including domain membership, files, desktop… everything.

Refresh is BitLocker-aware, and warned me before starting that it would temporarily disable my BitLocker protection and then re-enable it when the process was complete.

It took about 15 minutes.  Refresh rebooted the PC a couple of times, fixed everything that was wrong, and when I booted back into Windows it prompted me to log on as b-mitchg – my alias in the Microsoft Active Directory.  My password worked, and so did my PC.  The desktop was exactly as I had left it – a little cluttered, although not as bad as it would have been on Windows 7.

Refresh restores all of your Windows 8 apps that were installed from the Windows Store; any applications that you installed ‘the legacy way’ will have to be re-installed.  However that was a small price to pay: most of my apps (with the exception of Microsoft Office 2013) are from the Store, so I didn’t lose much.

My settings were all correct, my documents were in their place, and my SkyDrive connection was intact.  Everything was as it was before the refresh… except it all worked!

Of course there is a ‘one step further’ – Remove everything and re-install Windows.  This will not preserve any of your files, settings, or even your account.  Imagine you are selling your PC, giving it to your kids, or whatever.  You don’t have to do anything but click through to the Settings – Change PC Settings – General tab and click the option to Remove Everything.  You don’t have to go looking for your Windows media, it just takes care of everything for you.

Between these two options I can imagine that technicians will spend a lot less time trying to clean malware out of their PCs… the Refresh option is much quicker and just as effective.

I know it saved my bacon last week… it saved me from something far more dangerous than malware… it saved me from myself!

Client-Side Hyper-V: How Microsoft is changing the game

I have been a virtualization guy for a long time, so when Microsoft released Hyper-V 2.0 with Windows Server 2008 R2 I was among the first to ask why they weren’t including it in the client OS.  In my opinion it was a no-brainer.

With the launch of Windows 8 with the client-side Hyper-V, they made a Layer 1 hypervisor available to the masses.  True, there have been free Layer 1 hypervisors for years (Hyper-V Server, ESXi and others), but they required another machine to manage them, and those machines had to be properly networked.  There are people out there who do not have multiple systems to play with.  When it comes to doing demos outside of your office environment not only would you need two systems, but they would both have to be portable.  For most of us, this was unmanageable.

Of course, Windows 7 did have Virtual PC, and even Windows XP Mode.  These were great solutions for what they were, but Virtual PC never supported 64-bit guests, which meant that in order to run an x64 OS (such as Windows Server 2008 R2) you needed a third-party virtualization platform.  It also meant that, as an MCT, if you wanted to run the Microsoft Official Curriculum courses on your system you needed to be running Windows Server 2008 as the base OS.

Alas, in Windows 8, Windows XP Mode is no more; however that doesn’t mean that if you need to run Windows XP you cannot simply build a Hyper-V machine running that OS.  The same is true for Windows 7, which I run in a VM for two distinct reasons: so that I can answer questions for the vast majority of people who are still running that platform, and because Windows 8 no longer supports desktop gadgets.  (If this second reason sounds a bit peculiar, then you should know my secret: I use the Windows XP End of Support countdown gadget to keep you all informed as to the number of days left until #EndOfDaysXP 🙂)

In my professional capacity I have needed Hyper-V on my laptop for several years; I have used one of three methods of achieving this need: Dual Boot, Boot from VHD, and occasionally Native Boot.  All of this because I also needed the Windows client on my laptop.  Now, however, I can run my virtual machines (32-bit or 64-bit) from Hyper-V in Windows 8, and I don’t have to decide how I am going to boot my laptop each time I start up.

In addition to installing Hyper-V in the Native Boot Windows 8, you can also install it in a Boot from VHD environment, as well as on a Windows To Go (WTG) key.  However on those you should be even more aware of where you are storing your VMs, because storage space will be more scarce.

In addition to the native hypervisor, you might also want to install the Hyper-V Management Tools (either GUI or PowerShell, or both) on your client.  By doing this you can now manage remote Hyper-V servers from your desktop (in the same way that you could in Windows 7 by installing the Remote Server Administration Tools).

To install these features, simply open the Windows Features screen, and select the desired features (Hyper-V Platform, Hyper-V Management Tools).

  1. From the Windows 8 Start Screen type Features.  Ensure that Search is in the Settings context.
  2. Click Turn Windows Features On or Off.
  3. The Windows Features window will appear.  Scroll to the Hyper-V entry.
  4. Expand Hyper-V, and select the desired features.

Just as is needed in Server, Windows will install Hyper-V, and then will need to reboot twice (See the article Layer 1 or Layer 2 Hypervisor? A common misconception and a brief explanation of the Parent Partition).
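
If you prefer the command line, the same features can be switched on from an elevated PowerShell prompt.  A minimal sketch (the reboot requirement is the same as with the GUI):

# Enable the Hyper-V platform and its management tools on Windows 8
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All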

Once the reboots are complete, you will be able to create and start virtual machines, just as you would in Windows Server.  You can import and export them, pause, save, and snapshot them… just like you would in Windows Server!

Now it is important to remember that the same hardware requirements for Hyper-V apply to the client.  Your CPU needs to support hardware virtualization, and it must be enabled in the BIOS.  For that reason I don’t expect that MacBook users will be taking advantage of this option.  You also need to have Second Level Address Translation (SLAT).  However if you bought your PC within the last five or six years (and it doesn’t have an ATOM processor) then I expect you will be fine.

By the way, while I was writing this article I was made aware of a similar one in Windows IT Pro Magazine.  Check out Orin Thomas’ article on the  Hyperbole, Embellishment, and Systems Administration Blog called Windows 8’s “Killer Feature” for Microsoft Certified Trainers.

Good luck, and may the virtual force be with you!

Windows to Go: Better (and easier!) in the RTM!

A couple of months ago I posted an article on Windows To Go (Windows To Go: This is going to be a game changer!) outlining the benefits and use cases for Windows to Go, as well as the steps to build your WTG key.  In the RTM release of Windows 8 it has gotten easier to build… no command line required!  Here’s what you do:

  1. From the Start screen type Windows to Go.  Make sure the context is set to Settings.
  2. Click on Windows to Go.
  3. Insert the USB 3.0 key that you will use for Windows to Go.  It should appear in the Create a Windows To Go workspace screen.  Select it and click Next.
  4. On the next screen you are asked to point to a Windows 8 image.  If you are using an ISO image rather than physical media make sure you mount it in Windows, and then navigate to the proper location.  Click Next.
  5. On the next screen you are asked if you want to set a BitLocker password.  Because it is assumed you will be using the Windows To Go key on multiple computers it uses the same password technology as BitLocker To Go, rather than tying it to a TPM chip.  You can either check the option to Use BitLocker with my Windows To Go workspace, or click Skip.


The next screen is the Ready to create your Windows To Go workspace screen.  When you click Create, Windows will start building your key.  Depending on the speed of your key and your USB ports (USB 3.0 is highly recommended, but not necessarily available) it can take between five and twenty minutes.  Be patient; when the progress bar is complete, you will have your very own Windows To Go key ready to go!

It really is easy… and when you are done you will be able to take all of your applications, data, and preferences with you to any computer you use… even older Windows 7 (or even Windows XP!) systems!

Remember that I mentioned that one of the advantages to using Windows To Go is the ability to use unsecured computers safely.  For that reason, when you boot into your Windows To Go key the local hard drives will be off-line.  Likewise, if you insert your Windows To Go key into a computer running another installation of Windows, your USB key will be off-line.

I said it before and I’ll say it again; Windows To Go is a real game changer.  It is one of my favourite features of Windows 8, and one that I expect will have a lot of corporations looking at the new operating system, especially for road warriors, remote workers, and other employees who need to work away from the office.

By the way, remember that you may still need to install hardware drivers for different computer systems, the way you do on traditional Windows installations.  If you are planning on using the WTG key on multiple systems you might need to plan for that.  Recently I did a demonstration of the Windows To Go technology at HP Canada, and had to download the driver for their 42” touch screen.  It was worth it though… Windows 8 on a huge touch screen ROCKS!

When your Windows To Go key is completed you will be prompted to either save and reboot, or reboot later.  If you are building an individual key then you may want to reboot in order to install device drivers.


For Bonus Points: Using the Microsoft Deployment Toolkit you can build your own image of Windows 8 which will include your applications, drivers, and domain settings.  If you are building Windows To Go keys for your organization this might be a better alternative!

Wrong Product Key? Oh No!

Here’s a tip if you need to change your product key in Windows 8.  It also works in earlier versions of Windows, but those versions have easier ways of doing it.  In Windows 8 you need to use the Windows Software Licensing Management Tool (slmgr.vbs).

  1. Open a Command Prompt with elevated privileges (from the Windows 8 Start Screen, type cmd, right-click the result, and click Run as administrator along the bottom).
  2. Type slmgr -ipk <your product key>

The slmgr tool is a .vbs script that you can use to install product keys, activate Windows, or display licensing and activation information.  The -ipk switch installs a new product key, either in the absence of an existing one or to replace it.

Assuming you are connected to the Internet, the Activate Windows message will disappear immediately, and you will be able to use features and settings in Windows that are blocked until you activate.
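
If activation does not happen on its own, the same script can force it and show you where you stand.  From the same elevated prompt,

slmgr.vbs -ato

forces an immediate on-line activation, and

slmgr.vbs -dli

displays the basic licence and activation information so you can confirm the new key took.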

Of course, the best way to not need this trick is to add the proper product key at installation 🙂

Making Your Windows 8 ISO work for You

English: A Sandisk-brand USB thumb drive, SanDisk Cruzer Micro, 4GB. (Photo credit: Wikipedia)

Tomorrow is the day that a huge number of you will be downloading and installing the final bits (RTM) of Windows 8.  You now have an ISO image of Windows, and you need to install it onto your computer.  In order to do that you have to put it onto media – DVD or in many cases USB sticks.

DVDs are easy… since the introduction of the technology people have been burning .iso files to CDs and DVDs, thanks to such tools as Alex Feinman’s ISO Recorder.  All you need is a DVD burner and a blank DVD.  In fact in Windows 8 if you click on an ISO file in Windows Explorer there is an option to either mount or burn the image file.


A lot of PCs these days – including but not limited to ultrabooks, tablets, and minis – do not have DVD players built in, and so USB keys (sticks, thumb drives) have become the preferred method of installation for many.  All you have to do is make it bootable and you are off to the races.  There are several ways to do that.

Because it has been available to us for so long, the method I use is tried and true – I use the Disk Partition Tool (diskpart.exe) in Windows.  Because DiskPart is so destructive it is a good idea to unplug any unneeded drives before proceeding, and then continuing with extreme caution.

  1. Open a Command Prompt (from the Start Menu type cmd.exe).
  2. From the command prompt type diskpart.exe and press Enter.  If you are using Windows Vista or later a UAC window will come up.  Click on Yes (or OK).
  3. In the DiskPart tool type list disk to see a list of connected devices.  In this example I have three disks connected: Disk 0 (238 GB) is my internal hard disk.  Disk 1 (14 GB) is an SD card that I plugged in to transfer pictures from my camera.  Disk 2 (3841 MB) is the 4GB USB key that I am using for my bootable Windows 8 key.
  4. Type select disk X (where X is the number assigned to your USB key)
  5. Type clean.  This will wipe everything off the disk, so be careful that you have selected the appropriate drive, and be sure there’s nothing important on it.
  6. Type create partition primary.  This creates a primary partition on the key.
  7. Type assign.  Your blank partition now has a drive letter assigned to it.  You can check in Windows Explorer to see what letter it is.  The volume name will be NEW VOLUME.
  8. Format the disk.  The easiest way is to click on the NEW VOLUME in Windows Explorer and select the options for Quick Format.  You can also, should you wish, name the volume from the Format Disk window by entering the name (15 characters or less) in the Volume Label field.
  9. Type active.  This marks the partition as active.
  10. Type exit to close the DiskPart tool.
  11. Type exit.  This will close down your command prompt.

Our USB key is now bootable.  All that is left is to copy the contents of the ISO file (and not the ISO file itself) onto the key.  Use any tool that you like (such as Windows Explorer in Windows 8, or Magic ISO Maker in Windows 7) to mount the ISO file, and do a simple file copy from that drive to your thumb drive.  Depending on the speed of your disks it might take as long as 20 minutes on USB 2.0, but if you have USB 3.0 then it’s much quicker.
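
On Windows 8 you can also do the mount-and-copy step from PowerShell.  A minimal sketch – the ISO path and the key’s drive letter (E: here) are placeholders:

# Mount the ISO and find the drive letter Windows assigned to it
$image = Mount-DiskImage -ImagePath "C:\ISOs\Windows8.iso" -PassThru
$letter = ($image | Get-Volume).DriveLetter
# Copy everything from the mounted image to the USB key (E: is just an example)
Copy-Item -Path "$($letter):\*" -Destination "E:\" -Recurse
# Clean up by dismounting the image
Dismount-DiskImage -ImagePath "C:\ISOs\Windows8.iso"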

The next step is easy but often overlooked… when you boot your system there are two things you have to do:

  • Make sure the bootable USB key is plugged into the system before it POSTs.  If not it may not be considered a boot option.
  • If the USB device is not set first in the boot order (in your BIOS) then you have to select it manually.  Different PC makers make you press different keys (HP is F9, I think Dell is F12… check your PC to be sure) to show you the Boot Device menu.  Most tablets will boot from USB first by default.

At this point you are ready to go… install Windows 8, and start playing.  It’s that simple.

Welcome to the world of 8… the luckiest number in Chinese, and the newest evolution on the desktop and tablet!