Tag Archives: Windows
It is Friday morning in Tokyo, and there is a line out the door. If you didn't know any better, you would think they were lined up to get an autograph from the latest pop icon.
However, if you look at the sign on the door, it does not say 'Tokyo Arena' or 'Tokyo Hilton.' It says IT Service Desk, and the throngs lined up are users, each with a laptop in hand. They all seem to be having similar problems: either they cannot log in at all, or Outlook crashes when they receive HTML-based e-mail.
If I were a Help Desk Technician, I might be thinking right now that this was a bad day to get out of bed. If I were an IT Director, I would be (figuratively) screaming for answers, needing my team to find the root cause… Is it malware? Are we under attack? Was there just some massive incompetence that killed our systems?
It wouldn't be long before I discovered the answer. Are we under attack? No. Is it malware? No… at least, not in the most commonly accepted sense of the term. What we were facing was a Microsoft patch causing our myriad issues. Patch KB3097877, part of the November 10 patch rollout cycle, is to blame.
With that knowledge, as an IT Director, I would be setting forth the following plan:
- Train the Support Counter techs to resolve the issue (as found in this article from Microsoft);
- Ensure the patch was immediately removed from WSUS; and
- Once the ‘crisis’ was over, I would bring the interested parties into a room and do a post-mortem… that is, figure out what went wrong, and how to prevent it from happening in the future.
The second point is easy: once you know which patch it is, all you have to do is have a WSUS admin mark it as DECLINED. The first point is stressful for the support techs, but they are well trained and will handle it.
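Declining the patch doesn't even require clicking through the console; on Server 2012 and later the UpdateServices module can do it. A minimal sketch, run in an elevated session on the WSUS server itself (the KB number is the one from our story):

```powershell
# Find the offending update among the approved ones and decline it,
# so clients stop downloading and installing it.
Get-WsusServer |
    Get-WsusUpdate -Approval Approved |
    Where-Object { $_.Update.Title -like '*KB3097877*' } |
    Deny-WsusUpdate
```

Clients that already installed the patch still need the hands-on fix from the Support Counter, but this stops the bleeding.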
It is during the third point – the post-mortem – that I would be looking at my team and wanting them all to simultaneously burst into flames. Because someone – one of these people whom I trust with my infrastructure, and therefore with the entire company's ability to work – would have to look at me and say 'We accept and push out all patches immediately, without testing them.'
If I am an extremely diligent IT Director I will know that in our IT Department Policy and Procedures Statement there is a policy about applying patches, and likely it says that patches should be applied only after proper testing. If we are a less stringent company the policy might read that patches should be applied only after a reasonable delay has passed, and the appropriate forums and blogs on the Internet have claimed they were okay.
If there is no such policy, then the blame lies with me. I can glare at the others; I can even yell, if I am a bad leader. However, the buck stops here.
If, however, there is such a policy, I would be looking at the team and asking why it hadn't been followed. I imagine they would look at me quizzically, and someone would say 'This is just what we do… it's never caused problems before!'
I might look at the admin who said that and ask if he wears a seat belt when he drives a car. I might ask if he wears a life vest when he goes boating. Chances are if you don’t, nothing will happen. You wear them to be safe and increase your chances of survival if something does happen. It is the reason we test patches (or let others test them) before we apply them.
The admins' neglecting to test patches might cost hundreds of thousands of dollars in lost productivity… and yet it is almost certain that nobody will lose their job. They probably won't even get a reprimand. None of that is necessary. What is necessary is that we learn from this. Patches do not break things very often, but we have to remember that they can, and because of that we must take the proper steps – do our due diligence – to make sure we don't get hit.
If you run Windows Server this is very important. Microsoft today released a number of out-of-band security updates for Microsoft Windows. From what I have read, these patches (one of my servers has had 14 applicable updates since 3am) will be applied to Windows clients as well as Windows Servers, but the vulnerability they protect against exists only in Windows Server. I have a bit more information, but because it is the middle of a busy work day I cannot go into it… if you are a server admin, I strongly recommend you take some time to look at these patches, test them, and apply them ASAP… the two-week deadline setting in WSUS is probably not good enough for these ones 😉
Microsoft is not a company that does anything out-of-band without good reason… if it has gone to the trouble of releasing these patches, I suspect they are protecting against something pretty serious, so make sure you look into them – you can be certain that the hackers are!
I was having a conversation this week with a colleague about his plans to create a hybrid-cloud environment by moving many of his datacenter workloads onto Windows Azure. After all, it makes plenty of sense – eliminating new capital expenses and reducing ongoing operational expenses.
“And once we have tested it, we plan to roll out a thousand pooled VDI clients running on Windows Azure. It is great!”
No, I’m afraid it is not. Unfortunately, while there is no technological reason why you couldn’t do this, there is a legal reason. There is no license for the Windows Client (not even Enterprise Edition) that you can deploy in someone else’s datacenter. In order to legally deploy VDI you must own the physical hardware on which it is installed.
By the way, let me be clear: this is not only an Azure thing, and it is not only a Remote Desktop Services issue. The same licensing limitation applies to Citrix XenDesktop and VMware Horizon. It is true of Azure, Amazon Web Services, Rackspace, and Joe's Datacenter Rental. If you do not own the hardware you can install Windows Server… but not Windows 8.1 (or 8, or 7, or XP for that matter).
I had this conversation with the VP of Sales for a major Microsoft partner in Ontario recently, and I was so flabbergasted that I went back and looked it up. Sure enough, he was right. So when I spoke with my colleague the other day I was able to save him a lot of time, effort, money, and frustration. Unfortunately I forgot to turn on the meter, so he got the advice for free. Oh well, I'm sure he'll remember me around the holidays 🙂
Consultants, I want you to remember this lesson: Your customers may not always like the news you have to tell them… but you do have to tell them. Of course, this is one of those places where good communication skills will help you out – don't just say 'Wow, you are scroo-ooed!' Tell them what they need to hear, and offer alternative solutions to accomplish what they are trying to do.
Here’s a great way to waste time, network bandwidth, and storage space: download excess patches that you do not need. For bonus points, download languages you don’t support.
Windows Server Update Services (WSUS) is a great solution that has come a long way since it was introduced. However, it gives a lot of us functionality that we don't need (and that will cost us). Here's an example: I support an environment where people speak English, Spanish, Urdu, and Hindi. Between us we probably speak another six languages, but those are the mother tongues in this office. So when the WSUS configuration screen asks which languages I want to support, it is easy to forget that every operating system in the joint is English…
Imagine you have to download 10GB of patches. That could quickly become 10GB of patches per language. The time, the effort – not to mention that you should be testing them all… it's just not worth it. What language are your servers in? Mine are in English. My workstations are also English, but we might have to account for a few French workstations – especially in Quebec. That's it. Don't go overboard, and your bandwidth will thank me!
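If you would rather fix the language list from a script than from the console wizard, the WSUS administration API exposes it. A hedged sketch (run on the WSUS server; 'en' and 'fr' are the illustrative choices from my example above – adjust to your own site):

```powershell
# Limit WSUS synchronization to English and French updates only.
[void][Reflection.Assembly]::LoadWithPartialName('Microsoft.UpdateServices.Administration')
$wsus   = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer()
$config = $wsus.GetConfiguration()

$langs = New-Object System.Collections.Specialized.StringCollection
[void]$langs.Add('en')
[void]$langs.Add('fr')

$config.AllUpdateLanguagesEnabled = $false   # stop downloading every language
$config.SetEnabledUpdateLanguages($langs)
$config.Save()
```

Note that this only affects future synchronizations; content already downloaded stays on disk until you run the Server Cleanup Wizard.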
I was sitting in a planning meeting with a client recently in which we were discussing ways of protecting end-user machines, especially laptops that were in and out of the office. The previous convention relied on BIOS locks that were proprietary to the hardware manufacturer, and required the end user to either enter two passwords or swipe their fingerprint on a sensor. As the company planned to migrate away from the dedicated hardware provider and toward a CYOD (Choose Your Own Device) type of environment this would no longer be a viable solution.
As the discussion started about what they were planning to use to provide a second layer of protection from unauthorized access to systems, I asked if the company was still intending to use BitLocker to encrypt the hard drives for these machines. When it was confirmed that they would, I presented the hardware agnostic solution: adding a PIN (Personal Identification Number) to BitLocker.
BitLocker is a disk encryption tool that was introduced with Windows Vista, and has been greatly improved upon since. It ties in to the TPM (Trusted Platform Module) in your computer (included mostly in Enterprise-class systems) and prevents a protected hard drive's contents from being read if the drive is removed from the machine. Most people configure it and leave it there… which means that it is 'married' to the physical computer with the TPM chip. However, there are a few protections you can add.
Authentication has not changed much in the last few thousand years. It is usually based on a combination of something you have and something you know; beyond that it is just levels of complexity and degrees of encryption. So our TPM chip is something we have… but assuming the hard drive is in the computer, they go together. We need another way of protecting our data. Smart cards and tokens are great, but they can be stolen or lost… and you have to implement the infrastructure, at a cost (although with AuthAnvil from Scorpion Software the cost is low and it is relatively easy to do).
Passwords work great… as long as you make them complex enough that they are difficult to hack, ensure people change them often enough to stymie hackers, don't let anyone write them down, and so on. However, even with all of that, operating system passwords are still reasonably easy to crack for the knowledgeable and determined. Hardware-level passwords, on the other hand, are a different beast altogether. The advent of TPM technology (and its inclusion in most enterprise-grade computer hardware) means that encryption tied to the TPM will be more secure… and adding a PIN makes it even more so. Even though the default setting in Windows is to not allow passwords or PINs on operating system drives, it is easy enough to enable.
1. Open the Group Policy Editor (gpedit.msc).
2. Expand Computer Configuration – Administrative Templates – Windows Components – BitLocker Drive Encryption – Operating System Drives.
3. Right-click the policy called Require additional authentication at startup and click Edit.
4. Select the Enabled radio button.
5. From the drop-down labelled Configure TPM startup PIN:, select Require startup PIN with TPM.
At this point, when you enable BitLocker, you (or your user) will be prompted to create a startup PIN.
NOTE: This policy applies when enabling drives for the first time. A drive that is already encrypted will not fall within the scope of this policy.
By the way, while I am demonstrating this on a local computer, it would be the same steps to apply to an Active Directory GPO. That is what my client will end up doing for their organization, thereby adding an extra layer of security to their mobile devices.
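For what it's worth, once the policy allows it, the protector itself can also be added from PowerShell on Windows 8 / Server 2012 and later. A minimal sketch (the drive letter is illustrative; the machine must have a TPM and BitLocker enabled):

```powershell
# Prompt for a startup PIN and add a TPM+PIN protector to the OS drive.
# Assumes the 'Require additional authentication at startup' policy
# configured above already permits a startup PIN.
$pin = Read-Host -AsSecureString -Prompt 'Choose a startup PIN'
Add-BitLockerKeyProtector -MountPoint 'C:' -TpmAndPinProtector -Pin $pin
```

This is handy when you want to script the rollout rather than walk each user through the BitLocker wizard.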
Warning: The following post was written by a scripting luddite. The author readily admits that he would have difficulty coding his way out of a paper bag, and if the fate of the world depended on his ability to either write code or develop software then you had better start hoarding bottled water and cans of tuna. Fortunately for everyone, there are heroes to help him!
I love the Graphical User Interface (GUI). I use it every day in both the Windows client and Windows Server operating systems. It makes my life easier on a day to day basis.
With that said, there are several tasks that administrators must perform on a regular basis, and there is no simple and reliable way to create repetitive task macros in the GUI. Hence we can either work harder, or we can learn to use scripting tools like Windows PowerShell.
Along the way I have gotten some help from some friends. Ed Wilson’s books have provided a wealth of information for me, and Sean Kearney has been my go-to guy when I need help. There was a time when I was teaching a class and was asked ‘Can PowerShell do that?’ I replied by saying that if I asked Sean Kearney to write a PowerShell script to tie my shoes, I was reasonably sure he could do it because PowerShell can do ANYTHING. Well one of my students posted that comment on Twitter, and got the following reply from Sean (@EnergizedTech):
Get-Shoe | Invoke-Tie
It makes sense too…because PowerShell works with a very simple Verb-Noun structure, and if you speak English it is easy to learn.
I may be a scripting luddite, but I do know a thing or two about virtualization, and especially Hyper-V. So it only stands to reason that if I was going to start learning (and even scarier, teaching) PowerShell, I would start with the Hyper-V module. As a good little Microsoft MVP and Community Leader, it only makes sense that I would take you along for the ride 🙂
Most of what can be done in PowerShell can also be done in the GUI. If I want to see a list of the virtual machines on my system, I simply open the Hyper-V Manager and there it is.
PowerShell is almost as simple… type Get-VM.
By the way, you can filter it… if you only want virtual machines whose names start with the letter S, try:
Get-VM -Name S*
One of the advantages of PowerShell is that it allows you to manage remote servers; rather than having to log into them, you can simply run scripts against them. If you have a server called SWMI-Host1, you can simply type:
Get-VM -ComputerName SWMI-Host1
Starting and stopping virtual machines is simple…
Start-VM -Name <VM name>
Stop-VM -Name <VM name>
Again, your wildcards will work here:
Start-VM -Name O*
This command will start all VMs whose names start with the letter O.
If you want to check how much memory you have assigned to all of your virtual machines (very useful when planning as well as reporting), simply run the command:
Get-VMMemory *
I did mention that you could use this command for reporting… to turn it into an HTML or CSV report, run one of the following:
Get-VMMemory * | ConvertTo-HTML | Out-File c:\VMs\MemReport.htm
Get-VMMemory * | ConvertTo-CSV | Out-File c:\VMs\MemReport.csv
The report created is much more detailed than the original screen output, but not so much so as to be unusable. See:
So far we have looked at VMs, we have started and stopped them… but we haven’t actually made any changes to them. Let’s create a new virtual machine, then make the changes we would make in a real world scenario.
New-VM -Name PSblog -MemoryStartupBytes 1024MB -NewVHDPath c:\VHDs\PSblog.vhdx -NewVHDSizeBytes 40GB -SwitchName CorpNet
With this simple script I created a virtual machine named PSblog with 1024MB of RAM, a new virtual hard disk called PSblog.vhdx that is 40GB in size, and connected it to CorpNet.
Now that will work, but you are stuck with static memory. Seeing as one of the great features of Hyper-V is Dynamic Memory, let’s use it with the following script:
Set-VMMemory -VMName PSblog -DynamicMemoryEnabled $true -MinimumBytes 512MB -StartupBytes 1024MB -MaximumBytes 2048MB
Now we’ve enabled dynamic memory for this VM, setting the minimum to 512MB, the maximum to 2048MB, and of course the startup RAM to 1024MB.
For the virtual machine we are creating we might need multiple CPUs, and because some of our hosts may be newer and others older, we should set the compatibility mode on the virtual CPU to make sure we can Live Migrate between all of our Intel-based hosts:
Set-VMProcessor -VMName PSblog -Count 4 -CompatibilityForMigrationEnabled $true
At this point we have created a new virtual machine, configured processor, memory, networking, and storage (the four food groups of virtualization), and are ready to go.
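For reference, the steps above can be strung together into one short script (same illustrative names, sizes, and switch as above; it assumes the Hyper-V module and a virtual switch called CorpNet already exist on the host):

```powershell
# Create and configure the PSblog VM end to end, then start it.
New-VM -Name PSblog -MemoryStartupBytes 1024MB `
       -NewVHDPath c:\VHDs\PSblog.vhdx -NewVHDSizeBytes 40GB `
       -SwitchName CorpNet

Set-VMMemory -VMName PSblog -DynamicMemoryEnabled $true `
             -MinimumBytes 512MB -StartupBytes 1024MB -MaximumBytes 2048MB

Set-VMProcessor -VMName PSblog -Count 4 -CompatibilityForMigrationEnabled $true

Start-VM -Name PSblog
```

Run it once and you have a repeatable build for the next hundred VMs… which is exactly the point of learning PowerShell in the first place.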
I will be delving deeper into Hyper-V management with PowerShell over the next few weeks, so stay tuned!
NOTE: While nothing in this article is plagiarized, I do want to thank a number of sources on whose expertise I have leaned rather heavily. Brien Posey has a great series of articles on Managing Hyper-V From the Command Line on www.VirtualizationAdmin.com. He focuses on an add-on set of tools called the Hyper-V Management Library (available from www.Codeplex.com), so many of the scripts he discusses are not available out of the box, but the articles are definitely worth a read. Rob McShinsky has an article on SearchServerVirtualization (a www.TechTarget.com property) called Making sense of new Hyper-V 2012 PowerShell cmdlets which is great, and links to several scripts for both Server 2008 R2 and Server 2012. Thanks to both of them for lending me a crutch… you are both worthy of your MVP Awards!