Delegating Control in Active Directory

I have been saying for years that a good IT department in a secure, well-managed infrastructure will give their end users the tools they need to do their job… and nothing more.

If that is true for end users, shouldn’t it also be true for the IT department themselves?  It is frustrating to see the number of shops I go into where there are fifteen or twenty members of the Domain Admins group, and for the silliest reasons.

By using the Delegation of Control Wizard, you can assign very granular permissions to regular user accounts to perform several common tasks.  In Windows Server 2016 these include:

  • Create, delete, and manage user accounts
  • Reset user passwords and force password change at next logon
  • Read all user information
  • Modify the membership of a group
  • Join a computer to the domain
  • Manage Group Policy links
  • Generate Resultant Set of Policy (Planning)
  • Generate Resultant Set of Policy (Logging)
  • Create, delete, and manage inetOrgPerson accounts
  • Reset inetOrgPerson passwords and force password change at next logon
  • Read all inetOrgPerson  information

These permissions can be set either at the domain level or at the Organizational Unit (OU) level (except Join a computer to the domain, which must be set at the domain level).  Here is how to do it:

  1. Open Active Directory Users and Computers (ADUC)
  2. Right-click on the domain (or OU) where you want to assign the permission
  3. Click Delegate Control…
  4. On the Welcome to… window click Next
  5. On the Users or Groups window click Add…, select the security group (or individual user) that you want to delegate to, and click OK.  Then click Next
  6. On the Tasks to Delegate window select the tasks from the list, and then click Next
  7. On the Completing the Delegation of Control Wizard window click Finish.

Remember, if you have multiple sites across slow links this might take a while to propagate, but you are done.  That’s it!
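If you ever need to script a common delegation rather than clicking through the wizard, the built-in dsacls.exe tool can grant the same kind of granular rights.  Here is a minimal sketch (the OU, domain, and group names are placeholders) that lets a help desk group reset passwords on user accounts in an OU… the wizard is really just writing the same sort of access control entries for you:

dsacls "OU=Staff,DC=domain,DC=com" /I:S /G "DOMAIN\HelpDesk:CA;Reset Password;user"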

I hope this helps.  Really, it has not changed much in fifteen years, but sometimes it is important to refresh knowledge, especially for the newer generations of IT Admins!


What is in a Name?

Recently a client asked me to build a series of virtual machines for them for a project we were working on.  No problem… I asked what they should be named, and the client told me to call them whatever sounded right.

That did not sound right… or at least, it turned out to not be right.  Indeed, the client had an approved server naming convention, and when the manager saw my virtual machines named VM1, VM2, VM3, and so on… he asked me to change them.

If we were talking about a single server, I would have logged in and done it through Server Manager.  But there were fifteen machines in play, so I opted to use Windows PowerShell from my desktop.

Rename-Computer -ComputerName "VM1.domain.com" -NewName "ClientName" -DomainCredential domain\Mitch -Restart

The cmdlet is pretty simple, and allowed me to knock off all fifteen servers in three minutes.  All I needed was the real names… and of course my domain credentials.
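If you have more than a handful to do, a quick loop makes short work of it.  Here is a minimal sketch; the old and new names are placeholders for whatever your client's naming convention dictates:

$renames = @{ 'VM1.domain.com' = 'OAK-APP-01'; 'VM2.domain.com' = 'OAK-APP-02' }
$cred = Get-Credential domain\Mitch
foreach ($old in $renames.Keys) {
    # Rename each machine and restart it so the new name takes effect
    Rename-Computer -ComputerName $old -NewName $renames[$old] -DomainCredential $cred -Restart
}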

The cmdlet works just as well with the -LocalCredential switch… in case the machine isn’t domain joined.


That’s it… have fun!

Offline Files: Groan!

You’ve configured Folder Redirection in Group Policy, and it works as expected… as long as you are connected to the network.  As soon as you disconnect, things stop working.  That may be a mere inconvenience if you are redirecting your Photos, but if you have redirected your Desktop folder to a network share, there is a good chance that your computer will be rendered unusable… that is, until you reconnect to your local network.

We came across this issue recently at a client’s site, and we spent a few aggravating hours trying to get things working, to no avail.  Remember, this is something that I have been doing since the days of Windows 2000, and the procedures have not changed significantly in that time.  I was baffled… until I realized that we were working with a File Server Failover Cluster, and that our servers were Windows Server 2016.

There is an option on clustered Windows Server 2016 file shares called Enable continuous availability.  If this option is checked (as it is by default), then even if you have done everything right… even if your Offline Files are properly configured… you can click on a file in that properly configured folder, and in the Details pane it will be listed as Available: Online-only.

How do we fix that?  Simple… uncheck the box.


  1. In Server Manager, expand File and Storage Services, and then click on Shares.
  2. In your list of shares, right-click on the one where you are redirecting your files and click Properties.
  3. In the Settings tab, clear the checkbox next to Enable continuous availability.
  4. Click OK.

Incidentally, the file share will only be listed under the cluster node that currently owns the role.  You don’t need to do anything at the cluster level, but if you prefer to work in Failover Cluster Manager, you can perform the following steps to achieve the same result:


  1. Connect to the relevant failover cluster.
  2. Navigate to Roles
  3. Click on your File Server Role in the main screen.
  4. In the Details pane below, select the Shares tab.
  5. Right-click the relevant share, and click Properties.
  6. In the Settings tab, clear the checkbox next to Enable continuous availability.
  7. Click OK.

The Properties window will be identical to the one that you saw under Server Manager.
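If you would rather flip the setting in PowerShell, the SMB cmdlets will do it in one line.  This is a sketch only (the share and scope names are placeholders), and you should run it on the node that currently owns the file server role:

Set-SmbShare -Name "RedirectedFolders" -ScopeName "FS-CLUSTER" -ContinuouslyAvailable $false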

You shouldn’t have to refresh your group policy on the client, but you may want to log off and log on to force the initial synchronization.

That’s it… Good luck!

KB4103723: DO NOT APPLY!


Hey folks, if you know what is good for you, do not apply this patch yet.  KB4103723 protects against a CredSSP vulnerability (CVE-2018-0886) that, as far as we know, has not yet been exploited in the wild.  However, it will break a lot of things in your environment, including RDP and Hyper-V connections: you will get CredSSP errors when trying to connect via RDP (or Hyper-V Manager, or Failover Cluster Manager, or SCVMM) to machines that have not been patched to match.

Remote Computer: This could be due to CredSSP encryption oracle remediation.
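If you are already caught in the mismatch (one side patched, the other not), there is a documented temporary workaround: setting the AllowEncryptionOracle policy value to 2 (Vulnerable) on the patched client relaxes the check until you can bring everything up to date.  A quick sketch… keep in mind that this deliberately lowers your protection, so remove it once both sides are patched:

New-Item -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\CredSSP\Parameters' -Force | Out-Null
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\CredSSP\Parameters' -Name 'AllowEncryptionOracle' -Value 2 -Type DWord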

Good luck!

Automated Virtual Machine Activation

Let’s face it… Microsoft wants you to use Microsoft, so when it can, it creates technologies that make it easier for you to do so.  Automatic Virtual Machine Activation (AVMA) is one of those tools.

I remember when Microsoft got into the server virtualization game, it really had very little with which to compete against VMware, other than price.  That has certainly changed, and while Hyper-V is not completely where ESXi is, it is damned close… and there are some benefits, such as AVMA.

What is it?  Simple.  If your virtualization host is running Hyper-V, then your guest VMs do not need to activate to Microsoft… or even to a KMS Server for that matter.  They activate directly to the host.  That means that rather than having to keep track of (or worse, share) your Product Keys, you can simply share the AVMA keys.  The rest is done through the Data Exchange Integration Service in the Hyper-V stack.

The downside?  You have to have an (activated) Windows Server Datacenter Edition as your host.  In other words, it will not work with Hyper-V Server.  That is not a huge downside, but it is significant.

The keys are available for free on-line, and the activation is done against your host.  So use the following keys:

Windows Server 2016

  • Datacenter: TMJ3Y-NTRTM-FJYXT-T22BY-CWG3J
  • Standard: C3RCX-M6NRP-6CXC9-TW2F2-4RHYD
  • Essentials: B4YNW-62DX9-W8V6M-82649-MHBKQ

Windows Server 2012 R2

  • Datacenter: Y4TGP-NPTV9-HTC2H-7MGQ3-DV4TW
  • Standard: DBGBW-NPF86-BJVTX-K3WKJ-MTB6V
  • Essentials: K2XGM-NMBT3-2R6Q8-WF2FK-P36R2

(Notice that this works only for Server 2012 R2 and later; the feature was only introduced in that version.)

One thing you need to make sure of in the guest VM settings: Data Exchange must be enabled under Integration Services.

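If you are scripting your VM builds, the same setting can be checked and enabled from the host with the Hyper-V PowerShell module.  A minimal sketch (the VM name is a placeholder; note that PowerShell refers to Data Exchange as the Key-Value Pair Exchange service):

Get-VMIntegrationService -VMName "SQL01" | Select-Object Name, Enabled
Enable-VMIntegrationService -VMName "SQL01" -Name "Key-Value Pair Exchange"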

…So now, you can include the AVMA key in your VM templates, and you will be all set.  But if you didn’t do that, try the following command:

slmgr.exe /ipk C3RCX-M6NRP-6CXC9-TW2F2-4RHYD

That will add the product key to your VM, and all that is left to do is activate it using the following:

slmgr.exe /ato
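To confirm that the activation took, slmgr can also report the guest’s current license state:

slmgr.exe /xpr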

That’s it… Have fun!

 

Windows Server 2016: A pet peeve

Over the next few weeks, as I do my first production infrastructure implementation based on Windows Server 2016 and System Center 2016, I am sure this list will grow longer.  In the meantime, I have uncovered my first pet peeve in the new version.

Don’t get me wrong, overall I like Server 2016… but to find out that it is no longer possible to install Windows Server with a GUI (Graphical User Interface) and then later to uninstall the GUI (see article for Windows Server 2012) is fairly annoying.

Throughout the launch of Windows Server 2012 I was with the Evangelism Team at Microsoft Canada and I traveled the country – first for the launch events, and then evangelizing and teaching that platform.  I spent a lot of time talking about Server Core because of the benefits for security, as well as for the reduced resource requirements (which, in a virtualized infrastructure, can be staggering).

Of course, Server Core looks a lot like where we started out… if you were a server administrator back in the 1980s and most of the 1990s, you were using command line tools to do your job.  However, that was a long time ago, and the vast majority of admins today were not admins back then.  So I was able to offer a compromise… install Windows Server with the GUI, and when you were done doing whatever it was you needed the GUI for (or thought you did), you could uninstall it… or at the very least, switch to MinShell.

I showed up at my client site this week and was handed a series of brand new servers on which to work.  They all had the GUI installed.  So I went to work, and typed in that familiar PowerShell cmdlet to remove the GUI.  I was greeted by that too-familiar red text which meant I had done something wrong.  I will spare you the boring details, and after several minutes of research I discovered that Microsoft had removed the ability to remove the GUI in Windows Server 2016.

I understand that the product team has to make difficult decisions when developing the server, but this is one that I wish they had not made.  Confirmation comes directly from the product group in this article, in which they write:

Unlike some previous releases of Windows Server, you cannot convert between Server Core and Server with Desktop Experience after installation. If you install Server Core and later decide to use Server with Desktop Experience, you should do a fresh installation.

I wish it weren’t so, but it is.  Once you install the GUI you are now stuck with it… likewise, if you opted for Server Core when you first installed, you are committed as well.

Sigh.

Scheduling Server Restarts

If you manage servers you have likely come to a point where you finished doing work and got a prompt ‘Your server needs to reboot.  Reboot now?’  Well you can’t reboot now… not during business hours.  I guess you’ll have to come back tonight… or this weekend, right?

Wrong.  Scheduling a reboot is actually pretty easy in Windows.  Try this:

  1. Open Task Scheduler (taskschd.msc).
  2. In the Actions pane click Create Basic Task…
  3. Name the task accordingly… Reboot System, for example.
  4. On the Task Trigger page select the One Time radio button.
  5. On the One Time page enter the date and time when you want the server to reboot.
  6. On the Action page select Start a program.
  7. On the Start a Program page enter shutdown.exe as the program.  In the Add arguments box enter /f /r /t 0.  This will force programs to close, restart the server (instead of just shutting it down), and set the delay to 0 seconds.
  8. Once you have done this your server will reboot at the precise time you want it to, and will come back up.

**NOTE: Don’t forget to check.  It is not unheard of in this world for servers to go down and not come back up as they are supposed to!

Do it in PowerShell!

Using PowerShell to script this allows you to not only save the script for later, but also run it against remote servers.  From Justin Rich’s blog article I found this script:

$server = "OAK-APP-01"    # placeholder – the remote server you want to restart

Register-ScheduledJob -Name systemReboot -ScriptBlock {

    param($server)

    # Restart the target server, wait for it to come back, then send a confirmation e-mail
    Restart-Computer -ComputerName $server -Force -Wait

    Send-MailMessage -From mitch@email.com -To mitch@email.com -Subject "Rebooted" -SmtpServer smtp.mail.com

} -ArgumentList $server -Trigger (New-JobTrigger -At "04/14/2017 8:45pm" -Once) -ScheduledJobOption (New-ScheduledJobOption -RunElevated) -Credential (Get-Credential)

 

Have fun!

Remotely Enable RDP

Like most IT Managers I manage myriad servers, most of which are both remote and virtual.  So when I configure them initially I make sure that I can manage them remotely… including in most cases the ability to connect via RDP (Remote Desktop).

But what happens if you have a server that you need to connect to, but does not have RDP enabled?  Using PowerShell it is rather simple to enable the RDP feature remotely:

Enter-PSSession -ComputerName computername.domain.com -Credential domain\username
Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Terminal Server' -Name "fDenyTSConnections" -Value 0
Enable-NetFirewallRule -DisplayGroup "Remote Desktop"
Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -Name "UserAuthentication" -Value 1
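If you would rather fire everything off in one shot instead of stepping into an interactive session, the same commands can be wrapped in Invoke-Command:

Invoke-Command -ComputerName computername.domain.com -Credential domain\username -ScriptBlock {
    Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Terminal Server' -Name "fDenyTSConnections" -Value 0
    Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -Name "UserAuthentication" -Value 1
    Enable-NetFirewallRule -DisplayGroup "Remote Desktop"
}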

That should get you going.  Good luck!

Since When…?

Those of us who have been in the IT industry for a while remember the heady days of never having to reboot a server… otherwise known as ‘The days before Windows Server.’  Those days are long gone, and even non-Windows servers need to be patched and restarted.

But how do you know when it last happened?  If you have a proper management and monitoring infrastructure then you can simply pull up a report… but many smaller companies do not have that, and even in larger environments you may want to check a server’s uptime without going through the entire rigmarole of pulling up your reports.  So here it is:

  1. Open a Command Prompt
  2. Type in net statistics server

There will be a line that says Statistics since m/dd/yyyy… That is when your server last rebooted.

If you want to shorten it, you can also just type Net Stats SRV.  It provides the same results.


Incidentally, while the command specifically states Server, it works for workstations too.
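If you happen to be in PowerShell already, the same information is available directly from CIM:

(Get-CimInstance Win32_OperatingSystem).LastBootUpTime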

…And now you know.

Server Core on VMware

When I was a Virtual Technical Evangelist for Microsoft Canada I spent a lot of time telling you why you should use Server Core… especially if you were on Hyper-V.  Why?  You save resources.

It is now over two years since I turned in my Purple Badge, and I still think Server Core rocks.  In fact, when Windows Server 2016 comes out I will probably spend a lot of time telling you about the new Nano Server option that they are including in that version.  More on that to come.

Of course, I still like Hyper-V, but as an independent consultant I recognize (as I did quietly when I was with the Big Blue Machine) that the vast majority of the world is still running VMware for their enterprise-level server virtualization needs.  That does not change my opinion of Server Core… it still rocks, even on VMware.

Of course, in order to get the full benefits of the virtualized environment, a VMware machine requires the installation of the VMware Tools (as Hyper-V requires the installation of Integration Services).  With a Server with a GUI that is easy to do… but since Server Core is missing many of the hooks of the GUI, it has to be done from the command line.  Here’s how:

1. As you would with any other server, click Install VMware Tools


2. Connect to and log on to the virtual machine.  You will have to do this with Administrator credentials.

3. Navigate to the mounted ISO (if you only have a single hard drive attached it will usually be D:).

4. Type in the following command: setup64.exe /S /v “/qn reboot=Y”


Once you have done this, the VMware tools will install, and your server will reboot.  Nothing to it!
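One small tip: on Server Core it is not always obvious which drive letter the Tools ISO landed on.  A quick way to check from PowerShell:

Get-Volume | Where-Object DriveType -eq 'CD-ROM' | Select-Object DriveLetter, FileSystemLabel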

SQL Server: How to tame the beast!

One of the benefits of virtualization is that you can segregate your SQL Servers from your other workloads.  Why?  Because if you don’t, Microsoft SQL Server will hoard every last bit of resources on your machine, leaving scant crumbs for other workloads.


Seriously… when you start Microsoft SQL Server you will immediately see your memory usage jump through… or more accurately, to the roof.  That is because SQL Server is actually designed to take up all of your system’s memory.  Actually, that is not entirely true… out of the box, Microsoft SQL Server is configured to take up to 2TB of RAM, which in all likelihood is a lot more memory than your computer actually has.

So assuming you have been listening to me for all of these years, you are not installing anything else on the same computer as your SQL Server.  You are also making sure that the virtual machine that your SQL Server is installed on (remember I told you to make sure to virtualize all of your workloads?) has its memory capped (Hyper-V sets the default Maximum RAM to 64GB).  You are doing everything right… so why is SQL performing slowly?

It’s simple really… your computer does not have 2TB of RAM to give SQL Server… and even if it did, the operating system (remember Windows?) still needs some of it.  So the fact that SQL wants more than it can have makes it a little… grumpy.  Imagine a cranky child throwing a tantrum because he can’t have dessert.

Fortunately there is an easy fix to this one (unlike the cranky child).  What we are going to do is limit the amount of RAM that SQL actually thinks it wants… and when it has everything that it wants, it will stop misbehaving.

1) Determine how much RAM the server on which SQL Server is installed has.

2) Open Microsoft SQL Server Management Studio with administrative credentials.

3) Connect to the database engine (if you have multiple SQL Server instances on the same server, see the note below)

4) In the navigation pane right-click on the actual SQL Server (the topmost item) and click Properties

5) In the Server Properties page navigate to Memory

6) Figure out what 90% of your server’s RAM is, in megabytes.  For example:

1 GB:  1024 MB × 0.90 = 921.6 MB

8 GB:  8 × 1024 MB = 8192 MB;  8192 MB × 0.90 ≈ 7373 MB

7) In the Maximum server memory (in MB) field type that number, then click OK.

That’s it!
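If you would rather script the change (or push it to several servers), the same setting can be applied with sp_configure through Invoke-Sqlcmd.  This is a sketch only, assuming the SqlServer PowerShell module is installed; the instance name and memory value are placeholders:

Invoke-Sqlcmd -ServerInstance "OAK-SQL-01" -Query @"
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 7373; RECONFIGURE;
"@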

**Note: The math we are using here allocates 90% of the total RAM to SQL Server.  In the event that you have multiple SQL Server instances (database engines) running on the same box, you will have to do a bit of calculating to determine how much each instance should use… and that can be a challenge.

If you only have the one database engine on your box, you should immediately notice marked improvements.  This breathing room does not mean that it is now time to pour more workloads onto the server… only that your SQL Server should be running a little better!

UNC Path Nightmare

Anyone who has taken a basic networking course will understand that UNC (Universal Naming Convention) paths are one of the common ways we in IT access file shares across our local networks.  They usually look like this: \\oak-mgt-01\Sharename.  Of course, you can see all of the shares on a particular server by entering just the server name (\\oak-mgt-01).  Once upon a time Windows Explorer would show you that path in the address bar, but in this era of simplifying everything (i.e. dumbing it down) it makes things prettier by showing > Network > oak-mgt-01 > Sharename.  This changes nothing; it is the same under the hood.

Users are not the only ones who use these UNC paths.  In fact, it is our servers and applications that use them far more frequently than we do, because under the hood that is what they use to connect to any network resource.

But what happens when UNC paths stop working?

A client called me recently to tell me that none of their UNC paths were working, and because of it their production applications were down.  I checked, and sure enough a particular server could access the Internet just fine, and it could ping every internal resource it wanted, but when you tried to navigate to any UNC path, all you got was a very unfriendly and generic error.


Not only was it not working, it was not even giving me a descriptive error code.  I started down a troubleshooting rabbit hole that would haunt me for hours before I found the solution.

The first thing that we confirmed is that while we were pretty sure that this was a networking issue, it was contained within the specific server.  How did we determine this?  We discovered that we got the same result when we tried to navigate to \\localhost.  Localhost is that trusty loopback adapter that exists in every network device, and is the equivalent of \\127.0.0.1… which of course we tried as well.  Because we know that Localhost lives within the server, we knew that it was not an external issue.

Before we went out to the Internet for other ideas, we tried all of the obvious things.  We changed the NIC, we verified DNS, WINS, and even NetBIOS over TCP/IP.  We reset the TCP/IP stack (netsh int ip reset c:\resetlog.txt).  Nothing doing.

We went out to the Internet and followed the advice of several people who had been in our spot before.  We uninstalled and then reinstalled several networking components.  We deleted phantom devices, we ran network reset tools.  No joy.

When I came across Rick Strahl’s Web Log I thought I was close… he had experienced exactly the same symptoms, and in his article UNC Drive Mapping Failures: Network name cannot be found I was hopeful that when I re-ordered the Network Provider Order in the registry (HKLM\SYSTEM\CurrentControlSet\Control\NetworkProvider\Order) I would be home free.  Unfortunately I was wrong… but I was in the right place.

When Rick’s solution didn’t work, I was disheartened.  I was about to close it out and look at the next ‘solution’ on-line.  My instinct however told me to look again… to look closer.


There was a typo in that value… but you have to really know what you are looking at to see it.  In fact, even if you really know what you are looking at, it is easy enough to miss.  The entry LanmanWorkstation was right there, clear as day, right?

Nobody would blame you for not noticing that there is an S at the end of the string… because S is so common at the end of words – it just makes them plural, right?  Well, computers – and especially the Windows Registry – do not know English grammar, they know binary… and the difference between LanmanWorkstation and LanmanWorkstations is the difference between working… and not working.
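Incidentally, if you ever want to eyeball this value without opening the Registry Editor, PowerShell can read it for you (the entry should read LanmanWorkstation exactly, with no trailing S):

Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\NetworkProvider\Order' | Select-Object -ExpandProperty ProviderOrder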

When I made the change it was instant – no reboot was required, the server application started working immediately.  A big sigh of relief permeated the office.

The server in question is one that several people were working on when it stopped working, and nobody is entirely sure how it happened… was it human error, or did a rogue process cause it?  We will look in our logs and figure that out later.  For the moment though, our UNC paths are back, and my client is back at work.

OEM Servers: Myths vs. Realities

In a recent conversation I realized that there are still a lot of misconceptions about OEM (Original Equipment Manufacturer) operating system rights with regard to Windows Server. While I am not here to say who is right and who is wrong (whether one should or should not buy OEM operating systems), I still think it is important to understand the facts.

Myth #1: OEM licensing is limited, and cannot be upgraded.

An OEM license is indeed tied to the physical hardware for which it was purchased. This is a distinct disadvantage to purchasing Volume Licenses (VLs). However when you buy an OEM operating system you have thirty (30) days to add Software Assurance to it. Any license with Software Assurance attached to it can be upgraded as new versions are released. However there is one important bit to understand… when decommissioning that server, the SA can be detached from the license and attached to another… but the OS itself cannot.


Myth #2: Virtualization rights are unclear on OEM licenses.

I hear this from people all the time, and although I have tried to explain it to many of them, sometimes I simply have to shrug my shoulders and walk away from it. There is nothing murky or unclear about virtualization licensing. Whether your host (hypervisor) is an OEM license, VL license, or FPP (Full Package Product) license, your virtualization rights are the same, and they depend not on how you bought the license, but what tier you bought (Standard vs. Datacenter).

The OEM license is applied to the host, and must be tied to that host. However the guest VMs (2 on Standard, unlimited on Datacenter) do not have any restrictions. Like any guest VM on any other license, they can be migrated to any other host, as long as the destination host has allowance – so if the destination host is Windows Server Standard Edition, it cannot host a third guest VM, but if the destination host is Windows Server Datacenter Edition, the only limitation is based on the available resources (CPUs, RAM, storage).


Myth #3: There are things you can do with OEM Editions that you cannot do with VL Editions.

While this is a less common claim, it is still out there. I am told (and I have not really looked into this) that with Windows Server OEM versions (let’s take the HP ROK as an example) you can modify the image to show your logo during the boot-up process. While this is true, I have two points to make:

1) If you know what you are doing you can customize the boot process of any Windows Server installation, regardless of the edition or version.

2) Folks, it’s a server… how often are you rebooting it? Most of my servers (especially virtualization hosts) don’t reboot for months at a time. When they do get rebooted, it either happens at night (when I have scheduled patches) or remotely, when I am not sitting and watching the POST process. I can’t imagine there are too many customers who sit and watch their servers either…


Myth #4: When a reseller consultant sells OEM licenses there is more room for profit.

I am usually very saddened to hear this, but that is mostly because I am not the sort of consultant who makes a lot of money off products; I would rather make my money off my time, and that is what I do. I don’t like hearing that there are resellers who buy a cheaper (and less versatile) option but resell it for the same price as the full version. Aside from the previous point also applying, I am always certain that my customer would find out and call me on it, and I would lose their trust. It is just not worth it to me. That doesn’t mean it isn’t a legitimate issue for some.


Conclusion

There is nothing wrong with OEM licenses, and they are certainly less expensive than other ways of purchasing the operating system. They are just as versatile as non-OEM licenses, but not especially more versatile. If you replace (rather than upgrade or add to) your servers often, then they are likely not a good option for you, especially since they don’t add value to the physical server if you resell it. However, if you keep your servers for more than a couple of years (as most companies do), then the cost savings might make it worthwhile, and if you do the cost-benefit comparison, you might just come out ahead… and that’s CONFIRMED!


Server Core: Every bit the Full Server as the GUI is!

Microsoft introduced Server Core with Windows Server 2008, which means that it was the same kernel as Windows Vista.  Now, nobody is going to stand up and sing the praises of Vista, but Server 2008 was a very solid OS.

You may (or may not) remember that there was a campaign around Vista called ‘The WOW starts NOW!’ Catchy, huh?  Well, because Server Core was literally taking the ‘bling’ out of Windows Server, there was an internal joke at the time that ‘The Wow STOPS Now.’

While Server Core never looked very exciting for end users, for IT Admins, and especially those who were building virtualized environments, Server Core was a godsend. Let’s go through this one more time to demonstrate why:

  • The Windows Graphical User Interface (GUI), which is the difference between Server Core and not, takes resources.  How much?  Well, depending on the server it might be as much as 3-4GB on the hard drive and as much as 350MB of RAM.  Neither of these is a big deal in a world where servers have 128GB of RAM and terabytes of storage on them, right?  Well on a virtualization host that may have on average 100 virtual machines running simultaneously, that translates to 400GB of storage and a ridiculous 35GB of RAM… Ouch.
  • Every component that is installed in Windows has to be patched from time to time.  The fewer components you have installed, the less patching that has to be done.
  • The more you have installed in Windows, the larger your attack surface.  By removing components, you can minimize this, making your computer more secure.

In Windows Server 2008 here’s what we saw when we initiated the installation… a menu with all three editions (Standard, Enterprise, Datacenter) as a Full Installation, and the same three editions as a Server Core Installation.

I have been singing the praises of Server Core for as long as it has been available, but often to deaf ears.  I always assumed this was because most IT Admins liked the GUI.  Recently I was in a presentation given by Jeffrey Snover, who gave me another perspective on it… the terminology in Server 2008 was part of it.  You see, people look at the options ‘Full Server’ versus ‘Server Core’ and they immediately think ‘power & performance.’ A Full Server must do more than a server core server… why?  It is FULL!

Of course, in Server 2008 it didn’t help that Server Core actually was a hobbled version of Server… there were a few roles that worked on it, but not too many.

As with so many Microsoft products, that got better in 2008 R2, and even better in Server 2012 and 2012 R2.  Today you would be amazed at what can run on Server Core… in fact, nearly everything that you do on a server can run on Server Core.  So there is little wonder that Microsoft made a change to the terms…

No longer is it a question of FULL versus CORE… now our options are Server Core Installation and Server with a GUI.

There are two differences to notice in this screen shot… the first is that there are only four options because Microsoft eliminated the Enterprise SKU.  The second is that the default option (a la next – next – next installations) is Server Core.  While some admins might say ‘Yeah I wasn’t paying attention so I ended up with Server Core and had to reinstall,’ the reality is that most of us, once we understand the benefits and the manageability options, will want to install Server Core instead of the GUI servers.

Of course, there are admins who will still be afraid of the command line… but because most of the ongoing administration of our servers (the things we usually do with MMC consoles) can be done remotely, Server Core, or at the very least MinShell, will make our lives easier.  MinShell removes most of the GUI, but leaves the MMC consoles.

But what if I wanted to use the GUI to configure the system, and then get rid of it completely?  We can definitely do that.  One method of doing it is to use the Server Manager’s Remove Roles and Features option.  (The GUI is a feature, and is listed under User Interfaces and Infrastructure – Server Graphical Shell)  This will uninstall the components and save the RAM… but it will not free up your hard disk space.  To do that, use the following PowerShell cmdlet:

Uninstall-WindowsFeature -Name Server-Gui-Mgmt-Infra,Server-Gui-Shell -ComputerName <Name> -Remove -Restart

The -ComputerName option allows you to do this to remote computers, and the -Remove option actually removes the bits from the hard drive.
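If you want to confirm the state of those components before or after removing them, Get-WindowsFeature will show whether each one is Installed, Available, or Removed:

Get-WindowsFeature -Name Server-Gui-Mgmt-Infra,Server-Gui-Shell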

What can you do with Server Core? I won’t say everything… but nearly so.  It is no longer just your Hyper-V hosts… it is your domain controllers, SQL Servers, Web Servers, and so much more.  As long as you are able to learn a little bit of PowerShell… and how to enable remote management on your servers.

Now go forward and save your resources!

Let’s Spread the Action Around… With NLB! (Part 1)

**AUTHOR’S NOTE: I have written hundreds of articles on this blog over the past decade.  Until recently I spent a lot of time taking screen shots of GUI consoles for my how-to articles.  For the time being, as I try to force myself into the habit, I will be using Windows PowerShell as much as possible, and thus will not be taking screen shots, but instead giving you the cmdlets that I use.  I hope this helps you as much as it is helping me! –MDG

I have written at length about Failover Clusters for Active-Passive services.  Let’s move away from that for a moment to discuss Network Load Balancing (NLB) – the tool that we can use to create Active-Active clusters for web sites (and other static-information services).

While NLB does, after a fashion, cluster services, it is not a failover service… and is in fact a completely different service.  For my use case, it is usually installed on a server running IIS.  Start by installing it:

PS C:\> Install-WindowsFeature NLB -ComputerName Server1

Of course, having a single server NLB cluster is like juggling one ball… not very impressive at all.  So we are going to perform the same function for at least a couple of hosts…

PS C:\> 'Server1','Server2','Server3' | ForEach-Object { Install-WindowsFeature NLB -ComputerName $_ }

By the way, notice that I am referring to the servers as hosts, and not nodes.  Even the terminology is different from Failover Clusters.  This is going to get confusing at a certain point, because some of the PowerShell cmdlets and switches will refer to nodes.

Now that the feature is installed on all of our servers, we are almost ready to create our NLB Cluster.  Before we do, we have to determine the following:

  • Ethernet Adapter name
  • Static IP Address to be assigned to the Cluster

You are on your own for the IP address… it is up to you to pick one and to make sure it doesn’t conflict with another server or DHCP Server.

However with regard to the Ethernet Adapter name, there’s a cmdlet for that:

PS C:\> Invoke-Command -ComputerName Server1 -ScriptBlock {Get-NlbClusterNodeNetworkInterface}

Notice that I am only doing this, for the time being, against one server.  That is because I am going to create the cluster on a single server, then add my hosts to it afterward.

So now that we have the information we need, let’s go ahead and create an NLB Cluster named WebCluster, on Server1, with the Interface named Ethernet 2, and with an IP Address of 172.16.10.199:

PS C:\> New-NlbCluster -HostName Server1 -InterfaceName "Ethernet 2" -ClusterName WebCluster -ClusterPrimaryIP 172.16.10.199 -OperationMode Multicast

It will only take a minute, and you will get a response table listing the name, IP Address, Subnet Mask, and Mode of your cluster.

Now that we’ve done that, we can add another host to the NLB Cluster.  We’ll start by checking the NIC name on the second server, then we will add that server to the NLB Cluster:

PS C:\> Invoke-Command -ComputerName Server2 -ScriptBlock {Get-NlbClusterNodeNetworkInterface}

PS C:\> Get-NlbCluster -HostName Server1 | Add-NlbClusterNode -NewNodeName Server2 -NewNodeInterface "Ethernet"

Notice that in the first part of the command we are retrieving the NLB cluster by its host name, not by its cluster name.

This part may take a few minutes… Don’t worry, it will work.  When it is done you will get a response table listing the name, State, and Interface name of the second host.

You can repeat this across as many hosts as you like… For the sake of this series, I will stick to two.
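At any point you can list the hosts in the cluster and check their state with a quick query:

PS C:\> Get-NlbClusterNode -HostName Server1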

In the next article of the series, we will figure out how to publish our web sites to the NLB Cluster.