As a Virtual Partner Technology Advisor I have a really cool job… I go to Microsoft Partners across Canada and demonstrate not simply the virtualization component of Hyper-V, but the entire environment that the partner could leverage to architect a virtualization solution for their customers.
When we developed the program last year we had several discussions around what hardware we should use to deliver the sessions. Theoretically we wanted server-grade hardware but we couldn’t get anyone to donate it… and frankly the idea of carrying a 2U server around did not appeal to me. We briefly discussed the possibility of building it in a remote datacentre (i.e.: at SWMI Consulting Group) but decided against it because of potential Internet connectivity issues.
We ended up building the environment on laptops, and I have a suitcase that I refer to as my Mobile Datacentre. It is not an ideal solution, but it allows us to do everything we wanted to do on the client’s site; I can get onto an airplane with it as carry-on, and it takes less than 30 minutes to set up completely. In a future article I will outline what my ‘kit’ consists of, but essentially it has a couple of laptops that run Windows Server 2008 R2 SP1.
After the first few deliveries I started to get calls from the partners that I had not expected… requests for support in the most ridiculous scenarios, to which I would respond ‘Why would you ever want to do that in a production environment?’ The answer kept coming back: ‘Well, isn’t that how you told us to do it?’ Of course it wasn’t, but as I thought about it I understood where some of the miscommunication came from. Based on that, I have compiled a list of lessons you should never take away from my vPTA sessions.
1. Your laptop is NOT a server!
2. Your desktop is NOT a server!
I have met people over the years – especially in the SMB space – who feel that because a computer is based on x86 hardware and the specs are similar they can run their production servers on any hardware. This is WRONG! Just as there is a difference between corporate-grade and consumer-grade hardware, servers should only be run on server-grade hardware – whether you prefer HP, Dell, or Intel OEM machines.
3. You should have multiple domain controllers!
4. If you have only ONE domain controller, and it is virtualized, there are risks in joining the virtualization host to that domain. The problem is a chicken-and-egg one: when the host boots, its only domain controller is not yet available, because that DC is a virtual machine waiting for the very same host to start it. I am not saying that it will not work – it will – as long as you are careful about it (cached credentials, host services that do not depend on the domain, and the DC virtual machine set to start automatically). Remember, do it carelessly at your peril!
5. When using a Storage Area Network (SAN), which is highly recommended for virtualization environments, use a proper physical SAN device. Trying to do things ‘on the cheap’ with software SAN solutions may work… but use them as a last resort. Remember, they will not have the flexibility or power of a physical SAN, nor the management tools.
6. If you do decide to use a Software SAN (such as Microsoft iSCSI Software Target 3.3), DO NOT UNDER ANY CIRCUMSTANCES BUILD IT IN A VIRTUAL MACHINE.
To ensure that a volume is not shared, a software SAN creates a fixed-size VHD for each LUN: if you create a 100GB LUN (Logical Unit Number), a 100GB VHD is created on the volume. If the software SAN is itself a virtual machine, you end up with a VHD within a VHD, which not only slows things down, it also has the potential to… well, make things go bad.
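To make that concrete, here is a minimal Python sketch of the storage math. It is purely an illustrative model (not a real iSCSI target, and the 120GB guest disk size is my own assumption), but it shows why nesting the target inside a VM costs you up front:

```python
# Illustrative model of fixed-size VHD allocation -- not a real iSCSI target.
# A software SAN backs each LUN with a fixed-size VHD, so a 100 GB LUN
# consumes its full 100 GB on the hosting volume immediately.

def fixed_vhd_bytes(lun_gb: int) -> int:
    """Space a fixed-size VHD reserves for a LUN of lun_gb gigabytes."""
    return lun_gb * 1024 ** 3

# On a physical host: the 100 GB LUN costs 100 GB, once.
physical_cost = fixed_vhd_bytes(100)

# Inside a VM: the target's VHD lives *inside* the VM's own fixed VHD, so
# the host must carry a guest disk big enough to hold it (120 GB assumed
# here), and every write now traverses two virtual-disk layers.
guest_disk_gb = 120
nested_cost = fixed_vhd_bytes(guest_disk_gb)

print(physical_cost // 1024 ** 3, nested_cost // 1024 ** 3)
```

The double traversal is the performance hit; the up-front fixed allocation is why there is no way to do this ‘on the cheap’ anyway.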
7. Don’t (on a daily basis… or EVER!) turn your Hyper-V hosts off, disconnect them and all of your networking components, put them into a roller-board suitcase, and travel with them. Your servers should only move if your company sells your building and moves to a new one. Otherwise they should stay put and always stay on! In fact, there should be careful planning for UPS requirements and generators in the event of power outages. Remember… when I am finished at your site at the end of the day… I ‘destroy’ the demo environment and rebuild it before going to my next session!
8. YOU NEED MORE THAN ONE NETWORK CARD RUNNING ON A CHEAP D-LINK SWITCH TO MAKE YOUR VIRTUALIZATION ENVIRONMENT WORK!!! This is not a commentary on D-Link hardware… for home and SMBs they probably work pretty well (I use them for some things). When planning the network architecture of your virtualization environment you should do some serious planning around networking requirements, including how many NICs for production, how many for iSCSI, how many for clustering, and whether your Production vNetwork will be shared with your Management vNetwork. The answer to all of these questions depends on your requirements… but it is ALWAYS more than one. Remember: More NICs=More Better!
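As a rough illustration of that planning exercise, the sketch below tallies NIC counts from a role list. The roles and per-role counts are my own assumptions for one small clustered host, not a Microsoft sizing formula – your requirements will differ:

```python
# Hypothetical NIC-count plan for a small clustered Hyper-V host.
# Every per-role count below is an illustrative assumption, not a rule.

NIC_ROLES = {
    "management": 1,      # parent-partition / host management traffic
    "production": 2,      # teamed pair carrying VM traffic
    "iscsi": 2,           # two paths to storage for MPIO
    "cluster_csv": 1,     # cluster heartbeat / CSV traffic
    "live_migration": 1,  # dedicated Live Migration network
}

def total_nics(roles: dict) -> int:
    """Sum the NICs required across all planned roles."""
    return sum(roles.values())

print(total_nics(NIC_ROLES))
```

Whatever numbers you land on for your own environment, the total will never be one.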
9. Your iSCSI (Storage) network should not be on the same wire as your Production network, and if it is out of necessity then you should at the very least implement vLAN tags to segregate the traffic. Remember, about the only protection most people put on an iSCSI network (and few seem to bother…) is CHAP – and CHAP only authenticates the endpoints, it does not encrypt the traffic.
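The segregation rule itself is simple enough to express as a check. The little sketch below models it with made-up port and VLAN assignments (real tagging is configured on the switch and in the OS, of course, not in Python):

```python
# Illustrative VLAN-segregation check; all port names and VLAN IDs are
# made-up example values, not a recommended numbering scheme.

ports = {
    "host1-prod-nic":  {"vlan": 10, "role": "production"},
    "host1-iscsi-nic": {"vlan": 20, "role": "iscsi"},
    "san-ctrl-a":      {"vlan": 20, "role": "iscsi"},
    "core-uplink":     {"vlan": 10, "role": "production"},
}

def vlans_for(role: str) -> set:
    """All VLAN IDs carrying traffic for the given role."""
    return {p["vlan"] for p in ports.values() if p["role"] == role}

# Storage and production traffic should never share a VLAN.
segregated = vlans_for("iscsi").isdisjoint(vlans_for("production"))
print(segregated)
```

If that check ever comes back False in your own plan, you are sharing the wire – fix it before you deploy.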
10. YOUR LAPTOP AND DESKTOP ARE NOT SERVERS! Of course this is the same as Points 1 & 2, but important enough a message that it warrants repeating.
11. VM Snapshots are great for labs and testing, but are not recommended for your production environment, and are NEVER a long-term solution. In fact this is STRONGLY discouraged by both Microsoft, VMware, AND SWMI Consulting Group. They should be used in production sparingly and carefully, and only with very careful planning and monitoring. Remember, when you delete a snapshot… NOTHING HAPPENS right away. The AVHD files only merge back into the VHD when you shut down the virtual machine, and that merge can take a lot of time!
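The behaviour catches a lot of people out, so here is a toy model of it in Python. This is an illustration of the 2008 R2-era snapshot lifecycle only (the class, file names, and methods are invented for the example):

```python
# Toy model of Hyper-V (2008 R2-era) snapshot behaviour: deleting a
# snapshot only *marks* its AVHD for merging; the actual merge into the
# parent VHD happens when the VM shuts down. Purely illustrative.

class VM:
    def __init__(self):
        self.running = True
        self.disk_chain = ["base.vhd"]   # parent disk
        self.pending_merge = []

    def snapshot(self, name):
        self.disk_chain.append(name + ".avhd")   # new differencing disk

    def delete_snapshot(self, name):
        avhd = name + ".avhd"
        if avhd in self.disk_chain:
            self.pending_merge.append(avhd)      # nothing happens yet!

    def shutdown(self):
        self.running = False
        for avhd in self.pending_merge:          # merge runs NOW -- slowly
            self.disk_chain.remove(avhd)
        self.pending_merge.clear()

vm = VM()
vm.snapshot("before-patch")
vm.delete_snapshot("before-patch")
print(len(vm.disk_chain))   # AVHD is still in the chain while running
vm.shutdown()
print(len(vm.disk_chain))   # merged only after shutdown
```

The point of the model: between the delete and the shutdown, your VM is still running on the differencing disk, and disk space is still committed – which is exactly why snapshots are not a long-term anything.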
12. Breaking any of these rules in a production environment is not just a bad idea, it would likely result in an RGE (Resume Generating Event). In other words, some of these can be serious enough for you to lose your job, lose customers, and possibly even get you sued. Follow the best practices though and you should be fine!