A response to a VMware article… written by someone I respect.

While he may not be very well known to the Microsoft community, Mike Laverick is a legend in VMware circles.  Mike owns a blog called RTFM Education, a source of white papers for VMware technology, although he did start out as a Microsoft Certified Trainer.  He now works for VMware as a Senior Cloud Infrastructure Evangelist.  I was very happy to read on his blog that he has decided to try learning Hyper-V and Microsoft’s Private Cloud.  Unfortunately, from what I can tell, he was still thinking way too VMware, rather than trying to learn the Microsoft way of doing things.

(To read the article, follow this link: http://www.mikelaverick.com/2013/10/i-cant-get-no-validation-windows-hyper-v-r2eality-fail-over-clustering/)

This is a problem that I see all the time, and it goes both ways.  When I was teaching vSphere Infrastructure classes my Microsoft-focused students had a hard time getting out of the Microsoft mindset.  When I teach Microsoft courses, my VMware students have the same problem going the other direction.  It would be much easier if people would open their minds and just let the technology flow… but then I have been a Star Wars fan for too long, so I believe in that sort of thing.

I found several points of the article quite amusing.  Mike opens the article with a picture and quote from the book Windows NT Microsoft Cluster Server.  The first words that he actually types are ‘Mmm, so much has changed since then or has it?’  I am sorry Mike, but to even insinuate that Microsoft Clustering in Windows Server 2012 R2 is anywhere near the disaster that was clustering in Windows NT (or Server 2000, or Server 2003) is a joke.  Yes, you have to have the proper pieces in place, and yes, you have to configure it properly.  You even have to spend a little time learning Microsoft Clustering and how it works.  If you were to spend thirty minutes with someone like me I’d say you’d be good.

Also, I know you don’t like that you have to install the Failover Clustering feature on all of the servers before you can create your cluster.  However, please remember that unlike a pure hypervisor, Windows Server is an operating system that does many things for many people.  Installing every possible feature out of the box would be a ridiculous notion – for one thing, it would triple the footprint and exponentially multiply the attack surface of Windows Server… to say nothing of running code you don’t need, which consumes resources.

To save time, I recommend the following PowerShell cmdlets:

Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName MyServer1
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName MyServer2
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName MyServer3
New-Cluster -Name MyCluster -Node MyServer1, MyServer2, MyServer3 -StaticAddress 172.17.10.5

(There are probably ways to avoid repeating that – a loop over the server names, for example – but that is not the point of the article).
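If you do want to skip the repetition, here is a minimal sketch of one way to do it with a loop (same hypothetical server names as above, with a validation pass thrown in before the cluster is built, since the Failover Clustering cmdlets make that easy):

# Install the Failover Clustering feature on each node
$nodes = 'MyServer1', 'MyServer2', 'MyServer3'
foreach ($node in $nodes) {
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName $node
}

# Validate the nodes, then build the cluster
Test-Cluster -Node $nodes
New-Cluster -Name MyCluster -Node $nodes -StaticAddress 172.17.10.5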

The point of this article is not to pick Mike’s article apart – for one thing, he is probably doing better with Microsoft technology than I would have done when I was new to VMware; for another, I have great respect for him, both as a person and as an IT Pro.  I just find it amusing that a VMware evangelist is struggling to learn Hyper-V and System Center, just as so many of the Microsoft evangelists have been struggling to learn VMware.  There is a huge learning curve to be sure… no matter which way you go.

While I am reasonably fluent and certified in both technologies, there is no question that I favour Microsoft… just as Mike favours VMware.  I am glad to see that he is trying to learn Microsoft though… even though some of the ways he is going about it may be questionable.

The one thing that I will point out, though, is that Mike is right… there are two ways of building a Microsoft cluster – you can use Failover Cluster Manager, or you can use System Center VMM.  Mike points out that these technologies would do well to communicate better.  I agree, and recommend that users pick one or the other.  I would also like to point out that in vCenter Server you can create a cluster, but if you are only using ESXi (VMware’s hypervisor) without vCenter Server there is no way to create a cluster… the technology is simply not supported unless you pay for it.  Score one for Microsoft.

Mike, on a personal note, I would love to sit with you and show you the vastness of System Center and Microsoft’s Private Cloud one day.  Geography seems to work against us, as you are (I believe) in Scotland, and I am in Japan.  There is a catch though… I will gladly teach you Microsoft’s virtualization stack from top to bottom… but I want you to do the same for me with the vSphere stack.  I know the technology and am certified, but I would cherish the opportunity to relearn it from you, as I have followed your articles with reverence for many years.

If you ever do care to take me up on the offer Mike, my email address is mitch@garvis.ca.  Drop me a line, we’ll figure it out.  I suspect that we would both be able to write some great articles following those sessions, and we would both have newfound respect for the other’s technology of choice.

Counting Down the Classics with the US IT Evangelists

“On the first day of Christmas my true love gave to me…”

“Ninety-nine bottles of beer on the wall…”

“Thirty-five articles on Virtualization…”

All of these are great sing-along songs, whether for holidays, camping, bus rides, or comparing virtualization technology.  Each one is a classic.

Wait… you’ve never heard the last one? That’s okay, we are happy to teach it to you.  It has a pretty catchy tune – the tune of cost savings, lower TCO, higher ROI, and a complete end-to-end management solution.

Even if you can’t remember the lyrics, why don’t you open up the articles – each one written by a member of Microsoft’s team of IT Pro Evangelists in the United States.

You can read along at your own pace, because no matter how fast or slow you read, as long as you are heading in the right direction then you are doing it right! –MDG

The 35 Articles on Virtualization:

Date Article Author
12-Aug-13 Series Introduction Kevin Remde – @KevinRemde
13-Aug-13 What is a “Purpose-Built Hypervisor”? Kevin Remde – @KevinRemde
14-Aug-13 Simplified Microsoft Hyper-V Server 2012 Host Patching = Greater Security and More Uptime Chris Avis – @ChrisAvis
15-Aug-13 Reducing VMware Storage Costs WITH Windows Server 2012 Storage Spaces Keith Mayer – @KeithMayer
16-Aug-13 Does size really matter? Brian Lewis – @BrianLewis_
19-Aug-13 Let’s talk certifications! Matt Hester – @MatthewHester
20-Aug-13 Virtual Processor Scheduling Tommy Patterson – @Tommy_Patterson
21-Aug-13 FREE Zero Downtime Patch Management Keith Mayer – @KeithMayer
22-Aug-13 Agentless Protection Chris Avis – @ChrisAvis
23-Aug-13 Site to Site Disaster Recovery with HRM Keith Mayer – @KeithMayer
25-Aug-13 Destination: VMWorld Jennelle Crothers – @jkc137
26-Aug-13 Get the “Scoop” on Hyper-V during VMworld Matt Hester – @MatthewHester
27-Aug-13 VMWorld: Key Keynote Notes Kevin Remde – @KevinRemde
28-Aug-13 VMWorld: Did you know that there is no extra charge? Kevin Remde – @KevinRemde
29-Aug-13 VMWorld: A Memo to IT Leadership Yung Chou – @YungChou
30-Aug-13 Moving Live Virtual Machines, Same But Different Matt Hester – @MatthewHester
02-Sep-13 Not All Memory Management is Equal Dan Stolts – @ITProGuru
03-Sep-13 Can I get an app with that? Matt Hester – @MatthewHester
04-Sep-13 Deploying Naked Servers Matt Hester – @MatthewHester
05-Sep-13 Automated Server Workload Balancing Keith Mayer – @KeithMayer
06-Sep-13 Thoughts on VMWorld Jennelle Crothers – @jkc137
09-Sep-13 Shopping for Private Clouds Keith Mayer – @KeithMayer
11-Sep-13 Dynamic Storage Management in Private Clouds Keith Mayer – @KeithMayer
12-Sep-13 Replaceable? or Extensible? What kind of virtual switch do you want? Chris Avis – @ChrisAvis
13-Sep-13 Offloading your Storage Matt Hester – @MatthewHester
16-Sep-13 VDI: A Look at Supportability and More! Tommy Patterson – @Tommy_Patterson
17-Sep-13 Agentless Backup for Virtual Environments Special Guest Chris Henley – @ChrisJHenley
19-Sep-13 How robust is your availability? Kevin Remde – @KevinRemde
20-Sep-13 VM Guest Operating System Support Brian Lewis – @BrianLewis_
23-Sep-13 How to license Windows Server VMs Brian Lewis – @BrianLewis_
24-Sep-13 Comparing vSphere 5.5 and Windows Server 2012 R2 Hyper-V At-A-Glance Keith Mayer – @KeithMayer
25-Sep-13 Evaluating Hyper-V Network Virtualization as an alternative to VMware NSX Keith Mayer – @KeithMayer
26-Sep-13 Automation is the Key to Happiness Matt Hester – @MatthewHester
27-Sep-13 Comparing Microsoft’s Public Cloud to VMware’s Public Cloud Blain Barton – @BlainBar
30-Sep-13 What does AVAILABILITY mean in YOUR cloud? Keith Mayer – @KeithMayer

…and as for me? Well it’s pretty simple… just go to www.garvis.ca and type Virtualization into the search bar.  You’ll see what I have to say too!

What Does Being an MVP Mean to ME?

This month I will be speaking at the SMB Nation Fall Conference.  My main presentation will be on what IT will look like for small- and mid-sized businesses in what I call the ‘Post-SBS Era.’  I will be discussing Private Cloud, System Center, Virtualization, Office 365, Azure, and Windows Intune.

I have also been asked to lead a panel of Microsoft MVPs.  Topic: Open.  I can pick a topic, or I can simply open the floor to questions.  I briefly considered calling the panel ‘Whaddya mean you do it for FREE?!’ but thought better of it… however it would be fitting because MVPs do not get paid for what they do… at least not for what they do in order to be an MVP.

I have invited four other MVPs to join me on stage; until I get confirmation from all of them I will not reveal who they are.  However I tried to select people with different experiences as MVPs.  It should be an interesting time.

Over the past few days that I have been thinking about this panel I have given some thought to what it means to me.  Last week I was recognized for the seventh time (Microsoft MVPs are awarded for a period of one year, and my award date is October 1st).  I guess by now I can be considered a ‘veteran MVP,’ but I know that there are so many MVPs who have been around much longer than I have been.

In 2005 or 2006 there was an MVP Roadshow that came to Montreal; Jeff Middleton and the gang came up and after their day-long event, they agreed to do a user group event for us in the evening.  Somebody in the audience asked Jeff ‘What is expected of you as MVPs?’  I expected Jeff to start talking about speaking to user groups, answering questions in the forums and newsgroups, and whatever else.  He surprised me when he answered (not a direct quote) ‘Nothing.  The MVP Award is strictly for past contributions.  It is not a contract, and you are not actually expected to do anything further.’

It was an interesting answer, and on the surface an honest and accurate one.  It does not, however, account for the fact that if MVPs want to continue being MVPs then there are certain expectations of us.  Depending on several factors I think those expectations are not the same for all of us, but that is another topic altogether.

In November 2004 I had a conversation with a young Harp Girn who was at the time a vendor with Microsoft Canada.  He had, earlier in the evening, gotten me to volunteer to start a user group in Montreal for IT Professionals.  He made it clear to me that although he and his team would help, there wouldn’t be any direct, tangible benefits.  ‘I can’t make any promises, but a lot of user group leaders get recognized as Microsoft MVPs.’  I am not sure, but it may have been the first time I had ever heard the term.  He was right – 23 months later I did get the award.

It has been an incredible six years… My life, my career, my outlook have changed so much in that time, and who knew – a lot of that change can be traced back to the MVP Award.  Most of that indirectly of course, but a lot of the opportunities that I have been afforded over the past several years have been because I was an MVP.  Microsoft Canada has done a lot for me, and oftentimes it was because of a conversation started with the phrase ‘…do you know of any MVPs who could do this for us?’  Many of the certifications I hold (especially the Charter certs) are because Microsoft Learning sent out invites to write beta exams to… you guessed it – MVPs.

Shortly after I received the award for the first time a consulting firm asked me to do some work with them – it started as training roadshows but eventually evolved into courseware creation.  When they asked me what I knew about server virtualization I replied honestly that I knew nothing about it.  They had me learn, and that would eventually evolve into several career-changing moments, not the least of which was the opportunity to write Microsoft’s original courseware (e-learning) for Hyper-V.  That led to roadshows of course, and a company that heard about me because of the roadshow asked if I would be interested in learning VMware and then consulting and teaching it for them in Canada (and eventually internationally).  The original consulting firm that got the ball rolling on this told me point-blank that they would not have considered me had I not been a Microsoft MVP.

When the Partner team at Microsoft Canada decided to create a program called the Virtual Partner Technology Advisors, they looked for MVPs who were strong on virtualization.  That led to dozens of contracts over the course of several years, as well as the opportunity to present myself as one of the foremost VMware-compete guys in the country.

And of course, when the DPE Team at Microsoft Canada started discussing a new position called ‘Virtual Technical Evangelist’ they again looked for MVPs.

Someone asked me earlier today what I would do if I wasn’t doing what I do.  It’s a tough question and frankly I cannot fathom an answer.  I guess I need more time, but I’ll come up with something, I promise.  The question got me thinking (and not for the first time) where I would be today if I had not put my hand up to volunteer to create a local user group in Montreal, which in turn led to my eventual nomination as a Microsoft MVP.  The consequences of that single action are impossible to quantify, but let’s start with a quick list:

  • I would probably still be living in Montreal
  • I would likely have a couple of certifications… but nowhere near what I have today.
  • I would not have the vast majority of the friends I have made over the past eight years.
  • I would never have met my wife and her (now OUR) son, and we would not have had our baby.
  • It is unlikely that I would be a Black Belt
  • It is unfathomable that I would have several positions within Microsoft
  • It is highly doubtful I would have started a blog that today is read by ten thousand readers per month
  • I would never have had the opportunity to travel to 8 provinces (several times), 35 states (with many repeats), and twelve countries on behalf of companies like Microsoft and HP
  • I would never have been asked to consult on deployment projects for companies on the Fortune 15 list, nor for such organizations as the New York Police Department.

Wow… that is a simple list that took me all of five minutes to compile, but each point is easy to make the case for.  I honestly believe that had I not been awarded the Microsoft MVP way back then my life would have gone in a very different direction.  I cannot fathom what it would look like today… but it isn’t a stretch to guess that broader minds bring broader opportunities, and I would not be doing as well were I still living in Montreal servicing small business IT shops.

So while Microsoft uses the MVP Program as a thank-you for its community leaders, I expect a lot of us owe Microsoft a big thank-you back for the opportunities that have come about from our award.

It’s Coming… Can we now compare Hyper-V with vSphere as both new products prepare for launch?

On July 5th I published an article titled A Response to VMware’s ‘Get the Facts’ page comparing vSphere to Hyper-V & System Center. In the seven weeks since it went live it has become the 4th most read article I have ever published (in seven years as a blogger), as well as being by far the most commented on, discussed, and shared article I have ever written.

André Andriolli, a former VMware field engineer and now a Systems Engineer Manager with VMware in Brazil, responded very well.  One of the first points he made was:

we should start by comparing what’s in the market TODAY with what’s in the market today: I mean vSphere 5 versus Hyper-V 2, or vSphere 5.1 with Hyper-V 3. Since vSphere 5.1 news are not in the street yet, we should go with the first. Comparing a future MSFT release with what VMware customers are running for over 1 year now is simply not fair, to me at least.

While I did not entirely agree with this at the time, I accept that it is a valid point.  I am looking forward to hearing comments in the next few weeks though… as Windows Server 2012 (with Hyper-V 3.0) becomes generally available on September 4th, and vSphere 5.1 becomes available on September 11th.

My opinion is simple… VMware still makes a great product, but so does Microsoft; the benefits of the former, in my opinion (and that of many VMware customers I have spoken with), simply are not worth the difference in cost over the latter.  While it will be a relief that VMware is abandoning their Virtual Memory Entitlements (commonly referred to as the Memory Tax), I think the last year will have left a sour taste with a lot of their customers, and given them an opportunity to see for themselves just how good Hyper-V really is.

I do like the fact that both platforms are being released at the same time though; I once made a comment, which I regretted right away, that one product would always be ahead of the other: one would come out with a new feature, the other would take that feature and include it in its next release along with whatever else it was planning, and so it would continue.  For the next year the two will be compared as equals.

Now, this is one place where VMware has a slight advantage… insofar as they have a one-year product cycle, while Windows Server has a three-year product cycle.  Microsoft adjusted this somewhat last year when it took the rare step of adding new (and major) functionality into Service Pack 1 of Windows Server 2008 R2.  For now, frankly, I am not sure that, pound for pound, Hyper-V (with System Center) is not already the better product.  I guess we will find out what the market says though…

If you are in Toronto, we would love for you to join us for the Windows Server 2012 Launch Event on September 5th, or if you are in another city across Canada, later in the month.  Check out Ruth Morton’s blog to see the dates, and to click to register.  We hope to see you there!

Memory Limits in Windows 8 & Windows Server 2012

Hey folks!  While I have not seen this documented anywhere official (yet) I was just sent a list of memory caps on different SKUs of Windows 8 and Windows Server 2012.  If you have a need for speed, you are going to like the new changes!

Windows 8 Client OS

  • Windows 8: 128GB RAM
  • Windows 8 Professional, Enterprise: 512GB RAM

Windows Server 2012:

  • Server Standard, Datacenter, Server Storage Standard, MultiPoint Premium, Server HyperCore: 4TB RAM
  • Server Storage Workgroup, Server MultiPoint Standard, Server Win Foundation: 32GB RAM
  • Windows 8 Essentials Server Solution: 64GB RAM
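If you are curious how close a given machine comes to any of those caps, a quick PowerShell check of the installed physical memory looks something like this (just a rough sketch):

# Total the capacity of the installed memory modules and show it in GB
$totalBytes = (Get-CimInstance -ClassName Win32_PhysicalMemory | Measure-Object -Property Capacity -Sum).Sum
'{0:N0} GB of RAM installed' -f ($totalBytes / 1GB)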

It has been a while since I felt my laptop just didn’t have as much RAM as it could… but the reality is that I don’t know what I would do with more than 8GB of RAM.  OK, I really do… Hyper-V lab environment!  However, I do have a couple of servers with 128GB of RAM, and I don’t think I will soon come close to maxing that out for what I do!

Oh, by the way: VMware announced that its limit for ESXi 5.1 (launching September 11) will be 2 TB of RAM, the same as ESXi 5.0.

Is it true? The Memory Tax is gone!

CRN is reporting that next week at VMworld VMware will be announcing that they are doing away with Virtual Memory Entitlements, which you have probably heard me refer to as the Memory Tax.

According to the article (http://www.crn.com/news/cloud/240005840/vmware-kills-vram-licensing-will-focus-on-vsphere-cloud-bundles.htm?cid=nl_alert) VMware is trying to regain its competitive edge over Microsoft’s Hyper-V, which has over the past couple of years soared to nearly 30% market share, making it the fastest growing virtualization platform in the industry.

This is the first time I can remember VMware showing any sign that they are trying to compete against this scrappy and powerful competitor.  I heard from a source at VMware that a great many of their clients have told them they are either testing Hyper-V on a few servers or, in some cases, switching completely.  This comes as no surprise in a year when VMware introduced the hated Virtual Memory Entitlements, and when Microsoft has made such incredible strides to make Hyper-V 3 as good as or better than its larger competitor.

It will come as no surprise to readers of this blog that I was shocked by the Memory Tax, and predicted a year ago that it would badly hurt VMware’s market share.  In a day and age when the competition is getting better, giving its hypervisor away with the operating system, and bundling the management tools into the System Center suite (which the vast majority of companies already own), it simply made no sense for VMware to make virtualization even more expensive than it already was.

VMware will be launching vSphere 5.1 at VMworld next week, and the worst of times will be over for their fans.  I wonder, however, if they can turn the ship around… you cannot unring a bell, and the companies that have tried Hyper-V for the first time in the past year have seen what their alternatives are.  IT managers have to consider costs, and if the less expensive product is just as good (and is supported by the largest software company in the world) then it will be interesting to see how many of them make the switch over the first twelve months of Hyper-V 3.0 with Server 2012, which RTMed on August 1st and is set to become publicly available in early September.

In other News:

Several VMware customers have told me they have received an e-mail that looks like this one:

“… I am your renewals representative from VMware.  I wanted to reach out to you regarding a Renewals promotion that we are running through September 30, 2012.

VMware is making an extended effort this year to bring current any expired customers to allow for reinstated SnS and the ability to upgrade to the newest version of vSphere (VS5). Throughout this one-time promotion, we will be offering two separate options with 100% waiver of reinstatement on your expired licenses and up to 100% waiver of your back-dated maintenance; saving you at least 20% on your renewal cost.

By reinstating your support via this promotion, you could save thousands of dollars and regain access to technical support and the most current releases.  While supported, you will be eligible for upgrades and updates as well as technical support, both online and via our Customer Support Technicians.

I would be happy to discuss the promotion and answer any questions you may have.  Again, this offer is only valid through September 30, 2012; therefore, please let me know if you would like to see pricing options and I will have those generated.  If you have further questions about this promotion, please feel free to contact me directly at the information below, or contact RenewalsHotlineAMER@vmware.com

If you are interested in quotes to see how much getting your products back on support would be please let me know and I am happy to get these for you.”

I am not surprised.  For the first time in its history VMware will have seen decreased sales in the past year, especially when it comes to renewing SnS contracts.  When they launched vSphere 5 (and the hated Memory Tax) they gave existing clients a ridiculously short window to make the commitment to upgrading their licenses… something like 30 days.  A great many companies decided to either stick with vSphere 4.1, which meant that they would avoid the Memory Tax, or better yet, begin the process of migrating their existing vSphere servers onto Hyper-V.

It does not surprise me at all that the company is now looking for ways to get that lost business back, even taking the unprecedented steps of lowering the costs AND waiving the penalties.  Unfortunately for them, as I wrote earlier in this article, you cannot unring a bell.  For the die-hard fans who stayed with vSphere 4.1 this might be enticing, but for companies that dipped their toes into the waters of Hyper-V, and were anxiously awaiting the public release of Hyper-V 3.0, there is no going back.

I have been teaching those professionals and companies how best to leverage their Microsoft virtualization platform, and I will continue to do so… Welcome aboard!

Virtual Hard Disks: Best Practices for dynamic / thin provisioned disks and other stories…

Virtual environments offer us many new options with regard to storage.  Whether you are using Hyper-V or VMware, a few of the options (and recommendations) are the same, even when the terms are different.

One of the options I am often asked about is dynamically expanding hard drives.  My first reaction is that I am against them… and I have very good reasons for that, although there are caveats that can make them acceptable for a production environment.

Hyper-V has dynamically expanding disks; vSphere has thin provisioned disks.  They both do the same thing – rather than creating a file equal in size to the disk you define (say, 20GB on disk for a 20GB VHD), they create a tiny file that grows as data is written to it, up to the limit set in the configuration.
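On the Hyper-V side the choice comes down to a single switch on the New-VHD cmdlet; a quick sketch (the paths and size here are made up for the example):

# Dynamically expanding: the file starts small and grows toward 20GB as data is written
New-VHD -Path 'D:\VHDs\App01.vhdx' -SizeBytes 20GB -Dynamic

# Fixed (the 'thick' equivalent): the full 20GB is allocated on disk up front
New-VHD -Path 'D:\VHDs\App02.vhdx' -SizeBytes 20GB -Fixed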

Dynamically expanding disks make our VMs much easier to transport… it is quicker and easier to move a 6GB file than a 20GB file.  However, in our server environments transportability is seldom a factor.  Performance and stability are.

The performance issue is not really a factor anymore… although the dynamic expansion of a file must on some level take resources, both Microsoft and VMware have made their virtual disks so efficient that the difference is negligible.  As for stability, I rarely see virtual disks corrupting because of expansion… unless they expand onto corrupt sectors on the physical drive, and even then the drives and operating systems are so good these days that it seldom happens.

There are two main reasons I provision my virtual disks as thick/static disks:

  1. Storage is cheap… at least, it is cheaper than downtime.  If you need 5TB of storage for your virtual environment, you should buy it.  That way you know what you have, and don’t have to worry about a situation where a virtual disk file grows and fills the volume on which it is stored, which would prevent it (and any other dynamically expanding disks on the same volume) from growing, causing the virtual machines that depend on them to crash.
  2. Files fragment as they grow.  Fragmentation is bad.  Need I say more?

Now both of these issues do have mitigations that would, when necessary, allow you to implement dynamic/thin disks without risking these problems.  In fact, one of these mitigations will actually solve both issues:

  1. Under-provision your storage.  While this sounds a lot like the old ‘Hey Doc, it hurts when I do this…’ ‘So don’t do that!’ routine, it is true: if you have two virtual disks that can each expand to 35GB residing on a 72GB volume, there is no threat of them filling the space (a quick sanity-check sketch follows this list).  This does not address the fragmentation issue though.
  2. Put each virtual disk file onto its own volume.  This mitigates both issues.  Unfortunately it also means that you are limiting the number of virtual disks you have to the number of hard drives (or LUNs) that you have.  Fortunately there is a mitigation for that too: partition! Bad for Quebec, but good for maximizing the number of virtual disks you can store segregated on a single drive.
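Here is the sanity-check sketch mentioned above: a rough comparison of how far the dynamic disks on a volume could grow against the capacity of that volume (assuming the Hyper-V and Storage cmdlets on the host, and a made-up V: drive):

# Add up the maximum size of every VHD/VHDX on the volume
$volume  = Get-Volume -DriveLetter V
$vhds    = Get-ChildItem -Path 'V:\' -Recurse -Include *.vhd, *.vhdx | ForEach-Object { Get-VHD -Path $_.FullName }
$maxGrow = ($vhds | Measure-Object -Property Size -Sum).Sum

# Compare the worst-case growth with what the volume can actually hold
'Maximum possible VHD growth: {0:N0} GB' -f ($maxGrow / 1GB)
'Volume capacity:             {0:N0} GB' -f ($volume.Size / 1GB)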

When speed is a factor…

The partition plan does not address one more issue – the performance issue.  Mitch’s Rule #11 is very clear: more spindles = more speed = more better.  I have said this in class and on stage a thousand times, and I firmly believe it.  Every virtual disk that you spin up on a hard drive must share that drive’s RPM (and therefore its speed) with all of the other virtual disks on the drive.  Partitioning the drive does not change the fact that the multiple partitions still exist on the same spindles.

There are two answers to this: more spindles, or fewer virtual disks.  In larger organizations storage costs are easily justified and therefore accepted.  Smaller organizations very often have to make do with what they have, or buy the cheapest solution (external USB drives, etc.).  This is never a good idea, but if you are going to use external drives make sure you invest in the fastest bus available – in other words, USB 2.0 is not a good option, and USB 3.0 or eSATA are better bets.

What other options do I have?

Of course, we have not discussed two great solutions, but they are going to be a little costlier:

  • Storage Area Networks (SANs).  SANs are great because they allow us to take several disks in a unit and partition the total rather than the individual drives, and then we can share them across multiple hosts.  Absolutely every organization considering virtualization, no matter how small, should invest in a SAN device. 

I say invest because they are not cheap… a decent entry-level SAN should cost around Five Thousand Dollars ($5,000.00).

  • Raw Device Mapping (RDM) / Pass-through disks (PTD). This is great functionality that ensures a virtual machine gets the best performance available from the drive by giving it exclusive access to it.  It eliminates the worry of performance degradation due to resource sharing.

The downside to this is that you cannot partition the drive, and while you can create an RDM/PTD to a SAN LUN (Logical Unit Number), that does not change the fact that you are limiting the number of drives you can have.  This functionality can also limit the portability of your virtual machines – neither Microsoft nor VMware prevents you from live migrating a VM with an RDM/PTD, but there is an added layer of complication.
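For what it is worth, on the Hyper-V side attaching a pass-through disk boils down to taking the physical disk offline on the host and handing it to the VM.  A rough sketch (the VM name and disk number are made up for the example):

# The physical disk must be offline on the host before a VM can use it directly
Set-Disk -Number 3 -IsOffline $true

# Attach physical disk 3 to the VM's SCSI controller as a pass-through disk
Add-VMHardDiskDrive -VMName 'SQL01' -ControllerType SCSI -DiskNumber 3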

Conclusion

While planning for resources is important in a physical environment, it is infinitely more so in a virtual environment, where resources have to be shared and virtual machines have to co-exist on the same hardware.  The ability to plan your storage strategy begins with a good understanding of the technologies, as well as an understanding of best practices and recommendations from others who have been there before.  I still wouldn’t use dynamically expanding disks on my servers, but with more and more people asking me about it I wanted to make my reasons clear, as well as outline the steps I would take if I were in a position where I had no choice.

Now it’s up to you!

Can you convince your boss to let you get certified? UCA!

One of the benefits I get from conferences like Microsoft TechEd is reconnecting with friends and colleagues that I only see at these shows.  David and I have been friends for a couple of years, and when we discovered that we were  both staying over an extra night we decided to splurge and drive a ways to Tampa for dinner at what is in my opinion the best steakhouse and among the best restaurants in North America – Bern’s Steakhouse.

Of course it is slightly over an hour’s drive each way, so in addition to the 2.5 hours we spent in the restaurant we had plenty of time to discuss all sorts of topics, some personal but many business and technology related.

David works on the Windows team at Microsoft.  His current area of focus is virtual desktop infrastructure (VDI), which is a subject that I have been talking about to user groups for the past six months.  We definitely had a lot to discuss!

He was telling me that in a past job he ran an entirely VMware-based virtualization infrastructure, which makes sense because at the time most virtualized datacenters were running VMware.  He told me he thought it amusing that to this day a Google search of his name comes up with a presentation he did years ago at VMworld.

Speaking at VMworld is a very prestigious gig, on a par with speaking at Microsoft’s TechEd or MMS.  I would have thought that in order to be invited you would have to have at least the VMware Certified Professional (VCP) cert.  He told me that he wasn’t certified, and the reason was VMware Learning’s requirement that you take their course before you sit their exam; since he knew the product well enough to run a datacenter for the City of Las Vegas, it was a tough sell to get his boss to give him the week off as well as pay for the class and exam.  It was not a battle he was ever able to win, so he never got VMware certified.

We started talking about his employer’s position, and that it was, after all, a reasonable one.  In the case of an IT pro who is already proficient on a technology, certifications are for your next job, not for your current one.

Some people are able to learn a technology on their own better (and certainly cheaper) than they could from a class.  Is this always true?  Of course not… it is only true of some of us.

If you know a technology and you have proven it in a production environment for your employer then although it may be reasonable to spend a couple hundred dollars on an exam that is done in an afternoon, there is little value in paying thousands of dollars for a course that takes you away from your job for several days to a week.

So if my previous statement is true, that certifications are for your next job, then what value should a company see in an IT education and certification budget and plan for its employees?

There are a number of answers to that question, and depending on the individual in question one of the following answers should help.

1)      An IT professional may know version X of a technology, but that does not mean that they will know version X+1.  For example, I am certified in Network Infrastructure on Windows 2000 and 2003, but I still studied for and wrote the exam for Server 2008.  Why?  It covers new technologies that most of us could not simply read about and then implement following best practices.  New roles and features such as virtualization, Remote Desktop, and IPv6 meant that I had a lot to learn.  A company that has technologists working on legacy products would benefit from a course that teaches the new technologies, as well as a good refresher on the old ones.

2)      When employees change roles – even within IT – education can prepare them for that new role.  I know plenty of IT pros who have been promoted out of desktop support into the server side, but knowing the one does not mean you automatically know the other.

3)      Certifications are proof that you have enough respect for your profession to learn the material the right way, and then take the time to sit down and write a test created by a panel of subject matter experts (SMEs) to prove it.  They are also a good way to learn where you are weak.  Whether you pass or fail the exam, your score report (from a Microsoft exam) will let you know which aspects of the technology you are weak on, so you can go back and study those specific parts more.  The first exams I ever wrote (Windows 2000) simply said ‘Fail’ or ‘Pass’, which meant I never learned how close I was to succeeding, nor what I had to brush up on in order to do so.

4)      Technologies change, job roles change.  Over the past ten years desktop deployment specialists have had to learn components of Windows Server, Active Directory, Windows Deployment Services, the Microsoft Deployment Toolkit, the Windows AIK, and of course System Center Configuration Manager.  Individually some of these are easy enough to self-learn, but for most of us they take a good deal of learning to get right.  Hacking around in Active Directory or System Center production environments when you don’t know what you are doing is just a bad idea.  A class, especially one led by a trainer who is also a consultant and can discuss real-life scenarios and experiences, point out shortcuts, and flag pitfalls to be aware of, is often worth much more to the company than the cost of the class.

5)      There are companies that require industry certifications by virtue of corporate policies or external regulatory bodies.  Although many certifications do not expire, they do eventually become irrelevant.  A professional who was hired nine years ago on the strength of Server 2003 certifications was cutting edge at the time, but as the infrastructure is migrated to Server 2008 or Windows Server 2012 those certifications become meaningless, and with the changes in the industry (such as the advent of the Private Cloud) they may be required to recertify as an MCSE: Private Cloud (for example) in order to remain within scope of the policy or regulations.

The list can go on and on, but the simple fact is this: spending one million dollars is not a waste if you can prove that your return on investment (ROI) will be two million dollars.  If you are struggling to convince your employer/manager/director that they should be sending you for certification training, you simply have to show them what that ROI will be.  However remember to balance that with what it would cost them to replace you with a newer model with the current certs!  Experience and tenure are important, but the era of corporate loyalty is behind us, and I have seen too many times professionals talk themselves out of their jobs by telling their boss how much they have to spend on certification and continuing education.

Good luck!

Private Cloud Training from Microsoft Learning

Hey folks!  I have been asked to post here information about the release dates for Private Cloud training from Microsoft, so here it is:

10747A: Administering System Center 2012 Configuration Manager: June 29, 2012 (currently in Beta)

10748A: Deploying System Center 2012 Configuration Manager: June 29, 2012 (currently in Beta)

10750A: Monitoring and Operating a Private Cloud with System Center 2012: July 30, 2012 (currently in Beta)

10751A: Configuring and Deploying a Private Cloud with System Center 2012: July 30, 2012 (currently in Beta)

I hope this helps!

Virtualization Lessons–Both Positive and Negative!

As I sit in the back of the room for Microsoft Canada’s Virtualization Boot Camp Challenge today I see that the lab environments that we are providing to the attendees actually mimic the setup I use for my Virtual Partner Technology Advisor (vPTA) sessions.  As such, I see a lot of potential for attendees to learn a lot of great technologies, but there are a few lessons that they should know.  I outlined these in an article last year called ‘vPTA: What NOT to take away from my 1-day virtualization training.’  I urge all of the attendees (as well as all of you!) to click on the link and read the article.  While a lot of the practices we use are fine for a test/lab environment, you should be aware of them before you try to implement them in your production environment!

I have written a bunch of other articles that are pertinent to the discussion… here are just some of those links:

How to get a head start on the NEW Management and Virtualization Competency

Layer 1 or Layer 2 Hypervisor? A common misconception of Hyper-V, and a brief explanation of the Parent Partition

Virtualization Infrastructure: Which platform is right for you?

Microsoft Virtualization Learning Resources

Hyper-V Training – 10215AE is now available in E-Learning!

Real Help in A Virtual World

Busting the Myth: You cannot cluster Windows Small Business Server

A follow-up to my article on configuring iSCSI initiator in Server Core & Hyper-V Server

A brief response to the vSphere 5 vs. Hyper-V question…

Gartner agrees with me… Hyper-V is for real!

Do you have your Virtcerts?

MCITP: Virtualization Administrator 2008 R2 (and other R2 Virt Certs)