
Onboard SAN… Issues.


A client of mine is a small business with a couple of physical servers and a couple of virtualization hosts.  One of the physical servers, a Lenovo ThinkServer, has been acting as a file server, so it has really been very under-used.  It is a good server that has never been used to its potential (like myself) but has nonetheless been a very important file server.  It has eight hard drives in it, managed by the on-board RAID controller.

When the server rebooted for no discernible reason last week, we were concerned.  When it didn’t come up again, and did not present any hard drives… we realized we had a problem.

I was relieved to discover that it was still under warranty from Lenovo, with NBD on-site support.  I called them, and after the regular questions they determined that there might be a problem with the system board.  They dispatched one to me, along with a technician, for the next morning.  Their on-site service is still done by IBM, and in my career I have never met an unprofessional IBM technician.  These guys were no exception.  They were very professional and very nice.  Unfortunately they weren’t able to resolve the problem.

Okay, in their defense, here is what everyone (including me) expected to happen:

1) Replace the system board.

2) Plug in all of the devices (including the hard drives).

3) Boot it up, and during the POST get a message like ‘Foreign drive configuration detected.  Would you like to import the configuration?’

4) We answer YES, the configuration rebuilds, and Windows boots up.

Needless to say, this is NOT what happened.  Why?  Let’s start with the fact that low-end on-board RAID controllers apparently suck.  Is it possible that a procedure was not properly followed?  I am not sure, and I am not judging.  I know that I watched most of what they did, and did not see them do anything that I felt was overtly wrong.

The techs spent six hours on-site, a lot of that spent in consultation with the second level support engineer at Lenovo, who had the unenviable task of telling me, at the end of the effort, that all was lost, and I would have to restore everything from our backup.

I should mention at this point that we did have a backup… but because of maintenance we were doing to that system over the December holidays the most recent successful backup was twelve days old.

Crap.

Okay, we’ll go ahead and do it.  In the meantime, the client and I went to rebuild the RAID configuration.  We decided that although we were going to bolster the server – including a new RAID controller – we were going to try to rebuild the array configuration exactly as it had been, and see what happened.

Let me be clear… even the Lenovo Engineer agreed that this was a futile effort, that there was no way that this was going to work.  Of course it would work as a new array; we just weren’t going to recover anything.  I agreed… but we tried it anyway.

…and the server booted into Windows.

To say that we were relieved would be an understatement.  We got it back up and running exactly as it had been, with zero data loss.  We were not going to leave it this way of course… I spent the next day migrating data into new shares on redundant virtual servers.  But nothing was lost, and we all learned something.

I want to thank Jeff from Lenovo, as well as Luke and Brett from IBM who did their best to help.  Even though we ended up resolving it on our own (and that credit goes mostly to my client), they still did everything they could to make it right.

So my client has a new system board in their server, and hopefully with a new RAID controller, some more memory, and an extra CPU this server can enjoy a new and long, productive life as a vSphere host in the cluster.

…But I swear to you, I will never let a customer settle for on-board ‘LSI Software RAID Mega-RAID’ type devices again!

Happy weekend.


2 Comments

  1. James Forbis says:

    Why was the backup 12 days old?  If you had a decent BDR in there you could have run the server in a VM until it was repaired or migrated, with little to no downtime/data loss.

    • Mitch Garvis says:

      James, we had taken our backup solution off-line to make some infrastructure changes over the holidays.  Also, because the server was a virtualization host (Hyper-V), a lot of what was on that machine could not have been run in a VM: I would have had to rebuild the Hyper-V VMs in vSphere, and I did not want to do that.
