A client of mine is a small business with a couple of physical servers and a couple of virtualization hosts. One of the physical servers, a Lenovo ThinkServer, has been acting as a file server, so it has really been under-used. It is a good server that has never been used to its potential (like myself), but it has nonetheless been a very important file server. It has eight hard drives in it, managed by the on-board RAID controller.
When the server rebooted for no discernible reason last week, we were concerned. When it didn’t come up again, and did not present any hard drives… we realized we had a problem.
I was relieved to discover that it was still under warranty from Lenovo, with NBD on-site support. I called them, and after the regular questions they determined that there might be a problem with the system board. They dispatched one to me, along with a technician, for the next morning. Their on-site service is still done by IBM, and in my career I have never met an unprofessional IBM technician. These guys were no exception: they were very professional and very nice. Unfortunately, they weren’t able to resolve the problem.
Okay, in their defense, here is what everyone (including me) expected to happen:
1) Replace the system board.
2) Plug in all of the devices (including the hard drives).
3) Boot it up, and during the POST get a message like ‘Foreign drive configuration detected. Would you like to import the configuration?’
4) We answer YES, the configuration rebuilds, and Windows boots up.
Needless to say, this is NOT what happened. Why? Let’s start with the fact that low-end on-board RAID controllers apparently suck. Is it possible that a procedure was not properly followed? I am not sure, and I am not judging. I know that I watched most of what they did, and did not see them do anything that I felt was overtly wrong.
The techs spent six hours on-site, a lot of that spent in consultation with the second level support engineer at Lenovo, who had the unenviable task of telling me, at the end of the effort, that all was lost, and I would have to restore everything from our backup.
I should mention at this point that we did have a backup… but because of maintenance we were doing on that system over the December holidays, the most recent successful backup was twelve days old.
Okay, we’ll go ahead and do it. In the meantime, the client and I went to rebuild the RAID configuration. We decided that although we were going to bolster the server – including a new RAID controller – we were going to try to rebuild the array configuration exactly as it had been, and see what happened.
Let me be clear… even the Lenovo engineer agreed that this was a futile effort, that there was no way this was going to work. Of course it would work as a new array; we just weren’t going to recover anything. I agreed… but we tried it anyway.
…and the server booted into Windows.
To say that we were relieved would be an understatement. We got it back up and running exactly as it had been, with zero data loss. We were not going to leave it this way of course… I spent the next day migrating data into new shares on redundant virtual servers. But nothing was lost, and we all learned something.
I want to thank Jeff from Lenovo, as well as Luke and Brett from IBM who did their best to help. Even though we ended up resolving it on our own (and that credit goes mostly to my client), they still did everything they could to make it right.
So my client has a new system board in their server, and hopefully, with a new RAID controller, some more memory, and an extra CPU, this server can enjoy a long and productive life as a vSphere host in the cluster.
…But I swear to you, I will never let a customer settle for on-board ‘LSI Software RAID Mega-RAID’ type devices again!