
With software RAID the array configuration lives on the disks themselves rather than on a controller, so in the event of a disk controller failure you can simply replace the controller (or the whole server) and the disks remain usable. You are also no longer bound to a specific piece of hardware. You get the same core features as hardware RAID, along with a standard management and monitoring suite.
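To make that monitoring claim concrete, here is a minimal sketch of what an array health check can look like on Linux, assuming the md software RAID stack and its /proc/mdstat status file; the parsing below is deliberately simplified and illustrative, not a complete mdstat parser:

```python
#!/usr/bin/env python3
"""Minimal sketch: spot degraded Linux md (software RAID) arrays.

Assumes the Linux md stack and its /proc/mdstat status file; the parsing
here is a simplification for illustration, not a complete mdstat parser.
"""
import re
from pathlib import Path

MDSTAT = Path("/proc/mdstat")

def degraded_arrays():
    """Return the names of md arrays whose member status shows a hole.

    /proc/mdstat prints member status like [UU] for a healthy two-disk
    mirror; an underscore (e.g. [U_]) marks a failed or missing device.
    """
    bad, current = [], None
    for line in MDSTAT.read_text().splitlines():
        name = re.match(r"^(md\d+)\s*:", line)
        if name:
            current = name.group(1)
        status = re.search(r"\[([U_]+)\]", line)
        if current and status and "_" in status.group(1):
            bad.append(current)
    return bad

if __name__ == "__main__":
    broken = degraded_arrays()
    if broken:
        print("Degraded arrays:", ", ".join(broken))
    else:
        print("All md arrays look healthy.")
```

Wire something like this into cron or your monitoring system of choice and you get the same "red light" a hardware controller gives you, except in a form you can actually script against.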
# Softraid vs hardware RAID: software
The main drawbacks of software RAID:

- Disk replacement sometimes requires prep work – you typically should tell the software RAID system to stop using a disk before you yank it out of the system. I've seen systems panic when a failed disk was physically removed before being logically removed. (A minimal sketch of those prep steps follows this list.)
- Additional load on the CPU – RAID operations have to be calculated somewhere, and in software this runs on your CPU instead of dedicated hardware. I have yet to see a real-world performance degradation introduced by this, however.
- Slower performance than dedicated hardware – a high-end dedicated RAID card will match or outperform software.
- Servers won't come mirrored out of the box – you'll need to make sure this happens yourself.

I can certainly understand the argument of simple deployment and having a vendor to blame. But at the end of the day, if you lose your data then it's gone. And what about manageability after initial deployment? It seems to me that if you care about the integrity of your data and do not need ultra-intense IO performance, then software RAID is a good choice.
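For the prep-work point above, here is a minimal sketch of the "fail it, then remove it" sequence, assuming Linux md managed with mdadm; /dev/md0 and /dev/sdb1 are hypothetical placeholders for your own array and member device:

```python
#!/usr/bin/env python3
"""Minimal sketch: logically retire an md member disk before pulling it.

Assumes Linux md managed with mdadm; the default device names below are
hypothetical placeholders -- substitute your own array and member.
"""
import subprocess
import sys

def run(cmd):
    # Echo and execute a command, stopping on the first failure.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def retire_disk(array, member):
    # 1. Mark the member as faulty so md stops sending I/O to it.
    run(["mdadm", "--manage", array, "--fail", member])
    # 2. Remove it from the array; only now is it safe to pull the drive.
    run(["mdadm", "--manage", array, "--remove", member])

if __name__ == "__main__":
    array = sys.argv[1] if len(sys.argv) > 1 else "/dev/md0"
    member = sys.argv[2] if len(sys.argv) > 2 else "/dev/sdb1"
    retire_disk(array, member)
    print(f"{member} is out of {array}; it is now safe to pull the drive.")
```

Skip those two steps and you are in "yank a live disk" territory, which is exactly where I've seen the panics mentioned above.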
# Softraid vs hardware RAID: install
On the install and tooling side:

- Typically you need to incorporate the RAID build into the OS install process.
- You need to learn the software RAID tool set for each OS – these tools are often well documented, but not as quick to get off the ground as their hardware counterparts.
- Very flexible – software RAID allows you to reconfigure your arrays in ways that I have not found possible with hardware controllers. (See the reshape sketch after this list.)

Unless you are performing tremendous amounts of IO, the extra cost of a hardware controller just doesn't seem worthwhile.
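As an example of that flexibility, here is a minimal sketch of growing an existing array by one disk, assuming Linux md and mdadm; the device names and target device count are hypothetical, and you still have to resize the filesystem once the reshape finishes:

```python
#!/usr/bin/env python3
"""Minimal sketch: grow an existing md array by one member disk.

Assumes Linux md and mdadm; ARRAY, NEW_DISK and NEW_COUNT are hypothetical
placeholders. The reshape runs in the background, and the filesystem on top
still needs to be resized separately once it completes.
"""
import subprocess

ARRAY = "/dev/md0"      # hypothetical array device
NEW_DISK = "/dev/sdd1"  # hypothetical new member partition
NEW_COUNT = 4           # hypothetical resulting number of active devices

def run(cmd):
    # Echo and execute a command, stopping on the first failure.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Add the new partition as a spare, then reshape the array to use it.
    run(["mdadm", "--manage", ARRAY, "--add", NEW_DISK])
    run(["mdadm", "--grow", ARRAY, f"--raid-devices={NEW_COUNT}"])
    # The reshape continues in the background; watch /proc/mdstat for progress.
    print(f"Reshape of {ARRAY} started; monitor /proc/mdstat before resizing the filesystem.")
```

That kind of online reshape, done with the array still serving data, is the sort of reconfiguration I have not found possible with hardware controllers.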



I'm a sysadmin by trade, and as such I deal with RAID-enabled servers on a daily basis. Today a server with a hardware RAID controller reported (when I say reported, I actually mean lit a small red LED on the front of the machine) a bad disk, which is not uncommon. But when I replaced the failed disk with a shiny new one, suddenly both drives went red and the system crashed. Upon reboot the hardware controller said “sorry buddy, I don’t see any drives” and wouldn't boot. Luckily I had a similar system sitting idle, so I tested the same disks in that server and they worked just fine.
