I just built a new system using an ASUS P5N32-SLI SE Deluxe mobo, which uses the nForce 4 chipset. The board has 5 internal SATA connectors, 4 off the nVidia chipset and 1 off an additional SiI chipset. It also has the usual two IDE connectors.
I want to move my RAID5 storage array into this machine. It is currently in another system, connected to a Promise SATAII 150 TX4 PCI controller card. I see several possible ways to arrange the drives in the new machine, and would like to see if anyone here has suggestions on what would be a better/worse way to do it. I will have a separate OS/boot drive, so there would be (at least) 5 drives.
- It appears the mobo will only boot off the first SATA connection, so I could hang the OS drive there, and the RAID off the remaining internal connectors. This means my fourth RAID drive will be on the SiI controller. While I'm sure it would work fine, I'm wondering about performance...
- I could use an IDE drive for the OS, which lets me put all four RAID drives on the nVidia chipset SATA connectors. I'm not overly fond of this idea, as I'm assuming that the IDE drive will be slower than a SATA one... Perhaps that's not too significant though?
- I could leave the RAID array on the PCI card, and just plug it in as well. I'm (again!) assuming the PCI add-in card would be slower than the onboard chipsets though.
Any other options that might work? Does one of these sound like the best approach?
I don't want to use the onboard hardware RAID (which would apparently require booting off an IDE drive anyway to keep the boot drive separate), as I'm quite comfortable with the Linux MD setup and don't want to be so tied to specific hardware.
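For what it's worth, my understanding is that MD identifies array members by their on-disk superblocks rather than by controller or port order, so the move itself should be straightforward regardless of which option I pick. Something like the following is what I have in mind (a sketch only; the device names and array name are placeholders and will differ on the new board):

```shell
# After physically moving the drives, see which devices carry MD
# superblocks (device names will almost certainly have changed):
mdadm --examine --scan

# Assemble the array from its superblocks, independent of new device names:
mdadm --assemble --scan

# Or assemble explicitly and record the result for future boots
# (/dev/md0 and /dev/sd[b-e]1 are hypothetical examples):
# mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
# mdadm --detail --scan >> /etc/mdadm.conf
```

If that's wrong, or there are gotchas when the members end up split across two different controllers, I'd like to hear about that too.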