nV News Forums


nV News Forums (http://www.nvnews.net/vbulletin/index.php)
-   General Linux (http://www.nvnews.net/vbulletin/forumdisplay.php?f=27)
-   -   Dual Booting With Linux in Raid (http://www.nvnews.net/vbulletin/showthread.php?t=62467)

seeker 12-31-05 01:15 AM

Dual Booting With Linux in Raid
Some new hardware of mine is throwing me for a loop. I have 2 80GB SATAs set in a striped array (0), giving a usable capacity of 152.66GB. I divided the array into 2 partitions, therefore XP shows these partitions as being about 1/2 the total capacity as they should. with XP on the first partition, and the remainding space as unused space. The problem comes when I attempt to install Linux, because Debian shows 2 drives, not partitions and the capacity of each at 82GB, which is too high, it also shows an NTFS drive of the same size below the other 2 with a zig zag arrow pointing down. I decided to check what the SuSe installation would show, it shows both of the drives, but XP as only being on SDB1 and no other freespace or partions anywhere. I should also mention that when I plugin the IDE drive, Linux only sees 1 partion on it also, despite the fact that there are 2. A friend of mine seemed to think that the problem might be due to an incompatability between Linux and the SATA raid controller on the motherboard. My old motherboard, which had a Via chipset, instead of the nForce chipset on my new mobo, did not have this problem. That same friend suggested buying a separate PCI controller, which I will do if I must, but it seems that surely this motherboard aught to be able to do the job itself. Is there a way to approach this problem that I'm not thinking of? Would it make a difference if I used another type of raid?
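For what it's worth, part of the size confusion is just decimal vs. binary units: drive makers count a GB as 10^9 bytes, while most partitioning tools count 2^30 bytes. A quick sanity check, using the 82GB-per-member figure Debian reported (the exact bytes per drive are an assumption, so real numbers will differ slightly):

```shell
# Two striped 82GB (decimal) members, reported in binary gigabytes:
bytes=$((2 * 82 * 1000000000))
gib=$(awk -v b="$bytes" 'BEGIN { printf "%.2f", b / (1024 * 1024 * 1024) }')
echo "usable ~ ${gib}GB"
```

That lands right around the ~152GB the array reports, so nothing is actually missing.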

seeker 12-31-05 01:45 AM

Re: Dual Booting With Linux in Raid
Thread subscription.

PeaceFaker 12-31-05 06:28 AM

Re: Dual Booting With Linux in Raid
My guess is that the drivers included in the installation kernel do not support the RAID your motherboard supplies. Motherboard RAID is in many cases a semi-software RAID that depends on Windows drivers for correct operation. Using it as a plain additional controller should not be a problem, but the RAID functionality might be.
I don't know which RAID controller you have, nor the status of its Linux support.

seeker 12-31-05 06:54 AM

Re: Dual Booting With Linux in Raid
It probably won't tell you much, but the RAID is nForce on a Gigabyte K8 Triton (K8NSC 939). From what I have heard everyone say about this in the general sense, I thought it would be more advanced than my old Via mobo... that's why I bought it. I'm starting to think about putting the old board back in.

fhj52 01-02-06 01:28 AM

Re: Dual Booting With Linux in Raid
According to nVidia, who can certainly chirp in here if they want, there are no installable drivers for Linux because direct support for NV RAID has been in the kernel for "quite some time", which means, IIRC, kernels greater than 2.6.9-12. ANY distro using ANY kernel version less than that has NO support whatsoever for what I have renamed HARM (see sig).

IF the Linux distribution uses a recent version of the Linux kernel, e.g., 2.6.9-22, it can support the HARM via dmraid (not md, as most Linux RAIDs use) for the RAID modes that Linux can support: 0, 1, 5 & 10 (I don't know about JBOD...).
The level of that support is a complete unknown, as NVIDIA has not published any document that I can find describing how the RAID support works in Linux. HARM is a FAKE RAID, but because NVIDIA HIDES THAT FACT, it is unknown how much they do or do not do to assist the Microsoft OS in running RAID 0, 1, 1+0, JBOD or, on some *new* mobos, RAID 5. RAID 5 support was only recently added; older mobos with the nForce platform might get support through a BIOS upgrade, or might not. ASK nVidia or the mobo manufacturer directly about that. At best, I think the HARM performs parity calculations on the chipset, thereby offloading some of the more CPU-intensive memory work. Whether that is of any real-life benefit is also UNKNOWN, because there are NO independent tests that I have seen indicating that using the HARM as a HW+SW RAID actually increases quality (i.e., accurate & safe) throughput, which is the real bottom line.

However, the problem you have described, as best I could determine from what you wrote, is a matter of the distribution supporting dmraid (and thereby HARM) during the install. NONE do that, with the possible exception of Gentoo, and it is only mentioned because nobody seems to know whether one can get dmraid support during a Gentoo install or not, but there is a web page claiming it can be done, with instructions on that page. I have never tried it, nor do I know of anyone who has claimed to have done it other than the author of that article, of course. I am sure s/he would enjoy feedback about the success/failure from anyone attempting it.

The problem boils down to support for dmraid during the install of the distribution. Mandrakelinux was working on that last year, but I have no idea if the renamed company, Mandriva, continued the effort. The reason dmraid is needed is that it supports what is known as "foreign" RAID, i.e., HW RAIDs as well as the vendor SW RAIDs used by various companies. The md driver cannot do that. The md driver is what virtually all distributions have used to set up and manage RAID for many years; Linux SW RAID has been around for quite a while. The dmraid support is, in comparison to md, relatively new.
Support for dmraid requires changes to the way the boot process is done. That is not so easy to implement, but it can be done. At this time, the best we can do is test what is available so that it can be implemented universally in a streamlined manner. Some time back (>1 yr) I saw a post that someone was working on the implementation for Fedora also. I dunno what happened with it...
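To make that concrete, here is a rough sketch of what using dmraid looks like from a shell on a recent kernel (a rescue or live CD, say). The flags are standard dmraid ones, but whether anything shows up depends entirely on kernel and dmraid support for your controller:

```shell
# Sketch only: dmraid reads the vendor ("foreign") metadata that md cannot.
if command -v dmraid >/dev/null 2>&1; then
    dmraid -r  2>/dev/null  # list member disks and the metadata format found
    dmraid -s  2>/dev/null  # summarize the discovered RAID sets
    dmraid -ay 2>/dev/null  # activate sets; device nodes appear in /dev/mapper
    ls /dev/mapper/         # e.g. an nvidia_* set plus one node per partition
    status="dmraid-present"
else
    status="dmraid-missing"
fi
echo "$status"
```

The catch, as described above, is that the installer itself has to do the equivalent of this before partitioning, which is exactly what almost no distro did at the time.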

I don't have the link handy, but if you search this forum there is a link to the web page describing how Gentoo gets dmraid support during the install. Of course Google should work too...
You can check at Man* cooker or in their Bugzilla for the status of dmraid support. The only Debian that *might* have support would be the *buntu Debs (Ubuntu or Kubuntu), because they use relatively recent kernels; my personal experience indicates they do not. CentOS 4.2, which is ~RHEL 4.2, might, but I did not see it. I could report on that, but while trying to get the MS support working, my mobo somewhat mysteriously died: it failed to POST after a shutdown.
I have no idea what caused it. I won't have a new one from RMA until next week...

Based upon the last couple of weeks, I would say that you will get ZERO support from nVidia on this subject. Perhaps that was because of the holidays, but after reading other much-too-brief replies to "Platform" problems, I think not. I think it is standard nVidia policy to give Linux users the shaft as much as possible through obfuscation and ignorance. You will have better luck/answers from other non-nv forums, especially those designated for md and dmraid support.

If my mobo continues to have problems I will be exchanging it for a NON-NV platform and, by the Grace of God Almighty, be thankful that I got out from under the NV thumb by never buying another mobo with NV as the base chipset again. Foolish me thought NV would have support for Linux because of their graphics support. I was an even bigger D! fool for believing that the platform implemented a HW RAID solution, which it does not.


seeker 01-02-06 02:50 AM

Re: Dual Booting With Linux in Raid

It certainly sounds as though you are the person I needed to hear from. You have already been around the block, and I haven't even gotten out the front door. I'm still digesting what you have said, so I'm not certain what I will do yet. I do still have time to return the mobo to the store, but it does have some features that I like, and it seems to work well with XP. I'm still working on some unrelated problems, but if it turns out that any of these are due to the mobo, I probably will return it. It caught my eye where you mentioned having a motherboard die, because I have had similar problems lately; that is why I decided to try this mobo. As it turned out, I believe the real problem was simply some bad wiring on the power supply, but it would sometimes work and sometimes not, which caused me a lot of confusion about what was happening.

The only problem with getting rid of your nForce board is that the only option I know anything about is a Via motherboard, and those have a set of problems of their own. I have yet to figure out how to set up the system on this mobo, because from what I have read, it appears it may have a problem that has plagued me with my MSI Neo2-F board. The Via system wants to use any IDE drive as a hot spare, which might be a nice idea, but what I want is for the SATAs and PATAs to function separately. The only way I can do that is to alternate which controller is turned on or off. If both are on at the same time, it leads to a lot of potential problems.

I do have a copy of CentOS 4.2, as well as a couple of others, so I will experiment with each of them to see if any can see the drives and partitions properly.

If all else fails, I almost have enough parts to build a second tower, which I could dedicate to Linux. The big problem is that it would complicate connecting the peripherals between them.

If you learn anything else along these lines, I would be all ears.

seeker 01-02-06 03:55 AM

Re: Dual Booting With Linux in Raid
Hmm, I have now tried SuSE 10.0, Xandros 2.0.1 OCE, Kubuntu 5.04 and CentOS 4.2, in addition to the other two that I tried previously. All of them come up with the same results: they see the drives, but no partitions. On sda0 it sees no data, and on sda1 it sees the NTFS file system, but as though it were using the entire drive instead of half of it.

At this point, I feel like jumping on your boat: pulling the motherboard and returning to Via. I don't like Via, but at least I know how to deal with it.

seeker 01-02-06 04:08 AM

Re: Dual Booting With Linux in Raid
This raises one more question. I have never had anything to do with one before, but would a hardware SATA RAID PCI card solve this problem... without destroying my XP installation? I'm not sure how this nVidia RAID system works, but on my Via system I could dismantle the RAID array and still have XP operational on one drive. If nVidia RAID would do the same, then I might be able to add a separate PCI controller both to reassemble the XP RAID and to install Linux RAID. Any ideas about this?

seeker 01-03-06 10:09 PM

Re: Dual Booting With Linux in Raid
A portion of the problem is solved. After resetting the IDE jumper to slave and reordering the boot sequence, SuSE 10.0 now sees the partitions on the IDE, however, not on the SATAs. I suppose I could unplug the SATAs, reformat, and install Linux on the IDE, but that would leave me with a possibly complex switching requirement to go between systems. I'm still reluctant to experiment with the SATAs, because SuSE pops up a warning during setup that it detected the software RAID and that it is only sometimes successful with these. It did recommend either Highpoint or Promise controllers, but I guess that's a bit late to help me in selecting the motherboard.

BTW, the SuSE 10.0 kernel is 2.6.13-15.

kenyee 01-04-06 02:16 PM

Re: Dual Booting With Linux in Raid

Originally Posted by seeker
would a hardware SATA Raid PCI card solve this problem...without destroying my XP installation?

Maybe, but you'd have to add your array one drive at a time. Most RAID controllers "tag" each drive (in a boot sector) w/ some unique ID that says "this is an Adaptec RAID drive in RAID5", "this is an nVidia RAID drive in RAID1", etc. Your RAID array won't work as a RAID array until you tag the drives w/ your RAID controller, and RAID controllers also let you choose the stripe/block size.
You might get lucky and find a RAID controller that'll accept your RAID0 drives w/o reformatting them for that controller...
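As a footnote to the "tag" point: on Linux you can peek at that metadata yourself. Several fake-RAID formats keep it in reserved sectors near the end of each member disk rather than in the boot sector proper, and `dmraid -r` is the proper tool for identifying the format. A hedged sketch for eyeballing the raw bytes, where /dev/sda is my assumption for a member disk and root access is required:

```shell
# Sketch only: dump the last two sectors, where several fake-RAID formats
# store their signature/ID block.
disk=/dev/sda   # assumption: substitute one of your array member disks
if [ -r "$disk" ] && command -v blockdev >/dev/null 2>&1; then
    sectors=$(blockdev --getsz "$disk")   # device size in 512-byte sectors
    dd if="$disk" bs=512 skip=$((sectors - 2)) count=2 2>/dev/null \
        | od -A x -t x1z | head
    status="dumped"
else
    status="skipped"
fi
echo "$status"
```

If the tag is there, a vendor string is usually visible in the printable column of the dump.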

fhj52 01-04-06 03:09 PM

Re: Dual Booting With Linux in Raid
The Areca ARC-1210 is a SATA II PCIe *complete* hardware RAID solution that is compatible with *nix & MS Win*, so it should resolve any dual-system issue. The price is slightly less than 400USD to a USA door for the 4-port SATA model, and ~800USD for the 12-port (ARC-1230). There is an 8-port (the 1220), but I don't know the price. The ARC needs a PCIe x8 slot for full speed, but will fit into a PCIe x16 slot as well as an open-ended PCIe x4 slot and run at whatever is available from it. Throughput is reported as >155MB/s using x8 with SATA150s.

It has been receiving extremely high reviews, but after all, it is 400USD, so it should. The thing is that many others cost as much or more but do not have the performance, so it is good to find one that does.

There are AMD 8111 & 8132 solutions for mobos. AMD had those solutions before (I think) the nForce4 with the SLI solutions. There are the Crossfire solutions too.
I am no fan of VIA, as they have a long hit-&-miss history, so I would probably not go that direction.
I am hoping that the new board will not have problems.
I don't have much faith there, because I have since found that there are problems with the NV audio drivers and/or mixer too. (The drivers caused instant reboot(s) when using the mixer...) In fact, it might be possible that that is what caused my system to have a SID... don't know yet.

Since SATA drives do not have Master/Slave relationships, it becomes a little more difficult for the BIOS or OS to recognize which is which. I am not sure why, exactly, but it is obvious that it is true.
I set up a RAID 5 during one of the first (Linux) installs using the four SATA drives (3 + 1 spare). During the subsequent (Linux) installations, I found that the SATA drives were recognized as software RAID with ext3 filesystems, but the RAID volume type was undefined. It was somewhere about that point that I started focusing on getting the NV RAID done and, well, the rest is history, so to speak.

I do not think XP can be saved. It is, after all, a MS product. :D
But, seriously, I think you should try to migrate the current installation over to a new installation, because I do not think you could set up a new RAID that includes the current drive with XP on it. You need to ask someone with more hands-on RAID experience, but when I last checked on trying to do that (a year or so ago), I found that there is a method, but it is risky and quite difficult. In summary, it was not worth it.

I am glad you have made some progress! Although I am sure it is not as much, or of the quality, you desired.
I am not sure what SuSE means by their warning:

Originally Posted by seeker
I'm still reluctant to experiment with the SATAs, because SuSE pops up a warning during setup that it detected the software RAID and that it is only sometimes successful with these.

I do know that sometimes such warnings are there to cover their backsides in the event something fails and the end user wants to blame XYZ for it. It might be similar to the warning one gets prior to opening Disk Druid or a similar product to modify the partitioning. While it is true, it is basically just a reminder that you had better be very sure of what you are doing and how to do it, as well as have everything backed up to a separate location which cannot be affected by the changes (the last part being essential). It might be worth the time to look for info about what the warning really means.

It sounds like you have another system. Have you considered using a KVM switch to share the monitor, mouse and keyboard? With a crossover patch cord you can connect the two systems together directly, or use a standard LAN patch cord and do it with a server/client relationship. It does have the advantage of not requiring reboots to use the other OS...
I cannot do that at this time... power supply problems (house power) won't be resolved for a while.
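In case it helps, the direct-cable setup is just static addresses on both ends; a minimal sketch, where the subnet and interface name are my assumptions:

```shell
# Sketch only: run the matching line as root on each box. With a crossover
# cable no hub or switch is needed between the two NICs.
box_a=192.168.10.1
box_b=192.168.10.2
netmask=255.255.255.0
# on box A:  ifconfig eth0 "$box_a" netmask "$netmask" up
# on box B:  ifconfig eth0 "$box_b" netmask "$netmask" up
# then verify from box A:  ping -c 3 "$box_b"
echo "direct link plan: $box_a <-> $box_b"
```

Once the ping works, anything client/server (NFS, SSH, Samba) runs over the same link.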

Hope progress continues for you. I have to work on other things and, besides, I don't have the mobo here...


seeker 01-04-06 04:02 PM

Re: Dual Booting With Linux in Raid
I seem to be running in circles on this system, and I don't know what the problem is. Last night, on rebooting, XP started flashing what looked like a BSOD and then rebooted again. I couldn't seem to resolve it, so I decided to install XP on the IDE drive. But the installer had a wild hair and decided to format the SATAs instead, so I lost everything. Not deterred, I tried again, and this time it formatted the drive it was supposed to, so I installed XP and afterward tried to install SuSE 10.0 on the 2nd partition of the IDE, since it seemed that the IDE was the only drive it could see properly. That install appeared normal, but upon rebooting after the first part of the installation, it couldn't proceed. I guess I will format it and try again, maybe with a different distro, if any of them can see the drive. What is even stranger is that when I did attempt to install XP afterward on the SATAs, the installer could not see those drives. That may be because Windows did not get rid of everything on them as I thought, because during the SuSE install it did see them, with SDB1 containing NTFS. I deleted that and haven't had time to see what happens now. If all of this sounds confusing, I totally agree. I have never considered myself an expert on RAID systems, but I have never had problems like these.

What I need is a detailed, step-by-step tutorial specifically for my mobo. The manual's instructions are quite barebones.

All times are GMT -5. The time now is 06:09 PM.

Powered by vBulletin® Version 3.7.1
Copyright ©2000 - 2014, Jelsoft Enterprises Ltd.
Copyright 1998 - 2014, nV News.