To be a bit more specific: I'm doing this in VirtualBox, with a VM that has one SATA controller and two virtual disks of the same size added to that controller. So I'm using the... [by jefkebazaar]
I have a ProLiant ML350 G8 with two SAS RAID arrays currently set up - thereby maxing out the default P420i RAID controller. I need to set up a large video dump space in addition to this existing set-up (for non-backed-up, non-critical, temporary storage).
I had planned to just add a 2TB SATA disk and plug it into the motherboard.
I have a 64-bit system which has both an Adaptec 39160 SCSI PCI controller and a LSI SAS PCI-express controller - there are two SCSI disks (/dev/sda and /dev/sdb) on the Adaptec controller and four on the SAS controller (/dev/sdc, /dev/sdd, /dev/sde and /dev/sdf).
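Since both controllers enumerate into the same /dev/sdX namespace, the kernel's probe order decides which controller gets sda. One quick way to confirm which disk hangs off which controller is the PCI path encoded in /dev/disk/by-path (a sketch; exact link names vary by distribution and kernel):

```shell
# Each symlink name in /dev/disk/by-path embeds the PCI address of the
# controller the disk is attached to, so grouping by prefix separates
# the Adaptec disks from the LSI ones.
for link in /dev/disk/by-path/*; do
    [ -e "$link" ] || continue      # glob may not match on some systems
    printf '%s -> %s\n' "$link" "$(readlink -f "$link")"
done
```

Links for the two Adaptec disks will share one PCI prefix and the four LSI disks another, regardless of how the kernel ordered sda through sdf.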
For some unusual reason the disks in my RAID 5 config have started failing. They haven't all failed at once - if they had, I would suspect the RAID controller as the cause. I noticed the other day that two or three disks had an orange light.
This prompted me to take some drastic action because I wanted to upgrade the disk sizes anyway.
I was swapping the disk drives in my server, which runs RAID 6 across 5 disks on a 3ware 9650SE controller, for newer ones, and have run into a problem.
The first two disks went fine: I took out one disk, inserted a new one, let the array rebuild, and repeated for disk 2.
When booting after swapping the third disk...
I stumbled across a little problem yesterday. I tried to build my own NAS solution, but that didn't work out so well.
So far I have been using separate drives in a Windows share to stream to my HTPC running XBMC downstairs.
Now I want to run my server on Ubuntu and make it one large pool (4x 3TB data and 2x 2TB parity).
The only problem is that my motherboard has only 4 SATA connectors.
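A layout with dedicated parity disks like this maps naturally onto SnapRAID (my assumption - the post doesn't name a tool). A minimal sketch of its configuration, with hypothetical mount points:

```
# /etc/snapraid.conf (sketch; all mount points are hypothetical)
# Two 2TB parity disks
parity /mnt/parity1/snapraid.parity
2-parity /mnt/parity2/snapraid.2-parity
# Content files track the state of the array; keep several copies
content /var/snapraid.content
content /mnt/data1/snapraid.content
# Four 3TB data disks
disk d1 /mnt/data1/
disk d2 /mnt/data2/
disk d3 /mnt/data3/
disk d4 /mnt/data4/
```

Note that six disks still need six ports, so with only four motherboard SATA connectors a cheap SATA HBA (or similar) would be needed regardless of the pooling software chosen.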