I have a software RAID 5 array partitioned using LVM (set up during CentOS 5.2 installation). The array currently consists of 3 320 GB HDDs. I would like to replace them with 3 2 TB HDDs, but I'm worried ... [by Kirys]
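Not part of the original post, but the usual one-disk-at-a-time replacement for an mdadm array with an LVM PV on top can be sketched as follows. All device and array names (`/dev/md0`, `/dev/sda1`, `/dev/sdd1`) are assumptions for illustration, and `DRY_RUN=echo` makes the script print the commands instead of running them, so the procedure can be reviewed before touching a live array:

```shell
#!/bin/sh
# Sketch: replace mdadm RAID 5 members with larger disks, then grow.
# Device names are assumptions, not taken from the post -- substitute yours.
set -eu

MD=/dev/md0      # assumed md array device
OLD=/dev/sda1    # assumed old 320 GB member
NEW=/dev/sdd1    # assumed new 2 TB member

DRY_RUN="${DRY_RUN:-echo}"   # default: only print the commands

$DRY_RUN mdadm "$MD" --fail "$OLD" --remove "$OLD"  # drop one old member
$DRY_RUN mdadm "$MD" --add "$NEW"                   # rebuild onto the new disk
# ...repeat for each disk, waiting for /proc/mdstat to show a clean resync...
$DRY_RUN mdadm --grow "$MD" --size=max              # use the larger capacity
$DRY_RUN pvresize "$MD"                             # let the LVM PV see the new space
```

The key point is to replace and resync one disk at a time; only after all three members are the larger disks does `--grow` (and then `pvresize` plus an `lvextend`/filesystem resize) expose the extra space.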
I currently have openSUSE 11.0 installed on a software dmraid RAID 5 volume. Today I tried to install openSUSE 11.2 instead, but I got stuck at the partitioning step. I have a /home partition that is also a software dmraid RAID 5 volume, and I want that partition to be left untouched.
Has anyone tried installing FC 18 or 19 (new Anaconda) on an existing RAID 5 system? (I have installed Debian in software RAID 5 as md0.) Fedora 17 installs RAID 5 as md1 without a hitch (/boot on a separate partition, of course). If you have, let me know the steps you took using the new Anaconda. Otherwise I'm just upgrading to the 19 beta from 18, which was an FC 17 upgrade (using Fedora.
I just upgraded Ubuntu from 12.04 to 12.10, and now it won't boot. My system has one HDD (where Ubuntu is installed) and 4 1.5 TB SATA drives in a software RAID 5 configuration. During the upgrade it said something about RAID. I think it is trying to boot from the RAID. Is there anything I can do to fix this problem?
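Not part of the original post, but a common recovery sketch for this symptom is to boot a live/rescue environment, reassemble the array, and regenerate mdadm.conf, the initramfs, and the GRUB config so the boot chain stops looking for the wrong device. Paths are standard Ubuntu locations; `DRY_RUN=echo` prints the commands instead of running them:

```shell
#!/bin/sh
# Sketch of a post-upgrade RAID/boot repair from a rescue shell.
# Review each step before running on a real system.
set -eu

DRY_RUN="${DRY_RUN:-echo}"   # default: only print the commands

$DRY_RUN mdadm --assemble --scan                             # bring up the RAID 5 array
$DRY_RUN sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'  # record the array
$DRY_RUN update-initramfs -u                                 # rebuild initramfs with RAID info
$DRY_RUN update-grub                                         # regenerate the boot menu
```

If the root filesystem is on the single HDD (as described), the steps above would typically be run after chrooting into the installed system from the rescue environment.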
I have 6 1 TB drives in a RAID 5 array. One drive went down. On the RAID were 2 virtual machines that I really need back up and running. The spare drive I have to put in the server is a 1.5 TB drive, which exceeds the physical per-drive limit of the 2020SA. The drive shows up in the disk utility, but it is not found in the array management section, so I cannot add it to the array to have it rebuild.