[PLUG] filesystems question
Ben Koenig
techkoenig at protonmail.com
Fri May 13 19:53:36 UTC 2022
------- Original Message -------
On Friday, May 13th, 2022 at 12:22 PM, wes <plug at the-wes.com> wrote:
> On Fri, May 13, 2022 at 12:07 PM Robert Citek <robert.citek at gmail.com> wrote:
>
> > In contrast, if we parse the same information for md1, we see that it is
> > also made up of 5 devices, sdb2-sdf2, but of varying sizes:
> >
> > The RAID makes sense. The smallest partition size is 228.2 GB, and 684.1
> > / 228.2 = 3, which is what you would expect for a five-drive RAID6 setup
> > (five drives minus two for parity). But the partitioning seems odd, given
> > that the drives are the same size, except sdc:
> >
> > The second partition takes up the remainder of the drive only on drives
> > sdc and sde. On the other drives the second partition is half the disk or
> > less, with no other partition using up the remaining space. Is that
> > partitioning scheme intentional?
>
> I suspect this is a result of many years of different sysadmins replacing
> drives as they failed. We probably had the idea of eventually increasing
> the array's storage size once all the smaller drives were replaced with
> larger ones.
>
> -wes
People love to try that and it never works the way anyone expects...
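For what it's worth, the procedure people usually have in mind there, once every member has been replaced with a larger drive and its partition enlarged, is roughly the following. Treat it as a sketch, since the exact steps depend on what sits on top of the array:

    # let each member use the full size of its (now larger) partition
    mdadm --grow /dev/md1 --size=max
    # then grow whatever is layered on top, e.g. if md1 is an LVM physical volume:
    pvresize /dev/md1

The catch is that every partition has to be enlarged first, which usually means failing, repartitioning, and re-adding one drive at a time, and that's where these plans tend to stall.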
Is each drive really spread across 2 different arrays? The way I'm reading it, each drive has 2 partitions: one associated with md0 and the other with md1. My guess is that md1 was part of an LVM group, and if you can figure out where the other piece is, you can stitch it back together.
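If that guess is right, something like this should show whether md1 carries LVM metadata and which volume group the other piece belongs to (the VG and LV names are whatever the original admin chose, so the output is the interesting part):

    # does the array carry LVM metadata?
    blkid /dev/md1
    # list physical volumes and volume groups; a 'p' in the VG attrs means a PV is missing
    pvs
    vgs
    # if the whole group turns out to be present, activate it and look for the logical volumes
    vgchange -ay
    lvs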
As far as data integrity goes, it's probably fine as long as the RAID was in a degraded state when you triggered the rebuild. What doesn't make sense is that the act of rebuilding caused it to become inaccessible; 2 disk failures on a RAID6 should be fine. Do you know what the array state was before you put the new drives in?
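If the machine is still up, the current state and the per-member metadata, including the event counters that hint at what each disk thought was happening before the rebuild, should be visible with something like:

    cat /proc/mdstat
    mdadm --detail /dev/md1
    # per-member superblocks for the md1 components mentioned earlier (sdb2-sdf2)
    mdadm --examine /dev/sd[b-f]2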
-Ben