VG allocation policy "normal" but prefer unused PVs? #175

@briend

Description

We've noticed that the "normal" allocation policy is sensible in almost every way, except that it tends to place every LV it can (contiguously) fit on the first PV in the VG, leading to an I/O imbalance. We would much prefer that new LVs be spread across different PVs automatically; perhaps onto the PV with the most free space, or the PV with the fewest LVs already on it.

Is there any interest in a new allocation policy like that? Here's an example of someone else having this problem: https://unix.stackexchange.com/questions/670584/balancing-lvm-virtual-group-between-disks

The workaround we are considering is to check pv_free and/or lv_count (pvs -o pv_name,lv_count,pv_free) before creating each LV, and to pass the preferred PV(s) explicitly to the lvcreate command. We don't really want to stripe data or manually run pvmove to correct the problem after the fact if we can avoid most of these issues (balancing and isolating I/O) at allocation time.
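Something along these lines is what we have in mind; a rough sketch only, where the VG name, LV name, and size are placeholders, and it assumes an LVM version whose pvs supports -S/--select:

```bash
#!/usr/bin/env bash
# Sketch: pick the PV in the VG with the most free space,
# then name it explicitly on lvcreate so the new LV lands there.
set -euo pipefail

VG="vg0"       # placeholder VG name
LV="lv_new"    # placeholder LV name
SIZE="10G"     # placeholder LV size

# List the VG's PVs with free space in bytes (no headings, no unit suffix),
# sort by free space descending, and take the top one.
TARGET_PV=$(pvs --noheadings --units b --nosuffix -o pv_name,pv_free \
                -S vgname="$VG" \
            | sort -k2 -n -r \
            | awk 'NR==1 {print $1}')

echo "Creating $LV on $TARGET_PV"
# Listing a PV after the VG restricts allocation of this LV to that PV.
lvcreate -n "$LV" -L "$SIZE" "$VG" "$TARGET_PV"
```

The same loop could sort on lv_count instead of pv_free if spreading the number of LVs matters more than balancing free space.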
