Resize Nested LVM inside KVM Machines

This was supposed to be easy - extending logical volumes. But when the logical volumes are nested inside a virtual machine, it all becomes a mess. Search the web for how to extend a partition "nested" in an LV, and there are only questions and no answers.

KVM Disk Management Issues shows an alternative to the standard install - "just put a filesystem on it and you are done" - which basically means choosing manual partitioning during an Ubuntu install. Resize KVM Image shows another alternative, which involves deleting the swap partition inside the KVM guest so the root partition can be enlarged. That would be necessary when there is no nested LVM, i.e., when the partitions were created directly in the host's logical volume. And for a general introduction to LVM, see Logical Volume Management (IBM).

virt-manager makes it easy to install a virtual machine using an .iso image of the OS to install. It is not easy to resize storage on a KVM virtual machine installed following the standard instructions - make a logical volume on the host machine, let the virtual machine installer use it like a raw disk to create its partitions, and install the OS.

Well, the trouble is that the standard LVM resize procedures are no help here. This is what the picture looks like, where VG is the host machine's volume group:

Host Machine:

   vg:VG
    -- lv:kvm1
    -- lv:unused (lot of unused space on the volume group VG)

lv:kvm1 is the logical volume used to install the virtual machine. Let us say it is a Debian-based system, like Ubuntu. Using the default "Guided Partitioning", the installer will set up its own LVM on the disk, creating a nested volume group.

    -- lv:kvm1
       -- vg:vgvirt
          -- lv:vroot     --- this is the partition we want to extend
          -- lv:vswap

Now the vg:vgvirt volume group has used up all of the space in kvm1. And since it is nested, there is no easy way to extend lv:vroot using the host machine's lv:unused free physical extents. Free extents cannot be shared between nested volume groups.

There is one way to do it - make new physical volumes, feed them to the nested volume group, and then extend lv:vroot.

So the process looks like this. The steps may not be exactly what I did, but they should give a general idea of what to do. Note that this should be considered a dangerous process, so have a backup available or be ready to re-install the whole virtual machine if things go bust:

1: Shut down the virtual machine. virt-manager can be used to start/stop the virtual machines.

2: Extend the host logical volume

sudo lvextend --extents +250 /dev/vg/kvm1

This adds 250 extents, usually 4M each, for a total of roughly 1G of space. This assumes, of course, that the volume group vg has that much free space available; use

sudo vgdisplay -v

to get the details of the volume group, including the Free PE count.
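The extent arithmetic can be sanity-checked with a little shell. A minimal sketch - the vgdisplay line below is hypothetical sample output; on a real host you would pipe `sudo vgdisplay vg` in instead:

```shell
# Sketch: how many extents cover a target size, and how to pull the
# Free PE count out of vgdisplay output. The sample line is made up.
pe_size_mib=4                                # typical default PE size
target_mib=1000                              # roughly 1G
extents=$(( (target_mib + pe_size_mib - 1) / pe_size_mib ))
echo "need $extents extents"                 # need 250 extents

sample='  Free  PE / Size       2560 / 10.00 GiB'
free_pe=$(printf '%s\n' "$sample" | awk '/Free *PE/ {print $5}')
echo "free $free_pe extents"                 # free 2560 extents
```

If the needed count exceeds the Free PE count, the lvextend will fail, so it is worth checking first.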

3: Create a new partition in the kvm1 logical volume. Clearly, this cannot be done too many times - twice should be easy, more may require more complex partition editing. The Ubuntu install creates 1 primary, 1 extended, and 1 logical partition. So it leaves only 2 primary partitions free.

sudo fdisk /dev/vg/kvm1

The p command in fdisk shows the total number of cylinders, and because we resized the logical volume, that total will now be greater than the last cylinder used by the existing partitions. So we can create another partition in the new space.

Create a new partition, probably primary partition 3, and let fdisk use the defaults - since kvm1 was resized previously, the new partition will use all the space after the end of the existing partitions. fdisk asks for the first and last cylinders; the defaults are usually right if you want to use all the remaining space. Use the w command to write the new partition table. There may be some warnings, but usually things are just fine. Run the p command again to print the partition table and verify. The following steps assume partition 3 was created.

[Optional: It may not be necessary to run fdisk, pvcreate, and vgextend on the host machine at all. Once the outer logical volume has been enlarged, it is also possible to start the virtual machine and run fdisk, pvcreate, vgextend, etc. inside that machine. That may work better, and it avoids the use of kpartx. Working inside the virtual machine may require a reboot after the fdisk partition creation.]

4: Map the virtual machine's nested volume groups and partitions onto the host machine:

sudo kpartx -a /dev/vg/kvm1

This maps the partitions inside kvm1 as devices on the host. In this example, the new volume group and partitions will show up in the

sudo vgdisplay -v

command, as well as in the

ls /dev/mapper

output.

5: Create a new physical volume out of the partition we just created, and extend the nested volume group, which is now visible thanks to the kpartx command.

sudo pvcreate /dev/mapper/vg-kvm1p3
sudo vgextend vgvirt /dev/mapper/vg-kvm1p3

or if running this inside the virtual machine:
sudo pvcreate /dev/vda3
sudo vgextend vgvirt /dev/vda3

6: Remove the host mappings of the nested partitions (if kpartx was used to create the mappings):

sudo kpartx -d /dev/vg/kvm1

This command did print a

device-mapper: remove ioctl failed: Device or resource busy

warning, but it looks like it did remove the nested mappings from the host.

7: Now start up the virtual machine. The free space is now available to resize the nested logical volume. Resizing can be done on-line, on mounted partitions.

sudo lvextend --extents +250 /dev/vgvirt/vroot

sudo resize2fs /dev/vgvirt/vroot

Complete! The vroot partition in the nested machine is now larger. (This assumes the filesystem can be resized by resize2fs; it certainly supports ext3.)

This is definitely too painful to do more than once, so it seems much better to create a filesystem directly on kvm1 and install the virtual machine without creating any partitions inside the guest OS. That means having no swap partition, which should be fine today for many virtual machine uses. If swap is needed, a swap file is just as good as a swap partition, so swap can certainly be added after the fact.
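Adding that after-the-fact swap file inside the guest needs no repartitioning at all. A minimal sketch - the path /swapfile and the 1G size are arbitrary examples:

```shell
# Sketch: swap via a file instead of a partition inside the guest.
# /swapfile is an example path, 1G an example size.
sudo fallocate -l 1G /swapfile      # or: dd if=/dev/zero of=/swapfile bs=1M count=1024
sudo chmod 600 /swapfile            # swap files must not be world-readable
sudo mkswap /swapfile               # write the swap signature
sudo swapon /swapfile               # enable it immediately
# make it permanent across reboots:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```

`swapon -s` (or `cat /proc/swaps`) should then list the file.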

Caveats: after resizing, the first reboot of the virtual machine reported an fsck error, but it was able to fix it and continue booting. It seemed to have no other bad effect - maybe just some metadata was updated - and now everything seems ok.

Cleanup: after two test iterations and deleting partitions, the host LVM got a bit confused - it keeps entries in /etc/lvm/cache/.cache for LVM devices that no longer exist. So vgdisplay -v printed warnings about devices not found, and it was not possible to vgremove unused volumes. Maybe a reboot would fix this, but there is an easy alternative using dmsetup.
sudo dmsetup info
will show all devices. If this list shows volumes that have been removed and should not be active, run
sudo dmsetup remove vg-kvm1p1
and so on, using the names shown in the dmsetup info output. After that, vgdisplay -v should be clean, and /dev/mapper should no longer list the removed devices.

Comments

pvresize

Hello

As an alternative would it be possible to use pvresize on the virtual disk?
So steps would be lvextend the volume /dev/vg/kvm1 on the host, pvresize the virtual physical disk from the guest. The space would then be available for the vg on the guest (maybe only after guest reboot).

Regards.

pvresize might work

It should work - instead of pvcreate, use pvresize. That would replace parts of steps 3-5 above. This is best done with the virtual machine shut down, and with all the other caveats in mind.
Not sure if there are any other implications, but unless pvresize has some restrictions, it should be possible to use it instead of pvcreate.

additional lv on host - additional virtual disk in vm

hi, why don't you simply create another logical volume on your host machine like so:

vg:VG
-- lv:kvm1
-- lv:kvm2
-- lv:unused (lot of unused space on the volume group VG)

and then assign this new logical volume to the virtual machine as an additional virtual disk (e.g. in virt-manager). you can even do that while the virtual machine is still running.

then you log in to your virtual machine and you simply add the new virtual disk to your volume group. after that the usual steps apply to enlarge a logical volume.

have you thought about that or did you have any reasons not to do so?

cheers

Process is when nested volume is not a volume-group

Sure, that may work - but the default installs described in the second paragraph of the original post do not create a nested volume group. For such cases, alternative methods have to be followed.

virt-resize

Just like to say that we [the same team at Red Hat who wrote libvirt and virt-manager] wrote a tool called virt-resize which automates this process now. You choose which partition you want to extend, and it will extend the partition and the PV it contains to the desired size. You then have to just do an 'lvresize' after the virtual machine has been rebooted.

lvrename /dev/vg/Guest /dev/vg/Guest_backup
lvcreate -L newsize -n Guest vg
virt-resize /dev/vg/Guest_backup /dev/vg/Guest --extend /dev/sda2

It can also be used on non-LVM Linux guests and Windows guests.

I exactly encountered the

I encountered exactly the same issue and did more or less the same steps, except that I ran into problems resizing the root partition's filesystem afterwards - it just didn't do it. Also, when I thought more about it: even though there may be various ways to achieve this nested LVM in guests, we lose all the flexibility that the raw partition/filesystem tools provide. Currently most tools can't manipulate LVM volumes the way they can other filesystem types like ext3.
I was able to see the 'inside' partitions from fdisk/parted, including the LVM volumes, but the host LVM itself couldn't manage the nested volumes. So unless the LVM tools are able to do it, such operations are indeed dangerous!