For Proxmox with the ZFS filesystem, we can use ZFS to migrate a VM to another Proxmox host with minimal downtime, as long as the target also uses ZFS. You can read more about ZFS on this link.

With Pipe Viewer (pv) we can also limit the bandwidth used when performing the migration with zfs send and receive.

Here is how to do it.

  • Configure key-based (passwordless) SSH from the source node to the target node
  • Check the VM disk in ZFS
    #zfs list
  • Make sure pv is installed
    #sudo apt-get install pv
  • Snapshot the VM disk for initial data
    #zfs snap [pool/data/vm-id-disk-1]@snapshot-name
  • Send the initial ZFS data to the target Proxmox host. In this example we limit the transfer bandwidth to 25 MB/s
    #zfs send -vc [pool/data/vm-202-disk-1]@snapshot-name | pv -q -L 25M | ssh node-target zfs recv -s [pool/data/vm-202-disk-1]@snapshot-name
  • Shut down the VM so that no more data changes occur, then create a second snapshot of the VM disk for the final migration
    #zfs snap [pool/data/vm-202-disk-1]@snapshot-name2
  • Send the incremental stream between the initial and the second snapshot to the target node.
    #zfs send -i [pool/data/vm-202-disk-1]@snapshot-name [pool/data/vm-202-disk-1]@snapshot-name2 | pv -q -L 25M | ssh node-target zfs recv -s [pool/data/vm-202-disk-1]
  • Send the VM config to the target node
    #scp /etc/pve/nodes/node-source/qemu-server/202.conf node-target:/etc/pve/nodes/node-target/qemu-server/202.conf
  • Start the VM on the new host.
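
The steps above can be put together as a single script. This is a hedged sketch: the dataset path, VM ID, snapshot names, and target hostname are placeholders from the example, and by default it only prints the commands (set DRY_RUN=0 to actually run them).

```shell
#!/bin/sh
# Sketch of the ZFS-based VM migration above. All names are placeholders.
# DRY_RUN=1 (the default) only prints the commands; set DRY_RUN=0 to execute.
set -eu
DRY_RUN=${DRY_RUN:-1}
run() { echo "+ $*"; [ "$DRY_RUN" = 1 ] || "$@"; }

DATASET="rpool/data/vm-202-disk-1"   # assumed dataset path
VMID=202
TARGET="node-target"                 # assumed SSH host
LIMIT="25M"                          # pv rate limit (bytes per second)

# 1. Initial snapshot and full send while the VM is still running
run zfs snapshot "${DATASET}@migrate1"
run sh -c "zfs send -vc ${DATASET}@migrate1 | pv -q -L $LIMIT | ssh $TARGET zfs recv -s $DATASET"

# 2. Shut down the VM, then send only the delta between the two snapshots
run qm shutdown "$VMID"
run zfs snapshot "${DATASET}@migrate2"
run sh -c "zfs send -i ${DATASET}@migrate1 ${DATASET}@migrate2 | pv -q -L $LIMIT | ssh $TARGET zfs recv -s $DATASET"

# 3. Copy the VM config and start the VM on the target
run scp "/etc/pve/nodes/node-source/qemu-server/${VMID}.conf" \
        "$TARGET:/etc/pve/nodes/node-target/qemu-server/${VMID}.conf"
run ssh "$TARGET" qm start "$VMID"
```

Reviewing the printed commands first is a cheap safety net, since a mistyped dataset or VM ID here touches real disks.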

Hope this helps.


Linux backup using Rsync

This time it is about using rsync to copy a Linux/Unix system to another host, or to back it up. rsync is more flexible than a tool like dd, since we can efficiently select the directories and attributes that we want to back up or move.

Sometimes ignorance is a blessing

The one who is not so wise

rsync can also be used while the system is running, but do so with caution and make sure you understand the state of the data: changes that have not yet been committed at the file level may not be transferred.

This method also works for migrating the system to another host. Be aware that the target should be a freshly installed operating system of the same version as the source.

Rsync full backup

# rsync -aAXHv --numeric-ids --info=progress2 --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} / /path/to/backup
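
The same exclude list also works for pushing the system to a remote host over SSH, which is how the migration mentioned above can be done. A hedged sketch, where "backup-host" and the destination path are placeholders; by default it only prints the command (set DRY_RUN=0 to execute). Running it a second time after stopping services helps pick up late file changes.

```shell
#!/bin/sh
# Sketch: push the full backup above to a remote host over SSH.
# "backup-host" and /path/to/backup are placeholders.
# DRY_RUN=1 (the default) only prints the command; set DRY_RUN=0 to execute.
set -eu
DRY_RUN=${DRY_RUN:-1}
run() { echo "+ $*"; [ "$DRY_RUN" = 1 ] || "$@"; }

# Excludes are spelled out per flag, so this works in plain sh
# (the {a,b,c} form in the command above relies on bash brace expansion).
run rsync -aAXHv --numeric-ids --info=progress2 \
  --exclude=/dev/'*' --exclude=/proc/'*' --exclude=/sys/'*' --exclude=/tmp/'*' \
  --exclude=/run/'*' --exclude=/mnt/'*' --exclude=/media/'*' --exclude=/lost+found \
  / root@backup-host:/path/to/backup
```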

Rsync Clone

# rsync -qaHAXS [SOURCE_DIR] [DESTINATION_DIR]
  • --numeric-ids disables mapping of user and group names; numeric user and group IDs are transferred instead. This is useful when backing up over SSH or when using a live system to back up a different system disk.
  • --info=progress2 shows the overall progress and transfer speed instead of listing each file being transferred.
  • -x / --one-file-system avoids crossing a filesystem boundary when recursing, which prevents backing up anything under other mount points in the hierarchy.
  • -n is the dry-run option: it simulates the transfer without copying any files.


Replacing a Failed RAID Disk in Proxmox ZFS

This is the procedure for replacing a broken disk in a ZFS RAID array. In this case we simulate replacing a disk in a ZFS RAID-1 (mirror).

  • Check the ZFS pool status
    #zpool status -v
  • Get the disk information and physically replace it (let’s say the failed disk is /dev/sdb)
  • Clone the partition table from the healthy disk to the replacement
    #sgdisk -R /dev/sdb /dev/sda [new disk, then existing disk]
  • Randomize the disk and partition GUIDs on the new disk
    #sgdisk -G /dev/sdb
  • Replace the disk in the pool (since the new disk sits at the same device path, a single device argument is enough)
    #zpool replace rpool /dev/sdb2
  • Monitor the resilvering process
    #zpool status -v
  • For the disk holding the boot partition, make sure to update the GRUB configuration so the boot loader is installed on the replacement disk
    #dpkg-reconfigure grub-pc
    When prompted, make sure to select the new disk.
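
The whole procedure above can be sketched as one script. This is a hedged sketch using the example's device names; by default it only prints the commands (set DRY_RUN=0 to execute), which is worth doing first since a wrong device name here is destructive.

```shell
#!/bin/sh
# Sketch of the disk-replacement steps above. Device names follow the example.
# DRY_RUN=1 (the default) only prints the commands; set DRY_RUN=0 to execute.
set -eu
DRY_RUN=${DRY_RUN:-1}
run() { echo "+ $*"; [ "$DRY_RUN" = 1 ] || "$@"; }

NEW_DISK=/dev/sdb    # replacement disk, installed in the failed disk's slot
GOOD_DISK=/dev/sda   # healthy mirror member

run zpool status -v rpool                # confirm which device is faulted
run sgdisk -R "$NEW_DISK" "$GOOD_DISK"   # copy partition table: healthy -> new
run sgdisk -G "$NEW_DISK"                # randomize GUIDs on the new disk
run zpool replace rpool "${NEW_DISK}2"   # resilver onto partition 2 of the new disk
run zpool status -v rpool                # watch resilver progress
run dpkg-reconfigure grub-pc             # reinstall GRUB; select the new disk
```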

Thank You