Learn how to safely remove a disk from LVM. This is useful when you need to free up a disk from a volume group and re-use it elsewhere, or replace a faulty disk.
This article answers the following questions:
- How to safely remove the disk from LVM
- How to remove the disk from VG online
- How to copy data from one disk to another at the physical level
- How to replace a faulty disk in LVM online
- How to move physical extents from one disk to another
- How to free up a disk from a VG to shrink the VG size
- How to safely reduce VG
```
root@kerneltalks # lsblk
NAME         MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda         202:0    0  10G  0 disk
├─xvda1      202:1    0   1M  0 part
└─xvda2      202:2    0  10G  0 part /
xvdf         202:80   0   1G  0 disk
└─vg01-lvol1 253:0    0  20M  0 lvm  /mydata
```
Now, attach a new disk of the same or bigger size than /dev/xvdf. Identify the new disk on the system by running lsblk again and comparing the output with the previous one.
```
root@kerneltalks # lsblk
NAME         MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda         202:0    0  10G  0 disk
├─xvda1      202:1    0   1M  0 part
└─xvda2      202:2    0  10G  0 part /
xvdf         202:80   0   1G  0 disk
└─vg01-lvol1 253:0    0  20M  0 lvm  /mydata
xvdg         202:96   0   1G  0 disk
```
You can see the new disk has been identified as /dev/xvdg. Now we will add this disk to the current VG vg01 using the vgextend command. Before the disk can be used in LVM, you need to initialize it with pvcreate.
```
root@kerneltalks # pvcreate /dev/xvdg
  Physical volume "/dev/xvdg" successfully created.
root@kerneltalks # vgextend vg01 /dev/xvdg
  Volume group "vg01" successfully extended
```
Now we have the disk to be removed, /dev/xvdf, and the newly added disk, /dev/xvdg, in the same volume group vg01. You can verify this with pvs:
```
root@kerneltalks # pvs
  PV         VG   Fmt  Attr PSize    PFree
  /dev/xvdf  vg01 lvm2 a--  1020.00m 1000.00m
  /dev/xvdg  vg01 lvm2 a--  1020.00m 1020.00m
```
Observe the output above. Since we created the 20M logical volume (mounted at /mydata) on disk /dev/xvdf, it shows 20M less free space. The new disk /dev/xvdg is completely free.
Now we need to move the physical extents off disk /dev/xvdf. pvmove is the command used to achieve this. You just need to supply the name of the disk from which PEs should be moved out. pvmove will move the PEs off that disk and write them to all available disks in the same volume group. In our case, only one other disk is available to receive the PEs.
```
root@kerneltalks # pvmove /dev/xvdf
  /dev/xvdf: Moved: 0.00%
  /dev/xvdf: Moved: 100.00%
```
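If your volume group contains several other disks and you want the extents to land on one specific disk, pvmove also accepts an explicit destination PV after the source. A minimal sketch, assuming both disks are already PVs in the same VG:

```shell
# Move all extents off /dev/xvdf and place them only on /dev/xvdg,
# instead of spreading them across every free PV in the VG.
pvmove /dev/xvdf /dev/xvdg
```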
Move progress is reported periodically. If the operation is interrupted for any reason, extents that have already been moved stay on the destination disks and un-moved extents remain on the source disk. The operation can be resumed by issuing the same command again; it will then move the remaining PEs off the source disk.
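Conversely, if you decide to abandon an interrupted or in-progress move rather than resume it, pvmove provides an abort switch:

```shell
# Abort any pvmove operations currently in progress on this host.
pvmove --abort
```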
You can even run it in the background:
```
root@kerneltalks # pvmove /dev/xvdf 2>error.log >normal.log &
[1] 1639
```
The above command runs pvmove in the background. Normal console output is redirected to the normal.log file in the current working directory, while errors are redirected to the error.log file in the same directory.
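With the output redirected, you can still follow the progress of the background move from those log files, for example:

```shell
# Follow the progress messages as pvmove appends them to the log.
tail -f normal.log

# List background jobs with their PIDs to confirm pvmove is still running.
jobs -l
```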
Now if you check the pvs output again, you will find that all space on disk xvdf is free, which means it is no longer used to store any data in that VG. This ensures the disk can be removed without any issues.
```
root@kerneltalks # pvs
  PV         VG   Fmt  Attr PSize    PFree
  /dev/xvdf  vg01 lvm2 a--  1020.00m 1020.00m
  /dev/xvdg  vg01 lvm2 a--  1020.00m 1000.00m
```
Before removing/detaching the disk from the server, you need to remove it from LVM. You can do this by reducing the VG, opting that disk out.
```
root@kerneltalks # vgreduce vg01 /dev/xvdf
  Removed "/dev/xvdf" from volume group "vg01"
```
Now /dev/xvdf can be removed/detached from the server safely.
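Optionally, before re-using the disk elsewhere, you can also wipe its LVM label so it no longer shows up as a physical volume. A short sketch, assuming the disk is no longer part of any VG:

```shell
# Remove the LVM2 label/metadata from the freed disk,
# turning it back into a plain block device.
pvremove /dev/xvdf
```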
A few useful switches of pvmove:
Verbose mode prints more detailed information about the operation. It can be invoked with the -v switch:
```
root@kerneltalks # pvmove -v /dev/xvdf
    Cluster mirror log daemon is not running.
    Wiping internal VG cache
    Wiping cache of LVM-capable devices
    Archiving volume group "vg01" metadata (seqno 17).
    Creating logical volume pvmove0
    activation/volume_list configuration setting not defined: Checking only host tags for vg01/lvol1.
    Moving 5 extents of logical volume vg01/lvol1.
    activation/volume_list configuration setting not defined: Checking only host tags for vg01/lvol1.
    Creating vg01-pvmove0
    Loading table for vg01-pvmove0 (253:1).
    Loading table for vg01-lvol1 (253:0).
    Suspending vg01-lvol1 (253:0) with device flush
    Resuming vg01-pvmove0 (253:1).
    Resuming vg01-lvol1 (253:0).
    Creating volume group backup "/etc/lvm/backup/vg01" (seqno 18).
    activation/volume_list configuration setting not defined: Checking only host tags for vg01/pvmove0.
    Checking progress before waiting every 15 seconds.
  /dev/xvdf: Moved: 0.00%
  /dev/xvdf: Moved: 100.00%
    Polling finished successfully.
```
The interval at which the command reports progress can be changed with the -i switch, followed by a number of seconds.
```
root@kerneltalks # pvmove -i 1 /dev/xvdf
```