Category Archives: Disk management

How to rename logical volume in Linux and HPUX

Learn how to rename a logical volume in Linux or Unix. Understand what happens in the background when you change the logical volume name of an existing LVOL.

LVM i.e. Logical Volume Manager is one of the widely used volume managers in Linux and Unix. A logical volume is a portion of the volume group which can be mounted on a mount point. Once mounted, the space belonging to that logical volume is available for use to the end user.

In this post, we are going to see step by step how to rename a logical volume. In Linux, lvrename is a direct command which does this for you. But first, we will see how it works in the background so that you know the flow and can rename an LV even without the lvrename command.

The LV renaming procedure follows the below flow:

  1. Stop all user/app access to related mount point (on which lvol is mounted) using fuser
  2. Un-mount LV using umount
  3. Rename device names of lvol using mv
  4. Mount LV using mount
  5. Edit /etc/fstab entry related to this lvol using vi

Let’s see an example where we are renaming /dev/vg01/lvol1, which is mounted on /data, to /dev/vg01/lvol_one. See the below output for the above-mentioned steps (HPUX console).

# bdf /data
/dev/vg01/lvol1 524288 49360 471256 9% /data
# fuser -cku /data
/data:   223412c(user1)
# umount /data
# mv /dev/vg01/lvol1 /dev/vg01/lvol_one
# mv /dev/vg01/rlvol1 /dev/vg01/rlvol_one
# mount /data
# bdf /data
/dev/vg01/lvol_one    524288    49360   471256  9%   /data

In the above output, you can see how we renamed the logical volume just by renaming its device files.

In Linux, we have a single command, lvrename, which does all the above steps in the background for you. You just need to provide it with the old and new lvol names along with the volume group to which this lvol belongs. So, the above scenario needs the below command:

# lvrename vg01 lvol1 lvol_one
  Renamed "lvol1" to "lvol_one" in volume group "vg01"

You can see in the output that a single command renamed lvol1 to lvol_one! This command also supports the below options:

  • -t Test mode (dry run)
  • -v Verbose mode
  • -f Forceful operation
  • -d Debug mode
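
For example, a quick dry run before the actual rename (a sketch; with -t no changes are made):

# lvrename -t -v vg01 lvol1 lvol_one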

How to scan new lun / disk in Linux & HPUX

A howto guide to scan new disks or LUNs on Linux or HPUX machines. This guide explains steps to scan and then identify new disk device names.

When you add a new disk to the system, you need to scan it so that the kernel can identify the new hardware and assign a disk name to it. The new disk can be local or from storage. If it's local, then it's an addition of a disk in the free disk slots attached to the server. If it's a storage LUN, then it's masking and zoning at the storage level to the WWN of the server.

Once the disk / LUN is made available/visible to the server, the next step is to scan it. The kernel maintains a known hardware tree, and this tree needs to be updated with the new disk information. To let the kernel know that a new disk is available to the server, disk scanning is required. If the disk is from a storage array, then there are chances you have storage vendor utilities/scripts available to scan storage on the server, for example: evainfo (for EVA storage), xpinfo (for XP12K storage), powermt (for EMC storage). If these utilities are not available, you can still scan from the OS.

HPUX disk scan:

In HPUX, we have the dedicated ioscan command to scan new hardware. You can ask the command to scan only hard disks with the -C option i.e. class. Before executing this command, keep the output of previously seen disks (ioscan -funC disk) handy. This output can be compared with the new output (command below) to identify the new disk.

# ioscan -fnC disk
Class     I  H/W Path        Driver  S/W State   H/W Type     Description
==========================================================================
disk      4  0/0/1/1.0.0     sdisk   CLAIMED     DEVICE       HP 36.4GST373455LC#36
                            /dev/dsk/c1t0d0   /dev/rdsk/c1t0d0
disk      0  0/0/1/1.2.0     sdisk   CLAIMED     DEVICE       HP 36.4GST373455LC#36
                            /dev/dsk/c1t2d0   /dev/rdsk/c1t2d0
disk      1  0/0/2/0.2.0     sdisk   CLAIMED     DEVICE       HP 36.4GST373455LC#36
                            /dev/dsk/c2t2d0   /dev/rdsk/c2t2d0
disk      2  0/0/2/1.2.0     sdisk   CLAIMED     DEVICE       HP      DVD-ROM 305
                            /dev/dsk/c3t2d0   /dev/rdsk/c3t2d0
disk      3  0/10/0/1.0.0.0  sdisk   CLAIMED     DEVICE       I2O     RAID5
                            /dev/dsk/c4t0d0   /dev/rdsk/c4t0d0

The scan output shows you all detected disks on the system and their assigned disk names in CTD format. Sometimes, ioscan is unable to install special device files for newly detected disks; in such a situation you can run the insf (install special files) command to ensure all detected hardware has device files in place.

# insf -e
insf: Installing special files for btlan instance 0 address 0/0/0/0
insf: Installing special files for stape instance 1 address 0/0/1/0.1.0
insf: Installing special files for sctl instance 0 address 0/0/1/0.7.0
insf: Installing special files for sdisk instance 4 address 0/0/1/1.0.0
insf: Installing special files for sdisk instance 0 address 0/0/1/1.2.0
insf: Installing special files for sctl instance 1 address 0/0/1/1.7.0
----- output clipped ----

A new disk can also be identified by comparing the directory structure of /dev/disk or /dev/dsk before and after the scan. Any new addition to these directories during the scan is your new disk.
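
One simple way to do that comparison (a sketch using standard commands; the file names under /tmp are arbitrary):

# ls /dev/dsk > /tmp/disks_before
# ioscan -fnC disk
# insf -e
# ls /dev/dsk > /tmp/disks_after
# diff /tmp/disks_before /tmp/disks_after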

Once you identify this new disk, you can use it on the system via volume managers like LVM.

Linux Disk scan:

In Linux, it’s a bit tricky since there is no direct ioscan equivalent available. First, you need to get the currently available disk details using the fdisk command as below:

# fdisk -l |egrep '^Disk' |egrep -v 'dm-'|grep -v identifier
Disk /dev/sda: 74.1 GB, 74088185856 bytes
Disk /dev/sdb: 107.4 GB, 107374182400 bytes
Disk /dev/sdd: 2147 MB, 2147483648 bytes
Disk /dev/sde: 2147 MB, 2147483648 bytes
Disk /dev/sdc: 2147 MB, 2147483648 bytes

Keep this list handy to compare with the list after scan.
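
To automate the comparison, you can capture the same fdisk listing before and after the scan and diff the two (a sketch; file names are arbitrary):

# fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-' | grep -v identifier > /tmp/disks_before
  ... perform the scan as described below ...
# fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-' | grep -v identifier > /tmp/disks_after
# diff /tmp/disks_before /tmp/disks_after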

Scan SCSI disks

Now, if your disks are connected via SCSI, then you need to scan the SCSI hosts on the server. Check the current list of hosts on the server as below:

# ls /sys/class/scsi_host/
host0  host1  host2  host3

Now, you have 4 hosts on this server (in the example above). You need to scan all these 4 hosts in order to scan new disks attached to them. This can be done by writing "- - -" into their respective scan files. See the below commands:

echo "- - -" > /sys/class/scsi_host/host0/scan
echo "- - -" > /sys/class/scsi_host/host1/scan
echo "- - -" > /sys/class/scsi_host/host2/scan
echo "- - -" > /sys/class/scsi_host/host3/scan
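
If the server has many SCSI hosts, a small shell loop does the same thing (a sketch with the same effect as the commands above):

for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done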

This completes your scan of the SCSI hosts on the server. Now you can again run the fdisk command we saw previously and compare the new output with the old one. You will see the new disk added to the system along with its respective device name.

Scan FC LUNs:

If your disks are connected via FC, then you need to scan the FC hosts on the server. Check the current list of hosts on the server as below:

# ls /sys/class/fc_host
host0  host1

Now there are 2 FC hosts on the server. Again, we need to scan them by writing 1 to their respective issue_lip files, along with the scan steps from above.

# echo "1" > /sys/class/fc_host/host0/issue_lip
# echo "- - -" > /sys/class/scsi_host/host0/scan
# echo "1" > /sys/class/fc_host/host1/issue_lip
# echo "- - -" > /sys/class/scsi_host/host1/scan

This will scan your FC HBAs for new visible disks. Once the command completes (check syslog for the completion event), you can use the fdisk command to list disks. Compare the output with the 'before scan' output and get the new disk names!
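
For servers with more FC HBAs, the two writes can be looped over every fc_host (a sketch; it assumes each fc_host has a matching scsi_host entry, as in the example above):

for host in /sys/class/fc_host/host*; do
    h=$(basename "$host")
    echo "1" > "/sys/class/fc_host/$h/issue_lip"
    echo "- - -" > "/sys/class/scsi_host/$h/scan"
done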

Move disks/LUN from one server to another without losing data

A howto guide for moving disks or LUNs from one server to another without losing any data. This guide applies to disks or LUNs which are configured under LVM.

In Unix or Linux infra, it's a pretty common scenario to have to move disks or storage LUNs from one server to another with the data on them intact. This is something that happens in clusters automatically i.e. handled by cluster services. When the primary node goes down, cluster services move disks or LUNs from the primary to the secondary node and make the secondary node available for use.

We are going to see how to do this manually using commands. This howto guide gives you an insight into what cluster services do in the background to move data across nodes in case of failures. We will be using LVM (Logical Volume Manager) as our disk manager in this guide since it's the most widely used volume manager, next to VxVM (Veritas Volume Manager).

The flow of steps goes like this:

  1. Stop disk access on server1
  2. Remove disk / LUN from server1
  3. Present disk / LUN to server2
  4. Identify new disk / LUN on server2
  5. Import it into LVM
  6. Make it available to use on server2

Let’s see these steps one by one in detail with commands and their outputs. We will be moving mount point /data from server1 to server2. /data is mounted on /dev/vg01/lvol1.

1. Stop disk access on server1

First, you have to stop all user/app access to the related mount points. In our case it's /data. You can check whether anyone is accessing the mount point, and kill those processes, using the fuser command.

# fuser -cu /data         #TO VIEW USERS
/data:   223412c(user1)
# fuser -cku /data        #TO KILL USERS
# fuser -cu /data

Once you are sure no one is accessing the mount point, go ahead and unmount it.

# umount /data

2. Remove disk / LUN from server1

Now, we need to remove the disk or LUN from the LVM of server1 so that it can be detached from the server gracefully. For this, we will be using the vgexport command so that a configuration backup can be imported on the destination server.

# vgchange -a n /dev/vg01
Volume group "/dev/vg01" has been successfully changed.
# vgexport -v -m /tmp/vg01.map vg01
Beginning the export process on Volume Group "/dev/vg01". 
vgexport:Volume Group “/dev/vg01” has been successfully removed.

To export a VG, you need to de-activate the volume group first and then export the VG with a map file. Transfer this map file to server2 with FTP or SFTP so that it can be used while importing the VG there.

Now, your VG vanishes from server1 i.e. the related disk / LUN is no longer associated with the LVM of server1. Since the VG is only exported, data is intact on the disk / LUN physically.

3. Present disk / LUN to server2

Now, you need to physically remove the disk from server1 and physically attach it to server2. If it's a LUN, then remove the mapping of the LUN with server1 and map it to server2. You may need to do zoning at the storage level against the WWNs of both servers.

At this stage, your disk / LUN is removed from server1 and now available/visible to server2. But, it’s not yet known to LVM of server2.

4. Identify new disk / LUN on server2

To identify this newly presented/mapped disk/LUN on server2, you need to scan the hardware or FC. Once you get the disk number for it (as identified in the kernel), proceed with the next steps in LVM.

Read here : Howto scan new lun / disk in Linux & HPUX

5. Import it in LVM

Now, we have the disk / LUN identified on server2, along with the VG map file from server1. Using this file and the disk name, proceed with importing the VG on server2.

# vgimport -v -m /tmp/vg01.map /dev/vg01 list_of_disk
vgimport: Volume group “/dev/vg01” has been successfully created.
Warning: A backup of this volume group may not exist on this machine.
Please remember to take a backup using the vgcfgbackup command after activating the volume group.
# vgchange -a y vg01
Volume group “/dev/vg01” has been successfully changed.
# vgcfgbackup /dev/vg01
Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf

First, import the VG with the vgimport command. In place of the list_of_disk argument in the above example, you have to give your disk name(s). You can use any VG name here; it's not mandatory to use the same VG name as on the first server. After a successful import, activate the VG with vgchange.
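
For example, if the disk identified on server2 were /dev/dsk/c5t0d0 (a hypothetical name; use the name found in the previous step), the import would look like:

# vgimport -v -m /tmp/vg01.map /dev/vg01 /dev/dsk/c5t0d0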

6. Make it available to use on server2

At this stage, your disk / LUN is available in the LVM of server2 with all data on it intact. To make it available for use, we need to mount it on a directory. Use the mount command:

# mount /dev/vg01/lvol1 /data2

Add an entry in /etc/fstab as well to make sure the mount point gets mounted at boot too.
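
A sample /etc/fstab line for this mount could look like the below (the vxfs type and delaylog option are assumptions; match them to your actual filesystem):

/dev/vg01/lvol1 /data2 vxfs delaylog 0 2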

Host to guest disk mapping in HP iVM

Learn how to identify a virtual machine disk on the physical host machine in HP iVM. Disk mapping makes it easy to carry out disk-related activities.

In HP Integrity Virtual Machines, disk names on the host machine and virtual machines are always different for the same disk. Whenever we present a disk (storage LUN or local disk) from host to guest, it will be discovered with a different name on the guest than on the host. So it becomes necessary to know both names of the same disk for any disk-related activities.

Let's see how we can map these two names. There are two methods to do this.

  1. Using the xd command
  2. With the hpvmdevinfo command

Using the xd command

The xd command is used to read raw data on a disk. Since the physical disk is the same on both servers and only its identification at the kernel level differs, we will get the same raw data from both servers. We will use the xd command to get the PVID of disks from the host and the guest. Whenever there is a match of PVIDs in both outputs, consider the disks the same.

See the below example where the xd command is used with host and guest disks.

------ On guest -----
vm:root-/>xd -An -j8200 -N16 -tx /dev/disk/disk76
70608a28 4ec7a7ff 70608a28 4ec7a942
vm:root-/>xd -An -j8200 -N16 -tx /dev/disk/disk72
70608a28 4ec7a7ef 70608a28 4ec7a942
vm:root-/>xd -An -j8200 -N16 -tx /dev/disk/disk74
70608a28 4ec7a7f6 70608a28 4ec7a942

----- On host -----
host:root-/>xd -An -j8200 -N16 -tx /dev/disk/disk532
70608a28 4ec7a7ff 70608a28 4ec7a942
host:root-/>xd -An -j8200 -N16 -tx /dev/disk/disk538
70608a28 4ec7a7f6 70608a28 4ec7a942
host:root-/>xd -An -j8200 -N16 -tx /dev/disk/disk526
70608a28 4ec7a7ef 70608a28 4ec7a942

Now, if you observe the outputs (2nd field), guest disk disk76 has the same value as host disk disk532. That means it's the same disk! So disk532 on the host is the same as disk76 on the guest. The same goes for disk538-disk74 and disk526-disk72.

This is a bit of a tedious job if you have a huge number of disks to observe and match. Also, if you are interested in only one VM's data, then it's time consuming since you have to match all disks of the host with that VM's disks. In that case, we have the hpvmdevinfo command, which directly prints out the mapping table for you.

With the hpvmdevinfo command

This command comes with the HP iVM setup and shows device mappings from host to guest in tabular format. Since this command can be run against a particular VM, it's much faster for getting the disk mapping than the previous method.

# hpvmdevinfo -P virtual_svr_2
Virtual Machine Name Device Type Bus,Device,Target Backing Store Type Host Device Name Virtual Machine Device Name
==================== =========== ================= ================== ================ ===========================
virtual_svr_2           disk            [0,1,0]          disk         /dev/rdisk/disk336 /dev/rdisk/disk4
virtual_svr_2           disk            [0,1,1]          disk         /dev/rdisk/disk332 /dev/rdisk/disk5
virtual_svr_2           disk            [0,1,3]          disk         /dev/rdisk/disk675 /dev/rdisk/disk9

You need to run this command supplying the VM name with the -P option, and you will be presented with the device list, its CTD, and the disk mapping between host and guest servers.

In the above example, see the last two columns: the first shows the disk name on the host machine and the last shows it on the guest/virtual machine. Pretty straightforward and fast!

NFS configuration in Linux and HPUX

Learn how to configure the Network File System (NFS) in Linux and HPUX servers. How to export NFS, start/stop NFS services, and control access to it.

NFS configurations

The Network File System is one of the essential things in today's IT infrastructure. One server's file system can be exported as NFS over the network, with access to it controlled. Other servers can mount these exported mount points locally as NFS mounts. This makes the same file system available to many systems and thus many users. Let's see NFS configuration in Linux and HPUX.

NFS Configuration file in Linux

We assume the NFS daemon is installed on the server and running in the background. If not, check the package installation steps and how to start a service on Linux. One can check if NFS is running on the server with the service or ps -ef commands. The NFS server i.e. the server exporting the directory should have the portmap service running.

Make sure TCP and UDP ports 2049 and 111 are open on any firewalls between client and server. That can be the OS firewall, iptables, network firewalls, or security groups in the cloud.

root@kerneltalks # ps -ef |grep -i nfs
root      1904     2  0  2015 ?        00:00:08 [nfsd4]
root      1905     2  0  2015 ?        00:00:00 [nfsd4_callbacks]
root      1906     2  0  2015 ?        00:01:33 [nfsd]
root      1907     2  0  2015 ?        00:01:32 [nfsd]
root      1908     2  0  2015 ?        00:01:33 [nfsd]
root      1909     2  0  2015 ?        00:01:37 [nfsd]
root      1910     2  0  2015 ?        00:01:24 [nfsd]

root@kerneltalks # service nfs status
rpc.svcgssd is stopped
rpc.mountd (pid 1897) is running...
nfsd (pid 1913 1912 1911 1910 1909 1908 1907 1906) is running...
rpc.rquotad (pid 1892) is running...

root@kerneltalks # rpcinfo -p localhost
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
----- output clipped -----

/etc/exports is the configuration file which has all exported volume details along with their respective permissions. /etc/exports follows the format below:

<export> <host>(options)

where:

  • export is the filesystem/directory to be exported
  • host is the hostname/IP to which the export is accessible; wildcards are acceptable
  • options are permissions such as ro, rw, sync, async

Refer to the below chart, which can be used to decide your entry in this file.

NFS config file parameters

Entry                                         Description
/my_share server3(rw,sync)                    Export /my_share to server3 with read-write access in sync mode
/my_share *(ro,sync)                          Export /my_share to any host with read-only permission in sync mode
/my_share 10.10.2.3(rw,async)                 Export /my_share to IP 10.10.2.3 with read-write access in async mode
/my_share server2(ro,sync) server3(rw,sync)   Export to two different servers with different permissions
root@kerneltalks # cat /etc/exports
/my_share       10.10.15.2(rw,sync)
/new_share       10.10.1.40(rw,sync)

The /etc/exports file can be edited using the vi editor or the /usr/sbin/exportfs command.

How to start-stop NFS service in Linux

Once you have made changes in the file, you need to restart the NFS daemon for the changes to take effect. This can be done using the service nfs restart command. If NFS is already running and you just need to bring the new configuration into action, you can reload the config using service nfs reload. To stop NFS, you can run the service nfs stop command.
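
For reference, the commands as named above:

# service nfs restart     #RESTART NFS
# service nfs reload      #RELOAD CONFIG
# service nfs stop        #STOP NFS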

root@kerneltalks # service nfs status
rpc.svcgssd is stopped
rpc.mountd (pid 1897) is running...
nfsd (pid 1913 1912 1911 1910 1909 1908 1907 1906) is running...
rpc.rquotad (pid 1892) is running...

How to re-export NFS shares after editing the configuration file

In a running NFS environment, where multiple clients have already mounted NFS shares from the NFS server, you may need to edit the NFS share configuration. You can edit the NFS configuration file and re-export the NFS shares using the exportfs command.

Make sure only additions were made to the config file when reloading it; otherwise it may affect already connected NFS shares.

root@kerneltalks # exportfs -ra

How to mount NFS share

The destination, where the export is to be mounted, should have the NFS daemon running too. Mounting a share is a very easy two-step procedure:

  1. Create a directory to mount the share
  2. Mount the share using the mount command
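
A minimal example, using the same share as the fstab entry below (server IP and paths are illustrative):

# mkdir /tmp/nfs_share
# mount -t nfs 10.10.2.3:/my_share /tmp/nfs_share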

To make this permanent i.e. to mount the share at boot time, make an entry in /etc/fstab like below, so that manually mounting after a server reboot can be avoided.

10.10.2.3:/my_share /tmp/nfs_share nfs defaults 0 0

NFS configuration in HPUX

Conceptually, this part is the same as in Linux. In some versions, you need to edit the /etc/dfs/dfstab file. This file takes share commands, one per line. It can be filled in like below:

share -F nfs -o root=server2:server3 /my_share

The above line exports the /my_share directory to server2 and server3 with root account access.

Also, we need to set the NFS_SERVER=1 parameter in /etc/rc.config.d/nfsconf on the NFS server. By default, it is set to 0 i.e. the server acts as an NFS client. Along with this, NFS_CORE and START_MOUNTD need to be set to 1 as well.
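
With all three parameters set, the relevant lines of /etc/rc.config.d/nfsconf would look like this (a sketch):

NFS_CORE=1
NFS_SERVER=1
START_MOUNTD=1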

How to start-stop NFS service in HPUX

We have covered it here: NFS server start/stop on HPUX

For reloading the config file in HPUX, you can run the shareall command.

Mounting share

This part is the same as in Linux.

Errors seen in Linux

If you did not prepare the client properly, then you might see the below error:

mount: wrong fs type, bad option, bad superblock on 10.10.2.3:/my_share,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.

Install the nfs-utils or nfs-common packages and you should be able to mount the NFS filesystem without any issues.
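
For example (assuming a yum or apt based distribution):

# yum install nfs-utils         #RHEL/CENTOS
# apt-get install nfs-common    #DEBIAN/UBUNTU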

How to rename volume group

Learn how to rename a volume group in Linux or Unix. Understand what happens in the background when you change the volume group name of an existing VG.

A volume group can be renamed with the easy vgrename command in Linux. But first, we will see how it can be done without the vgrename command, so that you understand step by step what actually happens in the background when a VG name changes.

We have seen how to create a VG in the past and how to export/import a VG. We are going to use these commands to rename a VG. The below steps need to be followed:

  1. Stop all user/app access to all mount points within VG using fuser
  2. Un-mount all LV using umount
  3. Deactivate VG using vgchange
  4. Export VG using vgexport
  5. Create a directory and group file with the new name using mkdir and mknod
  6. Import VG with a new name in command options using vgimport
  7. Activate VG using vgchange
  8. Mount all LV using mount
  9. Edit related entries in /etc/fstab with a new name

See the below output for the above-mentioned steps (HPUX console).

# fuser -cku /data
/data:   223412c(user1)
# umount /data
# vgchange -a n /dev/vg01
Volume group "/dev/vg01" has been successfully changed.
# vgexport -v -m /tmp/vg01.map vg01
Beginning the export process on Volume Group "/dev/vg01". 
/dev/dsk/c0t1d0 vgexport:Volume Group “/dev/vg01” has been successfully removed.
# mkdir /dev/testvg
# mknod /dev/testvg/group c <major> 0x<minor>
# vgimport -v -m /tmp/vg01.map /dev/testvg list_of_disk
vgimport: Volume group “/dev/testvg” has been successfully created.
Warning: A backup of this volume group may not exist on this machine.
Please remember to take a backup using the vgcfgbackup command after activating the volume group
# vgchange -a y testvg
Volume group “/dev/testvg” has been successfully changed.
# mount /dev/testvg/lvol1 /data

In the above step-by-step process, you can see how a VG changes its name. We change the VG-related file and directory and then import it using the old configuration but the new name.

In Linux, we have one command which does all this in the background for you. vgrename is the command used to rename a VG in Linux. You have to supply the old VG name and the required new name.

# vgrename /dev/vg01 /dev/testvg
Volume group "/dev/vg01" successfully renamed to "/dev/testvg"
OR
# vgrename vg01 testvg
Volume group "vg01" successfully renamed to "testvg"

Keep in mind, this command also requires a de-activated VG to work. So this is not an online process. It supports the below options:

  • -f Forcefully rename
  • -v Verbose mode

LVM cheatsheet

A list of all the HPUX LVM command tutorials we have seen before on KernelTalks. LVM commands related to physical volume, volume group, and logical volume.

What is LVM?

LVM is a Logical Volume Manager.

LVM is a volume manager in Unix-Linux systems. It is used to manage your disks. LVM enables raw disks to be used as a data store, with file systems on defined mount points. LVM helps you manage your disk volumes efficiently for performance and data integrity. VxVM i.e. Veritas Volume Manager is another volume manager that is as popular as LVM.

Previously, we have seen a series of LVM command tutorials on KernelTalks. Here is a summary of them in the form of an LVM cheatsheet for your quick reference.

Physical Volume Commands

Command        Description
pvcreate       Create physical volume
pvdisplay      Display physical volume details
pvchange       Activate, de-activate physical volume
pvmove         Move data from one PV to another

Volume Group Commands

Command        Description
vgcreate       Create volume group
vgdisplay      Display volume group details
vgscan         Rebuild /etc/lvmtab file
vgextend       Add new PV to VG
vgreduce       Remove PV from VG
vgexport       Export VG from system
vgimport       Import VG into system
vgcfgbackup    Backup VG configuration
vgcfgrestore   Restore VG configuration
vgchange       Change details of VG
vgremove       Remove VG from system
vgsync         Sync stale PE in VG

Logical Volume Commands

Command        Description
lvcreate       Create logical volume
lvdisplay      Display logical volume details
lvremove       Remove logical volume
lvextend       Increase size of logical volume
lvreduce       Decrease size of logical volume
lvchange       Change details of logical volume
lvsync         Sync stale LE of logical volume
lvlnboot       Set LV as root, boot, swap, or dump volume

LVM commands tutorial: Part 3: Logical Volume (lvsync, lvlnboot)

A series of tutorials to learn LVM commands. In this part, learn how to sync an LV and set it as a boot, root, or swap device (lvsync, lvlnboot).

This is the last part of the LVM command tutorials, and the last post on logical volume commands too. All previous parts of this tutorial can be found on KernelTalks.

Let’s start with our first command here.

Command: lvsync

It synchronizes stale PEs in a given LV. It's used in mirroring environments. Whenever there is a disk failure or disk path issue, PEs go bad and the LV, in turn, has stale PEs. Once the issue is corrected, we need to sync the stale PEs with this command if they don't sync automatically.

The command doesn’t have many options. It should be supplied with the LV path only.

# /usr/sbin/lvsync /dev/vg00/lvol6
Resynchronized logical volume "/dev/vg00/lvol6".

Command: lvlnboot

This command is used to define a logical volume as a root, dump, swap, or boot volume. You have to supply an LV path along with the specific option of your choice. Options are as below:

  • -b Boot volume
  • -d Dump volume
  • -r Root volume
  • -s Swap volume
  • -R Recover any missing links
  • -v Verbose mode
# lvlnboot -r /dev/vg00/lvol3
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf
# lvlnboot -b /dev/vg00/lvol1
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf
# lvlnboot -s /dev/vg00/lvol2
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf
# lvlnboot -d /dev/vg00/lvol2
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf

We have already seen this command in root disk mirroring.

This concludes our LVM command tutorials!! Feel free to drop in any queries you have.

LVM commands tutorial: Part 3: Logical Volume (lvextend, lvreduce, lvchange)

A series of tutorials to learn LVM commands. In this part, learn how to extend, reduce, and change the state of a logical volume (lvextend, lvreduce, lvchange).

In continuation of the last part on logical volumes, we will be seeing more commands on lvols in this post. Previous posts of this LVM command tutorial can be found on KernelTalks.

Logical volumes, like VGs, can be extended and shrunk. We will be seeing the lvextend, lvreduce, and lvchange commands in this post.

Command: lvextend

To extend a logical volume, you should have enough free space within that VG. The command syntax is pretty similar to the lvcreate command for size. The only catch is that you need to supply the final required size to the command. For example, if the current LV size is 1GB and you want to extend it by 2GB, then you need to give the final 3GB size in the command argument.

# lvextend -L 3072 /dev/vg01/lvol1
Logical volume "/dev/vg01/lvol1" has been successfully extended.
Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf

Another important option is for mirror copies. It plays a vital role in root disk mirroring. -m is the option, with the number of mirror copies as an argument.

# lvextend -m 1 /dev/vg00/lvol1 /dev/disk/disk2_p2
The newly allocated mirrors are now being synchronized. This operation will
take some time. Please wait ....
Logical volume "/dev/vg00/lvol1" has been successfully extended.
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf

Command: lvreduce

This command is used for decreasing the number of mirror copies or decreasing the size of an LV. This is a data-destroying command, hence make sure you have the data of the related file system backed up first. The size and mirror-copy options work the same for this command as well: -L for the reduced size in MB, -l for the number of LEs to be reduced, and -m for the number of mirror copies to be reduced.

# lvreduce -L 500 /dev/vg01/lvol1
When a logical volume is reduced useful data might get lost;
do you really want the command to proceed (y/n) : y
Logical volume "/dev/vg01/lvol1" has been successfully reduced.
Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf

While reducing mirror copies, if one of the PVs has failed or is missing, the command won't run successfully. You need to supply the -k option, which will proceed to remove the mirror even when the PV is missing.
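
For example, removing a mirror copy when one of its PVs has failed might look like the below (a sketch; the LV path is illustrative):

# lvreduce -m 0 -k /dev/vg00/lvol1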

Command: lvchange

This command is used for changing the characteristics of an LV. There are numerous options that can be used:

  • -a y/n Activate or deactivate LV
  • -C y/n Change contiguous allocation policy
  • -D y/n Change distributed allocation policy
  • -p w/r Set permission
  • -t timeout Set timeout in seconds
  • -M y/n Change mirror write cache flag
  • -d p/s Change scheduling policy
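
For instance (a sketch; the LV path is illustrative):

# lvchange -a n /dev/vg01/lvol1     #DEACTIVATE LV
# lvchange -t 60 /dev/vg01/lvol1    #SET IO TIMEOUT TO 60 SECONDS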

This is the end of the second post on LV commands. In the next post, we will see lvsync and lvlnboot commands.

LVM commands tutorial: Part 3: Logical Volume (lvcreate, lvdisplay, lvremove)

A series of tutorials to learn LVM commands. In this part, learn how to create and delete a logical volume and view its details (lvcreate, lvdisplay, lvremove).

This is the last part of the LVM commands tutorial. Previously, we have seen the physical volume and volume group commands, which can be found on KernelTalks.

Logical volumes are small slices carved out of the physical volume storage space which is collectively available in the volume group. For more details, check LVM legends.

Command: lvcreate

This command is used to create a new logical volume. Logical volumes are mounted on directories as mount points, so the logical volume size is the size you want for the mount point. Use a command like below:

# lvcreate -L 1024 /dev/vg01
Logical volume "/dev/vg01/lvol1" has been successfully created with character device "/dev/vg01/rlvol1"
Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf

In the above command, you need to supply the size in MB (1 GB in the above example) to the -L argument, and the volume group name in which you need to create that LV. If no name is given in the command, then by default the command creates the LV with the name /dev/vg01/lvolX (X being the next available number).

This command supports the below options:

  • -l Number of LEs
  • -n LV Name
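
For instance, both options together (a sketch; the LV name datalv is hypothetical):

# lvcreate -l 32 -n datalv /dev/vg01

If the VG extent size is 32MB, this creates the same 1GB volume as the -L 1024 example above, but named /dev/vg01/datalv.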

The created LV's details can be seen using the lvdisplay command.

Command: lvdisplay

We have seen above how to create an LV; now we will see how to view its details. This command is analogous to pvdisplay for PVs and vgdisplay for VGs. It shows you details like name, the volume group it belongs to, size, permission, status, allocation policy, etc.

# lvdisplay /dev/vg01/lvol1
--- Logical volumes ---
LV Name                     /dev/vg01/lvol1
VG Name                     /dev/vg01
LV Permission               read/write
LV Status                   available/syncd
Mirror copies               0
Consistency Recovery        MWC
Schedule                    parallel
LV Size (Mbytes)            1024
Current LE                  32
Allocated PE                32
Stripes                     0
Stripe Size (Kbytes)        0
Bad block                   on
Allocation                  strict
IO Timeout (Seconds)        default

More detailed output can be obtained with the -v option. In this detailed output, you can get the LE details, where they reside, and the LV distribution across disks.

# lvdisplay -v /dev/vg01/lvol1
--- Logical volumes ---
LV Name                     /dev/vg01/lvol1
VG Name                     /dev/vg01

----- Output clipped ----

   --- Distribution of logical volume ---
   PV Name                 LE on PV  PE on PV
   /dev/disk/disk22        32        32

   --- Logical extents ---
   LE    PV1                     PE1   Status 1
   00000 /dev/disk/disk22        00000 current
   00001 /dev/disk/disk22        00001 current
   00002 /dev/disk/disk22        00002 current
   00003 /dev/disk/disk22        00003 current
   00004 /dev/disk/disk22        00004 current
   00005 /dev/disk/disk22        00005 current
   00006 /dev/disk/disk22        00006 current
   00007 /dev/disk/disk22        00007 current
   00008 /dev/disk/disk22        00008 current
   00009 /dev/disk/disk22        00009 current
   00010 /dev/disk/disk22        00010 current
   00011 /dev/disk/disk22        00011 current
   00012 /dev/disk/disk22        00012 current
   00013 /dev/disk/disk22        00013 current
   00014 /dev/disk/disk22        00014 current

----- output truncated -----

Command: lvremove

Removing a logical volume is a data-destroying task. Make sure you take a backup of the data within the mount point, then empty it and stop all user/app access to it. If the LV is not empty, then the command will prompt you for confirmation to proceed.

# lvremove /dev/vg01/lvol1
The logical volume "/dev/vg01/lvol1" is not empty;
do you really want to delete the logical volume (y/n) : y
Logical volume "/dev/vg01/lvol1" has been successfully removed.
Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf

Once an lvol is deleted, its number is again available for the next new lvol created in the same VG. All PEs assigned to this LV will be released as free PEs, and hence the free space in the VG will increase.

We will be seeing how to extend and reduce an LV, and also how to activate or deactivate an LV, in the next post.