Step-by-step procedure to add storage LUNs to an Integrity Virtual Machine on an HP-UX host, and to use those LUNs with LVM on the guest server.
These are the steps to add a new LUN to an Integrity Virtual Machine (iVM) on HP-UX and use it within an existing VG, or to create a new VG on it. In this process, storage LUNs are always presented to the physical host server; from the host, they are then attached to the virtual guest server running on it.
Step 1
Identify the new LUN on the HP iVM host server. When the new LUN is presented to the host, run the ioscan command to scan for new disks. After ioscan, run the insf command to make sure all available hardware has its device special files created under /dev.
# ioscan -fnCdisk
# insf -e
Now your new LUN is identified in the kernel. Match the LUN ID in your storage utility (syminq for EMC storage, evainfo for HP EVA storage, etc.) and get the related disk number. We are using the agile naming convention here, so let's say /dev/rdisk/disk10 and /dev/rdisk/disk11 are the newly identified disks.
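If the vendor storage utility is not at hand, HP-UX 11i v3 can also report a LUN's WWID directly, which you can match against the array. A quick check, using one of the example disks from above:
# scsimgr get_attr -D /dev/rdisk/disk10 -a wwid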
Step 2
Make the disks LVM-ready using pvcreate.
# pvcreate /dev/rdisk/disk10
Physical volume "/dev/rdisk/disk10" has been successfully created.
# pvcreate /dev/rdisk/disk11
Physical volume "/dev/rdisk/disk11" has been successfully created.
Step 3
Attach these disks to the iVM (guest) running on the host. Assume vmserver1 is our iVM here.
# hpvmmodify -P vmserver1 -a disk:avio_stor::disk:/dev/rdisk/disk10
# hpvmmodify -P vmserver1 -a disk:avio_stor::disk:/dev/rdisk/disk11
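Before moving on to the guest, you can confirm the disks now appear in the guest's resource list with hpvmstatus, run against our example VM:
# hpvmstatus -P vmserver1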
Step 4
Once the above commands succeed, the disks are attached to the iVM and need to be scanned inside the guest. Log in to the iVM server and scan for the new disks the same way we did in Step 1 on the host (the pvcreate from Step 2 was already done on the host, so it need not be repeated). Let's say those disks are identified as /dev/rdisk/disk2 and /dev/rdisk/disk3 on the guest server. Observe that they are identified as Virtual Disk on the VM.
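For reference, the scan sequence on the guest is the same as in Step 1:
# ioscan -fnCdisk
# insf -e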
disk      6  0/0/0/0.2.0  sdisk  CLAIMED  DEVICE  HP Virtual Disk
             /dev/dsk/c0t2d0   /dev/rdsk/c0t2d0
disk      8  0/0/0/0.3.0  sdisk  CLAIMED  DEVICE  HP Virtual Disk
             /dev/dsk/c0t3d0   /dev/rdsk/c0t3d0
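Note that the guest's ioscan output lists legacy device files (c0t2d0, c0t3d0). On 11i v3 you can map a legacy DSF to the agile (persistent) DSF referenced in the text, for example:
# ioscan -m dsf /dev/rdsk/c0t2d0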
Step 5
Complete the LVM tasks on these disks to make the space usable under a mount point.
To create a new VG named vg01. Here, -s 64 sets a 64 MB extent size, -p 60 caps the number of physical volumes, and -e 12500 caps the physical extents per PV.
# mkdir /dev/vg01
# mknod /dev/vg01/group c 64 0x010000
# vgcreate -s 64 -p 60 -e 12500 vg01 /dev/disk/disk2 /dev/disk/disk3
Volume group "/dev/vg01" has been successfully created.
Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf
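You can review the resulting VG layout, including extent size and free extents, with vgdisplay:
# vgdisplay -v vg01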
# lvcreate -L 200 /dev/vg01
Logical volume "/dev/vg01/lvol1" has been successfully created with
character device "/dev/vg01/rlvol1".
# newfs -F vxfs -o largefiles /dev/vg01/rlvol1
version 7 layout
204800 sectors, 204800 blocks of size 1024, log size 1024 blocks
largefiles supported
# mkdir /data
# mount /dev/vg01/lvol1 /data
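To make the mount persistent across reboots, add it to /etc/fstab. A minimal example entry, assuming the VxFS delaylog default plus the largefiles option to match the newfs above:
/dev/vg01/lvol1 /data vxfs delaylog,largefiles 0 2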
To extend the existing VG named vg02 and grow the mount point /data1 within it
# vgextend vg02 /dev/disk/disk2 /dev/disk/disk3
Volume group "vg02" has been successfully extended.
Volume Group configuration for /dev/vg02 has been saved in /etc/lvmconf/vg02.conf
# lvextend -L 512 /dev/vg02/lvol1
Logical volume "/dev/vg02/lvol1" has been successfully extended.
Volume Group configuration for /dev/vg02 has been saved in /etc/lvmconf/vg02.conf
# fsadm -F vxfs -b 524288 /data1
vxfs fsadm: V-3-23585: /dev/vg02/rlvol1 is currently 7731200 sectors - size will be increased
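fsadm grows a mounted VxFS filesystem online (this requires the OnlineJFS license); finally, confirm the filesystem reflects the new size with bdf:
# bdf /data1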