Step-by-step root disk mirroring guide for HP-UX running on Itanium hardware. Learn the related commands, their arguments, and their purpose in the mirroring process.

All HP blade servers ship with two hard drives (on which the OS is normally installed) configured in hardware RAID 1 by default. This means they form a mirror, giving the server redundancy: if one disk fails, the server continues to run from the second disk. But with vPars, IVMs, and blades where the hardware RAID has been broken manually, we need to mirror the OS disks in software. This is called root mirroring, i.e. mirroring the root volume group (the OS) across two physical disks to achieve redundancy. Let's walk through root disk mirroring step by step for HP-UX running on the Itanium platform.
Step 1
Create a partition file that will be used to define the partitions on the physical disk. The file must contain the partition count on the first line, followed by one partition entry per line (HP-UX echo expands the \n escapes):

# echo "3\nEFI 400MB\nHPUX 100%\nHPSP 500MB" > /tmp/partitionfile
Here we define three partitions (the first line gives the count): an EFI partition of 400MB, which houses the EFI shell; an HP-UX partition taking the remaining 100% of the disk, which is the normal data partition holding the OS; and an HPSP (HP Service Partition) of 500MB, which houses HP's own service utilities.
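On shells where echo does not expand \n escapes, the same partition file can be built portably with printf, which writes one entry per line regardless of echo behavior. A minimal sketch; the file path /tmp/partitionfile.demo is a stand-in used here for illustration:

```shell
# Build the idisk partition description file: partition count on the
# first line, then one "TYPE SIZE" entry per line.
# /tmp/partitionfile.demo is a stand-in path for this illustration.
printf '%s\n' "3" "EFI 400MB" "HPUX 100%" "HPSP 500MB" > /tmp/partitionfile.demo
cat /tmp/partitionfile.demo
```

The resulting file is what idisk -wf expects in the next step.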
Step 2
Identify the disk on the server that is to be mirrored with the current root disk. This can be done using the ioscan -fnCdisk and insf -e -C disk commands. Once identified (let's say disk2, for example), write the above partition file onto the disk using the idisk command.
# echo yes | idisk -wf /tmp/partitionfile /dev/rdisk/disk2
idisk version: 1.43
********************** WARNING ***********************
If you continue you may destroy all data on this disk.
Do you wish to continue (yes/no)? yes

EFI Primary Header:
        Signature                = EFI PART
        Revision                 = 0x10000
        HeaderSize               = 0x5c
        HeaderCRC32              = 0x5ca983b2
        MyLbaLo                  = 0x1
        AlternateLbaLo           = 0x3fffff
        FirstUsableLbaLo         = 0x40
        LastUsableLbaLo          = 0x3fffaf
        Disk GUID                = f7cb4b9c-8f2d-11dc-8000-d6217b60e588
        PartitionEntryLbaLo      = 0x2
        NumberOfPartitionEntries = 0xc
        SizeOfPartitionEntry     = 0x80
        PartitionEntryArrayCRC32 = 0x4ec7aafc

Primary Partition Table (in 512 byte blocks):
        Partition 1 (EFI):
                Partition Type GUID   = c12a7328-f81f-11d2-ba4b-00a0c93ec93b
                Unique Partition GUID = f7cb52cc-8f2d-11dc-8000-d6217b60e588
                Starting Lba          = 0x40
                Ending Lba            = 0xf9fff
        Partition 2 (HP-UX):
                Partition Type GUID   = 75894c1e-3aeb-11d3-b7c1-7b03a0000000
                Unique Partition GUID = f7cb52f4-8f2d-11dc-8000-d6217b60e588
                Starting Lba          = 0xfa000
                Ending Lba            = 0x337fff
        Partition 3 (HPSP):
                Partition Type GUID   = e2a1e728-32e3-11d6-a682-7b03a0000000
                Unique Partition GUID = f7cb5312-8f2d-11dc-8000-d6217b60e588
                Starting Lba          = 0x338000
                Ending Lba            = 0x3fffbf

EFI Alternate Header:
        Signature                = EFI PART
        Revision                 = 0x10000
        HeaderSize               = 0x5c
        HeaderCRC32              = 0xfcc1ebde
        MyLbaLo                  = 0x3fffff
        AlternateLbaLo           = 0x1
        FirstUsableLbaLo         = 0x40
        LastUsableLbaLo          = 0x3fffbf
        Disk GUID                = f7cb4b9c-8f2d-11dc-8000-d6217b60e588
        PartitionEntryLbaLo      = 0x3fffdf
        NumberOfPartitionEntries = 0xc
        SizeOfPartitionEntry     = 0x80
        PartitionEntryArrayCRC32 = 0x4ec7aafc

Alternate Partition Table (in 512 byte blocks):
        Partition 1 (EFI):
                Partition Type GUID   = c12a7328-f81f-11d2-ba4b-00a0c93ec93b
                Unique Partition GUID = f7cb52cc-8f2d-11dc-8000-d6217b60e588
                Starting Lba          = 0x40
                Ending Lba            = 0xf9fff
        Partition 2 (HP-UX):
                Partition Type GUID   = 75894c1e-3aeb-11d3-b7c1-7b03a0000000
                Unique Partition GUID = f7cb52f4-8f2d-11dc-8000-d6217b60e588
                Starting Lba          = 0xfa000
                Ending Lba            = 0x337fff
        Partition 3 (HPSP):
                Partition Type GUID   = e2a1e728-32e3-11d6-a682-7b03a0000000
                Unique Partition GUID = f7cb5312-8f2d-11dc-8000-d6217b60e588
                Starting Lba          = 0x338000
                Ending Lba            = 0x3fffbf

Legacy MBR (MBR Signatures in little endian):
        MBR Signature = 0xd44acbf7
        Protective MBR
Step 3
Make the disk bootable with mkboot and write boot instructions to it. Here we use the -lq argument in the boot string to make sure the server boots without quorum when one of the disks fails.

# mkboot -e -l /dev/rdisk/disk2
# mkboot -a "boot vmunix -lq" /dev/rdisk/disk2
Verify that the boot string is properly written to the first partition, i.e. the EFI partition, of the disk:

# efi_cp -d /dev/rdisk/disk2_p1 -u /EFI/HPUX/AUTO /tmp/x
# cat /tmp/x
boot vmunix -lq
Step 4
Now we bring this disk into LVM and mirror it. Create a PV on the disk. Make sure you use the -B argument so that LVM knows it is a bootable PV.

# pvcreate -B -f /dev/rdisk/disk2_p2
Physical volume "/dev/rdisk/disk2_p2" has been successfully created.
Step 5
Extend current root VG vg00 to accommodate this new disk.
# vgextend vg00 /dev/disk/disk2_p2
Volume group "vg00" has been successfully extended.
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf
Step 6
Mirror all logical volumes on the current boot disk onto the new disk by passing the mirror-copies argument -m 1 to the lvextend command.

# lvextend -m 1 /dev/vg00/lvol1 /dev/disk/disk2_p2
The newly allocated mirrors are now being synchronized. This operation will take some time. Please wait ....
Logical volume "/dev/vg00/lvol1" has been successfully extended.
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf
This task can be eased with a for loop. Check the number of lvols in your current VG and run a for loop like the one below.

# for i in 1 2 3 4 5 6 7 8 9 10
> do
> lvextend -m 1 /dev/vg00/lvol$i /dev/disk/disk2_p2
> done
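Before running the real loop, it can help to dry-run it with echo in place of lvextend, so you can eyeball the exact commands that will be executed. A sketch, assuming eight lvols and the same disk name as above; the output file /tmp/mirror_cmds.demo is a stand-in for illustration:

```shell
# Dry run: print each lvextend invocation instead of executing it.
# Adjust the lvol numbers to match what `ls /dev/vg00` shows on your system.
for i in 1 2 3 4 5 6 7 8
do
    echo "lvextend -m 1 /dev/vg00/lvol$i /dev/disk/disk2_p2"
done > /tmp/mirror_cmds.demo
cat /tmp/mirror_cmds.demo
```

Once the printed commands look right, drop the echo and the redirection and run the loop for real.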
Step 7
The second-to-last step is to set the boot, swap, dump, and root volumes on the new disk.
# lvlnboot -r /dev/vg00/lvol3
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf
# lvlnboot -b /dev/vg00/lvol1
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf
# lvlnboot -s /dev/vg00/lvol2
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf
# lvlnboot -d /dev/vg00/lvol2
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf
Verify it
# lvlnboot -v
Boot Definitions for Volume Group /dev/vg00:
Physical Volumes belonging in Root Volume Group:
        /dev/disk/disk1_p2 -- Boot Disk
        /dev/disk/disk2_p2 -- Boot Disk
Boot: lvol1   on:   /dev/disk/disk1_p2
                    /dev/disk/disk2_p2
Root: lvol3   on:   /dev/disk/disk1_p2
                    /dev/disk/disk2_p2
Swap: lvol2   on:   /dev/disk/disk1_p2
                    /dev/disk/disk2_p2
Dump: lvol2   on:   /dev/disk/disk1_p2, 0
Step 8
Add the new disk's name to the /stand/bootconf file. Here l means the disk is managed by LVM.

# echo "l /dev/disk/disk2_p2" >> /stand/bootconf
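If this step is re-run (for example after a failed attempt), a plain >> appends a duplicate entry each time. A guarded append keeps the file clean. A sketch: BOOTCONF here is a stand-in path for /stand/bootconf so the snippet can be tried safely:

```shell
# Guarded append: only add the bootconf entry if it is not already present.
# BOOTCONF is a stand-in for /stand/bootconf in this illustration.
BOOTCONF=/tmp/bootconf.demo
ENTRY="l /dev/disk/disk2_p2"
touch "$BOOTCONF"
# grep -x matches the whole line, -F takes the pattern literally
grep -qxF "$ENTRY" "$BOOTCONF" || echo "$ENTRY" >> "$BOOTCONF"
grep -qxF "$ENTRY" "$BOOTCONF" || echo "$ENTRY" >> "$BOOTCONF"  # second run is a no-op
cat "$BOOTCONF"
```

Running the snippet twice leaves a single entry in the file.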
Lastly, set the new disk's hardware path as the alternate boot path, and you are done. The hardware path of the new disk can be obtained from the ioscan output.

# setboot -a 2/0/0/2/0.6.0
Verify it
# /usr/sbin/setboot
Primary bootpath : 2/0/0/3/0.0x6.0x0 (/dev/rdisk/disk1)
HA Alternate bootpath :
Alternate bootpath : 2/0/0/2/0.0x6.0x0 (/dev/rdisk/disk2)
Autoboot is ON (enabled)
Hyperthreading : OFF
               : OFF (next boot)
Your root mirror on the new disk is complete!
You can reboot the system and boot from the alternate disk from EFI to test it.
Good morning,
On a server where I was going through the steps to set up the mirror, I ran into the following.
1. The vg_home_users VG has two filesystems mounted, /home/or11gr24 and /oracle.
2. I made the mistake of running sudo idisk -wf /tmp/partitionfile /dev/rdisk/disk11 and sudo mkboot -e -l /dev/rdisk/disk11, when I then realized it should have been done on the PV /dev/rdisk/disk3.
3. So far I have tried to fix the issue as follows:
sudo mount -a
UX:vxfs mount: ERROR: V-3-26881: Cannot be mounted until a full file system check (fsck) is performed on /dev/vg_home/lvol. Please refer to fsck_vxfs man page for details.
mount: /dev/vg00/lvol1 is already mounted on /stand
mount: /oracle: I/O error
bdf
bdf: /oracle: I/O error
sudo fsck -F vxfs /dev/vg_home_users/rlv_home_or11gr24
UX:vxfs fsck: WARNING: V-3-20836: file system had I/O error(s) on meta-data.
UX:vxfs fsck: ERROR: V-3-26248: could not read from block offset devid/blknum 0/4194296. Device containing meta data may be missing in vset or device too big to be read on a 32 bit system.
UX:vxfs fsck: ERROR: V-3-20694: cannot initialize aggregate
file system check failure, aborting …
sudo cat /etc/fstab |grep vg_home_users
/dev/vg_home_users/lv_home_or11gr24 /home/or11gr24 vxfs rw,suid,largefiles,delaylog,datainlog 0 2
/dev/vg_home_users/lvora9 /oracle vxfs rw,suid,largefiles,delaylog,nodatainlog 0 2
fsck -F vxfs -y -o full /dev/vg_home_users/disk11
UX:vxfs fsck: ERROR: V-3-20945: cannot stat /dev/vg_home_users/disk11
sudo vgreduce /dev/vg_home_users /dev/disk/disk11
vgreduce: Physical volume “/dev/disk/disk11” could not be removed since some of its
physical extents are still in use.
sudo vgchange -a n /dev/vg_home_users
vgchange: Couldn’t deactivate volume group “/dev/vg_home_users”:
Device busy
sudo umount /home/or11gr24
umount: cannot find /home/or11gr24 in /etc/mnttab
cannot unmount /home/or11gr24
sudo umount /oracle
umount: cannot unmount /dev/vg_home_users/lvora9 : Device busy
umount: return error 1.
sudo umount -V /home/or11gr24
umount: cannot find /home/or11gr24 in /etc/mnttab
cannot unmount /home/or11gr24
sudo umount -V /oracle
umount /oracle
sudo vgreduce /dev/vg_home_users /dev/disk/disk11
vgreduce: Physical volume “/dev/disk/disk11” could not be removed since some of its
physical extents are still in use.
sudo vgreduce -f /dev/vg_home_users /dev/disk/disk11
Usage: vgreduce
[-A Autobackup]
[-l]
VolumeGroupName PhysicalVolumePath …
vgreduce
[-A Autobackup]
-f VolumeGroupName
only VolumeGroupName is needed for -f option
sudo vgchange -a n /dev/vg_home_users
vgchange: Couldn’t deactivate volume group “/dev/vgxx”: Device busy
ll /dev/vg_home_users/group
crw-r----- 1 root sys 128 0x001000 Feb 2 2020 /dev/vg_home_users/group
sudo vgscan -v
Unable to scan “Physical volumes” on system
Unable to scan “Physical volumes” on system
sudo strings /etc/lvmtab
/dev/vg00
/dev/disk/disk99_p2
/dev/disk/disk3_p2
/dev/vguser
tI{Em
/dev/disk/disk96
/dev/disk/disk97
/dev/disk/disk40
/dev/disk/disk47
/dev/vgswap
/dev/disk/disk92
/dev/disk/disk93
/dev/disk/disk1
/dev/disk/disk21
sudo pvremove /dev/rdisk/disk11
pvremove: The physical volume “/dev/rdisk/disk11” belongs to volume group “/dev/vg_home_users”.
fsck -F vxfs -o full /dev/vg_home_users/disk11
UX:vxfs fsck: ERROR: V-3-20945: cannot stat /dev/vg_home_users/disk11
sudo rmboot /dev/rdisk/disk11
lvrmboot -r /dev/vg_home_users
No valid Boot Data Reserved Areas exist on any of the
Physical Volumes in the Volume Group “/dev/vg_home_users”.
Issue the pvcreate -B command to create a Boot Area on a Physical Volume.