
Dynamic Root Disk (DRD) configuration in HPUX

Learn how to configure Dynamic Root Disk (DRD) in HPUX. Understand how to clone the root disk, view files on it, and activate and deactivate the DRD disk.

Dynamic Root Disk, aka DRD, is a root disk cloning tool from HP. It aims to preserve system integrity during maintenance activities performed on the root disk. On the DRD cloned disk, you can perform any maintenance activity you planned for the actual live disk without worrying about disturbing the running system. You then activate the cloned disk and reboot the server, which boots from the altered clone. If you find that your changes are not right, you can re-activate your old live root disk and get back to the original state within minutes!

The normal DRD clone disk life cycle is (see the condensed command sketch after the list):

  1. Clone the live root disk
  2. Mount the cloned disk
  3. Make any changes you want on the cloned disk
  4. Activate the cloned disk and reboot the server
  5. The system now boots from the cloned disk (your old live disk is intact!)
  6. If you want to go back to the old state, set the old live disk as the primary boot disk
  7. Reboot the system and your old live disk will boot just as it was.
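
Condensed into commands, the whole cycle looks roughly like this (a sketch only; the target disk path and the depot path are illustrative and will differ on your system):

# /opt/drd/bin/drd clone -v -x overwrite=true -t /dev/dsk/c0t2d0
# /opt/drd/bin/drd mount
# /opt/drd/bin/drd runcmd swinstall -s /tmp/patch123.depot
# /opt/drd/bin/drd activate -x reboot=true

To fall back afterward, set the old live disk as the primary boot device with setboot and reboot.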

Let’s look at the different operations that can be performed with dynamic root disk commands.

1. How to clone the root disk using DRD

DRD has its own set of commands to perform operations on the clone disk. To clone your live root disk, first attach or identify an unused disk of the same technology/model with capacity equal to or greater than the live root disk. Once identified, use the below command:

# /opt/drd/bin/drd clone -v -x overwrite=true -t /dev/dsk/c0t2d0

=======  04/22/16 16:42:47 IST  BEGIN Clone System Image (user=root)  (jobid=testsrv)

       * Reading Current System Information
       * Selecting System Image To Clone
       * Selecting Target Disk
       * Converting legacy DSF "/dev/dsk/c0t2d0" to "/dev/disk/disk6"
       * Selecting Volume Manager For New System Image
       * Analyzing For System Image Cloning
       * Creating New File Systems
       * Copying File Systems To New System Image
       * Making New System Image Bootable
       * Unmounting New System Image Clone
       * System image: "sysimage_001" on disk "/dev/disk/disk6"

=======  04/22/16 17:14:18 IST  END Clone System Image succeeded. (user=root)  (jobid=testsrv)

The DRD binary resides in /opt/drd/bin. Use the clone argument to the drd command and supply the target disk path (which will be the final cloned disk) with the -t option. A few options can be passed with -x; here we used overwrite=true so the disk is overwritten even if data resides on it. Execution takes from 30 minutes to a few hours, depending on your root VG size.

At the end, you can see the system image has been cloned to disk /dev/dsk/c0t2d0, i.e. /dev/disk/disk6. You can check the status of DRD using the below command, which lists all details about the cloned disk.

# /opt/drd/bin/drd status

=======  04/22/16 17:24:21 IST  BEGIN Displaying DRD Clone Image Information (user=root)  (jobid=testsrv)

       * Clone Disk:               /dev/disk/disk6
       * Clone EFI Partition:      AUTO file present, Boot loader present, SYSINFO.TXT not present
       * Clone Creation Date:      04/22/16 16:43:00 IST
       * Clone Mirror Disk:        None
       * Mirror EFI Partition:     None
       * Original Disk:            /dev/disk/disk3
       * Original EFI Partition:   AUTO file present, Boot loader present, SYSINFO.TXT not present
       * Booted Disk:              Original Disk (/dev/disk/disk3)
       * Activated Disk:           Original Disk (/dev/disk/disk3)

=======  04/22/16 17:24:32 IST  END Displaying DRD Clone Image Information succeeded. (user=root)  (jobid=testsrv)

2. How to mount the cloned disk

Once the disk is cloned, you can view the data within it by mounting it. Use the mount argument with the drd command.

# /opt/drd/bin/drd mount

=======  04/22/16 17:30:20 EDT  BEGIN Mount Inactive System Image (user=root)  (jobid=testsrv)

 * Checking for Valid Inactive System Image
 * Locating Inactive System Image
 * Mounting Inactive System Image

=======  04/22/16 17:30:31 EDT  END Mount Inactive System Image succeeded. (user=root)  (jobid=testsrv)

This creates a new VG on your system named drd00 and mounts the cloned disk within it. All your root disk mount points from the cloned disk are mounted under /var/opt/drd/mnts/sysimage_000, e.g. /tmp on the cloned disk will be available at the /var/opt/drd/mnts/sysimage_000/tmp mount point. See the below output:

# bdf
Filesystem          kbytes    used   avail %used Mounted on
/dev/vg00/lvol3    4194304  176968 3985984    4% /
/dev/vg00/lvol1    2097152  158344 1923696    8% /stand
/dev/vg00/lvol8    12582912  846184 11645064    7% /var
/dev/vg00/lvol7    10485760 3128368 7299968   30% /usr
/dev/vg00/lvol6    10485760  456552 9950912    4% /tmp
/dev/vg00/lvol5    10485760 4320288 6117352   41% /opt
/dev/vg00/lvol4    4194304   21304 4140408    1% /home
/dev/drd00/lvol3   4194304  176816 3986136    4% /var/opt/drd/mnts/sysimage_000
/dev/drd00/lvol4   4194304   21304 4140408    1% /var/opt/drd/mnts/sysimage_000/home
/dev/drd00/lvol5   10485760 4329696 6108024   41% /var/opt/drd/mnts/sysimage_000/opt
/dev/drd00/lvol1   2097152  158408 1923696    8% /var/opt/drd/mnts/sysimage_000/stand
/dev/drd00/lvol6   10485760  456536 9950928    4% /var/opt/drd/mnts/sysimage_000/tmp
/dev/drd00/lvol7   10485760 3196640 7232232   31% /var/opt/drd/mnts/sysimage_000/usr
/dev/drd00/lvol8   12582912  876016 11615544    7% /var/opt/drd/mnts/sysimage_000/var
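
Since the clone's filesystems are now mounted under /var/opt/drd/mnts/sysimage_000, you can inspect files on the clone directly from the live system, for example:

# cat /var/opt/drd/mnts/sysimage_000/etc/hosts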

You can unmount the DRD cloned disk using the drd umount command.

# /opt/drd/bin/drd umount -v 

=======  04/22/16 17:30:45 IST  BEGIN Unmount Inactive System Image (user=root)  (jobid=testsrv)

       * Checking for Valid Inactive System Image
       * Locating Inactive System Image
       * Preparing To Unmount Inactive System Image
       * Unmounting Inactive System Image
       * System image: "sysimage_001" on disk "/dev/disk/disk6"

=======  04/22/16 17:30:58 IST  END Unmount Inactive System Image succeeded. (user=root)  (jobid=testsrv)

3. Different tasks which can be performed on the cloned DRD disk

There are various maintenance activities you can perform on this cloned DRD disk, such as patch installation, manually editing system files, and tuning static kernel parameters.

To execute tasks on the cloned disk, you supply the command as an argument to drd runcmd. Note that runcmd only permits DRD-safe commands (for example view, swinstall, swremove, swlist, and kctune). So, to view the /etc/hosts file in the cloned image, use drd runcmd view /etc/hosts.

# /opt/drd/bin/drd runcmd kctune -B nproc+=100

=======  04/22/16 18:15:54 IST  BEGIN Executing Command On Inactive System Image (user=root)  (jobid=testsrv)

       * Checking for Valid Inactive System Image
       * Analyzing Command To Be Run On Inactive System Image
       * Locating Inactive System Image
       * Accessing Inactive System Image for Command Execution
       * Setting Up Environment For Command Execution
       * Executing Command On Inactive System Image
       * Executing command: "/usr/sbin/kctune -B nproc+=100"
WARNING: The backup behavior 'yes' is not supported in alternate root
         environments.  The behavior 'once' will be used instead.
       * The automatic 'backup' configuration has been updated.
       * Future operations will ask whether to update the backup.
       * The requested changes have been applied to the currently
         running configuration.
Tunable            Value  Expression  
nproc    (before)   4200  Default     
         (now)      4300  4300        
       * Command "/usr/sbin/kctune -B nproc+=100" completed with the return code "0".
       * Cleaning Up After Command Execution On Inactive System Image

=======  04/22/16 18:16:23 IST  END Executing Command On Inactive System Image succeeded. (user=root)  (jobid=testsrv)

See the above example, where I tuned a kernel parameter (nproc) within the cloned disk.
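
To confirm the change landed only on the clone, you can compare the tunable on the inactive image against the running system (a quick check; querying a single tunable by name is standard kctune usage):

# /opt/drd/bin/drd runcmd kctune nproc
# kctune nproc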

You can even install patches using a command like drd runcmd swinstall -s /tmp/patch123.depot. Even a patch that requires a reboot can be installed this way: since you are installing it on the cloned (non-live) root disk, the server won’t be rebooted. To make these changes live on your server, you need to boot the server from the cloned disk.
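
As a sketch, assuming a depot at /tmp/patch123.depot (the depot path and the patch name below are illustrative), it is safer to list the depot contents first and then install on the inactive image:

# /opt/drd/bin/drd runcmd swlist -s /tmp/patch123.depot
# /opt/drd/bin/drd runcmd swinstall -s /tmp/patch123.depot PHSS_12345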

4. How to activate the DRD cloned disk

To activate the dynamic root disk, you need to run the drd activate command. Under the hood, this command sets your cloned disk path as the primary boot path, which you could also do with the setboot command!
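
For reference, the manual equivalent would be pointing the primary boot path at the clone's hardware path with setboot (the path below is illustrative; it matches the activate output further down):

# setboot -p 0/0/0/0.0x2.0x0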

# /opt/drd/bin/drd activate -x reboot=true

=======  04/22/16 18:20:21 IST  BEGIN Activate Inactive System Image (user=root)  (jobid=vm19)

       * Checking for Valid Inactive System Image
       * Reading Current System Information
       * Locating Inactive System Image
       * Determining Bootpath Status
       * Primary bootpath : 0/0/0/0.0x0.0x0 before activate.
       * Primary bootpath : 0/0/0/0.0x2.0x0 after activate.
       * Alternate bootpath : 0/0/0/0.0x1.0x0 before activate.
       * Alternate bootpath : 0/0/0/0.0x1.0x0 after activate.
       * HA Alternate bootpath :  before activate.
       * HA Alternate bootpath :  after activate.
       * Activating Inactive System Image
       * Rebooting System

If you set reboot to false, the command just sets the primary boot disk path and exits. When you manually reboot the system later, it will boot from the cloned disk.

If you don’t choose auto-reboot, you also get a chance to reverse the activate operation using the deactivate command argument.
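
A minimal sketch of that deferred flow:

# /opt/drd/bin/drd activate -x reboot=false
# /opt/drd/bin/drd deactivate

After drd deactivate, the primary boot path points back at the original disk, so a later manual reboot stays on the current live image.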

5. After booting from the cloned disk

When you boot your system from the dynamic root disk, the below things change (see the setboot sketch after this list):

  1. Root VG mirroring will be missing
  2. The past live root disk will be intact
  3. The past live root disk will be removed from the setboot primary/alternate boot path settings
  4. You have to restore the root mirror
  5. You have to check and set the alternate boot path
  6. Your system will have all the changes (patch installs, kernel tuning) you made on the cloned disk
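
For the boot path checks in points 4 and 5 above, setboot without arguments displays the current primary/alternate paths, and -a sets the alternate path (the hardware path shown is illustrative, taken from the activate output above); restoring the root mirror itself follows the standard LVM mirroring procedure:

# setboot
# setboot -a 0/0/0/0.0x0.0x0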

Dynamic Root Disk is a very powerful tool when it comes to cutting down your downtime. If you have a small downtime window and need to install a large number of patches that require a reboot, patch the cloned disk in advance and just reboot the server during your short window!