Yearly Archives: 2016

Adding extra or secondary swap in HPUX

Learn how to add secondary or extra swap on a running HPUX system without any downtime, using free space in the root VG as swap.

When the system continuously runs low on memory and swap, it's time to troubleshoot. If you are still running out of memory after troubleshooting and all available app/OS tuning, you can try adding extra swap before considering adding RAM to the server, which involves cost and resources on the parent machine.

Step 1

Before adding extra swap, check how much space is available in the root volume group vg00. Use the vgdisplay command to get the Free PE and PE Size numbers.

# /usr/sbin/vgdisplay vg00
--- Volume groups ---
VG Name                     /dev/vg00
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      9
Open LV                     9
Max PV                      16
Cur PV                      2
Act PV                      2
Max PE per PV               4384
VGDA                        4
PE Size (Mbytes)            16
Total PE                    6544
Alloc PE                    5978
Free PE                     566
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0

Here we have 566 free PEs of 16MB each, which adds up to roughly 8.8GB of free space in the root VG. We can use part of this 8.8GB for the extra swap.
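As a quick check, the free space can be worked out from the vgdisplay output on the command line (assuming bc is available on the system): 566 PEs x 16MB gives 9056MB, i.e. roughly 8.8GB.

# echo "566 * 16" | bc
9056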

Check the current swap configuration. Here you can see that, as in the default HPUX configuration, lvol2 is used as device swap.

# /usr/sbin/swapinfo -tam
             Mb      Mb      Mb   PCT  START/      Mb
TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME
dev       43008       0   43008    0%       0       -    1  /dev/vg00/lvol2
reserve       -    1963   -1963
memory    40861    9261   31600   23%
total     83869   11224   72645   13%       -       0    -

Step 2

Create a new contiguous logical volume with bad block relocation disabled and the size you require. Let's make an LV of 2GB.

# lvcreate -L 2048 -C y -r n /dev/vg00
Logical volume "/dev/vg00/lvol10" has been successfully created with character device "/dev/vg00/rlvol10"
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf

Step 3

Start swap on this lvol. Add the -f argument to force it if the command below fails.

# swapon -p 1 /dev/vg00/lvol10
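If the command fails (for example, because the lvol does not look empty to swapon), the forced form mentioned above would be:

# swapon -f -p 1 /dev/vg00/lvol10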

Step 4

Edit /etc/fstab so this LV is activated as swap on every boot. Add the below entry:

/dev/vg00/lvol10 ... swap pri=1 0 1

Step 5

Check the swap configuration again. You can now see the new lvol added to the swap.

# /usr/sbin/swapinfo -tam
             Mb      Mb      Mb   PCT  START/      Mb
TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME
dev       43008       0   43008    0%       0       -    1  /dev/vg00/lvol2
dev        2048       0    2048    0%       0       -    2  /dev/vg00/lvol10    
reserve       -    1963   -1963
memory    40861    9261   31600   23%
total     85917   11224   74693   13%       -       0    -

Adding new storage LUN to integrity virtual machine (iVM) in HPUX

Step-by-step procedure to add storage LUNs to an integrity virtual machine on an HPUX host. Further, learn how to use those LUNs in LVM on the guest server.

Steps to add a new LUN to an integrity virtual machine (iVM) in HPUX and use it within an existing VG, or to create a new VG on it. In this process, storage LUNs are always presented to the physical host server; from the host, they are attached to the virtual guest server running on it.

Step 1

Identify the new LUN on the HP iVM host server. When the new LUN is presented to the host, run the ioscan command to scan for new disks. After ioscan, run the insf command to make sure device special files are created for all available hardware.

# ioscan -fnCdisk
# insf -e

Now your new LUN is identified in the kernel. Match the LUN ID in the storage utility (syminq for EMC storage, evainfo for HP EVA storage, etc.) and get the related disk number. We are using the agile naming convention here, so let's take /dev/rdisk/disk10 and /dev/rdisk/disk11 as the newly identified disks.
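If you need to cross-check the host-side disk against the LUN ID given by the storage team, the WWID reported by the mass storage stack can help. A possible check on 11iv3, using our assumed disk names, is:

# scsimgr get_attr -D /dev/rdisk/disk10 -a wwid
# scsimgr get_attr -D /dev/rdisk/disk11 -a wwid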

Step 2

Make disks LVM ready by using pvcreate.

# pvcreate /dev/rdisk/disk10
Physical volume "/dev/rdisk/disk10" has been successfully created.

# pvcreate /dev/rdisk/disk11
Physical volume "/dev/rdisk/disk11" has been successfully created.

Step 3

Attach these disks to the iVM (guest) running on the host. Assume vmserver1 is our iVM here.

# hpvmmodify -P vmserver1 -a disk:avio_stor::disk:/dev/rdisk/disk10
# hpvmmodify -P vmserver1 -a disk:avio_stor::disk:/dev/rdisk/disk11
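To confirm the disks now appear in the guest's configuration, you can review the VM resources from the host. A possible check (the -d option, assumed available in your HPVM version, prints devices in resource-specification format):

# hpvmstatus -P vmserver1 -d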

Step 4

Once the above commands succeed, the disks are attached to the iVM and need to be scanned in the guest. Log in to the iVM server and scan the new disks the same way we did in steps 1 and 2 on the host. Let's say those disks are identified as /dev/rdisk/disk2 and /dev/rdisk/disk3 on the guest server. Observe that they show up as Virtual disks on the VM.

disk  6  0/0/0/0.2.0  sdisk  CLAIMED  DEVICE  HP Virtual Disk
         /dev/dsk/c0t2d0   /dev/rdsk/c0t2d0

disk  8  0/0/0/0.3.0  sdisk  CLAIMED  DEVICE  HP Virtual Disk
         /dev/dsk/c0t3d0   /dev/rdsk/c0t3d0

Step 5

Complete the LVM tasks on these disks to make the space usable under a mount point.
To create a new VG named vg01:

# mkdir /dev/vg01
# mknod /dev/vg01/group c 64 0x010000
# vgcreate -s 64 -p 60 -e 12500 vg01 /dev/disk/disk2 /dev/disk/disk3
Volume group "/dev/vg01" has been successfully created.
Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf
# lvcreate -L 200 /dev/vg01
Logical volume "/dev/vg01/lvol1" has been successfully created with
character device "/dev/vg01/rlvol1".
# newfs -F vxfs -o largefiles /dev/vg01/rlvol1
 version 7 layout
 204800 sectors, 204800 blocks of size 1024, log size 1024 blocks
 largefiles supported
# mkdir /data
# mount /dev/vg01/lvol1 /data
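To have /data mounted automatically at boot, an /etc/fstab entry along these lines can be added (mount options per your standards):

/dev/vg01/lvol1 /data vxfs delaylog 0 2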

To extend an existing VG named vg02 and the /data1 mount point within it:

# vgextend vg02 /dev/disk/disk2 /dev/disk/disk3
Volume group "vg02" has been successfully extended.
Volume Group configuration for /dev/vg02 has been saved in /etc/lvmconf/vg02.conf
# lvextend -L 512 /dev/vg02/lvol1
Logical volume "/dev/vg02/lvol1" has been successfully extended.
Volume Group configuration for /dev/vg02 has been saved in /etc/lvmconf/vg02.conf
# fsadm -F vxfs -b 524288 /data1
vxfs fsadm: V-3-23585: /dev/vg02/rlvol1 is currently 7731200 sectors - size will be increased
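Once fsadm completes, the new size should be visible at the file system level, for example with bdf:

# bdf /data1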

HPUX Patch naming conventions

HP releases HPUX patch bundles twice a year. A few break-fix patches are released on an as-needed basis. Here are the naming conventions followed by HP.

HP releases OS patches for HPUX every 6 months, i.e. twice a year. For the smaller HPUX patches which are released on an as-needed basis, HP follows the naming convention below.

Patch name format is PHxx_yyyy

Where,

xx = area of patch
CO: General HPUX commands
KL: Kernel patches
NE: Network-specific patch
SS: all other subsystem patches

yyyy = unique number

From the patch name, you can guess its area of impact and plan your activities accordingly. Whether the patch requires a reboot can be determined while downloading the patch from the HP portal, or by running the swinstall command with the -p (preview) argument.
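For example, a preview-only run against a downloaded patch depot would look something like the below; the patch name and depot path here are just placeholders:

# swinstall -p -s /tmp/PHKL_12345.depot PHKL_12345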

HP patches are available at http://software.hp.com/, which redirects you to the HP Software Depot home. An HP Passport login is required to download patches or software from the software depot.

Basics of LVM legends

Get acquainted with LVM (Logical Volume Manager) terms. Learn what a physical volume, logical volume, physical extent, volume group, and logical extent are.

LVM (logical volume manager) legends

PV is a Physical Volume

Any single disk/LUN on the system is identified as a PV. It can be raw or hold a file system. The raw (character) device of a PV is referred to as /dev/rdsk/c0t0d1 (legacy) or /dev/rdisk/disk1 (agile), whereas the block device is /dev/dsk/c0t0d1 (legacy) or /dev/disk/disk1 (agile). Check the PV name in the output below, shown as a block device.

# vgdisplay -v vg00

--- Volume groups ---
VG Name                     /dev/vg00
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      13
Open LV                     13
Max PV                      16
Cur PV                      1
Act PV                      1
Max PE per PV               4355
VGDA                        2
PE Size (Mbytes)            32
Total PE                    4345
Alloc PE                    4303
Free PE                     42
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0

   --- Logical volumes ---
   LV Name                     /dev/vg00/lvol1
   LV Status                   available/syncd
   LV Size (Mbytes)            1024
   Current LE                  32
   Allocated PE                32
   Used PV                     1

   --- Physical volumes ---
   PV Name                     /dev/dsk/c3t0d0s2
   PV Status                   available
   Total PE                    4345
   Free PE                     42
   Autoswitch                  On
   Proactive Polling           On

Physical volume naming conventions:

/dev/rdsk/cxtxdx – Legacy character device file
/dev/rdsk/cxtxdxs2 – Legacy character device file, partition 2
/dev/dsk/cxtxdx – Legacy block device file
/dev/dsk/cxtxdxs2 – Legacy block device file, partition 2
/dev/rdisk/diskx – Persistent character device file
/dev/rdisk/diskx_p2 – Persistent character device file, partition 2
/dev/disk/diskx – Persistent block device file
/dev/disk/diskx_p2 – Persistent block device file, partition 2
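On 11iv3 you can list the mapping between legacy and persistent device files with ioscan, which is handy when outputs mix both conventions; for example:

# ioscan -m dsf /dev/disk/disk1
# ioscan -m dsf                    (lists the mapping for all disks)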

PE is Physical Extent

It is the smallest chunk of a PV that can be allocated to a logical volume; a PV consists of a number of PEs. We always refer to PVs by their device names in LVM commands. In the above example, the PE size is set to 32MB and a total of 4345 PEs are available on the disk, i.e. about 4345 x 32MB ≈ 135GB of raw capacity.

Read our LVM tutorials: LVM cheat sheet.

VG is Volume Group

One or more PVs come together to form a volume group. This grouping lets us slice the combined storage capacity of the disks into smaller volumes of our choice. In the above example, vg00 is a volume group made up of a single PV, and it is sliced into 13 LVs (only one is shown in the example above).

LV is the Logical Volume

It is a slice of a volume group, using some capacity of the PVs to form a smaller volume. It is basically used for a mount point or swap, much like drives (C:, D:) in Windows. We can see one LV and its details in the example above.

LE is Logical Extent

Same as a PE, an LE is the smallest chunk of an LV.

The table below gives you an idea of some limits related to them:

LVs per VG   range: 1-255     default: 255
PVs per VG   range: 1-255     default: 16
PEs per PV   range: 1-65535   default: 1016

From the above table, with a maximum PE size of 64MB and a maximum of 65,535 PEs, one can create at most 64MB x 65,535 ≈ 4TB of file system.

Password file commands

Ever wondered which special commands can be run against the /etc/passwd file? Here is the list of those commands and their uses for the password file.

Here is the list of commands which can be used on the /etc/passwd file.

vipw

This command is used to edit the /etc/passwd file manually. Editing /etc/passwd by hand is not recommended; all changes to user accounts should be carried out using commands like usermod. But if, in some scenario, you do want to edit the password file manually, use this command. It opens the file in the vi editor and locks it against other users, so no other admin on another terminal can open the file for manual editing at the same time. This ensures the integrity of the file.

Also read: Understanding /etc/passwd file.

pwck 

This command checks the integrity of the /etc/passwd file. Once executed, it checks the passwd file and all of its fields, and reports any issues it finds, e.g. if a user's home directory does not exist on the server, it will report it.

# /usr/sbin/pwck

[/etc/passwd] sfmdb:*:107:20::/home/sfmdb:/sbin/sh
        Login directory not found

[/etc/passwd] smmsp:*:109:20::/home/smmsp:/sbin/sh
        Login directory not found

pwconv

It generates the /etc/shadow file, which holds the encrypted user passwords in the second field of each user entry. If /etc/shadow already exists on the system, this command updates the relevant fields for any changes made in /etc/passwd. If your system is trusted (see the tsconvert command), the user password database (Trusted Computing Database) is maintained separately and /etc/shadow does not exist on the system; in that case, this command updates the TCB accordingly.

# /usr/sbin/pwconv
Updating the tcb to match /etc/passwd, if needed.

pwunconv

It reverses the changes made by the pwconv command.
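It takes no special arguments; on a system using /etc/shadow (non-trusted), running it moves the encrypted passwords back into /etc/passwd:

# /usr/sbin/pwunconv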

HPUX boot process

HPUX boot process explained. Learn which processes happen in the background while the HPUX server boots.

This is not a fully detailed boot process. It's a very short summary of what happens during boot, to make it easy to understand and remember (for interviews)!

1) PDC (processor dependent code) gets executed

  • Checks CPU
  • Checks stable storage for the boot path
  • Loads the ISL from the LIF area of the boot disk
  • Here you can halt the boot using the ESC key and run commands like PO, SEA.

2) ISL (Initial system loader) gets loaded

  • Reads the AUTO file for the default boot command
  • Loads and runs hpux from the LIF area
  • Here you can halt the boot process and boot the system into single-user mode. You can pass different options to the SSL, i.e. to hpux which loads the kernel vmunix, such as hpux -is, hpux -lq, hpux -lm (see the examples after this outline).

3) hpux (secondary system loader) gets loaded

  • Uses options and path names from ISL to load the kernel
  • And by default loads vmunix

4) After kernel vmunix gets loaded –

  • Swapper daemon starts with PID 0
  • Kernel runs /sbin/pre_init_rc
  • Kernel calls /sbin/init
  • /sbin/init reads /etc/inittab and calls –
  1. /sbin/ioinit – to scan hardware and build kernel io tree
  2. /sbin/bcheckrc – to check FS listed in /etc/fstab
  3. /sbin/rc – to start additional services like lp, cron, cde
  4. /usr/sbin/getty – to start and show the login prompt to the user.
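For reference, the ISL options mentioned in step 2 are typed at the ISL prompt; typical examples are:

ISL> hpux -is /stand/vmunix          (boot the default kernel to single-user mode)
ISL> hpux -lq /stand/vmunix          (boot with the LVM quorum check disabled)
ISL> hpux -lm /stand/vmunix          (boot into LVM maintenance mode)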

Please note that this is not the exact HPUX boot process; there are variations depending on whether the system is PA-RISC or Itanium. This article gives a fair idea of what happens in the background when HPUX boots.

Run levels in HPUX at a glance

Learn the different run levels in HPUX and their roles. Also, see how to check the run level the system is currently running in.

A run level is the state of a system that determines which system services are spawned. Normally, lower run levels have fewer services available to the user and are mainly used for administrative purposes, while higher levels have more services available and target end users. In HPUX, the highest run levels, like 5 and 6, are kept reserved for future use. We will see the list of run levels and what they offer below.

The current run level in HPUX can be identified using the below command:

# who -r
   .       run-level 3  Jan 19 21:14    3    0    S

The output fields of the above command are as below:
1. A dot (.) indicates that the terminal has seen activity in the last minute and is therefore current.
2. Current run-level
3. Timestamp
4. The current state of init
5. The number of times that state has been previously entered
6. The previous state

Read also: Different usage of ‘who’ command.

List of run levels in HPUX

0 indicates shutdown state
S indicates single-user mode, booted to the local console only, with root FS (RO) mounted
s indicates the same as S, except the current terminal acts as the system console
1 indicates single-user mode with local FS (RW) mounted
2 indicates multi-user state with CDE launched
3 indicates the same as 2 but with NFS
4 indicates GUI (here VUE started instead of CDE)
5,6 indicate reserved states that are not yet defined and are kept for future use
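The default run level that init brings the system to is set by the initdefault entry in /etc/inittab, and a running system can be switched between levels with init. A typical check and switch (the initdefault line shown is the usual default and may differ on your system):

# grep initdefault /etc/inittab
init:3:initdefault:
# init 2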

How to restart NFS in HPUX

Step-by-step procedure to restart NFS services in HPUX. Follow this procedure in the given sequence to stop and start NFS gracefully.

Requirement :

To restart NFS server in HPUX

How to do it :

Please make a note that all exported NFS mount points will be unavailable to all clients during this restart.

Stop NFS

# /sbin/init.d/nfs.server stop
killing nfsd
killing rpc.mountd
# /sbin/init.d/nfs.client stop
killing nfs4cbd
# /sbin/init.d/nfs.core stop
killing nfsmapid
killing rpcbind

Start NFS

# /sbin/init.d/nfs.core start
    Starting NFS CORE networking

    Starting up the rpcbind
        /usr/sbin/rpcbind
# /sbin/init.d/nfs.client start
    Starting NFS CLIENT subsystem

    Starting up nfs4cbd daemon
        /usr/sbin/nfs4cbd
      Starting up nfsmapid daemon
        /usr/sbin/nfsmapid
    Mounting remote NFS file systems ...
    Mounting remote CacheFS file systems ...
# /sbin/init.d/nfs.server start
    Starting NFS SERVER subsystem

    Reading in /etc/dfs/dfstab
    Starting up the mount daemon
        /usr/sbin/rpc.mountd
    Starting up the NFS server daemon
        /usr/sbin/nfsd
      Starting up nfsmapid daemon

Make sure you follow the sequence while stopping and starting as mentioned above.
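As a quick verification after the restart, you can check that the daemons registered with rpcbind and that the exports are visible again; for example:

# rpcinfo -p | grep -E 'nfs|mountd'
# showmount -e localhost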