Category Archives: HPUX

Why HPUX is losing its market grip

A few observations on why HPUX is losing its grip in the market, and how virtualization, cloud, and costs affect HPUX's existence in the market.


Really? Is HPUX really losing the market? Kind of, yes. But it is still one of the preferred OSes for mission-critical environments like the defense industry because of its reliability and security. HPUX, one of the oldest OSes (35 years old), released in 1982, is no doubt one of the best server OSes we have seen in decades! Being a core HPUX certified engineer, obviously I appreciate it more than IBM’s AIX and Sun Microsystems’ Solaris flavors.

But nowadays I am seeing fewer companies and corporates going for HPUX, maybe because of the cost involved. Virtualization, cloud computing, and open-source Linux OS flavors have become very cost-effective choices for corporates. The high availability offered by virtualization and cloud computing wins over the reliability of Unix when it comes to keeping the bill low. HPUX is a hardware-dependent OS, i.e. it runs only on HP hardware (which is expensive), and it also involves costs for the OS license itself, including prime utilities like Glance, so choosing HPUX is a heavy-pocket decision. In today's world of cost-cutting, this is obviously one of the reasons companies are not opting for HPUX.

Read also : How to learn, practice HPUX

The boom in cloud computing also has an effect on traditional in-house data centers and hence on proprietary OSes like HPUX. Most of the cloud world is powered by Linux, which adversely affects HPUX market share, and for that matter AIX and Solaris too. People opt for the cloud since it offers cheap services with no maintenance. I have even seen companies moving away from HPUX and shifting servers to Linux or the cloud for the sake of billing numbers.

Another major step back is that HP has backed off from new releases of HPUX. It’s like they opted to kill their own lion. The latest HPUX roadmap doesn’t even show HPUX v4 and v5, which existed in old roadmaps. This clearly indicates they have stopped development of new versions and will continue the current HPUX v3 as the latest release till 2025. This is a major setback, since customers look out for a service's further development plans when opting for it.

I wish HP would wake up to new versions and we would see HPUX back alive and roaring in the IT world! Still, it’s not the end of an era! There are many crucial companies and sectors still run by HPUX, because crucial data needs strong shoulders!

These are my observations and perceptions of HPUX’s current standing in the market. Let me know in the comments if you have any infographics, new data, metrics, or graphs about the HPUX journey to date.


How to find MAC address of LAN card in HPUX

Different ways to find the MAC address of a LAN card in HPUX. Learn how to use lanscan, lanadmin, print_manifest, and SAM to check the MAC.

MAC addresses, also known as station addresses, can be found physically printed on LAN cards, which are mostly PCI cards in your HP server. Obviously, being hardware, it’s not always feasible to open up the server just to get a MAC address! The other way is to get these details from OS commands. You can use the lanscan, lanadmin, sam, or print_manifest commands to get the MAC address of a LAN card in HPUX.

First, you need to get the LAN number on which your expected IP is configured. You can use netstat -nvr to check all IPs configured on the system and their respective LAN numbers.

# netstat -nvr
Routing tables
Dest/Netmask                    Gateway            Flags   Refs Interface  Pmtu
127.0.0.1/255.255.255.255       127.0.0.1          UH        0  lo0        4136
12.123.51.123/255.255.255.255   12.123.51.123      UH        0  lan0       4136
12.125.101.123/255.255.255.255  12.125.101.123     UH        0  lan1       4136
12.123.48.0/255.255.252.0       12.123.51.123      U         2  lan0       1500
12.125.96.0/255.255.248.0       12.125.101.123     U         2  lan1       1500
127.0.0.0/255.0.0.0             127.0.0.1          U         0  lo0        4136
default/0.0.0.0                 12.123.51.1        UG        0  lan0       1500

Look at the interface column to get the lanX number. For example, we will try to get the MAC of the lan1 interface.

lanscan command

The lanscan command without any arguments will give you the station addresses, i.e. MAC addresses, of all available LAN interfaces on the system.

# /usr/sbin/lanscan
Hardware Station        Crd  Hdw   Net-Interface    NM   MAC       HP-DLPI DLPI
Path     Address        In#  State NamePPA          ID   Type      Support Mjr#
0/1/2/0  0x001A3B08C4A0 0    UP    lan0 snap0       1    ETHER       Yes   119
0/1/2/1  0x001A3B08C4A1 1    UP    lan1 snap1       2    ETHER       Yes   119

Look at the Station Address column and check the value against lan1. lan1 has a MAC of 0x001A3B08C4A1.
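
If you want to script this, here is a minimal sketch, assuming the standard lanscan column layout shown above (the interface name is the fifth field, the station address the second):

# /usr/sbin/lanscan | awk '$5 == "lan1" {print $2}'
0x001A3B08C4A1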

lanadmin command

This is not as straightforward as the lanscan command. After issuing the lanadmin command you will be presented with a lanadmin console prompt where you can use lanadmin commands. See the example below.

# /usr/sbin/lanadmin


          LOCAL AREA NETWORK ONLINE ADMINISTRATION, Version 1.0
                       Mon, Apr 17,2017  18:10:09

               Copyright 1994 Hewlett Packard Company.
                       All rights are reserved.

Test Selection mode.

        lan      = LAN Interface Administration
        menu     = Display this menu
        quit     = Terminate the Administration
        terse    = Do not display command menu
        verbose  = Display command menu

Enter command: lan

Here, type the command lan. You will be greeted with the LAN interface mode prompt like below.

LAN Interface test mode. LAN Interface PPA Number = 0

        clear    = Clear statistics registers
        display  = Display LAN Interface status and statistics registers
        end      = End LAN Interface Administration, return to Test Selection
        menu     = Display this menu
        ppa      = PPA Number of the LAN Interface
        quit     = Terminate the Administration, return to shell
        reset    = Reset LAN Interface to execute its selftest
        specific = Go to Driver specific menu

Enter command: ppa

Enter the command ppa and change the PPA number to 1, since we are checking lan1 in our example. The default is set to lan0 (PPA 0).

Enter command: ppa
Enter PPA Number.  Currently 0: 1

LAN Interface test mode. LAN Interface PPA Number = 1

Once the LAN interface PPA is changed to 1, hit the display command and you will be shown all details of that LAN card, including the station address!

Enter command: display

                      LAN INTERFACE STATUS DISPLAY
                       Mon, Apr 17,2017  18:10:26

PPA Number                      = 1
Description                     = lan1 HP PCI-X 1000Base-T Release PHNE_36237 B.11.11.15
Type (value)                    = ethernet-csmacd(6)
MTU Size                        = 1500
Speed                           = 1000000000
Station Address                 = 0x1a3b08c4a1
Administration Status (value)   = up(1)
Operation Status (value)        = up(1)
Last Change                     = 185
Inbound Octets                  = 1362884960
Inbound Unicast Packets         = 1309204600
----- output clipped -----

Here you can pad two zeros in front of the station address to make it a perfect 12-digit hexadecimal MAC. That means 1a3b08c4a1 becomes 001a3b08c4a1.
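
If you want the shell to do the padding, here is a minimal sketch, assuming the HP-UX POSIX shell (typeset -Z is the ksh88-style zero-fill to a given width):

# addr=1a3b08c4a1
# typeset -Z12 addr
# echo "0x$addr"
0x001a3b08c4a1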

Using SAM

You can even use SAM (a text-based GUI tool) to get these details. Go to,

SAM -> Networking and communications -> Network Interface Cards

Select your LAN interface (in our case lan1) using the space bar (it will be highlighted). Then choose Actions from the menu bar to get details.

Using print_manifest

If you have Ignite installed on the server, then you can try the print_manifest command to get all system details. These details also include the MACs of all LAN cards. The only issue is that the LAN PPA number is not shown in this output, so you cannot match a MAC with its lan ID here.

# /opt/ignite/bin/print_manifest
System Hardware

    Model:              9000/800/rp4440
    Main Memory:        24574 MB
    Processors:         8
    Processor(0) Speed: 999 MHz
    Processor(1) Speed: 999 MHz
    Processor(2) Speed: 999 MHz
    Processor(3) Speed: 999 MHz
    Processor(4) Speed: 999 MHz
    Processor(5) Speed: 999 MHz
    Processor(6) Speed: 999 MHz
    Processor(7) Speed: 999 MHz
    OS mode:            64 bit
    LAN hardware ID:    0x001A3B08C4A0
    LAN hardware ID:    0x001A3B08C4A1
    Software ID:        Z3e1372908dc9758e
    Keyboard Language:  Not_Applicable

----- output clipped ------

					

Complete guide: Transfer Of Control (TOC) in HP servers

Everything you need to know about TOC, i.e. Transfer Of Control reset, in HP servers. It’s a way to initiate a system halt and memory dump in an emergency.

What is TOC?

TOC stands for Transfer Of Control! It’s a way out for sysadmins when their system stops responding, hangs, or stops taking any input, and they need to take a memory dump before resetting the system. This memory dump is helpful for investigating the cause of the system abnormality.

Whenever a TOC order (a hardware signal) is issued to the system, it stops all current work and starts dumping the current memory contents to the dump device specified in the configuration. Once the dump completes, the system resets.
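
Since the dump lands on the dump device defined in your configuration, it is worth confirming that configuration before you ever need it. A quick check (crashconf without arguments reports the current crash dump configuration; -v adds detail):

# /sbin/crashconf -v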

Why invoke TOC?

There are many reasons, like high utilization, a disk getting full, some process going into a loop, or too many processes being forked (errors like sh: The fork function failed. Too many processes already exist.), which could bring the system down to its knees. In such a situation there is no way out other than resetting the system, since these issues make the system unusable or unresponsive. So why TOC? Even a normal reset will do the job.

But if you are interested in the root cause of what happened on the system to bring it down, then you will need a memory dump for analysis. This memory dump can be generated when TOC is issued. Since the system doesn’t respond to the user, you cannot check what’s happening live, and the memory dump is the only hope for investigation after the reboot. Hence, a TOC reset is always recommended in case of system hang issues.

How to do a TOC reset?

  1. TOC can be invoked by using the TOC switch on the back of your HP server.
  2. Using the TC command in the GSP menu.
  3. Using vparreset with the -t option (for vPars).
TOC switch :

It’s located on the back of your HP server, normally as a push button. Sometimes it is accompanied by a GSP reset switch too. You need to press it to activate TOC.

TC command in GSP :

Log in to the GSP or MP. Go to the command menu using CM. Then use the TC command there to reset with TOC.

MP MAIN MENU:

         CO: Consoles
        VFP: Virtual Front Panel
         CM: Command Menu
         CL: Console Logs
         SL: Show Logs
         FW: Firmware Update
         HE: Help
          X: Exit Connection

[Server-mp] MP> cm
                Enter HE to get a list of available commands

                      (Use ^B to return to main menu.)

[Server-mp] MP:CM> TC
vparreset command :

Using the -t option with the vparreset command resets a vPar with TOC.

# vparreset -p <vpar_name> -t

					

How to get boot path of vpmon in HPUX

Learn to identify the boot path of vpmon, the vPar monitor. It’s important to know the vpmon path when you are planning activities on virtual partitions on HP hardware.

What is vpmon?

vpmon is the vPars monitor. It’s a daemon that monitors vPars in the background. It also provides a shell, MON>, through which various operations can be performed on vPars. Hence vpmon is a very crucial component when it comes to dealing with vPars. Also, unless specified otherwise, all operations by vpmon are performed on the boot disk from which it was spawned. So the boot disk of vpmon is an important aspect while planning any activity on vPars.

vparload is the only command which has the facility to specify a different disk on which the operation is to be performed. All other vpmon commands run on the boot disk it was booted from.

Boot path of vpmon

To get the boot path of vpmon, you need to run the below command from one of the vPars running HPUX.

testsvr# vparstatus -m

Console path: No path as console is virtual
Monitor boot disk path: 0.0.4.1.0.1.0
Monitor boot filename: /stand/vpmon
Database filename: /stand/vpdb
Memory ranges used: 0x0/349224960 monitor
0x14d0c000/237568 firmware
0x14d46000/581632 monitor
----- output truncated -----

You can see the boot path against Monitor boot disk path in the output above. This is the hardware address of the disk, which you need to decode to get the disk name in the kernel/OS. It can be decoded as below, from left to right (see the sketch after the list):

  1. The first field is the cabinet number
  2. The second is the I/O chassis (0 is front, 1 is back)
  3. The third is the I/O bay
  4. The fourth is the slot number
  5. The rest is the ctd (controller.target.device)
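
As a worked example, the path 0.0.4.1.0.1.0 above reads as cabinet 0, I/O chassis 0 (front), bay 4, slot 1, and ctd 0.1.0. A hedged sketch for locating the matching disk in the OS, assuming this corresponds to the ioscan hardware path 0/0/4/1.0.1.0:

# ioscan -funC disk | grep "0/0/4/1"

This should list the disk device files (cXtYdZ) under that hardware path.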

Normally, people set the first disk of the first vPar as the vpmon boot path.

Dynamic Root Disk DRD configuration in HPUX

Learn how to configure Dynamic Root Disk (DRD) in HPUX. Understand how to clone the root disk, view files in it, and activate and deactivate the DRD disk.

Dynamic Root Disk, aka DRD, is a root disk cloning tool from HP. This tool aims to preserve system integrity during maintenance activities performed on the root disk. On the DRD cloned disk, you can perform any maintenance activity which you planned to do on the actual live disk, without worrying about disturbing the running system. You can then activate the cloned disk and reboot the server, which then boots from the altered cloned disk. If you observe that your changes are not right, you can re-activate your old live root disk, getting back to the original state within minutes!

The proposed normal DRD clone disk life cycle is:

  1. Clone live root disk
  2. Mount cloned disk
  3. Make any changes you want on the cloned disk
  4. Activate cloned disk and reboot server
  5. Now system boots from cloned disk (Your old live disk is intact!)
  6. If you want to go back to the old state, set the old live disk as the primary boot disk
  7. Reboot system and your old live disk will be booted as it is.

Let’s see different operations which can be done through dynamic root disk commands.

1. How to clone root disk using DRD

DRD has its own set of commands to perform operations on the clone disk. To clone your live root disk, attach/identify an unused disk of the same or more capacity than the live root disk, and of the same technology/model. Once identified, use the below command:

# /opt/drd/bin/drd clone -v -x overwrite=true -t /dev/dsk/c0t2d0

=======  04/22/16 16:42:47 IST  BEGIN Clone System Image (user=root)  (jobid=testsrv)

       * Reading Current System Information
       * Selecting System Image To Clone
       * Selecting Target Disk
       * Converting legacy DSF "/dev/dsk/c0t2d0" to "/dev/disk/disk6"
       * Selecting Volume Manager For New System Image
       * Analyzing For System Image Cloning
       * Creating New File Systems
       * Copying File Systems To New System Image
       * Making New System Image Bootable
       * Unmounting New System Image Clone
       * System image: "sysimage_001" on disk "/dev/disk/disk6"

=======  04/22/16 17:14:18 IST  END Clone System Image succeeded. (user=root)  (jobid=testsrv)

The DRD binary resides in /opt/drd/bin. Use the clone argument to the drd command and supply the target disk path with the -t option (this will be the final cloned disk). There are a few options which can be used with -x; here we used overwrite=true to overwrite the disk if any data resides on it. This command execution takes from 30 minutes to a few hours, depending on your root VG size.

At the end, you can see the system image has been cloned onto the disk /dev/dsk/c0t2d0, i.e. /dev/disk/disk6. You can check the status of DRD using the below command, which lists all details about the cloned disk.

# /opt/drd/bin/drd status

=======  04/22/16 17:24:21 IST  BEGIN Displaying DRD Clone Image Information (user=root)  (jobid=testsrv)

       * Clone Disk:               /dev/disk/disk6
       * Clone EFI Partition:      AUTO file present, Boot loader present, SYSINFO.TXT not present
       * Clone Creation Date:      04/22/16 16:43:00 IST
       * Clone Mirror Disk:        None
       * Mirror EFI Partition:     None
       * Original Disk:            /dev/disk/disk3
       * Original EFI Partition:   AUTO file present, Boot loader present, SYSINFO.TXT not present
       * Booted Disk:              Original Disk (/dev/disk/disk3)
       * Activated Disk:           Original Disk (/dev/disk/disk3)

=======  04/22/16 17:24:32 IST  END Displaying DRD Clone Image Information succeeded. (user=root)  (jobid=testsrv)

2. How to mount the cloned disk

Once the disk is cloned, you can view the data within it by mounting it. Use the mount argument with the drd command.

# /opt/drd/bin/drd mount

=======  04/22/16 17:30:20 EDT  BEGIN Mount Inactive System Image (user=root)  (jobid=testsrv)

 * Checking for Valid Inactive System Image
 * Locating Inactive System Image
 * Mounting Inactive System Image

=======  04/22/16 17:30:31 EDT  END Mount Inactive System Image succeeded. (user=root)  (jobid=testsrv)

This creates a new VG on your system named drd00 and mounts the clone disk within it. All your root disk mount points in the cloned disk will be mounted under /var/opt/drd/mnts/sysimage_000, e.g. /tmp in the cloned disk will be available at the /var/opt/drd/mnts/sysimage_000/tmp mount point. See the below output for your understanding:

# bdf
Filesystem          kbytes    used   avail %used Mounted on
/dev/vg00/lvol3    4194304  176968 3985984    4% /
/dev/vg00/lvol1    2097152  158344 1923696    8% /stand
/dev/vg00/lvol8    12582912  846184 11645064    7% /var
/dev/vg00/lvol7    10485760 3128368 7299968   30% /usr
/dev/vg00/lvol6    10485760  456552 9950912    4% /tmp
/dev/vg00/lvol5    10485760 4320288 6117352   41% /opt
/dev/vg00/lvol4    4194304   21304 4140408    1% /home
/dev/drd00/lvol3   4194304  176816 3986136    4% /var/opt/drd/mnts/sysimage_000
/dev/drd00/lvol4   4194304   21304 4140408    1% /var/opt/drd/mnts/sysimage_000/home
/dev/drd00/lvol5   10485760 4329696 6108024   41% /var/opt/drd/mnts/sysimage_000/opt
/dev/drd00/lvol1   2097152  158408 1923696    8% /var/opt/drd/mnts/sysimage_000/stand
/dev/drd00/lvol6   10485760  456536 9950928    4% /var/opt/drd/mnts/sysimage_000/tmp
/dev/drd00/lvol7   10485760 3196640 7232232   31% /var/opt/drd/mnts/sysimage_000/usr
/dev/drd00/lvol8   12582912  876016 11615544    7% /var/opt/drd/mnts/sysimage_000/var

You can un-mount the DRD cloned disk using the drd umount command.

# /opt/drd/bin/drd umount -v 

=======  04/22/16 17:30:45 IST  BEGIN Unmount Inactive System Image (user=root)  (jobid=testsrv)

       * Checking for Valid Inactive System Image
       * Locating Inactive System Image
       * Preparing To Unmount Inactive System Image
       * Unmounting Inactive System Image
       * System image: "sysimage_001" on disk "/dev/disk/disk6"

=======  04/22/16 17:30:58 IST  END Unmount Inactive System Image succeeded. (user=root)  (jobid=testsrv)

3. Different tasks which can be performed on cloned DRD disk

There are different maintenance activities that you can perform on this cloned DRD disk. To name a few: patch installation, editing some system files manually, tuning static kernel parameters, etc.

To execute tasks on the cloned disk, you need to supply the command as an argument to drd runcmd. Note that only a limited set of DRD-safe commands (such as kctune, swinstall, swlist, swremove, and view) can be run this way. For example, if you want to view the /etc/hosts file in the cloned image, use drd runcmd view /etc/hosts.
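
A minimal sketch of that (note view rather than cat, since runcmd refuses commands that are not DRD-safe):

# /opt/drd/bin/drd runcmd view /etc/hosts

A bigger example with kctune follows: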

# /opt/drd/bin/drd runcmd kctune -B nproc+=100

=======  04/22/16 18:15:54 IST  BEGIN Executing Command On Inactive System Image (user=root)  (jobid=testsrv)

       * Checking for Valid Inactive System Image
       * Analyzing Command To Be Run On Inactive System Image
       * Locating Inactive System Image
       * Accessing Inactive System Image for Command Execution
       * Setting Up Environment For Command Execution
       * Executing Command On Inactive System Image
       * Executing command: "/usr/sbin/kctune -B nproc+=100"
WARNING: The backup behavior 'yes' is not supported in alternate root
         environments.  The behavior 'once' will be used instead.
       * The automatic 'backup' configuration has been updated.
       * Future operations will ask whether to update the backup.
       * The requested changes have been applied to the currently
         running configuration.
Tunable            Value  Expression  
nproc    (before)   4200  Default     
         (now)      4300  4300        
       * Command "/usr/sbin/kctune -B nproc+=100" completed with the return code "0".
       * Cleaning Up After Command Execution On Inactive System Image

=======  04/22/16 18:16:23 IST  END Executing Command On Inactive System Image succeeded. (user=root)  (jobid=testsrv)

The above example shows me tuning a kernel parameter within the cloned disk.

You can even install patches using a command like drd runcmd swinstall -s /tmp/patch123.depot. Even a patch which needs a reboot can be installed. Since you are installing it on the cloned (non-live) root disk, the server won’t be rebooted. To make these changes live on your server, you need to boot the server from this cloned disk.

4. How to activate DRD cloned disk

To activate the dynamic root disk, you need to run the drd activate command. In effect, this command sets your cloned disk path as the primary boot path, which you could also do with the setboot command!

# /opt/drd/bin/drd activate -x reboot=true

=======  04/22/16 18:20:21 IST  BEGIN Activate Inactive System Image (user=root)  (jobid=vm19)

       * Checking for Valid Inactive System Image
       * Reading Current System Information
       * Locating Inactive System Image
       * Determining Bootpath Status
       * Primary bootpath : 0/0/0/0.0x0.0x0 before activate.
       * Primary bootpath : 0/0/0/0.0x2.0x0 after activate.
       * Alternate bootpath : 0/0/0/0.0x1.0x0 before activate.
       * Alternate bootpath : 0/0/0/0.0x1.0x0 after activate.
       * HA Alternate bootpath :  before activate.
       * HA Alternate bootpath :  after activate.
       * Activating Inactive System Image
       * Rebooting System

If you set reboot to false, it will just set the primary boot disk path and exit. After that, when you manually reboot the system, it will boot from the cloned disk.
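
Since this is just boot-path manipulation, you can inspect or set it yourself with setboot; a small sketch, using the clone’s path from the output above:

# setboot
# setboot -p 0/0/0/0.0x2.0x0

The first command displays the current primary/alternate boot paths; the second manually sets the clone as the primary boot path.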

If you don’t choose auto-reboot, then you will have a chance to reverse the activate operation using the deactivate command argument.
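
A sketch of that reversal, which points the primary boot path back at the original disk:

# /opt/drd/bin/drd deactivate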

5. After booting cloned disk

If you boot your system from the dynamic root disk, the below things change:

  1. Root VG mirroring will be missing
  2. The past live root disk will be intact
  3. The past live root disk will be removed from the setboot primary/alternate boot path settings
  4. You have to restore the root mirror
  5. You have to check and set the alternate boot path
  6. Your system will have all the changes (patch installs, kernel tuning) you made on the cloned disk

Dynamic Root Disk is a very powerful tool when it comes to chopping down your downtime. If you have a small downtime window and need to perform a large amount of patching which requires a reboot, patch the cloned disk and just reboot the server during your short downtime!

Auto port aggregation APA configuration in HPUX

Learn APA configuration in HPUX. Auto Port Aggregation logic is similar to network teaming in Linux. It is used for network card hardware-level redundancy.

APA stands for Auto Port Aggregation. It is a software, i.e. operating-system-level, configuration which offers NIC (Network Interface Card, also referred to as LAN card) redundancy. We have already briefed about APA in this post; refer to the first paragraph.

Also read : Network bonding-teaming in Linux

Let’s run down the configuration steps for APA in HPUX in failover group mode.

Step 1.

You need to have the teaming software installed on your system. The teaming (Auto Port Aggregation) application is bundled in the HP-UX 11i v2 EOE. If not, you can download it from the HP software repository and install it on your HPUX server.

Step 2.

Make sure your primary network interface card (NIC) is configured with a proper IP address, mask, and gateway. Use the netstat -in command.

# ioscan -funC lan 

Class I H/W Path Driver S/W State H/W Type Description 
=================================================================== 
lan 1 1/1/0 gelan CLAIMED INTERFACE HP A4926A PCI 1000Base-SX Adapter
/dev/gelan4

Let’s assume we have identified lan1 as the secondary NIC for our config, lan0 being the primary one.

Next, identify your second NIC, which can be used as the secondary card in the APA configuration (use the ioscan -fnClan command as above). Make sure this card is connected to a different network switch, is configured with the same VLAN as the primary on the network end, and physically does not reside in the same hardware module as the primary NIC. This ensures high availability in case of network, switch, or card hardware failure.

To confirm both cards have the same network reachability (i.e. are on the same VLAN), use the below command:

# linkloop -i PPA_pri StationAddr_sec

# linkloop -i 1  0x00108323463C 
Link connectivity to LAN station: 0x00108323463C 
-- OK 

---- failure output means no connectivity----
Link connectivity to LAN station: 0x00108323463C 
error: get_msg2 getmsg failed, errno = 4 
-- FAILED 
frames sent : 1 
frames received correctly : 0 
reads that timed out : 1

Here the station address is the MAC (which can be obtained from lanscan output) and the PPA number is the lan0/lan1 number. Try it both ways: using the MAC of the primary with the PPA of the secondary, and vice versa, to make sure you have connectivity between both cards. If you are shown the FAILED error, then those two cards can’t be used together in an APA config.

Step 3.

Edit the configuration file /etc/rc.config.d/hp_apaportconf and mention the interface names (lan0 and lan1 in our case) like below:

HP_APAPORT_INTERFACE_NAME[0]=lan0
HP_APAPORT_CONFIG_MODE[0]=LAN_MONITOR

HP_APAPORT_INTERFACE_NAME[1]=lan1
HP_APAPORT_CONFIG_MODE[1]=LAN_MONITOR

Step 4.

Start APA services.

# /sbin/init.d/hpapa start
/sbin/init.d/hpapa started.
         Please be patient. This may take about 40 seconds.
         HP_APA_DEFAULT_PORT_MODE = MANUAL
         /usr/sbin/hp_apa_util -S 0 LAN_MONITOR
         /usr/sbin/hp_apa_util -S 1 LAN_MONITOR
         /sbin/init.d/hpapa Completed successfully.

# /sbin/init.d/hplm start

Step 5.

Now we will create a LAN configuration file that can later be applied to both NICs to make them aware they are working in a group under the same IP umbrella. The lanqueryconf command creates an ASCII file at /etc/lanmon/lanconfig.ascii.

# lanqueryconf -s 

# more /etc/lanmon/lanconfig.ascii
NODE_NAME teststation 
POLLING_INTERVAL 10000000 
DEAD_COUNT 3 
FAILOVER_GROUP lan900 
STATIONARY_IP 10.10.2.5 
PRIMARY lan0 5 
STANDBY lan1 3

See the content of this ASCII file. It has the node name and the polling interval (in microseconds; the default is 10 seconds). The dead count is the number of polling packets that must be missed to declare a failure and initiate failover (default is 3). The failover group is the LAN name which will be visible system-wide: lan900 will take our primary NIC’s address, and lan0 and lan1 will work together as lan900. STATIONARY_IP is the IP taken up by lan900. lan0 will be treated as the primary NIC and lan1 as standby. The numbers 5 and 3 denote the priorities of the respective NICs.

You can make changes in this file if you don’t want to go with the default values.

Step 6.

The above file is generated so the admin can edit it if any changes are required. After that, the file is checked for integrity and then applied to the APA configuration like below:

# lancheckconf
Reading ASCII file /etc/lanmon/lanconfig.ascii
Verification of input file /etc/lanmon/lanconfig.ascii is complete.
 
# lanapplyconf
Reading ASCII file /etc/lanmon/lanconfig.ascii
Creating Fail-Over Group lan900
Updated binary file /etc/lanmon/lanconfig

Here lan900 is created and your APA configuration is complete.

Step 7.

Now you can see that lan0 and lan1 have vanished from the lanscan -q output, and instead lan900 has appeared with 0 and 1 as its members.

# lanscan -q 
2 
3 
900 0 1 
901

You can verify that lan900 now has the IP address which was configured on the primary NIC lan0 before the configuration (in netstat -in output).
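
A quick check along those lines:

# netstat -in | grep lan900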

You can even test whether APA failover is happening correctly. Follow this testing procedure to make sure your APA works properly. Sometimes lan900 won’t appear and you need to restart your system; rebooting makes the system take up the new APA configuration, and you will be able to see lan900 in action.

If there is more than one APA configured on the system, then they will follow the series lan901, lan902, and so on.

Logs under /var/stm/logs/os in HPUX

Is your /var mount point getting full? You need to check the /var/stm/logs/os directory for old logs. Lots of space can be saved by zipping or purging them.

Ever wondered why /var/stm/logs/os is taking up huge space in /var mount point of HPUX? In this post, we will see details about logs under this directory and how to handle them.

Most of the time you get /var getting full alerts from your monitoring system, or you observe /var filling up. This is normal behavior, since most logs reside in /var, and if some logs are growing fast they cause /var to fill up fast. As a first troubleshooting step, you need to check for large files and directories in /var.

Many times you will see that one of the culprits is the /var/stm/logs/os directory. If you look inside this directory, you will see something like below:

# ls -lrt /var/stm/logs/os
total 6986016
-rw-r--r--   1 root       root        512656 Apr 10  2008 log1.raw
-rw-r--r--   1 root       root        512656 Apr 10  2008 log2.raw
-rw-r--r--   1 root       root        512656 Apr 10  2008 log3.raw
-rw-r--r--   1 root       root        512656 Apr 11  2008 log4.raw
-rw-r--r--   1 root       root        512656 Apr 11  2008 log5.raw
-rw-r--r--   1 root       root        512656 Apr 11  2008 log6.raw
-rw-r--r--   1 root       root        512656 Apr 11  2008 log7.raw
----- output clipped -----

There are lots of raw log files collectively taking up huge space.

What are these logs under /var/stm/logs/os :

Your next questions will be: what are these files? What is their purpose on the server?

These are raw files that are logged and used by STM, i.e. the Support Tools Manager. They are logs collected by STM which contain information about your hardware issues. From the above output, you can see they are rotated when a log file crosses a certain size. While rotating, they are sequentially numbered. This numbering makes it easy to manage these logs.

How to read these logs :

You can read these logs using STM’s log viewer. Go to the CSTM console using the command cstm.

# /usr/sbin/cstm
Running Command File (/usr/sbin/stm/ui/config/.stmrc).

-- Information --
Support Tools Manager

Version A.59.05

Product Number B4708AA

(C) Copyright Hewlett Packard Co. 1995-2007
All Rights Reserved

Use of this program is subject to the licensing restrictions described
in "Help-->On Version".  HP shall not be liable for any damages resulting
from misuse or unauthorized use of this program.

cstm>

Then run ru and select the logtool utility.

cstm>ru
-- Run Utility --
Select Utility
    1 MOutil
    2 logtool
Enter selection or cancel to quit : 2

-- Logtool Utility --
To View a Summary of Events in a Raw Log

  1. Select a raw (unformatted) log file.  (File Menu -> "Select Raw")
     The current log file ends in ".cur", e.g. "log1.raw.cur".
     You do not have to switch logs.

  2. View the summary of the selected log file. (View Menu -> "Raw Summary")

To Format a Raw Log

  1. Set the format filter for the types of entries you want to see.
     (Filters Menu -> "Format").  To see all entries, skip this step.

  2. Format the raw log file. (File Menu -> "Format Raw")

  3. Display the formatted file. (View Menu -> "Formatted Log")

  4. To further narrow the entries displayed, set a display filter.
     (Filters Menu -> "Display" -> "Formatted")

For more information, use the on-line help (Help Menu -> "General help").

Logtool Utility>

With the information given on the console, you can view and format raw log files.

Should I purge or zip /var/stm/logs/os logs ?

Now you know what these files are, and you observe there are too many of them which are too old to keep. In such a scenario, you have two options:

  • Zip them: for files a few months old, maybe 1-2 months. How to zip files.
  • Purge them: for very old logs, 6 or more months old.

Make a note that these logs are read by STM as well, so if you purge or zip them, STM won’t be able to use them.

So be sure to check the logs using the logtool utility explained above, and then decide to purge, zip, or keep them. Normally, if you are not currently facing any hardware issues with the server, then you can zip/purge according to the time frames I suggested above.
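
If you decide to go that way, here is a minimal cleanup sketch along the suggested time frames (the patterns deliberately skip the active *.raw.cur file):

# find /var/stm/logs/os -name 'log[0-9]*.raw' -mtime +60 -exec gzip {} \;
# find /var/stm/logs/os -name 'log[0-9]*.raw.gz' -mtime +180 -exec rm {} \;

The first command zips raw logs older than roughly 2 months; the second purges zipped logs older than roughly 6 months.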

# ls -lrt /var/stm/logs/os
total 2463008
-rw-r--r--   1 root       root         65910 Apr 10  2008 log1.raw.gz
-rw-r--r--   1 root       root         57168 Apr 10  2008 log2.raw.gz
-rw-r--r--   1 root       root         53727 Apr 10  2008 log3.raw.gz
-rw-r--r--   1 root       root         40526 Apr 11  2008 log4.raw.gz
-rw-r--r--   1 root       root         39541 Apr 11  2008 log5.raw.gz
-rw-r--r--   1 root       root         37050 Apr 11  2008 log6.raw.gz
-rw-r--r--   1 root       root         37624 Apr 11  2008 log7.raw.gz

Match the above output with the previous one and see how the file sizes decreased after zipping, which in turn saved my /var space.

Zipping or purging these logs will greatly free up space under the /var mount point. This is one of the directories which we normally miss or ignore while cleaning up the mount point.

Alternatively, you can even configure the logrotate utility, which will take care of zipping and purging these files automatically, without human intervention.

How to learn, practice HPUX online

HPUX is HP’s own Unix operating system, which runs on HP hardware only. In this post, learn whether you can learn or practice HPUX online.

Many of our readers asked questions like “How do I learn HPUX online?”, “How do I practice HPUX online?”, and “Are there HPUX online test servers?”. So I thought of writing this post, which discusses whether it’s possible to learn HPUX online sitting at your home.

In today’s open-source world, there is a very small amount of space owned by proprietary kernels like HPUX by HP, AIX by IBM, Solaris by Sun, etc. Being proprietary, these UNIX variants need specific hardware to run on. Most of them don’t even have emulator platforms and don’t run on VMware either. In such a scenario, it becomes mandatory to have licensed hardware to learn those technologies.

The HPUX kernel supports only Itanium (IA) and PA-RISC architectures. This is the reason this OS can’t run on a VMware VM. IA and PA-RISC are very expensive hardware for a normal learner to own. PA-RISC being legacy hardware, you might get it cheap on resale, but finding someone willing to sell this hardware is another treasure hunt quest!

Own HP Hardware:

One way to learn or practice HPUX is to buy HP hardware. You can purchase it directly from HP or its resellers. But since it’s expensive, 99% of learners won’t opt for this option. Another way is to buy old hardware at a cheap price, provided you find it up for sale!

Online HPUX servers:

HP used to run a program named “HP Test Drive”. Under this program, HP offered free HPUX test machines (remote non-root access). But this program has been shut down and is no longer available.

There are some institutes or learning centers which offer online HPUX server access for practice on a paid basis. You need to google around for such institutes locally in your area.

HP itself offers an “HP Performance Center” program in which you can get hands-on with HP equipment. Contact your local HP representative for more details.

Online courses :

Obviously, this is always an open option. There are learning centers offering HPUX courses locally in your area. HP also offers an HPUX learning module under its eLearning program.

Online study material :

There is a lot of online course material and certification book material available for HPUX. If you are a beginner, I recommend the book by Asghar Ghori, which is the best one to start with. It’s available to purchase from many websites like Amazon, Flipkart, etc. HP also provides study material which comes with their online courses.

Apart from this, HP’s official ITRC forum for HPUX is the best place to get your HPUX queries resolved. You can also go through online blogs, videos, etc. to learn HPUX. And obviously, you can subscribe to this website, which publishes articles on HPUX frequently!

Conclusion :

If you are ready to spend a good amount of money on learning HPUX, go ahead with HP online courses and training materials. For a low budget, you need to search for local institutes offering HPUX courses.

Let us know in comments if you know any good online HPUX resources.

Restore network Ignite backup on HPUX server

Learn how to restore a network Ignite backup on an HPUX server, including how to restore another server’s OS backup on a different server, provided the hardware model is the same.

Ignite backup is an OS backup for HPUX. Ignite-UX is a licensed tool developed by HP for its proprietary OS, HPUX. Ignite backup can be taken on a local tape or over the network on an Ignite server. In this post, we will see how to restore a network Ignite backup on HPUX.

Pre-requisite :

Log in to the Ignite server and confirm the below points:

  • Ignite backup of the server is available under the /var/opt/ignite/clients/<MAC of machine> directory
  • Directory ownership is bin:bin
  • Directory permissions are 755
  • One spare IP in the same subnet as the Ignite server, for installation

Restoration :

Power up the server on which the network Ignite backup needs to be restored. Halt the boot process at the EFI shell. Enter the EFI shell to build your boot profile. A boot profile is a profile which holds booting options like the boot path, network path, and boot network parameters for the current server.

At the EFI prompt, you need to execute the below command to build your boot profile:

EFI> dbprofile -dn testprofile -sip x.x.x.x -cip x.x.x.x -gip x.x.x.x -m 255.255.255.0 -b "/opt/ignite/boot/nbp.efi"

in which,

  • -sip is Ignite server IP on which backup resides
  • -cip is machine IP to be used to boot machine with (spare IP I mentioned earlier)
  • -gip is gateway IP
  • -m is subnet mask
  • -b is boot path
  • -dn is profile name

Here we are building a profile named testprofile with the related network parameters, using which the machine will boot and look for the backup.

Now, boot your machine over the LAN with this profile, using the below command from the EFI shell:

EFI> lanboot select -dn testprofile

This will boot the machine taking for itself the IP defined in -cip, with the gateway in -gip, and it will search for the boot path on the Ignite server at -sip. Once its query reaches the Ignite server, the server checks the MAC address from which the query was generated and then serves the backup boot path from the directory with that MAC address as its title. That’s why we checked permissions and ownership previously.

Once everything goes smoothly, you will be served a text-based GUI installation menu on your PuTTY terminal. You can go ahead with the installation and restore the network Ignite backup.

Restoring serverA backup on serverB :

In the above method, it’s mandatory to have a backup of the same machine in place on the Ignite server to restore. In case you do not have a backup and wish to restore another server’s backup, that can be done. The only requirement is that both machines’ hardware models should be the same.

For example, if you have serverA backed up on the Ignite server and want to restore this backup on serverB, then it’s possible with a bit of a trick, provided serverA and serverB have the same hardware model.

In such an instance, you need to copy the existing backup directory to a new one. The new directory should be named with the MAC address of serverB. The MAC address of the new server can be obtained using the lanaddress command in the EFI shell in case it’s not installed with an OS. After copying, make sure ownership and permissions are intact, as sketched below.
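
A hedged sketch of that copy on the Ignite server (both MAC-named directories below are hypothetical placeholders):

# cd /var/opt/ignite/clients
# cp -rp 0x001A3B08C4A0 0x00306E4A21B0
# chown -R bin:bin 0x00306E4A21B0
# chmod 755 0x00306E4A21B0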

Once the copy is done, you can follow the above process and get the restore done!

Step by step procedure to take ignite tape backup in HPUX

A stepwise how-to guide for Ignite tape backup, including media check commands, backup log analysis, and troubleshooting steps.

Ignite is an OS backup solution for HPUX. This tool is developed by HP and available under the brand name Ignite-UX. It’s used to take a system backup, like a ghost image in the case of Windows. The complete OS can be restored from an Ignite backup in case of any system failure. Ignite offers a network backup solution and a tape backup solution. With network backup, the OS backup is stored on the Ignite server over the network, and at restore time it is restored over the network too (the system should be booted with PXE boot). With tape backup, the OS is backed up to a locally connected tape drive, and restoration happens by booting the system from the bootable tape.

One needs to install this utility since it’s not native to HPUX. You can check whether it’s installed using the below command:

# /usr/sbin/swlist -l product |grep -i ignite
  Ignite-UX             C.7.12.519     HP-UX System Installation Services

If not installed, you need to purchase it and install it on your HPUX machine.

In this post we will see how to take an Ignite tape backup, along with its logs, troubleshooting, and media check commands.

Media check :

Before starting your backup on tape, you need to check that the tape drive and media are functioning properly. After connecting your tape drive to the system and powering it on, you can identify it using the ioscan -fnCtape and insf -e commands. Its device name should be something like /dev/rmt/0mn. Once you identify the device name for the tape, you can check its status with the mt command:

# mt -t /dev/rmt/0mn status
Drive:  HP Ultrium 2-SCSI
Format:
Status: [41114200] BOT online compression immediate-report-mode
File:   0
Block:  0

Once you are able to get the status of the media, it means the tape drive is functioning properly and correctly identified in the kernel. Now you can go ahead with the backup procedure.

Taking ignite tape backup :

An Ignite tape backup can be run using the make_tape_recovery command. This binary resides in /opt/ignite/bin. The command supports a long list of options, but here we cover the most used ones:

  • -A: Checks the disks/volume groups and adds the files specified for backup inclusion to the backup
  • -v: Verbose mode
  • -I: Causes the system recovery process to be interactive when booting from the tape
  • -x: Extra options (include=file|dir, exclude=file|dir, inc_entire=VG or disk) defining inclusion/exclusion of files/dirs/VGs/disks
  • -a: Tape drive address
  • -d: Description which will be displayed for the archive
  • -i: Interactive execution

Since Ignite is aimed at OS backup, normally we take only VG00, i.e. the root volume group, in an Ignite tape backup. Let’s see an example:

# /opt/ignite/bin/make_tape_recovery -AvI -x inc_entire=vg00 -a /dev/rmt/0mn -x exclude=/data

=======  12/27/16 03:00:00 EDT  Started /opt/ignite/bin/make_tape_recovery.
         (Tue Dec 27 03:00:00 EDT 2016)
         @(#) Ignite-UX Revision B.4.4.12
         @(#) net_recovery (opt) $Revision: 10.611 $


       * Testing pax for needed patch
       * Passed pax tests.

----- output clipped -----

In the above example, we started the Ignite backup with all of VG00 included (-x inc_entire=vg00), excluding the /data mount point which is part of vg00 (-x exclude=/data), on the tape drive at 0mn (-a /dev/rmt/0mn), with the interactive boot menu in the backup (-I). Verbose mode (-v) prints all output on the terminal screen, as shown above.

It normally takes half an hour or more to complete the backup, depending on the size of the files included. If your terminal timeout is a short value, then you can put this command in the background (with the below command) so that it won’t get killed when your terminal times out and disconnects.

# /opt/ignite/bin/make_tape_recovery -AvI -x inc_entire=vg00 -a /dev/rmt/0mn -x exclude=/data >/dev/null 2>&1 &

Don’t worry, all output is logged to a log file so that you can analyze it later. The last few lines of the output are as below, and they declare that the backup completed successfully.

----- output clipped -----
       /var/tmp/ign_configure/make_sys_image.log
       /var/spool/cron/tmp/croutFNOa01327
       /var/spool/cron/tmp/croutBNOa01327
       /var/spool/cron/tmp/croutGNOa01327

       * Cleaning up old configuration file directories


=======  12/27/16 03:12:19 EDT make_tape_recovery completed successfully.

You can even schedule an Ignite backup in the crontab on a monthly, weekly, or daily basis, depending on your requirement.
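
For example, an illustrative root crontab entry for a monthly run at 3 AM on the 1st (adjust timing and options to your environment):

0 3 1 * * /opt/ignite/bin/make_tape_recovery -AvI -x inc_entire=vg00 -a /dev/rmt/0mn >/dev/null 2>&1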

Log files :

Your latest run’s output is saved under /var/opt/ignite/recovery/latest/recovery.log. All other runs’ details are saved under the /var/opt/ignite/recovery directory. Whenever the command runs, it links the latest directory to the current run’s directory. See the below output to get an idea.

# ll /var/opt/ignite/recovery
total 14240
drwxr-xr-x   2 root       root          8192 Nov 27 03:12 2016-11-27,03:00
drwxr-xr-x   2 root       root          8192 Dec 27 03:12 2016-12-27,03:00
lrwxr-xr-x   1 root       sys             16 Dec 27 03:00 latest -> 2016-12-27,03:00
----- output clipped -----

If Ignite fails, then recovery.log is the first place to look for the reason for the failure.

Troubleshooting :

This part is really hard to cover, since there can be numerous reasons why Ignite fails. But let me cover a few common reasons here:

  1. Tape media is faulty (check EMS logs, syslog)
    • Solution: media replacement
  2. The tape drive is faulty (check ioscan status, EMS, syslog)
    • Solution: hardware replacement
  3. One or more VGs exist in /etc/lvmtab but are not active on the system (verify /etc/lvmtab against bdf)
    • Solution: remove the inactive VG from lvmtab or make it active on the system
  4. One or more lvols exist in /etc/lvmtab but are not active on the system (verify /etc/lvmtab against bdf)
    • Solution: remove the inactive lvol from lvmtab or mount it on the system
  5. ERROR: /opt/ignite/bin/save_config failed: one of the disks/LUNs attached to the system is faulty
    • Solution: check the hardware and replace it