Monthly Archives: January 2017

Restore network Ignite backup on HPUX server

Learn how to restore a network Ignite backup on an HPUX server, and how to restore one server’s OS backup on a different server, provided the hardware models are identical.

Ignite backup is an OS backup for the HPUX operating system. Ignite-UX is a licensed tool developed by HP for its proprietary HPUX OS. An Ignite backup can be written to a local tape or over the network to an Ignite server. In this post, we will see how to restore a network Ignite backup on HPUX.

Pre-requisite :

Log in to the Ignite server and confirm the points below:

  • Ignite backup of the server is available under the /var/opt/ignite/clients/<MAC of machine> directory
  • Directory ownership is bin:bin
  • Directory permissions are 755
  • One spare IP address in the same subnet as the Ignite server, to be used during installation

Restoration :

Power up the server on which the network Ignite backup needs to be restored and halt the boot process at the EFI shell. Enter the EFI shell to build your boot profile. A boot profile holds booting options such as the boot path, the network path, and the boot network parameters for the current server.

At the EFI prompt, execute the command below to build your boot profile:

EFI> dbprofile -dn testprofile -sip x.x.x.x -cip x.x.x.x -gip x.x.x.x -m 255.255.255.0 -b "/opt/ignite/boot/nbp.efi"

in which,

  • -sip is the Ignite server IP on which the backup resides
  • -cip is the machine IP used to boot the machine (the spare IP mentioned earlier)
  • -gip is the gateway IP
  • -m is the subnet mask
  • -b is the boot path
  • -dn is the profile name

Here we are building a profile named testprofile with the network parameters the machine will use to boot and locate the backup.

Now, boot your machine over the LAN with this profile using the command below from the EFI shell:

EFI> lanboot select -dn testprofile

This boots the machine with the IP defined in -cip and the gateway defined in -gip, and searches for the boot path on the Ignite server at -sip. When the query reaches the Ignite server, the server checks the MAC address the query came from and serves the backup boot path from the directory named after that MAC address. That is why we checked permissions and ownership earlier.

Once everything goes smoothly, you will be presented with a text-based installation menu on your PuTTY terminal. You can go ahead with the installation and restore the network Ignite backup.

Restoring serverA backup on serverB :

In the method above, it’s mandatory to have a backup of the same machine in place on the Ignite server. If you do not have such a backup and wish to restore another server’s backup instead, that can be done; the only requirement is that both machines have the same hardware model.

For example, if you have serverA backed up on the Ignite server and want to restore this backup on serverB, it’s possible with a small trick, provided serverA and serverB have the same hardware model.

In such a case, copy the existing backup directory to a new one named after the MAC address of serverB. The MAC address of the new server can be obtained with the lanaddress command in the EFI shell if no OS is installed yet. After copying, make sure ownership and permissions are intact.

Once the copy is done, follow the process above and get the restore done!
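The copy-and-relabel step can be sketched as shell commands. This is a hedged sketch: the MAC addresses are made-up placeholder values, and it runs inside a scratch directory rather than the real /var/opt/ignite/clients on an Ignite server.

```shell
# Simulate the Ignite clients directory in a sandbox; on a real Ignite
# server you would work in /var/opt/ignite/clients as root.
clients=$(mktemp -d)
old_mac="0x00306EF37245"    # serverA's directory name (made-up MAC)
new_mac="0x00306EF38821"    # serverB's MAC, as shown by 'lanaddress' in EFI

mkdir -p "$clients/$old_mac"                    # stands in for serverA's backup
cp -pr "$clients/$old_mac" "$clients/$new_mac"  # duplicate the backup for serverB

# chown -R bin:bin "$clients/$new_mac"          # needs root on the real server
chmod 755 "$clients/$new_mac"                   # permissions must stay 755
```

With the directory in place under serverB’s MAC address, the lanboot procedure described earlier finds and serves serverA’s image to serverB.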

How to zip, unzip files and directories in Linux / Unix

Learn how to zip, unzip files and directories in Linux or Unix. Compressing files helps in log management and makes data transfer easy.

Zipping files or directories compresses the data within them using Lempel-Ziv coding (LZ77). This reduces the size of the resulting file: a lower size means lower storage requirements (log management) and faster transfers (FTP).

On Linux and Unix platforms, gzip is a widely available utility, native to most systems, used to zip and unzip files. In this post, we will see how to zip and unzip files using the gzip utility, with examples.

Compressing files

Compressing a file requires no options at all: pass filename.xyz to the gzip command and it compresses the file, producing filename.xyz.gz. Note that gzip removes the original file and keeps only the new .gz file.

# ll
total 12
-rw-r--r-- 1 root users  40 Jan  3 00:46 file2

# gzip file2

# ll
total 12
-rw-r--r-- 1 root users  63 Jan  3 00:46 file2.gz

Note in the above output that after zipping, the original file file2 is gone and the compressed archive file2.gz has taken its place.

This command supports wildcards like * and ? too. You can also supply a list of files, and it will compress every file given in the arguments.

# gzip file2 file3

# ll
total 12
-rw-r--r-- 1 root users 63 Jan  3 00:46 file2.gz
-rw-r--r-- 1 root users 134 Jan  3 00:46 file3.gz

You can force the operation with the -f option. This is helpful when a file to be compressed has multiple hard links.

gzip also supports the -v (verbose) option, which prints details about each operation being done.

# gzip -v *
file1:   45.7% -- replaced with file1.gz
file2:   22.5% -- replaced with file2.gz
file3:   10.5% -- replaced with file3.gz

In the above example, we zipped all files (hence *) within a directory using wild cards. Here verbose mode printed compression ratio for each file along with which file it replaced after the operation.

Compressing directories

Like files, directories can be compressed recursively. With the -r option, gzip walks the given directory and its subtree and zips every file it finds within.

# ll /tmp/dir3
total 12
-rw-r--r-- 1 root users  35 Jan  3 00:46 file1
-rw-r--r-- 1 root users  40 Jan  3 00:46 file2
-rw-r--r-- 1 root users 114 Jan  3 00:46 file3

# gzip -r /tmp/dir3

# ll /tmp/dir3
total 12
-rw-r--r-- 1 root users  51 Jan  3 00:46 file1.gz
-rw-r--r-- 1 root users  63 Jan  3 00:46 file2.gz
-rw-r--r-- 1 root users 134 Jan  3 00:46 file3.gz

In the above output, it recursively zipped all files within the given directory. This option is helpful when a directory contains hundreds of files.
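A quick way to see the recursive behaviour without touching real data is a throwaway directory (a minimal sketch):

```shell
# Build a small directory tree with a subdirectory.
dir=$(mktemp -d)
echo "alpha" > "$dir/file1"
echo "beta"  > "$dir/file2"
mkdir "$dir/sub"
echo "gamma" > "$dir/sub/file3"

gzip -r "$dir"    # walks the whole subtree and zips every file

ls -R "$dir"      # every file is now a .gz; the directory layout is unchanged
```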

Checking compressed files

To test a compressed archive, use the -t option with gzip. If there is any issue with the compressed file, it reports it; otherwise it silently returns you to the shell prompt.

# gzip -t file2.gz

You can also view compression details of a file using the -l option with gzip. It shows the compressed and uncompressed sizes, the compression ratio (0.0% if not known), and the name of the uncompressed file, i.e. the filename before compression.

# gzip -l file2.gz
         compressed        uncompressed  ratio uncompressed_name
                 63                  40  22.5% file2

Un-Compressing files

To recover the original file from a compressed archive, use the -d option with the .gz file as the argument. It works the other way around: the .gz file is removed and the original file is restored in the directory. The recursive -r option works with -d too.

# gzip -d file2.gz

# ll
total 12
-rw-r--r-- 1 root users  40 Jan  3 00:46 file2

You can see that file2 is available again and the .gz file has been removed. Using verbose mode prints more information about the operations being done.

# gzip -v -d *
file1.gz:        45.7% -- replaced with file1
file2.gz:        22.5% -- replaced with file2
file3.gz:        10.5% -- replaced with file3

In the above output, we decompressed three files in the same directory using the wildcard *. It shows the compression ratio and which file replaced which after decompression.
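The whole cycle from the sections above, i.e. compress, test, list, and decompress, can be exercised on a throwaway file:

```shell
f=$(mktemp)
echo "hello gzip" > "$f"

gzip "$f"        # creates $f.gz and removes $f
gzip -t "$f.gz"  # integrity check: silent, exit code 0 when the archive is fine
gzip -l "$f.gz"  # lists compressed/uncompressed sizes and the ratio
gzip -d "$f.gz"  # restores $f and removes $f.gz

cat "$f"         # the original content is back intact
```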

How to change your shell prompt to fancy one instantly

Learn to change shell prompt with your chosen character or value. Different values or shell variables can be defined to be shown as shell prompt.

When a user logs in to a system through PuTTY or the command line, he is greeted with a blinking cursor next to something called the “shell prompt”! Generally, a prompt ending in # denotes the superuser and one ending in $ a normal user. Beyond these mainstream prompts, many admins choose custom prompts for themselves and their users.

The most popular customization is showing the present working directory in the prompt, so users always know which directory they are in while executing commands. Another widely used prompt shows the hostname, assuring users they are working on the right terminal when many terminal windows are open. In this post, we will see how to set these prompts, plus some fancy ones.

Where to define Shell prompt :

The shell prompt is defined by the PS1 variable in a profile file. This can be any profile executed at user login. If multiple profiles define PS1, the last profile executed decides the final value. For example, when a user logs in, profile execution can follow this chain:

/etc/profile -> ~/.bash_profile -> ~/.bashrc -> /etc/bashrc

In the above flow, the system-wide profile /etc/profile calls the bash profile residing in the user’s home directory. This local profile calls the bashrc script in the home directory, which in turn calls the system-wide /etc/bashrc script to set the environment. In this case, the PS1 value defined in /etc/bashrc is the final one.

If no further scripts are called from the profile, the profile in the user’s home directory is the last place PS1 is defined. If that profile file is missing, the PS1 defined in /etc/profile decides how your prompt looks.

How to define shell prompt :

Now that you know where the prompt can be defined, let’s see how to define it. In its most basic form:

PS1=":->"
export PS1

Here we define the prompt as the symbol :-> . The export command is not strictly necessary, but it is good to have on some flavors of Linux and Unix. You can even test this by running PS1=":->" on your terminal; your prompt immediately changes to :->

You can even use an if-else block in the profile file to decide which prompt should be served to particular users or terminal types.
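As an illustration, such a conditional could look like the sketch below. build_prompt is a hypothetical helper, and the user names and prompt strings are made up:

```shell
# Return a different prompt string depending on the user.
build_prompt() {
    user="$1"; host="$2"; cwd="$3"
    if [ "$user" = "root" ]; then
        echo "[$user@$host $cwd]# "     # superuser gets the classic # sign
    else
        echo "[$user@$host $cwd]\$ "    # everyone else gets $
    fi
}

build_prompt root  testsrv2 /root         # [root@testsrv2 /root]#
build_prompt user4 testsrv2 /home/user4   # [user4@testsrv2 /home/user4]$
```

In a real profile you would assign the result to PS1, e.g. PS1=$(build_prompt "$USER" "$HOSTNAME" "$PWD"); like the table below, this captures the values at login time.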

Different useful prompts :

Below is the useful list of variables can be used in prompts :

Code                                    Description
PS1="[$USER@$HOSTNAME]$"                Shows prompt as [username@hostname]$
PS1="[$USER@$HOSTNAME $PWD]$"           Shows prompt as [username@hostname present_directory]$
PS1="[$HOSTNAME $PWD]$"                 Shows prompt as [hostname present_directory]$
PS1="$HOSTNAME >"                       Shows prompt as hostname >

You can choose your own variations. See above-listed prompts in action below :

$PS1="[$USER@$HOSTNAME]$"
[user4@testsrv2]$

$PS1="[$USER@$HOSTNAME $PWD]$"
[user4@testsrv2 /home/user4]$

$PS1="[$HOSTNAME $PWD]$"
[testsrv2 /home/user4]$

$PS1="$HOSTNAME >"
testsrv2 >

Observe the first prompt is just $ sign. After each PS1 value change, prompt changes accordingly.

Some fancy prompts :

Here are some fancy prompts for fun!

$PS1=">-->"
>-->

$PS1="-=(^_^)=-:"
-=(^_^)=-:

$PS1="\m/ (-_-) \m/ :"
\m/ (-_-) \m/ :

$PS1="$USER rules $HOSTNAME >"
user4 rules testsrv2>

Hope you liked the post. Drop us your feedback and suggestions in the comments below!

Step by step procedure to take ignite tape backup in HPUX

A stepwise how-to guide for Ignite tape backup. Includes media check commands, backup log analysis, and troubleshooting steps.

Ignite is an OS backup solution for HPUX. The tool is developed by HP and available under the brand name Ignite-UX. It is used to take a system backup, much like a ghost image on Windows, and the complete OS can be restored from an Ignite backup in case of system failure. Ignite offers both a network backup and a tape backup solution. With a network backup, the OS backup is stored on the Ignite server over the network, and restoration also happens over the network (the system is booted via PXE). With a tape backup, the OS is backed up to a locally connected tape drive and restoration happens by booting the system from the bootable tape.

One needs to install this utility since it is not native to HPUX. You can check whether it is installed using the command below:

# /usr/sbin/swlist -l product |grep -i ignite
  Ignite-UX             C.7.12.519     HP-UX System Installation Services

If it is not installed, you need to purchase it and install it on your HPUX machine.

In this post, we will see how to take an Ignite tape backup, along with its logs, troubleshooting, and media-check commands.

Media check :

Before starting your backup, check that the tape drive and media are functioning properly. After connecting the tape drive to the system and powering it on, identify it using the ioscan -fnCtape and insf -e commands. Its device name should be something like /dev/rmt/0mn. Once you identify the tape device name, check its status with the mt command:

# mt -t /dev/rmt/0mn status
Drive:  HP Ultrium 2-SCSI
Format:
Status: [41114200] BOT online compression immediate-report-mode
File:   0
Block:  0

If you can read the media status, the tape drive is functioning properly and correctly identified by the kernel. Now you can go ahead with the backup procedure.

Taking ignite tape backup :

An Ignite tape backup is run with the make_tape_recovery command. The binary resides in /opt/ignite/bin. It supports a long list of options; the most used ones are:

  • -A: Scans the specified disks/volume groups and includes their files in the backup
  • -v: Verbose mode
  • -I: Causes the system recovery process to be interactive when booting from the tape
  • -x: Extra options (include=file|dir, exclude=file|dir, inc_entire=VG or disk) to define inclusion/exclusion of files, directories, VGs, or disks
  • -a: Tape drive address
  • -d: Description to be displayed for the archive
  • -i: Interactive execution

Since Ignite is aimed at OS backup, normally we back up only vg00, i.e. the root volume group, in an Ignite tape backup. Let’s see one example:

# /opt/ignite/bin/make_tape_recovery -AvI -x inc_entire=vg00 -a /dev/rmt/0mn -x exclude=/data

=======  12/27/16 03:00:00 EDT  Started /opt/ignite/bin/make_tape_recovery.
         (Tue Dec 27 03:00:00 EDT 2016)
         @(#) Ignite-UX Revision B.4.4.12
         @(#) net_recovery (opt) $Revision: 10.611 $


       * Testing pax for needed patch
       * Passed pax tests.

----- output clipped -----

In the above example, we started an Ignite backup with all of vg00 included (-x inc_entire=vg00), excluding the /data mount point which is part of vg00 (-x exclude=/data), on the tape drive at 0mn (-a /dev/rmt/0mn), with an interactive boot menu in the backup (-I). Verbose mode (-v) prints all output to the terminal, as shown above.

The backup normally takes half an hour or more, depending on the size of the files included. If your terminal timeout is short, put the command in the background (as below) so it is not killed when your terminal times out and disconnects.

# /opt/ignite/bin/make_tape_recovery -AvI -x inc_entire=vg00 -a /dev/rmt/0mn -x exclude=/data >/dev/null 2>&1 &

Don’t worry: all output is logged to a log file so you can analyze it later. The last few lines of the output, shown below, declare that the backup completed successfully.

----- output clipped -----
       /var/tmp/ign_configure/make_sys_image.log
       /var/spool/cron/tmp/croutFNOa01327
       /var/spool/cron/tmp/croutBNOa01327
       /var/spool/cron/tmp/croutGNOa01327

       * Cleaning up old configuration file directories


=======  12/27/16 03:12:19 EDT make_tape_recovery completed successfully.

You can even schedule an Ignite backup in crontab on a monthly, weekly, or daily basis depending on your requirement.
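For example, a monthly run on the 27th at 03:00 could be scheduled with a root crontab entry along these lines (the schedule and the log redirection path are illustrative, not prescriptive):

```
00 03 27 * * /opt/ignite/bin/make_tape_recovery -AvI -x inc_entire=vg00 -a /dev/rmt/0mn >/var/tmp/ignite_cron.log 2>&1
```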

Log files :

The output of your latest run is saved as /var/opt/ignite/recovery/latest/recovery.log. Details of all other runs are saved under the /var/opt/ignite/recovery directory. Each time the command runs, it links the latest directory to the current run’s directory. See the output below to get an idea.

# ll /var/opt/ignite/recovery
total 14240
drwxr-xr-x   2 root       root          8192 Nov 27 03:12 2016-11-27,03:00
drwxr-xr-x   2 root       root          8192 Dec 27 03:12 2016-12-27,03:00
lrwxr-xr-x   1 root       sys             16 Dec 27 03:00 latest -> 2016-12-27,03:00
----- output clipped -----

If Ignite fails, recovery.log is the first place to look for the reason.

Troubleshooting :

This part is hard to cover exhaustively since there are numerous reasons why Ignite can fail, but here are a few common ones:

  1. Tape media is faulty (check EMS logs, syslog)
    • Solution: replace the media
  2. The tape drive is faulty (check ioscan status, EMS, syslog)
    • Solution: replace the hardware
  3. One or more VGs exist in /etc/lvmtab but are not active on the system (verify /etc/lvmtab against bdf)
    • Solution: remove the inactive VG from lvmtab or activate it on the system
  4. One or more lvols exist in /etc/lvmtab but are not active on the system (verify /etc/lvmtab against bdf)
    • Solution: remove the inactive lvol from lvmtab or mount it on the system
  5. ERROR: /opt/ignite/bin/save_config failed: one of the disks/LUNs attached to the system is faulty
    • Solution: check the hardware and replace it

How to change process priority in Linux or Unix

Learn how to change process priority in Linux or Unix, and understand how to alter the priority of running processes using the renice command.

Processes have scheduling priorities according to which the kernel serves them. If you have a loaded system and need some processes served before others, you need to change process priority. This is also called renicing, since we use the renice command to do it.

Nice values range from -20 to 19 on Linux, with -20 being the highest priority. A process with a low nice value gets served before a process with a high nice value, so if you want a particular process served first, lower its nice value. Administrators can assign negative nice values (down to -20) to speed processes up even further. Let’s see how to change process priority.

There are three ways to select a target for the renice command: by process ID (PID), user ID, or process group ID. PID and UID are the ones normally used in the real world, so we will look at those. The new priority (nice value) is given with the -n option. The current nice value can be seen in the top command under the NI column, or checked with the command below:

# ps -eo pid,user,nice,command | grep 30411
30411 root       0 top
31567 root       0 grep 30411

In the above example, the nice value is 0 for the given PID.

Renice process id :

A process ID can be passed to renice with the -p option. In the example below, we set the nice value to 2 for PID 30411.

# renice -n 2 -p 30411
30411: old priority 0, new priority 2
# ps -eo pid,user,nice,command | grep 30411
  747 root       0 grep 30411
30411 root       2 top

The renice command itself shows the old and new nice values in its output. We also verified the new nice value using the ps command.
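You can try this safely on any Linux box with a throwaway process of your own. A sketch; it works without root because it only raises the nice value:

```shell
sleep 30 &             # a disposable process to experiment on
pid=$!

renice -n 5 -p "$pid"  # prints the old and new priority
ps -o ni= -p "$pid"    # the NI column now reads 5
```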

Renice user id :

If you want to change the priorities of all processes of a particular user, submit that user’s UID with the -u option. This is useful when you want all of a user’s processes to complete faster; an administrator can set the user to -20 to get things speedy!

# ps -eo pid,user,nice,command | grep top
 3859 user4   0 top
 3892 user4   0 top
 4588 root    0 grep top
# renice -n 2 -u user4
54323: old priority 0, new priority 2
# ps -eo pid,user,nice,command | grep top
 3859 user4   2 top
 3892 user4   2 top
 4966 root    0 grep top

In the above example, there are two processes owned by user4 with priority 0. We changed the priority of user4 to 2. So both processes had their priority changed to 2.

Normal users can change the priority of their own processes too, but they cannot override a priority set by the administrator. -20 is the minimum nice value one can set on the system; it is the speediest priority, and a process set to it is served first, with all available resources, to get its task done.

How to resolve mount.nfs: Stale file handle error

Learn how to resolve mount.nfs: Stale file handle error on the Linux platform. This is a Network File System error that can be resolved from the client or server end.

When you use the Network File System in your environment, you must have seen the mount.nfs: Stale file handle error at times. This error means the NFS share cannot be mounted because something has changed since the last known good configuration.

This error can appear when the NFS server is rebooted, when some NFS processes are not running on the client or server, or when the share is not properly exported on the server. It is especially irritating when the error appears on a previously mounted NFS share, since that means the configuration itself was correct. In such a case, one can try the following commands:

Make sure the NFS services are running fine on both client and server.

#  service nfs status
rpc.svcgssd is stopped
rpc.mountd (pid 11993) is running...
nfsd (pid 12009 12008 12007 12006 12005 12004 12003 12002) is running...
rpc.rquotad (pid 11988) is running...

If the NFS share is currently mounted on the client, un-mount it forcefully and try to remount it on the NFS client. Check that it is properly mounted with the df command and by changing directory inside it.

# umount -f /mydata_nfs
# mount -t nfs server:/nfs_share /mydata_nfs
# df -k
------ output clipped -----
server:/nfs_share 41943040  892928  41050112   3% /mydata_nfs

In the above mount command, server can be the IP or hostname of the NFS server.

If you get an error while forcefully un-mounting, like below:

# umount -f /mydata_nfs
umount2: Device or resource busy
umount: /mydata_nfs: device is busy
umount2: Device or resource busy
umount: /mydata_nfs: device is busy

Then you can check which processes or users are using that mount point with the lsof command, like below:

# lsof |grep mydata_nfs
lsof: WARNING: can't stat() nfs file system /mydata_nfs
      Output information may be incomplete.
su         3327      root  cwd   unknown                                                   /mydata_nfs/dir (stat: Stale NFS file handle)
bash       3484      grid  cwd   unknown                                                   /mydata_nfs/MYDB (stat: Stale NFS file handle)
bash      20092  oracle11  cwd   unknown                                                   /mydata_nfs/MPRP (stat: Stale NFS file handle)
bash      25040  oracle11  cwd   unknown                                                   /mydata_nfs/MUYR (stat: Stale NFS file handle)

You can see in the above example that four PIDs are using files on the mount point. Try killing them to free the mount point; once done, you will be able to un-mount it properly.

Sometimes the mount command still gives the same error. Then try mounting after restarting the NFS service on the client, using the command below.

#  service nfs restart
Shutting down NFS daemon:                                  [  OK  ]
Shutting down NFS mountd:                                  [  OK  ]
Shutting down NFS quotas:                                  [  OK  ]
Shutting down RPC idmapd:                                  [  OK  ]
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]

Also read : How to restart NFS step by step in HPUX

If even this didn’t solve your issue, the final step is to restart the services on the NFS server. Caution! This will disconnect all NFS shares exported from that server, and all clients will see their mount points disconnect. This step resolves the issue in the vast majority of cases; if not, the NFS configuration must be checked, especially if it was changed just before you started seeing this error.

The outputs in this post are from a RHEL 6.3 server. Drop us your comments related to this post.

How to do safe and graceful Measureware service restart in HPUX

A how-to guide for safe and graceful measureware service restart on HPUX machines. Learn how to preserve old log files during service restart and avoid overwriting them.

Measureware is a utility native to HPUX for performance measurement. It collects system utilization data in the background: the Measureware agent mwa runs in the background and stores data in log files called datafiles. If you restart the Measureware service without moving these log files, it overwrites the current files and all historical data is lost. Hence you need to stop it, move the data files to another location, and then start it again; this sequence prompts the agents to create new, blank data files.

You can view the current status of all Measureware services using the command below:

# mwa status all
 Perf Agent status:
    Running scopeux               (Perf Agent data collector) pid 2814
    Running midaemon              (Measurement Interface daemon) pid 2842
    Running ttd                   (ARM registration daemon) pid 2703

 Perf Agent Server status:

    Running ovcd                  (OV control component) pid 3483
    Running ovbbccb               (BBC5 communication broker) pid 3484
    Running coda                  (perf component) pid(s) 3485
       Configured DataSources(1)
                  SCOPE

    Running perfalarm             (alarm generator) pid(s) 2845

If any of the components is not running or has issues, it may call for a Measureware service restart. Let’s walk through a graceful shutdown and start of Measureware services on HPUX.

1. Stop mwa

Stop all Measureware services with a single command, as below:

# mwa stop all

Shutting down Perf Agent collection software
         Shutting down scopeux, pid(s) 2814
         The Perf Agent collector, scopeux has been shut down successfully.
NOTE:   The ARM registration daemon ttd will be left running.

Shutting down the alarm generator perfalarm, pid(s) 2845
         The perfalarm process has terminated

OVOA is running. Not shutting down coda

As you can see in the above output, ttd is left running by the command. You need to kill it using the command below:

# ttd -k

Also, midaemon still runs after the above command. You can terminate it using:

#  midaemon -T

These three commands collectively shut down everything related to the Measureware services. You can confirm that midaemon, ttd, and scopeux are down with the status command again:

#  mwa status all
 Perf Agent status:
WARNING: scopeux    is not active (Perf Agent data collector)
WARNING: midaemon   is not active (Measurement Interface daemon)
WARNING: ttd        is not active (ARM registration daemon)

 Perf Agent Server status:

    Running ovcd                  (OV control component) pid 3483
    Running ovbbccb               (BBC5 communication broker) pid 3484
    Running coda                  (perf component) pid(s) 3485
       Configured DataSources(1)
                  SCOPE

WARNING: perfalarm is not active (alarm generator)

This ensures you can proceed with log movement before starting mwa again.

2. Log movement

Datafiles (whose names all start with log) reside in the /var/opt/perf/datafiles directory. The list of datafiles is as below:

# ll /var/opt/perf/datafiles/log*
-rw-r--r--   1 root       users      11064908 Jan  1 03:05 /var/opt/perf/datafiles/logappl
-rw-r--r--   1 root       root       43951620 Jan  1 03:05 /var/opt/perf/datafiles/logdev
-rw-r--r--   1 root       users      9556384 Jan  1 03:05 /var/opt/perf/datafiles/logglob
-rw-r--r--   1 root       root         15716 Jan  1 03:01 /var/opt/perf/datafiles/logindx
-rw-r--r--   1 root       users           15 Nov  4  2009 /var/opt/perf/datafiles/logpcmd0
-rw-r--r--   1 root       root       76492020 Jan  1 03:05 /var/opt/perf/datafiles/logproc
-rw-r--r--   1 root       root       96153856 Jan  1 03:05 /var/opt/perf/datafiles/logtran

Now move the current data files to a different directory. You can use the small inline script below, or move them one by one manually. (Note the mv: copying would leave the old files in place and mwa would keep appending to them.)

# cd /var/opt/perf/datafiles
# nowis=`date +%d%b%y-%H%M`
# mkdir /var/opt/perf/datafiles.old.$nowis
# mv log* /var/opt/perf/datafiles.old.$nowis

Make sure the datafiles are safely in the destination directory, then proceed to start the services again.
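The move-then-verify pattern above can be rehearsed in a sandbox before touching the real /var/opt/perf/datafiles (a sketch with dummy file names):

```shell
# Stand-in for the datafiles directory, with a few dummy log files.
datadir=$(mktemp -d)
touch "$datadir/logglob" "$datadir/logproc" "$datadir/logindx"

nowis=$(date +%d%b%y-%H%M)
mkdir "$datadir.old.$nowis"                # timestamped archive directory
mv "$datadir"/log* "$datadir.old.$nowis"   # move, so the agent starts fresh files

ls "$datadir"             # nothing left behind
ls "$datadir.old.$nowis"  # logglob logindx logproc
```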

3. Start mwa

Start it using below command :

#  mwa start all

The Perf Agent scope collector is being started.
         The ARM registration daemon
         /opt/perf/bin/ttd has been started.

         The Performance collection daemon
         /opt/perf/bin/scopeux has been started.

         The coda daemon /opt/OV/lbin/perf/coda is already running.
The Perf Agent alarm generator is being started.
         The alarm generator /opt/perf/bin/perfalarm
         has been started.

Observe that while shutting down we needed three commands for the different components, but starting up takes just one. Check the status with the mwa status all command to make sure all components have started. This pretty much sums up how to do a safe and graceful Measureware service restart.

All examples on this post are from the machine running HPUX 11.31. Let us know if you have any queries, suggestions, corrections in comments.