Yearly Archives: 2016

bdf command formatted output in hpux

Learn how to get neat, clean, and properly aligned output from bdf. Left-aligned, well-formatted bdf output makes further data processing easier.

Requirement :

bdf command output normally looks scattered, especially when VG names are long. It is difficult to grep a proper pattern out of such output. Also, it is not convenient to share this output over email or in a document when extra line breaks exist.

In such scenarios, we need a properly formatted bdf output. Sometimes we also require the output with all its columns left-aligned.

Solution:

To remove line breaks from bdf output and get one row per entry

See the normal bdf output below. Note that the last 2 mount points span two lines each, since their filesystem column entries are long.

# bdf

Filesystem          kbytes    used   avail %used Mounted on
/dev/vg00/lvol3    2097152  737416 1349304   35% /
/dev/vg00/lvol1    1048576  206160  835928   20% /stand
/dev/vg00/lvol8    8388608 5475640 2902568   65% /var
/dev/vg00/lvol7    8388608 4655256 3713000   56% /usr
/dev/vg00/lvol4    2097152 1052368 1036888   50% /tmp
/dev/vg00/lvol6    8388608 6675168 1700112   80% /opt
/dev/vg00/lvol5     524288   49360  471256    9% /home
testserver01:/data
                   50574008 4541896 43463104    9% /data
/dev/vgdata/lvol1
                   918421504 591931608 306084338   66% /datastore

Now, with an inline awk, we format the output to have one entry per row. Check the command output below.

# bdf | awk '{if (NF==1) {line=$0;getline;sub(" *"," ");print line$0} else {print}}'

Filesystem          kbytes    used   avail %used Mounted on
/dev/vg00/lvol3    2097152  737408 1349312   35% /
/dev/vg00/lvol1    1048576  206160  835928   20% /stand
/dev/vg00/lvol8    8388608 5475640 2902568   65% /var
/dev/vg00/lvol7    8388608 4655256 3713000   56% /usr
/dev/vg00/lvol4    2097152 1052368 1036880   50% /tmp
/dev/vg00/lvol6    8388608 6675168 1700112   80% /opt
/dev/vg00/lvol5     524288   49360  471256    9% /home
testserver01:/data 50574008 4541896 43463104    9% /data
/dev/vgdata/lvol1 918421504 591931608 306084338   66% /datastore

To get left-aligned bdf output

In the above output, the columns are still not aligned properly. We can fix that with the awk printf statement below.

# bdf | awk '/\//{printf("%-30s%-10s%-10s%-10s%-5s%-10s\n",$1,$2,$3,$4,$5,$6)}'

/dev/vg00/lvol3               2097152   737408    1349312   35%  /
/dev/vg00/lvol1               1048576   206160    835928    20%  /stand
/dev/vg00/lvol8               8388608   5472792   2905392   65%  /var
/dev/vg00/lvol7               8388608   4655256   3713000   56%  /usr
/dev/vg00/lvol4               2097152   1052368   1036888   50%  /tmp
/dev/vg00/lvol6               8388608   6675168   1700112   80%  /opt
/dev/vg00/lvol5               524288    49360     471256    9%   /home

Please note that this awk alone won't remove any line breaks from the output, so you can combine both awk commands (with a pipe |) to get left-aligned output with line breaks removed.

Left-aligned output with line breaks removed!

# bdf | awk '{if (NF==1) {line=$0;getline;sub(" *"," ");print line$0} else {print}}' |awk '/\//{printf("%-30s%-10s%-10s%-10s%-5s%-10s\n",$1,$2,$3,$4,$5,$6)}'
/dev/vg00/lvol3               2097152   737408    1349312   35%  /
/dev/vg00/lvol1               1048576   206160    835928    20%  /stand
/dev/vg00/lvol8               8388608   5481008   2897240   65%  /var
/dev/vg00/lvol7               8388608   4655256   3713000   56%  /usr
/dev/vg00/lvol4               2097152   1052368   1036888   50%  /tmp
/dev/vg00/lvol6               8388608   6675168   1700112   80%  /opt
/dev/vg00/lvol5               524288    49360     471256    9%   /home
testserver01:/data            50574008  4541896   43463104  9%   /data
/dev/vgdata/lvol1             918421504 591931608 306084338 66%  /datastore
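
If you use this formatting frequently, you can wrap both awk commands into a small helper script. Below is a minimal sketch; the script name bdf_fmt.sh is only illustrative.

#!/usr/bin/sh
# bdf_fmt.sh : bdf output with wrapped entries joined and columns left-aligned
# First awk joins two-line entries into one line, second awk reformats the columns
bdf | awk '{if (NF==1) {line=$0;getline;sub(" *"," ");print line$0} else {print}}' | \
awk '/\//{printf("%-30s%-10s%-10s%-10s%-5s%-10s\n",$1,$2,$3,$4,$5,$6)}'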


How to change sender’s email id in EMS HPUX

EMS doesn't allow us to edit the sender email address; all events are always sent out with the sender id root@hostname. Here is a small script to work around it.

Requirement :

Normally, the Event Monitoring System (EMS) on HPUX sends alert emails with the sender id root@hostname. Many organizations' email servers don't allow such an address in the sender field. We need a generic email id in the sender field, something like notification@xyz.com, when EMS sends an alert email.

Workaround :

There is no provision to change this email id anywhere in HPUX or EMS configurations. You can use the below workaround which works perfectly without any issues.

Step 1 :

Make sure the logged-in account you will use has a valid, working email address (like notification@xyz.com). Send a test email from the server to verify, using the below command.

# echo test |/usr/sbin/sendmail -v receiver_id@xyz.com
receiver_id@xyz.com... Connecting to smtpserver.xyz.com via relay...
220 smtpserver.xyz.com ESMTP Postfix
>>> EHLO xyz.com
250-smtpserver.xyz.com
250-PIPELINING
250-SIZE 25600000
250-VRFY
250-ETRN
250-ENHANCEDSTATUSCODES
250-8BITMIME
250 DSN
>>> MAIL From:<root@xyz.com> SIZE=5
250 2.1.0 Ok
>>> RCPT To:<receiver_id@xyz.com>
>>> DATA
250 2.1.5 Ok
354 End data with <CR><LF>.<CR><LF>
>>> .
250 2.0.0 Ok: queued as 46466822F2
receiver_id@xyz.com... Sent (Ok: queued as 46466822F2)
Closing connection to smtpserver.xyz.com
>>> QUIT
221 2.0.0 Bye

Step 2 :

Set up a crontab entry for the above logged-in account (the one for which email was tested) which will execute the EMS log scanner script every 30 minutes. As per your convenience, you can schedule it to run every 10 minutes or even more frequently.

00,30 * * * * /scripts/ems_monitor.sh
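
To add and verify this entry for that account you can use the standard crontab utility; crontab -l should then list the entry you added.

# crontab -e
# crontab -l
00,30 * * * * /scripts/ems_monitor.sh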

Step 3:

The script code is as below.

#!/bin/bash
# Script to scan EMS log file and email alert if any
# Author : Shrikant Lavhate

# Seed the reference copy of the EMS event log on first run
if [ ! -f "/logs/event_monitor.log" ]
then
cp -p /var/opt/resmon/log/event.log /logs/event_monitor.log
fi

# Capture events added since the last run
diff /logs/event_monitor.log /var/opt/resmon/log/event.log > /logs/logfile_difference

# If new events exist, mail them (strip the leading ">" that diff adds)
if [ -s "/logs/logfile_difference" ]
then
grep '^>' /logs/logfile_difference | cut -c 2- | mailx -s "EMS monitor alert from `hostname`" receiver_id@xyz.com
fi

# Refresh the reference copy for the next run
cp -p /var/opt/resmon/log/event.log /logs/event_monitor.log
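
Before relying on cron, make the script executable and run it once manually to confirm that mail delivery works (the path is the same one used in the crontab entry above).

# chmod +x /scripts/ems_monitor.sh
# /scripts/ems_monitor.sh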

Step 4:

Now you can test the script by generating test events in EMS. Generate a test event with the send_test_event command.

# send_test_event -v -a disk_em

Finding resource name associated with monitor disk_em.

Found resource name /storage/events/disks/default
associated with monitor disk_em.

Creating test file /var/stm/config/tools/monitor/disk_em.test
for monitor disk_em.

Making test file /var/stm/config/tools/monitor/disk_em.test
for monitor disk_em
indicate send test event for all resources.

Performing resls on resource name /storage/events/disks/default
for monitor disk_em to cause generation of test event.
Contacting Registrar on aprss006

NAME:   /storage/events/disks/default
DESCRIPTION:    Disk Event Monitor

This resource monitors events for stand-alone disk drives.  Event 
monitoring requests are created using the Monitoring Request 
Manager.  Monitoring requests to detect changes in device status are 
created using the Peripheral Status Monitor (psmmon(1m)) and Event 
Monitoring Service (EMS). 

For more information see the monitor man page, (disk_em(1m)).

TYPE:   /storage/events/disks/default is a Resource Class.

There are 2 resources configured below /storage/events/disks/default:
Resource Class
        /storage/events/disks/default/64000_0xfa00_0x0
        /storage/events/disks/default/64000_0xfa00_0x35

You will receive these test events from the admin@hostname email id, which is normal. Now run the above script and you will receive the same events via email with the sender id notification@xyz.com!

I have tested this script and it works flawlessly. Please leave comments below if it’s helpful to you too.

How to generate CSR file for SSL request on Linux

Steps to generate a CSR file. A CSR is a request file that is submitted to the vendor to obtain an SSL certificate for a web server.

CSR stands for Certificate Signing Request. The file is generated on the server on which the SSL certificate will be used. It contains details about the organization and URL in an encoded format. Whenever you approach any vendor to get an SSL certificate for your web server, you have to submit this CSR file to them. Your certificate is generated based on the information in this CSR file.

How to generate CSR using OpenSSL

Let's jump into creating our CSR using the most commonly used method, i.e. OpenSSL. It's a two-step process –

  1. Create a private key
  2. Generate CSR using the private key

Create a private key

Using openssl, generate a 2048-bit key file (*.key). This key file will be used to generate the CSR. The command will ask you for a password that protects the key file; use a password of your choice. You will need to supply this password while generating the CSR.

[root@kerneltalks ~]# openssl genrsa -des3 -out kerneltalks.com.key 2048
Generating RSA private key, 2048 bit long modulus
............................+++
..............................................................................................................................................................................................................................................................................................................................+++
e is 65537 (0x10001)
Enter pass phrase for kerneltalks.com.key:
Verifying - Enter pass phrase for kerneltalks.com.key:

Read also: How to install an SSL certificate on Apache webserver

Generate CSR file using key

Now generate CSR file using the key file we generated in the above step.

[root@kerneltalks ~]# openssl req -new -key kerneltalks.com.key -out kerneltalks.com.csr -sha256
Enter pass phrase for kerneltalks.com.key:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:IN
State or Province Name (full name) []:Maharashtra
Locality Name (eg, city) [Default City]:Mumbai
Organization Name (eg, company) [Default Company Ltd]:Personal
Organizational Unit Name (eg, section) []:Personal
Common Name (eg, your name or your server's hostname) []:kerneltalks.com
Email Address []:contact@kerneltalks.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

Note that -sha256 generates the CSR with the SHA-2 algorithm, which is normally preferred. If the -sha256 argument is not given, the CSR is generated with SHA-1, which is outdated and normally not preferred.

Once you get the CSR file, you can check it using cat. It is a bunch of encoded text which you can decode to verify the information within. If there is any typo in the data, you can regenerate the CSR before submitting it to the vendor.
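
For example, you can decode the CSR locally with openssl itself and review the subject details before sending it out:

# openssl req -in kerneltalks.com.csr -noout -text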

How to generate CSR using Java keytool

Some people create the CSR file using a java keystore. Let's walk through how to create a certificate signing request using the java keytool.

Firstly, your web server must have java installed and you should know the path of the java binary directory. This is where the keytool binary resides.

It's also a two-step process –

  1. Create java Keystore
  2. Generate CSR using java Keystore

Create java Keystore

keytool is the java binary used to run the below commands. While generating the keystore, you will be asked for all the website-related information.

# keytool -genkey -alias server -keyalg RSA -keystore kerneltalks.com.jks -keysize 2048
Enter keystore password:
Re-enter new password:
What is your first and last name?
  [Unknown]:  kerneltalks.com
What is the name of your organizational unit?
  [Unknown]:  Personal
What is the name of your organization?
  [Unknown]:  Personal
What is the name of your City or Locality?
  [Unknown]:  Mumbai
What is the name of your State or Province?
  [Unknown]:  Maharashtra
What is the two-letter country code for this unit?
  [Unknown]:  IN
Is CN=kerneltalks.com, OU=Personal, O=Personal, L=Mumbai, ST=Maharashtra, C=IN correct?
  [no]:  yes

Enter key password for <server>
        (RETURN if same as keystore password):

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore kerneltalks.com.jks -destkeystore kerneltalks.com.jks -deststoretype pkcs12".

Create CSR using java Keystore

Now use the above created keystore, i.e. the jks file, and generate the CSR file.

[root@kerneltalks ~]# keytool -certreq -keyalg RSA -alias server -file kerneltalks.com.csr -keystore kerneltalks.com.jks
Enter keystore password:

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore kerneltalks.com.jks -destkeystore kerneltalks.com.jks -deststoretype pkcs12".

Once done you can give this CSR to your vendor for SSL certificate procurement.
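
If your JDK's keytool supports it (newer versions provide a -printcertreq option), you can also review the generated request before handing it over:

# keytool -printcertreq -file kerneltalks.com.csr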

Get list of desired LUN id from powermt output

A small script to get LUN ids from powermt output when supplied with a list of disks. It's a tedious job to extract LUN ids from the output manually when you have a list of disks to work upon.

Requirement :

You have a list of disk names from OS end and you need to get their respective LUN ids from powermt output.

This requires the manual work of searching for each disk name in the output and then copying its respective LUN id. Typically, these are the three lines you are interested in from the powermt output:

Pseudo name=emcpoweraa
Symmetrix ID=000549754319
Logical device ID=03C4

If you have a list of disks to search for, it's a tedious task, because you have to search for each and every disk name in the output.

Solution :

Get the output of powermt command in a file

# powermt display dev=all > powermt.old

Get all disk names in one file, e.g. test.txt.
Then run a for loop which gets the LUN id of each disk listed in the file. In this loop, we take each disk and search for it in the above file, extract the 2 lines below the disk name (since the 2nd line below the disk name contains the LUN id), and then pull the LUN id out of it with some data filtering.

# for i in `cat test.txt`
do
cat powermt.old |grep -A 2 $i|grep Logical|awk '{print $3}'|cut -d= -f2
done

You will be presented with the list of LUN ids for the respective disks in the test.txt file! You can even echo the disk name before the LUN id by inserting echo $i just above the cat command in the above code, as shown in the sketch below.
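
For example, a variant that prints each disk name followed by its LUN id looks like this:

# for i in `cat test.txt`
do
echo $i
cat powermt.old |grep -A 2 $i|grep Logical|awk '{print $3}'|cut -d= -f2
done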

Vice versa:

Get disk names by giving LUN ids in the test.txt file. It's the same logic; the only difference is that we extract the 2 lines above the match (grep -B) rather than the ones below. The rest of the procedure remains the same.

# for i in `cat test.txt`
do
cat powermt.old |grep -B 2 $i|grep Pseudo|awk '{print $2}'|cut -d= -f2
done

Processes using high memory, cpu using unix95

Learn how to get a list of processes consuming high memory or CPU on an HPUX system, in sorted order, using UNIX95 (XPG4 ps options). Helpful for troubleshooting high system utilization.


Listing of processes which are using high memory:

# UNIX95=1 ps -ef -o vsz= -o pid= -o comm= | sort -rnk1 | awk '{ print $1/1024" MB "$2" "$3; }'
749.156 MB 2025 cimprovagt
60.1875 MB 2842 midaemon
56.8672 MB 3616 opcmona
49.3633 MB 6905 cmcld
43.8906 MB 3485 coda
42.9375 MB 2035 cimprovagt
41.043 MB 2372 java
22.875 MB 2039 cimprovagt
21.2578 MB 3414 nsrexecd
18.2617 MB 2617 vxpal
17.2188 MB 2431 vxsvc
16.9297 MB 3264 postgres:

----- output truncated -----

or, alternatively (UNIX95 only needs to be set for the duration of the ps command; even an empty value is enough to enable the XPG4 standards-compliant behavior of ps):

# UNIX95= ps -eo vsz,comm,args | sed 1d | sort -rn | more
 767136 cimprovagt      /opt/wbem/lbin/cimprovagt 0 4 9 root SFMProviderModule
  61632 midaemon        /opt/perf/bin/midaemon
  58232 opcmona         /opt/OV/lbin/eaagt/opcmona
  50548 cmcld           /usr/lbin/cmcld -j
  44944 coda            /opt/OV/lbin/perf/coda
  43968 cimprovagt      /opt/wbem/lbin/cimprovagt 0 4 9 root HPUXFCIndicationProviderModule
  42028 java            /opt/hpservices/jre/opt/java1.4/jre/bin/IA64N/java -Xmx512m -DRunnerValuePairFile=/var/opt/runner/data/valuePairs -cp /opt/runn
  23424 cimprovagt      /opt/wbem/lbin/cimprovagt 0 4 9 root HPUXStorageIndicationProviderModule
  21768 nsrexecd        /opt/networker/bin/nsrexecd
  18700 vxpal           /opt/VRTSobc/pal33/bin/vxpal -a StorageAgent -x
  17632 vxsvc           /opt/VRTSob/bin/vxsvc -r /etc/vx/isis/Registry -e

----- output truncated -----

Listing of processes which are using high CPU. Note that the command below still sorts on the first column (vsz); use sort -rnk2 if you want the output ordered strictly by the pcpu column:

# UNIX95= ps -e -o "vsz pcpu ruser pid stime time state args" | sort -rn |head -10
 767136  0.00 root      2025  Nov 15  1-04:47:22 R /opt/wbem/lbin/cimprovagt 0 4 9 root SFMProviderModule
  61632  0.07 root      2842  Nov 15  11:29:30 R /opt/perf/bin/midaemon
  58232  0.00 root      3616  Nov 15  05:52:05 R /opt/OV/lbin/eaagt/opcmona
  50548  0.14 root      6905  Nov 15  1-05:23:35 R /usr/lbin/cmcld -j
  44944  0.00 root      3485  Nov 15  04:46:17 R /opt/OV/lbin/perf/coda
  43968  0.00 root      2035  Nov 15  05:53:58 R /opt/wbem/lbin/cimprovagt 0 4 9 root HPUXFCIndicationProviderModule
  42028  0.09 root      2372  Nov 15  22:59:12 R /opt/hpservices/jre/opt/java1.4/jre/bin/IA64N/java -Xmx512m -DRunnerValuePairFile=/var/opt/runner/data/valuePairs -cp /opt/runn
  23424  0.00 root      2039  Nov 15  02:26:51 R /opt/wbem/lbin/cimprovagt 0 4 9 root HPUXStorageIndicationProviderModule
  21768  0.00 root      3414  Nov 15  01:22:45 R /opt/networker/bin/nsrexecd
  18700  0.00 root      2617  Nov 15  01:04:36 R /opt/VRTSobc/pal33/bin/vxpal -a StorageAgent -x
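
If you use the memory listing often, a tiny wrapper script saves retyping. A minimal sketch, where the script name topmem.sh and the default of 10 entries are only illustrative:

#!/usr/bin/sh
# topmem.sh : top N processes by virtual memory size (XPG4 ps via UNIX95)
N=${1:-10}
UNIX95=1 ps -ef -o vsz= -o pid= -o comm= | sort -rnk1 | head -n $N | \
awk '{ printf "%.2f MB  %s %s\n", $1/1024, $2, $3 }'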

How to restore nagios configuration from backup

Learn how to restore a Nagios configuration backup. If you have messed up the Nagios configuration, you can restore the last known good config using these steps.


Requirement

While troubleshooting or doing some configuration changes (R&D), you can sometimes mess up the configuration in Nagios. This leads to a dashboard with no data. You then need to revert to the last known good configuration so that the tool can resume its work.

Also read: How to install & configure checkmk in Linux

Solution

Normally you should have backups of the tool configuration scheduled daily (or at a frequency of your choice, i.e. weekly, hourly, etc.) using services like cron. We will see how to configure this configuration backup in another article.
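
As a sketch of what such a scheduled backup could look like, assuming your check_mk version provides a --backup option as the counterpart of --restore (the backup path below is only illustrative):

# crontab entry: daily 01:00 backup of the check_mk/Nagios configuration (% must be escaped in cron)
00 01 * * * check_mk --backup /backup/nagios/check_mk.`date +\%d-\%b-\%Y`.tar.gz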

Now, navigate to the directory where the configuration backups are kept. Normally they are gzip-compressed tar archives.

Once inside that directory, run the below restore command.

# check_mk --restore check_mk.11-Mar-2016.tar.gz

After the restore, to make sure ownership is properly in place, run the below command.

# chown -R apache:nagcmd /etc/check_mk/conf.d/wato/

Lastly, restart Nagios to pick up the restored configuration.

# check_mk -R --restart nagios-check_mk

Now go back to the dashboard and check that it is populated with values!

Highest size files in mount point

Search for the highest-sized files in a mount point. Useful while dealing with full mount points and cleaning up files to free up space.


There are many instances in the life of a sysadmin when the system starts throwing filesystem-full events and it is a headache to find the culprit. Here we are going to see a command to search for and display the files with the highest utilization in a mount point. This is very useful to dig down and target the really huge files rather than wasting time on trimming or deleting small files.

For example, suppose /tmp is full. Below is the command to search for large files in the given mount point. Here we are using du, i.e. the disk usage command, then sorting the output list in reverse order so that the highest-size files come to the top, and finally taking the top 10 entries to display as the result.

# du -a /tmp | sort -nr  | head -n 10
3396448 /tmp
1801984 /tmp/Acasrro411704
994848  /tmp/software.depot
81088   /tmp/PACPTHP_00001.depot
77088   /tmp/OraInstall2008-03-19_09-39-20AM
76912   /tmp/OraInstall2008-03-19_10-15-28AM
76848   /tmp/OraInstall2008-03-19_09-39-20AM/jre/1.4.2
76848   /tmp/OraInstall2008-03-19_09-39-20AM/jre
76656   /tmp/OraInstall2008-03-19_10-15-28AM/jre/1.4.2
76656   /tmp/OraInstall2008-03-19_10-15-28AM/jre

Replace /tmp with a mount point of your choice while running it in your system.

There are also different ways you can trace the biggest space hoggers in a mount point. Below are a few.

To find files larger than 5000 blocks (find's -size defaults to 512-byte blocks, so +5000 is roughly 2.5 MB); -xdev restricts the search to the given filesystem

# find /var -xdev -size +5000
/var/opt/perf/datafiles/logglob
/var/opt/perf/datafiles/logproc
/var/opt/perf/datafiles/logdev
/var/opt/perf/datafiles/logtran
/var/opt/OV/log/SPISvcDisc/DBSPI_ORACLE_UNIX_discovery.trc
/var/opt/OV/log/osspi/osspi.log
----- output truncated -----

Top 10 large directories in the mount point 

# du -kx /var | sort -rn -k1 | head -n 10
6726048	/var
3962830	/var/opt
3471340	/var/opt/OV
2494362	/var/adm
2465390	/var/opt/OV/tmp
1512210	/var/opt/OV/tmp/OpC
1438642	/var/adm/openv/logs
1438642	/var/adm/openv
921030	/var/adm/sa
822672	/var/adm/openv/logs/bpdbm

Top 10 large files in the mount point

# find /var -type f -xdev -print | xargs -e ll | sort -rn -k5 | head -n 10
-rw-rw----   1 root       bin        1463238440 Nov  1 14:45 /var/opt/OV/tmp/OpC/msgagtdf
-rw-rw-r--   1 root       bin        944541567 Nov  1 14:52 /var/opt/OV/tmp/sqlnet.log
-rw-rw-rw-   1 oracle92   dba        307369984 Mar  5  2014 /var/opt/omni/tmp/rcvcat21945.exp
-rw-------   1 root       mail       187848937 Nov  1 14:01 /var/tmp/dead.letter
-rw-rw----   1 root       bin        84680704 Feb 16  2013 /var/opt/OV/tmp/OpC/msgagtdf-11975.gc
-rw-r--r--   1 root       bin        80937879 Nov  1 14:54 /var/opt/OV/log/osspi/osspi.log
-rwxrwxrwx   1 root       users      58566204 Nov  1 14:55 /var/adm/rrdtool/apcrss32.rrd
-rw-rw-r--   1 root       bin        55159884 May 25  2009 /var/opt/OV/installation/temp/bbc/fx/upload/DeplM22799
-rw-rw-r--   1 root       bin        55159884 May 18  2009 /var/opt/OV/installation/temp/bbc/fx/upload/DeplM22011
-rw-rw-r--   1 root       bin        55159884 Jun 30  2010 /var/opt/OV/installation/temp/bbc/fx/upload/DeplM10143