Category Archives: Commands

How to change process priority in Linux or Unix

Learn how to change process priority in Linux or Unix using the renice command, and understand how to alter the priority of running processes.

Every process has a scheduling priority that determines how it gets served by the kernel. On a loaded system, if you need some processes to be served before others, you have to change process priority. This is also called renicing, since we use the renice command to change process priority.

Nice values range from -20 to 19, with a default of 0. A process with a low nice value gets serviced before a process with a high nice value, so if you want a particular process to be served first, lower its nice value. Administrators can assign negative nice values (down to -20) to speed up processes even further, since setting a negative value requires root privileges. Let’s see how to change process priority –

There are three ways to select a target for the renice command: by process ID (PID), user ID (UID), or process group ID. PID and UID are the ones normally used in the real world, so we will look at those. The new priority or nice value is given with the -n option. The current nice value can be seen in the NI column of top, or checked with the command below:

# ps -eo pid,user,nice,command | grep 30411
30411 root       0 top
31567 root       0 grep 30411

In the above example, the nice value is set to 0 for the given PID.

Renice process id :

A process ID can be passed to renice with the -p option. In the example below we set the nice value of PID 30411 to 2.

# renice -n 2 -p 30411
30411: old priority 0, new priority 2
# ps -eo pid,user,nice,command | grep 30411
  747 root       0 grep 30411
30411 root       2 top

The renice command itself prints the old and new nice values in its output. We also verified the new nice value with the ps command.
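
renice works on already-running processes; for a new command, the nice utility does the same job up front. A minimal sketch (the value 5 is arbitrary):

```shell
# Start a command at niceness 5 instead of renicing it afterwards.
# nice with no command prints the current niceness, so nesting the two
# shows the value the child process actually inherited.
nice -n 5 nice
# prints 5, assuming the parent shell runs at the default niceness of 0
```

As with renice, an unprivileged user can only pass a positive adjustment here.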

Renice user id :

If you want to change the priority of all processes of a particular user, submit that user's UID with the -u option. This is useful when you want everything the user runs to finish fast: as root, you can set the user's processes to -20 to get things speedy!

# ps -eo pid,user,nice,command | grep top
 3859 user4   0 top
 3892 user4   0 top
 4588 root    0 grep top
# renice -n 2 -u user4
54323: old priority 0, new priority 2
# ps -eo pid,user,nice,command | grep top
 3859 user4   2 top
 3892 user4   2 top
 4966 root    0 grep top

In the above example, there are two processes owned by user4, both at nice value 0. We changed the nice value for user4 to 2, so both processes now run at nice 2.

Normal users can change the priority of their own processes too, but they can only raise the nice value; they cannot lower it below what the administrator has set. -20 is the minimum nice value on the system and corresponds to the highest priority: a process running at it is serviced ahead of everything else.
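
This asymmetry is easy to demonstrate from any shell. A small sketch, renicing the current shell itself ($$), assuming a Linux /proc filesystem:

```shell
# Lowering a nice value (raising priority) is privileged; raising it is not.
renice -n -1 -p $$ || echo "denied: only root can lower a nice value"
renice -n 5 -p $$        # raising the nice value is always allowed
# Field 19 of /proc/<pid>/stat is the niceness, so this prints 5:
awk '{print $19}' "/proc/$$/stat"
```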

5 different examples to send email through Linux terminal

Five different examples to show you how to send an email using the Linux terminal. Learn how to send an email with the subject, attachment, and body from the terminal.

We have seen how to configure SMTP in Linux; in this post we will see different examples of sending mail from the terminal. This post elaborates on how to attach a file to a mail, how to add a subject line, and how to add body content to a mail from the command-line interface.

First of all, check that SMTP is configured correctly and that you are able to send mail from your Linux server. You can test it by sending a test mail like below:

# echo test |sendmail -v info@xyz.com
[<-] 220 mailserver.xyz.com ESMTP Postfix
[->] HELO testsrv2
[<-] 250 mailserver.xyz.com
[->] MAIL FROM:<root@xyz.com>
[<-] 250 2.1.0 Ok
[->] RCPT TO:<info@xyz.com>
[<-] 250 2.1.5 Ok
[->] DATA
[<-] 354 End data with <CR><LF>.<CR><LF>
[->] Received: by testsrv2 (sSMTP sendmail emulation); Fri, 23 Dec 2016 02:29:07 +0800
[->] From: "root" <root@xyz.com>
[->] Date: Fri, 23 Dec 2016 02:29:07 +0800
[->] test
[->]
[->] .
[<-] 250 2.0.0 Ok: queued as 19F75822B8
[->] QUIT
[<-] 221 2.0.0 Bye

Once you receive the test mail, it’s confirmed that your SMTP configuration is correct. Let’s see different examples of sending mail.

1. An email with the subject line :

sendmail is a low-level utility, so it does not support an -s option for the subject line the way other mail utilities do. Instead, the subject has to be fed to sendmail on stdin, like below:

# echo "Subject: test" |sendmail -v user4@xyz.com
user4@xyz.com... Connecting to mailserver.xyz.com via relay...
220 mailserver.xyz.com ESMTP Postfix
>>> EHLO server2.xyz.com
250-mailserver.xyz.com
250-PIPELINING
250-SIZE 25600000
---- output clipped -----

Text following “Subject:” will be treated as the subject line for mail.

You can also use mailx command with -s option to specify the subject line.

# mailx -s "test email subject line"  user4@xyz.com
Your email body goes here... type and hit ctrl+d
Cc:

2. An email with mail body :

If you have a long email body, you can save it in a file and pass that file to the sendmail command. Sendmail will read the file, and its content will appear in the body of the email.

# cat /tmp/mail_body.txt | sendmail -v user4@xyz.com
user4@xyz.com... Connecting to mailserver.xyz.com via relay...
220 mailserver.xyz.com ESMTP Postfix
>>> EHLO server2.xyz.com
250-mailserver.xyz.com
250-PIPELINING
250-SIZE 25600000
250-VRFY

The same format works with the mailx command too. You can also feed the body file in with a redirection, like below:

# mailx user4@xyz.com < /tmp/mail_body.txt

3. Email with attachment

Sending an attachment with sendmail/mail is a bit tricky: the file has to be encoded with uuencode before it is sent as an attachment.

# uuencode /tmp/attachment.txt attachment.txt | mail -s "Subject line" user4@xyz.com

4. An email with different FROM field

You can even change the FROM field of the email, so it shows a nicer name rather than the ugly account name the mail utility picks up from your account details. There are a few variants; if one is not available on your system, try another.

# mailx -r from_sender_id@xyz.com -s "subject" user4@xyz.com

# mailx -s "subject" user4@xyz.com -- -f from_sender_id@xyz.com

# mailx -s "subject" -a "From: Sender <from_sender_id@xyz.com>" user4@xyz.com

5. Adding CC and BCC fields:

You can add CC and BCC fields too, using the -c option for CC and -b for BCC:

# mail -s "test subject" -c abc@xyz.com -b pqr@xyz.com user4@xyz.com

How to add a subject line in sendmail command

You can add an email subject line with sendmail by prepending “Subject: xxxxx” to the message piped into the sendmail command. See the example below –

(echo "Subject: Test email"; cat mailbody.txt) | sendmail -v info@kerneltalks.com
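
The same trick extends to a full header block. sendmail's -t option makes it read the recipients from the To:/Cc:/Bcc: headers of the message itself, so the whole mail, headers plus body, can be composed first and piped in. A sketch with placeholder addresses:

```shell
# Build a complete message: headers, a blank separator line, then the body.
msg=$(printf '%s\n' \
    'From: root@xyz.com' \
    'To: info@xyz.com' \
    'Subject: Test email' \
    '' \
    'This is the message body.')
printf '%s\n' "$msg"                   # inspect the message before sending
# printf '%s\n' "$msg" | sendmail -t   # -t: take recipients from the headers
```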

Errors and logfile for email in Linux

Once you send an email, check the /var/log/maillog file for errors or success messages. If your configuration is correct and everything is working as expected, you will see messages like the below in there.

Sep 17 23:44:38 testsrv postfix/qmgr[10290]: 1086212AA0C: from=<root@testsrv>, size=470, nrcpt=1 (queue active)
Sep 17 23:44:38 testsrv postfix/smtp[17001]: 1086212AA0C: to=<info@kerneltalks.com>, relay=smtp.kerneltalks.com[10.10.1.1]:25, delay=4193, delays=4193/0.02/0.03/0.01, dsn=2.0.0, status=sent (250 2.0.0 Ok: queued as 66AE24043)
Sep 17 23:44:38 testsrv postfix/qmgr[10290]: 1086212AA0C: removed

It means your message was successfully handed to the SMTP server. If you still did not receive it in your mailbox, check with the SMTP server owner whether they restrict valid sender addresses: if you don't configure a sender address, the command by default sends the email with the FROM address set to username@hostname.

You can use the -r option of the mailx command to define a FROM email address that matches the policy of the SMTP server.

# echo test | mailx -s "Test from `hostname`" -r root@kerneltalks.com -S smtp="smtp.kerneltalks.com" info@kerneltalks.com

Here we have defined the sender address as root@kerneltalks.com in the above test command. Now this email will be accepted by the SMTP server and delivered to the recipient.

Another error that can be seen in maillog is as below –

Sep 17 23:07:33 testsrv postfix/smtp[9918]: 1086212AA0C: to=<info@kerneltalks.com>, relay=none, delay=1968, delays=1968/0.01/0.26/0, dsn=4.3.5, status=deferred (Host or domain name not found. Name service error for name=smtp.kerneltalks.com type=A: Host not found)

Solution: “type=A ... Host not found” means the server is unable to resolve the SMTP server name via DNS. Make sure DNS is configured in /etc/resolv.conf and the SMTP name is registered in DNS. You can verify the resolution, and which DNS server is being used, with the nslookup smtp.kerneltalks.com command.
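
Before editing postfix files, you can check resolution from the shell the same way the MTA would; the relay hostname below is illustrative:

```shell
# Resolve the relay through the system's normal lookup path (NSS),
# then show which nameservers are configured.
getent hosts smtp.kerneltalks.com || echo "unresolved: fix DNS or /etc/hosts"
cat /etc/resolv.conf
```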

The same error can also appear for record type AAAA, as below:

Sep 15 22:41:04 testsrv postfix/smtp[7833]: 97E9C12A7D9: to=<info@kerneltalks.com>, relay=none, delay=349010, delays=349010/0.01/0.26/0, dsn=4.3.5, status=deferred (Host or domain name not found. Name service error for name=smtp.kerneltalks.com type=AAAA: Host not found)

Solution: this means the server is looking up an IPv6 (AAAA) record for the SMTP server. Edit the /etc/postfix/main.cf file, remove the other protocols, define only ipv4, and you will be good.

# vi /etc/postfix/main.cf
inet_protocols = ipv4

How to save top command output in file

A how-to guide to save top command output in a simple text file. A useful tip to share real-time, dynamic command output in a static log file.

top is the first command that comes to mind whenever someone asks about monitoring system resource utilization. CPU and memory utilization can easily be tracked with this tool, and it is available on almost every Unix and Linux flavor.

The top command renders its output in a text screen that refreshes in real time, every 2 seconds by default. Since every value on the screen keeps changing, it's difficult to share the output with anyone who doesn't have access to the system, and you can't simply save the current screen for future use.

For such situations, top offers batch mode with a configurable number of iterations, so the output can be saved to a file. You define how many iterations you need and the time between them, much like iterations and intervals in the sar command. Use the options below to save the output in a file.

  • -b: batch operation mode
  • -n: number of iterations
  • -d: delay, i.e. the interval between iterations (not needed for a single iteration)
  • -f file: output file in which top saves its data (supported in a few versions only)

We are using top's batch operation mode. By defining iterations you ask top to stop after that many refreshes; normally it runs continuously with a delay of 2 seconds. This makes it easy to capture the output in a file with the -f option. If -f is not supported on your version, you can simply redirect the output to a file with the shell's redirection operator.

# top -n 1 -b >/tmp/top_output.txt
# cat  /tmp/top_output.txt
top - 02:58:22 up 278 days,  7:19,  1 user,  load average: 0.56, 0.52, 0.56
Tasks: 189 total,   1 running, 188 sleeping,   0 stopped,   0 zombie
Cpu(s): 10.8%us,  4.9%sy,  0.0%ni, 84.0%id,  0.2%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  12199956k total,  7774572k used,  4425384k free,   856212k buffers
Swap:  8388604k total,        0k used,  8388604k free,  4787268k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 2994 root      RT   0  688m  84m  55m S  7.9  0.7   2998:17 osysmond.bin
 4227 grid      20   0  501m  29m  17m S  2.0  0.3 458:12.36 oracle
 4229 grid      -2   0  500m  29m  17m S  2.0  0.3   1009:43 oracle
 4251 grid      20   0  485m  27m  25m S  2.0  0.2 112:07.76 oracle
 4266 root      20   0 2206m  73m  25m S  2.0  0.6   2519:45 crsd.bin
    1 root      20   0 19400 1552 1228 S  0.0  0.0   0:03.23 init
    2 root      20   0     0    0    0 S  0.0  0.0   0:00.41 kthreadd
    3 root      20   0     0    0    0 S  0.0  0.0  56:25.42 ksoftirqd/0
----- output clipped -----

In the above example, I ran the top command with a single iteration, so I didn't need a delay option. The -f option was not supported on this system, so I redirected the output to a file. You can see the content of the file is the top command's screen output!

If you use more than one iteration, say 2, then two top output screens are concatenated one after another in the log file. You end up with a snapshot of all values over a given period of time for future reference. See the example below:

# top -n 2 -d 4 -b >/tmp/top_out.txt
# cat /tmp/top_out.txt
top - 03:07:39 up 278 days,  7:28,  1 user,  load average: 0.43, 0.51, 0.54
Tasks: 191 total,   1 running, 190 sleeping,   0 stopped,   0 zombie
Cpu(s): 10.8%us,  4.9%sy,  0.0%ni, 84.0%id,  0.2%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  12199956k total,  7762352k used,  4437604k free,   856212k buffers
Swap:  8388604k total,        0k used,  8388604k free,  4774904k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 4213 grid      -2   0  485m  14m  12m S  2.0  0.1   5601:55 oracle
    1 root      20   0 19400 1552 1228 S  0.0  0.0   0:03.23 init
    2 root      20   0     0    0    0 S  0.0  0.0   0:00.41 kthreadd
    3 root      20   0     0    0    0 S  0.0  0.0  56:25.58 ksoftirqd/0
    6 root      RT   0     0    0    0 S  0.0  0.0 399995:22 migration/0
    7 root      RT   0     0    0    0 S  0.0  0.0   1:17.98 watchdog/0
    8 root      RT   0     0    0    0 S  0.0  0.0 399911:47 migration/1
   10 root      20   0     0    0    0 S  0.0  0.0  31:40.41 ksoftirqd/1
   11 root      20   0     0    0    0 S  0.0  0.0  19:42.29 kworker/0:1
  
----- output clipped -----

top - 03:07:44 up 278 days,  7:28,  1 user,  load average: 0.48, 0.51, 0.55
Tasks: 193 total,   4 running, 189 sleeping,   0 stopped,   0 zombie
Cpu(s): 24.5%us, 10.1%sy,  0.0%ni, 65.1%id,  0.2%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  12199956k total,  7803692k used,  4396264k free,   856212k buffers
Swap:  8388604k total,        0k used,  8388604k free,  4774968k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 2466 root      20   0  129m  25m 1288 S  6.6  0.2   8235:08 OSWatcher.sh
15104 root      20   0  129m  24m  344 R  1.8  0.2   0:00.09 OSWatcher.sh
 4213 grid      -2   0  485m  14m  12m S  1.4  0.1   5601:55 oracle
 2994 root      RT   0  688m  84m  55m S  1.0  0.7   2998:22 osysmond.bin
 3038 grid      RT   0 1608m 116m  53m S  0.8  1.0   2379:27 ocssd.bin
 4385 root      20   0 2543m  42m  13m S  0.8  0.4   3013:44 orarootagent.bi
----- output clipped -----

You could even set up a cron job to save top output to a log file periodically, but this is not recommended: the log file piles up in no time and fills the mount point rapidly. In any case, the OS already collects in-depth utilization data, which can be viewed with the sar utility.

The -f option available in the HP-UX native top utility has a few limitations: it stores only the first 16 lines of output and supports only one iteration. The 16-line limit makes the log file quite short, showing mainly the header values (CPU, memory, etc.); most of the per-process data is lost. See the example below:

$ top -f /tmp/test
$ cat /tmp/test
System: testsrv3 Wed Dec 21 03:18:19 2016
Load averages: 0.03, 0.04, 0.04
246 processes: 168 sleeping, 78 running
Cpu states:
CPU   LOAD   USER   NICE    SYS   IDLE  BLOCK  SWAIT   INTR   SSYS
 0    0.04   0.0%   0.0%   0.2%  99.8%   0.0%   0.0%   0.0%   0.0%
14    0.03   0.0%   0.0%   0.0% 100.0%   0.0%   0.0%   0.0%   0.0%
15    0.03   0.0%   0.0%   0.2%  99.8%   0.0%   0.0%   0.0%   0.0%
16    0.04   0.0%   0.0%   0.4%  99.6%   0.0%   0.0%   0.0%   0.0%
17    0.02   0.2%   0.0%   0.4%  99.4%   0.0%   0.0%   0.0%   0.0%
18    0.04   0.0%   0.0%   0.0% 100.0%   0.0%   0.0%   0.0%   0.0%
---   ----  -----  -----  -----  -----  -----  -----  -----  -----
avg   0.03   0.0%   0.0%   0.2%  99.8%   0.0%   0.0%   0.0%   0.0%

System Page Size: 4Kbytes
Memory: 1679640K (1411160K) real, 3170312K (2643480K) virtual, 30532328K free  P
CPU TTY  PID USERNAME PRI NI   SIZE    RES STATE    TIME %WCPU  %CPU COMMAND

sar command (Part III) : Disk, Network reporting

Learn System Activity Report sar command with real-world scenario examples. Understand how to do disk, network reporting using this command.

In the first two parts of this sar command tutorial, we covered the time formats used with the command, its data files, and CPU & memory reporting. In this last part, we will look at disk and network reporting with the sar command.

Disk IO reporting

sar produces the disk (block device) report with the -d option. It normally shows the parameter values below (the ones most commonly watched for performance monitoring):

  • DEV: Block device name in dev m-n format, where m is the major and n the minor number of the block device.
  • tps: Transfers per second
  • rd_sec/s: Sectors read per second (a sector is 512 bytes)
  • wr_sec/s: Sectors written per second
  • avgrq-sz: Average size (in sectors) of the requests issued to the device
  • avgqu-sz: Average queue length of the requests issued to the device
  • await: Average time (in milliseconds) for I/O requests issued to the device to be served
  • svctm: Average service time (in milliseconds) for I/O requests
  • %util: Percentage of CPU time during which I/O requests were issued to the device
# sar -d 2 3
Linux 2.6.39-200.24.1.el6uek.x86_64 (testsrv2)         12/21/2016      _x86_64_        (4 CPU)

01:20:19 AM       DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
01:20:21 AM    dev8-0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
01:20:21 AM   dev8-64      3.00      2.00      1.00      1.00      0.00      0.50      0.50      0.15
01:20:21 AM   dev8-32      3.00      2.00      1.00      1.00      0.00      0.50      0.33      0.10
01:20:21 AM  dev252-0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
01:20:21 AM  dev252-1      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
01:20:21 AM  dev252-2      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00

In the above output, the device names are not very user friendly. To identify devices easily, the -p option is available: it prints pretty device names in the DEV column and should always be used together with the -d option.

# sar -d -p 2 3
Linux 2.6.39-200.24.1.el6uek.x86_64 (testsrv2)         12/21/2016      _x86_64_        (4 CPU)

01:20:38 AM       DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
01:20:40 AM       sda      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
01:20:40 AM       sdb      4.98      0.00     91.54     18.40      0.00      0.80      0.20      0.10
01:20:40 AM       sdc      2.99      1.99      1.00      1.00      0.00      0.67      0.67      0.20
01:20:40 AM      dm-0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
01:20:40 AM      dm-1      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
01:20:40 AM      dm-2      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00

Now see the above output, where the device names are easily identifiable; sda means disk /dev/sda, and so on.

Network utilization reporting

The -n option gives network statistics. There are 18 different keywords (such as NFS, IP, DEV, TCP, etc.) that can be used with -n to get the related parameters, and each keyword exposes roughly 8-10 parameters. So if you use the ALL keyword, the output is a huge list of parameters that is difficult to digest.

To keep it short, we will look at just one keyword here, DEV, which reports per network interface. NIC performance is what gets checked most of the time, hence this example.

# sar -n DEV 2 1
Linux 2.6.39-200.24.1.el6uek.x86_64 (textsrv2)         12/21/2016      _x86_64_        (4 CPU)

01:35:22 AM     IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s
01:35:24 AM        lo      6.00      6.00      0.29      0.29      0.00      0.00      0.00
01:35:24 AM      eth0     15.50      0.50      0.91      0.04      0.00      0.00      0.00
01:35:24 AM      eth1      6.50      4.50      0.97      0.77      0.00      0.00      0.00
01:35:24 AM      eth3      0.00      0.00      0.00      0.00      0.00      0.00      0.00

Average:        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s
Average:           lo      6.00      6.00      0.29      0.29      0.00      0.00      0.00
Average:         eth0     15.50      0.50      0.91      0.04      0.00      0.00      0.00
Average:         eth1      6.50      4.50      0.97      0.77      0.00      0.00      0.00
Average:         eth3      0.00      0.00      0.00      0.00      0.00      0.00      0.00

In the above example, I used the DEV keyword with the -n option and took a single iteration. The parameters displayed for the DEV keyword are:

  • IFACE: Interface name. You can easily spot the eth0, eth1, and loopback (lo) interfaces here.
  • rxpck/s: Packets received per second
  • txpck/s: Packets transmitted per second
  • rxkB/s: Kilobytes received per second
  • txkB/s: Kilobytes transmitted per second
  • rxcmp/s: Compressed packets received per second
  • txcmp/s: Compressed packets transmitted per second
  • rxmcst/s: Multicast packets received per second
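
Reports like this are easy to post-process in scripts. As a sketch, the Average rows from the run above can be summed with awk to get the total receive throughput (the sample is embedded so the pipeline is self-contained):

```shell
# Sum the Average rxkB/s column (field 5) across interfaces,
# skipping the header row where field 2 is "IFACE".
awk '/^Average:/ && $2 != "IFACE" { total += $5 }
     END { printf "total rxkB/s: %.2f\n", total }' <<'EOF'
Average:        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s
Average:           lo      6.00      6.00      0.29      0.29      0.00      0.00      0.00
Average:         eth0     15.50      0.50      0.91      0.04      0.00      0.00      0.00
Average:         eth1      6.50      4.50      0.97      0.77      0.00      0.00      0.00
EOF
# prints: total rxkB/s: 2.17
```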

This concludes part III of the sar command tutorial, on disk and network reporting. This was a three-part tutorial with example outputs included. Put your queries, suggestions, and feedback in the comments below, or reach us via our Contact form.

sar command (Part II) : CPU, Memory reporting

Learn System Activity Report sar command with real-world scenario examples. Understand CPU, Memory reporting using this command.

In the last post on the sar command, we saw its data file structure, how to extract data from it, and the time formats used with the command. In this post, let's see how to get CPU and memory utilization reports, from data files or in real time, using sar.

The sar command follows the format below:

# sar [ options ] [ <interval> [ <count> ] ]

We have already seen what interval and count are in the last post. Now we will look at the different options used to get utilization stats for different system resources. Since we have already covered extracting historic data from sar data files, I will use only real-time commands (i.e. without the -f option) in all the examples below.

Before we start with resource reporting, here is a quick tip about the start and end times of reports when you are extracting data from data files. The two options below can be used with the sar command (in conjunction with -f) to extract a specific time window.

  • -s hh:mm:ss: Start time of the report. sar outputs records tagged with this time or the next available time-tagged record. The default start time is 08:00:00.
  • -e hh:mm:ss: End time of the report. The default end time is 18:00:00.

CPU utilization reporting using sar

For CPU statistics, sar has the -u option. Executing sar with -u gives the utilization metrics below (the ones most commonly watched for performance monitoring):

  • %user: CPU % used by user processes
  • %nice: CPU % used by user processes running at nice priority
  • %system: CPU % used by system (kernel) processes
  • %iowait: % of time the CPU was idle while waiting on outstanding disk I/O
  • %steal: % of time the virtual CPU waited while the hypervisor serviced another virtual processor (virtualization aspect)
  • %idle: CPU % idle
# sar -u 2 3
Linux 2.6.39-200.24.1.el6uek.x86_64 (testsrv2)         12/20/2016      _x86_64_        (16 CPU)

03:34:51 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
03:34:53 AM     all     33.00      0.00      9.99     32.22      0.00     24.80
03:34:55 AM     all     37.32      0.00     10.44     32.95      0.00     19.29
03:34:57 AM     all     36.04      0.00     11.90     29.83      0.00     22.24
Average:        all     35.46      0.00     10.78     31.66      0.00     22.11

See the above example for the values of the parameters explained. The output starts with a line giving the OS kernel version, the hostname in brackets, the date, the architecture, and the total number of CPUs, followed by data at the interval (2 sec) and count (3) specified in the command. Finally, it gives the average value of all samples for each parameter. The value all in the CPU column indicates these are values averaged over all 16 CPUs at each time instance.
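
That Average row is handy for scripting: for instance, overall CPU busy time is just 100 minus %idle. A sketch over the Average line from the run above (embedded so it runs standalone):

```shell
# %idle is the last field, so busy% = 100 - $NF on the Average line.
awk '/^Average:/ { printf "busy: %.2f%%\n", 100 - $NF }' <<'EOF'
Average:        all     35.46      0.00     10.78     31.66      0.00     22.11
EOF
# prints: busy: 77.89%
```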

If you are interested in per-processor values, the -P option (per-processor reporting) can be used with a CPU number of your choice, or with ALL. With ALL, every processor's data is shown; with a number, only that CPU's data is reported.

# sar -P ALL -u 2 3
Linux 2.6.39-200.24.1.el6uek.x86_64 (testsvr2)         12/20/2016      _x86_64_        (16 CPU)

04:08:33 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
04:08:35 AM     all     34.50      0.00      6.38      7.66      0.00     51.45
04:08:35 AM       0     32.66      0.00      9.55      4.52      0.00     53.27
04:08:35 AM       1     76.24      0.00      2.48      5.45      0.00     15.84
04:08:35 AM       2     24.62      0.00     10.05      6.53      0.00     58.79
04:08:35 AM       3     38.50      0.00     14.50      5.50      0.00     41.50
04:08:35 AM       4      3.05      0.00      4.06      0.51      0.00     92.39
04:08:35 AM       5      1.99      0.00      1.49      9.45      0.00     87.06
04:08:35 AM       6     99.00      0.00      1.00      0.00      0.00      0.00
04:08:35 AM       7      1.50      0.00      1.00      0.00      0.00     97.50
04:08:35 AM       8     62.00      0.00     13.00     13.50      0.00     11.50
04:08:35 AM       9     91.96      0.00      6.53      0.00      0.00      1.51
04:08:35 AM      10     34.67      0.00     10.55     18.09      0.00     36.68
04:08:35 AM      11     57.00      0.00      4.00      4.00      0.00     35.00
04:08:35 AM      12     11.50      0.00      5.50     20.50      0.00     62.50
04:08:35 AM      13      6.47      0.00      2.49     15.42      0.00     75.62
04:08:35 AM      14      3.00      0.00      2.00      3.50      0.00     91.50
04:08:35 AM      15      7.54      0.00     15.08     15.58      0.00     61.81
----- output clipped -----

See the above example, where I passed ALL to the -P option and got each processor's utilization report. In the example below, only CPU number 2's data is extracted.

# sar -P 2 -u 2 3
Linux 2.6.39-200.24.1.el6uek.x86_64 (testsvr2)         12/20/2016      _x86_64_        (16 CPU)

04:11:11 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
04:11:13 AM       2     49.75      0.00      3.98     26.87      0.00     19.40
04:11:15 AM       2     97.50      0.00      1.50      1.00      0.00      0.00
04:11:17 AM       2     97.50      0.00      1.50      0.50      0.00      0.50
Average:          2     81.53      0.00      2.33      9.48      0.00      6.66

Another small processor statistic is power: sar can show the processor clock frequency at a given instant, which helps in estimating the power drawn by the CPU. The -m option gives this data, and it can be combined with the per-processor reporting option too.

# sar -m 2 3
Linux 2.6.39-200.24.1.el6uek.x86_64 (testsrv2)         12/20/2016      _x86_64_        (16 CPU)

04:15:39 AM     CPU       MHz
04:15:41 AM     all   1970.50
04:15:43 AM     all   1845.81
04:15:45 AM     all   1587.93
Average:        all   1801.41

# sar -P ALL 2 3
Linux 2.6.39-200.24.1.el6uek.x86_64 (testsrv2)         12/20/2016      _x86_64_        (16 CPU)

04:15:52 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
04:15:54 AM     all     15.67      0.00      7.59      7.02      0.00     69.72
04:15:54 AM       0     20.00      0.00      7.00     37.50      0.00     35.50
04:15:54 AM       1     27.86      0.00     19.40      6.97      0.00     45.77
04:15:54 AM       2     47.50      0.00     15.50      2.50      0.00     34.50
04:15:54 AM       3      2.49      0.00      1.99      1.99      0.00     93.53
04:15:54 AM       4      3.02      0.00      5.03      1.01      0.00     90.95
04:15:54 AM       5      1.00      0.00      7.00      1.00      0.00     91.00
04:15:54 AM       6      0.51      0.00      0.51      0.00      0.00     98.97
04:15:54 AM       7      1.00      0.00      0.50      0.00      0.00     98.50
04:15:54 AM       8     35.18      0.00     21.61     20.10      0.00     23.12
04:15:54 AM       9     51.24      0.00     22.89      4.98      0.00     20.90
04:15:54 AM      10     28.64      0.00      8.04     14.57      0.00     48.74
04:15:54 AM      11     12.94      0.00      5.97     11.94      0.00     69.15
04:15:54 AM      12      8.50      0.00      3.50      7.50      0.00     80.50
04:15:54 AM      13      7.04      0.00      2.51      1.51      0.00     88.94
04:15:54 AM      14      1.01      0.00      0.51      1.52      0.00     96.97
04:15:54 AM      15      1.49      0.00      0.99      0.00      0.00     97.52
----- output clipped -----

Memory utilization reporting using sar

Memory stats can be extracted with the -r option. When sar runs with -r, it presents the parameters below (the ones most commonly watched for performance monitoring):

  • kbmemfree: Free memory available, in kilobytes
  • kbmemused: Memory used (excluding kernel usage), in kilobytes
  • %memused: Percentage of memory used
  • kbbuffers: Memory used as buffers by the kernel, in kilobytes
  • kbcached: Memory used to cache data by the kernel, in kilobytes
  • kbcommit: Memory (in kilobytes) needed for the current workload (the commitment!)
  • %commit: % of memory needed for the current workload relative to total memory (RAM + swap)
# sar -r 2 3
Linux 2.6.39-200.24.1.el6uek.x86_64 (testsrv2)         12/20/2016      _x86_64_        (16 CPU)

03:42:07 AM kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit
03:42:09 AM 119133212 145423112     54.97   1263568 109109560 106882060     32.23
03:42:11 AM 119073748 145482576     54.99   1263572 109135424 107032108     32.27
03:42:13 AM 119015480 145540844     55.01   1263572 109162404 106976556     32.25
Average:    119074147 145482177     54.99   1263571 109135796 106963575     32.25

The output above shows the parameter values, in the same sections as explained before (see the CPU report example): the details line first, the average row last, and so on. Note that %commit can exceed 100%, since the kernel over-commits memory.

Paging statistics can be obtained with the -B option. The parameters in this output are not normally watched by sysadmins; this option is used mainly when in-depth troubleshooting or monitoring is required. It shows the parameters below:

  • pgpgin/s:  Number of kilobytes the system paged in from disk per second.
  • pgpgout/s: Number of kilobytes the system paged out to disk per second.
  • fault/s: Number of page faults per second.
  • majflt/s: Number of major page faults per second.
  • pgfree/s: Number of pages placed on the free list by the system per second.
  • pgscank/s: Number of pages scanned by the kswapd daemon per second.
  • pgscand/s: Number of pages scanned directly per second.
  • pgsteal/s: Number of pages the system has reclaimed from cache per second.
  • %vmeff: This is a metric of the efficiency of page reclaim.
# sar -B 2 3
Linux 2.6.39-200.24.1.el6uek.x86_64 (testsrv2)         12/21/2016      _x86_64_        (4 CPU)

12:59:41 AM  pgpgin/s pgpgout/s   fault/s  majflt/s  pgfree/s pgscank/s pgscand/s pgsteal/s    %vmeff
12:59:43 AM      3.05     40.10 120411.68      0.00  99052.28      0.00      0.00      0.00      0.00
12:59:45 AM      2.99      7.46  42649.75      0.00  37486.57      0.00      0.00      0.00      0.00
12:59:47 AM      3.00     47.50     43.00      0.00     80.50      0.00      0.00      0.00      0.00
Average:         3.01     31.61  54017.22      0.00  45257.86      0.00      0.00      0.00      0.00
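%vmeff is the ratio of pages reclaimed to pages scanned; a small awk sketch with hypothetical sample counts (the sample output above scanned zero pages, hence 0.00 there):

```shell
# %vmeff = pgsteal / (pgscank + pgscand) * 100, guarding against
# division by zero when no pages were scanned (as in the sample output above).
pgscank=30; pgscand=10; pgsteal=36   # hypothetical per-second values
awk -v k="$pgscank" -v d="$pgscand" -v s="$pgsteal" \
    'BEGIN { if (k + d > 0) printf "%.2f\n", s / (k + d) * 100; else print "0.00" }'
# prints 90.00 for these sample values
```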

Swap statistics can be obtained with the -S option. Swap is another aspect of memory, hence monitoring its utilization is as important as monitoring RAM. It shows the below parameters:

  • kbswpfree: Free swap in kilobytes
  • kbswpused: Used swap in kilobytes
  • %swpused: % of swap used
  • kbswpcad: Amount of cached swap memory in kilobytes
  • %swpcad: % of cached swap memory in relation to used swap.
# sar -S 2 3
Linux 2.6.39-200.24.1.el6uek.x86_64 (testsrv2)         12/21/2016      _x86_64_        (4 CPU)

01:01:45 AM kbswpfree kbswpused  %swpused  kbswpcad   %swpcad
01:01:47 AM   8388604         0      0.00         0      0.00
01:01:49 AM   8388604         0      0.00         0      0.00
01:01:51 AM   8388604         0      0.00         0      0.00
Average:      8388604         0      0.00         0      0.00

In the above example, you can see a total of 8GB of swap available on the server, none of which is used. Swap gets used only when physical memory is close to fully utilized.

This concludes the second part of the sar tutorial. We will be seeing network-related reporting in the next part.

sar command (Part I): All you need to know with examples

Learn System Activity Report sar command with real-world scenario examples. Understand the command’s log files, execution, and different usage.

SAR! System Activity Report! The sar command is, after top, the most used command to check system performance or utilization. From the man page: ‘The sar command writes to standard output the contents of selected cumulative activity counters in the operating system. The accounting system, based on the values in the count and interval parameters, writes information the specified number of times spaced at the specified intervals in seconds.’ It is one of the best performance monitoring tools available to any sysadmin.


Command log file management:

sar keeps collecting system resource utilization and stores it in binary files. These files are called datafiles and are located at /var/log/sa/saXX, where XX is the day of month in dd format. So this could be one of the locations to check when you are troubleshooting file system utilization.
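Since the datafile names carry the day of month, today's file path can be derived with date; a small sketch (assumes the sysstat cron jobs have been populating /var/log/sa):

```shell
# Build the path of today's sar datafile: /var/log/sa/saDD
today_file="/var/log/sa/sa$(date +%d)"
echo "$today_file"
# e.g. sar -u -f "$today_file" would replay today's CPU data
```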

# ll /var/log/sa
total 29024
-rw-r--r-- 1 root root 494100 Dec  1 23:50 sa01
-rw-r--r-- 1 root root 494100 Dec  2 23:50 sa02
-rw-r--r-- 1 root root 494100 Dec  3 23:50 sa03
-rw-r--r-- 1 root root 494100 Dec  4 23:50 sa04
-rw-r--r-- 1 root root 494100 Dec  5 23:50 sa05
-rw-r--r-- 1 root root 494100 Dec  6 23:50 sa06
-rw-r--r-- 1 root root 494100 Dec  7 23:50 sa07

----- output clipped -----

Log files are binary, hence they can be read only by sar itself using the -f option. A normal sar command shows data in real time when executed. If you need to check historic data, you need to use the -f option and provide the path of the particular datafile.

# sar -u 2 3
Linux 2.6.39-200.24.1.el6uek.x86_64 (testsvr1)         12/19/2016      _x86_64_        (4 CPU)

11:44:29 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
11:44:31 AM     all     25.37      0.00     10.12      0.00      0.00     64.50
11:44:33 AM     all     25.41      0.00     10.39      0.13      0.00     64.08
11:44:35 AM     all     27.84      0.00     11.36      0.12      0.00     60.67
Average:        all     26.21      0.00     10.62      0.08      0.00     63.08

In the above example, the command runs for 3 iterations (we will see what that means later in this post) of 2 seconds each and shows real-time output. Let’s see the -f option:

# sar -u 2 3 -f /var/log/sa/sa15
Linux 2.6.39-200.24.1.el6uek.x86_64 (testsvr1)         12/15/2016      _x86_64_        (4 CPU)

12:00:01 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
12:10:01 AM     all     10.24      0.00      5.18      0.17      0.00     84.41
12:20:01 AM     all     11.55      0.00      5.02      0.19      0.00     83.24
12:30:01 AM     all     10.79      0.00      4.79      0.17      0.00     84.25
Average:        all     10.86      0.00      5.00      0.17      0.00     83.97

In the above example, we ran the sar command against the datafile /var/log/sa/sa15. Hence data is read from an older/historic datafile and is not real-time. The file’s first entry is always treated as the first iteration, and further data is displayed according to the command arguments. That is why the first entry shown is from 12 AM.

Another beauty of this command for log management is that you can save real-time command output in a log file of your choice. Say you need to share monitoring output for a specific time window: you can save the output to a log file and share that, rather than the complete day’s datafile. Use the -o option along with a file path of your choice.

# sar -u 2 3 -o /tmp/logfile
Linux 2.6.39-200.24.1.el6uek.x86_64 (testsvr1)         12/19/2016      _x86_64_        (4 CPU)

11:51:42 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
11:51:44 AM     all     27.75      0.00      9.88      0.12      0.00     62.25
11:51:46 AM     all     26.00      0.00      9.88      0.12      0.00     64.00
11:51:48 AM     all     25.53      0.00     10.26      0.00      0.00     64.21
Average:        all     26.43      0.00     10.00      0.08      0.00     63.48
# ls -lrt /tmp/logfile
-rw-r--r-- 1 root root 63672 Dec 19 11:51 /tmp/logfile

In the above example, you can see the output displayed on the terminal as well as written to the file given in the command options. Note that this file is also a binary file.
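To read such a captured file back later, the same -f option applies; a guarded sketch (the /tmp/logfile path is the one from the example above):

```shell
# Replay the binary file captured with -o; -f reads it back.
# Guarded for systems without sysstat or without the captured file.
if command -v sar >/dev/null 2>&1 && [ -r /tmp/logfile ]; then
    sar -u -f /tmp/logfile
else
    echo "sar or /tmp/logfile not available"
fi
```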

Command Intervals and Iterations :

This command takes two arguments which define the time factors of the output.

Interval is the time in seconds between two output samples; normally 2, 5, or 10 seconds is chosen. Iteration, or count, is the number of samples to be taken, each separated by the defined interval. So sar 2 5 means interval 2 and count 5, i.e. take 5 samples spaced 2 seconds apart. If the command is fired at 12:00:00, the output will include samples for 12:00:02, 12:00:04, and so on up to 12:00:10. Check any example above and you will see how it works.
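The arithmetic is simple enough to sketch in shell; for sar 2 5 the total sampling window is:

```shell
# Total sampling window = interval * count, here for 'sar 2 5'
interval=2; count=5
echo "total window: $((interval * count)) seconds"
# prints: total window: 10 seconds
```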

If the interval parameter is set to zero, the sar command displays the average statistics for the time since the system was started. If the interval parameter is specified without a count, then reports are generated continuously, as shown below.

# sar -u 2
Linux 2.6.39-200.24.1.el6uek.x86_64 (oratest02)         12/19/2016      _x86_64_        (4 CPU)

12:09:28 PM     CPU     %user     %nice   %system   %iowait    %steal     %idle
12:09:30 PM     all      0.75      0.00      0.50      0.25      0.00     98.50
12:09:32 PM     all      0.88      0.00      0.38      0.13      0.00     98.62
12:09:34 PM     all      1.12      0.00      1.75      0.25      0.00     96.88
12:09:36 PM     all      2.38      0.00      1.38      0.12      0.00     96.12
12:09:38 PM     all     14.79      0.00      7.39      0.50      0.00     77.32
------- continuous reports being generated, output clipped -----

We will see useful monitoring examples of this command in the next post.

cut command and its examples

Learn how to use the cut command in various scenarios and in scripting with lots of examples. Understand how to extract selective data using cut command.

Everybody in a Linux or Unix environment is familiar with the grep command, which is used for pattern finding/filtering, i.e. a kind of selective data extraction from source data. cut is another command which is useful for selective data extraction.

The cut command extracts a portion of data from each line of a given file or supplied data. The syntax of the cut command is:

# cut [options] [file]

Below different options can be used with cut command :

-c To select only this number of characters.

Roughly, you can say it is the column number to be extracted. See the below example:

# cat test
sd
teh
dxff
cq2w31q5
# cut -c 4 test


f
w

Here, cut is instructed to select only the 4th character of each line, which is what the output shows. Lines having fewer than 4 characters are shown blank in the cut output!

This option is useful in scripting when there is a need for single-column extraction from supplied data.
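For instance, a script might grab just the first character of each line with -c 1; a sketch on hypothetical input:

```shell
# Extract only the first character of each input line.
printf 'yes\nno\nmaybe\n' | cut -c 1
# prints:
# y
# n
# m
```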

-b To select only this number of bytes.

This option tells the command to select only the given byte position from each line of data and show it in the output. In most cases its output is the same as that of the -c option, since each character is one byte in a plain ASCII text file.

-d Use specified delimiter instead of TAB

Normally cut uses TAB as the delimiter while filtering data. If your file has a different delimiter, like a comma in a CSV file or a colon : in /etc/passwd, you can specify that delimiter to the cut command and it will process the data accordingly. This option should always be used with the -b, -c, or -f options; using -d alone results in the below error:

# cut -d a test
cut: you must specify a list of bytes, characters, or fields
Try `cut --help' for more information.

-f To select only specified number of fields.

This option specifies which fields to extract from the data. If the delimiter (set with -d, or the default TAB) does not appear in a line, the whole line is printed. For example, -f 2 searches each line for the delimiter; where it is found, the 2nd field of that line is printed, and lines without the delimiter are printed whole.

# cat test
this is test file
raw data
asdgfghtr
data
# cut -f1 test
this is test file
raw data
asdgfghtr
# cut -d" " -f2 test
is
data
asdgfghtr
data

Field numbering is as below:

Field one is the text before the first occurrence of the delimiter
Field two is the text between the first and second occurrences of the delimiter
Field three is the text between the second and third occurrences of the delimiter

Observe the above example, where all lines are printed as-is when the delimiter is not found. When the delimiter is defined as a single space ” ”, field two is printed according to the numbering explained above.
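This numbering can be verified on a tiny inline example (hypothetical data, using : as the delimiter):

```shell
# Field numbering demo: field 1 is before the first ':',
# field 2 between the first and second, and so on.
echo "a:b:c" | cut -d: -f1   # prints a
echo "a:b:c" | cut -d: -f2   # prints b
echo "a:b:c" | cut -d: -f3   # prints c
```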

-s To print only lines with delimiters

A missing delimiter causes cut to print the whole line as-is, which may pollute the desired output. This is dangerous when the command is used in scripts. Hence the -s option is available, which stops cut from printing lines that do not contain the delimiter.

# cut -f1 -s test

Using the same test file from the above example, when -s is specified there is no output. This is because the default delimiter, TAB, does not exist anywhere in the file.
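With a delimiter that does occur in some lines, -s keeps only those lines; a sketch recreating the sample file's content inline:

```shell
# -s suppresses lines that lack the delimiter (here, a space),
# so only the two lines containing spaces produce output.
printf 'this is test file\nraw data\nasdgfghtr\ndata\n' | cut -d" " -f1 -s
# prints:
# this
# raw
```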

Numbers to be used with -b, -c and -f options:

In all the above examples we supplied single integers like 2 or 4 to the -b, -c and -f options. But the below forms can also be used:

  • x: A single number, as used in all examples above
  • x-: From the xth byte, character, or field till the end of line
  • x-y: From the xth byte, character, or field till the yth
  • -y: From the first byte, character, or field till the yth
# cut -d" " -f2- test
is test file
data
asdgfghtr
data
# cut -d" " -f2-3 test
is test
data
asdgfghtr
data
# cut -d" " -f-1 test
this
raw
asdgfghtr
data

Few examples of cut command :

To get a list of users from the /etc/passwd file, we use : as the delimiter and cut out the first field.

# cut -d: -f1 /etc/passwd
root
bin
daemon
adm
lp
------ output clipped -----

cut can also be fed data from a pipe. In that case the last [file] parameter shouldn’t be defined; the command reads its input from the pipe and processes the data accordingly. For example, here we grep for users with uid 0 and then get their usernames using cut:

# cat /etc/passwd |grep :0: | cut -d: -f1
root
sync
shutdown
halt
operator

Getting the user id and group id from the /etc/passwd file along with the usernames:

# cat /etc/passwd |cut -d: -f1-4
root:x:0:0
bin:x:1:1
daemon:x:2:2
adm:x:3:4
lp:x:4:7
sync:x:5:0
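Building on the same idea, cut output combines nicely with sort and uniq for quick reports; a sketch counting login shells (shown on inline sample data rather than a live /etc/passwd):

```shell
# Count how many accounts use each login shell (field 7 of passwd data).
# On a real system, feed /etc/passwd directly instead of the printf sample.
printf 'root:x:0:0:root:/root:/bin/bash\nbin:x:1:1:bin:/bin:/sbin/nologin\ndaemon:x:2:2:daemon:/sbin:/sbin/nologin\n' \
  | cut -d: -f7 | sort | uniq -c | sort -rn
# shows /sbin/nologin counted twice and /bin/bash once
```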

File encryption / password protect file in HPUX

Learn how to password protect files in HPUX. This is helpful to encrypt some public readable files using a password and decrypt them whenever needed.

It’s pretty obvious that you can control file access using permissions, but sometimes you may want to protect a file lying in a public directory with a password of your choice. Or sometimes you may want even root to be unable to read your files 😉

In that case, you can use the crypt command to encrypt your file with a password of your choice. This command is available in HPUX and Solaris; I couldn’t find it in Linux though. crypt is used to encrypt or decrypt files: you encrypt your file with a key of your choice, and whenever you want to read it back, you decrypt it by supplying the password/key you chose at encryption time.
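On Linux, where crypt is typically absent, a similar password-based protection can be achieved with openssl enc; note this is a substitute tool with a different (and much stronger) cipher, not the crypt algorithm itself:

```shell
# Password-protect a file with openssl (AES-256-CBC, PBKDF2 key derivation).
# 'secret' is a placeholder passphrase for illustration only.
echo "This is test file for crypt." > myfile.txt
openssl enc -aes-256-cbc -pbkdf2 -salt -in myfile.txt -out myfile.enc -pass pass:secret
# Decrypt it back with -d, supplying the same passphrase:
openssl enc -d -aes-256-cbc -pbkdf2 -in myfile.enc -out myfile.dec -pass pass:secret
cat myfile.dec
```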

Locking / encrypting file with key

Let’s take a sample file myfile.txt for encryption. You need to supply this file as input to the crypt command and define the output file (see the example below).

# cat myfile.txt
This is test file for crypt.

# crypt < myfile.txt > myfile.crypt
Enter key:

# cat myfile.crypt
3▒▒▒x▒▒X▒n▒d▒6▒▒=▒▒q▒j

Now, crypt will ask you for a key. It’s a password of your choice. Note that it won’t ask you to retype the key. Once the command executes, you can see the new output file created (with the name given in the command). This file is encrypted and can’t be read using cat, more, etc.!

That’s it! Your file is encrypted. Now you can safely delete the original file myfile.txt and keep only the encrypted copy on the machine.

Unlocking / decrypting file with key

Now, to retrieve the file content, i.e. to decrypt the file, you run the same command with the input and output file names exchanging positions: the encrypted file becomes the input, and the original filename becomes the output.

# rm myfile.txt
# crypt < myfile.crypt > myfile.txt
Enter key:
# ll myfile.txt
-rw-------   1 root       users           29 Dec 12 11:51 myfile.txt
# cat myfile.txt
This is test file for crypt.

crypt examines the input file and recognizes it as encrypted, so it uses the key supplied by the user to decrypt it into the output file specified in the command. You get your file back as it was!

Amazing “who” command

Learn ‘who’ command in Linux and related files. Understand its usage, different options, and scenarios in which command can be useful. 

If you are in Linux/Unix administration, you have probably used the who command many times, mostly for checking the list of users currently logged in to the system. But there is much more this command can do which we overlook. Let’s see the power of the who command.

‘who’ command is capable of doing the following :

  1. Provide information about logged-in users
  2. List dead processes
  3. Shows last system boot time
  4. List system login processes
  5. List processes spawned by init
  6. Shows runlevel info
  7. Track last system clock change
  8. User’s message status

Let’s see all of the above one by one and understand where the command fetches this information from.

1. Provide information about logged-in users

This is the most famous use of the who command. When run without any arguments, it shows output similar to the one below:

# who
root     pts/1        2016-12-09 10:48 (10.10.42.56)
user4    pts/2        2016-12-09 10:53 (10.10.42.22)

The output consists of 5 columns where,

  • The first field is the username of the user who is logged in currently
  • The second field is the terminal from where the user is logged in
  • Third and fourth fields are date and time of login
  • The fifth field is the IP address/hostname from where the user logged in.

who reads this information from the /var/run/utmp file and presents it in the formatted output above.
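Since who's output is columnar, it pipes cleanly into awk for quick summaries; a sketch counting sessions per user (run on an inline sample modeled on the output above):

```shell
# Count logged-in sessions per user from who-style output:
# tally the first column (username), then print count and name.
printf 'root pts/1 2016-12-09 10:48 (10.10.42.56)\nuser4 pts/2 2016-12-09 10:53 (10.10.42.22)\nroot pts/3 2016-12-09 11:02 (10.10.42.57)\n' \
  | awk '{ count[$1]++ } END { for (u in count) print count[u], u }' | sort -rn
# prints:
# 2 root
# 1 user4
```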

Read also: How to check bad logins in HPUX

2. List dead processes

This is helpful during troubleshooting of performance issues or system cleanup. Some processes go dead on a system, i.e. they are not properly terminated or closed after their execution completes. These processes can be seen using who -d.

# who -d
         pts/1        2016-12-09 11:46             27696 id=ts/1  term=0 exit=0
         pts/2        2016-12-03 00:34             23816 id=ts/2  term=0 exit=0
         pts/3        2016-12-03 00:34             23856 id=ts/3  term=0 exit=0

The output shows the terminal from which the process was started in the first field, followed by the date, time, process id, and other details.

3. Shows last system boot time

If you want to quickly check when the system was booted, who -b is your way out. There is no need to search through log files or to back-calculate the date from system uptime: one short command presents you with the last boot time.

# who -b
         system boot  2016-03-17 19:39

4. List system login processes

These are the currently active login processes, i.e. getty, on the system. Using who -l you will get details of the login processes. This information can also be traced/verified in ps -ef output.

# who -l
LOGIN    tty3         2016-03-17 19:39              2502 id=3
LOGIN    tty6         2016-03-17 19:39              2522 id=6
LOGIN    tty5         2016-03-17 19:39              2515 id=5
LOGIN    tty2         2016-03-17 19:39              2497 id=2
LOGIN    tty4         2016-03-17 19:39              2510 id=4

Here,

  • The first field is the process type
  • The second field denotes terminal used
  • Third and fourth are the date and time of spawn
  • The fifth field is process id i.e. PID
  • Sixth is identification/sequence number

The above processes can be seen in ps -ef filtered output as well.

# ps -ef |grep -i getty
root      2497     1  0 Mar17 tty2     00:00:00 /sbin/mingetty /dev/tty2
root      2502     1  0 Mar17 tty3     00:00:00 /sbin/mingetty /dev/tty3
root      2510     1  0 Mar17 tty4     00:00:00 /sbin/mingetty /dev/tty4
root      2515     1  0 Mar17 tty5     00:00:00 /sbin/mingetty /dev/tty5
root      2522     1  0 Mar17 tty6     00:00:00 /sbin/mingetty /dev/tty6

5. List processes spawned by init

Whenever the system finishes booting, the kernel loads services/processes with the help of the init program. If there are any active processes on the server that were spawned by init, they can be viewed using who -p. My current server has no such processes, so no output can be shared here.

6. Shows runlevel info

Another famous use of who is the -r argument: to check the current run level of the system we use who -r.

# who -r
         run-level 5  2016-03-17 19:39

The output shows the current run level number and the date and time when the system entered this run level. On HPUX this command also shows the previous run level the system was in before entering the current one.

Read also: Run levels in HPUX

7. Track last system clock change

Normally this option, who -t, is not very useful nowadays: systems run NTP (Network Time Protocol), time syncs with an NTP server, and there is no manual intervention in system time. who -t shows the details of the last system clock change, if one was done manually.

8. User’s message status

Linux has a native messaging system with which one can send messages to a logged-in user’s terminal. who -T gives you each user’s message status, i.e. whether the messaging service is enabled or disabled for that user. A user may opt out of this service from his terminal using the mesg n command, which prevents any message from being displayed on his terminal.

By observing users’ message statuses, one can determine which users won’t receive a broadcast message when it is sent, and hence which users may not be aware of events announced through broadcast messages. System reboots and shutdowns, in particular, send broadcast messages to all logged-in users; a user who doesn’t see one may lose work during the downtime.

# who -T
user3      + pts/0        2016-12-09 11:42 (10.10.49.12)
testuser2  - pts/1        2016-12-09 12:38 (10.10.49.12)

In the above output, I logged in as testuser2 and opted out of the messaging service with mesg n. You can see the - sign against testuser2, while user3 shows +, i.e. he has messaging enabled and will receive messages on the terminal.