Category Archives: Commands

Record Linux session using the script command

Learn how to record Linux sessions using the script command. This helps keep a record of your commands and the output printed on the terminal at the time of execution.

Record your Linux session using ‘script’ command

In our last article we saw how to save PuTTY session output in a file on your desktop. In this article we will learn how to record session output in a file on the Linux server itself. And yes, it really is a recording: it can be replayed at any time, showing commands being run and output being printed just as they appeared in real time.

There are two utilities for this task: the script command records the session, and scriptreplay replays a recorded session from the file. In this article we will see Linux session recording using the script command. The scriptreplay command is covered in this article.
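As a quick preview, most distributions let you capture timing information alongside the session log and feed both files to scriptreplay. A minimal sketch (the file names here are just examples):

# script -t 2> timing.log myoutputs.txt
... work in the recorded session, then exit ...
# scriptreplay timing.log myoutputs.txt

The -t switch makes script write timing data to stderr, which we redirect to timing.log; scriptreplay then plays the session back at its original pace.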

script command takes a filename as an argument, without any switch. It writes your commands along with their outputs into this file. Remember, only commands run after the script command are recorded, until you stop recording by hitting the ctrl+d key combination or typing exit. Make sure you have proper permissions on the log file you are asking script to write into. Also make sure you have enough space for the recording, since if your outputs are lengthy the logfile will grow large.

Let’s start with an example. Here we are recording output to the myoutputs.txt file. The command will be script myoutputs.txt:

[root@kerneltalks ~]# script myoutputs.txt
Script started, file is myoutputs.txt
[root@kerneltalks ~]# date
Thu Jul 27 01:31:39 EDT 2017
[root@kerneltalks ~]# hostname
kerneltalks
[root@kerneltalks ~]# w
 01:31:46 up 1 min,  2 users,  load average: 0.49, 0.21, 0.08
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
ec2-user pts/0    59.184.148.123   01:31    2.00s  0.00s  0.19s sshd: ec2-user [priv]
root     pts/1                     01:31    2.00s  0.01s  0.01s w
[root@kerneltalks ~]# exit
Script done, file is myoutputs.txt

Now, observe the above output.

  1. After the script command executes, it notifies you (Script started, file is myoutputs.txt) that it has started recording your session, naming the log file it records to.
  2. Then I punched in a few test commands like date, hostname, and w.
  3. I stopped recording by hitting the ctrl+d key combination (the exit command works too). script stopped and informed me (Script done, file is myoutputs.txt) that it has stopped recording.

Let’s look at the logfile myoutputs.txt:

# cat myoutputs.txt
     Script started on Thu 27 Jul 2017 01:31:35 AM EDT
     [root@kerneltalks ~]# date
     Thu Jul 27 01:31:39 EDT 2017
     [root@kerneltalks ~]# hostname
     kerneltalks
     [root@kerneltalks ~]# w
      01:31:46 up 1 min,  2 users,  load average: 0.49, 0.21, 0.08
     USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
     ec2-user pts/0    59.184.148.123   01:31    2.00s  0.00s  0.19s sshd: ec2-user [priv]
     root     pts/1                     01:31    2.00s  0.01s  0.01s w
     [root@kerneltalks ~]# exit

     Script done on Thu 27 Jul 2017 01:31:51 AM EDT

Voila! All commands (including the shell prompt) and their outputs are there in the logfile. Notice that it also added timestamps at the top and bottom of the logfile, indicating when the recording started and stopped.

How to append a script command recording to the same logfile

Every time the script command runs, it truncates its logfile and then writes new content to it. If you want to append your recording to a previously recorded file, you need to use the -a (append) switch. This keeps the existing data in the log file as it is and adds the new recording at the bottom of the file.

[root@kerneltalks ~]# script -a myoutputs.txt
Script started, file is myoutputs.txt
[root@kerneltalks ~]# echo " I love kerneltalks.com"
 I love kerneltalks.com
[root@kerneltalks ~]# exit
exit
Script done, file is myoutputs.txt

I used the -a switch in the above example to append the new recording to the same file we created earlier, and tested one echo command. Let’s check the logfile.

[root@kerneltalks ~]# cat myoutputs.txt
     Script started on Thu 27 Jul 2017 01:31:35 AM EDT
     [root@kerneltalks ~]# date
     Thu Jul 27 01:31:39 EDT 2017
     [root@kerneltalks ~]# hostname
     kerneltalks
     [root@kerneltalks ~]# w
      01:31:46 up 1 min,  2 users,  load average: 0.49, 0.21, 0.08
     USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
     ec2-user pts/0    59.184.148.123   01:31    2.00s  0.00s  0.19s sshd: ec2-user [priv]
     root     pts/1                     01:31    2.00s  0.01s  0.01s w
     [root@kerneltalks ~]# exit

     Script done on Thu 27 Jul 2017 01:31:51 AM EDT
     Script started on Thu 27 Jul 2017 01:41:13 AM EDT
     [root@kerneltalks ~]# echo " I love kerneltalks.com"
      I love kerneltalks.com
     [root@kerneltalks ~]# exit
     exit

     Script done on Thu 27 Jul 2017 01:41:24 AM EDT

You can see there are two recordings in the same log file. You can distinguish them by their start and stop times.

Conclusion:

The script command can be used to log commands and their outputs on the Linux server itself. It can even be invoked from user profiles to monitor user activity on a server, provided the logfile mount point has enough free space to accommodate the data generated by users.
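For example, a rough sketch of such a profile snippet is below. The log directory, file naming, and guard variable are assumptions; adjust them for your environment.

# Record every login session with script; the exported guard variable
# prevents the shell spawned by script from re-triggering the recording.
if [ -z "$SESSION_RECORDED" ]; then
    export SESSION_RECORDED=1
    script -aq /var/log/sessions/$(whoami)-$(date +%F-%H%M%S).log
fi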

Learn Rsync command with examples

The rsync command is used to copy and sync files and directories, locally or remotely. Learn rsync with different examples and options.

Rsync command! Data backup with copy and sync!

Rsync, i.e. Remote Sync, is the command used for copying and synchronizing files or directories locally or remotely on Linux and Unix based systems. In this article we will learn how to use the rsync command, its different switches, and examples.

Since rsync not only copies but also synchronizes files and directories, it is ideal for mirroring data between two machines and for data backups. Rsync is more efficient than ordinary copy commands, since it consumes bandwidth and time only for the first data copy. In subsequent sync cycles it transfers only the data that changed, requiring less bandwidth and time while still keeping the destination up to date. Hence it is best for remote data mirrors and backups.

Let’s see the rsync command with different uses and their examples.

The rsync command should be available on your system in a plain vanilla installation. If not, install the ‘rsync’ package. Without any switch, you can run the command with just a source and destination path. The path can be local or remote.

# rsync testfile /tmp/

This is a very basic example of the rsync command. It shows no output and copies/syncs data from the source file ‘testfile‘ to a destination file under the /tmp directory.

Rsync command with verbose mode

The -v switch can be used for verbose mode. Information like bytes sent, bytes received, transfer speed, and the total size of the source is displayed on the console when the command runs.

# rsync -v testfile /tmp/directory9/
created directory /tmp/directory9
testfile

sent 79 bytes  received 31 bytes  220.00 bytes/sec
total size is 7  speedup is 0.06

If you watch closely, you’ll see rsync creates the destination directory if it does not exist!

Rsync command to copy directories

The -r (recursive) switch copies directories recursively to the destination.

# rsync -rv testdir /tmp/
sending incremental file list
testdir/
testdir/test2
testdir/test4
testdir/test1/
testdir/test3/

sent 194 bytes  received 62 bytes  512.00 bytes/sec
total size is 0  speedup is 0.00

In the above example we used verbose mode as well. Directory testdir has two files named test2 and test4, and two directories named test1 and test3. All four entities are copied over to the destination, as you can see from their listing in the output.

Preserve permissions and ownership during rsync copy

The -a (archive mode) switch instructs rsync to preserve file and directory permissions and ownership at the destination. It also preserves timestamps, copies symbolic links as links, and implies recursive copying.

# ll test1
-rw-r--r--. 1 root root 0 Jul 23 22:40 test1

# rsync -av test1 /tmp
sending incremental file list
test1

sent 69 bytes  received 31 bytes  200.00 bytes/sec
total size is 0  speedup is 0.00

# ll /tmp/test1
-rw-r--r--. 1 root root 0 Jul 23 22:40 /tmp/test1

You can observe that permissions and ownership are preserved from source to destination after using the -a switch with rsync.

Rsync with compression

One of the important factors while copying data is bandwidth. Bandwidth utilization can be reduced by compressing data: a compressed transfer is faster and takes less time and bandwidth. Rsync can compress data during transfer using the -z switch.

# rsync -azv test1 /tmp
sending incremental file list
test1

sent 66 bytes  received 31 bytes  194.00 bytes/sec
total size is 0  speedup is 0.00

If you observe, we synced the same file test1 from the previous example (obviously after removing /tmp/test1 first). Without compression it sent 69 bytes; with compression, 66 bytes! Granted, 3 bytes is no big difference, but these are tiny test files. Compression makes a big difference with large files.

Rsync to copy from local to remote

For copying a file or directory from local to remote, specify the destination as remote using the notation username@hostname_or_IP:/path/

# rsync -azv test1 root@10.10.1.23:/tmp/
root@10.10.1.23's password:
sending incremental file list
test1

sent 66 bytes  received 31 bytes  21323.00 bytes/sec
total size is 0  speedup is 0.00

Rsync to copy from remote to local

It works like the above example; you just have to swap source and destination.

# rsync -azv root@10.10.1.23:/tmp/test1 .
root@10.10.1.23's password:
sending incremental file list
test1

sent 66 bytes  received 31 bytes  21123.00 bytes/sec
total size is 0  speedup is 0.00

I am transferring an empty file here, hence you see the total size as zero.

Limiting bandwidth use in Rsync

Another important aspect of network file copy is limiting the transfer bandwidth so that it won’t consume the entire available bandwidth, choke the network, and degrade performance for other services and devices using it. Rsync can limit bandwidth with the --bwlimit switch, which takes a maximum transfer rate in kilobytes per second.

# rsync -razv --bwlimit=100 httpd-2.4.25 /tmp/
sending incremental file list
httpd-2.4.25/srclib/apr/build/
httpd-2.4.25/srclib/apr/build/ltmain.sh
-----output clipped------

sent 1214030 bytes  received 10136 bytes  97933.28 bytes/sec
total size is 83112573  speedup is 67.89

You can see the speed was limited to about 97 KB/s (97933 bytes/sec), in line with the 100 KB/s cap we specified with the --bwlimit switch.

Show progress while transferring in Rsync

If you are transferring files or directories of large size, it’s good to see the progress of each transfer so that you are assured the transfer is in progress and not hung. Specify the --progress switch so that transfer details are shown for each file in the output.

# rsync -razv --progress httpd-2.4.25 /tmp/
sending incremental file list
httpd-2.4.25/
httpd-2.4.25/.deps
           0 100%    0.00kB/s    0:00:00 (xfer#1, to-check=1312/1314)
httpd-2.4.25/.gdbinit
       10566 100%    0.00kB/s    0:00:00 (xfer#2, to-check=1311/1314)
httpd-2.4.25/ABOUT_APACHE
       13496 100%    2.57MB/s    0:00:00 (xfer#3, to-check=1310/1314)
httpd-2.4.25/Apache-apr2.dsw
       65980 100%   10.49MB/s    0:00:00 (xfer#4, to-check=1309/1314)

----- output clipped -----

In the above output you can see that after each file is copied, the transfer speed and duration details are flashed on the screen.

Apart from the above, there is a whole list of switches that can be used with the rsync command. All of them can be found in the rsync man page on your terminal. I am listing a few important ones below for your quick reference.

  • -h: Show output numbers in a human-readable format
  • --include: Include files; wildcards can be used
  • --exclude: Exclude files from the copy; wildcards can be used
  • -q: Quiet mode; suppress non-error messages
  • --backup: Make backups of files at the destination
  • -l: Copy symlinks as symlinks
  • -H: Preserve hard links
  • -t: Preserve modification times
  • --dry-run: Perform a dry run; no changes are actually made
  • --delete-after: Delete extraneous files at the destination after the transfer
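For instance, several of these switches can be combined in one command; the paths, pattern, and host below are placeholders:

# rsync -azh --exclude='*.tmp' --delete-after --dry-run /data/ root@10.10.1.23:/backup/

Running with --dry-run first lets you verify what would be transferred (and deleted at the destination) before repeating the command for real.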

Let us know in the comments below if you would like examples of any specific switch added; I will add them accordingly.

Beginner’s guide: 4 Linux group management commands

Learn to manage groups in Linux with these group management commands. The article includes how to create, modify, delete, and administer groups.

Group management in Linux

Groups on a Linux system are bunches of users, created for easy access and permission management. One user can be a member of one or many groups: a user has exactly one primary group and zero or more secondary groups. In our other article we have seen user management commands in Linux/Unix. In this article we will discuss group management. There are mainly 4 commands used to manage user groups on Linux systems :

  1. groupadd
  2. groupmod
  3. groupdel
  4. gpasswd

Let’s check all these commands and the fields they manage in the /etc/group file.

groupadd command

As the name suggests, it is used to create new groups on the Linux system. groupadd command needs a group name as an argument.

# groupadd sysadmins

# cat /etc/group
sysadmins:x:502:

This command creates a group named sysadmins. The newly created group can be verified in the /etc/group file. Study the fields of the /etc/group file here.

Several common switches that work with groupadd are :

  • -g : Specify GID of your choice
  • -o : Create a group with non-unique GID
  • -r : Create a system group. (GID will be taken from system group GID range)
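For example, to create a group with a GID of your choice (the group name and GID here are arbitrary):

# groupadd -g 5000 developers
# grep developers /etc/group
developers:x:5000: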

groupmod command

If you want to edit parameters like the name, GID, or uniqueness of a group that already exists on the system, you can modify it using groupmod. Feed the switches below, with their desired values, to the command:

  • -g : New GID
  • -o : Make it non-unique
  • -n : New name

# groupmod -n newsysadmins sysadmins
# cat /etc/group |grep sys
newsysadmins:x:502:

# groupmod -g 9999 sysadmins
# cat /etc/group 
sysadmins:x:9999:

# groupmod -o -g 3 sysadmins
# cat /etc/group |grep sys
sys:x:3:bin,adm
sysadmins:x:3:


Observe the above outputs: we changed the name of the group, then its GID (renaming it back to sysadmins in between), and lastly we assigned the already-existing GID 3 (non-unique) to our group.

groupdel command

That’s the command where groups end their life! Yes, group deletion is performed using this command. It is pretty simple: just supply your group name and it will be deleted from the system.

# groupdel sysadmins

gpasswd command

This command is used to administer groups. Administering a group includes :

  1. Adding/removing users to/from group
  2. Setting and removing group password
  3. Making a user administrator/member of a group

Adding and removing a user to/from a group is done with the -a and -d switches, followed by the user name and lastly the group name. Check the below examples :

# gpasswd -a shri sysadmins
Adding user shri to group sysadmins

# cat /etc/group | grep sysadmin
sysadmins:x:3:shri

# gpasswd -d shri sysadmins
Removing user shri from group sysadmins

# cat /etc/group | grep sysadmin
sysadmins:x:3:


Setting a password is done without any switch, while password removal is done with the -r switch, as below :

# gpasswd sysadmins
Changing the password for group sysadmins
New Password:
Re-enter new password:
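Removing the group password is just as simple, using the -r switch:

# gpasswd -r sysadmins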

What is the use of group password in Linux? 

This question comes to many of us. Hardly anyone uses this feature at all. The idea must have been to secure a group from non-member users. But since a group password has to be known to all group members, it doesn’t really make sense to use it. Then you might ask: why do group passwords exist in the first place? It may just be a case of applying the user (password security) model to groups as well, to maintain symmetry in design. That’s just my thought. Let me know if you have any other reason that justifies the existence of group passwords!

Making a user an administrator of a group grants them the privilege to administer the group; a member is just part of the group and cannot administer it. You can make a user an administrator of a group with the -A switch and a member with -M. By default, a user is added to a group as a member.

# gpasswd -A shri sysadmins
# gpasswd -M shri sysadmins

Those are all the group management commands in Linux with their most used switches. Let us know of any additions, corrections, or feedback in the comments!

Finger command in Linux

Learn how to get user details using the finger command in Linux. This article covers the switches of the finger command and the information it reports.

Finger command howto!

finger is a user information lookup program in Linux. It is used to get system user details like the login name, home directory, last login, user shell, etc. This command is useful for seeing these parameters in one place; otherwise you would have to look in /etc/passwd and the last login records.

Sometimes you will not find the finger command in your out-of-the-box distribution. You can install the finger package and proceed to use the command. A sample installation output on Red Hat is below for your reference.

# yum install finger
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos, security
Setting up Install Process
epel/metalink                                                                                                                         |  15 kB     00:00
epel                                                                                                                                  | 4.3 kB     00:00
epel/primary_db                                                                                                                       | 5.9 MB     00:04
rhui-REGION-client-config-server-6                                                                                                    | 2.9 kB     00:00
rhui-REGION-rhel-server-releases                                                                                                      | 3.5 kB     00:00
rhui-REGION-rhel-server-releases/primary_db                                                                                           |  56 MB     00:00
rhui-REGION-rhel-server-releases-optional                                                                                             | 3.5 kB     00:00
rhui-REGION-rhel-server-releases-optional/primary_db                                                                                  | 5.4 MB     00:00
rhui-REGION-rhel-server-rh-common                                                                                                     | 3.8 kB     00:00
Resolving Dependencies
--> Running transaction check
---> Package finger.x86_64 0:0.17-40.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================================================================================================
 Package                      Arch                         Version                              Repository                                              Size
=============================================================================================================================================================
Installing:
 finger                       x86_64                       0.17-40.el6                          rhui-REGION-rhel-server-releases                        22 k

Transaction Summary
=============================================================================================================================================================
Install       1 Package(s)

Total download size: 22 k
Installed size: 27 k
Is this ok [y/N]: y
Downloading Packages:
finger-0.17-40.el6.x86_64.rpm                                                                                                         |  22 kB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : finger-0.17-40.el6.x86_64                                                                                                                 1/1
  Verifying  : finger-0.17-40.el6.x86_64                                                                                                                 1/1

Installed:
  finger.x86_64 0:0.17-40.el6

Complete!

The finger command takes a username as an argument. Without any switch, finger shows the details below.

# finger shrikant
Login: shrikant                         Name: Shrikant Lavhate
Directory: /home/ec2-user               Shell: /bin/bash
On since Wed Jul  5 00:31 (EDT) on pts/0 from 59.184.183.234
No mail.
No Plan.

It displays –

  1. Login: the login ID
  2. Name: the comment (GECOS) field in /etc/passwd for that user
  3. Directory: the user’s home directory
  4. Shell: the user’s login shell
  5. Last login time and the IP from which the user logged in
  6. Mail status
  7. Plan: the content of the .plan file in the user’s home directory

Mail status can be one of the below –

  • No mail. : if there is no mail at all
  • Mail last read DDD MMM ## HH:MM YYYY (TZ) : if the user has looked at their mailbox since new mail arrived
  • New mail received … and Unread since … : shown together if the user has new, unread mail

Finger command switches

The finger command supports a few switches. The output above, without any switch, is the same as the output for the -l switch (multi-line listing). It also displays the contents of the .plan, .project, .pgpkey, and .forward files from the user’s home directory, if they exist.

Another switch is -s, which shows a brief one-line format including the terminal name, write status, idle time, login time, and office contact details.

# finger -s  shri
Login     Name              Tty        Idle  Login Time   Office     Office Phone
shri      Shrikant Lavhate   *     *    No     logins

In this output you can see a * shown for :

  • terminal: when the device is unknown
  • write status: if write permission is denied
  • login and idle time: if nonexistent

The last switch is -m, which turns off user matching. By default, finger matches the supplied name against both user IDs and the user comment (real name) field. To avoid matching against the comment field and only check user IDs, use this switch.

finger can even be used to look up remote user information using the user@host format.
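For example (the username and host below are placeholders, and a remote lookup works only if the remote host runs a finger daemon):

# finger -m shri
# finger shri@10.10.1.23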

Command alias in Linux, Unix

Learn how to create, view, and remove command aliases in Linux or Unix. Useful for shortening long commands or imposing preferred switches on commands.

Command alias

A command alias in Linux is another (usually shorter) version of a frequently used command (or a command with arguments). It is meant to lessen the keystrokes needed to type long commands and to make working in the shell fast, easy, and accurate. On the other hand, if you want users to always use some command with a preferred switch, you can create an alias for it. For example, the rm command can be aliased to rm -i so that it is always interactive, asking the user to confirm the remove operation.

In this article we will see how to create a command alias, how to remove an alias, and how to list aliases.

How to create alias

Creating an alias is an easy job. You specify the alias and the command to alias, like below :

# alias ls='ls -lrt'

Here in this example, we are aliasing the command ls to ls -lrt, so that whenever we type ls it will long-list files sorted by timestamp in reverse order, saving us the time of typing the long command with arguments. See how differently ls works before and after the alias in the output below.

# ls
apache  httpd-2.4.25  httpd-2.4.25.tar  letsencrypt  lolcat-master  master.zip  shti

# alias ls='ls -lrt'

# ls
total 38504
-rw-r--r--.  1 root root  39198720 Dec 19 12:29 httpd-2.4.25.tar
drwxr-xr-x.  5 root root      4096 Dec 26 07:48 lolcat-master
-rw-r--r--.  1 root root    205876 Mar 22 02:21 master.zip
drwxr-xr-x.  2 root root      4096 Mar 29 08:15 apache
drwxr-xr-x. 12  501 games     4096 Mar 29 08:42 httpd-2.4.25
drwxr-xr-x. 14 root root      4096 Apr  3 15:07 letsencrypt
drwxr-xr-x.  2 root root      4096 May 16 01:29 shti

Make a note that this alias will be available in the current shell only. If you want to make it permanent across reboots, and have it apply to other users and shells, you need to define it in /etc/profile or in the respective shell profiles of individual users. Defining it in profiles uses the same syntax: just add the above command to a profile file and it will work once the profile is sourced.
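For example, to make the alias permanent for a single user on bash (assuming bash; other shells use their own rc files):

# echo "alias ls='ls -lrt'" >> ~/.bashrc
# source ~/.bashrc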

List alias in current shell

To view and list all aliases currently active in your shell, just type command alias without any argument.

# alias
alias cp='cp -i'
alias l.='ls -d .* --color=auto'
alias ll='ls -l --color=auto'
alias ls='ls -lart'
alias mv='mv -i'
alias rm='rm -i'
alias which='alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde'

In the above output you can see the commands on the left of the = and the values they are aliased to on the right. These aliases are defined in /etc/profile, user profiles, shell profiles, or directly in the shell with the alias command.

How to delete alias

In case you want to remove or delete an alias you defined, use the unalias command. It takes your alias name (the left portion of the = in the above listing) as an argument.

# unalias ls

# alias
alias cp='cp -i'
alias l.='ls -d .* --color=auto'
alias ll='ls -l --color=auto'
alias mv='mv -i'
alias rm='rm -i'
alias which='alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde'

Observe in the above output that after un-aliasing ls, its alias to ls -lart vanishes from the alias listing. Keep in mind that this un-aliasing is limited to the current shell only. If you want to permanently un-alias a command, you need to remove it from the profile file where you defined it and re-source that profile.

Fun in terminal using alias

Now that you know how aliases work, you can play around and have some fun in the terminal. You can alias funny statements to commonly used commands, for example aliasing ls to the cmatrix command to run the green falling matrix code in the terminal and stun the user!

# alias ls=' echo I love kerneltalks.com :D'
# ls
I love kerneltalks.com :D

As you can guess, this can turn evil if it gets into the wrong hands! So be extra cautious when defining an alias for destructive commands.

Difference between hard link and soft link

Learn the difference between hard links and soft links. Also discover what they are, how to create them, and how to identify them on the system.

Differences between hard link and soft link

One of the frequently asked Linux or Unix interview questions is: what is the difference between hard links and soft links? In this post we will touch base on what hard links and soft links are, the main differences between them, how to create each, a table summarizing the differences, and how to identify hard links and soft links on the system.

Without much distraction, let’s get started :

What is a hard link?

A hard link is a mirror copy of a file on a Linux or Unix system. The original file and the link file share the same inode. Since both share the same inode, hard links cannot cross filesystem boundaries, i.e. you cannot create a hard link to a file residing on another mount point. When you delete a hard link, the original file and its other hard links still exist, since they are all mirror copies; deleting one just reduces the link count. Hard links carry the actual file content.

What is a soft link?

A soft link is just a link to a file on a Linux or Unix system. For understanding, you can visualize a soft link as a “desktop shortcut” in Windows. Since it is a link, its inode is different from that of the file it links to, and soft links can be created across filesystems. If you delete the original file, all soft links to it fail, since they then point to a non-existent file.

Differences between hard link and soft link :

Hard link | Soft link
Mirror copy of the original file | Link to the original file
Link and original file share the same inode | Link has a different inode than the original file
Cannot cross filesystems | Can be created across filesystems
Shows data even if the original file is deleted | Fails if the original file is deleted
Has the full content of the original file | Only points to the source file, hence contains no data of the source
Cannot link directories | Can link to a directory
Saves inodes, since it shares the same inode as the source | Occupies one inode, decreasing available inodes
Takes no extra storage space, since it shares the source’s data blocks | Takes almost no storage, since it contains only the path of the source

How to create a hard link?

To create a hard link, use the ln command followed by the source (original filename) and then the link name. In the below example, we create two hard links, link1 and link2, to the file testdata.

# cat testdata
This is test file with test data!
# ln testdata link1
# ln testdata link2
# ls -li
total 12
3921 -rw-r--r--. 3 root root 50 May 16 01:16 link1
3921 -rw-r--r--. 3 root root 50 May 16 01:16 link2
3921 -rw-r--r--. 3 root root 50 May 16 01:16 testdata

You can see above that we used the ln command to create hard links, after which we listed their inodes with the -i option of the ls command. Both links have the same inode (3921) as the original file (see the first column). Also, the size of the hard links is the same as the original file, since they contain the same data as the source.

Now we will delete the original file and see if we can still get its data from the link files.

# rm testdata
rm: remove regular file `testdata'? y

# ls -li
total 8
3921 -rw-r--r--. 2 root root 50 May 16 01:16 link1
3921 -rw-r--r--. 2 root root 50 May 16 01:16 link2

# cat link1
This is test file with test data!

Yes! Data can still be fetched from hard links even after deleting the original file, since they are mirror copies.

How to create a soft link?

To create soft links, the same ln command is used, but with the -s (soft link) option specified. The rest of the command format remains the same.

# ln -s testdata link1
# ln -s testdata link2
# ls -li
total 4
3921 lrwxrwxrwx. 1 root root  8 May 16 01:26 link1 -> testdata
3925 lrwxrwxrwx. 1 root root  8 May 16 01:26 link2 -> testdata
3923 -rw-r--r--. 1 root root 34 May 16 01:25 testdata

In the above example, observe that after creating the soft links, their inode numbers differ from the original file. Also, the link size is quite small, since they hold only the path of the source, not its data. Another observation: in the last column of the ls output, soft links show which file they point to, which was not the case with hard links.

Now, we will delete the original file and try to access the links.

# cat link2
This is test file with test data!
# rm -f testdata
# cat link2
cat: link2: No such file or directory

You can see in the above output that link2 worked fine earlier; after deleting the original file, the links are broken and throw errors when we try to access them.

How to identify hard links and soft links?

From the above examples you can see that soft links are easy to identify. They are marked with an l in the file type bit (first column of ll output, e.g. lrwxrwxrwx), and the last column of ll output even displays the source file they point to (link1 -> testdata).

Hard links are not that straightforward to identify. You can use the inode option -i of the ls command and check for duplicate inodes, but this is a manual method. Alternatively, use the find command with the -samefile option: it scans inodes and lists files with the same inode (i.e. hard links!).

# find /path -xdev -samefile testdata

The above command scans the /path directory and lists all files having the same inode as the testdata file, which means it lists all hard links to the testdata file!

Watch command to execute script/shell command repeatedly

Learn the watch command to execute scripts or shell commands repeatedly, every n seconds. Very useful in automation and monitoring.

watch command and its examples

The watch command is a small utility with which you can execute a shell command or script repetitively, once every n seconds. It’s helpful in automation and monitoring: one can design automation by monitoring some command output with watch to trigger the next course of action, e.g. a notification.

The watch command is part of the procps package. It is usually bundled with the OS, but you can verify that the package is installed on your system. The utility is used directly by issuing the watch command followed by the command or script name to execute.

Watch command in action

For example, I created a small script which writes junk data continuously into a file placed under /. This changes the utilization numbers in df -k output. In the above GIF, you can see the “Used” and “Available” columns of df -k output changing when monitored with the watch command.
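The script itself was roughly equivalent to the sketch below; the file path is arbitrary, and you should not actually fill / on a real server.

#!/bin/bash
# Keep appending zeros to a file under / so that df -k output keeps changing.
while true; do
    dd if=/dev/zero bs=1M count=10 >> /junkfile 2>/dev/null
    sleep 1
done

With this running in one terminal, watch df -k in another terminal shows the numbers changing on every refresh.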

In the output, you can see –

  1. The default time interval is 2 seconds, as shown in the first line
  2. The interval is followed by the command being executed by watch
  3. The current date and time of the server on the right-hand side
  4. The output of the command being executed

Go through the watch command examples below to understand how flexible watch is.

Different options of watch

To change the default time interval, use the -n option followed by a time interval of your choice. To execute a command every 20 seconds you can use :

# watch -n 20 df -k
Every 20.0s: df -k                      Mon Mar 20 15:00:47 2017

Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/xvda1       6061632 2194812   3552248  39% /
tmpfs             509252       0    509252   0% /dev/shm

See the above output; the interval is changed to 20 seconds (first row).

If you want to hide the header in the output, i.e. the time interval, the command being executed, and the current server date and time, use the -t option. It strips off the first line of the output.

# watch -t df -k
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/xvda1       6061632 2194812   3552248  39% /
tmpfs             509252       0    509252   0% /dev/shm

Highlighting differences between the present and previous output is made easy with the -d option. To understand this, watch the below GIF –

watch command with -d option

In the above output, I used the same data-writing script to fill /. You can observe that the only portion which differs from the previous output is highlighted by watch in a white box!
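A typical invocation combining a custom interval with difference highlighting looks like this:

# watch -n 1 -d df -k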

YUM automatic updates! Save your valuable time!

Learn how to schedule YUM automatic updates to upgrade all system packages to the latest available version in the background, without manual intervention!

Set YUM to update packages automatically

Recently we published a post about how to update packages on a RHEL system using YUM. In that post we explained how to update single or multiple packages, and even all packages at once, using the command line. But these are manual tasks and need human intervention to complete.

In this post, we will see how to set up automatic updates using yum-cron. This saves the sysadmin time otherwise invested in updating packages manually.

Setting YUM automatic updates on production servers is not recommended. Production environments always need a risk analysis of how updates will impact the operation of the server and its hosted apps before processing them. Since this process is completely automatic and runs in the background, it’s advisable to refrain from implementing it on critical servers.

yum-cron is a service available on RHEL which runs in the background and updates packages on the system automatically. It’s like cron for YUM, just as we have crons for scripts and commands in Linux. It’s available under the package name yum-cron. Let’s walk through the install and configuration process step by step.

Install yum-cron

The yum-cron package is available on the optional and supplementary channels; your YUM should be configured to fetch packages from these channels. Install the package using :

# yum install yum-cron

Once installed, you need to enable the service, since it is disabled by default. Enable it using chkconfig and start it manually:

# chkconfig yum-cron on
# service yum-cron start

Configure yum-cron:

The yum-cron configuration files are /etc/sysconfig/yum-cron and /etc/sysconfig/yum-cron-hourly.conf. In these configuration files you can set the frequency and extent of updates.

It has three important fields to set, shown below :

# Whether a message should be emitted when updates are available.
update_messages = yes

# Whether updates should be downloaded when they are available. Note
# that update_messages must also be yes for updates to be downloaded.
download_updates = yes

# Whether updates should be applied when they are available.  Note
# that both update_messages and download_updates must also be yes for
# the update to be applied
apply_updates = yes

In the hourly conf file you can restrict updates to security fixes with the settings below, making sure your system runs the latest secured packages without missing any important security update :

#  What kind of update to use:
# default                            = yum upgrade
# security                           = yum --security upgrade
# security-severity:Critical         = yum --sec-severity=Critical upgrade
# minimal                            = yum --bugfix update-minimal
# minimal-security                   = yum --security update-minimal
# minimal-security-severity:Critical =  --sec-severity=Critical update-minimal

You can also configure an email ID so that a notification is sent out after yum-cron finishes its tasks. This is defined against the MAILTO or email_to variable in the configuration file.
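For example, a sketch of the notification setting (the address is a placeholder; use MAILTO in the sysconfig-style file or email_to in the .conf-style file, whichever your version provides):

MAILTO=sysadmin@example.com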

Once the configuration is done, restart the yum-cron service.
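On a RHEL 6 style system that is simply:

# service yum-cron restart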

That’s it, you are done! Now the yum-cron service runs quietly in the background. It will update packages on the system (to the extent configured) at the frequency set in the config, and send you an email notification (if configured). You can spend your valuable time on other sysadmin tasks!

How to create tar file in Linux or Unix

Learn how to create a tar file in the Linux or Unix operating system. Understand how to create, list, and extract tar files while preserving file permissions.

TAR is a short form of tape archive. Tape archives are used to sequentially read or write data. Packing a huge list of files or a deep directory structure into a single file (tar file) is what we are looking at in this post. The resulting single file is sometimes termed a tarball as well.

Creating tape archives of files or directories is extremely important when you are planning to FTP a huge pile of them to another server. Since each file opens a new FTP session and closes it after the transfer finishes, a huge number of files makes FTP take forever to complete the data transfer (even if each file is small). Even with an ample amount of bandwidth, the session open and close operations slow FTP to its knees. In such cases, archiving all the files into a single tarball makes sense: the single archive transfers at maximum speed, making the transfer quick. After the transfer, you can expand the archive again and get all files in place, the same as the source.

Normally it’s good practice to first zip files or directories and then create a tape archive of them: zipping reduces their size, and tar packs the reduced-size files into a single tarball! In Linux or Unix, the tar command is used to create a tar file. Let’s see the different operations that can be handled by the tar command.

How to create tar file :

To create a tar file, the -c option is used with the tar command. It should be followed by the filename of the archive, which normally has a .tar extension. The extension is only for users to identify it as a tar archive; Linux/Unix really doesn’t care about extensions.

# tar -cvf file.tar file2 file3
file2
file3
# ll
total 32
-rw-r--r-- 1 root users    40 Jan  3 00:46 file2
-rw-r--r-- 1 root users   114 Jan  3 00:46 file3
-rw-r--r-- 1 root users 10240 Jan  9 10:22 file.tar

In the above output, we used -v (verbose mode) along with the -c option. file.tar is the archive name specified, and the end of the command is stuffed with the list of files to be added to the archive. The output shows each filename on the terminal as it is added to the archive. Once all files are processed, you can see the new archive at the specified path.

To add directories to the tar file, add the directory path at the end of the same command as above.

# tar -cvf dir.tar dir4/
dir4/
dir4/file.tar
dir4/file1
dir4/file3
dir4/file2

If you use a full absolute path in the command, tar will remove the leading /, making member paths relative. When the archive is extracted, all members will have paths starting from the current working directory (or a specified directory) rather than the leading /.

# tar -cvf dir2.tar /tmp/dir4
tar: Removing leading `/' from member names
/tmp/dir4/
/tmp/dir4/file.tar
/tmp/dir4/file1
/tmp/dir4/file3
/tmp/dir4/file2

You can see the warning about the leading / being removed in the output above.

How to view content of tar file :

Tar file content can be viewed without extracting it; the -t (list) option does this task. It shows file permissions, ownership, size, date timestamp, and the relative file path.

# tar -tvf dir4.tar
drwxr-xr-x root/users      0 2017-01-09 10:28 tmp/dir4/
-rw-r--r-- root/users  10240 2017-01-09 10:22 tmp/dir4/file.tar
-rw-r--r-- root/users  10240 2017-01-09 10:21 tmp/dir4/file1
-rw-r--r-- root/users    114 2017-01-03 00:46 tmp/dir4/file3
-rw-r--r-- root/users     40 2017-01-03 00:46 tmp/dir4/file2

In the above output you can see the mentioned parameters displayed in the same sequence. A relative file path means that when this tar file is extracted, it will create the files and directories under the PWD or the path specified on the command line. For example, with the above output, if you extract this file in the /home/user4/data directory, it will create a tmp directory under /home/user4/data and extract all files into it, i.e. under /home/user4/data/tmp.

How to extract tar file :

The -x option extracts files from the tar file. As explained above, it extracts the files and directories within the tar into a relative path.

# tar -xvf dir4.tar
tmp/dir4/
tmp/dir4/file.tar
tmp/dir4/file1
tmp/dir4/file3
tmp/dir4/file2
# ll
total 36
-rw-r--r-- 1 root users 30720 Jan  9 10:28 dir4.tar
drwxr-xr-x 3 root users  4096 Jan  9 10:46 tmp
# ll tmp
total 4
drwxr-xr-x 2 root users 4096 Jan  9 10:28 dir4
# ll tmp/dir4
total 32
-rw-r--r-- 1 root users 10240 Jan  9 10:21 file1
-rw-r--r-- 1 root users    40 Jan  3 00:46 file2
-rw-r--r-- 1 root users   114 Jan  3 00:46 file3
-rw-r--r-- 1 root users 10240 Jan  9 10:22 file.tar

In the above outputs, step by step, you can see the tmp and dir4 structure created by the tar command and all the files extracted within it.

There are a few more options supported by tar, like appending files to existing archives, zipping, updating newer files in existing archives, etc. We will see them in another post later; a quick preview is below. Leave us a comment if you have any queries or suggestions.
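As a quick preview, these operations look roughly like this (the file names are placeholders):

# tar -rvf file.tar newfile      (append newfile to an existing archive)
# tar -uvf file.tar file2        (add file2 only if it is newer than the archived copy)
# tar -czvf archive.tar.gz dir4/ (create a gzip-compressed archive)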

How to zip, unzip files and directories in Linux / Unix

Learn how to zip and unzip files and directories in Linux or Unix. Compressing files helps in log management and makes data transfer easy.

Zipping files or directories compresses the data within them using Lempel-Ziv coding (LZ77). This reduces the size of the resulting file. A lower size means a lower storage requirement (log management) and faster transfer (FTP).

On Linux or Unix platforms, gzip is a widely available utility, mostly native to the OS, which is used to zip and unzip files. In this post we will see how to zip and unzip files using the gzip utility, with examples.

Compressing files

Zipping, i.e. compressing, files is achieved by gzip without any option: you just pass filename.xyz to the gzip command. It compresses the file, and the resulting file is named filename.xyz.gz. The point to note here is that gzip removes the original file and keeps the new .gz file in its place.

# ll
total 12
-rw-r--r-- 1 root users  40 Jan  3 00:46 file2

# gzip file2

# ll
total 12
-rw-r--r-- 1 root users  63 Jan  3 00:46 file2.gz

Note in the above output that after zipping, the original file file2 has vanished from the system and the compressed archive file2.gz has come into existence.

This command supports wildcards like * or ? too. You can also supply a list of files, and it will compress all the files given as arguments.

# gzip file2 file3

# ll
total 12
-rw-r--r-- 1 root users 63 Jan  3 00:46 file2.gz
-rw-r--r-- 1 root users 134 Jan  3 00:46 file3.gz

You can force the operation with the -f option. This is helpful in cases where the files to be compressed have multiple links in existence.
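For example, forcing compression of a file that has other links or whose .gz already exists:

# gzip -f file2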

gzip also supports the -v option, i.e. verbose mode, which shows details about the operation being performed.

# gzip -v *
file1:   45.7% -- replaced with file1.gz
file2:   22.5% -- replaced with file2.gz
file3:   10.5% -- replaced with file3.gz

In the above example, we zipped all files within a directory using the * wildcard. Here verbose mode printed the compression ratio for each file, along with the file it was replaced by after the operation.

Compressing directories

Like files, even directories can be compressed, recursively. When the -r option is used, the gzip command reads through the given directory’s subtree structure and zips all the files it finds within.

# ll /tmp/dir3
total 12
-rw-r--r-- 1 root users  35 Jan  3 00:46 file1
-rw-r--r-- 1 root users  40 Jan  3 00:46 file2
-rw-r--r-- 1 root users 114 Jan  3 00:46 file3

# gzip -r /tmp/dir3

# ll /tmp/dir3
total 12
-rw-r--r-- 1 root users  51 Jan  3 00:46 file1.gz
-rw-r--r-- 1 root users  63 Jan  3 00:46 file2.gz
-rw-r--r-- 1 root users 134 Jan  3 00:46 file3.gz

In the above output, it recursively zipped all files within the given directory. This is a helpful option when there are hundreds of files in a directory.

Checking compressed files

To test a compressed archive, you can use the -t option with gzip. If there are any issues with the compressed file it will report them; otherwise it silently returns you to the shell prompt.

# gzip -t file2.gz

You can also view the compression details of a file using the -l option with the gzip command. It shows the compressed and uncompressed sizes, the compression ratio (0.0% if not known), and the name of the uncompressed file, i.e. the filename before compression.

# gzip -l file2.gz
         compressed        uncompressed  ratio uncompressed_name
                 63                  40  22.5% file2

Un-Compressing files

To get the original file back from a compressed archive, use the -d option with the .gz file as the argument. It works in reverse: it removes the .gz file and restores the original file in the directory. The recursive -r option works with -d too.

# gzip -d file2.gz

# ll
total 12
-rw-r--r-- 1 root users  40 Jan  3 00:46 file2

You can see file2 is available again and the .gz file has been removed. Using verbose mode prints more information about the operations being done.

# gzip -v -d *
file1.gz:        45.7% -- replaced with file1
file2.gz:        22.5% -- replaced with file2
file3.gz:        10.5% -- replaced with file3

In the above output, we decompressed three files in the same directory using the * wildcard. It shows the compression ratio and which file replaced which after decompression.