Yearly Archives: 2017

4 ways to check the size of physical memory (RAM) in Linux

An article explaining how to check physical memory (RAM) on a Linux server, using 4 different commands to get memory information.

Checking physical memory (RAM)

In this article we will see basic commands to check physical memory on a Linux server. Many beginners struggle to know their system well in terms of resources like CPU, memory, disks, etc. So I decided to write this small article pinpointing commands to check RAM on a Linux server. These commands work on different flavors of Linux like Red Hat, CentOS, SUSE, Ubuntu, Fedora, Debian, etc.

Without much delay, let's dive into the commands –

1. Using free command

The first command is free. This is the simplest command to check your physical memory. It is mainly used for checking RAM and swap on the system. Using different switches you can change the byte format of the output: -b for bytes, -k for kilobytes, -m for megabytes and -g for gigabytes.

Check the row starting with Mem: and the number against it. That's the physical RAM of your server.

root@kerneltalks # free -b
             total       used       free     shared    buffers     cached
Mem:    135208493056 1247084544 133961408512          0  175325184  191807488
-/+ buffers/cache:  879951872 134328541184
Swap:   17174347776          0 17174347776

root@kerneltalks # free -k
             total       used       free     shared    buffers     cached
Mem:     132039544    1218368  130821176          0     171216     187316
-/+ buffers/cache:     859836  131179708
Swap:     16771824          0   16771824

root@kerneltalks # free -m
             total       used       free     shared    buffers     cached
Mem:        128944       1189     127754          0        167        182
-/+ buffers/cache:        839     128105
Swap:        16378          0      16378

root@kerneltalks # free -g
             total       used       free     shared    buffers     cached
Mem:           125          1        124          0          0          0
-/+ buffers/cache:          0        125
Swap:           15          0         15

In the above output you can see the system has 125 GB of physical RAM installed (observe the Mem: rows). By using the different switches -b, -k, -m and -g, the output numbers change according to the selected byte format.
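
On newer systems, where free comes from procps-ng, you can also use the -h switch to get the output directly in human-readable units. A minimal sketch (the column layout and values below are only illustrative; older versions of free may not have this switch):

root@kerneltalks # free -h
              total        used        free      shared  buff/cache   available
Mem:           125G        1.2G        124G          0B        350M        124G
Swap:           15G          0B         15G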

2. Using /proc/meminfo file

Another way is to read memory info from the proc filesystem. /proc/meminfo is the file you should read to get detailed information about memory. The very first line, starting with MemTotal, is your total physical memory on the server.

root@kerneltalks # cat /proc/meminfo |grep MemTotal
MemTotal:       132039544 kB

As you can see from the output, memory is displayed in kilobytes.
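
If you want that figure in gigabytes directly, a small awk one-liner can do the conversion for you. A minimal sketch (it simply divides the kB value by 1024 twice):

root@kerneltalks # awk '/MemTotal/ {printf "%.1f GB\n", $2/1024/1024}' /proc/meminfo
125.9 GB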

3. Using top command

The famous top command also lists physical memory information in a very clear way. In the upper section of the top command output lies the CPU, Memory, and SWAP information.

root@kerneltalks # top
top - 16:03:41 up 89 days,  3:43,  1 user,  load average: 0.00, 0.01, 0.05
Tasks: 141 total,   1 running, 140 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  132039544k total,  1218336k used, 130821208k free,   171224k buffers
Swap: 16771824k total,        0k used, 16771824k free,   187420k cached

The above example shows only the upper portion of the top command output. Check the second last line, starting with Mem:. This shows physical memory in kilobytes. You can see the total, used, and free portions of it. Total is the actual RAM installed on the server.

4. Using vmstat

Another way is to use the vmstat (virtual memory statistics) command with the -s switch. This will list memory in detail, with the first line being the total memory on the server.

# vmstat -s
    132039544  total memory
      1218692  used memory
       181732  active memory
----output trimmed----

Memory is displayed in kilobytes by default. The very first line shows you total memory on the server.
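
Since only the first line matters here, you can trim the output down to just that line. A minimal sketch:

root@kerneltalks # vmstat -s | head -1
    132039544  total memory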

14 find command examples for Linux

Learn the find command with these 14 examples, handpicked to help you in your day-to-day operations.

‘find’ command

One of the most important, most used and most helpful commands in the Linux terminal is the 'find' command. It searches for files depending on your search criteria and brings back the list for you! It saves you from going through many directories and wasting time looking for your required files.

In this article we will see a list of different find commands to search for files. The normal find command syntax is –

# find <path_to_search> -switch <search_criteria>

Here, path_to_search is the directory location where you want to search and search_criteria is the condition that has to be matched. Note that find will search all sub-directories of the given path_to_search recursively.
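
For instance, a quick sketch of this syntax in action (the path and pattern are only illustrative) – search /var/log and all its sub-directories for .log files:

root@kerneltalks # find /var/log -type f -name "*.log"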

Let’s see 14 find command examples that are very helpful for you in your day to day operations on Linux servers like RedHat, Ubuntu, CentOS, Debian, etc.

Find file using name

Using the -name switch you can specify the name of the files to search for in a particular location.

root@kerneltalks # find /tmp -name "*.gz"

You can use wildcards as well while specifying your search criteria. In the above example, we searched the /tmp directory for all gzipped files.

Find only files

To search for a specific file type, the -type switch needs to be supplied to the find command.

root@kerneltalks # find /tmp -type f -name "*log"

For searching only files, we defined -type as f in the above example. The above command will search /tmp for files whose name ends with 'log'.

Find only directories

For searching only directories, define -type as d.

root@kerneltalks # find /tmp -type d -name "box*"

In the above example, the find command will search /tmp for only those directories whose name starts with 'box'.

Find files which are modified in last 7 days

Searching for files that were modified in the last X days is helpful for log management. When you don't have utilities like logrotate configured, you will have to search for and housekeep files with this command.

root@kerneltalks # find /tmp -mtime -7

-mtime is a switch which takes a number of days as an argument; mtime stands for modification time. A leading minus sign means 'modified less than that many days ago', a plus sign means 'more than', and a bare number matches files modified exactly that many days ago.

You can combine the same switch twice to get a range of days for your search. For example, to search for files which were modified between 10 and 20 days ago, you can use:

root@kerneltalks # find /tmp -mtime -20 -mtime +10

Find files accessed in last 7 days

A variant of the above search is to find files that were accessed in the last X days, so that files which have not been accessed for a long period of time can be zipped or trimmed to save disk space.

root@kerneltalks # find /tmp -atime -7

The -atime switch is used here, with the number of days supplied as an argument.

Again, you can combine the same switch twice to get a range of days. For example, to search for files which were accessed between 10 and 20 days ago, you can use:

root@kerneltalks # find /tmp -atime -20 -atime +10

Find files with particular size

Sometimes housekeeping is done based on file size too. To search for files based on size, you can use -size and supply human-readable size formats like 10M, 2G, etc.

root@kerneltalks # find /tmp -size 5M

The above command will search for files of exactly 5 MB in size.

Find files having size greater than

But an exact size search mostly doesn't yield the expected result. It's always better to search with a file size range. To find files with size greater than X –

root@kerneltalks # find /tmp -size +10M

This command will search /tmp for files whose size is greater than 10MB.

Find files having size lesser than

In the same way, files can be searched for with size less than a specified value, like –

root@kerneltalks # find /tmp -size -20M

Using the above two options together, we can even define a size range to search within.

root@kerneltalks # find /tmp -size +10M -size -20M

This command will search files whose size is greater than 10MB and less than 20MB!

Find hidden files

As you know, in Linux/Unix hidden file names start with a dot (.)

So we can search for hidden files using the -name switch explained earlier, with a wildcard, as below –

root@kerneltalks # find /tmp -type f -name ".*"

We specified the wildcard asterisk after the dot (quoted so the shell does not expand it), which means: search for all files whose name starts with a dot.

Find files with particular permission

Searching for files with a particular permission is a common task while auditing. It helps you trace down files that might have unnecessary extra permissions and could pose a security threat to the system.

The -perm (permission) switch can be used with the find command, followed by the permission value.

root@kerneltalks # find / -type f -perm 0777

In the above command we are searching for all files with 777 permissions.

Find world readable files

Along the same lines, we can search for world-readable files, i.e. files on which everyone has only read access (444 or -r--r--r-- permission):

root@kerneltalks # find / -type f -perm 444
OR
root@kerneltalks # find / -type f -perm /u=r -perm /g=r -perm /o=r

You can see that numeric as well as u-g-o (user, group, others) formats can be used with this switch.

Find files owned by a particular user

If you suspect some user is filling up the server with files, you can directly search for files under their ownership using the -user switch.

root@kerneltalks # find / -type f -user shrikant

This command will search the whole filesystem (starting from /) for files which are owned by the user 'shrikant'.

Find files owned by particular group

Similarly, files owned by a specific group can be searched for using the find command with the -group switch.

root@kerneltalks # find / -type f -group dba

In the above example we are searching for files owned by the group named 'dba'.

Find empty files or directories

Cleaning up empty files and directories is crucial when you are hitting your inode limits. Sometimes, deleting the source of a soft link leaves empty, dangling files on the server, each occupying an inode. In such cases we can search for them with the -empty switch of the find command.

root@kerneltalks # find / -type f -empty
root@kerneltalks # find / -type d -empty

Defining the file type as file or directory will search for the respective empty entities.
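
You can also combine several of the switches above into a single housekeeping search. A minimal sketch (the path and thresholds are only illustrative) that lists old, large log files under /var/log:

root@kerneltalks # find /var/log -type f -name "*.log" -mtime +30 -size +10M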

 

12 examples of ls command in Linux for daily use

Beginner's guide to the ls command. Learn the ls command with 12 examples that can be used in your daily Linux tasks.

Learn ‘ls’ command

The very first command anyone types when they log into the terminal is either ls or hostname! Yes, ls is the first command all beginners and newbies learn and use when they are introduced to the Linux world. It is one of the most used, or rather smashed, commands in the terminal 🙂

ls stands for list. This command helps in listing files and directories in Linux or Unix. There are many switches available to fit your needs. We will walk you through 12 different examples of the ls command which can be useful for you in your daily routine.

Normal ls command without any switch

Without any switch, the ls command shows all file and directory names on a single line, separated by spaces.

root@kerneltalks # ls
directory1  directory2  testfile1  testfile2

Long listing using ls -l

For more detailed information, use the long listing, i.e. the -l switch with the ls command.

root@kerneltalks # ls -l
total 16
drwxr-xr-x 2 root admin 4096 Sep 14 18:07 directory1
drwxr-xr-x 2 root admin 4096 Sep 14 18:07 directory2
-rw-r--r-- 1 root admin    8 Sep 14 18:08 testfile1
-rw-r--r-- 1 root admin   51 Sep 14 18:08 testfile2

Information is displayed column-wise, where –

  • The first column is the file/directory permission details
  • The second column is the hard link count
  • The third column is the owner of the file/directory
  • The fourth column is the group of the file/directory
  • The fifth column is the size in bytes
  • The sixth and seventh columns are the month and day of last modification
  • The eighth column is the time of last modification of the file/directory
  • The last column is the file or directory name.

Listing hidden files using ls

Normal ls command won’t display hidden files. Hidden files in Linux are files whose names start with .

These files can be listed using -a switch.

root@kerneltalks # ls -al
total 32
drwxr-xr-x   4 root admin  4096 Sep 14 18:08 .
drwxrwxrwt. 11 root root    12288 Sep 14 18:07 ..
drwxr-xr-x   2 root admin  4096 Sep 14 18:07 directory1
drwxr-xr-x   2 root admin  4096 Sep 14 18:07 directory2
-rw-r--r--   1 root admin    15 Sep 14 18:08 .account_detail
-rw-r--r--   1 root admin     8 Sep 14 18:08 testfile1
-rw-r--r--   1 root admin    51 Sep 14 18:08 testfile2

You can see in the above output that the hidden file .account_detail (name starts with a dot) is listed.

Listing human readable file sizes

In the long listing we have seen that file sizes are displayed in bytes. This is not a very user-friendly format since you have to convert the numbers into bigger units yourself. An easy human-readable format like KB, MB, GB is available with the -h switch. Using it, file sizes will be displayed in a human-readable format.

root@kerneltalks # ls -hl
total 16K
drwxr-xr-x 2 root admin 4.0K Sep 14 18:07 directory1
drwxr-xr-x 2 root admin 4.0K Sep 14 18:07 directory2
-rw-r--r-- 1 root admin    8 Sep 14 18:08 testfile1
-rw-r--r-- 1 root admin   51 Sep 14 18:08 testfile2

Here the size is displayed as 4.0K for directories, i.e. 4 kilobytes.

Listing inode numbers of files

Inodes are the numbers assigned to each file/directory in a Linux system. One can view them using the -i switch.

root@kerneltalks # ls -i
18 directory1  30 directory2  32 testfile1  43 testfile2

The numbers 18, 30, 32, and 43 are the respective inodes of the files and directories listed to their right.
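
You can also combine -i with the long listing to see inode numbers alongside the usual details. A minimal sketch (the listing below simply merges the two outputs shown earlier):

root@kerneltalks # ls -li
total 16
18 drwxr-xr-x 2 root admin 4096 Sep 14 18:07 directory1
30 drwxr-xr-x 2 root admin 4096 Sep 14 18:07 directory2
32 -rw-r--r-- 1 root admin    8 Sep 14 18:08 testfile1
43 -rw-r--r-- 1 root admin   51 Sep 14 18:08 testfile2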

Sorting files by last modification time

This is one of the most widely used forms of the ls command. The switches used are -l (long listing), -r (reverse sort) and -t (sort by modification time). Due to the reverse sort, the most recently updated file is shown at the bottom of the output.

root@kerneltalks # ls -lrt
total 16
drwxr-xr-x 2 root admin 4096 Sep 14 18:07 directory1
drwxr-xr-x 2 root admin 4096 Sep 14 18:07 directory2
-rw-r--r-- 1 root admin    8 Sep 14 18:08 testfile1
-rw-r--r-- 1 root admin   51 Sep 14 18:08 testfile2

Listing file owners with their IDs

The normal long listing shows the owner and group by name. To list the owner and group as UID and GID, you can use the -n switch.

root@kerneltalks # ls -n
total 16
drwxr-xr-x 2 0 512 4096 Sep 14 18:07 directory1
drwxr-xr-x 2 0 512 4096 Sep 14 18:07 directory2
-rw-r--r-- 1 0 512    8 Sep 14 18:08 testfile1
-rw-r--r-- 1 0 512   51 Sep 14 18:08 testfile2

Listing directories by appending / to their names

The ls command without arguments lists all file and directory names. But without the long listing (in which directories have a permission string starting with d) you won't be able to identify directories. So here is a tip: use the -p switch. It will append / to all directory names so you can identify them easily.

root@kerneltalks # ls -p
directory1/  directory2/  testfile1  testfile2

You can see both directories have / appended to their names.

Listing directories recursively

The long listing or normal ls command shows you only the files and directories residing in the current directory (from where you are running the command). To view the files inside those directories, you need to run the ls command recursively, i.e. using the -R switch.

root@kerneltalks # ls -R
.:
directory1  directory2  testfile1  testfile2

./directory1:
file1  file2

./directory2:
file3  file4

In the output you can see –

  • The first part, ., means the current directory, followed by a list of files/directories within it.
  • The second part, ./directory1, is followed by a list of files/directories within it.
  • The third part lists files/directories within ./directory2.
  • So it listed all the contents of both directories residing in our present working directory.

Sorting files by file size

To sort the listing by size, use the -S switch. It sorts in descending order, i.e. the largest files appear at the top.

root@kerneltalks # ls -lS
total 16
drwxr-xr-x 2 root admin 4096 Sep 14 18:16 directory1
drwxr-xr-x 2 root admin 4096 Sep 14 18:16 directory2
-rw-r--r-- 1 root admin   51 Sep 14 18:08 testfile2
-rw-r--r-- 1 root admin    8 Sep 14 18:08 testfile1

Listing only owners of files

Want to list only the owners of files? Use the -o switch. The group won't be listed in the output.

root@kerneltalks # ls -o
total 16
drwxr-xr-x 2 root 4096 Sep 14 18:16 directory1
drwxr-xr-x 2 root 4096 Sep 14 18:16 directory2
-rw-r--r-- 1 root    8 Sep 14 18:08 testfile1
-rw-r--r-- 1 root   51 Sep 14 18:08 testfile2

Listing only groups of files

The opposite of the above: with the -g switch, the group will be listed and the owner won't be.

root@kerneltalks # ls -g
total 16
drwxr-xr-x 2 admin 4096 Sep 14 18:16 directory1
drwxr-xr-x 2 admin 4096 Sep 14 18:16 directory2
-rw-r--r-- 1 admin    8 Sep 14 18:08 testfile1
-rw-r--r-- 1 admin   51 Sep 14 18:08 testfile2

This is it! 12 different examples of the ls command which can be helpful in your daily Linux learning. Do subscribe to our blog to get the latest post notifications about Linux how-to guides. Let us know in the comments section below if you want us to cover more beginner topics.

Linux server build template (document)

Linux server build template which will help you design your own build sheet or build book. This template can be used as a baseline and has all the necessary details regarding the new server build.

Linux server build sheet

Technical work needs to be documented well so that it can be referred to by future sysadmins, making their life easy! One of the important documents every Linux sysadmin has to maintain is the server build book or server configuration sheet.

Although there are many automation tools available in the market which pull the configuration from a server and present it to you in a formatted manner, they come into the picture only once the server is up and running. What if you want to fill in this server build document before the server is made active? Or you need to draft a server build sheet in which you require the requester to fill in details so that you can deploy the server accordingly? In such cases you cannot rely on automation tools.

Before you read further, please note that this is a purely process-related document being discussed, not a technical one.

It is a manual method of collecting information from the respective stakeholders; using all that information, you can then build your server. Information can be collected from various stakeholders like the below –

  • Business authorities (project details, billing context, server tier classification)
  • Datacenter team (hardware details)
  • Storage team (Storage connectivity and capacity)
  • Network team (IP, connectivity matrix)

For building or deploying one Linux server, all the above-said stakeholders need to be contacted to get the relevant information which will help you deploy the server. I have created a sample template for a Linux server build which you may refer to (link at the end of the article).

Business authorities can help you identify the server's tier classification. It tells you how critical the server is. The most critical servers get high-performance resources (like SSD storage) and the latest tech, while less critical servers receive less expensive resources (like SATA disk storage). These tiers are mostly named platinum (most critical, highest valued), gold, silver and copper (least critical). The terminology may change between organizations, but this scheme is widely used across the IT industry. Business authorities also help you identify the server's project, so that you can use this information in your inventory sheet, naming conventions for server infra, etc.

The datacenter team helps you identify the hardware on which you can build your server. If it's a physical server, then its DC location along with the rack number, chassis number, blade number, etc. helps you locate the server physically. iLO or management port connectivity can also be arranged with the help of the datacenter team. This is crucial for a new server which doesn't have anything running on it yet and where you have to start with a fresh install. If it's a virtual server based on a virtualization platform like VMware, the datacenter team can help you identify the proper hosts within the virtualized infra to host your server. Other details like the VMware datacenter, cluster, ESXi host and datastore can also be obtained with the help of the DC team.

The storage team can help you with LUN provisioning according to your requirements. Storage can be allocated according to the server tier. If it's a new physical server, then physical connectivity also needs to be put in place with the help of the storage and DC teams.

The network team can provide you with a free IP, subnet mask, and gateway to be configured on the server. Any new VLAN creation request can also be taken up with the network team.

I have incorporated all these points into a sheet.

Download Linux server build template by kerneltalks.com

This build sheet is helpful to gather all requirements before you start to build a Linux server in your physical or virtual infra!

Difference between /etc/passwd and /etc/shadow

Learn about the difference between /etc/passwd and /etc/shadow files in the Linux system. 9 points to understand the comparison of these two files.

/etc/passwd vs /etc/shadow

It's one of the common Linux beginner interview questions: explain the difference between the /etc/passwd and /etc/shadow files, or compare the passwd and shadow files in Linux. Basically, both files serve different purposes on the system, so it's not completely logical to compare them, but still, if you want to, we have this article for you explaining /etc/passwd vs /etc/shadow.

Before reading ahead, if you are not sure about these files read our articles explaining these files field by field.

Difference between /etc/passwd and /etc/shadow

  1. The file formats are the same, i.e. fields separated by colons and a new row for each user. But the number of fields is different: the passwd file has 7 fields whereas the shadow file has 8 fields.
  2. All fields are different except the first one, which is the same in both files: the username.
  3. The /etc/passwd file aims at user account details, while /etc/shadow aims at the user's password details.
  4. The passwd file is world-readable, whereas the shadow file can only be read by the root account.
  5. The user's encrypted password is stored only in the /etc/shadow file.
  6. The pwconv command is used to generate a shadow file from the passwd file if it doesn't exist.
  7. The passwd file exists by default when the system is installed.
  8. passwd file information is mostly static (home directory, shell, UID, GID, which hardly change).
  9. shadow file information changes frequently, since it is related to passwords and user passwords change frequently (if not, the password policies are loosely defined!).
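
To see the difference yourself, you can compare the same user's entry in both files. A minimal sketch ('testuser' and the passwd line shown are only illustrative, and reading /etc/shadow requires root):

root@kerneltalks # grep testuser /etc/passwd
testuser:x:1001:1001:Test User:/home/testuser:/bin/bash
root@kerneltalks # grep testuser /etc/shadow
testuser:$1$FrWa$ZCMQ5zpEG61e/wI45N8Zw.:17413:0:33:7:::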

Understanding /etc/shadow file

Article to understand the fields and format of the /etc/shadow file. Learn each field in detail and how it can be modified.

/etc/shadow file in Linux

We have written about the /etc/passwd file in the past. In this article, we will see the /etc/shadow file: its format, its content, and its importance for the Linux system. The /etc/shadow file (henceforth referred to as the shadow file in this article) is one of the crucial files on the system and the counterpart of the /etc/passwd file.

Unlike the passwd file, the shadow file is not world-readable. It can be read by the root user only. Shadow file permissions are 400, i.e. -r--------, and ownership is root:root. This means it can only be read, and only by the root user. The reason for such security is the password-related information stored in this file.

A typical /etc/shadow file looks like this:

# cat /etc/shadow
root:$1$UFnkhP.mzcMyajdD9OEY1P80:17413:0:99999:7:::
bin:*:15069:0:99999:7:::
daemon:*:15069:0:99999:7:::
adm:*:15069:0:99999:7:::
testuser:$1$FrWa$ZCMQ5zpEG61e/wI45N8Zw.:17413:0:33:7:::

Since it's a normal text file, commands like cat and more will work on it without any issue.

The /etc/shadow file has different fields separated by colons. There are a total of 8 fields in the shadow file. They are –

  1. Username
  2. Encrypted password
  3. Last password change
  4. Min days
  5. Max days
  6. Warn days
  7. Inactive days
  8. Expiry

Let's walk through all these fields one by one.

Username

The username is the user's login name. It's created on the system whenever a user is created using the useradd command.

Encrypted password

It's the user's password in encrypted (hashed) format.

Last password change

It's the number of days since 1 Jan 1970 on which the password was last changed. For example, in the above sample, testuser's last password change value is 17413 days. Count 17413 days from 1 Jan 1970 and you arrive at 4 Sept 2017. That means testuser last changed his password on 4 Sept 2017.

You can easily add/subtract dates using scripts or online tools.
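
For example, on systems with GNU date you can let the shell do this conversion for you. A minimal sketch using the value from the sample above:

root@kerneltalks # date -d "1970-01-01 +17413 days" +%F
2017-09-04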

Min days

It's the minimum number of days between two password changes for that account. That means the user cannot change his password again unless the min days have passed since his last password change. This field can be tweaked using the chage command. It is generally set to 7 days, but it can be 1 too, depending on your organization's security norms.

Max days

It's the maximum number of days for which the user's password is valid. Once this period is exhausted, the user is forced to change his/her password. This value can be altered using the chage command. It is generally set to 30 days, but the value differs as per your security demands.

Warn days

It's the number of days before password expiry from which the user will start seeing a warning about his password expiration after login. Generally it is set to 7, but it's up to you or your organization to decide this value as per your security policies.

Inactive days

It's the number of days after password expiry after which the account will be disabled. This means that if the user doesn't log in to the system after his/her password expiry (and so doesn't change the password), then after this many days the account will be disabled. Once the account is disabled, the system admin needs to unlock it.

Expiry

It's the date, expressed as a number of days since 1 Jan 1970, on which the account is disabled. We already saw how to calculate this in the 'last password change' section.

Except for the first 2 fields, all the remaining fields are related to password aging/password policies.
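
Most of these aging fields can be viewed and modified with the chage command mentioned above. A minimal sketch (the username and the values are only illustrative):

root@kerneltalks # chage -l testuser                 # list current password aging values
root@kerneltalks # chage -m 7 -M 30 -W 7 testuser    # set min, max and warn days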

Beginner's guide to killing processes in Linux

Learn to kill processes in Linux using the kill, pkill, and killall commands. Kill processes using their PID or process name.

Kill process in Linux with kill, pkill and killall

Windows users have a Task Manager where they can monitor running processes and choose 'End Task' to kill off unwanted/hung/less critical processes to save system resources. In the same way, in Linux you can also kill processes and save on your system resource utilization.

In this article we will walk through the steps to kill a process in Linux using the kill, pkill, and killall commands. These three commands kill processes in different manners. To proceed, you should know the concept of PID, i.e. Process ID. It is the numeric value you will use as an argument to the kill commands.

What is PID?

PID is the Process ID; it's the numeric identification of a process in the kernel process table. Each process in Linux is identified by its PID. PID 1 is traditionally the init process in Linux, whereas newer Linux distributions like RHEL 7 have systemd as the PID 1 process. It is the parent of all processes. If any process doesn't have a parent, or if its parent process is terminated abruptly (leaving it orphaned), the PID 1 process takes over that child process.

The next question is how to find a process ID in Linux. It can be obtained using several commands, as shown below:

root@kerneltalks # ps -A 
 PID TTY          TIME CMD
    1 ?        00:00:05 systemd
    2 ?        00:00:00 kthreadd
    3 ?        00:00:00 ksoftirqd/0
    5 ?        00:00:00 kworker/0:0H
    7 ?        00:00:00 migration/0
    8 ?        00:00:00 rcu_bh

root@kerneltalks # ps aux 
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.6 128164  6824 ?        Ss   Aug29   0:05 /usr/lib/systemd/systemd --switched-root --system --deserialize 20
root         2  0.0  0.0      0     0 ?        S    Aug29   0:00 [kthreadd]
root         3  0.0  0.0      0     0 ?        S    Aug29   0:00 [ksoftirqd/0]
root         5  0.0  0.0      0     0 ?        S<   Aug29   0:00 [kworker/0:0H]

root@kerneltalks # pidof systemd
1

With the ps -A command you get a list of all running processes, with their PID in the first column of the output. Grep out your desired process from the output. With the ps aux command you can see more information about processes, with the PID in the second column of the output. Alternatively, you can use the pidof command when you know the exact process name, to get only its PID.

Now, you are ready with PID of the process to be killed. Let’s move on to killing it!

How to kill process in Linux?

There are a few limitations you should consider before killing any PID. They are as below –

  1. You can kill only processes which are owned by your user ID.
  2. You cannot kill system processes.
  3. Only the root user can kill other users' processes.
  4. Only root can kill system processes.

After fulfilling the above criteria, you can move ahead and kill the PID.

Kill process using kill command

The kill command is used to send a specific signal to a specified PID. You need to supply the signal number and the PID to the command. The commonly used signals are:

  • 1 : SIGHUP (hang up)
  • 9 : SIGKILL (kill)
  • 15 : SIGTERM (terminate)

Normally signal 15 (kill's default) is used to terminate a process gracefully, while signal 9 (the famous kill -9 command) is used when the process doesn't respond to 15. The hang-up signal is rarely used. Kill a process using the syntax kill -signal PID, like –

root@kerneltalks # kill -9 8274
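
A common pattern is to try the graceful signal first and fall back to the forceful one. A minimal sketch (the PID 8274 is the same illustrative one as above; kill -l lists all signals your system supports):

root@kerneltalks # kill -15 8274      # ask the process to terminate gracefully (SIGTERM)
root@kerneltalks # kill -9 8274       # force kill if it is still running (SIGKILL)
root@kerneltalks # kill -l            # list all available signals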

Kill process using pkill

If you want to use the process name instead of the PID, then you can use the pkill command. But remember to use the correct process name; even a small typo can lead you to kill off unwanted processes. The syntax is simple: just specify the process name to the command.

root@kerneltalks # pkill myprocess
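
To be safe, you can preview which processes match the name before killing them, using pgrep (which uses the same matching as pkill). A minimal sketch with the same illustrative process name; the output line is only an example:

root@kerneltalks # pgrep -l myprocess     # list matching PIDs with their names
8274 myprocess
root@kerneltalks # pkill myprocess        # kill them once you are sure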

Kill process using killall

With the above two commands, kill and pkill, you kill only the specific process whose PID or name is specified, which can leave related processes hanging around. To handle this situation, you can use the killall command, which kills all processes matching the given name (for example, a daemon together with its worker processes running under the same name).

root@kerneltalks # killall myprocess

Conclusion

As root you can kill any process on the Linux system, including system ones. As a normal user you can kill only processes owned by you. The process ID, i.e. PID, can be obtained using the ps or pidof commands. This PID or the process name can then be used to kill the process using the kill, pkill and killall commands.

How to release the Elastic IP in AWS

Learn how to disassociate an Elastic IP from EC2 and how to release an Elastic IP in AWS, with screenshots. Also, understand how Elastic IPs are billed and what they cost.

How to guide: Release Elastic IP in AWS

In our previous article we saw how to allocate an Elastic IP to your AWS account and how to associate that Elastic IP with an EC2 instance. In this article we will walk through the steps to disassociate an Elastic IP from an EC2 instance and then release the Elastic IP from your AWS account.

Before we run through the steps, let's look at Elastic IP billing information, which will help you understand why it is important to release unused Elastic IPs back to AWS.

Elastic IP billing

At most you can allocate 5 Elastic IPs per AWS account per region. If you require more, you need to reach out to the AWS team to raise this limit through a request form. This limit is set by Amazon since IPv4 addresses are a scarce resource.

Coming to the billing part, you will be billed for each Elastic IP which is allocated to your AWS account but not being used anywhere. This is to encourage efficient use of such a scarce resource. You will not be billed for an Elastic IP if the below conditions are met –

  1. The Elastic IP allocated to you is associated with an EC2 instance
  2. That EC2 instance has only one Elastic IP associated
  3. That EC2 instance is in the running state.


An Elastic IP is allocated to you only once you request it; there is no default Elastic IP allocated to your AWS account. Hence Elastic IPs are billed under the on-demand pricing model. Here are the points to consider:

  1. Elastic IPs are billed under the on-demand pricing model.
  2. They are billed per hour on a pro-rata basis.
  3. The billing rate varies per region. Detailed rates are available on the pricing page under the 'Elastic IP Addresses' section.

For example, see the below rates of Elastic IPs for the US East (Ohio) region, depending on your type of use –

Elastic IP billing information. Information credit : AWS website.

Now you know how your Elastic IP usage gets billed in AWS. Without further delay, let's walk through the process to release an Elastic IP from the AWS account.

How to remove elastic IP from EC2 instance

In AWS terms, this is the process of disassociating an Elastic IP from an EC2 instance. The process is pretty simple. Log in to the EC2 console and navigate to Elastic IPs. You will see a list of all Elastic IPs available in your account. Select the one to disassociate and choose disassociate from the Actions menu. You will be shown a pop-up like the one below:

Disassociate elastic IP from EC2

Confirm the disassociation by clicking the disassociate address button. Your Elastic IP will be removed from the EC2 instance, and the instance will automatically be assigned a public IP from AWS. You can confirm the Elastic IP field is now empty in the EC2 instance details.

Now you have successfully disassociated the Elastic IP from the EC2 instance. But it is still allocated to your AWS account, and you are getting billed even though you are not using it anywhere. You need to release it back to the AWS pool, if you do not plan to use it for any other purpose, so that you won't be billed for it.

How to release elastic IP from AWS account

Releasing an Elastic IP means freeing it from your account and making it available back in the AWS pool for someone else to use. To release it, log in to the EC2 console and navigate to the Elastic IPs page. From the presented list, select the Elastic IP you want to release and choose release address from the Actions menu. You will be prompted with a pop-up like the one below:

Release elastic IP from AWS account

Confirm your action by hitting the release button. Your Elastic IP will be released back to the AWS pool, and you won't see it in your account anymore!
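
If you prefer the command line, the AWS CLI offers equivalent operations. A minimal sketch (the association and allocation IDs below are placeholders; look up the real ones first with describe-addresses):

root@kerneltalks # aws ec2 describe-addresses
root@kerneltalks # aws ec2 disassociate-address --association-id eipassoc-0example1234
root@kerneltalks # aws ec2 release-address --allocation-id eipalloc-0example1234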

Benefits of cloud computing over the traditional data center

Article listing the benefits of cloud over a traditional datacenter: 7 different aspects of cloud vs on-premise datacenters.

Cloud vs traditional datacenter

In the past few years, the cloud industry has been gaining good momentum. Many companies are moving their workloads to the cloud from traditional data centers. This trend is increasing day by day due to the list of advantages the cloud offers over traditional data centers. In this article we will walk through these advantages of cloud over a traditional datacenter. This is also one of the basic cloud interview questions, where you are asked to list the pros and cons of the cloud. Without much delay, let's jump into the cloud vs data center discussion.

  1. Low maintenance cost. For the customer, the maintenance cost is almost nil. Since you are using hardware from the cloud provider's datacenter, you don't need to maintain hardware at all. You save on geographical location costs, hardware purchases, upgrades, datacenter staff, power, facility management costs, etc. All this is borne by the cloud provider. For cloud providers this cost is also low, since they operate multiple clients from the same facility, and hence the cost is lower compared to what one would bear if all those clients operated their own datacenters. This is very environmentally friendly too, since you are reducing the need for multiple facilities down to fewer ones.
  2. Cheap resources. Cloud providers have a pool of resources from which you get assigned your share. This means cloud providers maintain and operate a large volume of resources and distribute smaller chunks to customers. This obviously reduces the cost of maintenance and operation for cloud providers and in turn provides low-cost, cheap resources to customers.
  3. Scale as per your need. In a traditional data center you have to study and plan your capacity well in advance to finalize your hardware purchase. Once purchased, you are stuck with that limited capacity, and you cannot accommodate growth if the capacity requirement exceeds the limit before your estimated time. You then go through planning and purchasing new hardware again, which is a time-consuming process. In the cloud you can scale your computing capacity up and down almost instantly (or in far less time than the traditional purchase process), and you don't even need to worry about and follow up on approvals, purchasing, billing, etc.
  4. Pay as you use. In traditional data centers, whenever you buy hardware you make an upfront investment even if you don't use the full capacity of the purchased hardware. In the cloud, you are billed as per your use, so your expenditure on computing stays in line with your actual usage.
  5. The latest technology at your service. Technology changes very fast these days. Hardware you buy today becomes obsolete in a couple of months. And if you are making huge investments in hardware, the company expects to use it for at least a couple of years. So you are stuck with hardware you bought at a hefty price tag, now way behind its latest counterparts. The cloud always provides you with the latest tech, and you don't need to worry about upgrades or maintenance. All these hardware aspects are the headache of the cloud providers, and they take care of them in the background. As a customer, all the latest technology is at your service without any hassle.
  6. Redundancy. Redundancy in a traditional datacenter means investing in building a facility almost identical to the primary one. It also involves the cost of the infrastructure which connects them. On-site redundancy for power, network, etc. is also expensive and maintenance-prone. When you opt for the cloud, everything said previously simply vanishes from your plate. The cloud at the single-entity level, like a single server, storage disk, etc., is already redundant. Nothing needs to be done and no extra cost is billed to you for it. If your infra design requires it, you can use the ready-made redundancy services provided by the cloud, and you are protected from failures.
  7. Accessibility. With an on-premise datacenter, you have very limited connectivity, mostly local. If you want access to internal entities, you need to maintain your own VPN. Cloud services have a portal with access to almost all of their services over the web, which can be reached from anywhere with internet access. Also, if you want to opt in for a VPN, you get a pre-configured secure VPN from your cloud provider. There is no need to design and maintain a VPN yourself!

Let us know your views on cloud vs on-premise datacenters in the comments section below.