12 examples of ls command in Linux for daily use

A beginner's guide to the ls command. Learn the ls command with 12 examples that you can use in your daily Linux tasks.

Learn ‘ls’ command

The very first command anyone types after logging into the terminal is either ls or hostname! Yes, ls is the first command all beginners learn and use when they are introduced to the Linux world. It is one of the most used, rather smashed, commands in the terminal 🙂

ls stands for list. This command lists files and directories in Linux or Unix. There are many switches available to fit your need. We will walk you through 12 different examples of the ls command which can be useful in your daily routine.

Normal ls command without any switch

Without any switch, the ls command shows all file and directory names on a single line, separated by spaces.

root@kerneltalks # ls
directory1  directory2  testfile1  testfile2

Long listing using ls -l

For more detailed information, use a long listing, i.e. the -l switch with the ls command.

root@kerneltalks # ls -l
total 16
drwxr-xr-x 2 root admin 4096 Sep 14 18:07 directory1
drwxr-xr-x 2 root admin 4096 Sep 14 18:07 directory2
-rw-r--r-- 1 root admin    8 Sep 14 18:08 testfile1
-rw-r--r-- 1 root admin   51 Sep 14 18:08 testfile2

Information is displayed column-wise, where –

  • The first column is the file/directory permission details
  • The second column is the hard link count
  • The third column is the owner of the file/directory
  • The fourth column is the group of the file/directory
  • The fifth column is the size in bytes
  • The sixth and seventh columns are the date of last modification
  • The eighth column is the time of last modification of the file/directory
  • The last column is the file or directory name.
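The same fields can also be printed explicitly with stat, which names each field instead of leaving you to count columns. A small sketch, assuming GNU coreutils (the -c format specifiers are GNU-specific); the scratch file is created only for demonstration:

```shell
# Create a scratch file and print the long-listing fields by name
tmp=$(mktemp -d)
touch "$tmp/testfile1"
stat -c 'perms=%A links=%h owner=%U group=%G size=%s name=%n' "$tmp/testfile1"
rm -rf "$tmp"
```

For a fresh empty file this reports links=1 and size=0; the permission string depends on your umask.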

Listing hidden files using ls

The normal ls command won't display hidden files. Hidden files in Linux are files whose names start with a dot (.)

These files can be listed using -a switch.

root@kerneltalks # ls -al
total 32
drwxr-xr-x   4 root admin  4096 Sep 14 18:08 .
drwxrwxrwt. 11 root root    12288 Sep 14 18:07 ..
drwxr-xr-x   2 root admin  4096 Sep 14 18:07 directory1
drwxr-xr-x   2 root admin  4096 Sep 14 18:07 directory2
-rw-r--r--   1 root admin    15 Sep 14 18:08 .account_detail
-rw-r--r--   1 root admin     8 Sep 14 18:08 testfile1
-rw-r--r--   1 root admin    51 Sep 14 18:08 testfile2

You can see in the above output that the hidden file .account_detail (name starts with .) is listed.
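A related switch worth knowing: GNU ls also offers -A ("almost all"), which shows hidden files but skips the . and .. entries that -a includes. A sketch using a scratch directory:

```shell
# -a includes . and .. ; -A lists hidden files without them
tmp=$(mktemp -d)
touch "$tmp/.account_detail" "$tmp/testfile1"
ls -A "$tmp"
rm -rf "$tmp"
```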

Listing human readable file sizes

In the long listing we have seen that file size is displayed in bytes. This is not always a user-friendly format, since you have to convert large byte counts into KB, MB, or GB yourself. An easy human-readable format is available with the -h switch. With it, file sizes are displayed with unit suffixes.

root@kerneltalks # ls -hl
total 16K
drwxr-xr-x 2 root admin 4.0K Sep 14 18:07 directory1
drwxr-xr-x 2 root admin 4.0K Sep 14 18:07 directory2
-rw-r--r-- 1 root admin    8 Sep 14 18:08 testfile1
-rw-r--r-- 1 root admin   51 Sep 14 18:08 testfile2

Here the size is displayed as 4.0K for directories, i.e. 4 kilobytes.

Listing inode numbers of files

Inodes are the numbers assigned to each file/directory in a Linux system. One can view them using the -i switch.

root@kerneltalks # ls -i
18 directory1  30 directory2  32 testfile1  43 testfile2

Numbers 18, 30, 32, and 43 are the respective inodes of the files and directories to their right.
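Inode numbers come in handy when a file has an awkward name (spaces, leading dashes, unprintable characters): you can locate it with find -inum and then act on it. A sketch with a scratch file:

```shell
# Find a file by inode number instead of by name
tmp=$(mktemp -d)
touch "$tmp/testfile1"
inode=$(ls -i "$tmp/testfile1" | awk '{print $1}')
find "$tmp" -inum "$inode"
rm -rf "$tmp"
```

find prints the path of the matching file; you could add -delete or -exec to act on it.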

Sorting files by last modification time

This is one of the most widely used forms of the ls command. The switches used are -l (long listing), -r (reverse sort), and -t (sort by modification time). Due to the reverse sort, the most recently updated file is shown at the bottom of the output.

root@kerneltalks # ls -lrt
total 16
drwxr-xr-x 2 root admin 4096 Sep 14 18:07 directory1
drwxr-xr-x 2 root admin 4096 Sep 14 18:07 directory2
-rw-r--r-- 1 root admin    8 Sep 14 18:08 testfile1
-rw-r--r-- 1 root admin   51 Sep 14 18:08 testfile2
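A common use of time sorting is grabbing just the newest entry, e.g. the latest log file. Without -r the newest entry comes first, so combine -t with head. A sketch using a scratch directory:

```shell
# Print only the most recently modified entry
tmp=$(mktemp -d)
touch "$tmp/old"
sleep 1              # ensure distinct modification times
touch "$tmp/new"
ls -t "$tmp" | head -n 1
rm -rf "$tmp"
```

This prints new, the most recently touched file.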

Listing file owners with their IDs

A normal long listing shows the owner and group by their names. To list the owner and group as UID and GID, you can use the -n switch.

root@kerneltalks # ls -n
total 16
drwxr-xr-x 2 0 512 4096 Sep 14 18:07 directory1
drwxr-xr-x 2 0 512 4096 Sep 14 18:07 directory2
-rw-r--r-- 1 0 512    8 Sep 14 18:08 testfile1
-rw-r--r-- 1 0 512   51 Sep 14 18:08 testfile2

Listing directories by appending / to their names

The ls command without arguments lists all file and directory names. But without a long listing (in which directory permission strings start with d) you can't identify directories. So here is a tip: use the -p switch. It appends / to all directory names so you can identify them easily.

root@kerneltalks # ls -p
directory1/  directory2/  testfile1  testfile2

You can see both directories have / appended to their names.

Listing directories recursively

The long listing or normal ls command shows you only the entries residing in the current directory (from where you are running the command). To view files inside those directories you need to run the ls command recursively, i.e. using the -R switch.

root@kerneltalks # ls -R
.:
directory1  directory2  testfile1  testfile2

./directory1:
file1  file2

./directory2:
file3  file4

In the output you can see –

  • The first part, ., means the current directory, followed by a list of files/directories within it.
  • The second part says ./directory1, followed by a list of files/directories within it.
  • The third part lists files/directories within ./directory2.
  • So it listed all the contents of both directories residing in our present working directory.
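If you want recursive output as plain paths (one per line, easier to pipe into other commands) rather than ls's per-directory sections, find gives the same coverage. A sketch that rebuilds a tree like the one above in a temp directory:

```shell
# ls -R groups output by directory; find prints one full path per line
tmp=$(mktemp -d)
mkdir -p "$tmp/directory1" "$tmp/directory2"
touch "$tmp/directory1/file1" "$tmp/directory2/file3"
find "$tmp" -mindepth 1
rm -rf "$tmp"
```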

Sorting files by file size

To sort the listing by size, use the -S switch. It sorts in descending order, i.e. the largest files are at the top.

root@kerneltalks # ls -lS
total 16
drwxr-xr-x 2 root admin 4096 Sep 14 18:16 directory1
drwxr-xr-x 2 root admin 4096 Sep 14 18:16 directory2
-rw-r--r-- 1 root admin   51 Sep 14 18:08 testfile2
-rw-r--r-- 1 root admin    8 Sep 14 18:08 testfile1

Listing only owners of files

Want to list only the owners of files? Use the -o switch. The group won't be listed in the output.

root@kerneltalks # ls -o
total 16
drwxr-xr-x 2 root 4096 Sep 14 18:16 directory1
drwxr-xr-x 2 root 4096 Sep 14 18:16 directory2
-rw-r--r-- 1 root    8 Sep 14 18:08 testfile1
-rw-r--r-- 1 root   51 Sep 14 18:08 testfile2

Listing only groups of files

The opposite of the above: with the -g switch the group is listed and the owner is not.

root@kerneltalks # ls -g
total 16
drwxr-xr-x 2 admin 4096 Sep 14 18:16 directory1
drwxr-xr-x 2 admin 4096 Sep 14 18:16 directory2
-rw-r--r-- 1 admin    8 Sep 14 18:08 testfile1
-rw-r--r-- 1 admin   51 Sep 14 18:08 testfile2

This is it! 12 different examples of the ls command which can be helpful in your daily Linux learning. Do subscribe to our blog to get the latest post notifications about Linux how-to guides. Let us know in the comments section below if you want us to cover more beginner topics.

Linux server build template (document)

Linux server build template which will help you design your own build sheet or build book. This template can be used as a baseline and has all the necessary details regarding the new server build.

Linux server build sheet

Technical work needs to be documented well so that future sysadmins can refer to it, making their life easy! One of the important documents every Linux sysadmin has to maintain is the server build book or server configuration sheet.

Although there are many automation tools available in the market which serve the purpose of pulling configuration from the server and presenting it to you in a formatted manner, they come into the picture once the server is up and running. What if you want to fill in this server build document before the server is made active? Or you need to draft a server build sheet in which the requester fills in details so that you can deploy the server accordingly. In such cases you cannot rely on automation tools.

Before you read further, please note that this is a pure process-related document being discussed, not a technical one.

It will be a manual method of collecting information from respective stakeholders; using all that information, you can build your server. Information can be collected from various stakeholders like those below –

  • Business authorities (project details, billing context, server tier classification)
  • Datacenter team (hardware details)
  • Storage team (Storage connectivity and capacity)
  • Network team (IP, connectivity matrix)

For building or deploying one Linux server, all the above-said stakeholders need to be contacted to get the relevant information which will help deploy the server. I have created one sample template for a Linux server build which you may refer to (link at the end of the article).

Business authorities can help you identify the server's tier classification. It tells you how critical the server is. The most critical servers get high-performance resources (like SSD storage) and the latest tech, while less critical ones receive less expensive resources (like SATA disk storage). These tiers are mostly named platinum (most critical, high valued), gold, silver, and copper (least critical). The terminology may change between organizations, but this scheme is widely used across the IT industry. Business authorities also help you identify the server's project so that you can use this information in your inventory sheet, naming conventions for server infra, etc.

The datacenter team helps you identify the hardware on which you can build your server. If it's a physical server, then its DC location, along with rack number, chassis number, blade number, etc., helps you locate the server physically. Also, iLO or management port connectivity can be arranged with the help of the datacenter team. This is crucial for new servers when they don't have anything running on them and you have to start with a fresh install. If it's a virtual server based on virtualization like VMware, the datacenter team can help you identify proper hosts within the virtualized infra to host your server. Other details like VMware datacenter, cluster, ESXi host, and datastore can be obtained with the help of the DC team.

The storage team can help you with LUN provisioning according to your requirements. Storage can be allocated according to the server tier. If it's a new physical server, then you also need physical connectivity in place, with the help of the storage and DC teams.

The network team can provide you with free IP, subnet mask, and gateways which are to be configured on the server. Any new VLAN creation request can be taken up with the network team for resolution.

I have incorporated all these points into a sheet.

Download Linux server build template by kerneltalks.com

This build sheet is helpful to gather all requirements before you start to build a Linux server in your physical or virtual infra!

Difference between /etc/passwd and /etc/shadow

Learn about the difference between the /etc/passwd and /etc/shadow files in the Linux system. 9 points comparing these two files.

/etc/passwd vs /etc/shadow

It's one of the Linux beginner interview questions: explain the difference between the /etc/passwd and /etc/shadow files, or compare the passwd and shadow files in Linux. Both files serve different purposes on the system, so it's not completely logical to compare them, but if you still want to, we have this article explaining /etc/passwd vs /etc/shadow.

Before reading ahead, if you are not sure about these files, read our articles explaining them field by field.

Difference between /etc/passwd and /etc/shadow

  1. File formats are the same, i.e. fields separated by colons and a new row for each user. But the number of fields is different: the passwd file has 7 fields whereas the shadow file has 8 fields.
  2. All fields are different except for the first one. It's the same for both files and is the username.
  3. The /etc/passwd file aims at user account details while /etc/shadow aims at the user's password details.
  4. The passwd file is world-readable. The shadow file can only be read by the root account.
  5. The user's encrypted password can only be stored in the /etc/shadow file.
  6. The pwconv command is used to generate the shadow file from the passwd file if it doesn't exist.
  7. The passwd file exists by default when the system is installed.
  8. passwd file information is mostly static (home directory, shell, UID, GID, which hardly change)
  9. shadow file information changes frequently since it's related to passwords, and user passwords change frequently (if not, password policies are loosely defined!)
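Point 1 is easy to verify with awk, which can count colon-separated fields. A sketch on a sample passwd-style row (the username and paths are made up for illustration):

```shell
# A passwd row has 7 colon-separated fields
line='testuser:x:1001:1001:Test User:/home/testuser:/bin/bash'
echo "$line" | awk -F: '{print NF}'    # prints 7
```

You can run the same one-liner over your real /etc/passwd; every row should report 7.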

Understanding /etc/shadow file

Article to understand fields, formats of /etc/shadow file. Learn each field in detail and how it can be modified.

/etc/shadow file in Linux

We have written about the /etc/passwd file in the past. In this article we will see the /etc/shadow file: its format, its content, and its importance to the Linux system. The /etc/shadow file (henceforth referred to as the shadow file in this article) is one of the crucial files on the system and the counterpart of the /etc/passwd file.

Unlike the passwd file, the shadow file is not world-readable. It can be read by the root user only. Shadow file permissions are 400, i.e. -r--------, and ownership is root:root. This means it can only be read, and only by the root user. The reason for such security is the password-related information stored in this file.

Typical /etc/shadow file looks like :

# cat /etc/shadow
root:$1$UFnkhP.mzcMyajdD9OEY1P80:17413:0:99999:7:::
bin:*:15069:0:99999:7:::
daemon:*:15069:0:99999:7:::
adm:*:15069:0:99999:7:::
testuser:$1$FrWa$ZCMQ5zpEG61e/wI45N8Zw.:17413:0:33:7:::

Since it's a normal text file, commands like cat and more will work on it without any issue.

/etc/shadow file has different fields separated by a colon. There are a total of 8 fields in the shadow file. They are –

  1. Username
  2. Encrypted password
  3. Last password change
  4. Min days
  5. Max days
  6. Warn days
  7. Inactive days
  8. Expiry

Let's walk through all these fields one by one.

Username

The username is the user's login name. It's created on the system whenever the user is created using the useradd command.

Encrypted password

It's the user's password in encrypted (hashed) format.

Last password change

It's the number of days since 1 Jan 1970 on which the password was last changed. For example, in the above sample, testuser's last password change value is 17413. Count 17413 days from 1 Jan 1970 and you arrive at 4 Sept 2017; that means testuser last changed his password on 4 Sept 2017.

You can easily add/subtract dates using scripts or online tools.
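On the command line itself, GNU date can do this conversion directly by adding the day count to the epoch. A sketch using the sample value from above:

```shell
# Convert the shadow file's 'days since epoch' value to a calendar date
days=17413
date -u -d "1970-01-01 +${days} days" +%Y-%m-%d    # prints 2017-09-04
```

The -u flag avoids timezone surprises; note that the -d option is GNU-specific and won't work with BSD date.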

Min days

It's the minimum number of days between two password changes for that account. That means the user cannot change his password again unless the min days have passed since his last password change. This field can be tweaked using the chage command. It is generally set to 7 days, but can be 1 too, depending on your organization's security norms.

Max days

It's the maximum number of days for which the user's password is valid. Once this period is exhausted, the user is forced to change his/her password. This value can be altered using the chage command. It is generally set to 30 days, but the value differs as per your security demands.

Warn days

It's the number of days before password expiry during which the user will see a warning about password expiration after login. Generally it is set to 7, but it's up to you or your organization to decide this value per your security policies.

Inactive days

The number of days after password expiry after which the account is disabled. This means if the user doesn't log in to the system after his/her password expires (and so doesn't change the password), then after this many days the account is disabled. Once the account is disabled, the system admin needs to unlock it.

Expiry

It's the date on which the account is disabled, expressed as the number of days since 1 Jan 1970. We already saw the calculation in the 'last password change' section.

Except for the first two fields, all the remaining fields relate to password aging/password policies.

Beginners guide to killing processes in Linux

Learn to kill processes in Linux using the kill, pkill, and killall commands. Kill processes using PID or process name.

Kill process in Linux with kill, pkill and killall

Windows users have a task manager where they can monitor running processes and choose to ‘End Task‘ to kill off unwanted/hung/less critical processes to save system resources. Same way, in Linux as well you can kill processes and save on your system resource utilization.

In this article we will walk through how to kill a process in Linux using the kill, pkill, and killall commands. These three commands kill processes in different manners. To proceed, you should know the concept of the PID, i.e. Process ID. It is the numeric value you will use as an argument to the kill commands.

What is PID?

PID is the Process ID; it's the numeric identification of a process in the kernel process table. Each process in Linux is identified by a PID. PID 1 is always the init process in Linux, whereas newer Linux distributions like RHEL7 have systemd as the PID 1 process. It is the parent of all processes. If any process loses its parent, i.e. its parent process is terminated abruptly, making it an orphan, the PID 1 process adopts that child process.

The next question is how to find a process ID in Linux. It can be obtained using several commands, shown below:

root@kerneltalks # ps -A 
 PID TTY          TIME CMD
    1 ?        00:00:05 systemd
    2 ?        00:00:00 kthreadd
    3 ?        00:00:00 ksoftirqd/0
    5 ?        00:00:00 kworker/0:0H
    7 ?        00:00:00 migration/0
    8 ?        00:00:00 rcu_bh

root@kerneltalks # ps aux 
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.6 128164  6824 ?        Ss   Aug29   0:05 /usr/lib/systemd/systemd --switched-root --system --deserialize 20
root         2  0.0  0.0      0     0 ?        S    Aug29   0:00 [kthreadd]
root         3  0.0  0.0      0     0 ?        S    Aug29   0:00 [ksoftirqd/0]
root         5  0.0  0.0      0     0 ?        S<   Aug29   0:00 [kworker/0:0H]

root@kerneltalks # pidof systemd
1

With the ps -A command you get a list of all running processes, with their PIDs in the first column of the output. Grep out your desired process from the output. With the ps aux command you can see more information about processes, with the PID in the second column of the output. Alternatively, when you know the exact process name, you can use the pidof command to get only its PID.

Now you have the PID of the process to be killed. Let's move on to killing it!

How to kill process in Linux?

There are a few limitations you should consider before killing any PID. They are as below –

  1. You can kill only processes owned by your own user ID.
  2. You cannot kill system processes.
  3. Only the root user can kill other users' processes.
  4. Only root can kill system processes.

Once all the above criteria are fulfilled, you can move ahead and kill the PID.

Kill process using kill command

The kill command is used to send a specific signal to a specified PID. You need to supply the signal number and the PID to the command. The commonly used signals are:

  • 1 : Hang up (SIGHUP)
  • 9 : Kill (SIGKILL)
  • 15 : Terminate (SIGTERM)

Normally signal 15 is sent first, and signal 9 (the famous kill -9 command) is used when 15 doesn't work, since signal 9 cannot be caught or ignored by the process. The hang-up signal is rarely used for killing processes. The syntax is kill -signal PID, like –

root@kerneltalks # kill -9 8274
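Putting it together, here is a small end-to-end sketch: start a dummy background process, terminate it with SIGTERM, and confirm it is gone. The sleep process is just a stand-in for a real workload:

```shell
# Start a disposable process and kill it by PID
sleep 300 &
pid=$!                          # PID of the background job
kill -15 "$pid"                 # polite terminate (SIGTERM)
wait "$pid" 2>/dev/null || true # reap the terminated job
if kill -0 "$pid" 2>/dev/null; then
    echo "still running"
else
    echo "gone"
fi
```

kill -0 sends no signal at all; it only checks whether the PID still exists.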

Kill process using pkill

If you want to use the process name instead of the PID, you can use the pkill command. But remember to use the correct process name; even a small typo can lead you to kill unwanted processes. The syntax is simple: just specify the process name to the command.

root@kerneltalks # pkill myprocess

Kill process using killall

With the above two commands, kill and pkill, you kill processes by PID or by a name pattern. killall, on the other hand, kills all processes that match the given name exactly, which is handy when several instances of the same program are running.

root@kerneltalks # killall myprocess

Conclusion

As root you can kill any process on the Linux system, including system ones. As a normal user you can kill only processes owned by you. The process ID, i.e. PID, can be obtained using the ps or pidof commands. This PID or the process name can then be used to kill the process using the kill, pkill, and killall commands.

How to release the Elastic IP in AWS

Learn how to disassociate an elastic IP from EC2 and how to release an elastic IP in AWS, with screenshots. Also understand how elastic IPs are billed.

How to guide: Release Elastic IP in AWS

In our previous article we saw how to allocate an elastic IP to your AWS account and how to associate that elastic IP with an EC2 instance. In this article we will walk through the steps to disassociate an elastic IP from an EC2 instance and then release the elastic IP from your AWS account.

Before we run through the steps, let's look at elastic IP billing, which will help you judge why it is important to release unused elastic IPs back to AWS.

Elastic IP billing

At most you can allocate 5 elastic IPs per AWS account per region. If you require more, you need to reach out to the AWS team to raise this limit through a form. This limit is set by Amazon since IPv4 addresses are a scarce resource.

Coming to the billing part, you will be billed for each elastic IP which is allocated to your account but not being used anywhere. This is to encourage efficient use of such a scarce resource. You will not be billed for an elastic IP if the below conditions are met –

  1. The elastic IP allocated to you is associated with an EC2 instance
  2. That EC2 instance has only one elastic IP associated
  3. That EC2 instance is in a running state.

An elastic IP is allocated to you once you demand it. There is no default elastic IP allocated to your AWS account. Hence elastic IPs are billed under the on-demand pricing model. Here are points to consider:

  1. Elastic IPs are billed under an on-demand pricing model.
  2. They are billed per hour on a pro-rata basis.
  3. The billing rate changes per region. Detailed rates are available on the pricing page under the 'Elastic IP Addresses' section.

For example, see the below rates of elastic IPs for the US East (Ohio) region, depending on your type of use –

Elastic IP billing information. Information credit : AWS website.

Now you know how your elastic IP usage will be billed in AWS. Without further deviation, let's walk through the process to release an elastic IP from the AWS account.

How to remove elastic IP from EC2 instance

In AWS terms, it's the process of disassociating an elastic IP from an EC2 instance. The process is pretty simple. Log in to the EC2 console and navigate to Elastic IPs. You will see a list of all elastic IPs in your account. Choose one and select disassociate from the action menu. You will be shown a pop-up like the one below:

Disassociate elastic IP from EC2

Confirm the disassociation by clicking the disassociate address button. Your elastic IP will be removed from the EC2 instance, and the instance will automatically be assigned a public IP by AWS. You can confirm the elastic IP field is now empty in the EC2 instance details.

Now you have successfully disassociated the elastic IP from the EC2 instance. But it is still allocated to your AWS account, and you are getting billed since you are not using it anywhere. You need to release it back to the AWS pool, if you do not plan to use it for any other purpose, so that you won't be billed for it.

How to release elastic IP from AWS account

Releasing an elastic IP means freeing it from your account and making it available back in the AWS pool for someone else to use. To release it, log in to the EC2 console and navigate to the Elastic IPs page. From the presented list, select the elastic IP you want to release and choose release address from the actions menu. You will be prompted with a pop-up like the one below:

Release elastic IP from AWS account

Confirm your action by hitting the release button. Your elastic IP will be released back to the AWS pool, and you won't be able to see it in your account anymore!
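The same two steps can also be done from the AWS CLI with the disassociate-address and release-address subcommands. This is a hedged sketch: the association and allocation IDs below are placeholders you would read from the describe-addresses output for your own account.

```shell
# Look up your elastic IPs and their association/allocation IDs
aws ec2 describe-addresses

# Detach the elastic IP from its instance (ID is a placeholder)
aws ec2 disassociate-address --association-id eipassoc-0123456789abcdef0

# Release the now-unused address back to the AWS pool (ID is a placeholder)
aws ec2 release-address --allocation-id eipalloc-0123456789abcdef0
```

Running these requires configured AWS credentials; they act on your real account, so double-check the IDs first.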

Benefits of cloud computing over the traditional data center

Article listing benefits of cloud over traditional datacenter. 7 different aspects of cloud vs on-premise datacenter.

Cloud vs traditional datacenter

In the past few years, the cloud industry has been gaining good momentum. Many companies are moving their workloads to the cloud from traditional datacenters. This trend is growing day by day due to the list of advantages the cloud offers over the traditional datacenter. In this article we will walk through these advantages. This is also one of the basic cloud interview questions, where you are asked to list the pros and cons of the cloud. Without much delay, let's jump into the cloud vs datacenter discussion.

  1. Low maintenance cost. For a customer, the maintenance cost is almost nil. Since you are using hardware in the cloud provider's datacenter, you don't need to maintain hardware at all. You save the cost of the geographical location, hardware purchases, upgrades, datacenter staff, power, facility management, etc. All this is borne by the cloud provider. For cloud providers, too, this cost is low, since they operate multiple clients from the same facility; the cost is lower than when all those clients operate from separate datacenters. This is also environment-friendly, since it reduces the need for multiple facilities.
  2. Cheap resources. Cloud providers have a pool of resources from which you get your share. This means cloud providers maintain and operate a large volume of resources and distribute smaller chunks to customers. This obviously reduces the cost of maintenance and operation for cloud providers, and in turn provides low-cost, cheap resources to customers.
  3. Scale as per your need. In a traditional datacenter you have to study and plan your capacity well in advance to finalize your hardware purchase. Once purchased, you are stuck with that limited capacity, and you cannot accommodate growth if the capacity requirement exceeds the limit before your estimated time. It then takes another round of planning and purchasing new hardware, which is a time-consuming process. In the cloud you can scale your computing capacity up and down almost instantly (or at least far faster than the traditional purchase process), without chasing approvals, purchases, billing, etc.
  4. Pay as you use. In traditional datacenters, whenever you buy hardware you make an upfront investment, even if you don't use its full capacity. In the cloud, you are billed per your use, so your expenditure on computing matches your actual use.
  5. The latest technology at your service. Technology changes very fast these days. Hardware you buy today becomes obsolete in a couple of months. And if you make a huge investment in hardware, the company expects to use it for at least a couple of years. So you are stuck with hardware you bought at a nice price tag, now way behind its latest counterparts. The cloud always provides you the latest tech, and you don't need to worry about upgrades or maintenance. All these hardware aspects are the cloud provider's headache, handled in the background. As a customer, the latest technology is at your service without any hassle.
  6. Redundancy. Redundancy in a traditional datacenter means investing in an almost identical facility to the primary one, along with the cost of the infrastructure connecting them. On-site redundancy for power, network, etc. is also expensive and maintenance-prone. When you opt for the cloud, everything said previously just vanishes from your plate. The cloud is already redundant at the single-entity level, like a single server or storage disk. Nothing needs to be done, and no extra cost is billed to you for it. If your infra design requires it, you can use the ready-made redundancy services provided by the cloud, and you are all set against failures.
  7. Accessibility. With an on-premise datacenter you have very limited, mostly local, connectivity. If you want access to internal entities, you need to maintain your own VPN. Cloud services have a portal with access to almost all of their services over the web; it can be accessed from anywhere with internet. And if you want to opt in for a VPN, you get a pre-configured secure VPN from your cloud provider. No need to design and maintain a VPN!

Let us know your views on cloud vs on-premise datacenters in the comments section below.

Difference between elastic IP and public IP

Learn what elastic IP and public IP mean in AWS. A list of all the differences between elastic IP and public IP in AWS.

Elastic IP vs Public IP in AWS

I was having a conversation about AWS with one of my friends, and he came up with the question: what is the difference between an elastic IP and a public IP? I explained it to him and thought, why not draft a post about it! So in this article we will see the difference between elastic IP and public IP in AWS. This can be a cloud interview question, so without much delay let's get into the elastic IP vs public IP battle.

What is elastic IP in AWS?

First things first, let's get the basics clear. What is an elastic IP? It is an IPv4 address designed for dynamic cloud computing and reachable over the internet. As the name suggests, it's a flexible IP that can be rapidly remapped from one EC2 instance to another when the currently associated instance fails. This way the end user or application continues to talk to the same IP even if the instance behind it fails. Elastic IPs are like static public IPs allocated to your AWS account.

What is Public IP in AWS?

A public IP is an IPv4 address which is reachable over the internet. Remember, the switching flexibility of the elastic IP is not available for this IP. Amazon also assigns an external/public DNS name (shown in the screenshot) to instances that receive a public IP. The public IP of the instance is mapped to the primary private IP of that instance via NAT (Network Address Translation) by default.

Refer below screenshot from the EC2 console of AWS and observe where you can check your elastic IP, public IP, and public DNS name.

Check elastic IP, public IP and public DNS in EC2 AWS console

Difference between elastic IP and Public IP

Now let’s look at the difference between these two IP types.

  1. Whenever a new EC2 instance spins up, it's assigned a public IP by default. An elastic IP is not assigned by default.
  2. Elastic IPs are assigned to AWS accounts, and you can attach them to instances. Public IPs are assigned to instances directly.
  3. When an instance is stopped and started again, the public IP changes. But if the instance is assigned an elastic IP, it remains the same even if the instance is stopped and started again.
  4. If an elastic IP is allocated to your account and not in use, then you will be charged for it on an hourly basis.
  5. A public IP is released once your instance is stopped, so there is no question of being charged for not using it.
  6. You won't be able to re-use the same public IP since it's allocated from the free IP pool. You can always re-use and re-attach an elastic IP to other instances once it is released from the current instance.
  7. You cannot manually attach or detach a public IP from an instance; it's auto-allocated from the pool. An elastic IP can be manually attached to and detached from an instance.
  8. You can have a maximum of 5 elastic IPs per account per region. But you can have as many public IPs as EC2 instances you spin up.
  9. You can have either of them for an instance. If you assign an elastic IP to an instance, then its currently assigned public IP is released to the free pool.

Datacenter presence of top Cloud providers

An overview of the global data center presence of top Cloud companies like AWS, Google Cloud, Azure, etc. This article lists maps and links which will help you understand the data center presence of various firms.

Data center presence of Cloud providers

All companies are moving to the cloud now. Since the cloud model is pay-per-use, it is a best fit for cost-cutting for every company in the IT field. The latest hardware and technologies at your service, without investing in or worrying about the maintenance of hardware and facilities, is the biggest benefit luring customers to cloud companies.

Since the cloud is a hot cake and its demand is growing day by day, we see lots of players in the cloud provider market. It's not easy to offer cloud services: you need huge facilities all over the globe, or rather a network of such facilities across the globe, to build your own cloud. It's not an easy task, not at all! Data centers are the backbone of these clouds. They house the hardware which actually runs cloud services. So it's important to know your cloud provider's data center locations, which helps you decide many aspects of your services before moving in.

Here in this article, we will be listing the top cloud providers' data center locations. This information is available on their respective websites, but we are consolidating it here for your quick reference!

The below data is as of 24th May 2020.

Amazon Web Service (AWS)

AWS global infrastructure consists of 24 regions with 76 availability zones and 216 points of presence. A zone normally consists of one or more data centers. The latest updated details can be found on their page here. The AWS global infrastructure map looks like this:

AWS Global Infrastructure. Image credit : Amazon

Here, the numbers denote the number of availability zones in that region, and the green circles show upcoming regions.

AWS infrastructure is also represented visually on this website, which makes it easy to understand.
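Instead of relying on the map, you can pull the current region and availability-zone list for your own account straight from the AWS CLI. A quick sketch, assuming configured credentials:

```shell
# List all regions enabled for your account
aws ec2 describe-regions --query 'Regions[].RegionName' --output text

# List availability zones in the currently configured region
aws ec2 describe-availability-zones \
    --query 'AvailabilityZones[].ZoneName' --output text
```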

Google Cloud Platform

Another biggie in the cloud service market. Google has a presence in 23 regions with 70 zones and 140 network edge locations. Google also shares its network map along with its data center presence here on this page. Their global locations map looks like the below:

Google Cloud global locations. Image credit : Google

Here, the numbers denote available zones, and the blue ones are upcoming locations.

Microsoft Azure

Azure is the cloud service from Microsoft. Its global infrastructure consists of 60+ regions. The latest and updated info can be grabbed from this page. Microsoft's data center map looks like the one below:

Microsoft Azure global infrastructure map. Image credit : Microsoft

Here, the triangles mark upcoming locations.

IBM Bluemix

IBM offers its cloud platform as a service under the Bluemix brand. The latest updated infrastructure details can be found here on this page.

IBM Bluemix data centers. Image credit : IBM

HP Cloud

I could not find an infrastructure map or consolidated information for the Hewlett Packard (HP) cloud global infrastructure. However, I got some info on the HP UK website here, where HP states they have 28 data centers powering their cloud and are building 5 new data centers in the EMEA region.

vCloud

VMware cloud, also referred to as vCloud, has 11 zones in a total of 4 countries. VMware doesn't define regions; rather, they define zones within countries. VMware released this PDF which has all these details.

Looking at the above maps and numbers, we can jot down the current infrastructure reach of the three top cloud companies in the below table:

                    Amazon Web Services    Google Cloud    Microsoft Azure
Regions             24                     23              60+
Zones               76                     70              not published

Every company uses different terminology. Regions are common to all, but zones are not. A zone refers to a single data center or multiple data centers in a geographical vicinity. Microsoft and IBM don't use the term zones; they only refer to regions. VMware doesn't refer to regions; they only refer to country-wise zones. So the exact data center count for each company is not publicly known, and rather shouldn't be!

Let us know in the comments below if you have more information or links regarding cloud providers' data center details (made publicly available by their owners).