Category Archives: Cloud Services

Preparing for 1Z0-1085-20 Oracle Cloud Infrastructure Foundations 2020 Associate Exam

A quick article on how to prepare for the 1Z0-1085-20 Oracle Cloud Infrastructure Foundations 2020 Associate certification exam.

OCI Foundations Associate 2020

OCI (Oracle Cloud Infrastructure) Foundations 2020 Associate is a foundation-level exam. If you come from another cloud service provider background, it will be a piece of cake for you. Being a foundation-level exam, it tests you on a conceptual basis only.

It's a 60-question multiple-choice exam which you have to complete in 105 minutes. That works out to about 1.75 minutes per question, which is pretty comfortable for a foundation-level exam. Questions and answers are short, so you don't have to invest much time in reading and you can complete the exam well before time.

The exam costs $95 and the passing score is 68%. At the time of writing this article, due to the COVID-19 pandemic, Oracle has made the course material and the exam free of cost for a limited period. The exam is currently available in online proctored mode from Pearson, since most of the exam centers are closed in view of the COVID-19 lockdown.

Journey to AWS Certified Solutions Architect – Professional Certification

Read our other article about preparing for the AWS certification.

Let's walk through the exam topics and the points you need to consider while preparing for this certification. The exam guide from Oracle can be viewed here.

The exam topics are:

  1. Cloud concepts
  2. OCI fundamentals
  3. Core OCI services
  4. Security and compliance
  5. OCI Pricing, billing, and support

Cloud concepts

If you come with a background in any other cloud provider like AWS, then you have this covered already.

  • You should be thorough with the concepts of HA (High Availability) and FT (Fault Tolerance) and the difference between them.
  • What is the cloud?
  • Know the advantages of cloud over the on-prem data center.
  • Get familiar with RTO and RPO concepts.

OCI Fundamentals

This topic covers the basics of OCI, i.e. how it is architected.

  • Understand the concepts of a region, AD (Availability Domain), and FD (Fault Domain)
  • Types of regions – single-AD and multi-AD
  • Learn about compartments and tenancy

Core OCI services

In this topic, you are introduced to the core OCI services at a high level. There is no need for a deep dive into each service; a high-level understanding of each is enough for this exam.

  • OCI Compute service. Learn all the below offerings.
    • Bare metal
    • Dedicated virtual host
    • Virtual Machine
    • Container engine
    • Functions
  • OCI Storage services. Learn the below offerings.
    • Block Volume
    • Local NVMe
    • File Storage service
    • Object Storage service
    • Archive storage
    • Data transfer service
  • OCI Networking services
    • VCN (Virtual Cloud Network)
    • Peering
    • Different kinds of gateways
      • NAT Gateway
      • DRG (Dynamic Routing Gateway)
      • Internet Gateway
    • Load balancers
    • NSG (Network Security Groups) and SL (Security Lists)
  • OCI IAM service
    • Concept of principals and Instance principals
    • Groups and dynamic groups
    • Policy understanding along with syntax and parameters (see the example policy statements after this list)
  • OCI Database service. Study all the below offerings.
    • VM DB systems
    • Bare Metal DB systems
    • RAC
    • Exadata DB systems
    • Autonomous data warehouse
    • Study backup, HA, DR strategies
  • Have a high-level understanding of the below services:
    • OCI Key management service
    • OCI DNS service
    • Data safe
    • OS Management service
    • OCI WAF
    • Audit log service
  • Tagging
    • Usage
    • Types: free-form and defined
    • Tag namespaces
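
To make the policy syntax concrete, here is a sketch of what IAM policy statements look like. The group and compartment names below are made up for illustration; what the exam expects you to recognize is the general shape: Allow <subject> to <verb> <resource-type> in <location>.

Allow group NetworkAdmins to manage virtual-network-family in compartment SandboxCompartment
Allow group Auditors to inspect all-resources in tenancy

The verbs (inspect, read, use, manage) grant increasing levels of access, and the location can be a compartment or the whole tenancy.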

Security and compliance

OCI security consists of different facets. Understand the below areas in the context of security:

  • Cloud shared security model
  • Securing OCI using IAM
  • Data-at-rest and data-in-transit protection
  • Key management service
  • HA and FT using ADs, FDs, or services for data protection

OCI Pricing, billing and support

Understand how pricing and billing work for each service we saw above. Learn the relative pricing among the tiers in the storage services. You don't need to remember any numbers, but you should know it contextually, i.e. which tier is priced higher and which one is lower.

Learn the billing models in OCI:

  • PAYG (Pay as you go)
  • Monthly Flex
  • BYOL

Understand the budget service and how tags and compartments can help in billing and budgeting.

Learn about the SLA structure offered by Oracle. This part is missing from the OCI online training.

That's all you have to know to clear this exam. As I said, if you are coming from AWS or Azure, you can relate almost everything to those cloud services, which makes it easy to learn and understand.

I created my last-day revision notes here (most of them reference AWS for comparison), which might be useful for you as well.

Now, just a little bit of study and go for it! All the best!

Our other certification preparation articles

  1. Preparing for CLF-C01 AWS Certified Cloud Practitioner Exam
  2. Journey to AWS Certified Solutions Architect – Professional Certification SAP-C01

Journey to AWS Certified Solutions Architect – Professional Certification SAP-C01

Let me share my experience of clearing the toughest AWS exam, 'AWS Certified Solutions Architect – Professional'. This article might help you in your journey to get AWS CSA Pro certified.

Getting AWS CSA PRO certified!

In this article, I am going to cover the last few months of the certification journey, which can prove as useful to you as it was for me.

Since I said the last few months, I assume you already have good hands-on experience with AWS services (via a personal account or corporate projects). Obviously, services like Snowball and Direct Connect are rarely available for hands-on practice, but you need to have at least a solid understanding of them.

Let's begin with the non-technical aspect of this journey, which plays a key role in completing your exam.

Your reading skills matter!

Yup, you read it right. The AWS CSA Pro exam has 75 questions which you need to answer in 180 minutes, which drills down to roughly 2.4 minutes per question.

Most of the questions are 3-4 or more statements long, and so are the answer choices. So you need to read through almost a full paragraph of text for a single question. Then you have to understand what is being asked, analyze the answers, and choose the one which best fits the ask. That's a lot of work to accomplish in that little time!

And there are very few questions where some answers are plainly incorrect and can be eliminated at first glance. Most of the answers are valid, but you need to choose the most appropriate one to suit the question's requirement. That's a tedious task which requires more time. Hence I said reading skills do matter.

A tip (might be a crazy one): watching videos with subtitles is an easy way to train your brain to read quickly and grasp the context in parallel!

Obviously, you should make yourself comfortable before you sit for your exam, since it's a 3-hour-long exam and you don't want to get distracted by anything.

Last month revisions using online training courses

In the last month before the exam, you might want to subscribe to online courses specifically structured and targeted to the scope of the exam, with material designed around the core services appearing in the exam.

These courses are a bit on the longer side, like 40-50 hours of video, but you can always use video speed (set to 1.5x generally) to go through the course quickly. I took Linux Academy's course by Adrian Cantrill & A Cloud Guru's course by Scott Pletcher. But do not attempt the practice exams at the end of the course right away. Keep them for your final stage before the exam.

There are free courses available on the AWS training portal as well, which you can check in the meantime. You should know all AWS services at least by name and their use. Services launched in the last year are less likely to appear on the exam, so you can skip them.

Refer AWS documents and videos (Mandatory)

Once you are through the online training courses for the exam, you will have a good idea of what to expect in the exam. These courses are often supplemented with links to AWS whitepapers or re:Invent videos related to the chapter topic. Yup, those are essential things to go through.

AWS whitepapers and FAQ pages give you many minute details that you may have missed and help you determine the validity of your choice for the situation in question. If you are short on time, then at least go through the documents for the services in which you are weak or have little knowledge/experience.

AWS re:Invent videos on YouTube are another content-rich resource that gives you insights and points which you may have missed in your preparation. They are also helpful since many customers come to re:Invent and present their use cases. This will help you map real-world use cases to those in the exam and get solid confirmation about your answers. And you can use YouTube's video speed control to go through videos quickly!

Getting there

All right, now we are at the stage where all knowledge sourcing has been done and it's time to test that knowledge. Now it's time to hit those practice exams from your online courses. Also, be sure to get the practice exams by Jon Bonso. It's a set of 4 practice tests and worth the investment.

Also, you should consider taking AWS's own practice exam. If you are lucky, you might encounter some questions from it in the real exam. And if you hold any previous AWS certification, you should have a coupon code in your AWS learning account which you can use to take this test for free.

You are good to book your exam when you can score 90% and above in all the above practice tests, by understanding why a particular answer is correct and why the others are not. Memorizing answers is not going to help you in any way.

I uploaded my 50-page-long handwritten notes. They might serve you for last-day revision, like flashcards.

View my last day revision notes

Being there!!

And here you are! The big day! On exam day, just keep calm and take the exam. Don't rush into any last-minute reads; it's going to confuse and complicate things. Better to be in a peaceful state, since your mind is most important on exam day; that's what is going to help you read and understand the essay-like questions of the exam in the first go. This way you don't waste your precious time re-reading questions and answers.

  • Always keep in mind that you cannot spend more than 2 minutes on a single question. Time is precious!
  • If you can't figure out the answer quickly, flag the question and move on.
  • If you see answers with the same solution and only one or two keywords different, it is easy to finalize the answer quickly without reading through the whole statements.
  • Scan through the question and capture keywords like highly available, low cost, multi-region, load balancing, etc. This helps you narrow down to particular services.
  • Start building a solution in your mind as you read through the question using the above-said keywords. Then look at the answers and match them against the solution you have in mind. It saves a lot of time!
  • Do not submit the exam until the last second, even if you manage to complete all questions and review the flagged ones before time. Use the remaining time to go through your answers again.

Result?

Your result will be emailed to you within 5 business days. But you can make out whether you passed from the message displayed on the screen once you submit the exam. The message is quite confusing (more so when you have fried your brain for the last 3 hours!) since it only states that you completed the exam (the different messages are discussed here in the forum). But, in a nutshell, if it starts with Congratulations then you made it, and if it starts with Thank You then you need a re-attempt.

Our other certification preparation articles

  1. Preparing for CLF-C01 AWS Certified Cloud Practitioner Exam
  2. Preparing for 1Z0-1085-20 Oracle Cloud Infrastructure Foundations 2020 Associate Exam

How to connect to a Windows EC2 AWS server using RDP?

A quick post explaining how to connect to a Windows EC2 AWS server using RDP.

Connecting AWS Windows server using RDP

Step 1. Get windows password in AWS

Retrieve the administrator password of the Windows EC2 server. Log in to the EC2 dashboard from the AWS console. Select your Windows server EC2 instance and choose 'Get Windows password' from the Actions menu.

Get windows password in AWS

Sometimes you may see the below error complaining that the Windows password is not available –

Windows password not available in AWS

The error is self-explanatory. You are seeing it because –

  1. You need to wait for your server to boot properly and for the module which decrypts the Windows password and shares it with you to start.
  2. You spun up the Windows server from an AMI which does not have the module that can retrieve the administrator password from Windows.

In the first situation, wait for a few minutes and try 'Get Windows password' again. In the second situation, you need to spin up the EC2 instance with a proper AMI.

Also read : How to deploy Linux server in AWS

Once the above issue is resolved, you will see the next window, which asks for the key pair you used while deploying the Windows server. This is AWS authenticating you again before it releases the administrator password to you.

Key pair required to get windows password in aws

Browse to the key pair on your local machine and then hit the 'Decrypt Password' button. If you have supplied the proper key pair, you will be presented with a password window like the one below –

Windows password in aws

That’s it. You have the administrator password for your Windows server on AWS.
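
If you prefer the command line over the console, the same password retrieval can be done with the AWS CLI. This is just a sketch; the instance ID and key path below are placeholders you need to replace with your own values.

# Retrieve and decrypt the Windows administrator password using the launch key pair
aws ec2 get-password-data --instance-id i-0123456789abcdef0 --priv-launch-key /path/to/my-keypair.pem

The decrypted password appears in the PasswordData field of the output.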

Step 2: Log in to the Windows AWS server using RDP

Logged in to Windows AWS server

Open Remote Desktop Connection on your local machine. Punch in the details given to you by AWS in the above window. The Public DNS is the hostname you should use to connect to the server, followed by the username and password.

And you are in!
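
As a small convenience, you can also launch Remote Desktop Connection pre-filled with the server address from a command prompt instead of opening it from the Start menu. The hostname below is a placeholder for your instance's public DNS.

REM Open Remote Desktop Connection pointed at the Windows EC2 server
mstsc /v:ec2-xx-xx-xx-xx.compute-1.amazonaws.com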

How to extend EBS & filesystem online on AWS server

Learn how to extend an EBS volume & filesystem online on an AWS EC2 Linux server. A step-by-step procedure along with screenshots and sample outputs.

Extend EBS & filesystem online on AWS

In this article, we will walk you through the steps to extend an EBS volume attached to an EC2 Linux server and then extend the filesystem on it at the Linux level using LVM. Read here about how to attach an EBS volume to an EC2 server in AWS.

It involves two steps –

  1. Extend the attached EBS volume on AWS console
  2. Extend file system using LVM

Current setup:

We have a 10GB EBS volume attached to the Linux EC2 server. A 9.9GB /testmount is created using this disk at the OS level. We will be increasing it to 15GB.

root@kerneltalks # lsblk
NAME                   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda                   202:0    0   10G  0 disk
└─xvda1                202:1    0   10G  0 part /
xvdf                   202:80   0   10G  0 disk
└─datavg-datalv (dm-0) 253:0    0  9.9G  0 lvm  /testmount

Step 1: How to extend EBS volume attached to the EC2 server in AWS

Log in to the AWS EC2 console and click on Volumes under Elastic Block Store in the left-hand side menu. Then select the volume you want to extend. From the Actions drop-down menu, select Modify Volume. You will see the below screen:

Modify EBS volume in AWS

Change the size (in our case we changed it from 10 to 16GB) and click Modify. Accept the confirmation dialog box by clicking Yes.

Once the modify operation succeeds, refresh the volume list page and confirm that the new size is shown against the volume you just modified. Your EBS volume is now extended successfully at the AWS level. You need to extend it at the OS level next.
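
If you prefer the AWS CLI over the console, the same resize can be done with the commands below. This is a sketch only; the volume ID is a placeholder.

# Extend the EBS volume to 16GB
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 16

# Track the modification state until it reaches 'optimizing' or 'completed'
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0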

Step 2: How to re-scan new size of EBS volume in Linux & extend filesystem online

Since the EBS volume's size has been changed, you need to rescan it in the OS so that the kernel and volume manager (LVM in our case) take note of the new size. In LVM, you can use the pvresize command to rescan this extended EBS volume.

root@kerneltalks # pvresize /dev/xvdf
  Physical volume "/dev/xvdf" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized

After a successful rescan, check whether the new size is identified by the kernel using the lsblk command.

root@kerneltalks # lsblk
NAME                   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda                   202:0    0   10G  0 disk
└─xvda1                202:1    0   10G  0 part /
xvdf                   202:80   0   16G  0 disk
└─datavg-datalv (dm-0) 253:0    0  9.9G  0 lvm  /testmount

You can see in the above output that the xvdf disk is now shown with a size of 16G! So the new disk size is identified. Now proceed to extend the filesystem online using lvextend and resize2fs. Read how to extend the filesystem online for more details.

root@kerneltalks # lvextend -L 15G /dev/datavg/datalv
  Extending logical volume datalv to 15.00 GiB
  Logical volume datalv successfully resized

root@kerneltalks # resize2fs /dev/datavg/datalv
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/datavg/datalv is mounted on /testmount; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/datavg/datalv to 3932160 (4k) blocks.
The filesystem on /dev/datavg/datalv is now 3932160 blocks long.

Check whether the mount point is showing the new, bigger size.

root@kerneltalks # df -Ph /testmount
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/datavg-datalv   15G  153M   14G   2% /testmount

Yup, as we planned, /testmount is now 15G in size, up from the earlier 9.9GB.
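
As a side note, recent LVM versions let you combine the lvextend and resize2fs steps into a single command with the -r (--resizefs) flag, provided fsadm supports your filesystem type. A minimal sketch for the same setup:

# Grow the logical volume to 15GB and resize the filesystem on it in one go
lvextend -r -L 15G /dev/datavg/datalv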

How to add EBS disk on AWS Linux server

An article explaining the step-by-step procedure to add an EBS disk to an AWS Linux server, with screenshots.

EBS disk addition on AWS instance

Nowadays, most servers run on cloud platforms like Amazon Web Services (AWS), Azure, etc. So daily administrative tasks on Linux servers from the AWS console are among the common items in a sysadmin's task list. In this article, we will walk you through one such task, i.e. adding a new disk to an AWS Linux server.

Adding a disk to an EC2 Linux server has two parts. The first part is done on the AWS EC2 console: creating a new volume and attaching it to the EC2 instance. The second part is done on the Linux server: identifying the newly added disk at the kernel level and preparing it for use.

Creating & attaching EBS volume

In this step, we will learn how to create EBS volume in the AWS console and how to attach EBS volume to AWS EC2 instance.

Log in to your EC2 console and navigate to Volumes, which is under the ELASTIC BLOCK STORE menu on the left-hand sidebar. You will be presented with the current list of volumes in your AWS account, like below –

Create volume in AWS

Now click the Create Volume button and you will be presented with the below screen.

Volume creation in AWS

Here you need to choose several parameters for your volume –

  1. Volume Type. This decides your volume's performance and, obviously, billing.
  2. Size. In GB. The minimum and maximum available sizes differ according to your volume type choice.
  3. IOPS. Performance parameter. Changes according to your volume type choice.
  4. Availability Zone. Make sure you select the same AZ as your EC2 instance.
  5. Throughput. Performance parameter. Only available for the ST1 & SC1 volume types.
  6. Snapshot ID. Select a snapshot if you want to create the new volume from an existing snapshot backup. For a fresh, blank volume, leave it empty.
  7. Encryption. Check this if you want the volume to be encrypted. An extra layer of security.
  8. Tags. Add tags for management, reporting, and billing purposes.

After selecting the proper parameters as per your requirements, click the Create Volume button. If everything goes well, you will be presented with a 'Volume created successfully' dialog along with the volume ID of your newly created volume. Click Close and you will be back at the volume list.

Now check the volume ID to identify your newly created volume in this list. It will be marked with an 'Available' state. Select that volume and choose Attach Volume from the Actions menu.

Attach volume menu

Now you will be presented with an instance selection menu. Here you need to choose the instance to which this volume is to be attached. Remember, only instances in the same AZ as the volume are listed here.

Attach volume instance selection

Once you select the instance, you can see the device name that will be reflected at the kernel level in your instance under the Device field. Here it's /dev/sdf.

Check out the note being displayed here. It says : Note: Newer Linux kernels may rename your devices to /dev/xvdf through /dev/xvdp internally, even when the device name entered here (and shown in the details) is /dev/sdf through /dev/sdp.

In other words, newer Linux kernels may expose your device as /dev/xvdf rather than /dev/sdf. This means this volume will be either /dev/sdf (on an older kernel) or /dev/xvdf (on a newer kernel).

That's it. Once attached, you can see the volume state change from 'Available' to 'in-use'.
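
For reference, the same create-and-attach flow can be scripted with the AWS CLI. This is only a sketch; the availability zone, volume ID, and instance ID below are placeholders to replace with your own values.

# Create a fresh 10GB gp2 volume in the same AZ as the target instance
aws ec2 create-volume --availability-zone us-east-1a --size 10 --volume-type gp2

# Attach it to the instance as /dev/sdf (seen as /dev/xvdf on newer kernels)
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf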

Identifying volume on Linux instance

Now head back to your Linux server. Log in and check for the new volume in the fdisk -l output.

root@kerneltalks # fdisk -l
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/xvda: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: 25D08425-708A-47D2-B907-1F0A3F769A90


#         Start          End    Size  Type            Name
 1         2048         4095      1M  BIOS boot parti
 2         4096     20971486     10G  Microsoft basic

Disk /dev/xvdf: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

As AWS mentioned, the new device name is reflected as /dev/xvdf in the kernel, and you can see /dev/xvdf in the above output.

Now you need to prepare this disk using LVM (pvcreate) or partition it using fdisk so that you can use it for creating mount points!
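
Here is a minimal sketch of preparing the new disk with LVM and mounting it, assuming the disk shows up as /dev/xvdf and you want an ext4 filesystem on /testmount. The volume group and logical volume names are just examples.

# Mark the disk as an LVM physical volume
pvcreate /dev/xvdf

# Create a volume group and a logical volume using all available space
vgcreate datavg /dev/xvdf
lvcreate -l 100%FREE -n datalv datavg

# Create an ext4 filesystem and mount it
mkfs.ext4 /dev/datavg/datalv
mkdir -p /testmount
mount /dev/datavg/datalv /testmount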

How to download files from EC2 to local machine using winSCP

Learn how to transfer files between your desktop and EC2 using WinSCP. Using key-based authentication, WinSCP can connect to EC2 to download/upload files from/to the server.

Transfer data to/from EC2 using winSCP

While working on an EC2 server hosted on AWS, one of the basic requirements you come across is transferring data between your desktop/laptop and the EC2 instance. Since EC2 uses key-based authentication, it's hard for beginners to understand how to transfer data from the desktop to EC2.

Normally, programs like WinSCP are used to transfer data between a Linux server and a Windows machine. In this article, we will walk you through how to set up key-based authentication in WinSCP and then how to download files from EC2 to the local machine.


Step 1: Know your DNS name

Make sure your EC2 instance is up and running and that you have its Public DNS name. You can see it under the instance description in your AWS EC2 console.


Refer to the screenshot below:

Public DNS of EC2 instance

Step 2: Set private key for authentication

Open the WinSCP tool. Click on Advanced to open the tool's settings.

Open winSCP settings

Under settings, click on Authentication under SSH in the left panel. This will open up authentication settings in the right panel.

Authentication settings in winSCP

Under Authentication parameters, tick 'Allow agent forwarding' and browse to your private key file. This private key file is the same file you use to authenticate to EC2 when connecting via PuTTY.

Click OK and close settings.

Step 3: Connect

Enter the public DNS of your EC2 instance and the username, ec2-user for Red Hat (different Linux distros have different default logins in AWS; a list of all of them is here), and hit connect. If you are connecting for the first time via WinSCP, it will pop up a prompt to accept the host key. Accept it and you will be connected to the EC2 server!

I have created a small GIF which shows the whole process above. Have a look.

Connect EC2 using winSCP

Now you can download or upload files between EC2 and your local machine like you normally do!
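
If your local machine has an OpenSSH client available (Linux, macOS, or recent Windows 10), the same key-based transfer also works from the command line with scp instead of WinSCP. The key file, hostname, and paths below are placeholders:

# Download a file from the EC2 instance to the current local directory
scp -i my-keypair.pem ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com:/home/ec2-user/file.txt .

# Upload works the same way, with the arguments reversed
scp -i my-keypair.pem ./file.txt ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com:/home/ec2-user/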

How to release the Elastic IP in AWS

Learn how to disassociate an elastic IP from EC2 and how to release an elastic IP in AWS, with screenshots. Also, understand how elastic IPs are billed.

How to guide: Release Elastic IP in AWS

In our previous article, we saw how to allocate an elastic IP to your AWS account and how to associate that elastic IP with an EC2 instance. In this article, we will walk through the steps to disassociate an elastic IP from an EC2 instance and then release the elastic IP from your AWS account.

Before we get into the steps, let's look at elastic IP billing information, which will help you understand why it is important to release unused elastic IPs back to AWS.

Elastic IP billing

At most, you can allocate 5 elastic IPs per AWS account per region. If you require more, you need to reach out to the AWS team to raise this limit through a request form. This limit is set by Amazon since IPv4 addresses are a scarce resource.

Coming to the billing part, you will be billed for each elastic IP which is allocated to your AWS account but not being used anywhere. This is to encourage efficient use of such a scarce resource. You will not be billed for an elastic IP if the below conditions are met –

  1. The elastic IP allocated to you is associated with an EC2 instance
  2. That EC2 instance has only one elastic IP associated with it
  3. That EC2 instance is in the running state.


An elastic IP is allocated to you once you request it. There is no default elastic IP allocated to your AWS account. Hence, elastic IPs are billed under the on-demand pricing model. Here are the points to consider:

  1. Elastic IPs are billed under the on-demand pricing model
  2. It’s billed per hour on a pro-rata basis.
  3. Billing rate changes as per region. Detailed rates are available on the pricing page under the ‘Elastic IP Addresses‘ section.

For example, see the below rates of elastic IPs for the US East (Ohio) region, depending on your type of use –

Elastic IP billing information. Information credit: AWS website.

Now you know how your elastic IP usage gets billed in AWS. Without further deviation, let's walk through the process to release an elastic IP from an AWS account.

How to remove elastic IP from EC2 instance

In AWS terms, this is the process of disassociating an elastic IP from an EC2 instance. The process is pretty simple. Log in to the EC2 console and navigate to Elastic IPs. You will see a list of all the elastic IPs in your account. Select the one to disassociate and choose Disassociate address from the Actions menu. You will be shown a pop-up like the one below:

Disassociate elastic IP from EC2

Confirm the disassociation by clicking the Disassociate address button. Your elastic IP will be removed from the EC2 instance, and the instance will automatically be assigned a public IP by AWS. You can confirm that the elastic IP field is empty in the EC2 instance details.

Now you have successfully disassociated the elastic IP from the EC2 instance. But it is still allocated to your AWS account and you are still being billed, since you are not using it anywhere. If you don't plan to use it for any other purpose, you need to release it back to the AWS pool so that you won't be billed for it.

How to release elastic IP from AWS account

Releasing an elastic IP means freeing it from your account and making it available back in the AWS pool for someone else to use. To release it, log in to the EC2 console and navigate to the Elastic IPs page. From the presented list, select the elastic IP you want to release and choose Release addresses from the Actions menu. You will be prompted with a pop-up like the one below:

Release elastic IP from AWS account

Confirm your action by hitting the Release button. Your elastic IP will be released back to the AWS pool, and you won't see it in your account anymore!
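
Both steps can also be performed with the AWS CLI. Below is a hedged sketch; the association and allocation IDs are placeholders which you can look up with aws ec2 describe-addresses.

# Step 1: disassociate the elastic IP from the EC2 instance
aws ec2 disassociate-address --association-id eipassoc-0123456789abcdef0

# Step 2: release it back to the AWS pool so it no longer accrues charges
aws ec2 release-address --allocation-id eipalloc-0123456789abcdef0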

Benefits of cloud computing over the traditional data center

An article listing the benefits of cloud over a traditional datacenter, covering 7 different aspects of cloud vs on-premise datacenters.

Cloud vs traditional datacenter

In the past few years, the cloud industry has been gaining good momentum. Many companies are moving their workloads to the cloud from traditional data centers. This trend is increasing day by day due to the list of advantages the cloud offers over traditional data centers. In this article, we will walk through these advantages. This is also one of the basic cloud interview questions, where you are asked to list the pros and cons of the cloud. Without much delay, let's jump into the cloud vs data center discussion.

  1. Low maintenance cost. For a customer, the maintenance cost is almost nil. Since you are using hardware from the cloud provider's datacenter, you don't need to maintain hardware at all. You save on geographical location cost, hardware purchases, upgrades, datacenter staff, power, facility management cost, etc. All of this is borne by the cloud provider. The provider's cost is also low, since they operate multiple clients from the same facility, and hence the cost is lower than what those clients would bear operating from separate datacenters. This is very environment friendly too, since you are reducing the need for multiple facilities to fewer ones.
  2. Cheap resources. Cloud providers have a pool of resources from which you get assigned your share. This means cloud providers maintain and operate a large volume of resources and distribute smaller chunks to customers. This obviously reduces the cost of maintenance and operation for cloud providers and in turn provides low-cost, cheap resources to customers.
  3. Scale as per your need. In a traditional data center, you have to study and plan your capacity well in advance to finalize your hardware purchase. Once purchased, you are stuck with that limited capacity, and you cannot accommodate growth beyond it before your estimated time; you have to go through planning and purchasing new hardware again, which is a time-consuming process. In the cloud, you can scale your computing capacity up and down almost instantly (or in far less time than a traditional purchase process), and you don't even need to worry about or follow up on approvals, purchasing, billing, etc.
  4. Pay as you use. In traditional data centers, whenever you buy hardware, you make an upfront investment even if you don't use the full capacity of the purchased hardware. In the cloud, you are billed per your use. So your expenditure on computing is optimal with respect to your use.
  5. The latest technology at your service. Technology changes very fast these days. Hardware you buy today becomes obsolete in a couple of months. And if you are making huge investments in hardware, the company expects to use it for at least a couple of years. So you are stuck with hardware you bought with a hefty price tag, which is now way behind its latest counterparts. The cloud always provides you with the latest technology, and you don't need to worry about upgrades or maintenance. All these hardware aspects are the headache of the cloud provider, who takes care of them in the background. As a customer, all the latest technology is at your service without any hassle.
  6. Redundancy. Redundancy in a traditional datacenter means investing in a facility almost identical to the primary, along with the cost of the infrastructure which connects them. On-site redundancy for power, network, etc. is also expensive and maintenance-prone. When you opt for the cloud, everything said previously just vanishes from your plate. The cloud is already redundant at the single-entity level, such as a single server or storage disk. Nothing needs to be done, and no extra cost is billed to you for it. For your infrastructure design requirements, if you want, you can use the ready-made services provided by the cloud (for redundancy) and you are covered against failures.
  7. Accessibility. With an on-premise datacenter, you have very limited connectivity, mostly local. If you want access to internal entities, you need to maintain your own VPN. Cloud services have a portal with access to almost all of their services over the web, which can be accessed from anywhere with an internet connection. Also, if you want to opt in for a VPN, you get a pre-configured, secure VPN from your cloud provider. No need to design and maintain a VPN!

Let us know your views on cloud vs on-premise datacenters in the comments section below.

Difference between elastic IP and public IP

Learn what elastic IP and public IP mean in AWS, and all the differences between an elastic IP and a public IP in AWS.

Elastic IP vs Public IP in AWS

I was having a conversation about AWS with one of my friends, and he came up with the question: what is the difference between an elastic IP and a public IP? I explained it to him and thought, why not draft a post about it! So in this article, we will see the difference between an elastic IP and a public IP in AWS. This can be a cloud interview question, so without much delay, let's get into the elastic IP vs public IP battle.

What is elastic IP in AWS?

First things first, let's get the basics clear. What is an elastic IP? It is an IPv4 address designed for dynamic cloud computing and reachable over the internet. As the name suggests, it's a flexible IP that can be rapidly remapped from one EC2 instance to another when the currently associated instance fails. This way, the end-user or application continues to talk to the same IP even if the instance behind it fails. Elastic IPs are like static public IPs allocated to your AWS account.

What is Public IP in AWS?

A public IP is an IPv4 address which is reachable over the internet. Remember, the switching flexibility of an elastic IP is not available for this IP. Amazon also assigns an external/public DNS name (shown in the screenshot) to instances which receive a public IP. The public IP of an instance is mapped to the primary private IP of that instance via NAT (Network Address Translation) by default.

Refer to the below screenshot from the AWS EC2 console and observe where you can check your elastic IP, public IP, and public DNS name.

Check elastic IP, public IP and public DNS in EC2 AWS console

Difference between elastic IP and Public IP

Now let's look at the differences between these two IP types.

  1. Whenever a new EC2 instance spins up, it is assigned a public IP by default. An elastic IP is not assigned by default.
  2. Elastic IPs are allocated to your AWS account, and you can then attach them to instances. Public IPs are assigned to instances directly.
  3. When an instance is stopped and started again, its public IP changes. But if the instance is assigned an elastic IP, it remains the same even after the instance is stopped and started again.
  4. If an elastic IP is allocated to your account and not in use, then you will be charged for it on an hourly basis.
  5. A public IP is released once your instance is stopped, so there is no question of getting charged for not using it.
  6. You won't be able to re-use the same public IP since it's allocated from the free IP pool. You can always re-use and re-attach an elastic IP to other instances once it is released from the current instance.
  7. You cannot manually attach or detach a public IP from an instance; it's auto-allocated from the pool. An elastic IP can be manually attached to and detached from an instance.
  8. You can have a maximum of 5 elastic IPs per account per region. But you can have as many public IPs as EC2 instances you spin up.
  9. You can have only one of them on an instance at a time. If you assign an elastic IP to an instance, then its currently assigned public IP is released back to the free pool.
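
To see the difference in practice, note that a public IP is handed to you automatically at launch, whereas an elastic IP takes two explicit calls. A minimal AWS CLI sketch follows; the instance and allocation IDs are placeholders.

# Allocate an elastic IP to your account
aws ec2 allocate-address --domain vpc

# Associate it with an instance; the instance's auto-assigned public IP is released back to the pool
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0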