Category Archives: Cloud Services

How to configure EC2 for Session Manager

A quick reference to configure EC2 for Session Manager in AWS

EC2 session manager!

OK, this must be a very basic post for most of you, and there is a readily available AWS doc for it, but I am cutting it short here to list the steps for achieving the objective quickly. You should go through the official AWS doc to understand all aspects of it, but if you are on the clock, just follow along and get it set up in no time.

Checklist

Before you start, make sure you have these minimum configurations covered.

  1. Your EC2 instance is running a supported operating system. We are taking the example of Linux here, so all Linux versions that support AWS Systems Manager support Session Manager.
  2. SSM agent 2.3+ is installed on the system. If not, we have it covered here.
  3. Outbound 443 traffic should be allowed to the below 3 endpoints. You probably have this covered already, since most setups allow ALL traffic in the outgoing security group rule –
    • ec2messages.region.amazonaws.com
    • ssm.region.amazonaws.com
    • ssmmessages.region.amazonaws.com

In a nutshell, point 2 is probably the only one you need to verify. If you are using an AWS-managed AMI, then you have that covered too! But if you are using a custom-built, home-grown AMI, then that might not be the case.

SSM agent installation

It’s a pretty basic RPM installation, as you would do on any Linux platform. Download the package relevant to your Linux version from here, or use the global URLs for Linux agents –

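For reference, on an RPM-based distribution (Amazon Linux 2, RHEL, CentOS) the agent can usually be installed straight from the globally hosted package; treat the URL below as an illustration and confirm the correct one for your distribution and architecture in the AWS docs –

# yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
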
Then run the service handler commands with root privileges as below –

# systemctl enable amazon-ssm-agent
# systemctl start amazon-ssm-agent
# systemctl status amazon-ssm-agent

If you do not have access to the EC2 instance (key lost or EC2 launched without a keypair), then you probably need to re-launch the EC2. If your EC2 is part of an Auto Scaling group (ASG), then it makes sense to add these commands to the user-data script of the launch template and launch a new EC2 from the ASG.

Instance role permissions

Now the agent is up and running. The next step is to authorize the AWS Systems Manager service to perform actions on the EC2 instance. This is done via an instance role. Create the IAM instance role with the below IAM policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:UpdateInstanceInformation",
                "ssmmessages:CreateControlChannel",
                "ssmmessages:CreateDataChannel",
                "ssmmessages:OpenControlChannel",
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": "*"
        }
    ]
}

You can scope it down to particular resources if you want. You can also add KMS-related permissions to it if you are planning to encrypt session data using KMS. An example can be found here.

Once done, attach the role to the EC2 instance. If the EC2 already has a role attached to it, then add the above policy to the existing role and you should be good.

IAM instance profile
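
If you prefer the AWS CLI over the console for attaching the role, a minimal sketch is below; the instance profile name, role name, and instance ID are examples, and it assumes the role (with the above policy, or the AWS-managed AmazonSSMManagedInstanceCore policy) already exists –

$ aws iam create-instance-profile --instance-profile-name SSMInstanceProfile
$ aws iam add-role-to-instance-profile --instance-profile-name SSMInstanceProfile --role-name SSMInstanceRole
$ aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=SSMInstanceProfile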

Connecting via Session Manager

Now you are good to test the connection.

  • Log in to the EC2 console.
  • Navigate to Instances and select the respective EC2 instance from the list.
  • Click on the Connect button.
Connecting to session manager from EC2 console
  • Make sure you are on the Session Manager tab and click on Connect.
  • If you still see an error reported on this screen, give it a minute or two. Sometimes it takes a few seconds for IAM role permissions to propagate.
Connect to the instance using session manager

A new browser tab will open, and you should see the Linux prompt.

Instance connected!

Notice you are logged in with the default user ssm-user. You can switch to root user by using sudo.
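
You can also open a session from the AWS CLI, provided the Session Manager plugin is installed on your workstation; the instance ID below is an example –

$ aws ssm start-session --target i-0123456789abcdef0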

There are a couple of benefits to using Session Manager as a standard over key pairs:

  • No need to maintain key files.
  • Avoids the security threat to infra associated with key file management.
  • Access management is easy through IAM.
  • Native AWS feature!
  • Sessions can be logged for audit purposes.

How to configure switching IAM roles in AWS CLI?

A short howto on configuring AWS CLI to switch roles

AWS CLI Switch Roles configuration

Requirement:

You have one AWS account that needs to switch roles before executing things on AWS. It’s an easy task on the AWS console, but how do you switch roles in the AWS CLI?

Solution:

Let’s consider the below setup-

  • AWS IAM account with programmatic access – user101
  • The same IAM account has sts:AssumeRole permissions (a sample policy is shown after this list).
  • AWS IAM role for the above-said IAM user to assume (same or cross-account) – role101
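
A minimal sketch of what that sts:AssumeRole policy attached to user101 could look like; the account ID is masked the same way as in the rest of this post –

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::xxxxxxxxx:role/role101"
        }
    ]
}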

Start with configuring the AWS CLI in a standard way.

$ aws configure --profile user101
AWS Access Key ID [None]: AKIAQX3SNXZGUQFOSK4T
AWS Secret Access Key [None]: 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
Default region name [us-east-1]: us-east-1
Default output format [None]: json

Note: It is not a good practice to keep AWS credentials in plain text. Keep them stored in a secure, encrypted way using aws-auth.

Now, at this point, you must have an AWS credentials file created in the home directory.

$ cd ~/.aws
$ cat credentials
[user101]
aws_access_key_id = AKIAQX3SNXZGUQFOSK4T
aws_secret_access_key = 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
region = us-east-1
output = json

You need to edit the above credentials file to add IAM role details. Append the below configuration in the file.

If you are working with AWS GovCloud, make sure the ARNs have the proper AWS partition defined, e.g. arn:aws-us-gov:x:x:…..
[role101]
role_arn = arn:aws:iam::xxxxxxxxx:role/role101
output = json
source_profile = user101

where –

  • role101 is a role identifier. You can name it as per your choice.
  • Mention the correct IAM role ARN.
  • source_profile should be the profile identifier of the user who will assume this role. In our case, it's user101.

Save the file, and you are ready to go.

Test configurations –

$ aws sts get-caller-identity
{
    "UserId": "AIDAQX3SNXZG3Z2AXNIMJ",
    "Account": "xxxxxxxxx",
    "Arn": "arn:aws:iam::xxxxxxxxx:user/user101"
}

$ aws sts get-caller-identity --profile role101
{
    "UserId": "AROAQX3SNXZG6KL4YENFZ:botocore-session-1631087792",
    "Account": "xxxxxxxxx",
    "Arn": "arn:aws:sts::xxxxxxxxx:assumed-role/role101/botocore-session-1631087792"
}

You can see that by using --profile role101 we are assuming the IAM role role101 as the user user101.
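
From here on, any CLI command can run under the assumed role simply by referencing the role profile, for example –

$ aws s3 ls --profile role101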

AWS CLI configuration for switching roles using MFA

Note: If you are on Windows and using GitBash, refer to configuring GitBash for MFA prompts. It works perfectly in Powershell.

In some cases, your AWS environment may have MFA restrictions in place, where the user user101 must use MFA to switch to the role role101. In such a scenario, your role profile in the credentials file should include the MFA device ARN as well, like below –

[role101]
role_arn = arn:aws:iam::xxxxxxxxx:role/role101
mfa_serial = arn:aws:iam::xxxxxxxxx:mfa/user101
output = json
source_profile = user101

where –

mfa_serial is the ARN of the MFA device of user101.

You will be prompted to supply the MFA code whenever you use profile role101 in AWS CLI commands.

$ aws sts get-caller-identity --profile role101
Enter MFA code for arn:aws:iam::xxxxxxxxx:mfa/user101:
{
    "UserId": "AROAQX3SNXZG6KL4YENFZ:botocore-session-1631089277",
    "Account": "xxxxxxxxx",
    "Arn": "arn:aws:sts::xxxxxxxxx:assumed-role/role101/botocore-session-1631089277"
}

How to find AWS resources that need to be tagged

A quick rundown on how to hunt down AWS resources that need tagging

Scan AWS resources to tag

Tags are among the most important and yet most neglected AWS entities! As an organization's AWS footprint grows, they start to realize the importance of tags, and then come the projects for tagging existing resources!

At this stage, the first question on the table is: how do we search for AWS resources that need tagging? Or, how can we find non-tagged AWS resources?

It’s a very short process that can be summarised in a single picture!

Searching AWS resources to tag

Breaking it down –

  1. Log in to the AWS Resource Groups console.
  2. On the left-hand side menu, select Tag Editor under Tagging.
  3. Now you should have the selection pane on the right-hand side.
  4. Select a particular region or All regions from the Regions drop-down.
  5. Select a specific resource or All supported resource types from the Resource types drop-down.
  6. Tags – Optional: You can specify key/value details to search for specific tags. Since we are searching for resources that are not tagged, let's keep it blank.
  7. Finally, click on the Search resources button and you are done!
  8. You should be presented with a list of AWS resources in the specified regions that need to be tagged, like below.
List of AWS resources to tag

You can export the list to CSV as well for further data analytics.
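
If you want a scripted view, the Resource Groups Tagging API can dump resources and their tags from the AWS CLI. Note that it only returns resources that are or were once tagged, so the Tag Editor console search remains the more complete way to hunt never-tagged resources. A minimal sketch that lists ARNs currently carrying zero tags in one region (region is an example) –

$ aws resourcegroupstaggingapi get-resources --region us-east-1 --query 'ResourceTagMappingList[?length(Tags)==`0`].ResourceARN' --output text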

Netflix’s ConsoleMe local installation on Linux machine

A step by step guide to install ConsoleMe on Ubuntu Linux machine

ConsoleMe Ubuntu Local Install

ConsoleMe is an open-source web service published by Netflix. It is designed to make life easy for end-users and cloud administrators. Using ConsoleMe, cloud administrators can manage IAM permissions/credentials for IAM roles, S3 buckets, SQS queues, and SNS topics across multiple AWS accounts from a single interface. It also provides a CLI called weep for AWS credentials management. That’s a fair introduction if you are not aware of the tool. Next, let’s get into the installation part.

ConsoleMe offers Docker and local installs. We will walk you through the local install in this article.

Pre-requisite:

  • A machine running Ubuntu 19.04+ with root access. I used Ubuntu 20.04 LTS x86.
  • Active and working package manager subscription to install packages
  • Storage requirement: 2GB of disk space
  • An AWS user/role for consoleme service with appropriate permissions
  • AWS access keys for above user if you are not using roles. I used keys (steps below)

Installation

We are installing ConsoleMe in the /consoleme directory. If you want to install in another location, make the necessary changes in the commands below. Here is the list of commands you need to run as root –

apt-get update
apt-get install build-essential libxml2-dev libxmlsec1 libxmlsec1-dev libxmlsec1-openssl musl-dev libcurl4-nss-dev python3-dev pkg-config python3.8-venv awscli docker-compose -y
curl -sL https://deb.nodesource.com/setup_14.x | sudo bash
apt-get install -y nodejs
npm install yarn -g
cd /
git clone https://github.com/Netflix/consoleme.git
cd consoleme
docker-compose -f docker-compose-dependencies.yaml up -d

Here, the first few commands are installing all the dependencies and related software/tools. Then, we are cloning the GitHub repo of the tool in /consoleme and lastly, we are running two containers.

These are the Redis and DynamoDB containers that ConsoleMe leverages for caching and aggregating AWS account information. You can use the managed AWS services (ElastiCache for Redis and a DynamoDB table) instead, but for now, we will run these containers locally so that ConsoleMe talks to them rather than to AWS services.

I am avoiding putting up console outputs for frequently used commands like package installations etc., here.

Make sure both containers are up and running before proceeding to the next step –

root@kerneltalks:/consoleme# docker ps
CONTAINER ID   IMAGE                             COMMAND                  CREATED          STATUS         PORTS                              NAMES
5333cdee2202   cnadiminti/dynamodb-local         "java -jar DynamoDBL…"   10 seconds ago   Up 4 seconds   8000/tcp, 0.0.0.0:8005->8005/tcp   consoleme-dynamodb
19ac354c3d70   redis:alpine                      "docker-entrypoint.s…"   10 seconds ago   Up 4 seconds   0.0.0.0:6379->6379/tcp             consoleme-redis
4cf931d38652   aaronshaf/dynamodb-admin:latest   "node bin/dynamodb-a…"   10 seconds ago   Up 4 seconds   0.0.0.0:8001->8001/tcp             consoleme-dynamodb-admin

Now, you need to prepare the machine to talk to AWS for fetching account details in the upcoming install steps. Ensure that you have set up the account and permissions correctly in IAM (mentioned in the pre-requisites above) to avoid any issues. You can do that by configuring an AWS profile –

root@kerneltalks:/consoleme# aws configure
AWS Access Key ID [None]: AKIAQX3STVKIYRO36XEC
AWS Secret Access Key [None]: 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
Default region name [None]:
Default output format [None]:

Lastly, create a new Python environment and run the final install step. This will take a while to complete since, at the end of the make install command, it also fetches and caches the AWS account details in the local Redis cache –

python3 -m venv env
. env/bin/activate
make install

After successful installation, you should be able to start the application.

Running ConsoleMe

In the current shell, you can run ConsoleMe with the command below. If you are in another shell, activate the Python environment again –

(env) root@kerneltalks:/consoleme# python consoleme/__main__.py
{"asctime": "2021-07-25T08:32:16Z+0000", "name": "consoleme", "processName": "MainProcess", "filename": "jwt.py", "funcName": "<module>", "levelname": "ERROR", "lineno": 14, "module": "jwt", "threadName": "MainThread", "message": "Configuration key `jwt.secret` is not set. Setting a random secret", "eventTime": "2021-07-25T01:32:16.286230-07:00", "hostname": "kerneltalks", "timestamp": "2021-07-25T08:32:16Z+0000"}
2021-07-25 08:32:17,322 - DEBUG - root - [constants.py:39 - <module>() ] - Leveraging the bundled IAM Definition.
2021-07-25 08:32:17,322 - INFO - root - [iam_data.py:10 - <module>() ] - Leveraging the IAM definition at /consoleme/env/lib/python3.8/site-packages/policy_sentry/shared/data/iam-definition.json
2021-07-25 08:32:17,824 - DEBUG - git.cmd - [cmd.py:817 - execute() ] - Popen(['git', 'version'], cwd=/consoleme, universal_newlines=False, shell=None, istream=None)
2021-07-25 08:32:17,859 - DEBUG - git.cmd - [cmd.py:817 - execute() ] - Popen(['git', 'version'], cwd=/consoleme, universal_newlines=False, shell=None, istream=None)
{"asctime": "2021-07-25T08:32:18Z+0000", "name": "consoleme", "processName": "MainProcess", "filename": "__main__.py", "funcName": "init", "levelname": "DEBUG", "lineno": 57, "module": "__main__", "threadName": "MainThread", "message": "Server started", "eventTime": "2021-07-25T01:32:16.286230-07:00", "hostname": "kerneltalks", "timestamp": "2021-07-25T08:32:18Z+0000"}

But it will exit when you terminate the command or the shell. It’s safer to run it in the background or, even better, run it as a Linux service. For running ConsoleMe as a service, create the below two files –

File /usr/bin/consoleme_start.sh

#!/bin/bash
# Activate the ConsoleMe Python virtual environment (paths are relative to WorkingDirectory=/consoleme)
. env/bin/activate
# Start the ConsoleMe web service
python consoleme/__main__.py

File /etc/systemd/system/consoleme.service


[Unit]
Description=Run consoleme service.

[Service]
Type=simple
User=root
WorkingDirectory=/consoleme
ExecStart=/usr/bin/consoleme_start.sh

[Install]
WantedBy=multi-user.target

Assign executable permissions to the start script –

chmod +x /usr/bin/consoleme_start.sh

Enable and start the service

root@kerneltalks:/consoleme# systemctl enable consoleme
Created symlink /etc/systemd/system/multi-user.target.wants/consoleme.service → /etc/systemd/system/consoleme.service.

root@kerneltalks:/consoleme# systemctl start consoleme

root@kerneltalks:/consoleme# systemctl status consoleme
● consoleme.service - Run consoleme service.
     Loaded: loaded (/etc/systemd/system/consoleme.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2021-07-25 08:35:52 UTC; 7s ago
   Main PID: 14775 (consoleme_start)
      Tasks: 5 (limit: 4706)
     Memory: 159.7M
     CGroup: /system.slice/consoleme.service
             ├─14775 /bin/bash /usr/bin/consoleme_start.sh
             └─14776 python consoleme/__main__.py

Jul 25 08:35:52 kerneltalks systemd[1]: Started Run consoleme service..
Jul 25 08:35:53 kerneltalks consoleme_start.sh[14776]: {"asctime": "2021-07-25T08:35:53Z+0000", "name": "consoleme", "processName": "MainProcess", "filename": "jwt.py", "funcName": "<module>", "levelname": "ERROR", "lineno": 14, "m>
Jul 25 08:35:53 kerneltalks consoleme_start.sh[14776]: 2021-07-25 08:35:53,954 - DEBUG - root - [constants.py:39 - <module>() ] - Leveraging the bundled IAM Definition.
Jul 25 08:35:53 kerneltalks consoleme_start.sh[14776]: 2021-07-25 08:35:53,955 - INFO - root - [iam_data.py:10 - <module>() ] - Leveraging the IAM definition at /consoleme/env/lib/python3.8/site-packages/policy_sentry/shared/data/i>
Jul 25 08:35:54 kerneltalks consoleme_start.sh[14776]: 2021-07-25 08:35:54,354 - DEBUG - git.cmd - [cmd.py:817 - execute() ] - Popen(['git', 'version'], cwd=/consoleme, universal_newlines=False, shell=None, istream=None)
Jul 25 08:35:54 kerneltalks consoleme_start.sh[14776]: 2021-07-25 08:35:54,361 - DEBUG - git.cmd - [cmd.py:817 - execute() ] - Popen(['git', 'version'], cwd=/consoleme, universal_newlines=False, shell=None, istream=None)
Jul 25 08:35:54 kerneltalks consoleme_start.sh[14776]: {"asctime": "2021-07-25T08:35:54Z+0000", "name": "consoleme", "processName": "MainProcess", "filename": "__main__.py", "funcName": "init", "levelname": "DEBUG", "lineno": 57, ">

ConsoleMe GUI

Now that your ConsoleMe service is running, you should load its GUI in a web browser. The service listens on port 8081, so you need to navigate to the server address with port 8081. Make sure the security group allows 8081 traffic if you are installing on EC2.
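
A quick sanity check from the server itself before reaching for the browser; this simply assumes the service is listening on its default port –

root@kerneltalks:/consoleme# ss -tlnp | grep 8081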

At this point, ConsoleMe is running with the default open example configuration. This is very well highlighted in the web app as a warning. You should edit this configuration to make your ConsoleMe more secure. ConsoleMe recommends Application Load Balancer authentication for securing your web app GUI. Refer to our next article on how to secure the ConsoleMe web app using ALB authentication.

How to create an Amazon Cognito User pool for ALB authentication

A step by step procedure to create an Amazon Cognito user pool. All available options are explained.

Amazon Cognito user pool!

One of the best features of AWS Application Load Balancers (ALB) is authentication! You can offload authentication to the ALB, which leverages Amazon Cognito in the backend. Amazon Cognito offers identity management through user pools or federated identities. This article will walk you through creating a user pool in Amazon Cognito that can be used for ALB authentication. Without further delay, let’s get into it.

  • Login to Amazon Cognito console
  • Click on Manage User Pools
  • On the User pools page, click on the Create a user pool button in the top right-hand corner of the page.
  • That should start the user pool creation wizard. Let's go through it one by one –

Name

Enter the Pool name and click on the Step through settings button.

User pool creation wizard

Attributes

Settings on this page cannot be edited later, so choose wisely!

The first thing you need to choose is the end user’s sign-in method: whether they should use a username or an email address/phone number to sign up and sign in. I am choosing a username and also allowing them to use their email address when logging in later, once they sign up. I am also selecting case-sensitive usernames because that makes more sense.

Choose the way end-users sign in

The next section of attributes lets you choose from the list of attributes you want the end user to provide when they sign up in Cognito. You can also add a custom attribute here if one is not in the standard list.

Set end-user attributes

Policies

End-user password policies and controls are defined in this section. All the fields are pretty self-explanatory.

Cognito user pool password policies and account control

MFA and verifications

An extra layer of account security can be defined here: MFA and related configurations. Please note that if you are enabling MFA for end-users, then you should enable the phone number attribute in the earlier settings, and text messages (verification and subsequent messaging) will incur extra charges. Amazon has pretty much explained each option here.

Cognito MFA settings

If you are opting to add and manage phone number attributes, then you need to create a role that allows Cognito to send text messages on your behalf.

SMS access related settings

Messages customizations

In this section, you can customize the email or SMS messages sent out by Amazon Cognito on your behalf. It’s the place for company branding in these communications! Make sure you have a verified email address in Amazon SES to set it as the From email address.

Cognito messages customizations

In the later part of the page, you can configure how you want verification to be done using codes or clickable links. Also, you can customize the text of the message here.

Tags

A place that is crucial but mostly ignored by everyone! Tagging for the user pool.

Amazon Cognito user pool tags

Devices

Choose if Cognito should remember the user’s devices. This enhances the user experience, but to use this feature you should have MFA enabled for end-users. Since we did not opt for it, we will simply say No and move forward.

Remember user device

App clients

In this section, you create an app client which will access this user pool. On creation of the app client, you will receive an app ID and secret key that you can configure in your applications to access this user pool.

Click on Add an app client

Cognito app client settings
  • App client name: Add a unique name.
  • Refresh token expiration: Refresh tokens are used to retrieve new ID and access tokens. Control their expiration here. Read more about refresh tokens.
  • Access token expiration: Used for authorizing API operations. Control expiration here. Read more about access tokens.
  • ID token expiration: It is used to claim the authenticated user’s identity. Define its expiration limits here. Read more about ID tokens.
  • Auth Flows Configuration: Enable these depending on your integration requirements. I selected ALLOW_USER_PASSWORD_AUTH and left the others untouched.
Cognito app client security settings
  • Security Configuration: It allows Cognito to return a generic error instead of revealing whether a user exists. Select the recommended option unless you have a reason not to!
  • Advanced token settings: Enable or disable token revocation.
  • Attributes read and write permissions: Select the list of attributes which this app client can read or write.

Click on Create app client. It will be created along with the user pool when you complete the whole wizard.

Click on the Next step to move forward in the user pool creation wizard.

Cognito app client
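
For scripted setups, an app client can also be created with the AWS CLI once the pool exists; a minimal sketch where the pool ID and client name are example placeholders –

$ aws cognito-idp create-user-pool-client --user-pool-id us-east-1_XXXXXXXXX --client-name kerneltalks-app-client --generate-secret --explicit-auth-flows ALLOW_USER_PASSWORD_AUTH ALLOW_REFRESH_TOKEN_AUTH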

Triggers

On this page, you can configure Lambda functions to be triggered on specific actions or workflows. You need to create the Lambda functions in advance to select them here from the dropdown. The list of available triggers is –

  1. Pre sign up
  2. Pre authentication
  3. Custom message
  4. Post authentication
  5. Post confirmation
  6. Define Auth Challenge
  7. Create auth challenge
  8. Verify auth challenge response
  9. User migration
  10. Pre token generation

All triggers are listed with descriptions, so it's easy to understand when they will be activated and execute the related Lambda functions. For the simplicity of this article, we are not adding any.

Cognito Lambda triggers

Review

Review all the details you supplied throughout the wizard. You can make edits if necessary, and then lastly click on Create pool.

User pool created!

You should be greeted with a success message and the user pool management page. Note down the pool ID generated for this user pool.
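
As an aside, if you ever need to script pool creation, the AWS CLI can do it too; a bare-bones sketch with an example pool name (all the wizard settings above become additional parameters) –

$ aws cognito-idp create-user-pool --pool-name kerneltalks-user-pool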

Amazon Cognito app client settings

Now that you have created a user pool and an app client for it, let's look at some of the settings that need to be checked or changed to make sure your app client is ready to be consumed.

Configure Amazon Cognito app client’s IDP settings

Navigate to App integration > App client settings on the left sidebar menu on the user pool page.

  • Enable Cognito user pool under Enabled identity providers.
  • You should have the Callback URLs handy to fill in here. These are the URLs where the app will be redirected once successful authentication happens. Your application developers should be able to help you with these details.
  • Sign out URLs are those where the user will be redirected once they sign out from the IDP session.
  • OAuth 2.0 settings should be discussed with the developer and configured per the app's requirements.
app client IDP settings

What is Amazon Cognito domain and how to configure it?

It’s a domain prefix with the FQDN https://<prefix>.auth.<region>.amazoncognito.com where –

  • prefix: a unique identifier of your choice
  • region: the AWS region where the user pool is hosted.

This domain is used by Amazon Cognito to host the sign-up and sign-in pages. You can also edit those pages for your company branding, as explained in a later step.

Navigate to App integration > Domain name on the left sidebar menu on the user pool page.

Amazon Cognito Domain

Enter the prefix in the given text box and click Check availability. It will make sure you chose a unique prefix. Click on Save changes.
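
The same prefix domain can also be set from the AWS CLI; the pool ID and prefix below are example placeholders –

$ aws cognito-idp create-user-pool-domain --user-pool-id us-east-1_XXXXXXXXX --domain kerneltalks-auth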

You can opt to use your own domain as well. You need to have an associated SSL certificate in ACM and permission to add the ALIAS record in the DNS hosted zone.

Custom domain in Amazon Cognito

Once done, Cognito will create an Amazon CloudFront distribution for that domain in the backend and supply you with the alias target value to be configured in the hosted zone.

Amazon Cognito custom domain alias

Add an ALIAS record (CNAME for non-Route53) for the Domain name and Alias target mentioned above. Once that's done and the CloudFront distribution is created, your domain status will be set to ACTIVE.

Cognito custom domain set

How to change login UI of Amazon Cognito?

Navigate to App integration > App client settings on the left sidebar menu on the user pool page.

In the last part of the page, you can find the Hosted UI settings. There you will be able to play around with CSS and logo files to create a new custom login page.

Make sure you have the Amazon Cognito domain name defined and at least one OAuth scope defined (above steps).

How to retrieve the Amazon Cognito app client secret?

Navigate to General settings > App clients on the left sidebar menu on the user pool page, and there you can retrieve the app client secret.

Cognito app client secret
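
Or pull it from the AWS CLI; the pool ID and client ID below are example placeholders –

$ aws cognito-idp describe-user-pool-client --user-pool-id us-east-1_XXXXXXXXX --client-id <app-client-id> --query 'UserPoolClient.ClientSecret' --output text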

How to connect RDS with AWS IAM authentication

A quick post listing step by step procedure to connect RDS database configured with IAM authentication.

RDS with IAM authentication

We are considering RDS running MySQL, configured with the AWS IAM authentication option, throughout this post. If you are using a different database engine, edit the commands/arguments wherever necessary.

Basically, we will create a local database user that leverages AWS IAM for authentication. Then the EC2 instance will be configured with an IAM role (or aws configure) that has the appropriate policies attached. And lastly, the user will generate an authentication token and log in to the RDS database.

Why should we use IAM authentications for RDS?

Here is a list of reasons that are helpful to understand the benefits of the IAM authentications option for RDS.

  1. IAM tokens used to log into the RDS database are valid for 15 minutes only. So they are more secure than permanent username/password pairs, and administrators don’t need to enforce/manage password reset policies.
  2. IAM tokens are generated by making API calls to the AWS IAM service whenever needed. As a result, storing tokens is useless, and even if someone does, that does not pose a security threat due to its short life.
  3. Applications can use EC2 instance profiles for generating tokens, so there is no need to store authentication information anywhere for applications to consume.

What do you need?

You should be equipped with the below inventory beforehand –

  1. RDS instance up and running configured with IAM DB authentication.
  2. EC2 instance configured with AWS CLI (make sure SG allows the connectivity between EC2 and RDS on database port)
  3. Master user login to the RDS database
  4. Access to AWS IAM service.

Creating user on database for the RDS access

For this step, you need to log in to the RDS database with the master user and create a new user. If you are on Windows, you can use a lightweight tool like Sqlectron, or if you are already on EC2, you can use the CLI as well –

Create DB user

For SQL CLI :

[root@kerneltalks ~]# mysql -h kerneltalks-rds.cn8uwrapea4b.us-east-1.rds.amazonaws.com -P 3306 -u admin -p
Enter password: 
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 26
Server version: 8.0.20 Source distribution

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> CREATE USER 'kt_iam_user' IDENTIFIED WITH AWSAuthenticationPlugin as 'RDS' REQUIRE SSL;
Query OK, 0 rows affected (0.01 sec)

Make the necessary changes to the username and RDS endpoint!

Create IAM role for RDS access

  • Navigate to AWS IAM
  • Create the IAM policy (sample policy below)
  • If you intend to use the EC2 instance profile, create an IAM role for AWS service EC2 and attach the IAM policy to it.
  • If you intend to use only an IAM user, then make sure you configure your CLI with the aws configure command by supplying the access key ID and secret access key.
  • Make the necessary changes to the resource section of the policy below!
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
             "rds-db:connect"
         ],
         "Resource": [
             "arn:aws:rds-db:us-east-1:986532147:dbuser:db-kerneltalks-rds/kt_iam_user"
         ]
      }
   ]
}

Download the SSL root certificate, available for all regions, from the S3 bucket. This certificate is required while making the RDS connection since we enforced SSL on the database user. This ensures data is encrypted in flight.

[root@kerneltalks ~]#  wget https://s3.amazonaws.com/rds-downloads/rds-ca-2019-root.pem
--2021-07-03 14:44:12--  https://s3.amazonaws.com/rds-downloads/rds-ca-2019-root.pem
Resolving s3.amazonaws.com (s3.amazonaws.com)... 52.216.205.189
Connecting to s3.amazonaws.com (s3.amazonaws.com)|52.216.205.189|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1456 (1.4K) [binary/octet-stream]
Saving to: ‘rds-ca-2019-root.pem’

100%[===================================================================================================================================>] 1,456       --.-K/s   in 0s      

2021-07-03 14:44:12 (81.2 MB/s) - ‘rds-ca-2019-root.pem’ saved [1456/1456]

Now your EC2 or IAM user is ready to access RDS.

Connect to RDS using IAM token

It’s time to generate an IAM token and connect to RDS. We will save the token in a shell variable for easy handling and pass that variable to the RDS connect command. If the downloaded certificate is not in the PWD, then use the absolute path for the pem file.

[root@kerneltalks ~]# token=`aws rds generate-db-auth-token --hostname kerneltalks-rds.cn8uwrapea4b.us-east-1.rds.amazonaws.com --port 3306 --region us-east-1 --username kt_iam_user`
[root@kerneltalks ~]# mysql -h kerneltalks-rds.cn8uwrapea4b.us-east-1.rds.amazonaws.com -P 3306 --ssl-ca=rds-ca-2019-root.pem -u kt_iam_user --password=$token --protocol=tcp
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 59
Server version: 8.0.20 Source distribution

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> 

And you are connected!

Here we have used the IAM token as the password to connect to the RDS database! Note that these generated tokens are good for 15 minutes only. If you are using the connect command after 15 minutes, you need to generate the token again.

Just as a note, the token looks like this –

[root@kerneltalks ~]# echo $token
kerneltalks-rds.cn8uwrapea4b.us-east-1.rds.amazonaws.com:3306/
NXZG4BNZMCVY%2F20210703%2Fus-east-1%2Frds-db%2Faws4_request&X-Amz-SignedHeaders=host&X-Amz-Date=20210703T153110Z&X-Amz-Signature=04605698c01f3de6b2d7a7a48ddf28008d290795e11e
e4ac89587cbb3cdd9661

Understanding the authentication flow

For your understanding, let's see how the authentication flows under the hood –

  1. The user runs the token generation command with the database hostname, port, region, and the username for which the token is to be generated.
  2. RDS sends back an API token with a 15-minute lifespan. This requires your EC2 instance role/IAM user to have the rds-db:connect permission. We covered this by defining the IAM policy.
  3. The user attempts to connect to RDS using the token acquired in the previous step.
  4. A secure connection is established, and the user logs in only if
    • The root certificate is valid
    • IAM permissions are in place and valid for rds-db:connect
    • The supplied username has RDS IAM authentication set in the database
    • The supplied token was generated for the same username and has not yet expired.
  5. The user is granted access and logs in. The SQL prompt will be presented.

How to connect AWS RDS database from Windows

A quick article for AWS beginners on connecting the AWS RDS database from Windows.

A step-by-step RDS database login procedure using a lightweight SQL client. This article is intended for AWS beginners who want to learn about the RDS service with a little bit of database hands-on. That being said, there is always a question from the non-DB folks: “How do I connect to this RDS database?”. So here we will walk you through the step-by-step procedure for doing that.

For connecting to RDS database you should have –

  1. RDS database up and running
  2. RDS database endpoint
  3. A SQL client
  4. Connectivity between your machine and RDS database

Throughout this article, I am considering a MySQL database on RDS. If you are using a different database engine on RDS, then the connection port may vary. Also, we are considering ‘Password authentication’ out of the available authentication options here –

  1. Password authentication: RDS configured with this option has users managed at the database level.
  2. AWS IAM database authentication: RDS configured with this option authenticates users by leveraging AWS IAM service. As a result, users are managed outside the database.

Let’s get started!

Identify RDS Endpoint

  • Navigate to the RDS console
  • Go to Databases and select your database
  • Database details screen should open up where you can copy your RDS endpoint.
RDS endpoint
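
If you have the AWS CLI configured, the endpoint can also be fetched from the command line; a minimal sketch –

$ aws rds describe-db-instances --query 'DBInstances[*].[DBInstanceIdentifier,Endpoint.Address]' --output text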

Prepare SQL client

  • Download lightweight SQL client Sqlectron
  • Install and open the Sqlectron client.
Sqlectron client!

Click on the Add button to enter the RDS connection details. We are testing RDS with ‘Password authentication’ here, hence a user and password need to be supplied.

RDS connection details in SQL Client

Click on Test to verify the connectivity.

If you encounter the below error, try to connect from a machine with direct connectivity to RDS. You can analyse RDS security groups to determine allowed subnets.

DB_CONNECT Error

Verify below things –

  • There are no firewall rules between your machine and RDS blocking the port 3306 traffic.
  • The RDS database is publicly accessible. (Obviously, this applies only to test/POC databases.) If not, you can configure RDS to be publicly accessible.
  • If you don’t want your RDS to be publicly accessible, you need to connect to RDS from the allowed subnets. That means your machine needs to be in the same VPC as RDS (e.g. a Windows EC2 in the same VPC).

How to make RDS publicly accessible

This is not recommended for production RDS instances or RDS instances carrying sensitive data.

  • Navigate to RDS console and respective database
  • Select Modify from the action menu for that particular database
  • Go to the Connectivity section and expand Additional configuration
  • Here, choose the radio button against Publicly accessible and apply the changes
RDS Public Access

RDS connection using ‘Password authentication’ via SQL client

Once you sort out the RDS connectivity issues, go back to the SQL client and try Test again. Now it should succeed.

Now, click on Save to save this configuration in SQL client. And then hit Connect to connect to your RDS database.

Connect RDS using Client

After connecting, you can see schema on the left-hand side and a command box on the right-hand side to execute commands on the database.

Running SQL commands on RDS

Command outputs or messages will be shown in the blank space below the command text area. You will be able to download outputs in JSON or CSV format, or even copy them directly.

Running SQL commands with Sqlectron!

That’s all! You can use this lightweight SQL client to get started with RDS immediately.

How to create atomic counter in AWS DynamoDB with AWS CLI

A step by step procedure to create and update atomic counter in AWS DynamoDB table.

Creating counter in DDB!

First of all, we will see what an atomic counter is and why we need it. We will also check why DynamoDB is chosen for this use case.

What is an atomic counter?

Often you need to track some numerical value, like website visits, that is incremental in nature. Such counters need to be stored in a centralized place, and updates should be atomic. Atomic means one request executes without interfering with another request, i.e. concurrent updates do not clash with each other, and so no data is lost in the process. Since everyone is leveraging ephemeral infrastructure, like EC2 instances being replaced by Auto Scaling groups or containers, storing such a counter locally is not a good idea. So for central storage, DDB is a great choice since it’s an ultra-fast, single-digit-millisecond-latency NoSQL database; perfect for atomic operations with scaling infra/traffic.

Now let’s get into the process of creating this counter and updating it.

Creating DynamoDB table for the counter

  1. Login to DDB console
  2. Click on Create table
  3. Enter Table Name
  4. Enter Primary Key (Partition Key)
  5. We don’t need a sort key here since our table will carry only one counter value. Keep the rest of the settings default and click Create
Create DynamoDB table

You can use the below CloudFormation resource block to create a DDB table:

  CounterTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: kerneltalks-counter
      # On-demand capacity, so no ProvisionedThroughput block is needed
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: ID
          AttributeType: S
      KeySchema:
        - AttributeName: ID
          KeyType: HASH

Make sure you change the TableName to one of your choice. You can also explore the other properties supported by DDB in CloudFormation.

Preparing DynamoDB table for counter updates

Now, you will be presented with newly created table details.

Go to the Items tab and click Create item

Create DynamoDB Item

Add the attributes to the item as below – an ID attribute (String) with the value Counter and a TotalCount attribute (Number) initialized to 0:

Click on Save. Now your DDB table is ready for updating the counter.

The item creation can be done via the AWS CLI as well, using the below command:

$ aws dynamodb put-item --table-name kerneltalks-counter --item '{"ID": { "S": "Counter" }, "TotalCount": { "N": "0" }}'

Updating counter in DDB table

Now you can use the below AWS CLI command to update the counter! Every time you run the command, it will increment the counter by 1. You can code this command/API call into your application at the appropriate place to update the counter.

$ aws dynamodb update-item --table-name kerneltalks-counter --key '{"ID": { "S": "Counter" }}' --update-expression "SET TotalCount = TotalCount + :incr" --expression-attribute-values '{":incr":{"N":"1"}}' --return-values UPDATED_NEW
{
    "Attributes": {
        "TotalCount": {
            "N": "1"
        }
    }
}

The command should return the updated attribute in JSON format. You can format it as text for easy usability in code.

$ aws dynamodb update-item --table-name kerneltalks-counter --key '{"ID": { "S": "Counter" }}' --update-expression "SET TotalCount = TotalCount + :incr" --expression-attribute-values '{":incr":{"N":"1"}}' --return-values UPDATED_NEW --output text
TOTALCOUNT      2

The same can be verified in the console.

Counter in DDB.
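
To read the current counter value without updating it, a get-item call can be used; a minimal sketch –

$ aws dynamodb get-item --table-name kerneltalks-counter --key '{"ID": { "S": "Counter" }}' --query 'Item.TotalCount.N' --output text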

That’s all! Now you have a counter in DDB which can be updated from different sources using API calls.