Steps to configure CLI for running kubectl commands on EKS clusters.
kubectl is the command-line utility used to interact with Kubernetes clusters. Amazon EKS is AWS's managed Kubernetes service, broadly used for running Kubernetes workloads on AWS Cloud. We will go through the steps to set up the kubectl command to work with an AWS EKS cluster. Without further ado, let's get into it.
AWS CLI configuration
Install AWS CLI on your workstation and configure it by running –
# aws configure
AWS Access Key ID [None]: AKIAQX3SNXXXXXUVQ
AWS Secret Access Key [None]: tzS/a1sMDxxxxxxxxxxxxxxxxxxxxxx/D
Default region name [us-west-2]: us-east-1
Default output format [json]: json
# aws eks --region us-east-1 update-kubeconfig --name blog-cluster
Added new context arn:aws:eks:us-east-1:xxxxxxxxxx:cluster/blog-cluster to C:\Users\linux\.kube\config
At this point, your kubeconfig points to the cluster of your interest. You can execute kubectl commands, and they will run against the cluster you mentioned above.
# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-66cb55d4f4-hk9p5 0/1 Pending 0 6m54s
kube-system coredns-66cb55d4f4-wmtvf 0/1 Pending 0 6m54s
I have not added any nodes to my EKS cluster yet, hence you can see the pods are in a Pending state.
If you have multiple clusters configured in kubeconfig, then you must switch context to the cluster of interest before running kubectl commands. To switch context –
# kubectl config use-context <CONTEXT-NAME>
# kubectl config use-context arn:aws:eks:us-east-1:xxxxxxxxxx:cluster/blog-cluster
Switched to context "arn:aws:eks:us-east-1:xxxxxxxxxx:cluster/blog-cluster".
You can verify all configured contexts by running kubectl config get-contexts or by inspecting the ~/.kube/config file.
Troubleshooting errors
If your IAM user (configured in the AWS CLI) is not authorized on the EKS cluster, then you will see this error –
# kubectl get pods --all-namespaces
error: You must be logged in to the server (Unauthorized)
Make sure your IAM user is authorised on the EKS cluster. This is done by adding the user details under the mapUsers field in the ConfigMap named aws-auth, residing in the kube-system namespace. By default, AWS maps only the IAM identity that built the cluster as system:masters in this ConfigMap. So you have to configure kubectl with that same IAM user first, then edit the ConfigMap to add other IAM users to the cluster.
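As a sketch, the cluster creator can edit the ConfigMap and append an entry under mapUsers like the below; the account ID and username are placeholders –

# kubectl edit configmap aws-auth -n kube-system

data:
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/devuser
      username: devuser
      groups:
        - system:masters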
A quick post on how to resolve an issue with Git Bash that prevents MFA prompts while using the AWS CLI.
Problem
Git Bash uses the winpty terminal emulator under the hood to provide a bash experience on Windows. Winpty does not work well with the AWS CLI, especially when dealing with MFA prompts. Hence you need to launch Git Bash via bash.exe instead, and you should be good.
Procedure
Go to the Windows start menu and search for Git Bash. Click on Open file location.
Right-click on the shortcut and select Properties.
Under Properties, change the target from "C:\Program Files\Git\git-bash.exe" to "C:\Program Files\Git\bin\bash.exe".
A user is not able to register an MFA device. When the user tries to assign a new MFA device, IAM throws an error –
This entity already exists. MFADevice entity at the same path and name already exists. Before you can add a new virtual MFA device, ask your administrator to delete the existing device using the CLI or API.
Meanwhile, if you check the AWS console as an admin, or even as the user, it shows the Assigned MFA device as Not assigned for that user.
Resolution
As an administrator, you need to delete the MFA device (yes, even if it says Not assigned) using the AWS CLI. The caller needs the IAM permission iam:DeleteVirtualMFADevice on the given resource to update the IAM user's MFA.
Run the below command from the AWS CLI –
# aws iam delete-virtual-mfa-device --serial-number arn:aws:iam::<AWS account number>:mfa/<username>
where –
AWS account number is the account number where the user exists
username is the IAM username of that user
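If you want to double-check the stale device before removing it, you can list the virtual MFA devices in the account first; run this from the AWS CLI as an administrator –

# aws iam list-virtual-mfa-devices --assignment-status Any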
This should clear out the error message and the user should be able to register a new MFA device.
A quick reference to configure EC2 for Session Manager in AWS
OK, this must be a very basic post for most of you, and there is a readily available AWS doc for it, but I am cutting it short to list the steps for achieving the objective quickly. You should go through the official AWS doc to understand all aspects of it, but if you are on the clock, then just follow along and get it set up in no time.
Checklist
Before you start, make sure you have these minimum configurations in place to get going.
Your EC2 is running a supported Operating System. We are taking Linux as an example here, so all Linux versions that support AWS Systems Manager support Session Manager.
SSM agent version 2.3 or later is installed on the system. If not, we have it covered below.
Outbound 443 traffic should be allowed to the below 3 endpoints. You probably have this covered already, since most setups have ALL traffic allowed in the outgoing security group rule –
ec2messages.region.amazonaws.com
ssm.region.amazonaws.com
ssmmessages.region.amazonaws.com
In a nutshell, point 2 is probably the one you need to verify. If you are using an AWS managed AMI, then you have that covered too! But if you are using a custom-built, home-grown AMI, then that might not be the case.
SSM agent installation
It's a pretty basic RPM installation, as you would do on any Linux platform. Download the package relevant to your Linux version from here, or use the global URLs for the Linux agents –
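As a sketch for RPM-based distributions, you can install the agent straight from the global S3 URL and enable it as a service (double-check the URL and architecture against the AWS docs for your distro) –

# yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
# systemctl enable amazon-ssm-agent
# systemctl start amazon-ssm-agent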
If you do not have access to the EC2 instance (key lost, or EC2 launched without a keypair), then you probably need to re-launch the EC2. If your EC2 is part of an auto-scaling group (ASG), then it makes sense to add these commands to the user-data script of the launch template and launch a new EC2 from the ASG.
Instance role permissions
Now the agent is up and running. The next step is to authorize the AWS Systems Manager service to perform actions on EC2. This is done via an instance role. Create the IAM instance role with the below IAM policy:
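One way to set this up, as a minimal sketch, is to attach the AWS managed policy AmazonSSMManagedInstanceCore, which covers the Session Manager permissions. The role and profile names below are hypothetical –

# aws iam create-role --role-name ec2-ssm-role --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
# aws iam attach-role-policy --role-name ec2-ssm-role --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
# aws iam create-instance-profile --instance-profile-name ec2-ssm-profile
# aws iam add-role-to-instance-profile --instance-profile-name ec2-ssm-profile --role-name ec2-ssm-role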
You can scope it down to a particular resource if you want. You can even add KMS encryption-related permissions if you plan to encrypt session data using KMS encryption. An example can be found here.
Once done, attach the role to the EC2 instance. If the EC2 already has a role attached, then add the above policy to the existing role and you should be good.
In this post, I will be sharing various resources to help you prepare for the CKA exam. In addition, feel free to add resources you know in the comments section, which may help fellow readers.
Exam details
Offered by: The Cloud Native Computing Foundation (CNCF)
Duration: 2 hours
Type: Complete tasks on Linux CLI (Practical)
Number of questions/tasks: 15-20 (I had 17)
Mode: Online proctored
Cost: $375 (that includes one free retake). Watch out for coupons on LinkedIn and around the internet; there are good deals on Black Friday as well.
Result: It will be available within 24 hours of exam completion.
You are allowed to open one additional browser tab to access the K8s docs, the K8s GitHub, or the K8s blog. You should not click or open any links outside these domains; that includes the K8s forum as well.
Study journey
Having a Linux and container background helps in picking up the Kubernetes world quickly. No prior cloud experience is required.
If you are not familiar with containerization, it's good to look into it before starting with Kubernetes. Docker containers are the best place to get started.
Practise the course labs heavily. You may go through a course quickly to understand the Kubernetes world, but you need to spend more time practising Kubernetes on the CLI.
Once you are good with the theory and understand all aspects of the Kubernetes world, labs are the only place you should spend the rest of your study time.
Once you are through all the scenarios/tasks provided by the online courses, think up your own custom scenarios and try implementing them.
Tips
Practise! Practise!! Practise!!! The more you are familiar with the CLI and commands, the more time you will save during the exam. In addition, it helps to build your muscle memory for command arguments and gain those extra seconds during the exam.
CKA requires you to complete the given tasks in a Linux (Ubuntu) terminal on a shared Kubernetes cluster setup. So a good Linux background is an added plus! It helps you navigate the CLI, edit files, and save a lot of time.
Make use of -h frequently! If you are not sure about a command's arguments, use the -h flag, which lists the arguments along with example commands. You can copy those example commands and edit them accordingly before executing. That is a quicker way to get the job done than navigating through kubectl commands in the Kubernetes documentation.
Try to complete tasks using imperative commands rather than building spec files (see the example commands after this list).
Read the question carefully and completely before creating any objects. Keep an eye on the namespaces mentioned in the questions. Assume default namespace when no specific namespace is mentioned.
Verify created objects to make sure they carry properties asked in questions. For pods, make sure they reach running state before proceeding.
Setting alias in Linux shell is one of the famous tips you will come across over the internet. Use it according to your comfort. I did not use it.
Always make sure you run the given context commands at the start of each task. It makes sure you are on the right cluster to perform the task.
Always make sure to return to the main terminal if you ssh into other nodes to perform tasks.
For tasks mentioning sudo -i for root privileges, it's good practice to switch to root as soon as you log in to the respective node, rather than finding out you are not root after running some commands and losing time there!
If you are not familiar with Linux editors like vi, edit your spec files in the exam-provided notepad and then copy the final version of the config into the file directly on the terminal, rather than running around in Linux editors and wasting time.
Get familiar with copy and paste operations in the terminal. There are different key combinations depending on the operating system; refer to the exam handbook for the same. Then practise using those key combinations.
Use kubernetes.io/docs heavily during practice. If you are stuck on something, always try to search and get the information from the official Kubernetes documentation. This will make you comfortable navigating the documentation site and hence save some time during the exam. In addition, you will know the exact keywords to search and the exact links to click for topics you had a hard time studying.
It's the student's responsibility not to click or open any sites other than the three allowed. A search in the K8s documentation may yield results with links to the K8s forum; you should not click them. Make a habit of checking links before opening them to avoid issues during the exam.
Please note that the exam simulator you get along with your exam booking has more challenging questions than the actual exam; they mention this explicitly there. So if your morale drops easily, it's best not to attempt those questions just before the exam :P. They aim more at building an in-depth understanding of how things run under the hood.
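To illustrate the imperative style mentioned in the tips above, here are a few typical commands; the object names and images are just examples –

# kubectl run nginx-pod --image=nginx --restart=Never
# kubectl create deployment web --image=nginx --replicas=3
# kubectl expose deployment web --port=80 --target-port=80 --type=ClusterIP
# kubectl create deployment web --image=nginx --dry-run=client -o yaml > web.yaml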
A short howto on configuring AWS CLI to switch roles
Requirement:
You have an AWS account setup that requires switching roles before executing things on AWS. It's easy on the AWS console, but how do you switch roles in the AWS CLI?
Solution:
Let’s consider the below setup-
AWS IAM account with programmatic access – user101
The same IAM account having sts:AssumeRole permissions.
AWS IAM role for the above IAM user to assume (same or cross-account) – role101
Start with configuring the AWS CLI in a standard way.
$ aws configure --profile user101
AWS Access Key ID [None]: AKIAQX3SNXXXXXXXXXXX
AWS Secret Access Key [None]: 33hjtNxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [us-east-1]: us-east-1
Default output format [None]: json
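Next, define the role profile that references the user profile as its credentials source. This goes in the ~/.aws/config file; a minimal sketch for the setup above (replace the account number) –

[profile role101]
role_arn = arn:aws:iam::<AWS account number>:role/role101
source_profile = user101

Any command run with --profile role101 will now assume the role first, for example: aws s3 ls --profile role101.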
In some cases, your AWS environment may have MFA restrictions in place, where the user user101 must use MFA to switch to the role role101. In such a scenario, your role profile in the config/credentials file should include the MFA device ARN as well, like below –
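A sketch of the same role profile with the MFA device ARN added; the CLI will then prompt for the MFA token code when it assumes the role –

[profile role101]
role_arn = arn:aws:iam::<AWS account number>:role/role101
source_profile = user101
mfa_serial = arn:aws:iam::<AWS account number>:mfa/user101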
A quick rundown on how to hunt down AWS resources that need tagging
Tags are among the most important, and equally neglected, AWS entities! As an organization's AWS footprint grows, it starts to realize the importance of tags, and then come the projects for tagging existing resources!
At this stage, the first question on the table is: how do we search for AWS resources that need tagging? Or, how can we find untagged AWS resources?
It’s a very short process that can be summarised in a single picture!
In the AWS Resource Groups console, on the left-hand side menu, select Tag Editor under Tagging.
You should now see the selection pane on the right-hand side.
Select a particular region, or All regions, from the Regions drop-down.
Select a specific resource, or All supported resource types, from the Resource types drop-down.
Tags – Optional: You can specify key/value details to search for specific tags. Since we are searching for resources that are not tagged, let's keep it blank.
Finally, click on the Search resources button and you are done!
You should be presented with a list of AWS resources in the specified regions that need to be tagged, like below.
You can export the list to CSV as well for further data analysis.
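If you prefer the CLI, the Resource Groups Tagging API offers a related view. Note its limitation: it only returns resources that are or were previously tagged, so resources that were never tagged won't appear; treat this as a complement to the Tag Editor search rather than a replacement –

# aws resourcegroupstaggingapi get-resources --region us-east-1 --query 'ResourceTagMappingList[?length(Tags)==`0`].ResourceARN'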
A quick article on how to configure ALB Auth via Amazon Cognito for ConsoleMe webapp
In our last article, we looked at Netflix’s IAM management tool ConsoleMe. We installed it on the Ubuntu Linux machine with the Local install method, and we got it up and running with the default example configuration. In this article, we will walk you through the process of configuring ALB authentication for the ConsoleMe webapp.
With the default example configuration, the ConsoleMe webapp opens up without any authentication. However, since ConsoleMe will manage your AWS accounts' IAM, it's not safe to keep your keys to the kingdom open on the internet without any authentication mechanism in place. Hence, we will protect it with the ALB auth method.
ConsoleMe supports webapp authentication via –
ALB Auth (Recommended)
Google groups
OIDC/OAuth2
SAML
Headers
As recommended by ConsoleMe, we will move ahead with ALB Auth.
Let’s get into it.
Pre-requisites
ConsoleMe is up and running
ALB is configured to listen on HTTPS, with a target group configured on port 8081 and the ConsoleMe instance as a target
The above setup is working correctly, and you are able to open the ConsoleMe webapp using the ALB DNS name or the DNS alias you configured for the ALB.
Before you proceed, you need to make a few configuration changes in the Amazon Cognito user pool if you followed the above link to create it.
Edit, or make sure you have, the below configurations in the Cognito user pool's App client settings:
Callback URLs are set to http://DNS-NAME/auth and http://DNS-NAME/oauth2/idpresponse, where DNS-NAME is the DNS name of the ALB or the alias defined for the DNS name.
Enable Authorization code grant
Allowed OAuth scopes have email, openid, and profile enabled.
Apart from the steps in the above link, you need to add extra rules to the HTTPS listener that forward the below path patterns directly to the target group, bypassing Cognito authentication.
/api/v1/get_roles*
/api/v2/mtls/roles/*
/api/v1/get_credentials*
/api/v1/myheaders/?
/api/v2/get_resource_url*
/noauth/v1/challenge_poller/*
/noauth/v1/challenge_generator/*
ConsoleMe leverages these path patterns to perform CLI actions and authentication.
After adding them, your listener should look like –
Now, the final step is the custom configuration on the ConsoleMe side. As you know from the ConsoleMe installation, it runs with the default example configuration. We should override that open configuration with one that supports ALB Auth. Copy the sample configuration file from GitHub here.
Save this file in the installation directory. We installed ConsoleMe in the /consoleme directory, so it should be saved as /consoleme/consoleme.yaml.
Edit the below parameters in the configuration file –
application_admin: Email that will receive the approval requests
metadata_url: Replace the region and the Cognito pool ID.
is_example_config: false
ses: Edit if you are using SES
aws: Fill in the relevant details
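A sketch of how those edits might look in /consoleme/consoleme.yaml; the email address, region, and pool ID below are placeholders –

application_admin: admin@example.com
metadata_url: https://cognito-idp.us-east-1.amazonaws.com/us-east-1_XXXXXXXXX/.well-known/openid-configuration
is_example_config: false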
Restart the application –
systemctl restart consoleme
OR
python consoleme/__main__.py
Now your application is reading the newly created configuration file. Next, open the ALB's DNS/alias DNS, and you will be prompted to log in via Cognito. We discussed this part in an earlier article on Cognito authentication.
Once you are successfully authenticated, you should see the ConsoleMe console with custom config!
Notice that the example configuration notice has vanished now. Also, you can see the Cognito user's email as the logged-in user in ConsoleMe!
We successfully enabled ALB Auth for securing the ConsoleMe webapp!
A step by step guide to install ConsoleMe on Ubuntu Linux machine
ConsoleMe is an open-source web service published by Netflix. It is designed to make life easy for end-users and cloud administrators. Using ConsoleMe, cloud administrators can manage IAM permissions/credentials for IAM roles, S3 buckets, SQS queues, and SNS topics across multiple AWS accounts from a single interface. It also provides CLI called weep for AWS credentials management. That’s a fair introduction if you are not aware of the tool. Next, let’s get into the installation part.
ConsoleMe offers Docker and local installs. We will walk you through the local install in this article.
Pre-requisite:
A machine running Ubuntu 19.04+ with root access. I used Ubuntu 20.04 LTS x86.
Active and working package manager subscription to install packages
AWS access keys for above user if you are not using roles. I used keys (steps below)
Installation
We are installing ConsoleMe in the /consoleme directory. If you want to install in another location, make the necessary changes in the commands below. Let me give you the list of commands you need to run as root –
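A sketch of the typical sequence on Ubuntu 20.04, assuming the dependency compose file shipped in the ConsoleMe repo –

# apt-get update
# apt-get install -y build-essential python3-dev python3-venv pkg-config git awscli docker.io docker-compose
# git clone https://github.com/Netflix/consoleme.git /consoleme
# cd /consoleme
# docker-compose -f docker-compose-dependencies.yaml up -d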
Here, the first few commands install all the dependencies and related software/tools. Then we clone the GitHub repo of the tool into /consoleme, and lastly, we run the containers.
These are the Redis and DynamoDB containers that ConsoleMe leverages for caching and aggregating the AWS account information. You can make use of the AWS-hosted Redis and DynamoDB services instead, but for now we will run these containers locally so that ConsoleMe talks to them rather than the AWS services.
I am avoiding putting up console outputs for frequently used commands like package installations etc., here.
Make sure both containers are up and running before proceeding to the next step –
root@kerneltalks:/consoleme# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5333cdee2202 cnadiminti/dynamodb-local "java -jar DynamoDBL…" 10 seconds ago Up 4 seconds 8000/tcp, 0.0.0.0:8005->8005/tcp consoleme-dynamodb
19ac354c3d70 redis:alpine "docker-entrypoint.s…" 10 seconds ago Up 4 seconds 0.0.0.0:6379->6379/tcp consoleme-redis
4cf931d38652 aaronshaf/dynamodb-admin:latest "node bin/dynamodb-a…" 10 seconds ago Up 4 seconds 0.0.0.0:8001->8001/tcp consoleme-dynamodb-admin
Now you need to prepare the machine to talk to AWS for fetching account details in the upcoming install steps. Ensure that you have set up the account and permissions correctly in IAM (mentioned in the pre-requisites above) to avoid any issues. You can do that by configuring an AWS profile –
root@kerneltalks:/consoleme# aws configure
AWS Access Key ID [None]: AKIAQX3STXXXXXXXXXXX
AWS Secret Access Key [None]: irxaIexxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]:
Default output format [None]:
Lastly, create a new Python environment and run the final install step. This will take a while to complete since, at the end of the make install command, it also fetches and caches the AWS account details in the local Redis cache –
python3 -m venv env
. env/bin/activate
make install
After successful installation, you should be able to start the application.
Running ConsoleMe
In the current shell, you can run ConsoleMe with the below command. If you are in another shell, activate the Python environment again first –
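. env/bin/activate
python consoleme/__main__.py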
But it will exit when you terminate the command or the shell. It's better to run it in the background or, even better, run it as a Linux service. To run ConsoleMe as a service, create the below two files –
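A minimal sketch of such a systemd unit, saved as, for example, /etc/systemd/system/consoleme.service; the paths assume the /consoleme install and the Python venv created above –

[Unit]
Description=ConsoleMe IAM management webapp
After=network.target docker.service

[Service]
WorkingDirectory=/consoleme
ExecStart=/consoleme/env/bin/python /consoleme/consoleme/__main__.py
Restart=on-failure

[Install]
WantedBy=multi-user.target

Reload systemd and start the service with systemctl daemon-reload followed by systemctl enable --now consoleme.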
Now that your ConsoleMe service is running, you should load its GUI in a web browser. The service listens on port 8081, so navigate to the server address with port 8081. Make sure the security group allows 8081 traffic if you are installing on EC2.
At this point, ConsoleMe is running with the default open example configuration. This is prominently highlighted on the webapp as a warning. You should edit this configuration to make your ConsoleMe more secure. ConsoleMe recommends Application Load Balancer authentication for securing the webapp GUI. Refer to our next article on how to secure the ConsoleMe webapp using ALB authentication.
Navigate to Load Balancing > Load Balancers in the left sidebar menu.
On the Load balancers page, select the Application Load Balancer that needs to be configured.
Click on the Listeners tab in the details pane below.
Click on View/Edit rules against the HTTPS 443 listener.
You should see the editor window, where you need to click on the pencil icons to open the rule editor.
Now, click on the + Add action button. Select Authenticate from the drop-down. The rule will be listed in the editor, where you need to select from the drop-downs –
Cognito user pool
Client ID
Once you select the Cognito user pool, the Client ID values from that pool will be populated accordingly; choose the respective one. Keep the other options untouched unless you have other requirements.
The new Authenticate action will be prioritized as rule 1, and the existing Forward to action will move down to number 2. Your rule window should look like this –
Click on Update to save the rule.
If you are facing issues populating the User pool ID and seeing a Too many requests error in the console, then you can opt to re-create the ALB with this configuration using the AWS CLI or CloudFormation. Note that using the AWS CLI, you cannot edit the default rule. If you already have the ALB created from a CloudFormation template, then you just need to tweak your template and add the below code for the ALB listener resource.
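A sketch of the listener resource with the authenticate-cognito action; all the !Ref/!GetAtt names refer to resources and parameters of a hypothetical template –

HttpsListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref LoadBalancer
    Port: 443
    Protocol: HTTPS
    Certificates:
      - CertificateArn: !Ref CertificateArn
    DefaultActions:
      - Type: authenticate-cognito
        Order: 1
        AuthenticateCognitoConfig:
          UserPoolArn: !GetAtt UserPool.Arn
          UserPoolClientId: !Ref UserPoolClient
          UserPoolDomain: !Ref UserPoolDomain
      - Type: forward
        Order: 2
        TargetGroupArn: !Ref TargetGroup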
Make sure your HTTP listener rule (port 80) is set to redirect to HTTPS.
Now the ALB is ready to authenticate users before their requests are forwarded to the target group's targets. Grab the ALB DNS name from the Load balancers page. If you have a CNAME/ALIAS entry for the ALB, then you should use the custom DNS name you defined.
Navigate to Load Balancing > Load Balancers in the left sidebar menu.
On the Load balancers page, select the Application Load Balancer that needs to be configured.
Click on the Description tab in the details pane below.
Copy the DNS name.
Paste the DNS name into the browser, and you should be redirected to the authentication page from Amazon Cognito. You can customise this page, along with its domain, as we explained in our earlier article on the Amazon Cognito user pool.
Notice that we configured a custom domain for Amazon Cognito, hence our request is served from the custom domain auth.kerneltalks.com. If you observe the URL carefully, you can see the Client ID, Callback URL, etc. configured in Amazon Cognito when we created the user pool.
For any Cognito user, the default behaviour is to force a password change on the first login, which we observed here –
During the password change, you can even see that the password policy applied while creating the Amazon Cognito user pool is being enforced –
For testing purposes, if you want to bypass the above password change procedure, you can set the password from the AWS CLI using the below command –
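A sketch with placeholder pool ID, username, and password; the caller needs the cognito-idp:AdminSetUserPassword permission –

# aws cognito-idp admin-set-user-password --user-pool-id <USER-POOL-ID> --username <USERNAME> --password '<NEW-PASSWORD>' --permanent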
Once the password is changed successfully, you will be redirected to the Callback URL configured in Amazon Cognito. We configured it as the ALB DNS name, so we were redirected to it, and the backend EC2 target served us a sample webserver page!
Thus we configured Amazon Cognito authentication on the Application Load Balancer and secured our targets!