Steps to configure the CLI for running kubectl commands against EKS clusters.
kubectl is the command-line utility used to interact with Kubernetes clusters. AWS EKS is AWS's managed Kubernetes service, widely used for running Kubernetes workloads on the AWS Cloud. We will walk through the steps to set up the kubectl command to work with an AWS EKS cluster. Without further ado, let's get into it.
AWS CLI configuration
Install the AWS CLI on your workstation and configure it by running –
# aws configure
AWS Access Key ID [None]: AKIAQX3SNXXXXXUVQ
AWS Secret Access Key [None]: tzS/a1sMDxxxxxxxxxxxxxxxxxxxxxx/D
Default region name [us-west-2]: us-east-1
Default output format [json]: json
If you need to assume a role before you can access your AWS environment, configure your CLI with that role.
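For role switching, the CLI can read a profile that assumes the role automatically. A minimal sketch of ~/.aws/config, assuming a role named eks-admin (the profile names, role name, and account ID are placeholders):

```ini
# ~/.aws/config -- profile names, role name, and account ID are placeholders
[profile base]
region = us-east-1

[profile eks-admin]
role_arn = arn:aws:iam::111122223333:role/eks-admin
source_profile = base
region = us-east-1
```

Then pass --profile eks-admin to AWS CLI commands, or export AWS_PROFILE=eks-admin to make it the default.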
Once configured, verify that your CLI is working and reaching the appropriate AWS account –
# aws sts get-caller-identity
{
    "UserId": "AIDAQX3SNXXXXXXXXXXXX",
    "Account": "xxxxxxxxxx",
    "Arn": "arn:aws:iam::xxxxxxxxxx:user/blog-user"
}
kubectl configuration
Install the kubectl command if it is not already installed. Then update your kubeconfig with the details of the cluster you want to connect to –
# aws eks --region <REGION> update-kubeconfig --name <CLUSTER-NAME>
# aws eks --region us-east-1 update-kubeconfig --name blog-cluster
Added new context arn:aws:eks:us-east-1:xxxxxxxxxx:cluster/blog-cluster to C:\Users\linux\.kube\config
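The update-kubeconfig command writes the new context into your kubeconfig and makes it the active one, which you can confirm with –

```shell
# Print the context kubectl will use for subsequent commands
kubectl config current-context
```
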
At this point your kubeconfig points to the cluster of your interest. You can execute kubectl commands, and they will run against the cluster you configured above.
# kubectl get pods --all-namespaces
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-66cb55d4f4-hk9p5   0/1     Pending   0          6m54s
kube-system   coredns-66cb55d4f4-wmtvf   0/1     Pending   0          6m54s
I have not added any nodes to my EKS cluster yet, hence the pods are in a Pending state.
If you have multiple clusters configured in kubeconfig, you must switch context to the cluster of interest before running kubectl commands. To switch context –
# kubectl config use-context <CONTEXT-NAME>
# kubectl config use-context arn:aws:eks:us-east-1:xxxxxxxxxx:cluster/blog-cluster
Switched to context "arn:aws:eks:us-east-1:xxxxxxxxxx:cluster/blog-cluster".
You can verify all configured contexts by examining the ~/.kube/config file.
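Instead of reading the file by hand, kubectl can list the contexts for you –

```shell
# List all configured contexts; the asterisk marks the active one
kubectl config get-contexts
```
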
Troubleshooting errors
If your IAM user (the one configured in the AWS CLI) is not authorized on the EKS cluster, you will see this error –
# kubectl get pods --all-namespaces
error: You must be logged in to the server (Unauthorized)
Make sure your IAM user is authorized on the EKS cluster. This is done by adding the user's details under the mapUsers field of the configmap named aws-auth in the kube-system namespace. Only an already-authorized user can fetch and edit this configmap; by default, AWS grants system:masters access to the IAM identity that created the cluster. So configure kubectl with that same IAM user first, then edit the configmap to add other IAM users to the cluster.
$ kubectl get -n kube-system configmap/aws-auth -o yaml
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::xxxxxxxxxx:role/blog-eks-role
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - userarn: arn:aws:iam::xxxxxxxxxx:user/blog-user
      username: blog-user
      groups:
      - system:masters
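To grant another IAM user access, edit this configmap and append an entry under mapUsers following the pattern above –

```shell
# Opens aws-auth in your default editor; run as an already-authorized user
kubectl edit -n kube-system configmap/aws-auth
```

If you use eksctl, `eksctl create iamidentitymapping --cluster blog-cluster --arn arn:aws:iam::<ACCOUNT-ID>:user/new-user --username new-user --group system:masters` achieves the same without hand-editing YAML (the user ARN and username here are placeholders).
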