How to configure kubectl for AWS EKS

Steps to configure the AWS CLI and kubectl for running kubectl commands against EKS clusters.


kubectl is the command-line utility used to interact with Kubernetes clusters. Amazon EKS is AWS's managed Kubernetes service, widely used for running Kubernetes workloads on the AWS cloud. We will go through the steps to set up the kubectl command to work with an AWS EKS cluster. Without further ado, let's get into it.

AWS CLI configuration

Install AWS CLI on your workstation and configure it by running –

# aws configure
AWS Access Key ID [None]: AKIAQX3SNXXXXXUVQ
AWS Secret Access Key [None]: tzS/a1sMDxxxxxxxxxxxxxxxxxxxxxx/D
Default region name [us-west-2]: us-east-1
Default output format [json]: json

If you need to switch roles before you can access your AWS environment, then configure your CLI with roles.

Once configured, verify that your CLI is working fine and reaching the appropriate AWS account.

# aws sts get-caller-identity
{
    "UserId": "AIDAQX3SNXXXXXXXXXXXX",
    "Account": "xxxxxxxxxx",
    "Arn": "arn:aws:iam::xxxxxxxxxx:user/blog-user"
}

kubectl configuration

Install the kubectl command if you have not already. Then update your kubeconfig with the details of the cluster you want to connect to –

# aws eks --region <REGION> update-kubeconfig --name <CLUSTER-NAME>
# aws eks --region us-east-1 update-kubeconfig --name blog-cluster
Added new context arn:aws:eks:us-east-1:xxxxxxxxxx:cluster/blog-cluster to C:\Users\linux\.kube\config

At this point, your kubeconfig points to the cluster of interest. You can execute kubectl commands, and they will run against the cluster you configured above.

# kubectl get pods --all-namespaces
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-66cb55d4f4-hk9p5   0/1     Pending   0          6m54s
kube-system   coredns-66cb55d4f4-wmtvf   0/1     Pending   0          6m54s

I have not added any nodes to my EKS cluster yet, hence the pods are in a Pending state.

If you have multiple clusters configured in kubeconfig, then you must switch the context to the cluster of interest before running kubectl commands. To switch the context –

# kubectl config use-context <CONTEXT-NAME>
# kubectl config use-context arn:aws:eks:us-east-1:xxxxxxxxxx:cluster/blog-cluster
Switched to context "arn:aws:eks:us-east-1:xxxxxxxxxx:cluster/blog-cluster".

You can verify all configured contexts by inspecting the ~/.kube/config file.
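
Alternatively, kubectl can report the contexts directly; both of the commands below are standard kubectl subcommands, and the current context is marked with an asterisk in the first output:

# kubectl config get-contexts
# kubectl config current-context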

Troubleshooting errors

If your IAM user (the one configured in the AWS CLI) is not authorized on the EKS cluster, then you will see this error –

# kubectl get pods --all-namespaces
error: You must be logged in to the server (Unauthorized)

Make sure your IAM user is authorised on the EKS cluster. This can be done by adding the user details under the mapUsers field in the configmap named aws-auth residing in the kube-system namespace. By default, only the IAM identity that created the cluster gets system:masters access, so you will need to configure kubectl with that same IAM user in order to fetch and edit this configmap and add other IAM users to the cluster.

$ kubectl get -n kube-system configmap/aws-auth -o yaml
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::xxxxxxxxxx:role/blog-eks-role
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - userarn: arn:aws:iam::xxxxxxxxxx:user/blog-user
      username: blog-user
      groups:
        - system:masters
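
To add another IAM user, edit the same configmap (using the identity that created the cluster) and append an entry under mapUsers. The user ARN and username below are placeholders:

# kubectl edit -n kube-system configmap/aws-auth

    - userarn: arn:aws:iam::xxxxxxxxxx:user/another-user
      username: another-user
      groups:
        - system:masters

Note that system:masters grants full cluster-admin rights, so map new users to a less privileged group where possible.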

GitBash not prompting for MFA in AWS CLI

A quick post on how to resolve an issue with GitBash that prevents MFA prompts while using the AWS CLI.


Problem

GitBash uses the winpty emulator under the hood to provide a bash experience on Windows. winpty does not work well with the AWS CLI, especially when dealing with MFA prompts. Hence you need to replace it with bash.exe in the shortcut target, and you should be good.

Procedure

Go to the Windows start menu and search for Git Bash. Click on Open file location.

Right click on the shortcut and select Properties

Under Properties, change the target from “C:\Program Files\Git\git-bash.exe” to “C:\Program Files\Git\bin\bash.exe”.

Now launch GitBash and you should be good.

How to resolve the MFA entity already exists error

A quick fix for error MFA entity already exists.


Issue

A user is not able to register an MFA device. When the user tries to assign a new MFA device, IAM throws an error –

This entity already exists. MFADevice entity at the same path and name already exists. Before you can add a new virtual MFA device, ask your administrator to delete the existing device using the CLI or API.

However, if you as an admin (or even the user) check the AWS console, it shows the Assigned MFA device as Not assigned for that user.

Resolution

As an administrator, you need to delete the MFA device (yes, even if it says Not assigned) using the AWS CLI. The person performing this needs the IAM permission iam:DeleteVirtualMFADevice on the given resource to update the IAM user's MFA.
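
If the administrator's identity lacks this permission, a minimal policy statement along the lines below can be attached first (the account number and username in the resource ARN are placeholders):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:DeleteVirtualMFADevice",
            "Resource": "arn:aws:iam::<AWS account number>:mfa/<username>"
        }
    ]
}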

Run the below command from the AWS CLI –

# aws iam delete-virtual-mfa-device --serial-number arn:aws:iam::<AWS account number>:mfa/<username>

where –

  • AWS account number is the account number where the user exists
  • username is the IAM username of that user

This should clear out the error message and the user should be able to register a new MFA device.

How to configure EC2 for Session Manager

A quick reference to configure EC2 for Session Manager in AWS


OK, this must be a very basic post for most of you, and there is a readily available AWS doc for it, but I am just cutting it short to list the steps to achieve the objective quickly. You should go through the official AWS doc to understand all aspects of it, but if you are on the clock, then just follow along and get it set up in no time.

Checklist

Before you start, make sure you checked out these minimum configurations to get going.

  1. Your EC2 instance is running a supported operating system. We are taking the example of Linux here, so all Linux versions that support AWS Systems Manager support Session Manager.
  2. SSM Agent 2.3+ is installed on the system. If not, we have it covered below.
  3. Outbound 443 traffic should be allowed to the below 3 endpoints. You most likely have this covered already, since most setups allow ALL traffic in the outgoing security group rule (a quick connectivity check is sketched after this list) –
    • ec2messages.region.amazonaws.com
    • ssm.region.amazonaws.com
    • ssmmessages.region.amazonaws.com
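
To verify the outbound connectivity from the instance, a simple TCP check against each endpoint works (if nc is available on the instance; us-east-1 below is just an example region, substitute your own):

# nc -zv ssm.us-east-1.amazonaws.com 443
# nc -zv ec2messages.us-east-1.amazonaws.com 443
# nc -zv ssmmessages.us-east-1.amazonaws.com 443

Each command should report a successful connection.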

In a nutshell, point 2 is probably the one you need to verify. If you are using an AWS managed AMI, then you have that covered too! But if you are using a custom-built, home-grown AMI, then that might not be the case.

SSM agent installation

It’s a pretty basic RPM installation, as you would do on any Linux platform. Download the package relevant to your Linux version from the AWS Systems Manager documentation, which also lists the global URLs for the Linux agents.

Run the package installation and the service handler commands with root privileges as below (assuming the downloaded package file is named amazon-ssm-agent.rpm) –

# yum install -y ./amazon-ssm-agent.rpm
# systemctl enable amazon-ssm-agent
# systemctl start amazon-ssm-agent
# systemctl status amazon-ssm-agent

If you do not have access to the EC2 instance (key lost, or EC2 launched without a keypair), then you probably need to re-launch the EC2. If your EC2 is part of an auto-scaling group (ASG), then it makes sense to add these commands to the user-data script of the launch template and launch a new EC2 from the ASG.

Instance role permissions

Now the agent is up and running. The next step is to authorize the AWS Systems Manager service to perform actions on EC2. This is done via an instance role. Create an IAM instance role with the below IAM policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:UpdateInstanceInformation",
                "ssmmessages:CreateControlChannel",
                "ssmmessages:CreateDataChannel",
                "ssmmessages:OpenControlChannel",
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": "*"
        }
    ]
}

You can scope it down to a particular resource if you want. You can even add KMS encryption-related permissions to it if you are planning to encrypt session data using KMS encryption; a sketch is shown below.
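
A minimal sketch of that extra statement, assuming session data is encrypted with a customer managed KMS key (the key ARN is a placeholder); it goes inside the Statement array of the policy above:

        {
            "Effect": "Allow",
            "Action": "kms:Decrypt",
            "Resource": "arn:aws:kms:<region>:<account-id>:key/<key-id>"
        }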

Once done, attach the role to the EC2 instance. If the EC2 instance already has a role attached, then add the above policy to the existing role and you should be good.
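
If you prefer the CLI over the console for this step, the instance profile can be attached roughly as below; the instance ID and profile name are placeholders:

# aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=ssm-instance-profile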


Connecting via Session Manager

Now you are good to test the connection.

  • Log in to the EC2 console.
  • Navigate to Instances and select the respective EC2 instance from the list.
  • Click on the Connect button.
  • Make sure you are on the Session Manager tab and click on Connect.
  • If you still see an error reported on this screen, then give it a minute or two. Sometimes it takes a few seconds for the IAM role permissions to propagate.

A new browser tab will open, and you should see the Linux prompt.


Notice you are logged in as the default user ssm-user. You can switch to the root user using sudo (for example, sudo su -).
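
You can also open the same kind of session from the AWS CLI, provided the Session Manager plugin for the AWS CLI is installed on your workstation; the instance ID below is a placeholder:

# aws ssm start-session --target i-0123456789abcdef0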

There are several benefits to using Session Manager as the standard over key pairs:

  • No need to maintain key files.
  • Avoids the security threats to infrastructure associated with key file management.
  • Access management is easy through IAM.
  • Native AWS feature!
  • Sessions can be logged for audit purposes.

Preparing for Certified Kubernetes Administrator (CKA) exam

A small rundown on CKA preparation.


In this post, I will be sharing various resources to help you prepare for the CKA exam. In addition, feel free to add resources you know in the comments section, which may help fellow readers.

Exam details

  • Offered by: The Cloud Native Computing Foundation (CNCF)
  • Duration: 2 hours
  • Type: Complete tasks on Linux CLI (Practical)
  • Number of questions/tasks: 15-20 (I had 17)
  • Mode: Online proctored
  • Cost: $375 (that includes one free retake). Watch out on LinkedIn or the internet for coupons; they have good deals on Black Friday as well.
  • Result: It will be available within 24 hours of exam completion.
  • You are allowed to open one additional browser tab to access the K8s docs, K8s GitHub, or the K8s blog. You should not click/open any links outside these domains, which includes the K8s forum as well.

Study journey

  • Practise the course labs heavily. You may go through the course quickly to understand the Kubernetes world, but you need to spend more time practising Kubernetes on the CLI.
  • Free online labs are also available for practising.
  • Once you are good with the theory and understand all aspects of the Kubernetes world, labs are the only places where you should spend all of your study time.
  • Once you are through all the scenarios/tasks provided by online courses, you can think of your own custom scenarios and try implementing them.

Tips

Practise! Practise!! Practise!!! The more you are familiar with the CLI and commands, the more time you will save during the exam. In addition, it helps to build your muscle memory for command arguments and gain those extra seconds during the exam.

CKA requires you to complete the given tasks in a Linux terminal (Ubuntu) CLI on a shared Kubernetes cluster setup. So, having a good Linux background is an added plus! It helps you navigate the CLI, edit files, and save a lot of time.

Make use of -h frequently! If you are not sure about the command arguments, use the -h flag, which lists arguments along with example commands. You can directly copy those example commands and edit them accordingly before executing. It's a quick way to get the job done rather than navigating through kubectl commands in the Kubernetes documentation.

Try to complete tasks using imperative commands rather than building spec files.
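
For example, a pod, a deployment, and a service can all be created imperatively with standard kubectl commands (names and image are just illustrations):

# kubectl run nginx-pod --image=nginx
# kubectl create deployment web --image=nginx --replicas=3
# kubectl expose deployment web --port=80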

Read the question carefully and completely before creating any objects. Keep an eye on the namespaces mentioned in the questions. Assume default namespace when no specific namespace is mentioned.

Verify created objects to make sure they carry properties asked in questions. For pods, make sure they reach running state before proceeding.

Setting an alias in the Linux shell is one of the most common tips you will come across on the internet. Use it according to your comfort; I did not use it.
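
If you do choose to use one, a typical alias looks like this (purely illustrative):

$ alias k=kubectl
$ k get pods --all-namespaces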

Always make sure you run the given context commands at the start of each task. It makes sure you are on the right cluster to perform the task.

Always make sure to return to the main terminal if you are doing ssh to other nodes for performing the tasks.

For tasks mentioning sudo -i for root privileges, it's good practice to switch to root as soon as you log in to the respective node, rather than finding out you are not root after running some commands and wasting time there!

If you are not familiar with Linux editors like vi, edit your spec files in the exam-provided notepad and then copy the final version of the config directly into the file on the terminal, rather than running around in Linux editors and wasting time.

Get familiar with copy and paste operations in the terminal. There are different key combinations depending on the operating system; refer to the exam handbook for the same. Then, practise using those key combinations.

Use kubernetes.io/docs heavily during practice. If you are stuck at something, always try to search and get information from Kubernetes official documentation. This will make you comfortable navigating through the documentation site and hence saves some time during the exam. In addition, you will know exact keywords to search and exact links to click on topics you had a hard time studying.

It’s the student’s responsibility not to click/open any other sites than the allowed three. Search in K8s documentation may yield results with links to the K8s forum. You should not be clicking them. Make a habit of checking links before opening to avoid issues during the exams.

Please note that the exam simulator you get along with your exam booking has more challenging questions than the actual exam. They mentioned it explicitly there. So if your morale goes down pretty quickly, then it’s best not to check those questions just before the exam :P. They aim more at getting an in-depth understanding of how things run under the hood.

That’s all I have. All the best!

How to configure switching IAM roles in AWS CLI?

A short howto on configuring AWS CLI to switch roles


Requirement:

You have an AWS account where you need to switch roles before executing things on AWS. Switching is easy on the AWS console, but how do you switch roles in the AWS CLI?

Solution:

Let’s consider the below setup-

  • An AWS IAM account with programmatic access – user101
  • The same IAM account having sts:AssumeRole permissions.
  • An AWS IAM role for the above IAM user to assume (same or cross-account) – role101

Start with configuring the AWS CLI in a standard way.

$ aws configure --profile user101
AWS Access Key ID [None]: AKIAQX3SNXZGUQFOSK4T
AWS Secret Access Key [None]: 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
Default region name [us-east-1]: us-east-1
Default output format [None]: json

Note: It is not a good practice to keep AWS credentials in plain text. Keep them stored in a secure, encrypted way using a credential helper such as aws-vault.

Now, at this point, you must have an AWS credentials file created in the home directory.

$ cd ~/.aws
$ cat credentials
[user101]
aws_access_key_id = AKIAQX3SNXZGUQFOSK4T
aws_secret_access_key = 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
region = us-east-1
output = json

You need to edit the above credentials file to add the IAM role details. Append the below configuration to the file. If you are working with AWS GovCloud, make sure the ARNs have the proper AWS partition defined, e.g. arn:aws-us-gov:x:x:…..

[role101]
role_arn = arn:aws:iam::xxxxxxxxx:role/role101
output = json
source_profile = user101

where –

  • role101 is a role identifier. You can choose any name you like.
  • Mention the correct IAM role ARN.
  • source_profile should use the profile identifier of the user who will assume this role. In our case, it's user101.

Save the file, and you are ready to go.

Test configurations –

$ aws sts get-caller-identity
{
    "UserId": "AIDAQX3SNXZG3Z2AXNIMJ",
    "Account": "xxxxxxxxx",
    "Arn": "arn:aws:iam::xxxxxxxxx:user/user101"
}

$ aws sts get-caller-identity --profile role101
{
    "UserId": "AROAQX3SNXZG6KL4YENFZ:botocore-session-1631087792",
    "Account": "xxxxxxxxx",
    "Arn": "arn:aws:sts::xxxxxxxxx:assumed-role/role101/botocore-session-1631087792"
}

You can see that by using --profile role101, we are assuming the IAM role role101 as the user user101.
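
Any AWS CLI command can now run as the assumed role by passing the same profile; for example, listing S3 buckets (assuming role101 has S3 read permissions):

$ aws s3 ls --profile role101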

AWS CLI configuration for switching roles using MFA

Note: If you are on Windows and using GitBash, refer to the section on configuring GitBash for MFA prompts (covered earlier). It works perfectly in PowerShell.

In some cases, your AWS environment may have MFA restrictions in place where the user user101 must have MFA enabled to switch to the role role101. In such a scenario, your role profile in the credentials file should include the MFA device ARN as well, like below –

[role101]
role_arn = arn:aws:iam::xxxxxxxxx:role/role101
mfa_serial = arn:aws:iam::xxxxxxxxx:mfa/user101
output = json
source_profile = user101

where –

mfa_serial is the ARN of the MFA device of user101.

You will be prompted to supply the MFA code whenever you use profile role101 in AWS CLI commands.

$ aws sts get-caller-identity --profile role101
Enter MFA code for arn:aws:iam::xxxxxxxxx:mfa/user101:
{
    "UserId": "AROAQX3SNXZG6KL4YENFZ:botocore-session-1631089277",
    "Account": "xxxxxxxxx",
    "Arn": "arn:aws:sts::xxxxxxxxx:assumed-role/role101/botocore-session-1631089277"
}

How to find AWS resources that need to be tagged

A quick rundown on how to hunt for AWS resources that need tagging


Tags are among the most important and yet most neglected AWS entities! As an organization's AWS footprint grows, they start to realize the importance of tags, and then come the projects for tagging existing resources!

At this stage, the first question on the table is: how do we search for AWS resources that need tagging? Or, how can we find untagged AWS resources?

It’s a very short process that can be summarised in a single picture!

Searching AWS resources to tag

Breaking it down –

  1. Log in to the AWS Resource Groups console.
  2. On the left-hand side menu, select Tag Editor under Tagging.
  3. You should now have the selection pane on the right-hand side.
  4. Select a particular region or All regions from the Regions drop-down.
  5. Select a specific resource or All supported resource types from the Resource types drop-down.
  6. Tags – Optional: You can specify key/value details to search for specific tags. Since we are searching for resources that are not tagged, let's keep it blank.
  7. Finally, click on the Search resources button and you are done!
  8. You should be presented with a list of AWS resources in the specified regions that need to be tagged, like below.
List of AWS resources to tag

You can export the list to CSV as well for further data analytics.
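
If you prefer the CLI, the Resource Groups Tagging API exposes the same data. A rough sketch that prints the ARNs of resources carrying no tags at all (requires jq; run it per region, since the API is regional):

# aws resourcegroupstaggingapi get-resources --region us-east-1 --output json | jq -r '.ResourceTagMappingList[] | select(.Tags | length == 0) | .ResourceARN'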

Netflix’s ConsoleMe local installation on Linux machine

A step-by-step guide to installing ConsoleMe on an Ubuntu Linux machine


ConsoleMe is an open-source web service published by Netflix. It is designed to make life easy for end-users and cloud administrators. Using ConsoleMe, cloud administrators can manage IAM permissions/credentials for IAM roles, S3 buckets, SQS queues, and SNS topics across multiple AWS accounts from a single interface. It also provides a CLI called weep for AWS credentials management. That's a fair introduction if you were not aware of the tool. Next, let's get into the installation part.

ConsoleMe offers Docker and local installs. We will walk you through the local install in this article.

Pre-requisite:

  • A machine running Ubuntu 19.04+ with root access. I used Ubuntu 20.04 LTS x86.
  • Active and working package manager subscription to install packages
  • Storage requirement: 2GB of disk space
  • An AWS user/role for the ConsoleMe service with appropriate permissions
  • AWS access keys for the above user if you are not using roles. I used keys (steps below)

Installation

We are installing ConsoleMe in the /consoleme directory. If you want to install it in another location, make the necessary changes in the commands below. Here is the list of commands you need to run as root –

apt-get update
apt-get install build-essential libxml2-dev libxmlsec1 libxmlsec1-dev libxmlsec1-openssl musl-dev libcurl4-nss-dev python3-dev pkg-config python3.8-venv awscli docker-compose -y
curl -sL https://deb.nodesource.com/setup_14.x | sudo bash
apt-get install -y nodejs
npm install yarn -g
cd /
git clone https://github.com/Netflix/consoleme.git
cd consoleme
docker-compose -f docker-compose-dependencies.yaml up -d

Here, the first few commands install all the dependencies and related software/tools. Then, we clone the GitHub repo of the tool into /consoleme, and lastly, we bring up the dependency containers.

These are the Redis and DynamoDB containers (plus a DynamoDB admin UI) that ConsoleMe leverages for caching and aggregating the AWS account information. You could use the managed AWS Redis (ElastiCache) and DynamoDB services instead, but for now, we will run these containers locally so that ConsoleMe talks to them rather than the AWS services.

I am not including console outputs here for frequently used commands like package installations.

Make sure both containers are up and running before proceeding to the next step –

root@kerneltalks:/consoleme# docker ps
CONTAINER ID   IMAGE                             COMMAND                  CREATED          STATUS         PORTS                              NAMES
5333cdee2202   cnadiminti/dynamodb-local         "java -jar DynamoDBL…"   10 seconds ago   Up 4 seconds   8000/tcp, 0.0.0.0:8005->8005/tcp   consoleme-dynamodb
19ac354c3d70   redis:alpine                      "docker-entrypoint.s…"   10 seconds ago   Up 4 seconds   0.0.0.0:6379->6379/tcp             consoleme-redis
4cf931d38652   aaronshaf/dynamodb-admin:latest   "node bin/dynamodb-a…"   10 seconds ago   Up 4 seconds   0.0.0.0:8001->8001/tcp             consoleme-dynamodb-admin

Now, you need to prepare the machine to talk to AWS for fetching account details in the upcoming install steps. Ensure that you have set up the account and permissions correctly in IAM (mentioned in the pre-requisites above) to avoid any issues. You can do that by configuring an AWS profile –

root@kerneltalks:/consoleme# aws configure
AWS Access Key ID [None]: AKIAQX3STVKIYRO36XEC
AWS Secret Access Key [None]: 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
Default region name [None]:
Default output format [None]:

Lastly, create a new Python environment and run the final install step. This will take a while to complete since, at the end of the make install command, it also fetches and caches the AWS account details in the local Redis cache –

python3 -m venv env
. env/bin/activate
make install

After successful installation, you should be able to start the application.

Running ConsoleMe

In the current shell, you can run ConsoleMe with the command below. If you are in another shell, activate the Python environment again first –

(env) root@kerneltalks:/consoleme# python consoleme/__main__.py
{"asctime": "2021-07-25T08:32:16Z+0000", "name": "consoleme", "processName": "MainProcess", "filename": "jwt.py", "funcName": "<module>", "levelname": "ERROR", "lineno": 14, "module": "jwt", "threadName": "MainThread", "message": "Configuration key `jwt.secret` is not set. Setting a random secret", "eventTime": "2021-07-25T01:32:16.286230-07:00", "hostname": "kerneltalks", "timestamp": "2021-07-25T08:32:16Z+0000"}
2021-07-25 08:32:17,322 - DEBUG - root - [constants.py:39 - <module>() ] - Leveraging the bundled IAM Definition.
2021-07-25 08:32:17,322 - INFO - root - [iam_data.py:10 - <module>() ] - Leveraging the IAM definition at /consoleme/env/lib/python3.8/site-packages/policy_sentry/shared/data/iam-definition.json
2021-07-25 08:32:17,824 - DEBUG - git.cmd - [cmd.py:817 - execute() ] - Popen(['git', 'version'], cwd=/consoleme, universal_newlines=False, shell=None, istream=None)
2021-07-25 08:32:17,859 - DEBUG - git.cmd - [cmd.py:817 - execute() ] - Popen(['git', 'version'], cwd=/consoleme, universal_newlines=False, shell=None, istream=None)
{"asctime": "2021-07-25T08:32:18Z+0000", "name": "consoleme", "processName": "MainProcess", "filename": "__main__.py", "funcName": "init", "levelname": "DEBUG", "lineno": 57, "module": "__main__", "threadName": "MainThread", "message": "Server started", "eventTime": "2021-07-25T01:32:16.286230-07:00", "hostname": "kerneltalks", "timestamp": "2021-07-25T08:32:18Z+0000"}

But it will exit when you terminate the command or the shell. It's safer to run it in the background or, even better, run it as a Linux service. To run ConsoleMe as a service, create the below two files –

File /usr/bin/consoleme_start.sh

#!/bin/bash
# Relative paths below work because the systemd unit sets WorkingDirectory=/consoleme
. env/bin/activate
python consoleme/__main__.py

File /etc/systemd/system/consoleme.service


[Unit]
Description=Run consoleme service.

[Service]
Type=simple
User=root
WorkingDirectory=/consoleme
ExecStart=/usr/bin/consoleme_start.sh

[Install]
WantedBy=multi-user.target

Assign executable permissions to the startup script –

chmod +x /usr/bin/consoleme_start.sh

Enable and start the service

root@kerneltalks:/consoleme# systemctl enable consoleme
Created symlink /etc/systemd/system/multi-user.target.wants/consoleme.service → /etc/systemd/system/consoleme.service.

root@kerneltalks:/consoleme# systemctl start consoleme

root@kerneltalks:/consoleme# systemctl status consoleme
● consoleme.service - Run consoleme service.
     Loaded: loaded (/etc/systemd/system/consoleme.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2021-07-25 08:35:52 UTC; 7s ago
   Main PID: 14775 (consoleme_start)
      Tasks: 5 (limit: 4706)
     Memory: 159.7M
     CGroup: /system.slice/consoleme.service
             ├─14775 /bin/bash /usr/bin/consoleme_start.sh
             └─14776 python consoleme/__main__.py

Jul 25 08:35:52 kerneltalks systemd[1]: Started Run consoleme service..
Jul 25 08:35:53 kerneltalks consoleme_start.sh[14776]: {"asctime": "2021-07-25T08:35:53Z+0000", "name": "consoleme", "processName": "MainProcess", "filename": "jwt.py", "funcName": "<module>", "levelname": "ERROR", "lineno": 14, "m>
Jul 25 08:35:53 kerneltalks consoleme_start.sh[14776]: 2021-07-25 08:35:53,954 - DEBUG - root - [constants.py:39 - <module>() ] - Leveraging the bundled IAM Definition.
Jul 25 08:35:53 kerneltalks consoleme_start.sh[14776]: 2021-07-25 08:35:53,955 - INFO - root - [iam_data.py:10 - <module>() ] - Leveraging the IAM definition at /consoleme/env/lib/python3.8/site-packages/policy_sentry/shared/data/i>
Jul 25 08:35:54 kerneltalks consoleme_start.sh[14776]: 2021-07-25 08:35:54,354 - DEBUG - git.cmd - [cmd.py:817 - execute() ] - Popen(['git', 'version'], cwd=/consoleme, universal_newlines=False, shell=None, istream=None)
Jul 25 08:35:54 kerneltalks consoleme_start.sh[14776]: 2021-07-25 08:35:54,361 - DEBUG - git.cmd - [cmd.py:817 - execute() ] - Popen(['git', 'version'], cwd=/consoleme, universal_newlines=False, shell=None, istream=None)
Jul 25 08:35:54 kerneltalks consoleme_start.sh[14776]: {"asctime": "2021-07-25T08:35:54Z+0000", "name": "consoleme", "processName": "MainProcess", "filename": "__main__.py", "funcName": "init", "levelname": "DEBUG", "lineno": 57, ">

ConsoleMe GUI

Now that your ConsoleMe service is running, you should load its GUI in a web browser. The service listens on port 8081, so you need to navigate to the server address on port 8081 (for example, http://<server-ip>:8081). Make sure the security group allows 8081 traffic if you are installing on EC2.
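
A quick check that the service is listening before you reach for the browser (plain HTTP is assumed here, matching the default example configuration); any HTTP response proves the service is up:

root@kerneltalks:/consoleme# curl -I http://localhost:8081
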

At this point, ConsoleMe is running with the default open example configuration. It’s very well highlighted on the web app as a warning. It would be best if you were editing this configuration to make your ConsoleMe more secure. ConsoleMe recommends Application Load Balancer authentication for securing your web app GUI. Refer to our next article on how to secure the ConsoleMe web app using ALB authentication.