Setting up WSL for Sysadmin work

A list of tools and configurations to make sysadmin life easy on a Windows workstation!

Linux lovers on Windows!

This article is intended for sysadmins who use Windows workstations for their job and yet would love to have a Linux experience on them. Moreover, if you interact with CLI-based tools like the AWS CLI and Git on a daily basis, it's best suited for you. I list all the tools and their respective configurations you should have in your arsenal to make your journey peaceful, less frustrating, and free of non-Linux workstation issues. I expect the audience to be comfortable with Linux.

Without further ado, let's get started.

Windows Subsystem for Linux

First of all, let's get Linux on Windows 🙂 WSL is a Windows feature available from Windows 10 onwards (WSL install steps). Install the latest (at the time of writing) Ubuntu 20.04 LTS from the Microsoft Store. Post-installation, you can run it just like other Windows apps. At first login, you will be prompted to set a username and password. This user is configured to switch to root using sudo.

Now you have a Linux subsystem running on your Windows! Let's move on and configure it to ease up daily activities.

Install the necessary packages using apt-get. I am listing the frequently useful ones here for your quick reference –
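
The exact list depends on your workflow; as an illustration, a set of common Ubuntu packages I would reach for (not a prescribed list) –

# apt-get update
# apt-get install -y git vim curl wget unzip jq python3-pip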

I even configured the WSL normal user to perform passwordless sudo to root at login, to save the hassle of typing the command and password to switch to root. I love working at the root # prompt!
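
A minimal sketch of one way to do this, assuming your WSL user is called wsluser (a placeholder name): allow passwordless sudo via a sudoers drop-in, then switch straight to root from the user's shell profile.

# echo 'wsluser ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/wsluser   # run once as root; "wsluser" is a placeholder
# echo 'sudo -i' >> /home/wsluser/.bashrc                          # auto-switch to a root shell at login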

Avoid sound beeps from Linux terminal

With WSL, one thing you might like to avoid is the workstation speaker beeps/bells caused by the Linux terminal prompt or the vi editor. Here is how you can avoid them:

# echo set bell-style none >>/etc/inputrc # Stops prompt bells
# echo set visualbell >> ~/.vimrc # Stops vi bells

Setting up Git on WSL

Personal Access Tokens (PAT) or SSH keys can be leveraged for configuring Git on WSL. I prefer to use SSH keys, so I am listing those steps here –

  • Create SSH keys and add them to your GitHub account. Steps here.
  • Authorize your organizations for the public key you are uploading by visiting the key's settings on GitHub.
  • Start the ssh-agent and add the key identity at login via your user/shell profile. A quick-and-dirty way to do it in bash is to add the below lines to the ~/.bashrc file.
eval "$(ssh-agent -s)"
ssh-add /root/.ssh/git_id_rsa
  • Add an alias for your Git folder on the Windows drive so that you can navigate to it quickly when running Git commands like repo clone. It can be done by adding the below command to your user/shell profile. You can choose an alias (gitdir) of your own choice and change the destination cd <path> too.
alias gitdir='cd /mnt/c/Users/<username>/Downloads/Github'    

Setting up prompt to show current Git branch

It's easy. You just need to tweak your PS1 prompt with the git branch command output!

The git branch output looks like this –

# git branch
* master

With the help of sed, you can extract the branch name from it. You also want to redirect errors (the command will fail in a non-Git directory), and add brackets around the branch name for the same look as the Git Bash prompt. That sums up to the below code –

# git branch 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/(\1)/'
(master)

Add this to a function and call the function in your PS1! Ta-da. Below is a sample prompt with colours from Ubuntu. Don't forget to set this in your shell profile (e.g. ~/.bashrc) so that it is loaded on login.

git_branch() {
  git branch 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/(\1)/'
}
export PS1="\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\] \[\033[00;32m\]\$(git_branch)\[\033[00m\]# "

Code platform

Oh yes, even sysadmins code for their automation, and with IaC being hot in the market, it's essential for sysadmins to code as well. Since this article is intended for Windows users, Microsoft Visual Studio Code is an undefeated entry here! It's a superb code editor whose numerous plugins make coding comfortable.

Tweaking VS Code for a PuTTY-like experience

PuTTY is the preferred tool for SSHing in the Linux world. The beauty of PuTTY lies in its copy-paste capabilities. The same capabilities can be configured in the VS Code terminal.

Head to the terminal settings by entering Terminal: Configure Terminal Settings in the command palette (Ctrl + Shift + P). On the settings screen, set the below options –

MS VS Code settings for PuTTY copy-paste behaviour!
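
If you prefer editing settings.json directly, PuTTY-style behaviour boils down to copy-on-selection and right-click paste; a sketch of the likely equivalent entries –

{
    "terminal.integrated.copyOnSelection": true,
    "terminal.integrated.rightClickBehavior": "paste"
}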

Setting up VS Code to launch from WSL

Since we have already configured Git on WSL, it makes sense to run the code . command in WSL directly from the Git directory and have VS Code start on the Windows workstation. For that, you just need to alias the code command to the code.exe file with its absolute Windows path!

If you have installed VS Code with the default config, then the below command in your user/shell profile should do the trick.

alias code='/mnt/c/Users/<username>/AppData/Local/Programs/Microsoft\ VS\ Code/code.exe'

Code linters

There are two ways you can have your code linted locally before you commit it to Git.

  1. Install the respective code linter binaries/packages on WSL. It's Linux! (See the example after this list.)
  2. Install code linters in VS Code if an appropriate plugin is available.
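
For example, if you mostly lint shell scripts and YAML, option 1 on WSL could be as simple as (package choice is illustrative) –

# apt-get install -y shellcheck yamllint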

Running Docker on WSL without installing Docker Desktop for Windows

With WSL version 2, one can run Docker on WSL without installing Docker Desktop for Windows. The Docker installation inside WSL is the same as on any other Linux system.

Once installed, make sure you are running on WSL version 2. If not, upgrade to WSL 2.
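
You can check the current version of your distro from PowerShell; the output shown here is only illustrative –

> wsl --list --verbose
  NAME            STATE           VERSION
* Ubuntu-20.04    Running         1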

Convert the current WSL distro to make use of WSL 2 using the command in PowerShell –

> wsl --set-version <distro-name> 2
## Example wsl --set-version Ubuntu-20.04 2

Now, launch WSL and start Docker by invoking the /usr/bin/dockerd binary! You can set an alias for dockerd & to start it quickly in the background.
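
For example, a hypothetical alias in your shell profile (the name startdocker and the output redirection are my additions, not part of the original setup) –

alias startdocker='/usr/bin/dockerd > /dev/null 2>&1 &'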

You can also set up cron so that it starts at boot. Note: it did not work for me in WSL.

@reboot /usr/bin/dockerd &

Or, you can add the below code to your login profile (e.g. the .bashrc file) so that Docker runs at your login.

# Start dockerd only if it is not already running
if ! pgrep -x dockerd > /dev/null; then
    /usr/bin/dockerd &
fi

If you have more tips please let us know in the comments below!

Kubernetes tools

Install a text-based UI tool for managing K8s clusters: K9s. A simple installation with the standalone binary can be done using the below commands –

# wget -qO- https://github.com/derailed/k9s/releases/download/v0.25.18/k9s_Linux_x86_64.tar.gz | tar zxvf -  -C /tmp/
# mv /tmp/k9s /usr/local/bin

You need to set the kubectl context from the CLI first and then run the k9s command.
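
For example (the context name is whatever is configured in your kubeconfig) –

# kubectl config use-context <CONTEXT-NAME>
# k9s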

How to install Cluster Autoscaler on AWS EKS

A quick rundown on how to install Cluster Autoscaler on AWS EKS.

CA on EKS!

What is Cluster Autoscaler (CA)

Cluster Autoscaler is not a new word in the Kubernetes world. It's a program that scales the Kubernetes cluster out or in as per capacity demands. It is available on GitHub here.

For scale-out actions, it looks for any unschedulable pods in the cluster and scales out to make sure they can be scheduled. With default settings, CA checks every 10 seconds, so it detects and acts on a scale-out need in about 10 seconds.

For scale-in actions, it watches nodes for their utilization, and any underutilized node is elected for scale-in. The elected node has to remain un-needed for 10 minutes before CA terminates it.

CA on AWS EKS

As you now know, CA's core functionality is spawning new nodes and terminating un-needed ones, so it must have access to the underlying infrastructure to perform these actions.

In AWS EKS, Kubernetes nodes are EC2 or Fargate compute. Hence, a Cluster Autoscaler running on an EKS cluster must have access to the respective service APIs to perform scale-out and scale-in. This can be achieved by creating an IAM role with appropriate IAM policies attached to it.

Cluster Autoscaler should run as a Kubernetes deployment in its own namespace (kube-system by default) on the same EKS cluster. Let's look at the installation.

How to install Cluster Autoscaler on AWS EKS

Creating IAM role

The Autoscaler's IAM role needs an IAM policy attached to it with the below permissions –

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "sts:AssumeRole",
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:DescribeTags",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup",
                "ec2:DescribeLaunchTemplateVersions"
            ],
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}

You will need to use this policy's ARN in the eksctl command. Also, make sure you have an IAM OIDC provider associated with your EKS cluster. Read more in detail here.
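
If the policy is not created yet, one way to create it from the AWS CLI (the JSON file name here is illustrative) –

# aws iam create-policy --policy-name blog-eks-policy --policy-document file://cluster-autoscaler-policy.json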

As mentioned above, we need an IAM role in place that can be leveraged by Cluster Autoscaler to perform resource creation or termination on AWS services like EC2. It can be done manually, but it's recommended to do it using the eksctl command for its convenience and correctness! It takes care of the trust relationship policy and related conditions while setting up the role. If you do not prefer eksctl, then refer to this document to create it using the AWS CLI or console.

You need to run it from the terminal where AWS CLI is configured.

# eksctl create iamserviceaccount --cluster=<CLUSTER-NAME> --namespace=<NAMESPACE> --name=cluster-autoscaler --attach-policy-arn=<MANAGED-POLICY-ARN> --override-existing-serviceaccounts --region=<CLUSTER-REGION> --approve

where –

  • CLUSTER-NAME: Name of the EKS Cluster
  • NAMESPACE: Namespace under which you plan to run CA. Preferably kube-system
  • CLUSTER-REGION: Region in which EKS Cluster is running
  • MANAGED-POLICY-ARN: IAM policy ARN created for this role
# eksctl create iamserviceaccount --cluster=blog-cluster --namespace=kube-system --name=cluster-autoscaler --attach-policy-arn=arn:aws:iam::xxxxxxxxxx:policy/blog-eks-policy --override-existing-serviceaccounts --region=us-east-1 --approve
2022-01-26 13:45:11 [ℹ]  eksctl version 0.80.0
2022-01-26 13:45:11 [ℹ]  using region us-east-1
2022-01-26 13:45:13 [ℹ]  1 iamserviceaccount (kube-system/cluster-autoscaler) was included (based on the include/exclude rules)
2022-01-26 13:45:13 [!]  metadata of serviceaccounts that exist in Kubernetes will be updated, as --override-existing-serviceaccounts was set
2022-01-26 13:45:13 [ℹ]  1 task: {
    2 sequential sub-tasks: {
        create IAM role for serviceaccount "kube-system/cluster-autoscaler",
        create serviceaccount "kube-system/cluster-autoscaler",
    } }
2022-01-26 13:45:13 [ℹ]  building iamserviceaccount stack "eksctl-blog-cluster-addon-iamserviceaccount-kube-system-cluster-autoscaler"
2022-01-26 13:45:14 [ℹ]  deploying stack "eksctl-blog-cluster-addon-iamserviceaccount-kube-system-cluster-autoscaler"
2022-01-26 13:45:14 [ℹ]  waiting for CloudFormation stack "eksctl-blog-cluster-addon-iamserviceaccount-kube-system-cluster-autoscaler"
2022-01-26 13:45:33 [ℹ]  waiting for CloudFormation stack "eksctl-blog-cluster-addon-iamserviceaccount-kube-system-cluster-autoscaler"
2022-01-26 13:45:50 [ℹ]  waiting for CloudFormation stack "eksctl-blog-cluster-addon-iamserviceaccount-kube-system-cluster-autoscaler"
2022-01-26 13:45:52 [ℹ]  created serviceaccount "kube-system/cluster-autoscaler"

The above command prepares a CloudFormation template and deploys it in the same region. You can visit the CloudFormation console and check it.

Installation

If you choose to run CA in a different namespace by defining a custom namespace in the manifest file, then replace kube-system with the appropriate namespace name in all the below commands.

Download and prepare your Kubernetes manifest file.

# curl -o cluster-autoscaler-autodiscover.yaml https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
# sed -i 's/<YOUR CLUSTER NAME>/cluster-name/g' cluster-autoscaler-autodiscover.yaml

Replace cluster-name with EKS cluster name.

Apply the manifest to your EKS cluster. Make sure you have the proper context set for your kubectl command so that it targets the expected EKS cluster.

# kubectl apply -f cluster-autoscaler-autodiscover.yaml
serviceaccount/cluster-autoscaler configured
clusterrole.rbac.authorization.k8s.io/cluster-autoscaler created
role.rbac.authorization.k8s.io/cluster-autoscaler created
clusterrolebinding.rbac.authorization.k8s.io/cluster-autoscaler created
rolebinding.rbac.authorization.k8s.io/cluster-autoscaler created
deployment.apps/cluster-autoscaler created

Add an annotation to the cluster-autoscaler service account with the ARN of the IAM role we created in the first step. Replace ROLE-ARN with the IAM role ARN.

# kubectl annotate serviceaccount cluster-autoscaler -n kube-system eks.amazonaws.com/role-arn=<ROLE-ARN>
$ kubectl annotate serviceaccount cluster-autoscaler -n kube-system eks.amazonaws.com/role-arn=arn:aws:iam::xxxxxxxxxx:role/eksctl-blog-cluster-addon-iamserviceaccount-Role1-1X55OI558WHXF --overwrite=true
serviceaccount/cluster-autoscaler annotated

Patch CA to add the eviction-related annotation –

# kubectl patch deployment cluster-autoscaler -n kube-system -p '{"spec":{"template":{"metadata":{"annotations":{"cluster-autoscaler.kubernetes.io/safe-to-evict": "false"}}}}}'
deployment.apps/cluster-autoscaler patched

Edit the CA container command to add the below two arguments –

  • --balance-similar-node-groups
  • --skip-nodes-with-system-pods=false
# NEW="        - --balance-similar-node-groups\n        - --skip-nodes-with-system-pods=false"
# kubectl get -n kube-system deployment.apps/cluster-autoscaler -o yaml | awk "/- --node-group-auto-discovery/{print;print \"$NEW\";next}1" > autoscaler-patch.yaml
# kubectl patch deployment.apps/cluster-autoscaler -n kube-system --patch "$(cat autoscaler-patch.yaml)"
deployment.apps/cluster-autoscaler patched

Make sure the CA container image in your deployment definition is the latest one. If not, you can choose a new image by running –

# kubectl set image deployment cluster-autoscaler -n kube-system cluster-autoscaler=k8s.gcr.io/autoscaling/cluster-autoscaler:vX.Y.Z

Replace X.Y.Z with the latest version.

$ kubectl set image deployment cluster-autoscaler -n kube-system cluster-autoscaler=k8s.gcr.io/autoscaling/cluster-autoscaler:v1.21.1
deployment.apps/cluster-autoscaler image updated

Verification

Cluster Autoscaler installation is now complete. Verify the logs to make sure Cluster Autoscaler is not throwing any errors.

# kubectl -n kube-system logs -f deployment.apps/cluster-autoscaler

Creating Identity provider for AWS EKS

A quick post on creating EKS OIDC provider.

EKS OIDC provider!

We will be creating an OpenID Connect identity provider for the AWS EKS cluster in the IAM service. It establishes trust between the AWS account and the Kubernetes cluster running on EKS. To use IAM roles with service accounts created under the EKS cluster, the cluster must have an OIDC provider associated with it. Hence, it's important to create this at the beginning of the project along with the cluster.

Let’s get into steps to create an OIDC provider for your cluster.

First, you need to get the OpenID Connect provider URL from the EKS cluster (it can also be fetched via the AWS CLI, as shown after these steps).

  • Navigate to EKS console
  • Click on Cluster name
  • Select Configuration tab and check under Details
OpenID URL on EKS console.
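
Alternatively, the same URL can be fetched with the AWS CLI –

# aws eks describe-cluster --name <CLUSTER-NAME> --query "cluster.identity.oidc.issuer" --output text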

Now head back to the IAM console

  • Click on Identity providers under Access management on left hand side menu
  • Click on Add provider button
Add provider in IAM
  • Select OpenID Connect
  • Paste the EKS OpenID provider URL in the given field
  • Click on Get thumbprint button
  • Add sts.amazonaws.com in Audience field
  • Click on Add provider button.
IdP thumbprint

The identity provider is created! View its details by clicking on the provider name.

EKS OIDC

If you are using CloudFormation as an IaC tool, then the below resource block can be used to create the OIDC provider for the EKS cluster:

OidcProvider:
    Type: AWS::IAM::OIDCProvider
    Properties: 
      Url: !GetAtt EksCluster.OpenIdConnectIssuerUrl
      ThumbprintList: 
        - 9e99a48a9960b14926bb7f3b02e22da2b0ab7280
      ClientIdList:
        - sts.amazonaws.com

Where –

  • EksCluster is the logical ID of the EKS cluster resource in the same CloudFormation template.
  • 9e99a48a9960b14926bb7f3b02e22da2b0ab7280 is the EKS thumbprint for region us-east-1. Refer to this document to get thumbprints.

How to configure kubectl for AWS EKS

Steps to configure CLI for running kubectl commands on EKS clusters.

kubectl with EKS!

kubectl is the command-line utility used to interact with Kubernetes clusters. AWS EKS is the AWS-managed Kubernetes service broadly used for running Kubernetes workloads on AWS Cloud. We will go through the steps to set up the kubectl command to work with an AWS EKS cluster. Without further ado, let's get into it.

AWS CLI configuration

Install the AWS CLI on your workstation and configure it by running –

# aws configure
AWS Access Key ID [None]: AKIAQX3SNXXXXXUVQ
AWS Secret Access Key [None]: tzS/a1sMDxxxxxxxxxxxxxxxxxxxxxx/D
Default region name [us-west-2]: us-east-1
Default output format [json]: json

If you need to switch roles before you can access your AWS environment, then configure your CLI with roles.
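
A sketch of such a role-switching profile in ~/.aws/config (the profile name, role, and MFA values are placeholders) –

[profile blog-admin]
role_arn = arn:aws:iam::xxxxxxxxxx:role/<ROLE-NAME>
source_profile = default
mfa_serial = arn:aws:iam::xxxxxxxxxx:mfa/<USERNAME>
region = us-east-1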

Once configured, verify that your CLI is working fine and reaching the appropriate AWS account.

# aws sts get-caller-identity
{
    "UserId": "AIDAQX3SNXXXXXXXXXXXX",
    "Account": "xxxxxxxxxx",
    "Arn": "arn:aws:iam::xxxxxxxxxx:user/blog-user"
}

kubectl configuration

Install the kubectl command if you haven't already. Then update your kubeconfig with the details of the cluster you want to connect to –

# aws eks --region <REGION> update-kubeconfig --name <CLUSTER-NAME>
# aws eks --region us-east-1 update-kubeconfig --name blog-cluster
Added new context arn:aws:eks:us-east-1:xxxxxxxxxx:cluster/blog-cluster to C:\Users\linux\.kube\config

At this point, your kubeconfig points to the cluster of your interest. You can execute kubectl commands and they will run against the cluster you mentioned above.

# kubectl get pods --all-namespaces
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   coredns-66cb55d4f4-hk9p5   0/1     Pending   0          6m54s
kube-system   coredns-66cb55d4f4-wmtvf   0/1     Pending   0          6m54s

I have not added any nodes to my EKS cluster yet, hence you can see the pods are in a Pending state.

If you have multiple clusters configured in kubeconfig, then you must switch the context to the cluster of interest before running kubectl commands. To switch context –

# kubectl config use-context <CONTEXT-NAME>
# kubectl config use-context arn:aws:eks:us-east-1:xxxxxxxxxx:cluster/blog-cluster
Switched to context "arn:aws:eks:us-east-1:xxxxxxxxxx:cluster/blog-cluster".

You can verify all configured contexts by inspecting the ~/.kube/config file.
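
Or simply list them with kubectl, which is usually the quicker check –

# kubectl config get-contexts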

Troubleshooting errors

If your IAM user (configured in the AWS CLI) is not authorized on the EKS cluster, then you will see this error –

# kubectl get pods --all-namespaces
error: You must be logged in to the server (Unauthorized)

Make sure your IAM user is authorized in the EKS cluster. This can be done by adding the user details under the mapUsers field in the configmap named aws-auth residing in the kube-system namespace. You will be able to fetch and edit it with the user who built the cluster in the first place. By default, AWS adds the IAM user who built the cluster as system:masters in this config map. You have to configure that same IAM user with kubectl and edit this configmap to add other IAM users to the cluster.

$ kubectl get -n kube-system configmap/aws-auth -o yaml
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::xxxxxxxxxx:role/blog-eks-role
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - userarn: arn:aws:iam::xxxxxxxxxx:user/blog-user
      username: blog-user
      groups:
        - system:masters
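
To grant access to another IAM user, you can edit this configmap in place and append an entry under mapUsers with the same structure as above –

# kubectl edit -n kube-system configmap/aws-auth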

GitBash not prompting for MFA in AWS CLI

A quick post on how to resolve an issue with GitBash that prevents MFA prompts while using the AWS CLI.

MFA GitBash issue.

Problem

GitBash under the hood uses the winpty emulator to provide a bash experience on Windows. Winpty does not work well with the AWS CLI, especially when dealing with MFA prompts. Hence you need to replace it with bash.exe and you should be good.

Procedure

Go to the Windows start menu and search for Git Bash. Click on Open file location.

Right click on the shortcut and select Properties

Under Properties, change the target from "C:\Program Files\Git\git-bash.exe" to "C:\Program Files\Git\bin\bash.exe"

Now launch GitBash and you should be good.

How to resolve the MFA entity already exists error

A quick fix for error MFA entity already exists.

IAM says MFA exists when it's not!

Issue

The user is not able to register an MFA device. When the user tries to assign a new MFA, IAM throws an error –

This entity already exists. MFADevice entity at the same path and name already exists. Before you can add a new virtual MFA device, ask your administrator to delete the existing device using the CLI or API.
MFA assignment error

Whereas if you, as an admin (or even the user), check the AWS console, it shows Assigned MFA device as Not assigned for that user.

Resolution

As an administrator, you need to delete the MFA device (yes, even if it says not assigned) using the AWS CLI. The person performing this needs the IAM permission iam:DeleteVirtualMFADevice on the given resource to update the IAM user's MFA.

Run below command from AWS CLI –

# aws iam delete-virtual-mfa-device --serial-number arn:aws:iam::<AWS account number>:mfa/<username>

where –

  • AWS account number is the account number where the user exists
  • username is the IAM username of that user

This should clear out the error message and the user should be able to register a new MFA device.

How to configure EC2 for Session Manager

A quick reference to configure EC2 for Session Manager in AWS

EC2 session manager!

OK, this must be a very basic post for most of you, and there is a readily available AWS doc for it, but I am just cutting it short to list the steps for achieving the objective quickly. You should go through the official AWS doc to understand all aspects, but if you are on the clock, just follow along and get it set up in no time.

Checklist

Before you start, make sure you have checked these minimum configurations to get going.

  1. Your EC2 is running a supported operating system. We are taking the example of Linux here, so all Linux versions that support AWS Systems Manager support Session Manager.
  2. SSM Agent 2.3+ is installed on the system. If not, we have it covered here.
  3. Outbound 443 traffic should be allowed to the below 3 endpoints. You most likely have this covered already, since most setups allow ALL traffic in the outgoing security group rule –
    • ec2messages.region.amazonaws.com
    • ssm.region.amazonaws.com
    • ssmmessages.region.amazonaws.com

In a nutshell, point 2 is probably the only one you need to verify. If you are using an AWS-managed AMI, then you have that covered too! But if you are using a custom-built, home-grown AMI, that might not be the case.

SSM agent installation

It's a pretty basic RPM installation, as you would do on any Linux platform. Download the package relevant to your Linux version from here, or use the global URLs for the Linux agents –

Run the package installation and service handler commands with root privileges as below –

# yum install -y ./amazon-ssm-agent.rpm    # install the RPM downloaded above (filename/path may differ)
# systemctl enable amazon-ssm-agent
# systemctl start amazon-ssm-agent
# systemctl status amazon-ssm-agent

If you do not have access to the EC2 instance (key lost, or EC2 launched without a key pair), then you probably need to re-launch the EC2. If your EC2 is part of an Auto Scaling group (ASG), then it makes sense to add these commands to the user-data script of the launch template and launch a new EC2 from the ASG.

Instance role permissions

Now the agent is up and running. The next step is to authorize the AWS Systems Manager service to perform actions on the EC2 instance. This is done via an instance role. Create an IAM instance role with the below IAM policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:UpdateInstanceInformation",
                "ssmmessages:CreateControlChannel",
                "ssmmessages:CreateDataChannel",
                "ssmmessages:OpenControlChannel",
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": "*"
        }
    ]
}

You can scope it down to particular resources if you want. You can even add KMS encryption-related permissions if you plan to encrypt session data using KMS. An example can be found here.

Once done, attach the role to the EC2 instance. If the EC2 instance already has a role attached, then add the above policy to the existing role and you should be good.
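
If you prefer the CLI, attaching an instance profile to a running instance can look like this (instance ID and profile name are placeholders) –

# aws ec2 associate-iam-instance-profile --instance-id <INSTANCE-ID> --iam-instance-profile Name=<INSTANCE-PROFILE-NAME>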

IAM instance profile

Connecting via Session Manager

Now you are good to test the connection.

  • Login to EC2 console.
  • Navigate to Instances and select the respective EC2 instance from the list.
  • Click on Connect button
Connecting to session manager from EC2 console
  • Make sure you are on the Session Manager tab and click Connect.
  • If you still see an error reported on this screen, give it a minute or two. Sometimes it takes a few moments for IAM role permissions to propagate.
Connect to the instance using session manager

A new browser tab will open and you should see the Linux prompt.

Instance connected!

Notice that you are logged in as the default user ssm-user. You can switch to the root user using sudo.

There are a couple of benefits to using Session Manager as the standard over key pairs:

  • No need to maintain key files.
  • Avoid the security risk to infrastructure associated with key file management.
  • Access management is easy through IAM.
  • Native AWS feature!
  • Session can be logged for audit purposes.