Category Archives: Docker

All articles related to Docker containers, Docker Swarm, and containerization on Linux.


Running a pod in Kubernetes

In this article, we will look at the pod concept in Kubernetes.


What is a pod in Kubernetes?

A pod is the smallest execution unit in Kubernetes. It's a single container or a group of containers that serves a running process in the K8s cluster. Read "what is a container?" if you are not familiar with containerization.

Each pod has a single IP address that is shared by all the containers within it. The port space is also shared by all those containers.

You can view running pods in K8s using the below command –

$ kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
webserver   1/1     Running   0          10s

View pod details in K8s

To get more detailed information on a pod, run the below command, supplying the pod name as an argument –

$ kubectl describe pods webserver
Name:         webserver
Namespace:    default
Priority:     0
Node:         node01/172.17.0.9
Start Time:   Sun, 05 Jul 2020 13:50:41 +0000
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.244.1.3
IPs:
  IP:  10.244.1.3
Containers:
  webserver:
    Container ID:   docker://8b260effa4ada1ff80e106fb12cf6e2da90eb955321bbe3b9e302fdd33b6c0d8
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:21f32f6c08406306d822a0e6e8b7dc81f53f336570e852e25fbe1e3e3d0d0133
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 05 Jul 2020 13:50:50 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bjcwg (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-bjcwg:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-bjcwg
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  25s   default-scheduler  Successfully assigned default/webserver to node01
  Normal  Pulling    23s   kubelet, node01    Pulling image "nginx"
  Normal  Pulled     17s   kubelet, node01    Successfully pulled image "nginx"
  Normal  Created    16s   kubelet, node01    Created container webserver
  Normal  Started    16s   kubelet, node01    Started container webserver

Pod configuration file

One can create a pod configuration file, i.e. a yml file that has all the details needed to start a pod. K8s reads this file and spins up your pod according to the specifications. A sample file is below –

$ cat my_webserver.yml
apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  containers:
    - name: webserver
      image: nginx
      ports:
        - containerPort: 80

It's a single-container pod file, since we specified the spec for only one container in it.

Single container pod

A single-container pod can be run without a yml file, using a simple command –

$ kubectl run single-c-pod --image=nginx
pod/single-c-pod created
$ kubectl get pods
NAME           READY   STATUS    RESTARTS   AGE
single-c-pod   1/1     Running   0          35s
webserver      1/1     Running   0          2m52s

You can also spin up the single-container pod using the simple yml file stated above, as shown below.
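For example, you can feed the file to the kubectl create command (the pod name in the output comes from the metadata in the file) –

$ kubectl create -f my_webserver.yml
pod/webserver created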

Multiple container pod

For a multiple-container pod, let's extend the above yml file to add another container spec.

$ cat << EOF >web-bash.yml
apiVersion: v1
kind: Pod
metadata:
  name: web-bash
spec:
  containers:
    - name: apache
      image: httpd
      ports:
        - containerPort: 80
    - name: linux
      image: ubuntu
      command: ["/bin/bash", "-ec", "while true; do echo '.'; sleep 1 ; done"]
EOF

In the above file, we are spinning up a pod that has one Apache web server container and one Ubuntu Linux container.

$ kubectl create -f web-bash.yml
pod/web-bash created
$ kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
web-bash   2/2     Running   0          12s

How to delete a pod

It's a simple delete command –

$ kubectl delete pods web-bash
pod "web-bash" deleted

How to view pod logs in Kubernetes

I am running a single-container Nginx pod. We will then check the pod logs to see its startup messages.

$ kubectl run single-c-pod --image=nginx
pod/single-c-pod created
$ kubectl logs single-c-pod
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
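For a multi-container pod such as web-bash above, you need to tell kubectl which container's logs you want, using the -c flag; you can also stream logs continuously with -f. Both are standard kubectl logs flags –

$ kubectl logs web-bash -c apache     # logs of the apache container only
$ kubectl logs -f single-c-pod        # follow the log stream; Ctrl+C to stop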

Lab setup for Ansible testing

Quick lab setup for learning Ansible using containers on an Oracle VirtualBox Linux VM.

Setting up a lab for learning Ansible

In this article, we will set up our lab using Docker containers for testing Ansible. We use Oracle VirtualBox so that you can spin up a VM from a readymade OVA file in a minute, which saves the effort of installing the OS from scratch. Secondly, we will spin up a couple of containers to be used as Ansible clients. Since we only need to test Ansible by running a few remote commands/modules, it's best to have containers working as clients rather than spinning up complete Linux VMs. This saves a lot of resources, so you can run this Ansible lab on your desktop or laptop for practicing Ansible.

Without further delay, let's dive into setting up the lab on a desktop/laptop for learning Ansible. Roughly, it's divided into the below sections –

  1. Download Oracle Virtualbox and OVA file
  2. Install Oracle Virtualbox and spin VM from OVA file
  3. Run containers to work as ansible clients
  4. Test connectivity via passwordless SSH access from Ansible worker to clients

Step 1. Download Oracle Virtualbox & OEL7 with Docker readymade OVA file

Go to VirtualBox downloads and download VirtualBox for your OS.

Go to Oracle Downloads and download the Oracle Linux 7 with Docker 1.12 Hands-On Lab appliance file. This will help us spin up a VM in Oracle VirtualBox without much hassle.

Step 2. Install Oracle Virtualbox and start VM from OVA file

Install Oracle VirtualBox. It's a pretty standard setup procedure, so I am not getting into it. Once you download the above OVA file, open it in Oracle VirtualBox and it will open up the Import Virtual Appliance menu like below –

Import Virtual Appliance menu

Click Import. Agree to the software license agreement shown, and it will start importing the OVA as a VM. After the import finishes, you will see a VM named DOC-1002902 (i.e. the same name as the OVA file) created in your Oracle VirtualBox.

Start that VM and log in with the user. Credential details are mentioned in the documentation link on the download page of the OVA file.

Step 3. Running containers

For running containers, you need to set up Docker Engine on the VM first. All steps are listed in the same documentation mentioned above where you found your first login credentials. You can also follow our Docker installation guide if you want.

Then create a key pair on your VM, i.e. the Ansible worker/server, so that the public key can be used within the containers for passwordless SSH. We will be using ansible-usr as the Ansible user in our setup, so you will see this user henceforth. Read how to configure the Ansible default user.

[root@ansible-srv .ssh]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
98:42:9a:82:79:ac:74:7f:f9:31:71:2a:ec:bb:af:ee root@ansible-srv.kerneltalks.com
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|    .            |
|.o +   o         |
|+.=.. o S. .     |
|.+. ... . +      |
|.    . = +       |
|      o o o      |
|      oE=o       |
+-----------------+

Now that we have the key pair ready, let's move on to containers.

Once Docker Engine is installed and started, create a custom Docker image using the Dockerfile below, which we will use to spin up multiple containers (Ansible clients). The Dockerfile is adapted from Docker's sshd example and modified a bit for setting up passwordless SSH. This Dockerfile answers the question of how to configure passwordless SSH for containers!

FROM ubuntu:16.04

RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:password' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config

# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd

ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
RUN useradd -m -d /home/ansible-usr ansible-usr
RUN mkdir /home/ansible-usr/.ssh
COPY .ssh/id_rsa.pub /home/ansible-usr/.ssh/authorized_keys
RUN chown -R ansible-usr:ansible-usr /home/ansible-usr/.ssh
RUN chmod 700 /home/ansible-usr/.ssh
RUN chmod 640 /home/ansible-usr/.ssh/authorized_keys
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

Save the above file as Dockerfile in /root, and then run the below command while you are in /root. If you are in some other directory, make sure you adjust the relative path in the COPY instruction of the above Dockerfile accordingly.

[root@ansible-srv ~]# docker build -t eg_sshd .

This command will create a custom Docker Image named eg_sshd. Now you are ready to spin up containers using this custom docker image.

We will start containers in below format –

  1. Webserver
    1. k-web1
    2. k-web2
  2. Middleware
    1. k-app1
    2. k-app2
  3. Database
    1. k-db1

So, in total, five containers spread across different groups with different hostnames, which we can use for testing different configs/actions in Ansible.

I am listing the command for the first container only; to repeat it for the remaining four, see the loop after the sample below.

[root@ansible-srv ~]# docker run -d -P --hostname=k-web1 --name k-web1 eg_sshd
e70d825904b8c130582c0c52481b6e9ff33b18e0ba8ab47f12976a568587087b

It is working!
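If you prefer, a small shell loop can spin up the remaining four containers in one go instead of repeating the command manually –

[root@ansible-srv ~]# for h in k-web2 k-app1 k-app2 k-db1; do docker run -d -P --hostname=$h --name $h eg_sshd; done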

Now, spin up all 5 containers. Verify all containers are running and note down their ports.

[root@ansible-srv ~]# docker container ls -a
CONTAINER ID        IMAGE               COMMAND               CREATED              STATUS              PORTS                   NAMES
2da32a4706fb        eg_sshd             "/usr/sbin/sshd -D"   5 seconds ago        Up 3 seconds        0.0.0.0:32778->22/tcp   k-db1
75e2a4bb812f        eg_sshd             "/usr/sbin/sshd -D"   39 seconds ago       Up 33 seconds       0.0.0.0:32776->22/tcp   k-app2
40970c69348f        eg_sshd             "/usr/sbin/sshd -D"   50 seconds ago       Up 47 seconds       0.0.0.0:32775->22/tcp   k-app1
4b733ce710e4        eg_sshd             "/usr/sbin/sshd -D"   About a minute ago   Up About a minute   0.0.0.0:32774->22/tcp   k-web2
e70d825904b8        eg_sshd             "/usr/sbin/sshd -D"   4 minutes ago        Up 4 minutes        0.0.0.0:32773->22/tcp   k-web1

Step 4. Passwordless SSH connectivity between Ansible server and clients

This is an important step for the smooth, hassle-free functioning of Ansible. You need to create the ansible user on the Ansible server and clients, then configure passwordless SSH (using keys) for that user.

Now you need the IP addresses of your containers. You can inspect a container and extract that information –

[root@ansible-srv ~]# docker inspect k-web1 |grep IPAddress
            "SecondaryIPAddresses": null,
            "IPAddress": "172.17.0.2",
                    "IPAddress": "172.17.0.2",

Now we have an IP address, let’s test the passwordless connectivity –

[root@ansible-srv ~]# ssh ansible-usr@172.17.0.2
Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 4.1.12-37.5.1.el7uek.x86_64 x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

Last login: Wed Jan 15 18:57:38 2020 from 172.17.0.1
$ hostname
k-web1
$ exit
Connection to 172.17.0.2 closed.

It's working! Go ahead and test it for the rest of the containers, so that each client's authenticity is confirmed and its RSA fingerprint is saved to the known hosts list. Now we have all five client containers running and passwordless SSH set up between the Ansible server and clients for the user ansible-usr.

Now you have a full lab setup ready on your desktop/laptop within Oracle VirtualBox for learning Ansible! The lab has a VM running in Oracle VirtualBox, which is your main Ansible server/worker, and it has five containers running within it acting as Ansible clients. This setup fulfills the prerequisite of configuring passwordless SSH for Ansible.

Kubernetes installation and configuration

Step by step guide for Kubernetes installation and configuration along with sample outputs.

Kubernetes installation guide

Pre-requisite

  • The basic requirement to run Kubernetes is that your machine should not have SWAP configured; if it is configured, you need to turn it off using swapoff -a.
  • You will need Docker installed on your machine.
  • You will need to set SELinux in permissive mode to enable kubelet network communication. You can write an SELinux policy for Kubernetes later and then enable it normally.
  • Your machine should have at least 2 CPUs.
  • Kubernetes ports should be open between the master and nodes for cluster communication. All are TCP ports, to be open for inbound traffic.

    Port          Description
    10250         Kubelet API (for master and nodes)
    10251         kube-scheduler
    10252         kube-controller-manager
    6443*         Kubernetes API server
    2379-2380     etcd server client API
    30000-32767   NodePort Services (only for nodes)

Installation of Kubernetes master node Kubemaster

The first step is to install the three pillar packages of Kubernetes, which are:

  • kubeadm – bootstraps the Kubernetes cluster
  • kubectl – the CLI for managing the cluster
  • kubelet – the service running on all nodes, which helps manage the cluster by performing tasks

To download these packages, you need to configure a repo for them. Below are the repo file contents for the respective distributions.

For RedHat, CentOS, or Fedora (YUM based) –

root@kerneltalks # cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
root@kerneltalks # yum install -y kubectl kubeadm kubelet

For Ubuntu, SUSE, or Debian (APT based) –

sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl kubeadm kubelet

Once you have configured the repo, install the packages kubeadm, kubectl, and kubelet using your distribution's package manager.

Enable and start kubelet service

root@kerneltalks # systemctl enable kubelet.service
root@kerneltalks # systemctl start kubelet

Configuration of Kubernetes master node Kubemaster

Now you need to make sure both Docker and Kubernetes use the same cgroup driver. By default, it's cgroupfs for both. If you haven't changed it for Docker, then you don't have to do anything for Kubernetes either. But if you are using a different cgroup driver in Docker, you need to specify it for Kubernetes in the below file –

root@kernetalks # cat /etc/default/kubelet
KUBELET_KUBEADM_EXTRA_ARGS=--cgroup-driver=<value>

This file is picked up by kubeadm while starting up. But if you have Kubernetes already running, you need to reload this configuration using –

root@kerneltalks # systemctl daemon-reload
root@kerneltalks # systemctl restart kubelet
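To confirm which cgroup driver Docker is currently using, check the output of docker info –

root@kerneltalks # docker info | grep -i cgroup
Cgroup Driver: cgroupfs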

Now you are ready to bring up the Kubernetes master and then add worker nodes, or minions, to it as slaves in the cluster.

You have installed and adjusted settings to bring up the Kubemaster. You can start the Kubemaster using the command kubeadm init, but you need to provide the pod network CIDR the first time.

  • --pod-network-cidr= : For pod network
  • --apiserver-advertise-address= : Optional. To be used when multiple IP addresses/subnets assigned to the machine.

Refer to the below output for starting up the Kubernetes master node. There are a few warnings, which can be corrected with basic sysadmin tasks.

# kubeadm init --apiserver-advertise-address=172.31.81.44 --pod-network-cidr=192.168.1.0/16
[init] using Kubernetes version: v1.11.3
[preflight] running pre-flight checks
I0912 07:57:56.501790    2443 kernel_validator.go:81] Validating kernel version
I0912 07:57:56.501875    2443 kernel_validator.go:96] Validating kernel config
        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.05.0-ce. Max validated version: 17.03
        [WARNING Hostname]: hostname "kerneltalks" could not be reached
        [WARNING Hostname]: hostname "kerneltalks" lookup kerneltalks1 on 172.31.0.2:53: no such host
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kerneltalks1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.81.44]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kerneltalks1 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kerneltalks1 localhost] and IPs [172.31.81.44 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 46.002127 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node kerneltalks1 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node kerneltalks1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kerneltalks1" as an annotation
[bootstraptoken] using token: 8lqimn.2u78dcs5rcb1mggf
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.31.81.44:6443 --token 8lqimn.2u78dcs5rcb1mggf --discovery-token-ca-cert-hash sha256:de6bfdec100bb979d26ffc177de0e924b6c2fbb71085aa065fd0a0854e1bf360

In the above output, there are two key things you get (both reproduced after this list) –

  • Commands to enable the regular user to administer the Kubemaster
  • The command to run on slave nodes to join the Kubernetes cluster
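For quick reference, here are those commands as printed in the output. The first three run as a regular user on the master; the join command (with your own cluster's token and CA cert hash) runs as root on each node –

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubeadm join 172.31.81.44:6443 --token 8lqimn.2u78dcs5rcb1mggf --discovery-token-ca-cert-hash sha256:de6bfdec100bb979d26ffc177de0e924b6c2fbb71085aa065fd0a0854e1bf360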

That's it. You have successfully started the Kubemaster node and brought up your Kubernetes cluster. The next task is to install and configure the secondary nodes in this cluster.

Installation of Kubernetes slave node or minion

The installation process remains the same. Follow steps for disabling SWAP, installing Docker, and installing 3 Kubernetes packages.

Configuration of Kubernetes slave node minion

Nothing much to do on this node. You already have the command to run on this node for joining the cluster, which was spat out by the kubeadm init command.

Let's see how to join a node to the Kubernetes cluster using the kubeadm command –

[root@minion ~]# kubeadm join 172.31.81.44:6443 --token 8lqimn.2u78dcs5rcb1mggf --discovery-token-ca-cert-hash sha256:de6bfdec100bb979d26ffc177de0e924b6c2fbb71085aa065fd0a0854e1bf360
[preflight] running pre-flight checks
I0912 08:19:56.440122    1555 kernel_validator.go:81] Validating kernel version
I0912 08:19:56.440213    1555 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "172.31.81.44:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.31.81.44:6443"
[discovery] Failed to request cluster info, will try again: [Get https://172.31.81.44:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: net/http: TLS handshake timeout]
[discovery] Requesting info from "https://172.31.81.44:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.31.81.44:6443"
[discovery] Successfully established connection with API Server "172.31.81.44:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "minion" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

And here you go. The node has joined the cluster successfully. Thus, you have completed the Kubernetes cluster installation and configuration!

Check node status from the Kubemaster.

[root@kerneltalks ~]# kubectl get nodes
NAME           STATUS     ROLES     AGE       VERSION
kerneltalks1   Ready      master    2h        v1.11.3
minion         Ready      <none>    1h        v1.11.3

Once you see all statuses as Ready, you have a steady cluster up and running.

Difference between Docker swarm and Kubernetes

Learn the difference between Docker Swarm and Kubernetes: a comparison between the two container orchestration platforms in tabular form.

Docker Swarm v/s Kubernetes

When you are on the learning curve of application containerization, there comes a stage when you come across orchestration tools for containers. If you started your learning with Docker, then Docker Swarm is the first cluster management tool you must have learned, followed by Kubernetes. So it's time to compare Docker Swarm and Kubernetes. In this article, we quickly see what Docker Swarm is, what Kubernetes is, and then a comparison between the two.

What is Docker swarm?

Docker Swarm is a tool native to Docker, aimed at cluster management of Docker containers. Docker Swarm enables you to build a cluster of multiple nodes, VMs or physical machines, running the Docker Engine. In turn, you run containers on multiple machines to facilitate an HA, fault-tolerant environment. It's pretty simple to set up and native to Docker.

What is Kubernetes?

It's a platform to manage containerized applications, i.e. containers, in a cluster environment, along with automation. It does almost the same job swarm mode does, but in a different and enhanced way. It was developed by Google in the first place, and the project was later handed over to the CNCF. It works with container runtimes like Docker and rkt. Kubernetes installation is a bit more complex than Swarm's.

Compare Docker and Kubernetes

If someone asks you for a comparison between Docker and Kubernetes, then that's not a valid question in the first place. You cannot differentiate between Docker and Kubernetes as rivals: Docker is an engine that runs containers, while Kubernetes is an orchestration platform that manages Docker containers in a cluster environment. So one cannot compare Docker with Kubernetes directly.

Difference between Docker Swarm and Kubernetes

I added a comparison of Swarm and Kubernetes in the below table for easy readability.

Docker Swarm | Kubernetes
Docker's own orchestration tool | Google's open-source orchestration tool
Younger than Kubernetes | Older than Swarm
Simple to set up, being a tool native to Docker | A bit complex to set up, but once done offers more functionality than Swarm
Smaller community around it, but Docker has excellent documentation | Being Google's product, and older, it has huge community support
Simple application deployment in the form of services | More complex application deployment through pods, deployments, and services
Has only a command-line interface for management | Also offers a GUI in addition to the CLI
Monitoring available using third-party applications | Offers native and third-party options for monitoring and logging
Much faster than Kubernetes | Being a complex system, its deployments are a bit slower than Swarm's

DCA – Docker Certified Associate Certification guide

A small guide to help aspirants prepare for the Docker Certified Associate certification.

Docker Certified Associate Certification guide

I recently cleared the DCA – Docker Certified Associate certification and wanted to share my experience here on my blog. This might be helpful for folks who are going to appear for the examination soon, and it may inspire containerization aspirants to take it.

DCA details :

Complete certification details and FAQs can be found on their official website.

  • Duration: 90 minutes
  • Type: Multiple choice questions
  • Number of questions: 55
  • Mode of exam: Remotely proctored
  • Cost: $195 (for India residents, it is plus 18% GST, which comes to roughly 16-17K INR)

Preparation

The Docker Certified Associate certification aims at certifying professionals having enterprise-level experience with Docker for a minimum of a year. When you start learning Docker, you mostly start off with the CE (Community Edition), which comes free, or you practice on Play with Docker, which also serves CE Docker. You should not attempt this certification with knowledge or experience of CE only. This certification is designed to test your knowledge of the Enterprise Edition of Docker, which is fully feature-packed and has a paid license tagged to it.

So it is expected that you have at least six months to a year of experience with Docker EE in an enterprise environment before you attempt the certification. Docker also offers a trial EE license, which you can use to start off with EE Docker learning. Only attempt the certification once you are through with all the syllabus mentioned on the website and well versed with the Docker enterprise world.

You can register for the examination from the website, which will redirect you to their vendor Examity's website. There you need to register for the exam in an available time slot and make the payment online. You can even book a slot within the next 24 hours, but such slots are not always available. Make sure your computer meets all the pre-requisites so that you can take the exam without any hassle. You can even connect with the exam vendor well before the exam and get your computer checked for compatibility with the exam software/plugin.

Docker’s official study guide walks you through the syllabus so that you can prepare yourself accordingly.

During Exam

You can take this exam from anywhere, provided you have a good internet connection and your surroundings abide by the rules mentioned on the certification website, like an empty room, clean desk, etc. As this exam is remotely proctored, an executive will be monitoring your screen, webcam, and mic remotely in real time. So make sure you have a calm place and an empty room before you start the exam. You should not eat, use a cellphone or similar electronic device, talk, etc. during the exam.

Exam questions are carefully designed by professionals to test your knowledge in all areas. Do not expect only command-and-option type questions. There is a good mix of logical, conceptual, and practical application questions. Some questions may have multiple answers, so keep an eye on such questions and do not forget to select more than one answer.

After exam

Your examination scorecard will be displayed and the result shown to you immediately. You can also have it emailed. The actual certificate takes 3 minutes before it hits your inbox! Do check spam if you don't receive it, before escalating to the Docker Certification Team (certification@docker.com).

All the best! Do share your success stories in the comments below.

How Docker container DNS works

Learn about Docker DNS. How does Docker container DNS work? How do you change the nameserver in a Docker container to use an external DNS?

Docker DNS

Docker has an inbuilt DNS that automatically resolves container names to IPs in user-defined networks. But what if you want to use an external DNS in a container for some project need? Or how do you use an external DNS in all the containers run on a host? In this article, we will walk you through the below points:

  1. Docker native DNS
  2. Nameservers in Docker
  3. How to use external DNS in the container while starting it
  4. How to use external DNS in all the containers on a docker host

Docker native DNS

In a user-defined Docker network, DNS resolution of container names happens automatically. You don't have to do anything: if your containers are using your own defined Docker network, they can find each other by hostname automatically.

We have two Nginx containers running on my newly created Docker network named kerneltalks. Both Nginx containers have the ping utility installed.
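For reference, the setup was created along these lines; the network name is ours, and the ping utility comes from the iputils-ping package installed inside each container –

$ docker network create kerneltalks
$ docker container run -d --network kerneltalks --name nginx1 nginx
$ docker container run -d --network kerneltalks --name nginx2 nginx
$ docker exec -it nginx1 bash -c 'apt-get update && apt-get install -y iputils-ping'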

$ docker container ls
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
1b1bb99559ac        nginx               "nginx -g 'daemon of…"   27 minutes ago      Up 27 minutes       80/tcp              nginx2
239c662d3945        nginx               "nginx -g 'daemon of…"   27 minutes ago      Up 27 minutes       80/tcp              nginx1

$ docker network inspect kerneltalks
"Containers": {
"1b1bb99559ac21e29ae671c23d46f2338336203c96874ac592431f60a2e6a5de": {
"Name": "nginx2",
"EndpointID": "4141f56fe878275e322b9283476508d1135e813d12ea2b7d87a5c3d0db527f79",
"MacAddress": "02:42:ac:13:00:05",
"IPv4Address": "172.19.0.5/16",
"IPv6Address": ""
},
"239c662d3945031413e4c69b99e3ddde57832004bd6193bdbc30bd5e6ca6f4e2": {
"Name": "nginx1",
"EndpointID": "376da79e6746cc80d178f4363085e521a9d45c65df08248b77c1bc744b495ae4",
"MacAddress": "02:42:ac:13:00:04",
"IPv4Address": "172.19.0.4/16",
"IPv6Address": ""
},

And they can ping each other without any extra DNS effort, since user-defined networks have an inbuilt DNS which resolves container names to IP addresses.

$ docker exec -it nginx1 ping nginx2
PING nginx2 (172.19.0.5) 56(84) bytes of data.
64 bytes from nginx2.kerneltalks (172.19.0.5): icmp_seq=1 ttl=64 time=0.151 ms
64 bytes from nginx2.kerneltalks (172.19.0.5): icmp_seq=2 ttl=64 time=0.053 ms

$ docker exec -it nginx2 ping nginx1
PING nginx1 (172.19.0.4) 56(84) bytes of data.
64 bytes from nginx1.kerneltalks (172.19.0.4): icmp_seq=1 ttl=64 time=0.088 ms
64 bytes from nginx1.kerneltalks (172.19.0.4): icmp_seq=2 ttl=64 time=0.054 ms

But in the default Docker bridge network (which is installed with the Docker daemon), automatic DNS resolution is disabled to maintain container isolation. You can enable inter-container communication by name using the --link option while running a container (when on the default bridge network).

--link is a legacy feature and may be removed in upcoming releases. So it is always advisable to use user-defined networks rather than the default Docker networks.

DNS nameservers in Docker

Docker is coded in a smart way. When you run a new container on the Docker host without any DNS-related option in the command, it simply copies the host's /etc/resolv.conf into the container. While copying, it filters out all localhost IP addresses from the file. That's pretty obvious, since those won't be reachable from the container network, so there is no point in keeping them. If, after this filtering, no nameserver is left to add to the container's /etc/resolv.conf, the Docker daemon smartly adds Google's public nameservers 8.8.8.8 and 8.8.4.4 to the file and uses it within the container.

Also, the host's and container's /etc/resolv.conf always stay in sync. The Docker daemon uses a file-change notifier and makes the necessary changes in the container's resolv file when the host's file changes. The only catch is that these changes are made only while the container is not running; to pick them up, you need to stop and start the container again. All stopped containers are updated immediately after the host's file changes.

How to use external DNS in container while starting it

If you want to use an external DNS in a container, other than Docker's native behavior or what's in the host's resolv.conf file, then you need to use the --dns switch in the docker container run command.

$ docker container run -d --dns 10.2.12.2 --name nginx5 nginx
fbe29f22bd5f78213163532f2529c5cd98bc04573a626d0e864e670f96c5dc7a

$ docker exec -it nginx5 cat /etc/resolv.conf
search 51ur3jppi0eupdptvsj42kdvgc.bx.internal.cloudapp.net
nameserver 10.2.12.2
options ndots:0

In the above example, we chose to have nameserver 10.2.12.2 in the container we ran. And you can see /etc/resolv.conf inside the container has this new nameserver in it. Make a note that whenever you use the --dns switch, it wipes out all existing nameserver entries within the container and keeps only the one you supply.

This is the way to go if you want custom DNS in a single container. But what if you want this custom DNS in all containers that will run on your Docker host? Then you need to define it in the config file. We will see this in the next point.

How to use external DNS in all the containers on docker host

You need to define the external DNS IPs in the Docker daemon configuration file /etc/docker/daemon.json, as below –

{
    "dns": ["10.2.12.2", "3.4.5.6"]
}

Once the changes are saved in the file, you need to restart the Docker daemon to pick them up.

root@kerneltalks # systemctl restart docker

and it's done! Now any container you run afresh on your Docker host will have these two DNS nameservers in it by default.

$ docker container run -d --name nginx7 nginx
200d024ac8930c5bfe59fdbc90a1d4d0e8cd6d865f82096c985e23f1e022d548

$ docker exec -it nginx7 cat /etc/resolv.conf
search 51ur3jppi0eupdptvsj42kdvgc.bx.internal.cloudapp.net
options ndots:0

nameserver 10.2.12.2
nameserver 3.4.5.6

If you have any queries/feedback/corrections, let us know in the comment box below.

Docker swarm cheat sheet

A Docker Swarm cheat sheet: a list of all commands to create, run, and manage a container cluster environment with Docker Swarm!

Docker swarm cheat-sheet

Docker Swarm is a cluster environment for Docker containers. A swarm is created from a number of machines running Docker daemons, collectively managed by a master node, to run a clustered environment for containers!

In this article, we list all the currently available Docker Swarm commands in a very short overview. This is a cheat sheet you can glance through to brush up your swarm knowledge, or a quick reference for any swarm management command. We cover the most used or useful switches with the below commands. There are more switches available for each command, and you can get them with --help.

Read all docker or containerization related articles here from KernelTalk’s archives.

Docker swarm commands for swarm management

This set of commands is used mainly to start and manage the swarm cluster as a whole. For node management within the cluster, we have a different set of commands in the following section.

  • docker swarm init : Initiate a swarm cluster
    • --advertise-addr : Advertised address on which the swarm lives
    • --autolock : Locks the manager and displays a key, which will be needed to unlock a stopped manager
    • --force-new-cluster : Create a new cluster from a backup and don't attempt to connect to old known nodes
  • docker swarm join-token : Lists the join security token for joining another node to the swarm as a worker or manager
    • --quiet : Only display the token. By default, it displays the complete command to be used along with the token.
    • --rotate : Rotate (change) the token for security reasons.
  • docker swarm join : Join an already running swarm as a worker or manager
    • --token : Security token to join the swarm
    • --availability : Mark the node's status as active/drain/pause after joining
  • docker swarm leave : Leave the swarm. To be run from the node itself
    • -f : Leave forcefully, ignoring all warnings.
  • docker swarm unlock : Unlocks the swarm by providing the key after a manager restarts
  • docker swarm unlock-key : Display the swarm unlock key
    • -q : Only display the key.
    • --rotate : Rotate (change) the key for security reasons.
  • docker swarm update : Updates swarm configuration
    • --autolock : true/false. Turns locking on or off if not set while initiating.
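As a minimal sketch, a typical swarm bootstrap with these commands looks like this (the IP address is illustrative) –

# on the manager node
docker swarm init --advertise-addr 192.168.1.10

# print the full join command (including the token) for workers
docker swarm join-token worker

# on each worker node, paste the printed command, which looks like
docker swarm join --token <token> 192.168.1.10:2377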

Docker swarm node commands for swarm node management

A node is a server participating in a Docker swarm. A node can be either a worker or a manager in the swarm. Manager nodes have the ability to manage swarm nodes and services along with serving workloads, while worker nodes can only serve workloads.

  • docker node ls : Lists nodes in the swarm
    • -q : Only display node IDs
    • --format : Format output using a Go template
    • --filter : Apply filters to output
  • docker node ps : Display tasks running on nodes
    • All of the above switches apply here too.
  • docker node promote : Promote a node to the manager role
  • docker node demote : Demote a node from manager to worker role
  • docker node rm : Remove a node from the swarm. Run from a manager node.
    • -f : Force remove
  • docker node inspect : Detailed information about the node
    • --format : Format output using a Go template
    • --pretty : Print in a human-readable, friendly format
  • docker node update : Update node configs
    • --role : worker/manager. Update node role
    • --availability : active/pause/drain. Set node state.
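For example, to take a node out of task scheduling for maintenance and bring it back afterwards (the node name is illustrative) –

docker node update --availability drain worker1
docker node update --availability active worker1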

Docker swarm service commands for swarm service management

Docker service is used to create and spawn workloads to swarm nodes.

  • docker service create : Start a new service in the Docker swarm
    • Switches of the docker container run command like -i (interactive), -t (pseudo-terminal), -d (detached), -p (publish port), etc. are supported here.
  • docker service ls : List services
    • The --filter, --format and -q (quiet) switches which we saw above are supported with this command.
  • docker service ps : Lists tasks of services
    • The --filter, --format and -q (quiet) switches which we saw above are supported with this command.
  • docker service logs : Display logs of a service or task
  • docker service rm : Remove a service
    • -f : Force remove
  • docker service update : Update service config
    • Most of the parameters defined in the service create command can be updated here.
  • docker service rollback : Revert changes made to a service config.
  • docker service scale : Scale one or more replicated services.
    • servicename=number format
  • docker service inspect : Detailed information about a service.
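Putting a few of these together, a minimal service lifecycle looks like this (the service name and port mapping are illustrative) –

docker service create --name web -p 8080:80 --replicas 2 nginx
docker service ls
docker service scale web=4
docker service rm web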

Beginners guide to Docker Image

Learn the basics of the Docker image. A beginner's guide explaining what a Docker image is, the different types of Docker images, and how to pull/push a Docker image.

Learn basics of Docker Image

In this article, we will be discussing the Docker image mainly. We will touch base all the below points :

  1. What is the Docker image?
  2. Difference between Docker image and Docker container
  3. What is the official Docker image
  4. Public and private Docker image
  5. How to download/upload (pull/push) a Docker image from/to Docker hub

What is Docker image?

The official definition of a Docker image is: "Docker images are the basis of containers. An Image is an ordered collection of root filesystem changes and the corresponding execution parameters for use within a container runtime. An image typically contains a union of layered filesystems stacked on top of each other. An image does not have state and it never changes."

Read all docker or containerization related articles here from KernelTalk’s archives.

Let's break this definition down for our understanding. The first sentence means containers are launched using images. The second tells us an image has the binaries, dependencies, and parameters needed to run a container (i.e. an application). The third denotes that image changes are layered and never alter the actual image file, which means the image is a read-only file.

In virtualization-world concepts, an image can be visualized as a template from which you can launch as many containers as you want. In short, an image is an application binary along with its dependencies and metadata which tells how this image should be run in a container.

An image is named with the syntax software_name:version_tag. If the tag is not specified, then latest is considered the default tag.
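For example, the two pulls below are equivalent, since latest is assumed when no tag is given –

root@kerneltalks # docker image pull nginx
root@kerneltalks # docker image pull nginx:latest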

Difference between Docker image and Docker container

Let me summarize the difference between Docker image and container in tabular format for easy understanding :

Docker Image | Docker container
An ordered collection of root filesystem changes and the corresponding execution parameters for use within a container runtime | An active (or inactive, if exited) stateful instantiation of an image
Can be pulled from Docker Hub | Runs locally on your host OS
It's read-only | It's dynamic
It's a template from which containers launch | It's a running/stopped instance of an image
It's a file | It's a process
Management command: docker image <..> | Management command: docker container <..>

What is official Docker image?

Docker Hub is a repository for Docker images, much like GitHub is for code. Any registered user can upload images to it. But for popular software like Nginx, Alpine, Redis, and Mongo, Docker has a team which creates, tests, and verifies images with standard settings and keeps them for public use on Docker Hub. An official Docker image always has only the software name as its image name. Any image other than an official one is named with the syntax username/software_name, where username is the user who uploaded that image to Docker Hub. Also, an official image has "official" written below it in listings, as shown in the below screenshot.

Identify official docker image

You can see that, apart from the first (official) image, all other Alpine images are prefixed with a username on the left. Only the official image has just the software name as its name.

Public and private Docker image

You can upload your image to a private or public repo on Docker Hub. Public images are available for everyone to download for free. Private images have access restrictions defined by the uploader. When using a public image, you should be very careful about which image you choose to pull.

Docker Hub has a star grading system for images. Users can star images, and the total number of stars earned by an image is shown in the right column of the listing (it can be seen in the above screenshot). Alongside stars, you can also see the number of pulls an image has witnessed. An image with many stars and/or pulls is considered a good image.

How to pull or download image from Docker hub?

You don't need a Docker account to pull an image from Docker Hub. You only need an active internet connection on the server; then you can use docker image pull to download an image from Docker Hub.

root@kerneltalks # docker image pull alpine
Using default tag: latest
latest: Pulling from library/alpine
Digest: sha256:e1871801d30885a610511c867de0d6baca7ed4e6a2573d506bbec7fd3b03873f
Status: Image is up to date for alpine:latest

Since my local image cache already had the alpine image downloaded, the pull command verified its SHA. It found that the cached image is the same as the one available on Docker Hub, so it didn't download the image again! That saved both bandwidth and storage.

Now I will pull an alpine image which has Java available in it, and which is not in my local image cache.

root@kerneltalks # docker image pull  anapsix/alpine-java
Using default tag: latest
latest: Pulling from anapsix/alpine-java
ff3a5c916c92: Already exists
b2573fe715ab: Pull complete
Digest: sha256:f4271069aa69eeb4cfed50b6a61fb7e4060297511098a3605232dbfe8f85de74
Status: Downloaded newer image for anapsix/alpine-java:latest

If you observe the output above, the command pulled only one layer, with ID b2573fe715ab, whereas for the other one it says "Already exists"! So here comes the logic of the layered union filesystem.

As explained in the definition, an image consists of layered union filesystems. Since this image is alpine with Java, it has the base alpine image (which I already have in my image cache) as its bottom layer. Hence, only the extra Java layer of this image was downloaded by the pull command, and together they are stored as the alpine-java image in my local cache!

You can even check the details of this image on Docker Hub; it indeed mentions that it's built on top of the base alpine image, which explains the pull behavior and supports our explanation above.

How to push or upload image to Docker hub?

To upload an image to Docker Hub, you will need a Docker account. It's free to create. Just head to https://hub.docker.com/ and sign up. Once logged in, you can create a private repo into which you can upload private images; otherwise, uploaded images will be public.

You need to log in to your Docker account, using the docker login command, from the server where your locally built images are stored.

root@kerneltalks # docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: shrikantlavhate
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Now you can push your image to a public or private repo. I created my private repo with the name kerneltalks, so I am pushing the image to my private repo on Docker Hub. But before pushing an image to Docker Hub, you need to tag it properly. To upload an image to a private repo, you need to tag it like username/reponame.

root@kerneltalks # docker image tag 3fd9065eaf02 shrikantlavhate/kerneltalks
root@kerneltalks # docker image push shrikantlavhate/kerneltalks
The push refers to repository [docker.io/shrikantlavhate/kerneltalks]
cd7100a72410: Pushed
latest: digest: sha256:8c03bb07a531c53ad7d0f6e7041b64d81f99c6e493cb39abba56d956b40eacbc size: 528

Once the image is tagged properly, it can be pushed to Docker Hub. From the tag, Docker Hub determines whether it's a public or private image and uploads it accordingly.

Be noted that your login information is kept in a local profile for a seamless experience. Don't forget to log out from your Docker account if you are on a shared machine.

root@kerneltalks # cat $HOME/.docker/config.json
{
        "auths": {
                "https://index.docker.io/v1/": {
                        "auth": "c2hyaWthXXXXXXXXXXXkRkdjflcXXXXg1"
                }
        },
        "HttpHeaders": {
                "User-Agent": "Docker-Client/18.05.0-ce (linux)"
        }
}

You can see the config.json file under the .docker directory saved the login details. You can log out to wipe this info from the machine using docker logout.

root@kerneltalks # docker logout
Removing login credentials for https://index.docker.io/v1/
root@kerneltalks # cat $HOME/.docker/config.json
{
        "auths": {},
        "HttpHeaders": {
                "User-Agent": "Docker-Client/18.05.0-ce (linux)"
        }
}

Once we logged out of the Docker account, the login details were wiped from the file!

Docker container utilization monitoring

An article explaining Docker container utilization monitoring: how to monitor or save reports of Docker container resource utilization, and how to format the output according to your requirements.

Monitor your Docker containers

Docker containers are processes running on the host OS, using its resources. That means Docker containers use CPU, memory, and IO from the host OS to execute their commands and perform their tasks. Resource utilization is a major factor in the performance of a server or application.

The host OS, being Linux in our case, can be monitored for resource utilization using tools like sar, top, etc. You could trace down the PIDs of Docker containers and then drill down to those PIDs' utilization in the host's monitoring tools to get container utilization. But this is a tedious job and not feasible when you have a number of containers running on your server. Docker already took care of this and provides its own real-time monitoring facility, which reports resource utilization of each container in real time.

If you still don’t have Docker on your system, read here how to install Docker in Linux and 8 basic Docker management commands.

How to monitor Docker container utilization?

Docker provides the stats command to show real-time container resource utilization statistics. The command runs in the terminal like the top command and updates values in real time.

Read all docker or containerization related articles here from KernelTalk’s archives.

You can supply a container ID or name to this command to view the statistics of that specific container. If no container name/ID is supplied, it shows stats of all running containers.

root@kerneltalks # docker container stats
CONTAINER ID        NAME                    CPU %               MEM USAGE / LIMIT   MEM %               NET I/O             BLOCK I/O           PIDS
2554070a4ba7        friendly_hodgkin        0.19%               205MiB / 991MiB     20.69%              1.21kB / 767B       105MB / 8.7kB       31
b60fa988daee        condescending_galileo   0.18%               201MiB / 991MiB     20.29%              1.21kB / 761B       96.3MB / 9.22kB     31

root@kerneltalks # docker container stats friendly_hodgkin
CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT   MEM %               NET I/O             BLOCK I/O           PIDS
2554070a4ba7        friendly_hodgkin    0.15%               205.2MiB / 991MiB   20.71%              1.21kB / 767B       105MB / 8.7kB       31

The output is tabulated, and column-wise it has –

  • CONTAINER ID : Docker container ID
  • NAME : Docker container name
  • CPU % : the percentage of the host's CPU being utilized by the container right now
  • MEM USAGE / LIMIT : memory being utilized by the container right now / the maximum memory the container may use
  • MEM % : the percentage of the host's memory being utilized by the container right now
  • NET I/O : network input/output traffic on the container's network interface
  • BLOCK I/O : disk IO done on the host storage
  • PIDS : the total number of processes/threads the container created/forked

You have to press Ctrl+C to return to the prompt from the real-time updating stats screen.

How to save Docker container utilization?

Now, if you want to save container utilization, or you want to use the stats command in some script, you may want it to run for one iteration only and exit automatically rather than keep running.

In such a case, you need to use the --no-stream switch along with the stats command.

root@kerneltalks # docker container stats --no-stream
CONTAINER ID        NAME                    CPU %               MEM USAGE / LIMIT   MEM %               NET I/O             BLOCK I/O           PIDS
2554070a4ba7        friendly_hodgkin        0.15%               205.2MiB / 991MiB   20.71%              1.21kB / 767B       105MB / 8.7kB       31
b60fa988daee        condescending_galileo   0.15%               201.3MiB / 991MiB   20.31%              1.21kB / 761B       96.3MB / 9.22kB     31
root@kerneltalks #

You can redirect this output to file for further processing.
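For example, a cron-friendly one-liner appends one snapshot per run to a log file (the file path is just an example) –

root@kerneltalks # docker container stats --no-stream >> /tmp/container-stats.log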

CPU and Memory utilization of Docker container

The stats command offers output formatting according to your need using the --format switch, which accepts Go template formatting.

Using it, you can make the stats command display only the CPU and memory utilization of containers, like below:

root@kerneltalks # docker container stats --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemPerc}}"
CONTAINER           CPU %               MEM %
2554070a4ba7        0.18%               20.71%
b60fa988daee        0.18%               20.32%

Placeholders of this formatting are as below –

  • .Container Container name or ID (user input)
  • .Name Container name
  • .ID Container ID
  • .CPUPerc CPU %
  • .MemPerc Memory %
  • .MemUsage Memory usage
  • .NetIO Network IO
  • .BlockIO Block IO
  • .PIDs Number of PIDs

So you can format the output the way you want, with only the values you are interested in. Then you can use --no-stream and redirect the utilization figures to a file, or pipe them to other commands for further processing, as in the sketch below.
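As a small sketch of that idea, the below combines --format and --no-stream and pipes the result to awk to flag containers crossing an arbitrary 80% CPU threshold –

root@kerneltalks # docker container stats --no-stream --format "{{.Name}} {{.CPUPerc}}" | awk '{ gsub("%",""); if ($2+0 > 80) print $1 " is above 80% CPU" }'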

How to execute command inside Docker container

Learn how to access a shell and execute commands inside a Docker container. Explains running commands inside already running containers, as well as while launching containers.

Execute commands in Docker container

If you are following the Docker series on my blog, then you must have gone through Docker basics and Docker container maintenance commands by now. In this tutorial, we walk you through how to access a shell inside a Docker container and how to execute commands inside the container.

First of all, you cannot execute commands or access a shell in just any container. The container image you are using to launch your container should have a shell in it. If the image does not include a shell, then you cannot do anything inside the container during launch or even after launch.

Read all docker or containerization related articles here from KernelTalk’s archives.

For example, if you are launching a container from the Nginx image, i.e. a web-server container, then you won't be able to access a shell or execute commands within it, since it's just a web-server process! But if you are launching a container from the ubuntu or alpine image, then you will be able to access its shell, since those images do ship a shell.

You can access a shell inside a Docker container and execute commands inside the container in either of two ways –

  1. Execute bash shell while launching container
  2. Use docker command to execute single command inside container

Remember, each Docker image has a default command defined in it, which it executes whenever it launches a container. You can edit it anytime, but if you want to change it on the fly, you need to specify it at the end of the run command. The image then ignores the default defined command and executes the command specified in the docker run command after it launches the container.

Access shell & execute command in Docker container while launching it

Once you have confirmed that the image you are using to launch the container does support a shell (mostly it's bash), you need to launch the container using the -it switches, where –

  • -i is interactive mode. It keeps STDIN open even if you choose to detach the container after launch
  • -t assigns a pseudo-terminal through which STDIN is kept open for user input.

I launched an Ubuntu container with the -it switches and was presented with a shell prompt within it. Observe the output below –

root@kerneltalks# docker container run -it ubuntu:latest
root@2493081de86f:/# hostname
2493081de86f
root@2493081de86f:/# ls -lrt
total 20
drwxr-xr-x.   2 root root    6 Apr 24 08:34 home
drwxr-xr-x.   2 root root    6 Apr 24 08:34 boot
drwxr-xr-x.   8 root root   96 Apr 26 21:16 lib
drwxr-xr-x.  10 root root 4096 Apr 26 21:16 usr
drwxr-xr-x.   2 root root    6 Apr 26 21:16 srv
drwxr-xr-x.   2 root root    6 Apr 26 21:16 opt
drwxr-xr-x.   2 root root    6 Apr 26 21:16 mnt
drwxr-xr-x.   2 root root    6 Apr 26 21:16 media
drwxr-xr-x.   2 root root   34 Apr 26 21:16 lib64
drwx------.   2 root root   37 Apr 26 21:17 root
drwxr-xr-x.  11 root root 4096 Apr 26 21:17 var
drwxr-xr-x.   2 root root 4096 Apr 26 21:17 bin
drwxrwxrwt.   2 root root    6 Apr 26 21:17 tmp
drwxr-xr-x.   2 root root 4096 Apr 27 23:28 sbin
drwxr-xr-x.   5 root root   58 Apr 27 23:28 run
dr-xr-xr-x.  13 root root    0 Jun  2 14:40 sys
drwxr-xr-x.  29 root root 4096 Jun  2 14:58 etc
dr-xr-xr-x. 114 root root    0 Jun  2 14:58 proc
drwxr-xr-x.   5 root root  360 Jun  2 14:58 dev
root@2493081de86f:/# date
Sat Jun  2 15:00:17 UTC 2018
root@2493081de86f:/# exit

In the output, you can see that after the container is launched, the prompt changes to root@2493081de86f. Now you are within the container with the root account. Keep in mind that everything inside the container happens with the root ID. Notice that the hostname of the Ubuntu container is set to the same value as the container ID. I executed a couple of commands inside the container in the above output.

Keep in mind, since containers are aimed to be very lightweight, they always contain minimal software. So if you are running any Linux distribution container, you won't be able to run all the commands you would normally run in a VM or on a Linux server.

Execute command inside already running container

The above process applies to a container you are about to launch. But what if you want to execute a command in a container that is already running on the system? Docker provides the exec subcommand to run commands in a running container. The syntax is docker container exec <container name/ID> <command to run>.

I already have an Ubuntu container running on my system. I used exec to execute the hostname, date, and df commands inside the container.

root@kerneltalks # docker container ls
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
ae0721fb8ecf        ubuntu:latest       "/bin/bash"         2 minutes ago       Up 2 minutes                            loving_bohr

root@kerneltalks # docker container exec ae0721fb8ecf date
Sat Jun 2 15:41:24 UTC 2018
root@kerneltalks # docker container exec ae0721fb8ecf hostname
ae0721fb8ecf
root@kerneltalks # docker container exec ae0721fb8ecf df
Filesystem                                                                                         1K-blocks    Used Available Use% Mounted on
/dev/mapper/docker-202:1-26198093-57ab60113158ca3f51c470fefb25a3fdf154a5309f05f254c660dba2a55dbab7  10474496  109072  10365424   2% /
tmpfs                                                                                                  65536       0     65536   0% /dev
tmpfs                                                                                                 507368       0    507368   0% /sys/fs/cgroup
/dev/xvda1                                                                                           8376320 5326996   3049324  64% /etc/hosts
shm                                                                                                    65536       0     65536   0% /dev/shm
tmpfs                                                                                                 507368       0    507368   0% /proc/scsi
tmpfs                                                                                                 507368       0    507368   0% /sys/firmware

Observe the above output: all three commands ran successfully inside the container and showed their output on our host machine's terminal.
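You can also combine exec with the -it switches to get an interactive shell inside a running container, provided the image ships a shell as discussed earlier –

root@kerneltalks # docker container exec -it ae0721fb8ecf /bin/bash
root@ae0721fb8ecf:/#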