
Install Ansible in Linux

A small tutorial on how to install Ansible in Linux and run ansible commands on remote clients from the control server.

Ansible installation in Linux

What is Ansible ?

Ansible is an open-source configuration management tool developed by Red Hat. You can get enterprise support for it through a Red Hat subscription. Ansible is written in Python, with PowerShell used for managing Windows clients. It uses SSH in the background to communicate with clients and execute tasks. The best feature of Ansible is that it is agent-less, so there is no extra load on clients, and configurations can be pushed from the server at any time.

Ansible installation

The first pre-requisite of Ansible is that the primary or control server should have a password-less SSH connection configured, for the Ansible user, to all its client servers. You can configure passwordless SSH in two steps using ssh-keygen and ssh-copy-id.

For our understanding, we have 1 control server kerneltalks1 and 1 client kerneltalks2, and we have configured passwordless SSH for user shrikant (which we treat as the Ansible user here).
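A minimal sketch of that passwordless SSH setup, using the hosts and user from our example:

# On the control server, generate a key pair for the ansible user (accept the defaults)
shrikant@kerneltalks1 $ ssh-keygen -t rsa

# Copy the public key to the client so SSH stops asking for a password
shrikant@kerneltalks1 $ ssh-copy-id shrikant@kerneltalks2

# Verify: this should print the client hostname without a password prompt
shrikant@kerneltalks1 $ ssh shrikant@kerneltalks2 hostname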

Let's install Ansible on the control server, i.e. kerneltalks1.

Ansible can be installed using the normal package installation procedure. Below are quick commands for your reference.

  • RHEL : subscription-manager repos --enable rhel-7-server-ansible-2.6-rpms; yum install ansible
  • CentOS, Fedora : yum install ansible
  • Ubuntu : apt-add-repository --yes --update ppa:ansible/ansible; apt-get install ansible
  • Git clone : git clone https://github.com/ansible/ansible.git
    • cd ./ansible; make rpm
    • rpm -Uvh ./rpm-build/ansible-*.noarch.rpm

I installed Ansible on my CentOS machine using the above command.

[root@kerneltalks1 ~]# ansible --version
ansible 2.7.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /bin/ansible
  python version = 2.7.5 (default, Aug  4 2017, 00:39:18) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]

Ansible default config structure

After installation, Ansible creates the /etc/ansible directory with the default configuration in it. You can find the ansible.cfg and hosts files there.

[root@kerneltalks1 ~]# ll /etc/ansible
total 24
-rw-r--r--. 1 root root 20269 Oct  9 01:34 ansible.cfg
-rw-r--r--. 1 root root  1016 Oct  9 01:34 hosts
drwxr-xr-x. 2 root root     6 Oct  9 01:34 roles

ansible.cfg is the default configuration file for the ansible executable.

hosts is a list of clients on which control server executes commands remotely via password-less SSH.

Running first command via Ansible

Let’s configure kerneltalks2 and run our first Ansible command on it remotely from kerneltalks1 control server.

You need to configure passwordless SSH as we discussed earlier. Then add this server name in the /etc/ansible/hosts file.

root@kerneltalks1 # cat /etc/ansible/hosts
[testservers]
 172.31.81.83 

Here the IP mentioned is of kerneltalks2, and you can specify a grouping of servers in square braces. And you are good to go. Run the ansible command with the ping module (-m switch). There are many modules that come in-built with ansible, which you can use rather than using equivalent shell commands.

[shrikant@kerneltalks1 ~]$ ansible -m ping all
172.31.81.83 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

You can see the output is a success on the mentioned IP. So we installed and ran the first successful command using ansible!

Common errors

1. If you try to run an ansible command on a group of servers which does not exist in the hosts file, you will see the below error –

[shrikant@kerneltalks1 ~]$ ansible -m ping testserver
 [WARNING]: Could not match supplied host pattern, ignoring: testserver

 [WARNING]: No hosts matched, nothing to do

You need to check the /etc/ansible/hosts file (or the hosts file referenced by your ansible installation) and make sure the server group mentioned in the command exists in it.

2. If you do not configure passwordless SSH from the control server to the client, or if the client is not reachable over the network, you will see the below error.

[root@kerneltalks1 ansible]# ansible -m ping all
kerneltalks2 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: Warning: Permanently added 'kerneltalks2,172.31.81.83' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n",
    "unreachable": true
}

You need to check the connectivity and passwordless ssh access from the control server.

Kubernetes installation and configuration

A step-by-step guide for Kubernetes installation and configuration, along with sample outputs.

Kubernetes installation guide

Pre-requisite

  • The basic requirement to run Kubernetes is that your machine should not have SWAP configured; if at all it is configured, you need to turn it off using swapoff -a (see the sample commands after this list).
  • You will need Docker installed on your machine.
  • You will need to set SELinux to permissive mode to enable kubelet network communication. You can define a proper SELinux policy for Kubernetes later and then re-enable it.
  • Your machine should have at least 2 CPUs.
  • Kubernetes ports should be open between master and nodes for cluster communication. All are TCP ports and are to be open for inbound traffic.
Ports           Description
10250           Kubelet API (for master and nodes)
10251           kube-scheduler
10252           kube-controller-manager
6443*           Kubernetes API server
2379-2380       etcd server client API
30000-32767     NodePort Services (only for nodes)
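For reference, a minimal sketch of the swap and SELinux pre-requisites above, assuming the usual /etc/fstab and /etc/selinux/config locations:

# Turn off swap now, and comment it out of fstab so it stays off after reboot
root@kerneltalks # swapoff -a
root@kerneltalks # sed -i '/ swap / s/^/#/' /etc/fstab

# Put SELinux in permissive mode now, and persist it across reboots
root@kerneltalks # setenforce 0
root@kerneltalks # sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config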

Installation of Kubernetes master node Kubemaster

The first step is to install the three pillar packages of Kubernetes, which are:

  • kubeadm – It bootstraps Kubernetes cluster
  • kubectl – CLI for managing cluster
  • kubelet – Service running on all nodes which helps manage the cluster by performing tasks

To download these packages you need to configure a repo. Below are the repo file contents for the respective distributions.

For Red Hat, CentOS or Fedora (YUM based) –

root@kerneltalks # cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
root@kerneltalks # yum install -y kubectl kubeadm kubelet

For Ubuntu, Suse or Debian (APT based)-

sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl kubeadm kubelet

Once you have configured the repo install packages kubeadm, kubectl and kubelet according to your distribution package manager.

Enable and start kubelet service

root@kerneltalks # systemctl enable kubelet.service
root@kerneltalks # systemctl start kubelet

Configuration of Kubernetes master node Kubemaster

Now you need to make sure both Docker and Kubernetes use the same cgroup driver. By default it's cgroupfs for both. If you haven't changed it for Docker, then you don't have to do anything for Kubernetes either. But if you are using a different cgroup driver in Docker, you need to specify it for Kubernetes in the below file –

root@kernetalks # cat /etc/default/kubelet
KUBELET_KUBEADM_EXTRA_ARGS=--cgroup-driver=<value>

This file is picked up by kubeadm while starting up. But if you already have Kubernetes running, you need to reload this configuration using –

root@kerneltalks # systemctl daemon-reload
root@kerneltalks # systemctl restart kubelet

Now you are ready to bring up Kubernetes master and then add worker nodes or minions to it as a slave for the cluster.

You have installed and adjusted settings to bring up the Kubemaster. You can start the Kubemaster using the command kubeadm init, but you need to provide the pod network CIDR the first time.

  • --pod-network-cidr= : For pod network
  • --apiserver-advertise-address= : Optional. To be used when multiple IP addresses/subnets assigned to the machine.

Refer to the below output for starting up the Kubernetes master node. There are a few warnings which can be corrected with basic sysadmin tasks.

# kubeadm init --apiserver-advertise-address=172.31.81.44 --pod-network-cidr=192.168.1.0/16
[init] using Kubernetes version: v1.11.3
[preflight] running pre-flight checks
I0912 07:57:56.501790    2443 kernel_validator.go:81] Validating kernel version
I0912 07:57:56.501875    2443 kernel_validator.go:96] Validating kernel config
        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.05.0-ce. Max validated version: 17.03
        [WARNING Hostname]: hostname "kerneltalks" could not be reached
        [WARNING Hostname]: hostname "kerneltalks" lookup kerneltalks1 on 172.31.0.2:53: no such host
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kerneltalks1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.81.44]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kerneltalks1 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kerneltalks1 localhost] and IPs [172.31.81.44 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 46.002127 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node kerneltalks1 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node kerneltalks1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kerneltalks1" as an annotation
[bootstraptoken] using token: 8lqimn.2u78dcs5rcb1mggf
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.31.81.44:6443 --token 8lqimn.2u78dcs5rcb1mggf --discovery-token-ca-cert-hash sha256:de6bfdec100bb979d26ffc177de0e924b6c2fbb71085aa065fd0a0854e1bf360

In the above output, there are two key things you get –

  • Commands to enable the regular user to administer Kubemaster
  • Command to run on slave node to join Kubernetes cluster
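Pulled out of that output for readability (the token and hash are from this example run):

# Run as a regular user on the master to start administering the cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Deploy a pod network of your choice (see the addons page referenced in the output)
kubectl apply -f [podnetwork].yaml

# Run as root on each slave node to join it to the cluster
kubeadm join 172.31.81.44:6443 --token 8lqimn.2u78dcs5rcb1mggf --discovery-token-ca-cert-hash sha256:de6bfdec100bb979d26ffc177de0e924b6c2fbb71085aa065fd0a0854e1bf360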

That’s it. You have successfully started the Kubemaster node and brought up your Kubernetes cluster. The next task is to install and configure your secondary nodes in this cluster.

Installation of Kubernetes slave node or minion

The installation process remains the same. Follow steps for disabling SWAP, installing Docker, and installing 3 Kubernetes packages.

Configuration of Kubernetes slave node minion

There is not much to do on this node. You already have the command to run on this node for joining the cluster, which was spat out by the kubeadm init command.

Let's see how to join a node to the Kubernetes cluster using the kubeadm command –

[root@minion ~]# kubeadm join 172.31.81.44:6443 --token 8lqimn.2u78dcs5rcb1mggf --discovery-token-ca-cert-hash sha256:de6bfdec100bb979d26ffc177de0e924b6c2fbb71085aa065fd0a0854e1bf360
[preflight] running pre-flight checks
I0912 08:19:56.440122    1555 kernel_validator.go:81] Validating kernel version
I0912 08:19:56.440213    1555 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "172.31.81.44:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.31.81.44:6443"
[discovery] Failed to request cluster info, will try again: [Get https://172.31.81.44:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: net/http: TLS handshake timeout]
[discovery] Requesting info from "https://172.31.81.44:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.31.81.44:6443"
[discovery] Successfully established connection with API Server "172.31.81.44:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "minion" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

And here you go. The node has joined the cluster successfully. Thus you have completed the Kubernetes cluster installation and configuration!

Check node status from the Kubemaster.

[root@kerneltalks ~]# kubectl get nodes
NAME           STATUS     ROLES     AGE       VERSION
kerneltalks1   Ready      master    2h        v1.11.3
minion         Ready      <none>    1h        v1.11.3

Once you see all statuses as Ready, you have a steady cluster up and running.

Difference between Docker swarm and Kubernetes

Learn the difference between Docker swarm and Kubernetes. A comparison between the two container orchestration platforms in tabular form.

Docker Swarm v/s Kubernetes

When you are on the learning curve of application containerization, there comes a stage when you encounter orchestration tools for containers. If you started your learning with Docker, then Docker swarm is the first cluster management tool you must have learned, followed by Kubernetes. So it's time to compare Docker swarm and Kubernetes. In this article, we will quickly see what Docker swarm is, what Kubernetes is, and then compare the two.

What is Docker swarm?

Docker swarm is a tool native to Docker which is aimed at cluster management of Docker containers. Docker swarm enables you to build a cluster of multiple nodes (VMs or physical machines) running the Docker engine. In turn, you run containers on multiple machines to achieve a highly available, fault-tolerant environment. Being native to Docker, it's pretty simple to set up.

What is Kubernetes?

It's a platform to manage containerized applications, i.e. containers, in a cluster environment along with automation. It does almost the same job swarm mode does, but in a different and enhanced way. It was developed by Google in the first place, and the project was later handed over to the CNCF. It works with container runtimes like Docker and rkt (rocket). Kubernetes installation is a bit more complex than Swarm's.

Compare Docker and Kubernetes

If someone asks you for a comparison between Docker and Kubernetes, then that's not a valid question in the first place. You cannot differentiate between Docker and Kubernetes that way: Docker is an engine that runs containers (or is itself referred to as a container technology), while Kubernetes is an orchestration platform that manages Docker containers in a cluster environment. So one cannot compare Docker with Kubernetes.

Difference between Docker Swarm and Kubernetes

I added a comparison of Swarm and Kubernetes in the below table for easy readability.

Docker Swarm | Kubernetes
Docker's own orchestration tool | Google's open-source orchestration tool
Younger than Kubernetes | Older than Swarm
Simple to set up, being a tool native to Docker | A bit complex to set up, but once done offers more functionality than Swarm
Smaller community around it, but Docker has excellent documentation | Being Google's product and older, it has huge community support
Simple application deployment in the form of services | Slightly complex application deployment through pods, deployments, and services
Has only a command-line interface for management | Offers a GUI in addition to the CLI
Monitoring is available using third-party applications | Offers native and third-party options for monitoring and logging
Much faster than Kubernetes | Since it's a complex system, its deployments are a bit slower than Swarm's

Format date and time for Linux shell script or variable

Learn how to format date and time to use in a shell script or as a variable along with different format examples.

Date formats

There are many times you need to use the date in your shell script, e.g. to name a log file, to pass it as a variable, etc. So we need different date formats that can be used as strings or variables in our scripts. In this article, let's see how to use date in a shell script and all the different formats you can use.

  • Check timedatectl command to easily manage date & time in Linux

How to use date in shell script?

You can use the date by inserting shell command execution within your command. For example, if you want to create a log file with the current date in its name, you can do it in the following way –

root@kerneltalks # echo test > /tmp/`date +%d`.txt
root@kerneltalks # ls -lrt
-rw-r--r--. 1 root  root     5 Sep 10 09:10 10.txt

Basically, you need to pass a format identifier prefixed with +% to the date command to get your desired output format. There are many different identifiers the date command supplies.

You can even save a specific date format to a variable like –

root@kerneltalks # MYDATE=`date +%d.%b.%Y`
root@kerneltalks # echo $MYDATE
10.Sep.2018

Different format variables for date command

These format identifiers are from date command man page :

%a     locale’s abbreviated weekday name (e.g., Sun)
%A     locale’s full weekday name (e.g., Sunday)
%b     locale’s abbreviated month name (e.g., Jan)
%B     locale’s full month name (e.g., January)
%c     locale’s date and time (e.g., Thu Mar  3 23:05:25 2005)
%C     century; like %Y, except omit last two digits (e.g., 20)
%d     day of month (e.g, 01)
%D     date; same as %m/%d/%y
%e     day of month, space padded; same as %_d
%F     full date; same as %Y-%m-%d
%g     last two digits of year of ISO week number (see %G)
%G     year of ISO week number (see %V); normally useful only with %V
%h     same as %b
%H     hour (00..23)
%I     hour (01..12)
%j     day of year (001..366)
%k     hour ( 0..23)
%l     hour ( 1..12)
%m     month (01..12)
%M     minute (00..59)
%N     nanoseconds (000000000..999999999)
%p     locale’s equivalent of either AM or PM; blank if not known
%P     like %p, but lower case
%r     locale’s 12-hour clock time (e.g., 11:11:04 PM)
%R     24-hour hour and minute; same as %H:%M
%s     seconds since 1970-01-01 00:00:00 UTC
%S     second (00..60)
%T     time; same as %H:%M:%S
%u     day of week (1..7); 1 is Monday
%U     week number of year, with Sunday as first day of week (00..53)
%V     ISO week number, with Monday as first day of week (01..53)
%w     day of week (0..6); 0 is Sunday
%W     week number of year, with Monday as first day of week (00..53)
%x     locale’s date representation (e.g., 12/31/99)
%X     locale’s time representation (e.g., 23:13:48)
%y     last two digits of year (00..99)
%Y     year
%z     +hhmm numeric timezone (e.g., -0400)
%:z    +hh:mm numeric timezone (e.g., -04:00)
%::z   +hh:mm:ss numeric time zone (e.g., -04:00:00)
%Z     alphabetic time zone abbreviation (e.g., EDT)

Using combinations of the above, you can get your desired date format as output to use in a shell script! You can even use %n for a new line and %t for adding a tab in outputs, though these are mostly not needed since you will be using the date as a single string.

Different date format examples

For your convenience and ready reference, I have listed below combinations for different date formats.

root@kerneltalks # date +%d_%b_%Y
10_Sep_2018

root@kerneltalks # date +%D
09/10/18

root@kerneltalks # date +%F-%T
2018-09-10-11:09:51

root@kerneltalks # echo today is  `date +%A`
today is Monday

root@kerneltalks # echo Its `date +%d` of `date +%B" "%Y` and time is `date +%r`
Its 10 of September 2018 and time is 11:13:42 AM

DCA – Docker Certified Associate Certification guide

A small guide to help aspirants prepare for the Docker Certified Associate certification.

Docker Certified Associate Certification guide

I recently cleared the DCA – Docker Certified Associate certification and wanted to share my experience here on my blog. This might be helpful for folks who are going to appear for the examination soon, or may inspire containerization aspirants to take it.

DCA details :

Complete certification details and FAQs can be found on their official website.

  • Duration: 90 minutes
  • Type: Multiple choice questions
  • Number of questions: 55
  • Mode of exam: Remotely proctored
  • Cost: $195 (for India residents it is plus 18% GST, which comes to roughly 16-17K INR)

Preparation

Docker Certified Associate aims at certifying professionals with enterprise-level experience of Docker for a minimum of a year. Whenever you start learning Docker, you mostly start off with CE (Community Edition), which comes free, or you practice on Play with Docker, which also serves CE Docker. You should not attempt this certification with knowledge or experience of CE only. This certification is designed to test your knowledge of the Enterprise Edition of Docker, which is fully feature-packed and has a paid license tagged to it.

So it is expected that you have a minimum of 6 months to a year of experience with Docker EE in an enterprise environment before you attempt the certification. Docker also offers a trial EE license which you can use to start off with EE Docker learning. Only once you are through with all the syllabus mentioned on the website for this certification, and well versed with the Docker enterprise world, should you attempt the certification.

You can register for the examination from the website, which will redirect you to their vendor Examity's website. There you need to register for the exam according to the available time slots and make the payment online. You can even book a time which is within 24 hours, but it's not always available. Make sure your computer meets all the pre-requisites so that you can take the exam without any hassle. You can even connect with the exam vendor well before the exam and get your computer checked for compatibility with the exam software/plugin.

Docker’s official study guide walks you through the syllabus so that you can prepare yourself accordingly.

During Exam

You can take this exam from anywhere, provided you have a good internet connection and your surroundings abide by the rules mentioned on the certification website, like an empty room, clean desk, etc. As this exam is remotely proctored, there will be an executive monitoring your screen, webcam, and mic remotely in real-time. So make sure you have a calm place and an empty room before you start the exam. You should not eat, use a cellphone or similar electronic devices, talk, etc. during the exam.

Exam questions are carefully designed by professionals to test your knowledge in all areas. Do not expect only command and option type questions. There is a good mix of all types of logical, conceptual, and practical application questions. Some questions may have multiple answers, so keep an eye on such questions and do not forget to select more than one answer.

After exam

Your examination scorecard will be displayed immediately and the result will be shown to you. You can have it emailed to you. The actual certificate takes 3 minutes before it hits your inbox! Do check spam if you don't receive it, before you escalate to the Docker Certification Team (certification@docker.com).

All the best! Do share your success stories in the comments below.

How to disable iptables firewall temporarily

Learn how to disable the iptables firewall in Linux temporarily for troubleshooting purposes. Also learn how to save policies and how to restore them when you enable the firewall again.

Disable iptables firewall!

Sometimes you need to turn off the iptables firewall to do some connectivity troubleshooting and then turn it back on. While doing this, you also want to save all your firewall policies. In this article, we will walk you through how to save firewall policies and how to disable/enable the iptables firewall. For more details about the iptables firewall and policies, read our article on it.

Save iptables policies

The first step while disabling the iptables firewall temporarily is to save the existing firewall rules/policies. The iptables-save command lists all your existing policies, which you can save in a file on your server.

root@kerneltalks # iptables-save
# Generated by iptables-save v1.4.21 on Tue Jun 19 09:54:36 2018
*nat
:PREROUTING ACCEPT [1:52]
:INPUT ACCEPT [1:52]
:OUTPUT ACCEPT [15:1140]
:POSTROUTING ACCEPT [15:1140]
:DOCKER - [0:0]
---- output truncated ----

root@kerneltalks # iptables-save > /root/firewall_rules.backup

So iptables-save is the command with which you can take a backup of your iptables policies.

Stop/disable iptables firewall

On older Linux distributions you have the option of stopping the iptables service with service iptables stop, but on newer distributions you just need to wipe out all the policies and allow all traffic through the firewall. This is as good as stopping the firewall.

Use the below list of commands to do that.

root@kerneltalks # iptables -F
root@kerneltalks # iptables -X
root@kerneltalks # iptables -P INPUT ACCEPT
root@kerneltalks # iptables -P OUTPUT ACCEPT
root@kerneltalks # iptables -P FORWARD ACCEPT

Where –

  • -F: Flush all policy chains
  • -X: Delete user-defined chains
  • -P INPUT/OUTPUT/FORWARD : Set the default policy of the specified chain to ACCEPT

Once done, check the current firewall policies. They should look like below, which means everything is accepted (as good as your firewall being disabled/stopped).

# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Restore firewall policies

Once you are done with troubleshooting and want to turn iptables back on with all its configurations, you first need to restore the policies from the backup we took in the first step.

root@kerneltalks # iptables-restore </root/firewall_rules.backup

Start iptables firewall

And then start the iptables service, in case you stopped it in the previous step, using service iptables start. If you haven't stopped the service, then restoring the policies alone will do. Check if all policies are back in the iptables firewall configuration:

root@kerneltalks # iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy DROP)
target     prot opt source               destination
DOCKER-USER  all  --  anywhere             anywhere
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere
-----output truncated-----

That’s it! You have successfully disabled and enabled the firewall without losing your policy rules.

Disable iptables firewall permanently

To disable iptables permanently, follow the below process –

  • Stop iptables service
  • Disable iptables service
  • Flush all rules
  • Save configuration

This can be achieved using the below set of commands.

root@kerneltalks # systemctl stop iptables
root@kerneltalks # systemctl disable iptables
root@kerneltalks # systemctl status iptables
root@kerneltalks # iptables --flush
root@kerneltalks # service iptables save
root@kerneltalks # cat  /etc/sysconfig/iptables

How Docker container DNS works

Learn about Docker DNS: how Docker container DNS works, and how to change the nameserver in a Docker container to use an external DNS.

Docker DNS

Docker containers have an inbuilt DNS which automatically resolves container names to IPs in user-defined networks. But what if you want to use an external DNS in a container for some project need? Or how do you use an external DNS in all the containers run on a host? In this article, we will walk you through the below points:

  1. Docker native DNS
  2. Nameservers in Docker
  3. How to use external DNS in the container while starting it
  4. How to use external DNS in all the containers on a docker host

Docker native DNS

In a user-defined docker network, DNS resolution of container names happens automatically. You don't have to do anything; if your containers are using your user-defined docker network, they can find each other by hostname automatically.

We have 2 Nginx containers running on my newly created docker network named kerneltalks. Both Nginx containers have the ping utility installed.

$ docker container ls
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
1b1bb99559ac        nginx               "nginx -g 'daemon of…"   27 minutes ago      Up 27 minutes       80/tcp              nginx2
239c662d3945        nginx               "nginx -g 'daemon of…"   27 minutes ago      Up 27 minutes       80/tcp              nginx1

$ docker network inspect kerneltalks
"Containers": {
"1b1bb99559ac21e29ae671c23d46f2338336203c96874ac592431f60a2e6a5de": {
"Name": "nginx2",
"EndpointID": "4141f56fe878275e322b9283476508d1135e813d12ea2b7d87a5c3d0db527f79",
"MacAddress": "02:42:ac:13:00:05",
"IPv4Address": "172.19.0.5/16",
"IPv6Address": ""
},
"239c662d3945031413e4c69b99e3ddde57832004bd6193bdbc30bd5e6ca6f4e2": {
"Name": "nginx1",
"EndpointID": "376da79e6746cc80d178f4363085e521a9d45c65df08248b77c1bc744b495ae4",
"MacAddress": "02:42:ac:13:00:04",
"IPv4Address": "172.19.0.4/16",
"IPv6Address": ""
},

And they can ping each other without any extra DNS effort, since user-defined networks have an inbuilt DNS which resolves container names to IP addresses.

$ docker exec -it nginx1 ping nginx2
PING nginx2 (172.19.0.5) 56(84) bytes of data.
64 bytes from nginx2.kerneltalks (172.19.0.5): icmp_seq=1 ttl=64 time=0.151 ms
64 bytes from nginx2.kerneltalks (172.19.0.5): icmp_seq=2 ttl=64 time=0.053 ms

$ docker exec -it nginx2 ping nginx1
PING nginx1 (172.19.0.4) 56(84) bytes of data.
64 bytes from nginx1.kerneltalks (172.19.0.4): icmp_seq=1 ttl=64 time=0.088 ms
64 bytes from nginx1.kerneltalks (172.19.0.4): icmp_seq=2 ttl=64 time=0.054 ms

But in the default docker bridge network (which installs with the docker daemon), automatic DNS resolution is disabled to maintain container isolation. You can enable inter-container communication just by using the --link option while running a container (when on the default bridge network).

--link is a legacy feature and may be removed in upcoming releases. So it is always advisable to use user-defined networks rather than the default docker networks.

DNS nameservers in Docker

Docker is coded in a smart way. When you run a new container on the docker host without any DNS-related option in the command, it simply copies the host's /etc/resolv.conf into the container. While copying, it filters out all localhost IP addresses from the file. That's pretty obvious, since those won't be reachable from the container network, so there is no point in keeping them. During this filtering, if no nameservers are left to add to the container's /etc/resolv.conf, the Docker daemon smartly adds Google's public nameservers 8.8.8.8 and 8.8.4.4 to the file and uses it within the container.

Also, the host's and container's /etc/resolv.conf always stay in sync. The Docker daemon takes help from a file change notifier and makes the necessary changes in the container's resolv file when changes are made to the host's file! The only catch is that these changes are applied only if the container is not running. So to pick up the changes you need to stop and start the container again. All stopped containers are updated immediately after the host's file changes.
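A quick way to observe this sync behavior, reusing the nginx1 container from the earlier example (the nameserver IP is just an illustration):

# On the docker host, add a nameserver entry to the host's file
root@kerneltalks # echo "nameserver 8.8.8.8" >> /etc/resolv.conf

# A running container picks up the change only after a stop/start
root@kerneltalks # docker restart nginx1
root@kerneltalks # docker exec -it nginx1 cat /etc/resolv.conf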

How to use external DNS in container while starting it

If you want to use an external DNS in a container, other than docker's native one or what's in the host's resolv.conf file, then you need to use the --dns switch with the docker container run command.

$ docker container run -d --dns 10.2.12.2 --name nginx5 nginx
fbe29f22bd5f78213163532f2529c5cd98bc04573a626d0e864e670f96c5dc7a

$ docker exec -it nginx5 cat /etc/resolv.conf
search 51ur3jppi0eupdptvsj42kdvgc.bx.internal.cloudapp.net
nameserver 10.2.12.2
options ndots:0

In the above example, we chose to have nameserver 10.2.12.2 in the container we ran. And you can see /etc/resolv.conf inside the container has this new nameserver saved in it. Make a note that whenever you use the --dns switch, it wipes out all existing nameserver entries within the container and keeps only the ones you supply.

This is the way to go if you want to use a custom DNS in a single container. But what if you want to use a custom DNS for all containers which will run on your docker host? Then you need to define it in the config file. We are going to see this in the next point.

How to use external DNS in all the containers on docker host

You need to define the external DNS IPs in the docker daemon configuration file /etc/docker/daemon.json as below –

{
    "dns": ["10.2.12.2", "3.4.5.6"]
}

Once the changes are saved in the file, you need to restart the docker daemon to pick them up.

root@kerneltalks # systemctl restart docker

and it's done! Now any container you run fresh on your docker host will have these two DNS nameservers in it by default.

$ docker container run -d --name nginx7 nginx
200d024ac8930c5bfe59fdbc90a1d4d0e8cd6d865f82096c985e23f1e022d548

$ docker exec -it nginx7 cat /etc/resolv.conf
search 51ur3jppi0eupdptvsj42kdvgc.bx.internal.cloudapp.net
options ndots:0

nameserver 10.2.12.2
nameserver 3.4.5.6

If you have any queries/feedback/corrections, let us know in the comment box below.

Docker swarm cheat sheet

Docker swarm cheat sheet. A list of all commands to create, run, and manage a container cluster environment with Docker Swarm!

Docker swarm cheat-sheet

Docker swarm is a cluster environment for Docker containers. A swarm is created with a number of machines running docker daemons. Collectively they are managed by one master node to run a clustered environment for containers!

In this article, we are listing all the currently available docker swarm commands in a very short overview. This is a cheat sheet you can glance through to brush up your swarm knowledge, or a quick reference for any swarm management command. We are covering the most used or useful switches with the below commands. There are more switches available for each command, and you can get them with --help.

Read all docker or containerization related articles here from KernelTalk’s archives.

Docker swarm commands for swarm management

This set of commands is used mainly to start and manage the swarm cluster as a whole. For node management within the cluster, we have a different set of commands following this section.

  • docker swarm init : Initiate swarm cluster
    • --advertise-addr : Advertised address on which swarm lives
    • --autolock : Locks manager and displays a key which will be needed to unlock a stopped manager
    • --force-new-cluster : Create a new cluster from backup and don't attempt to connect to old known nodes
  • docker swarm join-token : Lists join security token to join another node in swarm as worker or manager
    • --quiet : Only display token. By default, it displays the complete command to be used along with the token.
    • --rotate : Rotate (change) token for security reasons.
  • docker swarm join : Join already running swarm as a worker or manager
    • --token : Security token to join the swarm
    • --availability : Mark node's status as active/drain/pause after joining
  • docker swarm leave : Leave swarm. To be run from the node itself
    • -f : Leave forcefully ignoring all warnings.
  • docker swarm unlock : Unlocks swarm by providing key after manager restarts
  • docker swarm unlock-key : Display swarm unlock key
    • -q : Only display token.
    • --rotate : Rotate (change) token for security reasons.
  • docker swarm update : Updates swarm configurations
    • --autolock : true/false. Turns locking on or off if not done while initiating.
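As a quick usage sketch of the commands above (the IP address is illustrative):

# On the manager node: initiate the swarm
docker swarm init --advertise-addr 192.168.1.10

# Print the join command (with token) for adding a worker
docker swarm join-token worker

# On each worker node: join using the printed token
docker swarm join --token <token> 192.168.1.10:2377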

Docker swarm node commands for swarm node management

Node is a server participating in Docker swarm. A node can either be a worker or manager in the swarm. The manager node has the ability to manage swarm nodes and services along with serving workloads. Worker nodes can only serve workloads.

  • docker node ls : Lists nodes in the swarm
    • -q : Only display node IDs
    • --format : Format output using a Go template
    • --filter : Apply filters to output
  • docker node ps : Display tasks running on nodes
    • All the above switches apply here too.
  • docker node promote : Promote node to a manager role
  • docker node demote : Demote node from manager to worker role
  • docker node rm : Remove node from the swarm. Run from the manager node.
    • -f : Force remove
  • docker node inspect : Detailed information about the node
    • --format : Format output using a Go template
    • --pretty : Print in a human-readable friendly format
  • docker node update : Update node configs
    • --role : worker/manager. Update node role
    • --availability : active/pause/drain. Set node state.

Docker swarm service commands for swarm service management

Docker service is used to create and spawn workloads to swarm nodes.

  • docker service create : Start new service in Docker swarm
    • Switches of the docker container run command like -i (interactive), -t (pseudo terminal), -d (detached), -p (publish port) etc. are supported here.
  • docker service ls : List services
    • The --filter, --format and -q (quiet) switches which we saw above are supported with this command.
  • docker service ps : Lists tasks of services
    • The --filter, --format and -q (quiet) switches which we saw above are supported with this command.
  • docker service logs : Display logs of service or tasks
  • docker service rm : Remove service
    • -f : Force remove
  • docker service update : Update service config
    • Most of the parameters defined in the service create command can be updated here.
  • docker service rollback : Revert changes done to a service config.
  • docker service scale : Scale one or more replicated services.
    • servicename=number format
  • docker service inspect : Detailed information about service.
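And a short service workflow tying these commands together (the service name and image are illustrative):

# Create a service with 2 replicas of the nginx image, publishing port 80
docker service create --name web --replicas 2 -p 80:80 nginx

# Check where its tasks are running, then scale it out
docker service ps web
docker service scale web=5

# Roll back the last config change, and finally remove the service
docker service rollback web
docker service rm web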

Beginners guide to Docker Image

Learn the basics of the Docker image. A beginner's guide explaining what a Docker image is, the different types of Docker images, and how to pull/push a Docker image.

Learn basics of Docker Image

In this article, we will be discussing the Docker image mainly. We will touch on all the below points:

  1. What is the Docker image?
  2. Difference between Docker image and Docker container
  3. What is the official Docker image
  4. Public and private Docker image
  5. How to download/upload (pull/push) a Docker image from/to Docker hub

What is Docker image?

The official definition of a Docker image is: "Docker images are the basis of containers. An Image is an ordered collection of root filesystem changes and the corresponding execution parameters for use within a container runtime. An image typically contains a union of layered filesystems stacked on top of each other. An image does not have state and it never changes."

Read all docker or containerization related articles here from KernelTalk’s archives.

Let's break down this definition for our understanding. The first sentence means containers are launched using images. The second one tells us an image has the binaries, dependencies, and parameters to run a container (i.e. an application). The third denotes that image changes are layered and never alter the actual image file, which means the image is a read-only file.

In virtualization-world terms, an image can be visualized as a template from which you can launch as many containers as you want. In short, an image is an application binary along with its dependencies and metadata which tells how this image should be run in a container.

Images are named with the syntax software_name:version_tag. If the tag is not specified, then latest is considered the default tag.
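For example (the tags shown are illustrative):

nginx:1.15   # pulls the image tagged 1.15
nginx        # no tag supplied, treated as nginx:latest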

Difference between Docker image and Docker container

Let me summarize the difference between a Docker image and a container in tabular format for easy understanding:

Docker Image | Docker container
An ordered collection of root filesystem changes and the corresponding execution parameters for use within a container runtime | An active (or inactive, if exited) stateful instantiation of an image
Can be pulled from Docker hub | Runs locally on your host OS
It's read-only | It's dynamic
It's a template from which containers launch | It's a running/stopped instance of an image
It's a file | It's a process
Management command: # docker image <..> | Management command: # docker container <..>

What is official Docker image?

Docker hub is a repository, like Git, for Docker images. Any registered user can upload images to it. But for popular software like Nginx, alpine, Redis, and Mongo, Docker has a team which creates, tests, and verifies images with standard settings and keeps them for public use on Docker hub. An official docker image always has only the software name as its image name. Any image other than an official one is named with the syntax username/software_name, where username is the user who uploaded that image to Docker hub. Also, an official image has "official" written below it in listings, as shown in the below screenshot.

Identify official docker image

You can see that, apart from the first (official) image, all other alpine images have a username on the left. Only the official image has just the software name in its name.

Public and private Docker image

You can upload your image to a private or public repo on Docker hub. Public images are available to download for free for everyone. Private images have access restrictions defined by the uploader. While using a public image, you should be very careful in choosing which image to pull.

Docker hub has a star grading system for images. Users can star images, and the total number of stars earned by an image is shown in the right column of the listing (it can be seen in the above screenshot). Alongside stars, you can also see the number of pulls an image has witnessed. An image with many stars and/or pulls is considered a good image.

How to pull or download image from Docker hub?

You don't need a Docker account to pull an image from the Docker hub. You only need an active internet connection on the server, and you can use docker image pull to download an image from the Docker hub.

root@kerneltalks # docker image pull alpine
Using default tag: latest
latest: Pulling from library/alpine
Digest: sha256:e1871801d30885a610511c867de0d6baca7ed4e6a2573d506bbec7fd3b03873f
Status: Image is up to date for alpine:latest

Since my local image cache already had the alpine image downloaded, the pull command verified its SHA. It found that the cached image is the same as the one available on Docker hub, so it didn't download the image! This saved both bandwidth and storage.

Now, I will pull the alpine image which has java available in it. Also, it’s not in my local image cache.

root@kerneltalks # docker image pull  anapsix/alpine-java
Using default tag: latest
latest: Pulling from anapsix/alpine-java
ff3a5c916c92: Already exists
b2573fe715ab: Pull complete
Digest: sha256:f4271069aa69eeb4cfed50b6a61fb7e4060297511098a3605232dbfe8f85de74
Status: Downloaded newer image for anapsix/alpine-java:latest

If you observe the output above, the command pulled only one layer with ID b2573fe715ab, whereas for the other one it says "Already exists"! So here comes the logic of the union layered filesystem.

As explained in the definition, an image consists of a layered union filesystem. Since this image is alpine with Java, it has the base of the alpine image (which I already have in my image cache). Hence only the extra Java layer of this image was downloaded by the pull command. Then, together, it stored the alpine Java image in my local cache!

You can even check the details of this image on Docker hub, and it indeed mentions that it's built on top of the base alpine image, which explains the pull behavior and supports the explanation above.
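If you want to see the layering yourself, the docker image history command lists the layers of an image; the shared base layers show up in both outputs:

root@kerneltalks # docker image history alpine
root@kerneltalks # docker image history anapsix/alpine-java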

How to push or upload image to Docker hub?

To upload an image to Docker hub, you will need a Docker account. It's free to create. Just head to https://hub.docker.com/ and sign up. Once logged in, you can create your private repo, in which uploaded images will be private. Otherwise, uploaded images will be public.

You need to log in to your Docker account, from the server where your locally built images are stored, using the docker login command.

root@kerneltalks # docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: shrikantlavhate
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Now you can push your image to a public repo or a private repo. I created my private repo with the name kerneltalks, so I am pushing the image to my private repo on Docker hub. But before pushing an image to Docker hub, you need to tag it properly. To upload an image to a private repo, you need to tag it like username/reponame.

root@kerneltalks # docker image tag 3fd9065eaf02 shrikantlavhate/kerneltalks
root@kerneltalks # docker image push shrikantlavhate/kerneltalks
The push refers to repository [docker.io/shrikantlavhate/kerneltalks]
cd7100a72410: Pushed
latest: digest: sha256:8c03bb07a531c53ad7d0f6e7041b64d81f99c6e493cb39abba56d956b40eacbc size: 528

Once the image is tagged properly, it can be pushed to Docker hub. Using the tag, Docker hub determines whether it's a public or private image and uploads it accordingly.

Be noted that your login information is kept in a local profile for a seamless experience. Don't forget to log out from the Docker account if you are on a shared machine.

root@kerneltalks # cat $HOME/.docker/config.json
{
        "auths": {
                "https://index.docker.io/v1/": {
                        "auth": "c2hyaWthXXXXXXXXXXXkRkdjflcXXXXg1"
                }
        },
        "HttpHeaders": {
                "User-Agent": "Docker-Client/18.05.0-ce (linux)"
        }
}

You can see the config.json file under the .docker directory saved the login details. You can log out to wipe this info from the machine using docker logout.

root@kerneltalks # docker logout
Removing login credentials for https://index.docker.io/v1/
root@kerneltalks # cat $HOME/.docker/config.json
{
        "auths": {},
        "HttpHeaders": {
                "User-Agent": "Docker-Client/18.05.0-ce (linux)"
        }
}
Once we log out of the Docker account, the login details are wiped from the file!

MobaXterm X11 proxy: Authorisation not recognised

Learn how to resolve the "Authorisation not recognised" error while using xterm in Linux.

xclock error

Error :

Sometimes your users complain they can't use GUI applications via the X server from a Linux box (in this case, via MobaXterm). They receive an error saying their display authorisation is not recognised, like below –

appuser@kerneltalks $ xclock
MobaXterm X11 proxy: Authorisation not recognised
Error: Can't open display: localhost:10.0

Sometimes these errors show up when you switch users from the root account or any other account.

Quick Solution:

Log in directly with the user with which you want to use xclock.

appuser needs to log in directly on the server, and you won't see this issue. Most of the time it arises once you su to appuser from root or a different user.

Read further if you have to switch user and then use x-term.

appuser needs to add its entry to the authorization list. This entry will be the last entry in the .Xauthority file in the home directory of the previous user with which you logged in to the server in the first place. Let's say it's root in our case, i.e. we logged in as root and then su to appuser.

root@kerneltalks # xauth -f .Xauthority list |tail -1
kerneltalks/unix:10 MIT-MAGIC-COOKIE-1 df22dfc7df88b60f0653198cc85f543c

appuser@kerneltalks $ xauth add kerneltalks/unix:10 MIT-MAGIC-COOKIE-1 df22dfc7df88b60f0653198cc85f543c

So here we got the values from the file in root's home directory and then added them using xauth as the currently switched user, i.e. appuser.

and you are good to go!

Bit of an explanation :

This error occurs since your ID doesn't have the authorization to connect to the X server. Let's walk through how to resolve this error. List the authorization entries for displays using xauth list.

appuser@kerneltalks $ xauth list
kerneltalks/unix:12  MIT-MAGIC-COOKIE-1  60c402df81f68e721qwe531d1c99c1eb
kerneltalks/unix:11  MIT-MAGIC-COOKIE-1  ad81da801d778fqwe6aea383635be27d
kerneltalks/unix:10  MIT-MAGIC-COOKIE-1  0bd591485031d0ae670475g46db1b8b9

The output shows entries column-wise –

  1. Display name
  2. Protocol name (MIT-MAGIC-COOKIE-1, which can be abbreviated as a single period)
  3. Hex key

If you have many sessions, you are on a test/dev environment, and you are the only one using your system, you can remove all the above entries using xauth remove to make sure you have a clean slate and get only your session cookie. Or you can save this output for reference, log in again, try xclock, and a new entry will be generated; compare the latest output with the older one to filter out your new entry. Or, as mentioned above in the quick solution, it will be the last entry in the .Xauthority file in the home directory of appuser. You cannot read the .Xauthority file like a text file, so you have to use the xauth -f command to view its content.

Log out from all sessions. Log in again with the app user and run xclock once. This will generate a new session cookie token, which you can see in xauth list.

appuser@kerneltalks $ xauth list
kerneltalks/unix:10  MIT-MAGIC-COOKIE-1  df22dfc7df88b60f0653198cc85f543c

Now, grab this entry and add authorization using the below command –

appuser@kerneltalks $ xauth add kerneltalks/unix:10  MIT-MAGIC-COOKIE-1  df22dfc7df88b60f0653198cc85f543c

And that's it. Your xclock should work now!


Error :

You are seeing the below error in MobaXterm –

X11-forwarding  : ✘  (disabled or not supported by server)

Solution :

The best way to make sure you have all the X11 stuff installed is to install the xclock package. Additionally, you need to install the xauth package as well.

Secondly, make sure you have X11Forwarding yes set in your /etc/ssh/sshd_config. If not, set it and restart the sshd daemon.
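A quick check could look like this (assuming a systemd-based distro):

# Verify the setting; it should read 'X11Forwarding yes'
root@kerneltalks # grep -i '^X11Forwarding' /etc/ssh/sshd_config

# After editing sshd_config, restart the daemon to apply the change
root@kerneltalks # systemctl restart sshd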

That's all! Try re-logging in to the server and it should work. You should see the below message after login using MobaXterm.

 X11-forwarding  : ✔  (remote display is forwarded through SSH)