Kubernetes installation and configuration

A step-by-step guide to Kubernetes installation and configuration, along with sample outputs.

Kubernetes installation guide

Prerequisites

  • Your machine must not have swap configured. If swap is enabled, turn it off using swapoff -a (see the snippet after the ports table below).
  • You will need Docker installed on your machine.
  • You will need to set SELinux to permissive mode to allow kubelet network communication. Alternatively, you can write an SELinux policy for Kubernetes and keep SELinux in enforcing mode.
  • Your machine should have at least 2 CPUs.
  • The Kubernetes ports listed below should be open between the master and the nodes for cluster communication. All are TCP ports and must be open for inbound traffic.

Port(s)        Description
10250          Kubelet API (for master and nodes)
10251          kube-scheduler
10252          kube-controller-manager
6443*          Kubernetes API server
2379-2380      etcd server client API
30000-32767    NodePort Services (only for nodes)
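
For reference, here is a minimal preparation sketch, assuming a YUM-based machine with firewalld (adapt the port list per the table above; worker nodes need 10250/tcp and 30000-32767/tcp instead). The sed edits make the swap and SELinux changes persist across reboots.

root@kerneltalks # swapoff -a
root@kerneltalks # sed -i '/\sswap\s/ s/^/#/' /etc/fstab
root@kerneltalks # setenforce 0
root@kerneltalks # sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
root@kerneltalks # firewall-cmd --permanent --add-port=6443/tcp --add-port=2379-2380/tcp --add-port=10250-10252/tcp
root@kerneltalks # firewall-cmd --reload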

Installation of the Kubernetes master node (Kubemaster)

The first step is to install the three pillar packages of Kubernetes:

  • kubeadm – bootstraps the Kubernetes cluster
  • kubectl – the command-line interface for managing the cluster
  • kubelet – the agent that runs on every node and manages it by carrying out cluster tasks

To download these packages you first need to configure a repository. Below are the repo file contents for the respective distributions.

For RedHat, CentOS or Fedora (YUM based) -

root@kerneltalks # cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
root@kerneltalks # yum install -y kubectl kubeadm kubelet

For Ubuntu, SUSE or Debian (APT based) -

sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl kubeadm kubelet

Once you have configured the repo, install the kubeadm, kubectl, and kubelet packages using your distribution's package manager.

Enable and start the kubelet service

root@kerneltalks # systemctl enable kubelet.service
root@kerneltalks # systemctl start kubelet

Configuration of Kubernetes master node Kubemaster

Now you need to make sure Docker and Kubernetes use the same cgroup driver. By default it's cgroupfs for both, so if you haven't changed it for Docker then you don't have to do anything for Kubernetes either. But if Docker uses a different cgroup driver, you need to specify it for Kubernetes in the file below -

root@kerneltalks # cat /etc/default/kubelet
KUBELET_KUBEADM_EXTRA_ARGS=--cgroup-driver=<value>
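
If you are not sure which cgroup driver Docker is currently using, a quick check (not an official setup step) is:

root@kerneltalks # docker info | grep -i cgroup
Cgroup Driver: cgroupfs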

This file is picked up by kubeadm at startup. If Kubernetes is already running, you need to reload this configuration using -

root@kerneltalks # systemctl daemon-reload
root@kerneltalks # systemctl restart kubelet

Now you are ready to bring up the Kubernetes master and then add worker nodes, or minions, to it as slaves in the cluster.

You have installed the packages and adjusted the settings needed to bring up the Kubemaster. You can start it with the command kubeadm init, but on the first run you need to supply the network CIDR.

  • --pod-network-cidr= : CIDR for the pod network
  • --apiserver-advertise-address= : optional; use it when multiple IP addresses/subnets are assigned to the machine

Refer to the output below from starting up the Kubernetes master node. There are a few warnings, which can be corrected with basic sysadmin tasks.

# kubeadm init --apiserver-advertise-address=172.31.81.44 --pod-network-cidr=192.168.1.0/16
[init] using Kubernetes version: v1.11.3
[preflight] running pre-flight checks
I0912 07:57:56.501790    2443 kernel_validator.go:81] Validating kernel version
I0912 07:57:56.501875    2443 kernel_validator.go:96] Validating kernel config
        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.05.0-ce. Max validated version: 17.03
        [WARNING Hostname]: hostname "kerneltalks" could not be reached
        [WARNING Hostname]: hostname "kerneltalks" lookup kerneltalks1 on 172.31.0.2:53: no such host
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kerneltalks1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.81.44]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [kerneltalks1 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [kerneltalks1 localhost] and IPs [172.31.81.44 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 46.002127 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node kerneltalks1 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node kerneltalks1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kerneltalks1" as an annotation
[bootstraptoken] using token: 8lqimn.2u78dcs5rcb1mggf
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.31.81.44:6443 --token 8lqimn.2u78dcs5rcb1mggf --discovery-token-ca-cert-hash sha256:de6bfdec100bb979d26ffc177de0e924b6c2fbb71085aa065fd0a0854e1bf360

In the above output there are two key things you get (recapped after this list):

  • the commands that enable a regular user to administer the Kubemaster
  • the command to run on each slave node to join the Kubernetes cluster
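
For example, here is the first of those, taken as-is from the output above; run it as a regular user on the master, then deploy the pod network add-on of your choice (the manifest name is left as a placeholder, since the choice of add-on is yours):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f [podnetwork].yaml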

That's it. You have successfully started the Kubemaster node and brought up your Kubernetes cluster. The next task is to install and configure the slave nodes in this cluster.

Installation of Kubernetes slave node or minion

The installation process remains the same as on the master. Follow the steps above for disabling swap, installing Docker, and installing the three Kubernetes packages.

Configuration of Kubernetes slave node or minion

There is not much to do on this node. You already have the command for joining the cluster, which was printed by the kubeadm init command.

Let's see how to join a node to the Kubernetes cluster using the kubeadm command -

[root@minion ~]# kubeadm join 172.31.81.44:6443 --token 8lqimn.2u78dcs5rcb1mggf --discovery-token-ca-cert-hash sha256:de6bfdec100bb979d26ffc177de0e924b6c2fbb71085aa065fd0a0854e1bf360
[preflight] running pre-flight checks
I0912 08:19:56.440122    1555 kernel_validator.go:81] Validating kernel version
I0912 08:19:56.440213    1555 kernel_validator.go:96] Validating kernel config
[discovery] Trying to connect to API Server "172.31.81.44:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.31.81.44:6443"
[discovery] Failed to request cluster info, will try again: [Get https://172.31.81.44:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: net/http: TLS handshake timeout]
[discovery] Requesting info from "https://172.31.81.44:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.31.81.44:6443"
[discovery] Successfully established connection with API Server "172.31.81.44:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "minion" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

And here you go. The node has joined the cluster successfully. With that, you have completed the Kubernetes cluster installation and configuration!
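
One thing to note: the bootstrap token printed by kubeadm init is valid for a limited time (24 hours by default). If it expires before you join a node, you can print a fresh join command on the master; this is standard kubeadm usage, not part of the output above:

root@kerneltalks # kubeadm token create --print-join-command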

Check the node status from the Kubemaster.

[root@kerneltalks ~]# kubectl get nodes
NAME           STATUS     ROLES     AGE       VERSION
kerneltalks1   Ready      master    2h        v1.11.3
minion         Ready      <none>    1h        v1.11.3

Once you see all statuses as Ready, you have a steady cluster up and running.
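
As an extra sanity check, you can also confirm that the control plane pods and add-ons (CoreDNS, kube-proxy, and your pod network) are all in Running state:

[root@kerneltalks ~]# kubectl get pods -n kube-system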
