A quick tutorial to configure a proxy for YUM in RHEL, CentOS, or Fedora Linux.
1. Enable proxy for YUM in the configuration file (global)
If your server is connected to the internet via a proxy server, you can define it in the configuration file located at /etc/yum.conf. To define it, you should have the below details ready –
Proxy server IP or hostname
Port to be used for proxy connectivity
Username and password for proxy authentication, if the proxy requires it
Now, edit /etc/yum.conf using any text editor like vi and add the proxy parameters. For this article we use the below details:
Proxy server name: kerneltalksproxy.com
Proxy port: 3487
Username for proxy authentication: shrikant
Password for proxy authentication: funWif#92cE
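Putting those details together, the proxy entries in /etc/yum.conf would look like the below (these are the sample values from this article; substitute your own):

```
proxy=http://kerneltalksproxy.com:3487
proxy_username=shrikant
proxy_password=funWif#92cE
```

If your proxy does not require authentication, the proxy= line alone is enough.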
Save and exit. Verify internet connectivity through the proxy by listing repositories with yum repolist.
With this method, you are configuring the proxy within the YUM configuration itself, so it applies to all users on the system whenever they run yum commands.
2. Using profile proxy (User specific)
If you don’t want a global proxy set through the /etc/yum.conf file, you can opt to define the proxy at the user level in the user’s individual profile file. Add the below configuration to the user’s .profile (the file name differs depending on which login shell you are using) so that it is loaded every time the user logs in to the system.
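A minimal sketch of those profile entries, using the sample proxy host, port, and credentials from this article (replace them with your own):

```shell
# Proxy settings for the user's profile file.
# Host, port, and credentials are the sample values from this article --
# replace them with your own.
export http_proxy="http://shrikant:funWif#92cE@kerneltalksproxy.com:3487"
export https_proxy="$http_proxy"
```

The username:password@ part can be dropped if your proxy does not require authentication.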
So, this proxy setting will be available to all applications that honor the system proxy variables (like curl and yum) under that user’s login. Ask the user to log in and verify the proxy by refreshing the yum repo.
A Short article to learn how to enter a single-user mode in SUSE 12 Linux server.
In this short article, we will walk you through steps that demonstrate how to enter single-user mode in SUSE 12 Linux. Single-user mode is preferred when you are troubleshooting major issues with your system. Since single-user mode disables networking and no other users are logged in, you rule out many variables of a multi-user system, which helps you troubleshoot faster. One of the most popular uses of single-user mode is resetting a forgotten root password.
1. Halt boot process
First of all, you need a console for your machine to get into single-user mode: a VM console if it's a VM, or a connected iLO/serial console if it's a physical machine. Reboot the system and halt the automatic booting of the kernel at the GRUB boot menu by pressing any key.
2. Edit boot option of kernel
Once you are on the above screen, press e on the selected kernel (normally your preferred latest kernel) to update its boot options. You will see the below screen.
Now, scroll down to your booting kernel line and add init=/bin/bash at the end of the line as shown below.
3. Boot kernel with edited entry
Now press Ctrl-x or F10 to boot this edited kernel. The kernel will boot in single-user mode and you will be presented with a hash (#) prompt, i.e. root access to the server. At this point, your root filesystem is mounted read-only, so any changes you make to the system won't be saved.
Run the below command to remount the root filesystem as read-write.
kerneltalks:/ # mount -o remount,rw /
And you are good to go! Go ahead and do your necessary actions in single-user mode. Don’t forget to reboot the server or type exit to boot into normal multiuser mode once you are done.
Complete installation guide to install & configure checkmk server on Linux. Also steps to add Linux client to checkmk monitoring using checkmk monitoring instance console.
checkmk is a free, open-source IT infrastructure monitoring tool. It builds on Nagios plugins, enhancing their capabilities and performance. In this article, we will walk you through a step-by-step procedure to set up a checkmk monitoring server and add a client to its monitoring.
Check_mk is re-branded as checkmk
The website has also moved from mathias-kettner.com to checkmk.com. A few pointers in the article need to be read with the new URL in mind, although I have made the necessary changes.
Now, install the package along with all of its dependencies. Use your package manager, like yum, zypper, or apt, to install the package so that it resolves and installs the dependencies automatically too.
2. Allow http protocol and port in firewall
Since the checkmk portal runs on HTTP protocol with default port 80, you need to allow them in the firewall.
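On a RHEL/CentOS system running firewalld, for example, the HTTP service can be opened with the below commands (this assumes firewalld is your firewall tooling; adjust for your distribution):

```
[root@kerneltalks1 ~]# firewall-cmd --permanent --add-service=http
success
[root@kerneltalks1 ~]# firewall-cmd --reload
success
```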
If your machine has SELinux activated, you need to allow it in SELinux. If you have a local firewall i.e. iptables enabled, you need to allow it in iptables as well.
After RPM installation, check if omd command is running properly.
[root@kerneltalks1 ~]# omd version
OMD - Open Monitoring Distribution Version 1.5.0p7.cre
Now, proceed with creating a monitoring instance and then starting it. Create a monitoring instance with omd create command.
[root@kerneltalks1 ~]# omd create kerneltalks_test
Adding /opt/omd/sites/kerneltalks_test/tmp to /etc/fstab.
Creating temporary filesystem /omd/sites/kerneltalks_test/tmp...OK
Restarting Apache...OK
Created new site kerneltalks_test with version 1.5.0p7.cre.
The site can be started with omd start kerneltalks_test.
The default web UI is available at http://kerneltalks1/kerneltalks_test/
The admin user for the web applications is cmkadmin with password: Pz4IM7J7
(It can be changed with 'htpasswd -m ~/etc/htpasswd cmkadmin' as site user.)
Please do a su - kerneltalks_test for administration of this site.
Our monitoring server instance is ready. You can gather details like the URL, login credentials, the command to change the password, etc. from the command output.
Now if you try to log in at the mentioned URL, you will see an OMD: Site Not Started error.
So, to use this server instance, you need to start it using the omd start command.
[root@kerneltalks1 ~]# omd start kerneltalks_test
OK
Starting mkeventd...OK
Starting rrdcached...OK
Starting npcd...OK
Starting nagios...2018-11-14 04:09:41 [6] updating log file index
2018-11-14 04:09:41 [6] updating log file index
OK
Starting apache...OK
Initializing Crontab...OK
Now you are good to go! You can go back to the URL and login to your monitoring server console!
You can see all counters are at zero, since this is the fresh monitoring server instance we just created. Let’s add one Linux host into this monitoring instance to monitor.
How to install check_mk agent on Linux client
In this part, we will install the check_mk agent on the Linux client and add that client into monitoring. The below two prerequisites should be completed before agent installation.
The check_mk client works with the xinetd service on the machine. Install and start the xinetd service before you attempt the agent install.
Port 6556/TCP should be open between the check_mk server and client for communication.
check_mk client package is available on check_mk server at path http://<servername>/<instance_name>/check_mk/agents/. In our case it will be at http://kerneltalks1/kerneltalks_test/check_mk/agents/
You can find almost all platform agents here. Let’s download the agent on our Linux client using the command line and install it.
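On an RPM-based client, for example, the agent can be fetched and installed like the below (the agent file name varies with your checkmk version; the one shown here is illustrative):

```
[root@client ~]# wget http://kerneltalks1/kerneltalks_test/check_mk/agents/check-mk-agent-1.5.0p7-1.noarch.rpm
[root@client ~]# rpm -ivh check-mk-agent-1.5.0p7-1.noarch.rpm
```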
After agent installation, you need to go back to check_mk console to add this new host into monitoring.
Add new client in check_mk monitoring
Login to console and navigate to WATO configurations > Hosts > Create new host
Fill in details like hostname, IP address, agent details in next screen, and hit ‘Save & Goto services‘. You will be presented with the below screen in which check_mk discovers services on the client.
Click on the red button with a number of changes written on it. Activate changes and you are done!
Once the changes are activated, you can see one host added into monitoring. This completes the end-to-end walkthrough tutorial to install the check_mk monitoring server and add a Linux client to it.
That’s it. Reboot and it will disable IPv6 on your system.
Another method is to disable it using /etc/sysctl.d/ipv6.conf file.
Add below entry in file :
# To disable for all interfaces
net.ipv6.conf.all.disable_ipv6 = 1
#Disable default
net.ipv6.conf.default.disable_ipv6 = 1
#Disable on loopback
net.ipv6.conf.lo.disable_ipv6 = 1
If you have GUI access to the server, you can do it under network settings. Navigate to Applications > System Tools > YaST > Network Settings. Go to the Global Options tab and uncheck Enable IPv6.
You will need to reboot the server for this change to take effect.
Disable IPv6 in Ubuntu Linux
The above process for SUSE Linux applies to Ubuntu as well. You need to edit /etc/sysctl.conf and add the above lines. Reload the file with sysctl -p and you are done.
To verify if IPv6 is disabled on server use below command –
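A quick check reads the kernel's IPv6 sysctl (a value of 1 means IPv6 is disabled, 0 means enabled):

```shell
# Print the current IPv6 state; 1 means disabled, 0 means enabled.
# Falls back to "unknown" on systems without the IPv6 sysctl tree.
ipv6_state=$(sysctl -n net.ipv6.conf.all.disable_ipv6 2>/dev/null || echo unknown)
echo "net.ipv6.conf.all.disable_ipv6 = $ipv6_state"
```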
Short post to learn how to install and uninstall Sophos Antivirus in Linux.
Sophos is a well-known antivirus for the Windows, Linux, and Mac platforms. Sophos also offers different security solutions along with antivirus. In this post, we walk through installing, checking, and removing Sophos Antivirus on Linux systems. You can download Sophos Antivirus for Linux for free here.
How to install Sophos Antivirus in Linux
You can transfer the installer downloaded on a laptop or desktop to your Linux server, or you can use tools like wget to download the installer directly on your Linux server. You can get the Linux installer link from your account on the Sophos website.
The download contains the Sophos Antivirus installer with an install.sh script within. For a non-interactive setup, execute the script with the below switches and you are good to go –
root@kerneltalks # ./install.sh --automatic --acceptlicence /opt/sophos-av
Installing Sophos Anti-Virus....
Selecting appropriate kernel support...
Installation completed.
Your computer is now protected by Sophos Anti-Virus.
Antivirus is successfully installed on your server.
Check current status of Sophos Antivirus
The antivirus runs as a service named sav-protect, so you can use the normal Linux service status command to check the status of the AV service.
root@kerneltalks # service sav-protect status
sav-protect.service - "Sophos Anti-Virus daemon"
Loaded: loaded (/usr/lib/systemd/system/sav-protect.service; enabled)
Active: active (running)[0m since Thu 2018-07-19 13:30:50 IST; 3 months 4 days ago
Docs: man:sav-protect
Process: 5619 ExecStop=/opt/sophos-av/engine/.sav-protect.systemd.stop.sh (code=exited, status=0/SUCCESS)
Process: 6287 ExecStartPost=/opt/sophos-av/engine/.sav-protect.systemd.poststart.(code=exited, status=1/FAILURE)
Process: 5646 ExecStartPre=/opt/sophos-av/engine/.sav-protect.systemd.prestart.sh (code=exited, status=0/SUCCESS)
Main PID: 6286 (savd)
CGroup: /system.slice/sav-protect.service
├─5842 savscand --incident=unix://tmp/incident --namedscan=unix://root@tmp/namedscansprocessor.397 --ondemandcontrol=socketpair://46/47
└─6286 savd etc/savd.cfg
Oct 21 17:50:56 kerneltalks savd[6286]: scheduled.scan.log: Scheduled scan "SEC:Weekly scan" completed: master boot records scanned: 0, boot records scanned: 0, files scanned: 968342, scan errors: 0, threats detected: 0, infected files detected: 0
Oct 21 21:38:46 kerneltalks savd[6286]: update.check: Successfully updated Sophos Anti-Virus from \\avserver.kerneltalks.com\SophosUpdate\CIDs\S038\savlinux
You can see the recent two activities as a successful scheduled scan run and virus definition update in the last log lines.
How to uninstall Sophos Antivirus in Linux
Run uninstall.sh script located at /opt/sophos-av to uninstall Sophos Antivirus.
root@kerneltalks # /opt/sophos-av/uninstall.sh
Uninstalling Sophos Anti-Virus.
WARNING: Sophos Anti-Virus still running.
Do you want to stop Sophos Anti-Virus? Yes(Y)/No(N) [N]
> Y
Stopping Sophos Anti-Virus.
Sophos Anti-Virus has been uninstalled.
And the AV is uninstalled. You can confirm by checking the status again, which will result in an error.
root@kerneltalks # service sav-protect status
service: no such service sav-protect
Small tutorial about how to install Ansible in Linux and run ansible command on the remote clients from the control server.
What is Ansible ?
Ansible is an open-source configuration management tool developed by Red Hat. You can get enterprise support for it through Red Hat subscriptions. Ansible is written in Python, Ruby, and PowerShell. It uses SSH in the background to communicate with clients and execute tasks. Ansible's best feature is that it is agentless, so there is no agent load on clients, and configurations can be pushed from the server at any time.
Ansible installation
The first prerequisite of Ansible is that the primary or control server should have a passwordless SSH connection configured, for the Ansible user, to all its client servers. You can configure passwordless SSH in two steps using ssh-keygen and ssh-copy-id.
For our understanding, we have one control server, kerneltalks1, and one client, kerneltalks2, and we have configured passwordless SSH for user shrikant (whom we treat as the Ansible user here).
Let's install Ansible on the control server, i.e. kerneltalks1.
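The passwordless SSH setup mentioned above boils down to two commands, run as the Ansible user on the control server (accept the defaults at ssh-keygen's prompts, then enter the client password once for ssh-copy-id):

```
[shrikant@kerneltalks1 ~]$ ssh-keygen
[shrikant@kerneltalks1 ~]$ ssh-copy-id shrikant@kerneltalks2
```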
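Ansible ships in the distribution repositories (via EPEL on RHEL/CentOS), so a plain yum install ansible or apt install ansible works. After installing, list your clients in the default inventory file /etc/ansible/hosts; group names go in square brackets. A minimal example, where the group name clients is arbitrary and the IP is kerneltalks2's address used in this article:

```
# kerneltalks2
[clients]
172.31.81.83
```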
The IP mentioned in the inventory is kerneltalks2's, and you can group servers using names in square brackets. And you are good to go. Run the ansible command with the ping module (-m switch). Many modules come built in with Ansible, which you can use rather than their equivalent shell commands.
You can see the output is a success on the mentioned IP. So we installed and ran the first successful command using ansible!
Common errors
1. If you try to run an ansible command against a server group that does not exist in the hosts file, you will see the below error –
[shrikant@kerneltalks1 ~]$ ansible -m ping testserver
[WARNING]: Could not match supplied host pattern, ignoring: testserver
[WARNING]: No hosts matched, nothing to do
You need to check the /etc/ansible/hosts file (or the hosts file referred to by your Ansible installation) and make sure the server group mentioned in the command exists in it.
2. If you have not configured passwordless SSH from the control server to the client, or if the client is not reachable over the network, you will see the below error.
[root@kerneltalks1 ansible]# ansible -m ping all
kerneltalks2 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: Warning: Permanently added 'kerneltalks2,172.31.81.83' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey,gssapi-keyex,gssapi-with-mic,password).\r\n",
"unreachable": true
}
You need to check the connectivity and passwordless ssh access from the control server.
Step by step guide for Kubernetes installation and configuration along with sample outputs.
Pre-requisite
The basic requirement to run Kubernetes is that your machine should not have swap configured; if it is configured, you need to turn it off using swapoff -a.
You will need to set SELinux to permissive mode to enable kubelet network communication. You can later craft an SELinux policy for Kubernetes and re-enable enforcing mode.
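For example, SELinux can be switched to permissive mode immediately and persistently with the below commands (a common pre-kubeadm step):

```
# setenforce 0
# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```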
Your machine should have at least 2 CPUs.
Kubernetes ports should be open between the master and nodes for cluster communication. All are TCP ports, to be opened for inbound traffic.
Port(s)       Description
10250         Kubelet API (for master and nodes)
10251         kube-scheduler
10252         kube-controller-manager
6443*         Kubernetes API server
2379-2380     etcd server client API
30000-32767   NodePort Services (only for nodes)
Installation of Kubernetes master node Kubemaster
The first step is to install the three pillar packages of Kubernetes, which are:
kubeadm – It bootstraps Kubernetes cluster
kubectl – CLI for managing cluster
kubelet – Service running on all nodes that helps manage the cluster by performing tasks
For downloading these packages you need to configure repo for the same. Below are repo file contents for respective distributions.
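As an example, the yum repo file for RHEL/CentOS looked like the below at the time of writing (saved as /etc/yum.repos.d/kubernetes.repo; verify the URLs against the current Kubernetes install documentation):

```
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
```

With the repo in place, yum install kubelet kubeadm kubectl pulls all three packages along with their dependencies.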
Configuration of Kubernetes master node Kubemaster
Now you need to make sure both Docker and Kubernetes use the same cgroup driver. By default it's cgroupfs for both. If you haven't changed it for Docker, you don't have to do anything for Kubernetes either. But if you are using a different cgroup driver in Docker, you need to specify it for Kubernetes in the below file –
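For example, if Docker has been switched to the systemd cgroup driver, the kubelet can be told the same through its extra-args file; the path is /etc/sysconfig/kubelet on RHEL/CentOS (or /etc/default/kubelet on Debian/Ubuntu), and the systemd value below is just an example:

```
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd
```

Restart the kubelet service after changing this file so the flag takes effect.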
Now you are ready to bring up Kubernetes master and then add worker nodes or minions to it as a slave for the cluster.
You have installed and adjusted settings to bring up the Kubemaster. You can start the Kubemaster using the command kubeadm init, but you need to provide the pod network CIDR the first time.
--pod-network-cidr= : For pod network
--apiserver-advertise-address= : Optional. To be used when multiple IP addresses/subnets assigned to the machine.
Refer to the below output for starting up the Kubernetes master node. There are a few warnings, which can be corrected with basic sysadmin tasks.
# kubeadm init --apiserver-advertise-address=172.31.81.44 --pod-network-cidr=192.168.1.0/16
[init]
using Kubernetes version: v1.11.3
[preflight]
running pre-flight checks
I0912 07:57:56.501790 2443 kernel_validator.go:81] Validating kernel version
I0912 07:57:56.501875 2443 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.05.0-ce. Max validated version: 17.03
[WARNING Hostname]: hostname "kerneltalks" could not be reached
[WARNING Hostname]: hostname "kerneltalks" lookup kerneltalks1 on 172.31.0.2:53: no such host
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight/images]
Pulling images required for setting up a Kubernetes cluster
[preflight/images]
This might take a minute or two, depending on the speed of your internet connection
[preflight/images]
You can also perform this action in beforehand using ‘kubeadm config images pull’
[kubelet]
Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[kubelet]
Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[preflight]
Activating the kubelet service
[certificates]
Generated ca certificate and key.
[certificates]
Generated apiserver certificate and key.
[certificates]
apiserver serving cert is signed for DNS names [kerneltalks1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.81.44]
[certificates]
Generated apiserver-kubelet-client certificate and key.
[certificates]
Generated sa key and public key.
[certificates]
Generated front-proxy-ca certificate and key.
[certificates]
Generated front-proxy-client certificate and key.
[certificates]
Generated etcd/ca certificate and key.
[certificates]
Generated etcd/server certificate and key.
[certificates]
etcd/server serving cert is signed for DNS names [kerneltalks1 localhost] and IPs [127.0.0.1 ::1]
[certificates]
Generated etcd/peer certificate and key.
[certificates]
etcd/peer serving cert is signed for DNS names [kerneltalks1 localhost] and IPs [172.31.81.44 127.0.0.1 ::1]
[certificates]
Generated etcd/healthcheck-client certificate and key.
[certificates]
Generated apiserver-etcd-client certificate and key.
[certificates]
valid certificates and keys now exist in “/etc/kubernetes/pki”
[kubeconfig]
Wrote KubeConfig file to disk: “/etc/kubernetes/admin.conf”
[kubeconfig]
Wrote KubeConfig file to disk: “/etc/kubernetes/kubelet.conf”
[kubeconfig]
Wrote KubeConfig file to disk: “/etc/kubernetes/controller-manager.conf”
[kubeconfig]
Wrote KubeConfig file to disk: “/etc/kubernetes/scheduler.conf”
[controlplane]
wrote Static Pod manifest for component kube-apiserver to “/etc/kubernetes/manifests/kube-apiserver.yaml”
[controlplane]
wrote Static Pod manifest for component kube-controller-manager to “/etc/kubernetes/manifests/kube-controller-manager.yaml”
[controlplane]
wrote Static Pod manifest for component kube-scheduler to “/etc/kubernetes/manifests/kube-scheduler.yaml”
[etcd]
Wrote Static Pod manifest for a local etcd instance to “/etc/kubernetes/manifests/etcd.yaml”
[init]
waiting for the kubelet to boot up the control plane as Static Pods from directory “/etc/kubernetes/manifests”
[init]
this might take a minute or longer if the control plane images have to be pulled
[apiclient]
All control plane components are healthy after 46.002127 seconds
[uploadconfig]
storing the configuration used in ConfigMap “kubeadm-config” in the “kube-system” Namespace
[kubelet]
Creating a ConfigMap “kubelet-config-1.11” in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster]
Marking the node kerneltalks1 as master by adding the label “node-role.kubernetes.io/master=””
[markmaster]
Marking the node kerneltalks1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode]
Uploading the CRI Socket information “/var/run/dockershim.sock” to the Node API object “kerneltalks1” as an annotation
[bootstraptoken]
using token: 8lqimn.2u78dcs5rcb1mggf
[bootstraptoken]
configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken]
configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken]
configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken]
creating the “cluster-info” ConfigMap in the “kube-public” namespace
[addons]
Applied essential addon: CoreDNS
[addons]
Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 172.31.81.44:6443 --token 8lqimn.2u78dcs5rcb1mggf --discovery-token-ca-cert-hash sha256:de6bfdec100bb979d26ffc177de0e924b6c2fbb71085aa065fd0a0854e1bf360
In the above output there are two key things you get –
Commands to enable the regular user to administer Kubemaster
Command to run on slave node to join Kubernetes cluster
That’s it. You have successfully started the Kubemaster node and brought up your Kubernetes cluster. The next task is to install and configure your secondary nodes in this cluster.
Installation of Kubernetes slave node or minion
The installation process remains the same. Follow steps for disabling SWAP, installing Docker, and installing 3 Kubernetes packages.
Configuration of Kubernetes slave node minion
There is not much to do on this node. You already have the command to run on this node for joining the cluster, which was printed by the kubeadm init command.
Let's see how to join a node to the Kubernetes cluster using the kubeadm command –
[root@minion ~]# kubeadm join 172.31.81.44:6443 --token 8lqimn.2u78dcs5rcb1mggf --discovery-token-ca-cert-hash sha256:de6bfdec100bb979d26ffc177de0e924b6c2fbb71085aa065fd0a0854e1bf360
[preflight]
running pre-flight checks
I0912 08:19:56.440122 1555 kernel_validator.go:81] Validating kernel version
I0912 08:19:56.440213 1555 kernel_validator.go:96] Validating kernel config
[discovery]
Trying to connect to API Server “172.31.81.44:6443”
[discovery]
Created cluster-info discovery client, requesting info from “https://172.31.81.44:6443”
[discovery]
Failed to request cluster info, will try again: [Get https://172.31.81.44:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: net/http: TLS handshake timeout]
[discovery]
Requesting info from “https://172.31.81.44:6443” again to validate TLS against the pinned public key
[discovery]
Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server “172.31.81.44:6443”
[discovery]
Successfully established connection with API Server “172.31.81.44:6443”
[kubelet]
Downloading configuration for the kubelet from the “kubelet-config-1.11” ConfigMap in the kube-system namespace
[kubelet]
Writing kubelet configuration to file “/var/lib/kubelet/config.yaml”
[kubelet]
Writing kubelet environment file with flags to file “/var/lib/kubelet/kubeadm-flags.env”
[preflight]
Activating the kubelet service
[tlsbootstrap]
Waiting for the kubelet to perform the TLS Bootstrap…
[patchnode]
Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "minion" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
And here you go. Node has joined the cluster successfully. Thus you have completed Kubernetes cluster installation and configuration!
Check nodes status from kubemaster.
[root@kerneltalks ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kerneltalks1 Ready master 2h v1.11.3
minion Ready <none> 1h v1.11.3
Once you see all status as ready you have a steady cluster up and running.
Learn the difference between Docker swarm and Kubernetes. Comparison between two container orchestration platforms in a tabular manner.
When you are on the learning curve of application containerization, there will be a stage when you come across orchestration tools for containers. If you started your learning with Docker, then Docker swarm is the first cluster management tool you must have learned, followed by Kubernetes. So it's time to compare Docker swarm and Kubernetes. In this article, we will quickly see what Docker swarm is, what Kubernetes is, and then a comparison between the two.
What is Docker swarm?
Docker swarm is a tool native to Docker, aimed at cluster management of Docker containers. Docker swarm enables you to build a cluster of multiple nodes, VMs or physical machines, running the Docker engine. In turn, you run containers on multiple machines to provide a highly available, fault-tolerant environment. It's pretty simple to set up and native to Docker.
What is Kubernetes?
It's a platform to manage containerized applications, i.e. containers, in a cluster environment, along with automation. It does almost the same job swarm mode does, but in a different and enhanced way. It was developed by Google in the first place, and the project was later handed over to the CNCF. It works with containers like Docker and rkt. Kubernetes installation is a bit more complex than Swarm's.
Compare Docker and Kubernetes
If someone asks you for a comparison between Docker and Kubernetes, that's not a valid question in the first place. You cannot differentiate between Docker and Kubernetes: Docker is an engine that runs containers, while Kubernetes is an orchestration platform that manages Docker containers in a cluster environment. So one cannot compare Docker and Kubernetes directly.
Difference between Docker Swarm and Kubernetes
I added a comparison of Swarm and Kubernetes in the below table for easy readability.
| Docker Swarm | Kubernetes |
|---|---|
| Docker's own orchestration tool | Google's open-source orchestration tool |
| Younger than Kubernetes | Older than Swarm |
| Simple to set up, being a native Docker tool | A bit complex to set up, but once done offers more functionality than Swarm |
| Smaller community around it, but Docker has excellent documentation | Being Google's product and older, it has huge community support |
| Simple application deployment in the form of services | Slightly more complex application deployment through pods, deployments, and services |
| Has only a command-line interface for management | Offers a GUI in addition to the CLI |
| Monitoring is available using third-party applications | Offers native and third-party options for monitoring and logging |
| Much faster than Kubernetes | Being a complex system, its deployments are a bit slower than Swarm's |
Learn how to format date and time to use in a shell script or as a variable along with different format examples.
There are many times you need to use a date in your shell script, e.g. to name a log file, to pass it as a variable, etc. So we need different date formats that can be used as strings or variables in our scripts. In this article, let's see how to use date in a shell script and what different formats you can use.
Check timedatectl command to easily manage date & time in Linux
How to use date in shell script?
You can use the date by inserting a shell execution of the date command within your command. For example, if you want to create a log file with the current date in its name, you can do it the following way –
root@kerneltalks # echo test > /tmp/`date +%d`.txt
root@kerneltalks # ls -lrt
-rw-r--r--. 1 root root 5 Sep 10 09:10 10.txt
Basically, you pass a format identifier prefixed with +% to the date command to get your desired output format. The date command supplies many different identifiers.
You can even save specific date format to some variable like –
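A minimal sketch, building a log file name from the date (the backup_ prefix is just an example):

```shell
# Save a formatted date into a variable, then reuse it -- here to build
# a log file name. %F gives YYYY-MM-DD.
today=$(date +%F)
logfile="/tmp/backup_${today}.log"
echo "$logfile"
```

Running this on 10 Sep 2018 would print /tmp/backup_2018-09-10.log.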
These format identifiers are from date command man page :
%a locale’s abbreviated weekday name (e.g., Sun)
%A locale’s full weekday name (e.g., Sunday)
%b locale’s abbreviated month name (e.g., Jan)
%B locale’s full month name (e.g., January)
%c locale’s date and time (e.g., Thu Mar 3 23:05:25 2005)
%C century; like %Y, except omit last two digits (e.g., 20)
%d day of month (e.g., 01)
%D date; same as %m/%d/%y
%e day of month, space padded; same as %_d
%F full date; same as %Y-%m-%d
%g last two digits of year of ISO week number (see %G)
%G year of ISO week number (see %V); normally useful only with %V
%h same as %b
%H hour (00..23)
%I hour (01..12)
%j day of year (001..366)
%k hour ( 0..23)
%l hour ( 1..12)
%m month (01..12)
%M minute (00..59)
%N nanoseconds (000000000..999999999)
%p locale’s equivalent of either AM or PM; blank if not known
%P like %p, but lower case
%r locale’s 12-hour clock time (e.g., 11:11:04 PM)
%R 24-hour hour and minute; same as %H:%M
%s seconds since 1970-01-01 00:00:00 UTC
%S second (00..60)
%T time; same as %H:%M:%S
%u day of week (1..7); 1 is Monday
%U week number of year, with Sunday as first day of week (00..53)
%V ISO week number, with Monday as first day of week (01..53)
%w day of week (0..6); 0 is Sunday
%W week number of year, with Monday as first day of week (00..53)
%x locale’s date representation (e.g., 12/31/99)
%X locale’s time representation (e.g., 23:13:48)
%y last two digits of year (00..99)
%Y year
%z +hhmm numeric timezone (e.g., -0400)
%:z +hh:mm numeric timezone (e.g., -04:00)
%::z +hh:mm:ss numeric time zone (e.g., -04:00:00)
%Z alphabetic time zone abbreviation (e.g., EDT)
Using combinations of above you can get your desired date format as output to use in shell script! You can even use %n for new-line and %t for adding a tab in outputs that are mostly not needed since you will be using it as a single string.
Different date format examples
For your convenience and ready to use, I listed below combinations for different date formats.
root@kerneltalks # date +%d_%b_%Y
10_Sep_2018
root@kerneltalks # date +%D
09/10/18
root@kerneltalks # date +%F-%T
2018-09-10-11:09:51
root@kerneltalks # echo today is `date +%A`
today is Monday
root@kerneltalks # echo Its `date +%d` of `date +%B" "%Y` and time is `date +%r`
Its 10 of September 2018 and time is 11:13:42 AM
The small guide which will help aspirants for Docker Certified Associate Certification preparation.
I recently cleared the DCA – Docker Certified Associate certification and wanted to share my experience here on my blog. This might be helpful for folks who are going to take the exam soon, and may inspire containerization aspirants to pursue it.
DCA details :
Complete certification details and FAQs can be found on their official website.
Duration: 90 minutes
Type: Multiple choice questions
Number of questions: 55
Mode of exam: Remotely proctored
Cost: $195 (For India residents, it would be plus 18% GST which comes roughly 16-17K INR.)
Preparation
Docker Certified Associate aims at certifying professionals with enterprise-level experience of Docker for a minimum of a year. When you start learning Docker, you mostly start off with CE (Community Edition), which is free, or you practice on Play with Docker, which also serves CE Docker. You should not attempt this certification with knowledge or experience of CE only. This certification is designed to test your knowledge of the Enterprise Edition of Docker, which is fully feature-packed and has a paid license tagged to it.
So it is expected that you have a minimum of 6 months to a year of experience with Docker EE in an enterprise environment before you attempt the certification. Docker also offers a trial EE license which you can use to start off with EE Docker learning. Only attempt the certification once you are through with all the syllabus mentioned on the website and well versed in the Docker enterprise world.
You can register for the examination from the website, which will redirect you to their vendor Examity's website. There you need to register for the exam in an available time slot and make the payment online. You can even book a slot within the next 24 hours, but one is not always available. Make sure your computer meets all the prerequisites so that you can take the exam without any hassle. You can even connect with the exam vendor well before the exam and get your computer checked for compatibility with the exam software/plugin.
Docker’s official study guide walks you through the syllabus so that you can prepare yourself accordingly.
During Exam
You can take this exam from anywhere, provided you have a good internet connection and your surroundings abide by the rules mentioned on the certification website, like an empty room, a clean desk, etc. As this exam is remotely proctored, an executive will be monitoring your screen, webcam, and mic remotely in real time. So make sure you have a calm place and an empty room before you start the exam. You should not eat, use a cellphone or similar electronic device, talk, etc. during the exam.
Exam questions are carefully designed by professionals to test your knowledge in all areas. Do not expect only command-and-option type questions. There is a good mix of logical, conceptual, and practical application questions. Some questions may have multiple answers, so keep an eye out for such questions and do not forget to select more than one answer.
After exam
Your examination scorecard will be displayed and the result shown to you immediately. You can have it emailed to you. The actual certificate takes 3 minutes before it hits your inbox! Do check spam if you don't receive it, before you escalate to the Docker Certification Team (certification@docker.com).
All the best! Do share your success stories in the comments below.