How to add CloudFront CDN to a WordPress blog with SSL

Step-by-step procedure to add CloudFront CDN to a WordPress blog with an SSL certificate. Screenshots included for better understanding.

Add CloudFront CDN to a WordPress blog with SSL

In this article, we will walk you through configuring AWS CloudFront CDN for a WordPress blog using the W3TC (W3 Total Cache) plugin. We will use a basic setup in AWS CloudFront, so we won't be using IAM authentication and access policies in our configuration.

See CloudFront content delivery network pricing here.

We assume the below prerequisites are completed before moving on with this tutorial.

  1. You are logged in to the WordPress console of your blog with an admin login
  2. You have the W3TC plugin installed on your WordPress blog
  3. You are logged in to your AWS account
  4. You have access to edit the zone file of your domain (required for fancy CDN CNAMEs)

Without further delay, let's jump into the step-by-step procedure to add CloudFront CDN to a WordPress blog, with screenshots.

AWS Certificate Manager

You can skip this step if your blog is not HTTPS-enabled.

In this step, we will import your SSL certificate into AWS, to be used with CloudFront distributions in case you are using a fancy URL (like c1.kerneltalks.com) for distributions instead of the default system-generated XXXXXX.cloudfront.net

You can skip this step if you want to buy an SSL certificate from Amazon and don't want to use your own. You can also skip this step if you are OK with using a system-generated distribution name like askdhue.cloudfront.net and don't want a custom CNAME like c1.kerneltalks.com.

You can buy an SSL certificate from many certificate authorities, or you can get an open source Let's Encrypt SSL certificate for free.
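If you go the Let's Encrypt route, a minimal sketch with certbot looks like the below (assuming certbot is installed; the webroot path is a placeholder for your server's document root):

root@kerneltalks # certbot certonly --webroot -w /var/www/html -d kerneltalks.com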

Log in to the AWS Certificate Manager console. Make sure you use the US East (N. Virginia) region, since only certificates stored in this region are available to select while creating CloudFront distributions. Click Get Started, and on the next screen click Import a certificate. You will be presented with the below screen.

Import certificate in AWS

Fill in your certificate details in the fields above. The certificate body takes your SSL certificate content, then the private key, and finally the certificate chain (if any). Click Review and import.

The filled-in details will be verified, and the information fetched from them will be shown on screen for your review, like below.

Review certificate in AWS certificate manager

If everything looks good, click Import. Your certificate will be imported and its details will be shown to you in the dashboard.
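Alternatively, if you prefer the command line, the same import can be done with the AWS CLI (a sketch; the .pem file paths are placeholders for your certificate files):

root@kerneltalks # aws acm import-certificate --certificate fileb://cert.pem --private-key fileb://key.pem --certificate-chain fileb://chain.pem --region us-east-1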

Now we have our SSL certificate ready in AWS to be used with CloudFront distribution custom URLs like c1.kerneltalks.com. Let's move on to creating distributions.

AWS CloudFront configuration

Log in to the AWS CloudFront console using your Amazon account. In the left-hand menu, make sure Distributions is selected. Click the Create Distribution button. You will be presented with step 1 of the wizard: selecting the delivery method.

Click Get Started under the Web delivery method. You will see the below screen, where you need to fill in details –

cloudfront web distribution

Below are a few fields you need to select/fill in.

  1. Origin Domain Name: Enter your blog's naked domain name, e.g. kerneltalks.com
  2. Origin ID: Keep the autogenerated value if you like it, or name it anything you want.
  3. Origin Protocol Policy: Select HTTPS Only.
  4. Viewer Protocol Policy: Redirect HTTP to HTTPS
  5. Alternate Domain Names: Enter the fancy CDN name you want, like c1.kerneltalks.com
  6. SSL Certificate -> Custom SSL Certificate: You should see the certificate you imported in the previous step here.

There are many other options that you can toggle based on your requirements. The ones listed above are the most basic and are needed for a normal CDN to work. Once done, click Create Distribution.

You will be redirected back to the distributions dashboard, where you can see your newly created distribution with its status as In Progress. This means AWS is now fetching all content files like media, CSS, and JS from your hosting server to its edge servers. In other words, your CDN zone is being deployed. Once the sync completes, its state will change to Deployed. This process takes time, depending on how big your blog is.

While your distribution is being deployed, you can head back to your zone file editor (probably in cPanel) and add an entry for the CNAME you mentioned in the distribution settings (e.g. c1.kerneltalks.com).

CNAME entry

You can skip this step if you are not using a custom CNAME for your CloudFront distribution.

Go to the zone file editor for your domain and add a CNAME entry for the custom name you used above (here c1.kerneltalks.com), pointing it to the CloudFront URL of your distribution.

The CloudFront URL of your distribution can be found under Domain Name in the distributions dashboard screenshot above. It's generally in the format XXXXXXX.cloudfront.net
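For example, a typical zone file record would look like the below (use your own distribution's Domain Name value in place of the placeholder):

c1    IN    CNAME    XXXXXXX.cloudfront.net.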

This change will take a few minutes to hours to propagate across the internet. You can check if it's live by pinging your custom domain name. You should receive a pingback from cloudfront.net
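A quick check from any terminal (replace with your own CNAME):

root@kerneltalks # ping c1.kerneltalks.com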

ping custom domain name

That's it. You are done with your AWS configuration. Now you need to add this custom CNAME or cloudfront.net name to the W3TC settings in your WordPress admin panel.

W3TC settings

Log in to the WordPress admin panel. Go to W3TC General Settings and enable CDN as per the below screenshot.

W3TC General settings CDN portion

Go to W3TC CDN settings.

Scroll down to Configuration: Objects. Set SSL support to Enabled and add your CNAME below under Replace site's hostname with:

Once done, click Test Mirror and you should see it pass. Check the below screenshot for better understanding.

W3TC CDN mirror check

If your test does not pass, wait for some time. Make sure you can ping that CNAME as explained above and that your CloudFront distribution is completely deployed.

Check your blog for CloudFront CDN

That's it. Your blog is serving files from CloudFront CDN now! Open the website in a new browser after clearing cookies, view the website's source code, and look for URLs with your custom domain name (here c1.kerneltalks.com). You will see that your CSS, JS, and media file URLs are no longer on your naked domain (here kerneltalks.com) but on the CDN (i.e. c1.kerneltalks.com)!

To serve files in parallel, you can create more than one (ideally 4) distribution in the same way and add their CNAMEs in the W3TC settings.

Enjoy the lightning-fast web pages of your blog!

Documentary films on Linux!

The Code & Revolution OS! Documentary films on Linux released in 2001.

Documentary films on Linux

Yup, you read it right. The Code & Revolution OS! These are documentary films released in 2001. The Code is based on the birth and journey of Linux, and Revolution OS is based on the 20-year journey of Linux, GNU, and the open source world.

Have you watched them?

The Code (Wiki Page) is a 58-minute documentary featuring the creator of Linux, Linus Torvalds, and some of the programmers who contributed to Linux. And yes, there is a piece of the interview where Linus talks about developers from India! Since I am from India, I feel like mentioning it here 🙂

Revolution OS (Wiki Page) is an 85-minute documentary spanning the 20-year journey of the free software movement through Linux, GNU, and Open Source.

Documentary films are available on YouTube along with subtitles.

The Code : https://www.youtube.com/watch?v=XMm0HsmOTFI

Revolution OS : https://www.youtube.com/watch?v=GsHh2wfy_-4

Embedded versions are below if you want to watch them right away 🙂

If you know more titles, let me know in the comments and I will add them to the list.

How to remount a filesystem in read-write mode under Linux

Learn how to remount a file system in read-write mode under Linux. The article also explains how to check if a file system is read-only and how to clean a file system.

Re-mount filesystem as read-write

Most of the time, on newly created file systems or NFS file systems, we see an error like below:

root@kerneltalks # touch file1
touch: cannot touch ‘file1’: Read-only file system

This is because the file system is mounted read-only. In such a scenario, you have to remount it in read-write mode. Before that, we will see how to check whether a file system is mounted read-only, and then how to remount it read-write.

How to check if a file system is read-only

To confirm a file system is mounted in read-only mode, use the below command –

# cat /proc/mounts | grep datastore
/dev/xvdf /datastore ext3 ro,seclabel,relatime,data=ordered 0 0

Grep for your mount point in cat /proc/mounts output and observe the fourth column, which shows all the options used for the mounted file system. Here, ro denotes the file system is mounted read-only.

You can also get these details using the mount -v command.

root@kerneltalks # mount -v |grep datastore
/dev/xvdf on /datastore type ext3 (ro,relatime,seclabel,data=ordered)

In this output, the file system options are listed in parentheses in the last column.

Re-mount the file system in read-write mode

To remount the file system in read-write mode, use the below command –

root@kerneltalks # mount -o remount,rw /datastore

root@kerneltalks # mount -v |grep datastore
/dev/xvdf on /datastore type ext3 (rw,relatime,seclabel,data=ordered)

Observe that after remounting, the option ro changed to rw. The file system is now mounted read-write and you can write files to it.

Note: It is recommended to fsck the file system before remounting it.

You can check the file system by running fsck on its volume.

root@kerneltalks # df -h /datastore
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda2       10G  881M  9.2G   9% /

root@kerneltalks # fsck /dev/xvdf
fsck from util-linux 2.23.2
e2fsck 1.42.9 (28-Dec-2013)
/dev/xvdf: clean, 12/655360 files, 79696/2621440 blocks

Sometimes corrections need to be made to a file system that require a reboot, to make sure no processes are accessing the file system.

How to list YUM repositories in RHEL / CentOS

Learn how to list YUM repositories in RHEL / CentOS. This how-to guide includes various commands, along with examples, to check details about repositories and their packages on Red Hat systems.

Listing YUM repo in Red Hat systems

YUM (Yellowdog Updater, Modified) is a package management tool for Red Hat Linux and its variants like CentOS. In this article, we will walk you through several commands that will be useful for getting details of YUM repositories in RHEL.


Without any further delay, let's see the list of commands and their example outputs.


List YUM repositories

Run the command yum repolist and it will show you all the repositories configured under YUM and enabled for use on that server. To view disabled repositories, or all repositories, refer to the later sections of this article.

[root@kerneltalks ~]# yum repolist
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
repo id                                                                  repo name                                                                                status
*epel/x86_64                                                             Extra Packages for Enterprise Linux 6 - x86_64                                           12,448
rhui-REGION-client-config-server-7/x86_64                                Red Hat Update Infrastructure 2.0 Client Configuration Server 7                               2
rhui-REGION-rhel-server-releases/7Server/x86_64                          Red Hat Enterprise Linux Server 7 (RPMs)                                                 17,881
rhui-REGION-rhel-server-rh-common/7Server/x86_64                         Red Hat Enterprise Linux Server 7 RH Common (RPMs)                                          231
rsawaroha                                                                rsaw aroha rpms for Fedora/RHEL6+                                                            19
repolist: 30,581

In the above output, you can see the repo list with repo id, repo name, and status. You can see we have the EPEL repo configured (repo id epel/x86_64) on the server. The last repo, rsawaroha, was added for installing the xsos tool used to read sosreport.

What is the status column in yum repolist?

The last column of yum repolist output is status, which has numbers in it. You might be wondering what the status numbers in yum repolist mean.

They are the number of packages available in the respective repository! If you see a number like XXXX+N, i.e. followed by a + sign and another number, it means the repository has XXXX packages available for installation and N packages excluded.
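For example, a hypothetical repolist line like the one below (the counts are purely illustrative) would mean 1,000 packages are available and 25 are excluded:

some-repo/x86_64        Some repository name        1,000+25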

List details of YUM repositories

Details of each repository, like name, id, number of packages available, total size, link details, timestamps, etc., can be viewed using verbose mode. Use the -v switch with yum repolist to view repository details.

[root@kerneltalks ~]# yum -v repolist
Not loading "rhnplugin" plugin, as it is disabled
Loading "amazon-id" plugin
Not loading "product-id" plugin, as it is disabled
Loading "rhui-lb" plugin
Loading "search-disabled-repos" plugin
Not loading "subscription-manager" plugin, as it is disabled
Config time: 0.048
Yum version: 3.4.3
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/supplementary/os
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/extras/os
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/rh-common/debug
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/supplementary/debug
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/rhscl/1/debug
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/rhui-client-config/rhel/server/7/x86_64/os
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/rhscl/1/source/SRPMS
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/rhscl/1/os
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/source/SRPMS
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/extras/debug
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/optional/source/SRPMS
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/optional/debug
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/supplementary/source/SRPMS
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/debug
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/optional/os
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/extras/source/SRPMS
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/rh-common/os
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/rh-common/source/SRPMS
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/os
Setting up Package Sacks
pkgsack time: 0.009
Repo-id      : epel/x86_64
Repo-name    : Extra Packages for Enterprise Linux 6 - x86_64
Repo-revision: 1515267354
Repo-updated : Sat Jan  6 19:58:06 2018
Repo-pkgs    : 12,448
Repo-size    : 11 G
Repo-metalink: https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=x86_64
  Updated    : Mon Jan  8 15:45:02 2018
Repo-baseurl : http://mirror.sjc02.svwh.net/fedora-epel/6/x86_64/ (43 more)
Repo-expire  : 21,600 second(s) (last: Mon Jan  8 19:23:12 2018)
  Filter     : read-only:present
Repo-filename: /etc/yum.repos.d/epel.repo

Repo-id      : rhui-REGION-client-config-server-7/x86_64
Repo-name    : Red Hat Update Infrastructure 2.0 Client Configuration Server 7
Repo-revision: 1509723523
Repo-updated : Fri Nov  3 15:38:43 2017
Repo-pkgs    : 2
Repo-size    : 106 k
Repo-mirrors : https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/rhui-client-config/rhel/server/7/x86_64/os
Repo-baseurl : https://rhui2-cds02.ap-south-1.aws.ce.redhat.com/pulp/repos//rhui-client-config/rhel/server/7/x86_64/os/ (1 more)
Repo-expire  : 21,600 second(s) (last: Mon Jan  8 19:23:13 2018)
  Filter     : read-only:present
Repo-filename: /etc/yum.repos.d/redhat-rhui-client-config.repo

Repo-id      : rhui-REGION-rhel-server-releases/7Server/x86_64
Repo-name    : Red Hat Enterprise Linux Server 7 (RPMs)
Repo-revision: 1515106250
Repo-updated : Thu Jan  4 22:50:49 2018
Repo-pkgs    : 17,881
Repo-size    : 24 G
Repo-mirrors : https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/os
Repo-baseurl : https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/repos//content/dist/rhel/rhui/server/7/7Server/x86_64/os/ (1 more)
Repo-expire  : 21,600 second(s) (last: Mon Jan  8 19:23:13 2018)
  Filter     : read-only:present
Repo-filename: /etc/yum.repos.d/redhat-rhui.repo

Repo-id      : rhui-REGION-rhel-server-rh-common/7Server/x86_64
Repo-name    : Red Hat Enterprise Linux Server 7 RH Common (RPMs)
Repo-revision: 1513002956
Repo-updated : Mon Dec 11 14:35:56 2017
Repo-pkgs    : 231
Repo-size    : 4.5 G
Repo-mirrors : https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/rh-common/os
Repo-baseurl : https://rhui2-cds02.ap-south-1.aws.ce.redhat.com/pulp/repos//content/dist/rhel/rhui/server/7/7Server/x86_64/rh-common/os/ (1 more)
Repo-expire  : 21,600 second(s) (last: Mon Jan  8 19:23:13 2018)
  Filter     : read-only:present
Repo-filename: /etc/yum.repos.d/redhat-rhui.repo

Repo-id      : rsawaroha
Repo-name    : rsaw aroha rpms for Fedora/RHEL6+
Repo-revision: 1507778106
Repo-updated : Thu Oct 12 03:15:06 2017
Repo-pkgs    : 19
Repo-size    : 1.4 M
Repo-baseurl : http://people.redhat.com/rsawhill/rpms
Repo-expire  : 21,600 second(s) (last: Mon Jan  8 18:02:10 2018)
  Filter     : read-only:present
Repo-filename: /etc/yum.repos.d/rsawaroha.repo

repolist: 30,581

List enabled YUM repositories

Under YUM, you have the choice to enable or disable repositories. During yum operations like package installation, only enabled repositories are scanned/contacted to perform the operation.

To view only enabled repositories in YUM, use yum repolist enabled

[root@kerneltalks ~]# yum repolist enabled
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
repo id                                                                  repo name                                                                                status
*epel/x86_64                                                             Extra Packages for Enterprise Linux 6 - x86_64                                           12,448
rhui-REGION-client-config-server-7/x86_64                                Red Hat Update Infrastructure 2.0 Client Configuration Server 7                               2
rhui-REGION-rhel-server-releases/7Server/x86_64                          Red Hat Enterprise Linux Server 7 (RPMs)                                                 17,881
rhui-REGION-rhel-server-rh-common/7Server/x86_64                         Red Hat Enterprise Linux Server 7 RH Common (RPMs)                                          231
rsawaroha                                                                rsaw aroha rpms for Fedora/RHEL6+                                                            19
repolist: 30,581

List disabled YUM repositories

Similarly, you can list only the disabled yum repositories. Use yum repolist disabled

[root@kerneltalks ~]# yum repolist disabled
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
repo id                                                                          repo name
epel-debuginfo/x86_64                                                            Extra Packages for Enterprise Linux 6 - x86_64 - Debug
epel-source/x86_64                                                               Extra Packages for Enterprise Linux 6 - x86_64 - Source
epel-testing/x86_64                                                              Extra Packages for Enterprise Linux 6 - Testing - x86_64
epel-testing-debuginfo/x86_64                                                    Extra Packages for Enterprise Linux 6 - Testing - x86_64 - Debug
epel-testing-source/x86_64                                                       Extra Packages for Enterprise Linux 6 - Testing - x86_64 - Source
rhui-REGION-rhel-server-debug-extras/7Server/x86_64                              Red Hat Enterprise Linux Server 7 Extra Debug (Debug RPMs)
rhui-REGION-rhel-server-debug-optional/7Server/x86_64                            Red Hat Enterprise Linux Server 7 Optional Debug (Debug RPMs)
rhui-REGION-rhel-server-debug-rh-common/7Server/x86_64                           Red Hat Enterprise Linux Server 7 RH Common Debug (Debug RPMs)
rhui-REGION-rhel-server-debug-rhscl/7Server/x86_64                               Red Hat Enterprise Linux Server 7 RHSCL Debug (Debug RPMs)
rhui-REGION-rhel-server-debug-supplementary/7Server/x86_64                       Red Hat Enterprise Linux Server 7 Supplementary Debug (Debug RPMs)
rhui-REGION-rhel-server-extras/7Server/x86_64                                    Red Hat Enterprise Linux Server 7 Extra(RPMs)
rhui-REGION-rhel-server-optional/7Server/x86_64                                  Red Hat Enterprise Linux Server 7 Optional (RPMs)
rhui-REGION-rhel-server-releases-debug/7Server/x86_64                            Red Hat Enterprise Linux Server 7 Debug (Debug RPMs)
rhui-REGION-rhel-server-releases-source/7Server/x86_64                           Red Hat Enterprise Linux Server 7 (SRPMs)
rhui-REGION-rhel-server-rhscl/7Server/x86_64                                     Red Hat Enterprise Linux Server 7 RHSCL (RPMs)
rhui-REGION-rhel-server-source-extras/7Server/x86_64                             Red Hat Enterprise Linux Server 7 Extra (SRPMs)
rhui-REGION-rhel-server-source-optional/7Server/x86_64                           Red Hat Enterprise Linux Server 7 Optional (SRPMs)
rhui-REGION-rhel-server-source-rh-common/7Server/x86_64                          Red Hat Enterprise Linux Server 7 RH Common (SRPMs)
rhui-REGION-rhel-server-source-rhscl/7Server/x86_64                              Red Hat Enterprise Linux Server 7 RHSCL (SRPMs)
rhui-REGION-rhel-server-source-supplementary/7Server/x86_64                      Red Hat Enterprise Linux Server 7 Supplementary (SRPMs)
rhui-REGION-rhel-server-supplementary/7Server/x86_64                             Red Hat Enterprise Linux Server 7 Supplementary (RPMs)
repolist: 0

List all configured YUM repositories

To list all YUM repositories available on the server, whether enabled or disabled, use yum repolist all

[root@kerneltalks ~]# yum repolist all
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
repo id                                                                  repo name                                                                       status
*epel/x86_64                                                             Extra Packages for Enterprise Linux 6 - x86_64                                  enabled: 12,448
epel-debuginfo/x86_64                                                    Extra Packages for Enterprise Linux 6 - x86_64 - Debug                          disabled
epel-source/x86_64                                                       Extra Packages for Enterprise Linux 6 - x86_64 - Source                         disabled
epel-testing/x86_64                                                      Extra Packages for Enterprise Linux 6 - Testing - x86_64                        disabled
epel-testing-debuginfo/x86_64                                            Extra Packages for Enterprise Linux 6 - Testing - x86_64 - Debug                disabled
epel-testing-source/x86_64                                               Extra Packages for Enterprise Linux 6 - Testing - x86_64 - Source               disabled
rhui-REGION-client-config-server-7/x86_64                                Red Hat Update Infrastructure 2.0 Client Configuration Server 7                 enabled:      2
rhui-REGION-rhel-server-debug-extras/7Server/x86_64                      Red Hat Enterprise Linux Server 7 Extra Debug (Debug RPMs)                      disabled
rhui-REGION-rhel-server-debug-optional/7Server/x86_64                    Red Hat Enterprise Linux Server 7 Optional Debug (Debug RPMs)                   disabled
rhui-REGION-rhel-server-debug-rh-common/7Server/x86_64                   Red Hat Enterprise Linux Server 7 RH Common Debug (Debug RPMs)                  disabled
rhui-REGION-rhel-server-debug-rhscl/7Server/x86_64                       Red Hat Enterprise Linux Server 7 RHSCL Debug (Debug RPMs)                      disabled
rhui-REGION-rhel-server-debug-supplementary/7Server/x86_64               Red Hat Enterprise Linux Server 7 Supplementary Debug (Debug RPMs)              disabled
rhui-REGION-rhel-server-extras/7Server/x86_64                            Red Hat Enterprise Linux Server 7 Extra(RPMs)                                   disabled
rhui-REGION-rhel-server-optional/7Server/x86_64                          Red Hat Enterprise Linux Server 7 Optional (RPMs)                               disabled
rhui-REGION-rhel-server-releases/7Server/x86_64                          Red Hat Enterprise Linux Server 7 (RPMs)                                        enabled: 17,881
rhui-REGION-rhel-server-releases-debug/7Server/x86_64                    Red Hat Enterprise Linux Server 7 Debug (Debug RPMs)                            disabled
rhui-REGION-rhel-server-releases-source/7Server/x86_64                   Red Hat Enterprise Linux Server 7 (SRPMs)                                       disabled
rhui-REGION-rhel-server-rh-common/7Server/x86_64                         Red Hat Enterprise Linux Server 7 RH Common (RPMs)                              enabled:    231
rhui-REGION-rhel-server-rhscl/7Server/x86_64                             Red Hat Enterprise Linux Server 7 RHSCL (RPMs)                                  disabled
rhui-REGION-rhel-server-source-extras/7Server/x86_64                     Red Hat Enterprise Linux Server 7 Extra (SRPMs)                                 disabled
rhui-REGION-rhel-server-source-optional/7Server/x86_64                   Red Hat Enterprise Linux Server 7 Optional (SRPMs)                              disabled
rhui-REGION-rhel-server-source-rh-common/7Server/x86_64                  Red Hat Enterprise Linux Server 7 RH Common (SRPMs)                             disabled
rhui-REGION-rhel-server-source-rhscl/7Server/x86_64                      Red Hat Enterprise Linux Server 7 RHSCL (SRPMs)                                 disabled
rhui-REGION-rhel-server-source-supplementary/7Server/x86_64              Red Hat Enterprise Linux Server 7 Supplementary (SRPMs)                         disabled
rhui-REGION-rhel-server-supplementary/7Server/x86_64                     Red Hat Enterprise Linux Server 7 Supplementary (RPMs)                          disabled
rsawaroha                                                                rsaw aroha rpms for Fedora/RHEL6+                                               enabled:     19
repolist: 30,581

List all available packages in repositories

To list all packages available for installation from all repositories, use the below command –

[root@kerneltalks ~]# yum list available |more
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
Available Packages
2048-cli.x86_64                0.9.1-1.el6      epel
2048-cli-nocurses.x86_64       0.9.1-1.el6      epel
2ping.noarch                   3.2.1-2.el6      epel
389-admin.i686                 1.1.35-1.el6     epel
389-admin.x86_64               1.1.35-1.el6     epel
389-admin-console.noarch       1.1.8-1.el6      epel
389-admin-console-doc.noarch   1.1.8-1.el6      epel
389-adminutil.i686             1.1.19-1.el6     epel
389-adminutil.x86_64           1.1.19-1.el6     epel
389-adminutil-devel.i686       1.1.19-1.el6     epel
389-adminutil-devel.x86_64     1.1.19-1.el6     epel
389-console.noarch             1.1.7-1.el6      epel
389-ds.noarch                  1.2.2-1.el6      epel
389-ds-base.x86_64             1.3.6.1-24.el7_4 rhui-REGION-rhel-server-releases
389-ds-base-libs.x86_64        1.3.6.1-24.el7_4 rhui-REGION-rhel-server-releases
389-ds-console.noarch          1.2.6-1.el6      epel
389-ds-console-doc.noarch      1.2.6-1.el6      epel
389-dsgw.x86_64                1.1.11-1.el6     epel

List packages from a particular repository

The yum list available command is useful to list all available packages. If you want to list packages from a particular repository, then use the below switches –

--disablerepo="*", which excludes all repos from scanning

--enablerepo="<repo>", which includes only your desired repo in the scan for packages.

[root@kerneltalks ~]# yum --disablerepo="*" --enablerepo="rsawaroha" list available
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
Available Packages
Rebooty-inspector.noarch                                                               0.3.0-1                                                                 rsawaroha
pyrite.noarch                                                                          1.0.2-1                                                                 rsawaroha
ravshello.x86_64                                                                       1.35.0-1                                                                rsawaroha
reboot-guard.x86_64                                                                    1.0.1-1                                                                 rsawaroha
rhsecapi.x86_64                                                                        1.0.1-1                                                                 rsawaroha
rhv-manage-hosts.noarch                                                                0.5-1                                                                   rsawaroha
rsar.noarch                                                                            0.1.1-1                                                                 rsawaroha
upvm.x86_64                                                                            0.10.8-1                                                                rsawaroha
valine.noarch                                                                          0.7.4-1                                                                 rsawaroha

In the above command, we listed all packages available under the repo named rsawaroha.

xsos: a tool to read sosreport in RHEL/CentOS

Learn how to use the xsos tool to read a sosreport in RHEL/CentOS. xsos is a very helpful tool for Linux sysadmins. Different options and their examples are included in the article.

xsos tool to read sosreport

xsos is a tool coded to read a sosreport on Linux systems. sosreport is a tool from Red Hat which collects system information that helps vendors troubleshoot issues. sosreport creates a tarball which contains all the system information, but you cannot read it directly. For simplicity, Ryan Sawhill created a tool named xsos which helps you read a sosreport much more easily, right in your terminal. In this article, we will walk you through how to read a sosreport on the Linux terminal.

xsos installation

The xsos rpm is available on Ryan's page on redhat.com. One can simply use the URL as below to install his yum repo and then install the xsos package.

yum install http://people.redhat.com/rsawhill/rpms/latest-rsawaroha-release.rpm

yum install xsos

Refer to the below outputs for reference.

root@kerneltalks # yum install http://people.redhat.com/rsawhill/rpms/latest-rsawaroha-release.rpm
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
latest-rsawaroha-release.rpm                             |  24 kB     00:00
Examining /var/tmp/yum-root-b0cGf5/latest-rsawaroha-release.rpm: rsawaroha-release-1-1.noarch
Marking /var/tmp/yum-root-b0cGf5/latest-rsawaroha-release.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package rsawaroha-release.noarch 0:1-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package               Arch       Version   Repository                     Size
================================================================================
Installing:
 rsawaroha-release     noarch     1-1       /latest-rsawaroha-release      27 k

Transaction Summary
================================================================================
Install  1 Package

Total size: 27 k
Installed size: 27 k
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : rsawaroha-release-1-1.noarch                                 1/1
  Verifying  : rsawaroha-release-1-1.noarch                                 1/1

Installed:
  rsawaroha-release.noarch 0:1-1

Complete!


root@kerneltalks # yum install xsos
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
rsawaroha                                                | 2.9 kB     00:00
rsawaroha/primary_db                                       |  11 kB   00:00
Resolving Dependencies
--> Running transaction check
---> Package xsos.noarch 0:0.7.13-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package        Arch             Version              Repository           Size
================================================================================
Installing:
 xsos           noarch           0.7.13-1             rsawaroha            54 k

Transaction Summary
================================================================================
Install  1 Package

Total download size: 54 k
Installed size: 137 k
Is this ok [y/d/N]: y
Downloading packages:
warning: /var/cache/yum/x86_64/7Server/rsawaroha/packages/latest-xsos.rpm: Header V4 RSA/SHA256 Signature, key ID 4c4ffd02: NOKEY
Public key for latest-xsos.rpm is not installed
latest-xsos.rpm                                            |  54 kB   00:00
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rsaw.aroha
Importing GPG key 0x4C4FFD02:
 Userid     : "Ryan Sawhill Aroha <rsaw@redhat.com>"
 Fingerprint: bac2 74b8 9f6b 5907 b3d0 e467 fba4 72a7 4c4f fd02
 Package    : rsawaroha-release-1-1.noarch (installed)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-rsaw.aroha
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : xsos-0.7.13-1.noarch                                         1/1
  Verifying  : xsos-0.7.13-1.noarch                                         1/1

Installed:
  xsos.noarch 0:0.7.13-1

Complete!

Now the xsos utility is installed on your system. Let's check it out.

Using xsos

Without any arguments, the xsos command gives you the current system information in a summarized format.

root@kerneltalks # xsos
OS
  Hostname: kerneltalks
  Distro:   [redhat-release] Red Hat Enterprise Linux Server release 7.4 (Maipo)
            [os-release] Red Hat Enterprise Linux Server 7.4 (Maipo) 7.4 (Maipo)
  RHN:      serverURL = https://enter.your.server.url.here/XMLRPC
            enableProxy = 0
  RHSM:     hostname = subscription.rhsm.redhat.com
            proxy_hostname =
  YUM:      4 enabled plugins: amazon-id, langpacks, rhui-lb, search-disabled-repos
  Runlevel: N 3  (default multi-user)
  SELinux:  enforcing  (default enforcing)
  Arch:     mach=x86_64  cpu=x86_64  platform=x86_64
  Kernel:
    Booted kernel:  3.10.0-693.11.6.el7.x86_64
    GRUB default:   3.10.0-693.11.6.el7.x86_64
    Build version:
      Linux version 3.10.0-693.11.6.el7.x86_64 (mockbuild@x86-041.build.eng.bos.redhat.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC) ) #1 SMP Thu Dec 28
      14:23:39 EST 2017
    Booted kernel cmdline:
      root=UUID=3e11801e-5277-4d87-be4c-0a9a61fbc3da ro console=ttyS0,115200n8 console=tty0 net.ifnames=0 crashkernel=auto LANG=en_US.UTF-8
    GRUB default kernel cmdline:
      ro root=UUID=3e11801e-5277-4d87-be4c-0a9a61fbc3da console=hvc0 LANG=en_US.UTF-8
    Taint-check: 0  (kernel untainted)
    - - - - - - - - - - - - - - - - - - -
  Sys time:  Mon Jan  8 18:13:30 UTC 2018
  Boot time: Mon Jan  8 18:05:57 UTC 2018  (epoch: 1515434757)
  Uptime:    7 min,  1 user
  LoadAvg:   [1 CPU] 0.00 (0%), 0.01 (1%), 0.01 (1%)
  /proc/stat:
    procs_running: 2   procs_blocked: 0    processes [Since boot]: 1132
    cpu [Utilization since boot]:
      us 1%, ni 0%, sys 2%, idle 94%, iowait 0%, irq 0%, sftirq 0%, steal 3%

It shows system info like hostname, distro details, kernel details, arch details, yum config, date & time, and load information.

Now, there are different switches you can use with the xsos command to get the details you need. Frequently used switches are given below –

 -a     show everything
 -b     show info from dmidecode
 -o     show hostname, distro, SELinux, kernel info, uptime, etc
 -k     inspect kdump configuration
 -c     show info from /proc/cpuinfo
 -m     show info from /proc/meminfo
 -d     show info from /proc/partitions
 -t     show info from dm-multipath
 -i     show info from ip addr

The above is a snippet from the help. The full list of switches can be obtained with xsos -h

Reading sosreport using xsos

To read a sosreport using the xsos tool, you first need to extract the sosreport tarball, then use the extracted directory path as the source for the xsos tool. The command format is –

xsos -<switch> <sosreport_dir_path>

For example, let's see CPU information read from a sosreport.

root@kerneltalks # xsos -c /var/tmp/sosreport-kerneltalks-20180108180100
CPU
  1 logical processors
  1 Intel Xeon CPU E5-2676 v3 @ 2.40GHz (flags: aes,constant_tsc,ht,lm,nx,pae,rdrand)

Here, -c instructs the xsos command to read CPU information from the sosreport saved in the /var/tmp/sosreport-kerneltalks-20180108180100 directory.

Another example below reads IP information from the sosreport.

root@kerneltalks # xsos -i /var/tmp/sosreport-kerneltalks-20180108180100
IP4
  Interface  Master IF  MAC Address        MTU     State  IPv4 Address
  =========  =========  =================  ======  =====  ==================
  lo         -          -                  65536   up     127.0.0.1/8
  eth0       -          02:e5:4c:f8:86:0e  9001    up     172.31.29.189/20

IP6
  Interface  Master IF  MAC Address        MTU     State  IPv6 Address                                 Scope
  =========  =========  =================  ======  =====  ===========================================  =====
  lo         -          -                  65536   up     ::1/128                                      host
  eth0       -          02:e5:4c:f8:86:0e  9001    up     fe80::e5:4cff:fef8:860e/64                   link

You can see the IP information fetched from the stored sosreport, displayed for your understanding.

You can use different switches to fetch whatever information you require from the sosreport. This way you need not go through each and every log file or directory extracted in the sosreport directory to get the information. Just use the relevant switch with the xsos utility and it will scan the sosreport directory and present your data!

Space is not released after deleting files in Linux?

Troubleshooting guide to reclaim space on disk after deleting files in Linux.

Space is not released after deleting files in Linux? Read this troubleshooting guide

One of the common issues Linux/Unix system users face is that disk space is not released even after files are deleted. Sysadmins sometimes try to recover disk space by deleting large files on a mount point, only to find that disk utilization stays the same even after deleting them. Sometimes application users move or delete large log files and still aren't able to reclaim space on the mount point.

In this troubleshooting guide, I will walk you through steps that will help you reclaim space on disk after deleting files. Here we will learn how to handle deleted-but-still-open files in Linux. Most of the time, files are deleted manually but the processes using them keep them open, and hence space is not reclaimed; df then shows incorrect space utilization.

Process stop/start/restart

To resolve this issue, you need to gracefully or forcefully end the processes using those deleted files. First, get a list of deleted files that are still held open by processes. Use the lsof (list open files) command with the +L1 switch for this, or you can directly grep for deleted in lsof output without the switch.

root@kerneltalks # lsof +L1
COMMAND PID USER   FD   TYPE DEVICE SIZE/OFF NLINK    NODE NAME
tuned   777 root    7u   REG  202,2     4096     0 8827610 /tmp/ffiJEo5nz (deleted)

root@kerneltalks # lsof | grep -i deleted
tuned   777 root    7u   REG  202,2     4096     0 8827610 /tmp/ffiJEo5nz (deleted)

lsof output can be read column-wise as below (matching the header line in the output above) –

  1. command
  2. PID
  3. user
  4. FD (file descriptor; 7u above means descriptor 7, opened read-write)
  5. type
  6. device
  7. size/offset
  8. link count (NLINK; 0 for deleted files)
  9. node
  10. name

Now, in the above output, note PID 777 and stop that process. If you cannot stop it, you can kill the process. In the case of application processes, refer to the application's guides on how to stop, start, and restart its processes. Restarting the process releases the lock the process held on the file, keeping it open. Once the related process is stopped/restarted, the space is released and you will see reduced utilization in df command output.
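For the tuned process seen above, for example (a sketch; adapt to however your process is managed):

root@kerneltalks # systemctl restart tuned    # graceful restart releases the open file handle
root@kerneltalks # kill 777                   # or end the process directly; kill -9 only as a last resort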

Clear from proc filesystem

Another way is to reclaim the space used by the file by de-allocating it through the /proc filesystem. As you are aware, every process in Linux has its allocations under the /proc filesystem, i.e. the process filesystem. Make sure the process/application is not impacted if you are flushing files (which are held open by the app) via the /proc filesystem.

You can find the file's allocation at the /proc/<pid>/fd/<fd_number> location, where the PID and fd_number come from the lsof output we saw above. If you check the type of this file, it's a symbolic link to your deleted file.

root@kerneltalks # file /proc/777/fd/7
/proc/777/fd/7: broken symbolic link to `/tmp/ffiJEo5nz (deleted)

So, in our case, we can truncate it using –

root@kerneltalks # > /proc/777/fd/7

That's it! Truncating it will regain the space lost to files you have already deleted.
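You can verify the result afterwards (a sketch; the deleted entry should no longer be listed, and df should reflect the reclaimed space):

root@kerneltalks # lsof +L1
root@kerneltalks # df -h /tmp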

How to add EBS disk on AWS Linux server

An article explaining the step-by-step procedure to add an EBS disk to an AWS Linux server, with screenshots.

EBS disk addition on AWS instance

Nowadays most servers run on cloud platforms like Amazon Web Services (AWS), Azure, etc. So daily administrative tasks on Linux servers from the AWS console are common items in a sysadmin's task list. In this article, we will walk you through one such task: adding a new disk to an AWS Linux server.

Adding a disk to an EC2 Linux server has two parts. The first part is done in the AWS EC2 console: creating the new volume and attaching it to the EC2 instance. The second part is done on the Linux server: identifying the newly added disk at the kernel level and preparing it for use.

Creating & attaching EBS volume

In this step, we will learn how to create EBS volume in the AWS console and how to attach EBS volume to AWS EC2 instance.

Log in to your EC2 console and navigate to Volumes, under the ELASTIC BLOCK STORE menu on the left-hand sidebar. You will be presented with the current list of volumes in your AWS account, like below –

Create volume in AWS

Now, click Create Volume button and you will be presented with the below screen.

Volume creation in AWS

Here you need to choose several parameters for your volume –

  1. Volume Type: This decides your volume's performance and, obviously, billing.
  2. Size: In GB. The minimum and maximum available sizes differ according to your volume type choice.
  3. IOPS: Performance parameter. Changes according to your volume type choice.
  4. Availability Zone: Make sure you select the same AZ as your EC2 instance.
  5. Throughput: Performance parameter. Only available for the ST1 & SC1 volume types.
  6. Snapshot ID: Select a snapshot if you want to create the new volume from an existing snapshot backup. For a fresh blank volume, leave it blank.
  7. Encryption: Check this if you want the volume to be encrypted. An extra layer of security.
  8. Tags: Add tags for management, reporting, and billing purposes.

After selecting the proper parameters as per your requirements, click the Create Volume button. If everything goes well, you will be presented with a 'Volume created successfully' dialogue along with the volume ID of your newly created volume. Click Close and you will be back at the volume list.

Now use the volume ID to identify your newly created volume in this list. It will be marked with an 'available' state. Select that volume and choose Attach Volume from the Actions menu.

Attach volume menu

Now you will be presented with an instance selection menu. Here you need to choose the instance to which this volume is to be attached. Remember, only instances in the same AZ as the volume are available for selection.

Attach volume instance selection

Once you select the instance, you can see the device name that will be presented at the kernel level in your instance, under the Device field. Here it's /dev/sdf.

Check out the note displayed here. It says: Note: Newer Linux kernels may rename your devices to /dev/xvdf through /dev/xvdp internally, even when the device name entered here (and shown in the details) is /dev/sdf through /dev/sdp.

In other words, newer Linux kernels may present your device as /dev/xvdf rather than /dev/sdf. This means this volume will be either /dev/sdf (on an old kernel) or /dev/xvdf (on a new kernel).

That's it. Once attached, you can see the volume state change from available to in-use.

Identifying volume on Linux instance

Now head back to your Linux server. Log in and check for the new volume in fdisk -l output.

root@kerneltalks # fdisk -l
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/xvda: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: 25D08425-708A-47D2-B907-1F0A3F769A90


#         Start          End    Size  Type            Name
 1         2048         4095      1M  BIOS boot parti
 2         4096     20971486     10G  Microsoft basic

Disk /dev/xvdf: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

As AWS mentioned, the new device name is reflected as /dev/xvdf in the kernel; you can see /dev/xvdf in the above output.

Now you need to prepare this disk using LVM (with pvcreate) or partition it with fdisk so that you can use it for creating mount points!
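A minimal LVM sketch is below; the volume group and logical volume names (datavg, datalv), the size, and the mount point are assumptions for illustration:

root@kerneltalks # pvcreate /dev/xvdf                    # initialize the disk for LVM
root@kerneltalks # vgcreate datavg /dev/xvdf             # create a volume group on it
root@kerneltalks # lvcreate -L 5G -n datalv datavg       # carve out a 5 GB logical volume
root@kerneltalks # mkfs.ext4 /dev/datavg/datalv          # create a filesystem on it
root@kerneltalks # mkdir -p /datastore
root@kerneltalks # mount /dev/datavg/datalv /datastore   # and mount it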

xfs file system commands with examples

Learn xfs file system commands to create, grow, and repair an xfs file system, along with command examples.

Learn xfs commands with examples

In our other article, we walked you through what xfs is, its features, etc. In this article, we will see some frequently used xfs administrative commands: how to create an xfs filesystem, how to grow it, how to repair it, and how to check it, along with command examples.

Create XFS filesystem

The mkfs.xfs command is used to create an xfs filesystem. Without any special switches, its output looks like the one below –

root@kerneltalks # mkfs.xfs /dev/xvdf
meta-data=/dev/xvdf              isize=512    agcount=4, agsize=1310720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=5242880, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Note: Once an XFS filesystem is created, it cannot be reduced in size. It can only be extended.

Resize XFS file system

In XFS, you can only extend the file system; you cannot reduce it. To grow an XFS file system, use xfs_growfs. You specify the new size of the mount point with the -D switch, which takes the new size as a number of file system blocks. If you don't supply the -D switch, xfs_growfs grows the filesystem to the maximum available limit on the device.
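For instance, growing a filesystem to its device's maximum size needs no size argument at all (a sketch; /shrikant is the mount point used later in this article):

root@kerneltalks # xfs_growfs /shrikant    # no -D: grow to the maximum available size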

root@kerneltalks # xfs_growfs /dev/xvdf -D 256
meta-data=/dev/xvdf              isize=512    agcount=4, agsize=720896 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=2883584, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data size 256 too small, old size is 2883584

In the above output, observe the last line. Since I supplied a new size smaller than the existing size, xfs_growfs didn't change the filesystem. This shows you cannot reduce an XFS file system; you can only extend it.

root@kerneltalks #  xfs_growfs /dev/xvdf -D 2883840
meta-data=/dev/xvdf              isize=512    agcount=4, agsize=720896 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=2883584, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 2883584 to 2883840

Now, I supplied a new size 1 MB larger and it successfully grew the file system.

1 MB block calculation:

The current filesystem has bsize=4096, i.e. a block size of 4 KB. We need 1 MB more, i.e. 256 blocks. So add 256 to the current number of blocks, i.e. 2883584, which gives 2883840. So I used 2883840 as the argument to the -D switch.
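The same arithmetic as a quick shell sketch (purely illustrative):

root@kerneltalks # echo $(( 1024 * 1024 / 4096 ))    # blocks needed for 1 MB at a 4 KB block size
256
root@kerneltalks # echo $(( 2883584 + 256 ))         # new total block count to pass to -D
2883840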

Repair XFS file system

A file system consistency check and repair of XFS can be performed using the xfs_repair command. You can run the command with the -n switch so that it does not modify anything on the filesystem; it only scans and reports which modifications would be made. Without the -n switch, it modifies the file system wherever necessary to make it clean.

Please note that you need to unmount the XFS filesystem before you can run checks on it. Otherwise, you will see the below error.

root@kerneltalks # xfs_repair -n /dev/xvdf
xfs_repair: /dev/xvdf contains a mounted filesystem
xfs_repair: /dev/xvdf contains a mounted and writable filesystem

fatal error -- couldn't initialize XFS library

Once the file system is unmounted successfully, you can run the command on it.

root@kerneltalks # xfs_repair -n /dev/xvdf
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

In the above output, you can observe that in each phase the command shows the possible modifications which could be made to make the file system healthy. If you want the command to make those modifications during the scan, run it without any switch.

root@kerneltalks # xfs_repair /dev/xvdf
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done

In the above output, you can observe the xfs_repair command executing the possible filesystem modifications as well, making it healthy.

Check XFS version and details

Checking the XFS file system version requires it to be unmounted. Run the xfs_db command on its device path, and once at the xfs_db prompt, run the version command.

The xfs_db command is normally used for examining an XFS file system. The version command is used to enable features in the file system; without any argument, it prints the current version and feature bits.

root@kerneltalks # xfs_db /dev/xvdf1
xfs_db: /dev/xvdf1 contains a mounted filesystem

fatal error -- couldn't initialize XFS library
root@kerneltalks # umount /shrikant
root@kerneltalks # xfs_db /dev/xvdf1
xfs_db> version
versionnum [0xb4a5+0x18a] = V5,NLINK,DIRV2,ALIGN,LOGV2,EXTFLG,MOREBITS,ATTR2,LAZYSBCOUNT,PROJID32BIT,CRC,FTYPE
xfs_db> quit

To view details of the XFS file system, like the block size and number of blocks (which help you calculate the new block count for growing the file system), use xfs_info without any switch.

root@kerneltalks # xfs_info /shrikant
meta-data=/dev/xvdf              isize=512    agcount=5, agsize=720896 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=2883840, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

It displays all the details that were shown while creating the XFS file system.

There are other XFS file system management commands which alter and manages its metadata. We will cover them in another article.

Date & time management in Linux using timedatectl command

Learn timezone management using the timedatectl command. An article explaining different uses of the timedatectl command along with examples.

Learn timedatectl command

In our previous article, we saw how to change the timezone of the Linux server using files or variables in the system. A few of our readers suggested the timedatectl command to achieve this task more easily. So I thought of writing a separate article on the timedatectl command explaining all its uses.

In this article, we will see how to display server time details and how to view, list, and change the timezone of the server using the timedatectl command. If you want to use date or time in a shell script or as a variable, we explained here how to format the date and time for use as a variable or in scripting.

timedatectl is the Time Date Control command! It is used to control the date and time of the server in Linux. To check your current system date and time details, run this command without any switch –

root@kerneltalks # timedatectl
      Local time: Wed 2017-11-15 15:58:33 UTC
  Universal time: Wed 2017-11-15 15:58:33 UTC
        RTC time: Wed 2017-11-15 15:58:32
       Time zone: UTC (UTC, +0000)
     NTP enabled: yes
NTP synchronized: yes
 RTC in local TZ: no
      DST active: n/a

Where,

  • Local time : Your system’s local date and time
  • Universal time : Current UTC time
  • RTC time : Current real-time clock (hardware clock) time
  • Time zone : Your system’s current timezone
  • NTP enabled : Whether NTP is enabled on the system or not
  • NTP synchronized : Whether NTP time is synced or not
  • RTC in local TZ : Whether the RTC is maintained in the configured system timezone. It is recommended to keep this off.
  • DST active : Whether daylight saving time is enabled or not
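
If you need just one of these values in a script, newer systemd versions also offer a show verb that prints properties in key=value form. A quick sketch, assuming your systemd version supports it:

root@kerneltalks # timedatectl show -p Timezone --value
UTC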

List and change timezone using timedatectl

timedatectl allows you to change the timezone of the system from a list of available timezones. To view the list of available timezones, use the command with the list-timezones switch.

root@kerneltalks # timedatectl list-timezones
Africa/Abidjan
Africa/Accra
Africa/Addis_Ababa
Africa/Algiers
Africa/Asmara
Africa/Bamako
----- output clipped-----

You will be presented with a list of available timezones in a paged manner. You can use any of these timezones to set on your local system. To change the timezone of your system, use the set-timezone switch.

root@kerneltalks # timedatectl set-timezone Australia/Sydney

The above command will change the server timezone to Australia/Sydney.
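
Since the timezone list is long, you can filter it with grep before setting one, and then confirm the change. For example:

root@kerneltalks # timedatectl list-timezones | grep -i sydney
Australia/Sydney
root@kerneltalks # timedatectl | grep "Time zone"
       Time zone: Australia/Sydney (AEDT, +1100)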


Change date and time using timedatectl

Most servers are configured with NTP these days. But if not, or if you want to change the date and time manually, you can use the set-time switch. Time should be supplied in the YYYY-MM-DD HH:MM:SS format.

root@kerneltalks # timedatectl set-time "2018-01-01 12:12:12"

If NTP is configured, then you might see the error Failed to set time: Automatic time synchronization is enabled when attempting to change the system time manually.
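
In that case, one approach is to disable NTP synchronization first (using the set-ntp switch covered in the next section), set the time, and then re-enable it. A minimal sketch:

root@kerneltalks # timedatectl set-ntp 0
root@kerneltalks # timedatectl set-time "2018-01-01 12:12:12"
root@kerneltalks # timedatectl set-ntp 1

Keep in mind that re-enabling NTP will sync the clock back to the NTP source, overwriting the manually set time.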


Enable/Disable RTC and NTP using timedatectl

You can enable or disable RTC (Real Time Clock) and NTP using timedatectl. For RTC use the set-local-rtc switch, and for NTP use the set-ntp switch. Both accept 1 (to enable) and 0 (to disable) as values.

root@kerneltalks # timedatectl set-ntp 0
root@kerneltalks # timedatectl set-local-rtc 0

Please note that enabling NTP here does not take you through NTP configuration steps. That has to be done separately. This only controls whether the system should sync time using the configured NTP or not.
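
On many distributions, the set-ntp switch simply toggles the systemd-timesyncd service. A quick way to verify what it did, assuming timesyncd is the time sync service in use on your system:

root@kerneltalks # timedatectl set-ntp 1
root@kerneltalks # systemctl status systemd-timesyncd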


Manage date and time of other machines

You can use the timedatectl command to manage the date and time of local containers or remote machines. To manage the time of a local container, use the -M switch, whereas to connect to a remote host and manage its time, use the -H switch.

The -H switch is short for --host=[user@]hostname, and the -M switch is short for --machine=container_name.
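
For example, to view the time status of a remote host over SSH, or of a local container (both names below are hypothetical):

root@kerneltalks # timedatectl -H root@remote-server status
root@kerneltalks # timedatectl -M mycontainer status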

Those are all the switches that are mainly useful in day-to-day operations. There are a few more that can be referred to in its man page.

How to troubleshoot RPC: Port mapper failure – Timed out error

Learn how to troubleshoot the RPC: Port mapper failure – Timed out error on an NFS client. This will help you resolve NFS mounts timing out.

RPC: Port mapper failure – Timed out

In this article, we are going to discuss the troubleshooting of one of the NFS errors you see on NFS clients. This error can be seen while running NFS-related commands like the ones below:

root@kerneltalks # showmount -e mynfsserver
clnt_create: RPC: Port mapper failure - Timed out

root@kerneltalks # rpcinfo -p  mynfsserver
mynfsserver: RPC: Port mapper failure - Timed out

Normally, when you see this error you are not able to mount the NFS share either. You will see the mount.nfs: Connection timed out error when you try to mount the NFS share.

root@kerneltalks # mount mynfsserver:/data /nfs_data
mount.nfs: Connection timed out

Troubleshooting steps

Follow the troubleshooting steps below to fix the RPC: Port mapper failure - Timed out error.

Check NFS services on NFS server

First, check if the NFS services are running smoothly on the NFS server.

root@mynfsserver # service nfs-server status
nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled)
  Drop-In: /usr/lib/systemd/system/nfs-server.service.d
           └─nfsserver.conf
        /run/systemd/generator/nfs-server.service.d
           └─order-with-mounts.conf
   Active: active (exited) since Tue 2017-11-07 15:58:08 BRST; 6 days ago
 Main PID: 1586 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/nfs-server.service

The above output is from a SUSE Linux server. The output may look different in different Linux distros. If the service is not running or is hung, you may need to restart the NFS services.
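
If a restart is needed, a minimal sketch on a systemd-based distro (the unit name may differ between distributions):

root@mynfsserver # systemctl restart nfs-server
root@mynfsserver # systemctl status nfs-server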

Check connectivity between NFS server and client

Make sure you are able to reach the NFS server from your client. Check using ping, and test connectivity to NFS ports like 111 and 2049 over both TCP and UDP protocols.

root@kerneltalks #  ping mynfsserver
PING lasnfsp01v.la.holcim.net (10.186.1.22) 56(84) bytes of data.
64 bytes from 10.186.1.22: icmp_seq=1 ttl=56 time=3.92 ms
64 bytes from 10.186.1.22: icmp_seq=2 ttl=56 time=3.74 ms
64 bytes from 10.186.1.22: icmp_seq=3 ttl=56 time=3.82 ms
^C
--- mynfsserver ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 3.748/3.830/3.920/0.086 ms

root@kerneltalks # telnet 10.186.1.22 2049
Trying 10.186.1.22...
Connected to 10.186.1.22.
Escape character is '^]'.
root@kerneltalks # nc -v -u mynfsserver 111
Connection to mynfsserver 111 port [udp/sunrpc] succeeded!
^C
root@kerneltalks # nc -v -u mynfsserver 2049
Connection to mynfsserver 2049 port [udp/nfs] succeeded!
^C
root@kerneltalks # nc -v mynfsserver 111
Connection to mynfsserver 111 port [tcp/sunrpc] succeeded!
^C
root@kerneltalks # nc -v mynfsserver 2049
Connection to mynfsserver 2049 port [tcp/nfs] succeeded!
^C

Check if RPC info is readable from the client

Run the below command to check if you can read the RPC information of the NFS server from the client machine.

root@kerneltalks # rpcinfo -p 10.186.1.22
   program vers proto   port  service
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    2   udp  20048  mountd
    100005    2   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100024    1   udp   4000  status
    100024    1   tcp   4000  status
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    3   udp   2049  nfs_acl
    100021    1   udp   4001  nlockmgr
    100021    3   udp   4001  nlockmgr
    100021    4   udp   4001  nlockmgr
    100021    1   tcp   4001  nlockmgr
    100021    3   tcp   4001  nlockmgr
    100021    4   tcp   4001  nlockmgr

If you have a connectivity issue, then you will see the mynfsserver: RPC: Port mapper failure - Timed out error here.

Check if you can read exported share info from client

Use the below command to check if you can read exported share info from the client.

root@kerneltalks # showmount -e  10.186.1.22
Export list for  10.186.1.22:
/data *(rw,sync,no_root_squash)

Check if antivirus kernel modules are blocking NFS

Lastly, if you have SEP 14 (Symantec Endpoint Protection) antivirus installed on your server, then you need to uninstall it. For some mysterious reason, SEP 14 holds the nfsd module and crashes everything related to NFS. You may see the below messages in the dmesg of the NFS server, which confirm that SEP kernel modules are interfering with NFS.

symev_custom_4_12_14_95_48_default_x86_64: loading out-of-tree module taints kernel.
symev_custom_4_12_14_95_48_default_x86_64: module verification failed: signature and/or required key missing - tainting kernel
symap_custom_4_12_14_95_48_default_x86_64: module license 'Proprietary' taints kernel.
symev: hold nfsd module
BUG: unable to handle kernel paging request at ffffffffc068f8b4
IP: svc_tcp_accept+0x1a6/0x320 [sunrpc]

You need a reboot after uninstalling the antivirus, since its kernel modules won’t be freed by the uninstall alone. You can’t even remove them from the running kernel the way you normally remove a module, as the attempts below show. So a reboot is the way to go after uninstalling the antivirus.

root@kerneltalks # lsmod |grep sym
symev_custom_4_12_14_95_48_default_x86_64    98304  1

root@kerneltalks # modprobe -r symev_custom_4_12_14_95_48_default_x86_64
modprobe: FATAL: Module symev_custom_4_12_14_95_48_default_x86_64 is in use.

root@kerneltalks # rmmod  symev_custom_4_12_14_95_48_default_x86_64
rmmod: ERROR: Module symev_custom_4_12_14_95_48_default_x86_64 is in use

root@kerneltalks # lsmod | grep sym
symev_custom_4_12_14_95_48_default_x86_64    98304  1

root@kerneltalks # modinfo symev_custom_4_12_14_95_48_default_x86_64
filename:       /lib/modules/4.12.14-95.48-default/kernel/drivers/char/symev-custom-4-12-14-95-48-default-x86-64.ko
modinfo: ERROR: could not get modinfo from 'symev_custom_4_12_14_95_48_default_x86_64': No such file or directory

Once all the above commands provide the expected output, you will be able to mount your share without any issues, and the error will be resolved.


How to resolve connectivity issues

To resolve connectivity issues between the two servers, first check at the network end that the servers are able to communicate over the network. If you are running on AWS Linux EC2 instances, then you might need to check security groups to allow the proper traffic.
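
For example, on AWS you can allow NFS traffic with the AWS CLI. A sketch, assuming a hypothetical security group ID and client CIDR:

root@kerneltalks # aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 2049 --cidr 10.186.0.0/16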

On the OS front, you may need to check the iptables settings and allow NFS ports. SELinux is also an area to explore if you have a customized SELinux policy running on your server; by default, SELinux normally allows NFS traffic.
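
A minimal sketch of iptables rules allowing NFS-related ports on the server (port numbers taken from the rpcinfo output above; adjust them to your environment):

root@mynfsserver # iptables -A INPUT -p tcp -m multiport --dports 111,2049,20048 -j ACCEPT
root@mynfsserver # iptables -A INPUT -p udp -m multiport --dports 111,2049,20048 -j ACCEPT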