This article explains how to resolve the /bin/bash^M: bad interpreter: No such file or directory error on a Unix or Linux server.
Issue:
Sometimes we see the below error while running scripts:
root@kerneltalks # ./test_script.sh
-bash: ./test_script.sh: /bin/bash^M: bad interpreter: No such file or directory
This issue occurs with files that were created or updated on Windows and later copied over to a Unix or Linux machine for execution, because Windows (DOS) and Linux/Unix interpret line feeds and carriage returns differently. Windows terminates lines with a carriage return plus line feed, and the extra carriage return is interpreted as the illegal character ^M on *nix systems. Hence you can see ^M in the above error at the end of the very first line of the script, #!/bin/bash, which invokes the bash shell.
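You can confirm the file has Windows line endings with the file command. A quick illustrative check (the exact output wording may vary by distribution):
root@kerneltalks # file test_script.sh
test_script.sh: Bourne-Again shell script, ASCII text executable, with CRLF line terminators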
To resolve this issue, you need to convert the DOS file into a Unix one. You can either rewrite the whole file using a text editor on the Linux/Unix system, or use a tool like dos2unix or a native command like sed.
Solution:
Use the dos2unix utility, which comes pre-installed on almost all distributions nowadays.
There are different encodings you can choose from to convert your file. -ascii is the default conversion mode and it only converts line breaks. I used -iso here, which worked fine for me.
The syntax is pretty simple: you give the encoding format along with the source and destination filenames.
root@kerneltalks # dos2unix -iso -n test_script.sh script_new.sh
dos2unix: active code page: 0
dos2unix: using code page 437.
dos2unix: converting file test_script.sh to file script_new.sh in Unix format ...
The -n flag writes the conversion to a new file, so the original file stays intact. If you are OK with editing the original file directly, then you can try the below command:
root@kerneltalks # dos2unix -k -o test_script.sh
dos2unix: converting file test_script.sh to Unix format ...
Here, -k keeps the file's timestamp intact and -o converts the file in place, overwriting the same file.
Or
You can use the stream editor sed to globally search and replace the carriage returns:
root@kerneltalks # sed -i -e 's/\r$//' test_script.sh
where -i edits the source file in place, overwriting the same file, and -e supplies the sed expression to run on it. The expression s/\r$// deletes the carriage return at the end of each line.
That’s it. You repaired your file from Windows to run fine on the Linux system! Go ahead… execute…!
Learn 8 different ways to generate a random password in Linux using Linux native commands or third-party utilities.
In this article, we will walk you through various ways to generate a random password in the Linux terminal. A few of them use native Linux commands and others use third-party tools or utilities that can easily be installed on a Linux machine. Here we look at native commands like openssl, dd, md5sum, tr, and urandom, and third-party tools like mkpasswd, randpw, pwgen, spw, gpg, xkcdpass, diceware, revelation, keepassx, and passwordmaker.
These are actually ways to get a random alphanumeric string that can be utilized as a password. Random passwords can be used for new users so that there is uniqueness no matter how large your user base is. Without any further delay, let's jump into those different ways to generate a random password in Linux.
Generate password using mkpasswd utility
mkpasswd comes with the expect package on RHEL-based systems. On Debian-based systems, mkpasswd comes with the whois package. Trying to install a package named mkpasswd directly results in an error:
No package mkpasswd available. on RHEL systems, and E: Unable to locate package mkpasswd on Debian-based ones.
So install the parent packages mentioned above and you are good to go.
Run mkpasswd to get passwords
root@kerneltalks# mkpasswd << on RHEL
zt*hGW65c
root@kerneltalks# mkpasswd teststring << on Ubuntu
XnlrKxYOJ3vik
The command behaves differently on different systems, so work accordingly. There are many switches to control the length and other parameters; you can explore them in the man pages.
Generate password using OpenSSL
OpenSSL comes built in with almost all Linux distributions. We can use its rand function to generate an alphanumeric string that can be used as a password.
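The command itself isn't shown above; a typical invocation looks like this (the byte count 10 is an arbitrary choice, and the output line is illustrative):
root@kerneltalks # openssl rand -base64 10
nU9LlYO5fBLv4A==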
Here, we are using base64 encoding with the rand function; the last number is the argument telling it how many random bytes to generate before base64 encoding.
Generate password using urandom
The device file /dev/urandom is another source of random characters. We use the tr command to filter its stream and trim the output to get a random string to use as a password.
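A minimal sketch of that pipeline (the character set and the length of 12 are arbitrary choices; output illustrative):
root@kerneltalks # tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 12; echo
3Ck9x7RngDwE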
We can even use the /dev/urandom device along with the dd command to get a string of random characters.
root@kerneltalks# dd if=/dev/urandom bs=1 count=15|base64 -w 0
15+0 records in
15+0 records out
15 bytes (15 B) copied, 5.5484e-05 s, 270 kB/s
QMsbe2XbrqAc2NmXp8D0
We need to pass the output through base64 encoding to make it human-readable. You can play with the count value to get the desired length. For much cleaner output, redirect stderr (file descriptor 2) to /dev/null. The clean command is:
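A sketch of that cleaned-up command (the output line is illustrative):
root@kerneltalks # dd if=/dev/urandom bs=1 count=15 2>/dev/null | base64 -w 0
yPh7zW0mPmirOZLqlLGR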
Generate password using md5sum
Another way to get a string of random characters that can be used as a password is to calculate an MD5 checksum! As you know, a checksum value indeed looks like random characters grouped together, so we can use it as a password. Make sure you use something variable as the source so that you get a different checksum every time you run the command, for example date! The date command always yields changing output.
root@kerneltalks # date |md5sum
4d8ce5c42073c7e9ca4aeffd3d157102 -
Here we passed the date command output to md5sum and got the checksum hash! You can use the cut command to trim the output to the desired length.
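For example, to keep only the first 12 characters (reusing the hash from above for illustration):
root@kerneltalks # date | md5sum | cut -c1-12
4d8ce5c42073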
Generate password using pwgen
The pwgen package comes from repositories like EPEL, so you may not find it in the standard distribution repo. pwgen is focused on generating passwords that are pronounceable but are not dictionary words or plain English. Install the package and run the pwgen command. Boom!
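A minimal run (the length 12 and count 3 arguments are arbitrary choices; the passwords shown are illustrative):
root@kerneltalks # pwgen 12 3
iWaht2eiVaiz Ohmoh3ahchae eiR8ooQuavae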
You will be presented with a list of passwords on your terminal! What else do you want? OK, you still want to explore: pwgen comes with many custom options, which you can refer to in the man page.
Generate password using gpg tool
GPG is an OpenPGP encryption and signing tool. Mostly, the gpg tool comes pre-installed (at least it is on my RHEL 7). If not, you can look for the gpg or gpg2 package and install it.
Use the below command to generate a password with the gpg tool.
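A sketch of that command (the output line is illustrative):
root@kerneltalks # gpg --gen-random --armor 1 12
S9fgUe8fLL1HdUG2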
Here we are passing the generate-random-bytes switch (--gen-random) with quality 1 (first argument) and a count of 12 bytes (second argument). The --armor switch ensures the output is base64 encoded.
Generate password using xkcdpass
The famous geek humor website xkcd published a very interesting comic about memorable but still complex passwords. The xkcdpass tool took inspiration from that comic and did its work! It's a Python package, available from PyPI, Python's official package index.
All installation and usage instructions are mentioned on that page. Here are the install steps from my test RHEL server for your reference.
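One way to install and run it is via pip; a minimal sketch (the six words in the sample output are illustrative):
root@kerneltalks # pip install xkcdpass
root@kerneltalks # xkcdpass
pinball slashing doorstop sworn mama spotless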
You can use these words as input to other commands like md5sum to get a random password (like below), or you can even use the Nth letter of each word to form your password!
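For example (the checksum shown is illustrative):
root@kerneltalks # xkcdpass | md5sum
f1d2a81ed1e39b1fbda3d61fbbcfe6ba  -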
Or you can even use all those words together as one long password, which is easy for a user to remember and yet very hard to crack using a computer program.
A beginner's guide to the dd command, along with a list of examples. The article includes outputs for the command examples too.
A beginner's guide to the dd command! In this article, we will learn about the dd (often read as "data duplicator") command and its various uses, along with examples.
The dd command is mainly used to convert and copy files in Linux and Unix systems. The dd command syntax is:
dd <options>
It has a very large list of options that can be used as per your requirement. The most commonly used options are:
bs=xxx Read and write xxx bytes at a time
count=n Copy only n blocks.
if=FILE Read from FILE
of=FILE Output to FILE
Let me walk you through examples to understand dd command usage.
Backup complete disk using dd
For copying a whole disk to another disk, dd is very helpful. You just need to give it the disk to read from and the disk to write to. Check the below example:
root@kerneltalks # dd if=/dev/xvdf of=/dev/xvdg
4194304+0 records in
4194304+0 records out
2147483648 bytes (2.1 GB) copied, 181.495 s, 11.8 MB/s
In the above output, you can see disk /dev/xvdf was copied to /dev/xvdg. The command shows how much data it copied, and at what speed.
Identify disk physically using dd
When there are a bunch of disks attached to the server and you want to trace a particular disk physically, the dd command might be helpful. Run dd to read from the disk and write into the void (/dev/null). This keeps the disk's activity LED lit solid (physically, on the disk).
root@kerneltalks # dd if=/dev/xvdf of=/dev/null
Normally, all the other disks blink their activity LEDs, whereas this one will keep its LED solid. Easy to spot the disk then! Be careful with if and of: if you swap their arguments, you will end up wiping your hard disk clean.
Create image of hard disk using dd
You can create an image of a hard disk using dd. It's the same as the disk backup we saw in the first example, except here the output file of is a data file on a mount point rather than another disk.
root@kerneltalks # dd if=/dev/xvdf of=/xvdf_disk.img
4194304+0 records in
4194304+0 records out
2147483648 bytes (2.1 GB) copied, 32.9723 s, 65.1 MB/s
root@kerneltalks # ls -lh /xvdf_disk.img
-rw-r--r--. 1 root root 2.0G Jan 15 14:36 /xvdf_disk.img
In the above output, we created an image of disk /dev/xvdf in a file named xvdf_disk.img located in /.
A compressed image can be created as well, using gzip along with dd:
root@kerneltalks # dd if=/dev/xvdf |gzip -c >/xvdf_disk.img.gz
4194304+0 records in
4194304+0 records out
2147483648 bytes (2.1 GB) copied, 32.6262 s, 65.8 MB/s
root@kerneltalks # ls -lh /xvdf_disk.img.gz
-rw-r--r--. 1 root root 2.0M Jan 15 14:31 /xvdf_disk.img.gz
You can observe that the zipped image is much smaller in size.
Restore image of hard disk using dd
Yup, the next question will be: how do we restore this hard disk image onto another disk? The answer is simply to use the image as the source and another disk as the destination.
root@kerneltalks # dd if=/xvdf_disk.img of=/dev/xvdg
4194304+0 records in
4194304+0 records out
2147483648 bytes (2.1 GB) copied, 175.748 s, 12.2 MB/s
Make sure your disk image and target disk have the same size.
A compressed hard disk image can be restored using dd along with the gzip command, as below:
root@kerneltalks # gzip -dc /xvdf_disk.img.gz | dd of=/dev/xvdg
4194304+0 records in
4194304+0 records out
2147483648 bytes (2.1 GB) copied, 177.272 s, 12.1 MB/s
Create ISO from CD or DVD using dd
Another popular use of the dd command is creating an optical disk image file, i.e. an ISO file, from a CD or DVD. You need to first load the CD or DVD on your server, then use its device file as the source and a file on a mount point as the destination.
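A typical command, assuming the optical drive appears as /dev/cdrom and writing the image to /mydisk.iso (both names illustrative):
root@kerneltalks # dd if=/dev/cdrom of=/mydisk.iso bs=4096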
Here, we specified a 4096-byte block size using the bs option. Make sure no other application or user is accessing the CD or DVD when running this command. You can use the fuser command to check if someone is accessing it.
The next question will be how to mount an ISO file in Linux. Well, we already have an article on that 🙂
Creating file of definite size with zero data using dd
Many times, sysadmins or developers need files with junk data or zero data for testing. Using dd, you can create such files of a definite size.
Let’s say you want to create a file of 1GB then you define block size of 1M and count of 1024. So 1M x 1024 = 1024M = 1G.
root@kerneltalks # dd if=/dev/zero of=/testfile bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 13.0623 s, 82.2 MB/s
root@kerneltalks # ls -lh /testfile
-rw-r--r--. 1 root root 1.0G Jan 15 14:29 /testfile
In the above output, you can see our math worked perfectly: a 1 GB file was created by our command.
Changing file uppercase to lowercase using dd
All the examples we have seen so far copy data using the dd command. Now, this example converts data using dd. You can change file data from all uppercase to lowercase and vice versa.
# cat /root/testdata
This is test data file on kerneltalks.com test server.
# dd if=/root/testdata of=/root/testdata_upper conv=ucase
0+1 records in
0+1 records out
55 bytes (55 B) copied, 0.000138394 s, 397 kB/s
# cat /root/testdata_upper
THIS IS TEST DATA FILE ON KERNELTALKS.COM TEST SERVER.
You can see all the data in the file was converted to uppercase. To change data from uppercase to lowercase, use the option conv=lcase.
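For example, converting the uppercase file from above back to lowercase (the output filename is an arbitrary choice):
# dd if=/root/testdata_upper of=/root/testdata_lower conv=lcase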
If you know another interesting use of the dd command, let us know in the comments down below.
A step-by-step procedure to add CloudFront CDN to a WordPress blog with an SSL certificate. Screenshots included for better understanding.
In this article, we will walk you through the steps to configure AWS CloudFront CDN for a WordPress blog using the W3TC (W3 Total Cache) plugin. We will use a basic setup in AWS CloudFront, so we won't be using IAM authentication and accesses in our configuration.
See the CloudFront content delivery network pricing page on AWS for costs.
We assume the below prerequisites are completed before moving on with this tutorial:
You are logged in to your blog's WordPress console as Admin
You have the W3TC plugin installed in your WordPress blog
You are logged in to your AWS account
You have access to change the zone files for your domain (required to have fancy CDN CNAMEs)
Without further delay, let's jump into the step-by-step procedure to add CloudFront CDN to a WordPress blog, with screenshots.
AWS certificate manager
You can skip this step if your blog is not https enabled.
In this step, we will import your SSL certificate into AWS; it is needed by CloudFront distributions when you use a fancy URL (like c1.kerneltalks.com) for distributions instead of the default system-generated XXXXXX.cloudfront.net.
You can skip this step if you want to buy an SSL certificate from Amazon and don't want to use your own. You can also skip it if you are OK with using a system-generated distribution name like askdhue.cloudfront.net and don't want a custom CNAME like c1.kerneltalks.com.
You can buy an SSL certificate from many authorized bodies, or you can get an open source Let's Encrypt SSL certificate for free.
Log in to the AWS Certificate Manager console. Make sure you use the region US East (N. Virginia), since only certificates stored in this region are available to select while creating CloudFront distributions. Click Get Started, and on the next screen click Import a certificate. You will be presented with the below screen.
Fill in your certificate details in the fields above. Certificate body will have your SSL certificate content, then private key, and finally certificate chain (if any). Click Review and import.
The details you filled in will be verified, and the information fetched from them will be shown on screen for your review, like below.
If everything looks good click Import. Your certificate will be imported and details will be shown to you in the dashboard.
Now we have our SSL certificate ready in AWS, to be used with CloudFront distribution custom URLs like c1.kerneltalks.com. Let's move on to creating distributions.
AWS Cloudfront configuration
Log in to the AWS CloudFront console using your Amazon account. On the left-hand menu bar, make sure you have Distributions selected. Click the Create Distribution button. Now you will be presented with wizard step 1: selecting the delivery method.
Click Get Started under the Web delivery method. You will see the below screen, where you need to fill in details.
Below are a few fields you need to select or fill:
Origin Domain Name: Enter your blog's naked domain name, e.g. kerneltalks.com
Origin ID: Keep the autogenerated value if you like it, or name it anything you want.
Origin Protocol Policy: Select HTTPS Only.
Viewer Protocol Policy: Redirect HTTP to HTTPS
Alternate Domain Names: Enter the fancy CDN name you want, like c1.kerneltalks.com
SSL Certificate -> Custom SSL Certificate: You should see your imported certificate from the previous step here.
There are many other options which you can toggle based on your requirements. The ones listed above are the most basic, and are what a normal CDN needs to work. Once done, click Create Distribution.
You will be redirected back to the distributions dashboard, where you can see your created distribution with its status as In Progress. This means AWS is now fetching all content files, like media, CSS, and JS, from your hosting server to its edge servers. In other words, your CDN zone is being deployed. Once the sync completes, its state changes to Deployed. This process takes time depending on how big your blog is.
While your distribution is being deployed, you can head back to your zone file editor (probably in cPanel) and add an entry for the CNAME you mentioned in the distribution settings (e.g. c1.kerneltalks.com).
CNAME entry
You can skip this step if you are not using a custom CNAME for your CloudFront distribution.
Go to the zone file editor for your domain, add a CNAME entry for the custom name you used above (here c1.kerneltalks.com), and point it to the CloudFront URL of your distribution.
The CloudFront URL of your distribution can be found under Domain Name in the distributions dashboard screenshot above. It's generally in the format XXXXXXX.cloudfront.net.
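In a BIND-style zone file, the record would look something like this (the distribution hostname here is a made-up placeholder):
c1.kerneltalks.com.    IN    CNAME    d1234abcd5678.cloudfront.net.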
This change will take a few minutes to a few hours to propagate through the internet. You can check whether it's live by pinging your custom domain name; you should receive a reply from a cloudfront.net host.
That's it. You are done with your AWS configuration. Now you need to add this custom CNAME or cloudfront.net name to the W3TC settings in your WordPress admin panel.
W3TC settings
Log in to the WordPress admin panel. Go to W3TC General Settings and enable CDN as per the below screenshot.
Go to the W3TC CDN settings.
Scroll down to Configuration: Objects. Set SSL support to Enabled and add your CNAME under Replace site's hostname with:
Once done click on Test Mirror and you should see it passed. Check the below screenshot for better understanding.
If your test does not pass, wait for some time. Make sure you can ping the CNAME as explained above and that your CloudFront distribution is completely deployed.
Check blog for Cloudfront CDN
That's it. Your blog is serving files from CloudFront CDN now! You can open the website in a new browser after clearing cookies. View the website's source code and look for asset URLs: you will see that your CSS, JS, and media file URLs are no longer on your naked domain (here kerneltalks.com) but on the CDN (here c1.kerneltalks.com)!
To serve files in parallel, you can create more than one distribution (ideally 4) in the same way and add their CNAMEs in the W3TC settings.
The Code & Revolution OS! Documentary films on Linux released in 2001.
Yup, you read it right. The Code & Revolution OS! Those are documentary films released in 2001. The Code covers the birth and journey of Linux, and Revolution OS covers the 20-year journey of Linux, GNU, and the open source world.
Have you watched them?
The Code (Wiki Page) is a 58-minute documentary featuring the creator of Linux, Linus Torvalds, and some of the programmers who contributed to Linux. And yeah, there is a piece of the interview where Linus talks about developers from India! Since I am from India, I feel like mentioning it here 🙂
Revolution OS (Wiki Page) is an 85-minute documentary spanning the 20-year journey of the free software movement through Linux, GNU, and Open Source.
Documentary films are available on YouTube along with subtitles.
Learn how to remount a file system in read-write mode under Linux. The article also explains how to check whether a file system is read-only, and how to run a file system check.
Most of the time, on newly created file systems or NFS filesystems, we see an error like below:
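A typical example of the error (the file name here is illustrative):
root@kerneltalks # touch /datastore/testfile
touch: cannot touch '/datastore/testfile': Read-only file system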
This is because the file system is mounted read-only. In such a scenario, you have to mount it in read-write mode. Before that, we will see how to check whether the file system is mounted read-only, and then we will get to how to remount it as a read-write filesystem.
How to check if file system is read only
To confirm the file system is mounted in read-only mode, use the below command:
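For the /datastore mount point used in this article, the check looks like this (output kept consistent with the mount -v output shown below):
root@kerneltalks # cat /proc/mounts | grep datastore
/dev/xvdf /datastore ext3 ro,relatime,seclabel,data=ordered 0 0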
Grep your mount point in cat /proc/mounts and observe the fourth column, which shows all the options the file system is mounted with. Here, ro denotes the file system is mounted read-only.
You can also get these details using the mount -v command:
root@kerneltalks # mount -v |grep datastore
/dev/xvdf on /datastore type ext3 (ro,relatime,seclabel,data=ordered)
In this output, the file system options are listed in parentheses in the last column.
Re-mount file system in read-write mode
To remount the file system in read-write mode, use the below command:
root@kerneltalks # mount -o remount,rw /datastore
root@kerneltalks # mount -v |grep datastore
/dev/xvdf on /datastore type ext3 (rw,relatime,seclabel,data=ordered)
Observe that after remounting, the option ro changed to rw. The file system is now mounted read-write, and you can write files to it.
Note: It is recommended to fsck the file system before remounting it.
You can check the file system by running fsck on its volume.
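A minimal sketch, assuming the file system can be unmounted first (device and mount point from the example above):
root@kerneltalks # umount /datastore
root@kerneltalks # fsck /dev/xvdf
root@kerneltalks # mount /dev/xvdf /datastore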
Sometimes the corrections that need to be made on a file system require a reboot, to make sure no processes are accessing the file system.
Learn how to list YUM repositories in RHEL / CentOS. This how-to guide includes various commands along with examples to check details about repositories and their packages in Red Hat systems.
YUM (Yellow dog Updater Modified) is a package management tool in Red Hat Linux and its variants like CentOS. In this article, we will walk you through several commands which will be useful for you to get details of YUM repositories in RHEL.
Without any further delay, let’s see a list of commands and their example outputs.
List YUM repositories
Run the command yum repolist and it will show you all repositories configured under YUM and enabled for use on that server. To view disabled repositories or all repositories, refer to the later sections in this article.
[root@kerneltalks ~]# yum repolist
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
repo id repo name status
*epel/x86_64 Extra Packages for Enterprise Linux 6 - x86_64 12,448
rhui-REGION-client-config-server-7/x86_64 Red Hat Update Infrastructure 2.0 Client Configuration Server 7 2
rhui-REGION-rhel-server-releases/7Server/x86_64 Red Hat Enterprise Linux Server 7 (RPMs) 17,881
rhui-REGION-rhel-server-rh-common/7Server/x86_64 Red Hat Enterprise Linux Server 7 RH Common (RPMs) 231
rsawaroha rsaw aroha rpms for Fedora/RHEL6+ 19
repolist: 30,581
In the above output, you can see the repo list with repo id, repo name, and status. You can see we have the EPEL repo configured (repo id epel/x86_64) on the server. Also, the last repo, rsawaroha, is one we added for installing the xsos tool used to read sosreports.
What is the status column in yum repolist?
The last column of the yum repolist output is status, which has numbers in it. You might be wondering: what is the meaning of the status numbers in yum repolist?
They are the number of packages included in the respective repository! If you see a number like XXXX+N, i.e. followed by a + sign and another number, it means the repository has XXXX packages available for installation and N packages excluded.
List details of YUM repositories
Each repository's details, like name, id, number of packages available, total size, link details, timestamps, etc., can be viewed using verbose mode. Use the -v switch with yum repolist to view repository details.
[root@kerneltalks ~]# yum -v repolist
Not loading "rhnplugin" plugin, as it is disabled
Loading "amazon-id" plugin
Not loading "product-id" plugin, as it is disabled
Loading "rhui-lb" plugin
Loading "search-disabled-repos" plugin
Not loading "subscription-manager" plugin, as it is disabled
Config time: 0.048
Yum version: 3.4.3
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/supplementary/os
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/extras/os
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/rh-common/debug
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/supplementary/debug
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/rhscl/1/debug
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/rhui-client-config/rhel/server/7/x86_64/os
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/rhscl/1/source/SRPMS
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/rhscl/1/os
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/source/SRPMS
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/extras/debug
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/optional/source/SRPMS
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/optional/debug
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/supplementary/source/SRPMS
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/debug
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/optional/os
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/extras/source/SRPMS
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/rh-common/os
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/rh-common/source/SRPMS
mirrorlist: https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/os
Setting up Package Sacks
pkgsack time: 0.009
Repo-id : epel/x86_64
Repo-name : Extra Packages for Enterprise Linux 6 - x86_64
Repo-revision: 1515267354
Repo-updated : Sat Jan 6 19:58:06 2018
Repo-pkgs : 12,448
Repo-size : 11 G
Repo-metalink: https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=x86_64
Updated : Mon Jan 8 15:45:02 2018
Repo-baseurl : http://mirror.sjc02.svwh.net/fedora-epel/6/x86_64/ (43 more)
Repo-expire : 21,600 second(s) (last: Mon Jan 8 19:23:12 2018)
Filter : read-only:present
Repo-filename: /etc/yum.repos.d/epel.repo
Repo-id : rhui-REGION-client-config-server-7/x86_64
Repo-name : Red Hat Update Infrastructure 2.0 Client Configuration Server 7
Repo-revision: 1509723523
Repo-updated : Fri Nov 3 15:38:43 2017
Repo-pkgs : 2
Repo-size : 106 k
Repo-mirrors : https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/rhui-client-config/rhel/server/7/x86_64/os
Repo-baseurl : https://rhui2-cds02.ap-south-1.aws.ce.redhat.com/pulp/repos//rhui-client-config/rhel/server/7/x86_64/os/ (1 more)
Repo-expire : 21,600 second(s) (last: Mon Jan 8 19:23:13 2018)
Filter : read-only:present
Repo-filename: /etc/yum.repos.d/redhat-rhui-client-config.repo
Repo-id : rhui-REGION-rhel-server-releases/7Server/x86_64
Repo-name : Red Hat Enterprise Linux Server 7 (RPMs)
Repo-revision: 1515106250
Repo-updated : Thu Jan 4 22:50:49 2018
Repo-pkgs : 17,881
Repo-size : 24 G
Repo-mirrors : https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/os
Repo-baseurl : https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/repos//content/dist/rhel/rhui/server/7/7Server/x86_64/os/ (1 more)
Repo-expire : 21,600 second(s) (last: Mon Jan 8 19:23:13 2018)
Filter : read-only:present
Repo-filename: /etc/yum.repos.d/redhat-rhui.repo
Repo-id : rhui-REGION-rhel-server-rh-common/7Server/x86_64
Repo-name : Red Hat Enterprise Linux Server 7 RH Common (RPMs)
Repo-revision: 1513002956
Repo-updated : Mon Dec 11 14:35:56 2017
Repo-pkgs : 231
Repo-size : 4.5 G
Repo-mirrors : https://rhui2-cds01.ap-south-1.aws.ce.redhat.com/pulp/mirror/content/dist/rhel/rhui/server/7/7Server/x86_64/rh-common/os
Repo-baseurl : https://rhui2-cds02.ap-south-1.aws.ce.redhat.com/pulp/repos//content/dist/rhel/rhui/server/7/7Server/x86_64/rh-common/os/ (1 more)
Repo-expire : 21,600 second(s) (last: Mon Jan 8 19:23:13 2018)
Filter : read-only:present
Repo-filename: /etc/yum.repos.d/redhat-rhui.repo
Repo-id : rsawaroha
Repo-name : rsaw aroha rpms for Fedora/RHEL6+
Repo-revision: 1507778106
Repo-updated : Thu Oct 12 03:15:06 2017
Repo-pkgs : 19
Repo-size : 1.4 M
Repo-baseurl : http://people.redhat.com/rsawhill/rpms
Repo-expire : 21,600 second(s) (last: Mon Jan 8 18:02:10 2018)
Filter : read-only:present
Repo-filename: /etc/yum.repos.d/rsawaroha.repo
repolist: 30,581
List enabled YUM repositories
Under YUM, you have the choice to enable or disable repositories. During yum operations, like installation of packages, only enabled repositories are scanned/contacted.
To view only enabled repositories in YUM, use yum repolist enabled
[root@kerneltalks ~]# yum repolist enabled
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
repo id repo name status
*epel/x86_64 Extra Packages for Enterprise Linux 6 - x86_64 12,448
rhui-REGION-client-config-server-7/x86_64 Red Hat Update Infrastructure 2.0 Client Configuration Server 7 2
rhui-REGION-rhel-server-releases/7Server/x86_64 Red Hat Enterprise Linux Server 7 (RPMs) 17,881
rhui-REGION-rhel-server-rh-common/7Server/x86_64 Red Hat Enterprise Linux Server 7 RH Common (RPMs) 231
rsawaroha rsaw aroha rpms for Fedora/RHEL6+ 19
repolist: 30,581
List disabled YUM repositories
Similarly, you can list only the disabled yum repositories. Use yum repolist disabled:
[root@kerneltalks ~]# yum repolist disabled
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
repo id repo name
epel-debuginfo/x86_64 Extra Packages for Enterprise Linux 6 - x86_64 - Debug
epel-source/x86_64 Extra Packages for Enterprise Linux 6 - x86_64 - Source
epel-testing/x86_64 Extra Packages for Enterprise Linux 6 - Testing - x86_64
epel-testing-debuginfo/x86_64 Extra Packages for Enterprise Linux 6 - Testing - x86_64 - Debug
epel-testing-source/x86_64 Extra Packages for Enterprise Linux 6 - Testing - x86_64 - Source
rhui-REGION-rhel-server-debug-extras/7Server/x86_64 Red Hat Enterprise Linux Server 7 Extra Debug (Debug RPMs)
rhui-REGION-rhel-server-debug-optional/7Server/x86_64 Red Hat Enterprise Linux Server 7 Optional Debug (Debug RPMs)
rhui-REGION-rhel-server-debug-rh-common/7Server/x86_64 Red Hat Enterprise Linux Server 7 RH Common Debug (Debug RPMs)
rhui-REGION-rhel-server-debug-rhscl/7Server/x86_64 Red Hat Enterprise Linux Server 7 RHSCL Debug (Debug RPMs)
rhui-REGION-rhel-server-debug-supplementary/7Server/x86_64 Red Hat Enterprise Linux Server 7 Supplementary Debug (Debug RPMs)
rhui-REGION-rhel-server-extras/7Server/x86_64 Red Hat Enterprise Linux Server 7 Extra(RPMs)
rhui-REGION-rhel-server-optional/7Server/x86_64 Red Hat Enterprise Linux Server 7 Optional (RPMs)
rhui-REGION-rhel-server-releases-debug/7Server/x86_64 Red Hat Enterprise Linux Server 7 Debug (Debug RPMs)
rhui-REGION-rhel-server-releases-source/7Server/x86_64 Red Hat Enterprise Linux Server 7 (SRPMs)
rhui-REGION-rhel-server-rhscl/7Server/x86_64 Red Hat Enterprise Linux Server 7 RHSCL (RPMs)
rhui-REGION-rhel-server-source-extras/7Server/x86_64 Red Hat Enterprise Linux Server 7 Extra (SRPMs)
rhui-REGION-rhel-server-source-optional/7Server/x86_64 Red Hat Enterprise Linux Server 7 Optional (SRPMs)
rhui-REGION-rhel-server-source-rh-common/7Server/x86_64 Red Hat Enterprise Linux Server 7 RH Common (SRPMs)
rhui-REGION-rhel-server-source-rhscl/7Server/x86_64 Red Hat Enterprise Linux Server 7 RHSCL (SRPMs)
rhui-REGION-rhel-server-source-supplementary/7Server/x86_64 Red Hat Enterprise Linux Server 7 Supplementary (SRPMs)
rhui-REGION-rhel-server-supplementary/7Server/x86_64 Red Hat Enterprise Linux Server 7 Supplementary (RPMs)
repolist: 0
List all configured YUM repositories
To list all YUM repositories available on the server, use yum repolist all:
[root@kerneltalks ~]# yum repolist all
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
repo id repo name status
*epel/x86_64 Extra Packages for Enterprise Linux 6 - x86_64 enabled: 12,448
epel-debuginfo/x86_64 Extra Packages for Enterprise Linux 6 - x86_64 - Debug disabled
epel-source/x86_64 Extra Packages for Enterprise Linux 6 - x86_64 - Source disabled
epel-testing/x86_64 Extra Packages for Enterprise Linux 6 - Testing - x86_64 disabled
epel-testing-debuginfo/x86_64 Extra Packages for Enterprise Linux 6 - Testing - x86_64 - Debug disabled
epel-testing-source/x86_64 Extra Packages for Enterprise Linux 6 - Testing - x86_64 - Source disabled
rhui-REGION-client-config-server-7/x86_64 Red Hat Update Infrastructure 2.0 Client Configuration Server 7 enabled: 2
rhui-REGION-rhel-server-debug-extras/7Server/x86_64 Red Hat Enterprise Linux Server 7 Extra Debug (Debug RPMs) disabled
rhui-REGION-rhel-server-debug-optional/7Server/x86_64 Red Hat Enterprise Linux Server 7 Optional Debug (Debug RPMs) disabled
rhui-REGION-rhel-server-debug-rh-common/7Server/x86_64 Red Hat Enterprise Linux Server 7 RH Common Debug (Debug RPMs) disabled
rhui-REGION-rhel-server-debug-rhscl/7Server/x86_64 Red Hat Enterprise Linux Server 7 RHSCL Debug (Debug RPMs) disabled
rhui-REGION-rhel-server-debug-supplementary/7Server/x86_64 Red Hat Enterprise Linux Server 7 Supplementary Debug (Debug RPMs) disabled
rhui-REGION-rhel-server-extras/7Server/x86_64 Red Hat Enterprise Linux Server 7 Extra(RPMs) disabled
rhui-REGION-rhel-server-optional/7Server/x86_64 Red Hat Enterprise Linux Server 7 Optional (RPMs) disabled
rhui-REGION-rhel-server-releases/7Server/x86_64 Red Hat Enterprise Linux Server 7 (RPMs) enabled: 17,881
rhui-REGION-rhel-server-releases-debug/7Server/x86_64 Red Hat Enterprise Linux Server 7 Debug (Debug RPMs) disabled
rhui-REGION-rhel-server-releases-source/7Server/x86_64 Red Hat Enterprise Linux Server 7 (SRPMs) disabled
rhui-REGION-rhel-server-rh-common/7Server/x86_64 Red Hat Enterprise Linux Server 7 RH Common (RPMs) enabled: 231
rhui-REGION-rhel-server-rhscl/7Server/x86_64 Red Hat Enterprise Linux Server 7 RHSCL (RPMs) disabled
rhui-REGION-rhel-server-source-extras/7Server/x86_64 Red Hat Enterprise Linux Server 7 Extra (SRPMs) disabled
rhui-REGION-rhel-server-source-optional/7Server/x86_64 Red Hat Enterprise Linux Server 7 Optional (SRPMs) disabled
rhui-REGION-rhel-server-source-rh-common/7Server/x86_64 Red Hat Enterprise Linux Server 7 RH Common (SRPMs) disabled
rhui-REGION-rhel-server-source-rhscl/7Server/x86_64 Red Hat Enterprise Linux Server 7 RHSCL (SRPMs) disabled
rhui-REGION-rhel-server-source-supplementary/7Server/x86_64 Red Hat Enterprise Linux Server 7 Supplementary (SRPMs) disabled
rhui-REGION-rhel-server-supplementary/7Server/x86_64 Red Hat Enterprise Linux Server 7 Supplementary (RPMs) disabled
rsawaroha rsaw aroha rpms for Fedora/RHEL6+ enabled: 19
repolist: 30,581
List all available packages in repositories
To list all available packages for installation from all repositories, use the yum list available command. If you want to list packages from one particular repository, then use the below switches (see the combined example after this list):
disablerepo="*" which will exclude all repos from scanning
enablerepo="<repo>" which will include only your desired repo to scan for packages.
Learn how to use the xsos tool to read a sosreport in RHEL/CentOS. xsos is a very helpful tool for Linux sysadmins. Different options and their examples are included in the article.
xsos is a tool coded to read a sosreport on Linux systems. sosreport is a tool from Red Hat that collects system information to help vendors troubleshoot issues. sosreport creates a tarball containing all the system information, but you cannot read it directly. For simplicity, Ryan Sawhill created a tool named xsos, which helps you read a sosreport much more easily, right in your terminal. In this article, we will walk you through how to read a sosreport on the Linux terminal.
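If xsos is not already installed, it can be pulled from the rsawaroha repo we saw configured in the YUM article above (assuming that repo is set up on your server):
root@kerneltalks # yum install xsos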
Now, there are different switches you can use with the xsos command to get the required details. Frequently used switches are given below:
-a show everything
-b show info from dmidecode
-o show hostname, distro, SELinux, kernel info, uptime, etc
-k inspect kdump configuration
-c show info from /proc/cpuinfo
-m show info from /proc/meminfo
-d show info from /proc/partitions
-t show info from dm-multipath
-i show info from ip addr
The above is a snippet from the help text. The full list of switches can be obtained by running xsos -h.
Reading sosreport using xsos
To read a sosreport using the xsos tool, you first need to extract the sosreport tarball and use the extracted directory path as the source for the xsos tool. The command format is:
xsos -<switch> <sosreport_dir_path>
For example, let's see CPU information read from a sosreport.
root@kerneltalks # xsos -c /var/tmp/sosreport-kerneltalks-20180108180100
CPU
1 logical processors
1 Intel Xeon CPU E5-2676 v3 @ 2.40GHz (flags: aes,constant_tsc,ht,lm,nx,pae,rdrand)
Here, -c instructs the xsos command to read CPU information from the sosreport saved in the /var/tmp/sosreport-kerneltalks-20180108180100 directory.
Another example below reads IP information from the sosreport.
root@kerneltalks # xsos -i /var/tmp/sosreport-kerneltalks-20180108180100
IP4
Interface Master IF MAC Address MTU State IPv4 Address
========= ========= ================= ====== ===== ==================
lo - - 65536 up 127.0.0.1/8
eth0 - 02:e5:4c:f8:86:0e 9001 up 172.31.29.189/20
IP6
Interface Master IF MAC Address MTU State IPv6 Address Scope
========= ========= ================= ====== ===== =========================================== =====
lo - - 65536 up ::1/128 host
eth0 - 02:e5:4c:f8:86:0e 9001 up fe80::e5:4cff:fef8:860e/64 link
You can see the IP information fetched from the stored sosreport and displayed for your understanding.
You can use different switches to fetch different information from the sosreport as per your requirement. This way, you need not go through each and every logfile or log directory extracted in the sosreport directory to get the information. Just use a relevant switch with the xsos utility and it will scan the sosreport directory and present your data!
A troubleshooting guide to reclaiming space on disk after deleting files in Linux.
One of the common issues Linux/Unix system users face is that disk space is not released even after files are deleted. Sysadmins often try to recover disk space by deleting large files on a mount point, only to find that disk utilization stays the same even after the deletion. Sometimes application users move or delete large log files and still cannot reclaim space on the mount point.
In this troubleshooting guide, I will walk you through steps that will help you reclaim space on disk after deleting files; here we learn how to deal with deleted-but-open files in Linux. Most of the time, files are deleted manually, but processes using those files keep them open, so the space is not reclaimed and df shows stale space utilization.
Process stop/start/restart
To resolve this issue, you need to gracefully or forcefully end the processes using those deleted files. First, get a list of deleted files that are still held open by processes. Use the lsof (list open files) command with the +L1 switch, or directly grep for deleted in the lsof output without any switch:
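An illustrative output (the process name and most numbers are made up, but the PID 777 and file descriptor 7 line up with the /proc example below):
root@kerneltalks # lsof +L1
COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF NLINK   NODE NAME
myapp    777 root    7u   REG  202,1  2097152     0 655432 /tmp/ffiJEo5nz (deleted)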
Now, in the above output, note the PID 777 and stop that process. If you cannot stop it gracefully, you can kill it. In the case of application processes, refer to the application's guides on how to stop, start, and restart its processes. Restarting the process releases the hold it had on the open file. Once the related process is stopped/restarted, the space is released, and you can observe the reduced utilization in the df command output.
Clear from proc filesystem
Another way is to vacate the space used by the file by truncating it through the /proc filesystem. As you are aware, every process in Linux has its entries in /proc, i.e. the process filesystem. Make sure the process/application will not be impacted if you flush files (which are held open by the app) via the /proc filesystem.
You can find the file descriptor at the /proc/<pid>/fd/<fd_number> location, where pid and fd_number come from the lsof output we saw above. If you check the type of this file, it's a symbolic link to your deleted file.
root@kerneltalks # file /proc/777/fd/7
/proc/777/fd/7: broken symbolic link to `/tmp/ffiJEo5nz (deleted)
So, in our case, we can truncate it using:
root@kerneltalks # > /proc/777/fd/7
That’s it! Flushing it will regain your lost space by those files which you already deleted.
An article explaining the step-by-step procedure to add an EBS disk to an AWS Linux server, with screenshots.
Nowadays, most servers run on cloud platforms like Amazon Web Services (AWS), Azure, etc., so daily administrative tasks on Linux servers from the AWS console are common items in a sysadmin's task list. In this article, we will walk you through one such task: adding a new disk to an AWS Linux server.
Adding a disk to an EC2 Linux server has two parts. The first part is done in the AWS EC2 console: creating a new volume and attaching it to the EC2 instance. The second part is done on the Linux server: identifying the newly added disk at the kernel level and preparing it for use.
Creating & attaching EBS volume
In this step, we will learn how to create an EBS volume in the AWS console and how to attach it to an AWS EC2 instance.
Log in to your EC2 console and navigate to Volumes, which is under the ELASTIC BLOCK STORE menu on the left-hand sidebar. You will be presented with the current list of volumes in your AWS account, like below.
Now, click the Create Volume button and you will be presented with the below screen.
Here you need to choose several parameters for your volume:
Volume Type. This decides your volume's performance and, obviously, billing.
Size. In GB. The min and max available sizes differ according to your volume type choice.
IOPS. Performance parameter. Changes according to your volume type choice.
Availability Zone. Make sure you select the same AZ as your EC2 instance.
Throughput. Performance parameter. Only available for the ST1 & SC1 volume types.
Snapshot ID. Select a snapshot if you want to create the new volume from an existing snapshot backup. For a fresh blank volume, leave it blank.
Encryption. Check it if you want the volume to be encrypted: an extra layer of security.
Tags. Add tags for management, reporting, and billing purposes.
After selecting the proper parameters as per your requirements, click the Create Volume button. If everything goes well, you will be presented with a 'Volume created successfully' dialogue along with the volume ID of your newly created volume. Click Close and you will be back at the volume list.
Now check the volume ID to identify your newly created volume in this list. It will be marked with an 'Available' state. Select that volume and choose Attach Volume from the Actions menu.
Now you will be presented with an instance selection menu. Here you need to choose the instance to which this volume is to be attached. Remember, only instances in the same AZ as the volume are available for selection.
Once you select the instance, you can see, under the Device field, the device name that will be reflected at the kernel level in your instance. Here it's /dev/sdf.
Check out the note displayed here. It says: Note: Newer Linux kernels may rename your devices to /dev/xvdf through /dev/xvdp internally, even when the device name entered here (and shown in the details) is /dev/sdf through /dev/sdp.
In other words, newer Linux kernels may present your device as /dev/xvdf rather than /dev/sdf. This means the volume will be either /dev/sdf (on an old kernel) or /dev/xvdf (on a new kernel).
That's it. Once attached, you can see the volume state has changed from Available to in-use.
Identifying volume on Linux instance
Now head back to your Linux server. Log in and check for the new volume in the fdisk -l output.
root@kerneltalks # fdisk -l
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/xvda: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: 25D08425-708A-47D2-B907-1F0A3F769A90
# Start End Size Type Name
1 2048 4095 1M BIOS boot parti
2 4096 20971486 10G Microsoft basic
Disk /dev/xvdf: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
As AWS mentioned, the new device name is reflected as /dev/xvdf in the kernel; you can see /dev/xvdf in the above output.
Now you need to prepare this disk using LVM (starting with pvcreate) or fdisk so that you can use it to create mount points!
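A minimal LVM sketch to get the disk usable (the volume group, logical volume, size, and mount point names are all illustrative choices):
root@kerneltalks # pvcreate /dev/xvdf
root@kerneltalks # vgcreate datavg /dev/xvdf
root@kerneltalks # lvcreate -L 5G -n datalv datavg
root@kerneltalks # mkfs.ext4 /dev/datavg/datalv
root@kerneltalks # mkdir /data
root@kerneltalks # mount /dev/datavg/datalv /data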