Category Archives: Config

How to configure switching IAM roles in AWS CLI?

A short howto on configuring AWS CLI to switch roles

AWS CLI Switch Roles configuration

Requirement:

You have an AWS IAM user that needs to switch roles before executing actions on AWS. Switching roles is easy in the AWS console, but how do you switch roles in the AWS CLI?

Solution:

Let’s consider the below setup-

  • AWS IAM user with programmatic access – user101
  • The same IAM user has sts:AssumeRole permission.
  • An AWS IAM role for the above IAM user to assume (same or cross-account) – role101

Start with configuring the AWS CLI in a standard way.

$ aws configure --profile user101
AWS Access Key ID [None]: AKIAQX3SNXZGUQFOSK4T
AWS Secret Access Key [None]: 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
Default region name [us-east-1]: us-east-1
Default output format [None]: json

Note: It is not a good practice to keep AWS credentials in plain text. Keep them secured and encrypted using a tool like aws-auth.

Now, at this point, you must have an AWS credentials file created in the home directory.

$ cd ~/.aws
$ cat credentials
[user101]
aws_access_key_id = AKIAQX3SNXZGUQFOSK4T
aws_secret_access_key = 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
region = us-east-1
output = json

You need to edit the above credentials file to add IAM role details. Append the below configuration in the file.

If you are working with AWS GovCloud, make sure the ARNs have the proper AWS partition defined, e.g. arn:aws-us-gov:x:x:…..

[role101]
role_arn = arn:aws:iam::xxxxxxxxx:role/role101
output = json
source_profile = user101

where –

  • role101 is a profile identifier; you can choose any name you like.
  • role_arn must be the correct ARN of the IAM role to assume.
  • source_profile should be the profile identifier of the user who will assume this role. In our case, it's user101.

Save the file, and you are ready to go.

Test configurations –

$ aws sts get-caller-identity
{
    "UserId": "AIDAQX3SNXZG3Z2AXNIMJ",
    "Account": "xxxxxxxxx",
    "Arn": "arn:aws:iam::xxxxxxxxx:user/user101"
}

$ aws sts get-caller-identity --profile role101
{
    "UserId": "AROAQX3SNXZG6KL4YENFZ:botocore-session-1631087792",
    "Account": "xxxxxxxxx",
    "Arn": "arn:aws:sts::xxxxxxxxx:assumed-role/role101/botocore-session-1631087792"
}

You can see that by using --profile role101, we are assuming the IAM role role101 as the user user101.
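From here, any CLI command run with --profile role101 executes under the assumed role. For example (assuming the role grants S3 read permissions), you can list S3 buckets, or export the AWS_PROFILE variable so that you don't have to type the flag every time –

$ aws s3 ls --profile role101

$ export AWS_PROFILE=role101
$ aws sts get-caller-identity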

AWS CLI configuration for switching roles using MFA

Note: If you are on Windows and using GitBash, refer to configuring GitBash for MFA prompts. It works perfectly in Powershell.

In some cases, your AWS environment may have MFA restrictions in place, where the user user101 must authenticate with MFA to switch to the role role101. In such a scenario, your role profile in the credentials file should include the MFA device ARN as well, like below –

[role101]
role_arn = arn:aws:iam::xxxxxxxxx:role/role101
mfa_serial = arn:aws:iam::xxxxxxxxx:mfa/user101
output = json
source_profile = user101

where –

mfa_serial is the ARN of the MFA device of user101.

You will be prompted to supply the MFA code whenever you use profile role101 in AWS CLI commands.

$ aws sts get-caller-identity --profile role101
Enter MFA code for arn:aws:iam::xxxxxxxxx:mfa/user101:
{
    "UserId": "AROAQX3SNXZG6KL4YENFZ:botocore-session-1631089277",
    "Account": "xxxxxxxxx",
    "Arn": "arn:aws:sts::xxxxxxxxx:assumed-role/role101/botocore-session-1631089277"
}

How to configure proxy in RHEL, Suse, OEL, CentOS, Ubuntu Linux

Learn how to configure a proxy in Linux flavors like RHEL, SUSE, OEL, CentOS, Ubuntu, etc.

Proxy in Linux


One of the basic tasks after building a new system in your environment is to set up a proxy to enable internet access on the server. In this tutorial, we will walk you step by step through configuring an internet proxy in major Linux flavors like RHEL, SUSE, OEL, CentOS, Ubuntu, etc. Without further delay, let's jump in.

How to setup proxy in Linux using shell variables

Typically, you can set internet proxy details using the shell variable http_proxy. The syntax is below –

root@kerneltalks # export http_proxy=http://username:password@proxy-servername:port/

For example, let’s consider below proxy server details which we need to configure.

  • Proxy server: kerneltalks-proxy.com
  • Port: 8081
  • Username: shrikant
  • Password: @Fnr5*r$9Lp

The above proxy uses authentication, so we can define it as below. (Note that special characters in the password, such as @, may need to be URL-encoded for the variable to be parsed correctly.)

root@kerneltalks # export http_proxy=http://shrikant:@Fnr5*r$9Lp@kerneltalks-proxy.com:8081/

If your proxy does not require authentication, then it can be defined as:

root@kerneltalks # export http_proxy=http://kerneltalks-proxy.com:8081/

The above proxy configuration applies to the current shell session of the current user only. To configure a proxy for all users on the system, add the above export command to the /etc/profile file, and it will be available to every user on the system.

You can configure no_proxy hosts using the below command. no_proxy hosts are destinations that you want to reach directly, bypassing the proxy.

root@kerneltalks # export no_proxy="kerneltalks.com,10.10.2.3"
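Putting it together, the lines you might append to /etc/profile for a system-wide proxy could look like this (using the example proxy above; https_proxy is honored by most tools alongside http_proxy) –

export http_proxy=http://kerneltalks-proxy.com:8081/
export https_proxy=http://kerneltalks-proxy.com:8081/
export no_proxy="kerneltalks.com,10.10.2.3"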

How to setup proxy persistently using shell variable

It's pretty much the same as above; the only difference is that we save this variable in /etc/environment. So every time anyone logs in to the system, this proxy variable is loaded into their login environment automatically.

echo "http_proxy=http://proxy-servername:port" >>/etc/environment

How to setup proxy using yast in SUSE Linux

yast is the configuration manager native to SUSE Linux, which gives you a nice text-based GUI in a PuTTY terminal! If you are used to it, you can configure the proxy from yast as well.

Navigate to Network Services -> Proxy

Suse yast proxy configuration

Check Enable Proxy using the Tab key. It will allow you to fill in the details below, like server details, authentication, etc. Fill in the details, and you can test the configuration by selecting Test Proxy Settings. After successful testing, select OK.

You can even mention hostnames and IPs under No Proxy Domains so that they are connected to directly, bypassing the proxy. This is very helpful when you have internet and local network repos configured under zypper. By adding the FQDN/IP of the local patching server under No Proxy Domains, you can reach the local patching server while the proxy is enabled.

You will be presented with the notice “It is recommended to relogin to make new proxy settings effective.” Re-login and test internet access.


How to setup proxy in RHEL using GUI

In RHEL, navigate to Application -> System Tools -> Settings -> Network

Network settings in RHEL

Select the Network Proxy and then Manual method.

RHEL proxy configuration

Here, fill in the proxy server details along with the port and you are good to go. Add hostnames or IPs in Ignore Hosts so that they are connected to directly, bypassing the proxy. This is helpful when you have both local server and internet repos configured under yum. By entering the local patching server's FQDN/IP in Ignore Hosts, you can use the local patching server in YUM while the proxy is enabled server-wide.


How to ignore proxy for local patching server in Linux

As I mentioned a couple of times above, here is a particular case you may face on your system. You have a repo manager like zypper configured with a repo from the internet and also one from a local patching server (with FQDN). It's the same as the 'No proxy for' or 'Bypass proxy server for local addresses' setting in Windows.

Now, when you enable the proxy, the internet repo works (via the proxy) but the local patching server repo won't, since the system tries to reach the local patching server through the proxy and can't. If you disable the proxy, your local patching repo will work and the internet repo won't.

In such a case, you need the local repo to bypass the proxy while the internet repo goes through the proxy. Here you can define the local server FQDN/IP in Ignore Hosts or No Proxy Domains, as I explained earlier.

From the CLI, you can edit the file /etc/sysconfig/proxy and add an entry to the NO_PROXY line, as below –

## Type:        string(localhost)
## Default:     localhost
#
# Example: NO_PROXY="www.me.de, do.main, localhost"
#
NO_PROXY="localhost, 127.0.0.1, patchingsvr.kt.com"

Here, the entry patchingsvr.kt.com makes the proxy get bypassed for this local server. The above sample file is from SUSE Linux.

Enable debugging to log NFS logs in Linux

Learn how to set up debugging to generate NFS logs. By default, the NFS daemon does not provide a dedicated log file, so you have to set up debugging.

Capturing NFS logs

One of our readers asked me where NFS logs are located. I decided to write this post to answer his query, since it's not easy to just name a log file; there is a process you need to execute well in advance to capture your NFS logs. This article will help you answer questions like: where are my NFS logs, what is the NFS log file location, and where does the NFS daemon log events?

There is an NFS logging utility in Solaris called nfslogd (NFS transfer logs). It has a configuration file /etc/nfs/nfslog.conf and stores logs in the file /var/nfs/nfslog. But I am yet to see or use it in Linux (if it exists for Linux at all). If you have any insight about it, let me know in the comments.

By default, the NFS daemon does not have a dedicated log file that you can configure while setting up the NFS server. You need to enable debugging for the NFS daemon so that its events are logged in the /var/log/messages syslog file. Sometimes a few events may be logged to Syslog even without this debugging enabled, but those logs are not enough when you try to troubleshoot NFS related errors. So we need to enable debugging for the NFS daemon, and then plenty of information will be available for you to analyze when you start NFS troubleshooting.

Below are NFS service start-stop logs in Syslog when debugging is not enabled

Jun 24 00:31:24 kerneltalks.com rpc.mountd[3310]: Version 1.2.3 starting
Jun 24 00:31:24 kerneltalks.com kernel: NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
Jun 24 00:31:24 kerneltalks.com kernel: NFSD: starting 90-second grace period
Jun 24 00:31:46 kerneltalks.com kernel: nfsd: last server has exited, flushing export cache
Jun 24 00:31:46 kerneltalks.com rpc.mountd[3310]: Caught signal 15, un-registering and exiting.

rpcdebug is the command used to set NFS & RPC debug flags. This command supports the below switches:

  • -m: specify modules to set or clear
  • -s: set given debug flags
  • -c: Clear flags

Pretty simple! If you want to enable debugging, use -s; if you want to turn off/disable debugging, use -c. Below is a list of the important modules whose debug flags you can set or clear (a quick check of the underlying kernel settings follows the list).

  • nfs: NFS client
  • nfsd: NFS server
  • NLM: Network lock manager of client or server
  • RPC: Remote procedure call module of client or server
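Under the hood, rpcdebug manipulates kernel debug bitmasks exposed under /proc/sys/sunrpc. As a quick sanity check (exact file names can vary with your kernel version), you can peek at the raw values directly –

# cat /proc/sys/sunrpc/nfsd_debug
# sysctl sunrpc.nfs_debug sunrpc.nfsd_debug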

Enable debugging for NFS logs :

Use the below command to enable NFS logging. Here we are enabling all debug flags for the nfsd module; you can instead pick the module of your requirement from the above list.

# rpcdebug -m nfsd -s all
nfsd       sock fh export svc proc fileop auth repcache xdr lockd

In the above output, you can see the list of debug flags enabled (on the right) for the nfsd module (on the left). Once this is done, you need to restart your NFS daemon; a distribution-dependent sketch of that step follows. After restarting, check Syslog and voila! There are your NFS logs!
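The restart step depends on your distribution. On RHEL/CentOS 6-era systems it might look like this (on Debian/Ubuntu the service is typically nfs-kernel-server, and on systemd-based systems nfs-server) –

# service nfs restart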

Jun 24 00:31:46 kerneltalks.com kernel: nfsd: last server has exited, flushing export cache
Jun 24 00:31:46 kerneltalks.com rpc.mountd[3310]: Caught signal 15, un-registering and exiting.
Jun 24 00:32:03 kerneltalks.com kernel: svc_export_parse: '-test-client- /shareit  3 8192 65534 65534 0'
Jun 24 00:32:03 kerneltalks.com rpc.mountd[3496]: Version 1.2.3 starting
Jun 24 00:32:03 kerneltalks.com kernel: set_max_drc nfsd_drc_max_mem 962560
Jun 24 00:32:03 kerneltalks.com kernel: nfsd: creating service
Jun 24 00:32:03 kerneltalks.com kernel: nfsd: allocating 32 readahead buffers.
Jun 24 00:32:03 kerneltalks.com kernel: NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
Jun 24 00:32:03 kerneltalks.com kernel: NFSD: starting 90-second grace period

Logs follow the standard Syslog format: date, time, hostname, service, and finally the message. You can compare these logs with the logs given at the start of this post when debugging was not enabled; you can see quite a few extra logs being generated by debugging.

Disable debugging for NFS logs

Disabling debugging will stop the NFS daemon debug logging. It can be done with the -c switch.

# rpcdebug -m  nfsd -c  all
nfsd      <no flags set>

You can see the command clears all set flags and shows no flags set, so nfsd will not log any debug messages.

syslog configuration in Linux

Learn everything about Syslog in Linux. Its configuration file format, how to restart Syslog, rotation, and how to log Syslog entry manually.

Linux Syslog configuration

One of the most important daemons on a Unix or Linux based system is syslogd! It logs many crucial system events by default. Logs written by syslogd are commonly referred to as Syslog. Syslog is the first log to check when you want to trace issues with your system. It is the lifeline of sysadmins 🙂

In this article, we will see configuration files for syslogd, different configs, and how to apply them. Before we begin, go through the below files, which we will be using frequently throughout this article.

  1. /etc/syslog.conf : syslogd configuration file
  2. /var/log/messages : Syslog file

There are three Syslog daemon projects, spawned one after another to enhance the previous project's functionality: syslog (1980), syslog-ng (1998), and rsyslog (2004). So, depending on which project's daemon is running on your server, the daemon name changes. The rest of the configuration remains pretty similar.

Syslog uses port 514 for remote logging (UDP by default; rsyslog and syslog-ng can also be configured to use TCP).

syslogd daemon

This daemon starts with the system and runs in the background all the time, capturing system events and logging them in Syslog. It can be started, stopped, and restarted like other services in Linux. You need to check which Syslog version (of the three projects stated above) is running (ps -ef | grep syslog) and use the daemon name accordingly.

# service rsyslog status
rsyslogd (pid  999) is running...

# service rsyslog restart
Shutting down system logger:                               [  OK  ]
Starting system logger:                                    [  OK  ]

After making any changes in the configuration file, you need to restart syslogd in order for the new changes to take effect.

syslog configuration file

As stated above, /etc/syslog.conf is the configuration file where you define when, where, and which events are to be logged by the Syslog daemon. The file name changes as per your Syslog version:

  • /etc/syslog.conf for syslog
  • /etc/syslog-ng.conf for syslog-ng
  • /etc/rsyslog.conf for rsyslog

The typical config file looks like below :

*.info;mail.none;authpriv.none;cron.none                /var/log/messages
authpriv.*                                              /var/log/secure
mail.*                                                  -/var/log/maillog
cron.*                                                  /var/log/cron
*.emerg                                                 *
uucp,news.crit                                          /var/log/spooler
local7.*                                                /var/log/boot.log

Here, the left column shows the services (facilities) whose logs you want captured, along with their priority (following the . after the service name), and the right column shows the actions, normally the destinations where the logs should be written by the daemon.

Service (facility) values:

  • local7: boot messages
  • kern: Kernel messages
  • auth: Security events
  • authpriv : Access control related messages
  • mail, cron: mail and cron related events

Service priorities :

  • debug
  • info
  • notice
  • warning
  • err
  • crit
  • alert
  • emerg
  • * means all levels of messages are to be logged
  • none means no messages to be logged

All the above priorities are given in ascending level of urgency.

Actions/destination :

These are mostly log files or a remote Syslog server to which logs get sent. The remote server can be specified by IP or hostname preceded by the @ sign.
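For example, a hedged sketch of a line forwarding all info-and-above messages to a central log server (the addresses are assumptions; in rsyslog a single @ means UDP and @@ means TCP) –

*.info                                                  @192.168.1.50
*.info                                                  @@logserver.example.com:514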

Syslog

All logs by syslogd are written to its Syslog file /var/log/messages. A typical Syslog file looks like:

May 22 02:00:29 server1 rsyslogd: [origin software="rsyslogd" swVersion="5.8.10" x-pid="999" x-info="http://www.rsyslog.com"] exiting on signal 15.
May 22 02:00:29 server1 kernel: imklog 5.8.10, log source = /proc/kmsg started.
May 22 02:00:29 server1 rsyslogd: [origin software="rsyslogd" swVersion="5.8.10" x-pid="1698" x-info="http://www.rsyslog.com"] start
May 22 02:17:43 server1 dhclient[916]: DHCPREQUEST on eth0 to 172.31.0.1 port 67 (xid=0x445faedb)

Here file can be read in below parts from left to right :

  1. Date
  2. Time
  3. Hostname (this is important for identifying which server a log came from on a centralized Syslog server)
  4. The service name for which logs were written by the daemon
  5. Separator colon
  6. Actual message or log

The first 5 fields can be used for sorting and filtering logs in various tools, scripts, etc. Since Syslog logs all events on the system, it obviously grows in size pretty quickly. You can manually rotate Syslog over a specific period, or you can use the logrotate utility to do it automatically in the background.
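A minimal logrotate sketch for /var/log/messages might look like the one below (the rotation period and count are assumptions; most distributions already ship a similar stanza under /etc/logrotate.d/) –

/var/log/messages {
    weekly
    rotate 4
    compress
    missingok
    postrotate
        /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
    endscript
}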

Testing Syslog logging

To test whether the daemon is logging messages in Syslog or not, you can use the logger command. With this command, you can specify numerous options like priority, facility, tag, etc. But even without any options, you can supply a string to write to Syslog and it will do the job for you.

# logger Writing KERNELTALKS in syslog using logger. Testing...

# cat /var/log/messages |grep -i kerneltalks
May 22 02:31:05 server1 root: Writing KERNELTALKS in syslog using logger. Testing...

In the above example, you can see the text supplied to the logger command printed in the Syslog file. Since we used the logger command and didn't specify any facility or tag, it logged the message with the userid root in the service field!
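If you want to control the facility, priority, and tag explicitly, logger accepts the -p and -t options; for example (the values here are just for illustration) –

# logger -p local0.warning -t kerneltalks "Testing syslog with an explicit facility and tag"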

Let’s Encrypt SSL certificate on Apache YUM based Linux system

Learn to configure the free, open-source, secure Let’s Encrypt SSL certificate on Apache webserver running on YUM based Linux server.

Lets Encrypt installation on Apache

What is Let’s Encrypt

Let's Encrypt is a free, open-source, and automated SSL CA (Certificate Authority). It's managed by ISRG (Internet Security Research Group). SSL certificates have always involved a cost, recurring every year for renewal; Let's Encrypt aims at open-source and free SSL. This is an ideal choice for small websites and businesses that have little or no critical data on their websites and are looking for SSL certificates.

If you are running a personal blog, then SSL is essential for a good search engine reputation. But before you dive in, you need a dedicated IP for your domain name. If you are on shared hosting, you likely do not have a dedicated IP, so for Let's Encrypt SSL you need to buy an IP for your domain name.

Lets Encrypt SSL certificate

Let's Encrypt provides you a free SSL certificate after validating your domain name, and the certificate is valid for 3 months. You have to renew it every 3 months; the renewal process can be automated too. Certbot is the client that deploys HTTPS on your server and configures Let's Encrypt certs for you.

Before running the installation, you should have these pre-requisites completed:

  1. Install EPEL repo
  2. You should have a webserver running
  3. The webpage is being displayed on your domain name (port 80) properly

Let’s encrypt installation on Apache & YUM Linux system

First of all, you need to clone the git repository of letsencrypt. For that, install the 'git' package first. Once done, run the below git clone command:

# git clone https://github.com/letsencrypt/letsencrypt
Initialized empty Git repository in /root/letsencrypt/.git/
remote: Counting objects: 45178, done.
remote: Compressing objects: 100% (164/164), done.
remote: Total 45178 (delta 112), reused 0 (delta 0), pack-reused 45014
Receiving objects: 100% (45178/45178), 13.38 MiB | 2.15 MiB/s, done.
Resolving deltas: 100% (32345/32345), done.

Now go to the letsencrypt directory, which was created in your present directory by the clone command. Inside that directory, run the below command:

# ./letsencrypt-auto  certonly --standalone

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Please enter in your domain name(s) (comma and/or space separated)  (Enter 'c'
to cancel):ktwebtest.ddns.net
Obtaining a new certificate
Performing the following challenges:
tls-sni-01 challenge for ktwebtest.ddns.net
Waiting for verification...
Cleaning up challenges
Generating key (2048 bits): /etc/letsencrypt/keys/0002_key-certbot.pem
Creating CSR: /etc/letsencrypt/csr/0002_csr-certbot.pem

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/ktwebtest.ddns.net/fullchain.pem. Your cert
   will expire on 2017-07-02. To obtain a new or tweaked version of
   this certificate in the future, simply run letsencrypt-auto again.
   To non-interactively renew *all* of your certificates, run
   "letsencrypt-auto renew"
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

You can see above that, after you supply the domain name, Certbot creates a CSR file on its own, generates a key file, and fetches the SSL certificate too. All paths are visible in the output. The file paths are:

  • Key files directory: /etc/letsencrypt/keys/
  • CSR files directory: /etc/letsencrypt/csr/
  • SSL files directory: /etc/letsencrypt/live/

It also shows you when your certificate is going to expire and the command you can use to renew it. Now you can follow the tutorial on how to install the SSL certificate which you obtained in the above step.

You can even automate these SSL installation steps by using the command:

# ./letsencrypt-auto  -d ktwebtest.ddns.net --apache

With this command, it will create the key and CSR, fetch the SSL certificate, and install it on your domain's webserver!

Certificate renewal

You can renew the certificate manually using:

# ./letsencrypt-auto renew
Saving debug log to /var/log/letsencrypt/letsencrypt.log

-------------------------------------------------------------------------------
Processing /etc/letsencrypt/renewal/ktwebtest.ddns.net.conf
-------------------------------------------------------------------------------
Cert not yet due for renewal

The following certs are not due for renewal yet:
  /etc/letsencrypt/live/ktwebtest.ddns.net/fullchain.pem (skipped)
No renewals were attempted.


As you can see, it scans through all fetched Let's Encrypt certificates on the server and their due dates. If any are found due, those certificates will be renewed in no time!

If you want to renew a certificate regardless of the due date, then you can force renewal as below:

# ./letsencrypt-auto renew  --force-renewal

Saving debug log to /var/log/letsencrypt/letsencrypt.log

-------------------------------------------------------------------------------
Processing /etc/letsencrypt/renewal/ktwebtest.ddns.net.conf
-------------------------------------------------------------------------------
Renewing an existing certificate
Performing the following challenges:
tls-sni-01 challenge for ktwebtest.ddns.net
Waiting for verification...
Cleaning up challenges
Generating key (2048 bits): /etc/letsencrypt/keys/0003_key-certbot.pem
Creating CSR: /etc/letsencrypt/csr/0003_csr-certbot.pem

-------------------------------------------------------------------------------
new certificate deployed without reload, fullchain is
/etc/letsencrypt/live/ktwebtest.ddns.net/fullchain.pem
-------------------------------------------------------------------------------

Congratulations, all renewals succeeded. The following certs have been renewed:
  /etc/letsencrypt/live/ktwebtest.ddns.net/fullchain.pem (success)

You can even schedule a crontab entry, with a little bit of scripting, to renew certificates automatically.
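For example, a hedged crontab entry that attempts renewal every Monday at 3 AM (the path to the cloned letsencrypt directory and the log file are assumptions) –

0 3 * * 1 /root/letsencrypt/letsencrypt-auto renew >> /var/log/letsencrypt-renew.log 2>&1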

How to configure yum server in Linux

Learn to configure the yum server in RPM-based Linux systems. The article explains yum server configs over HTTP and FTP protocol.

YUM server Configuration

In our last article, we saw yum configuration. We learned what yum is, why to use it, what a repository is, yum config file locations, the config file format, and how to configure DVD and HTTP locations as repositories. In this article, we will walk through YUM server configuration, i.e. configuring serverA as a YUM server so that other clients can use serverA as a repo location.


In this article, we will see how to set up a yum server over the FTP and HTTP protocols. Before proceeding with the configuration, make sure you have the three packages deltarpm, python-deltarpm, and createrepo installed on your yum server.

YUM server http configuration

First of all, we need to install a web server on the system so that HTTP pages can be served. Install the httpd package using yum. Post-installation, you will have the /var/www/html directory, which is the home of your webserver. Create a packages directory within it to hold all the packages. Now we have the /var/www/html/packages directory to hold the packages of our YUM server.
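Assuming a YUM-based server, the steps so far might look like this –

# yum install httpd createrepo deltarpm python-deltarpm
# mkdir -p /var/www/html/packages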

Start httpd service and verify you are able to access http://ip-address/packages in the browser. It should look like below :

Webserver directory listing

Now, we need to copy the package files (.rpm) into this directory. You can copy them manually from your OS DVD, or you can download them using wget from official online package mirrors. Once you populate the /var/www/html/packages directory with .rpm files, they are available to download from the browser, but YUM won't be able to recognize them yet.

For YUM (on the client side) to fetch packages from the above directory, you need to create an index (.xml) of these files. You can create it using the below command –

# createrepo /var/www/html/packages/
Spawning worker 0 with 3 pkgs
Workers Finished
Gathering worker results
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete

Here I kept only 3 RPMs in the directory, so you can see the worker spawned with 3 pkgs! After completion of the above command, you can observe that the repodata directory is created inside the packages directory, and it contains repo detail files along with the xml index.

# ll /var/www/html/packages/repodata/
total 40
-rw-r--r--. 1 root root 10121 Mar 23 15:38 196f88dd1e6b0b74bbd8b3a689e77a8f632650da7fa77db06f212536a2e75096-primary.sqlite.bz2
-rw-r--r--. 1 root root  4275 Mar 23 15:38 1fc168d13253247ba15d45806c8f33bfced19bb1bf5eca54fb1d6758c831085f-filelists.sqlite.bz2
-rw-r--r--. 1 root root  2733 Mar 23 15:38 59d6b723590f73c4a65162c2f6f378bae422c72756f3dec60b1c4ef87f954f4c-filelists.xml.gz
-rw-r--r--. 1 root root  3874 Mar 23 15:38 656867c9894e31f39a1ecd3e14da8d1fbd68bbdf099e5a5f3ecbb581cf9129e5-other.sqlite.bz2
-rw-r--r--. 1 root root  2968 Mar 23 15:38 8d9cb58a2cf732deb12ce3796a5bc71b04e5c5c93247f4e2ab76bff843e7a747-primary.xml.gz
-rw-r--r--. 1 root root  2449 Mar 23 15:38 b30ec7d46fafe3d5e0b375f9c8bc0df7e9e4f69dc404fdec93777ddf9b145ef3-other.xml.gz
-rw-r--r--. 1 root root  2985 Mar 23 15:38 repomd.xml

Now your location http://ip-address/packages is ready to be used by client YUM to fetch packages. The next thing is to configure another Linux machine (client) with this HTTP path as a repo and try installing the packages you kept in the packages directory.
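On the client, a hedged sketch of such a repo file, say /etc/yum.repos.d/local-http.repo, could look like the one below (the repo id, name, and gpgcheck choice are assumptions; replace ip-address with your YUM server's address) –

[local-http]
name=Local HTTP YUM repository
baseurl=http://ip-address/packages/
enabled=1
gpgcheck=0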

YUM server ftp configuration

In the FTP scenario, we keep the packages accessible to other machines over the FTP protocol rather than HTTP. You need to configure an FTP server and keep the packages directory in the FTP share.

Go through the createrepo step explained above for the FTP share directory. Once done, you can configure the client with the FTP address to fetch packages from the yum server. The repo location entry in the client repo configuration file will be –

baseurl=ftp://ip-address/ftp-share

YUM configuration in Linux

Learn YUM configuration in Linux. Understand what is yum, features of yum, what is a repository, and how to configure it.

YUM Configuration

YUM stands for Yellowdog Updater, Modified. It was developed to maintain RPM-based systems. RPM is the Red Hat Package Manager. YUM is a package manager with the below features –

  1. Simple install, uninstall, upgrade operations
  2. Automatically resolves software dependencies
  3. Looks for more than one source for software
  4. Supports CLI and GUI
  5. Automatically detects the architecture of the system and searches for the best-fit software version
  6. Works well with remote (network connectivity) and local (without network connectivity) repositories.

All these features made it the best package manager. In this article, we will walk through Yum configuration steps.

YUM configuration basics

Yum configuration has repositories defined. Repositories are the places where .rpm package files are located; yum searches and downloads files from repositories for installation. A repository can be a local mount point (file://path), a remote FTP location (ftp://link), an HTTP location (http://link or http://login:password@link), an https link, or a remote NFS mount point.

The yum configuration file is /etc/yum.conf, and repository configuration files are located under the /etc/yum.repos.d/ directory. All repository configuration files must have the .repo extension so that yum can identify them and read their configurations.

Typical repo configuration file entry looks like below :

[rhel-source-beta]
name=Red Hat Enterprise Linux $releasever Beta - $basearch - Source
baseurl=ftp://ftp.redhat.com/pub/redhat/linux/beta/$releasever/en/os/SRPMS/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta,file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

here –

  • [rhel-source-beta] is a unique repository id.
  • name is a human readable repository name
  • baseurl is the location from where packages should be scanned and fetched
  • enabled denotes if this repo is enabled or not i.e. yum should use it or not
  • gpgcheck enable/disable GPG signature check
  • gpgkey is the location of GPG key

Out of these, the first 4 entries are mandatory for every repo location. Let's see how to create a repo from a DVD ISO file.

Remember, one repo configuration file can have more than one repository listed.

You can even configure an internet proxy for yum in the configuration file.
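For example, a hedged sketch of proxy settings in the [main] section of /etc/yum.conf (the proxy address and credentials are assumptions; proxy_username and proxy_password are needed only for authenticating proxies, and the proxy option also works inside an individual .repo file) –

[main]
proxy=http://proxy-server.example.com:8080
proxy_username=proxyuser
proxy_password=proxypass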

YUM repo configuration for DVD ISO

An RPM-based Linux installation DVD has RPM files in it, which are used to install packages at the time of OS installation. We can use these packages to build our own repo so that yum can install them!

First, you have to mount the ISO file on the system. Let's assume we have mounted it on /mnt/dvd. Now we have to create a yum repo file for it. Let's create the file dvdiso.repo under the /etc/yum.repos.d/ directory. It should look like:

[dvdiso]
name=RedHat DVD ISO
baseurl=file:///mnt/dvd
enabled=1
gpgcheck=1
gpgkey=file:///mnt/dvd/RPM-GPG-KEY-redhat-6

Make sure you check the path of the GPG key on your ISO and edit accordingly. The baseurl path will be the directory where the repodata directory and gpg file live.

That's it! Your repo is ready. You can check it using the yum repolist command.

# yum repolist
Loaded plugins: refresh-packagekit, security
...
repo id                          repo name                                status
dvdiso                         RedHat DVD ISO                             25,459

In the above output, you can see the repo is identified by yum. Now you can try installing any software from it with the yum install command.

Make sure your ISO is always mounted on the system, even after a reboot (add an entry in /etc/fstab), for this repo to keep working.
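A hedged /etc/fstab sketch for loop-mounting the ISO at boot (the ISO path is an assumption) –

/iso/rhel-server-6-x86_64-dvd.iso   /mnt/dvd   iso9660   loop,ro   0 0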

YUM repo configuration for http repo

Many official and unofficial repositories are hosted on the internet and can be accessed over the HTTP protocol. These repositories are large and may contain more packages than your DVD has. To use them in yum, your server should have an active internet connection and should be able to connect to the HTTP locations you are trying to configure.

Once connectivity is confirmed, create a new repo file for them, e.g. named weblocations.repo, under the directory /etc/yum.repos.d/ with content as below (for example):

[centos]
name=CentOS Repository
baseurl=http://mirror.cisp.com/CentOS/6/os/i386/
enabled=1
gpgcheck=1
gpgkey=http://mirror.cisp.com/CentOS/6/os/i386/RPM-GPG-KEY-CentOS-6
[rhel-server-releases-optional]
name=Red Hat Enterprise Linux Server 6 Optional (RPMs)
mirrorlist=https://redhat.com/pulp/mirror/content/dist/rhel/rhui/server/6/$releasever/$basearch/optional/os
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
sslverify=1
sslclientkey=/etc/pki/rhui/content-rhel6.key
sslclientcert=/etc/pki/rhui/product/content-rhel6.crt
sslcacert=/etc/pki/rhui/cdn.redhat.com-chain.crt

In the above example, you can see 2 web locations configured in the repo file. The first is HTTP for CentOS, whereas the second one is RHEL, supplied with an https mirror list. Since the https protocol is used, other SSL-related config can be seen following it.

Time to check repo –

# yum repolist
Loaded plugins: rhui-lb, security
repo id                                                         repo name                                                                              status
centos                                                          CentOS Repository                                                                       5,062
rhui-REGION-rhel-server-releases-optional                       Red Hat Enterprise Linux Server 6 Optional (RPMs)                                      11,057

Both repos are identified by yum. The configuration is successful.

Read about yum server configuration for FTP, HTTP, and client-side yum configuration in our other articles.

YUM certificate error

If you have an issue with your Red Hat Network certificate, you will see the below error while executing yum commands.

The certificate /usr/share/rhn/ULN-CA-CERT is expired. Please ensure you have the correct certificate and your system time is correct.

You need to update rhn-client-tools package and it will update certificate details.

If rhn-client-tools package is not installed properly you may see below error while executing yum commands-

rhn-plugin: ERROR: can not find RHNS CA file: /usr/share/rhn/ULN-CA-CERT

In this case, you need to reinstall or update the rhn-client-tools package. If you are not using RHN on your server, you can even safely remove this package from the system and get yum working.

AutoFS configuration in Linux

On-demand NFS mounting utility: autofs. Learn what is autofs, why, and when to use autofs and autofs configuration steps in the Linux server.

Autofs configuration

The first place to manage mount points on any Linux system is the /etc/fstab file. This file mounts all listed mount points at system startup and makes them available to users. Although I mainly explain how autofs benefits us with NFS mount points, it also works well with native mount points.

NFS mount points are also part of it. Now, the issue is that even if users don't access NFS mount points, they are still mounted through /etc/fstab and leech some system resources in the background continuously; for example, NFS services need to keep checking connectivity, permissions, and other details of these mount points. If these NFS mounts are considerably high in number, then managing them through /etc/fstab is a major drawback, since you are allotting a chunk of system resources to a portion of the system that is not frequently used by users.

Why use AutoFS?

In such a scenario, AutoFS comes into the picture. AutoFS is an on-demand NFS mounting facility. In short, it mounts NFS mount points when a user tries to access them. Then, once the timeout value is hit (since the last activity on that NFS mount), it automatically un-mounts that NFS mount, saving the system resources spent serving an idle mount point.

It also reduces your system boot time since the mounting task is done after system boot and when the user demands it.

When use AutoFS?

  • If your system is having a large number of mount points
  • Many of them are not being used frequently
  • The system is tight on resources and every single piece of system resource counts

AutoFS configuration steps

First, you need to install the autofs package using yum or apt. The main configuration file for autofs is /etc/auto.master, which is also called the master map file. This file has the details of autofs-controlled mount points. The master map file follows the below format:

mount_point map_file options

where –

  • mount_point is a directory on which mounts should be mounted
  • map_file (automounter map file) is a file containing a list of mount points and their file systems from which they should be mounted
  • options are extra options to be applied on mount_point

Sample master map file looks like one below :

/my_auto_mount  /etc/auto.misc --timeout=60

In the above sample, mount points defined in the /etc/auto.misc file will be mounted under the /my_auto_mount directory with a timeout value of 60 seconds.

The map_file parameter (automounter map file) in the above master map file is itself a configuration file, which has the below format:

mount_point options source_location

where –

  • mount_point is a directory on which mounts should be mounted
  • options are mounting options
  • source_location is FS or NFS path from where the mount will be mounted

Sample automounter map file looks like one below :

linux          -ro,soft,intr           ftp.example.org:/pub/linux
data1         -fstype=ext3            :/dev/fd0

Users should be aware of the share paths. That means, in our case, users should know /my_auto_mount and the keys linux and data1 in order to access them.

In all, both these configuration files collectively tell autofs:

Whenever a user tries to access the mount point linux or data1 –

  1. autofs checks the data1 source (/dev/fd0) with the option (-fstype=ext3)
  2. mounts data1 on /my_auto_mount/data1
  3. Un-mounts /my_auto_mount/data1 when there is no activity on mount for 60 secs

Once you are done configuring your required mounts, you can start the autofs service or reload its configuration:

# /etc/init.d/autofs reload
Reloading maps
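If the autofs service is not yet installed or enabled, the initial setup on an init-script based system might look like this (on systemd-based distributions, the equivalent is systemctl enable --now autofs) –

# yum install autofs
# service autofs start
# chkconfig autofs on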

That’s it! Configuration is done!

Testing AutoFS configuration

Once you reload the configuration, check and you will notice that the autofs-defined mount points are not mounted on the system (see the output of df -h).

Now cd into /my_auto_mount/data1 and you will be presented with a listing of the content of data1 from /dev/fd0!

Another way is to use the watch utility in another session and keep a watch on the mount command. As you access the autofs paths, you will see the mount point get mounted on the system, and after the timeout value it is un-mounted!
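For example (a hedged one-liner using the mount point from our sample configuration) –

# watch -d 'mount | grep my_auto_mount'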

How-to guide: sudo configuration in Unix – Linux (with examples)

Learn how to secure your system and limit user access using sudo configuration. It helps restrict a normal user's superuser privileges to specific commands.

Many times there is a requirement where a normal user on the system needs superuser privileges to run some commands. There are options for this situation, like sharing the password of the superuser account so the user can su to that account, or giving the user UID 0, making him a superuser himself. Both options open a Pandora's box, granting the user limitless power on the system. This is dangerous and not at all a good practice: you compromise the whole system for a few commands. The alternative is sudo!

What is sudo ?

Sudo stands for 'superuser do'. Sudo grants superuser (or another user's) privileges to a user for specific or all commands. Normally sudo is used to grant superuser privileges to other users, hence 'superuser do' fits it perfectly. The beauty of sudo is that you can define user access command-wise, so the user is restricted to only the defined commands and your system is protected from the user doing things with root privileges without your knowledge.

Sudo configuration :

Let's see sudo configuration step by step. Here we will assign the user usr5 sudo permission to execute Apache bounce (restart) commands.

First of all, you need to check if sudo package is installed on your system or not.

# rpm -qa |grep  sudo (RHEL, CentOS, Fedora)
sudo-1.6.7p5-30.1.5
# dpkg -s sudo   (Debian, Ubuntu)
Package: sudo
Status: install ok installed
Priority: optional
---- output clipped ----

If not installed, then install it using yum or apt depending on your Linux distro.

Once installed, you will be able to edit the /etc/sudoers file, which is the sudo configuration file. This is a plain text file that can be opened using the vi editor, but it's recommended to edit it using the visudo command. visudo opens the /etc/sudoers file safely and maintains the integrity of the file, the same way the vipw command safely edits /etc/passwd.

# cat /etc/sudoers
# sudoers file.
#
# This file MUST be edited with the 'visudo' command as root.
#
# See the sudoers man page for the details on how to write a sudoers file.
#

# Host alias specification

# User alias specification

# Cmnd alias specification

# Defaults specification

# User privilege specification
root    ALL=(ALL) ALL

# Uncomment to allow people in group wheel to run all commands
# %wheel        ALL=(ALL)       ALL

# Same thing without a password
# %wheel        ALL=(ALL)       NOPASSWD: ALL

# Samples
# %users  ALL=/sbin/mount /cdrom,/sbin/umount /cdrom
# %users  localhost=/sbin/shutdown -h now

See above sample sudoers file.

We will see each section of this file one by one:

1: Host alias specification –

A host alias is a list of one or more hostnames, IP addresses, network numbers, or netgroups. An alias is defined so that a group of hosts can be referred to in the configuration with a single name.

Host_Alias SERVERS = 10.10.5.1, 10.10.5.2, testsrv1, testsrv3
Host_Alias NETWORK = 192.168.0.0/255.255.255.0

In the above example, we are defining the SERVERS alias for 4 machines declared using IP or hostname, so any sudo settings defined for SERVERS will be applicable to all 4 machines. This saves the hassle of writing all 4 machine details every time in the settings; writing SERVERS alone serves the purpose. We also define the NETWORK alias for the given address range.

2: User alias specification –

A user alias is a list of one or more users, groups, uids, etc.

User_Alias ADMINS = %admin
User_Alias USERS = user4, oracle65, testuser, #4523

In the above example, all users in the system group admin are covered under the alias ADMINS. We also define the USERS alias for 4 users; #4523 indicates the user with uid 4523.

3: Cmnd alias specification –

It's a list of command names, files, or directories. A command name is a complete command path, with wildcard support.

Cmnd_Alias ADMIN_CMDS = /usr/sbin/useradd, /usr/sbin/userdel, /usr/sbin/usermod
Cmnd_Alias APACHE_CMDS = /etc/init.d/apache2

In the above examples, we defined the ADMIN_CMDS and APACHE_CMDS aliases for the lists of commands in front of them.

4: User privilege section –

Here the actual sudo settings for users are defined. The line root    ALL=(ALL) ALL indicates that the account root can execute any command from any host as any user. If we want to allow usr5 to execute the Apache commands, then the line will be –

usr5    ALL=(ALL) NOPASSWD: APACHE_CMDS

Here usr5 is allowed to run the commands defined under the alias APACHE_CMDS, without a password, from all hosts. If NOPASSWD is not mentioned, the user will be prompted for his own password before executing a command, like below (RHEL).

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for <user>:

5: Run_as alias –

Here you define a list of users. This alias is used to run a command as a different user.
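A hedged sketch (the alias name and user names are assumptions) – this lets usr5 run any command as oracle or postgres, e.g. sudo -u oracle <command> –

Runas_Alias DBA = oracle, postgres
usr5    ALL=(DBA) ALL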

Examples :

Here are a few examples to understand how the config file works:

ADMINS ALL= /sbin/poweroff

Allows any ADMINS users to run poweroff command from any host.

%users  ALL=/sbin/mount /cdrom,/sbin/umount /cdrom

Allows users under group ‘users’ to mount and unmount /cdrom from any host.

testuser    SERVERS=(root) ADMIN_CMDS

Allows the user 'testuser' to run commands defined under ADMIN_CMDS, from the hosts defined under SERVERS, as the user root.

testuser ALL=(ALL) NOPASSWD: /usr/bin/su -

Allows user ‘testuser‘ to run command su - without any password. This is an example of how to add commands with arguments in sudo configuration.

Defaults targetpw

With Defaults targetpw set, sudo asks for the password of the target user (for example, root) instead of the invoking user's own password before executing the command. You need to un-comment or add the above parameter in the sudoers file.
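To verify what sudo permissions a user has ended up with, a quick hedged check from the root account is –

# sudo -l -U usr5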

How to configure telnet server in Linux

Step by step guide to configure a telnet server on Linux. Generally, SSH is preferred over telnet since it's more secure, and hence telnet is often not available out of the box.

Telnet (TELetype NETwork) is a network protocol used on the Internet or local area networks. It uses a virtual terminal connection and provides bidirectional interactive text-oriented communication. One can use telnet to log in remotely to another system locally or over the internet.

Caution: telnet opens an un-encrypted communication channel to your machine over the network. Avoid using telnet and opt for SSH for connectivity.

SSH, i.e. Secure SHell, is more secure than telnet. Hence, most Linux and Unix servers use SSH for user connectivity, and many installations don't even have telnet available out of the box.

This tutorial walks you through the process of configuring telnet on your Linux machine, but SSH is always more advisable than telnet for server connectivity, being more secure.

telnet server configuration :

Step 1:

As I said above, many installations don’t have telnet out of the box. You need to install the telnet package as a first step. Install telnet, telnet-server, and xinetd packages.

Use apt-get install telnetd for Debian and Ubuntu distros.

# yum install telnet telnet-server xinetd
Loaded plugins: amazon-id, rhui-lb, security
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package telnet.x86_64 1:0.17-48.el6 will be installed
---> Package telnet-server.x86_64 1:0.17-48.el6 will be installed
---> Package xinetd.x86_64 2:2.3.14-40.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================================================================================================
 Package                           Arch                       Version                             Repository                                            Size
=============================================================================================================================================================
Installing:
 telnet                            x86_64                     1:0.17-48.el6                       rhui-REGION-rhel-server-releases                      58 k
 telnet-server                     x86_64                     1:0.17-48.el6                       rhui-REGION-rhel-server-releases                      37 k
 xinetd                            x86_64                     2:2.3.14-40.el6                     rhui-REGION-rhel-server-releases                     122 k

Transaction Summary
=============================================================================================================================================================
Install       3 Package(s)

Total download size: 217 k
Installed size: 423 k
Is this ok [y/N]: y
Downloading Packages:
(1/3): telnet-0.17-48.el6.x86_64.rpm                                                                                                  |  58 kB     00:00
(2/3): telnet-server-0.17-48.el6.x86_64.rpm                                                                                           |  37 kB     00:00
(3/3): xinetd-2.3.14-40.el6.x86_64.rpm                                                                                                | 122 kB     00:00
-------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                        335 kB/s | 217 kB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : 2:xinetd-2.3.14-40.el6.x86_64                                                                                                             1/3
  Installing : 1:telnet-server-0.17-48.el6.x86_64                                                                                                        2/3
  Installing : 1:telnet-0.17-48.el6.x86_64                                                                                                               3/3
  Verifying  : 1:telnet-server-0.17-48.el6.x86_64                                                                                                        1/3
  Verifying  : 1:telnet-0.17-48.el6.x86_64                                                                                                               2/3
  Verifying  : 2:xinetd-2.3.14-40.el6.x86_64                                                                                                             3/3

Installed:
  telnet.x86_64 1:0.17-48.el6                      telnet-server.x86_64 1:0.17-48.el6                      xinetd.x86_64 2:2.3.14-40.el6

Complete!

Step 2:

Set services to start on boot.

# chkconfig telnet on
# chkconfig  xinetd  on

Restart services. inetd in case of Debian.

# service xinetd restart
Stopping xinetd:                                           [FAILED]
Starting xinetd:                                           [  OK  ]

Verify service is listening on your server.

# netstat -lptu|grep telnet
tcp        0      0 *:telnet                    *:*                         LISTEN      1618/xinetd

# lsof -i |grep telnet
xinetd    1618     root    5u  IPv6  13908      0t0  TCP *:telnet (LISTEN)

Step 3:

Connect to your server from a Windows machine using the telnet protocol. Open a command prompt and type telnet IP-address. You will be greeted with a login prompt and will be able to log in with an existing user.

If you are not able to connect via telnet, make sure no firewalls are blocking communication between your Windows machine and the telnet server on TCP port 23.
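On the Linux side, a hedged iptables example to allow telnet traffic (on firewalld or ufw based systems you would open the port with their respective tools instead) –

# iptables -I INPUT -p tcp --dport 23 -j ACCEPT
# service iptables save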