Yearly Archives: 2017

List WWN of online FC in HPUX server

A list of commands to check the WWN of online FC adapters on an HP-UX server. The article also includes a small script that does this task in seconds!

WWN of online FC in HPUX

For FC connectivity to storage on an HP-UX server, we must share the WWNs of the online FC ports. Getting a WWN is a three-step process:

Step 1:

Identify FC devices under ioscan output.

# ioscan -fnCfc
Class     I  H/W Path    Driver S/W State   H/W Type     Description
==================================================================
fc        0  2/0/10/1/0  fcd   CLAIMED     INTERFACE    HP AB379-60101 4Gb Dual Port PCI/PCI-X Fibre Channel Adapter (FC Port 1)
                        /dev/fcd0
fc        1  2/0/10/1/1  fcd   CLAIMED     INTERFACE    HP AB379-60101 4Gb Dual Port PCI/PCI-X Fibre Channel Adapter (FC Port 2)
                        /dev/fcd1
fc        2  2/0/12/1/0  fcd   CLAIMED     INTERFACE    HP AB379-60101 4Gb Dual Port PCI/PCI-X Fibre Channel Adapter (FC Port 1)
                        /dev/fcd2
fc        3  2/0/12/1/1  fcd   CLAIMED     INTERFACE    HP AB379-60101 4Gb Dual Port PCI/PCI-X Fibre Channel Adapter (FC Port 2)
                        /dev/fcd3

In the above output, you can see that /dev/fcd0 through /dev/fcd3 are the FC devices.

Step 2:

Check which FC ports are online, i.e. have cable connectivity, using fcmsutil output.

# fcmsutil /dev/fcd0

                           Vendor ID is = 0x1077
                           Device ID is = 0x2422
            PCI Sub-system Vendor ID is = 0x103C
                   PCI Sub-system ID is = 0x12D7
                               PCI Mode = PCI-X 133 MHz
                       ISP Code version = 5.4.0
                       ISP Chip version = 3
                               Topology = PTTOPT_FABRIC
                             Link Speed = 4Gb
                     Local N_Port_id is = 0x010300
                  Previous N_Port_id is = None
            N_Port Node World Wide Name = 0x50060b00006975ed
            N_Port Port World Wide Name = 0x50060b00006975ec
            Switch Port World Wide Name = 0x200300051e046c0f
            Switch Node World Wide Name = 0x100000051e046c0f
              N_Port Symbolic Port Name = server1_fcd0
              N_Port Symbolic Node Name = server1_HP-UX_B.11.31
                           Driver state = ONLINE
                       Hardware Path is = 2/0/10/1/0
                     Maximum Frame Size = 2048
         Driver-Firmware Dump Available = NO
         Driver-Firmware Dump Timestamp = N/A
                                   TYPE = PFC
                         NPIV Supported = YES
                         Driver Version = @(#) fcd B.11.31.1103 Dec  6 2010

Check the Driver state field in the above output. If it is ONLINE, this FC port has cable connectivity. If it shows Awaiting Link Up, it does not have cable connectivity.

Step 3:

If it is online, read its WWN from the N_Port Port World Wide Name value. That's it! So the WWN of the above FC port is 0x50060b00006975ec.
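If you only want those two fields for a single adapter, a small convenience one-liner (a sketch; the pattern matches the two lines shown in the output above):

```shell
# Print driver state and port WWN for one FC device
fcmsutil /dev/fcd0 | grep -E 'Driver state|N_Port Port World Wide Name'
```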

I have compiled all the above steps into a single script that you can run to get the WWNs of online FC ports in seconds.

Test the script on a test server first. Run it at your own risk.

Sample output :

# sh test.sh

FC : /dev/fcd0
0x50060b00006975ec

FC : /dev/fcd2
0x50060b00006973c8
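The script itself is not reproduced here, but a minimal sketch that would produce output like the above (a hypothetical reconstruction; test it on a non-production HP-UX server first) could be:

```shell
#!/bin/sh
# Sketch only: loop over FC device files reported by ioscan and print
# the port WWN of every adapter whose driver state is ONLINE.
for fc in $(ioscan -fnCfc | awk '$1 ~ /^\/dev\/fcd/ {print $1}'); do
    state=$(fcmsutil "$fc" | awk -F= '/Driver state/ {gsub(/ /,"",$2); print $2}')
    if [ "$state" = "ONLINE" ]; then
        echo "FC : $fc"
        fcmsutil "$fc" | awk -F= '/N_Port Port World Wide Name/ {gsub(/ /,"",$2); print $2}'
        echo
    fi
done
```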

pvcreate error: Device /dev/xyz not found (or ignored by filtering).

Solution for pvcreate error:  Device /dev/xyz not found (or ignored by filtering). Troubleshooting steps and resolution for this error.

Solution for pvcreate error: Device /dev/xyz not found (or ignored by filtering).

Sometimes, when adding a new disk/LUN to a Linux machine using pvcreate, you may come across the below error:

  Device /dev/xyz not found (or ignored by filtering).

# pvcreate /dev/sdb
  Device /dev/sdb not found (or ignored by filtering).

This happens because the disk was previously used by another volume manager or still carries an old partition table (possibly created with fdisk), and now you are trying to use it in LVM. To resolve this error, first check whether it has existing partitions using the fdisk command:

# fdisk /dev/sdb

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): p

Disk /dev/sdb: 859.0 GB, 858993459200 bytes
255 heads, 63 sectors/track, 104433 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x62346fee6

    Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      104433   838858041   83  Linux

In the above example, we print the current partition table of the disk using the p option in the fdisk menu.

You can see that fdisk detects one primary partition. Because of this, the LVM command to initialize the disk (pvcreate) failed.

To resolve this, you need to remove the partition and then initialize the disk in LVM. To delete the partition, use the d option in the fdisk menu. Note that deleting the partition destroys any data on it.

# fdisk /dev/sdb

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): d
Selected partition 1

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

After issuing the delete (d) command in the fdisk menu, you need to write (w) the changes to disk. This removes the existing partition from the disk. You can use the print (p) option once again to make sure no fdisk partition remains on the disk.

You can now use the disk in LVM without any issue.

# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created

If this solution doesn't work for you, or if there were no partitions on the disk previously and you still get this error, then you may want to look at your multipath and LVM filter configurations. The hint is in the verbose pvcreate output, which shows where the command is failing: use pvcreate -vvv /dev/<name>.
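One common cause on the LVM side is the device filter in /etc/lvm/lvm.conf: if a filter line rejects the device path, pvcreate reports the device as "not found (or ignored by filtering)". An illustrative restrictive filter (an example, not from this article):

```
# /etc/lvm/lvm.conf (illustrative snippet)
devices {
    # accepts only multipath devices and rejects everything else,
    # so a plain /dev/sdb would be "ignored by filtering"
    filter = [ "a|/dev/mapper/mpath.*|", "r|.*|" ]
}
```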

YUM cheat sheet

All YUM-related articles in one place! A helpful YUM cheat sheet to learn, understand, and revise YUM-related sysadmin tasks on a single page.

YUM cheat sheet

YUM stands for Yellowdog Updater, Modified. It is a package management tool for RPM-based systems. It has the below list of features that make it a must-use for every sysadmin.

  1. Simple install, uninstall, and upgrade operations for packages
  2. Automatically resolves software dependencies while installing or upgrading
  3. Looks for more than one source for software (supports multiple repositories)
  4. Supports CLI and GUI
  5. Automatically detects the architecture of the system and searches for the best-fit software version
  6. Works well with remote (network connectivity) and local (without network connectivity) repositories

In this article, I am gathering all YUM-related posts in one place so that you don't have to search for them throughout the site!

Package Operations

  1. How to install package
  2. How to upgrade package
  3. How to remove package

Configurations

  1. YUM server configuration
  2. YUM config basics
  3. Package naming conventions
  4. Configure internet proxy for YUM

Services

  1. Automatic scheduled package updates 
  2. Download only packages without installing

Miscellaneous

  1. How to check if package is installed

How to configure yum server in Linux

Learn to configure the yum server in RPM-based Linux systems. The article explains yum server configs over HTTP and FTP protocol.

YUM server Configuration

In our last article, we saw yum configuration. We learned what yum is, why to use it, what a repository is, yum config file locations, the config file format, and how to configure DVD and HTTP locations as repositories. In this article, we will walk through YUM server configuration, i.e. configuring serverA as a YUM server so that other clients can use serverA as a repo location.

In this article, we will see how to set up a yum server over the FTP and HTTP protocols. Before proceeding with the configuration, make sure you have the three packages deltarpm, python-deltarpm, and createrepo installed on your yum server.

YUM server http configuration

First of all, we need to install a web server on the system so that it can serve pages over HTTP. Install the httpd package using yum. Post-installation you will have the /var/www/html directory, which is the home of your web server. Create a packages directory within it to hold all the packages. Now we have the /var/www/html/packages directory to hold the packages of our YUM server.

Start the httpd service and verify that you are able to access http://ip-address/packages in a browser. It should look like below:

Webserver directory listing

Now we need to copy the package files (.rpm) into this directory. You can copy them manually from your OS DVD, or download them using wget from official online package mirrors. Once you populate the /var/www/html/packages directory with .rpm files, they are available for download from the browser, but YUM won't yet be able to recognize them.

For YUM (on the client side) to fetch packages from the above directory, you need to create an index of these files (repository metadata). You can create it using the below command:

# createrepo /var/www/html/packages/
Spawning worker 0 with 3 pkgs
Workers Finished
Gathering worker results
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete

Here I kept only 3 RPMs in the directory, so you can see the worker spawned with 3 pkgs. After the above command completes, you can observe that a repodata directory has been created inside the packages directory. It contains the repository detail files, including the XML metadata.

# ll /var/www/html/packages/repodata/
total 40
-rw-r--r--. 1 root root 10121 Mar 23 15:38 196f88dd1e6b0b74bbd8b3a689e77a8f632650da7fa77db06f212536a2e75096-primary.sqlite.bz2
-rw-r--r--. 1 root root  4275 Mar 23 15:38 1fc168d13253247ba15d45806c8f33bfced19bb1bf5eca54fb1d6758c831085f-filelists.sqlite.bz2
-rw-r--r--. 1 root root  2733 Mar 23 15:38 59d6b723590f73c4a65162c2f6f378bae422c72756f3dec60b1c4ef87f954f4c-filelists.xml.gz
-rw-r--r--. 1 root root  3874 Mar 23 15:38 656867c9894e31f39a1ecd3e14da8d1fbd68bbdf099e5a5f3ecbb581cf9129e5-other.sqlite.bz2
-rw-r--r--. 1 root root  2968 Mar 23 15:38 8d9cb58a2cf732deb12ce3796a5bc71b04e5c5c93247f4e2ab76bff843e7a747-primary.xml.gz
-rw-r--r--. 1 root root  2449 Mar 23 15:38 b30ec7d46fafe3d5e0b375f9c8bc0df7e9e4f69dc404fdec93777ddf9b145ef3-other.xml.gz
-rw-r--r--. 1 root root  2985 Mar 23 15:38 repomd.xml

Now your location http://ip-address/packages is ready to be used by client-side YUM to fetch packages. The next thing is to configure another Linux machine (the client) with this HTTP path as a repo and try installing the packages you kept in the packages directory.
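On the client side, that amounts to a .repo file pointing at the HTTP path; a minimal example (repo id, name, and file name are arbitrary, and gpgcheck is disabled here for simplicity):

```
# /etc/yum.repos.d/myserver.repo (example)
[myhttprepo]
name=Local YUM server over HTTP
baseurl=http://ip-address/packages
enabled=1
gpgcheck=0
```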

YUM server ftp configuration

In the FTP scenario, we keep the packages accessible to other machines over the FTP protocol rather than HTTP. You need to configure an FTP server and keep the packages directory in the FTP share.

Go through the createrepo step explained above for the FTP share directory. Once done, you can configure the client with the FTP address to fetch packages from the yum server. The repo location entry in the client's repo configuration file will be:

baseurl=ftp://ip-address/ftp-share
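A complete client-side repo file for the FTP case could look like the below (repo id and the ftp-share path are placeholders):

```
# /etc/yum.repos.d/ftprepo.repo (example)
[myftprepo]
name=Local YUM server over FTP
baseurl=ftp://ip-address/ftp-share
enabled=1
gpgcheck=0
```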

YUM configuration in Linux

Learn YUM configuration in Linux. Understand what is yum, features of yum, what is a repository, and how to configure it.

YUM Configuration

YUM stands for Yellowdog Updater, Modified. It is developed to maintain RPM-based systems; RPM is the Red Hat Package Manager. YUM is a package manager with the below features:

  1. Simple install, uninstall, and upgrade operations
  2. Automatically resolves software dependencies
  3. Looks for more than one source for software
  4. Supports CLI and GUI
  5. Automatically detects the architecture of the system and searches for the best-fit software version
  6. Works well with remote (network connectivity) and local (without network connectivity) repositories

All these features make it one of the best package managers. In this article, we will walk through yum configuration steps.

YUM configuration basics

Yum configuration is built around repositories. Repositories are the places where the package files (.rpm) are located; yum searches repositories and downloads files from them for installation. A repository can be a local mount point (file://path), a remote FTP location (ftp://link), an HTTP location (http://link or http://login:password@link), an HTTPS link, or a remote NFS mount point.

The yum configuration file is /etc/yum.conf, and repository configuration files are located under the /etc/yum.repos.d/ directory. All repository configuration files must have the .repo extension so that yum can identify them and read their configuration.

A typical repo configuration file entry looks like below:

[rhel-source-beta]
name=Red Hat Enterprise Linux $releasever Beta - $basearch - Source
baseurl=ftp://ftp.redhat.com/pub/redhat/linux/beta/$releasever/en/os/SRPMS/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta,file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

where –

  • [rhel-source-beta] is a unique repository id
  • name is a human-readable repository name
  • baseurl is the location from which packages are scanned and fetched
  • enabled denotes whether this repo is enabled, i.e. whether yum should use it
  • gpgcheck enables/disables the GPG signature check
  • gpgkey is the location of the GPG key

Of these, the first four entries are mandatory for every repo location. Let's see how to create a repo from a DVD ISO file.

Remember, one repo configuration file can have more than one repository listed.

You can even configure internet proxy for yum in this configuration file.
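For instance, yum's global config file accepts proxy settings; an illustrative snippet (proxy host and credentials are placeholders):

```
# /etc/yum.conf (illustrative proxy settings)
proxy=http://proxy.example.com:3128
proxy_username=user
proxy_password=pass
```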

YUM repo configuration for DVD ISO

An RPM-based Linux installation DVD has the RPM files used to install packages at OS installation time. We can use these packages to build our own repo so that yum can use them!

First, you have to mount the ISO file on the system. Let's assume we have mounted it on /mnt/dvd. Now we have to create a yum repo file for it. Let's create the file dvdiso.repo under the /etc/yum.repos.d/ directory. It should look like:

[dvdiso]
name=RedHat DVD ISO
baseurl=file:///mnt/dvd
enabled=1
gpgcheck=1
gpgkey=file:///mnt/dvd/RPM-GPG-KEY-redhat-6

Make sure you check the path of the GPG key on your ISO and edit it accordingly. The baseurl path should be the directory where the repodata directory and the GPG key file live.

That's it! Your repo is ready. You can check it using the yum repolist command.

# yum repolist
Loaded plugins: refresh-packagekit, security
...
repo id                          repo name                                status
dvdiso                         RedHat DVD ISO                             25,459

In the above output, you can see the repo is identified by yum. Now you can try installing any software from it with the yum install command.

Make sure your ISO is always mounted on the system, even after a reboot (add an entry in /etc/fstab), for this repo to keep working.
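An /etc/fstab entry for such a persistent loop mount might look like this (the ISO path is a placeholder):

```
# /etc/fstab (example entry)
/path/to/rhel-dvd.iso  /mnt/dvd  iso9660  loop,ro  0 0
```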

YUM repo configuration for http repo

Many official and unofficial repositories are hosted on the internet and can be accessed over the HTTP protocol. These repositories are large and may contain more packages than your DVD does. To use them in yum, your server should have an active internet connection and be able to reach the HTTP locations you are trying to configure.

Once connectivity is confirmed, create a new repo file for them, e.g. weblocations.repo, under the directory /etc/yum.repos.d/ with content as below (for example):

[centos]
name=CentOS Repository
baseurl=http://mirror.cisp.com/CentOS/6/os/i386/
enabled=1
gpgcheck=1
gpgkey=http://mirror.cisp.com/CentOS/6/os/i386/RPM-GPG-KEY-CentOS-6
[rhel-server-releases-optional]
name=Red Hat Enterprise Linux Server 6 Optional (RPMs)
mirrorlist=https://redhat.com/pulp/mirror/content/dist/rhel/rhui/server/6/$releasever/$basearch/optional/os
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
sslverify=1
sslclientkey=/etc/pki/rhui/content-rhel6.key
sslclientcert=/etc/pki/rhui/product/content-rhel6.crt
sslcacert=/etc/pki/rhui/cdn.redhat.com-chain.crt

In the above example, you can see two web locations configured in one repo file. The first is an HTTP repo for CentOS, whereas the second is a RHEL repo supplied with an HTTPS mirror list. Since the HTTPS protocol is used, the related SSL configuration entries follow it.

Time to check the repos:

# yum repolist
Loaded plugins: rhui-lb, security
repo id                                                         repo name                                                                              status
centos                                                          CentOS Repository                                                                       5,062
rhui-REGION-rhel-server-releases-optional                       Red Hat Enterprise Linux Server 6 Optional (RPMs)                                      11,057

Both repos are identified by yum. The configuration is successful.

Read about yum server configuration for FTP, HTTP, and client-side yum configuration in our other articles.

YUM certificate error

If there is an issue with your Red Hat Network certificate, you will see the below error while executing yum commands:

The certificate /usr/share/rhn/ULN-CA-CERT is expired. Please ensure you have the correct certificate and your system time is correct.

You need to update the rhn-client-tools package, which will update the certificate details.

If the rhn-client-tools package is not installed properly, you may see the below error while executing yum commands:

rhn-plugin: ERROR: can not find RHNS CA file: /usr/share/rhn/ULN-CA-CERT

In this case, you need to reinstall or update the rhn-client-tools package. If you are not using RHN on your server, you can even safely remove this package from the system and get your yum working.

lolcat: a tool to rainbow color Linux terminal

Paint your command outputs in rainbow colors! Use the lolcat tool (a Ruby gem) and add some spice to your black PuTTY terminal!

Rainbow color outputs with lolcat

Another article about having some fun in your Linux terminal. In the past, we have covered a few such articles.

In this article, we will cover the lolcat command, which colors your terminal text in rainbow fashion! See the below GIF to start with:

lolcat command sample output

See how the lolcat command colors output in a rainbow color scheme!

lolcat is available for download from its Git repository. Let's set up lolcat on your server.

How to install lolcat tool

lolcat is a Ruby gem, hence you need to install Ruby first. Install the packages rubygems, ruby-devel, and ruby on your system using yum or apt-get. Once they are installed, download the latest version of lolcat from its Git repository using wget or any other Linux downloader.

Once downloaded, unzip it:

# unzip master.zip
Archive:  master.zip
dfc68649f6bdac255d5be052d2123f3fbe3f555c
   creating: lolcat-master/
 extracting: lolcat-master/.gitignore
  inflating: lolcat-master/Gemfile
  inflating: lolcat-master/LICENSE
  inflating: lolcat-master/README.md
 extracting: lolcat-master/Rakefile
   creating: lolcat-master/ass/
  inflating: lolcat-master/ass/screenshot.png
   creating: lolcat-master/bin/
  inflating: lolcat-master/bin/lolcat
   creating: lolcat-master/lib/
  inflating: lolcat-master/lib/lolcat.rb
   creating: lolcat-master/lib/lolcat/
  inflating: lolcat-master/lib/lolcat/cat.rb
  inflating: lolcat-master/lib/lolcat/lol.rb
 extracting: lolcat-master/lib/lolcat/version.rb
  inflating: lolcat-master/lolcat.gemspec

and install it using RubyGems:

# cd lolcat-master/bin
# gem install lolcat
Successfully installed lolcat-42.24.0
Parsing documentation for lolcat-42.24.0
1 gem installed

This confirms your successful installation of lolcat!

lolcat command to rainbow color output!

It's time to see lolcat in action. You can pipe any output of your choice to it, and it will color the command output in rainbow colors (a few examples below)!

# ps -ef |lolcat
# date | lolcat

Want some more fun?

lolcat comes with a few options that make it even more fun on the terminal. Run a command with the -a (animate) option, optionally with -d to control the duration, and it will color your output in a running animation.

Running colors in terminal using lolcat

You can even combine it with text banners like figlet or toilet and have fun!

How to find the process using high memory in Linux

Learn how to find the processes using high memory on a Linux server. This helps in tracking down issues and troubleshooting utilization problems.

Find process using high memory in Linux

Many times you come to know that system memory is highly utilized via a utility like sar, and you want to find the processes hogging memory. To find them, in this article we will use the sort option of the process status (ps) command, sorting ps output by RSS values. RSS is the Resident Set Size: it shows how much physical RAM is allocated to a particular process and does not include swapped-out memory. Since we are troubleshooting processes using high physical memory, RSS fits our criteria.

Let's see the below example:

# ps aux --sort -rss |head -10
USER           PID %CPU %MEM    VSZ   RSS     TTY STAT START   TIME COMMAND
oracle_admin  14400  0.0 11.8 36937384 31420276 ?   Ss    2016  86:41 ora_mman_DB1
oracle_admin  14405  0.2 11.3 36993676 30023868 ?   Ss    2016 1676:11 ora_DB3
oracle_admin  14416  0.2 11.3 36993676 30023656 ?   Ss    2016 1722:47 ora_DB3
oracle_admin  14410  0.2 11.3 36993676 30020400 ?   Ss    2016 1702:09 ora_DB3
oracle_admin  14421  0.2 11.3 36993676 30018272 ?   Ss    2016 1754:25 ora_DB3
oracle_admin  14440  0.0 10.5 36946868 27887152 ?   Ss    2016 130:30 ora_mon_DB3
oracle_admin 15855  0.0  6.9 19232424 18298484 ?   Ss    2016  41:01 ora_mman_DB4
oracle_admin 15857  0.1  6.7 19288720 17966276 ?   Ss    2016 161:45 ora_DB4
oracle_admin 15864  0.1  6.7 19288720 17964584 ?   Ss    2016 173:36 ora_DB4

In the above output, we sorted processes by RSS and showed only the top entries. The RSS values in the output are in KB. Let's verify this output for the topmost process, with PID 14400.

# free
             total       used       free     shared    buffers     cached
Mem:     264611456   96146728  168464728          0    1042972   75377436
-/+ buffers/cache:   19726320  244885136
Swap:     67108860     539600   66569260

On our system, we have 264611456 KB of physical RAM (the Mem total in the above output). Of that, 11.8% is used by process 14400 (from the ps output above), which comes to about 31224152 KB. This closely matches the RSS value of 31420276 KB in the ps output; the small difference exists because %MEM is rounded to one decimal place.

So the above method works well when you are trying to find the processes using the most physical memory on the system!
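The manual cross-check above can also be scripted on Linux; a small sketch (PID is whatever process you want to verify; here it defaults to the current shell) that computes RSS as a percentage of MemTotal, which should land close to ps's %MEM column:

```shell
#!/bin/sh
# Compare a process's RSS against total RAM, mirroring ps's %MEM column.
PID=$$                                                  # replace with the PID to verify
total=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)    # total RAM in KB
rss=$(ps -o rss= -p "$PID")                             # RSS in KB
echo "$rss $total" | awk '{printf "RSS %d KB = %.1f%% of %d KB RAM\n", $1, $1/$2*100, $2}'
```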

You can also find high-memory processes using tools like top, htop, etc., but this article aimed at using ps.

Watch command to execute script/shell command repeatedly

Learn the watch command to execute a script or shell command repeatedly every n seconds. Very useful in automation and monitoring.

watch command and its examples

The watch command is a small utility with which you can execute a shell command or script repeatedly, every n seconds. It is helpful in automation and monitoring. One can design automation by monitoring some command output using watch to trigger the next course of action, e.g. a notification.

The watch command is part of the procps package. It is normally bundled with the OS, but you can still verify that the package is installed on the system. The utility is used by issuing the watch command followed by the command or script name to execute.

Watch command in action

For example, I created a small script that continuously writes junk data into a file placed under /. This changes the utilization numbers in df -k output. In the above GIF, you can see the changes in the "Used" and "Available" columns of df -k output when monitored with the watch command.

In the output, you can see –

  1. The default time interval is 2 seconds, as shown in the first line
  2. The time interval is followed by the command being executed by watch
  3. The current date and time of the server on the right-hand side
  4. The output of the command being executed

Go through the below watch command examples to understand how flexible watch is.

Different options of watch

Now, to change the default time interval, use the -n option followed by a time interval of your choice. To execute a command every 20 seconds you can use:

# watch -n 20 df -k
Every 20.0s: df -k                      Mon Mar 20 15:00:47 2017

Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/xvda1       6061632 2194812   3552248  39% /
tmpfs             509252       0    509252   0% /dev/shm

In the above output, you can see the interval is changed to 20 seconds (first line).

If you want to hide the header in the output (i.e. the time interval, the command being executed, and the current server date and time), use the -t option. It strips off the first line of the output.

# watch -t df -k
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/xvda1       6061632 2194812   3552248  39% /
tmpfs             509252       0    509252   0% /dev/shm

Highlighting differences between the current and previous output is made easy with the -d option. To understand this, watch the below GIF:

watch command with -d option

In the above output, I used the same data-writing script to fill /. You can observe that only the portion which differs from the previous output is highlighted by watch!

AutoFS configuration in Linux

On-demand NFS mounting utility: autofs. Learn what autofs is, why and when to use it, and the autofs configuration steps on a Linux server.

Autofs configuration

The first place to manage mount points on any Linux system is the /etc/fstab file. Mount points listed there are mounted at system startup and made available to users. Although this article mainly explains how autofs benefits us with NFS mount points, it works well with local mount points too.

NFS mount points are also commonly listed in /etc/fstab. The issue is that even if users don't access those NFS mount points, they are still mounted at boot and continuously leech some system resources in the background; for example, NFS services keep checking connectivity, permissions, and other details for these mount points. If the number of NFS mounts is considerably high, managing them through /etc/fstab becomes a major drawback, since you are allotting a chunk of system resources to mounts that users do not frequently use.

Why use AutoFS?

In such a scenario, AutoFS comes into the picture. AutoFS is an on-demand NFS mounting facility. In short, it mounts NFS mount points when a user tries to access them. Then, once the timeout value is reached (counted from the last activity on that NFS mount), it automatically unmounts the share, saving the system resources that would otherwise serve an idle mount point.

It also reduces your system boot time, since the mounting task is done after system boot, when the user demands it.

When to use AutoFS?

  • Your system has a large number of mount points
  • Many of them are not used frequently
  • The system is tight on resources, and every bit of system resource counts

AutoFS configuration steps

First, you need to install the autofs package using yum or apt. The main configuration file for autofs is /etc/auto.master, which is also called the master map file. This file holds the details of autofs-controlled mount points. The master file follows the below format:

mount_point map_file options

where –

  • mount_point is a directory on which mounts should be mounted
  • map_file (automounter map file) is a file containing a list of mount points and their file systems from which they should be mounted
  • options are extra options to be applied on mount_point

Sample master map file looks like one below :

/my_auto_mount  /etc/auto.misc --timeout=60

In the above sample, mount points defined in the /etc/auto.misc file can be mounted under the /my_auto_mount directory, with a timeout value of 60 seconds.

The map_file parameter (automounter map file) referenced in the master map file is itself a configuration file with the below format:

mount_point options source_location

where –

  • mount_point is a directory on which mounts should be mounted
  • options are mounting options
  • source_location is FS or NFS path from where the mount will be mounted

Sample automounter map file looks like one below :

linux          -ro,soft,intr           ftp.example.org:/pub/linux
data1         -fstype=ext3            :/dev/fd0

Users should be aware of the share paths. That is, in our case, /my_auto_mount and the keys linux and data1 should be known to users in order to access them.

Together, both these configuration files say that whenever a user tries to access mount point linux or data1 –

  1. autofs checks the data1 source (/dev/fd0) with the option (-fstype=ext3)
  2. mounts data1 on /my_auto_mount/data1
  3. unmounts /my_auto_mount/data1 when there is no activity on the mount for 60 seconds

Once you are done configuring your required mounts, you can start the autofs service and reload its configuration:

# /etc/init.d/autofs reload
Reloading maps

That’s it! Configuration is done!

Testing AutoFS configuration

Once you reload the configuration, check and you will notice that the autofs-defined mount points are not yet mounted on the system (see the output of df -h).

Now cd into /my_auto_mount/data1 and you will be presented with a listing of the contents of data1 from /dev/fd0!

Another way is to use the watch utility in another session to keep a watch on the mount command. As you access the path, you will see the mount point get mounted on the system, and after the timeout value it is unmounted!

AWS cloud terminology

Understand AWS cloud terminology of 71 services! Get acquainted with terms used in the AWS world to start with your AWS cloud career!

AWS Cloud terminology

AWS, i.e. Amazon Web Services, is a cloud platform providing a list of web services on a pay-per-use basis. It is one of the most famous cloud platforms to date. Due to its flexibility, availability, elasticity, scalability, and low maintenance, many corporations are moving to the cloud. Since so many companies use these services, it becomes necessary for sysadmins and DevOps engineers to be aware of AWS.

This article aims at listing services provided by AWS and explaining the terminology used in the AWS world.

As of today, AWS offers a total of 71 services, grouped into 17 categories as below:

Compute

This is cloud computing, i.e. virtual server provisioning. The group provides the below services.

  1. EC2: EC2 stands for Elastic Compute Cloud. This service provides scalable virtual machines per your requirements.
  2. EC2 Container Service: A high-performance, highly scalable service that allows running containerized services on a clustered EC2 environment.
  3. Lightsail: This service enables users to launch and manage virtual servers (EC2) very easily.
  4. Elastic Beanstalk: This service manages capacity provisioning, load balancing, scaling, and health monitoring of your application automatically, thus reducing your management load.
  5. Lambda: It allows you to run your code only when needed, without managing servers for it.
  6. Batch: It enables users to run computing workloads (batches) in a customized, managed way.

Storage

This is the cloud storage facility provided by Amazon. The group includes:

  1. S3: S3 stands for Simple Storage Service (three S's). It provides online storage to store/retrieve any data, at any time, from anywhere.
  2. EFS: EFS stands for Elastic File System. It's online storage that can be used with EC2 servers.
  3. Glacier: It's a low-cost, slow-retrieval data storage solution mainly aimed at archives and long-term backups.
  4. Storage Gateway: It's an interface that connects your on-premises applications (hosted outside AWS) with AWS storage.

Database

AWS also offers to host databases on its infrastructure so that clients can benefit from the cutting-edge technology Amazon has for faster, more efficient, and more secure data processing. This group includes:

  1. RDS: RDS stands for Relational Database Service. Helps to set up, operate, and manage a relational database in the cloud.
  2. DynamoDB: It’s a NoSQL database providing fast processing and high scalability.
  3. ElastiCache: It’s a way to manage in-memory cache for your web applications to run them faster!
  4. Redshift: It’s a huge (petabyte-scale), fully scalable data warehouse service in the cloud.

Networking & Content Delivery

As AWS provides cloud EC2 servers, it follows that networking comes into the picture too. Content delivery is used to serve files to users from their geographically nearest location, a popular way of speeding up websites nowadays.

  1. VPC: VPC stands for Virtual Private Cloud. It’s your very own virtual network dedicated to your AWS account.
  2. CloudFront: It’s the content delivery network (CDN) by AWS.
  3. Direct Connect: It’s a dedicated network connection between your datacenter/premises and AWS, used to increase throughput, reduce network cost, and avoid connectivity issues that may arise with internet-based connectivity.
  4. Route 53: It’s a cloud Domain Name System (DNS) web service.

Migration

It’s a set of services to help you migrate from on-premises services to AWS. It includes :

  1. Application Discovery Service: A service dedicated to analyzing your servers, network, and applications to help speed up the migration.
  2. DMS: DMS stands for Database Migration Service. It is used to migrate your data from on-premises DB to RDS or DB hosted on EC2.
  3. Server Migration: Also called SMS (Server Migration Service), it is an agentless service that moves your workloads from on-premises to AWS.
  4. Snowball: Intended for use when you want to transfer huge amounts of data in/out of AWS using physical storage appliances (rather than internet/network-based transfers).

Developer Tools

As the name suggests, it’s a group of services that helps developers code more easily and effectively in the cloud.

  1. CodeCommit: It’s a secure, scalable, managed source control service to host code repositories.
  2. CodeBuild: Code builder in the cloud. Compiles source code, runs tests, and builds software packages for deployment.
  3. CodeDeploy: Deployment service to automate application deployments on AWS servers or on-premises.
  4. CodePipeline: Continuous delivery service that lets developers model and visualize the steps required to release their application.
  5. X-Ray: Analyzes and debugs distributed applications by tracing requests as they travel through them.

Management Tools

Group of services which helps you manage your web services in AWS cloud.

  1. CloudWatch: Monitoring service to monitor your AWS resources or applications.
  2. CloudFormation: Infrastructure as code! It’s a way of managing AWS-related infra in a collective and orderly manner.
  3. CloudTrail: Audit & compliance tool for AWS account.
  4. Config: AWS resource inventory, configuration history, and configuration change notifications to enable security and governance.
  5. OpsWorks: Automation for configuring and deploying EC2 or on-premises compute.
  6. Service Catalog: Create and manage catalogs of IT services that are approved for use in your/company account.
  7. Trusted Advisor: An automated advisor that inspects your AWS infra and recommends ways to make it better and more cost-effective.
  8. Managed Service: Provides ongoing infra management.
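To make the “infrastructure as code” idea behind CloudFormation concrete, here is a minimal template sketch (the stack description and the logical resource name `ExampleBucket` are hypothetical) that declares a single S3 bucket:

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal example stack with a single S3 bucket",
  "Resources": {
    "ExampleBucket": {
      "Type": "AWS::S3::Bucket"
    }
  }
}
```

Deploying such a template (for example with `aws cloudformation deploy --template-file template.json --stack-name example`) lets AWS create, update, and delete the declared resources together as a single stack.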

Security, Identity & Compliance

An important group of AWS services helping you secure your AWS space.

  1. IAM: IAM stands for Identity and Access Management. Controls user access to your AWS resources and services.
  2. Inspector: Automated security assessment service that helps improve the security and compliance of your apps on AWS.
  3. Certificate Manager: Provision, manage, and deploy SSL/TLS certificates for AWS applications.
  4. Directory Service: It’s managed Microsoft Active Directory in the AWS cloud.
  5. WAF & Shield: WAF stands for Web Application Firewall. Monitors and controls access to your content on CloudFront or Load balancer.
  6. Compliance Reports: Compliance reporting of your AWS infra space to make sure your apps and the infra are compliant with your policies.
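As an illustration of how IAM access control is expressed, here is a minimal policy document sketch (the bucket name `example-bucket` is hypothetical) that allows read-only access to the objects in one S3 bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```

Attached to an IAM user, group, or role, this grants only the `s3:GetObject` action on that bucket’s objects; anything not explicitly allowed is denied by default.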

Analytics

Data analytics of your AWS space to help you see, plan, act on happenings in your account.

  1. Athena: It’s a SQL-based query service to analyze data stored in S3.
  2. EMR: EMR stands for Elastic Map Reduce. Service for big data processing and analysis.
  3. CloudSearch: Managed search capability for your applications and services.
  4. Elasticsearch Service: A service to create a domain and deploy, operate, and scale Elasticsearch clusters in the AWS Cloud.
  5. Kinesis: Streams large amounts of data in real time.
  6. Data Pipeline: Helps to move data between different AWS services.
  7. QuickSight: Collect, analyze, and present insight into business data on AWS.

Artificial Intelligence

AI in AWS!

  1. Lex: Helps to build conversational interfaces in an application using voice and text.
  2. Polly: It’s a text-to-speech service.
  3. Rekognition: Gives you the ability to add image analysis to applications.
  4. Machine Learning: It has algorithms to learn patterns in your data.

Internet of Things

This group lets hardware devices connect to and interact with the AWS cloud.

  1. AWS IoT: It lets connected hardware devices interact with AWS applications.

Game Development

As the name suggests, this group of services aims at game development.

  1. Amazon GameLift: This service aims at deploying and managing dedicated game servers for session-based multiplayer games.

Mobile Services

Group of services mainly aimed at handheld devices.

  1. Mobile Hub: Helps you to create mobile app backend features and integrate them into mobile apps.
  2. Cognito: Controls mobile users’ authentication and access to AWS on internet-connected devices.
  3. Device Farm: Mobile app testing service that enables you to test apps across Android and iOS on real phones hosted by AWS.
  4. Mobile Analytics: Measure, track, and analyze mobile app data on AWS.
  5. Pinpoint: Targeted push notification and mobile engagements.

Application Services

It’s a group of services that can be used with your applications in AWS.

  1. Step Functions: Coordinates the components of distributed applications as a series of steps in a visual workflow.
  2. SWF: SWF stands for Simple Workflow Service. It’s a cloud workflow management service that helps developers coordinate and contribute at different stages of the application life cycle.
  3. API Gateway: Helps developers create, manage, and host APIs.
  4. Elastic Transcoder: Helps developers convert media files into formats playable on various devices.
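Step Functions workflows are written in the Amazon States Language; a minimal two-state sketch (the state names are hypothetical) looks like:

```json
{
  "Comment": "Hypothetical two-step workflow",
  "StartAt": "Validate",
  "States": {
    "Validate": {
      "Type": "Pass",
      "Next": "Done"
    },
    "Done": {
      "Type": "Succeed"
    }
  }
}
```

Each state hands its output to the next, and the service tracks state transitions, retries, and timeouts for you.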

Messaging

Notification and messaging services in AWS

  1. SQS: SQS stands for Simple Queue Service. Fully managed message queuing service to communicate between services and apps in AWS.
  2. SNS: SNS stands for Simple Notification Service. Push notification service for AWS users to alert them about their services in AWS space.
  3. SES: SES stands for Simple Email Service. It’s a cost-effective email service from AWS for its own customers.

Business Productivity

Group of services to help boost your business productivity.

  1. WorkDocs: Collaborative file sharing, storing, and editing service.
  2. WorkMail: Secure business mail and calendar service.
  3. Amazon Chime: Online business meetings!

Desktop & App Streaming

It’s desktop and application streaming over the cloud.

  1. WorkSpaces: Fully managed, secure desktop computing service in the cloud.
  2. AppStream 2.0: Stream desktop applications from the cloud.