How to configure yum server in Linux

Learn to configure the yum server in RPM-based Linux systems. The article explains yum server configuration over the HTTP and FTP protocols.

YUM server Configuration

In our last article, we covered yum configuration: what yum is, why to use it, what a repository is, yum config file locations and format, and how to configure a DVD or HTTP location as a repository. In this article, we will walk through YUM server configuration, i.e. configuring serverA as a YUM server so that other clients can use serverA as a repo location.

In this article, we will see how to set up a yum server over the FTP and HTTP protocols. Before proceeding with configuration, make sure you have three packages installed on your yum server: deltarpm, python-deltarpm, and createrepo.
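
If any are missing, a quick install sketch (assuming your server already has a working repo to pull them from):

# yum install deltarpm python-deltarpm createrepo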

YUM server HTTP configuration

First of all, we need to install a web server on the system so that it can serve pages over HTTP. Install the httpd package using yum. Post-installation you will have the /var/www/html directory, which is the home of your webserver. Create a packages directory within it to hold all packages. Now we have /var/www/html/packages to hold the packages of our YUM server.
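
A minimal sketch of these steps:

# yum install httpd
# mkdir /var/www/html/packages
# service httpd start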

Once httpd is running, verify you are able to access http://ip-address/packages in a browser. It should look like below :

Webserver directory listing

Now we need to copy package files (.rpm) into this directory. You can copy them manually from your OS DVD, or download them with wget from official online package mirrors. Once you populate the /var/www/html/packages directory with .rpm files, they are available for download in a browser, but YUM won't yet be able to recognize them.
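
For example, a download sketch (the mirror URL here is a placeholder; substitute your distro's actual mirror and package):

# cd /var/www/html/packages
# wget http://mirror.example.com/centos/6/os/x86_64/Packages/telnet-0.17-48.el6.x86_64.rpm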

For YUM (on the client side) to fetch packages from the above directory, you need to create an index of these files (XML metadata). You can create it using the below command –

# createrepo /var/www/html/packages/
Spawning worker 0 with 3 pkgs
Workers Finished
Gathering worker results
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete

Here I kept only 3 RPMs in the directory, so you can see worker 0 spawned with 3 pkgs! After the above command completes, you can observe that a repodata directory is created in the packages directory. It contains repo detail files along with the XML index.

# ll /var/www/html/packages/repodata/
total 40
-rw-r--r--. 1 root root 10121 Mar 23 15:38 196f88dd1e6b0b74bbd8b3a689e77a8f632650da7fa77db06f212536a2e75096-primary.sqlite.bz2
-rw-r--r--. 1 root root  4275 Mar 23 15:38 1fc168d13253247ba15d45806c8f33bfced19bb1bf5eca54fb1d6758c831085f-filelists.sqlite.bz2
-rw-r--r--. 1 root root  2733 Mar 23 15:38 59d6b723590f73c4a65162c2f6f378bae422c72756f3dec60b1c4ef87f954f4c-filelists.xml.gz
-rw-r--r--. 1 root root  3874 Mar 23 15:38 656867c9894e31f39a1ecd3e14da8d1fbd68bbdf099e5a5f3ecbb581cf9129e5-other.sqlite.bz2
-rw-r--r--. 1 root root  2968 Mar 23 15:38 8d9cb58a2cf732deb12ce3796a5bc71b04e5c5c93247f4e2ab76bff843e7a747-primary.xml.gz
-rw-r--r--. 1 root root  2449 Mar 23 15:38 b30ec7d46fafe3d5e0b375f9c8bc0df7e9e4f69dc404fdec93777ddf9b145ef3-other.xml.gz
-rw-r--r--. 1 root root  2985 Mar 23 15:38 repomd.xml

Now your location http://ip-address/packages is ready to be identified by a client's YUM to fetch packages. The next thing is to configure another Linux machine (client) with this HTTP path as a repo and try installing the packages you kept in the packages directory.
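
For reference, a minimal client-side repo file sketch (the repo id is arbitrary, and gpgcheck is disabled here since we haven't signed these packages with our own key):

[yumserver-http]
name=Local YUM server over HTTP
baseurl=http://ip-address/packages
enabled=1
gpgcheck=0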

YUM server FTP configuration

In the FTP scenario, we make packages accessible to other machines over the FTP protocol rather than HTTP. You need to configure an FTP server and keep the packages directory in the FTP share.

Go through the createrepo step explained above for the FTP share directory. Once done, you can configure the client with the FTP address to fetch packages from the yum server. The repo location entry in the client repo configuration file will be –

baseurl=ftp://ip-address/ftp-share
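
A complete client repo file for the FTP case might look like the sketch below (the repo id and share path are placeholders):

[yumserver-ftp]
name=Local YUM server over FTP
baseurl=ftp://ip-address/ftp-share
enabled=1
gpgcheck=0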

YUM configuration in Linux

Learn YUM configuration in Linux. Understand what yum is, its features, what a repository is, and how to configure one.

YUM Configuration

YUM stands for Yellowdog Updater, Modified. It was developed to maintain RPM-based systems (RPM being the Red Hat Package Manager). YUM is a package manager with the below features –

  1. Simple install, uninstall, and upgrade operations
  2. Automatically resolves software dependencies
  3. Looks in more than one source for software
  4. Supports CLI and GUI
  5. Automatically detects the architecture of the system and searches for the best-fit software version
  6. Works well with remote (network connectivity) and local (without network connectivity) repositories

These features make it a very capable package manager. In this article, we will walk through yum configuration steps.

YUM configuration basics

Yum configuration is built around repositories. Repositories are the places where package files (.rpm) are located; yum searches repositories and downloads files from them for installation. A repository can be a local mount point (file://path), a remote FTP location (ftp://link), an HTTP location (http://link or http://login:password@link), an HTTPS link, or a remote NFS mount point.

The yum configuration file is /etc/yum.conf, and repository configuration files are located under the /etc/yum.repos.d/ directory. All repository configuration files must have the .repo extension so that yum can identify them and read their configuration.

A typical repo configuration file entry looks like below :

[rhel-source-beta]
name=Red Hat Enterprise Linux $releasever Beta - $basearch - Source
baseurl=ftp://ftp.redhat.com/pub/redhat/linux/beta/$releasever/en/os/SRPMS/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta,file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

here –

  • [rhel-source-beta] is a unique repository id.
  • name is a human-readable repository name
  • baseurl is the location from which packages are scanned and fetched
  • enabled denotes whether this repo is enabled, i.e. whether yum should use it
  • gpgcheck enables/disables the GPG signature check
  • gpgkey is the location of the GPG key

Of these, the first 4 entries are mandatory for every repo location. Let's see how to create a repo from a DVD ISO file.

Remember, one repo configuration file can have more than one repository listed.

You can even configure an internet proxy for yum, either per repo or globally in /etc/yum.conf.
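
For example, a global proxy sketch in /etc/yum.conf (the host, port, and credentials here are placeholders):

[main]
proxy=http://proxy.example.com:3128
proxy_username=yumuser
proxy_password=secret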

YUM repo configuration for DVD ISO

An RPM-based Linux installation DVD contains the RPM files used to install packages at OS installation time. We can use these packages to build our own repo so that yum can use them!
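
First, you have to mount the ISO file on the system; a minimal sketch (the ISO path is a placeholder for your actual file):

# mount -o loop /root/rhel-dvd.iso /mnt/dvd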

Let's assume we have mounted it on /mnt/dvd. Now we have to create a yum repo file for it. Let's create the file dvdiso.repo under the /etc/yum.repos.d/ directory. It should look like :

[dvdiso]
name=RedHat DVD ISO
baseurl=file:///mnt/dvd
enabled=1
gpgcheck=1
gpgkey=file:///mnt/dvd/RPM-GPG-KEY-redhat-6

Make sure you check the path of the GPG key on your ISO and edit it accordingly. The baseurl path should be the directory where the repodata directory and GPG key file live.

That's it! Your repo is ready. You can check using the yum repolist command.

# yum repolist
Loaded plugins: refresh-packagekit, security
...
repo id                          repo name                                status
dvdiso                         RedHat DVD ISO                             25,459

In the above output, you can see the repo is identified by yum. Now you can try installing any software from it with the yum install command.

Make sure your ISO stays mounted on the system even after a reboot (add an entry in /etc/fstab) for this repo to keep working.
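
A sketch of such an /etc/fstab entry (again, the ISO path is a placeholder):

/root/rhel-dvd.iso  /mnt/dvd  iso9660  loop,ro  0 0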

YUM repo configuration for HTTP repo

There are many official and unofficial repositories hosted on the internet that can be accessed over the HTTP protocol. These repositories are large and may contain more packages than your DVD does. To use them with yum, your server should have an active internet connection and be able to connect to the HTTP locations you are trying to configure.

Once connectivity is confirmed, create a new repo file for them, e.g. weblocations.repo, under the /etc/yum.repos.d/ directory, with content as below (for example) :

[centos]
name=CentOS Repository
baseurl=http://mirror.cisp.com/CentOS/6/os/i386/
enabled=1
gpgcheck=1
gpgkey=http://mirror.cisp.com/CentOS/6/os/i386/RPM-GPG-KEY-CentOS-6
[rhel-server-releases-optional]
name=Red Hat Enterprise Linux Server 6 Optional (RPMs)
mirrorlist=https://redhat.com/pulp/mirror/content/dist/rhel/rhui/server/6/$releasever/$basearch/optional/os
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
sslverify=1
sslclientkey=/etc/pki/rhui/content-rhel6.key
sslclientcert=/etc/pki/rhui/product/content-rhel6.crt
sslcacert=/etc/pki/rhui/cdn.redhat.com-chain.crt

In the above example, you can see two web locations configured in the repo. The first is an HTTP location for CentOS, whereas the second is a Red Hat-supplied HTTPS mirror list. Since the HTTPS protocol is used, the related SSL configuration follows it.

Time to check repo –

# yum repolist
Loaded plugins: rhui-lb, security
repo id                                                         repo name                                                                              status
centos                                                          CentOS Repository                                                                       5,062
rhui-REGION-rhel-server-releases-optional                       Red Hat Enterprise Linux Server 6 Optional (RPMs)                                      11,057

Both repos are identified by yum. Configuration is successful.

Read about yum server configuration for FTP, HTTP, and client-side yum configuration in our other articles.

YUM certificate error

If you have an issue with your Red Hat Network certificate, you will see the below error while executing yum commands.

The certificate /usr/share/rhn/ULN-CA-CERT is expired. Please ensure you have the correct certificate and your system time is correct.

You need to update the rhn-client-tools package, which will refresh the certificate details.
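
A sketch of that update (assuming yum itself is still functional; otherwise install the updated package via rpm from media):

# yum update rhn-client-tools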

If the rhn-client-tools package is not installed properly, you may see the below error while executing yum commands –

rhn-plugin: ERROR: can not find RHNS CA file: /usr/share/rhn/ULN-CA-CERT

In this case, you need to reinstall or update the rhn-client-tools package. If you are not using RHN on your server, you can even safely remove this package from the system and get your yum working.

lolcat: a tool to rainbow color Linux terminal

Paint your command outputs with rainbow colors! Use the lolcat (Ruby gem) tool and add some spice to the black PuTTY terminal!

Rainbow color outputs with lolcat

Here's another article to have some fun in your Linux terminal.

In this article, we will cover the lolcat command, which colors your terminal text in rainbow fashion! See the below GIF to start with –

lolcat command sample output

See how the lolcat command colors output in a rainbow color scheme!

lolcat is available for download from its Git repository. Let's set up lolcat on your server.

How to install lolcat tool

lolcat is a Ruby gem, hence you need to install Ruby first. Install the packages rubygems, ruby-devel, and ruby on your system using yum or apt-get. Once they are successfully installed, download the latest version of lolcat from its Git repository using wget or any Linux downloader.
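
A sketch of those steps on a YUM-based system (assuming the upstream busyloop/lolcat repository on GitHub):

# yum install ruby rubygems ruby-devel
# wget https://github.com/busyloop/lolcat/archive/master.zip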

Once downloaded, unzip it

# unzip master.zip
Archive:  master.zip
dfc68649f6bdac255d5be052d2123f3fbe3f555c
   creating: lolcat-master/
 extracting: lolcat-master/.gitignore
  inflating: lolcat-master/Gemfile
  inflating: lolcat-master/LICENSE
  inflating: lolcat-master/README.md
 extracting: lolcat-master/Rakefile
   creating: lolcat-master/ass/
  inflating: lolcat-master/ass/screenshot.png
   creating: lolcat-master/bin/
  inflating: lolcat-master/bin/lolcat
   creating: lolcat-master/lib/
  inflating: lolcat-master/lib/lolcat.rb
   creating: lolcat-master/lib/lolcat/
  inflating: lolcat-master/lib/lolcat/cat.rb
  inflating: lolcat-master/lib/lolcat/lol.rb
 extracting: lolcat-master/lib/lolcat/version.rb
  inflating: lolcat-master/lolcat.gemspec

and install it using Ruby gems.

# cd lolcat-master/bin
# gem install lolcat
Successfully installed lolcat-42.24.0
Parsing documentation for lolcat-42.24.0
1 gem installed

This confirms your successful installation of lolcat!

lolcat command to rainbow color output!

It's time to see lolcat in action. You can pipe any output of your choice into it and it will color your command output in rainbow colors (a few examples below)!

# ps -ef |lolcat
# date | lolcat

Want some more fun?

lolcat comes with a few options which make it even more fun in the terminal. Run the command with the -a (animate) option, optionally adding -d to set the animation duration, and it will color your output in running mode.

Running colors in terminal using lolcat
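
For example, a sketch of the animated mode using the -a and -d options described above:

# date | lolcat -a -d 30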

You can even combine it with text banners like figlet or toilet and have fun!

How to find the process using high memory in Linux

Learn how to find the process using high memory on the Linux server. This helps in tracking down issues and troubleshooting utilization problems.

Find process using high memory in Linux

Many times you come to know that system memory is highly utilized through a utility like sar, and you want to find the processes hogging memory. To find them, we will be using the sort function of the process status (ps) command in this article. We will be sorting ps output by RSS values. RSS is the Resident Set Size. This value shows how much physical RAM is allocated to a particular process; it does not include swapped-out memory. Since we are troubleshooting processes using high physical memory, RSS fits our criteria.

Let's see the below example :

# ps aux --sort -rss |head -10
USER           PID %CPU %MEM    VSZ   RSS     TTY STAT START   TIME COMMAND
oracle_admin  14400  0.0 11.8 36937384 31420276 ?   Ss    2016  86:41 ora_mman_DB1
oracle_admin  14405  0.2 11.3 36993676 30023868 ?   Ss    2016 1676:11 ora_DB3
oracle_admin  14416  0.2 11.3 36993676 30023656 ?   Ss    2016 1722:47 ora_DB3
oracle_admin  14410  0.2 11.3 36993676 30020400 ?   Ss    2016 1702:09 ora_DB3
oracle_admin  14421  0.2 11.3 36993676 30018272 ?   Ss    2016 1754:25 ora_DB3
oracle_admin  14440  0.0 10.5 36946868 27887152 ?   Ss    2016 130:30 ora_mon_DB3
oracle_admin 15855  0.0  6.9 19232424 18298484 ?   Ss    2016  41:01 ora_mman_DB4
oracle_admin 15857  0.1  6.7 19288720 17966276 ?   Ss    2016 161:45 ora_DB4
oracle_admin 15864  0.1  6.7 19288720 17964584 ?   Ss    2016 173:36 ora_DB4

In the above output, we sorted processes by RSS and showed only the top 10. The RSS value in the output is in KB. Let's verify this output for the topmost process, PID 14400.

# free
             total       used       free     shared    buffers     cached
Mem:     264611456   96146728  168464728          0    1042972   75377436
-/+ buffers/cache:   19726320  244885136
Swap:     67108860     539600   66569260

On our system, we have 264611456 KB of physical RAM (the 'total' value in the above output). Of this, 11.8% is used by process 14400 (from the ps output above), which comes to roughly 31224151 KB. This is close to the RSS value of 31420276 KB in the ps output; the small gap is due to %MEM being shown to only one decimal place.

So the above method works well when you try to find processes using the highest physical memory on the system!
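
If you prefer a trimmed output, a sketch using GNU ps output formatting to select just the relevant columns:

# ps -eo pid,user,%mem,rss,cmd --sort=-rss | head -5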

You can also use other tools, like top, htop, etc., to find high-memory processes, but this article aimed at using ps.

Watch command to execute script/shell command repeatedly

Learn the watch command to execute scripts or shell commands repeatedly every n seconds. Very useful in automation or monitoring.

watch command and its examples

The watch command is a small utility with which you can execute a shell command or script repetitively, every n seconds. It's helpful in automation or monitoring: one can design automation by monitoring some code/command output with watch to trigger the next course of action, e.g. a notification.

The watch command is part of the procps package. It's bundled with the OS, but you can still verify that the package is installed on the system. The utility can be used directly by issuing the watch command followed by the command/script name to execute.

Watch command in action

For example, I created a small script which continuously writes junk data into a file placed under /. This changes the utilization numbers in df -k output. In the above GIF, you can see the 'Used' and 'Available' columns of df -k output changing when monitored with the watch command.

In the output, you can see –

  1. The default time interval is 2 seconds, as shown on the first line
  2. The time interval is followed by the command being executed by watch
  3. The current date and time of the server on the right-hand side
  4. The output of the command being executed

Go through the below watch command examples to understand how flexible watch is.

Different options of watch

Now, to change the default time interval, use the -n option followed by a time interval of your choice. To execute the command every 20 seconds you can use :

# watch -n 20 df -k
Every 20.0s: df -k                      Mon Mar 20 15:00:47 2017

Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/xvda1       6061632 2194812   3552248  39% /
tmpfs             509252       0    509252   0% /dev/shm

See the above output; the interval is changed to 20 seconds (first line).

If you want to hide the header in the output, i.e. the time interval, the command being executed, and the current server date and time, use the -t option. It will strip off the first line of output.

# watch -t df -k
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/xvda1       6061632 2194812   3552248  39% /
tmpfs             509252       0    509252   0% /dev/shm

Highlighting the difference between the present and previous output is made easy with the -d option. To understand this, watch the below GIF –

watch command with -d option

In the above output, I used the same data-writing script to fill /. You can observe that the only portion which differs from the previous output is highlighted by watch in the white box!
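
You can also combine options; for example, a sketch that refreshes every 5 seconds and highlights changes:

# watch -d -n 5 df -k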

AutoFS configuration in Linux

On-demand NFS mounting utility: autofs. Learn what autofs is, why and when to use it, and autofs configuration steps on a Linux server.

Autofs configuration

The first place to manage mount points on any Linux system is the /etc/fstab file. Mount points listed there are mounted at system startup and made available to users. Although I mainly explain how autofs benefits us with NFS mount points, it works well with native mount points too.

NFS mount points are also part of it. The issue is that even if users don't access NFS mount points, they are still mounted via /etc/fstab and continuously leech some system resources in the background. For example, NFS services need to continuously check connectivity, permissions, etc. for these mount points. If these NFS mounts are considerably high in number, then managing them through /etc/fstab is a major drawback, since you are allotting a major chunk of system resources to a portion of the system that users don't use frequently.

Why use AutoFS?

In such a scenario, AutoFS comes into the picture. AutoFS is an on-demand NFS mounting facility. In short, it mounts an NFS mount point when a user tries to access it. Then, once the timeout value is reached (counted from the last activity on that NFS mount), it automatically unmounts the NFS mount, saving the system resources that would otherwise serve an idle mount point.

It also reduces your system boot time since the mounting task is done after system boot and when the user demands it.

When to use AutoFS?

  • If your system has a large number of mount points
  • Many of them are not used frequently
  • The system is tight on resources and every bit of system resource counts

AutoFS configuration steps

First, you need to install the autofs package using yum or apt. The main configuration file for autofs is /etc/auto.master, which is also called the master map file. This file holds details of autofs-controlled mount points. The master map file follows the below format :

mount_point map_file options

where –

  • mount_point is the directory under which mounts will be mounted
  • map_file (the automounter map file) is a file containing a list of mount points and the file systems they should be mounted from
  • options are extra options to be applied to mount_point

A sample master map file looks like the one below :

/my_auto_mount  /etc/auto.misc --timeout=60

In the above sample, mount points defined in the /etc/auto.misc file will be mounted under the /my_auto_mount directory with a timeout value of 60 seconds.

The map_file parameter (automounter map file) referenced in the master map file is itself a configuration file, which has the below format :

mount_point options source_location

where –

  • mount_point is the directory on which the mount will be mounted
  • options are mounting options
  • source_location is the filesystem or NFS path from which the mount will be mounted

A sample automounter map file looks like the one below :

linux          -ro,soft,intr           ftp.example.org:/pub/linux
data1         -fstype=ext3            :/dev/fd0

Users should be aware of the share paths; in our case, /my_auto_mount and the keys linux and data1 should be known to users in order to access them.

Together, both these configuration files tell autofs:

Whenever a user tries to access mount point linux or data1 –

  1. autofs checks the data1 source (/dev/fd0) with option (-fstype=ext3)
  2. mounts data1 on /my_auto_mount/data1
  3. unmounts /my_auto_mount/data1 when there is no activity on the mount for 60 secs

Once you are done configuring your required mounts, you can start the autofs service and reload its configuration :

# /etc/init.d/autofs reload
Reloading maps
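
If the autofs service is not already running, start it first; a SysV-style sketch:

# /etc/init.d/autofs start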

That’s it! Configuration is done!

Testing AutoFS configuration

Once you reload the configuration, check and you will notice that autofs-defined mount points are not yet mounted on the system (check df -h output).

Now cd into /my_auto_mount/data1 and you will be presented with a listing of the content of data1 from /dev/fd0!

Another way is to use the watch utility in another session and keep a watch on the mount command. As you access the path, you will see the mount point get mounted on the system, and after the timeout value it is unmounted!

AWS cloud terminology

Understand AWS cloud terminology of 71 services! Get acquainted with terms used in the AWS world to start with your AWS cloud career!

AWS Cloud terminology

AWS, i.e. Amazon Web Services, is a cloud platform providing a list of web services on a pay-per-use basis. It's one of the most famous cloud platforms to date. Due to flexibility, availability, elasticity, scalability, and no maintenance, many corporations are moving to the cloud. Since many companies use these services, it becomes necessary that sysadmins and DevOps engineers are aware of AWS.

This article aims at listing services provided by AWS and explaining the terminology used in the AWS world.

As of today, AWS offers a total of 71 services, grouped into 17 groups as below :

Compute

It’s a cloud computing means virtual server provisioning. This group provides the below services.

  1. EC2: EC2 stands for Elastic Compute Cloud. This service provides you scalable virtual machines per your requirement.
  2. EC2 Container Service: A high-performance, highly scalable container management service that allows running applications on a clustered EC2 environment.
  3. Lightsail: This service enables the user to launch and manage virtual servers (EC2) very easily.
  4. Elastic Beanstalk: This service manages capacity provisioning, load balancing, scaling, health monitoring of your application automatically thus reducing your management load.
  5. Lambda: It allows you to run your code only when needed without managing servers for it.
  6. Batch: It enables users to run computing workloads (batches) in a customized managed way.

Storage

It’s cloud storage i.e. cloud storage facility provided by Amazon. This group includes :

  1. S3: S3 stands for Simple Storage Service (3 times S). This provides you online storage to store/retrieve any data at any time, from anywhere.
  2. EFS: EFS stands for Elastic File System. It’s online storage that can be used with EC2 servers.
  3. Glacier: It's a low-cost (and slower) data storage solution, mainly aimed at archives and long-term backups.
  4. Storage Gateway: It's an interface that connects your on-premises applications (hosted outside AWS) with AWS storage.

Database

AWS also offers to host databases on its infra so that clients can benefit from Amazon's cutting-edge technology for faster, efficient, and secure data processing. This group includes :

  1. RDS: RDS stands for Relational Database Service. Helps to set up, operate, manage a relational database on cloud.
  2. DynamoDB: It's a NoSQL database providing fast processing and high scalability.
  3. ElastiCache: It’s a way to manage in-memory cache for your web application to run them faster!
  4. Redshift: It’s a huge (petabyte-size) fully scalable, data warehouse service in the cloud.

Networking & Content Delivery

As AWS provides cloud EC2 servers, it's a corollary that networking will be in the picture too. Content delivery is used to serve files to users from their geographically nearest location. This is quite popular for speeding up websites nowadays.

  1. VPC: VPC stands for Virtual Private Cloud. It’s your very own virtual network dedicated to your AWS account.
  2. CloudFront: It's the content delivery network (CDN) by AWS.
  3. Direct Connect: It's a dedicated network connection between your datacenter/premises and AWS, used to increase throughput, reduce network cost, and avoid connectivity issues that may arise with internet-based connectivity.
  4. Route 53: It's a cloud domain name system (DNS) web service.

Migration

It's a set of services to help you migrate from on-premises services to AWS. It includes :

  1. Application Discovery Service: A service dedicated to analyzing your servers, network, and applications to help/speed up the migration.
  2. DMS: DMS stands for Database Migration Service. It is used to migrate your data from an on-premises DB to RDS or a DB hosted on EC2.
  3. Server Migration: Also called SMS (Server Migration Service), an agentless service that moves your workloads from on-premises to AWS.
  4. Snowball: Intended for use when you want to transfer huge amounts of data in/out of AWS using physical storage appliances (rather than internet/network-based transfers).

Developer Tools

As the name suggests, it's a group of services helping developers code in an easier/better way on the cloud.

  1. CodeCommit: It's a secure, scalable, managed source control service to host code repositories.
  2. CodeBuild: A code builder on the cloud. Compiles code, runs tests, and builds software packages for deployment.
  3. CodeDeploy: A deployment service to automate application deployments on AWS servers or on-premises.
  4. CodePipeline: This continuous delivery service enables coders to visualize and automate the stages of their application release.
  5. X-Ray: Analyze applications by tracing their event calls.

Management Tools

Group of services which helps you manage your web services in AWS cloud.

  1. CloudWatch: Monitoring service to monitor your AWS resources or applications.
  2. CloudFormation: Infrastructure as a code! It’s a way of managing AWS relative infra in a collective and orderly manner.
  3. CloudTrail: Audit & compliance tool for AWS account.
  4. Config: AWS resource inventory, configuration history, and configuration change notifications to enable security and governance.
  5. OpsWorks: Automation to configure and deploy EC2 or on-premises compute
  6. Service Catalog: Create and manage IT service catalogs that are approved for use in your company account
  7. Trusted Advisor: An automated advisor that inspects your AWS infra and recommends improvements for a better, money-saving setup.
  8. Managed Service: Provides ongoing infra management

Security, Identity & compliance

Important group of AWS services helping you secure your AWS space.

  1. IAM: IAM stands for Identity and Access Management. Controls user access to your AWS resources and services.
  2. Inspector: An automated security assessment service that helps you check the security and compliance of your apps on AWS.
  3. Certificate Manager: Provision, manage, and deploy SSL/TLS certificates for AWS applications.
  4. Directory Service: It's Microsoft Active Directory for AWS.
  5. WAF & Shield: WAF stands for Web Application Firewall. Monitors and controls access to your content on CloudFront or Load balancer.
  6. Compliance Reports: Compliance reporting of your AWS infra space to make sure your apps and the infra are compliant with your policies.

Analytics

Data analytics of your AWS space to help you see, plan, and act on what's happening in your account.

  1. Athena: It's an SQL-based query service to analyze data stored in S3.
  2. EMR: EMR stands for Elastic MapReduce. A service for big data processing and analysis.
  3. CloudSearch: Managed search capability for use within your applications and services.
  4. Elasticsearch Service: Create a domain and deploy, operate, and scale Elasticsearch clusters in the AWS Cloud.
  5. Kinesis: Streams large amounts of data in real time.
  6. Data Pipeline: Helps to move data between different AWS services.
  7. QuickSight: Collect, analyze, and present insight into business data on AWS.

Artificial Intelligence

AI in AWS!

  1. Lex: Helps to build conversational interfaces in an application using voice and text.
  2. Polly: It's a text-to-speech service.
  3. Rekognition: Gives you the ability to add image analysis to applications
  4. Machine Learning: It has algorithms to learn patterns in your data.

Internet of Things

This group of services enables hardware devices to connect with AWS.

  1. AWS IoT: It lets connected hardware devices interact with AWS applications.

Game Development

As the name suggests, this group of services aims at game development.

  1. Amazon GameLift: This service aims at deploying and managing dedicated game servers for session-based multiplayer games.

Mobile Services

Group of services mainly aimed at handheld devices

  1. Mobile Hub: Helps you to create mobile app backend features and integrate them into mobile apps.
  2. Cognito: Controls mobile user’s authentication and access to AWS on internet-connected devices.
  3. Device Farm: A mobile app testing service that enables you to test apps across Android and iOS on real phones hosted by AWS.
  4. Mobile Analytics: Measure, track, and analyze mobile app data on AWS.
  5. Pinpoint: Targeted push notification and mobile engagements.

Application Services

It's a group of services which can be used with your applications in AWS.

  1. Step Functions: Coordinate the components of your applications using visual workflows
  2. SWF: SWF stands for Simple Workflow Service. It's a cloud workflow management service that helps developers coordinate and contribute at different stages of the application life cycle.
  3. API Gateway: Helps developers to create, manage, and host APIs
  4. Elastic Transcoder: Helps developers convert media files into formats playable on various devices.

Messaging

Notification and messaging services in AWS

  1. SQS: SQS stands for Simple Queue Service. Fully managed messaging queue service to communicate between services and apps in AWS.
  2. SNS: SNS stands for Simple Notification Service. Push notification service for AWS users to alert them about their services in AWS space.
  3. SES: SES stands for Simple Email Service. It's a cost-effective email service from AWS for its own customers.

Business Productivity

Group of services to help boost your business productivity.

  1. WorkDocs: Collaborative file sharing, storing, and editing service.
  2. WorkMail: Secure business email and calendar service
  3. Amazon Chime: Online business meetings!

Desktop & App Streaming

It's desktop and app streaming over the cloud.

  1. WorkSpaces: Fully managed, secure desktop computing service on the cloud
  2. AppStream 2.0: Stream desktop applications from the cloud.

How to resolve the fatal error: curses.h: No such file or directory

Learn how to get rid of the fatal error: curses.h: No such file or directory during utility or third-party package installations in Linux.

Solution for curses.h: No such file or directory

Many times during package/utility installations you must have come across an error like the one below :

fatal error: curses.h: No such file or directory

Recently I faced it while installing cmatrix from source code. I saw an error like the one below :

# make
gcc -DHAVE_CONFIG_H -I. -I. -I.     -g -O2 -Wall -Wno-comment -c cmatrix.c
cmatrix.c:37:20: fatal error: curses.h: No such file or directory
 #include <curses.h>
                    ^
compilation terminated.
make: *** [cmatrix.o] Error 1

After troubleshooting, I came up with a solution and was able to get past the make stage. I am sharing it here as it might be useful for you.

The curses.h header file belongs to the ncurses library! You need to install the packages ncurses-devel and ncurses (YUM) or libncurses5-dev (APT) and you will get past this error.

Use yum install ncurses-devel ncurses for YUM-based systems (like Red Hat, CentOS, etc.) or apt-get install libncurses5-dev for APT-based systems (like Debian, Ubuntu, etc.). Verify that the package is installed and then proceed with your next course of action.

Follow the category ‘Troubleshooting errors‘ for more such error-based solutions.

How to check if the package is installed on Linux

Learn to check whether a package is installed on the Linux server or not. Verify if the package is available on the server, along with its installation date.

Check if a package is installed on Linux

Package installation on Linux sometimes fails with the error 'package is already installed; nothing to do'. To avoid this, first check whether the package is installed on the system and then attempt its installation. In this article, we will look at different ways to check if a package is installed on the server, and also check its installation date.

Different ways to check if a package is installed or not :

On RPM-based systems

On RPM-based systems like Red Hat, CentOS, etc., we can use the rpm query command like below :

# rpm -qa |grep telnet
telnet-0.17-60.el7.x86_64
OR
# rpm -q telnet
telnet-0.17-60.el7.x86_64

We are using -qa, i.e. the 'query all' option, which lists all installed packages on the system, and grepping out our desired package name (telnet in this example). If the output is blank, the package is not installed. If it's installed, the respective name is shown (like above). To understand what the numbers in the package name mean, read about package naming conventions.

Directly querying the package name, as in the second example above, yields the same result.

If the system is configured with YUM then it can list all installed packages for you and you can grep out your desired package from it.

# yum list installed telnet
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
Installed Packages
telnet.x86_64                                                            1:0.17-60.el7                                                            @rhui-REGION-rhel-server-releases
OR
# yum list installed |grep telnet                                                                                                                        
telnet.x86_64                    1:0.17-60.el7              @rhui-REGION-rhel-server-releases

On APT-based systems

On APT-based systems like Debian, Ubuntu, etc., the dpkg command can be used to verify whether the package is installed –

# dpkg -l |grep telnet
ii  telnet                           0.17-40                            amd64        basic telnet client

The columns in the output are Name, Version, Architecture, and Description.

If you have an apt repository configured, you can try a simulated install (-s) of the desired package. If it's installed, a respective message is shown in the output (see the 'already the newest version' line below). If it's not installed, the output just simulates the installation process and exits without actually installing –

# apt-get install -s telnet
Reading package lists... Done
Building dependency tree
Reading state information... Done
telnet is already the newest version (0.17-40).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

Package installation date

A common Linux interview question is: how do you find the package installation date, or how do you check when a package was installed in Linux? The answer is –

On YUM-based systems

The rpm command has a direct option, --last, to sort packages by their install date. Grep for your desired package and you will get its installation date.

# rpm -qa --last |grep telnet
telnet-0.17-60.el7.x86_64                     Fri 10 Mar 2017 01:58:17 PM EST
On APT-based systems

Here there is no direct command that shows the installation date. You have to grep for the word 'install' in the installer log file /var/log/dpkg.log to get the installation date. If logrotate is configured on the system, then use the wildcard * to search through all rotated and current log files.

If this file does not exist on your server, then no install operation was performed on that system after its setup. On the very first install operation (using apt-get or dpkg), this file gets created and starts logging installation details.

# grep install /var/log/dpkg.log* |grep telnet
2017-03-10 19:26:30 status installed telnet:amd64 0.17-40
2017-03-10 19:26:30 status half-installed telnet:amd64 0.17-40
2017-03-10 19:26:40 install telnet:amd64 0.17-40 0.17-40
2017-03-10 19:26:40 status half-installed telnet:amd64 0.17-40
2017-03-10 19:26:40 status installed telnet:amd64 0.17-40


How to restart service in Linux

An article explaining service management in Linux. Learn how to restart a service in Linux distros like Red Hat, Debian, Ubuntu, CentOS, etc.

Service management in Linux

Managing services in Linux is one of the frequent tasks sysadmins need to take care of. In this post, we will be discussing several operations like –

  • How to stop service in Linux
  • How to start service in Linux
  • How to restart service in Linux
  • How to check the status of service in Linux

Different distributions have different ways of service management. Even within the same distro, different versions may have different service management tooling; for example, RHEL 6 and RHEL 7 have different commands to manage services.

Let’s see service related tasks in various flavors of Linux –

How to stop service in Linux

A service can be stopped with the below commands (respective distro specified)

# service <name> stop (RHEL6 & lower, Ubuntu, CentOS, Debian, Fedora)

# systemctl stop <name>.service  (RHEL7)

# stop <name> (Ubuntu with upstart)

Here <name> is the service name, like telnet, ntp, nfs, etc. Note that upstart comes pre-installed with Ubuntu 6.10 and later; if not, you can install it using APT.

Newer versions implement systemctl in place of the service command. Even if you use the service command on RHEL 7, it calls systemctl in turn.

# service sshd-keygen status
Redirecting to /bin/systemctl status  sshd-keygen.service
● sshd-keygen.service - OpenSSH Server Key Generation
   Loaded: loaded (/usr/lib/systemd/system/sshd-keygen.service; static; vendor preset: disabled)
   Active: inactive (dead)
-----output clipped-----

In the above output, you can see it shows which systemctl command it executes in place of the service command. Also note that it appends .service to the service name supplied to the service command.

The old service command (RHEL6 & lower) prints the status of the operation as OK (success) or FAILED (failure) for start, stop, and restart operations. The systemctl command doesn't print any output on the console.

How to start service in Linux

Starting a service follows the same syntax as above.

# service <name> start (RHEL6 & lower, Ubuntu, CentOS, Debian, Fedora)

# systemctl start <name>.service  (RHEL7)

# start <name> (Ubuntu with upstart)

How to restart service in Linux

# service <name> restart (RHEL6 & lower, Ubuntu, CentOS, Debian, Fedora)

# systemctl restart <name>.service  (RHEL7)

# restart <name> (Ubuntu with upstart)

It stops the service and then immediately starts it. So basically it's a combination of the above two commands.

Mostly we restart a service to load an edited configuration. But this can be done without restarting, provided the service supports reloading its config: use the reload option instead of restart, as in the sketch below.
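
For example, a reload sketch (assuming a service such as sshd that supports config reload):

# service sshd reload (RHEL6 & lower)

# systemctl reload sshd.service (RHEL7)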

How to check the status of service in Linux

Checking the status of a service tells you whether it is currently running or not. Different distros give different details about the service in the status output. Below are a few examples for your reference.

Service status information in Ubuntu :

# service cron status                                                                                                                                         
● cron.service - Regular background program processing daemon
   Loaded: loaded (/lib/systemd/system/cron.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2017-03-10 17:53:23 UTC; 2s ago
     Docs: man:cron(8)
 Main PID: 3506 (cron)
    Tasks: 1
   Memory: 280.0K
      CPU: 1ms
   CGroup: /system.slice/cron.service
           └─3506 /usr/sbin/cron -f

Mar 10 17:53:23 ip-172-31-19-90 systemd[1]: Started Regular background program processing daemon.
Mar 10 17:53:23 ip-172-31-19-90 cron[3506]: (CRON) INFO (pidfile fd = 3)
Mar 10 17:53:23 ip-172-31-19-90 cron[3506]: (CRON) INFO (Skipping @reboot jobs -- not system startup)

It has details about the service state, its man page, PID, CPU & MEM utilization, and recent happenings from the log.

Service status information in RHEL6:

# service crond status
crond (pid  1474) is running...

It only shows you the PID and state of the service.

Service status information in RHEL7:

# systemctl status crond.service
● crond.service - Command Scheduler
   Loaded: loaded (/usr/lib/systemd/system/crond.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2017-03-10 13:04:58 EST; 1min 2s ago
 Main PID: 499 (crond)
   CGroup: /system.slice/crond.service
           └─499 /usr/sbin/crond -n

Mar 10 13:04:58 ip-172-31-24-59.ap-south-1.compute.internal systemd[1]: Started Command Scheduler.
Mar 10 13:04:58 ip-172-31-24-59.ap-south-1.compute.internal systemd[1]: Starting Command Scheduler...
Mar 10 13:04:58 ip-172-31-24-59.ap-south-1.compute.internal crond[499]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 85% if used.)
Mar 10 13:04:59 ip-172-31-24-59.ap-south-1.compute.internal crond[499]: (CRON) INFO (running with inotify support)

It prints similar details to Ubuntu but doesn't show CPU and memory utilization or the man page.

List all services on the system

If you want to see all services running on the system and their statuses, then you can use the below commands :

# service --status-all (RHEL6 & lower, Ubuntu, CentOS, Debian, Fedora)

# systemctl list-units --type service --all (RHEL7)

It will present you list of all services and their status with few other details.