Solution for pvcreate error: Device /dev/xyz not found (or ignored by filtering). Troubleshooting steps and resolution for this error.
Sometimes, when adding a new disk/LUN to a Linux machine with pvcreate, you may come across the below error:
Device /dev/xyz not found (or ignored by filtering).
# pvcreate /dev/sdb
Device /dev/sdb not found (or ignored by filtering).
This happens when the disk was previously used by a different volume manager or partitioning tool (for example, Linux's own fdisk) and you are now trying to use it in LVM. To resolve this error, first check whether it has existing partitions using the fdisk command:
# fdisk /dev/sdb
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): p
Disk /dev/sdb: 859.0 GB, 858993459200 bytes
255 heads, 63 sectors/track, 104433 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x62346fee6
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      104433   838858041   83  Linux
In the above example, the p option in the fdisk menu prints the current partition table of the disk.
You can see that fdisk detects one primary partition. Because of this, the LVM command to initialize the disk (pvcreate) fails.
To resolve this, you need to delete the partition and then re-initialize the disk in LVM. To delete the partition, use the d option in the fdisk menu.
# fdisk /dev/sdb
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): d
Selected partition 1
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
After issuing the delete (d) command in the fdisk menu, you need to write (w) the changes to disk. This removes the existing partition from the disk. You can use the print (p) option again to make sure that there are no partitions left on the disk.
You can now use the disk in LVM without any issue.
# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created
If this solution doesn't work for you, or if there were no partitions on the disk previously and you still get this error, then you may want to look at your multipath configuration. The hint is to check the verbose pvcreate output to see where it fails: use the pvcreate -vvv /dev/<name> command.
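For example, a quick check for filter rules rejecting the device might look like the below (the /dev/sdb device and the default /etc/lvm/lvm.conf path are just the values used in this article; adjust them for your system):
# pvcreate -vvv /dev/sdb 2>&1 | grep -i filter
# grep -E "^[[:space:]]*(filter|global_filter)" /etc/lvm/lvm.conf
The first command shows whether LVM's filtering logic is rejecting the device; the second shows any filter lines configured in lvm.conf.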
All YUM-related articles in one place! A helpful YUM cheat sheet to learn, understand, and revise YUM-related sysadmin tasks on a single page.
YUM stands for Yellowdog Updater, Modified. It's a package management tool for RPM-based systems. It has the below list of features that make it a must-use for every sysadmin.
Simple install, uninstall, and upgrade operations for packages
Automatically resolves software dependencies while installing or upgrading
Looks for more than one source for software (supports multiple repositories)
Supports CLI and GUI
Automatically detects the architecture of the system and searches for the best-fit software version
Works well with remote (network connectivity) and local (without network connectivity) repositories.
In this article, I am gathering all YUM related posts in one place so that you don’t have to search them through our site!
Learn to configure the yum server in RPM-based Linux systems. The article explains yum server configs over HTTP and FTP protocol.
In our last article, we saw yum configuration. We learned what yum is, why to use it, what a repository is, yum config file locations, the config file format, and how to configure DVD and HTTP locations as repositories. In this article, we will walk through YUM server configuration, i.e. configuring serverA as a YUM server so that other clients can use serverA as a repo location.
In this article, we will see how to set up a yum server over the FTP and HTTP protocols. Before proceeding with the configuration, make sure you have the three packages deltarpm, python-deltarpm, and createrepo installed on your yum server.
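A minimal install command for these prerequisites (package names as listed above) would be:
# yum install deltarpm python-deltarpm createrepo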
YUM server HTTP configuration
First of all, we need to install a web server on the system so that HTTP pages can be served. Install the httpd package using yum. Post-installation, you will have the /var/www/html directory, which is the home of your web server. Create a packages directory within it to hold all packages. Now we have the /var/www/html/packages directory to hold packages for our YUM server.
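The commands for this step would look roughly like the below (assuming the default httpd document root):
# yum install httpd
# mkdir /var/www/html/packages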
Start the httpd service and verify that you are able to access http://ip-address/packages in a browser; you should see a plain directory listing of the packages directory.
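For example, on a RHEL 6 style init system the service can be started and enabled at boot like this:
# service httpd start
# chkconfig httpd on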
Now, we need to copy package files (.rpm) into this directory. You can copy them manually from your OS DVD or download them using wget from official online package mirrors. Once you populate the /var/www/html/packages directory with .rpm files, they are available to download from the browser, but YUM won't be able to recognize them yet.
For YUM (on the client side) to fetch packages from the above directory, you need to create an index of these files (.xml). You can create it using the below command –
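A minimal createrepo invocation, pointed at the packages directory created above, looks like this:
# createrepo /var/www/html/packages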
Here I kept only 3 RPMs in the directory, so you can see it started with 0 of 3 pkgs! After the above command completes, you can observe that a repodata directory has been created inside the packages directory. It contains the repo detail files along with the XML file.
# ll /var/www/html/packages/repodata/
total 40
-rw-r--r--. 1 root root 10121 Mar 23 15:38 196f88dd1e6b0b74bbd8b3a689e77a8f632650da7fa77db06f212536a2e75096-primary.sqlite.bz2
-rw-r--r--. 1 root root 4275 Mar 23 15:38 1fc168d13253247ba15d45806c8f33bfced19bb1bf5eca54fb1d6758c831085f-filelists.sqlite.bz2
-rw-r--r--. 1 root root 2733 Mar 23 15:38 59d6b723590f73c4a65162c2f6f378bae422c72756f3dec60b1c4ef87f954f4c-filelists.xml.gz
-rw-r--r--. 1 root root 3874 Mar 23 15:38 656867c9894e31f39a1ecd3e14da8d1fbd68bbdf099e5a5f3ecbb581cf9129e5-other.sqlite.bz2
-rw-r--r--. 1 root root 2968 Mar 23 15:38 8d9cb58a2cf732deb12ce3796a5bc71b04e5c5c93247f4e2ab76bff843e7a747-primary.xml.gz
-rw-r--r--. 1 root root 2449 Mar 23 15:38 b30ec7d46fafe3d5e0b375f9c8bc0df7e9e4f69dc404fdec93777ddf9b145ef3-other.xml.gz
-rw-r--r--. 1 root root 2985 Mar 23 15:38 repomd.xml
In the FTP scenario, we make packages accessible to other machines over the FTP protocol rather than HTTP. You need to configure FTP and keep the packages directory in the FTP share.
Go through the createrepo step explained above for the FTP share directory. Once done, you can configure the client with the FTP address to fetch packages from the yum server. The repo location entry in the client repo configuration file will be –
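Something along these lines (the server IP and FTP share path here are placeholders; adjust them to your FTP layout):
baseurl=ftp://<server-ip>/pub/packages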
Learn YUM configuration in Linux. Understand what yum is, its features, what a repository is, and how to configure it.
YUM stands for Yellowdog Updater, Modified. It was developed to maintain RPM-based systems (RPM is the Red Hat Package Manager). YUM is a package manager with the below features –
Automatically detects the architecture of the system and searches for the best-fit software version
Works well with remote (network connectivity) and local (without network connectivity) repositories.
All these features make it a very capable package manager. In this article, we will walk through yum configuration steps. You can also browse through other yum-related posts on this site.
A yum configuration has repositories defined. Repositories are the places where package files (.rpm) are located; yum searches and downloads files from repositories for installation. A repository can be a local mount point (file://path), a remote FTP location (ftp://link), an HTTP location (http://link or http://login:password@link), an HTTPS link, or a remote NFS mount point.
The yum configuration file is /etc/yum.conf, and repository configuration files are located under the /etc/yum.repos.d/ directory. All repository configuration files must have the .repo extension so that yum can identify them and read their configuration.
A typical repo configuration file entry looks like below:
[rhel-source-beta]
name=Red Hat Enterprise Linux $releasever Beta - $basearch - Source
baseurl=ftp://ftp.redhat.com/pub/redhat/linux/beta/$releasever/en/os/SRPMS/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta,file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
here –
[rhel-source-beta] is a unique repository id.
name is a human-readable repository name
baseurl is the location from where packages should be scanned and fetched
enabled denotes whether this repo is enabled, i.e. whether yum should use it or not
gpgcheck enables/disables the GPG signature check
gpgkey is the location of the GPG key
Of these, the first four entries are mandatory for every repo location. Let's see how to create a repo from a DVD ISO file.
Remember, one repo configuration file can have more than one repository location listed.
An RPM-based Linux installation DVD has RPM files on it which are used to install packages at the time of OS installation. We can use these packages to build our own repo so that yum can use them!
First, you have to mount the ISO file on the system. Let's assume we have mounted it on /mnt/dvd. Now we have to create a yum repo file for it. Let's create the file dvdiso.repo under the /etc/yum.repos.d/ directory. It should look like:
[dvdiso]
name=RedHat DVD ISO
baseurl=file:///mnt/dvd
enabled=1
gpgcheck=1
gpgkey=file:///mnt/dvd/RPM-GPG-KEY-redhat-6
Make sure you check the path of the GPG key on your ISO and edit it accordingly. The baseurl path should be the directory where the repodata directory and GPG key file live.
That's it! Your repo is ready. You can check it using the yum repolist command.
# yum repolist
Loaded plugins: refresh-packagekit, security
...
repo id repo name status
dvdiso RedHat DVD ISO 25,459
In the above output, you can see the repo is identified by yum. Now you can try installing any software from it with the yum install command.
Make sure your ISO stays mounted on the system even after a reboot (add an entry in /etc/fstab) for this repo to keep working.
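A sample /etc/fstab entry for a loop-mounted ISO would look like the below (the ISO path is only an example):
/isos/rhel-server-6.iso  /mnt/dvd  iso9660  loop,ro  0  0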
YUM repo configuration for HTTP repos
There are many official and unofficial repositories hosted on the internet that can be accessed over the HTTP protocol. These repositories are large and may contain more packages than your DVD has. To use them in yum, your server should have an active internet connection and should be able to reach the HTTP locations you are trying to configure.
Once connectivity is confirmed, create a new repo file for them, e.g. named weblocations.repo, under the /etc/yum.repos.d/ directory, with content as below (for example):
[centos]
name=CentOS Repository
baseurl=http://mirror.cisp.com/CentOS/6/os/i386/
enabled=1
gpgcheck=1
gpgkey=http://mirror.cisp.com/CentOS/6/os/i386/RPM-GPG-KEY-CentOS-6
[rhel-server-releases-optional]
name=Red Hat Enterprise Linux Server 6 Optional (RPMs)
mirrorlist=https://redhat.com/pulp/mirror/content/dist/rhel/rhui/server/6/$releasever/$basearch/optional/os
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
sslverify=1
sslclientkey=/etc/pki/rhui/content-rhel6.key
sslclientcert=/etc/pki/rhui/product/content-rhel6.crt
sslcacert=/etc/pki/rhui/cdn.redhat.com-chain.crt
In the above example, you can see two web locations are configured in the repo. The first is an HTTP location for CentOS, whereas the second one is a Red Hat-supplied HTTPS mirror list. Since the HTTPS protocol is used, the related SSL configuration follows it.
Time to check repo –
# yum repolist
Loaded plugins: rhui-lb, security
repo id repo name status
centos CentOS Repository 5,062
rhui-REGION-rhel-server-releases-optional Red Hat Enterprise Linux Server 6 Optional (RPMs) 11,057
Both repos are identified by yum. The configuration is successful.
If you have an issue with your Red Hat Network certificate, you will see the below error while executing yum commands.
The certificate /usr/share/rhn/ULN-CA-CERT is expired. Please ensure you have the correct certificate and your system time is correct.
You need to update the rhn-client-tools package, which will refresh the certificate details.
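For example (assuming the system can reach its update source):
# yum update rhn-client-tools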
If the rhn-client-tools package is not installed properly, you may see the below error while executing yum commands –
rhn-plugin: ERROR: can not find RHNS CA file: /usr/share/rhn/ULN-CA-CERT
In this case, you need to reinstall or update the rhn-client-tools package. If you are not using RHN on your server, you can even safely remove this package from the system and get yum working again.
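Depending on which route you choose, the command would be one of the below (examples only):
# yum reinstall rhn-client-tools
# yum remove rhn-client-tools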
# cd lolcat-master/bin
# gem install lolcat
Successfully installed lolcat-42.24.0
Parsing documentation for lolcat-42.24.0
1 gem installed
This confirms your successful installation of lolcat!
lolcat command to rainbow color output!
It's time to see lolcat in action. You can pipe any output of your choice to it and it will color your command output in rainbow colors (a few examples below)!
# ps -ef |lolcat
# date | lolcat
Want some more fun?
lolcat comes with a few options which make it even more fun on the terminal. Run a command with the -a (animate) option, optionally adding -d to set the animation duration, and it will color your output in running mode.
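A sample animated run (the duration value of 2 is arbitrary):
# ps -ef | lolcat -a -d 2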
Learn how to find the process using high memory on the Linux server. This helps in tracking down issues and troubleshooting utilization problems.
Many times you come to know that system memory is highly utilized using a utility like sar, and you then want to find the processes hogging memory. To find them, we will use the sort function of the process status (ps) command in this article. We will sort the ps output by RSS values. RSS is the Resident Set Size: it shows how much physical RAM is allocated to a particular process and does not include swapped-out memory. Since we are troubleshooting processes using high physical memory, RSS fits our criteria.
Let's see the below example:
# ps aux --sort -rss |head -10
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
oracle_admin 14400 0.0 11.8 36937384 31420276 ? Ss 2016 86:41 ora_mman_DB1
oracle_admin 14405 0.2 11.3 36993676 30023868 ? Ss 2016 1676:11 ora_DB3
oracle_admin 14416 0.2 11.3 36993676 30023656 ? Ss 2016 1722:47 ora_DB3
oracle_admin 14410 0.2 11.3 36993676 30020400 ? Ss 2016 1702:09 ora_DB3
oracle_admin 14421 0.2 11.3 36993676 30018272 ? Ss 2016 1754:25 ora_DB3
oracle_admin 14440 0.0 10.5 36946868 27887152 ? Ss 2016 130:30 ora_mon_DB3
oracle_admin 15855 0.0 6.9 19232424 18298484 ? Ss 2016 41:01 ora_mman_DB4
oracle_admin 15857 0.1 6.7 19288720 17966276 ? Ss 2016 161:45 ora_DB4
oracle_admin 15864 0.1 6.7 19288720 17964584 ? Ss 2016 173:36 ora_DB4
In the above output, we sorted processes by RSS and showed only the top 10. The RSS value in the output is in KB. Let's verify this output for the topmost process with PID 14400.
Our system has 264611456 KB of physical RAM. Of this, 11.8% is used by process 14400 (from the ps output above), which comes to 31224151 KB. This roughly matches the RSS value of 31420276 KB in the ps output; the small difference is because %MEM is rounded to one decimal place.
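If you want to cross-check the total physical RAM figure yourself, /proc/meminfo is one place to look:
# grep MemTotal /proc/meminfo
MemTotal:       264611456 kB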
So the above method works well when you try to find processes using the highest physical memory on the system!
You can also use other tools like top, htop, etc. to find processes using high memory, but this article focuses on using ps.
Learn the watch command to execute a script or shell command repeatedly every n seconds. Very useful in automation and monitoring.
watch is a small utility with which you can execute a shell command or script repeatedly, every n seconds. It's helpful in automation and monitoring: one can design automation by monitoring some code/command output with watch to trigger the next course of action, e.g. a notification.
The watch command is part of the procps package. It's bundled with the OS, but you can still verify that the package is installed on the system. The utility is used by issuing the watch command followed by the command/script name to execute.
For example, I created a small script which continuously writes junk data into a file placed under /. This changes the utilization numbers in the df -k output; monitoring df -k with watch, you can see the "Used" and "Available" columns change.
In the output, you can see –
The default time interval is 2 seconds, as shown in the first line
The time interval followed by the command being executed by watch
The current date and time of the server on the right-hand side
The output of the command being executed
Go through below watch command examples to understand how flexible the watch is.
Different options of watch
Now, to change the default time interval, use the -n option followed by a time interval of your choice. To execute the command every 20 seconds, you can use:
# watch -n 20 df -k
Every 20.0s: df -k Mon Mar 20 15:00:47 2017
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvda1 6061632 2194812 3552248 39% /
tmpfs 509252 0 509252 0% /dev/shm
In the above output, you can see the interval has changed to 20 seconds (first line).
If you want to hide the header in the output, i.e. the time interval, the command being executed, and the current server date and time, use the -t option. It strips off the first line of the output.
# watch -t df -h
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvda1 6061632 2194812 3552248 39% /
tmpfs 509252 0 509252 0% /dev/shm
Highlighting the difference between the present and previous output is made easy with the -d option.
Using the same data-writing script to fill /, you can observe that the only portion which differs from the previous output is highlighted by watch in a white box.
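A sample invocation combining the interval and difference-highlighting options:
# watch -n 5 -d df -k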
On-demand NFS mounting utility: autofs. Learn what is autofs, why, and when to use autofs and autofs configuration steps in the Linux server.
The first place to manage mount points on any Linux system is the /etc/fstab file. It mounts all listed mount points at system startup and makes them available to users. Although this article mainly explains how autofs helps with NFS mount points, it also works well with native mount points.
NFS mount points are also part of it. Now, the issue is that even if users don't access these NFS mount points, they are still mounted via /etc/fstab and continuously leech system resources in the background; for example, NFS services keep checking connectivity, permissions, and other details of these mount points. If the NFS mounts are considerably high in number, managing them through /etc/fstab becomes a major drawback, since you are allotting a major chunk of system resources to a portion of the system that users don't use frequently.
Why use AutoFS?
In such a scenario, AutoFS comes into the picture. AutoFS is an on-demand NFS mounting facility. In short, it mounts an NFS mount point when a user tries to access it. Then, once the timeout value is reached (counted from the last activity on that NFS mount), it automatically un-mounts the NFS mount, saving the system resources that would otherwise serve an idle mount point.
It also reduces your system boot time, since mounting is done after boot and only when a user demands it.
When to use AutoFS?
If your system has a large number of mount points
Many of them are not being used frequently
The system is tight on resources and every single piece of system resource counts
AutoFS configuration steps
First, you need to install the autofs package using yum or apt. The main configuration file for autofs is /etc/auto.master, which is also called the master map file. This file holds details of the autofs-controlled mount points. The master map file follows the below format:
mount_point map_file options
where –
mount_point is a directory on which mounts should be mounted
map_file (automounter map file) is a file containing a list of mount points and their file systems from which they should be mounted
options are extra options to be applied on mount_point
Sample master map file looks like one below :
/my_auto_mount /etc/auto.misc --timeout=60
In the above sample, mount points defined in the /etc/auto.misc file can be mounted under the /my_auto_mount directory, with a timeout value of 60 seconds.
The map_file parameter (automounter map file) in the above master map file is itself a configuration file with the below format:
mount_point options source_location
where –
mount_point is a directory on which mounts should be mounted
options are mounting options
source_location is FS or NFS path from where the mount will be mounted
Sample automounter map file looks like one below :
linux -ro,soft,intr ftp.example.org:/pub/linux
data1 -fstype=ext3 :/dev/fd0
Users should be aware of the share paths; in our case, /my_auto_mount and the keys linux and data1 should be known to users in order to access them.
In all, both these configuration files collectively say:
Whenever a user tries to access the mount point linux or data1 –
autofs checks the data1 source (/dev/fd0) with the option (-fstype=ext3)
mounts data1 on /my_auto_mount/data1
un-mounts /my_auto_mount/data1 when there is no activity on the mount for 60 seconds
Once you are done configuring your required mounts, you can start the autofs service and reload its configuration:
# /etc/init.d/autofs reload
Reloading maps
That’s it! Configuration is done!
Testing AutoFS configuration
Once you reload the configuration, check the output of df -h and you will notice that autofs-defined mount points are not yet mounted on the system.
Now cd into /my_auto_mount/data1 and you will be presented with a listing of the content of data1 from /dev/fd0!
Another way is to use the watch utility in another session and keep a watch on the mount command. As you access the autofs paths, you will see the mount point get mounted on the system, and after the timeout value it is un-mounted!
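For example, in a second terminal (the grep pattern simply narrows the output to our autofs directory from the sample configuration):
# watch -n 1 'mount | grep my_auto_mount'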
Understand AWS cloud terminology of 71 services! Get acquainted with terms used in the AWS world to start with your AWS cloud career!
AWS, i.e. Amazon Web Services, is a cloud platform providing a list of web services on a pay-per-use basis. It's one of the most famous cloud platforms to date. Due to its flexibility, availability, elasticity, scalability, and no-maintenance model, many corporations are moving to the cloud. Since so many companies use these services, it becomes necessary for sysadmins and DevOps engineers to be aware of AWS.
This article aims at listing services provided by AWS and explaining the terminology used in the AWS world.
As of today, AWS offers a total of 71 services which are grouped together in 17 groups as below :
Compute
This is cloud computing, i.e. virtual server provisioning. This group provides the below services.
EC2 Container Service: A high-performance, highly scalable service which allows you to run services on a clustered EC2 environment.
Lightsail: This service enables the user to launch and manage virtual servers (EC2) very easily.
Elastic Beanstalk: This service manages capacity provisioning, load balancing, scaling, health monitoring of your application automatically thus reducing your management load.
Lambda: It allows you to run your code only when needed without managing servers for it.
Batch: It enables users to run computing workloads (batches) in a customized managed way.
Storage
This is the cloud storage facility provided by Amazon. This group includes:
S3: S3 stands for Simple Storage Service (3 times S). This provides you online storage to store/retrieve any data at any time, from anywhere.
EFS: EFS stands for Elastic File System. It’s online storage that can be used with EC2 servers.
Glacier: It's a low-cost, slower-performance data storage solution mainly aimed at archives and long-term backups.
Storage Gateway: It's an interface which connects your on-premises applications (hosted outside AWS) with AWS storage.
Database
AWS also offers to host databases on its infrastructure so that clients can benefit from the cutting-edge technology Amazon has for faster, more efficient, and more secure data processing. This group includes:
RDS: RDS stands for Relational Database Service. It helps to set up, operate, and manage a relational database in the cloud.
DynamoDB: It's a NoSQL database providing fast processing and high scalability.
ElastiCache: It’s a way to manage in-memory cache for your web application to run them faster!
Redshift: It’s a huge (petabyte-size) fully scalable, data warehouse service in the cloud.
Networking & Content Delivery
Since AWS provides cloud EC2 servers, it follows that networking is in the picture too. Content delivery is used to serve files to users from the location geographically nearest to them; it is a popular way of speeding up websites nowadays.
VPC: VPC stands for Virtual Private Cloud. It’s your very own virtual network dedicated to your AWS account.
CloudFront: It's the content delivery network from AWS.
Direct Connect: It's a way of connecting your datacenter/premises with AWS over a dedicated network link to increase throughput, reduce network cost, and avoid connectivity issues that may arise with internet-based connectivity.
Route 53: It's a cloud Domain Name System (DNS) web service.
Migration
It's a set of services to help you migrate from on-premises services to AWS. It includes:
Application Discovery Service: A service dedicated to analyzing your servers, network, application to help/speed up the migration.
DMS: DMS stands for Database Migration Service. It is used to migrate your data from on-premises DB to RDS or DB hosted on EC2.
Server Migration: Also called SMS (Server Migration Service) is an agentless service that moves your workloads from on-premises to AWS.
Snowball: Intended for use when you want to transfer huge amounts of data in/out of AWS using physical storage appliances (rather than internet/network-based transfers).
Developer Tools
As the name suggests, it's a group of services helping developers code in an easier and better way on the cloud.
CodeCommit: It's a secure, scalable, managed source control service to host code repositories.
CodeBuild: A code builder on the cloud. It executes test code and builds software packages for deployment.
CodeDeploy: Deployment service to automate application deployments on AWS servers or on-premises.
CodePipeline: This continuous delivery service enables developers to visualize and automate the steps required to release their application.
X-Ray: Helps analyze and debug applications by tracing requests and event calls through them.
Management Tools
Group of services which helps you manage your web services in AWS cloud.
CloudWatch: Monitoring service to monitor your AWS resources or applications.
CloudFormation: Infrastructure as code! It's a way of managing your AWS infrastructure in a collective and orderly manner.
CloudTrail: Audit & compliance tool for AWS account.
Config: AWS resource inventory, configuration history, and configuration change notifications to enable security and governance.
OpsWorks: Automation to configure and deploy EC2 or on-premises compute.
Service Catalog: Create and manage catalogs of IT services which are approved for use in your/your company's account.
Trusted Advisor: An automated advisor that helps you build better, money-saving AWS infra by inspecting your AWS infrastructure.
Managed Service: Provides ongoing infra management
Security, Identity & compliance
Important group of AWS services helping you secure your AWS space.
IAM: IAM stands for Identity and Access Management. Controls user access to your AWS resources and services.
Inspector: Automated security assessment that helps keep your apps on AWS secure and compliant.
Certificate Manager: Provision, manage, and deploy SSL/TLS certificates for AWS applications.
Directory Service: It's managed Microsoft Active Directory for AWS.
WAF & Shield: WAF stands for Web Application Firewall. Monitors and controls access to your content on CloudFront or Load balancer.
Compliance Reports: Compliance reporting of your AWS infra space to make sure your apps and the infra are compliant with your policies.
Analytics
Data analytics of your AWS space to help you see, plan, and act on what's happening in your account.
Athena: It's a SQL-based query service to analyze data stored in S3.
EMR: EMR stands for Elastic Map Reduce. Service for big data processing and analysis.
CloudSearch: Search capability of AWS within application and services.
Elasticsearch Service: To create a domain and deploy, operate, and scale Elasticsearch clusters in the AWS Cloud
Kinesis: Streams large amounts of data in real time.
Data Pipeline: Helps to move data between different AWS services.
QuickSight: Collect, analyze, and present insight into business data on AWS.
Artificial Intelligence
AI in AWS!
Lex: Helps to build conversational interfaces in an application using voice and text.
Polly: It's a text-to-speech service.
Rekognition: Gives you the ability to add image analysis to applications
Machine Learning: It has algorithms to learn patterns in your data.
Internet of Things
This group of services makes AWS available to different connected devices.
AWS IoT: It lets connected hardware devices interact with AWS applications.
Game Development
As the name suggests, this group of services aims at game development.
Amazon GameLift: This service is for deploying and managing dedicated game servers for session-based multiplayer games.
Mobile Services
Group of services mainly aimed at handheld devices
Mobile Hub: Helps you to create mobile app backend features and integrate them into mobile apps.
Cognito: Controls mobile user’s authentication and access to AWS on internet-connected devices.
Device Farm: Mobile app testing service enables you to test apps across android, iOS on real phones hosted by AWS.
Mobile Analytics: Measure, track, and analyze mobile app data on AWS.
Pinpoint: Targeted push notification and mobile engagements.
Application Services
It's a group of services which can be used with your applications in AWS.
Step Functions: Lets you define workflows and coordinate the various functions/components used in your applications.
SWF: SWF stands for Simple Workflow Service. It's a cloud workflow management service that helps developers coordinate and contribute work at different stages of the application life cycle.
API Gateway: Helps developers create, manage, and host APIs.
Elastic Transcoder: Helps developers convert media files into formats that play on various devices.
Messaging
Notification and messaging services in AWS
SQS: SQS stands for Simple Queue Service. Fully managed messaging queue service to communicate between services and apps in AWS.
SNS: SNS stands for Simple Notification Service. Push notification service for AWS users to alert them about their services in AWS space.
SES: SES stands for Simple Email Service. It's a cost-effective email service from AWS for its customers.
Business Productivity
Group of services to help boost your business productivity.
WorkDocs: Collaborative file sharing, storing, and editing service.
WorkMail: Secured business mail, calendar service
Amazon Chime: Online business meetings!
Desktop & App Streaming
It's desktop and app streaming over the cloud.
WorkSpaces: Fully managed, secure desktop computing service on the cloud
AppStream 2.0: Stream desktop applications from the cloud.
# make
gcc -DHAVE_CONFIG_H -I. -I. -I. -g -O2 -Wall -Wno-comment -c cmatrix.c
cmatrix.c:37:20: fatal error: curses.h: No such file or directory
#include <curses.h>
^
compilation terminated.
make: *** [cmatrix.o] Error 1
After troubleshooting, I came up with a solution and was able to get past the make stage. I am sharing it here as it might be useful for you.
The curses.h header file belongs to the ncurses module! You need to install the packages ncurses-devel and ncurses (YUM) or libncurses5-dev (APT), and you will get past this error.
Use yum install ncurses-devel ncurses for YUM-based systems (like Red Hat, CentOS, etc.) or apt-get install libncurses5-dev for APT-based systems (like Debian, Ubuntu, etc.). Verify once that the package is installed and then proceed with your next course of action.
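For example, on a YUM-based system you could verify the packages and re-run the build like this:
# rpm -q ncurses-devel ncurses
# make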