
How to create S3 bucket in AWS

Learn how to create an S3 bucket in AWS step by step. Understand the permissions and properties that are set while creating the bucket.

S3 bucket creation in AWS

S3, which stands for Simple Storage Service, is a storage web service provided by Amazon Web Services. S3 is the replacement for storage boxes in traditional data centers: a highly scalable, cheap, and reliable alternative. In S3, data is stored in buckets. A bucket is the root folder in S3. You can have more than one bucket in a single AWS account. Files stored in buckets are called objects. You can control access to data by defining permissions at the bucket level and the object level.

In this article we will see how to create an S3 bucket, with screenshots.

Step 1 –

Log in to the AWS console and select S3 under Storage. You can also find it using the search bar of the console.

S3 in AWS console

It will take you to the Amazon S3 console, where you can see the ‘Create bucket‘ button along with the Delete bucket and Empty bucket buttons in the header.

S3 console

Step 2 –

Click Create bucket and you will be presented with the bucket creation wizard. Enter a bucket name of your choice. Remember, it should be unique across all of AWS. Select a region (geographically nearest to the source/destination of the reads/writes for this bucket, to avoid latency). If you want to create the new bucket with the settings of an existing bucket, you can specify the existing bucket name in the last option.

Create bucket wizard

Hit Next after filling in the required details. You will enter the bucket properties screen shown below.

S3 Bucket properties

Here you can set the following properties on your bucket.

  1. Versioning. Enable this to keep all versions of objects as they are altered. Once enabled it cannot be disabled; it can only be suspended.
  2. Logging. Tracks all access requests made to this bucket.
  3. Tags. Add tags of your choice to identify the bucket easily in other AWS services and in billing.

All are disabled by default. Once you enable the features of your choice, hit Next. You will enter the permissions settings screen.

S3 bucket permissions

Here you can manage permissions at the user, public, and system level. Public and system permissions can be enabled or disabled. User-level permissions can be set per user ID, with separate read and write access.

Once you are done, hit Next and the review screen will show all the options you have selected as a final confirmation before creating the bucket.

Bucket review screen before creation

Hit Create bucket now. Your bucket will be created and you will be redirected back to the bucket list screen, where you can see your newly created bucket!

Bucket list

Step 3 –

If you click on the bucket name, you will get into the bucket itself, where you can upload objects.

Inside S3 bucket

Tabs like Properties, Permissions, and Management are also visible in the menu bar and can be used to administer this bucket. We will see them in another post.
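If you prefer the command line over the console, the same bucket can be created with the AWS CLI. Below is a minimal sketch, assuming the AWS CLI is installed and configured with suitable credentials; the bucket name, region, and tag are only examples (bucket names must be globally unique).

# cat create_bucket.sh
#!/bin/bash
# Example bucket name - S3 bucket names must be globally unique
BUCKET=kerneltalks-example-bucket

# Create the bucket in the chosen region
aws s3 mb s3://$BUCKET --region ap-south-1

# Enable versioning on the bucket (same as the Versioning property in the wizard)
aws s3api put-bucket-versioning --bucket $BUCKET --versioning-configuration Status=Enabled

# Tag the bucket so it is easy to identify in other services and billing
aws s3api put-bucket-tagging --bucket $BUCKET --tagging 'TagSet=[{Key=env,Value=test}]'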

How to: Virtual Private Cloud in AWS

A how-to guide for Virtual Private Cloud in AWS. Learn what a VPC is and how to create, configure, and delete a VPC in AWS, with screenshots.

How-to guide: VPC in AWS

What is VPC?

VPC stands for Virtual Private Cloud. It’s your own private cloud inside the public cloud. You control every aspect of the VPC and its communication with the outside world. It’s like having your own datacenter that is isolated from other datacenters. When you are using cloud services, you are working inside your VPC. Servers, storage, load balancers, databases: everything you create and configure runs inside your VPC. A VPC gives you great flexibility to control your data privacy and security even though it is in the cloud.

How to create VPC in AWS?

We will walk through the process of creating a VPC in the AWS (Amazon Web Services) cloud. By default, one VPC is created for you when you create a new AWS account. This VPC is marked as the default VPC. Whenever you are using services within AWS, this VPC will be used by default if multiple VPCs exist in your account.

Check out our AWS CSA associate certificate preparation guide

Let’s follow this series of screenshots to create a VPC.

First, log in to your AWS management console and navigate to VPC under the category ‘Networking and Content Delivery‘. See the image below. Or you can type VPC in the AWS services search bar and you will be presented with the VPC link.

VPC in AWS management console

Now you will be presented with the VPC dashboard, which shows you a summary of your VPC resources like below:

VPC dashboard showing resources details

Here click on ‘Start VPC Wizard‘. This will kick off the VPC wizard to create your VPC step by step.

Step 1 :

Choose which kind of VPC you need. You have these choices –

  1. VPC with a single public subnet
  2. VPC with public and private subnets
  3. VPC with public and private subnets with hardware VPN access
  4. VPC with a private subnet only and hardware VPN access

Each choice has its own features to offer. You can see what each offers by clicking on it. We will be creating the first type of VPC in this tutorial.

Select type of VPC

Select your type of VPC in the left column and then click the blue Select button on the right.

Step 2:

Here you need to configure your subnet IP ranges, hardware tenancy, and related settings. See the screenshot below; we will go through each field one by one.

VPC configuration
  • IPv4 CIDR block: CIDR is Classless Inter-Domain Routing. This is the address range to be used by the VPC. IP addresses from this range will be assigned to components or services you use in this VPC. This is a mandatory field. You have to specify your range in CIDR notation. Note that this range is configured and reachable only within your VPC.
  • IPv6 CIDR block: Optional field. You can have IPv6 support in your VPC with this. The IP range will be automatically generated and assigned by Amazon; you do not have the privilege to choose your own.
  • VPC Name: A name of your choice. It helps you identify this VPC in other parts of AWS within your account for configuration purposes. You can leave this blank since AWS identifies every component by its ARN (Amazon Resource Name). The ARN is an alphanumeric, system-generated name that is not user friendly, hence this field is provided optionally so that you can give your components easily recognizable names.
  • Public subnet’s IPv4 CIDR: This range is meant for communication with the outside world. Your resources will be assigned IPs from this block when you want them to communicate outside the VPC.
  • Availability zone: Availability zones are logical groupings of AWS hardware within one region (a geographical grouping). At any one time you work within one region, and the availability zones of that region are listed here in a dropdown. If no zone is selected, AWS creates the VPC’s subnet in whichever zone has the most free resources at that moment.
  • Subnet name: Again, this field lets you give your public subnet an easily recognizable name.
  • Service endpoints: These are virtual devices in AWS. If you want to attach any of them to this VPC, you can browse and select them here.
  • Enable DNS hostnames: This enables DNS names to be generated for components when they are created in this VPC. These names are system generated.
  • Hardware tenancy: Choose whether you want your VPC components on single dedicated hardware (dedicated, physically as close together as possible) or anywhere (physically near or far) within the zone you specified above. Dedicated tenancy assigns hardware in the same rack or nearby racks so that you have minimal network latency and the highest performance.

Step 3 :

Click the ‘Create VPC‘ button. Your VPC will be created within seconds and you will be greeted with a screen saying “Your VPC has been successfully created. You can launch instances into the subnets of your VPC. For more information, see Launching an Instance into Your Subnet.” (link altered here with my blog post link). Click OK and you will be presented with the VPC list screen as below:

VPC list

Here you can see our newly created VPC named kerneltalks_vpc! All details of this VPC can be seen here. Your VPC is ready to use.
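The same thing can also be done from the command line. Here is a minimal sketch using the AWS CLI (assumed to be installed and configured); the CIDR blocks and the VPC name are only example values.

# cat create_vpc.sh
#!/bin/bash
# Create the VPC with an example IPv4 CIDR block and capture its ID
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 --query 'Vpc.VpcId' --output text)

# Name it so it is easy to spot in the console
aws ec2 create-tags --resources "$VPC_ID" --tags Key=Name,Value=kerneltalks_vpc

# Add one public subnet inside the VPC
aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.0.0/24

# Enable DNS hostnames for components created in this VPC
aws ec2 modify-vpc-attribute --vpc-id "$VPC_ID" --enable-dns-hostnames '{"Value":true}'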

How to modify VPC in AWS?

After creation you can modify VPC parameters. From the VPC list shown above, select the VPC you want to edit and then click the Actions button in the header. A dropdown menu will appear with the options below:

  • Delete VPC
  • Edit CIDRs
  • Edit DHCP options set
  • Edit DNS resolution
  • Edit DNS hostnames
  • Create flow log

Flow logs can be created for any resource in a VPC to trace and see IP traffic flow information. The rest of the options are self-explanatory. Here you can modify the VPC and delete the VPC too.

Here is a small GIF I created which shows the whole process of creating a VPC.

GIF: Create VPC in AWS

Shell scripting basics: IF, FOR and WHILE loop

Beginner’s guide to shell scripting basics: the if statement and the for and while loops. The article includes small scripts using if, for, and while.

Bash scripting : If, FOR & WHILE loop

This article can be referred to as a beginner’s guide to shell scripting. We will be discussing the basic control structures used in shell or bash scripting. In this chapter, we will cover the if statement, the for loop, and the while loop.

IF statement

The if statement is a conditional construct in scripting. It is used to check a condition and perform actions based on the condition being true or false. The if statement structure is:

if [ condition ]
then
<execute this code>
else
<execute this code>
fi

if starts with a condition check. If the condition matches, it executes the following code (after then). If the condition is false, it executes the code after else. An if statement can be used without the else part too.

Example :

# cat if_statement.sh
#!/bin/bash
if [ $1 -lt 100 ]
then
echo "Your number is smaller than 100"
else
echo "Your number is greater than 100"
fi

# sh if_statement.sh 34
Your number is smaller than 100

If you execute this script, it will read the first argument as $1 and compare it with 100. It will display the output accordingly.

Bash FOR loop

Loops are used in scripting to execute the same set of commands a number of times. One of them is the for loop. It takes a series of values as input and executes the enclosed code repeatedly, once for each value. The for loop structure is as below:

for i in <values>
do
<execute this code>
done

Here it reads the values into a variable i, and then $i can be used in the following code. Once all the values are processed, the loop stops and the script proceeds to the next line of code.

Example :

# cat for_loop.sh
#!/bin/bash
for i in 1 2 3 4 5
do
echo First value in the series is $i
done

# sh for_loop.sh
First value in the series is 1
First value in the series is 2
First value in the series is 3
First value in the series is 4
First value in the series is 5

You can see variable $i is fed with a different value on each run of the loop. Once all values in the range are processed, the loop exits.

Bash WHILE loop

while is another loop used in programming which runs on a condition. It keeps running as long as the condition is met. Once the condition no longer matches, it exits. It’s a conditional loop! The while loop structure is:

while [ condition ]
do
<execute this code>
done

The while loop starts with the condition. If the condition is met, it executes the following code and then checks the condition again. If the condition is still met, the next iteration of code execution happens. It continues until the condition becomes false.

Example :

# cat while_loop.sh
#!/bin/bash
count=0
while [ $count -lt 3 ]
do
echo Count is $count
count=$(expr $count + 1)
done

# sh while_loop.sh
Count is 0
Count is 1
Count is 2

In the above example we have an incrementing counter. The condition for the while loop is that it can execute while the counter is less than 3. You can observe in the output that the while loop exits after the counter hits 2.
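To wrap up, here is a small sketch (the filename and values are just examples) that combines all three constructs in one script:

# cat loops_demo.sh
#!/bin/bash
# for loop walks through the values; the if statement checks each one
for i in 1 2 3 4 5
do
        if [ $i -lt 3 ]
        then
                echo "$i is smaller than 3"
        else
                echo "$i is 3 or bigger"
        fi
done

# while loop counts down until the condition becomes false
count=3
while [ $count -gt 0 ]
do
        echo "Countdown: $count"
        count=$(expr $count - 1)
done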

These are the three most widely and commonly used constructs in scripting! If you have any suggestions/feedback/corrections for this article please let us know in the comments below.

Beginner’s guide: 4 Linux group management commands

Learn to manage groups in Linux with these group management commands. The article includes how to create, modify, delete, and administer groups.

Group management in Linux

Groups on a Linux system are a bunch of users grouped together for easy access/permission management. One user can be a member of one or many groups. A user has exactly one primary group and can have one or many secondary groups. In our other article we have seen user management commands in Linux/Unix. In this article we will discuss group management. There are mainly 4 commands used to manage user groups on Linux systems:

  1. groupadd
  2. groupmod
  3. groupdel
  4. gpasswd

Let’s check all these commands and the fields of the /etc/group file they are responsible for.

groupadd command

As the name suggests, it is used to create new groups on the Linux system. groupadd command needs a group name as an argument.

# groupadd sysadmins

# cat /etc/group
sysadmins:x:502:

This command creates a group named sysadmins. A newly created group can be verified in /etc/group file. Study fields in /etc/group file here.

Several common switches that work with groupadd are listed below (see the examples after the list):

  • -g : Specify GID of your choice
  • -o : Create a group with non-unique GID
  • -r : Create a system group. (GID will be taken from system group GID range)
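For example, a quick sketch of the -g and -r switches (the group names and GID are just examples):

# groupadd -g 1500 developers
# groupadd -r appsvc

The first command creates a group named developers with GID 1500, while the second creates a system group named appsvc whose GID is picked from the system GID range.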

groupmod command

If you want to edit parameters like the name, GID, or uniqueness of a group which already exists on the system, then you can modify the group using groupmod. Below is the list of switches, along with the values they expect, that can be fed to this command:

  • -g : new GID
  • -o : Make it non-unique
  • -n : New name

# groupmod -n newsysadmins sysadmins
# cat /etc/group |grep sys
newsysadmins:x:502:

# groupmod -g 9999 sysadmins
# cat /etc/group 
sysadmins:x:9999:

# groupmod -o -g 3 sysadmins
# cat /etc/group |grep sys
sys:x:3:bin,adm
sysadmins:x:3:


Observe the above outputs, where we changed the name and GID of the group, and lastly we assigned GID 3 (non-unique), which already existed, to our group.

groupdel command

That’s the command where groups end their life! Yes, group deletion is performed using this command. It is pretty simple: just supply your group name and it will be deleted from the system.

# groupdel sysadmins

gpasswd command

This command is used to administer a group. Administering groups includes:

  1. Adding/removing users to/from group
  2. Setting and removing group password
  3. Making a user administrator/member of a group

Adding and removing a user in a group is done with the switches -a and -d respectively, followed by the user name and lastly the group name. Check the examples below:

# gpasswd -a shri sysadmins
Adding user shri to group sysadmins

# cat /etc/group | grep sysadmin
sysadmins:x:3:shri

# gpasswd -d shri sysadmins
Removing user shri from group sysadmins

# cat /etc/group | grep sysadmin
sysadmins:x:3:


Setting a password is done without any switch, while password removal uses the -r switch, as below:

# gpasswd sysadmins
Changing the password for group sysadmins
New Password:
Re-enter new password:

What is the use of group password in Linux? 

This question comes to many of us. Hardly anyone uses this feature at all. The idea must be to secure a group from non-member users. But since a group password would have to be known to all group members, it doesn’t really make sense to use it. Then you might ask why group passwords exist in the first place? It may be just a case of applying the user (password security) model to groups as well, to maintain symmetry in the design. I mean, it’s just my thought. Let me know if you have any other reason which justifies the existence of group passwords!

Making a user an administrator of the group grants them the privilege to administer the group. A member user is just a member of the group and cannot administer it. You can make a user an administrator of the group with the -A switch and a member with -M. By default, the user is added to the group as a member.

# gpasswd -A shri sysadmins
# gpasswd -M shri sysadmins

Those are all group management commands in Linux with their most used switches. Let us know any addition/correction/feedback in the comments!

Finger command in Linux

Learn how to get user details by using the finger command in Linux. This article includes a list of switches for the finger command and the information it reports.

Finger command howto!

finger is a user information lookup program in Linux. It is used to get system user details like the user’s home directory, last login, login shell, etc. This command is useful to see these parameters, which you would otherwise have to look up in /etc/passwd and last login records.

Sometimes you will not find the finger command in your out-of-the-box distribution. You can install the finger package and proceed to use this command. Sample output of the installation on Red Hat for your reference.

# yum install finger
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos, security
Setting up Install Process
epel/metalink                                                                                                                         |  15 kB     00:00
epel                                                                                                                                  | 4.3 kB     00:00
epel/primary_db                                                                                                                       | 5.9 MB     00:04
rhui-REGION-client-config-server-6                                                                                                    | 2.9 kB     00:00
rhui-REGION-rhel-server-releases                                                                                                      | 3.5 kB     00:00
rhui-REGION-rhel-server-releases/primary_db                                                                                           |  56 MB     00:00
rhui-REGION-rhel-server-releases-optional                                                                                             | 3.5 kB     00:00
rhui-REGION-rhel-server-releases-optional/primary_db                                                                                  | 5.4 MB     00:00
rhui-REGION-rhel-server-rh-common                                                                                                     | 3.8 kB     00:00
Resolving Dependencies
--> Running transaction check
---> Package finger.x86_64 0:0.17-40.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================================================================================================
 Package                      Arch                         Version                              Repository                                              Size
=============================================================================================================================================================
Installing:
 finger                       x86_64                       0.17-40.el6                          rhui-REGION-rhel-server-releases                        22 k

Transaction Summary
=============================================================================================================================================================
Install       1 Package(s)

Total download size: 22 k
Installed size: 27 k
Is this ok [y/N]: y
Downloading Packages:
finger-0.17-40.el6.x86_64.rpm                                                                                                         |  22 kB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : finger-0.17-40.el6.x86_64                                                                                                                 1/1
  Verifying  : finger-0.17-40.el6.x86_64                                                                                                                 1/1

Installed:
  finger.x86_64 0:0.17-40.el6

Complete!

The finger command requires a username as an argument. Without any switch, finger shows the details below.

# finger shrikant
Login: shrikant                         Name: Shrikant Lavhate
Directory: /home/ec2-user               Shell: /bin/bash
On since Wed Jul  5 00:31 (EDT) on pts/0 from 59.184.183.234
No mail.
No Plan.

It displays –

  1. Login: The login ID
  2. Name: The comment field in /etc/passwd for that user
  3. Directory: The user’s home directory
  4. Shell: The user’s login shell
  5. Last login time and the IP from which the user logged in
  6. Email status
  7. Plan: the content of the .plan file in the user’s home directory

Email status can be one of the below –

  • No Mail.: if there is no mail at all
  • Mail last read DDD MMM ## HH:MM YYYY (TZ): if the person has looked at their mailbox since new mail arrived
  • New mail received …: same as above
  • Unread since …: if the user has new mail

Finger command switches

The finger command supports a few switches. The above output without any switch is the same as the output with the -l switch (multi-line listing). It also displays the contents of the files .plan, .project, .pgpkey, and .forward from the user’s home directory if they exist.

Another switch is -s, which shows a short, one-line format with information like terminal name, write status, idle time, login time, and office/contact details.

# finger -s  shri
Login     Name              Tty        Idle  Login Time   Office     Office Phone
shri      Shrikant Lavhate   *     *    No     logins

In this output you can see a * shown for:

  • Terminal: when the device is unknown
  • Write status: if write permission is denied
  • Login and idle time: if nonexistent

The last switch is -m, which prevents matching of user names. The finger command normally matches the supplied name against both user IDs and the user comment (real name) field. To avoid matching against the comment field and only check user IDs, this switch can be used.

Finger can even be used to look up remote user information by using the user@host format.
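For example (the hostname is hypothetical, and the remote host must be running a finger daemon reachable on port 79):

# finger shri@server2.example.com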

How-to guide: LVM snapshot

Understand what an LVM snapshot is and why to use it. Learn how to create, mount, unmount, and delete an LVM snapshot.

LVM snapshot: how to and when to!

LVM (Logical Volume Manager) is one of the most widely used volume managers on Unix and Linux. We have published a series of LVM articles at kerneltalks in the past which you can refer to.

In this article we will discuss how to take a snapshot of a logical volume. Since LVM is widely used, you must know how to take a backup of a logical volume at the LVM level. Continue reading to learn more about LVM snapshots.

What is LVM snapshot?

An LVM snapshot is a frozen image of a logical volume. It is an exact copy of the LVM volume, holding all the data of the volume at the time of its creation. LVM copies blocks or chunks from volume to volume, so it’s comparatively fast compared with a file-based backup. The snapshot volume is read-only in LVM and read-write in LVM2 by default.

Why to use LVM snapshot?

LVM snapshots are a quick way to create a copy of your data. If your volume is big and you cannot tolerate performance degradation during an online data backup, or downtime during an offline backup, then an LVM snapshot is the way to go. You can create a snapshot and then use your system normally. While the system is serving production, you can take a backup from the snapshot volume at your leisure without hampering system operations.

Another benefit of using a snapshot is that your data remains unchanged while the backup is going on. If you take a backup of a production/live volume, data on the volume may change, resulting in an inconsistent backup. But in the case of a snapshot, since no app/user is using it or writing data to it, a backup taken from the snapshot is consistent and smooth.

How to take LVM snapshot?

LVM snapshots are easy to take. You need to use the same lvcreate command with the -s switch (denotes snapshot) and supply the name of the volume to back up at the end. The important thing here is to decide the size of the snapshot volume. If the snapshot volume fills up (it only needs to hold blocks that change on the original volume after the snapshot is taken), the snapshot becomes unusable and will be dropped. So make sure you give the snapshot some extra headroom (per the man page, 15-20% of the original volume might be enough).

Let’s create an LVM snapshot.

# df -h /mydata
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/vg01-lvol0   93M  1.6M   87M   2% /mydata

# lvcreate -L 150 -s -n backup /dev/vg01/lvol0
  Rounding up size to full physical extent 152.00 MiB
  Reducing COW size 152.00 MiB down to maximum usable size 104.00 MiB.
  Logical volume "backup" created.

In the above example, we created a snapshot of a roughly 100MB logical volume. For the snapshot we requested 150MB of space to be on the safe side, but the command itself reduced it to the optimum usable value of 104MB to save space. We also named this volume backup with the -n switch.

Once the snapshot is created, it can be mounted like any other logical volume. Once mounted, it can be used as a normal mount point. Remember, LVM snapshots are read-only and LVM2 snapshots are read-write by default.

# mount /dev/vg01/backup /mybackup

# df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/vg01-lvol0  93M  1.6M   87M   2% /mydata
/dev/mapper/vg01-backup 93M  1.6M   87M   2% /mybackup

You can see that after mounting the snapshot volume, it is an exact copy of the original volume. The original volume is mounted on /mydata and the snapshot volume on /mybackup.

Now you can take a backup of the mounted snapshot volume, leaving the original volume untouched and without impacting its service to the environment.
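While the backup is running, it is a good idea to keep an eye on how full the snapshot is getting. The lvs command shows the snapshot’s copy-on-write usage in its Data% column, and if needed the snapshot can be grown with lvextend before it fills up. A small sketch using our example volume group (the extra 50M is just an example value):

# lvs vg01
# lvextend -L +50M /dev/vg01/backup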

How to delete LVM snapshot?

An LVM snapshot is deleted the same way you delete a logical volume. After the backup, when you want to delete the snapshot, you just need to unmount it and use the lvremove command.

# umount /mybackup
# lvremove /dev/vg01/backup
Do you really want to remove active logical volume backup? [y/n]: y
  Logical volume "backup" successfully removed

Conclusion

LVM snapshots are special volumes that hold an exact copy of your data volume. They help reduce the read-write overhead on the data volume while taking a backup. They also help with consistent backups, since no app/user alters data on the snapshot volume.

Process states in Linux

Learn the different process states in Linux. A guide explaining what they are, how to identify them, and what they do.

Different process states

In this article we will walk you through the different process states in Linux. This will be helpful in analyzing processes during troubleshooting. Process states define what a process is doing and what it is expected to do in the near future. The performance of the system depends to a large degree on what states its processes are in.

From birth (spawn) till death (kill/terminate or exit), a process goes through several states in its life cycle. Some processes remain in the process table even after they are killed or have died; those processes are called zombie processes. We have covered the zombie process in detail in this article. Let’s check the different process states now. Broadly, the process states are:

  1. Running or Runnable
  2. Sleeping or waiting
  3. Stopped
  4. Zombie

How to check process state

The top command lists a total count of all these states in its output header.

top - 00:24:10 up 4 min,  1 user,  load average: 0.00, 0.01, 0.00
Tasks:  88 total,   1 running,  87 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   1018308k total,   187992k used,   830316k free,    15064k buffers
Swap:        0k total,        0k used,        0k free,    67116k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
    1 root      20   0 19352 1552 1240 S  0.0  0.2   0:01.80 init
    2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd
----- output clipped -----

See the highlighted row named Tasks above. It shows the total number of processes and their state-wise split.

Later in the above top output, observe the column with the heading S. This column shows process states. In the output we can see 2 processes in the sleeping state.

You can also use the ps command to check process state. Use the below syntax:

# ps -o pid,state,command
  PID S COMMAND
 1661 S sudo su -
 1662 S su -
 1663 S -bash
 1713 R ps -o pid,state,command

In the above output, the column titled S shows the state of each process. We have 3 sleeping processes and one running process here. Let’s dive into each state.

Process state: Running

The most healthy state of all. It indicates the process is active and serving its requests. The process is getting the system resources (especially CPU) it needs to perform its operations. A running process is a process which is currently being served by the CPU. It can be identified by the state flag R in ps or top output.

The runnable state is when the process has all the system resources it needs to perform its operation except the CPU. This means the process is ready to go once the CPU is free. Runnable processes are also flagged with the state flag R.

Process state: Sleeping

A sleeping process is one that is waiting for a resource in order to run. Since it is waiting, it gives up the CPU and goes into sleep mode. Once the resource it requires is free, it gets placed in the scheduler queue for the CPU to execute. There are two types of sleep modes: interruptible and uninterruptible.

Interruptible sleep mode

In this mode a process waits for a particular time slot or a specific event to occur. If those conditions occur, the process comes out of sleep mode. These processes are shown with state S in ps or top output.

Uninterruptible sleep mode

A process in this sleep mode does not respond to signals while sleeping; it wakes up only when the resource it is waiting on (typically I/O) becomes available, or when its wait otherwise completes. It can be identified by the state D in outputs.

Process state : Stopped

A process ends or terminates when it receives a kill signal or reaches its exit status. At this moment the process gives up all its occupied resources but does not release its entry in the process table. Instead it sends a signal about its termination to its parent process. This helps the parent process decide whether the child exited successfully or not. Once SIGCHLD is received by the parent process, it takes action and releases the child process’s entry in the process table.

Process state: Zombie

As explained above, an exiting process sends SIGCHLD to its parent. During the time between sending the signal to the parent and the parent clearing out the process slot in the process table, the process is in zombie mode. A process can stay in zombie mode if its parent dies before releasing the child process’s slot in the process table. It can be identified by Z in outputs.
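If you want a quick state-wise count of all processes, or want to hunt down zombies specifically, ps can do both. A small sketch (assuming the procps ps options shown here):

# ps -eo state --no-headers | sort | uniq -c
# ps -eo pid,ppid,state,command | awk '$3 == "Z"'

The first command counts processes per state flag; the second lists any zombie processes along with their parent PID, which tells you which parent is not reaping its children.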

So the complete life cycle of a process can be summarized as:

  • Spawn
  • Waiting
  • Runnable
  • Running
  • Stopped
  • Zombie
  • Removed from process table

Bash fork bomb: How does it work

Insights into the Bash fork bomb. Understand how the fork bomb works, what it can do to your system, and how to prevent it.

Insight of Bash fork bomb

We will be discussing the Bash fork bomb in this article. This article will walk you through what a fork bomb is, how it works, and how to protect your systems from it. Before we proceed, kindly read the notice below carefully.

Caution: A fork bomb may crash your system if limits are not configured properly. It can also bring your system performance down, hence do not run it on production/live systems.

Fork bombs are normally used to test systems before sending them to a production/live setup. Once exploded, a fork bomb cannot be stopped gracefully. The only ways to stop it are to kill all its instances in one go or to reboot the system. Hence you should be very careful when dropping it on a system, since you may not be able to use that system until you reboot it. Let’s start with insights into the fork bomb.

What is fork bomb?

It’s a type of DoS (Denial of Service) attack built from a little bash function magic! A fork bomb, as the name suggests, has the capability to fork its own child processes on the system indefinitely. That means once you start a fork bomb it keeps spawning new processes on the system. These new processes stay alive in the background and keep eating system resources until the system hangs. Mostly these new processes don’t do anything and sit idle in the background. Also, by spawning new processes it fills up the kernel’s process limit, and then the user won’t be able to start any new process, i.e. won’t be able to use the system at all.

How fork bomb works?

Now, technically speaking: a fork bomb is a function. It calls itself recursively in an indefinite loop. Check the small example below:

bombfn ()
{
bombfn | bombfn &
}
bombfn

Above is the smallest readable variant of the fork bomb function. Going through this code step by step:

  1. bombfn () : The first line defines a new function named bombfn.
  2. { and } : These contain the function code.
  3. bombfn : This last line calls the function (bombfn) so that it executes.
  4. Now the function code within { } : here the first occurrence of bombfn means the function calls itself, i.e. recursion. Its output is then piped to another call of the same function. Lastly, & puts it in the background.

Overall, the fork bomb is a function which calls itself recursively, pipes the output to another call to itself, and then puts it in the background. This recursive call makes the whole process repeat itself indefinitely until the system crashes from exhausted resources.

This function can be shortened to :(){ :|: & };: Observe carefully: bombfn in the above example is replaced by : and the final call of the function follows the ; on the same line.

 :(){ :|: & };:

Since this looks like just a handful of symbols, victims easily fall for it, thinking that running it in a terminal won’t cause any harm as it will just return an error anyway. But it won’t!

How to prevent bash fork bomb?

A fork bomb can be prevented by limiting user processes. If you don’t allow a user to fork many processes, then a fork bomb won’t be able to spawn enough processes to bring down the system. It may still slow the system down, and the user running the fork bomb won’t be able to run any commands from their session.

Caution : This solution works for non-root accounts only.

The process limit can be set under /etc/security/limits.conf or in the PAM configuration.

To define a process limit, add the below lines to the /etc/security/limits.conf file:

user1            soft    nproc          XXX
user1            hard    nproc          YYYYY

Where :

  • user1 is the username for which the limits are being defined
  • soft and hard are the types of limits
  • nproc is the variable for the process limit
  • the last numbers (XXX, YYYYY) are the total number of processes the user can fork. Set them according to your system capacity.

You can verify the user’s process limit by running ulimit -u from the user’s login.
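For example, as root you can check the effective limit for the example user above with something like:

# su - user1 -c 'ulimit -u'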

Once you limit processes and run the fork bomb, it will show you the warning below until you kill it. But you will be able to use the system properly, provided you set the limits well within range.

-bash: fork: retry: Resource temporarily unavailable

You can kill it by pressing Ctrl+C at the next iteration of the warning. Or you can kill it using the root account from another session.
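From a root session, one way to wipe out all of the offending user’s processes in one go is pkill (user1 is the example user from the limits above; note this also kills that user’s login shells):

# pkill -KILL -u user1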

Cowsay: Fun in Linux terminal

The cowsay command gives you some ASCII graphics in your Linux terminal! It displays a string of your choice being said (or thought) by a cow drawn in ASCII art.

Cowsay command!

Another article to have some fun in the Linux terminal. Previously we have seen how to create fancy ASCII banners and Matrix-style falling code in the Linux terminal. In this article we will see another small utility called cowsay, which prints an ASCII picture of a cow on the terminal with a message of your choice. Cowsay is useful for writing eye-catching messages to users in the motd (message of the day)!

From the man page “Cowsay generates an ASCII picture of a cow saying something provided by the user. If run with no arguments, it accepts standard input, word-wraps the message given at about 40 columns, and prints the cow saying the given message on standard output.” That explains the functionality of cowsay. Let’s see it in action!

# cowsay I love kerneltalks.com
 ________________________
< I love kerneltalks.com >
 ------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

😀 this is how it looks in terminal! Awesome eh?

Installation is pretty simple. Install the cowsay package on your Linux system and that’s it. For reference, below are the installation logs on my AWS EC2 Linux server.

# yum install cowsay
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos, security
Setting up Install Process
epel/metalink                                                                                                                         |  12 kB     00:00
epel                                                                                                                                  | 4.2 kB     00:00
http://mirror.math.princeton.edu/pub/epel/6/x86_64/repodata/repomd.xml: [Errno -1] repomd.xml does not match metalink for epel
Trying other mirror.
epel                                                                                                                                  | 4.3 kB     00:00
epel/primary_db                                                                                                                       | 5.9 MB     00:09
rhui-REGION-client-config-server-6                                                                                                    | 2.9 kB     00:00
rhui-REGION-rhel-server-releases                                                                                                      | 3.5 kB     00:00
rhui-REGION-rhel-server-releases-optional                                                                                             | 3.5 kB     00:00
rhui-REGION-rhel-server-rh-common                                                                                                     | 3.8 kB     00:00
Resolving Dependencies
--> Running transaction check
---> Package cowsay.noarch 0:3.03-8.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================================================================================================
 Package                              Arch                                 Version                                  Repository                          Size
=============================================================================================================================================================
Installing:
 cowsay                               noarch                               3.03-8.el6                               epel                                25 k

Transaction Summary
=============================================================================================================================================================
Install       1 Package(s)

Total download size: 25 k
Installed size: 31 k
Is this ok [y/N]: y
Downloading Packages:
cowsay-3.03-8.el6.noarch.rpm                                                                                                          |  25 kB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : cowsay-3.03-8.el6.noarch                                                                                                                  1/1
  Verifying  : cowsay-3.03-8.el6.noarch                                                                                                                  1/1

Installed:
  cowsay.noarch 0:3.03-8.el6

Complete!

Once it is successfully installed, you can run the cowsay command followed by the text you want the cow to say! There are different cow modes which you can use to change the appearance of the cow 😀 (outputs later in this post):

  1. -b: borg mode
  2. -d: Cow appears dead
  3. -g: greedy mode
  4. -s: stoned cow
  5. -t: tired cow
  6. -y: Young cow 😛

Different Cowsay command examples

Normally cowsay word-wraps its input. If you want fancy banners in cowsay, you should use the -n switch so that cowsay won’t word-wrap and you get nicely formatted output.

# figlet kerneltalks | cowsay -n
 __________________________________________________
/  _                        _ _        _ _         \
| | | _____ _ __ _ __   ___| | |_ __ _| | | _____  |
| | |/ / _ \ '__| '_ \ / _ \ | __/ _` | | |/ / __| |
| |   <  __/ |  | | | |  __/ | || (_| | |   <\__ \ |
| |_|\_\___|_|  |_| |_|\___|_|\__\__,_|_|_|\_\___/ |
\                                                  /
 --------------------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||


Check out the cow appearances below, produced by the different switches listed above.

# cowsay -b kerneltalks
 _____________
< kerneltalks >
 -------------
        \   ^__^
         \  (==)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
# cowsay -d kerneltalks
 _____________
< kerneltalks >
 -------------
        \   ^__^
         \  (xx)\_______
            (__)\       )\/\
             U  ||----w |
                ||     ||
# cowsay -g kerneltalks
 _____________
< kerneltalks >
 -------------
        \   ^__^
         \  ($)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
# cowsay -s kerneltalks
 _____________
< kerneltalks >
 -------------
        \   ^__^
         \  (**)\_______
            (__)\       )\/\
             U  ||----w |
                ||     ||
# cowsay -t kerneltalks
 _____________
< kerneltalks >
 -------------
        \   ^__^
         \  (--)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
# cowsay -y kerneltalks
 _____________
< kerneltalks >
 -------------
        \   ^__^
         \  (..)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

If you observe, in all the different modes the eyes and tongue are the only things that change. So you can define and change them manually too! You can define the eyes with the -e switch and the tongue with the -T switch.

# cowsay -e 88  kerneltalks
 _____________
< kerneltalks >
 -------------
        \   ^__^
         \  (88)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
# cowsay -T X kerneltalks
 _____________
< kerneltalks >
 -------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
             X ||----w |
                ||     ||

In the above examples I defined 88 as the eyes and X as the tongue!
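Beyond the eye and tongue switches, cowsay also ships with alternative “cowfiles”, i.e. other ASCII characters besides the default cow. Which ones are available depends on your package, but you can list them and pick one like this (tux is commonly bundled, though not guaranteed):

# cowsay -l
# cowsay -f tux kerneltalks

Here -l lists the installed cowfiles and -f selects one by name.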

It’s cool that the developers built so much versatility into such a funny command: plenty of switches, man pages, and all!