What is Docker? Introduction guide to Docker for beginners.
Docker is one of the hottest topics in the IT industry right now. If you are into system administration, IT operations, development, or DevOps, then at some point you have come across, or will come across, the word Docker and wondered: what is Docker? Why is Docker so famous? In this small introduction guide, we will explain Docker to you.
Read all docker or containerization related articles here from KernelTalk’s archives.
What is Docker?
Docker is another layer of virtualization, where virtualization happens at the operating-system level. It is a software container platform and currently leads this sector globally. You may be familiar with VMware, which virtualizes at the bare-metal level; Docker takes one step further and virtualizes at the OS level, removing hardware management, capacity planning, resource management, etc. VMware runs a number of virtual machines (VMs) on single server hardware (refer Figure 1), whereas Docker runs a number of containers on a single operating system (refer Figure 2). So in simple terms, Docker containers are just processes sharing a host operating system to perform their tasks.
Let's quickly run through the differences between a VM and a Docker container. I tabulated the differences for a quick read.
Virtual machine vs Docker container

| Virtual Machine | Docker container |
| --- | --- |
| A mini version of a physical machine | Just a process |
| Runs on hypervisor virtualization | Runs on Linux (Hyper-V needed if you run on Windows/Mac) |
| Has its own guest OS | No guest OS |
| Usable only after the guest OS finishes booting | Ready to use immediately when launched |
| Slow | Fast |
| Uses hardware resources of the host | Uses only OS resources (binaries/libraries) of the host |
| Resource management needed | No resource management |
| Runs as long as the admin/guest OS doesn't power it off | Runs as long as the command the container executed at startup keeps running |
| VM stops when you shut down the guest OS | Container stops once the command exits |
The Docker engine mainly runs on Linux. So if you are running Docker on Windows or Mac, it actually runs a tiny Linux VM in the background and, on top of it, runs its own engine to provide Docker functionality on the non-Linux platform.
Since the Docker engine runs containers, this approach is also termed containerization!
Why use Docker?
Docker containers are portable. They can be stored as an image, which can be copied to any other machine and launched there. This ensures that even if host OS parameters or versions change, containers still function the same across different OSes.
Containers use the host operating system; they don't have their own OS to boot when they are launched. This means they are available for use almost immediately, as there is no OS boot or anything of that sort to prepare the container for use. Docker containers are fast to use!
Since they use resources from the host OS, there is no resource management like adding/removing CPU, memory, storage, etc., on containers!
Lots of functionality and flexibility is being added to Docker every month. It is a fast-evolving virtualization concept and makes managing IT infrastructure easier.
What are Docker variants available to use?
Docker Editions
At present, there are two editions available. CE and EE. CE stands for Community Edition and EE stands for Enterprise Edition. Let’s see the difference between Docker CE and Docker EE.
| Docker CE | Docker EE |
| --- | --- |
| Community Edition | Enterprise Edition |
| It's free | It's paid |
| Primarily for development use | For production environments |
| Do it yourself; no support | Support subscription from Docker |
| For personal use | For enterprise/big/production use |
Docker releases
Docker also comes in two release channels: Stable and Edge. Let's see the difference between the Docker stable release and the Docker edge release.
I believe that should be enough for an introductory article on Docker. If you have any questions or feedback, please leave a comment below or reach us using the contact form.
Learn how to install Docker in Linux. Docker is the next step of virtualization: operating-system-level virtualization, also known as containerization.
In this article, we will walk you through the procedure to install Docker in any Linux distro like RHEL, SUSE, OEL, CentOS, Debian, Fedora, Ubuntu, etc. Sometimes your package manager, like YUM or apt, may offer a docker* package to install Docker on your server, but it is always good to get a fresh Docker setup. Docker is changing fast, so it is advisable to install the latest version, which might not be available from your package manager.
Install docker using package
If your package manager has a Docker package available to install then it’s an easy way to get Docker on your system.
Before going for the Docker installation, you should install the below packages on your system to use Docker's full, flexible functionality. These packages are not dependencies, but it is good to have them pre-installed so that all Docker functions/drivers are available to you.
For CentOS, Red Hat, and other YUM-based systems – yum-utils, device-mapper-persistent-data, lvm2
For Debian, Ubuntu, and other apt-based systems – apt-transport-https, ca-certificates, curl, software-properties-common
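For reference, the pre-install commands would look like the below (run as root; these are the same package lists as above):

```shell
# CentOS / Red Hat and other YUM-based systems
yum install -y yum-utils device-mapper-persistent-data lvm2

# Debian / Ubuntu and other apt-based systems
apt-get update
apt-get install -y apt-transport-https ca-certificates curl software-properties-common
```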
But you may not get the latest version of Docker this way. You can install the package simply using the yum or apt-get command. Below is sample output for your reference from an openSUSE server.
root@kerneltalks # zypper in docker
Building repository 'openSUSE-13.2-Update' cache .................................................................................................................[done]
Retrieving repository 'openSUSE-13.2-Update-Non-Oss' metadata ....................................................................................................[done]
Building repository 'openSUSE-13.2-Update-Non-Oss' cache .........................................................................................................[done]
Loading repository data...
Reading installed packages...
Resolving package dependencies...
The following NEW package is going to be installed:
docker
1 new package to install.
Overall download size: 6.2 MiB. Already cached: 0 B After the operation, additional 22.9 MiB will be used.
Continue? [y/n/? shows all options] (y): y
Retrieving package docker-1.9.1-56.1.x86_64 (1/1), 6.2 MiB ( 22.9 MiB unpacked)
Retrieving: docker-1.9.1-56.1.x86_64.rpm .............................................................................................................[done (2.5 MiB/s)]
Checking for file conflicts: .....................................................................................................................................[done]
(1/1) Installing: docker-1.9.1-56.1 ..............................................................................................................................[done]
Additional rpm output:
creating group docker...
Updating /etc/sysconfig/docker...
Install docker using the script
In the below procedure, we will use a script from Docker's official website that scans your system for details, automatically fetches the latest Docker version compatible with your system, and installs it. We will fetch the script from get.docker.com and use it to install the latest Docker on the listed Linux distros.
Fetch the latest script from the Docker official website using curl. If you read this script, the SUPPORT_MAP variable shows the list of Linux distros the script supports. If you are running any Linux version not listed there, then this method won't be useful for you.
root@kerneltalks # curl -fsSL get.docker.com -o get-docker.sh
root@kerneltalks # ls -lrt
-rw-r--r--. 1 root root 13847 May 30 18:59 get-docker.sh
Now we have the latest get-docker.sh script from the Docker official website on our server. You just have to run the script and it will do the rest!
# sh get-docker.sh
# Executing docker install script, commit: 36b78b2
+ sh -c 'yum install -y -q yum-utils'
Package yum-utils-1.1.31-45.el7.noarch already installed and latest version
+ sh -c 'yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo'
Loaded plugins: fastestmirror
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
+ '[' edge '!=' stable ']'
+ sh -c 'yum-config-manager --enable docker-ce-edge'
Loaded plugins: fastestmirror
========================================================================= repo: docker-ce-edge =========================================================================
[docker-ce-edge]
async = True
bandwidth = 0
base_persistdir = /var/lib/yum/repos/x86_64/7
baseurl = https://download.docker.com/linux/centos/7/x86_64/edge
cache = 0
cachedir = /var/cache/yum/x86_64/7/docker-ce-edge
check_config_file_age = True
compare_providers_priority = 80
cost = 1000
deltarpm_metadata_percentage = 100
deltarpm_percentage =
enabled = 1
enablegroups = True
exclude =
failovermethod = priority
ftp_disable_epsv = False
gpgcadir = /var/lib/yum/repos/x86_64/7/docker-ce-edge/gpgcadir
gpgcakey =
gpgcheck = True
gpgdir = /var/lib/yum/repos/x86_64/7/docker-ce-edge/gpgdir
gpgkey = https://download.docker.com/linux/centos/gpg
hdrdir = /var/cache/yum/x86_64/7/docker-ce-edge/headers
http_caching = all
includepkgs =
ip_resolve =
keepalive = True
keepcache = False
mddownloadpolicy = sqlite
mdpolicy = group:small
mediaid =
metadata_expire = 21600
metadata_expire_filter = read-only:present
metalink =
minrate = 0
mirrorlist =
mirrorlist_expire = 86400
name = Docker CE Edge - x86_64
old_base_cache_dir =
password =
persistdir = /var/lib/yum/repos/x86_64/7/docker-ce-edge
pkgdir = /var/cache/yum/x86_64/7/docker-ce-edge/packages
proxy = False
proxy_dict =
proxy_password =
proxy_username =
repo_gpgcheck = False
retries = 10
skip_if_unavailable = False
ssl_check_cert_permissions = True
sslcacert =
sslclientcert =
sslclientkey =
sslverify = True
throttle = 0
timeout = 30.0
ui_id = docker-ce-edge/x86_64
ui_repoid_vars = releasever,
basearch
username =
+ sh -c 'yum makecache'
Loaded plugins: fastestmirror
base | 3.6 kB 00:00:00
docker-ce-edge | 2.9 kB 00:00:00
docker-ce-stable | 2.9 kB 00:00:00
epel/x86_64/metalink | 21 kB 00:00:00
extras | 3.4 kB 00:00:00
updates | 3.4 kB 00:00:00
(1/15): docker-ce-stable/x86_64/filelists_db | 7.7 kB 00:00:03
(2/15): base/7/x86_64/other_db | 2.5 MB 00:00:04
(3/15): docker-ce-edge/x86_64/filelists_db | 9.6 kB 00:00:04
(4/15): docker-ce-edge/x86_64/other_db | 62 kB 00:00:04
(5/15): docker-ce-stable/x86_64/other_db | 66 kB 00:00:00
(6/15): base/7/x86_64/filelists_db | 6.9 MB 00:00:05
(7/15): epel/x86_64/filelists_db | 10 MB 00:00:01
(8/15): epel/x86_64/prestodelta | 2.8 kB 00:00:00
(9/15): epel/x86_64/other_db | 3.1 MB 00:00:01
(10/15): extras/7/x86_64/prestodelta | 48 kB 00:00:02
(11/15): extras/7/x86_64/other_db | 95 kB 00:00:02
(12/15): extras/7/x86_64/filelists_db | 519 kB 00:00:02
(13/15): updates/7/x86_64/filelists_db | 1.3 MB 00:00:02
(14/15): updates/7/x86_64/prestodelta | 231 kB 00:00:00
(15/15): updates/7/x86_64/other_db | 228 kB 00:00:00
Loading mirror speeds from cached hostfile
* base: mirror.genesisadaptive.com
* epel: s3-mirror-us-east-1.fedoraproject.org
* extras: mirror.math.princeton.edu
* updates: mirror.metrocast.net
Metadata Cache Created
+ sh -c 'yum install -y -q docker-ce'
If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:
sudo usermod -aG docker your-user
Remember that you will have to log out and back in for this to take effect!
WARNING: Adding a user to the "docker" group will grant the ability to run
containers which can be used to obtain root privileges on the
docker host.
Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
for more information.
If you observe the above output, you will see that the script detects your OS and then downloads, configures, and uses the supported repo to install Docker on your machine. It also notifies you to add your non-root user to the docker group so that the user can run docker commands with root privileges.
You can download and run the script in a single command as well, like below –
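A sketch of that one-liner, piping the script straight into the shell (only do this if you trust the source, since the script runs with your privileges):

```shell
# Download the install script and execute it in one step
curl -fsSL get.docker.com | sh
```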
If you run the script on an unsupported Linux version (one not mentioned in the SUPPORT_MAP list), then you will see the below error.
root@kerneltalks # sh get-docker.sh
Executing docker install script, commit: 36b78b2
Either your platform is not easily detectable or is not supported by this
installer script.
Please visit the following URL for more detailed installation instructions:
https://docs.docker.com/engine/installation/
If you are on RHEL or SLES (basically the Enterprise Linux editions), then only Docker EE, i.e. Enterprise Edition (paid), is supported. You will need to purchase the appropriate subscription to use it. You will see the below message –
# sh get-docker.sh
# Executing docker install script, commit: 36b78b2
WARNING: rhel is now only supported by Docker EE
Check https://store.docker.com for information on Docker EE
Install with help from docker store
If both of the above methods are not suitable for you, you can always opt for the last method. Head to the Docker online store. Go to Docker CE, i.e. Community Edition (the free one), and choose your Linux distro. Currently they have listed AWS, Azure, Fedora, CentOS, Ubuntu & Debian. Click on your choice, head to the Resources tab, and click Detailed installation instructions. You will be redirected to the appropriate Docker documentation, which has detailed step-by-step commands to perform a clean install of Docker on the Linux of your choice! Or you can always head to the installation home page and choose your host.
Check if Docker is installed
Finally, check whether Docker is installed on the system by simply running the command docker version.
root@kerneltalks # docker version
Client:
Version: 18.05.0-ce
API version: 1.37
Go version: go1.9.5
Git commit: f150324
Built: Wed May 9 22:14:54 2018
OS/Arch: linux/amd64
Experimental: false
Orchestrator: swarm
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
The last line in the above output shows that the Docker service is not yet running on the server. You can start the service and then the output will show your Docker server details as well.
root@kerneltalks # service docker start
root@kerneltalks # docker version
Client:
Version: 18.05.0-ce
API version: 1.37
Go version: go1.9.5
Git commit: f150324
Built: Wed May 9 22:14:54 2018
OS/Arch: linux/amd64
Experimental: false
Orchestrator: swarm
Server:
Engine:
Version: 18.05.0-ce
API version: 1.37 (minimum version 1.12)
Go version: go1.9.5
Git commit: f150324
Built: Wed May 9 22:18:36 2018
OS/Arch: linux/amd64
Experimental: false
So, now you have successfully installed Docker on your machine and started the Docker server. You are yet to create containers in it!
Setting up docker for non-root user
For a non-root user to use Docker, you need to add the user to a group called docker. This group is created automatically when you install Docker.
root@kerneltalks # usermod -aG docker <user>
Run the above command to add the non-root user to the docker group; that user will then be able to run all docker commands without root privileges.
Also, you need to make sure the Docker service starts automatically when the server reboots. Since systemctl is becoming standard on all the latest Linux versions, the below command will work on nearly all major Linux distros.
root@kerneltalks # systemctl enable docker
This command enables Docker to run at system boot, so no root intervention is needed when the system reboots. Non-root users can continue to use Docker even after a reboot.
Try Docker without installing!
If you want to try Docker without installing it on your machine, just head to the Play with Docker website and you will be able to spin up machines with Docker in them. You can try Docker commands there from your web browser!
The only limitation is that your session is auto-closed after 4 hours. A clock set to 4 hours starts ticking in your browser window once you log in.
Learn how to set up commands or scripts to execute at shutdown and boot in SUSE Linux.
In this article, we will walk you through the procedure to schedule scripts at shutdown and boot in SUSE Linux. Many times we have a requirement to start certain applications, services, or scripts after the server boots. Sometimes you want to stop an application or service, or run a script, before the server shuts down. This can be done automatically by defining commands or scripts in certain files in SUSE Linux.
Application auto start-stop along with OS reboot
Let's walk through the steps to configure a custom application to auto-start and stop along with a Linux reboot. Create a file with a custom name (e.g. autoapp) in /etc/init.d as below –
#!/bin/sh
### BEGIN INIT INFO
# Provides: auto_app
# Required-Start: $network $syslog $remote_fs $time
# X-UnitedLinux-Should-Start:
# Required-Stop:
# Default-Start: 3 5
# Default-Stop: 0 1 2 6
# Short-Description: Start and stop app with reboot
# Description: Start and stop custom application with reboot
### END INIT INFO
case "$1" in
"start")
su - appuser -c "/app/start/command -options"
echo "Application started"
;;
"stop")
su - appuser -c "/app/stop/command -options"
;;
*)
echo "Usage: $0 { start|stop }"
exit 1
;;
esac
exit 0
Make sure you copy all the above text, including the INIT INFO block at the beginning of the file. Edit appuser and the app commands under the start and stop blocks.
Set executable permission on this file.
The next step is to register this file as a service using chkconfig. Use the filename as the service name in the below command.
root@kerneltalks # chkconfig --add autoapp
Now enable it to be handled by systemctl.
root@kerneltalks # systemctl enable autoapp
And you are done. Try to start and stop the application using the systemctl command to make sure your configuration works fine and to rule out any permission issues, typos in the script entries, etc.
If systemctl properly starts and stops the application as expected, then you are all set. As a final test, you can reboot your server and verify that the application went down while the server was shut and came back up along with the server boot.
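The manual start/stop test above would look something like the below, using our example service name autoapp:

```shell
systemctl start autoapp     # should start the application
systemctl status autoapp    # verify it reports active (exited) with status 0/SUCCESS
systemctl stop autoapp      # should stop the application
```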
Run script or command after server boot
In SUSE Linux, you have to define commands or scripts in /etc/init.d/after.local to run them after the server boots. I am running SLES 12 SP3 and my /etc/init.d/after.local looks like below –
root@kerneltalks # cat /etc/init.d/after.local
#! /bin/sh
#
# Copyright (c) 2010 SuSE LINUX Products GmbH, Germany. All rights reserved.
#
# Author: Werner Fink, 2010
#
# /etc/init.d/after.local
#
# script with local commands to be executed from init after all scripts
# of a runlevel have been executed.
#
# Here you should add things, that should happen directly after
# runlevel has been reached.
#
I added the below command at the end of this file.
echo "I love KernelTalks"
Then to test it, I rebooted the machine. Since the command output is printed to the console, after the reboot I needed to check the logs to confirm the command executed successfully.
You can check the logs of the after-local service as below:
# systemctl status after-local -l
● after-local.service - /etc/init.d/after.local Compatibility
Loaded: loaded (/usr/lib/systemd/system/after-local.service; static; vendor preset: disabled)
Active: active (exited) since Thu 2018-05-24 03:52:14 UTC; 7min ago
Process: 2860 ExecStart=/etc/init.d/after.local (code=exited, status=0/SUCCESS)
Main PID: 2860 (code=exited, status=0/SUCCESS)
May 24 03:52:14 kerneltalks systemd[1]: Started /etc/init.d/after.local Compatibility.
May 24 03:52:15 kerneltalks after.local[2860]: I love KernelTalks
If you observe the above output, the last line shows the output of the command we configured in /etc/init.d/after.local! Alternatively, you can check the syslog file /var/log/messages as well to see the same logs.
So it was a successful run.
Run script or command before server shutdown
To run a script or command before the server initiates shutdown, you need to specify it in /etc/init.d/halt.local. A typical vanilla /etc/init.d/halt.local looks like below –
root@kerneltalks # cat /etc/init.d/halt.local
#! /bin/sh
#
# Copyright (c) 2002 SuSE Linux AG Nuernberg, Germany. All rights reserved.
#
# Author: Werner Fink, 1998
# Burchard Steinbild, 1998
#
# /etc/init.d/halt.local
#
# script with local commands to be executed from init on system shutdown
#
# Here you should add things, that should happen directly before shuting
# down.
#
I added the below command at the end of this file.
echo "I love KernelTalks"
To make sure this file is picked up for execution before shutdown, the halt.local service should be running. Check whether the service is running, and if not, start it.
# systemctl enable halt.local
halt.local.service is not a native service, redirecting to systemd-sysv-install
Executing /usr/lib/systemd/systemd-sysv-install enable halt.local
# systemctl start halt.local
# systemctl status halt.local
● halt.local.service
Loaded: loaded (/etc/init.d/halt.local; bad; vendor preset: disabled)
Active: active (exited) since Thu 2018-05-24 04:20:18 UTC; 11s ago
Docs: man:systemd-sysv-generator(8)
Process: 3074 ExecStart=/etc/init.d/halt.local start (code=exited, status=0/SUCCESS)
May 24 04:20:18 kerneltalks systemd[1]: Starting halt.local.service...
Then to test it, I shut down the machine. After booting it back up, check the logs to confirm the command ran when the system was shut down.
# cat /var/log/messages |grep halt
2018-05-24T04:21:12.657033+00:00 kerneltalks systemd[1]: Starting halt.local.service...
2018-05-24T04:21:12.657066+00:00 kerneltalks halt.local[832]: I Love KernelTalks
2018-05-24T04:21:12.657080+00:00 kerneltalks systemd[1]: Started halt.local.service.
# systemctl status halt.local -l
● halt.local.service
Loaded: loaded (/etc/init.d/halt.local; bad; vendor preset: disabled)
Active: active (exited) since Thu 2018-05-24 04:21:12 UTC; 1min 18s ago
Docs: man:systemd-sysv-generator(8)
Process: 832 ExecStart=/etc/init.d/halt.local start (code=exited, status=0/SUCCESS)
Tasks: 0 (limit: 512)
May 24 04:21:12 kerneltalks systemd[1]: Starting halt.local.service...
May 24 04:21:12 kerneltalks halt.local[832]: I Love KernelTalks
May 24 04:21:12 kerneltalks systemd[1]: Started halt.local.service.
That's it. You can see our echo message printed in the logs, which indicates the commands ran successfully before shutdown.
In this way, you can configure application start/stop commands in SUSE Linux to run after boot and before shutdown of the server, and you can likewise schedule scripts to execute before shutdown and after boot of a SUSE Linux server.
Learn why ps output shows UID instead of username.
One of our readers asked me:
I see userid in place of the username in ps -ef command output, please explain.
In this article, we will see why ps output sometimes shows the UID instead of the username. Some recent Linux distributions, like RHEL 7, show a cropped username ending with a + sign. Let's see the reason why ps doesn't show the username.
In normal ps -ef output, the first column is the username that owns that particular process. Sometimes, though, you see output like below –
kernelt+ 1354 1335 0 17:50 pts/0 00:00:00 top
OR
1001 1354 1335 0 17:50 pts/0 00:00:00 top
where the username in the ps output is numeric, or a cropped username ending with +.
This is because ps -ef output restricts the username field to 8 characters. If a username is longer than 8 characters, ps displays the UID or a cropped version of the name instead. Here we have the kerneltalks user on our server.
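If you want the full username anyway, you can widen the user column yourself with the -o output format option of ps (the width 20 below is an arbitrary choice):

```shell
# Ask ps for a 20-character user column instead of the default 8
ps -eo user:20,pid,ppid,tty,time,cmd | head -5
```

With the wider column, the kerneltalks user shows up in full instead of as kernelt+ or 1001.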
Learn how to change a UID or GID safely in Linux. Also learn how to switch UIDs between two users and GIDs between two groups without impacting the ownership of the files they own.
In this article, we will walk you through changing the UID or GID of existing users or groups without affecting the ownership of files owned by them. Later, we also explain how to switch GIDs between two groups and UIDs between two users on the system, again without affecting file ownership.
Let’s start with changing UID or GID on the system.
Current scenario:
User shrikant with UID 1001
Group sysadmin with GID 2001
Expected scenario:
User shrikant with UID 3001
Group sysadmin with GID 4001
Changing a GID or UID is simple using the groupmod or usermod command, but keep in mind that after changing a UID or GID you need to change the ownership of all files owned by them manually, since file ownership is known to the kernel by UID and GID, not by username.
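The ID changes themselves are one command each; a sketch for our example user and group:

```shell
usermod -u 3001 shrikant    # change UID of user shrikant from 1001 to 3001
groupmod -g 4001 sysadmin   # change GID of group sysadmin from 2001 to 4001
```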
Now, search for and change the ownership of all files owned by this user or group with a for loop –
root@kerneltalks # for i in `find / -user 1001`; do chown 3001 $i; done
root@kerneltalks # for i in `find / -group 2001`; do chgrp 4001 $i; done
OR
root@kerneltalks # find / -user 1001 -exec chown -h shrikant {} \;
root@kerneltalks # find / -group 2001 -exec chgrp -h sysadmin {} \;
That’s it. You have safely changed UID and GID on your system without affecting any file ownership owned by them!
How to switch GID of two groups
Current scenario:
Group sysadmin with GID 1111
Group oracle with GID 2222
Expected scenario:
Group sysadmin with GID 2222
Group oracle with GID 1111
In this situation, we need one intermediate GID that is currently not in use on the system. Check the /etc/group file and select a GID XXXX that is not present in the file. In our example, we take 9999 as the intermediate GID.
Now, the process is simple –
Change sysadmin GID to 9999
Find and change the group of all files owned by GID 1111 to sysadmin
Change oracle GID to 1111
Find and change the group of all files owned by GID 2222 to oracle
Change sysadmin GID to 2222
Find and change the group of all files owned by GID 9999 to sysadmin
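The six steps above can be sketched as the following command sequence (assuming GID 9999 is unused, as in our example):

```shell
groupmod -g 9999 sysadmin                         # step 1: park sysadmin on the intermediate GID
find / -group 1111 -exec chgrp -h sysadmin {} \;  # step 2: re-own files still carrying GID 1111
groupmod -g 1111 oracle                           # step 3: oracle takes GID 1111
find / -group 2222 -exec chgrp -h oracle {} \;    # step 4: re-own files still carrying GID 2222
groupmod -g 2222 sysadmin                         # step 5: sysadmin takes GID 2222
find / -group 9999 -exec chgrp -h sysadmin {} \;  # step 6: re-own files parked on the intermediate GID
```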
Learn how to safely remove a disk from LVM. This is useful when you need to free up disks from a volume group to re-use elsewhere, or to replace a faulty disk.
This article serves as a solution for the below questions:
How to safely remove the disk from LVM
How to remove the disk from VG online
How to copy data from one disk to other at the physical level
How to replace a faulty disk in LVM online
How to move physical extents from one disk to another
How to free up disk from VG to shrink VG size
How to safely reduce VG
We have a volume group named vg01 with a 20M logical volume created in it, mounted on the /mydata mount point. Check the lsblk output below –
root@kerneltalks # lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 10G 0 disk
├─xvda1 202:1 0 1M 0 part
└─xvda2 202:2 0 10G 0 part /
xvdf 202:80 0 1G 0 disk
└─vg01-lvol1 253:0 0 20M 0 lvm /mydata
Now, attach a new disk of the same or bigger size than disk /dev/xvdf. Identify the new disk on the system by running the lsblk command again and comparing the output to the previous one.
root@kerneltalks # lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 10G 0 disk
├─xvda1 202:1 0 1M 0 part
└─xvda2 202:2 0 10G 0 part /
xvdf 202:80 0 1G 0 disk
└─vg01-lvol1 253:0 0 20M 0 lvm /mydata
xvdg 202:96 0 1G 0 disk
You can see the new disk has been identified as /dev/xvdg. Now we will add this disk to the current VG vg01. This can be done using the vgextend command. Obviously, before using the disk in LVM, you need to run pvcreate on it.
Observe the above output. Since we carved the 20M logical volume out of disk /dev/xvdf, it has 20M less free space. The new disk /dev/xvdg is completely free.
Now we need to move the physical extents from disk xvdf to xvdg. pvmove is the command used to achieve this. You just need to supply the disk name from which the PEs should be moved out. The command moves the PEs off that disk and writes them to all available disks in the same volume group. In our case, only one other disk is available to receive the PEs.
Progress is shown periodically. If the operation is interrupted for any reason, the already-moved PEs remain on the destination disks and the un-moved PEs remain on the source disk. The operation can be resumed by issuing the same command again; it will then move the remaining PEs off the source disk.
In the above command, pvmove runs in the background. Normal console output is redirected to the normal.log file in the current working directory, whereas errors are redirected to the error.log file.
Now if you check the pvs output again, you will find that all space on disk xvdf is free, which means it is not being used to store any data in that VG. This ensures you can remove the disk without any issues.
Before removing/detaching the disk from the server, you need to remove it from LVM. You do this by reducing the VG, opting that disk out.
root@kerneltalks # vgreduce vg01 /dev/xvdf
Removed "/dev/xvdf" from volume group "vg01"
Now disk xvdf can be removed/detached from server safely.
A few useful switches of pvmove:
Verbose mode prints more detailed information about the operation. It can be invoked using the -v switch.
root@kerneltalks # pvmove -v /dev/xvdf
Cluster mirror log daemon is not running.
Wiping internal VG cache
Wiping cache of LVM-capable devices
Archiving volume group "vg01" metadata (seqno 17).
Creating logical volume pvmove0
activation/volume_list configuration setting not defined: Checking only host tags for vg01/lvol1.
Moving 5 extents of logical volume vg01/lvol1.
activation/volume_list configuration setting not defined: Checking only host tags for vg01/lvol1.
Creating vg01-pvmove0
Loading table for vg01-pvmove0 (253:1).
Loading table for vg01-lvol1 (253:0).
Suspending vg01-lvol1 (253:0) with device flush
Resuming vg01-pvmove0 (253:1).
Resuming vg01-lvol1 (253:0).
Creating volume group backup "/etc/lvm/backup/vg01" (seqno 18).
activation/volume_list configuration setting not defined: Checking only host tags for vg01/pvmove0.
Checking progress before waiting every 15 seconds.
/dev/xvdf: Moved: 0.00%
/dev/xvdf: Moved: 100.00%
Polling finished successfully.
The interval at which the command updates its progress can be changed. Use the -i switch, followed by a number of seconds, to get progress updates at a user-defined interval.
How to guide to boot Suse Linux from old kernel after kernel upgrade.
This article is a how-to guide for booting a SUSE Linux system from the previous kernel after a kernel upgrade. Normally, a Linux distribution like Red Hat gives you the option to just change the kernel boot priority in /etc/grub.conf and reboot into the kernel of your choice, but in SUSE Linux we do not have that option. So the question is how to boot into the old kernel once you have upgraded.
You can boot into the older kernel using the below method. I explain the kernel upgrade first, and then how to uninstall the update so that you boot from the older kernel. This is a kind of kernel-upgrade rollback in SUSE Linux.
1. Upgrade kernel in Suse Linux
First, check and confirm whether your SUSE installation supports multiversion kernels. Open /etc/zypp/zypp.conf and make sure the below-mentioned line is not commented out. If there is a # at the beginning of it, remove it.
multiversion = provides:multiversion(kernel)
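You can uncomment that line non-interactively with sed. The below is a sketch that first demonstrates the edit on a scratch file; apply the same sed expression to /etc/zypp/zypp.conf on your server (back it up first):

```shell
# Demonstrate on a scratch file containing the commented line
printf '# multiversion = provides:multiversion(kernel)\n' > /tmp/zypp-demo.conf
sed -i 's/^# *multiversion/multiversion/' /tmp/zypp-demo.conf
grep '^multiversion = provides' /tmp/zypp-demo.conf

# On the real server:
# cp /etc/zypp/zypp.conf /etc/zypp/zypp.conf.bak
# sed -i 's/^# *multiversion/multiversion/' /etc/zypp/zypp.conf
```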
There are ways to control how many old kernel versions the system keeps. We won't go through them here; you can find more details about it here.
Once you have confirmed multiversion is active, go ahead with the kernel upgrade. If it's not activated, zypper will auto-delete the old kernel and you won't be able to use it.
root@kerneltalks # uname -a
Linux kerneltalks 4.4.114-94.11-default #1 SMP Thu Feb 1 19:28:26 UTC 2018 (4309ff9) x86_64 x86_64 x86_64 GNU/Linux
Install the new kernel version using zypper. Make sure you are installing a new kernel package, not updating your current one.
root@kerneltalks # zypper in kernel-default
Refreshing service 'SMT-http_smt-ec2_susecloud_net'.
Refreshing service 'cloud_update'.
Loading repository data...
Reading installed packages...
Resolving package dependencies...
The following 5 NEW packages are going to be installed:
crash-kmp-default crda kernel-default-4.4.120-94.17.1 kernel-firmware
wireless-regdb
5 new packages to install.
Overall download size: 81.3 MiB. Already cached: 0 B. After the operation,
additional 358.3 MiB will be used.
Continue? [y/n/...? shows all options] (y): y
Retrieving package kernel-default-4.4.120-94.17.1.x86_64
(1/5), 38.6 MiB (167.2 MiB unpacked)
Retrieving: kernel-default-4.4.120-94.17.1.x86_64.rpm ........[done (7.1 MiB/s)]
Retrieving package kernel-firmware-20170530-21.19.1.noarch
(2/5), 42.5 MiB (191.1 MiB unpacked)
Retrieving: kernel-firmware-20170530-21.19.1.noarch.rpm .....[done (18.2 MiB/s)]
Retrieving package wireless-regdb-2017.12.23-4.3.1.noarch
(3/5), 14.1 KiB ( 13.0 KiB unpacked)
Retrieving: wireless-regdb-2017.12.23-4.3.1.noarch.rpm ...................[done]
Retrieving package crash-kmp-default-7.1.8_k4.4.92_6.30-4.6.2.x86_64
(4/5), 116.8 KiB ( 7.8 KiB unpacked)
Retrieving: crash-kmp-default-7.1.8_k4.4.92_6.30-4.6.2.x86_64.rpm ........[done]
Retrieving package crda-1.1.3-4.2.1.x86_64 (5/5), 14.4 KiB ( 34.5 KiB unpacked)
Retrieving: crda-1.1.3-4.2.1.x86_64.rpm ..................................[done]
Checking for file conflicts: .............................................[done]
(1/5) Installing: kernel-default-4.4.120-94.17.1.x86_64 ..................[done]
Additional rpm output:
Creating initrd: /boot/initrd-4.4.120-94.17-default
dracut: Executing: /usr/bin/dracut --logfile /var/log/YaST2/mkinitrd.log --force /boot/initrd-4.4.120-94.17-default 4.4.120-94.17-default
dracut: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
dracut: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
dracut: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
dracut: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
dracut: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
dracut: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
dracut: *** Including module: bash ***
dracut: *** Including module: systemd ***
dracut: *** Including module: systemd-initrd ***
dracut: *** Including module: i18n ***
dracut: No KEYMAP configured.
dracut: *** Including module: xen-tools-domU ***
dracut: *** Including module: kernel-modules ***
dracut: *** Including module: rootfs-block ***
dracut: *** Including module: suse-xfs ***
dracut: *** Including module: terminfo ***
dracut: *** Including module: udev-rules ***
dracut: Skipping udev rule: 40-redhat.rules
dracut: Skipping udev rule: 50-firmware.rules
dracut: Skipping udev rule: 50-udev.rules
dracut: Skipping udev rule: 91-permissions.rules
dracut: Skipping udev rule: 80-drivers-modprobe.rules
dracut: *** Including module: dracut-systemd ***
dracut: *** Including module: haveged ***
dracut: *** Including module: usrmount ***
dracut: *** Including module: base ***
dracut: *** Including module: fs-lib ***
dracut: *** Including module: shutdown ***
dracut: *** Including module: suse ***
dracut: *** Including modules done ***
dracut: *** Installing kernel module dependencies and firmware ***
dracut: *** Installing kernel module dependencies and firmware done ***
dracut: *** Resolving executable dependencies ***
dracut: *** Resolving executable dependencies done***
dracut: *** Hardlinking files ***
dracut: *** Hardlinking files done ***
dracut: *** Stripping files ***
dracut: *** Stripping files done ***
dracut: *** Generating early-microcode cpio image ***
dracut: *** Store current command line parameters ***
dracut: Stored kernel commandline:
dracut: root=UUID=26fa33a2-ad40-4a85-a495-402aca6a2127 rootfstype=ext4 rootflags=rw,relatime,data=ordered
dracut: *** Creating image file '/boot/initrd-4.4.120-94.17-default' ***
dracut: *** Creating initramfs image file '/boot/initrd-4.4.120-94.17-default' done ***
(2/5) Installing: kernel-firmware-20170530-21.19.1.noarch ................[done]
(3/5) Installing: wireless-regdb-2017.12.23-4.3.1.noarch .................[done]
(4/5) Installing: crash-kmp-default-7.1.8_k4.4.92_6.30-4.6.2.x86_64 ......[done]
(5/5) Installing: crda-1.1.3-4.2.1.x86_64 ................................[done]
Output of kernel-firmware-20170530-21.19.1.noarch.rpm %posttrans script:
Creating initrd: /boot/initrd-4.4.114-94.11-default
dracut: Executing: /usr/bin/dracut --logfile /var/log/YaST2/mkinitrd.log --f orce /boot/initrd-4.4.114-94.11-default 4.4.114-94.11-default
dracut: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
dracut: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
dracut: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
dracut: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
dracut: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
dracut: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
dracut: *** Including module: bash ***
dracut: *** Including module: systemd ***
dracut: *** Including module: systemd-initrd ***
dracut: *** Including module: i18n ***
dracut: No KEYMAP configured.
dracut: *** Including module: xen-tools-domU ***
dracut: *** Including module: kernel-modules ***
dracut: *** Including module: rootfs-block ***
dracut: *** Including module: suse-xfs ***
dracut: *** Including module: terminfo ***
dracut: *** Including module: udev-rules ***
dracut: Skipping udev rule: 40-redhat.rules
dracut: Skipping udev rule: 50-firmware.rules
dracut: Skipping udev rule: 50-udev.rules
dracut: Skipping udev rule: 91-permissions.rules
dracut: Skipping udev rule: 80-drivers-modprobe.rules
dracut: *** Including module: dracut-systemd ***
dracut: *** Including module: haveged ***
dracut: *** Including module: usrmount ***
dracut: *** Including module: base ***
dracut: *** Including module: fs-lib ***
dracut: *** Including module: shutdown ***
dracut: *** Including module: suse ***
dracut: *** Including modules done ***
dracut: *** Installing kernel module dependencies and firmware ***
dracut: *** Installing kernel module dependencies and firmware done ***
dracut: *** Resolving executable dependencies ***
dracut: *** Resolving executable dependencies done***
dracut: *** Hardlinking files ***
dracut: *** Hardlinking files done ***
dracut: *** Stripping files ***
dracut: *** Stripping files done ***
dracut: *** Generating early-microcode cpio image ***
dracut: *** Store current command line parameters ***
dracut: Stored kernel commandline:
dracut: root=UUID=26fa33a2-ad40-4a85-a495-402aca6a2127 rootfstype=ext4 rootflags=rw,relatime,data=ordered
dracut: *** Creating image file '/boot/initrd-4.4.114-94.11-default' ***
dracut: *** Creating initramfs image file '/boot/initrd-4.4.114-94.11-default' done ***
Creating initrd: /boot/initrd-4.4.120-94.17-default
dracut: Executing: /usr/bin/dracut --logfile /var/log/YaST2/mkinitrd.log --f orce /boot/initrd-4.4.120-94.17-default 4.4.120-94.17-default
dracut: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
dracut: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
dracut: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
dracut: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
dracut: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
dracut: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
dracut: *** Including module: bash ***
dracut: *** Including module: systemd ***
dracut: *** Including module: systemd-initrd ***
dracut: *** Including module: i18n ***
dracut: No KEYMAP configured.
dracut: *** Including module: xen-tools-domU ***
dracut: *** Including module: kernel-modules ***
dracut: *** Including module: rootfs-block ***
dracut: *** Including module: suse-xfs ***
dracut: *** Including module: terminfo ***
dracut: *** Including module: udev-rules ***
dracut: Skipping udev rule: 40-redhat.rules
dracut: Skipping udev rule: 50-firmware.rules
dracut: Skipping udev rule: 50-udev.rules
dracut: Skipping udev rule: 91-permissions.rules
dracut: Skipping udev rule: 80-drivers-modprobe.rules
dracut: *** Including module: dracut-systemd ***
dracut: *** Including module: haveged ***
dracut: *** Including module: usrmount ***
dracut: *** Including module: base ***
dracut: *** Including module: fs-lib ***
dracut: *** Including module: shutdown ***
dracut: *** Including module: suse ***
dracut: *** Including modules done ***
dracut: *** Installing kernel module dependencies and firmware ***
dracut: *** Installing kernel module dependencies and firmware done ***
dracut: *** Resolving executable dependencies ***
dracut: *** Resolving executable dependencies done***
dracut: *** Hardlinking files ***
dracut: *** Hardlinking files done ***
dracut: *** Stripping files ***
dracut: *** Stripping files done ***
dracut: *** Generating early-microcode cpio image ***
dracut: *** Store current command line parameters ***
dracut: Stored kernel commandline:
dracut: root=UUID=26fa33a2-ad40-4a85-a495-402aca6a2127 rootfstype=ext4 rootflags=rw,relatime,data=ordered
dracut: *** Creating image file '/boot/initrd-4.4.120-94.17-default' ***
dracut: *** Creating initramfs image file '/boot/initrd-4.4.120-94.17-default' done ***
Reboot the system and you will see it comes up with the newly installed kernel.
root@kerneltalks # uname -a
Linux kerneltalks 4.4.120-94.17-default #1 SMP Wed Mar 14 17:23:00 UTC 2018 (cf3a7bb) x86_64 x86_64 x86_64 GNU/Linux
Now check all the installed kernel packages on your system using –
root@kerneltalks # zypper se -si 'kernel*'
Refreshing service 'SMT-http_smt-ec2_susecloud_net'.
Refreshing service 'cloud_update'.
Loading repository data...
Reading installed packages...
S | Name | Type | Version | Arch | Repository
---+-----------------+---------+------------------+--------+-------------------
i+ | kernel-default | package | 4.4.120-94.17.1 | x86_64 | SLES12-SP3-Updates
i+ | kernel-default | package | 4.4.114-94.11.3 | x86_64 | SLES12-SP3-Updates
i | kernel-firmware | package | 20170530-21.19.1 | noarch | SLES12-SP3-Updates
Here, you can see there are two kernels installed on the system. The old one is 4.4.114-94.11.3 and the new one is 4.4.120-94.17.1, from which the system is currently booted.
2. Boot from the old kernel in SUSE Linux
For Suse with GRUB2
Now, if you want to boot the system from the old kernel 4.4.114-94.11.3 without uninstalling the new kernel, follow the steps below.
Make a copy of the /etc/default/grub file as a backup, and then edit it –
root@kerneltalks # cp /etc/default/grub /etc/default/grub.backup
root@kerneltalks # vi /etc/default/grub
Look for GRUB_DEFAULT=0 and change the number to the old kernel's menu number. The old kernel's menu number can be found in /boot/grub2/grub.cfg
Open /boot/grub2/grub.cfg and look for the menuentry lines. You will see the different kernel entries in it, numbered starting from 0. Check and choose the menu number of your old kernel.
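An awk one-liner can list the entries along with their numbers. The sketch below runs against a hypothetical sample file so it is safe to try; on a real system, point it at /boot/grub2/grub.cfg instead:

```shell
# Hypothetical sample of grub.cfg menu entries
cat > /tmp/grub.cfg.sample <<'EOF'
menuentry 'SLES 12-SP3, with Linux 4.4.120-94.17-default' {
}
menuentry 'SLES 12-SP3, with Linux 4.4.114-94.11-default' {
}
EOF

# Print each menuentry with its 0-based index (the number GRUB_DEFAULT uses)
awk -F"'" '/^menuentry/ {print i++ " : " $2}' /tmp/grub.cfg.sample
```

With the two sample entries above, this prints the newer kernel as entry 0 and the older kernel as entry 1.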
After editing the /etc/default/grub file, you need to re-create /boot/grub2/grub.cfg. You can do it with the below command –
root@kerneltalks # grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.4.120-94.17-default
Found initrd image: /boot/initrd-4.4.120-94.17-default
Found linux image: /boot/vmlinuz-4.4.114-94.11-default
Found initrd image: /boot/initrd-4.4.114-94.11-default
done
Once done, reboot the system. That's it. Your system boots with the old kernel while the new kernel remains installed on the server.
For Suse with GRUB
Edit /boot/grub/grub.conf, which is also a link to /boot/grub/menu.lst. Look for the parameter default 0 and change the number 0 to your desired kernel menu number.
You can see the kernel list displayed later in the same file. Remember, the numbering starts at 0, so count down to your old kernel's entry and use that number for the default parameter.
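A hypothetical menu.lst excerpt illustrates the idea; with the entries below, default 1 boots the older 4.4.114 kernel (the kernel titles, paths, and root device here are placeholders, not from an actual system):

```
# /boot/grub/menu.lst (excerpt) - "default" is a 0-based index
default 1
timeout 8

title SLES 12 - 4.4.120-94.17-default    # entry 0
    kernel /boot/vmlinuz-4.4.120-94.17-default root=/dev/sda2
    initrd /boot/initrd-4.4.120-94.17-default

title SLES 12 - 4.4.114-94.11-default    # entry 1
    kernel /boot/vmlinuz-4.4.114-94.11-default root=/dev/sda2
    initrd /boot/initrd-4.4.114-94.11-default
```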
Save the file and reboot the system. You will be booted with the old kernel.
3. Rollback to old kernel in SUSE Linux
Now, if you want to roll the system back to the old kernel 4.4.114-94.11.3, remove the newly installed kernel 4.4.120-94.17.1. You need to give the kernel name as <package_name>-<package_version>; you can get the version from the above command output. This way, we are downgrading the kernel in SUSE Linux.
root@kerneltalks # zypper rm kernel-default-4.4.120-94.17.1
Loading repository data...
Reading installed packages...
Resolving package dependencies...
The following package is going to be REMOVED:
kernel-default-4.4.120-94.17.1
1 package to remove.
After the operation, 167.2 MiB will be freed.
Continue? [y/n/...? shows all options] (y): y
(1/1) Removing kernel-default-4.4.120-94.17.1.x86_64 .............................................................................................................[done]
There are some running programs that might use files deleted by recent upgrade. You may wish to check and restart some of them. Run 'zypper ps -s' to list these programs.
Now, as per the warning, let's check which processes are using the deleted files.
root@kerneltalks # zypper ps -s
The following running processes use deleted files:
PID | PPID | UID | User | Command | Service
----+------+-----+------+---------------+--------------
486 | 1 | 0 | root | systemd-udevd | systemd-udevd
You may wish to restart these processes.
See 'man zypper' for information about the meaning of values in the above table.
Let's reboot the system and check the kernel version.
root@kerneltalks # uname -a
Linux kerneltalks 4.4.114-94.11-default #1 SMP Thu Feb 1 19:28:26 UTC 2018 (4309ff9) x86_64 x86_64 x86_64 GNU/Linux
You can see the system is booted with your old kernel 4.4.114-94.11.3!
Now, check the installed kernel packages and you can see the newer kernel package is no longer installed on the system.
root@kerneltalks # zypper se -si 'kernel*'
Refreshing service 'SMT-http_smt-ec2_susecloud_net'.
Refreshing service 'cloud_update'.
Loading repository data...
Reading installed packages...
S | Name | Type | Version | Arch | Repository
---+-----------------+---------+------------------+--------+-------------------
i+ | kernel-default | package | 4.4.114-94.11.3 | x86_64 | SLES12-SP3-Updates
i | kernel-firmware | package | 20170530-21.19.1 | noarch | SLES12-SP3-Updates
If you have another (command-line) method to boot into the older kernel, please share it in the comments below.
Solution for VMware tools not running after Linux kernel upgrade in guest VM
In this article, we will discuss solutions when VMware tools are not running after the Linux kernel upgrade.
Cause :
After a kernel upgrade in the guest Linux VM, you may see that VMware tools are not running. This is because some VMware tools modules run using kernel library files. After a kernel upgrade, they point to different library files than the ones used by the running kernel and hence fail to start.
Solution :
The issue can be resolved by reconfiguring VMware tools after the kernel upgrade. This is done on the fly and does not require downtime.
Log in to the guest Linux operating system using the root account and run the reconfiguration script /usr/bin/vmware-config-tools.pl
You will be asked to make a few choices. If you know about these modules, choose your answers according to your requirements; otherwise, just hit Enter to accept the defaults. See the sample output below –
root@kerneltalks # /usr/bin/vmware-config-tools.pl
Initializing...
Making sure services for VMware Tools are stopped.
Found a compatible pre-built module for vmci. Installing it...
Found a compatible pre-built module for vsock. Installing it...
The module vmxnet3 has already been installed on this system by another
installer or package and will not be modified by this installer.
The module pvscsi has already been installed on this system by another
installer or package and will not be modified by this installer.
The module vmmemctl has already been installed on this system by another
installer or package and will not be modified by this installer.
The VMware Host-Guest Filesystem allows for shared folders between the host OS
and the guest OS in a Fusion or Workstation virtual environment. Do you wish
to enable this feature? [no]
Found a compatible pre-built module for vmxnet. Installing it...
The vmblock enables dragging or copying files between host and guest in a
Fusion or Workstation virtual environment. Do you wish to enable this feature?
[no]
VMware automatic kernel modules enables automatic building and installation of
VMware kernel modules at boot that are not already present. This feature can
be enabled/disabled by re-running vmware-config-tools.pl.
Would you like to enable VMware automatic kernel modules?
[no]
Do you want to enable Guest Authentication (vgauth)? Enabling vgauth is needed
if you want to enable Common Agent (caf). [yes]
Do you want to enable Common Agent (caf)? [yes]
No X install found.
Creating a new initrd boot image for the kernel.
NOTE: both /etc/vmware-tools/GuestProxyData/server/key.pem and
/etc/vmware-tools/GuestProxyData/server/cert.pem already exist.
They are not generated again. To regenerate them by force,
use the "vmware-guestproxycerttool -g -f" command.
vmware-tools start/running
The configuration of VMware Tools 10.0.6 build-3560309 for Linux for this
running kernel completed successfully.
You must restart your X session before any mouse or graphics changes take
effect.
You can now run VMware Tools by invoking "/usr/bin/vmware-toolbox-cmd" from the
command line.
To enable advanced X features (e.g., guest resolution fit, drag and drop, and
file and text copy/paste), you will need to do one (or more) of the following:
1. Manually start /usr/bin/vmware-user
2. Log out and log back into your desktop session; and,
3. Restart your X session.
Enjoy,
--the VMware team
If you are ok with accepting the defaults and want the script to run non-interactively, run it with the -d (default) switch.
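For example (a sketch only; the script path is the one used earlier in this article, and the command must be run as root inside the guest VM):

```shell
# Reconfigure VMware tools non-interactively, accepting all defaults
/usr/bin/vmware-config-tools.pl -d
```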
Failed to mount cd:///?devices=/dev/disk/by-id/ata-VMware_Virtual_IDE_CDROM_Drive_10000000000000000001 on /var/adm/mount/AP_0xFre2nn: Mounting media failed (mount: no medium found on /dev/sr0)
Detailed error snippet below :
# zypper in salt-minion
Loading repository data...
Reading installed packages...
Resolving package dependencies...
The following 16 NEW packages are going to be installed:
libzmq3 python-Jinja2 python-MarkupSafe python-PyYAML python-backports.ssl_match_hostname python-futures python-msgpack-python python-netaddr python-psutil
python-pycrypto python-pyzmq python-requests python-simplejson python-tornado salt salt-minion
The following 2 recommended packages were automatically selected:
python-futures python-netaddr
The following 15 packages are not supported by their vendor:
libzmq3 python-Jinja2 python-MarkupSafe python-PyYAML python-backports.ssl_match_hostname python-futures python-msgpack-python python-psutil python-pycrypto
python-pyzmq python-requests python-simplejson python-tornado salt salt-minion
16 new packages to install.
Overall download size: 9.0 MiB. Already cached: 0 B. After the operation, additional 48.0 MiB will be used.
Continue? [y/n/? shows all options] (y): y
Retrieving package python-netaddr-0.7.10-8.5.noarch (1/16), 896.9 KiB ( 4.2 MiB unpacked)
Failed to mount cd:///?devices=/dev/disk/by-id/ata-VMware_Virtual_IDE_CDROM_Drive_10000000000000000001 on /var/adm/mount/AP_0xFre2nn: Mounting media failed (mount: no medium found on /dev/sr0)
Please insert medium [SLES12-SP1-12.1-0] #1 and type 'y' to continue or 'n' to cancel the operation. [yes/no] (no): n
Problem occured during or after installation or removal of packages:
Installation aborted by user
Please see the above error message for a hint.
Cause :
This error is nothing but zypper trying to read repo information from a CD/DVD. One of the zypper repos is configured to look for mountable media, so zypper is doing its job. But that media is currently not connected to the system, and hence zypper fails to read details from it.
Solution :
List your zypper repos using the command:
# zypper lr --details
# | Alias | Name | Enabled | GPG Check | Refresh | Priority | Type | URI | Service
--+----------------------+----------------------+---------+-----------+---------+----------+--------+----------------------------------------------------------------------------------------+--------
1 | SLES12-SP1-12.1-0 | SLES12-SP1-12.1-0 | Yes | (r ) Yes | No | 99 | yast2 | cd:///?devices=/dev/disk/by-id/ata-VMware_Virtual_IDE_CDROM_Drive_10000000000000000001 |
2 | sles12-sp1-bootstrap | sles12-sp1-bootstrap | Yes | ( p) Yes | No | 99 | rpm-md | http://repo.kerneltalks.com/pub/repositories/sle/12/1/bootstrap |
Here you can see that the first repo's URI points to a CD. Now you can either mount the CD or disable that repo for the time being and move ahead with the installation.
Use the below command to disable the CD repo. Make sure you enter the correct repo number in the command (here it's 1).
# zypper mr --disable 1
Repository 'SLES12-SP1-12.1-0' has been successfully disabled.
Once the CD/DVD repo is disabled successfully, re-run the zypper installation command and it will execute without any errors!