A quick walkthrough on how to create a new ECS cluster
In our previous article, we got acquainted with the Amazon ECS service theoretically. In this article, we will walk you through the steps to create a new ECS cluster.
An ECS cluster is a logical grouping of ECS instances on which containerized applications can be orchestrated.
This article uses the below design to provision the ECS cluster.
Now, on the right-hand side, click on the Create Cluster button.
Here you should choose the cluster template for the new cluster.
The three templates available here are:
Networking only
No ECS instances.
All tasks will be launched using the Fargate launch type!
EC2 Linux + Networking
Deploy with Linux ECS instances
EC2 and Fargate both launch types available for tasks
EC2 Windows + Networking
Deploy with Windows ECS instances
EC2 and Fargate both launch types available for tasks
Most of the time, EC2 Linux + Networking should suffice. Select the appropriate template and click the Next Step button.
On the cluster configuration screen, various details can be filled in.
Cluster name
Create an empty cluster is an option to create clusters with no ECS instances.
Then, instance configurations should be defined.
Under instance configurations, choose:
Provisioning model: Choose billing type of instances (on-demand or spot)
Number of instances
EC2 AMI ID. The dropdown allows choosing Amazon Linux AMI.
Root EBS size
Key Pair: Needed if you want to log into ECS instances. If not, choose None.
The next section allows network configuration.
By default, the setup creates a new VPC to be used for this ECS cluster. But if you wish to use an existing VPC, choose it from the dropdown.
In my case, I already have a custom VPC created, so I will pick it from the dropdown. While using an existing VPC, you need to choose which subnets should be used to place container instances and which security group should be applied to them.
I used my existing VPC along with 2 private subnets in different AZs and a security group that allows SSH and HTTP traffic to instances, since I will be testing webserver containers on this cluster. Your SG should allow the ports you will be using in your containerized applications, and it should allow traffic from intended sources only.
Finally, the IAM role which will be attached to ECS instances needs to be defined.
Tags can be applied to instances here. Also, if container-level monitoring needs to be enabled, it can be done here. Click Create and the cluster will be created in a few minutes.
ECS uses CloudFormation in the backend to deploy the whole stack. This can be verified on the Launch status screen or in the CloudFormation service dashboard as well.
Now, click on the View Cluster button and the new ECS cluster details will be presented on screen.
Both ECS instances are registered to the cluster at this stage as well. Those cluster ECS instances can be viewed from the EC2 dashboard too.
These instances are named automatically by ECS. And if you observe, they are deployed in different AZs (supplied at cluster creation) and assigned the SG as well.
So the ECS cluster is up and ready, with both ECS instances registered to the cluster and ready to run tasks!
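If you prefer the command line, the same verification can be done with the AWS CLI. A minimal sketch, assuming the CLI is configured with proper credentials; the cluster name my-ecs-cluster is a placeholder –
# List clusters in the account
aws ecs list-clusters
# Show details of the new cluster
aws ecs describe-clusters --clusters my-ecs-cluster
# Confirm both container instances are registered
aws ecs list-container-instances --cluster my-ecs-cluster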
Issue: ECS instances not registering in ECS cluster
One of the common issues seen at this stage is that although the EC2 instances are running fine, they do not get registered to the ECS cluster. You do not see them in the ECS Instances tab on the cluster details page.
Cause: This happens when ECS instances have no route to the internet. The ECS agent on the instances needs to reach the ECS public endpoint to register the instance in the ECS cluster. With no route to the internet, they cannot reach the ECS public endpoint and hence cannot register to the cluster.
Solution: If instances are launched in a private subnet, then they should be able to reach the internet using a NAT gateway or an HTTP proxy. Or you can configure VPC endpoints for Amazon ECS and route traffic from instances to ECS without giving them internet access at all.
If instances are launched in a public subnet, then make sure auto-assign public IPv4 address is enabled and the instance is allocated a public IPv4 address. Also make sure the subnet is associated with a routing table that has a route to an Internet Gateway.
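For reference, here is a minimal sketch of creating the ECS interface endpoints with the AWS CLI. All IDs and the region are placeholders; ECS container instances need the ecs-agent, ecs-telemetry, and ecs endpoint services, so repeat the call for each service name –
aws ec2 create-vpc-endpoint --vpc-endpoint-type Interface \
    --vpc-id vpc-0abc123 \
    --service-name com.amazonaws.us-east-1.ecs-agent \
    --subnet-ids subnet-0abc123 subnet-0def456 \
    --security-group-ids sg-0abc123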
An article about Amazon ECS foundational topics for beginners
Amazon ECS stands for Amazon Elastic Container Service. We will walk you through ECS bit by bit to help you understand what ECS is. We will touch on the below topics –
What is ECS?
Use cases for ECS
ECS component concepts
Pricing
What is ECS?
Amazon ECS is a fully managed container orchestration service. It aims to do all the heavy lifting of managing container orchestration clusters for customers, while customers can focus on developing their containerized applications.
If you are new to containers, please read our container articles –
In a nutshell, ECS is Amazon’s own homegrown container orchestration service. If you have learned about Docker Swarm then consider ECS as Amazon’s version of Swarm to manage your containers.
Amazon's other service, Elastic Beanstalk, actually uses ECS in the background to spin up clusters of containers running your desired applications.
Where to use ECS?
In this section, we will see the use cases of Amazon ECS. This service sees use cases mainly in two sectors:
Microservices
Applications following the microservices architecture approach can make the most of ECS! The microservices approach aims at decoupling the design so that the architecture is failure-proof, can be scaled at the service level, etc. These benefits can be leveraged using containers! Containers can be spun up using immutable images, tested locally, and scaled using ECS clusters; each service can be defined using a different task and pipelined using CI/CD.
Batch Jobs
Since containers are easy and quick to spin up and terminate, they are perfect for running batch jobs. Using containers, you can cut down the time it takes to spin up EC2 instances for processing jobs, save time on their termination, and get away from the huge billing associated with them. And all of this is well managed by AWS in the backend when you spin up your containers using ECS. The Fargate launch type in ECS is especially well suited for batch jobs since it doesn't require you to spin up even EC2 container instances. It's like serverless: you pay for the resources you used and the time you used them.
ECS concepts
With a little bit of foundation laid, let's dive into a few ECS concepts. While using ECS, you will come across the below terms –
Clusters
Container Instance
Task definitions
Launch types
Services
Amazon ECR
Cluster
A cluster is a logical grouping of tasks and/or services. These tasks/services run on container instances (in the case of the EC2 launch type), and in that case, the EC2 container instances also come under this logical grouping called a cluster. You can have one or more clusters in your account.
Container instance
When you run a task/service using the EC2 launch type, it runs on EC2 instances. These instances (Linux/Windows) are created when you create clusters. Basically, they are normal EC2 instances with Docker and the ECS agent preinstalled. You can even connect to them like any other EC2 instance using SSH or RDP, and you can view them in the EC2 dashboard of your account as well. Under ECS, they are referred to as container instances.
Task definitions
Task definitions are a collection of settings required to run a container in an Amazon ECS cluster. They have numerous parameters you can configure, including the container definitions. The core of a task definition is the container definition and launch type, where you define how your container should be instantiated. In short, you can visualize it as a container Dockerfile.
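To make this concrete, below is a minimal sketch of registering a task definition via the AWS CLI. The family name, image, memory size, and port mapping are placeholder values, not from this article –
aws ecs register-task-definition \
    --family webserver-task \
    --requires-compatibilities EC2 \
    --container-definitions '[{
        "name": "web",
        "image": "nginx:latest",
        "memory": 512,
        "portMappings": [{"containerPort": 80, "hostPort": 80}]
    }]'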
Launch types
It's the type of compute on which containers are instantiated. Amazon offers 2 launch types –
EC2 Launch type
Runs containers on cluster container instances
You will be billed for EC2 instances, not the container runtimes
Fargate launch type
Runs containers on serverless/managed infra in the backend
You will be billed for the number of resources used and the duration for which they are used
Services
Services are the schedulers responsible for maintaining the desired number of running task containers in a cluster. So container instantiation and termination to match the given conditions are handled by services.
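As a quick illustration, a service that keeps 2 copies of a task running could be created like the below; the cluster, service, and task definition names are placeholders –
aws ecs create-service \
    --cluster my-ecs-cluster \
    --service-name web-service \
    --task-definition webserver-task \
    --desired-count 2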
Amazon ECR
It's the Amazon Elastic Container Registry: your own private container registry hosted on AWS. You can use IAM authentication to control access to ECR, and it can be connected to various apps for CI/CD purposes. Container definitions under tasks can refer to ECR images securely.
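For example, a typical push to ECR looks like the below; the account ID, region, and repository name are placeholders –
# Authenticate Docker to the registry (AWS CLI v2 syntax)
aws ecr get-login-password --region us-east-1 | \
    docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# Tag and push the local image
docker tag myapp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest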
Pricing
ECS is a free service, just like CloudFormation! You will be billed only for the resources you deploy/use while using ECS.
When creating clusters, the container instance type should be selected. As I explained earlier, these are normal EC2 instances that can be viewed in the EC2 dashboard as well. So they will be billed like any other EC2 instances (by instance type and the time for which they are running).
Another bill you can incur is when running containers on the Fargate launch type. It's considered pretty costly compared to the EC2 launch type and hence should be used only for short-running tasks. You will be billed for the number of resources you are using and the duration for which they are being used.
If you are leveraging ECR to maintain your own private container image repository, then ECR charges will apply to your account as well. ECR charges include two components:
Storage: Billed for the total storage used by all of the images
Data transfer: Data in and out bills, i.e. data transferred from/to ECR during image pull/push operations
Conclusion
That's pretty much it for ECS basics. So, if you are working on a self-hosted container environment, it's time to move to ECS and let AWS manage the stuff for you while you concentrate on developing apps in containers!
A quick post on how to forward an SSH key in PuTTY on Windows.
Let's start with some basics about SSH key/agent forwarding. Then we will dive into how to configure it in PuTTY.
What is SSH key/agent forwarding?
Traditionally, we used to have password-based authentication for Linux servers. In this age of cloud, all the Linux servers deployed in the cloud come with key-based authentication by default. Authentication is done using a pair of keys: a private key (with the user) and a public key (stored on the server). So every time you connect to the server, you need to supply your private key for authentication.
If you are using a jump server or bastion host for connecting to servers, then you need to store your private key on that server (jump/bastion) so that it can be used for authentication when connecting to servers. This leaves a security risk of the private key being exposed to or accessed by other users of the jump/bastion host.
In such a scenario, SSH agent forwarding should be used. SSH agent forwarding allows you to forward the SSH key remotely. That means you can authenticate without storing the key on the jump/bastion host! PuTTY takes care of using the key stored on your local computer and forwarding it so that it can be used for remote authentications.
How to configure SSH agent forwarding in Putty?
It can be done by using the utility pageant.exe which comes with PuTTY. Pageant is an SSH authentication agent for PuTTY. It can be downloaded for free from the PuTTY website along with the PuTTY executable.
Now open pageant.exe. It will start in the background. You can click on the Pageant icon in the taskbar and bring it to the foreground. You should see the below screen –
Click on the Add Key button. Browse to your PPK key stored on the local computer and click Open. The key will be added to the database and you should see it in the key list as below –
Now click the Close button. Make sure Pageant is running in the background, and open PuTTY. In the left Category panel, go to Connection > SSH > Auth and select the checkbox next to Allow agent forwarding.
Now you are ready to connect to your jump/bastion host, and from there to the remote Linux machines. You will not be prompted for a key since it's already added to Pageant, and PuTTY makes sure to forward it for further connections!
Below is my test where I connected to my instance in a private subnet without supplying the SSH key in the command.
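In case you want to verify the forwarding yourself, the session roughly looks like the below; the user and private IP are examples –
# On the bastion host: list keys visible through the forwarded agent
ssh-add -l
# Hop to the private instance without supplying any key file
ssh ec2-user@10.0.1.25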
That's all! You can add a number of keys in Pageant and use them without leaving a key footprint on intermediate servers!
Everything you need to know about the bastion host in AWS infrastructure.
In this article, we will touch on the below points in the context of the bastion host:
What is a bastion host?
What is the role of bastion host in AWS infrastructure?
How to deploy and configure a bastion host?
Let's start with an introduction to the bastion host.
What is a bastion host?
A bastion host is a Windows or Linux machine sitting in the public subnet of your AWS infrastructure. It's a machine that is used to securely access the rest of the infrastructure for administration purposes. Since you don't want to expose everything in your infra to the internet, the bastion host does that heavy lifting, hence securing the infrastructure.
As this host is exposed to the internet, it is recommended to implement strong system hardening on this machine. Secure this machine at the OS level with all available hardening techniques, since it is a gateway to your whole infrastructure.
What is the role of bastion host in AWS infrastructure?
As explained above, the bastion host will be used to access the rest of the infrastructure for administrative tasks. Sometimes, cloud newbies treat the bastion host as a way of accessing instances in the private subnet only. But that's not it. One should block direct access (SSH or RDP) to instances in the public subnet as well and allow it only through the bastion host.
This way, one can secure administrative-level access to instances in public and private subnets. And this is the recommended practice. All your instances, no matter which subnet they are in, should be accessible via the bastion host only.
In a nutshell, bastion hosts are used to secure administrative access to instances in private and public subnets.
How to deploy and configure a bastion host?
For this exercise, we will deploy a Linux bastion host in the same architecture which we used while creating our last custom VPC. In the case of a Windows environment, SSH can be replaced with RDP and the Linux bastion can be replaced with a Windows machine. Bastion host deployment and configuration can be summarised as –
Deploy EC2 instance in the public subnet (that’s your bastion host)
Create a new security group which allows SSH traffic from bastion to destination public and private subnets
Attach security group to instances
Let's dive into it.
For step 1, I deployed an Amazon Linux 2 EC2 instance. You can even use a customized AMI which has all the hardening already done, logging enabled for a bastion, etc. But for this exercise, I will be using the normal Amazon Linux AMI. The SG created along with this launch should allow SSH traffic from 0.0.0.0/0. Let's tag this SG as bastion-sg.
Now, it's time to create a custom security group to allow bastion traffic to instances. A custom SG is handy since you can attach it to instances while launching, and you don't need to manually edit instance security groups to allow bastion traffic. In this SG, we allow traffic from the SG of the bastion host, so even if the IP of the bastion host changes in the future (or even if the bastion host gets replaced), we don't have to edit any SG settings anywhere. The only thing you need to keep in mind is that you need to deploy the new bastion host with the existing bastion SG.
On the left navigation plane, click on Security Groups
Now on the security groups page, click on the Create security group button
You will be presented with the below screen :
You need to fill in below details-
Security group name: For identification
Description
VPC: Select your VPC from the dropdown.
Inbound rules: Allow SSH from SG of bastion host (bastion-sg from step 1)
Outbound rule: Keep it default. Allow all traffic.
Tags: optional.
This SG (allow-bastion-traffic-sg) is to be attached to instances launched in public/private subnets. Make sure you remove the existing default SG attached to them which allows SSH traffic from 0.0.0.0/0, OR edit the inbound rule in the existing SGs which allows this.
This ensures that SSH traffic to all instances in your VPC is allowed only from the bastion host.
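The same security group can also be scripted with the AWS CLI. A minimal sketch with hypothetical IDs –
# Create the SG for instances (VPC ID is a placeholder)
aws ec2 create-security-group --group-name allow-bastion-traffic-sg \
    --description "Allow SSH from bastion only" --vpc-id vpc-0abc123
# Allow SSH only from the bastion's SG (both group IDs are placeholders)
aws ec2 authorize-security-group-ingress --group-id sg-0new456 \
    --protocol tcp --port 22 --source-group sg-0bastion789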
At this stage, the bastion host SG should have the below inbound rule:
And instances in the VPC (any subnet) should have the below inbound rule, where the source is bastion-sg (the SG of the bastion host):
We are all set! It's time to test. Below are 2 instances for testing. One is the bastion and the other is launched in the private subnet. This would work with an instance in the public subnet as well, but those have a public IP allocated too, so to avoid confusion I took an instance from the private subnet.
I logged in to the bastion host using its public IP. Remember, we deployed the bastion host in the public subnet, hence it gets a public IP on launch. And since the public IP is reachable over the internet, I can directly PuTTY to the public IP of the bastion host.
Once I was on the bastion host, I tried to SSH to the private IP of an instance launched in the private subnet. Since the instance is launched in a private subnet, it is not allocated a public IP, so it's not reachable over the internet. So I have to use the bastion host to get into it, and it worked!
Note: I used PuTTY SSH agent forwarding here, so I did not have to supply the SSH key in the command when connecting to the private instance.
In such a way, you can secure administrative access to your instances in the VPC (inside public and private subnets) by using bastion hosts.
A quick article on AWS VPC creation along with screenshots.
In this article, we will be creating a custom VPC in an AWS account and checking all available options, along with screenshots. You must be aware that every AWS account comes with a default VPC already created for you. A few AWS services require the existence of this default VPC, while it's recommended to have a custom VPC for others. So without further delay, let's start with some VPC introduction.
What is VPC?
VPC stands for Virtual Private Cloud. It’s your own isolated network section in the AWS cloud. It’s safe to say it’s your own small cloud within the AWS cloud! VPC can be visualized as the outer boundary of your account in AWS within which you deploy all your cloud resources.
For this exercise, we will try to implement the below design in AWS.
Now on the VPC page, click on the Create VPC button
You will be presented with the below screen :
Here you can fill in below details-
Name tag (optional): For identifying your VPC within your account.
IPv4 CIDR block: This CIDR block will be available throughout your VPC. Make sure you choose wisely to support your IP appetite. You can later add 4 more secondary CIDR blocks to VPC. Plan accordingly.
IPv6 CIDR block: Depending on your requirement. You can specify your own block or use Amazon assigned one.
Tenancy: Choose how your instances will be launched.
Default: Follow the tenancy attribute defined at instance launch
Dedicated: Regardless of tenancy type selected at instance launch, always launch an instance on dedicated hardware.
Tags: Add tags to manage billing, identification, etc. If you choose the Name tag in the first field then it will appear here automatically.
Once you fill in everything, click Create VPC and your VPC will be created. You should see a confirmation screen –
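If you prefer the CLI, the equivalent call would be something like the below (assuming a reasonably recent AWS CLI; the Name tag is an example) –
aws ec2 create-vpc --cidr-block 10.0.0.0/24 \
    --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=custom-vpc}]'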
Now, your VPC is created. You need to remember the below points when you create a custom VPC with this method:
Along with this VPC below resources are created automatically –
1 NACL
All traffic is allowed in and out with ALLOW rule with rule number 100
Also has the * DENY rule, which means that if a packet does not match any of the specified rules, it will be denied.
1 DHCP options set
With Internal domain name
No NTP servers defined
Name servers pointing to Amazon-provided DNS
You cannot edit it, but you can delete this one and create a new one.
1 route table
All traffic destined to remain within the VPC, i.e. the target is defined as local
1 security group
All Traffic allowed in inbound and outbound rules.
You need to create below manually –
Subnets
Internet gateway (If Public subnet is created)
NAT gateways (For internet access to Private subnet)
So to launch an instance in this VPC you have to create a subnet first.
How to create subnets in custom VPC?
Let's go ahead and create subnets in our custom VPC.
Subnet creation needs proper planning. You need to decide how you want to use your available IP pool. For example, since we used the 10.0.0.0/24 CIDR block while creating the VPC, we have 256 IPv4 addresses available in our VPC. I plan for –
Use of 2 availability zones for HA
Each zone should have 1 public and 1 private subnet.
IPs to be spread across all subnets equally.
So in a nutshell, I have to spread 256 IPs across 4 subnets. Also, you should be aware that in each subnet, 5 IPs are not available for use –
First IP: Network address
Second IP: AWS VPC router
Third IP: AWS DNS
Fourth IP: Reserved by AWS for future use
Last IP: Network broadcast address. Since broadcast is not supported in a VPC, this IP can not be used.
Now, with 4 subnets, that means 20 IPs are reserved and not available to us. For example, in the 10.0.0.0/26 subnet, the unusable addresses are 10.0.0.0, 10.0.0.1, 10.0.0.2, 10.0.0.3, and 10.0.0.63. So in total, we have 236 IPs available to use when we create the below 4 subnets –
10.0.0.0/26
10.0.0.64/26
10.0.0.128/26
10.0.0.192/26
Calculation is done! It's time to create the subnets in the AWS console.
Note: To understand CIDR notation, use https://cidr.xyz/ and for subnetting, use online subnet calculators.
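For reference, the four subnets from the calculation above could be created from the CLI as below; the VPC ID and AZ names are placeholders, and the public/private pairing per AZ follows the design above –
aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.0.0/26   --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.0.64/26  --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.0.128/26 --availability-zone us-east-1b
aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.0.192/26 --availability-zone us-east-1b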
On the same VPC AWS console, in the left navigation pane click on Subnets. Then click on the Create subnet button. You should see below screen –
Here we need to fill in –
Name tag: For identification purpose
VPC: Select your custom VPC from the dropdown.
Availability Zone: Select desired AZ from drop-down
IPv4 CIDR block: Choose from your calculation (which we did earlier)
Once done, click the Create button. Your subnet should be created and you will see a confirmation like this –
Repeat the same process to create the rest of the subnets. Once all subnets are created you should see them in the subnet dashboard.
If you observe here, all subnets will be associated with the same route table which was created during VPC creation. This needs to be changed.
For public subnets, we need to create an internet gateway, create a custom route table which has a route to this IG, and then associate the public subnets with that route table. This way, we enable internet connectivity for public subnets.
Optional: You can enable the Auto-assign IPv4 setting in the public subnet settings, which will auto-assign public IPv4 addresses to instances launched in this subnet.
How to create an Internet Gateway and associate it with subnets?
On the left navigation plane, click on Route Tables
Now on the route tables page, click on the Create route table button
Here you just need to add a Name tag for it, select the custom VPC from the drop-down, and click the Create button.
Your route table will be created.
Now go back to the Route Tables screen and select the recently created route table. Then click on the Routes tab.
On the edit routes screen, you need to add a route for destination 0.0.0.0/0 with the target set to the recently created internet gateway, and then click Save routes. Make sure you keep the existing local route since it's needed for within-VPC communication.
Now the internet route table is ready. We need to associate it with the public subnets created in earlier steps.
Select Subnet Associations tab under same route table and click on Edit subnet associations button
Select public subnets and click Save
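The whole internet gateway wiring can also be scripted; a minimal sketch with hypothetical IDs –
# Create the IGW and attach it to the VPC
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0abc123 --vpc-id vpc-0abc123
# Create the public route table, add the default route, associate a public subnet
aws ec2 create-route-table --vpc-id vpc-0abc123
aws ec2 create-route --route-table-id rtb-0pub456 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0abc123
aws ec2 associate-route-table --route-table-id rtb-0pub456 --subnet-id subnet-0pub1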
At this stage, our subnets are properly segregated as public and private. Public subnets are associated with a route table having a route to the internet, and private subnets are associated with a route table having a route for within-VPC communication only.
The last piece of the puzzle is to create a NAT gateway for instances in the private subnets. Using the NAT gateway, those instances can access the internet for downloading updates, etc., and yet they won't be accessible from the internet, i.e. not exposed on the open internet.
If you want absolute isolation from the internet for a private subnet then you can skip the NAT gateway topics.
Elastic IP availability is a pre-requisite for creating a NAT gateway. If you don't have an Elastic IP allocated in your account, please get one allocated first.
On the left navigation plane, click on Elastic IPs
Now on the Elastic IPs page, click on the Allocate Elastic IP address button
Here you just need to add a Network Border Group for it and click the Allocate button.
A network border group is a collection of AZs where the allocated Elastic IP will be available for use. In a nutshell, you will be choosing a region here, since Elastic IPs are regional resources.
On the left navigation plane, click on NAT Gateways
Now on the NAT Gateway page, click on the Create NAT gateway button
Here you need to add a Name tag for it, select the Subnet and Elastic IP, and click the Create NAT gateway button.
Make sure you select the public subnet here and the Elastic IP which we got allocated in the previous step.
The NAT gateway is now created. We need to create a custom route table which has a route to this NAT gateway. Follow the same procedure we saw above for the IG and associate the private subnet with this new custom route table.
Repeat the same procedure to create a NAT gateway in the other availability zone as well, so that it can be tagged to the private subnet in that AZ. Remember, a NAT gateway is not a regional resource; you need to create one per availability zone.
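The NAT gateway steps map to the CLI as below; again the IDs are placeholders, and this has to be repeated per AZ –
# Allocate an Elastic IP for the NAT gateway
aws ec2 allocate-address --domain vpc
# Create the NAT gateway in a PUBLIC subnet
aws ec2 create-nat-gateway --subnet-id subnet-0pub1 --allocation-id eipalloc-0abc123
# Point the private route table's default route at the NAT gateway
aws ec2 create-route --route-table-id rtb-0priv789 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0abc123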
At this stage, both subnets are all set for instance deployments.
A public subnet is associated with a routing table having a route to the Internet gateway
A private subnet is associated with route table having a route to NAT gateway
This completes our custom VPC creation, and we achieved the targeted design!
A quick post to troubleshoot an issue with NetWorker service startup
If you come across an issue where you have installed a new NetWorker agent on a Linux server and the service is not coming up, you will see the below message –
root@kerneltalks ~# /etc/init.d/networker start
root@kerneltalks ~# /etc/init.d/networker status
There are currently no running NetWorker processes.
Troubleshooting
You can dig through logs or run a debug using the below command:
root@kerneltalks ~# nsrexecd -D5
It will print lots of messages. You have to go through them for the possible cause of the issue. I found the below offending entries –
RAP critical 162 Attributes '%s' and/or '%s' of the %s resource do not resolve to the machine's hostname '%s'. To correct the error, it may be necessary to delete the %s database.
Solution
First, check that your /etc/hosts file is correct and has a valid loopback entry.
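For illustration, a sane /etc/hosts would look something like the below; the non-loopback IP and hostname are examples, and the hostname should match the machine's actual hostname –
127.0.0.1      localhost localhost.localdomain
192.168.1.10   kerneltalks.example.com kerneltalks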
A quick article to point out configurations to customize the sar utility.
sar is a monitoring utility on Linux which is used to monitor system resource utilization. We have covered different aspects of sar in the past. You can go through the below articles for the same.
In this article, we will walk you through some custom settings you can configure for sar, like the below –
How to change monitoring frequency in sar
How to customize sar log rotation
How to change sar monitoring frequency?
As you are aware, sar has a default frequency of 10 minutes. That means the sar utility logs one data point of resource utilization every 10 minutes. If you want to change this frequency, you can do it by altering it in the below file –
So you have to replace the number 10 with the frequency of your choice. Let's make it 1 minute instead of 10 minutes.
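On most distributions, the collection is driven by a cron entry calling the sa1 collector; after the change, it would look something like this (the sa1 path varies by distribution) –
# Run the sysstat collector every 1 minute instead of every 10
*/1 * * * * root /usr/lib64/sa/sa1 1 1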
Now, after editing the file, you have to wait for at least the interval you chose as the frequency to pass, and then you can verify it by using the sar command.
kerneltalks:~ # sar
Linux 5.3.18-22-default (kerneltalks) 08/20/20 _x86_64_ (1 CPU)
14:16:18 LINUX RESTART (1 CPU)
14:20:01 CPU %user %nice %system %iowait %steal %idle
14:21:01 all 0.02 0.00 0.02 0.00 0.00 99.97
14:22:01 all 0.02 0.00 0.03 0.00 0.02 99.93
14:23:01 all 0.00 0.00 0.00 0.00 0.00 100.00
14:24:01 all 0.02 0.00 0.02 0.00 0.00 99.97
Average: all 0.01 0.00 0.02 0.00 0.00 99.97
You can see now that sar is collecting data points with a frequency of 1 minute.
How to customize sar log rotation?
sar log rotation is controlled by the /etc/sysstat/sysstat file. You can configure the below parameters in the file.
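For example, a typical sysstat configuration carries parameters like the below; the values shown are examples, not defaults –
HISTORY=28          # number of days to keep the sa log files
COMPRESSAFTER=10    # compress sa files older than this many days
SADC_OPTIONS=""     # extra options passed to the sadc data collector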
In this quick walk-through, we will upgrade OL 6.8 to OL 7.6.
All outputs in this article are from an EC2 server running on AWS. I am using the Oracle Linux yum server public repo, hence I reference the repo names from it. If your system is registered to ULN, then use the respective repos accordingly.
First, you need to prepare the system for the upgrade. Below are the pre-requisites:
Make sure you have a proper backup of your data, have disabled monitoring of the server, have stopped all applications on the server, etc.
Make sure the system is subscribed to the ol6_latest repository
Once you are ready, you can go ahead with running pre-upgrade checks to verify if your system is compatible to move on. For that, you need to install the below packages. They are available from the ol6_addons repo.
Once the packages are installed, you are ready to run a pre-upgrade check. Note: In my case, preupgrade-assistant-el6toel7-data-0 was not available from my repo, but it did not hurt my upgrade.
Now run the below command to run the checks –
[root@kerneltalks ~]# preupg
The Preupgrade Assistant is a diagnostics tool
and does not perform the actual upgrade.
Do you want to continue? [Y/n]
Y
Gathering logs used by the Preupgrade Assistant:
All installed packages : 01/10 ...finished (time 00:00s)
All changed files : 02/10 ...finished (time 01:39s)
Changed config files : 03/10 ...finished (time 00:00s)
All users : 04/10 ...finished (time 00:00s)
All groups : 05/10 ...finished (time 00:00s)
Service statuses : 06/10 ...finished (time 00:00s)
All installed files : 07/10 ...finished (time 00:00s)
All local files : 08/10 ...finished (time 00:01s)
All executable files : 09/10 ...finished (time 00:00s)
Oracle signed packages : 10/10 ...finished (time 00:00s)
Assessment of the system, running checks / SCE scripts:
001/141 ...done (Configuration files to be reviewed) (time: 00:01s)
002/141 ...done (File lists for the manual migration) (time: 00:00s)
003/141 ...done (Bacula Backup Software) (time: 00:00s)
004/141 ...done (MySQL configuration) (time: 00:00s)
005/141 ...done (MySQL data stack) (time: 00:00s)
006/141 ...done (Changes related to moving from MySQL to MariaDB) (time: 00:00s)
007/141 ...done (PostgreSQL) (time: 00:00s)
008/141 ...done (GNOME desktop environment) (time: 00:00s)
009/141 ...done (KDE desktop environment) (time: 00:00s)
010/141 ...done (POWER6 processors) (time: 00:00s)
011/141 ...done (Graphic drivers not supported in Oracle Linux 7) (time: 00:00s)
012/141 ...done (Input drivers not supported in Oracle Linux 7) (time: 00:00s)
013/141 ...done (Kernel networking drivers not available in Oracle Linux 7) (time: 00:00s)
014/141 ...done (Kernel storage drivers not available in Oracle Linux 7) (time: 00:00s)
015/141 ...done (Oracle Directory Server) (time: 00:00s)
016/141 ...done (Arptables) (time: 00:00s)
017/141 ...done (BIND9 in a chroot environment) (time: 00:00s)
018/141 ...done (BIND9 configuration compatibility) (time: 00:00s)
019/141 ...done (Moving the 'dhcpd' and 'dhcrelay' arguments) (time: 00:00s)
020/141 ...done (Dnsmasq) (time: 00:00s)
021/141 ...done (Dovecot) (time: 00:00s)
022/141 ...done (Compatibility between iptables and ip6tables) (time: 00:00s)
023/141 ...done (Net-SNMP) (time: 00:00s)
024/141 ...done (NFSv2) (time: 00:00s)
025/141 ...done (OpenLDAP server daemon configuration) (time: 00:00s)
026/141 ...done (Moving openssh-keycat) (time: 00:00s)
027/141 ...done (SSH configuration file and SSH keys) (time: 00:00s)
028/141 ...done (Postfix) (time: 00:00s)
029/141 ...done (SMB) (time: 00:00s)
030/141 ...done (Sendmail) (time: 00:00s)
031/141 ...done (Squid) (time: 00:00s)
032/141 ...done (VSFTP daemon configuration) (time: 00:00s)
033/141 ...done (Reusable configuration files) (time: 00:00s)
034/141 ...done (Changed configuration files) (time: 00:00s)
035/141 ...done (Rsyslog configuration incompatibility) (time: 00:00s)
036/141 ...done (VCS repositories) (time: 00:00s)
037/141 ...done (Added and extended options for BIND9) (time: 00:00s)
038/141 ...done (Added options in dnsmasq) (time: 00:00s)
039/141 ...done (Changes in utilities) (time: 00:00s)
040/141 ...done (Packages from other system variants) (time: 00:00s)
041/141 ...done (Load balancer support) (time: 00:00s)
042/141 ...done (Packages not signed by Oracle) (time: 00:00s)
043/141 ...done (Obsolete RPM packages) (time: 00:01s)
044/141 ...done (w3m browser) (time: 00:00s)
045/141 ...done (The qemu-guest-agent package) (time: 00:00s)
046/141 ...done (The coreutils packages) (time: 00:00s)
047/141 ...done (The gawk package) (time: 00:00s)
048/141 ...done (Removed command line options) (time: 00:00s)
049/141 ...done (The netstat binary) (time: 00:00s)
050/141 ...done (Quota) (time: 00:00s)
051/141 ...done (The util-linux (util-linux-ng) binaries) (time: 00:00s)
052/141 ...done (Removed RPM packages) (time: 00:01s)
053/141 ...done (TaskJuggler) (time: 00:00s)
054/141 ...done (Replaced RPM packages) (time: 00:02s)
055/141 ...done (GMP library incompatibilities) (time: 00:00s)
056/141 ...done ("not-base" channels) (time: 00:05s)
057/141 ...done (Package downgrades) (time: 00:00s)
058/141 ...done (Custom SELinux policy) (time: 00:00s)
059/141 ...done (Custom SELinux configuration) (time: 00:03s)
060/141 ...done (Samba SELinux context check) (time: 00:00s)
061/141 ...done (Removing sandbox from SELinux) (time: 00:00s)
062/141 ...done (CUPS Browsing and BrowsePoll) (time: 00:00s)
063/141 ...done (CVS) (time: 00:00s)
064/141 ...done (FreeRADIUS) (time: 00:00s)
065/141 ...done (httpd) (time: 00:00s)
066/141 ...done (The bind-dyndb-ldap configuration file) (time: 00:00s)
067/141 ...done (Identity Management Server) (time: 00:00s)
068/141 ...done (IPA Server CA) (time: 00:00s)
069/141 ...done (Network Time Protocol) (time: 00:00s)
070/141 ...done (time-sync.target) (time: 00:00s)
071/141 ...done (OpenLDAP /etc/sysconfig and data compatibility) (time: 00:00s)
072/141 ...done (The OpenSSH sshd_config file migration) (time: 00:00s)
073/141 ...done (The OpenSSH sysconfig/sshd file migration) (time: 00:00s)
074/141 ...done (The quota_nld service) (time: 00:00s)
075/141 ...done (Moving the disk quota netlink message daemon into the quota-nld package) (time: 00:00s)
076/141 ...done (System Security Services Daemon) (time: 00:00s)
077/141 ...done (Tomcat configuration compatibility check) (time: 00:00s)
078/141 ...done (Detection of LUKS devices using Whirlpool for password hash) (time: 00:00s)
079/141 ...done (Detection of Direct Access Storage Device (DASD) format on s390x platform for LDL format) (time: 00:00s)
080/141 ...done (The clvmd and cmirrord daemon management) (time: 00:00s)
081/141 ...done (Logical Volume Management 2 services) (time: 00:00s)
082/141 ...done (Device Mapper Multipath) (time: 00:00s)
083/141 ...done (The scsi-target-utils packages) (time: 00:00s)
084/141 ...done (Backing up warnquota) (time: 00:00s)
085/141 ...done (The warnquota tool) (time: 00:00s)
086/141 ...done (Add-Ons) (time: 00:00s)
087/141 ...done (Unsupported architectures) (time: 00:00s)
088/141 ...done (Binaries to be rebuilt) (time: 00:25s)
089/141 ...done (Debuginfo packages) (time: 00:00s)
090/141 ...done (Read-only FHS directories) (time: 00:00s)
091/141 ...done (FHS incompatibilities) (time: 00:00s)
092/141 ...done (Requirements for the /usr/ directory) (time: 00:00s)
093/141 ...done (Cluster and High Availability) (time: 00:00s)
094/141 ...done (The quorum implementation) (time: 00:00s)
095/141 ...done (The krb5kdc configuration file) (time: 00:00s)
096/141 ...done (File systems, partitions, and the mounts configuration) (time: 00:00s)
097/141 ...done (Removable media in the /etc/fstab file) (time: 00:00s)
098/141 ...done (Libraries with their soname bumped) (time: 00:08s)
099/141 ...done (Libraries with their soname kept) (time: 00:07s)
100/141 ...done (Removed .so libraries) (time: 00:46s)
101/141 ...done (CGROUP_DAEMON in sysconfig scripts) (time: 00:00s)
102/141 ...done (Checking the system version and variant) (time: 00:00s)
103/141 ...done (Consequences of upgrading to RHEL 7.6 instead of the latest RHEL minor version) (time: 00:00s)
104/141 ...done (AIDE) (time: 00:00s)
105/141 ...done (CA bundles) (time: 00:00s)
106/141 ...done (Oracle Developer Toolset) (time: 00:00s)
107/141 ...done (GRUB to GRUB 2 migration) (time: 00:00s)
108/141 ...done (Grubby) (time: 00:00s)
109/141 ...done (Obsoleting Hardware Abstraction Layer) (time: 00:00s)
110/141 ...done (Hyper-V) (time: 00:00s)
111/141 ...done (Enabled and disabled services in Oracle Linux 6) (time: 00:02s)
112/141 ...done (Ethernet interface naming) (time: 00:00s)
113/141 ...done (The /etc/rc.local and /etc/rc.d/rc.local files) (time: 00:00s)
114/141 ...done (java-1.8.0-ibm compatibility check) (time: 00:00s)
115/141 ...done (Java upgrade) (time: 00:00s)
116/141 ...done (The kernel-kdump package) (time: 00:00s)
117/141 ...done (The cgroups configuration compatibility) (time: 00:00s)
118/141 ...done (Pluggable authentication modules (PAM)) (time: 00:00s)
119/141 ...done (Perl modules not distributed by Oracle) (time: 00:13s)
120/141 ...done (PHP modules not distributed by Oracle) (time: 00:00s)
121/141 ...done (PolicyKit) (time: 00:00s)
122/141 ...done (Python packages) (time: 00:03s)
123/141 ...done (Repositories for Kickstart) (time: 00:00s)
124/141 ...done (System requirements) (time: 00:00s)
125/141 ...done (Ruby 2.0.0) (time: 00:00s)
126/141 ...done (Oracle Software Collections (RHSCL)) (time: 00:00s)
127/141 ...done (Oracle Subscription Manager) (time: 00:00s)
128/141 ...done (Oracle Network Classic unsupported) (time: 00:00s)
129/141 ...done (Copying Kickstart) (time: 00:00s)
130/141 ...done (The 'tuned' profiles) (time: 00:00s)
131/141 ...done (UEFI boot loader) (time: 00:00s)
132/141 ...done (Yaboot) (time: 00:00s)
133/141 ...done (The yum configuration file) (time: 00:00s)
134/141 ...done (Dangerous ranges of UIDs and GIDs) (time: 00:00s)
135/141 ...done (Incorrect usage of reserved UIDs and GIDs) (time: 00:01s)
136/141 ...done (The libuser.conf file) (time: 00:00s)
137/141 ...done (NIS ypbind) (time: 00:00s)
138/141 ...done (NIS Makefile) (time: 00:00s)
139/141 ...done (NIS server maps) (time: 00:00s)
140/141 ...done (NIS server UID_MIN and GID_MIN limits) (time: 00:00s)
141/141 ...done (The NIS server configuration file) (time: 00:00s)
The assessment finished (time 02:18s)
The '/root/preupgrade/cleanconf/etc/ssh/sshd_config' configuration file already exists in the '/root/preupgrade/cleanconf/etc/ssh' directory
The '/root/preupgrade/cleanconf/etc/yum.conf' configuration file already exists in the '/root/preupgrade/cleanconf/etc' directory
Result table with checks and their results for 'main contents':
-------------------------------------------------------------------------------------------------------------------
|Bacula Backup Software |notapplicable |
|MySQL configuration |notapplicable |
|MySQL data stack |notapplicable |
|Changes related to moving from MySQL to MariaDB |notapplicable |
|PostgreSQL |notapplicable |
|GNOME desktop environment |notapplicable |
|KDE desktop environment |notapplicable |
|Graphic drivers not supported in Oracle Linux 7 |notapplicable |
|Input drivers not supported in Oracle Linux 7 |notapplicable |
|Oracle Directory Server |notapplicable |
|Arptables |notapplicable |
|BIND9 in a chroot environment |notapplicable |
|BIND9 configuration compatibility |notapplicable |
|Moving the 'dhcpd' and 'dhcrelay' arguments |notapplicable |
|Dnsmasq |notapplicable |
|Dovecot |notapplicable |
|Net-SNMP |notapplicable |
|OpenLDAP server daemon configuration |notapplicable |
|Postfix |notapplicable |
|SMB |notapplicable |
|Squid |notapplicable |
|VSFTP daemon configuration |notapplicable |
|Added and extended options for BIND9 |notapplicable |
|Added options in dnsmasq |notapplicable |
|Load balancer support |notapplicable |
|w3m browser |notapplicable |
|The qemu-guest-agent package |notapplicable |
|Quota |notapplicable |
|TaskJuggler |notapplicable |
|Samba SELinux context check |notapplicable |
|CUPS Browsing and BrowsePoll |notapplicable |
|CVS |notapplicable |
|FreeRADIUS |notapplicable |
|The bind-dyndb-ldap configuration file |notapplicable |
|Identity Management Server |notapplicable |
|IPA Server CA |notapplicable |
|OpenLDAP /etc/sysconfig and data compatibility |notapplicable |
|The quota_nld service |notapplicable |
|Moving the disk quota netlink message daemon into the quota-nld package |notapplicable |
|System Security Services Daemon |notapplicable |
|Tomcat configuration compatibility check |notapplicable |
|Detection of LUKS devices using Whirlpool for password hash |notapplicable |
|Detection of Direct Access Storage Device (DASD) format on s390x platform for LDL format |notapplicable |
|The clvmd and cmirrord daemon management |notapplicable |
|Logical Volume Management 2 services |notapplicable |
|Device Mapper Multipath |notapplicable |
|The scsi-target-utils packages |notapplicable |
|Backing up warnquota |notapplicable |
|The warnquota tool |notapplicable |
|The quorum implementation |notapplicable |
|The krb5kdc configuration file |notapplicable |
|AIDE |notapplicable |
|Obsoleting Hardware Abstraction Layer |notapplicable |
|Java upgrade |notapplicable |
|java-1.8.0-ibm compatibility check |notapplicable |
|The kernel-kdump package |notapplicable |
|PHP modules not distributed by Oracle |notapplicable |
|Ruby 2.0.0 |notapplicable |
|Oracle Software Collections (RHSCL) |notapplicable |
|Oracle Network Classic unsupported |notapplicable |
|Oracle Subscription Manager |notapplicable |
|Copying Kickstart |notapplicable |
|The 'tuned' profiles |notapplicable |
|Yaboot |notapplicable |
|NIS ypbind |notapplicable |
|NIS Makefile |notapplicable |
|NIS server maps |notapplicable |
|NIS server UID_MIN and GID_MIN limits |notapplicable |
|The NIS server configuration file |notapplicable |
|POWER6 processors |pass |
|Kernel networking drivers not available in Oracle Linux 7 |pass |
|Kernel storage drivers not available in Oracle Linux 7 |pass |
|Sendmail |pass |
|Reusable configuration files |pass |
|time-sync.target |pass |
|The OpenSSH sshd_config file migration |pass |
|Add-Ons |pass |
|Unsupported architectures |pass |
|Debuginfo packages |pass |
|Read-only FHS directories |pass |
|Requirements for the /usr/ directory |pass |
|Cluster and High Availability |pass |
|CGROUP_DAEMON in sysconfig scripts |pass |
|Checking the system version and variant |pass |
|CA bundles |pass |
|Oracle Developer Toolset |pass |
|Hyper-V |pass |
|The /etc/rc.local and /etc/rc.d/rc.local files |pass |
|Pluggable authentication modules (PAM) |pass |
|Python packages |pass |
|System requirements |pass |
|The libuser.conf file |pass |
|NFSv2 |informational |
|Rsyslog configuration incompatibility |informational |
|VCS repositories |informational |
|The coreutils packages |informational |
|The gawk package |informational |
|Removed command line options |informational |
|The netstat binary |informational |
|The util-linux (util-linux-ng) binaries |informational |
|GMP library incompatibilities |informational |
|httpd |informational |
|Network Time Protocol |informational |
|File systems, partitions, and the mounts configuration |informational |
|Removable media in the /etc/fstab file |informational |
|Libraries with their soname kept |informational |
|Consequences of upgrading to RHEL 7.6 instead of the latest RHEL minor version |informational |
|Perl modules not distributed by Oracle |informational |
|PolicyKit |informational |
|The yum configuration file |informational |
|SSH configuration file and SSH keys |fixed |
|Replaced RPM packages |fixed |
|Package downgrades |fixed |
|Custom SELinux policy |fixed |
|Custom SELinux configuration |fixed |
|The OpenSSH sysconfig/sshd file migration |fixed |
|Grubby |fixed |
|Dangerous ranges of UIDs and GIDs |fixed |
|File lists for the manual migration |needs_inspection |
|Compatibility between iptables and ip6tables |needs_inspection |
|Moving openssh-keycat |needs_inspection |
|Changed configuration files |needs_inspection |
|Changes in utilities |needs_inspection |
|Obsolete RPM packages |needs_inspection |
|Binaries to be rebuilt |needs_inspection |
|FHS incompatibilities |needs_inspection |
|Libraries with their soname bumped |needs_inspection |
|Removed .so libraries |needs_inspection |
|Ethernet interface naming |needs_inspection |
|Repositories for Kickstart |needs_inspection |
|Incorrect usage of reserved UIDs and GIDs |needs_inspection |
|Configuration files to be reviewed |needs_action |
|Packages from other system variants |needs_action |
|Packages not signed by Oracle |needs_action |
|Removed RPM packages |needs_action |
|"not-base" channels |needs_action |
|Removing sandbox from SELinux |needs_action |
|GRUB to GRUB 2 migration |needs_action |
|Enabled and disabled services in Oracle Linux 6 |needs_action |
|The cgroups configuration compatibility |needs_action |
|UEFI boot loader |needs_action |
-------------------------------------------------------------------------------------------------------------------
The tarball with results is stored in '/root/preupgrade-results/preupg_results-200723042538.tar.gz' .
The latest assessment is stored in the '/root/preupgrade' directory.
Summary information:
We have found some potential risks.
Read the full report file '/root/preupgrade/result.html' for more details.
Please ensure you have backed up your system and/or data
before doing a system upgrade to prevent loss of data in
case the upgrade fails and full re-install of the system
from installation media is needed.
Upload results to UI by the command:
e.g. preupg -u http://example.com:8099/submit/ -r /root/preupgrade-results/preupg_results-200723042538.tar.gz .
Once the tool completes the checks, download and review /root/preupgrade/result.html. It will be something like below –
It lists all the checks, their results, what is actionable, and what actions are to be taken.
Spare some time to read the report thoroughly, go through the actionables, act on them if they suit your environment/needs, and then move ahead with the upgrade. Since I am running a test instance on AWS, I did not act on the actionables and moved ahead with the upgrade.
The upgrade needs an ISO or a network path from where it can read OL7 packages for the upgrade. I downloaded the OL7 ISO from Oracle using wget. To start the upgrade with the ISO, use the below command –
[root@kerneltalks ~]# redhat-upgrade-tool-cli --iso OracleLinux-R7-U6-Server-x86_64-dvd.iso --debuglog=/tmp/upgrade.log --cleanup-post
setting up repos...
upgradeiso | 3.6 kB 00:00 ...
upgradeiso/primary_db | 5.0 MB 00:00 ...
The Preupgrade Assistant has found upgrade risks.
You can run 'preupg --riskcheck --verbose' to view these risks.
Addressing high risk issues is mandatory before continuing with the upgrade.
Ignoring these risks may result in a broken and/or unsupported upgrade.
Please backup your data.
List of issues:
preupg.risk.MEDIUM: Some packages installed on the system were removed between Oracle Linux 6 and Oracle Linux 7. This might break the functionality of the packages that depend on the removed packages.
preupg.risk.MEDIUM: After the upgrade, migrate GRUB to GRUB 2 manually.
preupg.risk.MEDIUM: The name distros was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name __init__.py was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name __init__.pyc was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name __init__.pyo was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name arch.py was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name arch.pyc was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name arch.pyo was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name debian.py was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name debian.pyc was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name debian.pyo was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name fedora.py was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name fedora.pyc was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name fedora.pyo was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name freebsd.py was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name freebsd.pyc was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name freebsd.pyo was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name gentoo.py was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name gentoo.pyc was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name gentoo.pyo was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name net_util.py was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name net_util.pyc was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name net_util.pyo was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name parsers was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name hostname.py was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name hostname.pyc was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name hostname.pyo was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name hosts.py was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name hosts.pyc was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name hosts.pyo was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name resolv_conf.py was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name resolv_conf.pyc was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name resolv_conf.pyo was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name sys_conf.py was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name sys_conf.pyc was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name sys_conf.pyo was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name rhel.py was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name rhel.pyc was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name rhel.pyo was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name rhel_util.py was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name rhel_util.pyc was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name rhel_util.pyo was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name sles.py was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name sles.pyc was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name sles.pyo was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name ubuntu.py was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name ubuntu.pyc was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.MEDIUM: The name ubuntu.pyo was changed in Oracle Linux 7 to one of these services: cloud-config.service cloud-config.target cloud-final.service cloud-init-local.service cloud-init.service
preupg.risk.SLIGHT: We detected some files where their modifications are not tracked by the RPM packages. Check the functionality of the files after the successful upgrade.
preupg.risk.HIGH: The /etc/shadow and /etc/gshadow files must be backed up manually by the administrator.
preupg.risk.HIGH: You have installed some packages signed by Oracle for a different variant of the Oracle Linux system.
preupg.risk.HIGH: We detected some packages not signed by Oracle. You can find the list in the /root/preupgrade/kickstart/nonrhpkgs file. Handle them yourself.
preupg.risk.HIGH: After upgrading to Oracle Linux 7, there are still some el6 packages left. Add the '--cleanup-post' option to redhat-upgrade-tool to remove them automatically.
preupg.risk.HIGH: The apr-util-ldap package moved to the Optional channel between Oracle Linux 6 and Oracle Linux 7.
preupg.risk.HIGH: The groff package moved to the Optional channel between Oracle Linux 6 and Oracle Linux 7.
preupg.risk.HIGH: The openscap-engine-sce package is available in the Optional channel.
preupg.risk.HIGH: The python-pygments package moved to the Optional channel between Oracle Linux 6 and Oracle Linux 7.
preupg.risk.HIGH: The system-config-firewall-tui package moved to the Optional channel between Oracle Linux 6 and Oracle Linux 7.
preupg.risk.HIGH: The xz-lzma-compat package moved to the Optional channel between Oracle Linux 6 and Oracle Linux 7.
preupg.risk.HIGH: There were changes in SELinux policies between Oracle Linux 6 and Oracle Linux 7. See the solution to resolve this problem.
preupg.risk.HIGH: Back up the grub RPM manually before the upgrade. See the remediation instructions for more info.
preupg.risk.HIGH: The blk-availability service is disabled by default in Oracle Linux 7. Enable it by typing: systemctl enable blk-availability && systemctl start blk-availability.service .
preupg.risk.HIGH: The cloud-config service is disabled by default in Oracle Linux 7. Enable it by typing: systemctl enable cloud-config && systemctl start cloud-config.service .
preupg.risk.HIGH: The cloud-final service is disabled by default in Oracle Linux 7. Enable it by typing: systemctl enable cloud-final && systemctl start cloud-final.service .
preupg.risk.HIGH: The cloud-init service is disabled by default in Oracle Linux 7. Enable it by typing: systemctl enable cloud-init && systemctl start cloud-init.service .
preupg.risk.HIGH: The cloud-init-hotplugd service is disabled by default in Oracle Linux 7. Enable it by typing: systemctl enable cloud-init-hotplugd && systemctl start cloud-init-hotplugd.service .
preupg.risk.HIGH: The cloud-init-local service is disabled by default in Oracle Linux 7. Enable it by typing: systemctl enable cloud-init-local && systemctl start cloud-init-local.service .
preupg.risk.HIGH: The ip6tables service is disabled by default in Oracle Linux 7. Enable it by typing: systemctl enable ip6tables && systemctl start ip6tables.service .
preupg.risk.HIGH: The messagebus service is disabled by default in Oracle Linux 7. Enable it by typing: systemctl enable messagebus && systemctl start messagebus.service .
preupg.risk.HIGH: The netfs service is disabled by default in Oracle Linux 7. Enable it by typing: systemctl enable netfs && systemctl start netfs.service .
preupg.risk.HIGH: The network service is disabled by default in Oracle Linux 7. Enable it by typing: systemctl enable network && systemctl start network.service .
preupg.risk.HIGH: The ntpd service is disabled by default in Oracle Linux 7. Enable it by typing: systemctl enable ntpd && systemctl start ntpd.service .
preupg.risk.HIGH: The sendmail service is disabled by default in Oracle Linux 7. Enable it by typing: systemctl enable sendmail && systemctl start sendmail.service .
preupg.risk.HIGH: The udev-post service is disabled by default in Oracle Linux 7. Enable it by typing: systemctl enable udev-post && systemctl start udev-post.service .
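If you plan to re-enable several of these services after the upgrade, a small shell loop saves typing. This is only a convenience sketch built from the service names listed above; trim the list to the services you actually need –
# Re-enable and start the desired services in one go
for svc in cloud-config cloud-final cloud-init cloud-init-local network ntpd; do
    systemctl enable "$svc" && systemctl start "$svc"
done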
preupg.risk.HIGH: Additional libcgroup configuration files were created (/etc/cgconfig.d).
preupg.risk.HIGH: Binary efibootmgr is not installed.
preupg.risk.HIGH: Please, install all required packages (and binaries) and run preupg again to process check properly.
preupg.risk.MEDIUM: The ssh-keycat files are moved to the openssh-keycat package.
preupg.risk.MEDIUM: Some packages installed on the system were removed between Oracle Linux 6 and Oracle Linux 7. This might break the functionality of the packages depending on these removed packages.
preupg.risk.MEDIUM: Conflict with the file structure: the /run/ directory already exists.
preupg.risk.MEDIUM: Some soname bumps in the libraries installed on the system were detected, which might break the functionality of some of your third-party applications. They might need to be rebuilt, so check their requirements.
preupg.risk.MEDIUM: Some .so libraries installed on the system were removed between Oracle Linux 6 and Oracle Linux 7. This might break the functionality of some of your third-party applications.
preupg.risk.MEDIUM: Reserved user and group IDs by the setup package changed between Oracle Linux 6 and Oracle Linux 7.
preupg.risk.SLIGHT: Some files untracked by RPM packages were detected. Some of these files might need a manual check or migration after redhat-upgrade-tool and/or might cause conflicts during the installation. Try to reduce the number of the unnecessary untracked files before running redhat-upgrade-tool.
preupg.risk.SLIGHT: The iptables or ip6tables service is enabled. Read the remediation instructions.
preupg.risk.SLIGHT: Certain configuration files are changed and the .rpmnew files will be generated.
preupg.risk.SLIGHT: Some utilities were replaced, removed, moved between packages, or their location changed.
preupg.risk.SLIGHT: Some scripts untracked by RPM were discovered on the system. The scripts might not work properly after the upgrade.
preupg.risk.SLIGHT: /etc/sysconfig/network-scripts/ifcfg-eth0 is old style ethX name without HWADDR, its name can change after the upgrade.
preupg.risk.SLIGHT: You use one network device with an old style 'ethX' name.
preupg.risk.SLIGHT: The public_ol6_latest repository is enabled.
preupg.risk.SLIGHT: The public_ol6_addons repository is enabled.
preupg.risk.SLIGHT: The public_ol6_ga_base repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_u1_base repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_u2_base repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_u3_base repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_u4_base repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_u5_base repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_u6_base repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_u7_base repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_u8_base repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_UEK_latest repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_UEKR3_latest repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_UEKR4 repository is enabled.
preupg.risk.SLIGHT: The public_ol6_UEK_base repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_MySQL repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_gdm_multiseat repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_MySQL56 repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_MySQL57 repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_ceph10 repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_spacewalk20_server repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_spacewalk20_client repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_ofed_UEK repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_UEKR4_OFED repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_playground_latest repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_spacewalk22_server repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_spacewalk22_client repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_software_collections repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_spacewalk24_server repository is not enabled.
preupg.risk.SLIGHT: The public_ol6_spacewalk24_client repository is not enabled.
preupg.risk.SLIGHT: Enabled repository files for the Kickstart generation are stored in the /root/preupgrade/kickstart/available-repos file.
preupg.risk.SLIGHT: Some packages installed on the system changed their names between Oracle Linux 6 and Oracle Linux 7. Although they should be compatible, monitor them after the update.
Continue with the upgrade [Y/N]? Y
Once again, it lists the risks of the upgrade and asks for your confirmation to move ahead. Once you confirm with Y, the upgrade starts.
Once the command completes, it asks you to reboot the server. The reboot takes a while, since the upgrade process completes during boot. Then log in to the system to check the OS version.
[root@kerneltalks ~]# cat /etc/*release
Oracle Linux Server release 7.6
NAME="Oracle Linux Server"
VERSION="7.6"
ID="ol"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.6"
PRETTY_NAME="Oracle Linux Server 7.6"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:7:6:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://bugzilla.oracle.com/"
ORACLE_BUGZILLA_PRODUCT="Oracle Linux 7"
ORACLE_BUGZILLA_PRODUCT_VERSION=7.6
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=7.6
Red Hat Enterprise Linux Server release 7.6 (Maipo)
Oracle Linux Server release 7.6
And we are upgraded to OL 7.6! Make sure you read all the reports and messages before you actually confirm the upgrade. This will make your life easier post-upgrade!
Issue with tool version
The redhat-upgrade-tool always targets the latest OS version known to it. So if you are using a newer tool version and trying to upgrade to an OS version older than the one the tool knows, you will see the below error –
The installed version of Preupgrade Assistant allows upgrade only to the system version 7.5
I was trying to upgrade to OL 7.4, but the tool was looking for 7.5 only. In such a case, you have to downgrade the tool version and try again.
For the OL 7.4 upgrade, the below version worked for me –
redhat-upgrade-tool-0.7.50-1.0.1.el6.noarch.rpm
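A minimal sketch of the downgrade, assuming you have the older RPM downloaded locally (the --oldpackage flag lets rpm install an older version over the newer one) –
# Replace the newer tool with the older version
rpm -Uvh --oldpackage redhat-upgrade-tool-0.7.50-1.0.1.el6.noarch.rpm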
If you use any version below 0.7.50, you will run into an issue where you see lots of 'couldn't add media' and 'failed to open file' errors in the console –
Warning: couldn't add media/Packages/dracut-network-033-502.0.1.el7.x86_64.rpm to the transaction
Warning: failed to open file /sysroot/var/lib/system-upgrade/media/Packages/xulrunner-31.6.0-2.0.1.el7_1.x86_64.rpm
If you use any version above 0.7.50, you will run into the version issue explained above. The capability to decompress kernel modules, introduced in 0.7.50, makes it the best bet in the above-explained scenario.
A few redhat-upgrade-tool versions and the upgrades they support are mapped below.
In this article, we will look at the pod concept in Kubernetes.
What is a pod in Kubernetes?
A pod is the smallest execution unit in Kubernetes. It is a single container or a group of containers that serve a running process in the K8s cluster. Read 'What is a container?' if you are not familiar with containerization.
Each pod has a single IP address that is shared by all the containers within it. The port space is also shared by all of those containers.
You can view running pods in K8s using the below command –
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
webserver 1/1 Running 0 10s
View pod details in K8s
To get more detailed information on a pod, you can run the below command, supplying the pod name as an argument –
$ kubectl describe pods webserver
Name: webserver
Namespace: default
Priority: 0
Node: node01/172.17.0.9
Start Time: Sun, 05 Jul 2020 13:50:41 +0000
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.244.1.3
IPs:
IP: 10.244.1.3
Containers:
webserver:
Container ID: docker://8b260effa4ada1ff80e106fb12cf6e2da90eb955321bbe3b9e302fdd33b6c0d8
Image: nginx
Image ID: docker-pullable://nginx@sha256:21f32f6c08406306d822a0e6e8b7dc81f53f336570e852e25fbe1e3e3d0d0133
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Sun, 05 Jul 2020 13:50:50 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-bjcwg (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-bjcwg:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-bjcwg
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 25s default-scheduler Successfully assigned default/webserver to node01
Normal Pulling 23s kubelet, node01 Pulling image "nginx"
Normal Pulled 17s kubelet, node01 Successfully pulled image "nginx"
Normal Created 16s kubelet, node01 Created container webserver
Normal Started 16s kubelet, node01 Started container webserver
Pod configuration file
One can create a pod configuration file, i.e. a yml file, which has all the details to start a pod. K8s can read this file and spin up your pod according to the specifications. Sample file below –
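This is a minimal sketch of such a file; the pod name, container name, nginx image, and port 80 follow the webserver pod from the earlier kubectl describe output –
apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  containers:
  - name: webserver        # the only container defined in this pod
    image: nginx
    ports:
    - containerPort: 80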
It's a single-container pod file, since we specified the spec for only one container in it.
Single container pod
A single-container pod can be run without a yml file, using a simple command –
$ kubectl run single-c-pod --image=nginx
pod/single-c-pod created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
single-c-pod 1/1 Running 0 35s
webserver 1/1 Running 0 2m52s
Alternatively, you can spin up the single-container pod using the simple yml file stated above.
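For example, if you saved that sketch as webserver.yml (the filename here is just an illustration), you could create the pod from it –
$ kubectl create -f webserver.yml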
Multiple container pod
For a multi-container pod, let's edit the above yml file to add another container's spec as well.
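Here is a minimal sketch of the edited file; the second container's name and the long sleep keeping the Ubuntu container alive are illustrative assumptions –
apiVersion: v1
kind: Pod
metadata:
  name: web-bash
spec:
  containers:
  - name: webserver
    image: nginx
    ports:
    - containerPort: 80
  - name: linux
    image: ubuntu
    # a plain ubuntu container exits immediately, so keep it running
    command: ["/bin/bash", "-c", "sleep infinity"]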
In the above file, we are spinning up a pod that has one webserver container and one Ubuntu Linux container.
$ kubectl create -f web-bash.yml
pod/web-bash created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
web-bash 2/2 Running 0 12s
How to delete a pod
It's a simple delete command –
$ kubectl delete pods web-bash
pod "web-bash" deleted
How to view pod logs in Kubernetes
I am running a single-container Nginx pod. We will then check the pod logs to view the container's startup messages.
$ kubectl run single-c-pod --image=nginx
pod/single-c-pod created
$ kubectl logs single-c-pod
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
Step-by-step procedure to transfer a domain from GoDaddy to Route 53
In this article, we will walk you through migrating a domain from GoDaddy to AWS Route 53. The process is almost the same for any other domain registrar, but I have provided screenshots from GoDaddy since that is where the domain I transferred was registered.
We will be migrating my other domain (shrikantlavhate.in) from GoDaddy to Route 53. It is a 5-6 day procedure, during which the domain transfer approval is held by the previous registrar. This is a failsafe so you can cancel the transfer if you did not initiate it or want to roll back your action.
Unlock domain for transfer
Log in to your current registrar (in our case GoDaddy) and unlock the domain for transfer. Go to Manage domains or domain settings and turn off the domain lock.
In GoDaddy, the navigation is – Products page > Domains > Click Manage
Then on the domain settings page, scroll down to domain lock where it says – ‘Locking prevents unauthorized changes, including transfer to another registrar. Domain lock: On‘
Click the Edit button beside it and turn it off.
Now, your domain is unlocked for transfer.
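You can optionally verify the lock status from a terminal using the standard whois client (the status codes can take a little while to update) –
# clientTransferProhibited should disappear once the domain is unlocked
$ whois shrikantlavhate.in | grep -i status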
Initiate transfer from AWS Route 53
To start with, you should have an AWS account (new accounts come with a 12-month free tier). If you haven't signed up already, go ahead and do it. You will need a valid email address and credit card information during the signup procedure.
Once done, sign in to the Route 53 management console. Click the 'Registered domains' link in the left-hand menu and then click 'Transfer domain'.
If you missed unlocking your domain at the current registrar as mentioned above, you will see an error here saying the domain cannot be transferred to Route 53. Example below –
So ensure you have unlocked the domain for transfer, then punch in the domain name in the wizard and hit check. It will confirm the domain can now be transferred.
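If you prefer the command line, the same check can be done with the AWS CLI; a sketch assuming the CLI is installed and configured (Route 53 Domains API calls go to the us-east-1 endpoint) –
$ aws route53domains check-domain-transferability --domain-name shrikantlavhate.in --region us-east-1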
Click 'Add to cart' and it will show you the billing total on the right-hand side. Note that AWS renews your domain for one year from its current expiry date during the transfer, and this renewal fee is included in that billing total. Confirm and click 'Continue'.
On the next screen, you need to provide –
Authorization code
You can get this from the current domain registrar's portal
Nameservers
You can choose to keep the same nameservers currently used by the domain. AWS will copy them.
Import from a hosted zone in Route 53 (if you have already created one)
Specify manually
On the GoDaddy domain manager page, click the 'Transfer domain away from GoDaddy' link to get the authorization code. I chose to keep the current nameservers for now.
On the next page, you need to fill in contact details. Note that these details will be publicly accessible in the WHOIS database. You can opt out of this by enabling 'Privacy protection' at the end of the same page. But this option is not available for some domains, like .in 🙁
You can define three different contacts: Registrant, Administrative, and Technical. I chose to use the same details for all.
Once done, hit the 'Continue' button at the bottom of the page. The next page asks you to confirm all the details you have filled in so far. Choose here whether you want to auto-renew your domain (this can be changed later as well) and accept the terms to complete your order.
That's all. Your order is placed, and the billed amount will be debited from the card you provided at the time of AWS account creation.
You will be presented with the informational page below, which is self-explanatory.
You can verify the domain transfer status by navigating to 'Pending requests' in the left-hand menu.
In some countries, credit cards cannot be debited directly, as an OTP is mandated by federal banking regulations. In such cases, you might see an 'action required' status as below, which itself tells you what to do. In my case, it asked me to complete the billing transaction (using an OTP).
So, I completed the payment by navigating to AWS billing dashboard > Order and invoices > Verify and pay
After completing the payment, head back to the Route 53 management console and verify the status. Allow some time for the changes to propagate in the system and the status to update. The status should change back to 'Domain transfer in progress: Waiting for the current registrar to automatically approve the transfer. This can take up to 10 days depending on the TLD and the current registrar. Only the current registrar can accelerate the process. (step 7 of 14)'.
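You can also track the transfer from the AWS CLI; a sketch assuming the CLI is configured –
# List recent Route 53 domain operations and note the OperationId
$ aws route53domains list-operations --region us-east-1
# Then check the detail of that specific operation
$ aws route53domains get-operation-detail --operation-id <operation-id> --region us-east-1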
Meanwhile, you will receive an automated email from your current domain registrar asking you to confirm that you initiated the domain transfer. It also contains a link you can use to cancel the domain transfer request. Since we did want to transfer the domain, no action is required.
Now you have to sit back and relax. Let the transfer period pass, and your domain transfer will be complete. Typically, this takes 5-10 days.
After 6 days, I received an email from AWS that the domain transfer was complete. I logged in to the Route 53 console and could see the domain fully transferred to Route 53.
Since we chose to keep the current nameservers while transferring the domain to Route 53, AWS will not create any hosted zone for the domain.
If you opt for Route 53 as the DNS manager for your domain, AWS will automatically create a public hosted zone in Route 53 once the transfer is complete. This public hosted zone will have SOA and nameserver entries. Note that you will be billed $0.50 per month for this hosted zone.
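To confirm whether a hosted zone was created for your domain, you can list the hosted zones in your account; a sketch assuming the AWS CLI is configured –
$ aws route53 list-hosted-zones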
That's it. We successfully transferred our domain from GoDaddy to AWS Route 53.