A quick post to explain the default Ansible user and where it can be changed.
By default, Ansible manages its clients over SSH. So the obvious questions are: what default user does Ansible use to connect to and run commands on its clients, and how do you change that default user? We will answer these questions in this article.
If you are running the default configuration and have not defined an Ansible user anywhere, then the user running the ansible command (the current user) is used to communicate with the clients over SSH.
Define Ansible user in the configuration file
The Ansible default user can be defined in the Ansible configuration file /etc/ansible/ansible.cfg, in the section below, by un-commenting the remote_user line and replacing root with the user of your choice –
# default user to use for playbooks if user is not specified
# (/usr/bin/ansible will use current user as default)
#remote_user = root
The comment clearly states that if the default user is not defined in the configuration file, the currently logged-in user (on the control machine, i.e. the Ansible server) is used to execute commands on the Ansible clients.
Define Ansible user in Inventory
Another place you can define the Ansible user is the inventory, i.e. the client host list file. The default hosts file Ansible uses is /etc/ansible/hosts. Add the snippet below to this file to define the Ansible user for your tasks or playbooks.
[all:vars]
ansible_user=ansible-usr
Here ansible-usr is the user Ansible will use while connecting to clients over SSH; replace it with the user of your choice.
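You can also verify which user Ansible actually uses, or override it for a single run, with an ad-hoc command. A minimal example, assuming your clients are in the default all group:
ansible all -m command -a "whoami"
ansible all -u ansible-usr -m command -a "whoami"
The first command reports the user resolved from the configuration file or inventory; the second forces ansible-usr just for that run with the -u switch.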
Quick lab setup for learning Ansible using containers on Oracle Virtualbox Linux VM.
In this article, we will be setting up our lab using Docker containers for testing Ansible. We are using Oracle Virtualbox so that you can spin up VM with a readymade OVA file in a minute. This will save efforts to install the OS from scratch. Secondly, we will be spinning up a couple of containers which can be used as ansible clients. Since we need to test ansible for running a few remote commands/modules, it’s best to have containers working as clients rather than spinning complete Linux VMs as a client. This will save a lot of resource requirements as well and you can run this ansible lab on your desktop/laptop as well for practicing ansible.
Without further delay lets dive into setting up a lab on desktop/laptop for learning Ansible. Roughly it’s divided into below sections –
Download Oracle Virtualbox and OVA file
Install Oracle Virtualbox and spin VM from OVA file
Run containers to work as ansible clients
Test connectivity via passwordless SSH access from Ansible worker to clients
Step 1. Download Oracle Virtualbox & OEL7 with Docker readymade OVA file
Go to Oracle Downloads and download the Oracle Linux 7 with Docker 1.12 Hands-On Lab Appliance file. This will help us spin up a VM in Oracle VirtualBox without much hassle.
Step 2. Install Oracle Virtualbox and start VM from OVA file
Install Oracle VirtualBox. It's a pretty standard setup procedure, so I am not getting into it. Once you have downloaded the above OVA file, open it in Oracle VirtualBox and it will open the Import Virtual Appliance menu like below –
Click Import. Agree to the software license agreement shown and it will start importing the OVA as a VM. After the import finishes, you will see a VM named DOC-1002902 (i.e. the same name as the OVA file) created in your Oracle VirtualBox.
Start that VM and log in. The credentials are mentioned in the documentation linked on the OVA file's download page.
Step 3. Running containers
To run containers, you first need to set up Docker Engine on the VM. All steps are listed in the same documentation mentioned above, where you found your first login credentials. You can also follow our Docker installation guide if you want.
Then create a key pair on your VM, i.e. the Ansible worker/server, so that the public key can be used within the containers for passwordless SSH. We will be using ansible-usr as the Ansible user in our setup, so you will see this user from here onwards. Read how to configure the Ansible default user.
[root@ansible-srv .ssh]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
98:42:9a:82:79:ac:74:7f:f9:31:71:2a:ec:bb:af:ee root@ansible-srv.kerneltalks.com
The key's randomart image is:
+--[ RSA 2048]----+
| |
| |
| . |
|.o + o |
|+.=.. o S. . |
|.+. ... . + |
|. . = + |
| o o o |
| oE=o |
+-----------------+
Now that the key pair is ready, let's move on to the containers.
Once Docker Engine is installed and started, create a custom Docker image using the Dockerfile below, which we will use to spin up multiple containers (Ansible clients). The Dockerfile is taken from the linked source and modified a bit to set up passwordless SSH. It also answers the question of how to configure passwordless SSH for containers!
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:password' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
RUN useradd -m -d /home/ansible-usr ansible-usr
RUN mkdir /home/ansible-usr/.ssh
COPY .ssh/id_rsa.pub /home/ansible-usr/.ssh/authorized_keys
RUN chown -R ansible-usr:ansible-usr /home/ansible-usr/.ssh
RUN chmod 700 /home/ansible-usr/.ssh
RUN chmod 640 /home/ansible-usr/.ssh/authorized_keys
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Save the above file as Dockerfile in /root and then run the below command from /root. If you are in some other directory, make sure you adjust the relative path in the COPY instruction in the above Dockerfile.
[root@ansible-srv ~]# docker build -t eg_sshd .
This command will create a custom Docker Image named eg_sshd. Now you are ready to spin up containers using this custom docker image.
We will start the containers with the below naming scheme –
Webserver
k-web1
k-web2
Middleware
k-app1
k-app2
Database
k-db1
So in total, 5 containers spread across different groups with different hostnames, so that we can use them for testing different configs/actions in Ansible.
I am listing the run command for the first container only (see the sketch below); repeat it for the remaining 4 containers.
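The run command was along the lines of the sketch below – -P publishes the container's SSH port 22 on a random host port, and the container name doubles as its hostname (adjust as you see fit):
[root@ansible-srv ~]# docker run -d -P --name k-web1 --hostname k-web1 eg_sshd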
Now, spin up all 5 containers. Verify all containers are running and note down their ports.
[root@ansible-srv ~]# docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2da32a4706fb eg_sshd "/usr/sbin/sshd -D" 5 seconds ago Up 3 seconds 0.0.0.0:32778->22/tcp k-db1
75e2a4bb812f eg_sshd "/usr/sbin/sshd -D" 39 seconds ago Up 33 seconds 0.0.0.0:32776->22/tcp k-app2
40970c69348f eg_sshd "/usr/sbin/sshd -D" 50 seconds ago Up 47 seconds 0.0.0.0:32775->22/tcp k-app1
4b733ce710e4 eg_sshd "/usr/sbin/sshd -D" About a minute ago Up About a minute 0.0.0.0:32774->22/tcp k-web2
e70d825904b8 eg_sshd "/usr/sbin/sshd -D" 4 minutes ago Up 4 minutes 0.0.0.0:32773->22/tcp k-web1
Step 4. Passwordless SSH connectivity between Ansible server and clients
This is an important step for the smooth & hassle-free functioning of Ansible. You need an Ansible user on the Ansible server & clients, and passwordless SSH (using keys) configured for that user. In our lab, the Dockerfile above already created ansible-usr in the containers and copied the server's public key into its authorized_keys.
Now you need to get the IP addresses of your containers. You can inspect the container and extract that information –
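For example, with docker inspect and a Go template (shown for k-web1; repeat for the other containers):
[root@ansible-srv ~]# docker inspect -f '{{ .NetworkSettings.IPAddress }}' k-web1
For k-web1 in this lab it returned 172.17.0.2, which we use below.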
Now that we have an IP address, let's test the passwordless connectivity –
[root@ansible-srv ~]# ssh ansible-usr@172.17.0.2
Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 4.1.12-37.5.1.el7uek.x86_64 x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
Last login: Wed Jan 15 18:57:38 2020 from 172.17.0.1
$ hostname
k-web1
$ exit
Connection to 172.17.0.2 closed.
It’s working! Go ahead and test it for rest all, so that the client’s authenticity will be added and RSA fingerprints will be saved to the known host list. Now we have all 5 client containers running and passwordless SSH is setup between ansible server and clients for user ansible-usr
Now you have full lab setup ready on your desktop/laptop within Oracle Virtualbox for learning Ansible! Lab setup has a VM running in Oracle Virtualbox which is you mail Ansible server/worker and it has 5 containers running within acting as Ansible clients. This setup fulfills the pre-requisite of the configuration of passwordless SSH for Ansible.
Suse Manager 4 configuration. It includes all steps to set up your SUSE Manager right from scratch till your first login in the SUSE Manager web UI.
Adding product channel in SUSE Manager. Procedure to add SUSE product channels in SUSE Manager so that you can sync packages on your SUSE Manager server.
Oracle Public repo in SUSE Manager. Complete process to add Oracle Public repo in SUSE Manager so you can sync packages from public repo to SUSE Manager.
Remove product channels. Procedure to remove product channels from SUSE Manager from the command line.
A quick post on a couple of errors and their solutions while working on ELK stack.
ELK stack stands for Elasticsearch, Logstash, and Kibana. We will walk you through a couple of errors you may see while working on the ELK stack, and their solutions.
Error: missing authentication token for REST request
First things first: how to run the cluster curl commands that are scattered everywhere on the Elastic documentation portal. They have a copy-as-curl option, but if you run those commands as-is on your terminal, you will end up with the 'missing authentication token' error above.
You need to use authentication within the curl command and you are good to go. It's good practice to supply only the username with the -u switch so that you don't reveal your password in the command history! Make sure you use the Kibana UI user here.
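For instance, running the documented health-check command without any credentials (a minimal reproduction, assuming X-Pack security is enabled on your cluster):
root@kerneltalks # curl -X GET "localhost:9200/_cat/health?v&pretty"
Elasticsearch rejects this request with the missing authentication token error mentioned in the heading above.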
root@kerneltalks # curl -u kibanaadm -X GET "localhost:9200/_cat/health?v&pretty"
Enter host password for user 'kibanaadm':
epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1578644464 08:21:04 test-elk green 1 1 522 522 0 0 0 0 - 100.0%
Issue: How to remove x-pack after upgrading from 6.2
If you are running ELK stack 6.2 and performing an upgrade, you need to take care of the x-pack module first. Since x-pack is bundled within 6.3 and later distributions, you don't need it as a separate module. But for some reason, during the upgrade the new stack won't be able to remove the existing x-pack module. This leads to 2 x-pack modules on the system, and Kibana restarting continuously because of it with the below error –
Error: Multiple plugins found with the id \"xpack_main\":\n - xpack_main at /usr/share/kibana/node_modules/x-pack\n - xpack_main at /usr/share/kibana/plugins/x-pack
Solution:
So, before the upgrade, you need to remove the x-pack plugin from Elasticsearch as well as Kibana, using the below commands –
root@kerneltalks # /usr/share/elasticsearch/bin/elasticsearch-plugin remove x-pack
-> removing [x-pack]...
-> preserving plugin config files [/etc/elasticsearch/x-pack] in case of upgrade; use --purge if not needed
root@kerneltalks # /usr/share/kibana/bin/kibana-plugin remove x-pack
Removing x-pack...
This will make your upgrade go smoothly. If you have already upgraded (with RPM) and hit the issue, you may try to downgrade the packages with rpm -Uvh --oldpackage <package_name> and then remove the x-pack modules.
Issue: How to set Index replicas to 0 on single node ElasticSearch cluster
On a single-node Elasticsearch cluster running the default configuration, you will run into an unassigned replicas issue. In the Kibana UI you can see those indices' health as Yellow. Your cluster health will be yellow too, with the message – Elasticsearch cluster status is yellow. Allocate missing replica shards.
Solution:
You need to set the replica count of all indices to zero. You can do this in one go using the below command –
root@kerneltalks # curl -u kibanaadm -X PUT "localhost:9200/_all/_settings?pretty" -H 'Content-Type: application/json' -d'
{
"index" : {
"number_of_replicas" : 0
}
}
'
Enter host password for user 'kibanaadm':
{
"acknowledged" : true
}
Here _all can be replaced with a specific index name if you want to do it for a single index. Use the Kibana UI user in the command and you will be asked for the password. Once entered, it alters the setting for all indices and shows the output as above.
You can now check in the Kibana UI: your cluster health, along with the index health, will be Green.
Checklist which will help you in taking Linux infrastructure handover or transition from other support parties
Handover or transition is an unavoidable project phase that comes along in every sysadmin's life. It's the process of transferring roles and responsibilities from one operations party to another due to a change in support contracts, business, etc.
The obvious goal here is to understand the current setup and working procedures so that you can carry them on once the previous support party hands over authority. So we will walk you through a list of points and questions that will help you in a Linux infrastructure handover or transition. You can treat this as a questionnaire or checklist for Linux handover.
If you are going to handle servers hosted in a public cloud like AWS or Azure, then the majority of the below pointers just don't hold any value 🙂
Hardware
We are considering remote support here, so managing hardware is not really in the scope of the handover. Generic knowledge about the hardware is enough and no detailed analysis is required. If your transition/handover includes taking over hardware management as well, then you might need more detailed information than listed below.
Hardware details of proprietary systems like HPUX, AIX, Blade, Rackmount servers for inventory purposes.
Datacenter logical diagram with location of all CI. This will be helpful for locating CI quickly for hardware maintenance.
Vendor details along with SLA, datacenter contacts and OEM contacts for hardware support at datacenter, escalation matrix.
Vendor coordination process for HW support at the datacenter
DR site details and connectivity details between primary and DR site
Server Connectivity
This is one of the prime requirements whenever you take over any Linux infra. The first thing to know is how you can reach remote (or even local) Linux servers, along with their console access.
How are servers accessed from remote locations? Jump server details, if any.
VPN access details if any. The process to get new VPN access, etc.
Accounts on Linux servers for logins (LDAP, root, etc if implemented)
How is console access provided for physical servers?
Licensing & contracts
When it comes to supporting infrastructure, you should be well aware of the contracts you have with hardware and software vendors so that you can escalate things when they require an expert's eyes.
Vendor contract information for the OS being used (Red Hat, SUSE, OEL, etc.), including start/end dates, SLA details, level of support, products included, escalation matrix, etc.
Software licenses for all tools along with middleware software being used in infrastructure.
Procedure or contacts of the team to renew the above said contracts or licenses.
Risk mitigation plans for old technology
Every company runs a few CIs with old technology for sure. So one should take the upgrade of these CIs into consideration while taking handover. Old technology dies out over a period of time and becomes harder to support day by day. Hence it's always advisable to identify them as a risk before taking handover and get clarity on their mitigation from the owner.
Linux infrastructure future roadmap for servers running old OS (i.e. end of life or end of support)
Discuss migration plans to Linux for servers running AIX or HP-UX Unix flavours if they are running out of vendor contracts and support in the near future.
Ask for a migration plan of servers running non-enterprise Linux flavours like CentOS, Fedora, Oracle Linux, etc.
Same investigation for tools or solutions in Linux infra being used for monitoring, patching, automation, etc.
Linux patching
Quarterly planned activity! Patching is an inseparable part of the Linux lifecycle. Obviously we made a separate section for it. Get whatever details you can gather around this topic from the owner or previous support party.
What patching solutions are being used, like Spacewalk server, SUSE Manager server, etc.?
If none, what is the procedure to obtain new patches? If it's from the vendor, then check related licenses, portal logins, etc.
What patching cycles are being followed? i.e. frequency of patching, patching calendar if any.
Patching procedure, downtime approval process, ITSM tool's role in patching activities, co-ordination process, etc.
Check if any patching automation is implemented.
Monitoring
Infrastructure monitoring is a vast topic. Some organizations have dedicated teams for it; if that's the case, there is very little you will need to gather on this topic.
Details of monitoring tools implemented e.g. tool’s servers, portal logins, licensing and infra management details of that tool, etc.
SOP to configure monitoring for new CI, Alert suppression, etc.
Alert policy, threshold, etc. definition process on that tool
Monitoring tool’s integration with other software like ticketing tool/ITSM
If the tool is managed by a separate team, then contact details, escalation matrix, etc. for the same.
Backup solutions
Backup is another major topic for organizations, and it's mostly handled by a dedicated team considering its importance. Still, it's better to have ground-level knowledge about the backup solutions implemented in the infrastructure.
Details of backup solutions
SOP for backup related activities like adding, updating, deleting new/old CI, policy definitions, etc.
List of activities under the backup stream
Backup recovery testing schedules, DR replication details if applicable
Major backup recurring schedules like weekends so that you can plan your activities accordingly
Security compliance
Audits require you to keep your Linux infra security compliant. All Linux servers should be compliant with the security policies defined by the organization, free from vulnerabilities, and always running the latest software. Below are a few pointers to consider here –
Solution or tool for running security scans on Linux servers
SOP for the same, along with operating details.
Password policies to be defined on Linux servers.
Hardening checklist for newly built servers
Network infra details
The network is the backbone of any IT infrastructure. It's almost always run by a dedicated team, and hence you are not required to have in-depth knowledge of it; it's not in the scope of your transition. But you should know a few basics to keep your day-to-day sysadmin life going smoothly.
SOP for proxy details, how to get ports opened, IP requirements, etc.
Network team contact details, process to get network services, escalation matrix, etc.
How internet connectivity is implemented for the servers
Understanding the network perimeter and zones like DMZ, Public, Private in the context of the DC.
Documentation repository
When you kick off your support of new infrastructure, the document repository is a gold mine for you. So make sure you populate it with all kinds of related documents and make it worthwhile.
Location & access details of the documentation. It could be a shared drive, a file server, or a portal like SharePoint, etc.
Includes inventories, SOP documents, Process documents, Audit documents etc.
Versioning and approval process for new/existing documents if any
Reporting
This area lands squarely in the sysadmin's bucket. Gather all the details regarding it.
List of all reports that currently exist for the Linux infrastructure
What is the report frequency (daily, weekly, monthly)?
Check if reports are automated. If not, ask for the SOP to generate/pull reports; automating them later is an improvement area for you.
How and why is report analysis done? This will help you understand what is expected from the report outputs.
Any further procedure for reports like forwarding to management, signoff from any authority etc.
Report repository if any. This is covered in the documentation repository section as well.
Applications
This area is not actually in scope for the sysadmin, but it helps to work in a process-oriented environment. It also helps to trace the criticality and impact on applications running on servers when the underlying CI runs into trouble.
ITSM tool (IT Service Management tool) used for ticketing & asset management & all details related to ITSM tool like access, authorization etc.
Also, ask for a small training session to get familiar with the ITSM tool, as it's customized according to the organization's operating structure.
Architectural overview of applications running on Linux servers.
Critical applications along with their CI mapping to track down application impact in case of issues with server
Communication and escalation matrices for applications.
Software repository being used, e.g. software setups, installables, OS ISO images, VM templates, etc.
Operations
In all the above points, we gathered data that can be used in this phase, i.e. actually supporting the Linux infrastructure.
List of day to day activities and expected support model
Logistics for operations like phone lines, ODC requirement, IT hardware needed for support etc.
Process for decommissioning old servers and commissioning new servers
New CI onboarding process
DR drill activities details
Escalation/Management matrices on owner organization side for all above tech solutions
That’s all I could think of right now. If you have any more pointers let me know in comments, I am happy to add them here.
A quick post on configuring the Oracle public repo in SUSE Manager.
In this article, we will walk you through the step-by-step procedure to add an Oracle Linux client in SUSE Manager. The complete process consists of the below steps:
Add Oracle YUM repositories to SUSE Manager
Manually sync Oracle Linux repo to SUSE Manager
Copy GPG key from Oracle public repo to SUSE Manager
Create Oracle Linux bootstrap repository in SUSE Manager
Create activation key
Generate and modify the bootstrap script for Oracle Linux
Register Oracle Linux client to SUSE Manager
By adding an Oracle Linux client in SUSE Manager, you can manage OEL clients and their patching from your enterprise tool. You can do content lifecycle management with the Oracle public channels as well. Without further delay, let's jump into it.
How to add Oracle Public repositories in SUSE Manager
First things first: install the spacewalk utilities on your SUSE Manager server.
kerneltalks:~ # zypper in spacewalk-utils
Now, run the spacewalk-common-channels command to list all available base channels along with their architectures.
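The listing command is sketched below; the -l (list) switch prints all known channels and architectures:
kerneltalks:~ # spacewalk-common-channels -l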
You need to choose the channel you want to sync per your requirement. For this tutorial, we will register the OEL7 client to SUSE Manager. For that, we will select two channels oraclelinux7 & oraclelinux7-spacewalk24-client
The base OS channel and the spacewalk client channel are always mandatory. The rest of the channels related to your base OS are optional for you to choose. You need to create these channels in SUSE Manager using the below commands –
kerneltalks:~ # spacewalk-common-channels -v -a x86_64 oraclelinux7
Connecting to http://localhost/rpc/api
SUSE Manager username: suseadmin
SUSE Manager password:
Base channel 'Oracle Linux 7 (x86_64)' - creating...
kerneltalks:~ # spacewalk-common-channels -v -a x86_64 oraclelinux7-spacewalk24-client
Connecting to http://localhost/rpc/api
SUSE Manager username: suseadmin
SUSE Manager password:
Base channel 'Oracle Linux 7 (x86_64)' - exists
* Child channel 'Spacewalk 2.4 Server for Oracle Linux 7 (x86_64)' - creating...
Both channels are now created, and you can view them in the SUSE Manager web console.
Sync Oracle Linux Public repo to SUSE Manager
The next step is to sync these channels manually for the first time. Later, you can schedule them to sync automatically. To sync the Oracle public repo manually, run the below command –
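A sketch of that manual sync, using the channel labels created above (spacewalk-repo-sync takes the channel label with the --channel switch):
kerneltalks:~ # spacewalk-repo-sync --channel oraclelinux7-x86_64
kerneltalks:~ # spacewalk-repo-sync --channel oraclelinux7-x86_64-spacewalk24-client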
It takes time depending on your server's internet bandwidth. If you get any Python errors like AttributeError: 'ZypperRepo' object has no attribute 'repoXML', then make sure your SUSE Manager is up to date (zypper up) and then execute these steps again.
To schedule automatic syncs, navigate to SUSE Manager > Channel List, click on the channel name, then Manage Channel (top right-hand corner), go to the last tab Repositories, and then the Sync tab. Here, you can schedule an automatic sync daily, weekly, etc. as per your choice.
Copy GPG key
Copy the Oracle GPG key RPM-GPG-KEY-oracle-ol7 from the Oracle public yum repo to /srv/www/htdocs/pub/RPM-GPG-KEY-oracle-ol7 on the SUSE Manager server.
We will reference this GPG key in the bootstrap script.
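One way to do that, assuming you fetch the key straight from Oracle's public yum server (the URL shown is the usual location of this key):
kerneltalks:~ # wget https://yum.oracle.com/RPM-GPG-KEY-oracle-ol7 -O /srv/www/htdocs/pub/RPM-GPG-KEY-oracle-ol7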
Create Oracle Linux bootstrap repo in SUSE Manager
Follow the below set of commands to create a bootstrap repo. Since we synced public repo channels (which are not SUSE-backed channels), the mgr-create-bootstrap-repo command won't work to create the Oracle Linux bootstrap repo.
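The rough idea is to build the repository by hand: create the directory the bootstrap script points to, copy the client packages and their dependencies from the synced Spacewalk client channel into it, and generate the repo metadata with createrepo. Treat the below as a sketch only; the directory is an assumption patterned on the CentOS path shown in the bonus tip at the end of this article:
kerneltalks:~ # mkdir -p /srv/www/htdocs/pub/repositories/oracle/7/bootstrap
# copy the client RPMs (such as rhn-client-tools, rhn-check, rhn-setup and
# the libraries they depend on) from the synced oraclelinux7-spacewalk24-client
# channel into the above directory, then build the metadata:
kerneltalks:~ # createrepo /srv/www/htdocs/pub/repositories/oracle/7/bootstrap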
Create activation key
This step is pretty much the same as we normally do for any other channel. You can refer to this article with screenshots for the procedure.
We created the activation key 1-oel7 for this demo. We will refer to this key throughout the rest of this article.
Generate and modify the bootstrap script for Oracle Linux
You need to follow the same step you did earlier for salt clients. Go to SUSE Manager > Admin > Manager Configuration > Bootstrap Script.
The only difference here is that you need to uncheck the 'Bootstrap using Salt' option. Since salt is not supported, we will register Oracle Linux as a traditional system. For that, you need to generate the bootstrap script without the salt part.
The script will be generated at /srv/www/htdocs/pub/bootstrap on SUSE Manager Server. Make a copy of it and edit it.
Modify the copy to edit the below parameters (make sure you enter your activation key and the related GPG key value). Also, don't forget to enable the script by commenting out the exit 1 line at the beginning of the script:
Also, rename all occurrences of spacewalk-check & spacewalk-client-tools to rhn-check & rhn-client-tools, and delete spacewalk-client-setup on the same lines. These 3 packages are referred to by their old names, so we are updating them accordingly. The below 3 sed one-liners perform this task for you! Make sure you edit the file name to match your bootstrap script name.
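Once edited, the two variables in question should end up looking something like this, with the activation key we created above and the GPG key we copied into /pub:
ACTIVATION_KEYS=1-oel7
ORG_GPG_KEY=RPM-GPG-KEY-oracle-ol7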
kerneltalks:~ # sed --in-place 's/spacewalk-check/rhn-check/' /srv/www/htdocs/pub/bootstrap/oel7_bootstrap.sh
kerneltalks:~ # sed --in-place 's/spacewalk-client-tools/rhn-client-tools/' /srv/www/htdocs/pub/bootstrap/oel7_bootstrap.sh
kerneltalks:~ # sed --in-place 's/spacewalk-client-setup//' /srv/www/htdocs/pub/bootstrap/oel7_bootstrap.sh
Register Oracle Linux client to SUSE Manager as traditional client
That’s all. You are all ready to register the client. Login to the client with root account and run bootstrap script.
If your script exits with below error which indicates CA trust updates are disabled on your server –
ERROR: Dynamic CA-Trust > Updates are disabled. Enable Dynamic CA-Trust Updates with '/usr/bin/update-ca-trust force-enable'
Run the command mentioned in the error, i.e. /usr/bin/update-ca-trust force-enable, and re-run the bootstrap script. You will get through the next time.
Also, if you see a certificate expiry error for the certificate /usr/share/rhn/ULN-CA-CERT like below –
The certificate /usr/share/rhn/ULN-CA-CERT is expired. Please ensure you have the correct certificate and your system time is correct.
then get a fresh copy of the certificate from Oracle.com and replace /srv/www/htdocs/pub/ULN-CA-CERT on the SUSE Manager server with it. Re-run the bootstrap script on the client.
Once the bootstrap script completes, you can see your system in SUSE Manager > Systems. Since it's a non-salt, i.e. traditional, system you don't need to approve a salt key in the web console. The system appears in SUSE Manager directly.
Now you can check the repositories on the Oracle Linux client to confirm it's subscribed to SUSE Manager.
root@o-client ~ # yum repolist
Loaded plugins: rhnplugin
This system is receiving updates from Spacewalk server.
repo id repo name status
oraclelinux7-x86_64 Oracle Linux 7 (x86_64) 12,317
oraclelinux7-x86_64-spacewalk24-client Spacewalk 2.4 Client for Oracle Linux 7 (x86_64) 31
repolist: 12,348
That’s it! You have created, synced Oracle Linux Public repo in SUSE Manager and registered Oracle Linux Client in SUSE Manager!
How to configure CentOS repo in SUSE Manager
Bonus tip !!
The whole above process applies to the CentOS repo as well. Everything remains the same except for the below points –
Instead of spacewalk-client you need to sync uyuni-client repo.
You can get the GPG keys from the CentOS page. Choose the CentOS X signing key according to your synced repo.
Create bootstrap repo in the path /srv/www/htdocs/pub/repositories/centos/6/bootstrap/
A quick post to walk you through the step-by-step procedure to set up SUSE Manager on an AWS EC2 server.
We have written many articles about the SUSE Manager server product from SUSE, all about hosting it on an on-premise server. All outputs and screenshots there are from my setup hosted on Oracle VirtualBox.
So one question arises: is it possible to host SUSE Manager on a public cloud server? Yes, it's possible to host the SUSE Manager server on an AWS EC2 instance. Only a few steps differ when you configure SUSE Manager on an EC2 server. I will walk you through them, and it will be a piece of cake to set up.
Configuring SUSE Manager on AWS public cloud server
The whole process can be listed as :
Image selection to spin public cloud server
EC2 instance type selection and extra EBS volumes
Security group port opening
SUSE Manager setup
Image selection
You need to spin up an EC2 instance using the SUSE Manager images available as Community AMIs. Search for SUSE Manager in the AMI listing. You will see AMIs for SUSE Manager 3.1, 3.2, and 4. Always go for the latest one; we discussed SUSE Manager 4 in all our articles. See the screenshot below –
Select AMI and spin up your EC2 server.
Read below article which explains step by step procedure to spin up EC2 instance in AWS public cloud
While creating the EC2 instance, keep in mind the hardware requirements of SUSE Manager. Make sure you add extra EBS volumes to create the /var/lib/pgsql and /var/spacewalk filesystems mentioned in the requirements.
Spin up the instance, log in, and create the filesystems on those EBS volumes. Simple LVM tasks, eh!
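A minimal sketch of those LVM tasks, assuming the two extra EBS volumes appear as /dev/xvdb and /dev/xvdc and using placeholder sizes; check lsblk for the actual device names on your instance and size the volumes per the SUSE Manager requirements:
kerneltalks_aws # pvcreate /dev/xvdb /dev/xvdc
kerneltalks_aws # vgcreate vg_suma /dev/xvdb /dev/xvdc
kerneltalks_aws # lvcreate -n lv_pgsql -L 50G vg_suma
kerneltalks_aws # lvcreate -n lv_spacewalk -L 100G vg_suma
kerneltalks_aws # mkfs.xfs /dev/vg_suma/lv_pgsql
kerneltalks_aws # mkfs.xfs /dev/vg_suma/lv_spacewalk
kerneltalks_aws # mkdir -p /var/lib/pgsql /var/spacewalk
kerneltalks_aws # mount /dev/vg_suma/lv_pgsql /var/lib/pgsql
kerneltalks_aws # mount /dev/vg_suma/lv_spacewalk /var/spacewalk
Add the two mounts to /etc/fstab so they persist across reboots.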
Security group port opening
Open the below ports in your EC2 instance's security group so that clients can reach the SUSE Manager server:
TCP port 4505-4506 for communicating with managed systems via Salt
TCP Port 5269 for pushing actions to or via a SUSE Manager Proxy.
TCP Port 5222 for pushing client actions by the osad daemon running on client systems.
SUSE Manager setup
Make sure you update the system using zypper up before you proceed further.
Finally, the SUSE Manager setup! Register your system to SCC (SUSE Customer Center) using the SUSEConnect command, then proceed with the SUSE Manager setup using the yast susemanager_setup command as usual. The whole process remains the same as for an on-premise SUSE Manager setup.
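For reference, the two commands boil down to the below (the registration code and email are placeholders):
kerneltalks_aws # SUSEConnect -r <registration-code> -e <email-address>
kerneltalks_aws # yast susemanager_setup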
Additional steps for AWS public cloud servers are as below –
The setup will automatically create a default administrator account admin and a default organization named organization for SUSE Manager. You need to set a password for this admin account using the command below –
kerneltalks_aws # satpasswd admin
New password: *****
Confirm password: *****
Now you have an admin account with a password. Log in to the SUSE Manager web console using these credentials and you are done! You have a working SUSE Manager on the AWS public cloud.
The next thing you want to do is add a new administrator account and organization, and then get rid of the default account and organization. These are pretty easy steps through the SUSE Manager web console.
Step by step procedure to add a new client in SUSE Manager.
In this article, we will walk you through the step-by-step procedure to register a client in SUSE Manager. The complete process can be split into the 5 parts below, where the first 4 are prerequisites –
Create custom channels
Create Activation Keys
Create bootstrap scripts
Create bootstrap repo
Register client
If you already have an established SUSE Manager in your infra then the first 4 steps must have been already completed and configured. Let’s go one by one now –
Create Activation Keys
For this step, we will use the dev channel we created in the previous step. So we will create an Activation Key (AK) for the channel year-1-dev-SLE-Product-SLES15-Pool for x86_64.
Navigate to Systems > Activation Keys
Hit Create Key button
In the next screen there are 3 important fields you need to fill in –
Key: it always starts with 1-. The rest you should fill in with some standard format so that it's easier for you to identify later. We used 1-dev-sles15 here.
Base Channel: You need to select the proper custom channels from the drop-down menu. Here custom channels created by Content Lifecycle Management and SUSE product channels will be listed. Choose wisely.
Child channels: Select child channels from your main base custom channel.
Leave the rest at defaults. Every option has help text as well, which will help you understand it, and it's pretty simple. Finally, click the Create Activation Key button at the bottom of the page.
Your key will be created and can be checked at the Activation Keys home menu we visited in the first step.
Create bootstrap scripts
Don’t worry you don’t have to script the code on your own. SUSE Manager got you covered. You just need to edit Activation Key in the ready-made script.
Navigate to Admin > Manager Configuration > Bootstrap Script
Here you can see the location of the bootstrap script on your SUSE Manager, along with a few options (mainly proxy settings) which can be tweaked. Make sure to hit the Update button at the bottom of the page to generate the script at the mentioned location for the first time before you use it.
As you can see the bootstrap script is located in /srv/www/htdocs/pub/bootstrap on SUSE Manager. Log in to the SUSE Manager server using putty and make a copy of the script.
kerneltalks:~ # cp /srv/www/htdocs/pub/bootstrap/bootstrap.sh dev_sles15_bootstrap.sh
kerneltalks:~ # vi dev_sles15_bootstrap.sh
In the copy, edit the below parameter to your activation key.
ACTIVATION_KEYS=1-dev-sles15
That’s it. Your bootstrap script is ready to register client under dev channel.
Create bootstrap repo
Now, you need to create a bootstrap repo as well. This repo will be added to the client temporarily to fetch all SUSE Manager registration-related packages and their dependent packages so that registration can be initiated on the client. All this happens in the background when you run the bootstrap script on the client.
To create the bootstrap repo, run the below command on SUSE Manager. Make sure all SUSE product repos are synced completely before running this command –
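The command, with the product label used in this demo, is sketched below; you can list the available labels first with the -l switch:
kerneltalks:~ # mgr-create-bootstrap-repo -l
kerneltalks:~ # mgr-create-bootstrap-repo -c SLE-15-x86_64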
Make sure you edit the command and choose the OS distribution as per the channel you are selecting. We are working on the dev SLES15 channel here, so I chose the SLE-15-x86_64 product in the command.
You can see it copies all packages and their dependencies to the new repo for new clients. Sample output :
And now we come to the last step, for which we have been sweating through all the above prerequisites!
Register client
It's a very simple, one-command step to be executed on the client machine. The client can also be registered from the SUSE Manager console itself. We will see both methods here.
Before that, one point to note – if your system is a VM built from a template or clone, or is a cloned system in any way, then you should run the below commands on the client system to assign a unique system ID, and then proceed with registration.
The bootstrap script will do all the work for you, and once it finishes execution you should see the client's key pending approval in the SUSE Manager console. Unless you approve it, the client won't be registered to SUSE Manager. The script has a long output, so I am not including it here.
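The usual recipe on a systemd-based clone is to regenerate the machine IDs, roughly as below (a sketch; the dbus machine-id file may not exist on every system):
root@kerneltalks # rm -f /etc/machine-id /var/lib/dbus/machine-id
root@kerneltalks # dbus-uuidgen --ensure
root@kerneltalks # systemd-machine-id-setup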
To approve client key navigate to SUSE Manager > Salt > Keys
Click the tick button and your client is registered! It will be shown as accepted in Salt then. You can view it under SUSE Manager > Systems > Overview
You can see the system is registered in SUSE Manager under the dev channel!
To view more details about the system, click on hostname and you will see client details along with a tabbed menu bar which will help you manage that client from the SUSE Manager console.
Register client to SUSE Manager from the console itself
You can provide SSH login details to the SUSE Manager console, and it will do all of the above steps that you would otherwise do manually by logging into the client using PuTTY.
Navigate to SUSE Manager > Systems > Bootstrapping
Fill in the details and hit the Bootstrap button. It will connect to the system via SSH in the background and execute the bootstrap. On the console, you will be shown the message Your system is bootstrapping: waiting for a response.
Once completed, your system is registered and you can view it in the system overview as explained above. You need not accept the key in this case, since SUSE Manager auto-approves this salt request.
Issue on SUSE clients
You may face an issue on some SUSE clients where, even after the bootstrap completes properly, the salt-minion process won't start and hence you cannot register the server with SUSE Manager.
You might see the below error in such a case:
root@kerneltalks # systemctl status salt-minion
● salt-minion.service - The Salt Minion
Loaded: loaded (/usr/lib/systemd/system/salt-minion.service; enabled; vendor preset: disabled)
Active: activating (auto-restart) (Result: exit-code) since Tue 2020-07-21 18:19:14 IST; 3s ago
Process: 3708 ExecStart=/usr/bin/salt-minion (code=exited, status=1/FAILURE)
Main PID: 3708 (code=exited, status=1/FAILURE)
Jul 21 18:19:14 kerneltalks systemd[1]: salt-minion.service: Unit entered failed state.
Jul 21 18:19:14 kernelatalks systemd[1]: salt-minion.service: Failed with result 'exit-code'.
And you can check /var/log/messages for the below error messages:
2020-07-21T18:32:04.575062+02:00 kerneltalks salt-minion[6530]: /usr/lib/python2.7/site-packages/salt/scripts.py:198: DeprecationWarning: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. Salt will drop support for Python 2.7 in the Sodium release or later.
2020-07-21T18:32:04.778852+02:00 kerneltalks salt-minion[6530]: Process Process-1:
2020-07-21T18:32:04.779245+02:00 kerneltalks salt-minion[6530]: Traceback (most recent call last):
2020-07-21T18:32:04.779495+02:00 kerneltalks salt-minion[6530]: File "/usr/lib64/python2.7/multiprocessing/process.py", line 258, in _bootstrap
2020-07-21T18:32:04.779891+02:00 kerneltalks salt-minion[6530]: self.run()
2020-07-21T18:32:04.780163+02:00 kerneltalks salt-minion[6530]: File "/usr/lib64/python2.7/multiprocessing/process.py", line 114, in run
2020-07-21T18:32:04.780408+02:00 kerneltalks salt-minion[6530]: self._target(*self._args, **self._kwargs)
2020-07-21T18:32:04.780642+02:00 kerneltalks salt-minion[6530]: File "/usr/lib/python2.7/site-packages/salt/scripts.py", line 157, in minion_process
2020-07-21T18:32:04.781024+02:00 kerneltalks salt-minion[6530]: minion.start()
2020-07-21T18:32:04.781263+02:00 kerneltalks salt-minion[6530]: File "/usr/lib/python2.7/site-packages/salt/cli/daemons.py", line 343, in start
2020-07-21T18:32:04.781684+02:00 kerneltalks salt-minion[6530]: super(Minion, self).start()
2020-07-21T18:32:04.781923+02:00 kerneltalks salt-minion[6530]: File "/usr/lib/python2.7/site-packages/salt/utils/parsers.py", line 1064, in start
2020-07-21T18:32:04.782900+02:00 kerneltalks salt-minion[6530]: self.prepare()
2020-07-21T18:32:04.783141+02:00 kerneltalks salt-minion[6530]: File "/usr/lib/python2.7/site-packages/salt/cli/daemons.py", line 311, in prepare
2020-07-21T18:32:04.783385+02:00 kerneltalks salt-minion[6530]: import salt.minion
2020-07-21T18:32:04.783613+02:00 kerneltalks salt-minion[6530]: File "/usr/lib/python2.7/site-packages/salt/minion.py", line 69, in <module>
2020-07-21T18:32:04.784700+02:00 kerneltalks salt-minion[6530]: import salt.client
2020-07-21T18:32:04.784942+02:00 kerneltalks salt-minion[6530]: File "/usr/lib/python2.7/site-packages/salt/client/__init__.py", line 40, in <module>
2020-07-21T18:32:04.785631+02:00 kerneltalks salt-minion[6530]: import salt.utils.minions
2020-07-21T18:32:04.785870+02:00 kerneltalks salt-minion[6530]: File "/usr/lib/python2.7/site-packages/salt/utils/minions.py", line 24, in <module>
2020-07-21T18:32:04.786399+02:00 kerneltalks salt-minion[6530]: import salt.auth.ldap
2020-07-21T18:32:04.786634+02:00 kerneltalks salt-minion[6530]: File "/usr/lib/python2.7/site-packages/salt/auth/ldap.py", line 21, in <module>
2020-07-21T18:32:04.787043+02:00 kerneltalks salt-minion[6530]: from jinja2 import Environment
2020-07-21T18:32:04.787300+02:00 kerneltalks salt-minion[6530]: ImportError: No module named jinja2
2020-07-21T18:32:04.818391+02:00 kerneltalks systemd[1]: salt-minion.service: Main process exited, code=exited, status=1/FAILURE
2020-07-21T18:32:04.818897+02:00 kerneltalks systemd[1]: salt-minion.service: Unit entered failed state.
2020-07-21T18:32:04.819261+02:00 kerneltalks systemd[1]: salt-minion.service: Failed with result 'exit-code'.
In this case, you should be able to run the salt-minion process manually by exporting the Python path. Check the salt-minion binary (its shebang line) to see which Python is being used for this process, in case your system has multiple versions installed.
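A sketch of that manual start; the PYTHONPATH value is an assumption and should point at whichever site-packages directory actually contains the jinja2 module on your system:
root@kerneltalks # head -1 /usr/bin/salt-minion
root@kerneltalks # export PYTHONPATH=/usr/lib/python2.7/site-packages
root@kerneltalks # salt-minion -d
head -1 shows which interpreter salt-minion uses, and -d starts the minion in daemon mode.
Once salt-minion is running, you will be able to register the client to SUSE Manager. After registration, update Python with zypper up python* and then your salt-minion process will run properly via systemctl.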
Issue on RHEL/OEL clients
I observed a peculiar problem where patch update tasks are sitting idle in a pending state for a long time and not being picked up by the client.
It shows in SUSE Manager GUI that –
This action will be executed after 1/10/20 10:28:00 AM IST
This action's status is: Queued.
This action has not yet been picked up.
and it sits there and does nothing.
The solution is to run rhn_check -vvvv on the client machine whose job is stuck in SUSE Manager. The pending action will then be checked, picked up, and executed!
How to create custom channels using Content Lifecycle Management in SUSE Manager
In this article, we will discuss Content Lifecycle Management in SUSE Manager for controlling patching in your infrastructure.
What is Content Lifecycle Management in SUSE Manager
Content Lifecycle Management means managing how patches flow through your infra in a staged manner. In an ideal infra, the latest patches are always applied to development servers first. If everything is good there, those patches are applied to QA servers and lastly to production servers. This enables sysadmins to catch issues, if any, and hence prevents patching a prod system in a way that could cause downtime in live environments.
SUSE Manager gives you this control via the content lifecycle. In it, you create custom channels in SUSE Manager, for example dev, QA, and prod. Then you register your systems to those channels according to their criticality. Now, whenever a channel gets new patches, they become available to the respective systems (registered to that channel) to install. So if you control the channels, you control patch availability to the systems.
In Content Lifecycle Management, SUSE Manager lets you push patches to channels manually. For example, on the first deployment all the latest patches become available to the dev channels and hence the dev systems. At this stage, if you run update commands (zypper up, yum update), they will show the latest patches only on the dev servers. QA and prod servers won't show any new patches.
Once dev is found to be OK after the updates, you can manually promote the patches to QA, so now the QA channels, and hence the QA servers, have the latest patches. Finally, the same for prod. This is how you control, and hence manage, the patch lifecycle using SUSE Manager.
If this sounds confusing, go through the below process and screenshots; it will become clearer.
How to create custom channels in SUSE Manager
Now we will start with Content Lifecycle Management in the SUSE Manager we set up. Log in to SUSE Manager, navigate to Content Lifecycle > Projects and click the Create Project button.
You will be presented with the below page. Fill in all relevant details and hit the Create button. You can create a project for each flavor of Linux you have in your infra, for example projects for SUSE Linux 11, SUSE Linux 12, SUSE Linux 12 SP3, etc., so that you can select the respective source channels in each of these projects and keep your SUSE Manager organized.
In our SUSE Manager, I synced only one product channel, i.e. SUSE Linux 15, so I simply keyed in patch deploy as a name.
Once the project is created, you will be prompted to add source channels to it, meaning packages and updates will be sourced from those channels (from SUSE) and distributed to your project channels.
These source channels are the ones you synced during the initial setup of SUSE Manager. Read how to sync SUSE product channels in SUSE Manager for more details. You need to select channels from these according to the project requirement; e.g. for a SUSE Linux 11 project, select only SUSE Linux 11 source channels, and so on.
Click Attach/Detach sources to do that.
You can see in the below screenshot that only SUSE Linux 15 channels are available for me to select, since I synced only that product channel in the initial setup. You will see all the products you have synced here.
Once you have selected the channels and clicked Save, you will see the sources are updated with your selected channel list. Also notice that the version history details under Project properties are set to Version 1 (draft - Not built).
Now it's time to add your destinations! This means creating environments. As I explained earlier, we would flow patches from dev to QA to prod; this is where you define that hierarchy. In the interest of time, we will go from dev straight to prod only.
So we will create the environment as dev and prod as below by clicking Add Environment button –
Once done, you can see, as below, the dev and prod environments with Build and Promote buttons, and the version marked as not built for all of them.
So you have to start the patch flow now. As of now, all the latest patches are in the source channels. Once you click the Build button below, they will be made available to the dev environment. Basically, it creates child channels for dev where all these patches are made available from the source channels.
Once you click the Build button, you will see the below version keeper window where you can add a version message note, so it will be easy to remember the purpose of this channel sync, the date/time of the sync, etc.
It will take time depending on the number of channels, the number of patches within them, their size, and of course your internet bandwidth! As Don Vosburg from SUSE commented below – "This process is database intensive – so having the Postgres database on SSD helps speed it up a bit!"
Patches will be built in new custom channels and only then you will be able to Promote them to the next stage.
What do you mean by promoting patches?
Once the build is completed, the latest patches are available to the dev environment from the source channels via custom channels. But the next environment, i.e. prod, still doesn't have them. At this stage, you can install/test them on dev servers and keep prod servers isolated from them in case of any issues. If everything works fine after installing/testing, you can promote them to the next environment (here it's prod), and then all the latest patches will be made available to the prod environment via custom channels.
You can then click the Promote button, and in the same way they will be synced to the next environment.
View custom channels in SUSE Manager
Now we have built and promoted the dev and prod environments. I said they would now have custom channels through which the latest patches are made available to the respective environments. So it's time to check these new custom channels created by Content Lifecycle Management.
Navigate to Software > Channel List > All
You can see below the dev and prod channels of project year-1 listed there, where the provider is Personal. Remember, we set our organization name as Personal in our initial SUSE Manager setup.
That's all for this article! We created new custom channels in SUSE Manager via the Content Lifecycle Management feature. Using this feature, we are able to control the availability of the latest patches to different environments.
The next step is to create Activation Keys for these custom channels which can be used to register client systems to these channels in your infra.
A short article explaining product channels in SUSE Manager along with screenshots.
In our previous article, we saw how to configure SUSE Manager 4.0 with screenshots. In this article, we will discuss channel management in SUSE Manager.
To start with, you should have base product channels synced to SUSE Manager from SUSE. For that, go to Admin > Setup Wizard in the SUSE Manager web console. It's a 3-step process you need to complete for your first base channel sync.
In the first step, you need to configure internet access, if applicable.
In the second step, you need to add your organization credentials, which will be used to verify your subscriptions; accordingly, products will be made available to you for sync in SUSE Manager.
You will find your organization credentials at https://scc.suse.com/organization. There you will find the username (same as the organization ID) and password, which you need to fill in to SUSE Manager.
Enter them on the SUSE Manager page above and move to the third step, i.e. SUSE Products. You will have to wait a few minutes when you visit this page for the first time; it downloads the whole product catalog from the SUSE Customer Center depending on your organization's credentials. Once the refresh is done, you will see a list of products available to you like below –
Product channel sync
Now select a product of your choice to sync its channels. It depends on what variety of OS flavors you have in your infra and which ones you have subscribed to. I selected only SUSE 15 for now.
Then click on the Add product button highlighted in the screenshot. The channels will start syncing. It takes time to sync channels, depending on the number of products you selected and the internet bandwidth of the server.
You can track the progress in log files on the SUSE Manager server, located at /var/log/rhn/reposync. You will see a log file for each channel, containing the sync progress for that channel.
kerneltalks:/var/log/rhn/reposync # ls -lrt
total 540
-rw-rw---- 1 wwwrun www 1474 Dec 3 12:02 sle-product-sles15-pool-x86_64.log
-rw-rw---- 1 wwwrun www 1731 Dec 3 12:02 sle-product-sles15-updates-x86_64.log
-rw-rw---- 1 wwwrun www 245815 Dec 3 12:16 sle-module-basesystem15-pool-x86_64.log
-rw-rw---- 1 wwwrun www 293137 Dec 3 13:05 sle-module-basesystem15-updates-x86_64.log
Once the sync is complete it will show as below –
That’s it! You have added a product and associated channels to SUSE Manager.
How to remove product channels from SUSE Manager
If you have added some products by mistake that you don't want, it's not easy to remove them from SUSE Manager. The webpage does not allow you to just de-select them; you have to follow another method to remove them. I explained all the steps to remove products and channels from SUSE Manager here.