Lab setup for Ansible testing

Quick lab setup for learning Ansible using containers on Oracle Virtualbox Linux VM.

Setting up a lab for learning Ansible

In this article, we will set up our lab with Docker containers for testing Ansible. We are using Oracle VirtualBox so that you can spin up a VM from a ready-made OVA file in a minute, which saves the effort of installing the OS from scratch. Secondly, we will spin up a couple of containers to act as Ansible clients. Since we only need Ansible to run a few remote commands/modules, it's better to use containers as clients rather than spinning up complete Linux VMs. This also saves a lot of resources, so you can run this Ansible lab on your desktop/laptop for practicing Ansible.

Without further delay, let's dive into setting up a lab on a desktop/laptop for learning Ansible. Roughly, it's divided into the below sections –

  1. Download Oracle Virtualbox and OVA file
  2. Install Oracle Virtualbox and spin VM from OVA file
  3. Run containers to work as ansible clients
  4. Test connectivity via passwordless SSH access from Ansible worker to clients

Step 1. Download Oracle Virtualbox & OEL7 with Docker readymade OVA file

Go to VirtualBox downloads and download VirtualBox for your OS.

Go to Oracle Downloads and download the Oracle Linux 7 with Docker 1.12 Hands-On Lab Appliance file. This will help us spin up a VM in Oracle VirtualBox without much hassle.

Step 2. Install Oracle Virtualbox and start VM from OVA file

Install Oracle VirtualBox. It's a pretty standard setup procedure, so I am not getting into it. Once you download the above OVA file, open it in Oracle VirtualBox and it will bring up the Import Virtual Appliance menu like below –

Import Virtual Appliance menu

Click Import. Agree to the software license agreement shown and the import of the OVA will start. After the import finishes, you will see a VM named DOC-1002902 (i.e. the same name as the OVA file) created in your Oracle VirtualBox.

Start that VM and log in. The credentials are mentioned in the documentation link on the download page of the OVA file.

Step 3. Running containers

For running containers, you need to set up Docker Engine on the VM first. All steps are listed in the same documentation mentioned above, where you found your first login credentials. Alternatively, you can follow our Docker installation guide.
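
If Docker is already present on the OVA (as the lab documentation describes), a minimal sketch of getting the engine running looks like this; treat it as a reference only, since package names and defaults can differ per distribution:

# Start and enable the Docker service, then confirm the engine answers
systemctl start docker
systemctl enable docker
docker info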

Then create a key pair on your VM, i.e. the Ansible worker/server, so that its public key can be used inside the containers for passwordless SSH. We will be using ansible-usr as the Ansible user in our setup, so you will see this user henceforth. Read how to configure the Ansible default user.

[root@ansible-srv .ssh]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
98:42:9a:82:79:ac:74:7f:f9:31:71:2a:ec:bb:af:ee root@ansible-srv.kerneltalks.com
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|    .            |
|.o +   o         |
|+.=.. o S. .     |
|.+. ... . +      |
|.    . = +       |
|      o o o      |
|      oE=o       |
+-----------------+

Now that we have the key pair ready, let's move on to the containers.

Once Docker Engine is installed and started, create a custom Docker image using the Dockerfile below, which we will use to spin up multiple containers (Ansible clients). The Dockerfile is adapted from a reference sshd Dockerfile and modified a bit to set up passwordless SSH. It also answers the question of how to configure passwordless SSH for containers!

FROM ubuntu:16.04

RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:password' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config

# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd

ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
RUN useradd -m -d /home/ansible-usr ansible-usr
RUN mkdir /home/ansible-usr/.ssh
COPY .ssh/id_rsa.pub /home/ansible-usr/.ssh/authorized_keys
RUN chown -R ansible-usr:ansible-usr /home/ansible-usr/.ssh
RUN chmod 700 /home/ansible-usr/.ssh
RUN chmod 640 /home/ansible-usr/.ssh/authorized_keys
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

Save the above content as Dockerfile in /root and then run the below command while you are in /root. If you are in some other directory, make sure you adjust the relative path in the COPY instruction of the above Dockerfile.

[root@ansible-srv ~]# docker build -t eg_sshd .

This command will create a custom Docker Image named eg_sshd. Now you are ready to spin up containers using this custom docker image.

We will start containers in the below layout –

  1. Webserver
    1. k-web1
    2. k-web2
  2. Middleware
    1. k-app1
    2. k-app2
  3. Database
    1. k-db1

So in total we have 5 containers spread across different groups with different hostnames, which we can use to test different configs/actions in Ansible.

I am listing the command for the first container only. Repeat it for the remaining 4 servers, or use the loop sketched below.

[root@ansible-srv ~]# docker run -d -P --hostname=k-web1 --name k-web1 eg_sshd
e70d825904b8c130582c0c52481b6e9ff33b18e0ba8ab47f12976a568587087b

It is working!
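
As mentioned above, instead of typing the command four more times you can start the remaining containers with a short loop. This is just a sketch assuming the same eg_sshd image and the hostname/name scheme listed earlier:

# Start the rest of the client containers with randomly published ports (-P)
for c in k-web2 k-app1 k-app2 k-db1; do
    docker run -d -P --hostname="$c" --name "$c" eg_sshd
done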

Now, spin up all 5 containers. Verify all containers are running and note down their ports.

[root@ansible-srv ~]# docker container ls -a
CONTAINER ID        IMAGE               COMMAND               CREATED              STATUS              PORTS                   NAMES
2da32a4706fb        eg_sshd             "/usr/sbin/sshd -D"   5 seconds ago        Up 3 seconds        0.0.0.0:32778->22/tcp   k-db1
75e2a4bb812f        eg_sshd             "/usr/sbin/sshd -D"   39 seconds ago       Up 33 seconds       0.0.0.0:32776->22/tcp   k-app2
40970c69348f        eg_sshd             "/usr/sbin/sshd -D"   50 seconds ago       Up 47 seconds       0.0.0.0:32775->22/tcp   k-app1
4b733ce710e4        eg_sshd             "/usr/sbin/sshd -D"   About a minute ago   Up About a minute   0.0.0.0:32774->22/tcp   k-web2
e70d825904b8        eg_sshd             "/usr/sbin/sshd -D"   4 minutes ago        Up 4 minutes        0.0.0.0:32773->22/tcp   k-web1

Step 4. Passwordless SSH connectivity between Ansible server and clients

This is an important step for the smooth & hassle-free functioning of Ansible. You need to create the Ansible user on the Ansible server & clients, then configure passwordless SSH (using keys) for that user.

Now you need to get the IP addresses of your containers. You can inspect the container and extract that information –

[root@ansible-srv ~]# docker inspect k-web1 |grep IPAddress
            "SecondaryIPAddresses": null,
            "IPAddress": "172.17.0.2",
                    "IPAddress": "172.17.0.2",

Now we have an IP address, let’s test the passwordless connectivity –

[root@ansible-srv ~]# ssh ansible-usr@172.17.0.2
Welcome to Ubuntu 16.04.6 LTS (GNU/Linux 4.1.12-37.5.1.el7uek.x86_64 x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

Last login: Wed Jan 15 18:57:38 2020 from 172.17.0.1
$ hostname
k-web1
$ exit
Connection to 172.17.0.2 closed.

It's working! Go ahead and test it for all the rest, so that each client's authenticity is confirmed and its RSA fingerprint is saved to the known hosts list. Now we have all 5 client containers running and passwordless SSH set up between the Ansible server and clients for user ansible-usr.
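
At this point you can also drop the containers into an Ansible inventory grouped the same way as in Step 3 and run a quick ping. A minimal sketch; the IP addresses below are assumptions based on the example output, so substitute the ones you noted from docker inspect:

# Build a simple grouped inventory for the lab
cat > /etc/ansible/hosts <<'EOF'
[webserver]
k-web1 ansible_host=172.17.0.2
k-web2 ansible_host=172.17.0.3

[middleware]
k-app1 ansible_host=172.17.0.4
k-app2 ansible_host=172.17.0.5

[database]
k-db1 ansible_host=172.17.0.6

[all:vars]
ansible_user=ansible-usr
EOF

# Verify Ansible can reach every client over passwordless SSH
ansible all -m ping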

Now you have a full lab setup ready on your desktop/laptop within Oracle VirtualBox for learning Ansible! The lab has a VM running in Oracle VirtualBox which is your main Ansible server/worker, and within it 5 containers running as Ansible clients. This setup fulfills the prerequisite of configuring passwordless SSH for Ansible.

Our list of SUSE Manager articles

A quick post listing all our articles on SUSE Manager

All SUSE Manager articles

Over the past few weeks I published a few articles on SUSE Manager, so I thought of publishing a curated list of all the articles in one place.

  1. SUSE Manager 4 server installation along with screenshots. Step by Step procedure on how to install SUSE Manager 4 along with its Base OS with screenshots of each and every step.
  2. Suse Manager 4 configuration. It includes all steps to set up your SUSE Manager right from scratch till your first login in the SUSE Manager web UI.
  3. Adding product channel in SUSE Manager. Procedure to add SUSE product channels in SUSE Manager so that you can sync packages on your SUSE Manager server.
  4. Content Lifecycle Management in SUSE Manager. CLM overview and how to implement CLM in SUSE Manager.
  5. Client registration. All steps to register Linux client to SUSE Manager so that it can be managed via SUSE Manager.
  6. SUSE Manager in the AWS EC2 server. A quick article explaining how to install SUSE Manager in the AWS EC2 server.
  7. Oracle Public repo in SUSE Manager. Complete process to add Oracle Public repo in SUSE Manager so you can sync packages from public repo to SUSE Manager.
  8. Remove product channels. Procedure to remove product channels from SUSE Manager from the command line.

Issues while working on ELK stack

A quick post on a couple of errors and their solutions while working on ELK stack.

ELK stack issues and solutions

ELK stack, i.e. Elasticsearch, Logstash, and Kibana. We will walk you through a couple of errors you may see while working on the ELK stack and their solutions.

Error: missing authentication token for REST request

First things first: how to run cluster curl commands, which are scattered all over the Elastic documentation portal. The docs provide a copy-as-curl command which, if you run it on your terminal as-is, will end up in the below error –

root@kerneltalks # curl -X GET "localhost:9200/_cat/health?v&pretty"
{
  "error" : {
    "root_cause" : [
      {
        "type" : "security_exception",
        "reason" : "missing authentication token for REST request [/_cat/health?                                                                                        v&pretty]",
        "header" : {
          "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
        }
      }
    ],
    "type" : "security_exception",
    "reason" : "missing authentication token for REST request [/_cat/health?v&pr                                                                                        etty]",
    "header" : {
      "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
    }
  },
  "status" : 401
}
Solution:

You need to use authentication within the curl command and you are good to go. It's good practice to pass only the username with the -u switch so that you won't reveal your password in the command history! Make sure you use the Kibana UI user here.

root@kerneltalks # curl -u kibanaadm -X GET "localhost:9200/_cat/health?v&pretty"
Enter host password for user 'kibanaadm':
epoch      timestamp cluster        status node.total node.data shards  pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1578644464 08:21:04  test-elk green           1         1   522 522    0    0        0             0                  -                100.0%

Issue: How to remove x-pack after a 6.2 upgrade

If you are running ELK stack 6.2 and you are performing an upgrade, then you need to take care of the x-pack module first. Since x-pack is bundled within 6.3 and later distributions, you don't need it as a separate module. But for some reason, during the upgrade the new stack won't be able to remove the existing x-pack module. This leads to having 2 x-pack modules on the system, and Kibana keeps restarting continuously because of that with the below error –

Error: Multiple plugins found with the id \"xpack_main\":\n  - xpack_main at /usr/share/kibana/node_modules/x-pack\n  - xpack_main at /usr/share/kibana/plugins/x-pack
Solution:

So, before the upgrade, you need to remove the x-pack plugin from Elasticsearch and Kibana as well, using the below commands –

root@kerneltalks # /usr/share/elasticsearch/bin/elasticsearch-plugin remove x-pack
-> removing [x-pack]...
-> preserving plugin config files [/etc/elasticsearch/x-pack] in case of upgrade; use --purge if not needed

root@kerneltalks # /usr/share/kibana/bin/kibana-plugin remove x-pack
Removing x-pack...

This will make your upgrade go smoothly. If you have already upgraded (with RPM) and hit the issue, you may try downgrading the packages with rpm -Uvh --oldpackage <package_name> and then remove the x-pack modules.


Issue: How to set index replicas to 0 on a single-node Elasticsearch cluster

On a single-node Elasticsearch cluster running the default configuration, you will run into an unassigned replica shards issue. In the Kibana UI you will see those indices' health as Yellow. Your cluster health will be yellow too, with the message – Elasticsearch cluster status is yellow. Allocate missing replica shards.

Solution:

You need to set the replica count of all indices to zero. You can do this in one go using the below command –

root@kerneltalks # curl -u kibanaadm -X PUT "localhost:9200/_all/_settings?pretty" -H 'Content-Type: application/json' -d'
{
    "index" : {
        "number_of_replicas" : 0
    }
}
'
Enter host password for user 'kibanaadm':
{
  "acknowledged" : true
}

Here _all can be replaced with a specific index name if you want to do it for a specific index. Use the Kibana UI user in the command and you will be asked for the password. Once entered, it alters the setting on all indices and shows output as above.
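
For example, to change only one index, scope the same call to its name. The index name logstash-2020.01.09 below is purely illustrative:

root@kerneltalks # curl -u kibanaadm -X PUT "localhost:9200/logstash-2020.01.09/_settings?pretty" -H 'Content-Type: application/json' -d'
{
    "index" : {
        "number_of_replicas" : 0
    }
}
'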

You can now check in the Kibana UI, and your cluster health along with index health will be Green.

Linux infrastructure handover checklist

A checklist that will help you in taking Linux infrastructure handover or transition from another support party

Pointers for Linux infra handover

Handover or transition is an unavoidable part of a project that comes in every sysadmin's life. It's a process of taking over roles and responsibilities from one operating party to another due to changes in support contracts, business, etc.

The obvious goal here is to understand the current setup and working procedures so that you can continue them once the previous support party hands over. So we will walk you through a list of points or questions that will help you in a Linux infrastructure handover or transition. You can treat this as a questionnaire or checklist for Linux handover.

If you are going to handle servers hosted in a public cloud like AWS or Azure, then the majority of the below pointers simply don't apply 🙂

Hardware

We are considering remote support here, so managing hardware is not really in the scope of the handover. Only generic knowledge about the hardware is enough and no detailed analysis is required. If your transition/handover includes taking over hardware management as well, then you might need more detailed information than listed below.

  1. Hardware details of proprietary systems like HPUX, AIX, Blade, Rackmount servers for inventory purposes.
  2. Datacenter logical diagram with location of all CI. This will be helpful for locating CI quickly for hardware maintenance.
  3. Vendor details along with SLA, datacenter contacts and OEM contacts for hardware support at datacenter, escalation matrix.
  4. Vendor coordination process for HW support at the datacenter
  5. DR site details and connectivity details between primary and DR site
Server Connectivity

This is one of the prime requirements whenever you take over any Linux infra. The first thing is to know how you can reach remote Linux servers or even local servers, along with their console access.

  1. How servers are being accessed from remote locations? Jump server details if any.
  2. VPN access details if any. The process to get new VPN access, etc.
  3. Accounts on Linux servers for logins (LDAP, root, etc if implemented)
  4. How console access is provided for physical servers?
Licensing & contracts

When it comes to supporting infrastructure, you should be well aware of the contracts you have with hardware and software vendors so that you can escalate things when they require an expert's eyes.

  1. Vendor contract information for the OS being used (Red Hat, SUSE, OEL, etc.) including start/end date, SLA details, level of support included, products included, escalation matrix, etc.
  2. Software licenses for all tools along with middleware software being used in infrastructure.
  3. Procedure or contacts of the team to renew the above said contracts or licenses.
Risk mitigation plans for old technology

Every company runs a few CIs on old technology for sure. So one should take the upgrade of these CIs into consideration while taking handover. Old technology dies out over a period of time and becomes more difficult to support day by day. Hence it's always advisable to identify them as a risk before taking handover and get clarity on their mitigation from the owner.

  1. Linux infrastructure future roadmap for servers running old OS (i.e. end of life or end of support)
  2. Discuss migration plans for servers running AIX, HPUX Unix flavours to Linux if they are running out of contracts and support by the vendor in near future.
  3. Ask for a migration plan of servers running non-enterprise Linux flavours like CentOS, Fedora, Oracle Linux, etc.
  4. Same investigation for tools or solutions in Linux infra being used for monitoring, patching, automation, etc.
Linux patching

A quarterly planned activity! Patching is an inseparable part of the Linux lifecycle, so obviously we made a separate section for it. Get whatever details you can gather around this topic from the owner or the previous support party.

  1. What patching solutions are being used, e.g. Spacewalk server, SUSE Manager server, etc.?
  2. If none, what is the procedure to obtain new patches? If it's from the vendor, then check related licenses, portal logins, etc.
  3. What patching cycles are being followed? i.e. frequency of patching, patching calendar if any.
  4. Patching procedure, downtime approval process, the ITSM tool's role in patching activities, coordination process, etc.
  5. Check if any patching automation is implemented.
Monitoring

Infrastructure monitoring is a vast topic. Some organizations have dedicated teams for it, so if that's the case, you will need to gather very little on this topic.

  1. Details of monitoring tools implemented e.g. tool’s servers, portal logins, licensing and infra management details of that tool, etc.
  2. SOP to configure monitoring for new CI, Alert suppression, etc.
  3. Alert policy, threshold, etc. definition process on that tool
  4. Monitoring tool’s integration with other software like ticketing tool/ITSM
  5. If the tool is being managed by the separate team then contact details, escalation matrix, etc for the same.
Backup solutions

Backup is another major topic for organizations, and it's mostly handled by a dedicated team considering its importance. Still, it's better to have ground-level knowledge about the backup solutions implemented in the infrastructure.

  1. Details of backup solutions
  2. SOP for backup related activities like adding, updating, deleting new/old CI, policy definitions, etc.
  3. List of activities under the backup stream
  4. Backup recovery testing schedules, DR replication details if applicable
  5. Major backup recurring schedules like weekends so that you can plan your activities accordingly
Security compliance

Audit requirements demand that you keep your Linux infra security compliant. All Linux servers should be compliant with the security policies defined by the organization, free from vulnerabilities, and always running the latest software. Below are a few pointers to consider here –

  1. Solution or tool for running security scans on Linux servers
  2. SOP for the same, along with operating details.
  3. Password policies to be defined on Linux servers.
  4. Hardening checklist for newly built servers
Network infra details 

The network is the backbone of any IT infrastructure. It's always run by a dedicated team, and hence you are not required to have in-depth knowledge of it; it's not in the scope of your transition. But you should know a few basics to keep your day-to-day sysadmin life going smoothly.

  1. SOP for proxy details, how to get ports opened, IP requirements, etc. 
  2. Network team contact details, process to get network services, escalation matrix, etc.
  3. How internet connectivity is implemented for servers
  4. Understanding the network perimeter and zones like DMZ, public, private in the context of the DC.
Documentation repository

When you kick off your support of new infrastructure, the document repository is a gold mine for you. So make sure you populate it with all kinds of related documents and make it worthwhile.

  1. Location & access details of documentation. It could be a shared drive, file server, on the portal like SharePoint etc.
  2. Includes inventories, SOP documents, Process documents, Audit documents etc.
  3. Versioning and approval process for new/existing documents if any
Reporting

This area falls squarely in the sysadmin's court. Gather all the details you can about it.

  1. List of all reports that currently exist for the Linux infrastructure
  2. What is the report frequency (daily, weekly, monthly)?
  3. Check if reports are automated. If not, ask for the SOP to generate/pull the reports; automating them is then an improvement area for you.
  4. How and why is report analysis done? This will help you understand what is expected from the report outputs.
  5. Any further procedure for reports, like forwarding to management, sign-off from an authority, etc.
  6. Report repository if any. This is covered in the documentation repository section as well.
Applications

This area is not actually in scope for sysadmins, but it helps them work in a process-oriented environment. It also helps trace the criticality and impact on applications running on servers when the underlying CI runs into trouble.

  1. ITSM tool (IT Service Management tool) used for ticketing & asset management & all details related to ITSM tool like access, authorization etc.
  2. Also, ask for a small training session to get familiar with the ITSM tools, as they are customized according to the organization's operating structure.
  3. Architectural overview of applications running on Linux servers.
  4. Critical applications along with their CI mapping to track down application impact in case of issues with server
  5. Communication and escalation matrices for applications.
  6. Software repository being used. Like software setups, installable, OS ISO images, VM templates etc
Operations

In all the above points, we gathered data that can be used in this phase, i.e. actually supporting the Linux infrastructure.

  1. List of day to day activities and expected support model
  2. Logistics for operations like phone lines, ODC requirement, IT hardware needed for support etc.
  3. Process for decommissioning old servers and commissioning new servers
  4. New CI onboarding process
  5. DR drill activities details
  6. Escalation/Management matrices on owner organization side for all above tech solutions

That's all I could think of right now. If you have any more pointers, let me know in the comments; I am happy to add them here.

How to add Oracle Linux public repository in SUSE Manager

A quick post on configuring the Oracle public repo in SUSE Manager

Oracle public repo in SUSE Manager

In this article, we will walk you through the step-by-step procedure to add an Oracle Linux client in SUSE Manager. The complete process consists of the below steps:

  • Add Oracle YUM repositories to SUSE Manager
  • Manually sync Oracle Linux repo to SUSE Manager
  • Copy GPG key from Oracle public repo to SUSE Manager
  • Create Oracle Linux bootstrap repository in SUSE Manager
  • Create activation key
  • Generate and modify the bootstrap script for Oracle Linux
  • Register Oracle Linux client to SUSE Manager

By adding an Oracle Linux client in SUSE Manager, you can manage OEL clients and their patching from your enterprise tool. You can do content lifecycle management as well with the Oracle public channels. Without further delay, let's jump into it.

How to add Oracle Public repositories in SUSE Manager

First things first, install the spacewalk utilities on your SUSE Manager server.

kerneltalks:~ # zypper in spacewalk-utils

Now, run spacewalk command to list all available base channels along with their available architectures.

 kerneltalks:~ # spacewalk-common-channels -l |grep oraclelinux
 oraclelinux6:        i386, x86_64
 oraclelinux6-addons: i386, x86_64
 oraclelinux6-mysql55: i386, x86_64
 oraclelinux6-mysql56: i386, x86_64
 oraclelinux6-mysql57: i386, x86_64
 oraclelinux6-openstack30: x86_64
.....output clipped.....

You need to choose the channels you want to sync per your requirements. For this tutorial, we will register an OEL7 client to SUSE Manager. For that, we will select two channels: oraclelinux7 & oraclelinux7-spacewalk24-client.

The base OS channel and the spacewalk client channel are always mandatory; the rest of the channels related to your base OS are optional for you to choose. You need to sync these channels to SUSE Manager using the below commands –

kerneltalks:~ # spacewalk-common-channels -v -a x86_64 oraclelinux7
Connecting to http://localhost/rpc/api
SUSE Manager username: suseadmin
SUSE Manager password:
Base channel 'Oracle Linux 7 (x86_64)' - creating...

kerneltalks:~ # spacewalk-common-channels -v -a x86_64 oraclelinux7-spacewalk24-client
Connecting to http://localhost/rpc/api
SUSE Manager username: suseadmin
SUSE Manager password:
Base channel 'Oracle Linux 7 (x86_64)' - exists
* Child channel 'Spacewalk 2.4 Server for Oracle Linux 7 (x86_64)' - creating...

Both channels are now created and you can even view them in the SUSE Manager web console.

Sync Oracle Linux Public repo to SUSE Manager

The next step is to sync these channels manually for the first time. Later you can schedule them to sync automatically. To sync the Oracle public repo manually, run the below commands –

kerneltalks:~ # spacewalk-repo-sync --channel=oraclelinux7-x86_64
kerneltalks:~ # spacewalk-repo-sync --channel=oraclelinux7-spacewalk24-client-x86_64

It takes time depending on your server's internet bandwidth. If you are getting any Python errors like AttributeError: 'ZypperRepo' object has no attribute 'repoXML', then make sure your SUSE Manager is up to date (zypper up) and then execute these steps again.

You can navigate to SUSE Manager > Channel List, click on the channel name, then Manage Channel (right-hand top corner), go to the last tab Repositories, and its Sync tab. Here you can schedule automatic syncs daily, weekly, etc. as per your choice.
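
If you would rather schedule the sync from the command line, a root cron entry on the SUSE Manager server works as well. A sketch only; the timing, channel label, and log file path are assumptions you should adapt:

# /etc/cron.d/oracle-repo-sync - sync the Oracle Linux 7 channel every night at 02:00
0 2 * * * root /usr/bin/spacewalk-repo-sync --channel=oraclelinux7-x86_64 >> /var/log/rhn/reposync-cron.log 2>&1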

Copy GPG key

Copy key from RPM-GPG-KEY-oracle-ol7 to  /srv/www/htdocs/pub/RPM-GPG-KEY-oracle-ol7 on the SUSE Manager server.
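
Assuming the key is still published on Oracle's public yum site at this URL, a one-liner fetch on the SUSE Manager server could look like this:

kerneltalks:~ # curl -o /srv/www/htdocs/pub/RPM-GPG-KEY-oracle-ol7 https://yum.oracle.com/RPM-GPG-KEY-oracle-ol7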

We will define this GPG key to use in the bootstrap script.

Create Oracle Linux bootstrap repo in SUSE Manager

Follow the below set of commands to create a bootstrap repo. Since we synced public repo channels (which are not SUSE-vendor channels), the mgr-create-bootstrap-repo command won't work to create the Oracle Linux bootstrap repo.

kerneltalks:~ # mkdir -p /srv/www/htdocs/pub/repositories/res/7/bootstrap
kerneltalks:~ # cd /srv/www/htdocs/pub/repositories/res/7/bootstrap
kerneltalks:~ # wget -r -nH --cut-dirs=5 --no-parent --reject="index.html*" http://yum.oracle.com/repo/OracleLinux/OL7/spacewalk24/client/x86_64
kerneltalks:~ # wget http://yum.oracle.com/repo/OracleLinux/OL7/spacewalk24/client//x86_64/getPackage/jabberpy-0.5-0.27.el7.noarch.rpm
kerneltalks:~ # createrepo .

Create activation key

This step is pretty much the same as we normally do for any other channel. You can refer to this article with screenshots for the procedure.

We created the activation key 1-oel7 here for this demo. We will refer to this key throughout the rest of this article.

Generate and modify the bootstrap script for Oracle Linux

You need to follow the same steps you did earlier for the salt clients. Go to SUSE Manager > Admin > Manager Configuration > Bootstrap Script.

The only difference here is that you need to uncheck the 'Bootstrap using salt' option. Since salt is not supported, we will register Oracle Linux as a traditional system. For that, you need to generate the bootstrap script without the salt part.

bootstrap script for traditional clients in SUSE Manager

The script will be generated at /srv/www/htdocs/pub/bootstrap on the SUSE Manager server. Make a copy of it and edit the copy.

kerneltalks:~ # cp /srv/www/htdocs/pub/bootstrap/bootstrap.sh /srv/www/htdocs/pub/bootstrap/oel7_bootstrap.sh

Modify the script to edit the below parameters (make sure you enter your activation key and related GPG key value). Also, don't forget to enable the script by commenting out exit 1 at the beginning of the script:

#exit 1
ACTIVATION_KEYS=1-oel7
ORG_GPG_KEY=RPM-GPG-KEY-oracle-ol7

Also, rename all occurrences of spacewalk-check & spacewalk-client-tools to rhn-check & rhn-client-tools, and delete spacewalk-client-setup on the same lines. SUSE Manager refers to these 3 packages by their old names, so we are updating the script accordingly. The below 3 sed one-liners perform this task for you! Make sure you edit the last file name to match your bootstrap script name.

kerneltalks:~ # sed --in-place 's/spacewalk-check/rhn-check/' /srv/www/htdocs/pub/bootstrap/oel7_bootstrap.sh
kerneltalks:~ # sed --in-place 's/spacewalk-client-tools/rhn-client-tools/' /srv/www/htdocs/pub/bootstrap/oel7_bootstrap.sh
kerneltalks:~ # sed --in-place 's/spacewalk-client-setup//' /srv/www/htdocs/pub/bootstrap/oel7_bootstrap.sh

Register Oracle Linux client to SUSE Manager as a traditional client

That's all. You are now ready to register the client. Log in to the client with the root account and run the bootstrap script.

root@o-client ~ # curl -Sks https://<suse manager server>/pub/bootstrap/oel7_bootstrap.sh | /bin/bash

If your script exits with the below error, it indicates CA trust updates are disabled on your server –

ERROR: Dynamic CA-Trust > Updates are disabled. Enable Dynamic CA-Trust Updates with '/usr/bin/update-ca-trust force-enable'

Run the command mentioned in the error, i.e. /usr/bin/update-ca-trust force-enable, and re-run the bootstrap script. It will go through this time.

Also, if you see a certificate expiry error for the certificate /usr/share/rhn/ULN-CA-CERT like below –

The certificate /usr/share/rhn/ULN-CA-CERT is expired. Please ensure you have the correct certificate and your system time is correct.

then get a fresh copy of the certificate from Oracle.com and place it at /srv/www/htdocs/pub/ULN-CA-CERT on the SUSE Manager server. Re-run the bootstrap script on the client.

Once the bootstrap script completes, you can see your system in SUSE Manager > Systems. Since it's a non-salt i.e. traditional system, you don't need to approve a salt key in the web console. The system will directly appear in SUSE Manager.

Oracle Linux client in SUSE Manager

Now you can check the repositories on the Oracle Linux client to confirm it's subscribed to SUSE Manager.

root@o-client ~ # yum repolist
Loaded plugins: rhnplugin
This system is receiving updates from Spacewalk server.
repo id                                                                     repo name                                                                             status
oraclelinux7-x86_64                                                         Oracle Linux 7 (x86_64)                                                               12,317
oraclelinux7-x86_64-spacewalk24-client                                      Spacewalk 2.4 Client for Oracle Linux 7 (x86_64)                                          31
repolist: 12,348

That's it! You have created and synced the Oracle Linux public repo in SUSE Manager and registered an Oracle Linux client to SUSE Manager!


How to configure CentOS repo in SUSE Manager

Bonus tip !!

All of the above process applies to a CentOS repo as well. Everything remains the same except the below points (a short command sketch follows the list) –

  • Instead of the spacewalk-client repo, you need to sync the uyuni-client repo.
  • You can get the GPG keys from the CentOS page. Choose the CentOS X signing key according to your synced repo.
  • Create the bootstrap repo in the path /srv/www/htdocs/pub/repositories/centos/6/bootstrap/
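
As mentioned above, a minimal sketch of the channel part for CentOS 7 follows. The channel labels here are assumptions; list them first and substitute whatever your spacewalk-common-channels output actually shows:

kerneltalks:~ # spacewalk-common-channels -l | grep -i centos
kerneltalks:~ # spacewalk-common-channels -v -a x86_64 centos7
kerneltalks:~ # spacewalk-common-channels -v -a x86_64 centos7-uyuni-client
kerneltalks:~ # spacewalk-repo-sync --channel=centos7-x86_64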

How to set up SUSE Manager in an AWS server

A quick post to walk you through the step-by-step procedure to set up SUSE Manager on an AWS EC2 server.

Setting up SUSE Manager on AWS EC2 instance

We have written many articles about the SUSE Manager server product from SUSE, all about hosting it on an on-premise server. All outputs and screenshots were from my setup hosted on Oracle VirtualBox.

So one question arises: is it possible to host SUSE Manager on a public cloud server? Yes, it's possible to host the SUSE Manager server on an AWS EC2 instance. Only a few steps differ when you configure SUSE Manager on an EC2 server. I will walk you through them and it will be a piece of cake to set up.

Configuring SUSE Manager on AWS public cloud server

The whole process can be listed as :

  1. Image selection to spin public cloud server
  2. EC2 instance type selection and extra EBS volumes
  3. Security group port opening
  4. SUSE Manager setup

Image selection

You need to spin up the EC2 instance using the SUSE Manager images available in the Community AMIs. Search for SUSE Manager in the AMI listing. You will see AMIs for SUSE Manager 3.1, 3.2, and 4. Always go for the latest one. We discussed SUSE Manager 4 in all our articles. See the screenshot below –

SUSE Manager AMI on AWS Community AMI listing

Select AMI and spin up your EC2 server.

Read the below article, which explains the step-by-step procedure to spin up an EC2 instance in the AWS public cloud.

How to deploy EC2 server in AWS?

EC2 instance type and EBS volumes

While creating the EC2 instance, keep in mind the hardware requirements of SUSE Manager. Make sure you add extra EBS volumes to create the filesystems /var/lib/pgsql and /var/spacewalk mentioned in the requirements.

Spin up the instance, log in, and create filesystems on those EBS volumes. Simple LVM tasks, eh! A sketch follows below.
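
For reference, a minimal sketch of carving one of those EBS volumes into the /var/lib/pgsql filesystem. The device name /dev/xvdb, the VG/LV names, and the size are assumptions; adjust them and repeat the same idea for /var/spacewalk:

kerneltalks_aws # pvcreate /dev/xvdb
kerneltalks_aws # vgcreate vg_suma /dev/xvdb
kerneltalks_aws # lvcreate -n lv_pgsql -L 50G vg_suma
kerneltalks_aws # mkfs.xfs /dev/vg_suma/lv_pgsql
kerneltalks_aws # mkdir -p /var/lib/pgsql
kerneltalks_aws # mount /dev/vg_suma/lv_pgsql /var/lib/pgsql
kerneltalks_aws # echo '/dev/vg_suma/lv_pgsql /var/lib/pgsql xfs defaults 0 0' >> /etc/fstab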

Security port opening

Open the below ports in your EC2 instance's security group inbound rules (an AWS CLI sketch follows the list). Read how to open a port on an EC2 instance security group here.

  • SSH Port 22 for SSH logins to the server.
  • TCP Port 4505-4506 for communicating with managed systems via Salt
  • TCP Port 5269 for pushing actions to or via a SUSE Manager Proxy.
  • TCP Port 5222 for pushing client actions by the osad daemon running on client systems.
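
As mentioned above, if you prefer the AWS CLI over the console, the same inbound rules can be added like this. The security group ID and source CIDR are placeholders for your own values:

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 10.0.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 4505-4506 --cidr 10.0.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 5269 --cidr 10.0.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 5222 --cidr 10.0.0.0/16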

SUSE Manager setup

Make sure you update the system using zypper up before you proceed further.

Finally, the SUSE Manager setup! Register your system to SCC (SUSE Customer Center) using the SUSEConnect command. Proceed with the SUSE Manager setup using the yast susemanager_setup command as usual. The whole process remains the same for the SUSE Manager setup.

Additional steps for AWS public cloud servers are as below –

The setup will automatically create a default administrator account admin and a default organization for SUSE Manager. You need to set a password for this admin account using the command below –

kerneltalks_aws # satpasswd admin
New password: *****
Confirm password: *****

Now you have an admin account with a password. Log in to the SUSE Manager web console using these credentials and you are done! You have a working SUSE Manager on the AWS public cloud.

The next thing you want to do is add a new administrator account and organization, and then get rid of the default account and org. These are pretty easy steps through the SUSE Manager web console.

SUSE Manager Client registration

Step by step procedure to add a new client in SUSE Manager.

In this article, we will walk you through the step-by-step procedure to register a client in SUSE Manager. The complete process can be split into 5 parts as below, where the first 4 are prerequisites –

  • Create custom channels
  • Create Activation Keys
  • Create bootstrap scripts
  • Create bootstrap repo
  • Register client

If you already have an established SUSE Manager in your infra, then the first 4 steps must have been completed and configured already. Let's go one by one now –

Create custom channels

We already covered it in another article here.

Create Activation Keys

For this step, we will use the dev channel we created in the previous step. So we will create an Activation Key (AK) for the channel year-1-dev-SLE-Product-SLES15-Pool for x86_64.

Navigate to Systems > Activation Keys

Hit Create Key button

Create Activation Key

On the next screen there are 3 important fields you need to fill in –

  1. Key: it always starts with 1-. Fill in the rest in some standard format so that it's easier for you to identify later. We used 1-dev-sles15 here.
  2. Base Channel: You need to select the proper custom channels from the drop-down menu. Here custom channels created by Content Lifecycle Management and SUSE product channels will be listed. Choose wisely.
  3. Child channels: Select child channels from your main base custom channel.
Activation key creation options

Leave the rest at the defaults. Every option has help text as well, which will help you understand it, and it's pretty simple. Finally, click the Create Activation Key button at the bottom of the page.

Your key will be created and can be checked at the Activation Keys home menu we visited in the first step.

Create bootstrap scripts

Don't worry, you don't have to script it on your own. SUSE Manager has got you covered. You just need to set your Activation Key in the ready-made script.

Navigate to Admin > Manager Configuration > Bootstrap Script

Here you can see the location of the bootstrap script on your SUSE Manager, along with a few options (mainly a proxy) which can be tweaked. Make sure to hit the Update button at the bottom of the page to generate the script at the mentioned location for the first time before you use it.

Bootstrap script location on SUSE Manager

As you can see, the bootstrap script is located in /srv/www/htdocs/pub/bootstrap on SUSE Manager. Log in to the SUSE Manager server using PuTTY and make a copy of the script.

kerneltalks:~ # cp /srv/www/htdocs/pub/bootstrap/bootstrap.sh /srv/www/htdocs/pub/bootstrap/dev_sles15_bootstrap.sh
kerneltalks:~ # vi /srv/www/htdocs/pub/bootstrap/dev_sles15_bootstrap.sh

And in the copy, edit the below parameter to your activation key.

ACTIVATION_KEYS=1-dev-sles15

That's it. Your bootstrap script is ready to register clients under the dev channel.

Create bootstrap repo

Now, you need to create a bootstrap repo as well. This repo will be added to the client temporarily to fetch all SUSE Manager registration-related packages and their dependent packages so that registration can be initiated on the client. All this happens in the background when you run the bootstrap script on the client.

To create the bootstrap repo, run the below command on SUSE Manager. Make sure all SUSE product repos are synced completely before running this command –

kerneltalks:~ # mgr-create-bootstrap-repo  -c SLE-15-x86_64 --with-custom-channel

Make sure you edit the command and choose the OS distribution as per the channel you are selecting. We are working on the dev SLES15 channel here, so I chose the SLE-15-x86_64 product in the command.

You can see it copies all packages and their dependencies to the new repo for new clients. Sample output :

#  mgr-create-bootstrap-repo  -c SLE-15-x86_64 --with-custom-channel
Creating bootstrap repo for SLE-15-x86_64

copy 'libgudev-1_0-0-232-1.33.x86_64'
copy 'libnewt0_52-0.52.20-5.35.x86_64'
copy 'libslang2-2.3.1a-3.13.x86_64'
copy 'newt-0.52.20-5.35.x86_64'
copy 'python3-asn1crypto-0.24.0-1.20.noarch'
copy 'python3-cffi-1.11.2-4.3.1.x86_64'
copy 'python3-cryptography-2.1.4-4.6.1.x86_64'
copy 'python-dmidecode-3.12.2-1.24.x86_64'
copy 'python3-dmidecode-3.12.2-1.24.x86_64'
copy 'python3-idna-2.6-1.20.noarch'
copy 'python3-libxml2-python-2.9.7-3.12.1.x86_64'
copy 'python3-netifaces-0.10.6-1.31.x86_64'
copy 'python3-newt-0.52.20-5.35.x86_64'
copy 'python3-pyasn1-0.4.2-1.20.noarch'
copy 'python3-pycparser-2.17-1.24.noarch'
copy 'python3-pyOpenSSL-17.5.0-3.6.1.noarch'
copy 'python3-pyudev-0.21.0-3.22.noarch'
copy 'python3-rpm-4.14.1-10.16.1.x86_64'
copy 'python3-packaging-16.8-1.23.noarch'
copy 'python3-setuptools-38.4.1-1.18.noarch'
copy 'python3-appdirs-1.4.3-1.21.noarch'
copy 'python3-pyparsing-2.2.0-1.28.noarch'
copy 'hwdata-0.320-3.8.1.noarch'
copy 'python3-hwdata-2.3.5-1.21.noarch'
copy 'python3-rhnlib-4.0.11-3.10.1.noarch'
copy 'spacewalk-check-4.0.10-3.11.1.noarch'
copy 'spacewalk-client-setup-4.0.10-3.11.1.noarch'
copy 'spacewalk-client-tools-4.0.10-3.11.1.noarch'
copy 'python3-spacewalk-check-4.0.10-3.11.1.noarch'
copy 'python3-spacewalk-client-setup-4.0.10-3.11.1.noarch'
copy 'python3-spacewalk-client-tools-4.0.10-3.11.1.noarch'
copy 'python3-spacewalk-usix-4.0.9-3.3.16.noarch'
copy 'mgr-daemon-4.0.8-1.11.1.noarch'
copy 'suseRegisterInfo-4.0.4-3.3.16.noarch'
copy 'python3-suseRegisterInfo-4.0.4-3.3.16.noarch'
copy 'zypp-plugin-spacewalk-1.0.5-3.6.9.noarch'
copy 'python3-zypp-plugin-0.6.3-2.18.noarch'
copy 'python3-zypp-plugin-spacewalk-1.0.5-3.6.9.noarch'
copy 'libpgm-5_2-0-5.2.122-3.15.x86_64'
copy 'libsodium23-1.0.16-2.20.x86_64'
copy 'libzmq5-4.2.3-3.8.1.x86_64'
copy 'python3-Babel-2.5.1-1.26.noarch'
copy 'python3-certifi-2018.1.18-1.18.noarch'
copy 'python3-chardet-3.0.4-3.23.noarch'
copy 'python3-Jinja2-2.10.1-3.5.1.noarch'
copy 'python3-MarkupSafe-1.0-1.29.x86_64'
copy 'python3-msgpack-0.5.4-2.9.x86_64'
copy 'python3-psutil-5.4.3-1.19.x86_64'
copy 'python3-py-1.5.2-1.24.noarch'
copy 'python3-pycrypto-2.6.1-1.28.x86_64'
copy 'python3-pytz-2017.3-1.20.noarch'
copy 'python3-PyYAML-3.12-1.32.x86_64'
copy 'python3-pyzmq-17.0.0-1.25.x86_64'
copy 'python3-requests-2.18.4-1.35.noarch'
copy 'python3-simplejson-3.13.2-1.21.x86_64'
copy 'python3-six-1.11.0-2.21.noarch'
copy 'python3-tornado-4.5.3-1.26.x86_64'
copy 'python3-urllib3-1.22-6.7.1.noarch'
copy 'timezone-2019c-3.23.1.x86_64'
copy 'salt-2019.2.0-5.52.1.x86_64'
copy 'python3-salt-2019.2.0-5.52.1.x86_64'
copy 'salt-minion-2019.2.0-5.52.1.x86_64'
copy 'libunwind-1.2.1-2.13.x86_64'
Directory walk started
Directory walk done - 75 packages
Temporary output repo path: /srv/www/htdocs/pub/repositories/sle/15/0/bootstrap/.repodata/
Preparing sqlite DBs
Pool started (with 5 workers)
Pool finished

Register client to SUSE Manager

And we come to the last step, for which we have been sweating on all the above prerequisites!

It's a very simple one-command step to be executed on the client machine. The client can also be registered from the SUSE Manager console itself. We will see both methods here.

Before that, one point to note – if your system is a VM built from a template or is a clone in any way, then you should run the below commands on the client system to assign a unique system ID and then proceed with registration.

# rm /etc/machine-id; rm /var/lib/dbus/machine-id; rm /etc/salt/minion_id
# dbus-uuidgen --ensure; systemd-machine-id-setup
# service salt-minion stop
# rm -rf /etc/salt
# rm -rf /var/cache/salt

These commands will also wipe out any previous salt registration details left over from the clone procedure.

Register client to SUSE Manager from client putty login

Log in with the root account to the client machine which you want to register with SUSE Manager, and run the command:

curl -Sks https://<suse-manager>/pub/bootstrap/<bootstrap-script>.sh | /bin/bash

Where –

  • <suse-manager> is SUSE Manager IP or hostname
  • <bootstrap-script> is bootstrap script name you prepared in the earlier step

As per our setup, below is the command –

k-client # curl -Sks https://kerneltalks/pub/bootstrap/dev_sles15_bootstrap.sh | /bin/bash

It will do all the work for you, and once the script finishes execution you should see the client's key pending for approval in the SUSE Manager console. Unless you approve it, the client won't be registered to SUSE Manager. The script has a long output so I am not including it here.

To approve the client key, navigate to SUSE Manager > Salt > Keys

Accept salt client in SUSE Manager

Click the tick button and your client is registered! It will then be shown as accepted in Salt. You can view it under SUSE Manager > Systems > Overview.

System Overview in SUSE Manager

You can see the system is registered in SUSE Manager under the dev channel!

To view more details about the system, click on hostname and you will see client details along with a tabbed menu bar which will help you manage that client from the SUSE Manager console.

Client details in SUSE Manager
Register client to SUSE Manager from the console itself

You can provide SSH login details to the SUSE Manager console and it will do all the above steps which you would otherwise need to do manually by logging in to the client using PuTTY.

Navigate to SUSE Manager > Systems > Bootstrapping

Bootstrapping client from SUSE Manager

Fill in the details and hit the Bootstrap button. It will start connecting to the system via SSH in the backend and execute everything. On the console you will be shown the message Your system is bootstrapping: waiting for a response.

Once completed, your system is registered and you can view it in the system overview as explained above. You need not accept the key in this case, since SUSE Manager auto-approves this salt request.

Issue on SUSE clients

You may face an issue on some SUSE clients where, even after the bootstrap completes properly, the salt-minion process won't start and hence you cannot register the server with SUSE Manager.

You might see the below error in such a case:

root@kerneltalks # systemctl status salt-minion
● salt-minion.service - The Salt Minion
   Loaded: loaded (/usr/lib/systemd/system/salt-minion.service; enabled; vendor preset: disabled)
   Active: activating (auto-restart) (Result: exit-code) since Tue 2020-07-21 18:19:14 IST; 3s ago
  Process: 3708 ExecStart=/usr/bin/salt-minion (code=exited, status=1/FAILURE)
 Main PID: 3708 (code=exited, status=1/FAILURE)

Jul 21 18:19:14 kerneltalks systemd[1]: salt-minion.service: Unit entered failed state.
Jul 21 18:19:14 kernelatalks systemd[1]: salt-minion.service: Failed with result 'exit-code'.

And you can check /var/log/messages for the below error messages:

2020-07-21T18:32:04.575062+02:00 kerneltalks salt-minion[6530]: /usr/lib/python2.7/site-packages/salt/scripts.py:198: DeprecationWarning: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date.  Salt will drop support for Python 2.7 in the Sodium release or later.
2020-07-21T18:32:04.778852+02:00 kerneltalks salt-minion[6530]: Process Process-1:
2020-07-21T18:32:04.779245+02:00 kerneltalks salt-minion[6530]: Traceback (most recent call last):
2020-07-21T18:32:04.779495+02:00 kerneltalks salt-minion[6530]:   File "/usr/lib64/python2.7/multiprocessing/process.py", line 258, in _bootstrap
2020-07-21T18:32:04.779891+02:00 kerneltalks salt-minion[6530]:     self.run()
2020-07-21T18:32:04.780163+02:00 kerneltalks salt-minion[6530]:   File "/usr/lib64/python2.7/multiprocessing/process.py", line 114, in run
2020-07-21T18:32:04.780408+02:00 kerneltalks salt-minion[6530]:     self._target(*self._args, **self._kwargs)
2020-07-21T18:32:04.780642+02:00 kerneltalks salt-minion[6530]:   File "/usr/lib/python2.7/site-packages/salt/scripts.py", line 157, in minion_process
2020-07-21T18:32:04.781024+02:00 kerneltalks salt-minion[6530]:     minion.start()
2020-07-21T18:32:04.781263+02:00 kerneltalks salt-minion[6530]:   File "/usr/lib/python2.7/site-packages/salt/cli/daemons.py", line 343, in start
2020-07-21T18:32:04.781684+02:00 kerneltalks salt-minion[6530]:     super(Minion, self).start()
2020-07-21T18:32:04.781923+02:00 kerneltalks salt-minion[6530]:   File "/usr/lib/python2.7/site-packages/salt/utils/parsers.py", line 1064, in start
2020-07-21T18:32:04.782900+02:00 kerneltalks salt-minion[6530]:     self.prepare()
2020-07-21T18:32:04.783141+02:00 kerneltalks salt-minion[6530]:   File "/usr/lib/python2.7/site-packages/salt/cli/daemons.py", line 311, in prepare
2020-07-21T18:32:04.783385+02:00 kerneltalks salt-minion[6530]:     import salt.minion
2020-07-21T18:32:04.783613+02:00 kerneltalks salt-minion[6530]:   File "/usr/lib/python2.7/site-packages/salt/minion.py", line 69, in <module>
2020-07-21T18:32:04.784700+02:00 kerneltalks salt-minion[6530]:     import salt.client
2020-07-21T18:32:04.784942+02:00 kerneltalks salt-minion[6530]:   File "/usr/lib/python2.7/site-packages/salt/client/__init__.py", line 40, in <module>
2020-07-21T18:32:04.785631+02:00 kerneltalks salt-minion[6530]:     import salt.utils.minions
2020-07-21T18:32:04.785870+02:00 kerneltalks salt-minion[6530]:   File "/usr/lib/python2.7/site-packages/salt/utils/minions.py", line 24, in <module>
2020-07-21T18:32:04.786399+02:00 kerneltalks salt-minion[6530]:     import salt.auth.ldap
2020-07-21T18:32:04.786634+02:00 kerneltalks salt-minion[6530]:   File "/usr/lib/python2.7/site-packages/salt/auth/ldap.py", line 21, in <module>
2020-07-21T18:32:04.787043+02:00 kerneltalks salt-minion[6530]:     from jinja2 import Environment
2020-07-21T18:32:04.787300+02:00 kerneltalks salt-minion[6530]: ImportError: No module named jinja2
2020-07-21T18:32:04.818391+02:00 kerneltalks systemd[1]: salt-minion.service: Main process exited, code=exited, status=1/FAILURE
2020-07-21T18:32:04.818897+02:00 kerneltalks systemd[1]: salt-minion.service: Unit entered failed state.
2020-07-21T18:32:04.819261+02:00 kerneltalks systemd[1]: salt-minion.service: Failed with result 'exit-code'.

In this case, you should be able to run the salt-minion process manually by exporting the Python path. Check the salt-minion binary to confirm which Python is used for this process, in case your system has multiple versions installed.

root@kerneltalks # head -1 /usr/bin/salt-minion
root@kerneltalks # export PATH=$PATH:/usr/lib64/python2.6/site-packages/
root@kerneltalks # export PYTHONPATH=/usr/lib64/python2.6/site-packages/
root@kerneltalks # salt-minion start &

Once salt-minion is running, you will be able to register the client to SUSE Manager. After registration, update Python with zypper up python* and then your salt-minion process will run properly via systemctl.

Issue on RHEL/OEL clients

I observed a peculiar problem where patch update tasks sit idle in a pending state for a long time and are not picked up by the client.

It shows in SUSE Manager GUI that –

This action will be executed after 1/10/20 10:28:00 AM IST
This action's status is: Queued.
This action has not yet been picked up.

and it sits there and does nothing.

The solution is to run rhn_check -vvvv on the client machine for which the job is stuck in SUSE Manager. The job will then be checked, picked up, and executed!

Content Lifecycle Management in SUSE Manager

How to create custom channels using Content Lifecycle Management in SUSE Manager

CLM in SUSE Manager

In this article, we will discuss Content Lifecycle Management in SUSE Manager for controlling patching in your infrastructure.

What is Content Lifecycle Management in SUSE Manager

Content Lifecycle Management is about managing how patches flow through your infra in a staged manner. In an ideal infra, the latest patches are always applied on development servers first. If everything is good there, then those patches are applied to QA servers and lastly to production servers. This lets sysadmins catch issues, if any, before the patches reach prod systems, where they could cause downtime of live environments.

SUSE Manager gives you this control via the content lifecycle. In it, you create custom channels in SUSE Manager, for example dev, QA, and prod. Then you register your systems to those channels according to their criticality. Now whenever a channel gets new patches, they become available to the respective systems (registered to that channel) to install. So if you control the channels, you control the patch availability to systems.

In Content Lifecycle Management, SUSE Manager lets you push patches to channels manually. For example, on the first deploy all the latest patches become available to the dev channels and hence the dev systems. At this stage, if you run update commands (zypper up, yum update), they will show the latest patches only on the dev servers. QA and prod servers won't show any new patches.

Once dev is found to be OK after the updates, you can manually promote the patches to QA, so the QA channels, and hence the QA servers, will then have the latest patches. Finally, the same goes for prod. This is how you control and manage the patch lifecycle using SUSE Manager.

If this sounds confusing, go through the below process and screenshots; it will become clearer.

How to create custom channels in SUSE Manager

Now we will start with Content Lifecycle Management in the SUSE Manager we set up. Log in to SUSE Manager, navigate to Content Lifecycle > Projects, and click the Create Project button.

Creating a project in Content Lifecycle Management of SUSE Manager

You will be presented with the below page. Fill in all relevant details and hit the Create button. You can create a project for each flavor of Linux you have in your infra. For example, you can create projects for SUSE Linux 11, SUSE Linux 12, SUSE Linux 12 SP3, etc., so that you can select the respective source channels in each of these projects and keep your SUSE Manager organized.

In our SUSE Manager, I synced only one product channel, i.e. SUSE Linux 15, so I simply keyed in patch deploy as the name.

New Project in SUSE Manager CLM

Once the project is created, you will be prompted to add source channels to it. This means packages and updates will be sourced from those channels (from SUSE) and distributed to your project channels.

These source channels are the ones you synced during the initial setup of SUSE Manager. Read how to sync SUSE product channels in SUSE Manager for more details. You need to select channels from these according to the project requirement, e.g. for a SUSE Linux 11 project select only SUSE Linux 11 source channels, and so on.

Click Attach/Detach sources to do that.

How to attach source channels in the SUSE Manager project

Now you can see in the below screenshot that only SUSE Linux 15 channels are available for me to select, since I synced only the SUSE Linux 15 product channel in the initial setup. You will see all the products which you have synced.

Select product channels

Once you have selected them and clicked save, you will see the sources updated with your selected channel list. Also, notice that the version history details under Project properties are set to Version 1 (draft - Not built).

Project version history

Now it's time to add your destinations! This means creating environments. As I explained earlier, here we will flow patches from dev to QA to prod, so this is where you define that hierarchy. In the interest of time, we will go from dev to prod only.

So we will create the dev and prod environments as below by clicking the Add Environment button –

Create an environment

Once done, you can see the dev and prod environments as below, with the Build and Promote buttons, while the version is marked as not built for all of them.

So you have to start the patch flow now. As of now, all the latest patches are in the source channels. Once you click the Build button below, they will be made available to the dev environment. Basically, it will create child channels for dev where all these patches will be made available from the source channels.

Build project in SUSE Manager

Once you click the Build button, you will see the below version keeper window, where you can add a version message so that it's easy to remember the purpose of this channel sync or its date/time, etc.

Start building the first environment

It will take time depending on the number of channels, the number and size of patches within them, and of course your internet bandwidth! As Don Vosburg from SUSE commented below – "This process is database intensive – so having the Postgres database on SSD helps speed it up a bit!"

The first environment built!

Patches will be built into new custom channels, and only then will you be able to Promote them to the next stage.

What do you mean by promoting patches?

So once the build is completed, the latest patches are now available to the dev environment from the source channels via custom channels. But the next environment, i.e. prod, still doesn't have them. At this stage, you can install/test them on dev servers and isolate prod servers from them in case of any issues. If everything works fine after installing/testing, then you can promote them to the next environment (here it's prod) and all the latest patches will then be made available to the prod environment via custom channels.

You can then click the Promote button and they will be synced to the next environment the same way.

View custom channels in SUSE Manager

Now we have built and promoted the dev and prod environments. I said they will now have custom channels through which the latest patches are made available to the respective environments. So it's time to check these new custom channels created by Content Lifecycle Management.

Navigate to Software > Channel List > All

You can see below the dev and prod channels of project year-1 listed there, where the provider is Personal. Remember, we added our organization name as Personal in our initial SUSE Manager setup.

That's all for this article! We created new custom channels in SUSE Manager via the Content Lifecycle Management feature. Using this feature, we are able to control the availability of the latest patches to different environments.

The next step is to create Activation Keys for these custom channels, which can be used to register client systems in your infra to these channels.

How to add product channels in SUSE Manager

A short article explaining product channels in SUSE Manager along with screenshots.

Product sync in SUSE Manager

In our previous article, we saw how to configure SUSE Manager 4.0 with screenshots. In this article, we will discuss channel management in SUSE Manager.

To start with, you should have base product channels synced to SUSE Manager from SUSE. For that, go to Admin > Setup Wizard in the SUSE Manager web console. It's a 3-step process which you need to complete for your first base channel sync.

How to install SUSE Manager 4.0?

Read here

In the first step, you need to configure proxy settings for internet access, if applicable.

Proxy configuration in SUSE Manager

In the second step, you need to add your organization credentials, which will be used to verify your subscriptions; accordingly, products will be made available to you for syncing in SUSE Manager.

Organizational credentials in SUSE manager

You will find your organization credentials at https://scc.suse.com/organization. There you will find the username (same as the organization id) and password which you need to fill into SUSE Manager.

Enter them on the SUSE Manager page above and move to the third step, i.e. SUSE Products. You will have to wait a few minutes when you visit this page for the first time. It will download the whole product catalog from the SUSE Customer Center depending on your organization's credentials. Once the refresh is done, you will see a list of products available to you like below –

SUSE product catalog

Product channel sync

Now select the product of your choice to sync its channels. It depends on what variety of OS flavors you have in your infra and which of them you have subscribed to. I selected only SUSE 15 for now.

SUSE Manager product channel sync

Then click on the Add product button highlighted in the screenshot. The channels will start syncing. It takes time depending on the number of products you selected to sync and the internet bandwidth of the server.

You can track progress in the log files on the SUSE Manager server located at /var/log/rhn/reposync. You will see a log file for each channel, and it contains the sync progress status for that channel.

kerneltalks:/var/log/rhn/reposync # ls -lrt
total 540
-rw-rw---- 1 wwwrun www   1474 Dec  3 12:02 sle-product-sles15-pool-x86_64.log
-rw-rw---- 1 wwwrun www   1731 Dec  3 12:02 sle-product-sles15-updates-x86_64.log
-rw-rw---- 1 wwwrun www 245815 Dec  3 12:16 sle-module-basesystem15-pool-x86_64.log
-rw-rw---- 1 wwwrun www 293137 Dec  3 13:05 sle-module-basesystem15-updates-x86_64.log
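
To follow a particular channel's sync live, you can tail its log file, for example:

kerneltalks:/var/log/rhn/reposync # tail -f sle-module-basesystem15-updates-x86_64.log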

Once the sync is complete it will show as below –

Sync complete!

That’s it! You have added a product and associated channels to SUSE Manager.


How to remove product channels from SUSE Manager

If, by mistake, you have added some products which you don't want, it's not easy to remove them from SUSE Manager. The webpage does not allow you to just de-select them; you have to follow another method to remove them. I explained all the steps to remove products and channels from SUSE Manager here.

SUSE Manager 4 Setup Configuration

Step by step setup of SUSE Manager Server 4.0 configuration

SUSE Manager server 4 setup

In our previous post on SUSE Manager server installation, we walked you through how to install SUSE Manager step by step, including screenshots. In this article, we will walk you through the SUSE Manager 4.0 configuration setup.

Considering you have the system installed with the SUSE Manager package, you can start the SUSE Manager setup by running the below command –

kerneltalks:~ # yast2 susemanager_setup

If you see an error saying No such client module susemanager_setup, then you must not have the susemanager package installed. Install it using the zypper in susemanager command and you will be able to run the above setup command, as shown below.
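
In that case, a quick fix looks like this (standard zypper usage):

kerneltalks:~ # zypper in susemanager
kerneltalks:~ # yast2 susemanager_setup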

Once run, you will be presented with a text-based GUI setup and we will go through it step by step along with screenshots.

Obviously, keep in mind that you should have the disk space requirements covered before you start the setup. Those are explained in the prerequisites in the SUSE documentation.

SUSE Manager Setup

The first screen asks you to choose the type of setup, which is a pretty obvious choice.

The first screen of the setup

On the second screen, you will be asked to enter the SUSE Manager Administrator email address.

Admin email address

On the next screen, you need to provide details to create an SSL certificate of SUSE Manager.

Certificate setup

Now it will ask you for database details to be set. You can choose the database user of your choice.

Database settings

At this stage, all inputs have been collected and the setup is ready to complete the configuration. In the below window, it still gives you another chance to modify your responses in the answer file and run the setup manually later.

The setup is ready!

We made the obvious choice and hit the Yes button. Now it will set up the SUSE Manager and show you the output as it goes. Finally, the SUSE Manager setup will complete as below.

Setup is completed!

Hit Next and you will be shown the web URL which can be used to administer your SUSE Manager, along with the instruction to create an account first.

SUSE Manager is configured!

SUSE Manager web console

As given in the last screen of the setup, open your browser and head to the URL mentioned. Since I installed in VirtualBox, I used port forwarding and opened it on the loopback IP –

SUSE Manager console first page!

You need to fill in all the details to create your SUSE administrator user and hit the 'Create Organization' button at the end of the page. And you are done! You will see the below home page of the SUSE Manager console.

SUSE Manager console home page

Now your SUSE Manager setup is complete and you have a web console from where you can manage your SUSE Manager.

The very next step after completing this setup is to add subscription details to it and sync product channels so that it can be used in your organization for patching. We have covered that here: how to add product channels in SUSE Manager.