Preparing for CLF-C01 AWS Certified Cloud Practitioner Exam

A quick article on how to prepare for CLF-C01 AWS Certified Cloud Practitioner Exam

AWS CLF!

I am writing this article as a preparation guide for the AWS Certified Cloud Practitioner Certification exam. I recently cleared the exam and hence thought of sharing a few pointers which may help you in your journey to get certified.

This is a foundational-level certification from AWS and aims at getting you acquainted with cloud fundamentals and then AWS Cloud fundamentals. If you are looking for a career in the AWS ecosystem then this is your first step. It is also helpful for sales personnel, managers, etc., i.e. the non-technical population, to get familiar with cloud and AWS terminologies.

If you are coming from a background of working locally or remotely on traditional data center equipment like servers, storage, network, etc., or if you have a background in another cloud technology, then it's a walk in the park for you. Since I had already completed a professional-level AWS certification, I literally sat for this one with no prior study.

You can refer to AWS’s own study guide for a detailed curriculum for the exam and other details.

  • It's a 90-minute exam with 65 questions to answer. Questions and choices are fairly short, hence time should not be a constraint for you.
  • The passing score is 700 out of 1000, and your result will be shown on screen as PASS/FAIL immediately after you submit the exam.
  • The exam costs USD $100. If you have completed any previous AWS certification, then you can make use of a 50% discount coupon in your AWS certification account.

Topics you should study

Cloud and on-prem
  • What is cloud
  • Difference between cloud and on-prem
  • Benefits and trade-offs for cloud over on-prem
  • The economics behind both. CAPEX, OPEX.
  • Different cloud architecture designs
Basics of AWS
  • AWS infrastructure. Understand each element infrastructure.aws
  • How to use or interact with AWS services
  • Understand AWS core services
    • IAM, KMS
    • EC2, ELB, Autoscaling, Lambda
    • S3, EFS, EBS
    • VPC
    • CloudFront
    • Route 53
    • CloudWatch
    • CloudTrail
    • SNS, SQS
    • RDS, DynamoDB
  • It won’t hurt to know a few more services around the above core ones at a very high level, i.e. the name of the service and what it is used for.
  • AWS billing and pricing: how it works, how to get discounts, etc.
  • AWS support tiers
  • Different AWS support teams
Cloud security
  • Security of the cloud (AWS responsibility)
  • Security in the cloud (User’s responsibility)
  • Learn the shared responsibility model
  • AWS Access management
  • Compliance aspect of AWS

While studying AWS services, make sure you know their use cases, billing logic, pricing, service limits, integration with other services, access control, types/classes within them, etc. You are not expected to remember numbers of any kind, but you should know the contextual comparison. For example, you are not expected to remember exact IO or throughput numbers of EBS volumes, but you should know which EBS type gives more throughput or IOPS than the others.

Online courses

I have tried to curate a list of online courses here which you can take to build a solid AWS foundation.

Practice test exams

There are practice tests included in the above courses by LA and ACG. But if you want to purchase practice exams only, then you can do so. AWS offers a practice exam too for USD $20. You can attempt it only once, and there is no point in re-purchasing it since you will see the same questions every time. You can get a free voucher for this practice test if you have completed another AWS certification.

I created my last-day revision notes here, but I mainly referred to my notes from the AWS SAP-C01 exam and only added to these new notes what was missing there.

That’s all I wanted to share. All the best!

Our other certification preparation articles

  1. Preparing for 1Z0-1085-20 Oracle Cloud Infrastructure Foundations 2020 Associate Exam
  2. Journey to AWS Certified Solutions Architect – Professional Certification SAP-C01

Preparing for 1Z0-1085-20 Oracle Cloud Infrastructure Foundations 2020 Associate Exam

A quick article on how to prepare for 1Z0-1085-20 Oracle Cloud Infrastructure Foundations 2020 Associate certification exam.

OCI Foundations Associate 2020

OCI (Oracle Cloud Infrastructure) Foundations 2020 Associate is a foundation-level exam. If you are coming from another cloud service provider background, then it will be a piece of cake for you. Being a foundation-level exam, it tests you on concepts only.

It's a 60-question multiple-choice exam which you have to complete in 105 minutes. That's approximately 2 minutes per question, which is good enough for a foundation-level exam. Questions and answers are short, so you don't have to invest much time in reading and you can complete the exam well before time.

The exam costs $95 and the passing score is 68%. At the time of writing this article, due to the COVID-19 pandemic, Oracle has made the course material and the exam free of cost for a specific period of time. The exam is currently available in online proctored mode from Pearson VUE since most of the exam centers are closed in view of the COVID-19 lockdown.

Journey to AWS Certified Solutions Architect – Professional Certification

Read our other article about preparing for the AWS certification

Let’s walk through exam topics and points you need to consider while preparing for this certification. An exam guide from Oracle can be viewed here.

Exam topics are :

  1. Cloud concepts
  2. OCI fundamentals
  3. Core OCI services
  4. Security and compliance
  5. OCI Pricing, billing, and support

Cloud concepts

If you are coming with a background in any other cloud provider like AWS, then you have got it covered already.

  • You should be through with concepts of HA (High Availability), FT (Fault Tolerance) and the difference between them.
  • What is the cloud?
  • Know the advantages of cloud over the on-prem data center.
  • Get familiar with RTO and RPO concepts.

OCI Fundamentals

This topic covers the basics of OCI, i.e. how it is architected.

  • Understand the concepts of region, AD (Availability Domain), and FD (Fault Domain)
  • Types of regions – single-AD and multi-AD
  • Learn about compartments and tenancy

Core OCI services

In this topic, you are introduced to core OCI services at a higher level. There is no need for a deep dive into each service. A high-level understanding of each is enough for this exam.

  • OCI Compute service. Learn all the below offerings.
    • Bare metal
    • Dedicated virtual host
    • Virtual Machine
    • Container engine
    • Functions
  • OCI Storage services. Learn below offerings
    • Block Volume
    • Local NVMe
    • File Storage service
    • Object Storage service
    • Archive storage
    • Data transfer service
  • OCI Networking services
    • VCN (Virtual Cloud Network)
    • Peering
    • Different kinds of gateways
      • NAT Gateway
      • DRG Gateway
      • Internet Gateway
    • Load balancers
    • NSG (Network Security Groups) and SL (Security Lists)
  • OCI IAM service
    • Concept of principals and Instance principals
    • Groups and dynamic groups
    • Policy understanding along with syntax and parameters
  • OCI Database service. Study all below offerings
    • VM DB systems
    • Bare Metal DB systems
    • RAC
    • Exadata DB systems
    • Autonomous data warehouse
    • Study backup, HA, DR strategies
  • Have a high-level understanding of below services :
    • OCI Key management service
    • OCI DNS service
    • Data safe
    • OS Management service
    • OCI WAF
    • Audit log service
  • Tagging
    • Usages
    • Type: free form and defined
    • Tag namespaces

Security and compliance

OCI security consists of different facets. Understand the below areas in the context of security:

  • Cloud shared security model
  • Securing OCI using IAM
  • Data-at-rest and data-in-transit protection
  • Key management service
  • HA, FT using AD, FD or services for data protection

OCI Pricing, billing and support

Understand how pricing and billing work for each service we saw above. Learn which tiers are priced higher or lower among the storage services. You don't need to remember any numbers, but you should know it contextually, like which one is priced higher and which one is lower, etc.

Learn billing models in OCI

  • PAYG (Pay as you go)
  • Monthly Flex
  • BYOL

Understand the budget service and how tags and compartments can help in billing and budgeting.

Learn about the SLA structure offered by Oracle. This part is missing in OCI online training.

That's all you have to know to clear this exam. As I said, if you are coming from AWS or Azure, then you can relate almost everything to those cloud services, which makes it easy to learn and understand.

I created my last-day revision notes here (mostly with references to AWS for comparison) which might be useful for you as well.

Now, just a little bit of study and go for it! All the best!

Our other certification preparation articles

  1. Preparing for CLF-C01 AWS Certified Cloud Practitioner Exam
  2. Journey to AWS Certified Solutions Architect – Professional Certification SAP-C01

Run commands & copy files on salt clients from SUSE Manager Server

Let's check out the salt CLI a bit!

In this article, we will walk you through a list of useful commands to interact with salt clients and get your work done.

We have covered SUSE Manager right from installation till configuration and client registration in our list of articles in the past. For now, let’s dive into a list of commands you can use to complete tasks on salt clients remotely via SUSE Manager.

You can always check out the list of salt modules available to choose from. I am listing only a few of them which are useful in day-to-day tasks. Some of these tasks can be done from the SUSE Manager UI as well, but if you want to script them then the salt CLI is a way better option.

In the below examples, we have our SUSE Manager server kerneltalks and salt client k-client1.
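By the way, the client itself can tell you which execution modules it supports; a quick check (using only the hosts above) looks like this –

kerneltalks:~ # salt k-client1 sys.list_modules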

Copy files from SUSE Manager to salt clients

There are two ways to copy a file. If you are copying simple text files, then the below command is just fine for you: salt-cp <clientname/FQDN> <source> <destination>

kerneltalks:~ # salt-cp k-client1 test1 /tmp/
k-client1:
    ----------
    /tmp/test1:
        True

Here we copied the test1 file from the current directory on the SUSE Manager server to k-client1:/tmp.

salt-cp treats the files in question as text files and hence should not be used for binary files. It will corrupt binary files or simply fail to copy them. So if I try to copy a gzip file from SUSE Manager, I see the below error –

kerneltalks:~ # salt-cp k-client1 test2.gz /tmp/
[ERROR   ] An un-handled exception was caught by salt's global exception handler:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
Traceback (most recent call last):
  File "/usr/bin/salt-cp", line 10, in <module>
    salt_cp()
  File "/usr/lib/python3.6/site-packages/salt/scripts.py", line 418, in salt_cp
    client.run()
  File "/usr/lib/python3.6/site-packages/salt/cli/cp.py", line 52, in run
    cp_.run()
  File "/usr/lib/python3.6/site-packages/salt/cli/cp.py", line 142, in run
    ret = self.run_oldstyle()
  File "/usr/lib/python3.6/site-packages/salt/cli/cp.py", line 153, in run_oldstyle
    arg = [self._load_files(), self.opts['dest']]
  File "/usr/lib/python3.6/site-packages/salt/cli/cp.py", line 126, in _load_files
    files.update(self._file_dict(fn_))
  File "/usr/lib/python3.6/site-packages/salt/cli/cp.py", line 115, in _file_dict
    data = fp_.read()
  File "/usr/lib64/python3.6/codecs.py", line 321, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte

In such cases, you can use the below salt module to copy files from SUSE Manager to salt clients. For that, you need to keep your file under the /srv/salt directory on the SUSE Manager server.

kerneltalks:/srv/salt # ls -lrt
total 4
-rw-r--r-- 1 root root 44 Apr  3 12:26 test2.gz
kerneltalks:~ # salt k-client1 cp.get_file salt://test2.gz /tmp/
k-client1:
    /tmp/test2.gz

Now we have successfully copied the gzip file from SUSE Manager kerneltalks:/srv/salt/test2.gz to salt client k-client1:/tmp.

Execute remote commands on salt clients from SUSE Manager

Now comes the part where we run commands on the salt client from SUSE Manager. The command output will be returned to you in the current session. You can run a couple of commands together separated by ;, the same as in the shell.

kerneltalks:/srv/salt # salt k-client1 cmd.run 'df -Ph; date'
k-client1:
    Filesystem      Size  Used Avail Use% Mounted on
    devtmpfs        489M     0  489M   0% /dev
    tmpfs           496M   12K  496M   1% /dev/shm
    tmpfs           496M   14M  482M   3% /run
    tmpfs           496M     0  496M   0% /sys/fs/cgroup
    /dev/xvda1      9.8G  1.6G  7.7G  17% /
    Fri Apr  3 12:30:49 UTC 2020

Here we successfully ran the df -Ph and date commands on the salt client remotely from SUSE Manager.

If you have multiple commands to run, bundle them into a script, copy it over to the client using the above method, and then execute the script on the client from SUSE Manager using the cmd.run module, as in the sketch below.
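For example, a minimal sketch with a hypothetical script myscript.sh kept under /srv/salt on the SUSE Manager server –

kerneltalks:~ # salt k-client1 cp.get_file salt://myscript.sh /tmp/myscript.sh
kerneltalks:~ # salt k-client1 cmd.run 'chmod +x /tmp/myscript.sh; /tmp/myscript.sh'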

If you see the below error, it means the client you mentioned is not registered with SUSE Manager, or you have misspelled the client name (try using the FQDN).

kerneltalks:~ # salt-cp k-client1 test1 /tmp/
No minions matched the target. No command was sent, no jid was assigned.

Installing packages on salt client using salt cli

You can execute this task from the SUSE Manager web UI as well. But if you want to script it then salt CLI is a better option.

Installing a package is an easy task. Use the pkg.install salt module and pass one package or a list of packages to be installed on the remote salt system.

Install a single package using –

kerneltalks:~ # salt k-client1 pkg.install 'telnet'
k-client1:
    ----------
    telnet:
        ----------
        new:
            1.2-165.63
        old:

Install multiple packages using –

kerneltalks:~ # salt k-client1 pkg.install pkgs='["telnet", "apache2"]'
k-client1:
    ----------
    apache2:
        ----------
        new:
            2.4.23-29.40.1
        old:
    apache2-prefork:
        ----------
        new:
            2.4.23-29.40.1
        old:
    apache2-utils:
        ----------
        new:
            2.4.23-29.40.1
        old:
    libapr-util1:
        ----------
        new:
            1.5.3-2.8.1
        old:
    libapr1:
        ----------
        new:
            1.5.1-4.5.1
        old:
    liblua5_2:
        ----------
        new:
            5.2.4-6.1
        old:
    libnghttp2-14:
        ----------
        new:
            1.7.1-1.84
        old:
    telnet:
        ----------
        new:
            1.2-165.63
        old:

Here you can see it installed the telnet and apache2 packages remotely along with their dependencies. Note that if a package is already installed and an updated version is available, then salt will update it. Hence you can see new and old version details in the output.

Journey to AWS Certified Solutions Architect – Professional Certification SAP-C01

Let me share my experience of clearing the toughest AWS exam, 'AWS Certified Solutions Architect – Professional'. This article might help you in your journey to get AWS CSA PRO certified.

Getting AWS CSA PRO certified!

In this article, I am going to cover the last few months of the certification journey which can prove useful to you as it was for me.

Since I said the last few months, I assume you already have good hands-on experience with AWS services (be it via a personal account or corporate projects). Obviously, services like Snowball and Direct Connect are rare to get hands-on with, but you need to have at least a solid understanding of them.

Let’s begin with the non-technical aspect of this journey which plays a key role in completing your Exam.

Your reading skills matter!

Yup, you read it right. The AWS CSA PRO exam has 75 questions which you need to answer in 180 minutes. That drills down to a little over 2 minutes per question.

Most of the questions are 3-4 or more sentences long, and so are the answer choices. So you need to read through almost a big paragraph of text for a single question. Then you have to understand what is being asked, analyze the answers, and choose the one which best fits the ask. That's a lot of work to accomplish in 2 minutes!

And there are very few questions where some answers are just incorrect and can be eliminated at first glance. Most of the answers are correct, but you need to choose the most appropriate one to suit the question's requirement. That's a tedious task which requires more time. Hence I said reading skills do matter.

A tip (might be a crazy one): watching videos with subtitles is an easy way to train your brain to read quickly and grasp the context in parallel!

Obviously, you should make yourself comfortable before you sit for the exam, since it's a 3-hour-long exam and you don't want to get distracted by anything.

Last month revisions using online training courses

In the last month before the exam, you might want to subscribe to online courses specifically structured and targeted to the scope of the exam, with material designed around the core services appearing in the exam.

These courses are a bit on the longer side, like 40-50 hours of video, but you can always use video speed controls (generally set to 1.5x) to go through the course quickly. I took Linux Academy's course by Adrian Cantrill and A Cloud Guru's course by Scott Pletcher. But do not attempt the practice exams at the end of the course right away. Keep them for your final stage before the exam.

There are free courses available on the AWS training portal as well which you can check in the meantime. You should know all AWS services at least by name and their use. Services launched in the last year are less likely to appear on the exam, so you can skip them.

Refer AWS documents and videos (Mandatory)

Once you are through the online training courses for the exam, you will have a good idea of what to expect in the exam. These courses are often supplemented with links to AWS whitepapers or re:Invent videos related to the chapter topic. Yup, those are essential things to go through.

AWS whitepapers and FAQ pages give you many minute details that you may have missed and help you determine the validity of your choice for the situation in question. If you are short on time, then at least go through the documents for the services in which you are weak or have little knowledge/experience.

AWS re:Invent videos on YouTube are another content-rich resource that gives you insights/points which you may have missed in your preparation. They are also helpful since many customers come to re:Invent and present their use cases. This will help you map real-world use cases to those in the exam and get solid confirmation about your answer. And you can use YouTube's video speed control to go through the videos quickly!

Getting there

All right, now we are at the stage where all knowledge sourcing has been done and it's time to test that knowledge. Now it's time to hit those practice exams from your online courses. Be sure to get the practice exams by Jon Bonso. It's a set of 4 practice tests and worth the investment.

Also, you should consider taking AWS's own practice exam. If you are lucky, you might encounter some questions from it in the real exam. And if you hold any previous AWS certification, you must have a coupon code in your AWS learning account which you can use to take this test for free.

You are good to book your exam when you can score 90% and above in all the above practice tests by understanding why a particular answer is correct and why the others are not. Memorizing answers is not going to help you in any way.

I uploaded my 50-page-long handwritten notes. They might serve you for last-day revision, like flashcards.

View my last day revision notes

Being there!!

And here you are! The big day! On exam day, just keep calm and take the exam. Don't rush for any last-minute reads, etc.; it's going to confuse and complicate things. Better to be in a peaceful state, since your mind matters most on exam day: that's what is going to help you read and understand the essays of the exam in the first go. This way you don't waste your precious time re-reading questions and answers.

  • Always keep in mind you cannot spend more than 2 minutes on a single question. Time is precious!
  • If you can't figure out the answer quickly, then flag it and move on.
  • If you see answers with the same solution and only one or two keywords different, then it is easy to finalize the answer quickly without reading through the whole statements.
  • Scan through the question and capture keywords like highly available solution, less cost, multi-region, load balancing, etc. This helps you narrow down to particular services.
  • Start building solutions in your mind as you read through the question using the above-said keywords. Then you can look at the answers and match them against the solution you have in mind. It saves you a lot of time!
  • Do not submit the exam till the last second, even if you manage to complete all questions and review the flagged ones before time. Use the remaining time to go through your answers again.

Result?

Your result will be emailed to you within 5 business days. But you can figure out whether you made it or not from the message displayed on the screen once you submit the exam. The message is quite confusing (even more so when you have fried your brain for the last 3 hours!) since it just states that you completed the exam! (Different messages are mentioned here in the forum.) But in a nutshell, if it starts with Congratulations then you made it, and if it starts with Thank You then you need a re-attempt.

Our other certification preparation articles

  1. Preparing for CLF-C01 AWS Certified Cloud Practitioner Exam
  2. Preparing for 1Z0-1085-20 Oracle Cloud Infrastructure Foundations 2020 Associate Exam

Installing Ansible and running the first command

How to install Ansible and how to run a simple command using Ansible.

Ansible installation

In this article, we will walk you through the step-by-step procedure to install Ansible and then run the first ping command on its clients.

We will be using our lab setup built using containers for this exercise. In all our articles related to Ansible, we refer to the Ansible server as the Ansible control machine, i.e. where the Ansible software is installed and running. Ansible clients are the machines being managed by Ansible.

Pre-requisite

Ansible control machine requirements

It should be a Linux machine; Ansible cannot be installed on Windows OS. Secondly, it should have Python installed.

It's preferred to have passwordless SSH set up between the Ansible control machine and the managed machines for smooth execution, but it is not mandatory.
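A minimal sketch of such a setup, assuming the ansible-usr user and a client IP from the lab examples later in this article –

[root@ansible-srv ~]# ssh-keygen -t rsa
[root@ansible-srv ~]# ssh-copy-id ansible-usr@172.17.0.9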

Ansible managed machine requirement

It should have libselinux-python installed if SELinux is enabled, which is the case most of the time.

A Python interpreter should be installed.
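On an RPM-based managed node, making sure those pieces are present could look like the below (package names assume a RHEL/CentOS-style client; adjust for your distribution) –

[root@k-db1 ~]# yum install -y python libselinux-python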


Ansible installation

Installation of Ansible is an easy task. It's a package, so install it like you would install any other package on your Linux system. Make sure you have subscribed to the proper repo which has the Ansible engine available to install.

I enabled the EPEL repo on my Oracle Linux running in VirtualBox and installed it using –

[root@ansible-srv ~]# yum install ansible

Once the installation is done, you need to add your client list in the file /etc/ansible/hosts. Our setup file looks like below:

[root@ansible-srv ~]# cat /etc/ansible/hosts
[all:vars]
ansible_user=ansible-usr

[webserver]
k-web1 ansible_host=172.17.0.9
k-web2 ansible_host=172.17.0.3

[middleware]
k-app1 ansible_host=172.17.0.4
k-app2 ansible_host=172.17.0.5

[database]
k-db1 ansible_host=172.17.0.6

Here, we defined the Ansible default user in the inventory file itself. Since we do not have DNS and we are using containers in our setup, I defined hostnames and IPs as mentioned above.
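To quickly verify that Ansible parses this inventory as expected, you can list the hosts it sees –

[root@ansible-srv ~]# ansible all --list-hosts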


Running first Ansible command

As I explained earlier in the Lab setup article, I configured passwordless SSH from the Ansible control machine to the managed node.

Let's run our first ansible command, i.e. ping one of the hosts. The command syntax is – ansible -m <module> <target>

[root@ansible-srv ~]# ansible -m ping k-db1
k-db1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}

We used the ping module here and the target host is k-db1. We received back pong, i.e. the command executed successfully. In this output –

  • SUCCESS is the command exit status
  • ansible_facts is data collected by Ansible while executing the command on the managed node.
  • changed indicates whether the task had to make any changes

Let’s run another simple command like hostname

[root@ansible-srv ~]# ansible -m command -a hostname k-db1
k-db1 | CHANGED | rc=0 >>
k-db1

Here, in the second line, you see the command stdout, i.e. the output, and the return code rc, i.e. the exit code of the command, is 0, confirming the command execution was successful.

SEP 14 antivirus client commands in Linux

A list of Symantec Endpoint Protection 14 antivirus client commands in Linux and a few errors along with their possible solutions.

SEP Linux client commands

In this article, we will walk you through a few SEP 14 antivirus agent commands which will help you troubleshoot issues related to it, and then we will give solutions to some frequently seen errors.

Symantec Endpoint Protection 14 Linux client commands

How to restart SEP 14 Linux client processes

Stop the SEP 14 Linux client using the single command below –

[root@kerneltalks tmp]# /etc/init.d/symcfgd stop
Stopping smcd: ..                                                    done

Stopping rtvscand: ..                                                done

Stopping symcfgd: .                                                  done

Start the SEP 14 Linux client using the below commands in the given order –

[root@kerneltalks tmp]# /etc/init.d/symcfgd start
Starting symcfgd:                                                    done

[root@kerneltalks tmp]# /etc/init.d/rtvscand start
Starting rtvscand:                                                   done

[root@kerneltalks tmp]# /etc/init.d/smcd start
Starting smcd:                                                       done
How to uninstall SEP 14 client from Linux machine
[root@kerneltalks tmp]# /opt/Symantec/symantec_antivirus/uninstall.sh
Are you sure to remove SEP for Linux from your machine?
WARNING: After SEP for Linux is removed, your machine will not be protected.
Do you want to remove SEP for Linux? Y[es]|N[o]: N
Y
Starting to uninstall Symantec Endpoint Protection for Linux
Begin removing GUI component
GUI component removed successfully
Begin removing Auto-Protect component
symcfgd is running
rtvscand is not running
smcd is not running
Auto-Protect component removed successfully
Begin removing virus protection component
smcd is running
rtvscand is running
symcfgd is running
Virus protection component removed successfully
Uninstallation completed
The log file for uninstallation of Symantec Endpoint Protection for Linux is under: /root/sepfl-uninstall.log

All the below commands use the sav binary, which is located in /opt/Symantec/symantec_antivirus.

Display auto-protect module state

[root@kerneltalks symantec_antivirus]# ./sav info -a
Enabled

Display virus definition status

[root@kerneltalks symantec_antivirus]# ./sav info -d
11/24/2019 rev. 2

Check if the client is self-managed or being managed by the SEPM server. The output is the hostname or IP of the server managing the client.

[root@kerneltalks symantec_antivirus]# ./sav manage -s 
syman01

Display the management server group to which the current client belongs.

[root@kerneltalks symantec_antivirus]# ./sav manage -g 
My Company\Default Group

Run immediate virus definition update

[root@kerneltalks symantec_antivirus]# ./sav liveupdate -u
Update was successful

Trigger the heartbeat immediately and download the profile from the SEPM server

[root@kerneltalks symantec_antivirus]# ./sav manage -h
Requesting updated policy from the Symantec Endpoint Protection Manager ...

Import sylink file in the client

[root@kerneltalks symantec_antivirus]# ./sav manage -i /tmp/sylink.xml
Imported successfully.

Now, let’s look at a few errors and their possible solutions –

SAV manage server is offline
[root@kerneltalks symantec_antivirus]# ./sav manage -s
Offline

This means your client is not able to communicate with the SEPM server. Make sure no firewall (internal to the OS, like iptables, or external) is blocking the traffic. Also, make sure you have proper proxy configurations in place. If it's an internal server, make sure you have excluded it from the proxy as a no_proxy host.

Refer to the SEP communication ports here, which will help you drill down communication issues.
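For a quick connectivity check from the client, something like the below helps. Here 8014 is the default SEPM client communication port; substitute whatever port your environment uses –

[root@kerneltalks ~]# telnet syman01 8014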

LiveUpdate fails

The best way to troubleshoot LiveUpdate issues is to go through the log file /opt/Symantec/LiveUpdate/Logs/lux.log. It has a descriptive message about the error, which helps to quickly drill down to the problem.

[root@kerneltalks symantec_antivirus]# ./sav liveupdate -u
sep::lux::Cseplux: Failed to run session, error code: 0x80010830
Live update session failed. Please enable debug logging for more information
Unable to perform update

Or the error is logged in the lux.log file as below –

Result Message: FAIL - failed to select server
Status Message: Server was not selected

The client is unable to reach the LiveUpdate server or the LiveUpdate Administrator (LUA). Again, the same troubleshooting steps as above apply.

How to move /tmp on a separate disk as a separate mount point

A quick post explaining how you can move the /tmp directory out of / to a new mount point on a new disk.

Create /tmp as a new mount point

One of the headaches for a sysadmin is a file system getting full. It can have many reasons, from a misbehaving application and inadequate capacity planning to an unorganized file system structure. We are going to look at the file system aspect of it.

The single-disk server approach, i.e. the root disk formatted as one partition and mounted as /, is common these days. But there are on-prem servers that still follow the approach of slicing disks and mounting different root file systems separately. So if your server is one of them and, for some reason, your /tmp directory is part of / and not a separate mount point, then this article is for you.

In this article, we will walk you through the step-by-step procedure to mount /tmp on another disk as a separate mount point. We are going to separate the /tmp directory out of the / file system into a /tmp mount point. We are taking an example with LVM, but the procedure remains the same if you want to mount /tmp on another partition; just replace the LVM parts, i.e. the VG and LV steps, with an equivalent partition creation procedure.

Make sure you have a valid backup of the server before proceeding.

How to move /tmp as new mount point with downtime

/tmp is used by many processes on the server to open temp files during execution. So this directory is always in use, and rebooting into single-user mode to perform such an activity is the safest and cleanest way. You can check processes using /tmp with the lsof command.
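For example, a quick way to see which processes currently hold files open under /tmp –

lsof +D /tmp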

The complete procedure can be done in the below order –

  1. Prepare a new disk for /tmp
    1. Create LV on new disk (pvcreate, lvcreate)
      • pvcreate /dev/sdb
      • vgcreate vg_tmp /dev/sdb
      • lvcreate -l 100%FREE -n lv_tmp vg_tmp
    2. Format LV with the filesystem of your choice
      • mkfs.ext4 /dev/vg_tmp/lv_tmp
    3. Mount it on a temporary mount
      • mount /dev/vg_tmp/lv_tmp /mnt
  2. Copy data from /tmp directory to the new disk
    • cp -pr /tmp/* /mnt
    • ls -lrt /mnt
    • ls -lrt /tmp
  3. Reboot server into single-user mode
  4. Prepare new /tmp mount point
    1. Delete/move existing /tmp directory depending on space availability in /
      • rm -rf /tmp OR
      • mv /tmp /tmp.orig
    2. Create new /tmp for the mount point
      • mkdir /tmp
    3. Set permission and ownership
      • chmod 1777 /tmp
      • chown root:root /tmp
    4. Add entry in /etc/fstab
      1. echo "/dev/vg_tmp/lv_tmp /tmp ext4 defaults 1 2" >> /etc/fstab
  5. Reboot the server normally.
  6. Log in and check /tmp is mounted as the separate mount point.

Setting permission 1777 is an important step here; otherwise, /tmp will not function as expected.
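Once the server is back up, a quick check like the below confirms /tmp is now a separate mount point on the new LV –

df -hP /tmp
mount | grep /tmp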

Troubleshooting Ansible errors

A list of errors seen while working with Ansible, and their solutions.

Let’s check errors you might come across in Ansible

Error

"msg": "Failed to connect to the host via ssh: ssh: connect to host 172.17.0.9 port 22: No route to host",

Cause and solution

The Ansible control machine is not able to reach the client. Make sure the client hostname is resolved via –

  • DNS server or
  • /etc/hosts of Ansible control server or
  • By /etc/ansible/hosts or your custom Ansible inventory file.

Also, make sure network connectivity over port 22 from the Ansible control machine to the client is working fine (test using telnet).
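For example, a quick check from the control machine using the IP from the error above –

[root@ansible-srv ~]# telnet 172.17.0.9 22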


Error

"msg": "Failed to connect to the host via ssh: Permission denied (publickey,password).",

Cause and solution

The Ansible control server failed to authenticate the SSH connection to the client. Make sure the client accepts either the SSH key or the password of the user Ansible connects as.
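A minimal sketch of the two usual fixes, assuming the ansible-usr user and the k-web1 client from our lab setup: copy the control machine's public key to the client, or fall back to password authentication for the run –

[root@ansible-srv ~]# ssh-copy-id ansible-usr@172.17.0.9
[root@ansible-srv ~]# ansible -m ping k-web1 --ask-pass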

How to define Ansible default user

A quick post to explain the default Ansible user and where it can be changed.

Ansible user configuration

Ansible by default manages its clients over the SSH protocol. So the obvious question is: what is the default user Ansible uses to connect to or execute commands on its clients? Followed by: how do you change the Ansible default user? We will answer these questions in this article.

If you are running the default configuration and did not define an Ansible user anywhere, then the user running the ansible command (the current user) will be used to communicate with the client over SSH.
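You can also override the user for a single ad-hoc command with the -u flag, for example (using the k-db1 client and ansible-usr user from our lab setup) –

[root@ansible-srv ~]# ansible -m ping k-db1 -u ansible-usr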

Define Ansible user in the configuration file

The Ansible default user can be defined in the Ansible configuration file /etc/ansible/ansible.cfg in the below section, by un-commenting the remote_user line and replacing root with the user of your choice –

# default user to use for playbooks if user is not specified
# (/usr/bin/ansible will use current user as default)
#remote_user = root

Here it clearly states that if the default user is not defined in the configuration file, then the currently logged-in user (on the control machine, i.e. the Ansible server) will be used to execute commands on Ansible clients.

Define Ansible user in Inventory

Another place you can define the Ansible user is the inventory, i.e. the client host list file. The default hosts file Ansible uses is /etc/ansible/hosts. You can add the below snippet to this file to define the Ansible user for your tasks or playbooks.

[all:vars]
ansible_user=ansible-usr

where ansible-usr is the user you want Ansible to use while connecting to clients over SSH. Replace ansible-usr with the user of your choice.