Securing AWS credentials in WSL using aws-vault

This article will walk you through step-by-step procedures to secure AWS CLI credentials using the aws-vault tool.

aws-vault configuration in WSL

Why aws-vault?

Configuring the AWS CLI saves the AWS access key and secret key in plain text in the ~/.aws/credentials file. This is a security concern: since the location is well known, anybody with access to the machine can look for the keys, and the credentials can be compromised.

Hence, to store AWS keys more securely, we need a tool that keeps the keys in encrypted form rather than plain text. Here we discuss aws-vault, a widely used open-source tool for securing AWS keys.

Also read: Setting up WSL for Sysadmin work.

How to install and configure aws-vault in WSL?

We will use pass as the secure backend for aws-vault while configuring it in WSL, since pass is a native Linux password manager. The list of compatible secure backends for aws-vault can be found here.

Install the pass utility in WSL using the package manager for your Linux flavour.

# apt install pass

The next step is to generate the gpg key. You will be prompted to enter a password for the key in this process. This password is necessary to unlock the password storage and fetch the passwords within.

$ gpg --generate-key
...
GnuPG needs to construct a user ID to identify your key.

Real name: shrikant
Email address: info@kerneltalks.com
You selected this USER-ID:
    "shrikant <info@kerneltalks.com>"

Change (N)ame, (E)mail, or (O)kay/(Q)uit? O
...

pub   rsa3072 2022-09-25 [SC] [expires: 2024-09-24]
      110577AFEE906D84224719BCD0F123xxxxxxxxxx
uid                      shrikant <info@kerneltalks.com>
sub   rsa3072 2022-09-25 [E] [expires: 2024-09-24]

Initialize the new password store with gpg key encryption. Use the gpg key ID created in the previous step in the below command –

$ pass init -p aws-vault 110577AFEE906D84224719BCD0F123xxxxxxxxxx
mkdir: created directory '/root/.password-store/aws-vault'
Password store initialized for 110577AFEE906D84224719BCD0F123xxxxxxxxxx (aws-vault)

Now, the pass backend is ready to use with aws-vault.

Install aws-vault. Simply grab the latest release from here, move and rename it to one of the binary directories, and assign execute permission.

$ wget https://github.com/99designs/aws-vault/releases/download/v6.6.0/aws-vault-linux-amd64
$ mv aws-vault-linux-amd64 /usr/local/sbin/aws-vault
$ chmod +x /usr/local/sbin/aws-vault
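
You can confirm the binary works by checking its version –

$ aws-vault --version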

You have pass configured and aws-vault installed. Your system is ready to save the AWS credentials in aws-vault.

By default, aws-vault uses the keyctl backend on Linux. Export the below variables to configure it with the pass backend. It’s good to export them in the login profile of WSL so that they are set for all your future shell sessions.

$ export AWS_VAULT_BACKEND=pass
$ export AWS_VAULT_PASS_PASSWORD_STORE_DIR=/root/.password-store/aws-vault
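
For example, assuming a bash login shell, you can persist them by appending to your profile –

$ echo 'export AWS_VAULT_BACKEND=pass' >> ~/.bashrc
$ echo 'export AWS_VAULT_PASS_PASSWORD_STORE_DIR=/root/.password-store/aws-vault' >> ~/.bashrc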

Add your AWS credentials in aws-vault with the below command –

# aws-vault add my-aws-account
Enter Access Key ID: XXXXXXXXXXXXX
Enter Secret Access Key: 
Added credentials to profile "my-aws-account" in vault

my-aws-account is an account alias/name for identification purposes only.

Verify if aws-vault saved the credentials in pass password storage.

$ pass show
Password Store
└── aws-vault
    └── my-aws-account

At this point, you have successfully secured your AWS credentials, and you can safely remove the ~/.aws/credentials file. If you have more than one AWS account configured in the credentials file, add all of them to aws-vault before deleting it.

pass saves the password storage in ~/.password-store in encrypted format.

$ ls -lrt ~/.password-store/aws-vault/
total 8
-rw------- 1 root root   41 Sep 25 22:01 .gpg-id
-rw------- 1 root root  831 Sep 25 14:22 my-aws-account.gpg

How to use aws-vault?

When you want to run any command using AWS CLI, you can execute it like this –

$ aws-vault exec <profile-name> -- aws <cli-command>
# aws-vault exec my-aws-account -- aws s3 ls

Here: my-aws-account is the AWS profile name. aws-vault also reads the ~/.aws/config file to figure out AWS profiles configured on the machine.

You can now authenticate with credentials stored securely in encrypted form! aws-vault also handles IAM role assumption and MFA prompts defined in AWS profiles.
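
For reference, a hypothetical ~/.aws/config with a role and MFA could look like the below (the account ID, role ARN, and MFA ARN are placeholders) –

[profile my-aws-account]
region = us-east-1

[profile my-admin-role]
source_profile = my-aws-account
role_arn = arn:aws:iam::123456789012:role/admin-role
mfa_serial = arn:aws:iam::123456789012:mfa/my-user

Running aws-vault exec my-admin-role -- aws s3 ls would then prompt for the MFA token before assuming the role.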

Error troubleshooting

gpg: decryption failed: No secret key

The pass password storage is locked and needs to be unlocked. Run pass show <password-name>, and it should prompt for the gpg key password.

$ pass show my-aws-account
┌───────────────────────────────────────────────────────────────┐
│ Please enter the passphrase to unlock the OpenPGP secret key: │
│ "shrikant <info@kerneltalks.com>"                             │
│ 3072-bit RSA key, ID 0DB81DDFxxxxxxxx,                        │
│ created 2022-09-25 (main key ID D0F123Bxxxxxxxxx).            │
│                                                               │
│                                                               │
│ Passphrase: _________________________________________________ │
│                                                               │
│         <OK>                                   <Cancel>       │
└───────────────────────────────────────────────────────────────┘

Now, try the aws-vault command, and it should work fine.


aws-vault: error: Specified keyring backend not available, try --help

Ensure you have exported both parameters mentioned in the above configuration steps so that aws-vault will use the pass backend.


aws-vault: error: exec: Failed to get credentials for kerneltalks-test: operation error STS: GetSessionToken, https response error StatusCode: 403, RequestID: e514d0e5-b8b6-4144-8616-05061eeed00c, api error SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.

It points to a wrong access/secret key combination added to the secure vault. Delete the respective key from the vault and re-add it. If you lost your secret key, then create a new key pair from the AWS console and add it to the vault.

$ aws-vault remove kerneltalks-test
Delete credentials for profile "kerneltalks-test"? (y|N) y
Deleted credentials.
$ aws-vault add kerneltalks-test
Enter Access Key ID: xxxxxxxxxx
Enter Secret Access Key: 
Added credentials to profile "kerneltalks-test" in vault

aws-vault: error: rotate: Error creating a new access key: operation error IAM: CreateAccessKey, https response error StatusCode: 403, RequestID: 2392xxxx-x452-4bd4-80c3-452403baxxxx, api error InvalidClientTokenId: The security token included in the request is invalid

If the profile is enabled with MFA and you are trying to run some IAM modification, you may encounter the above error. Use the --no-session flag with the command. Read in-depth regarding the no-session flag here.
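
For example, an IAM call with the flag would look like the below (the profile and user names are placeholders) –

$ aws-vault exec --no-session kerneltalks-test -- aws iam list-access-keys --user-name my-user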

Netflix’s ConsoleMe local installation on Linux machine

A step-by-step guide to installing ConsoleMe on an Ubuntu Linux machine

ConsoleMe Ubuntu Local Install

ConsoleMe is an open-source web service published by Netflix. It is designed to make life easy for end-users and cloud administrators. Using ConsoleMe, cloud administrators can manage IAM permissions/credentials for IAM roles, S3 buckets, SQS queues, and SNS topics across multiple AWS accounts from a single interface. It also provides a CLI called weep for AWS credentials management. That’s a fair introduction if you are not aware of the tool. Next, let’s get into the installation part.

ConsoleMe offers Docker and local installs. We will walk you through the local install in this article.

Pre-requisite:

  • A machine running Ubuntu 19.04+ with root access. I used Ubuntu 20.04 LTS x86.
  • Active and working package manager subscription to install packages
  • Storage requirement: 2GB of disk space
  • An AWS user/role for the ConsoleMe service with appropriate permissions
  • AWS access keys for the above user if you are not using roles. I used keys (steps below)

Installation

We are installing ConsoleMe in /consoleme directory. If you want to install in another location, make the necessary changes in the commands below. Let me give you a list of commands you need to run as root –

apt-get update
apt-get install build-essential libxml2-dev libxmlsec1 libxmlsec1-dev libxmlsec1-openssl musl-dev libcurl4-nss-dev python3-dev pkg-config python3.8-venv awscli docker-compose -y
curl -sL https://deb.nodesource.com/setup_14.x | sudo bash
apt-get install -y nodejs
npm install yarn -g
cd /
git clone https://github.com/Netflix/consoleme.git
cd consoleme
docker-compose -f docker-compose-dependencies.yaml up -d

Here, the first few commands install all the dependencies and related software/tools. Then we clone the GitHub repo of the tool in /consoleme, and lastly, we bring up the dependency containers.

These are the Redis and DynamoDB containers that ConsoleMe leverages for caching and aggregating the AWS account information (the third container is a DynamoDB admin UI). You can use the managed AWS Redis and DynamoDB services instead, but for now, we will run these containers locally so that ConsoleMe talks to them rather than the AWS services.

I am not including console outputs here for frequently used commands like package installations.

Make sure both containers are up and running before proceeding to the next step –

root@kerneltalks:/consoleme# docker ps
CONTAINER ID   IMAGE                             COMMAND                  CREATED          STATUS         PORTS                              NAMES
5333cdee2202   cnadiminti/dynamodb-local         "java -jar DynamoDBL…"   10 seconds ago   Up 4 seconds   8000/tcp, 0.0.0.0:8005->8005/tcp   consoleme-dynamodb
19ac354c3d70   redis:alpine                      "docker-entrypoint.s…"   10 seconds ago   Up 4 seconds   0.0.0.0:6379->6379/tcp             consoleme-redis
4cf931d38652   aaronshaf/dynamodb-admin:latest   "node bin/dynamodb-a…"   10 seconds ago   Up 4 seconds   0.0.0.0:8001->8001/tcp             consoleme-dynamodb-admin

Now, you need to prepare the machine to talk to AWS for fetching account details in the upcoming install steps. Ensure that you have set up the account and permissions correctly in IAM (mentioned in the pre-requisites above) to avoid any issues. You can do that by configuring the AWS profile –

root@kerneltalks:/consoleme# aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]:
Default output format [None]:

Lastly, create a new Python environment and run the final install step. This will take a while to complete since, at the end of the make install command, it also fetches and caches the AWS account details in the local Redis cache –

python3 -m venv env
. env/bin/activate
make install

After successful installation, you should be able to start the application.

Running ConsoleMe

In the current shell, you can run ConsoleMe with the below command. If you are in another shell, activate the Python environment again first –

(env) root@kerneltalks:/consoleme# python consoleme/__main__.py
{"asctime": "2021-07-25T08:32:16Z+0000", "name": "consoleme", "processName": "MainProcess", "filename": "jwt.py", "funcName": "<module>", "levelname": "ERROR", "lineno": 14, "module": "jwt", "threadName": "MainThread", "message": "Configuration key `jwt.secret` is not set. Setting a random secret", "eventTime": "2021-07-25T01:32:16.286230-07:00", "hostname": "kerneltalks", "timestamp": "2021-07-25T08:32:16Z+0000"}
2021-07-25 08:32:17,322 - DEBUG - root - [constants.py:39 - <module>() ] - Leveraging the bundled IAM Definition.
2021-07-25 08:32:17,322 - INFO - root - [iam_data.py:10 - <module>() ] - Leveraging the IAM definition at /consoleme/env/lib/python3.8/site-packages/policy_sentry/shared/data/iam-definition.json
2021-07-25 08:32:17,824 - DEBUG - git.cmd - [cmd.py:817 - execute() ] - Popen(['git', 'version'], cwd=/consoleme, universal_newlines=False, shell=None, istream=None)
2021-07-25 08:32:17,859 - DEBUG - git.cmd - [cmd.py:817 - execute() ] - Popen(['git', 'version'], cwd=/consoleme, universal_newlines=False, shell=None, istream=None)
{"asctime": "2021-07-25T08:32:18Z+0000", "name": "consoleme", "processName": "MainProcess", "filename": "__main__.py", "funcName": "init", "levelname": "DEBUG", "lineno": 57, "module": "__main__", "threadName": "MainThread", "message": "Server started", "eventTime": "2021-07-25T01:32:16.286230-07:00", "hostname": "kerneltalks", "timestamp": "2021-07-25T08:32:18Z+0000"}

But it will exit when you terminate the command or the shell. It’s better to run it in the background or, even better, run it as a Linux service. To run ConsoleMe as a service, create the below two files –

File /usr/bin/consoleme_start.sh

#!/bin/bash
. env/bin/activate
python consoleme/__main__.py

File /etc/systemd/system/consoleme.service


[Unit]
Description=Run consoleme service.

[Service]
Type=simple
User=root
WorkingDirectory=/consoleme
ExecStart=/usr/bin/consoleme_start.sh

[Install]
WantedBy=multi-user.target

Assign executable permission to the start script –

chmod +x /usr/bin/consoleme_start.sh
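
If systemd does not pick up the newly created unit right away, reload the daemon before enabling the service –

root@kerneltalks:/consoleme# systemctl daemon-reload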

Enable and start the service –

root@kerneltalks:/consoleme# systemctl enable consoleme
Created symlink /etc/systemd/system/multi-user.target.wants/consoleme.service → /etc/systemd/system/consoleme.service.

root@kerneltalks:/consoleme# systemctl start consoleme

root@kerneltalks:/consoleme# systemctl status consoleme
● consoleme.service - Run consoleme service.
     Loaded: loaded (/etc/systemd/system/consoleme.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2021-07-25 08:35:52 UTC; 7s ago
   Main PID: 14775 (consoleme_start)
      Tasks: 5 (limit: 4706)
     Memory: 159.7M
     CGroup: /system.slice/consoleme.service
             ├─14775 /bin/bash /usr/bin/consoleme_start.sh
             └─14776 python consoleme/__main__.py

Jul 25 08:35:52 kerneltalks systemd[1]: Started Run consoleme service..
Jul 25 08:35:53 kerneltalks consoleme_start.sh[14776]: {"asctime": "2021-07-25T08:35:53Z+0000", "name": "consoleme", "processName": "MainProcess", "filename": "jwt.py", "funcName": "<module>", "levelname": "ERROR", "lineno": 14, "m>
Jul 25 08:35:53 kerneltalks consoleme_start.sh[14776]: 2021-07-25 08:35:53,954 - DEBUG - root - [constants.py:39 - <module>() ] - Leveraging the bundled IAM Definition.
Jul 25 08:35:53 kerneltalks consoleme_start.sh[14776]: 2021-07-25 08:35:53,955 - INFO - root - [iam_data.py:10 - <module>() ] - Leveraging the IAM definition at /consoleme/env/lib/python3.8/site-packages/policy_sentry/shared/data/i>
Jul 25 08:35:54 kerneltalks consoleme_start.sh[14776]: 2021-07-25 08:35:54,354 - DEBUG - git.cmd - [cmd.py:817 - execute() ] - Popen(['git', 'version'], cwd=/consoleme, universal_newlines=False, shell=None, istream=None)
Jul 25 08:35:54 kerneltalks consoleme_start.sh[14776]: 2021-07-25 08:35:54,361 - DEBUG - git.cmd - [cmd.py:817 - execute() ] - Popen(['git', 'version'], cwd=/consoleme, universal_newlines=False, shell=None, istream=None)
Jul 25 08:35:54 kerneltalks consoleme_start.sh[14776]: {"asctime": "2021-07-25T08:35:54Z+0000", "name": "consoleme", "processName": "MainProcess", "filename": "__main__.py", "funcName": "init", "levelname": "DEBUG", "lineno": 57, ">

ConsoleMe GUI

Now that your ConsoleMe service is running, you can load its GUI in a web browser. The service listens on port 8081, so navigate to the server address with port 8081, e.g. http://<server-address>:8081. Make sure the security group allows traffic on port 8081 if you are installing on EC2.

At this point, ConsoleMe is running with the default open example configuration. This is very well highlighted in the web app as a warning. You should edit this configuration to make your ConsoleMe deployment more secure. ConsoleMe recommends Application Load Balancer authentication for securing your web app GUI. Refer to our next article on how to secure the ConsoleMe web app using ALB authentication.

How to connect AWS RDS database from Windows

A quick article for AWS beginners on connecting the AWS RDS database from Windows.

A step-by-step RDS database login procedure using a lightweight SQL client. This article is intended for AWS beginners who want to learn about the RDS service with a little bit of database hands-on. That being said, there is always a question from non-DB folks: “How do I connect to this RDS database?”. So here we will walk you through the step-by-step procedure for doing that.

For connecting to an RDS database, you should have –

  1. RDS database up and running
  2. RDS database endpoint
  3. A SQL client
  4. Connectivity between your machine and RDS database

Throughout this article, I am considering a MySQL database on RDS. If you are using a different database engine on RDS, the connection port may vary. Also, of the two authentication options below, we are considering ‘Password authentication’ here –

  1. Password authentication: RDS configured with this option has users managed at the database level.
  2. AWS IAM database authentication: RDS configured with this option authenticates users by leveraging AWS IAM service. As a result, users are managed outside the database.
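
As a side note, if you have the MySQL client installed, you can also test such a connection from the command line (the endpoint and user below are placeholders) –

mysql -h mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com -P 3306 -u admin -p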

Let’s get started!

Identify RDS Endpoint

  • Navigate to the RDS console
  • Go to Databases and select your database
  • Database details screen should open up where you can copy your RDS endpoint.
RDS endpoint

Prepare SQL client

  • Download lightweight SQL client Sqlectron
  • Install and open the Sqlectron client.
Sqlectron client!

Click on the Add button to enter RDS connection details. We are testing RDS with ‘Password authentication’ here, hence the user and password need to be supplied.

RDS connection details in SQL Client

Click on Test to verify the connectivity.

If you encounter the below error, try to connect from a machine with direct connectivity to RDS. You can analyse RDS security groups to determine allowed subnets.

DB_CONNECT Error

Verify below things –

  • There are no firewall rules between your machine and RDS blocking the port 3306 traffic.
  • RDS database is publicly accessible. (Obviously, this applies only to testing/POC databases.) If not, you can configure RDS to be publicly accessible.
  • If you don’t want your RDS to be publicly accessible, you need to connect RDS from the allowed subnets. That means your machine needs to be in the same VPC as RDS (e.g. Windows EC2 in the same VPC)

How to make RDS publicly accessible

Not recommended for production RDS instances or RDS carrying sensitive data.

  • Navigate to RDS console and respective database
  • Select Modify from the action menu for that particular database
  • Go to the Connectivity section and expand Additional configuration
  • Here, choose the radio button against Publicly accessible and apply the changes
RDS Public Access

RDS connection using ‘Password authentication’ via SQL client

Once you sort out RDS connectivity issues, go back to the SQL client and try Test again. Now you should be able to succeed.

Now, click on Save to save this configuration in SQL client. And then hit Connect to connect to your RDS database.

Connect RDS using Client

After connecting, you can see schema on the left-hand side and a command box on the right-hand side to execute commands on the database.

Running SQL commands on RDS

Command outputs or messages will be shown in the blank space below the command text area. You will be able to download outputs in JSON or CSV format, or even copy them directly.

Running SQL commands with Sqlectron!

That’s all! You can use this lightweight SQL client to get started with RDS immediately.

Assorted list of resources to ease your AWS tasks

Assorted list of resources to help you with your work in AWS!

A Swiss Army knife for AWS tasks!

In this post, I will quickly run down the assorted list of different software, tools, or online resources that will help you with your AWS journey. So without further delay, let’s jump into it.

AWS Native tools

AWS console related

AWS architecting

  • AWS Architecture Icons: List of supported drawing and diagramming tools, plus download links for icon assets and toolkits. Very helpful for designing presentable diagrams with the latest AWS icons. Every architect’s must-have resource!
  • Draw.io: Create architecture diagrams with draw.io
  • Visual subnet calculator: An easy way to slice up your CIDR block into the required number of subnets.
  • CIDR Range Visualizer: An interactive CIDR IP addressing visualizer webpage!
  • AWS EC2 instances info: Single place to look at all EC2 instances, filters, costing, etc.

CLI lovers

Infrastructure coding

  • Visual Studio Code: The most favoured and loved editor for all the coding you do. It supports almost all languages and clouds, and lets you connect to your cloud and run your code from the editor window itself.
  • cfn-lint plugin: CloudFormation linter plugin for Microsoft Visual Studio Code.
  • Infracost: Cloud cost estimates on Terraform pull requests!

The list can grow on and on. I just collected a few of them here to start with. Let me know your additions in the comments down below!

sar utility custom settings

A quick article to point out configurations for customizing the sar utility.

sar custom settings!

sar is a monitoring utility on Linux used to monitor system resource utilization. We have covered different aspects of sar in the past. You can go through the below articles for the same.

In this article, we will walk you through some custom settings you can configure for sar, like the below –

  1. How to change monitoring frequency in sar
  2. How to customize sar log rotation

How to change sar monitoring frequency?

As you are aware, sar has a default frequency of 10 minutes. That means the sar utility logs one data point of resource utilization every 10 minutes. If you want to change this frequency, you can do it by editing the below file –

kerneltalks:~ #  cat /etc/cron.d/sysstat
# crontab for sysstat

# Activity reports every 10 minutes everyday
*/10 * * * * root [ -x /usr/lib64/sa/sa1 ] && exec /usr/lib64/sa/sa1 1 1

You have to replace the number 10 with the frequency of your choice. Let’s make it 1 minute instead of 10 minutes.
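
After the edit, the cron entry should look like the below (a 1-minute schedule) –

# Activity reports every 1 minute everyday
*/1 * * * * root [ -x /usr/lib64/sa/sa1 ] && exec /usr/lib64/sa/sa1 1 1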

After editing the file, wait for at least one interval of your chosen frequency to pass, and then verify the change using the sar command.

kerneltalks:~ # sar
Linux 5.3.18-22-default (kerneltalks)      08/20/20        _x86_64_        (1 CPU)

14:16:18     LINUX RESTART      (1 CPU)

14:20:01        CPU     %user     %nice   %system   %iowait    %steal     %idle
14:21:01        all      0.02      0.00      0.02      0.00      0.00     99.97
14:22:01        all      0.02      0.00      0.03      0.00      0.02     99.93
14:23:01        all      0.00      0.00      0.00      0.00      0.00    100.00
14:24:01        all      0.02      0.00      0.02      0.00      0.00     99.97
Average:        all      0.01      0.00      0.02      0.00      0.00     99.97

You can now see that sar is collecting data points at a frequency of 1 minute.

How to customize sar log rotation?

sar log rotation is controlled by the /etc/sysstat/sysstat file. You can configure the below parameters in the file.

kerneltalks:~ # cat /etc/sysstat/sysstat|grep -v ^#|grep -v ^$
HISTORY=60
COMPRESSAFTER=10
SADC_OPTIONS=" -S ALL"
SA_DIR=/var/log/sa
ZIP="xz"

The file has a description for each parameter, and they are self-explanatory. You can edit each parameter as per your requirement and restart the sysstat service.
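
On a systemd-based system, the restart looks like the below –

kerneltalks:~ # systemctl restart sysstat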

Here is a short description of each parameter from the file for your quick reference –

  • HISTORY How long to keep log files (in days).
  • COMPRESSAFTER Compress sa and sar files older than (in days)
  • SADC_OPTIONS Parameters for the system activity data collector
  • SA_DIR Directory where sa and sar files are saved.
  • ZIP Compression program to use. xz, gzip or bzip2

Run commands & copy files on salt clients from SUSE Manager Server

Let’s check out the salt CLI a bit!

In this article, we will walk you through a list of useful commands to interact with salt clients and get your work done.

We have covered SUSE Manager right from installation to configuration and client registration in our past articles. For now, let’s dive into a list of commands you can use to complete tasks on salt clients remotely via SUSE Manager.

You can always check out the list of salt modules available to choose from. I am listing only a few of them here that are useful in day-to-day tasks. A few of these tasks can be done from the SUSE Manager UI as well, but if you want to script them, then the salt CLI is a much better option.

In the below examples, we have our SUSE Manager kerneltalks and salt client k-client1

Copy files from SUSE Manager to salt clients

There are two ways to copy a file. If you are copying simple text files, then the below command is just fine for you: salt-cp <clientname/FQDN> <source> <destination>

kerneltalks:~ # salt-cp k-client1 test1 /tmp/
k-client1:
    ----------
    /tmp/test1:
        True

Here we copied the test1 file in the current directory from SUSE Manager to k-client1:/tmp.

salt-cp treats the files in question as text files and hence should not be used for binary files. It will corrupt binary files or just fail to copy them. So if I try to copy a gzip file from SUSE Manager, I see the below error –

kerneltalks:~ # salt-cp k-client1 test2.gz /tmp/
[ERROR   ] An un-handled exception was caught by salt's global exception handler:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
Traceback (most recent call last):
  File "/usr/bin/salt-cp", line 10, in <module>
    salt_cp()
  File "/usr/lib/python3.6/site-packages/salt/scripts.py", line 418, in salt_cp
    client.run()
  File "/usr/lib/python3.6/site-packages/salt/cli/cp.py", line 52, in run
    cp_.run()
  File "/usr/lib/python3.6/site-packages/salt/cli/cp.py", line 142, in run
    ret = self.run_oldstyle()
  File "/usr/lib/python3.6/site-packages/salt/cli/cp.py", line 153, in run_oldstyle
    arg = [self._load_files(), self.opts['dest']]
  File "/usr/lib/python3.6/site-packages/salt/cli/cp.py", line 126, in _load_files
    files.update(self._file_dict(fn_))
  File "/usr/lib/python3.6/site-packages/salt/cli/cp.py", line 115, in _file_dict
    data = fp_.read()
  File "/usr/lib64/python3.6/codecs.py", line 321, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte

In such cases, you can use the below salt module to copy files from SUSE Manager to salt clients. For that, you need to keep your file under the /srv/salt directory on the SUSE Manager server.

kerneltalks:/srv/salt # ls -lrt
total 4
-rw-r--r-- 1 root root 44 Apr  3 12:26 test2.gz
kerneltalks:~ # salt k-client1 cp.get_file salt://test2.gz /tmp/
k-client1:
    /tmp/test2.gz

Now we have successfully copied the gzip file from SUSE Manager kerneltalks:/srv/salt/test2.gz to salt client k-client1:/tmp.

Execute remote commands on salt clients from SUSE Manager

Now the part where we run commands on the salt client from SUSE Manager. The command output is returned to you in the current session. You can run a couple of commands together separated by ;, same as in the shell.

kerneltalks:/srv/salt # salt k-client1 cmd.run 'df -Ph; date'
k-client1:
    Filesystem      Size  Used Avail Use% Mounted on
    devtmpfs        489M     0  489M   0% /dev
    tmpfs           496M   12K  496M   1% /dev/shm
    tmpfs           496M   14M  482M   3% /run
    tmpfs           496M     0  496M   0% /sys/fs/cgroup
    /dev/xvda1      9.8G  1.6G  7.7G  17% /
    Fri Apr  3 12:30:49 UTC 2020

Here we successfully ran the df -Ph and date commands on the salt client remotely from SUSE Manager.

If you have multiple commands to run, bundle them into a script, copy it over to the client using the above method, and then execute the script on the client from SUSE Manager using the cmd.run module, as sketched below.
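
A minimal sketch of that flow, assuming a hypothetical script healthcheck.sh placed under /srv/salt on the SUSE Manager server –

kerneltalks:~ # salt k-client1 cp.get_file salt://healthcheck.sh /tmp/healthcheck.sh
kerneltalks:~ # salt k-client1 cmd.run 'chmod +x /tmp/healthcheck.sh; /tmp/healthcheck.sh'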

If you see the below error, it means the client you mentioned is not registered with SUSE Manager, or you have misspelled the client name (try using the FQDN) –

kerneltalks:~ # salt-cp k-client1 test1 /tmp/
No minions matched the target. No command was sent, no jid was assigned.

Installing packages on salt client using salt cli

You can execute this task from the SUSE Manager web UI as well. But if you want to script it, then the salt CLI is a better option.

Installing a package is an easy task. Use the pkg.install salt module and submit one or more packages to be installed on the remote salt system.

Install a single package using –

kerneltalks:~ # salt k-client1 pkg.install 'telnet'
k-client1:
    ----------
    telnet:
        ----------
        new:
            1.2-165.63
        old:

Install multiple packages using –

kerneltalks:~ # salt k-client1 pkg.install pkgs='["telnet", "apache2"]'
k-client1:
    ----------
    apache2:
        ----------
        new:
            2.4.23-29.40.1
        old:
    apache2-prefork:
        ----------
        new:
            2.4.23-29.40.1
        old:
    apache2-utils:
        ----------
        new:
            2.4.23-29.40.1
        old:
    libapr-util1:
        ----------
        new:
            1.5.3-2.8.1
        old:
    libapr1:
        ----------
        new:
            1.5.1-4.5.1
        old:
    liblua5_2:
        ----------
        new:
            5.2.4-6.1
        old:
    libnghttp2-14:
        ----------
        new:
            1.7.1-1.84
        old:
    telnet:
        ----------
        new:
            1.2-165.63
        old:

Here you can see it installed the telnet and apache2 packages remotely along with their dependencies. Note that if a package is already installed and an updated version is available, salt will update it. Hence you can see new and old version details in the output.
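
Similarly, if you need to remove a package, the pkg.remove module works the same way (a quick sketch) –

kerneltalks:~ # salt k-client1 pkg.remove telnet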

Installing Ansible and running the first command

How to install Ansible and how to run a simple command using Ansible.

Ansible installation

In this article, we will walk you through the step-by-step procedure to install Ansible and then run the first ping command on its clients.

We will be using our lab setup built using containers for this exercise. In all our Ansible-related articles, we refer to the Ansible server as the Ansible control machine, i.e. where the Ansible software is installed and running. Ansible clients are the machines being managed by Ansible.

Pre-requisite

Ansible control machine requirements

It should be a Linux machine; Ansible cannot be installed on Windows OS. Secondly, it should have Python installed.

It’s preferred to have passwordless SSH set up between the Ansible control machine and managed machines for smooth execution, but it is not mandatory.

Ansible managed machine requirement

It should have libselinux-python installed if SELinux is enabled, which it is most of the time.

A Python interpreter should be installed.


Ansible installation

Installation of Ansible is an easy task. It’s a package, so install it like you install any other package on your Linux machine. Make sure you have subscribed to the proper repo that has the Ansible engine available to install.

I enabled the EPEL repo on my Oracle Linux machine running in VirtualBox and installed it using –

[root@ansible-srv ~]# yum install ansible

Once the installation is done, you need to add your client list in the file /etc/ansible/hosts. Our setup file looks like below:

[root@ansible-srv ~]# cat /etc/ansible/hosts
[all:vars]
ansible_user=ansible-usr

[webserver]
k-web1 ansible_host=172.17.0.9
k-web2 ansible_host=172.17.0.3

[middleware]
k-app1 ansible_host=172.17.0.4
k-app2 ansible_host=172.17.0.5

[database]
k-db1 ansible_host=172.17.0.6

Here, we defined the Ansible default user in the inventory file itself. Since we do not have DNS and are using containers in our setup, I defined the hostnames and IPs as mentioned above.
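
As a quick sanity check that the inventory parses, you can ask Ansible to list the hosts it resolves; the output should look something like the below –

[root@ansible-srv ~]# ansible all --list-hosts
  hosts (5):
    k-web1
    k-web2
    k-app1
    k-app2
    k-db1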


Running first Ansible command

As I explained earlier in the lab setup article, I configured passwordless SSH from the Ansible control machine to the managed nodes.

Let’s run our first ansible command, i.e. ping one of the hosts. The command syntax is – ansible -m <module> <target>

[root@ansible-srv ~]# ansible -m ping k-db1
k-db1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}

We used the ping module here, and the target host is k-db1. We received back pong, i.e. the command executed successfully. In this output –

  • SUCCESS is the command exit status
  • ansible_facts is data collected by Ansible while executing the command on the managed node.
  • changed indicates whether the task made any changes on the managed node

Let’s run another simple command, like hostname –

[root@ansible-srv ~]# ansible -m command -a hostname k-db1
k-db1 | CHANGED | rc=0 >>
k-db1

Here, in the second line, you see the command stdout, i.e. the output. The return code rc, i.e. the exit code of the command, is 0, confirming that the command execution was successful.
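
The same syntax works against a whole inventory group, or against all hosts, e.g. –

[root@ansible-srv ~]# ansible -m ping webserver
[root@ansible-srv ~]# ansible -m ping all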

SEP 14 antivirus client commands in Linux

A list of Symantec Endpoint Protection 14 antivirus client commands on Linux, plus a few errors along with their possible solutions

SEP Linux client commands

In this article, we will walk you through a few SEP 14 antivirus agent commands that will help you troubleshoot related issues, and then give solutions to some frequently seen errors.

Symantec Endpoint Protection 14 Linux client commands

How to restart SEP 14 Linux client processes

Stop the SEP 14 Linux client using the single command below –

[root@kerneltalks tmp]# /etc/init.d/symcfgd stop
Stopping smcd: ..                                                    done

Stopping rtvscand: ..                                                done

Stopping symcfgd: .                                                  done

Start the SEP 14 Linux client using the below commands, in the given order –

[root@kerneltalks tmp]# /etc/init.d/symcfgd start
Starting symcfgd:                                                    done

[root@kerneltalks tmp]# /etc/init.d/rtvscand start
Starting rtvscand:                                                   done

[root@kerneltalks tmp]# /etc/init.d/smcd start
Starting smcd:                                                       done

How to uninstall SEP 14 client from Linux machine

[root@kerneltalks tmp]# /opt/Symantec/symantec_antivirus/uninstall.sh
Are you sure to remove SEP for Linux from your machine?
WARNING: After SEP for Linux is removed, your machine will not be protected.
Do you want to remove SEP for Linux? Y[es]|N[o]: N
Y
Starting to uninstall Symantec Endpoint Protection for Linux
Begin removing GUI component
GUI component removed successfully
Begin removing Auto-Protect component
symcfgd is running
rtvscand is not running
smcd is not running
Auto-Protect component removed successfully
Begin removing virus protection component
smcd is running
rtvscand is running
symcfgd is running
Virus protection component removed successfully
Uninstallation completed
The log file for uninstallation of Symantec Endpoint Protection for Linux is under: /root/sepfl-uninstall.log

All the below commands use the sav binary, which is located in /opt/Symantec/symantec_antivirus.

Display auto-protect module state

[root@kerneltalks symantec_antivirus]# ./sav info -a
Enabled

Display virus definition status

[root@kerneltalks symantec_antivirus]# ./sav info -d
11/24/2019 rev. 2

Check if the client is self-managed or being managed by a SEPM server. The output is the hostname or IP of the server managing the client.

[root@kerneltalks symantec_antivirus]# ./sav manage -s 
syman01

Display the management server group to which the current client belongs.

[root@kerneltalks symantec_antivirus]# ./sav manage -g 
My Company\Default Group

Run an immediate virus definition update

[root@kerneltalks symantec_antivirus]# ./sav liveupdate -u
Update was successful

Trigger the heartbeat immediately and download the profile from the SEPM server

[root@kerneltalks symantec_antivirus]# ./sav manage -h
Requesting updated policy from the Symantec Endpoint Protection Manager ...

Import a sylink file into the client

[root@kerneltalks symantec_antivirus]# ./sav manage -i /tmp/sylink.xml
Imported successfully.

Now, let’s look at a few errors and their possible solutions –

SAV manage server is offline
[root@kerneltalks symantec_antivirus]# ./sav manage -s
Offline

This means your client is not able to communicate with the SEPM server. Make sure no firewall (internal to the OS, like iptables, or external) is blocking the traffic, and that you have the proper proxy configuration in place. If it’s an internal server, make sure you have excluded it from the proxy as a no_proxy host.

Refer to the SEP communication ports here; it will help you drill down communication issues.

LiveUpdate fails

The best way to troubleshoot LiveUpdate issues is to go through the log file /opt/Symantec/LiveUpdate/Logs/lux.log. It has descriptive messages about the error, which helps to quickly drill down to the problem.

[root@kerneltalks symantec_antivirus]# ./sav liveupdate -u
sep::lux::Cseplux: Failed to run session, error code: 0x80010830
Live update session failed. Please enable debug logging for more information
Unable to perform update

Or an error logged in the lux.log file like below –

Result Message: FAIL - failed to select server
Status Message: Server was not selected

This means the client is unable to reach the LiveUpdate server or the LiveUpdate Administrator (LUA). Again, the same troubleshooting steps as above apply.

How to define Ansible default user

A quick post to explain the default Ansible user and where it can be changed.

Ansible user configuration

Ansible by default manages its clients over the SSH protocol. So the obvious question is: what is the default user Ansible uses to connect to or execute commands on its clients? Followed by: how do you change the Ansible default user? We will answer these questions in this article.

If you are running default configurations and did not define an Ansible user anywhere, then the user running the ansible command (the current user) will be used to communicate with the client over SSH.

Define Ansible user in the configuration file

The Ansible default user can be defined in the Ansible configuration file /etc/ansible/ansible.cfg, in the below section, by un-commenting the remote_user line and replacing root with the user of your choice –

# default user to use for playbooks if user is not specified
# (/usr/bin/ansible will use current user as default)
#remote_user = root

It clearly states here that if the default user is not defined in the configuration file, then the currently logged-in user (on the control machine, i.e. the Ansible server) will be used to execute commands on Ansible clients.
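
For example, to make Ansible connect as a dedicated user (ansible-usr here is just an illustrative name), the un-commented line would read –

remote_user = ansible-usr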

Define Ansible user in Inventory

Another place you can define the Ansible user is the inventory, i.e. the client host list file. The default hosts file Ansible uses is /etc/ansible/hosts. You can add the below snippet in this file to define the Ansible user for your tasks or playbooks.

[all:vars]
ansible_user=ansible-usr

where ansible-usr is the user you want Ansible to use while connecting to clients over SSH. Replace ansible-usr with the user of your choice.
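
You can also override the user for an individual host on its inventory line; for example (the host and user below are illustrative) –

[webserver]
k-web1 ansible_host=172.17.0.9 ansible_user=web-usr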