SPYRUS WorkSafe Linux2Go : Your personal Linux machine on USB

Complete Spyrus WorkSafe Linux2Go drive review covering hardware and software features along with performance parameters.

Spyrus Linux2go device review by KernelTalks

Spyrus WorkSafe Linux2Go is your personal Linux machine on a USB stick with military-grade security. Do you work in IT and move from office to office frequently? Do you travel a lot? Do you want to carry less luggage and skip hauling a laptop everywhere you go? Then Spyrus WorkSafe is the solution for you. It's a secured USB device with SSD storage that carries a complete OS and is loaded with heavy security features. You just need a working computer with a keyboard, mouse, and USB port. Plug in your Spyrus device, boot from the USB, and you are good to go. Your workplace, your personal computer, is with you all the time! And it's so well packed with hardware and software layers of security that you shouldn't worry about someone trying to tamper with it.

Let’s get more familiar with this device and get into its specification.

Physical dimensions

It's a rock-solid device with a black metal casing. The casing is filled inside with epoxy, which protects the device from physical shocks and tampering attempts.

Linux2Go drive casing

The metal cap is large enough to cover the port well. It clicks into place when closed, so it won't come loose while the device sits in a bag, drawer, etc., keeping the device protected from dirt, moisture, and physical damage when not in use. The cap is secured with a soft rubber tether so you won't lose it while it's open. With the cap firmly closed, the device can withstand 1 meter of water depth for several minutes without any impact. This kind of physical protection leaves very little chance of physical damage to the device.

The device measures 86.1 mm x 24.2 mm x 10.8 mm, which is quite a bit larger than normal USB storage sticks. With this thickness, it's not possible to connect another USB device in a neighboring port; you may want to use a USB extension cable to reach the neighboring port. The casing has a strap hole to attach a keychain or similar accessories.

Also read: How to boot Spyrus Linux2Go drive tutorial with video

Software security features

Along with physical military-grade security, the device has a lot to offer on the software security front as well.

Spyrus linux2go drive

Device booting is protected by the password-protected ToughBoot bootloader; only with the ToughBoot password can you boot the device. When plugged into an already running machine without booting from it, you can use it for smart card logon. The device also ships with CCID (Chip Card Interface Device) support and an embedded readerless smart card for authentication. This smart card authentication can be used for secured network access or with PKI digital certificate functions.

The device ships with BitLocker, which offers full disk encryption for an extra layer of security; you can even create a separate encrypted partition with it. The device also offers military-grade XTS-AES 256 hardware-based encryption that happens entirely on the device itself. Everything needed for hardware-based encryption is built into the device, so it doesn't rely on the host machine's resources for encryption.

Spyrus offers central management of devices through SEMS (Spyrus Enterprise Management System). It lets you manage your devices centrally through one console, and it can help when you forget your device passwords. It can also enable or disable drives remotely, so you have full control of the device whether or not it is physically with you.

The Spyrus Linux2Go device can also be configured with a hardware read-only mode, which adds security for very sensitive data placed on it. A more complete technical feature list can be found on their webpage.

Performance

The device is pretty quick to boot; it comes up to the ToughBoot password prompt within a few seconds. My Spyrus WorkSafe Linux2Go drive booted in 17 seconds. Read/write speeds seem promising: Spyrus claims sequential reads up to 249 MB/sec and sequential writes up to 238 MB/sec.

The device does get warm after long use. Performance is excellent on USB 3.0 ports, and the drive is backward compatible with USB 2.0 ports as well, though you won't get optimum performance on 2.0 ports. Spyrus guarantees data retention on the drive for 10 years, which is pretty good.

The Spyrus WorkSafe Linux2Go drive datasheet is here for your reference; it contains more numbers and performance parameters.

Where to buy

Spyrus Linux2Go drives are available in 32GB, 64GB, 128GB, 256GB, 512GB & 1TB sizes. At the time of writing this review, the drives are not available to buy online directly; you need to contact Spyrus for your purchase. Spyrus does have an online store here, but Linux2Go drives are not on sale there. Pricing details are not available online from Spyrus, but it works out roughly to $2.5 to $4 per GB, with larger capacities having a lower price per GB.

So why wait? Go get your Linux2Go drive and carry your Linux world with you wherever you go!

How to extend EBS & filesystem online on AWS server

Learn how to extend EBS & filesystem online on the AWS EC2 Linux server. Step by step procedure along with screenshots and sample outputs.

Extend EBS & filesystem online on AWS

In this article, we will walk you through the steps to extend an EBS volume attached to an EC2 Linux server and then extend the filesystem on it at the Linux level using LVM. Read here about how to attach an EBS volume to an EC2 server in AWS.

It involves two steps –

  1. Extend the attached EBS volume on AWS console
  2. Extend file system using LVM

Current setup :

We have a 10GB EBS volume attached to a Linux EC2 server. A 9.9GB /testmount filesystem is created on this disk at the OS level. We will be increasing the EBS volume to 16GB and growing the filesystem to 15GB.

root@kerneltalks # lsblk
NAME                   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda                   202:0    0   10G  0 disk
└─xvda1                202:1    0   10G  0 part /
xvdf                   202:80   0   10G  0 disk
└─datavg-datalv (dm-0) 253:0    0  9.9G  0 lvm  /testmount

Step 1: How to extend EBS volume attached to the EC2 server in AWS

Log in to the AWS EC2 console and click Volumes under Elastic Block Store in the left-hand menu. Then select the volume you want to extend. From the Actions drop-down menu select Modify Volume. You will see the below screen:

Modify EBS volume in AWS

Change the size (in our case we changed it from 10GB to 16GB) and click Modify. Accept the confirmation dialog by clicking Yes.

Once the modify operation succeeds, refresh the volume list page and confirm the new size is shown against the volume you just modified. Your EBS volume is now extended at the AWS level; next, you need to extend it at the OS level.
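
If you prefer the command line over the console, the same resize can be done with the AWS CLI. This is a minimal sketch only: vol-0abc123 is a placeholder volume ID, and it assumes the AWS CLI is installed and configured with IAM permissions to modify volumes.

root@kerneltalks # aws ec2 modify-volume --volume-id vol-0abc123 --size 16
root@kerneltalks # aws ec2 describe-volumes-modifications --volume-ids vol-0abc123

The second command lets you watch the modification state; once it reports the resize as completed or optimizing, you can move on to the OS-level steps.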

Step 2: How to re-scan new size of EBS volume in Linux & extend filesystem online

Since the EBS volume size has been changed, you need to rescan it in the OS so that the kernel and volume manager (LVM in our case) take note of the new size. In LVM, you can use the pvresize command to rescan the extended EBS volume.

root@kerneltalks # pvresize /dev/xvdf
  Physical volume "/dev/xvdf" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized

After a successful rescan, check whether the new size is identified by the kernel using the lsblk command.

root@kerneltalks # lsblk
NAME                   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda                   202:0    0   10G  0 disk
└─xvda1                202:1    0   10G  0 part /
xvdf                   202:80   0   16G  0 disk
└─datavg-datalv (dm-0) 253:0    0  9.9G  0 lvm  /testmount

You can see in the above output that the xvdf disk now shows a size of 16G, so the new disk size has been identified. Now proceed to extend the filesystem online using lvextend and resize2fs. Read how to extend the filesystem online for more details.

root@kerneltalks # lvextend -L 15G /dev/datavg/datalv
  Extending logical volume datalv to 15.00 GiB
  Logical volume datalv successfully resized

root@kerneltalks # resize2fs /dev/datavg/datalv
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/datavg/datalv is mounted on /testmount; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/datavg/datalv to 3932160 (4k) blocks.
The filesystem on /dev/datavg/datalv is now 3932160 blocks long.
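
As a side note, lvextend can also grow the filesystem in the same step with its -r (--resizefs) flag, which calls the appropriate filesystem resize tool for you. A minimal sketch, assuming the same volume group and logical volume names as above:

root@kerneltalks # lvextend -r -L 15G /dev/datavg/datalv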

Check whether the mount point shows the new, bigger size.

root@kerneltalks # df -Ph /testmount
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/datavg-datalv   15G  153M   14G   2% /testmount

Yup, as planned, /testmount is now 15G in size, up from its earlier 9.9G.

12 useful zypper command examples

Learn the zypper command with 12 useful examples along with sample outputs. zypper is used for package and patch management on SUSE Linux systems.

zypper command examples

zypper is a package management tool powered by the ZYpp package manager engine. SUSE Linux uses zypper for package management. In this article, we share 12 useful zypper commands, along with examples, that are helpful for your day-to-day sysadmin tasks.

Without any arguments, the zypper command lists all the available switches. It's handier than referring to the man page, which is much more detailed.

root@kerneltalks # zypper
  Usage:
        zypper [--global-options] <command> [--command-options] [arguments]
        zypper <subcommand> [--command-options] [arguments]

  Global Options:
        --help, -h              Help.
        --version, -V           Output the version number.
        --promptids             Output a list of zypper's user prompts.
        --config, -c <file>     Use specified config file instead of the default.
        --userdata <string>     User defined transaction id used in history and plugins.
        --quiet, -q             Suppress normal output, print only error
                                messages.
        --verbose, -v           Increase verbosity.
        --color
        --no-color              Whether to use colors in output if tty supports it.
        --no-abbrev, -A         Do not abbreviate text in tables.
        --table-style, -s       Table style (integer).
        --non-interactive, -n   Do not ask anything, use default answers
                                automatically.
        --non-interactive-include-reboot-patches
                                Do not treat patches as interactive, which have
                                the rebootSuggested-flag set.
        --xmlout, -x            Switch to XML output.
        --ignore-unknown, -i    Ignore unknown packages.

        --reposd-dir, -D <dir>  Use alternative repository definition file
                                directory.
        --cache-dir, -C <dir>   Use alternative directory for all caches.
        --raw-cache-dir <dir>   Use alternative raw meta-data cache directory.
        --solv-cache-dir <dir>  Use alternative solv file cache directory.
        --pkg-cache-dir <dir>   Use alternative package cache directory.

     Repository Options:
        --no-gpg-checks         Ignore GPG check failures and continue.
        --gpg-auto-import-keys  Automatically trust and import new repository
                                signing keys.
        --plus-repo, -p <URI>   Use an additional repository.
        --plus-content <tag>    Additionally use disabled repositories providing a specific keyword.
                                Try '--plus-content debug' to enable repos indicating to provide debug packages.
        --disable-repositories  Do not read meta-data from repositories.
        --no-refresh            Do not refresh the repositories.
        --no-cd                 Ignore CD/DVD repositories.
        --no-remote             Ignore remote repositories.
        --releasever            Set the value of $releasever in all .repo files (default: distribution version)

     Target Options:
        --root, -R <dir>        Operate on a different root directory.
        --disable-system-resolvables
                                Do not read installed packages.

  Commands:
        help, ?                 Print help.
        shell, sh               Accept multiple commands at once.

     Repository Management:
        repos, lr               List all defined repositories.
        addrepo, ar             Add a new repository.
        removerepo, rr          Remove specified repository.
        renamerepo, nr          Rename specified repository.
        modifyrepo, mr          Modify specified repository.
        refresh, ref            Refresh all repositories.
        clean                   Clean local caches.

     Service Management:
        services, ls            List all defined services.
        addservice, as          Add a new service.
        modifyservice, ms       Modify specified service.
        removeservice, rs       Remove specified service.
        refresh-services, refs  Refresh all services.

     Software Management:
        install, in             Install packages.
        remove, rm              Remove packages.
        verify, ve              Verify integrity of package dependencies.
        source-install, si      Install source packages and their build
                                dependencies.
        install-new-recommends, inr
                                Install newly added packages recommended
                                by installed packages.

     Update Management:
        update, up              Update installed packages with newer versions.
        list-updates, lu        List available updates.
        patch                   Install needed patches.
        list-patches, lp        List needed patches.
        dist-upgrade, dup       Perform a distribution upgrade.
        patch-check, pchk       Check for patches.

     Querying:
        search, se              Search for packages matching a pattern.
        info, if                Show full information for specified packages.
        patch-info              Show full information for specified patches.
        pattern-info            Show full information for specified patterns.
        product-info            Show full information for specified products.
        patches, pch            List all available patches.
        packages, pa            List all available packages.
        patterns, pt            List all available patterns.
        products, pd            List all available products.
        what-provides, wp       List packages providing specified capability.

     Package Locks:
        addlock, al             Add a package lock.
        removelock, rl          Remove a package lock.
        locks, ll               List current package locks.
        cleanlocks, cl          Remove unused locks.

     Other Commands:
        versioncmp, vcmp        Compare two version strings.
        targetos, tos           Print the target operating system ID string.
        licenses                Print report about licenses and EULAs of
                                installed packages.
        download                Download rpms specified on the commandline to a local directory.
        source-download         Download source rpms for all installed packages
                                to a local directory.

     Subcommands:
        subcommand              Lists available subcommands.

Type 'zypper help <command>' to get command-specific help.

How to install the package using zypper

zypper takes the in or install switch to install a package on your system. It's the same as yum package installation: supply the package name as an argument, and the package manager (zypper here) will resolve all dependencies and install them along with your required package.

# zypper install telnet
Refreshing service 'SMT-http_smt-ec2_susecloud_net'.
Refreshing service 'cloud_update'.
Loading repository data...
Reading installed packages...
Resolving package dependencies...

The following NEW package is going to be installed:
  telnet

1 new package to install.
Overall download size: 51.8 KiB. Already cached: 0 B. After the operation, additional 113.3 KiB will be used.
Continue? [y/n/...? shows all options] (y): y
Retrieving package telnet-1.2-165.63.x86_64                                                                                        (1/1),  51.8 KiB (113.3 KiB unpacked)
Retrieving: telnet-1.2-165.63.x86_64.rpm .........................................................................................................................[done]
Checking for file conflicts: .....................................................................................................................................[done]
(1/1) Installing: telnet-1.2-165.63.x86_64 .......................................................................................................................[done]

The above output, in which we installed the telnet package, is for your reference.

Suggested read: Install packages in YUM and APT systems

How to remove package using zypper

For erasing or removing packages in SUSE Linux, use zypper with the remove or rm switch.

root@kerneltalks # zypper rm telnet
Loading repository data...
Reading installed packages...
Resolving package dependencies...

The following package is going to be REMOVED:
  telnet

1 package to remove.
After the operation, 113.3 KiB will be freed.
Continue? [y/n/...? shows all options] (y): y
(1/1) Removing telnet-1.2-165.63.x86_64 ..........................................................................................................................[done]

Here we removed the previously installed telnet package.

Check dependencies and verify the integrity of installed packages using zypper

There are times when a package is force-installed while ignoring dependencies. zypper gives you the power to scan all installed packages and check their dependencies too. If any dependency is missing, it offers to install or remove it, and thus maintains the integrity of your installed packages.

Use the verify or ve switch with zypper to check the integrity of installed packages.

root@kerneltalks # zypper ve
Refreshing service 'SMT-http_smt-ec2_susecloud_net'.
Refreshing service 'cloud_update'.
Loading repository data...
Reading installed packages...

Dependencies of all installed packages are satisfied.

In the above output, the last line confirms that all dependencies of installed packages are satisfied and no action is required.

How to download package using zypper in Suse Linux

zypper offers a way to download a package to a local directory without installing it. You can use the downloaded package on another system with the same configuration. Packages are downloaded to the /var/cache/zypp/packages/<repo>/<arch>/ directory.

root@kerneltalks # zypper download telnet
Refreshing service 'SMT-http_smt-ec2_susecloud_net'.
Refreshing service 'cloud_update'.
Loading repository data...
Reading installed packages...
Retrieving package telnet-1.2-165.63.x86_64                                                                                        (1/1),  51.8 KiB (113.3 KiB unpacked)
(1/1) /var/cache/zypp/packages/SMT-http_smt-ec2_susecloud_net:SLES12-SP3-Pool/x86_64/telnet-1.2-165.63.x86_64.rpm ................................................[done]

download: Done.

# ls -lrt /var/cache/zypp/packages/SMT-http_smt-ec2_susecloud_net:SLES12-SP3-Pool/x86_64/
total 52
-rw-r--r-- 1 root root 53025 Feb 21 03:17 telnet-1.2-165.63.x86_64.rpm

You can see we downloaded the telnet package locally using zypper.

Suggested read: Download packages in YUM and APT systems without installing

How to list available package update in zypper

zypper allows you to view all available updates for your installed packages so that you can plan update activity in advance. Use the list-updates or lu switch to show a list of all available updates for installed packages.

root@kerneltalks # zypper lu
Refreshing service 'SMT-http_smt-ec2_susecloud_net'.
Refreshing service 'cloud_update'.
Loading repository data...
Reading installed packages...
S | Repository                        | Name                       | Current Version               | Available Version                  | Arch
--+-----------------------------------+----------------------------+-------------------------------+------------------------------------+-------
v | SLES12-SP3-Updates                | at-spi2-core               | 2.20.2-12.3                   | 2.20.2-14.3.1                      | x86_64
v | SLES12-SP3-Updates                | bash                       | 4.3-82.1                      | 4.3-83.5.2                         | x86_64
v | SLES12-SP3-Updates                | ca-certificates-mozilla    | 2.7-11.1                      | 2.22-12.3.1                        | noarch
v | SLE-Module-Containers12-Updates   | containerd                 | 0.2.5+gitr639_422e31c-20.2    | 0.2.9+gitr706_06b9cb351610-16.8.1  | x86_64
v | SLES12-SP3-Updates                | crash                      | 7.1.8-4.3.1                   | 7.1.8-4.6.2                        | x86_64
v | SLES12-SP3-Updates                | rsync                      | 3.1.0-12.1                    | 3.1.0-13.10.1                      | x86_64

The output is formatted for easy reading. Column-wise, it shows the repository the package belongs to, the package name, the currently installed version, the newly available version, and the architecture.

List and install patches in SUSE Linux

Use the list-patches or lp switch to display all available patches for your SUSE Linux system that need to be applied.

root@kerneltalks # zypper lp
Refreshing service 'SMT-http_smt-ec2_susecloud_net'.
Refreshing service 'cloud_update'.
Loading repository data...
Reading installed packages...

Repository                        | Name                                     | Category    | Severity  | Interactive | Status | Summary                                 
----------------------------------+------------------------------------------+-------------+-----------+-------------+--------+------------------------------------------------------------------------------------
SLE-Module-Containers12-Updates   | SUSE-SLE-Module-Containers-12-2018-273   | security    | important | ---         | needed | Version update for docker, docker-runc, containerd, golang-github-docker-libnetwork
SLE-Module-Containers12-Updates   | SUSE-SLE-Module-Containers-12-2018-62    | recommended | low       | ---         | needed | Recommended update for sle2docker       
SLE-Module-Public-Cloud12-Updates | SUSE-SLE-Module-Public-Cloud-12-2018-268 | recommended | low       | ---         | needed | Recommended update for python-ecdsa     
SLES12-SP3-Updates                | SUSE-SLE-SERVER-12-SP3-2018-116          | security    | moderate  | ---         | needed | Security update for rsync               
---- output clipped ----
SLES12-SP3-Updates                | SUSE-SLE-SERVER-12-SP3-2018-89           | security    | moderate  | ---         | needed | Security update for perl-XML-LibXML     
SLES12-SP3-Updates                | SUSE-SLE-SERVER-12-SP3-2018-90           | recommended | low       | ---         | needed | Recommended update for lvm2             

Found 37 applicable patches:
37 patches needed (18 security patches)

The output is nicely organized with respective headers, so you can easily figure out and plan your patch updates accordingly. We can see that out of the 37 patches available on our system, 18 are security patches and need to be applied with high priority!

You can install all needed patches by issuing the zypper patch command, as shown below.
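
For reference, the command itself needs no arguments. You can also combine it with the --non-interactive global switch from the help output above if you want default answers during automated patching; a minimal sketch:

root@kerneltalks # zypper patch
root@kerneltalks # zypper --non-interactive patch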

How to update package using zypper

To update a package using zypper, use the update or up switch followed by the package name. From the list-updates command above, we learned that an rsync package update is available on our server. Let's update it now –

root@kerneltalks # zypper update rsync
Refreshing service 'SMT-http_smt-ec2_susecloud_net'.
Refreshing service 'cloud_update'.
Loading repository data...
Reading installed packages...
Resolving package dependencies...

The following package is going to be upgraded:
  rsync

1 package to upgrade.
Overall download size: 325.2 KiB. Already cached: 0 B. After the operation, additional 64.0 B will be used.
Continue? [y/n/...? shows all options] (y): y
Retrieving package rsync-3.1.0-13.10.1.x86_64                                                                                      (1/1), 325.2 KiB (625.5 KiB unpacked)
Retrieving: rsync-3.1.0-13.10.1.x86_64.rpm .......................................................................................................................[done]
Checking for file conflicts: .....................................................................................................................................[done]
(1/1) Installing: rsync-3.1.0-13.10.1.x86_64 .....................................................................................................................[done]

Search package using zypper in Suse Linux

If you are not sure about the full package name, no worries. You can search packages in zypper by supplying a search string with the se or search switch.

root@kerneltalks # zypper se lvm
Refreshing service 'SMT-http_smt-ec2_susecloud_net'.
Refreshing service 'cloud_update'.
Loading repository data...
Reading installed packages...

S  | Name          | Summary                      | Type
---+---------------+------------------------------+-----------
   | libLLVM       | Libraries for LLVM           | package
   | libLLVM-32bit | Libraries for LLVM           | package
   | llvm          | Low Level Virtual Machine    | package
   | llvm-devel    | Header Files for LLVM        | package
   | lvm2          | Logical Volume Manager Tools | srcpackage
i+ | lvm2          | Logical Volume Manager Tools | package
   | lvm2-devel    | Development files for LVM2   | package

In the above example, we searched for the string lvm and got the list shown above. You can use the Name column value in zypper install/remove/update commands.

Check installed package information using zypper

You can check installed package details using zypper. The info or if switch lists information about the installed package. It can also display details of packages that are not installed; in that case, the Installed field shows the value No.

root@kerneltalks # zypper info rsync
Refreshing service 'SMT-http_smt-ec2_susecloud_net'.
Refreshing service 'cloud_update'.
Loading repository data...
Reading installed packages...


Information for package rsync:
------------------------------
Repository     : SLES12-SP3-Updates
Name           : rsync
Version        : 3.1.0-13.10.1
Arch           : x86_64
Vendor         : SUSE LLC <https://www.suse.com/>
Support Level  : Level 3
Installed Size : 625.5 KiB
Installed      : Yes
Status         : up-to-date
Source package : rsync-3.1.0-13.10.1.src
Summary        : Versatile tool for fast incremental file transfer
Description    :
    Rsync is a fast and extraordinarily versatile file  copying  tool. It can copy
    locally, to/from another host over any remote shell, or to/from a remote rsync
    daemon. It offers a large number of options that control every aspect of its
    behavior and permit very flexible specification of the set of files to be
    copied. It is famous for its delta-transfer algorithm, which reduces the amount
    of data sent over the network by sending only the differences between the
    source files and the existing files in the destination. Rsync is widely used
    for backups and mirroring and as an improved copy command for everyday use.

List repositories using zypper

To list repositories, use the lr or repos switch with the zypper command. It lists all available repositories, both enabled and disabled.

root@kerneltalks # zypper lr
Refreshing service 'cloud_update'.
Repository priorities are without effect. All enabled repositories share the same priority.

#  | Alias                                                                                | Name                                                  | Enabled | GPG Check | Refresh
---+--------------------------------------------------------------------------------------+-------------------------------------------------------+---------+-----------+--------
 1 | SMT-http_smt-ec2_susecloud_net:SLE-Module-Adv-Systems-Management12-Debuginfo-Pool    | SLE-Module-Adv-Systems-Management12-Debuginfo-Pool    | No      | ----      | ----
 2 | SMT-http_smt-ec2_susecloud_net:SLE-Module-Adv-Systems-Management12-Debuginfo-Updates | SLE-Module-Adv-Systems-Management12-Debuginfo-Updates | No      | ----      | ----
 3 | SMT-http_smt-ec2_susecloud_net:SLE-Module-Adv-Systems-Management12-Pool              | SLE-Module-Adv-Systems-Management12-Pool              | Yes     | (r ) Yes  | No
 4 | SMT-http_smt-ec2_susecloud_net:SLE-Module-Adv-Systems-Management12-Updates           | SLE-Module-Adv-Systems-Management12-Updates           | Yes     | (r ) Yes  | Yes
 5 | SMT-http_smt-ec2_susecloud_net:SLE-Module-Containers12-Debuginfo-Pool                | SLE-Module-Containers12-Debuginfo-Pool                | No      | ----      | ----
 6 | SMT-http_smt-ec2_susecloud_net:SLE-Module-Containers12-Debuginfo-Updates             | SLE-Module-Containers12-Debuginfo-Updates             | No      | ----      | ----

Here you need to check the Enabled column to see which repositories are enabled and which are not.

Recommended read : How to list repositories in RHEL & List of online package repositories

Add and remove repo in Suse Linux using zypper

To add a repository you will need the URI of the repo or .repo file, or else you end up with the below error.

root@kerneltalks # zypper addrepo -c SLES12-SP3-Updates
If only one argument is used, it must be a URI pointing to a .repo file.

With the URI, you can add a repo like below:

root@kerneltalks # zypper  addrepo -c http://smt-ec2.susecloud.net/repo/SUSE/Products/SLE-SDK/12-SP3/x86_64/product?credentials=SMT-http_smt-ec2_susecloud_net SLE-SDK12-SP3-Pool
Adding repository 'SLE-SDK12-SP3-Pool' ...........................................................................................................................[done]
Repository 'SLE-SDK12-SP3-Pool' successfully added

URI         : http://smt-ec2.susecloud.net/repo/SUSE/Products/SLE-SDK/12-SP3/x86_64/product?credentials=SMT-http_smt-ec2_susecloud_net
Enabled     : Yes
GPG Check   : Yes
Autorefresh : No
Priority    : 99 (default priority)

Repository priorities are without effect. All enabled repositories share the same priority.

Use the addrepo or ar switch with zypper to add a repo in SUSE, followed by the URI and, lastly, an alias for the repository.

To remove a repo in SUSE, use the removerepo or rr switch with zypper.

root@kerneltalks # zypper removerepo nVidia-Driver-SLE12-SP3
Removing repository 'nVidia-Driver-SLE12-SP3' ....................................................................................................................[done]
Repository 'nVidia-Driver-SLE12-SP3' has been removed.

Clean local zypper cache

Clean up local zypper caches with the zypper clean command –

root@kerneltalks # zypper clean
All repositories have been cleaned up.

How to enable repository using subscription-manager in RHEL

Learn how to enable a repository using subscription-manager in RHEL. The article also includes steps to register the system with Red Hat, attach a subscription, and resolve common errors.

Enable repository using subscription-manager

In this article, we will walk you through the step-by-step process to enable a Red Hat repository on a freshly installed RHEL server.

The repository can be enabled using the subscription-manager command like below –

root@kerneltalks # subscription-manager repos --enable rhel-6-server-rhv-4-agent-beta-debug-rpms
Error: 'rhel-6-server-rhv-4-agent-beta-debug-rpms' does not match a valid repository ID. Use "subscription-manager repos --list" to see valid repositories.

You will see the above error when your subscription is not in place. Let's go through the step-by-step procedure to enable repositories via subscription-manager.

Step 1 : Register your system with Red Hat

We assume you have a freshly installed system that is not yet registered with Red Hat. If your system is already registered, you can skip this step.

You can check whether your system is registered with Red Hat for a subscription using the below command –

# subscription-manager version
server type: This system is currently not registered.
subscription management server: Unknown
subscription management rules: Unknown
subscription-manager: 1.18.10-1.el6
python-rhsm: 1.18.6-1.el6

Here, in the first line of output, you can see the system is not registered. So, let's start by registering the system. You need to use the subscription-manager command with the register switch, along with your Red Hat account credentials.

root@kerneltalks # subscription-manager register
Registering to: subscription.rhsm.redhat.com:443/subscription
Username: admin@kerneltalks.com
Password:
Network error, unable to connect to server. Please see /var/log/rhsm/rhsm.log for more information.

If you are getting the above error, your server is not able to reach Red Hat. Check your internet connection and whether you can resolve site names. Sometimes you will see this error even though you can ping the subscription server; this can happen when you have a proxy server in your environment. In that case, you need to add its details to the file /etc/rhsm/rhsm.conf. The below proxy details should be populated:

# an http proxy server to use
 proxy_hostname =

# port for http proxy server
 proxy_port =

# user name for authenticating to an http proxy, if needed
 proxy_user =

# password for basic http proxy auth, if needed
 proxy_password =

Once you are done, recheck whether subscription-manager has taken up the new proxy details by using the below command –

root@kerneltalks # subscription-manager config
[server]
 hostname = [subscription.rhsm.redhat.com]
 insecure = [0]
 port = [443]
 prefix = [/subscription]
 proxy_hostname = [kerneltalksproxy.abc.com]
 proxy_password = [asdf]
 proxy_port = [3456]
 proxy_user = [user2]
 server_timeout = [180]
 ssl_verify_depth = [3]

[rhsm]
 baseurl = [https://cdn.redhat.com]
 ca_cert_dir = [/etc/rhsm/ca/]
 consumercertdir = [/etc/pki/consumer]
 entitlementcertdir = [/etc/pki/entitlement]
 full_refresh_on_yum = [0]
 manage_repos = [1]
 pluginconfdir = [/etc/rhsm/pluginconf.d]
 plugindir = [/usr/share/rhsm-plugins]
 productcertdir = [/etc/pki/product]
 repo_ca_cert = /etc/rhsm/ca/redhat-uep.pem
 report_package_profile = [1]

[rhsmcertd]
 autoattachinterval = [1440]
 certcheckinterval = [240]

[logging]
 default_log_level = [INFO]

[] - Default value in use

Now, try registering your system again.

root@kerneltalks # subscription-manager register
Registering to: subscription.rhsm.redhat.com:443/subscription
Username: admin@kerneltalks.com
Password:
You must first accept Red Hat's Terms and conditions. Please visit https://www.redhat.com/wapps/tnc/termsack?event[]=signIn . You may have to log out of and back into the Customer Portal in order to see the terms.

You will see the above error if you are adding the server to your Red Hat account for the first time. Go to the URL and accept the terms. Come back to the terminal and try again.

root@kerneltalks # subscription-manager register
Registering to: subscription.rhsm.redhat.com:443/subscription
Username: admin@kerneltalks.com
Password:
The system has been registered with ID: xxxxb2-xxxx-xxxx-xxxx-xx8e199xxx

Bingo! The system is now registered with Red Hat. You can verify it again with the version switch.

root@kerneltalks # subscription-manager version
server type: Red Hat Subscription Management
subscription management server: 2.0.43-1
subscription management rules: 5.26
subscription-manager: 1.18.10-1.el6
python-rhsm: 1.18.6-1.el6

Step 2: Attach subscription to your server

First, try to list repositories. You won’t be able to list any since we haven’t attached any subscription to our server yet.

root@kerneltalks # subscription-manager repos --list
This system has no repositories available through subscriptions.

As you can see, subscription-manager couldn't find any repositories, so you need to attach subscriptions to your server. Once a subscription is attached, subscription-manager will be able to list the repositories under it.

To attach a subscription, first check all available subscriptions for your server with the below command –

root@kerneltalks # subscription-manager list --available
+-------------------------------------------+
Available Subscriptions
+-------------------------------------------+
Subscription Name:   Red Hat Enterprise Linux for Virtual Datacenters, Standard
Provides:            Red Hat Beta
                     Red Hat Software Collections (for RHEL Server)
                     Red Hat Enterprise Linux Atomic Host Beta
                     Oracle Java (for RHEL Server)
                     Red Hat Enterprise Linux Server
                     dotNET on RHEL (for RHEL Server)
                     Red Hat Enterprise Linux Atomic Host
                     Red Hat Software Collections Beta (for RHEL Server)
                     Red Hat Developer Tools Beta (for RHEL Server)
                     Red Hat Developer Toolset (for RHEL Server)
                     Red Hat Developer Tools (for RHEL Server)
SKU:                 RH00050
Contract:            xxxxxxxx
Pool ID:             8a85f98c6011059f0160110a2ae6000f
Provides Management: Yes
Available:           Unlimited
Suggested:           0
Service Level:       Standard
Service Type:        L1-L3
Subscription Type:   Stackable (Temporary)
Ends:                12/01/2018
System Type:         Virtual

You will get a list of such subscriptions available for your server. Read through what each provides and note down the Pool ID of the subscriptions that are useful or required for you.

Now, attach the subscription to your server by using its pool ID.

root@kerneltalks # subscription-manager attach --pool=8a85f98c6011059f0160110a2ae6000f
Successfully attached a subscription for: Red Hat Enterprise Linux for Virtual Datacenters, Standard

If you are not sure which one to pick, you can automatically attach the subscriptions best suited for your server with the below command –

root@kerneltalks # subscription-manager attach --auto
Installed Product Current Status:
Product Name: Red Hat Enterprise Linux Server
Status: Subscribed

Move on to the final step to enable the repository.

Step 3: Enable repository

Now you can enable a repository that is available under your attached subscription.

root@kerneltalks # subscription-manager repos --enable rhel-6-server-rhv-4-agent-beta-debug-rpms
Repository 'rhel-6-server-rhv-4-agent-beta-debug-rpms' is enabled for this system.

That's it. You are done. You can list repositories with the yum command and confirm, as shown below.
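
For example, a quick confirmation could look like this (a minimal sketch; your repository list will differ):

root@kerneltalks # yum repolist
root@kerneltalks # subscription-manager repos --list-enabled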

How to boot SPYRUS WorkSafe Pro Linux2Go drive

Learn how to boot SPYRUS WorkSafe PRO Linux2Go drive along with Video Demo.

how to boot Linux from Linux2Go drive

In this article, we walk you through the steps to boot your SPYRUS WorkSafe Pro Linux2Go drive for the first time and use it. We will cover more details about the SPYRUS WorkSafe Pro Linux2Go device in another article.

Step 1 :

Set your host machine, the laptop or desktop whose USB port you are attaching the SPYRUS Linux2Go drive to, to boot from USB. To alter the boot sequence and priority, you need to enter the BIOS settings of your laptop or desktop.

The process to enter the BIOS varies from hardware to hardware and also with the base operating system you use. In most cases, pressing the F2, F8, or DEL key while the system is booting takes you to the BIOS. You can search your hardware vendor's support manual or website to find the process to get into the BIOS.

Once you are in the BIOS, you need to change the boot sequence. Normally it is optical drive, hard disk, and then network/external. That means the machine will first search the attached optical drive (CD or DVD-ROM) for an operating system to boot. If none is found, it will search the attached hard disks. If it doesn't find an OS there, it will proceed to check network boot or peripheral devices that have an OS to boot.

In our case, we want to boot from USB, so change the sequence so that the external boot method comes before the internal hard disk. The system will then check the peripheral, i.e. the USB, i.e. our Linux2Go device, before the internal hard disk which already has an OS. This way we boot from the Linux2Go device rather than the host computer's hard disk.

Save the settings and reboot the system. This can usually be done by pressing the F10 key and then answering yes to the 'save settings and reboot' prompt. Again, this might be a little different depending on your hardware manufacturer's standards.

Step 2:

Connect the Linux2Go device to the USB port and reboot the system. Your system will now boot from the SPYRUS Linux2Go device and display a bootloader security screen known as ToughBoot, as below.

Toughboot in linux2go drive

You need to enter the ToughBoot password here to actually begin booting the OS installed on the Linux2Go device. This password can be found on the paper you receive along with your Linux2Go drive. This is additional security implemented by SPYRUS on the drive.

That's it! After successful password authentication, you will be booted into the OS installed on your drive. I have Ubuntu 16.04 LTS installed on my drive, so I booted into it.

Here is a small video of booting SPYRUS WorkSafe Pro Linux2go drive!

19 grep command examples

Beginner's guide to the grep command with 19 different practical examples.

Learn grep command with examples

grep is one of the most widely used Linux/Unix commands, helping sysadmins narrow down their searches! grep stands for Global Regular Expression Print, and it is used for searching regular expressions within a source stream of data.

grep command syntax is simple.

grep <switch> <string to search> file

where the switch is one of the variety of switches available with the command, string to search is a regular expression that you want to search for within the source data, and file is the source of data in which you expect the grep command to search.

It is also widely used with a pipe for searching strings in the output of a previous command. In such a scenario, the syntax is –

command1 | grep <switch> <string to search>

where the output of command1 is searched using the grep command, as in the example below.
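
For instance, a common use is filtering a process listing (a quick sketch, assuming sshd is running on your machine):

root@kerneltalks # ps -ef | grep sshd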

We are sharing 19 practical grep command examples that will help beginners get well versed with this command and use it in their daily operations. Without further delay, let's learn the grep command with the examples below. In all of the examples, we will be searching for the string 'kerneltalks' in file1, which is as below –

root@kerneltalks # cat file1
This is demo file for kerneltalks.com
Demo file to be used for demonstrating grep commands on KernelTalks blog.
We are using kerneltalks as a search string for grep examples
Filling data with kernelTalks words for demo purpose.
This line does not contain our targeted search string.
Junk data kasdiohn fjad;ioe;ion cslfns;o;nfg

Find out string in file

# grep kerneltalks file1
This is demo file for kerneltalks.com
We are using kerneltalks as a search string for grep examples

Recursive search in the directory

You can use recursive grep to search for a string/pattern in all files within a directory.

# grep -r "kerneltalks" /tmp/data

OR

# grep -R "kerneltalks" /tmp/data

Count pattern match in grep

You can use the -c (count) switch with the grep command to count how many lines match a pattern in the given data.

# grep -c kerneltalks file1
2

Search exact word in the file

Normally, grep returns lines from the data that contain a pattern match anywhere in them. If you want to match an exact word, use the -w switch.

# grep -w kerneltalks file1
This is demo file for kerneltalks.com
We are using kerneltalks as a search string for grep examples

You can combine it with the count switch above to get the number of lines in which the exact word appears, as shown below.
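
A quick sketch combining both switches on the same sample file; it counts the two matching lines shown above:

# grep -c -w kerneltalks file1
2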

Ignore case while searching with grep

To ignore case while matching, use the -i switch, i.e. a case-insensitive match. So when you search for kerneltalks with the -i switch, it will show occurrences of kerneltalks, KERNELTALKS, KernelTalks, etc.

# grep -i kerneltalks file1
This is demo file for kerneltalks.com
Demo file to be used for demonstrating grep commands on KernelTalks blog.
We are using kerneltalks as a search string for grep examples
Filling data with kernelTalks words for demo purpose.

Use of wild card with grep

Repetition operators (often loosely called wild cards) can be used with the grep command.

# grep "kernel*" file1

Here, * matches zero or more occurrences of the preceding character (the final 'l' in kernel), so the pattern matches kerne, kernel, kernell, and so on. Note that the pattern is quoted so the shell does not expand * before grep sees it. You can use repetition operators like ?, *, and + with grep; ? and + need extended regex (grep -E) or backslash escapes in basic grep.

Reverse grep operation

If you want to display the data while omitting the lines containing your targeted string, you can use grep in reverse, i.e. with the -v switch. Some people also call it an inverted match.

# grep -v kerneltalks file1
Demo file to be used for demonstrating grep commands on KernelTalks blog.
Filling data with kernelTalks words for demo purpose.
This line does not contain our targeted search string.
Junk data kasdiohn fjad;ioe;ion cslfns;o;nfg

The above command displays all lines within file1 except the ones which contain string kerneltalks.

Display N lines before matching string using grep

Using the -B switch followed by a number N lets you display N lines before the matching string in a file.

# grep -B 2 targeted  file1
We are using kerneltalks as a search string for grep examples
Filling data with kernelTalks words for demo purpose.
This line does not contain our targeted search string.

The above command displays the 2 lines above the line that contains the string targeted, along with the matching line itself.

Display N lines after matching string using grep

Opposite to the above, if you want to display N lines after the match is found, use the -A switch with the same syntax.

# grep -A 2 targeted  file1
This line does not contain our targeted search string.
Junk data kasdiohn fjad;ioe;ion cslfns;o;nfg

Here it displays the 2 lines below the line that has the string targeted in it, including the matching line.

Display N lines around matching string using grep

Using both the -A and -B switches, you can display N lines before and after the matching line. But grep also comes with the built-in switch -C, which does this for you.

# grep -C 1 targeted  file1
Filling data with kernelTalks words for demo purpose.
This line does not contain our targeted search string.
Junk data kasdiohn fjad;ioe;ion cslfns;o;nfg

The above command displays 1 line above and 1 line below the line that has the matching string.

Match multiple patterns with grep

Searching for more than one pattern/string is possible with grep. Stack up your strings using the -e switch.

# grep -e Junk -e KernelTalks file1
Demo file to be used for demonstrating grep commands on KernelTalks blog.
Junk data kasdiohn fjad;ioe;ion cslfns;o;nfg

It searches for both strings (Junk and KernelTalks here) in file1 and displays all lines containing either of them.

List only file names with matching string in them

When you are searching through a bunch of files and are only interested in the names of the files in which the string matches, use the -l switch.

# grep -l kerneltalks *.log

Here, we search for the string kerneltalks in all files ending with .log. Since the -l switch is used, grep displays only the names of the files where a match is found.

Display line number of match with grep

If you want the line number where your string matched, use the -n switch.

# grep -n kerneltalks file1
1:This is demo file for kerneltalks.com
3:We are using kerneltalks as a search string for grep examples

Display matched string only rather than whole line of match

By default, grep displays the whole line that contains a match for your search string. To display only the matched string rather than the whole line, use the -o switch. Obviously, it's not that useful when you are searching for a fixed string or word, but it is very useful when you are searching with repetition operators or other regex patterns.

# grep -o "kernel*" file1

Coloring up your grep search output

To highlight matched strings in the output, use the --color switch.

# grep --color=always kerneltalks file1
This is demo file for kerneltalks.com
We are using kerneltalks as a search string for grep examples

You have three options to use with the --color switch: auto, always, and never.

Grep out blank lines

You can search for and count blank lines with grep.

# grep -e ^$ file1

It's helpful for removing blank lines from a file to keep only the data lines. Use the reverse grep we saw earlier (the -v switch):

# grep -v -e ^$ file1

It shows you only data lines, omitting all blank lines. You can redirect the output to a new file and get a clean data file! You can use the same technique to remove hashed entries from a file by using ^# as the search string; this helps to remove comments from scripts as well, as in the sketch below.
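
A minimal sketch combining both patterns; script.sh and clean_script.sh are placeholder file names:

# grep -v -e '^$' -e '^#' script.sh > clean_script.sh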

Invoke Egrep using grep

egrep is extended grep with additional regular-expression support. It is a derivative of the grep utility. You can run it as the egrep command or invoke it through grep as below:

# grep -E

Fixed grep (fgrep) using grep

Fixed grep is used for fast searching of literal strings without any meta-characters or regular expressions. As the name suggests, fgrep is fixed grep! It searches only literal strings and is a bit faster than normal grep. fgrep is another derivative of normal grep and is available as the separate fgrep command, but it can also be invoked through grep with the below switch –

# grep -F

Search pattern in compressed file

One more derivative of grep is zgrep. It is used to find and match strings in compressed (gzip) files. It uses almost the same switches as grep; the only difference is that you point it at the compressed file to search.

# zgrep kerneltalks file2.gz

Let us know in the comments below if you have any other grep command examples that are really helpful for sysadmins in day-to-day operations.

/bin/bash^M: bad interpreter: No such file or directory

This article explains how to resolve /bin/bash^M: bad interpreter: No such file or directory on a Unix or Linux server.

How to resolve /bin/bash^M: bad interpreter: No such file or directory

Issue :

Sometimes we see the below error while running scripts:

root@kerneltalks # ./test_script.sh
-bash: ./test_script.sh: /bin/bash^M: bad interpreter: No such file or directory

This issue occurs with files that were created or updated in Windows and later copied over to a Unix or Linux machine to execute. Windows (DOS) and Linux/Unix interpret line endings differently: Windows ends lines with a carriage return plus line feed, and the carriage return shows up as the illegal character ^M on *nix systems. Hence you can see ^M in the above error at the end of the very first line of the script, #!/bin/bash, which invokes the bash shell for the script.
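
Before converting, you can confirm the file really has DOS line endings. A quick hedged check: file typically reports "with CRLF line terminators" and cat -v shows ^M at the end of the #!/bin/bash line when DOS endings are present (exact wording may vary by distribution).

root@kerneltalks # file test_script.sh
root@kerneltalks # cat -v test_script.sh | head -1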

To resolve this issue, you need to convert the DOS file into a Linux one. You can either rewrite the whole file using a text editor on the Linux/Unix system, or you can use tools like dos2unix or native commands like sed.

Solution:

Use the dos2unix utility, which comes pre-installed on almost all distributions nowadays. The dos2unix project is hosted here.

There are different encodings you can choose to convert your file. -ascii is the default conversion mode and it only converts line breaks. Here I used -iso, which worked fine for me.

The syntax is pretty simple: you give the encoding format along with the source and destination filenames.

root@kerneltalks # dos2unix -iso -n test_script.sh script_new.sh
dos2unix: active code page: 0
dos2unix: using code page 437.
dos2unix: converting file test_script.sh to file script_new.sh in Unix format ...

This way you keep the old file intact and don't mess with the original. If you are okay with directly editing the old file, then you can try the below command:

root@kerneltalks # dos2unix -k -o test_script.sh
dos2unix: converting file test_script.sh to Unix format ...

Where -k keeps the timestamp of the file intact and -o converts the file and overwrites changes to the same file.

Or

You can use the stream editor sed to globally search and replace:

root@kerneltalks # sed -i -e 's/\r$//' test_script.sh

where -i edits the source file in place, overwriting it, and -e supplies the script code to be run on the source file.

That’s it. You repaired your file from Windows to run fine on the Linux system! Go ahead… execute…!

8 ways to generate random password in Linux

Learn 8 different ways to generate a random password in Linux using Linux native commands or third-party utilities.

Different ways to generate password in Linux

In this article, we will walk you through various ways to generate a random password in the Linux terminal. A few of them use native Linux commands and others use third-party tools or utilities that can easily be installed on a Linux machine. Here we are looking at native commands like openssl, dd, md5sum, tr, and urandom, and third-party tools like mkpasswd, randpw, pwgen, spw, gpg, xkcdpass, diceware, revelation, keepassx, and passwordmaker.

These are actually ways to get a random alphanumeric string that can be used as a password. Random passwords can be used for new users so that there is uniqueness no matter how large your user base is. Without any further delay, let's jump into these different ways to generate a random password in Linux.

Generate password using mkpasswd utility

mkpasswd is installed as part of the expect package on RHEL-based systems; on Debian-based systems, mkpasswd comes with the whois package. Trying to install a mkpasswd package directly results in an error –

No package mkpasswd available. on RHEL systems, and E: Unable to locate package mkpasswd on Debian-based systems.

So install the parent packages as mentioned above and you are good to go.

Run mkpasswd to get passwords

root@kerneltalks# mkpasswd << on RHEL
zt*hGW65c

root@kerneltalks# mkpasswd teststring << on Ubuntu
XnlrKxYOJ3vik

The command behaves differently on different systems, so work accordingly. There are many switches that can be used to control length and other parameters, as sketched below. You can explore them in the man pages.
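
For example, on RHEL the expect-based mkpasswd takes a length switch. This is a hedged sketch only; the Debian/whois version of mkpasswd uses different options, so check man mkpasswd on your system:

root@kerneltalks # mkpasswd -l 12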

Generate password using OpenSSL

OpenSSL comes built in with almost all Linux distributions. We can use its rand function to generate an alphanumeric string that can be used as a password.

root@kerneltalks # openssl rand -base64 10
nU9LlHO5nsuUvw==

Here, we are using base64 encoding with the rand function; the last number is the count of random bytes to generate, which are then base64-encoded.

Generate password using urandom

The device file /dev/urandom is another source of random characters. We use tr to filter the output and head to trim it, to get a random string to use as a password.

root@kerneltalks # strings /dev/urandom |tr -dc A-Za-z0-9 | head -c20; echo
UiXtr0NAOSIkqtjK4c0X

dd command to generate password

We can even use /dev/urandom device along with dd command to get a string of random characters.

root@kerneltalks# dd if=/dev/urandom bs=1 count=15|base64 -w 0
15+0 records in
15+0 records out
15 bytes (15 B) copied, 5.5484e-05 s, 270 kB/s
QMsbe2XbrqAc2NmXp8D0

We need to pass the output through base64 encoding to make it human-readable. You can play with the count value to get the desired length. For much cleaner output, redirect stderr (file descriptor 2) to /dev/null. The clean command is –

root@kerneltalks # dd if=/dev/urandom bs=1 count=15 2>/dev/null|base64 -w 0
F8c3a4joS+a3BdPN9C++

Using md5sum to generate password

Another way to get a string of random characters that can be used as a password is to calculate an MD5 checksum! As you know, a checksum value looks like random characters grouped together, so we can use it as a password. Make sure the source is something variable so that you get a different checksum every time you run the command, for example date! The date command always yields changing output.

root@kerneltalks # date |md5sum
4d8ce5c42073c7e9ca4aeffd3d157102  -

Here we passed the date command output to md5sum and got the checksum hash! You can use the cut command to get the desired length of output, as below.
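
A quick sketch trimming the hash to 12 characters (the length here is arbitrary):

root@kerneltalks # date | md5sum | cut -c1-12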

Generate password using pwgen

The pwgen package is available in repositories like EPEL; you may not find it in the standard distribution repos. pwgen focuses on generating passwords that are pronounceable but are not dictionary words or plain English. Install the package and run the pwgen command. Boom!

root@kerneltalks # pwgen
thu8Iox7 ahDeeQu8 Eexoh0ai oD8oozie ooPaeD9t meeNeiW2 Eip6ieph Ooh1tiet
cootad7O Gohci0vo wah9Thoh Ohh3Ziur Ao1thoma ojoo6aeW Oochai4v ialaiLo5
aic2OaDa iexieQu8 Aesoh4Ie Eixou9ph ShiKoh0i uThohth7 taaN3fuu Iege0aeZ
cah3zaiW Eephei0m AhTh8guo xah1Shoo uh8Iengo aifeev4E zoo4ohHa fieDei6c
aorieP7k ahna9AKe uveeX7Hi Ohji5pho AigheV7u Akee9fae aeWeiW4a tiex8Oht

You will be presented with a list of passwords right at your terminal! What else do you want? OK, you still want to explore: pwgen comes with many options, which you can refer to in the man page.
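For instance, -s asks pwgen for fully random (non-pronounceable) passwords and -y adds symbols; assuming your version supports passing the length and count as arguments, the following should print three 16-character secure passwords:

root@kerneltalks # pwgen -s -y 16 3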

Generate password using gpg tool

GPG is an OpenPGP encryption and signing tool. The gpg tool mostly comes pre-installed (at least it is on my RHEL 7), but if not, you can look for the gpg or gpg2 package and install it.

Use the below command to generate a password with the gpg tool.

root@kerneltalks # gpg --gen-random --armor 1 12
mL8i+PKZ3IuN6a7a

Here we are passing the generate-random-byte-sequence switch (--gen-random) with quality 1 (first argument) and a count of 12 (second argument). The --armor switch ensures the output is base64 encoded.
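Increasing the second argument gives a longer password; for example, 24 random bytes produce a 32-character base64 string:

root@kerneltalks # gpg --gen-random --armor 1 24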

Generate password using xkcdpass

The famous geek humor website xkcd published a very interesting comic about memorable but still complex passwords; you can view it here. The xkcdpass tool took inspiration from that comic and does exactly this! It's a Python package, available on PyPI here.

All installation and usage instructions are mentioned on that page. Here are the installation steps and output from my test RHEL server for your reference.

root@kerneltalks # wget https://pypi.python.org/packages/b4/d7/3253bd2964390e034cf0bba227db96d94de361454530dc056d8c1c096abc/xkcdpass-1.14.3.tar.gz#md5=5f15d52f1d36207b07391f7a25c7965f
--2018-01-23 19:09:17--  https://pypi.python.org/packages/b4/d7/3253bd2964390e034cf0bba227db96d94de361454530dc056d8c1c096abc/xkcdpass-1.14.3.tar.gz
Resolving pypi.python.org (pypi.python.org)... 151.101.32.223, 2a04:4e42:8::223
Connecting to pypi.python.org (pypi.python.org)|151.101.32.223|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 871848 (851K) [binary/octet-stream]
Saving to: ‘xkcdpass-1.14.3.tar.gz’

100%[==============================================================================================================================>] 871,848     --.-K/s   in 0.01s

2018-01-23 19:09:17 (63.9 MB/s) - ‘xkcdpass-1.14.3.tar.gz’ saved [871848/871848]


root@kerneltalks # tar -xvf xkcdpass-1.14.3.tar.gz
xkcdpass-1.14.3/
xkcdpass-1.14.3/examples/
xkcdpass-1.14.3/examples/example_import.py
xkcdpass-1.14.3/examples/example_json.py
xkcdpass-1.14.3/examples/example_postprocess.py
xkcdpass-1.14.3/LICENSE.BSD
xkcdpass-1.14.3/MANIFEST.in
xkcdpass-1.14.3/PKG-INFO
xkcdpass-1.14.3/README.rst
xkcdpass-1.14.3/setup.cfg
xkcdpass-1.14.3/setup.py
xkcdpass-1.14.3/tests/
xkcdpass-1.14.3/tests/test_list.txt
xkcdpass-1.14.3/tests/test_xkcdpass.py
xkcdpass-1.14.3/tests/__init__.py
xkcdpass-1.14.3/xkcdpass/
xkcdpass-1.14.3/xkcdpass/static/
xkcdpass-1.14.3/xkcdpass/static/eff-long
xkcdpass-1.14.3/xkcdpass/static/eff-short
xkcdpass-1.14.3/xkcdpass/static/eff-special
xkcdpass-1.14.3/xkcdpass/static/fin-kotus
xkcdpass-1.14.3/xkcdpass/static/ita-wiki
xkcdpass-1.14.3/xkcdpass/static/legacy
xkcdpass-1.14.3/xkcdpass/static/spa-mich
xkcdpass-1.14.3/xkcdpass/xkcd_password.py
xkcdpass-1.14.3/xkcdpass/__init__.py
xkcdpass-1.14.3/xkcdpass.1
xkcdpass-1.14.3/xkcdpass.egg-info/
xkcdpass-1.14.3/xkcdpass.egg-info/dependency_links.txt
xkcdpass-1.14.3/xkcdpass.egg-info/entry_points.txt
xkcdpass-1.14.3/xkcdpass.egg-info/not-zip-safe
xkcdpass-1.14.3/xkcdpass.egg-info/PKG-INFO
xkcdpass-1.14.3/xkcdpass.egg-info/SOURCES.txt
xkcdpass-1.14.3/xkcdpass.egg-info/top_level.txt


root@kerneltalks # cd xkcdpass-1.14.3

root@kerneltalks # python setup.py install
running install
running bdist_egg
running egg_info
writing xkcdpass.egg-info/PKG-INFO
writing top-level names to xkcdpass.egg-info/top_level.txt
writing dependency_links to xkcdpass.egg-info/dependency_links.txt
writing entry points to xkcdpass.egg-info/entry_points.txt
reading manifest file 'xkcdpass.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'xkcdpass.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
creating build
creating build/lib
creating build/lib/xkcdpass
copying xkcdpass/xkcd_password.py -> build/lib/xkcdpass
copying xkcdpass/__init__.py -> build/lib/xkcdpass
creating build/lib/xkcdpass/static
copying xkcdpass/static/eff-long -> build/lib/xkcdpass/static
copying xkcdpass/static/eff-short -> build/lib/xkcdpass/static
copying xkcdpass/static/eff-special -> build/lib/xkcdpass/static
copying xkcdpass/static/fin-kotus -> build/lib/xkcdpass/static
copying xkcdpass/static/ita-wiki -> build/lib/xkcdpass/static
copying xkcdpass/static/legacy -> build/lib/xkcdpass/static
copying xkcdpass/static/spa-mich -> build/lib/xkcdpass/static
creating build/bdist.linux-x86_64
creating build/bdist.linux-x86_64/egg
creating build/bdist.linux-x86_64/egg/xkcdpass
copying build/lib/xkcdpass/xkcd_password.py -> build/bdist.linux-x86_64/egg/xkcdpass
copying build/lib/xkcdpass/__init__.py -> build/bdist.linux-x86_64/egg/xkcdpass
creating build/bdist.linux-x86_64/egg/xkcdpass/static
copying build/lib/xkcdpass/static/eff-long -> build/bdist.linux-x86_64/egg/xkcdpass/static
copying build/lib/xkcdpass/static/eff-short -> build/bdist.linux-x86_64/egg/xkcdpass/static
copying build/lib/xkcdpass/static/eff-special -> build/bdist.linux-x86_64/egg/xkcdpass/static
copying build/lib/xkcdpass/static/fin-kotus -> build/bdist.linux-x86_64/egg/xkcdpass/static
copying build/lib/xkcdpass/static/ita-wiki -> build/bdist.linux-x86_64/egg/xkcdpass/static
copying build/lib/xkcdpass/static/legacy -> build/bdist.linux-x86_64/egg/xkcdpass/static
copying build/lib/xkcdpass/static/spa-mich -> build/bdist.linux-x86_64/egg/xkcdpass/static
byte-compiling build/bdist.linux-x86_64/egg/xkcdpass/xkcd_password.py to xkcd_password.pyc
byte-compiling build/bdist.linux-x86_64/egg/xkcdpass/__init__.py to __init__.pyc
creating build/bdist.linux-x86_64/egg/EGG-INFO
copying xkcdpass.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
copying xkcdpass.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying xkcdpass.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying xkcdpass.egg-info/entry_points.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying xkcdpass.egg-info/not-zip-safe -> build/bdist.linux-x86_64/egg/EGG-INFO
copying xkcdpass.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
creating dist
creating 'dist/xkcdpass-1.14.3-py2.7.egg' and adding 'build/bdist.linux-x86_64/egg' to it
removing 'build/bdist.linux-x86_64/egg' (and everything under it)
Processing xkcdpass-1.14.3-py2.7.egg
creating /usr/lib/python2.7/site-packages/xkcdpass-1.14.3-py2.7.egg
Extracting xkcdpass-1.14.3-py2.7.egg to /usr/lib/python2.7/site-packages
Adding xkcdpass 1.14.3 to easy-install.pth file
Installing xkcdpass script to /usr/bin

Installed /usr/lib/python2.7/site-packages/xkcdpass-1.14.3-py2.7.egg
Processing dependencies for xkcdpass==1.14.3
Finished processing dependencies for xkcdpass==1.14.3

Now, running the xkcdpass command will give you a random set of dictionary words like below –

root@kerneltalks # xkcdpass
broadside unpadded osmosis statistic cosmetics lugged

You can use these words as input to other commands, like md5sum, to get a random password (as below), or you can even use the Nth letter of each word to form your password!

root@kerneltalks # xkcdpass |md5sum
45f2ec9b3ca980c7afbd100268c74819  -

root@kerneltalks # xkcdpass |md5sum
ad79546e8350744845c001d8836f2ff2  -

Or you can even use all those words together as one long passphrase, which is easy for a user to remember yet very hard to crack with a computer program.
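xkcdpass also accepts options such as -n (number of words) and -d (delimiter), so assuming your installed version supports them, you can shape the passphrase like this:

root@kerneltalks # xkcdpass -n 4 -d '-'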

There are other tools like Diceware, KeePassX, Revelation, and PasswordMaker for Linux which can also be considered for making strong random passwords.

Learn dd command with examples

A beginner's guide to the dd command along with a list of examples. The article includes outputs for the command examples too.

Learn dd command

A beginner's guide to the dd command! In this article, we will learn about the dd command, often described as a data duplicator, and its various uses along with examples.

The dd command is mainly used to convert and copy files on Linux and Unix systems. The dd command syntax is

dd <options>

It has a very large list of options which can be used as per your requirement. The most commonly used options are:

  • bs=xxx Read and write xxx bytes at a time
  • count=n Copy only n blocks.
  • if=FILE Read from FILE
  • of=FILE Output to FILE

Let me walk you through examples to understand dd command usage.

Backup complete disk using dd

dd is very helpful for copying a whole disk to another disk. You just need to give it the disk to read from and the disk to write to. Check the below example –

root@kerneltalks # dd if=/dev/xvdf of=/dev/xvdg
4194304+0 records in
4194304+0 records out
2147483648 bytes (2.1 GB) copied, 181.495 s, 11.8 MB/s

In the above output, you can see disk /dev/xvdf being copied to /dev/xvdg. The command shows you how much data was copied and at what speed.
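The default 512-byte block size makes whole-disk copies slow. A larger block size usually speeds things up, and recent GNU coreutils versions of dd support status=progress for a live progress report; assuming your dd is new enough, something like this works:

root@kerneltalks # dd if=/dev/xvdf of=/dev/xvdg bs=4M status=progress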

Identify disk physically using dd

When there are a bunch of disks attached to the server and you want to physically trace a particular disk, the dd command can help. Run dd to read from the disk and write the output to the void (/dev/null); this keeps the disk busy so its physical activity LED stays lit solid.

root@kerneltalks # dd  if=/dev/xvdf of=/dev/null

Normally all other disks blink their activity LEDs, whereas this one will have its LED solid, which makes the disk easy to spot! Be careful with if and of: if you swap their arguments, you will end up wiping your hard disk clean.

Create image of hard disk using dd

You can create an image of a hard disk using dd. It's the same as the disk backup we saw in the first example, except here the output file of is a data file on a mount point rather than another disk.

root@kerneltalks # dd if=/dev/xvdf of=/xvdf_disk.img
4194304+0 records in
4194304+0 records out
2147483648 bytes (2.1 GB) copied, 32.9723 s, 65.1 MB/s

root@kerneltalks # ls -lh /xvdf_disk.img
-rw-r--r--. 1 root root 2.0G Jan 15 14:36 /xvdf_disk.img

In the above output, we created an image of disk /dev/xvdf into a file located in / named xvdf_disk.img

A compressed image can be created as well by using gzip along with dd –

root@kerneltalks # dd if=/dev/xvdf |gzip -c >/xvdf_disk.img.gz
4194304+0 records in
4194304+0 records out
2147483648 bytes (2.1 GB) copied, 32.6262 s, 65.8 MB/s
root@kerneltalks # ls -lh /xvdf_disk.img.gz
-rw-r--r--. 1 root root 2.0M Jan 15 14:31 /xvdf_disk.img.gz

You can observe that the zipped image is much smaller in size (this test disk was largely empty, so it compressed extremely well).

Restore image of hard disk using dd

Yup, the next question is how to restore this hard disk image to another disk. The answer is simple: use the image as the source and the other disk as the destination.

root@kerneltalks # dd if=/xvdf_disk.img of=/dev/xvdg
4194304+0 records in
4194304+0 records out
2147483648 bytes (2.1 GB) copied, 175.748 s, 12.2 MB/s

Make sure the target disk is at least as large as your disk image.
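You can verify this beforehand, for example by comparing the image size with the target disk size reported by blockdev (part of util-linux):

root@kerneltalks # ls -l /xvdf_disk.img
root@kerneltalks # blockdev --getsize64 /dev/xvdg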

Restore compressed hard disk image using dd along with gzip command as below –

root@kerneltalks # gzip -dc /xvdf_disk.img.gz | dd of=/dev/xvdg
4194304+0 records in
4194304+0 records out
2147483648 bytes (2.1 GB) copied, 177.272 s, 12.1 MB/s

Create ISO from CD or DVD using dd

Another popular use of the dd command is creating an optical disc image, i.e. an ISO file, from a CD or DVD. Insert the CD or DVD in the drive on your server, then use its device file as the source and a file on a mount point as the destination.

root@kerneltalks # dd if=/dev/dvd of=/dvd_disc.iso bs=4096

Here, we specified a block size of 4096 using the bs option. Make sure no other application or user is accessing the CD or DVD while running this command; you can use the fuser command to check whether someone is accessing it.

The next question is how to mount an ISO file in Linux. Well, we already have an article on that here 🙂
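In short, an ISO image can be loop-mounted, assuming /mnt is an existing, empty mount point:

root@kerneltalks # mount -o loop /dvd_disc.iso /mnt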

Creating file of definite size with zero data using dd

Many times sysadmins or developers need files with junk data or zero data for testing. Using dd, you can create such files of a definite size.

Let's say you want to create a file of 1 GB: define a block size of 1M and a count of 1024, so 1M x 1024 = 1024M = 1G.

root@kerneltalks # dd if=/dev/zero of=/testfile bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 13.0623 s, 82.2 MB/s

root@kerneltalks # ls -lh /testfile
-rw-r--r--. 1 root root 1.0G Jan 15 14:29 /testfile

In the above output, you can see our math worked perfectly: a 1 GB file was created by our command.
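If you need a file filled with junk (random) data instead of zeros, simply use /dev/urandom as the source; for example, a 100 MB junk file:

root@kerneltalks # dd if=/dev/urandom of=/junkfile bs=1M count=100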

Changing file uppercase to lowercase using dd

All the examples we have seen so far copy data using the dd command. This example converts data using dd: you can change a file's contents from all uppercase to lowercase and vice versa.

# cat /root/testdata
This is test data file on kerneltalks.com test server.

# dd if=/root/testdata of=/root/testdata_upper conv=ucase
0+1 records in
0+1 records out
55 bytes (55 B) copied, 0.000138394 s, 397 kB/s

# cat /root/testdata_upper
THIS IS TEST DATA FILE ON KERNELTALKS.COM TEST SERVER.

You can see all data in the file is converted to uppercase. To convert data from uppercase to lowercase, use the option conv=lcase.
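For example, to convert the uppercase file back to lowercase:

root@kerneltalks # dd if=/root/testdata_upper of=/root/testdata_lower conv=lcase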

If you know of another interesting use of the dd command, let us know in the comments below.

How to add CloudFront CDN to a WordPress blog with SSL

Step by step procedure to add CloudFront CDN to a WordPress blog with an SSL certificate. Screenshots included for better understanding.

Add CloudFront CDN to a WordPress blog with SSL

In this article, we will walk you through the steps to configure AWS CloudFront CDN for a WordPress blog using the W3TC (W3 Total Cache) plugin. We will use a basic setup in AWS CloudFront, so we won't be using IAM authentication and access policies in our configuration.

See CloudFront content delivery network pricing here.

We assume the below prerequisites are completed before moving on with this tutorial.

  1. You are logged in to your blog's WordPress console with an admin login
  2. You have the W3TC plugin installed on your WordPress blog
  3. You are logged in to your AWS account
  4. You have access to change the zone files for your domain (required to have fancy CDN CNAMEs)

Without further delay, let's jump into the step-by-step procedure to add CloudFront CDN to a WordPress blog, with screenshots.

AWS certificate manager

You can skip this step if your blog is not https enabled.

In this step, we will import your SSL certificate into AWS. It is needed for CloudFront distributions if you are using a fancy URL (like c1.kerneltalks.com) for the distribution instead of the default system-generated XXXXXX.cloudfront.net.

You can skip this step if you want to buy an SSL certificate from Amazon and don't want to use your own. If you are OK with using the system-generated distribution name like XXXXXX.cloudfront.net and don't want a custom CNAME like c1.kerneltalks.com, then you can also skip this step.

You can buy an SSL certificate from many authorized bodies, or you can get an open-source Let's Encrypt SSL certificate for free.

Log in to the AWS Certificate Manager console. Make sure you use the region US East (N. Virginia), since only certificates stored in this region are available to select while creating CloudFront distributions. Click Get Started and in the next screen click Import a certificate. You will be presented with the below screen.

Import certificate in aws

Fill in your certificate details in the fields above. The certificate body will have your SSL certificate content, then the private key, and finally the certificate chain (if any). Click Review and import.

The filled-in details will be verified and the information fetched from them will be shown on screen for your review, like below.

Review certificate in AWS certificate manager

If everything looks good click Import. Your certificate will be imported and details will be shown to you in the dashboard.

Now we have our SSL certificate ready in AWS, to be used with a CloudFront distribution's custom URL like c1.kerneltalks.com. Let's move on to creating distributions.

AWS Cloudfront configuration

Log in to the AWS CloudFront console using your Amazon account. On the left-hand side menu bar, make sure you have Distributions selected. Click the Create Distribution button. You will now be presented with wizard step 1: select the delivery method.

Click Get Started under the Web delivery method. You will see the below screen where you need to fill in details –

cloudfront web distribution

Below are a few fields you need to select or fill in.

  1. Origin Domain Name: Enter your blog's naked domain name, e.g. kerneltalks.com
  2. Origin ID: Keep the autogenerated value, or name it anything you like.
  3. Origin Protocol Policy: Select HTTPS only.
  4. Viewer Protocol Policy: Redirect HTTP to HTTPS
  5. Alternate Domain Names: Enter the fancy CDN name you want, like c1.kerneltalks.com
  6. SSL Certificate -> Custom SSL Certificate: You should see the certificate you imported in the previous step here.

There are many other options which you can toggle based on your requirements. The ones listed above are the most basic and are needed for a normal CDN to work. Once done, click Create Distribution.

You will be redirected back to the distributions dashboard, where you can see your newly created distribution with its status as In Progress. This means AWS is now fetching all content files like media, CSS, and JS from your hosting server to its edge servers; in other words, your CDN zone is being deployed. Once the sync completes, the status will change to Deployed. This process takes time depending on how big your blog is.

While your distribution is being deployed, you can head back to your zone file editor (probably in cPanel) and add an entry for the CNAME you mentioned in the distribution settings (e.g. c1.kerneltalks.com).

CNAME entry

You can skip this step if you are not using a custom CNAME for your CloudFront distribution.

Go to the zone file editor for your domain and add a CNAME entry for the custom name you used above (here c1.kerneltalks.com), pointing it to the CloudFront URL of your distribution.

The CloudFront URL of your distribution can be found under Domain Name in the distributions dashboard screenshot above. It's generally in the format XXXXXXX.cloudfront.net.

This change will take a few minutes to a few hours to propagate through the internet. You can check whether it's live by pinging your custom domain name; you should see the reply coming from a cloudfront.net host.

ping custom domain name
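For example, you can confirm the CNAME resolves to your distribution with dig or ping (using c1.kerneltalks.com, the custom name from this walkthrough):

root@kerneltalks # dig +short c1.kerneltalks.com CNAME
root@kerneltalks # ping -c 2 c1.kerneltalks.com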

That's it. You are done with your AWS configuration. Now you need to add this custom CNAME (or the cloudfront.net name) in the W3TC settings in your WordPress admin panel.

W3TC settings

Log in to the WordPress admin panel. Go to W3TC General Settings and enable CDN as per the below screenshot.

W3TC General settings CDN portion

Go to the W3TC CDN settings.

Scroll down to Configuration: Objects. Set SSL support to Enabled and add your CNAME under Replace site's hostname with:

Once done, click Test Mirror and you should see it pass. Check the below screenshot for a better understanding.

W3TC CDN mirror check

If the test does not pass, wait for some time. Make sure you can ping the CNAME as explained above and that your CloudFront distribution is completely deployed.

Check blog for Cloudfront CDN

That's it. Your blog is now serving files from CloudFront CDN! Open the website in a new browser after clearing cookies, view the website's source code, and look for URLs with your custom domain name (here c1.kerneltalks.com); you will see your CSS, JS, and media file URLs point not to your naked domain (here kerneltalks.com) but to the CDN (i.e. c1.kerneltalks.com)!

To serve files in parallel, you can create more than one distribution (ideally four) in the same way and add their CNAMEs in the W3TC settings.

Enjoy lightning fast web pages of your blog!