Category Archives: Documentation

Crafting a one-page resume website without spending a dollar

This post details my side project—a one-page resume website that I constructed without incurring any expenses.

What is a one-page resume website?

A one-page resume website serves as a digital manifestation of your resume, offering a modern alternative to the traditional paper-based format. While PDF resumes remain effective for email sharing, the dynamic nature of resume websites provides an excellent digital representation of your profile. Particularly advantageous in fields like digital, marketing, and art, a resume website becomes an ideal platform to showcase portfolios, offering a level of presentation that is challenging to achieve with traditional paper or PDF formats.

I created my one-page resume website as a side project. And when it’s budget-friendly, why not!

We are talking about: https://resume.kerneltalks.com

The frontend

The one-page resume website is static, meaning its content doesn’t change frequently. Being a single-page site eliminates the need for a content management system, making a basic HTML website the ideal choice – especially if you’re not into web design and don’t have a substantial budget or time commitment.

Since web design isn’t my expertise, I opted for an HTML template as a foundation. If you know HTML, customizing the template with your data is a straightforward task, and your site is ready to be hosted.

I chose a free HTML template, so there’s zero investment made. It’s essential to respect the personal use terms specified by the template providers and include a link back to them on your website.

The backend

Given that it’s a one-page static HTML website, it makes sense to minimize hosting expenses (in terms of money and time). Amazon S3 website hosting seems to be the optimal choice. Uploading files to an S3 bucket, configuring a few settings, and your website is live! You can find the complete procedure here.
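The S3 setup can be sketched with a few AWS CLI commands. The bucket name and region below are placeholders, and you will also need a bucket policy allowing public reads (see the linked procedure for details):

```shell
# Create a bucket named after the site (placeholder values)
aws s3 mb s3://resume.example.com --region us-east-1

# Turn on static website hosting for the bucket
aws s3 website s3://resume.example.com \
  --index-document index.html --error-document error.html

# Upload the site's files
aws s3 sync ./site s3://resume.example.com
```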

Since it’s a single HTML page, my website’s size is a mere 4MB, resulting in negligible (I would say zero) S3 storage charges.

Amazon S3 website hosting provides an AWS-branded HTTP endpoint. If you desire a custom domain or HTTPS protocol, integrating CloudFront with S3 is the solution.

I utilized a subdomain on my existing registered domain and set up a CNAME entry for it against CloudFront’s DNS.

If you don’t have a domain, expect a registration investment of around $13 per year. In my case, it was zero. Additionally, CloudFront bills based on the number of requests, so your costs will depend on your website’s traffic. Considering the nature of a personal resume website, traffic and the associated costs are negligible.

For HTTPS, you can create a free SSL certificate in Amazon Certificate Manager and use it with the CloudFront distribution.
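As a sketch, the certificate request looks like this (the domain is a placeholder; for CloudFront, the certificate must be requested in us-east-1, and you validate it via a DNS record before attaching it to the distribution as an alternate domain name):

```shell
# Request a free public certificate for the resume subdomain
aws acm request-certificate \
  --domain-name resume.example.com \
  --validation-method DNS \
  --region us-east-1
```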

The result

The outcome is a serverless, responsive, compact, HTML-based, secure, static, one-page resume website, leveraging the top-tier CDN, Amazon CloudFront, all achieved with no upfront investment!

Architecture

You could make it more secure by adding a WAF (Web Application Firewall) in front of CloudFront, though. Since it was a zero-dollar project, a WAF was not included in the design at that time.

GitHub Do’s and Don’ts

A curated list of GitHub Do’s and Don’ts

GitHub guidelines

Here are some pointers that can help you define GitHub usage guidelines.

GitHub Do’s

It is recommended to create a new GitHub repository for each new project.

To ensure proper code management, a CODEOWNERS file should be included in the repository to indicate the list of individuals/GitHub teams responsible for approving the code changes. Additionally, a README.md file should be added to provide information about the repository and its code.
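A minimal CODEOWNERS sketch, with purely illustrative team and path names, looks like this:

```
# Fallback owners for everything in the repository
*            @org/platform-team
# Path-specific owners override the fallback
/terraform/  @org/infra-team
*.md         @org/docs-team
```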

A .gitignore file should also be included to prevent unwanted files from being committed.

Regularly committing and pushing changes to the remote branch will prevent loss of work and ensure that the branch name is reserved.

A consistent branching strategy should be used, with new bugs or features branching off the master branch.
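A minimal sketch of that flow, with illustrative branch and ticket names:

```shell
# Branch off master/main for a new feature (names are illustrative)
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "Your Name"
git commit --allow-empty -q -m "initial commit"   # the master/main baseline

git checkout -q -b feature/login-page             # branch off for the new work
echo "login page skeleton" > login.html
git add login.html
git commit -q -m "JIRA-123: add login page skeleton"
# git push -u origin feature/login-page           # then push and open a PR
```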

Before creating a pull request, it is essential to review your code thoroughly, lint it, and run tests to ensure it is error-free and does not contain any unnecessary code.

Make use of PR templates to ensure the contributor submits all the necessary information for PR review.

When creating pull requests, it is important to include detailed commit messages and mention any relevant Ticket IDs or information about testing or proof of concepts. This will help reviewers understand the changes and the reliability of the code.

It is always necessary to create a pull request when merging changes from one branch to another, and the master branch should be protected to prevent direct editing. Pull request standards should be followed to merge code even in repositories without protection.

During the PR review process, when changes to the code are requested, do not delete the branch; instead, make the new changes in the same branch, aligned with the original pull request, so that the history is visible to future reviewers.

Having multiple people review a pull request is recommended, although it is not required.

For large projects, using git-submodule can be beneficial. Keep your branches up to date with development branches and consult with peers to ensure consistency in your project.

Be cautious when merging code on repositories integrated with automation, as it can automatically alter related resources. Ensure you understand the potential impacts of your code commits.

After each release, it is best practice to merge changes from the release branch to the master.

Regularly maintain your repositories by deleting branches that are no longer needed after merging a feature or bug fix.

Consider using Git Hooks for automation where appropriate.

Access to repositories is controlled using GitHub teams; regularly check that only the intended teams and users have access.

After creating a GitHub user ID, enabling multi-factor authentication (MFA) from the settings is crucial.

GitHub Don’ts

Do not share your GitHub credentials with new team members. They should only have access to the codebase after they have secured their credentials through the established process.

The organization should implement the policy of protecting the master branch in GitHub. Even if you encounter an unprotected repository, do not commit directly to the master/main branch. Instead, please take action to have it protected as soon as possible.

Do not stall progress by leaving work unpushed; regularly push changes made on local branches to their remote branches.

Avoid working on outdated master branches. Always use the most recent version by taking a fresh pull.

Never include sensitive information, such as application secrets or proprietary data, in public repositories. Implement secret scanning in PR checks via GitHub Actions, e.g. TruffleHog.
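One possible wiring of such a check, assuming the TruffleHog GitHub Action (the file path is conventional; verify the current inputs against the action's own documentation):

```yaml
# .github/workflows/secret-scan.yml
name: secret-scan
on: [pull_request]
jobs:
  trufflehog:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0          # full history so diffs can be scanned
      - uses: trufflesecurity/trufflehog@main
        with:
          base: ${{ github.event.repository.default_branch }}
          head: HEAD
```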

Avoid committing large files to the repository, as GitHub has a default upload limit of 100MB. Consult with peers or leads to determine the best strategy for a specific project. Git LFS is another option to consider.
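A Git LFS sketch, assuming git-lfs is installed and with an illustrative file pattern:

```shell
git lfs install        # set up the LFS hooks in the repository
git lfs track "*.iso"  # route matching large binaries through LFS
git add .gitattributes # the tracking rules live in .gitattributes
```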

Avoid creating one pull request that addresses multiple issues or environments.

Before resetting a branch, make sure to commit or stash your changes to avoid losing them.

Be cautious when using force push, as it can overwrite remote history; it should only be used by those who are experienced with it.

Do not alter or delete the public history.

Avoid manually copying code from GitHub private repositories. Instead, use the Fork functionality to duplicate the code safely.

Avoid using public GitHub Actions on repositories without going through the defined procedures and approvals.

Preparing for Certified Kubernetes Administrator (CKA) exam

A small rundown on CKA preparation.

CKA Preparations!

In this post, I will be sharing various resources to help you prepare for the CKA exam. In addition, feel free to add resources you know in the comments section, which may help fellow readers.

Exam details

  • Offered by: The Cloud Native Computing Foundation (CNCF)
  • Duration: 2 hours
  • Type: Complete tasks on Linux CLI (Practical)
  • Number of questions/tasks: 15-20 (I had 17)
  • Mode: Online proctored
  • Cost: $375 (includes one free retake). Watch for coupons on LinkedIn and elsewhere on the internet; there are good deals on Black Friday as well.
  • Result: Available within 24 hours of exam completion.
  • You are allowed to open one additional browser tab to access the K8s docs, K8s GitHub, or K8s blog. You should not click or open links outside these domains; that includes the K8s forum.

Study journey

  • Practise the course labs heavily. You may go through the course quickly to understand the Kubernetes world, but you need to spend most of your time practising Kubernetes on the CLI.
  • Free online labs for practising:
  • Once you are good with the theory and understand all aspects of the Kubernetes world, labs are the only places where you should spend your study time.
  • Once you are through all the scenarios/tasks provided by online courses, think up your own custom scenarios and try implementing them.

Tips

Practise! Practise!! Practise!!! The more you are familiar with the CLI and commands, the more time you will save during the exam. In addition, it helps to build your muscle memory for command arguments and gain those extra seconds during the exam.

CKA requires you to complete the given tasks in a Linux terminal (Ubuntu) CLI on a shared Kubernetes cluster setup. So having a good Linux background is a plus: it helps you navigate the CLI, edit files, and save a lot of time.

Make use of -h frequently! If you are not sure about a command’s arguments, the -h flag lists them along with example commands. You can copy those example commands and edit them before executing – a quicker way to get the job done than navigating kubectl commands in the Kubernetes documentation.

Try to complete tasks using imperative commands rather than building spec files.
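For example, a few imperative one-liners (object names are illustrative) that avoid hand-writing spec files:

```shell
# Imperative commands instead of spec files
kubectl run web --image=nginx --restart=Never          # a bare pod, no YAML
kubectl create deployment api --image=nginx --replicas=2
kubectl expose deployment api --port=80 --target-port=80

# When a spec file is unavoidable, generate a skeleton instead of typing it
kubectl create deployment api --image=nginx \
  --dry-run=client -o yaml > api.yaml
```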

Read the question carefully and completely before creating any objects. Keep an eye on the namespaces mentioned in the questions. Assume default namespace when no specific namespace is mentioned.

Verify created objects to make sure they carry properties asked in questions. For pods, make sure they reach running state before proceeding.

Setting alias in Linux shell is one of the famous tips you will come across over the internet. Use it according to your comfort. I did not use it.

Always make sure you run the given context commands at the start of each task. It makes sure you are on the right cluster to perform the task.

Always make sure to return to the main terminal if you ssh to other nodes to perform tasks.

For tasks mentioning sudo -i for root privileges, it’s good practice to switch to root as soon as you log in to the respective node rather than finding out you are not root after running some commands and losing time there!

If you are not familiar with Linux editors like vi, edit your spec files in the exam-provided notepad and then paste the final version of the config into the file on the terminal, rather than running around in Linux editors and wasting time.

Get familiar with copy and paste operations in the terminal. The key combinations differ by operating system; refer to the exam handbook for details. Then practise using those key combinations.

Use kubernetes.io/docs heavily during practice. If you are stuck at something, always try to search and get the information from the official Kubernetes documentation. This will make you comfortable navigating the documentation site and hence save some time during the exam. In addition, you will know the exact keywords to search and the exact links to click for topics you had a hard time studying.

It’s the student’s responsibility not to click or open any sites other than the allowed three. Search in the K8s documentation may yield results with links to the K8s forum; you should not click them. Make a habit of checking links before opening them to avoid issues during the exam.

Please note that the exam simulator you get along with your exam booking has more challenging questions than the actual exam. They mentioned it explicitly there. So if your morale goes down pretty quickly, then it’s best not to check those questions just before the exam :P. They aim more at getting an in-depth understanding of how things run under the hood.

That’s all I have. All the best!

Preparing for Hashicorp Certified Terraform Associate Exam

A quick article that helps you prepare for the Hashicorp Certified Terraform Associate Exam

Terraform Associate exam!

In this quick post, I would like to share some of the resources that help you clear the terraform associate exam. Feel free to add resources you know in the comments section, which may help fellow readers.

The Terraform Associate exam is designed to test the candidate’s readiness for IaC, i.e. Infrastructure as Code. IaC concepts, Terraform CLI hands-on (a lot of it), and knowledge of Terraform’s paid offerings through Cloud or Enterprise should get you through this exam. It’s a practitioner-level exam, so it shouldn’t be hard to beat if you have an IaC and cloud background.

You must have researched already about the exam on its official page, but here are quick facts for your reference.

Topics to study

I suggest you have good hands-on with terraform CLI before taking this exam. It will help you cover the majority of topics, and you don’t have to learn them during preparation. That leaves you with minimal topics to prepare for actual certification.

Hashicorp’s study guide is the best resource to follow along for preparation. Let me quickly list down a couple of topics you should not miss during preparation –

  • IaC concepts
    • Traditional infra provisioning v/s IaC
  • Terraform basic workflow
    • Write, plan and apply.
  • Different types of blocks in terraform code
  • Terraform CLI commands (a huge list of them!)
  • Terraform Modules, functions, state files
    • At least go through all functions once.
    • Lots of hands-on to understand how modules work
    • State management (a big topic!)
  • Debugging and variables
    • Different ways to handle variables
    • Debugging levels, ways to set them, logging in files
  • Detailed understanding of Terraform cloud and enterprise
    • Free and paid offerings in each type
    • Sentinel, workspaces, remote runs etc. understanding
    • Clustering, OS availability in each type
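The basic workflow and debug-logging topics above can be sketched on the CLI like this (paths are illustrative):

```shell
# Core workflow: write, plan, apply
terraform init                 # download providers and modules
terraform fmt -check           # verify formatting
terraform validate             # catch syntax/semantic errors
terraform plan -out=tfplan     # preview the changes
terraform apply tfplan         # apply the reviewed plan

# Debugging: set the log level and optionally log to a file
export TF_LOG=DEBUG
export TF_LOG_PATH=./terraform.log
```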

Resources for preparation

Assorted list of online resources you can leverage to follow along your preparation journey.

I am linking here my own last day revision notes as well that I prepared during my certification preparation.

Practice tests

Here is a list of practice tests you can take online before going in for the actual exam. They will test your understanding of the topics and firm up your decision for exam booking.

That’s all I have to share. All the best!

Preparing for SOA-C01 AWS Certified SysOps Administrator Associate Exam

A quick article on how to prepare for SOA-C01 AWS Certified SysOps Administrator – Associate Exam

AWS Sysops Associate!

It’s a short article on the AWS Sysops Associate certification exam. These are extracts from my personal experience which might help you in clearing the exam.

The Sysops exam expects you to have good knowledge of a few core services like EC2, Cloudformation, Cloudwatch, etc., and the AWS CLI. There are many references to CLI options or commands. This exam does not test you on knowing all AWS services like the Solution Architect one does, but it does check a few core services in greater depth.

You must have already gone through the AWS certification page for details about this exam. Let me jot it down for your quick reference.

  • Total of 65 questions
  • Cost $150
  • Exam guide
  • Sample questions
  • The exam result will appear on the screen as soon as you submit the exam.
  • Questions vary in length (short/long), but time should not be a constraint here as it can be in the SA Professional exam.

Topics you should study

It’s recommended that you at least consider clearing the AWS Certified Solutions Architect exam before appearing for this one. It will firm up your foundational AWS knowledge of many services and help you get a grip on the learning path for Sysops.

Below are a few services into which you should deep dive –

  • Cloudwatch
    • Built-in and custom metrics
    • AWS CLI commands and options
    • Deep dive for custom metrics
    • Cloudwatch console (what, when, where)
    • Cloudwatch alarm deep dive
  • Cloudformation
    • All template references and their use cases
    • How to re-use templates in other regions/accounts etc.
    • Create, update, and delete template/stack. All its stages, CLI options, console, etc.
  • EC2
    • Pricing classes, how EC2 is billed and use cases
    • I did not get many questions on EC2 apart from identifying the correct EC2 class in a given scenario
    • Spot block
    • System status checks and Instance status checks
    • Autoscaling group deep dive
    • Root cause analysis of EC2 termination on launch
    • AWS Systems Manager deep dive
    • AWS Inspector
  • S3
    • Different classes and use cases
    • Encryption
    • Security using ACL and bucket policies
    • CORS and CRR
    • Cross account Access control & signed URLs
    • Website hosting basics
    • MFA delete deep dive along with CLI options
    • Versioning fundamentals
  • RDS, Redshift
    • I got only a couple of questions on RDS
    • Enhanced monitoring
    • Multi-AZ and read replica deep dive
    • How to DR, HA, and FT in RDS
    • Redshift enhanced VPC routing
    • Redshift basics
  • VPC and networking
    • VPC flow logs deep dive
    • Security group, NACLs, and route tables
    • NAT, IG, VPC Endpoints
    • Public, private subnets
    • VPC peering process
    • VPC, On-prem connectivity
    • On-prem extension services for AWS
    • WAF, Cloudfront
  • Assorted
    • IAM, KMS deep dive
    • AWS Trusted advisor
    • AWS config deep dive
    • Shared responsibility model
    • AWS certificate manager
    • ELB – ALB, NLB and Classic LB
    • AWS Elastic Beanstalk, AWS OpsWorks
    • SNS, SQS, Lambda
    • Health dashboards
    • Billing tools

Online courses

I relied on only one course for this exam since I had already bagged foundational, associate, and professional-level certifications before this one, so only a few refreshers were required. Here is a list of online courses from well-known websites –

Practice tests

Here is a list of practice tests that you can take online to test your knowledge. If you are already certified you can get a free practice test from AWS itself. You have to claim it under benefits in your AWS certification portal.

I am linking here my last day revision notes which may help you in your preparation.

Our other certification preparation articles

  1. Preparing for 1Z0-1085-20 Oracle Cloud Infrastructure Foundations 2020 Associate Exam
  2. Journey to AWS Certified Solutions Architect – Professional Certification SAP-C01
  3. Preparing for CLF-C01 AWS Certified Cloud Practitioner Exam

Preparing for CLF-C01 AWS Certified Cloud Practitioner Exam

A quick article on how to prepare for CLF-C01 AWS Certified Cloud Practitioner Exam

AWS CLF!

I am writing this article as a preparation guide for the AWS Certified Cloud Practitioner Certification exam. I recently cleared the exam and hence thought of sharing a few pointers which may help you in your journey to get certified.

This is a foundational-level certification from AWS and aims at getting you acquainted with the Cloud and then AWS Cloud fundamentals. If you are looking for a career in the AWS ecosystem, this is your first step. It is also helpful for sales personnel, managers, etc., i.e. the non-technical population, to get familiar with Cloud and AWS terminologies.

If you are coming from a background of working locally or remotely on traditional data center equipment like servers, storage, and networks, or if you possess another cloud technology background, then it’s a walk in the park for you. Since I had completed professional-level AWS certification, I literally sat for this one with no prior study.

You can refer to AWS’s own study guide for a detailed curriculum for the exam and other details.

  • It’s a 90-minute exam with 60 questions. Questions and choices are fairly short, so time should not be a constraint for you.
  • The passing score is 700 out of 1000, and your PASS/FAIL result will be shown on screen immediately after you submit the exam.
  • The exam costs USD $100. If you have completed any previous AWS certification, you can make use of a 50% discount coupon in your AWS certification account.

Topics you should study

Cloud and on-prem
  • What is cloud
  • Difference between cloud and on-prem
  • Benefits and trade-offs for cloud over on-prem
  • The economics behind both. CAPEX, OPEX.
  • Different cloud architecture designs
Basics of AWS
  • AWS infrastructure. Understand each element at infrastructure.aws
  • How to use or interact with AWS services
  • Understand AWS core services
    • IAM, KMS
    • EC2, ELB, Autoscaling, Lambda
    • S3, EFS, EBS
    • VPC
    • Cloudfront
    • Route 53
    • Cloudwatch
    • Cloudtrail
    • SNS, SQS
    • RDS, Dynamodb
  • It won’t hurt to know a few more services around the above core ones at a very high level i.e. name of service and what it is used for.
  • AWS Billing and pricing, how it works, how to get discounts etc.
  • AWS support tiers
  • Different AWS support teams
Cloud security
  • Security of the cloud (AWS responsibility)
  • Security in the cloud (User’s responsibility)
  • Learn the shared responsibility model
  • AWS Access management
  • Compliance aspect of AWS

While studying AWS services, make sure you know their use cases, billing logic, pricing, service limits, integration with other services, access control, types/classes within, etc. You are not expected to remember numbers of any kind, but you should know the contextual comparison. For example, you are not expected to remember exact IO or throughput numbers for EBS volumes, but you should know which EBS type gives more throughput or IOPS than the others.

Online courses

I have curated a list of online courses here which you can take to build a solid AWS foundation.

Practice test exams

There are practice test exams included in the above courses by LA and ACG. But if you want to purchase practice exams only, you can do so. AWS offers a practice exam too, for USD $20. You can attempt it only once, and there is no point in re-purchasing since you will see the same questions every time. You can get a free voucher for this practice test if you have completed another AWS certification.

I created my last day revision notes here, but I mainly referred to my notes from the AWS SAP-C01 exam and only added to these new notes what was missing there.

That’s all I wanted to share. All the best!

Our other certification preparation articles

  1. Preparing for 1Z0-1085-20 Oracle Cloud Infrastructure Foundations 2020 Associate Exam
  2. Journey to AWS Certified Solutions Architect – Professional Certification SAP-C01

Preparing for 1Z0-1085-20 Oracle Cloud Infrastructure Foundations 2020 Associate Exam

A quick article on how to prepare for 1Z0-1085-20 Oracle Cloud Infrastructure Foundations 2020 Associate certification exam.

OCI Foundations Associate 2020

OCI (Oracle Cloud Infrastructure) Foundations 2020 Associate is a foundation-level exam. If you are coming from another cloud service provider background, it will be a piece of cake for you. Being a foundation-level exam, it tests you on a conceptual basis only.

It’s a 60-question multiple-choice exam which you have to complete in 105 minutes – just under 2 minutes per question, which is pretty good for a foundation-level exam. Questions and answers are short, so you don’t have to invest much time in reading, and you can complete the exam well before time.

The exam costs $95 and the passing score is 68%. At the time of writing this article, due to the COVID-19 epidemic, Oracle announced that the course material and exam are free of cost for a specific period of time. The exam is currently available in online proctored mode from Pearson since most of the exam centers are closed in view of the COVID-19 lock-down.


Let’s walk through exam topics and points you need to consider while preparing for this certification. An exam guide from Oracle can be viewed here.

Exam topics are :

  1. Cloud concepts
  2. OCI fundamentals
  3. Core OCI services
  4. Security and compliance
  5. OCI Pricing, billing, and support

Cloud concepts

If you are coming with a background of any other cloud provider like AWS, then you got it covered already.

  • You should be through with concepts of HA (High Availability), FT (Fault Tolerance) and the difference between them.
  • What is the cloud?
  • Know the advantages of cloud over the on-prem data center.
  • Get familiar with RTO and RPO concepts.

OCI Fundamentals

This topic covers the basics of OCI, i.e. how it is architected.

  • Understand the concepts of region, AD (Availability Domain) and FD (Fault Domain)
  • Types of the region – Single AD and multi AD
  • Learn about compartments and tenancy

Core OCI services

In this topic, you are introduced to core OCI services at a higher level. There is no need for a deep dive into each service. A high-level understanding of each is enough for this exam.

  • OCI Compute service. Learn all the below offerings.
    • Bare metal
    • Dedicated virtual host
    • Virtual Machine
    • Container engine
    • Functions
  • OCI Storage services. Learn below offerings
    • Block Volume
    • Local NVMe
    • File Storage service
    • Object Storage service
    • Archive storage
    • Data transfer service
  • OCI Networking services
    • VCN (Virtual Cloud Network)
    • Peering
    • Different kinds of gateways
      • NAT Gateway
      • DRG Gateway
      • Internet Gateway
    • Load balancers
    • NSG (Network Security Groups) and SL (Security Lists)
  • OCI IAM service
    • Concept of principals and Instance principals
    • Groups and dynamic groups
    • Policy understanding along with syntax and parameters
  • OCI Database service. Study all below offerings
    • VM DB systems
    • Bare Metal DB systems
    • RAC
    • Exadata DB systems
    • Autonomous data warehouse
    • Study backup, HA, DR strategies
  • Have a high-level understanding of below services :
    • OCI Key management service
    • OCI DNS service
    • Data safe
    • OS Management service
    • OCI WAF
    • Audit log service
  • Tagging
    • Usages
    • Type: free form and defined
    • Tag namespaces

Security and compliance

OCI security consists of different facets. Understand the below areas in the context of security:

  • Cloud shared security model
  • Securing OCI using IAM
  • Data-at-rest and data-in-transit protection
  • Key management service
  • HA, FT using AD, FD or services for data protection

OCI Pricing, billing and support

Understand how pricing and billing work in each service we saw above. Learn the relative pricing among tiers in the storage services. You don’t need to remember any numbers, but you should know them contextually, like which is priced high and which one low.

Learn billing models in OCI

  • PAYG (Pay as you go)
  • Monthly Flex
  • BYOL

Understand budget service and how tags, compartments can help in billing and budgeting.

Learn about the SLA structure offered by Oracle. This part is missing in OCI online training.

That’s all you have to know to clear this exam. As I said, if you are coming from AWS or Azure, you can relate almost everything to those cloud services, which makes it easy to learn and understand.

I created my last day revision notes here (most of the reference to AWS for comparison) which might be useful for you as well.

Now, just a little bit of study and go for it! All the best!

Our other certification preparation articles

  1. Preparing for CLF-C01 AWS Certified Cloud Practitioner Exam
  2. Journey to AWS Certified Solutions Architect – Professional Certification SAP-C01

Journey to AWS Certified Solutions Architect – Professional Certification SAP-C01

Let me share my experience of clearing the toughest AWS exam, ‘AWS Certified Solutions Architect – Professional’. This article might help you in your journey to get AWS CSA Pro certified.

Getting AWS CSA PRO certified!

In this article, I am going to cover the last few months of the certification journey, which can prove as useful to you as it was for me.

As I said, the last few months – so I assume you already have good hands-on experience (via a personal account or corporate projects) with AWS services. Obviously, services like Snowball and Direct Connect are rare to get hands-on with, but you need to have a solid understanding of these services at least.

Let’s begin with the non-technical aspect of this journey which plays a key role in completing your Exam.

Your reading skills matter!

Yup, you read it right. The AWS CSA Pro exam has 75 questions which you need to answer in 180 minutes. That drills down to roughly 2.4 minutes per question.

Most of the questions are 3-4 or more statements long, and so are the choices in the answers. So you need to read through almost a full paragraph of text for a single question. Then you have to understand what is being asked, analyze the answers, and choose the one that best fits the ask. That’s too much work to accomplish in about 2 minutes!

And there are very few questions where some answers are plainly incorrect and can be eliminated at first glance. Most of the answers are correct, but you need to choose the most appropriate one for the question’s requirement. That’s a tedious task which requires more time. Hence I said reading skills do matter.

A tip (might be a crazy one): watching videos with subtitles is an easy way to train your brain to read quickly and grasp the context in parallel!

Obviously, you should make yourself comfortable before you sit for your exam, since it’s a 3-hour-long exam and you don’t want to get distracted by anything.

Last month revisions using online training courses

In the last month before the exam, you might want to subscribe to online courses specifically structured and targeted to the scope of the exam, with material designed around the core services appearing in the exam.

These courses are a bit on the longer side (40-50 hours of video), but you can always use video speed (1.5x generally) to go through the course quickly. I took Linux Academy’s course by Adrian Cantrill and A Cloud Guru’s course by Scott Pletcher. But do not attempt the practice exams at the end of the course right away; keep them for your final stage before the exam.

There are free courses available on the AWS training portal as well, which you can check in the meantime. You should know all AWS services at least by name and their use. Services launched in the last year are less likely to appear on the exam, so you can skip them.

Refer AWS documents and videos (Mandatory)

Once you are through the online training courses for the exam, you will have a good idea of what to expect in it. These courses are often supplemented with links to AWS whitepapers or re:Invent videos related to each chapter’s topic. Yup, those are essential things to go through.

AWS whitepapers and FAQ pages cover many minute details that you may have missed and help you determine the validity of your choice for the situation in question. If you are short on time, then at least go through the documents for the services in which you are weak or have little knowledge/experience.

AWS re:Invent videos on YouTube are another content-rich resource, offering insights and points you may have missed in your preparation. They are also helpful because many customers present their use cases at re:Invent. This helps you map real-world use cases to those in the exam and gain solid confirmation of your answers. And you can use YouTube’s playback speed control to get through the videos quickly!

Getting there

All right, now all the knowledge sourcing is done and it’s time to test that knowledge. Hit the practice exams from your online courses. Also be sure to get the practice exams by Jon Bonso; it’s a set of four practice tests and worth the investment.

Also consider taking AWS’s own practice exam. If you are lucky, you might encounter some of its questions in the real exam. And if you hold a previous AWS certification, you should have a coupon code in your AWS training account that lets you take this test for free.

You are ready to book your exam when you can score 90% or above on all the above practice tests, understanding why a particular answer is correct and why the others are not. Memorizing answers is not going to help you in any way.

I uploaded my 50-page handwritten notes. They might serve you for last-day revision, like flashcards.

View my last day revision notes

Being there!!

And here you are! The deal day! On exam day, just keep calm and take the exam. Don’t rush through any last-minute reading; it will only confuse and complicate things. Better to be in a peaceful state, since a fresh mind is what helps you read and understand the essay-length questions of the exam in the first pass. That way you don’t waste precious time re-reading questions and answers.

  • Always keep in mind that you cannot spend more than two minutes on a single question. Time is precious!
  • If you can’t figure out an answer quickly, flag the question and move on.
  • If answers share the same solution and differ by only one or two keywords, it’s easy to finalize the answer quickly without reading through the whole statements.
  • Scan through the question and capture keywords like highly available, low cost, multi-region, load balancing, etc. This helps you narrow down to particular services.
  • Start building a solution in your mind as you read through the question using those keywords. Then look at the answers and match the solution you have in mind. It saves a lot of time!
  • Do not submit the exam until the last second, even if you manage to complete all questions and review the flagged ones early. Use the remaining time to go through your answers again.

Result?

Your result will be emailed to you within 5 business days, but you can tell whether you made it from the message displayed on screen once you submit the exam. The message is quite confusing (even more so when you have fried your brain for the last three hours!) since it only states that you completed the exam. (The different messages are discussed here in the forum.) In a nutshell: if it starts with Congratulations, you made it; if it starts with Thank You, you need a re-attempt.

Our other certification preparation articles

  1. Preparing for CLF-C01 AWS Certified Cloud Practitioner Exam
  2. Preparing for 1Z0-1085-20 Oracle Cloud Infrastructure Foundations 2020 Associate Exam

Linux infrastructure handover checklist

A checklist to help you take over Linux infrastructure in a handover or transition from another support party

Pointers for Linux infra handover

Handover or transition is an unavoidable step that comes in every sysadmin’s life. It’s the process of transferring roles and responsibilities from one operations party to another due to changes in support contracts, business, etc.

The obvious goal is to understand the current setup and working procedures so that you can continue them once the previous support party hands over authority. So we will walk through a list of points and questions that will help you in a Linux infrastructure handover or transition. You can treat this as a questionnaire or checklist for Linux handover.

If you are going to handle servers hosted in a public cloud like AWS or Azure, then the majority of the below pointers simply don’t apply 🙂

Hardware

We are considering remote support here, so managing hardware is not really in the scope of the handover. Generic knowledge about the hardware is enough, and no detailed analysis is required. If your transition/handover includes taking over hardware management as well, you might need more detailed information than listed below.

  1. Hardware details of proprietary systems like HP-UX, AIX, blade, and rackmount servers, for inventory purposes.
  2. Datacenter logical diagram with the location of all CIs. This will be helpful for locating a CI quickly for hardware maintenance.
  3. Vendor details along with SLAs, datacenter contacts, and OEM contacts for hardware support at the datacenter, plus the escalation matrix.
  4. Vendor coordination process for hardware support at the datacenter
  5. DR site details and connectivity details between the primary and DR sites
Server Connectivity

This is one of the prime requirements whenever you take over any Linux infra. The first thing to know is how you can reach the remote (or even local) Linux servers, along with their console access.

  1. How are servers accessed from remote locations? Jump server details, if any.
  2. VPN access details if any, and the process to get new VPN access.
  3. Accounts on the Linux servers for logins (LDAP, root, etc., if implemented)
  4. How is console access provided for physical servers?
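As an illustration of the first pointer, jump-server access is often codified in a client-side SSH config fragment like the sketch below. The hostnames and username here are made-up placeholders, not part of any real setup:

```
# ~/.ssh/config - hypothetical jump-server setup; replace hosts/users with your own
Host jump
    HostName jumphost.example.com
    User youruser

# Reach all internal servers through the jump host
Host *.internal.example.com
    ProxyJump jump
    User youruser
```

With this in place, `ssh web01.internal.example.com` transparently hops through the jump server, which is worth documenting during handover so new team members don’t rediscover it the hard way.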
Licensing & contracts

When it comes to supporting infrastructure, you should be well aware of the contracts you have with hardware and software vendors, so that you can escalate issues when they require an expert’s eyes.

  1. Vendor contract information for the OS in use (Red Hat, SUSE, OEL, etc.), including start/end dates, SLA details, level of support included, products included, escalation matrix, etc.
  2. Software licenses for all tools, along with the middleware software used in the infrastructure.
  3. Procedure or contacts for the team that renews the above contracts or licenses.
Risk mitigation plans for old technology

Every company runs a few CIs on old technology, for sure. One should take the upgrade of these CIs into consideration while taking handover. Old technology dies over time and becomes harder to support day by day. Hence it’s always advisable to identify these CIs as risks before taking handover and get clarity on their mitigation from the owner.

  1. The Linux infrastructure’s future roadmap for servers running old OS versions (i.e. end of life or end of support)
  2. Discuss migration plans from AIX and HP-UX Unix flavours to Linux for servers running out of vendor contracts and support in the near future.
  3. Ask for a migration plan for servers running non-enterprise Linux flavours like CentOS, Fedora, Oracle Linux, etc.
  4. The same investigation for the tools or solutions used in the Linux infra for monitoring, patching, automation, etc.
Linux patching

A quarterly planned activity! Patching is an inseparable part of the Linux lifecycle, so obviously it gets its own section. Gather whatever details you can around this topic from the owner or the previous support party.

  1. What patching solutions are in use, e.g. a Spacewalk server, SUSE Manager server, etc.?
  2. If none, what is the procedure for obtaining new patches? If it’s from the vendor, check the related licenses, portal logins, etc.
  3. What patching cycles are followed? i.e., the frequency of patching, the patching calendar if any.
  4. Patching procedure, downtime approval process, the ITSM tool’s role in patching activities, coordination process, etc.
  5. Check if any patching automation is implemented.
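As a quick sketch of point 2 (and only a sketch, since tool availability varies by distribution and estate), pending patches can be checked with the native package manager. The output path here is an assumption:

```shell
#!/bin/sh
# Sketch: detect the package manager and dump pending updates to a file.
OUT=/tmp/pending_patches.txt
{
  echo "Patch check on $(hostname) at $(date)"
  if command -v yum >/dev/null 2>&1; then
      yum -q check-update || true             # yum exits 100 when updates are pending
  elif command -v zypper >/dev/null 2>&1; then
      zypper --non-interactive list-patches
  elif command -v apt-get >/dev/null 2>&1; then
      apt-get -s upgrade | grep '^Inst' || true   # -s simulates; nothing is installed
  else
      echo "No supported package manager detected"
  fi
} > "$OUT"
echo "Wrote $OUT"
```

A one-off check like this is no substitute for the inherited patching procedure, but it helps you verify the state of a server on day one.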
Monitoring

Infrastructure monitoring is a vast topic, and some organizations have dedicated teams for it. If that’s the case, you will need to gather very little on this topic.

  1. Details of the monitoring tools implemented, e.g. the tool’s servers, portal logins, licensing, and infra management details for the tool.
  2. SOP to configure monitoring for new CIs, alert suppression, etc.
  3. The process for defining alert policies, thresholds, etc. in the tool
  4. The monitoring tool’s integration with other software like the ticketing tool/ITSM
  5. If the tool is managed by a separate team, then contact details, escalation matrix, etc. for that team.
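The alert-threshold idea in point 3 boils down to checks like the following sketch, which flags filesystems above a usage threshold. The 90% figure is an assumption; real thresholds come from the alert policy you inherit:

```shell
#!/bin/sh
# Sketch of a filesystem-usage threshold check, the kind of rule a
# monitoring tool evaluates before raising an alert.
THRESHOLD=90
df -P | awk -v t="$THRESHOLD" 'NR > 1 {
    use = $5; sub(/%/, "", use)              # strip the % sign from the Capacity column
    if (use + 0 >= t) print "WARN:", $6, "at", use "%"
}'
```

Understanding how the inherited tool expresses such rules (per-host overrides, suppression windows, etc.) is exactly what the SOPs above should tell you.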
Backup solutions

Backup is another major topic for organizations, and it is mostly handled by a dedicated team considering its importance. Still, it’s better to have ground-level knowledge of the backup solutions implemented in the infrastructure.

  1. Details of backup solutions
  2. SOP for backup related activities like adding, updating, deleting new/old CI, policy definitions, etc.
  3. List of activities under the backup stream
  4. Backup recovery testing schedules, DR replication details if applicable
  5. Major recurring backup schedules (e.g. weekends) so that you can plan your activities accordingly
Security compliance

The audit requirement is to keep your Linux infra security compliant. All Linux servers should comply with the security policies defined by the organization, be free from vulnerabilities, and always run up-to-date software. Below are a few pointers to consider –

  1. Solution or tool for running security scans on Linux servers
  2. SOP for the same, along with operating details.
  3. Password policies to be defined on Linux servers.
  4. Hardening checklist for newly built servers
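One small hardening-checklist item behind pointers 3 and 4 can be scripted: flagging accounts with an empty password field in a shadow-format file. The sketch below uses inline sample data so it is safe to run anywhere; on a real server you would point it at `/etc/shadow` (as root):

```shell
#!/bin/sh
# Sketch: find accounts with an empty password field (field 2) in a
# shadow-format file. Sample entries are made up for illustration.
cat > /tmp/sample_shadow <<'EOF'
root:$6$salt$hashedpassword:19000:0:99999:7:::
appuser::19000:0:99999:7:::
svcacct:*:19000:0:99999:7:::
EOF
awk -F: '$2 == "" { print "Empty password for account: " $1 }' /tmp/sample_shadow
# prints: Empty password for account: appuser
```

Checks like this are usually only one line item in the scanning tool’s policy, which is why the SOP and policy definitions are worth collecting during handover.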
Network infra details 

The network is the backbone of any IT infrastructure. It’s always run by a dedicated team, so you are not required to have in-depth knowledge of it; it’s not in the scope of your transition. But you should know a few basics to keep your day-to-day sysadmin life running smoothly.

  1. SOP for proxy details, how to get ports opened, IP requirements, etc.
  2. Network team contact details, the process to request network services, escalation matrix, etc.
  3. How is internet connectivity implemented for the servers?
  4. Understanding the network perimeter and zones like DMZ, public, and private in the context of the DC.
Documentation repository

When you kick off support for new infrastructure, the document repository is a gold mine for you. So make sure it is populated with all kinds of related documents and make it worthwhile.

  1. Location and access details of the documentation. It could be a shared drive, a file server, or a portal like SharePoint, etc.
  2. It should include inventories, SOP documents, process documents, audit documents, etc.
  3. Versioning and approval process for new/existing documents, if any
Reporting

This area falls squarely in the sysadmin’s bin. Gather all the details regarding it.

  1. List of all reports that currently exist for the Linux infrastructure
  2. What is the report frequency (daily, weekly, monthly)?
  3. Check if reports are automated. If not, ask for the SOP to generate/pull reports; automating them is then an improvement area for you.
  4. How and why is report analysis done? This helps you understand the expectations from report outputs.
  5. Any further procedure for reports, like forwarding to management, signoff from an authority, etc.
  6. Report repository, if any. This is covered in the documentation repository section as well.
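Automating a report (point 3) can start as small as a shell script. The sketch below writes a basic daily health report; the output path and the sections chosen are my own assumptions, not an inherited standard:

```shell
#!/bin/sh
# Sketch: generate a simple daily report file that could later be
# mailed to management or pushed to the report repository.
REPORT="/tmp/daily_report_$(date +%Y%m%d).txt"
{
  echo "=== Daily Linux report: $(hostname) ==="
  echo "Generated: $(date)"
  echo "--- Uptime ---"
  uptime
  echo "--- Disk usage ---"
  df -P
} > "$REPORT"
echo "Report written to $REPORT"
```

Scheduling it via cron and adding the forwarding/signoff steps from point 5 turns it into the kind of recurring report the checklist is asking about.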
Applications

This area is not actually in scope for the sysadmin, but it helps them work in a process-oriented environment. It also helps trace the criticality of, and impact on, applications running on servers when an underlying CI runs into trouble.

  1. ITSM tool (IT Service Management tool) used for ticketing and asset management, and all details related to it, like access, authorization, etc.
  2. Also ask for a small training session to get familiar with the ITSM tool, as it is customized according to the organization’s operating structure.
  3. Architectural overview of the applications running on the Linux servers.
  4. Critical applications along with their CI mapping, to track down application impact in case of issues with a server
  5. Communication and escalation matrices for applications.
  6. The software repository in use, e.g. software setups, installables, OS ISO images, VM templates, etc.
Operations

All the above points gather the data you will use in this phase, i.e. actually supporting the Linux infrastructure.

  1. List of day-to-day activities and the expected support model
  2. Logistics for operations, like phone lines, ODC requirements, IT hardware needed for support, etc.
  3. Process for decommissioning old servers and commissioning new ones
  4. New CI onboarding process
  5. DR drill activity details
  6. Escalation/management matrices on the owner organization’s side for all the above tech solutions

That’s all I could think of right now. If you have more pointers, let me know in the comments; I’ll be happy to add them here.

Linux server build template (document)

A Linux server build template to help you design your own build sheet or build book. This template can be used as a baseline and has all the necessary details for a new server build.

Linux server build sheet

Technical work needs to be documented well so that future sysadmins can refer to it and make their lives easier! One important document every Linux sysadmin has to maintain is the server build book or server configuration sheet.

Although many automation tools on the market can pull configuration from a server and present it in a formatted manner, they only come into the picture once the server is up and running. What if you want to fill in the server build document before the server is made active? Or you need to draft a build sheet in which the requester fills in details so that you can deploy the server accordingly? In such cases you cannot rely on automation tools.

Before you read further, please note that this is a pure process document, not a technical one.

It is a manual method of collecting information from the respective stakeholders; using all that information, you can build your server. Information can be collected from stakeholders like the below –

  • Business authorities (project details, billing context, server tier classification)
  • Datacenter team (hardware details)
  • Storage team (Storage connectivity and capacity)
  • Network team (IP, connectivity matrix)

To build or deploy one Linux server, all the above stakeholders need to be contacted for the relevant information that will help you deploy it. I have created a sample template for a Linux server build which you may refer to (link at the end of the article).

Business authorities can help you identify the server’s tier classification, which tells you how critical the server is. The most critical servers get high-performance resources (like SSD storage) and the latest tech, while less critical ones receive less expensive resources (like SATA disk storage). The tiers are commonly named platinum (most critical, highest value), gold, silver, and copper (least critical). The terminology may vary between organizations, but this scheme is widely used across the IT industry. Business authorities also help you identify the server’s project, so you can use that information in your inventory sheet, naming conventions for the server infra, etc.

The datacenter team helps you identify the hardware on which to build your server. If it’s a physical server, its DC location along with the rack number, chassis number, blade number, etc. helps you locate the server physically. iLO or management port connectivity can also be arranged with the datacenter team’s help. This is crucial for new servers, which have nothing running on them yet and need a fresh install. If it’s a virtual server on a virtualization platform like VMware, the datacenter team can help you identify the proper host within the virtualized infra. Other details like the VMware datacenter, cluster, ESXi host, and datastore can also be obtained with the DC team’s help.

The storage team can help you with LUN provisioning according to your requirements. Storage can be allocated according to the server tier. For a new physical server, you also need physical connectivity in place, with the help of the storage and DC teams.

The network team can provide you with a free IP, subnet mask, and gateway to configure on the server. Any new VLAN creation request can be taken up with the network team for resolution.

I have incorporated all these points into a sheet.

Download Linux server build template by kerneltalks.com

This build sheet helps you gather all the requirements before you start building a Linux server in your physical or virtual infra!
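Once the server is live, you can also pull a few of the same facts back from it to cross-check against the sheet. This is only a sketch using portable commands; the field labels are my own, not taken from the template:

```shell
#!/bin/sh
# Sketch: collect basic facts from a running server for cross-checking
# against the build sheet after deployment.
echo "Hostname   : $(hostname)"
echo "Kernel     : $(uname -r)"
echo "OS         : $(uname -s)"
echo "CPU cores  : $(getconf _NPROCESSORS_ONLN)"
echo "IP address : $(hostname -I 2>/dev/null || echo 'see ip addr / ifconfig')"
```

This is the point where the automation tools mentioned earlier take over, but a few lines of shell are enough for a first sanity check.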