• Understanding the Linux Boot Process: From Power Button to Login

    Ever wonder what actually happens when you hit the power button on a Linux system? I’ve been diving into the boot process lately and figured I’d break down what’s happening behind the scenes. It’s one of those fundamental topics that helps you troubleshoot issues and understand how your system really works.

    The Big Picture

    The overall process follows this sequence: BIOS POST → GRUB → Kernel → systemd

    There are generally two main sequences involved: boot and startup.

    Boot is everything from when the computer powers on until the kernel is initialized and systemd is launched. Startup picks up from there and finishes getting the system to an operational state where you can actually do work.

    Let’s walk through each stage.

    BIOS and POST

    The boot process starts with hardware. When you first power on or reboot, the computer runs POST (Power-On Self-Test), which is part of the BIOS. The BIOS initializes the hardware, and POST’s job is to make sure everything is functioning correctly – checking basic operability of your CPU, RAM, storage devices, and other critical components.

    If POST detects a hardware failure, you’ll usually hear beep codes or see error messages before the system halts. On newer systems with UEFI instead of a traditional BIOS, the process is similar, but UEFI offers extra features such as a graphical interface and support for larger disks.

    GRUB2 – The Bootloader

    Once POST completes, the BIOS loads the bootloader – GRUB2 in most modern Linux distributions. GRUB’s job is to find the operating system kernel and load it into memory.

    When you see that GRUB menu at startup, those options let you boot into different kernels – useful if you need to roll back to an older kernel after an update causes issues. The GRUB configuration file lives at /boot/grub2/grub.cfg on Red Hat-family distributions or /boot/grub/grub.cfg on Debian-based ones.

    GRUB actually operates in two stages:

    Stage 1: Right after POST, GRUB searches for the boot record on your disks, located in the MBR (Master Boot Record)

    MBR: The Master Boot Record occupies the first sector of a hard disk. It holds the initial boot code and the partition information that identify where the OS lives, so it can be loaded into the computer’s main memory (RAM).

    Stage 2: The files for this stage live in /boot/grub2. Stage 2’s job is to locate the kernel, load it into RAM, and hand control over to it. Kernel files are located under /boot – you’ll see files like vmlinuz-[version].
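
    For example, you can see exactly which kernels are installed and which one you are currently running:

    ls -l /boot/vmlinuz-*    # one file per installed kernel version
    uname -r                 # the kernel version currently running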

    The Kernel Takes Over

    After you select a kernel from GRUB (or it auto-selects the default), the kernel is loaded into memory. First, it extracts itself from its compressed format. The kernel then mounts the initramfs (initial RAM filesystem), a temporary root filesystem that contains the drivers and tools needed to mount the real root filesystem. This is especially important when the root filesystem sits on RAID, LVM, or encrypted volumes.
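
    If you’re curious what actually lives inside that initramfs, you can list its contents – lsinitrd on Red Hat-family systems, lsinitramfs on Debian/Ubuntu:

    lsinitrd /boot/initramfs-$(uname -r).img | less     # Fedora/RHEL (dracut)
    lsinitramfs /boot/initrd.img-$(uname -r) | less     # Debian/Ubuntu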

    Once the kernel has initialized the hardware and mounted the root filesystem, it loads systemd and hands control over to it. At this point, the boot process technically ends – you have a kernel running and systemd is up. But the system isn’t ready for work yet.

    Startup Process with systemd

    The startup process is what brings your Linux system from “kernel loaded” to “ready to use.” systemd runs as PID 1 – the parent of every other process – and is responsible for getting the system to an operational state.

    systemd’s responsibilities include:

    • Mounting filesystems (it reads /etc/fstab to know what to mount)
    • Starting system services
    • Bringing the system to the appropriate target state

    systemd looks at the default.target to determine which target it should load. Think of targets as runlevels – they define what state the system should be in. Common targets include multi-user.target (multi-user text mode) and graphical.target (GUI mode). You can check your default target with systemctl get-default.
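
    A few systemctl commands that are handy when poking around targets:

    systemctl get-default                            # show the default target
    sudo systemctl set-default graphical.target      # change the default
    systemctl list-dependencies multi-user.target    # see what a target pulls in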

    Each target has dependencies described in its configuration file. systemd handles these dependencies and starts services in the correct order to satisfy them.

    Wrapping Up

    GRUB2 and systemd are the key components in the boot and startup phases of most modern Linux distributions. These two work together to first load the kernel and then start all the system services required to produce a functional Linux system.

    Understanding this process has helped me troubleshoot boot issues, understand where to look when services don’t start, and generally appreciate what’s happening under the hood when I power on a system. Next time your system hangs during boot, you’ll have a better idea of which stage it’s stuck in and where to start investigating.
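
    A good place to start is seeing where your own boot time goes – systemd ships tools for exactly that:

    systemd-analyze          # time spent in firmware, loader, kernel, and userspace
    systemd-analyze blame    # slowest units during startup
    journalctl -b            # full log of the current boot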

    See you in the next post!

  • Building a Full DevOps Pipeline: From Dev Container to Production

    Recently wrapped up a project that took me through the complete DevOps lifecycle. The goal was simple: understand how all these pieces fit together in a real workflow. From setting up a development environment to deploying to production with GitOps, here’s how it all came together.

    Starting with the Dev Environment

    First things first, we needed a consistent development environment. We used devcontainers with a JSON config and Dockerfile to spin up a container with everything we needed already configured. Added a script that points to a mise.toml file to handle our tooling setup. This became our devpod – our entire workspace where all development happens.
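
    I won’t paste the whole config here, but the tooling script is roughly this (the file name and details are illustrative – mise simply installs whatever is pinned in mise.toml):

    # postCreate.sh – runs once when the devcontainer is created (illustrative)
    curl https://mise.run | sh        # install mise
    ~/.local/bin/mise install         # install the tools listed in mise.toml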

    Python Packaging with UV

    Inside the devpod, we set up UV, a Python package manager that handles dependencies. Coming from managing Python environments the traditional way, UV was refreshing. Commands like uv init --package, uv sync, and uv add made dependency management straightforward. We structured our project with separate frontend and backend directories and used pytest to test our code as we built it out.
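
    For anyone who hasn’t tried UV yet, the day-to-day loop looks roughly like this (package names are just examples, not our exact stack):

    uv init --package backend     # scaffold a packaged project
    cd backend
    uv add fastapi                # add a runtime dependency
    uv add --dev pytest           # add a dev dependency
    uv sync                       # create/refresh the virtualenv from the lockfile
    uv run pytest                 # run the tests inside that environment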

    Containerizing the Application

    Next step was turning our Python app into Docker images – aiming for the smallest size possible. We created Dockerfiles for both backend and frontend with a few key configurations:

    • Used Python Alpine images for minimal size
    • Mounted dependencies on a cache layer for faster builds
    • Copied our code into the image’s working directory
    • Exposed necessary ports and set up proper user groups
    • Ran the app directly from .venv/bin
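
    A quick way to sanity-check that the slimming actually pays off (image name and path here are placeholders):

    docker build -t backend:local ./backend    # build from the backend Dockerfile
    docker image ls backend:local              # check the final image size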

    Introducing CI/CD with GitHub Actions

    This is where things got interesting. We set up GitHub Actions workflows (pipelines) triggered by changes to our backend or frontend code. Each workflow included:

    Automated testing – Set up the environment on Ubuntu, installed UV, configured Python, pulled our repo, and ran our tests. We added Ruff for linting to catch syntax issues before they became problems. Even added a pre-commit hook so Ruff checks all Python code before commits go through.

    Test coverage – Running pytest was good, but we wanted to know how much of our code was actually covered by tests. Added coverage reporting to see exactly what we were testing and what we weren’t.

    Image building and security scanning – Built our Docker images and scanned them with Trivy to catch any security vulnerabilities.
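
    Stripped of the surrounding YAML, the commands those workflow steps run boil down to something like this (assuming pytest-cov for the coverage part, with placeholder image names):

    uv sync                                # set up the environment
    uv run ruff check .                    # lint
    uv run pytest --cov                    # tests with a coverage report (pytest-cov)
    docker build -t backend:ci ./backend   # build the image
    trivy image backend:ci                 # scan it for vulnerabilities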

    Versioning with Release Please

    With each push, we wanted proper versioning. Set up release-please as a separate GitHub Action that triggers when a PR merges to main. It automatically creates release versions for us – detects changes to our backend (and frontend) and generates its own PR with the new version.

    Following that release, another workflow kicks in to build and push our versioned images to our container registry.
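
    The push step itself is just a tag and a push – registry, owner, and version below are placeholders:

    docker build -t ghcr.io/OWNER/backend:v1.2.3 ./backend
    docker push ghcr.io/OWNER/backend:v1.2.3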

    Local Testing with K3d

    Before anything hits production, we needed to test in a Kubernetes environment. We set up k3d (which runs k3s clusters inside Docker) right in our devcontainer. Created a kubernetes directory with manifests following a similar structure to my homelab setup – base and dev directories with kustomization files.

    The dev environment reads in our frontend and backend configurations, applies patches to use the correct image tags, and references back to the base manifests.

    We added two key components:

    • A k3d config file
    • A script that automates the entire process: checks dependencies, creates the cluster if it doesn’t exist, builds our images, imports them to the cluster, deploys with kustomize, and prints out the application URLs
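
    The heart of that script is only a handful of commands (cluster and file names are illustrative):

    k3d cluster create --config k3d-config.yaml    # create the local cluster
    docker build -t backend:dev ./backend          # build the images
    k3d image import backend:dev -c my-cluster     # make them available inside the cluster
    kubectl apply -k kubernetes/dev                # deploy the dev overlay with kustomize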

    End-to-End Testing

    Created an e2e_test.py script that tests both backend and frontend in the actual cluster environment, then tears down the cluster when done. This runs as part of our GitHub Actions workflow after image creation – a final validation before anything moves forward.

    GitOps for Production

    The final piece was setting up GitOps with Flux. We created a separate script that spins up a k3d cluster configured with GitOps, pointing to our GitOps repository. This simulates our actual production setup.

    Here’s how it flows: our test repo goes through all the CI/CD steps, creates tested and versioned images, and if everything passes, a workflow updates the image tags in our production GitOps repo. Flux watches that repo and automatically syncs any changes to our production cluster. The workflow creates a PR to update the main branch with the new images, and once merged, Flux handles the deployment.
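
    Bootstrapping Flux against that GitOps repo is essentially a one-liner (owner, repo, and path are placeholders):

    flux bootstrap github \
      --owner=OWNER \
      --repository=gitops-repo \
      --branch=main \
      --path=clusters/production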

    Wrapping Up

    Going through this entire pipeline gave me a real appreciation for how all these DevOps tools and practices connect. It’s one thing to know about Docker, GitHub Actions, Kubernetes, and GitOps individually. It’s another to see them work together in a complete workflow – from writing code in a standardized dev environment to automated testing, versioning, and GitOps-based deployments.

    The beauty of this setup is that once it’s configured, the entire process from code commit to production deployment is automated and tested at every step. No manual image building, no kubectl apply commands in production, just Git commits and pull requests.

    Looking forward to expanding on this setup and diving deeper into each component. See you in the next post!

  • I’ve recently been studying for the AWS SAA, and coming from a mostly on-prem or virtual environment, I’ve noticed how many of the services I’m reviewing have great use cases and how they map onto functions I already know from traditional infrastructure. Don’t get me wrong, I don’t see everything about the cloud as a plus or as life-changing for an admin, but going through this prep material from an architect’s point of view, I can appreciate infrastructure design much more.

    AWS EC2: Autoscaling

    One feature of EC2 (AWS’ VM service) that I find genuinely useful is Auto Scaling, whether vertical or horizontal. In on-prem environments you’re often left manually provisioning a new server when demand increases, or planning ahead of time to build out a cluster with a load balancer to manage system load.

    The convenience of an Auto Scaling Group solves this headache. Its purpose is to scale out or in to match the load your application is receiving, and we can configure whichever scaling option best fits our need:

    • Dynamic – scale toward a target value for a chosen metric
    • Scheduled – scale based on known usage patterns
    • Predictive – continuously forecast load and scale ahead of it
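
    As a concrete example, a dynamic target-tracking policy that keeps average CPU around 50% is a single CLI call (the group name is a placeholder):

    aws autoscaling put-scaling-policy \
      --auto-scaling-group-name my-asg \
      --policy-name keep-cpu-at-50 \
      --policy-type TargetTrackingScaling \
      --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":50.0}'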

    Having handled similar scenarios with on-prem servers, I can say this would save plenty of headaches – it takes you from reactive to proactive.

    AWS: S3 vs Traditional File Storage

    When it comes to files and storage, Linux gives us plenty of capable tools and functions – nothing is really lacking in terms of usefulness. But a few features make S3 a great choice.

    With on-prem servers, you’re setting up file servers, managing disk space, dealing with RAID configurations, worrying about backups, setting up NFS shares for application access, and so on – multiple different applications and services to look after.

    S3 gives us various Storage Class options, built-in redundancy, versioning, and lifecycle policies. No managing underlying storage hardware.

    Storage classes: S3 offers several storage options based on how we need our data to behave. Do we need instant retrieval? Frequent or infrequent access? Or should it be archived in Glacier for backups?

    Versioning: A very useful feature that protects against unintended deletes and makes rolling back to previous versions easy.

    Lifecycle Policies: Another useful feature that moves S3 objects between storage classes automatically. For example, after a set amount of time an object can be transitioned from frequent access to Infrequent Access and eventually archived to the Glacier storage class.
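
    Both of those are just a couple of CLI calls (the bucket name and day counts are placeholders):

    # turn on versioning
    aws s3api put-bucket-versioning --bucket my-bucket \
      --versioning-configuration Status=Enabled
    # move objects to Infrequent Access after 30 days, then to Glacier after 90
    aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
      --lifecycle-configuration '{"Rules":[{"ID":"archive-old-objects","Status":"Enabled","Filter":{"Prefix":""},"Transitions":[{"Days":30,"StorageClass":"STANDARD_IA"},{"Days":90,"StorageClass":"GLACIER"}]}]}'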

    There are plenty of features I haven’t discussed – encryption, Access Points, performance options, and more – all available with S3 buckets.

    I’ve kept it to just two services, but anyone architecting environments can see the kinds of problems cloud services can solve. That said, I wouldn’t say I’m all-in on the cloud – I still very much enjoy the hands-on experience of building out on-prem or virtual environments: planning and building clusters and applications with different services and tools, and intertwining them into a functioning, highly available end product. Coming from that background, I can understand the issues cloud services try to tackle and the solutions they offer, and it gives me something concrete to relate them to.

  • For the longest time, I’ve seen cool projects by my fellow Admins and Engineers – building their own servers, homelabs, applications, and more – and thought to myself, why not me? I figured it was too much of a time commitment, too much of a hardware commitment, and what would I even create or build?

    I realized those were just excuses. If I’m passionate about it, I’ll find the time. Hardware? An old laptop is more than enough (building on Raspberry Pis is also super cool). What to build? Anything I want.

    So I went ahead and took that step, and a lot of tutorials and guides later I’m just about there. I figured why not write up a short recap.

    The homelab is still a work in progress, but it will always be. The main idea was to get it up and running in the first place.

    Grabbed an old laptop, installed Ubuntu and got started.

    The first thing to do was decide which container orchestration tool to go with. I ended up choosing K3s, and it was a great decision: a quick, easy, and lightweight Kubernetes distribution that’s perfect for learning and smaller environments. For someone transitioning from traditional sysadmin work to DevOps and container orchestration, K3s lets me get hands-on with Kubernetes without the overhead of a full production cluster setup.
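
    Getting a single-node cluster running really is as quick as the docs promise:

    curl -sfL https://get.k3s.io | sh -    # install and start k3s
    sudo k3s kubectl get nodes             # confirm the node shows up as Ready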

    After getting K3s set up, next came Flux. After seeing guides and suggestions around GitOps, I figured why not – I wanted a feel for the GitOps workflow, not in a production environment but by simulating it in my homelab. Having now used Flux, I can’t imagine running my cluster any other way. From a SysAdmin point of view, with experience in tools like Ansible, I love the idea of controlling the state of an application or machine with files. Flux watches my Git repo and automatically syncs changes to the cluster – defining the state of the cluster from a single source of truth in my GitHub repository. Lovely.
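
    Day to day, a couple of flux commands tell me whether the cluster matches the repo:

    flux get kustomizations                  # what Flux has applied and whether it's in sync
    flux reconcile source git flux-system    # pull the latest commit now instead of waiting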

    Instead of manually declaring resources, I let Flux take care of that. I also enjoyed following the Flux repository structure (https://fluxcd.io/flux/guides/repository-structure/) – a great way to keep a clean, well-organized repository. It was definitely harder to grasp in theory, but once it was in action, further applications were much easier to set up.

    K3s done, Flux done. Now to host applications. I won’t expand too much on the simple setup of storage, service, namespace, and deployment, but getting my first application (Linkding, a bookmark manager) running on the cluster felt like such a win. Once the initial homelab setup is done, hosting applications becomes straightforward. The trickiest aspect can be networking, which I’ll probably tackle in a separate post.
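
    Once Flux synced it, verifying everything landed was a single command (the namespace name is whatever you chose – linkding here is just an example):

    kubectl -n linkding get deployment,service,pvc    # the app's pieces, all in one view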

    I’ll wrap it here for this post, but this is just the start of the homelab. I look forward to growing it, expanding on it, and learning so much more – and I’ll be sure to share that here as well.

    Thank you for reading, see you in the next post!

  • My Takeaways from Kubernetes Fundamentals

    I recently wrapped up a Kubernetes fundamentals course and wanted to share some of the key concepts that stuck with me. If you’re just getting started with Kubernetes like I am, hopefully this helps clarify some things.

    The biggest thing I learned early on: everything in Kubernetes is defined through YAML files. Once that clicked, things started making a lot more sense.

    The course mainly focused on three core components: Deployments, Services, and Storage.

    Deployments are where you define your pods. This includes details like what container image you’re using, how many replicas you want running, selectors and labels for organizing things, which namespace you’re working in, and any volumes you want to attach. Think of deployments as the blueprint for what you want running.
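
    A handy trick while learning: let kubectl generate that YAML blueprint for you (the image and names are just examples):

    kubectl create deployment web --image=nginx --replicas=3 \
      --dry-run=client -o yaml > deployment.yaml    # writes the manifest without creating anything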

    Services are what actually give you an IP address to access your pods. Each pod comes with its own IP address, but services let us group pods under a single IP. There are a few types, but the main ones are ClusterIP, LoadBalancer, and NodePort, each serving its own purpose.

    ClusterIP is the default and gives you an internal IP that your pods can use to talk to each other. You can quickly create one using the expose command on an existing deployment. If you need temporary external access, port forwarding does the job.

    LoadBalancer is what you use when you need something more permanent and externally accessible. Say you have a group of pods you want exposed to the outside world. You create one LoadBalancer service for them and boom, you’ve got a persistent external IP. No need for port forwarding here.
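
    The same deployment can be put behind either kind of service with the expose command (names and ports are examples):

    kubectl expose deployment web --port=80 --target-port=8080                  # ClusterIP by default
    kubectl port-forward service/web 8080:80                                    # temporary external access
    kubectl expose deployment web --type=LoadBalancer --port=80 --name=web-lb   # persistent external IP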

    Storage was probably the trickiest part for me to grasp at first. By default pods are ephemeral, and so is their storage – you can say goodbye to any data on a pod you just deleted. For persistent storage you create Persistent Volumes and Persistent Volume Claims: the Persistent Volume is the actual piece of storage in the cluster, while the Claim is a request for that storage that your workloads reference. The key benefit is that once you attach these volumes to your deployment, your data sticks around even if the pods get deleted or recreated.
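
    Once a claim is bound to a volume, you can see the pairing directly:

    kubectl get pv,pvc    # each volume, the claim it's bound to, and its status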

    These fundamentals have given me a solid foundation to start working with Kubernetes in real environments. There’s obviously a lot more to learn, but understanding deployments, services, and storage gets you pretty far. Looking forward to diving deeper and sharing more as I continue this journey into container orchestration.

  • Over my years as a SysAdmin I’ve come across useful tips and tricks that have helped me navigate the terminal more efficiently and made my job much easier. So I thought, why not share them here as well – some of them you may know, and some you may be hearing about for the first time.

    Tip 1: Using !! to rerun the previous command

    I can’t count how many times I’ve had to redo a command, and coming across this trick has saved me plenty of time.

    How it works: !! expands to the last command you ran. One of the most useful cases is rerunning the last command with sudo – sudo !! does exactly that, without you having to retype the full command. I know plenty of Admins understand the frustration of forgetting to type sudo before running a command.
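
    For example (using apt just to illustrate):

    apt install htop    # oops – fails without root privileges
    sudo !!             # expands to: sudo apt install htop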

    Tip 2: CTRL + R

    Working in the terminal and typing commands constantly, you can sometimes forget the full syntax of a command you ran in the past, or even just recently. CTRL + R has been great for finding a command I may have run before – especially longer ones that are a pain to retype. CTRL + R lets you search your history by whatever bit of the command you can remember. Let’s take an example:

    Let’s say you recently ran an Ansible ad-hoc command:

    ansible databases -b -m yum -a "name=mariadb-server state=latest"

    If you then want to run another ad-hoc command, pressing CTRL + R and searching for "ansible" brings back the last ansible command you ran, and from there you can edit it however you like. Time-efficient if you ask me.
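
    In the terminal it looks something like this – type a fragment, press CTRL + R again to cycle through older matches, and hit Enter to run (or use the arrow keys to edit first):

    (reverse-i-search)`ansible': ansible databases -b -m yum -a "name=mariadb-server state=latest"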

    Tip 3: Aliases for Frequent Commands

    If you consistently type certain commands with certain flags and arguments, long or short, you can create a shortcut by defining an alias in your ~/.bashrc or ~/.bash_aliases file:

    alias ll='ls -alF'

    alias up='sudo apt update && sudo apt upgrade -y'

    These are just a couple of examples – you can imagine how much easier this makes things for a SysAdmin. Feel free to get crazy and experiment with what works for you.

    Remember, you will need to run source ~/.bashrc or restart your terminal for new aliases to take effect.

    These are just some tips that you may or may not find useful, but as you spend more time on the CLI you’ll come across more tricks like these that make your life as an Admin much easier. I hope these were helpful – I’ll see you next week with a new post.

    Hasnain

    Linux Systems Administrator | Aspiring DevOps Engineer

  • Welcome! My name is Hasnain, and to keep it short and simple, I’m a Linux Admin looking to document his journey to becoming a DevOps Engineer.

    I’ve never been the poster or blogger type, but I had a realization: I’ve always benefited from the blogs and articles of others in the field, so why can’t that be me? Why not document this journey toward becoming a proper DevOps Engineer – the tools I learn along the way, the paths I take toward certain certifications – while also sharing what I already have expertise in: Linux.

    I look forward to this, and hope to be consistent in my posts and consistent in my progress. Onwards and upwards!

    Hasnain