Building a Full DevOps Pipeline: From Dev Container to Production

Recently wrapped up a project that took me through the complete DevOps lifecycle. The goal was simple: understand how all these pieces fit together in a real workflow. From setting up a development environment to deploying to production with GitOps, here’s how it all came together.

Starting with the Dev Environment

First things first, we needed a consistent development environment. We used devcontainers, with a JSON config and a Dockerfile, to spin up a container with everything we needed already configured, plus a setup script that points to a mise.toml file to handle our tooling. This became our devpod: the workspace where all development happens.
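
As a rough sketch (file names, versions, and the setup script are illustrative, not our exact config), the two pieces look something like this:

```jsonc
// .devcontainer/devcontainer.json (illustrative sketch)
{
  "name": "devops-pipeline",
  "build": { "dockerfile": "Dockerfile" },
  "postCreateCommand": "bash .devcontainer/setup.sh"
}
```

```toml
# mise.toml (illustrative) – pins the tooling the setup script installs
[tools]
python = "3.12"
uv = "latest"
```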

Python Packaging with UV

Inside the devpod, we set up UV, a Python package and project manager. Coming from managing Python environments the traditional way, UV was refreshing. Commands like uv init --package, uv sync, and uv add made dependency management straightforward. We structured our project with separate frontend and backend directories and used pytest to test our code as we built it out.
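
For reference, the handful of commands we leaned on looks roughly like this (the package names here are just illustrative):

```bash
# scaffold a packaged project layout (pyproject.toml plus a src/ tree)
uv init --package backend
# add a runtime dependency and a dev-only dependency
uv add fastapi
uv add --dev pytest
# create/update the virtual environment from the lockfile
uv sync
```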

Containerizing the Application

The next step was turning our Python app into Docker images, aiming for the smallest possible size. We created Dockerfiles for both backend and frontend with a few key configurations (sketched below):

  • Used Python Alpine images for minimal size
  • Used a cache mount for dependency installs so rebuilds are faster
  • Copied our code into the image’s working directory
  • Exposed the necessary ports and set up a dedicated non-root user and group
  • Ran the app directly from .venv/bin
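
Here's a minimal sketch of the kind of Dockerfile this produced for the backend; paths, ports, and the module name are placeholders rather than our exact setup:

```dockerfile
# Illustrative backend Dockerfile, not the exact one from the project
FROM python:3.12-alpine

# uv binary copied in from its official image
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/

WORKDIR /app

# install dependencies with a cache mount so repeated builds reuse downloads
COPY pyproject.toml uv.lock ./
RUN --mount=type=cache,target=/root/.cache/uv \
    uv sync --frozen --no-install-project

# copy the application code into the image's working directory
COPY src/ ./src/
RUN --mount=type=cache,target=/root/.cache/uv uv sync --frozen

# run as a non-root user and expose the service port
RUN addgroup -S app && adduser -S app -G app
USER app
EXPOSE 8000

# run the app straight from the virtual environment
CMD ["/app/.venv/bin/python", "-m", "backend"]
```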

Introducing CI/CD with GitHub Actions

This is where things got interesting. We set up GitHub Actions workflows (pipelines) triggered by changes to our backend or frontend code. Each workflow included:

Automated testing – Set up the environment on Ubuntu, installed UV, configured Python, pulled our repo, and ran our tests. We added Ruff for linting to catch syntax issues before they became problems. Even added a pre-commit hook so Ruff checks all Python code before commits go through.
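
A trimmed-down sketch of what one of these workflows might look like (the action versions, paths, and trigger filters are my assumptions, not the exact workflow):

```yaml
# .github/workflows/backend-ci.yaml (illustrative sketch)
name: backend-ci
on:
  push:
    paths: ["backend/**"]
  pull_request:
    paths: ["backend/**"]

jobs:
  test:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: backend
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v5   # installs uv on the runner
      - run: uv sync
      - run: uv run ruff check .      # lint before testing
      - run: uv run pytest
```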

Test coverage – Running pytest was good, but we wanted to know how much of our code was actually covered by tests. Added coverage reporting to see exactly what we were testing and what we weren’t.
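
In practice that means pulling in pytest-cov (that package choice is my assumption here) and pointing it at the source tree, something like:

```bash
uv add --dev pytest-cov
uv run pytest --cov=src --cov-report=term-missing
```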

Image building and security scanning – Built our Docker images and scanned them with Trivy to catch any security vulnerabilities.
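
The scanning half leans on the official Trivy GitHub Action; a rough sketch of those steps (image name and pinned version are placeholders):

```yaml
      - name: Build image
        run: docker build -t backend:ci backend/
      - name: Scan image with Trivy
        uses: aquasecurity/trivy-action@0.28.0
        with:
          image-ref: backend:ci
          exit-code: "1"            # fail the job if findings are reported
          severity: CRITICAL,HIGH
```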

Versioning with Release Please

With each push, we wanted proper versioning. We set up release-please as a separate GitHub Actions workflow that triggers when a PR merges to main. It detects changes to our backend (and frontend), works out the new version, and opens its own release PR; merging that PR cuts the release.
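
The release workflow itself stays small. One plausible shape of it, using release-please's manifest mode so backend and frontend can version independently (the file names below are that action's defaults, not necessarily our exact config):

```yaml
# .github/workflows/release-please.yaml (illustrative)
name: release-please
on:
  push:
    branches: [main]

jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
    steps:
      - uses: googleapis/release-please-action@v4
        with:
          # manifest mode: per-package versions for backend/ and frontend/
          config-file: release-please-config.json
          manifest-file: .release-please-manifest.json
```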

Following that release, another workflow kicks in to build and push our versioned images to our container registry.
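
That follow-up workflow is mostly the standard login/build/push steps. A sketch, assuming GitHub Container Registry and a trigger on the published release:

```yaml
# illustrative; in a monorepo the release tag may need parsing per component
on:
  release:
    types: [published]

jobs:
  push-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          context: backend
          push: true
          tags: ghcr.io/${{ github.repository }}/backend:${{ github.event.release.tag_name }}
```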

Local Testing with K3d

Before anything hits production, we needed to test in a Kubernetes environment. We set up k3d (k3s running in Docker) right in our devcontainer. Created a kubernetes directory with manifests following a similar structure to my homelab setup: base and dev directories with kustomization files.

The dev environment reads in our frontend and backend configurations, applies patches to use the correct image tags, and references back to the base manifests.
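
The dev overlay's kustomization is where those image tags get patched in; whether you do it with a patch or the images transformer, the effect is the same. Roughly (image names and tags are placeholders):

```yaml
# kubernetes/dev/kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
images:
  - name: backend
    newName: ghcr.io/example/backend
    newTag: 0.3.1
  - name: frontend
    newName: ghcr.io/example/frontend
    newTag: 0.2.0
```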

We added two key components:

  • A k3d config file
  • A script that automates the entire process: checks dependencies, creates the cluster if it doesn’t exist, builds our images, imports them to the cluster, deploys with kustomize, and prints out the application URLs
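
In spirit, that script boils down to a handful of commands. Here's a stripped-down sketch; cluster name, image names, ports, and file paths are placeholders, and the real script does more dependency checking:

```bash
#!/usr/bin/env bash
set -euo pipefail

CLUSTER=devops-demo

# create the cluster only if it doesn't already exist
if ! k3d cluster list | grep -q "^${CLUSTER}"; then
  k3d cluster create "${CLUSTER}" --config k3d-config.yaml
fi

# build local images and import them into the cluster
docker build -t backend:dev backend/
docker build -t frontend:dev frontend/
k3d image import backend:dev frontend:dev -c "${CLUSTER}"

# deploy the dev overlay and print where to find the apps
kubectl apply -k kubernetes/dev
echo "Frontend: http://localhost:8080  Backend: http://localhost:8081"
```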

End-to-End Testing

Created an e2e_test.py script that tests both backend and frontend in the actual cluster environment, then tears down the cluster when done. This runs as part of our GitHub Actions workflow after image creation – a final validation before anything moves forward.
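
A pared-down version of the idea, with made-up URLs and endpoints standing in for the real ones:

```python
# e2e_test.py -- illustrative sketch, not the project's actual script
import sys
import urllib.request

# endpoints exposed by the local k3d cluster (placeholders)
CHECKS = {
    "backend": "http://localhost:8081/health",
    "frontend": "http://localhost:8080/",
}


def main() -> int:
    failures = 0
    for name, url in CHECKS.items():
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                ok = resp.status == 200
        except OSError:
            ok = False
        print(f"{name}: {'ok' if ok else 'FAILED'} ({url})")
        failures += 0 if ok else 1
    return 1 if failures else 0


if __name__ == "__main__":
    sys.exit(main())
```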

GitOps for Production

The final piece was setting up GitOps with Flux. We created a separate script that spins up a k3d cluster configured with GitOps, pointing to our GitOps repository. This simulates our actual production setup.

Here’s how it flows: our test repo goes through all the CI/CD steps, creates tested and versioned images, and if everything passes, a workflow updates the image tags in our production GitOps repo. Flux watches that repo and automatically syncs any changes to our production cluster. The workflow creates a PR to update the main branch with the new images, and once merged, Flux handles the deployment.
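
On the cluster side, Flux's half of this is just a couple of resources pointed at the GitOps repo. A sketch, with the repo URL, path, and intervals as placeholders:

```yaml
# Illustrative Flux resources; URL, path, and intervals are placeholders
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: gitops-repo
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/gitops-repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: gitops-repo
  path: ./clusters/production
  prune: true
```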

Wrapping Up

Going through this entire pipeline gave me a real appreciation for how all these DevOps tools and practices connect. It’s one thing to know about Docker, GitHub Actions, Kubernetes, and GitOps individually. It’s another to see them work together in a complete workflow – from writing code in a standardized dev environment to automated testing, versioning, and GitOps-based deployments.

The beauty of this setup is that once it’s configured, the entire process from code commit to production deployment is automated and tested at every step. No manual image building, no kubectl apply commands in production, just Git commits and pull requests.

Looking forward to expanding on this setup and diving deeper into each component. See you in the next post!
