Continuing my Kubernetes series, this post covers how to manage applications throughout their lifecycle – from deployments and updates to configuration and autoscaling. I’ll also dive into cluster maintenance tasks like upgrades and backups. These are critical topics for keeping your applications running smoothly and your cluster healthy.

Application Lifecycle Management

Rolling Updates and Rollbacks

One of Kubernetes’ strengths is how it handles application updates without downtime. When you create a deployment, it triggers a rollout and creates revision 1. Update the image and apply the changes, and a new rollout creates revision 2.

Kubernetes tracks these revisions, making it easy to roll back if something goes wrong.

Check rollout status and history with kubectl rollout status and kubectl rollout history.

Deployment Strategies:

Recreate – Terminate all existing pods first, then create the new ones. This causes downtime but is simpler.

Rolling Update (default) – Brings down pods one at a time and brings up new ones in their place. No downtime, but the update takes longer.
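As a sketch, the strategy and its parameters can be set directly in the deployment spec (the name myapp and the image tag here are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod down during the update
      maxSurge: 1         # at most one extra pod above the desired count
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:1.1   # bumping this tag triggers a new rollout
```

maxUnavailable and maxSurge let you tune how aggressive the rolling update is; the defaults (25% each) work fine for most workloads.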

To trigger a rolling update, just apply your updated deployment with kubectl apply and Kubernetes handles the rest – gradually replacing old pods with new ones. Feel free to try this in a test environment with two terminal windows open: create a deployment in the first, edit and apply changes to it, and from the second window watch the rolling update take place.

If an update causes issues, roll back with kubectl rollout undo deployment deployment-name.

Configuring Applications

Applications need configuration – commands to run, environment variables to set, and sensitive data like passwords to manage. Kubernetes gives you several ways to handle this.

Commands and Arguments

Remember that containers aren’t meant to be standalone operating systems. They’re stateless applications that exit when their task is done.

When running containers, whether with docker run or in a Dockerfile, you specify what the container should do using the ENTRYPOINT and CMD instructions, plus any runtime arguments you pass.

In Kubernetes, you define these in your pod spec under the container section using the command and args fields.
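For example, a pod spec that overrides the image’s defaults might look like this (the name sleeper is just an illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sleeper
spec:
  containers:
    - name: ubuntu
      image: ubuntu
      command: ["sleep"]   # overrides the image's ENTRYPOINT
      args: ["300"]        # overrides the image's CMD
```

Note the mapping: command replaces ENTRYPOINT, and args replaces CMD – a common source of confusion.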

Environment Variables

You can pass environment variables directly to your containers in the pod spec:

env:
  - name: DATABASE_URL
    value: "mysql://db:3306"

This is the direct approach – you’re hardcoding values in your YAML. It works but isn’t ideal for values you use across multiple pods or sensitive data.

ConfigMaps: Centralized Configuration

ConfigMaps provide a more centralized solution. Instead of repeating the same environment variables across multiple pod definitions, you create a ConfigMap once and reference it from your pods.

Two steps: create the ConfigMap, then inject it into your pods.
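For instance, the game-demo ConfigMap referenced below could be created from a manifest like this (keys and values are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: game-demo
data:
  player_initial_lives: "3"
  ui_properties_file_name: "user-interface.properties"
```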

You can inject a single environment variable from a ConfigMap:

env:
  - name: PLAYER_INITIAL_LIVES
    valueFrom:
      configMapKeyRef:
        name: game-demo
        key: player_initial_lives

Or use all variables from a ConfigMap at once:

envFrom:
  - configMapRef:
      name: myconfigmap

This second approach is cleaner when you have many related configuration values.

Secrets: Handling Sensitive Data

Secrets work similarly to ConfigMaps but are meant for sensitive information like passwords, API keys, and tokens. The values are base64-encoded before storing – keep in mind that encoding is not encryption.

Create and reference Secrets the same way as ConfigMaps, just using Secret resources instead. Kubernetes also supports encrypting data at rest using encryption configuration for added security.
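As a sketch, a Secret might look like this (the name app-secret and the key are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secret        # placeholder name
type: Opaque
stringData:               # plain-text values; Kubernetes stores them base64-encoded
  DB_PASSWORD: example-password
```

In the pod spec you can then pull it in with envFrom and a secretRef pointing at app-secret, mirroring the ConfigMap approach.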

Multi-Container Pods

Sometimes you need two services working together closely. Instead of configuring the relationship between two separate pods, you can run multiple containers in a single pod.

This way they scale together, share volume mounts, and share resources. There are different design patterns for this – sidecars, ambassadors, and adapters.
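A minimal sketch of the sidecar pattern, assuming a main app that writes logs and a sidecar that reads them (names and images are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}               # scratch volume shared by both containers
  containers:
    - name: app
      image: myapp:1.0           # placeholder image
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/app
    - name: log-shipper          # sidecar reading the same volume
      image: busybox
      command: ["sh", "-c", "tail -f /var/log/app/app.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/app
```

Because both containers sit in one pod, they share the same network namespace and lifecycle – exactly the tight coupling these patterns are for.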

Init Containers

In multi-container pods, you usually expect all containers to run continuously. But sometimes you want a container to run a task and stop before the main application starts – that’s where init containers come in.

Init containers run to completion before the main containers start. Common use cases include setting up configuration files, waiting for dependencies to be ready, or performing database migrations.
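A sketch of the wait-for-dependencies use case (db-service and the image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
    - name: wait-for-db          # must exit successfully before the app starts
      image: busybox
      command: ["sh", "-c", "until nslookup db-service; do sleep 2; done"]
  containers:
    - name: app
      image: myapp:1.0           # placeholder
```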

Self-Healing Applications

Kubernetes supports self-healing through ReplicaSets and Replication Controllers. If a pod crashes, the controller automatically recreates it. If you’ve specified 3 replicas and one goes down, Kubernetes spins up a replacement to maintain your desired state.

This happens automatically – no manual intervention needed. It’s one of the core features that makes Kubernetes reliable for production workloads.

Autoscaling

Scaling comes in two flavors: horizontal and vertical.

Horizontal scaling – Increases the number of pods or nodes

Vertical scaling – Increases the size (CPU/memory) of existing pods or nodes

Scaling can be done manually or automatically.

Scaling Cluster Infrastructure (Nodes)

To scale the number of nodes in your cluster, you can use the Cluster Autoscaler (Cloud platforms have their own). It watches for pods that can’t be scheduled due to insufficient resources and automatically adds nodes. It also scales down when nodes are underutilized.

Scaling Workloads (Pods)

For pods, you have Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA).

Horizontal Pod Autoscaler (HPA):

You can manually scale with kubectl scale, but that’s not efficient. HPA automates this by observing metrics and adding pods when needed.

Create an HPA with kubectl autoscale deployment myapp --cpu-percent=50 --min=2 --max=10.

This tells Kubernetes to maintain CPU usage around 50%, scaling between 2 and 10 replicas as needed. You can also define these parameters in the HPA spec for more control.
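A declarative HPA spec equivalent to that command might look like this (the target name myapp is a placeholder):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # scale to keep average CPU near 50%
```

The autoscaling/v2 API also supports memory and custom metrics, which the one-line kubectl autoscale command doesn’t expose.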

Vertical Pod Autoscaler (VPA):

Manually scaling pod resources is done with kubectl edit, but VPA automates it by observing metrics and adjusting CPU/memory requests and limits.

VPA doesn’t come by default – you need to deploy it separately. It’s useful for workloads with variable resource needs.
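Once the VPA components are installed, you define a VerticalPodAutoscaler object; a sketch, assuming a deployment named myapp:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: myapp
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  updatePolicy:
    updateMode: "Auto"   # VPA may evict pods to apply new resource requests
```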

In-Place Pod Resizing:

Traditionally, changing a pod’s resources means deleting and redeploying it. There’s a feature for in-place vertical scaling that adjusts resources without recreating the pod, helping avoid downtime. This needs to be enabled as it’s still evolving.
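This is gated behind the InPlacePodVerticalScaling feature gate; as a sketch of the container-level resizePolicy field (details may change as the feature evolves):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resizable
spec:
  containers:
    - name: app
      image: myapp:1.0               # placeholder
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired   # resize CPU without restarting the container
      resources:
        requests:
          cpu: "500m"
```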

Cluster Maintenance

Now let’s talk about keeping your cluster healthy through upgrades and proper maintenance procedures.

OS Upgrades

When you need to upgrade the OS on a node, that node goes down and its pods become inaccessible.

If the node is down for less than 5 minutes (the default pod-eviction-timeout), the controller waits and brings pods back online when the node returns. If it’s down longer, Kubernetes considers the pods dead – pods that belong to a ReplicaSet are recreated on other nodes, and the node comes back empty when it rejoins.

The safe way to handle this is to drain the node first with kubectl drain node-name. This gracefully moves the pods to other nodes. Once the upgrade is complete, uncordon the node with kubectl uncordon node-name to allow scheduling again.

You can also cordon a node with kubectl cordon node-name, which just prevents new pods from being scheduled there without moving existing pods.

Kubernetes Version Upgrades

Kubernetes releases follow semantic versioning: 1.xx.xx (Major.Minor.Patch).

When upgrading, there shouldn’t be a big version gap between cluster components, but nothing should run a higher version than kube-apiserver – it’s the central component everything else talks to.

Upgrade Process:

Upgrade master nodes first, then worker nodes. While masters are upgrading, the cluster still runs, but you can’t make changes or deploy new workloads.

For worker nodes, you have different strategies:

  • All at once (causes downtime for all workloads)
  • One at a time (safer, rolling approach)
  • Add new nodes with the new version, drain old nodes, then remove them

With kubeadm:

First upgrade kubeadm itself, then use it to upgrade the cluster.

Steps:

  1. Upgrade kubeadm package on the master node
  2. Run kubeadm upgrade apply v1.xx.xx on master
  3. If kubelet runs on the master, upgrade it too
  4. For worker nodes: drain the node, upgrade kubeadm and kubelet packages, run kubeadm upgrade node, restart the kubelet service, then uncordon the node

Repeat for each worker node.

Different Kubernetes distributions (k3s, Rancher, manual kubeadm installations) have their own upgrade procedures, so always check the specific documentation.

Backup and Restore

Backups are critical – you need a recovery plan if something goes wrong.

What to back up:

Resource configuration files – Your YAML manifests. Store these in version control (Git) so you can recreate resources if needed.

etcd – This is where all cluster state is stored – information about nodes, pods, configs, secrets, everything. Backing up etcd means you can restore your entire cluster state.

You can back up the etcd data directory directly, or create a snapshot using etcdctl snapshot save (you’ll need to pass the etcd endpoint and TLS certificate flags). To restore, use etcdctl snapshot restore.

etcdctl is the command-line client for etcd and your main tool for backup and restore operations.

Certification Exam Tip

In the CKA exam, you won’t get immediate feedback like in practice tests. You must verify your work yourself. If asked to create a pod with a specific image, run kubectl describe pod to confirm it was created with the correct name and image. Get in the habit of verifying everything you do.

Wrapping Up

Managing application lifecycles in Kubernetes involves understanding deployments and updates, properly configuring applications with environment variables and secrets, and implementing autoscaling for reliability and efficiency. Cluster maintenance – from OS upgrades to Kubernetes version updates and backups – keeps your infrastructure healthy and recoverable.

These concepts build on the scheduling and monitoring topics from the previous post. Together, they give you the foundation to run production Kubernetes workloads confidently.

Next in the series, I’ll cover what steps we take to secure our Kubernetes Cluster. See you soon!
