Continuing my Kubernetes series, this post dives into security – one of the most critical aspects of running a production cluster. From authentication and authorization to network policies and securing containers, understanding these concepts is essential for protecting your cluster and workloads.
Security in Kubernetes starts at two levels:
Secure the hosts – Basic hardening practices: disabling root login, using SSH keys instead of passwords, and keeping systems patched. If the underlying nodes are compromised, nothing else matters.
Secure the Kubernetes cluster – Control access to the kube-apiserver. Since the API server is the gateway to everything in the cluster, securing it is critical. This involves determining who can access it (authentication) and what they can do (authorization).
We also use network policies to control pod-to-pod communication at a granular level.
Authentication: Who Can Access the Cluster
Different users and services need to access your cluster – admins, developers, CI/CD systems, monitoring tools. Each needs proper authentication.
User access is managed by the kube-apiserver. When a request comes in, the API server verifies the identity before processing it.
Authentication Mechanisms:
- Static token files – Authentication based on a file containing tokens. Not recommended for production due to security concerns.
- Certificates – Using TLS certificates for authentication. This is the most common and secure method.
- Identity services – Integration with external providers like LDAP, Active Directory, or cloud IAM services.
TLS Certificates in Kubernetes
Communication between master and worker nodes needs to be encrypted and authenticated. Kubernetes uses TLS certificates extensively – server certificates for servers and client certificates for clients.
Server Certificates:
- kube-apiserver has apiserver.crt and apiserver.key
- etcd has its own certificate and key
- kubelet on each node has its certificate and key
Client Certificates:
- Admin users accessing the API server have their own certificates
- Components like kube-scheduler, controller-manager, and kube-proxy also have client certificates
- The kube-apiserver acts as both server and client – it’s a client when accessing etcd
Checking Certificate Health
To verify certificate details in a kubeadm-based cluster, check the static pod manifests in /etc/kubernetes/manifests for each control-plane component. Each manifest references the certificate files being used. You can inspect those certificates to check expiration dates, subject names, and other details.
Understanding certificate locations and validity is crucial for troubleshooting authentication issues.
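For example, on a kubeadm-based cluster you can read the certificate paths out of the API server's static pod manifest and then inspect a certificate with openssl. The paths below are the kubeadm defaults; adjust them for your setup.

# See which certificate files the API server is configured with
grep crt /etc/kubernetes/manifests/kube-apiserver.yaml

# Check a certificate's issuer, subject, and expiration date
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout | grep -A 2 Validity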
Certificate API
When a new user needs access, the manual process is cumbersome. Kubernetes has a Certificate API to automate this.
The workflow:
- User creates their private key
- User creates a CSR and sends it to an admin
- Admin creates a CertificateSigningRequest resource in Kubernetes
- Admin approves the CSR using kubectl certificate approve
- User can retrieve the signed certificate
Check pending CSRs with kubectl get csr.
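Here's a rough sketch of that flow for a hypothetical user named jane (the user name and file names are just placeholders):

# User: generate a private key and a CSR
openssl genrsa -out jane.key 2048
openssl req -new -key jane.key -subj "/CN=jane" -out jane.csr

# Admin: wrap the CSR in a CertificateSigningRequest object
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: jane
spec:
  request: $(base64 < jane.csr | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF

# Admin: approve it, then the user retrieves the signed certificate
kubectl certificate approve jane
kubectl get csr jane -o jsonpath='{.status.certificate}' | base64 -d > jane.crt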
KubeConfig: Managing Access
The kubeconfig file (by default at ~/.kube/config) holds details about clusters, contexts, and users. This is how kubectl knows which cluster to talk to and with what credentials.
The three main sections:
- Clusters – Different Kubernetes clusters you might access (production, development, staging)
- Users – Credentials for different users or service accounts, including certificate information
- Contexts – Bridge users to clusters. A context combines a specific user with a specific cluster, optionally specifying a default namespace.
For example, you might have contexts like “admin@production” or “developer@staging”.
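Here's a trimmed-down kubeconfig showing how the three sections fit together (the cluster name, server URL, and file paths are made up):

apiVersion: v1
kind: Config
clusters:
- name: production
  cluster:
    server: https://prod-api.example.com:6443
    certificate-authority: /etc/kubernetes/pki/ca.crt
users:
- name: admin
  user:
    client-certificate: /home/admin/.certs/admin.crt
    client-key: /home/admin/.certs/admin.key
contexts:
- name: admin@production
  context:
    cluster: production
    user: admin
    namespace: default
current-context: admin@production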
Working with contexts:
View all contexts:
kubectl config get-contexts
Check current context:
kubectl config current-context
Switch contexts:
kubectl config use-context production
Use a specific kubeconfig file:
kubectl config --kubeconfig=/path/to/config use-context research
Contexts vs Namespaces
These two concepts can be confusing at first, so let me clarify:
Namespaces are logical partitions within a single cluster. They organize resources, provide scope for names (you can have a pod named “nginx” in multiple namespaces), and allow you to apply resource quotas and policies.
Common uses: separating environments (dev, staging, prod), multi-tenancy (team-a, team-b), or isolating applications (frontend, backend).
Default namespaces in Kubernetes:
- default – where resources go if you don’t specify a namespace
- kube-system – Kubernetes system components
- kube-public – publicly accessible data
- kube-node-lease – node heartbeat data
Contexts are combinations of cluster + user + namespace saved in your kubeconfig file. They help you quickly switch between different clusters, users, or default namespaces.
Common uses: managing multiple clusters (prod-cluster, dev-cluster), different cloud providers (aws-cluster, gcp-cluster), or different roles (admin-context, developer-context).
The key difference: namespaces exist within the cluster itself, while contexts exist in your local kubeconfig file to help you manage access to multiple clusters.
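To tie the two together: a context can bake in a default namespace, so switching contexts also switches the namespace you work in by default (the cluster, user, and namespace names here are examples):

kubectl config set-context developer@staging --cluster=staging --user=developer --namespace=team-a
kubectl config use-context developer@staging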
Authorization: What Can Users Do
Authentication gets you in the door – authorization determines what you can do once inside.
You don’t want all users to have equal access. Admins need full control, developers might only need access to specific namespaces, and monitoring tools need read-only access.
Authorization Mechanisms:
- Node Authorization – For kubelet on nodes to communicate with the API server
- ABAC (Attribute-Based Access Control) – Policies defined per user. Difficult to manage at scale.
- RBAC (Role-Based Access Control) – Create roles defining permissions, then bind users to those roles. This is the most common approach.
- Webhook – Outsource authorization decisions to an external service
You can configure which mechanisms to use, and the order in which they are evaluated, via the kube-apiserver's --authorization-mode flag.
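On a kubeadm-based cluster, for example, the default is typically Node authorization followed by RBAC, set in the API server's static pod manifest:

# In /etc/kubernetes/manifests/kube-apiserver.yaml
- --authorization-mode=Node,RBAC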
RBAC in Practice
RBAC involves two resources: Roles and RoleBindings.
- Role – Defines permissions: which API resources (pods, services, deployments), and which verbs (get, list, create, delete) are allowed.
- RoleBinding – Associates a role with users or service accounts.
Roles are namespaced – they apply within a specific namespace.
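As a sketch, here's a Role that allows read access to pods in a dev namespace, plus the RoleBinding that grants it to a user (the names and namespace are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io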
Check role bindings:
kubectl describe rolebinding kube-proxy -n kube-system
This shows which users or service accounts are bound to the kube-proxy role.
Cluster Roles
ClusterRoles work like Roles but at the cluster level rather than within a single namespace, and they are granted through ClusterRoleBindings. Use them for cluster-wide resources like nodes and persistent volumes, or to grant permissions across all namespaces.
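For instance, a ClusterRole granting read access to nodes, bound cluster-wide with a ClusterRoleBinding (names are again illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-nodes
subjects:
- kind: User
  name: ops-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: node-reader
  apiGroup: rbac.authorization.k8s.io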
Service Accounts
Accounts in Kubernetes come in two forms: users and service accounts.
Service accounts are for applications that need to interact with the Kubernetes API – things like Prometheus for monitoring, Jenkins for CI/CD, or custom controllers.
When you create a service account, a token can be issued for it to authenticate to the API server (versions before v1.24 automatically created a long-lived token Secret; newer versions issue short-lived tokens on demand).
For external applications: Provide the application with the service account token so it can authenticate to the API server.
For applications running inside the cluster: Every namespace has a default service account that’s automatically attached to pods. To use a specific service account, reference it in your pod spec.
When a service account is attached to a pod, Kubernetes automatically:
- Creates a token
- Mounts it as a projected volume
- Rotates the token periodically
- Expires the token when the pod is deleted
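A quick sketch of both cases, using a made-up service account name:

# Create the service account
kubectl create serviceaccount monitoring-sa

# For an external application (Kubernetes v1.24+): issue a short-lived token
kubectl create token monitoring-sa

And to use it from a pod running inside the cluster, reference it in the pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: prometheus
spec:
  serviceAccountName: monitoring-sa
  containers:
  - name: prometheus
    image: prom/prometheus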
Pod-Level Security
Image Security
When you specify an image in your pod spec, the full path is registry/user-or-account/image-name.
By default, this pulls from public registries like Docker Hub. For production workloads, you’ll likely use private registries for better security and control.
To pull from a private registry, create a Secret containing the registry credentials, then reference it in your deployment’s imagePullSecrets field. Kubernetes uses this secret to authenticate when pulling the image.
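A sketch of that setup, assuming a private registry at registry.example.com (the registry, credentials, and image name are placeholders):

# Create a Secret holding the registry credentials
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password=mypassword

Then reference it from the pod or deployment spec:

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: registry.example.com/myteam/app:1.0
  imagePullSecrets:
  - name: regcred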
Pod and Container Security
You can define security settings at both the pod and container level – things like which user the container runs as, Linux capabilities, whether it can run as root, and filesystem permissions.
This is done by adding a securityContext section in your pod spec. Container-level settings override pod-level settings.
Example use cases:
- Run containers as non-root users
- Drop unnecessary Linux capabilities
- Make the root filesystem read-only
- Prevent privilege escalation
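A pod spec sketch covering those cases (the image and user ID are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  securityContext:          # pod-level defaults
    runAsUser: 1000
    runAsNonRoot: true
  containers:
  - name: app
    image: registry.example.com/myteam/app:1.0
    securityContext:        # container-level settings override pod-level ones
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]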
Network Policies
By default, all pods in a Kubernetes cluster can communicate with each other. Network policies let you restrict this traffic.
A NetworkPolicy is a resource that defines rules for pods – which traffic is allowed in (ingress) and out (egress).
You specify:
- Which pods the policy applies to (using labels and selectors)
- The type of policy (ingress, egress, or both)
- Which ports are affected
- Which sources/destinations are allowed
A single network policy can define multiple rules, handling both ingress and egress in one resource.
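For example, a policy that only allows pods labeled role=frontend to reach the api pods on port 8080, blocking all other ingress to them (labels, namespace, and port are made up):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend
  namespace: prod
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 8080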
Useful Command-Line Tools
kubectx – Quickly switch between contexts. Much faster than typing out the full kubectl config use-context command.
kubens – Switch between namespaces. Saves you from adding --namespace to every command.
These aren’t built into kubectl but are incredibly useful utilities to install.
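Typical usage (the context and namespace names are examples):

kubectx production
kubens kube-system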
Custom Resources
Kubernetes lets you extend the API by creating custom resources. You define a CustomResourceDefinition (CRD) that acts as a template for creating resources of your custom kind.
This is how operators and custom controllers work – they define new resource types specific to their needs, then watch and manage those resources.
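As a minimal sketch, here's a CRD defining a hypothetical Backup resource (the group, kind, and schedule field are made up for illustration):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Backup
    plural: backups
    singular: backup
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              schedule:
                type: string

Once the CRD is applied, kubectl get backups works just like it does for built-in resources, and a controller can watch for Backup objects and act on them.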
Wrapping Up
Security in Kubernetes is layered – from securing the underlying nodes, to authentication and authorization, to network policies and container security contexts. Understanding these concepts and implementing them properly is critical for production workloads.
Authentication gets users and services into the cluster, authorization determines what they can do, and network policies control how pods communicate. Add in proper image security and container hardening, and you’ve got a solid security foundation.
In the next post, I’ll cover storage – persistent volumes, volume claims, and stateful applications. See you then!