1. Architecture: Control Plane, Nodes & Pods
A Kubernetes cluster has two layers: the control plane (the brain) and worker nodes (the muscle).
Control Plane components:
- API Server: The front end for the control plane. Every kubectl command goes through the API server. It validates and processes requests, then updates etcd.
- etcd: A distributed key-value store holding the full cluster state. If etcd dies, the cluster loses its state — back it up.
- Scheduler: Watches for newly created pods without a node assignment and selects the best node based on resource availability and constraints.
- Controller Manager: Runs control loops that maintain desired state — Deployment controller ensures the right number of replicas are running; Node controller handles node failures.
Worker Node components:
- kubelet: An agent that ensures containers described in PodSpecs are running and healthy.
- kube-proxy: Maintains network rules on nodes that allow pods to communicate with each other and with Services.
- Container runtime: containerd (the common default since dockershim was removed in K8s 1.24) or CRI-O — runs the actual containers.
The smallest deployable unit is a Pod — one or more tightly coupled containers sharing a network namespace and storage volumes. Pods are ephemeral — when they die, they're replaced with new ones with different IPs. Never rely on a pod's IP address; use Services instead.
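As an illustrative sketch (the name and image below are placeholders), a minimal Pod manifest looks like this — though in practice you almost always create Pods indirectly through a Deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # placeholder name
  labels:
    app: hello
spec:
  containers:
    - name: hello
      image: nginx:1.25  # any container image works here
      ports:
        - containerPort: 80
```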
2. Core Workload Resources
| Resource | Purpose | When to Use |
|---|---|---|
| Deployment | Manages stateless pods with desired replica count, rolling updates, rollbacks | Web servers, APIs, workers — anything stateless |
| StatefulSet | Like Deployment but with stable pod identities, stable network names, ordered deployment | Databases, Kafka, ZooKeeper — stateful applications |
| DaemonSet | Ensures one Pod runs on every (or selected) node | Log collectors (Fluentd), monitoring agents (Prometheus Node Exporter) |
| Job | Runs pods to completion (batch tasks) | Database migrations, batch processing, one-time tasks |
| CronJob | Runs Jobs on a schedule (like cron) | Scheduled reports, cache warm-up, periodic backups |
| HorizontalPodAutoscaler | Automatically scales Deployment replica count based on CPU/memory/custom metrics | Variable load applications |
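To show how these resources look in practice, here is a hypothetical CronJob running a nightly backup (the name, image, and arguments are illustrative, not from this article):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup            # hypothetical name
spec:
  schedule: "0 2 * * *"           # 02:00 every day, standard cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure   # retry the pod if the backup fails
          containers:
            - name: backup
              image: myregistry/backup-tool:1.0   # placeholder image
              args: ["--target", "s3://my-backups"]
```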
3. Services and Networking
A Service provides a stable IP and DNS name for a set of Pods selected by a label selector. Three main Service types:
- ClusterIP (default): Accessible only within the cluster. Other pods reach the Service by its DNS name (e.g., http://api-service:3000). Use for service-to-service communication.
- NodePort: Exposes the Service on a static port on every node's IP. Accessible from outside the cluster via NodeIP:NodePort. Useful for development and simple scenarios; not recommended for production.
- LoadBalancer: Provisions a cloud load balancer (AWS ELB, GCP Cloud Load Balancing) in front of the Service. Expensive (one LB per Service) — use Ingress instead for HTTP traffic.
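A minimal Service manifest might look like the following (names and ports are illustrative); changing the type field is the only difference between the three variants:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  type: ClusterIP        # swap for NodePort or LoadBalancer to change exposure
  selector:
    app: api             # routes traffic to all Pods carrying this label
  ports:
    - port: 3000         # port the Service listens on
      targetPort: 3000   # port on the Pod
```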
4. ConfigMaps and Secrets
```yaml
# ConfigMap: non-sensitive app configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  NODE_ENV: "production"
  LOG_LEVEL: "info"
  API_URL: "https://api.example.com"
---
# Secret: sensitive data (base64-encoded, not encrypted by default —
# use Sealed Secrets or External Secrets for real encryption)
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
stringData:  # the API server base64-encodes stringData into the data field
  DATABASE_URL: "postgres://user:strongpassword@db:5432/mydb"
  JWT_SECRET: "my-jwt-secret-value"
```
Security note: Kubernetes Secrets are base64-encoded, not encrypted, in etcd by default. Use Sealed Secrets (Bitnami), External Secrets Operator (with AWS Secrets Manager or Vault), or enable etcd encryption at rest for production.
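To see why base64 is not protection, encoding and decoding a Secret value takes one command each (the secret string below is the example value from this article):

```shell
# Encode a "secret" the way Kubernetes stores it in etcd
echo -n 'my-jwt-secret-value' | base64
# → bXktand0LXNlY3JldC12YWx1ZQ==

# Anyone with read access to the Secret can decode it just as easily
echo -n 'bXktand0LXNlY3JldC12YWx1ZQ==' | base64 -d
# → my-jwt-secret-value
```

This is why RBAC on Secrets and etcd encryption at rest matter: base64 is a transport encoding, not a security boundary.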
5. Ingress Controllers
An Ingress defines HTTP routing rules to direct traffic to different Services based on hostname or path — all from a single cloud load balancer:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: "letsencrypt-prod"  # auto TLS
spec:
  ingressClassName: nginx
  tls:
    - hosts: [api.example.com, app.example.com]
      secretName: tls-secret
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service
                port: { number: 3000 }
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-service
                port: { number: 80 }
```
Popular Ingress controllers: ingress-nginx (most common), Traefik (built-in Let's Encrypt, good UI), AWS Load Balancer Controller (native AWS ALB/NLB integration).
6. Persistent Storage
Pods are ephemeral but databases need persistent storage. Kubernetes storage model:
- PersistentVolume (PV): A piece of storage in the cluster (AWS EBS, NFS, local disk) provisioned by an administrator or dynamically by a StorageClass.
- PersistentVolumeClaim (PVC): A request for storage by a user. K8s matches PVCs to PVs automatically. Developers create PVCs; cluster admins/cloud providers provision PVs.
- StorageClass: Defines the type and parameters for dynamically provisioning PVs (e.g., AWS gp3 SSD, GCP pd-ssd).
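Putting the three together from the developer's side, a sketch of a PVC (the claim name is hypothetical, and the storage class name depends on what your cluster offers):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data                      # hypothetical claim name
spec:
  accessModes: ["ReadWriteOnce"]     # mountable read-write by one node at a time
  storageClassName: gp3              # assumes an AWS-style StorageClass exists
  resources:
    requests:
      storage: 20Gi
```

A Pod then mounts the claim by name under spec.volumes via persistentVolumeClaim.claimName; Kubernetes (or the StorageClass provisioner) handles binding the claim to an actual volume.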
7. Helm: Kubernetes Package Manager
Helm packages Kubernetes manifests into reusable "charts" with configurable values. Installing PostgreSQL on K8s takes just a couple of commands:
```shell
# Add the Bitnami chart repository
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install PostgreSQL with custom values
helm install postgres bitnami/postgresql \
  --set auth.username=myuser \
  --set auth.password=mypassword \
  --set auth.database=mydb \
  --set primary.persistence.size=20Gi

# View deployed releases
helm list

# Upgrade with new values
helm upgrade postgres bitnami/postgresql --set primary.resources.limits.cpu=500m

# Roll back to the previous revision
helm rollback postgres 1
```
8. kubectl Cheat Sheet
| Command | Description |
|---|---|
| kubectl get pods -n namespace | List pods in a namespace |
| kubectl get all | List all resources in current namespace |
| kubectl describe pod pod-name | Detailed info: events, resource limits, readiness |
| kubectl logs pod-name -f | Stream pod logs |
| kubectl exec -it pod-name -- sh | Open shell in pod |
| kubectl apply -f manifest.yaml | Create or update resources from YAML |
| kubectl delete -f manifest.yaml | Delete resources defined in YAML |
| kubectl scale deploy/myapp --replicas=5 | Scale deployment to 5 replicas |
| kubectl rollout status deploy/myapp | Watch rolling update progress |
| kubectl rollout undo deploy/myapp | Roll back to previous deployment |
| kubectl port-forward svc/myapp 8080:3000 | Forward local port 8080 to service port 3000 |
| kubectl top pods | Show CPU/memory usage (requires metrics-server) |
9. Complete Deployment Example
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  labels: { app: api }
spec:
  replicas: 3
  selector:
    matchLabels: { app: api }
  strategy:
    type: RollingUpdate
    rollingUpdate: { maxSurge: 1, maxUnavailable: 0 }
  template:
    metadata:
      labels: { app: api }
    spec:
      containers:
        - name: api
          image: myregistry/api:1.5.0
          ports:
            - containerPort: 3000
          envFrom:
            - configMapRef: { name: app-config }
            - secretRef: { name: app-secrets }
          resources:
            requests: { cpu: "100m", memory: "128Mi" }
            limits: { cpu: "500m", memory: "512Mi" }
          readinessProbe:
            httpGet: { path: /health, port: 3000 }
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            httpGet: { path: /health, port: 3000 }
            initialDelaySeconds: 15
            periodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector: { app: api }
  ports:
    - port: 3000
      targetPort: 3000
  type: ClusterIP
```
10. When to Use Kubernetes vs Docker Compose
| Use Docker Compose When… | Use Kubernetes When… |
|---|---|
| Local development | Multi-instance production workloads |
| Single-server deployment (<5 services) | Auto-scaling based on load is required |
| Startup or early-stage product | Zero-downtime rolling deployments are critical |
| Team is small and ops capacity is limited | Service mesh, multi-region, or complex networking needed |
| Simplicity > scalability for now | Platform team can own K8s complexity |
Managed alternatives that reduce K8s complexity: Fly.io, Render, Railway (K8s under the hood, developer-friendly interface). Worth evaluating before building internal K8s capability.
11. Frequently Asked Questions
What is the best way to learn Kubernetes locally?
Install k3s (lightweight K8s, 60MB binary) or kind (Kubernetes in Docker) for local experimentation. Docker Desktop also includes a single-node K8s cluster enabled in Settings. Minikube is the traditional option but heavier. k3d (k3s in Docker) is excellent for multi-node cluster simulation on a single machine.
How do I handle database migrations with Kubernetes?
Run migrations as a Kubernetes Job before the Deployment is updated. In Helm, use a pre-upgrade hook. Alternatively, use an init container in the Deployment that runs migrations on startup — but this requires idempotent migrations since they run on every pod start. Never run migrations from within the application code itself at startup with multiple replicas — race conditions will cause failures.
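A sketch of the Job-with-Helm-hook approach (helm.sh/hook is the real hook annotation; the migration image and command are placeholders, here reusing the app image from the deployment example):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
  annotations:
    "helm.sh/hook": pre-upgrade,pre-install          # run before the release is upgraded or installed
    "helm.sh/hook-delete-policy": before-hook-creation  # clean up the old hook Job on the next run
spec:
  backoffLimit: 3                # retry a few times before failing the upgrade
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: myregistry/api:1.5.0           # placeholder: image containing migration tooling
          command: ["npm", "run", "migrate"]    # placeholder migration command
          envFrom:
            - secretRef: { name: app-secrets }  # reuse the DB credentials Secret
```

Because the hook runs before the new Deployment pods roll out, exactly one migration runs per release, avoiding the multi-replica race described above.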
12. Glossary
- Pod: The smallest deployable unit in K8s — one or more containers sharing a network and storage namespace.
- Deployment: Manages a set of replica Pods with rolling update and rollback capabilities.
- Service: A stable virtual IP and DNS name for a set of Pods, providing load balancing and service discovery.
- Ingress: An API object defining HTTP routing rules from external traffic to internal Services.
- Helm Chart: A package of templated K8s manifests with configurable values, distributed via Helm.
- PersistentVolumeClaim (PVC): A request for persistent storage; Kubernetes binds it to an available PersistentVolume.
13. References & Further Reading
- Kubernetes Official Documentation
- Helm Documentation
- k3s — Lightweight Kubernetes
- kind — Kubernetes in Docker
- learnk8s.io — Comprehensive K8s Learning
Install k3d locally and deploy the example YAML from this article. Getting a real application running in K8s — with health checks, rolling updates, and port-forwarding — teaches more than any amount of reading. Run kubectl rollout undo to experience the rollback magic firsthand.