Docker & Containers: Complete Practical Guide for Developers 2026

Docker changed how software is built, shipped, and run. A container packages an application along with every dependency it needs — no more "it works on my machine." In 2026, Docker images underpin virtually every CI/CD pipeline, cloud deployment, and local development environment. This guide takes you from first principles to production-grade Dockerfiles, covering everything from the docker run command to multi-stage builds, Compose orchestration, and networking.

1. Core Concepts: Images, Containers, Layers

Understanding the image/container distinction is the first conceptual hurdle:

  • Image: A read-only, layered blueprint for a container. Built from a Dockerfile. An image is to a container as a class is to an object instance. Images are stored in registries (Docker Hub, GitHub Container Registry, AWS ECR).
  • Container: A running (or stopped) instance of an image. A container has its own filesystem, network interface, and process space. Multiple containers can run from the same image simultaneously.
  • Layer: Each instruction in a Dockerfile creates an immutable filesystem layer. Layers are cached — if nothing changes in a layer or layers above it, Docker reuses the cache, making rebuilds fast. This is why layer ordering matters: put the instructions that change least often near the top of the Dockerfile.
  • Registry: A storage and distribution system for Docker images. Docker Hub is the default public registry. Self-hosted options include Harbor and Gitea's built-in registry.

2. Essential Docker Commands

docker build -t myapp:1.0 .
    Build an image from the Dockerfile in the current directory and tag it myapp:1.0.
docker run -p 3000:3000 myapp:1.0
    Run a container, mapping host port 3000 to container port 3000.
docker run -d --name app myapp:1.0
    Run detached (in the background) as a container named "app".
docker ps
    List running containers.
docker ps -a
    List all containers, including stopped ones.
docker logs -f app
    Stream logs from the "app" container.
docker exec -it app sh
    Open an interactive shell in a running container.
docker stop app
    Gracefully stop a container (SIGTERM, then SIGKILL after 10s).
docker rm app
    Remove a stopped container.
docker rmi myapp:1.0
    Remove an image.
docker pull postgres:16
    Pull an image from Docker Hub without running it.
docker system prune -a
    Remove all stopped containers, unused images and networks, and the build cache.

3. Writing Dockerfiles: Best Practices

# --- Node.js production Dockerfile (best practices) ---

# 1. Pin a specific version — never use :latest in production
FROM node:20.11-alpine3.19

# 2. Set working directory
WORKDIR /app

# 3. Copy dependency files BEFORE source code
#    This layer is cached until package.json changes
COPY package.json package-lock.json ./

# 4. Install only production dependencies
#    (npm's --only=production flag is deprecated; use --omit=dev)
RUN npm ci --omit=dev

# 5. Copy source code (this layer invalidates on code changes)
COPY . .

# 6. Run as non-root user (security)
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

# 7. Document exposed ports
EXPOSE 3000

# 8. Use exec form (not shell form) for CMD
CMD ["node", "src/server.js"]

Key rules:

  • Always COPY package.json before COPY . . so the npm install layer is cached and not re-run on every code change.
  • Use alpine base images for smaller size (the alpine base is roughly 5MB versus over 100MB for a Debian base; full node images shrink from roughly 1GB to about 130MB), but be aware that Alpine uses musl libc, which occasionally causes compatibility issues with native modules.
  • Never run as root. Create a dedicated user with adduser and switch to it before CMD.
  • Never include .env, node_modules, or secrets in the image — add them to .dockerignore.
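The last rule above is enforced with a .dockerignore file in the project root, which keeps the listed paths out of the build context entirely. A minimal sketch (the entries are illustrative; adjust to your project):

```
# .dockerignore — keep these out of the build context
node_modules
.env
.git
dist
*.log
```

Excluding node_modules and .git also makes builds faster, since Docker no longer has to send those directories to the build daemon on every build.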

4. Multi-Stage Builds

Multi-stage builds produce lean production images by discarding build tools, compilers, and dev dependencies from the final image. The build stage may weigh in at over 1GB; the runtime stage is typically a small fraction of that:

# Multi-stage build: TypeScript app

# === Stage 1: Build ===
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci  # includes dev dependencies for TypeScript compiler
COPY . .
RUN npm run build  # outputs to /app/dist

# === Stage 2: Production ===
FROM node:20-alpine AS production
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev  # production deps only: no TypeScript, no test libs
COPY --from=builder /app/dist ./dist  # only copy compiled output
RUN adduser -S appuser && chown -R appuser /app
USER appuser
EXPOSE 3000
CMD ["node", "dist/server.js"]

# Result: 180MB image instead of 1.1GB
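For compiled languages the same pattern can go further: the final stage can start from scratch and contain nothing but the binary. A hedged sketch for a hypothetical Go service (the module layout and binary name are assumptions, not from the example above):

```dockerfile
# === Stage 1: Build ===
FROM golang:1.22-alpine AS builder
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download              # cached until go.mod/go.sum change
COPY . .
# CGO_ENABLED=0 produces a static binary that needs no libc at runtime
RUN CGO_ENABLED=0 go build -o /out/server ./cmd/server

# === Stage 2: Runtime ===
FROM scratch
COPY --from=builder /out/server /server
EXPOSE 8080
ENTRYPOINT ["/server"]
```

Images built FROM scratch can land in the single-digit-MB range, at the cost of having no shell or package manager inside the container for debugging.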

5. Docker Compose: Multi-Container Apps

Docker Compose orchestrates multiple containers with a single YAML file. Perfect for local development stacks (app + database + cache + message queue):

# docker-compose.yml — full development stack
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgres://user:pass@db:5432/mydb
      - REDIS_URL=redis://cache:6379
    volumes:
      - .:/app           # mount source for hot reload
      - /app/node_modules  # exclude node_modules from mount
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: mydb
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d mydb"]
      interval: 10s
      timeout: 5s
      retries: 5

  cache:
    image: redis:7-alpine
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data

volumes:
  postgres_data:
  redis_data:

6. Volumes and Persistent Data

Containers are ephemeral — data written to a container's filesystem is lost when the container is removed. Use volumes for anything that must persist:

  • Named volumes (volumes: postgres_data:): Managed by Docker, stored in Docker's storage area. Best for databases and data that should persist independently of the project directory.
  • Bind mounts (./src:/app/src): Mounts a host directory into the container. Best for development hot-reload — changes to host files are immediately visible in the container.
  • tmpfs mounts: In-memory — not persisted. Useful for sensitive temporary data or test databases where persistence is not needed.
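All three mount types can be declared per service in Compose. A sketch of a tmpfs-backed throwaway Postgres for test runs (the service name and password are illustrative):

```yaml
services:
  test-db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: test
    tmpfs:
      - /var/lib/postgresql/data   # in-memory: fast, wiped when the container stops
```

Because nothing touches disk, this starts quickly and leaves no state behind between test runs.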

7. Container Networking

Docker creates an isolated virtual network for each Compose project. Containers on the same network can reach each other using the service name as the hostname; no IP addresses are needed. This is why DATABASE_URL=postgres://db:5432 works: db is the service name from the Compose file, resolved by Docker's embedded DNS server.

Network types:

  • bridge (default): Private network shared by containers on the same host. Containers can communicate with each other and reach the internet via NAT.
  • host: Container shares the host's network namespace directly. Best performance, but no network isolation. Linux only.
  • overlay: Spans multiple Docker hosts — required for Docker Swarm. Not commonly needed when using Kubernetes.
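Compose can also define multiple networks to segment traffic, for example keeping the database off the network that the reverse proxy can reach. A hedged sketch (the service and network names are assumptions):

```yaml
services:
  proxy:
    image: nginx:alpine
    networks: [frontend]
  app:
    build: .
    networks: [frontend, backend]   # the only service bridging both segments
  db:
    image: postgres:16-alpine
    networks: [backend]             # unreachable from proxy

networks:
  frontend:
  backend:
```

If the proxy is compromised, it still has no network path to the database.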

8. Security Best Practices

  • Never run as root: Create a non-root user in your Dockerfile and switch to it before CMD.
  • Scan images for vulnerabilities: docker scout cves myimage:latest or use trivy image myimage:latest to scan before pushing to production.
  • Use read-only filesystems: docker run --read-only myimage prevents container-side filesystem writes, limiting blast radius of a compromised container.
  • Never bake secrets into images: Use environment variables (injected at runtime via Compose, Kubernetes Secrets, or a vault), not ENV SECRET=value in Dockerfile.
  • Limit capabilities: docker run --cap-drop ALL --cap-add NET_BIND_SERVICE removes all Linux capabilities except those explicitly needed.
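Several of these hardening flags can be set declaratively in Compose instead of on every docker run. A sketch applying them to the app service from section 5 (the /tmp tmpfs path is an assumption; point it at wherever your app actually writes temporary files):

```yaml
services:
  app:
    build: .
    read_only: true          # equivalent of docker run --read-only
    tmpfs:
      - /tmp                 # writable scratch space despite the read-only rootfs
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    security_opt:
      - no-new-privileges:true
```

Putting these in the Compose file means the hardening is versioned with the project rather than remembered per deployment.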

9. Production Tips

  • Use --restart=always or restart: always in Compose for services that must auto-restart after host reboots.
  • Set resource limits: docker run --memory=512m --cpus=0.5 myimage. Without limits, a single runaway container can consume all host resources.
  • Use a container registry with image scanning and signing (Sigstore/Cosign) for supply chain security.
  • Implement health checks (HEALTHCHECK in Dockerfile) so orchestrators can detect and restart unhealthy containers automatically.
  • Publish images with both a version tag and latest, and use buildx to build multi-architecture images in one step: docker buildx build --platform linux/amd64,linux/arm64 -t myapp:1.2 -t myapp:latest --push .
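The health-check tip above can be sketched as an addition to the Node.js Dockerfile from section 3, assuming the app serves a cheap /healthz endpoint on port 3000 (the endpoint path is an assumption; point the probe at whatever your app actually exposes):

```dockerfile
# Mark the container unhealthy after three consecutive failed probes.
# wget ships with alpine's busybox, so no extra package is needed.
HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
  CMD wget -qO- http://localhost:3000/healthz || exit 1
```

docker ps then shows the container as healthy or unhealthy, and orchestrators can use that status to restart or replace it.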

10. Docker Desktop Alternatives in 2026

Docker Desktop (paid for enterprise use)
    Official GUI, WSL2 integration, Docker Scout. Best for teams already invested in the Docker ecosystem.
Podman Desktop (free, Apache 2.0)
    Rootless by default, Docker-compatible CLI, no daemon. Best for security-conscious developers and Red Hat shops.
OrbStack (paid, Mac only)
    Much faster than Docker Desktop on Mac, with lower memory use. Best for Mac developers who find Docker Desktop slow.
Rancher Desktop (free, Apache 2.0)
    Includes containerd/nerdctl and k3s for local Kubernetes. Best for developers who need local Kubernetes.
Lima (free, Apache 2.0)
    A Linux VM for Mac that powers OrbStack and others. Best for advanced Mac users who want a CLI-only setup.

11. Frequently Asked Questions

What is the difference between Docker and a virtual machine?

A VM virtualises the entire hardware stack — each VM has its own kernel, which consumes hundreds of MB of RAM and takes minutes to start. A container shares the host kernel but has an isolated filesystem, network, and process space. Containers start in milliseconds and use tens of MB of overhead. The tradeoff: VMs provide stronger isolation (each has its own kernel); containers provide better density and developer experience.

Should I use Docker Compose or Kubernetes for deployment?

Docker Compose is ideal for local development and small single-host deployments. Kubernetes adds automated scaling, self-healing, rolling deployments, and multi-host orchestration — complexity worth it at scale. The pragmatic path: use Compose for development and small apps; use Kubernetes (or a managed service like Render, Fly.io, or Railway that abstracts it) when you need multi-instance scaling or complex deployment orchestration.

How do I pass secrets to containers safely?

In development: use a .env file with Compose (never commit it). In production: use Docker Secrets (a Swarm feature; Compose also supports file-based secrets), Kubernetes Secrets, cloud-native secret managers (AWS Secrets Manager, HashiCorp Vault), or environment variable injection from your CI/CD platform. Never bake secrets into Docker images.
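With Compose, file-based secrets are mounted into the container at /run/secrets/<name> rather than exposed as environment variables, which keeps them out of docker inspect output and process listings. A minimal sketch (the file path and secret name are illustrative):

```yaml
services:
  app:
    build: .
    secrets:
      - db_password   # readable inside the container at /run/secrets/db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt   # listed in .gitignore, never committed
```

The application then reads the secret from the mounted file at startup instead of from its environment.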

12. Glossary

Image
A read-only, layered template from which containers are created. Stored in a registry.
Container
A running instance of an image with its own isolated filesystem, network, and process space.
Dockerfile
A text file with instructions for building a Docker image, one layer per instruction.
Docker Compose
A tool for defining and running multi-container applications with a single YAML configuration file.
Multi-Stage Build
A Dockerfile technique using multiple FROM instructions to produce a lean final image by discarding build-time tools.
Volume
A persistent storage mechanism managed by Docker, surviving container removal.
Registry
A service for storing and distributing Docker images. Docker Hub is the default public registry.

13. Next Steps

Start by Dockerising an existing project today: write a basic Dockerfile, build it, and run it locally. The fastest way to understand Docker is to make a working container from something you already know.