Docker Complete Guide
Introduction
Docker revolutionizes application deployment through containerization. This guide covers Docker fundamentals, images, containers, networking, volumes, Docker Compose, multi-stage builds, and best practices for production deployments.
1. Docker Basics
# Install Docker (Ubuntu)
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER # Log out and back in for the group change to take effect
# Verify installation
docker --version
docker run hello-world
# Basic commands
docker ps # List running containers
docker ps -a # List all containers
docker images # List images
docker pull nginx:latest # Download image
docker rmi image_name # Remove image
# Container lifecycle
docker run nginx # Create and start
docker start container_id # Start stopped container
docker stop container_id # Stop running container
docker restart container_id # Restart container
docker rm container_id # Remove container
docker logs container_id # View logs
docker exec -it container_id bash # Open an interactive shell (use sh on alpine images)
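These flags combine; a typical pattern is a named, detached container with a published port (the names here are arbitrary):
# Run detached (-d), name it, publish a port, then clean up
docker run -d --name web -p 8080:80 nginx
curl http://localhost:8080 # Response comes from the container
docker stop web && docker rm web
# --rm removes the container automatically on exit
docker run --rm -it alpine sh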
2. Dockerfile Fundamentals
# Node.js application Dockerfile
FROM node:18-alpine
# Set working directory
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install dependencies
RUN npm ci --omit=dev # --only=production is deprecated in newer npm
# Copy application code
COPY . .
# Expose port
EXPOSE 3000
# Set environment variables
ENV NODE_ENV=production
# Run application
CMD ["node", "index.js"]
# Build and run
docker build -t my-app:1.0 .
docker run -p 3000:3000 my-app:1.0
# Dockerfile instructions explained
FROM # Base image
WORKDIR # Set working directory
COPY # Copy files from host to container
ADD # Like COPY but can extract tar and fetch URLs
RUN # Execute commands during build
CMD # Default command when container starts
ENTRYPOINT # Configure container as executable
EXPOSE # Document which ports container listens on
ENV # Set environment variables
ARG # Build-time variables
VOLUME # Create mount point
USER # Set user for RUN, CMD, ENTRYPOINT
LABEL # Add metadata to image
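CMD and ENTRYPOINT interact in a way worth spelling out: ENTRYPOINT fixes the executable, CMD supplies overridable default arguments. A minimal sketch (my-app is a hypothetical image):
ENTRYPOINT ["node"]
CMD ["index.js"]
# docker run my-app -> runs: node index.js
# docker run my-app server.js -> runs: node server.js (CMD overridden)
# docker run --entrypoint sh my-app -> overrides the ENTRYPOINT itself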
3. Multi-Stage Builds
# Optimize image size with multi-stage builds
# Stage 1: Build
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Production
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/index.js"]
# React build example
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
# Go application (tiny final image)
FROM golang:1.21 AS builder
WORKDIR /app
COPY go.* ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o main .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /app/main .
EXPOSE 8080
CMD ["./main"]
4. Docker Compose
# docker-compose.yml - Full stack application
version: '3.8' # The version key is optional with Compose V2
services:
  # Node.js API
  api:
    build:
      context: ./api
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://user:password@postgres:5432/mydb
      - REDIS_URL=redis://redis:6379
    depends_on:
      - postgres
      - redis
    volumes:
      - ./api:/app
      - /app/node_modules
    restart: unless-stopped
    networks:
      - app-network
  # PostgreSQL Database
  postgres:
    image: postgres:15-alpine
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=mydb
    ports:
      - "5432:5432"
    volumes:
      - postgres-data:/var/lib/postgresql/data
    restart: unless-stopped
    networks:
      - app-network
  # Redis Cache
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    restart: unless-stopped
    networks:
      - app-network
  # Nginx Reverse Proxy
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/ssl:/etc/nginx/ssl
    depends_on:
      - api
    restart: unless-stopped
    networks:
      - app-network
volumes:
  postgres-data:
  redis-data:
networks:
  app-network:
    driver: bridge
# Commands (Compose V2 uses "docker compose"; the standalone docker-compose binary works the same way)
docker-compose up -d # Start all services
docker-compose down # Stop and remove
docker-compose ps # List services
docker-compose logs -f api # Follow logs
docker-compose exec api sh # Execute command
docker-compose build # Rebuild images
docker-compose restart api # Restart service
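Compose can also run several replicas of a stateless service; note this only works if the service has no fixed host port mapping, since replicas can't share one host port:
docker compose up -d --scale api=3 # Three api containers behind one service name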
5. Docker Networking
# Network types
docker network ls
# Bridge (default) - containers on same host
docker network create my-bridge
docker run --network my-bridge nginx
# Host - container uses host network
docker run --network host nginx
# None - no networking
docker run --network none nginx
# Custom bridge network
docker network create --driver bridge \
  --subnet 172.20.0.0/16 \
  --gateway 172.20.0.1 \
  my-custom-network
# Connect container to network
docker network connect my-network container_name
# Inspect network
docker network inspect my-network
# Container communication
# Containers on same network can use service names
# In docker-compose:
services:
  api:
    # Can reach postgres at postgres:5432
  postgres:
    # "postgres" is the service name, which doubles as the hostname
# Port mapping
docker run -p 8080:80 nginx # Host:Container
docker run -p 127.0.0.1:8080:80 nginx # Bind to specific IP
docker run -P nginx # Random host port
# Expose vs Publish
EXPOSE 3000 # Documents port (doesn't publish)
-p 3000:3000 # Actually publishes port
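Name-based discovery on a user-defined bridge is easy to verify end to end (the network and container names here are arbitrary):
docker network create demo
docker run -d --name web --network demo nginx
docker run --rm --network demo alpine wget -qO- http://web # "web" resolves via Docker's embedded DNS
docker rm -f web && docker network rm demo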
6. Docker Volumes
# Volume types
# 1. Named volumes (managed by Docker)
docker volume create my-data
docker run -v my-data:/app/data nginx
docker volume ls
docker volume inspect my-data
docker volume rm my-data
# 2. Bind mounts (host directory)
docker run -v /host/path:/container/path nginx
docker run -v $(pwd):/app node:18
# 3. tmpfs mounts (memory only, Linux)
docker run --tmpfs /app/temp nginx
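The equivalent --mount syntax is more verbose but self-documenting, and for bind mounts it errors out instead of silently creating a missing host path (paths are illustrative):
docker run --mount type=volume,source=my-data,target=/app/data nginx
docker run --mount type=bind,source="$(pwd)",target=/app nginx
docker run --mount type=tmpfs,target=/app/temp nginx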
# Volume examples
# PostgreSQL with persistent data
docker run -d \
  --name postgres \
  -e POSTGRES_PASSWORD=secret \
  -v postgres-data:/var/lib/postgresql/data \
  postgres:15
# Node.js development with hot reload
docker run -d \
  --name dev-server \
  -v $(pwd):/app \
  -v /app/node_modules \
  -w /app \
  -p 3000:3000 \
  node:18 npm run dev # Assumes a "dev" script in package.json; without a command the container exits immediately
# Backup volume
docker run --rm \
  -v postgres-data:/data \
  -v $(pwd):/backup \
  alpine tar czf /backup/backup.tar.gz /data
# Restore volume
docker run --rm \
  -v postgres-data:/data \
  -v $(pwd):/backup \
  alpine tar xzf /backup/backup.tar.gz -C /
# Volume permissions
# Run as non-root user
FROM node:18
RUN groupadd -r appuser && useradd -r -g appuser appuser
USER appuser
WORKDIR /app
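If a named volume mounts into that image, create and chown the directory before switching users, because a fresh volume copies ownership from the image's directory on first mount (a sketch; /app/data is an assumed path):
FROM node:18
RUN groupadd -r appuser && useradd -r -g appuser appuser
RUN mkdir -p /app/data && chown -R appuser:appuser /app
USER appuser
WORKDIR /app
VOLUME /app/data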
7. Environment Variables & Secrets
# Environment variables in Dockerfile
ENV NODE_ENV=production
ENV PORT=3000
# At runtime
docker run -e NODE_ENV=production \
  -e DATABASE_URL=postgres://... \
  my-app
# From file
# .env file
NODE_ENV=production
DATABASE_URL=postgres://localhost/mydb
REDIS_URL=redis://localhost
docker run --env-file .env my-app
# Docker Compose
services:
  api:
    environment:
      - NODE_ENV=production
      - DATABASE_URL=${DATABASE_URL}
    env_file:
      - .env
# Docker Secrets (Swarm mode)
echo "db_password" | docker secret create db_password -
services:
  api:
    secrets:
      - db_password
    environment:
      - DB_PASSWORD_FILE=/run/secrets/db_password
secrets:
  db_password:
    external: true
# Read secret in app
import fs from 'fs';
const password = fs.readFileSync('/run/secrets/db_password', 'utf8').trim();
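A common pattern is a small helper that prefers the *_FILE variant and falls back to a plain environment variable, so the same code runs with or without Swarm secrets (readSecret is our own sketch, not a Docker API):
import fs from 'fs';

function readSecret(name) {
  const file = process.env[`${name}_FILE`]; // e.g. DB_PASSWORD_FILE set by the compose file
  if (file) return fs.readFileSync(file, 'utf8').trim();
  return process.env[name]; // Plain env var fallback for local development
}

const password = readSecret('DB_PASSWORD');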
8. Health Checks
# Dockerfile health check
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
# Health check
HEALTHCHECK --interval=30s \
  --timeout=3s \
  --start-period=40s \
  --retries=3 \
  CMD node healthcheck.js
CMD ["node", "index.js"]
# healthcheck.js
const http = require('http');

const options = {
  host: 'localhost',
  port: 3000,
  path: '/health',
  timeout: 2000
};

const request = http.request(options, (res) => {
  if (res.statusCode === 200) {
    process.exit(0);
  } else {
    process.exit(1);
  }
});

request.on('error', () => process.exit(1));
request.end();
# Docker Compose health check
services:
  api:
    build: ./api
    healthcheck:
      test: ["CMD", "node", "healthcheck.js"]
      interval: 30s
      timeout: 3s
      retries: 3
      start_period: 40s
  postgres:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
# Wait for dependencies
services:
  api:
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
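If you'd rather not ship a healthcheck script, an HTTP probe using tools already in the image works too; keep in mind alpine variants ship BusyBox wget but usually no curl:
# Debian-based image
HEALTHCHECK --interval=30s --timeout=3s CMD curl -f http://localhost:3000/health || exit 1
# Alpine-based image
HEALTHCHECK --interval=30s --timeout=3s CMD wget -qO- http://localhost:3000/health || exit 1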
9. Docker Best Practices
# .dockerignore file
node_modules
npm-debug.log
.git
.gitignore
.env
.env.local
*.md
.DS_Store
dist
build
coverage
.vscode
.idea
# Optimize layer caching
# Bad - installs dependencies on every code change
COPY . .
RUN npm install
# Good - caches dependencies
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# Use specific versions
FROM node:18.19.0-alpine # Not node:latest
# Minimize layers
# Bad
RUN apt-get update
RUN apt-get install -y curl
RUN apt-get clean
# Good
RUN apt-get update && \
  apt-get install -y curl && \
  apt-get clean && \
  rm -rf /var/lib/apt/lists/*
# Run as non-root
FROM node:18-alpine
RUN addgroup -g 1001 -S nodejs && \
  adduser -S nodejs -u 1001
USER nodejs
WORKDIR /app
COPY --chown=nodejs:nodejs . .
# Security scanning
docker scout cves my-app:latest # Replaces the retired "docker scan" command
# Use small base images
FROM node:18-alpine # ~120MB
# vs
FROM node:18 # ~900MB
# Multi-architecture builds
docker buildx create --use
docker buildx build --platform linux/amd64,linux/arm64 -t my-app:latest .
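BuildKit cache mounts keep the package manager's cache out of image layers while persisting it between builds; a sketch for npm (the syntax directive must be the first line of the Dockerfile):
# syntax=docker/dockerfile:1
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm npm ci --omit=dev
COPY . .
CMD ["node", "index.js"]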
10. Production Deployment
# Build for production
docker build \
  --build-arg NODE_ENV=production \
  --tag myapp:1.0.0 \
  --tag myapp:latest \
  .
# Push to registry
docker login
docker tag myapp:1.0.0 username/myapp:1.0.0
docker push username/myapp:1.0.0
# Run with resource limits
docker run -d \
  --name api \
  --restart unless-stopped \
  --memory="512m" \
  --cpus="1.0" \
  --health-cmd="curl -f http://localhost:3000/health || exit 1" \
  --health-interval=30s \
  -p 3000:3000 \
  myapp:1.0.0 # Note: the health command requires curl inside the image
# Docker Compose production
services:
api:
image: username/myapp:1.0.0
deploy:
replicas: 3
resources:
limits:
cpus: '1'
memory: 512M
reservations:
cpus: '0.5'
memory: 256M
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
# Logging
docker logs -f container_id
docker logs --since 30m container_id
docker logs --tail 100 container_id
# Container stats
docker stats
docker stats --no-stream
# Clean up
docker system prune -a # Remove unused data
docker volume prune # Remove unused volumes
docker image prune -a # Remove unused images
docker container prune # Remove stopped containers
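Pulling a freshly pushed tag and recreating only what changed makes for a minimal redeploy with Compose (assumes the compose file pins the image tag you just pushed):
docker compose pull api # Fetch the new image
docker compose up -d api # Recreate only containers whose configuration or image changed
docker image prune -f # Drop superseded image layers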
11. Best Practices Checklist
Docker Best Practices:
- ✓ Use .dockerignore to exclude unnecessary files
- ✓ Leverage build cache by ordering instructions properly
- ✓ Use multi-stage builds to minimize image size
- ✓ Run containers as non-root users
- ✓ Use specific image versions (avoid :latest)
- ✓ Implement health checks for all services
- ✓ Use named volumes for persistent data
- ✓ Set resource limits (memory, CPU)
- ✓ Scan images for vulnerabilities
- ✓ Keep images small (use alpine variants)
- ✓ One process per container
- ✓ Use secrets for sensitive data
- ✓ Implement proper logging strategy
- ✓ Use container orchestration for production
- ✓ Regular security updates of base images
Conclusion
Docker simplifies application deployment through containerization. Master Dockerfiles, multi-stage builds, Docker Compose, networking, and volumes for efficient development and production workflows. Always follow security best practices and keep images minimal.
💡 Pro Tip: Use Docker BuildKit for faster builds and advanced features. It is the default builder in Docker Engine 23.0+; on older versions, enable it with `export DOCKER_BUILDKIT=1`. It provides better caching, parallel builds, and build secrets that don't end up in image layers.