Blog · November 20, 2024

Mohamed Elbarry
Containerization with Docker
“It works on my machine” usually means different Node versions, env vars, or missing services. I use Docker so the app and its dependencies (DB, Redis) run the same way locally and in CI. Here’s the setup I use day to day and what I change for production.

For Lumin Search and Lumin AI we used Docker in CI and for local dev; multi-stage builds kept images small.

An image is the built artifact; a container is a running instance. I build once and run the same image everywhere.
Bash
docker build -t my-app:1.0 .
docker run -p 3000:3000 my-app:1.0
The day-to-day container lifecycle:
Bash
docker build -t my-app .
docker run -d -p 3000:3000 --name my-app-container my-app
docker ps
docker stop my-app-container
docker rm my-app-container
docker images
docker rmi my-app
I use a small base image (e.g. node:18-alpine), copy only what’s needed for the build and then for run, and run as a non-root user. For Node I copy package files first so layer caching works.
Dockerfile
FROM node:18-alpine

WORKDIR /app

COPY package*.json ./
RUN npm ci --omit=dev

COPY . .

RUN addgroup -g 1001 -S nodejs \
    && adduser -S nextjs -u 1001 \
    && chown -R nextjs:nodejs /app
USER nextjs

EXPOSE 3000
CMD ["npm", "start"]
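Since COPY . . pulls in everything next to the Dockerfile, I pair it with a .dockerignore so node_modules, local env files, and build output never reach the image (a typical starting point; adjust for your repo):

```
node_modules
npm-debug.log
.git
.env
.env.*
dist
.next
Dockerfile
docker-compose*.yml
```

This also speeds up builds, because the build context Docker sends to the daemon stays small.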
For Next.js (or any built frontend) I use two stages: one to install deps and build, one to run. The final image contains only the built output and runtime deps, so it’s smaller and has a smaller attack surface. In practice you can cut a Node app’s image size by 80% or more (e.g. from over 1 GB to well under 200 MB) by dropping the full node image, devDependencies, and build tools from the final stage; the runner stage only needs the compiled output and production node_modules. Docker’s multi-stage build docs explain the pattern if you want to adapt it for other runtimes.
Dockerfile
FROM node:18-alpine AS builder

WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:18-alpine AS runner

WORKDIR /app

RUN addgroup --system --gid 1001 nodejs \
    && adduser --system --uid 1001 nextjs

COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static

USER nextjs
EXPOSE 3000
CMD ["node", "server.js"]
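The .next/standalone copy above only exists if standalone output is enabled; a minimal next.config.js for that (assuming an otherwise default Next.js setup):

```javascript
// next.config.js — standalone mode emits a self-contained server.js
// plus a pruned node_modules under .next/standalone
module.exports = {
  output: 'standalone',
};
```

Without this setting the build succeeds but the COPY --from=builder step for .next/standalone fails, which is a common first stumble with this pattern.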
I run app + DB (and Redis if needed) with one file. I mount the source so I can edit without rebuilding and use a named volume for DB data so it persists. For dev I often expose the DB port so I can connect with a GUI.
Yaml
version: '3.8'

services:
  frontend:
    build: ./frontend
    ports:
      - '3000:3000'
    environment:
      - NODE_ENV=development
      - REACT_APP_API_URL=http://localhost:8000
    volumes:
      - ./frontend:/app
      - /app/node_modules
    depends_on:
      - backend

  backend:
    build: ./backend
    ports:
      - '8000:8000'
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgresql://postgres:password@db:5432/myapp
    depends_on:
      - db

  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - '5432:5432'

volumes:
  postgres_data:
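With that file in place, the day-to-day loop is just a few commands (a sketch; service names match the file above):

```bash
docker compose up -d --build    # build images and start everything in the background
docker compose logs -f backend  # tail one service's logs
docker compose exec db psql -U postgres myapp  # open a psql shell in the DB container
docker compose down             # stop everything; add -v to also drop the named volume
```

The named volume is what keeps DB data across `down`/`up` cycles; only `down -v` throws it away.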
For production I don’t mount source; I build images and inject secrets via env (or a secrets manager). I use restart: unless-stopped and avoid exposing the DB port publicly.
Yaml
version: '3.8'

services:
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile.prod
    restart: unless-stopped
    environment:
      - NODE_ENV=production
      - REACT_APP_API_URL=https://api.myapp.com
    depends_on:
      - backend

  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile.prod
    restart: unless-stopped
    environment:
      - NODE_ENV=production
      - DATABASE_URL=${DATABASE_URL}
      - JWT_SECRET=${JWT_SECRET}
    depends_on:
      - db

  db:
    image: postgres:15-alpine
    restart: unless-stopped
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:
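One gotcha with depends_on: by default it only orders startup, it doesn’t wait for Postgres to actually accept connections. A healthcheck plus condition: service_healthy closes that gap (a sketch against the file above):

```yaml
services:
  backend:
    depends_on:
      db:
        condition: service_healthy

  db:
    image: postgres:15-alpine
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U postgres']
      interval: 5s
      timeout: 5s
      retries: 5
```

Without this, the backend can crash-loop on startup until the DB finishes initializing.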
When I have multiple Compose stacks or want to isolate services I put them on explicit networks so only the right containers can talk.
Yaml
services:
  frontend:
    build: ./frontend
    networks:
      - frontend-network
      - backend-network

  backend:
    build: ./backend
    networks:
      - backend-network
      - database-network

  db:
    image: postgres:15-alpine
    networks:
      - database-network

networks:
  frontend-network:
    driver: bridge
  backend-network:
    driver: bridge
  database-network:
    driver: bridge
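To check the isolation actually holds, I’d try to resolve the DB from the frontend container — it should fail, while the backend succeeds (a sketch; assumes the service names from the file above and that the images include busybox/getent, as alpine-based ones do):

```bash
# frontend shares no network with db, so the name shouldn't resolve
docker compose exec frontend getent hosts db  # expected to fail
# backend and db share database-network, so this should resolve
docker compose exec backend getent hosts db
```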
I pin the base image tag (e.g. node:18.17.0-alpine3.18), run as non-root, and scan images in CI with Trivy or docker scout (the older docker scan command has been retired). I avoid installing unnecessary tools in the final stage.
Dockerfile
FROM node:18.17.0-alpine3.18

RUN apk add --no-cache curl

RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
USER nextjs
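In CI, the scan step can be as simple as one Trivy command that fails the build on serious findings (a sketch; assumes Trivy is installed on the runner and the image was built in an earlier step):

```bash
# fail the pipeline (exit code 1) if any HIGH or CRITICAL vulnerability is found
trivy image --severity HIGH,CRITICAL --exit-code 1 my-app:latest
```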
Use multi-stage builds so the final image is small and only has what’s needed to run. Run as non-root and pin base image tags. Use Compose for local dev with volumes and env; for production, build images and inject config via env vars or a secrets manager. One thing to try: scan your latest image with Trivy or docker scout and fix the high/critical findings. The Docker docs and the Dockerfile best-practices guide are good references. Hope that helps. I'm currently looking for new challenges in the AI and Full Stack space. If you're building something interesting, let's chat.