If you’ve worked on any non-trivial project, you know the drill. Start PostgreSQL. Start Redis. Set environment variables. Run migrations. Start the backend. Start the frontend. Half the time something fails silently and you spend ten minutes figuring out which service isn’t talking to which.
Docker Compose fixes this. You describe your entire stack in one YAML file, and docker compose up handles the rest. Networking, volumes, startup order, health checks. All of it.
In this tutorial, I’ll walk you through building a realistic full-stack application: a Node.js API backed by PostgreSQL and Redis, served behind Nginx. By the end, you’ll have a production-ready docker-compose.yml you can adapt to any project.
Prerequisites
Before we start, make sure you have:
- Docker Desktop or Docker Engine installed (version 24+ recommended)
- Docker Compose V2 bundled with Docker (verify with docker compose version)
- Basic familiarity with the command line
- A rough understanding of what containers are (if you’ve run docker run before, you’re good)
Docker Compose V2 ships as a Docker plugin, so the command is docker compose (no hyphen). The old standalone docker-compose is deprecated. If you’re on Docker Desktop 4.x or later, you already have V2.
What We’re Building
Here’s the architecture:
- Nginx (port 80) reverse proxy
- Node.js API (Express) on port 3000
- PostgreSQL 17 database
- Redis 7 for caching and session storage
All four services run in their own containers, connected through Docker’s internal networking. Your browser only touches port 80, and Nginx routes traffic to the API.

Project Structure
Create a project directory and set up this structure:
fullstack-app/
├── docker-compose.yml
├── nginx/
│   └── default.conf
├── api/
│   ├── Dockerfile
│   ├── package.json
│   └── index.js
├── .env
└── init.sql
Nothing fancy. The docker-compose.yml is the central file. Everything else supports it.
The docker-compose.yml File
This is the core of the whole setup. Create docker-compose.yml in your project root:
services:
  db:
    image: postgres:17-alpine
    restart: unless-stopped
    environment:
      POSTGRES_USER: ${DB_USER:-appuser}
      POSTGRES_PASSWORD: ${DB_PASSWORD:-secret}
      POSTGRES_DB: ${DB_NAME:-myapp}
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER:-appuser} -d ${DB_NAME:-myapp}"]
      interval: 5s
      timeout: 3s
      retries: 5
    networks:
      - backend

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    ports:
      - "6379:6379"
    volumes:
      - redisdata:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
    networks:
      - backend

  api:
    build:
      context: ./api
      dockerfile: Dockerfile
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://${DB_USER:-appuser}:${DB_PASSWORD:-secret}@db:5432/${DB_NAME:-myapp}
      REDIS_URL: redis://redis:6379
      NODE_ENV: production
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - backend
      - frontend

  nginx:
    image: nginx:alpine
    restart: unless-stopped
    ports:
      - "80:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - api
    networks:
      - frontend

volumes:
  pgdata:
  redisdata:

networks:
  backend:
    internal: true
  frontend:
Let me break down what’s happening here, because every piece matters.
Services
Each service is a container. The image key pulls a pre-built image from Docker Hub. The build key tells Compose to build from a Dockerfile. Our api service uses a build, while the others use official images.
Environment Variables with Defaults
Notice the ${DB_USER:-appuser} syntax. This reads from a .env file if it exists, or falls back to the default value. Create a .env file in your project root:
DB_USER=appuser
DB_PASSWORD=your_secure_password_here
DB_NAME=myapp
Never commit your .env file to version control. Add .env to your .gitignore immediately.
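Compose’s interpolation follows POSIX shell parameter-expansion rules, so you can sanity-check the fallback behavior in any shell. Note the difference between the colon form (falls back when the variable is unset or empty) and the colon-less form, which Compose also accepts:

```shell
# ${VAR:-default} substitutes when VAR is unset OR empty
unset DB_USER
echo "user=${DB_USER:-appuser}"   # prints user=appuser

DB_USER=""
echo "user=${DB_USER:-appuser}"   # prints user=appuser

DB_USER=admin
echo "user=${DB_USER:-appuser}"   # prints user=admin

# ${VAR-default} (no colon) substitutes only when VAR is unset
DB_USER=""
echo "user=${DB_USER-appuser}"    # prints user=
```

To see the fully interpolated file Compose will actually use, run docker compose config.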
Volumes
There are two types of volumes in play:
Named volumes (pgdata, redisdata) persist data across container restarts and removals. They are only removed when you explicitly run docker compose down -v; otherwise they stick around. This is what you want for databases.
Bind mounts (./init.sql:/docker-entrypoint-initdb.d/init.sql:ro) map a file or directory from your host into the container. The :ro flag makes it read-only. PostgreSQL automatically runs .sql files in /docker-entrypoint-initdb.d/ on first startup, which is perfect for initial schema creation.
Health Checks
The healthcheck blocks are critical. Without them, your API might try to connect to PostgreSQL before it’s ready to accept connections. The depends_on with condition: service_healthy tells Compose to wait until the database reports healthy before starting the API.
PostgreSQL’s pg_isready command checks if the server is accepting connections. Redis’s redis-cli ping does the same. Both run every 5 seconds with a 3-second timeout, up to 5 retries.
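If PostgreSQL takes longer to initialize (a large init.sql, a slow disk), early probe failures can eat into the retry budget before the server is even up. The healthcheck block also accepts a start_period during which failures don’t count against retries; a sketch extending the db service’s healthcheck:

```yaml
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER:-appuser} -d ${DB_NAME:-myapp}"]
      interval: 5s
      timeout: 3s
      retries: 5
      start_period: 30s   # failures in the first 30s don't count as retries
```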
Networks
I defined two networks: backend (marked internal: true) and frontend. The database and Redis only sit on the backend network, meaning nothing from outside can reach them directly. The API sits on both networks so it can talk to the database and be reached by Nginx. Nginx only sits on the frontend network.
This is a basic security pattern. The db and redis services still declare host port mappings (5432, 6379) for development convenience, though note that Docker does not publish ports for containers attached only to an internal network, so if you need to reach the database directly from your host, temporarily drop internal: true. In production you’d remove those ports entries entirely and let everything flow through internal networking only.
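One way to strip the published database ports for production without maintaining a second full file is a small override. This is a sketch: it assumes a recent Compose version that supports the !reset YAML tag in overrides, and the file name docker-compose.prod.yml is just a convention:

```yaml
# docker-compose.prod.yml — layer it on top with:
#   docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
services:
  db:
    ports: !reset []    # drop the 5432 host mapping
  redis:
    ports: !reset []    # drop the 6379 host mapping
```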
The Node.js API
Now let’s build the actual API. Create api/package.json:
{
  "name": "fullstack-api",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "express": "^4.21.0",
    "pg": "^8.13.0",
    "ioredis": "^5.4.0"
  }
}
And api/index.js:
const express = require('express');
const { Pool } = require('pg');
const Redis = require('ioredis');

const app = express();
app.use(express.json());

// PostgreSQL connection
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
});

// Redis connection
const redis = new Redis(process.env.REDIS_URL);

// Health check endpoint
app.get('/health', async (req, res) => {
  try {
    await pool.query('SELECT 1');
    const redisPing = await redis.ping();
    res.json({
      status: 'healthy',
      database: 'connected',
      redis: redisPing === 'PONG' ? 'connected' : 'disconnected',
      timestamp: new Date().toISOString()
    });
  } catch (err) {
    res.status(503).json({ status: 'unhealthy', error: err.message });
  }
});

// Simple CRUD endpoint
app.get('/api/items', async (req, res) => {
  // Check cache first
  const cached = await redis.get('items');
  if (cached) {
    return res.json({ source: 'cache', data: JSON.parse(cached) });
  }
  const result = await pool.query('SELECT * FROM items ORDER BY created_at DESC');
  // Cache for 60 seconds
  await redis.set('items', JSON.stringify(result.rows), 'EX', 60);
  res.json({ source: 'database', data: result.rows });
});

app.post('/api/items', async (req, res) => {
  const { name, description } = req.body;
  const result = await pool.query(
    'INSERT INTO items (name, description) VALUES ($1, $2) RETURNING *',
    [name, description]
  );
  // Invalidate cache
  await redis.del('items');
  res.status(201).json(result.rows[0]);
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`API running on port ${PORT}`);
});
The API has three endpoints: a health check (useful for monitoring), a GET with Redis caching, and a POST that invalidates the cache on write. Nothing groundbreaking, but it demonstrates real database and cache interaction.
The Dockerfile
Create api/Dockerfile:
FROM node:22-alpine AS builder
WORKDIR /app
COPY package.json ./
# npm ci would fail here without a package-lock.json, so use install
RUN npm install --omit=dev

FROM node:22-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
This uses a multi-stage build. The first stage installs dependencies, the second copies them over. This keeps the final image small since it doesn’t include the npm cache or build tools. The node:22-alpine base image is around 50MB compressed, versus roughly 350MB for the full Debian-based Node image.
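One caveat worth knowing: with CMD ["npm", "start"], npm runs as PID 1, and npm does not reliably forward SIGTERM to the node child process, so docker compose stop can wait out the full 10-second grace period and then SIGKILL the container. Running node directly avoids this; a sketch of the alternative last line:

```dockerfile
# Alternative last line for api/Dockerfile: exec node directly so it
# receives SIGTERM as PID 1 and can shut down within the grace period.
CMD ["node", "index.js"]
```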
The Database Schema
Create init.sql in your project root:
CREATE TABLE IF NOT EXISTS items (
  id SERIAL PRIMARY KEY,
  name VARCHAR(255) NOT NULL,
  description TEXT,
  created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

INSERT INTO items (name, description) VALUES
  ('Docker Compose', 'Define and run multi-container applications'),
  ('PostgreSQL', 'A powerful, open source object-relational database'),
  ('Redis', 'An in-memory data structure store used as a database');
PostgreSQL runs this file automatically on the first container startup (when the data volume is empty). On subsequent starts, it skips it because the data already exists.
The Nginx Configuration
Create nginx/default.conf:
upstream api_backend {
    server api:3000;
}

server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://api_backend;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /health {
        proxy_pass http://api_backend;
        access_log off;
    }
}
Nginx forwards all requests to the API container using the service name api as the hostname. Docker’s internal DNS resolves this automatically. The /health endpoint gets its own location block with logging disabled, so health checks don’t flood your access logs.
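If you later add WebSockets or server-sent events, the defaults above won’t carry the connection upgrade. A hedged sketch for a hypothetical /ws path (the path is an assumption, not part of this tutorial’s API):

```nginx
# Optional: WebSocket support for a hypothetical /ws endpoint
location /ws {
    proxy_pass http://api_backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 3600s;   # keep long-lived connections open
}
```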
Running Everything
Now for the payoff. In your project root:
docker compose up --build -d
That single command:
- Builds the API image from the Dockerfile
- Pulls PostgreSQL 17, Redis 7, and Nginx Alpine images
- Creates the backend and frontend networks
- Creates the pgdata and redisdata volumes
- Starts PostgreSQL, waits for it to be healthy
- Starts Redis, waits for it to be healthy
- Starts the API (now that its dependencies are up)
- Starts Nginx (now that the API is up)
Check that everything is running:
docker compose ps
You should see all four services listed with a “running” status. Test the API:
curl http://localhost/health
Expected response:
{
  "status": "healthy",
  "database": "connected",
  "redis": "connected",
  "timestamp": "2026-04-22T07:15:00.000Z"
}
Fetch the items (first request hits the database, second hits Redis cache):
curl http://localhost/api/items
Add a new item:
curl -X POST http://localhost/api/items \
-H "Content-Type: application/json" \
-d '{"name": "Test Item", "description": "Created via Docker Compose"}'
Common Operations
Here are the commands you’ll use daily:
# View logs for all services
docker compose logs -f
# View logs for a specific service
docker compose logs -f api
# Restart a single service
docker compose restart api
# Stop everything (containers stay, data persists)
docker compose stop
# Stop and remove containers (data persists in volumes)
docker compose down
# Stop and remove everything including volumes (fresh start)
docker compose down -v
# Rebuild and restart after code changes
docker compose up --build -d api
The docker compose down -v command is the nuclear option. It removes the containers, networks, and volumes. Your database data is gone. Use this when you want a completely clean slate, not for routine restarts.
Compose Profiles for Development vs Production
One feature that doesn’t get enough attention is profiles. They let you define services that only start in certain environments:
services:
  # ... your existing services ...

  db-admin:
    image: dpage/pgadmin4:latest
    profiles:
      - dev
    ports:
      - "5050:80"
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: admin
    depends_on:
      - db
    networks:
      - backend
With profiles: [dev], this pgAdmin service only starts when you explicitly include it:
# Normal start (no pgAdmin)
docker compose up -d
# Development start (includes pgAdmin)
docker compose --profile dev up -d
This is cleaner than maintaining separate docker-compose.dev.yml and docker-compose.prod.yml files. Keep one file, use profiles to toggle development tools on and off.
Watch Mode for Local Development
If you’re actively coding, rebuilding the container on every change gets old fast. Docker Compose has a built-in watch mode that syncs files and triggers actions without a full rebuild:
services:
  api:
    build:
      context: ./api
      dockerfile: Dockerfile
    # ... existing config ...
    develop:
      watch:
        - action: sync
          path: ./api
          target: /app
          ignore:
            - node_modules/
        - action: rebuild
          path: ./api/package.json
Start it with:
docker compose watch
Now when you edit a file in ./api, it syncs into the running container immediately. When package.json changes, it triggers a rebuild. Watch mode has been generally available since Compose v2.22, and recent releases can delegate builds to Docker Bake for better parallelization and caching.
Troubleshooting
Here are the problems I see most often:
“database is not accepting connections”
Your API is starting before PostgreSQL is ready. Make sure you have the healthcheck on the database service AND depends_on with condition: service_healthy on the API. Without both, it’s a race condition.
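The two pieces have to appear together. Stripped to the bare minimum, the pairing looks like this:

```yaml
services:
  db:
    image: postgres:17-alpine
    healthcheck:                       # the database defines a health probe...
      test: ["CMD-SHELL", "pg_isready -U appuser"]
      interval: 5s
      retries: 5
  api:
    depends_on:
      db:
        condition: service_healthy     # ...and the consumer waits on it
```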
“port is already allocated”
Something else on your host is already bound to one of the published ports (80, 3000, 5432, or 6379). Stop the conflicting process, or change the host side of the mapping in docker-compose.yml (for example "8080:80" on the nginx service).
Changes to init.sql not applying
PostgreSQL only runs init scripts when the data volume is empty. If you already started the stack once, the script won’t run again. Run docker compose down -v and start fresh, or connect to the database and run the SQL manually.
Container keeps restarting
Check the logs: docker compose logs api. The most common cause is a missing environment variable or a connection string that doesn’t match what the database service exposes.

What’s Next
This setup covers the basics, but Docker Compose can do a lot more:
- Secrets management with Docker secrets or external secret stores
- Resource limits with deploy.resources.limits to cap CPU and memory per service
- Multi-environment files with docker compose -f docker-compose.yml -f docker-compose.override.yml
- Docker Bake for complex multi-image build pipelines
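As one example from that list, capping the API service at half a CPU core and 256 MB of memory looks like this (the numbers are placeholders; tune them to your workload):

```yaml
services:
  api:
    deploy:
      resources:
        limits:
          cpus: "0.50"      # at most half a CPU core
          memory: 256M      # container is killed if it exceeds 256 MB
```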
If you’re deploying to production, pair Compose with a reverse proxy that handles TLS termination (like Caddy or Traefik), and consider adding a container orchestration layer if you need auto-scaling.
The whole project is available in a single docker-compose.yml and a handful of supporting files. No Kubernetes manifests, no Terraform, no CI pipeline needed just to run it locally. That’s the beauty of Compose: it makes the simple things simple, and the complex things possible.
