Building a Multi-Container Docker Application: Web App + Database
In today’s development landscape, containerisation has become a cornerstone of modern application deployment. Docker, in particular, has revolutionised how we package, distribute, and run applications. In this post, I’ll walk you through creating a simple multi-container Docker application that demonstrates a common architectural pattern: a web application that writes to a separate database container.
- The Project Goal
- GitHub Link
- Architecture Overview
- Setting Up the Project Structure
- The Web Application Container
- The Database Container
- Orchestrating with Docker Compose
- Key elements in this configuration:
- Lessons Learned and Best Practices
- Deployment
- Conclusion
The Project Goal
Our objective was straightforward: build two Docker containers with the following responsibilities:
Container 1: A basic web application that writes data to a database
Container 2: A database server that stores the data from the web application
GitHub Link
https://github.com/xnomle/Docker-Application-Web-App-Database
Architecture Overview
The architecture consists of:
- A Node.js/Express web application container exposing port 3000
- A PostgreSQL database container exposing port 5432
- A Docker network enabling communication between the containers
- A Docker volume providing data persistence for the database
Setting Up the Project Structure
Our project directory structure looks like this:
├── docker-compose.yml
├── webapp/
│   ├── dockerfile
│   ├── package.json
│   ├── app.js
│   └── public/
│       └── index.html
└── database/
    ├── Dockerfile
    └── init.sql
The Web Application Container
Our web application is a simple message board built with Node.js and Express. Users can post messages, which are then stored in the database.
The core files include:
package.json:
{
  "name": "simple-webapp",
  "version": "1.0.0",
  "description": "A simple web application that connects to a database",
  "main": "app.js",
  "dependencies": {
    "express": "^4.18.2",
    "pg": "^8.11.3",
    "body-parser": "^1.20.2"
  }
}
app.js:
const express = require('express');
const { Pool } = require('pg');
const bodyParser = require('body-parser');

// Database connection
const pool = new Pool({
  user: process.env.DB_USER,
  host: process.env.DB_HOST,
  database: process.env.DB_NAME,
  password: process.env.DB_PASSWORD,
  port: process.env.DB_PORT,
});

// App setup and routes
// ...
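One subtlety with this pattern: every process.env value is a string, including DB_PORT. The pg client tolerates a string port, but parsing it explicitly, with local fallbacks matching the compose defaults, makes the configuration more robust. A small illustrative sketch, separate from the app code (the helper name is an assumption, not from the repo):

```javascript
// Build the database config from environment variables, with
// local fallbacks and an explicitly parsed numeric port.
function getDbConfig(env = process.env) {
  return {
    user: env.DB_USER || 'postgres',
    host: env.DB_HOST || 'localhost',
    database: env.DB_NAME || 'myapp',
    password: env.DB_PASSWORD || 'postgres',
    port: parseInt(env.DB_PORT || '5432', 10),
  };
}

// Example: an empty environment falls back to the defaults
const cfg = getDbConfig({});
console.log(cfg.port); // 5432 (a number, not a string)
```

Passing `getDbConfig()` to `new Pool(...)` keeps all environment handling in one place.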
dockerfile:
FROM node:18-alpine
WORKDIR /app

# Install dependencies first so this layer is cached between code changes
COPY package.json .
RUN npm install

# Copy app files explicitly
COPY app.js .
COPY public/ ./public/

EXPOSE 3000
CMD ["node", "app.js"]
The Database Container
Our database container runs PostgreSQL and includes an initialisation script:
Dockerfile:
FROM postgres:15-alpine
# Copy initialisation scripts
COPY init.sql /docker-entrypoint-initdb.d/
# Expose PostgreSQL port
EXPOSE 5432
init.sql:
-- Create messages table
CREATE TABLE IF NOT EXISTS messages (
    id SERIAL PRIMARY KEY,
    content TEXT NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Insert some initial data
INSERT INTO messages (content) VALUES
    ('Welcome to our simple message board!'),
    ('This is running on Docker containers');
Orchestrating with Docker Compose
Docker Compose makes it easy to define and run multi-container applications. Our docker-compose.yml file:
services:
  webapp:
    build:
      context: ./webapp
      dockerfile: dockerfile
    ports:
      - "3000:3000"
    environment:
      - DB_HOST=database
      - DB_USER=postgres
      - DB_PASSWORD=postgres
      - DB_NAME=myapp
      - DB_PORT=5432
    depends_on:
      - database
    networks:
      - app-network

  database:
    build:
      context: ./database
      dockerfile: Dockerfile
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=myapp
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - app-network

volumes:
  db-data:

networks:
  app-network:
    driver: bridge
Key elements in this configuration:
- Environment Variables: We pass database credentials to both containers
- depends_on: Ensures the database container starts before the web app (note that this only controls start order; it does not wait for the database to be ready to accept connections)
- networks: Creates a private network for container communication
- volumes: Provides data persistence for the database
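Compose itself can also wait for the database: newer Compose versions support a condition on depends_on combined with a container healthcheck. A sketch of what that could look like here (not part of the project's actual files; the interval values are illustrative):

```yaml
services:
  database:
    # ...existing configuration...
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d myapp"]
      interval: 5s
      timeout: 3s
      retries: 5

  webapp:
    # ...existing configuration...
    depends_on:
      database:
        condition: service_healthy
```

With this in place, the webapp container is only started once pg_isready reports the database as accepting connections.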
Lessons Learned and Best Practices
During development, we encountered and fixed several issues:
- Database Connection Timing: The web application needs to wait for the database to be ready before connecting. Since depends_on only controls start order, we implemented a retry mechanism in our code:

async function initialiseDb() {
  try {
    await pool.query('SELECT 1'); // Verify the connection works
    console.log('Database connection established');
  } catch (err) {
    // Database not ready yet - retry after a delay
    setTimeout(initialiseDb, 5000);
  }
}
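The fixed five-second retry works, but the pattern generalises. A small sketch of a reusable retry helper with exponential backoff (an illustrative helper, not code from the repo):

```javascript
// Retry an async operation, doubling the delay after each failure.
async function retryWithBackoff(operation, retries = 5, delayMs = 100) {
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await operation();
    } catch (err) {
      if (attempt === retries - 1) throw err; // out of attempts
      await new Promise((resolve) => setTimeout(resolve, delayMs));
      delayMs *= 2; // exponential backoff
    }
  }
}

// Example: an operation that fails twice before succeeding
let calls = 0;
const flaky = async () => {
  calls += 1;
  if (calls < 3) throw new Error('not ready');
  return 'connected';
};
retryWithBackoff(flaky).then((result) => console.log(result)); // 'connected'
```

In the app, the operation would be the pool connection check, so startup keeps retrying with growing pauses instead of hammering the database.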
Deployment
To deploy this application, you simply run:
docker-compose up
This single command builds the images (if needed), creates the containers, networks, and volumes, and starts all services.
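A few companion commands are useful during development (standard Compose CLI usage):

```shell
# Build images and start everything in the background
docker-compose up -d --build

# Follow the logs from both containers
docker-compose logs -f

# Stop and remove the containers and network (add -v to also remove the volume)
docker-compose down
```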
Conclusion
This multi-container setup demonstrates several important concepts in Docker orchestration:
- Separation of concerns: Keeping application and database in separate containers
- Networking: Allowing containers to communicate while remaining isolated
- Environment configuration: Using environment variables for container configuration
- Data persistence: Using volumes to persist database data
This project serves as an excellent starting point for microservice architectures. The principles shown here—container communication, environment configuration, and data persistence—are fundamental to containerised application development.