Database per Service Pattern

Introduction to the Database per Service Pattern

The Database per Service Pattern is a microservices design approach where each microservice owns its own private database or schema, ensuring strong data ownership and minimizing coupling between services. By isolating data storage, each service manages its own data access independently, reducing the risk of unintended dependencies and enabling autonomous development and deployment. Services communicate via well-defined APIs rather than direct database access, promoting loose coupling and encapsulation.

For example, in an e-commerce system, the Order Service maintains its own database for order data, while the Inventory Service manages inventory data separately. If the Order Service needs inventory information, it makes an API call to the Inventory Service rather than querying the Inventory Service's database directly, preserving service boundaries.
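
To make the boundary concrete, here is a minimal sketch contrasting the anti-pattern (reaching into another service's database) with the pattern (calling its API). It assumes an Express/axios-based Order Service and an INVENTORY_SERVICE_URL environment variable, consistent with the full example later in this page; the connection string in the anti-pattern comment is hypothetical.

// Anti-pattern: the Order Service querying the Inventory Service's database directly.
// (Illustrative only; the connection string and table are hypothetical.)
// const inventoryDb = new Pool({ connectionString: 'postgres://.../inventory' });
// const stock = await inventoryDb.query('SELECT quantity FROM inventory WHERE item_id = $1', [itemId]);

// Database per Service: ask the Inventory Service through its public API instead.
const axios = require('axios');

async function getAvailableQuantity(itemId) {
  // INVENTORY_SERVICE_URL is assumed to point at the Inventory Service.
  const response = await axios.get(`${process.env.INVENTORY_SERVICE_URL}/inventory/${itemId}`);
  return response.data.quantity;
}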

The Database per Service Pattern enforces data ownership by giving each microservice its own schema, minimizing coupling and enhancing autonomy.

Database per Service Pattern Diagram

The diagram illustrates the Database per Service Pattern. A Client sends Requests to microservices (e.g., Service A, Service B), each owning its own Database. Services interact via Inter-Service Calls for data exchange, avoiding direct database access. Arrows are color-coded: yellow (dashed) for requests, blue (dotted) for data access, and red (dashed) for inter-service calls.

graph TD
    A[Client] -->|Request| B[Service A]
    A -->|Request| C[Service B]
    B -->|Data Access| D[Database A]
    C -->|Data Access| E[Database B]
    B -->|Inter-Service Call| C
    subgraph Microservices
        B
        C
        D
        E
    end
    style A stroke:#ff6f61,stroke-width:2px
    style B stroke:#ffeb3b,stroke-width:2px
    style C stroke:#ffeb3b,stroke-width:2px
    style D stroke:#405de6,stroke-width:2px
    style E stroke:#405de6,stroke-width:2px
    linkStyle 0 stroke:#ffeb3b,stroke-width:2px,stroke-dasharray:5,5
    linkStyle 1 stroke:#ffeb3b,stroke-width:2px,stroke-dasharray:5,5
    linkStyle 2 stroke:#405de6,stroke-width:2px,stroke-dasharray:2,2
    linkStyle 3 stroke:#405de6,stroke-width:2px,stroke-dasharray:2,2
    linkStyle 4 stroke:#ff4d4f,stroke-width:2px,stroke-dasharray:3,3
Each microservice owns its own Database, accessing it directly and communicating with other services via APIs to maintain loose coupling.

Key Components

The core components of the Database per Service Pattern include:

  • Microservice: An independent service responsible for a specific business capability, owning its data and logic.
  • Private Database/Schema: A dedicated database or schema for each microservice, inaccessible to other services directly.
  • API Interface: Well-defined APIs (e.g., REST, gRPC) for inter-service communication, enabling data exchange without direct database access.
  • Data Ownership: Each service fully controls its data model, schema, and storage, enforcing encapsulation.
  • Inter-Service Communication: Mechanisms like HTTP/REST, message queues (e.g., Kafka, RabbitMQ), or event streams for service interactions.
  • Database Technology: Flexibility to choose different database types (e.g., SQL, NoSQL) per service based on specific needs.

The pattern is typically implemented in microservices architectures running on container orchestration platforms like Kubernetes, where each service and its database are deployed independently.

Benefits of the Database per Service Pattern

The Database per Service Pattern offers several advantages for microservices architectures:

  • Loose Coupling: Services are decoupled by avoiding shared databases, reducing dependencies and enabling independent changes.
  • Autonomy: Teams can develop, deploy, and scale services independently, choosing optimal technologies for each service.
  • Data Encapsulation: Each service owns its data, preventing unintended access or modifications by other services.
  • Scalability: Databases can be scaled independently based on each service’s workload, optimizing resource usage.
  • Technology Flexibility: Services can use different database types (e.g., PostgreSQL for orders, MongoDB for inventory) tailored to their needs.
  • Resilience: Failures in one service’s database do not directly impact others, improving overall system reliability.

These benefits make the Database per Service Pattern ideal for complex, distributed systems requiring high autonomy and scalability, such as e-commerce, financial services, or SaaS platforms.

Implementation Considerations

Implementing the Database per Service Pattern requires careful planning to address complexity, consistency, and operational overhead. Key considerations include:

  • Data Consistency: Use eventual consistency and patterns like Saga or Event Sourcing to manage distributed transactions across services.
  • Inter-Service Communication: Design robust APIs or event-driven systems to handle data exchange, ensuring fault tolerance and retries (see the sketch after this list).
  • Schema Design: Create service-specific schemas optimized for each service’s access patterns, avoiding over-normalization or duplication.
  • Database Management: Plan for schema migrations, backups, and monitoring for each database, increasing operational complexity.
  • Performance Overhead: Account for latency in inter-service calls compared to direct database queries, optimizing API performance.
  • Data Duplication: Allow controlled data duplication across services to improve performance, but manage synchronization carefully.
  • Security: Implement strict access controls (e.g., separate credentials per service) to prevent unauthorized database access.
  • Monitoring and Observability: Use tools like Prometheus, Grafana, or OpenTelemetry to monitor database performance and service interactions.
  • Testing: Test service interactions and failure scenarios (e.g., using chaos engineering tools like Gremlin) to ensure resilience.
  • Cost Management: Evaluate the cost of running multiple databases, especially in cloud environments, and optimize resource allocation.
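
As referenced above, the sketch below shows one way to make an inter-service call fault tolerant with a timeout and simple retry/backoff, using axios as in the example later on this page. The retry count, delay, and timeout values are illustrative assumptions, not prescriptions.

const axios = require('axios');

// Call another service's API with a timeout and simple retry/backoff.
// Retry only on network errors or 5xx responses; 4xx client errors are surfaced immediately.
async function callWithRetry(url, { retries = 3, timeoutMs = 2000, backoffMs = 200 } = {}) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await axios.get(url, { timeout: timeoutMs });
    } catch (err) {
      const status = err.response && err.response.status;
      const retryable = !status || status >= 500;
      if (!retryable || attempt === retries) throw err;
      // Exponential backoff before the next attempt.
      await new Promise(resolve => setTimeout(resolve, backoffMs * 2 ** (attempt - 1)));
    }
  }
}

// Usage: the Order Service checking stock before creating an order.
// const response = await callWithRetry(`${process.env.INVENTORY_SERVICE_URL}/inventory/item1`);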

Common tools and frameworks for implementing the Database per Service Pattern include:

  • Databases: PostgreSQL, MySQL, MongoDB, DynamoDB, or Cassandra, chosen based on service requirements.
  • ORMs/Frameworks: Prisma, TypeORM, Mongoose, or Spring Data for managing database interactions.
  • Message Brokers: Kafka, RabbitMQ, or AWS SQS for event-driven communication between services (see the publishing sketch below).
  • API Gateways: Kong, AWS API Gateway, or Spring Cloud Gateway for routing inter-service API calls.
  • Kubernetes: For deploying and managing services and their databases in a containerized environment.
The Database per Service Pattern is ideal for microservices requiring autonomy and scalability, but introduces challenges in data consistency and operational complexity.
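
To complement the REST-based example that follows, here is a minimal sketch of event-driven inter-service communication using the kafkajs client. The broker address, topic name, and event shape are assumptions for illustration; the idea is that each service publishes events about its own data and other services update their own stores when they consume them.

const { Kafka } = require('kafkajs');

// Assumed broker address and client id for illustration.
const kafka = new Kafka({ clientId: 'order-service', brokers: ['kafka:9092'] });
const producer = kafka.producer();

// Publish an OrderCreated event after the Order Service commits to its own database.
// Other services (e.g., Inventory) consume the event and update their own databases.
async function publishOrderCreated(order) {
  await producer.connect();
  await producer.send({
    topic: 'orders.created',
    messages: [{ key: String(order.id), value: JSON.stringify(order) }],
  });
  await producer.disconnect();
}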

Example: Database per Service Pattern in Action

Below is a detailed example demonstrating the Database per Service Pattern using two Node.js microservices: an Order Service with a PostgreSQL database and an Inventory Service with a MongoDB database. The services communicate via REST APIs, illustrating data ownership and inter-service calls.

# docker-compose.yml
version: '3.8'
services:
  order-service:
    build: ./order-service
    ports:
      - "3001:3001"
    environment:
      - DATABASE_URL=postgres://user:password@order-db:5432/orders
      - INVENTORY_SERVICE_URL=http://inventory-service:3002
    depends_on:
      - order-db
  order-db:
    image: postgres:latest
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=orders
    volumes:
      - order-data:/var/lib/postgresql/data
  inventory-service:
    build: ./inventory-service
    ports:
      - "3002:3002"
    environment:
      - MONGO_URI=mongodb://inventory-db:27017/inventory
    depends_on:
      - inventory-db
  inventory-db:
    image: mongo:latest
    volumes:
      - inventory-data:/data/db
volumes:
  order-data:
  inventory-data:

# order-service/Dockerfile
FROM node:16
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3001
CMD ["node", "index.js"]

# order-service/index.js
const express = require('express');
const { Pool } = require('pg');
const axios = require('axios');

const app = express();
app.use(express.json());

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Initialize database
pool.query(`
  CREATE TABLE IF NOT EXISTS orders (
    id SERIAL PRIMARY KEY,
    item_id VARCHAR(50) NOT NULL,
    quantity INTEGER NOT NULL
  );
`).catch(err => console.error('DB init error:', err));

app.post('/orders', async (req, res) => {
  const { item_id, quantity } = req.body;
  try {
    // Check inventory via Inventory Service
    const inventoryResponse = await axios.get(
      `${process.env.INVENTORY_SERVICE_URL}/inventory/${item_id}`
    );
    if (inventoryResponse.data.quantity < quantity) {
      return res.status(400).json({ error: 'Insufficient inventory' });
    }
    // Create order
    const result = await pool.query(
      'INSERT INTO orders (item_id, quantity) VALUES ($1, $2) RETURNING *',
      [item_id, quantity]
    );
    res.status(201).json(result.rows[0]);
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

app.get('/orders/:id', async (req, res) => {
  try {
    const result = await pool.query('SELECT * FROM orders WHERE id = $1', [req.params.id]);
    if (result.rows.length === 0) {
      return res.status(404).json({ error: 'Order not found' });
    }
    res.json(result.rows[0]);
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

app.listen(3001, () => console.log('Order Service running on port 3001'));

# order-service/package.json
{
  "name": "order-service",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.17.1",
    "pg": "^8.7.3",
    "axios": "^0.27.2"
  }
}

# inventory-service/Dockerfile
FROM node:16
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3002
CMD ["node", "index.js"]

# inventory-service/index.js
const express = require('express');
const mongoose = require('mongoose');

const app = express();
app.use(express.json());

mongoose.connect(process.env.MONGO_URI, { useNewUrlParser: true, useUnifiedTopology: true })
  .catch(err => console.error('MongoDB connection error:', err));

const inventorySchema = new mongoose.Schema({
  item_id: { type: String, required: true, unique: true },
  quantity: { type: Number, required: true }
});
const Inventory = mongoose.model('Inventory', inventorySchema);

// Initialize sample data
Inventory.findOne({ item_id: 'item1' }).then(item => {
  if (!item) {
    Inventory.create({ item_id: 'item1', quantity: 100 })
      .catch(err => console.error('Inventory init error:', err));
  }
});

app.get('/inventory/:item_id', async (req, res) => {
  try {
    const item = await Inventory.findOne({ item_id: req.params.item_id });
    if (!item) {
      return res.status(404).json({ error: 'Item not found' });
    }
    res.json(item);
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

app.listen(3002, () => console.log('Inventory Service running on port 3002'));

# inventory-service/package.json
{
  "name": "inventory-service",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.17.1",
    "mongoose": "^6.3.4"
  }
}

This example demonstrates the Database per Service Pattern with two microservices:

  • Order Service: Uses a PostgreSQL database to store orders, exposing /orders (POST) and /orders/:id (GET) endpoints. It queries the Inventory Service to check stock before creating an order.
  • Inventory Service: Uses a MongoDB database to store inventory, exposing a /inventory/:item_id (GET) endpoint to provide stock information.
  • Inter-Service Communication: The Order Service makes HTTP calls to the Inventory Service, avoiding direct access to its database.
  • Data Ownership: Each service owns its database (PostgreSQL for orders, MongoDB for inventory), ensuring encapsulation.
  • Docker Compose: Orchestrates the services and databases, using separate volumes for data persistence.

To run this example, create the directory structure, save the files, and execute:

docker-compose up --build

Test the Order Service by creating an order:

curl -X POST http://localhost:3001/orders -H "Content-Type: application/json" -d '{"item_id":"item1","quantity":10}'

Retrieve an order:

curl http://localhost:3001/orders/1

Check inventory:

curl http://localhost:3002/inventory/item1

This setup illustrates the Database per Service Pattern’s principles: each service owns its database, communicates via APIs, and maintains autonomy. The use of different database technologies (PostgreSQL and MongoDB) highlights the pattern’s flexibility, while the Docker Compose configuration simplifies deployment. In a production environment, you’d add monitoring, security, and event-driven patterns (e.g., Saga) to handle distributed data consistency.
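
As a closing illustration of the Saga approach mentioned above, here is a minimal orchestration-style sketch. The /reserve and /release endpoints on the Inventory Service are hypothetical (they are not part of the example above), and createOrderLocally stands in for the Order Service's own database insert.

const axios = require('axios');

// Stand-in for the Order Service's own database insert (see order-service/index.js above).
async function createOrderLocally(itemId, quantity) {
  // In the real service this would be the pool.query INSERT from the example.
  return { item_id: itemId, quantity };
}

// Orchestration-style saga: reserve inventory, then create the order.
// If the order step fails, compensate by releasing the reservation so both services converge.
async function placeOrderSaga(itemId, quantity) {
  const inventoryUrl = process.env.INVENTORY_SERVICE_URL;

  // Step 1: reserve stock in the Inventory Service's own database (hypothetical endpoint).
  await axios.post(`${inventoryUrl}/inventory/${itemId}/reserve`, { quantity });

  try {
    // Step 2: persist the order in the Order Service's own database.
    return await createOrderLocally(itemId, quantity);
  } catch (err) {
    // Compensating action: undo the reservation (hypothetical endpoint).
    await axios.post(`${inventoryUrl}/inventory/${itemId}/release`, { quantity });
    throw err;
  }
}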