moai-domain-database


Database Domain Specialist


Quick Reference (30 seconds)


Enterprise Database Expertise - Comprehensive database patterns and implementations covering PostgreSQL, MongoDB, Redis, and advanced data management for scalable modern applications.
Core Capabilities:
  • PostgreSQL: Advanced relational patterns, optimization, and scaling
  • MongoDB: Document modeling, aggregation, and NoSQL performance tuning
  • Redis: In-memory caching, real-time analytics, and distributed systems
  • Multi-Database: Hybrid architectures and data integration patterns
  • Performance: Query optimization, indexing strategies, and scaling
  • Operations: Connection management, migrations, and monitoring
When to Use:
  • Designing database schemas and data models
  • Implementing caching strategies and performance optimization
  • Building scalable data architectures
  • Working with multi-database systems
  • Optimizing database queries and performance


Implementation Guide (5 minutes)


Quick Start Workflow


Database Stack Initialization:

```python
from moai_domain_database import DatabaseManager

# Initialize multi-database stack
db_manager = DatabaseManager()

# Configure PostgreSQL for relational data
postgresql = db_manager.setup_postgresql(
    connection_string="postgresql://...",
    connection_pool_size=20,
    enable_query_logging=True,
)

# Configure MongoDB for document storage
mongodb = db_manager.setup_mongodb(
    connection_string="mongodb://...",
    database_name="app_data",
    enable_sharding=True,
)

# Configure Redis for caching and real-time features
redis = db_manager.setup_redis(
    connection_string="redis://...",
    max_connections=50,
    enable_clustering=True,
)

# Use the unified database interface
user_data = db_manager.get_user_with_profile(user_id)
analytics = db_manager.get_user_analytics(user_id, time_range="30d")
```

Single Database Operations:

```bash
# PostgreSQL schema migration
moai db:migrate --database postgresql --migration-file schema_v2.sql

# MongoDB aggregation pipeline
moai db:aggregate --collection users --pipeline analytics_pipeline.json

# Redis cache warming
moai db:cache:warm --pattern "user:*" --ttl 3600
```

Core Components


  1. PostgreSQL (modules/postgresql.md)
  • Advanced schema design and constraints
  • Complex query optimization and indexing
  • Window functions and CTEs
  • Partitioning and materialized views
  • Connection pooling and performance tuning
  2. MongoDB (modules/mongodb.md)
  • Document modeling and schema design
  • Aggregation pipelines for analytics
  • Indexing strategies and performance
  • Sharding and scaling patterns
  • Data consistency and validation
  3. Redis (modules/redis.md)
  • Multi-layer caching strategies
  • Real-time analytics and counting
  • Distributed locking and coordination
  • Pub/sub messaging and streams
  • Advanced data structures (HyperLogLog, Geo)
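The distributed-locking bullet above is commonly implemented with Redis's SET NX EX idiom. Below is a minimal sketch of that pattern, using a dict-backed stand-in for the Redis client so it runs self-contained; `acquire_lock` and `release_lock` are illustrative names, not part of this skill's API:

```python
import time
import uuid


class FakeRedis:
    """Dict-backed stand-in for a Redis client (SET NX EX / GET / DELETE only)."""

    def __init__(self):
        self.store = {}

    def set(self, key, value, nx=False, ex=None):
        # NX: only set if the key does not already exist (and has not expired)
        if nx and self.get(key) is not None:
            return None
        expires = time.time() + ex if ex is not None else None
        self.store[key] = (value, expires)
        return True

    def get(self, key):
        item = self.store.get(key)
        if item is None:
            return None
        value, expires = item
        if expires is not None and time.time() > expires:
            del self.store[key]  # lazy expiry, like Redis TTLs
            return None
        return value

    def delete(self, key):
        self.store.pop(key, None)


def acquire_lock(client, name, ttl=10):
    # Unique token so only the holder can release the lock
    token = uuid.uuid4().hex
    if client.set(f"lock:{name}", token, nx=True, ex=ttl):
        return token
    return None


def release_lock(client, name, token):
    # Release only if we still hold the lock (token matches)
    if client.get(f"lock:{name}") == token:
        client.delete(f"lock:{name}")
        return True
    return False
```

In production the check-and-delete in `release_lock` should be done atomically (typically via a short Lua script), since a GET followed by a DELETE can race with lock expiry.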


Advanced Patterns (10+ minutes)


Multi-Database Architecture


Polyglot Persistence Pattern:

```python
class DataRouter:
    def __init__(self):
        self.postgresql = PostgreSQLConnection()
        self.mongodb = MongoDBConnection()
        self.redis = RedisConnection()

    def get_user_profile(self, user_id):
        # Get structured user data from PostgreSQL
        user = self.postgresql.get_user(user_id)

        # Get flexible profile data from MongoDB
        profile = self.mongodb.get_user_profile(user_id)

        # Get real-time status from Redis
        status = self.redis.get_user_status(user_id)

        return self.merge_user_data(user, profile, status)

    def update_user_data(self, user_id, data):
        # Route different data types to the appropriate databases
        if 'structured_data' in data:
            self.postgresql.update_user(user_id, data['structured_data'])

        if 'profile_data' in data:
            self.mongodb.update_user_profile(user_id, data['profile_data'])

        if 'real_time_data' in data:
            self.redis.set_user_status(user_id, data['real_time_data'])

        # Invalidate cache across databases
        self.invalidate_user_cache(user_id)
```
Data Synchronization:

```python
class DataSyncManager:
    def sync_user_data(self, user_id):
        # Sync from PostgreSQL to MongoDB for search
        pg_user = self.postgresql.get_user(user_id)
        search_document = self.create_search_document(pg_user)
        self.mongodb.upsert_user_search(user_id, search_document)

        # Update cache in Redis
        cache_data = self.create_cache_document(pg_user)
        self.redis.set_user_cache(user_id, cache_data, ttl=3600)
```
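Syncs like the one above are usually written to be idempotent so retries and out-of-order deliveries are safe. A minimal sketch with a dict standing in for the search index; the `version` guard field and all names are illustrative assumptions, not this skill's actual schema:

```python
def sync_user_search(pg_row, search_index):
    """Upsert a search document, ignoring updates older than what is stored."""
    doc = {
        "user_id": pg_row["id"],
        "name": pg_row["name"],
        "version": pg_row["version"],
    }
    existing = search_index.get(pg_row["id"])
    if existing is not None and existing["version"] >= doc["version"]:
        return False  # stale or duplicate update: keep the newer document
    search_index[pg_row["id"]] = doc
    return True
```

The version check makes the sync safe to replay: running it twice, or delivering an older update late, never overwrites newer data.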

Performance Optimization


Query Performance Analysis:

```python
# PostgreSQL query optimization
def analyze_query_performance(query):
    explain_result = postgresql.execute(f"EXPLAIN (ANALYZE, BUFFERS) {query}")
    return QueryAnalyzer(explain_result).get_optimization_suggestions()

# MongoDB aggregation optimization
def optimize_aggregation_pipeline(pipeline):
    optimizer = AggregationOptimizer()
    return optimizer.optimize_pipeline(pipeline)

# Redis performance monitoring
def monitor_redis_performance():
    metrics = redis.info()
    return PerformanceAnalyzer(metrics).get_recommendations()
```

Scaling Strategies:

```python
# Read replicas for PostgreSQL
read_replicas = postgresql.setup_read_replicas([
    "postgresql://replica1...",
    "postgresql://replica2...",
])

# Sharding for MongoDB
mongodb.setup_sharding(
    shard_key="user_id",
    num_shards=4,
)

# Redis clustering
redis.setup_cluster([
    "redis://node1:7000",
    "redis://node2:7000",
    "redis://node3:7000",
])
```
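Once read replicas exist, reads need to be routed away from the primary. A minimal round-robin router sketch; the class and method names are illustrative, not part of this skill's API:

```python
import itertools


class ReadReplicaRouter:
    """Round-robin reads across replicas; writes always go to the primary."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self._replicas = itertools.cycle(replicas)

    def for_read(self):
        # Rotate through replicas for read queries
        return next(self._replicas)

    def for_write(self):
        # All writes must hit the primary
        return self.primary
```

Real routers also need health checks and replication-lag awareness: a read that must observe its own immediately preceding write should go to the primary.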

---

Works Well With


Complementary Skills:
  • moai-domain-backend
    - API integration and business logic
  • moai-foundation-core
    - Database migration and schema management
  • moai-workflow-project
    - Database project setup and configuration
  • moai-platform-baas
    - BaaS database integration patterns
Technology Integration:
  • ORMs and ODMs (SQLAlchemy, Mongoose, TypeORM)
  • Connection pooling (PgBouncer, application-level pools)
  • Migration tools (Alembic, Flyway)
  • Monitoring (pg_stat_statements, MongoDB Atlas)
  • Cache invalidation and synchronization


Usage Examples


Database Operations


```python
# PostgreSQL advanced queries
users = postgresql.query(
    "SELECT * FROM users WHERE created_at > %s ORDER BY activity_score DESC LIMIT 100",
    [datetime.now() - timedelta(days=30)],
)

# MongoDB analytics
analytics = mongodb.aggregate('events', [
    {"$match": {"timestamp": {"$gte": start_date}}},
    {"$group": {"_id": "$type", "count": {"$sum": 1}}},
    {"$sort": {"count": -1}},
])

# Redis caching operations
async def get_user_data(user_id):
    cache_key = f"user:{user_id}"
    data = await redis.get(cache_key)
    if not data:
        # Cache miss: load from the database, populate the cache,
        # and return the freshly fetched object directly
        data = fetch_from_database(user_id)
        await redis.setex(cache_key, 3600, json.dumps(data))
        return data
    return json.loads(data)
```

Multi-Database Transactions


```python
async def create_user_with_profile(user_data, profile_data):
    try:
        # Start transaction across databases
        async with transaction_manager():
            # Create user in PostgreSQL
            user_id = await postgresql.insert_user(user_data)

            # Create profile in MongoDB
            await mongodb.insert_user_profile(user_id, profile_data)

            # Set initial cache in Redis
            await redis.set_user_cache(user_id, {
                "id": user_id,
                "status": "active",
                "created_at": datetime.now().isoformat(),
            })

            return user_id

    except Exception as e:
        # Automatic rollback across databases
        logger.error(f"User creation failed: {e}")
        raise
```
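PostgreSQL, MongoDB, and Redis share no native cross-database transaction, so `transaction_manager()` above has to be built from compensating actions (a saga). Its implementation is not shown in this skill, so the following is only one plausible shape, with dicts standing in for the three databases; all names here are assumptions:

```python
import contextlib


@contextlib.contextmanager
def compensating_transaction():
    """Saga-style scope: registered undo actions run in reverse order on failure."""
    undo_actions = []
    try:
        # The caller registers one undo callback per successful write
        yield undo_actions.append
    except Exception:
        for undo in reversed(undo_actions):
            undo()  # best-effort rollback, newest write first
        raise


# Demo stores standing in for PostgreSQL and MongoDB
users, profiles = {}, {}

def create_user(fail_after_user=False):
    with compensating_transaction() as on_rollback:
        users[1] = {"name": "ada"}
        on_rollback(lambda: users.pop(1, None))
        if fail_after_user:
            raise RuntimeError("profile write failed")
        profiles[1] = {"bio": "pioneer"}
        on_rollback(lambda: profiles.pop(1, None))
```

Note that compensations can themselves fail; durable sagas persist the undo log (for example in an outbox table) rather than keeping it in memory.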


Technology Stack


Relational Database:
  • PostgreSQL 14+ (primary)
  • MySQL 8.0+ (alternative)
  • Connection pooling (PgBouncer, SQLAlchemy)
NoSQL Database:
  • MongoDB 6.0+ (primary)
  • Document modeling and validation
  • Aggregation framework
  • Sharding and replication
In-Memory Database:
  • Redis 7.0+ (primary)
  • Redis Stack for advanced features
  • Clustering and high availability
  • Advanced data structures
Supporting Tools:
  • Migration tools (Alembic, Flyway)
  • Monitoring (Prometheus, Grafana)
  • ORMs/ODMs (SQLAlchemy, Mongoose)
  • Connection management
Performance Features:
  • Query optimization and analysis
  • Index management and strategies
  • Caching layers and invalidation
  • Load balancing and failover

For detailed implementation patterns and database-specific optimizations, see the modules/ directory.