moai-workflow-testing
Development Workflow Specialist
Quick Reference
Unified Development Workflow provides comprehensive development lifecycle management combining DDD testing, AI-powered debugging, performance optimization, automated code review, and quality assurance into integrated workflows.
Core Capabilities:
- DDD Testing: Characterization tests for legacy code, specification tests for greenfield projects, behavior snapshots
- AI-Powered Debugging: Intelligent error analysis and solution recommendations
- Performance Optimization: Profiling and bottleneck detection guidance
- Automated Code Review: TRUST 5 validation framework for quality analysis
- PR Code Review: Multi-agent pattern with Haiku eligibility check and Sonnet parallel review
- Quality Assurance: Comprehensive testing and CI/CD integration patterns
- Workflow Orchestration: End-to-end development process guidance
Workflow Progression: stages proceed from Debug to Refactor, then Optimize, Review, Test, and finally Profile. Each stage benefits from AI-powered analysis and recommendations.
When to Use:
- Complete development lifecycle management
- Enterprise-grade quality assurance implementation
- Multi-language development projects
- Performance-critical applications
- Technical debt reduction initiatives
- Automated testing and CI/CD integration
- Pull request code review automation
Implementation Guide
Core Concepts
Unified Development Philosophy:
- Integrates all aspects of development into cohesive workflow
- AI-powered assistance for complex decision-making
- Industry best practices integration for optimal patterns
- Continuous feedback loops between workflow stages
- Automated quality gates and validation
Workflow Components:
Component 1 - AI-Powered Debugging: The debugging component provides intelligent error classification and solution recommendations. When an error occurs, the system analyzes the error type, stack trace, and surrounding context to identify root causes and suggest appropriate fixes. The debugger references current best practices and common error resolution patterns.
Component 2 - Smart Refactoring: The refactoring component performs technical debt analysis and identifies safe automated transformation opportunities. It evaluates code complexity, duplication, and maintainability metrics to recommend specific refactoring actions with risk assessments.
Component 3 - Performance Optimization: The performance component provides real-time monitoring guidance and bottleneck detection. It helps identify CPU-intensive operations, memory leaks, and I/O bottlenecks, then recommends specific optimization strategies based on the identified issues.
Component 4 - DDD Testing Management: The DDD testing component provides specialized testing approaches aligned with domain-driven development. For legacy code, it uses characterization tests to capture current behavior before refactoring (PRESERVE phase). For greenfield projects, it uses specification tests derived from domain requirements. Behavior snapshots ensure behavioral consistency during transformations, and TRUST 5 validation ensures quality standards are maintained.
Component 5 - Automated Code Review: The code review component applies TRUST 5 framework validation with AI-powered quality analysis. It evaluates code against five trust dimensions and provides actionable improvement recommendations.
TRUST 5 Framework
The TRUST 5 framework is a conceptual quality assessment model with five dimensions. It provides guidance for evaluating code quality; it is not an implemented module.
Dimension 1 - Testability: Evaluate whether the code can be effectively tested. Consider whether functions are pure and deterministic, whether dependencies are injectable, and whether the code is modular enough for unit testing. Scoring ranges from low testability requiring significant refactoring to high testability with excellent test coverage support.
Dimension 2 - Readability: Assess how easily the code can be understood by others. Consider whether variable and function names are descriptive, whether the code structure is logical, and whether complex operations are documented. Scoring evaluates naming conventions, code organization, and documentation quality.
Dimension 3 - Understandability: Evaluate the conceptual clarity of the implementation. Consider whether the business logic is clearly expressed, whether abstractions are appropriate, and whether a new developer can understand the code quickly. This goes beyond surface readability to assess architectural clarity.
Dimension 4 - Security: Assess security posture and vulnerability exposure. Consider whether inputs are validated, whether secrets are properly managed, and whether common vulnerability patterns such as injection, XSS, and CSRF are avoided. Scoring evaluates adherence to security best practices.
Dimension 5 - Transparency: Evaluate operational visibility and debuggability. Consider whether error handling is comprehensive, whether logs are meaningful and structured, and whether issues can be traced through the system. Scoring assesses observability and troubleshooting capabilities.
Overall TRUST Score Calculation: The overall TRUST score combines all five dimensions using weighted averaging. Critical issues in any dimension can override the average, ensuring security or testability problems are not masked by high scores elsewhere. A passing score typically requires minimum thresholds in each dimension plus an acceptable weighted average.
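The weighted-average-with-override rule above can be sketched as follows. The specific weights, per-dimension floor, and passing threshold here are illustrative assumptions, not values defined by the framework:

```python
# Illustrative sketch of the TRUST 5 scoring rule. The weights, the
# per-dimension floor, and the passing threshold are assumptions for
# demonstration, not values defined by the framework itself.

DIMENSION_WEIGHTS = {
    "testability": 0.25,
    "readability": 0.15,
    "understandability": 0.15,
    "security": 0.30,
    "transparency": 0.15,
}

DIMENSION_FLOOR = 0.5   # minimum acceptable score in any single dimension
PASSING_AVERAGE = 0.85  # minimum acceptable weighted average


def trust_score(scores: dict[str, float]) -> tuple[float, bool]:
    """Return (weighted average, passed) for per-dimension scores in [0, 1].

    A critical weakness in any one dimension overrides a high average,
    so security or testability problems are never masked.
    """
    weighted = sum(DIMENSION_WEIGHTS[dim] * scores[dim] for dim in DIMENSION_WEIGHTS)
    all_dimensions_ok = all(scores[dim] >= DIMENSION_FLOOR for dim in DIMENSION_WEIGHTS)
    return weighted, all_dimensions_ok and weighted >= PASSING_AVERAGE
```

With these assumed weights, a codebase scoring 0.9 everywhere except 0.2 on security fails the gate even though its weighted average alone might look acceptable.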
Basic Workflow Implementation
Debugging Workflow Process:
- Step 1: Capture the error with full context including stack trace, environment, and recent code changes
- Step 2: Classify the error type as syntax, runtime, logic, integration, or performance
- Step 3: Analyze the error pattern against known issue databases and best practices
- Step 4: Generate solution candidates ranked by likelihood of success
- Step 5: Apply the recommended fix and verify resolution
- Step 6: Document the issue and solution for future reference
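Step 2 of the debugging process can be sketched as a mapping from exception types to the five error categories. The mapping below is an illustrative assumption, not a definitive taxonomy:

```python
# Illustrative sketch of Step 2 (error classification). The mapping from
# exception types to categories is an assumption for demonstration only.

ERROR_CATEGORIES = {
    SyntaxError: "syntax",        # also covers IndentationError
    TypeError: "runtime",
    AttributeError: "runtime",
    AssertionError: "logic",
    ConnectionError: "integration",
    TimeoutError: "performance",
}


def classify_error(exc: BaseException) -> str:
    """Classify an exception as syntax, runtime, logic, integration, or performance."""
    for exc_type, category in ERROR_CATEGORIES.items():
        if isinstance(exc, exc_type):
            return category
    return "runtime"  # default bucket for unrecognized errors
```

A real implementation would fold in the stack trace and surrounding context rather than the exception type alone.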
Refactoring Workflow Process:
- Step 1: Analyze the target codebase for code smells and technical debt indicators
- Step 2: Calculate complexity metrics including cyclomatic complexity and coupling
- Step 3: Identify refactoring opportunities with associated risk levels
- Step 4: Generate a refactoring plan with prioritized actions
- Step 5: Apply refactoring transformations in safe increments
- Step 6: Verify behavior preservation through test execution
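Steps 3 and 4 of the refactoring process amount to ranking candidates by risk and debt. A minimal sketch, with hypothetical metrics and risk thresholds:

```python
# Sketch of Steps 3-4: rank refactoring candidates so that low-risk,
# high-debt items come first. The complexity cutoffs are hypothetical.
from dataclasses import dataclass


@dataclass
class RefactorCandidate:
    name: str
    cyclomatic_complexity: int
    duplication_pct: float  # percentage of duplicated lines


def risk_level(c: RefactorCandidate) -> str:
    """Hypothetical risk banding: higher complexity means riskier transformations."""
    if c.cyclomatic_complexity > 20:
        return "high"
    if c.cyclomatic_complexity > 10:
        return "medium"
    return "low"


def plan(candidates: list[RefactorCandidate]) -> list[tuple[str, str]]:
    """Return (name, risk) pairs, safest and highest-duplication first."""
    ordered = sorted(candidates,
                     key=lambda c: (risk_level(c) != "low", -c.duplication_pct))
    return [(c.name, risk_level(c)) for c in ordered]
```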
Performance Optimization Process:
- Step 1: Configure profiling for target metrics including CPU, memory, I/O, and network
- Step 2: Execute profiling runs under representative load conditions
- Step 3: Analyze profiling results to identify bottlenecks
- Step 4: Generate optimization recommendations with expected impact estimates
- Step 5: Apply optimizations in isolation to measure individual effects
- Step 6: Validate overall performance improvement
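For Python targets, Steps 2 and 3 can be done entirely with the standard library's cProfile and pstats, as the technology stack section below notes. The workload here is a toy stand-in for a representative load:

```python
# Minimal profiling run using only the standard library, illustrating
# Steps 2-3 above: execute under load, then inspect the hot spots.
import cProfile
import io
import pstats


def workload() -> int:
    # Toy stand-in for a representative load: repeated string building.
    return sum(len(str(i) * 10) for i in range(10_000))


profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Sort by cumulative time to surface the biggest bottlenecks first.
buffer = io.StringIO()
stats = pstats.Stats(profiler, stream=buffer)
stats.sort_stats("cumulative").print_stats(5)  # top 5 entries only
report = buffer.getvalue()
```

The resulting report lists call counts and cumulative times per function, which feeds directly into the bottleneck analysis of Step 3.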
DDD Testing Process:
Legacy Code Testing (PRESERVE Phase):
- Step 1: Write characterization tests that capture the current behavior of the system. These tests document what the code actually does, not what it should do.
- Step 2: Organize characterization tests by domain concepts and business rules. Group related behaviors to identify potential domain boundaries.
- Step 3: Use behavior snapshots to record input-output pairs for complex scenarios. These serve as regression safeguards during refactoring.
- Step 4: Verify that all characterization tests pass before making any changes. This establishes a baseline of current behavior.
- Step 5: Apply refactoring transformations while continuously running characterization tests to ensure behavior preservation.
- Step 6: After refactoring, run TRUST 5 validation to ensure code quality standards are maintained.
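A characterization test pins down what the code actually does today, including its quirks. A minimal pytest-style sketch, where `legacy_price` stands in for a hypothetical legacy function:

```python
# Characterization-test sketch (PRESERVE phase). `legacy_price` is a
# hypothetical legacy function whose behavior we capture as-is, quirks and all.
import json


def legacy_price(quantity: int, unit_price: float) -> float:
    # Legacy behavior being characterized: a 10% discount kicks in at 10
    # units, and the total is truncated (not rounded) to 2 decimals --
    # the tests document what the code DOES, not what it should do.
    total = quantity * unit_price
    if quantity >= 10:
        total *= 0.9
    return int(total * 100) / 100


def test_characterize_discount_boundary():
    # Snapshot the current behavior at the boundary; do not "fix" it here.
    assert legacy_price(9, 1.0) == 9.0
    assert legacy_price(10, 1.0) == 9.0  # discount applies exactly at 10


def test_behavior_snapshot_roundtrip():
    # Behavior snapshot: record input-output pairs as a regression safeguard.
    cases = [(1, 2.5), (10, 2.5), (100, 0.99)]
    snapshot = {str(case): legacy_price(*case) for case in cases}
    assert snapshot == json.loads(json.dumps(snapshot))
```

During Step 5, these tests run continuously; any change in recorded behavior fails the suite and halts the refactoring.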
Greenfield Development Testing:
- Step 1: Derive specification tests directly from domain requirements and use cases. Each test should express a business rule or domain invariant.
- Step 2: Organize tests by domain concepts (aggregates, entities, value objects) following DDD principles.
- Step 3: Write tests that specify domain behavior in business language, avoiding implementation details.
- Step 4: Implement domain logic to satisfy specification tests while maintaining ubiquitous language.
- Step 5: Verify behavior with integration tests that validate domain interactions and invariants.
- Step 6: Apply TRUST 5 validation to ensure testability, readability, and understandability of domain code.
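In contrast to characterization tests, specification tests state what the domain requires. A sketch assuming a hypothetical `Order` aggregate, with each test expressing one business rule:

```python
# Specification-test sketch for greenfield DDD. `Order` is a hypothetical
# aggregate; each test states one business rule in domain language.
from dataclasses import dataclass, field


@dataclass
class Order:
    lines: list[tuple[str, int]] = field(default_factory=list)  # (sku, quantity)

    def add_line(self, sku: str, quantity: int) -> None:
        # Domain invariant: an order line must have a positive quantity.
        if quantity <= 0:
            raise ValueError("order line quantity must be positive")
        self.lines.append((sku, quantity))

    @property
    def total_items(self) -> int:
        return sum(qty for _, qty in self.lines)


def test_order_rejects_non_positive_quantities():
    order = Order()
    try:
        order.add_line("SKU-1", 0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected the domain invariant to be enforced")


def test_order_totals_items_across_lines():
    order = Order()
    order.add_line("SKU-1", 2)
    order.add_line("SKU-2", 3)
    assert order.total_items == 5
```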
Code Review Process:
- Step 1: Scan the codebase to identify files requiring review
- Step 2: Apply TRUST 5 framework analysis to each file
- Step 3: Identify critical issues requiring immediate attention
- Step 4: Calculate per-file and aggregate quality scores
- Step 5: Generate actionable recommendations with priority rankings
- Step 6: Create a summary report with improvement roadmap
PR Code Review Process:
- Step 1: Eligibility Check using a Haiku agent to filter PRs, skipping closed PRs, drafts, PRs already reviewed, and trivial changes
- Step 2: Gather Context by finding CLAUDE.md files in modified directories and summarizing PR changes
- Step 3: Parallel Review Agents using five Sonnet agents for independent analysis covering CLAUDE.md compliance, obvious bugs, git blame context, previous comments, and code comment compliance
- Step 4: Confidence Scoring from 0 to 100 for each detected issue, where 0 indicates a false positive, 25 somewhat confident, 50 moderately confident, 75 highly confident, and 100 absolutely certain
- Step 5: Filter and Report by removing issues below the 80-point confidence threshold and posting the remainder via the gh CLI
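Steps 4 and 5 reduce to a filter over scored issues followed by report formatting. The issue dictionary shape and field names below are illustrative assumptions; the 80-point threshold follows the text:

```python
# Sketch of Steps 4-5: keep only high-confidence issues before posting.
# The issue-dictionary field names are illustrative assumptions; the
# threshold value comes from the process description.

CONFIDENCE_THRESHOLD = 80


def filter_issues(issues: list[dict]) -> list[dict]:
    """Drop issues scored below the confidence threshold to reduce noise."""
    return [issue for issue in issues if issue["confidence"] >= CONFIDENCE_THRESHOLD]


def format_report(issues: list[dict]) -> str:
    """Render surviving issues as the markdown body posted via the gh CLI."""
    lines = [f"Code review: found {len(issues)} issue(s)", ""]
    for i, issue in enumerate(issues, start=1):
        lines.append(f"{i}. {issue['description']} ({issue['file']}:{issue['line']})")
    return "\n".join(lines)
```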
Common Use Cases
Enterprise Development Workflow: For enterprise applications, the workflow integrates quality gates at each stage. Before deployment, the code must pass minimum TRUST score thresholds, have zero critical issues identified, and meet required test coverage percentages. The quality gates configuration specifies minimum trust scores (typically 0.85), maximum allowed critical issues (typically zero), and required coverage levels (typically 80 percent).
Performance-Critical Applications: For performance-sensitive systems, the workflow emphasizes profiling and optimization stages. Performance thresholds define maximum acceptable response times, memory usage limits, and minimum throughput requirements. The workflow provides percentage improvement tracking and specific optimization recommendations.
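The enterprise quality gate reduces to a predicate over the workflow results. A minimal sketch using the typical thresholds named above; the field names are illustrative:

```python
# Sketch of the enterprise quality gate described above, using the typical
# thresholds named in the text (0.85 trust score, zero critical issues,
# 80 percent coverage). Field names are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class GateConfig:
    min_trust_score: float = 0.85
    max_critical_issues: int = 0
    min_coverage_pct: float = 80.0


def passes_gate(trust_score: float, critical_issues: int, coverage_pct: float,
                config: GateConfig = GateConfig()) -> bool:
    """Return True only if every deployment criterion is met."""
    return (
        trust_score >= config.min_trust_score
        and critical_issues <= config.max_critical_issues
        and coverage_pct >= config.min_coverage_pct
    )
```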
Advanced Features
Workflow Integration Patterns
Continuous Integration Integration: The workflow integrates with CI/CD pipelines through a multi-stage validation process.
Stage 1 - Code Quality Validation: Run the code review component and verify results meet quality standards. If the quality check fails, the pipeline terminates with a quality failure report.
Stage 2 - Testing Validation: Execute the full test suite including unit, integration, and end-to-end tests. If any tests fail, the pipeline terminates with a test failure report.
Stage 3 - Performance Validation: Run performance tests and compare results against defined thresholds. If performance standards are not met, the pipeline terminates with a performance failure report.
Stage 4 - Security Validation: Execute security analysis including static analysis and dependency scanning. If critical vulnerabilities are found, the pipeline terminates with a security failure report.
Upon passing all stages, the pipeline generates a success report and proceeds to deployment.
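The four validation stages form a fail-fast sequence: each stage either passes or terminates the pipeline with its failure report. A sketch with hypothetical stage callables:

```python
# Fail-fast sketch of the multi-stage CI/CD validation described above.
# Each stage is a hypothetical callable returning (passed, report).
from typing import Callable

Stage = Callable[[], tuple[bool, str]]


def run_pipeline(stages: list[tuple[str, Stage]]) -> tuple[bool, str]:
    """Run stages in order; stop with a failure report on the first failure."""
    for name, stage in stages:
        passed, report = stage()
        if not passed:
            return False, f"{name} failed: {report}"
    return True, "all stages passed; proceeding to deployment"
```

In practice each stage would wrap the corresponding tool invocation (code review, test suite, benchmarks, security scanners) rather than a lambda.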
Quality Gate Configuration
Quality gates define the criteria that must be met at each workflow stage. Gates can be configured with different strictness levels.
Strict Mode: All quality dimensions must meet or exceed thresholds. Any critical issue blocks progression. Full test coverage requirements apply.
Standard Mode: Average quality score must meet threshold. Critical issues block progression, but warnings are allowed. Standard coverage requirements apply.
Lenient Mode: Only critical blocking issues prevent progression. Quality scores generate warnings but do not block. Reduced coverage requirements apply.
Gate configuration includes threshold values for each TRUST dimension, maximum allowed issues by severity, required test coverage levels, and performance benchmark targets.
Multi-Language Support
The workflow supports development across multiple programming languages with language-specific adaptations.
Python Projects: Integration with pytest for testing, pylint and flake8 for static analysis, bandit for security scanning, and cProfile or memory_profiler for performance analysis.
JavaScript and TypeScript Projects: Integration with Jest or Vitest for testing, ESLint for static analysis, npm audit for security scanning, and Chrome DevTools or Lighthouse for performance analysis.
Go Projects: Integration with go test for testing, golint and staticcheck for static analysis, gosec for security scanning, and pprof for performance analysis.
Rust Projects: Integration with cargo test for testing, clippy for static analysis, cargo audit for security scanning, and flamegraph for performance analysis.
PR Code Review Multi-Agent Pattern
The PR Code Review process uses a multi-agent architecture following the official Claude Code plugin pattern.
Eligibility Check Agent using Haiku: The Haiku agent performs lightweight filtering to avoid unnecessary reviews. It checks the PR state and metadata to determine if review is warranted. Skip conditions include closed PRs, draft PRs, PRs already reviewed by bot, trivial changes like typo fixes, and automated dependency updates.
Context Gathering: Before launching review agents, the system gathers relevant context by finding CLAUDE.md files in directories containing modified code to understand project-specific coding standards. It also generates a concise summary of PR changes including files modified, lines added or removed, and overall impact assessment.
Parallel Review Agents using five Sonnet instances: Five Sonnet agents run in parallel, each focusing on a specific review dimension. Agent 1 audits CLAUDE.md compliance checking for violations of documented coding standards and conventions. Agent 2 scans for obvious bugs including logic errors, null reference risks, and resource leaks. Agent 3 provides git blame and history context to identify recent changes and potential patterns. Agent 4 checks previous PR comments for recurring issues and unresolved feedback. Agent 5 validates code comment compliance ensuring comments are accurate and helpful.
Confidence Scoring System: Each detected issue receives a confidence score from 0 to 100. A score of 0 indicates a false positive; 25 means somewhat confident but possibly real; 50 means moderately confident the issue is real but minor; 75 means highly confident the issue is very likely real; and 100 means absolutely certain the issue is real.
Filter and Report Stage: Issues below the 80 confidence threshold are filtered out to reduce noise. Remaining issues are formatted and posted to the PR using the GitHub CLI. The output format follows a standardized markdown structure with issue count, numbered list of issues with descriptions, and direct links to code with specific commit SHA and line range.
Example PR Review Output: The review output begins with a Code review header, followed by the count of found issues. Each issue is numbered and includes a description of the problem with reference to the relevant CLAUDE.md rule, followed by a link to the specific file, line range, and commit SHA in the pull request.
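The described structure might look like the following skeleton; the bracketed placeholders stand in for real descriptions and links, which are not reproduced here:

```markdown
### Code review

Found 2 issues:

1. <description of the problem, citing the relevant CLAUDE.md rule>
   <link to the file and line range at a specific commit SHA>
2. <description of the second issue>
   <link to the file and line range at a specific commit SHA>
```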
Works Well With
- moai-domain-backend: Backend development workflows and API testing patterns
- moai-domain-frontend: Frontend development workflows and UI testing strategies
- moai-foundation-core: Core SPEC system and workflow management integration
- moai-platform-supabase: Supabase-specific testing patterns and database testing
- moai-platform-vercel: Vercel deployment testing and edge function validation
- moai-platform-firebase-auth: Firebase authentication testing patterns
- moai-workflow-project: Project management and documentation workflows
Technology Stack Reference
The workflow leverages industry-standard tools for each capability area.
Analysis Libraries: cProfile provides Python profiling and performance analysis. memory_profiler enables memory usage analysis and optimization. psutil supports system resource monitoring. line_profiler offers line-by-line performance profiling.
Static Analysis Tools: pylint performs comprehensive code analysis and quality checks. flake8 enforces style guide compliance and error detection. bandit scans for security vulnerabilities. mypy validates static types.
Testing Frameworks: pytest provides advanced testing with fixtures and plugins. unittest offers standard library testing capabilities. coverage measures code coverage and identifies untested paths.
Integration Patterns
GitHub Actions Integration
The workflow integrates with GitHub Actions through a multi-step job configuration.
Job Configuration Steps:
- Step 1: Check out the repository using actions/checkout
- Step 2: Set up the Python environment using actions/setup-python with the target Python version
- Step 3: Install project dependencies including testing and analysis tools
- Step 4: Execute the quality validation workflow with strict quality gates
- Step 5: Run the test suite with coverage reporting
- Step 6: Perform performance benchmarking against baseline metrics
- Step 7: Execute security scanning and vulnerability detection
- Step 8: Upload workflow results as job artifacts for review
The job can be configured to run on push and pull request events, with matrix testing across multiple Python versions if needed.
Docker Integration
For containerized environments, the workflow executes within Docker containers.
Container Configuration: Base the image on a Python slim variant for minimal size. Install project dependencies from requirements file. Copy project source code into the container. Configure entrypoint to execute the complete workflow sequence. Mount volumes for result output if persistent storage is needed.
The containerized workflow ensures consistent execution environments across development, testing, and production systems.
Status: Production Ready
Last Updated: 2026-01-21
Maintained by: MoAI-ADK Development Workflow Team
Version: 2.4.0 (DDD Testing Methodology)