ln-512-manual-tester

Manual Tester


Manually verifies Story AC on running code and reports structured results for the quality gate.

Purpose & Scope


  • Create executable test scripts in the tests/manual/ folder of the target project.
  • Run AC-driven checks via bash/curl (API) or puppeteer (UI).
  • Save scripts permanently for regression testing (not as temp files).
  • Document results in Linear with pass/fail per AC and the script path.
  • No status changes or task creation.

When to Use

  • Invoked by ln-510-test-planner after ln-511-test-researcher completes
  • Research comment "## Test Research:" exists on Story (from ln-511)
  • All implementation tasks in Story status = Done

Test Design Principles


1. Fail-Fast - No Silent Failures


CRITICAL: Tests MUST return 1 (fail) immediately when any criterion is not met.
Never use print_status "WARN" + return 0 for validation failures, graceful degradation without explicit flags, or silent fallbacks that hide errors.
Exceptions (WARN is OK): informational warnings that don't affect correctness, optional features (with clear justification in comments), and infrastructure issues (e.g., missing Nginx in a dev environment).
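A minimal sketch of the fail-fast pattern. The print_status helper is stubbed here with an assumed signature so the snippet runs standalone; the wrong and right variants are illustrative, not taken from a real test suite:

```shell
#!/bin/bash
# Stub of the print_status helper from config.sh (assumed signature).
print_status() { echo "[$1] $2"; }

# WRONG: downgrades a real failure to a warning and reports success.
check_status_bad() {
    print_status "WARN" "status field missing"
    return 0
}

# RIGHT: any unmet criterion fails the test immediately.
check_status_good() {
    local status="$1"
    if [ "$status" != "ok" ]; then
        print_status "FAIL" "expected status 'ok', got '$status'"
        return 1
    fi
    print_status "PASS" "status is ok"
    return 0
}

check_status_good "ok"
```

The caller (for example test-all.sh) can then rely on the exit code alone to count passes and failures.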

2. Expected-Based Testing - The Golden Standard


CRITICAL: Tests MUST compare actual results against expected reference files, not apply heuristics or algorithmic checks.
Directory structure:
tests/manual/NN-feature/
├── samples/               # Input files
├── expected/              # Expected output files (REQUIRED!)
│   └── {base_name}_{source_lang}-{target_lang}.{ext}
└── test-*.sh
Heuristics are acceptable ONLY for dynamic/non-deterministic data: timestamps, UUIDs, and tokens (normalize before comparison), and JSON with unordered keys (use jq --sort-keys).
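A sketch of diff-based validation with normalization of dynamic data. The file contents and paths are made up for the example; a real test would diff actual API output against a file in expected/ (and would pipe unordered JSON through `jq --sort-keys` the same way):

```shell
#!/bin/bash
# Diff actual output against an expected reference file, normalizing
# non-deterministic fields (here: ISO timestamps) before comparison.
set -e
workdir=$(mktemp -d)

# Simulated actual output and expected reference.
printf 'id=42\ncreated_at=2026-01-15T10:03:22Z\nstatus=done\n' > "$workdir/actual.txt"
printf 'id=42\ncreated_at=<TIMESTAMP>\nstatus=done\n'          > "$workdir/expected.txt"

# Replace timestamps with a stable placeholder; UUIDs and tokens
# would get the same treatment with their own patterns.
normalize() {
    sed -E 's/[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9:]{8}Z/<TIMESTAMP>/g' "$1"
}

if diff <(normalize "$workdir/actual.txt") "$workdir/expected.txt" >/dev/null; then
    echo "PASS: output matches expected"
else
    echo "FAIL: output differs from expected"
    exit 1
fi
```

Because the expected file stores the placeholder rather than a concrete timestamp, the test stays deterministic across reruns.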

3. Results Storage


Test results are saved to tests/manual/results/ (persistent, listed in .gitignore). Naming: result_{ac_name}.{ext} or response_{ac_name}.json. Results remain inspectable after test completion for debugging.

4. Expected File Generation


To create expected files:
  1. Run the test with the current implementation
  2. Review the output in the results/ folder
  3. If correct: copy it to the expected/ folder with proper naming
  4. If incorrect: fix the implementation first, then copy
IMPORTANT: Never blindly copy results to expected. Always validate correctness first.
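The promotion step above can be sketched as follows. The feature folder and file names are hypothetical examples of the naming convention; the human-review step (3) is the part that cannot be automated:

```shell
#!/bin/bash
# Sketch: promote a reviewed test result to an expected reference file.
set -e
base=$(mktemp -d)   # stands in for the project root
mkdir -p "$base/tests/manual/results" "$base/tests/manual/01-export/expected"

# Steps 1-2: a test run wrote this output and a human verified it is correct.
echo '{"status":"done"}' > "$base/tests/manual/results/result_export_csv.json"

# Step 3: copy into expected/ using {base_name}_{source_lang}-{target_lang}.{ext}.
cp "$base/tests/manual/results/result_export_csv.json" \
   "$base/tests/manual/01-export/expected/report_en-zh.json"

ls "$base/tests/manual/01-export/expected/"
```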

Workflow


Phase 1: Setup tests/manual structure


  1. Read docs/project/runbook.md — get Docker commands, API base URL, test prerequisites, environment setup
  2. Check whether the tests/manual/ folder exists in the project root
  3. If missing, create the structure:
    • tests/manual/config.sh — shared configuration (BASE_URL, helpers, colors)
    • tests/manual/README.md — folder documentation (see README.md template below)
    • tests/manual/test-all.sh — master script to run all test suites (see test-all.sh template below)
    • tests/manual/results/ — folder for test outputs (add to .gitignore)
  4. Add tests/manual/results/ to the project .gitignore if not present
  5. If the folder exists, read the existing config.sh to reuse settings (BASE_URL, tokens)
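Steps 2-4 can be sketched as an idempotent scaffold (file contents here are placeholders; a temp directory stands in for the project root):

```shell
#!/bin/bash
# Sketch of Phase 1: scaffold tests/manual/ if missing, then ensure
# results/ is git-ignored. Safe to rerun.
set -e
root=$(mktemp -d)   # stands in for the project root
cd "$root"

if [ ! -d tests/manual ]; then
    mkdir -p tests/manual/results
    printf '#!/bin/bash\nexport BASE_URL="${BASE_URL:-http://localhost:8080}"\n' > tests/manual/config.sh
    printf '# Manual Testing Scripts\n' > tests/manual/README.md
    printf '#!/bin/bash\nSUITES=()\n' > tests/manual/test-all.sh
    chmod +x tests/manual/test-all.sh
fi

# Step 4: append the ignore entry only if it is not already present.
grep -qxF 'tests/manual/results/' .gitignore 2>/dev/null \
    || echo 'tests/manual/results/' >> .gitignore

cat .gitignore
```

The `grep -qxF || echo` guard is what makes step 4 idempotent: rerunning the scaffold never duplicates the .gitignore entry.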

Phase 2: Create Story test script


  1. Fetch the Story, parse its AC into a Given/When/Then list (3-5 expected)
    • Check for the research comment (from ln-511-test-researcher) — incorporate findings into test cases
  2. Detect API vs UI (API → curl, UI → puppeteer)
  3. Create the test folder structure:
    • tests/manual/{NN}-{story-slug}/samples/ — input files (if needed)
    • tests/manual/{NN}-{story-slug}/expected/ — expected output files (REQUIRED for deterministic tests)
  4. Generate the test script: tests/manual/{NN}-{story-slug}/test-{story-slug}.sh
    • Use the appropriate template: TEMPLATE-api-endpoint.sh (direct calls) or TEMPLATE-document-format.sh (async jobs)
    • Header: Story ID, AC list, prerequisites
    • One test function per AC, plus edge/error cases
    • diff-based validation against expected files (PRIMARY)
    • Results saved to tests/manual/results/
    • Summary table with timing
  5. Make the script executable (chmod +x)
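The one-function-per-AC shape can be sketched like this. The AC name is hypothetical and the check is simulated; in a real script the function would curl the API and diff against expected/ as described in Principle 2, and print_status would come from config.sh:

```shell
#!/bin/bash
# Sketch: one test function per AC, with a pass/fail summary.
print_status() { echo "[$1] $2"; }   # stub of the config.sh helper

test_ac1_health_endpoint() {
    # Given the API is up, When GET /health, Then HTTP 200 (simulated here).
    local code="200"
    [ "$code" = "200" ] || { print_status "FAIL" "AC1: expected 200, got $code"; return 1; }
    print_status "PASS" "AC1: health endpoint returns 200"
}

PASSED=0; FAILED=0
for test_fn in test_ac1_health_endpoint; do
    if "$test_fn"; then ((++PASSED)); else ((++FAILED)); fi
done
echo "Summary: $PASSED passed, $FAILED failed"
```

Keeping one function per AC makes the Linear AC matrix a direct readout of which functions returned nonzero.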

Phase 2.5: Update Documentation


  1. Update tests/manual/README.md:
    • Add the new test to the "Available Test Suites" table
    • Include Story ID, AC covered, and the run command
  2. Update tests/manual/test-all.sh:
    • Add a call to the new script in the SUITES array
    • Maintain execution order (00-setup first, then numbered suites)

Phase 3: Execute and report


  1. Rebuild Docker containers (no cache), ensure they are healthy
  2. Run the generated script, capture its output
  3. Parse results (pass/fail counts)
  4. Post a Linear comment with:
    • AC matrix (pass/fail per AC)
    • Script path: tests/manual/{NN}-{story-slug}/test-{story-slug}.sh
    • Rerun command: cd tests/manual && ./{NN}-{story-slug}/test-{story-slug}.sh
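An illustrative shape for the Linear comment (the Story ID, suite number, and AC descriptions are hypothetical):

```markdown
## Manual Test Results: PROJ-123

| AC  | Description                      | Result |
|-----|----------------------------------|--------|
| AC1 | Health endpoint returns 200      | PASS   |
| AC2 | Invalid token rejected with 401  | PASS   |

Script: tests/manual/03-auth/test-auth.sh
Rerun:  cd tests/manual && ./03-auth/test-auth.sh
```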

Critical Rules


  • Scripts are saved to the project's tests/manual/, NOT to temp files.
  • Rebuild Docker before testing; fail if the rebuild or containers are unhealthy.
  • Keep the language of the Story (EN/RU) in script comments and the Linear comment.
  • No fixes or status changes; only evidence and verdict.
  • Scripts must be idempotent (safe to rerun anytime).

Definition of Done


  • tests/manual/ structure exists (config.sh, README.md, test-all.sh, results/ created if missing).
  • tests/manual/results/ added to the project .gitignore.
  • Test script created at tests/manual/{NN}-{story-slug}/test-{story-slug}.sh.
  • expected/ folder created with at least 1 expected file per deterministic AC.
  • Script uses diff-based validation against expected files (not heuristics).
  • Script saves results to tests/manual/results/ for debugging.
  • Script is executable and idempotent.
  • README.md updated with the new test suite in the "Available Test Suites" table.
  • test-all.sh updated with a call to the new script in the SUITES array.
  • App rebuilt and running; tests executed.
  • Verdict posted as a Linear comment with script path and rerun command.

Script Templates


README.md (created once per project)


```markdown
# Manual Testing Scripts

SCOPE: Bash scripts for manual API testing. Complements automated tests with CLI-based workflows.

## Quick Start

    cd tests/manual
    ./00-setup/create-account.sh  # (if auth required)
    ./test-all.sh                 # Run ALL test suites

## Prerequisites

- Docker containers running (`docker compose ps`)
- jq installed (`apt-get install jq` or `brew install jq`)

## Folder Structure

    tests/manual/
    ├── config.sh          # Shared configuration (BASE_URL, helpers, colors)
    ├── README.md          # This file
    ├── test-all.sh        # Run all test suites
    ├── 00-setup/          # Account & token setup (if auth required)
    │   ├── create-account.sh
    │   └── get-token.sh
    └── {NN}-{topic}/      # Test suites by Story
        └── test-{slug}.sh

## Available Test Suites

<!-- Add new test suites here when creating new tests -->
| Suite | Story | AC Covered | Run Command |
|-------|-------|------------|-------------|

## Adding New Tests

1. Create the script in `{NN}-{topic}/test-{slug}.sh`
2. Update this README (Available Test Suites table)
3. Update `test-all.sh` (add to SUITES array)
```

test-all.sh (created once per project)


```bash
#!/bin/bash
# =============================================================================
# Run all manual test suites
# =============================================================================
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/config.sh"

echo "=========================================="
echo "Running ALL Manual Test Suites"
echo "=========================================="

check_jq
check_api

# Setup (if exists). Use `if` rather than `&&` so a missing optional
# script does not abort the run under `set -e`.
if [ -f "$SCRIPT_DIR/00-setup/create-account.sh" ]; then "$SCRIPT_DIR/00-setup/create-account.sh"; fi
if [ -f "$SCRIPT_DIR/00-setup/get-token.sh" ]; then "$SCRIPT_DIR/00-setup/get-token.sh"; fi

# Test suites (add new suites here)
SUITES=(
    # "01-auth/test-auth-flow.sh"
    # "02-translation/test-translation.sh"
)

PASSED=0; FAILED=0
for suite in "${SUITES[@]}"; do
    echo ""
    echo "=========================================="
    echo "Running: $suite"
    echo "=========================================="
    if "$SCRIPT_DIR/$suite"; then
        ((++PASSED))
        print_status "PASS" "$suite"
    else
        ((++FAILED))
        print_status "FAIL" "$suite"
    fi
done

echo ""
echo "=========================================="
echo "TOTAL: $PASSED suites passed, $FAILED failed"
echo "=========================================="
[ $FAILED -eq 0 ] && exit 0 || exit 1
```

config.sh (created once per project)


```bash
#!/bin/bash
# Shared configuration for manual testing scripts

export BASE_URL="${BASE_URL:-http://localhost:8080}"
export RED='\033[0;31m'
export GREEN='\033[0;32m'
export YELLOW='\033[1;33m'
export NC='\033[0m'

print_status() {
    local status=$1
    local message=$2
    case $status in
        "PASS") echo -e "${GREEN}[PASS]${NC} $message" ;;
        "FAIL") echo -e "${RED}[FAIL]${NC} $message" ;;
        "WARN") echo -e "${YELLOW}[WARN]${NC} $message" ;;
        "INFO") echo -e "[INFO] $message" ;;
    esac
}

check_jq() {
    command -v jq &> /dev/null || { echo "Error: jq required"; exit 1; }
}

check_api() {
    local response
    response=$(curl -s -o /dev/null -w "%{http_code}" "$BASE_URL/health" 2>/dev/null)
    if [ "$response" != "200" ]; then
        echo "Error: API not reachable at $BASE_URL"
        exit 1
    fi
    print_status "INFO" "API reachable at $BASE_URL"
}

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
export SCRIPT_DIR
```

Test Script Templates


See: references/templates/

| Template | Use Case | Location |
|----------|----------|----------|
| template-api-endpoint.sh | API endpoint tests (NO async jobs) | template-api-endpoint.sh |
| template-document-format.sh | Document/file processing (WITH async jobs) | template-document-format.sh |

Quick start:

```bash
cp references/templates/template-api-endpoint.sh {NN}-feature/test-{feature}.sh      # Endpoint tests
cp references/templates/template-document-format.sh {NN}-feature/test-{format}.sh    # Document tests
```

Reference Files


  • Script format reference: prompsit-api tests/manual/ (production example)
  • AC format: shared/templates/test_task_template.md (or local docs/templates/ in the target project)
  • Risk-based context: ln-513-auto-test-planner/references/risk_based_testing_guide.md
  • Research findings: ln-511-test-researcher creates a "## Test Research" comment on the Story

Version: 1.0.0 (renamed from ln-503; Phase 0 Research moved to ln-511-test-researcher)
Last Updated: 2026-01-15