add-test

You are helping a user create new Robot Framework test cases in this SnapLogic pipeline testing project. Follow these conventions and patterns.

Quick Start Template


Here's a basic test file template to get started:
```robotframework
*** Settings ***
Documentation    Description of what this test suite covers
...              Include prerequisites and dependencies
Resource         ../../resources/common/general.resource
Resource         ../../resources/common/database.resource
Library          Collections
Library          JSONLibrary

Suite Setup      Test Suite Setup
Suite Teardown   Test Suite Teardown

*** Variables ***
${PIPELINE_NAME}       my_pipeline
${UNIQUE_ID}           ${EMPTY}

*** Test Cases ***
Test Pipeline Executes Successfully
    [Documentation]    Verify the pipeline completes without errors
    [Tags]    oracle    smoke    pipeline_name
    [Setup]    Prepare Test Data

    # Given
    ${unique_id}=    Get Unique Id
    Set Suite Variable    ${UNIQUE_ID}    ${unique_id}

    # When
    Upload Pipeline    ${PIPELINE_NAME}_${unique_id}
    Execute Pipeline    ${PIPELINE_NAME}_${unique_id}

    # Then
    ${status}=    Get Pipeline Status    ${PIPELINE_NAME}_${unique_id}
    Should Be Equal    ${status}    Completed

    [Teardown]    Cleanup Test Pipeline

*** Keywords ***
Test Suite Setup
    Log To Console    \nInitializing test suite...
    Validate Required Variables

Test Suite Teardown
    Log To Console    \nCleaning up test suite...
    Run Keyword And Ignore Error    Delete All Test Artifacts

Prepare Test Data
    Log    Preparing test data for ${PIPELINE_NAME}

Cleanup Test Pipeline
    Run Keyword And Ignore Error    Delete Pipeline    ${PIPELINE_NAME}_${UNIQUE_ID}

Validate Required Variables
    Variable Should Exist    ${URL}
    Variable Should Exist    ${ORG_NAME}
```
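The template relies on `Get Unique Id` (from the common resources) to keep pipeline copies from colliding. As an illustrative sketch only — the project's real keyword may use a different scheme — one common way to build such an id is a timestamp plus a short random suffix:

```python
import time
import uuid


def get_unique_id() -> str:
    """Return a run-unique suffix: epoch seconds plus a short random hex part.

    Illustrative only -- the project's real `Get Unique Id` keyword may
    generate ids differently.
    """
    return f"{int(time.time())}_{uuid.uuid4().hex[:8]}"


# Two calls yield distinct ids, so pipeline names never collide across runs.
print(get_unique_id())
print(get_unique_id())
```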

Step-by-Step: Creating a New Test


Step 1: Choose the Right Location


```
test/suite/pipeline_tests/
├── oracle/           # Oracle database tests
├── postgres/         # PostgreSQL tests
├── snowflake/        # Snowflake tests
├── kafka/            # Kafka messaging tests
├── salesforce/       # Salesforce mock tests
└── <new_system>/     # Create new folder if needed
```

Step 2: Create the Test File


Naming Convention: <feature>_<system>.robot

Examples:
  • data_load_oracle.robot
  • message_processing_kafka.robot
  • api_sync_salesforce.robot
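The convention can be checked mechanically before committing. A small Python sketch — the regular expression below is inferred from the examples above, not taken from project tooling:

```python
import re

# <feature>_<system>.robot: lowercase tokens separated by underscores,
# with at least one underscore (feature itself may contain several).
NAME_PATTERN = re.compile(r"^[a-z][a-z0-9]*(?:_[a-z0-9]+)+\.robot$")


def follows_convention(filename: str) -> bool:
    """Return True if the file name matches <feature>_<system>.robot."""
    return bool(NAME_PATTERN.match(filename))


for name in ("data_load_oracle.robot", "message_processing_kafka.robot", "BadName.robot"):
    print(name, follows_convention(name))
```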

Step 3: Add Required Settings


```robotframework
*** Settings ***
Documentation    Clear description of the test suite
...              - What pipelines are tested
...              - Prerequisites (services, data)
...              - Expected outcomes

# Import common resources
Resource         ../../resources/common/general.resource
Resource         ../../resources/common/files.resource

# System-specific resources (if any)
Resource         ../../resources/kafka/kafka.resource

# Required libraries
Library          Collections
Library          JSONLibrary
Library          OperatingSystem
```

Step 4: Define Variables


```robotframework
*** Variables ***
# Test-specific constants
${PIPELINE_NAME}            my_test_pipeline
${EXPECTED_RECORD_COUNT}    100

# Paths (relative to test execution)
${TEST_DATA_PATH}           ${CURDIR}/../test_data

# Timeouts
${PIPELINE_TIMEOUT}         300s
${DB_TIMEOUT}               60s

# Lists and dictionaries
@{EXPECTED_COLUMNS}         id    name    value    timestamp
&{CONNECTION_CONFIG}        host=localhost    port=5432
```

Step 5: Write Test Cases


```robotframework
*** Test Cases ***
Test Data Load From Source To Target
    [Documentation]    Verifies end-to-end data load from Oracle to Snowflake
    ...                Prerequisites:
    ...                - Oracle container running with test data
    ...                - Snowflake mock container running
    [Tags]    oracle    snowflake    data_load    regression
    [Timeout]    ${PIPELINE_TIMEOUT}

    # Setup
    ${unique_id}=    Get Unique Id

    # Given source data exists
    ${source_count}=    Get Oracle Record Count    source_table
    Should Be True    ${source_count} > 0

    # When pipeline is executed
    Upload And Execute Pipeline    data_load_${unique_id}
    Wait Until Pipeline Completes    data_load_${unique_id}    timeout=300

    # Then data appears in target
    ${target_count}=    Get Snowflake Record Count    target_table
    Should Be Equal As Numbers    ${source_count}    ${target_count}

    [Teardown]    Cleanup Pipeline And Data    data_load_${unique_id}

Test Error Handling For Invalid Data
    [Documentation]    Verifies pipeline handles invalid data gracefully
    [Tags]    oracle    error_handling    negative

    # Given invalid source data
    Insert Invalid Test Record    source_table

    # When pipeline is executed
    ${status}=    Execute Pipeline And Get Status    error_test_pipeline

    # Then pipeline handles error appropriately
    Should Be Equal    ${status}    Failed
    ${error_log}=    Get Pipeline Error Log
    Should Contain    ${error_log}    Invalid data format
```

Step 6: Implement Keywords


```robotframework
*** Keywords ***
Upload And Execute Pipeline
    [Documentation]    Uploads pipeline and starts execution
    [Arguments]    ${pipeline_name}

    ${pipeline_path}=    Set Variable    ${pipeline_payload_path}/${pipeline_name}.slp
    Upload Pipeline    ${pipeline_path}    ${PROJECT_SPACE}/${PROJECT_NAME}

    ${snode_id}=    Get Pipeline Snode Id    ${pipeline_name}
    Set Test Variable    ${PIPELINE_SNODE_ID}    ${snode_id}

    Execute Pipeline Api    ${snode_id}    ${GROUNDPLEX_NAME}

Wait Until Pipeline Completes
    [Documentation]    Waits for pipeline to complete with timeout
    [Arguments]    ${pipeline_name}    ${timeout}=300

    ${status}=    Set Variable    Running
    ${end_time}=    Evaluate    time.time() + ${timeout}    time

    WHILE    '${status}' == 'Running'
        ${status}=    Get Pipeline Status    ${pipeline_name}
        ${current_time}=    Evaluate    time.time()    time
        IF    ${current_time} > ${end_time}
            Fail    Pipeline timeout after ${timeout} seconds
        END
        Sleep    5s
    END

    RETURN    ${status}

Cleanup Pipeline And Data
    [Documentation]    Cleans up test artifacts
    [Arguments]    ${pipeline_name}

    Run Keyword And Ignore Error    Delete Pipeline By Name    ${pipeline_name}
    Run Keyword And Ignore Error    Delete Test Data    ${pipeline_name}
```
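`Wait Until Pipeline Completes` is a classic poll-with-deadline loop. The same control flow in Python, with a stub standing in for `Get Pipeline Status` (the stub is purely illustrative, not a real SnapLogic call):

```python
import time


def wait_until_complete(get_status, timeout=300.0, poll_interval=5.0):
    """Poll get_status() until it stops returning 'Running' or the deadline passes.

    Mirrors the Robot Framework WHILE loop: check status, fail once the
    deadline is exceeded, and sleep between polls (skipping the final sleep
    once a terminal status arrives). Raises TimeoutError instead of Fail.
    """
    deadline = time.time() + timeout
    status = "Running"
    while status == "Running":
        status = get_status()
        if time.time() > deadline:
            raise TimeoutError(f"Pipeline timeout after {timeout} seconds")
        if status == "Running":
            time.sleep(poll_interval)
    return status


# Stub: reports 'Running' twice, then 'Completed'.
_polls = iter(["Running", "Running", "Completed"])
print(wait_until_complete(lambda: next(_polls), timeout=10, poll_interval=0.01))
```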

Tag Guidelines


Required Tags


Every test should have:
  1. System tag: oracle, postgres, kafka, etc.
  2. Test type tag: smoke, regression, negative
  3. Feature tag: data_load, transformation, api_sync
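This three-part rule is easy to enforce mechanically. A Python sketch — the category sets below are taken from the examples in this guide and are not an exhaustive project list:

```python
# Known tags per required category (illustrative, from this guide's examples).
SYSTEM_TAGS = {"oracle", "postgres", "snowflake", "kafka", "salesforce"}
TYPE_TAGS = {"smoke", "regression", "negative"}
FEATURE_TAGS = {"data_load", "transformation", "api_sync"}


def missing_tag_categories(tags):
    """Return the names of required categories not covered by the given tags."""
    tag_set = set(tags)
    missing = []
    if not tag_set & SYSTEM_TAGS:
        missing.append("system")
    if not tag_set & TYPE_TAGS:
        missing.append("test type")
    if not tag_set & FEATURE_TAGS:
        missing.append("feature")
    return missing


print(missing_tag_categories(["oracle", "smoke", "data_load"]))  # fully tagged
print(missing_tag_categories(["oracle", "smoke"]))               # feature tag missing
```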

Tag Examples


```robotframework
# Smoke test for Oracle data load
[Tags]    oracle    smoke    data_load

# Regression test for Kafka with error handling
[Tags]    kafka    regression    error_handling    messaging

# Integration test spanning multiple systems
[Tags]    oracle    snowflake    integration    etl
```

Common Patterns


Pattern 1: Setup-Execute-Verify-Cleanup


```robotframework
Test Pattern Example
    [Setup]    Initialize Test Environment

    # Arrange
    Prepare Source Data

    # Act
    Execute Pipeline

    # Assert
    Verify Target Data

    [Teardown]    Cleanup All Test Data
```

Pattern 2: Data-Driven Tests


```robotframework
*** Test Cases ***
Test Multiple Data Scenarios
    [Template]    Execute And Verify Pipeline
    # input_file    expected_count    expected_status
    small_data.csv     100    Completed
    medium_data.csv    1000   Completed
    empty_data.csv     0      Completed
    invalid_data.csv   0      Failed

*** Keywords ***
Execute And Verify Pipeline
    [Arguments]    ${input_file}    ${expected_count}    ${expected_status}
    Load Test Data    ${input_file}
    ${status}=    Execute Pipeline    data_processor
    Should Be Equal    ${status}    ${expected_status}
    ${count}=    Get Output Record Count
    Should Be Equal As Numbers    ${count}    ${expected_count}
```
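The `[Template]` mechanism invokes the keyword once per data row. The equivalent control flow in Python, with a stub standing in for pipeline execution (the stubbed results are illustrative, mirroring the table above):

```python
# Each row mirrors a line of the Robot Framework template table:
# (input_file, expected_count, expected_status)
SCENARIOS = [
    ("small_data.csv", 100, "Completed"),
    ("medium_data.csv", 1000, "Completed"),
    ("empty_data.csv", 0, "Completed"),
    ("invalid_data.csv", 0, "Failed"),
]

# Stub standing in for Execute Pipeline / Get Output Record Count.
FAKE_RESULTS = {
    "small_data.csv": (100, "Completed"),
    "medium_data.csv": (1000, "Completed"),
    "empty_data.csv": (0, "Completed"),
    "invalid_data.csv": (0, "Failed"),
}


def execute_and_verify(input_file, expected_count, expected_status):
    """One template invocation: run the (stubbed) pipeline, check its outputs."""
    count, status = FAKE_RESULTS[input_file]
    assert status == expected_status, f"{input_file}: {status} != {expected_status}"
    assert count == expected_count, f"{input_file}: {count} != {expected_count}"


# [Template] expands to one call per row; a plain loop does the same.
for row in SCENARIOS:
    execute_and_verify(*row)
print("all scenarios passed")
```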

Pattern 3: Parallel-Safe Tests


```robotframework
*** Test Cases ***
Parallel Safe Test
    [Tags]    parallel_safe

    # Use unique identifiers to avoid conflicts
    ${unique_id}=    Get Unique Id
    ${pipeline_name}=    Set Variable    test_pipeline_${unique_id}
    ${table_name}=    Set Variable    test_table_${unique_id}

    # All resources are unique to this test run
    Create Test Table    ${table_name}
    Upload Pipeline    ${pipeline_name}
    Execute And Verify    ${pipeline_name}    ${table_name}

    [Teardown]    Cleanup Unique Resources    ${pipeline_name}    ${table_name}
```

Checklist Before Committing


  • Test has clear documentation
  • Appropriate tags are assigned
  • Setup and teardown handle cleanup
  • Variables use meaningful names
  • Error handling is in place
  • Test runs successfully locally
  • Test is idempotent (can run multiple times)
  • No hardcoded credentials or secrets