pytest-patterns

Pytest Patterns - Comprehensive Testing Guide

A comprehensive skill for mastering Python testing with pytest. This skill covers everything from basic test structure to advanced patterns including fixtures, parametrization, mocking, test organization, coverage analysis, and CI/CD integration.

When to Use This Skill


Use this skill when:
  • Writing tests for Python applications (web apps, APIs, CLI tools, libraries)
  • Setting up test infrastructure for a new Python project
  • Refactoring existing tests to be more maintainable and efficient
  • Implementing test-driven development (TDD) workflows
  • Creating fixture patterns for database, API, or external service testing
  • Organizing large test suites with hundreds or thousands of tests
  • Debugging failing tests or improving test reliability
  • Setting up continuous integration testing pipelines
  • Measuring and improving code coverage
  • Writing integration, unit, or end-to-end tests
  • Testing async Python code
  • Mocking external dependencies and services

Core Concepts


What is pytest?


pytest is a mature, full-featured Python testing framework that makes it easy to write simple tests, yet scales to support complex functional testing. It provides:
  • Simple syntax: plain assert statements instead of special assertion methods
  • Powerful fixtures: modular, composable test setup and teardown
  • Parametrization: run the same test with different inputs
  • Plugin ecosystem: hundreds of plugins for extended functionality
  • Detailed reporting: clear failure messages and debugging information
  • Test discovery: automatic test collection following naming conventions

pytest vs unittest


unittest (traditional):

python
import unittest

class TestMath(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(2 + 2, 4)

pytest (simpler):

python
def test_addition():
    assert 2 + 2 == 4

Test Discovery Rules


pytest automatically discovers tests by following these conventions:
  1. Test files: test_*.py or *_test.py
  2. Test functions: functions prefixed with test_
  3. Test classes: classes prefixed with Test (no __init__ method)
  4. Test methods: methods prefixed with test_ inside Test classes

Fixtures - The Heart of pytest


What are Fixtures?


Fixtures provide a fixed baseline for tests to run reliably and repeatably. They handle setup, provide test data, and perform cleanup.

Basic Fixture Pattern


python
import pytest

@pytest.fixture
def sample_data():
    """Provides sample data for testing."""
    return {"name": "Alice", "age": 30}

def test_data_access(sample_data):
    assert sample_data["name"] == "Alice"
    assert sample_data["age"] == 30

Fixture Scopes


Fixtures can have different scopes controlling how often they're created:
  • function (default): Created for each test function
  • class: Created once per test class
  • module: Created once per test module
  • package: Created once per test package
  • session: Created once per test session
python
@pytest.fixture(scope="session")
def database_connection():
    """Database connection created once for entire test session."""
    conn = create_db_connection()
    yield conn
    conn.close()  # Cleanup after all tests

@pytest.fixture(scope="module")
def api_client():
    """API client created once per test module."""
    client = APIClient()
    client.authenticate()
    yield client
    client.logout()

@pytest.fixture  # scope="function" is default
def temp_file():
    """Temporary file created for each test."""
    import os
    import tempfile
    f = tempfile.NamedTemporaryFile(mode='w', delete=False)
    f.close()
    yield f.name
    os.unlink(f.name)

Fixture Dependencies


Fixtures can depend on other fixtures, creating a dependency graph:
python
@pytest.fixture
def database():
    db = Database()
    db.connect()
    yield db
    db.disconnect()

@pytest.fixture
def user_repository(database):
    """Depends on database fixture."""
    return UserRepository(database)

@pytest.fixture
def sample_user(user_repository):
    """Depends on user_repository, which depends on database."""
    user = user_repository.create(name="Test User")
    yield user
    user_repository.delete(user.id)

def test_user_operations(sample_user):
    """Uses sample_user fixture (which uses user_repository and database)."""
    assert sample_user.name == "Test User"

Autouse Fixtures


Fixtures that run automatically without being explicitly requested:
python
@pytest.fixture(autouse=True)
def reset_database():
    """Runs before every test automatically."""
    clear_database()
    seed_test_data()

@pytest.fixture(autouse=True, scope="session")
def configure_logging():
    """Configure logging once for entire test session."""
    import logging
    logging.basicConfig(level=logging.DEBUG)

Fixture Factories


Fixtures that return functions for creating test data:
python
@pytest.fixture
def make_user():
    """Factory fixture for creating users."""
    users = []

    def _make_user(name, email=None):
        user = User(name=name, email=email or f"{name}@example.com")
        users.append(user)
        return user

    yield _make_user

    # Cleanup all created users
    for user in users:
        user.delete()

def test_multiple_users(make_user):
    user1 = make_user("Alice")
    user2 = make_user("Bob", email="bob@test.com")
    assert user1.name == "Alice"
    assert user2.email == "bob@test.com"

Parametrization - Testing Multiple Cases


Basic Parametrization


Run the same test with different inputs:
python
import pytest

@pytest.mark.parametrize("input_value,expected", [
    (2, 4),
    (3, 9),
    (4, 16),
    (5, 25),
])
def test_square(input_value, expected):
    assert input_value ** 2 == expected

Multiple Parameters


python
@pytest.mark.parametrize("x", [0, 1])
@pytest.mark.parametrize("y", [2, 3])
def test_combinations(x, y):
    """Runs 4 times: (0,2), (0,3), (1,2), (1,3)."""
    assert x < y

Parametrizing with IDs


Make test output more readable:
python
@pytest.mark.parametrize("test_input,expected", [
    pytest.param("3+5", 8, id="addition"),
    pytest.param("2*4", 8, id="multiplication"),
    pytest.param("10-2", 8, id="subtraction"),
])
def test_eval(test_input, expected):
    assert eval(test_input) == expected

Output:

test_eval[addition] PASSED
test_eval[multiplication] PASSED
test_eval[subtraction] PASSED

Parametrizing Fixtures


Create fixture instances with different values:
python
@pytest.fixture(params=["mysql", "postgresql", "sqlite"])
def database_type(request):
    """Test runs three times, once for each database."""
    return request.param

def test_database_connection(database_type):
    conn = connect_to_database(database_type)
    assert conn.is_connected()

Combining Parametrization and Marks


python
@pytest.mark.parametrize("test_input,expected", [
    ("valid@email.com", True),
    ("invalid-email", False),
    pytest.param("edge@case", True, marks=pytest.mark.xfail),
    pytest.param("slow@test.com", True, marks=pytest.mark.slow),
])
def test_email_validation(test_input, expected):
    assert is_valid_email(test_input) == expected

Indirect Parametrization


Pass parameters through fixtures:
python
@pytest.fixture
def database(request):
    """Create database based on parameter."""
    db_type = request.param
    db = Database(db_type)
    db.connect()
    yield db
    db.close()

@pytest.mark.parametrize("database", ["mysql", "postgres"], indirect=True)
def test_database_operations(database):
    """database fixture receives the parameter value."""
    assert database.is_connected()
    database.execute("SELECT 1")

Mocking and Monkeypatching


Using pytest's monkeypatch


The monkeypatch fixture provides safe patching that's automatically undone after each test:
python
import os
import sys

def test_get_user_env(monkeypatch):
    """Test environment variable access."""
    monkeypatch.setenv("USER", "testuser")
    assert os.getenv("USER") == "testuser"

def test_remove_env(monkeypatch):
    """Test with missing environment variable."""
    monkeypatch.delenv("PATH", raising=False)
    assert os.getenv("PATH") is None

def test_modify_path(monkeypatch):
    """Test sys.path modification."""
    monkeypatch.syspath_prepend("/custom/path")
    assert "/custom/path" in sys.path

Mocking Functions and Methods


python
import requests

def get_user_data(user_id):
    response = requests.get(f"https://api.example.com/users/{user_id}")
    return response.json()

def test_get_user_data(monkeypatch):
    """Mock external API call."""
    class MockResponse:
        @staticmethod
        def json():
            return {"id": 1, "name": "Test User"}

    def mock_get(*args, **kwargs):
        return MockResponse()

    monkeypatch.setattr(requests, "get", mock_get)

    result = get_user_data(1)
    assert result["name"] == "Test User"

Using unittest.mock


python
from unittest.mock import Mock, MagicMock, patch, call

def test_with_mock():
    """Basic mock usage."""
    mock_db = Mock()
    mock_db.get_user.return_value = {"id": 1, "name": "Alice"}

    user = mock_db.get_user(1)
    assert user["name"] == "Alice"
    mock_db.get_user.assert_called_once_with(1)

def test_with_patch():
    """Patch during test execution."""
    with patch('mymodule.database.get_connection') as mock_conn:
        mock_conn.return_value = Mock()
        # Test code that uses database.get_connection()
        assert mock_conn.called

@patch('mymodule.send_email')
def test_notification(mock_email):
    """Patch as decorator."""
    send_notification("test@example.com", "Hello")
    mock_email.assert_called_once()
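patch.object is a narrower variant worth knowing: it replaces a single attribute on a specific class or object for the duration of a block. A minimal sketch (EmailClient and notify are hypothetical names for illustration):

```python
from unittest.mock import patch

class EmailClient:
    """Hypothetical client; the real send() would hit the network."""
    def send(self, to, body):
        raise RuntimeError("no network in tests")

def notify(client, address):
    client.send(address, "Hello")
    return True

def test_notify_without_network():
    client = EmailClient()
    # Replace only EmailClient.send; everything else stays real.
    with patch.object(EmailClient, "send", return_value=None) as mock_send:
        assert notify(client, "a@example.com") is True
        mock_send.assert_called_once_with("a@example.com", "Hello")
```

Because the patch is scoped to the with block, the original send is restored as soon as the test leaves it.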

Mock Return Values and Side Effects


python
import pytest
from unittest.mock import Mock

def test_mock_return_values():
    """Different return values for sequential calls."""
    mock_api = Mock()
    mock_api.fetch.side_effect = [
        {"status": "pending"},
        {"status": "processing"},
        {"status": "complete"}
    ]

    assert mock_api.fetch()["status"] == "pending"
    assert mock_api.fetch()["status"] == "processing"
    assert mock_api.fetch()["status"] == "complete"

def test_mock_exception():
    """Mock raising exceptions."""
    mock_service = Mock()
    mock_service.connect.side_effect = ConnectionError("Failed to connect")

    with pytest.raises(ConnectionError):
        mock_service.connect()
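side_effect also accepts a callable, letting the mock compute its return value from the call's arguments — a small sketch (the pricing rule here is made up for the example):

```python
from unittest.mock import Mock

def test_side_effect_callable():
    """side_effect as a function: return value computed per call."""
    mock_pricing = Mock()
    mock_pricing.quote.side_effect = lambda qty: qty * 10  # hypothetical rule

    assert mock_pricing.quote(3) == 30
    assert mock_pricing.quote(5) == 50
    # Calls are still recorded for later inspection:
    assert mock_pricing.quote.call_count == 2
```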

Spy Pattern - Partial Mocking


python
import mymodule  # the module under test (placeholder name)

def test_spy_pattern(monkeypatch):
    """Spy on a function while preserving original behavior."""
    original_function = mymodule.process_data
    call_count = 0

    def spy_function(*args, **kwargs):
        nonlocal call_count
        call_count += 1
        return original_function(*args, **kwargs)

    monkeypatch.setattr(mymodule, "process_data", spy_function)

    result = mymodule.process_data([1, 2, 3])
    assert call_count == 1
    assert result is not None  # Original function executed

Test Organization


Directory Structure


project/
├── src/
│   └── mypackage/
│       ├── __init__.py
│       ├── models.py
│       ├── services.py
│       └── utils.py
├── tests/
│   ├── __init__.py
│   ├── conftest.py          # Shared fixtures
│   ├── unit/
│   │   ├── __init__.py
│   │   ├── test_models.py
│   │   └── test_utils.py
│   ├── integration/
│   │   ├── __init__.py
│   │   ├── conftest.py      # Integration-specific fixtures
│   │   └── test_services.py
│   └── e2e/
│       └── test_workflows.py
├── pytest.ini               # pytest configuration
└── setup.py

conftest.py - Sharing Fixtures


The conftest.py file makes fixtures available to all tests in its directory and subdirectories:

tests/conftest.py:

python
import pytest

@pytest.fixture(scope="session")
def database():
    """Database connection available to all tests."""
    db = Database()
    db.connect()
    yield db
    db.disconnect()

@pytest.fixture
def clean_database(database):
    """Reset database before each test."""
    database.clear_all_tables()
    return database

def pytest_configure(config):
    """Register custom markers."""
    config.addinivalue_line(
        "markers", 'slow: marks tests as slow (deselect with -m "not slow")'
    )
    config.addinivalue_line(
        "markers", "integration: marks tests as integration tests"
    )

Using Markers


Markers allow categorizing and selecting tests:
python
import sys
import time

import pytest
import requests

@pytest.mark.slow
def test_slow_operation():
    """Marked as slow test."""
    time.sleep(5)
    assert True

@pytest.mark.integration
def test_api_integration():
    """Marked as integration test."""
    response = requests.get("https://api.example.com")
    assert response.status_code == 200

@pytest.mark.skip(reason="Not implemented yet")
def test_future_feature():
    """Skipped test."""
    pass

@pytest.mark.skipif(sys.version_info < (3, 8), reason="Requires Python 3.8+")
def test_python38_feature():
    """Conditionally skipped."""
    pass

@pytest.mark.xfail(reason="Known bug in dependency")
def test_known_failure():
    """Expected to fail."""
    assert False

@pytest.mark.parametrize("env", ["dev", "staging", "prod"])
@pytest.mark.integration
def test_environments(env):
    """Multiple markers on one test."""
    assert environment_exists(env)
Running tests with markers:
bash
pytest -m slow                    # Run only slow tests
pytest -m "not slow"              # Skip slow tests
pytest -m "integration and not slow"  # Integration tests that aren't slow
pytest --markers                  # List all available markers

Test Classes for Organization


python
class TestUserAuthentication:
    """Group related authentication tests."""

    @pytest.fixture(autouse=True)
    def setup(self):
        """Setup for all tests in this class."""
        self.user_service = UserService()

    def test_login_success(self):
        result = self.user_service.login("user", "password")
        assert result.success

    def test_login_failure(self):
        result = self.user_service.login("user", "wrong")
        assert not result.success

    def test_logout(self):
        self.user_service.login("user", "password")
        assert self.user_service.logout()

class TestUserRegistration:
    """Group related registration tests."""

    def test_register_new_user(self):
        pass

    def test_register_duplicate_email(self):
        pass

Coverage Analysis


Installing Coverage Tools


bash
pip install pytest-cov

Running Coverage


bash
# Basic coverage report
pytest --cov=mypackage tests/

# Coverage with HTML report (open htmlcov/index.html)
pytest --cov=mypackage --cov-report=html tests/

# Coverage with terminal report showing missed lines
pytest --cov=mypackage --cov-report=term-missing tests/

# Coverage with multiple formats
pytest --cov=mypackage --cov-report=html --cov-report=term tests/

# Fail if coverage below threshold
pytest --cov=mypackage --cov-fail-under=80 tests/

Coverage Configuration


pytest.ini or setup.cfg:

ini
[tool:pytest]
addopts =
    --cov=mypackage
    --cov-report=html
    --cov-report=term-missing
    --cov-fail-under=80

[coverage:run]
source = mypackage
omit =
    */tests/*
    */venv/*
    */__pycache__/*

[coverage:report]
exclude_lines =
    pragma: no cover
    def __repr__
    raise AssertionError
    raise NotImplementedError
    if __name__ == .__main__.:
    if TYPE_CHECKING:

Excluding Code from Coverage


python
def critical_function():  # pragma: no cover
    """Excluded from coverage."""
    pass

if sys.platform == 'win32':  # pragma: no cover
    # Platform-specific code excluded
    pass

pytest Configuration


pytest.ini

ini
[pytest]
# Test discovery
testpaths = tests
python_files = test_*.py *_test.py
python_classes = Test*
python_functions = test_*

# Output options
addopts =
    -ra
    --strict-markers
    --strict-config
    --showlocals
    --tb=short
    --cov=mypackage
    --cov-report=html
    --cov-report=term-missing

# Markers
markers =
    slow: marks tests as slow (deselect with '-m "not slow"')
    integration: marks tests as integration tests
    unit: marks tests as unit tests
    smoke: marks tests as smoke tests
    regression: marks tests as regression tests

# Timeout for tests (requires pytest-timeout)
timeout = 300

# Minimum pytest version
minversion = 7.0

# Directories to ignore during collection
norecursedirs = .git .tox dist build *.egg venv

# Warning filters
filterwarnings =
    error
    ignore::DeprecationWarning

pyproject.toml Configuration


toml
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py", "*_test.py"]
addopts = [
    "-ra",
    "--strict-markers",
    "--cov=mypackage",
    "--cov-report=html",
    "--cov-report=term-missing",
]
markers = [
    "slow: marks tests as slow",
    "integration: marks tests as integration tests",
]

[tool.coverage.run]
source = ["mypackage"]
omit = ["*/tests/*", "*/venv/*"]

[tool.coverage.report]
exclude_lines = [
    "pragma: no cover",
    "def __repr__",
    "raise NotImplementedError",
]

CI/CD Integration


GitHub Actions


.github/workflows/test.yml:

yaml
name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
        python-version: ['3.8', '3.9', '3.10', '3.11', '3.12']

    steps:
      - uses: actions/checkout@v3

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -e .[dev]
          pip install pytest pytest-cov pytest-xdist

      - name: Run tests
        run: |
          pytest --cov=mypackage --cov-report=xml --cov-report=term-missing -n auto

      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage.xml
          fail_ci_if_error: true

GitLab CI


yaml
undefined
yaml
undefined

.gitlab-ci.yml

.gitlab-ci.yml

image: python:3.11

stages:
  - test
  - coverage

variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip"

cache:
  paths:
    - .cache/pip
    - venv/

before_script:
  - python -m venv venv
  - source venv/bin/activate
  - pip install -e .[dev]
  - pip install pytest pytest-cov

test:
  stage: test
  script:
    - pytest --junitxml=report.xml --cov=mypackage --cov-report=xml
  artifacts:
    when: always
    reports:
      junit: report.xml
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml

coverage:
  stage: coverage
  script:
    - pytest --cov=mypackage --cov-report=html --cov-fail-under=80
  coverage: '/(?i)total.*? (100(?:\.0+)?%|[1-9]?\d(?:\.\d+)?%)$/'
  artifacts:
    paths:
      - htmlcov/

Jenkins Pipeline


groovy
// Jenkinsfile
pipeline {
    agent any

    stages {
        stage('Setup') {
            steps {
                sh 'python -m venv venv'
                sh '. venv/bin/activate && pip install -e .[dev]'
                sh '. venv/bin/activate && pip install pytest pytest-cov pytest-html'
            }
        }

        stage('Test') {
            steps {
                sh '. venv/bin/activate && pytest --junitxml=results.xml --html=report.html --cov=mypackage --cov-report=html'
            }
            post {
                always {
                    junit 'results.xml'
                    publishHTML([
                        allowMissing: false,
                        alwaysLinkToLastBuild: true,
                        keepAll: true,
                        reportDir: 'htmlcov',
                        reportFiles: 'index.html',
                        reportName: 'Coverage Report'
                    ])
                }
            }
        }
    }
}

Advanced Patterns


Testing Async Code


python
import pytest
import asyncio

@pytest.fixture
def event_loop():
    """Create event loop for async tests."""
    # Note: recent pytest-asyncio releases manage the event loop themselves
    # and deprecate overriding this fixture; shown here for older setups.
    loop = asyncio.new_event_loop()
    yield loop
    loop.close()

@pytest.mark.asyncio
async def test_async_function():
    result = await async_fetch_data()
    assert result is not None

@pytest.mark.asyncio
async def test_async_with_timeout():
    with pytest.raises(asyncio.TimeoutError):
        await asyncio.wait_for(slow_async_operation(), timeout=1.0)

Using pytest-asyncio plugin


pip install pytest-asyncio


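With the plugin installed, asyncio mode can be set once in pytest configuration so every async test is collected without repeating the marker (a minimal pytest.ini sketch; in the default strict mode, the `@pytest.mark.asyncio` markers shown above remain required):

```ini
[pytest]
asyncio_mode = auto
```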

Testing Database Operations


python
@pytest.fixture(scope="session")
def database_engine():
    """Create database engine for test session."""
    engine = create_engine("postgresql://test:test@localhost/testdb")
    Base.metadata.create_all(engine)
    yield engine
    Base.metadata.drop_all(engine)
    engine.dispose()

@pytest.fixture
def db_session(database_engine):
    """Create new database session for each test."""
    connection = database_engine.connect()
    transaction = connection.begin()
    session = Session(bind=connection)

    yield session

    session.close()
    transaction.rollback()
    connection.close()

def test_user_creation(db_session):
    user = User(name="Test User", email="test@example.com")
    db_session.add(user)
    db_session.commit()

    assert user.id is not None
    assert db_session.query(User).count() == 1
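The transaction-rollback pattern above does not depend on SQLAlchemy. The core idea — run each test body inside a transaction that is always rolled back — can be sketched with the standard library's sqlite3 and an in-memory database (the names here are illustrative, not from the original):

```python
import sqlite3

# Stands in for the session-scoped engine: one shared in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.commit()

def run_isolated(test_body):
    """Run test_body inside a transaction that is always rolled back,
    mirroring the db_session fixture's teardown."""
    try:
        test_body(conn)
    finally:
        conn.rollback()

def insert_and_check(c):
    c.execute("INSERT INTO users (name) VALUES ('Test User')")
    # The insert is visible inside the transaction...
    assert c.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 1

run_isolated(insert_and_check)
# ...but rolled back afterwards, so the next test starts clean.
remaining = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```

Each test sees its own writes but leaves the shared schema untouched, which is what makes the session-scoped engine safe to reuse.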

Testing with Temporary Files


python
@pytest.fixture
def temp_directory(tmp_path):
    """Create temporary directory with sample files."""
    data_dir = tmp_path / "data"
    data_dir.mkdir()

    (data_dir / "config.json").write_text('{"debug": true}')
    (data_dir / "data.csv").write_text("name,value\ntest,42")

    return data_dir

def test_file_processing(temp_directory):
    config = load_config(temp_directory / "config.json")
    assert config["debug"] is True

    data = load_csv(temp_directory / "data.csv")
    assert len(data) == 1
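`load_config` and `load_csv` are assumed helpers from the code under test. Plausible stdlib-only implementations, with the fixture's setup inlined so the sketch is self-contained (hypothetical, for illustration):

```python
import csv
import json
import tempfile
from pathlib import Path

# Hypothetical helpers matching the calls in the tests above.
def load_config(path):
    return json.loads(Path(path).read_text())

def load_csv(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

with tempfile.TemporaryDirectory() as tmp:
    # Same setup the temp_directory fixture performs with tmp_path.
    data_dir = Path(tmp) / "data"
    data_dir.mkdir()
    (data_dir / "config.json").write_text('{"debug": true}')
    (data_dir / "data.csv").write_text("name,value\ntest,42")

    config = load_config(data_dir / "config.json")
    rows = load_csv(data_dir / "data.csv")
```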

Caplog - Capturing Log Output


python
import logging

def test_logging_output(caplog):
    """Test that function logs correctly."""
    with caplog.at_level(logging.INFO):
        process_data()

    assert "Processing started" in caplog.text
    assert "Processing completed" in caplog.text
    assert len(caplog.records) == 2

def test_warning_logged(caplog):
    """Test warning is logged."""
    caplog.set_level(logging.WARNING)
    risky_operation()

    assert any(record.levelname == "WARNING" for record in caplog.records)
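Under the hood, caplog attaches a handler that records every LogRecord. The mechanism can be sketched without pytest (`process_data` here is a stand-in for the function under test):

```python
import logging

class RecordingHandler(logging.Handler):
    """Collects LogRecords, similar to what caplog does internally."""
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(record)

def process_data(logger):
    # Stand-in for the real function under test.
    logger.info("Processing started")
    logger.info("Processing completed")

logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)
handler = RecordingHandler()
logger.addHandler(handler)

process_data(logger)
messages = [r.getMessage() for r in handler.records]
```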

Capsys - Capturing stdout/stderr


python
import sys

def test_print_output(capsys):
    """Test console output."""
    print("Hello, World!")
    print("Error message", file=sys.stderr)

    captured = capsys.readouterr()
    assert "Hello, World!" in captured.out
    assert "Error message" in captured.err

def test_progressive_output(capsys):
    """Test multiple output captures."""
    print("First")
    captured = capsys.readouterr()
    assert captured.out == "First\n"

    print("Second")
    captured = capsys.readouterr()
    assert captured.out == "Second\n"
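capsys-style capture is also available in the standard library via contextlib, which is handy for checking output outside a pytest run (a minimal sketch):

```python
import io
import sys
from contextlib import redirect_stdout, redirect_stderr

# Redirect both streams into in-memory buffers, then inspect them —
# the same idea capsys implements as a fixture.
out, err = io.StringIO(), io.StringIO()
with redirect_stdout(out), redirect_stderr(err):
    print("Hello, World!")
    print("Error message", file=sys.stderr)

captured_out = out.getvalue()
captured_err = err.getvalue()
```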

Test Examples


Example 1: Basic Unit Test

python
# test_calculator.py

import pytest
from calculator import add, subtract, multiply, divide

def test_add():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
    assert add(0, 0) == 0

def test_subtract():
    assert subtract(5, 3) == 2
    assert subtract(0, 5) == -5

def test_multiply():
    assert multiply(3, 4) == 12
    assert multiply(-2, 3) == -6

def test_divide():
    assert divide(10, 2) == 5
    assert divide(7, 2) == 3.5

def test_divide_by_zero():
    with pytest.raises(ZeroDivisionError):
        divide(10, 0)
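For reference, a minimal calculator module satisfying these tests might look like the following (hypothetical implementation; the real module is not shown in this guide):

```python
# calculator.py — hypothetical module under test
def add(a, b):
    return a + b

def subtract(a, b):
    return a - b

def multiply(a, b):
    return a * b

def divide(a, b):
    # Raises ZeroDivisionError for b == 0, as the tests expect.
    return a / b

result = divide(7, 2)
```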

Example 2: Parametrized String Validation

python
# test_validators.py

import pytest
from validators import is_valid_email, is_valid_phone, is_valid_url

@pytest.mark.parametrize("email,expected", [
    ("user@example.com", True),
    ("user.name+tag@example.co.uk", True),
    ("invalid.email", False),
    ("@example.com", False),
    ("user@", False),
    ("", False),
])
def test_email_validation(email, expected):
    assert is_valid_email(email) == expected

@pytest.mark.parametrize("phone,expected", [
    ("+1-234-567-8900", True),
    ("(555) 123-4567", True),
    ("1234567890", True),
    ("123", False),
    ("abc-def-ghij", False),
])
def test_phone_validation(phone, expected):
    assert is_valid_phone(phone) == expected

@pytest.mark.parametrize("url,expected", [
    ("https://www.example.com", True),
    ("http://example.com/path?query=1", True),
    ("ftp://files.example.com", True),
    ("not a url", False),
    ("http://", False),
])
def test_url_validation(url, expected):
    assert is_valid_url(url) == expected
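`is_valid_email` is imported from the project's own validators module. A toy regex-based implementation that passes the cases above could look like this (illustrative only — real email validation is considerably more involved):

```python
import re

# Hypothetical validator; matches local@domain.tld with common characters.
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def is_valid_email(email):
    return bool(EMAIL_RE.match(email))

# Mirror the parametrized cases with a plain loop.
cases = [
    ("user@example.com", True),
    ("user.name+tag@example.co.uk", True),
    ("invalid.email", False),
    ("@example.com", False),
    ("user@", False),
    ("", False),
]
results = [is_valid_email(email) == expected for email, expected in cases]
```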

Example 3: API Testing with Fixtures

python
# test_api.py

import pytest
import requests
from api_client import APIClient

@pytest.fixture(scope="module")
def api_client():
    """Create API client for test module."""
    client = APIClient(base_url="https://api.example.com")
    client.authenticate(api_key="test-key")
    yield client
    client.close()

@pytest.fixture
def sample_user(api_client):
    """Create sample user for testing."""
    user = api_client.create_user({
        "name": "Test User",
        "email": "test@example.com"
    })
    yield user
    api_client.delete_user(user["id"])

def test_get_user(api_client, sample_user):
    user = api_client.get_user(sample_user["id"])
    assert user["name"] == "Test User"
    assert user["email"] == "test@example.com"

def test_update_user(api_client, sample_user):
    updated = api_client.update_user(sample_user["id"], {
        "name": "Updated Name"
    })
    assert updated["name"] == "Updated Name"

def test_list_users(api_client):
    users = api_client.list_users()
    assert isinstance(users, list)
    assert len(users) > 0

def test_user_not_found(api_client):
    with pytest.raises(requests.HTTPError) as exc:
        api_client.get_user("nonexistent-id")
    assert exc.value.response.status_code == 404

Example 4: Database Testing

python
# test_models.py

import pytest
from sqlalchemy import create_engine
from sqlalchemy.orm import Session
from models import Base, User, Post

@pytest.fixture(scope="function")
def db_session():
    """Create clean database session for each test."""
    engine = create_engine("sqlite:///:memory:")
    Base.metadata.create_all(engine)
    session = Session(engine)

    yield session

    session.close()

@pytest.fixture
def sample_user(db_session):
    """Create sample user."""
    user = User(username="testuser", email="test@example.com")
    db_session.add(user)
    db_session.commit()
    return user

def test_user_creation(db_session):
    user = User(username="newuser", email="new@example.com")
    db_session.add(user)
    db_session.commit()

    assert user.id is not None
    assert db_session.query(User).count() == 1

def test_user_posts(db_session, sample_user):
    post1 = Post(title="First Post", content="Content 1", user=sample_user)
    post2 = Post(title="Second Post", content="Content 2", user=sample_user)
    db_session.add_all([post1, post2])
    db_session.commit()

    assert len(sample_user.posts) == 2
    assert sample_user.posts[0].title == "First Post"

def test_user_deletion_cascades(db_session, sample_user):
    post = Post(title="Post", content="Content", user=sample_user)
    db_session.add(post)
    db_session.commit()

    db_session.delete(sample_user)
    db_session.commit()

    assert db_session.query(Post).count() == 0

Example 5: Mocking External Services

python
# test_notification_service.py

import pytest
from unittest.mock import Mock, patch
from notification_service import NotificationService, EmailProvider, SMSProvider

@pytest.fixture
def mock_email_provider():
    provider = Mock(spec=EmailProvider)
    provider.send.return_value = {"status": "sent", "id": "email-123"}
    return provider

@pytest.fixture
def mock_sms_provider():
    provider = Mock(spec=SMSProvider)
    provider.send.return_value = {"status": "sent", "id": "sms-456"}
    return provider

@pytest.fixture
def notification_service(mock_email_provider, mock_sms_provider):
    return NotificationService(
        email_provider=mock_email_provider,
        sms_provider=mock_sms_provider
    )

def test_send_email_notification(notification_service, mock_email_provider):
    result = notification_service.send_email(
        to="user@example.com",
        subject="Test",
        body="Test message"
    )

    assert result["status"] == "sent"
    mock_email_provider.send.assert_called_once()
    call_args = mock_email_provider.send.call_args
    assert call_args[1]["to"] == "user@example.com"

def test_send_sms_notification(notification_service, mock_sms_provider):
    result = notification_service.send_sms(
        to="+1234567890",
        message="Test SMS"
    )

    assert result["status"] == "sent"
    mock_sms_provider.send.assert_called_once_with(
        to="+1234567890",
        message="Test SMS"
    )

def test_notification_retry_on_failure(notification_service, mock_email_provider):
    mock_email_provider.send.side_effect = [
        Exception("Network error"),
        Exception("Network error"),
        {"status": "sent", "id": "email-123"}
    ]

    result = notification_service.send_email_with_retry(
        to="user@example.com",
        subject="Test",
        body="Test message",
        max_retries=3
    )

    assert result["status"] == "sent"
    assert mock_email_provider.send.call_count == 3
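Setting side_effect to a list makes the mock fail twice before succeeding on the third call. A hypothetical retry helper consistent with the `send_email_with_retry` call above, exercised directly against a Mock:

```python
from unittest.mock import Mock

def send_with_retry(provider, max_retries, **kwargs):
    """Hypothetical retry loop: re-call provider.send until it succeeds
    or max_retries attempts are exhausted."""
    last_error = None
    for _ in range(max_retries):
        try:
            return provider.send(**kwargs)
        except Exception as exc:
            last_error = exc
    raise last_error

provider = Mock()
provider.send.side_effect = [
    Exception("Network error"),
    Exception("Network error"),
    {"status": "sent", "id": "email-123"},
]

result = send_with_retry(provider, max_retries=3, to="user@example.com")
```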

Example 6: Testing File Operations

python
# test_file_processor.py

import pytest
from pathlib import Path
from file_processor import process_csv, process_json, FileProcessor

@pytest.fixture
def csv_file(tmp_path):
    """Create temporary CSV file."""
    csv_path = tmp_path / "data.csv"
    csv_path.write_text(
        "name,age,city\n"
        "Alice,30,New York\n"
        "Bob,25,Los Angeles\n"
        "Charlie,35,Chicago\n"
    )
    return csv_path

@pytest.fixture
def json_file(tmp_path):
    """Create temporary JSON file."""
    import json
    json_path = tmp_path / "data.json"
    data = {
        "users": [
            {"name": "Alice", "age": 30},
            {"name": "Bob", "age": 25}
        ]
    }
    json_path.write_text(json.dumps(data))
    return json_path

def test_process_csv(csv_file):
    data = process_csv(csv_file)
    assert len(data) == 3
    assert data[0]["name"] == "Alice"
    assert data[1]["age"] == "25"

def test_process_json(json_file):
    data = process_json(json_file)
    assert len(data["users"]) == 2
    assert data["users"][0]["name"] == "Alice"

def test_file_not_found():
    with pytest.raises(FileNotFoundError):
        process_csv("nonexistent.csv")

def test_file_processor_creates_backup(tmp_path):
    processor = FileProcessor(tmp_path)
    source = tmp_path / "original.txt"
    source.write_text("original content")

    processor.process_with_backup(source)

    backup = tmp_path / "original.txt.bak"
    assert backup.exists()
    assert backup.read_text() == "original content"

Example 7: Testing Classes and Methods

python
# test_shopping_cart.py

import pytest
from shopping_cart import ShoppingCart, Product

@pytest.fixture
def cart():
    """Create empty shopping cart."""
    return ShoppingCart()

@pytest.fixture
def products():
    """Create sample products."""
    return [
        Product(id=1, name="Book", price=10.99),
        Product(id=2, name="Pen", price=2.50),
        Product(id=3, name="Notebook", price=5.99),
    ]

def test_add_product(cart, products):
    cart.add_product(products[0], quantity=2)
    assert cart.total_items() == 2
    assert cart.subtotal() == 21.98

def test_remove_product(cart, products):
    cart.add_product(products[0], quantity=2)
    cart.remove_product(products[0].id, quantity=1)
    assert cart.total_items() == 1

def test_clear_cart(cart, products):
    cart.add_product(products[0])
    cart.add_product(products[1])
    cart.clear()
    assert cart.total_items() == 0

def test_apply_discount(cart, products):
    cart.add_product(products[0], quantity=2)
    cart.apply_discount(0.10)  # 10% discount
    assert cart.total() == pytest.approx(19.78, rel=0.01)

def test_cannot_add_negative_quantity(cart, products):
    with pytest.raises(ValueError, match="Quantity must be positive"):
        cart.add_product(products[0], quantity=-1)

class TestShoppingCartDiscounts:
    """Test various discount scenarios."""

    @pytest.fixture
    def cart_with_items(self, cart, products):
        cart.add_product(products[0], quantity=2)
        cart.add_product(products[1], quantity=3)
        return cart

    def test_percentage_discount(self, cart_with_items):
        original = cart_with_items.total()
        cart_with_items.apply_discount(0.20)
        assert cart_with_items.total() == original * 0.80

    def test_fixed_discount(self, cart_with_items):
        original = cart_with_items.total()
        cart_with_items.apply_fixed_discount(5.00)
        assert cart_with_items.total() == original - 5.00

    def test_cannot_apply_negative_discount(self, cart_with_items):
        with pytest.raises(ValueError):
            cart_with_items.apply_discount(-0.10)

Example 8: Testing Command-Line Interface

python
# test_cli.py

import pytest
from click.testing import CliRunner
from myapp.cli import cli

@pytest.fixture
def runner():
    """Create CLI test runner."""
    return CliRunner()

def test_cli_help(runner):
    result = runner.invoke(cli, ['--help'])
    assert result.exit_code == 0
    assert 'Usage:' in result.output

def test_cli_version(runner):
    result = runner.invoke(cli, ['--version'])
    assert result.exit_code == 0
    assert '1.0.0' in result.output

def test_cli_process_file(runner, tmp_path):
    input_file = tmp_path / "input.txt"
    input_file.write_text("test data")

    result = runner.invoke(cli, ['process', str(input_file)])
    assert result.exit_code == 0
    assert 'Processing complete' in result.output

def test_cli_invalid_option(runner):
    result = runner.invoke(cli, ['--invalid-option'])
    assert result.exit_code != 0
    assert 'Error' in result.output

Example 9: Testing Async Functions

python
# test_async_operations.py

import pytest
import asyncio
from async_service import fetch_data, process_batch, AsyncWorker

@pytest.mark.asyncio
async def test_fetch_data():
    data = await fetch_data("https://api.example.com/data")
    assert data is not None
    assert 'results' in data

@pytest.mark.asyncio
async def test_process_batch():
    items = [1, 2, 3, 4, 5]
    results = await process_batch(items)
    assert len(results) == 5

@pytest.mark.asyncio
async def test_async_worker():
    worker = AsyncWorker()
    await worker.start()

    result = await worker.submit_task("process", data={"key": "value"})
    assert result["status"] == "completed"

    await worker.stop()

@pytest.mark.asyncio
async def test_concurrent_requests():
    async with AsyncWorker() as worker:
        tasks = [
            worker.submit_task("task1"),
            worker.submit_task("task2"),
            worker.submit_task("task3"),
        ]
        results = await asyncio.gather(*tasks)
        assert len(results) == 3

Example 10: Fixture Parametrization

python
# test_database_backends.py

import pytest
from database import DatabaseConnection

@pytest.fixture(params=['sqlite', 'postgresql', 'mysql'])
def db_connection(request):
    """Each dependent test runs three times, once per database backend."""
    db = DatabaseConnection(request.param)
    db.connect()
    yield db
    db.disconnect()

def test_database_insert(db_connection):
    """Test insert operation on each database."""
    db_connection.execute("INSERT INTO users (name) VALUES ('test')")
    result = db_connection.execute("SELECT COUNT(*) FROM users")
    assert result[0][0] == 1

def test_database_transaction(db_connection):
    """Test transaction support on each database."""
    with db_connection.transaction():
        db_connection.execute("INSERT INTO users (name) VALUES ('test')")
        db_connection.rollback()

    result = db_connection.execute("SELECT COUNT(*) FROM users")
    assert result[0][0] == 0

Example 11: Testing Exceptions

test_error_handling.py

import pytest
from custom_errors import ValidationError, AuthenticationError
from validator import validate_user_input
from auth import authenticate_user

def test_validation_error_message():
    with pytest.raises(ValidationError) as exc_info:
        validate_user_input({"email": "invalid"})

    assert "Invalid email format" in str(exc_info.value)
    assert exc_info.value.field == "email"

def test_multiple_validation_errors():
    with pytest.raises(ValidationError) as exc_info:
        validate_user_input({
            "email": "invalid",
            "age": -5
        })

    assert len(exc_info.value.errors) == 2

def test_authentication_error():
    with pytest.raises(AuthenticationError, match="Invalid credentials"):
        authenticate_user("user", "wrong_password")

@pytest.mark.parametrize("input_data,error_type", [
    ({"email": ""}, ValidationError),
    ({"email": None}, ValidationError),
    ({}, ValidationError),
])
def test_various_validation_errors(input_data, error_type):
    with pytest.raises(error_type):
        validate_user_input(input_data)

Example 12: Testing with Fixtures and Mocks

test_payment_service.py

import pytest
from unittest.mock import Mock, patch
from payment_service import PaymentService, PaymentGateway
from models import Order, PaymentStatus

@pytest.fixture
def mock_gateway():
    gateway = Mock(spec=PaymentGateway)
    gateway.process_payment.return_value = {
        "transaction_id": "tx-12345",
        "status": "success"
    }
    return gateway

@pytest.fixture
def payment_service(mock_gateway):
    return PaymentService(gateway=mock_gateway)

@pytest.fixture
def sample_order():
    return Order(
        id="order-123",
        amount=99.99,
        currency="USD",
        customer_id="cust-456"
    )

def test_successful_payment(payment_service, mock_gateway, sample_order):
    result = payment_service.process_order(sample_order)

    assert result.status == PaymentStatus.SUCCESS
    assert result.transaction_id == "tx-12345"
    mock_gateway.process_payment.assert_called_once()

def test_payment_failure(payment_service, mock_gateway, sample_order):
    mock_gateway.process_payment.return_value = {
        "status": "failed",
        "error": "Insufficient funds"
    }

    result = payment_service.process_order(sample_order)

    assert result.status == PaymentStatus.FAILED
    assert "Insufficient funds" in result.error_message

def test_payment_retry_logic(payment_service, mock_gateway, sample_order):
    mock_gateway.process_payment.side_effect = [
        {"status": "error", "error": "Network timeout"},
        {"status": "error", "error": "Network timeout"},
        {"transaction_id": "tx-12345", "status": "success"}
    ]

    result = payment_service.process_order_with_retry(sample_order, max_retries=3)

    assert result.status == PaymentStatus.SUCCESS
    assert mock_gateway.process_payment.call_count == 3

Example 13: Integration Test Example

test_integration_workflow.py

import pytest
from app import create_app
from database import db, User, Order

@pytest.fixture(scope="module")
def app():
    """Create application for testing."""
    app = create_app('testing')
    return app

@pytest.fixture(scope="module")
def client(app):
    """Create test client."""
    return app.test_client()

@pytest.fixture(scope="function")
def clean_db(app):
    """Clean database before each test."""
    with app.app_context():
        db.drop_all()
        db.create_all()
        yield db
        db.session.remove()

@pytest.fixture
def authenticated_user(client, clean_db):
    """Create and authenticate user."""
    user = User(username="testuser", email="test@example.com")
    user.set_password("password123")
    clean_db.session.add(user)
    clean_db.session.commit()

    # Login
    response = client.post('/api/auth/login', json={
        'username': 'testuser',
        'password': 'password123'
    })
    token = response.json['access_token']

    return {'user': user, 'token': token}

def test_create_order_workflow(client, authenticated_user):
    """Test complete order creation workflow."""
    headers = {'Authorization': f'Bearer {authenticated_user["token"]}'}

    # Create order
    response = client.post('/api/orders',
        headers=headers,
        json={
            'items': [
                {'product_id': 1, 'quantity': 2},
                {'product_id': 2, 'quantity': 1}
            ]
        }
    )
    assert response.status_code == 201
    order_id = response.json['order_id']

    # Verify order was created
    response = client.get(f'/api/orders/{order_id}', headers=headers)
    assert response.status_code == 200
    assert len(response.json['items']) == 2

    # Update order status
    response = client.patch(f'/api/orders/{order_id}',
        headers=headers,
        json={'status': 'processing'}
    )
    assert response.status_code == 200
    assert response.json['status'] == 'processing'

Example 14: Property-Based Testing

test_property_based.py

import pytest
from hypothesis import given, strategies as st
from string_utils import reverse_string, is_palindrome

@given(st.text())
def test_reverse_string_twice(s):
    """Reversing twice should return the original string."""
    assert reverse_string(reverse_string(s)) == s

@given(st.lists(st.integers()))
def test_sort_idempotent(lst):
    """Sorting twice should be the same as sorting once."""
    sorted_once = sorted(lst)
    sorted_twice = sorted(sorted_once)
    assert sorted_once == sorted_twice

@given(st.text(alphabet=st.characters(whitelist_categories=('Lu', 'Ll'))))
def test_palindrome_reverse(s):
    """If a string is a palindrome, its reverse is too."""
    if is_palindrome(s):
        assert is_palindrome(reverse_string(s))

@given(st.integers(min_value=1, max_value=1000))
def test_factorial_positive(n):
    """Factorial should always be positive."""
    from math import factorial
    assert factorial(n) > 0

Example 15: Performance Testing

test_performance.py

import pytest
import time
from data_processor import process_large_dataset, optimize_query

@pytest.mark.slow
def test_large_dataset_processing_time():
    """Test that a large dataset is processed within acceptable time."""
    start = time.time()
    data = list(range(1000000))
    result = process_large_dataset(data)
    duration = time.time() - start

    assert len(result) == 1000000
    assert duration < 5.0  # Should complete in under 5 seconds

@pytest.mark.benchmark
def test_query_optimization(benchmark):
    """Benchmark query performance."""
    result = benchmark(optimize_query, "SELECT * FROM users WHERE active=1")
    assert result is not None

@pytest.mark.parametrize("size", [100, 1000, 10000])
def test_scaling_performance(size):
    """Test performance with different data sizes."""
    data = list(range(size))
    start = time.time()
    result = process_large_dataset(data)
    duration = time.time() - start

    # Should scale roughly linearly
    expected_max_time = size / 100000  # 1 second per 100k items
    assert duration < expected_max_time

Best Practices


Test Organization


  1. One test file per source file: mymodule.py is covered by test_mymodule.py
  2. Group related tests in classes: use Test* classes for logical grouping
  3. Use descriptive test names: test_user_login_with_invalid_credentials
  4. Keep tests independent: each test should work in isolation
  5. Use fixtures for setup: avoid duplicate setup code
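Points 2 and 3 can be sketched together: a Test* class groups related cases under descriptive method names, and pytest collects every test_* method automatically. The slugify helper here is a hypothetical stand-in, defined inline so the example is self-contained:

```python
def slugify(title):
    """Hypothetical helper: lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

class TestSlugify:
    """pytest collects every test_* method inside a Test* class."""

    def test_lowercases_and_hyphenates(self):
        assert slugify("Hello World") == "hello-world"

    def test_single_word_passes_through(self):
        assert slugify("Pytest") == "pytest"
```

Each method stands alone, so either test can fail or be run (pytest path/to/file.py::TestSlugify::test_single_word_passes_through) without affecting the other.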

Writing Effective Tests


  1. Follow AAA pattern: Arrange, Act, Assert
    python
    def test_user_creation():
        # Arrange
        user_data = {"name": "Alice", "email": "alice@example.com"}
    
        # Act
        user = create_user(user_data)
    
        # Assert
        assert user.name == "Alice"
  2. Test one thing per test: Each test should verify a single behavior
  3. Use descriptive assertions: Make failures easy to understand
  4. Avoid test interdependencies: Tests should not depend on execution order
  5. Test edge cases: Empty lists, None values, boundary conditions
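Point 5 pairs naturally with parametrization: list the edge cases next to the normal case so gaps are visible at a glance. This is a minimal sketch assuming a hypothetical safe_head helper, defined inline:

```python
import pytest

def safe_head(items):
    """Hypothetical helper: first element of a list, or None when it is empty."""
    return items[0] if items else None

# Edge cases alongside the typical case: empty input, None value, boundary size.
@pytest.mark.parametrize("items,expected", [
    ([], None),        # empty list
    ([None], None),    # None value inside
    ([1], 1),          # boundary: single element
    ([1, 2, 3], 1),    # typical case
])
def test_safe_head_edge_cases(items, expected):
    assert safe_head(items) == expected
```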

Fixture Best Practices


  1. Use appropriate scope: Minimize fixture creation cost
  2. Keep fixtures small: Each fixture should have a single responsibility
  3. Use fixture factories: For creating multiple test objects
  4. Clean up resources: Use yield for teardown
  5. Share fixtures via conftest.py: Make common fixtures available

Coverage Guidelines


  1. Aim for high coverage: 80%+ is a good target
  2. Focus on critical paths: prioritize important business logic
  3. Don't chase 100%: some code doesn't need tests (trivial getters and setters)
  4. Use coverage to find gaps, not as a quality metric
  5. Exclude generated code: mark it with # pragma: no cover

CI/CD Integration


  1. Run tests on every commit: Catch issues early
  2. Test on multiple Python versions: Ensure compatibility
  3. Generate coverage reports: Track coverage trends
  4. Fail on low coverage: Maintain coverage standards
  5. Run tests in parallel: Speed up CI pipeline
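The list above can be sketched as a minimal GitHub Actions workflow covering points 1, 2, 4, and 5; the workflow name, the src/ package layout, and the 80% threshold are assumptions about your project, not fixed requirements:

```yaml
name: tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.8", "3.11", "3.12"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      # Install the project plus test tooling (parallel runs and coverage)
      - run: pip install -e . pytest pytest-cov pytest-xdist
      # -n auto runs tests in parallel; --cov-fail-under enforces the standard
      - run: pytest -n auto --cov=src --cov-report=xml --cov-fail-under=80
```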

Useful Plugins


  • pytest-cov: Coverage reporting
  • pytest-xdist: Parallel test execution
  • pytest-asyncio: Async/await support
  • pytest-mock: Enhanced mocking
  • pytest-timeout: Test timeouts
  • pytest-randomly: Randomize test order
  • pytest-html: HTML test reports
  • pytest-benchmark: Performance benchmarking
  • hypothesis: Property-based testing
  • pytest-django: Django testing support
  • pytest-flask: Flask testing support

Troubleshooting


Tests Not Discovered


  • Check file naming: test_*.py or *_test.py
  • Check function naming: test_*
  • Verify __init__.py files exist in test directories
  • Run with the -v flag to see the discovery process

Fixtures Not Found


  • Check the fixture is in conftest.py or the same file
  • Verify the fixture scope is appropriate
  • Check for typos in the fixture name
  • Use the --fixtures flag to list available fixtures

Test Failures


  • Use -v for verbose output
  • Use --tb=long for detailed tracebacks
  • Use --pdb to drop into the debugger on failure
  • Use -x to stop on the first failure
  • Use --lf to rerun the last failed tests

Import Errors


  • Ensure the package is installed: pip install -e .
  • Check that PYTHONPATH is set correctly
  • Verify __init__.py files exist
  • Use sys.path manipulation if needed

Resources



Skill Version: 1.0.0
Last Updated: October 2025
Skill Category: Testing, Python, Quality Assurance, Test Automation
Compatible With: pytest 7.0+, Python 3.8+
