Cloudflare Python Workers

Status: Beta (requires `python_workers` compatibility flag)
Runtime: Pyodide (Python 3.12+ compiled to WebAssembly)
Package Versions: workers-py@1.7.0, workers-runtime-sdk@0.3.1, wrangler@4.58.0
Last Verified: 2026-01-21

Quick Start (5 Minutes)


1. Prerequisites

Ensure you have installed:
  • uv - Python package manager
  • Node.js - Required for Wrangler

2. Initialize Project

```bash
# Create project directory
mkdir my-python-worker && cd my-python-worker

# Initialize Python project
uv init

# Install pywrangler
uv tool install workers-py

# Initialize Worker configuration
uv run pywrangler init
```

3. Create Entry Point

Create `src/entry.py`:

```python
from workers import WorkerEntrypoint, Response

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        return Response("Hello from Python Worker!")
```

4. Configure wrangler.jsonc

```jsonc
{
  "name": "my-python-worker",
  "main": "src/entry.py",
  "compatibility_date": "2025-12-01",
  "compatibility_flags": ["python_workers"]
}
```

5. Run Locally

```bash
uv run pywrangler dev
```

6. Deploy

```bash
uv run pywrangler deploy
```

Migration from Pre-December 2025 Workers

If you created a Python Worker before December 2025, you were limited to built-in packages. With pywrangler (Dec 2025), you can now deploy with external packages.

**Old Approach** (no longer needed):

```python
# Limited to built-in packages only
# Could only use httpx, aiohttp, beautifulsoup4, etc.
# Error: "You cannot yet deploy Python Workers that depend on
# packages defined in requirements.txt [code: 10021]"
```

**New Approach** (pywrangler):

```toml
# pyproject.toml
[project]
dependencies = ["fastapi", "any-pyodide-compatible-package"]
```

```bash
uv tool install workers-py
uv run pywrangler deploy  # Now works!
```

Historical Timeline:
  • April 2024 - Dec 2025: Package deployment completely blocked
  • Dec 8, 2025: Pywrangler released, enabling package deployment
  • Jan 2026: Open beta with full package support

Core Concepts


WorkerEntrypoint Class Pattern

As of August 2025, Python Workers use a class-based pattern (not global handlers):

```python
from workers import WorkerEntrypoint, Response

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # Access bindings via self.env
        value = await self.env.MY_KV.get("key")

        # Parse request
        url = request.url
        method = request.method

        return Response(f"Method: {method}, URL: {url}")
```

Accessing Bindings

All Cloudflare bindings are accessed via `self.env`:

```python
class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # D1 Database
        result = await self.env.DB.prepare("SELECT * FROM users").all()

        # KV Storage
        value = await self.env.MY_KV.get("key")
        await self.env.MY_KV.put("key", "value")

        # R2 Object Storage
        obj = await self.env.MY_BUCKET.get("file.txt")

        # Workers AI
        response = await self.env.AI.run("@cf/meta/llama-2-7b-chat-int8", {
            "prompt": "Hello!"
        })

        return Response("OK")
```

Supported Bindings:
  • D1 (SQL database)
  • KV (key-value storage)
  • R2 (object storage)
  • Workers AI
  • Vectorize
  • Durable Objects
  • Queues
  • Analytics Engine

Request/Response Handling

```python
from workers import WorkerEntrypoint, Response
from js import URL
import json

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # Parse JSON body
        if request.method == "POST":
            body = await request.json()
            return Response(
                json.dumps({"received": body}),
                headers={"Content-Type": "application/json"}
            )

        # Query parameters (URL is the JavaScript URL class via FFI;
        # JS classes are instantiated with .new(), and searchParams.get
        # returns None for a missing key)
        url = URL.new(request.url)
        name = url.searchParams.get("name") or "World"

        return Response(f"Hello, {name}!")
```
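Where the JavaScript `URL` class isn't needed, query parsing can also be done with the Python standard library alone. A minimal sketch (the example URL is illustrative):

```python
# Pure-stdlib query parsing, assuming request.url is a plain string.
from urllib.parse import urlparse, parse_qs

url = "https://example.com/greet?name=Ada"   # illustrative URL
query = parse_qs(urlparse(url).query)        # {'name': ['Ada']}
name = query.get("name", ["World"])[0]       # fall back to "World" if absent
print(name)
```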

Scheduled Handlers (Cron)

```python
from workers import handler

@handler
async def on_scheduled(event, env, ctx):
    # Run on cron schedule
    print(f"Cron triggered at {event.scheduledTime}")

    # Do work...
    await env.MY_KV.put("last_run", str(event.scheduledTime))
```

Configure in wrangler.jsonc:

```jsonc
{
  "triggers": {
    "crons": ["*/5 * * * *"]  // Every 5 minutes
  }
}
```

Python Workflows

Python Workflows enable durable, multi-step automation with automatic retries and state persistence.

Why Decorator Pattern?

Python Workflows use the `@step.do()` decorator pattern because Python does not easily support anonymous callbacks (unlike JavaScript/TypeScript, which allows inline arrow functions). This is a fundamental language difference, not a limitation of Cloudflare's implementation.

JavaScript Pattern (doesn't translate):

```javascript
await step.do("my step", async () => {
  // Inline callback
  return result;
});
```

Python Pattern (required):

```python
@step.do("my step")
async def my_step():
    # Named function with decorator
    return result

result = await my_step()
```
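To see how the decorator-with-argument shape works mechanically, here is a toy stand-in for `@step.do()` — not Cloudflare's implementation, just plain Python showing how a named, decorated coroutine replaces JavaScript's inline callback:

```python
import asyncio
import functools

# Toy decorator factory: takes a step name and returns a decorator that
# wraps the named coroutine - the same shape the Workflows API uses.
def step_do(name):
    def decorator(fn):
        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            print(f"running step: {name}")
            return await fn(*args, **kwargs)
        return wrapper
    return decorator

@step_do("my step")
async def my_step():
    return 42

print(asyncio.run(my_step()))  # prints "running step: my step", then 42
```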

Concurrency with asyncio.gather

Pyodide captures JavaScript promises (thenables) and proxies them as Python awaitables. This enables `Promise.all`-equivalent behavior using standard Python async patterns:

```python
import asyncio

@step.do("step_a")
async def step_a():
    return "A"

@step.do("step_b")
async def step_b():
    return "B"

# Concurrent execution (like Promise.all)
results = await asyncio.gather(step_a(), step_b())
# results = ["A", "B"]
```

**Why This Works**: JavaScript promises from workflow steps are proxied as Python awaitables, allowing standard asyncio concurrency primitives.

**Source**: [Python Workflows Blog](https://blog.cloudflare.com/python-workflows/)
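The same concurrency pattern can be exercised in plain Python, with bare coroutines standing in for workflow steps (the `@step.do` decorator is omitted so the sketch runs anywhere):

```python
import asyncio

async def step_a():
    return "A"

async def step_b():
    return "B"

async def main():
    # Both coroutines run concurrently, like Promise.all in JavaScript.
    return await asyncio.gather(step_a(), step_b())

print(asyncio.run(main()))  # ['A', 'B']
```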

Basic Workflow

```python
from workers import WorkflowEntrypoint, WorkerEntrypoint, Response

class MyWorkflow(WorkflowEntrypoint):
    async def run(self, event, step):
        # Step 1
        @step.do("fetch data")
        async def fetch_data():
            response = await fetch("https://api.example.com/data")
            return await response.json()

        data = await fetch_data()

        # Step 2: Sleep
        await step.sleep("wait", "10 seconds")

        # Step 3: Process
        @step.do("process data")
        async def process_data():
            return {"processed": True, "count": len(data)}

        result = await process_data()
        return result


class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # Create workflow instance
        instance = await self.env.MY_WORKFLOW.create()
        return Response(f"Workflow started: {instance.id}")
```

DAG Dependencies

Define step dependencies for parallel execution:

```python
class MyWorkflow(WorkflowEntrypoint):
    async def run(self, event, step):
        @step.do("step_a")
        async def step_a():
            return "A done"

        @step.do("step_b")
        async def step_b():
            return "B done"

        # step_c waits for both step_a and step_b
        @step.do("step_c", depends=[step_a, step_b], concurrent=True)
        async def step_c(result_a, result_b):
            return f"C received: {result_a}, {result_b}"

        return await step_c()
```
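The fan-in shape above can be sketched in plain asyncio — this is not the Workflows `depends=` API, just an illustration of the data flow:

```python
import asyncio

async def step_a():
    return "A done"

async def step_b():
    return "B done"

async def step_c(result_a, result_b):
    return f"C received: {result_a}, {result_b}"

async def main():
    # step_a and step_b run concurrently; step_c consumes both results.
    result_a, result_b = await asyncio.gather(step_a(), step_b())
    return await step_c(result_a, result_b)

print(asyncio.run(main()))  # C received: A done, B done
```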

Workflow Configuration

```jsonc
{
  "compatibility_flags": ["python_workers", "python_workflows"],
  "compatibility_date": "2025-12-01",
  "workflows": [
    {
      "name": "my-workflow",
      "binding": "MY_WORKFLOW",
      "class_name": "MyWorkflow"
    }
  ]
}
```

Package Management


pyproject.toml Configuration

```toml
[project]
name = "my-python-worker"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "beautifulsoup4",
    "httpx"
]

[dependency-groups]
dev = [
    "workers-py",
    "workers-runtime-sdk"
]
```

Supported Packages

Python Workers support:
  • Pure Python packages from PyPI
  • Pyodide packages (pre-built for WebAssembly)

HTTP Clients

Only async HTTP libraries work:

```python
# ✅ WORKS - httpx (async)
import httpx

async with httpx.AsyncClient() as client:
    response = await client.get("https://api.example.com")

# ✅ WORKS - aiohttp
import aiohttp

async with aiohttp.ClientSession() as session:
    async with session.get("https://api.example.com") as response:
        data = await response.json()

# ❌ DOES NOT WORK - requests (sync)
import requests  # Will fail!
```

Requesting New Packages


FFI (Foreign Function Interface)

Access JavaScript APIs from Python via Pyodide's FFI:

JavaScript Globals

```python
from workers import WorkerEntrypoint
from js import fetch, console, Response as JSResponse

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # Use JavaScript fetch
        response = await fetch("https://api.example.com")
        data = await response.json()

        # Console logging
        console.log("Fetched data:", data)

        # Return JavaScript Response
        return JSResponse.new("Hello!")
```

Type Conversions

Important: `to_py()` is a METHOD on JavaScript objects, not a standalone function. Only `to_js()` is a function.

```python
from workers import Response
from js import Object
from pyodide.ffi import to_js

# ❌ WRONG - ImportError! to_py cannot be imported from pyodide.ffi:
# from pyodide.ffi import to_py
# python_data = to_py(js_data)

# ✅ CORRECT - to_py() is a method on the JS proxy object
async def fetch(self, request):
    data = await request.json()  # Returns a JS object proxy
    python_data = data.to_py()   # Convert to a Python dict

    # Convert a Python dict to a JavaScript object
    python_dict = {"name": "test", "count": 42}
    js_object = to_js(python_dict, dict_converter=Object.fromEntries)

    # Use in Response
    return Response(to_js({"status": "ok"}))
```

**Source**: [GitHub Issue #3322](https://github.com/cloudflare/workerd/issues/3322) (Pyodide maintainer clarification)

---

Known Issues Prevention

This skill prevents 11 documented issues:

Issue #1: Legacy Handler Pattern

Error: `TypeError: on_fetch is not defined`
Why: Handler pattern changed in August 2025.

```python
# ❌ OLD (deprecated)
@handler
async def on_fetch(request):
    return Response("Hello")

# ✅ NEW (current)
class Default(WorkerEntrypoint):
    async def fetch(self, request):
        return Response("Hello")
```

Issue #2: Sync HTTP Libraries

Error: `RuntimeError: cannot use blocking call in async context`
Why: Python Workers run async-only. Sync libraries block the event loop.

```python
# ❌ FAILS
import requests
response = requests.get("https://api.example.com")

# ✅ WORKS
import httpx
async with httpx.AsyncClient() as client:
    response = await client.get("https://api.example.com")
```

Issue #3: Native/Compiled Packages

Error: `ModuleNotFoundError: No module named 'numpy'` (or similar)
Why: Only pure Python packages work. Native C extensions are not supported.
Solution: Use Pyodide-compatible alternatives or check the Pyodide package list.
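One quick way to tell whether a package ships pure Python is its wheel filename tag. A small helper sketch (the wheel names below are illustrative):

```python
# Pure-Python wheels carry the platform-independent "none-any" suffix,
# e.g. "httpx-0.27.0-py3-none-any.whl"; compiled wheels name an interpreter
# ABI and platform instead, e.g. "...-cp312-cp312-manylinux_2_17_x86_64.whl".
def is_pure_python_wheel(wheel_filename: str) -> bool:
    return wheel_filename.endswith("none-any.whl")

print(is_pure_python_wheel("httpx-0.27.0-py3-none-any.whl"))                       # True
print(is_pure_python_wheel("numpy-1.26.4-cp312-cp312-manylinux_2_17_x86_64.whl"))  # False
```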

Issue #4: Missing Compatibility Flags

Error: `Error: Python Workers require the python_workers compatibility flag`
Fix: Add to wrangler.jsonc:

```jsonc
{
  "compatibility_flags": ["python_workers"]
}
```

For Workflows, also add `"python_workflows"`.

Issue #5: I/O Outside Workflow Steps

Error: Workflow state not persisted correctly
Why: All I/O must happen inside `@step.do` for durability.

```python
# ❌ BAD - fetch outside step
response = await fetch("https://api.example.com")

@step.do("use data")
async def use_data():
    return await response.json()  # response may be stale on retry

# ✅ GOOD - fetch inside step
@step.do("fetch and use")
async def fetch_and_use():
    response = await fetch("https://api.example.com")
    return await response.json()
```

Issue #6: Type Serialization Errors

Error: `TypeError: Object of type X is not JSON serializable`
Why: Workflow step return values must be JSON-serializable.
Fix: Convert complex objects before returning:

```python
from datetime import datetime

@step.do("process")
async def process():
    # Convert datetime to string
    return {"timestamp": datetime.now().isoformat()}
```
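The failure and the fix can be reproduced in plain Python:

```python
import json
from datetime import datetime

now = datetime(2026, 1, 21, 12, 0, 0)  # fixed timestamp for a reproducible demo

# A datetime is not JSON-serializable as-is...
try:
    json.dumps({"timestamp": now})
except TypeError as err:
    print("TypeError:", err)

# ...but its ISO-8601 string is.
print(json.dumps({"timestamp": now.isoformat()}))  # {"timestamp": "2026-01-21T12:00:00"}
```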

Issue #7: Cold Start Performance

Note: Python Workers have higher cold starts than JavaScript. With Wasm memory snapshots (Dec 2025), heavy packages like FastAPI and Pydantic now load in ~1 second (down from ~10 seconds previously), but this is still roughly 20x slower than JavaScript Workers (~50ms).

Performance Numbers (as of Dec 2025):
  • Before snapshots: ~10 seconds for FastAPI/Pydantic
  • After snapshots: ~1 second (10x improvement)
  • JavaScript equivalent: ~50ms

Mitigation:
  • Minimize top-level imports
  • Use lazy loading for heavy packages
  • Consider JavaScript Workers for latency-critical paths
  • Wasm snapshots automatically improve cold starts (no config needed)
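The "lazy loading" mitigation can be sketched as a cached deferred import. Here `json` merely stands in for a heavy dependency such as pydantic:

```python
# Defer a heavy import until the first request that needs it, keeping
# module initialization (and therefore cold starts) light.
_heavy = None

def get_heavy_module():
    global _heavy
    if _heavy is None:
        import json as heavy_module  # stand-in for a heavy dependency
        _heavy = heavy_module
    return _heavy

print(get_heavy_module().dumps({"ok": True}))  # {"ok": true}
```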

Issue #8: Package Installation Failures

Error: `Failed to install package X`
Causes:
  • Package has native dependencies
  • Package not in the Pyodide ecosystem
  • Network issues during bundling
Fix: Check package compatibility, use alternatives, or request support.

Issue #9: Dev Registry Breaks JS-to-Python RPC

Error: `Network connection lost` when calling a Python Worker from a JavaScript Worker
Source: GitHub Issue #11438
Why It Happens: The dev registry doesn't properly route RPC calls between separately-run Workers in different terminals.
Prevention:

```bash
# ❌ Doesn't work - separate terminals
# Terminal 1: npx wrangler dev (JS worker)
# Terminal 2: npx wrangler dev (Python worker)
# Result: "Network connection lost" error

# ✅ Works - single wrangler instance
npx wrangler dev -c ts/wrangler.jsonc -c py/wrangler.jsonc
```

Run both workers in a single wrangler instance to enable proper RPC communication.

Issue #10: HTMLRewriter Memory Limit with Data URLs

Error: `TypeError: Parser error: The memory limit has been exceeded`
Source: GitHub Issue #10814
Why It Happens: Large inline `data:` URLs (>10MB) in HTML trigger parser memory limits. This is NOT about response size: 10MB of plain text works fine, but 10MB of HTML with embedded data URLs fails. Common with Python Jupyter Notebooks that use inline images for plots.
Prevention:

```python
# ❌ FAILS - HTMLRewriter triggered on notebook HTML with data: URLs
response = await fetch("https://origin.example.com/notebook.html")
return response  # Crashes if HTML contains large data: URLs

# ✅ WORKS - Stream directly or use text/plain
response = await fetch("https://origin.example.com/notebook.html")
headers = {"Content-Type": "text/plain"}  # Bypass parser
return Response(await response.text(), headers=headers)
```

**Workarounds**:
- Avoid HTMLRewriter on notebook content (stream directly)
- Pre-process notebooks to extract data URLs to external files
- Use `text/plain` content-type to bypass parser

Issue #11: PRNG Cannot Be Seeded During Initialization

Error: Deployment fails with a user error
Source: Python Workers Redux Blog
Why It Happens: Wasm snapshots don't support PRNG initialization before request handlers run. If you call pseudorandom number generator APIs (like `random.seed()`) during module initialization, deployment FAILS.
Prevention:

```python
import random

# ❌ FAILS deployment - module-level PRNG call
random.seed(42)

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        return Response(str(random.randint(1, 100)))

# ✅ WORKS - PRNG calls inside handlers
class Default(WorkerEntrypoint):
    async def fetch(self, request):
        random.seed(42)  # Initialize inside handler
        return Response(str(random.randint(1, 100)))
```

Only call PRNG functions inside request handlers, not at module level.
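A plain-Python sketch of the ✅ pattern: seeding inside the handler still gives deterministic per-request results, without touching the PRNG at module import time:

```python
import random

def handle_request():
    random.seed(42)  # initialize inside the handler, not at module level
    return random.randint(1, 100)

# Same seed, same value on every call.
print(handle_request() == handle_request())  # True
```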

---

Best Practices


Always Do

  • Use the `WorkerEntrypoint` class pattern
  • Use async HTTP clients (httpx, aiohttp)
  • Put all I/O inside workflow steps
  • Add the `python_workers` compatibility flag
  • Use `self.env` for all bindings
  • Return JSON-serializable data from workflow steps

Never Do

  • Use sync HTTP libraries (requests)
  • Use native/compiled packages
  • Perform I/O outside workflow steps
  • Use the legacy `@handler` decorator for fetch
  • Expect JavaScript-level cold start times

Framework Note: FastAPI

FastAPI can work with Python Workers but with limitations:

```python
from fastapi import FastAPI
from workers import WorkerEntrypoint

app = FastAPI()

@app.get("/")
async def root():
    return {"message": "Hello from FastAPI"}

class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # Route through FastAPI
        return await app(request)
```

Limitations:
  • Async-only (no sync endpoints)
  • No WSGI middleware
  • Beta stability

See the Cloudflare FastAPI examples for details.

Official Documentation


Dependencies

```json
{
  "workers-py": "1.7.0",
  "workers-runtime-sdk": "0.3.1",
  "wrangler": "4.58.0"
}
```

Note: Always pin versions for reproducible builds. Check PyPI workers-py for the latest releases.

Production Validation

  • Cloudflare changelog: Dec 8, 2025 (Pywrangler + cold start improvements)
  • workers-py 1.7.0: Latest stable (Jan 2026)
  • Python Workflows beta: Aug 22, 2025
  • Handler pattern change: Aug 14, 2025

Compatibility Date Guidance:
  • Use `2025-12-01` for new projects (latest features, including pywrangler improvements)
  • Use `2025-08-01` only if you need to match older production Workers