pyre-code-ml-practice

Pyre Code ML Practice Platform


Skill by ara.so — Daily 2026 Skills collection.
Pyre Code is a self-hosted ML coding practice platform with 68 problems ranging from ReLU to flow matching. Users implement internals of modern AI systems (Transformers, vLLM, TRL, diffusion models) in a browser editor with instant pass/fail feedback, no GPU required.


Installation


Option A — One-liner (recommended)


```bash
git clone https://github.com/whwangovo/pyre-code.git
cd pyre-code
./setup.sh
npm run dev
```

`setup.sh` creates a `.venv` (prefers `uv`, falls back to `python3 -m venv`), installs all Python deps, then prints the start command.

Option B — Conda


```bash
git clone https://github.com/whwangovo/pyre-code.git
cd pyre-code
conda create -n pyre python=3.11 -y && conda activate pyre
pip install -e ".[dev]"
npm install
npm run dev
```

Option C — Docker


```bash
git clone https://github.com/whwangovo/pyre-code.git
cd pyre-code
docker compose up --build
```

Progress persists in a Docker volume. Reset with `docker compose down -v`.

After installation


  • Grading service: http://localhost:8000
  • Web app: http://localhost:3000


Project Structure


```
pyre/
├── web/                        # Next.js frontend
│   ├── src/app/                # Pages and API routes
│   ├── src/components/         # UI components
│   └── src/lib/                # Utilities, problem data
├── grading_service/            # FastAPI backend (grading API)
├── torch_judge/                # Judge engine — problem definitions + test runner
│   ├── problems/               # Individual problem modules
│   └── runner.py               # Test execution logic
├── setup.sh                    # Environment bootstrap script
├── package.json                # Dev scripts (runs frontend + backend concurrently)
└── pyproject.toml              # Python package config
```


Key Commands


```bash
# Start both frontend and backend concurrently
npm run dev

# Start only the grading service (FastAPI)
cd grading_service && uvicorn main:app --reload --port 8000

# Start only the frontend (Next.js)
cd web && npm run dev

# Run Python tests
pytest torch_judge/

# Install Python package in editable mode with dev deps
pip install -e ".[dev]"

# Docker: build and start
docker compose up --build

# Docker: stop and remove volumes (reset progress)
docker compose down -v
```

---

Configuration


Environment Variables


Create `web/.env.local` to override defaults:

```bash
# URL of the FastAPI grading service
GRADING_SERVICE_URL=http://localhost:8000

# SQLite database path for progress tracking
DB_PATH=./data/pyre.db
```

AI Help (Optional)


Copy `web/.env.example` to `web/.env` and configure:

```bash
AI_HELP_BASE_URL=https://api.openai.com/v1
AI_HELP_API_KEY=$OPENAI_API_KEY
AI_HELP_MODEL=gpt-4o-mini
```

Any OpenAI-compatible endpoint works: OpenAI, Anthropic via proxy, Ollama, etc. Users can also set their own key in the UI if no server-side config is present.


Problem Categories


| Category | Examples |
| --- | --- |
| Fundamentals | ReLU, Softmax, GELU, SwiGLU, Dropout, Embedding, Linear, Kaiming Init |
| Normalization | LayerNorm, BatchNorm, RMSNorm |
| Attention | Scaled Dot-Product, Multi-Head, Causal, GQA, Flash, Differential, MLA |
| Position Encoding | Sinusoidal PE, RoPE, ALiBi, NTK-aware RoPE |
| Architecture | GPT-2 Block, ViT Block, Conv2D, MoE, Depthwise Conv |
| Training | Adam, Cosine LR, Gradient Clipping, Mixed Precision, Activation Checkpointing |
| Distributed | Tensor Parallel, FSDP, Ring Attention |
| Inference | KV Cache, Top-k Sampling, Beam Search, Speculative Decoding, Paged Attention |
| Alignment | DPO, GRPO, PPO, Reward Model |
| Diffusion | Noise Schedule, DDIM Step, Flow Matching, adaLN-Zero |
| Adaptation | LoRA, QLoRA |
| Reasoning | MCTS, Multi-Token Prediction |
| SSM | Mamba SSM |


Adding a New Problem


Problems live in `torch_judge/problems/`. Each problem is a Python module with a standard structure:

```python
# torch_judge/problems/my_new_problem.py
import torch
import torch.nn as nn
from typing import Any

PROBLEM_ID = "my_new_problem"
TITLE = "My New Problem: Implement Foo"
DIFFICULTY = "medium"  # "easy" | "medium" | "hard"
CATEGORY = "Fundamentals"

# Raw string so the LaTeX backslash in the formula survives
DESCRIPTION = r"""
My New Problem

Implement the `foo` function that does XYZ.

Input
- `x` (Tensor): shape `(batch, dim)`

Output
- Tensor of shape `(batch, dim)`

Formula

$$\text{foo}(x) = x^2 + 1$$
"""

STARTER_CODE = """
import torch

def foo(x: torch.Tensor) -> torch.Tensor:
    # Your implementation here
    pass
"""

REFERENCE_SOLUTION = """
import torch

def foo(x: torch.Tensor) -> torch.Tensor:
    return x ** 2 + 1
"""

def make_test_cases() -> list[dict[str, Any]]:
    """Return a list of test cases, each with inputs and expected outputs."""
    cases = []

    # Basic case
    x = torch.tensor([[1.0, 2.0, 3.0]])
    cases.append({
        "input": {"x": x},
        "expected": x ** 2 + 1,
        "description": "Basic 1x3 tensor",
    })

    # Batch case
    x = torch.randn(4, 16)
    cases.append({
        "input": {"x": x},
        "expected": x ** 2 + 1,
        "description": "Batch of 4, dim 16",
    })

    # Edge case: zeros
    x = torch.zeros(2, 8)
    cases.append({
        "input": {"x": x},
        "expected": torch.ones(2, 8),
        "description": "Zero tensor",
    })

    return cases

def grade(submission_code: str) -> dict[str, Any]:
    """Execute submission and return grading results."""
    namespace = {}
    exec(submission_code, namespace)

    if "foo" not in namespace:
        return {"passed": 0, "total": 0, "error": "Function 'foo' not found"}

    fn = namespace["foo"]
    test_cases = make_test_cases()
    results = []

    for i, case in enumerate(test_cases):
        try:
            output = fn(**case["input"])
            passed = torch.allclose(output, case["expected"], atol=1e-5)
            results.append({
                "case": i + 1,
                "description": case["description"],
                "passed": passed,
                "error": None if passed else f"Output mismatch: got {output}, expected {case['expected']}",
            })
        except Exception as e:
            results.append({
                "case": i + 1,
                "description": case["description"],
                "passed": False,
                "error": str(e),
            })

    passed = sum(r["passed"] for r in results)
    return {
        "passed": passed,
        "total": len(results),
        "results": results,
    }
```
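The `grade()` flow above can be sketched without torch: exec the submission into a fresh namespace, look up the target function, and score it against known cases. The scalar cases below are hypothetical stand-ins for the tensor tests.

```python
from typing import Any

def grade(submission_code: str) -> dict[str, Any]:
    # Execute the submission in an isolated namespace, as the module's grade() does
    namespace: dict[str, Any] = {}
    exec(submission_code, namespace)

    if "foo" not in namespace:
        return {"passed": 0, "total": 0, "error": "Function 'foo' not found"}

    fn = namespace["foo"]
    # Scalar stand-ins for the tensor test cases (hypothetical values)
    cases = [(2.0, 5.0), (0.0, 1.0), (-3.0, 10.0)]  # foo(x) = x**2 + 1
    passed = sum(1 for x, expected in cases if fn(x) == expected)
    return {"passed": passed, "total": len(cases)}

report = grade("def foo(x):\n    return x ** 2 + 1")
```

Tensor comparisons use `torch.allclose` with a tolerance instead of `==`, but the control flow is the same.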

Register the problem


After creating the module, register it in the problem registry (typically `torch_judge/registry.py` or equivalent):

```python
from torch_judge.problems import my_new_problem

PROBLEMS = [
    # ... existing problems ...
    my_new_problem,
]
```

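Since each registered module carries a unique `PROBLEM_ID`, a dict index gives O(1) lookup for the API routes. A sketch of that idea, with `SimpleNamespace` standing in for real problem modules (the actual registry layout may differ):

```python
from types import SimpleNamespace

# Hypothetical stand-ins for problem modules; attributes mirror the structure above
relu = SimpleNamespace(PROBLEM_ID="relu", CATEGORY="Fundamentals")
my_new_problem = SimpleNamespace(PROBLEM_ID="my_new_problem", CATEGORY="Fundamentals")

PROBLEMS = [relu, my_new_problem]
PROBLEMS_BY_ID = {p.PROBLEM_ID: p for p in PROBLEMS}

# A duplicate PROBLEM_ID would silently collapse two entries into one key
assert len(PROBLEMS_BY_ID) == len(PROBLEMS), "duplicate PROBLEM_ID"
```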

Grading Service API


The FastAPI grading service at `http://localhost:8000` exposes:

```
# Health check
GET /health

# List all problems
GET /problems

# Get a specific problem
GET /problems/{problem_id}

# Submit a solution
POST /submit
Content-Type: application/json

{
  "problem_id": "relu",
  "code": "import torch\n\ndef relu(x):\n    return torch.clamp(x, min=0)"
}
```

Response

```json
{
  "problem_id": "relu",
  "passed": 3,
  "total": 3,
  "results": [
    {"case": 1, "description": "Basic positive values", "passed": true, "error": null},
    {"case": 2, "description": "Negative values", "passed": true, "error": null},
    {"case": 3, "description": "Mixed values", "passed": true, "error": null}
  ]
}
```

Calling the grading API from Python


```python
import requests

response = requests.post(
    "http://localhost:8000/submit",
    json={
        "problem_id": "softmax",
        "code": """
import torch

def softmax(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
    x_max = x.max(dim=dim, keepdim=True).values
    x_exp = torch.exp(x - x_max)
    return x_exp / x_exp.sum(dim=dim, keepdim=True)
"""
    }
)

result = response.json()
print(f"Passed {result['passed']}/{result['total']} test cases")
for r in result["results"]:
    status = "✓" if r["passed"] else "✗"
    print(f"  {status} Case {r['case']}: {r['description']}")
    if r["error"]:
        print(f"      Error: {r['error']}")
```


Example Implementations


Scaled Dot-Product Attention


```python
import torch
import torch.nn.functional as F
import math

def scaled_dot_product_attention(
    q: torch.Tensor,  # (batch, heads, seq, d_k)
    k: torch.Tensor,
    v: torch.Tensor,
    mask: torch.Tensor | None = None,
) -> torch.Tensor:
    d_k = q.size(-1)
    scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(d_k)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float('-inf'))
    weights = F.softmax(scores, dim=-1)
    return torch.matmul(weights, v)
```
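As a sanity check on the math (a torch-free sketch on plain lists, not the platform's actual tests): when one key's score dominates, the softmax weights collapse onto that key's value row.

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_1q(q, keys, values):
    """Scaled dot-product attention for a single query vector over list-of-list keys/values."""
    d_k = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in keys]
    w = softmax(scores)
    # weighted sum of value rows
    return [sum(wi * v[j] for wi, v in zip(w, values)) for j in range(len(values[0]))]

# Query aligned with the first key -> output is (almost exactly) the first value row
out = attention_1q([10.0, 0.0], [[10.0, 0.0], [0.0, 10.0]], [[1.0, 0.0], [0.0, 1.0]])
```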

RMSNorm


```python
import torch

def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    rms = x.pow(2).mean(dim=-1, keepdim=True).add(eps).sqrt()
    return x / rms * weight
```
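A useful property to check (torch-free sketch of the same formula): with unit weight, the output's root-mean-square is approximately 1, which is exactly the normalization RMSNorm provides.

```python
import math

def rms_norm(xs, weight, eps=1e-6):
    # same formula as above, on a plain list instead of a tensor
    rms = math.sqrt(sum(x * x for x in xs) / len(xs) + eps)
    return [x / rms * w for x, w in zip(xs, weight)]

out = rms_norm([3.0, 4.0], [1.0, 1.0])
out_rms = math.sqrt(sum(y * y for y in out) / len(out))  # ~1.0 up to eps
```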

LoRA Linear Layer


```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.02)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base = x @ self.weight.T
        lora = x @ self.lora_A.T @ self.lora_B.T
        return base + self.scale * lora
```
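The payoff is in the parameter count: the low-rank update `B @ A` has `rank * (in + out)` trainable parameters versus `in * out` for a full matrix. With illustrative sizes (not tied to any specific problem):

```python
in_features, out_features, rank = 4096, 4096, 8

full_params = in_features * out_features           # dense weight update
lora_params = rank * (in_features + out_features)  # A: (r, in) + B: (out, r)
ratio = lora_params / full_params                  # fraction of full cost
```

Initializing `lora_B` to zeros, as in the class above, makes the LoRA path a no-op at step 0, so training starts from the base weights.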

Cosine Learning Rate Schedule


```python
import math

def cosine_lr(step: int, max_steps: int, lr_max: float, lr_min: float = 0.0) -> float:
    if step >= max_steps:
        return lr_min
    progress = step / max_steps
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * progress))
```
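Sampling the schedule at the start, midpoint, and end shows the expected shape (the function is repeated so the sketch is self-contained):

```python
import math

def cosine_lr(step: int, max_steps: int, lr_max: float, lr_min: float = 0.0) -> float:
    if step >= max_steps:
        return lr_min
    progress = step / max_steps
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * progress))

# starts at lr_max, crosses the midpoint of lr_max and lr_min halfway, decays to lr_min
lrs = [cosine_lr(s, 100, 3e-4) for s in (0, 50, 100)]
```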

KV Cache (Inference)


```python
import torch
from dataclasses import dataclass, field

@dataclass
class KVCache:
    keys: list[torch.Tensor] = field(default_factory=list)
    values: list[torch.Tensor] = field(default_factory=list)

    def update(self, new_k: torch.Tensor, new_v: torch.Tensor):
        self.keys.append(new_k)
        self.values.append(new_v)

    def get(self) -> tuple[torch.Tensor, torch.Tensor]:
        return torch.cat(self.keys, dim=-2), torch.cat(self.values, dim=-2)

    def __len__(self) -> int:
        return len(self.keys)
```

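The cache trades memory for compute: it stores one key and one value per layer per token, so its size grows linearly in sequence length. A hedged back-of-envelope with illustrative shapes (not tied to a specific model):

```python
def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int, seq_len: int,
                   bytes_per_elem: int = 2) -> int:
    # 2x for keys AND values; bytes_per_elem=2 assumes fp16/bf16 storage
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

# 32 layers, 32 KV heads of dim 128, 4096-token context -> 2 GiB per sequence
size = kv_cache_bytes(layers=32, kv_heads=32, head_dim=128, seq_len=4096)
```

Reducing `kv_heads` (as GQA does) scales this down proportionally, which is why it appears alongside the inference problems.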

Learning Paths


Choose a path based on your goal:

| Path | Focus |
| --- | --- |
| Transformer Internals | Activations → Normalization → Attention → GPT-2 Block |
| Attention & Position Encoding | Every attention variant + RoPE, ALiBi, NTK-RoPE |
| Train a GPT from Scratch | Embeddings → architecture → loss → optimizer → tricks |
| Inference & Distributed | KV cache, quantization, sampling, tensor parallel, FSDP |
| Alignment & Reasoning | Reward model → DPO → GRPO → PPO → MCTS |
| Vision Transformer | Conv → patch embedding → ViT block |
| Diffusion & DiT | Noise schedule → DDIM → flow matching → adaLN-Zero |
| LLM Frontier Architectures | GQA, Differential Attention, MLA, MoE, MTP |

Recommended progression:

```
Fundamentals → Transformer Internals → Train a GPT from Scratch
                      │                         │
                      ▼                         ▼
             Attention & PE            Inference & Distributed
                      │                         │
                      ▼                         ▼
             LLM Frontier Archs        Alignment & Reasoning
```


Troubleshooting


Grading service not reachable


```bash
# Check if the service is running (it exposes GET /health)
curl http://localhost:8000/health

# If not, start it manually
cd grading_service
source ../.venv/bin/activate
uvicorn main:app --reload --port 8000
```

Python environment issues


```bash
# Verify the correct Python is active
which python && python --version  # should be 3.11+

# Reinstall deps
pip install -e ".[dev]"

# With uv
uv pip install -e ".[dev]"
```

Frontend can't connect to grading service


Check `web/.env.local`:

```bash
GRADING_SERVICE_URL=http://localhost:8000
```

Restart Next.js after changing `.env.local`.

Docker: port conflicts


```bash
# Check what's on port 3000 or 8000
lsof -i :3000
lsof -i :8000

# Stop conflicting processes, then retry
docker compose up --build
```

Submission always fails with import errors


Ensure the submission code only uses packages available in the environment. Core deps include `torch`, `numpy`, and the standard library (`math`). Check `pyproject.toml` for the full list.

Progress not persisting


The SQLite DB lives at `./data/pyre.db` by default. For Docker, ensure the volume is mounted:

```yaml
# docker-compose.yml
volumes:
  - pyre_data:/app/data
```
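For a feel of what the persistence layer involves (a hypothetical schema, not the app's actual one), stdlib `sqlite3` is enough to record per-problem results:

```python
import sqlite3

# The app stores its DB at ./data/pyre.db; an in-memory DB is used here for illustration
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IF NOT EXISTS progress "
    "(problem_id TEXT PRIMARY KEY, passed INTEGER, total INTEGER)"
)
# Upsert the latest submission result for a problem
conn.execute("INSERT OR REPLACE INTO progress VALUES (?, ?, ?)", ("relu", 3, 3))
row = conn.execute(
    "SELECT passed, total FROM progress WHERE problem_id = ?", ("relu",)
).fetchone()
```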

---

Contributing a Problem


  1. Create `torch_judge/problems/{problem_id}.py` using the structure above
  2. Include `PROBLEM_ID`, `TITLE`, `DIFFICULTY`, `CATEGORY`, `DESCRIPTION`, `STARTER_CODE`, `REFERENCE_SOLUTION`, `make_test_cases()`, and `grade()`
  3. Register in the problem registry
  4. Write at least 3 test cases: basic, edge case, and a larger/random tensor case
  5. Verify with `pytest torch_judge/` before opening a PR
  6. Open an issue first for new categories or structural changes