# Pyre Code ML Practice Platform
Skill by ara.so — Daily 2026 Skills collection.
Pyre Code is a self-hosted ML coding practice platform with 68 problems ranging from ReLU to flow matching. Users implement internals of modern AI systems (Transformers, vLLM, TRL, diffusion models) in a browser editor with instant pass/fail feedback, no GPU required.
## Installation
### Option A — One-liner (recommended)
```bash
git clone https://github.com/whwangovo/pyre-code.git
cd pyre-code
./setup.sh
npm run dev
```

`setup.sh` bootstraps a `.venv` (using `uv` if available, falling back to `python3 -m venv`).

### Option B — Conda
```bash
git clone https://github.com/whwangovo/pyre-code.git
cd pyre-code
conda create -n pyre python=3.11 -y && conda activate pyre
pip install -e ".[dev]"
npm install
npm run dev
```

### Option C — Docker
```bash
git clone https://github.com/whwangovo/pyre-code.git
cd pyre-code
docker compose up --build
```

Progress persists in a Docker volume. Reset with `docker compose down -v`.

### After installation
- Grading service: http://localhost:8000
- Web app: http://localhost:3000
## Project Structure
```
pyre/
├── web/                 # Next.js frontend
│   ├── src/app/         # Pages and API routes
│   ├── src/components/  # UI components
│   └── src/lib/         # Utilities, problem data
├── grading_service/     # FastAPI backend (grading API)
├── torch_judge/         # Judge engine — problem definitions + test runner
│   ├── problems/        # Individual problem modules
│   └── runner.py        # Test execution logic
├── setup.sh             # Environment bootstrap script
├── package.json         # Dev scripts (runs frontend + backend concurrently)
└── pyproject.toml       # Python package config
```

## Key Commands
```bash
# Start both frontend and backend concurrently
npm run dev

# Start only the grading service (FastAPI)
cd grading_service && uvicorn main:app --reload --port 8000

# Start only the frontend (Next.js)
cd web && npm run dev

# Run Python tests
pytest torch_judge/

# Install Python package in editable mode with dev deps
pip install -e ".[dev]"

# Docker: build and start
docker compose up --build

# Docker: stop and remove volumes (reset progress)
docker compose down -v
```

---

## Configuration
### Environment Variables
Create `web/.env.local` to override defaults:

```bash
# URL of the FastAPI grading service
GRADING_SERVICE_URL=http://localhost:8000

# SQLite database path for progress tracking
DB_PATH=./data/pyre.db
```
### AI Help (Optional)
Copy `web/.env.example` to `web/.env` and configure:

```bash
AI_HELP_BASE_URL=https://api.openai.com/v1
AI_HELP_API_KEY=$OPENAI_API_KEY
AI_HELP_MODEL=gpt-4o-mini
```

Any OpenAI-compatible endpoint works: OpenAI, Anthropic via proxy, Ollama, etc. Users can also set their own key in the UI if no server-side config is present.
## Problem Categories
| Category | Examples |
|---|---|
| Fundamentals | ReLU, Softmax, GELU, SwiGLU, Dropout, Embedding, Linear, Kaiming Init |
| Normalization | LayerNorm, BatchNorm, RMSNorm |
| Attention | Scaled Dot-Product, Multi-Head, Causal, GQA, Flash, Differential, MLA |
| Position Encoding | Sinusoidal PE, RoPE, ALiBi, NTK-aware RoPE |
| Architecture | GPT-2 Block, ViT Block, Conv2D, MoE, Depthwise Conv |
| Training | Adam, Cosine LR, Gradient Clipping, Mixed Precision, Activation Checkpointing |
| Distributed | Tensor Parallel, FSDP, Ring Attention |
| Inference | KV Cache, Top-k Sampling, Beam Search, Speculative Decoding, Paged Attention |
| Alignment | DPO, GRPO, PPO, Reward Model |
| Diffusion | Noise Schedule, DDIM Step, Flow Matching, adaLN-Zero |
| Adaptation | LoRA, QLoRA |
| Reasoning | MCTS, Multi-Token Prediction |
| SSM | Mamba SSM |
## Adding a New Problem
Problems live in `torch_judge/problems/`. Each problem is a Python module with a standard structure:

```python
# torch_judge/problems/my_new_problem.py
import torch
import torch.nn as nn
from typing import Any

PROBLEM_ID = "my_new_problem"
TITLE = "My New Problem: Implement Foo"
DIFFICULTY = "medium"  # "easy" | "medium" | "hard"
CATEGORY = "Fundamentals"

DESCRIPTION = r"""
# My New Problem

Implement the `foo` function that does XYZ.

## Input
- `x` (Tensor): shape `(batch, dim)`

## Output
- Tensor of shape `(batch, dim)`

## Formula
$$\text{foo}(x) = x^2 + 1$$
"""

STARTER_CODE = """
import torch

def foo(x: torch.Tensor) -> torch.Tensor:
    # Your implementation here
    pass
"""

REFERENCE_SOLUTION = """
import torch

def foo(x: torch.Tensor) -> torch.Tensor:
    return x ** 2 + 1
"""


def make_test_cases() -> list[dict[str, Any]]:
    """Return a list of test cases, each with inputs and expected outputs."""
    cases = []

    # Basic case
    x = torch.tensor([[1.0, 2.0, 3.0]])
    cases.append({
        "input": {"x": x},
        "expected": x ** 2 + 1,
        "description": "Basic 1x3 tensor",
    })

    # Batch case
    x = torch.randn(4, 16)
    cases.append({
        "input": {"x": x},
        "expected": x ** 2 + 1,
        "description": "Batch of 4, dim 16",
    })

    # Edge case: zeros
    x = torch.zeros(2, 8)
    cases.append({
        "input": {"x": x},
        "expected": torch.ones(2, 8),
        "description": "Zero tensor",
    })

    return cases


def grade(submission_code: str) -> dict[str, Any]:
    """Execute submission and return grading results."""
    namespace = {}
    exec(submission_code, namespace)
    if "foo" not in namespace:
        return {"passed": 0, "total": 0, "error": "Function 'foo' not found"}
    fn = namespace["foo"]

    test_cases = make_test_cases()
    results = []
    for i, case in enumerate(test_cases):
        try:
            output = fn(**case["input"])
            passed = torch.allclose(output, case["expected"], atol=1e-5)
            results.append({
                "case": i + 1,
                "description": case["description"],
                "passed": passed,
                "error": None if passed else f"Output mismatch: got {output}, expected {case['expected']}",
            })
        except Exception as e:
            results.append({
                "case": i + 1,
                "description": case["description"],
                "passed": False,
                "error": str(e),
            })

    passed = sum(r["passed"] for r in results)
    return {
        "passed": passed,
        "total": len(results),
        "results": results,
    }
```
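The `grade()` pattern above (run the submission through `exec` in a fresh namespace, look up the expected function, then score it against the test cases) can be exercised in isolation. Below is a torch-free sketch of the same flow, using plain equality in place of `torch.allclose`; the simplified `grade` signature here is illustrative, not the platform's actual API:

```python
def grade(submission_code: str, test_cases: list[dict]) -> dict:
    """Minimal exec-based grader: run submitted code, then check 'foo' on each case."""
    namespace = {}
    exec(submission_code, namespace)
    if "foo" not in namespace:
        return {"passed": 0, "total": len(test_cases), "error": "Function 'foo' not found"}
    fn = namespace["foo"]
    results = []
    for case in test_cases:
        try:
            # Plain equality stands in for torch.allclose in this sketch.
            results.append(fn(**case["input"]) == case["expected"])
        except Exception:
            results.append(False)
    return {"passed": sum(results), "total": len(results)}

cases = [
    {"input": {"x": 2}, "expected": 5},    # 2**2 + 1
    {"input": {"x": -3}, "expected": 10},  # (-3)**2 + 1
]

# A correct submission passes every case...
assert grade("def foo(x):\n    return x ** 2 + 1", cases) == {"passed": 2, "total": 2}

# ...and a buggy one is caught.
assert grade("def foo(x):\n    return x ** 2", cases) == {"passed": 0, "total": 2}
```

Note that the real service runs submissions with `torch` available and compares tensors with a tolerance; this sketch only shows the control flow.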
### Register the problem
After creating the module, register it in the problem registry (typically `torch_judge/registry.py` or equivalent):

```python
from torch_judge.problems import my_new_problem

PROBLEMS = [
    # ... existing problems ...
    my_new_problem,
]
```

## Grading Service API
The FastAPI grading service at `http://localhost:8000` exposes:

```bash
# Health check
GET /health

# List all problems
GET /problems

# Get a specific problem
GET /problems/{problem_id}

# Submit a solution
POST /submit
Content-Type: application/json

{
  "problem_id": "relu",
  "code": "import torch\n\ndef relu(x):\n    return torch.clamp(x, min=0)"
}
```

Response:

```json
{
  "problem_id": "relu",
  "passed": 3,
  "total": 3,
  "results": [
    {"case": 1, "description": "Basic positive values", "passed": true, "error": null},
    {"case": 2, "description": "Negative values", "passed": true, "error": null},
    {"case": 3, "description": "Mixed values", "passed": true, "error": null}
  ]
}
```

### Calling the grading API from Python
```python
import requests

response = requests.post(
    "http://localhost:8000/submit",
    json={
        "problem_id": "softmax",
        "code": """
import torch

def softmax(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
    x_max = x.max(dim=dim, keepdim=True).values
    x_exp = torch.exp(x - x_max)
    return x_exp / x_exp.sum(dim=dim, keepdim=True)
""",
    },
)

result = response.json()
print(f"Passed {result['passed']}/{result['total']} test cases")
for r in result["results"]:
    status = "✓" if r["passed"] else "✗"
    print(f"  {status} Case {r['case']}: {r['description']}")
    if r["error"]:
        print(f"    Error: {r['error']}")
```

## Example Implementations
### Scaled Dot-Product Attention
```python
import math

import torch
import torch.nn.functional as F

def scaled_dot_product_attention(
    q: torch.Tensor,  # (batch, heads, seq, d_k)
    k: torch.Tensor,
    v: torch.Tensor,
    mask: torch.Tensor | None = None,
) -> torch.Tensor:
    d_k = q.size(-1)
    scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(d_k)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)
    return torch.matmul(weights, v)
```

### RMSNorm
```python
import torch

def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    rms = x.pow(2).mean(dim=-1, keepdim=True).add(eps).sqrt()
    return x / rms * weight
```

### LoRA Linear Layer
```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.02)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base = x @ self.weight.T
        lora = x @ self.lora_A.T @ self.lora_B.T
        return base + self.scale * lora
```
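Because `lora_B` starts at zero, a freshly constructed `LoRALinear` behaves exactly like its frozen base projection; the adapter only alters the output once `lora_B` receives updates. A quick check of that property (repeating the class so the snippet runs standalone):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.02)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base = x @ self.weight.T
        lora = x @ self.lora_A.T @ self.lora_B.T
        return base + self.scale * lora

layer = LoRALinear(16, 32, rank=4)
x = torch.randn(2, 16)

# At init, lora_B == 0, so the LoRA path contributes nothing.
assert torch.allclose(layer(x), x @ layer.weight.T)

# After perturbing lora_B, the output diverges from the base projection.
with torch.no_grad():
    layer.lora_B.add_(0.1)
assert not torch.allclose(layer(x), x @ layer.weight.T)
```

This zero-init of `lora_B` is the standard LoRA trick: training starts from the pretrained model's behavior rather than from a random perturbation.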
### Cosine Learning Rate Schedule
```python
import math

def cosine_lr(step: int, max_steps: int, lr_max: float, lr_min: float = 0.0) -> float:
    if step >= max_steps:
        return lr_min
    progress = step / max_steps
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * progress))
```
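The schedule starts at `lr_max`, passes through the midpoint of the range halfway through training, and decays to `lr_min`. A quick usage check (repeating `cosine_lr` so the snippet runs standalone):

```python
import math

def cosine_lr(step: int, max_steps: int, lr_max: float, lr_min: float = 0.0) -> float:
    if step >= max_steps:
        return lr_min
    progress = step / max_steps
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * progress))

# Start of training: full learning rate.
assert cosine_lr(0, 100, lr_max=3e-4) == 3e-4

# Halfway: cos(pi/2) ~ 0, so the rate sits at the middle of the range.
assert abs(cosine_lr(50, 100, lr_max=3e-4) - 1.5e-4) < 1e-12

# At or past max_steps: clamped to lr_min.
assert cosine_lr(100, 100, lr_max=3e-4) == 0.0
```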
### KV Cache (Inference)
```python
import torch
from dataclasses import dataclass, field

@dataclass
class KVCache:
    keys: list[torch.Tensor] = field(default_factory=list)
    values: list[torch.Tensor] = field(default_factory=list)

    def update(self, new_k: torch.Tensor, new_v: torch.Tensor):
        self.keys.append(new_k)
        self.values.append(new_v)

    def get(self) -> tuple[torch.Tensor, torch.Tensor]:
        return torch.cat(self.keys, dim=-2), torch.cat(self.values, dim=-2)

    def __len__(self) -> int:
        return len(self.keys)
```
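In a decode loop the cache grows by one position per generated token, so attention at step *t* can reuse the *t-1* previously computed key/value tensors instead of recomputing them. A sketch of that usage, with random tensors standing in for real attention states (the shapes here are illustrative):

```python
import torch
from dataclasses import dataclass, field

@dataclass
class KVCache:
    keys: list[torch.Tensor] = field(default_factory=list)
    values: list[torch.Tensor] = field(default_factory=list)

    def update(self, new_k: torch.Tensor, new_v: torch.Tensor):
        self.keys.append(new_k)
        self.values.append(new_v)

    def get(self) -> tuple[torch.Tensor, torch.Tensor]:
        return torch.cat(self.keys, dim=-2), torch.cat(self.values, dim=-2)

    def __len__(self) -> int:
        return len(self.keys)

cache = KVCache()
batch, heads, d_k = 1, 4, 8

# Each decode step produces one new (k, v) pair of sequence length 1.
for _ in range(5):
    cache.update(torch.randn(batch, heads, 1, d_k),
                 torch.randn(batch, heads, 1, d_k))

k, v = cache.get()
# The concatenated cache now covers all 5 generated positions.
assert k.shape == (batch, heads, 5, d_k)
assert v.shape == (batch, heads, 5, d_k)
assert len(cache) == 5
```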
## Learning Paths
Choose a path based on your goal:
| Path | Focus |
|---|---|
| Transformer Internals | Activations → Normalization → Attention → GPT-2 Block |
| Attention & Position Encoding | Every attention variant + RoPE, ALiBi, NTK-RoPE |
| Train a GPT from Scratch | Embeddings → architecture → loss → optimizer → tricks |
| Inference & Distributed | KV cache, quantization, sampling, tensor parallel, FSDP |
| Alignment & Reasoning | Reward model → DPO → GRPO → PPO → MCTS |
| Vision Transformer | Conv → patch embedding → ViT block |
| Diffusion & DiT | Noise schedule → DDIM → flow matching → adaLN-Zero |
| LLM Frontier Architectures | GQA, Differential Attention, MLA, MoE, MTP |
Recommended progression:
```
Fundamentals → Transformer Internals → Train a GPT from Scratch
                      │                          │
                      ▼                          ▼
              Attention & PE          Inference & Distributed
                      │                          │
                      ▼                          ▼
           LLM Frontier Archs         Alignment & Reasoning
```

## Troubleshooting
### Grading service not reachable
```bash
# Check if the service is running (the API exposes GET /health)
curl http://localhost:8000/health

# If not, start it manually
cd grading_service
source ../.venv/bin/activate
uvicorn main:app --reload --port 8000
```
### Python environment issues
```bash
# Verify the correct Python is active
which python && python --version  # should be 3.11+

# Reinstall deps
pip install -e ".[dev]"

# With uv
uv pip install -e ".[dev]"
```
### Frontend can't connect to grading service
Check `web/.env.local`:

```bash
GRADING_SERVICE_URL=http://localhost:8000
```

Restart Next.js after changing `.env.local`.

### Docker: port conflicts
```bash
# Check what's on port 3000 or 8000
lsof -i :3000
lsof -i :8000

# Stop conflicting processes, then retry
docker compose up --build
```
### Submission always fails with import errors
Ensure the submission code only uses packages available in the environment. Core deps include `torch`, `numpy`, and `math`. Check `pyproject.toml` for the full list.

### Progress not persisting
The SQLite DB lives at `./data/pyre.db` by default. For Docker, ensure the volume is mounted:

```yaml
# docker-compose.yml
volumes:
  - pyre_data:/app/data
```

---

## Contributing a Problem
- Create `torch_judge/problems/{problem_id}.py` using the structure above
- Include `PROBLEM_ID`, `TITLE`, `DIFFICULTY`, `CATEGORY`, `DESCRIPTION`, `STARTER_CODE`, `REFERENCE_SOLUTION`, `make_test_cases()`, and `grade()`
- Register in the problem registry
- Write at least 3 test cases: basic, edge case, and a larger/random tensor case
- Verify with `pytest torch_judge/` before opening a PR
- Open an issue first for new categories or structural changes