ai-following-rules


Make Your AI Follow the Rules


Guide the user through defining and enforcing rules their AI must follow. The key insight: don't ask the AI to follow rules — program constraints that enforce them automatically.

The two types of rules


DSPy gives you two constraint primitives:
|  | `dspy.Assert` | `dspy.Suggest` |
| --- | --- | --- |
| Behavior | Hard stop — retries if violated | Soft nudge — continues if violated |
| Use for | Must-comply rules (format, safety, legal) | Should-comply preferences (style, tone) |
| On failure | LM retries with error feedback | LM gets suggestion, continues |
| PM translation | "This must happen" | "This should happen" |
```python
import dspy
```

Hard rule — will retry up to `max_backtrack_attempts` times:

```python
dspy.Assert(
    condition,        # bool: does the output satisfy the rule?
    "error message"   # str: feedback to the LM on what went wrong
)
```

Soft rule — nudges but doesn't block:

```python
dspy.Suggest(
    condition,
    "suggestion message"
)
```

Step 1: Identify your rules


Ask the user:
  1. What rules does the AI break? (too long? wrong format? forbidden content? missing fields?)
  2. Which rules are hard requirements vs nice-to-haves? (Assert vs Suggest)
  3. What should happen when a rule is broken? (retry, flag for review, fail loudly)
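The answers can be captured as a simple triage table before writing any DSPy code. A minimal sketch, with illustrative rules (the names and fields here are hypothetical, not part of any API):

```python
# Hypothetical rule inventory from a Step 1 interview.
# severity "hard" maps to dspy.Assert, "soft" maps to dspy.Suggest.
rules = [
    {"rule": "answer under 280 words", "severity": "hard", "on_fail": "retry"},
    {"rule": "no competitor mentions", "severity": "hard", "on_fail": "retry"},
    {"rule": "friendly tone",          "severity": "soft", "on_fail": "continue"},
]

asserts = [r["rule"] for r in rules if r["severity"] == "hard"]
suggests = [r["rule"] for r in rules if r["severity"] == "soft"]
print(asserts)   # ['answer under 280 words', 'no competitor mentions']
print(suggests)  # ['friendly tone']
```

Everything in the `asserts` bucket becomes a hard constraint in the steps below; everything in `suggests` becomes a soft one.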

Step 2: Content policy rules


Enforce what the AI can and cannot say.
```python
class PolicyCheckedResponse(dspy.Module):
    def __init__(self):
        self.respond = dspy.ChainOfThought("question -> answer")

    def forward(self, question):
        result = self.respond(question=question)
        answer = result.answer

        # Hard rules — must comply
        dspy.Assert(
            len(answer.split()) <= 280,
            f"Response is {len(answer.split())} words. Must be under 280 words."
        )
        dspy.Assert(
            not any(word in answer.lower() for word in BLOCKED_WORDS),
            "Response contains blocked words. Remove them and regenerate."
        )
        dspy.Assert(
            "disclaimer" not in answer.lower(),
            "Do not include disclaimers. Answer directly."
        )

        # Soft rules — prefer but don't block
        dspy.Suggest(
            answer[0].isupper(),
            "Response should start with a capital letter."
        )
        dspy.Suggest(
            answer.endswith(".") or answer.endswith("!") or answer.endswith("?"),
            "Response should end with proper punctuation."
        )

        return result

BLOCKED_WORDS = ["competitor_name", "profanity1", "profanity2"]  # your list
```
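Because each condition is ordinary Python, the rule predicates can be unit-tested on sample strings before they're wired into Assert calls. A sketch using the same conditions:

```python
BLOCKED_WORDS = ["competitor_name", "profanity1", "profanity2"]  # your list

def hard_rule_failures(answer: str) -> list[str]:
    """Return the hard rules a candidate answer violates."""
    failures = []
    if len(answer.split()) > 280:
        failures.append("over 280 words")
    if any(word in answer.lower() for word in BLOCKED_WORDS):
        failures.append("contains blocked words")
    if "disclaimer" in answer.lower():
        failures.append("contains a disclaimer")
    return failures

print(hard_rule_failures("Short answer. Disclaimer: not legal advice."))
# ['contains a disclaimer']
```

Testing the predicates in isolation catches inverted conditions (a common bug) before they silently pass or block everything at runtime.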

Step 3: Format rules


Enforce output structure — valid JSON, required fields, correct types.
```python
import json
from pydantic import BaseModel, Field
from typing import Literal
```

Option A: Pydantic validation (automatic)


```python
class QuizQuestion(BaseModel):
    question: str = Field(min_length=10)
    options: list[str] = Field(min_length=4, max_length=4)
    correct_answer: str
    difficulty: Literal["easy", "medium", "hard"]

class GenerateQuiz(dspy.Signature):
    """Generate a quiz question about the topic."""
    topic: str = dspy.InputField()
    quiz: QuizQuestion = dspy.OutputField()
```

Option B: Assert-based validation (custom logic)


```python
class QuizGenerator(dspy.Module):
    def __init__(self):
        self.generate = dspy.ChainOfThought(GenerateQuiz)

    def forward(self, topic):
        result = self.generate(topic=topic)
        quiz = result.quiz

        # Correct answer must be one of the options
        dspy.Assert(
            quiz.correct_answer in quiz.options,
            f"Correct answer '{quiz.correct_answer}' is not in options {quiz.options}. "
            "The correct answer must be one of the four options."
        )

        # Options must be unique
        dspy.Assert(
            len(set(quiz.options)) == 4,
            "All four options must be different from each other."
        )

        return result
```

Combine Pydantic (catches type/structure errors) with Assert (catches logic errors) for the strongest format enforcement.
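The division of labor is visible in plain Python: Pydantic's field constraints cover types and shape, while the cross-field checks inside the Asserts are ordinary predicates like these (a sketch over a plain dict, so it runs without Pydantic):

```python
def quiz_logic_errors(quiz: dict) -> list[str]:
    """Cross-field checks that schema validation alone can't express."""
    errors = []
    if quiz["correct_answer"] not in quiz["options"]:
        errors.append("correct answer is not one of the options")
    if len(set(quiz["options"])) != len(quiz["options"]):
        errors.append("options are not unique")
    return errors

quiz = {
    "question": "Which planet is largest?",
    "options": ["Mars", "Jupiter", "Venus", "Mars"],
    "correct_answer": "Saturn",
}
print(quiz_logic_errors(quiz))
# ['correct answer is not one of the options', 'options are not unique']
```

Note that this quiz would pass the Pydantic schema (four string options, valid difficulty field aside), yet fails both logic checks, which is exactly why the two layers are complementary.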

Step 4: Business constraint rules


Translate business requirements into programmatic constraints.
```python
class PricingResponse(dspy.Module):
    def __init__(self):
        self.respond = dspy.ChainOfThought("customer_question, pricing_docs -> answer")

    def forward(self, customer_question, pricing_docs):
        result = self.respond(
            customer_question=customer_question,
            pricing_docs=pricing_docs,
        )

        # Never mention competitor pricing
        dspy.Assert(
            not any(comp in result.answer.lower() for comp in COMPETITORS),
            "Do not mention competitor pricing. Focus only on our plans."
        )

        # Never offer unauthorized discounts
        dspy.Assert(
            "discount" not in result.answer.lower() or "authorized" in result.answer.lower(),
            "Do not offer discounts unless referencing an authorized promotion."
        )

        # Always include a CTA
        dspy.Suggest(
            any(cta in result.answer.lower() for cta in ["contact", "sign up", "learn more", "get started"]),
            "Include a call-to-action at the end of the response."
        )

        return result

COMPETITORS = ["competitor_a", "competitor_b"]
```
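The discount condition above is worth reading twice: it permits the word "discount" only when "authorized" also appears in the answer. As a standalone predicate (a sketch of the same boolean logic):

```python
def discount_rule_ok(answer: str) -> bool:
    a = answer.lower()
    # "discount" may appear only alongside "authorized"
    return "discount" not in a or "authorized" in a

print(discount_rule_ok("Our Pro plan is $20/month."))               # True
print(discount_rule_ok("I can give you 50% off as a discount!"))    # False
print(discount_rule_ok("The authorized student discount is 20%."))  # True
```

This `A implies B` shape (written as `not A or B`) is a recurring pattern when translating business rules of the form "X is only allowed when Y".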

Step 5: How retry and backtracking work


When `dspy.Assert` fails, DSPy doesn't just retry blindly — it feeds the error message back to the LM:

Attempt 1: LM generates response → Assert fails ("Response is 350 words, must be under 280")
Attempt 2: LM retries with feedback → Assert fails ("Response contains blocked words")
Attempt 3: LM retries with feedback → Assert passes ✓

Key details:
  • Error messages matter. They're the LM's self-correction instructions. Be specific: "Response is 350 words, must be under 280" is better than "too long."
  • Default retries: 2. Set via `max_backtrack_attempts` on the module.
  • Each retry sees all previous failures. The model gets a cumulative error log.
  • Suggest never retries. It sends the feedback but continues regardless.
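The loop can be emulated in plain Python to make the mechanics concrete. This is a pedagogical sketch, not DSPy's internals; in the DSPy versions that ship `dspy.Assert`, the retry happens for you once assertions are activated on the module (for example via `activate_assertions()`):

```python
def run_with_backtracking(generate, checks, max_backtrack_attempts=2):
    """Pedagogical emulation of Assert-style backtracking (not DSPy internals).

    generate: callable taking the cumulative feedback list, returning an output
    checks:   callable returning a list of (passed, error_message) tuples
    """
    feedback = []  # cumulative error log, like DSPy's retry feedback
    output = None
    for _ in range(max_backtrack_attempts + 1):  # first try + retries
        output = generate(feedback)
        failures = [msg for ok, msg in checks(output) if not ok]
        if not failures:
            return output
        feedback.extend(failures)
    return output  # out of retries; caller decides whether to fail loudly

# Toy generator that "listens" to feedback by shortening its output.
def generate(feedback):
    return "short answer" if feedback else "a very long rambling answer"

def checks(output):
    n = len(output.split())
    return [(n <= 3, f"{n} words, must be <= 3")]

print(run_with_backtracking(generate, checks))  # short answer
```

The first attempt fails the length check, the failure message lands in `feedback`, and the second attempt passes — the same generate-check-retry shape the attempt log above describes.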

Step 6: Composing multiple rules


Stack rules by putting multiple Assert/Suggest calls in sequence. They're checked in order.
```python
class TweetWriter(dspy.Module):
    def __init__(self):
        self.write = dspy.ChainOfThought("topic, key_facts -> tweet")

    def forward(self, topic, key_facts):
        result = self.write(topic=topic, key_facts=key_facts)
        tweet = result.tweet

        # Rule 1: Length limit (hard)
        dspy.Assert(
            len(tweet) <= 280,
            f"Tweet is {len(tweet)} chars. Must be ≤280."
        )

        # Rule 2: No hashtags (hard)
        dspy.Assert(
            "#" not in tweet,
            "No hashtags allowed. Remove all # symbols."
        )

        # Rule 3: Must include key fact (hard)
        dspy.Assert(
            any(fact.lower() in tweet.lower() for fact in key_facts),
            f"Tweet must mention at least one key fact: {key_facts}"
        )

        # Rule 4: Engaging tone (soft)
        dspy.Suggest(
            not tweet.startswith("Did you know"),
            "Avoid starting with 'Did you know' — be more creative."
        )

        # Rule 5: No emojis (soft)
        dspy.Suggest(
            not any(ord(c) > 127 for c in tweet),
            "Prefer text-only tweets without emojis."
        )

        return result
```
When rules conflict (e.g., "include all key facts" vs "stay under 280 chars"), put the harder constraint first so the model prioritizes it.
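The effect of ordering can be seen with plain predicates: since rules are checked in sequence, the first failing check is the one whose message drives the retry, so the rule listed first effectively gets priority. A sketch:

```python
checks = [
    ("length <= 280", lambda t: len(t) <= 280),
    ("no hashtags",   lambda t: "#" not in t),
]

def first_failing_rule(tweet: str):
    """Return the name of the first violated rule, or None if all pass."""
    for name, passes in checks:
        if not passes(tweet):
            return name
    return None

too_long_with_hashtag = "#news " + "x" * 300
print(first_failing_rule(too_long_with_hashtag))  # length <= 280
```

Here the tweet violates both rules, but the model would see the length error first, which is why the harder constraint should lead.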

Step 7: Optimizing with rules

步骤7:结合规则进行优化

DSPy optimizers work with assertions. When you optimize a module that has Assert/Suggest:
  • The optimizer sees assertion pass/fail rates as part of the metric
  • Optimized prompts learn to satisfy constraints more often
  • Result: fewer retries needed in production
```python
def metric(example, pred, trace=None):
    # Your quality metric + assertion compliance
    correct = pred.answer == example.expected_answer
    return correct  # assertions are enforced separately during forward()

optimizer = dspy.MIPROv2(metric=metric, num_threads=4)
optimized = optimizer.compile(
    my_module,
    trainset=trainset,
    max_bootstrapped_demos=4,
    max_labeled_demos=4,
)
```
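If you want rule compliance reflected in the score itself, rather than only enforced during `forward()`, the metric can check the same conditions. A sketch with hypothetical field names matching the earlier examples (`SimpleNamespace` stands in for DSPy's example/prediction objects):

```python
from types import SimpleNamespace

def metric(example, pred, trace=None):
    # Task correctness plus the same word-limit rule the Assert enforces.
    correct = pred.answer == example.expected_answer
    within_limit = len(pred.answer.split()) <= 280
    return correct and within_limit

example = SimpleNamespace(expected_answer="Paris")
pred = SimpleNamespace(answer="Paris")
print(metric(example, pred))  # True
```

A prediction that is correct but violates the rule then scores zero, pushing the optimizer toward prompts that satisfy constraints on the first attempt.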

Key principles


  • Assert for requirements, Suggest for preferences. Don't use Assert for style issues.
  • Specific error messages. "350 words, must be under 280" beats "too long."
  • Pydantic + Assert together. Pydantic catches structure, Assert catches logic.
  • Order matters. Put hard constraints before soft ones.
  • Optimize after adding rules. DSPy learns to comply, reducing runtime retries.

Additional resources


  • Use /ai-checking-outputs for general output verification (safety, quality gates)
  • Use /ai-stopping-hallucinations for grounding AI in facts and sources
  • Use /ai-improving-accuracy to measure and improve quality after adding rules
  • Use /ai-testing-safety to verify your rules hold up against adversarial users
  • See examples.md for complete worked examples