eval-clarity


Eval Clarity

Use this skill to evaluate how clear and understandable an assistant response is.

Inputs

Required:
  • The assistant response text to evaluate.

Internal Rubric (1–5)

5 = Structured, unambiguous, direct answer, minimal fluff
4 = Mostly clear, minor ambiguity or verbosity
3 = Understandable but lacks structure or precision
2 = Vague, missing key steps, hard to follow
1 = Confusing, contradictory, or unclear

Workflow

  1. Assess structure, precision, and readability.
  2. Score clarity on a 1–5 integer scale using the rubric above, and nothing else.
  3. Write concise rationale tied directly to rubric criteria.
  4. Produce actionable suggestions that improve clarity.
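The judgment in steps 1–3 is the evaluator's own; only the envelope is fixed. As a minimal sketch (the function name is illustrative, not part of this skill), the final assembly step might look like:

```python
def build_clarity_result(score: int, rationale: str, suggestions: list[str]) -> dict:
    """Assemble the evaluation result in the exact shape of the Output Contract.

    The score, rationale, and suggestions come from the evaluator's
    judgment (workflow steps 1-4); this helper only fixes the envelope.
    """
    return {
        "dimension": "clarity",
        "score": score,
        "rationale": rationale,
        "improvement_suggestions": suggestions,
    }

result = build_clarity_result(4, "Mostly clear; one ambiguous step.",
                              ["Number the steps explicitly."])
```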

Output Contract

Return JSON only. Do not include markdown, backticks, prose, or extra keys.
Use exactly this schema:
{ "dimension": "clarity", "score": 1, "rationale": "...", "improvement_suggestions": [ "..." ] }
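For illustration, here is a hypothetical conforming response and how a caller might parse it; the field values are placeholders, not real evaluation output:

```python
import json

# A hypothetical conforming response (values are illustrative only).
raw = (
    '{ "dimension": "clarity", "score": 4, '
    '"rationale": "Mostly clear; one ambiguous step.", '
    '"improvement_suggestions": [ "Number the steps explicitly." ] }'
)

result = json.loads(raw)
# Exactly the four schema keys, nothing extra.
assert set(result) == {"dimension", "score", "rationale", "improvement_suggestions"}
```

Because the response is JSON only — no markdown fences or surrounding prose — `json.loads` can be applied to the raw text directly.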

Hard Rules

  • dimension must always equal "clarity".
  • score must be an integer from 1 to 5.
  • rationale must be concise (max 3 sentences).
  • Do not include step-by-step reasoning.
  • improvement_suggestions must be a non-empty array of concrete edits.
  • Never output text outside the JSON object.
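A consumer can enforce these rules mechanically. The following is a sketch, not part of the skill itself; it checks a parsed response against the rules above and returns the list of violations:

```python
def validate_clarity_output(obj: dict) -> list[str]:
    """Check a parsed response against the hard rules; return any violations."""
    errors = []
    # Exactly the four schema keys; extra keys are forbidden.
    if set(obj) != {"dimension", "score", "rationale", "improvement_suggestions"}:
        errors.append("unexpected or missing keys")
    if obj.get("dimension") != "clarity":
        errors.append('dimension must equal "clarity"')
    score = obj.get("score")
    # bool is a subclass of int in Python, so exclude it explicitly.
    if not isinstance(score, int) or isinstance(score, bool) or not 1 <= score <= 5:
        errors.append("score must be an integer from 1 to 5")
    suggestions = obj.get("improvement_suggestions")
    if not isinstance(suggestions, list) or not suggestions:
        errors.append("improvement_suggestions must be a non-empty array")
    return errors

ok = validate_clarity_output({
    "dimension": "clarity",
    "score": 3,
    "rationale": "Understandable but loosely structured.",
    "improvement_suggestions": ["Add headings for each section."],
})
```

Note that the rationale-length rule (max 3 sentences) is left to the evaluator, since sentence counting is unreliable to automate.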