# Ethics Review
Comprehensive guidance for ethical assessment of technology systems, AI applications, and responsible innovation.
## When to Use This Skill
- Conducting ethical impact assessments for new projects
- Evaluating AI systems for ethical risks
- Establishing ethics review boards and processes
- Developing ethical guidelines for technology teams
- Assessing stakeholder impacts and potential harms
## Core Ethical Principles
### Foundation Principles
| Principle | Description | Application |
|---|---|---|
| Beneficence | Do good, maximize benefits | Design for positive outcomes |
| Non-maleficence | Do no harm, minimize risks | Identify and mitigate harms |
| Autonomy | Respect individual choice | Informed consent, opt-out |
| Justice | Fair distribution of benefits/burdens | Equitable access, no discrimination |
| Transparency | Open about how systems work | Explainable AI, clear documentation |
| Accountability | Clear responsibility | Ownership, audit trails |
| Privacy | Protect personal information | Data minimization, consent |
### Technology-Specific Principles
```text
AI/ML Systems:
├── Fairness - Equitable treatment across groups
├── Explainability - Understandable decisions
├── Reliability - Consistent, predictable behavior
├── Safety - Prevent harm, fail safely
├── Privacy - Protect personal data
├── Security - Resist adversarial attacks
├── Inclusiveness - Accessible to all users
└── Human Control - Meaningful human oversight
```
## Ethical Impact Assessment Framework
### Assessment Process
```text
┌─────────────────────────────────────────────────────────────┐
│                 Ethical Impact Assessment                   │
├──────────────────┬──────────────────────────────────────────┤
│ 1. Describe      │ System purpose, capabilities, context    │
├──────────────────┼──────────────────────────────────────────┤
│ 2. Stakeholder   │ Identify all affected parties            │
│    Analysis      │ Map interests and concerns               │
├──────────────────┼──────────────────────────────────────────┤
│ 3. Impact        │ Assess benefits and harms                │
│    Assessment    │ Evaluate likelihood and severity         │
├──────────────────┼──────────────────────────────────────────┤
│ 4. Ethical       │ Apply ethical principles                 │
│    Analysis      │ Identify conflicts and tensions          │
├──────────────────┼──────────────────────────────────────────┤
│ 5. Mitigation    │ Design controls and safeguards           │
│    Planning      │ Define monitoring approach               │
├──────────────────┼──────────────────────────────────────────┤
│ 6. Decision &    │ Approve, modify, or reject               │
│    Review        │ Schedule ongoing review                  │
└──────────────────┴──────────────────────────────────────────┘
```
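For teams that track assessments in code, the six steps above can be encoded as an ordered structure. A minimal sketch follows; the step names and focuses come from the table, while the tuple encoding itself is just one illustrative choice:

```csharp
using System;

// The six assessment steps as ordered (step, focus) pairs, mirroring the table above.
var steps = new (int Number, string Name, string Focus)[]
{
    (1, "Describe",             "System purpose, capabilities, context"),
    (2, "Stakeholder Analysis", "Identify affected parties; map interests and concerns"),
    (3, "Impact Assessment",    "Assess benefits and harms; evaluate likelihood and severity"),
    (4, "Ethical Analysis",     "Apply ethical principles; identify conflicts and tensions"),
    (5, "Mitigation Planning",  "Design controls and safeguards; define monitoring"),
    (6, "Decision & Review",    "Approve, modify, or reject; schedule ongoing review"),
};

foreach (var s in steps)
    Console.WriteLine($"{s.Number}. {s.Name}: {s.Focus}");
```

Keeping the steps as data rather than prose makes it easy to generate review agendas or track which step an assessment is in.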
### Ethical Impact Assessment Template
```markdown
# Ethical Impact Assessment

## 1. System Description

### Purpose
[What is the system designed to do?]

### Capabilities
[What can the system do? What decisions does it make or influence?]

### Context
[Where and how will the system be used?]

### Data
[What data does the system use? How is it collected?]

## 2. Stakeholder Analysis

### Direct Stakeholders
| Stakeholder | Relationship | Interests | Power | Concerns |
|---|---|---|---|---|
| [Group] | [Relationship] | [Interests] | [H/M/L] | [Concerns] |

### Indirect Stakeholders
| Stakeholder | How Affected | Interests | Concerns |
|---|---|---|---|
| [Group] | [Impact] | [Interests] | [Concerns] |

### Vulnerable Groups
| Group | Vulnerability | Special Considerations |
|---|---|---|
| [Group] | [Why vulnerable] | [Protections needed] |

## 3. Impact Assessment

### Benefits
| Benefit | Beneficiary | Magnitude | Likelihood |
|---|---|---|---|
| [Benefit] | [Who] | [H/M/L] | [H/M/L] |

### Potential Harms
| Harm | Affected Group | Severity | Likelihood | Reversible? |
|---|---|---|---|---|
| [Harm] | [Who] | [H/M/L] | [H/M/L] | [Y/N] |

### Unintended Consequences
| Consequence | Description | Risk Level |
|---|---|---|
| [Consequence] | [Details] | [H/M/L] |

## 4. Ethical Analysis

### Principle Evaluation
| Principle | Supports | Tensions | Score (1-5) |
|---|---|---|---|
| Beneficence | [How] | [Conflicts] | [Score] |
| Non-maleficence | [How] | [Conflicts] | [Score] |
| Autonomy | [How] | [Conflicts] | [Score] |
| Justice | [How] | [Conflicts] | [Score] |
| Transparency | [How] | [Conflicts] | [Score] |
| Accountability | [How] | [Conflicts] | [Score] |
| Privacy | [How] | [Conflicts] | [Score] |

### Ethical Dilemmas
| Dilemma | Trade-off | Proposed Resolution |
|---|---|---|
| [Dilemma] | [Trade-off] | [Resolution] |

## 5. Mitigation Plan

### Technical Mitigations
| Risk | Mitigation | Owner | Status |
|---|---|---|---|
| [Risk] | [Control] | [Who] | [Status] |

### Procedural Mitigations
| Risk | Mitigation | Owner | Status |
|---|---|---|---|
| [Risk] | [Process] | [Who] | [Status] |

### Monitoring Plan
| Metric | Threshold | Frequency | Response |
|---|---|---|---|
| [Metric] | [Limit] | [How often] | [Action] |

## 6. Decision

### Recommendation
- [ ] Approve - Proceed with current design
- [ ] Approve with conditions - Proceed after mitigations
- [ ] Defer - Requires further analysis
- [ ] Reject - Unacceptable ethical risks

### Conditions (if applicable)
- [Condition]
- [Condition]

### Review Schedule
- Initial review: [Date]
- Ongoing review: [Frequency]

### Approvals
| Role | Name | Decision | Date |
|---|---|---|---|
| Ethics Board | [ ] | | |
| Technical Lead | [ ] | | |
| Business Owner | [ ] | | |
| Legal | [ ] | | |
```
## Harm Assessment Framework

### Categories of Harm
```text
Direct Harms:
├── Physical harm to individuals
├── Psychological harm (stress, manipulation)
├── Financial harm (fraud, loss)
├── Privacy harm (exposure, surveillance)
├── Discrimination harm (unfair treatment)
└── Autonomy harm (manipulation, coercion)

Indirect/Systemic Harms:
├── Environmental harm
├── Democratic harm (manipulation, division)
├── Economic harm (displacement, inequality)
├── Social harm (erosion of trust, relationships)
└── Cultural harm (homogenization, loss)

Group-Specific Harms:
├── Harm to marginalized groups
├── Harm to vulnerable populations
├── Harm to future generations
└── Harm to non-users
```
### Harm Severity Matrix
```text
                 REVERSIBILITY
             Easy   Difficult   Permanent
S  Low         1        2           3
E  Medium      2        4           6
V  High        3        6           9
E  Extreme     4        8          12
R
I
T
Y

Score:
1-2:  Acceptable with monitoring
3-4:  Requires mitigation
6-8:  Significant controls required
9-12: May be unacceptable
```
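The matrix is simply the product of a severity weight (1-4) and a reversibility weight (1-3). A small helper can compute the score and band; the weights and band labels below are copied from the matrix, while the function names are ours:

```csharp
using System;
using System.Collections.Generic;

// Severity and reversibility weights from the matrix above.
var severity = new Dictionary<string, int> { ["Low"] = 1, ["Medium"] = 2, ["High"] = 3, ["Extreme"] = 4 };
var reversibility = new Dictionary<string, int> { ["Easy"] = 1, ["Difficult"] = 2, ["Permanent"] = 3 };

// Score is the product of the two weights (e.g. High x Difficult = 3 x 2 = 6).
int HarmScore(string sev, string rev) => severity[sev] * reversibility[rev];

// Bands from the matrix legend. Products can only be 1, 2, 3, 4, 6, 8, 9, or 12.
string Band(int score) => score switch
{
    <= 2 => "Acceptable with monitoring",
    <= 4 => "Requires mitigation",
    <= 8 => "Significant controls required",
    _    => "May be unacceptable",
};

var s = HarmScore("High", "Difficult");
Console.WriteLine($"High/Difficult -> {s} ({Band(s)})"); // 6 (Significant controls required)
```

Encoding the bands once keeps harm tables in the assessment template consistent with the matrix.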
## AI Ethics Specifics
### AI Ethics Checklist
```csharp
using System.Collections.Generic;

// Supporting types are not shown in the original checklist; these are assumed shapes.
public enum EthicsCategory { Fairness, Transparency, HumanControl, Safety, Privacy, Accountability }
public enum Priority { Critical, High, Medium, Low }

public record EthicsCheckItem(
    string Id, string Title, string Question, EthicsCategory Category, Priority Priority);

public class AiEthicsChecklist
{
    public List<EthicsCheckItem> GetChecklist()
    {
        return new List<EthicsCheckItem>
        {
            // Fairness
            new("FAIR-01", "Bias Testing",
                "Has the model been tested for bias across protected groups?",
                EthicsCategory.Fairness, Priority.Critical),
            new("FAIR-02", "Fairness Metrics",
                "Are fairness metrics defined and monitored?",
                EthicsCategory.Fairness, Priority.High),
            new("FAIR-03", "Training Data",
                "Is training data representative and free from historical bias?",
                EthicsCategory.Fairness, Priority.Critical),

            // Transparency
            new("TRANS-01", "Explainability",
                "Can the system explain its decisions to affected users?",
                EthicsCategory.Transparency, Priority.High),
            new("TRANS-02", "AI Disclosure",
                "Are users informed they are interacting with AI?",
                EthicsCategory.Transparency, Priority.Critical),
            new("TRANS-03", "Limitation Disclosure",
                "Are system limitations clearly communicated?",
                EthicsCategory.Transparency, Priority.High),

            // Human Control
            new("CTRL-01", "Human Oversight",
                "Is there meaningful human oversight of AI decisions?",
                EthicsCategory.HumanControl, Priority.Critical),
            new("CTRL-02", "Override Capability",
                "Can humans override AI decisions when needed?",
                EthicsCategory.HumanControl, Priority.High),
            new("CTRL-03", "Escalation Path",
                "Is there a clear escalation path for concerning outputs?",
                EthicsCategory.HumanControl, Priority.High),

            // Safety
            new("SAFE-01", "Harm Prevention",
                "Are there safeguards against harmful outputs?",
                EthicsCategory.Safety, Priority.Critical),
            new("SAFE-02", "Fail-Safe Design",
                "Does the system fail safely when errors occur?",
                EthicsCategory.Safety, Priority.High),
            new("SAFE-03", "Adversarial Testing",
                "Has the system been tested against adversarial inputs?",
                EthicsCategory.Safety, Priority.High),

            // Privacy
            new("PRIV-01", "Data Minimization",
                "Does the system collect only necessary data?",
                EthicsCategory.Privacy, Priority.High),
            new("PRIV-02", "Consent",
                "Is there informed consent for data use?",
                EthicsCategory.Privacy, Priority.Critical),
            new("PRIV-03", "Data Protection",
                "Is personal data adequately protected?",
                EthicsCategory.Privacy, Priority.Critical),

            // Accountability
            new("ACCT-01", "Responsibility",
                "Is there clear ownership for system outcomes?",
                EthicsCategory.Accountability, Priority.High),
            new("ACCT-02", "Audit Trail",
                "Are decisions logged for accountability?",
                EthicsCategory.Accountability, Priority.High),
            new("ACCT-03", "Redress Mechanism",
                "Is there a way for affected parties to seek redress?",
                EthicsCategory.Accountability, Priority.High)
        };
    }
}
```
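As a usage sketch, checklist items like those above can be grouped to surface critical checks first. The data here is a hand-copied subset of the checklist, flattened to tuples so the example stands alone; the grouping logic is illustrative:

```csharp
using System;
using System.Linq;

// Hand-copied subset of the checklist above, as (Id, Category, Priority) tuples.
var items = new[]
{
    (Id: "FAIR-01",  Category: "Fairness",     Priority: "Critical"),
    (Id: "FAIR-02",  Category: "Fairness",     Priority: "High"),
    (Id: "TRANS-02", Category: "Transparency", Priority: "Critical"),
    (Id: "CTRL-01",  Category: "HumanControl", Priority: "Critical"),
    (Id: "SAFE-02",  Category: "Safety",       Priority: "High"),
};

// Surface critical items first, grouped by category.
var critical = items.Where(i => i.Priority == "Critical")
                    .GroupBy(i => i.Category);

foreach (var g in critical)
    Console.WriteLine($"{g.Key}: {string.Join(", ", g.Select(i => i.Id))}");
```

A review session can then work through critical items before high-priority ones rather than in checklist order.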
### Algorithmic Impact Questions
| Question | Why It Matters |
|---|---|
| Who benefits from this algorithm? | Ensure equitable benefit distribution |
| Who might be harmed? | Identify vulnerable populations |
| What happens when it's wrong? | Understand failure impact |
| Can it be gamed or manipulated? | Assess adversarial risks |
| Does it entrench existing inequalities? | Check for systemic bias |
| What feedback loops might emerge? | Predict unintended consequences |
| Is there meaningful human oversight? | Ensure accountability |
| Can decisions be explained? | Support transparency |
| Is consent meaningful and informed? | Respect autonomy |
| What are the long-term societal effects? | Consider systemic impact |
## Ethics Review Board
### Board Structure
```text
Ethics Review Board Composition:
├── Chair (Senior Leadership)
├── Ethics Officer (if applicable)
├── Technical Lead (understands the technology)
├── Legal Representative
├── Privacy Officer
├── Business Representative
├── External Ethicist (optional but recommended)
└── User/Community Representative (for significant decisions)
```
### Review Thresholds
| Trigger | Review Level | Timeline |
|---|---|---|
| New AI/ML system | Full board review | Before development |
| High-risk application | Full board review | Before deployment |
| Significant model update | Expedited review | Before release |
| Incident or complaint | Post-hoc review | Within 1 week |
| Annual review | Full board review | Annual |
| Employee concern | Expedited review | Within 2 weeks |
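The trigger table can be encoded directly as a lookup. The trigger names, review levels, and timelines below are copied from the table; the helper function and its fallback behavior are assumptions for illustration:

```csharp
using System;
using System.Collections.Generic;

// Trigger -> (review level, timeline), copied from the thresholds table above.
var thresholds = new Dictionary<string, (string Level, string Timeline)>
{
    ["New AI/ML system"]         = ("Full board review", "Before development"),
    ["High-risk application"]    = ("Full board review", "Before deployment"),
    ["Significant model update"] = ("Expedited review",  "Before release"),
    ["Incident or complaint"]    = ("Post-hoc review",   "Within 1 week"),
    ["Annual review"]            = ("Full board review", "Annual"),
    ["Employee concern"]         = ("Expedited review",  "Within 2 weeks"),
};

string ReviewFor(string trigger) =>
    thresholds.TryGetValue(trigger, out var t)
        ? $"{t.Level} ({t.Timeline})"
        : "No matching trigger - escalate for triage"; // fallback policy is an assumption

Console.WriteLine(ReviewFor("Incident or complaint")); // Post-hoc review (Within 1 week)
```

Keeping the thresholds in one place means intake tooling and the policy document cannot drift apart silently.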
### Board Decision Framework
```csharp
using System;
using System.Collections.Generic;

public enum EthicsDecision
{
    Approved,               // Proceed as designed
    ApprovedWithConditions, // Proceed after specified changes
    RequiresRedesign,       // Fundamental changes needed
    Deferred,               // Need more information
    Rejected,               // Unacceptable ethical risk
    EscalateToExecutive     // Beyond board authority
}

// Not defined in the original; assumed shape for a recorded board vote.
public record BoardMemberVote(string Member, EthicsDecision Vote, string? Comment = null);

public class EthicsReviewResult
{
    public required EthicsDecision Decision { get; init; }
    public required string Rationale { get; init; }
    public List<string> Conditions { get; init; } = new();
    public List<string> MonitoringRequirements { get; init; } = new();
    public DateTimeOffset? NextReviewDate { get; init; }
    public List<BoardMemberVote> Votes { get; init; } = new();
}
```
## Responsible Innovation Framework
### Stage-Gate Ethics Integration
```text
Stage 1: Ideation
├── Initial ethics screening
├── Identify potential concerns
└── Go/No-Go for research

Stage 2: Research & Design
├── Stakeholder analysis
├── Preliminary impact assessment
└── Ethics-by-design integration

Stage 3: Development
├── Ongoing ethics review
├── Testing for bias/harm
└── Documentation

Stage 4: Pre-Deployment
├── Full ethical impact assessment
├── Board review (if triggered)
└── Mitigation verification

Stage 5: Deployment
├── Monitoring plan activation
├── Feedback mechanisms
└── Incident response ready

Stage 6: Operations
├── Ongoing monitoring
├── Regular reviews
└── Continuous improvement
```
## Ethics Review Checklist
### Pre-Development
- [ ] Ethical impact assessment completed
- [ ] Stakeholder analysis documented
- [ ] Potential harms identified
- [ ] Ethics review board consulted (if required)
- [ ] Mitigation plans defined
### Development
- [ ] Ethics-by-design principles applied
- [ ] Bias testing conducted
- [ ] Explainability built in
- [ ] Human oversight designed
- [ ] Documentation complete
### Pre-Deployment
- [ ] Full assessment reviewed
- [ ] All mitigations implemented
- [ ] Monitoring in place
- [ ] Redress mechanism ready
- [ ] Ethics sign-off obtained
### Operations
- [ ] Regular monitoring active
- [ ] Feedback collected and reviewed
- [ ] Incidents investigated
- [ ] Periodic re-assessment scheduled
## Cross-References
- AI Governance: see `ai-governance` for regulatory compliance
- Bias Assessment: research fairness metrics via MCP (perplexity: "AI fairness metrics NIST")
- Data Privacy: see `gdpr-compliance` for privacy considerations