
Ethics Review


Comprehensive guidance for ethical assessment of technology systems, AI applications, and responsible innovation.

When to Use This Skill


  • Conducting ethical impact assessments for new projects
  • Evaluating AI systems for ethical risks
  • Establishing ethics review boards and processes
  • Developing ethical guidelines for technology teams
  • Assessing stakeholder impacts and potential harms

Core Ethical Principles


Foundation Principles


| Principle | Description | Application |
|---|---|---|
| Beneficence | Do good, maximize benefits | Design for positive outcomes |
| Non-maleficence | Do no harm, minimize risks | Identify and mitigate harms |
| Autonomy | Respect individual choice | Informed consent, opt-out |
| Justice | Fair distribution of benefits/burdens | Equitable access, no discrimination |
| Transparency | Open about how systems work | Explainable AI, clear documentation |
| Accountability | Clear responsibility | Ownership, audit trails |
| Privacy | Protect personal information | Data minimization, consent |

Technology-Specific Principles


```text
AI/ML Systems:
├── Fairness - Equitable treatment across groups
├── Explainability - Understandable decisions
├── Reliability - Consistent, predictable behavior
├── Safety - Prevent harm, fail safely
├── Privacy - Protect personal data
├── Security - Resist adversarial attacks
├── Inclusiveness - Accessible to all users
└── Human Control - Meaningful human oversight
```

Ethical Impact Assessment Framework


Assessment Process


```text
┌─────────────────────────────────────────────────────────────┐
│                  Ethical Impact Assessment                   │
├─────────────────────────────────────────────────────────────┤
│  1. Describe     │  System purpose, capabilities, context   │
├──────────────────┼──────────────────────────────────────────┤
│  2. Stakeholder  │  Identify all affected parties           │
│     Analysis     │  Map interests and concerns              │
├──────────────────┼──────────────────────────────────────────┤
│  3. Impact       │  Assess benefits and harms               │
│     Assessment   │  Evaluate likelihood and severity        │
├──────────────────┼──────────────────────────────────────────┤
│  4. Ethical      │  Apply ethical principles                │
│     Analysis     │  Identify conflicts and tensions         │
├──────────────────┼──────────────────────────────────────────┤
│  5. Mitigation   │  Design controls and safeguards          │
│     Planning     │  Define monitoring approach              │
├──────────────────┼──────────────────────────────────────────┤
│  6. Decision &   │  Approve, modify, or reject              │
│     Review       │  Schedule ongoing review                 │
└─────────────────────────────────────────────────────────────┘
```

Ethical Impact Assessment Template



Ethical Impact Assessment


1. System Description


Purpose


[What is the system designed to do?]

Capabilities


[What can the system do? What decisions does it make or influence?]

Context


[Where and how will the system be used?]

Data


[What data does the system use? How is it collected?]


2. Stakeholder Analysis


Direct Stakeholders


| Stakeholder | Relationship | Interests | Power | Concerns |
|---|---|---|---|---|
| [Group] | [Relationship] | [Interests] | [H/M/L] | [Concerns] |

Indirect Stakeholders


| Stakeholder | How Affected | Interests | Concerns |
|---|---|---|---|
| [Group] | [Impact] | [Interests] | [Concerns] |

Vulnerable Groups


| Group | Vulnerability | Special Considerations |
|---|---|---|
| [Group] | [Why vulnerable] | [Protections needed] |

3. Impact Assessment


Benefits


| Benefit | Beneficiary | Magnitude | Likelihood |
|---|---|---|---|
| [Benefit] | [Who] | [H/M/L] | [H/M/L] |

Potential Harms


| Harm | Affected Group | Severity | Likelihood | Reversible? |
|---|---|---|---|---|
| [Harm] | [Who] | [H/M/L] | [H/M/L] | [Y/N] |

Unintended Consequences


| Consequence | Description | Risk Level |
|---|---|---|
| [Consequence] | [Details] | [H/M/L] |

4. Ethical Analysis


Principle Evaluation


| Principle | Supports | Tensions | Score (1-5) |
|---|---|---|---|
| Beneficence | [How] | [Conflicts] | [Score] |
| Non-maleficence | [How] | [Conflicts] | [Score] |
| Autonomy | [How] | [Conflicts] | [Score] |
| Justice | [How] | [Conflicts] | [Score] |
| Transparency | [How] | [Conflicts] | [Score] |
| Accountability | [How] | [Conflicts] | [Score] |
| Privacy | [How] | [Conflicts] | [Score] |

Ethical Dilemmas


| Dilemma | Trade-off | Proposed Resolution |
|---|---|---|
| [Dilemma] | [Trade-off] | [Resolution] |

5. Mitigation Plan


Technical Mitigations


| Risk | Mitigation | Owner | Status |
|---|---|---|---|
| [Risk] | [Control] | [Who] | [Status] |

Procedural Mitigations


| Risk | Mitigation | Owner | Status |
|---|---|---|---|
| [Risk] | [Process] | [Who] | [Status] |

Monitoring Plan


| Metric | Threshold | Frequency | Response |
|---|---|---|---|
| [Metric] | [Limit] | [How often] | [Action] |
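A monitoring entry of this shape maps naturally onto a small alert check: compare the observed metric against its limit and trigger the planned response on a breach. A minimal C# sketch, assuming illustrative names (`MonitorRule` and `Monitoring` are not part of the template above):

```csharp
// One row of the monitoring plan: a metric, its limit, and the action on breach.
public record MonitorRule(string Metric, double Threshold, string Response);

public static class Monitoring
{
    // Returns the response to trigger, or null when the observed value is within limits.
    public static string? Check(MonitorRule rule, double observed) =>
        observed > rule.Threshold ? rule.Response : null;
}
```

For example, a rule limiting a false-positive rate to 0.05 would return its response for an observed 0.08 and null for 0.02.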

6. Decision


Recommendation


[ ] Approve - Proceed with current design
[ ] Approve with conditions - Proceed after mitigations
[ ] Defer - Requires further analysis
[ ] Reject - Unacceptable ethical risks

Conditions (if applicable)


  1. [Condition]
  2. [Condition]

Review Schedule


  • Initial review: [Date]
  • Ongoing review: [Frequency]

Approvals


| Role | Name | Decision | Date |
|---|---|---|---|
| Ethics Board | | [ ] | |
| Technical Lead | | [ ] | |
| Business Owner | | [ ] | |
| Legal | | [ ] | |

Harm Assessment Framework


Categories of Harm


```text
Direct Harms:
├── Physical harm to individuals
├── Psychological harm (stress, manipulation)
├── Financial harm (fraud, loss)
├── Privacy harm (exposure, surveillance)
├── Discrimination harm (unfair treatment)
└── Autonomy harm (manipulation, coercion)

Indirect/Systemic Harms:
├── Environmental harm
├── Democratic harm (manipulation, division)
├── Economic harm (displacement, inequality)
├── Social harm (erosion of trust, relationships)
└── Cultural harm (homogenization, loss)

Group-Specific Harms:
├── Harm to marginalized groups
├── Harm to vulnerable populations
├── Harm to future generations
└── Harm to non-users
```

Harm Severity Matrix


```text
                 REVERSIBILITY
                 Easy    Difficult   Permanent
SEVERITY
  Low             1          2           3
  Medium          2          4           6
  High            3          6           9
  Extreme         4          8          12

Score = severity weight (1-4) x reversibility weight (1-3):
1-2:  Acceptable with monitoring
3-4:  Requires mitigation
6-8:  Significant controls required
9-12: May be unacceptable
```
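The matrix is multiplicative: each severity level carries a weight of 1-4 and each reversibility level a weight of 1-3, and the cell is their product (which is why scores like 5, 7, 10, and 11 never occur). A minimal C# sketch of the scoring; the type names (`HarmScoring`, `Severity`, `Reversibility`) are illustrative, not from the template above:

```csharp
public enum Severity { Low = 1, Medium = 2, High = 3, Extreme = 4 }
public enum Reversibility { Easy = 1, Difficult = 2, Permanent = 3 }

public static class HarmScoring
{
    // Score = severity weight (1-4) x reversibility weight (1-3), range 1-12.
    public static int Score(Severity s, Reversibility r) => (int)s * (int)r;

    // Map a score onto the response bands from the matrix above.
    public static string Band(int score) => score switch
    {
        <= 2 => "Acceptable with monitoring",
        <= 4 => "Requires mitigation",
        <= 8 => "Significant controls required",
        _    => "May be unacceptable"
    };
}
```

A High-severity, Permanent harm scores 9 and lands in the "May be unacceptable" band.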

AI Ethics Specifics


AI Ethics Checklist


```csharp
using System.Collections.Generic;

// Supporting types so the checklist compiles standalone.
public enum EthicsCategory { Fairness, Transparency, HumanControl, Safety, Privacy, Accountability }
public enum Priority { Critical, High, Medium, Low }

public record EthicsCheckItem(string Id, string Title, string Question,
    EthicsCategory Category, Priority Priority);

public class AiEthicsChecklist
{
    public List<EthicsCheckItem> GetChecklist()
    {
        return new List<EthicsCheckItem>
        {
            // Fairness
            new("FAIR-01", "Bias Testing",
                "Has the model been tested for bias across protected groups?",
                EthicsCategory.Fairness, Priority.Critical),
            new("FAIR-02", "Fairness Metrics",
                "Are fairness metrics defined and monitored?",
                EthicsCategory.Fairness, Priority.High),
            new("FAIR-03", "Training Data",
                "Is training data representative and free from historical bias?",
                EthicsCategory.Fairness, Priority.Critical),

            // Transparency
            new("TRANS-01", "Explainability",
                "Can the system explain its decisions to affected users?",
                EthicsCategory.Transparency, Priority.High),
            new("TRANS-02", "AI Disclosure",
                "Are users informed they are interacting with AI?",
                EthicsCategory.Transparency, Priority.Critical),
            new("TRANS-03", "Limitation Disclosure",
                "Are system limitations clearly communicated?",
                EthicsCategory.Transparency, Priority.High),

            // Human Control
            new("CTRL-01", "Human Oversight",
                "Is there meaningful human oversight of AI decisions?",
                EthicsCategory.HumanControl, Priority.Critical),
            new("CTRL-02", "Override Capability",
                "Can humans override AI decisions when needed?",
                EthicsCategory.HumanControl, Priority.High),
            new("CTRL-03", "Escalation Path",
                "Is there a clear escalation path for concerning outputs?",
                EthicsCategory.HumanControl, Priority.High),

            // Safety
            new("SAFE-01", "Harm Prevention",
                "Are there safeguards against harmful outputs?",
                EthicsCategory.Safety, Priority.Critical),
            new("SAFE-02", "Fail-Safe Design",
                "Does the system fail safely when errors occur?",
                EthicsCategory.Safety, Priority.High),
            new("SAFE-03", "Adversarial Testing",
                "Has the system been tested against adversarial inputs?",
                EthicsCategory.Safety, Priority.High),

            // Privacy
            new("PRIV-01", "Data Minimization",
                "Does the system collect only necessary data?",
                EthicsCategory.Privacy, Priority.High),
            new("PRIV-02", "Consent",
                "Is there informed consent for data use?",
                EthicsCategory.Privacy, Priority.Critical),
            new("PRIV-03", "Data Protection",
                "Is personal data adequately protected?",
                EthicsCategory.Privacy, Priority.Critical),

            // Accountability
            new("ACCT-01", "Responsibility",
                "Is there clear ownership for system outcomes?",
                EthicsCategory.Accountability, Priority.High),
            new("ACCT-02", "Audit Trail",
                "Are decisions logged for accountability?",
                EthicsCategory.Accountability, Priority.High),
            new("ACCT-03", "Redress Mechanism",
                "Is there a way for affected parties to seek redress?",
                EthicsCategory.Accountability, Priority.High)
        };
    }
}
```
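In a review session, a checklist like this is typically worked through with Critical items first. A self-contained sketch of that ordering; it re-declares a minimal `EthicsCheckItem` and two priorities so the snippet compiles on its own, and `ReviewAgenda` is an illustrative name, not part of the checklist class above:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public enum EthicsCategory { Fairness, Transparency, HumanControl, Safety, Privacy, Accountability }
public enum Priority { Critical, High }

public record EthicsCheckItem(string Id, string Title, string Question,
    EthicsCategory Category, Priority Priority);

public static class ReviewAgenda
{
    // Critical first (enum value 0 sorts before High = 1), then by id for a stable agenda.
    public static List<EthicsCheckItem> Order(IEnumerable<EthicsCheckItem> items) =>
        items.OrderBy(i => i.Priority).ThenBy(i => i.Id, StringComparer.Ordinal).ToList();
}
```

Given FAIR-02 (High) and TRANS-02 (Critical), `Order` puts TRANS-02 first.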

Algorithmic Impact Questions


| Question | Why It Matters |
|---|---|
| Who benefits from this algorithm? | Ensure equitable benefit distribution |
| Who might be harmed? | Identify vulnerable populations |
| What happens when it's wrong? | Understand failure impact |
| Can it be gamed or manipulated? | Assess adversarial risks |
| Does it entrench existing inequalities? | Check for systemic bias |
| What feedback loops might emerge? | Predict unintended consequences |
| Is there meaningful human oversight? | Ensure accountability |
| Can decisions be explained? | Support transparency |
| Is consent meaningful and informed? | Respect autonomy |
| What are the long-term societal effects? | Consider systemic impact |

Ethics Review Board


Board Structure


```text
Ethics Review Board Composition:
├── Chair (Senior Leadership)
├── Ethics Officer (if applicable)
├── Technical Lead (understands the technology)
├── Legal Representative
├── Privacy Officer
├── Business Representative
├── External Ethicist (optional but recommended)
└── User/Community Representative (for significant decisions)
```

Review Thresholds


| Trigger | Review Level | Timeline |
|---|---|---|
| New AI/ML system | Full board review | Before development |
| High-risk application | Full board review | Before deployment |
| Significant model update | Expedited review | Before release |
| Incident or complaint | Post-hoc review | Within 1 week |
| Annual review | Full board review | Annual |
| Employee concern | Expedited review | Within 2 weeks |
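If review routing is automated, these thresholds can live in a small lookup table rather than in prose. A hedged sketch; the trigger strings and `ReviewLevel` names are assumptions mapped directly from the table above:

```csharp
using System;
using System.Collections.Generic;

public enum ReviewLevel { FullBoard, Expedited, PostHoc }

public static class ReviewThresholds
{
    // Trigger -> (review level, timeline), mirroring the thresholds table.
    public static readonly IReadOnlyDictionary<string, (ReviewLevel Level, string Timeline)> Map =
        new Dictionary<string, (ReviewLevel, string)>(StringComparer.OrdinalIgnoreCase)
        {
            ["New AI/ML system"]         = (ReviewLevel.FullBoard, "Before development"),
            ["High-risk application"]    = (ReviewLevel.FullBoard, "Before deployment"),
            ["Significant model update"] = (ReviewLevel.Expedited, "Before release"),
            ["Incident or complaint"]    = (ReviewLevel.PostHoc,   "Within 1 week"),
            ["Annual review"]            = (ReviewLevel.FullBoard, "Annual"),
            ["Employee concern"]         = (ReviewLevel.Expedited, "Within 2 weeks"),
        };
}
```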

Board Decision Framework


```csharp
using System;
using System.Collections.Generic;

public enum EthicsDecision
{
    Approved,                    // Proceed as designed
    ApprovedWithConditions,      // Proceed after specified changes
    RequiresRedesign,            // Fundamental changes needed
    Deferred,                    // Need more information
    Rejected,                    // Unacceptable ethical risk
    EscalateToExecutive          // Beyond board authority
}

// One board member's recorded vote.
public record BoardMemberVote(string Member, EthicsDecision Vote);

public class EthicsReviewResult
{
    public required EthicsDecision Decision { get; init; }
    public required string Rationale { get; init; }
    public List<string> Conditions { get; init; } = new();
    public List<string> MonitoringRequirements { get; init; } = new();
    public DateTimeOffset? NextReviewDate { get; init; }
    public List<BoardMemberVote> Votes { get; init; } = new();
}
```
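A conditional approval under this framework might be recorded as follows. The snippet restates the decision enum and result class (minus the vote list) so it compiles on its own; the rationale and condition strings are purely illustrative:

```csharp
using System;
using System.Collections.Generic;

public enum EthicsDecision { Approved, ApprovedWithConditions, RequiresRedesign, Deferred, Rejected, EscalateToExecutive }

public class EthicsReviewResult
{
    public required EthicsDecision Decision { get; init; }
    public required string Rationale { get; init; }
    public List<string> Conditions { get; init; } = new();
    public List<string> MonitoringRequirements { get; init; } = new();
    public DateTimeOffset? NextReviewDate { get; init; }
}

public static class Demo
{
    // Example: approve with conditions, and schedule the next review six months out.
    public static EthicsReviewResult ConditionalApproval() => new()
    {
        Decision = EthicsDecision.ApprovedWithConditions,
        Rationale = "Benefits outweigh harms once bias mitigations land.",
        Conditions = new() { "Complete FAIR-01 bias testing", "Add human override path (CTRL-02)" },
        MonitoringRequirements = new() { "Monthly fairness metric report" },
        NextReviewDate = DateTimeOffset.UtcNow.AddMonths(6),
    };
}
```

The conditions deliberately reference checklist ids (FAIR-01, CTRL-02) so mitigation tracking and the checklist stay linked.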

Responsible Innovation Framework


Stage-Gate Ethics Integration


```text
Stage 1: Ideation
├── Initial ethics screening
├── Identify potential concerns
└── Go/No-Go for research

Stage 2: Research & Design
├── Stakeholder analysis
├── Preliminary impact assessment
└── Ethics-by-design integration

Stage 3: Development
├── Ongoing ethics review
├── Testing for bias/harm
└── Documentation

Stage 4: Pre-Deployment
├── Full ethical impact assessment
├── Board review (if triggered)
└── Mitigation verification

Stage 5: Deployment
├── Monitoring plan activation
├── Feedback mechanisms
└── Incident response ready

Stage 6: Operations
├── Ongoing monitoring
├── Regular reviews
└── Continuous improvement
```

Ethics Review Checklist


Pre-Development


  • Ethical impact assessment completed
  • Stakeholder analysis documented
  • Potential harms identified
  • Ethics review board consulted (if required)
  • Mitigation plans defined

Development


  • Ethics-by-design principles applied
  • Bias testing conducted
  • Explainability built in
  • Human oversight designed
  • Documentation complete

Pre-Deployment


  • Full assessment reviewed
  • All mitigations implemented
  • Monitoring in place
  • Redress mechanism ready
  • Ethics sign-off obtained

Operations


  • Regular monitoring active
  • Feedback collected and reviewed
  • Incidents investigated
  • Periodic re-assessment scheduled

Cross-References


  • AI Governance: ai-governance for regulatory compliance
  • Bias Assessment: research fairness metrics via MCP (perplexity: "AI fairness metrics NIST")
  • Data Privacy: gdpr-compliance for privacy considerations

Resources
