nist-ai-rmf
NIST AI Risk Management Framework (AI RMF 1.0)
This skill enables AI agents to perform a comprehensive AI risk assessment using the NIST AI Risk Management Framework (AI RMF 1.0), published January 2023 by the National Institute of Standards and Technology.
The AI RMF is a voluntary, technology- and sector-agnostic framework designed to help organizations manage risks associated with AI systems throughout their lifecycle. It promotes trustworthy AI development by addressing risks that affect individuals, organizations, and society.
Use this skill to identify, assess, and manage AI risks; establish governance structures; ensure trustworthy AI characteristics; and align with international AI risk management best practices.
Combine with "ISO 42001 AI Governance" for comprehensive compliance coverage or "OWASP LLM Top 10" for security-focused assessment.
When to Use This Skill
Invoke this skill when:
- Assessing risks of AI systems before deployment
- Establishing AI governance and accountability structures
- Evaluating trustworthiness of AI products and services
- Preparing for regulatory compliance (EU AI Act, state AI laws)
- Conducting periodic AI risk reviews
- Evaluating third-party AI tools and vendors
- Building organizational AI risk management programs
- Documenting AI system risks for stakeholders
Inputs Required
When executing this assessment, gather:
- ai_system_description: Description of the AI system (purpose, capabilities, deployment context, users, data sources) [REQUIRED]
- system_lifecycle_stage: Current stage (design, development, deployment, monitoring, decommissioning) [OPTIONAL, defaults to deployment]
- organization_context: Organization size, industry, risk tolerance, regulatory environment [OPTIONAL]
- existing_controls: Current risk management processes or controls in place [OPTIONAL]
- specific_concerns: Known risks, incidents, or areas of focus [OPTIONAL]
- stakeholders: Key stakeholders and affected communities [OPTIONAL]
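The input list above can be sketched as a simple structure. A minimal example, assuming a Python dict keyed by the parameter names listed; the sample values and the `validate_inputs` helper are hypothetical illustrations, not part of the skill's interface:

```python
# Hypothetical example of the assessment inputs. Only ai_system_description
# is required; every other field is optional.
assessment_inputs = {
    "ai_system_description": (
        "Customer-support chatbot built on a third-party LLM API; "
        "answers billing questions for retail banking customers."
    ),
    "system_lifecycle_stage": "deployment",  # default when omitted
    "organization_context": "Mid-size bank, low risk tolerance, EU and US regulated",
    "existing_controls": ["human review of escalations", "PII redaction"],
    "specific_concerns": ["hallucinated account information"],
    "stakeholders": ["customers", "support agents", "compliance team"],
}

def validate_inputs(inputs: dict) -> dict:
    """Enforce the one required field and fill the documented default."""
    if not inputs.get("ai_system_description"):
        raise ValueError("ai_system_description is required")
    inputs.setdefault("system_lifecycle_stage", "deployment")
    return inputs
```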
Trustworthy AI Characteristics
The AI RMF identifies seven characteristics of trustworthy AI that serve as evaluation criteria across all functions:
- Valid and Reliable: System performs as intended with consistent results
- Safe: System does not endanger human life, health, property, or the environment
- Secure and Resilient: System withstands adverse events and recovers gracefully
- Accountable and Transparent: Information about the system is available to stakeholders
- Explainable and Interpretable: Mechanisms and outputs can be understood
- Privacy-Enhanced: Human autonomy and data rights are protected
- Fair with Harmful Bias Managed: System does not produce discriminatory outcomes
The 4 Core Functions
The AI RMF Core is composed of four functions, each broken into categories and subcategories:
GOVERN Function
Establishes organizational policies, processes, and accountability for AI risk management. GOVERN is cross-cutting and applies across all other functions.
GOVERN 1: Policies and Processes
Policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risks are in place, transparent, and implemented effectively.
- GOVERN 1.1: Legal and regulatory requirements involving AI are understood, managed, and documented
- GOVERN 1.2: Trustworthy AI characteristics integrated into organizational policies, processes, and practices
- GOVERN 1.3: Processes to determine needed risk management activity levels based on organizational risk tolerance
- GOVERN 1.4: Risk management process and outcomes established through transparent policies, procedures, and controls
- GOVERN 1.5: Ongoing monitoring and periodic review of risk management process with clear roles and responsibilities
- GOVERN 1.6: Mechanisms to inventory AI systems resourced by organizational risk priorities
- GOVERN 1.7: Processes for decommissioning and phasing out AI systems safely
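GOVERN 1.6 calls for an inventory of AI systems resourced by organizational risk priorities. A minimal sketch of one inventory record; the field names and the helper are illustrative conveniences, not prescribed by the framework:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in the organizational AI system inventory (GOVERN 1.6).
    Field names are illustrative, not defined by the AI RMF."""
    system_id: str
    purpose: str
    lifecycle_stage: str   # design/development/deployment/monitoring/decommissioning
    risk_priority: str     # e.g. "high"/"medium"/"low" per org risk tolerance
    owner: str             # accountable role (GOVERN 2.1)
    third_party_components: list = field(default_factory=list)  # GOVERN 6

inventory = [
    AISystemRecord("sys-001", "resume screening", "deployment", "high",
                   "HR analytics lead", ["vendor-hosted ranking model"]),
]

def decommissioning_candidates(inv):
    """Systems that should enter the GOVERN 1.7 phase-out process."""
    return [r for r in inv if r.lifecycle_stage == "decommissioning"]
```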
GOVERN 2: Accountability Structures
Accountability structures ensure appropriate teams and individuals are empowered, responsible, and trained for AI risk management.
- GOVERN 2.1: Roles, responsibilities, and communication lines documented and clear
- GOVERN 2.2: Personnel receive AI risk management training
- GOVERN 2.3: Executive leadership takes responsibility for AI decisions
GOVERN 3: Workforce Diversity and Inclusion
Workforce diversity, equity, inclusion, and accessibility processes are prioritized in AI risk management.
- GOVERN 3.1: Decision-making informed by diverse team (demographics, disciplines, expertise)
- GOVERN 3.2: Policies define roles for human-AI configurations and oversight
GOVERN 4: Risk Culture
Organizational teams are committed to a culture that considers and communicates AI risk.
- GOVERN 4.1: Policies foster critical thinking and safety-first mindset
- GOVERN 4.2: Teams document and communicate risks and impacts broadly
- GOVERN 4.3: Practices enable AI testing, incident identification, and information sharing
GOVERN 5: Stakeholder Engagement
Processes are in place for robust engagement with relevant AI actors.
- GOVERN 5.1: Policies collect, consider, and integrate external feedback on impacts
- GOVERN 5.2: Mechanisms regularly incorporate adjudicated feedback into system design
GOVERN 6: Third-Party Risk
Policies and procedures address AI risks from third-party software, data, and supply chain.
- GOVERN 6.1: Policies address risks from third-party entities including IP infringement
- GOVERN 6.2: Contingency processes handle failures in high-risk third-party systems
MAP Function
Identifies and contextualizes AI system risks within the operational environment.
MAP 1: Context Established
Context is established and understood.
- MAP 1.1: Intended purposes, beneficial uses, laws, norms, and deployment settings documented
- MAP 1.2: Participation of interdisciplinary, demographically diverse AI actors ensured and documented
- MAP 1.3: Organization's mission and goals for AI technology understood and documented
- MAP 1.4: Business value or context clearly defined or re-evaluated
- MAP 1.5: Organizational risk tolerances determined and documented
- MAP 1.6: System requirements elicited with socio-technical considerations
MAP 2: System Categorization
Categorization of the AI system is performed.
- MAP 2.1: Specific tasks and methods defined (classifiers, generative models, recommenders)
- MAP 2.2: System knowledge limits and human oversight documented
- MAP 2.3: Scientific integrity and TEVV considerations identified
MAP 3: Capabilities and Costs
AI capabilities, targeted usage, goals, expected benefits, and costs are understood.
- MAP 3.1: Potential benefits of intended functionality examined and documented
- MAP 3.2: Potential costs (monetary and non-monetary) from AI errors documented
- MAP 3.3: Targeted application scope specified based on capability
- MAP 3.4: Operator and practitioner proficiency assessed
- MAP 3.5: Human oversight processes defined and documented
MAP 4: Component Risks
Risks and benefits are mapped for all components including third-party.
- MAP 4.1: Approaches for mapping technology and legal risks documented
- MAP 4.2: Internal risk controls for components identified and documented
MAP 5: Impact Characterization
Impacts to individuals, groups, communities, organizations, and society are characterized.
- MAP 5.1: Likelihood and magnitude of impacts (beneficial and harmful) documented
- MAP 5.2: Practices for regular engagement with relevant AI actors documented
MEASURE Function
Employs tools, techniques, and methodologies to assess, benchmark, and monitor AI risk.
MEASURE 1: Methods and Metrics
Appropriate methods and metrics are identified and applied.
- MEASURE 1.1: Approaches and metrics selected starting with most significant risks
- MEASURE 1.2: Appropriateness of metrics regularly assessed and updated
- MEASURE 1.3: Internal experts or independent assessors involved in assessments
MEASURE 2: Trustworthiness Evaluation
AI systems are evaluated for trustworthy characteristics.
- MEASURE 2.1: Test sets, metrics, and tool details documented during TEVV
- MEASURE 2.2: Evaluations with human subjects meet requirements and represent relevant populations
- MEASURE 2.3: Performance or assurance criteria measured and demonstrated
- MEASURE 2.4: Functionality and behavior monitored in production
- MEASURE 2.5: System demonstrated valid and reliable with generalizability limitations documented
- MEASURE 2.6: System regularly evaluated for safety risks with residual risk within tolerance
- MEASURE 2.7: Security and resilience evaluated and documented
- MEASURE 2.8: Transparency and accountability risks examined
- MEASURE 2.9: AI model explained, validated, and output interpreted within context
- MEASURE 2.10: Privacy risk examined and documented
- MEASURE 2.11: Fairness and bias evaluated with results documented
- MEASURE 2.12: Environmental impact and sustainability assessed
- MEASURE 2.13: Effectiveness of TEVV metrics evaluated
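MEASURE 2.11 (fairness and bias) can be made concrete with a simple disparity metric. A sketch of demographic parity difference — the gap in positive-outcome rates across groups. This is one of many possible fairness metrics, and any acceptance threshold is an organizational choice, not an AI RMF requirement:

```python
def demographic_parity_difference(outcomes, groups):
    """Largest gap in positive-outcome rate across groups.
    outcomes: iterable of 0/1 decisions; groups: parallel group labels."""
    counts = {}
    for out, grp in zip(outcomes, groups):
        n_pos, n = counts.get(grp, (0, 0))
        counts[grp] = (n_pos + out, n + 1)
    per_group = {g: pos / n for g, (pos, n) in counts.items()}
    return max(per_group.values()) - min(per_group.values())

# Toy example: group "a" is approved at 0.75, group "b" at 0.25.
outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(outcomes, groups)  # 0.5
```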
MEASURE 3: Risk Tracking
Mechanisms for tracking identified AI risks over time are in place.
- MEASURE 3.1: Approaches track existing, unanticipated, and emergent risks
- MEASURE 3.2: Risk tracking considered for settings where assessment is difficult
- MEASURE 3.3: Feedback processes for end users to report problems and appeal outcomes
MEASURE 4: Measurement Efficacy
Feedback about efficacy of measurement is gathered and assessed.
- MEASURE 4.1: Measurement approaches informed by domain experts and end users
- MEASURE 4.2: Results validated for consistency with intended performance
- MEASURE 4.3: Measurable performance improvements or declines identified
MANAGE Function
Allocates resources to mapped and measured risks on a regular basis.
MANAGE 1: Risk Prioritization
AI risks based on assessments are prioritized, responded to, and managed.
- MANAGE 1.1: Determination made whether AI system achieves intended purposes
- MANAGE 1.2: Treatment of risks prioritized based on impact, likelihood, and resources
- MANAGE 1.3: Responses to high-priority risks developed (mitigate, transfer, avoid, accept)
- MANAGE 1.4: Negative residual risks documented for downstream users
MANAGE 2: Benefit Maximization
Strategies to maximize AI benefits and minimize negative impacts are planned and documented.
- MANAGE 2.1: Resources to manage risks considered alongside non-AI alternatives
- MANAGE 2.2: Mechanisms to sustain value of deployed systems
- MANAGE 2.3: Procedures to respond to and recover from unknown risks
- MANAGE 2.4: Mechanisms to supersede, disengage, or deactivate inconsistent systems
MANAGE 3: Third-Party Risk Management
AI risks and benefits from third-party entities are managed.
- MANAGE 3.1: Third-party risks regularly monitored with controls applied
- MANAGE 3.2: Pre-trained models monitored as part of regular maintenance
MANAGE 4: Communication and Monitoring
Risk treatments and communication plans are documented and monitored.
- MANAGE 4.1: Post-deployment monitoring plans with user input, appeal, and decommissioning mechanisms
- MANAGE 4.2: Continual improvement activities integrated into updates
- MANAGE 4.3: Incidents communicated to relevant actors; tracking and recovery documented
Audit Procedure
Follow these steps systematically:
Step 1: System Understanding (15 minutes)
- Review AI system:
  - Analyze ai_system_description and system_lifecycle_stage
  - Identify system type (classifier, generative, recommender, autonomous, etc.)
  - Document data sources, models, and deployment environment
  - Note stakeholders and affected communities
- Understand context:
  - Review organization_context and the regulatory environment
  - Identify applicable laws and standards
  - Note risk tolerance and existing controls
- Define scope:
  - Determine which functions and categories to assess
  - Prioritize based on lifecycle stage and concerns
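The scoping step can be sketched as a small helper that emphasizes different functions by lifecycle stage. The stage-to-function mapping below is an illustrative judgment call — the framework itself applies all four functions across the lifecycle, with GOVERN always cross-cutting:

```python
# Hypothetical scoping helper. The emphasis mapping is an assumption for
# illustration, not part of the NIST AI RMF.
STAGE_EMPHASIS = {
    "design":          ["GOVERN", "MAP"],
    "development":     ["MAP", "MEASURE"],
    "deployment":      ["MEASURE", "MANAGE"],
    "monitoring":      ["MEASURE", "MANAGE"],
    "decommissioning": ["GOVERN", "MANAGE"],
}

def assessment_scope(lifecycle_stage: str) -> list:
    """GOVERN is cross-cutting, so it is always in scope."""
    emphasis = STAGE_EMPHASIS.get(lifecycle_stage, [])
    return sorted(set(["GOVERN"] + emphasis))
```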
Step 2: GOVERN Assessment (20 minutes)
Evaluate organizational governance:
- G1: Are AI risk policies in place and transparent?
- G1.1: Legal/regulatory requirements understood and documented?
- G1.2: Trustworthy AI characteristics in organizational policies?
- G1.5: Monitoring and review processes planned?
- G1.6: AI system inventory maintained?
- G2: Accountability structures defined?
- G2.1: Roles and responsibilities clear?
- G2.3: Executive leadership accountable?
- G3.1: Diverse team informing decisions?
- G4: Risk culture fostered?
- G5: Stakeholder engagement processes in place?
- G6: Third-party risks addressed?
Step 3: MAP Assessment (20 minutes)
Evaluate risk identification and context:
- M1.1: Intended purposes and deployment context documented?
- M1.5: Risk tolerances determined?
- M2.1: AI tasks and methods defined?
- M2.2: Knowledge limits and human oversight documented?
- M3.1: Benefits examined and documented?
- M3.2: Costs from AI errors documented?
- M3.5: Human oversight processes defined?
- M4.1: Component risks mapped?
- M5.1: Impact likelihood and magnitude documented?
Step 4: MEASURE Assessment (25 minutes)
Evaluate risk measurement and monitoring:
- ME1.1: Risk metrics selected and applied?
- ME2.3: Performance criteria measured?
- ME2.4: Production behavior monitored?
- ME2.5: Validity and reliability demonstrated?
- ME2.6: Safety risks evaluated?
- ME2.7: Security and resilience evaluated?
- ME2.9: Model explainability documented?
- ME2.10: Privacy risk examined?
- ME2.11: Fairness and bias evaluated?
- ME3.1: Risk tracking in place?
- ME3.3: User feedback mechanisms established?
Step 5: MANAGE Assessment (20 minutes)
Evaluate risk response and treatment:
- MA1.1: System achieves intended purposes?
- MA1.2: Risk treatment prioritized?
- MA1.3: Response plans for high-priority risks?
- MA2.1: Non-AI alternatives considered?
- MA2.3: Unknown risk response procedures?
- MA2.4: Deactivation mechanisms in place?
- MA3.1: Third-party risks monitored?
- MA4.1: Post-deployment monitoring implemented?
- MA4.3: Incident communication and recovery documented?
Step 6: Report Generation (20 minutes)
Compile assessment findings with ratings and recommendations.
Output Format
Generate a comprehensive NIST AI RMF assessment report:
NIST AI RMF Assessment Report
AI System: [Name/Description]
Organization: [Name]
Date: [Date]
Lifecycle Stage: [Design/Development/Deployment/Monitoring]
Evaluator: [AI Agent or Human]
AI RMF Version: 1.0 (January 2023)
Executive Summary
Overall Risk Profile: [Low / Medium / High / Critical]
System Type: [Classifier / Generative / Recommender / Autonomous / Other]
Deployment Context: [Internal / Customer-facing / Public / Critical infrastructure]
Regulatory Applicability: [EU AI Act risk level, state laws, sector regulations]
Key Findings
- Total Issues: [X]
- Critical: [X] (immediate action required)
- High: [X] (action required within 30 days)
- Medium: [X] (action required within 90 days)
- Low: [X] (improvements recommended)
Trustworthiness Summary
| Characteristic | Status | Rating |
|---|---|---|
| Valid & Reliable | [Status] | [1-5] |
| Safe | [Status] | [1-5] |
| Secure & Resilient | [Status] | [1-5] |
| Accountable & Transparent | [Status] | [1-5] |
| Explainable & Interpretable | [Status] | [1-5] |
| Privacy-Enhanced | [Status] | [1-5] |
| Fair (Bias Managed) | [Status] | [1-5] |
GOVERN Function Assessment
GOVERN 1: Policies and Processes
Rating: [Not Implemented / Partial / Substantial / Full]
Findings:
- [Finding 1 with evidence]
- [Finding 2 with evidence]
Gaps:
- [Gap description]
Recommendations:
- [Recommendation with priority]
GOVERN 2: Accountability Structures
Rating: [Not Implemented / Partial / Substantial / Full]
[Continue for all GOVERN categories...]
MAP Function Assessment
MAP 1: Context Established
Rating: [Not Implemented / Partial / Substantial / Full]
Findings:
- [Findings with evidence]
Gaps:
- [Gap description]
Recommendations:
- [Recommendation with priority]
[Continue for all MAP categories...]
MEASURE Function Assessment
MEASURE 1: Methods and Metrics
Rating: [Not Implemented / Partial / Substantial / Full]
Findings:
- [Findings with evidence]
Gaps:
- [Gap description]
Recommendations:
- [Recommendation with priority]
[Continue for all MEASURE categories...]
MANAGE Function Assessment
MANAGE 1: Risk Prioritization
Rating: [Not Implemented / Partial / Substantial / Full]
Findings:
- [Findings with evidence]
Gaps:
- [Gap description]
Recommendations:
- [Recommendation with priority]
[Continue for all MANAGE categories...]
Risk Register
| ID | Risk Description | Function | Likelihood | Impact | Priority | Mitigation |
|---|---|---|---|---|---|---|
| R1 | [Description] | [G/M/ME/MA] | [L/M/H] | [L/M/H] | [P0-P3] | [Strategy] |
| R2 | [Description] | [G/M/ME/MA] | [L/M/H] | [L/M/H] | [P0-P3] | [Strategy] |
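The register's Priority column can be derived from the Likelihood and Impact columns. A minimal sketch, assuming a multiplicative L/M/H scoring with illustrative P0-P3 cutoffs (the AI RMF does not prescribe a formula; align the bands with your organization's risk tolerance):

```python
LEVELS = {"L": 1, "M": 2, "H": 3}

def priority(likelihood: str, impact: str) -> str:
    """Map L/M/H likelihood and impact to a P0-P3 priority band.
    Cutoffs are an illustrative convention, not an AI RMF requirement."""
    score = LEVELS[likelihood] * LEVELS[impact]  # 1..9
    if score >= 9:
        return "P0"  # critical: immediate action
    if score >= 6:
        return "P1"  # high: within 30 days
    if score >= 3:
        return "P2"  # medium: within 90 days
    return "P3"      # low: improvements recommended

# Hypothetical register entries; function codes follow the table
# (G = GOVERN, M = MAP, ME = MEASURE, MA = MANAGE).
risk_register = [
    {"id": "R1", "function": "ME", "likelihood": "H", "impact": "H",
     "mitigation": "mitigate"},
    {"id": "R2", "function": "G", "likelihood": "L", "impact": "M",
     "mitigation": "accept"},
]
for risk in risk_register:
    risk["priority"] = priority(risk["likelihood"], risk["impact"])
```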
Remediation Roadmap
Phase 1: Critical (0-30 days)
- [Action item with owner and deadline]
- [Action item with owner and deadline]
Phase 2: High Priority (30-90 days)
- [Action item with owner and deadline]
Phase 3: Medium Priority (90-180 days)
- [Action item with owner and deadline]
Phase 4: Continuous Improvement
- [Ongoing practices]
Compliance Alignment
Regulatory Mapping
| Regulation | Relevant AI RMF Functions | Status |
|---|---|---|
| EU AI Act | GOVERN, MAP, MEASURE | [Status] |
| NIST CSF 2.0 | GOVERN, MANAGE | [Status] |
| State AI Laws | GOVERN, MAP | [Status] |
| Sector Regulations | [Relevant functions] | [Status] |
Next Steps
Immediate Actions
- Address critical findings
- Assign risk owners
- Establish monitoring cadence
Short-term (1-3 months)
- Implement Phase 1 remediation
- Establish governance structure
- Train personnel on AI RMF
Long-term (3-12 months)
- Complete all remediation phases
- Conduct follow-up assessment
- Integrate into organizational risk management
Resources
- NIST AI RMF 1.0
- NIST AI RMF Playbook
- NIST AI RMF Generative AI Profile
- NIST Trustworthy AI Resource Center
Assessment Version: 1.0
Date: [Date]
Scoring Guide
Use this scale for subcategory ratings:
| Rating | Description |
|---|---|
| Not Implemented | No evidence of activity or documentation |
| Partial | Some activity but inconsistent or incomplete |
| Substantial | Mostly implemented with minor gaps |
| Full | Fully implemented and regularly maintained |
Use this scale for trustworthiness characteristics:
| Score | Description |
|---|---|
| 1 | Not addressed |
| 2 | Minimally addressed |
| 3 | Partially addressed |
| 4 | Substantially addressed |
| 5 | Fully addressed and monitored |
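The per-characteristic 1-5 scores can be rolled up into the report's Overall Risk Profile. A sketch using a worst-score rule (any weak characteristic dominates); the thresholds are an illustrative convention, not defined by the AI RMF:

```python
CHARACTERISTICS = [
    "Valid & Reliable", "Safe", "Secure & Resilient",
    "Accountable & Transparent", "Explainable & Interpretable",
    "Privacy-Enhanced", "Fair (Bias Managed)",
]

def overall_risk_profile(scores: dict) -> str:
    """Roll 1-5 trustworthiness scores up to Low/Medium/High/Critical.
    Missing characteristics default to 1 (not addressed); the cutoffs
    are an assumption for illustration."""
    worst = min(scores.get(c, 1) for c in CHARACTERISTICS)
    if worst <= 1:
        return "Critical"
    if worst == 2:
        return "High"
    if worst == 3:
        return "Medium"
    return "Low"
```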
Generative AI Considerations
For generative AI systems, additionally evaluate (per NIST AI 600-1 GenAI Profile, July 2024):
- Content provenance: Mechanisms to track AI-generated content origin
- Confabulation risk: Controls for hallucinated or fabricated outputs
- Data privacy: Training data protections and consent
- Environmental impact: Computational resource consumption
- Information security: Prompt injection and adversarial robustness
- Harmful content: Filters and safeguards for toxic or dangerous outputs
- Third-party risks: Foundation model and API dependencies
- Human-AI interaction: User awareness that they are interacting with AI
Best Practices
- Start with GOVERN: Establish governance before mapping risks
- Iterate continuously: Risk management is ongoing, not one-time
- Engage stakeholders: Include diverse perspectives in assessments
- Document everything: Maintain evidence for accountability
- Align with existing frameworks: Integrate with NIST CSF, ISO 42001, SOC 2
- Tailor to context: Adapt depth of assessment to system risk level
- Test in production: Monitor deployed systems, not just pre-deployment
- Plan for failure: Have incident response and decommissioning procedures
- Consider societal impact: Look beyond organizational risks
- Stay current: Monitor evolving AI regulations and standards
Version
1.0 - Initial release (NIST AI RMF 1.0 compliant)
Remember: The NIST AI RMF is voluntary and risk-based. Not all subcategories apply to every system. Tailor the assessment depth to the system's risk profile and organizational context.