Found 4 Skills
AI governance and compliance guidance covering EU AI Act risk classification, NIST AI RMF, responsible AI principles, AI ethics review, and regulatory compliance for AI systems.
Use when decisions could affect groups differently and you need to anticipate harms and benefits, assess fairness and safety concerns, identify vulnerable populations, propose risk mitigations, or define monitoring metrics, or when the user mentions ethical review, impact assessment, differential harm, safety analysis, vulnerable groups, bias audits, or responsible AI/tech.
Creates a reusable use case specification file that defines the business problem, stakeholders, and measurable success criteria for model customization, as recommended by the AWS Responsible AI Lens. Use as the default first step in any model customization plan. Skip only if the user explicitly declines or already has a use case specification to reuse. Captures problem statement, primary users, and LLM-as-a-Judge success tenets.
Expert ISO 42001 AI Management System (AIMS) compliance advisor. Use this skill whenever a user asks about ISO/IEC 42001:2023, AI governance, AI management systems, AI risk assessment, AI system impact assessment, Annex A controls for AI, a Statement of Applicability for AI systems, AI policy, responsible AI, AI lifecycle management, AI incident management, AI transparency, AI bias, or AI certification readiness, or about any topic related to implementing or auditing an AI Management System. Also trigger for questions like "how do I become ISO 42001 certified?", "what controls does ISO 42001 require?", "how do I assess AI risk under 42001?", or "what is an AIMS?", and for any request involving organisational governance of AI systems, responsible AI frameworks, or AI regulatory compliance aligned to an ISO standard.