owasp-llm-top-10
OWASP Top 10 for LLM Applications
This skill encodes the OWASP Top 10 for Large Language Model Applications for secure LLM/GenAI design and review. References are loaded per risk. Based on OWASP Top 10 for LLM Applications 2025.
When to Read Which Reference
| Risk | Read |
|---|---|
| LLM01 Prompt Injection | references/llm01-prompt-injection.md |
| LLM02 Sensitive Information Disclosure | references/llm02-sensitive-information-disclosure.md |
| LLM03 Training Data & Supply Chain | references/llm03-training-data-supply-chain.md |
| LLM04 Data and Model Poisoning | references/llm04-data-model-poisoning.md |
| LLM05 Improper Output Handling | references/llm05-improper-output-handling.md |
| LLM06 Excessive Agency | references/llm06-excessive-agency.md |
| LLM07 System Prompt Leakage | references/llm07-system-prompt-leakage.md |
| LLM08 Vector and Embedding Weaknesses | references/llm08-vector-embedding-weaknesses.md |
| LLM09 Misinformation | references/llm09-misinformation.md |
| LLM10 Unbounded Consumption | references/llm10-unbounded-consumption.md |
Quick Patterns
- Treat all user and external input as untrusted.
- Validate and sanitize LLM outputs before use (XSS, SSRF, RCE).
- Limit agency and tool use; protect system prompts and RAG data.
- Apply rate limits and cost controls.
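The rate-limit and cost-control pattern above can be sketched as a simple per-user gate. The limits and the `check_budget` function name are illustrative assumptions, not OWASP-prescribed values:

```python
# Minimal sketch of per-user budget enforcement (LLM10 Unbounded Consumption).
# MAX_TOKENS_PER_REQUEST, MAX_REQUESTS_PER_MINUTE, and check_budget are
# illustrative names and values, not taken from the OWASP document.
import time
from collections import defaultdict

MAX_TOKENS_PER_REQUEST = 4096
MAX_REQUESTS_PER_MINUTE = 30

_request_log = defaultdict(list)  # user_id -> timestamps of recent requests


def check_budget(user_id: str, requested_tokens: int) -> bool:
    """Reject requests that exceed the token cap or the per-minute rate limit."""
    if requested_tokens > MAX_TOKENS_PER_REQUEST:
        return False
    now = time.monotonic()
    # Keep only requests from the last 60 seconds.
    window = [t for t in _request_log[user_id] if now - t < 60]
    if len(window) >= MAX_REQUESTS_PER_MINUTE:
        return False
    window.append(now)
    _request_log[user_id] = window
    return True
```

In production this state would live in a shared store (e.g. Redis) rather than process memory, and would sit alongside a hard spend cap per billing period.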
Quick Reference / Examples
| Task | Approach |
|---|---|
| Prevent prompt injection | Use delimiters, validate input, separate system/user context. See LLM01. |
| Protect sensitive data | Filter PII from training/prompts, apply output guards. See LLM02. |
| Validate LLM output | Sanitize before rendering (XSS) or executing (RCE). See LLM05. |
| Limit agency | Require human approval for destructive actions; scope tool permissions. See LLM06. |
| Control costs | Apply token limits, rate limiting, and budget caps. See LLM10. |
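The safe delimiter example below leaves sanitization unspecified. This is a minimal sketch of how the `sanitized_user_input` value could be produced; the specific stripping rules are illustrative assumptions:

```python
# Minimal sketch of input sanitization for the {sanitized_user_input} slot.
# The rules below are illustrative, not a complete defense against injection.
import re


def sanitize_user_input(text: str, max_len: int = 2000) -> str:
    """Strip delimiter-spoofing tags and control characters, then cap length."""
    # Remove attempts to close or reopen the <user_input> delimiter.
    text = re.sub(r"</?user_input>", "", text, flags=re.IGNORECASE)
    # Drop non-printable control characters (keep \t and \n).
    text = re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", text)
    return text[:max_len]
```

Sanitization reduces, but does not eliminate, injection risk; pair it with output validation and least-privilege tool scoping.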
Safe - delimiter and input validation:

```python
system_prompt = """You are a helpful assistant.
<user_input>
{sanitized_user_input}
</user_input>
Answer based only on the user input above."""
```

Unsafe - direct concatenation (injection risk):

```python
prompt = f"Answer this question: {user_input}"  # User can inject instructions
```

Output sanitization before rendering:

```python
import html

safe_output = html.escape(llm_response)  # Prevent XSS if rendering in browser
```
Workflow
Load the reference for the risk you are addressing. See OWASP Top 10 for LLM Applications and genai.owasp.org for the official list.