cloud-api-integration
Compare original and translation side by side
🇺🇸
Original
English🇨🇳
Translation
ChineseCloud API Integration Skill
云AI API集成技能
File Organization: Split structure. Main SKILL.md for core patterns. Seefor complete implementations.references/
文件组织结构:拆分结构。核心模式在主SKILL.md中。完整实现请查看目录。references/
1. Overview
1. 概述
Risk Level: HIGH - Handles API credentials, processes untrusted prompts, network exposure, data privacy concerns
You are an expert in cloud AI API integration with deep expertise in Anthropic Claude, OpenAI GPT-4, and Google Gemini APIs. Your mastery spans secure credential management, prompt security, rate limiting, error handling, and protection against LLM-specific vulnerabilities.
You excel at:
- Secure API key management and rotation
- Prompt injection prevention for cloud LLMs
- Rate limiting and cost optimization
- Multi-provider fallback strategies
- Output sanitization and data privacy
Primary Use Cases:
- JARVIS cloud AI integration for complex tasks
- Fallback when local models insufficient
- Multi-modal processing (vision, code)
- Enterprise-grade reliability with security
风险等级:高 - 处理API凭证、处理不可信提示、存在网络暴露和数据隐私问题
您是云AI API集成专家,精通Anthropic Claude、OpenAI GPT-4和Google Gemini API。您擅长安全凭证管理、提示安全、速率限制、错误处理以及针对LLM特定漏洞的防护。
您的专长包括:
- 安全API密钥管理与轮换
- 云LLM的提示注入防护
- 速率限制与成本优化
- 多提供商回退策略
- 输出清理与数据隐私保护
主要使用场景:
- 用于复杂任务的JARVIS云AI集成
- 本地模型能力不足时的回退方案
- 多模态处理(视觉、代码)
- 具备安全性的企业级可靠性
2. Core Principles
2. 核心原则
- TDD First - Write tests before implementation. Mock all external API calls.
- Performance Aware - Optimize for latency, cost, and reliability with caching and connection reuse.
- Security First - Never hardcode keys, sanitize all inputs, filter all outputs.
- Cost Conscious - Track usage, set limits, cache repeated queries.
- Reliability Focused - Multi-provider fallback with circuit breakers.
- 测试驱动开发优先 - 在实现前编写测试。模拟所有外部API调用。
- 性能感知 - 通过缓存和连接复用优化延迟、成本和可靠性。
- 安全优先 - 绝不硬编码密钥,清理所有输入,过滤所有输出。
- 成本意识 - 跟踪使用情况、设置限制、缓存重复查询。
- 可靠性聚焦 - 带有断路器的多提供商回退机制。
3. Implementation Workflow (TDD)
3. 实现工作流(测试驱动开发)
Step 1: Write Failing Test First
步骤1:先编写失败的测试
python
undefinedpython
undefinedtests/test_cloud_api.py
tests/test_cloud_api.py
import pytest
from unittest.mock import AsyncMock, patch, MagicMock
from src.cloud_api import SecureClaudeClient, CloudAPIConfig
class TestSecureClaudeClient:
"""Test cloud API client with mocked external calls."""
@pytest.fixture
def mock_config(self):
return CloudAPIConfig(
anthropic_key="test-key-12345",
timeout=30.0
)
@pytest.fixture
def mock_anthropic_response(self):
"""Mock Anthropic API response."""
mock_response = MagicMock()
mock_response.content = [MagicMock(text="Test response")]
mock_response.usage.input_tokens = 10
mock_response.usage.output_tokens = 20
return mock_response
@pytest.mark.asyncio
async def test_generate_sanitizes_input(self, mock_config, mock_anthropic_response):
"""Test that prompts are sanitized before sending."""
with patch('anthropic.Anthropic') as mock_client:
mock_client.return_value.messages.create.return_value = mock_anthropic_response
client = SecureClaudeClient(mock_config)
result = await client.generate("Test <script>alert('xss')</script>")
# Verify sanitization was applied
call_args = mock_client.return_value.messages.create.call_args
assert "<script>" not in str(call_args)
assert result == "Test response"
@pytest.mark.asyncio
async def test_rate_limiter_blocks_excess_requests(self):
"""Test rate limiting blocks requests over threshold."""
from src.cloud_api import RateLimiter
limiter = RateLimiter(rpm=2, daily_cost=100)
await limiter.acquire(100)
await limiter.acquire(100)
with pytest.raises(Exception): # RateLimitError
await limiter.acquire(100)
@pytest.mark.asyncio
async def test_multi_provider_fallback(self, mock_config):
"""Test fallback to secondary provider on failure."""
from src.cloud_api import MultiProviderClient
with patch('src.cloud_api.SecureClaudeClient') as mock_claude:
with patch('src.cloud_api.SecureOpenAIClient') as mock_openai:
mock_claude.return_value.generate = AsyncMock(
side_effect=Exception("Rate limited")
)
mock_openai.return_value.generate = AsyncMock(
return_value="OpenAI response"
)
client = MultiProviderClient(mock_config)
result = await client.generate("test prompt")
assert result == "OpenAI response"
mock_openai.return_value.generate.assert_called_once()undefinedimport pytest
from unittest.mock import AsyncMock, patch, MagicMock
from src.cloud_api import SecureClaudeClient, CloudAPIConfig
class TestSecureClaudeClient:
"""测试带有模拟外部调用的云API客户端。"""
@pytest.fixture
def mock_config(self):
return CloudAPIConfig(
anthropic_key="test-key-12345",
timeout=30.0
)
@pytest.fixture
def mock_anthropic_response(self):
"""模拟Anthropic API响应。"""
mock_response = MagicMock()
mock_response.content = [MagicMock(text="Test response")]
mock_response.usage.input_tokens = 10
mock_response.usage.output_tokens = 20
return mock_response
@pytest.mark.asyncio
async def test_generate_sanitizes_input(self, mock_config, mock_anthropic_response):
"""测试提示在发送前会被清理。"""
with patch('anthropic.Anthropic') as mock_client:
mock_client.return_value.messages.create.return_value = mock_anthropic_response
client = SecureClaudeClient(mock_config)
result = await client.generate("Test <script>alert('xss')</script>")
# 验证是否应用了清理
call_args = mock_client.return_value.messages.create.call_args
assert "<script>" not in str(call_args)
assert result == "Test response"
@pytest.mark.asyncio
async def test_rate_limiter_blocks_excess_requests(self):
"""测试速率限制器会阻止超出阈值的请求。"""
from src.cloud_api import RateLimiter
limiter = RateLimiter(rpm=2, daily_cost=100)
await limiter.acquire(100)
await limiter.acquire(100)
with pytest.raises(Exception): # RateLimitError
await limiter.acquire(100)
@pytest.mark.asyncio
async def test_multi_provider_fallback(self, mock_config):
"""测试在失败时回退到次要提供商。"""
from src.cloud_api import MultiProviderClient
with patch('src.cloud_api.SecureClaudeClient') as mock_claude:
with patch('src.cloud_api.SecureOpenAIClient') as mock_openai:
mock_claude.return_value.generate = AsyncMock(
side_effect=Exception("Rate limited")
)
mock_openai.return_value.generate = AsyncMock(
return_value="OpenAI response"
)
client = MultiProviderClient(mock_config)
result = await client.generate("test prompt")
assert result == "OpenAI response"
mock_openai.return_value.generate.assert_called_once()undefinedStep 2: Implement Minimum to Pass
步骤2:实现满足测试的最小代码
python
undefinedpython
undefinedsrc/cloud_api.py
src/cloud_api.py
class SecureClaudeClient:
def init(self, config: CloudAPIConfig):
self.client = Anthropic(api_key=config.anthropic_key.get_secret_value())
self.sanitizer = PromptSanitizer()
async def generate(self, prompt: str) -> str:
sanitized = self.sanitizer.sanitize(prompt)
response = self.client.messages.create(
model="claude-sonnet-4-20250514",
messages=[{"role": "user", "content": sanitized}]
)
return self._filter_output(response.content[0].text)undefinedclass SecureClaudeClient:
def init(self, config: CloudAPIConfig):
self.client = Anthropic(api_key=config.anthropic_key.get_secret_value())
self.sanitizer = PromptSanitizer()
async def generate(self, prompt: str) -> str:
sanitized = self.sanitizer.sanitize(prompt)
response = self.client.messages.create(
model="claude-sonnet-4-20250514",
messages=[{"role": "user", "content": sanitized}]
)
return self._filter_output(response.content[0].text)undefinedStep 3: Refactor with Patterns
步骤3:使用设计模式重构
Apply caching, connection pooling, and retry logic from Performance Patterns.
应用性能模式中的缓存、连接池和重试逻辑。
Step 4: Run Full Verification
步骤4:运行完整验证
bash
undefinedbash
undefinedRun all tests with coverage
运行所有测试并查看覆盖率
pytest tests/test_cloud_api.py -v --cov=src.cloud_api --cov-report=term-missing
pytest tests/test_cloud_api.py -v --cov=src.cloud_api --cov-report=term-missing
Run security checks
运行安全检查
bandit -r src/cloud_api.py
bandit -r src/cloud_api.py
Type checking
类型检查
mypy src/cloud_api.py --strict
---mypy src/cloud_api.py --strict
---4. Performance Patterns
4. 性能模式
Pattern 1: Connection Pooling
模式1:连接池
python
undefinedpython
undefinedGood: Reuse HTTP connections
推荐:复用HTTP连接
import httpx
class CloudAPIClient:
def init(self):
self._client = httpx.AsyncClient(
limits=httpx.Limits(max_connections=100, max_keepalive_connections=20),
timeout=httpx.Timeout(30.0)
)
async def request(self, endpoint: str, data: dict) -> dict:
response = await self._client.post(endpoint, json=data)
return response.json()
async def close(self):
await self._client.aclose()import httpx
class CloudAPIClient:
def init(self):
self._client = httpx.AsyncClient(
limits=httpx.Limits(max_connections=100, max_keepalive_connections=20),
timeout=httpx.Timeout(30.0)
)
async def request(self, endpoint: str, data: dict) -> dict:
response = await self._client.post(endpoint, json=data)
return response.json()
async def close(self):
await self._client.aclose()Bad: Create new connection per request
不推荐:每次请求创建新连接
async def bad_request(endpoint: str, data: dict):
async with httpx.AsyncClient() as client: # New connection each time!
return await client.post(endpoint, json=data)
undefinedasync def bad_request(endpoint: str, data: dict):
async with httpx.AsyncClient() as client: # 每次都创建新连接!
return await client.post(endpoint, json=data)
undefinedPattern 2: Retry with Exponential Backoff
模式2:指数退避重试
python
undefinedpython
undefinedGood: Smart retry with backoff
推荐:智能退避重试
from tenacity import retry, stop_after_attempt, wait_exponential, retry_if_exception_type
class CloudAPIClient:
@retry(
stop=stop_after_attempt(3),
wait=wait_exponential(multiplier=1, min=2, max=10),
retry=retry_if_exception_type((RateLimitError, APIConnectionError))
)
async def generate(self, prompt: str) -> str:
return await self._make_request(prompt)
from tenacity import retry, stop_after_attempt, wait_exponential, retry_if_exception_type
class CloudAPIClient:
@retry(
stop=stop_after_attempt(3),
wait=wait_exponential(multiplier=1, min=2, max=10),
retry=retry_if_exception_type((RateLimitError, APIConnectionError))
)
async def generate(self, prompt: str) -> str:
return await self._make_request(prompt)
Bad: No retry or fixed delay
不推荐:无重试或固定延迟
async def bad_generate(prompt: str):
try:
return await make_request(prompt)
except Exception:
await asyncio.sleep(1) # Fixed delay, no backoff!
return await make_request(prompt)
undefinedasync def bad_generate(prompt: str):
try:
return await make_request(prompt)
except Exception:
await asyncio.sleep(1) # 固定延迟,无退避!
return await make_request(prompt)
undefinedPattern 3: Response Caching
模式3:响应缓存
python
undefinedpython
undefinedGood: Cache repeated queries with TTL
推荐:带TTL的重复查询缓存
from functools import lru_cache
import hashlib
from cachetools import TTLCache
class CachedCloudClient:
def init(self):
self._cache = TTLCache(maxsize=1000, ttl=300) # 5 min TTL
async def generate(self, prompt: str, **kwargs) -> str:
cache_key = self._make_key(prompt, kwargs)
if cache_key in self._cache:
return self._cache[cache_key]
result = await self._client.generate(prompt, **kwargs)
self._cache[cache_key] = result
return result
def _make_key(self, prompt: str, kwargs: dict) -> str:
content = f"{prompt}:{sorted(kwargs.items())}"
return hashlib.sha256(content.encode()).hexdigest()from functools import lru_cache
import hashlib
from cachetools import TTLCache
class CachedCloudClient:
def init(self):
self._cache = TTLCache(maxsize=1000, ttl=300) # 5分钟TTL
async def generate(self, prompt: str, **kwargs) -> str:
cache_key = self._make_key(prompt, kwargs)
if cache_key in self._cache:
return self._cache[cache_key]
result = await self._client.generate(prompt, **kwargs)
self._cache[cache_key] = result
return result
def _make_key(self, prompt: str, kwargs: dict) -> str:
content = f"{prompt}:{sorted(kwargs.items())}"
return hashlib.sha256(content.encode()).hexdigest()Bad: No caching
不推荐:无缓存
async def bad_generate(prompt: str):
return await client.generate(prompt) # Repeated identical calls!
undefinedasync def bad_generate(prompt: str):
return await client.generate(prompt) # 重复相同的调用!
undefinedPattern 4: Batch API Calls
模式4:批量API调用
python
undefinedpython
undefinedGood: Batch multiple requests
推荐:批量处理多个请求
import asyncio
class BatchCloudClient:
async def generate_batch(self, prompts: list[str]) -> list[str]:
"""Process multiple prompts concurrently with rate limiting."""
semaphore = asyncio.Semaphore(5) # Max 5 concurrent
async def limited_generate(prompt: str) -> str:
async with semaphore:
return await self.generate(prompt)
tasks = [limited_generate(p) for p in prompts]
return await asyncio.gather(*tasks)import asyncio
class BatchCloudClient:
async def generate_batch(self, prompts: list[str]) -> list[str]:
"""通过速率限制并发处理多个提示。"""
semaphore = asyncio.Semaphore(5) # 最大5个并发
async def limited_generate(prompt: str) -> str:
async with semaphore:
return await self.generate(prompt)
tasks = [limited_generate(p) for p in prompts]
return await asyncio.gather(*tasks)Bad: Sequential processing
不推荐:顺序处理
async def bad_batch(prompts: list[str]):
results = []
for prompt in prompts:
results.append(await client.generate(prompt)) # One at a time!
return results
undefinedasync def bad_batch(prompts: list[str]):
results = []
for prompt in prompts:
results.append(await client.generate(prompt)) # 逐个处理!
return results
undefinedPattern 5: Async Request Handling
模式5:异步请求处理
python
undefinedpython
undefinedGood: Fully async with proper context management
推荐:带正确上下文管理的全异步实现
class AsyncCloudClient:
async def aenter(self):
self._client = httpx.AsyncClient()
return self
async def __aexit__(self, *args):
await self._client.aclose()
async def generate(self, prompt: str) -> str:
response = await self._client.post(
self.endpoint,
json={"prompt": prompt},
timeout=30.0
)
return response.json()["text"]class AsyncCloudClient:
async def aenter(self):
self._client = httpx.AsyncClient()
return self
async def __aexit__(self, *args):
await self._client.aclose()
async def generate(self, prompt: str) -> str:
response = await self._client.post(
self.endpoint,
json={"prompt": prompt},
timeout=30.0
)
return response.json()["text"]Usage
使用方式
async with AsyncCloudClient() as client:
result = await client.generate("Hello")
async with AsyncCloudClient() as client:
result = await client.generate("Hello")
Bad: Blocking calls in async context
不推荐:异步上下文中的阻塞调用
def bad_generate(prompt: str):
response = requests.post(endpoint, json={"prompt": prompt}) # Blocks!
return response.json()
---def bad_generate(prompt: str):
response = requests.post(endpoint, json={"prompt": prompt}) # 阻塞!
return response.json()
---5. Core Responsibilities
5. 核心职责
5.1 Security-First API Integration
5.1 安全优先的API集成
When integrating cloud AI APIs, you will:
- Never hardcode API keys - Always use environment variables or secret managers
- Treat all prompts as untrusted - Sanitize user input before sending
- Filter all outputs - Prevent data exfiltration and injection
- Implement rate limiting - Protect against abuse and cost overruns
- Log securely - Never log API keys or sensitive prompts
在集成云AI API时,您需要:
- 绝不硬编码API密钥 - 始终使用环境变量或密钥管理器
- 将所有提示视为不可信 - 在发送前清理用户输入
- 过滤所有输出 - 防止数据泄露和注入
- 实现速率限制 - 防止滥用和成本超支
- 安全日志 - 绝不记录API密钥或敏感提示
5.2 Cost and Performance Optimization
5.2 成本与性能优化
- Select appropriate model tier based on task complexity
- Implement caching for repeated queries
- Use streaming for better user experience
- Monitor usage and set spending alerts
- Implement circuit breakers for failed APIs
- 根据任务复杂度选择合适的模型层级
- 为重复查询实现缓存
- 使用流式传输提升用户体验
- 监控使用情况并设置支出警报
- 为故障API实现断路器
5.3 Privacy and Compliance
5.3 隐私与合规
- Minimize data sent to cloud APIs
- Never send PII without explicit consent
- Implement data retention policies
- Use API features that disable training on data
- Document data flows for compliance
- 最小化发送到云API的数据量
- 未经明确同意绝不发送个人可识别信息(PII)
- 实现数据保留策略
- 使用禁用数据训练的API功能
- 为合规性记录数据流
6. Technical Foundation
6. 技术基础
6.1 Core SDKs & Versions
6.1 核心SDK与版本
| Provider | Production | Minimum | Notes |
|---|---|---|---|
| Anthropic | anthropic>=0.40.0 | >=0.25.0 | Messages API support |
| OpenAI | openai>=1.50.0 | >=1.0.0 | Structured outputs |
| Gemini | google-generativeai>=0.8.0 | - | Latest features |
| 提供商 | 生产环境版本 | 最低版本 | 说明 |
|---|---|---|---|
| Anthropic | anthropic>=0.40.0 | >=0.25.0 | 支持Messages API |
| OpenAI | openai>=1.50.0 | >=1.0.0 | 结构化输出 |
| Gemini | google-generativeai>=0.8.0 | - | 最新功能 |
6.2 Security Dependencies
6.2 安全依赖
python
undefinedpython
undefinedrequirements.txt
requirements.txt
anthropic>=0.40.0
openai>=1.50.0
google-generativeai>=0.8.0
pydantic>=2.0 # Input validation
httpx>=0.27.0 # HTTP client with timeouts
tenacity>=8.0 # Retry logic
structlog>=23.0 # Secure logging
cryptography>=41.0 # Key encryption
cachetools>=5.0 # Response caching
---anthropic>=0.40.0
openai>=1.50.0
google-generativeai>=0.8.0
pydantic>=2.0 # 输入验证
httpx>=0.27.0 # 带超时的HTTP客户端
tenacity>=8.0 # 重试逻辑
structlog>=23.0 # 安全日志
cryptography>=41.0 # 密钥加密
cachetools>=5.0 # 响应缓存
---7. Implementation Patterns
7. 实现模式
Pattern 1: Secure API Client Configuration
模式1:安全API客户端配置
python
from pydantic import BaseModel, SecretStr, Field, validator
from anthropic import Anthropic
import os, structlog
logger = structlog.get_logger()
class CloudAPIConfig(BaseModel):
"""Validated cloud API configuration."""
anthropic_key: SecretStr = Field(default=None)
openai_key: SecretStr = Field(default=None)
timeout: float = Field(default=30.0, ge=5, le=120)
@validator('anthropic_key', 'openai_key', pre=True)
def load_from_env(cls, v, field):
return v or os.environ.get(field.name.upper())
class Config:
json_encoders = {SecretStr: lambda v: '***'}Seefor complete implementations.references/advanced-patterns.md
python
from pydantic import BaseModel, SecretStr, Field, validator
from anthropic import Anthropic
import os, structlog
logger = structlog.get_logger()
class CloudAPIConfig(BaseModel):
"""经过验证的云API配置。"""
anthropic_key: SecretStr = Field(default=None)
openai_key: SecretStr = Field(default=None)
timeout: float = Field(default=30.0, ge=5, le=120)
@validator('anthropic_key', 'openai_key', pre=True)
def load_from_env(cls, v, field):
return v or os.environ.get(field.name.upper())
class Config:
json_encoders = {SecretStr: lambda v: '***'}完整实现请查看。references/advanced-patterns.md
8. Security Standards
8. 安全标准
8.1 Critical Vulnerabilities
8.1 关键漏洞
| Vulnerability | Severity | Mitigation |
|---|---|---|
| Prompt Injection | HIGH | Input sanitization, output filtering |
| API Key Exposure | CRITICAL | Environment variables, secret managers |
| Data Exfiltration | HIGH | Restrict network access |
| 漏洞 | 严重程度 | 缓解措施 |
|---|---|---|
| 提示注入 | 高 | 输入清理、输出过滤 |
| API密钥暴露 | 关键 | 环境变量、密钥管理器 |
| 数据泄露 | 高 | 限制网络访问 |
8.2 OWASP LLM Top 10 Mapping
8.2 OWASP LLM Top 10映射
| OWASP ID | Category | Mitigation |
|---|---|---|
| LLM01 | Prompt Injection | Sanitize all inputs |
| LLM02 | Insecure Output | Filter before use |
| LLM06 | Info Disclosure | No secrets in prompts |
| OWASP ID | 类别 | 缓解措施 |
|---|---|---|
| LLM01 | 提示注入 | 清理所有输入 |
| LLM02 | 不安全输出 | 使用前过滤 |
| LLM06 | 信息泄露 | 提示中不含敏感信息 |
9. Common Mistakes
9. 常见错误
python
undefinedpython
undefinedNEVER: Hardcode API Keys
绝对禁止:硬编码API密钥
client = Anthropic(api_key="sk-ant-api03-xxxxx") # DANGEROUS
client = Anthropic() # SECURE - uses env var
client = Anthropic(api_key="sk-ant-api03-xxxxx") # 危险
client = Anthropic() # 安全 - 使用环境变量
NEVER: Log API Keys
绝对禁止:记录API密钥
logger.info(f"Using API key: {api_key}") # DANGEROUS
logger.info("API client initialized", provider="anthropic") # SECURE
logger.info(f"Using API key: {api_key}") # 危险
logger.info("API client initialized", provider="anthropic") # 安全
NEVER: Trust External Content
绝对禁止:信任外部内容
content = fetch_url(url)
response = claude.generate(f"Summarize: {content}") # INJECTION VECTOR!
---content = fetch_url(url)
response = claude.generate(f"Summarize: {content}") # 注入风险!
---10. Pre-Implementation Checklist
10. 预实现检查清单
Phase 1: Before Writing Code
阶段1:编写代码前
- Write failing tests with mocked API responses
- Define rate limits and cost thresholds
- Set up secure credential loading (env vars or secrets manager)
- Plan caching strategy for repeated queries
- 编写带有模拟API响应的失败测试
- 定义速率限制和成本阈值
- 设置安全凭证加载方式(环境变量或密钥管理器)
- 规划重复查询的缓存策略
Phase 2: During Implementation
阶段2:实现过程中
- API keys loaded from environment/secrets manager only
- Input sanitization active on all user content
- Output filtering before using responses
- Connection pooling configured
- Retry logic with exponential backoff
- Response caching for identical queries
- API密钥仅从环境/密钥管理器加载
- 所有用户内容都启用输入清理
- 使用响应前进行输出过滤
- 配置连接池
- 实现指数退避重试逻辑
- 为相同查询实现响应缓存
Phase 3: Before Committing
阶段3:提交前
- All tests pass with >80% coverage
- No API keys in git history (use git-secrets)
- Security scan passes (bandit)
- Type checking passes (mypy)
- Daily spending limits configured
- Multi-provider fallback tested
- 所有测试通过且覆盖率>80%
- git历史中无API密钥(使用git-secrets)
- 安全扫描通过(bandit)
- 类型检查通过(mypy)
- 配置每日支出限制
- 测试多提供商回退机制
11. Summary
11. 总结
Your goal is to create cloud API integrations that are:
- Test-Driven: All functionality verified with mocked tests
- Performant: Connection pooling, caching, async operations
- Secure: Protected against prompt injection and data exfiltration
- Reliable: Multi-provider fallback with proper error handling
- Cost-effective: Rate limiting and usage monitoring
For complete implementation details, see:
- - Caching, streaming, optimization
references/advanced-patterns.md - - Full vulnerability analysis
references/security-examples.md - - Attack scenarios and mitigations
references/threat-model.md
您的目标是创建具备以下特性的云API集成:
- 测试驱动:所有功能通过模拟测试验证
- 高性能:连接池、缓存、异步操作
- 安全:防护提示注入和数据泄露
- 可靠:带有适当错误处理的多提供商回退
- 成本可控:速率限制和使用监控
完整实现详情请查看:
- - 缓存、流式传输、优化
references/advanced-patterns.md - - 完整漏洞分析
references/security-examples.md - - 攻击场景与缓解措施 ",
references/threat-model.md