prowler-api
When to Use
Use this skill for Prowler-specific patterns:
- Row-Level Security (RLS) / tenant isolation
- RBAC permissions and role checks
- Provider lifecycle and validation
- Celery tasks with tenant context
- Multi-database architecture (4-database setup)
For generic DRF patterns (ViewSets, Serializers, Filters, JSON:API), use the `django-drf` skill.
Critical Rules
- ALWAYS use `rls_transaction(tenant_id)` when querying outside ViewSet context
- ALWAYS use `get_role()` before checking permissions (returns FIRST role only)
- ALWAYS use decorator order `@set_tenant` then `@handle_provider_deletion`
- ALWAYS use explicit through models for M2M relationships (required for RLS)
- NEVER access `Provider.objects` without RLS context in Celery tasks
- NEVER bypass RLS by using raw SQL or `connection.cursor()`
- NEVER use Django's default M2M - RLS requires through models with `tenant_id`

Note: `rls_transaction()` accepts both UUID objects and strings - it converts internally via `str(value)`.
Architecture Overview
4-Database Architecture
| Database | Alias | Purpose | RLS |
|---|---|---|---|
| | `default` | Standard API queries | Yes |
| | `admin` | Migrations, auth bypass | No |
| | `replica` | Read-only queries | Yes |
| | | Admin read replica | No |

```python
# When to use admin (bypasses RLS)
from api.db_router import MainRouter
User.objects.using(MainRouter.admin_db).get(id=user_id)  # Auth lookups

# Standard queries use default (RLS enforced)
Provider.objects.filter(connected=True)  # Requires rls_transaction context
```
RLS Transaction Flow
Request → Authentication → BaseRLSViewSet.initial()
│
├─ Extract tenant_id from JWT
├─ SET api.tenant_id = 'uuid' (PostgreSQL)
└─ All queries now tenant-scoped
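The `SET api.tenant_id` step above can be sketched as a context manager. This is a conceptual stand-in for `rls_transaction()`, not Prowler's actual implementation — the `set_config` call and the bare-cursor interface are assumptions:

```python
from contextlib import contextmanager
from uuid import UUID

@contextmanager
def rls_transaction_sketch(cursor, tenant_id):
    """Conceptual sketch: open a transaction and set the PostgreSQL
    session variable that the RLS policies compare tenant_id against."""
    tenant = str(UUID(str(tenant_id)))  # accepts UUID objects or strings
    cursor.execute("BEGIN")
    try:
        # set_config(..., is_local=true) scopes the value to this transaction
        cursor.execute("SELECT set_config('api.tenant_id', %s, true)", (tenant,))
        yield
        cursor.execute("COMMIT")
    except Exception:
        cursor.execute("ROLLBACK")
        raise
```

Every query issued inside the `with` block is then filtered by the RLS policies to the given tenant.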
Implementation Checklist
When implementing Prowler-specific API features:
| # | Pattern | Reference | Key Points |
|---|---|---|---|
| 1 | RLS Models | | Inherit `RowLevelSecurityProtectedModel` |
| 2 | RLS Transactions | | Use `rls_transaction(tenant_id)` |
| 3 | RBAC Permissions | | Check with `get_role()` |
| 4 | Provider Validation | | `validate_<provider>_uid()` staticmethods |
| 5 | Celery Tasks | | Task definitions, decorators (`@set_tenant`, `@handle_provider_deletion`) |
| 6 | RLS Serializers | | Inherit the RLS base serializer |
| 7 | Through Models | | ALL M2M must use explicit through with `tenant_id` |

Full file paths: See references/file-locations.md
Decision Trees
Which Base Model?
Tenant-scoped data → RowLevelSecurityProtectedModel
Global/shared data → models.Model + BaseSecurityConstraint (rare)
Partitioned time-series → PostgresPartitionedModel + RowLevelSecurityProtectedModel
Soft-deletable → Add is_deleted + ActiveProviderManager

Which Manager?
Normal queries → Model.objects (excludes deleted)
Include deleted records → Model.all_objects
Celery task context → Must use rls_transaction() first

Which Database?
Standard API queries → default (automatic via ViewSet)
Read-only operations → replica (automatic for GET in BaseRLSViewSet)
Auth/admin operations → MainRouter.admin_db
Cross-tenant lookups → MainRouter.admin_db (use sparingly!)

Celery Task Decorator Order?
```python
@shared_task(base=RLSTask, name="...", queue="...")
@set_tenant                # First: sets tenant context
@handle_provider_deletion  # Second: handles deleted providers
def my_task(tenant_id, provider_id):
    pass
```

RLS Model Pattern
```python
from api.rls import RowLevelSecurityProtectedModel, RowLevelSecurityConstraint

class MyModel(RowLevelSecurityProtectedModel):
    # tenant FK inherited from parent
    id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
    name = models.CharField(max_length=255)
    inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
    updated_at = models.DateTimeField(auto_now=True, editable=False)

    class Meta(RowLevelSecurityProtectedModel.Meta):
        db_table = "my_models"
        constraints = [
            RowLevelSecurityConstraint(
                field="tenant_id",
                name="rls_on_%(class)s",
                statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
            ),
        ]

    class JSONAPIMeta:
        resource_name = "my-models"
```

M2M Relationships (MUST use through models)
```python
class Resource(RowLevelSecurityProtectedModel):
    tags = models.ManyToManyField(
        ResourceTag,
        through="ResourceTagMapping",  # REQUIRED for RLS
    )

class ResourceTagMapping(RowLevelSecurityProtectedModel):
    # Through model MUST have tenant_id for RLS
    resource = models.ForeignKey(Resource, on_delete=models.CASCADE)
    tag = models.ForeignKey(ResourceTag, on_delete=models.CASCADE)

    class Meta:
        constraints = [
            RowLevelSecurityConstraint(
                field="tenant_id",
                name="rls_on_%(class)s",
                statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
            ),
        ]
```

Async Task Response Pattern (202 Accepted)
For long-running operations, return 202 with task reference:
```python
@action(detail=True, methods=["post"], url_name="connection")
def connection(self, request, pk=None):
    with transaction.atomic():
        task = check_provider_connection_task.delay(
            provider_id=pk, tenant_id=self.request.tenant_id
        )
        prowler_task = Task.objects.get(id=task.id)
        serializer = TaskSerializer(prowler_task)
    return Response(
        data=serializer.data,
        status=status.HTTP_202_ACCEPTED,
        headers={"Content-Location": reverse("task-detail", kwargs={"pk": prowler_task.id})},
    )
```

Providers (11 Supported)
| Provider | UID Format | Example |
|---|---|---|
| AWS | 12 digits | |
| Azure | UUID v4 | |
| GCP | 6-30 chars, lowercase, letter start | |
| M365 | Valid domain | |
| Kubernetes | 2-251 chars | |
| GitHub | 1-39 chars | |
| IaC | Git URL | |
| Oracle Cloud | OCID format | |
| MongoDB Atlas | 24-char hex | |
| Alibaba Cloud | 16 digits | |

Adding a new provider: add it to the `ProviderChoices` enum and create a `validate_<provider>_uid()` staticmethod.
RBAC Permissions
| Permission | Controls |
|---|---|
| `manage_users` | User CRUD, role assignments |
| `manage_account` | Tenant settings |
| `manage_billing` | Billing/subscription |
| `manage_providers` | Provider CRUD |
| `manage_integrations` | Integration config |
| `manage_scans` | Scan execution |
| `unlimited_visibility` | See all providers (bypasses provider_groups) |
RBAC Visibility Pattern
```python
def get_queryset(self):
    user_role = get_role(self.request.user)
    if user_role.unlimited_visibility:
        return Model.objects.filter(tenant_id=self.request.tenant_id)
    else:
        # Filter by provider_groups assigned to role
        return Model.objects.filter(provider__in=get_providers(user_role))
```

Celery Queues
| Queue | Purpose |
|---|---|
| `scans` | Prowler scan execution |
| `overview` | Dashboard aggregations (severity, attack surface) |
| | Compliance report generation |
| | External integrations (Jira, S3, Security Hub) |
| | Provider/tenant deletion (async) |
| | Historical data backfill operations |
| | Output generation (CSV, JSON, HTML, PDF) |
Task Composition (Canvas)
Use Celery's Canvas primitives for complex workflows:
| Primitive | Use For |
|---|---|
| `chain` | Sequential execution: A → B → C |
| `group` | Parallel execution: A, B, C simultaneously |
| Combined | Chain with nested groups for complex workflows |

Note: Use `.si()` (immutable signature) to prevent result passing. Use `.s()` if you need to pass results.

Examples: See assets/celery_patterns.py for chain, group, and combined patterns.
Beat Scheduling (Periodic Tasks)
| Operation | Key Points |
|---|---|
| Create schedule | |
| Create periodic task | Use task name (not function) |
| Delete scheduled task | |
| Avoid race conditions | Use `get_or_create()` |

Examples: See assets/celery_patterns.py for the schedule_provider_scan pattern.
Advanced Task Patterns
@set_tenant Behavior

| Mode | `tenant_id` in kwargs | `tenant_id` passed to function |
|---|---|---|
| | Popped (removed) | NO - function doesn't receive it |
| | Read but kept | YES - function receives it |
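A minimal sketch of the popped mode, assuming only that the decorator removes `tenant_id` from kwargs before invoking the task body — names here are hypothetical, not Prowler's implementation:

```python
import functools
from contextlib import contextmanager

@contextmanager
def _rls_context(tenant_id):
    # Stand-in for rls_transaction(tenant_id); the real code opens an RLS transaction.
    yield

def set_tenant_sketch(func):
    """Sketch of @set_tenant's default mode: tenant_id is popped from
    kwargs, so the wrapped task body never receives it."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        tenant_id = kwargs.pop("tenant_id")  # removed before the call
        with _rls_context(tenant_id):
            return func(*args, **kwargs)
    return wrapper

@set_tenant_sketch
def my_task(provider_id):
    # Note: no tenant_id parameter in the popped mode
    return provider_id
```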
Key Patterns
| Pattern | Description |
|---|---|
| `bind=True` | Access task metadata via `self.request` |
| `get_task_logger(__name__)` | Proper logging in Celery tasks |
| `SoftTimeLimitExceeded` | Catch to save progress before hard kill |
| `countdown` | Defer execution by N seconds |
| `eta` | Execute at specific time |

Examples: See assets/celery_patterns.py for all advanced patterns.
Celery Configuration
| Setting | Value | Purpose |
|---|---|---|
| | | Prevent re-queue for long tasks |
| `result_backend` | | Store results in PostgreSQL |
| `task_track_started` | `True` | Track when tasks start |
| `soft_time_limit` | Task-specific | Raises `SoftTimeLimitExceeded` |
| `time_limit` | Task-specific | Hard kill (SIGKILL) |

Full config: See assets/celery_patterns.py and the actual files at config/celery.py and config/settings/celery.py.
UUIDv7 for Partitioned Tables
Partitioned tables (`Finding`, `ResourceFindingMapping`) use time-ordered UUIDv7 primary keys.

```python
from uuid6 import uuid7
from api.uuid_utils import uuid7_start, uuid7_end, datetime_to_uuid7

# Partition-aware filtering
start = uuid7_start(datetime_to_uuid7(date_from))
end = uuid7_end(datetime_to_uuid7(date_to), settings.FINDINGS_TABLE_PARTITION_MONTHS)
queryset.filter(id__gte=start, id__lt=end)
```

**Why UUIDv7?** Time-ordered UUIDs enable PostgreSQL to prune partitions during range queries.
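A toy model of why this works — illustration only, not the `uuid6` library, and the version/variant bits are omitted for brevity:

```python
import os
import uuid
from datetime import datetime, timezone

def toy_uuid7(at: datetime) -> uuid.UUID:
    """UUIDv7 stores a 48-bit millisecond timestamp in the most significant
    bits, so id order matches time order and a date range maps to an id range."""
    ts_ms = int(at.timestamp() * 1000)
    rand = int.from_bytes(os.urandom(10), "big")  # low 80 bits are random
    return uuid.UUID(int=(ts_ms << 80) | rand)

def toy_day_bounds(day: datetime) -> tuple[uuid.UUID, uuid.UUID]:
    """Inclusive lower / exclusive upper id bounds covering one day."""
    ts_ms = int(day.timestamp() * 1000)
    return uuid.UUID(int=ts_ms << 80), uuid.UUID(int=(ts_ms + 86_400_000) << 80)
```

Because every id generated on a given day falls between that day's bounds, an `id__gte=start, id__lt=end` filter lets PostgreSQL skip partitions outside the range.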
---
Batch Operations with RLS
```python
from api.db_utils import batch_delete, create_objects_in_batches, update_objects_in_batches

# Delete in batches (RLS-aware)
batch_delete(tenant_id, queryset, batch_size=1000)

# Bulk create with RLS
create_objects_in_batches(tenant_id, Finding, objects, batch_size=500)

# Bulk update with RLS
update_objects_in_batches(tenant_id, Finding, objects, fields=["status"], batch_size=500)
```
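The chunking these helpers presumably share can be sketched with a plain generator — the real helpers additionally wrap each batch in `rls_transaction(tenant_id)` and issue `bulk_create` / `bulk_update` / `delete` per chunk:

```python
from itertools import islice

def chunked(iterable, batch_size):
    """Yield successive lists of at most batch_size items.

    Works on any iterable (querysets included) without loading
    everything into memory at once.
    """
    it = iter(iterable)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch
```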
---
Security Patterns
Full examples: See assets/security_patterns.py
Tenant Isolation Summary
| Pattern | Rule |
|---|---|
| RLS in ViewSets | Automatic via `BaseRLSViewSet` |
| RLS in Celery | MUST use `rls_transaction(tenant_id)` |
| Cross-tenant validation | Defense-in-depth: verify `tenant_id` explicitly |
| Never trust user input | Use `tenant_id` from the JWT |
| Admin DB bypass | Only for cross-tenant admin ops - exposes ALL tenants' data |
Celery Task Security Summary
| Pattern | Rule |
|---|---|
| Named tasks only | NEVER use dynamic task names from user input |
| Validate arguments | Check UUID format before database queries |
| Safe queuing | Use `transaction.on_commit()` |
| Modern retries | Use `autoretry_for` with `retry_backoff` |
| Time limits | Set `soft_time_limit` and `time_limit` |
| Idempotency | Use `update_or_create()` |
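The "validate arguments" rule can be as simple as round-tripping through `uuid.UUID` before any query — the function name here is hypothetical:

```python
import uuid

def validate_task_args(tenant_id: str, provider_id: str) -> tuple[uuid.UUID, uuid.UUID]:
    """Reject malformed IDs before they ever reach the ORM or a queue."""
    try:
        return uuid.UUID(str(tenant_id)), uuid.UUID(str(provider_id))
    except ValueError as exc:
        raise ValueError("task arguments must be valid UUIDs") from exc
```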
Quick Reference
```python
# Safe task queuing - task only enqueued after transaction commits
with transaction.atomic():
    provider = Provider.objects.create(**data)
    transaction.on_commit(
        lambda: verify_provider_connection.delay(
            tenant_id=str(request.tenant_id),
            provider_id=str(provider.id)
        )
    )
```
```python
# Modern retry pattern
@shared_task(
    base=RLSTask,
    bind=True,
    autoretry_for=(ConnectionError, TimeoutError, OperationalError),
    retry_backoff=True,
    retry_backoff_max=600,
    retry_jitter=True,
    max_retries=5,
    soft_time_limit=300,
    time_limit=360,
)
@set_tenant
def sync_provider_data(self, tenant_id, provider_id):
    with rls_transaction(tenant_id):
        # ... task logic
        pass
```
```python
# Idempotent task - safe to retry
@shared_task(base=RLSTask, acks_late=True)
@set_tenant
def process_finding(tenant_id, finding_uid, data):
    with rls_transaction(tenant_id):
        Finding.objects.update_or_create(uid=finding_uid, defaults=data)
```

---
Production Deployment Checklist
Full settings: See references/production-settings.md

Run before every production deployment:

```bash
cd api && poetry run python src/backend/manage.py check --deploy
```

Critical Settings
| Setting | Production Value | Risk if Wrong |
|---|---|---|
| `DEBUG` | `False` | Exposes stack traces, settings, SQL queries |
| `SECRET_KEY` | Env var, rotated | Session hijacking, CSRF bypass |
| `ALLOWED_HOSTS` | Explicit list | Host header attacks |
| `SECURE_SSL_REDIRECT` | `True` | Credentials sent over HTTP |
| `SESSION_COOKIE_SECURE` | `True` | Session cookies over HTTP |
| `CSRF_COOKIE_SECURE` | `True` | CSRF tokens over HTTP |
| `SECURE_HSTS_SECONDS` | `> 0` | Downgrade attacks |
| `CONN_MAX_AGE` | Bounded | Connection pool exhaustion |
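A hedged settings fragment matching the table above. The env var name, host, HSTS age, and `CONN_MAX_AGE` value are placeholder assumptions to adapt, not Prowler's actual configuration:

```python
import os

DEBUG = False
SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]  # name assumed; never hardcode
ALLOWED_HOSTS = ["api.example.com"]           # explicit list, never "*"
SECURE_SSL_REDIRECT = True
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
SECURE_HSTS_SECONDS = 31536000                # one year; start lower when rolling out
CONN_MAX_AGE = 60                             # bounded persistent DB connections
```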
Commands
```bash
# Development
cd api && poetry run python src/backend/manage.py runserver
cd api && poetry run python src/backend/manage.py shell

# Celery
cd api && poetry run celery -A config.celery worker -l info -Q scans,overview
cd api && poetry run celery -A config.celery beat -l info

# Testing
cd api && poetry run pytest -x --tb=short

# Production checks
cd api && poetry run python src/backend/manage.py check --deploy
```

---
Resources
Local References
- File Locations: See references/file-locations.md
- Modeling Decisions: See references/modeling-decisions.md
- Configuration: See references/configuration.md
- Production Settings: See references/production-settings.md
- Security Patterns: See assets/security_patterns.py
Related Skills
- Generic DRF Patterns: Use the `django-drf` skill
- API Testing: Use the `prowler-test-api` skill
Context7 MCP (Recommended)
Prerequisite: Install Context7 MCP server for up-to-date documentation lookup.
When implementing or debugging Prowler-specific patterns, query these libraries via `mcp_context7_query-docs`:

| Library | Context7 ID | Use For |
|---|---|---|
| Celery | `/websites/celeryq_dev_en_stable` | Task patterns, queues, error handling |
| django-celery-beat | `/celery/django-celery-beat` | Periodic task scheduling |
| Django | `/websites/djangoproject_en_5_2` | Models, ORM, constraints, indexes |

Example queries:

```
mcp_context7_query-docs(libraryId="/websites/celeryq_dev_en_stable", query="shared_task decorator retry patterns")
mcp_context7_query-docs(libraryId="/celery/django-celery-beat", query="periodic task database scheduler")
mcp_context7_query-docs(libraryId="/websites/djangoproject_en_5_2", query="model constraints CheckConstraint UniqueConstraint")
```

Note: Use `mcp_context7_resolve-library-id` first if you need to find the correct library ID.