prowler-api

When to Use

Use this skill for Prowler-specific patterns:
  • Row-Level Security (RLS) / tenant isolation
  • RBAC permissions and role checks
  • Provider lifecycle and validation
  • Celery tasks with tenant context
  • Multi-database architecture (4-database setup)
For generic DRF patterns (ViewSets, Serializers, Filters, JSON:API), use the `django-drf` skill.

Critical Rules

  • ALWAYS use `rls_transaction(tenant_id)` when querying outside ViewSet context
  • ALWAYS use `get_role()` before checking permissions (returns FIRST role only)
  • ALWAYS apply `@set_tenant` before `@handle_provider_deletion` (decorator order matters)
  • ALWAYS use explicit through models for M2M relationships (required for RLS)
  • NEVER access `Provider.objects` without RLS context in Celery tasks
  • NEVER bypass RLS with raw SQL or `connection.cursor()`
  • NEVER use Django's default M2M - RLS requires through models with `tenant_id`

Note: `rls_transaction()` accepts both UUID objects and strings - it converts internally via `str(value)`.

Architecture Overview

4-Database Architecture

| Database | Alias | Purpose | RLS |
|---|---|---|---|
| `default` | `prowler_user` | Standard API queries | Yes |
| `admin` | `admin` | Migrations, auth bypass | No |
| `replica` | `prowler_user` | Read-only queries | Yes |
| `admin_replica` | `admin` | Admin read replica | No |

When to use admin (bypasses RLS)

```python
from api.db_router import MainRouter

User.objects.using(MainRouter.admin_db).get(id=user_id)  # Auth lookups
```

Standard queries use default (RLS enforced)

```python
Provider.objects.filter(connected=True)  # Requires rls_transaction context
```

RLS Transaction Flow

```
Request → Authentication → BaseRLSViewSet.initial()
                                    ├─ Extract tenant_id from JWT
                                    ├─ SET api.tenant_id = 'uuid' (PostgreSQL)
                                    └─ All queries now tenant-scoped
```
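
The flow above can be sketched as a minimal context manager - a simplified stand-in for the real `rls_transaction` in `api/db_utils.py`, with the database cursor stubbed out so the SET/RESET sequence is visible:

```python
from contextlib import contextmanager

class FakeCursor:
    """Stub cursor that records executed SQL instead of hitting PostgreSQL."""
    def __init__(self):
        self.statements = []

    def execute(self, sql, params=None):
        self.statements.append(sql % params if params else sql)

@contextmanager
def rls_context(cursor, tenant_id):
    # PostgreSQL RLS policies read this session variable to scope rows
    cursor.execute("SET api.tenant_id = '%s'", (str(tenant_id),))
    try:
        yield cursor
    finally:
        cursor.execute("RESET api.tenant_id")

cursor = FakeCursor()
with rls_context(cursor, "11111111-1111-1111-1111-111111111111"):
    cursor.execute("SELECT * FROM providers")  # now tenant-scoped
```

This is why every query outside a ViewSet needs an explicit `rls_transaction(tenant_id)` - without the `SET`, RLS policies see no tenant and return nothing.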

Implementation Checklist

When implementing Prowler-specific API features:

| # | Pattern | Reference | Key Points |
|---|---|---|---|
| 1 | RLS Models | `api/rls.py` | Inherit `RowLevelSecurityProtectedModel`, add constraint |
| 2 | RLS Transactions | `api/db_utils.py` | Use `rls_transaction(tenant_id)` context manager |
| 3 | RBAC Permissions | `api/rbac/permissions.py` | `get_role()`, `get_providers()`, `Permissions` enum |
| 4 | Provider Validation | `api/models.py` | `validate_<provider>_uid()` methods on `Provider` model |
| 5 | Celery Tasks | `tasks/tasks.py`, `api/decorators.py`, `config/celery.py` | Task definitions, decorators (`@set_tenant`, `@handle_provider_deletion`), `RLSTask` base |
| 6 | RLS Serializers | `api/v1/serializers.py` | Inherit `RLSSerializer` to auto-inject `tenant_id` |
| 7 | Through Models | `api/models.py` | ALL M2M must use explicit through with `tenant_id` |

Full file paths: See references/file-locations.md

Decision Trees

Which Base Model?

```
Tenant-scoped data       → RowLevelSecurityProtectedModel
Global/shared data       → models.Model + BaseSecurityConstraint (rare)
Partitioned time-series  → PostgresPartitionedModel + RowLevelSecurityProtectedModel
Soft-deletable           → Add is_deleted + ActiveProviderManager
```

Which Manager?

```
Normal queries           → Model.objects (excludes deleted)
Include deleted records  → Model.all_objects
Celery task context      → Must use rls_transaction() first
```

Which Database?

```
Standard API queries     → default (automatic via ViewSet)
Read-only operations     → replica (automatic for GET in BaseRLSViewSet)
Auth/admin operations    → MainRouter.admin_db
Cross-tenant lookups     → MainRouter.admin_db (use sparingly!)
```
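
For orientation, a Django database router implementing this kind of read/write split looks roughly like the sketch below. This is hypothetical - the real routing lives in `api/db_router.MainRouter` and is more involved - but the `db_for_read`/`db_for_write`/`allow_migrate` hooks are standard Django router API:

```python
class SimpleRouter:
    """Hypothetical read/write-splitting router (illustrative only)."""
    default_db = "default"
    replica_db = "replica"
    admin_db = "admin"

    def db_for_read(self, model, **hints):
        # Reads can go to the RLS-enforced replica
        return self.replica_db

    def db_for_write(self, model, **hints):
        # Writes always hit the primary
        return self.default_db

    def allow_migrate(self, db, app_label, **hints):
        # Migrations run under the privileged role that bypasses RLS
        return db == self.admin_db
```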

Celery Task Decorator Order?

```python
@shared_task(base=RLSTask, name="...", queue="...")
@set_tenant                    # First: sets tenant context
@handle_provider_deletion      # Second: handles deleted providers
def my_task(tenant_id, provider_id):
    pass
```

RLS Model Pattern

```python
from api.rls import RowLevelSecurityProtectedModel, RowLevelSecurityConstraint

class MyModel(RowLevelSecurityProtectedModel):
    # tenant FK inherited from parent
    id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
    name = models.CharField(max_length=255)
    inserted_at = models.DateTimeField(auto_now_add=True, editable=False)
    updated_at = models.DateTimeField(auto_now=True, editable=False)

    class Meta(RowLevelSecurityProtectedModel.Meta):
        db_table = "my_models"
        constraints = [
            RowLevelSecurityConstraint(
                field="tenant_id",
                name="rls_on_%(class)s",
                statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
            ),
        ]

    class JSONAPIMeta:
        resource_name = "my-models"
```

M2M Relationships (MUST use through models)

```python
class Resource(RowLevelSecurityProtectedModel):
    tags = models.ManyToManyField(
        ResourceTag,
        through="ResourceTagMapping",  # REQUIRED for RLS
    )

class ResourceTagMapping(RowLevelSecurityProtectedModel):
    # Through model MUST have tenant_id for RLS
    resource = models.ForeignKey(Resource, on_delete=models.CASCADE)
    tag = models.ForeignKey(ResourceTag, on_delete=models.CASCADE)

    class Meta:
        constraints = [
            RowLevelSecurityConstraint(
                field="tenant_id",
                name="rls_on_%(class)s",
                statements=["SELECT", "INSERT", "UPDATE", "DELETE"],
            ),
        ]
```

Async Task Response Pattern (202 Accepted)

For long-running operations, return 202 with a task reference:

```python
@action(detail=True, methods=["post"], url_name="connection")
def connection(self, request, pk=None):
    with transaction.atomic():
        task = check_provider_connection_task.delay(
            provider_id=pk, tenant_id=self.request.tenant_id
        )
    prowler_task = Task.objects.get(id=task.id)
    serializer = TaskSerializer(prowler_task)
    return Response(
        data=serializer.data,
        status=status.HTTP_202_ACCEPTED,
        headers={"Content-Location": reverse("task-detail", kwargs={"pk": prowler_task.id})},
    )
```

Providers (11 Supported)

| Provider | UID Format | Example |
|---|---|---|
| AWS | 12 digits | `123456789012` |
| Azure | UUID v4 | `a1b2c3d4-e5f6-...` |
| GCP | 6-30 chars, lowercase, letter start | `my-gcp-project` |
| M365 | Valid domain | `contoso.onmicrosoft.com` |
| Kubernetes | 2-251 chars | `arn:aws:eks:...` |
| GitHub | 1-39 chars | `my-org` |
| IaC | Git URL | `https://github.com/user/repo.git` |
| Oracle Cloud | OCID format | `ocid1.tenancy.oc1..` |
| MongoDB Atlas | 24-char hex | `507f1f77bcf86cd799439011` |
| Alibaba Cloud | 16 digits | `1234567890123456` |

Adding a new provider: add it to the `ProviderChoices` enum and create a `validate_<provider>_uid()` staticmethod.
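
Several of these formats reduce to simple regexes. The sketch below is illustrative only - the authoritative checks are the `validate_<provider>_uid()` methods on the `Provider` model, and the real rules (e.g. GCP's) are stricter than a single pattern:

```python
import re

# Hypothetical approximations of the UID formats in the table above
UID_PATTERNS = {
    "aws": re.compile(r"^\d{12}$"),                  # 12 digits
    "gcp": re.compile(r"^[a-z][a-z0-9-]{5,29}$"),    # 6-30 chars, letter start
    "github": re.compile(r"^[A-Za-z0-9-]{1,39}$"),   # 1-39 chars
    "mongodbatlas": re.compile(r"^[0-9a-f]{24}$"),   # 24-char hex
}

def uid_matches(provider: str, uid: str) -> bool:
    """Return True if the UID matches the (approximate) format for the provider."""
    pattern = UID_PATTERNS.get(provider)
    return bool(pattern and pattern.fullmatch(uid))
```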

RBAC Permissions

| Permission | Controls |
|---|---|
| `MANAGE_USERS` | User CRUD, role assignments |
| `MANAGE_ACCOUNT` | Tenant settings |
| `MANAGE_BILLING` | Billing/subscription |
| `MANAGE_PROVIDERS` | Provider CRUD |
| `MANAGE_INTEGRATIONS` | Integration config |
| `MANAGE_SCANS` | Scan execution |
| `UNLIMITED_VISIBILITY` | See all providers (bypasses provider_groups) |

RBAC Visibility Pattern

```python
def get_queryset(self):
    user_role = get_role(self.request.user)
    if user_role.unlimited_visibility:
        return Model.objects.filter(tenant_id=self.request.tenant_id)
    else:
        # Filter by provider_groups assigned to role
        return Model.objects.filter(provider__in=get_providers(user_role))
```

Celery Queues

| Queue | Purpose |
|---|---|
| `scans` | Prowler scan execution |
| `overview` | Dashboard aggregations (severity, attack surface) |
| `compliance` | Compliance report generation |
| `integrations` | External integrations (Jira, S3, Security Hub) |
| `deletion` | Provider/tenant deletion (async) |
| `backfill` | Historical data backfill operations |
| `scan-reports` | Output generation (CSV, JSON, HTML, PDF) |
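
Tasks are pinned to these queues via Celery routing. A hypothetical fragment - the task names below are made up, the real routes live in `config/celery.py`:

```python
# Hypothetical Celery routing sketch; task names are illustrative, not Prowler's.
task_routes = {
    "example-scan-task": {"queue": "scans"},
    "example-overview-task": {"queue": "overview"},
    "example-report-task": {"queue": "scan-reports"},
}
```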

Task Composition (Canvas)

Use Celery's Canvas primitives for complex workflows:

| Primitive | Use For |
|---|---|
| `chain()` | Sequential execution: A → B → C |
| `group()` | Parallel execution: A, B, C simultaneously |
| Combined | Chain with nested groups for complex workflows |

Note: Use `.si()` (immutable signature) to prevent result passing; use `.s()` if you need to pass results.

Examples: See assets/celery_patterns.py for chain, group, and combined patterns.

Beat Scheduling (Periodic Tasks)

| Operation | Key Points |
|---|---|
| Create schedule | `IntervalSchedule.objects.get_or_create(every=24, period=HOURS)` |
| Create periodic task | Use the task name (not the function), `kwargs=json.dumps(...)` |
| Delete scheduled task | `PeriodicTask.objects.filter(name=...).delete()` |
| Avoid race conditions | Use `countdown=5` to wait for the DB commit |

Examples: See assets/celery_patterns.py for the schedule_provider_scan pattern.

Advanced Task Patterns

`@set_tenant` Behavior

| Mode | `tenant_id` in kwargs | `tenant_id` passed to function |
|---|---|---|
| `@set_tenant` (default) | Popped (removed) | NO - function doesn't receive it |
| `@set_tenant(keep_tenant=True)` | Read but kept | YES - function receives it |
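
The pop-vs-keep behavior can be sketched with a toy decorator - a simplification of the real `@set_tenant` in `api/decorators.py`, with the RLS setup omitted:

```python
import functools

def set_tenant(func=None, *, keep_tenant=False):
    """Toy version: pops tenant_id from kwargs unless keep_tenant=True."""
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            if keep_tenant:
                tenant_id = kwargs["tenant_id"]      # read but kept
            else:
                tenant_id = kwargs.pop("tenant_id")  # popped (removed)
            # the real decorator would enter rls_transaction(tenant_id) here
            return f(*args, **kwargs)
        return wrapper
    return decorator(func) if func else decorator

@set_tenant
def default_task(**kwargs):
    return kwargs  # tenant_id is gone

@set_tenant(keep_tenant=True)
def keeping_task(**kwargs):
    return kwargs  # tenant_id survives
```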

Key Patterns

| Pattern | Description |
|---|---|
| `bind=True` | Access `self.request.id`, `self.request.retries` |
| `get_task_logger(__name__)` | Proper logging in Celery tasks |
| `SoftTimeLimitExceeded` | Catch to save progress before hard kill |
| `countdown=30` | Defer execution by N seconds |
| `eta=datetime(...)` | Execute at specific time |

Examples: See assets/celery_patterns.py for all advanced patterns.

Celery Configuration

| Setting | Value | Purpose |
|---|---|---|
| `BROKER_VISIBILITY_TIMEOUT` | `86400` (24h) | Prevent re-queue for long tasks |
| `CELERY_RESULT_BACKEND` | `django-db` | Store results in PostgreSQL |
| `CELERY_TASK_TRACK_STARTED` | `True` | Track when tasks start |
| `soft_time_limit` | Task-specific | Raises `SoftTimeLimitExceeded` |
| `time_limit` | Task-specific | Hard kill (SIGKILL) |

Full config: See assets/celery_patterns.py and the actual files at `config/celery.py`, `config/settings/celery.py`.

UUIDv7 for Partitioned Tables

`Finding` and `ResourceFindingMapping` use UUIDv7 for time-based partitioning:

```python
from uuid6 import uuid7
from api.uuid_utils import uuid7_start, uuid7_end, datetime_to_uuid7
```

Partition-aware filtering

```python
start = uuid7_start(datetime_to_uuid7(date_from))
end = uuid7_end(datetime_to_uuid7(date_to), settings.FINDINGS_TABLE_PARTITION_MONTHS)
queryset.filter(id__gte=start, id__lt=end)
```

**Why UUIDv7?** Time-ordered UUIDs enable PostgreSQL to prune partitions during range queries.
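
To see the time-ordering property concretely, here is a minimal stdlib-only UUIDv7 construction (the project uses the `uuid6` package and helpers in `api/uuid_utils.py`; this sketch only shows why range filters on `id` map to time ranges):

```python
import os
import uuid

def uuid7_at(unix_ms: int) -> uuid.UUID:
    """Build a UUIDv7: 48-bit ms timestamp, version/variant bits, random tail."""
    rand_a = int.from_bytes(os.urandom(2), "big") & 0x0FFF           # 12 random bits
    rand_b = int.from_bytes(os.urandom(8), "big") & ((1 << 62) - 1)  # 62 random bits
    value = (unix_ms & ((1 << 48) - 1)) << 80  # timestamp in the top 48 bits
    value |= 0x7 << 76                         # version 7
    value |= rand_a << 64
    value |= 0b10 << 62                        # RFC 4122 variant
    value |= rand_b
    return uuid.UUID(int=value)

# The timestamp occupies the most significant bits, so comparing ids
# compares creation times - which is what lets id-range filters select
# time ranges, and PostgreSQL prune partitions.
earlier, later = uuid7_at(1_700_000_000_000), uuid7_at(1_700_000_060_000)
assert earlier < later
```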

---

Batch Operations with RLS

```python
from api.db_utils import batch_delete, create_objects_in_batches, update_objects_in_batches
```

Delete in batches (RLS-aware)

```python
batch_delete(tenant_id, queryset, batch_size=1000)
```

Bulk create with RLS

```python
create_objects_in_batches(tenant_id, Finding, objects, batch_size=500)
```

Bulk update with RLS

```python
update_objects_in_batches(tenant_id, Finding, objects, fields=["status"], batch_size=500)
```
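
The core idea behind these helpers is plain chunking inside an RLS context. A self-contained sketch of the batching part (an assumption about their internals - the real helpers in `api/db_utils.py` also manage the `rls_transaction` per batch):

```python
from itertools import islice

def in_batches(items, batch_size):
    """Yield successive lists of at most batch_size items."""
    it = iter(items)
    while batch := list(islice(it, batch_size)):
        yield batch

# e.g. a bulk create could then loop like:
# for batch in in_batches(objects, 500):
#     with rls_transaction(tenant_id):
#         Finding.objects.bulk_create(batch)
```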

---

Security Patterns

Full examples: See assets/security_patterns.py

Tenant Isolation Summary

| Pattern | Rule |
|---|---|
| RLS in ViewSets | Automatic via `BaseRLSViewSet` - tenant_id from JWT |
| RLS in Celery | MUST use `@set_tenant` + `rls_transaction(tenant_id)` |
| Cross-tenant validation | Defense-in-depth: verify `obj.tenant_id == request.tenant_id` |
| Never trust user input | Use `request.tenant_id` from JWT, never `request.data.get("tenant_id")` |
| Admin DB bypass | Only for cross-tenant admin ops - exposes ALL tenants' data |

Celery Task Security Summary

| Pattern | Rule |
|---|---|
| Named tasks only | NEVER use dynamic task names from user input |
| Validate arguments | Check UUID format before database queries |
| Safe queuing | Use `transaction.on_commit()` to enqueue AFTER commit |
| Modern retries | Use `autoretry_for`, `retry_backoff`, `retry_jitter` |
| Time limits | Set `soft_time_limit` and `time_limit` to prevent hung tasks |
| Idempotency | Use `update_or_create` or idempotency keys |

Quick Reference

Safe task queuing - task only enqueued after transaction commits

```python
with transaction.atomic():
    provider = Provider.objects.create(**data)
    transaction.on_commit(
        lambda: verify_provider_connection.delay(
            tenant_id=str(request.tenant_id), provider_id=str(provider.id)
        )
    )
```

Modern retry pattern

```python
@shared_task(
    base=RLSTask,
    bind=True,
    autoretry_for=(ConnectionError, TimeoutError, OperationalError),
    retry_backoff=True,
    retry_backoff_max=600,
    retry_jitter=True,
    max_retries=5,
    soft_time_limit=300,
    time_limit=360,
)
@set_tenant
def sync_provider_data(self, tenant_id, provider_id):
    with rls_transaction(tenant_id):
        # ... task logic
        pass
```
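
For reference, `retry_backoff=True` with `retry_backoff_max` and `retry_jitter` produces delays roughly like this sketch, mirroring Celery's documented behavior (exponential 1, 2, 4, ... seconds, capped, then jittered to a random value up to the computed delay):

```python
import random

def retry_countdown(retries: int, backoff_max: int = 600, jitter: bool = True) -> int:
    """Approximation of Celery's exponential backoff: 1, 2, 4, ... capped at backoff_max."""
    countdown = min(backoff_max, 2 ** retries)
    if jitter:
        # retry_jitter picks a random delay in [0, countdown] to avoid thundering herds
        countdown = random.randrange(countdown + 1)
    return countdown
```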

Idempotent task - safe to retry

```python
@shared_task(base=RLSTask, acks_late=True)
@set_tenant
def process_finding(tenant_id, finding_uid, data):
    with rls_transaction(tenant_id):
        Finding.objects.update_or_create(uid=finding_uid, defaults=data)
```

---

Production Deployment Checklist

Full settings: See references/production-settings.md

Run before every production deployment:

```bash
cd api && poetry run python src/backend/manage.py check --deploy
```

Critical Settings

| Setting | Production Value | Risk if Wrong |
|---|---|---|
| `DEBUG` | `False` | Exposes stack traces, settings, SQL queries |
| `SECRET_KEY` | Env var, rotated | Session hijacking, CSRF bypass |
| `ALLOWED_HOSTS` | Explicit list | Host header attacks |
| `SECURE_SSL_REDIRECT` | `True` | Credentials sent over HTTP |
| `SESSION_COOKIE_SECURE` | `True` | Session cookies over HTTP |
| `CSRF_COOKIE_SECURE` | `True` | CSRF tokens over HTTP |
| `SECURE_HSTS_SECONDS` | `31536000` (1 year) | Downgrade attacks |
| `CONN_MAX_AGE` | `60` or higher | Connection pool exhaustion |

Commands

Development

```bash
cd api && poetry run python src/backend/manage.py runserver
cd api && poetry run python src/backend/manage.py shell
```

Celery

```bash
cd api && poetry run celery -A config.celery worker -l info -Q scans,overview
cd api && poetry run celery -A config.celery beat -l info
```

Testing

```bash
cd api && poetry run pytest -x --tb=short
```

Production checks

```bash
cd api && poetry run python src/backend/manage.py check --deploy
```

---

Resources

Local References

  • File Locations: See references/file-locations.md
  • Modeling Decisions: See references/modeling-decisions.md
  • Configuration: See references/configuration.md
  • Production Settings: See references/production-settings.md
  • Security Patterns: See assets/security_patterns.py

Related Skills

  • Generic DRF Patterns: Use the `django-drf` skill
  • API Testing: Use the `prowler-test-api` skill

Context7 MCP (Recommended)

Prerequisite: Install the Context7 MCP server for up-to-date documentation lookup.

When implementing or debugging Prowler-specific patterns, query these libraries via `mcp_context7_query-docs`:

| Library | Context7 ID | Use For |
|---|---|---|
| Celery | `/websites/celeryq_dev_en_stable` | Task patterns, queues, error handling |
| django-celery-beat | `/celery/django-celery-beat` | Periodic task scheduling |
| Django | `/websites/djangoproject_en_5_2` | Models, ORM, constraints, indexes |

Example queries:

```
mcp_context7_query-docs(libraryId="/websites/celeryq_dev_en_stable", query="shared_task decorator retry patterns")
mcp_context7_query-docs(libraryId="/celery/django-celery-beat", query="periodic task database scheduler")
mcp_context7_query-docs(libraryId="/websites/djangoproject_en_5_2", query="model constraints CheckConstraint UniqueConstraint")
```

Note: Use `mcp_context7_resolve-library-id` first if you need to find the correct library ID.