# cs-audit

Other skills wait for you to report bugs, wait for you to point out optimization points, wait for you to raise questions; but the request "I don't know where the problems are, just scan through and check" had no one to handle it. cs-audit fills this gap: it proactively scans within the scope defined by the user and generates a list of findings cross-classified by severity × nature.

This skill only identifies issues; it does not fix them. Fixing is the responsibility of the adjacent skills (see "Boundaries with Adjacent Skills" below).
## Where to Place Files

```
codestable/audits/{YYYY-MM-DD}-{slug}/
├── index.md        # Overview: scope, overall assessment, cross-classified findings list
├── finding-01.md
├── finding-02.md
└── ...
```
The date is the day of the audit. The slug should be short enough to identify the audit target at a glance.

All audit documents include YAML frontmatter (with the document type set appropriately for the index and for each finding) to facilitate later retrieval.
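As a sketch only, the frontmatter might look like the following; the field names (`type`, `audit`, `date`) and values are illustrative assumptions, since this document does not specify them:

```yaml
# Hypothetical frontmatter for index.md; field names are assumptions
type: audit-index        # finding files would carry a distinct type, e.g. audit-finding
audit: 2025-01-15-payment-flow
date: 2025-01-15
```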
## Dimension Matrix (Cross-Classification)

Each finding is tagged with two labels:

- Severity: P0 (must fix) | P1 (should fix) | P2 (could fix)
- Nature: one of the scan dimensions (bug risk / security / performance / maintainability / architecture drift)

Cross examples:

- P0 × Security: SQL injection, storing passwords in plaintext
- P1 × Bug risk: null pointer under specific boundary conditions, with low actual trigger probability
- P2 × Performance: redundant object allocation in loops; only needs modification in hot paths
Additionally, each finding includes a confidence level (High / Medium / Low) and a suggested action. See the related documents below for the complete template.
## Workflow

### Phase 1: Scope Convergence
Blind scanning of the entire repository is not allowed: it is costly and generates excessive noise. First help the user narrow the scope down to an executable range.

Ask the user for one of the following (any one is sufficient to start):

- Keywords: "related to auth / payment / upload"
- Module / directory: "everything under this directory"
- Problem description: "users recently reported that the order page is slow; scan the order-related code"

If the user's description is already clear, proceed directly to Phase 2. If the user says "scan the entire project", push back: suggest scanning the most frequently modified modules or areas that have had issues recently.

After convergence, confirm with the user: "Scanning the agreed paths, approximately 12 files, covering three dimensions: security / performance / bug risks. Is the scope OK?"
### Phase 2: Scanning
Scan dimension by dimension within the scope defined by the user (scan all five dimensions if none are specified):

- Bug risks: null value paths, missing boundary conditions, race conditions, swallowed exceptions in error handling, unguarded type assertions
- Security: injection risks, exposure of sensitive data, missing permission checks, insecure dependencies
- Performance: N+1 queries, repeated computation, uncached hot paths, memory leaks, full loads without pagination
- Maintainability: overlong functions (>80 lines), cyclomatic complexity >15, duplicated logic blocks, magic constants, circular dependencies
- Architecture drift: inconsistencies between the code and the architecture records, layer leakage, implicit cross-module coupling

Use Glob / Grep / Read to actually read the code during scanning. Each finding must record file:line plus a specific code snippet.

Upper limit: report at most 5 findings per dimension. Do not pad the numbers; stop when you have enough, and do not force it if there are fewer.
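As a sketch of what a single Grep pass could look like, the following hunts for one security smell; the pattern, directory, and file contents are illustrative assumptions, not part of cs-audit:

```shell
# Hypothetical scan for SQL built by string concatenation.
# All paths and the sample file are invented for illustration.
mkdir -p /tmp/audit-demo
cat > /tmp/audit-demo/db.py <<'EOF'
query = "SELECT * FROM users WHERE id = " + user_id
EOF
# -r: recurse into the scope directory; -n: print line numbers,
# which supply the file:line evidence each finding must record
grep -rn 'SELECT .* + ' /tmp/audit-demo
```

Each match already carries the `file:line` half of the evidence; the matched line itself is the starting point for the code-snippet half.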
Confidence level definitions:

- High: the code path can be confirmed to trigger, with clear impact
- Medium: the issue can be located via static analysis, but the trigger conditions are uncertain
- Low: a suspicious clue; needs further confirmation but is worth flagging
### Phase 3: Rating + Output

- Tag each finding with nature + severity + confidence level + suggested action
- Write index.md: scope, overall assessment, and the cross-classified findings table
- Write the finding-NN.md files one by one

Write the index first, then the findings: this order lets the AI form an overall judgment before expanding details, avoiding getting lost in individual findings.
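The cross-classified table in index.md could look like the following sketch; the rows are invented placeholders, not real findings:

```markdown
| ID | Nature      | Severity | Confidence | Suggested action        |
|----|-------------|----------|------------|-------------------------|
| 01 | Security    | P0       | High       | Create an issue now     |
| 02 | Bug risk    | P1       | Medium     | Schedule next iteration |
| 03 | Performance | P2       | Low        | Review when time allows |
```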
### Phase 4: Suggest Next Steps

At the end of index.md, provide priority suggestions, for example:

- "Create issues for the 3 P0 findings immediately"
- "Schedule the 5 P1 findings for the next iteration"
- "Review the 4 P2 findings when there is time"

Whichever option the user selects, route it to the appropriate follow-up skill. cs-audit does not fix issues itself.
## Boundaries with Adjacent Skills

| Skill | Trigger | How cs-audit handles it |
|---|---|---|
| (bug workflow) | User reports a known bug | When the audit identifies a bug, suggest creating a record in that workflow |
| (optimization workflow) | User points out a known optimization point | When the audit identifies an optimization point, suggest creating a record in that workflow |
| (explore) | Investigate code around a specific question | An audit is a batch scan across multiple dimensions, not a targeted exploration |
| cs-arch | Maintain architecture documents | cs-arch maintains the documents; cs-audit checks whether the code deviates from them |
| (security review) | In-depth security review | The audit's security dimension is lightweight scanning; an in-depth security review requires its own process |
## Guardrails

- No blind scanning of the entire repository: Phase 1 must converge the scope; do not start without one
- Every finding must have evidence: file:line + code snippet + why it constitutes a problem. Findings like "feels wrong" or "might have issues" without evidence are not allowed
- Confidence levels must be marked honestly: do not tag every finding as High
- At most 5 findings per dimension: select the findings most worth reporting rather than dumping everything
- Only identify, never fix: cs-audit produces no code changes; any "fix it while we're here" is out of bounds
- Architecture drift must reference the current documents: do not judge what the architecture should look like from memory; read and compare against the architecture records
- Mark old audits as expired: when a new audit of the same module covers an old one, mark the old index's frontmatter as expired and add `superseded-by: {new directory}`
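The superseding mark in the old index's frontmatter might look like this; `superseded-by` comes from the guardrail above, while the `status` field name is an assumption:

```yaml
# Hypothetical frontmatter of the old, superseded index.md
status: expired                          # assumed field name
superseded-by: 2025-02-01-payment-flow   # {new directory}
```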
## Exit Criteria
## Related Documents

- Audit template — template for index.md / finding-NN.md
- `codestable/reference/shared-conventions.md` — shared standards across workflows
- Architecture records — reference source for architecture drift findings