Augments Trailmark code graphs with external audit findings from SARIF static analysis results and weAudit annotation files. Maps findings to graph nodes by file and line overlap, creates severity-based subgraphs, and enables cross-referencing findings with pre-analysis data (blast radius, taint, etc.). Use when projecting SARIF results onto a code graph, overlaying weAudit annotations, cross-referencing Semgrep or CodeQL findings with call graph data, or visualizing audit findings in the context of code structure.
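The core matching idea (same file plus line-range overlap) can be sketched in a few lines. The dataclasses and helper below are illustrative only, not trailmark's actual internals:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    file: str
    start_line: int
    end_line: int

@dataclass
class Finding:
    rule_id: str
    file: str
    start_line: int
    end_line: int

def match_findings(nodes, findings):
    """Map each finding to the graph nodes whose line range overlaps it."""
    matched, unmatched = {}, []
    for f in findings:
        hits = [n for n in nodes
                if n.file == f.file
                and n.start_line <= f.end_line
                and f.start_line <= n.end_line]
        if hits:
            matched[f.rule_id] = [n.name for n in hits]
        else:
            unmatched.append(f.rule_id)  # no node covers this location
    return matched, unmatched

matched, unmatched = match_findings(
    [Node("parse", "app.py", 10, 40)],
    [Finding("py.sqli", "app.py", 12, 12), Finding("py.xss", "lib.py", 5, 5)],
)
# matched == {"py.sqli": ["parse"]}; unmatched == ["py.xss"]
```

A finding in a file with no graph node (here `lib.py`) ends up unmatched, which is why the unmatched count is worth reporting.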
Install the skill:

```shell
npx skill4agent add trailofbits/skills audit-augmentation
```

Related skills: trailmark, diagramming-code

| Rationalization | Why It's Wrong | Required Action |
|---|---|---|
| "The user only asked about SARIF, skip pre-analysis" | Without pre-analysis, you can't cross-reference findings with blast radius or taint | Always run |
| "Unmatched findings don't matter" | Unmatched findings may indicate parsing gaps or out-of-scope files | Report unmatched count and investigate if high |
| "One severity subgraph is enough" | Different severities need different triage workflows | Query all severity subgraphs, not just one |
| "SARIF results speak for themselves" | Findings without graph context lack blast radius and taint reachability | Cross-reference with pre-analysis subgraphs |
| "weAudit and SARIF overlap, pick one" | Human auditors and tools find different things | Import both when available |
| "Tool isn't installed, I'll do it manually" | Manual analysis misses what tooling catches | Install trailmark first |
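The cross-referencing the table insists on boils down to set intersection over node names. A minimal sketch with hypothetical data (plain sets, not the trailmark API):

```python
# Hypothetical node sets: SARIF findings plus pre-analysis subgraphs
sarif_error = {"parse_input", "render_page", "load_config"}
tainted = {"parse_input", "render_page", "send_mail"}
high_blast_radius = {"load_config", "parse_input"}

# Findings that are both taint-reachable and high blast radius triage first
critical = sarif_error & tainted & high_blast_radius
print(sorted(critical))  # ['parse_input'] for this example data
```

A SARIF error on a node that is neither tainted nor high blast radius is still a finding, but it drops down the triage queue.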
Check the CLI with `uv run trailmark`; if it is missing, install it with `uv pip install trailmark`.

```shell
# Augment with SARIF
uv run trailmark augment {targetDir} --sarif results.sarif

# Augment with weAudit
uv run trailmark augment {targetDir} --weaudit .vscode/alice.weaudit

# Both at once, output JSON
uv run trailmark augment {targetDir} \
  --sarif results.sarif \
  --weaudit .vscode/alice.weaudit \
  --json
```

```python
from trailmark.query.api import QueryEngine

engine = QueryEngine.from_directory("{targetDir}", language="python")

# Run pre-analysis first for cross-referencing
engine.preanalysis()

# Augment with SARIF
result = engine.augment_sarif("results.sarif")
# result: {matched_findings: 12, unmatched_findings: 3, subgraphs_created: [...]}

# Augment with weAudit
result = engine.augment_weaudit(".vscode/alice.weaudit")

# Query findings
engine.findings()                       # All findings
engine.subgraph("sarif:error")          # High-severity SARIF
engine.subgraph("weaudit:high")         # High-severity weAudit
engine.subgraph("sarif:semgrep")        # By tool name
engine.annotations_of("function_name")  # Per-node lookup
```

Augmentation Progress:
- [ ] Step 1: Build graph and run pre-analysis
- [ ] Step 2: Locate SARIF/weAudit files
- [ ] Step 3: Run augmentation
- [ ] Step 4: Inspect results and subgraphs
- [ ] Step 5: Cross-reference with pre-analysis

```python
engine = QueryEngine.from_directory("{targetDir}", language="{lang}")
engine.preanalysis()
```

For step 2, generate SARIF with `semgrep --sarif -o results.sarif` or `codeql database analyze --format=sarif-latest`; weAudit annotations live at `.vscode/<username>.weaudit`. Step 3 is `engine.augment_sarif()` and/or `engine.augment_weaudit()`. For step 4, check `unmatched_findings` in the result, then inspect `engine.findings()` and `engine.subgraph_names()`. For step 5, cross-reference severity subgraphs such as `sarif:error` with the pre-analysis subgraphs `tainted`, `high_blast_radius`, and `privilege_boundary`. Matched nodes carry `finding` and `audit_note` annotations, per-tool and per-author subgraphs are named `sarif:<tool_name>` and `weaudit:<author>`, and each finding renders as `[SEVERITY] rule-id: message (tool)`.

| Subgraph | Contents |
|---|---|
| `sarif:error` | Nodes with SARIF error-level findings |
| `sarif:warning` | Nodes with SARIF warning-level findings |
| `sarif:note` | Nodes with SARIF note-level findings |
| `sarif:<tool_name>` | Nodes flagged by a specific tool |
| `weaudit:high` | Nodes with high-severity weAudit findings |
| `weaudit:medium` | Nodes with medium-severity weAudit findings |
| `weaudit:low` | Nodes with low-severity weAudit findings |
| `finding` | All weAudit findings (entryType=0) |
| `audit_note` | All weAudit notes (entryType=1) |
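The table above is essentially an index from subgraph name to node set. A plain-Python sketch of how such an index could be built (the names mirror the table; the builder itself is illustrative, not trailmark code):

```python
from collections import defaultdict

def build_subgraphs(findings):
    """findings: iterable of (node, source, severity) triples,
    e.g. ("parse_input", "sarif", "error")."""
    subgraphs = defaultdict(set)
    for node, source, severity in findings:
        # One subgraph per source:severity pair, e.g. "weaudit:high"
        subgraphs[f"{source}:{severity}"].add(node)
    return dict(subgraphs)

subs = build_subgraphs([
    ("parse_input", "sarif", "error"),
    ("render_page", "sarif", "warning"),
    ("load_config", "weaudit", "high"),
    ("parse_input", "weaudit", "high"),
])
# subs["sarif:error"] == {"parse_input"}
# subs["weaudit:high"] == {"load_config", "parse_input"}
```

A node can appear in several subgraphs at once, which is what makes cross-source triage (SARIF plus weAudit on the same node) possible.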
Path matching resolves each finding's `location.file_path` against the graph's `root_path`, stripping any `file://` prefix first.
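That normalization step can be sketched as follows; the function name and fallback behavior are assumptions for illustration, not the library's implementation:

```python
from pathlib import PurePosixPath

def normalize_sarif_path(file_path: str, root_path: str) -> str:
    """Strip a file:// scheme and make the path relative to the graph root,
    so SARIF locations line up with node paths. Illustrative sketch only."""
    if file_path.startswith("file://"):
        file_path = file_path[len("file://"):]
    p = PurePosixPath(file_path)
    try:
        return str(p.relative_to(PurePosixPath(root_path)))
    except ValueError:
        return str(p)  # already relative, or outside the root

print(normalize_sarif_path("file:///repo/src/app.py", "/repo"))  # src/app.py
print(normalize_sarif_path("src/app.py", "/repo"))               # src/app.py
```

Findings whose normalized path falls outside the root stay absolute and will typically end up in the unmatched count.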