# SOC Compass API

The agent acts as the SOC analyst: reading workspace context, formulating SIEM queries, asking the user to execute them, analyzing results, and writing verdicts to the SOC Compass platform.

## How to call the API

ALWAYS use `curl` via the Bash tool. Do not use WebFetch, `fetch()`, or any other HTTP client.

```bash
API="https://astute-cormorant-480.convex.site/api/v1"
KEY="<user-provided-api-key>"
curl -s "$API/ENDPOINT" -H "Authorization: Bearer $KEY"
```

Key format: `soc_sk_<32hex>`. The user provides this when invoking the skill.
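A quick pre-flight sanity check on the key shape can save a round of confusing 401s. This is a sketch, assuming the `soc_sk_<32hex>` format above; the sample value is illustrative, not a real key:

```bash
# Hypothetical pre-flight check: fail fast if the user-provided key does not
# match the soc_sk_<32hex> format before any API call is attempted.
KEY="soc_sk_0123456789abcdef0123456789abcdef"   # illustrative value only

if [[ "$KEY" =~ ^soc_sk_[0-9a-f]{32}$ ]]; then
  echo "key format OK"
else
  echo "key format invalid, ask the user to re-check it" >&2
  exit 1
fi
```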

## Posting multi-line content (Windows compatibility)

Reports contain Windows paths like `C:\Users\luke.s\AppData\...` where `\T`, `\0`, etc. break Node.js template literals. Use this three-step file-based method instead:

**Step 1: Write report to file using heredoc (handles all escaping including backslashes)**

```bash
cat > "$TEMP/report.txt" << 'ENDOFREPORT'
Your report with C:\paths\and\backslashes goes here...
ENDOFREPORT
```

**Step 2: Read file and JSON-stringify with Node.js (use cygpath for Windows paths)**

```bash
REPORT_PATH="$(cygpath -w "$TEMP/report.txt")"
PAYLOAD_PATH="$(cygpath -w "$TEMP/payload.json")"
node -e "
const fs = require('fs');
const content = fs.readFileSync(process.argv[1], 'utf8');
fs.writeFileSync(process.argv[2], JSON.stringify({role: 'assistant', content}));
" "$REPORT_PATH" "$PAYLOAD_PATH"
```

**Step 3: Post using the JSON file**

```bash
curl -s -X POST "$API/conversations/$CONV/messages" \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d @"$PAYLOAD_PATH"
```

**CRITICAL Windows notes:**
- **NEVER** use Node.js template literals (backticks) for content with Windows paths — `\0` triggers "Legacy octal escape" errors
- **NEVER** use `/tmp/` paths with Node.js on Windows — Node.js resolves `/tmp/` as `C:\tmp\` which doesn't exist. Always use `$TEMP` with `cygpath -w` to convert to Windows paths
- The heredoc with `'ENDOFREPORT'` (single-quoted delimiter) prevents ALL bash escaping — safe for any content

## CRITICAL: Schema discovery is MANDATORY

You MUST discover the SIEM schema BEFORE writing ANY investigation query. Do NOT guess index names, sourcetypes, or field names. Every SIEM instance is different. If you skip this step, your queries WILL fail.

The schema tells you:
- What indexes exist (e.g., `corp`, `main`, `wineventlog`)
- What sourcetypes exist (e.g., `WinEventLog`, `_json`, `xmlwineventlog`)
- What fields are available and their exact names (e.g., `EventCode` vs `event.code`)
- How many events each field/index contains

Without the schema, you are blind. ALWAYS get the schema first.

Schema is per-workspace (same SIEM instance). If you already have it from a prior conversation in the same workspace, you do NOT need to re-ask. Save it to context on first discovery.

## Analytical integrity

When you reach a classification based on evidence, DEFEND IT. If the user questions your verdict:
  1. Restate the specific evidence supporting your classification
  2. Ask what counter-evidence they have that you may have missed
  3. Only change your classification if NEW evidence is presented
  4. Never change a verdict just because the user disagrees — agreement without evidence is worse than being wrong with reasoning
A SOC analyst who flips their verdict without new evidence is unreliable. The user may be testing your conviction or playing devil's advocate.

## Classification decision framework

Classify based on the SPECIFIC activity the alert detected, not the overall host state:
- Alert fires on Event X → Is Event X itself malicious/suspicious?
  - YES → True Positive
  - NO → False Positive (even if other malicious activity exists on the host)

Example: Alert fires on a legitimate scheduled task creation. During investigation you discover a DIFFERENT malicious task on the same host.
- The alert = False Positive (it detected a legitimate task)
- The malware = separate finding requiring its own alert/escalation
- Note both findings in the report, but classify the alert based on what IT detected

This is NOT "the alert was useless" — the alert LED to discovering the malware. But classification is about the specific detected activity.

## Automated scenarios

### Scenario A: New investigation

User gives alert + workspace ID. Follow these steps in exact order:

**Step 1: Get workspace context**

```bash
curl -s "$API/workspaces/{workspaceId}" -H "Authorization: Bearer $KEY"
```

Note the `siemProvider` (splunk/elastic/sentinel), `mode`, `contextInput`, and `dataSource`.
**Step 2: ALWAYS submit alert to the queue first**

Every investigation MUST go through the queue — even if the user pasted the alert directly in the CLI. This ensures the Agent Dashboard on the frontend tracks all investigations in real-time.

```bash
curl -s -X POST "$API/workspaces/{workspaceId}/queue" \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d '{"alertId": "{alertId}", "alertTitle": "{alertTitle}", "alertSeverity": "{severity}", "alertData": "{full alert text}"}'
```

Save the returned `id` as `QID` (queue item ID).
**Step 3: Claim the alert**

```bash
curl -s -X PATCH "$API/queue/{QID}/claim" -H "Authorization: Bearer $KEY"
```

The frontend Agent Dashboard now shows this alert as "Processing".
**Step 4: Check for cached context from prior investigations**

Check if the workspace already has a cached schema from a prior queue item:

```bash
curl -s "$API/workspaces/{workspaceId}/queue?status=completed" -H "Authorization: Bearer $KEY"
```

If a completed alert exists with the same workspace, its context (including schema) can be reused. Otherwise, proceed to schema discovery.
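A minimal sketch of that decision, assuming `jq` is available and that completed queue items expose their saved context under an `agentContext.schema` field (the field name is an assumption based on the `/context` payloads used in this workflow; the response here is a canned sample, not a live call):

```bash
# Canned sample of a completed-queue response; a real run would capture the
# curl output of the status=completed query instead.
RESP='[{"id":"q1","agentContext":{"schema":{"provider":"splunk"}}},{"id":"q2"}]'

# Count completed items that carry a cached schema (field name is an assumption).
CACHED=$(echo "$RESP" | jq '[.[] | select(.agentContext.schema != null)] | length')

if [ "$CACHED" -gt 0 ]; then
  echo "reuse cached schema"
else
  echo "run schema discovery"
fi
```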
**Step 5: MANDATORY schema discovery**

This step is NON-NEGOTIABLE. You MUST do this before ANY investigation query. Post progress to the queue so the dashboard shows what you're doing.

Ask the user directly based on the SIEM provider from Step 1:

**Splunk:**

```
Please run this query in Splunk and paste the full results:

index=* NOT index=_* earliest=-30d | head 10000 | fieldsummary maxvals=10 | sort -count | head 60

This will show me what indexes, sourcetypes, and fields exist so I can write accurate queries.
```

Note: `earliest=-30d` limits to the last 30 days — good for production SIEMs to avoid scanning too much data. For TryHackMe labs or historical investigations where events may be older, the autonomous mode uses `earliest=0` (All time) instead.

**Elastic:**

```
Please go to Kibana Discover, select the relevant index pattern, and paste 5-10 sample events as JSON. I need the actual field names to write correct ES|QL queries.
```

**Sentinel:**

```
Please run this in Azure Monitor Logs and paste the results:

search * | summarize count() by $table | sort by count_ desc | take 20

Then paste 3-5 sample events from the most relevant table.
```

After the user provides schema results:

1. Parse carefully — extract index names, sourcetypes, field names, event counts
2. Save immediately to the queue item context:

   ```bash
   curl -s -X PATCH "$API/queue/{QID}/context" \
     -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
     -d '{"schema": {"provider": "splunk", "indexes": [...], "sourcetypes": [...], "fields": [...], "rawSchemaOutput": "..."}, "investigationPhase": "schema_complete"}'
   ```

3. Post progress so the dashboard shows schema discovery is done:

   ```bash
   curl -s -X PATCH "$API/queue/{QID}/progress" \
     -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
     -d '{"step": "schema_discovery", "status": "complete", "title": "Schema discovery complete", "detail": "Found index=main, sourcetype=_json, 595 events"}'
   ```

4. ALL subsequent queries MUST use names from the schema. Never guess or use defaults.
**Step 6: Investigation loop**

NOW you can formulate queries — using ONLY field names, indexes, and sourcetypes from the schema. For each query:
1. Verify the fields exist in the schema
2. Use the correct index and sourcetype from the schema

**HITL mode (default):** Ask the user to run each query:

```
Please run this {SPL/KQL/ESQL} query and paste the results:

{query using schema-verified field names}

Purpose: {why this query matters}
```

**Autonomous mode:** Run each query yourself via Chrome — type the query in the SIEM search bar, execute it, and read the results directly.

Analyze results. Apply the classification framework after 1-3 initial queries.
**Step 7: Save IOCs and MITRE techniques to the queue item**

```bash
curl -s -X PATCH "$API/queue/{QID}/iocs" \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d '{"iocs": [{"value": "...", "type": "ip", "verdict": "malicious", "context": "C2 server"}], "append": true}'

curl -s -X PATCH "$API/queue/{QID}/mitre" \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d '{"techniques": [{"techniqueId": "T1053.005", "name": "Scheduled Task", "tactic": "Persistence"}], "append": true}'
```
**Step 8: Save report to the queue item** (use heredoc + Node.js for Windows paths — see "Posting multi-line content" above)

Write the 9-section report (see `references/report-format.md`), then:

```bash
curl -s -X PATCH "$API/queue/{QID}/report" \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d @"$PAYLOAD_PATH"
```

Where the payload JSON is `{"report": "# Investigation Report..."}`.
**Step 9: Mark complete with verdict**

```bash
curl -s -X PATCH "$API/queue/{QID}/complete" \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d '{"verdict": "True Positive", "verdictConfidence": 92, "escalationRequired": true, "classificationRationale": "...", "queriesExecuted": 5, "agentSource": "claude-code"}'
```

Valid verdicts: `True Positive`, `False Positive`, `Suspicious`, `Requires Further Investigation`, `Unknown`
**Step 10: Check queue for more alerts**

```bash
curl -s "$API/workspaces/{workspaceId}/queue/next" -H "Authorization: Bearer $KEY"
```

If `empty: true` → "All alerts processed." If an alert exists → go to Step 3 (claim it).
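The claim-and-process cycle can be sketched as a single loop. This is a sketch, assuming `jq` is available and that `$WSID` holds the workspace ID; `investigate_alert` is a placeholder standing in for Steps 4-9, not an API call:

```bash
# Placeholder for Steps 4-9 (schema, queries, IOCs, report, complete).
investigate_alert() { echo "investigating queue item $1"; }

# Claim and process alerts until /queue/next reports empty: true.
process_queue() {
  local next qid
  while true; do
    next=$(curl -s "$API/workspaces/$WSID/queue/next" -H "Authorization: Bearer $KEY")
    if [ "$(echo "$next" | jq -r '.empty // false')" = "true" ]; then
      echo "All alerts processed."
      return 0
    fi
    qid=$(echo "$next" | jq -r '.id')
    curl -s -X PATCH "$API/queue/$qid/claim" -H "Authorization: Bearer $KEY" > /dev/null
    investigate_alert "$qid"
  done
}
```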

### Scenario B: Resume investigation

User references a conversation ID:

```bash
curl -s "$API/conversations/{CONV_ID}/context" -H "Authorization: Bearer $KEY"
```

Read saved context (schema, queries, findings). Resume from where you left off. No need to re-read messages or redo schema discovery.

### Scenario C: General question

User asks a question (not a full investigation):
  1. Read workspace context for relevant info
  2. Answer directly
  3. Save Q&A in conversation context

### Scenario D: Extra context

User provides info beyond what's in the workspace. Save it alongside investigation state in context.

### Scenario E: Related alert (same host/incident)

If the new alert is clearly part of an already-investigated incident (same host, same timeframe, same attack chain):
  1. DO NOT create a new conversation — append to the existing one
  2. Skip schema discovery (already cached in context)
  3. Reference prior findings: "This was already identified during Alert {X} investigation"
  4. Post verdict and report as additional messages in the same conversation
  5. Only create a new conversation if the alert is on a different host or a genuinely separate incident

## Alert queue workflow (queue-centric — all data goes to the queue)

All investigation data goes directly to the alert queue item — not to conversations. The Agent Dashboard on the frontend auto-updates in real-time as you work.

### Starting an investigation session

1. Check the queue for pending alerts:

   ```bash
   curl -s "$API/workspaces/{wsId}/queue/next" -H "Authorization: Bearer $KEY"
   ```

2. Claim the alert:

   ```bash
   curl -s -X PATCH "$API/queue/{queueItemId}/claim" -H "Authorization: Bearer $KEY"
   ```

   Dashboard shows "Processing" instantly. The response includes the `siemProvider` from the workspace.

3. Post progress as you work (each step appears live on the dashboard):

   ```bash
   curl -s -X PATCH "$API/queue/{queueItemId}/progress" \
     -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
     -d '{"step": "schema_discovery", "status": "running", "title": "Running schema discovery query"}'
   ```

   When a step completes, post again with `"status": "complete"` and optionally `"detail": "Found 595 events, index=main, sourcetype=_json"`.

4. Investigate (schema discovery → queries → analysis → classification). Post progress for each major step.

5. Save IOCs as you find them:

   ```bash
   curl -s -X PATCH "$API/queue/{queueItemId}/iocs" \
     -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
     -d '{"iocs": [{"value": "103.131.189.2", "type": "ip", "verdict": "malicious", "context": "C2 server"}], "append": true}'
   ```

6. Save MITRE techniques:

   ```bash
   curl -s -X PATCH "$API/queue/{queueItemId}/mitre" \
     -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
     -d '{"techniques": [{"techniqueId": "T1053.005", "name": "Scheduled Task", "tactic": "Persistence", "evidence": "..."}], "append": true}'
   ```

7. Save investigation report (use heredoc + Node.js file method for Windows paths):

   ```bash
   curl -s -X PATCH "$API/queue/{queueItemId}/report" \
     -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
     -d @"$PAYLOAD_PATH"
   ```

   Where the payload JSON is `{"report": "# Investigation Report..."}`.

8. Save agent context (schema, investigation state for resume):

   ```bash
   curl -s -X PATCH "$API/queue/{queueItemId}/context" \
     -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
     -d '{"schema": {...}, "queriesRun": [...], "investigationPhase": "completed"}'
   ```

9. Mark complete with verdict:

   ```bash
   curl -s -X PATCH "$API/queue/{queueItemId}/complete" \
     -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
     -d '{"verdict": "True Positive", "verdictConfidence": 92, "escalationRequired": true, "classificationRationale": "...", "queriesExecuted": 5, "agentSource": "claude-code"}'
   ```

10. Check for more:

    ```bash
    curl -s "$API/workspaces/{wsId}/queue/next" -H "Authorization: Bearer $KEY"
    ```

    If `empty: true` → "All alerts processed." If an alert exists → go to step 2.

### Users can submit alerts anytime

Via frontend Agent Dashboard or API:

```bash
curl -s -X POST "$API/workspaces/{wsId}/queue" \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d '{"alertId": "1024", "alertTitle": "Scheduled Task", "alertSeverity": "medium", "alertData": "..."}'
```

### Linked / related alerts

When a new alert is on the same host/attack chain:
- Reuse cached schema from context
- Check prior findings before running new queries
- Cross-reference IOCs and timelines from prior alerts
- Schema discovery only needs to happen ONCE per workspace

## Recommended query sequence (process-based alerts)

For most alerts, follow this order:
1. **Process tree:** All process creation events on the host (Sysmon EventCode 1) — full timeline
2. **Network:** All outbound connections from the host (Sysmon EventCode 3) — C2 detection
3. **File activity:** File creates/deletes (Sysmon EventCode 11) — staging, drops
4. **Registry:** Registry modifications (Sysmon EventCode 13) — persistence
5. **DNS:** DNS queries (Sysmon EventCode 22) — domain IOCs

Queries 1-2 are usually sufficient for classification. Queries 3-5 are for enrichment.
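As a sketch, the sequence above might translate into SPL like this for a Splunk/Sysmon setup. Every name here (`index=main`, `sourcetype=XmlWinEventLog`, the host value) is a placeholder: substitute the names your schema discovery actually returned, never these defaults.

```bash
HOST="WIN-VICTIM01"   # placeholder: take the real host from the alert

# 1. Process tree (Sysmon EventCode 1)
Q_PROCESS="index=main sourcetype=XmlWinEventLog host=$HOST EventCode=1 | table _time ParentImage Image CommandLine | sort _time"
# 2. Outbound network connections (Sysmon EventCode 3)
Q_NETWORK="index=main sourcetype=XmlWinEventLog host=$HOST EventCode=3 | stats count by Image DestinationIp DestinationPort"
# 3. File creates (Sysmon EventCode 11)
Q_FILES="index=main sourcetype=XmlWinEventLog host=$HOST EventCode=11 | table _time Image TargetFilename"
# 4. Registry modifications (Sysmon EventCode 13)
Q_REGISTRY="index=main sourcetype=XmlWinEventLog host=$HOST EventCode=13 | table _time Image TargetObject Details"
# 5. DNS queries (Sysmon EventCode 22)
Q_DNS="index=main sourcetype=XmlWinEventLog host=$HOST EventCode=22 | stats count by Image QueryName"

echo "$Q_PROCESS"
```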

## Schema analysis tip

Schema discovery results themselves may reveal IOCs. The `fieldsummary` output shows top values per field — unusual process names, suspicious paths, or unexpected hosts in the top values are worth noting immediately.

## Severity upgrade criteria

- **Low → Medium:** Suspicious activity confirmed but no active exploitation
- **Medium → High:** Active exploitation confirmed (code execution, credential access)
- **Medium/High → Critical:** Active C2 communication, data exfiltration, or lateral movement
- Always note the upgrade: "Severity: Medium (upgraded to Critical based on...)"

## Windows path escaping in context/IOC saves

Context and IOC payloads often contain Windows paths (`C:\ProgramData\Media\svchost.exe`). Use Node.js object literals with `String.fromCharCode(92)` for backslashes:

```bash
CTX_PATH="$(cygpath -w "$TEMP/ctx_payload.json")"
node -e "
var bs = String.fromCharCode(92);
var payload = {
  schema: {provider: 'splunk'},
  iocs: {files: ['C:' + bs + 'ProgramData' + bs + 'Media' + bs + 'svchost.exe']}
};
require('fs').writeFileSync(process.argv[1], JSON.stringify(payload));
" "$CTX_PATH"
curl -s -X PATCH "$API/queue/$QID/context" -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" -d @"$CTX_PATH"
```

**Rule:** ANY payload with Windows paths must use Node.js object literals. Never JSON string literals or heredocs with backslashes.

## Asking the user for information (HITL mode — default)

In the default human-in-the-loop mode, ask the user DIRECTLY in the conversation:
- "Please run this query in your {Splunk/Elastic/Sentinel}: {query}"
- "Please check this IOC in VirusTotal/ThreatFox: {ioc}"
- "Is this server authorized to make outbound connections to external IPs?"

Guidelines:
- Ask ONE query at a time (the user runs each manually)
- Always explain the PURPOSE of each query
- If the user provides partial results, ask for clarification
- If the user can't run a query, adapt your approach
- Save context after each major step (enables resume)

Note: If the user requested autonomous mode, skip asking — use Chrome to run queries directly (see "Autonomous Mode" section below).

## Autonomous Mode (Chrome Integration)

This mode is OPTIONAL and OPT-IN ONLY. Only activate when the user EXPLICITLY requests automation. If the user does not mention automation, Chrome, autonomous, or browser — use the default HITL mode above and DO NOT mention autonomous mode.

### When to activate

Activate autonomous mode ONLY when the user's message contains phrases like:
- "do this autonomously" / "automate this" / "fully automated"
- "use my browser" / "use Chrome"
- "run the queries yourself" / "you do it"
- "here's the Splunk/Kibana/Sentinel URL, go ahead"
- "no human in the loop" / "don't ask me to run queries"

If none of these phrases appear, stay in HITL mode silently. Do not suggest or mention autonomous mode.

### Prerequisites

Before using autonomous mode, verify:
1. **Chrome is connected** — the user must have launched Claude Code with `claude --chrome` or typed `/chrome`. If Chrome tools are not available, tell the user: "Autonomous mode requires Chrome integration. Please run `/chrome` to connect your browser, then try again. Make sure you're logged into the target websites first."
2. **User is logged in** — the AI uses the user's existing Chrome sessions. It cannot log in, handle MFA, or solve CAPTCHAs. If a login page appears, pause and ask the user to log in manually.

### How to use Chrome tools

Use the browser tools provided by the `claude-in-chrome` MCP to interact with websites:
- **Navigate:** Open a URL in a new tab or navigate the current tab
- **Read:** Read the page content, tables, form values
- **Click:** Click buttons, links, menu items
- **Type:** Type text into search boxes, form fields
- **Screenshot:** Take a screenshot to verify what you see
- **Multiple tabs:** Open different sites in different tabs (e.g., Splunk in one, VirusTotal in another)

### Autonomous investigation flow

Follow the same investigation steps as HITL mode, but instead of asking the user to run queries, run them yourself via Chrome.

**Reading results:** Use `get_page_text` instead of screenshots for extracting complete data (hashes, encoded commands, long field values). Screenshots are useful for visual verification but lose critical details like full SHA256 hashes and base64 strings. For Splunk, click into the Events tab and use `get_page_text` to read full event details.
**Schema discovery (Splunk):**

Use URL-based navigation (most reliable — avoids CodeMirror editor interaction issues):
1. Navigate directly to:

   ```
   {splunk_url}/en-US/app/search/search?earliest=0&latest=&q=search%20index%3D*%20NOT%20index%3D_*%20%7C%20head%2010000%20%7C%20fieldsummary%20maxvals%3D10%20%7C%20sort%20-count%20%7C%20head%2060&display.page.search.tab=statistics
   ```

2. Wait for results to load
3. Use `get_page_text` to read the results table
4. Save schema to SOC Compass context via API

Note: `earliest=0&latest=` sets the time range to "All time" — essential for historical data (TryHackMe labs, past incidents). The default "Last 24 hours" will return nothing for historical events.
Schema discovery (Kibana/Elastic):
  1. Navigate to the Kibana URL → Discover
  2. Select the relevant index pattern
  3. Set time range to cover the investigation period
  4. Use get_page_text to read 5-10 sample events
  5. Save schema to context
Schema discovery (Sentinel):
  1. Navigate to the Azure Portal Log Analytics workspace
  2. Run:
    search * | summarize count() by $table | sort by count_ desc | take 20
  3. Use get_page_text to read results, then query 3-5 sample events from the relevant table
  4. Save schema to context
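Step 4 above can be sketched as follows; the exact shape of the context payload is an assumption, so adapt the field names to what the API accepts:

```shell
# Sketch: persist a discovered schema so later sessions can resume.
# The fields inside SCHEMA (siem, indexes, sourcetypes) are illustrative.
SCHEMA='{"siem": "splunk", "indexes": ["win_logs"], "sourcetypes": ["XmlWinEventLog"]}'
echo "$SCHEMA" | node -e '
  let d = "";
  process.stdin.on("data", c => d += c);
  process.stdin.on("end", () => {
    process.stdout.write(JSON.stringify({ context: JSON.parse(d) }));
  });
' > payload.json
# Then: curl -s -X PATCH "$API/queue/$QUEUE_ID/context" -H "Authorization: Bearer $KEY" \
#   -H "Content-Type: application/json" -d @payload.json
```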
Running investigation queries (Splunk — URL method, recommended):
Navigate directly with the query in the URL instead of typing in the search bar:
{splunk_url}/en-US/app/search/search?earliest=0&latest=&q=search%20{url_encoded_query}&display.page.search.tab=events
Steps:
  1. URL-encode your SPL query
  2. Navigate to the URL above with the encoded query
  3. Wait for results to load
  4. Use get_page_text to read the full results (Events tab for raw events, Statistics tab for table output)
  5. Analyze and formulate next query
  6. Repeat
Why URL-based is better than typing in the search bar:
  • Splunk's CodeMirror editor often fails with form_input: text appends instead of replacing
  • Ctrl+A sometimes selects the whole page instead of just the query
  • URL-based execution avoids both problems and also sets the time range correctly
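The URL construction in steps 1-2 can be sketched like this; the Splunk base URL is a placeholder for your deployment:

```shell
# Sketch: URL-encode an SPL query and build the direct search URL.
SPLUNK_URL="https://splunk.example.local:8000"   # placeholder
SPL='index=win_logs EventCode=4688 | head 20'
ENCODED=$(node -e 'console.log(encodeURIComponent(process.argv[1]))' "$SPL")
SEARCH_URL="$SPLUNK_URL/en-US/app/search/search?earliest=0&latest=&q=search%20$ENCODED&display.page.search.tab=events"
echo "$SEARCH_URL"
```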
Running investigation queries (Kibana/Sentinel):
  1. Navigate to the query interface
  2. Clear and type the new query
  3. Execute and use get_page_text to read results
  4. Analyze and repeat
IOC lookups via Chrome:
  1. Open a new tab
  2. Navigate to VirusTotal (https://www.virustotal.com), ThreatFox, or other threat intel site
  3. Search for the hash/IP/domain
  4. Use get_page_text to read the results and detection ratios
  5. Include findings in the investigation
Handling errors:
  • If a login page appears: pause and ask the user to log in manually, then continue
  • If a CAPTCHA appears: pause and ask the user to solve it, then continue
  • If the page doesn't load or times out: try refreshing, then ask the user for help
  • If results are still loading: wait and check again (SIEM queries can take time)
  • If CodeMirror/search bar interaction fails: fall back to URL-based query execution

Important: Still use the SOC Compass queue API

Even in autonomous mode, you MUST still:
  • Submit to queue + claim (Steps 2-3) so the dashboard tracks the investigation
  • Post progress steps via PATCH /queue/:id/progress as you work
  • Save IOCs via PATCH /queue/:id/iocs
  • Save MITRE techniques via PATCH /queue/:id/mitre
  • Save the report via PATCH /queue/:id/report
  • Mark complete via PATCH /queue/:id/complete with a verdict
Chrome is used to GATHER evidence. The queue API is used to PERSIST results and update the dashboard.
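For example, a progress update between queries might look like this; the payload field names (step, title, status) are assumptions about the API shape:

```shell
# Sketch: record one progress step on the queue item while working via Chrome.
PAYLOAD=$(node -e 'console.log(JSON.stringify({
  step: 2,
  title: "Schema discovery complete",
  status: "completed"
}))')
echo "$PAYLOAD"
# Then: curl -s -X PATCH "$API/queue/$QUEUE_ID/progress" -H "Authorization: Bearer $KEY" \
#   -H "Content-Type: application/json" -d "$PAYLOAD"
```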

Decoding encoded commands

When you find PowerShell -EncodedCommand or other Base64 payloads, decode immediately:
bash
echo '<base64_string>' | base64 -d | iconv -f UTF-16LE -t UTF-8
Always decode and present the decoded content to the user. Encoded commands are critical evidence.
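On hosts where iconv may be unavailable, Node.js can perform the same decode; the sample payload below is just "whoami" encoded as UTF-16LE Base64 for illustration:

```shell
# Sketch: decode a UTF-16LE -EncodedCommand payload with Node.js instead of iconv.
B64='dwBoAG8AYQBtAGkA'   # example payload: "whoami" encoded as UTF-16LE Base64
node -e 'console.log(Buffer.from(process.argv[1], "base64").toString("utf16le"))' "$B64"
# prints: whoami
```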

Investigation modes

Auto-detected from the workspace mode field:
Alert Triage (ultimate_trigger, default): Dual-hypothesis analysis; evaluate both benign and malicious explanations. Apply the classification framework after 1-3 queries. See references/alert-triage-methodology.md.
SOC Investigation (soc_investigation_trigger): Broader scope, SIEM optional, evidence-first approach.
VM Forensics (vm_forensics_trigger): OSCAR-DFIR framework. Ask the user to run ONE command at a time on the VM. See references/vm-forensics-methodology.md.
Sigma Rules (sigma_rule_trigger): Detection rule engineering. Ask for log samples, write Sigma rules. No SIEM queries needed. See references/sigma-rule-methodology.md.
If the question doesn't match any mode, answer directly using workspace context.

SIEM query rules (ONLY use after schema discovery)

Splunk SPL:
  • Use index and sourcetype FROM THE SCHEMA; never guess
  • Always use relative time: earliest=-60m or earliest=-24h
  • End queries with | head 20
  • NEVER use absolute timestamps
  • Field names MUST match the schema exactly (case-sensitive)
Elastic ESQL:
  • Use the index pattern FROM THE SCHEMA
  • Use == for equality (double equals)
  • Quote keyword values: "4624", not 4624
  • Time: WHERE @timestamp >= NOW() - 1 hour
  • End with | LIMIT 20
  • Field names come from the schema (e.g., event.code, not EventCode)
Sentinel KQL:
  • Use table names FROM THE SCHEMA
  • Use == for equality, has for word match, contains for substring
  • Time: | where TimeGenerated > ago(24h)
  • End with | take 20
Full guide: references/siem-query-guides.md
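Putting the rules together, compliant queries might look like the sketch below; the index, table, and field names are placeholders and must come from your own schema discovery:

```shell
# Sketch: one rule-compliant query per SIEM (all names are placeholders).
SPL='index=win_logs sourcetype=XmlWinEventLog earliest=-24h EventCode=4688 | head 20'
ESQL='FROM logs-* | WHERE @timestamp >= NOW() - 1 hour AND event.code == "4688" | LIMIT 20'
KQL='SecurityEvent | where TimeGenerated > ago(24h) | where EventID == 4688 | take 20'
printf '%s\n' "$SPL" "$ESQL" "$KQL"
```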

Endpoint reference

Method  Path                                   Description
GET     /health                                Health check (no auth)
GET     /me                                    User info
GET     /me/credits                            Credit balance
GET     /workspaces                            List workspaces
GET     /workspaces/:id                        Workspace details + context
POST    /workspaces                            Create workspace
PATCH   /workspaces/:id                        Update workspace
DELETE  /workspaces/:id                        Archive workspace
GET     /workspaces/:wsId/conversations        List conversations
POST    /workspaces/:wsId/conversations        Create conversation
GET     /conversations/:id                     Conversation details
GET     /conversations/:id/messages?limit=N    Message history (max 100)
POST    /conversations/:id/messages            Post message (user/assistant)
PUT     /conversations/:id/messages/:msgId     Edit message content
DELETE  /conversations/:id/messages/:msgId     Delete message
GET     /conversations/:id/context             Get agent context
POST    /conversations/:id/context             Save agent context (overwrite)
PATCH   /conversations/:id/context             Merge-update context
GET     /conversations/:id/verdict             Read verdicts
POST    /conversations/:id/verdict             Write verdict (upserts by eventId)
GET     /conversations/:id/status              Processing status
POST    /workspaces/:wsId/queue                Add alert to queue
GET     /workspaces/:wsId/queue                List queue (?status=pending/completed/all)
GET     /workspaces/:wsId/queue/next           Get next pending alert
PATCH   /queue/:id/claim                       Mark alert as processing
PATCH   /queue/:id/complete                    Mark completed (verdict, escalation, duration, queries)
PATCH   /queue/:id/fail                        Mark alert as failed
PATCH   /queue/:id/progress                    Add/update investigation progress step
GET     /queue/:id/progress                    Get all progress steps
PATCH   /queue/:id/report                      Save investigation report
PATCH   /queue/:id/iocs                        Add/update IOCs (append: true to add without replacing)
PATCH   /queue/:id/mitre                       Add/update MITRE techniques (append: true)
PATCH   /queue/:id/context                     Save agent context (schema, state)
GET     /queue/:id/detail                      Get full investigation detail
DELETE  /queue/:id                             Remove from queue
All endpoints require Authorization: Bearer soc_sk_<key> except /health.
Queue-centric flow: All investigation data (report, IOCs, MITRE, progress, context) goes to the queue item. The Agent Dashboard reads everything from the queue. Conversations are optional/legacy.
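A minimal queue lifecycle might look like the sketch below; the payload fields (title, severity, verdict values) are illustrative assumptions about the API shape:

```shell
# Sketch of the minimal queue lifecycle: submit, claim, complete.
API="https://astute-cormorant-480.convex.site/api/v1"
# KEY, WS_ID, and QUEUE_ID come from the user and from earlier API responses.
SUBMIT='{"title": "Suspicious PowerShell", "severity": "high"}'
VERDICT='{"verdict": "true_positive"}'
# Validate payloads before sending:
echo "$SUBMIT" | node -e 'let d="";process.stdin.on("data",c=>d+=c);process.stdin.on("end",()=>{JSON.parse(d)})'
# 1. Submit:   curl -s -X POST  "$API/workspaces/$WS_ID/queue"  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" -d "$SUBMIT"
# 2. Claim:    curl -s -X PATCH "$API/queue/$QUEUE_ID/claim"    -H "Authorization: Bearer $KEY"
# 3. Complete: curl -s -X PATCH "$API/queue/$QUEUE_ID/complete" -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" -d "$VERDICT"
```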

Error codes

Code            Status  Meaning
bad_request     400     Invalid input (check JSON syntax)
unauthorized    401     Invalid/expired API key
not_found       404     Resource not found
rate_limited    429     Too many requests (60/min standard)
internal_error  500     Server error

Critical rules

  1. SCHEMA FIRST — NO EXCEPTIONS — discover the SIEM schema before ANY investigation query. Never guess index names, sourcetypes, or field names.
  2. Use schema-verified names ONLY — every index, sourcetype, and field must come from schema discovery.
  3. Save schema to context immediately — so you never need to ask again for this workspace.
  4. DEFEND your classifications — only change a verdict when NEW evidence is presented, not because the user disagrees. Restate your evidence and ask for counter-evidence.
  5. Classify the SPECIFIC activity — an alert that fires on legitimate activity is FP even if unrelated malicious activity exists on the same host. Report both, classify separately.
  6. ALWAYS submit to queue first — even when the user pastes an alert directly in the CLI. This ensures the Agent Dashboard tracks every investigation.
  7. Temporal investigation is MANDATORY — always check what happened AFTER the alert event.
  8. Classify EARLY — after 1-3 initial queries, apply the classification framework.
  9. Save context after each major step — enables resume if the session is interrupted.
  10. Save the report to the queue item via PATCH /queue/:id/report so it appears in the Agent Dashboard.
  11. Use Node.js for JSON serialization on Windows — never inline multi-line content in curl -d.
  12. Never fabricate query results — only use data the user has provided.
  13. TP does not equal confirmed malware — True Positive means the alert correctly identified suspicious activity requiring response.
  14. Autonomous mode is OPT-IN ONLY — never activate autonomous mode or mention Chrome unless the user explicitly requests automation. Default is always HITL mode.