Records research provenance as a post-task epilogue. At the end of a coding or research session it scans the conversation history to extract decisions, experiments, dead ends, claims, heuristics, and pivots, and writes them into the `ara/` directory with user-vs-AI provenance tags. Run it only as a session epilogue, never during execution, so that it maintains a faithful, auditable trace of how the research project actually evolved.
```shell
npx skill4agent add orchestra-research/ai-research-skills ara-research-manager
```

## Event Types

| Event Type | Signals | Routes To |
|---|---|---|
| Decision | User chose between alternatives | `ara/trace/exploration_tree.yaml` |
| Experiment | Test ran, benchmark completed, quantitative result | `ara/trace/exploration_tree.yaml` |
| Dead End | Approach abandoned, "doesn't work", reverted | `ara/trace/exploration_tree.yaml` |
| Pivot | Major direction change based on evidence | `ara/trace/exploration_tree.yaml` |
| Claim | Assertion about the system, hypothesis stated | `ara/logic/claims.md` |
| Heuristic | Implementation trick, workaround, "the trick is" | `ara/logic/solution/heuristics.md` |
| AI Action | Agent wrote code, ran command, created file | Session record only |
| Observation | Interesting but unclassified | `ara/staging/observations.yaml` |
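The Signals column is descriptive, and in practice classification is a judgment call. As an illustrative sketch only (not the skill's actual implementation), a keyword-based first pass over an utterance might look like this:

```python
import re

# Illustrative signal patterns per event type; the real skill relies on
# model judgment, so these keyword heuristics are only a sketch.
SIGNALS = {
    "dead_end": re.compile(r"doesn't work|abandon|revert", re.I),
    "heuristic": re.compile(r"the trick is|workaround", re.I),
    "pivot": re.compile(r"\bpivot\b|change direction", re.I),
    "decision": re.compile(r"let's use|we'll go with|chose", re.I),
    "experiment": re.compile(r"benchmark|ran the test|results?:", re.I),
    "claim": re.compile(r"hypothesis|we assert|should hold", re.I),
}

def classify_event(utterance: str) -> str:
    """Return the first matching event type, else 'observation'."""
    for event_type, pattern in SIGNALS.items():
        if pattern.search(utterance):
            return event_type
    return "observation"
```

Anything that matches no signal falls through to Observation, which is exactly the staging behavior described below.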
## Provenance Tags

| Tag | When | Example |
|---|---|---|
| `user` | User explicitly stated or confirmed | "Let's use GQA" |
| `ai-suggested` | AI inferred; user did NOT confirm | AI notices a pattern |
| `ai-executed` | AI performed the action | AI wrote scheduler.py |
| `user-revised` | AI suggested, user corrected | "No, threshold is 90%" |
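The four tags can be thought of as a small decision function over who originated an item and what the user did with it. This is a hedged sketch; the function and flag names are illustrative, not part of the skill:

```python
def provenance_tag(origin, *, is_action=False,
                   user_confirmed=False, user_corrected=False):
    """Map an item's origin and the user's reaction to a provenance tag.

    Sketch under assumed semantics: origin is "user" or "ai"; an AI
    action is always ai-executed; user correction beats confirmation.
    """
    if origin == "user":
        return "user"
    if is_action:
        return "ai-executed"
    if user_corrected:
        return "user-revised"
    return "user" if user_confirmed else "ai-suggested"
```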
When the user later confirms an `ai-suggested` item, upgrade its tag to `user`.

## Directory Layout

```
ara/
  PAPER.md                    # Root manifest + layer index
  logic/                      # What & Why
    problem.md                # Problem definition + gaps
    claims.md                 # Falsifiable assertions + proof refs
    concepts.md               # Term definitions
    experiments.md            # Experiment plans (declarative)
    solution/
      architecture.md         # System design
      algorithm.md            # Math + pseudocode
      constraints.md          # Boundary conditions
      heuristics.md           # Tricks + rationale + sensitivity
      related_work.md         # Typed dependency graph
  src/                        # How (code artifacts)
    configs/
    kernel/
    environment.md
  trace/                      # Journey
    exploration_tree.yaml     # Research DAG
    sessions/
      session_index.yaml      # Master session index
      YYYY-MM-DD_NNN.yaml     # Individual session records
  evidence/                   # Raw Proof
    README.md
    tables/
    figures/
  staging/                    # Unclassified observations
    observations.yaml
```

## exploration_tree.yaml

Nodes live under a top-level `tree:` key and nest via `children:`; a cross-edge that nesting cannot express is recorded with `also_depends_on: [N{XX}]`.

```yaml
tree:
  - id: N01
    type: question
    title: "{root research question}"
    provenance: user
    timestamp: "YYYY-MM-DDTHH:MM"
    description: >
      {what is being explored}
    children:
      - id: N02
        type: experiment
        title: "{what was tested}"
        provenance: ai-executed
        timestamp: "YYYY-MM-DDTHH:MM"
        result: >
          {what happened — include numbers}
        evidence: [C{XX}, "{figure/table refs}"]
        children:
          - id: N03
            type: decision
            title: "{choice made based on N02 results}"
            provenance: user
            timestamp: "YYYY-MM-DDTHH:MM"
            choice: >
              {what was chosen and why}
            alternatives:
              - "{option not chosen}"
            evidence: >
              {what motivated this — reference parent nodes}
            children:
              - id: N04
                type: dead_end
                title: "{approach that failed}"
                provenance: user
                timestamp: "YYYY-MM-DDTHH:MM"
                hypothesis: >
                  {what was expected to work}
                failure_mode: >
                  {why it failed}
                lesson: >
                  {what was learned}
              - id: N05
                type: experiment
                title: "{alternative that worked}"
                also_depends_on: [N02]  # cross-edge: also informed by N02
                provenance: ai-executed
                timestamp: "YYYY-MM-DDTHH:MM"
                result: >
                  {outcome}
                evidence: [C{XX}]
      - id: N06
        type: dead_end
        title: "{sibling approach tried from N01}"
        provenance: user
        timestamp: "YYYY-MM-DDTHH:MM"
        hypothesis: >
          {what was expected}
        failure_mode: >
          {why it failed}
        lesson: >
          {what was learned — motivated N02's direction}
  - id: N07
    type: pivot
    title: "{new top-level research thread}"
    provenance: user
    timestamp: "YYYY-MM-DDTHH:MM"
    from: "{previous direction}"
    to: "{new direction}"
    trigger: "{what caused the change}"
```

## Node Types

| Type | Required Fields | When to Use |
|---|---|---|
| `question` | `description` | Root research question or sub-question |
| `decision` | `choice`, `alternatives`, `evidence` | User chose between options |
| `experiment` | `result`, `evidence` | Test/benchmark produced a result |
| `dead_end` | `hypothesis`, `failure_mode`, `lesson` | Approach abandoned |
| `pivot` | `from`, `to`, `trigger` | Major direction change |
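A consumer of the tree can recover both the nesting and the cross-edges with a short depth-first walk. This sketch assumes the YAML has already been parsed (for example with PyYAML's `safe_load`) into the list-of-dicts shape shown above; the helper names are illustrative:

```python
def walk_tree(nodes, depth=0):
    """Depth-first walk over parsed exploration-tree nodes.

    Yields (depth, node) pairs. Nesting comes from 'children:';
    'also_depends_on' entries are cross-edges only and are not descended.
    """
    for node in nodes:
        yield depth, node
        yield from walk_tree(node.get("children", []), depth + 1)

def dead_ends(nodes):
    """Collect titles of abandoned branches, wherever they sit in the DAG."""
    return [n["title"] for _, n in walk_tree(nodes) if n["type"] == "dead_end"]
```

A `lesson:` field on each dead end makes this traversal useful as a "what not to retry" digest at session start.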
Each claim in `ara/logic/claims.md` uses this template:

## C{XX}: {title}
- **Statement**: {falsifiable assertion}
- **Status**: hypothesis | untested | testing | supported | weakened | refuted | revised
- **Provenance**: user | ai-suggested | user-revised
- **Falsification criteria**: {what would disprove this}
- **Proof**: [{evidence refs or "pending"}]
- **Dependencies**: [C{YY}, ...]
- **Tags**: {comma-separated}

Each heuristic in `ara/logic/solution/heuristics.md` uses this template:

## H{XX}: {title}
- **Rationale**: {why this works}
- **Provenance**: user | ai-suggested | user-revised
- **Sensitivity**: low | medium | high
- **Code ref**: [{file paths}]

Staged observations in `ara/staging/observations.yaml` use this entry format:

- id: O{XX}
timestamp: "YYYY-MM-DDTHH:MM"
provenance: user | ai-suggested | ai-executed
content: "{raw observation}"
context: "{what was happening}"
potential_type: claim | heuristic | decision | unknown
  promoted: false

Each session record in `ara/trace/sessions/YYYY-MM-DD_NNN.yaml` follows:

session:
id: "YYYY-MM-DD_NNN"
timestamp: "YYYY-MM-DDTHH:MM"
summary: "{one-line summary of what happened}"
events_logged:
- type: decision | experiment | dead_end | pivot | claim | heuristic | observation
id: "{N/C/H/O}{XX}"
provenance: user | ai-suggested | ai-executed | user-revised
summary: "{what}"
ai_actions:
- action: "{what AI did}"
provenance: ai-executed
files_changed: ["{paths}"]
claims_touched:
- id: C{XX}
action: created | advanced | weakened | confirmed
provenance: user | ai-suggested
open_threads:
- "{what needs follow-up}"
ai_suggestions_pending:
    - "{unconfirmed AI suggestions from this session}"

## Initialization

```shell
mkdir -p ara/{logic/solution,src/{configs,kernel},trace/sessions,evidence/{tables,figures},staging}
```

Seed files:

- `ara/PAPER.md`
- `ara/trace/sessions/session_index.yaml` seeded with `sessions: []`
- `ara/trace/exploration_tree.yaml` seeded with `tree: []`
- `ara/staging/observations.yaml` seeded with `observations: []`
- `ara/logic/claims.md` seeded with `# Claims`
- `ara/logic/problem.md` seeded with `# Problem`
- `ara/logic/solution/heuristics.md` seeded with `# Heuristics`
- `ara/evidence/README.md` seeded with `# Evidence Index`

Session-end rules:

- Route unclassified observations to `staging/observations.yaml`, tagged `ai-suggested` when AI-inferred.
- Store raw proof (tables, figures) under `evidence/`.
- Mark a contradicted claim with `<!-- CONFLICT: contradicts C{XX} -->`; mark superseded entries `stale: true`.
- Write the session record to `ara/trace/sessions/YYYY-MM-DD_NNN.yaml` and append it to `ara/trace/sessions/session_index.yaml`.
- Surface any pending `ai-suggested` items for user confirmation.
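The `YYYY-MM-DD_NNN.yaml` filename pattern implies per-day numbering of session records. A minimal sketch for picking the next session id, assuming `NNN` restarts at 001 each day (the function name is illustrative, not part of the skill):

```python
from datetime import date
from pathlib import Path

def next_session_id(sessions_dir, today=None):
    """Next 'YYYY-MM-DD_NNN' session id for ara/trace/sessions/.

    Sketch only: the doc specifies the filename pattern; per-day
    incrementing of NNN is an assumption.
    """
    stem = (today or date.today()).isoformat()
    existing = sorted(Path(sessions_dir).glob(stem + "_*.yaml"))
    if not existing:
        return stem + "_001"
    last = int(existing[-1].stem.rsplit("_", 1)[1])
    return "%s_%03d" % (stem, last + 1)
```

Zero-padding keeps lexicographic `glob` ordering equal to numeric ordering, which is what makes `existing[-1]` the latest record.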