A search-aware context compression workflow for agent-studio: combine pnpm hybrid search with token-saver compression, then persist distilled learnings as MemoryRecord entries.
**Install**

```shell
npx skill4agent add oimiragieo/agent-studio token-saver-context-compression
```

**Search**

Use `pnpm search:code "<query>"` for hybrid code search; results carry `[mem:*]` and `[rag:*]` citation markers (full form `[mem:xxxxxxxx]` / `[rag:xxxxxxxx]`). Use `pnpm search:tokens` to estimate a path's token footprint before loading it into context.

**Memory files**

Distilled learnings are routed to memory files by keyword pattern, and the memory index is kept in sync by `.claude/hooks/memory/sync-memory-index.cjs`:

- `gotchas.json`: `gotcha|pitfall|anti-pattern|risk|warning|failure`
- `issues.md`: `issue|bug|error|incident|defect|gap`
- `decisions.md`: `decision|tradeoff|choose|selected|rationale`
- `patterns.json`

**Compression**

```shell
node .claude/skills/token-saver-context-compression/scripts/main.cjs \
  --query "<question>" --mode evidence_aware --limit 20 \
  --fail-on-insufficient-evidence
```

For file-scoped runs with machine-readable output, use `run_skill_workflow.py` with `--output-format json`:

```shell
python .claude/skills/token-saver-context-compression/scripts/run_skill_workflow.py \
  --file <path> --mode evidence_aware --query "<question>" \
  --output-format json --fail-on-insufficient-evidence
```

Keywords: `search`, `compression`, `memory`, `records`, `patterns`, `gotchas`, `issues`, `decisions`, `evidence`

**Related files**

- `.claude/workflows/token-saver-context-compression-skill-workflow.md`
- `.claude/tools/token-saver-context-compression/token-saver-context-compression.cjs`
- `.claude/skills/token-saver-context-compression/commands/token-saver-context-compression.md`
```shell
# Check if you need compression
pnpm search:tokens .claude/lib/memory
# Output: 60 files, 500KB, ~128K tokens ⚠ OVER CONTEXT

# Then compress with a targeted query
node .claude/skills/token-saver-context-compression/scripts/main.cjs \
  --query "how does memory persistence work" --mode evidence_aware --limit 10

# Review what was persisted
cat .claude/context/memory/learnings.md
```

Distilled learnings are persisted to `.claude/context/memory/learnings.md`; issues are tracked in `.claude/context/memory/issues.md`.
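The `search:tokens` report above is consistent with a rough 4-bytes-per-token heuristic (500 KB ≈ 128K tokens). A minimal sketch, assuming that heuristic; the `overContext` helper and its 128K budget are illustrative assumptions, not the skill's actual implementation:

```javascript
// Rough token estimate: ~4 bytes per token (assumption matching the
// quick-start output, where 500KB reports as ~128K tokens).
const BYTES_PER_TOKEN = 4;

function estimateTokens(bytes) {
  return Math.round(bytes / BYTES_PER_TOKEN);
}

// Flag paths whose estimated footprint meets or exceeds the budget.
// The 128K budget is an illustrative value, not a documented limit.
function overContext(bytes, budget = 128000) {
  return estimateTokens(bytes) >= budget;
}
```

Under this sketch, `estimateTokens(512000)` yields 128000, which is why the 500KB directory in the quick-start is flagged `OVER CONTEXT`.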
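The memory-file match patterns can be read as a small routing table. A hedged sketch of that routing, assuming case-insensitive matching; the `routeLearning` helper and the `patterns.json` fallback bucket are illustrative, not part of the skill's API:

```javascript
// Route a distilled learning line to a memory file using the
// keyword patterns listed for gotchas.json, issues.md, and decisions.md.
const ROUTES = [
  { file: "gotchas.json",  pattern: /gotcha|pitfall|anti-pattern|risk|warning|failure/i },
  { file: "issues.md",     pattern: /issue|bug|error|incident|defect|gap/i },
  { file: "decisions.md",  pattern: /decision|tradeoff|choose|selected|rationale/i },
];

function routeLearning(line) {
  const hit = ROUTES.find((route) => route.pattern.test(line));
  // Fallback bucket is an assumption: unmatched learnings go to patterns.json.
  return hit ? hit.file : "patterns.json";
}
```

For example, `routeLearning("Tradeoff: precision over recall")` lands in `decisions.md` because `tradeoff` matches the decisions pattern.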
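Compressed output carries `[mem:*]` / `[rag:*]` citation markers, which can be collected before persisting a learning so its evidence trail survives. A sketch under stated assumptions: the 8-character lowercase-alphanumeric id format is inferred from the `[mem:xxxxxxxx]` marker shape, and `extractCitations` is a hypothetical helper:

```javascript
// Collect [mem:...] and [rag:...] citation ids from compressed output.
// Id format ([a-z0-9]{8}) is an assumption based on the documented
// [mem:xxxxxxxx] / [rag:xxxxxxxx] marker shape.
function extractCitations(text) {
  const out = { mem: [], rag: [] };
  for (const m of text.matchAll(/\[(mem|rag):([a-z0-9]{8})\]/g)) {
    out[m[1]].push(m[2]); // m[1] = marker kind, m[2] = citation id
  }
  return out;
}
```

Keeping the ids alongside each persisted learning makes `evidence_aware` runs auditable: every line in `learnings.md` can point back to the memory or RAG source it was distilled from.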