Token optimization best practices for MCP server and tool interactions. Minimizes token consumption while maintaining effectiveness.

USE WHEN: the user mentions "token usage", "optimize tokens", "reduce API calls", "MCP efficiency", or asks about "how to use less tokens", "MCP best practices", "limit output size", or "efficient queries".

DO NOT USE FOR:
- Code optimization: use `performance` instead
- Text compression: this skill is about API usage patterns
- Cost optimization (infrastructure): use cloud/DevOps skills
Install:

```
npx skill4agent add claude-dev-suite/claude-dev-suite token-optimization
```

Deep Knowledge: Use `mcp__documentation__fetch_docs` with technology: `token-optimization` for comprehensive documentation.
| Principle | Description |
|---|---|
| Lazy Loading | Load information only when strictly necessary |
| Minimal Output | Request only needed data, use `limit` and `compact` options |
| Progressive Detail | Start with overview/summary, drill down only if needed |
| Cache First | Check if information is already in context before external calls |
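The cache-first and lazy-loading principles can be sketched as a thin wrapper around tool calls. This is a minimal illustration; `call_tool` and the tool name are hypothetical stand-ins for your MCP client's dispatch mechanism:

```python
import functools
import json

# Hypothetical MCP dispatcher; replace with your client's actual call mechanism.
def call_tool(name: str, **params) -> str:
    return json.dumps({"tool": name, "params": params})

@functools.lru_cache(maxsize=128)
def cached_call(name: str, params_json: str) -> str:
    """Cache first: identical tool calls hit the cache, not the server."""
    return call_tool(name, **json.loads(params_json))

def query(name: str, **params) -> str:
    """Normalize params so equivalent queries share one cache entry."""
    return cached_call(name, json.dumps(params, sort_keys=True))

first = query("get_schema", table="users", compact=True)
second = query("get_schema", compact=True, table="users")  # same entry, no new call
```

Sorting the parameter keys before hashing means argument order does not defeat the cache, so repeated lookups for the same schema cost zero extra tokens.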

| Anti-Pattern | Why It's Bad | Token-Efficient Solution |
|---|---|---|
| `SELECT *` | Returns unnecessary columns | Specify exact columns needed |
| No LIMIT clause | Returns entire dataset | Always add LIMIT (e.g., 100) |
| Full schema requests | Returns massive specs | Use compact/summary formats (e.g., `get_schema(compact=true)`) |
| Recursive documentation fetch | Fetches entire doc tree | Use targeted search with a result cap (e.g., `search_docs(maxResults=3)`) |
| Fetching full logs | Returns thousands of lines | Use `tail_logs`/`find_errors` with line limits |
| Copy-paste documentation | Duplicates content | Summarize and reference, don't quote verbatim |
| No pagination | Returns all results at once | Use offset/limit for large datasets |
| Full API schema exploration | Multi-MB specifications | Get endpoint list first, details on-demand |
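The "no pagination" anti-pattern above can be avoided with a small offset/limit scheme. A sketch, assuming a hypothetical paged tool (`fetch_page` and the in-memory `ROWS` list are illustrative, not a real API):

```python
# Illustrative data source standing in for an MCP tool that supports paging.
ROWS = [{"id": i, "name": f"user{i}"} for i in range(1523)]

def fetch_page(offset: int = 0, limit: int = 100) -> list[dict]:
    """GOOD: bounded request instead of returning all 1523 rows at once."""
    return ROWS[offset:offset + limit]

# Progressive detail: pull one small page, drill further only if needed.
page = fetch_page(offset=0, limit=20)
print(f"Found {len(ROWS)} rows. First {len(page)}. Use offset={len(page)} for next page.")
```

Each response stays bounded regardless of table size, and the offset hint tells the caller exactly how to continue without re-fetching anything.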

| Issue | Check | Solution |
|---|---|---|
| Large MCP response | Output size > 2000 tokens | Add `limit`/`compact` parameters to the call |
| Repeated API calls | Calling same tool multiple times | Cache results in conversation context |
| Slow context buildup | Too many tool calls | Batch related queries, use more specific tools |
| Unnecessary documentation fetch | Info already known | Check skill files first, fetch docs as last resort |
| Full table scan results | Database query returns too much | Add WHERE clause and LIMIT |
| Verbose error logs | Full stack traces repeated | Summarize errors, reference line numbers |
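For the "verbose error logs" row, summarizing by frequency instead of pasting full stack traces might look like this (the log lines and regex are illustrative, not a specific tool's output):

```python
from collections import Counter
import re

# Illustrative raw log; in practice this would come from a log tool's output.
RAW_LOG = """\
2024-01-01 ERROR ConnectionTimeout: db pool exhausted
2024-01-01 ERROR ValidationError: missing field 'email'
2024-01-01 ERROR ConnectionTimeout: db pool exhausted
2024-01-01 ERROR ConnectionTimeout: upstream unreachable
"""

# Summarize: count error classes instead of repeating full traces.
counts = Counter(re.findall(r"ERROR (\w+):", RAW_LOG))
for name, n in counts.most_common(5):
    print(f"{name}: {n} occurrences")
```

A few summary lines carry the same diagnostic signal as hundreds of repeated traces, at a fraction of the token cost.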

**Database queries**

```sql
-- BAD: Query without limits
SELECT * FROM users

-- GOOD: Query with filters and limits
SELECT id, name, email FROM users WHERE active = true LIMIT 100
```

Quick reference: `execute_query` with a `limit`, `get_schema(compact=true)`, `describe_table`, `explain_query`.

**API schemas**

```
-- BAD: Full schema
get_api_schema(format="full")

-- GOOD: Summary only for overview
get_api_schema(format="summary")

-- GOOD: Path list with limit
list_api_paths(limit=50)

-- GOOD: Single endpoint details
get_api_endpoint_details(path="/users/{id}", method="GET")
```

Quick reference: `get_api_schema(format="summary")`, `list_api_paths(limit=50)`, `get_api_models(compact=true)`, `search_api(limit=10)`.

**Documentation**

```
-- BAD: Entire document
fetch_docs(topic="react")

-- GOOD: Targeted search
search_docs(query="useEffect cleanup", maxResults=3)
```

Quick reference: prefer `search_docs(maxResults=3)` over `fetch_docs`.

**Logs**

```
-- BAD: All logs
parse_logs(file="/var/log/app.log")

-- GOOD: Recent errors only
find_errors(file="/var/log/app.log", limit=50)

-- GOOD: Tail for live debugging
tail_logs(file="/var/log/app.log", lines=50)
```

Quick reference: `tail_logs(lines=50)`, `find_errors(limit=50)`, `parse_logs(limit=200)`.

**Security scans**: prefer `scan_dependencies` and `scan_secrets` over `scan_all`.

**Code analysis**: `analyze_complexity(path="src/specific/file.ts")`, `find_duplicates(minLines=10)`, and `code_metrics` with `limit` and `compact=true`.

**Response format examples**

Found 1523 rows. First 20:
| id | name | status |
|----|------|--------|
| 1 | ... | active |
...
Use offset=20 for next page.

API has 45 endpoints. Summary by tag:
- users: 8 endpoints
- auth: 5 endpoints
- products: 12 endpoints
...
Use `get_api_endpoint_details` for specifics.

Found 234 errors in last hour. Top 5 by frequency:
1. ConnectionTimeout: 89 occurrences
2. ValidationError: 45 occurrences
...
Use `tail_logs` or `parse_logs` with filters for details.

Deep Knowledge: Use `mcp__documentation__fetch_docs` with technology: `token-optimization` for advanced optimization techniques.