Automatically intercepts and optimizes prompts using the prompt-learning MCP server. Learns from performance over time via embedding-indexed history. Uses APE, OPRO, DSPy patterns. Activate on "optimize prompt", "improve this prompt", "prompt engineering", or ANY complex task request. Requires prompt-learning MCP server. NOT for simple questions (just answer them), NOT for direct commands (just execute them), NOT for conversational responses (no optimization needed).
Install: `npx skill4agent add erichowens/some_claude_skills automatic-stateful-prompt-improver`

Tool: `mcp__prompt-learning__optimize_prompt`

1. INTERCEPT the user's request
2. CALL: mcp__prompt-learning__optimize_prompt
- prompt: [user's original request]
- domain: [inferred domain]
- max_iterations: [3-20 based on complexity]
3. RECEIVE: optimized prompt + improvement details
4. INFORM user briefly: "I've refined your request for [reason]"
5. PROCEED with the OPTIMIZED version

After the response is delivered:

1. ASSESS: Did the response achieve the goal?
2. CALL: mcp__prompt-learning__record_feedback
- prompt_id: [from optimization response]
- success: [true/false]
- quality_score: [0.0-1.0]
3. This enables future retrievals to learn from outcomes

Choosing `max_iterations`:

| Factor | Low (3-5) | Medium (5-10) | High (10-20) |
|---|---|---|---|
| Complexity | Simple | Multi-step | Agent/pipeline |
| Ambiguity | Clear | Some | Underspecified |
| Domain | Known | Moderate | Novel |
| Stakes | Low | Moderate | Critical |
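The factor matrix above can be collapsed into a scoring heuristic for picking `max_iterations`. A minimal sketch; the per-factor scores and band interpolation are illustrative assumptions, not part of the skill:

```python
# Score each factor Low=0 / Medium=1 / High=2, then map the total
# into the 3-20 iteration range from the table above.
FACTOR_LEVELS = {"low": 0, "medium": 1, "high": 2}

def max_iterations(complexity: str, ambiguity: str, domain: str, stakes: str) -> int:
    total = sum(FACTOR_LEVELS[f.lower()] for f in (complexity, ambiguity, domain, stakes))
    # Interpolate within the band the total falls into (0-8 overall).
    if total <= 2:
        lo, hi, t0, t1 = 3, 5, 0, 2     # all-low corner of the table
    elif total <= 5:
        lo, hi, t0, t1 = 5, 10, 3, 5    # mixed / medium
    else:
        lo, hi, t0, t1 = 10, 20, 6, 8   # agent/pipeline territory
    return round(lo + (hi - lo) * (total - t0) / (t1 - t0))
```

All-low factors yield the floor of 3 iterations; all-high yield the cap of 20.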

| Scenario | Expected improvement | Iterations |
|---|---|---|
| Simple task | 10-20% | 3-5 |
| Complex reasoning | 20-40% | 10-15 |
| Agent/pipeline | 30-50% | 15-20 |
| With history | +10-15% bonus | Varies |
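End to end, the intercept, optimize, execute, and feedback steps look roughly like the following. This is a sketch: `call_tool` and `execute` are placeholders for whatever MCP invocation and task execution your runtime provides; only the tool names and argument names come from the workflow above.

```python
def optimize_and_run(call_tool, execute, user_request: str):
    """Sketch of the intercept -> optimize -> execute -> feedback loop."""
    # Step 2: ask the prompt-learning server to optimize the raw request.
    result = call_tool("mcp__prompt-learning__optimize_prompt", {
        "prompt": user_request,
        "domain": "general",   # would be inferred from the request
        "max_iterations": 10,  # scaled by complexity (see table above)
    })

    # Steps 4-5: inform the user briefly, then run the optimized version.
    print("I've refined your request for clarity.")
    response = execute(result["optimized_prompt"])

    # Feedback loop: record the outcome so future retrievals can learn.
    succeeded = response is not None
    call_tool("mcp__prompt-learning__record_feedback", {
        "prompt_id": result["prompt_id"],
        "success": succeeded,
        "quality_score": 1.0 if succeeded else 0.0,  # real scoring would grade quality
    })
    return response
```

Keeping the feedback call in the same code path as the optimization is what makes the history stateful: every `prompt_id` gets an outcome attached.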
| Anti-pattern | Why it's wrong | Instead |
|---|---|---|
| Prompt becomes overly complex with many constraints | Causes brittleness, model confusion, token waste | Apply Occam's Razor: the simplest sufficient prompt wins |
| Focusing on templates rather than task understanding | Templates don't generalize; understanding does | Focus on WHAT the task requires, not HOW to format it |
| Multiple rewrites without tracking improvements | Can't know if changes help without metrics | Define success criteria before optimizing |
| Assuming the model can't do things it can | Over-scaffolding wastes tokens | Test capabilities before heavy prompting |
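One anti-pattern above is rewriting without tracked improvements; the antidote is to pin success criteria before touching the prompt. A minimal sketch of that discipline, with hypothetical criteria:

```python
def pass_rate(response: str, criteria) -> float:
    """Fraction of predefined criteria a response satisfies."""
    return sum(1 for check in criteria if check(response)) / len(criteria)

# Define criteria BEFORE optimizing; these two are illustrative examples.
criteria = [
    lambda r: "summary" in r.lower(),  # contains a summary section
    lambda r: len(r.split()) <= 200,   # stays within the length budget
]

def keep_rewrite(baseline_resp: str, rewrite_resp: str, criteria) -> bool:
    # Only accept a rewrite that scores at least as well as the baseline.
    return pass_rate(rewrite_resp, criteria) >= pass_rate(baseline_resp, criteria)
```

Without a fixed `criteria` list, "the new prompt feels better" is the only signal, which is exactly the trap the table warns about.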
| File | Contents |
|---|---|
| | APE, OPRO, CoT, instruction rewriting, constraint engineering |
| | Warm start, embedding retrieval, MCP setup, drift detection |
| | Decision matrices, complexity scoring, convergence algorithms |