Set up a new autoresearch experiment interactively. Collects domain, target file, eval command, metric, direction, and evaluator.
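The `metric` and `direction` settings collected here determine how candidate runs are compared. A minimal sketch of that comparison logic (hypothetical illustration only — `is_improvement` is not part of the skill's code):

```python
def is_improvement(new: float, best: float, direction: str) -> bool:
    """Return True if `new` beats `best` under the given direction."""
    if direction == "lower":   # e.g. p50_ms latency: smaller is better
        return new < best
    if direction == "higher":  # e.g. test pass rate: larger is better
        return new > best
    raise ValueError(f"unknown direction: {direction!r}")
```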
Install:

```
npx skill4agent add alirezarezvani/claude-skills setup
```

Usage:

```
/ar:setup                     # Interactive mode
/ar:setup engineering api-speed src/api.py "pytest bench.py" p50_ms lower
/ar:setup --list              # Show existing experiments
/ar:setup --list-evaluators   # Show available evaluators
```

Under the hood this runs:

```
python {skill_path}/scripts/setup_experiment.py \
  --domain {domain} --name {name} \
  --target {target} --eval "{eval_cmd}" \
  --metric {metric} --direction {direction} \
  [--evaluator {evaluator}] [--scope {scope}]
```

`setup_experiment.py` can also be invoked directly:

```
# Show existing experiments
python {skill_path}/scripts/setup_experiment.py --list

# Show available evaluators
python {skill_path}/scripts/setup_experiment.py --list-evaluators
```

| Name | Metric | Use Case |
|---|---|---|
| | | Function/API execution time |
| | | File, bundle, Docker image size |
| | | Test suite pass percentage |
| | | Build/compile/Docker build time |
| | | Peak memory during execution |
| | | Headlines, titles, descriptions |
| | | System prompts, agent instructions |
| | | Social posts, ad copy, emails |
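As a hypothetical illustration of a timing-style evaluator (the name `measure_p50_ms` and its interface are assumptions, not the skill's actual evaluator API), a median-latency metric like the `p50_ms` in the usage example could be computed along these lines:

```python
import statistics
import time

def measure_p50_ms(fn, runs: int = 20) -> float:
    """Run `fn` repeatedly and return the median wall time in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)
```

A real evaluator would wrap the configured `eval_cmd` rather than a Python callable, and write the metric where the experiment runner expects it.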
After setup, run the experiment:

```
/ar:run {domain}/{name}
/ar:loop {domain}/{name}
```