# Agentica server + Claude proxy setup

Architecture, startup sequence, and debugging notes.
Install this skill:

```shell
npx skill4agent add parcadei/continuous-claude-v3 agentica-server
```

## Architecture

```
Agentica SDK (client code)
    | S_M_BASE_URL=http://localhost:2345
    v
ClientSessionManager
    |
    v
Agentica Server (agentica-server)
    | INFERENCE_ENDPOINT_URL=http://localhost:8080/v1/chat/completions
    v
Claude Proxy (claude_proxy.py)
    |
    v
Claude CLI (claude -p)
```

## Environment variables

| Variable | Set By | Used By | Purpose |
|---|---|---|---|
| `INFERENCE_ENDPOINT_URL` | Human | agentica-server | Where server sends LLM inference requests |
| `S_M_BASE_URL` | Human | Agentica SDK client | Where SDK connects to session manager |
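For quick debugging it helps to resolve both endpoints in one place, falling back to the local-dev defaults shown in the diagram. This helper is an illustrative sketch, not part of the Agentica SDK:

```python
import os


def agentica_endpoints() -> dict:
    """Resolve the two endpoints, falling back to local-dev defaults.

    Illustrative helper (not an SDK function): S_M_BASE_URL is read by the
    SDK client; INFERENCE_ENDPOINT_URL is read by agentica-server, but
    echoing it is handy when diagnosing a mismatched setup.
    """
    return {
        "session_manager": os.environ.get(
            "S_M_BASE_URL", "http://localhost:2345"
        ),
        "inference": os.environ.get(
            "INFERENCE_ENDPOINT_URL",
            "http://localhost:8080/v1/chat/completions",
        ),
    }
```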
## Startup sequence

Start the Claude proxy first, then the Agentica server, then your client script:

```shell
# 1. Start the Claude proxy
uv run python scripts/agentica/claude_proxy.py --port 8080

# 2. Start the Agentica server, pointing it at the proxy
cd workspace/agentica-research/agentica-server
INFERENCE_ENDPOINT_URL=http://localhost:8080/v1/chat/completions uv run agentica-server --port 2345

# 3. Run your client script against the server
S_M_BASE_URL=http://localhost:2345 uv run python scripts/agentica/your_script.py
```

## Health checks

```shell
# Claude proxy health
curl http://localhost:8080/health

# Agentica server health
curl http://localhost:2345/health
```

## Gotchas

- The proxy streams responses in OpenAI-compatible SSE format: each event is a `data: {"choices": [...]}` line followed by a blank line (`\n\n`).
- `ModuleNotFoundError: No module named 'agentica_server'` means `uv run agentica-server` was launched from the wrong directory; `cd workspace/agentica-research/agentica-server` first.
- The Claude CLI will not use tools unless they are allow-listed via `--allowedTools`: `claude -p ... --allowedTools Read Write Edit Bash`.
- CRITICAL: Use ACTUAL tools. Never DESCRIBE using tools.

## Key files

| File | Purpose |
|---|---|
| `scripts/agentica/claude_proxy.py` | OpenAI-compatible proxy with SSE streaming |
| `workspace/agentica-research/agentica-server` | Local agentica-server installation |
| | Multi-agent pattern documentation |
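The proxy's SSE framing (one `data: {...}` line per event, events separated by a blank line) can be consumed with a few lines of standard-library Python. A minimal sketch, assuming the chunks follow the OpenAI streaming shape with `choices[0].delta.content` and a terminal `data: [DONE]` sentinel:

```python
import json
from typing import Iterator


def iter_sse_data(stream_text: str) -> Iterator[dict]:
    """Yield the JSON payload of each `data: {...}` SSE event.

    Events are separated by a blank line; the `data: [DONE]` sentinel
    (OpenAI streaming convention) is skipped.
    """
    for event in stream_text.split("\n\n"):
        for line in event.splitlines():
            if line.startswith("data: "):
                payload = line[len("data: "):]
                if payload.strip() == "[DONE]":
                    continue
                yield json.loads(payload)


def collect_content(stream_text: str) -> str:
    """Concatenate choices[0].delta.content across all streamed chunks."""
    parts = []
    for chunk in iter_sse_data(stream_text):
        delta = chunk["choices"][0].get("delta", {})
        parts.append(delta.get("content", ""))
    return "".join(parts)
```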
## Verification

```shell
# 1. Verify proxy responds
curl http://localhost:8080/health

# 2. Verify server responds
curl http://localhost:2345/health

# 3. Test inference through proxy
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"claude","messages":[{"role":"user","content":"Say hello"}]}'
```
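The same checks can be scripted with only the standard library. The endpoint URLs and `claude` model name are taken from the curl commands above; the function names are illustrative:

```python
import json
import urllib.request

# Local-dev defaults from the curl commands above
PROXY_URL = "http://localhost:8080"   # Claude proxy
SERVER_URL = "http://localhost:2345"  # Agentica server


def build_chat_request(prompt: str, model: str = "claude") -> bytes:
    """JSON body for an OpenAI-compatible /v1/chat/completions call."""
    return json.dumps(
        {"model": model, "messages": [{"role": "user", "content": prompt}]}
    ).encode("utf-8")


def check_health(base_url: str, timeout: float = 5.0) -> bool:
    """True if GET <base_url>/health answers with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False


# Usage (with both processes running):
#   check_health(PROXY_URL) and check_health(SERVER_URL)
#   req = urllib.request.Request(
#       f"{PROXY_URL}/v1/chat/completions",
#       data=build_chat_request("Say hello"),
#       headers={"Content-Type": "application/json"},
#   )
#   urllib.request.urlopen(req).read()
```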