Found 1,294 Skills
Battle-tested PyTorch training recipes for all domains — LLMs, vision, diffusion, medical imaging, protein/drug discovery, spatial omics, genomics. Covers training loops, optimizer selection (AdamW, Muon), LR scheduling, mixed precision, debugging, and systematic experimentation. Use when training or fine-tuning neural networks, debugging loss spikes or OOM, choosing architectures, or optimizing GPU throughput.
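As a minimal illustration of the kind of recipe this skill covers, here is a sketch of one training step combining AdamW, a cosine LR schedule, gradient clipping, and mixed precision. The model, data, and hyperparameters are placeholders, not values prescribed by the skill.

```python
# Sketch only: model, loader, and hyperparameters are illustrative placeholders.
import torch
from torch import nn
from torch.amp import autocast, GradScaler

model = nn.Linear(512, 10).cuda()          # stand-in for a real network
loader = [(torch.randn(32, 512), torch.randint(0, 10, (32,))) for _ in range(100)]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=0.01)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=len(loader))
scaler = GradScaler()                       # loss scaling for fp16
loss_fn = nn.CrossEntropyLoss()

for x, y in loader:
    x, y = x.cuda(), y.cuda()
    optimizer.zero_grad(set_to_none=True)
    with autocast(device_type="cuda", dtype=torch.float16):  # mixed precision
        loss = loss_fn(model(x), y)
    scaler.scale(loss).backward()
    scaler.unscale_(optimizer)              # so grad clipping sees true gradients
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
    scaler.step(optimizer)
    scaler.update()
    scheduler.step()
```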
Intercept and debug HTTP traffic from any CLI, service, or script using HTTP Toolkit. Use when you need to inspect LLM API calls, backend requests, auth flows, or debug network-level issues across any language or runtime.
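As a rough sketch of how traffic from a script ends up in HTTP Toolkit, the example below routes a Python requests call through a local proxy. The proxy port and CA certificate path are assumptions based on a typical HTTP Toolkit setup; use the values the app actually displays.

```python
# Sketch: assumes HTTP Toolkit's proxy is on localhost:8000 and that its
# CA certificate has been exported to the path below (both shown in the app).
import requests

PROXY = "http://127.0.0.1:8000"            # port shown in HTTP Toolkit (assumed)
CA_CERT = "/path/to/http-toolkit-ca.pem"   # placeholder path

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",   # example LLM API call
    json={"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "hi"}]},
    headers={"Authorization": "Bearer sk-..."},      # placeholder key
    proxies={"http": PROXY, "https": PROXY},
    verify=CA_CERT,                        # trust the interception CA
)
print(resp.status_code)  # full request/response now visible in HTTP Toolkit
```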
Enthu.AI platform help — contact center conversation intelligence with auto QA scorecards, agent coaching, compliance monitoring, and speech analytics. Use when setting up Enthu.AI QA scorecards for call center agents, when calls are not being scored or transcribed correctly, when agents are not seeing coaching insights from their calls, when the Enthu.AI integration with Aircall or RingCentral is not syncing, when comparing Enthu.AI vs Gong or CallMiner for contact center QA, or when configuring sentiment analysis and keyword tracking. Do NOT use for building a general coaching program (use /sales-coaching) or reviewing a specific call transcript (use /sales-call-review).
Test PydanticAI agents using TestModel, FunctionModel, VCR cassettes, and inline snapshots. Use when writing unit tests, mocking LLM responses, or recording API interactions.
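A minimal sketch of the TestModel pattern, assuming a recent pydantic-ai release (the result attribute is `.output` in newer versions, `.data` in older ones); the agent definition and assertion are illustrative.

```python
# Sketch: unit-testing an agent without hitting a real LLM.
from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel

agent = Agent("openai:gpt-4o", system_prompt="Answer concisely.")  # example agent

def test_agent_runs_without_llm():
    # TestModel generates structured stub responses locally, so no API key
    # or network access is needed.
    with agent.override(model=TestModel()):
        result = agent.run_sync("What is the capital of France?")
    assert result.output  # `.data` in older pydantic-ai releases
```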
Retrieval-Augmented Generation (RAG) system design patterns, chunking strategies, embedding models, retrieval techniques, and context assembly. Use when designing RAG pipelines, improving retrieval quality, or building knowledge-grounded LLM applications.
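As one concrete example of a chunking strategy this skill covers, here is a sketch of fixed-size chunking with overlap; the window and overlap sizes are illustrative, not recommendations.

```python
# Sketch: fixed-size character chunking with overlap, the simplest RAG strategy.
def chunk_text(text: str, chunk_size: int = 800, overlap: int = 100) -> list[str]:
    """Split text into overlapping character windows for embedding."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap   # step forward, keeping shared context
    return chunks

document = "Retrieval-augmented generation grounds an LLM in retrieved passages. " * 50
for i, chunk in enumerate(chunk_text(document)):
    print(i, len(chunk))   # each chunk would be embedded and indexed
```

The overlap preserves context that would otherwise be cut at chunk boundaries, at the cost of some index redundancy.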
Automated sitemap generation for all locale URLs, robots.txt configuration, and llms.txt for AI crawler optimization. Use when setting up sitemap.xml, configuring crawling rules, or improving discoverability for search engines and AI systems.
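A small sketch of generating a sitemap.xml that covers every locale variant of each route; the domain, locales, and routes are placeholders for whatever the site actually serves.

```python
# Sketch: emit sitemap.xml entries for all locale URLs (placeholder values).
from xml.sax.saxutils import escape

BASE = "https://example.com"
LOCALES = ["en", "de", "fr"]
ROUTES = ["/", "/pricing", "/docs"]

entries = []
for route in ROUTES:
    for locale in LOCALES:
        loc = f"{BASE}/{locale}{route}".rstrip("/")
        entries.append(f"  <url><loc>{escape(loc)}</loc></url>")

sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    + "\n".join(entries)
    + "\n</urlset>\n"
)
with open("sitemap.xml", "w") as f:
    f.write(sitemap)
```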
Build AI-native products with agency-control tradeoffs, calibration loops, and eval strategies. Use when building AI agents, LLM features, or products where AI handles user tasks autonomously. Part of the Modern Product Operating Model collection.
Use when running tests to validate implementations, collecting test evidence, or debugging failures. Load in TEST state. Covers unit tests (pytest/jest), API tests (curl), browser tests (Claude-in-Chrome), database verification. All results are code-verified, not LLM-judged.
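A sketch of the kind of code-verified evidence this skill collects: a unit-level pytest check plus an API-level check against a local endpoint. The function under test, endpoint, and port are illustrative placeholders.

```python
# Sketch: code-verified test evidence (pytest), placeholder code and endpoint.
import requests

def add(a: int, b: int) -> int:          # stand-in for the code under test
    return a + b

def test_add_unit():
    assert add(2, 3) == 5                # unit-level evidence

def test_health_endpoint():
    # API-level evidence, equivalent to `curl -s localhost:8000/health`
    resp = requests.get("http://localhost:8000/health", timeout=5)
    assert resp.status_code == 200
```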
Strategic clinical trial design feasibility assessment using ToolUniverse. Evaluates patient population sizing, biomarker prevalence, endpoint selection, comparator analysis, safety monitoring, and regulatory pathways. Creates comprehensive feasibility reports with evidence grading, enrollment projections, and trial design recommendations. Use when planning Phase 1/2 trials, assessing trial feasibility, or designing biomarker-driven studies.
Implement and maintain the OKX broker/provider integration for this workspace using okx-api SDK best practices, including auth/signing, spot/margin/futures/options trading, market/account endpoints, rate limiting, websocket subscriptions, and OKX error handling. Use when adding or changing any code under src/providers/okx or when an LLM needs canonical SDK usage patterns derived from .trae/okx-api-llm.txt.
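For reference, a sketch of OKX's documented request-signing scheme (ISO-8601 UTC timestamp, HMAC-SHA256 over timestamp + method + path + body, base64-encoded). The SDK normally handles this itself, so treat this as a debugging aid for auth errors rather than a replacement for its client.

```python
# Sketch of OKX's documented auth signing; the okx-api SDK normally does this.
import base64, hashlib, hmac
from datetime import datetime, timezone

def okx_sign(secret: str, method: str, request_path: str, body: str = "") -> dict:
    # Timestamp must be ISO-8601 UTC with millisecond precision.
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"
    prehash = ts + method.upper() + request_path + body
    sign = base64.b64encode(
        hmac.new(secret.encode(), prehash.encode(), hashlib.sha256).digest()
    ).decode()
    return {
        "OK-ACCESS-SIGN": sign,
        "OK-ACCESS-TIMESTAMP": ts,
        # OK-ACCESS-KEY and OK-ACCESS-PASSPHRASE come from the account config.
    }

headers = okx_sign("my-secret", "GET", "/api/v5/account/balance")
```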
Optimize content for AI search engines — ChatGPT, Perplexity, Claude, Gemini, Google AI Overviews. Use when the user asks about AI SEO, AISO, getting cited by AI, appearing in AI answers, answer engine optimization, AEO, GEO, LLMO, AI Overviews, zero-click search, or how to appear in ChatGPT/Perplexity results. For traditional SEO, see diagnose-seo.
Configure specific Sentry features beyond basic SDK setup. Use when asked to monitor AI/LLM calls, set up OpenTelemetry pipelines, or create alerts and notifications.
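A hedged sketch of the Sentry setup this kind of configuration builds on: tracing must be enabled for AI/LLM spans to appear. The OpenAIIntegration import reflects recent sentry-sdk releases and is an assumption; verify it against the version installed in the project.

```python
# Sketch: enable tracing so Sentry can capture spans for AI/LLM calls.
# OpenAIIntegration is assumed from recent sentry-sdk releases; DSN is a placeholder.
import sentry_sdk
from sentry_sdk.integrations.openai import OpenAIIntegration

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",   # placeholder DSN
    traces_sample_rate=1.0,            # sample every transaction while testing
    send_default_pii=True,             # include prompts/responses in LLM spans
    integrations=[OpenAIIntegration()],
)
```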