Install, configure, and operate Strix for AI-driven application security testing. Use when you need to run authorized vulnerability scans against local codebases, GitHub repositories, staging URLs, domains, or CI pipelines; configure Docker and LLM providers; choose quick, standard, or deep scan depth; or pass authenticated testing instructions to Strix. Triggers on: strix, ai pentest, vulnerability scan cli, appsec scan, bug bounty automation, strix ci, strix docker, strix scan mode, strix instruction file, headless security scan.
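A minimal invocation sketch. The environment variables and flags below (`STRIX_LLM`, `LLM_API_KEY`, `--target`, `--instruction`) are assumptions based on the Strix README; confirm against `strix --help` before running, and only scan targets you are authorized to test.

```bash
# Assumed names from the Strix README; verify with `strix --help`.
export STRIX_LLM="openai/gpt-5"   # LLM provider/model for the agent (assumed)
export LLM_API_KEY="sk-..."       # key for that provider (assumed)

strix --target ./my-app           # scan a local codebase; Docker must be running
strix --target https://staging.example.com \
      --instruction "Authorized staging test; focus on the login and payment flows"
```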
A prompt repetition technique for improving LLM accuracy. Achieves significant performance gains on 47 of 70 benchmarks (67%). Applied automatically on lightweight models (haiku, flash, mini).
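As an illustration of the technique itself, a minimal TypeScript sketch that states the prompt twice in a single user message before it is sent to a chat-style API; the helper name and separator wording are hypothetical, and the skill's own repetition format may differ.

```typescript
// Hypothetical sketch of prompt repetition: the full prompt appears twice in
// one message so a lightweight model attends to the instructions more reliably.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function withRepetition(prompt: string): ChatMessage[] {
  return [
    {
      role: "user",
      // The separator sentence is an assumption; any verbatim repeat works.
      content: `${prompt}\n\nTo be precise, I will repeat the request:\n\n${prompt}`,
    },
  ];
}

// Usage: hand the messages to any chat-completions client.
const messages = withRepetition("List every date mentioned in the text below ...");
console.log(messages[0].content);
```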
Instrument, trace, evaluate, and monitor LLM applications and AI agents with LangSmith. Use when setting up observability for LLM pipelines, running offline or online evaluations, managing prompts in the Prompt Hub, creating datasets for regression testing, or deploying agent servers. Triggers on: langsmith, langchain tracing, llm tracing, llm observability, llm evaluation, trace llm calls, @traceable, wrap_openai, langsmith evaluate, langsmith dataset, langsmith feedback, langsmith prompt hub, langsmith project, llm monitoring, llm debugging, llm quality, openevals, langsmith cli, langsmith experiment, annotate llm, llm judge.
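A minimal tracing sketch using the LangSmith JS SDK; `traceable` and `wrapOpenAI` are the SDK's documented entry points, while the model name and function body are illustrative. Assumes `LANGSMITH_TRACING=true` and `LANGSMITH_API_KEY` are set in the environment.

```typescript
import { traceable } from "langsmith/traceable";
import { wrapOpenAI } from "langsmith/wrappers";
import OpenAI from "openai";

// wrapOpenAI logs every completion call as a child run of the active trace.
const client = wrapOpenAI(new OpenAI());

// traceable turns a plain async function into a named, traced pipeline step.
const summarize = traceable(
  async (text: string) => {
    const res = await client.chat.completions.create({
      model: "gpt-4o-mini", // illustrative model choice
      messages: [{ role: "user", content: `Summarize: ${text}` }],
    });
    return res.choices[0].message.content;
  },
  { name: "summarize" }
);

await summarize("LangSmith records nested LLM calls automatically.");
```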
Process external code review feedback with technical rigor. Use when receiving feedback from another LLM, a human reviewer, or a CI tool. Verifies claims before implementing them and tracks the disposition of each item.
Methodology for effective AI-assisted software development. Use when helping users build software with AI coding assistants, debugging AI-generated code, planning features for AI implementation, managing version control in AI workflows, or when users mention "vibe coding," Cursor, Windsurf, or similar AI coding tools. Provides strategies for planning, testing, debugging, and iterating on code written with LLM assistance.
Implement and maintain the OKX broker/provider integration for this workspace using okx-api SDK best practices, including auth/signing, spot/margin/futures/options trading, market/account endpoints, rate limiting, websocket subscriptions, and OKX error handling. Use when adding or changing any code under src/providers/okx or when an LLM needs canonical SDK usage patterns derived from .trae/okx-api-llm.txt.
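Whatever surface the okx-api SDK exposes, the underlying OKX auth scheme is documented: each private request carries OK-ACCESS-* headers with a Base64 HMAC-SHA256 signature over timestamp + method + requestPath + body. An SDK-independent TypeScript sketch (the helper name is ours):

```typescript
// Sketch of OKX REST request signing, independent of any SDK wrapper.
// Per OKX API docs: sign = Base64(HMAC-SHA256(timestamp + method + requestPath + body)).
import { createHmac } from "node:crypto";

function okxHeaders(
  apiKey: string,
  secret: string,
  passphrase: string,
  method: "GET" | "POST",
  requestPath: string, // e.g. "/api/v5/account/balance"
  body = ""            // JSON string for POST, empty for GET
): Record<string, string> {
  const timestamp = new Date().toISOString(); // e.g. 2020-12-08T09:08:57.715Z
  const sign = createHmac("sha256", secret)
    .update(timestamp + method + requestPath + body)
    .digest("base64");
  return {
    "OK-ACCESS-KEY": apiKey,
    "OK-ACCESS-SIGN": sign,
    "OK-ACCESS-TIMESTAMP": timestamp,
    "OK-ACCESS-PASSPHRASE": passphrase,
    "Content-Type": "application/json",
  };
}
```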
Read GitHub repos the RIGHT way - via gitmcp.io instead of raw scraping. Why this beats web search: (1) Semantic search across docs, not just keyword matching, (2) Smart code navigation with accurate file structure - zero hallucinations on repo layout, (3) Proper markdown output optimized for LLMs, not raw HTML/JSON garbage, (4) Aggregates README + /docs + code in one clean interface, (5) Respects rate limits and robots.txt. Stop pasting raw GitHub URLs - use this instead.
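The pattern gitmcp documents is a host swap: github.com/owner/repo becomes gitmcp.io/owner/repo, and the resulting URL is used as the MCP endpoint for that repo. A trivial sketch of the substitution (the helper name is ours; how the URL is wired into your MCP client is tool-specific):

```typescript
// The gitmcp.io pattern: a repo's MCP endpoint is its GitHub URL with the
// host swapped. This helper just performs that substitution.
function toGitMcpUrl(repoUrl: string): string {
  // "https://github.com/owner/repo" -> "https://gitmcp.io/owner/repo"
  return repoUrl.replace(/^https:\/\/github\.com\//, "https://gitmcp.io/");
}

console.log(toGitMcpUrl("https://github.com/langchain-ai/langsmith-sdk"));
// -> https://gitmcp.io/langchain-ai/langsmith-sdk
```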
Pre-ship audit checklist for Ethereum dApps built with Scaffold-ETH 2. Give this to a separate reviewer agent (or fresh context) AFTER the build is complete. Covers only the bugs AI agents actually ship — validated by baseline testing against stock LLMs.
The essential mental models for building onchain — focused on what LLMs get wrong and what humans need explained. "Nothing is automatic" and "incentives are everything" are the core messages. Use when your human is new to onchain development, when they're designing a system, or when they ask "how does this actually work?" Also use when YOU are designing a system — the state machine + incentive framework catches design mistakes before they become dead code.
Remove LLM-generated code patterns that add noise without value. Use when reviewing diffs, PRs, or branches to clean up AI-generated code. Triggers include requests to "remove slop", "clean up AI code", "review for AI patterns", or checking diffs against main for unnecessary verbosity, redundant checks, or over-engineering introduced by LLMs. Language-agnostic.
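A hypothetical before/after showing the kind of noise in scope: narrating comments, a redundant truthiness check, and a try/catch that only rethrows.

```typescript
// Before (hypothetical LLM output): the comment narrates the obvious, the
// null check duplicates what the type already guarantees, and the try/catch
// adds nothing because it only rethrows.
//
//   // Loop over the users array to get each user's email
//   if (users && users.length > 0) {
//     try {
//       return users.map((u) => u.email);
//     } catch (error) {
//       throw error; // rethrow the error
//     }
//   }
//   return [];

// After: same behavior for valid input, no noise.
function userEmails(users: { email: string }[]): string[] {
  return users.map((u) => u.email);
}
```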
Smart contract testing with Foundry — unit tests, fuzz testing, fork testing, invariant testing. What to test, what not to test, and what LLMs get wrong.
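The depth knobs live on the forge CLI rather than in the tests themselves; fuzz tests are ordinary Solidity test functions with parameters, and forge generates the inputs. Standard invocations (flag names are stock Foundry options):

```bash
forge test                                          # run the full suite
forge test --match-test testFuzz_ --fuzz-runs 1000  # fuzz with more runs per test
forge test --fork-url "$MAINNET_RPC_URL"            # fork tests against live chain state
```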
Apple FoundationModels framework for on-device LLM — text generation, guided generation with @Generable, tool calling, and snapshot streaming in iOS 26+.
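A minimal Swift sketch; `LanguageModelSession`, `respond(to:)`, and the `@Generable`/`@Guide` macros follow Apple's published FoundationModels API, while the `Recipe` type and prompts are illustrative.

```swift
import FoundationModels

// Guided generation target: the model fills in a typed value.
@Generable
struct Recipe {
    @Guide(description: "A short dish name")
    var name: String
    var ingredients: [String]
}

let session = LanguageModelSession()

// Plain on-device text generation.
let text = try await session.respond(to: "Suggest a quick weeknight dinner.")
print(text.content)

// Guided generation: the response is a typed @Generable value, not free text.
let recipe = try await session.respond(
    to: "Create a simple pasta recipe.",
    generating: Recipe.self
)
print(recipe.content.name)
```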