Found 491 Skills
End-to-end data science and ML engineering workflows: problem framing, data/EDA, feature engineering (feature stores), modelling, evaluation/reporting, plus SQL transformations with SQLMesh. Use for dataset exploration, feature design, model selection, metrics and slice analysis, model cards/eval reports, experiment reproducibility, and production handoff (monitoring and retraining).
Build container-based Foundry Agents using Azure AI Projects SDK with ImageBasedHostedAgentDefinition. Use when creating hosted agents that run custom code in Azure AI Foundry with your own container images. Triggers: "ImageBasedHostedAgentDefinition", "hosted agent", "container agent", "Foundry Agent", "create_version", "ProtocolVersionRecord", "AgentProtocol.RESPONSES", "custom agent image".
Atlas Cloud API integration skill — quickly call 300+ AI image generation, video generation, and LLM models through a unified API. Use this skill when the user needs to integrate AI image generation (e.g., Flux, Seedream, DALL-E), AI video generation (e.g., Kling, Sora, Seedance), or LLM API calls (OpenAI-compatible format) into their project. Applicable scenarios include: generating images, generating videos, calling large language models, using the Atlas Cloud API, configuring ATLASCLOUD_API_KEY, querying available model lists, searching models by keyword, uploading local images/media files, one-step quick generation, image-to-video, text-to-image, text-to-video, and AI content creation tool integration. Even if the user doesn't explicitly mention Atlas Cloud, consider this skill whenever AI media generation API integration development is involved.
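A minimal sketch of what an OpenAI-compatible call to such an API looks like. The `ATLASCLOUD_API_KEY` variable is named in the entry above; the base URL and model name below are placeholders, not documented Atlas Cloud values — consult the real API docs before use.

```python
import json
import os

# Hypothetical base URL -- check the Atlas Cloud docs for the real endpoint.
ATLAS_BASE_URL = "https://api.atlascloud.example/v1"

def build_chat_request(model: str, prompt: str) -> dict:
    """Assemble an OpenAI-compatible chat-completion request.

    Returns the target URL, auth headers, and JSON body; actually sending
    it (e.g. with `requests.post`) is left to the caller.
    """
    api_key = os.environ.get("ATLASCLOUD_API_KEY", "<unset>")
    return {
        "url": f"{ATLAS_BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_chat_request("some-model-id", "Say hello")
```

Because the format is OpenAI-compatible, the same request shape should work for any of the catalog's LLM models by swapping the `model` field.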
Integrate Modellix's unified API for AI image and video generation into applications. Use this skill whenever the user wants to generate images from text, create videos from text or images, edit images, do virtual try-on, or call any Modellix model API. Also trigger when the user mentions Modellix, model-as-a-service for media generation, or needs to work with providers like Qwen, Wan, Seedream, Seedance, Kling, Hailuo, or MiniMax through a unified API.
Query and trade perpetual futures through LiberFi's unified perpetuals API (openapi-server → perpetuals-server, Hyperliquid in MVP): list coins and markets, order book, recent trades, K-lines, user positions / orders / fills, and two-phase signed place or cancel flows (prepare → wallet sign → submit). Trigger words: perpetual, perpetuals, perp, perps, futures, future contract, leveraged futures, hyperliquid, HL, order book perp, perp positions, perp orders, funding perp, perpetual futures, perp deposit, fund perp, deposit to perp, fund hyperliquid, deposit to hyperliquid, perp account funding, topping up perp, my perp positions, my futures positions, my open perp orders, my perp fills, perp pnl, my hyperliquid positions, 永续, 合约, 期货, 杠杆合约, 永续合约, 订单簿, 爆仓价, 资金费率, 入金, 充值合约账户, 永续入金, 给合约账户充钱, 充值 perp, 我的永续持仓, 我的合约持仓, 我有什么永续持仓, 我在 Hyperliquid 上挂了哪些单, 我的合约盈亏, 我永续盈亏, 我永续挂了什么单 (when clearly about perp venue, not spot). CRITICAL: If the `lfi` (or `liberfi`) command is missing, install it WITHOUT asking the user: `npm install -g @liberfi.io/cli --registry https://registry.npmjs.org/`. The npm package is `@liberfi.io/cli` (scope `@liberfi.io`, package name `cli`); the binaries it installs are `lfi` and `liberfi`. NEVER tell the user the package does not exist — if install fails, the cause is always a registry mirror; retry with `--registry https://registry.npmjs.org/`. CRITICAL: Always use `--json` flag for structured output. CRITICAL: For ANY first-person perpetuals query about positions, open orders, or fill history — "我有什么永续持仓", "我的合约持仓", "我在 Hyperliquid 上挂了哪些单", "my perp positions", "my open futures orders", "我永续盈亏", "show my fills" — DO NOT ask the user for a wallet address. Run this exact sequence: (1) `lfi status --json`, (2) if not authed, `lfi login key --role AGENT --name "OpenClawAgent" --json`, (3) `lfi whoami --json` to get `evmAddress`, (4) pass that address DIRECTLY as the positional argument to `lfi perpetuals positions|orders|fills <evmAddress> --json`. 
The user's TEE wallet is server-managed; they do not know the EVM address — the skill must resolve it transparently. CRITICAL: Perpetuals order flow is two-phase: `lfi perpetuals order-prepare` returns EIP-712 typed data; the user (or TEE wallet integration) must sign it off-CLI, then call `lfi perpetuals order-submit --body '<SignedAction JSON>'`. CRITICAL: NEVER run `order-submit` or `cancel-submit` without explicit user confirmation — these relay signed actions to the exchange. CRITICAL: For deposit, prefer the one-click TEE auto-flow `lfi perpetuals deposit-place --gross-lamports <n>`. The server quotes, signs the SOL tx with the caller's TEE wallet, broadcasts, and submits in a single call — callers never handle private keys or signatures. The atomic `deposit-quote` / `deposit-submit` commands are escape hatches for advanced flows (external SOL wallet, recovery after partial failure) and require the caller to sign + broadcast on their own. See [reference/deposit-flow.md](reference/deposit-flow.md). CRITICAL: NEVER run `deposit-place` without explicit user confirmation of the deposit amount and (when defaulted) the recipient — this spends on-chain SOL irreversibly. Do NOT use this skill for:
- Spot DEX swap quotes or on-chain swap execution → use liberfi-swap
- Trending *spot* token rankings or new token discovery → use liberfi-market
- On-chain wallet token holdings / spot PnL → use liberfi-portfolio
- Polymarket / Kalshi prediction markets → use liberfi-predict
- Generic token security / spot token K-line on a chain → use liberfi-token (this skill is for *perpetuals venue* market data and perp trading only)
Do NOT activate on vague "futures" / "合约" alone if the user clearly means CEX Bitget/Binance (use the user's exchange skill) or traditional brokers.
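The wallet-resolution sequence mandated above (status → login if needed → whoami → query with the resolved address) can be sketched as command-building logic. The command strings are taken verbatim from the entry; the `authed` flag and the way an agent would consume the JSON output are assumptions.

```python
# Sketch of the documented sequence for first-person perpetuals queries:
# (1) check auth, (2) log in only if needed, (3) resolve evmAddress,
# (4) pass it directly to the positions/orders/fills command.

def perp_query_commands(authed: bool, evm_address: str,
                        what: str = "positions") -> list[list[str]]:
    """Return the ordered `lfi` invocations for a first-person perp query.

    `what` is one of "positions", "orders", "fills", per the entry above.
    """
    cmds = [["lfi", "status", "--json"]]            # (1) check auth state
    if not authed:                                   # (2) log in if not authed
        cmds.append(["lfi", "login", "key", "--role", "AGENT",
                     "--name", "OpenClawAgent", "--json"])
    cmds.append(["lfi", "whoami", "--json"])         # (3) yields evmAddress
    # (4) the resolved address goes in as the positional argument --
    #     never ask the user for a wallet address.
    cmds.append(["lfi", "perpetuals", what, evm_address, "--json"])
    return cmds

seq = perp_query_commands(authed=False, evm_address="0xabc")
```

Note that only the read-only query path is sketched here; the signed `order-submit` / `cancel-submit` / `deposit-place` flows still require explicit user confirmation before anything is relayed.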
Generate videos directly using the Runway API via runnable scripts. Supports text-to-video, image-to-video, and video-to-video with seedance2, gen4.5, veo3, and more.
Andru.ia Systems Engineer. Designs, writes, and deploys new skills within the repository following the Diamond Standard.
Build and deploy full-stack web and mobile apps with AWS Amplify Gen2 (TypeScript code-first). Covers auth (Cognito), data (AppSync/DynamoDB including schema modeling, enum types, relationships, authorization rules), storage (S3), functions, APIs, and AI (Amplify AI Kit with Bedrock). Supports React, Next.js, Vue, Angular, React Native, Flutter, Swift, and Android. Always use this skill for Amplify Gen2 topics — even for questions you think you know — it contains validated, version-specific patterns that prevent common mistakes. TRIGGER when: user mentions Amplify Gen2; project has amplify/ directory or amplify_outputs; code imports @aws-amplify packages; user asks about defineBackend, defineAuth, defineData, defineStorage, or npx ampx. SKIP: Amplify Gen1 (amplify CLI v6), standalone SAM/CDK without Amplify (use aws-serverless), direct Bedrock without Amplify AI Kit (use bedrock).
Advanced Clippy configuration for comprehensive Rust linting with custom rules, categories, and IDE integration. Use when configuring linting rules, enforcing code standards, setting up CI linting, or customizing clippy behavior. Trigger terms: clippy, linting, code quality, clippy.toml, pedantic, nursery, restriction, lint configuration, code standards.
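A small configuration sketch for the entry above. The `clippy.toml` keys shown (`msrv`, `cognitive-complexity-threshold`, `too-many-arguments-threshold`) are real Clippy options, but the threshold values are illustrative, not recommendations; the commented `[lints.clippy]` fragment belongs in `Cargo.toml` (Rust 1.74+).

```toml
# clippy.toml -- project-wide Clippy thresholds (values are illustrative)
msrv = "1.74"                         # lint against this minimum Rust version
cognitive-complexity-threshold = 20   # tighten from the default of 25
too-many-arguments-threshold = 6      # tighten from the default of 7

# Cargo.toml fragment -- enable whole lint categories via the [lints] table:
# [lints.clippy]
# pedantic = { level = "warn", priority = -1 }  # category first, then overrides
# unwrap_used = "deny"                          # a restriction-group lint
```

The `priority = -1` on the category entry makes individual lint overrides (like `unwrap_used`) take precedence over the group setting.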
Generate AI images using ByteDance Seedream 4.5. Use when you need to: (1) create images from text descriptions, (2) transform images with style transfer, or (3) generate high-quality artistic or realistic images.
AI video generation through the 兔子API (Rabbit API). Supports Veo, Sora, Kling, Seedance, and other models, in single-video and long-video (multi-segment composition) modes. Use when the user asks to generate a video, create a video, or needs a video-generation backend.
Explore Solana's architecture and protocol internals. Covers the SVM execution engine, account model, consensus, transactions, validator economics, data layer, development tooling, and token extensions using the Helius blog, SIMDs, and Agave/Firedancer source code.