Found 776 Skills
Convert GitHub/GitLab/Gitee repositories into comprehensive OpenCode Skills using embedded LLM calls, with multiple mirrors and rate-limit handling
Use when building a custom provider integration on top of @prefactor/core so your app can instrument agent, llm, and tool workflows without relying on a prebuilt adapter package.
AI and machine learning workflow covering LLM application development, RAG implementation, agent architecture, ML pipelines, and AI-powered features.
Real-time investment context from Primary Logic — LLM-ranked relevance and impact signals from podcasts, articles, X/Twitter, Kalshi, Polymarket, earnings calls, filings, and other monitored sources across public and private companies.
Step-by-step guide for adding support for a new LLM in Dust. Use when adding a new model or updating an existing one.
Performs semantic code intelligence and token optimization through context engineering and automated context packing. Use when reducing token overhead for large codebases, creating repository digests with Gitingest, packaging code context with Repomix, or tracing cross-file dependencies with llm-tldr.
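Below is a minimal sketch of the Gitingest digest step mentioned above, assuming the `gitingest` pip package and its `ingest()` helper; the repository path is a placeholder, and the exact return signature should be checked against the package docs.

```python
# Repository digest with Gitingest: packs a repo into LLM-ready text.
# Assumes `pip install gitingest`; the path below is a placeholder.
from gitingest import ingest

# ingest() walks the repository and returns a high-level summary,
# a directory tree, and the concatenated file contents.
summary, tree, content = ingest("path/to/your/repo")

print(summary)        # file counts and an estimated token total
print(tree)           # directory structure for orientation
print(content[:500])  # packed source, ready to drop into a prompt
```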
A skill for improving prompts by applying general LLM/agent best practices. When the user provides a prompt, this skill outputs an improved version, identifies missing information, and provides specific improvement points. Use when the user asks to "improve this prompt", "review this prompt", or "make this prompt better".
Repository packaging for AI/LLM analysis. Capabilities: pack repos into single files, generate AI-friendly context, codebase snapshots, security audit prep, filter/exclude patterns, token counting, multiple output formats. Actions: pack, generate, export, analyze repositories for LLMs. Keywords: Repomix, repository packaging, LLM context, AI analysis, codebase snapshot, Claude context, ChatGPT context, Gemini context, code packaging, token count, file filtering, security audit, third-party library analysis, context window, single file output. Use when: packaging codebases for AI, generating LLM context, creating codebase snapshots, analyzing third-party libraries, preparing security audits, feeding repos to Claude/ChatGPT/Gemini.
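As a concrete illustration of the packaging workflow above, here is a hedged sketch that drives the Repomix CLI from Python. The flags shown (`--output`, `--style`, `--ignore`) match Repomix's documented options at the time of writing, but verify them against `npx repomix --help`; the file names are examples.

```python
# Pack the current repository into a single Markdown file with Repomix.
import subprocess

subprocess.run(
    [
        "npx", "repomix",
        "--output", "repo-context.md",  # single-file snapshot (example name)
        "--style", "markdown",          # xml and plain are also supported
        "--ignore", "dist/**,*.lock",   # exclude build output and lockfiles
    ],
    check=True,  # raise if the CLI exits non-zero
)
# repo-context.md can then be pasted into Claude/ChatGPT/Gemini.
```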
Expert-level AI implementation, deployment, LLM integration, and production AI systems
Provides PyTorch-native distributed LLM pretraining using torchtitan with 4D parallelism (FSDP2, TP, PP, CP). Use when pretraining Llama 3.1, DeepSeek V3, or custom models at scale from 8 to 512+ GPUs with Float8, torch.compile, and distributed checkpointing.
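For orientation, the sketch below shows the FSDP2 sharding idiom that torchtitan composes into its 4D parallelism, reduced to a single data-parallel axis on a toy model. It assumes PyTorch >= 2.6, where `fully_shard` is exported from `torch.distributed.fsdp`, and is meant to be launched with `torchrun`; the model is a stand-in, not torchtitan's actual Llama implementation.

```python
# Minimal FSDP2 sketch: shard each block, then the root module.
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.distributed.fsdp import fully_shard  # PyTorch >= 2.6

dist.init_process_group("nccl")  # one process per GPU under torchrun
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

# Toy stand-in for a transformer: a stack of MLP blocks.
model = nn.Sequential(
    *[nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))
      for _ in range(4)]
).cuda()

for block in model:   # parameters are gathered per block during fwd/bwd...
    fully_shard(block)
fully_shard(model)    # ...instead of all at once for the whole model

opt = torch.optim.AdamW(model.parameters(), lr=3e-4)  # create after sharding
x = torch.randn(8, 1024, device="cuda")
model(x).square().mean().backward()
opt.step()
```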
Quantizes LLMs to 8-bit or 4-bit for 50-75% memory reduction with minimal accuracy loss. Use when GPU memory is limited, you need to fit larger models, or you want faster inference. Supports INT8, NF4, FP4 formats, QLoRA training, and 8-bit optimizers. Works with HuggingFace Transformers.
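The snippet below shows the usual path to the 4-bit NF4 setup described above, via HuggingFace Transformers' `BitsAndBytesConfig`. This API is documented in Transformers; the model id is only an example.

```python
# Load a causal LM in 4-bit NF4 with bitsandbytes (~75% memory saving).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit weights
    bnb_4bit_quant_type="nf4",              # "fp4" is the alternative format
    bnb_4bit_compute_dtype=torch.bfloat16,  # matmuls still run in bf16
    bnb_4bit_use_double_quant=True,         # also quantize the quant constants
)

model_id = "meta-llama/Llama-3.1-8B"        # example checkpoint
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",                      # place layers across available GPUs
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```

The same config object is also the starting point for QLoRA: train LoRA adapters on top of the frozen 4-bit base model.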
Design LLM-as-Judge evaluators for subjective criteria that code-based checks cannot handle. Use when a failure mode requires interpretation (tone, faithfulness, relevance, completeness). Do NOT use when the failure mode can be checked with code (regex, schema validation, execution tests). Do NOT use when you need to validate or calibrate the judge — use validate-evaluator instead.
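A minimal judge sketch for one such subjective criterion (faithfulness) follows. It uses the OpenAI Python SDK as an example client; the rubric wording, labels, and model id are illustrative assumptions, not a prescribed design.

```python
# Rubric-based LLM-as-Judge returning a binary verdict plus a reason.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

JUDGE_PROMPT = """You are grading an answer for FAITHFULNESS to its source.

Source:
{source}

Answer:
{answer}

Rules:
- "pass" only if every claim in the answer is supported by the source.
- "fail" if any claim is unsupported or contradicted.
Respond with JSON: {{"verdict": "pass" | "fail", "reason": "<one sentence>"}}"""

def judge_faithfulness(source: str, answer: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model id
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(source=source, answer=answer),
        }],
        response_format={"type": "json_object"},  # force parseable output
        temperature=0,                            # deterministic grading
    )
    return json.loads(resp.choices[0].message.content)
```

Binary pass/fail with a one-sentence reason keeps the judge cheap to calibrate later with validate-evaluator.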