Query decomposition for multi-concept retrieval. Use when handling complex queries spanning multiple topics, implementing multi-hop retrieval, or improving coverage for compound questions.
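A minimal sketch of the decomposition idea, assuming an OpenAI-compatible client; the split-and-merge logic, model name, and the `search` retriever are illustrative, not part of the skill itself:

```python
# Sketch: split a compound query into sub-queries, retrieve per sub-query,
# then merge the result sets. Assumes an OpenAI-compatible client and a
# search(query) -> list[str] retriever supplied elsewhere (both hypothetical).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def decompose(query: str) -> list[str]:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Split the user question into independent sub-questions, one per line."},
            {"role": "user", "content": query},
        ],
    )
    return [line.strip() for line in resp.choices[0].message.content.splitlines() if line.strip()]

def multi_concept_retrieve(query: str, search) -> list[str]:
    seen, merged = set(), []
    for sub in decompose(query):
        for doc in search(sub):
            if doc not in seen:  # de-duplicate across sub-queries
                seen.add(doc)
                merged.append(doc)
    return merged
```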
LLM and ML model deployment for inference. Use when serving models in production, building AI APIs, or optimizing inference. Covers vLLM (LLM serving), TensorRT-LLM (GPU optimization), Ollama (local), BentoML (ML deployment), Triton (multi-model), LangChain (orchestration), LlamaIndex (RAG), and streaming patterns.
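For the vLLM entry above, a minimal offline-inference sketch; the model name is illustrative, and production serving would typically go through vLLM's OpenAI-compatible server rather than this batch path:

```python
# Sketch: offline batch inference with vLLM. Any supported
# Hugging Face causal LM can be substituted for the model name.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Explain KV-cache paging in one sentence."], params)
for out in outputs:
    print(out.outputs[0].text)
```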
Use when "LangChain", "LLM chains", "ReAct agents", "tool calling", or asking about "RAG pipelines", "conversation memory", "document QA", "agent tools", "LangSmith"
Helps convert Korean HWP/HWPX documents to various formats (Text, HTML, ODT, PDF) and generate HWPX from Markdown/HTML. Supports document processing, chunking, and LangChain integration for LLM/RAG pipelines.
Use when adding multi-format RAG ingest, chunking, embedding, and retrieval pipelines; pair with architect-python-uv-batch or architect-python-uv-fastapi-sqlalchemy.
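A minimal embed-and-retrieve sketch for the pipeline shape this entry describes, using sentence-transformers; the model name and corpus are placeholders, and a real pipeline would persist embeddings in a vector store rather than recompute them per query:

```python
# Sketch: embed a chunked corpus and retrieve by cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model
chunks = ["chunk one ...", "chunk two ...", "chunk three ..."]
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q               # cosine similarity (unit vectors)
    top = np.argsort(-scores)[:k]
    return [chunks[i] for i in top]
```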
Extract structured data from Office documents (DOCX, PPTX, XLSX, HWP, HWPX) using the Polaris AI DataInsight Doc Extract API. Use when the user wants to parse, analyze, or extract text, tables, charts, images, or shapes from document files. Invoke this skill whenever the user mentions extracting content from Word, PowerPoint, Excel, HWP, or HWPX files, wants to parse document structure, needs to convert document data for RAG pipelines, or asks about reading tables, charts, or text from Office-format documents — even if they don't explicitly mention "DataInsight" or "Polaris".
Interact with the Denser Retriever API to build and query knowledge bases. Use this skill whenever the user wants to create a knowledge base, upload documents (files or URLs), search/query a knowledge base, list or delete knowledge bases or documents, check document processing status, or check account usage/balance. Also trigger when the user mentions 'denser retriever', 'knowledge base', 'document search', 'semantic search', 'RAG pipeline', or wants to index and search their files.
Build LLM applications using Dify's visual workflow platform. Use when creating AI chatbots, implementing RAG pipelines, developing agents with tools, managing knowledge bases, deploying LLM apps, or building workflows with drag-and-drop. Supports hundreds of LLMs and Docker/Kubernetes deployment.
Use this skill when crafting LLM prompts, implementing chain-of-thought reasoning, designing few-shot examples, building RAG pipelines, or optimizing prompt performance. Triggers on prompt design, system prompts, few-shot learning, chain-of-thought, prompt chaining, RAG, retrieval-augmented generation, prompt templates, structured output, and any task requiring effective LLM interaction patterns.
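A small sketch of one pattern this entry names, a few-shot chain-of-thought prompt assembled from worked examples; the examples and wording are illustrative, and any LLM client can consume the resulting string:

```python
# Sketch: build a few-shot CoT prompt from (question, worked reasoning) pairs.
FEW_SHOT = [
    ("A train covers 120 km in 2 hours. What is its speed?",
     "Speed = distance / time = 120 / 2 = 60 km/h. Answer: 60 km/h"),
    ("If 3 pens cost $6, how much do 5 pens cost?",
     "One pen costs 6 / 3 = $2, so 5 pens cost 5 * 2 = $10. Answer: $10"),
]

def build_cot_prompt(question: str) -> str:
    parts = ["Answer step by step, then give the final answer.\n"]
    for q, reasoning in FEW_SHOT:
        parts.append(f"Q: {q}\nA: {reasoning}\n")
    parts.append(f"Q: {question}\nA:")
    return "\n".join(parts)
```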
Use this skill when working with Mastra - the TypeScript AI framework for building agents, workflows, tools, and AI-powered applications. Triggers on creating agents, defining workflows, configuring memory, RAG pipelines, MCP client/server setup, voice integration, evals/scorers, deployment, and Mastra CLI commands. Also triggers on "mastra dev", "mastra build", "mastra init", Mastra Studio, or any Mastra package imports.
Designs production-grade RAG pipelines with chunking optimization, retrieval evaluation, and pipeline architecture. Use when building a RAG system, selecting a chunking strategy, choosing a vector database, optimizing retrieval quality, designing embedding pipelines, or evaluating RAG performance with RAGAS metrics.
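For the RAGAS evaluation this entry mentions, a minimal scoring sketch; the column names follow the classic ragas schema (question / answer / contexts / ground_truth), which newer releases may replace with a different dataset shape, so treat this as a starting point rather than the skill's own API:

```python
# Sketch: score a RAG pipeline with RAGAS metrics over a tiny eval set.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy, context_precision

eval_data = Dataset.from_dict({
    "question": ["What is HNSW?"],
    "answer": ["HNSW is a graph-based approximate nearest neighbor index."],
    "contexts": [["HNSW builds a hierarchy of proximity graphs for ANN search."]],
    "ground_truth": ["A hierarchical graph index for approximate nearest neighbor search."],
})

scores = evaluate(eval_data, metrics=[faithfulness, answer_relevancy, context_precision])
print(scores)
```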
Document chunking implementations and benchmarking tools for RAG pipelines, covering fixed-size, semantic, recursive, and sentence-based strategies. Use when implementing document processing, optimizing chunk sizes, comparing chunking approaches, benchmarking retrieval performance, or when the user mentions chunking, text splitting, document segmentation, RAG optimization, or chunk evaluation.
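The fixed-size strategy from that list, as a minimal sketch; sizes are illustrative, and a token-based variant would swap len() for a tokenizer:

```python
# Sketch: fixed-size character windows with overlap between adjacent chunks.
def fixed_size_chunks(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # slide the window, keeping the overlap
    return chunks
```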