Build LLM applications with LangChain and LangGraph. Use when creating RAG pipelines, agent workflows, chains, or complex LLM orchestration. Triggers on LangChain, LangGraph, LCEL, RAG, retrieval, agent, chain.
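A minimal LCEL sketch of the prompt-to-model-to-parser pattern this skill covers, assuming `langchain-core` and `langchain-openai` are installed and `OPENAI_API_KEY` is set; the model id and prompt text are illustrative assumptions.

```python
# Minimal LCEL chain: prompt -> chat model -> string output.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n\n"
    "Context: {context}\n\nQuestion: {question}"
)
llm = ChatOpenAI(model="gpt-4o-mini")  # assumed model id
chain = prompt | llm | StrOutputParser()  # LCEL composition with |

answer = chain.invoke({
    "context": "LCEL composes runnables with the | operator.",
    "question": "What does LCEL do?",
})
print(answer)
```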
Complete RAG and search engineering skill. Covers chunking strategies, hybrid retrieval (BM25 + vector), cross-encoder reranking, query rewriting, ranking pipelines, nDCG/MRR evaluation, and production search systems. Modern patterns for retrieval-augmented generation and semantic search.
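One way to combine the BM25 and vector rankings mentioned above is reciprocal rank fusion; the sketch below is plain Python, with hypothetical document ids and the commonly used k=60 smoothing constant.

```python
# Reciprocal rank fusion (RRF): merge multiple rankings by summing 1/(k + rank).
def rrf(rankings, k=60):
    scores = {}
    for ranked_ids in rankings:
        for rank, doc_id in enumerate(ranked_ids, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_ranking = ["doc3", "doc1", "doc7"]     # lexical retriever output (hypothetical)
vector_ranking = ["doc1", "doc5", "doc3"]   # dense retriever output (hypothetical)
print(rrf([bm25_ranking, vector_ranking]))  # fused order: doc1, doc3, doc5, doc7
```

The fused list can then feed a cross-encoder reranker, and nDCG/MRR can be computed over the final ordering.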
Provides expertise on Chroma vector database integration for semantic search applications. Use when the user asks about vector search, embeddings, Chroma, semantic search, RAG systems, nearest neighbor search, or adding search functionality to their application.
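A small Chroma sketch of the add-then-query flow, assuming the `chromadb` package; the collection name and documents are illustrative, and Chroma embeds the texts with its default embedding function unless one is supplied.

```python
# In-memory Chroma client: add two documents, then run a semantic query.
import chromadb

client = chromadb.Client()
collection = client.create_collection("docs")  # hypothetical collection name
collection.add(
    ids=["a", "b"],
    documents=["Chroma stores embeddings.", "pgvector lives inside PostgreSQL."],
)
results = collection.query(query_texts=["where are embeddings stored?"], n_results=1)
print(results["documents"])
```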
Implement GraphRAG patterns combining knowledge graphs with retrieval for complex reasoning. Use this skill when building RAG over interconnected data or needing relationship-aware retrieval. Activate when: GraphRAG, knowledge graph, graph retrieval, entity relationships, Neo4j RAG, graph database, connected data.
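A hedged sketch of the relationship-aware retrieval step: pull a document's entity neighborhood from Neo4j before assembling LLM context. The URI, credentials, node labels, and Cypher pattern are assumptions, not a fixed GraphRAG schema.

```python
# Fetch related entities around a document node to build relationship-aware context.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
cypher = (
    "MATCH (d:Document {id: $doc_id})-[:MENTIONS]->(e:Entity)-[r]-(related) "
    "RETURN e.name AS entity, type(r) AS relation, related.name AS neighbor LIMIT 25"
)
with driver.session() as session:
    rows = session.run(cypher, doc_id="doc-42").data()

context = "\n".join(f"{r['entity']} -{r['relation']}-> {r['neighbor']}" for r in rows)
driver.close()
```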
Expert in aggregating, processing, and synthesizing information from multiple sources into coherent insights. Use when building knowledge graphs, ontologies, RAG systems, or extracting insights across documents. Triggers include "knowledge graph", "ontology", "synthesize information", "GraphRAG", "insight extraction", "cross-document analysis".
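A lightweight illustration of cross-document synthesis as a knowledge graph of subject-relation-object triples, assuming `networkx`; the entities, relations, and source ids are made up for the example.

```python
# Store extracted triples with provenance, then trace a multi-hop insight.
import networkx as nx

graph = nx.MultiDiGraph()
triples = [
    ("Acme Corp", "acquired", "Widget Inc", "report_2023.pdf"),
    ("Widget Inc", "manufactures", "widgets", "press_release.html"),
]
for subj, rel, obj, source in triples:
    graph.add_edge(subj, obj, relation=rel, source=source)

# Cross-document path: Acme Corp -> Widget Inc -> widgets
print(nx.has_path(graph, "Acme Corp", "widgets"))
```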
Expert prompt engineering for LLM applications including prompt design, optimization, RAG systems, agent architectures, and AI product development.
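As a small prompt-design illustration, a few-shot template in plain Python; the task, labels, and example tickets are assumptions chosen only to show the pattern.

```python
# Few-shot prompt template: demonstrate the task with labeled examples, then ask.
FEW_SHOT = """Classify the support ticket as 'billing', 'bug', or 'other'.

Ticket: "I was charged twice this month."
Label: billing

Ticket: "The export button crashes the app."
Label: bug

Ticket: "{ticket}"
Label:"""

print(FEW_SHOT.format(ticket="How do I reset my password?"))
```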
Use this skill for setting up vector similarity search with pgvector for AI/ML embeddings, RAG applications, or semantic search.

**Trigger when user asks to:**
- Store or search vector embeddings in PostgreSQL
- Set up semantic search, similarity search, or nearest neighbor search
- Create HNSW or IVFFlat indexes for vectors
- Implement RAG (Retrieval Augmented Generation) with PostgreSQL
- Optimize pgvector performance, recall, or memory usage
- Use binary quantization for large vector datasets

**Keywords:** pgvector, embeddings, semantic search, vector similarity, HNSW, IVFFlat, halfvec, cosine distance, nearest neighbor, RAG, LLM, AI search

Covers: halfvec storage, HNSW index configuration (m, ef_construction, ef_search), quantization strategies, filtered search, bulk loading, and performance tuning.
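A pgvector sketch driven from Python with `psycopg2`, assuming a reachable PostgreSQL with the `vector` extension available; the connection string, table name, 3-dimensional vectors, and HNSW parameters are illustrative stand-ins for real embedding dimensions and tuned settings.

```python
# Create a vector column, an HNSW index, and run a cosine-distance query.
import psycopg2

conn = psycopg2.connect("dbname=app user=app")  # assumed connection string
cur = conn.cursor()
cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
cur.execute("CREATE TABLE IF NOT EXISTS items (id bigserial PRIMARY KEY, embedding vector(3))")
cur.execute(
    "CREATE INDEX IF NOT EXISTS items_embedding_idx ON items "
    "USING hnsw (embedding vector_cosine_ops) WITH (m = 16, ef_construction = 64)"
)
cur.execute("INSERT INTO items (embedding) VALUES (%s)", ("[0.1,0.2,0.3]",))
cur.execute("SET hnsw.ef_search = 40")  # query-time recall/latency trade-off
query_vec = "[" + ",".join(map(str, [0.1, 0.2, 0.25])) + "]"
cur.execute("SELECT id FROM items ORDER BY embedding <=> %s LIMIT 5", (query_vec,))
print(cur.fetchall())
conn.commit()
```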
Retrieval-Augmented Generation patterns for grounded LLM responses. Use when building RAG pipelines, constructing context from retrieved documents, adding citations, or implementing hybrid search.
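A plain-Python sketch of context construction with numbered citations from retrieved chunks; the chunk records and prompt wording are illustrative, not a fixed RAG prompt format.

```python
# Number the retrieved chunks, build a grounded prompt, and keep a source map.
chunks = [
    {"id": "kb-17", "text": "pgvector adds vector similarity search to PostgreSQL."},
    {"id": "kb-42", "text": "HNSW indexes trade memory for faster approximate search."},
]

numbered = "\n".join(f"[{i + 1}] {c['text']}" for i, c in enumerate(chunks))
sources = ", ".join(f"[{i + 1}] {c['id']}" for i, c in enumerate(chunks))
prompt = (
    "Answer from the context and cite chunk numbers like [1].\n\n"
    f"Context:\n{numbered}\n\nQuestion: How does pgvector index vectors?\n"
)
print(prompt)
print("Sources:", sources)
```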
Analyze AI/ML technical content (papers, articles, blog posts) and extract actionable insights filtered through an enterprise AI engineering lens. Use when the user provides a URL or document for AI/ML content analysis, asks to "review this paper", or mentions technical content in domains such as RAG, embeddings, fine-tuning, prompt engineering, or LLM deployment.
AWS Bedrock foundation models for generative AI. Use when invoking foundation models, building AI applications, creating embeddings, configuring model access, or implementing RAG patterns.
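A Bedrock embedding sketch with `boto3`, assuming AWS credentials and model access are configured in the chosen region; the region and Titan model id are assumptions about what is enabled in your account.

```python
# Invoke a Titan embedding model on Bedrock and read the returned vector.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")
response = client.invoke_model(
    modelId="amazon.titan-embed-text-v1",
    body=json.dumps({"inputText": "Retrieval-augmented generation grounds answers."}),
)
payload = json.loads(response["body"].read())
embedding = payload["embedding"]  # list of floats for downstream vector search
print(len(embedding))
```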
Expert in building Retrieval-Augmented Generation systems. Masters embedding models, vector databases, chunking strategies, and retrieval optimization for LLM applications. Use when: building RAG, vector search, embeddings, semantic search, document retrieval.
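A bare-bones dense-retrieval sketch with cosine similarity over NumPy arrays; the embeddings here are random stand-ins for vectors a real embedding model would produce.

```python
# Normalize document and query vectors, score by dot product, take top-k.
import numpy as np

rng = np.random.default_rng(0)
doc_vecs = rng.normal(size=(4, 8))   # 4 document embeddings, dim 8 (illustrative)
query_vec = rng.normal(size=8)

doc_norm = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
q_norm = query_vec / np.linalg.norm(query_vec)
scores = doc_norm @ q_norm           # cosine similarity per document
top_k = np.argsort(scores)[::-1][:2] # indices of the 2 closest documents
print(top_k, scores[top_k])
```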
Expert guidance for LlamaIndex development including RAG applications, vector stores, document processing, query engines, and building production AI applications.
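A minimal LlamaIndex sketch of the load-index-query flow, assuming `llama-index` 0.10+ with its default OpenAI settings and an `OPENAI_API_KEY`; the `data` directory and question are illustrative.

```python
# Load local files, build a vector index, and query it through a query engine.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()  # load local files
index = VectorStoreIndex.from_documents(documents)     # embed and index
query_engine = index.as_query_engine()
response = query_engine.query("What do these documents cover?")
print(response)
```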