INVOKE THIS SKILL when building ANY retrieval-augmented generation (RAG) system. Covers document loaders, RecursiveCharacterTextSplitter, embeddings (OpenAI), and vector stores (Chroma, FAISS, Pinecone).
npx skill4agent add langchain-ai/langchain-skills langchain-rag

| Vector Store | Use Case | Persistence |
|---|---|---|
| InMemory | Testing | Memory only |
| FAISS | Local, high performance | Disk |
| Chroma | Development | Disk |
| Pinecone | Production, managed | Cloud |
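<python>
End-to-end RAG pipeline in Python, mirroring the TypeScript example below (a sketch reconstructing the missing Python block; assumes langchain-openai and langchain-text-splitters are installed).
```python
from langchain_core.documents import Document
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# 1. Load documents
docs = [
    Document(page_content="LangChain is a framework for LLM apps."),
    Document(page_content="RAG = Retrieval Augmented Generation."),
]
# 2. Split documents
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
splits = splitter.split_documents(docs)
# 3. Create embeddings and store
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vectorstore = InMemoryVectorStore.from_documents(splits, embeddings)
# 4. Create retriever
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
# 5. Use in RAG
model = ChatOpenAI(model="gpt-4.1")
query = "What is RAG?"
relevant_docs = retriever.invoke(query)
context = "\n\n".join(doc.page_content for doc in relevant_docs)
response = model.invoke([
    {"role": "system", "content": f"Use this context:\n\n{context}"},
    {"role": "user", "content": query},
])
```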
</python>
<typescript>
End-to-end RAG pipeline: load documents, split into chunks, embed, store, retrieve, and generate a response.
```typescript
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "@langchain/classic/vectorstores/memory";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { Document } from "@langchain/core/documents";
// 1. Load documents
const docs = [
new Document({ pageContent: "LangChain is a framework for LLM apps.", metadata: {} }),
new Document({ pageContent: "RAG = Retrieval Augmented Generation.", metadata: {} }),
];
// 2. Split documents
const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 500, chunkOverlap: 50 });
const splits = await splitter.splitDocuments(docs);
// 3. Create embeddings and store
const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small" });
const vectorstore = await MemoryVectorStore.fromDocuments(splits, embeddings);
// 4. Create retriever
const retriever = vectorstore.asRetriever({ k: 4 });
// 5. Use in RAG
const model = new ChatOpenAI({ model: "gpt-4.1" });
const query = "What is RAG?";
const relevantDocs = await retriever.invoke(query);
const context = relevantDocs.map(doc => doc.pageContent).join("\n\n");
const response = await model.invoke([
{ role: "system", content: `Use this context:\n\n${context}` },
{ role: "user", content: query },
]);
```
</typescript>
<typescript>
Load a PDF file and extract each page as a separate document.
```typescript
import { PDFLoader } from "@langchain/community/document_loaders/fs/pdf";
const loader = new PDFLoader("./document.pdf");
const docs = await loader.load();
console.log(`Loaded ${docs.length} pages`);
```
</typescript>
<typescript>
Fetch and parse content from a web URL into a document using Cheerio.
```typescript
import { CheerioWebBaseLoader } from "@langchain/community/document_loaders/web/cheerio";
const loader = new CheerioWebBaseLoader("https://docs.langchain.com");
const docs = await loader.load();
```
</typescript>
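<typescript>
Load every supported file in a directory by mapping extensions to loaders (a sketch reconstructing the missing directory-loading example; assumes @langchain/classic exposes the classic fs loaders and that a local ./docs folder exists).
```typescript
import { DirectoryLoader } from "@langchain/classic/document_loaders/fs/directory";
import { TextLoader } from "@langchain/classic/document_loaders/fs/text";
import { PDFLoader } from "@langchain/community/document_loaders/fs/pdf";

// Map each file extension to the loader that should handle it
const loader = new DirectoryLoader("./docs", {
  ".txt": (path) => new TextLoader(path),
  ".pdf": (path) => new PDFLoader(path),
});
const docs = await loader.load();
console.log(`Loaded ${docs.length} documents`);
```
</typescript>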
</ex-loading-directory>
---
## Text Splitting
<ex-text-splitting>
<python>
Split documents into chunks using RecursiveCharacterTextSplitter with configurable size and overlap.
```python
from langchain_text_splitters import RecursiveCharacterTextSplitter
splitter = RecursiveCharacterTextSplitter(
chunk_size=1000, # Characters per chunk
chunk_overlap=200, # Overlap for context continuity
separators=["\n\n", "\n", " ", ""], # Split hierarchy
)
splits = splitter.split_documents(docs)
```
</python>
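<typescript>
The TypeScript equivalent, with the same parameters (a sketch; assumes docs is an array of Document objects).
```typescript
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000, // Characters per chunk
  chunkOverlap: 200, // Overlap for context continuity
  separators: ["\n\n", "\n", " ", ""], // Split hierarchy
});
const splits = await splitter.splitDocuments(docs);
```
</typescript>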
</ex-text-splitting>
---
## Vector Stores
<ex-vector-stores>
<typescript>
Create a Chroma vector store connected to a running Chroma server.
```typescript
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { OpenAIEmbeddings } from "@langchain/openai";
const vectorstore = await Chroma.fromDocuments(
splits,
new OpenAIEmbeddings(),
{ collectionName: "my-collection", url: "http://localhost:8000" }
);
```
</typescript>
<typescript>
Create a FAISS vector store, save it to disk, and reload it.
```typescript
import { FaissStore } from "@langchain/community/vectorstores/faiss";
const vectorstore = await FaissStore.fromDocuments(splits, embeddings);
await vectorstore.save("./faiss_index");
const loaded = await FaissStore.load("./faiss_index", embeddings);
```
</typescript>
<typescript>
Perform similarity search and retrieve results with relevance scores.
```typescript
// Basic search
const results = await vectorstore.similaritySearch(query, 5);
// With scores
const resultsWithScore = await vectorstore.similaritySearchWithScore(query, 5);
for (const [doc, score] of resultsWithScore) {
console.log(`Score: ${score}, Content: ${doc.pageContent}`);
}
```
</typescript>
</ex-vector-stores>
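<ex-metadata-filtering>
<typescript>
Restrict similarity search to documents whose metadata matches a filter (a sketch reconstructing the missing metadata-filtering example; assumes a Chroma store and a hypothetical source metadata field).
```typescript
// Only chunks whose metadata matches the filter are considered
const results = await vectorstore.similaritySearch("What is RAG?", 5, {
  source: "docs/rag.md",
});
```
</typescript>
</ex-metadata-filtering>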
<ex-rag-with-agent>
<python>
Create an agent that uses RAG as a tool for answering questions.
```python
from langchain.agents import create_agent
from langchain.tools import tool
@tool
def search_docs(query: str) -> str:
"""Search documentation for relevant information."""
docs = retriever.invoke(query)
return "\n\n".join([d.page_content for d in docs])
agent = create_agent(
model="gpt-4.1",
tools=[search_docs],
)
result = agent.invoke({
"messages": [{"role": "user", "content": "How do I create an agent?"}]
})
```
</python>
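<typescript>
A TypeScript counterpart (a sketch; assumes LangChain v1's createAgent, a retriever already in scope, and zod for the tool schema).
```typescript
import { createAgent } from "langchain";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// Expose the retriever as a tool the agent can call
const searchDocs = tool(
  async ({ query }) => {
    const docs = await retriever.invoke(query);
    return docs.map((d) => d.pageContent).join("\n\n");
  },
  {
    name: "search_docs",
    description: "Search documentation for relevant information.",
    schema: z.object({ query: z.string() }),
  }
);

const agent = createAgent({ model: "openai:gpt-4.1", tools: [searchDocs] });
const result = await agent.invoke({
  messages: [{ role: "user", content: "How do I create an agent?" }],
});
```
</typescript>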
</ex-rag-with-agent>
<boundaries>
### What You CAN Configure
- Chunk size/overlap
- Embedding model
- Number of results (k)
- Metadata filters
- Search algorithms: Similarity, MMR
### What You CANNOT Configure
- Embedding dimensions (fixed per model)
- Mixing embeddings from different models in the same store
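For illustration, the configurable options above all surface as retriever settings (a sketch; vectorstore and the filter value are assumed):
```typescript
const retriever = vectorstore.asRetriever({
  k: 4, // Number of results
  searchType: "mmr", // Maximal marginal relevance for more diverse results
  filter: { source: "docs/rag.md" }, // Hypothetical metadata filter
});
```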
</boundaries>
<fix-chunk-size>
<python>
A chunk size of 500-1500 characters is typically a good starting range.
```python
# WRONG: Too small (loses context) or too large (hits limits)
splitter = RecursiveCharacterTextSplitter(chunk_size=50)
splitter = RecursiveCharacterTextSplitter(chunk_size=10000)
# CORRECT
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
```
</python>
</fix-chunk-size>
<fix-chunk-overlap>
<python>
Use overlap (10-20% of the chunk size) to maintain context across chunk boundaries.
```python
# WRONG: No overlap - context breaks at boundaries
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
# CORRECT: 10-20% overlap
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
```
</python>
</fix-chunk-overlap>
<fix-persistence>
<typescript>
Use a persistent vector store instead of an in-memory one to avoid data loss.
```typescript
// WRONG: Memory - lost on restart
const vectorstore = await MemoryVectorStore.fromDocuments(docs, embeddings);
// CORRECT
const vectorstore = await Chroma.fromDocuments(docs, embeddings, { collectionName: "my-collection" });
```
</typescript>
</fix-persistence>
<fix-embedding-consistency>
<typescript>
Use the same embedding model for indexing and querying.
```typescript
const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small" });
const vectorstore = await Chroma.fromDocuments(docs, embeddings);
const retriever = vectorstore.asRetriever(); // Uses same embeddings
```
</typescript>
</fix-embedding-consistency>
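<fix-faiss-deserialization>
<python>
Opt in to pickle deserialization when reloading a FAISS index you created yourself (a sketch reconstructing the missing example; recent langchain versions require allow_dangerous_deserialization, which should only be used with trusted files).
```python
from langchain_community.vectorstores import FAISS

# WRONG: recent versions refuse to unpickle by default
vectorstore = FAISS.load_local("./faiss_index", embeddings)  # raises ValueError
# CORRECT: explicitly allow deserialization of an index you trust
vectorstore = FAISS.load_local(
    "./faiss_index", embeddings, allow_dangerous_deserialization=True
)
```
</python>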
</fix-faiss-deserialization>
<fix-dimension-mismatch>
<python>
Ensure embedding dimensions match the vector store index dimensions.
```python
# WRONG: Index has 1536 dimensions but using 512-dim embeddings
pc.create_index(name="idx", dimension=1536, metric="cosine")
vectorstore = PineconeVectorStore.from_documents(
docs, OpenAIEmbeddings(model="text-embedding-3-small", dimensions=512), index=pc.Index("idx")
) # Error: dimension mismatch!
# CORRECT: Match dimensions
embeddings = OpenAIEmbeddings()  # Default 1536 dimensions
vectorstore = PineconeVectorStore.from_documents(docs, embeddings, index=pc.Index("idx"))
```
</python>
</fix-dimension-mismatch>