langchain-rag
<overview>
Retrieval Augmented Generation (RAG) enhances LLM responses by fetching relevant context from external knowledge sources.
Pipeline:
- Index: Load → Split → Embed → Store
- Retrieve: Query → Embed → Search → Return docs
- Generate: Docs + Query → LLM → Response
Key Components:
- Document Loaders: Ingest data from files, web, databases
- Text Splitters: Break documents into chunks
- Embeddings: Convert text to vectors
- Vector Stores: Store and search embeddings
</overview>
<vectorstore-selection>
| Vector Store | Use Case | Persistence |
|---|---|---|
| InMemory | Testing | Memory only |
| FAISS | Local, high performance | Disk |
| Chroma | Development | Disk |
| Pinecone | Production, managed | Cloud |
</vectorstore-selection>
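These stores share the same VectorStore interface, so swapping one in is mostly a construction change. A minimal sketch for the managed option, assuming the langchain-pinecone package, a PINECONE_API_KEY in the environment, and a pre-created index named "my-index" (all assumptions, not shown elsewhere in this document):

```python
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from langchain_pinecone import PineconeVectorStore

docs = [Document(page_content="RAG = Retrieval Augmented Generation.")]
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")

# "my-index" is a hypothetical, pre-created Pinecone index
# (dimension 1536 to match this embedding model's default)
vectorstore = PineconeVectorStore.from_documents(
    docs,
    embedding=embeddings,
    index_name="my-index",
)
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
```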
Complete RAG Pipeline
<ex-basic-rag-setup>
<python>
End-to-end RAG pipeline: load documents, split into chunks, embed, store, retrieve, and generate a response.
```python
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_core.documents import Document

# 1. Load documents
docs = [
    Document(page_content="LangChain is a framework for LLM apps.", metadata={}),
    Document(page_content="RAG = Retrieval Augmented Generation.", metadata={}),
]

# 2. Split documents
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
splits = splitter.split_documents(docs)

# 3. Create embeddings and store
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vectorstore = InMemoryVectorStore.from_documents(splits, embeddings)

# 4. Create retriever
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

# 5. Use in RAG
model = ChatOpenAI(model="gpt-4.1")
query = "What is RAG?"
relevant_docs = retriever.invoke(query)
context = "\n\n".join([doc.page_content for doc in relevant_docs])
response = model.invoke([
    {"role": "system", "content": f"Use this context:\n\n{context}"},
    {"role": "user", "content": query},
])
```
</python>
<typescript>
End-to-end RAG pipeline: load documents, split into chunks, embed, store, retrieve, and generate a response.
```typescript
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "@langchain/classic/vectorstores/memory";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { Document } from "@langchain/core/documents";
// 1. Load documents
const docs = [
  new Document({ pageContent: "LangChain is a framework for LLM apps.", metadata: {} }),
  new Document({ pageContent: "RAG = Retrieval Augmented Generation.", metadata: {} }),
];
// 2. Split documents
const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 500, chunkOverlap: 50 });
const splits = await splitter.splitDocuments(docs);
// 3. Create embeddings and store
const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small" });
const vectorstore = await MemoryVectorStore.fromDocuments(splits, embeddings);
// 4. Create retriever
const retriever = vectorstore.asRetriever({ k: 4 });
// 5. Use in RAG
const model = new ChatOpenAI({ model: "gpt-4.1" });
const query = "What is RAG?";
const relevantDocs = await retriever.invoke(query);
const context = relevantDocs.map(doc => doc.pageContent).join("\n\n");
const response = await model.invoke([
  { role: "system", content: `Use this context:\n\n${context}` },
  { role: "user", content: query },
]);
```
</typescript>
</ex-basic-rag-setup>
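In the Python version, `invoke` returns the model's message object; a quick usage note (sketch) for reading the generated answer:

```python
# The generated answer lives on the message's .content attribute
print(response.content)
```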
Document Loaders
<ex-loading-pdf>
<python>
Load a PDF file and extract each page as a separate document.
```python
from langchain_community.document_loaders import PyPDFLoader

loader = PyPDFLoader("./document.pdf")
docs = loader.load()
print(f"Loaded {len(docs)} pages")
```
</python>
<typescript>
Load a PDF file and extract each page as a separate document.
```typescript
import { PDFLoader } from "@langchain/community/document_loaders/fs/pdf";

const loader = new PDFLoader("./document.pdf");
const docs = await loader.load();
console.log(`Loaded ${docs.length} pages`);
```
</typescript>
</ex-loading-pdf>
<ex-loading-web-pages>
<python>
Fetch and parse content from a web URL into a document.
```python
from langchain_community.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://docs.langchain.com")
docs = loader.load()
```
</python>
<typescript>
Fetch and parse content from a web URL into a document using Cheerio.
```typescript
import { CheerioWebBaseLoader } from "@langchain/community/document_loaders/web/cheerio";

const loader = new CheerioWebBaseLoader("https://docs.langchain.com");
const docs = await loader.load();
```
</typescript>
</ex-loading-web-pages>
<ex-loading-directory>
<python>
Load all text files from a directory using a glob pattern.
```python
from langchain_community.document_loaders import DirectoryLoader, TextLoader

# Load all text files from directory
loader = DirectoryLoader(
    "path/to/documents",
    glob="**/*.txt",  # Pattern for files to load
    loader_cls=TextLoader
)
docs = loader.load()
```
</python>
</ex-loading-directory>
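For large directories, materializing every file with load() can be memory-hungry; standard loaders also expose a lazy_load() iterator. A sketch (index_chunk is a hypothetical stand-in for your per-document handling):

```python
# Stream documents one at a time instead of building the full list in memory
for doc in loader.lazy_load():
    index_chunk(doc)  # hypothetical per-document processing
```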
---
Text Splitting
<ex-text-splitting>
<python>
Split documents into chunks using RecursiveCharacterTextSplitter with configurable size and overlap.
```python
from langchain_text_splitters import RecursiveCharacterTextSplitter
splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,     # Characters per chunk
    chunk_overlap=200,   # Overlap for context continuity
    separators=["\n\n", "\n", " ", ""],  # Split hierarchy
)
splits = splitter.split_documents(docs)
```
</python>
</ex-text-splitting>
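The sizes above count characters. If you need token-aware chunks (for example, to respect an embedding model's token limit), the splitter also offers a tiktoken-based constructor; a sketch, assuming the tiktoken package is installed:

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Measures chunk length in tokens rather than characters
token_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    encoding_name="cl100k_base",
    chunk_size=512,    # tokens per chunk
    chunk_overlap=50,  # token overlap
)
splits = token_splitter.split_documents(docs)
```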
---
Vector Stores
<ex-chroma-vectorstore>
<python>
Create a persistent Chroma vector store and reload it from disk.
```python
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings
vectorstore = Chroma.from_documents(
    documents=splits,
    embedding=OpenAIEmbeddings(),
    persist_directory="./chroma_db",
    collection_name="my-collection",
)

# Load existing
vectorstore = Chroma(
    persist_directory="./chroma_db",
    embedding_function=OpenAIEmbeddings(),
    collection_name="my-collection",
)
```
</python>
<typescript>
Create a Chroma vector store connected to a running Chroma server.
```typescript
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { OpenAIEmbeddings } from "@langchain/openai";
const vectorstore = await Chroma.fromDocuments(
  splits,
  new OpenAIEmbeddings(),
  { collectionName: "my-collection", url: "http://localhost:8000" }
);
```
</typescript>
</ex-chroma-vectorstore>
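A persisted store can be extended without rebuilding it; a sketch using the generic add_documents method that LangChain vector stores share:

```python
from langchain_core.documents import Document

# Embed and append new chunks to the existing collection
vectorstore.add_documents([
    Document(page_content="New content to index.", metadata={"topic": "update"}),
])
```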
<ex-faiss-vectorstore>
<python>
Create a FAISS vector store, save it to disk, and reload it.
```python
from langchain_community.vectorstores import FAISS

vectorstore = FAISS.from_documents(splits, embeddings)
vectorstore.save_local("./faiss_index")

# Load (requires allow_dangerous_deserialization)
loaded = FAISS.load_local(
    "./faiss_index",
    embeddings,
    allow_dangerous_deserialization=True
)
```
</python>
<typescript>
Create a FAISS vector store, save it to disk, and reload it.
```typescript
import { FaissStore } from "@langchain/community/vectorstores/faiss";
const vectorstore = await FaissStore.fromDocuments(splits, embeddings);
await vectorstore.save("./faiss_index");
const loaded = await FaissStore.load("./faiss_index", embeddings);
```
</typescript>
</ex-faiss-vectorstore>
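Separately built FAISS indexes can also be combined, provided both were built with the same embedding model; a sketch using merge_from:

```python
# Fold the reloaded index into the in-memory one (same embedding model required)
vectorstore.merge_from(loaded)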
Retrieval
<ex-similarity-search>
<python>
Perform similarity search and retrieve results with relevance scores.
```python
# Basic search
results = vectorstore.similarity_search(query, k=5)

# With scores
results_with_score = vectorstore.similarity_search_with_score(query, k=5)
for doc, score in results_with_score:
    print(f"Score: {score}, Content: {doc.page_content}")
```
</python>
<typescript>
Perform similarity search and retrieve results with relevance scores.
```typescript
// Basic search
const results = await vectorstore.similaritySearch(query, 5);
// With scores
const resultsWithScore = await vectorstore.similaritySearchWithScore(query, 5);
for (const [doc, score] of resultsWithScore) {
  console.log(`Score: ${score}, Content: ${doc.pageContent}`);
}
```
</typescript>
</ex-similarity-search>
<ex-mmr-search>
<python>
Use MMR search to balance relevance and diversity in retrieved results.
```python
# MMR balances relevance and diversity
retriever = vectorstore.as_retriever(
    search_type="mmr",
    search_kwargs={"fetch_k": 20, "lambda_mult": 0.5, "k": 5},
)
```
</python>
</ex-mmr-search>
<ex-metadata-filtering>
<python>
Add metadata to documents and filter search results by metadata properties.
```python
# Add metadata when creating documents
docs = [
    Document(
        page_content="Python programming guide",
        metadata={"language": "python", "topic": "programming"}
    ),
]

# Search with filter
results = vectorstore.similarity_search(
    "programming",
    k=5,
    filter={"language": "python"}  # Only Python docs
)
```
</python>
</ex-metadata-filtering>
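Filters can also be baked into a retriever so every query is scoped the same way; a sketch (filter support and syntax vary by vector store):

```python
# Apply the metadata filter at the retriever level
retriever = vectorstore.as_retriever(
    search_kwargs={"k": 5, "filter": {"language": "python"}},
)
docs = retriever.invoke("programming")
```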
<ex-rag-with-agent>
<python>
Create an agent that uses RAG as a tool for answering questions.
```python
from langchain.agents import create_agent
from langchain.tools import tool
@tool
def search_docs(query: str) -> str:
    """Search documentation for relevant information."""
    docs = retriever.invoke(query)
    return "\n\n".join([d.page_content for d in docs])
agent = create_agent(
    model="gpt-4.1",
    tools=[search_docs],
)
result = agent.invoke({
    "messages": [{"role": "user", "content": "How do I create an agent?"}]
})
```
</python>
<typescript>
Create an agent that uses RAG as a tool for answering questions.
```typescript
import { createAgent } from "langchain";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const searchDocs = tool(
  async (input) => {
    const docs = await retriever.invoke(input.query);
    return docs.map(d => d.pageContent).join("\n\n");
  },
  {
    name: "search_docs",
    description: "Search documentation for relevant information.",
    schema: z.object({ query: z.string() }),
  }
);
const agent = createAgent({
  model: "gpt-4.1",
  tools: [searchDocs],
});
const result = await agent.invoke({
  messages: [{ role: "user", content: "How do I create an agent?" }],
});
```
</typescript>
</ex-rag-with-agent>
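The agent returns the full message state; a sketch for reading the final answer, assuming the messages-based state shape used above:

```python
# The last message in the returned state is the agent's final response
print(result["messages"][-1].content)
```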
<boundaries>
What You CAN Configure
- Chunk size/overlap
- Embedding model
- Number of results (k)
- Metadata filters
- Search algorithms: Similarity, MMR
What You CANNOT Configure
- Embedding dimensions (per model)
- Mix embeddings from different models in the same store
</boundaries>
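The configurable knobs compose on a single retriever; a sketch combining several of them (values illustrative):

```python
retriever = vectorstore.as_retriever(
    search_type="mmr",  # or "similarity"
    search_kwargs={
        "k": 5,              # number of results
        "fetch_k": 20,       # MMR candidate pool
        "lambda_mult": 0.5,  # MMR relevance/diversity trade-off
        "filter": {"topic": "programming"},  # metadata filter (store-dependent)
    },
)
```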
<fix-chunk-size>
<python>
Chunk size 500-1500 characters is typically good.
```python
# WRONG: Too small (loses context) or too large (hits limits)
splitter = RecursiveCharacterTextSplitter(chunk_size=50)
splitter = RecursiveCharacterTextSplitter(chunk_size=10000)

# CORRECT
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
```
</python>
<typescript>
Chunk size 500-1500 is typically good.
```typescript
// WRONG: Too small or too large
const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 50 });

// CORRECT
const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000, chunkOverlap: 200 });
```
</typescript>
</fix-chunk-size>
<fix-chunk-overlap>
<python>
Use 10-20% chunk overlap so context does not break at chunk boundaries.
```python
# WRONG: No overlap - context breaks at boundaries
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)

# CORRECT: 10-20% overlap
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
```
</python>
</fix-chunk-overlap>
<fix-persist-vectorstore>
<python>
Use persistent vector store instead of in-memory to avoid data loss.
```python
# WRONG: InMemory - lost on restart
vectorstore = InMemoryVectorStore.from_documents(docs, embeddings)

# CORRECT
vectorstore = Chroma.from_documents(docs, embeddings, persist_directory="./chroma_db")
```
</python>
<typescript>
Use persistent vector store instead of in-memory to avoid data loss.
```typescript
// WRONG: Memory - lost on restart
const vectorstore = await MemoryVectorStore.fromDocuments(docs, embeddings);
// CORRECT
const vectorstore = await Chroma.fromDocuments(docs, embeddings, { collectionName: "my-collection" });
```
</typescript>
</fix-persist-vectorstore>
<fix-same-embeddings>
<python>
Use the same embedding model for indexing and querying.
```python
# WRONG: Different embeddings for index and query - incompatible!
vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings(model="text-embedding-3-small"))
retriever = vectorstore.as_retriever(embeddings=OpenAIEmbeddings(model="text-embedding-3-large"))

# CORRECT: Same model
embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vectorstore = Chroma.from_documents(docs, embeddings)
retriever = vectorstore.as_retriever() # Uses same embeddings
```
</python>
<typescript>
Use the same embedding model for indexing and querying.
```typescript
const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small" });
const vectorstore = await Chroma.fromDocuments(docs, embeddings);
const retriever = vectorstore.asRetriever(); // Uses same embeddings
```
</typescript>
</fix-same-embeddings>
<fix-faiss-deserialization>
<python>
FAISS.load_local requires explicitly opting in to pickle deserialization.
```python
# WRONG: Will raise error
loaded_store = FAISS.load_local("./faiss_index", embeddings)

# CORRECT
loaded_store = FAISS.load_local("./faiss_index", embeddings, allow_dangerous_deserialization=True)
```
</python>
</fix-faiss-deserialization>
<fix-dimension-mismatch>
<python>
Ensure embedding dimensions match the vector store index dimensions.
```python
# WRONG: Index has 1536 dimensions but using 512-dim embeddings
pc.create_index(name="idx", dimension=1536, metric="cosine")
vectorstore = PineconeVectorStore.from_documents(
    docs, OpenAIEmbeddings(model="text-embedding-3-small", dimensions=512), index=pc.Index("idx")
)  # Error: dimension mismatch!

# CORRECT: Match dimensions
embeddings = OpenAIEmbeddings() # Default 1536
```
</python>
</fix-dimension-mismatch>
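A quick way to confirm what dimension an embedding model actually produces before creating the index (a sketch):

```python
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
# Embed a probe string and inspect the vector length
print(len(embeddings.embed_query("dimension probe")))  # e.g. 1536
```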