Pinecone - Managed Vector Database
The vector database for production AI applications.
When to use Pinecone
Use when:
- Need managed, serverless vector database
- Production RAG applications
- Auto-scaling required
- Low latency critical (<100ms)
- Don't want to manage infrastructure
- Need hybrid search (dense + sparse vectors)
Metrics:
- Fully managed SaaS
- Auto-scales to billions of vectors
- p95 latency <100ms
- 99.9% uptime SLA
Consider alternatives:
- Chroma: Self-hosted, open-source
- FAISS: Offline, pure similarity search
- Weaviate: Self-hosted with more features
Quick start
Installation
```bash
# Note: recent client releases ship as the `pinecone` package;
# `pinecone-client` is the older package name
pip install pinecone-client
```
Basic usage
```python
from pinecone import Pinecone, ServerlessSpec
```
Initialize
```python
pc = Pinecone(api_key="your-api-key")
```
Create index
```python
pc.create_index(
    name="my-index",
    dimension=1536,   # Must match embedding dimension
    metric="cosine",  # or "euclidean", "dotproduct"
    spec=ServerlessSpec(cloud="aws", region="us-east-1")
)
```
Connect to index
```python
index = pc.Index("my-index")
```
Upsert vectors
```python
index.upsert(vectors=[
    {"id": "vec1", "values": [0.1, 0.2, ...], "metadata": {"category": "A"}},
    {"id": "vec2", "values": [0.3, 0.4, ...], "metadata": {"category": "B"}}
])
```
Query
```python
results = index.query(
    vector=[0.1, 0.2, ...],
    top_k=5,
    include_metadata=True
)
print(results["matches"])
```
Core operations
Create index
Serverless (recommended)
```python
pc.create_index(
    name="my-index",
    dimension=1536,
    metric="cosine",
    spec=ServerlessSpec(
        cloud="aws",  # or "gcp", "azure"
        region="us-east-1"
    )
)
```
Pod-based (for consistent performance)
```python
from pinecone import PodSpec

pc.create_index(
    name="my-index",
    dimension=1536,
    metric="cosine",
    spec=PodSpec(
        environment="us-east1-gcp",
        pod_type="p1.x1"
    )
)
```
Upsert vectors
Single upsert
```python
index.upsert(vectors=[
    {
        "id": "doc1",
        "values": [0.1, 0.2, ...],  # 1536 dimensions
        "metadata": {
            "text": "Document content",
            "category": "tutorial",
            "timestamp": "2025-01-01"
        }
    }
])
```
Batch upsert (recommended)
```python
vectors = [
    {"id": f"vec{i}", "values": embedding, "metadata": metadata}
    for i, (embedding, metadata) in enumerate(zip(embeddings, metadatas))
]
index.upsert(vectors=vectors, batch_size=100)
```
Query vectors
Basic query
```python
results = index.query(
    vector=[0.1, 0.2, ...],
    top_k=10,
    include_metadata=True,
    include_values=False
)
```
With metadata filtering
```python
results = index.query(
    vector=[0.1, 0.2, ...],
    top_k=5,
    filter={"category": {"$eq": "tutorial"}}
)
```
Namespace query
```python
results = index.query(
    vector=[0.1, 0.2, ...],
    top_k=5,
    namespace="production"
)
```
Access results
```python
for match in results["matches"]:
    print(f"ID: {match['id']}")
    print(f"Score: {match['score']}")
    print(f"Metadata: {match['metadata']}")
```
Metadata filtering
Exact match
```python
filter = {"category": "tutorial"}
```
Comparison
```python
filter = {"price": {"$gte": 100}}  # $gt, $gte, $lt, $lte, $ne
```
Logical operators
```python
filter = {
    "$and": [
        {"category": "tutorial"},
        {"difficulty": {"$lte": 3}}
    ]
}  # Also: $or
```
In operator
```python
filter = {"tags": {"$in": ["python", "ml"]}}
```
Namespaces
Partition data by namespace
```python
index.upsert(
    vectors=[{"id": "vec1", "values": [...]}],
    namespace="user-123"
)
```
Query specific namespace
```python
results = index.query(
    vector=[...],
    namespace="user-123",
    top_k=5
)
```
List namespaces
```python
stats = index.describe_index_stats()
print(stats['namespaces'])
```
Hybrid search (dense + sparse)
Upsert with sparse vectors
```python
index.upsert(vectors=[
    {
        "id": "doc1",
        "values": [0.1, 0.2, ...],  # Dense vector
        "sparse_values": {
            "indices": [10, 45, 123],  # Token IDs
            "values": [0.5, 0.3, 0.8]  # TF-IDF scores
        },
        "metadata": {"text": "..."}
    }
])
```
Hybrid query
```python
# Note: query() has no server-side weighting parameter; the dense/sparse
# balance is set client-side by scaling the two vectors before querying
# (e.g. dense * alpha, sparse * (1 - alpha), with 0 <= alpha <= 1)
results = index.query(
    vector=[0.1, 0.2, ...],
    sparse_vector={
        "indices": [10, 45],
        "values": [0.5, 0.3]
    },
    top_k=5
)
```
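Since the dense/sparse balance is applied client-side, a small convex-combination helper can prepare both vectors before the query. `hybrid_score_norm` is a local sketch of that pattern, not a method of the Pinecone client:

```python
def hybrid_score_norm(dense, sparse, alpha):
    """Scale dense by alpha and sparse by (1 - alpha); alpha=1 is dense-only."""
    if not 0 <= alpha <= 1:
        raise ValueError("alpha must be between 0 and 1")
    scaled_dense = [v * alpha for v in dense]
    scaled_sparse = {
        "indices": sparse["indices"],
        "values": [v * (1 - alpha) for v in sparse["values"]],
    }
    return scaled_dense, scaled_sparse

# Usage with hypothetical vectors; the results would be passed to
# index.query(vector=dense, sparse_vector=sparse, top_k=5)
dense, sparse = hybrid_score_norm(
    [0.2, 0.4], {"indices": [10, 45], "values": [0.5, 0.3]}, alpha=0.5
)
```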
LangChain integration
```python
from langchain_pinecone import PineconeVectorStore
from langchain_openai import OpenAIEmbeddings
```
Create vector store
```python
vectorstore = PineconeVectorStore.from_documents(
    documents=docs,
    embedding=OpenAIEmbeddings(),
    index_name="my-index"
)
```
Query
```python
results = vectorstore.similarity_search("query", k=5)
```
With metadata filter
```python
results = vectorstore.similarity_search(
    "query",
    k=5,
    filter={"category": "tutorial"}
)
```
As retriever
```python
retriever = vectorstore.as_retriever(search_kwargs={"k": 10})
```
LlamaIndex integration
```python
from llama_index.vector_stores.pinecone import PineconeVectorStore
```
Connect to Pinecone
```python
pc = Pinecone(api_key="your-key")
pinecone_index = pc.Index("my-index")
```
Create vector store
```python
vector_store = PineconeVectorStore(pinecone_index=pinecone_index)
```
Use in LlamaIndex
```python
from llama_index.core import StorageContext, VectorStoreIndex

storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)
```
Index management
List indices
```python
indexes = pc.list_indexes()
```
Describe index
```python
index_info = pc.describe_index("my-index")
print(index_info)
```
Get index stats
```python
stats = index.describe_index_stats()
print(f"Total vectors: {stats['total_vector_count']}")
print(f"Namespaces: {stats['namespaces']}")
```
Delete index
```python
pc.delete_index("my-index")
```
Delete vectors
Delete by ID
```python
index.delete(ids=["vec1", "vec2"])
```
Delete by filter
```python
index.delete(filter={"category": "old"})
```
Delete all in namespace
```python
index.delete(delete_all=True, namespace="test")
```
Delete all vectors in the index
```python
index.delete(delete_all=True)
```
Best practices
- Use serverless - Auto-scaling, cost-effective
- Batch upserts - More efficient (100-200 per batch)
- Add metadata - Enable filtering
- Use namespaces - Isolate data by user/tenant
- Monitor usage - Check Pinecone dashboard
- Optimize filters - Index frequently filtered fields
- Test with free tier - 1 index, 100K vectors free
- Use hybrid search - Better quality
- Set appropriate dimensions - Match embedding model
- Regular backups - Export important data
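The batching advice above can be sketched as a tiny local helper; `chunked` is hypothetical (the official client can also batch for you via `upsert(..., batch_size=100)`):

```python
def chunked(vectors, batch_size=100):
    """Yield successive batches of at most batch_size vectors."""
    for i in range(0, len(vectors), batch_size):
        yield vectors[i:i + batch_size]

# Hypothetical payload: 250 small vectors
vectors = [{"id": f"vec{i}", "values": [0.0] * 8} for i in range(250)]
batches = list(chunked(vectors))
# Each batch would then be sent with index.upsert(vectors=batch)
```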
Performance
| Operation | Latency | Notes |
|---|---|---|
| Upsert | ~50-100ms | Per batch |
| Query (p50) | ~50ms | Depends on index size |
| Query (p95) | ~100ms | SLA target |
| Metadata filter | ~+10-20ms | Additional overhead |
Pricing (as of 2025)
Serverless:
- $0.096 per million read units
- $0.06 per million write units
- $0.06 per GB storage/month
Free tier:
- 1 serverless index
- 100K vectors (1536 dimensions)
- Great for prototyping
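As a back-of-envelope reading of the serverless rates above (hard-coded here; verify against the current pricing page), a hypothetical `monthly_cost` estimator:

```python
def monthly_cost(read_units, write_units, storage_gb):
    """Rough monthly USD cost using the serverless rates listed above."""
    reads = (read_units / 1_000_000) * 0.096   # $0.096 per million read units
    writes = (write_units / 1_000_000) * 0.06  # $0.06 per million write units
    storage = storage_gb * 0.06                # $0.06 per GB-month
    return reads + writes + storage

# e.g. 10M reads, 2M writes, 5 GB stored: 0.96 + 0.12 + 0.30 ≈ $1.38/month
print(round(monthly_cost(10_000_000, 2_000_000, 5), 2))
```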
Resources
- Website: https://www.pinecone.io
- Docs: https://docs.pinecone.io
- Console: https://app.pinecone.io
- Pricing: https://www.pinecone.io/pricing