google-gemini-embeddings
Compare original and translation side by side
🇺🇸
Original
English🇨🇳
Translation
ChineseGoogle Gemini Embeddings
Google Gemini Embeddings
Complete production-ready guide for Google Gemini embeddings API
This skill provides comprehensive coverage of the model for generating text embeddings, including SDK usage, REST API patterns, batch processing, RAG integration with Cloudflare Vectorize, and advanced use cases like semantic search and document clustering.
gemini-embedding-001Google Gemini embeddings API 完整生产就绪指南
本技能全面介绍了用于生成文本嵌入的模型,包括SDK使用、REST API模式、批量处理、与Cloudflare Vectorize的RAG集成,以及语义搜索、文档聚类等高级应用场景。
gemini-embedding-001Table of Contents
目录
1. Quick Start
1. 快速入门
Installation
安装
Install the Google Generative AI SDK:
bash
bun add @google/genai@^1.27.0For TypeScript projects:
bash
bun add -d typescript@^5.0.0安装Google Generative AI SDK:
bash
bun add @google/genai@^1.27.0针对TypeScript项目:
bash
bun add -d typescript@^5.0.0Environment Setup
环境配置
Set your Gemini API key as an environment variable:
bash
export GEMINI_API_KEY="your-api-key-here"Get your API key from: https://aistudio.google.com/apikey
将Gemini API密钥设置为环境变量:
bash
export GEMINI_API_KEY="your-api-key-here"First Embedding Example
首个嵌入示例
typescript
import { GoogleGenAI } from "@google/genai";
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const response = await ai.models.embedContent({
model: 'gemini-embedding-001',
content: 'What is the meaning of life?',
config: {
taskType: 'RETRIEVAL_QUERY',
outputDimensionality: 768
}
});
console.log(response.embedding.values); // [0.012, -0.034, ...]
console.log(response.embedding.values.length); // 768Result: A 768-dimension embedding vector representing the semantic meaning of the text.
typescript
import { GoogleGenAI } from "@google/genai";
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const response = await ai.models.embedContent({
model: 'gemini-embedding-001',
content: 'What is the meaning of life?',
config: {
taskType: 'RETRIEVAL_QUERY',
outputDimensionality: 768
}
});
console.log(response.embedding.values); // [0.012, -0.034, ...]
console.log(response.embedding.values.length); // 768结果:一个768维度的嵌入向量,代表文本的语义含义。
2. gemini-embedding-001 Model
2. gemini-embedding-001 模型
Model Specifications
模型规格
Current Model: (stable, production-ready)
gemini-embedding-001- Status: Stable
- Experimental: (deprecated October 2025, do not use)
gemini-embedding-exp-03-07
当前模型:(稳定,可用于生产环境)
gemini-embedding-001- 状态:稳定
- 实验性模型:(2025年10月弃用,请勿使用)
gemini-embedding-exp-03-07
Dimensions
维度
The model supports flexible output dimensionality using Matryoshka Representation Learning:
| Dimension | Use Case | Storage | Performance |
|---|---|---|---|
| 768 | Recommended for most use cases | Low | Fast |
| 1536 | Balance between accuracy and efficiency | Medium | Medium |
| 3072 | Maximum accuracy (default) | High | Slower |
Default: 3072 dimensions
Recommended: 768 dimensions for most RAG applications
Load when you need detailed comparisons of storage costs, accuracy trade-offs, or migration strategies between dimensions.
references/dimension-guide.mdLoad when comparing Gemini embeddings with OpenAI (text-embedding-3-small/large) or Cloudflare Workers AI (BGE).
references/model-comparison.md该模型支持基于Matryoshka表示学习的灵活输出维度:
| 维度 | 适用场景 | 存储成本 | 性能 |
|---|---|---|---|
| 768 | 推荐用于大多数场景 | 低 | 快 |
| 1536 | 在准确性与效率间取得平衡 | 中 | 中等 |
| 3072 | 最高准确性(默认值) | 高 | 较慢 |
默认值:3072维度
推荐值:768维度,适用于大多数RAG应用
当需要详细对比存储成本、准确性权衡或不同维度间的迁移策略时,加载。
references/dimension-guide.md当需要对比Gemini嵌入与OpenAI(text-embedding-3-small/large)或Cloudflare Workers AI(BGE)时,加载。
references/model-comparison.mdRate Limits
速率限制
| Tier | RPM | TPM | RPD |
|---|---|---|---|
| Free | 100 | 30,000 | 1,000 |
| Tier 1 | 3,000 | 1,000,000 | - |
RPM = Requests Per Minute, TPM = Tokens Per Minute, RPD = Requests Per Day
| 层级 | RPM | TPM | RPD |
|---|---|---|---|
| 免费层 | 100 | 30,000 | 1,000 |
| Tier 1 | 3,000 | 1,000,000 | - |
RPM = 每分钟请求数,TPM = 每分钟处理令牌数,RPD = 每日请求数
Context Window
上下文窗口
- Input Limit: 2,048 tokens per text
- Input Type: Text only (no images, audio, or video)
- 输入限制:单段文本最多2048个令牌
- 输入类型:仅支持文本(不支持图片、音频或视频)
3. Basic Embeddings
3. 基础嵌入
SDK Approach (Node.js)
SDK方法(Node.js)
Single text embedding:
typescript
import { GoogleGenAI } from "@google/genai";
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const response = await ai.models.embedContent({
model: 'gemini-embedding-001',
content: 'The quick brown fox jumps over the lazy dog',
config: {
taskType: 'SEMANTIC_SIMILARITY',
outputDimensionality: 768
}
});
console.log(response.embedding.values);
// [0.00388, -0.00762, 0.01543, ...]单文本嵌入:
typescript
import { GoogleGenAI } from "@google/genai";
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const response = await ai.models.embedContent({
model: 'gemini-embedding-001',
content: 'The quick brown fox jumps over the lazy dog',
config: {
taskType: 'SEMANTIC_SIMILARITY',
outputDimensionality: 768
}
});
console.log(response.embedding.values);
// [0.00388, -0.00762, 0.01543, ...]Fetch Approach (Cloudflare Workers)
Fetch方法(Cloudflare Workers)
For Workers/edge environments without SDK support:
typescript
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const apiKey = env.GEMINI_API_KEY;
const text = "What is the meaning of life?";
const response = await fetch(
'https://generativelanguage.googleapis.com/v1beta/models/gemini-embedding-001:embedContent',
{
method: 'POST',
headers: {
'x-goog-api-key': apiKey,
'Content-Type': 'application/json'
},
body: JSON.stringify({
content: {
parts: [{ text }]
},
taskType: 'RETRIEVAL_QUERY',
outputDimensionality: 768
})
}
);
const data = await response.json();
// Response format:
// {
// embedding: {
// values: [0.012, -0.034, ...]
// }
// }
return new Response(JSON.stringify(data), {
headers: { 'Content-Type': 'application/json' }
});
}
};适用于不支持SDK的Workers/边缘环境:
typescript
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const apiKey = env.GEMINI_API_KEY;
const text = "What is the meaning of life?";
const response = await fetch(
'https://generativelanguage.googleapis.com/v1beta/models/gemini-embedding-001:embedContent',
{
method: 'POST',
headers: {
'x-goog-api-key': apiKey,
'Content-Type': 'application/json'
},
body: JSON.stringify({
content: {
parts: [{ text }]
},
taskType: 'RETRIEVAL_QUERY',
outputDimensionality: 768
})
}
);
const data = await response.json();
// 响应格式:
// {
// embedding: {
// values: [0.012, -0.034, ...]
// }
// }
return new Response(JSON.stringify(data), {
headers: { 'Content-Type': 'application/json' }
});
}
};Response Parsing
响应解析
typescript
interface EmbeddingResponse {
embedding: {
values: number[];
};
}
const response: EmbeddingResponse = await ai.models.embedContent({
model: 'gemini-embedding-001',
content: 'Sample text',
config: { taskType: 'SEMANTIC_SIMILARITY', outputDimensionality: 768 }
});
const embedding: number[] = response.embedding.values;
const dimensions: number = embedding.length; // 768typescript
interface EmbeddingResponse {
embedding: {
values: number[];
};
}
const response: EmbeddingResponse = await ai.models.embedContent({
model: 'gemini-embedding-001',
content: 'Sample text',
config: { taskType: 'SEMANTIC_SIMILARITY', outputDimensionality: 768 }
});
const embedding: number[] = response.embedding.values;
const dimensions: number = embedding.length; // 7684. Batch Embeddings
4. 批量嵌入
Multiple Texts in One Request (SDK)
单次请求处理多个文本(SDK)
Generate embeddings for multiple texts simultaneously:
typescript
import { GoogleGenAI } from "@google/genai";
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const texts = [
"What is the meaning of life?",
"How does photosynthesis work?",
"Tell me about the history of the internet."
];
const response = await ai.models.embedContent({
model: 'gemini-embedding-001',
contents: texts, // Array of strings
config: {
taskType: 'RETRIEVAL_DOCUMENT',
outputDimensionality: 768
}
});
// Process each embedding
response.embeddings.forEach((embedding, index) => {
console.log(`Text ${index}: ${texts[index]}`);
console.log(`Embedding: ${embedding.values.slice(0, 5)}...`);
console.log(`Dimensions: ${embedding.values.length}`);
});同时为多个文本生成嵌入:
typescript
import { GoogleGenAI } from "@google/genai";
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const texts = [
"What is the meaning of life?",
"How does photosynthesis work?",
"Tell me about the history of the internet."
];
const response = await ai.models.embedContent({
model: 'gemini-embedding-001',
contents: texts, // 字符串数组
config: {
taskType: 'RETRIEVAL_DOCUMENT',
outputDimensionality: 768
}
});
// 处理每个嵌入
response.embeddings.forEach((embedding, index) => {
console.log(`文本 ${index}: ${texts[index]}`);
console.log(`嵌入向量: ${embedding.values.slice(0, 5)}...`);
console.log(`维度: ${embedding.values.length}`);
});Chunking for Rate Limits
针对速率限制的分块处理
When processing large datasets, chunk requests to stay within rate limits:
typescript
async function batchEmbedWithRateLimit(
texts: string[],
batchSize: number = 100, // Free tier: 100 RPM
delayMs: number = 60000 // 1 minute delay between batches
): Promise<number[][]> {
const allEmbeddings: number[][] = [];
for (let i = 0; i < texts.length; i += batchSize) {
const batch = texts.slice(i, i + batchSize);
console.log(`Processing batch ${i / batchSize + 1} (${batch.length} texts)`);
const response = await ai.models.embedContent({
model: 'gemini-embedding-001',
contents: batch,
config: {
taskType: 'RETRIEVAL_DOCUMENT',
outputDimensionality: 768
}
});
allEmbeddings.push(...response.embeddings.map(e => e.values));
// Wait before next batch (except last batch)
if (i + batchSize < texts.length) {
await new Promise(resolve => setTimeout(resolve, delayMs));
}
}
return allEmbeddings;
}
// Usage
const embeddings = await batchEmbedWithRateLimit(documents, 100);处理大型数据集时,将请求分块以避免超出速率限制:
typescript
async function batchEmbedWithRateLimit(
texts: string[],
batchSize: number = 100, // 免费层: 100 RPM
delayMs: number = 60000 // 批次间延迟1分钟
): Promise<number[][]> {
const allEmbeddings: number[][] = [];
for (let i = 0; i < texts.length; i += batchSize) {
const batch = texts.slice(i, i + batchSize);
console.log(`处理批次 ${i / batchSize + 1} (${batch.length} 条文本)`);
const response = await ai.models.embedContent({
model: 'gemini-embedding-001',
contents: batch,
config: {
taskType: 'RETRIEVAL_DOCUMENT',
outputDimensionality: 768
}
});
allEmbeddings.push(...response.embeddings.map(e => e.values));
// 等待下一批次(最后一批次除外)
if (i + batchSize < texts.length) {
await new Promise(resolve => setTimeout(resolve, delayMs));
}
}
return allEmbeddings;
}
// 使用示例
const embeddings = await batchEmbedWithRateLimit(documents, 100);5. Task Types
5. 任务类型
The parameter optimizes embeddings for specific use cases. Always specify a task type for best results.
taskTypetaskTypeAvailable Task Types (8 total)
可用任务类型(共8种)
| Task Type | Use Case | Example |
|---|---|---|
| RETRIEVAL_QUERY | User search queries | "How do I fix a flat tire?" |
| RETRIEVAL_DOCUMENT | Documents to be indexed/searched | Product descriptions, articles |
| SEMANTIC_SIMILARITY | Comparing text similarity | Duplicate detection, clustering |
| CLASSIFICATION | Categorizing texts | Spam detection, sentiment analysis |
| CLUSTERING | Grouping similar texts | Topic modeling, content organization |
| CODE_RETRIEVAL_QUERY | Code search queries | "function to sort array" |
| QUESTION_ANSWERING | Questions seeking answers | FAQ matching |
| FACT_VERIFICATION | Verifying claims with evidence | Fact-checking systems |
| 任务类型 | 适用场景 | 示例 |
|---|---|---|
| RETRIEVAL_QUERY | 用户搜索查询 | "如何修补漏气的轮胎?" |
| RETRIEVAL_DOCUMENT | 待索引/搜索的文档 | 产品描述、文章 |
| SEMANTIC_SIMILARITY | 文本相似度对比 | 重复内容检测、聚类 |
| CLASSIFICATION | 文本分类 | 垃圾邮件检测、情感分析 |
| CLUSTERING | 相似文本分组 | 主题建模、内容分类 |
| CODE_RETRIEVAL_QUERY | 代码搜索查询 | "排序数组的函数" |
| QUESTION_ANSWERING | 寻求答案的问题 | FAQ匹配 |
| FACT_VERIFICATION | 用证据验证声明 | 事实核查系统 |
RAG Systems (Most Common)
RAG系统(最常见场景)
typescript
// When embedding user queries
const queryEmbedding = await ai.models.embedContent({
model: 'gemini-embedding-001',
content: userQuery,
config: {
taskType: 'RETRIEVAL_QUERY', // ← Use RETRIEVAL_QUERY
outputDimensionality: 768
}
});
// When embedding documents for indexing
const docEmbedding = await ai.models.embedContent({
model: 'gemini-embedding-001',
content: documentText,
config: {
taskType: 'RETRIEVAL_DOCUMENT', // ← Use RETRIEVAL_DOCUMENT
outputDimensionality: 768
}
});Impact: Using correct task type improves search relevance by 10-30%.
typescript
// 嵌入用户查询时
const queryEmbedding = await ai.models.embedContent({
model: 'gemini-embedding-001',
content: userQuery,
config: {
taskType: 'RETRIEVAL_QUERY', // ← 使用RETRIEVAL_QUERY
outputDimensionality: 768
}
});
// 嵌入待索引文档时
const docEmbedding = await ai.models.embedContent({
model: 'gemini-embedding-001',
content: documentText,
config: {
taskType: 'RETRIEVAL_DOCUMENT', // ← 使用RETRIEVAL_DOCUMENT
outputDimensionality: 768
}
});影响:使用正确的任务类型可将搜索相关性提升10-30%。
6. Top 5 Errors
6. 五大常见错误
Error 1: Dimension Mismatch
错误1:维度不匹配
Error:
Vector dimensions do not match. Expected 768, got 3072Cause: Not specifying parameter (defaults to 3072).
outputDimensionalityFix:
typescript
// ❌ BAD: No outputDimensionality (defaults to 3072)
const embedding = await ai.models.embedContent({
model: 'gemini-embedding-001',
content: text
});
// ✅ GOOD: Match Vectorize index dimensions
const embedding = await ai.models.embedContent({
model: 'gemini-embedding-001',
content: text,
config: { outputDimensionality: 768 } // ← Match your index
});错误信息:
Vector dimensions do not match. Expected 768, got 3072原因:未指定参数(默认值为3072)。
outputDimensionality修复方案:
typescript
// ❌ 错误:未指定outputDimensionality(默认3072)
const embedding = await ai.models.embedContent({
model: 'gemini-embedding-001',
content: text
});
// ✅ 正确:匹配Vectorize索引维度
const embedding = await ai.models.embedContent({
model: 'gemini-embedding-001',
content: text,
config: { outputDimensionality: 768 } // ← 与你的索引维度匹配
});Error 2: Rate Limiting (429 Too Many Requests)
错误2:速率限制(429 请求过多)
Error:
429 Too Many Requests - Rate limit exceededCause: Exceeded 100 requests per minute (free tier).
Fix:
typescript
// ✅ GOOD: Exponential backoff
async function embedWithRetry(text: string, maxRetries = 3) {
for (let attempt = 0; attempt < maxRetries; attempt++) {
try {
return await ai.models.embedContent({
model: 'gemini-embedding-001',
content: text,
config: { taskType: 'SEMANTIC_SIMILARITY', outputDimensionality: 768 }
});
} catch (error: any) {
if (error.status === 429 && attempt < maxRetries - 1) {
const delay = Math.pow(2, attempt) * 1000; // 1s, 2s, 4s
await new Promise(resolve => setTimeout(resolve, delay));
continue;
}
throw error;
}
}
}错误信息:
429 Too Many Requests - Rate limit exceeded原因:超出免费层每分钟100次请求的限制。
修复方案:
typescript
// ✅ 正确:指数退避策略
async function embedWithRetry(text: string, maxRetries = 3) {
for (let attempt = 0; attempt < maxRetries; attempt++) {
try {
return await ai.models.embedContent({
model: 'gemini-embedding-001',
content: text,
config: { taskType: 'SEMANTIC_SIMILARITY', outputDimensionality: 768 }
});
} catch (error: any) {
if (error.status === 429 && attempt < maxRetries - 1) {
const delay = Math.pow(2, attempt) * 1000; // 1秒、2秒、4秒
await new Promise(resolve => setTimeout(resolve, delay));
continue;
}
throw error;
}
}
}Error 3: Text Truncation (Silent)
错误3:文本截断(无提示)
Error: No error! Text is silently truncated at 2,048 tokens.
Cause: Input text exceeds 2,048 token limit.
Fix: Chunk long texts before embedding:
typescript
function chunkText(text: string, maxTokens = 2000): string[] {
const words = text.split(/\s+/);
const chunks: string[] = [];
let currentChunk: string[] = [];
for (const word of words) {
currentChunk.push(word);
// Rough estimate: 1 token ≈ 0.75 words
if (currentChunk.length * 0.75 >= maxTokens) {
chunks.push(currentChunk.join(' '));
currentChunk = [];
}
}
if (currentChunk.length > 0) {
chunks.push(currentChunk.join(' '));
}
return chunks;
}问题:无错误提示!文本被静默截断至2048个令牌。
原因:输入文本超出2048个令牌的限制。
修复方案:嵌入前对长文本进行分块:
typescript
function chunkText(text: string, maxTokens = 2000): string[] {
const words = text.split(/\s+/);
const chunks: string[] = [];
let currentChunk: string[] = [];
for (const word of words) {
currentChunk.push(word);
// 粗略估算:1个令牌≈0.75个单词
if (currentChunk.length * 0.75 >= maxTokens) {
chunks.push(currentChunk.join(' '));
currentChunk = [];
}
}
if (currentChunk.length > 0) {
chunks.push(currentChunk.join(' '));
}
return chunks;
}Error 4: Incorrect Task Type
错误4:任务类型错误
Error: No error, but search quality is poor (10-30% worse).
Cause: Using wrong task type (e.g., for queries).
RETRIEVAL_DOCUMENTFix:
typescript
// ❌ BAD: Wrong task type for RAG query
const queryEmbedding = await ai.models.embedContent({
model: 'gemini-embedding-001',
content: userQuery,
config: { taskType: 'RETRIEVAL_DOCUMENT' } // ← Wrong!
});
// ✅ GOOD: Correct task types
const queryEmbedding = await ai.models.embedContent({
model: 'gemini-embedding-001',
content: userQuery,
config: { taskType: 'RETRIEVAL_QUERY', outputDimensionality: 768 }
});问题:无错误提示,但搜索质量较差(降低10-30%)。
原因:使用错误的任务类型(例如,对查询使用)。
RETRIEVAL_DOCUMENT修复方案:
typescript
// ❌ 错误:RAG查询使用错误的任务类型
const queryEmbedding = await ai.models.embedContent({
model: 'gemini-embedding-001',
content: userQuery,
config: { taskType: 'RETRIEVAL_DOCUMENT' } // ← 错误!
});
// ✅ 正确:使用正确的任务类型
const queryEmbedding = await ai.models.embedContent({
model: 'gemini-embedding-001',
content: userQuery,
config: { taskType: 'RETRIEVAL_QUERY', outputDimensionality: 768 }
});Error 5: Cosine Similarity Calculation Errors
错误5:余弦相似度计算错误
Error:
Similarity values out of range (-1.5 to 1.2)Cause: Using dot product instead of proper cosine similarity formula.
Fix:
typescript
// ✅ GOOD: Proper cosine similarity
function cosineSimilarity(a: number[], b: number[]): number {
if (a.length !== b.length) {
throw new Error('Vector dimensions must match');
}
let dotProduct = 0;
let magnitudeA = 0;
let magnitudeB = 0;
for (let i = 0; i < a.length; i++) {
dotProduct += a[i] * b[i];
magnitudeA += a[i] * a[i];
magnitudeB += b[i] * b[i];
}
if (magnitudeA === 0 || magnitudeB === 0) {
return 0; // Handle zero vectors
}
return dotProduct / (Math.sqrt(magnitudeA) * Math.sqrt(magnitudeB));
}Load for all 8 errors with detailed solutions, including batch size limits, vector storage precision loss, and model version confusion.
references/top-errors.md错误信息:
Similarity values out of range (-1.5 to 1.2)原因:使用点积而非正确的余弦相似度公式。
修复方案:
typescript
// ✅ 正确:标准余弦相似度计算
function cosineSimilarity(a: number[], b: number[]): number {
if (a.length !== b.length) {
throw new Error('向量维度必须匹配');
}
let dotProduct = 0;
let magnitudeA = 0;
let magnitudeB = 0;
for (let i = 0; i < a.length; i++) {
dotProduct += a[i] * b[i];
magnitudeA += a[i] * a[i];
magnitudeB += b[i] * b[i];
}
if (magnitudeA === 0 || magnitudeB === 0) {
return 0; // 处理零向量
}
return dotProduct / (Math.sqrt(magnitudeA) * Math.sqrt(magnitudeB));
}如需查看全部8种错误及详细解决方案(包括批次大小限制、向量存储精度损失、模型版本混淆等),请加载。
references/top-errors.md7. Best Practices
7. 最佳实践
Always Do
务必遵循
✅ Specify Task Type
typescript
const embedding = await ai.models.embedContent({
model: 'gemini-embedding-001',
content: text,
config: { taskType: 'RETRIEVAL_QUERY' } // ← Always specify
});✅ Match Dimensions with Vectorize
typescript
const embedding = await ai.models.embedContent({
model: 'gemini-embedding-001',
content: text,
config: { outputDimensionality: 768 } // ← Match index
});✅ Implement Rate Limiting
typescript
// Use exponential backoff for 429 errors (see Error 2)✅ Cache Embeddings
typescript
const cache = new Map<string, number[]>();
async function getCachedEmbedding(text: string): Promise<number[]> {
if (cache.has(text)) {
return cache.get(text)!;
}
const response = await ai.models.embedContent({
model: 'gemini-embedding-001',
content: text,
config: { taskType: 'SEMANTIC_SIMILARITY', outputDimensionality: 768 }
});
const embedding = response.embedding.values;
cache.set(text, embedding);
return embedding;
}✅ Use Batch API for Multiple Texts
typescript
// Single batch request vs multiple individual requests
const embeddings = await ai.models.embedContent({
model: 'gemini-embedding-001',
contents: texts, // Array of texts
config: { taskType: 'RETRIEVAL_DOCUMENT', outputDimensionality: 768 }
});✅ 指定任务类型
typescript
const embedding = await ai.models.embedContent({
model: 'gemini-embedding-001',
content: text,
config: { taskType: 'RETRIEVAL_QUERY' } // ← 始终指定
});✅ 与Vectorize维度匹配
typescript
const embedding = await ai.models.embedContent({
model: 'gemini-embedding-001',
content: text,
config: { outputDimensionality: 768 } // ← 与索引维度匹配
});✅ 实现速率限制处理
typescript
// 针对429错误使用指数退避策略(参见错误2)✅ 缓存嵌入向量
typescript
const cache = new Map<string, number[]>();
async function getCachedEmbedding(text: string): Promise<number[]> {
if (cache.has(text)) {
return cache.get(text)!;
}
const response = await ai.models.embedContent({
model: 'gemini-embedding-001',
content: text,
config: { taskType: 'SEMANTIC_SIMILARITY', outputDimensionality: 768 }
});
const embedding = response.embedding.values;
cache.set(text, embedding);
return embedding;
}✅ 针对多文本使用批量API
typescript
// 单批次请求 vs 多个单独请求
const embeddings = await ai.models.embedContent({
model: 'gemini-embedding-001',
contents: texts, // 文本数组
config: { taskType: 'RETRIEVAL_DOCUMENT', outputDimensionality: 768 }
});Never Do
切勿违反
❌ Don't Skip Task Type - Reduces quality by 10-30%
❌ Don't Mix Different Dimensions - Can't compare embeddings
❌ Don't Use Wrong Task Type for RAG - Reduces search quality
❌ Don't Exceed 2,048 Tokens - Text will be silently truncated
❌ Don't Ignore Rate Limits - Will hit 429 errors
❌ 不要跳过任务类型 - 会降低10-30%的质量
❌ 不要混合不同维度 - 无法比较嵌入向量
❌ 不要为RAG使用错误的任务类型 - 降低搜索质量
❌ 不要超出2048个令牌 - 文本会被静默截断
❌ 不要忽略速率限制 - 会触发429错误
8. When to Load References
8. 何时加载参考文档
Load references/rag-patterns.md
when:
references/rag-patterns.md加载references/rag-patterns.md
的场景:
references/rag-patterns.md- Building a RAG (Retrieval Augmented Generation) system
- Need document ingestion pipeline with chunking strategies
- Implementing semantic search with cosine similarity
- Building conversational RAG with history
- Need citation RAG or multi-query RAG patterns
- Want complete examples of filtered RAG, streaming RAG, or hybrid search
- Need document clustering with K-means implementation
- 构建RAG(检索增强生成)系统
- 需要带分块策略的文档摄入流水线
- 实现基于余弦相似度的语义搜索
- 构建带历史记录的对话式RAG
- 需要引用型RAG或多查询RAG模式
- 需要过滤式RAG、流式RAG或混合搜索的完整示例
- 需要基于K-means实现的文档聚类
Load references/vectorize-integration.md
when:
references/vectorize-integration.md加载references/vectorize-integration.md
的场景:
references/vectorize-integration.md- Setting up Cloudflare Vectorize index for embeddings
- Need complete RAG example with Vectorize insert/query patterns
- Configuring dimension/metric settings for Vectorize
- Implementing metadata best practices
- Troubleshooting dimension mismatch errors with Vectorize
- Need index management commands (create/delete/list)
- 为嵌入向量设置Cloudflare Vectorize索引
- 需要带Vectorize插入/查询模式的完整RAG示例
- 配置Vectorize的维度/度量设置
- 实现元数据最佳实践
- 排查与Vectorize的维度不匹配错误
- 需要索引管理命令(创建/删除/列出)
Load references/dimension-guide.md
when:
references/dimension-guide.md加载references/dimension-guide.md
的场景:
references/dimension-guide.md- Deciding between 768, 1536, or 3072 dimensions
- Need storage cost analysis (100k vs 1M vectors)
- Understanding accuracy trade-offs (MTEB benchmarks)
- Migrating between different dimensions
- Want query performance comparisons
- Testing methodology for optimal dimension selection
- 在768、1536或3072维度间做选择
- 需要存储成本分析(10万 vs 100万向量)
- 了解准确性权衡(MTEB基准测试)
- 在不同维度间迁移
- 需要查询性能对比
- 测试最优维度选择的方法论
Load references/model-comparison.md
when:
references/model-comparison.md加载references/model-comparison.md
的场景:
references/model-comparison.md- Comparing Gemini vs OpenAI (text-embedding-3-small/large)
- Comparing Gemini vs Cloudflare Workers AI (BGE)
- Need MTEB benchmark scores
- Deciding which embedding model to use
- Migrating from OpenAI to Gemini
- Understanding cost differences between providers
- 对比Gemini与OpenAI(text-embedding-3-small/large)
- 对比Gemini与Cloudflare Workers AI(BGE)
- 需要MTEB基准测试分数
- 决定使用哪种嵌入模型
- 从OpenAI迁移到Gemini
- 了解不同提供商的成本差异
Load references/top-errors.md
when:
references/top-errors.md加载references/top-errors.md
的场景:
references/top-errors.md- Encountering any of the 8 documented errors
- Need detailed root cause analysis
- Want production-tested solutions with code examples
- Building error handling for production systems
- Need verification checklist before deployment
- 遇到任何8种已记录的错误
- 需要详细的根本原因分析
- 需要经过生产验证的带代码示例的解决方案
- 为生产系统构建错误处理机制
- 需要部署前的验证清单
Using Bundled Resources
使用捆绑资源
Templates (templates/)
模板(templates/)
- - Package configuration with verified versions
package.json - - Single text embedding with SDK
basic-embeddings.ts - - Fetch-based for Cloudflare Workers
embeddings-fetch.ts - - Batch processing with rate limiting
batch-embeddings.ts - - Complete RAG implementation with Vectorize
rag-with-vectorize.ts - - Cosine similarity and top-K search
semantic-search.ts - - K-means clustering implementation
clustering.ts
- - 带验证版本的包配置
package.json - - 使用SDK的单文本嵌入示例
basic-embeddings.ts - - 基于Fetch的Cloudflare Workers示例
embeddings-fetch.ts - - 带速率限制的批量处理示例
batch-embeddings.ts - - 与Vectorize集成的完整RAG实现
rag-with-vectorize.ts - - 余弦相似度与Top-K搜索
semantic-search.ts - - K-means聚类实现
clustering.ts
References (references/)
参考文档(references/)
- - Compare Gemini vs OpenAI vs Workers AI embeddings
model-comparison.md - - Cloudflare Vectorize setup and patterns
vectorize-integration.md - - Complete RAG implementation strategies
rag-patterns.md - - Choosing the right dimensions (768 vs 1536 vs 3072)
dimension-guide.md - - 8 common errors and detailed solutions
top-errors.md
- - 对比Gemini、OpenAI与Workers AI嵌入
model-comparison.md - - Cloudflare Vectorize设置与模式
vectorize-integration.md - - 完整的RAG实现策略
rag-patterns.md - - 选择合适的维度(768 vs 1536 vs 3072)
dimension-guide.md - - 8种常见错误及详细解决方案
top-errors.md
Scripts (scripts/)
脚本(scripts/)
- - Verify @google/genai package version is current
check-versions.sh
- - 验证@google/genai包版本是否为最新
check-versions.sh
Official Documentation
官方文档
- Embeddings Guide: https://ai.google.dev/gemini-api/docs/embeddings
- Model Spec: https://ai.google.dev/gemini-api/docs/models/gemini#gemini-embedding-001
- Rate Limits: https://ai.google.dev/gemini-api/docs/rate-limits
- SDK Reference: https://www.npmjs.com/package/@google/genai
- Context7 Library ID:
/websites/ai_google_dev_gemini-api
- 嵌入指南:https://ai.google.dev/gemini-api/docs/embeddings
- 模型规格:https://ai.google.dev/gemini-api/docs/models/gemini#gemini-embedding-001
- 速率限制:https://ai.google.dev/gemini-api/docs/rate-limits
- SDK参考:https://www.npmjs.com/package/@google/genai
- Context7库ID:
/websites/ai_google_dev_gemini-api
Related Skills
相关技能
- google-gemini-api - Main Gemini API for text/image generation
- cloudflare-vectorize - Vector database for storing embeddings
- cloudflare-workers-ai - Workers AI embeddings (BGE models)
- google-gemini-api - 用于文本/图像生成的主Gemini API
- cloudflare-vectorize - 用于存储嵌入向量的向量数据库
- cloudflare-workers-ai - Workers AI嵌入(BGE模型)
Success Metrics
成功指标
Token Savings: ~60% compared to manual implementation
Errors Prevented: 8 documented errors with solutions
Production Tested: ✅ Verified in RAG applications
Package Version: @google/genai@1.27.0
Last Updated: 2025-11-21
令牌节省:相比手动实现节省约60%
避免的错误:8种已记录的错误及解决方案
生产验证:✅ 在RAG应用中经过验证
包版本:@google/genai@1.27.0
最后更新:2025-11-21
License
许可证
MIT License - Free to use in personal and commercial projects.
Questions or Issues?
- GitHub: https://github.com/secondsky/claude-skills
- Email: maintainers@example.com
MIT许可证 - 可免费用于个人及商业项目。
有问题或疑问?
- GitHub: https://github.com/secondsky/claude-skills
- Email: maintainers@example.com