google-ai

Purpose

This skill enables interaction with Google's Gemini API, allowing access to Pro, Flash, and Ultra models for tasks like text generation, chat, and embedding with up to 1M token context. It's designed for integrating advanced AI capabilities into applications via RESTful endpoints.

When to Use

Use this skill when you need large-context AI processing, such as summarizing long documents, generating code from detailed specs, or handling multi-turn conversations. Apply it in scenarios requiring Google-specific models, like when OpenAI alternatives are insufficient or when integrating with Google Cloud ecosystems.

Key Capabilities

  • Access Gemini Pro for general tasks, Flash for faster inference, and Ultra for complex reasoning.
  • Handle contexts up to 1M tokens, ideal for processing books or codebases.
  • Support multimodal inputs (text, images) via specific endpoints.
  • Embeddings generation for semantic search, using models like text-embedding-004.
  • Rate limiting and quotas managed per API key, with up to 1,000 requests per minute.
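The model tiers above can be captured in a small lookup table. The mapping below is purely illustrative — `pick_model` and the task labels are our own names, and model identifiers vary by API version, so verify them against the models endpoint before relying on them:

```python
# Illustrative mapping of task types to Gemini model tiers.
# Model names are examples only; check the API's model list for
# what your key can actually access.
MODEL_FOR_TASK = {
    "general": "gemini-pro",              # balanced quality/cost
    "low_latency": "gemini-flash",        # faster inference
    "complex_reasoning": "gemini-ultra",  # hardest tasks
    "embedding": "text-embedding-004",    # semantic-search vectors
}

def pick_model(task: str) -> str:
    """Return a model name for a task type, defaulting to the general tier."""
    return MODEL_FOR_TASK.get(task, MODEL_FOR_TASK["general"])
```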

Usage Patterns

Always initialize with authentication via the `$GOOGLE_API_KEY` environment variable. For OpenClaw, invoke this skill by prefixing commands with the skill ID, e.g., `google-ai generate`. Use JSON payloads for requests and handle responses as JSON objects. Pattern: set up a request with model selection, then send via HTTP POST; parse the response for output. For repeated use, cache API responses to avoid rate limits.
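The pattern above — build a JSON request, POST it, parse the response — can be sketched in Python. The helper names are ours; `sample_response` is a trimmed illustration of the `candidates` structure that `generateContent` responses use:

```python
def build_generate_payload(prompt: str) -> dict:
    """Build a generateContent request body for a single-turn text prompt."""
    return {"contents": [{"parts": [{"text": prompt}]}]}

def extract_text(response: dict) -> str:
    """Pull the first candidate's text out of a generateContent response."""
    return response["candidates"][0]["content"]["parts"][0]["text"]

# Trimmed example of the response shape the API returns.
sample_response = {
    "candidates": [
        {"content": {"parts": [{"text": "def merge(a, b): ..."}], "role": "model"}}
    ]
}
```

A simple dict keyed on the payload works as the response cache mentioned above, since identical prompts produce identical request bodies.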

Common Commands/API

The primary endpoint is `https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent`. Use HTTP POST requests with a JSON body. For example:

```json
{
  "contents": [{"parts": [{"text": "Write a function for sorting arrays"}]}]
}
```

CLI example (note that API keys are passed via the `x-goog-api-key` header; `Authorization: Bearer` is for OAuth access tokens, not API keys):

```shell
curl -X POST \
  -H "Content-Type: application/json" \
  -H "x-goog-api-key: $GOOGLE_API_KEY" \
  -d '{"contents": [{"parts": [{"text": "Hello"}]}]}' \
  https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent
```

Common OpenClaw flags: `--model gemini-pro` for model selection, or `--max-tokens 1024` to limit output. For embeddings, use `https://generativelanguage.googleapis.com/v1beta/models/{model}:embedContent` with a payload like the following (`content` is an object holding `parts`, not a bare string):

```json
{
  "model": "models/text-embedding-004",
  "content": {"parts": [{"text": "Embed this text"}]}
}
```

In OpenClaw, execute via `google-ai embed --text "Sample text" --model text-embedding-004`.
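The curl call can be mirrored with Python's standard library. This sketch only assembles the request so the wiring (URL, headers, body) is visible; actually sending it requires a valid key and network access:

```python
import json
import urllib.request

API_BASE = "https://generativelanguage.googleapis.com/v1beta/models"

def make_request(model: str, api_key: str, payload: dict) -> urllib.request.Request:
    """Assemble (but do not send) a generateContent POST request."""
    url = f"{API_BASE}/{model}:generateContent"
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "x-goog-api-key": api_key,  # key auth; OAuth uses a Bearer token instead
        },
        method="POST",
    )

req = make_request("gemini-pro", "dummy-key",
                   {"contents": [{"parts": [{"text": "Hello"}]}]})
# Send with: urllib.request.urlopen(req)
```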

Integration Notes

Set the `$GOOGLE_API_KEY` environment variable before use, e.g., `export GOOGLE_API_KEY=your_api_key`. In OpenClaw configs, add under the `[skills]` section: `google-ai = { api_key = "$GOOGLE_API_KEY", default_model = "gemini-pro" }`. Ensure your project is enabled in Google Cloud Console under AI Studio. For asynchronous operations, use webhooks or polling; integrate with other skills by chaining outputs, e.g., pipe `google-ai` results to a search skill. Handle transient errors with retries and exponential backoff.

Error Handling

Check HTTP status codes: 401 means an invalid API key (re-authenticate using `$GOOGLE_API_KEY`); 429 means a rate limit was hit (wait, then retry with a delay). Parse JSON error bodies for details, e.g., if `"status": "INVALID_ARGUMENT"`, validate your request body. In OpenClaw, wrap commands in try-catch blocks, e.g., `try { execute "google-ai generate" } catch { log error and retry after 5 seconds }`. Always validate inputs to avoid 400 errors, such as ensuring model names match exactly (e.g., `gemini-1.0-pro`).
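The retry-with-delay advice can be made concrete as exponential backoff. A minimal sketch — `TransientError` is a stand-in for whatever exception your HTTP client raises on 429 or 5xx responses:

```python
import time

class TransientError(Exception):
    """Stand-in for a rate-limit (429) or transient server (5xx) error."""

def call_with_backoff(fn, max_attempts=5, base_delay=1.0):
    """Retry fn() on transient errors, doubling the delay each attempt."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

In practice you would also honor the `Retry-After` header when the server provides one, rather than relying on the computed delay alone.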

Concrete Usage Examples

  1. Generate code: to create a Python function, use `google-ai generate --model gemini-pro --prompt "Write a function to merge two sorted lists"`. This sends a POST to the endpoint and returns the code snippet.
  2. Embed text for search: for semantic similarity, run `google-ai embed --model text-embedding-004 --text "Example query"`, then compare embeddings in your application using cosine similarity.
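For step 2, cosine similarity between two embedding vectors needs only the standard library:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm  # 1.0 = identical direction, 0.0 = orthogonal
```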

Graph Relationships

  • Cluster: Connected to "ai-apis" for shared AI endpoint handling.
  • Tags: Linked to "ai-apis" and "api" for discoverability in API-related skills.
  • Related Skills: Integrates with "openai" for model comparisons; depends on "google-cloud" for authentication flows.