ruby-llm

<essential_principles>

How RubyLLM Works

RubyLLM provides one beautiful Ruby API for all LLM providers. Same interface whether using GPT, Claude, Gemini, or local Ollama models.

1. One API for Everything


```ruby
# Chat with any provider - same interface
chat = RubyLLM.chat(model: 'gpt-4.1')
chat = RubyLLM.chat(model: 'claude-sonnet-4-5')
chat = RubyLLM.chat(model: 'gemini-2.0-flash')

# All return the same RubyLLM::Message object
response = chat.ask("Hello!")
puts response.content
```
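Because every provider returns the same `RubyLLM::Message` object, downstream code only needs that shared shape. A plain-Ruby sketch of the idea, using a hypothetical stand-in `Message` struct rather than RubyLLM's actual class:

```ruby
# Stand-in for the uniform message object (hypothetical fields)
Message = Struct.new(:role, :content, keyword_init: true)

# Provider-agnostic code: it touches only the shared interface
def render(message)
  "#{message.role}: #{message.content}"
end

gpt_reply    = Message.new(role: "assistant", content: "Hello from GPT!")
claude_reply = Message.new(role: "assistant", content: "Hello from Claude!")

[gpt_reply, claude_reply].map { |m| render(m) }
```

Swapping models never forces changes in code like `render`, which is the point of the single API.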

2. Configuration First


Always configure API keys before use:

```ruby
# config/initializers/ruby_llm.rb
RubyLLM.configure do |config|
  config.openai_api_key = ENV['OPENAI_API_KEY']
  config.anthropic_api_key = ENV['ANTHROPIC_API_KEY']
  config.gemini_api_key = ENV['GEMINI_API_KEY']
  config.request_timeout = 120
  config.max_retries = 3
end
```
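A missing key otherwise surfaces only when the first request fails, so a fail-fast check at boot can help. A minimal plain-Ruby sketch (the key names mirror the config above; `missing_api_keys` is a hypothetical helper, not part of RubyLLM):

```ruby
# Keys the initializer above expects to find in the environment
REQUIRED_KEYS = %w[OPENAI_API_KEY ANTHROPIC_API_KEY GEMINI_API_KEY].freeze

# Returns the subset of required keys that are unset or blank
def missing_api_keys(env = ENV)
  REQUIRED_KEYS.reject { |k| env[k] && !env[k].empty? }
end

missing = missing_api_keys
warn "Missing API keys: #{missing.join(', ')}" unless missing.empty?
```

Calling this from the initializer makes a misconfigured deployment fail loudly at startup instead of at request time.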

3. Tools Are Ruby Classes


Define tools as `RubyLLM::Tool` subclasses with `description`, `param`, and `execute`:

```ruby
class Weather < RubyLLM::Tool
  description "Get current weather for a location"
  param :latitude, type: 'number', desc: "Latitude"
  param :longitude, type: 'number', desc: "Longitude"

  def execute(latitude:, longitude:)
    # Return structured data, not exceptions
    { temperature: 22, conditions: "Sunny" }
  rescue => e
    { error: e.message }  # Let LLM handle errors gracefully
  end
end

chat.with_tool(Weather).ask("What's the weather in Berlin?")
```
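The `description`/`param` class-method DSL is ordinary Ruby class-level state. A minimal standalone sketch of how such a DSL can be built (a hypothetical `Tool` base class, not RubyLLM's real implementation):

```ruby
# Hypothetical base class illustrating the class-method DSL pattern
class Tool
  class << self
    # Acts as both setter (with argument) and getter (without)
    def description(text = nil)
      @description = text if text
      @description
    end

    # Records each declared parameter on the subclass
    def param(name, type: 'string', desc: nil)
      params[name] = { type: type, desc: desc }
    end

    def params
      @params ||= {}
    end
  end
end

class Weather < Tool
  description "Get current weather for a location"
  param :latitude, type: 'number', desc: "Latitude"
  param :longitude, type: 'number', desc: "Longitude"

  def execute(latitude:, longitude:)
    { temperature: 22, conditions: "Sunny" }
  rescue => e
    { error: e.message }
  end
end
```

Because `@description` and `@params` live on each subclass, every tool carries its own schema, which is what lets a chat serialize the tool definition for the provider.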

4. Rails Integration with acts_as_chat


Persist conversations automatically:

```ruby
class Chat < ApplicationRecord
  acts_as_chat
end

chat = Chat.create!(model: 'gpt-4.1')
chat.ask("Hello!")  # Automatically persists messages
```

5. Streaming with Blocks


Real-time responses via blocks:

```ruby
chat.ask("Tell me a story") do |chunk|
  print chunk.content  # Print as it arrives
end
```
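Block-based streaming is plain Ruby `yield`. A standalone sketch of the pattern (a hypothetical `stream_chunks` method that yields each chunk to the caller's block and returns the assembled text, not RubyLLM internals):

```ruby
# Yields chunks as they "arrive"; returns the full assembled text
def stream_chunks(chunks)
  buffer = +""
  chunks.each do |chunk|
    yield chunk if block_given?  # caller can print in real time
    buffer << chunk
  end
  buffer
end

story = stream_chunks(["Once ", "upon ", "a time"]) { |c| print c }
# story == "Once upon a time"
```

The caller sees partial output immediately while the method still returns the complete response, mirroring how `chat.ask` with a block behaves.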
</essential_principles>
<intake> **What would you like to do?**
  1. Build a new AI feature (chat, embeddings, image generation)
  2. Add Rails chat integration (acts_as_chat, Turbo Streams)
  3. Implement tools/function calling
  4. Add streaming responses
  5. Debug an LLM interaction
  6. Optimize for production
  7. Something else
Wait for response, then read the matching workflow. </intake>
<routing>
| Response | Workflow |
|----------|----------|
| 1, "new", "feature", "chat", "embed", "image" | `workflows/build-new-feature.md` |
| 2, "rails", "acts_as", "persist", "turbo" | `workflows/add-rails-chat.md` |
| 3, "tool", "function", "agent" | `workflows/implement-tools.md` |
| 4, "stream", "real-time", "sse" | `workflows/add-streaming.md` |
| 5, "debug", "error", "fix", "not working" | `workflows/debug-llm.md` |
| 6, "production", "optimize", "performance", "scale" | `workflows/optimize-performance.md` |
| 7, other | Clarify need, then select workflow or read references |
After reading the workflow, follow it exactly. </routing>
<verification_loop>

After Every Change

1. Does it load?

```bash
bin/rails console -e test
RubyLLM.chat.ask("Test")
```

2. Do tests pass?

```bash
bin/rails test test/models/chat_test.rb
```

3. Check for errors

```bash
bin/rails test 2>&1 | grep -E "(Error|Fail|exception)"
```

Report to user:
- "Config: API keys loaded"
- "Chat: Working with [model]"
- "Tests: X pass, Y fail"
</verification_loop>

<reference_index>

Domain Knowledge

All in `references/`:

- Getting Started: getting-started.md
- Core Features: chat-api.md, tools.md, streaming.md, structured-output.md
- Rails: rails-integration.md
- Capabilities: embeddings.md, image-audio.md
- Infrastructure: providers.md, error-handling.md, mcp-integration.md
- Quality: anti-patterns.md
</reference_index>
<workflows_index>

Workflows

All in `workflows/`:

| File | Purpose |
|------|---------|
| `build-new-feature.md` | Create new AI feature from scratch |
| `add-rails-chat.md` | Add persistent chat to Rails app |
| `implement-tools.md` | Create custom tools/function calling |
| `add-streaming.md` | Add real-time streaming responses |
| `debug-llm.md` | Find and fix LLM issues |
| `optimize-performance.md` | Production optimization |
</workflows_index>