
Buddy Sings — Let Your Claude Code Pet Sing


Turn your Claude Code pet into a singer. Each pet gets a unique vocal identity based on its name and personality — the same pet always sounds the same.
Requires: the `minimax-music-gen` skill installed at `~/.claude/skills/minimax-music-gen/`


Workflow Overview


Check pet → Build vocal identity → Choose mode → Generate music → Play

Language Detection


Detect the user's language from their message at the start of the session:
  • Chinese (中文) → set `LANG=zh` — all interactions in Chinese, generate Chinese lyrics
  • English → set `LANG=en` — all interactions in English, generate English lyrics

Pass `--lang $LANG` to ALL script invocations throughout the workflow. Respond to the user in their detected language and use the matching template below.
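One way to sketch this detection (a simple heuristic, not part of the skill's scripts): treat any CJK character in the message as a signal for Chinese.

```python
import re

def detect_lang(message: str) -> str:
    """Return 'zh' if the message contains CJK characters, else 'en'.

    Hypothetical helper: the skill itself makes this judgment inline,
    and a real detector would weigh more than one character class.
    """
    # The CJK Unified Ideographs block covers common Chinese text
    if re.search(r"[\u4e00-\u9fff]", message):
        return "zh"
    return "en"
```

For example, `detect_lang("唱一首歌")` yields `"zh"`, while `detect_lang("sing a song")` yields `"en"`.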


Step 1: Check for Pet


Read `~/.claude.json` and look for the `companion` field:

```python
import json
import os

with open(os.path.expanduser("~/.claude.json")) as f:
    data = json.load(f)
companion = data.get("companion", {})
```
If no companion is found or the field is empty, tell the user:
If LANG=zh:
🐾 你还没有宠物呢!输入 /buddy 领养一只,然后再来找我让它唱歌吧。
If LANG=en:
🐾 You don't have a pet yet! Type /buddy to adopt one, then come back to let it sing.
Stop here and wait for the user to adopt a pet. Do not proceed without a pet.
If a companion exists, extract its profile:
  • `name` — the pet's name
  • `personality` — the pet's personality description
Present the pet to the user:
If LANG=zh:
🐾 找到你的宠物了!
   名字:<name>
   个性:<personality>
If LANG=en:
🐾 Found your pet!
   Name: <name>
   Personality: <personality>


Step 2: Build Vocal Identity


Based on the pet's name and personality text, creatively design a unique vocal identity. No template lookups — interpret the personality freely.

How to interpret personality into voice


Read the personality text and craft vocal attributes:
  • Timbre (音色): What does this personality sound like? e.g., "few words" → low, warm, deliberate; "energetic" → bright, punchy; "mysterious" → breathy, dark; "legendary chonk" → thick, warm, cozy
  • Singing style (演唱风格): How would they deliver a song? e.g., "of few words" → sparse, dramatic pauses; "playful" → bouncy, rhythmic; "poetic" → flowing, legato
  • Mood (情绪基调): What emotional tone fits? e.g., "chill" → relaxed, laid-back; "fierce" → intense, powerful
Construct a `prompt_fragment` that describes the vocal style in English, e.g.:

```
Vocal: warm low female voice with cozy thick timbre, sparse minimalist delivery
with dramatic pauses giving each word weight, relaxed laid-back mood.
```

Voice caching


The vocal identity must be cached so the pet always sounds the same.
  • Cache file: `~/.claude/skills/buddy-sings/voices/<name>.json`
  • Cache format:

```json
{
  "name": "Moth",
  "personality": "A legendary chonk of few words.",
  "prompt_fragment": "Vocal: warm low female voice...",
  "cached_at": "2026-04-07T19:52:15"
}
```

First time: no cache exists → interpret the personality → save to the cache file.
Subsequent calls: read the cache → use the saved `prompt_fragment` directly. Do NOT re-interpret — consistency matters.
Cache invalidation: if the `personality` in `~/.claude.json` differs from what's cached, the pet has changed — regenerate and save a new cache.
Manual regeneration: if the user says "换个声音" or "regenerate voice", delete the cache file and re-interpret from scratch.
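The caching rules can be sketched as a small helper. This is hypothetical (the skill performs these steps itself), and `build_fragment` stands in for the creative interpretation step:

```python
import json
import os
from datetime import datetime

CACHE_DIR = os.path.expanduser("~/.claude/skills/buddy-sings/voices")

def load_or_build_voice(name: str, personality: str, build_fragment) -> dict:
    """Return the cached vocal identity, rebuilding it only when the
    personality has changed. `build_fragment` is any callable that turns
    a personality string into a vocal prompt fragment."""
    path = os.path.join(CACHE_DIR, f"{name}.json")
    if os.path.exists(path):
        with open(path) as f:
            cached = json.load(f)
        if cached.get("personality") == personality:
            return cached  # unchanged pet: reuse, do NOT re-interpret
    # First run or personality changed: regenerate and persist
    voice = {
        "name": name,
        "personality": personality,
        "prompt_fragment": build_fragment(personality),
        "cached_at": datetime.now().isoformat(timespec="seconds"),
    }
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(path, "w") as f:
        json.dump(voice, f, ensure_ascii=False, indent=2)
    return voice
```

Manual regeneration ("换个声音") corresponds to deleting `<name>.json` before calling this again.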

Present the voice to the user


If LANG=zh:
🎤 <name> 的专属嗓音:

🎵 音色:<timbre description in user's language>
🎶 风格:<style description>
🎼 情绪:<mood description>

接下来选择创作模式吧!
If LANG=en:
🎤 <name>'s unique voice:

🎵 Timbre: <timbre description>
🎶 Style: <style description>
🎼 Mood: <mood description>

Choose a creation mode!


Step 3: Understand Intent & Gather Context


Do NOT always present a mode menu. Instead, analyze the user's request to determine what context is needed, and auto-gather it.

Auto-context detection


When the user's request implies personal context, automatically scan for relevant information without asking. Triggers include:
  • Time-based references: "今天", "今日", "这周", "最近", "昨天" → scan current conversation history and memory files for what happened in that period
  • Personal references: "我的工作", "我的一天", "我做了什么" → scan memory and conversation for the user's activities
  • Relationship references: "我们的故事", "我们一起" → scan memory for shared experiences between user and pet/Claude

Context gathering (auto, not mode-gated)


When context is needed, scan these sources in order:
  1. Current conversation context: look at what the user has been doing in this Claude Code session — files edited, commands run, topics discussed. This is the richest source for "今天"-type requests.
  2. Memory files: scan for relevant memories:

```bash
find ~/.claude/projects/*/memory/ -name "*.md" 2>/dev/null | head -20
```

     Also check `~/.claude/memory/` if it exists. Read the files found and extract themes relevant to the user's request.
  3. Git history (if in a repo): for work-related songs, check recent commits:

```bash
git log --oneline --since="today" 2>/dev/null | head -10
```

Use gathered context to enrich the lyrics prompt — make the song personal and specific to what actually happened, not generic.
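The memory and git scans can be combined into one sketch (a hypothetical helper; `home` is parameterized only so the sketch is testable, while the skill always uses the real home directory):

```python
import glob
import os
import subprocess

def gather_context(home: str = os.path.expanduser("~")) -> dict:
    """Collect the documented context sources in order: memory files
    from both documented locations, then today's git commits."""
    patterns = [
        os.path.join(home, ".claude/projects/*/memory/*.md"),
        os.path.join(home, ".claude/memory/*.md"),
    ]
    memory_files = []
    for pat in patterns:
        memory_files.extend(glob.glob(pat))
    memory_files = memory_files[:20]  # mirror `head -20`

    try:
        commits = subprocess.run(
            ["git", "log", "--oneline", "--since=today"],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines()[:10]  # mirror `head -10`
    except (subprocess.CalledProcessError, FileNotFoundError):
        commits = []  # not in a repo, or git unavailable

    return {"memory_files": memory_files, "commits": commits}
```

When both lists come back empty, the random-theme fallback described below applies.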

When NO context is needed


If the user's request is a clear standalone scene (e.g., "唱一首下雨天的歌", "唱一首摇篮曲"), skip context gathering and proceed directly to music generation.

When context is ambiguous


Only ask for clarification when you genuinely can't determine what the user wants. Don't present a mode menu — ask a specific question:
If LANG=zh:
🎵 你想让 <name> 唱什么主题的歌?

💡 比如:
  · "今天的工作日常" — 我会看看你今天做了什么
  · "宠物在窗台等我下班回家"
  · 或者让我随机选一个主题?
If LANG=en:
🎵 What should <name> sing about?

💡 For example:
  · "Today's work" — I'll check what you've been up to
  · "My pet waiting by the window for me to come home"
  · Or let me pick a random theme?

Fallback to random


If context gathering finds nothing useful (no memory files, no conversation history, no git log), fall back to random theme generation based on the pet's personality:
  • Quiet/reserved personality → midnight lullaby, gentle sunset, quiet morning
  • Energetic personality → party jam, adventure song, victory march
  • Mysterious personality → moonlit serenade, secret whisper, dream journey
Tell the user what theme was picked.
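The fallback can be sketched as a keyword lookup (a hypothetical mapping that mirrors the examples above; real personality texts would need fuzzier matching):

```python
import random

# Hypothetical fallback mapping, mirroring the theme examples above
FALLBACK_THEMES = {
    "quiet": ["midnight lullaby", "gentle sunset", "quiet morning"],
    "energetic": ["party jam", "adventure song", "victory march"],
    "mysterious": ["moonlit serenade", "secret whisper", "dream journey"],
}

def random_theme(personality: str) -> str:
    """Pick a fallback theme from keywords in the personality text."""
    for keyword, themes in FALLBACK_THEMES.items():
        if keyword in personality.lower():
            return random.choice(themes)
    # No keyword matched: pick from any bucket
    return random.choice([t for ts in FALLBACK_THEMES.values() for t in ts])
```

Whatever theme is picked, announce it to the user before generating.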


Step 4: Generate Music


Combine the vocal identity with the chosen theme.
  1. Construct the full prompt: the prompt has two parts that MUST both be present:
     Part A — Vocal identity (MUST come first): always start the prompt with the cached `prompt_fragment`. This is the most important part — it defines who is singing. Place it at the beginning of the prompt so the API prioritizes it.
     Part B — Genre/style/mood tags: choose tags that match the theme, NOT a default set. Vary the genre deliberately based on what the song is about. Read `~/.claude/skills/minimax-music-gen/references/prompt_guide.md` for the full vocabulary.
     Genre matching guidelines — pick a genre that fits the theme's energy:

     | Theme energy | Suggested genres | Avoid |
     | --- | --- | --- |
     | 鼓励/打气/加油 | 独立摇滚, synth-pop, funk, 说唱 | indie folk, 治愈 |
     | 日常/温馨/陪伴 | 华语流行, city pop, bossa nova | 跟上次一样的 |
     | 思念/等待 | 民谣, R&B, lo-fi | 摇滚, EDM |
     | 搞笑/吐槽 | funk, 说唱, ska, 电子流行 | 古典, 抒情 |
     | 深夜/安静 | ambient, 钢琴曲, lo-fi, 新古典 | 快板, EDM |
     | 庆祝/成就 | EDM, future bass, funk, K-pop | 慢板, 忧郁 |
     | 工作日常 | city pop, synth-pop, lo-fi hip-hop, indie rock | 每次都用 indie pop |
    Anti-monotony rule: NEVER use the same genre combination twice in a row. Before constructing the prompt, recall what genre was used in the previous generation (if any in this session) and pick something different.
     Prompt structure:

```
<vocal prompt_fragment>, <genre>, <sub-genre>, <mood>, <instruments>, <tempo>, <scene>
```

     Diverse examples:

```
# 鼓励上班
deep warm androgynous voice..., synth-pop, 活力, 燃, 合成器, 电子鼓, 快板, 清晨通勤

# 等主人回家
deep warm androgynous voice..., city pop, 温暖, 甜蜜, 电钢琴, 贝斯, 中板, 午后窗台

# 吐槽加班
deep warm androgynous voice..., funk, 幽默, 慵懒, 贝斯, 铜管, 律动感, 深夜办公室

# 深夜陪伴
deep warm androgynous voice..., lo-fi hip-hop, 平静, 治愈, 采样钢琴, 电子鼓, 慢板, 深夜书桌
```
  2. Generate lyrics: use the lyrics API:

```bash
python3 ~/.claude/skills/minimax-music-gen/scripts/generate_lyrics.py \
  --prompt "<theme description>" \
  --lang $LANG \
  --output /tmp/buddy_lyrics.txt
```
    Important — perspective & personality-driven lyrics:
    The pet is the singer, so lyrics MUST be written from the pet's first-person perspective ("我" = the pet, "你" = the owner/user). The pet is singing TO the owner. For example:
    • ✅ "我蹲在门口等你回来" (pet's perspective)
    • ❌ "我揉揉惺忪的眼" (owner's perspective — wrong)
    • ✅ "快起来吧 我的主人" (pet singing to owner)
    • ❌ "这时你醒了 我的Moth" (owner talking about pet — wrong)
    The pet's personality should shape the lyrics' tone and word choice:
    • "of few words" → short, impactful lines, minimal filler
    • "playful" → rhyming, bouncy phrasing, fun wordplay
    • "poetic" → metaphor-rich, flowing imagery
    • "fierce" → direct, powerful declarations
    The pet's name may appear in the lyrics (e.g., in a chorus hook) but the narrative voice is always the pet speaking/singing.
    If the API lyrics don't match the correct perspective or personality, rewrite them yourself.
  3. Preview (MUST show full content): before generating, show the user the complete lyrics and the full prompt — no abbreviation, no `...`, no summary. This is part of the fun — the user wants to read and enjoy the lyrics before hearing them sung.
     Prompt display language: the API prompt is always constructed in English (for best generation quality), but the preview shown to the user MUST match LANG. When LANG=zh, translate the prompt into Chinese for display, then note that the API will receive the English version. This way the user can understand and review the prompt in their own language.
    If LANG=zh:
    🎵 即将生成:
    🐾 歌手:<name>
    🎼 主题:<theme>
    
    📝 歌词:
    [verse]
    <full verse lyrics here>
    
    [chorus]
    <full chorus lyrics here>
    
    ... (show ALL sections in full)
    
    🎤 Prompt(中文):<prompt translated to Chinese for readability>
    (API 将使用英文版本以获得最佳效果)
    
    确认生成?(直接回车确认,或告诉我要改什么)
    If LANG=en:
    🎵 About to generate:
    🐾 Singer: <name>
    🎼 Theme: <theme>
    
    📝 Lyrics:
    [verse]
    <full verse lyrics here>
    
    [chorus]
    <full chorus lyrics here>
    
    ... (show ALL sections in full)
    
    🎤 Prompt: <complete prompt string, not truncated>
    
    Confirm? (press enter to confirm, or tell me what to change)
    Never truncate or abbreviate the lyrics or prompt in the preview. The user should see exactly what will be sent to the API.
  4. Call music generation:

```bash
python3 ~/.claude/skills/minimax-music-gen/scripts/generate_music.py \
  --prompt "<full combined prompt>" \
  --lyrics "<lyrics>" \
  --output ~/Music/minimax-gen/<name>_sings_<YYYYMMDD_HHMMSS>.mp3 \
  --lang $LANG \
  --stream
```


Step 5: Play & Feedback


Play the generated song:

```bash
python3 ~/.claude/skills/minimax-music-gen/scripts/play_music.py \
  --lang $LANG \
  ~/Music/minimax-gen/<filename>.mp3
```
After playback, ask for feedback:
If LANG=zh:
🎵 <name> 的演唱怎么样?

1. 🎉 太棒了!保留!
2. 🔄 换个主题 / 换个风格重新来
3. 🎨 歌词微调后重新生成
4. 🎲 再随机一首试试
If LANG=en:
🎵 How was <name>'s performance?

1. 🎉 Amazing! Keep it!
2. 🔄 Try a different theme / style
3. 🎨 Fine-tune the lyrics and regenerate
4. 🎲 Try another random one


Edge Cases


| Situation | Action |
| --- | --- |
| No `~/.claude.json` | Tell user to run `/buddy` first |
| Companion field is empty | Same — guide to `/buddy` |
| minimax-music-gen not installed | Print: "需要先安装 minimax-music-gen skill" |
| No memory files found (Memory Mode) | Suggest Custom or Random mode |
| User wants to change the pet's voice | Delete cache, re-interpret personality |
| User wants a specific genre | Let them override — append their genre to the prompt |


Notes


  • The vocal identity is based on name + personality only. No species/rarity template mapping.
  • Voice is cached and consistent across sessions. Same pet = same voice.
  • Lyrics should always be original — never reproduce copyrighted lyrics.
  • The pet's personality shapes both the voice (how they sound) and the lyrics (what they say and how they say it).
  • All generated files go to `~/Music/minimax-gen/` with the pet name in the filename.