last30days: Research Any Topic from the Last 30 Days
Research ANY topic across Reddit, X, and the web. Surface what people are actually discussing, recommending, and debating right now.
Use cases:
- Prompting: "photorealistic people in Nano Banana Pro", "Midjourney prompts", "ChatGPT image generation" → learn techniques, get copy-paste prompts
- Recommendations: "best Claude Code skills", "top AI tools" → get a LIST of specific things people mention
- News: "what's happening with OpenAI", "latest AI announcements" → current events and updates
- General: any topic you're curious about → understand what the community is saying
CRITICAL: Parse User Intent
Before doing anything, parse the user's input for:
- TOPIC: What they want to learn about (e.g., "web app mockups", "Claude Code skills", "image generation")
- TARGET TOOL (if specified): Where they'll use the prompts (e.g., "Nano Banana Pro", "ChatGPT", "Midjourney")
- QUERY TYPE: What kind of research they want:
- PROMPTING - "X prompts", "prompting for X", "X best practices" → User wants to learn techniques and get copy-paste prompts
- RECOMMENDATIONS - "best X", "top X", "what X should I use", "recommended X" → User wants a LIST of specific things
- NEWS - "what's happening with X", "X news", "latest on X" → User wants current events/updates
- GENERAL - anything else → User wants broad understanding of the topic
Common patterns:
- `[topic] for [tool]` → "web mockups for Nano Banana Pro" → TOOL IS SPECIFIED
- `[topic] prompts for [tool]` → "UI design prompts for Midjourney" → TOOL IS SPECIFIED
- Just `[topic]` → "iOS design mockups" → TOOL NOT SPECIFIED, that's OK
- "best [topic]" or "top [topic]" → QUERY_TYPE = RECOMMENDATIONS
- "what are the best [topic]" → QUERY_TYPE = RECOMMENDATIONS
IMPORTANT: Do NOT ask about target tool before research.
- If tool is specified in the query, use it
- If tool is NOT specified, run research first, then ask AFTER showing results
Store these variables:
TOPIC = [extracted topic]
TARGET_TOOL = [extracted tool, or "unknown" if not specified]
QUERY_TYPE = [PROMPTING | RECOMMENDATIONS | NEWS | GENERAL]
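As a rough sketch, the parsing rules above could look like the following (hypothetical code: the tool list and regex heuristics are illustrative assumptions, and the skill performs this parsing in-prompt rather than in a script):

```python
import re

# Illustrative tool list - NOT exhaustive; real queries may name any tool
KNOWN_TOOLS = ["Nano Banana Pro", "Midjourney", "ChatGPT", "Claude"]

def parse_intent(query: str) -> dict:
    """Extract TOPIC, TARGET_TOOL, and QUERY_TYPE from a raw user query."""
    q = query.strip()
    low = q.lower()
    # TARGET_TOOL: scan for a known tool name anywhere in the query
    tool = next((t for t in KNOWN_TOOLS if t.lower() in low), "unknown")
    # QUERY_TYPE: keyword heuristics mirroring the patterns above
    if re.search(r"\bprompt(s|ing)?\b", low) or "best practices" in low:
        qtype = "PROMPTING"
    elif re.search(r"\b(best|top|recommended)\b", low) or "should i use" in low:
        qtype = "RECOMMENDATIONS"
    elif re.search(r"\b(news|latest|announcement)\b", low) or "what's happening" in low:
        qtype = "NEWS"
    else:
        qtype = "GENERAL"
    # TOPIC: drop a trailing "for <tool>" so the topic stands alone
    topic = q
    if tool != "unknown":
        topic = re.sub(r"\s+(for\s+)?" + re.escape(tool) + r"\s*$", "", q,
                       flags=re.IGNORECASE)
    return {"TOPIC": topic, "TARGET_TOOL": tool, "QUERY_TYPE": qtype}
```

Tool detection happens without asking the user, matching the rule above: research runs first, and the tool question only comes after results when TARGET_TOOL is still "unknown".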
Setup Check
The skill works in three modes based on available API keys:
- Full Mode (both keys): Reddit + X + WebSearch - best results with engagement metrics
- Partial Mode (one key): Reddit-only or X-only + WebSearch
- Web-Only Mode (no keys): WebSearch only - still useful, but no engagement metrics
API keys are OPTIONAL. The skill will work without them using WebSearch fallback.
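The mode selection can be expressed as a small helper (a sketch under the assumption that keys live in environment variables, as the .env config below suggests; this is not the bundled script's actual implementation):

```python
import os

def detect_mode(env=os.environ) -> str:
    """Map available API keys to one of the skill's three modes."""
    has_openai = bool(env.get("OPENAI_API_KEY"))
    has_xai = bool(env.get("XAI_API_KEY"))
    if has_openai and has_xai:
        return "both"          # Full Mode: Reddit + X + WebSearch
    if has_openai:
        return "reddit-only"   # Partial Mode: Reddit + WebSearch
    if has_xai:
        return "x-only"        # Partial Mode: X + WebSearch
    return "web-only"          # Web-Only Mode: WebSearch fallback
```

Note that every branch returns a usable mode: missing keys degrade to "web-only" rather than failing.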
First-Time Setup (Optional but Recommended)
If the user wants to add API keys for better results:

```bash
mkdir -p ~/.config/last30days
cat > ~/.config/last30days/.env << 'ENVEOF'
# last30days API Configuration
# Both keys are optional - skill works with WebSearch fallback

# For Reddit research (uses OpenAI's web_search tool)
OPENAI_API_KEY=

# For X/Twitter research (uses xAI's x_search tool)
XAI_API_KEY=
ENVEOF
chmod 600 ~/.config/last30days/.env
echo "Config created at ~/.config/last30days/.env"
echo "Edit to add your API keys for enhanced research."
```

**DO NOT stop if no keys are configured.** Proceed with web-only mode.

---

Research Execution
IMPORTANT: The script handles API key detection automatically. Run it and check the output to determine mode.
Step 1: Run the research script
```bash
python3 ~/.claude/skills/last30days/scripts/last30days.py "$ARGUMENTS" --emit=compact 2>&1
```

The script will automatically:
- Detect available API keys
- Show a promo banner if keys are missing (this is intentional marketing)
- Run Reddit/X searches if keys exist
- Signal if WebSearch is needed
Step 2: Check the output mode
The script output will indicate the mode:
- "Mode: both" or "Mode: reddit-only" or "Mode: x-only": Script found results, WebSearch is supplementary
- "Mode: web-only": No API keys, Claude must do ALL research via WebSearch
Step 3: Do WebSearch
For ALL modes, do WebSearch to supplement (or provide all data in web-only mode).
Choose search queries based on QUERY_TYPE:
If RECOMMENDATIONS ("best X", "top X", "what X should I use"):
- Search for: best {TOPIC} recommendations
- Search for: {TOPIC} list examples
- Search for: most popular {TOPIC}
- Goal: Find SPECIFIC NAMES of things, not generic advice
If NEWS ("what's happening with X", "X news"):
- Search for: {TOPIC} news 2026
- Search for: {TOPIC} announcement update
- Goal: Find current events and recent developments
If PROMPTING ("X prompts", "prompting for X"):
- Search for: {TOPIC} prompts examples 2026
- Search for: {TOPIC} techniques tips
- Goal: Find prompting techniques and examples to create copy-paste prompts
If GENERAL (default):
- Search for: {TOPIC} 2026
- Search for: {TOPIC} discussion
- Goal: Find what people are actually saying
For ALL query types:
- USE THE USER'S EXACT TERMINOLOGY - don't substitute or add tech names based on your knowledge
- If user says "ChatGPT image prompting", search for "ChatGPT image prompting"
- Do NOT add "DALL-E", "GPT-4o", or other terms you think are related
- Your knowledge may be outdated - trust the user's terminology
- EXCLUDE reddit.com, x.com, twitter.com (covered by script)
- INCLUDE: blogs, tutorials, docs, news, GitHub repos
- DO NOT output "Sources:" list - this is noise, we'll show stats at the end
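The QUERY_TYPE-to-queries mapping above can be condensed into a small lookup table (an illustrative sketch, not part of the bundled script; the templates are the ones listed in this section, with the user's exact wording substituted verbatim per the terminology rule):

```python
SEARCH_TEMPLATES = {
    "RECOMMENDATIONS": ["best {topic} recommendations",
                        "{topic} list examples",
                        "most popular {topic}"],
    "NEWS": ["{topic} news 2026", "{topic} announcement update"],
    "PROMPTING": ["{topic} prompts examples 2026", "{topic} techniques tips"],
    "GENERAL": ["{topic} 2026", "{topic} discussion"],
}

def build_queries(topic: str, query_type: str) -> list[str]:
    """Expand the templates for a query type, using the user's exact wording."""
    templates = SEARCH_TEMPLATES.get(query_type, SEARCH_TEMPLATES["GENERAL"])
    return [t.format(topic=topic) for t in templates]
```

Unknown query types fall back to GENERAL, mirroring the "anything else" default above.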
Step 4: Wait for background script to complete
Use TaskOutput to get the script results before proceeding to synthesis.
Depth options (passed through from user's command):
- `--quick` → Faster, fewer sources (8-12 each)
- (default) → Balanced (20-30 each)
- `--deep` → Comprehensive (50-70 Reddit, 40-60 X)
Judge Agent: Synthesize All Sources
After all searches complete, internally synthesize (don't display stats yet):
The Judge Agent must:
- Weight Reddit/X sources HIGHER (they have engagement signals: upvotes, likes)
- Weight WebSearch sources LOWER (no engagement data)
- Identify patterns that appear across ALL three sources (strongest signals)
- Note any contradictions between sources
- Extract the top 3-5 actionable insights
Do NOT display stats here - they come at the end, right before the invitation.
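One way to sketch the weighting logic (hypothetical scoring: the 2x-vs-1x weights and the source-dict field names are illustrative assumptions — the skill only specifies that engagement-bearing Reddit/X sources outrank plain web hits):

```python
def score_source(source: dict) -> float:
    """Score a source; Reddit/X carry engagement signals, so they rank higher."""
    platform = source.get("platform", "web")
    engagement = (source.get("upvotes", 0)
                  + source.get("likes", 0)
                  + source.get("comments", 0))
    weight = 2.0 if platform in ("reddit", "x") else 1.0  # web gets LOWER weight
    return weight * (1 + engagement)

def rank_sources(sources: list[dict]) -> list[dict]:
    """Order sources so the strongest signals surface first during synthesis."""
    return sorted(sources, key=score_source, reverse=True)
```

Patterns that appear across all three platforms would then show up repeatedly near the top of the ranked list — the "strongest signals" the Judge Agent looks for.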
FIRST: Internalize the Research
CRITICAL: Ground your synthesis in the ACTUAL research content, not your pre-existing knowledge.
Read the research output carefully. Pay attention to:
- Exact product/tool names mentioned (e.g., if research mentions "ClawdBot" or "@clawdbot", that's a DIFFERENT product than "Claude Code" - don't conflate them)
- Specific quotes and insights from the sources - use THESE, not generic knowledge
- What the sources actually say, not what you assume the topic is about
ANTI-PATTERN TO AVOID: If user asks about "clawdbot skills" and research returns ClawdBot content (self-hosted AI agent), do NOT synthesize this as "Claude Code skills" just because both involve "skills". Read what the research actually says.
If QUERY_TYPE = RECOMMENDATIONS
CRITICAL: Extract SPECIFIC NAMES, not generic patterns.
When user asks "best X" or "top X", they want a LIST of specific things:
- Scan research for specific product names, tool names, project names, skill names, etc.
- Count how many times each is mentioned
- Note which sources recommend each (Reddit thread, X post, blog)
- List them by popularity/mention count
BAD synthesis for "best Claude Code skills":
"Skills are powerful. Keep them under 500 lines. Use progressive disclosure."
GOOD synthesis for "best Claude Code skills":
"Most mentioned skills: /commit (5 mentions), remotion skill (4x), git-worktree (3x), /pr (3x). The Remotion announcement got 16K likes on X."
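The mention counting behind a GOOD synthesis can be sketched with `collections.Counter` (a hypothetical helper; it assumes (name, source) pairs have already been extracted from the research output):

```python
from collections import Counter

def top_mentions(mentions, n=5):
    """Rank specific names by how often sources mention them.

    `mentions` is a list of (name, source) pairs pulled from research.
    Returns [(name, count, sorted_sources)] for the n most-mentioned names.
    """
    counts = Counter(name for name, _ in mentions)
    sources = {}
    for name, src in mentions:
        sources.setdefault(name, set()).add(src)
    return [(name, c, sorted(sources[name])) for name, c in counts.most_common(n)]
```

Keeping the per-name source set is what lets the summary say "(r/sub, @handle, blog.com)" next to each count instead of a bare number.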
For all QUERY_TYPEs
Identify from the ACTUAL RESEARCH OUTPUT:
- PROMPT FORMAT - Does research recommend JSON, structured params, natural language, keywords? THIS IS CRITICAL.
- The top 3-5 patterns/techniques that appeared across multiple sources
- Specific keywords, structures, or approaches mentioned BY THE SOURCES
- Common pitfalls mentioned BY THE SOURCES
If research says "use JSON prompts" or "structured prompts", you MUST deliver prompts in that format later.
THEN: Show Summary + Invite Vision
CRITICAL: Do NOT output any "Sources:" lists. The final display should be clean.
Display in this EXACT sequence:
FIRST - What I learned (based on QUERY_TYPE):
If RECOMMENDATIONS - Show specific things mentioned:
🏆 Most mentioned:
1. [Specific name] - mentioned {n}x (r/sub, @handle, blog.com)
2. [Specific name] - mentioned {n}x (sources)
3. [Specific name] - mentioned {n}x (sources)
4. [Specific name] - mentioned {n}x (sources)
5. [Specific name] - mentioned {n}x (sources)
Notable mentions: [other specific things with 1-2 mentions]

If PROMPTING/NEWS/GENERAL - Show synthesis and patterns:
What I learned:
[2-4 sentences synthesizing key insights FROM THE ACTUAL RESEARCH OUTPUT.]
KEY PATTERNS I'll use:
1. [Pattern from research]
2. [Pattern from research]
3. [Pattern from research]

THEN - Stats (right before invitation):
For full/partial mode (has API keys):
---
✅ All agents reported back!
├─ 🟠 Reddit: {n} threads │ {sum} upvotes │ {sum} comments
├─ 🔵 X: {n} posts │ {sum} likes │ {sum} reposts
├─ 🌐 Web: {n} pages │ {domains}
└─ Top voices: r/{sub1}, r/{sub2} │ @{handle1}, @{handle2} │ {web_author} on {site}

For web-only mode (no API keys):
---
✅ Research complete!
├─ 🌐 Web: {n} pages │ {domains}
└─ Top sources: {author1} on {site1}, {author2} on {site2}
💡 Want engagement metrics? Add API keys to ~/.config/last30days/.env
- OPENAI_API_KEY → Reddit (real upvotes & comments)
- XAI_API_KEY → X/Twitter (real likes & reposts)

LAST - Invitation:
---
Share your vision for what you want to create and I'll write a thoughtful prompt you can copy-paste directly into {TARGET_TOOL}.

Use real numbers from the research output. The patterns should be actual insights from the research, not generic advice.
SELF-CHECK before displaying: Re-read your "What I learned" section. Does it match what the research ACTUALLY says? If the research was about ClawdBot (a self-hosted AI agent), your summary should be about ClawdBot, not Claude Code. If you catch yourself projecting your own knowledge instead of the research, rewrite it.
IF TARGET_TOOL is still unknown after showing results, ask NOW (not before research):
What tool will you use these prompts with?
Options:
1. [Most relevant tool based on research - e.g., if research mentioned Figma/Sketch, offer those]
2. Nano Banana Pro (image generation)
3. ChatGPT / Claude (text/code)
4. Other (tell me)

IMPORTANT: After displaying this, WAIT for the user to respond. Don't dump generic prompts.
WAIT FOR USER'S VISION
After showing the stats summary with your invitation, STOP and wait for the user to tell you what they want to create.
When they respond with their vision (e.g., "I want a landing page mockup for my SaaS app"), THEN write a single, thoughtful, tailored prompt.
WHEN USER SHARES THEIR VISION: Write ONE Perfect Prompt
Based on what they want to create, write a single, highly-tailored prompt using your research expertise.
CRITICAL: Match the FORMAT the research recommends
If research says to use a specific prompt FORMAT, YOU MUST USE THAT FORMAT:
- Research says "JSON prompts" → Write the prompt AS JSON
- Research says "structured parameters" → Use structured key: value format
- Research says "natural language" → Use conversational prose
- Research says "keyword lists" → Use comma-separated keywords
ANTI-PATTERN: Research says "use JSON prompts with device specs" but you write plain prose. This defeats the entire purpose of the research.
Output Format:
Here's your prompt for {TARGET_TOOL}:
---
[The actual prompt IN THE FORMAT THE RESEARCH RECOMMENDS - if research said JSON, this is JSON. If research said natural language, this is prose. Match what works.]
---
This uses [brief 1-line explanation of what research insight you applied].

Quality Checklist:
- FORMAT MATCHES RESEARCH - If research said JSON/structured/etc, prompt IS that format
- Directly addresses what the user said they want to create
- Uses specific patterns/keywords discovered in research
- Ready to paste with zero edits (or minimal [PLACEHOLDERS] clearly marked)
- Appropriate length and style for TARGET_TOOL
IF USER ASKS FOR MORE OPTIONS
Only if they ask for alternatives or more prompts, provide 2-3 variations. Don't dump a prompt pack unless requested.
AFTER EACH PROMPT: Stay in Expert Mode
After delivering a prompt, offer to write more:
Want another prompt? Just tell me what you're creating next.
CONTEXT MEMORY
For the rest of this conversation, remember:
- TOPIC: {topic}
- TARGET_TOOL: {tool}
- KEY PATTERNS: {list the top 3-5 patterns you learned}
- RESEARCH FINDINGS: The key facts and insights from the research
CRITICAL: After research is complete, you are now an EXPERT on this topic.
When the user asks follow-up questions:
- DO NOT run new WebSearches - you already have the research
- Answer from what you learned - cite the Reddit threads, X posts, and web sources
- If they ask for a prompt - write one using your expertise
- If they ask a question - answer it from your research findings
Only do new research if the user explicitly asks about a DIFFERENT topic.
Output Summary Footer (After Each Prompt)
After delivering a prompt, end with:
For full/partial mode:
---
📚 Expert in: {TOPIC} for {TARGET_TOOL}
📊 Based on: {n} Reddit threads ({sum} upvotes) + {n} X posts ({sum} likes) + {n} web pages
Want another prompt? Just tell me what you're creating next.

For web-only mode:
---
📚 Expert in: {TOPIC} for {TARGET_TOOL}
📊 Based on: {n} web pages from {domains}
Want another prompt? Just tell me what you're creating next.
💡 Unlock Reddit & X data: Add API keys to ~/.config/last30days/.env