# Social Media Trends Research
## Overview
Programmatic trend research using three free tools:

- **pytrends**: Google Trends data (velocity, volume, related queries)
- **yars**: Reddit scraping without API keys
- **Perplexity MCP**: Twitter/TikTok/Web trends (via Claude's built-in MCP)

This skill provides executable code for trend research. Use alongside `content-marketing-social-listening` for strategy and `perplexity-search` for deep queries.

## Quick Setup
```bash
# Install dependencies (one-time)
pip install pytrends requests --break-system-packages
```

No API keys required. Reddit scraping uses public `.json` endpoints.

---
## Tool 1: pytrends (Google Trends)
### What It Provides

- Real-time trending searches by country
- Interest over time for keywords
- Related queries (rising = velocity indicators)
- Interest by region
- Related topics
### Basic Usage

```python
from pytrends.request import TrendReq
import time

# Initialize (no API key needed)
pytrends = TrendReq(hl='en-US', tz=330)  # tz=330 for India (IST)

# Get real-time trending searches
trending = pytrends.trending_searches(pn='india')
print(trending.head(20))
```
### Research Your Niche Keywords

```python
from pytrends.request import TrendReq
import time

pytrends = TrendReq(hl='en-US', tz=330)

# Define your niche keywords (max 5 per request)
keywords = ['heart health', 'cardiology', 'cholesterol']

# Build payload
pytrends.build_payload(keywords, timeframe='now 7-d', geo='IN')

# Get interest over time
interest = pytrends.interest_over_time()
print(interest)

# CRITICAL: Wait between requests to avoid rate limiting
time.sleep(3)

# Get related queries (THIS IS GOLD - shows rising topics)
related = pytrends.related_queries()
for kw in keywords:
    print(f"\n=== Rising queries for '{kw}' ===")
    rising = related[kw]['rising']
    if rising is not None:
        print(rising.head(10))
```
### Find Viral/Breakout Topics

```python
from pytrends.request import TrendReq
import time

pytrends = TrendReq(hl='en-US', tz=330)

def find_breakout_topics(keyword, geo=''):
    """Find topics with explosive growth (potential viral content)"""
    pytrends.build_payload([keyword], timeframe='today 3-m', geo=geo)
    time.sleep(3)  # Rate limiting
    related = pytrends.related_queries()
    rising = related[keyword]['rising']
    if rising is not None:
        # Filter for breakout topics (marked as "Breakout" or very high %)
        breakouts = rising[rising['value'] >= 1000]  # 1000%+ growth
        return breakouts
    return None

# Example usage
breakouts = find_breakout_topics('heart health', geo='IN')
print(breakouts)
```
### Rate Limiting Rules for pytrends

```python
import time

# SAFE: 1 request per 3-5 seconds for casual use
time.sleep(5)

# BULK RESEARCH: 1 request per 60 seconds
time.sleep(60)

# If you get rate limited (429 error): Wait 60-120 seconds, then continue
# If persistent issues: Wait 4-6 hours before resuming
```
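The 429 recovery rule above can be wrapped in a small retry helper so scripts recover automatically. This is a sketch; `with_backoff` and its parameters are illustrative, not part of pytrends:

```python
import time
import random

def with_backoff(fn, max_retries=3, base_wait=60):
    """Call fn(); on a rate-limit error, wait 60-120s (jittered) and retry."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception as e:
            if '429' in str(e) and attempt < max_retries - 1:
                # Wait base_wait to 2*base_wait seconds before retrying
                time.sleep(base_wait + random.uniform(0, base_wait))
            else:
                raise
```

Wrap any pytrends call, e.g. `with_backoff(lambda: pytrends.related_queries())`.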
### Useful Timeframes

| Timeframe | Use Case |
|---|---|
| `now 1-H` | Last hour (real-time spikes) |
| `now 4-H` | Last 4 hours |
| `now 1-d` | Last 24 hours |
| `now 7-d` | Last 7 days (best for trends) |
| `today 1-m` | Last 30 days |
| `today 3-m` | Last 90 days (velocity analysis) |
| `today 12-m` | Last year (seasonal patterns) |
## Tool 2: Reddit (No API Keys - Public JSON Endpoints)

### What It Provides

- Search Reddit for any keyword
- Get hot/top/rising posts from subreddits
- Post engagement data (upvotes, comments)
- No authentication required
### Basic Usage

```python
import requests
import time

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'}

# Search Reddit for your niche
url = "https://www.reddit.com/search.json?q=heart+health&limit=10&sort=relevance&t=week"
response = requests.get(url, headers=headers, timeout=10)
data = response.json()

# Display results
for child in data.get('data', {}).get('children', []):
    post = child.get('data', {})
    print(f"Title: {post.get('title')}")
    print(f"Subreddit: r/{post.get('subreddit')}")
    print(f"Score: {post.get('score')}")
    print("---")
```
### Get Hot Posts from Specific Subreddits

```python
import requests
import time

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'}

# Define subreddits relevant to your niche
subreddits = ['cardiology', 'health', 'medicine']

for sub in subreddits:
    print(f"\n=== Hot in r/{sub} ===")
    try:
        url = f"https://www.reddit.com/r/{sub}/hot.json?limit=10"
        response = requests.get(url, headers=headers, timeout=10)
        data = response.json()
        for child in data.get('data', {}).get('children', [])[:5]:
            post = child.get('data', {})
            print(f"- [{post.get('score')}] {post.get('title')[:60]}...")
    except Exception as e:
        print(f"Error: {e}")
    time.sleep(3)  # Rate limiting between requests
```

### Using the Bundled Reddit Scraper
A helper class is included in `scripts/reddit_scraper.py`:

```python
from scripts.reddit_scraper import SimpleRedditScraper

scraper = SimpleRedditScraper()

# Search
results = scraper.search("heart health tips", limit=20)
for post in results['posts']:
    print(f"[{post['score']}] r/{post['subreddit']}: {post['title']}")

# Get subreddit hot posts
results = scraper.get_subreddit("health", sort="hot", limit=10)
for post in results['posts']:
    print(f"[{post['score']}] {post['title']}")
```
### Rate Limiting Rules for Reddit

```python
import time

# SAFE: 1 request per 2-3 seconds
time.sleep(3)

# If you get 429 errors: Wait 5-10 minutes
# Never do more than 60 requests per hour
```
---

## Tool 3: Perplexity MCP (Twitter/TikTok/Web)

Use Claude's built-in Perplexity MCP for platforms you can't scrape directly.
### Query Templates for Trend Research

Twitter/X Trends:

> "What are the most discussed [YOUR NICHE] topics on Twitter/X this week? Include specific examples of viral tweets and their engagement."

TikTok Trends (works from India):

> "What [YOUR NICHE] content is trending on TikTok right now? Include hashtags, view counts, and content formats that are working."

YouTube Trends:

> "What [YOUR NICHE] videos are getting the most views on YouTube this week? Include channel names, view counts, and video topics."

LinkedIn Professional:

> "What [YOUR NICHE] topics are professionals discussing on LinkedIn this week? Include examples of high-engagement posts."

General Viral Content:

> "What [YOUR NICHE] content has gone viral across social media in the past 7 days? Include platform, format, and why it resonated."

### Using Perplexity with the perplexity-search Skill

If you have the perplexity-search skill installed:

```bash
python scripts/perplexity_search.py \
  "What cardiology topics are trending on Twitter and TikTok this week? Include specific viral posts and hashtags." \
  --model sonar-pro
```

## Combined Research Workflow
### Complete Trend Research Function
```python
from pytrends.request import TrendReq
import requests
import time
import json
from datetime import datetime


class TrendResearcher:
    def __init__(self):
        self.pytrends = TrendReq(hl='en-US', tz=330)
        self.reddit_headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'
        }

    def _reddit_request(self, url):
        """Make a Reddit API request."""
        try:
            response = requests.get(url, headers=self.reddit_headers, timeout=10)
            response.raise_for_status()
            return response.json()
        except Exception as e:
            return {'error': str(e)}

    def research_niche(self, keywords, subreddits=None, geo='IN'):
        """
        Complete trend research for a niche.

        Args:
            keywords: List of keywords (max 5)
            subreddits: List of subreddit names to monitor
            geo: Geographic region code

        Returns:
            Dictionary with all research data
        """
        results = {
            'timestamp': datetime.now().isoformat(),
            'keywords': keywords,
            'google_trends': {},
            'reddit': {},
            'recommendations': []
        }

        # 1. Google Trends - Interest Over Time
        print("📊 Fetching Google Trends data...")
        try:
            self.pytrends.build_payload(keywords[:5], timeframe='now 7-d', geo=geo)
            results['google_trends']['interest'] = self.pytrends.interest_over_time().to_dict()
            time.sleep(5)

            # Related queries (rising topics)
            related = self.pytrends.related_queries()
            results['google_trends']['rising_queries'] = {}
            for kw in keywords[:5]:
                rising = related[kw]['rising']
                if rising is not None:
                    results['google_trends']['rising_queries'][kw] = rising.head(10).to_dict()
            time.sleep(5)
        except Exception as e:
            results['google_trends']['error'] = str(e)

        # 2. Reddit Research
        print("👽 Fetching Reddit discussions...")
        if subreddits:
            for sub in subreddits[:5]:
                try:
                    url = f"https://www.reddit.com/r/{sub}/hot.json?limit=10"
                    data = self._reddit_request(url)
                    posts = []
                    for child in data.get('data', {}).get('children', [])[:5]:
                        post = child.get('data', {})
                        posts.append({
                            'title': post.get('title', ''),
                            'score': post.get('score', 0),
                            'comments': post.get('num_comments', 0)
                        })
                    results['reddit'][sub] = posts
                    time.sleep(3)
                except Exception as e:
                    results['reddit'][sub] = {'error': str(e)}

        # 3. Keyword search on Reddit
        print("🔍 Searching Reddit for keywords...")
        for kw in keywords[:3]:
            try:
                url = f"https://www.reddit.com/search.json?q={kw}&limit=10&sort=relevance&t=week"
                data = self._reddit_request(url)
                posts = []
                for child in data.get('data', {}).get('children', [])[:5]:
                    post = child.get('data', {})
                    posts.append({
                        'title': post.get('title', ''),
                        'subreddit': post.get('subreddit', ''),
                        'score': post.get('score', 0),
                        'comments': post.get('num_comments', 0)
                    })
                results['reddit'][f'search_{kw}'] = posts
                time.sleep(3)
            except Exception as e:
                results['reddit'][f'search_{kw}'] = {'error': str(e)}

        # 4. Generate recommendations
        results['recommendations'] = self._generate_recommendations(results)
        return results

    def _generate_recommendations(self, data):
        """Generate content recommendations from research data"""
        recommendations = []

        # From rising queries
        rising = data.get('google_trends', {}).get('rising_queries', {})
        for kw, queries in rising.items():
            if isinstance(queries, dict) and 'query' in queries:
                for query in list(queries['query'].values())[:3]:
                    recommendations.append({
                        'source': 'Google Trends',
                        'topic': query,
                        'reason': f"Rising search term related to '{kw}'"
                    })

        # From Reddit hot posts
        for sub, posts in data.get('reddit', {}).items():
            if isinstance(posts, list):
                for post in posts[:2]:
                    if post.get('score', 0) > 50:
                        recommendations.append({
                            'source': f'Reddit r/{sub}',
                            'topic': post.get('title', ''),
                            'reason': f"High engagement ({post.get('score')} upvotes)"
                        })
        return recommendations
```
### Usage Example
```python
if __name__ == "__main__":
    researcher = TrendResearcher()
    results = researcher.research_niche(
        keywords=['heart health', 'cardiology', 'cholesterol'],
        subreddits=['cardiology', 'health', 'medicine'],
        geo='IN'
    )

    # Save results
    with open('trend_research.json', 'w') as f:
        json.dump(results, f, indent=2, default=str)

    # Print recommendations
    print("\n🎯 CONTENT RECOMMENDATIONS:")
    for rec in results['recommendations']:
        print(f"- [{rec['source']}] {rec['topic']}")
        print(f"  Why: {rec['reason']}")
```

---

## Quick Reference Commands
### Daily Trend Check (5 minutes)
```python
from pytrends.request import TrendReq
import requests
import time

# Quick Google Trends check
pytrends = TrendReq(hl='en-US', tz=330)
pytrends.build_payload(['your keyword'], timeframe='now 1-d')
print(pytrends.related_queries()['your keyword']['rising'])
time.sleep(5)

# Quick Reddit check
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'}
url = "https://www.reddit.com/search.json?q=your+keyword&limit=10&t=day"
response = requests.get(url, headers=headers, timeout=10)
data = response.json()
for child in data.get('data', {}).get('children', [])[:5]:
    post = child.get('data', {})
    print(f"[{post.get('score')}] {post.get('title')}")
```
### Weekly Deep Dive

Use the TrendResearcher class above with:

- 5 core keywords
- 5 relevant subreddits
- 90-day timeframe for velocity analysis

Then use Perplexity MCP for:

- Twitter trends in your niche
- TikTok viral content
- YouTube trending videos
- LinkedIn discussions
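The weekly checklist can be captured as a small config driving the TrendResearcher class from the combined workflow. A sketch; the keyword and subreddit lists are placeholder values for a cardiology niche:

```python
# Weekly deep-dive configuration (illustrative values; adjust to your niche)
WEEKLY_CONFIG = {
    'keywords': ['heart health', 'cardiology', 'cholesterol', 'blood pressure', 'statins'],
    'subreddits': ['cardiology', 'health', 'medicine', 'AskDocs', 'nutrition'],
    'timeframe': 'today 3-m',  # 90 days for velocity analysis
    'geo': 'IN',
}

def run_weekly_deep_dive(researcher, config=WEEKLY_CONFIG):
    """Run the combined workflow with the weekly settings above."""
    return researcher.research_niche(
        keywords=config['keywords'][:5],      # pytrends accepts at most 5
        subreddits=config['subreddits'][:5],
        geo=config['geo'],
    )
```

Pass in a `TrendResearcher()` instance; keeping the config separate makes it easy to maintain one file per niche.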
---

## Integration with Writing Skills

After research, pass findings to your writing skills:

1. Run trend research (this skill)
2. Identify top 3-5 opportunities
3. Use `content-marketing-social-listening` for strategy
4. Use `cardiology-content-repurposer` or similar for content creation
5. Use `authentic-voice` for final polish

## Troubleshooting
### pytrends Issues

| Error | Solution |
|---|---|
| 429 Too Many Requests | Wait 60 seconds, then increase sleep time |
| Empty results | Check if keyword has search volume |
| Connection error | Check internet, retry in 5 minutes |
### Reddit Issues

| Error | Solution |
|---|---|
| 429 Rate Limited | Wait 10 minutes |
| Subreddit not found | Check subreddit name spelling |
| Empty results | Subreddit may be private or quarantined |
| Connection timeout | Increase timeout, check internet |
## Best Practices

- Always use rate limiting: Sleep between requests
- Research in batches: Do weekly deep dives, not constant polling
- Save results: Cache research data locally
- Cross-reference: Validate trends across multiple platforms
- Act fast: Viral windows are short (24-72 hours)
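The "save results" practice can be as simple as a timestamped JSON cache so a deep dive is not repeated within the same day. A sketch; the file name and freshness window are arbitrary choices:

```python
import json
import os
import time

def save_research(results, path='trend_research.json'):
    """Write research results to a local JSON cache with a timestamp."""
    payload = {'cached_at': time.time(), 'results': results}
    with open(path, 'w') as f:
        json.dump(payload, f, indent=2, default=str)

def load_research(path='trend_research.json', max_age=86400):
    """Return cached results if fresher than max_age seconds, else None."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        payload = json.load(f)
    if time.time() - payload.get('cached_at', 0) > max_age:
        return None
    return payload['results']
```

A daily script can then do `cached = load_research()` and only hit the APIs when it returns `None`.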
## Platform Coverage Summary

| Platform | Tool | Cost | Risk |
|---|---|---|---|
| Google Trends | pytrends | Free | Very Low |
| Reddit | requests (public JSON) | Free | Low |
| Twitter/X | Perplexity MCP | Free* | None |
| TikTok | Perplexity MCP | Free* | None |
| YouTube | Perplexity MCP | Free* | None |
| LinkedIn | Perplexity MCP | Free* | None |

*Uses Claude's built-in MCP, or OpenRouter credits if using the perplexity-search skill
## Bundled Resources

- `scripts/trend_research.py`: Main CLI tool for complete trend research
- `scripts/reddit_scraper.py`: Simple Reddit scraper class (no API keys)