reddit-sentiment-analysis


Reddit Sentiment Analysis Skill

Purpose

This skill enables systematic sentiment analysis of Reddit discussions to understand community opinions, preferences, and desires about products, brands, games, companies, or any topic. It produces actionable insights about what people like, what they criticize, and what improvements they wish for.

When to Use This Skill

Use this skill when you need to:
  • Analyze community sentiment about games, products, or brands
  • Understand what features/aspects users appreciate
  • Identify common complaints and pain points
  • Discover what improvements or changes users desire
  • Generate competitive intelligence from community discussions
  • Track sentiment trends over time for a topic
  • Make data-driven product decisions based on user feedback

Prerequisites

This skill requires the Reddit MCP server to be configured in `.mcp.json`:

```json
{
  "mcpServers": {
    "reddit": {
      "command": "uvx",
      "args": ["mcp-server-reddit"]
    }
  }
}
```

Core Workflow

Phase 1: Target Identification and Data Collection

1. Define Analysis Target
  • Identify the product/brand/game/topic to analyze
  • Determine relevant subreddits (e.g., r/gaming, r/Games, product-specific subs)
  • Set the time frame (recent posts, top posts from the month/year)
  • Define the scope (how many posts/comments to analyze)
2. Collect Reddit Data
  • Use `mcp__reddit__get_subreddit_hot_posts` for trending discussions
  • Use `mcp__reddit__get_subreddit_top_posts` for highly rated content
  • Use `mcp__reddit__get_post_content` for detailed posts and their comments
  • Collect a minimum of 10-20 posts for meaningful analysis
  • Include both posts and top-level comments

Phase 2: Sentiment Classification

3. Analyze Each Discussion
For each post and comment, classify sentiment into:
POSITIVE Signals:
  • Explicit praise: "I love...", "amazing", "best", "fantastic"
  • Recommendations: "highly recommend", "must-try", "worth it"
  • Emotional positivity: "fun", "enjoyable", "satisfying", "addicting"
  • Problem solutions: "finally fixed", "works great now"
  • Comparative praise: "better than X", "superior to"
NEGATIVE Signals:
  • Explicit criticism: "hate", "terrible", "worst", "awful"
  • Disappointment: "let down", "expected more", "overhyped"
  • Problems: "broken", "doesn't work", "buggy", "crashes"
  • Frustration: "annoying", "frustrating", "ridiculous"
  • Regret: "waste of money", "not worth it", "refunded"
NEUTRAL Signals:
  • Questions without sentiment
  • Factual statements
  • Technical discussions
  • Requests for information
WISH/DESIRE Signals:
  • "I wish...", "they should...", "would be better if..."
  • "needs more...", "lacking...", "missing..."
  • "hope they add...", "waiting for..."
  • Feature requests and suggestions
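The four signal categories above can be sketched as a simple keyword matcher. This is an illustrative baseline only, not the skill's actual engine: the keyword lists are abbreviated samples of the signals listed above, and the tie-breaking rules are assumptions.

```javascript
// Minimal keyword-based classifier for the four signal categories.
// Keyword lists are trimmed samples; extend them for real use.
const POSITIVE = ["love", "amazing", "best", "fantastic", "recommend", "worth it", "fun"];
const NEGATIVE = ["hate", "terrible", "worst", "awful", "broken", "buggy", "not worth"];
const WISH = ["i wish", "they should", "would be better if", "needs more", "hope they add"];

function countMatches(text, keywords) {
  const t = text.toLowerCase();
  return keywords.filter((k) => t.includes(k)).length;
}

function classifySentiment(text) {
  const pos = countMatches(text, POSITIVE);
  const neg = countMatches(text, NEGATIVE);
  const wish = countMatches(text, WISH) > 0; // wish is tracked independently of polarity
  let sentiment = "neutral";
  if (pos > neg) sentiment = "positive";
  else if (neg > pos) sentiment = "negative";
  else if (pos > 0) sentiment = "mixed"; // equal non-zero scores on both sides
  return { sentiment, wish };
}
```

A real pass would also handle negation ("not fun") and sarcasm, which plain keyword matching misses; treat this as a first-cut filter, not a verdict.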

Phase 3: Entity and Aspect Extraction

4. Identify Specific Aspects Mentioned
Extract what specifically is being discussed:
  • Features: gameplay mechanics, UI/UX, specific capabilities
  • Performance: speed, stability, optimization, bugs
  • Content: story, levels, variety, depth
  • Value: pricing, monetization, cost-benefit
  • Support: customer service, updates, community engagement
  • Comparisons: versus competitors, previous versions
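A lightweight way to tag these aspects is a keyword map per category. The bucket names mirror the list above; the trigger keywords inside each bucket are illustrative assumptions to be tuned per topic.

```javascript
// Map the aspect categories above to trigger keywords (illustrative samples).
const ASPECT_KEYWORDS = {
  features: ["gameplay", "ui", "mechanic", "control"],
  performance: ["fps", "lag", "crash", "bug", "optimization", "stutter"],
  content: ["story", "level", "quest", "variety", "depth"],
  value: ["price", "monetization", "dlc", "worth"],
  support: ["update", "patch", "customer service"],
  comparisons: ["better than", "worse than", "compared to", "versus"],
};

function extractAspects(text) {
  const t = text.toLowerCase();
  return Object.entries(ASPECT_KEYWORDS)
    .filter(([, words]) => words.some((w) => t.includes(w)))
    .map(([aspect]) => aspect); // one comment can touch several aspects
}
```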

Phase 4: Aggregation and Summarization

5. Create Structured Summary
Generate output in this format:

Sentiment Analysis: [Topic/Product Name]

Analysis Period: [Date Range] Subreddits Analyzed: [List] Posts Analyzed: [Number] Comments Analyzed: [Number]

Overall Sentiment Score

  • Positive: X%
  • Negative: Y%
  • Neutral: Z%
  • Mixed: W%

What People LIKE

  1. [Aspect/Feature Name] (mentioned X times, Y% positive)
    • Representative quotes: "[quote 1]", "[quote 2]"
    • Common themes: [summary]
  2. [Another Aspect] (...)
    • Representative quotes: ...
    • Common themes: ...
[Continue for top 5-7 positive aspects]

What People DISLIKE

  1. [Problem/Issue Name] (mentioned X times, Y% negative)
    • Representative quotes: "[quote 1]", "[quote 2]"
    • Common complaints: [summary]
    • Severity: [Low/Medium/High based on frequency and intensity]
  2. [Another Issue] (...)
    • Representative quotes: ...
    • Common complaints: ...
    • Severity: ...
[Continue for top 5-7 negative aspects]

What People WISH FOR

  1. [Feature/Improvement Request] (mentioned X times)
    • Representative quotes: "[quote 1]", "[quote 2]"
    • Common requests: [summary]
    • Urgency: [Low/Medium/High based on frequency and intensity]
  2. [Another Request] (...)
    • Representative quotes: ...
    • Common requests: ...
    • Urgency: ...
[Continue for top 5-7 requests]

Key Insights

  • [Insight 1: Major finding about sentiment patterns]
  • [Insight 2: Surprising or notable trend]
  • [Insight 3: Competitive advantages/disadvantages]
  • [Insight 4: Recommended actions based on sentiment]

Trending Topics

  • [Topic 1]: [Brief description of emerging discussion]
  • [Topic 2]: [Brief description]

Competitor Mentions

  • [Competitor 1]: [Sentiment when mentioned, context]
  • [Competitor 2]: [Sentiment when mentioned, context]

Implementation Protocol

Step 1: Create Todo List

```javascript
TodoWrite([
  "Identify target subreddits for analysis",
  "Collect hot posts from subreddit(s)",
  "Collect top posts from time period",
  "Fetch detailed post content and comments",
  "Classify sentiment for each post/comment",
  "Extract aspects and entities mentioned",
  "Aggregate positive sentiment patterns",
  "Aggregate negative sentiment patterns",
  "Aggregate wish/desire patterns",
  "Calculate sentiment percentages",
  "Generate structured summary report"
])
```

Step 2: Parallel Data Collection

CRITICAL: Batch all Reddit API calls in a single message:

```javascript
// [Single Message - All Data Collection]:
mcp__reddit__get_subreddit_hot_posts({subreddit_name: "gaming", limit: 20})
mcp__reddit__get_subreddit_top_posts({subreddit_name: "gaming", time: "month", limit: 20})
mcp__reddit__get_subreddit_hot_posts({subreddit_name: "Games", limit: 20})
mcp__reddit__get_subreddit_top_posts({subreddit_name: "Games", time: "month", limit: 20})
```

Step 3: Parallel Comment Analysis

For each relevant post ID, fetch details in parallel:

```javascript
// [Single Message - All Post Details]:
mcp__reddit__get_post_content({post_id: "abc123", comment_depth: 3, comment_limit: 20})
mcp__reddit__get_post_content({post_id: "def456", comment_depth: 3, comment_limit: 20})
mcp__reddit__get_post_content({post_id: "ghi789", comment_depth: 3, comment_limit: 20})
// ... up to 10-20 posts in parallel
```

Step 4: Sentiment Analysis Engine

For each piece of content (post title, post body, comment):
  1. Tokenize and normalize text
    • Convert to lowercase
    • Remove URLs and special characters
    • Identify key phrases
  2. Apply sentiment scoring
    ```javascript
    function analyzeSentiment(text) {
      const positive_score = countMatches(text, POSITIVE_KEYWORDS);
      const negative_score = countMatches(text, NEGATIVE_KEYWORDS);
      const wish_score = countMatches(text, WISH_KEYWORDS);

      return {
        sentiment: determineOverallSentiment(positive_score, negative_score),
        confidence: calculateConfidence(positive_score, negative_score),
        wishes: wish_score > 0,
        aspects: extractAspects(text)
      };
    }
    ```
  3. Extract context and aspects
    • What noun/feature is being discussed?
    • What adjectives describe it?
    • What verbs indicate action/desire?
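Once every item carries a sentiment label, the percentages in the report's Overall Sentiment Score section can be computed by straightforward counting. A minimal sketch, assuming each item was already labeled positive/negative/neutral/mixed in the scoring step:

```javascript
// Aggregate per-item labels into the percentages used in the report.
// Assumes items were classified upstream; unknown labels are ignored.
function aggregateSentiment(items) {
  const counts = { positive: 0, negative: 0, neutral: 0, mixed: 0 };
  for (const item of items) {
    if (item.sentiment in counts) counts[item.sentiment] += 1;
  }
  const total = items.length || 1; // avoid division by zero on empty input
  const percentages = {};
  for (const [label, n] of Object.entries(counts)) {
    percentages[label] = Math.round((n / total) * 100);
  }
  return { counts, percentages, total: items.length };
}
```

For example, three positive items and one negative item aggregate to 75% positive and 25% negative.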

Step 5: Generate Report

Save the structured summary to `/docs/reddit-sentiment-analysis-[topic]-[date].md`.
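Filling in the `[topic]-[date]` placeholders consistently can be sketched as below; the slug rules (lowercase, hyphen-separated) are an assumption, not part of the skill's specification.

```javascript
// Build the report path following the [topic]-[date] naming convention.
// Slugging rules here are illustrative; adjust to your own conventions.
function reportPath(topic, date = new Date()) {
  const slug = topic
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse non-alphanumerics to hyphens
    .replace(/^-|-$/g, ""); // trim leading/trailing hyphens
  const day = date.toISOString().slice(0, 10); // YYYY-MM-DD
  return `/docs/reddit-sentiment-analysis-${slug}-${day}.md`;
}
```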

Configuration Options

Basic Analysis (Quick)

  • Subreddits: 1-2
  • Posts: 10-20
  • Comments per post: 10-15
  • Time: ~5 minutes

Comprehensive Analysis (Deep)

  • Subreddits: 3-5
  • Posts: 40-60
  • Comments per post: 20-30
  • Time: ~15 minutes

Competitive Analysis (Wide)

  • Subreddits: 5-10 (including competitor subs)
  • Posts: 60-100
  • Comments per post: 15-20
  • Time: ~20 minutes

Example Usage

Example 1: Gaming Sentiment Analysis

User: "Analyze Reddit sentiment about Elden Ring"

Agent workflow:
1. Create todos for analysis pipeline
2. Identify subreddits: r/Eldenring, r/gaming, r/Games
3. Collect hot + top posts (parallel): 60 posts total
4. Fetch post details (parallel): 30 most relevant posts
5. Analyze ~500 comments for sentiment
6. Extract aspects: combat, difficulty, exploration, story, performance
7. Generate summary showing:
   - LIKES: Combat system (95%), exploration (92%), art direction (88%)
   - DISLIKES: Performance issues (67%), unclear quest objectives (54%)
   - WISHES: Better quest tracking, PC optimization, more checkpoints
8. Save report to docs/reddit-sentiment-analysis-eldenring-2025-01-26.md

Example 2: Product Brand Analysis

User: "What do people think about Tesla on Reddit?"

Agent workflow:
1. Create todos for brand analysis
2. Identify subreddits: r/teslamotors, r/electricvehicles, r/cars
3. Collect discussions mentioning "Tesla" (100 posts)
4. Analyze sentiment across aspects: quality, service, pricing, features
5. Generate brand perception summary:
   - LIKES: Autopilot, acceleration, software updates
   - DISLIKES: Build quality, service wait times, pricing
   - WISHES: Better quality control, more service centers, lower prices
   - Competitive position vs. other EVs

Best Practices

DO:

  • ✅ Analyze multiple subreddits for a balanced perspective
  • ✅ Include both hot and top posts for recency + quality
  • ✅ Read comments, not just post titles (comments carry the richest sentiment)
  • ✅ Provide direct quotes as evidence
  • ✅ Quantify sentiment with percentages and counts
  • ✅ Organize by aspect/feature, not just positive/negative
  • ✅ Save reports to the `/docs/` directory
  • ✅ Batch all API calls in single messages

DON'T:

  • ❌ Only analyze one subreddit (bias risk)
  • ❌ Ignore comment sentiment (posts alone insufficient)
  • ❌ Make claims without quote evidence
  • ❌ Mix multiple products in one analysis (confusing)
  • ❌ Save reports to root directory
  • ❌ Make sequential API calls (use parallel batching)

Advanced Features

Temporal Sentiment Tracking

Compare sentiment across time periods:

```javascript
// [Parallel Time-Based Analysis]:
get_subreddit_top_posts({time: "week"})
get_subreddit_top_posts({time: "month"})
get_subreddit_top_posts({time: "year"})
```

Generate a trend report showing sentiment evolution.
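Given per-period batches that have already been classified in Phase 2, the trend itself reduces to a positive-share per time window. A sketch, assuming each batch is an array of items with a `sentiment` label:

```javascript
// Compute positive share (%) per time window from already-classified batches.
// Batch keys (week/month/year) mirror the time filters used above.
function sentimentTrend(batches) {
  const trend = {};
  for (const [period, items] of Object.entries(batches)) {
    const pos = items.filter((i) => i.sentiment === "positive").length;
    trend[period] = items.length ? Math.round((pos / items.length) * 100) : 0;
  }
  return trend;
}
```

Comparing the week value against the month and year values then shows whether sentiment is improving or deteriorating.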

Competitive Benchmarking

Analyze multiple products simultaneously:

```javascript
// [Parallel Competitive Analysis]:
// Collect data for Product A
// Collect data for Product B
// Collect data for Product C
```

Generate a comparative sentiment matrix.

Aspect-Specific Deep Dive

Focus on one feature/aspect across all mentions:

```javascript
// Filter all content mentioning "multiplayer" or "co-op"
// Analyze sentiment specifically about that aspect
// Generate an aspect-focused report
```

Output Files

All analysis reports are saved to:
  • `/docs/reddit-sentiment-analysis-[topic]-[date].md` - Main report
  • `/docs/reddit-raw-data-[topic]-[date].json` - Raw data (optional)

Integration with Other Skills

This skill works well with:
  • competitive-analysis: Use sentiment data for market positioning
  • product-roadmap: Prioritize features based on user wishes
  • market-research: Combine with other data sources
  • trend-analysis: Track sentiment changes over time

Troubleshooting

Issue: Not enough posts found
  • Solution: Expand to more subreddits, increase time range
Issue: Sentiment too polarized (all positive or negative)
  • Solution: Check subreddit bias (fan subs vs. general subs)
Issue: Missing key aspects in analysis
  • Solution: Increase comment depth and limit
Issue: Analysis taking too long
  • Solution: Reduce number of posts, focus on top posts only

Summary

This skill transforms unstructured Reddit discussions into actionable sentiment insights by:
  1. Systematically collecting relevant posts and comments
  2. Classifying sentiment with context and evidence
  3. Extracting specific aspects and features discussed
  4. Aggregating patterns into structured summaries
  5. Providing quantified insights with direct quotes
  6. Identifying what users like, dislike, and wish for
  7. Delivering reports ready for product/business decisions
The output is a comprehensive, evidence-based understanding of community sentiment that can drive product development, marketing strategy, and competitive positioning.