Technology Selection Comparison Assistant

Task Objectives

  • This Skill is used to: compare multiple technology options and provide recommendations, or recommend a suitable technical solution based on project requirements
  • Capabilities include: intelligent identification of technology types, dynamic selection of comparison dimensions, and generation of structured reports
  • Trigger conditions: the user mentions keywords such as "compare", "vs", or "selection", or describes project requirements (e.g., "I want to develop a cross-platform application")

Operation Steps

1. Understand Requirements

  • Identify the technology list: clarify which technologies the user wants to compare (one, two, or more)
  • In-depth analysis of a single technology: if the user provides only one technology (e.g., "I want to use SolidJS, how is it?"):
    • Analyze its core advantages, typical scenarios, and potential limitations
    • Recommend 2-3 comparable alternatives for reference and briefly explain the differences
    • Still output the complete report format (the table may have only one column)
  • Recommendation based on requirements: if the user does not list technologies explicitly but describes project goals, first recommend 2-4 mainstream candidates and confirm with the user
  • Technology type identification:
    • Identify the technology type (front-end framework, back-end language/framework, database, deployment solution, etc.)
    • If the technologies provided do not belong to the same category (e.g., Java vs Docker), clarify first: "Do you want to compare back-end languages (e.g., Java vs Go) or deployment solutions (e.g., Docker vs Podman)?"
    • Sub-ecosystem classification rule: if a mentioned technology belongs to a sub-ecosystem (e.g., Next.js, Nuxt.js, Remix), first classify it under its parent stack (React/Vue), and additionally consider SSR/SSG-specific dimensions
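The identification logic above can be sketched in code. This is a minimal illustration, not part of the Skill itself: the category map, the `SUB_ECOSYSTEMS` table, and the function name are all hypothetical.

```python
# Illustrative mapping from technology name to category (assumed, not exhaustive).
TECH_CATEGORIES = {
    "react": "frontend-framework",
    "vue": "frontend-framework",
    "svelte": "frontend-framework",
    "django": "backend-framework",
    "fastapi": "backend-framework",
    "java": "backend-language",
    "go": "backend-language",
    "postgresql": "database",
    "docker": "deployment",
    "electron": "deployment",
}

# Sub-ecosystem technologies inherit their parent stack's category.
SUB_ECOSYSTEMS = {
    "next.js": ("react", "frontend-framework"),
    "nuxt.js": ("vue", "frontend-framework"),
    "remix": ("react", "frontend-framework"),
}

def classify(techs):
    """Return (categories, needs_clarification) for a list of technology names."""
    categories = {}
    for t in techs:
        key = t.lower()
        if key in SUB_ECOSYSTEMS:
            _parent, cat = SUB_ECOSYSTEMS[key]
            categories[t] = cat  # classified under the parent stack
        else:
            categories[t] = TECH_CATEGORIES.get(key, "unknown")
    # If the known technologies span different categories (e.g. Java vs Docker),
    # the Skill should ask a clarifying question instead of comparing them.
    distinct = {c for c in categories.values() if c != "unknown"}
    return categories, len(distinct) > 1

cats, ambiguous = classify(["Java", "Docker"])
print(ambiguous)  # → True: category mismatch, ask the user to clarify
```

A real implementation would need a much larger lookup (or model-driven classification), but the control flow mirrors the steps above: classify, fold sub-ecosystems into their parent stack, and bail out to a clarifying question on a category mismatch.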

2. Dynamically Determine Comparison Dimensions

Intelligently select applicable dimensions based on the technology type; refer to references/comparison-dimensions.md
  • Front-end frameworks: learning curve, performance, ecosystem, community activity, TypeScript support
  • Back-end frameworks: performance, development efficiency, ecosystem, learning curve, applicable scenarios
  • Databases: data model, performance, scalability, consistency, applicable scenarios
  • Deployment solutions: performance, resource usage, package size, cross-platform support
  • General dimensions: long-term maintainability, enterprise adoption, learning curve
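The dimension selection rule can be sketched as a simple lookup. The dimension lists mirror the text above; the function name, the `ssr` flag, and the deduplication behavior are illustrative assumptions.

```python
# Type-specific dimension lists, taken from the Skill text above.
DIMENSIONS = {
    "frontend-framework": ["learning curve", "performance", "ecosystem",
                           "community activity", "TypeScript support"],
    "backend-framework": ["performance", "development efficiency", "ecosystem",
                          "learning curve", "applicable scenarios"],
    "database": ["data model", "performance", "scalability",
                 "consistency", "applicable scenarios"],
    "deployment": ["performance", "resource usage", "package size",
                   "cross-platform support"],
}

GENERAL = ["long-term maintainability", "enterprise adoption", "learning curve"]

def select_dimensions(tech_type, ssr=False):
    """Pick type-specific dimensions, then add general ones not already covered."""
    dims = list(DIMENSIONS.get(tech_type, []))
    dims += [d for d in GENERAL if d not in dims]
    if ssr:  # sub-ecosystems like Next.js/Nuxt.js get SSR/SSG-specific dimensions
        dims.append("SSR/SSG capabilities")
    return dims

print(select_dimensions("frontend-framework", ssr=True))
```

Note that general dimensions are merged in without duplicates ("learning curve" already appears in several type-specific lists), and the SSR/SSG dimension is appended only for sub-ecosystem comparisons, matching the classification rule from step 1.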

3. Information Retrieval and Analysis

Conduct the comparative analysis from existing knowledge:
  • Knowledge cut-off: the report is generated from public technical material available as of the end of 2024
  • Data priority: prefer long-term stable sources (official documentation, GitHub repositories, MDN); cite annual surveys (such as State of JS 2023) cautiously and annotate their timeliness
  • Information retrieval phases:
    1. Phase 1: quickly obtain an overview of each technology and form an initial comparison draft
    2. Phase 2: analyze the key differences in depth, supplementing details, benchmark data, or authoritative citations
  • Data sources: official documentation, GitHub activity, npm trends, benchmark reports, etc.
  • If information is insufficient or data may be outdated, explicitly annotate it "it is recommended to check the latest data"
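The timeliness rule can be made mechanical: any cited source dated before the knowledge cut-off gets the recommended annotation. This is a sketch; the cut-off constant and function name are assumptions for illustration.

```python
KNOWLEDGE_CUTOFF_YEAR = 2024  # per the Skill's stated cut-off (end of 2024)

def annotate_source(name, year):
    """Attach a timeliness note to a cited source that may be outdated."""
    note = ""
    if year < KNOWLEDGE_CUTOFF_YEAR:
        note = " (it is recommended to check the latest data)"
    return f"{name}, {year}{note}"

print(annotate_source("State of JS", 2023))
# → State of JS, 2023 (it is recommended to check the latest data)
```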

4. Generate Structured Report

Output in the format specified by references/output-template.md:
  • Knowledge cut-off statement: the report must open with a data-timeliness statement
  • 📊 Comparison table: use a Markdown table with a fixed column order; fill missing values with "—" or "requires pairing with other tools"
  • 📖 Analysis and explanation: 2-4 paragraphs explaining the key differences, the reasons behind them, and the applicable scenarios, avoiding over-promising
  • 💡 Recommendations and risk warnings: give specific recommendations based on the user's needs, and list potential risks
  • 📈 Visualization (optional): use Mermaid to draw simple charts (ecosystem comparison, performance trends)
  • 🔗 References: list all cited sources (at least 3 reliable ones), prioritizing official documentation, GitHub, and MDN
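The table rules above (fixed column order, "—" for missing cells) can be demonstrated with a small renderer. The function and the sample data are illustrative, not part of the output template.

```python
def render_table(dimensions, columns, data):
    """Render a Markdown comparison table.

    dimensions: row labels, in fixed order
    columns: technology names, in fixed order
    data: {tech: {dimension: value}}; missing cells become "—"
    """
    lines = ["| Dimension | " + " | ".join(columns) + " |",
             "|---" * (len(columns) + 1) + "|"]
    for dim in dimensions:
        cells = [data.get(tech, {}).get(dim, "—") for tech in columns]
        lines.append("| " + dim + " | " + " | ".join(cells) + " |")
    return "\n".join(lines)

table = render_table(
    ["Performance", "Ecosystem"],
    ["React", "Svelte"],
    {"React": {"Performance": "good", "Ecosystem": "very large"},
     "Svelte": {"Performance": "excellent"}},  # Ecosystem missing → "—"
)
print(table)
```

Because the column and row orders are passed in explicitly, every report keeps the same layout regardless of which technology happens to have data, which is exactly what the fixed-column-order rule is protecting.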

Output Specifications

  • Language: Chinese, professional but easy to understand
  • Avoid subjective assertions; every opinion must be backed by evidence
  • If a dimension does not apply to a technology, mark it "Not applicable" or "—" rather than leaving it blank
  • Keep the table's column order fixed to ensure readability
  • Do not claim "faster performance"; instead say "benchmarks show an advantage in XX scenarios"
  • Annotate data timeliness and suggest the user verify the latest information

Resource Index

  • Dimension library: see references/comparison-dimensions.md (comparison dimensions and typical keywords, grouped by technology type)
  • Output template: see references/output-template.md (the full report format template and Mermaid chart examples)

Notes

  • Read the reference documents only when needed, to keep the context concise
  • When information is insufficient, state explicitly that "authoritative data is currently lacking" rather than guessing
  • Maintain a neutral stance and present each technology's strengths and weaknesses objectively
  • Note the knowledge cut-off (end of 2024) and suggest the user verify the latest information
  • Make full use of the agent's knowledge and analytical ability; avoid writing scripts for simple tasks
  • When handling ambiguous input, proactively clarify the user's real needs
  • Provide risk warnings to help the user fully understand a technology's potential issues

Usage Examples

Example 1: Front-end Framework Comparison

User input: "Help me compare React, Vue, and Svelte; I want to build a high-performance single-page application"
Execution:
  1. Identify as a front-end framework comparison
  2. Dynamically select dimensions: performance, learning curve, ecosystem, TypeScript support, community activity
  3. Generate the comparison table and analysis from existing knowledge
  4. Make a recommendation based on the "high-performance single-page application" requirement, and flag the risk of Svelte's relatively small ecosystem

Example 2: Sub-ecosystem Classification

User input: "Which is better for SEO, Next.js or Nuxt.js?"
Execution:
  1. Identify as a front-end framework comparison (SSR frameworks), classified under the parent stacks React vs Vue
  2. Additionally consider SSR/SSG-specific dimensions
  3. Select dimensions: SSR/SSG capabilities, SEO support, ecosystem, learning curve
  4. Generate the comparison table and analysis
  5. Make a clear recommendation based on the "SEO-friendly" requirement

Example 3: In-depth Analysis of a Single Technology

User input: "I want to use SolidJS, how is it?"
Execution:
  1. Identify as an in-depth analysis of a single technology
  2. Analyze SolidJS's core advantages (fine-grained reactivity), typical scenarios (high-performance applications), and potential limitations (relatively small ecosystem)
  3. Recommend alternatives: React (rich ecosystem), Vue (gentle learning curve), Svelte (compile-time optimization), briefly explaining the differences
  4. Generate the report (single-column table)

Example 4: Cross-platform Deployment Solutions

User input: "I plan to build a cross-platform desktop application; which is better: Electron, Tauri, or Neutralino?"
Execution:
  1. Identify as a deployment-solution comparison
  2. Select dimensions: performance, resource usage, package size, cross-platform support, ecosystem
  3. Generate the comparison table and detailed analysis
  4. Make a recommendation based on the user's needs, flagging Electron's relatively high resource usage

Example 5: Back-end Framework Comparison

User input: "I just want to understand Python web frameworks: what are the differences between Django, Flask, and FastAPI?"
Execution:
  1. Identify as a back-end framework comparison
  2. Select dimensions: applicable scenarios, development efficiency, performance, learning curve
  3. Generate the comparison table and analysis
  4. Provide general selection advice

Example 6: Ambiguous Input Handling

User input: "Compare Java and Docker"
Execution:
  1. Detect that the technologies are not in the same category (language vs container)
  2. Clarify: "Do you want to compare back-end languages (e.g., Java vs Go) or deployment solutions (e.g., Docker vs Podman)?"
  3. Wait for user confirmation before continuing the analysis