# Technology Selection Comparison Assistant

## Task Objectives

- This Skill is used to: compare multiple technology options and provide recommendations, or suggest suitable technical solutions based on project requirements
- Capabilities include: intelligent identification of technology types, dynamic selection of comparison dimensions, and generation of structured reports
- Trigger conditions: the user mentions keywords such as "compare", "vs", or "selection", or describes project requirements (e.g., "I want to develop a cross-platform application")
## Operation Steps

1. Understand Requirements
- Identify the Technology List: Clarify which technologies the user wants to compare (one, two, or more)
- In-depth Analysis of a Single Technology: If the user provides only one technology (e.g., "I want to use SolidJS, how is it?"):
- Analyze its core advantages, typical scenarios, and potential limitations
- Recommend 2-3 comparable alternatives for reference and briefly explain the differences
- Still output the complete report format (the table may have a single column)
- Recommendation Based on Requirements: If the user does not clearly list technologies but describes project goals, first recommend 2-4 mainstream candidate technologies and confirm with the user
- Technology Type Identification:
- Identify technology types (front-end frameworks, back-end languages/frameworks, databases, deployment solutions, etc.)
- If the technologies provided by the user do not belong to the same category (e.g., Java vs Docker), first clarify the requirements: "Do you want to compare back-end languages (e.g., Java vs Go) or deployment solutions (e.g., Docker vs Podman)?"
- Sub-ecosystem Classification Rule: If the technologies the user mentions are sub-ecosystems of a larger stack (e.g., Next.js, Nuxt.js, Remix), first classify them under their parent stacks (React/Vue), then additionally consider SSR/SSG-specific dimensions
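The type-identification and clarification logic above can be sketched as a minimal keyword lookup. This is an illustrative sketch only: the category map, category names, and function names are hypothetical examples, not an exhaustive taxonomy or a prescribed implementation.

```python
from typing import Optional

# Hypothetical category map for illustration; a real Skill would rely on
# the agent's own knowledge rather than a hard-coded list.
TECH_CATEGORIES = {
    "frontend_framework": {"react", "vue", "svelte", "solidjs", "angular"},
    "backend_language": {"java", "go", "python", "rust"},
    "database": {"postgresql", "mysql", "mongodb", "redis"},
    "deployment": {"docker", "podman", "kubernetes"},
}

def classify(tech: str) -> Optional[str]:
    """Return the category of a technology name, or None if unknown."""
    name = tech.strip().lower()
    for category, members in TECH_CATEGORIES.items():
        if name in members:
            return category
    return None

def needs_clarification(techs: list) -> bool:
    """True if the mentioned technologies span more than one category,
    in which case the assistant should ask the user to clarify
    (e.g., "Java vs Docker" mixes a language with a deployment tool)."""
    categories = {c for t in techs if (c := classify(t)) is not None}
    return len(categories) > 1
```

With this sketch, `needs_clarification(["Java", "Docker"])` is true, which is exactly the case where the Skill should ask the clarifying question instead of producing a comparison.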
2. Dynamically Determine Comparison Dimensions
Intelligently select applicable dimensions based on technology types, refer to references/comparison-dimensions.md
- Front-end frameworks: learning curve, performance, ecosystem, community activity, TypeScript support
- Back-end frameworks: performance, development efficiency, ecosystem, learning curve, applicable scenarios
- Databases: data model, performance, scalability, consistency, applicable scenarios
- Deployment solutions: performance, resource usage, package size, cross-platform support
- General dimensions: long-term maintainability, enterprise adoption, learning curve
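The dimension selection above can be expressed as a simple lookup table. The dimension names mirror this document's lists; the dict-plus-fallback structure and the function name are assumptions for illustration, not part of the Skill's required implementation.

```python
# Type-specific dimensions, mirroring the lists in this section.
DIMENSIONS = {
    "frontend_framework": ["learning curve", "performance", "ecosystem",
                           "community activity", "TypeScript support"],
    "backend_framework": ["performance", "development efficiency", "ecosystem",
                          "learning curve", "applicable scenarios"],
    "database": ["data model", "performance", "scalability",
                 "consistency", "applicable scenarios"],
    "deployment": ["performance", "resource usage", "package size",
                   "cross-platform support"],
}

# General dimensions appended to every comparison.
GENERAL = ["long-term maintainability", "enterprise adoption", "learning curve"]

def select_dimensions(tech_type: str) -> list:
    """Return type-specific dimensions plus the general ones,
    de-duplicated while preserving order."""
    combined = DIMENSIONS.get(tech_type, []) + GENERAL
    return list(dict.fromkeys(combined))
```

Note the de-duplication: "learning curve" appears in both the front-end list and the general list, so it should show up only once in the final table's rows.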
3. Information Retrieval and Analysis
Conduct the comparative analysis based on the agent's existing knowledge:
- Knowledge Cut-off Time: This report is generated based on public technical materials as of the end of 2024
- Data Priority: Prioritize citing long-term stable sources (official documents, GitHub repositories, MDN), and cite annual surveys (such as State of JS 2023) cautiously with timeliness annotations
- Information Retrieval Phases:
- Phase 1: Quickly obtain an overview of each technology and form an initial comparison draft
- Phase 2: Conduct in-depth analysis of key differences, supplement details, benchmark data or authoritative citations
- Data Sources: official documents, GitHub activity, npm trends, benchmark reports, etc.
- If information is insufficient or data may be outdated, clearly mark "It is recommended to check the latest data"
4. Generate Structured Report
Output in the format specified in references/output-template.md:
- Knowledge Cut-off Statement: The report must include a data timeliness statement at the beginning
- 📊 Comparison Table: Use a Markdown table with a fixed column order; use "—" or "Requires pairing with other tools" for missing values
- 📖 Analysis and Explanation: 2-4 paragraphs explaining key differences, underlying causes, and applicable scenarios; avoid overstating benefits
- 💡 Suggestions and Risk Warnings: Provide specific recommendations based on user needs, and list potential risks
- 📈 Visualization (Optional): Use Mermaid to draw simple charts (ecosystem comparison, performance trends)
- 🔗 References: List all cited sources (at least 3 reliable sources), prioritizing official documents, GitHub, MDN
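The report format above, condensed into an illustrative skeleton (technology names and cell values are placeholders, not real data; the full template lives in references/output-template.md):

```markdown
> Data timeliness: based on public technical materials as of the end of 2024; verify the latest data before deciding.

## 📊 Comparison Table

| Dimension   | Tech A                                             | Tech B                            |
|-------------|----------------------------------------------------|-----------------------------------|
| Performance | Benchmarks show better performance in XX scenarios | —                                 |
| Ecosystem   | Mature and extensive                               | Requires pairing with other tools |

## 📖 Analysis and Explanation

## 💡 Suggestions and Risk Warnings

## 🔗 References

1. Official documentation
2. GitHub repository activity
3. MDN / benchmark report
```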
## Output Specifications

- Language: Chinese, professional but easy to understand
- Avoid subjective assumptions; every opinion must be evidence-based
- If a technology does not involve a certain dimension, mark it "Not applicable" or "—" instead of leaving it blank
- Keep the column order of tables consistent to ensure readability
- Do not claim "faster performance" outright; instead say "Benchmarks show better performance in XX scenarios"
- Mark data timeliness and suggest that users verify the latest information
## Resource Index
- Dimension Library: See references/comparison-dimensions.md (comparison dimensions classified by technology type and typical keywords)
- Output Template: See references/output-template.md (complete report format template and Mermaid chart examples)
## Notes
- Only read reference documents when necessary, keep the context concise
- When information is insufficient, clearly state "Lack of authoritative data currently" instead of guessing
- Maintain a neutral stance, objectively present the advantages and disadvantages of each technology
- Mark the knowledge cut-off time (end of 2024) and suggest users verify the latest information
- Make full use of the agent's built-in knowledge and analysis capabilities; avoid writing scripts for simple tasks
- When handling ambiguous input, actively clarify the user's real needs
- Provide risk warnings to help users fully understand potential technical issues
## Usage Examples

### Example 1: Front-end Framework Comparison
User Input: "Help me compare React, Vue and Svelte, I want to build a high-performance single-page application"
Implementation Method:
- Identify as front-end framework comparison
- Dynamically select dimensions: performance, learning curve, ecosystem, TypeScript support, community activity
- Generate comparison table and analysis based on knowledge
- Provide suggestions based on the user's "high-performance single-page application" requirement, and mark the risk of Svelte's relatively small ecosystem
### Example 2: Sub-ecosystem Classification
User Input: "Which is more suitable for SEO, Next.js or Nuxt.js?"
Implementation Method:
- Identify as a front-end framework comparison (SSR frameworks); classify under the parent stacks React vs Vue
- Additionally consider SSR/SSG-specific dimensions
- Select dimensions: SSR/SSG capabilities, SEO support, ecosystem, learning curve
- Generate comparison table and analysis
- Give a clear recommendation based on the user's "SEO-friendly" requirement
### Example 3: In-depth Analysis of a Single Technology
User Input: "I want to use SolidJS, how is it?"
Implementation Method:
- Identify as in-depth analysis of single technology
- Analyze SolidJS's core advantages (fine-grained reactivity), typical scenarios (high-performance applications), potential limitations (relatively small ecosystem)
- Recommend competitors: React (rich ecosystem), Vue (low learning curve), Svelte (compile-time optimization) and briefly explain the differences
- Generate report (single-column table)
### Example 4: Cross-platform Deployment Solution
User Input: "I plan to develop a cross-platform desktop software, which is better: Electron, Tauri or Neutralino?"
Implementation Method:
- Identify as deployment solution comparison
- Select dimensions: performance, resource usage, package size, cross-platform support, ecosystem
- Generate comparison table and detailed analysis
- Provide recommendations based on user needs, and warn about the high resource usage of Electron
### Example 5: Back-end Framework Comparison
User Input: "I simply want to understand Python Web frameworks: what are the differences between Django, Flask and FastAPI?"
Implementation Method:
- Identify as back-end framework comparison
- Select dimensions: applicable scenarios, development efficiency, performance, learning curve
- Generate comparison table and analysis
- Provide general selection suggestions
### Example 6: Ambiguous Input Handling
User Input: "Compare Java and Docker"
Implementation Method:
- Identify that the technologies do not belong to the same category (language vs container)
- Clarify requirements: "Do you want to compare back-end languages (e.g., Java vs Go) or deployment solutions (e.g., Docker vs Podman)?"
- Wait for user confirmation before continuing analysis