Use this workflow when a user receives a GitHub Issue (bug report, question, or feature request) and needs AI assistance to analyze the problem, decide whether to act, and draft a response. The AI leads the entire process; the user only makes judgments at key decision points.
Core Principles
- Diagnose before responding — Don't draw conclusions without reading the code, and don't qualify the issue before finding the root cause
- Be honest with users — Admit it's a bug if it is, clearly explain architectural limitations, don't shift blame or make empty promises
- Quantify costs — "High cost" is not a conclusion; spell out where the cost lies: how many files must change, which modules are involved, and whether a test environment exists
- Provide alternative solutions — Not taking action doesn't mean ignoring the issue; tell users how to work around it for now
Workflow
Step 1: Obtain Issue Content
Goal: Get complete information about the issue.
Methods:
- The user provides the issue link or repository address
- Retrieve issue details via WebFetch or other available tooling
- Extract key information: user environment, reproduction steps, expected behavior, actual behavior, user's guesses
Output: Briefly paraphrase the issue content to the user and confirm understanding is correct.
Prohibited: Starting the analysis from the title alone — the full issue content must be read.
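Step 1 can be partially automated against the public GitHub REST API. A minimal Python sketch — the function names (`issue_api_url`, `extract_key_fields`, `fetch_issue`) are illustrative, not part of any existing tooling:

```python
import json
from urllib.request import Request, urlopen

API = "https://api.github.com"

def issue_api_url(owner: str, repo: str, number: int) -> str:
    """Build the REST endpoint for a single issue."""
    return f"{API}/repos/{owner}/{repo}/issues/{number}"

def extract_key_fields(issue: dict) -> dict:
    """Pull out the fields Step 1 cares about from the API payload."""
    return {
        "title": issue.get("title", ""),
        "author": (issue.get("user") or {}).get("login", ""),
        "body": issue.get("body") or "",
        "labels": [label["name"] for label in issue.get("labels", [])],
    }

def fetch_issue(owner: str, repo: str, number: int) -> dict:
    """Fetch and summarize one issue (performs a network call)."""
    req = Request(issue_api_url(owner, repo, number),
                  headers={"Accept": "application/vnd.github+json"})
    with urlopen(req) as resp:
        return extract_key_fields(json.load(resp))
```

Reproduction steps, expected behavior, and the user's guesses still have to be read out of the `body` text by hand; the API only separates the structured metadata.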
Step 2: Code Diagnosis
Goal: Find the root cause in the code.
Methods:
- Extract keywords from the issue description (function name, error message, page name, etc.)
- Trace the relevant path through the code: frontend entry → IPC call → backend processing → underlying implementation
- Map the complete call chain, noting the file and line number at each step
- Confirm the root cause: where the code went wrong, or why the code doesn't support the user's scenario
Output: Show the user:
- Complete call chain (files + line numbers)
- One-sentence summary of the root cause
- Attach key code snippets if necessary
Prohibited:
- Guess the cause without reading the code
- Draw conclusions after looking at only one file (follow the complete chain)
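The keyword-location step can be sketched as a simple repository scan. `find_keyword` is an illustrative helper, not an existing tool (in practice `rg` or `git grep` do the same job faster); it returns `(file, line number, line text)` hits that seed the call-chain trace:

```python
from pathlib import Path

def find_keyword(root: str, keyword: str,
                 exts=(".py", ".ts", ".tsx", ".rs")) -> list[tuple[str, int, str]]:
    """Scan source files under `root` for `keyword`.

    Returns (file path, 1-based line number, stripped line text) for each hit.
    """
    hits = []
    for path in Path(root).rglob("*"):
        if path.suffix not in exts or not path.is_file():
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            if keyword in line:
                hits.append((str(path), lineno, line.strip()))
    return hits
```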
Step 3: Qualification
Goal: Determine what type this issue belongs to.
| Type | Judgment Criteria | Response Strategy |
|---|---|---|
| Bug | Within the product design scope, behavior does not meet expectations | Schedule a fix |
| Architectural Limitation | User scenario exceeds the product's design premises | Explain the current situation, evaluate whether expansion is worthwhile |
| Feature Request | The product itself is fine, the user wants new capabilities | Evaluate cost and priority |
| Usage Issue | The user's operation method is incorrect, but the product can be more user-friendly | Respond with guidance, consider optimizing the experience |
Key Judgment: Distinguish "meant to do it but did it wrong" (bug) from "never meant to do it" (architectural limitation / feature request).
Output: Explain the qualification conclusion and reasons to the user, and proceed only after user confirmation.
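One possible encoding of the qualification logic, assuming the diagnosis answers three yes/no questions; this is a simplification of the table above, and `qualify` is an illustrative name:

```python
def qualify(within_design_scope: bool, behavior_correct: bool,
            user_operated_correctly: bool) -> str:
    """Map the Step 2 diagnosis to one of the four issue types."""
    if not user_operated_correctly:
        # The product behaved as designed; the user's operation was wrong.
        return "usage issue"
    if not within_design_scope:
        # The scenario exceeds the product's design premises.
        return "architectural limitation"
    if not behavior_correct:
        # Within scope, but behavior does not meet expectations.
        return "bug"
    # Product is fine; the user wants a new capability.
    return "feature request"
```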
Step 4: Decision (Do it or not)
Goal: Based on the root cause and qualification, recommend whether or not to act.
Evaluate Four Dimensions
- Scope of changes — Modify a few lines / modify one module / add a new module
- Impact scope — Only modify one file / need to modify the call chain of multiple files / need to refactor
- Testing conditions — Is there an environment to reproduce and verify (no environment = high risk)
- User workaround cost — whether the user can solve it on their own by other means
Decision Matrix
| Scope of Changes | Has Testing Conditions | User Can Work Around | Recommendation |
|---|---|---|---|
| Small (a few lines) | Yes | — | Fix directly |
| Medium (one module) | Yes | — | Schedule for implementation |
| Large (new module/refactor) | Yes | No | Evaluate then schedule |
| Large (new module/refactor) | No | Yes | Record the requirement, do not implement temporarily |
| Any | No | Yes | Inform of workaround, record the requirement |
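As a sanity check, the matrix above can be encoded directly. Combinations the table leaves open (e.g., large scope with both a test environment and a workaround) fall through to an explicit "discuss with the user" result rather than a guess; `recommend` is an illustrative name:

```python
def recommend(scope: str, can_test: bool, can_work_around: bool) -> str:
    """Mirror the decision matrix. scope: 'small' | 'medium' | 'large'."""
    if can_test:
        if scope == "small":
            return "fix directly"
        if scope == "medium":
            return "schedule for implementation"
        if scope == "large" and not can_work_around:
            return "evaluate, then schedule"
    elif can_work_around:
        if scope == "large":
            # Large change with no way to verify: record it, don't implement yet.
            return "record the requirement, do not implement for now"
        return "inform of the workaround, record the requirement"
    # Row not covered by the matrix: surface the gap instead of guessing.
    return "not covered by the matrix: discuss with the user"
```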
Output: Explain the recommendation and reasons to the user. If recommending not to do it, quantify the cost (how many files to modify, which modules are involved, why testing is not possible).
Proceed to the response stage only after the user confirms the decision.
Step 5: Draft a Response
Goal: Write a professional, appropriate, and informative issue response.
Response Structure (Three Layers)
- Explain scenario positioning — what scenario the feature was designed for, so the user understands why their case is not currently supported
- State actual impact — how much the missing capability actually affects the user, and whether an alternative exists
- Explain follow-up plans — if implementing, give the direction; if not, honestly explain the cost and reasons
Tone Principles
- Thank for feedback — Users deserve respect for taking time to submit an issue
- Don't shift blame — Don't say "you used it wrong"; say "this scenario isn't covered yet"
- Give specific suggestions — Don't just say "No", tell users what to do now
- Quantify costs — Let users understand it's not that we don't want to do it, but the objective cost is high
Response Template
Hi @{username}, thanks for your feedback!
**1. Function Positioning**
{What scenario this function is designed for, why it does not support the user's scenario currently}
**2. Actual Impact on You**
{Can the user work around it now, how to work around it, whether core functions are affected}
**3. Regarding {the capability the user expects}**
{Cost explanation + follow-up plan}
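A hypothetical filled-in instance of the template — the username, feature, and module names are invented for illustration:

```markdown
Hi @alice, thanks for your feedback!

**1. Function Positioning**
The export feature was designed for single-project workspaces, which is why batch
export across projects is not currently supported.

**2. Actual Impact on You**
You can work around this today by exporting each project individually; core export
functionality is unaffected.

**3. Regarding batch export**
Supporting it would require changes across three modules (project-selection UI, the
IPC layer, and the export backend), and we currently lack a multi-project test
environment to verify it. We've recorded it as a feature request and will
re-evaluate its priority.
```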
Output: Response draft, wait for user confirmation before publishing.
Prohibited:
- Publish directly to GitHub without user confirmation
- Respond to non-technical users with technical jargon
- Only state conclusions without explaining reasons
Step 6: Publish
After the user confirms the response content:
- Publish the comment via the available GitHub tooling (e.g., the `gh` CLI)
- Add labels based on the qualification result (bug / enhancement / wontfix / question)
- If it needs to be recorded as a requirement, remind the user whether to add it to the requirement pool
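If the `gh` CLI is installed, publishing and labeling can be scripted. A hedged sketch — `publish_comment` and `add_label` are illustrative names, and `dry_run` defaults to building the command without executing it, matching the rule that nothing is published before user confirmation:

```python
import subprocess

def publish_comment(repo: str, number: int, body: str, *, dry_run: bool = True) -> list[str]:
    """Build (and optionally run) the `gh` command that posts the confirmed response."""
    cmd = ["gh", "issue", "comment", str(number), "--repo", repo, "--body", body]
    if not dry_run:
        subprocess.run(cmd, check=True)
    return cmd

def add_label(repo: str, number: int, label: str, *, dry_run: bool = True) -> list[str]:
    """Build (and optionally run) the `gh` command that applies the qualification label."""
    cmd = ["gh", "issue", "edit", str(number), "--repo", repo, "--add-label", label]
    if not dry_run:
        subprocess.run(cmd, check=True)
    return cmd
```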
Communication Specifications During the Process
AI-Led Rhythm
- Proactively advance to the next step after completing each step
- Wait for user confirmation before proceeding with key conclusions (qualification, decision, response content)
- The AI handles technical details on its own, only showing conclusions to the user
Nodes Requiring User Confirmation
| Step | What to Confirm |
|---|---|
| Step 1 | "Did I understand the issue content correctly?" |
| Step 3 | "Do you agree with this qualification?" |
| Step 4 | "Do you agree with this decision?" |
| Step 5 | "Can this response be published?" |
No Need to Ask the User
| Item | Do Directly |
|---|---|
| How to check the code | The AI traces the code path on its own |
| How to analyze the root cause | The AI judges on its own |
| How to quantify costs | The AI evaluates on its own |