# NSFC Proposal Expert Review Simulator

## Important Disclaimer (Non-Official)

- The output of this skill is for writing improvement and self-checking only; it does not represent any official review standard, and it constitutes neither a funding conclusion nor a commitment.
- The "review grade / funding recommendation" is provided only as a reference for prioritization and improvement direction; omit that section unless the user explicitly requests it.
## Skill Dependencies

- Parallel multi-group review mode depends on the parallel-vibe skill.
- If parallel-vibe is unavailable, disabled, or its script cannot be found, the skill automatically degrades to single-group mode (still with 5 experts).
- Expert persona prompt files are referenced via `config.yaml:parallel_review.reviewer_personas[*].prompt_file`, and the aggregation rules are in `references/aggregation_rules.md`.
## Security and Privacy (Hard Rules)

- Treat proposal content as sensitive by default: process only the files/directories explicitly provided by the user, and do not expand the scanning scope without permission.
- Unless the user explicitly requests it and the risks have been acknowledged: no internet access, no sending large portions of the original text to external services, and no repeating unnecessary personal/institutional information in the output.
- Perform "text reading and review" only; by default, do not execute anything (e.g., do not run LaTeX compilation or execute scripts).
- If the output needs to be shared: prefer an "issue summary + actionable modification suggestions"; when quoting the original text is necessary, quote only the shortest fragment required.
## Input

The user must provide at least one of the following:

- A proposal directory (recommended; may contain multiple files)
- A single main file (if the user has only one file)
- A compressed archive (if provided, first extract it to a temporary directory for review, and record the extraction location in the report)
  - Extraction safety: forbid path traversal; avoid overwriting existing user files; when in doubt, ask the user to confirm the extraction location first.
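The path-traversal rule above can be sketched as a small pre-extraction filter. This is a hypothetical helper (not one of this skill's scripts): it rejects archive members whose paths are absolute or that escape the extraction root via `..`.

```python
import os
import posixpath

def is_safe_member(name: str) -> bool:
    """Return True if an archive member path stays inside the extraction root."""
    # Reject absolute paths (POSIX- or OS-style).
    if posixpath.isabs(name) or os.path.isabs(name):
        return False
    # Normalize the path and reject anything that climbs above the root.
    normalized = posixpath.normpath(name)
    return not (normalized == ".." or normalized.startswith("../"))
```

A caller would check every member before extracting and refuse the archive (or skip the member) when the check fails.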
Optional supplementary inputs (do not assume values the user has not provided):

- A dimension to prioritize (e.g., "innovation / feasibility / research foundation")
- An output file path (for the default, see "Output")
- A tone preference (e.g., "strict / moderate / very specific")
- The number of review panels (each panel has 5 fixed experts)
  - Default: see `config.yaml:parallel_review.default_panel_count`
  - Upper limit: see `config.yaml:parallel_review.max_panel_count`
## Output

- For the default output filename, see `config.yaml:output_settings.default_filename`; by default the report is written to the proposal directory.
- If the user specifies an output path, follow it; if that path is unwritable or its directory does not exist, fail fast and ask the user for a different path.
- In parallel mode, output "expert consensus + independent opinions + aggregated suggestions"; each expert's original opinions can be appended according to the configuration.
- In parallel mode, it is recommended to organize each panel's original review into a final deliverable file: `./{panel_dir}/G{panel_number}.md` (for `panel_dir`, see `config.yaml:output_settings.panel_dir`).
- Intermediate files are hidden under `config.yaml:output_settings.intermediate_dir` by default, to avoid cluttering the root directory with plans, logs, and parallel runtime environments.
## Workflow (Execution Specifications)

### Read the Configuration (Mandatory)

Before starting, read `config.yaml` in the skill directory and treat the following configuration sections as the single source of truth during execution:

- Review dimensions: weights, key points, and common issues (used as a checklist)
- Severity levels: P0/P1/P2 grading criteria (used for classification)
- Grades and recommendations (output only if the user requests it)
- `proposal_files.patterns` / `proposal_files.exclude`: proposal file identification rules
- `output_settings`: output filename, panel directory, intermediate-file hiding strategy, and section switches
- `parallel_review`: parallel review switch, expert personas, and aggregation strategy
### Phase 1: Pre-Check (Fail Fast)

- Verify that the input path exists and is readable; if it is a directory, first filter the list of files to read according to the identification rules.
- If the file count is 0: report an error immediately and ask the user for the correct path/file.
- If the file count is abnormally large, or the directory clearly contains many irrelevant files: ask the user to confirm the review scope first (to avoid reading the wrong directory or too many files).

Recommended deterministic approach (avoids missed subdirectories, mis-scanned intermediate directories, and scanned deliverables):

```bash
# List the .tex files to include in the review (recursive scan, automatically
# skipping panels/, .nsfc-reviewers/, and .parallel_vibe/)
python3 <nsfc_reviewers_path>/scripts/list_proposal_files.py --proposal-path <proposal_root>

# Optional: cap the number of files (exit code 3 if exceeded)
python3 <nsfc_reviewers_path>/scripts/list_proposal_files.py --proposal-path <proposal_root> --max-files 200
```
### Phase 2: Read-Through and Structured Understanding

- Read through the proposal quickly in chapter order, extracting the project topic, scientific hypothesis, objectives, technical route, innovation points, research foundation, team and conditions, and expected outcomes.
- Generate a "proposal structure index" (down to chapter level only) as a positioning anchor for later references.
### Phase 3: Parallel Multi-Group Review (Preferred) or Single-Group Mode (Degradation)

#### Pre-Judgment

- Compute `effective_panel_count`: prefer the user-supplied panel count, otherwise use `parallel_review.default_panel_count`.
- Clamp it to `[1, parallel_review.max_panel_count]`.
- Use single-group mode directly if any of the following holds:
  - `parallel_review.enabled == false`
  - `effective_panel_count == 1`
  - The parallel-vibe script cannot be found

#### parallel-vibe Script Path Discovery Order

Search the following paths in order and use the first one found:

1. `~/.claude/skills/parallel-vibe/scripts/parallel_vibe.py`
2. `~/.codex/skills/parallel-vibe/scripts/parallel_vibe.py`
3. `<current_repository>/parallel-vibe/scripts/parallel_vibe.py`

If none is found: record a warning and degrade to serial mode without interrupting the overall review.
#### Parallel Mode Steps

- Prepare the intermediate directory structure (highly recommended):

```bash
mkdir -p <proposal_root>/<intermediate_dir>/{logs/plans,snapshot}
```

- `<intermediate_dir>` corresponds to `config.yaml:output_settings.intermediate_dir`.
- Construct the master prompt (done by the host AI):
  - Read the 5 expert personas (see `config.yaml:parallel_review.reviewer_personas[*].prompt_file`)
  - Read the master prompt template from `references/master_prompt_template.md`
  - Inject context: the "proposal structure index + key information summary" produced in Phase 2
  - Inject the unified quality threshold: issue classification, evidence anchors, and output format
  - Inject independence constraints: each expert judges independently, without assuming the other experts' opinions
  - Inject output constraints: each panel writes its output to `config.yaml:parallel_review.panel_output_filename` in the root of the current thread's workspace
  - Replace the template's output-filename placeholder with `config.yaml:parallel_review.panel_output_filename`
  - Save the master prompt to a temporary file (preferably under `<proposal_root>/<intermediate_dir>/logs/` for easy tracing without polluting the root directory), recorded as `<master_prompt_path>`
- Generate the parallel-vibe plan file (mandatory):
  - Rationale: avoid escaping an ultra-long prompt on the CLI, and avoid parallel-vibe's automatic task-splitting logic (this skill requires the same master prompt to be executed repeatedly in N independent workspaces)
  - Prefer this skill's built-in script to generate the plan file:

```bash
python3 <nsfc_reviewers_path>/scripts/build_parallel_vibe_plan.py \
  --panel-count <effective_panel_count> \
  --master-prompt-file <master_prompt_path> \
  --out <plan_json_path>
```

  - If the script cannot be used, generate it manually (minimum structure):

```json
{
  "plan_version": 1,
  "prompt": "nsfc-reviewers parallel panels",
  "threads": [
    {
      "thread_id": "001",
      "title": "Panel G001",
      "runner": { "type": "<config:parallel_review.runner>", "profile": "<config:parallel_review.runner_profile>", "model": "", "args": [] },
      "prompt": "<master_prompt>"
    }
  ],
  "synthesis": { "enabled": false }
}
```
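Assembling that structure for N panels can be sketched as follows. This is a hypothetical sketch of what `build_parallel_vibe_plan.py` might produce, mirroring the manual structure above (the script's actual behavior may differ): the same master prompt is repeated in N independent threads, with synthesis disabled.

```python
def build_plan(panel_count: int, master_prompt: str,
               runner: str = "<config:parallel_review.runner>",
               profile: str = "<config:parallel_review.runner_profile>") -> dict:
    """Build a minimal parallel-vibe plan dict: N threads, one shared prompt."""
    return {
        "plan_version": 1,
        "prompt": "nsfc-reviewers parallel panels",
        "threads": [
            {
                "thread_id": f"{i:03d}",          # zero-padded ids: 001, 002, ...
                "title": f"Panel G{i:03d}",
                "runner": {"type": runner, "profile": profile,
                           "model": "", "args": []},
                "prompt": master_prompt,           # identical for every thread
            }
            for i in range(1, panel_count + 1)
        ],
        "synthesis": {"enabled": False},           # aggregation is done by this skill
    }
```

Disabling synthesis matters because cross-panel aggregation is this skill's responsibility (Phase 4), not parallel-vibe's.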
- Call parallel-vibe (based on the plan file):
  - It is recommended to first prepare a proposal snapshot under `<proposal_root>/<intermediate_dir>/snapshot/` and pass it as `--src-dir`, to avoid copying intermediate/deliverable directories into each thread's workspace.
  - Minimal approach: copy the proposal directory to be reviewed into the snapshot directory, excluding intermediate/deliverable directories (the exact commands depend on platform tools; the principle is that the snapshot directory should contain only proposal source files).

```bash
python3 <parallel_vibe_path> \
  --plan-file <plan_json_path> \
  --src-dir <proposal_snapshot_root_or_proposal_root> \
  --out-dir <proposal_root>/<intermediate_dir> \
  --timeout-seconds <config:parallel_review.timeout_seconds> \
  --no-synthesize
```
- Collect and verify panel outputs:
  - Read each panel's report from its thread's workspace
  - Typical path (after the run, before output organization): `<proposal_root>/<intermediate_dir>/.parallel_vibe/<project_id>/<thread_id>/workspace/<panel_output_filename>`
  - If "Phase 5: Output Organization" has already been executed, the parallel-vibe environment is usually at: `<proposal_root>/<intermediate_dir>/parallel-vibe/<project_id>/...`
  - parallel-vibe prints the current path to stdout; prefer that path
  - If some threads' outputs are missing, record the missing threads and continue aggregating the completed results (fail-soft)
  - It is recommended to copy/organize each panel's final review report into the deliverable files: `<proposal_root>/{panel_dir}/G{thread_id}.md`
#### Single-Group Mode Steps (Degradation)

- Read the persona prompt files to get the 5 expert personas (innovation / methodology / foundation / strict / constructive)
- Simulate the parallel process: have the 5 experts complete "independent reviews" in sequence (without referencing each other)
- Perform intra-panel aggregation (consensus identification, severity escalation, deduplication and merging)
- Output the single-panel review report to the default output path
### Phase 4: Aggregation, Sorting, and Optional Conclusion

#### Cross-Panel Aggregation (Parallel Multi-Group Mode)

- Read the cross-panel aggregation rules from `references/aggregation_rules.md`.
- Cross-compare the N panel reports to identify different expressions of the same issue.
- Cross-panel consensus threshold: an issue pointed out by at least `ceil(N * consensus_threshold)` panels counts as cross-panel consensus (for the threshold, see `config.yaml:parallel_review.aggregation.consensus_threshold`).
- Secondary severity escalation: escalate cross-panel consensus issues one step (P2→P1→P0; P0 is not escalated further).
- Merge duplicate issues, retaining the most complete evidence anchors and actionable suggestions.
- Retain independent opinions and label their source panel (e.g., "from Panel G001").
- Output results in P0→P1→P2 order and provide a minimal viable modification sequence.
- If `keep_individual_reviews == true`, append each panel's original review report in the appendix.
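The consensus and escalation rules above can be sketched in a few lines (a minimal sketch of the stated rules; function names are illustrative):

```python
import math

# One escalation step; P0 is already the ceiling.
ESCALATE = {"P2": "P1", "P1": "P0", "P0": "P0"}

def is_consensus(panels_flagging: int, total_panels: int, threshold: float) -> bool:
    """An issue is cross-panel consensus if flagged by >= ceil(N * threshold) panels."""
    return panels_flagging >= math.ceil(total_panels * threshold)

def aggregate_severity(severity: str, consensus: bool) -> str:
    """Escalate consensus issues one severity step; leave the rest unchanged."""
    return ESCALATE[severity] if consensus else severity
```

For example, with 3 panels and a threshold of 0.5, an issue must be flagged by at least 2 panels (`ceil(1.5) = 2`) to count as consensus.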
#### Single-Panel Summary

Output a "summary of modification suggestions" sorted P0→P1→P2, and provide a minimal viable modification sequence.

#### Optional Comprehensive Conclusion

Output a "comprehensive score / funding recommendation" only when the user explicitly requests it (and reiterate that it is for improvement reference only).
### Phase 5: Output Organization (Mandatory)

Importance: this phase is key to keeping the output traceable and cannot be skipped. When `config.yaml:output_settings.enforce_output_finalization == true`, the review must not end before this phase is completed.

Goal: final deliverables are clearly visible, and intermediate artifacts are managed uniformly under `config.yaml:output_settings.intermediate_dir`, so that plans, logs, parallel environments, and the like do not clutter the root directory.

#### Pre-Check (Execute in Order)

- Compute and confirm `effective_panel_count` (see Phase 3).
- Determine this run's review mode:
  - If `effective_panel_count > 1` and parallel-vibe was actually invoked (or its runtime environment was found) → treat it as "parallel mode"
  - Otherwise → "serial mode" (the minimal intermediate directories and logs must still be created)
#### Recommended Approach: Automate Output Organization with the Script (Deterministic, Highly Recommended)

```bash
python3 <nsfc_reviewers_path>/scripts/finalize_output.py \
  --review-path <proposal_root> \
  --panel-count <effective_panel_count> \
  --intermediate-dir <config:output_settings.intermediate_dir> \
  --apply
```

Without `--apply`, this is a dry run: the script only prints the actions it would take without modifying any files.
#### Manual Organization (Fallback When the Script Is Unavailable)

Both parallel and serial modes must create at least the minimal directory structure:

```bash
mkdir -p <proposal_root>/<intermediate_dir>/{parallel-vibe,logs/plans,snapshot}
```

Parallel mode: migrate the parallel-vibe project directory to `<intermediate_dir>/parallel-vibe/` (two source locations are supported):

```bash
# 1) Prefer migrating from <proposal_root>/<intermediate_dir>/.parallel_vibe/
#    (common when parallel-vibe --out-dir points to intermediate_dir)
if [ -d "<proposal_root>/<intermediate_dir>/.parallel_vibe" ]; then
  mv "<proposal_root>/<intermediate_dir>/.parallel_vibe/"* "<proposal_root>/<intermediate_dir>/parallel-vibe/" 2>/dev/null || true
  rmdir "<proposal_root>/<intermediate_dir>/.parallel_vibe" 2>/dev/null || true
fi

# 2) Compatibility with older runs: migrate from the root-level .parallel_vibe/
if [ -d "<proposal_root>/.parallel_vibe" ]; then
  mv "<proposal_root>/.parallel_vibe/"* "<proposal_root>/<intermediate_dir>/parallel-vibe/" 2>/dev/null || true
  rmdir "<proposal_root>/.parallel_vibe" 2>/dev/null || true
fi
```

Migrate logs and plan files (execute only if they exist):

```bash
mv "<proposal_root>/master_prompt.txt" "<proposal_root>/<intermediate_dir>/logs/" 2>/dev/null || true
mv "<proposal_root>"/plan*.json "<proposal_root>/<intermediate_dir>/logs/plans/" 2>/dev/null || true
mv "<proposal_root>/proposal_snapshot" "<proposal_root>/<intermediate_dir>/snapshot/" 2>/dev/null || true
```
#### Verification Checklist (Self-Check After Execution; Do Not Finish Until It Passes)
## Report Format (Hard Requirements)

Each P0/P1 issue must include at least one "evidence anchor", covering:

- The filename plus chapter title / key sentence (line number optional)
- Why this is an issue (no vague descriptions)
- How it will affect review judgment / feasibility / persuasiveness
- An actionable modification plan (prefer stating "how to modify" and "what level of modification is sufficient")
- How to self-check after the modification (e.g., "add a roadmap", "add a paragraph comparing coordinate systems", "add a chain of pre-experiment evidence")
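For orientation, a hypothetical example of one such entry (the field labels, filename, and wording are illustrative only, not mandated by this skill):

```markdown
### P0-1: The scientific hypothesis is not testable
- Location: proposal/02_content.tex, section "Research Content", key sentence "..."
- Problem: the hypothesis is phrased as a goal rather than a falsifiable claim
- Impact: reviewers cannot judge whether the technical route can actually verify it
- Suggestion: restate it as "if X, then measurable Y", and tie each objective to it
- Verification: after revision, every objective maps to one testable prediction
```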
A recommended report structure for parallel mode:

```markdown
# NSFC Proposal Review Opinions (N Panels of Independent Experts)

## Review Configuration
- Number of review panels: N
- Experts per panel: 5
- Total expert-reviews: N×5

## Cross-Panel Consensus (Pointed Out by Multiple Panels)
### P0 Level

## Independent Opinions (Raised by a Single Panel)
### From Panel G001

## Summary of Modification Suggestions

## Appendix: Original Review Reports of Each Panel (Optional)
```
## Usage Example

User input:

```text
Please review the NSFC proposal at /path/to/nsfc_proposal and save the opinions to /path/to/output.md
```
## Configuration Parameters

- Review dimension configuration: weights, key points, and common issues
- Severity level configuration: definition of issue severity levels (P0/P1/P2)
- Grading configuration: review grades and recommendations (optional output)
- `proposal_files`: proposal file identification rules
- `output_settings`: output settings (default filename, panel directory, intermediate directory, section switches; plus output-organization verification switches such as `enforce_output_finalization` and `warn_missing_intermediate`)
- `parallel_review`: parallel multi-group review configuration (switch, panel count, expert persona references, cross-panel aggregation strategy)
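Taken together, the key paths cited throughout this document suggest the following shape for `config.yaml`. This is a sketch for orientation only: the key paths come from the references above, while every value is a placeholder, not the skill's real default.

```yaml
# Hypothetical shape assembled from the key paths cited in this document.
# All values are placeholders; consult the actual config.yaml for defaults.
output_settings:
  default_filename: <default_filename>
  panel_dir: <panel_dir>
  intermediate_dir: <intermediate_dir>
  enforce_output_finalization: true
  warn_missing_intermediate: true
parallel_review:
  enabled: true
  default_panel_count: <n>
  max_panel_count: <max_n>
  runner: <runner>
  runner_profile: <profile>
  timeout_seconds: <seconds>
  panel_output_filename: <panel_output_filename>
  reviewer_personas:
    - prompt_file: <persona_prompt_path>
  aggregation:
    consensus_threshold: <ratio>
    keep_individual_reviews: true
```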