# Paper Select Journal

## Core Principles
- Time-sensitive information must be verified in real time: scope, official website, industry recognition, CAS partition, and papers from the last 3 months can all change and must never be answered from stale memory.
- Intermediate files may only be stored in the hidden directory under the current working directory; override this default only if the user explicitly specifies another directory.
- Set1 no longer relies on fixed semantic weights. The script is responsible only for minimal hard filtering and candidate-pool sorting; the real semantic planning is done by the current host model.
- Set1 is not the final answer. The final report keeps only Set3 journals with sufficient evidence, up to 10 in total.
- Do not recommend journals that are clearly warned, predatory, or below the impact-factor threshold; if a low-IF exception is retained anyway, the report must state why it is still a reliable choice recognized by human experts in the field.
- It is better to report fewer journals than to pad the list to 10.
## Input and Workspace
- User preferences are optional; the manuscript is mandatory.
- The manuscript can come from a pasted title / abstract / full-text fragment, from local manuscript files, or a combination of these.
- Once the hidden-workspace process begins, every file the scripts read must stay within the current run directory; do not point `manuscript_profile.json`, `set2_scope_review.json`, or `final_recommendations.json` to locations outside it.
First initialize the hidden workspace:

```bash
python3 <skill_root>/scripts/init_workspace.py --project-root .
```
The script creates `.paper-select-journal/run-<timestamp>/`, which contains at least the `analysis/`, `candidates/`, `pubmed/`, and `reports/` subdirectories used in the steps below. All subsequent intermediate files must remain within this run directory.
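For later commands that take the run directory as an argument, a minimal helper for resolving the newest `run-<timestamp>` might look like this (the helper itself is an illustration, not part of the skill's scripts):

```python
from pathlib import Path

def latest_run_dir(project_root: str = ".") -> Path:
    """Return the newest run-<timestamp> directory in the hidden workspace."""
    runs = sorted(Path(project_root, ".paper-select-journal").glob("run-*"))
    if not runs:
        raise FileNotFoundError("no run directory yet; run init_workspace.py first")
    # Lexicographic order matches chronological order for zero-padded timestamps.
    return runs[-1]
```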
## Workflow
### 1. Write the Manuscript Profile First

After fully understanding the paper, write the results into `analysis/manuscript_profile.json`.

- Template: `templates/manuscript_profile.template.json`
- Writing guide: `references/manuscript-profile.md`

The minimum required fields are defined in the template.
The purpose of the profile is to help the AI understand the manuscript, not to feed a fixed scoring formula.

If user preferences are complex, prefer recording them as natural language inside the profile rather than forcing them into many hard-coded clues for the script.

If manual low-IF exception journals must be retained, they may only enter later as manual supplementary candidates, and the final report must explain why each is still a reliable choice in the field despite being below the threshold.
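A sketch of what writing the profile might look like. The field names below are assumptions for the example only; the authoritative schema lives in `templates/manuscript_profile.template.json`:

```python
import json
from pathlib import Path

# Hypothetical field names; the real minimum fields come from the template.
profile = {
    "title": "...",              # manuscript title
    "abstract": "...",           # pasted or extracted abstract
    "keywords": ["..."],         # topics, methods, model systems
    "user_preferences": "...",   # complex preferences kept as natural language
}

# init_workspace.py has already created the analysis/ directory.
run_dir = Path(".paper-select-journal/run-<timestamp>")
out = run_dir / "analysis" / "manuscript_profile.json"
out.write_text(json.dumps(profile, ensure_ascii=False, indent=2), encoding="utf-8")
```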
### 2. Generate the Set1 Candidate Pool from the Built-in Catalog

Built-in catalog: `assets/journal_catalog/2023IF.xlsx`
Run:

```bash
python3 <skill_root>/scripts/shortlist_journals.py \
  --workspace .paper-select-journal/run-<timestamp> \
  --profile .paper-select-journal/run-<timestamp>/analysis/manuscript_profile.json
```
Outputs:

- `candidates/set1_candidates.json`
- `candidates/set1_candidates.md`
The script performs only minimal hard filtering here:
- Impact factor lower limit
- Journals explicitly excluded by users
- Basic metadata sorting (JIF, partition, OA ratio, citation count)
Do not mistake this step's output for "a final shortlist already sorted by semantics". You must read the candidate pool yourself and, combining it with the manuscript, independently plan which journals truly deserve to enter Set2.
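Conceptually, the hard filtering amounts to something like the following sketch. The field names are assumptions, and this is not the actual code of `shortlist_journals.py`:

```python
def hard_filter(journals: list[dict], if_floor: float, excluded_names: set[str]) -> list[dict]:
    """Minimal hard filtering plus basic metadata sorting (illustrative only)."""
    kept = [
        j for j in journals
        if j["jif"] >= if_floor              # impact factor lower limit
        and j["name"] not in excluded_names  # journals the user explicitly excluded
    ]
    # Basic metadata sort: JIF descending, partition as a tiebreaker.
    # No semantic judgment happens here; that is the host model's job.
    return sorted(kept, key=lambda j: (-j["jif"], j.get("partition", 9)))
```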
### 3. Verify Scope, Official Website, Partition, and Quality Online to Get Set2
Based on `candidates/set1_candidates.json` and the manuscript profile, independently decide which candidates to verify first, then verify each one online:
- Official website
- Aims & Scope
- CAS small category and its partition
- Industry recognition
- Whether there are warning / predatory journal signals
Priority sources:
- Journal official website
- PubMed / NLM
- Mainstream publisher pages
- Trusted partition information sources
Write the verified journals into `analysis/set2_scope_review.json`.

- Template: `templates/scope_review.template.json`
- Verification criteria: `references/journal-quality-checklist.md`
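A hypothetical shape for one entry in `analysis/set2_scope_review.json`, covering the verification points above. All field names here are illustrative assumptions; the authoritative schema is `templates/scope_review.template.json`:

```python
# One verified journal (field names assumed for illustration).
entry = {
    "journal": "...",
    "official_website": "https://...",  # verified official URL
    "scope_match": True,                # does Aims & Scope fit the manuscript?
    "cas_category": "...",              # CAS small category
    "cas_partition": "...",             # and its partition
    "industry_recognition": "...",      # brief, evidence-backed note
    "warning_signals": [],              # any warning / predatory indicators found
}
```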
### 4a. Fetch Raw PubMed Evidence for Set2 from the Last 3 Months
Run:

```bash
python3 <skill_root>/scripts/fetch_pubmed_recent.py \
  --workspace .paper-select-journal/run-<timestamp> \
  --profile .paper-select-journal/run-<timestamp>/analysis/manuscript_profile.json \
  --scope-review .paper-select-journal/run-<timestamp>/analysis/set2_scope_review.json
```
Outputs:

- `pubmed/recent_articles.json`
- `pubmed/recent_articles.md`
This step only provides raw evidence; it does no scoring or ranking. The script performs only the API calls, XML parsing, and sorting by date.
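For orientation, a query like the one such a fetch presumably issues can be sketched with the NCBI E-utilities; the script's actual implementation may differ:

```python
import requests

def recent_pmids(journal_name: str, days: int = 90) -> list[str]:
    """Fetch PMIDs for one journal's papers from roughly the last 3 months."""
    r = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={
            "db": "pubmed",
            "term": f'"{journal_name}"[Journal]',
            "reldate": days,      # only papers from the last `days` days
            "datetype": "pdat",   # filter by publication date
            "retmode": "json",
            "retmax": 200,
        },
        timeout=30,
    )
    r.raise_for_status()
    return r.json()["esearchresult"]["idlist"]
```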
### 4b. AI Assesses Theme Similarity to Decide Which Journals Enter Set3
You must read both of these together:

- `analysis/manuscript_profile.json`
- `pubmed/recent_articles.json`
The "AI" here refers to the host model currently executing this skill:
- Completed by the current Claude session in Claude Code
- Completed by the current Codex session in Codex
- Do not call external AI APIs, independent model services, or separate scoring scripts for Step 4b
That is, the planning, semantic judgment, Set3 retention decision, and writing of
set3_similarity_review.json
in Step 4b must all be completed natively using the AI computing power provided in the current working environment.
Judge the semantic relevance between each Set2 journal's papers from the last 3 months and the manuscript along the following dimensions:

- Whether the theme is truly relevant, not merely a keyword collision
- Whether the research questions are close
- Whether the methodologies are close
- Whether the number and density of relevant papers are sufficient to support inclusion in the final recommendation
When executing, first skim all Set2 recent papers to build a comparison frame, then make a semantic judgment for each journal in turn, and finally decide Set3 retention in a single pass, writing reviewable reasons. Do not degrade this step into mechanical token scoring or hard-coded weighting formulas.
Write the conclusions into `analysis/set3_similarity_review.json`.

- Template: `templates/set3_similarity_review.template.json`
Each journal entry must include at least `irrelevant_articles_count`.
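As a sketch, one per-journal entry could look like the following. Only `irrelevant_articles_count` is confirmed as required above; every other field name is an illustrative assumption, and the real schema is `templates/set3_similarity_review.template.json`:

```python
# Hypothetical per-journal review entry (field names assumed, except
# irrelevant_articles_count, which the skill requires).
entry = {
    "journal": "...",
    "keep_in_set3": True,
    "irrelevant_articles_count": 2,      # required field
    "relevant_articles": ["PMID ..."],   # assumed: supporting evidence papers
    "reasoning": "...",                  # assumed: reviewable justification
}
```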
### 5. Form the Final Recommendation JSON
Based on `analysis/set3_similarity_review.json`, write up to 10 final journals into `analysis/final_recommendations.json`.
- Template: `templates/final_recommendations.template.json`
- Field description: `references/report-schema.md`
Must retain:
- Impact factor
- CAS small category and its partition
- Industry recognition
- Official website
- Reasons for recommendation
- Similar theme papers from the last 3 months
- An AI relevance explanation for each evidence paper
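An illustrative entry mirroring the "must retain" list above; the exact field names come from `templates/final_recommendations.template.json` and may differ from this sketch:

```python
# Hypothetical final-recommendation entry (field names assumed).
entry = {
    "journal": "...",
    "impact_factor": 0.0,
    "cas_category": "...",             # CAS small category
    "cas_partition": "...",            # and its partition
    "industry_recognition": "...",
    "official_website": "https://...",
    "recommendation_reasons": "...",
    "recent_similar_papers": [         # similar-theme papers, last 3 months
        {"pmid": "...", "title": "...", "ai_relevance": "..."},
    ],
}
```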
### 6. Render the Final Markdown Report
Run:

```bash
python3 <skill_root>/scripts/render_report.py \
  --workspace .paper-select-journal/run-<timestamp> \
  --final-json .paper-select-journal/run-<timestamp>/analysis/final_recommendations.json
```
Final output: `reports/paper-select-journal-report.md`

If you override the default file name, the report may only be written into the current run directory; the final Markdown report must never land outside the hidden workspace.
## Final Report Requirements
- All journals are written into the same Markdown file
- Each journal gets its own heading, with subheadings added below it as needed
- Each journal must clearly state: impact factor, CAS small category and partition, industry recognition, official website, and recommendation reasons, plus a table of similar-theme papers from the last 3 months with AI relevance explanations
## Decision Rules
- If the scope does not match, do not force a recommendation no matter how high the IF is
- If there are clear warning / predatory journal signals, eliminate the journal immediately
- Having no similar-theme papers in the last 3 months does not force elimination, but it lowers the recommendation level
- Journals whose recent papers are mostly irrelevant (a high `irrelevant_articles_count`) should not be included in the final recommendation
- When the CAS partition cannot be reliably verified, prefer replacing the journal with candidates whose information is more transparent
- When the user has no explicit preferences, independently choose the most reliable option; do not turn the choice into a blocking question back to the user
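These rules can be read as a simple decision procedure; the sketch below encodes them with assumed field names and is illustrative only:

```python
def keep_for_final(j: dict) -> bool:
    """Apply the hard decision rules (field names are assumptions)."""
    if not j["scope_match"]:
        return False   # scope mismatch: drop, however high the IF
    if j["warning_signals"]:
        return False   # warning / predatory risk: eliminate immediately
    if not j.get("cas_partition_verified", False):
        return False   # prefer candidates with transparent partition data
    return True

def recommendation_level(j: dict) -> str:
    # No similar-theme papers in the last 3 months lowers the level
    # instead of forcing elimination.
    return "standard" if j.get("recent_similar_count", 0) > 0 else "lowered"
```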
## Command Path Description
- `<skill_root>` refers to the actual installation directory of the current skill.
- Do not assume the user has a source-code copy of the skill in the current working directory.
- If you are already in the skill root directory, you can also run the scripts directly with relative paths, e.g. `python3 scripts/init_workspace.py`.
## Reference Files

- `references/manuscript-profile.md`
- `references/journal-quality-checklist.md`
- `references/report-schema.md`