Found 2 Skills
Use this skill when the user explicitly requests to "verify/optimize the in-text citations of the `{topic}_review.tex` review" or to "run check-review-alignment". Use the host AI's semantic understanding to verify each citation against the cited literature, one by one. **Only when fatal citation errors are found**, minimally rewrite the sentences that contain the citations, and reuse the rendering script from `systematic-literature-review` to output PDF/Word (the script does not call the LLM API locally). Core principle: **do not modify for the sake of modifying**. When it is uncertain whether an error is fatal, keep the original text and issue a warning in the report.

⚠️ Not applicable in the following cases:
- The user only wants to generate the main body of a systematic review (use systematic-literature-review instead)
- The user only wants to add/verify BibTeX entries (use a dedicated bib-management workflow instead)
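Before the semantic check, the "sentences containing citations" could be located mechanically. Below is a minimal sketch, assuming a single-file LaTeX source and standard `\cite`/`\citet`/`\citep` commands; the function name and the returned dictionary shape are illustrative, not part of the skill's actual interface:

```python
import re

def extract_citation_sentences(tex_source: str):
    """Return rough sentences that contain \\cite-style commands, with their bibkeys."""
    # Strip LaTeX comments (a % not preceded by a backslash starts a comment).
    body = re.sub(r'(?<!\\)%.*', '', tex_source)
    # Naive sentence split: end punctuation followed by whitespace.
    sentences = re.split(r'(?<=[.!?])\s+', body)
    # Matches \cite, \citet, \citep, starred forms, and one optional [...] argument.
    cite_re = re.compile(r'\\cite[tp]?\*?(?:\[[^\]]*\])?\{([^}]*)\}')
    results = []
    for s in sentences:
        keys = []
        for m in cite_re.finditer(s):
            keys.extend(k.strip() for k in m.group(1).split(','))
        if keys:
            results.append({"sentence": s.strip(), "keys": keys})
    return results
```

Each returned item pairs one sentence with the bibkeys it cites, which is the unit the host AI would then judge for alignment.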
Check citations and references in NSFC proposal text for consistency and authenticity risks (read-only): verify that each cited bibkey exists, flag format issues such as missing BibTeX fields or malformed DOIs, and generate structured input so the host AI can evaluate, item by item, whether the text actually reflects the cited literature. By default only an audit report is produced; the proposal and the .bib file are not modified unless the user explicitly requests it.
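The bibkey-existence and DOI checks described above could be sketched as a read-only audit pass. This is a minimal sketch, assuming single-file .tex and .bib inputs and a very rough regex-based .bib parse (a real implementation would use a proper BibTeX parser); the function name and report keys are hypothetical:

```python
import re

def audit_bibkeys(tex_source: str, bib_source: str):
    """Read-only audit: cited keys missing from the .bib, and cited entries lacking a DOI."""
    # Collect every key cited in the .tex.
    cited = set()
    for m in re.finditer(r'\\cite[tp]?\*?(?:\[[^\]]*\])?\{([^}]*)\}', tex_source):
        cited.update(k.strip() for k in m.group(1).split(','))
    # Very rough .bib parse: entry headers like "@article{key," up to the next "@".
    entries = {}
    for m in re.finditer(r'@(\w+)\s*\{\s*([^,\s]+)\s*,([^@]*)', bib_source):
        entries[m.group(2)] = m.group(3)
    missing = sorted(cited - set(entries))
    no_doi = sorted(k for k, body in entries.items()
                    if k in cited and not re.search(r'\bdoi\s*=', body, re.I))
    # Report only; neither input is modified.
    return {"missing_keys": missing, "cited_without_doi": no_doi}
```

The report separates hard failures (keys with no .bib entry) from softer format warnings (entries without a DOI), matching the audit-report-only default.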