Guanzhao Quantitative Research & Investment
⚠️ Must Read: This document is long and must be read in full. Do not use the limit parameter to truncate it. The first 50 lines do not contain operation specifications.
Hard Rules (8 Rules, Violation Results in Failure)
- First Step Before Starting: Check the API Key, Then Do Anything Else. The first action after receiving a new question must be to read `config.json` (or equivalently check its `api_key` field):
- If `api_key` is an empty string → Stop immediately, directly output the new-user guide message in the "Prerequisites" section; prohibit `newSession`, prohibit reading workflow / quick-lookup / any business documents, prohibit calling any platform tools. Wait for the user to paste a Key starting with `sk-` before executing the "Configuration Wizard".
- If `api_key` is non-empty → Proceed to Rule 1.
- Only Exception: The user's current message is a Key starting with `sk-` (enter the Configuration Wizard) or a chat/meta question unrelated to data querying (e.g., "What can you do?").
- Why: Data query workflows will eventually call platform tools, which will definitely fail if `api_key` is empty. Intercepting at the entrance avoids multiple failed calls and gives new users a direct, clear first impression.
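The precheck above can be sketched as a single shell probe (a hedged illustration, not a mandated command; it assumes `config.json` sits in the current directory with a top-level `api_key` field, as described in the directory layout below):

```shell
# Exit 0 when api_key is a non-empty string, exit 1 otherwise (illustration only)
python -c "import json,sys; sys.exit(0 if json.load(open('config.json')).get('api_key') else 1)" \
  && echo "api_key present -> proceed to Rule 1" \
  || echo "api_key empty -> stop and output the new-user guide message"
```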
- Create a New Session for Each New Question/Conversation: After receiving a user's new question, before calling any platform tools, you must create a new session (prefer to directly call the native `newSession` tool; only use
```bash
GZQ_PARAMS='{"user_query":"<user's question>"}' python scripts/call.py newSession
```
if the native tool is not available in the current environment). `newSession` generates a local UUID and cannot be omitted; `user_query` is only used to annotate local session initialization for subsequent trace analysis.
- Why: The session file is automatically injected into all tool calls. Not creating a new session = reusing the task_id from the previous conversation = risk of variable-name conflicts + session pollution.
- Multi-Session Isolation (Mandatory): When this conversation may use this skill in parallel with other conversations/processes (multiple Claude windows, shared development machines, parallel traces), execute the following first, in the first bash command of the chat:
```bash
export QBS_SESSION_KEY=$(python -c "import uuid;print(uuid.uuid4().hex[:12])")
```
After that, all commands must run in the same terminal session (environment variables are only visible within it). This way each conversation exclusively uses its own output/.session.<key>.json file without overwriting others. If not set, it degrades to the default session file, which is only suitable for single-session scenarios.
- Only Exception: Follow-up questions in the same conversation (e.g., "Draw another chart", "Change the time period") can reuse the current session (keep `QBS_SESSION_KEY` unchanged).
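To illustrate what the isolation buys you: the export below derives a 12-hex key, and every later command in the same terminal then targets its own session file (a sketch; the file itself is created by `newSession`, not by this snippet):

```shell
export QBS_SESSION_KEY=$(python -c "import uuid;print(uuid.uuid4().hex[:12])")
# Each conversation now targets its own file; parallel chats no longer collide
echo "session file for this terminal: output/.session.${QBS_SESSION_KEY}.json"
```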
- Prioritize Native Tools; Script Wrapping Only When No Native Equivalent Exists: Must prioritize direct calls to the native tools provided by the platform; prohibit wrapping these native tools in shell commands, `GZQ_PARAMS=... python scripts/call.py ...`, etc. `call.py` is only used for: ① management actions like `newSession`; ② local script steps explicitly required by workflows; ③ fallback when no equivalent native tool exists on the platform.
- ⛔ Typical Violation Example (Direct Failure): the input parameter of `confirmDataMulti` is an array by design, intended to confirm multiple asset names at once; any repeated calls to it in for/while loops are violations, whether through native tools or wrapping. When confirming assets in batches, call it once and pass the complete array; if the array is too large, split it into batches (one call per batch) instead of calling once per asset.
- Must Check the Local Library First for Asset Confirmation: When the user explicitly says "confirm assets / batch confirmation / confirm / find code / find ticker", still follow the local asset routing: `presets/assets.yaml` (can be read in full) → `grep presets/assets_db/{type}.yaml` (prohibit reading the entire file) → call the confirmation tool only if no match is found. Do not call the tool directly just because the user used the word "confirm".
- Must Grep to Confirm the Format of English Codes Without a Market Suffix: When the user directly enters a bare English stock code without a market suffix (.N / .O / .A), do not query data based on user memory / guesses / suffix concatenation; first `grep presets/assets_db/stock_us.yaml` to find the correct ticker before calling any tool.
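A hedged sketch of the grep step (the line format of stock_us.yaml is an assumption here; adapt the pattern to the actual file):

```shell
# Look up the exact platform ticker for a bare code before calling any tool;
# -i tolerates case, -m 5 caps the output so a broad match stays readable
grep -i -m 5 'AAPL' presets/assets_db/stock_us.yaml
```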
- ⛔ Strictly Prohibit Wrapping with an Inline Interpreter / Heredoc: The following patterns are all violations, regardless of how complex the parameters are, how many batches there are, or how complicated the dependencies are:
```bash
python - <<'PY' ... subprocess.run(['python','scripts/call.py','<tool name>',...]) ... PY
python -c "import subprocess; subprocess.run(['python','scripts/call.py',...])"
node -e "...child_process.execSync('python scripts/call.py ...')..."
```
- Any pattern that drives multiple batches of `call.py` with loops in inline scripts is likewise a violation.
- Reason: Such "self-written driver scripts" bypass this skill's session injection, quota verification, and error protocols; traces show task_id drift, stdout blocking, heredocs not existing on Windows, and other chain failures. The fallback status of call.py allows only one layer of calls (shell → call.py); wrapping it with an outer python/node interpreter is not allowed.
- Compliant Template for Multiple Batches of `runMultiFormulaBatch`: When the number of formulas exceeds the single-batch hard limit (20 entries) and must be split, batch splitting and orchestration must be done by the LLM itself between tool calls; script automation is prohibited. Each batch is an independent call; all parameter preprocessing (reading md, regex, dependency analysis, generating parameters) is done in LLM reasoning, and necessary intermediate products are saved to output/tmp_batches/batch_K.json files, then called batch by batch with:
```bash
GZQ_PARAMS="$(cat output/tmp_batches/batch_K.json)" python scripts/call.py runMultiFormulaBatch
```
One independent shell command per batch; initiate the next batch only after the previous batch returns. Writing a python script to run all batches at once is prohibited. This is a hard rule; violation results in failure.
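The compliant shape, concretely: each batch is its own shell command, issued only after the previous one returns (batch file names follow the template above; the JSON contents are prepared beforehand in LLM reasoning, never by a loop or inline script):

```shell
# Batch 1 — run, then read the result before going further
GZQ_PARAMS="$(cat output/tmp_batches/batch_1.json)" python scripts/call.py runMultiFormulaBatch
# Batch 2 — a separate command in a later turn; never a for/while loop or heredoc
GZQ_PARAMS="$(cat output/tmp_batches/batch_2.json)" python scripts/call.py runMultiFormulaBatch
```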
- Read the Workflow Before Operating: Load the corresponding workflow according to the "Scenario Routing" table below; do not guess parameter formats on your own.
- Stop Immediately on Configuration/Authentication Errors; Do Not Switch to Credential Collection Mid-Query:
- Tool returns an API-Key-missing error (including 401 / 402 / explicit authentication messages): Stop the data query immediately, output the new-user guide message (see the template in "Prerequisites"), and prohibit continuing the query; wait for the user to paste the Key before executing the Configuration Wizard.
- Other tool errors (network, server errors, etc.): Directly report "Internal tool exception"; do not provide authentication-related guidance.
- First Sentence of the Final Answer Must Be the Data Conclusion: When answering the user, the first sentence must directly give the data conclusion (e.g., asset name + value, a table, or "N items meet the criteria"). Starting with process statements such as "Successfully obtained", "Data obtained", "According to the returned results", or "Let me do this" is absolutely prohibited. Violating this rule = delete the process statements and re-output.
- Freeze User Conditions; Do Not Rewrite: Check the user's original conditions word by word before execution. The following rewrites are all violations (roll back and re-confirm once found):
- Percentage ↔ decimal conversion (e.g., "dividend rate >3%" must not be rewritten as ">0.03")
- Relative time changed to a year range (e.g., "past 10 years" must not be rewritten as "2015-2025")
- Asset universe replacement (e.g., "ordinary stocks" must not be rewritten as "Wind All A components" or "non-ST stocks")
- Event scope expansion (e.g., "annual report/semi-annual report" must not be expanded to all performance disclosure types)
- Card additional-condition inheritance: after hitting a knowledge card, if the card contains additional conditions the user did not explicitly propose (such as "first-time / non-ST / limit-up / liquidity threshold"), delete them before execution; silently inheriting them into the final answer is prohibited
- Ask for Clarification First When the Task Is Ambiguous; Do Not Guess and Start: If the user's instruction has 2 or more reasonable interpretations (e.g., "batch confirm X" is unclear whether to confirm the index itself or all its components; "analyze Y" is unclear which dimension to use), the first step must be to ask the user for clarification; do not pick one interpretation on speculation and execute it on your own. Briefly list the possible interpretations (e.g., "Do you mean ① … or ② …?") and wait for the user's confirmation. Only Exception: the user's semantics are clear and unambiguous (e.g., "Give me today's closing price of Kweichow Moutai").
- Must Follow the Specified Process to Self-Heal on Version Mismatch; Forging Version Numbers in Any Way Is Prohibited: There are two types of version-mismatch signals, handled separately.
【P0 Red Line — Not Allowed Under Any Circumstances】
- ❌ Prohibit using file-editing tools (e.g., `multi_replace_string_in_file`), file overwrites, terminal commands, or any other means to modify any version-related fields in local skill files (SKILL.md, config.json, session files, etc.) in an attempt to make the local version number "match" the server.
- ❌ Prohibit changing the `skill_version_at_creation` of the local session file to a new value to bypass local verification.
- ⚠️ Such "changing strings to cheat verification" is deceptive self-healing: the local tool signatures/parameter formats are still the old version, the next call will definitely fail or behave abnormally, and it pollutes the user's working directory. The moment you consider such an edit, stop and switch to the (A) / (B) process below.
Type (A): Local session version mismatches the local SKILL.md version (signal: tool output contains `"error": "SKILL_VERSION_MISMATCH"`, thrown by local verification; the version recorded when the current session was created ≠ the current local SKILL.md version, usually because the user upgraded the skill midway):
- Immediately stop the current task; do not call any more platform tools;
- Call `newSession` (create a new session, get a new task_id, and write the new version number);
- Forcefully re-read SKILL.md (this file) + the workflow document for the current scenario + the tool documents involved;
- Re-execute the user's original task with the newly read tool signatures and parameter formats;
- Do not ask the user "whether to continue" or explain the version-upgrade process; self-heal silently and return the result directly.
Type (B): Server requires a higher version than the local skill (signal: any tool's returned message / intro / error contains "skill version is too low", "please execute the following command to update", "npx skills update", "skill_server_version", or a prompt to update; or the version returned by the server is inconsistent with the local version and the server requires a forced upgrade):
- Immediately stop the current task; do not continue calling any platform tools (calling again only repeats the error);
- Directly output a clear update prompt to the user and execute the update (the user has authorized the update command in the skill's built-in script path):
```bash
# Default to update (suitable for existing users, regardless of whether it was installed in .claude / .cursor / globally or per project)
npx skills update pseudo-longinus/quant-buddy-skills -y
```
If the exit code of the previous step is non-0, or the output indicates the skill was never installed, fall back to the initial installation command:
```bash
npx skills add pseudo-longinus/quant-buddy-skills -g -a '*' -y
```
If either command reports symlink / permission errors on Windows, append `--copy` and retry once:
```bash
npx skills add pseudo-longinus/quant-buddy-skills -g -a '*' --copy -y
```
If the user is unsure where the skill is currently installed, let them self-check: `npx skills list -g --json`.
(If the command given in the server's message is different from this, follow the command returned by the server; do not change the package name/source/flags on your own.)
- Wait for the command to succeed (success sign: exit code 0 plus success output); if it still fails (network/registry unreachable, permission errors, etc.), give the original error to the user in full and let them handle it offline; do not modify files to force it through, do not change sources, do not change package names;
- After the command succeeds, forcefully re-read SKILL.md (confirm it is the new version) + the workflow of the current scenario + the related tool documents;
- Call `newSession` to rebuild the session (updating `skill_version_at_creation` to the new version);
- Re-execute the user's original task with the new tool signatures.
Discrimination Tip: If you can't tell whether it is (A) or (B), run (A) first (`newSession` + re-read); if the retry still immediately reports a version error or the server keeps prompting for an update, switch to (B) and run `npx skills update` (falling back to `npx skills add` if necessary). Never reverse the order by "changing the local version number first to try".
Principle of Minimal Sufficiency (Self-Check Before Any Action)
Default to the narrowest path; only expand the scope when receiving clear evidence that "it is not enough".
Before preparing to read a file, call a tool, or expand the reading scope, answer three questions:
- What is the specific problem to be solved in this step? — You must be able to write it in one sentence as "Do Y to solve X", where X is an already existing requirement, not "may need X", "just in case", or "prepare first".
- Is there a narrower option to achieve the same X? — More downstream output / more streamlined file / fewer fields / construct directly without calling this tool.
- What is the next step if the current choice fails? — If you can't answer, it means you haven't thought it through before acting.
If any answer is ambiguous → Do not take this step.
Only Valid Trigger for Expanding Scope: The previous tool clearly returned "missing data / field does not exist / failure", and the failure reason can be traced back. Do not use "for comprehensiveness", "for accuracy", "to avoid omissions" as reasons.
This principle covers: whether to read one more document; which variable to read with readData; whether to call confirmDataMulti for a certain field; whether to write formulas yourself or check existing datasets; and all similar decisions that may arise in the future.
Implementation at the tool level: before calling platform tools or loading additional documents, check the corresponding section in recipes/tool-call-checklist.md (5–10 lines per section). The top-level principle governs "whether to do it"; the checklist governs "how to do it specifically".
Skill Package Root Directory
The directory where this SKILL.md is located is the skill root directory (SKILL_ROOT); all relative paths below are based on it.
All terminal commands must first cd to this directory before execution.
SKILL_ROOT/
├── config.json ← API Key configuration (read on demand; not required to read for every question)
├── SKILL.md ← This file (entry + routing)
│
├── workflows/ ← Business process orchestration (routing target)
│ ├── fast-snapshot.md Fast Path: Latest market/valuation (≤3 assets, scalar)
│ ├── fast-window.md Fast Path: Latest N-day sequence/window statistics
│ ├── fast-report-period.md Fast Path: Latest reporting period financials (≤3 assets)
│ ├── quick-lookup.md Quick data query router + shared basic rules
│ ├── quick-snapshot.md Latest market/valuation snapshot (stop when fields are complete)
│ ├── quick-window.md Latest N-day short-window sequence/window statistics
│ ├── quick-report-period.md Latest reporting period financial indicators
│ ├── period-return-compare.md Fixed interval cumulative price change comparison
│ ├── global-rules-lite.md Simplified global rules (exclusive to quick-window/period-return-compare)
│ ├── quant-standard.md Standard process for stock selection/backtesting/factors/charts
│ ├── event-study.md Event study (window performance after given or identifiable events)
│ ├── regime-segmentation.md Threshold interval/continuous phase identification and interval statistics
│ └── render-kline.md K-line chart rendering and delivery
│
├── recipes/ ← Formula templates & tool usage (referenced by workflows)
│ ├── ma-crossover-backtest.md Moving average crossover strategy
│ ├── value-pe-strategy.md PE valuation stock selection
│ ├── upload-custom-data.md Upload custom data
│ ├── render-chart.md Render charts
│ ├── download-data.md Download data
│ └── industry-aggregation.md Industry aggregation ranking
│
├── references/ ← Reference documents
│ ├── environment.md Environment dependencies
│ ├── troubleshooting.md Troubleshooting
│ └── ru-billing.md RU billing
│
├── tools/ ← Complete parameter documentation for API tools
│ ├── run_multi_formula.md
│ ├── read_data.md
│ └── ... (no need to read in advance for normal links, check when parameter issues occur)
│
├── presets/ ← Verified commonly used data (loaded on demand)
│ ├── cases_index.yaml 106 case card directory (required for quantitative standard scenarios, not needed for quick data query)
│ ├── assets.yaml Commonly used assets (99 lines of selected assets, can be read in full at once)
│ ├── assets_db/ Full asset dictionary (divided by type, ⚠️ only grep for retrieval, prohibit reading the entire file; does not include index component mapping)
│ │ ├── stock_a.yaml 5505 A-shares (SH/SZ)
│ │ ├── stock_hk.yaml 2862 H-shares (HK prefix, market data only)
│ │ ├── stock_us.yaml 1044 U.S. stocks (.N/.O/.A, market data only)
│ │ ├── index.yaml 503 indices
│ │ └── future.yaml 257 futures
│ ├── functions.yaml Commonly used functions
│ ├── data_catalog.yaml Commonly used datasets
│ ├── sectors.yaml Industry sectors
│ └── themes.yaml Thematic sectors
│
├── scripts/ ← Execution scripts
│ ├── call.py Unified tool entry (all commands are called through it)
│ ├── executor.py Underlying of call.py (prohibit direct call)
│ ├── quant_api.py Python SDK (imported by other scripts)
│ ├── auth/ Authentication scripts
│ └── eval/ Evaluation scripts
│
└── output/ ← Output directory (automatically created)
├── .session.<key>.json Current session task_id (derived from QBS_SESSION_KEY, multi-session isolation)
├── ic_data/ IC scan results
└── *.png / *.csv Chart and data files
Global 429 Handling (Applicable to All Paths):
| error.code | Handling |
|---|---|
| Per-minute rate limit | Retry silently after waiting the indicated number of seconds; do not expose to users |
| Quota limit | Stop immediately, read the quota-limiting section in references/troubleshooting.md, output the prompt |
| Daily quota exhausted | Stop immediately, output: ⚠️ Daily quota has been used up, reset at 00:00 next day. |
| Concurrency overload (503) | Retry silently once after waiting; if still failing, inform "System is busy, please try again later" |
⛔ Execution Order (Must Read Before Routing; All Scenarios Must Comply)
Regardless of which leaf workflow is matched, the execution order is fixed as:
① read_skill_file(global-rules version, see table below) → ② read_skill_file(leaf workflow) → ③ Execute
Step ① Global-Rules File Selection (Determined by the Target Leaf Workflow):
| Target Leaf Workflow | File Read in Step ① |
|---|---|
| fast-snapshot.md | None (Fast Path, skip Step ①, execute directly) |
| fast-window.md | None (Fast Path, skip Step ①, execute directly) |
| fast-report-period.md | None (Fast Path, skip Step ①, execute directly) |
| quick-window.md | workflows/global-rules-lite.md |
| period-return-compare.md | workflows/global-rules-lite.md |
| All other workflows | workflows/global-rules.md |
- Step ① is a hard prerequisite. After determining the target leaf, select and read the corresponding global-rules version according to the table above, then read the leaf workflow, and finally execute.
- Fast Path (fast-*.md) starts directly from Step ②; Step ① is not needed.
Scenario Routing
First identify the user's intention and determine the target leaf workflow; then load according to the execution order above:
| Scenario | Trigger Words | Target Leaf Workflow |
|---|---|---|
| Latest market / valuation (snapshot) | Latest price, today's close, latest price change, current turnover rate, latest PE/PB/market capitalization… | Fast Path → fast-snapshot.md / Complete link → global-rules.md → quick-snapshot.md |
| Latest N-day sequence / window statistics | Latest 5 days, latest 20 days, latest N trading days, window high/low/amplitude… (single asset only, latest N days) | Fast Path → fast-window.md / Complete link → global-rules-lite.md → quick-window.md |
| Latest reporting period financials | Operating income, net profit, attributable net profit, ROE, total assets, total liabilities, asset-liability ratio… | Fast Path → fast-report-period.md / Complete link → global-rules.md → quick-report-period.md |
| K-line chart (visualization) | K-line chart, draw chart, show trend… | global-rules.md → render-kline.md |
| Fixed interval cumulative price change | From A to B, from year-month to year-month, interval return, cumulative price change, interval performance, multi-asset interval comparison | global-rules-lite.md → period-return-compare.md |
| Quantitative stock selection / backtesting / factors / charts / upload/download | Stock selection, backtesting, moving average, PE stock selection, factors, net value, upload CSV, download data, draw chart… | global-rules.md → quant-standard.md |
| Directly run a formula-chain file given by the user | "Run/execute all formulas in this file", "formula chain file", "formula chain", "Run according to this md/json" | global-rules.md → quant-standard.md |
| Event study | Review, historical, price increase, interest rate cut, interest rate hike, event window, subsequent performance, better than expected, worse than expected, performance after policy… (given event or need to identify event date first) | global-rules.md → event-study.md |
| Threshold interval statistics / continuous phase | Historical, each time, average, drawdown exceeds, falls more than N% from high, bear-market interval, continuous phase, regime | global-rules.md → regime-segmentation.md |
Upload, download, and charting are not independent scenarios; they are sub-steps within workflows, and the workflow documents will direct you to the corresponding recipes when needed.
Routing Hard Exclusions (Take Priority Over Trigger-Word Matching)
The following rules are checked before trigger-word matching; if hit, forced redirection is required and cannot be overridden by trigger words:
| User Intention Feature | Prohibit Entering | Force Redirect To | Judgment Basis |
|---|---|---|---|
| Intraday/real-time/current/now/today + intraday market query (price-change ranking, limit-up, intraday price change, etc.) | quick-snapshot.md | quant-standard.md (prioritize matching minute-frequency cards) | Minute-frequency cards require dedicated formulas; the latest completed trading day is already the global default |
| Intraday/real-time/current/today + whole market/sector + TopN/ranking/threshold list/stock selection/screening/signal | quick-lookup paths | quant-standard.md → prioritize hitting the "Real-time cross-section TopN ranking" or "Intraday threshold screening_list query" micro-process | Such high-frequency short questions have dedicated closed micro-processes |
| Clear start and end dates given, only asking interval cumulative price change/return | event-study.md | period-return-compare.md | Essentially fixed-interval return comparison, not causal window analysis, nor a complex quantitative process |
| Industry/sector aggregation ranking (e.g., "Top 5 price changes in Shenwan industries") | quick-snapshot.md | quant-standard.md | Requires cross-section aggregation, not a single-asset sequence |
| Threshold-triggered discrete event identification (e.g., "number of times price change exceeds X%", asking about performance after each occurrence) | — | event-study.md (threshold trigger mode) | Need to identify threshold event dates first, then do window analysis |
| Continuous interval defined by threshold conditions (e.g., "historical bear markets", "phases with drawdown over 30%") | event-study.md | regime-segmentation.md | Studies continuous phases rather than windows after discrete events |
| "Hit new high/low in recent N days" (without a "first-time" modifier) | Prohibit adding a "not met yesterday" condition | Judge by current state (state check); the formula only compares the current value with yesterday's N-day extremum | Only when the user explicitly says "first breakthrough/first breakdown", "newly added", "first time today" may the first-time trigger condition be added; see the relevant workflow for details |
Judgment Mnemonics:
- Clear start/end date + only asking the interval value → period-return-compare.md (fixed-interval return comparison)
- Has an event + asks "performance in the subsequent N days/months" → event-study.md (causal window)
- Has a threshold condition + asks "performance after each occurrence" → event-study.md (threshold trigger mode)
- Has a threshold condition + asks "performance within the continuous phase/interval" → regime-segmentation.md (continuous phase statistics)
If the user's request meets any of the following patterns, prioritize it as a quick data query task and jump directly according to the routing below, without entering other workflows first:
Fast Path Conditions (All 3 Must Be Met to Take the Fast Path; Otherwise Take the Complete Link):
- Number of assets ≤ 3
- All target fields belong to the fast_query whitelist (price/valuation/financial/derived fields); no custom formulas/stock selection/ranking involved
- Not a whole-market cross-section query (not scenarios like "whole-market ranking / top-N stocks")
Quick Data Query Routing (Judge in Priority Order; Stop at the First Match):
- Time anchor is "latest N-day window/sequence" → Read workflows/fast-window.md if the Fast Path conditions are met, otherwise workflows/global-rules-lite.md → workflows/quick-window.md
- Time anchor is "latest reporting period" and the fields are financial → Read workflows/fast-report-period.md if the Fast Path conditions are met, otherwise workflows/global-rules.md → workflows/quick-report-period.md
- User explicitly asks for "chart / K-line / trend with trading volume" → Directly load workflows/render-kline.md
- Others (clearly the latest completed trading day's market/valuation/multi-asset comparison, with no today/current/now/real-time/intraday/ranking/screening semantics) → Read workflows/fast-snapshot.md if the Fast Path conditions are met, otherwise workflows/global-rules.md → workflows/quick-snapshot.md
The above routing does not require reading workflows/quick-lookup.md first.
Key Red-Line Quick Check (Must Comply Even Without Reading global-rules.md)
The following 4 rules are extracted from global-rules.md; they have the highest priority and are effective in all scenarios:
- Freeze the Event Definition: Event type/scope must match the user's original wording word for word. If the user says "annual report/semi-annual report", only check annual and semi-annual reports; do not expand to performance forecasts/express reports/quarterly reports. If the user says "State Council or Ministry of Housing and Urban-Rural Development", only include that level; do not expand to the central bank/CBRC/local governments. If you think the user's definition may be missing something, suggest expanding at the end of the answer; do not expand without permission.
- Evidence-Only Answers: The final answer only outputs values, dates, rankings, and caliber descriptions directly supported by the current round of tool results. Without tool verification, default output of macro attribution, policy attribution, or directional judgments ("usually", "often", "positive bias") is prohibited.
- Process-Free Delivery: Process statements such as "Successfully obtained", "Let me do this", "According to the process", "Step 1/2/3", "According to workflow" are prohibited; disclosing paths, checkpoint names, or workflow file names is prohibited. Answer immediately after querying; do not show internal processes.
- Freeze the Condition Caliber: User conditions must be executed as-is; any rewriting is prohibited (percentage ↔ decimal, relative time → year range, asset-universe replacement, card additional-condition inheritance). See Rule 6 of the Hard Rules for details.
Trigger-Word References:
- Latest trading day close / latest disclosed PE / latest market capitalization (non-intraday, non-screening) → snapshot route (fast-snapshot.md / quick-snapshot.md)
- Latest 5 days / latest 20 trading days / latest N-day sequence / window high/low → window route (fast-window.md / quick-window.md)
- Operating income / net profit / ROE / total assets / total liabilities / asset-liability ratio → reporting-period route (fast-report-period.md / quick-report-period.md)
Prohibited:
- Prioritizing chart-rendering tools (unless the user explicitly wants to see charts)
- Doing analytical expansion first and only then supplementing structured values
- Directly calling tools or outputting "unable to connect to the internet" / "unable to obtain real-time data" before reading the corresponding leaf workflow
- Silently inheriting card additional conditions (first-time / non-ST / limit-up / liquidity threshold, etc.) into the final answer
- Using previews, truncated large tables, or partial lines directly as the complete result of a list question (must extract the complete list or clearly state it is incomplete)
Leaf Workflow Final-Answer Contract Priority: The "final answer contract" in the leaf workflow tightens the output format for that scenario; once the leaf workflow's stop condition is met, output directly according to the contract and do not explain internal processes again.
Execution Authority Authorization Rules
Rule Hierarchy (From Highest to Lowest):
- SKILL.md: Routing + Global Access Control (4 Hard Rules, Routing Hard Exclusions)
- global-rules.md: Global contracts that all leaves must comply with (execution contract, evidence classification, short answer mode, no precision supplement, method restriction description, parameter specification, numerical precision, final answer consistency check)
- leaf workflow: Specific execution process for the current task (checkpoint, template, stop condition, formatting)
Conflict Resolution:
- Specific rules in leaf workflows (such as readData mode selection) take precedence over general rules in global-rules
- But leaf workflows cannot relax the red lines in global-rules (such as evidence classification thresholds, no precision supplement principle)
- Cannot borrow templates, fallbacks, or answer formats from other leaf workflows
Positioning of quick-lookup.md:
- Only serves as the routing entry and rule reference summary table for quick query sub-processes
- Each leaf workflow already contains all execution rules, no need to return to quick-lookup.md during execution
- quick-lookup.md does not define any leaf-specific rules
Global Execution Rules
Global contracts are detailed in workflows/global-rules.md and take effect automatically when entering any leaf workflow.
Leaf workflows can add stricter constraints internally, but cannot exempt or relax rules in global-rules.
Platform Data Coverage
| ✅ Supported | ⚠️ Conditionally Supported | ❌ Not Supported (and will not be in the short term) |
|---|---|---|
| A-share individual stocks (Shanghai/Shenzhen Main Board / GEM / STAR Market / Beijing Stock Exchange) | ETF / LOF / OTC funds (first follow the asset-confirmation result; execute normally if confirmed; inform "not supported" only if confirmation fails) | Futures / Options |
| H-share individual stocks (HK + code, e.g., HK0001) | | Taiwan / Korean / Japanese / German stocks and other overseas markets |
| U.S. stocks (NASDAQ: code.N; NYSE: code.O; AMEX: code.A) | | |
| Major broad-based indices (CSI 300, CSI 500, Wind All A, etc.) | | |
Data Scope Limitation for H-shares/U.S. Stocks: Currently, H-shares and U.S. stocks support market price data only (closing price, opening price, high, low, price change percentage, trading volume, turnover amount). Valuation data (PE/PB/market capitalization, etc.) and financial data (operating income/net profit/ROE, etc.) are not yet supported. When a user queries valuation or financial fields for H-shares/U.S. stocks, actively inform them that this is not currently supported instead of silently skipping.
Stock Code Format Quick Reference
| Market | Format | Example |
|---|---|---|
| A-share - Shanghai Stock Exchange | SH + code | SH600000 |
| A-share - Shenzhen Stock Exchange | SZ + code | SZ000001 |
| H-share | HK + code | HK0001 |
| U.S. Stock - NASDAQ | code.N | AAPL.N |
| U.S. Stock - NYSE | code.O | AAL.O |
| U.S. Stock - AMEX | code.A | SBE.A |
See workflows/quick-lookup.md § Step 1 for asset-confirmation failure (circuit-breaker rules).
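The format table can be turned into a quick local sanity check (a sketch only; real confirmation still goes through the asset dictionary and the confirmation tool):

```shell
# Classify a code string by the format column above (illustration only)
python -c "
import re
code = 'SH600000'
if re.fullmatch(r'SH\d{6}', code):   print('A-share - Shanghai')
elif re.fullmatch(r'SZ\d{6}', code): print('A-share - Shenzhen')
elif code.startswith('HK'):          print('H-share')
elif code.endswith('.N'):            print('U.S. - NASDAQ')
elif code.endswith('.O'):            print('U.S. - NYSE')
elif code.endswith('.A'):            print('U.S. - AMEX')
else:                                print('unknown - grep presets/assets_db first')
"
```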
Environment dependencies (Python version, Playwright, API Key) → references/environment.md
Troubleshooting → references/troubleshooting.md
RU billing → references/ru-billing.md
Prerequisites (Execute on Demand, Not the Default First Step for Simple Data Queries)
Credential Storage Instructions: the quant-buddy API Key for this skill is stored only in the field of under the skill directory; environment variables (such as ) are not read. Only the optional (event news search) uses environment variables.
Only in the following cases do you need to explicitly read to check :
- This round actually needs to call local scripts or platform tools, and no available session has been established in the current environment
- The previous round of tool calls has returned 401 / 402 / explicit authentication errors
- Workflow explicitly requires executing script chains (such as local Python script rendering)
For simple data query questions that have hit leaf workflows (quick-snapshot / quick-window / quick-report-period / render-kline):
- Do not perform an extra read purely as a formality
- Prioritize executing directly according to the leaf workflow
- Only return to the authentication wizard when tool calls have explicit authentication issues
Principle: Authentication checking serves execution, and should not be a fixed additional step for simple questions.
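The empty/non-empty check below can be sketched as follows. This is illustrative only: the real config file path and field name are defined by the skill, so a hypothetical `config.json` with an `api_key` field stands in here, read via `python3` (which the skill already requires).

```shell
# Sketch only: file name and field name are hypothetical stand-ins.
cfg=$(mktemp)
printf '{"api_key": ""}' > "$cfg"

api_key=$(python3 -c 'import json,sys; print(json.load(open(sys.argv[1])).get("api_key",""))' "$cfg")

if [ -z "$api_key" ]; then
  echo "api_key empty: stop and show the new-user guide"
else
  echo "api_key present: proceed with the leaf workflow"
fi

rm -f "$cfg"
```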
- If is non-empty → Proceed normally
- If is empty → Stop immediately, prohibit continuing the data query, and output the following new user guide message (output as-is, do not delete or modify):
⚠️ API Key has not been configured, data query is not available currently.
Go to
https://www.quantbuddy.cn/login to log in/register and obtain an API Key, then send it to me directly:
Help me configure APIkey: sk-xxxxxxxx
Configuration Wizard (User Pastes Key)
When the user's message contains a string starting with :
- Extract the complete Key string starting with from the user's message
- Write the Key into the field of (write directly using )
- Must output: "✅ API Key configured successfully!"
- Automatic Retry: if a query in this conversation was interrupted by an api_key-missing error (e.g., the user asked about market data earlier), first call (using the original user question as ) to create a new session, then immediately re-execute the query and give the data conclusion; the user does not need to ask again.
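The Key-extraction step above can be sketched as a shell one-liner. This is a sketch only: `extract_key` is a hypothetical helper, and the `sk-[A-Za-z0-9]+` pattern is an assumption based on the `sk-xxxxxxxx` example in this document (the real Key alphabet may differ).

```shell
# Hypothetical helper: pull the first token starting with "sk-" out of a
# user message. grep -o prints only the matching part; head -n1 keeps the
# first match if several appear.
extract_key() {
  printf '%s\n' "$1" | grep -oE 'sk-[A-Za-z0-9]+' | head -n 1
}

extract_key "Help me configure APIkey: sk-abc123xyz"   # prints: sk-abc123xyz
```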
Runtime 401/402 → Stop immediately and tell the user the API Key is invalid/expired/quota-exhausted, asking them to obtain a new Key from the official website and reconfigure.
Tool Calling Method
All tools are called through . Results are printed to stdout and simultaneously written to a temporary file.
Standard Call (One Step to Complete)
```bash
python scripts/call.py <tool name> '{"key":"value"}'
```
Results are obtained directly from stdout. If stdout is truncated, you can read back
.
You can also pass parameters through environment variables (suitable for scenarios where parameters contain special characters):
```bash
GZQ_PARAMS='<JSON>' python scripts/call.py <tool name>
```
Prohibited Items
| Prohibited | Reason |
|---|---|
| Create custom .py to write parameter files | Environment variable scheme has solved encoding issues |
| Directly call | encapsulates logic such as automatic saving for renderChart |
| pipe parameter passing (Windows) | GBK encoding truncates Chinese |
| Command line parameter passing JSON (Windows) | PS eats double quotes |
Division of Labor Among presets/, recipes/, tools/ Directories
| Directory | What It Is | When to Read |
|---|---|---|
| presets/ | Local snapshots of actual platform return values (YAML). Asset names, function formats, dataset index_title, industry/concept names, etc. Can be directly used in formulas, saving confirmation API calls. | Check preset before writing formulas; call the corresponding API only if not found, and add the new result back to preset. must be read at the beginning only in quantitative standard scenarios (stock selection/backtesting/factors/charts, etc., ); not needed for quick data query scenarios (quick-snapshot/window/report-period). |
| recipes/ | End-to-end usage examples (Markdown). Show complete scenarios — from parameter preparation to final output. | Loaded as guided by workflows in specific steps (not triggered independently). |
| tools/ | API parameter manuals (Markdown). Fields, types, constraints for each tool. | Do not read by default. Read only when workflows mark "must read" or when errors occur. |
In short: presets are data snapshots, recipes are complete tutorials, tools are parameter dictionaries.
Detailed usage and loading timing of the 6 preset files → "Preset Resources" table in the current workflow document.
Skill Update Deployment Specifications (Must Read for LLM)
When the user requests an update to this skill (by sending a zip package), you must strictly comply with the following rules:
Backup Location
⚠️ The backup directory is prohibited from being placed under the directory; otherwise, openclaw will recognize the backup as an independent skill, risking misuse of old code in subsequent requests.
Correct approach:
```bash
# Backup to a location outside skills/
cp -r ~/.openclaw/workspace/skills/quant-buddy-skill ~/.openclaw/workspace/quant-buddy-skill-backup-$(date +%Y%m%d%H%M)

# ❌ Wrong: backup is still under skills/
# cp -r ~/.openclaw/workspace/skills/quant-buddy-skill ~/.openclaw/workspace/skills/quant-buddy-skill-backup-xxx
```
Unzip and Overwrite
```bash
# Unzip to a temporary directory first, then copy (avoids nesting)
TMPDIR=$(mktemp -d)
unzip -o <zip package path> -d "$TMPDIR"
# If the zip unpacks into a nested quant-buddy-skill/ directory, take the inner layer
if [ -d "$TMPDIR/quant-buddy-skill" ]; then
  cp -rf "$TMPDIR/quant-buddy-skill/"* ~/.openclaw/workspace/skills/quant-buddy-skill/
else
  cp -rf "$TMPDIR/"* ~/.openclaw/workspace/skills/quant-buddy-skill/
fi
rm -rf "$TMPDIR"
```
Post-Deployment Check
- Confirm there is only one directory related to this skill, , under ~/.openclaw/workspace/skills/
- Read to confirm the version number
- Keep the existing in (if the user has authenticated before)
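The first check above can be sketched as follows. The example runs against a throwaway directory so it is self-contained; in real use, point `SKILLS` at `~/.openclaw/workspace/skills` instead.

```shell
# Sketch of check 1: count directories matching the skill name pattern.
# A throwaway directory stands in for ~/.openclaw/workspace/skills here.
SKILLS=$(mktemp -d)
mkdir -p "$SKILLS/quant-buddy-skill"

matches=$(find "$SKILLS" -maxdepth 1 -type d -name 'quant-buddy-skill*' | wc -l)

if [ "$matches" -eq 1 ]; then
  echo "OK: exactly one skill directory"
else
  echo "WARN: found $matches matching directories (stale backup under skills/?)"
fi

rm -rf "$SKILLS"
```

If a backup such as `quant-buddy-skill-backup-xxx` were left under skills/, the count would exceed 1 and the check would flag it.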