aliyun-qwen-ocr
Use when OCR-specialized extraction is needed with Alibaba Cloud Model Studio Qwen OCR models (`qwen-vl-ocr`, `qwen-vl-ocr-latest`, and snapshots), including document parsing, table parsing, multilingual OCR, formula recognition, and key information extraction.
Source: cinience/alicloud-skills
NPX Install
```bash
npx skill4agent add cinience/alicloud-skills aliyun-qwen-ocr
```
Category: provider
Model Studio Qwen OCR
Validation
```bash
mkdir -p output/aliyun-qwen-ocr
python -m py_compile skills/ai/multimodal/aliyun-qwen-ocr/scripts/prepare_ocr_request.py && echo "py_compile_ok" > output/aliyun-qwen-ocr/validate.txt
```
Pass criteria: command exits 0 and `output/aliyun-qwen-ocr/validate.txt` is generated.
Output And Evidence
- Save request payloads, selected OCR task name, and normalized output expectations under `output/aliyun-qwen-ocr/`.
- Keep the exact model, image source, and task configuration with each saved run.
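The record-keeping rule above can be sketched as a small helper. This is a minimal sketch, assuming one JSON file per run; the `run_record.json` filename and field names are illustrative, not part of the skill:

```python
import json
import os
from datetime import datetime, timezone

def save_run_record(base_dir, model, image, task, task_config, payload):
    """Persist the exact model, image source, and task configuration with a run."""
    os.makedirs(base_dir, exist_ok=True)
    record = {
        "saved_at": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "image": image,            # exact image source used for this run
        "task": task,
        "task_config": task_config,
        "payload": payload,        # full request payload for reproducibility
    }
    path = os.path.join(base_dir, "run_record.json")
    with open(path, "w", encoding="utf-8") as f:
        json.dump(record, f, ensure_ascii=False, indent=2)
    return path
```

Each saved run can then be replayed or audited from its record alone.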
Use Qwen OCR when the task is primarily text extraction or document structure parsing rather than broad visual reasoning.
Critical model names
Use one of these exact model strings:
- `qwen-vl-ocr`
- `qwen-vl-ocr-latest`
- `qwen-vl-ocr-2025-11-20`
- `qwen-vl-ocr-2025-08-28`
- `qwen-vl-ocr-2025-04-13`
- `qwen-vl-ocr-2024-10-28`
Selection guidance:
- Use `qwen-vl-ocr` for the stable channel.
- Use `qwen-vl-ocr-latest` only when you explicitly want the newest OCR behavior.
- Pin `qwen-vl-ocr-2025-11-20` when you need reproducible document parsing based on the Qwen3-VL OCR upgrade.
Prerequisites
- Install dependencies (recommended in a venv):
```bash
python3 -m venv .venv
. .venv/bin/activate
python -m pip install requests
```
- Set `DASHSCOPE_API_KEY` in the environment, or add `dashscope_api_key` to `~/.alibabacloud/credentials`.
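The credential lookup order can be sketched as below. This is a sketch, not the skill's implementation: the key name `dashscope_api_key` comes from the prerequisites above, but reading it from any INI section of the credentials file is an assumption:

```python
import configparser
import os
from pathlib import Path

def resolve_api_key(credentials_path="~/.alibabacloud/credentials"):
    """Prefer DASHSCOPE_API_KEY from the environment, then fall back to the file."""
    key = os.environ.get("DASHSCOPE_API_KEY")
    if key:
        return key
    path = Path(credentials_path).expanduser()
    if path.is_file():
        parser = configparser.ConfigParser()
        parser.read(path)
        # Scan all sections for a dashscope_api_key entry (section name assumed).
        for section in parser.sections():
            if parser.has_option(section, "dashscope_api_key"):
                return parser.get(section, "dashscope_api_key")
    return None
```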
Normalized interface (ocr.extract)
Request
- `image` (string, required): HTTPS URL, local path, or `data:` URL.
- `model` (string, optional): default `qwen-vl-ocr`.
- `prompt` (string, optional): use when you want custom extraction instructions.
- `task` (string, optional): built-in OCR task.
- `task_config` (object, optional): configuration for a built-in task, such as extraction fields.
- `enable_rotate` (bool, optional): default `false`.
- `min_pixels` (int, optional)
- `max_pixels` (int, optional)
- `max_tokens` (int, optional)
- `temperature` (float, optional): recommended to keep near default/low values.
Response
- `text` (string): extracted text or structured markdown/html-style output.
- `model` (string)
- `usage` (object, optional)
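One way to produce this normalized response from a raw API reply is sketched below. It assumes an OpenAI-compatible chat-completions reply shape (`choices[0].message.content`), which is one call style Model Studio exposes; the raw field names are assumptions, so verify against `references/api_reference.md`:

```python
def normalize_response(raw):
    """Reduce a raw chat-style reply to the normalized {text, model, usage} shape."""
    choices = raw.get("choices") or []
    content = choices[0].get("message", {}).get("content", "") if choices else ""
    # Some replies return content as a list of typed parts; join the text parts.
    if isinstance(content, list):
        content = "".join(p.get("text", "") for p in content if isinstance(p, dict))
    return {
        "text": content,
        "model": raw.get("model", ""),
        "usage": raw.get("usage"),
    }
```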
Built-in OCR tasks
Use one of these values in `task`:
- `text_recognition`
- `key_information_extraction`
- `document_parsing`
- `table_parsing`
- `formula_recognition`
- `multi_lan`
- `advanced_recognition`
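A request against the normalized interface can be validated before it is sent. A minimal sketch using the parameter names from the Request section above; the helper itself is illustrative, not part of the skill's scripts:

```python
VALID_TASKS = {
    "text_recognition", "key_information_extraction", "document_parsing",
    "table_parsing", "formula_recognition", "multi_lan", "advanced_recognition",
}

def build_ocr_request(image, model="qwen-vl-ocr", prompt=None,
                      task=None, task_config=None, enable_rotate=False):
    """Assemble an ocr.extract payload, rejecting unknown built-in task names."""
    if task is not None and task not in VALID_TASKS:
        raise ValueError(f"unknown OCR task: {task!r}")
    payload = {"image": image, "model": model, "enable_rotate": enable_rotate}
    if prompt is not None:
        payload["prompt"] = prompt
    if task is not None:
        payload["task"] = task
        if task_config is not None:
            payload["task_config"] = task_config
    return payload
```

Failing fast on a bad task name is cheaper than a round trip that the API would reject.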
Quick start
Custom prompt:
```bash
python skills/ai/multimodal/aliyun-qwen-ocr/scripts/prepare_ocr_request.py \
  --image "https://example.com/invoice.png" \
  --prompt "Extract seller name, invoice date, amount, and tax number in JSON."
```
Built-in task:
```bash
python skills/ai/multimodal/aliyun-qwen-ocr/scripts/prepare_ocr_request.py \
  --image "https://example.com/table.png" \
  --task table_parsing \
  --model qwen-vl-ocr-2025-11-20
```
Operational guidance
- Prefer built-in OCR tasks for standard parsing jobs because they use official task prompts.
- For critical business fields, add downstream validation rules after OCR.
- `qwen-vl-ocr` and older snapshots default to `4096` max output tokens unless higher limits are approved by Alibaba Cloud; `qwen-vl-ocr-2025-11-20` follows the model maximum.
- Increase `max_pixels` only when small text is missed; this raises token cost.
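The downstream-validation advice above can be made concrete with simple per-field checks. A sketch with hypothetical invoice fields; the field names and regex patterns are illustrative, not from the skill:

```python
import re

# Hypothetical per-field patterns for an invoice extraction result.
FIELD_PATTERNS = {
    "invoice_date": re.compile(r"\d{4}-\d{2}-\d{2}"),
    "amount": re.compile(r"\d+(\.\d{1,2})?"),
    "tax_number": re.compile(r"[A-Z0-9]{10,20}"),
}

def validate_fields(extracted):
    """Return the names of fields that are missing or fail their pattern."""
    errors = []
    for field, pattern in FIELD_PATTERNS.items():
        value = extracted.get(field, "")
        if not pattern.fullmatch(value):
            errors.append(field)
    return errors
```

Running such checks after OCR catches misreads in critical business fields before they reach downstream systems.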
Output location
- Default output: `output/aliyun-qwen-ocr/request.json`
- Override the base directory with `OUTPUT_DIR`.
References
- references/api_reference.md
- references/sources.md