The Baidu Qianfan Group has launched Qianfan-OCR, a 4B-parameter end-to-end model designed to unify document parsing, layout analysis, and document understanding within a single vision-language architecture. Unlike traditional multi-stage OCR pipelines that chain separate modules for layout detection and text recognition, Qianfan-OCR performs direct image-to-Markdown conversion and supports prompt-driven tasks such as table extraction and document question answering.

Architecture and Technical Specifications
Qianfan-OCR uses the multimodal bridging architecture from the Qianfan-VL framework. The system consists of three main components:
- Vision Encoder (Qianfan-ViT): Employs an Any-Resolution design that tiles images into 448 x 448 patches. It supports variable-resolution inputs up to 4K, producing up to 4,096 visual tokens per image to preserve spatial resolution for small fonts and dense text.
- Cross-Modal Adapter: A lightweight two-layer MLP with GELU activation that projects visual features into the language model's embedding space.
- Language Model Backbone (Qwen3-4B): A 4.0B-parameter model with 36 layers and a native 32K context window. It uses Grouped-Query Attention (GQA) to reduce KV cache memory usage by 4x.
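The relationship between input resolution and visual token count in an Any-Resolution tiling scheme can be sketched with simple arithmetic. The tile size (448) and the 4,096-token cap come from the article; the tokens-per-tile value below is a hypothetical assumption chosen only so the example is concrete, not a figure from the source.

```python
import math

TILE = 448              # tile side length, per the article
MAX_TOKENS = 4096       # max visual tokens per image, per the article
TOKENS_PER_TILE = 256   # assumed tokens per 448x448 tile (illustrative only)

def visual_tokens(width: int, height: int) -> int:
    """Estimate visual tokens for an image under Any-Resolution tiling."""
    tiles_w = math.ceil(width / TILE)
    tiles_h = math.ceil(height / TILE)
    # Total tokens grow with tile count but are capped by the budget.
    return min(tiles_w * tiles_h * TOKENS_PER_TILE, MAX_TOKENS)

print(visual_tokens(896, 896))    # 2x2 tiles -> 1024 tokens
print(visual_tokens(3840, 2160))  # 9x5 tiles, capped at the 4096 budget
```

Under these assumptions, a 4K page saturates the token budget, which is consistent with the design goal of preserving spatial detail for dense text without unbounded sequence growth.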
'Layout-as-Thought' Mechanism
The model's flagship feature is Layout-as-Thought, an optional thinking phase triggered by tokens. During this phase, the model generates structured layout representations, including bounding boxes, element types, and reading order, before producing the final output.
- Functional Utility: This process recovers explicit layout analysis capabilities (element localization and type classification) that are typically lost in end-to-end paradigms.
- Performance Characteristics: Evaluation on OmniDocBench v1.5 indicates that enabling the thinking phase provides a consistent advantage on documents with high "layout label entropy", i.e., those containing heterogeneous elements such as mixed text, formulas, and diagrams.
- Efficiency: Bounding box coordinates are represented as dedicated special tokens, reducing thinking output length by roughly 50% compared to plain digit sequences.
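The savings from dedicated coordinate tokens can be illustrated by counting tokens under a naive character-level tokenization versus one token per coordinate. The token names and the tokenization scheme below are illustrative assumptions, not the model's actual vocabulary; the article's ~50% figure refers to the whole thinking output, of which coordinates are only a part.

```python
def digit_token_count(box: tuple[int, ...]) -> int:
    # Assume a worst-case digit-level tokenization: one token per
    # character, including the comma separators.
    return len(",".join(str(c) for c in box))

def special_token_count(box: tuple[int, ...]) -> int:
    # One dedicated special token encodes each coordinate value.
    return len(box)

box = (120, 348, 1024, 1310)  # hypothetical x1, y1, x2, y2
print(digit_token_count(box), "vs", special_token_count(box))
```

Even under milder tokenizations that merge digit runs, collapsing each coordinate to a single token shrinks the per-box cost severalfold, which is how a substantial reduction in overall thinking output length becomes plausible.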
Empirical Performance and Benchmarks
Qianfan-OCR was evaluated against both specialized OCR systems and general vision-language models (VLMs).
Document Parsing and General OCR
The model ranks first among end-to-end models on several key benchmarks:
- OmniDocBench v1.5: Achieved a score of 93.12, surpassing DeepSeek-OCR-v2 (91.09) and Gemini-3 Pro (90.33).
- OlmOCR Bench: Scored 79.8, leading the end-to-end category.
- OCRBench: Achieved a score of 880, ranking first among all tested models.
On public KIE benchmarks, Qianfan-OCR achieved the highest average score (87.9), outperforming significantly larger models.
| Model | Overall Mean (KIE) | OCRBench KIE | Nanonets KIE (F1) |
| --- | --- | --- | --- |
| Qianfan-OCR (4B) | 87.9 | 95.0 | 86.5 |
| Qwen3-4B-VL | 83.5 | 89.0 | 83.3 |
| Qwen3-VL-235B-A22B | 84.2 | 94.0 | 83.8 |
| Gemini-3.1-Pro | 79.2 | 96.0 | 76.1 |
Document Understanding
Comparative testing revealed that two-stage OCR+LLM pipelines often fail on tasks requiring spatial reasoning. For instance, all tested two-stage systems scored 0.0 on CharXiv benchmarks, because the text extraction phase discards the visual context (axis relationships, data point positions) necessary for chart interpretation.


Deployment and Inference
Inference efficiency was measured in Pages Per Second (PPS) on a single NVIDIA A100 GPU.
- Quantization: With W8A8 (AWQ) quantization, Qianfan-OCR achieved 1.024 PPS, a 2x speedup over the W16A16 baseline with negligible accuracy loss.
- Architectural Advantage: Unlike pipeline systems that rely on CPU-based layout analysis, which can become a bottleneck, Qianfan-OCR is GPU-centric. This avoids inter-stage processing delays and enables efficient large-batch inference.
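The reported figures imply straightforward throughput arithmetic: the baseline rate follows from dividing the quantized rate by the speedup. The numbers below are taken from the article; the hours-per-batch projection is just that arithmetic carried forward, not a measured result.

```python
W8A8_PPS = 1.024   # measured pages/sec with W8A8 quantization (from the article)
SPEEDUP = 2.0      # reported speedup over the W16A16 baseline

baseline_pps = W8A8_PPS / SPEEDUP      # implied unquantized throughput
pages_per_hour = W8A8_PPS * 3600       # quantized sustained rate per GPU

print(f"{baseline_pps:.3f} PPS baseline, {pages_per_hour:.0f} pages/hour quantized")
```

At these rates, a single A100 processes a few thousand pages per hour, which is why avoiding CPU-bound inter-stage handoffs matters for large-batch workloads.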
Check out the Paper, Repo, and Model on Hugging Face.

Michal Sutter is a data science professional with a Master of Science in Data Science from the University of Padova. With a solid foundation in statistical analysis, machine learning, and data engineering, Michal excels at transforming complex datasets into actionable insights.

