Efficient Document Parsing via Parallel Token Prediction
Abstract
Document parsing, a fundamental and crucial vision task, is being revolutionized by vision-language models (VLMs). However, the autoregressive (AR) decoding inherent to VLMs creates a significant bottleneck that severely limits parsing speed. In this paper, we propose Parallel-Token Prediction (PTP), a pluggable, model-agnostic, and simple-yet-effective method that enables VLMs to generate multiple future tokens in parallel with improved sample efficiency. Specifically, we insert learnable tokens into the input sequence and design corresponding training objectives to equip the model with parallel decoding capabilities for document parsing. Furthermore, to support effective training, we develop a comprehensive data generation pipeline that efficiently produces large-scale, high-quality document parsing training data for VLMs. Extensive experiments on OmniDocBench and olmOCR-bench demonstrate that our method not only significantly improves decoding speed (1.6x to 2.2x) but also reduces model hallucinations and exhibits strong generalization ability.
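The core idea in the abstract can be sketched in miniature: rather than one forward pass per output token, the model is fed k learnable placeholder tokens and reads off k predictions per pass. The snippet below is an illustrative toy, not the paper's implementation; the placeholder token name, the stand-in model, and the decoding loop are all assumptions for exposition.

```python
# Illustrative sketch of the parallel-token prediction (PTP) idea:
# append k learnable placeholder tokens and decode k tokens per forward
# pass. The "model" here is a deterministic toy stand-in; the real method
# fine-tunes a VLM with dedicated training objectives (details omitted).

MASK = "<ptp>"  # hypothetical learnable placeholder token

def toy_model(sequence):
    """Toy predictor: for every position holding a placeholder,
    emit a token derived from its position in the sequence."""
    return [f"tok{i}" for i, tok in enumerate(sequence) if tok == MASK]

def parallel_decode(prompt, total, k):
    """Generate `total` tokens, k per forward pass."""
    seq, passes = list(prompt), 0
    while len(seq) - len(prompt) < total:
        preds = toy_model(seq + [MASK] * k)  # one "forward pass"
        seq.extend(preds)
        passes += 1
    return seq[len(prompt):][:total], passes

tokens, passes = parallel_decode(["<doc>"], total=8, k=4)
print(passes)  # 2 forward passes instead of 8 autoregressive steps
```

With k = 4, generating 8 tokens takes 2 passes instead of 8, which is the source of the reported 1.6x to 2.2x speedup (the exact ratio depends on acceptance and overhead in the real system).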
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Training-Free Acceleration for Document Parsing Vision-Language Model with Hierarchical Speculative Decoding (2026)
- Youtu-Parsing: Perception, Structuring and Recognition via High-Parallelism Decoding (2026)
- Up to 36x Speedup: Mask-based Parallel Inference Paradigm for Key Information Extraction in MLLMs (2026)
- Dolphin-v2: Universal Document Parsing via Scalable Anchor Prompting (2026)
- GLM-OCR Technical Report (2026)
- MMSpec: Benchmarking Speculative Decoding for Vision-Language Models (2026)
- P-EAGLE: Parallel-Drafting EAGLE with Scalable Training (2026)