Holotron-12B
Model Overview
Holotron-12B is a high-throughput, multimodal Vision-Language Model (VLM) designed specifically as a policy model for computer-use agents.
Developed through a close collaboration between H Company and NVIDIA research labs, this model is post-trained from the open NVIDIA Nemotron Nano VL architecture on H Company’s proprietary data mixture. It is optimized for scale, production efficiency, and handling long contexts with multiple images in interactive environments.
- Developed by: H Company & NVIDIA
- Model Type: Multimodal Vision-Language Model (Hybrid SSM-Attention)
- Base Model: NVIDIA's Nemotron-Nano-12B-v2-VL-BF16
- Parameters: ~12B
- License: NVIDIA Open Model License
- Release Date: 16th March 2026
- Blog Post: https://hcompany.ai/holotron-12b
Intended Use
Unlike general-purpose multimodal models optimized for static vision or simple instruction following, Holotron-12B is trained for multimodal agentic workloads.
It is designed to serve as the "brain" for agents that must:
- Perceive complex screens and UI elements.
- Decide on multi-step workflows.
- Act efficiently in interactive environments (Web, Desktop, Mobile).
Its high-throughput capabilities make it an ideal choice for data generation, annotation, and online reinforcement learning loops.
Training Strategy
Holotron-12B was trained in two stages. We started from Nemotron-Nano-12B-v2-VL-BF16, a multimodal base model published by NVIDIA. We then performed supervised fine-tuning on H Company’s proprietary localization and navigation data mixture, focusing on screen understanding, grounding, and UI-level interactions.
The final checkpoint was trained on approximately 14 billion tokens.
Architecture: Hybrid SSM
Holotron-12B uses a hybrid architecture that combines State-Space Model (SSM) layers with attention layers. This design scales better than pure transformer models.
- Linear Complexity: Avoids the quadratic computation cost of full attention.
- Reduced Memory Footprint: SSM layers act as linear recurrent models, storing only a constant-size state per layer regardless of sequence length. This eliminates the large KV cache that standard transformers require for long sequences.
- Throughput: Maintains high throughput even with long context histories and multiple high-resolution images.
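The memory argument above can be illustrated with a toy linear recurrence (this is a minimal NumPy sketch of the generic SSM update h_t = A·h_{t-1} + B·x_t, y_t = C·h_t, not the actual Nemotron hybrid layer; the dimensions and matrices are made up for illustration):

```python
import numpy as np

def ssm_scan(x, A, B, C):
    # Linear recurrence: h_t = A @ h_{t-1} + x_t * B, then y_t = C @ h_t.
    # The carried state h has a fixed size (d_state), however long x is --
    # unlike a transformer KV cache, which grows linearly with sequence length.
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:
        h = A @ h + x_t * B  # constant-size state update
        ys.append(C @ h)
    return np.array(ys), h

d_state = 16
rng = np.random.default_rng(0)
A = 0.9 * np.eye(d_state)          # toy stable transition matrix
B = rng.standard_normal(d_state)   # input projection
C = rng.standard_normal(d_state)   # output projection

for T in (128, 4096):              # the sequence gets longer...
    y, h = ssm_scan(rng.standard_normal(T), A, B, C)
    print(T, h.shape)              # ...but the carried state stays (16,)
```

For a 4096-token context, a transformer layer would cache 4096 key/value pairs, while the recurrence above carries only the 16-element state; this is the intuition behind the throughput and memory claims, though production SSM kernels (e.g. Mamba) use parallel scans rather than this sequential loop.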
Results
High throughput
On WebVoyager in a real-world multimodal agent setup (long context, multiple high-resolution images, concurrency up to 100):
- Single NVIDIA H100, vLLM with latest SSM optimizations (v0.14.1)
- Holotron-12B achieved >2x higher throughput vs Holo2-8B
- In a controlled setup, throughput scaled to ~8.9k tokens/s at concurrency = 100, while Holo2-8B plateaued around ~5.1k tokens/s
Navigation and Localization: Computer use
On computer-use and navigation benchmarks, Holotron-12B shows strong improvements over the Nemotron base model and competitive performance against established agent models. WebVoyager performance increased from 35.1% to 80.5%, exceeding Holo2-8B's score on the benchmark and illustrating the model's ability to perform in an agentic setting.
Get Started with the Model
Please refer to https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-12B-v2-VL-BF16 for getting started with this architecture.
Dependencies
pip install torch "transformers>4.53,<4.54" causal_conv1d timm "mamba-ssm==2.2.5" accelerate open_clip_torch numpy pillow
Conclusion
Holotron-12B demonstrates that the NVIDIA Nemotron VL model provides a strong foundation for real-world multimodal agents when paired with the right training setup and infrastructure work.
The model offers strong agent performance, significantly improved inference throughput, and a clear path for future improvements, particularly around higher-resolution vision training.
We look forward to seeing what others build with Holotron-12B.
Citation
@misc{hai2026holotron2,
  title={Holotron2},
  author={H Company},
  year={2026},
  url={https://huggingface.co/collections/Hcompany/Holotron-12B},
}