Qwen3.5 122B A10B - Custom GGUF Quantizations

🚨 CRITICAL COMPATIBILITY WARNING 🚨 These are IQK-format quantizations and are EXCLUSIVE to the ik_llama.cpp fork. They will NOT work on mainline llama.cpp, standard LM Studio, standard Text Generation WebUI, or KoboldCpp. You must compile and run them with ikawrakow's llama.cpp fork (or a UI where you have manually swapped the backend for an ik_llama.cpp build).


This repository contains custom, mixed-precision ik_llama.cpp GGUF quantizations for Qwen/Qwen3.5-122B-A10B.

These quants were specifically designed to push the routed expert layers to slightly higher precision (IQ4_KS and IQ4_K) while keeping the attention and embedding layers at near-lossless precision (Q8_0 / IQ6_K).

⚠️ Disclaimer: The "Vibes Test"

These quantizations have NOT been formally tested for perplexity. They were compiled blindly as an experiment to see how the model handles shifting bottlenecks. There is no guarantee that they are mathematically optimal or perform flawlessly. They are provided entirely as-is. If they pass the vibes test for you, enjoy!

πŸ™ Credits & Acknowledgments

Massive credit goes to ubergarm/Qwen3.5-122B-A10B-GGUF. The imatrix.dat used to calculate these custom quants was pulled directly from their phenomenal enterprise-hardware run, and the custom layer-mapping recipes used here are heavily based on their original blending logic.


πŸ› οΈ Quantization Recipes

1. The IQ4_KS Mix

This mix balances an upgraded routed-expert layer with highly compressed (but imatrix-optimized) embeddings to save VRAM.

  • Token Embeddings & Output: IQ6_K
  • Attention / Delta Net / Shared Experts: Q8_0
  • Routed Experts: IQ4_KS

2. The IQ4_K Mix

This mix opts to spend a tiny bit more VRAM to give the model absolute Q8_0 precision on its vocabulary, alongside slightly heavier experts.

  • Token Embeddings & Output: Q8_0
  • Attention / Delta Net / Shared Experts: Q8_0
  • Routed Experts: IQ4_K
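
As a rough illustration, a mix like the IQ4_KS recipe above can be expressed with ik_llama.cpp's `llama-quantize` and its `--custom-q` regex-to-type rules. This is a hedged sketch, not the exact command used for these files: the tensor-name regexes are assumptions based on typical Qwen MoE GGUF naming, and you should verify them against your own model's tensor list.

```shell
# Sketch only: --custom-q maps tensor-name regexes to quant types
# (ik_llama.cpp feature, not available in mainline llama.cpp).
# Regexes below are illustrative assumptions, not the exact recipe.
./build/bin/llama-quantize \
  --imatrix imatrix.dat \
  --custom-q "token_embd\.weight=iq6_k,output\.weight=iq6_k,blk\..*\.attn_.*=q8_0,blk\..*\.ffn_.*_shexp\.weight=q8_0,blk\..*\.ffn_(up|down|gate)_exps\.weight=iq4_ks" \
  Qwen3.5-122B-A10B-BF16.gguf \
  Qwen3.5-122B-A10B-IQ4_KS.gguf \
  IQ4_KS
```

The IQ4_K mix is the same layout with the embedding/output rules set to q8_0 and the routed-expert rule set to iq4_k.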

💻 How to Run

  1. Clone and build the ik_llama.cpp fork from ikawrakow/ik_llama.cpp.
  2. Use the compiled llama-server or llama-cli from that specific build.

Example `llama-server` launch command:

```shell
./llama-server -m Qwen3.5-122B-A10B-IQ4_KS.gguf -c 8192 -ngl 99 -fa
```
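
Once the server is up, you can query it over its OpenAI-compatible HTTP API. The port and payload below are illustrative (llama-server defaults to port 8080; adjust `--host`/`--port` as needed):

```shell
# Send a chat completion request to the local llama-server instance
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 64
      }'
```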
  • Model size: 122B params
  • Architecture: qwen35moe

Model tree for KeinNiemand/Qwen3.5-122B-A10B-IK_GGUF

  • Quantized from Qwen/Qwen3.5-122B-A10B