# Qwen3.5 122B A10B - Custom GGUF Quantizations
## 🚨 CRITICAL COMPATIBILITY WARNING 🚨

These are iqk-format quantizations and are EXCLUSIVE to the ik_llama.cpp fork. They will NOT work on mainline llama.cpp, standard LM Studio, standard Text Generation WebUI, or KoboldCPP. You must compile and run them with ikawrakow's llama.cpp fork (or a UI whose backend you have manually swapped to an ik_llama build).
This repository contains custom, mixed-precision ik_llama.cpp GGUF quantizations for Qwen/Qwen3.5-122B-A10B.
These quants were specifically designed to push the routed expert layers to slightly higher precision (IQ4_KS and IQ4_K) while maintaining strict precision on the attention and embedding layers.
## ⚠️ Disclaimer: The "Vibes Test"

These quantizations have NOT been formally tested for perplexity. They were produced blindly as an experiment to see how the model handles shifting bottlenecks. There is no guarantee that they are mathematically optimal or perform flawlessly. They are provided entirely as-is. If they pass the vibes test for you, enjoy!
## 🙏 Credits & Acknowledgments
Massive credit goes to ubergarm/Qwen3.5-122B-A10B-GGUF.
The `imatrix.dat` used to calculate these custom quants was pulled directly from their phenomenal enterprise-hardware run, and the custom layer-mapping recipes used here are heavily based on their original blending logic.
## 🛠️ Quantization Recipes

### 1. The IQ4_KS Mix
This mix balances an upgraded routed-expert layer with highly compressed (but imatrix-optimized) embeddings to save VRAM.
- Token Embeddings & Output: IQ6_K
- Attention / Delta Net / Shared Experts: Q8_0
- Routed Experts: IQ4_KS
### 2. The IQ4_K Mix
This mix opts to spend a tiny bit more VRAM to give the model absolute Q8_0 precision on its vocabulary, alongside slightly heavier experts.
- Token Embeddings & Output: Q8_0
- Attention / Delta Net / Shared Experts: Q8_0
- Routed Experts: IQ4_K
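For reference, mixes like these are typically produced with ik_llama.cpp's `llama-quantize` and its `--custom-q` per-tensor regex overrides. The sketch below shows how the IQ4_K mix above *could* be expressed in that style; the tensor-name regexes, file names, and the exact override syntax are assumptions modeled on ubergarm's published recipes, not the command actually used for these files.

```shell
#!/usr/bin/env bash
# Hypothetical sketch - verify tensor names against your GGUF before running.
custom="
# Token embeddings & output -> Q8_0
token_embd\.weight=q8_0
output\.weight=q8_0
# Attention / Delta Net / shared experts -> Q8_0
blk\..*\.attn_.*=q8_0
blk\..*\.ffn_.*_shexp\.weight=q8_0
# Routed experts -> IQ4_K
blk\..*\.ffn_(up|down|gate)_exps\.weight=iq4_k
"
# Strip comments and join the rules into the comma-separated form --custom-q expects.
custom=$(echo "$custom" | grep -v '^#' | sed -Ez 's:\n+:,:g;s:,$::;s:^,::')

./build/bin/llama-quantize \
    --imatrix imatrix.dat \
    --custom-q "$custom" \
    Qwen3.5-122B-A10B-BF16.gguf \
    Qwen3.5-122B-A10B-IQ4_K.gguf \
    IQ4_K
```

Tensors not matched by any rule fall back to the default type given as the last argument.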
## 💻 How to Run
- Clone and build the `ik_llama.cpp` fork from ikawrakow/ik_llama.cpp.
- Use the compiled `llama-server` or `llama-cli` from that specific build.
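The build steps above can be sketched as follows, assuming a standard CMake setup; the `GGML_CUDA` flag is an assumption for NVIDIA GPUs and should be dropped or swapped for your hardware:

```shell
# Clone the fork and build with CMake (CUDA backend assumed; adjust as needed).
git clone https://github.com/ikawrakow/ik_llama.cpp
cd ik_llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j
# Compiled binaries (llama-server, llama-cli, ...) land in ./build/bin/
```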
Example `llama-server` launch command:

```shell
./llama-server -m Qwen3.5-122B-A10B-IQ4_KS.gguf -c 8192 -ngl 99 -fa
```