Dark Beast KLEIN 9b 🟦 V2 BFS 03/03/2026

This is the face-swap-specialized, next-level evolution of the Dark Beast lineage, built on the lightning-fast FLUX.2 Klein 9B accelerated model from Black Forest Labs.

Engineered with targeted optimizations for face-swapping workflows, it integrates BFS (Best Face Swap) technology to eliminate the rigid, unnatural look that plagued earlier face replacements, delivering seamless, lifelike integrations with preserved identity, expression, and lighting.

It also fully fixes the portrait reference issue from the previous DB BlitZ versions, ensuring correct reference adherence every time.

Special thanks to https://github.com/alisson-anjos for providing the powerful BFS foundation that powers this breakthrough. 🟦



Important notes:

This version is designed exclusively around the Klein 9B accelerated edition; no base model exists.

Usage is identical to Black Forest Labs' official FLUX.2 Klein 9B accelerated release: ultra-low steps (e.g., 4-5), CFG=1 fixed, blazing inference speed on consumer hardware.

In one sentence: Dark Beast's ferocious soul meets BFS (Best Face Swap) technology. More natural, and truly unstoppable! 🟦

For more information about BFS (Best Face Swap):

https://huggingface.co/Alissonerdx

Alternatively, it can be applied directly to any Klein 9b / Qwen Edit base or fine-tuned model through LoRA adapter parameter injection.
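As a rough sketch of that adapter-injection route, the extracted LoRA could be attached to a base pipeline with diffusers' standard LoRA loader. The repo IDs below are placeholders, not confirmed paths, and this assumes the Klein pipeline exposes the usual `load_lora_weights` mixin:

```python
def load_klein_with_dark_beast_lora(base_repo: str, lora_repo: str):
    """Attach the extracted Dark Beast LoRA to a Klein 9B base pipeline.

    base_repo / lora_repo are illustrative placeholders; substitute the
    actual Hugging Face repo IDs. Imports are lazy so the sketch can be
    read without diffusers installed.
    """
    import torch
    import diffusers

    pipe = diffusers.Flux2KleinPipeline.from_pretrained(
        base_repo, torch_dtype=torch.bfloat16
    )
    # load_lora_weights injects the rank-256 adapter weights into the
    # transformer without permanently merging them into the checkpoint.
    pipe.load_lora_weights(lora_repo)
    return pipe
```

The same pipe is then used exactly like the stock Klein pipeline (low steps, CFG=1).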



Dark Beast KLEIN 9b 🟦 V1.5 BlitZ LoRA adapter 02/16/2026

DarkBeast5steps_extracted_lora_r256 uploaded

working fine with FLUX.2 Klein 9b models


Dark Beast KLEIN 9b 🟦 V1.5 BlitZ 02/08/2026

Fine-tuning of black-forest-labs/FLUX.2-klein-9B with BF16 / FP8e4m3fn / NVFP4 quantization.

Merged with @alcaitiff's klein-9b-unchained-xxx.

This is the ultimate speed-optimized evolution of Dark Beast V1, based on FLUX.2 Klein 9B, engineered specifically for lightning-fast low-step + CFG=1 workflows (5 steps).

Also available in NVFP4 quantized format, optimized for acceleration on Blackwell architecture GPUs.

(like RTX 50xx, RTX PRO 6000, B200, and others)

Non-50-series GPUs are also supported (automatic 16-bit operation); verified environment: ComfyUI 0.11.
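That format choice can be summarized in a tiny helper, shown here as a sketch: the function name is hypothetical, the capability numbers come from `torch.cuda.get_device_capability()`, and the SM-version mapping (10.x for B200, 12.x for RTX 50xx) is an assumption based on NVIDIA's published compute capabilities.

```python
def pick_checkpoint_format(compute_capability):
    """Pick a checkpoint format for a given CUDA compute capability.

    NVFP4 needs Blackwell-class hardware (SM 10.x / 12.x); anything
    older falls back to 16-bit, matching the automatic behavior
    described above for non-50-series GPUs.
    """
    major, _minor = compute_capability
    return "NVFP4" if major >= 10 else "BF16"

# Typical values as returned by torch.cuda.get_device_capability():
print(pick_checkpoint_format((12, 0)))  # RTX 5090 (Blackwell) -> NVFP4
print(pick_checkpoint_format((8, 9)))   # RTX 4090 (Ada)       -> BF16
print(pick_checkpoint_format((9, 0)))   # H100 (Hopper)        -> BF16
```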

Key features:

Fully preserves the signature Dark Beast style, rich details, and intense Black Beast aesthetic from the standard lineage

Refined through advanced targeted distillation & fine-tuning, now perfectly dialed in for zero-CFG guidance at minimal steps

BlitZ-level inference speed: breathtaking high-quality images in just 5 steps ⚡

Recommended settings: 5 steps, CFG=1 (fixed), any seed you want

In one sentence: Taking Klein's already blazing speed and cranking it to absolute BlitZ velocity while keeping every drop of that ferocious Dark Beast soul! 🟦

Lightning-fast generation awaits โ€” unleash it now! ๐Ÿš€

Usage:

pip install sdnq

import torch
import diffusers
from sdnq import SDNQConfig # import sdnq to register it into diffusers and transformers
from sdnq.common import use_torch_compile as triton_is_available
from sdnq.loader import apply_sdnq_options_to_model

pipe = diffusers.Flux2KleinPipeline.from_pretrained("GuangyuanSD/FLUX.2-klein-9B-Blitz-Diffusers", torch_dtype=torch.bfloat16)

# Enable INT8 MatMul for AMD, Intel ARC and Nvidia GPUs:
if triton_is_available and (torch.cuda.is_available() or torch.xpu.is_available()):
    pipe.transformer = apply_sdnq_options_to_model(pipe.transformer, use_quantized_matmul=True)
    pipe.text_encoder = apply_sdnq_options_to_model(pipe.text_encoder, use_quantized_matmul=True)
    # pipe.transformer = torch.compile(pipe.transformer) # optional for faster speeds

pipe.enable_model_cpu_offload()

prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt=prompt,
    height=1024,
    width=1024,
    guidance_scale=1.0,
    num_inference_steps=4,
    generator=torch.manual_seed(0)
).images[0]

image.save("flux-klein-Blitz.png")

Original BF16 vs Blitz fine-tune comparison:

Quantization        Model Size
Original BF16       18.2 GB
Blitz fine-tune     18.2 GB

Big thanks to @alcaitiff for the awesome work and killer contributions to training Z-Image and Klein models! Seriously impressive stuff! ๐Ÿš€

