
Gliese-Qwen3.5-27B-Abliterated-Caption

Gliese-Qwen3.5-27B-Abliterated-Caption is an abliterated evolution built on top of Qwen3.5-27B, designed specifically for generalized and unfiltered image captioning. The model applies advanced refusal direction analysis and abliterated training strategies to minimize internal refusal behaviors while maximizing descriptive capability and visual understanding. The result is a powerful 27B parameter vision-language model optimized for highly detailed captions, deep scene understanding, and rich visual descriptions.

This model is intended for research and learning purposes only. The model has reduced internal refusal behaviors, and any content generated by it is used at the user’s own risk. The authors and hosting platform disclaim any liability for content generated by this model. Users are responsible for ensuring that the model is used in a safe, ethical, and lawful manner.

Expert Image Captioning System (chat_template.jinja) [Recommended]: https://huggingface.co/prithivMLmods/Gliese-Qwen3.5-27B-Abliterated-Caption/blob/main/chat_template.jinja

Standard or Default (chat_template.jinja): https://huggingface.co/prithivMLmods/Gliese-Qwen3.5-27B-Abliterated-Caption/blob/main/standard-chat_template/chat_template.jinja

Download the model

hf auth login --token <YOUR_HF_TOKEN>

hf download prithivMLmods/Gliese-Qwen3.5-27B-Abliterated-Caption

Key Highlights

  • Advanced Refusal Direction Analysis – Uses targeted activation analysis to identify and mitigate refusal directions within the model’s latent space.

  • Abliterated Caption Training – Fine-tuned for unfiltered and detailed caption generation, enabling comprehensive visual descriptions without excessive refusal behaviors.

  • Optimized Visual Understanding – Enhanced to provide rich, context-aware descriptions of scenes, objects, people, and environments.

  • 27B Parameter Architecture – Built on Qwen3.5-27B, delivering stronger multimodal reasoning and improved caption quality compared to smaller variants.

  • High-Fidelity Caption Generation – Designed to produce long-form, structured, and semantically detailed captions suitable for dataset generation, annotation, and research.

  • Efficient Deployment – Suitable for caption dataset creation, multimodal research, local inference pipelines, and AI development workflows.
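The refusal-direction idea above can be sketched as a difference-of-means ablation: estimate a direction from the gap between mean activations on refusal-triggering versus benign prompts, then project it out of the residual stream. This is an illustrative simplification (the exact procedure used for this model is not published here), and both function names are hypothetical:

```python
import torch

def refusal_direction(refusing_acts: torch.Tensor, benign_acts: torch.Tensor) -> torch.Tensor:
    # Difference-of-means estimate: mean activation on refusal-triggering
    # prompts minus mean activation on benign prompts, normalized to unit length.
    d = refusing_acts.mean(dim=0) - benign_acts.mean(dim=0)
    return d / d.norm()

def ablate(acts: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
    # Remove the component of each activation vector along the refusal
    # direction, leaving the rest of the representation untouched.
    return acts - (acts @ d).unsqueeze(-1) * d
```

After ablation, activations are orthogonal to the estimated direction, which is what suppresses the refusal behavior downstream.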

Quick Start with Transformers

pip install transformers==5.3.0
# or
pip install git+https://github.com/huggingface/transformers.git

from transformers import Qwen3_5ForConditionalGeneration, AutoProcessor
import torch

model = Qwen3_5ForConditionalGeneration.from_pretrained(
    "prithivMLmods/Gliese-Qwen3.5-27B-Abliterated-Caption",
    torch_dtype="auto",
    device_map="auto"
)

processor = AutoProcessor.from_pretrained(
    "prithivMLmods/Gliese-Qwen3.5-27B-Abliterated-Caption"
)

from PIL import Image

# Load the image to caption (replace with your own file path)
image = Image.open("your_image.jpg")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image in extreme detail."}
        ],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = processor(
    text=[text],
    images=[image],
    padding=True,
    return_tensors="pt"
).to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=512)

generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]

output_text = processor.batch_decode(
    generated_ids_trimmed,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)

print(output_text)

Intended Use

  • High-Detail Image Captioning – Generating extremely descriptive captions for images.
  • Dataset Generation – Creating large-scale caption datasets for multimodal training.
  • Vision-Language Research – Studying multimodal reasoning and captioning behavior.
  • Annotation Automation – Assisting in automatic labeling and visual description tasks.
  • Local Multimodal AI Deployment – Running powerful captioning models on local GPUs.

Limitations & Risks

Important Note: This model intentionally reduces built-in refusal mechanisms.

  • Unfiltered Outputs – The model may generate explicit or controversial captions depending on the input images.
  • User Responsibility – Generated outputs should be handled responsibly and within legal and ethical boundaries.
  • Model Size Constraints – While strong, a 27B model still has limitations compared to frontier-scale multimodal architectures.