Qwen3-VL-4B-Instruct-abliterated-v1-GGUF

Qwen3-VL-4B-Instruct-abliterated-v1 from prithivMLmods is a 4B-parameter vision-language variant of Alibaba's Qwen3-VL-4B-Instruct, modified through abliteration (v1.0) to remove safety refusals and content filters. The result is uncensored, detailed captioning, reasoning, and instruction-following across complex, sensitive, artistic, technical, or abstract visual content, while preserving the base model's advanced multimodal capabilities: 32-language OCR, long context (up to 256K tokens), video understanding, and robust handling of diverse resolutions and aspect ratios. Tailored for high-fidelity description with adjustable levels of detail, from concise summaries to intricate analyses, it excels at UI parsing, document extraction, chart interpretation, and agentic tasks without conventional guardrails, primarily in English but with multilingual prompt adaptability for research, red-teaming, and creative applications. The abliterated model delivers factual, descriptive outputs on consumer GPUs (10-12 GB VRAM in BF16; quantized builds run lighter and faster) and supports vLLM and Transformers inference for unrestricted visual reasoning.
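As a minimal sketch of Transformers-based inference with the unquantized abliterated checkpoint (not these GGUF files), the generic image-text-to-text Auto classes can be used. This assumes a recent transformers release with qwen3vl support and a processor chat template that accepts images passed by URL; the repo id `prithivMLmods/Qwen3-VL-4B-Instruct-abliterated-v1` for the non-GGUF source model is inferred from the title, not confirmed here.

```python
# Sketch only: Transformers inference with the unquantized abliterated model.
# Assumes a recent transformers build with Qwen3-VL support; Auto* classes keep it generic.
import torch
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "prithivMLmods/Qwen3-VL-4B-Instruct-abliterated-v1"  # assumed source repo id

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # roughly 10-12 GB VRAM in BF16 per the description above
    device_map="auto",
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/chart.png"},  # placeholder image URL
            {"type": "text", "text": "Describe this chart in detail."},
        ],
    }
]

# The chat template tokenizes the prompt and prepares the image inputs in one call.
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
# Strip the prompt tokens before decoding so only the model's answer is printed.
print(processor.batch_decode(output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```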

Qwen3-VL-4B-Instruct-abliterated-v1 [GGUF]

| File Name | Quant Type | File Size |
|---|---|---|
| Qwen3-VL-4B-Instruct-abliterated-v1.IQ4_XS.gguf | IQ4_XS | 2.29 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.Q2_K.gguf | Q2_K | 1.67 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.Q3_K_L.gguf | Q3_K_L | 2.24 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.Q3_K_M.gguf | Q3_K_M | 2.08 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.Q3_K_S.gguf | Q3_K_S | 1.89 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.Q4_K_M.gguf | Q4_K_M | 2.5 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.Q4_K_S.gguf | Q4_K_S | 2.38 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.Q5_K_M.gguf | Q5_K_M | 2.89 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.Q5_K_S.gguf | Q5_K_S | 2.82 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.Q6_K.gguf | Q6_K | 3.31 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.Q8_0.gguf | Q8_0 | 4.28 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.f16.gguf | F16 | 8.05 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.i1-IQ1_M.gguf | i1-IQ1_M | 1.13 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.i1-IQ1_S.gguf | i1-IQ1_S | 1.06 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.i1-IQ2_M.gguf | i1-IQ2_M | 1.51 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.i1-IQ2_S.gguf | i1-IQ2_S | 1.42 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.i1-IQ2_XS.gguf | i1-IQ2_XS | 1.35 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.i1-IQ2_XXS.gguf | i1-IQ2_XXS | 1.25 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.i1-IQ3_M.gguf | i1-IQ3_M | 1.96 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.i1-IQ3_S.gguf | i1-IQ3_S | 1.9 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.i1-IQ3_XS.gguf | i1-IQ3_XS | 1.81 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.i1-IQ3_XXS.gguf | i1-IQ3_XXS | 1.67 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.i1-IQ4_NL.gguf | i1-IQ4_NL | 2.38 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.i1-IQ4_XS.gguf | i1-IQ4_XS | 2.27 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.i1-Q2_K.gguf | i1-Q2_K | 1.67 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.i1-Q2_K_S.gguf | i1-Q2_K_S | 1.56 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.i1-Q3_K_L.gguf | i1-Q3_K_L | 2.24 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.i1-Q3_K_M.gguf | i1-Q3_K_M | 2.08 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.i1-Q3_K_S.gguf | i1-Q3_K_S | 1.89 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.i1-Q4_0.gguf | i1-Q4_0 | 2.38 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.i1-Q4_1.gguf | i1-Q4_1 | 2.6 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.i1-Q4_K_M.gguf | i1-Q4_K_M | 2.5 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.i1-Q4_K_S.gguf | i1-Q4_K_S | 2.38 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.i1-Q5_K_M.gguf | i1-Q5_K_M | 2.89 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.i1-Q5_K_S.gguf | i1-Q5_K_S | 2.82 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.i1-Q6_K.gguf | i1-Q6_K | 3.31 GB |
| Qwen3-VL-4B-Instruct-abliterated-v1.imatrix.gguf | imatrix | 3.87 MB |
| Qwen3-VL-4B-Instruct-abliterated-v1.mmproj-Q8_0.gguf | mmproj-Q8_0 | 454 MB |
| Qwen3-VL-4B-Instruct-abliterated-v1.mmproj-f16.gguf | mmproj-f16 | 836 MB |
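To run one of these quants locally, a common path is to fetch a text-model GGUF plus the matching mmproj vision projector and point llama.cpp's multimodal tooling at both. The sketch below uses `hf_hub_download` from huggingface_hub; the llama.cpp invocation in the trailing comment assumes a recent build with mmproj/multimodal support and is illustrative rather than exhaustive.

```python
# Sketch: download a quant and the vision projector from this repo for use with llama.cpp.
# File names are taken verbatim from the table above.
from huggingface_hub import hf_hub_download

repo_id = "prithivMLmods/Qwen3-VL-4B-Instruct-abliterated-v1-GGUF"

model_path = hf_hub_download(
    repo_id=repo_id,
    filename="Qwen3-VL-4B-Instruct-abliterated-v1.Q4_K_M.gguf",  # 2.5 GB, a common quality/size trade-off
)
mmproj_path = hf_hub_download(
    repo_id=repo_id,
    filename="Qwen3-VL-4B-Instruct-abliterated-v1.mmproj-f16.gguf",  # projector required for image input
)

print(model_path)
print(mmproj_path)

# Example llama.cpp invocation (assumed recent multimodal build; adjust paths/flags as needed):
#   llama-mtmd-cli -m <model_path> --mmproj <mmproj_path> --image photo.jpg -p "Describe the image."
```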

Quants Usage

(Sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[Graph: ikawrakow's comparison of lower-quality quant types]
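In practice, a simple rule of thumb is to take the largest quant that fits your memory budget and prefer the IQ/i1 variants at the small end. The helper below is a hypothetical illustration using file sizes copied from the table above; the 1.2x overhead factor for context and runtime buffers is an assumption, not a measurement.

```python
# Hypothetical quant picker: choose the largest file that fits a rough memory budget.
QUANT_SIZES_GB = {
    "i1-IQ2_M": 1.51, "i1-IQ3_M": 1.96, "IQ4_XS": 2.29,
    "Q4_K_M": 2.50, "Q5_K_M": 2.89, "Q6_K": 3.31, "Q8_0": 4.28, "f16": 8.05,
}

def pick_quant(budget_gb: float, overhead: float = 1.2) -> str | None:
    """Return the largest listed quant whose estimated footprint fits the budget."""
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s * overhead <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(4.0))  # a ~4 GB budget lands around Q6_K with the assumed overhead
print(pick_quant(2.0))  # tight budgets fall back to the IQ/i1 quants
```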

Format: GGUF · Model size: 4B params · Architecture: qwen3vl