The Qwen3-VL-8B-Instruct-abliterated-v2 from prithivMLmods is the second iteration (v2) of the abliterated variant of Alibaba's Qwen3-VL-8B-Instruct, an 8B-parameter vision-language model modified through abliteration to remove safety refusals and content filters. It delivers uncensored, highly detailed captioning, instruction following, and multimodal reasoning across complex, sensitive, artistic, technical, abstract, or explicit visual content, while retaining the base model's Interleaved-MRoPE fusion, 32-language OCR, 262K-token context length, and robust support for diverse resolutions, aspect ratios, videos, and layouts.

Building on v1 with refined uncensoring for greater output fidelity and fewer artifacts, it supports variable detail control, from concise summaries to exhaustive multi-granularity analyses, primarily in English with prompt-engineered multilingual adaptability. This makes it well suited to red-teaming, generative-safety research, creative visual storytelling, and unrestricted agentic applications on high-end GPUs (16-24 GB VRAM, BF16/FP8) via Transformers or vLLM. This version preserves the base model's state-of-the-art multimodal perception while eliminating guardrails, producing factual, descriptive responses in scenarios where conventional models would refuse.
| File Name | Quant Type | File Size | File Link |
|---|---|---|---|
| Qwen3-VL-8B-Instruct-abliterated-v2.IQ4_XS.gguf | IQ4_XS | 4.59 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q2_K.gguf | Q2_K | 3.28 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q3_K_L.gguf | Q3_K_L | 4.43 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q3_K_M.gguf | Q3_K_M | 4.12 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q3_K_S.gguf | Q3_K_S | 3.77 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q4_K_M.gguf | Q4_K_M | 5.03 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q4_K_S.gguf | Q4_K_S | 4.8 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q5_K_M.gguf | Q5_K_M | 5.85 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q5_K_S.gguf | Q5_K_S | 5.72 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q6_K.gguf | Q6_K | 6.73 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.Q8_0.gguf | Q8_0 | 8.71 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.f16.gguf | F16 | 16.4 GB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.mmproj-Q8_0.gguf | mmproj-Q8_0 | 752 MB | Download |
| Qwen3-VL-8B-Instruct-abliterated-v2.mmproj-f16.gguf | mmproj-f16 | 1.16 GB | Download |
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

NSFW content descriptions are mediocre; I don't recommend spending time on this.

Actual use also requires installing two things:

1. Install the prebuilt llama-cpp-python wheel:

```shell
wget "https://cnb.cool/itgay/tools/-/lfs/32bb3244347d780b3d86ba2e4e51b123bb261f9406c92ada5cbd9deab742f985?name=llama_cpp_python-0.3.22-cp312-cp312-linux_x86_64.whl"
source /venv/bin/activate
pip install --no-cache-dir llama_cpp_python-0.3.22-cp312-cp312-linux_x86_64.whl
```

2. Install the custom node:

```shell
cd custom_nodes
git clone https://github.com/1038lab/ComfyUI-QwenVL
cd ComfyUI-QwenVL
pip install --no-cache-dir -r requirements.txt
```

3. Copy the files to the right place: start a workflow once and check the log output for where the model files should go, then interrupt the workflow, manually copy the local model files there, and restart ComfyUI.
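Once llama-cpp-python and the node are installed, prompts reach the model as OpenAI-style chat messages. The sketch below illustrates the common multimodal message layout that llama-cpp-python's vision chat handlers accept (an assumption based on the OpenAI chat-completions convention; it is not ComfyUI-QwenVL's internal API):

```python
import base64

def build_vision_message(image_bytes: bytes, instruction: str) -> dict:
    """Build one OpenAI-style user message pairing an image with text.

    The image travels as a base64 data URI inside an "image_url" content
    part; the "text" part carries the captioning instruction."""
    data_uri = ("data:image/png;base64,"
                + base64.b64encode(image_bytes).decode("ascii"))
    return {
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": data_uri}},
            {"type": "text", "text": instruction},
        ],
    }

msg = build_vision_message(b"\x89PNG...", "Describe this image in detail.")
print(msg["content"][1]["text"])  # -> Describe this image in detail.
```

A message built this way can be passed in the `messages` list of `Llama.create_chat_completion` when the model is loaded with a vision chat handler and the mmproj file from the table above.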