Tags: GGUF · English · qwen3 · llama.cpp · unsloth · conversational

Qwen3 14B - DeepSeek v3.2 Speciale Distill

This model was trained on reasoning datasets distilled from DeepSeek v3.2 Speciale.

  • 🧬 Datasets:

    • TeichAI/deepseek-v3.2-speciale-OpenCodeReasoning-3k
    • TeichAI/deepseek-v3.2-speciale-1000x
    • TeichAI/deepseek-v3.2-speciale-openr1-math-3k
  • 🏗 Base Model:

    • unsloth/Qwen3-14B
  • ⚡ Use cases:

    • Coding
    • Math
    • Chat
    • Deep Research

This qwen3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.

An Ollama Modelfile is included for easy deployment.
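A minimal Modelfile for serving one of the GGUF quants with Ollama might look like the sketch below. The GGUF filename and the temperature value are assumptions for illustration; point `FROM` at whichever quant file you actually downloaded.

```
# Minimal sketch of an Ollama Modelfile for a GGUF quant of this model.
# The filename below is an assumption; substitute the quant you downloaded.
FROM ./Qwen3-14B-DeepSeek-v3.2-Speciale-Distill-Q4_K_M.gguf

# Sampling parameter chosen for illustration, not taken from the model card.
PARAMETER temperature 0.6
```

You would then register and run it with `ollama create qwen3-speciale -f Modelfile` followed by `ollama run qwen3-speciale`.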

  • Downloads last month: 1,214
  • Format: GGUF
  • Model size: 15B params
  • Architecture: qwen3

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.
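As a rough rule of thumb, a GGUF file's size is about (parameter count × bits per weight) / 8 bytes. This ignores metadata overhead and the fact that real quantization schemes (e.g. Q4_K_M) keep some tensors at higher precision, so treat the numbers as ballpark estimates only. A minimal sketch:

```python
def approx_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate GGUF file size in decimal gigabytes.

    Ignores metadata overhead and the mixed-precision layers that real
    quantization schemes use, so this is a lower-bound ballpark figure.
    """
    return n_params * bits_per_weight / 8 / 1e9


# ~15B parameters, as listed for this model
for bits in (2, 3, 4, 5, 6, 8, 16):
    print(f"{bits:>2}-bit: ~{approx_gguf_size_gb(15e9, bits):.1f} GB")
```

For example, the 4-bit quant comes out to roughly 7.5 GB and the 16-bit version to roughly 30 GB, which is why the lower-bit quants are the practical choice on consumer GPUs.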


Model tree for TeichAI/Qwen3-14B-DeepSeek-v3.2-Speciale-Distill-GGUF:

  • Qwen/Qwen3-14B (base)
  • unsloth/Qwen3-14B (finetuned from Qwen/Qwen3-14B)
  • this model (finetuned from unsloth/Qwen3-14B, quantized to GGUF)
