GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF

Specialized and Enhanced UNCENSORED/HERETIC GGUF quants for the new GLM-4.7-Flash, a 30B-A3B mixture-of-experts (MOE) model.

[ https://huggingface.co/zai-org/GLM-4.7-Flash ]

This model can be run on GPU(s) and/or CPU because only 4 experts are activated (approx 2B parameters active).

NOTE: Built with the latest Llamacpp commit (7789); these are the corrected quants.

Uncensored / Heretic'ed

De-censoring by Heretic (special thanks to "Olafangensan") seems to have reduced the size of thinking blocks in some cases and/or "focused" the model more.

Default Settings (Most Tasks)

temperature: 1.0
top-p: 0.95
max new tokens: 131072

REP PEN: 1.1 OR 1.0 (off) (if you get repeat issues)

You might also try GLM 4.6 settings (unsloth):

temperature = 0.8

top_p = 0.6 (recommended)

top_k = 2 (recommended)

max_generate_tokens = 16,384

That being said, I suggest a minimum context of 8k-16k, as final outputs (post-thinking) can be long and detailed, and in a number of cases the model has been observed "polishing" the final output one or more times IN the output section.

(Model can handle 200k context, non-roped.)
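If you run these quants via llama-cpp-python, a minimal sketch of applying the default settings above might look like the following (the model filename is a placeholder, not a file from this repo; swap in the GLM 4.6 values if you prefer those):

```python
# Minimal sketch, assuming llama-cpp-python and a locally downloaded quant.
from llama_cpp import Llama

llm = Llama(
    model_path="GLM-4.7-Flash-Uncensored-Heretic.Q8_0.gguf",  # placeholder path
    n_ctx=16384,  # min 8k-16k suggested above; the model supports up to 200k context
)

out = llm.create_completion(
    prompt="Write a short scene set in an abandoned lighthouse.",
    temperature=1.0,     # default settings listed above
    top_p=0.95,
    repeat_penalty=1.1,  # or 1.0 (off) if you have no repeat issues
    max_tokens=4096,     # the card lists 131072; lower this to fit your context window
)
print(out["choices"][0]["text"])
```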

NON-UNCENSORED QUANTS:

https://huggingface.co/DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF

Quants General:

Quants and Imatrixes were computed using the latest LLAMACPP (commit 7789, Jan 21 2026), which contains specific fixes for this model.

Quants made prior to this commit (as well as Imatrix generation) performed poorly (re-quantization and re-imatrix generation are required).

Also note there are some issues with Flash Attn and low token generation speed (as Flash Attn is offloaded to CPU in some cases). Disable Flash Attn until this issue is resolved / the fix makes its way through the "llamacpp / ai pipeline".
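If you load the model through llama-cpp-python, Flash Attn can be left off at load time; a rough sketch (the flash_attn flag exists in recent llama-cpp-python builds, and the filename is a placeholder):

```python
# Sketch only: keep flash attention disabled until the upstream fix lands.
from llama_cpp import Llama

llm = Llama(
    model_path="GLM-4.7-Flash-Uncensored-Heretic.IQ4_NL.gguf",  # placeholder path
    flash_attn=False,   # avoid the Flash Attn / CPU-offload slowdown noted above
    n_gpu_layers=-1,    # offload all layers to GPU if VRAM allows
)
```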

Specialized Quants

Specialized quants (IQ4_NL, Q5_1, Q4_1, Q8_0) are precision-balanced to address a specific tensor issue, present in all layers, that requires a specific quant type.

Other "normal" quants will also perform very well.

Quant Enhancements:

The Imatrix uses NEO and Code datasets by DavidAU - a Dual Imatrix (2 imatrixes generated separately) to improve model performance.

All quants (specialized and "normal") are also enhanced with a 16-bit (full precision) "output tensor" to further improve model performance.

The output tensor affects 10-20% of the fine detail of the model's output - both the thinking and the final generation.
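If you want to confirm the output tensor precision of a downloaded quant, here is a rough sketch using the gguf Python package (the package usage, tensor name and field names are my assumptions; check against the gguf-py docs):

```python
# Sketch: print the output tensor's quant type from a GGUF file (pip install gguf).
from gguf import GGUFReader

reader = GGUFReader("GLM-4.7-Flash-Uncensored-Heretic.Q8_0.gguf")  # placeholder path
for tensor in reader.tensors:
    if tensor.name == "output.weight":  # the usual GGUF name for the output tensor
        print(tensor.name, tensor.tensor_type)  # expect an F16 type for these quants
```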

Special thanks to:

  • Team ZAI-ORG for making an outstanding model.
  • Team P-E-W for fantastic work on the Heretic system.
  • Team Olafangensan for Heretic'ing the model.

Using an "uncensored" (refusals removed) model VS trained "uncensored" model

Usually, when you tell a model to generate horror, swearing or x-rated content, this is all you have to do to get that type of content.

In the case of this model, it will not refuse your request; however, in SOME CASES it needs to be "pushed" / directed a bit more.

Although this model will generate x-rated content too, you likewise need to tell it to use "slang" (and include the terms you want) to get it to generate the content at the "expected" level.

Without these added directive(s), the content can be "bland" compared to an "uncensored" model or a model trained on uncensored content.

Roughly: the model tries to generate the content, but its "default" settings are so "tame" that it needs a push to generate at the expected graphic, cursing or explicit levels.

Even minimal direction (ie, "use these words to swear: x, y, z") is enough to push the model to generate the requested content in the ahh... expected format.
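For example, a minimal way to build that kind of directive into the prompt (the wording below is just an illustration, not a recommended prompt):

```python
# Illustrative only: prepend an explicit directive so the model writes at the
# intended intensity rather than its "tame" default. Swap in your own terms.
directive = (
    "Write in a graphic horror style. Use blunt, coarse language; "
    "include these words where they fit: x, y, z."
)
scene = "The survivors reach the cellar door."
prompt = f"{directive}\n\n{scene}"
print(prompt)  # pass this prompt to whatever loader / front-end you use
```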


Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:

In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ;

Set the "Smoothing_factor" to 1.5

: in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"

: in text-generation-webui -> parameters -> lower right.

: In Silly Tavern this is called: "Smoothing"

NOTE: For "text-generation-webui"

-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)

Source versions (and config files) of my models are here:

https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be

OTHER OPTIONS:

  • Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor")

  • If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted.
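For reference, here is a rough numpy sketch of what the quadratic sampling ("smoothing") transform does to the logits, as it is commonly implemented (treat the exact formula as an assumption and check your front-end's source):

```python
# Sketch of quadratic sampling ("smoothing_factor") as commonly implemented:
# logits close to the top logit are barely touched, while logits far below it
# are pushed down quadratically, cutting off the tail of the distribution.
import numpy as np

def smooth_logits(logits: np.ndarray, smoothing_factor: float = 1.5) -> np.ndarray:
    max_logit = logits.max()
    return -smoothing_factor * (logits - max_logit) ** 2 + max_logit

logits = np.array([2.0, 1.5, 0.0, -1.0])
print(smooth_logits(logits))  # [2.0, 1.625, -4.0, -11.5]
```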

Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers

This a "Class 1" model:

For all settings used for this model (including specifics for its "class"), example generation(s), and the advanced settings guide (which often addresses model issues and covers methods to improve model performance for all use cases, including chat, roleplay and others), please see:

[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]

That document also lists all parameters used for generation, plus advanced parameters and samplers to get the most out of this model.
