All available quants and imatrix files here have now been updated to include the gating function fixes! See the discussion for details: https://huggingface.co/ubergarm/GLM-4.7-Flash-GGUF/discussions/1
ik_llama.cpp imatrix Quantizations of zai-org/GLM-4.7-Flash
NOTE: ik_llama.cpp can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc. if you want to try it out before downloading my quants.
Some of ik's new quants are supported by the Nexesenex/croco.cpp fork of KoboldCPP, which has Windows builds for CUDA 12.9. Also check for Windows builds by Thireus, which have been CUDA 12.8.
These quants provide best-in-class perplexity for the given memory footprint.
Big Thanks
Shout out to Wendell and the Level1Techs crew, the community Forums, and the YouTube Channel! BIG thanks for providing BIG hardware expertise and access to run these experiments and make these great quants available to the community!!!
Also thanks to all the folks in the quanting and inferencing community on BeaverAI Club Discord and on r/LocalLLaMA for tips and tricks helping each other run, test, and benchmark all the fun new models! Thanks to huggingface for hosting all these big quants!
Finally, I really appreciate the support from aifoundry.org, so check out their open-source RISC-V based solutions!
Quant Collection
Perplexity computed against wiki.test.raw. (lower is "better")
(not sure why the y-axis doesn't look log scale on this one)
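A rough sketch of how these numbers can be reproduced with ik_llama.cpp's llama-perplexity tool is below; the model path, offload, and thread count are placeholders for illustration, and per-quant flags varied slightly (e.g. the BF16 run used -mla 1, as noted below):
./build/bin/llama-perplexity \
    --model /mnt/raid/models/ubergarm/GLM-4.7-Flash-GGUF/GLM-4.7-Flash-IQ5_K.gguf \
    -f wiki.test.raw \
    -c 512 \
    -mla 3 \
    -ngl 99 \
    --threads 16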
These two are just test quants for baseline perplexity comparison:
BF16 55.786 GiB (16.003 BPW)
- Final estimate: PPL over 565 chunks for n_ctx=512 = 9.8537 +/- 0.07939
- (ran with -mla 1 instead of the usual -mla 3; see logs for details)
Q8_0 29.647 GiB (8.505 BPW)
- Final estimate: PPL over 565 chunks for n_ctx=512 = 9.8206 +/- 0.07906
- NOTE: The first split file is much smaller on purpose since it only contains metadata; it's fine!
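(Rough sanity check: BPW is just total file bits over total weights, so 55.786 GiB × 8 × 2^30 bits ÷ 16.003 BPW ≈ 29.9B weights, and the Q8_0 numbers give the same ≈ 29.9B.)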
IQ5_K 21.157 GiB (6.069 BPW)
Final estimate: PPL over 565 chunks for n_ctx=512 = 9.7951 +/- 0.07845
👈 Secret Recipe
#!/usr/bin/env bash
custom="
## Attention [0-47] (GPU)
blk\..*\.attn_k_b\.weight=q8_0
blk\..*\.attn_v_b\.weight=q8_0
# Balance of attn tensors (GPU)
blk\..*\.attn_kv_a_mqa\.weight=q8_0
blk\..*\.attn_q_a\.weight=q8_0
blk\..*\.attn_q_b\.weight=q8_0
blk\..*\.attn_output\.weight=q8_0
## First Dense Layer [0] (GPU)
blk\..*\.ffn_down\.weight=q8_0
blk\..*\.ffn_(gate|up)\.weight=q8_0
## Shared Expert (1-39) (GPU)
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0
## Routed Experts (1-39) (CPU)
blk\..*\.ffn_down_exps\.weight=iq6_k
blk\..*\.ffn_(gate|up)_exps\.weight=iq5_k
## Token embedding and output tensors (GPU)
token_embd\.weight=q8_0
output\.weight=q8_0
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/raid/models/ubergarm/GLM-4.7-Flash-GGUF/imatrix-GLM-4.7-Flash-BF16.dat \
/mnt/raid/models/ubergarm/GLM-4.7-Flash-GGUF/GLM-4.7-Flash-64x2.6B-BF16-00001-of-00002.gguf \
/mnt/raid/models/ubergarm/GLM-4.7-Flash-GGUF/GLM-4.7-Flash-IQ5_K.gguf \
IQ5_K \
24
MXFP4 15.901 GiB (4.562 BPW)
Final estimate: PPL over 565 chunks for n_ctx=512 = 8.4759 +/- 0.06153
NOTE: This is an oddball MXFP4 not using imatrix data. It is compatible with both ik and mainline llama.cpp. Not sure why it shows the lowest "best" perplexity. Maybe the original model had some QAT targeting this quantization type? Still odd...
👈 Secret Recipe
Thanks to noctrex for the discussion here: https://huggingface.co/noctrex/GLM-4.7-Flash-MXFP4_MOE-GGUF/discussions/1#696fd9372990e7ef2f5730f8
#!/usr/bin/env bash
custom="
## Attention [0-47] (GPU)
blk\..*\.attn_k_b\.weight=q8_0
blk\..*\.attn_v_b\.weight=q8_0
# mainline does the following which hurts PPL a little on ik with -mla 3
#blk\..*\.attn_k_b\.weight=mxfp4
#blk\..*\.attn_v_b\.weight=mxfp4
# Balance of attn tensors (GPU)
blk\..*\.attn_kv_a_mqa\.weight=q8_0
blk\..*\.attn_q_a\.weight=q8_0
blk\..*\.attn_q_b\.weight=q8_0
blk\..*\.attn_output\.weight=q8_0
## First Dense Layer [0] (GPU)
blk\..*\.ffn_down\.weight=q8_0
blk\..*\.ffn_(gate|up)\.weight=q8_0
## Shared Expert (1-39) (GPU)
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0
## Routed Experts (1-39) (CPU)
blk\..*\.ffn_down_exps\.weight=mxfp4
blk\..*\.ffn_(gate|up)_exps\.weight=mxfp4
## Token embedding and output tensors (GPU)
token_embd\.weight=q8_0
output\.weight=q8_0
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
./build/bin/llama-quantize \
--custom-q "$custom" \
/mnt/raid/models/ubergarm/GLM-4.7-Flash-GGUF/GLM-4.7-Flash-64x2.6B-BF16-00001-of-00002.gguf \
/mnt/raid/models/ubergarm/GLM-4.7-Flash-GGUF/GLM-4.7-Flash-fat-MXFP4.gguf \
MXFP4 \
24
smol-IQ4_KSS 14.918 GiB (4.280 BPW)
Final estimate: PPL over 565 chunks for n_ctx=512 = 10.2529 +/- 0.08341
NOTE: This one shows abnormally "bad" perplexity relative to that oddball MXFP4. Still need to do more testing beyond perplexity...
👈 Secret Recipe
#!/usr/bin/env bash
custom="
## Attention [0-47] (GPU)
blk\..*\.attn_k_b\.weight=q8_0
blk\..*\.attn_v_b\.weight=q8_0
# Balance of attn tensors (GPU)
blk\..*\.attn_kv_a_mqa\.weight=q8_0
blk\..*\.attn_q_a\.weight=q8_0
blk\..*\.attn_q_b\.weight=q8_0
blk\..*\.attn_output\.weight=q8_0
## First Dense Layer [0] (GPU)
blk\..*\.ffn_down\.weight=iq6_k
blk\..*\.ffn_(gate|up)\.weight=iq6_k
## Shared Expert (1-39) (GPU)
blk\..*\.ffn_down_shexp\.weight=q8_0
blk\..*\.ffn_(gate|up)_shexp\.weight=q8_0
## Routed Experts (1-39) (CPU)
blk\..*\.ffn_down_exps\.weight=iq4_kss
blk\..*\.ffn_(gate|up)_exps\.weight=iq4_kss
## Token embedding and output tensors (GPU)
token_embd\.weight=iq4_k
output\.weight=iq6_k
"
custom=$(
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/raid/models/ubergarm/GLM-4.7-Flash-GGUF/imatrix-GLM-4.7-Flash-BF16.dat \
/mnt/raid/models/ubergarm/GLM-4.7-Flash-GGUF/GLM-4.7-Flash-64x2.6B-BF16-00001-of-00002.gguf \
/mnt/raid/models/ubergarm/GLM-4.7-Flash-GGUF/GLM-4.7-Flash-smol-IQ4_KSS.gguf \
IQ4_KSS \
24
Quick Start
# Clone and checkout
$ git clone https://github.com/ikawrakow/ik_llama.cpp
$ cd ik_llama.cpp
# Build for hybrid CPU+CUDA
$ cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=ON
$ cmake --build build --config Release -j $(nproc)
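# NOTE: For a CPU-only build (no CUDA GPU), drop -DGGML_CUDA=ON from the
# configure step above and build the same way.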
# Full GPU Offload
./build/bin/llama-server \
--model "$model" \
--alias ubergarm/GLM-4.7-Flash \
-c 32768 \
-ctk q8_0 \
-ger \
--merge-qkv \
-mla 3 -amb 512 \
-ngl 99 \
-ub 4096 -b 4096 \
--threads 1 \
--host 127.0.0.1 \
--port 8080 \
--jinja \
--no-mmap
You can always bring your own template with --chat-template-file myTemplate.jinja, and you might need --special etc.
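If the model plus KV cache doesn't fit entirely in VRAM, a hybrid CPU+GPU sketch in the spirit of the recipes above is to keep the routed experts on CPU via --override-tensor while offloading everything else. The -ot regex and thread count below are assumptions to tune for your hardware (set --threads to roughly your physical core count):
# Hybrid CPU+GPU Offload: routed experts stay on CPU, everything else on GPU
./build/bin/llama-server \
    --model "$model" \
    --alias ubergarm/GLM-4.7-Flash \
    -c 32768 \
    -ctk q8_0 \
    -ger \
    --merge-qkv \
    -mla 3 -amb 512 \
    -ngl 99 \
    -ot exps=CPU \
    -ub 4096 -b 4096 \
    --threads 8 \
    --host 127.0.0.1 \
    --port 8080 \
    --jinja \
    --no-mmap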