nm-testing/DeepSeek-Coder-V2-Lite-Instruct-quantized.w8a8 • 16B • Updated • 1
nm-testing/l4-scout-int4-debug • 20B • Updated
nm-testing/pixtral-12b-FP8-dynamic • Image-Text-to-Text • Updated • 300 • 1
nm-testing/TinyLlama-1.1B-Chat-v1.0-W4A16-G128-Asym-Updated-ActOrder • 0.3B • Updated • 2.76k
nm-testing/TinyLlama-1.1B-Chat-v1.0-awq-group128-asym256 • 0.3B • Updated
nm-testing/TinyLlama-1.1B-Chat-v1.0-W4A16-G128-Asym-Updated-Channel • 0.3B • Updated
nm-testing/TinyLlama-1.1B-Chat-v1.0-W4A16-G128-Asym-Updated • 0.3B • Updated
nm-testing/Llama-2-7b-hf-gsm8k-quant_w4a16_sym-uncompressed • 7B • Updated • 1
nm-testing/Llama-2-7b-hf-gsm8k-quant_w4a16_sym-compressed • 1B • Updated • 1
nm-testing/Llama-2-7b-hf-gsm8k-gptq_w4a16_sym-uncompressed • 7B • Updated • 1
nm-testing/Llama-2-7b-hf-gsm8k-gptq_w4a16_sym-compressed • 1B • Updated • 1
nm-testing/Llama-2-7b-hf-gsm8k-awq_w4a16_sym-uncompressed • 7B • Updated • 1
nm-testing/Llama-2-7b-hf-gsm8k-awq_w4a16_sym-compressed • 1B • Updated • 1
nm-testing/Llama-2-7b-hf-gsm8k-awq_gptq_sym-uncompressed • 7B • Updated • 6
nm-testing/Llama-2-7b-hf-gsm8k-awq_gptq_sym-compressed • 1B • Updated • 2
nm-testing/Mixtral-8x7B-Instruct-v0.1-FP8-Dynamic • 47B • Updated • 1
nm-testing/Llama-3.1-8B-Instruct-W4A16-G128-shared-pipeline • 2B • Updated • 1
nm-testing/Qwen2-VL-2B-Instruct-FP8-dynamic-cli • 2B • Updated • 1
nm-testing/Qwen2-VL-2B-Instruct-FP8_DYNAMIC • Image-Text-to-Text • 2B • Updated • 1
nm-testing/whisper-large-v3-quantized.w4a16 • 0.3B • Updated • 1
nm-testing/whisper-large-v3-quantized.w8a8_sq • 2B • Updated • 1
nm-testing/whisper-large-v3-quantized.w8a8 • 2B • Updated
nm-testing/llama2.c-stories110M-gsm8k-fp8_dynamic-compressed • 0.1B • Updated • 1.01k
nm-testing/llama2.c-stories110M-gsm8k-recipe_w4a16_actorder_weight-compressed • 60.5M • Updated • 1.01k
nm-testing/Llama-3.2-1B-Instruct-W4A16-uncompressed-mse-hadamard • 5B • Updated
nm-testing/llama2.c-stories15M • Text Generation • 24.4M • Updated • 4.46k
nm-testing/Meta-Llama-3-8B-Instruct-FP8-channel-output-activation-kv_cache-qkv_proj • 8B • Updated • 1
nm-testing/Meta-Llama-3-8B-Instruct-FP8-channel-output-activation-q_proj • 8B • Updated
nm-testing/Meta-Llama-3-8B-Instruct-FP8-channel-output-activation • 8B • Updated • 1
nm-testing/Llama-3.2-1B-W4A16-Transforms • 4B • Updated • 1