NM Testing
company
AI & ML interests: None defined yet.
Recent Activity
-
nm-testing/Meta-Llama-3-8B-Instruct-W8A8-FP8-Channelwise-compressed-tensors
Text Generation • 8B • Updated • 5 • 2
-
nm-testing/Meta-Llama-3-8B-Instruct-FBGEMM-nonuniform
Text Generation • 8B • Updated • 3
-
nm-testing/Meta-Llama-3-8B-FP8-compressed-tensors-test
Text Generation • 8B • Updated • 4.34k
-
nm-testing/Meta-Llama-3-8B-Instruct-W8-Channel-A8-Dynamic-Asym-Per-Token-Test
8B • Updated • 16 • 1
Collection of State-of-the-art FP8 Block Quantized Models
models • 514
nm-testing/TinyLlama-1.1B-Chat-v1.0-kv_cache_default_tinyllama-e2e
1B • Updated • 10
nm-testing/Phi-3-mini-4k-instruct-kv_cache_default_phi3-e2e
4B • Updated • 7
nm-testing/TinyLlama-1.1B-Chat-v1.0-kv_cache_default_gptq_tinyllama-e2e
0.3B • Updated • 4
nm-testing/TinyLlama-1.1B-Chat-v1.0-W8A8_tensor_weight_static_per_tensor_act-e2e
1B • Updated • 7
nm-testing/TinyLlama-1.1B-Chat-v1.0-W8A8-e2e
1B • Updated • 142
nm-testing/TinyLlama-1.1B-Chat-v1.0-W8A8_channel_weight_static_per_tensor-e2e
1B • Updated • 4
nm-testing/TinyLlama-1.1B-Chat-v1.0-FP8A16_tensor-e2e
1B • Updated • 5
nm-testing/TinyLlama-1.1B-Chat-v1.0-FP8A16_channel-e2e
1B • Updated • 13
nm-testing/TinyLlama-1.1B-Chat-v1.0-FP8-e2e
1B • Updated • 150
nm-testing/TinyLlama-1.1B-Chat-v1.0-FP8_DYNAMIC-e2e
1B • Updated • 10
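
Several of the checkpoints listed above are published in the compressed-tensors format, so they can be loaded directly by an inference engine that understands that format. The following is a minimal sketch using vLLM, assuming a vLLM installation with compressed-tensors support; the repo id is taken from the listing above, and the prompt and sampling settings are illustrative only, not part of any published recipe.

# Minimal sketch: loading one of the listed compressed-tensors checkpoints with vLLM.
# Assumption: vLLM is installed and supports the compressed-tensors quantization format.
from vllm import LLM, SamplingParams

# Repo id taken from the listing above; vLLM reads the quantization config from the checkpoint.
llm = LLM(model="nm-testing/Meta-Llama-3-8B-FP8-compressed-tensors-test")

# Greedy decoding with a short completion; values are illustrative only.
params = SamplingParams(temperature=0.0, max_tokens=64)
outputs = llm.generate(["What is FP8 quantization?"], params)

# Each request yields a RequestOutput; print the first generated completion.
print(outputs[0].outputs[0].text)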