Model Overview

  • Model Architecture: MiniMaxM2ForCausalLM
    • Input: Text
    • Output: Text
  • Supported Hardware Microarchitecture: AMD Instinct MI300/MI350/MI355
  • ROCm: 7.0
  • PyTorch: 2.8.0
  • Transformers: 4.57.1
  • Operating System(s): Linux
  • Inference Engine: SGLang/vLLM
  • Model Optimizer: AMD-Quark (v0.11)
    • Weight quantization: OCP MXFP4, Static
    • Activation quantization: OCP MXFP4, Dynamic

Model Quantization

The model was quantized from QuixiAI/MiniMax-M2.1-bf16 using AMD-Quark. Both weights and activations are quantized to OCP MXFP4, with static weight scales and dynamic activation scales.

Quantization script:

cd Quark/examples/torch/language_modeling/llm_ptq/

# Keep the LM head, MoE router gates, and attention layers unquantized
export exclude_layers="lm_head *block_sparse_moe.gate* *self_attn*"

# MODEL_DIR points at the BF16 source checkpoint; output_dir is where the
# quantized HF-format checkpoint is written
python3 quantize_quark.py --model_dir $MODEL_DIR \
                          --quant_scheme mxfp4 \
                          --num_calib_data 128 \
                          --exclude_layers $exclude_layers \
                          --skip_evaluation \
                          --multi_gpu \
                          --trust_remote_code \
                          --model_export hf_format \
                          --output_dir $output_dir
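
After export, a quick sanity check is to confirm that the exported config.json carries the MXFP4 quantization settings. The snippet below is a minimal sketch, assuming the $output_dir used above:

# Print the quantization_config recorded in the exported checkpoint
python3 - "$output_dir" <<'EOF'
import json, os, sys

with open(os.path.join(sys.argv[1], "config.json")) as f:
    cfg = json.load(f)
print(json.dumps(cfg.get("quantization_config", {}), indent=2))
EOF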

For further details or issues, please refer to the AMD-Quark documentation or contact the respective developers.

Evaluation

The model was evaluated on the GSM8K benchmark using the vLLM framework.

Accuracy

Benchmark                  QuixiAI/MiniMax-M2.1-bf16  amd/MiniMax-M2.1-MXFP4 (this model)  Recovery
gsm8k (flexible-extract)   0.9356                     0.9348                               99.91%
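
Recovery is the quantized score as a fraction of the BF16 baseline: 0.9348 / 0.9356 ≈ 0.9991, i.e. 99.91%.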

Reproduction

The GSM8K results were obtained with the vLLM framework, starting from the Docker image rocm/vllm:rocm7.0.0_vllm_0.11.2_20251210 and reinstalling vLLM from source inside the container.
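
For reference, a typical way to start this container on a ROCm host looks like the following; this is a sketch rather than the exact command used, and the workspace mount is an assumption:

# Typical ROCm container launch (device flags may vary by host setup)
docker run -it --rm \
    --network host \
    --ipc host \
    --device /dev/kfd --device /dev/dri \
    --group-add video \
    --security-opt seccomp=unconfined \
    -v $PWD:/workspace -w /workspace \
    rocm/vllm:rocm7.0.0_vllm_0.11.2_20251210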

Preparation in container

# Replace the preinstalled vLLM with v0.13.0 built from source
pip uninstall vllm -y
git clone https://github.com/vllm-project/vllm.git
cd vllm
git checkout v0.13.0
pip install -r requirements/rocm.txt
python setup.py develop
cd ..
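
A quick way to confirm the source build is the one being imported (the exact version string may differ):

# Verify the freshly built vLLM is importable
python -c "import vllm; print(vllm.__version__)"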

Launching server

# MODEL points at this quantized checkpoint;
# VLLM_ROCM_USE_AITER enables AMD's AITER kernels on ROCm
VLLM_ROCM_USE_AITER=1 \
VLLM_DISABLE_COMPILE_CACHE=1 \
vllm serve "$MODEL" \
    --tensor-parallel-size 4 \
    --trust-remote-code \
    --max-model-len 32768 \
    --port 8899
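
Once the server is up, its OpenAI-compatible endpoint can be polled to confirm the model is registered; a minimal check:

# List the served model once the server reports ready
curl -s http://127.0.0.1:8899/v1/models | python3 -m json.tool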

Evaluating the model in a new terminal

python vllm/tests/evals/gsm8k/gsm8k_eval.py --host http://127.0.0.1 --port 8899 --num-questions 1000 --save-results logs
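
Before the full 1000-question run, a single completion request is a cheap sanity check. The model id below is a placeholder; use whatever id the server reports via /v1/models (it resolves from $MODEL):

# Single-request sanity check against the OpenAI-compatible endpoint
curl -s http://127.0.0.1:8899/v1/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "MiniMax-M2.1-MXFP4", "prompt": "Q: What is 12 * 7? A:", "max_tokens": 16}'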

License

Modifications Copyright (c) 2026 Advanced Micro Devices, Inc. All rights reserved.
