Qwen3-8B-W4A8

Run the Docker Container

Note

W4A8 quantization is supported in v0.9.1rc2 and later versions.

# Update the vllm-ascend image
export IMAGE=m.daocloud.io/quay.io/ascend/vllm-ascend:v0.12.0rc1
docker run --rm \
--name vllm-ascend \
--shm-size=1g \
--device /dev/davinci0 \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /root/.cache:/root/.cache \
-p 8000:8000 \
-it $IMAGE bash
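
After entering the container, you can confirm that the NPU is visible before moving on. The check below is a minimal sketch; it relies on the npu-smi binary mounted into the container above.

# Verify the mounted NPU device is visible inside the container
npu-smi info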

Install modelslim and Convert the Model

Note

You can convert the model yourself, or use the quantized model we have uploaded; see https://www.modelscope.cn/models/vllm-ascend/Qwen3-8B-W4A8

# The branch (br_release_MindStudio_8.1.RC2_TR5_20260624) has been verified
git clone -b br_release_MindStudio_8.1.RC2_TR5_20260624 https://gitcode.com/Ascend/msit

cd msit/msmodelslim

# Install by running this script
bash install.sh
pip install accelerate

cd example/Qwen
# Original weight path; replace with your local model path
MODEL_PATH=/home/models/Qwen3-8B
# Path to save the converted weights; replace with your local path
SAVE_PATH=/home/models/Qwen3-8B-w4a8
# Set an idle NPU card
export ASCEND_RT_VISIBLE_DEVICES=0

python quant_qwen.py \
          --model_path $MODEL_PATH \
          --save_directory $SAVE_PATH \
          --device_type npu \
          --model_type qwen3 \
          --calib_file None \
          --anti_method m6 \
          --anti_calib_file ./calib_data/mix_dataset.json \
          --w_bit 4 \
          --a_bit 8 \
          --is_lowbit True \
          --open_outlier False \
          --group_size 256 \
          --is_dynamic True \
          --trust_remote_code True \
          --w_method HQQ

Verify the Quantized Model

The converted model files look like this:

.
|-- config.json
|-- configuration.json
|-- generation_config.json
|-- merges.txt
|-- quant_model_description.json
|-- quant_model_weight_w4a8_dynamic-00001-of-00003.safetensors
|-- quant_model_weight_w4a8_dynamic-00002-of-00003.safetensors
|-- quant_model_weight_w4a8_dynamic-00003-of-00003.safetensors
|-- quant_model_weight_w4a8_dynamic.safetensors.index.json
|-- README.md
|-- tokenizer.json
`-- tokenizer_config.json
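
Before serving the model, you can spot-check the quantization metadata. The commands below are a minimal sketch, assuming the SAVE_PATH from the conversion step and that quant_model_description.json is plain JSON.

# List the converted weights and preview the quantization description
ls $SAVE_PATH
python -m json.tool $SAVE_PATH/quant_model_description.json | head -n 20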

Run the following script to start the vLLM server with the quantized model:

export VLLM_USE_MODELSCOPE=true
export MODEL_PATH=vllm-ascend/Qwen3-8B-W4A8
vllm serve ${MODEL_PATH} --served-model-name "qwen3-8b-w4a8" --max-model-len 4096 --quantization ascend
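
Once vLLM reports that the API server is running, you can confirm the model is registered under its served name. This is a minimal check against vLLM's OpenAI-compatible API and assumes the server is listening on the default port 8000.

# The response should list "qwen3-8b-w4a8"
curl http://localhost:8000/v1/models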

You can then query the model with an input prompt:

curl http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "qwen3-8b-w4a8",
        "prompt": "What is a large language model?",
        "max_tokens": 128,
        "top_p": 0.95,
        "top_k": 40,
        "temperature": 0.0
    }'
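
If you prefer the chat-style interface, the same server also exposes /v1/chat/completions. The request below is a sketch under the same assumptions (default port, served model name from the serve command).

curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "qwen3-8b-w4a8",
        "messages": [{"role": "user", "content": "What is a large language model?"}],
        "max_tokens": 128,
        "temperature": 0.0
    }'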

Run the following script to perform offline inference on a single NPU with the quantized model:

Note

To enable quantization on Ascend, the quantization method must be set to "ascend".


from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The future of AI is",
]
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, top_k=40)

llm = LLM(model="/home/models/Qwen3-8B-w4a8",
          max_model_len=4096,
          quantization="ascend")

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")