AutoAWQ

To create a new 4-bit quantized model, you can leverage AutoAWQ. Quantization reduces the model's precision from BF16/FP16 to INT4, which significantly shrinks its memory footprint; the main benefits are lower latency and reduced memory usage.
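As a rough back-of-the-envelope illustration (the numbers below are assumptions, not measurements), the savings can be estimated from the bit widths alone:

# Illustrative weight-memory estimate for a hypothetical 7B-parameter model.
num_params = 7e9

fp16_gb = num_params * 2 / 1e9  # 16 bits = 2 bytes per weight
# AWQ packs weights into 4 bits and stores one FP16 scale (plus zero-point)
# per group of, e.g., 128 weights, which adds a small overhead.
int4_gb = (num_params * 0.5 + num_params / 128 * 2) / 1e9

print(f"FP16 weights: ~{fp16_gb:.1f} GB")  # ~14.0 GB
print(f"INT4 weights: ~{int4_gb:.1f} GB")  # ~3.6 GB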

Installation

You can either quantize your own model with AutoAWQ or pick from the 6,500+ pre-quantized models available on Hugging Face. To install AutoAWQ, run the following command:

pip install autoawq

Quantization

Once AutoAWQ is installed, you are ready to quantize a model. Refer to the AutoAWQ documentation for detailed instructions. This example shows how to quantize mistralai/Mistral-7B-Instruct-v0.2:

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = 'mistralai/Mistral-7B-Instruct-v0.2'
quant_path = 'mistral-instruct-v0.2-awq'
# AWQ settings: 4-bit weights, group size of 128, GEMM kernel
quant_config = { "zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM" }

# Load the model
model = AutoAWQForCausalLM.from_pretrained(
    model_path, low_cpu_mem_usage=True, use_cache=False
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Quantize
model.quantize(tokenizer, quant_config=quant_config)

# Save the quantized model
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)

print(f'Model is quantized and saved at "{quant_path}"')
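After quantization, you can sanity-check the saved checkpoint by loading it back with AutoAWQ. A minimal smoke-test sketch (the prompt and generation settings are arbitrary, and a CUDA device is assumed):

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

quant_path = 'mistral-instruct-v0.2-awq'

# Reload the quantized checkpoint and run a short generation.
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(quant_path)

inputs = tokenizer("What is AWQ quantization?", return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))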

Running a quantized model with vLLM

To run a quantized AWQ model with vLLM, see the following example using TheBloke/Llama-2-7b-Chat-AWQ:

python examples/offline_inference/llm_engine_example.py \
    --model TheBloke/Llama-2-7b-Chat-AWQ \
    --quantization awq
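The same checkpoint can also be served through vLLM's OpenAI-compatible API server. On recent vLLM releases this is a single command (older releases expose it as python -m vllm.entrypoints.openai.api_server instead):

vllm serve TheBloke/Llama-2-7b-Chat-AWQ --quantization awq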

Using the model in vLLM

Quantized AWQ models are also supported directly through the LLM entrypoint:

from vllm import LLM, SamplingParams

# Sample prompts.
prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
# Create a sampling params object.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# Create an LLM.
llm = LLM(model="TheBloke/Llama-2-7b-Chat-AWQ", quantization="AWQ")
# Generate texts from the prompts. The output is a list of RequestOutput objects
# that contain the prompt, generated text, and other information.
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")