Quickstart#

Prerequisites#

Supported devices#

  • Atlas A2 Training series (Atlas 800T A2, Atlas 900 A2 PoD, Atlas 200T A2 Box16, Atlas 300T A2)

  • Atlas 800I A2 Inference series (Atlas 800I A2)

  • Atlas A3 Training series (Atlas 800T A3, Atlas 900 A3 SuperPoD, Atlas 9000 A3 SuperPoD)

  • Atlas 800I A3 Inference series (Atlas 800I A3)

  • [Experimental] Atlas 300I Inference series (Atlas 300I Duo)

Set up the environment using a container#

The image comes in two flavors: a default Ubuntu-based image and an openEuler-based image (tagged with an -openeuler suffix).

For the Ubuntu-based image:

# Update DEVICE according to your device (/dev/davinci[0-7])
export DEVICE=/dev/davinci0
# Update the vllm-ascend image
# Atlas A2:
# export IMAGE=quay.io/ascend/vllm-ascend:v0.12.0rc1
# Atlas A3:
# export IMAGE=quay.io/ascend/vllm-ascend:v0.12.0rc1-a3
export IMAGE=quay.io/ascend/vllm-ascend:v0.12.0rc1
docker run --rm \
--name vllm-ascend \
--shm-size=1g \
--device $DEVICE \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /root/.cache:/root/.cache \
-p 8000:8000 \
-it $IMAGE bash
# Install curl
apt-get update -y && apt-get install -y curl

For the openEuler-based image:

# Update DEVICE according to your device (/dev/davinci[0-7])
export DEVICE=/dev/davinci0
# Update the vllm-ascend image
# Atlas A2:
# export IMAGE=quay.io/ascend/vllm-ascend:v0.12.0rc1-openeuler
# Atlas A3:
# export IMAGE=quay.io/ascend/vllm-ascend:v0.12.0rc1-a3-openeuler
export IMAGE=quay.io/ascend/vllm-ascend:v0.12.0rc1-openeuler
docker run --rm \
--name vllm-ascend \
--shm-size=1g \
--device $DEVICE \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /root/.cache:/root/.cache \
-p 8000:8000 \
-it $IMAGE bash
# Install curl
yum update -y && yum install -y curl

The default working directory is /workspace. The vLLM and vLLM Ascend code lives under /vllm-workspace and is installed in development mode (pip install -e), so that developers' changes take effect immediately without reinstalling.
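
To sanity-check the installation from inside the container, a quick Python session like the following can help (a minimal sketch; it assumes torch_npu, which the vllm-ascend images ship with, and the /vllm-workspace path from the note above):

import torch
import torch_npu  # noqa: F401  (registers the NPU backend with PyTorch)
import vllm

print(vllm.__version__)          # installed vLLM version
print(vllm.__file__)             # should point under /vllm-workspace for an editable install
print(torch.npu.is_available())  # True if the mounted /dev/davinci* device is usable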

Usage#

You can use the ModelScope mirror to speed up model downloads:

export VLLM_USE_MODELSCOPE=true

There are two ways to run vLLM on Ascend NPU: offline batched inference and an OpenAI-compatible API server.

With vLLM installed, you can start generating text for a list of input prompts (i.e., offline batched inference).

Try running the Python script below directly, or use the python3 shell, to generate text:

from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
# The first run will take about 3-5 mins (10 MB/s) to download models
llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")

outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

vLLM can also be deployed as a server that implements the OpenAI API protocol. Run the following command to start the vLLM server with the Qwen/Qwen2.5-0.5B-Instruct model:

# Deploy vLLM server (The first run will take about 3-5 mins (10 MB/s) to download models)
vllm serve Qwen/Qwen2.5-0.5B-Instruct &
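
If you are scripting against the server, you can poll its /health endpoint until it is ready instead of watching the logs (a minimal sketch using only the standard library; /health is the health-check route exposed by vLLM's OpenAI-compatible server):

# Poll the vLLM server's /health endpoint until it responds with HTTP 200.
import time
import urllib.error
import urllib.request

URL = "http://localhost:8000/health"

for _ in range(60):  # poll for up to ~5 minutes
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            if resp.status == 200:
                print("vLLM server is ready")
                break
    except (urllib.error.URLError, OSError):
        pass  # server not up yet; retry
    time.sleep(5)
else:
    raise RuntimeError("vLLM server did not become ready in time")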

If you see logs like the following:

INFO:     Started server process [3594]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)

Congratulations, you have successfully started the vLLM server!

You can query the list of models:

curl http://localhost:8000/v1/models | python3 -m json.tool

You can also query the model with input prompts:

curl http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "Qwen/Qwen2.5-0.5B-Instruct",
        "prompt": "Beijing is a",
        "max_tokens": 5,
        "temperature": 0
    }' | python3 -m json.tool
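
Because the server speaks the OpenAI API protocol, you can also query it from Python with the official openai client (a minimal sketch; it assumes pip install openai inside the container, and the api_key value is an arbitrary placeholder):

# Query the vLLM server through its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # placeholder; vllm serve does not check keys unless --api-key is set
)

completion = client.completions.create(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    prompt="Beijing is a",
    max_tokens=5,
    temperature=0,
)
print(completion.choices[0].text)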

vLLM is running as a background process here. You can stop it gracefully with kill -2 $VLLM_PID, which sends SIGINT and behaves like pressing Ctrl-C to stop a foreground vLLM process:

VLLM_PID=$(pgrep -f "vllm serve")
kill -2 "$VLLM_PID"

The output is as follows:

INFO:     Shutting down FastAPI HTTP server.
INFO:     Shutting down
INFO:     Waiting for application shutdown.
INFO:     Application shutdown complete.

Finally, you can exit the container with Ctrl-D.