Multimodal Inputs

This page teaches you how to pass multi-modal inputs to multi-modal models in vLLM.

Note

We are actively iterating on multi-modal support. See this RFC for upcoming changes, and open an issue on GitHub if you have any feedback or feature requests.

Offline Inference

To input multi-modal data, follow this schema in vllm.inputs.PromptType: provide the text prompt via prompt (formatted as documented on the model's HuggingFace repo) and the multi-modal data via multi_modal_data, a dictionary keyed by modality ("image", "video", "audio"), as shown in the examples below.

Image Inputs

You can pass a single image to the 'image' field of the multi-modal dictionary, as shown in the following examples:

Code
import PIL.Image

from vllm import LLM

llm = LLM(model="llava-hf/llava-1.5-7b-hf")

# Refer to the HuggingFace repo for the correct format to use
prompt = "USER: <image>\nWhat is the content of this image?\nASSISTANT:"

# Load the image using PIL.Image
image = PIL.Image.open(...)

# Single prompt inference
outputs = llm.generate({
    "prompt": prompt,
    "multi_modal_data": {"image": image},
})

for o in outputs:
    generated_text = o.outputs[0].text
    print(generated_text)

# Batch inference
image_1 = PIL.Image.open(...)
image_2 = PIL.Image.open(...)
outputs = llm.generate(
    [
        {
            "prompt": "USER: <image>\nWhat is the content of this image?\nASSISTANT:",
            "multi_modal_data": {"image": image_1},
        },
        {
            "prompt": "USER: <image>\nWhat's the color of this image?\nASSISTANT:",
            "multi_modal_data": {"image": image_2},
        }
    ]
)

for o in outputs:
    generated_text = o.outputs[0].text
    print(generated_text)

Full example: examples/offline_inference/vision_language.py

To substitute multiple images inside the same text prompt, you can pass in a list of images instead:

Code
import PIL.Image

from vllm import LLM

llm = LLM(
    model="microsoft/Phi-3.5-vision-instruct",
    trust_remote_code=True,  # Required to load Phi-3.5-vision
    max_model_len=4096,  # Otherwise, it may not fit in smaller GPUs
    limit_mm_per_prompt={"image": 2},  # The maximum number to accept
)

# Refer to the HuggingFace repo for the correct format to use
prompt = "<|user|>\n<|image_1|>\n<|image_2|>\nWhat is the content of each image?<|end|>\n<|assistant|>\n"

# Load the images using PIL.Image
image1 = PIL.Image.open(...)
image2 = PIL.Image.open(...)

outputs = llm.generate({
    "prompt": prompt,
    "multi_modal_data": {
        "image": [image1, image2]
    },
})

for o in outputs:
    generated_text = o.outputs[0].text
    print(generated_text)

Full example: examples/offline_inference/vision_language_multi_image.py

If using the LLM.chat method, you can pass images directly in the message content using various formats: image URLs, PIL Image objects, or pre-computed embeddings:

import torch
from vllm import LLM
from vllm.assets.image import ImageAsset

llm = LLM(model="llava-hf/llava-1.5-7b-hf")
image_url = "https://picsum.photos/id/32/512/512"
image_pil = ImageAsset('cherry_blossom').pil_image
image_embeds = torch.load(...)

conversation = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hello! How can I assist you today?"},
    {
        "role": "user",
        "content": [{
            "type": "image_url",
            "image_url": {
                "url": image_url
            }
        },{
            "type": "image_pil",
            "image_pil": image_pil
        }, {
            "type": "image_embeds",
            "image_embeds": image_embeds
        }, {
            "type": "text",
            "text": "What's in these images?"
        }],
    },
]

# Perform inference and log output.
outputs = llm.chat(conversation)

for o in outputs:
    generated_text = o.outputs[0].text
    print(generated_text)

Multi-image input can be extended to perform video captioning. We show this with Qwen2-VL as an example, since it supports videos:

Code
import base64
import io

import PIL.Image
from vllm import LLM


# Example helper (not part of vLLM): encode one frame (a PIL image or uint8 NumPy array)
# as a base64 JPEG string for use in an image_url data URI.
def encode_image(frame) -> str:
    image = frame if isinstance(frame, PIL.Image.Image) else PIL.Image.fromarray(frame)
    buffer = io.BytesIO()
    image.save(buffer, format="JPEG")
    return base64.b64encode(buffer.getvalue()).decode("utf-8")

# Specify the maximum number of frames per video to be 4. This can be changed.
llm = LLM("Qwen/Qwen2-VL-2B-Instruct", limit_mm_per_prompt={"image": 4})

# Create the request payload.
video_frames = ... # load your video making sure it only has the number of frames specified earlier.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this set of frames. Consider the frames to be a part of the same video."},
    ],
}
for i in range(len(video_frames)):
    base64_image = encode_image(video_frames[i]) # base64 encoding.
    new_image = {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"}}
    message["content"].append(new_image)

# Perform inference and log output.
outputs = llm.chat([message])

for o in outputs:
    generated_text = o.outputs[0].text
    print(generated_text)

Video Inputs

You can pass a list of NumPy arrays directly to the 'video' field of the multi-modal dictionary instead of using multi-image input.
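
For example, here is a minimal sketch of that approach. It assumes OpenCV (cv2) is available for decoding, the video path and the 16-frame cap are placeholders, and the served model is the LLaVA-OneVision checkpoint used later on this page; check its HuggingFace repo for the exact prompt format before use:

Code
import cv2
import numpy as np
from vllm import LLM

llm = LLM(model="llava-hf/llava-onevision-qwen2-0.5b-ov-hf", max_model_len=8192)

# Decode a local video into a NumPy array of shape (num_frames, height, width, 3).
cap = cv2.VideoCapture("sample.mp4")  # hypothetical path
frames = []
while len(frames) < 16:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
cap.release()
video = np.stack(frames)

# Refer to the HuggingFace repo for the correct prompt format.
prompt = "<|im_start|>user <video>\nDescribe this video.<|im_end|>\n<|im_start|>assistant\n"

outputs = llm.generate({
    "prompt": prompt,
    "multi_modal_data": {"video": video},
})

for o in outputs:
    print(o.outputs[0].text)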

In addition to NumPy arrays, you can also pass 'torch.Tensor' instances, as shown in this example using Qwen2.5-VL:

Code
from transformers import AutoProcessor
from vllm import LLM, SamplingParams
from qwen_vl_utils import process_vision_info

model_path = "Qwen/Qwen2.5-VL-3B-Instruct/"
video_path = "https://content.pexels.com/videos/free-videos.mp4"

llm = LLM(
    model=model_path,
    gpu_memory_utilization=0.8,
    enforce_eager=True,
    limit_mm_per_prompt={"video": 1},
)

sampling_params = SamplingParams(
    max_tokens=1024,
)

video_messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": [
            {"type": "text", "text": "describe this video."},
            {
                "type": "video",
                "video": video_path,
                "total_pixels": 20480 * 28 * 28,
                "min_pixels": 16 * 28 * 28
            }
        ]
    },
]

messages = video_messages
processor = AutoProcessor.from_pretrained(model_path)
prompt = processor.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

image_inputs, video_inputs = process_vision_info(messages)
mm_data = {}
if video_inputs is not None:
    mm_data["video"] = video_inputs

llm_inputs = {
    "prompt": prompt,
    "multi_modal_data": mm_data,
}

outputs = llm.generate([llm_inputs], sampling_params=sampling_params)
for o in outputs:
    generated_text = o.outputs[0].text
    print(generated_text)

Note

'process_vision_info' applies only to Qwen2.5-VL and similar models.

Full example: examples/offline_inference/vision_language.py

Audio Inputs

You can pass a tuple (array, sampling_rate) to the 'audio' field of the multi-modal dictionary.
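
Below is a minimal sketch of this. It assumes librosa is installed, uses a placeholder audio path, and reuses the Ultravox model served later on this page; the '<|audio|>' placeholder and chat-template construction mirror vLLM's audio example, so check the model's HuggingFace repo for the exact format:

Code
import librosa
from transformers import AutoTokenizer
from vllm import LLM

model_name = "fixie-ai/ultravox-v0_5-llama-3_2-1b"
llm = LLM(model=model_name)

# librosa returns the audio as (array, sampling_rate).
audio, sampling_rate = librosa.load("audio.wav", sr=None)  # hypothetical path

# Build the prompt with the model's chat template; '<|audio|>' marks where the audio goes.
tokenizer = AutoTokenizer.from_pretrained(model_name)
messages = [{"role": "user", "content": "<|audio|>\nWhat is in this audio?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = llm.generate({
    "prompt": prompt,
    "multi_modal_data": {"audio": (audio, sampling_rate)},
})

for o in outputs:
    print(o.outputs[0].text)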

Full example: examples/offline_inference/audio_language.py

Embedding Inputs

To input pre-computed embeddings belonging to a data type (i.e. image, video, or audio) directly to the language model, pass a tensor of shape (num_items, feature_size, hidden_size of LM) to the corresponding field of the multi-modal dictionary.

Code
import torch

from vllm import LLM

# Inference with image embeddings as input
llm = LLM(model="llava-hf/llava-1.5-7b-hf")

# Refer to the HuggingFace repo for the correct format to use
prompt = "USER: <image>\nWhat is the content of this image?\nASSISTANT:"

# Embeddings for single image
# torch.Tensor of shape (1, image_feature_size, hidden_size of LM)
image_embeds = torch.load(...)

outputs = llm.generate({
    "prompt": prompt,
    "multi_modal_data": {"image": image_embeds},
})

for o in outputs:
    generated_text = o.outputs[0].text
    print(generated_text)

For Qwen2-VL and MiniCPM-V, we accept additional parameters alongside the embeddings:

Code
import torch

from vllm import LLM

# Construct the prompt based on your model
prompt = ...

# Embeddings for multiple images
# torch.Tensor of shape (num_images, image_feature_size, hidden_size of LM)
image_embeds = torch.load(...)

# Qwen2-VL
llm = LLM("Qwen/Qwen2-VL-2B-Instruct", limit_mm_per_prompt={"image": 4})
mm_data = {
    "image": {
        "image_embeds": image_embeds,
        # image_grid_thw is needed to calculate positional encoding.
        "image_grid_thw": torch.load(...),  # torch.Tensor of shape (1, 3),
    }
}

# MiniCPM-V
llm = LLM("openbmb/MiniCPM-V-2_6", trust_remote_code=True, limit_mm_per_prompt={"image": 4})
mm_data = {
    "image": {
        "image_embeds": image_embeds,
        # image_sizes is needed to calculate details of the sliced image.
        "image_sizes": [image.size for image in images],  # list of image sizes
    }
}

outputs = llm.generate({
    "prompt": prompt,
    "multi_modal_data": mm_data,
})

for o in outputs:
    generated_text = o.outputs[0].text
    print(generated_text)

Online Serving

Our OpenAI-compatible server accepts multi-modal data via the Chat Completions API.

Important

A chat template is required to use the Chat Completions API. For HF format models, the default chat template is defined inside chat_template.json or tokenizer_config.json.

If no default chat template is available, we will first look for a built-in fallback in vllm/transformers_utils/chat_templates/registry.py. If no fallback is available, an error is raised and you have to provide the chat template manually via the --chat-template argument.
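
For example (a hedged one-liner; the template path is a placeholder):

vllm serve microsoft/Phi-3.5-vision-instruct --chat-template ./path/to/chat_template.jinja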

For certain models, we provide alternative chat templates inside examples. For example, VLM2Vec uses examples/template_vlm2vec.jinja, which is different from the default one for Phi-3-Vision.

Image Inputs

Image input is supported according to the OpenAI Vision API. Here is a simple example using Phi-3.5-Vision.

First, launch the OpenAI-compatible server:

vllm serve microsoft/Phi-3.5-vision-instruct --task generate \
  --trust-remote-code --max-model-len 4096 --limit-mm-per-prompt '{"image":2}'

Then, you can use the OpenAI client as follows:

Code
from openai import OpenAI

openai_api_key = "EMPTY"
openai_api_base = "https://:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

# Single-image input inference
image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"

chat_response = client.chat.completions.create(
    model="microsoft/Phi-3.5-vision-instruct",
    messages=[{
        "role": "user",
        "content": [
            # NOTE: The prompt formatting with the image token `<image>` is not needed
            # since the prompt will be processed automatically by the API server.
            {"type": "text", "text": "What’s in this image?"},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }],
)
print("Chat completion output:", chat_response.choices[0].message.content)

# Multi-image input inference
image_url_duck = "https://upload.wikimedia.org/wikipedia/commons/d/da/2015_Kaczka_krzy%C5%BCowka_w_wodzie_%28samiec%29.jpg"
image_url_lion = "https://upload.wikimedia.org/wikipedia/commons/7/77/002_The_lion_king_Snyggve_in_the_Serengeti_National_Park_Photo_by_Giles_Laurent.jpg"

chat_response = client.chat.completions.create(
    model="microsoft/Phi-3.5-vision-instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What are the animals in these images?"},
            {"type": "image_url", "image_url": {"url": image_url_duck}},
            {"type": "image_url", "image_url": {"url": image_url_lion}},
        ],
    }],
)
print("Chat completion output:", chat_response.choices[0].message.content)

Full example: examples/online_serving/openai_chat_completion_client_for_multimodal.py

Tip

vLLM also supports loading from local file paths: you can specify the allowed local media paths via --allowed-local-media-path when launching the API server/engine, and pass the file path as a url in the API request, as sketched below.
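
A hedged sketch of this, reusing the client created above. It assumes the server was launched with --allowed-local-media-path /data/images and that the local file is referenced with a file:// URL (both the directory and the file name are hypothetical):

Code
chat_response = client.chat.completions.create(
    model="microsoft/Phi-3.5-vision-instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this image?"},
            # The path must be inside the directory passed to --allowed-local-media-path.
            {"type": "image_url", "image_url": {"url": "file:///data/images/example.jpg"}},
        ],
    }],
)
print("Chat completion output:", chat_response.choices[0].message.content)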

Tip

There is no need to place image placeholders in the text content of the API request; they are already represented by the image content. In fact, you can place image placeholders in the middle of the text by interleaving text and image content, as in the sketch below.
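
For example, reusing the client and image URLs defined above, interleaved content places each image exactly where it appears in the text (the text segments here are purely illustrative):

Code
chat_response = client.chat.completions.create(
    model="microsoft/Phi-3.5-vision-instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Compare the first image"},
            {"type": "image_url", "image_url": {"url": image_url_duck}},
            {"type": "text", "text": "with the second image."},
            {"type": "image_url", "image_url": {"url": image_url_lion}},
        ],
    }],
)
print("Chat completion output:", chat_response.choices[0].message.content)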

Note

By default, the timeout for fetching images through an HTTP URL is 5 seconds. You can override this by setting the environment variable:

export VLLM_IMAGE_FETCH_TIMEOUT=<timeout>

Video Inputs

Instead of image_url, you can pass a video file via video_url. Here is a simple example using LLaVA-OneVision.

First, launch the OpenAI-compatible server:

vllm serve llava-hf/llava-onevision-qwen2-0.5b-ov-hf --task generate --max-model-len 8192

Then, you can use the OpenAI client as follows:

Code
from openai import OpenAI

openai_api_key = "EMPTY"
openai_api_base = "https://:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

# The model name matches the one passed to `vllm serve`.
model = "llava-hf/llava-onevision-qwen2-0.5b-ov-hf"
video_url = "http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerFun.mp4"

## Use video url in the payload
chat_completion_from_url = client.chat.completions.create(
    messages=[{
        "role":
        "user",
        "content": [
            {
                "type": "text",
                "text": "What's in this video?"
            },
            {
                "type": "video_url",
                "video_url": {
                    "url": video_url
                },
            },
        ],
    }],
    model=model,
    max_completion_tokens=64,
)

result = chat_completion_from_url.choices[0].message.content
print("Chat completion output from image url:", result)

Full example: examples/online_serving/openai_chat_completion_client_for_multimodal.py

Note

By default, the timeout for fetching videos through an HTTP URL is 30 seconds. You can override this by setting the environment variable:

export VLLM_VIDEO_FETCH_TIMEOUT=<timeout>

Audio Inputs

Audio input is supported according to the OpenAI Audio API. Here is a simple example using Ultravox-v0.5-1B.

First, launch the OpenAI-compatible server:

vllm serve fixie-ai/ultravox-v0_5-llama-3_2-1b

Then, you can use the OpenAI client as follows:

Code
import base64
import requests
from openai import OpenAI
from vllm.assets.audio import AudioAsset

def encode_base64_content_from_url(content_url: str) -> str:
    """Encode a content retrieved from a remote url to base64 format."""

    with requests.get(content_url) as response:
        response.raise_for_status()
        result = base64.b64encode(response.content).decode('utf-8')

    return result

openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

# The model name matches the one passed to `vllm serve`.
model = "fixie-ai/ultravox-v0_5-llama-3_2-1b"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

# Any format supported by librosa is supported
audio_url = AudioAsset("winning_call").url
audio_base64 = encode_base64_content_from_url(audio_url)

chat_completion_from_base64 = client.chat.completions.create(
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "What's in this audio?"
            },
            {
                "type": "input_audio",
                "input_audio": {
                    "data": audio_base64,
                    "format": "wav"
                },
            },
        ],
    }],
    model=model,
    max_completion_tokens=64,
)

result = chat_completion_from_base64.choices[0].message.content
print("Chat completion output from input audio:", result)

Alternatively, you can pass audio_url, which is the audio counterpart of image_url for image input:

Code
chat_completion_from_url = client.chat.completions.create(
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "What's in this audio?"
            },
            {
                "type": "audio_url",
                "audio_url": {
                    "url": audio_url
                },
            },
        ],
    }],
    model=model,
    max_completion_tokens=64,
)

result = chat_completion_from_url.choices[0].message.content
print("Chat completion output from audio url:", result)

Full example: examples/online_serving/openai_chat_completion_client_for_multimodal.py

Note

By default, the timeout for fetching audio through an HTTP URL is 10 seconds. You can override this by setting the environment variable:

export VLLM_AUDIO_FETCH_TIMEOUT=<timeout>

Embedding Inputs

To input pre-computed embeddings belonging to a data type (i.e. image, video, or audio) directly to the language model, pass a tensor to the corresponding field of the multi-modal dictionary.

Image Embedding Inputs

For image embeddings, you can pass the base64-encoded tensor to the image_embeds field. The following example demonstrates how to pass image embeddings to the OpenAI server:

Code
import base64
import io

import torch
from openai import OpenAI

image_embedding = torch.load(...)
grid_thw = torch.load(...)  # Required by Qwen/Qwen2-VL-2B-Instruct

buffer = io.BytesIO()
torch.save(image_embedding, buffer)
buffer.seek(0)
binary_data = buffer.read()
base64_image_embedding = base64.b64encode(binary_data).decode('utf-8')

openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    # defaults to os.environ.get("OPENAI_API_KEY")
    api_key=openai_api_key,
    base_url=openai_api_base,
)

# Basic usage - this is equivalent to the LLaVA example for offline inference
model = "llava-hf/llava-1.5-7b-hf"
embeds =  {
    "type": "image_embeds",
    "image_embeds": f"{base64_image_embedding}" 
}

# Pass additional parameters (available to Qwen2-VL and MiniCPM-V)
# `base64_image_grid_thw` and `base64_image_sizes` below are base64-encoded tensors,
# prepared the same way as `base64_image_embedding` above.
model = "Qwen/Qwen2-VL-2B-Instruct"
embeds =  {
    "type": "image_embeds",
    "image_embeds": {
        "image_embeds": f"{base64_image_embedding}" , # Required
        "image_grid_thw": f"{base64_image_grid_thw}"  # Required by Qwen/Qwen2-VL-2B-Instruct
    },
}
model = "openbmb/MiniCPM-V-2_6"
embeds =  {
    "type": "image_embeds",
    "image_embeds": {
        "image_embeds": f"{base64_image_embedding}" , # Required
        "image_sizes": f"{base64_image_sizes}"  # Required by openbmb/MiniCPM-V-2_6
    },
}
chat_completion = client.chat.completions.create(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": [
            {
                "type": "text",
                "text": "What's in this image?",
            },
            embeds,
        ]},
    ],
    model=model,
)

Note

Only one message can contain {"type": "image_embeds"}. If used with a model that requires additional parameters, you must also provide a tensor for each of them, e.g. image_grid_thw, image_sizes, etc.