
Reasoning Outputs

vLLM supports reasoning models such as DeepSeek R1, which are designed to generate outputs containing both reasoning steps and a final conclusion.

Reasoning models return an additional reasoning field in their output that contains the reasoning steps leading to the final conclusion. This field is not present in the output of other models.

Warning

reasoning was previously called reasoning_content. For now, reasoning_content still works, but we recommend migrating to reasoning in case reasoning_content is removed in the future.
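
If your client code still reads reasoning_content, a lookup that prefers the new field keeps it working across both versions. A minimal sketch, where message stands for a chat completion message returned by the client:

Code
# Prefer the new `reasoning` field, falling back to the legacy
# `reasoning_content` field on older vLLM versions.
reasoning = getattr(message, "reasoning", None) or getattr(
    message, "reasoning_content", None
)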

Supported Models

vLLM currently supports the following reasoning models:

| Model Series | Parser Name | Structured Output Support | Tool Calling |
|---|---|---|---|
| DeepSeek R1 series | deepseek_r1 | json, regex | |
| DeepSeek-V3.1 | deepseek_v3 | json, regex | |
| ERNIE-4.5-VL series | ernie45 | json, regex | |
| ERNIE-4.5-21B-A3B-Thinking | ernie45 | json, regex | |
| GLM-4.5 series | glm45 | json, regex | |
| Holo2 series | holo2 | json, regex | |
| Hunyuan A13B series | hunyuan_a13b | json, regex | |
| IBM Granite 3.2 language models | granite | | |
| MiniMax-M2 | minimax_m2_append_think | json, regex | |
| Qwen3 series | qwen3 | json, regex | |
| QwQ-32B | deepseek_r1 | json, regex | |

Note

  • Reasoning for IBM Granite 3.2 and DeepSeek-V3.1 is disabled by default; to enable it, you must also pass thinking=True in chat_template_kwargs.
  • Reasoning for the Qwen3 series is enabled by default; to disable it, you must pass enable_thinking=False in chat_template_kwargs.
  • DeepSeek-V3.1 tool calling is supported in non-thinking mode.
  • Reasoning for Holo2 is enabled by default; to disable it, you must also pass thinking=False in chat_template_kwargs.
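
With the OpenAI Python client, these chat_template_kwargs are passed through extra_body. A minimal sketch, reusing the client, model, and messages set up in the quickstart below, for a model whose reasoning is disabled by default:

Code
# Hedged sketch: enable reasoning for models where it is disabled by
# default (IBM Granite 3.2, DeepSeek-V3.1).
response = client.chat.completions.create(
    model=model,
    messages=messages,
    extra_body={"chat_template_kwargs": {"thinking": True}},
)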

Quickstart

To use a reasoning model, you need to specify the --reasoning-parser flag when making a request to the chat completion endpoint. The --reasoning-parser flag specifies the reasoning parser used to extract the reasoning content from the model output.

vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B \
    --reasoning-parser deepseek_r1

Next, make a request to the model, which should return the reasoning content in the response.

Code
from openai import OpenAI

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id

# Round 1
messages = [{"role": "user", "content": "9.11 and 9.8, which is greater?"}]
# For granite, add: `extra_body={"chat_template_kwargs": {"thinking": True}}`
# For Qwen3 series, if you want to disable thinking in reasoning mode, add:
# extra_body={"chat_template_kwargs": {"enable_thinking": False}}
response = client.chat.completions.create(model=model, messages=messages)

reasoning = response.choices[0].message.reasoning
content = response.choices[0].message.content

print("reasoning:", reasoning)
print("content:", content)

The reasoning field contains the reasoning steps that led to the final conclusion, while the content field contains the final conclusion.

Streaming Chat Completions

Streaming chat completions are also supported for reasoning models. The reasoning field is available in the delta field of the chat completion response chunks.

JSON
{
    "id": "chatcmpl-123",
    "object": "chat.completion.chunk",
    "created": 1694268190,
    "model": "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
    "system_fingerprint": "fp_44709d6fcb",
    "choices": [
        {
            "index": 0,
            "delta": {
                "role": "assistant",
                "reasoning": "is",
            },
            "logprobs": null,
            "finish_reason": null
        }
    ]
}

The OpenAI Python client library does not officially support the reasoning attribute for streaming output, but the client does support extra attributes in the response. You can use hasattr to check whether the reasoning attribute is present in the response. For example:

Code
from openai import OpenAI

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id

messages = [{"role": "user", "content": "9.11 and 9.8, which is greater?"}]
# For granite, add: `extra_body={"chat_template_kwargs": {"thinking": True}}`
# For Qwen3 series, if you want to disable thinking in reasoning mode, add:
# extra_body={"chat_template_kwargs": {"enable_thinking": False}}
stream = client.chat.completions.create(
    model=model,
    messages=messages,
    stream=True,
)

print("client: Start streaming chat completions...")
printed_reasoning = False
printed_content = False

for chunk in stream:
    # Safely extract reasoning and content from delta,
    # defaulting to None if attributes don't exist or are empty strings
    reasoning = (
        getattr(chunk.choices[0].delta, "reasoning", None) or None
    )
    content = getattr(chunk.choices[0].delta, "content", None) or None

    if reasoning is not None:
        if not printed_reasoning:
            printed_reasoning = True
            print("reasoning:", end="", flush=True)
        print(reasoning, end="", flush=True)
    elif content is not None:
        if not printed_content:
            printed_content = True
            print("\ncontent:", end="", flush=True)
        # Extract and print the content
        print(content, end="", flush=True)

Remember to check whether reasoning exists in the response before accessing it. You can check out the example.

Tool Calling

The reasoning content is also available when both tool calling and the reasoning parser are enabled. Additionally, tool calling only parses functions from the content field, not from the reasoning field.

Code
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City and state, e.g., 'San Francisco, CA'"},
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location", "unit"],
            }
        },
    }
]

response = client.chat.completions.create(
    model=client.models.list().data[0].id,
    messages=[{"role": "user", "content": "What's the weather like in San Francisco?"}],
    tools=tools,
    tool_choice="auto",
)

print(response)
tool_call = response.choices[0].message.tool_calls[0].function

print(f"reasoning: {response.choices[0].message.reasoning}")
print(f"Function called: {tool_call.name}")
print(f"Arguments: {tool_call.arguments}")

For more examples, see examples/online_serving/openai_chat_completion_tool_calls_with_reasoning.py.

Limitations

  • Reasoning content is only available in the chat completions endpoint (/v1/chat/completions) of online serving; see the sketch below.
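
For example, the endpoint can be queried over plain HTTP, in which case reasoning appears alongside content in each choice's message. A minimal sketch using the requests library, assuming the quickstart server is running on localhost:8000:

Code
import requests

# Query vLLM's OpenAI-compatible chat completions endpoint directly.
response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
        "messages": [
            {"role": "user", "content": "9.11 and 9.8, which is greater?"}
        ],
    },
)
message = response.json()["choices"][0]["message"]
print("reasoning:", message["reasoning"])
print("content:", message["content"])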

How to Support a New Reasoning Model

You can add a new ReasoningParser similar to vllm/reasoning/deepseek_r1_reasoning_parser.py.

Code
# import the required packages

from collections.abc import Sequence

from vllm.reasoning import ReasoningParser, ReasoningParserManager
from vllm.entrypoints.openai.protocol import (
    ChatCompletionRequest,
    DeltaMessage,
    ResponsesRequest,
)

# define a reasoning parser and register it to vllm
# the name passed to register_lazy_module below can be used
# in --reasoning-parser.
class ExampleParser(ReasoningParser):
    # TokenizerLike is vLLM's tokenizer type; import it from the tokenizer
    # utilities of the vLLM version you are targeting.
    def __init__(self, tokenizer: TokenizerLike):
        super().__init__(tokenizer)

    def extract_reasoning_streaming(
        self,
        previous_text: str,
        current_text: str,
        delta_text: str,
        previous_token_ids: Sequence[int],
        current_token_ids: Sequence[int],
        delta_token_ids: Sequence[int],
    ) -> DeltaMessage | None:
        """
        Instance method that should be implemented for extracting reasoning
        from an incomplete response; for use when handling reasoning calls and
        streaming. Has to be an instance method because it requires state -
        the current tokens/diffs, but also the information about what has
        previously been parsed and extracted (see constructor)
        """

    def extract_reasoning(
        self,
        model_output: str,
        request: ChatCompletionRequest | ResponsesRequest,
    ) -> tuple[str | None, str | None]:
        """
        Extract reasoning content from a complete model-generated string.

        Used for non-streaming responses where we have the entire model response
        available before sending to the client.

        Parameters:
        model_output: str
            The model-generated string to extract reasoning content from.

        request: ChatCompletionRequest
            The request object that was used to generate the model_output.

        Returns:
        tuple[Optional[str], Optional[str]]
            A tuple containing the reasoning content and the content.
        """
# Register the reasoning parser
ReasoningParserManager.register_lazy_module(
    name="example",
    module_path="vllm.reasoning.example_reasoning_parser",
    class_name="ExampleParser",
)
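
As a concrete illustration, for a model that wraps its reasoning in <think>...</think> tags (as the DeepSeek R1 series does), the non-streaming extraction can be a simple split on the end tag. The following is a hedged sketch of such a method, not the actual DeepSeek R1 parser:

Code
    def extract_reasoning(
        self,
        model_output: str,
        request: ChatCompletionRequest | ResponsesRequest,
    ) -> tuple[str | None, str | None]:
        # If the end tag never appears, treat the entire output as reasoning.
        if "</think>" not in model_output:
            return model_output, None
        # Everything before "</think>" is reasoning; the rest is content.
        reasoning, _, content = model_output.partition("</think>")
        reasoning = reasoning.removeprefix("<think>")
        return reasoning, content or None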

Additionally, to enable structured output, you need to create a new Reasoner similar to the one in vllm/reasoning/deepseek_r1_reasoning_parser.py.

Code
@dataclass
class DeepSeekReasoner(Reasoner):
    """
    Reasoner for DeepSeek R series models.
    """
    start_token_id: int
    end_token_id: int

    start_token: str = "<think>"
    end_token: str = "</think>"

    @classmethod
    def from_tokenizer(cls, tokenizer: PreTrainedTokenizer) -> Reasoner:
        return cls(
            start_token_id=tokenizer.encode("<think>", add_special_tokens=False)[0],
            end_token_id=tokenizer.encode("</think>", add_special_tokens=False)[0],
        )

    def is_reasoning_end(self, input_ids: list[int]) -> bool:
        return self.end_token_id in input_ids

    def is_reasoning_end_streaming(self, input_ids: list[int], delta_token_ids: list[int]) -> bool:
        return self.end_token_id in delta_token_ids
    ...

Structured output engines like xgrammar use end_token_id to check whether reasoning content is present in the model output and skip the structured output while it is.
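
Conceptually, a guided-decoding backend gates its grammar constraints on this hook, along the lines of the sketch below. Here grammar, state, and get_allowed_tokens are hypothetical stand-ins for the backend's matcher, not vLLM APIs:

Code
# Hypothetical sketch: only constrain decoding once reasoning has ended.
if reasoner.is_reasoning_end(generated_token_ids):
    # Reasoning is finished: restrict the next token to the grammar.
    allowed_tokens = grammar.get_allowed_tokens(state)
else:
    # Still inside the reasoning section: leave decoding unconstrained.
    allowed_tokens = None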

Finally, you can enable reasoning for the model by using the --reasoning-parser flag.

vllm serve <model_tag> --reasoning-parser example