Production Metrics

vLLM exposes a number of metrics that can be used to monitor the health of the system. These metrics are exposed via the /metrics endpoint on the vLLM OpenAI-compatible API server.

You can start the server using Python, or using Docker:

vllm serve unsloth/Llama-3.2-1B-Instruct
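
If you prefer Docker, a roughly equivalent invocation is sketched below using the vllm/vllm-openai image; adjust the GPU flags, port mapping and Hugging Face cache mount to your environment:

$ docker run --gpus all -p 8000:8000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    vllm/vllm-openai:latest \
    --model unsloth/Llama-3.2-1B-Instruct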

Then query the endpoint to get the latest metrics from the server:

$ curl http://0.0.0.0:8000/metrics

# HELP vllm:iteration_tokens_total Histogram of number of tokens per engine_step.
# TYPE vllm:iteration_tokens_total histogram
vllm:iteration_tokens_total_sum{model_name="unsloth/Llama-3.2-1B-Instruct"} 0.0
vllm:iteration_tokens_total_bucket{le="1.0",model_name="unsloth/Llama-3.2-1B-Instruct"} 3.0
vllm:iteration_tokens_total_bucket{le="8.0",model_name="unsloth/Llama-3.2-1B-Instruct"} 3.0
vllm:iteration_tokens_total_bucket{le="16.0",model_name="unsloth/Llama-3.2-1B-Instruct"} 3.0
vllm:iteration_tokens_total_bucket{le="32.0",model_name="unsloth/Llama-3.2-1B-Instruct"} 3.0
vllm:iteration_tokens_total_bucket{le="64.0",model_name="unsloth/Llama-3.2-1B-Instruct"} 3.0
vllm:iteration_tokens_total_bucket{le="128.0",model_name="unsloth/Llama-3.2-1B-Instruct"} 3.0
vllm:iteration_tokens_total_bucket{le="256.0",model_name="unsloth/Llama-3.2-1B-Instruct"} 3.0
vllm:iteration_tokens_total_bucket{le="512.0",model_name="unsloth/Llama-3.2-1B-Instruct"} 3.0
...
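
The endpoint serves the standard Prometheus text exposition format, so any Prometheus-compatible scraper can consume it. As an illustration (not part of vLLM itself), the sketch below fetches the endpoint with the standard library and prints the two scheduler gauges using the parser that ships with prometheus_client:

# Sketch: assumes the server started above is listening on 0.0.0.0:8000.
from urllib.request import urlopen

from prometheus_client.parser import text_string_to_metric_families

raw = urlopen("http://0.0.0.0:8000/metrics").read().decode("utf-8")
for family in text_string_to_metric_families(raw):
    if family.name in ("vllm:num_requests_running", "vllm:num_requests_waiting"):
        # Each sample carries the metric name, its labels and the current value.
        for sample in family.samples:
            print(sample.name, sample.labels, sample.value)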

The following metrics are exposed:

class Metrics:
    """
    vLLM uses a multiprocessing-based frontend for the OpenAI server.
    This means that we need to run prometheus_client in multiprocessing mode.
    See https://prometheus.github.io/client_python/multiprocess/ for more
    details on limitations.
    """

    labelname_finish_reason = "finished_reason"
    labelname_waiting_lora_adapters = "waiting_lora_adapters"
    labelname_running_lora_adapters = "running_lora_adapters"
    labelname_max_lora = "max_lora"
    _gauge_cls = prometheus_client.Gauge
    _counter_cls = prometheus_client.Counter
    _histogram_cls = prometheus_client.Histogram

    def __init__(self, labelnames: List[str], vllm_config: VllmConfig):
        # Unregister any existing vLLM collectors (for CI/CD)
        self._unregister_vllm_metrics()

        max_model_len = vllm_config.model_config.max_model_len

        # Use this flag to hide metrics that were deprecated in
        # a previous release and which will be removed in a future release
        self.show_hidden_metrics = \
            vllm_config.observability_config.show_hidden_metrics

        # System stats
        #   Scheduler State
        self.gauge_scheduler_running = self._gauge_cls(
            name="vllm:num_requests_running",
            documentation="Number of requests currently running on GPU.",
            labelnames=labelnames,
            multiprocess_mode="sum")
        self.gauge_scheduler_waiting = self._gauge_cls(
            name="vllm:num_requests_waiting",
            documentation="Number of requests waiting to be processed.",
            labelnames=labelnames,
            multiprocess_mode="sum")
        self.gauge_lora_info = self._gauge_cls(
            name="vllm:lora_requests_info",
            documentation="Running stats on lora requests.",
            labelnames=[
                self.labelname_running_lora_adapters,
                self.labelname_max_lora,
                self.labelname_waiting_lora_adapters,
            ],
            multiprocess_mode="livemostrecent",
        )

        # Deprecated in 0.8 - KV cache offloading is not used in V1
        # Hidden in 0.9, due to be removed in 0.10
        if self.show_hidden_metrics:
            self.gauge_scheduler_swapped = self._gauge_cls(
                name="vllm:num_requests_swapped",
                documentation=(
                    "Number of requests swapped to CPU. "
                    "DEPRECATED: KV cache offloading is not used in V1"),
                labelnames=labelnames,
                multiprocess_mode="sum")

        #   KV Cache Usage in %
        self.gauge_gpu_cache_usage = self._gauge_cls(
            name="vllm:gpu_cache_usage_perc",
            documentation="GPU KV-cache usage. 1 means 100 percent usage.",
            labelnames=labelnames,
            multiprocess_mode="sum")

        # Deprecated in 0.8 - KV cache offloading is not used in V1
        # Hidden in 0.9, due to be removed in 0.10
        if self.show_hidden_metrics:
            self.gauge_cpu_cache_usage = self._gauge_cls(
                name="vllm:cpu_cache_usage_perc",
                documentation=(
                    "CPU KV-cache usage. 1 means 100 percent usage. "
                    "DEPRECATED: KV cache offloading is not used in V1"),
                labelnames=labelnames,
                multiprocess_mode="sum")
            self.gauge_cpu_prefix_cache_hit_rate = self._gauge_cls(
                name="vllm:cpu_prefix_cache_hit_rate",
                documentation=(
                    "CPU prefix cache block hit rate. "
                    "DEPRECATED: KV cache offloading is not used in V1"),
                labelnames=labelnames,
                multiprocess_mode="sum")

        # Deprecated in 0.8 - replaced by queries+hits counters in V1
        # Hidden in 0.9, due to be removed in 0.10
        if self.show_hidden_metrics:
            self.gauge_gpu_prefix_cache_hit_rate = self._gauge_cls(
                name="vllm:gpu_prefix_cache_hit_rate",
                documentation=("GPU prefix cache block hit rate. "
                               "DEPRECATED: use vllm:gpu_prefix_cache_queries "
                               "and vllm:gpu_prefix_cache_queries in V1"),
                labelnames=labelnames,
                multiprocess_mode="sum")

        # Iteration stats
        self.counter_num_preemption = self._counter_cls(
            name="vllm:num_preemptions_total",
            documentation="Cumulative number of preemption from the engine.",
            labelnames=labelnames)
        self.counter_prompt_tokens = self._counter_cls(
            name="vllm:prompt_tokens_total",
            documentation="Number of prefill tokens processed.",
            labelnames=labelnames)
        self.counter_generation_tokens = self._counter_cls(
            name="vllm:generation_tokens_total",
            documentation="Number of generation tokens processed.",
            labelnames=labelnames)
        buckets = [1, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8096]
        if not vllm_config.model_config.enforce_eager:
            buckets = vllm_config.compilation_config.\
                cudagraph_capture_sizes.copy()
            buckets.sort()
        self.histogram_iteration_tokens = self._histogram_cls(
            name="vllm:iteration_tokens_total",
            documentation="Histogram of number of tokens per engine_step.",
            labelnames=labelnames,
            buckets=buckets)
        self.histogram_time_to_first_token = self._histogram_cls(
            name="vllm:time_to_first_token_seconds",
            documentation="Histogram of time to first token in seconds.",
            labelnames=labelnames,
            buckets=[
                0.001, 0.005, 0.01, 0.02, 0.04, 0.06, 0.08, 0.1, 0.25, 0.5,
                0.75, 1.0, 2.5, 5.0, 7.5, 10.0, 20.0, 40.0, 80.0, 160.0, 640.0,
                2560.0
            ])
        self.histogram_time_per_output_token = self._histogram_cls(
            name="vllm:time_per_output_token_seconds",
            documentation="Histogram of time per output token in seconds.",
            labelnames=labelnames,
            buckets=[
                0.01, 0.025, 0.05, 0.075, 0.1, 0.15, 0.2, 0.3, 0.4, 0.5, 0.75,
                1.0, 2.5, 5.0, 7.5, 10.0, 20.0, 40.0, 80.0
            ])

        # Request stats
        #   Latency
        request_latency_buckets = [
            0.3, 0.5, 0.8, 1.0, 1.5, 2.0, 2.5, 5.0, 10.0, 15.0, 20.0, 30.0,
            40.0, 50.0, 60.0, 120.0, 240.0, 480.0, 960.0, 1920.0, 7680.0
        ]
        self.histogram_e2e_time_request = self._histogram_cls(
            name="vllm:e2e_request_latency_seconds",
            documentation="Histogram of end to end request latency in seconds.",
            labelnames=labelnames,
            buckets=request_latency_buckets)
        self.histogram_queue_time_request = self._histogram_cls(
            name="vllm:request_queue_time_seconds",
            documentation=
            "Histogram of time spent in WAITING phase for request.",
            labelnames=labelnames,
            buckets=request_latency_buckets)
        self.histogram_inference_time_request = self._histogram_cls(
            name="vllm:request_inference_time_seconds",
            documentation=
            "Histogram of time spent in RUNNING phase for request.",
            labelnames=labelnames,
            buckets=request_latency_buckets)
        self.histogram_prefill_time_request = self._histogram_cls(
            name="vllm:request_prefill_time_seconds",
            documentation=
            "Histogram of time spent in PREFILL phase for request.",
            labelnames=labelnames,
            buckets=request_latency_buckets)
        self.histogram_decode_time_request = self._histogram_cls(
            name="vllm:request_decode_time_seconds",
            documentation=
            "Histogram of time spent in DECODE phase for request.",
            labelnames=labelnames,
            buckets=request_latency_buckets)
        # Deprecated in 0.8 - duplicates vllm:request_queue_time_seconds:
        # Hidden in 0.9, due to be removed in 0.10
        if self.show_hidden_metrics:
            self.histogram_time_in_queue_request = self._histogram_cls(
                name="vllm:time_in_queue_requests",
                documentation=
                ("Histogram of time the request spent in the queue in seconds. "
                 "DEPRECATED: use vllm:request_queue_time_seconds instead."),
                labelnames=labelnames,
                buckets=request_latency_buckets)

        # Deprecated in 0.8 - use prefill/decode/inference time metrics
        # Hidden in 0.9, due to be removed in 0.10
        if self.show_hidden_metrics:
            self.histogram_model_forward_time_request = self._histogram_cls(
                name="vllm:model_forward_time_milliseconds",
                documentation=
                ("Histogram of time spent in the model forward pass in ms. "
                 "DEPRECATED: use prefill/decode/inference time metrics instead"
                 ),
                labelnames=labelnames,
                buckets=build_1_2_3_5_8_buckets(3000))
            self.histogram_model_execute_time_request = self._histogram_cls(
                name="vllm:model_execute_time_milliseconds",
                documentation=
                ("Histogram of time spent in the model execute function in ms."
                 "DEPRECATED: use prefill/decode/inference time metrics instead"
                 ),
                labelnames=labelnames,
                buckets=build_1_2_3_5_8_buckets(3000))

        #   Metadata
        self.histogram_num_prompt_tokens_request = self._histogram_cls(
            name="vllm:request_prompt_tokens",
            documentation="Number of prefill tokens processed.",
            labelnames=labelnames,
            buckets=build_1_2_5_buckets(max_model_len),
        )
        self.histogram_num_generation_tokens_request = \
            self._histogram_cls(
                name="vllm:request_generation_tokens",
                documentation="Number of generation tokens processed.",
                labelnames=labelnames,
                buckets=build_1_2_5_buckets(max_model_len),
            )
        self.histogram_max_num_generation_tokens_request = self._histogram_cls(
            name="vllm:request_max_num_generation_tokens",
            documentation=
            "Histogram of maximum number of requested generation tokens.",
            labelnames=labelnames,
            buckets=build_1_2_5_buckets(max_model_len))
        self.histogram_n_request = self._histogram_cls(
            name="vllm:request_params_n",
            documentation="Histogram of the n request parameter.",
            labelnames=labelnames,
            buckets=[1, 2, 5, 10, 20],
        )
        self.histogram_max_tokens_request = self._histogram_cls(
            name="vllm:request_params_max_tokens",
            documentation="Histogram of the max_tokens request parameter.",
            labelnames=labelnames,
            buckets=build_1_2_5_buckets(max_model_len),
        )
        self.counter_request_success = self._counter_cls(
            name="vllm:request_success_total",
            documentation="Count of successfully processed requests.",
            labelnames=labelnames + [Metrics.labelname_finish_reason])

        # Speculative decoding stats
        self.gauge_spec_decode_draft_acceptance_rate = self._gauge_cls(
            name="vllm:spec_decode_draft_acceptance_rate",
            documentation="Speulative token acceptance rate.",
            labelnames=labelnames,
            multiprocess_mode="sum")
        self.gauge_spec_decode_efficiency = self._gauge_cls(
            name="vllm:spec_decode_efficiency",
            documentation="Speculative decoding system efficiency.",
            labelnames=labelnames,
            multiprocess_mode="sum")
        self.counter_spec_decode_num_accepted_tokens = (self._counter_cls(
            name="vllm:spec_decode_num_accepted_tokens_total",
            documentation="Number of accepted tokens.",
            labelnames=labelnames))
        self.counter_spec_decode_num_draft_tokens = self._counter_cls(
            name="vllm:spec_decode_num_draft_tokens_total",
            documentation="Number of draft tokens.",
            labelnames=labelnames)
        self.counter_spec_decode_num_emitted_tokens = (self._counter_cls(
            name="vllm:spec_decode_num_emitted_tokens_total",
            documentation="Number of emitted tokens.",
            labelnames=labelnames))
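
The bucket helpers referenced above (build_1_2_5_buckets and build_1_2_3_5_8_buckets) are defined elsewhere in the same module and are not shown in this listing. A minimal sketch of the 1-2-5 variant, assuming it expands the mantissas 1, 2 and 5 across successive powers of ten until max_value is exceeded:

from typing import List

def build_1_2_5_buckets(max_value: int) -> List[int]:
    """Return [1, 2, 5, 10, 20, 50, ...], capped at max_value."""
    buckets: List[int] = []
    exponent = 0
    while True:
        for mantissa in (1, 2, 5):
            value = mantissa * 10**exponent
            if value > max_value:
                return buckets
            buckets.append(value)
        exponent += 1

# e.g. build_1_2_5_buckets(4096) ==
#     [1, 2, 5, 10, 20, 50, 100, 200, 500, 1000, 2000, 4000]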


The following metrics are deprecated and due to be removed in a future version:

  • vllm:num_requests_swapped, vllm:cpu_cache_usage_perc and vllm:cpu_prefix_cache_hit_rate, because KV cache offloading is not used in V1.

  • vllm:gpu_prefix_cache_hit_rate, because it has been replaced by the queries+hits counters in V1.

  • vllm:time_in_queue_requests, because it duplicates vllm:request_queue_time_seconds.

  • vllm:model_forward_time_milliseconds and vllm:model_execute_time_milliseconds, because the prefill/decode/inference time metrics should be used instead.

Note: when metrics are deprecated in version X.Y, they are hidden in version X.Y+1, where they can be re-enabled using the --show-hidden-metrics-for-version=X.Y escape hatch, and are then removed in version X.Y+2.
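
For example, to keep the metrics above, which were deprecated in 0.8, visible on a later release while dashboards are being migrated, the escape hatch is passed on the command line (illustrative invocation):

vllm serve unsloth/Llama-3.2-1B-Instruct --show-hidden-metrics-for-version=0.8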