llmcompressor.modeling.qwen3_vl_moe

CalibrateQwen3VLMoeTextSparseMoeBlock

CalibrateQwen3VLMoeTextSparseMoeBlock(
    original: Qwen3VLMoeTextSparseMoeBlock,
    config: Qwen3VLMoeConfig,
    calibrate_all_experts: bool,
)

Bases: MoECalibrationModule

Calibration version of Qwen3VLMoeTextSparseMoeBlock that sends all tokens to all experts.

Source code in llmcompressor/modeling/qwen3_vl_moe.py
def __init__(
    self,
    original: "Qwen3VLMoeTextSparseMoeBlock",
    config: "Qwen3VLMoeConfig",
    calibrate_all_experts: bool,
):
    super().__init__()
    text_config: "Qwen3VLMoeTextConfig" = config.get_text_config()

    self.hidden_size = text_config.hidden_size
    self.num_experts = text_config.num_experts
    self.top_k = original.top_k
    # Note: gate was changed to be a Linear layer in transformers==4.57.0
    # https://github.com/JJJYmmm/transformers/commit/f5dea1c694af8c994c769170813a8702332119ee
    self.gate = original.gate
    self.calibrate_all_experts = calibrate_all_experts
    self.experts = SequentialQwen3VLMoeTextExperts(text_config, original.experts)
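The point of routing all tokens to all experts during calibration is that every expert's weights and activations get observed by the calibration data, even experts the router would rarely select; the module output itself still uses only the router-weighted top-k contributions. The following is a minimal, self-contained sketch of that idea (not the library's implementation; the class name, the use of `nn.Linear` experts, and the forward logic are illustrative assumptions):

```python
import torch
import torch.nn as nn


class CalibrateAllExpertsSketch(nn.Module):
    """Illustrative sketch of the calibrate-all-experts pattern.

    When calibrate_all_experts is True, every expert runs on the full
    token stream (so quantization observers attached to the experts see
    all calibration data), but only router-selected top-k outputs
    contribute to the block output, keeping it numerically identical to
    normal routed inference.
    """

    def __init__(self, hidden_size: int, num_experts: int, top_k: int,
                 calibrate_all_experts: bool = True):
        super().__init__()
        self.top_k = top_k
        self.calibrate_all_experts = calibrate_all_experts
        # Router implemented as a Linear layer, mirroring transformers>=4.57.0.
        self.gate = nn.Linear(hidden_size, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Linear(hidden_size, hidden_size, bias=False)
            for _ in range(num_experts)
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (num_tokens, hidden_size)
        router_probs = self.gate(hidden_states).softmax(dim=-1)
        topk_w, topk_idx = router_probs.topk(self.top_k, dim=-1)
        out = torch.zeros_like(hidden_states)
        for e, expert in enumerate(self.experts):
            selected = (topk_idx == e)           # (num_tokens, top_k)
            mask = selected.any(dim=-1)          # tokens routed to expert e
            w = (topk_w * selected).sum(dim=-1, keepdim=True)
            if self.calibrate_all_experts:
                # Expert sees ALL tokens (observers collect full statistics)...
                expert_out = expert(hidden_states)
                # ...but only routed tokens contribute to the output.
                out[mask] += w[mask] * expert_out[mask]
            elif mask.any():
                # Normal routed path: expert only sees its assigned tokens.
                out[mask] += w[mask] * expert(hidden_states[mask])
        return out
```

Because the non-selected contributions are masked out, the output is the same with calibration routing on or off; only the set of inputs each expert processes (and hence what the observers record) changes.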