vllm_gaudi.extension.ops ¶
MAX_EXPERTS_PER_SLICE module-attribute ¶
MAX_EXPERTS_PER_SLICE = int(
get("MAX_EXPERTS_PER_SLICE", -1)
)
DynamicFusedMOE ¶
Bases: Module
Source code in vllm_gaudi/extension/ops.py
__init__ ¶
forward ¶
Source code in vllm_gaudi/extension/ops.py
MoeFP8Matmul ¶
Bases: Module
Source code in vllm_gaudi/extension/ops.py
__init__ ¶
dequant_block_fp8_weight ¶
dequant_block_fp8_weight(layer: MoeFP8Matmul) -> Tensor
Source code in vllm_gaudi/extension/ops.py
forward ¶
get_dequant_weight ¶
get_dequant_weights_func ¶
set_high_precision ¶
MoeWNA16Matmul ¶
Bases: Module
Matmul wrapper for compressed int4 WNA16 format
Source code in vllm_gaudi/extension/ops.py
VllmMixtureOfExpertsOp ¶
Bases: Module
Source code in vllm_gaudi/extension/ops.py
moe_n_slice instance-attribute ¶
w13_list instance-attribute ¶
w13_list = ModuleList(
    [MoeMatmul() for _ in range(num_total_experts)]
)
w2_list instance-attribute ¶
w2_list = ModuleList(
    [MoeMatmul() for _ in range(num_total_experts)]
)
__init__ ¶
Source code in vllm_gaudi/extension/ops.py
forward ¶
forward(
hidden_states,
expert_routing_table,
router_weights,
permuted_weights=True,
activation="silu",
)
Source code in vllm_gaudi/extension/ops.py
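The signature above is the token-to-expert routing entry point. As a reading aid, here is a minimal plain-PyTorch sketch of the general pattern such a call implements (top-k routing, a SwiGLU-style expert MLP when activation="silu", and router-weighted accumulation); it is not the HPU-optimized implementation, and all shapes and helper names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Illustrative top-k MoE forward (assumed shapes; not the HPU kernel).
tokens, hidden_dim, inter_dim, num_experts, top_k = 4, 8, 16, 3, 2
hidden_states = torch.randn(tokens, hidden_dim)
w13 = [torch.randn(hidden_dim, 2 * inter_dim) for _ in range(num_experts)]  # gate+up
w2 = [torch.randn(inter_dim, hidden_dim) for _ in range(num_experts)]       # down

router_logits = torch.randn(tokens, num_experts)
router_weights, expert_routing_table = torch.topk(
    torch.softmax(router_logits, dim=-1), top_k, dim=-1)

output = torch.zeros_like(hidden_states)
for t in range(tokens):
    for k in range(top_k):
        e = expert_routing_table[t, k].item()
        gate, up = (hidden_states[t] @ w13[e]).chunk(2, dim=-1)
        h = F.silu(gate) * up                          # "silu" (SwiGLU) activation
        output[t] += router_weights[t, k] * (h @ w2[e])
```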
VllmMixtureOfExpertsOpFP8 ¶
Bases: Module
Source code in vllm_gaudi/extension/ops.py
moe_n_slice instance-attribute ¶
w13_list instance-attribute ¶
w13_list = ModuleList(
    [MoeFP8Matmul() for _ in range(num_experts)]
)
w2_list instance-attribute ¶
w2_list = ModuleList(
    [MoeFP8Matmul() for _ in range(num_experts)]
)
__init__ ¶
Source code in vllm_gaudi/extension/ops.py
forward ¶
Source code in vllm_gaudi/extension/ops.py
VllmMixtureOfExpertsOpFP8PerChannel ¶
Bases: Module
Source code in vllm_gaudi/extension/ops.py
w13_list instance-attribute ¶
w13_list = ModuleList(
    [MoeFP8Matmul() for _ in range(num_experts)]
)
w2_list instance-attribute ¶
w2_list = ModuleList(
    [MoeFP8Matmul() for _ in range(num_experts)]
)
__init__ ¶
Source code in vllm_gaudi/extension/ops.py
forward ¶
Source code in vllm_gaudi/extension/ops.py
VllmMixtureOfExpertsOpWNA16 ¶
Bases: Module
Mixture of Experts for compressed int4 WNA16
Source code in vllm_gaudi/extension/ops.py
moe_n_slice instance-attribute ¶
w13_list instance-attribute ¶
w13_list = ModuleList(
    [MoeWNA16Matmul() for _ in range(num_experts)]
)
w2_list instance-attribute ¶
w2_list = ModuleList(
    [MoeWNA16Matmul() for _ in range(num_experts)]
)
__init__ ¶
Source code in vllm_gaudi/extension/ops.py
forward ¶
Source code in vllm_gaudi/extension/ops.py
_flex_prompt_attention ¶
_flex_prompt_attention(
query: Tensor,
key: Tensor,
value: Tensor,
scale: float,
**ignored_args,
) -> Tensor
Source code in vllm_gaudi/extension/ops.py
_fsdpa_prompt_attention ¶
_fsdpa_prompt_attention(
query: Tensor,
key: Tensor,
value: Tensor,
scale: float,
fsdpa_op,
is_causal: bool,
attn_bias: Optional[Tensor] = None,
valid_seq_lengths: Optional[Tensor] = None,
window_size: Optional[int] = None,
**ignored_args,
) -> Tensor
Source code in vllm_gaudi/extension/ops.py
_get_all ¶
_get_context ¶
_include_past ¶
Source code in vllm_gaudi/extension/ops.py
_naive_prompt_attention ¶
_naive_prompt_attention(
query: Tensor,
key: Tensor,
value: Tensor,
scale: float,
attn_bias: Optional[Tensor] = None,
position_bias: Optional[Tensor] = None,
matmul_qk_op=matmul,
softmax_op=softmax,
matmul_av_op=matmul,
**ignored_args,
) -> Tensor
Source code in vllm_gaudi/extension/ops.py
apply_block_fp8_linear_hpu ¶
apply_block_fp8_linear_hpu(
input: Tensor,
layer: Module,
block_size: List[int],
bias: Optional[Tensor] = None,
do_unpad: bool = False,
force_channel_fp8: bool = False,
) -> Tensor
Source code in vllm_gaudi/extension/ops.py
apply_block_fp8_linear_hpu_dequant ¶
apply_block_fp8_linear_hpu_dequant(
input: Tensor,
weight: Tensor,
block_size: List[int],
weight_scale: Tensor,
input_scale: Optional[Tensor] = None,
bias: Optional[Tensor] = None,
original_M: Optional[Tensor] = None,
original_N: Optional[Tensor] = None,
do_unpad: bool = False,
) -> Tensor
Source code in vllm_gaudi/extension/ops.py
apply_fp8_linear_hpu ¶
apply_fp8_linear_hpu(
input: Tensor,
weight: Tensor,
weight_scale: Tensor,
input_scale: Optional[Tensor] = None,
bias: Optional[Tensor] = None,
trans_B: bool = True,
)
Source code in vllm_gaudi/extension/ops.py
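Only the signature is rendered above. For orientation, the reference math a per-tensor scaled FP8 linear typically computes (dequantize both operands with their scales, then a standard matmul) can be sketched as follows; this is an assumption about the intended math, not the HPU kernel, and the int8 stand-in for FP8 storage is purely illustrative.

```python
import torch

# Reference math for a scaled FP8 linear (illustrative, not the HPU kernel):
#   y = (x_q * input_scale) @ (w_q * weight_scale).T + bias
def fp8_linear_reference(x_q, input_scale, w_q, weight_scale, bias=None):
    x = x_q.to(torch.bfloat16) * input_scale
    w = w_q.to(torch.bfloat16) * weight_scale
    y = x @ w.t()                       # trans_B=True: weight stored as (out, in)
    return y if bias is None else y + bias

x_q = torch.randint(-8, 8, (2, 4), dtype=torch.int8)   # stand-in for FP8 values
w_q = torch.randint(-8, 8, (3, 4), dtype=torch.int8)
y = fp8_linear_reference(x_q, torch.tensor(0.5), w_q, torch.tensor(0.25))
```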
b2b_impl ¶
dequant_block_fp8_weight_naive ¶
dequant_block_fp8_weight_naive(
weight,
weight_scale,
block_size,
dtype=bfloat16,
original_M=None,
original_N=None,
do_unpad=False,
)
Source code in vllm_gaudi/extension/ops.py
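The naive block dequantization above takes a block-quantized weight plus a per-block scale grid. A small sketch of what that expansion looks like, assuming an (M, N) weight with an (M // block_m, N // block_n) scale grid; it is a reading aid, not the library's exact code.

```python
import torch

# Illustrative block-wise dequantization (assumed layout, not the library code).
def dequant_block_weight(weight, weight_scale, block_size, dtype=torch.bfloat16):
    block_m, block_n = block_size
    # Expand each per-block scale over its block, then multiply elementwise.
    scale = weight_scale.repeat_interleave(block_m, dim=0)
    scale = scale.repeat_interleave(block_n, dim=1)
    return weight.to(dtype) * scale[: weight.shape[0], : weight.shape[1]].to(dtype)

w_q = torch.randint(-8, 8, (4, 6), dtype=torch.int8)   # stand-in for FP8 data
scale = torch.rand(2, 3)                                # one scale per 2x2 block
w = dequant_block_weight(w_q, scale, block_size=[2, 2])
```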
dispatch_bgmv_embedding ¶
wb_t_all contains all LoRA-B weight matrices stacked into a single tensor along dimension 0, assuming they have the same rank. wb is the transposed and reshaped version of wb_t_all, with shape (num_loras * lora_rank, embedding_dim).
The output of the LoRA-A embedding (tensor x) is repeated max_loras times to match the shape of wb. x is multiplied by a mask to zero out the inputs of inactive LoRA indices. The masked output is multiplied by wb and scaled to produce the final output.
Source code in vllm_gaudi/extension/ops.py
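A compact sketch of the masking scheme described above, with assumed shapes for wb_t_all and an assumed per-token LoRA index; it is meant to make the prose concrete, not to mirror the HPU implementation.

```python
import torch

# Mask-based LoRA-B dispatch (illustrative shapes and index layout).
num_tokens, max_loras, lora_rank, embedding_dim = 4, 3, 2, 8
x = torch.randn(num_tokens, lora_rank)                # LoRA-A embedding output
wb_t_all = torch.randn(max_loras, 1, embedding_dim, lora_rank)
lora_index = torch.tensor([0, 2, 1, 0])               # active LoRA per token
scale = 1.0

# wb: transpose/reshape to (max_loras * lora_rank, embedding_dim).
wb = wb_t_all[:, 0, :, :].transpose(1, 2).reshape(max_loras * lora_rank, embedding_dim)

# Repeat x max_loras times, then zero out slots of inactive LoRAs via a mask.
x_rep = x.repeat(1, max_loras)                        # (num_tokens, max_loras * lora_rank)
mask = torch.zeros(num_tokens, max_loras * lora_rank)
for t, idx in enumerate(lora_index.tolist()):
    mask[t, idx * lora_rank:(idx + 1) * lora_rank] = 1.0

out = (x_rep * mask) @ wb * scale                     # (num_tokens, embedding_dim)
```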
dispatch_bgmv_linear ¶
dispatch_bgmv_linear(
y: Tensor,
x: Tensor,
wa_t_all: Tensor,
wb_t_all: Tensor,
layer_idx: int,
scale: float,
)
wa_t_all and wb_t_all contain all LoRA-A and LoRA-B weight matrices stacked into a single tensor along dimension 0, assuming they have the same rank. wa is the reshaped and transposed version of wa_t_all, with shape (h_in, max_loras * lora_rank), and wb is the transposed and reshaped version of wb_t_all, with shape (max_loras * lora_rank, h_out).
The input x is matrix-multiplied with wa. x is multiplied by a mask to zero out the inputs of inactive LoRA indices. The masked output is matrix-multiplied with wb and scaled to produce the final output.
Source code in vllm_gaudi/extension/ops.py
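The same masking idea, extended with the LoRA-A matmul in front, can be sketched as follows (shapes and the per-token index are again illustrative assumptions).

```python
import torch

# Mask-based LoRA A -> B dispatch following the description above (illustrative).
num_tokens, h_in, h_out, max_loras, lora_rank = 4, 8, 6, 3, 2
x = torch.randn(num_tokens, h_in)
wa_t_all = torch.randn(max_loras, 1, lora_rank, h_in)
wb_t_all = torch.randn(max_loras, 1, h_out, lora_rank)
lora_index = torch.tensor([1, 0, 2, 1])
scale = 0.5

# wa: (h_in, max_loras * lora_rank); wb: (max_loras * lora_rank, h_out).
wa = wa_t_all[:, 0, :, :].permute(2, 0, 1).reshape(h_in, max_loras * lora_rank)
wb = wb_t_all[:, 0, :, :].transpose(1, 2).reshape(max_loras * lora_rank, h_out)

# Mask keeps only the rank slice of the LoRA active for each token.
mask = torch.zeros(num_tokens, max_loras * lora_rank)
for t, idx in enumerate(lora_index.tolist()):
    mask[t, idx * lora_rank:(idx + 1) * lora_rank] = 1.0

y = ((x @ wa) * mask) @ wb * scale                    # (num_tokens, h_out)
```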
dynamic_quant ¶
Source code in vllm_gaudi/extension/ops.py
flat_pa ¶
flat_pa(
query,
key_cache,
value_cache,
block_list,
block_mapping,
block_bias,
block_groups,
block_size,
scale,
matmul_qk_op,
position_bias,
matmul_av_op,
batch2block_matmul_op,
block2batch_matmul_op,
keys_fetch_func,
values_fetch_func,
**ignored_args,
)
Source code in vllm_gaudi/extension/ops.py
flat_pa_mla ¶
flat_pa_mla(
query,
key_cache,
value_cache,
block_list,
block_mapping,
block_bias,
block_groups,
block_size,
scale,
matmul_qk_op,
matmul_av_op,
batch2block_matmul_op,
block2batch_matmul_op,
keys_fetch_func,
values_fetch_func,
kv_lora_rank,
)
Source code in vllm_gaudi/extension/ops.py
fp8_block_linear_postprocess_weights ¶
Source code in vllm_gaudi/extension/ops.py
fp8_block_moe_prepare_weights ¶
Source code in vllm_gaudi/extension/ops.py
fp8_channel_moe_prepare_weights ¶
Source code in vllm_gaudi/extension/ops.py
gaudi_weight_wrapper ¶
Wrapper for Gaudi weight conversion.
Source code in vllm_gaudi/extension/ops.py
get_dequant_weights_func ¶
Source code in vllm_gaudi/extension/ops.py
get_inc_quant_method ¶
grouped_max ¶
Source code in vllm_gaudi/extension/ops.py
matmul_shape ¶
Source code in vllm_gaudi/extension/ops.py
pad_block_fp8_weight_naive ¶
Source code in vllm_gaudi/extension/ops.py
pad_weight ¶
Pads a matrix so that its dimensions become multiples of block_size.
Source code in vllm_gaudi/extension/ops.py
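A minimal sketch of that padding, assuming a 2-D weight and torch.nn.functional.pad semantics; the helper name is hypothetical.

```python
import torch
import torch.nn.functional as F

# Pad a 2-D weight so each dim becomes a multiple of its block size (illustrative).
def pad_to_block(weight, block_size):
    block_m, block_n = block_size
    m, n = weight.shape
    pad_m = (block_m - m % block_m) % block_m
    pad_n = (block_n - n % block_n) % block_n
    # F.pad pads the last dimension first: (left, right, top, bottom).
    return F.pad(weight, (0, pad_n, 0, pad_m)), (m, n)

w = torch.randn(5, 7)
padded, orig_shape = pad_to_block(w, block_size=[4, 4])   # -> shape (8, 8)
```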
per_tensor_dequantize ¶
Source code in vllm_gaudi/extension/ops.py
pipelined_pa ¶
pipelined_pa(
attn,
value,
block_bias,
block_groups,
block_mapping,
batch_size,
matmul_av_op,
batch2block_matmul_op,
block2batch_matmul_op,
)
Source code in vllm_gaudi/extension/ops.py
process_fp8_weight_tensor_strategy ¶
process_fp8_weight_tensor_strategy(
weight: Tensor,
weight_scale: Tensor,
logical_widths: list[int],
input_scale: Tensor | None = None,
) -> tuple[Tensor, Tensor, Tensor | None]
Processes weights for the per-tensor quantization strategy.
Source code in vllm_gaudi/extension/ops.py
prompt_attention ¶
Source code in vllm_gaudi/extension/ops.py
requantize_with_max_scale ¶
requantize_with_max_scale(
weight: Tensor,
weight_scale: Tensor,
logical_widths: list[int],
) -> tuple[Tensor, Tensor]
Source code in vllm_gaudi/extension/ops.py
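No docstring is rendered for this helper; in upstream vLLM, a function of this name re-quantizes weight shards (delimited by logical_widths along dim 0) so that they all share the maximum of their per-shard scales. A hedged sketch of that idea, under the assumption that the HPU version follows the same convention:

```python
import torch

# Sketch: re-quantize per-shard weights so all shards share the max scale
# (illustrative; shard layout and dtypes are assumptions).
def requantize_with_max_scale_ref(weight, weight_scale, logical_widths):
    max_scale = weight_scale.max()
    out = torch.empty_like(weight)
    start = 0
    for idx, width in enumerate(logical_widths):
        end = start + width
        # Dequantize the shard with its own scale, re-quantize with the max scale.
        shard = weight[start:end].to(torch.float32) * weight_scale[idx]
        out[start:end] = (shard / max_scale).to(weight.dtype)
        start = end
    return out, max_scale
```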
scaled_fp8_quant ¶
scaled_fp8_quant(
input: Tensor,
scale: Optional[Tensor] = None,
num_token_padding: Optional[int] = None,
scale_ub: Optional[Tensor] = None,
use_per_token_if_dynamic: bool = False,
) -> Tuple[Tensor, Tensor]
Quantize the input tensor to FP8 and return the quantized tensor and its scale.
This function supports both static and dynamic quantization: if you provide a scale, it uses static scaling; if you omit it, the scale is determined dynamically. The function also allows optional padding of the output tensor for downstream kernels that benefit from it.
Parameters:
input: The input tensor to be quantized to FP8.
scale: Optional scaling factor for the FP8 quantization.
scale_ub: Optional upper bound on the scaling factor in the dynamic per-token case.
num_token_padding: If specified, pad the first dimension of the output to at least this value.
use_per_token_if_dynamic: Whether to use per-tensor or per-token quantization in the dynamic case.
Returns:
Tuple[torch.Tensor, torch.Tensor]: The output tensor in FP8 and the scaling factor.
Source code in vllm_gaudi/extension/ops.py
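A minimal usage sketch on Gaudi hardware, following the signature above (the dtype and shapes are illustrative):

```python
import torch
from vllm_gaudi.extension.ops import scaled_fp8_quant

x = torch.randn(8, 16, dtype=torch.bfloat16)

# Dynamic quantization: no scale given, so it is computed from the input.
x_fp8, scale = scaled_fp8_quant(x)

# Static quantization: reuse a precomputed scale.
x_fp8_static, _ = scaled_fp8_quant(x, scale=scale)
```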
synced_weight_loader ¶
unpad_weight ¶
Removes the padding from a matrix to restore its original shape.
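The counterpart of the padding sketch shown under pad_weight: slicing back to the pre-padding dimensions. The helper name and the assumption that the original dimensions are tracked separately are illustrative.

```python
import torch

# Slice a padded weight back to its original (pre-padding) shape (illustrative).
def unpad_to_original(weight, original_m, original_n):
    return weight[:original_m, :original_n]

restored = unpad_to_original(torch.randn(8, 8), original_m=5, original_n=7)
```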