vllm.transformers_utils.configs.moonvit
MoonViTConfig

  Bases: PretrainedConfig

Source code in vllm/transformers_utils/configs/moonvit.py

 __init__(
    patch_size: int = 14,
    init_pos_emb_height: int = 64,
    init_pos_emb_width: int = 64,
    num_attention_heads: int = 16,
    num_hidden_layers: int = 27,
    hidden_size: int = 1152,
    intermediate_size: int = 4304,
    merge_kernel_size: tuple[int, int] = (2, 2),
    **kwargs,
)
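
A minimal construction sketch, assuming the class documented here is MoonViTConfig and that each constructor argument is stored as a same-named attribute, as is standard for PretrainedConfig subclasses; extra keyword arguments are forwarded to PretrainedConfig via **kwargs.

 from vllm.transformers_utils.configs.moonvit import MoonViTConfig

 # Defaults: 14x14 patches, 27 hidden layers, hidden size 1152,
 # intermediate size 4304, and a 2x2 patch-merging kernel.
 vit_config = MoonViTConfig()

 # Override selected fields for a smaller vision tower; any other
 # keyword arguments pass through **kwargs to PretrainedConfig.
 small_vit_config = MoonViTConfig(
     num_hidden_layers=12,
     hidden_size=768,
     intermediate_size=3072,
 )

 # Attribute names are assumed to mirror the constructor arguments.
 assert vit_config.hidden_size == 1152
 assert small_vit_config.num_hidden_layers == 12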