vllm.model_executor.layers.rotary_embedding.deepseek_scaling_rope
 
  Bases: RotaryEmbedding
RotaryEmbedding extended with the YaRN method.
Credits to Peng et al.: github.com/jquesnelle/yarn
Source code in vllm/model_executor/layers/rotary_embedding/deepseek_scaling_rope.py
 mscale instance-attribute
 mscale = float(
    yarn_get_mscale(scaling_factor, float(mscale))
    / yarn_get_mscale(scaling_factor, float(mscale_all_dim))
    * attn_factor
)
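
The mscale attribute compensates for the attention entropy shift that YaRN introduces at extended context lengths. Below is a minimal sketch of the yarn_get_mscale helper this expression relies on, following the YaRN reference implementation linked above; treat the exact constant and signature as assumptions about vLLM's internal helper rather than a copy of it:

import math

# Sketch of yarn_get_mscale, following the YaRN reference implementation
# (github.com/jquesnelle/yarn); the 0.1 constant comes from that reference.
def yarn_get_mscale(scale: float = 1, mscale: float = 1) -> float:
    if scale <= 1:
        # No attention rescaling when the context is not extended.
        return 1.0
    # Attention temperature grows logarithmically with the scaling factor.
    return 0.1 * mscale * math.log(scale) + 1.0

Note that with mscale_all_dim = 0 the denominator evaluates to 1.0, so the attribute reduces to yarn_get_mscale(scaling_factor, mscale) * attn_factor.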
 
 __init__(
    head_size: int,
    rotary_dim: int,
    max_position_embeddings: int,
    base: float,
    is_neox_style: bool,
    scaling_factor: float,
    dtype: dtype,
    *,
    extrapolation_factor: float = 1,
    attn_factor: float = 1,
    beta_fast: int = 32,
    beta_slow: int = 1,
    mscale: float = 1,
    mscale_all_dim: float = 0,
) -> None
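
For illustration, a hypothetical construction with YaRN-style scaling parameters. The class name DeepseekScalingRotaryEmbedding is assumed to be the class this module exports, and every numeric value below is a placeholder chosen for the example, not a value read from any model's config:

import torch
from vllm.model_executor.layers.rotary_embedding.deepseek_scaling_rope import (
    DeepseekScalingRotaryEmbedding)

# Hypothetical values; real deployments derive these from the model's
# rope_scaling entry in its Hugging Face config.
rope = DeepseekScalingRotaryEmbedding(
    head_size=128,
    rotary_dim=64,
    max_position_embeddings=4096,  # original training context
    base=10000.0,
    is_neox_style=False,
    scaling_factor=40.0,           # extends 4096 positions ~40x
    dtype=torch.bfloat16,
    beta_fast=32,
    beta_slow=1,
    mscale=1.0,
    mscale_all_dim=0.0,
)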
 _compute_cos_sin_cache() -> Tensor
 _compute_inv_freq(scaling_factor: float) -> Tensor
  
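A condensed, paraphrased sketch of what these two helpers compute, based on the YaRN reference implementation rather than vLLM's exact code; the helper names and constants here are assumptions:

import math
import torch

def find_correction_range(beta_fast, beta_slow, dim, base, max_pos):
    # Dimensions that complete many rotations within the trained context
    # keep their original (extrapolated) frequencies; slow-rotating
    # dimensions are interpolated. beta_fast/beta_slow set the crossover.
    def find_dim(num_rotations):
        return (dim * math.log(max_pos / (num_rotations * 2 * math.pi))
                / (2 * math.log(base)))
    low = math.floor(find_dim(beta_fast))
    high = math.ceil(find_dim(beta_slow))
    return max(low, 0), min(high, dim - 1)

def linear_ramp_mask(low, high, dim):
    if low == high:
        high += 0.001  # avoid division by zero
    ramp = (torch.arange(dim, dtype=torch.float32) - low) / (high - low)
    return ramp.clamp(0, 1)

def compute_cos_sin_cache(rotary_dim, base, max_pos, scaling_factor,
                          beta_fast, beta_slow, extrapolation_factor, mscale):
    pos_freqs = base ** (torch.arange(0, rotary_dim, 2, dtype=torch.float32)
                         / rotary_dim)
    inv_freq_extra = 1.0 / pos_freqs                     # original RoPE
    inv_freq_inter = 1.0 / (scaling_factor * pos_freqs)  # position interpolation
    low, high = find_correction_range(beta_fast, beta_slow, rotary_dim,
                                      base, max_pos)
    mask = (1 - linear_ramp_mask(low, high, rotary_dim // 2)) * extrapolation_factor
    inv_freq = inv_freq_inter * (1 - mask) + inv_freq_extra * mask
    # The cache covers the extended context; cos/sin are scaled by mscale.
    t = torch.arange(max_pos * scaling_factor, dtype=torch.float32)
    freqs = torch.einsum("i,j->ij", t, inv_freq)
    return torch.cat((freqs.cos() * mscale, freqs.sin() * mscale), dim=-1)

The blend interpolates the low-frequency dimensions, which would otherwise rotate past their trained range at long positions, while leaving the high-frequency dimensions untouched.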
 forward(
    positions: Tensor,
    query: Tensor,
    key: Optional[Tensor] = None,
    offsets: Optional[Tensor] = None,
) -> tuple[Tensor, Optional[Tensor]]
PyTorch-native implementation equivalent to forward().
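
A simplified, paraphrased sketch of the native forward pass, assuming is_neox_style=True for brevity (the real method also handles the GPT-J-style interleaved layout); names and shapes here are illustrative:

import torch
from typing import Optional

def forward_sketch(
    cos_sin_cache: torch.Tensor,   # [max_pos, rotary_dim]: cos half ++ sin half
    head_size: int,
    rotary_dim: int,
    positions: torch.Tensor,       # [num_tokens]
    query: torch.Tensor,           # [num_tokens, num_heads * head_size]
    key: Optional[torch.Tensor] = None,
    offsets: Optional[torch.Tensor] = None,
):
    # Look up the cached cos/sin rows for each (possibly offset) position.
    idx = positions + offsets if offsets is not None else positions
    cos, sin = cos_sin_cache[idx].chunk(2, dim=-1)  # each [num_tokens, rotary_dim // 2]
    # Expand to rotary_dim and broadcast across heads (neox-style pairing).
    cos = cos.repeat(1, 2).unsqueeze(-2)            # [num_tokens, 1, rotary_dim]
    sin = sin.repeat(1, 2).unsqueeze(-2)

    def rotate_neox(x: torch.Tensor) -> torch.Tensor:
        x1, x2 = x.chunk(2, dim=-1)
        return torch.cat((-x2, x1), dim=-1)

    def apply(x: torch.Tensor) -> torch.Tensor:
        x = x.view(x.shape[0], -1, head_size)
        # Rotate only the first rotary_dim features of each head.
        rot, rest = x[..., :rotary_dim], x[..., rotary_dim:]
        rot = rot * cos + rotate_neox(rot) * sin
        return torch.cat((rot, rest), dim=-1).flatten(-2)

    return apply(query), apply(key) if key is not None else None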