* R, K, and V are generated by linear transforms of the input, and W is a parameter. The idea of RWKV is to decompose attention into R(target) * W(src, target) * K(src). Hence we call R the "receptance", and the sigmoid keeps it in the 0~1 range (a rough code sketch of this decomposition is given after the Time-mix notes below).
* The Time-mix is similar to AFT (https://arxiv.org/abs/2105.14103). There are two differences.
(3) You don't need LayerNorm for the Time-mix. In fact, the model converges faster when LayerNorm is removed.
Moreover, we multiply the final output of the Time-mix layer by γ(t). The α, β, γ factors compensate for the effective context size being smaller when t is small.
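Below is a minimal sketch of the Time-mix idea in PyTorch, written only from the description above. The class and parameter names are made up for illustration, and the exp(K) term, the AFT-style normalization, and the single-head W with per-position α(u), β(t), γ(t) scales are assumptions, not the exact implementation.

```python
import torch
import torch.nn as nn

class TimeMixSketch(nn.Module):
    """Sketch: out(t) = sigmoid(R_t) * sum_{u<=t} W[t,u] * K_u * V_u,
    with per-position alpha/beta factors on W and gamma on the output."""
    def __init__(self, d_model, ctx_len):
        super().__init__()
        self.receptance = nn.Linear(d_model, d_model, bias=False)  # -> R
        self.key        = nn.Linear(d_model, d_model, bias=False)  # -> K
        self.value      = nn.Linear(d_model, d_model, bias=False)  # -> V
        self.W     = nn.Parameter(torch.zeros(ctx_len, ctx_len))   # W[t, u]
        self.alpha = nn.Parameter(torch.ones(ctx_len))             # per source position u
        self.beta  = nn.Parameter(torch.ones(ctx_len))             # per target position t
        self.gamma = nn.Parameter(torch.ones(ctx_len, 1))          # scales the final output

    def forward(self, x):                       # x: (batch, T, d_model)
        B, T, C = x.shape
        R = torch.sigmoid(self.receptance(x))   # receptance, in 0~1
        K = torch.exp(self.key(x))              # exp(K) is an AFT-style assumption
        V = self.value(x)
        # causal W: only sources u <= t contribute, scaled by alpha(u) * beta(t)
        W = self.W[:T, :T].tril() * self.alpha[:T].view(1, T) * self.beta[:T].view(T, 1)
        num = torch.einsum('tu,buc,buc->btc', W, K, V)  # sum_u W[t,u] * K_u * V_u
        den = torch.einsum('tu,buc->btc', W, K)         # assumed AFT-style normalization
        out = R * num / (den + 1e-8)
        return out * self.gamma[:T]             # multiply final Time-mix output by gamma(t)
```

The `tril()` on W is what enforces causality here: only positions u ≤ t contribute to the output at t, which is also why the effective context shrinks for small t and the α, β, γ factors are useful.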
* The Channel-mix is similar to GeGLU (https://arxiv.org/abs/2002.05202) with an extra R factor.
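A similarly hedged sketch of the Channel-mix: a GeGLU-style feed-forward block gated by an extra sigmoid(R) factor. The hidden size, the GELU activation, and all names here are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelMixSketch(nn.Module):
    """Sketch: GeGLU-style FFN with an extra sigmoid(R) gate on the output."""
    def __init__(self, d_model, d_hidden):
        super().__init__()
        self.receptance = nn.Linear(d_model, d_model, bias=False)   # -> R
        self.key        = nn.Linear(d_model, d_hidden, bias=False)  # -> K
        self.value      = nn.Linear(d_model, d_hidden, bias=False)  # -> V
        self.out        = nn.Linear(d_hidden, d_model, bias=False)

    def forward(self, x):                        # x: (batch, T, d_model)
        # GeGLU part: GELU(K) gates V, then project back to d_model
        gated = F.gelu(self.key(x)) * self.value(x)
        # extra R factor: sigmoid keeps the gate in the 0~1 range
        return torch.sigmoid(self.receptance(x)) * self.out(gated)
```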