Write out the formulas for "token at pos 2" and "token at pos 3" and you will get the idea.
kv / k is the memory mechanism: a token with a high k can be remembered for a long duration if W is close to 1 in that channel.
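Here is a minimal, single-channel sketch of that recurrence with toy numbers of my own (the real layer works on whole channels at once and has extra details around how k and W are produced; see the code in this repo):

```python
# Toy, single-channel sketch of the kv / k recurrence described above.
# W, k, v values here are made-up numbers, not from the real model.
W = 0.9                      # per-channel time-decay; close to 1 = long memory
k = [2.0, 0.1, 0.1]          # key strength of the tokens at pos 1, 2, 3
v = [5.0, 1.0, 1.0]          # value of the tokens at pos 1, 2, 3

kv_sum, k_sum = 0.0, 0.0
for pos, (kt, vt) in enumerate(zip(k, v), start=1):
    kv_sum = W * kv_sum + kt * vt    # at pos 3: W*W*k1*v1 + W*k2*v2 + k3*v3
    k_sum  = W * k_sum  + kt         # at pos 3: W*W*k1    + W*k2    + k3
    print(f"pos {pos}: kv/k = {kv_sum / k_sum:.4f}")
```

With these toy numbers the pos-1 token has a much larger k, so its value still dominates the output two steps later; with W closer to 0 it would fade almost immediately.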
RWKV v2 is parallelizable because the time-decay of each channel is data-independent (and trainable). For example, in a usual RNN you can adjust the time-decay of a channel from, say, 0.8 to 0.5, while in RWKV v2 you simply move the information from a 0.8-channel to a 0.5-channel to achieve the same effect.
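Concretely, because the decay between positions s and t is just W^(t-s), which is known before any data arrives, the same kv / k sums can be computed either step by step or all at once with a masked matrix of decay powers. A toy PyTorch sketch of that equivalence (shapes, names, and random values are mine, not the repo's CUDA kernel):

```python
import torch

T, C = 5, 4                                   # toy sequence length and channels
W = torch.rand(C) * 0.5 + 0.5                 # data-independent per-channel decays
k = torch.rand(T, C)
v = torch.rand(T, C)

# Sequential (RNN-style) form of the kv / k recurrence.
kv_sum = torch.zeros(C)
k_sum = torch.zeros(C)
seq_out = []
for t in range(T):
    kv_sum = W * kv_sum + k[t] * v[t]
    k_sum = W * k_sum + k[t]
    seq_out.append(kv_sum / k_sum)
seq_out = torch.stack(seq_out)

# Parallel form: decay[t, s, c] = W[c]**(t - s) for s <= t, else 0.
idx = torch.arange(T)
steps = (idx[:, None] - idx[None, :]).clamp(min=0).float()   # (T, T) exponents
causal = (idx[:, None] >= idx[None, :]).float()              # (T, T) causal mask
decay = W.pow(steps[:, :, None]) * causal[:, :, None]        # (T, T, C)

par_num = torch.einsum("tsc,sc,sc->tc", decay, k, v)         # sum_s W^(t-s) k_s v_s
par_den = torch.einsum("tsc,sc->tc", decay, k)               # sum_s W^(t-s) k_s
par_out = par_num / par_den

print(torch.allclose(seq_out, par_out, atol=1e-5))           # True
```

The matrix form is what lets training run GPT-style over the whole sequence in parallel, while the loop form is the cheap RNN mode for inference.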
It's also using my SmallInitEmb trick https://github.com/BlinkDL/SmallInitEmb (applicable to all transformers), and a custom CUDA kernel https://github.com/BlinkDL/RWKV-CUDA .
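As I understand the linked SmallInitEmb repo, the trick is to initialize the token embedding with very small values and put a LayerNorm right after it, so the embedding can move away from its (nearly zero) starting point quickly. A hedged sketch; the init scale here is illustrative, not necessarily the value used in this repo:

```python
import torch.nn as nn

class SmallInitEmb(nn.Module):
    """Tiny-init embedding followed by LayerNorm (my reading of SmallInitEmb)."""
    def __init__(self, vocab_size: int, d_model: int):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d_model)
        nn.init.uniform_(self.emb.weight, a=-1e-4, b=1e-4)  # tiny init (scale chosen for illustration)
        self.ln = nn.LayerNorm(d_model)                      # LayerNorm right after the embedding

    def forward(self, idx):
        return self.ln(self.emb(idx))
```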