@@ -65,7 +65,7 @@ And it's also using a number of my tricks, such as:
* My CUDA kernel: https://github.com/BlinkDL/RWKV-CUDA to speed up training.
-### The pseudocode (execution from top to bottom):
+## The pseudocode (execution from top to bottom):

@@ -79,9 +79,7 @@ kv / k is the memory mechanism. The token with high k can be remembered for a lo
**RWKV v2 is parallelizable because the time-decay of each channel is data-independent (and trainable)**. For example, in a usual RNN you can adjust the time-decay of a channel from, say, 0.8 to 0.5 (these are called "gates"), while in RWKV v2 you simply move the information from a W-0.8-channel to a W-0.5-channel to achieve the same effect.
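
To make the claim concrete, here is a minimal PyTorch sketch (not the code in this repo, and not the CUDA kernel above) of a simplified `kv / k` state with a per-channel decay `w`. The function names and the exact recurrence (`num <- exp(-w) * num + exp(k) * v`, `den <- exp(-w) * den + exp(k)`) are illustrative assumptions; the point is only that, because `w` never depends on the input, the same result can be computed either step by step (RNN-style) or for all time steps at once.

```python
import torch

def wkv_sequential(k, v, w):
    """Recurrent form: one step per token, like a classic RNN."""
    T, C = k.shape
    num = torch.zeros(C)           # running sum of exp(k) * v   ("kv")
    den = torch.zeros(C)           # running sum of exp(k)       ("k")
    decay = torch.exp(-w)          # per-channel decay, data-independent
    out = torch.empty(T, C)
    for t in range(T):
        num = num * decay + torch.exp(k[t]) * v[t]
        den = den * decay + torch.exp(k[t])
        out[t] = num / den         # "kv / k" memory read-out
    return out

def wkv_parallel(k, v, w):
    """Parallel form: since w does not depend on the data, exp(-(t-i)*w) is a
    fixed causal T x T weight per channel, so all time steps are computed at
    once with matrix contractions."""
    T, C = k.shape
    t = torch.arange(T)
    dist = (t[:, None] - t[None, :]).clamp(min=0)     # time offsets t - i
    causal = (t[:, None] >= t[None, :]).float()       # keep only i <= t
    decay = torch.exp(-dist[:, :, None] * w) * causal[:, :, None]   # (T, T, C)
    ek = torch.exp(k)
    num = torch.einsum('tic,ic->tc', decay, ek * v)
    den = torch.einsum('tic,ic->tc', decay, ek)
    return num / den

T, C = 8, 4
k, v = torch.randn(T, C), torch.randn(T, C)
w = torch.rand(C)                  # trainable, positive decay rates in the real model
assert torch.allclose(wkv_sequential(k, v, w), wkv_parallel(k, v, w), atol=1e-5)
```

The parallel form is the sense in which RWKV v2 can be trained like a transformer, while the recurrent form is what you would run token by token at inference.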