RWKV v2 is an RNN that can also be directly trained like a GPT transformer (in parallel over the whole sequence).
You only need x_t, a_t, b_t at position t to compute the vectors for position t+1.
Hence autoregressive generation can be 100x faster than GPT and 100x more VRAM-friendly: each new token only needs this fixed-size state, not attention over the whole growing context.
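Below is a minimal NumPy sketch of one RNN-mode step, assuming the simplified recurrence described here; the function and weight names (`rwkv_rnn_step`, `Wk`, `Wv`, `Wr`) are illustrative, and the real model additionally has time-mixing with the previous token, channel-mixing (FFN), LayerNorm and multiple layers.

```python
import numpy as np

def rwkv_rnn_step(x_t, a_t, b_t, W, Wk, Wv, Wr):
    """One RNN-mode step: only (x_t, a_t, b_t) are carried to position t+1."""
    k = np.exp(Wk @ x_t)                      # positive per-channel key weight
    v = Wv @ x_t                              # value
    r = 1.0 / (1.0 + np.exp(-(Wr @ x_t)))     # sigmoid "receptance" gate

    out = r * (a_t + k * v) / (b_t + k)       # W-decayed weighted average of past values

    a_next = W * (a_t + k * v)                # "kv" accumulator
    b_next = W * (b_t + k)                    # "k" accumulator
    return out, a_next, b_next

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 8
    W = np.full(d, 0.99)                      # per-channel decay; close to 1 => long memory
    Wk, Wv, Wr = (0.1 * rng.normal(size=(d, d)) for _ in range(3))
    a, b = np.zeros(d), np.zeros(d)
    for x_t in rng.normal(size=(5, d)):       # 5 dummy "token embeddings"
        out, a, b = rwkv_rnn_step(x_t, a, b, W, Wk, Wv, Wr)
    # per-token cost is O(d^2) compute and O(d) state, independent of context length
```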
See the release for a **27M-param model trained on enwik8 reaching 0.72 BPC (dev)**.
Write out the formulas for "token at pos 2" and "token at pos 3" and you will get the idea.
kv / k is the memory mechanism: a token with a high k can be remembered for a long duration, as long as W is close to 1 in that channel.
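As a quick sanity check of both points, here is the unroll under the same simplified recurrence as the sketch above (again illustrative, ignoring the extra weighting the real code gives to the current token): at pos 3 the output is an exp(k)-weighted, W-decayed average of the values, so a channel with W close to 1 barely forgets early tokens.

```python
import numpy as np

rng = np.random.default_rng(0)
T, C = 3, 4                                   # 3 positions, 4 channels
k = np.exp(rng.normal(size=(T, C)))           # exp(K_t): positive key weights
v = rng.normal(size=(T, C))                   # V_t
W = np.full(C, 0.99)                          # decay close to 1 => long memory

# recurrent form: carry only a (the "kv" accumulator) and b (the "k" accumulator)
a, b = np.zeros(C), np.zeros(C)
outs = []
for t in range(T):
    outs.append((a + k[t] * v[t]) / (b + k[t]))
    a = W * (a + k[t] * v[t])
    b = W * (b + k[t])

# written out for "token at pos 3": numerator is kv, denominator is k
num = W**2 * k[0] * v[0] + W * k[1] * v[1] + k[2] * v[2]
den = W**2 * k[0] + W * k[1] + k[2]
assert np.allclose(outs[2], num / den)
# "token at pos 2" is the same with one fewer term:
assert np.allclose(outs[1], (W * k[0] * v[0] + k[1] * v[1]) / (W * k[0] + k[1]))
# with W ~ 1, the k[0]*v[0] term barely decays, so a token 1 with large k stays remembered
```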
It also uses my SmallInitEmb trick https://github.com/BlinkDL/SmallInitEmb (applicable to all transformers).
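For reference, a PyTorch sketch of that trick as I read it from the linked repo (the 1e-4 scale and the exact placement are my assumptions, not a verbatim copy of the reference code): initialize the embedding matrix with tiny values and put a LayerNorm right after it, so early in training the embeddings can quickly move away from their near-zero start.

```python
import torch.nn as nn

class SmallInitEmb(nn.Module):
    """Embedding with tiny init + LayerNorm right after it (SmallInitEmb sketch)."""
    def __init__(self, vocab_size: int, d_model: int):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d_model)
        nn.init.uniform_(self.emb.weight, a=-1e-4, b=1e-4)   # tiny embedding init
        self.ln = nn.LayerNorm(d_model)                       # LN immediately after emb

    def forward(self, idx):                                    # idx: LongTensor of token ids
        return self.ln(self.emb(idx))
```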