diff --git a/README.md b/README.md
index 19d50e7..9788634 100644
--- a/README.md
+++ b/README.md
@@ -2,11 +2,7 @@
 
 ## RWKV v2: RNN with Transformer Performance
 
-RWKV v2 is a RNN which can also be directly trained like a GPT transformer.
-
-You only need x_t, a_t, b_t of position t to compute the vectors for position t+1.
-
-Hence it can be 100x faster than GPT, and 100x more VRAM friendly.
+RWKV v2 is a RNN which can also be directly trained like a GPT transformer. You only need x_t, a_t, b_t of position t to compute the vectors for position t+1. Hence it can be 100x faster than GPT, and 100x more VRAM friendly.
 
 See the release for a **27M params model on enwik8 with 0.72 BPC(dev)**.
 
@@ -20,6 +16,8 @@ Write out the formulas for "token at pos 2" and "token at pos 3" and you will ge
 
 kv / k is the memory mechanism. The token with high k can be remembered for a long duration, if W is close to 1 in the channel.
 
+It's also using my SmallInitEmb trick https://github.com/BlinkDL/SmallInitEmb (applicable to all transformers).
+
 The pseudocode (execution from top to bottom):
 
 ![RWKV-v2-RNN](RWKV-v2-RNN.png)
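To make the "only x_t, a_t, b_t of position t are needed" claim concrete, here is a minimal NumPy sketch of a recurrence of this form. It is an assumption-laden illustration, not the repository's code: the function name `rwkv_v2_rnn_step` and the parameters `W`, `K`, `V`, `R` are made up for this example, and the exact formulation (time-mixing, bonus for the current token, exponent clamping) lives in the pseudocode image referenced in the diff.

```python
import numpy as np

def rwkv_v2_rnn_step(x_t, a_t, b_t, W, K, V, R):
    """One illustrative recurrent step in the style described above (a sketch, not the real code).

    a_t accumulates time-decayed k*v and b_t accumulates time-decayed k, so the
    output uses the "kv / k" ratio the README describes. Only x_t, a_t, b_t are
    needed to produce the output and the state for position t+1.
    """
    k = np.exp(K @ x_t)                    # positive key strength per channel
    v = V @ x_t                            # value per channel
    r = 1.0 / (1.0 + np.exp(-(R @ x_t)))   # sigmoid "receptance" gate

    y_t = r * (a_t + k * v) / (b_t + k)    # kv / k: weighted average of past values

    # W is a per-channel decay in (0, 1); channels where W is close to 1
    # let tokens with high k be remembered for a long duration.
    a_next = W * (a_t + k * v)
    b_next = W * (b_t + k)
    return y_t, a_next, b_next

# Toy usage: run a few tokens through a single layer of width 8.
d = 8
rng = np.random.default_rng(0)
K, V, R = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
W = np.full(d, 0.97)                       # decay close to 1 -> long memory
a, b = np.zeros(d), np.zeros(d)
for x in rng.standard_normal((5, d)):
    y, a, b = rwkv_v2_rnn_step(x, a, b, W, K, V, R)
```

Because the state carried between tokens is just the pair (a, b) per layer, inference cost and memory per token are constant in sequence length, which is the source of the speed and VRAM claims in the diff.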