Update README.md

PENG Bo committed ecaf1f98aa (parent e2f3465fe6) to main, 4 years ago

## RWKV v2: RNN with Transformer Performance
RWKV v2 is an RNN which can also be directly trained like a GPT transformer. You only need x_t, a_t, b_t of position t to compute the vectors for position t+1. Hence it can be 100x faster than GPT, and 100x more VRAM-friendly.
See the release for a **27M-param model on enwik8 with 0.72 BPC (dev)**.
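A minimal NumPy sketch of that constant-state step, assuming a per-channel decay vector W and hypothetical projection matrices K and V (names mine, not necessarily the repo's): the pair (a_t, b_t) plus the current input x_t is everything carried from position t to t+1, so inference needs O(1) state instead of a growing attention cache.

```python
import numpy as np

def rwkv_v2_step(x_t, a_t, b_t, W, K, V):
    """One hypothetical RNN-mode step: advance the state from t to t+1.

    x_t      : (d,)   input vector at position t
    a_t, b_t : (d,)   running numerator / denominator state
    W        : (d,)   per-channel decay, 0 < W < 1
    K, V     : (d, d) hypothetical projection matrices
    """
    k = np.exp(K @ x_t)           # positive "key" weight for this token
    v = V @ x_t                   # "value" carried by this token
    out = a_t / (b_t + 1e-9)      # kv / k: weighted average of past values
    # Only x_t, a_t, b_t are needed to produce the state for position t+1:
    a_next = W * a_t + k * v      # decayed running sum of k * v
    b_next = W * b_t + k          # decayed running sum of k
    return out, a_next, b_next
```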
Write out the formulas for "token at pos 2" and "token at pos 3" and you will get the idea.
kv / k is the memory mechanism. A token with high k can be remembered for a long duration if W is close to 1 in the channel.
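A toy single-channel trace of that memory mechanism (the numbers are illustrative, not from the model): one token writes with a high k, the following tokens write with low k, and with W close to 1 its value still accounts for most of the kv / k average dozens of positions later.

```python
import numpy as np

W = 0.99                              # decay close to 1 -> long memory in this channel
a = b = 0.0
# step 0: one token with high k writes the value v = 1.0
k_hi, v_hi = np.exp(4.0), 1.0
a, b = W * a + k_hi * v_hi, W * b + k_hi
# steps 1..49: ordinary tokens with low k write the value v = 0.0
for t in range(1, 50):
    k_lo, v_lo = np.exp(0.0), 0.0
    a, b = W * a + k_lo * v_lo, W * b + k_lo
    if t % 10 == 0:
        print(t, round(a / b, 3))     # the high-k token's value fades only slowly
```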
It also uses my SmallInitEmb trick https://github.com/BlinkDL/SmallInitEmb (applicable to all transformers).
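A minimal PyTorch sketch of the SmallInitEmb idea as I understand it from that repo (see it for the real implementation): initialize the embedding matrix to tiny values and put a LayerNorm immediately after the embedding, so early training can quickly rotate the embeddings into useful directions.

```python
import torch.nn as nn

class SmallInitEmb(nn.Module):
    """Sketch of the SmallInitEmb trick: tiny embedding init + LayerNorm after it."""
    def __init__(self, vocab_size: int, d_model: int):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d_model)
        nn.init.uniform_(self.emb.weight, a=-1e-4, b=1e-4)  # tiny initial values
        self.ln = nn.LayerNorm(d_model)  # normalizes the near-zero embeddings

    def forward(self, idx):
        return self.ln(self.emb(idx))
```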
The pseudocode (execution from top to bottom):
![RWKV-v2-RNN](RWKV-v2-RNN.png)
