diff --git a/README.md b/README.md
index 3db3029..f157b14 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
 ## RWKV v2: RNN with Transformer Performance
 
-RWKV v2 is a RNN which can also be directly trained like a GPT transformer. You only need x_t, a_t, b_t of position t to compute the vectors for position t+1. Hence it can be 100x faster than GPT, and 100x more VRAM friendly.
+RWKV v2 is a RNN which can also be directly trained like a GPT transformer. You only need x_t, a_t, b_t of position t to compute the vectors for position t+1. Hence it can be 100x faster than GPT, and 100x more VRAM friendly, and you get a free sentence embedding.
 
 See the release for a **27M params model on enwik8 with 0.72 BPC(dev)**.
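The constant-state recurrence described in the changed paragraph can be sketched as follows. This is a minimal illustration, not the repository's actual implementation: the decay factor `w`, the key/value pair `k_t`/`v_t`, and the function name `rwkv_like_step` are all assumptions chosen to show the idea that the state carried from position t to t+1 is just the pair (a_t, b_t).

```python
import numpy as np

def rwkv_like_step(a_prev, b_prev, k_t, v_t, w=0.9):
    """One recurrent step of an RWKV-style layer (illustrative sketch only).

    a_prev: decayed weighted sum of past values (the "a_t" state)
    b_prev: decayed sum of past weights       (the "b_t" state)
    w:      per-step decay factor (hypothetical fixed scalar here)
    """
    weight = np.exp(k_t)               # positive weight for this position
    a_t = w * a_prev + weight * v_t    # update numerator state
    b_t = w * b_prev + weight          # update denominator state
    out_t = a_t / b_t                  # normalized output at position t
    return out_t, a_t, b_t

# Process a sequence one token at a time: memory is O(1) in sequence
# length, since only (a, b) is carried forward -- no growing KV cache.
rng = np.random.default_rng(0)
a, b = np.zeros(4), np.zeros(4)
for k_t, v_t in zip(rng.standard_normal((10, 4)), rng.standard_normal((10, 4))):
    out, a, b = rwkv_like_step(a, b, k_t, v_t)
```

Because the final `(a, b)` summarizes the whole prefix in a fixed-size vector pair, it also hints at why a "free sentence embedding" falls out of this formulation.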