Update README.md

Branch: main
Author: PENG Bo (committed via GitHub, 3 years ago)
Parent: 6fe2f798a1
Commit: a6a14f1bf9

@@ -1,8 +1,8 @@
 # The RWKV Language Model (and my tricks for LMs)
-## RWKV: RNN with Transformer-level Performance
+## RWKV: RNN with Transformer-level LLM Performance
-RWKV is a RNN with Transformer-level performance, which can also be directly trained like a GPT transformer (parallelizable). And it's attention-free. You only need the hidden state at position t to compute the state at position t+1. You can use the "GPT" mode to quickly computer the hidden state for the "RNN" mode.
+RWKV is an RNN with Transformer-level LLM performance, which can also be directly trained like a GPT transformer (parallelizable). And it's attention-free. You only need the hidden state at position t to compute the state at position t+1. You can use the "GPT" mode to quickly compute the hidden state for the "RNN" mode.
 So it's combining the best of RNN and transformer - **great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding** (using the final hidden state).
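The RNN-mode property described above (only the state at position t is needed to compute position t+1, and the final state serves as a free sentence embedding) can be sketched as a toy recurrence. This is an illustrative sketch only: RWKV's real time-mixing uses learned decays and token shift (the WKV mechanism), and all names here (`step`, `run_rnn_mode`) are hypothetical, not from the repo.

```python
# Toy stand-in for RWKV's recurrent (RNN) inference mode.
# A simple exponential-decay recurrence replaces the real WKV math.

def step(state, token_embedding, decay=0.9):
    """Advance the state by one token.

    Only the state at position t is needed to produce the state
    at position t+1 -- no attention over past tokens.
    """
    return [decay * s + x for s, x in zip(state, token_embedding)]

def run_rnn_mode(tokens, dim=4):
    state = [0.0] * dim  # fixed-size state, so ctx_len is "infinite"
    for emb in tokens:
        state = step(state, emb)
    return state  # final hidden state doubles as a sentence embedding

# Usage: feed two 4-dim "token embeddings" one step at a time.
final = run_rnn_mode([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]], dim=4)
print(final)  # [0.9, 1.0, 0.0, 0.0]
```

Because each step consumes constant memory, inference cost per token does not grow with sequence length, which is the source of the "fast inference, saves VRAM" claims; the parallelizable "GPT" mode computes the same states for all positions at once during training.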
@@ -20,7 +20,7 @@ You are welcome to join the RWKV discord https://discord.gg/bDSBUMeFpc to build
 **Download RWKV-4 0.1/0.4/1.5/3/7B weights**: https://huggingface.co/BlinkDL
-I am training RWKV-4 7/14/24B on the Pile: https://wandb.ai/blinkdl/RWKV-v4-Pile
+I am training RWKV-4 7/14B on the Pile: https://wandb.ai/blinkdl/RWKV-v4-Pile
 ![RWKV-eval](RWKV-eval.png)
