Update README.md

PENG Bo committed 3 years ago (via GitHub)
parent 6667ad18c2 · commit c0c4ffc7b4

@@ -16,7 +16,13 @@ How it works: RWKV gathers information to a number of channels, which are also d
 ## Join our Discord: https://discord.gg/bDSBUMeFpc :)
-You are welcome to join the RWKV discord https://discord.gg/bDSBUMeFpc to build upon it. We have plenty of potential compute (A100 40Gs) now (thanks to CoreWeave), so if you have interesting ideas I can run them. I am also looking for CUDA gurus to optimize the kernel (https://github.com/BlinkDL/RWKV-CUDA). Thank you.
+You are welcome to join the RWKV discord https://discord.gg/bDSBUMeFpc to build upon it. We have plenty of potential compute (A100 40Gs) now (thanks to CoreWeave), so if you have interesting ideas I can run them.
+I am training RWKV-3 on the Pile (https://github.com/BlinkDL/RWKV-v2-RNN-Pile):
+![RWKV-v3-1.5B-Pile](RWKV-v3-1.5B-Pile.png)
+All of the trained models will be open-source. Inference is very fast (only matrix-vector multiplications, no matrix-matrix multiplications) even on CPUs, so you can even run an LLM on your phone.
 Here are some of my TODOs. Let's work together :)
@@ -28,12 +34,6 @@ Here are some of my TODOs. Let's work together :)
 * Test it on bidirectional & MLM tasks, and image & audio & video tokens.
-I am training RWKV-3 on the Pile (https://github.com/BlinkDL/RWKV-v2-RNN-Pile):
-![RWKV-v3-1.5B-Pile](RWKV-v3-1.5B-Pile.png)
-All of the trained models will be open-source. Inference is very fast (only matrix-vector multiplications, no matrix-matrix multiplications) even on CPUs, and I believe you can run a 1.5B params RWKV-v2-RNN with reasonable speed on your phone.
 User feedback:
 > *I've so far toyed around the character-based model on our relatively small pre-training dataset (around 10GB of text), and the results are extremely good - similar ppl to models taking much, much longer to train.*
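
The paragraph moved up in this commit claims that inference needs only matrix-vector multiplications, because RWKV can run as an RNN whose channels decay at different speeds. Below is a minimal numpy sketch of that idea, loosely following the RWKV-v2 time-mix recurrence; all weights and names here are illustrative stand-ins, not the repository's actual code:

```python
import numpy as np

# Minimal sketch of RNN-mode inference with per-channel decay.
# All weights are random stand-ins, NOT the real RWKV parameters.
rng = np.random.default_rng(0)
D = 8                                       # number of channels (hidden size)

W_r = rng.standard_normal((D, D)) * 0.1     # receptance projection
W_k = rng.standard_normal((D, D)) * 0.1     # key projection
W_v = rng.standard_normal((D, D)) * 0.1     # value projection
W_o = rng.standard_normal((D, D)) * 0.1     # output projection
decay = np.exp(-np.exp(rng.standard_normal(D)))  # per-channel decay in (0, 1)

def step(x, num, den):
    """Process one token; every product is matrix-vector or element-wise."""
    r = 1.0 / (1.0 + np.exp(-(W_r @ x)))    # receptance gate (sigmoid)
    k = np.exp(W_k @ x)                     # positive weight for this token
    v = W_v @ x                             # value
    num = decay * num + k * v               # channels decay at different speeds
    den = decay * den + k
    y = W_o @ (r * (num / (den + 1e-8)))    # gated weighted average of values
    return y, num, den

num, den = np.zeros(D), np.zeros(D)         # recurrent state: two D-vectors
x = rng.standard_normal(D)                  # stand-in for a token embedding
for _ in range(5):
    x, num, den = step(x, num, den)
```

Because the state is just two D-dimensional vectors, per-token cost and memory stay constant regardless of context length, which is what makes CPU (and phone) inference plausible.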
