RWKV-2 is an RNN with Transformer-level performance, which can also be directly trained like a GPT transformer (parallelizable).
So it combines the best of RNNs and transformers - **great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embedding** (using the final hidden state).
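The "free sentence embedding" follows from how any RNN works: after the last token, the hidden state summarizes the whole sequence. A minimal sketch with a toy recurrent cell (the names, sizes, and update rule here are illustrative, not RWKV-2's actual architecture or API):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 8                      # toy hidden size
vocab = {"hello": 0, "world": 1}
emb = rng.standard_normal((len(vocab), d_model))   # token embeddings
W = rng.standard_normal((d_model, d_model)) * 0.1  # recurrent weight

def sentence_embedding(tokens):
    h = np.zeros(d_model)        # initial hidden state
    for t in tokens:
        # one recurrent step: mix the previous state with the current token
        h = np.tanh(h @ W + emb[vocab[t]])
    return h                     # final hidden state doubles as the embedding

e = sentence_embedding(["hello", "world"])
print(e.shape)  # (8,)
```

The point is that no extra forward pass or pooling head is needed: the state you already computed for generation is the embedding.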
Tweet from Sepp Hochreiter (thank you!): https://twitter.com/HochreiterSepp/status/1524270961314484227
**You can find me (BlinkDL) in the EleutherAI Discord: https://www.eleuther.ai/get-involved/**
**I am looking for CUDA gurus to optimize the kernel :) Please contact me if you are interested. Thank you.**
User feedback:
> *I've so far toyed around the character-based model on our relatively small pre-training dataset (around 10GB of text), and the results are extremely good - similar ppl to models taking much, much longer to train.*
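The feedback above compares models by ppl (perplexity), which is simply the exponential of the mean per-token cross-entropy loss (in nats); lower ppl means the model assigns higher probability to the data. A quick sketch of the metric:

```python
import math

def perplexity(nll_per_token):
    """Perplexity from a list of per-token negative log-likelihoods (nats)."""
    return math.exp(sum(nll_per_token) / len(nll_per_token))

print(round(perplexity([2.0, 2.0, 2.0]), 4))  # exp(2.0) -> 7.3891
```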