From 62fba642449331db98169d7b5f4aed3b8a6cce7c Mon Sep 17 00:00:00 2001
From: PENG Bo <33809201+BlinkDL@users.noreply.github.com>
Date: Wed, 8 Mar 2023 13:43:02 +0800
Subject: [PATCH] Update README.md

---
 README.md | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index e682609..4fcf9a2 100644
--- a/README.md
+++ b/README.md
@@ -10,10 +10,16 @@ So it's combining the best of RNN and transformer - **great performance, fast in
 
 **Download RWKV-4 0.1/0.4/1.5/3/7/14B weights**: https://huggingface.co/BlinkDL
 
+**Discord**: https://discord.gg/bDSBUMeFpc
+
+**Twitter**: https://twitter.com/BlinkDL_AI
+
 **RWKV in 150 lines** (model, inference, text generation): https://github.com/BlinkDL/ChatRWKV/blob/main/RWKV_in_150_lines.py
 
 **ChatRWKV v2:** with "stream" and "split" strategies and INT8. **3G VRAM is enough to run RWKV 14B :)** https://github.com/BlinkDL/ChatRWKV/tree/main/v2
 
+![RWKV-chat](RWKV-chat.png)
+
 ```python
 os.environ["RWKV_JIT_ON"] = '1'
 os.environ["RWKV_CUDA_ON"] = '0' # if '1' then use CUDA kernel for seq mode (much faster)
@@ -29,12 +35,8 @@ print(out.detach().cpu().numpy()) # same result as above
 ```
 **Hugging Face space**: https://huggingface.co/spaces/BlinkDL/ChatRWKV-gradio
 
-## Join our Discord: https://discord.gg/bDSBUMeFpc :)
-
 You are welcome to join the RWKV discord https://discord.gg/bDSBUMeFpc to build upon it. We have plenty of potential compute (A100 40Gs) now (thanks to Stability and EleutherAI), so if you have interesting ideas I can run them.
 
-**Twitter**: https://twitter.com/BlinkDL_AI
-
 ![RWKV-eval2](RWKV-eval2.png)
 
 RWKV [loss vs token position] for 10000 ctx4k+ documents in Pile. RWKV 1B5-4k is mostly flat after ctx1500, but 3B-4k and 7B-4k and 14B-4k have some slopes, and they are getting better. This debunks the old view that RNNs cannot model long ctxlens. We can predict that RWKV 100B will be great, and RWKV 1T is probably all you need :)
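
Note for readers of the patch: the diff only shows the first and last lines of the Python snippet, since its middle falls between the two hunks. Below is a minimal self-contained sketch of the ChatRWKV v2 usage pattern those lines come from, assuming the `rwkv` pip package's `RWKV` class; the weights path and token ids are illustrative placeholders, not values from the patch.

```python
import os

# These two flags appear in the patched README: JIT compilation on,
# CUDA kernel for seq mode off by default.
os.environ["RWKV_JIT_ON"] = '1'
os.environ["RWKV_CUDA_ON"] = '0'  # if '1' then use CUDA kernel for seq mode (much faster)

from rwkv.model import RWKV  # pip install rwkv

# Placeholder path: point this at RWKV-4 weights downloaded from the
# Hugging Face link above. The strategy string places layers on devices
# at a chosen precision; a "stream" strategy such as
# 'cuda fp16i8 *10 -> cpu fp32' keeps only some INT8 layers resident on
# the GPU, which is how a 14B model can run in ~3G VRAM.
model = RWKV(model='path/to/RWKV-4-Pile-1B5', strategy='cpu fp32')

# forward() consumes a list of token ids plus an optional RNN state and
# returns (logits for the last token, updated state).
out, state = model.forward([187, 510, 1319, 12902], None)
print(out.detach().cpu().numpy())

# Because RWKV is an RNN, splitting the same tokens across two calls
# while carrying the state reproduces the single-pass logits.
out, state = model.forward([187, 510], None)
out, state = model.forward([1319, 12902], state)
print(out.detach().cpu().numpy())  # same result as above
```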