diff --git a/README.md b/README.md
index 596a120..415fcef 100644
--- a/README.md
+++ b/README.md
@@ -11,6 +11,7 @@ So it's combining the best of RNN and transformer - **great performance, fast in
 **ChatRWKV v2:** with "stream" and "split" strategies. **3G VRAM is enough to run RWKV 14B :)** https://github.com/BlinkDL/ChatRWKV/tree/main/v2
 ```
 os.environ["RWKV_JIT_ON"] = '1'
+os.environ["RWKV_CUDA_ON"] = '0' # if '1' then use CUDA kernel for seq mode (much faster)
 from rwkv.model import RWKV # everything in /v2/rwkv folder
 model = RWKV(model='/fsx/BlinkDL/HF-MODEL/rwkv-4-pile-1b5/RWKV-4-Pile-1B5-20220903-8040', strategy='cuda fp16')
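For context on the patched snippet: the `RWKV_JIT_ON` / `RWKV_CUDA_ON` flags are environment variables that the `rwkv` package reads when it is imported, so they must be set before the `from rwkv.model import RWKV` line. A minimal sketch (the model path here is a placeholder, and the actual import/load is commented out since it requires the `rwkv` package and a downloaded checkpoint):

```python
import os

# Set these BEFORE importing rwkv.model -- the package reads them at import time.
os.environ["RWKV_JIT_ON"] = '1'   # enable the TorchScript JIT
os.environ["RWKV_CUDA_ON"] = '0'  # '1' builds the custom CUDA kernel (faster seq mode)

# Placeholder path -- substitute a real RWKV-4 checkpoint file:
# from rwkv.model import RWKV
# model = RWKV(model='path/to/RWKV-4-Pile-1B5-checkpoint', strategy='cuda fp16')

print(os.environ["RWKV_JIT_ON"], os.environ["RWKV_CUDA_ON"])
```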