From 6b1ba8a9bd93090b61c6a0ae3988cb1d591c15fa Mon Sep 17 00:00:00 2001
From: PENG Bo <33809201+BlinkDL@users.noreply.github.com>
Date: Tue, 5 Apr 2022 07:00:04 +0800
Subject: [PATCH] Update README.md

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index b939a44..a81a607 100644
--- a/README.md
+++ b/README.md
@@ -20,6 +20,8 @@ kv / k is the memory mechanism. The token with high k can be remembered for a lo
 
 It's also using my SmallInitEmb trick https://github.com/BlinkDL/SmallInitEmb (applicable to all transformers), and a custom CUDA kernel https://github.com/BlinkDL/RWKV-CUDA .
 
+I find it might be nice to keep the model at a mid-range LR for a long period, because in theory that's where most of the learning should happen. For example: decay from 6e-4 to 1e-4 over the first 15% of steps, hold at 1e-4 for 60% of steps, then decay from 1e-4 to 1e-5 over the final 25% of steps.
+
 The pseudocode (execution from top to bottom):
 
 ![RWKV-v2-RNN](RWKV-v2-RNN.png)
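
Since the patch describes a concrete three-phase schedule, here is a minimal sketch of it as a step-wise PyTorch LR function. It assumes linear interpolation within the two decay phases (the patch does not specify the decay shape), and the model, optimizer, and total step count below are placeholders, not the repo's actual training setup:

```python
import torch

def mid_lr(step, total_steps, lr_max=6e-4, lr_mid=1e-4, lr_min=1e-5,
           decay_frac=0.15, hold_frac=0.60):
    """Schedule from the patch: decay lr_max -> lr_mid over the first 15%
    of steps, hold lr_mid for 60% of steps, then decay lr_mid -> lr_min
    over the final 25%. Linear interpolation within each decay phase is
    an assumption; the patch only fixes the endpoints and phase lengths."""
    t = step / total_steps
    if t < decay_frac:                        # first 15%: 6e-4 -> 1e-4
        return lr_max + (t / decay_frac) * (lr_mid - lr_max)
    if t < decay_frac + hold_frac:            # next 60%: hold at 1e-4
        return lr_mid
    # final 25%: 1e-4 -> 1e-5
    p = (t - decay_frac - hold_frac) / (1.0 - decay_frac - hold_frac)
    return lr_mid + p * (lr_min - lr_mid)

# Usage sketch: set the LR on every optimizer step.
model = torch.nn.Linear(8, 8)                 # stand-in for the real model
optimizer = torch.optim.Adam(model.parameters(), lr=6e-4)
total_steps = 100_000                         # assumed total training steps
for step in range(total_steps):
    for group in optimizer.param_groups:
        group["lr"] = mid_lr(step, total_steps)
    # ... forward / backward / optimizer.step() ...
```

An exponential decay between the same endpoints would also match the description; the long plateau at the mid LR is the part the patch argues for.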