From 803397945b8b59642b495002281b8282ac66d1dd Mon Sep 17 00:00:00 2001
From: PENG Bo <33809201+BlinkDL@users.noreply.github.com>
Date: Mon, 27 Jun 2022 12:54:35 +0800
Subject: [PATCH] Update README.md

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index 211e590..fd38e9d 100644
--- a/README.md
+++ b/README.md
@@ -79,6 +79,8 @@ kv / k is the memory mechanism. The token with high k can be remembered for a lo
 
 **RWKV v2 is parallelizable because the time-decay of each channel is data-independent (and trainable)**. For example, in usual RNN you can adjust the time-decay of a channel from say 0.8 to 0.5 (these are called "gates"), while in RWKV v2 you simply move the information from a W-0.8-channel to a W-0.5-channel to achieve the same effect.
 
+========================================================================
+
 ### RWKV v2+ improvements (not yet uploaded to github. used in the latest 1.5B run)
 
 Use different trainable TimeMix factors for R / K / V in SA and FF layers. Example:
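Not part of the patch itself: the context paragraph above argues that a data-independent, per-channel time-decay is what makes RWKV v2 parallelizable. The following is a minimal NumPy sketch of that argument, under the assumption of a scalar decay per channel (`w`, a stand-in for the trained time-decay `W`) and a per-token `kv` term; the names and shapes are illustrative only and are not the repo's actual code.

```python
import numpy as np

# Toy shapes: T tokens, C channels. Each channel has its own decay,
# drawn once here -- it never depends on the input data.
T, C = 6, 4
rng = np.random.default_rng(0)
kv = rng.standard_normal((T, C))     # stand-in for the per-token kv term
w = rng.uniform(0.5, 0.99, size=C)   # stand-in for the per-channel time-decay W

# RNN-style sequential scan: state_t = w * state_{t-1} + kv_t
state = np.zeros(C)
seq = []
for t in range(T):
    state = w * state + kv[t]
    seq.append(state.copy())
seq = np.stack(seq)                  # (T, C)

# Parallel form: because w is data-independent, the weights w ** (t - i)
# can be precomputed as a matrix and applied to all timesteps at once.
gap = np.arange(T)[:, None] - np.arange(T)[None, :]   # t - i
decay = np.where((gap >= 0)[..., None],
                 w[None, None, :] ** np.maximum(gap, 0)[..., None],
                 0.0)                                  # (T, T, C)
par = np.einsum('tic,ic->tc', decay, kv)

assert np.allclose(seq, par)
```

The explicit (T, T, C) decay tensor is only for clarity; it shows why the recurrence can be evaluated for all timesteps in parallel, whereas a data-dependent gate (as in a usual RNN) would force the sequential scan.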