From aa90f10547afc9c5caf851ec909fb03139a40edb Mon Sep 17 00:00:00 2001
From: PENG Bo <33809201+BlinkDL@users.noreply.github.com>
Date: Thu, 16 Jun 2022 11:53:01 +0800
Subject: [PATCH] Update README.md

---
 README.md | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/README.md b/README.md
index cfa345e..612d41b 100644
--- a/README.md
+++ b/README.md
@@ -126,6 +126,12 @@ kv / k is the memory mechanism. The token with high k can be remembered for a lo
 
 RWKV v2 is parallelizable because the time-decay of each channel is data-independent (and trainable). For example, in usual RNN you can adjust the time-decay of a channel from say 0.8 to 0.5 (these are called "gates"), while in RWKV v2 you simply move the information from a W-0.8-channel to a W-0.5-channel to achieve the same effect.
 
+## RWKV v2.x improvements
+
+The latest improvements:
+* Use different TimeMix for R/K/V.
+* Use preLN instead of postLN.
+
 ## How to sample a large dataset (for training)
 
 I am using a trick to sample the Pile deterministically yet randomly enough.
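
Note: the sketch below is not the repository's implementation; it is a minimal PyTorch illustration (all module and variable names such as `ChannelTimeDecay` and `PreLNBlock` are hypothetical) of the two ideas touched by this patch: a per-channel, data-independent but trainable time-decay, which is what lets the recurrence be evaluated in parallel over time, and a preLN residual block in place of postLN.

```python
# Illustrative sketch only -- not the RWKV repository's code.
# (a) each channel has one trainable decay that does not depend on the input,
#     so the weighted sum over past positions can be computed in parallel over T;
# (b) preLN applies LayerNorm before the sub-layer: x + f(LN(x)),
#     whereas postLN would compute LN(x + f(x)).

import torch
import torch.nn as nn

class ChannelTimeDecay(nn.Module):
    """Toy time-mixing: channel c decays past values by a fixed w[c] per step."""
    def __init__(self, n_embd):
        super().__init__()
        # One trainable decay per channel; data-independent, so the full
        # (T, T, C) decay tensor below depends only on positions, not tokens.
        self.log_w = nn.Parameter(torch.zeros(n_embd))

    def forward(self, v):                      # v: (B, T, C)
        B, T, C = v.shape
        w = torch.sigmoid(self.log_w)          # decay in (0, 1), shape (C,)
        idx = torch.arange(T, device=v.device)
        dist = (idx[:, None] - idx[None, :]).clamp(min=0)       # t - s, (T, T)
        mask = (idx[:, None] >= idx[None, :]).float()           # causal mask
        # decay[t, s, c] = w[c] ** (t - s) for s <= t, else 0
        decay = w[None, None, :] ** dist[:, :, None] * mask[:, :, None]
        return torch.einsum('tsc,bsc->btc', decay, v)

class PreLNBlock(nn.Module):
    """preLN residual block: normalize first, then add the residual."""
    def __init__(self, n_embd):
        super().__init__()
        self.ln = nn.LayerNorm(n_embd)
        self.time_mix = ChannelTimeDecay(n_embd)

    def forward(self, x):
        return x + self.time_mix(self.ln(x))

if __name__ == "__main__":
    x = torch.randn(2, 8, 16)                  # (batch, time, channels)
    print(PreLNBlock(16)(x).shape)             # torch.Size([2, 8, 16])
```

In this toy form, "moving information from a W-0.8-channel to a W-0.5-channel" corresponds to routing a value into a channel whose `w[c]` is smaller, rather than changing a gate value at run time.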