From d8234047e69c1b376491eb7d3623f91af4119870 Mon Sep 17 00:00:00 2001
From: PENG Bo <33809201+BlinkDL@users.noreply.github.com>
Date: Fri, 25 Mar 2022 06:05:42 +0800
Subject: [PATCH] Update README.md

---
 README.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 3d5434c..6846be7 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 # The RWKV Language Model
 
-## RWKV v2 RNN
+## RWKV v2 RNN: Language is in O(1)
 
 RWKV v2 is a RNN which can also be directly trained like a GPT transformer.
 
@@ -16,6 +16,8 @@ Write out the formulas for "token at pos 2" and "token at pos 3" and you will ge
 * a and b: EMAs of kv and k.
 * c and d: a and b combined with self-attention.
 
+kv / k is the memory mechanism: a token with a high k can be remembered for a long period, if W is close to 1 in that channel.
+
 The pseudocode (execution from top to bottom):
 
 ![RWKV-v2-RNN](RWKV-v2-RNN.png)
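The memory mechanism the patch describes is easier to see in code. Below is a minimal numpy sketch of the recurrence suggested by the bullets: a and b as decaying running sums (EMAs) of exp(k)·v ("kv") and exp(k) ("k"), with c and d folding in the current token's own contribution. The names `W` (per-channel decay) and `X` (the "self-attention" bonus for the current token), and the exact update order, are assumptions for illustration, not the repository's actual code:

```python
import numpy as np

def rwkv_v2_step(a, b, k, v, W, X):
    # a, b: running EMAs of exp(k)*v and exp(k), one value per channel
    # k, v: key and value vectors for the current token
    # W:    per-channel decay in (0, 1); W close to 1 => long memory in that channel
    # X:    per-channel bonus giving the current token extra weight ("self-attention")

    # c and d: the EMAs combined with the current token's contribution
    c = a + np.exp(X + k) * v
    d = b + np.exp(X + k)
    out = c / d  # per-channel weighted average of past and current v

    # decay the memory, then deposit the current token's kv / k
    # (a real implementation would also guard the exponentials against overflow)
    a = W * a + np.exp(k) * v
    b = W * b + np.exp(k)
    return out, a, b

# demo with random tokens (hypothetical sizes)
C = 8                                  # number of channels
a, b = np.zeros(C), np.zeros(C)
W, X = np.full(C, 0.99), np.zeros(C)   # near-1 decay: long memory
rng = np.random.default_rng(0)
for _ in range(5):
    k, v = rng.normal(size=C), rng.normal(size=C)
    out, a, b = rwkv_v2_step(a, b, k, v, W, X)
```

With W near 1, a token's exp(k) contribution to a and b decays slowly, so a token with a high k dominates the weighted average for many steps: the "remembered for a long period" behavior the new sentence describes.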