From 88297e7949f5fe6e6539e1b279cc211e8528a85a Mon Sep 17 00:00:00 2001
From: PENG Bo <33809201+BlinkDL@users.noreply.github.com>
Date: Wed, 11 Aug 2021 15:32:35 +0800
Subject: [PATCH] Update README.md

---
 README.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index b35b9ff..03a936f 100644
--- a/README.md
+++ b/README.md
@@ -10,7 +10,7 @@ alt="\begin{align*}
 \end{align*}
 ">
 
-* Here R, K, V are generated by linear transforms of input, and W is parameter. Basically RWKV decomposes attention into R(target) * W(src, target) * K(src). So we can call R "receptance", and sigmoid means it's in 0~1 range.
+* The R, K, V are generated by linear transforms of input, and W is parameter. The idea of RWKV is to decompose attention into R(target) * W(src, target) * K(src). So we can call R "receptance", and sigmoid means it's in 0~1 range.
 
 * The Time-mix is similar to AFT (https://arxiv.org/abs/2105.14103). There are two differences.
 
@@ -26,6 +26,8 @@ alt="\text{softmax}_t(\text{K}_{u,c}) = \frac{\exp(\text{K}_{u,c})}{\sum_{v \leq
 "https://render.githubusercontent.com/render/math?math=%5Cdisplaystyle+W_%7Bt%2Cu%2Cc%7D%3Df_h%28t-u%29%5Ccdot+%5Calpha_h%28u%29+%5Ccdot+%5Cbeta_h%28t%29"
 alt="W_{t,u,c}=f_h(t-u)\cdot \alpha_h(u) \cdot \beta_h(t)">
 
+(3) You don't need LayerNorm for Time-mix. In fact, the model converges faster when LayerNorm is removed.
+
 Moreover we multiply the final output of Time-mix layer by γ(t). The reason for the α β γ factors, is because the context size is smaller when t is small, and this can be compensated using the α β γ factors.
 
 * The Channel-mix is similar to GeGLU (https://arxiv.org/abs/2002.05202) with an extra R factor.
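
For readers of the patch, a minimal single-head sketch of the Time-mix step as the quoted README text describes it: R, K, V come from linear transforms of the input, K goes through the causal softmax over the time axis, the weights factor as W_{t,u,c} = f_h(t-u)·α_h(u)·β_h(t), and the output is gated by sigmoid(R) and scaled by γ(t). This is plain PyTorch, not the repository's actual implementation; the helper tensors `f_h`, `alpha_h`, `beta_h`, `gamma_t` are assumed stand-ins for the learned per-head curves, and the per-channel/head grouping of W is dropped for brevity.

```python
# Illustrative sketch only (assumptions noted above), not RWKV's real code.
import torch

def time_mix_sketch(x, Wr, Wk, Wv, f_h, alpha_h, beta_h, gamma_t):
    """
    x:        (T, C) input sequence
    Wr/Wk/Wv: (C, C) linear transforms producing R, K, V
    f_h:      (T,)   assumed learned function of the distance t-u
    alpha_h:  (T,)   assumed learned per-source-position factor
    beta_h:   (T,)   assumed learned per-target-position factor
    gamma_t:  (T,)   assumed per-position scaling of the final output
    """
    T, C = x.shape
    R = torch.sigmoid(x @ Wr)        # "receptance", squashed to the 0~1 range
    K = x @ Wk
    V = x @ Wv

    # Causal softmax over the time axis: softmax_t(K)_{u,c} = exp(K_{u,c}) / sum_{v<=t} exp(K_{v,c})
    expK = K.exp()                                        # (T, C)
    denom = torch.cumsum(expK, dim=0)                     # running sum over v <= t

    # W_{t,u} = f_h(t-u) * alpha_h(u) * beta_h(t), kept only for u <= t
    idx = torch.arange(T)
    dist = (idx[:, None] - idx[None, :]).clamp(min=0)     # t - u
    W = f_h[dist] * alpha_h[None, :] * beta_h[:, None]    # (T, T)
    W = W.masked_fill(idx[None, :] > idx[:, None], 0.0)   # causal mask

    # out_{t,c} = sigmoid(R_{t,c}) * gamma(t) * sum_{u<=t} W_{t,u} * softmax_t(K)_{u,c} * V_{u,c}
    num = W @ (expK * V)                                  # (T, C)
    return R * gamma_t[:, None] * num / denom
```

A quick way to exercise the sketch with random tensors:

```python
T, C = 8, 16
x = torch.randn(T, C)
Wr, Wk, Wv = (torch.randn(C, C) * 0.1 for _ in range(3))
f_h, alpha_h, beta_h, gamma_t = (torch.rand(T) for _ in range(4))
y = time_mix_sketch(x, Wr, Wk, Wv, f_h, alpha_h, beta_h, gamma_t)  # (T, C)
```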