From 9ca62d3a1e0ab42778ca38005278b38961f62619 Mon Sep 17 00:00:00 2001
From: PENG Bo <33809201+BlinkDL@users.noreply.github.com>
Date: Fri, 25 Mar 2022 05:43:35 +0800
Subject: [PATCH] Update README.md

---
 README.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index ee435c8..9cf863e 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 # The RWKV Language Model
 
-## v2
+## RWKV v2 RNN
 
 RWKV v2 is a RNN which can also be directly trained like a GPT transformer.
 
@@ -8,6 +8,8 @@ You only need x_t, a_t, b_t of position t to compute the vectors for position t+
 
 Hence it can be 100x faster than GPT, and 100x more VRAM friendly.
 
+## How it works
+
 The a b c d factors work together to build a time-decay curve: u, 1, w, w^2, w^3, ...
 
 Write out the formulas for "token at pos 2" and "token at pos 3" and you will get the idea:
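The time-decay curve u, 1, w, w^2, w^3, ... that the patched README mentions can be illustrated with a small recurrence. The sketch below is not the RWKV v2 implementation (the exact formulas live in the repository); it only shows, under the README's notation (u, w, k_t, v_t, and running accumulators a, b), how a pair of state variables suffices to give the current token weight exp(u + k_t) and each earlier token a weight decayed by one more factor of w.

```python
import math

def wkv_sketch(ks, vs, u, w):
    """Hypothetical sketch of a decayed weighted average over tokens.

    At position t, token t gets weight exp(u + k_t), token t-1 gets
    exp(k_{t-1}), token t-2 gets w * exp(k_{t-2}), and so on -- the
    decay curve u, 1, w, w^2, w^3, ... from the README. Only the two
    accumulators a and b carry over from step to step, which is why a
    recurrent evaluation needs no attention over the full history.
    """
    a = 0.0  # running numerator:   sum of decayed exp(k_i) * v_i
    b = 0.0  # running denominator: sum of decayed exp(k_i)
    outs = []
    for k, v in zip(ks, vs):
        num = a + math.exp(u + k) * v   # current token gets the u bonus
        den = b + math.exp(u + k)
        outs.append(num / den)
        a = w * a + math.exp(k) * v     # decay history, add this token
        b = w * b + math.exp(k)
    return outs
```

Writing out the output at pos 3 from this recurrence gives (w*e^{k_1}*v_1 + e^{k_2}*v_2 + e^{u+k_3}*v_3) / (w*e^{k_1} + e^{k_2} + e^{u+k_3}), which is the "write out the formulas for token at pos 2 and pos 3" exercise the README suggests.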