## RWKV: Parallelizable RNN with Transformer-level LLM Performance

(pronounced "RwaKuv", from its 4 major params: R, W, K, V)
RWKV is an RNN with Transformer-level LLM performance that can also be trained directly like a GPT transformer (i.e., training is parallelizable), and it is 100% attention-free. At inference time, only the hidden state at position t is needed to compute the state at position t+1, and the "GPT" mode can be used to quickly compute the hidden state for the "RNN" mode.
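To make the RNN/GPT duality concrete, here is a minimal toy sketch (not the actual RWKV cell; the linear update and the `decay` parameter are illustrative assumptions): the same recurrence is evaluated either step by step ("RNN" mode, each state depends only on the previous one) or over the whole sequence at once ("GPT" mode), and the two agree.

```python
import numpy as np

# Toy linear recurrence (illustrative, not the real RWKV update):
#   s[t] = decay * s[t-1] + x[t]

def rnn_mode(x, decay, s0=0.0):
    """Sequential ("RNN") evaluation: each step needs only the previous state."""
    states = []
    s = s0
    for xt in x:
        s = decay * s + xt
        states.append(s)
    return np.array(states)

def gpt_mode(x, decay, s0=0.0):
    """Parallel ("GPT") evaluation of the same recurrence in closed form:
    s[t] = decay**(t+1) * s0 + sum_{k<=t} decay**(t-k) * x[k]."""
    T = len(x)
    t = np.arange(T)
    # weights[t, k] = decay**(t-k) for k <= t, zero above the diagonal
    weights = np.tril(decay ** (t[:, None] - t[None, :]))
    return decay ** (t + 1) * s0 + weights @ x

x = np.random.randn(8)
assert np.allclose(rnn_mode(x, 0.9), gpt_mode(x, 0.9))
```

In this sketch the "GPT" mode computes every state in one matrix product over the full sequence (good for training), while the "RNN" mode carries a single state forward (good for inference); a state computed in parallel can then seed the sequential loop.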