From 68fcf7e29600701cbfd483e409790b414d1ec2d0 Mon Sep 17 00:00:00 2001
From: PENG Bo <33809201+BlinkDL@users.noreply.github.com>
Date: Mon, 30 May 2022 01:56:10 +0800
Subject: [PATCH] Update README.md

---
 README.md | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/README.md b/README.md
index a03f665..cfa345e 100644
--- a/README.md
+++ b/README.md
@@ -47,9 +47,7 @@ My LR schedule for the L24-D1024 RWKV-2:
 
 Fixing NaN or loss spikes: load a previous checkpoint, decrease LR a bit. I find you can decrease the LR faster than GPT, and eventually to 1/50 of LR_max.
 
-Fine-tuning: for a small model, try 4e-5 lr, and decay to 1e-5 when it plateaus.
-
-**Important**: For fine-tuning the Pile model, change K_EPS from 1e-16 to 1e-9 (to avoid NaN) in https://github.com/BlinkDL/RWKV-LM/blob/main/RWKV-v2-RNN/src/model.py and https://github.com/BlinkDL/RWKV-LM/blob/main/RWKV-v2-RNN/src/model_run.py and disable HeadQK (so it's a pure RNN). You can compare the output with the latest code ( https://github.com/BlinkDL/RWKV-v2-RNN-Pile ) to verify it.
+Fine-tuning: see https://github.com/BlinkDL/RWKV-v2-RNN-Pile.
 
 ## How it works
 
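
The README text in the hunk above describes an LR schedule in words only: warm up, then decay faster than a typical GPT schedule, eventually reaching 1/50 of LR_max (the removed paragraph similarly suggested starting small-model fine-tuning at 4e-5 and decaying to 1e-5). A minimal sketch of such a schedule follows; the concrete values (lr_max, warmup_steps, total_steps) are hypothetical illustrations and do not appear in the patch.

```python
import math

lr_max = 6e-4            # hypothetical peak LR
lr_final = lr_max / 50   # the README's suggested floor: 1/50 of LR_max
warmup_steps = 1_000     # hypothetical warmup length
total_steps = 100_000    # hypothetical run length

def lr_at(step: int) -> float:
    """Linear warmup to lr_max, then exponential decay to lr_max / 50."""
    if step < warmup_steps:
        return lr_max * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    # Exponential interpolation: lr_max at progress=0, lr_final at progress=1.
    return lr_max * math.exp(progress * math.log(lr_final / lr_max))
```

This is only one way to realize "decay to 1/50 of LR_max"; the exact curve shape (exponential vs. cosine vs. stepwise on plateau) is a design choice the README leaves open.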