From a42915dab2944203174e473043e90575fe4affa8 Mon Sep 17 00:00:00 2001
From: randaller
Date: Sat, 11 Mar 2023 14:17:48 +0300
Subject: [PATCH] Update README.md

---
 README.md | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/README.md b/README.md
index 5a5b0a0..ed03320 100644
--- a/README.md
+++ b/README.md
@@ -76,6 +76,12 @@ Place (torrentroot)/tokenizer.model file to the [/tokenizer] folder of this repo
 python example-chat.py ./model ./tokenizer/tokenizer.model
 ```
 
+### Generation parameters
+
+![image](https://user-images.githubusercontent.com/22396871/224481168-122ef4d1-43b0-4579-8f7e-594936b7bafa.png)
+
+**Temperature** is one of the key generation parameters. You may wish to experiment with values around 0.7 .. 0.99. The lower the temperature, the more closely the model follows your prompt; the higher the temperature, the more imagination the model is free to use.
+
 ### Enable multi-line answers
 
 If you wish to stop generation not on the "\n" character but on another signature, such as "User:" (which is also a good idea), make the following modification in llama/generation.py:
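The temperature behavior the added README section describes can be sketched as below. This is a minimal, self-contained illustration of temperature-scaled softmax sampling, not the repository's actual `llama/generation.py` code; the function name, the plain-list logits, and the seeded RNG are all assumptions made for the example:

```python
import math
import random

def sample_with_temperature(logits, temperature=0.8, rng=None):
    """Sample a token index from logits after temperature scaling.

    Dividing logits by a low temperature sharpens the softmax, so the
    model sticks closely to its highest-probability (prompt-following)
    token; a high temperature flattens it, giving more varied output.
    """
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the resulting distribution.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1
```

With logits like `[2.0, 1.0, 0.0]`, a temperature of 0.1 picks index 0 almost every time, while a temperature of 10.0 spreads the choices nearly uniformly, which is why the 0.7 .. 0.99 range is a practical middle ground for chat.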