Place the (torrentroot)/tokenizer.model file into the [/tokenizer] folder of this repo.
```
python example-chat.py ./model ./tokenizer/tokenizer.model
```

### Enable multi-line answers
If you wish to stop generation not at the "\n" character but at another marker, such as "User:" (which is also a good idea), or any other string, make the following modification in llama/generator.py:
