From cb395a25ae9faf05a8f03b2c1d58132bb98d59b1 Mon Sep 17 00:00:00 2001
From: randaller
Date: Fri, 10 Mar 2023 12:47:39 +0300
Subject: [PATCH] Update README.md

---
 README.md | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/README.md b/README.md
index 8ab9025..611b1cf 100644
--- a/README.md
+++ b/README.md
@@ -85,3 +85,13 @@ If you wish to stop generation not by "\n" sign, but by another signature, like
 ### Share the best with community
 Share your best prompts and generations with others here:
 https://github.com/randaller/llama-chat/issues/7
+
+### Typical generation with prompt (not a chat)
+
+Simply comment out these three lines in llama/generation.py to turn it back into a plain generator.
+
+![image](https://user-images.githubusercontent.com/22396871/224283389-e29de04e-28d1-4ccd-bf6b-81b29828d3eb.png)
+
+```
+python example.py ./model ./tokenizer/tokenizer.model
+```