### Share the best with community
Share your best prompts and generations with others here: https://github.com/randaller/llama-chat/issues/7
### Typical generation with prompt (not a chat)
Simply comment out those three lines in llama/generation.py to turn it back into a plain generator.
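The three lines in question belong to a diff that is not reproduced here, so the following is only an illustrative sketch, not the actual llama/generation.py source. It assumes the chat patch adds an early-stop check on a stop signature (such as "\n") inside the decode loop; all names (`generate`, `stop`) are hypothetical:

```python
# Toy sketch of a decode loop with a chat-style early stop.
# All names here are illustrative, NOT the real llama/generation.py code.
def generate(tokens, max_gen_len=16, stop="\n"):
    out = []
    for tok in tokens[:max_gen_len]:
        out.append(tok)
        # --- chat-mode early stop: comment out these lines to restore
        # --- plain prompt completion that always runs to max_gen_len
        if tok == stop:
            break
    return "".join(out)
```

With the two stop lines commented out, the loop always produces `max_gen_len` tokens, which is the "typical generation" behavior this section describes.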

```
python example.py ./model ./tokenizer/tokenizer.model
```