Update README.md

randaller committed 3 years ago
parent 5fa2239d57
commit 82cda6862b

@@ -211,21 +211,21 @@ python hf-inference-cuda-example.py
Modify hf-training-example.py; feel free to use more or fewer lines of SD prompt examples in the csv file:
```
MODEL = 'decapoda-research/llama-7b-hf'
DATA_FILE_PATH = 'datasets/stable_diffusion_prompts.csv'
OUTPUT_DIR = './trained'
```
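
For orientation, here is a rough sketch of how these constants typically feed into the data-preparation step. This is not the actual contents of hf-training-example.py; the use of the `datasets` library and the "Prompt" column name are assumptions for illustration only:

```
from datasets import load_dataset
from transformers import LlamaTokenizer

MODEL = 'decapoda-research/llama-7b-hf'
DATA_FILE_PATH = 'datasets/stable_diffusion_prompts.csv'
OUTPUT_DIR = './trained'

# Load the csv of SD prompt examples; the "Prompt" column name is an assumption.
data = load_dataset("csv", data_files=DATA_FILE_PATH)

tokenizer = LlamaTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token

# Tokenize each prompt line so the trainer can consume it.
data = data.map(lambda row: tokenizer(row["Prompt"], truncation=True, max_length=256))
```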
*Note: You may also prepare your own dataset, for example, with Positive:, Negative:, and even Sampler: lines interleaved in the csv.*
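
For illustration only, such an interleaved dataset might contain rows along these lines (these example rows are hypothetical, not taken from the bundled stable_diffusion_prompts.csv):

```
Positive: A warship flying thru the Wormhole, highly detailed, sharp focus
Negative: blurry, low quality, watermark
Sampler: Euler a
Positive: A portrait of a beautiful girl, cinematic lighting, 8k
Negative: deformed hands, extra fingers
```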
Then run the training and, after a long, long time, use something like this as a prompt for LLaMA to generate SD prompts:
```
batch = tokenizer("A portrait of a beautiful girl, ", return_tensors="pt")
```
*Note: If you have prepared and used your own dataset with Positive: / Negative: lines, the initial prompt may look like:*
```
batch = tokenizer("Positive: A warship flying thru the Wormhole, ", return_tensors="pt")
```
