From 38d9378bb4a0d73e330b3d7cd96f0b5ad04ed82c Mon Sep 17 00:00:00 2001
From: randaller
Date: Sun, 19 Mar 2023 18:38:33 +0300
Subject: [PATCH] Update README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 633bb55..690e3ed 100644
--- a/README.md
+++ b/README.md
@@ -225,13 +225,13 @@ Then run the training, then after a long-long time, use something like this as a
 batch = tokenizer("A portrait of a beautiful girl, ", return_tensors="pt")
 ```
 
-*Note: If you have prepared and used own dataset with Positive: Negative: lines, the initial prompt may look like:*
+*Note: If you have prepared and used own dataset with Positive: Negative: lines, the initial LLaMA prompt may look like:*
 
 ```
 batch = tokenizer("Positive: A warship flying thru the Wormhole, ", return_tensors="pt")
 ```
 
-Run inference, this should return continued prompt.
+Run inference, this should return continued prompt for SD.
 
 ## Reference