From afbb6b12b759d52ab079446d402308969dd82fbd Mon Sep 17 00:00:00 2001
From: randaller
Date: Sun, 19 Mar 2023 16:46:20 +0300
Subject: [PATCH] Update README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 91e1e65..1db135a 100644
--- a/README.md
+++ b/README.md
@@ -8,7 +8,7 @@ And on Steve Manuatu's repo: https://github.com/venuatu/llama
 
 And on Shawn Presser's repo: https://github.com/shawwn/llama
 
-[HF 🤗 version](https://github.com/randaller/llama-chat#hugging-face--version) by Yam Peleg and Jason Phang: https://github.com/ypeleg/llama & https://github.com/zphang
+[HF 🤗 version](https://github.com/randaller/llama-chat#hugging-face--version-inference--training) by Yam Peleg and Jason Phang: https://github.com/ypeleg/llama & https://github.com/zphang
 
 ## Examples of chats here
 
@@ -27,7 +27,7 @@ I am running PyArrow version on a [12700k/128 Gb RAM/NVIDIA 3070ti 8Gb/fast huge
 
 For example, **30B model uses around 70 Gb of RAM**. 7B model fits into 18 Gb. 13B model uses 48 Gb.
 
-If you do not have nvidia videocard, you may use another repo for cpu-only inference: https://github.com/randaller/llama-cpu or [HF 🤗 version](https://github.com/randaller/llama-chat#hugging-face--version).
+If you do not have nvidia videocard, you may use another repo for cpu-only inference: https://github.com/randaller/llama-cpu or [HF 🤗 version](https://github.com/randaller/llama-chat#hugging-face--version-inference--training).
 
 ## Installation
 