From db27cfbce1d7f42c0972560270eb9876527724b9 Mon Sep 17 00:00:00 2001
From: randaller
Date: Mon, 6 Mar 2023 00:16:38 +0300
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 616e3f8..23ff0a5 100755
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 # Inference LLaMA models using CPU only
 
-This repository is intended as a minimal, hackable and readable example to load [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) ([arXiv](https://arxiv.org/abs/2302.13971v1)) models and run inference by using only CPU. Thus requires no videocard, but 64 (better 128 Gb) of RAM and modern processor is required.
+This repository is intended as a minimal, hackable and readable example to load [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) ([arXiv](https://arxiv.org/abs/2302.13971v1)) models and run inference by using only CPU. Thus requires no videocard, but 64 (better 128 Gb) of RAM and modern processor is required. Make sure you have enough swap space (128 GB should be OK :).
 
 ### Conda Environment Setup Example for Windows 10+
 Download and install Anaconda Python https://www.anaconda.com and run Anaconda Prompt
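
As a quick illustration of the swap-space note added by this patch, the sketch below checks whether the machine's RAM plus swap reaches roughly the suggested capacity before attempting to load the weights. It is not part of the repository: it assumes the optional `psutil` package is installed and treats 128 GB as an illustrative threshold.

```python
# Sketch: verify there is enough RAM + swap before loading LLaMA weights.
# Assumes the optional `psutil` package (pip install psutil); it is not part
# of this repository, just a convenient way to read memory statistics.
import psutil

GIB = 1024 ** 3
REQUIRED_GIB = 128  # rough RAM + swap total suggested in the README

ram_gib = psutil.virtual_memory().total / GIB
swap_gib = psutil.swap_memory().total / GIB

print(f"RAM:  {ram_gib:.1f} GiB")
print(f"Swap: {swap_gib:.1f} GiB")

if ram_gib + swap_gib < REQUIRED_GIB:
    print(f"Warning: less than {REQUIRED_GIB} GiB of RAM + swap available; "
          "loading the larger LLaMA checkpoints may fail or thrash.")
```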