From f888f75aad765ad6f84237363123059d7a070e65 Mon Sep 17 00:00:00 2001
From: randaller
Date: Sat, 4 Mar 2023 13:56:53 +0300
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 40e8e2a..2f29e0a 100755
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 # Inference LLaMA models using CPU only
 
-This repository is intended as a minimal, hackable and readable example to load [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) ([arXiv](https://arxiv.org/abs/2302.13971v1)) models and run inference by using only CPU. No videocard is needed, but 64 (or better 128 Gb) of RAM is required.
+This repository is intended as a minimal, hackable and readable example to load [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) ([arXiv](https://arxiv.org/abs/2302.13971v1)) models and run inference using only the CPU. Thus no video card is needed, but 64 GB (or better, 128 GB) of RAM and a modern processor are required.
 
 ### Setup
 In a conda env with pytorch / cuda available, run