From bf6195201260c54378ecda4bea9544b6dd45385d Mon Sep 17 00:00:00 2001
From: randaller
Date: Mon, 6 Mar 2023 00:17:47 +0300
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 23ff0a5..f352942 100755
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-# Inference LLaMA models using CPU only
+# Inference LLaMA models on desktops using CPU only
 This repository is intended as a minimal, hackable and readable example to load [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) ([arXiv](https://arxiv.org/abs/2302.13971v1)) models and run inference by using only CPU. Thus requires no videocard, but 64 (better 128 Gb) of RAM and modern processor is required. Make sure you have enough swap space (128Gb should be ok :).