diff --git a/README.md b/README.md
index 77dcf18..8ba2a56 100755
--- a/README.md
+++ b/README.md
@@ -1,7 +1,6 @@
 # Inference LLaMA models using CPU only
-This repository is intended as a minimal, hackable and readable example to load [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) ([arXiv](https://arxiv.org/abs/2302.13971v1)) models and run inference.
-In order to download the checkpoints and tokenizer, fill this [google form](https://forms.gle/jk851eBVbX1m5TAv5)
+This repository is intended as a minimal, hackable, and readable example of loading [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) ([arXiv](https://arxiv.org/abs/2302.13971v1)) models and running inference using only the CPU. No GPU is required.
 
 ### Setup
 
 In a conda env with pytorch / cuda available, run