Chat with Meta's LLaMA models at home, made easy

Rust+OpenCL+AVX2 implementation of LLaMA inference code

RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best of RNNs and transformers: great performance, fast inference, VRAM savings, fast training, "infinite" ctx_len, and free sentence embeddings.
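
That combination works because RWKV replaces attention with a WKV operator that has two equivalent forms: a parallel one over the whole sequence (used for GPT-style training) and a constant-size recurrence (used for RNN-style inference). Below is a minimal NumPy sketch of the recurrent form following the RWKV-4 formulation; the name wkv_recurrent, the shapes, and the unstabilized exponentials are illustrative assumptions (real implementations carry a running maximum for numerical stability), not the project's actual API.

```python
import numpy as np

def wkv_recurrent(k, v, w, u):
    """Sketch of the RWKV-4 WKV recurrence (unstabilized).

    k, v : (T, C) per-token keys and values
    w    : (C,) positive per-channel decay
    u    : (C,) per-channel bonus applied to the current token
    Returns (T, C) attention-like outputs while keeping only O(C)
    state, which is why inference runs like an RNN.
    """
    T, C = k.shape
    num = np.zeros(C)   # running sum of e^{k_i} * v_i, decayed each step
    den = np.zeros(C)   # running sum of e^{k_i}, decayed each step
    out = np.empty((T, C))
    decay = np.exp(-w)
    for t in range(T):
        e_uk = np.exp(u + k[t])                 # current token's boosted weight
        out[t] = (num + e_uk * v[t]) / (den + e_uk)
        e_k = np.exp(k[t])
        num = decay * num + e_k * v[t]          # fold token t into the state
        den = decay * den + e_k
    return out
```

Each output is a decay-weighted, softmax-like average over the prefix, so during training the same sums can be computed for all positions at once, while at inference the model streams tokens with fixed memory instead of a growing KV cache.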

Inference code for LLaMA models on CPU

Run Perl in the browser with WebPerl!

L.Mole - The Video Game - Disc1

Port of Facebook's LLaMA model in C/C++

A fork that installs and runs on PyTorch CPU-only
