diff --git a/README.md b/README.md
index de76a74..2cc25b1 100755
--- a/README.md
+++ b/README.md
@@ -58,7 +58,7 @@ Running model with single prompt on Windows computer equipped with 12700k, fast
 | model | RAM usage, fp32 | RAM usage, bf16 | fp32 inference | bf16 inference |
 | ------------- | ------------- | ------------- | ------------- | ------------- |
 | 7B | 44 Gb | 22 Gb | 170 seconds | 850 seconds |
-| 13B | 77 Gb, peak 100 Gb | 38 Gb | 380 seconds | can't handle to wait |
+| 13B | 77 Gb, peak 100 Gb | 38 Gb | 340 seconds | |
 
 ### RAM usage optimization
 By default, torch uses Float32 precision when running on CPU, which leads, for example, to 44 GB of RAM usage for the 7B model. We can use Bfloat16 precision on CPU as well, which halves RAM consumption, down to 22 GB for the 7B model, but makes inference much slower.
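
A minimal sketch of the Bfloat16 conversion in plain PyTorch. The small `nn.Sequential` below is only a stand-in for the actual LLaMA module (not part of this repository); a 7B/13B model loaded on CPU converts the same way, just with far more RAM and time, and the figures in the comments assume the 7B checkpoint from the table above.

```python
import torch
import torch.nn as nn

# Toy stand-in for the real model: .to(dtype=...) behaves identically on a
# full LLaMA module, this just keeps the example small and runnable.
model = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU(), nn.Linear(4096, 4096))

fp32_bytes = sum(p.numel() * p.element_size() for p in model.parameters())

# Convert all floating-point parameters to bfloat16; this is the step that
# roughly halves RAM usage (44 GB -> ~22 GB for the 7B model in the table).
model = model.to(dtype=torch.bfloat16)

bf16_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
print(f"fp32: {fp32_bytes / 2**20:.0f} MiB, bf16: {bf16_bytes / 2**20:.0f} MiB")

# CPU inference still works in bfloat16, only slower than fp32 (see table);
# floating-point inputs must be cast to the same dtype as the model.
with torch.inference_mode():
    x = torch.randn(1, 4096, dtype=torch.bfloat16)
    y = model(x)
```

Note that converting after loading still passes through the full fp32 footprint first (compare the "peak 100 Gb" entry for 13B); loading the checkpoint directly in bfloat16, where the loader supports it, avoids that peak.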