Update README.md

randaller 3 years ago committed by GitHub
parent 8c59958afc
commit 724a2fd1c8

@@ -195,7 +195,9 @@ torch.set_default_dtype(torch.bfloat16)
device_map = infer_auto_device_map(model, max_memory={0: "6GiB", "cpu": "128GiB"})
```
One with an A100 might set 38GiB for GPU0 and run inference entirely in GPU VRAM.
One with 4×A100 might wish to use: 0: "38GiB", 1: "38GiB", etc.
For me, with 6GiB on a 3070 Ti, this runs about three times slower than pure CPU inference.
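As a minimal sketch of the multi-GPU case above, the `max_memory` dict can be built programmatically instead of written by hand. Note that `make_max_memory` is a hypothetical helper for illustration, not part of the `accelerate` API; the `infer_auto_device_map` call is commented out since it needs a loaded model.

```python
# Hypothetical helper (not part of accelerate): build the max_memory
# mapping passed to infer_auto_device_map for a multi-GPU machine.
def make_max_memory(num_gpus, per_gpu="38GiB", cpu="128GiB"):
    """Map each GPU index to a per-GPU budget, plus a CPU budget."""
    max_memory = {i: per_gpu for i in range(num_gpus)}
    max_memory["cpu"] = cpu
    return max_memory

# For a 4x A100 node:
max_memory = make_max_memory(4)
# {0: "38GiB", 1: "38GiB", 2: "38GiB", 3: "38GiB", "cpu": "128GiB"}

# Then, with a model already loaded:
# device_map = infer_auto_device_map(model, max_memory=max_memory)
```

The same helper covers the single-GPU cases too, e.g. `make_max_memory(1, per_gpu="6GiB")` for the 3070 Ti setup.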
