@@ -197,7 +197,7 @@ device_map = infer_auto_device_map(model, max_memory={0: "6GiB", "cpu": "128GiB"
Someone with an A100 might try to give GPU 0 a 38GiB budget and run inference for the model entirely in GPU VRAM.
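For instance, a minimal sketch of that single-GPU setup (assuming a ~40GB A100; the checkpoint name is a placeholder, not something from the diff):

```python
from accelerate import infer_auto_device_map, init_empty_weights
from transformers import AutoConfig, AutoModelForCausalLM

checkpoint = "some-org/some-model"  # placeholder, substitute your checkpoint

# Build an empty (meta-device) skeleton so the device map can be computed
# without loading the real weights first.
config = AutoConfig.from_pretrained(checkpoint)
with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config)

# Give GPU 0 a 38GiB budget; if the weights fit, nothing is offloaded
# to CPU and inference stays entirely in VRAM.
device_map = infer_auto_device_map(model, max_memory={0: "38GiB", "cpu": "128GiB"})

# Load the real weights and place them according to the computed map.
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map=device_map)
```

Presumably the 38GiB figure (rather than the card's full 40GB) leaves some headroom for activations and the KV cache.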
Someone with 4*A100 might wish to use: {0: "38GiB", 1: "38GiB", 2: "38GiB", 3: "38GiB", "cpu": "128GiB"}.
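A sketch of that multi-GPU variant (checkpoint name again a placeholder; the only change from the single-GPU case is the max_memory dict):

```python
from accelerate import infer_auto_device_map, init_empty_weights
from transformers import AutoConfig, AutoModelForCausalLM

checkpoint = "some-org/some-model"  # placeholder

config = AutoConfig.from_pretrained(checkpoint)
with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config)

# Spread layers across four A100s, with CPU RAM only as overflow;
# Accelerate fills GPU 0, then 1, 2, 3, before spilling to "cpu".
device_map = infer_auto_device_map(
    model,
    max_memory={0: "38GiB", 1: "38GiB", 2: "38GiB", 3: "38GiB", "cpu": "128GiB"},
)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map=device_map)
```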
For me, with the 6GiB budget on a 3070 Ti, this runs three times slower than pure CPU inference.