mirror of https://github.com/novarobot/llama.cpp
1 Commits (d6aa749ccfd8dc6bb2f703a1d53cfe3adc637c07)
| Author | SHA1 | Message | Date |
|---|---|---|---|
| | 16ffc013c6 | Importer for GPTQ quantized LLaMA models (#301) | 3 years ago |

Full commit message:

* [WIP, broken] Importer for GPTQ quantized LLaMA models

Based on: https://github.com/qwopqwop200/GPTQ-for-LLaMa

Current status: Something is busted. The output starts out decent, but quickly degrades into gibberish. This doesn't happen with either the original GPTQ-for-LLaMa using the same weights, or llama.cpp when using weights quantized by its own quantizer. Is there a bug in the conversion script that somehow only comes into play with a large context size?

I did notice one potential issue. It's clearly not the main cause of the gibberish, since it doesn't happen when using q4_1 weights quantized by llama.cpp itself, but it seems concerning. When doing a matrix multiplication of f16 * f32 => f32 or q4_1 * f32 => f32, at least when the multiplication is not done with BLAS, the intermediate results are stored in the smaller format rather than f32. This seems like an unnecessary waste of precision, especially in the q4_1 case.

I was originally hoping to validate the results by matching the Python implementation's output exactly, but precision and non-associativity issues make this very difficult, including when performing matrix multiplications and, especially, computing norms.

Anyway, design details: The models being imported store per-layer weights in essentially q4_1 format, although the addend and scale are shared across an entire row rather than every group of 32 weights. This script duplicates the addend and scale to match ggml's expectations, at the cost of wasting some memory.

However, there are two differences which I accommodated by changing the output format (and adding corresponding support to main.cpp) rather than having the script match the existing one:

- The tok_embeddings and output weights (i.e. the weights that aren't per-layer) are f16 instead of q4_1. They could be converted to q4_1, and the impact of the loss of precision would probably be low, but this would rule out exactly matching the Python implementation's output for validation.
- There is no sharding, since the input doesn't have it, and for a CPU-only implementation it seems more useful to avoid having to deal with multiple files.

The new format is differentiated from the existing q4_1 format by changing the 'f16' header flag to a new value, 4. That said, I think a cleaner approach would be to change main.cpp to support loading each tensor with an arbitrary sharding configuration and type rather than hardcoding specific combinations of types. So far I've wasted too much time debugging to try implementing this...

* Add missing permutation. Now it works.

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
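The format conversion the commit describes — one (addend, scale) pair shared across an entire row in the GPTQ input, versus one pair per group of 32 weights in ggml's q4_1 layout — can be sketched as below. This is a hypothetical illustration, not code from the actual conversion script; the function and variable names are invented.

```python
# Sketch of the per-row -> per-group duplication the commit describes.
# Names here (expand_row_quant_params, QK) are illustrative assumptions,
# not identifiers from llama.cpp or the GPTQ importer.

QK = 32  # q4_1 group size: ggml quantizes weights in groups of 32


def expand_row_quant_params(row_width, addend, scale):
    """Return one (addend, scale) pair per 32-weight group of a row.

    The GPTQ input carries a single pair for the whole row, so every
    group in the row gets an identical copy. As the commit notes, this
    duplication wastes some memory but matches what ggml expects.
    """
    if row_width % QK != 0:
        raise ValueError("row width must be a multiple of the group size")
    n_groups = row_width // QK
    return [(addend, scale)] * n_groups
```

For example, a row of width 128 splits into 4 groups, each carrying the same (addend, scale) pair; a real 4096-wide weight row would carry 128 identical copies.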