Mirror of https://git.adityakumar.xyz/llama.cpp.git (synced 2024-11-12 16:29:44 +00:00)
Commit 77a73403ca

* ggml : Q4_2 ARM
* ggml : add ggml_is_quantized()
* llama : update llama_type_name() with Q4_2 entry
* ggml : speed-up q4_2
  - 4 threads: ~100ms -> ~90ms
  - 8 threads: ~55ms -> ~50ms
* ggml : optimize q4_2 using vmlaq_n_f32 + vmulq_n_f32
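As a rough illustration of the last bullet, the sketch below shows the general NEON pattern of folding a per-block scale into an accumulator with `vmlaq_n_f32` instead of a separate `vmulq_n_f32` plus add. This is a minimal, hypothetical example (the function name `block_dot_acc` and the float inputs are invented for illustration), not the actual ggml q4_2 kernel from this commit.

```c
#include <arm_neon.h>

// Hypothetical sketch: accumulate scale * dot(x[0..7], y[0..7]) into a lane-wise
// accumulator. The caller would reduce the accumulator at the end (e.g. with
// vaddvq_f32 on AArch64).
static inline float32x4_t block_dot_acc(float32x4_t acc,
                                         const float *x, const float *y,
                                         float scale) {
    float32x4_t p0 = vmulq_f32(vld1q_f32(x + 0), vld1q_f32(y + 0));
    float32x4_t p1 = vmulq_f32(vld1q_f32(x + 4), vld1q_f32(y + 4));

    // vmlaq_n_f32 computes acc + (sum * scale) in one intrinsic, avoiding a
    // separate vmulq_n_f32 followed by vaddq_f32.
    return vmlaq_n_f32(acc, vaddq_f32(p0, p1), scale);
}
```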
Directory contents at this commit:

* benchmark/
* embedding/
* main/
* perplexity/
* quantize/
* quantize-stats/
* alpaca.sh
* chat-13B.bat
* chat-13B.sh
* chat.sh
* CMakeLists.txt
* common.cpp
* common.h
* gpt4all.sh
* Miku.sh
* reason-act.sh