Mirror of https://git.adityakumar.xyz/llama.cpp.git (synced 2024-11-08)
Commit d01bccde9f:

* ci : run ctest ggml-ci
* ci : add open llama 3B-v2 tests ggml-ci
* ci : disable wget progress output ggml-ci
* ci : add open llama 3B-v2 tg tests for q4 and q5 quantizations ggml-ci
* tests : try to fix tail free sampling test ggml-ci
* ci : add K-quants ggml-ci
* ci : add short perplexity tests ggml-ci
* ci : add README.md
* ppl : add --chunks argument to limit max number of chunks ggml-ci
* ci : update README
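As a rough illustration of the `--chunks` item above, a perplexity run can be capped to a handful of chunks so the CI job finishes quickly. This is only a sketch: the model and text-file paths are placeholder assumptions, not the ones used by the actual CI scripts; only the `--chunks` flag itself comes from the commit message.

```sh
# Hypothetical short perplexity run; paths are assumptions.
# --chunks limits how many chunks of the input file are evaluated.
./bin/perplexity \
    -m models/open-llama-3b-v2/ggml-model-q4_0.bin \
    -f wikitext-2-raw/wiki.test.raw \
    --chunks 4
```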
Files in this directory:

* CMakeLists.txt
* test-double-float.c
* test-grad0.c
* test-opt.c
* test-quantize-fns.cpp
* test-quantize-perf.cpp
* test-sampling.cpp
* test-tokenizer-0.cpp
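The commit message mentions running these tests through ctest in CI. A minimal local equivalent might look like the sketch below; the `LLAMA_BUILD_TESTS` option name is an assumption and may differ between versions of the project.

```sh
# Configure with tests enabled, build, then run the test suite via ctest.
# LLAMA_BUILD_TESTS is assumed here; check the top-level CMakeLists.txt.
cmake -B build -DLLAMA_BUILD_TESTS=ON
cmake --build build --config Release
ctest --test-dir build --output-on-failure
```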