* Major refactoring - introduce C-style API
* Clean up
* Add <cassert>
* Add <iterator>
* Add <algorithm>
* Fix timing reporting and accumulation
* Measure eval time only for single-token calls
* Change llama_tokenize return meaning
* Improve performance by changing std::map to std::unordered_map for token_to_id, and std::map<id, token> id_to_token to std::vector<token> id_to_token (see the sketch below)
* Fix the last commit: in gpt_vocab_init, add vocab.id_to_token.resize(vocab.token_to_id.size());
* Removed include <map>
* Nest struct token score inside gpt_vocab
* renamed token to tok
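The combined effect of the vocab changes above, as a minimal sketch (field names follow the commits; the exact declarations in utils.h may differ):

    #include <cstdint>
    #include <string>
    #include <unordered_map>
    #include <vector>

    struct gpt_vocab {
        using id    = int32_t;
        using token = std::string;

        // nested per-token entry; the string member is named "tok"
        struct token_score {
            token tok;
            float score;
        };

        std::unordered_map<token, id> token_to_id; // hash map instead of std::map
        std::vector<token_score>      id_to_token; // dense vector indexed by id
    };

    // gpt_vocab_init must size the vector before writing entries by id:
    //     vocab.id_to_token.resize(vocab.token_to_id.size());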
* [WIP, broken] Importer for GPTQ quantized LLaMA models
Based on: https://github.com/qwopqwop200/GPTQ-for-LLaMa
Current status: Something is busted. The output starts out decent, but
quickly degrades into gibberish. This doesn't happen with either the
original GPTQ-for-LLaMa using the same weights, or llama.cpp when using
weights quantized by its own quantizer. Is there a bug in the
conversion script that somehow only comes into play with a large context
size?
I did notice one potential issue. It's clearly not the main cause of
the gibberish, since it doesn't happen when using q4_1 weights quantized
by llama.cpp itself, but it seems concerning. When doing a matrix
multiplication of f16 * f32 => f32 or q4_1 * f32 => f32, at least when
the multiplication is not done with BLAS, the intermediate results are
stored in the smaller format rather than f32. This seems like an
unnecessary waste of precision, especially in the q4_1 case.
I was originally hoping to validate the results by matching the Python
implementation's output exactly, but precision and non-associativity
issues make this very difficult, including when performing matrix
multiplications and, especially, computing norms.
Anyway, design details:
The models being imported store per-layer weights in essentially q4_1
format, although the addend and scale are shared across an entire row
rather than every group of 32 weights. This script duplicates the
addend and scale to match ggml's expectations, at the cost of wasting
some memory.
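For reference, the q4_1 layout the script has to produce looks roughly like this per group of 32 weights (shown as a struct for clarity; ggml handles the same bytes without a named type at this point):

    #include <cstdint>

    #define QK 32

    // one q4_1 group: 32 weights, each reconstructed as w = d * q + m
    struct block_q4_1 {
        float   d;          // scale
        float   m;          // addend (minimum)
        uint8_t qs[QK / 2]; // 32 x 4-bit quantized values, two per byte
    };

    // The GPTQ input stores a single (d, m) pair per row, so the conversion
    // script writes that same pair into every group of the row (n_cols / QK
    // copies instead of one), which is correct but wastes some memory.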
However, there are two differences which I accommodated by changing the
output format (and adding corresponding support to main.cpp) rather than
having the script match the existing one:
- The tok_embeddings and output weights (i.e. the weights that aren't
per-layer) are f16 instead of q4_1. They could be converted to q4_1,
and the impact of the loss of precision would probably be low, but
this would rule out exactly matching the Python implementation's
output for validation.
- There is no sharding, since the input doesn't have it, and for a
CPU-only implementation it seems more useful to avoid having to deal
with multiple files.
The new format is differentiated from existing q4_1 format by changing
the 'f16' header flag to a new value, 4. That said, I think a cleaner
approach would be to change main.cpp to support loading each tensor with
an arbitrary sharding configuration and type rather than hardcoding
specific combinations of types. So far I've wasted too much time
debugging to try implementing this...
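For concreteness, the loader-side dispatch on that header flag looks roughly like this (a sketch, not the exact main.cpp code; "vtype" is just an illustrative name for the type used by tok_embeddings/output):

    #include <cstdio>
    #include "ggml.h"

    // Map the header's "f16" flag to tensor types; only the value 4 is new,
    // 0..3 follow the existing convention.
    static bool pick_weight_types(int f16_flag, ggml_type & wtype, ggml_type & vtype) {
        switch (f16_flag) {
            case 0: wtype = vtype = GGML_TYPE_F32;  break; // all f32
            case 1: wtype = vtype = GGML_TYPE_F16;  break; // all f16
            case 2: wtype = vtype = GGML_TYPE_Q4_0; break;
            case 3: wtype = vtype = GGML_TYPE_Q4_1; break;
            case 4: // new: GPTQ import, q4_1 per-layer weights but f16 tok_embeddings/output
                wtype = GGML_TYPE_Q4_1;
                vtype = GGML_TYPE_F16;
                break;
            default:
                fprintf(stderr, "invalid model file: bad f16 value %d\n", f16_flag);
                return false;
        }
        return true;
    }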
* Add missing permutation. Now it works.
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Compute perplexity over prompt
* More accurate perplexity calculation - over all logits in the context window (so 512x more tokens!)
* Output all perplexities
* Add timing/ETA
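The perplexity itself is just exp of the mean negative log-likelihood; a minimal illustrative helper (not the actual main.cpp code) computing it from the per-position probabilities of the true next token:

    #include <cmath>
    #include <vector>

    // probs[i] = model probability of tokens[i + 1] given tokens[0..i], one value
    // per position in the context window; evaluating every logit in the window is
    // what yields ~512x more data points per chunk than one token per chunk.
    double perplexity(const std::vector<double> & probs) {
        double nll = 0.0;
        for (const double p : probs) {
            nll -= std::log(p);
        }
        return std::exp(nll / probs.size());
    }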
* Enable ANSI colors on Windows 10+
On older versions the function will silently fail without any ill effects
* Do not call SetConsoleMode if the mode is already set
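A sketch of the Windows side (the function name is illustrative; the Win32 calls are the standard ones):

    #if defined(_WIN32)
    #include <windows.h>
    #ifndef ENABLE_VIRTUAL_TERMINAL_PROCESSING
    #define ENABLE_VIRTUAL_TERMINAL_PROCESSING 0x0004
    #endif

    // Enable VT100/ANSI escape processing on Windows 10+.
    // On older Windows the SetConsoleMode call simply fails, with no ill effects.
    static void enable_ansi_colors(void) {
        HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);
        if (hOut == INVALID_HANDLE_VALUE || hOut == NULL) {
            return;
        }
        DWORD mode = 0;
        if (!GetConsoleMode(hOut, &mode)) {
            return;
        }
        if (mode & ENABLE_VIRTUAL_TERMINAL_PROCESSING) {
            return; // mode already set: skip the SetConsoleMode call
        }
        SetConsoleMode(hOut, mode | ENABLE_VIRTUAL_TERMINAL_PROCESSING);
    }
    #endif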
* Update main.cpp
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Add test-tokenizer-0 to do a few tokenizations - feel free to expand
* Added option to convert-pth-to-ggml.py script to dump just the vocabulary
* Added ./models/ggml-vocab.bin containing just LLaMA vocab data (used for tests)
* Added utility to load vocabulary file from previous point (temporary implementation)
* Avoid using std::string_view and drop back to C++11 (hope I didn't break something)
* Rename gpt_vocab -> llama_vocab
* All CMake binaries go into ./bin/ now
* Fix potential out-of-bounds read
* fix quantize
* style
* Update convert-pth-to-ggml.py
* mild cleanup
* don't need the space-prefixing here right now since main.cpp already does it
* new file magic + version header field (see the sketch below)
* readme notice
* missing newlines
Co-authored-by: slaren <2141330+slaren@users.noreply.github.com>
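The header check behind the "new file magic + version header field" item above, as a sketch; the constants are placeholders standing in for whatever convert-pth-to-ggml.py now writes:

    #include <cstdint>
    #include <cstdio>

    static const uint32_t FILE_MAGIC   = 0x67676d66; // placeholder value
    static const uint32_t FILE_VERSION = 1;          // placeholder value

    static bool check_header(FILE * fin) {
        uint32_t magic = 0, version = 0;
        if (fread(&magic, sizeof(magic), 1, fin) != 1 || magic != FILE_MAGIC) {
            return false; // old/unversioned files are rejected
        }
        if (fread(&version, sizeof(version), 1, fin) != 1 || version != FILE_VERSION) {
            return false; // files from other format versions are rejected too
        }
        return true;
    }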
* fix coloring of last `n_batch` of prompt, and refactor line input
* forgot the newline that needs to be sent to the model
* (per #283) try to force flush of color reset in SIGINT handler
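A simplified sketch of that handler (the real one does more than this):

    #include <csignal>
    #include <cstdio>
    #include <cstdlib>

    #define ANSI_COLOR_RESET "\x1b[0m"

    // On Ctrl+C, print the ANSI reset code and flush stdout explicitly so the
    // escape sequence actually reaches the terminal before the process exits.
    static void sigint_handler(int /*signo*/) {
        printf(ANSI_COLOR_RESET "\n");
        fflush(stdout);
        _Exit(130);
    }

    // installed at startup with: signal(SIGINT, sigint_handler);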
* Use F16 for memory_k and memory_v
* add command line switch to use f16 instead of f32 for memory k+v
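Roughly what the switch changes when the KV cache is allocated (a sketch with illustrative names; ggml_new_tensor_1d is the real ggml call):

    #include "ggml.h"

    // Allocate the attention K/V cache in either F16 or F32; F16 halves its size.
    static void alloc_kv_cache(struct ggml_context * ctx,
                               struct ggml_tensor ** memory_k,
                               struct ggml_tensor ** memory_v,
                               int n_embd, int n_layer, int n_ctx,
                               bool memory_f16) {
        const int n_elements = n_embd * n_layer * n_ctx;
        const ggml_type type = memory_f16 ? GGML_TYPE_F16 : GGML_TYPE_F32;

        *memory_k = ggml_new_tensor_1d(ctx, type, n_elements);
        *memory_v = ggml_new_tensor_1d(ctx, type, n_elements);
    }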
---------
Co-authored-by: Ty Everett <ty@tyweb.us>
* Implement non-greedy tokenizer that tries to maximize token lengths
* Insert single space in front of the prompt
- this is to match original llama tokenizer behavior
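The idea behind the tokenizer change, as a standalone sketch (not the actual utils.cpp implementation): at every position take the longest vocabulary entry that matches, and prepend a single space to the prompt before tokenizing.

    #include <algorithm>
    #include <string>
    #include <unordered_map>
    #include <vector>

    // Longest-match tokenization over a plain string->id vocabulary.
    std::vector<int> tokenize_longest_match(
            const std::unordered_map<std::string, int> & token_to_id,
            const std::string & prompt,
            size_t max_token_len) {
        const std::string text = " " + prompt; // single leading space, as in the original tokenizer
        std::vector<int> out;
        size_t i = 0;
        while (i < text.size()) {
            size_t best_len = 0;
            int    best_id  = -1;
            const size_t limit = std::min(max_token_len, text.size() - i);
            for (size_t len = limit; len >= 1; --len) {
                const auto it = token_to_id.find(text.substr(i, len));
                if (it != token_to_id.end()) {
                    best_len = len;
                    best_id  = it->second;
                    break; // longest match at this position
                }
            }
            if (best_id < 0) {
                ++i; // no vocab entry matched: skip one byte in this simplified sketch
            } else {
                out.push_back(best_id);
                i += best_len;
            }
        }
        return out;
    }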
---------
Co-authored-by: Jakub Horak <jakub.horak@ibawizard.net>
* added ctx_size parameter
* added it in more places
* Apply suggestions from code review
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* fixed color reset on exit
* added sigint handler for ansi_color_reset
* Update main.cpp
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Initial work on interactive mode.
* Improve interactive mode. Make rev. prompt optional.
* Update README to explain interactive mode.
* Fix OS X build
* Add back top_k
* Update utils.cpp
* Update utils.h
---------
Co-authored-by: Bill Hamilton <bill.hamilton@shopify.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Apply fixes suggested to build on windows
Issue: https://github.com/ggerganov/llama.cpp/issues/22
* Remove unsupported VLAs
* MSVC: Remove features that are only available on MSVC C++20.
* Fix zero initialization of the other fields.
* Replace stack (VLA) allocations with std::vector.
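The general pattern behind the VLA removal (not a specific line from the diff):

    #include <vector>

    void scratch_example(int n_vocab) {
        // float logits[n_vocab];            // VLA: not standard C++ and rejected by MSVC
        std::vector<float> logits(n_vocab);  // heap-backed replacement, zero-initialized
        (void) logits;
    }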
* Adding repeat penalization
* Update utils.h
* Update utils.cpp
* Numeric fix
Should probably still scale by temp even if penalized
* Update comments, more proper application
I see that the numbers can go negative, so apply the fix from the referenced commit
* Minor formatting
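A sketch of the scoring described above (illustrative names, not the exact sampling code): every logit is still scaled by 1/temp, and for tokens in the recent window the penalty multiplies negative logits and divides positive ones so that it always lowers the score.

    // repeat_penalty > 1.0f; recently_seen means the token appears in the last_n window
    float penalized_score(float logit, float temp, float repeat_penalty, bool recently_seen) {
        const float scale = 1.0f / temp;      // still scale by temp even when penalized
        if (!recently_seen) {
            return logit * scale;
        }
        return logit < 0.0f ? logit * scale * repeat_penalty   // negative: multiply to push down
                            : logit * scale / repeat_penalty;  // positive: divide to push down
    }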
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>