commit 35a84916fb
The prompt cache provides a nice speed-up when the same prompt prefix is used across multiple evaluations, but whenever it is used it is also updated, which is not always desirable. One use case is a large prompt containing some context and usage rules, followed by a second part containing the variable data of the problem being studied. In this case it is desirable to save the first part once and to always reuse it as-is, without it being updated with the second part. The new argument --prompt-cache-ro enables this read-only mode on the prompt cache: the prompt's contents that match the cache are loaded from the cache, but the cache is not updated with the rest. This reduced a total analysis time from 112s to 49.7s here, without having to back up and restore a copy of the prompt, which takes significant time at 500 MB.

Signed-off-by: Willy Tarreau <w@1wt.eu>
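A minimal sketch of the intended workflow, assuming the `main` example binary and its existing `--prompt-cache` option; the model path, cache file name, and prompt files below are placeholders:

```sh
# First run: evaluate the shared context/usage-rules prefix once and
# save it to a prompt cache file (paths are placeholders).
./main -m models/7B/ggml-model-q4_0.bin \
       --prompt-cache context.cache \
       -f context-and-rules.txt

# Later runs: the matching prefix is loaded from the cache, and
# --prompt-cache-ro keeps the cache read-only, so the variable second
# part of the prompt does not overwrite it.
./main -m models/7B/ggml-model-q4_0.bin \
       --prompt-cache context.cache --prompt-cache-ro \
       -f full-prompt-with-variable-data.txt
```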
..
baby-llama
benchmark
embedding
jeopardy
main
metal
perplexity
quantize
quantize-stats
save-load-state
server
alpaca.sh
chat-13B.bat
chat-13B.sh
chat-persistent.sh
chat.sh
CMakeLists.txt
common.cpp
common.h
gpt4all.sh
Miku.sh
reason-act.sh