mirror of https://git.adityakumar.xyz/llama.cpp.git (synced 2024-11-09 23:29:44 +00:00)
readme : add link web chat PR
commit b472f3fca5 (parent ed9a54e512)
1 changed file with 1 addition and 0 deletions
@@ -11,6 +11,7 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++

 **Hot topics:**

+- Simple web chat example: https://github.com/ggerganov/llama.cpp/pull/1998
 - k-quants now support super-block size of 64: https://github.com/ggerganov/llama.cpp/pull/2001
 - New roadmap: https://github.com/users/ggerganov/projects/7
 - Azure CI brainstorming: https://github.com/ggerganov/llama.cpp/discussions/1985