Mirror of https://git.adityakumar.xyz/llama.cpp.git
Synced 2024-11-09 15:29:43 +00:00
readme : server compile flag (#1874)
Explicitly include the server make instructions for C++ noobs like me ;)
parent 37e257c48e
commit 9dda13e5e1
1 changed file with 4 additions and 0 deletions
@@ -16,6 +16,10 @@ This example allow you to have a llama.cpp http server to interact from a web page.

To get started right away, run the following command, making sure to use the correct path for the model you have:

#### Unix-based systems (Linux, macOS, etc.):

Make sure to build with the server option on

```bash
LLAMA_BUILD_SERVER=1 make
```

```bash
./server -m models/7B/ggml-model.bin --ctx_size 2048
```
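The `LLAMA_BUILD_SERVER=1 make` line covers the Makefile build. For readers using CMake instead, here is a hedged sketch of the equivalent invocation; the `LLAMA_BUILD_SERVER` option existed in llama.cpp's CMakeLists around the time of this commit, but verify the option name against your checkout, as it may differ in newer trees.

```shell
# Sketch: CMake equivalent of `LLAMA_BUILD_SERVER=1 make`.
# Assumes a llama.cpp checkout from roughly this commit's era.
mkdir -p build
cd build
cmake .. -DLLAMA_BUILD_SERVER=ON
cmake --build . --config Release
```

The resulting `server` binary lands under the build directory rather than the repository root, so adjust the `./server` path accordingly.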
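Once `./server` is running, it can be exercised over plain HTTP. A minimal sketch, assuming the server is listening on its default port 8080 and exposes the `/completion` endpoint documented in the example's README (check the README of your build for the exact endpoint and parameter names):

```shell
# Assumes ./server is already running locally on port 8080.
# "prompt" and "n_predict" are the request fields described in the
# server example's documentation of this era.
curl --request POST \
  --url http://localhost:8080/completion \
  --header "Content-Type: application/json" \
  --data '{"prompt": "Building a website can be done in 10 simple steps:", "n_predict": 64}'
```

The response is a JSON object containing the generated continuation, which makes the server easy to drive from a web page or any HTTP client.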